| text (string, 12–1.05M chars) | repo_name (string, 5–86 chars) | path (string, 4–191 chars) | language (1 class) | license (15 classes) | size (int32, 12–1.05M) | keyword (list, 1–23 items) | text_hash (string, 64 chars) |
|---|---|---|---|---|---|---|---|
"""
This module contains data structures and container classes that are used by OVITO's modification pipeline system.
**Data collection:**
* :py:class:`DataCollection` (a container for data objects and attributes)
**Data objects:**
* :py:class:`DataObject` (base of all data object types)
* :py:class:`Bonds`
* :py:class:`BondProperty`
* :py:class:`BondTypeProperty`
* :py:class:`DislocationNetwork`
* :py:class:`ParticleProperty`
* :py:class:`ParticleTypeProperty`
* :py:class:`SimulationCell`
* :py:class:`SurfaceMesh`
**Auxiliary classes:**
* :py:class:`ParticleType`
* :py:class:`BondType`
* :py:class:`CutoffNeighborFinder`
* :py:class:`NearestNeighborFinder`
* :py:class:`DislocationSegment`
"""
import numpy as np
try:
# Python 3.x
import collections.abc as collections
except ImportError:
# Python 2.x
import collections
# Load the native module.
from PyScriptScene import DataCollection
from PyScriptScene import DataObject
from PyScriptApp import CloneHelper
# Give the DataCollection class a dict-like interface.
DataCollection.__len__ = lambda self: len(self.objects)
def _DataCollection__iter__(self):
for o in self.objects:
if hasattr(o, "_data_key"):
yield o._data_key
else:
yield o.object_title
DataCollection.__iter__ = _DataCollection__iter__
def _DataCollection__getitem__(self, key):
for o in self.objects:
if hasattr(o, "_data_key"):
if o._data_key == key:
return o
else:
if o.object_title == key:
return o
raise KeyError("DataCollection does not contain object key '%s'." % key)
DataCollection.__getitem__ = _DataCollection__getitem__
def _DataCollection__getattr__(self, name):
for o in self.objects:
if hasattr(o, "_data_attribute_name"):
if o._data_attribute_name == name:
return o
raise AttributeError("DataCollection does not have an attribute named '%s'." % name)
DataCollection.__getattr__ = _DataCollection__getattr__
def _DataCollection__str__(self):
return "DataCollection(" + str(list(self.keys())) + ")"
DataCollection.__str__ = _DataCollection__str__
# Mix in base class collections.Mapping:
DataCollection.__bases__ = DataCollection.__bases__ + (collections.Mapping, )
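# A hedged usage sketch: with the Mapping mix-in installed above, a DataCollection
# supports read-only dict operations. Assuming 'data' is a DataCollection obtained
# from a pipeline:
#
#   for key in data: print(key)       # iterate over the object keys
#   cell = data['Simulation cell']    # look up a data object by its key
#   'Position' in data                # membership test provided by Mapping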
# Implement the 'attributes' property of the DataCollection class.
def _DataCollection_attributes(self):
"""
A dictionary object with key/value pairs that have been loaded from an input file
or were produced by modifiers in the data pipeline.
    Attributes are integer, float, or string values that are processed or generated by the
    modification pipeline. They represent global information or scalar quantities, in contrast
    to the more complex *data objects*, such as particle properties or bonds, which are also stored in the :py:class:`!DataCollection`.
Attribute names (dictionary keys) are simple strings such as ``"Timestep"`` or
``"ConstructSurfaceMesh.surface_area"``.
**Attributes loaded from input files**
The ``Timestep`` attribute is loaded from LAMMPS dump files and other simulation file formats
that store the simulation timestep. This kind of information read from an input file can be retrieved from
the attributes dictionary of the :py:attr:`~ovito.ObjectNode.source` data collection as follows::
>>> node = ovito.dataset.selected_node
>>> node.source.attributes['Timestep']
140000
Note that the attributes dictionary will contain all key/value pairs parsed from the
comment line in the header of *extended XYZ* files.
**Attributes computed by modifiers**
Analysis modifiers like the :py:class:`~ovito.modifiers.CommonNeighborAnalysisModifier` or
the :py:class:`~ovito.modifiers.ClusterAnalysisModifier` output scalar computation results
as attributes. The class documentation of each modifier lists the names of the attributes it
produces.
For example, the number of clusters generated by the :py:class:`~ovito.modifiers.ClusterAnalysisModifier`
can be queried as follows::
node.modifiers.append(ClusterAnalysisModifier(cutoff = 3.1))
node.compute()
nclusters = node.output.attributes["ClusterAnalysis.cluster_count"]
**Exporting attributes to a text file**
The :py:func:`ovito.io.export_file` function allows writing selected attributes to a text
file, possibly as functions of time::
export_file(node, "data.txt", "txt",
columns = ["Timestep", "ClusterAnalysis.cluster_count"],
multiple_frames = True)
**User-defined attributes**
The :py:class:`~ovito.modifiers.PythonScriptModifier` makes it possible for you to define your own
attributes that are dynamically computed (on the basis of other information)::
node.modifiers.append(CommonNeighborAnalysisModifier())
def compute_fcc_fraction(frame, input, output):
n_fcc = input.attributes['CommonNeighborAnalysis.counts.FCC']
output.attributes['fcc_fraction'] = n_fcc / input.number_of_particles
node.modifiers.append(PythonScriptModifier(function = compute_fcc_fraction))
node.compute()
print(node.output.attributes['fcc_fraction'])
In this example the :py:class:`~ovito.modifiers.CommonNeighborAnalysisModifier` outputs the computed
attribute ``CommonNeighborAnalysis.counts.FCC``, which is the absolute number of atoms that
form an FCC lattice. To compute the fraction of FCC atoms, we need to divide by the total number of
atoms in the system. To this end, a :py:class:`~ovito.modifiers.PythonScriptModifier` is defined and
inserted into the pipeline following the :py:class:`~ovito.modifiers.CommonNeighborAnalysisModifier`.
The user-defined modifier function generates a new attribute named ``fcc_fraction``. Finally,
after the pipeline has been evaluated, the value of the user-defined attribute can
be accessed as shown or exported to a text file.
"""
# Helper class used to implement the 'attributes' property of the DataCollection class.
class _AttributesView(collections.MutableMapping):
def __init__(self, data_collection):
self._data_collection = data_collection
def __len__(self):
return len(self._data_collection.attribute_names)
def __getitem__(self, key):
if not isinstance(key, str):
raise TypeError("Attribute key is not a string.")
v = self._data_collection.get_attribute(key)
if v is not None:
return v
raise KeyError("DataCollection contains no attribute named '%s'." % key)
def __setitem__(self, key, value):
if not isinstance(key, str):
raise TypeError("Attribute key is not a string.")
self._data_collection.set_attribute(key, value)
def __delitem__(self, key):
if not isinstance(key, str):
raise TypeError("Attribute key is not a string.")
v = self._data_collection.get_attribute(key)
if v is None:
raise KeyError("DataCollection contains no attribute named '%s'." % key)
self._data_collection.set_attribute(key, None)
def __iter__(self):
for aname in self._data_collection.attribute_names:
yield aname
def __repr__(self):
return repr(dict(self))
return _AttributesView(self)
DataCollection.attributes = property(_DataCollection_attributes)
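# A hedged usage sketch: the view returned by the 'attributes' property behaves like
# a mutable dict backed by get_attribute()/set_attribute(). The attribute name below
# is hypothetical:
#
#   data.attributes['MyAttribute'] = 42      # create or overwrite an attribute
#   print(data.attributes['MyAttribute'])    # -> 42
#   del data.attributes['MyAttribute']       # remove it again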
# Implement the DataCollection.copy_if_needed() method.
def _DataCollection_copy_if_needed(self, obj):
"""
Makes a copy of a data object if it was created upstream in the data pipeline.
Typically, this method is used in the user-defined implementation of a :py:class:`~ovito.modifiers.PythonScriptModifier` that
participates in OVITO's data pipeline system. The user-defined modifier function receives a collection with
input data objects from the system. However, directly modifying these input
objects is not allowed because they are owned by the upstream part of the data pipeline.
This is where this method comes into play: It makes a copy of a data object and replaces
it with its copy in the modifier's output. The modifier can then go ahead and modify the copy as needed,
because it is now exclusively owned by the modifier.
The method first checks if *obj*, which must be a data object from this data collection, is
owned by anybody else. If yes, it creates an exact copy of *obj* and replaces the original
in this data collection with the copy. Now the copy is an independent object, which is referenced
by nobody except this data collection. Thus, the modifier function is now free to modify the contents
of the data object.
Note that the :py:meth:`!copy_if_needed` method should always be called on the *output* data collection
of the modifier.
:param DataObject obj: The object in the output data collection to be copied.
:return: An exact copy of *obj* if *obj* is owned by someone else. Otherwise the original instance is returned.
"""
assert(isinstance(obj, DataObject))
# The object to be modified must be in this data collection.
if not obj in self.values():
raise ValueError("DataCollection.copy_if_needed() must be called with an object that is part of this data collection.")
# Check if object is owned by someone else.
# This is indicated by the fact that the object has more than one dependent (which would be this data collection).
if obj.num_dependents > 1:
# Make a copy of the object so it can be safely modified.
clone = CloneHelper().clone(obj, False)
self.replace(obj, clone)
return clone
return obj
DataCollection.copy_if_needed = _DataCollection_copy_if_needed
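# A hedged usage sketch of copy_if_needed() inside a user-defined modifier function
# (see the PythonScriptModifier example in the 'attributes' docstring above):
#
#   def modify(frame, input, output):
#       pos = output.copy_if_needed(output['Position'])
#       pos.marray[:] += 1.0    # safe to modify: 'pos' is now exclusively owned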
def _DataCollection_to_ase_atoms(self):
"""
Constructs and returns an `ASE Atoms object <https://wiki.fysik.dtu.dk/ase/ase/atoms.html>`_ from the particles
stored in this :py:class:`!DataCollection`.
.. note::
Calling this method raises an ImportError if ASE (`Atomistic Simulation Environment <https://wiki.fysik.dtu.dk/ase/>`_) is not available. Note that the built-in
Python interpreter shipping with OVITO does *not* contain the ASE module.
It is therefore recommended to build OVITO from source (as explained in the user manual),
which will allow you to use all modules installed in the system's Python interpreter.
    :return: A new `ASE Atoms object <https://wiki.fysik.dtu.dk/ase/ase/atoms.html>`_ that contains
             the converted particle data from this :py:class:`!DataCollection`.
"""
from ase.atoms import Atoms
    # Extract basic data: pbc, cell, positions, particle types
pbc = self.cell.pbc
cell_matrix = np.array(self.cell.matrix)
cell, origin = cell_matrix[:, :3].T, cell_matrix[:, 3]
info = {'cell_origin': origin }
positions = np.array(self.position)
type_names = dict([(t.id, t.name) for t in
self.particle_type.type_list])
symbols = [type_names[id] for id in np.array(self.particle_type)]
# construct ase.Atoms object
atoms = Atoms(symbols,
positions,
cell=cell,
pbc=pbc,
info=info)
# Convert any other particle properties to additional arrays
for name, prop in self.items():
if name in ['Simulation cell',
'Position',
'Particle Type']:
continue
if not isinstance(prop, ParticleProperty):
continue
prop_name = prop.name
i = 1
while prop_name in atoms.arrays:
prop_name = '{0}_{1}'.format(prop.name, i)
i += 1
atoms.new_array(prop_name, prop.array)
return atoms
DataCollection.to_ase_atoms = _DataCollection_to_ase_atoms
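# A hedged usage sketch (requires ASE): convert the current pipeline input to an
# ASE Atoms object. The file name is hypothetical:
#
#   import ovito.io
#   node = ovito.io.import_file("simulation.dump")
#   atoms = node.source.to_ase_atoms()
#   print(len(atoms), atoms.get_chemical_symbols()[:5])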
def _DataCollection_create_from_ase_atoms(cls, atoms):
"""
Converts an `ASE Atoms object <https://wiki.fysik.dtu.dk/ase/ase/atoms.html>`_ to a :py:class:`!DataCollection`.
.. note::
The built-in Python interpreter shipping with OVITO does *not* contain the ASE module (`Atomistic Simulation Environment <https://wiki.fysik.dtu.dk/ase/>`_).
It is therefore recommended to build OVITO from source (as explained in the user manual),
which will allow you to use all modules installed in the system's Python interpreter.
:param atoms: The `ASE Atoms object <https://wiki.fysik.dtu.dk/ase/ase/atoms.html>`_ to be converted.
:return: A new :py:class:`!DataCollection` instance containing the converted data from the ASE object.
"""
data = cls()
# Set the unit cell and origin (if specified in atoms.info)
cell = SimulationCell()
matrix = np.zeros((3,4))
matrix[:, :3] = atoms.get_cell().T
matrix[:, 3] = atoms.info.get('cell_origin',
[0., 0., 0.])
cell.matrix = matrix
cell.pbc = [bool(p) for p in atoms.get_pbc()]
data.add(cell)
# Add ParticleProperty from atomic positions
num_particles = len(atoms)
position = ParticleProperty.create(ParticleProperty.Type.Position,
num_particles)
position.marray[...] = atoms.get_positions()
data.add(position)
# Set particle types from chemical symbols
types = ParticleProperty.create(ParticleProperty.Type.ParticleType,
num_particles)
symbols = atoms.get_chemical_symbols()
type_list = list(set(symbols))
for i, sym in enumerate(type_list):
types.type_list.append(ParticleType(id=i+1, name=sym))
types.marray[:] = [ type_list.index(sym)+1 for sym in symbols ]
data.add(types)
# Check for computed properties - forces, energies, stresses
calc = atoms.get_calculator()
if calc is not None:
for name, ptype in [('forces', ParticleProperty.Type.Force),
('energies', ParticleProperty.Type.PotentialEnergy),
('stresses', ParticleProperty.Type.StressTensor),
('charges', ParticleProperty.Type.Charge)]:
try:
array = calc.get_property(name,
atoms,
allow_calculation=False)
if array is None:
continue
except NotImplementedError:
continue
# Create a corresponding OVITO standard property.
prop = ParticleProperty.create(ptype, num_particles)
prop.marray[...] = array
data.add(prop)
# Create extra properties in DataCollection
for name, array in atoms.arrays.items():
if name in ['positions', 'numbers']:
continue
if array.dtype.kind == 'i':
typ = 'int'
elif array.dtype.kind == 'f':
typ = 'float'
else:
continue
num_particles = array.shape[0]
num_components = 1
if len(array.shape) == 2:
num_components = array.shape[1]
prop = ParticleProperty.create_user(name,
typ,
num_particles,
num_components)
prop.marray[...] = array
data.add(prop)
return data
DataCollection.create_from_ase_atoms = classmethod(_DataCollection_create_from_ase_atoms)
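# A hedged usage sketch (requires ASE): build a DataCollection from a freshly
# constructed Atoms object:
#
#   from ase.build import bulk
#   atoms = bulk('Cu', 'fcc', a=3.6)
#   data = DataCollection.create_from_ase_atoms(atoms)
#   print(data.number_of_particles)    # -> 1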
| srinath-chakravarthy/ovito | src/plugins/pyscript/python/ovito/data/__init__.py | Python | gpl-3.0 | 15,914 | ["ASE", "LAMMPS", "OVITO"] | 10a814fafb818f542045c230e611c38294c5844f96c6b285e75ba7113f7a437c |
import os
import numpy as _np
import pychemia
__author__ = "Guillermo Avendano-Franco"
__copyright__ = "Copyright 2016"
__version__ = "0.1"
__maintainer__ = "Guillermo Avendano-Franco"
__email__ = "gtux.gaf@gmail.com"
__status__ = "Development"
__date__ = "May 16, 2016"
def read_kpoints(path='KPOINTS'):
"""
    Load the KPOINTS file found in the directory 'path', or read the file
    'path' directly, and return a pychemia KPoints object.
    :param path: (str) Path to a KPOINTS file, or to a directory containing one
    :return: (pychemia.crystal.KPoints) The parsed k-points object
"""
if os.path.isfile(path):
filename = path
elif os.path.isdir(path) and os.path.isfile(path + '/KPOINTS'):
filename = path + '/KPOINTS'
else:
print("KPOINTS path not found")
return
    # Read the KPOINTS file
    rf = open(filename, 'r')
    rf.readline()  # skip the comment line
nkpt = int(rf.readline())
mode = rf.readline()
if nkpt > 0:
if mode[0].lower() in ['c', 'k']:
kmode = 'Cartesian'
else:
kmode = 'Reciprocal'
kp = pychemia.crystal.KPoints(kmode=kmode)
for i in range(nkpt):
line = _np.array([float(x) for x in rf.readline().split()])
pos = line[:3]
wgt = line[3]
kp.add_kpt(pos, wgt)
else:
if mode[0].lower() in ['g']:
kmode = 'Gamma'
elif mode[0].lower() in ['m']:
kmode = 'Monkhorst-pack'
else:
raise ValueError("Kpoints mode must be 'Gamma' or 'Monkhorst-pack'")
kp = pychemia.crystal.KPoints(kmode=kmode)
line = _np.array([int(x) for x in rf.readline().split()])
grid = line[:3]
try:
line = _np.array([float(x) for x in rf.readline().split()])
shift = line[:3]
except ValueError:
shift = _np.zeros(3)
        kp.set_grid(grid, shift)
    rf.close()
    return kp
def write_kpoints(kp, filepath='KPOINTS'):
"""
    Take a pychemia KPoints object and write it out as a VASP KPOINTS file.
    If 'filepath' is a directory, the file is created as 'filepath/KPOINTS';
    otherwise 'filepath' itself is used as the output filename.
    :param kp: (pychemia.crystal.KPoints) KPoints object to write
    :param filepath: (str) Filename or directory where the KPOINTS file is created
"""
if os.path.isdir(filepath):
filename = filepath + '/KPOINTS'
else:
filename = filepath
wf = open(filename, 'w')
wf.write('Automatic mesh\n')
if kp.kmode == 'cartesian' or kp.kmode == 'reciprocal':
wf.write(str(kp.nkpt) + '\n')
wf.write(kp.kmode.title() + '\n')
for i in range(kp.nkpt):
wf.write(" %15.7f %15.7f %15.7f %20.7f\n"
% (kp.kpoints_list[i, 0],
kp.kpoints_list[i, 1],
kp.kpoints_list[i, 2],
kp.weights[i]))
elif kp.kmode == 'gamma' or kp.kmode == 'monkhorst-pack':
wf.write('0\n')
wf.write(kp.kmode.title() + '\n')
wf.write(" %7d %7d %7d\n" % tuple(kp.grid))
wf.write(" %7.4f %7.4f %7.4f\n" % tuple(kp.shifts))
wf.close()
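# A hedged usage sketch: round-trip a KPOINTS file through pychemia.
#
#   kp = read_kpoints('KPOINTS')         # parse a file or a directory containing one
#   write_kpoints(kp, 'KPOINTS.copy')    # write it back out in VASP format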
| MaterialsDiscovery/PyChemia | pychemia/code/vasp/kpoints.py | Python | mit | 3,040 | ["CRYSTAL", "VASP"] | fc05f9f0aeadbd39bd747622480d2fc8233e50ebc190aa5a06303a6055452353 |
# ============================================================================
#
# Copyright (C) 2007-2010 Conceptive Engineering bvba. All rights reserved.
# www.conceptive.be / project-camelot@conceptive.be
#
# This file is part of the Camelot Library.
#
# This file may be used under the terms of the GNU General Public
# License version 2.0 as published by the Free Software Foundation
# and appearing in the file license.txt included in the packaging of
# this file. Please review this information to ensure GNU
# General Public Licensing requirements will be met.
#
# If you are unsure which license is appropriate for your use, please
# visit www.python-camelot.com or contact project-camelot@conceptive.be
#
# This file is provided AS IS with NO WARRANTY OF ANY KIND, INCLUDING THE
# WARRANTY OF DESIGN, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
#
# For use of this library in commercial applications, please contact
# project-camelot@conceptive.be
#
# ============================================================================
"""wrapper around pkg_resources, with fallback to using directories specified
in the settings file if pkg_resources cannot be used.
to allow fallback to the settings file, specify the settings_attribute method,
this is the attribute in the settings file that contains the folder with the
resources as opposed to the folder containing the module itself.
this mechanism will probably be rewritten to support the loading of resources
from zip files instead of falling back to settings.
when running from a bootstrapper, we'll try to use pgk_resources, even when
runnin from within a zip file.
"""
import pkg_resources
import sys
import os
import logging
logger = logging.getLogger('camelot.core.resources')
def resource_filename(module_name, filename, settings_attribute=None):
"""Return the absolute path to a file in a directory
if the directory for the module cannot be accessed through pkg_resources,
fall back to the settings attribute
"""
import settings
if sys.path[0].endswith('.zip') and not hasattr(settings, 'BOOTSTRAPPER'):
# we're running from a zip file, pkg_resources won't work
if not settings_attribute:
            logger.error('resources of module %s cannot be loaded because no settings_attribute is specified and the module is inside a zip file', module_name)
return ''
absolute_path = os.path.join(getattr(settings, settings_attribute), filename)
if not os.path.exists(absolute_path):
logger.error('resources of module %s cannot be loaded because %s does not exist'%(module_name, absolute_path))
return ''
        return absolute_path
else:
return pkg_resources.resource_filename(module_name, filename)
def resource_string(module_name, filename, settings_attribute):
    """Return the contents of a resource file as a byte string."""
    import settings
    if sys.path[0].endswith('.zip') and not hasattr(settings, 'BOOTSTRAPPER'):
        with open(resource_filename(module_name, filename, settings_attribute), 'rb') as f:
            return f.read()
else:
return pkg_resources.resource_string(module_name, filename)
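# A hedged usage sketch; the module name, file name and settings attribute are
# hypothetical:
#
#   path = resource_filename('mypackage.art', 'icons/save.png',
#                            settings_attribute='ART_DIRECTORY')
#   data = resource_string('mypackage.art', 'icons/save.png', 'ART_DIRECTORY')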
| kurtraschke/camelot | camelot/core/resources.py | Python | gpl-2.0 | 3,141 | ["VisIt"] | 3337c2be04a0c77fc63af7b3f84b8d6de35ec2b213a8faf32e0e5f5d7fee1cd3 |
# Solar Envelope
# Provides two solar envelopes, each as both a 3d point grid and a polysurface, given a filtered list of suns and a border line representing the building site
# The first (Solar Rights envelope) represents the maximum heights at which new masses could be placed on a given site without interfering with the sun rights of surrounding buildings
# The second (Solar Collection envelope) represents the opposite - the minimum heights from which new development would receive sun access, given an urban context
#
# Ladybug: A Plugin for Environmental Analysis (GPL) started by Mostapha Sadeghipour Roudsari
#
# This file is part of Ladybug.
#
# Copyright (c) 2013-2015, Boris Plotnikov <pborisp@gmail.com> and with the assistance and guidance of Prof. Guedi Capeluto, based on SustArc model
# For further reading it might be worth taking a look at Ralph Knowles's work, e.g - http://www.fau.usp.br/aut5823/Acesso_ao_Sol/Knowles_2003_Solar_Envelope.pdf
# and G. Capeluto and E. Shaviv's, e.g - http://www.ibpsa.org/proceedings/BS1999/BS99_C-22.pdf
# the component relies to a great extent on the concepts described there
# Ladybug is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation; either version 3 of the License,
# or (at your option) any later version.
#
# Ladybug is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ladybug; If not, see <http://www.gnu.org/licenses/>.
#
# @license GPL-3.0+ <http://spdx.org/licenses/GPL-3.0+>
"""
Use this component to generate a solar envelope for a given test surface, set of solar vectors, and context geometry that you want to ensure solar access to. Solar envelopes are typically used to illustrate the volume within which one can build while ensuring that a new development does not shade the surrounding properties for a given set of sun vectors.
-
Provided by Ladybug 0.0.61
"""
ghenv.Component.Name = 'Ladybug_SolarEnvelope'
ghenv.Component.NickName = 'SolarEnvelope'
ghenv.Component.Message = 'VER 0.0.61\nNOV_05_2015'
ghenv.Component.Category = "Ladybug"
ghenv.Component.SubCategory = "3 | EnvironmentalAnalysis"
#compatibleLBVersion = VER 0.0.59\nFEB_01_2015
try: ghenv.Component.AdditionalHelpFromDocStrings = "3"
except: pass
import time, math, Rhino, copy
import System.Threading.Tasks as tasks
import Grasshopper.Kernel as gh
import scriptcontext as scriptc
#default values
maxHeightDefaultVal = 100
minHeightDefaultVal = -20
defaultNumOfCPUs = 1
inputsDictEnvelope = {
0: ["_baseSrf", "A surface representing the area for which you want to create the solar envelope."],
1: ["_obstacleCrvs", "List of curves indicating the bottom borders of our surroundings that are taken into account in calculating the solar envelope."],
2: ["_sunVectors", "Sun vectors representing hours of the year when sun should be accessible to the properties surrounding the baseSrf. sunVectors can be generated using the Ladybug sunPath component."],
3: ["gridSize_", "A numeric value inidcating the gird size of the analysis in Rhino model units. The smaller the grid size - the more test points( more accurate but slower). Default value is automatically set based on the size of the input _baseSrf."],
4: ["maxHeight_", "If there are no obstrucsions, this would be the heighest value for the solar envelope points. The default value set to 100 meters above the average baseSrf height."],
5: ["envelopeToRun_", "Set to 'True' if you would like the component to calculate a solar rights boundary and 'False' if you would like a solar collection boundary. The default is set to solar envelope."],
6: ["_numOfCPUs_", "Number of CPUs to be used for the simulation. Default value would be " + str(defaultNumOfCPUs)],
7: ["_runIt", "Set to 'True' to run the component and generate solar envelope points."]
}
inputsDictCollection = {
0: ["_baseSrf", "A surface representing the area for which you want to create the solar envelope."],
1: ["_obstacleCrvs", "List of curves indicating the top borders of our surroundings that are taken into account in calculating the solar collection."],
2: ["_sunVectors", "Sun vectors representing hours of the year when sun should be accessible to the properties surrounding the baseSrf. sunVectors can be generated using the Ladybug sunPath component."],
3: ["gridSize_", "A numeric value inidcating the gird size of the analysis in Rhino model units. The smaller the grid size - the more test points( more accurate but slower). Default value is automatically set based on the size of the input _baseSrf."],
4: ["maxHeight_", "If there are no obstrucsions this would be the lowest value for the solar collection points. Default value set to 20 meters below the average baseSrf height."],
5: ["envelopeToRun_", "Set to 'True' if you would like the component to calculate a solar rights boundary and 'False' if you would like a solar collection boundary. The default is set to solar envelope."],
6: ["_numOfCPUs_", "Number of CPUs to be used for the simulation. Default value would be " + str(defaultNumOfCPUs)],
7: ["_runIt", "Set to 'True' to run the component and generate solar collection points."]
}
outputsDictEnvelope = {
0: ["readMe!", "Log of the component."],
1: ["envelopePts", "A list of 3d points representing the heights to which the solar envelope reaches. Plug into a native GH 'Delunay Mesh' component to visualize the full solar envelope."],
2: ["envelopeBrep", "The closed volume in which you can build that will not shade the surrounding obstacleCrvs from the input sunVectors."]
}
outputsDictCollect = {
0: ["readMe!", "Log of the component."],
1: ["envelopePts", "A list of 3d points representing the heights to which the solar collection reaches. Plug into a native GH 'Delunay Mesh' component to visualize the full solar collection boundary."],
2: ["envelopeBrep", "The closed volume in which you can build above which the building will have direct solar access to the input sunVectors."]
}
def issueWarning(message,boolToReturn = False):
print message
ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, message)
return boolToReturn
#allDataProvided - boolean. If not all data is provided even before we run this function we'll return
#False either way, but we also add warning messages if something about the units is problematic.
def calculateModelUnits(allDataProvided):
units = scriptc.doc.ModelUnitSystem
unitsTxt = str(units).split('.')[-1]
if `units` == 'Rhino.UnitSystem.Meters': conversionFactor = 1.00
elif `units` == 'Rhino.UnitSystem.Centimeters': conversionFactor = 0.01
elif `units` == 'Rhino.UnitSystem.Millimeters': conversionFactor = 0.001
elif `units` == 'Rhino.UnitSystem.Feet': conversionFactor = 0.305
elif `units` == 'Rhino.UnitSystem.Inches': conversionFactor = 0.0254
else:
issueWarning("You're Kidding me! Which units are you using?"+ unitsTxt + "?")
allDataProvided = issueWarning("Please use Meters, Centimeters, Millimeters, Inches or Feet")
return allDataProvided, units , unitsTxt, conversionFactor
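#Example (derived from the factors above): in a model whose units are Feet the
#conversion factor is 0.305, so the default 100 m envelope height becomes
#maxHeightDefaultVal / conversionFactor = 100 / 0.305, roughly 328 model units.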
def collectInputOutput():
#If some of the component inputs and outputs are not right, blot them out or change them.
for input in range(8):
ghenv.Component.Params.Input[input].NickName = inputsDictCollection[input][0]
ghenv.Component.Params.Input[input].Name = inputsDictCollection[input][0]
ghenv.Component.Params.Input[input].Description = inputsDictCollection[input][1]
for output in range(3):
ghenv.Component.Params.Output[output].NickName = outputsDictCollect[output][0]
ghenv.Component.Params.Output[output].Name = outputsDictCollect[output][0]
ghenv.Component.Params.Output[output].Description = outputsDictCollect[output][1]
def restoreInputOutput():
#If some of the component inputs and outputs are not right, blot them out or change them.
for input in range(8):
ghenv.Component.Params.Input[input].NickName = inputsDictEnvelope[input][0]
ghenv.Component.Params.Input[input].Name = inputsDictEnvelope[input][0]
ghenv.Component.Params.Input[input].Description = inputsDictEnvelope[input][1]
for output in range(3):
ghenv.Component.Params.Output[output].NickName = outputsDictEnvelope[output][0]
ghenv.Component.Params.Output[output].Name = outputsDictEnvelope[output][0]
ghenv.Component.Params.Output[output].Description = outputsDictEnvelope[output][1]
def computeGridSize(baseSrf):
baseSrfBB = Rhino.Geometry.Brep.GetBoundingBox(baseSrf, Rhino.Geometry.Plane.WorldXY)
baseSrfBB = Rhino.Geometry.Box(baseSrfBB)
baseSrfBBDim = [baseSrfBB.X[1]-baseSrfBB.X[0], baseSrfBB.Y[1]-baseSrfBB.Y[0], baseSrfBB.Z[1]-baseSrfBB.Z[0]]
gridSizeInit = baseSrfBBDim[1]/5
gridsize = round(gridSizeInit, 4)
return gridsize
class SolarEnvelope:
def __init__(self,_baseSrf,gridSize,obstacleCurves,sunVectors, defaultHeight,numOfCPUs_,_solarEnvelope = True) :
self._solarEnvelope = _solarEnvelope # true for solar envelope and false for solar collection
self.defaultHeight = self.computeHeightWithBaseSrf(defaultHeight,_baseSrf)
self.suns = []
self.gridPoints = []
self.finalPointsList = []
self.chunks = []
self.NumOfThreads = numOfCPUs_
#this is the minimum angle under which we consider the sun - below that angle
#(between the sun vector and the obstacle curve) we act as if the sun vector isn't relevant
#currently not in use, WIP
marginAngle = 20
if self._solarEnvelope:
            self.lineExtention = 1000 #positive number means we're going forward (for use in solar rights envelope)
else:
self.lineExtention = -1000 #negative number means we're going back to the sun (for use in solar collection envelope)
#we don't care if the angle is very big or very small so we get the sin of it - TODO - make this work
marginAngle = math.sin(math.radians(marginAngle))
self.obstacleCurves = obstacleCurves
self.buildSunPosList(sunVectors)
self.getPtsFromClosedCrv(_baseSrf,gridSize)
self.parallelFindPointHeights()
def computeHeightWithBaseSrf(self,defaultHeight,baseSf):
baseSrfBB = Rhino.Geometry.Brep.GetBoundingBox(baseSf, Rhino.Geometry.Plane.WorldXY)
baseSrfHeight = baseSrfBB.Center.Z
defaultHeightFinal = baseSrfHeight + defaultHeight
return defaultHeightFinal
def buildSunPosList(self,sunVectors):
azimuthAngles = []
alltitudeAngles = []
for vec in sunVectors:
baseVec = Rhino.Geometry.Vector3d(vec.X, vec.Y, 0)
alt = math.degrees(Rhino.Geometry.Vector3d.VectorAngle(vec, baseVec))
if vec.X < 0.0: az = math.degrees(Rhino.Geometry.Vector3d.VectorAngle(vec, Rhino.Geometry.Vector3d.YAxis, Rhino.Geometry.Plane.WorldXY)) - 180
else: az = math.degrees(Rhino.Geometry.Vector3d.VectorAngle(vec, Rhino.Geometry.Vector3d.YAxis, Rhino.Geometry.Plane.WorldXY)) + 180
azimuthAngles.append(az)
alltitudeAngles.append(alt)
for (i, _azimuthAngle) in enumerate(azimuthAngles):
            self.suns.append(SingleSun(alltitudeAngles[i], _azimuthAngle))
def parallelFindPointHeights(self):
def _findPointsHeight(i):
chunk = self.chunks[i]
for x in range(len(chunk.points)):
g = chunk.points[x]
for y in range(len(chunk.obstacleCurves)):
obCurve = chunk.obstacleCurves[y]
#not using this for now
#lineAngles = getLineEdgeAngles(obCurve,checkPoint,yVector)
for j in range(len(chunk.suns)):
tempHeight = g.getPointHeight(obCurve, chunk.suns[j])
if self._solarEnvelope :
if tempHeight < g.point.Z :
g.point.Z = tempHeight
else :
if tempHeight > g.point.Z :
g.point.Z = tempHeight
        #split the point list into equal-size chunks; the last chunk will contain the remaining elements
        itemsInEveryChunk = int(math.ceil(len(self.gridPoints) / float(self.NumOfThreads)))
        splittedPoints = [self.gridPoints[i:i+itemsInEveryChunk] for i in range(0,len(self.gridPoints),itemsInEveryChunk)]
        #wrap every chunk in its own object and run the chunks in parallel; iterate over
        #the actual number of chunks, which can be smaller than NumOfThreads
        self.chunks = []
        for points in splittedPoints:
            self.chunks.append(ParallelChunkObject(copy.deepcopy(points),copy.deepcopy(self.suns), copy.deepcopy(self.obstacleCurves)))
        tasks.Parallel.ForEach(xrange(len(self.chunks)),_findPointsHeight)
self.gridPoints = []
for pointChunk in self.chunks:
self.gridPoints.extend(pointChunk.points)
def getPtsFromClosedCrv(self,srf,gridSize):
regionMeshPar = Rhino.Geometry.MeshingParameters.Default
regionMeshPar.MinimumEdgeLength = regionMeshPar.MaximumEdgeLength = gridSize/2
self.regionMesh = Rhino.Geometry.Mesh.CreateFromBrep(srf, regionMeshPar)[0]
vertices = self.regionMesh.Vertices
for item in vertices:
g = GridPt(Rhino.Geometry.Point3d(item),self.defaultHeight,self)
            self.gridPoints.append(g)
def computeFinalSolarVol(self):
#Change the vertex heights of the initial mesh.
finalPoints = []
for vertexCount, gridPt in enumerate(self.gridPoints):
self.regionMesh.Vertices[vertexCount] = Rhino.Geometry.Point3f(gridPt.point.X, gridPt.point.Y, gridPt.point.Z)
            finalPoints.append(gridPt.point)
finalEnvelopeBrep = Rhino.Geometry.Brep.CreateFromMesh(self.regionMesh,True)
return finalEnvelopeBrep, finalPoints
#divide the point array into several subarrays and calculate every subarray with multithreading.
#It seems that when multithreading, a race condition can occur even on just reading data, so copy
#all the data for every subarray and wrap it in an object.
class ParallelChunkObject:
def __init__(self,points,_suns, _obstacleCurves) :
self.points = points
self.suns = _suns
self.obstacleCurves = _obstacleCurves
class GridPt:
def __init__(self, point, defaultHeight,mainRef):
self.point = point
self.point.Z = defaultHeight
self.defaultHeight = defaultHeight
self.mainRef = mainRef
self.isStart = False
#handle all the logic and return the z (height) of the relevant point for one specified obstacle line
#if the z value is lower than what we had replace it (because the lowest one is the relevant one)
def getPointHeight(self, bLine, singleSun):
self.initialHeight = bLine.PointAtEnd.Z
_checkPoint = Rhino.Geometry.Point2d(self.point.X,self.point.Y)
xAdd = - self.mainRef.lineExtention * math.sin(math.radians(singleSun.azimuth));
yAdd = - self.mainRef.lineExtention * math.cos(math.radians(singleSun.azimuth));
point1 = Rhino.Geometry.Point3d( self.point.X,self.point.Y,self.initialHeight )
point2 = Rhino.Geometry.Point3d( self.point.X + xAdd,self.point.Y + yAdd,self.initialHeight)
_sunLine = Rhino.Geometry.LineCurve(point1,point2)
_intersections = Rhino.Geometry.Intersect.Intersection.CurveCurve(_sunLine, bLine, 0.001, 0.0)
if _intersections :
_intersectionPoint = Rhino.Geometry.Point2d(_intersections[0].PointA[0],_intersections[0].PointA[1])
dist = (_intersectionPoint - _checkPoint).Length
t = math.tan(math.radians(singleSun.alltitude))
if self.mainRef._solarEnvelope :
return dist * t + self.initialHeight
else :
return self.initialHeight - dist * t
else :
            #sun not relevant so no obstacles to look out for - return the highest (rights) / lowest (collection) point defined
return self.defaultHeight
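    # Worked example (hedged): with a sun altitude of 45 degrees, an obstacle
    # line at height 10 and an intersection 6 units away from the grid point,
    # the solar rights height is 6 * tan(45) + 10 = 16, so any mass built above
    # 16 at this point would shade the obstacle line for that sun vector.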
#class to organize all the data in a single sun object
#properties for later use, when we'll get more comprehensive data from an epw file - hour, day, month, temperature, radiation
class SingleSun:
def __init__(self, _alltitude, _azimuth):
self.alltitude = _alltitude
self.azimuth = _azimuth
if envelopeToRun_: restoreInputOutput()
else: collectInputOutput()
if _runIt == True:
#if we want to use it (for debugging externally with pydev we need to define and set the debug variable to True
#if debug:
# import pydevd as py
# py.settrace()
allDataProvided = True
if not _baseSrf :
allDataProvided = issueWarning("Base surface must be provided")
if len(_sunVectors) == 0:
allDataProvided = issueWarning("A list of sun vectors from ladybug must be provided")
if gridSize_ != None and gridSize_<=0:
allDataProvided = issueWarning("gridSize_ must be greater than or equal to zero")
allDataProvided, units, unitsTxt, conversionFactor = calculateModelUnits(allDataProvided)
#solar rights envelope specific conditions that must be met
if envelopeToRun_ :
if maxHeight_ != None and maxHeight_< 0:
allDataProvided = issueWarning("maxHeight_ must be greater than or equal to zero")
#solar collection specific conditions that must be met
else :
if not _obstacleCrvs:
allDataProvided = issueWarning("Top obstacle curves must be provided")
if maxHeight_ != None and maxHeight_ > 0:
allDataProvided = issueWarning("maxHeight_ must be smaller than or equal to zero")
if allDataProvided:
print "Starting simulation"
if not gridSize_ :
gridSize_ = computeGridSize(_baseSrf)
print "No gridSize provided. Grid size automatically set to " + str(gridSize_) + " " + unitsTxt + " based on the size of tha _baseSrf."
#let the default value be 1, at least for now, maybe later on calculate availible cpus and make this automatic
if not _numOfCPUs_ :
_numOfCPUs_ = defaultNumOfCPUs
print "No number of availible CPUs provided. Using the default value of " + str(defaultNumOfCPUs)
#solar rights envelope specific parameters, tests and settings
if envelopeToRun_:
if maxHeight_ == None :
maxHeight_ = maxHeightDefaultVal / conversionFactor
print "No height provided, using the default value of " + str(maxHeightDefaultVal) + " meters above the baseSrf height."
if not _obstacleCrvs :
print "No top obstacle curves selected, taking the base surface as the solar envelope border"
_obstacleCrvs = []
for crv in _baseSrf.Curves3D: _obstacleCrvs.append(crv)
#solar collection specific parameters, tests and settings
else:
if maxHeight_ == None:
maxHeight_ = minHeightDefaultVal / conversionFactor
print "No height provided, using the default value of " + str(minHeightDefaultVal) + " meters above the baseSrf height."
se = SolarEnvelope(_baseSrf,gridSize_,_obstacleCrvs, _sunVectors, maxHeight_,_numOfCPUs_,envelopeToRun_)
envelopeBrep, envelopePts = se.computeFinalSolarVol()
else:
print "To run the component, set _runIt to True"
ghenv.Component.Params.Output[1].Hidden = True
| boris-p/ladybug | src/Ladybug_SolarEnvelope.py | Python | gpl-3.0 | 19,935 | ["EPW"] | e3efe8e0f780ae48754bbb5bec068e586b4c08d1d68eb8b98ce3fc1253606427 |
#!/usr/bin/env python
#pylint: disable=missing-docstring
#################################################################
# DO NOT MODIFY THIS HEADER #
# MOOSE - Multiphysics Object Oriented Simulation Environment #
# #
# (c) 2010 Battelle Energy Alliance, LLC #
# ALL RIGHTS RESERVED #
# #
# Prepared by Battelle Energy Alliance, LLC #
# Under Contract No. DE-AC07-05ID14517 #
# With the U. S. Department of Energy #
# #
# See COPYRIGHT for full restrictions #
#################################################################
import chigger
reader = chigger.exodus.ExodusReader('../input/mug_blocks_out.e', time=1, timestep=1)
reader.update()
| yipenggao/moose | python/chigger/tests/error/time_timestep.py | Python | lgpl-2.1 | 1,032 | ["MOOSE"] | c9d22e6b2dc1a55df29dee106423f79da7589d2d50e6af530b36f5e8335c89fa |
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright (c) 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Tests for compute resource tracking."""
import re
import uuid
from oslo.config import cfg
from nova.compute import flavors
from nova.compute import resource_tracker
from nova.compute import task_states
from nova.compute import vm_states
from nova import context
from nova import db
from nova.objects import base as obj_base
from nova.objects import migration as migration_obj
from nova.openstack.common import jsonutils
from nova.openstack.common import timeutils
from nova import test
from nova.tests.objects import test_migration
from nova.virt import driver
FAKE_VIRT_MEMORY_MB = 5
FAKE_VIRT_MEMORY_OVERHEAD = 1
FAKE_VIRT_LOCAL_GB = 6
FAKE_VIRT_VCPUS = 1
CONF = cfg.CONF
class UnsupportedVirtDriver(driver.ComputeDriver):
"""Pretend version of a lame virt driver."""
def __init__(self):
super(UnsupportedVirtDriver, self).__init__(None)
def get_host_ip_addr(self):
return '127.0.0.1'
def get_available_resource(self, nodename):
# no support for getting resource usage info
return {}
class FakeVirtDriver(driver.ComputeDriver):
def __init__(self, pci_support=False):
super(FakeVirtDriver, self).__init__(None)
self.memory_mb = FAKE_VIRT_MEMORY_MB
self.local_gb = FAKE_VIRT_LOCAL_GB
self.vcpus = FAKE_VIRT_VCPUS
self.memory_mb_used = 0
self.local_gb_used = 0
self.pci_support = pci_support
def get_host_ip_addr(self):
return '127.0.0.1'
def get_available_resource(self, nodename):
d = {
'vcpus': self.vcpus,
'memory_mb': self.memory_mb,
'local_gb': self.local_gb,
'vcpus_used': 0,
'memory_mb_used': self.memory_mb_used,
'local_gb_used': self.local_gb_used,
'hypervisor_type': 'fake',
'hypervisor_version': 0,
'hypervisor_hostname': 'fakehost',
'cpu_info': '',
}
if self.pci_support:
d['pci_passthrough_devices'] = jsonutils.dumps([{
'label': 'forza-napoli',
'compute_node_id': 1,
'address': '0000:00:00.1',
'product_id': 'p1',
'vendor_id': 'v1',
'status': 'available',
'extra_k1': 'v1'}])
return d
def estimate_instance_overhead(self, instance_info):
mem = instance_info['memory_mb'] # make sure memory value is present
overhead = {
'memory_mb': FAKE_VIRT_MEMORY_OVERHEAD
}
return overhead # just return a constant value for testing
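# Note on the fake overhead (derived from the assertions in the test cases below):
# an instance claiming 3 MB of memory is tracked as
# 3 + FAKE_VIRT_MEMORY_OVERHEAD = 4 MB in the compute node's memory_mb_used.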
class BaseTestCase(test.TestCase):
def setUp(self):
super(BaseTestCase, self).setUp()
self.flags(reserved_host_disk_mb=0,
reserved_host_memory_mb=0)
self.context = context.get_admin_context()
self.flags(use_local=True, group='conductor')
self.conductor = self.start_service('conductor',
manager=CONF.conductor.manager)
self._instances = {}
self._instance_types = {}
self.stubs.Set(self.conductor.db,
'instance_get_all_by_host_and_node',
self._fake_instance_get_all_by_host_and_node)
self.stubs.Set(self.conductor.db,
'instance_update_and_get_original',
self._fake_instance_update_and_get_original)
self.stubs.Set(self.conductor.db,
'flavor_get', self._fake_flavor_get)
self.host = 'fakehost'
def _create_compute_node(self, values=None):
compute = {
"id": 1,
"service_id": 1,
"vcpus": 1,
"memory_mb": 1,
"local_gb": 1,
"vcpus_used": 1,
"memory_mb_used": 1,
"local_gb_used": 1,
"free_ram_mb": 1,
"free_disk_gb": 1,
"current_workload": 1,
"running_vms": 0,
"cpu_info": None,
"stats": [{"key": "num_instances", "value": "1"}],
"hypervisor_hostname": "fakenode",
}
if values:
compute.update(values)
return compute
def _create_service(self, host="fakehost", compute=None):
if compute:
compute = [compute]
service = {
"id": 1,
"host": host,
"binary": "nova-compute",
"topic": "compute",
"compute_node": compute,
}
return service
def _fake_instance_system_metadata(self, instance_type, prefix=''):
sys_meta = []
for key in flavors.system_metadata_flavor_props.keys():
sys_meta.append({'key': '%sinstance_type_%s' % (prefix, key),
'value': instance_type[key]})
return sys_meta
def _fake_instance(self, stash=True, **kwargs):
# Default to an instance ready to resize to or from the same
# instance_type
itype = self._fake_flavor_create()
sys_meta = self._fake_instance_system_metadata(itype)
if stash:
# stash instance types in system metadata.
sys_meta = (sys_meta +
self._fake_instance_system_metadata(itype, 'new_') +
self._fake_instance_system_metadata(itype, 'old_'))
instance_uuid = str(uuid.uuid1())
instance = {
'uuid': instance_uuid,
'vm_state': vm_states.RESIZED,
'task_state': None,
'memory_mb': 2,
'root_gb': 3,
'ephemeral_gb': 1,
'os_type': 'Linux',
'project_id': '123456',
'vcpus': 1,
'host': None,
'node': None,
'instance_type_id': 1,
'launched_on': None,
'system_metadata': sys_meta,
'availability_zone': None,
'vm_mode': None,
'reservation_id': None,
'display_name': None,
'default_swap_device': None,
'power_state': None,
'scheduled_at': None,
'access_ip_v6': None,
'access_ip_v4': None,
'key_name': None,
'updated_at': None,
'cell_name': None,
'locked': None,
'locked_by': None,
'launch_index': None,
'architecture': None,
'auto_disk_config': None,
'terminated_at': None,
'ramdisk_id': None,
'user_data': None,
'cleaned': None,
'deleted_at': None,
'id': 333,
'disable_terminate': None,
'hostname': None,
'display_description': None,
'key_data': None,
'deleted': None,
'default_ephemeral_device': None,
'progress': None,
'launched_at': None,
'config_drive': None,
'kernel_id': None,
'user_id': None,
'shutdown_terminate': None,
'created_at': None,
'image_ref': None,
'root_device_name': None,
}
instance.update(kwargs)
self._instances[instance_uuid] = instance
return instance
def _fake_flavor_create(self, **kwargs):
instance_type = {
'id': 1,
'name': 'fakeitype',
'memory_mb': FAKE_VIRT_MEMORY_MB,
'vcpus': FAKE_VIRT_VCPUS,
'root_gb': FAKE_VIRT_LOCAL_GB / 2,
'ephemeral_gb': FAKE_VIRT_LOCAL_GB / 2,
'swap': 0,
'rxtx_factor': 1.0,
'vcpu_weight': 1,
'flavorid': 'fakeflavor'
}
instance_type.update(**kwargs)
id_ = instance_type['id']
self._instance_types[id_] = instance_type
return instance_type
def _fake_instance_get_all_by_host_and_node(self, context, host, nodename):
return [i for i in self._instances.values() if i['host'] == host]
def _fake_flavor_get(self, ctxt, id_):
return self._instance_types[id_]
def _fake_instance_update_and_get_original(self, context, instance_uuid,
values):
instance = self._instances[instance_uuid]
instance.update(values)
# the test doesn't care what the original instance values are, it's
# only used in the subsequent notification:
return (instance, instance)
def _driver(self):
return FakeVirtDriver()
def _tracker(self, host=None):
if host is None:
host = self.host
node = "fakenode"
driver = self._driver()
tracker = resource_tracker.ResourceTracker(host, driver, node)
return tracker
class UnsupportedDriverTestCase(BaseTestCase):
"""Resource tracking should be disabled when the virt driver doesn't
support it.
"""
def setUp(self):
super(UnsupportedDriverTestCase, self).setUp()
self.tracker = self._tracker()
# seed tracker with data:
self.tracker.update_available_resource(self.context)
def _driver(self):
return UnsupportedVirtDriver()
def test_disabled(self):
# disabled = no compute node stats
self.assertTrue(self.tracker.disabled)
self.assertEqual(None, self.tracker.compute_node)
def test_disabled_claim(self):
# basic claim:
instance = self._fake_instance()
claim = self.tracker.instance_claim(self.context, instance)
self.assertEqual(0, claim.memory_mb)
def test_disabled_instance_claim(self):
# instance variation:
instance = self._fake_instance()
claim = self.tracker.instance_claim(self.context, instance)
self.assertEqual(0, claim.memory_mb)
def test_disabled_instance_context_claim(self):
# instance context manager variation:
instance = self._fake_instance()
claim = self.tracker.instance_claim(self.context, instance)
with self.tracker.instance_claim(self.context, instance) as claim:
self.assertEqual(0, claim.memory_mb)
def test_disabled_updated_usage(self):
instance = self._fake_instance(host='fakehost', memory_mb=5,
root_gb=10)
self.tracker.update_usage(self.context, instance)
def test_disabled_resize_claim(self):
instance = self._fake_instance()
instance_type = self._fake_flavor_create()
claim = self.tracker.resize_claim(self.context, instance,
instance_type)
self.assertEqual(0, claim.memory_mb)
self.assertEqual(instance['uuid'], claim.migration['instance_uuid'])
self.assertEqual(instance_type['id'],
claim.migration['new_instance_type_id'])
def test_disabled_resize_context_claim(self):
instance = self._fake_instance()
instance_type = self._fake_flavor_create()
with self.tracker.resize_claim(self.context, instance, instance_type) \
as claim:
self.assertEqual(0, claim.memory_mb)
class MissingServiceTestCase(BaseTestCase):
def setUp(self):
super(MissingServiceTestCase, self).setUp()
self.context = context.get_admin_context()
self.tracker = self._tracker()
def test_missing_service(self):
self.tracker.update_available_resource(self.context)
self.assertTrue(self.tracker.disabled)
class MissingComputeNodeTestCase(BaseTestCase):
def setUp(self):
super(MissingComputeNodeTestCase, self).setUp()
self.tracker = self._tracker()
self.stubs.Set(db, 'service_get_by_compute_host',
self._fake_service_get_by_compute_host)
self.stubs.Set(db, 'compute_node_create',
self._fake_create_compute_node)
def _fake_create_compute_node(self, context, values):
self.created = True
return self._create_compute_node()
def _fake_service_get_by_compute_host(self, ctx, host):
# return a service with no joined compute
service = self._create_service()
return service
def test_create_compute_node(self):
self.tracker.update_available_resource(self.context)
self.assertTrue(self.created)
def test_enabled(self):
self.tracker.update_available_resource(self.context)
self.assertFalse(self.tracker.disabled)
class BaseTrackerTestCase(BaseTestCase):
def setUp(self):
# setup plumbing for a working resource tracker with required
# database models and a compatible compute driver:
super(BaseTrackerTestCase, self).setUp()
self.updated = False
self.deleted = False
self.tracker = self._tracker()
self._migrations = {}
self.stubs.Set(db, 'service_get_by_compute_host',
self._fake_service_get_by_compute_host)
self.stubs.Set(db, 'compute_node_update',
self._fake_compute_node_update)
self.stubs.Set(db, 'compute_node_delete',
self._fake_compute_node_delete)
self.stubs.Set(db, 'migration_update',
self._fake_migration_update)
self.stubs.Set(db, 'migration_get_in_progress_by_host_and_node',
self._fake_migration_get_in_progress_by_host_and_node)
self.tracker.update_available_resource(self.context)
self.limits = self._limits()
def _fake_service_get_by_compute_host(self, ctx, host):
self.compute = self._create_compute_node()
self.service = self._create_service(host, compute=self.compute)
return self.service
def _fake_compute_node_update(self, ctx, compute_node_id, values,
prune_stats=False):
self.updated = True
values['stats'] = [{"key": "num_instances", "value": "1"}]
self.compute.update(values)
return self.compute
def _fake_compute_node_delete(self, ctx, compute_node_id):
self.deleted = True
self.compute.update({'deleted': 1})
return self.compute
def _fake_migration_get_in_progress_by_host_and_node(self, ctxt, host,
node):
status = ['confirmed', 'reverted']
migrations = []
for migration in self._migrations.values():
migration = obj_base.obj_to_primitive(migration)
if migration['status'] in status:
continue
uuid = migration['instance_uuid']
migration['instance'] = self._instances[uuid]
migrations.append(migration)
return migrations
def _fake_migration_update(self, ctxt, migration_id, values):
# cheat and assume there's only 1 migration present
migration = self._migrations.values()[0]
migration.update(values)
return migration
def _limits(self, memory_mb=FAKE_VIRT_MEMORY_MB +
FAKE_VIRT_MEMORY_OVERHEAD, disk_gb=FAKE_VIRT_LOCAL_GB,
vcpus=FAKE_VIRT_VCPUS):
"""Create limits dictionary used for oversubscribing resources."""
return {
'memory_mb': memory_mb,
'disk_gb': disk_gb,
'vcpu': vcpus
}
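    # Note: passing limits larger than what FakeVirtDriver reports (5 MB RAM,
    # 6 GB disk, 1 vcpu) is how tests such as
    # test_instance_claim_with_oversubscription exercise oversubscription.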
def _assert(self, value, field, tracker=None):
if tracker is None:
tracker = self.tracker
if field not in tracker.compute_node:
raise test.TestingException(
"'%(field)s' not in compute node." % {'field': field})
x = tracker.compute_node[field]
self.assertEqual(value, x)
class TrackerTestCase(BaseTrackerTestCase):
def test_free_ram_resource_value(self):
driver = FakeVirtDriver()
mem_free = driver.memory_mb - driver.memory_mb_used
self.assertEqual(mem_free, self.tracker.compute_node['free_ram_mb'])
def test_free_disk_resource_value(self):
driver = FakeVirtDriver()
mem_free = driver.local_gb - driver.local_gb_used
self.assertEqual(mem_free, self.tracker.compute_node['free_disk_gb'])
def test_update_compute_node(self):
self.assertFalse(self.tracker.disabled)
self.assertTrue(self.updated)
def test_init(self):
self._assert(FAKE_VIRT_MEMORY_MB, 'memory_mb')
self._assert(FAKE_VIRT_LOCAL_GB, 'local_gb')
self._assert(FAKE_VIRT_VCPUS, 'vcpus')
self._assert(0, 'memory_mb_used')
self._assert(0, 'local_gb_used')
self._assert(0, 'vcpus_used')
self._assert(0, 'running_vms')
self._assert(FAKE_VIRT_MEMORY_MB, 'free_ram_mb')
self._assert(FAKE_VIRT_LOCAL_GB, 'free_disk_gb')
self.assertFalse(self.tracker.disabled)
self.assertEqual(0, self.tracker.compute_node['current_workload'])
self._assert('{}', 'pci_stats')
class TrackerPciStatsTestCase(BaseTrackerTestCase):
def test_update_compute_node(self):
self.assertFalse(self.tracker.disabled)
self.assertTrue(self.updated)
def test_init(self):
self._assert(FAKE_VIRT_MEMORY_MB, 'memory_mb')
self._assert(FAKE_VIRT_LOCAL_GB, 'local_gb')
self._assert(FAKE_VIRT_VCPUS, 'vcpus')
self._assert(0, 'memory_mb_used')
self._assert(0, 'local_gb_used')
self._assert(0, 'vcpus_used')
self._assert(0, 'running_vms')
self._assert(FAKE_VIRT_MEMORY_MB, 'free_ram_mb')
self._assert(FAKE_VIRT_LOCAL_GB, 'free_disk_gb')
self.assertFalse(self.tracker.disabled)
self.assertEqual(0, self.tracker.compute_node['current_workload'])
expected = """[{"count": 1,
"vendor_id": "v1",
"product_id": "p1",
"extra_info": {"extra_k1": "v1"}}]"""
expected = re.sub(r'\s+', '', expected)
pci = re.sub(r'\s+', '', self.tracker.compute_node['pci_stats'])
self.assertEqual(expected, pci)
def _driver(self):
return FakeVirtDriver(pci_support=True)
class InstanceClaimTestCase(BaseTrackerTestCase):
def test_update_usage_only_for_tracked(self):
instance = self._fake_instance(memory_mb=3, root_gb=1, ephemeral_gb=1,
task_state=None)
self.tracker.update_usage(self.context, instance)
self._assert(0, 'memory_mb_used')
self._assert(0, 'local_gb_used')
self._assert(0, 'current_workload')
claim = self.tracker.instance_claim(self.context, instance,
self.limits)
self.assertNotEqual(0, claim.memory_mb)
self._assert(3 + FAKE_VIRT_MEMORY_OVERHEAD, 'memory_mb_used')
self._assert(2, 'local_gb_used')
# now update should actually take effect
instance['task_state'] = task_states.SCHEDULING
self.tracker.update_usage(self.context, instance)
self._assert(3 + FAKE_VIRT_MEMORY_OVERHEAD, 'memory_mb_used')
self._assert(2, 'local_gb_used')
self._assert(1, 'current_workload')
def test_claim_and_audit(self):
claim_mem = 3
claim_disk = 2
instance = self._fake_instance(memory_mb=claim_mem, root_gb=claim_disk,
ephemeral_gb=0)
claim = self.tracker.instance_claim(self.context, instance,
self.limits)
self.assertEqual(5, self.compute["memory_mb"])
self.assertEqual(claim_mem + FAKE_VIRT_MEMORY_OVERHEAD,
self.compute["memory_mb_used"])
self.assertEqual(5 - claim_mem - FAKE_VIRT_MEMORY_OVERHEAD,
self.compute["free_ram_mb"])
self.assertEqual(6, self.compute["local_gb"])
self.assertEqual(claim_disk, self.compute["local_gb_used"])
self.assertEqual(6 - claim_disk, self.compute["free_disk_gb"])
        # First, pretend that the compute operation finished and claimed the
# desired resources from the virt layer
driver = self.tracker.driver
driver.memory_mb_used = claim_mem
driver.local_gb_used = claim_disk
self.tracker.update_available_resource(self.context)
# confirm tracker is adding in host_ip
self.assertTrue(self.compute.get('host_ip') is not None)
# confirm that resource usage is derived from instance usages,
# not virt layer:
self.assertEqual(claim_mem + FAKE_VIRT_MEMORY_OVERHEAD,
self.compute['memory_mb_used'])
self.assertEqual(5 - claim_mem - FAKE_VIRT_MEMORY_OVERHEAD,
self.compute['free_ram_mb'])
self.assertEqual(claim_disk, self.compute['local_gb_used'])
self.assertEqual(6 - claim_disk, self.compute['free_disk_gb'])
def test_claim_and_abort(self):
claim_mem = 3
claim_disk = 2
instance = self._fake_instance(memory_mb=claim_mem,
root_gb=claim_disk, ephemeral_gb=0)
claim = self.tracker.instance_claim(self.context, instance,
self.limits)
self.assertNotEqual(None, claim)
self.assertEqual(claim_mem + FAKE_VIRT_MEMORY_OVERHEAD,
self.compute["memory_mb_used"])
self.assertEqual(5 - claim_mem - FAKE_VIRT_MEMORY_OVERHEAD,
self.compute["free_ram_mb"])
self.assertEqual(claim_disk, self.compute["local_gb_used"])
self.assertEqual(6 - claim_disk, self.compute["free_disk_gb"])
claim.abort()
self.assertEqual(0, self.compute["memory_mb_used"])
self.assertEqual(5, self.compute["free_ram_mb"])
self.assertEqual(0, self.compute["local_gb_used"])
self.assertEqual(6, self.compute["free_disk_gb"])
def test_instance_claim_with_oversubscription(self):
memory_mb = FAKE_VIRT_MEMORY_MB * 2
root_gb = ephemeral_gb = FAKE_VIRT_LOCAL_GB
vcpus = FAKE_VIRT_VCPUS * 2
limits = {'memory_mb': memory_mb + FAKE_VIRT_MEMORY_OVERHEAD,
'disk_gb': root_gb * 2,
'vcpu': vcpus}
instance = self._fake_instance(memory_mb=memory_mb,
root_gb=root_gb, ephemeral_gb=ephemeral_gb)
self.tracker.instance_claim(self.context, instance, limits)
self.assertEqual(memory_mb + FAKE_VIRT_MEMORY_OVERHEAD,
self.tracker.compute_node['memory_mb_used'])
self.assertEqual(root_gb * 2,
self.tracker.compute_node['local_gb_used'])
def test_additive_claims(self):
self.limits['vcpu'] = 2
instance = self._fake_instance(memory_mb=1, root_gb=1, ephemeral_gb=1,
vcpus=1)
with self.tracker.instance_claim(self.context, instance, self.limits):
pass
instance = self._fake_instance(memory_mb=1, root_gb=1, ephemeral_gb=1,
vcpus=1)
with self.tracker.instance_claim(self.context, instance, self.limits):
pass
self.assertEqual(2 + 2 * FAKE_VIRT_MEMORY_OVERHEAD,
self.tracker.compute_node['memory_mb_used'])
self.assertEqual(4, self.tracker.compute_node['local_gb_used'])
self.assertEqual(2, self.tracker.compute_node['vcpus_used'])
def test_context_claim_with_exception(self):
instance = self._fake_instance(memory_mb=1, root_gb=1, ephemeral_gb=1)
try:
with self.tracker.instance_claim(self.context, instance):
# <insert exciting things that utilize resources>
raise test.TestingException()
except test.TestingException:
pass
self.assertEqual(0, self.tracker.compute_node['memory_mb_used'])
self.assertEqual(0, self.tracker.compute_node['local_gb_used'])
self.assertEqual(0, self.compute['memory_mb_used'])
self.assertEqual(0, self.compute['local_gb_used'])
def test_instance_context_claim(self):
instance = self._fake_instance(memory_mb=1, root_gb=1, ephemeral_gb=1)
with self.tracker.instance_claim(self.context, instance):
# <insert exciting things that utilize resources>
self.assertEqual(1 + FAKE_VIRT_MEMORY_OVERHEAD,
self.tracker.compute_node['memory_mb_used'])
self.assertEqual(2, self.tracker.compute_node['local_gb_used'])
self.assertEqual(1 + FAKE_VIRT_MEMORY_OVERHEAD,
self.compute['memory_mb_used'])
self.assertEqual(2, self.compute['local_gb_used'])
# after exiting claim context, build is marked as finished. usage
# totals should be same:
self.tracker.update_available_resource(self.context)
self.assertEqual(1 + FAKE_VIRT_MEMORY_OVERHEAD,
self.tracker.compute_node['memory_mb_used'])
self.assertEqual(2, self.tracker.compute_node['local_gb_used'])
self.assertEqual(1 + FAKE_VIRT_MEMORY_OVERHEAD,
self.compute['memory_mb_used'])
self.assertEqual(2, self.compute['local_gb_used'])
def test_update_load_stats_for_instance(self):
instance = self._fake_instance(task_state=task_states.SCHEDULING)
with self.tracker.instance_claim(self.context, instance):
pass
self.assertEqual(1, self.tracker.compute_node['current_workload'])
instance['vm_state'] = vm_states.ACTIVE
instance['task_state'] = None
instance['host'] = 'fakehost'
self.tracker.update_usage(self.context, instance)
self.assertEqual(0, self.tracker.compute_node['current_workload'])
def test_cpu_stats(self):
limits = {'disk_gb': 100, 'memory_mb': 100}
self.assertEqual(0, self.tracker.compute_node['vcpus_used'])
instance = self._fake_instance(vcpus=1)
# should not do anything until a claim is made:
self.tracker.update_usage(self.context, instance)
self.assertEqual(0, self.tracker.compute_node['vcpus_used'])
with self.tracker.instance_claim(self.context, instance, limits):
pass
self.assertEqual(1, self.tracker.compute_node['vcpus_used'])
# instance state can change without modifying vcpus in use:
instance['task_state'] = task_states.SCHEDULING
self.tracker.update_usage(self.context, instance)
self.assertEqual(1, self.tracker.compute_node['vcpus_used'])
instance = self._fake_instance(vcpus=10)
with self.tracker.instance_claim(self.context, instance, limits):
pass
self.assertEqual(11, self.tracker.compute_node['vcpus_used'])
instance['vm_state'] = vm_states.DELETED
self.tracker.update_usage(self.context, instance)
self.assertEqual(1, self.tracker.compute_node['vcpus_used'])
def test_skip_deleted_instances(self):
# ensure that the audit process skips instances that have vm_state
# DELETED, but the DB record is not yet deleted.
self._fake_instance(vm_state=vm_states.DELETED, host=self.host)
self.tracker.update_available_resource(self.context)
self.assertEqual(0, self.tracker.compute_node['memory_mb_used'])
self.assertEqual(0, self.tracker.compute_node['local_gb_used'])
class ResizeClaimTestCase(BaseTrackerTestCase):
def setUp(self):
super(ResizeClaimTestCase, self).setUp()
def _fake_migration_create(mig_self, ctxt):
self._migrations[mig_self.instance_uuid] = mig_self
mig_self.obj_reset_changes()
self.stubs.Set(migration_obj.Migration, 'create',
_fake_migration_create)
self.instance = self._fake_instance()
self.instance_type = self._fake_flavor_create()
def _fake_migration_create(self, context, values=None):
instance_uuid = str(uuid.uuid1())
mig_dict = test_migration.fake_db_migration()
mig_dict.update({
'id': 1,
'source_compute': 'host1',
'source_node': 'fakenode',
'dest_compute': 'host2',
'dest_node': 'fakenode',
'dest_host': '127.0.0.1',
'old_instance_type_id': 1,
'new_instance_type_id': 2,
'instance_uuid': instance_uuid,
'status': 'pre-migrating',
'updated_at': timeutils.utcnow()
})
if values:
mig_dict.update(values)
migration = migration_obj.Migration()
migration.update(mig_dict)
# This hits the stub in setUp()
        migration.create('fake')
        # Return the migration so call sites (e.g. test_revert_reserve_source)
        # that assign the result get the object rather than None.
        return migration
def test_claim(self):
self.tracker.resize_claim(self.context, self.instance,
self.instance_type, self.limits)
self._assert(FAKE_VIRT_MEMORY_MB + FAKE_VIRT_MEMORY_OVERHEAD,
'memory_mb_used')
self._assert(FAKE_VIRT_LOCAL_GB, 'local_gb_used')
self._assert(FAKE_VIRT_VCPUS, 'vcpus_used')
self.assertEqual(1, len(self.tracker.tracked_migrations))
def test_abort(self):
try:
with self.tracker.resize_claim(self.context, self.instance,
self.instance_type, self.limits):
raise test.TestingException("abort")
except test.TestingException:
pass
self._assert(0, 'memory_mb_used')
self._assert(0, 'local_gb_used')
self._assert(0, 'vcpus_used')
self.assertEqual(0, len(self.tracker.tracked_migrations))
def test_additive_claims(self):
limits = self._limits(FAKE_VIRT_MEMORY_MB * 2 +
FAKE_VIRT_MEMORY_OVERHEAD * 2,
FAKE_VIRT_LOCAL_GB * 2,
FAKE_VIRT_VCPUS * 2)
self.tracker.resize_claim(self.context, self.instance,
self.instance_type, limits)
instance2 = self._fake_instance()
self.tracker.resize_claim(self.context, instance2, self.instance_type,
limits)
self._assert(2 * FAKE_VIRT_MEMORY_MB + 2 * FAKE_VIRT_MEMORY_OVERHEAD,
'memory_mb_used')
self._assert(2 * FAKE_VIRT_LOCAL_GB, 'local_gb_used')
self._assert(2 * FAKE_VIRT_VCPUS, 'vcpus_used')
def test_claim_and_audit(self):
self.tracker.resize_claim(self.context, self.instance,
self.instance_type, self.limits)
self.tracker.update_available_resource(self.context)
self._assert(FAKE_VIRT_MEMORY_MB + FAKE_VIRT_MEMORY_OVERHEAD,
'memory_mb_used')
self._assert(FAKE_VIRT_LOCAL_GB, 'local_gb_used')
self._assert(FAKE_VIRT_VCPUS, 'vcpus_used')
def test_same_host(self):
self.limits['vcpu'] = 3
src_type = self._fake_flavor_create(id=2, memory_mb=1,
root_gb=1, ephemeral_gb=0, vcpus=1)
dest_type = self._fake_flavor_create(id=2, memory_mb=2,
root_gb=2, ephemeral_gb=1, vcpus=2)
# make an instance of src_type:
instance = self._fake_instance(memory_mb=1, root_gb=1, ephemeral_gb=0,
vcpus=1, instance_type_id=2)
instance['system_metadata'] = self._fake_instance_system_metadata(
dest_type)
self.tracker.instance_claim(self.context, instance, self.limits)
# resize to dest_type:
claim = self.tracker.resize_claim(self.context, instance,
dest_type, self.limits)
self._assert(3 + FAKE_VIRT_MEMORY_OVERHEAD * 2, 'memory_mb_used')
self._assert(4, 'local_gb_used')
self._assert(3, 'vcpus_used')
self.tracker.update_available_resource(self.context)
claim.abort()
# only the original instance should remain, not the migration:
self._assert(1 + FAKE_VIRT_MEMORY_OVERHEAD, 'memory_mb_used')
self._assert(1, 'local_gb_used')
self._assert(1, 'vcpus_used')
self.assertEqual(1, len(self.tracker.tracked_instances))
self.assertEqual(0, len(self.tracker.tracked_migrations))
def test_revert(self):
self.tracker.resize_claim(self.context, self.instance,
self.instance_type, self.limits)
self.tracker.drop_resize_claim(self.instance)
self.assertEqual(0, len(self.tracker.tracked_instances))
self.assertEqual(0, len(self.tracker.tracked_migrations))
self._assert(0, 'memory_mb_used')
self._assert(0, 'local_gb_used')
self._assert(0, 'vcpus_used')
def test_revert_reserve_source(self):
# if a revert has started at the API and audit runs on
# the source compute before the instance flips back to source,
# resources should still be held at the source based on the
# migration:
dest = "desthost"
dest_tracker = self._tracker(host=dest)
dest_tracker.update_available_resource(self.context)
self.instance = self._fake_instance(memory_mb=FAKE_VIRT_MEMORY_MB,
root_gb=FAKE_VIRT_LOCAL_GB, ephemeral_gb=0,
vcpus=FAKE_VIRT_VCPUS, instance_type_id=1)
values = {'source_compute': self.host, 'dest_compute': dest,
'old_instance_type_id': 1, 'new_instance_type_id': 1,
'status': 'post-migrating',
'instance_uuid': self.instance['uuid']}
migration = self._fake_migration_create(self.context, values)
# attach an instance to the destination host tracker:
dest_tracker.instance_claim(self.context, self.instance)
self._assert(FAKE_VIRT_MEMORY_MB + FAKE_VIRT_MEMORY_OVERHEAD,
'memory_mb_used', tracker=dest_tracker)
self._assert(FAKE_VIRT_LOCAL_GB, 'local_gb_used',
tracker=dest_tracker)
self._assert(FAKE_VIRT_VCPUS, 'vcpus_used',
tracker=dest_tracker)
# audit and recheck to confirm migration doesn't get double counted
# on dest:
dest_tracker.update_available_resource(self.context)
self._assert(FAKE_VIRT_MEMORY_MB + FAKE_VIRT_MEMORY_OVERHEAD,
'memory_mb_used', tracker=dest_tracker)
self._assert(FAKE_VIRT_LOCAL_GB, 'local_gb_used',
tracker=dest_tracker)
self._assert(FAKE_VIRT_VCPUS, 'vcpus_used',
tracker=dest_tracker)
# apply the migration to the source host tracker:
self.tracker.update_available_resource(self.context)
self._assert(FAKE_VIRT_MEMORY_MB + FAKE_VIRT_MEMORY_OVERHEAD,
'memory_mb_used')
self._assert(FAKE_VIRT_LOCAL_GB, 'local_gb_used')
self._assert(FAKE_VIRT_VCPUS, 'vcpus_used')
# flag the instance and migration as reverting and re-audit:
self.instance['vm_state'] = vm_states.RESIZED
self.instance['task_state'] = task_states.RESIZE_REVERTING
self.tracker.update_available_resource(self.context)
self._assert(FAKE_VIRT_MEMORY_MB + 1, 'memory_mb_used')
self._assert(FAKE_VIRT_LOCAL_GB, 'local_gb_used')
self._assert(FAKE_VIRT_VCPUS, 'vcpus_used')
def test_resize_filter(self):
instance = self._fake_instance(vm_state=vm_states.ACTIVE,
task_state=task_states.SUSPENDING)
self.assertFalse(self.tracker._instance_in_resize_state(instance))
instance = self._fake_instance(vm_state=vm_states.RESIZED,
task_state=task_states.SUSPENDING)
self.assertTrue(self.tracker._instance_in_resize_state(instance))
instance = self._fake_instance(vm_state=vm_states.ACTIVE,
task_state=task_states.RESIZE_MIGRATING)
self.assertTrue(self.tracker._instance_in_resize_state(instance))
def test_dupe_filter(self):
self._fake_flavor_create(id=2, memory_mb=1, root_gb=1,
ephemeral_gb=1, vcpus=1)
instance = self._fake_instance(host=self.host)
values = {'source_compute': self.host, 'dest_compute': self.host,
'instance_uuid': instance['uuid'], 'new_instance_type_id': 2}
self._fake_migration_create(self.context, values)
self._fake_migration_create(self.context, values)
self.tracker.update_available_resource(self.context)
self.assertEqual(1, len(self.tracker.tracked_migrations))
def test_set_instance_host_and_node(self):
instance = self._fake_instance()
self.assertEqual(None, instance['host'])
self.assertEqual(None, instance['launched_on'])
self.assertEqual(None, instance['node'])
claim = self.tracker.instance_claim(self.context, instance)
self.assertNotEqual(0, claim.memory_mb)
self.assertEqual('fakehost', instance['host'])
self.assertEqual('fakehost', instance['launched_on'])
self.assertEqual('fakenode', instance['node'])
class NoInstanceTypesInSysMetadata(ResizeClaimTestCase):
"""Make sure we handle the case where the following are true:
1) Compute node C gets upgraded to code that looks for instance types in
system metadata. AND
2) C already has instances in the process of migrating that do not have
stashed instance types.
bug 1164110
"""
def setUp(self):
super(NoInstanceTypesInSysMetadata, self).setUp()
self.instance = self._fake_instance(stash=False)
class OrphanTestCase(BaseTrackerTestCase):
def _driver(self):
class OrphanVirtDriver(FakeVirtDriver):
def get_per_instance_usage(self):
return {
'1-2-3-4-5': {'memory_mb': 4, 'uuid': '1-2-3-4-5'},
'2-3-4-5-6': {'memory_mb': 4, 'uuid': '2-3-4-5-6'},
}
return OrphanVirtDriver()
def test_usage(self):
# 2 instances, 4 mb each, plus overhead
self.assertEqual(8 + 2 * FAKE_VIRT_MEMORY_OVERHEAD,
self.tracker.compute_node['memory_mb_used'])
def test_find(self):
# create one legit instance and verify the 2 orphans remain
self._fake_instance()
orphans = self.tracker._find_orphaned_instances()
self.assertEqual(2, len(orphans))
|
bclau/nova
|
nova/tests/compute/test_resource_tracker.py
|
Python
|
apache-2.0
| 38,685
|
[
"exciting"
] |
33fd2e8e6e79e59e165ff991cd9001d3a704b412d9489b5276a19bc3ea8c7ce4
|
import numpy as np
from scipy.stats import kstest, norm
class LinearRegression(object):
'''Ordinary Least Squares...'''
def __init__(self, X, y, fit_intercept=True):
self.X = X
self.y = y
self.fit_intercept = fit_intercept
self._coeff = None
def fit(self, check_residuals=True, threshold=0.05):
if self.fit_intercept:
self.X = self._add_intercept(self.X)
self._solve_ols()
if check_residuals:
            print('checking residuals...')
            if self._check_residuals(threshold):
                print('...residuals are Gaussian distributed at %3.2f...' % threshold)
            else:
                print('...residuals are not Gaussian distributed...')
def _add_intercept(self, X):
'''add a column of 1s in the X matrix...'''
return np.insert(X, 0, np.ones_like(X[:, 0]), axis=1)
def _solve_ols(self):
'''matrix solution for OLS...'''
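        # Normal equations: coeff = (X^T X)^{-1} X^T y. The explicit inverse
        # mirrors the textbook derivation; np.linalg.solve or np.linalg.lstsq
        # would be the numerically safer choice in practice.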
XT = self.X.transpose()
XTX = np.dot(XT, self.X)
XTX_i = np.linalg.inv(XTX)
self._coeff = np.dot(np.dot(XTX_i, XT), self.y)
def _calculate_residuals(self):
return self.y - np.dot(self.X, self._coeff)
def _check_residuals(self, threshold=0.05):
'''check residuals using ks_test for normality...'''
residuals = self._calculate_residuals()
mu, std = np.mean(residuals), np.std(residuals)
def g_cdf(x):
return norm.cdf(x, mu, std)
# standard 2-sided ks test...
t_stat, p_value = kstest(residuals, g_cdf)
# returns True for gaussian noise
return p_value > threshold
def calc_r_squared(self, adjusted=True):
'''returns the standard R2 value...'''
n_obs, n_var = self.X.shape
y_ = np.mean(self.y)
p = np.dot(self.X, self._coeff)
ss_t = np.sum(np.square(self.y - y_))
ss_e = np.sum(np.square(self.y - p))
r2 = 1.0 - ss_e/ss_t
        if adjusted:
            # float() guards against integer division under Python 2; n_var
            # counts the intercept column added by fit(), so the denominator
            # is n - k - 1 for k original features.
            return 1.0 - (1.0 - r2) * (float(n_obs - 1) / (n_obs - n_var))
        return r2
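# Example usage: a minimal sketch with synthetic data (the shapes, seed, and
# coefficients below are illustrative assumptions, not part of the original
# module):
if __name__ == '__main__':
    rng = np.random.RandomState(0)
    X = rng.rand(100, 2)                      # 100 observations, 2 features
    y = 3.0 + np.dot(X, [1.5, -2.0]) + rng.normal(0.0, 0.1, size=100)
    model = LinearRegression(X, y, fit_intercept=True)
    model.fit(check_residuals=True, threshold=0.05)
    print(model._coeff)                       # approx. [3.0, 1.5, -2.0]
    print(model.calc_r_squared(adjusted=True))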
|
jeffcarter-github/MachineLearningLibrary
|
MachineLearningLibrary/Regression/LinearRegression.py
|
Python
|
mit
| 2,075
|
[
"Gaussian"
] |
eb3664046c168a2613eaf68ff933921a6d2b731c1d166a4c6b008272e93b08da
|
# coding: utf-8
"""
Vericred API
Vericred's API allows you to search for Health Plans that a specific doctor
accepts.
## Getting Started
Visit our [Developer Portal](https://developers.vericred.com) to
create an account.
Once you have created an account, you can create one Application for
Production and another for our Sandbox (select the appropriate Plan when
you create the Application).
## SDKs
Our API follows standard REST conventions, so you can use any HTTP client
to integrate with us. You will likely find it easier to use one of our
[autogenerated SDKs](https://github.com/vericred/?query=vericred-),
which we make available for several common programming languages.
## Authentication
To authenticate, pass the API Key you created in the Developer Portal as
a `Vericred-Api-Key` header.
`curl -H 'Vericred-Api-Key: YOUR_KEY' "https://api.vericred.com/providers?search_term=Foo&zip_code=11215"`
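The equivalent call from Python might look like this (a sketch only; the
`requests` dependency and the placeholder key are assumptions, not part of
this generated file):
```
import requests
response = requests.get('https://api.vericred.com/providers',
                        params={'search_term': 'Foo', 'zip_code': '11215'},
                        headers={'Vericred-Api-Key': 'YOUR_KEY'})
print(response.json())
```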
## Versioning
Vericred's API defaults to the latest version. However, if you need a specific
version, you can request it with an `Accept-Version` header.
The current version is `v3`. Previous versions are `v1` and `v2`.
`curl -H 'Vericred-Api-Key: YOUR_KEY' -H 'Accept-Version: v2' "https://api.vericred.com/providers?search_term=Foo&zip_code=11215"`
## Pagination
Endpoints that accept `page` and `per_page` parameters are paginated. They expose
four additional fields that contain data about your position in the response,
namely `Total`, `Per-Page`, `Link`, and `Page` as described in [RFC-5988](https://tools.ietf.org/html/rfc5988).
For example, to display 5 results per page and view the second page of a
`GET` to `/networks`, your final request would be `GET /networks?....page=2&per_page=5`.
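Reading a page and its pagination metadata in Python might look like this
(a sketch; `requests` and the placeholder key are assumptions):
```
response = requests.get('https://api.vericred.com/networks',
                        params={'page': 2, 'per_page': 5},
                        headers={'Vericred-Api-Key': 'YOUR_KEY'})
total = int(response.headers['Total'])        # total number of records
per_page = int(response.headers['Per-Page'])  # records per page
```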
## Sideloading
When we return multiple levels of an object graph (e.g. `Provider`s and their `State`s),
we sideload the associated data. In this example, we would provide an Array of
`State`s and a `state_id` for each provider. This is done primarily to reduce the
payload size, since many of the `Provider`s will share a `State`.
```
{
providers: [{ id: 1, state_id: 1}, { id: 2, state_id: 1 }],
states: [{ id: 1, code: 'NY' }]
}
```
If you need the second level of the object graph, you can just match the
corresponding id.
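A short sketch of that client-side join (here `payload` stands in for a
parsed response body, which is an assumption for illustration):
```
states_by_id = {s['id']: s for s in payload['states']}
for provider in payload['providers']:
    provider['state'] = states_by_id[provider['state_id']]
```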
## Selecting specific data
All endpoints allow you to specify which fields you would like to return.
This allows you to limit the response to contain only the data you need.
For example, let's take a request that returns the following JSON by default
```
{
provider: {
id: 1,
name: 'John',
phone: '1234567890',
field_we_dont_care_about: 'value_we_dont_care_about'
},
states: [{
id: 1,
name: 'New York',
code: 'NY',
field_we_dont_care_about: 'value_we_dont_care_about'
}]
}
```
To limit our results to only return the fields we care about, we specify the
`select` query string parameter for the corresponding fields in the JSON
document.
In this case, we want to select `name` and `phone` from the `provider` key,
so we would add the parameters `select=provider.name,provider.phone`.
We also want the `name` and `code` from the `states` key, so we would
add the parameters `select=states.name,states.code`. The id field of
each document is always returned whether or not it is requested.
Our final request would be `GET /providers/12345?select=provider.name,provider.phone,states.name,states.code`
The response would be
```
{
provider: {
id: 1,
name: 'John',
phone: '1234567890'
},
states: [{
id: 1,
name: 'New York',
code: 'NY'
}]
}
```
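Building that request in Python might look like this (a sketch; `requests`
and the placeholder key are assumptions):
```
fields = ['provider.name', 'provider.phone', 'states.name', 'states.code']
response = requests.get('https://api.vericred.com/providers/12345',
                        params={'select': ','.join(fields)},
                        headers={'Vericred-Api-Key': 'YOUR_KEY'})
```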
## Benefits summary format
Benefit cost-share strings are formatted to capture:
* Network tiers
* Compound or conditional cost-share
* Limits on the cost-share
* Benefit-specific maximum out-of-pocket costs
**Example #1**
As an example, we would represent [this Summary of Benefits & Coverage](https://s3.amazonaws.com/vericred-data/SBC/2017/33602TX0780032.pdf) as:
* **Hospital stay facility fees**:
- Network Provider: `$400 copay/admit plus 20% coinsurance`
- Out-of-Network Provider: `$1,500 copay/admit plus 50% coinsurance`
- Vericred's format for this benefit: `In-Network: $400 before deductible then 20% after deductible / Out-of-Network: $1,500 before deductible then 50% after deductible`
* **Rehabilitation services:**
- Network Provider: `20% coinsurance`
- Out-of-Network Provider: `50% coinsurance`
- Limitations & Exceptions: `35 visit maximum per benefit period combined with Chiropractic care.`
- Vericred's format for this benefit: `In-Network: 20% after deductible / Out-of-Network: 50% after deductible | limit: 35 visit(s) per Benefit Period`
**Example #2**
In [this other Summary of Benefits & Coverage](https://s3.amazonaws.com/vericred-data/SBC/2017/40733CA0110568.pdf), the **specialty_drugs** cost-share has a maximum out-of-pocket for in-network pharmacies.
* **Specialty drugs:**
- Network Provider: `40% coinsurance up to a $500 maximum for up to a 30 day supply`
- Out-of-Network Provider `Not covered`
- Vericred's format for this benefit: `In-Network: 40% after deductible, up to $500 per script / Out-of-Network: 100%`
**BNF**
Here's a description of the benefits summary string, represented as a context-free grammar:
```
root ::= coverage
coverage ::= (simple_coverage | tiered_coverage) (space pipe space coverage_modifier)?
tiered_coverage ::= tier (space slash space tier)*
tier ::= tier_name colon space (tier_coverage | not_applicable)
tier_coverage ::= simple_coverage (space (then | or | and) space simple_coverage)* tier_limitation?
simple_coverage ::= (pre_coverage_limitation space)? coverage_amount (space post_coverage_limitation)? (comma? space coverage_condition)?
coverage_modifier ::= limit_condition colon space (((simple_coverage | simple_limitation) (semicolon space see_carrier_documentation)?) | see_carrier_documentation | waived_if_admitted | shared_across_tiers)
waived_if_admitted ::= ("copay" space)? "waived if admitted"
simple_limitation ::= pre_coverage_limitation space "copay applies"
tier_name ::= "In-Network-Tier-2" | "Out-of-Network" | "In-Network"
limit_condition ::= "limit" | "condition"
tier_limitation ::= comma space "up to" space (currency | (integer space time_unit plural?)) (space post_coverage_limitation)?
coverage_amount ::= currency | unlimited | included | unknown | percentage | (digits space (treatment_unit | time_unit) plural?)
pre_coverage_limitation ::= first space digits space time_unit plural?
post_coverage_limitation ::= (((then space currency) | "per condition") space)? "per" space (treatment_unit | (integer space time_unit) | time_unit) plural?
coverage_condition ::= ("before deductible" | "after deductible" | "penalty" | allowance | "in-state" | "out-of-state") (space allowance)?
allowance ::= upto_allowance | after_allowance
upto_allowance ::= "up to" space (currency space)? "allowance"
after_allowance ::= "after" space (currency space)? "allowance"
see_carrier_documentation ::= "see carrier documentation for more information"
shared_across_tiers ::= "shared across all tiers"
unknown ::= "unknown"
unlimited ::= /[uU]nlimited/
included ::= /[iI]ncluded in [mM]edical/
time_unit ::= /[hH]our/ | (((/[cC]alendar/ | /[cC]ontract/) space)? /[yY]ear/) | /[mM]onth/ | /[dD]ay/ | /[wW]eek/ | /[vV]isit/ | /[lL]ifetime/ | ((((/[bB]enefit/ plural?) | /[eE]ligibility/) space)? /[pP]eriod/)
treatment_unit ::= /[pP]erson/ | /[gG]roup/ | /[cC]ondition/ | /[sS]cript/ | /[vV]isit/ | /[eE]xam/ | /[iI]tem/ | /[sS]tay/ | /[tT]reatment/ | /[aA]dmission/ | /[eE]pisode/
comma ::= ","
colon ::= ":"
semicolon ::= ";"
pipe ::= "|"
slash ::= "/"
plural ::= "(s)" | "s"
then ::= "then" | ("," space) | space
or ::= "or"
and ::= "and"
not_applicable ::= "Not Applicable" | "N/A" | "NA"
first ::= "first"
currency ::= "$" number
percentage ::= number "%"
number ::= float | integer
float ::= digits "." digits
integer ::= /[0-9]/+ (comma_int | under_int)*
comma_int ::= ("," /[0-9]/*3) !"_"
under_int ::= ("_" /[0-9]/*3) !","
digits ::= /[0-9]/+ ("_" /[0-9]/+)*
space ::= /[ \t]/+
```
OpenAPI spec version: 1.0.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from __future__ import absolute_import
import os
import sys
import unittest
import vericred_client
from vericred_client.rest import ApiException
from vericred_client.models.state_network_size_response import StateNetworkSizeResponse
class TestStateNetworkSizeResponse(unittest.TestCase):
""" StateNetworkSizeResponse unit test stubs """
def setUp(self):
pass
def tearDown(self):
pass
def testStateNetworkSizeResponse(self):
"""
Test StateNetworkSizeResponse
"""
model = vericred_client.models.state_network_size_response.StateNetworkSizeResponse()
if __name__ == '__main__':
unittest.main()
|
vericred/vericred-python
|
test/test_state_network_size_response.py
|
Python
|
apache-2.0
| 10,105
|
[
"VisIt"
] |
7e8487cf0614dc9f892603dc5e30ddaf251f3eb5db94f86353d6665813636e3b
|
# The contents of this file are subject to the Common Public Attribution
# License Version 1.0. (the "License"); you may not use this file except in
# compliance with the License. You may obtain a copy of the License at
# http://code.reddit.com/LICENSE. The License is based on the Mozilla Public
# License Version 1.1, but Sections 14 and 15 have been added to cover use of
# software over a computer network and provide for limited attribution for the
# Original Developer. In addition, Exhibit A has been modified to be consistent
# with Exhibit B.
#
# Software distributed under the License is distributed on an "AS IS" basis,
# WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for
# the specific language governing rights and limitations under the License.
#
# The Original Code is reddit.
#
# The Original Developer is the Initial Developer. The Initial Developer of
# the Original Code is reddit Inc.
#
# All portions of the code written by reddit are Copyright (c) 2006-2013 reddit
# Inc. All Rights Reserved.
###############################################################################
from pylons.i18n import _, ungettext
from pylons.controllers.util import redirect_to
from r2.controllers.reddit_base import (
base_listing,
pagecache_policy,
PAGECACHE_POLICY,
paginated_listing,
disable_subreddit_css,
RedditController,
)
from r2 import config
from r2.models import *
from r2.config.extensions import is_api
from r2.lib import recommender
from r2.lib.pages import *
from r2.lib.pages.things import hot_links_by_url_listing
from r2.lib.pages import trafficpages
from r2.lib.menus import *
from r2.lib.utils import to36, sanitize_url, check_cheating, title_to_url
from r2.lib.utils import query_string, UrlParser, url_links_builder
from r2.lib.template_helpers import get_domain
from r2.lib.filters import unsafe, _force_unicode, _force_utf8
from r2.lib.emailer import Email
from r2.lib.db.operators import desc
from r2.lib.db import queries
from r2.lib.db.tdb_cassandra import MultiColumnQuery
from r2.lib.strings import strings
from r2.lib.search import (SearchQuery, SubredditSearchQuery, SearchException,
InvalidQuery)
from r2.lib.validator import *
from r2.lib import jsontemplates
from r2.lib import sup
import r2.lib.db.thing as thing
from r2.lib.errors import errors
from listingcontroller import ListingController
from oauth2 import require_oauth2_scope
from api_docs import api_doc, api_section
from pylons import c, request, response
from r2.models.token import EmailVerificationToken
from r2.controllers.ipn import generate_blob, validate_blob, GoldException
from operator import attrgetter
import string
import random as rand
import re, socket
from hashlib import md5
import time as time_module
from urllib import quote_plus
class FrontController(RedditController):
allow_stylesheets = True
@validate(article=VLink('article'),
comment=VCommentID('comment'))
def GET_oldinfo(self, article, type, dest, rest=None, comment=''):
"""Legacy: supporting permalink pages from '06,
and non-search-engine-friendly links"""
if not (dest in ('comments','related','details')):
dest = 'comments'
if type == 'ancient':
#this could go in config, but it should never change
max_link_id = 10000000
new_id = max_link_id - int(article._id)
return self.redirect('/info/' + to36(new_id) + '/' + rest)
if type == 'old':
if not article.subreddit_slow.can_view(c.user):
self.abort403()
new_url = "/%s/%s/%s" % \
(dest, article._id36,
quote_plus(title_to_url(article.title).encode('utf-8')))
if not c.default_sr:
new_url = "/r/%s%s" % (c.site.name, new_url)
if comment:
new_url = new_url + "/%s" % comment._id36
if c.extension:
new_url = new_url + "/.%s" % c.extension
new_url = new_url + query_string(request.GET)
# redirect should be smarter and handle extensions, etc.
return self.redirect(new_url, code=301)
@require_oauth2_scope("read")
@api_doc(api_section.listings, uses_site=True)
def GET_random(self):
"""The Serendipity button"""
sort = rand.choice(('new','hot'))
links = c.site.get_links(sort, 'all')
if isinstance(links, thing.Query):
links._limit = g.num_serendipity
links = [x._fullname for x in links]
else:
links = list(links)[:g.num_serendipity]
rand.shuffle(links)
builder = IDBuilder(links, skip=True,
keep_fn=lambda x: x.fresh,
num=1)
links = builder.get_items()[0]
if links:
l = links[0]
return self.redirect(add_sr("/tb/" + l._id36))
else:
return self.redirect(add_sr('/'))
@disable_subreddit_css()
@validate(VAdmin(),
thing=VByName('article'),
oldid36=nop('article'),
after=nop('after'),
before=nop('before'),
count=VCount('count'))
def GET_details(self, thing, oldid36, after, before, count):
"""The (now deprecated) details page. Content on this page
has been subsubmed by the presence of the LinkInfoBar on the
rightbox, so it is only useful for Admin-only wizardry."""
if not thing:
try:
link = Link._byID36(oldid36)
return self.redirect('/details/' + link._fullname)
except (NotFound, ValueError):
abort(404)
kw = {'count': count}
if before:
kw['after'] = before
kw['reverse'] = True
else:
kw['after'] = after
kw['reverse'] = False
return DetailsPage(thing=thing, expand_children=False, **kw).render()
@validate(VUser(),
personalized=VBoolean('pers', default=False),
discovery=VBoolean('disc', default=False),
rising=VBoolean('ris', default=False),
over18=VBoolean('over18', default=False))
def GET_explore(self, personalized, discovery, rising, over18):
# default when no params are given is to show everything
if not any([personalized, discovery, rising]):
personalized = discovery = rising = True
settings = recommender.ExploreSettings(personalized=personalized,
discovery=discovery,
rising=rising,
over18=over18)
recs = recommender.get_recommended_content_for_user(c.user,
record_views=True,
settings=settings)
content = ExploreItemListing(recs, settings)
return BoringPage(_("explore"),
show_sidebar=True,
show_chooser=True,
page_classes=['explore-page'],
content=content).render()
@validate(article=VLink('article'))
def GET_shirt(self, article):
if not can_view_link_comments(article):
abort(403, 'forbidden')
return self.abort404()
def _comment_visits(self, article, user, new_visit=None):
timer = g.stats.get_timer("gold.comment_visits")
timer.start()
hc_key = "comment_visits-%s-%s" % (user.name, article._id36)
old_visits = g.hardcache.get(hc_key, [])
append = False
if new_visit is None:
pass
elif len(old_visits) == 0:
append = True
else:
last_visit = max(old_visits)
time_since_last = new_visit - last_visit
if (time_since_last.days > 0
or time_since_last.seconds > g.comment_visits_period):
append = True
else:
# They were just here a few seconds ago; consider that
# the same "visit" as right now
old_visits.pop()
if append:
copy = list(old_visits) # make a copy
copy.append(new_visit)
if len(copy) > 10:
copy.pop(0)
g.hardcache.set(hc_key, copy, 86400 * 2)
timer.stop()
return old_visits
@validate(article=VLink('article'),
comment=VCommentID('comment'),
context=VInt('context', min=0, max=8),
sort=VMenu('controller', CommentSortMenu),
limit=VInt('limit'),
depth=VInt('depth'))
def POST_comments(self, article, comment, context, sort, limit, depth):
# VMenu validator will save the value of sort before we reach this
# point. Now just redirect to GET mode.
return self.redirect(request.fullpath + query_string(dict(sort=sort)))
@require_oauth2_scope("read")
@validate(article=VLink('article',
docs={"article": "ID36 of a link"}),
comment=VCommentID('comment',
docs={"comment": "(optional) ID36 of a comment"}),
context=VInt('context', min=0, max=8),
sort=VMenu('controller', CommentSortMenu),
limit=VInt('limit',
docs={"limit": "(optional) an integer"}),
depth=VInt('depth',
docs={"depth": "(optional) an integer"}),
)
@api_doc(api_section.listings,
uri='/comments/{article}',
uses_site=True,
extensions=['json', 'xml'])
def GET_comments(self, article, comment, context, sort, limit, depth):
"""Get the comment tree for a given Link `article`.
If supplied, `comment` is the ID36 of a comment in the comment tree for
`article`. This comment will be the (highlighted) focal point of the
returned view and `context` will be the number of parents shown.
`depth` is the maximum depth of subtrees in the thread.
`limit` is the maximum number of comments to return.
See also: [/api/morechildren](#POST_api_morechildren) and
[/api/comment](#POST_api_comment).
"""
if comment and comment.link_id != article._id:
return self.abort404()
sr = Subreddit._byID(article.sr_id, True)
if sr.name == g.takedown_sr:
request.environ['REDDIT_TAKEDOWN'] = article._fullname
return self.abort404()
if not c.default_sr and c.site._id != sr._id:
return self.abort404()
if not can_view_link_comments(article):
abort(403, 'forbidden')
#check for 304
self.check_modified(article, 'comments')
# If there is a focal comment, communicate down to
# comment_skeleton.html who that will be. Also, skip
# comment_visits check
previous_visits = None
if comment:
c.focal_comment = comment._id36
elif (c.user_is_loggedin and c.user.gold and
c.user.pref_highlight_new_comments):
previous_visits = self._comment_visits(article, c.user, c.start_time)
# check if we just came from the submit page
infotext = None
if request.GET.get('already_submitted'):
infotext = strings.already_submitted % article.resubmit_link()
check_cheating('comments')
if not c.user.pref_num_comments:
num = g.num_comments
elif c.user.gold:
num = min(c.user.pref_num_comments, g.max_comments_gold)
else:
num = min(c.user.pref_num_comments, g.max_comments)
kw = {}
# allow depth to be reset (I suspect I'll turn the VInt into a
# validator on my next pass of .compact)
if depth is not None and 0 < depth < MAX_RECURSION:
kw['max_depth'] = depth
elif c.render_style == "compact":
kw['max_depth'] = 5
displayPane = PaneStack()
# allow the user's total count preferences to be overwritten
# (think of .embed as the use case together with depth=1)
if limit and limit > 0:
num = limit
if c.user_is_loggedin and c.user.gold:
if num > g.max_comments_gold:
displayPane.append(InfoBar(message =
strings.over_comment_limit_gold
% max(0, g.max_comments_gold)))
num = g.max_comments_gold
elif num > g.max_comments:
if limit:
displayPane.append(InfoBar(message =
strings.over_comment_limit
% dict(max=max(0, g.max_comments),
goldmax=max(0,
g.max_comments_gold))))
num = g.max_comments
# if permalink page, add that message first to the content
if comment:
displayPane.append(PermalinkMessage(article.make_permalink_slow()))
displayPane.append(LinkCommentSep())
# insert reply box only for logged in user
if c.user_is_loggedin and can_comment_link(article) and not is_api():
#no comment box for permalinks
display = False
if not comment:
age = c.start_time - article._date
if article.promoted or age.days < g.REPLY_AGE_LIMIT:
display = True
displayPane.append(UserText(item=article, creating=True,
post_form='comment',
display=display,
cloneable=True))
if previous_visits:
displayPane.append(CommentVisitsBox(previous_visits))
# Used in later "more comments" renderings
pv_hex = md5(repr(previous_visits)).hexdigest()
g.cache.set(pv_hex, previous_visits, time=g.comment_visits_period)
c.previous_visits_hex = pv_hex
# Used in template_helpers
c.previous_visits = previous_visits
if article.contest_mode:
sort = "random"
# finally add the comment listing
displayPane.append(CommentPane(article, CommentSortMenu.operator(sort),
comment, context, num, **kw))
subtitle_buttons = []
if c.focal_comment or context is not None:
subtitle = None
elif article.num_comments == 0:
subtitle = _("no comments (yet)")
elif article.num_comments <= num:
subtitle = _("all %d comments") % article.num_comments
else:
subtitle = _("top %d comments") % num
if g.max_comments > num:
self._add_show_comments_link(subtitle_buttons, article, num,
g.max_comments, gold=False)
if (c.user_is_loggedin and c.user.gold
and article.num_comments > g.max_comments):
self._add_show_comments_link(subtitle_buttons, article, num,
g.max_comments_gold, gold=True)
res = LinkInfoPage(link=article, comment=comment,
content=displayPane,
page_classes=['comments-page'],
subtitle=subtitle,
subtitle_buttons=subtitle_buttons,
nav_menus=[CommentSortMenu(default=sort),
LinkCommentsSettings(article)],
infotext=infotext).render()
return res
def _add_show_comments_link(self, array, article, num, max_comm, gold=False):
if num == max_comm:
return
elif article.num_comments <= max_comm:
link_text = _("show all %d") % article.num_comments
else:
link_text = _("show %d") % max_comm
limit_param = "?limit=%d" % max_comm
if gold:
link_class = "gold"
else:
link_class = ""
more_link = article.make_permalink_slow() + limit_param
array.append( (link_text, more_link, link_class) )
@validate(VUser(),
name=nop('name'))
def GET_newreddit(self, name):
"""Create a subreddit form"""
title = _('create a subreddit')
content=CreateSubreddit(name=name or '')
res = FormPage(_("create a subreddit"),
content=content,
).render()
return res
@pagecache_policy(PAGECACHE_POLICY.LOGGEDIN_AND_LOGGEDOUT)
@require_oauth2_scope("modconfig")
@api_doc(api_section.moderation, uses_site=True)
def GET_stylesheet(self):
"""Get the subreddit's current stylesheet.
This will return either the content of or a redirect to the subreddit's
current stylesheet if one exists.
See also: [/api/subreddit_stylesheet](#POST_api_subreddit_stylesheet).
"""
if g.css_killswitch:
self.abort404()
# de-stale the subreddit object so we don't poison nginx's cache
if not isinstance(c.site, FakeSubreddit):
c.site = Subreddit._byID(c.site._id, data=True, stale=False)
if c.site.stylesheet_url_http:
url = Reddit.get_subreddit_stylesheet_url()
if url:
redirect_to(url)
else:
self.abort404()
if not c.secure:
stylesheet_contents = c.site.stylesheet_contents
else:
stylesheet_contents = c.site.stylesheet_contents_secure
if stylesheet_contents:
c.allow_loggedin_cache = True
if c.site.stylesheet_modified:
self.abort_if_not_modified(
c.site.stylesheet_modified,
private=False,
max_age=timedelta(days=7),
must_revalidate=False,
)
response.content_type = 'text/css'
if c.site.type == 'private':
response.headers['X-Private-Subreddit'] = 'private'
return stylesheet_contents
else:
return self.abort404()
def _make_moderationlog(self, srs, num, after, reverse, count, mod=None, action=None):
query = Subreddit.get_modactions(srs, mod=mod, action=action)
builder = QueryBuilder(query, num=num, after=after,
count=count,
reverse=reverse,
wrap=default_thing_wrapper())
listing = ModActionListing(builder)
pane = listing.listing()
return pane
modname_splitter = re.compile('[ ,]+')
@require_oauth2_scope("modlog")
@disable_subreddit_css()
@paginated_listing(max_page_size=500, backend='cassandra')
@validate(
mod=nop('mod', docs={"mod": "(optional) a moderator filter"}),
action=VOneOf('type', ModAction.actions),
)
@api_doc(api_section.moderation, uses_site=True,
uri="/about/log", extensions=["json", "xml"])
def GET_moderationlog(self, num, after, reverse, count, mod, action):
"""Get a list of recent moderation actions.
Moderator actions taken within a subreddit are logged. This listing is
a view of that log with various filters to aid in analyzing the
information.
The optional `mod` parameter can be a comma-delimited list of moderator
names to restrict the results to, or the string `a` to restrict the
results to admin actions taken within the subreddit.
The `type` parameter is optional and if sent limits the log entries
returned to only those of the type specified.
"""
if not c.user_is_loggedin or not (c.user_is_admin or
c.site.is_moderator(c.user)):
return self.abort404()
if mod:
if mod == 'a':
modnames = g.admins
else:
modnames = self.modname_splitter.split(mod)
mod = []
for name in modnames:
try:
mod.append(Account._by_name(name, allow_deleted=True))
except NotFound:
continue
mod = mod or None
if isinstance(c.site, (MultiReddit, ModSR)):
srs = Subreddit._byID(c.site.sr_ids, return_dict=False)
# grab all moderators
mod_ids = set(Subreddit.get_all_mod_ids(srs))
mods = Account._byID(mod_ids, data=True)
pane = self._make_moderationlog(srs, num, after, reverse, count,
mod=mod, action=action)
elif isinstance(c.site, FakeSubreddit):
return self.abort404()
else:
mod_ids = c.site.moderators
mods = Account._byID(mod_ids, data=True)
pane = self._make_moderationlog(c.site, num, after, reverse, count,
mod=mod, action=action)
panes = PaneStack()
panes.append(pane)
action_buttons = [NavButton(_('all'), None, opt='type', css_class='primary')]
for a in ModAction.actions:
action_buttons.append(NavButton(ModAction._menu[a], a, opt='type'))
mod_buttons = [NavButton(_('all'), None, opt='mod', css_class='primary')]
for mod_id in mod_ids:
mod = mods[mod_id]
mod_buttons.append(NavButton(mod.name, mod.name, opt='mod'))
mod_buttons.append(NavButton('admins*', 'a', opt='mod'))
base_path = request.path
menus = [NavMenu(action_buttons, base_path=base_path,
title=_('filter by action'), type='lightdrop', css_class='modaction-drop'),
NavMenu(mod_buttons, base_path=base_path,
title=_('filter by moderator'), type='lightdrop')]
extension_handling = "private" if c.user.pref_private_feeds else False
return EditReddit(content=panes,
nav_menus=menus,
location="log",
extension_handling=extension_handling).render()
def _make_spamlisting(self, location, only, num, after, reverse, count):
include_links, include_comments = True, True
if only == 'links':
include_comments = False
elif only == 'comments':
include_links = False
if location == 'reports':
query = c.site.get_reported(include_links=include_links,
include_comments=include_comments)
elif location == 'spam':
query = c.site.get_spam(include_links=include_links,
include_comments=include_comments)
elif location == 'modqueue':
query = c.site.get_modqueue(include_links=include_links,
include_comments=include_comments)
elif location == 'unmoderated':
query = c.site.get_unmoderated()
else:
raise ValueError
if isinstance(query, thing.Query):
builder_cls = QueryBuilder
elif isinstance (query, list):
builder_cls = QueryBuilder
else:
builder_cls = IDBuilder
def keep_fn(x):
# no need to bother mods with banned users, or deleted content
if x._deleted:
return False
if getattr(x,'author',None) == c.user and c.user._spam:
return False
if location == "reports":
return x.reported > 0 and not x._spam
elif location == "spam":
return x._spam
elif location == "modqueue":
if x.reported > 0 and not x._spam:
return True # reported but not banned
if x.author._spam and x.subreddit.exclude_banned_modqueue:
# banned user, don't show if subreddit pref excludes
return False
verdict = getattr(x, "verdict", None)
if verdict is None:
return True # anything without a verdict
if x._spam and verdict != 'mod-removed':
return True # spam, unless banned by a moderator
return False
elif location == "unmoderated":
# banned user, don't show if subreddit pref excludes
if x.author._spam and x.subreddit.exclude_banned_modqueue:
return False
return not getattr(x, 'verdict', None)
else:
raise ValueError
builder = builder_cls(query,
skip=True,
num=num, after=after,
keep_fn=keep_fn,
count=count, reverse=reverse,
wrap=ListingController.builder_wrapper,
spam_listing=True)
listing = LinkListing(builder)
pane = listing.listing()
# Indicate that the comment tree wasn't built for comments
for i in pane.things:
if hasattr(i, 'body'):
i.child = None
return pane
def _edit_normal_reddit(self, location, created):
if (location == 'edit' and
c.user_is_loggedin and
(c.user_is_admin or c.site.is_moderator_with_perms(c.user, 'config'))):
pane = PaneStack()
if created == 'true':
pane.append(InfoBar(message=strings.sr_created))
c.allow_styles = True
c.site = Subreddit._byID(c.site._id, data=True, stale=False)
pane.append(CreateSubreddit(site=c.site))
elif (location == 'stylesheet'
and c.site.can_change_stylesheet(c.user)
and not g.css_killswitch):
if hasattr(c.site,'stylesheet_contents_user') and c.site.stylesheet_contents_user:
stylesheet_contents = c.site.stylesheet_contents_user
elif hasattr(c.site,'stylesheet_contents') and c.site.stylesheet_contents:
stylesheet_contents = c.site.stylesheet_contents
else:
stylesheet_contents = ''
c.allow_styles = True
pane = SubredditStylesheet(site=c.site,
stylesheet_contents=stylesheet_contents)
elif (location == 'stylesheet'
and c.site.can_view(c.user)
and not g.css_killswitch):
stylesheet = (c.site.stylesheet_contents_user or
c.site.stylesheet_contents)
pane = SubredditStylesheetSource(stylesheet_contents=stylesheet)
elif (location == 'traffic' and
(c.site.public_traffic or
(c.user_is_loggedin and
(c.site.is_moderator(c.user) or c.user.employee)))):
pane = trafficpages.SubredditTraffic()
elif (location == "about") and is_api():
return self.redirect(add_sr('about.json'), code=301)
else:
return self.abort404()
return EditReddit(content=pane,
location=location,
extension_handling=False).render()
@base_listing
@disable_subreddit_css()
@validate(VSrModerator(perms='posts'),
location=nop('location'),
only=VOneOf('only', ('links', 'comments')))
def GET_spamlisting(self, location, only, num, after, reverse, count):
c.allow_styles = True
c.profilepage = True
pane = self._make_spamlisting(location, only, num, after, reverse,
count)
extension_handling = "private" if c.user.pref_private_feeds else False
if location in ('reports', 'spam', 'modqueue'):
buttons = [NavButton(_('links and comments'), None, opt='only'),
NavButton(_('links'), 'links', opt='only'),
NavButton(_('comments'), 'comments', opt='only')]
menus = [NavMenu(buttons, base_path=request.path, title=_('show'),
type='lightdrop')]
else:
menus = None
return EditReddit(content=pane,
location=location,
nav_menus=menus,
extension_handling=extension_handling).render()
@base_listing
@disable_subreddit_css()
@validate(VSrModerator(perms='flair'),
name=nop('name'))
def GET_flairlisting(self, num, after, reverse, count, name):
user = None
if name:
try:
user = Account._by_name(name)
except NotFound:
c.errors.add(errors.USER_DOESNT_EXIST, field='name')
c.allow_styles = True
pane = FlairPane(num, after, reverse, name, user)
return EditReddit(content=pane, location='flair').render()
@disable_subreddit_css()
@validate(location=nop('location'),
created=VOneOf('created', ('true','false'),
default='false'))
@api_doc(api_section.subreddits, uri="/r/{subreddit}/about/edit",
extensions=["json"])
def GET_editreddit(self, location, created):
"""Get the current settings of a subreddit.
In the API, this returns the current settings of the subreddit as used
by [/api/site_admin](#POST_api_site_admin). On the HTML site, it will
display a form for editing the subreddit.
"""
c.profilepage = True
if isinstance(c.site, FakeSubreddit):
return self.abort404()
else:
return self._edit_normal_reddit(location, created)
@require_oauth2_scope("read")
@api_doc(api_section.subreddits, uri='/r/{subreddit}/about', extensions=['json'])
def GET_about(self):
"""Return information about the subreddit.
Data includes the subscriber count, description, and header image."""
if not is_api() or isinstance(c.site, FakeSubreddit):
return self.abort404()
return Reddit(content=Wrapped(c.site)).render()
@require_oauth2_scope("read")
def GET_sidebar(self):
usertext = UserText(c.site, c.site.description)
return Reddit(content=usertext).render()
@require_oauth2_scope("read")
def GET_sticky(self):
if c.site.sticky_fullname:
sticky = Link._by_fullname(c.site.sticky_fullname, data=True)
self.redirect(sticky.make_permalink_slow())
else:
abort(404)
def GET_awards(self):
"""The awards page."""
return BoringPage(_("awards"), content=UserAwards()).render()
# filter for removing punctuation which could be interpreted as search syntax
related_replace_regex = re.compile(r'[?\\&|!{}+~^()"\':*-]+')
related_replace_with = ' '
@base_listing
@require_oauth2_scope("read")
@validate(article=VLink('article'))
def GET_related(self, num, article, after, reverse, count):
"""Related page: performs a search using title of article as
the search query.
"""
if not can_view_link_comments(article):
abort(403, 'forbidden')
query = self.related_replace_regex.sub(self.related_replace_with,
article.title)
query = _force_unicode(query)
query = query[:1024]
query = u"|".join(query.split())
query = u"title:'%s'" % query
rel_range = timedelta(days=3)
start = int(time_module.mktime((article._date - rel_range).utctimetuple()))
end = int(time_module.mktime((article._date + rel_range).utctimetuple()))
nsfw = u"nsfw:0" if not (article.over_18 or article._nsfw.findall(article.title)) else u""
query = u"(and %s timestamp:%s..%s %s)" % (query, start, end, nsfw)
q = SearchQuery(query, raw_sort="-text_relevance",
syntax="cloudsearch")
pane = self._search(q, num=num, after=after, reverse=reverse,
count=count)[2]
return LinkInfoPage(link=article, content=pane,
page_classes=['related-page'],
subtitle=_('related')).render()
@base_listing
@require_oauth2_scope("read")
@validate(article=VLink('article'))
def GET_duplicates(self, article, num, after, reverse, count):
if not can_view_link_comments(article):
abort(403, 'forbidden')
builder = url_links_builder(article.url, exclude=article._fullname,
num=num, after=after, reverse=reverse,
count=count)
num_duplicates = len(builder.get_items()[0])
listing = LinkListing(builder).listing()
res = LinkInfoPage(link=article,
comment=None,
num_duplicates=num_duplicates,
content=listing,
page_classes=['other-discussions-page'],
subtitle=_('other discussions')).render()
return res
@base_listing
@require_oauth2_scope("read")
@validate(query=nop('q', docs={"q": "a search query"}))
@api_doc(api_section.subreddits, uri='/subreddits/search', extensions=['json', 'xml'])
def GET_search_reddits(self, query, reverse, after, count, num):
"""Search subreddits by title and description."""
q = SubredditSearchQuery(query)
results, etime, spane = self._search(q, num=num, reverse=reverse,
after=after, count=count,
skip_deleted_authors=False)
res = SubredditsPage(content=spane,
prev_search=query,
elapsed_time=etime,
num_results=results.hits,
# update if we ever add sorts
search_params={},
title=_("search results"),
simple=True).render()
return res
search_help_page = "/wiki/search"
verify_langs_regex = re.compile(r"\A[a-z][a-z](,[a-z][a-z])*\Z")
@base_listing
@require_oauth2_scope("read")
@validate(query=VLength('q', max_length=512),
sort=VMenu('sort', SearchSortMenu, remember=False),
recent=VMenu('t', TimeMenu, remember=False),
restrict_sr=VBoolean('restrict_sr', default=False),
syntax=VOneOf('syntax', options=SearchQuery.known_syntaxes))
@api_doc(api_section.search, extensions=['json', 'xml'], uses_site=True)
def GET_search(self, query, num, reverse, after, count, sort, recent,
restrict_sr, syntax):
"""Search links page."""
if query and '.' in query:
url = sanitize_url(query, require_scheme=True)
if url:
return self.redirect("/submit" + query_string({'url':url}))
if not restrict_sr:
site = DefaultSR()
else:
site = c.site
if not syntax:
syntax = SearchQuery.default_syntax
try:
cleanup_message = None
try:
q = SearchQuery(query, site, sort,
recent=recent, syntax=syntax)
results, etime, spane = self._search(q, num=num, after=after,
reverse=reverse,
count=count)
except InvalidQuery:
# Clean the search of characters that might be causing the
# InvalidQuery exception. If the cleaned search boils down
# to an empty string, the search code is expected to bail
# out early with an empty result set.
                cleaned = re.sub(r"[^\w\s]+", " ", query)
cleaned = cleaned.lower().strip()
q = SearchQuery(cleaned, site, sort, recent=recent)
results, etime, spane = self._search(q, num=num,
after=after,
reverse=reverse,
count=count)
if cleaned:
cleanup_message = strings.invalid_search_query % {
"clean_query": cleaned
}
cleanup_message += " "
cleanup_message += strings.search_help % {
"search_help": self.search_help_page
}
else:
cleanup_message = strings.completely_invalid_search_query
res = SearchPage(_('search results'), query, etime, results.hits,
content=spane,
nav_menus=[SearchSortMenu(default=sort),
TimeMenu(default=recent)],
search_params=dict(sort=sort, t=recent),
infotext=cleanup_message,
simple=False, site=c.site,
restrict_sr=restrict_sr,
syntax=syntax,
converted_data=q.converted_data,
facets=results.subreddit_facets,
sort=sort,
recent=recent,
).render()
return res
except SearchException + (socket.error,) as e:
return self.search_fail(e)
def _search(self, query_obj, num, after, reverse, count=0,
skip_deleted_authors=True):
"""Helper function for interfacing with search. Basically a
thin wrapper for SearchBuilder."""
builder = SearchBuilder(query_obj,
after=after, num=num, reverse=reverse,
count=count,
wrap=ListingController.builder_wrapper,
skip_deleted_authors=skip_deleted_authors)
listing = LinkListing(builder, show_nums=True)
# have to do it in two steps since total_num and timing are only
# computed after fetch_more
try:
res = listing.listing()
except SearchException + (socket.error,) as e:
return self.search_fail(e)
timing = time_module.time() - builder.start_time
return builder.results, timing, res
@validate(VAdmin(),
comment=VCommentByID('comment_id'))
def GET_comment_by_id(self, comment):
href = comment.make_permalink_slow(context=5, anchor=True)
return self.redirect(href)
@validate(url=VRequired('url', None),
title=VRequired('title', None),
text=VRequired('text', None),
selftext=VRequired('selftext', None),
then=VOneOf('then', ('tb','comments'), default='comments'))
def GET_submit(self, url, title, text, selftext, then):
"""Submit form."""
resubmit = request.GET.get('resubmit')
url = sanitize_url(url)
if url and not resubmit:
# check to see if the url has already been submitted
listing = hot_links_by_url_listing(url, sr=c.site)
links = listing.things
if links and len(links) == 1:
return self.redirect(links[0].already_submitted_link)
elif links:
infotext = (strings.multiple_submitted
% links[0].resubmit_link())
res = BoringPage(_("seen it"),
content=listing,
infotext=infotext).render()
return res
if not c.user_is_loggedin:
raise UserRequiredException
if not (c.default_sr or c.site.can_submit(c.user)):
abort(403, "forbidden")
captcha = Captcha() if c.user.needs_captcha() else None
extra_subreddits = []
if isinstance(c.site, MultiReddit):
extra_subreddits.append((
_('%s subreddits') % c.site.name,
c.site.srs
))
newlink = NewLink(
url=url or '',
title=title or '',
text=text or '',
selftext=selftext or '',
captcha=captcha,
resubmit=resubmit,
default_sr=c.site if not c.default_sr else None,
extra_subreddits=extra_subreddits,
show_link=c.default_sr or c.site.link_type != 'self',
show_self=((c.default_sr or c.site.link_type != 'link')
and not request.GET.get('no_self')),
then=then,
)
return FormPage(_("submit"),
show_sidebar=True,
page_classes=['submit-page'],
content=newlink).render()
def GET_frame(self):
"""used for cname support. makes a frame and
puts the proper url as the frame source"""
sub_domain = request.environ.get('sub_domain')
original_path = request.environ.get('original_path')
sr = Subreddit._by_domain(sub_domain)
return Cnameframe(original_path, sr, sub_domain).render()
def GET_framebuster(self, what=None, blah=None):
"""
renders the contents of the iframe which, on a cname, checks
if the user is currently logged into reddit.
if this page is hit from the primary domain, redirects to the
cnamed domain version of the site. If the user is logged in,
this cnamed version will drop a boolean session cookie on that
domain so that subsequent page reloads will be caught in
middleware and a frame will be inserted around the content.
If the user is not logged in, previous session cookies will be
emptied so that subsequent refreshes will not be rendered in
that pesky frame.
"""
if not c.site.domain:
return ""
elif c.cname:
return FrameBuster(login=(what == "login")).render()
else:
path = "/framebuster/"
if c.user_is_loggedin:
path += "login/"
            u = UrlParser(path + str(rand.random()))
u.mk_cname(require_frame=False, subreddit=c.site,
port=request.port)
return self.redirect(u.unparse())
# the user is not logged in or there is no cname.
return FrameBuster(login=False).render()
def GET_catchall(self):
return self.abort404()
@validate(period=VInt('seconds',
min=sup.MIN_PERIOD,
max=sup.MAX_PERIOD,
default=sup.MIN_PERIOD))
def GET_sup(self, period):
#dont cache this, it's memoized elsewhere
c.used_cache = True
sup.set_expires_header()
if c.extension == 'json':
return sup.sup_json(period)
else:
return self.abort404()
@require_oauth2_scope("modtraffic")
@validate(VSponsor('link'),
link=VLink('link'),
campaign=VPromoCampaign('campaign'),
before=VDate('before', format='%Y%m%d%H'),
after=VDate('after', format='%Y%m%d%H'))
def GET_traffic(self, link, campaign, before, after):
if c.render_style == 'csv':
return trafficpages.PromotedLinkTraffic.as_csv(campaign or link)
content = trafficpages.PromotedLinkTraffic(link, campaign, before,
after)
return LinkInfoPage(link=link,
page_classes=["promoted-traffic"],
comment=None, show_promote_button=True,
content=content).render()
@validate(VEmployee())
def GET_site_traffic(self):
return trafficpages.SitewideTrafficPage().render()
@validate(VEmployee())
def GET_lang_traffic(self, langcode):
return trafficpages.LanguageTrafficPage(langcode).render()
@validate(VEmployee())
def GET_advert_traffic(self, code):
return trafficpages.AdvertTrafficPage(code).render()
@validate(VEmployee())
def GET_subreddit_traffic_report(self):
content = trafficpages.SubredditTrafficReport()
if c.render_style == 'csv':
return content.as_csv()
return trafficpages.TrafficPage(content=content).render()
@validate(VUser())
def GET_account_activity(self):
return AccountActivityPage().render()
def GET_contact_us(self):
return BoringPage(_("contact us"), show_sidebar=False,
content=ContactUs(), page_classes=["contact-us-page"]
).render()
def GET_ad_inq(self):
return FormPage('advertise',
content = SelfServeBlurb(),
loginbox = False).render()
def GET_rules(self):
return BoringPage(_("rules of reddit"), show_sidebar=False,
content=RulesPage(), page_classes=["rulespage-body"]
).render()
@validate(vendor=VOneOf("v", ("claimed-gold", "claimed-creddits",
"paypal", "coinbase"),
default="claimed-gold"))
def GET_goldthanks(self, vendor):
vendor_url = None
lounge_md = None
if vendor == "claimed-gold":
claim_msg = _("claimed! enjoy your reddit gold membership.")
if g.lounge_reddit:
lounge_md = strings.lounge_msg
elif vendor == "claimed-creddits":
claim_msg = _("your gold creddits have been claimed! now go to "
"someone's userpage and give them a present!")
elif vendor == "paypal":
claim_msg = _("thanks for buying reddit gold! your transaction "
"has been completed and emailed to you. you can "
"check the details by logging into your account "
"at:")
vendor_url = "https://www.paypal.com/us"
elif vendor == "coinbase":
claim_msg = _("thanks for buying reddit gold! your transaction is "
"being processed. if you have any questions please "
"email us at %(gold_email)s")
claim_msg = claim_msg % {'gold_email': g.goldthanks_email}
else:
abort(404)
return BoringPage(_("thanks"), show_sidebar=False,
content=GoldThanks(claim_msg=claim_msg,
vendor_url=vendor_url,
lounge_md=lounge_md)).render()
@validate(VUser(),
token=VOneTimeToken(AwardClaimToken, "code"))
def GET_confirm_award_claim(self, token):
if not token:
abort(403)
award = Award._by_fullname(token.awardfullname)
trophy = FakeTrophy(c.user, award, token.description, token.url)
content = ConfirmAwardClaim(trophy=trophy, user=c.user.name,
token=token)
return BoringPage(_("claim this award?"), content=content).render()
@validate(VUser(),
token=VOneTimeToken(AwardClaimToken, "code"))
def POST_claim_award(self, token):
if not token:
abort(403)
token.consume()
award = Award._by_fullname(token.awardfullname)
trophy, preexisting = Trophy.claim(c.user, token.uid, award,
token.description, token.url)
redirect = '/awards/received?trophy=' + trophy._id36
if preexisting:
redirect += '&duplicate=true'
self.redirect(redirect)
@validate(trophy=VTrophy('trophy'),
preexisting=VBoolean('duplicate'))
def GET_received_award(self, trophy, preexisting):
content = AwardReceived(trophy=trophy, preexisting=preexisting)
return BoringPage(_("award claim"), content=content).render()
class FormsController(RedditController):
def GET_password(self):
"""The 'what is my password' page"""
return BoringPage(_("password"), content=Password()).render()
@validate(VUser(),
dest=VDestination(),
reason=nop('reason'))
def GET_verify(self, dest, reason):
if c.user.email_verified:
content = InfoBar(message=strings.email_verified)
if dest:
return self.redirect(dest)
else:
if reason == "submit":
infomsg = strings.verify_email_submit
else:
infomsg = strings.verify_email
content = PaneStack(
[InfoBar(message=infomsg),
PrefUpdate(email=True, verify=True,
password=False, dest=dest)])
return BoringPage(_("verify email"), content=content).render()
@validate(VUser(),
token=VOneTimeToken(EmailVerificationToken, "key"),
dest=VDestination(default="/prefs/update?verified=true"))
def GET_verify_email(self, token, dest):
fail_msg = None
if token and token.user_id != c.user._fullname:
fail_msg = strings.email_verify_wrong_user
elif c.user.email_verified:
# they've already verified.
if token:
# consume and ignore this token (if not already consumed).
token.consume()
return self.redirect(dest)
elif token and token.valid_for_user(c.user):
# successful verification!
token.consume()
c.user.email_verified = True
c.user._commit()
Award.give_if_needed("verified_email", c.user)
return self.redirect(dest)
# failure. let 'em know.
content = PaneStack(
[InfoBar(message=fail_msg or strings.email_verify_failed),
PrefUpdate(email=True,
verify=True,
password=False)])
return BoringPage(_("verify email"), content=content).render()
@validate(token=VOneTimeToken(PasswordResetToken, "key"),
key=nop("key"))
def GET_resetpassword(self, token, key):
"""page hit once a user has been sent a password reset email
to verify their identity before allowing them to update their
password."""
done = False
if not key and request.referer:
referer_path = request.referer.split(g.domain)[-1]
done = referer_path.startswith(request.fullpath)
elif not token:
return self.redirect("/password?expired=true")
token_user = Account._by_fullname(token.user_id, data=True)
return BoringPage(
_("reset password"),
content=ResetPassword(
key=key,
done=done,
username=token_user.name,
)
).render()
@disable_subreddit_css()
@validate(VUser(),
location=nop("location"),
verified=VBoolean("verified"))
def GET_prefs(self, location='', verified=False):
"""Preference page"""
content = None
infotext = None
if not location or location == 'options':
content = PrefOptions(done=request.GET.get('done'))
elif location == 'update':
if verified:
infotext = strings.email_verified
content = PrefUpdate()
elif location == 'apps':
content = PrefApps(my_apps=OAuth2Client._by_user_grouped(c.user),
developed_apps=OAuth2Client._by_developer(c.user))
elif location == 'feeds' and c.user.pref_private_feeds:
content = PrefFeeds()
elif location == 'delete':
content = PrefDelete()
elif location == 'otp':
content = PrefOTP()
else:
return self.abort404()
return PrefsPage(content=content, infotext=infotext).render()
@validate(dest=VDestination())
def GET_login(self, dest):
"""The /login form. No link to this page exists any more on
the site (all actions invoking it now go through the login
cover). However, this page is still used for logging the user
in during submission or voting from the bookmarklets."""
if (c.user_is_loggedin and
not request.environ.get('extension') == 'embed'):
return self.redirect(dest)
return LoginPage(dest=dest).render()
@validate(dest=VDestination())
def GET_register(self, dest):
if (c.user_is_loggedin and
not request.environ.get('extension') == 'embed'):
return self.redirect(dest)
return RegisterPage(dest=dest).render()
@validate(VUser(),
VModhash(),
dest=VDestination())
def GET_logout(self, dest):
return self.redirect(dest)
@validate(VUser(),
VModhash(),
dest=VDestination())
def POST_logout(self, dest):
"""wipe login cookie and redirect to referer."""
self.logout()
return self.redirect(dest)
@validate(VUser(),
dest=VDestination())
def GET_adminon(self, dest):
"""Enable admin interaction with site"""
        # check like this because c.user_is_admin is still False at this point
        if c.user.name not in g.admins:
return self.abort404()
return AdminModeInterstitial(dest=dest).render()
@validate(VAdmin(),
dest=VDestination())
def GET_adminoff(self, dest):
"""disable admin interaction with site."""
        if c.user.name not in g.admins:
return self.abort404()
self.disable_admin_mode(c.user)
return self.redirect(dest)
def _render_opt_in_out(self, msg_hash, leave):
"""Generates the form for an optin/optout page"""
email = Email.handler.get_recipient(msg_hash)
if not email:
return self.abort404()
sent = (has_opted_out(email) == leave)
return BoringPage(_("opt out") if leave else _("welcome back"),
content=OptOut(email=email, leave=leave,
sent=sent,
msg_hash=msg_hash)).render()
@validate(msg_hash=nop('x'))
def GET_optout(self, msg_hash):
"""handles /mail/optout to add an email to the optout mailing
list. The actual email addition comes from the user posting
the subsequently rendered form and is handled in
ApiController.POST_optout."""
return self._render_opt_in_out(msg_hash, True)
@validate(msg_hash=nop('x'))
def GET_optin(self, msg_hash):
"""handles /mail/optin to remove an email address from the
optout list. The actual email removal comes from the user
posting the subsequently rendered form and is handled in
ApiController.POST_optin."""
return self._render_opt_in_out(msg_hash, False)
@validate(dest=VDestination("dest"))
def GET_try_compact(self, dest):
c.render_style = "compact"
return TryCompact(dest=dest).render()
@validate(VUser(),
secret=VPrintable("secret", 50))
def GET_claim(self, secret):
"""The page to claim reddit gold trophies"""
return BoringPage(_("thanks"), content=Thanks(secret)).render()
@validate(VUser(),
passthrough=nop('passthrough'))
def GET_creditgild(self, passthrough):
"""Used only for setting up credit card payments for gilding."""
try:
payment_blob = validate_blob(passthrough)
except GoldException:
self.abort404()
if c.user != payment_blob['buyer']:
self.abort404()
if not payment_blob['goldtype'] == 'gift':
self.abort404()
recipient = payment_blob['recipient']
thing = payment_blob.get('thing')
if not thing:
thing = payment_blob['comment']
if (not thing or
thing._deleted or
not thing.subreddit_slow.can_view(c.user)):
self.abort404()
if isinstance(thing, Comment):
summary = strings.gold_summary_gilding_page_comment
else:
summary = strings.gold_summary_gilding_page_link
summary = summary % {'recipient': recipient.name}
months = 1
price = g.gold_month_price * months
if isinstance(thing, Comment):
desc = thing.body
else:
desc = thing.markdown_link_slow()
content = CreditGild(
summary=summary,
price=price,
months=months,
stripe_key=g.STRIPE_PUBLIC_KEY,
passthrough=passthrough,
description=desc,
period=None,
)
return BoringPage(_("reddit gold"),
show_sidebar=False,
content=content).render()
@validate(VUser(),
goldtype=VOneOf("goldtype",
("autorenew", "onetime", "creddits", "gift",
"code")),
period=VOneOf("period", ("monthly", "yearly")),
months=VInt("months"),
# variables below are just for gifts
signed=VBoolean("signed"),
recipient_name=VPrintable("recipient", max_length=50),
thing=VByName("thing"),
giftmessage=VLength("giftmessage", 10000))
def GET_gold(self, goldtype, period, months,
signed, recipient_name, giftmessage, thing):
if thing:
thing_sr = Subreddit._byID(thing.sr_id, data=True)
if (thing._deleted or
thing._spam or
not thing_sr.can_view(c.user) or
not thing_sr.allow_gilding):
thing = None
start_over = False
recipient = None
if goldtype == "autorenew":
if period is None:
start_over = True
elif c.user.has_gold_subscription:
return self.redirect("/gold/subscription")
elif goldtype in ("onetime", "creddits", "code"):
if months is None or months < 1:
start_over = True
elif goldtype == "gift":
if months is None or months < 1:
start_over = True
if thing:
recipient = Account._byID(thing.author_id, data=True)
if recipient._deleted:
thing = None
recipient = None
start_over = True
else:
try:
recipient = Account._by_name(recipient_name or "")
except NotFound:
start_over = True
else:
goldtype = ""
start_over = True
if start_over:
can_subscribe = not c.user.has_gold_subscription
return BoringPage(_("reddit gold"),
show_sidebar=False,
content=Gold(goldtype, period, months, signed,
recipient, recipient_name,
can_subscribe=can_subscribe)).render()
else:
payment_blob = dict(goldtype=goldtype,
account_id=c.user._id,
account_name=c.user.name,
status="initialized")
if goldtype == "gift":
payment_blob["signed"] = signed
payment_blob["recipient"] = recipient.name
payment_blob["giftmessage"] = _force_utf8(giftmessage)
if thing:
payment_blob["thing"] = thing._fullname
passthrough = generate_blob(payment_blob)
return BoringPage(_("reddit gold"),
show_sidebar=False,
content=GoldPayment(goldtype, period, months,
signed, recipient,
giftmessage, passthrough,
thing)
).render()
@validate(VUser())
def GET_subscription(self):
user = c.user
content = GoldSubscription(user)
return BoringPage(_("reddit gold subscription"), show_sidebar=False,
content=content).render()
|
h2oloopan/easymerge
|
EasyMerge/tests/reddit/r2/r2/controllers/front.py
|
Python
|
mit
| 61,772
|
[
"VisIt"
] |
33e416e3d1d5ad0398429f67186db1643bb300c21e921d26a6b3a3b2fd39f1e0
|
import warnings
from collections import defaultdict
import numpy as np
import pandas as pd
from .coding import strings, times, variables
from .coding.variables import SerializationWarning, pop_to
from .core import duck_array_ops, indexing
from .core.common import contains_cftime_datetimes
from .core.pycompat import is_duck_dask_array
from .core.variable import IndexVariable, Variable, as_variable
CF_RELATED_DATA = (
"bounds",
"grid_mapping",
"climatology",
"geometry",
"node_coordinates",
"node_count",
"part_node_count",
"interior_ring",
"cell_measures",
"formula_terms",
)
CF_RELATED_DATA_NEEDS_PARSING = (
"cell_measures",
"formula_terms",
)
class NativeEndiannessArray(indexing.ExplicitlyIndexedNDArrayMixin):
"""Decode arrays on the fly from non-native to native endianness
This is useful for decoding arrays from netCDF3 files (which are all
big endian) into native endianness, so they can be used with Cython
functions, such as those found in bottleneck and pandas.
>>> x = np.arange(5, dtype=">i2")
>>> x.dtype
dtype('>i2')
>>> NativeEndiannessArray(x).dtype
dtype('int16')
>>> indexer = indexing.BasicIndexer((slice(None),))
>>> NativeEndiannessArray(x)[indexer].dtype
dtype('int16')
"""
__slots__ = ("array",)
def __init__(self, array):
self.array = indexing.as_indexable(array)
@property
def dtype(self):
return np.dtype(self.array.dtype.kind + str(self.array.dtype.itemsize))
def __getitem__(self, key):
return np.asarray(self.array[key], dtype=self.dtype)
class BoolTypeArray(indexing.ExplicitlyIndexedNDArrayMixin):
"""Decode arrays on the fly from integer to boolean datatype
This is useful for decoding boolean arrays from integer typed netCDF
variables.
>>> x = np.array([1, 0, 1, 1, 0], dtype="i1")
>>> x.dtype
dtype('int8')
>>> BoolTypeArray(x).dtype
dtype('bool')
>>> indexer = indexing.BasicIndexer((slice(None),))
>>> BoolTypeArray(x)[indexer].dtype
dtype('bool')
"""
__slots__ = ("array",)
def __init__(self, array):
self.array = indexing.as_indexable(array)
@property
def dtype(self):
return np.dtype("bool")
def __getitem__(self, key):
return np.asarray(self.array[key], dtype=self.dtype)
def _var_as_tuple(var):
return var.dims, var.data, var.attrs.copy(), var.encoding.copy()
def maybe_encode_nonstring_dtype(var, name=None):
if "dtype" in var.encoding and var.encoding["dtype"] not in ("S1", str):
dims, data, attrs, encoding = _var_as_tuple(var)
dtype = np.dtype(encoding.pop("dtype"))
if dtype != var.dtype:
if np.issubdtype(dtype, np.integer):
if (
np.issubdtype(var.dtype, np.floating)
and "_FillValue" not in var.attrs
and "missing_value" not in var.attrs
):
warnings.warn(
f"saving variable {name} with floating "
"point data as an integer dtype without "
"any _FillValue to use for NaNs",
SerializationWarning,
stacklevel=10,
)
data = duck_array_ops.around(data)[...]
data = data.astype(dtype=dtype)
var = Variable(dims, data, attrs, encoding)
return var
def maybe_default_fill_value(var):
# make NaN the fill value for float types:
if (
"_FillValue" not in var.attrs
and "_FillValue" not in var.encoding
and np.issubdtype(var.dtype, np.floating)
):
var.attrs["_FillValue"] = var.dtype.type(np.nan)
return var
def maybe_encode_bools(var):
if (
(var.dtype == bool)
and ("dtype" not in var.encoding)
and ("dtype" not in var.attrs)
):
dims, data, attrs, encoding = _var_as_tuple(var)
attrs["dtype"] = "bool"
data = data.astype(dtype="i1", copy=True)
var = Variable(dims, data, attrs, encoding)
return var
def _infer_dtype(array, name=None):
"""Given an object array with no missing values, infer its dtype from its
first element
"""
if array.dtype.kind != "O":
raise TypeError("infer_type must be called on a dtype=object array")
if array.size == 0:
return np.dtype(float)
element = array[(0,) * array.ndim]
# We use the base types to avoid subclasses of bytes and str (which might
# not play nice with e.g. hdf5 datatypes), such as those from numpy
if isinstance(element, bytes):
return strings.create_vlen_dtype(bytes)
elif isinstance(element, str):
return strings.create_vlen_dtype(str)
dtype = np.array(element).dtype
if dtype.kind != "O":
return dtype
raise ValueError(
"unable to infer dtype on variable {!r}; xarray "
"cannot serialize arbitrary Python objects".format(name)
)
def ensure_not_multiindex(var, name=None):
if isinstance(var, IndexVariable) and isinstance(var.to_index(), pd.MultiIndex):
raise NotImplementedError(
"variable {!r} is a MultiIndex, which cannot yet be "
"serialized to netCDF files "
"(https://github.com/pydata/xarray/issues/1077). Use "
"reset_index() to convert MultiIndex levels into coordinate "
"variables instead.".format(name)
)
def _copy_with_dtype(data, dtype):
"""Create a copy of an array with the given dtype.
We use this instead of np.array() to ensure that custom object dtypes end
up on the resulting array.
"""
result = np.empty(data.shape, dtype)
result[...] = data
return result
def ensure_dtype_not_object(var, name=None):
# TODO: move this from conventions to backends? (it's not CF related)
if var.dtype.kind == "O":
dims, data, attrs, encoding = _var_as_tuple(var)
if is_duck_dask_array(data):
warnings.warn(
"variable {} has data in the form of a dask array with "
"dtype=object, which means it is being loaded into memory "
"to determine a data type that can be safely stored on disk. "
"To avoid this, coerce this variable to a fixed-size dtype "
"with astype() before saving it.".format(name),
SerializationWarning,
)
data = data.compute()
missing = pd.isnull(data)
if missing.any():
# nb. this will fail for dask.array data
non_missing_values = data[~missing]
inferred_dtype = _infer_dtype(non_missing_values, name)
            # There is no safe bit-pattern for NA in typical binary string
            # formats, so we can't set a fill_value. Unfortunately, this means
            # we can't distinguish between missing values and empty strings.
if strings.is_bytes_dtype(inferred_dtype):
fill_value = b""
elif strings.is_unicode_dtype(inferred_dtype):
fill_value = ""
else:
# insist on using float for numeric values
if not np.issubdtype(inferred_dtype, np.floating):
inferred_dtype = np.dtype(float)
fill_value = inferred_dtype.type(np.nan)
data = _copy_with_dtype(data, dtype=inferred_dtype)
data[missing] = fill_value
else:
data = _copy_with_dtype(data, dtype=_infer_dtype(data, name))
assert data.dtype.kind != "O" or data.dtype.metadata
var = Variable(dims, data, attrs, encoding)
return var
def encode_cf_variable(var, needs_copy=True, name=None):
"""
    Converts a Variable into a Variable which follows some
    of the CF conventions:
        - NaNs are masked using _FillValue (or the deprecated missing_value)
- Rescaling via: scale_factor and add_offset
- datetimes are converted to the CF 'units since time' format
- dtype encodings are enforced.
Parameters
----------
var : Variable
A variable holding un-encoded data.
Returns
-------
out : Variable
A variable which has been encoded as described above.
"""
ensure_not_multiindex(var, name=name)
for coder in [
times.CFDatetimeCoder(),
times.CFTimedeltaCoder(),
variables.CFScaleOffsetCoder(),
variables.CFMaskCoder(),
variables.UnsignedIntegerCoder(),
]:
var = coder.encode(var, name=name)
# TODO(shoyer): convert all of these to use coders, too:
var = maybe_encode_nonstring_dtype(var, name=name)
var = maybe_default_fill_value(var)
var = maybe_encode_bools(var)
var = ensure_dtype_not_object(var, name=name)
for attr_name in CF_RELATED_DATA:
pop_to(var.encoding, var.attrs, attr_name)
return var
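# Hedged usage sketch (illustrative addition, not part of the original
# module): encode_cf_variable applied to a float variable containing NaNs.
# The helper name and values are hypothetical.
def _example_encode_cf_variable():
    import numpy as np
    import xarray as xr
    from xarray.conventions import encode_cf_variable
    var = xr.Variable(("x",), np.array([1.0, np.nan, 3.0]))
    encoded = encode_cf_variable(var, name="demo")
    # maybe_default_fill_value chose NaN as the on-disk fill value, since the
    # variable is floating point and had no explicit _FillValue
    assert "_FillValue" in encoded.attrs
    return encoded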
def decode_cf_variable(
name,
var,
concat_characters=True,
mask_and_scale=True,
decode_times=True,
decode_endianness=True,
stack_char_dim=True,
use_cftime=None,
decode_timedelta=None,
):
"""
    Decodes a variable which may hold CF encoded information.
    This includes variables that have been masked and scaled, that hold
    CF-style time variables (this is almost always the case if the
    dataset has been serialized), and that have strings encoded as
    character arrays.
Parameters
----------
name : str
Name of the variable. Used for better error messages.
var : Variable
A variable holding potentially CF encoded information.
concat_characters : bool
Should character arrays be concatenated to strings, for
example: ["h", "e", "l", "l", "o"] -> "hello"
mask_and_scale : bool
Lazily scale (using scale_factor and add_offset) and mask
(using _FillValue). If the _Unsigned attribute is present
treat integer arrays as unsigned.
decode_times : bool
Decode cf times ("hours since 2000-01-01") to np.datetime64.
decode_endianness : bool
Decode arrays from non-native to native endianness.
stack_char_dim : bool
Whether to stack characters into bytes along the last dimension of this
array. Passed as an argument because we need to look at the full
dataset to figure out if this is appropriate.
use_cftime : bool, optional
Only relevant if encoded dates come from a standard calendar
(e.g. "gregorian", "proleptic_gregorian", "standard", or not
specified). If None (default), attempt to decode times to
``np.datetime64[ns]`` objects; if this is not possible, decode times to
``cftime.datetime`` objects. If True, always decode times to
``cftime.datetime`` objects, regardless of whether or not they can be
represented using ``np.datetime64[ns]`` objects. If False, always
decode times to ``np.datetime64[ns]`` objects; if this is not possible
raise an error.
Returns
-------
out : Variable
A variable holding the decoded equivalent of var.
"""
var = as_variable(var)
original_dtype = var.dtype
if decode_timedelta is None:
decode_timedelta = decode_times
if concat_characters:
if stack_char_dim:
var = strings.CharacterArrayCoder().decode(var, name=name)
var = strings.EncodedStringCoder().decode(var)
if mask_and_scale:
for coder in [
variables.UnsignedIntegerCoder(),
variables.CFMaskCoder(),
variables.CFScaleOffsetCoder(),
]:
var = coder.decode(var, name=name)
if decode_timedelta:
var = times.CFTimedeltaCoder().decode(var, name=name)
if decode_times:
var = times.CFDatetimeCoder(use_cftime=use_cftime).decode(var, name=name)
dimensions, data, attributes, encoding = variables.unpack_for_decoding(var)
# TODO(shoyer): convert everything below to use coders
if decode_endianness and not data.dtype.isnative:
# do this last, so it's only done if we didn't already unmask/scale
data = NativeEndiannessArray(data)
original_dtype = data.dtype
encoding.setdefault("dtype", original_dtype)
if "dtype" in attributes and attributes["dtype"] == "bool":
del attributes["dtype"]
data = BoolTypeArray(data)
if not is_duck_dask_array(data):
data = indexing.LazilyIndexedArray(data)
return Variable(dimensions, data, attributes, encoding=encoding)
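# Hedged sketch (illustrative addition): decoding a scale_factor/add_offset
# encoded variable with decode_cf_variable. Data and attribute values are
# hypothetical.
def _example_decode_cf_variable():
    import numpy as np
    import xarray as xr
    from xarray.conventions import decode_cf_variable
    raw = xr.Variable(("x",), np.array([0, 10, 20], dtype="i2"),
                      attrs={"scale_factor": 0.1, "add_offset": 5.0})
    decoded = decode_cf_variable("demo", raw)
    # CFScaleOffsetCoder lazily computes data * scale_factor + add_offset
    np.testing.assert_allclose(decoded.values, [5.0, 6.0, 7.0])
    return decoded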
def _update_bounds_attributes(variables):
"""Adds time attributes to time bounds variables.
Variables handling time bounds ("Cell boundaries" in the CF
conventions) do not necessarily carry the necessary attributes to be
decoded. This copies the attributes from the time variable to the
associated boundaries.
See Also:
http://cfconventions.org/Data/cf-conventions/cf-conventions-1.7/
cf-conventions.html#cell-boundaries
https://github.com/pydata/xarray/issues/2565
"""
# For all time variables with bounds
for v in variables.values():
attrs = v.attrs
has_date_units = "units" in attrs and "since" in attrs["units"]
if has_date_units and "bounds" in attrs:
if attrs["bounds"] in variables:
bounds_attrs = variables[attrs["bounds"]].attrs
bounds_attrs.setdefault("units", attrs["units"])
if "calendar" in attrs:
bounds_attrs.setdefault("calendar", attrs["calendar"])
def _update_bounds_encoding(variables):
"""Adds time encoding to time bounds variables.
Variables handling time bounds ("Cell boundaries" in the CF
conventions) do not necessarily carry the necessary attributes to be
decoded. This copies the encoding from the time variable to the
associated bounds variable so that we write CF-compliant files.
See Also:
http://cfconventions.org/Data/cf-conventions/cf-conventions-1.7/
cf-conventions.html#cell-boundaries
https://github.com/pydata/xarray/issues/2565
"""
# For all time variables with bounds
for v in variables.values():
attrs = v.attrs
encoding = v.encoding
has_date_units = "units" in encoding and "since" in encoding["units"]
is_datetime_type = np.issubdtype(
v.dtype, np.datetime64
) or contains_cftime_datetimes(v)
if (
is_datetime_type
and not has_date_units
and "bounds" in attrs
and attrs["bounds"] in variables
):
warnings.warn(
"Variable '{0}' has datetime type and a "
"bounds variable but {0}.encoding does not have "
"units specified. The units encodings for '{0}' "
"and '{1}' will be determined independently "
"and may not be equal, counter to CF-conventions. "
"If this is a concern, specify a units encoding for "
"'{0}' before writing to a file.".format(v.name, attrs["bounds"]),
UserWarning,
)
if has_date_units and "bounds" in attrs:
if attrs["bounds"] in variables:
bounds_encoding = variables[attrs["bounds"]].encoding
bounds_encoding.setdefault("units", encoding["units"])
if "calendar" in encoding:
bounds_encoding.setdefault("calendar", encoding["calendar"])
def decode_cf_variables(
variables,
attributes,
concat_characters=True,
mask_and_scale=True,
decode_times=True,
decode_coords=True,
drop_variables=None,
use_cftime=None,
decode_timedelta=None,
):
"""
Decode several CF encoded variables.
See: decode_cf_variable
"""
dimensions_used_by = defaultdict(list)
for v in variables.values():
for d in v.dims:
dimensions_used_by[d].append(v)
def stackable(dim):
# figure out if a dimension can be concatenated over
if dim in variables:
return False
for v in dimensions_used_by[dim]:
if v.dtype.kind != "S" or dim != v.dims[-1]:
return False
return True
coord_names = set()
if isinstance(drop_variables, str):
drop_variables = [drop_variables]
elif drop_variables is None:
drop_variables = []
drop_variables = set(drop_variables)
# Time bounds coordinates might miss the decoding attributes
if decode_times:
_update_bounds_attributes(variables)
new_vars = {}
for k, v in variables.items():
if k in drop_variables:
continue
stack_char_dim = (
concat_characters
and v.dtype == "S1"
and v.ndim > 0
and stackable(v.dims[-1])
)
new_vars[k] = decode_cf_variable(
k,
v,
concat_characters=concat_characters,
mask_and_scale=mask_and_scale,
decode_times=decode_times,
stack_char_dim=stack_char_dim,
use_cftime=use_cftime,
decode_timedelta=decode_timedelta,
)
if decode_coords in [True, "coordinates", "all"]:
var_attrs = new_vars[k].attrs
if "coordinates" in var_attrs:
coord_str = var_attrs["coordinates"]
var_coord_names = coord_str.split()
if all(k in variables for k in var_coord_names):
new_vars[k].encoding["coordinates"] = coord_str
del var_attrs["coordinates"]
coord_names.update(var_coord_names)
if decode_coords == "all":
for attr_name in CF_RELATED_DATA:
if attr_name in var_attrs:
attr_val = var_attrs[attr_name]
if attr_name not in CF_RELATED_DATA_NEEDS_PARSING:
var_names = attr_val.split()
else:
roles_and_names = [
role_or_name
for part in attr_val.split(":")
for role_or_name in part.split()
]
if len(roles_and_names) % 2 == 1:
warnings.warn(
f"Attribute {attr_name:s} malformed", stacklevel=5
)
var_names = roles_and_names[1::2]
if all(var_name in variables for var_name in var_names):
new_vars[k].encoding[attr_name] = attr_val
coord_names.update(var_names)
else:
referenced_vars_not_in_variables = [
proj_name
for proj_name in var_names
if proj_name not in variables
]
warnings.warn(
f"Variable(s) referenced in {attr_name:s} not in variables: {referenced_vars_not_in_variables!s}",
stacklevel=5,
)
del var_attrs[attr_name]
if decode_coords and "coordinates" in attributes:
attributes = dict(attributes)
coord_names.update(attributes.pop("coordinates").split())
return new_vars, attributes, coord_names
def decode_cf(
obj,
concat_characters=True,
mask_and_scale=True,
decode_times=True,
decode_coords=True,
drop_variables=None,
use_cftime=None,
decode_timedelta=None,
):
"""Decode the given Dataset or Datastore according to CF conventions into
a new Dataset.
Parameters
----------
obj : Dataset or DataStore
Object to decode.
concat_characters : bool, optional
Should character arrays be concatenated to strings, for
example: ["h", "e", "l", "l", "o"] -> "hello"
mask_and_scale : bool, optional
Lazily scale (using scale_factor and add_offset) and mask
(using _FillValue).
decode_times : bool, optional
Decode cf times (e.g., integers since "hours since 2000-01-01") to
np.datetime64.
decode_coords : bool or {"coordinates", "all"}, optional
Controls which variables are set as coordinate variables:
- "coordinates" or True: Set variables referred to in the
``'coordinates'`` attribute of the datasets or individual variables
as coordinate variables.
- "all": Set variables referred to in ``'grid_mapping'``, ``'bounds'`` and
other attributes as coordinate variables.
drop_variables : str or iterable, optional
A variable or list of variables to exclude from being parsed from the
dataset. This may be useful to drop variables with problems or
inconsistent values.
use_cftime : bool, optional
Only relevant if encoded dates come from a standard calendar
(e.g. "gregorian", "proleptic_gregorian", "standard", or not
specified). If None (default), attempt to decode times to
``np.datetime64[ns]`` objects; if this is not possible, decode times to
``cftime.datetime`` objects. If True, always decode times to
``cftime.datetime`` objects, regardless of whether or not they can be
represented using ``np.datetime64[ns]`` objects. If False, always
decode times to ``np.datetime64[ns]`` objects; if this is not possible
raise an error.
decode_timedelta : bool, optional
If True, decode variables and coordinates with time units in
{"days", "hours", "minutes", "seconds", "milliseconds", "microseconds"}
into timedelta objects. If False, leave them encoded as numbers.
        If None (default), assume the same value as decode_times.
Returns
-------
decoded : Dataset
"""
from .backends.common import AbstractDataStore
from .core.dataset import Dataset
if isinstance(obj, Dataset):
vars = obj._variables
attrs = obj.attrs
extra_coords = set(obj.coords)
close = obj._close
encoding = obj.encoding
elif isinstance(obj, AbstractDataStore):
vars, attrs = obj.load()
extra_coords = set()
close = obj.close
encoding = obj.get_encoding()
else:
raise TypeError("can only decode Dataset or DataStore objects")
vars, attrs, coord_names = decode_cf_variables(
vars,
attrs,
concat_characters,
mask_and_scale,
decode_times,
decode_coords,
drop_variables=drop_variables,
use_cftime=use_cftime,
decode_timedelta=decode_timedelta,
)
ds = Dataset(vars, attrs=attrs)
ds = ds.set_coords(coord_names.union(extra_coords).intersection(vars))
ds.set_close(close)
ds.encoding = encoding
return ds
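# Hedged sketch (illustrative addition): decode_cf on an in-memory Dataset
# with a CF-encoded time coordinate; xr.decode_cf is the public entry point
# wrapping the function above.
def _example_decode_cf():
    import numpy as np
    import xarray as xr
    ds = xr.Dataset(coords={"time": ("time", np.array([0, 24], dtype="i4"),
                                     {"units": "hours since 2000-01-01"})})
    decoded = xr.decode_cf(ds)
    # the integer offsets became np.datetime64[ns] timestamps
    assert decoded["time"].dtype == np.dtype("datetime64[ns]")
    return decoded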
def cf_decoder(
variables,
attributes,
concat_characters=True,
mask_and_scale=True,
decode_times=True,
):
"""
Decode a set of CF encoded variables and attributes.
Parameters
----------
variables : dict
A dictionary mapping from variable name to xarray.Variable
attributes : dict
A dictionary mapping from attribute name to value
concat_characters : bool
Should character arrays be concatenated to strings, for
example: ["h", "e", "l", "l", "o"] -> "hello"
mask_and_scale : bool
Lazily scale (using scale_factor and add_offset) and mask
(using _FillValue).
decode_times : bool
Decode cf times ("hours since 2000-01-01") to np.datetime64.
Returns
-------
decoded_variables : dict
A dictionary mapping from variable name to xarray.Variable objects.
decoded_attributes : dict
A dictionary mapping from attribute name to values.
See Also
--------
decode_cf_variable
"""
variables, attributes, _ = decode_cf_variables(
variables, attributes, concat_characters, mask_and_scale, decode_times
)
return variables, attributes
def _encode_coordinates(variables, attributes, non_dim_coord_names):
# calculate global and variable specific coordinates
non_dim_coord_names = set(non_dim_coord_names)
for name in list(non_dim_coord_names):
if isinstance(name, str) and " " in name:
warnings.warn(
"coordinate {!r} has a space in its name, which means it "
"cannot be marked as a coordinate on disk and will be "
"saved as a data variable instead".format(name),
SerializationWarning,
stacklevel=6,
)
non_dim_coord_names.discard(name)
global_coordinates = non_dim_coord_names.copy()
variable_coordinates = defaultdict(set)
not_technically_coordinates = set()
for coord_name in non_dim_coord_names:
target_dims = variables[coord_name].dims
for k, v in variables.items():
if (
k not in non_dim_coord_names
and k not in v.dims
and set(target_dims) <= set(v.dims)
):
variable_coordinates[k].add(coord_name)
if any(
attr_name in v.encoding and coord_name in v.encoding.get(attr_name)
for attr_name in CF_RELATED_DATA
):
not_technically_coordinates.add(coord_name)
global_coordinates.discard(coord_name)
variables = {k: v.copy(deep=False) for k, v in variables.items()}
# keep track of variable names written to file under the "coordinates" attributes
written_coords = set()
for name, var in variables.items():
encoding = var.encoding
attrs = var.attrs
if "coordinates" in attrs and "coordinates" in encoding:
raise ValueError(
f"'coordinates' found in both attrs and encoding for variable {name!r}."
)
# if coordinates set to None, don't write coordinates attribute
if (
"coordinates" in attrs
and attrs.get("coordinates") is None
or "coordinates" in encoding
and encoding.get("coordinates") is None
):
# make sure "coordinates" is removed from attrs/encoding
attrs.pop("coordinates", None)
encoding.pop("coordinates", None)
continue
# this will copy coordinates from encoding to attrs if "coordinates" in attrs
# after the next line, "coordinates" is never in encoding
# we get support for attrs["coordinates"] for free.
coords_str = pop_to(encoding, attrs, "coordinates")
if not coords_str and variable_coordinates[name]:
coordinates_text = " ".join(
str(coord_name)
for coord_name in variable_coordinates[name]
if coord_name not in not_technically_coordinates
)
if coordinates_text:
attrs["coordinates"] = coordinates_text
if "coordinates" in attrs:
written_coords.update(attrs["coordinates"].split())
# These coordinates are not associated with any particular variables, so we
# save them under a global 'coordinates' attribute so xarray can roundtrip
# the dataset faithfully. Because this serialization goes beyond CF
# conventions, only do it if necessary.
# Reference discussion:
# http://mailman.cgd.ucar.edu/pipermail/cf-metadata/2014/007571.html
global_coordinates.difference_update(written_coords)
if global_coordinates:
attributes = dict(attributes)
if "coordinates" in attributes:
warnings.warn(
f"cannot serialize global coordinates {global_coordinates!r} because the global "
f"attribute 'coordinates' already exists. This may prevent faithful roundtripping"
f"of xarray datasets",
SerializationWarning,
)
else:
attributes["coordinates"] = " ".join(map(str, global_coordinates))
return variables, attributes
def encode_dataset_coordinates(dataset):
"""Encode coordinates on the given dataset object into variable specific
and global attributes.
When possible, this is done according to CF conventions.
Parameters
----------
dataset : Dataset
Object to encode.
Returns
-------
variables : dict
attrs : dict
"""
non_dim_coord_names = set(dataset.coords) - set(dataset.dims)
return _encode_coordinates(
dataset._variables, dataset.attrs, non_dim_coord_names=non_dim_coord_names
)
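# Hedged sketch (illustrative addition): a non-dimension coordinate is
# recorded in the CF "coordinates" attribute of the data variables that span
# its dimensions.
def _example_encode_dataset_coordinates():
    import numpy as np
    import xarray as xr
    from xarray.conventions import encode_dataset_coordinates
    ds = xr.Dataset({"t": ("x", np.arange(3.0))},
                    coords={"lon": ("x", [10.0, 20.0, 30.0])})
    variables, attrs = encode_dataset_coordinates(ds)
    assert variables["t"].attrs["coordinates"] == "lon"
    return variables, attrs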
def cf_encoder(variables, attributes):
"""
Encode a set of CF encoded variables and attributes.
Takes a dicts of variables and attributes and encodes them
to conform to CF conventions as much as possible.
This includes masking, scaling, character array handling,
and CF-time encoding.
Parameters
----------
variables : dict
A dictionary mapping from variable name to xarray.Variable
attributes : dict
A dictionary mapping from attribute name to value
Returns
-------
encoded_variables : dict
A dictionary mapping from variable name to xarray.Variable,
encoded_attributes : dict
A dictionary mapping from attribute name to value
See Also
--------
decode_cf_variable, encode_cf_variable
"""
# add encoding for time bounds variables if present.
_update_bounds_encoding(variables)
new_vars = {k: encode_cf_variable(v, name=k) for k, v in variables.items()}
# Remove attrs from bounds variables (issue #2921)
for var in new_vars.values():
bounds = var.attrs["bounds"] if "bounds" in var.attrs else None
if bounds and bounds in new_vars:
# see http://cfconventions.org/cf-conventions/cf-conventions.html#cell-boundaries
for attr in [
"units",
"standard_name",
"axis",
"positive",
"calendar",
"long_name",
"leap_month",
"leap_year",
"month_lengths",
]:
if attr in new_vars[bounds].attrs and attr in var.attrs:
if new_vars[bounds].attrs[attr] == var.attrs[attr]:
new_vars[bounds].attrs.pop(attr)
return new_vars, attributes
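# Hedged sketch (illustrative addition): cf_encoder operates on plain dicts
# of variables and attributes, mirroring cf_decoder above.
def _example_cf_encoder():
    import numpy as np
    import xarray as xr
    from xarray.conventions import cf_encoder
    variables = {"temp": xr.Variable(("x",), np.array([280.0, np.nan]))}
    enc_vars, enc_attrs = cf_encoder(variables, {"title": "demo"})
    # the float variable picked up a default NaN _FillValue during encoding
    assert "_FillValue" in enc_vars["temp"].attrs
    return enc_vars, enc_attrs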
|
pydata/xarray
|
xarray/conventions.py
|
Python
|
apache-2.0
| 30,727
|
[
"NetCDF"
] |
b6a5a7679ff048dd71bc27c185424d58713d4921b1575c03995ad410116488ea
|
from vtk import *
from vtk.wx.wxVTKRenderWindowInteractor import wxVTKRenderWindowInteractor
import wx
import pydicom
import os
class VtkPanel(wx.Panel):
def __init__(self, parent):
wx.Panel.__init__(self, parent)
self.parent = parent
# Top Sizer Configuration
self.topSizer = wx.BoxSizer(wx.HORIZONTAL)
self.IMAGE_PATH = None
self.ROOT_PIPE = None
self.DICOM_IMAGES = None
self.FIRST_IMAGE = None
self.IMAGE_LARGEST_PIXEL = None
self.IMAGE_SMALLEST_PIXEL = None
self.loadView() # Loading View
# Initializing
self.wxThresholdRadioBox.SetSelection(0)
self.wxLowerSlider.Disable()
def loadView(self):
# LEFT
#self.InteractorImageRenderWindow = wxVTKRenderWindowInteractor(self, -1)
#self.InteractorImageRenderWindow.Enable(1)
#self.topSizer.Add(self.InteractorImageRenderWindow, 1, wx.EXPAND| wx.RIGHT, 5)
#self.ImageViewer = vtkImageViewer2()
#a = wx.Panel(self, -1)
#a.SetBackgroundColour("#00FF00")
#self.topSizer.Add(a,1,wx.EXPAND|wx.RIGHT, 5)
#self.wxImageSlider = wx.Slider(self, -1, style=wx.SL_VERTICAL | wx.SL_LEFT | wx.SL_MIN_MAX_LABELS | wx.EXPAND)
#self.topSizer.Add(self.wxImageSlider, 0, wx.EXPAND)
# RIGHT
self.Interactor3DRenderWindow = wxVTKRenderWindowInteractor(self,-1)
self.Interactor3DRenderWindow.Enable(1)
self.topSizer.Add(self.Interactor3DRenderWindow, 1, wx.EXPAND)
# Botton Sizer Configuration
self.bottomSizer = wx.BoxSizer(wx.HORIZONTAL)
# Load and Save Section
self.loadSaveStaticBox = wx.StaticBox(self, -1, "Load and Save")
self.wxLoadSaveStaticBox = wx.StaticBoxSizer(self.loadSaveStaticBox, wx.VERTICAL)
self.wxLoadButton = wx.Button(self, -1, "Load Image", size=(100, 40))
self.wxLoadSaveStaticBox.Add(self.wxLoadButton, 1, wx.ALIGN_CENTER_VERTICAL | wx.ALL, 10)
self.wxSaveButton = wx.Button(self, -1, "Save Volume", size=(100, 40))
self.wxLoadSaveStaticBox.Add(self.wxSaveButton, 1, wx.ALIGN_CENTER_VERTICAL | wx.ALL, 10)
self.bottomSizer.Add(self.wxLoadSaveStaticBox, 0, wx.EXPAND | wx.RIGHT, 10)
# Configuration Section
self.configurationStaticBox = wx.StaticBox(self, -1, "Configuration")
self.wxConfigurationStaticBox = wx.StaticBoxSizer(self.configurationStaticBox, wx.VERTICAL)
self.wxThresholdRadioBox = wx.RadioBox(self, -1, "Threshold Configuration", (-1, -1), (-1, -1), ["Upper", "Lower", "Range"], 3, wx.RA_SPECIFY_COLS)
self.wxConfigurationStaticBox.Add(self.wxThresholdRadioBox, 0, wx.EXPAND | wx.ALL, 5)
self.bottomSizer.Add(self.wxConfigurationStaticBox, 1, wx.EXPAND)
# Sliders
self.wxLowerStaticText = wx.StaticText(self, -1, "Lower: 0")
self.wxConfigurationStaticBox.Add(self.wxLowerStaticText, 0, wx.EXPAND | wx.LEFT | wx.RIGHT, 1)
self.wxLowerSlider = wx.Slider(self, -1, minValue=0, maxValue=100)
self.wxConfigurationStaticBox.Add(self.wxLowerSlider, 0, wx.EXPAND | wx.LEFT | wx.RIGHT, 1)
self.wxUpperStaticText = wx.StaticText(self, -1, "Upper: 0")
self.wxConfigurationStaticBox.Add(self.wxUpperStaticText, 0, wx.EXPAND | wx.LEFT | wx.RIGHT, 1)
self.wxUpperSlider = wx.Slider(self, -1, minValue=0, maxValue=100)
self.wxConfigurationStaticBox.Add(self.wxUpperSlider, 0, wx.EXPAND | wx.LEFT | wx.RIGHT, 1)
# Filter Section
self.filterStaticBox = wx.StaticBox(self, -1, "Filters")
self.wxFilterStaticBox = wx.StaticBoxSizer(self.filterStaticBox, wx.VERTICAL)
self.bottomSizer.Add(self.wxFilterStaticBox, 1, wx.EXPAND|wx.LEFT,10)
# Root Sizer Configuration
self.rootSizer = wx.BoxSizer(wx.VERTICAL)
self.rootSizer.Add(self.topSizer, 2, wx.EXPAND | wx.ALL, 10)
self.rootSizer.Add(self.bottomSizer, 1, wx.EXPAND | wx.ALL, 10)
self.SetSizer(self.rootSizer)
self.Layout()
#Status Bar
# Implement
# Binds
self.wxLoadButton.Bind(wx.EVT_BUTTON, self.OnButtonLoadImageClick)
self.wxThresholdRadioBox.Bind(wx.EVT_RADIOBOX, self.OnWxThresholdRadioBoxChanged)
self.wxUpperSlider.Bind(wx.EVT_SLIDER, self.OnWxUpperSliderChanged)
self.wxLowerSlider.Bind(wx.EVT_SLIDER, self.OnWxLowerSliderChanged)
# States
self.wxSaveButton.Disable()
self.loadSaveStaticBox.Disable()
self.wxThresholdRadioBox.Disable()
self.wxUpperSlider.Disable()
def OnWxThresholdRadioBoxChanged(self, evt):
selection = evt.Int
self.wxLowerSlider.Disable()
self.wxUpperSlider.Disable()
if selection == 0: # Upper
self.wxUpperSlider.Enable()
self.wxLowerStaticText.SetLabel("Lower: 0")
self.wxLowerSlider.SetValue(0)
elif selection == 1: # Lower
self.wxLowerSlider.Enable()
self.wxUpperStaticText.SetLabel("Upper: 0")
self.wxUpperSlider.SetValue(0)
elif selection == 2: # Range
self.wxLowerSlider.Enable()
self.wxUpperSlider.Enable()
else:
raise NotImplementedError()
def OnWxUpperSliderChanged(self, evt):
self.wxUpperStaticText.SetLabel("Upper: " +str(self.wxUpperSlider.GetValue()))
def OnWxLowerSliderChanged(self, evt):
self.wxLowerStaticText.SetLabel("Lower: " + str(self.wxLowerSlider.GetValue()))
    def showImage(self, imageData):
        # Draft implementation: the 2D image viewer widgets are commented out
        # in loadView, so this method is intentionally disabled for now.
        raise NotImplementedError()
        self.ImageViewer.SetInputConnection(imageData.GetOutputPort())
        self.ImageViewer.SetupInteractor(self.InteractorImageRenderWindow)
        self.ImageViewer.Render()
def OnButtonLoadImageClick(self, evt):
dirDialog = wx.DirDialog(self,
message="Select a Folder with Dicom Files",
defaultPath=r"C:\Projects\IC\sample", # Temporary
#defaultPath=r"C:\Users\work\Desktop\dicoms\old\dentalTeste", # Temporary
style=wx.DD_DEFAULT_STYLE,)
if dirDialog.ShowModal() == wx.ID_OK:
self.loadDicomImages(dirDialog.GetPath())
#self.wxSaveButton.Enable()
dirDialog.Destroy()
def loadDicomImages(self, dicomFilesPath):
        # TODO: verify that the directory actually contains DICOM files
        # sample one file to read the pixel-range tags from its header
        dicomPath = os.path.join(dicomFilesPath, os.listdir(dicomFilesPath)[1])
        dicomFile = pydicom.read_file(dicomPath)
if 0x00280106 in dicomFile and 0x00280107 in dicomFile:
self.IMAGE_SMALLEST_PIXEL = dicomFile[0x00280106].value # SmallestImagePixelValue
self.IMAGE_LARGEST_PIXEL = dicomFile[0x00280107].value # LargestImagePixelValue
else:
self.IMAGE_SMALLEST_PIXEL = 0
self.IMAGE_LARGEST_PIXEL = 100
self.wxUpperSlider.SetRange(self.IMAGE_SMALLEST_PIXEL, self.IMAGE_LARGEST_PIXEL)
self.wxLowerSlider.SetRange(self.IMAGE_SMALLEST_PIXEL, self.IMAGE_LARGEST_PIXEL)
self.wxLowerSlider.Update()
self.Layout()
dicomImages = vtkDICOMImageReader()
dicomImages.SetDirectoryName(dicomFilesPath)
dicomImages.Update()
self.DICOM_IMAGES = dicomImages
self.adjustImageThreshold(800)
#self.__plotImage(self.ROOT_PIPE)
self.createVolume()
self.decimateVolume(0.5)
self.view3DImage(self.ROOT_PIPE)
# View
#self.wxConfigurationStaticBox.Enable()
#self.wxFilterStaticBox.Enable()
def createVolume(self):
mesh_3d = vtkDiscreteMarchingCubes()
mesh_3d.SetInputConnection(self.ROOT_PIPE.GetOutputPort())
mesh_3d.GenerateValues(1, 1, 1)
mesh_3d.Update()
self.ROOT_PIPE = mesh_3d
    def __plotImage(self, imageData):
        # Draft implementation: disabled until the image viewer widgets in
        # loadView are re-enabled.
        raise NotImplementedError()
        self.ImageViewer.SetInputConnection(imageData.GetOutputPort())
        self.ImageViewer.SetupInteractor(self.InteractorImageRenderWindow)
        self.ImageViewer.Render()
        self.InteractorImageRenderWindow.Start()
def adjustImageThreshold(self, lower_limit=None, upper_value=None):
thresholdFilter = vtkImageThreshold()
thresholdFilter.SetInputConnection(self.DICOM_IMAGES.GetOutputPort())
if lower_limit is not None and upper_value is not None: # Threshold By Range
            if lower_limit > upper_value:
                lower_limit, upper_value = upper_value, lower_limit
thresholdFilter.ThresholdBetween(lower_limit, upper_value)
elif lower_limit is not None and upper_value is None: # Threshold By lower
thresholdFilter.ThresholdByLower(lower_limit)
elif upper_value is not None and lower_limit is None: # Threshold By Upper
thresholdFilter.ThresholdByUpper(upper_value)
else:
thresholdFilter.ThresholdByLower(255)
thresholdFilter.SetInValue(0)
thresholdFilter.ReplaceInOn()
        thresholdFilter.SetOutValue(1)
        thresholdFilter.ReplaceOutOn()
        thresholdFilter.Update()
self.ROOT_PIPE = thresholdFilter
self.FIRST_IMAGE = thresholdFilter
return thresholdFilter
def saveStlFile(self, file_name="3d_volume"):
writer = vtkSTLWriter()
writer.SetInputConnection(self.ROOT_PIPE.GetOutputPort())
writer.SetFileTypeToBinary()
writer.SetFileName("".join([file_name, ".stl"]))
writer.Write()
def gaussianFilter(self, imageData):
gaussianFilter = vtkImageGaussianSmooth()
gaussianFilter.SetInputConnection(imageData.GetOutputPort())
gaussianFilter.Update()
return gaussianFilter
def fillHoles(self):
fillHolesFilter = vtkFillHolesFilter()
fillHolesFilter.SetInputConnection(self.ROOT_PIPE.GetOutputPort())
fillHolesFilter.SetHoleSize(1000.0)
dataFixed = vtkPolyDataNormals()
dataFixed.SetInputConnection(fillHolesFilter.GetOutputPort())
dataFixed.ConsistencyOn()
dataFixed.SplittingOff()
dataFixed.Update()
self.ROOT_PIPE = dataFixed
return dataFixed
    def smoothVolume(self, smooth_type="laplacian", level=1):
        smooth = None
        if smooth_type == "laplacian":
            smooth = vtkSmoothPolyDataFilter()
            smooth.SetNumberOfIterations(level)
        elif smooth_type == "linear":
            smooth = vtkLinearSubdivisionFilter()
            smooth.SetNumberOfSubdivisions(level)
        elif smooth_type == "loop":
            smooth = vtkLoopSubdivisionFilter()
            smooth.SetNumberOfSubdivisions(level)
        elif smooth_type == "butterfly":
            smooth = vtkButterflySubdivisionFilter()
            smooth.SetNumberOfSubdivisions(level)
        else:
            print "Invalid type; assuming laplacian"
            self.smoothVolume(smooth_type="laplacian", level=level)
            return
        smooth.SetInputConnection(self.ROOT_PIPE.GetOutputPort())
        smooth.Update()
        self.ROOT_PIPE = smooth
def decimateVolume(self, reduction=0.1):
if not 0 < reduction < 1:
reduction = 0.3 # 30%
decimatedVolume = vtkDecimatePro()
decimatedVolume.SetInputConnection(self.ROOT_PIPE.GetOutputPort())
decimatedVolume.SetTargetReduction(reduction)
decimatedVolume.Update()
self.ROOT_PIPE = decimatedVolume
def view3DImage(self, imageData):
polyDataMapper = vtkPolyDataMapper()
polyDataMapper.ImmediateModeRenderingOn()
polyDataMapper.SetInputConnection(imageData.GetOutputPort())
polyDataMapper.Update()
volume_3d = vtkActor()
volume_3d.SetMapper(polyDataMapper)
volume_3d.GetProperty().SetColor(0,0,1)
renderer = vtkRenderer()
renderer.AddActor(volume_3d)
renderer.SetBackground(1.0, 1.0, 1.0) # White
self.Interactor3DRenderWindow.GetRenderWindow().AddRenderer(renderer)
self.Layout()
#self.InteractorRenderWindow.GetRenderWindow().SetSize(500,500)
#self.InteractorRenderWindow.Initialize()
#self.InteractorRenderWindow.GetRenderWindow().Render()
#self.InteractorRenderWindow.Start()
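# Hedged sketch (illustrative addition, not part of the original panel): a
# minimal headless version of the pipeline used above -- read DICOMs,
# threshold, extract a mesh, decimate, and write an STL. All arguments are
# hypothetical placeholders.
def build_stl_from_dicom(dicom_dir, stl_path, lower_limit=800, reduction=0.5):
    reader = vtkDICOMImageReader()
    reader.SetDirectoryName(dicom_dir)
    reader.Update()
    threshold = vtkImageThreshold()
    threshold.SetInputConnection(reader.GetOutputPort())
    threshold.ThresholdByLower(lower_limit)  # "in" range = values <= lower_limit
    threshold.SetInValue(0)    # background label
    threshold.ReplaceInOn()
    threshold.SetOutValue(1)   # foreground label for values above the limit
    threshold.ReplaceOutOn()
    threshold.Update()
    mesh = vtkDiscreteMarchingCubes()
    mesh.SetInputConnection(threshold.GetOutputPort())
    mesh.GenerateValues(1, 1, 1)  # extract the surface of label 1
    decimate = vtkDecimatePro()
    decimate.SetInputConnection(mesh.GetOutputPort())
    decimate.SetTargetReduction(reduction)
    decimate.Update()
    writer = vtkSTLWriter()
    writer.SetInputConnection(decimate.GetOutputPort())
    writer.SetFileTypeToBinary()
    writer.SetFileName(stl_path)
    writer.Write()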
def main():
app = wx.App()
main_frame = wx.Frame(None, size=(800,600))
sizer = wx.BoxSizer(wx.VERTICAL)
main_frame.SetSizer(sizer)
frame = VtkPanel(main_frame)
sizer.Add(frame,1,wx.EXPAND)
main_frame.Show()
main_frame.CenterOnScreen()
app.MainLoop()
if __name__ == "__main__":
main()
|
gabrielmini/vtkDicomRender
|
main.py
|
Python
|
lgpl-3.0
| 12,916
|
[
"VTK"
] |
903c94660f57695409edd96b99a2658d3e039d7954b47c41417dedce827eab2b
|
from datetime import datetime
from abc import ABCMeta, abstractmethod
import argparse
import os.path
import numpy as np
import gdal
import netCDF4
import yaml
from eodatasets import serialise
from ingester.utils import _get_nbands_lats_lons_from_gdalds
EPOCH = datetime(1970, 1, 1, 0, 0, 0)
class BaseNetCDF(object):
"""
Base class for creating a NetCDF file based upon GeoTIFF data.
Sub-classes will create the NetCDF in different structures.
"""
__metaclass__ = ABCMeta
def __init__(self, netcdf_path, mode='r', chunk_x=400, chunk_y=400, chunk_time=1):
self.nco = netCDF4.Dataset(netcdf_path, mode)
self.netcdf_path = netcdf_path
self.chunk_x = chunk_x
self.chunk_y = chunk_y
self.chunk_time = chunk_time
self.tile_spec = TileSpec()
def close(self):
self.nco.close()
def _create_standard_dimensions(self, lats, lons):
"""
Creates latitude, longitude and time dimension
Time is unlimited
Latitude and longitude are given the values in lats,lons
"""
self.nco.createDimension('longitude', len(lons))
self.nco.createDimension('latitude', len(lats))
self.nco.createDimension('time', None)
timeo = self.nco.createVariable('time', 'f4', 'time')
timeo.units = 'seconds since 1970-01-01 00:00:00'
timeo.standard_name = 'time'
timeo.long_name = 'Time, unix time-stamp'
timeo.calendar = 'standard'
timeo.axis = "T"
lon = self.nco.createVariable('longitude', 'f4', 'longitude')
lon.units = 'degrees_east'
lon.standard_name = 'longitude'
lon.long_name = 'longitude'
lon.axis = "X"
lat = self.nco.createVariable('latitude', 'f4', 'latitude')
lat.units = 'degrees_north'
lat.standard_name = 'latitude'
lat.long_name = 'latitude'
lat.axis = "Y"
lon[:] = lons
lat[:] = lats
def _set_wgs84_crs(self):
crso = self.nco.createVariable('crs', 'i4')
crso.long_name = "Lon/Lat Coords in WGS84"
crso.grid_mapping_name = "latitude_longitude"
crso.longitude_of_prime_meridian = 0.0
crso.semi_major_axis = 6378137.0
crso.inverse_flattening = 298.257223563
return crso
def _set_global_attributes(self):
self.nco.spatial_coverage = "1.000000 degrees grid"
self.nco.geospatial_lat_min = self.tile_spec.get_lat_min()
self.nco.geospatial_lat_max = self.tile_spec.get_lat_max()
self.nco.geospatial_lat_units = "degrees_north"
self.nco.geospatial_lat_resolution = "0.00025"
self.nco.geospatial_lon_min = self.tile_spec.get_lon_min()
self.nco.geospatial_lon_max = self.tile_spec.get_lon_max()
self.nco.geospatial_lon_units = "degrees_east"
self.nco.geospatial_lon_resolution = "0.00025"
creation_date = datetime.utcnow().strftime("%Y%m%d")
self.nco.history = "NetCDF-CF file created %s." % creation_date
# Attributes for NCI Compliance
self.nco.title = "Experimental Data files From the Australian Geoscience Data Cube - DO NOT USE"
self.nco.summary = "These files are experimental, short lived, and the format will change."
self.nco.source = "This data is a reprojection and retile of the Landsat L1T surface reflectance " \
"scene data available from /g/data/rs0/scenes/"
self.nco.product_version = "0.0.0"
self.nco.date_created = datetime.today().isoformat()
self.nco.Conventions = 'CF-1.6'
self.nco.license = "Creative Commons Attribution 4.0 International CC BY 4.0"
def _add_time(self, start_date):
# Convert to datetime at midnight
start_datetime = datetime.combine(start_date, datetime.min.time())
# Convert to seconds since epoch (1970-01-01)
start_datetime_delta = start_datetime - EPOCH
times = self.nco.variables['time']
# Save as next coordinate in file
times[len(times)] = start_datetime_delta.total_seconds()
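    # Hedged sketch (illustrative addition, not part of the original class):
    # the epoch arithmetic used by _add_time, checked against netCDF4's own
    # converter. The date below is a hypothetical example.
    @staticmethod
    def _example_time_value():
        start = datetime(2015, 6, 1)
        manual = (start - EPOCH).total_seconds()
        via_lib = netCDF4.date2num(start, 'seconds since 1970-01-01 00:00:00',
                                   calendar='standard')
        assert manual == via_lib
        return manual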
@classmethod
def create_from_tile_spec(cls, file_path, tile_spec):
netcdf = cls(file_path, mode='w')
netcdf.tile_spec = tile_spec
netcdf._set_wgs84_crs()
netcdf._set_global_attributes()
netcdf._create_variables()
return netcdf
@classmethod
def open_with_tile_spec(cls, file_path, tile_spec):
netcdf = cls(file_path, mode='a')
netcdf.tile_spec = tile_spec
return netcdf
@abstractmethod
def _create_variables(self):
"""
Create the structure of the NetCDF file, ie, which variables with which dimensions
"""
pass
@abstractmethod
def _write_data_to_netcdf(self, dataset, eodataset):
"""
Read in all the data from the geotiff `dataset` and write it as a new time
slice to the NetCDF file
:param dataset: open GDAL dataset
:return:
"""
pass
def append_gdal_tile(self, geotiff, eodataset):
"""
Read a geotiff file and append it to the open NetCDF file
:param geotiff:string path to a geotiff file
:return:
"""
gdal_dataset = gdal.Open(geotiff)
self._add_time(eodataset.acquisition.aos)
self._write_data_to_netcdf(gdal_dataset, eodataset)
del gdal_dataset
class MultiVariableNetCDF(BaseNetCDF):
"""
Create individual datasets for each `band` of data
This closely matches the existing GeoTiff tile file structure
"""
def _create_variables(self):
self._create_standard_dimensions(self.tile_spec.lats, self.tile_spec.lons)
self._create_bands(self.tile_spec.bands)
# Create Variable Length Variable to store extra metadata
extra_meta = self.nco.createVariable('extra_metadata', str, 'time')
extra_meta.long_name = 'Extra source metadata'
def _create_bands(self, bands):
        for i, _ in enumerate(bands, 1):
            band = self.nco.createVariable('band' + str(i), 'i2', ('time', 'latitude', 'longitude'),
zlib=True, chunksizes=[self.chunk_time, self.chunk_y, self.chunk_x],
fill_value=-999)
band.grid_mapping = 'crs'
band.set_auto_maskandscale(False)
band.units = '1'
srcfilename = self.nco.createVariable('srcfilename_band' + str(i), str, 'time')
srcfilename.long_name = 'Source filename from data import'
def _get_netcdf_bands(self, bands):
netcdfbands = []
for i, _ in enumerate(bands, 1):
band = self.nco.variables['band' + str(i)]
netcdfbands.append(band)
return netcdfbands
def _write_data_to_netcdf(self, gdal_dataset, eodataset):
netcdfbands = self._get_netcdf_bands(self.tile_spec.bands)
gdal_bands = [gdal_dataset.GetRasterBand(idx + 1) for idx in range(gdal_dataset.RasterCount)]
metadata_bands = sorted(eodataset.image.bands.values(), key=lambda band: band.number)
time_index = len(self.nco.variables['time']) - 1
for in_band, out_band, metadata in zip(gdal_bands, netcdfbands, metadata_bands):
out_band.long_name = metadata.number
out_band.missing_value = -999
out_band[time_index, :, :] = in_band.ReadAsArray()
extra_meta = self.nco.variables['extra_metadata']
# FIXME Yucky, we don't really want to be using yaml and private methods here
extra_meta[time_index] = yaml.dump(eodataset, Dumper=serialise._create_relative_dumper('/'))
class SingleVariableNetCDF(BaseNetCDF):
"""
Store all data values in a single dataset with an extra dimension for `band`
"""
def _create_variables(self):
lats = self.tile_spec.lats
lons = self.tile_spec.lons
self._create_standard_dimensions(lats, lons)
self._create_band_dimension()
self._create_data_variable()
def _create_band_dimension(self):
nbands = len(self.tile_spec.bands)
self.nco.createDimension('band', nbands)
band = self.nco.createVariable('band_name', str, 'band')
band.long_name = "Surface reflectance band name/number"
def _create_data_variable(self):
chunk_band = 1
observations = self.nco.createVariable('observation', 'i2', ('band', 'time', 'latitude', 'longitude'),
zlib=True,
chunksizes=[chunk_band, self.chunk_time, self.chunk_y, self.chunk_x],
fill_value=-999)
observations.long_name = "Surface reflectance factor"
observations.units = '1'
observations.grid_mapping = 'crs'
observations.set_auto_maskandscale(False)
observations.coordinates = 'band_name'
def _write_data_to_netcdf(self, gdal_dataset, eodataset):
nbands, lats, lons = _get_nbands_lats_lons_from_gdalds(gdal_dataset)
time_index = len(self.nco.dimensions['time']) - 1
band_var = self.nco.variables['band_name']
ds_bands = sorted(eodataset.image.bands.values(), key=lambda band: band.number)
observation = self.nco.variables['observation']
for band_idx in range(nbands):
in_band = gdal_dataset.GetRasterBand(band_idx + 1)
metadata = ds_bands[band_idx]
band_var[band_idx] = metadata.number
observation[band_idx, time_index, :, :] = in_band.ReadAsArray()
class TileSpec(object):
bands = []
lats = []
lons = []
lat_resolution = None
lon_resolution = None
    def __init__(self, bands=None, lats=None, lons=None, lat_resolution=None, lon_resolution=None):
        self.bands = [] if bands is None else bands
        self.lats = [] if lats is None else lats
        self.lons = [] if lons is None else lons
        self.lat_resolution = lat_resolution
        self.lon_resolution = lon_resolution
def get_lat_min(self):
return min(self.lats)
def get_lat_max(self):
return max(self.lats)
def get_lon_min(self):
return min(self.lons)
def get_lon_max(self):
return max(self.lons)
class Messenger:
def __init__(self, **kwargs):
self.__dict__ = kwargs
def get_input_spec_from_file(filename):
gdal_dataset = gdal.Open(filename)
return tile_spec_from_gdal_dataset(gdal_dataset)
def input_spec_from_eodataset(eodataset):
pass
def tile_spec_from_gdal_dataset(gdal_dataset):
"""
Return a specification of a GDAL dataset, used for creating a new NetCDF file to hold the same data
Example specification:
dict(bands=[{'dtype': 'Int16',
'name': 'Photosynthetic Vegetation',
'no_data': -999.0},
{'dtype': 'Int16',
'name': 'Non-Photosynthetic Vegetation',
'no_data': -999.0},
{'dtype': 'Int16', 'name': 'Bare Soil', 'no_data': -999.0},
{'dtype': 'Int16', 'name': 'Unmixing Error', 'no_data': -999.0}],
lats=array([-33., -33.00025, -33.0005, ..., -33.99925, -33.9995,
-33.99975]),
lons=array([150., 150.00025, 150.0005, ..., 150.99925, 150.9995,
150.99975]))
    :param gdal_dataset: a GDAL dataset
    :return: a TileSpec describing the structure
    """
nbands, nlats, nlons = gdal_dataset.RasterCount, gdal_dataset.RasterYSize, gdal_dataset.RasterXSize
geotransform = gdal_dataset.GetGeoTransform()
lons = np.arange(nlons)*geotransform[1]+geotransform[0]
lats = np.arange(nlats)*geotransform[5]+geotransform[3]
bands = []
for band_idx in range(nbands):
src_band = gdal_dataset.GetRasterBand(band_idx + 1)
src_metadata = src_band.GetMetadata() # eg. filename: 'source.tif', name: 'Photosynthetic Vegetation'
name = src_metadata.get('name')
dtype = gdal.GetDataTypeName(src_band.DataType)
no_data = src_band.GetNoDataValue()
bands.append(dict(name=name, dtype=dtype, no_data=no_data))
    return TileSpec(bands=bands, lats=lats, lons=lons, lat_resolution=geotransform[5], lon_resolution=geotransform[1])
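# A minimal usage sketch (not called anywhere in this module): build a TileSpec
# from a GeoTIFF and inspect its extents. The filename 'example.tif' is a
# hypothetical placeholder, and this assumes GDAL can open the file.
def _example_inspect_tile_spec():
    gdal_dataset = gdal.Open('example.tif')
    spec = tile_spec_from_gdal_dataset(gdal_dataset)
    print('lat range: %s to %s' % (spec.get_lat_min(), spec.get_lat_max()))
    print('lon range: %s to %s' % (spec.get_lon_min(), spec.get_lon_max()))
    print('bands: %s' % [band['name'] for band in spec.bands])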
def append_to_netcdf(gdal_tile, netcdf_path, eodataset, netcdf_class=MultiVariableNetCDF):
    """
    Append a raster slice to a new or existing NetCDF file
    :param gdal_tile: pathname to raster slice, readable by gdal
    :param netcdf_path: pathname to the NetCDF file to create or append to
    :param eodataset: EO dataset metadata associated with the raster slice
    :param netcdf_class: NetCDF writer class, either MultiVariableNetCDF or SingleVariableNetCDF
    :return: None
    """
tile_spec = get_input_spec_from_file(gdal_tile)
if not os.path.isfile(netcdf_path):
ncfile = netcdf_class.create_from_tile_spec(netcdf_path, tile_spec)
else:
ncfile = netcdf_class.open_with_tile_spec(netcdf_path, tile_spec)
ncfile.append_gdal_tile(gdal_tile, eodataset)
ncfile.close()
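# A minimal sketch (assumed filenames, not part of the original module):
# appending a time series of tiles to one NetCDF file. The first call creates
# the file; each later call appends a new time slice.
def _example_build_time_series(tiles_with_metadata):
    # tiles_with_metadata: iterable of (geotiff_path, eodataset) pairs,
    # ordered by acquisition time.
    for geotiff_path, eodataset in tiles_with_metadata:
        append_to_netcdf(geotiff_path, 'stack.nc', eodataset,
                         netcdf_class=SingleVariableNetCDF)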
def main():
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument("--create", action='store_true', help="Create a new, empty, NetCDF file")
group.add_argument("--append", action='store_true', help="Append the geotiff to a new portion of the NetCDF")
parser.add_argument("-b", "--band_as_dimension", action="store_true",
help="Store bands as a dimension instead of as new dataset")
parser.add_argument("geotiff", help="Input GeoTIFF filename")
parser.add_argument("netcdf", help="NetCDF file to create or write to")
args = parser.parse_args()
if args.band_as_dimension:
netcdf_class = SingleVariableNetCDF
else:
netcdf_class = MultiVariableNetCDF
if args.create:
dcnc = netcdf_class(args.netcdf, mode='w')
tile_spec = get_input_spec_from_file(args.geotiff)
dcnc.create_from_tile_spec(tile_spec)
dcnc.close()
elif args.append:
dcnc = netcdf_class(args.netcdf, mode='a')
dcnc.append_gdal_tile(args.geotiff)
dcnc.close()
else:
        print('Unknown action')
if __name__ == '__main__':
main()
|
omad/datacube-experiments
|
ingester/netcdf_writer.py
|
Python
|
bsd-3-clause
| 14,297
|
[
"NetCDF"
] |
84dffde0baac5bc1dc57e7e7aa16fca3c49aa93d775f8e69ecb25468228f2324
|
# coding: utf-8
# Copyright (c) Pymatgen Development Team.
# Distributed under the terms of the MIT License.
from __future__ import division, unicode_literals
import unittest
import os
import json
import numpy as np
from pymatgen.analysis.structure_matcher import StructureMatcher, \
ElementComparator, FrameworkComparator, OrderDisorderElementComparator, \
OccupancyComparator
from monty.json import MontyDecoder
from pymatgen.core.operations import SymmOp
from pymatgen import Structure, Element, Lattice
from pymatgen.util.coord import find_in_coord_list_pbc
from pymatgen.util.testing import PymatgenTest
test_dir = os.path.join(os.path.dirname(__file__), "..", "..", "..",
'test_files')
class StructureMatcherTest(PymatgenTest):
def setUp(self):
with open(os.path.join(test_dir, "TiO2_entries.json"), 'r') as fp:
entries = json.load(fp, cls=MontyDecoder)
self.struct_list = [e.structure for e in entries]
self.oxi_structs = [self.get_structure("Li2O"),
Structure.from_file(os.path.join(
test_dir, "POSCAR.Li2O"))]
def test_ignore_species(self):
s1 = Structure.from_file(os.path.join(test_dir, "LiFePO4.cif"))
s2 = Structure.from_file(os.path.join(test_dir, "POSCAR"))
m = StructureMatcher(ignored_species=["Li"], primitive_cell=False,
attempt_supercell=True)
self.assertTrue(m.fit(s1, s2))
self.assertTrue(m.fit_anonymous(s1, s2))
groups = m.group_structures([s1, s2])
self.assertEqual(len(groups), 1)
s2.make_supercell((2, 1, 1))
ss1 = m.get_s2_like_s1(s2, s1, include_ignored_species=True)
self.assertAlmostEqual(ss1.lattice.a, 20.820740000000001)
self.assertEqual(ss1.composition.reduced_formula, "LiFePO4")
self.assertEqual({
k.symbol: v.symbol for k, v in
m.get_best_electronegativity_anonymous_mapping(s1, s2).items()},
{"Fe": "Fe", "P": "P", "O": "O"})
def test_get_supercell_size(self):
l = Lattice.cubic(1)
l2 = Lattice.cubic(0.9)
s1 = Structure(l, ['Mg', 'Cu', 'Ag', 'Cu', 'Ag'], [[0]*3]*5)
s2 = Structure(l2, ['Cu', 'Cu', 'Ag'], [[0]*3]*3)
sm = StructureMatcher(supercell_size='volume')
self.assertEqual(sm._get_supercell_size(s1, s2),
(1, True))
self.assertEqual(sm._get_supercell_size(s2, s1),
(1, True))
sm = StructureMatcher(supercell_size='num_sites')
self.assertEqual(sm._get_supercell_size(s1, s2),
(2, False))
self.assertEqual(sm._get_supercell_size(s2, s1),
(2, True))
sm = StructureMatcher(supercell_size='Ag')
self.assertEqual(sm._get_supercell_size(s1, s2),
(2, False))
self.assertEqual(sm._get_supercell_size(s2, s1),
(2, True))
sm = StructureMatcher(supercell_size='wfieoh')
self.assertRaises(ValueError, sm._get_supercell_size, s1, s2)
def test_cmp_fstruct(self):
sm = StructureMatcher()
s1 = np.array([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]])
s2 = np.array([[0.11, 0.22, 0.33]])
frac_tol = np.array([0.02, 0.03, 0.04])
mask = np.array([[False, False]])
mask2 = np.array([[True, False]])
self.assertRaises(ValueError, sm._cmp_fstruct, s2, s1, frac_tol, mask.T)
self.assertRaises(ValueError, sm._cmp_fstruct, s1, s2, frac_tol, mask.T)
self.assertTrue(sm._cmp_fstruct(s1, s2, frac_tol, mask))
self.assertFalse(sm._cmp_fstruct(s1, s2, frac_tol/2, mask))
self.assertFalse(sm._cmp_fstruct(s1, s2, frac_tol, mask2))
def test_cart_dists(self):
sm = StructureMatcher()
l = Lattice.orthorhombic(1, 2, 3)
s1 = np.array([[0.13, 0.25, 0.37], [0.1, 0.2, 0.3]])
s2 = np.array([[0.11, 0.22, 0.33]])
s3 = np.array([[0.1, 0.2, 0.3], [0.11, 0.2, 0.3]])
s4 = np.array([[0.1, 0.2, 0.3], [0.1, 0.6, 0.7]])
mask = np.array([[False, False]])
mask2 = np.array([[False, True]])
mask3 = np.array([[False, False], [False, False]])
mask4 = np.array([[False, True], [False, True]])
n1 = (len(s1) / l.volume) ** (1/3)
n2 = (len(s2) / l.volume) ** (1/3)
self.assertRaises(ValueError, sm._cart_dists, s2, s1, l, mask.T, n2)
self.assertRaises(ValueError, sm._cart_dists, s1, s2, l, mask.T, n1)
d, ft, s = sm._cart_dists(s1, s2, l, mask, n1)
self.assertTrue(np.allclose(d, [0]))
self.assertTrue(np.allclose(ft, [-0.01, -0.02, -0.03]))
self.assertTrue(np.allclose(s, [1]))
#check that masking best value works
d, ft, s = sm._cart_dists(s1, s2, l, mask2, n1)
self.assertTrue(np.allclose(d, [0]))
self.assertTrue(np.allclose(ft, [0.02, 0.03, 0.04]))
self.assertTrue(np.allclose(s, [0]))
#check that averaging of translation is done properly
d, ft, s = sm._cart_dists(s1, s3, l, mask3, n1)
self.assertTrue(np.allclose(d, [0.08093341]*2))
self.assertTrue(np.allclose(ft, [0.01, 0.025, 0.035]))
self.assertTrue(np.allclose(s, [1, 0]))
#check distances are large when mask allows no 'real' mapping
d, ft, s = sm._cart_dists(s1, s4, l, mask4, n1)
self.assertTrue(np.min(d) > 1e8)
self.assertTrue(np.min(ft) > 1e8)
def test_get_mask(self):
sm = StructureMatcher(comparator=ElementComparator())
l = Lattice.cubic(1)
s1 = Structure(l, ['Mg', 'Cu', 'Ag', 'Cu'], [[0]*3]*4)
s2 = Structure(l, ['Cu', 'Cu', 'Ag'], [[0]*3]*3)
result = [[True, False, True, False],
[True, False, True, False],
[True, True, False, True]]
m, inds, i = sm._get_mask(s1, s2, 1, True)
self.assertTrue(np.all(m == result))
self.assertTrue(i == 2)
self.assertEqual(inds, [2])
#test supercell with match
result = [[1, 1, 0, 0, 1, 1, 0, 0],
[1, 1, 0, 0, 1, 1, 0, 0],
[1, 1, 1, 1, 0, 0, 1, 1]]
m, inds, i = sm._get_mask(s1, s2, 2, True)
self.assertTrue(np.all(m == result))
self.assertTrue(i == 2)
self.assertTrue(np.allclose(inds, np.array([4])))
#test supercell without match
result = [[1, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 1, 1],
[1, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 1, 1]]
m, inds, i = sm._get_mask(s2, s1, 2, True)
self.assertTrue(np.all(m == result))
self.assertTrue(i == 0)
self.assertTrue(np.allclose(inds, np.array([])))
#test s2_supercell
result = [[1, 1, 1], [1, 1, 1],
[0, 0, 1], [0, 0, 1],
[1, 1, 0], [1, 1, 0],
[0, 0, 1], [0, 0, 1]]
m, inds, i = sm._get_mask(s2, s1, 2, False)
self.assertTrue(np.all(m == result))
self.assertTrue(i == 0)
self.assertTrue(np.allclose(inds, np.array([])))
#test for multiple translation indices
s1 = Structure(l, ['Cu', 'Ag', 'Cu', 'Ag', 'Ag'], [[0]*3]*5)
s2 = Structure(l, ['Ag', 'Cu', 'Ag'], [[0]*3]*3)
result = [[1, 0, 1, 0, 0],
[0, 1, 0, 1, 1],
[1, 0, 1, 0, 0]]
m, inds, i = sm._get_mask(s1, s2, 1, True)
self.assertTrue(np.all(m == result))
self.assertTrue(i == 1)
self.assertTrue(np.allclose(inds, [0, 2]))
def test_get_supercells(self):
sm = StructureMatcher(comparator=ElementComparator())
l = Lattice.cubic(1)
l2 = Lattice.cubic(0.5)
s1 = Structure(l, ['Mg', 'Cu', 'Ag', 'Cu'], [[0]*3]*4)
s2 = Structure(l2, ['Cu', 'Cu', 'Ag'], [[0]*3]*3)
scs = list(sm._get_supercells(s1, s2, 8, False))
for x in scs:
self.assertAlmostEqual(abs(np.linalg.det(x[3])), 8)
self.assertEqual(len(x[0]), 4)
self.assertEqual(len(x[1]), 24)
self.assertEqual(len(scs), 48)
scs = list(sm._get_supercells(s2, s1, 8, True))
for x in scs:
self.assertAlmostEqual(abs(np.linalg.det(x[3])), 8)
self.assertEqual(len(x[0]), 24)
self.assertEqual(len(x[1]), 4)
self.assertEqual(len(scs), 48)
def test_fit(self):
"""
Take two known matched structures
1) Ensure match
2) Ensure match after translation and rotations
3) Ensure no-match after large site translation
4) Ensure match after site shuffling
"""
sm = StructureMatcher()
self.assertTrue(sm.fit(self.struct_list[0], self.struct_list[1]))
# Test rotational/translational invariance
op = SymmOp.from_axis_angle_and_translation([0, 0, 1], 30, False,
np.array([0.4, 0.7, 0.9]))
self.struct_list[1].apply_operation(op)
self.assertTrue(sm.fit(self.struct_list[0], self.struct_list[1]))
#Test failure under large atomic translation
self.struct_list[1].translate_sites([0], [.4, .4, .2],
frac_coords=True)
self.assertFalse(sm.fit(self.struct_list[0], self.struct_list[1]))
self.struct_list[1].translate_sites([0], [-.4, -.4, -.2],
frac_coords=True)
# random.shuffle(editor._sites)
self.assertTrue(sm.fit(self.struct_list[0], self.struct_list[1]))
        #Test FrameworkComparator
sm2 = StructureMatcher(comparator=FrameworkComparator())
lfp = self.get_structure("LiFePO4")
nfp = self.get_structure("NaFePO4")
self.assertTrue(sm2.fit(lfp, nfp))
self.assertFalse(sm.fit(lfp, nfp))
#Test anonymous fit.
self.assertEqual(sm.fit_anonymous(lfp, nfp), True)
self.assertAlmostEqual(sm.get_rms_anonymous(lfp, nfp)[0],
0.060895871160262717)
#Test partial occupancies.
s1 = Structure(Lattice.cubic(3),
[{"Fe": 0.5}, {"Fe": 0.5}, {"Fe": 0.5}, {"Fe": 0.5}],
[[0, 0, 0], [0.25, 0.25, 0.25],
[0.5, 0.5, 0.5], [0.75, 0.75, 0.75]])
s2 = Structure(Lattice.cubic(3),
[{"Fe": 0.25}, {"Fe": 0.5}, {"Fe": 0.5}, {"Fe": 0.75}],
[[0, 0, 0], [0.25, 0.25, 0.25],
[0.5, 0.5, 0.5], [0.75, 0.75, 0.75]])
self.assertFalse(sm.fit(s1, s2))
self.assertFalse(sm.fit(s2, s1))
s2 = Structure(Lattice.cubic(3),
[{"Mn": 0.5}, {"Mn": 0.5}, {"Mn": 0.5},
{"Mn": 0.5}],
[[0, 0, 0], [0.25, 0.25, 0.25],
[0.5, 0.5, 0.5], [0.75, 0.75, 0.75]])
self.assertEqual(sm.fit_anonymous(s1, s2), True)
self.assertAlmostEqual(sm.get_rms_anonymous(s1, s2)[0], 0)
def test_oxi(self):
"""Test oxidation state removal matching"""
sm = StructureMatcher()
self.assertFalse(sm.fit(self.oxi_structs[0], self.oxi_structs[1]))
sm = StructureMatcher(comparator=ElementComparator())
self.assertTrue(sm.fit(self.oxi_structs[0], self.oxi_structs[1]))
def test_primitive(self):
"""Test primitive cell reduction"""
sm = StructureMatcher(primitive_cell=True)
self.struct_list[1].make_supercell([[2, 0, 0], [0, 3, 0], [0, 0, 1]])
self.assertTrue(sm.fit(self.struct_list[0], self.struct_list[1]))
def test_class(self):
# Tests entire class as single working unit
sm = StructureMatcher()
# Test group_structures and find_indices
out = sm.group_structures(self.struct_list)
self.assertEqual(list(map(len, out)), [4, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1])
self.assertEqual(sum(map(len, out)), len(self.struct_list))
for s in self.struct_list[::2]:
s.replace_species({'Ti': 'Zr', 'O':'Ti'})
out = sm.group_structures(self.struct_list, anonymous=True)
self.assertEqual(list(map(len, out)), [4, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1])
def test_mix(self):
structures = [self.get_structure("Li2O"),
self.get_structure("Li2O2"),
self.get_structure("LiFePO4")]
for fname in ["POSCAR.Li2O", "POSCAR.LiFePO4"]:
structures.append(Structure.from_file(os.path.join(test_dir, fname)))
sm = StructureMatcher(comparator=ElementComparator())
groups = sm.group_structures(structures)
for g in groups:
formula = g[0].composition.reduced_formula
if formula in ["Li2O", "LiFePO4"]:
self.assertEqual(len(g), 2)
else:
self.assertEqual(len(g), 1)
def test_left_handed_lattice(self):
"""Ensure Left handed lattices are accepted"""
sm = StructureMatcher()
s = Structure.from_file(os.path.join(test_dir, "Li3GaPCO7.json"))
self.assertTrue(sm.fit(s, s))
def test_as_dict_and_from_dict(self):
sm = StructureMatcher(ltol=0.1, stol=0.2, angle_tol=2,
primitive_cell=False, scale=False,
comparator=FrameworkComparator())
d = sm.as_dict()
sm2 = StructureMatcher.from_dict(d)
self.assertEqual(sm2.as_dict(), d)
def test_no_scaling(self):
sm = StructureMatcher(ltol=0.1, stol=0.1, angle_tol=2,
scale=False, comparator=ElementComparator())
self.assertTrue(sm.fit(self.struct_list[0], self.struct_list[1]))
self.assertTrue(sm.get_rms_dist(self.struct_list[0],
self.struct_list[1])[0] < 0.0008)
def test_supercell_fit(self):
sm = StructureMatcher(attempt_supercell=False)
s1 = Structure.from_file(os.path.join(test_dir, "Al3F9.json"))
s2 = Structure.from_file(os.path.join(test_dir, "Al3F9_distorted.json"))
self.assertFalse(sm.fit(s1, s2))
sm = StructureMatcher(attempt_supercell=True)
self.assertTrue(sm.fit(s1, s2))
self.assertTrue(sm.fit(s2, s1))
def test_get_lattices(self):
sm = StructureMatcher(ltol=0.2, stol=0.3, angle_tol=5,
primitive_cell=True, scale=True,
attempt_supercell=False)
l1 = Lattice.from_lengths_and_angles([1, 2.1, 1.9] , [90, 89, 91])
l2 = Lattice.from_lengths_and_angles([1.1, 2, 2] , [89, 91, 90])
s1 = Structure(l1, [], [])
s2 = Structure(l2, [], [])
lattices = list(sm._get_lattices(s=s1, target_lattice=s2.lattice))
self.assertEqual(len(lattices), 16)
l3 = Lattice.from_lengths_and_angles([1.1, 2, 20] , [89, 91, 90])
s3 = Structure(l3, [], [])
lattices = list(sm._get_lattices(s=s1, target_lattice=s3.lattice))
self.assertEqual(len(lattices), 0)
def test_find_match1(self):
sm = StructureMatcher(ltol=0.2, stol=0.3, angle_tol=5,
primitive_cell=True, scale=True,
attempt_supercell=False)
l = Lattice.orthorhombic(1, 2, 3)
s1 = Structure(l, ['Si', 'Si', 'Ag'],
[[0,0,0.1],[0,0,0.2],[.7,.4,.5]])
s2 = Structure(l, ['Si', 'Si', 'Ag'],
[[0,0.1,0],[0,0.1,-0.95],[.7,.5,.375]])
s1, s2, fu, s1_supercell = sm._preprocess(s1, s2, False)
match = sm._strict_match(s1, s2, fu, s1_supercell = True, use_rms = True, break_on_match = False)
scale_matrix = match[2]
s2.make_supercell(scale_matrix)
fc = s2.frac_coords + match[3]
fc -= np.round(fc)
self.assertAlmostEqual(np.sum(fc), 0.9)
self.assertAlmostEqual(np.sum(fc[:,:2]), 0.1)
cart_dist = np.sum(match[1] * (l.volume/3) ** (1/3))
self.assertAlmostEqual(cart_dist, 0.15)
def test_find_match2(self):
sm = StructureMatcher(ltol=0.2, stol=0.3, angle_tol=5,
primitive_cell=True, scale=True,
attempt_supercell=False)
l = Lattice.orthorhombic(1, 2, 3)
s1 = Structure(l, ['Si', 'Si'], [[0,0,0.1],[0,0,0.2]])
s2 = Structure(l, ['Si', 'Si'], [[0,0.1,0],[0,0.1,-0.95]])
s1, s2, fu, s1_supercell = sm._preprocess(s1, s2, False)
match = sm._strict_match(s1, s2, fu, s1_supercell = False,
use_rms = True, break_on_match = False)
scale_matrix = match[2]
s2.make_supercell(scale_matrix)
s2.translate_sites(range(len(s2)), match[3])
self.assertAlmostEqual(np.sum(s2.frac_coords), 0.3)
self.assertAlmostEqual(np.sum(s2.frac_coords[:,:2]), 0)
def test_supercell_subsets(self):
sm = StructureMatcher(ltol=0.2, stol=0.3, angle_tol=5,
primitive_cell=False, scale=True,
attempt_supercell=True, allow_subset=True,
supercell_size='volume')
sm_no_s = StructureMatcher(ltol=0.2, stol=0.3, angle_tol=5,
primitive_cell=False, scale=True,
attempt_supercell=True, allow_subset=False,
supercell_size='volume')
l = Lattice.orthorhombic(1, 2, 3)
s1 = Structure(l, ['Ag', 'Si', 'Si'],
[[.7,.4,.5],[0,0,0.1],[0,0,0.2]])
s1.make_supercell([2,1,1])
s2 = Structure(l, ['Si', 'Si', 'Ag'],
[[0,0.1,-0.95],[0,0.1,0],[-.7,.5,.375]])
shuffle = [0,2,1,3,4,5]
s1 = Structure.from_sites([s1[i] for i in shuffle])
#test when s1 is exact supercell of s2
result = sm.get_s2_like_s1(s1, s2)
for a, b in zip(s1, result):
self.assertTrue(a.distance(b) < 0.08)
self.assertEqual(a.species_and_occu, b.species_and_occu)
self.assertTrue(sm.fit(s1, s2))
self.assertTrue(sm.fit(s2, s1))
self.assertTrue(sm_no_s.fit(s1, s2))
self.assertTrue(sm_no_s.fit(s2, s1))
rms = (0.048604032430991401, 0.059527539448807391)
self.assertTrue(np.allclose(sm.get_rms_dist(s1, s2), rms))
self.assertTrue(np.allclose(sm.get_rms_dist(s2, s1), rms))
#test when the supercell is a subset of s2
subset_supercell = s1.copy()
del subset_supercell[0]
result = sm.get_s2_like_s1(subset_supercell, s2)
self.assertEqual(len(result), 6)
for a, b in zip(subset_supercell, result):
self.assertTrue(a.distance(b) < 0.08)
self.assertEqual(a.species_and_occu, b.species_and_occu)
self.assertTrue(sm.fit(subset_supercell, s2))
self.assertTrue(sm.fit(s2, subset_supercell))
self.assertFalse(sm_no_s.fit(subset_supercell, s2))
self.assertFalse(sm_no_s.fit(s2, subset_supercell))
rms = (0.053243049896333279, 0.059527539448807336)
self.assertTrue(np.allclose(sm.get_rms_dist(subset_supercell, s2), rms))
self.assertTrue(np.allclose(sm.get_rms_dist(s2, subset_supercell), rms))
#test when s2 (once made a supercell) is a subset of s1
s2_missing_site = s2.copy()
del s2_missing_site[1]
result = sm.get_s2_like_s1(s1, s2_missing_site)
for a, b in zip((s1[i] for i in (0, 2, 4, 5)), result):
self.assertTrue(a.distance(b) < 0.08)
self.assertEqual(a.species_and_occu, b.species_and_occu)
self.assertTrue(sm.fit(s1, s2_missing_site))
self.assertTrue(sm.fit(s2_missing_site, s1))
self.assertFalse(sm_no_s.fit(s1, s2_missing_site))
self.assertFalse(sm_no_s.fit(s2_missing_site, s1))
rms = (0.029763769724403633, 0.029763769724403987)
self.assertTrue(np.allclose(sm.get_rms_dist(s1, s2_missing_site), rms))
self.assertTrue(np.allclose(sm.get_rms_dist(s2_missing_site, s1), rms))
def test_get_s2_large_s2(self):
sm = StructureMatcher(ltol=0.2, stol=0.3, angle_tol=5,
primitive_cell=False, scale=False,
attempt_supercell=True, allow_subset=False,
supercell_size='volume')
l = Lattice.orthorhombic(1, 2, 3)
s1 = Structure(l, ['Ag', 'Si', 'Si'],
[[.7,.4,.5],[0,0,0.1],[0,0,0.2]])
l2 = Lattice.orthorhombic(1.01, 2.01, 3.01)
s2 = Structure(l2, ['Si', 'Si', 'Ag'],
[[0,0.1,-0.95],[0,0.1,0],[-.7,.5,.375]])
s2.make_supercell([[0,-1,0],[1,0,0],[0,0,1]])
result = sm.get_s2_like_s1(s1, s2)
for x,y in zip(s1, result):
self.assertLess(x.distance(y), 0.08)
def test_get_mapping(self):
sm = StructureMatcher(ltol=0.2, stol=0.3, angle_tol=5,
primitive_cell=False, scale=True,
attempt_supercell=False,
allow_subset = True)
l = Lattice.orthorhombic(1, 2, 3)
s1 = Structure(l, ['Ag', 'Si', 'Si'],
[[.7,.4,.5],[0,0,0.1],[0,0,0.2]])
s1.make_supercell([2,1,1])
s2 = Structure(l, ['Si', 'Si', 'Ag'],
[[0,0.1,-0.95],[0,0.1,0],[-.7,.5,.375]])
shuffle = [2,0,1,3,5,4]
s1 = Structure.from_sites([s1[i] for i in shuffle])
#test the mapping
s2.make_supercell([2,1,1])
#equal sizes
for i, x in enumerate(sm.get_mapping(s1, s2)):
self.assertEqual(s1[x].species_and_occu,
s2[i].species_and_occu)
del s1[0]
#s1 is subset of s2
for i, x in enumerate(sm.get_mapping(s2, s1)):
self.assertEqual(s1[i].species_and_occu,
s2[x].species_and_occu)
#s2 is smaller than s1
del s2[0]
del s2[1]
self.assertRaises(ValueError, sm.get_mapping, s2, s1)
def test_get_supercell_matrix(self):
sm = StructureMatcher(ltol=0.1, stol=0.3, angle_tol=2,
primitive_cell=False, scale=True,
attempt_supercell=True)
l = Lattice.orthorhombic(1, 2, 3)
s1 = Structure(l, ['Si', 'Si', 'Ag'],
[[0,0,0.1],[0,0,0.2],[.7,.4,.5]])
s1.make_supercell([2,1,1])
s2 = Structure(l, ['Si', 'Si', 'Ag'],
[[0,0.1,0],[0,0.1,-0.95],[-.7,.5,.375]])
result = sm.get_supercell_matrix(s1, s2)
self.assertTrue((result == [[-2,0,0],[0,1,0],[0,0,1]]).all())
s1 = Structure(l, ['Si', 'Si', 'Ag'],
[[0,0,0.1],[0,0,0.2],[.7,.4,.5]])
s1.make_supercell([[1, -1, 0],[0, 0, -1],[0, 1, 0]])
s2 = Structure(l, ['Si', 'Si', 'Ag'],
[[0,0.1,0],[0,0.1,-0.95],[-.7,.5,.375]])
result = sm.get_supercell_matrix(s1, s2)
self.assertTrue((result == [[-1,-1,0],[0,0,-1],[0,1,0]]).all())
#test when the supercell is a subset
sm = StructureMatcher(ltol=0.1, stol=0.3, angle_tol=2,
primitive_cell=False, scale=True,
attempt_supercell=True, allow_subset=True)
del s1[0]
result = sm.get_supercell_matrix(s1, s2)
self.assertTrue((result == [[-1,-1,0],[0,0,-1],[0,1,0]]).all())
def test_subset(self):
sm = StructureMatcher(ltol=0.2, stol=0.3, angle_tol=5,
primitive_cell=False, scale=True,
attempt_supercell=False,
allow_subset=True)
l = Lattice.orthorhombic(10, 20, 30)
s1 = Structure(l, ['Si', 'Si', 'Ag'],
[[0,0,0.1],[0,0,0.2],[.7,.4,.5]])
s2 = Structure(l, ['Si', 'Ag'],
[[0,0.1,0],[-.7,.5,.4]])
result = sm.get_s2_like_s1(s1, s2)
self.assertEqual(len(find_in_coord_list_pbc(result.frac_coords,
[0,0,0.1])), 1)
self.assertEqual(len(find_in_coord_list_pbc(result.frac_coords,
[0.7,0.4,0.5])), 1)
#test with fewer species in s2
s1 = Structure(l, ['Si', 'Ag', 'Si'],
[[0,0,0.1],[0,0,0.2],[.7,.4,.5]])
s2 = Structure(l, ['Si', 'Si'],
[[0,0.1,0],[-.7,.5,.4]])
result = sm.get_s2_like_s1(s1, s2)
mindists = np.min(s1.lattice.get_all_distances(
s1.frac_coords, result.frac_coords), axis=0)
self.assertLess(np.max(mindists), 1e-6)
self.assertEqual(len(find_in_coord_list_pbc(result.frac_coords,
[0,0,0.1])), 1)
self.assertEqual(len(find_in_coord_list_pbc(result.frac_coords,
[0.7,0.4,0.5])), 1)
#test with not enough sites in s1
#test with fewer species in s2
s1 = Structure(l, ['Si', 'Ag', 'Cl'],
[[0,0,0.1],[0,0,0.2],[.7,.4,.5]])
s2 = Structure(l, ['Si', 'Si'],
[[0,0.1,0],[-.7,.5,.4]])
self.assertEqual(sm.get_s2_like_s1(s1, s2), None)
def test_out_of_cell_s2_like_s1(self):
l = Lattice.cubic(5)
s1 = Structure(l, ['Si', 'Ag', 'Si'],
[[0,0,-0.02],[0,0,0.001],[.7,.4,.5]])
s2 = Structure(l, ['Si', 'Ag', 'Si'],
[[0,0,0.98],[0,0,0.99],[.7,.4,.5]])
new_s2 = StructureMatcher(primitive_cell=False).get_s2_like_s1(s1, s2)
dists = np.sum((s1.cart_coords - new_s2.cart_coords) ** 2, axis=-1) ** 0.5
self.assertLess(np.max(dists), 0.1)
def test_disordered_primitive_to_ordered_supercell(self):
sm_atoms = StructureMatcher(ltol=0.2, stol=0.3, angle_tol=5,
primitive_cell=False, scale=True,
attempt_supercell=True,
allow_subset=True,
supercell_size = 'num_atoms',
comparator=OrderDisorderElementComparator())
sm_sites = StructureMatcher(ltol=0.2, stol=0.3, angle_tol=5,
primitive_cell=False, scale=True,
attempt_supercell=True,
allow_subset=True,
supercell_size = 'num_sites',
comparator=OrderDisorderElementComparator())
lp = Lattice.orthorhombic(10, 20, 30)
pcoords = [[0, 0, 0],
[0.5, 0.5, 0.5]]
ls = Lattice.orthorhombic(20,20,30)
scoords = [[0, 0, 0],
[0.75, 0.5, 0.5]]
prim = Structure(lp, [{'Na':0.5}, {'Cl':0.5}], pcoords)
supercell = Structure(ls, ['Na', 'Cl'], scoords)
supercell.make_supercell([[-1,1,0],[0,1,1],[1,0,0]])
self.assertFalse(sm_sites.fit(prim, supercell))
self.assertTrue(sm_atoms.fit(prim, supercell))
self.assertRaises(ValueError, sm_atoms.get_s2_like_s1, prim, supercell)
self.assertEqual(len(sm_atoms.get_s2_like_s1(supercell, prim)), 4)
def test_ordered_primitive_to_disordered_supercell(self):
sm_atoms = StructureMatcher(ltol=0.2, stol=0.3, angle_tol=5,
primitive_cell=False, scale=True,
attempt_supercell=True,
allow_subset=True,
supercell_size = 'num_atoms',
comparator=OrderDisorderElementComparator())
sm_sites = StructureMatcher(ltol=0.2, stol=0.3, angle_tol=5,
primitive_cell=False, scale=True,
attempt_supercell=True,
allow_subset=True,
supercell_size = 'num_sites',
comparator=OrderDisorderElementComparator())
lp = Lattice.orthorhombic(10, 20, 30)
pcoords = [[0, 0, 0],
[0.5, 0.5, 0.5]]
ls = Lattice.orthorhombic(20,20,30)
scoords = [[0, 0, 0],
[0.5, 0, 0],
[0.25, 0.5, 0.5],
[0.75, 0.5, 0.5]]
s1 = Structure(lp, ['Na', 'Cl'], pcoords)
s2 = Structure(ls, [{'Na':0.5}, {'Na':0.5}, {'Cl':0.5}, {'Cl':0.5}], scoords)
self.assertTrue(sm_sites.fit(s1, s2))
self.assertFalse(sm_atoms.fit(s1, s2))
def test_occupancy_comparator(self):
lp = Lattice.orthorhombic(10, 20, 30)
pcoords = [[0, 0, 0],
[0.5, 0.5, 0.5]]
s1 = Structure(lp, [{'Na':0.6, 'K':0.4}, 'Cl'], pcoords)
s2 = Structure(lp, [{'Xa':0.4, 'Xb':0.6}, 'Cl'], pcoords)
s3 = Structure(lp, [{'Xa':0.5, 'Xb':0.5}, 'Cl'], pcoords)
sm_sites = StructureMatcher(ltol=0.2, stol=0.3, angle_tol=5,
primitive_cell=False, scale=True,
attempt_supercell=True,
allow_subset=True,
supercell_size = 'num_sites',
comparator=OccupancyComparator())
self.assertTrue(sm_sites.fit(s1, s2))
self.assertFalse(sm_sites.fit(s1, s3))
def test_electronegativity(self):
sm = StructureMatcher(ltol=0.2, stol=0.3, angle_tol=5)
s1 = Structure.from_file(os.path.join(test_dir, "Na2Fe2PAsO4S4.json"))
s2 = Structure.from_file(os.path.join(test_dir, "Na2Fe2PNO4Se4.json"))
self.assertEqual(sm.get_best_electronegativity_anonymous_mapping(s1, s2),
{Element('S'): Element('Se'),
Element('As'): Element('N'),
Element('Fe'): Element('Fe'),
Element('Na'): Element('Na'),
Element('P'): Element('P'),
Element('O'): Element('O'),})
self.assertEqual(len(sm.get_all_anonymous_mappings(s1, s2)), 2)
        # test include_dist
        dists = {Element('N'): 0, Element('P'): 0.0010725064}
        for mapping, d in sm.get_all_anonymous_mappings(s1, s2, include_dist=True):
            self.assertAlmostEqual(dists[mapping[Element('As')]], d)
def test_rms_vs_minimax(self):
# This tests that structures with adjusted RMS less than stol, but minimax
# greater than stol are treated properly
# stol=0.3 gives exactly an ftol of 0.1 on the c axis
sm = StructureMatcher(ltol=0.2, stol=0.301, angle_tol=1, primitive_cell=False)
l = Lattice.orthorhombic(1, 2, 12)
sp = ["Si", "Si", "Al"]
s1 = Structure(l, sp, [[0.5, 0, 0], [0, 0, 0], [0, 0, 0.5]])
s2 = Structure(l, sp, [[0.5, 0, 0], [0, 0, 0], [0, 0, 0.6]])
self.assertArrayAlmostEqual(sm.get_rms_dist(s1, s2),
(0.32 ** 0.5 / 2, 0.4))
self.assertEqual(sm.fit(s1, s2), False)
self.assertEqual(sm.fit_anonymous(s1, s2), False)
self.assertEqual(sm.get_mapping(s1, s2), None)
if __name__ == '__main__':
unittest.main()
|
Bismarrck/pymatgen
|
pymatgen/analysis/tests/test_structure_matcher.py
|
Python
|
mit
| 31,701
|
[
"pymatgen"
] |
ac3b9e70bbed3681dc8aa1f24fd1db0f9bf6ac8a3f7903cf24b1dc2162870d92
|
"""
Tests for ConvMolFeaturizer.
"""
from __future__ import division
from __future__ import unicode_literals
__author__ = "Han Altae-Tran and Bharath Ramsundar"
__copyright__ = "Copyright 2016, Stanford University"
__license__ = "MIT"
import unittest
import os
import sys
import numpy as np
import rdkit
from deepchem.feat.mol_graphs import ConvMol
from deepchem.feat.mol_graphs import MultiConvMol
from deepchem.feat.graph_features import ConvMolFeaturizer
class TestConvMolFeaturizer(unittest.TestCase):
"""
Test ConvMolFeaturizer featurizes properly.
"""
def test_carbon_nitrogen(self):
"""Test on carbon nitrogen molecule"""
# Note there is a central nitrogen of degree 4, with 4 carbons
# of degree 1 (connected only to central nitrogen).
raw_smiles = ['C[N+](C)(C)C']
mols = [rdkit.Chem.MolFromSmiles(s) for s in raw_smiles]
featurizer = ConvMolFeaturizer()
mols = featurizer.featurize(mols)
mol = mols[0]
# 5 atoms in compound
assert mol.get_num_atoms() == 5
# Get the adjacency lists grouped by degree
deg_adj_lists = mol.get_deg_adjacency_lists()
assert np.array_equal(deg_adj_lists[0], np.zeros([0, 0], dtype=np.int32))
# The 4 outer atoms connected to central nitrogen
assert np.array_equal(deg_adj_lists[1],
np.array([[4], [4], [4], [4]], dtype=np.int32))
assert np.array_equal(deg_adj_lists[2], np.zeros([0, 2], dtype=np.int32))
assert np.array_equal(deg_adj_lists[3], np.zeros([0, 3], dtype=np.int32))
# Central nitrogen connected to everything else.
assert np.array_equal(deg_adj_lists[4],
np.array([[0, 1, 2, 3]], dtype=np.int32))
assert np.array_equal(deg_adj_lists[5], np.zeros([0, 5], dtype=np.int32))
assert np.array_equal(deg_adj_lists[6], np.zeros([0, 6], dtype=np.int32))
def test_single_carbon(self):
"""Test that single carbon atom is featurized properly."""
raw_smiles = ['C']
mols = [rdkit.Chem.MolFromSmiles(s) for s in raw_smiles]
featurizer = ConvMolFeaturizer()
mol_list = featurizer.featurize(mols)
mol = mol_list[0]
# Only one carbon
assert mol.get_num_atoms() == 1
# No bonds, so degree adjacency lists are empty
deg_adj_lists = mol.get_deg_adjacency_lists()
assert np.array_equal(deg_adj_lists[0], np.zeros([1, 0], dtype=np.int32))
assert np.array_equal(deg_adj_lists[1], np.zeros([0, 1], dtype=np.int32))
assert np.array_equal(deg_adj_lists[2], np.zeros([0, 2], dtype=np.int32))
assert np.array_equal(deg_adj_lists[3], np.zeros([0, 3], dtype=np.int32))
assert np.array_equal(deg_adj_lists[4], np.zeros([0, 4], dtype=np.int32))
assert np.array_equal(deg_adj_lists[5], np.zeros([0, 5], dtype=np.int32))
assert np.array_equal(deg_adj_lists[6], np.zeros([0, 6], dtype=np.int32))
def test_alkane(self):
"""Test on simple alkane"""
raw_smiles = ['CCC']
mols = [rdkit.Chem.MolFromSmiles(s) for s in raw_smiles]
featurizer = ConvMolFeaturizer()
mol_list = featurizer.featurize(mols)
mol = mol_list[0]
    # 3 carbons in alkane
assert mol.get_num_atoms() == 3
deg_adj_lists = mol.get_deg_adjacency_lists()
assert np.array_equal(deg_adj_lists[0], np.zeros([0, 0], dtype=np.int32))
    # Outer two carbons are connected to central carbon
assert np.array_equal(deg_adj_lists[1], np.array(
[[2], [2]], dtype=np.int32))
# Central carbon connected to outer two
assert np.array_equal(deg_adj_lists[2], np.array([[0, 1]], dtype=np.int32))
assert np.array_equal(deg_adj_lists[3], np.zeros([0, 3], dtype=np.int32))
assert np.array_equal(deg_adj_lists[4], np.zeros([0, 4], dtype=np.int32))
assert np.array_equal(deg_adj_lists[5], np.zeros([0, 5], dtype=np.int32))
assert np.array_equal(deg_adj_lists[6], np.zeros([0, 6], dtype=np.int32))
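  # A minimal sketch of extending these checks to another molecule (not part of
  # the original test suite). For ethanol ('CCO') the two degree-1 atoms (the
  # terminal carbon and the oxygen) should each list the central carbon as
  # their only neighbour, assuming ConvMol's canonical degree sort places the
  # degree-2 carbon last, as in test_alkane above.
  def test_ethanol_sketch(self):
    """Hypothetical test on ethanol."""
    mols = [rdkit.Chem.MolFromSmiles('CCO')]
    mol = ConvMolFeaturizer().featurize(mols)[0]
    assert mol.get_num_atoms() == 3
    deg_adj_lists = mol.get_deg_adjacency_lists()
    assert np.array_equal(deg_adj_lists[1],
                          np.array([[2], [2]], dtype=np.int32))
    assert np.array_equal(deg_adj_lists[2],
                          np.array([[0, 1]], dtype=np.int32))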
|
Agent007/deepchem
|
deepchem/feat/tests/test_graph_features.py
|
Python
|
mit
| 3,858
|
[
"RDKit"
] |
4ae04ea0cd3f2f5d10b5dece394c00fb6ae9fc27156c7019aa6294b28d894bf7
|
# Copyright 2021, Google LLC. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Utils for running experiments with discrete DP and compression."""
import pprint
from absl import logging
import numpy as np
import tensorflow_federated as tff
from distributed_dp import accounting_utils
from distributed_dp import ddpquery_utils
from distributed_dp import modular_clipping_factory
def get_total_dim(client_template):
"""Returns the dimension of the client template as a single vector."""
return sum(np.prod(x.shape) for x in client_template)
def pad_dim(dim):
return np.math.pow(2, np.ceil(np.log2(dim)))
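# Example: pad_dim(1000) == 1024.0 -- the client vector dimension is rounded up
# to the next power of two (presumably for the transforms used inside the
# distributed DP queries, which operate on power-of-two-length vectors).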
def build_aggregator(compression_flags, dp_flags, num_clients,
num_clients_per_round, num_rounds, client_template):
"""Create a `tff.aggregator` containing all aggregation operations."""
clip, epsilon = dp_flags['l2_norm_clip'], dp_flags['epsilon']
# No DP (but still do the clipping if necessary).
if epsilon is None or epsilon == -1:
agg_factory = tff.aggregators.UnweightedMeanFactory()
if clip is not None:
assert clip > 0, 'Norm clip must be positive.'
agg_factory = tff.aggregators.clipping_factory(clip, agg_factory)
logging.info('Using vanilla aggregation with clipping %s', clip)
params_dict = {'clip': clip}
return agg_factory, params_dict
# Parameters for DP
assert epsilon > 0, f'Epsilon should be positive, found {epsilon}.'
assert clip is not None and clip > 0, f'Clip must be positive, found {clip}.'
sampling_rate = float(num_clients_per_round) / num_clients
delta = dp_flags['delta'] or 1.0 / num_clients # Default to delta = 1 / N.
mechanism = dp_flags['dp_mechanism'].lower()
dim = get_total_dim(client_template)
params_dict = {
'epsilon': epsilon,
'delta': delta,
'clip': clip,
'dim': dim,
'sampling_rate': sampling_rate,
'mechanism': mechanism,
'num_clients': num_clients,
'num_clients_per_round': num_clients_per_round,
'num_rounds': num_rounds
}
logging.info('Shared DP Parameters:')
logging.info(pprint.pformat(params_dict))
# Baseline: continuous Gaussian.
if mechanism == 'gaussian':
noise_mult = accounting_utils.get_gauss_noise_multiplier(
target_eps=epsilon,
target_delta=delta,
target_sampling_rate=sampling_rate,
steps=num_rounds)
# Operations include clipping on client and noising + averaging on server;
# No MeanFactory and ClippingFactory needed.
agg_factory = tff.aggregators.DifferentiallyPrivateFactory.gaussian_fixed(
noise_multiplier=noise_mult,
clients_per_round=num_clients_per_round,
clip=clip)
gauss_params_dict = {'noise_mult': noise_mult}
logging.info('Gaussian Parameters:')
logging.info(gauss_params_dict)
params_dict.update(gauss_params_dict)
# Discrete Mechanisms.
elif mechanism in ('ddgauss', 'dskellam'):
padded_dim = pad_dim(dim)
k_stddevs = compression_flags['k_stddevs'] or 4
beta = compression_flags['beta']
bits = compression_flags['num_bits']
# Modular clipping has exclusive upper bound.
mod_clip_lo, mod_clip_hi = -(2**(bits - 1)), 2**(bits - 1)
if mechanism == 'ddgauss':
gamma, local_stddev = accounting_utils.ddgauss_params(
q=sampling_rate,
epsilon=epsilon,
l2_clip_norm=clip,
bits=bits,
num_clients=num_clients_per_round,
dim=padded_dim,
delta=delta,
beta=beta,
steps=num_rounds,
k=k_stddevs)
scale = 1.0 / gamma
elif mechanism == 'dskellam':
scale, local_stddev = accounting_utils.skellam_params(
epsilon=epsilon,
l2_clip=clip,
bits=bits,
num_clients=num_clients_per_round,
beta=beta,
dim=padded_dim,
q=sampling_rate,
steps=num_rounds,
delta=delta,
k=k_stddevs)
gamma = 1.0 / scale
central_stddev = local_stddev * np.sqrt(num_clients_per_round)
noise_mult_clip = central_stddev / clip
inflated_l2 = accounting_utils.rounded_l2_norm_bound(
clip * scale, beta=beta, dim=padded_dim) / scale
noise_mult_inflated = central_stddev / inflated_l2
discrete_params_dict = {
'bits': bits,
'beta': beta,
'dim': dim,
'padded_dim': padded_dim,
'gamma': gamma,
'scale': scale,
'k_stddevs': k_stddevs,
'local_stddev': local_stddev,
'mechanism': mechanism,
'noise_mult_clip': noise_mult_clip,
'noise_mult_inflated': noise_mult_inflated,
'inflated_l2': inflated_l2
}
logging.info('%s parameters:', mechanism)
logging.info(pprint.pformat(discrete_params_dict))
params_dict.update(discrete_params_dict)
# Build nested aggregators.
agg_factory = tff.aggregators.SumFactory()
# 1. Modular clipping.
agg_factory = modular_clipping_factory.ModularClippingSumFactory(
clip_range_lower=mod_clip_lo,
clip_range_upper=mod_clip_hi,
inner_agg_factory=agg_factory)
# 2. Quantization followed by the distributed DP mechanism.
ddp_query = ddpquery_utils.build_ddp_query(
mechanism=mechanism,
local_stddev=local_stddev,
l2_norm_bound=clip,
beta=beta,
padded_dim=padded_dim,
scale=scale,
client_template=client_template)
agg_factory = tff.aggregators.DifferentiallyPrivateFactory(
query=ddp_query, record_aggregation_factory=agg_factory)
# 3. L2 norm clipping as the first step.
agg_factory = tff.aggregators.clipping_factory(
clipping_norm=clip, inner_agg_factory=agg_factory)
# 4. Apply a MeanFactory at last (mean can't be part of the discrete
# DPQueries (like the case of Gaussian) as the records may become floats
# and hence break the decompression process).
agg_factory = tff.aggregators.UnweightedMeanFactory(
value_sum_factory=agg_factory)
else:
raise ValueError(f'Unsupported mechanism: {dp_flags["dp_mechanism"]}')
return agg_factory, params_dict
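# A minimal usage sketch (all flag values are assumptions, not defaults of this
# module): build a distributed-DP aggregator for a model whose weights are
# described by `client_template` (a structure of arrays/tensors, e.g. taken
# from a model's trainable variables).
def example_build_ddp_aggregator(client_template):
    compression_flags = {'num_bits': 16, 'beta': np.exp(-0.5), 'k_stddevs': 4}
    dp_flags = {'epsilon': 10.0, 'delta': None, 'l2_norm_clip': 0.5,
                'dp_mechanism': 'dskellam'}
    agg_factory, params = build_aggregator(
        compression_flags=compression_flags,
        dp_flags=dp_flags,
        num_clients=3400,
        num_clients_per_round=100,
        num_rounds=1500,
        client_template=client_template)
    return agg_factory, params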
|
google-research/federated
|
distributed_dp/fl_utils.py
|
Python
|
apache-2.0
| 6,642
|
[
"Gaussian"
] |
322e30244572e40a322d8fa261265f17dd652d2c66f4efee9543462d3a45c872
|
"""Generate the StageIV "daily" file at 12z."""
import datetime
import os
import sys
import numpy as np
from pyiem.util import ncopen, logger
LOG = logger()
BASEDIR = "/mesonet/data/stage4"
def init_year(ts):
"""Create a new NetCDF file for a year of our specification!"""
    # Get existing stageIV netcdf file to copy its coordinates from
tmplnc = ncopen(f"{BASEDIR}/2020_stage4_hourly.nc", "r")
fn = f"{BASEDIR}/{ts.year}_stage4_daily.nc"
if os.path.isfile(fn):
LOG.info("Cowardly refusing to overwrite %s", fn)
return
nc = ncopen(fn, "w")
nc.title = "StageIV 12z-12z Totals for %s" % (ts.year,)
    nc.platform = "Gridded"
nc.description = "StageIV"
nc.institution = "Iowa State University, Ames, IA, USA"
nc.source = "Iowa Environmental Mesonet"
nc.project_id = "IEM"
nc.realization = 1
nc.Conventions = "CF-1.0" # *cough*
nc.contact = "Daryl Herzmann, akrherz@iastate.edu, 515-294-5978"
nc.history = "%s Generated" % (
datetime.datetime.now().strftime("%d %B %Y"),
)
nc.comment = "No Comment at this time"
# Setup Dimensions
nc.createDimension("x", tmplnc.dimensions["x"].size)
nc.createDimension("y", tmplnc.dimensions["y"].size)
ts2 = datetime.datetime(ts.year + 1, 1, 1)
days = (ts2 - ts).days
nc.createDimension("time", int(days))
# Setup Coordinate Variables
lat = nc.createVariable("lat", float, ("y", "x"))
lat.units = "degrees_north"
lat.long_name = "Latitude"
lat.standard_name = "latitude"
lat.axis = "Y"
# Grid centers
lat[:] = tmplnc.variables["lat"][:]
lon = nc.createVariable("lon", float, ("y", "x"))
lon.units = "degrees_east"
lon.long_name = "Longitude"
lon.standard_name = "longitude"
lon.axis = "X"
lon[:] = tmplnc.variables["lon"][:]
tm = nc.createVariable("time", float, ("time",))
tm.units = "Days since %s-01-01 12:00:0.0" % (ts.year,)
tm.long_name = "Time"
tm.standard_name = "time"
tm.axis = "T"
tm.calendar = "gregorian"
tm[:] = np.arange(0, int(days))
p01d = nc.createVariable(
"p01d_12z", float, ("time", "y", "x"), fill_value=1.0e20
)
p01d.units = "mm"
p01d.long_name = "Precipitation"
p01d.standard_name = "Precipitation"
p01d.coordinates = "lon lat"
p01d.description = "Precipitation accumulation at 12z"
nc.close()
tmplnc.close()
def main(argv):
"""Go Main"""
init_year(datetime.datetime(int(argv[1]), 1, 1))
if __name__ == "__main__":
main(sys.argv)
|
akrherz/iem
|
scripts/iemre/init_stage4_daily.py
|
Python
|
mit
| 2,555
|
[
"NetCDF"
] |
0c0c7ec066a29b2b0a4004fd044f1ae6bbdca08a719b71d274f49831544d41ff
|
from collections import OrderedDict
import logging
import os
import simtk.unit as units
from intermol.utils import which, run_subprocess
CRM_PATH = ''
logger = logging.getLogger('InterMolLog')
# --------- energy evaluation methods ---------- #
# this dict hasn't been updated yet:
key_dict = {'ENERgy': 'Potential',
'BONDs': 'Bond',
'ANGLes': 'Angle',
'DIHEdrals': 'Proper Dih.',
'IMPRopers': 'Improper Dih.',
'EEL': 'Coulomb',
'VDWaals': 'LJ (SR)',
'EVDW': 'Disper. corr.'
}
def remove_zero_terms(sdict):
    # remove entries that are all zero (iterate over a snapshot of the keys,
    # since popping while iterating the dict itself raises a RuntimeError)
    for key in list(sdict):
        if sdict[key]._value == 0.0:
            sdict.pop(key)
def standardize_key(sdict,in_key):
if in_key in key_dict:
sdict[key_dict[in_key]] = sdict[in_key]
sdict.pop(in_key)
def pick_crystal_type(box):
    ''' Take a box vector and determine the crystal type (string output).
    Has not been thoroughly debugged because of a lack of examples.
    '''
a = box[0]
b = box[1]
c = box[2]
alpha = box[3]
beta = box[4]
gamma = box[5]
rectangular = (alpha == 90.0 and beta == 90.0 and gamma == 90.0)
if rectangular:
if a == b:
if b == c:
boxtype = 'cubic'
else:
boxtype = 'tetragonal'
else:
            boxtype = 'orthorhombic'
elif alpha == gamma and alpha == 90:
if alpha == beta:
boxtype = 'orthorhombic'
else:
boxtype = 'monoclinic'
elif a == b:
if alpha == beta and alpha == 90.0 and gamma == 120.0:
boxtype = 'hexagonal'
elif a == c and alpha == 109.4712206344907 and alpha == beta and alpha == gamma:
boxtype = 'octahedral' # this will be hard to match because of the degree of precision of alpha
elif a == c and alpha == 60.0 and gamma == alpha and beta == 90.0:
boxtype = 'rhdo'
elif a == c and alpha == beta and gamma == beta and alpha < 120.0:
boxtype = 'rhombohedral'
else:
boxtype = 'triclinic'
return boxtype
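# A quick illustration (not part of the original module): a box with three
# equal edges and all angles at 90 degrees is classified as cubic.
# >>> pick_crystal_type([30.0, 30.0, 30.0, 90.0, 90.0, 90.0])
# 'cubic'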
def write_input_file(inpfile, psffile, rtfs, prms, strms,
boxtype, boxvecs, crdfile, charmm_settings):
    # Annoyingly, we need to write the CHARMM input file, which contains the
    # nonbonded interaction parameters and the files that are used.
counter = 10
increment = 10
with open(inpfile, 'w') as charmm_inp:
# will use relative paths because of length of line issues in charmm
charmm_inp.write('! CHARMM Energy for %s\n' % os.path.relpath(inpfile))
for r in rtfs:
charmm_inp.write('open read card unit %d name \"%s\"\nread rtf card unit %d\n' % (counter, os.path.relpath(r), counter))
counter = counter + increment
for p in prms:
charmm_inp.write('open read card unit %d name \"%s\"\nread para card unit %d\n' % (counter, os.path.relpath(p), counter))
counter = counter + increment
for s in strms:
charmm_inp.write('stream \"%s\"\n' % (os.path.relpath(s)))
charmm_inp.write('read psf card name \"%s\"\n' % (os.path.relpath(psffile)))
charmm_inp.write('read coor card name \"%s\"\nread coor card comp name \"%s\"\n' % (os.path.relpath(crdfile), os.path.relpath(crdfile)))
charmm_inp.write('crystal define %s %.8f %.8f %.8f %.8f %.8f %.8f\ncrystal build noper 0\n' % (boxtype,
boxvecs[0], boxvecs[1], boxvecs[2], boxvecs[3], boxvecs[4], boxvecs[5]))
# ! These segments are used for water and ions in bulk solvent
#define bulks sele .not. (segid A .or. segid B) end
# ! You may need to change these depending on how you plan to do recentering
#image byseg sele .not. resname tip3 .and. .not. bulks end
#image byres sele resname tip3 .or. bulks end
charmm_inp.write("%s\n" % (charmm_settings))
charmm_inp.write("energy\nstop")
def charmm_energies(inpfile, crm_path):
"""Compute single-point energies using CHARMM.
Args:
inpfile (str)
crm_path (str):
Returns:
e_out:
ener_xvg:
"""
logger.info('Evaluating energy of {0}'.format(inpfile))
# find the directory the input file is in.
directory, _ = os.path.split(os.path.abspath(inpfile))
# create files for output
stdout_path = os.path.join(directory, 'charmm_stdout.txt')
stderr_path = os.path.join(directory, 'charmm_stderr.txt')
# delete previous stdout and stderr
if os.path.isfile(stdout_path):
os.remove(stdout_path)
if os.path.isfile(stderr_path):
os.remove(stderr_path)
if not which(crm_path):
raise IOError('Unable to find CHARMM executable (charmm).')
if os.path.isdir(crm_path):
charmm_bin = os.path.join(crm_path, 'charmm')
else:
charmm_bin = crm_path
# run charmm - this assumes all files required in the input file are present
cmd = [charmm_bin, '-i', inpfile]
proc = run_subprocess(cmd, 'charmm', stdout_path, stderr_path)
if proc.returncode != 0:
logger.error('charmm failed. See %s' % stderr_path)
# Extract energies from charmm output
return _group_energy_terms(stdout_path)
def _group_energy_terms(mdout):
"""Parse CHARMM output file to extract and group the energy terms in a dict. """
with open(mdout) as f:
all_lines = f.readlines()
# find where the energy information starts
i = 0
for line in all_lines:
if line[0:9] == 'ENER ENR:':
startline = i
break
i+=1
energy_types = []
energy_values = []
for line in all_lines[startline:]:
if line[0:4] == 'ENER':
if '>' in line:
                # unfortunately, we can't just split; we have to do it by fixed width
startcol = 15
colwidth = 13
                klim = (len(line)-startcol)//colwidth
for k in range(klim):
startc = startcol+k*colwidth-1
endc = startc + colwidth
v = line[startc:endc]
energy_values.append(float(v)*units.kilocalories_per_mole)
else:
names = line.split()
for n in names[2:]:
if n != "Eval#":
energy_types.append(n)
e_out = OrderedDict(zip(energy_types, energy_values))
# remove zero terms from the comparison
remove_zero_terms(e_out)
# remove components that are not energy
nonenergykeys = ['GRMS','VIRI']
for key in nonenergykeys:
if key in e_out:
e_out.pop(key)
    # rename energy terms to standardize (iterate over a key snapshot, since
    # standardize_key() mutates the dict and returns nothing)
    for key in list(e_out):
        standardize_key(e_out, key)
# sum up terms to components we can jointly report
# this will likely need to change because the precise terms reported by CHARMM will vary.
vanderwaals = ['LJ (SR)', 'IMNBvdw', 'Disper. corr.']
electrostatic = ['ELEC', 'IMELec', 'EWKSum', 'EWSElf', 'EWEXcl']
dihedrals = ['Proper Dih.', 'Improper Dih.']
bonded = ['Bond', 'Angle', 'All dihedrals']
nonbonded = ['Electrostatic', 'van der Waals'] # must come last, since is a sum of summed terms
sumterms = [vanderwaals, electrostatic, dihedrals, bonded, nonbonded]
newkeys = ['van der Waals','Electrostatic', 'All dihedrals', 'Bonded', 'Nonbonded']
for k, key in enumerate(newkeys):
e_out[key] = 0 * units.kilocalories_per_mole
for group in sumterms[k]:
if group in e_out:
e_out[key] += e_out[group]
return e_out, mdout
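# A small illustration of the fixed-width slicing above (made-up numbers, not
# real CHARMM output): values sit in 13-character columns after a 14-character
# prefix, so the line is cut at [14:27], [27:40], [40:53].
def _example_fixed_width():
    line = 'ENER ENR:     ' + '          1.0' + '          2.0' + '          3.0 '
    startcol, colwidth = 15, 13
    klim = (len(line) - startcol) // colwidth
    values = [float(line[startcol + k*colwidth - 1: startcol + k*colwidth - 1 + colwidth])
              for k in range(klim)]
    return values  # [1.0, 2.0, 3.0]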
|
ctk3b/InterMol
|
intermol/charmm/__init__.py
|
Python
|
mit
| 7,699
|
[
"CHARMM",
"CRYSTAL"
] |
95e01809d31fb560c2287cf907366c016eecd715f81e8f23aacbb1fbf93138d6
|
"""A that shows a slice at a specified radius
"""
# Author: Martin Weier
#Copyright (C) 2006 California Institute of Technology
#This program is free software; you can redistribute it and/or modify
#it under the terms of the GNU General Public License as published by
#the Free Software Foundation; either version 2 of the License, or
#any later version.
#This program is distributed in the hope that it will be useful,
#but WITHOUT ANY WARRANTY; without even the implied warranty of
#MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
#GNU General Public License for more details.
#You should have received a copy of the GNU General Public License
#along with this program; if not, write to the Free Software
#Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
# Enthought library imports.
from enthought.traits import Instance, Range, Int, Float
from enthought.traits.ui import View, Group, Item
from enthought.tvtk import tvtk
# Local imports
from enthought.mayavi.core.filter import Filter
######################################################################
# `ShowSurface` class.
######################################################################
class ShowSurface(Filter):
# The version of this class. Used for persistence.
__version__ = 0
    # The programmable filter.
prog_filter = Instance(tvtk.ProgrammableFilter, ())
# Upper threshold (this is a dynamic trait that is changed when
# input data changes).
surfacelevel = Range(1, 17, 1,
desc='the surface filter')
# Our view.
view = View(Group(Item(name='surfacelevel')
))
current_level=Int()
nx = Int()
ny = Int()
nz = Int()
def setvalues(self,nx,ny,nz,level):
"""This Method needs to be set before the execution of the filter
it accepts nx,ny,nz,level"""
self.nx = nx
self.ny = ny
self.nz = nz
self.current_level = level
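    # A usage sketch (hypothetical mesh sizes, not part of the original
    # plugin): the grid dimensions must be set before the filter executes.
    #
    #   f = ShowSurface()
    #   f.setvalues(nx=64, ny=64, nz=32, level=1)
    #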
######################################################################
# `Filter` interface.
######################################################################
def setup_pipeline(self):
"""Override this method so that it *creates* its tvtk
pipeline.
This method is invoked when the object is initialized via
`__init__`. Note that at the time this method is called, the
tvtk data pipeline will *not* yet be setup. So upstream data
will not be available. The idea is that you simply create the
basic objects and setup those parts of the pipeline not
dependent on upstream sources and filters.
"""
# Just setup the default output of this filter.
self.prog_filter.set_execute_method(self._showsurface)
self.outputs = [self.prog_filter.output]
def update_pipeline(self):
"""Override this method so that it *updates* the tvtk pipeline
when data upstream is known to have changed.
This method is invoked (automatically) when the input fires a
`pipeline_changed` event.
"""
# By default we set the input to the first output of the first
# input.
fil = self.prog_filter
fil.input = self.inputs[0].outputs[0]
# We force the ranges to be reset to the limits of the data.
# This is because if the data has changed upstream, then the
# limits of the data must be changed.
#self._update_ranges(reset=True)
#fil.threshold_between(self.lower_threshold, self.upper_threshold)
#fil.update()
self.outputs[0] = fil.output
self.pipeline_changed = True
def update_data(self):
"""Override this method to do what is necessary when upstream
data changes.
This method is invoked (automatically) when any of the inputs
sends a `data_changed` event.
"""
#self._update_ranges(reset=True)
# Propagate the data_changed event.
self.prog_filter.set_execute_method(self._showsurface)
self.outputs = [self.prog_filter.output]
self.data_changed = True
def _showsurface(self):
print "showsurface update"
print self.current_level
input = self.prog_filter.unstructured_grid_input
numCells = input.number_of_cells
quadgrid = tvtk.UnstructuredGrid()
quadgrid.allocate(1,1)
reduced_points = []
reduced_scalars = []
reduced_vectors = []
j = 1
cell_count=0
for i in xrange(numCells):
if j==self.current_level:
cell = input.get_cell(i)
scalars = input.point_data.scalars
vectors = input.point_data.vectors
point_ids = cell.point_ids
points = cell.points
reduced_points.append(points[2])
reduced_points.append(points[1])
reduced_points.append(points[5])
reduced_points.append(points[6])
reduced_scalars.append(scalars[point_ids[2]])
reduced_scalars.append(scalars[point_ids[1]])
reduced_scalars.append(scalars[point_ids[5]])
reduced_scalars.append(scalars[point_ids[6]])
reduced_vectors.append(vectors[point_ids[2]])
reduced_vectors.append(vectors[point_ids[1]])
reduced_vectors.append(vectors[point_ids[5]])
reduced_vectors.append(vectors[point_ids[6]])
quadgrid.insert_next_cell(9,[cell_count,cell_count+1,cell_count+2,cell_count+3])
cell_count+=4
if j == self.nx:
j=1
j+=1
vtkReduced_vectors = tvtk.FloatArray()
vtkReduced_scalars = tvtk.FloatArray()
vtkReduced_vectors.from_array(reduced_vectors)
vtkReduced_scalars.from_array(reduced_scalars)
vtkReduced_scalars.name = 'Scalars'
vtkReduced_vectors.name = 'Vectors'
#showsurfF.unstructured_grid_output = quadgrid
self.prog_filter.unstructured_grid_output.set_cells(9,quadgrid.get_cells())
self.prog_filter.unstructured_grid_output.point_data.scalars = vtkReduced_scalars
self.prog_filter.unstructured_grid_output.point_data.vectors = vtkReduced_vectors
self.prog_filter.unstructured_grid_output.points = reduced_points
######################################################################
# Non-public interface
######################################################################
def _surfacelevel_changed(self, new_value):
fil = self.prog_filter
print self.current_level
self.current_level = new_value-1
self._showsurface()
fil.update()
self.data_changed = True
|
geodynamics/citcoms
|
visual/Mayavi2/original_plugins/plugins/filter/CitcomSshowSurface.py
|
Python
|
gpl-2.0
| 6,990
|
[
"Mayavi"
] |
4e0dc8abc3d802b2fbe19152480af8511a495502d8628a757df63a755655b733
|
# Licensed under GPL version 3 - see LICENSE.rst
'''Gratings made by the MKI `Space Nanotechnology Laboratory`_
The MIT Kavli Institute for Astrophysics and Space Research
`Space Nanotechnology Laboratory`_ produces elements for astronomical
instruments in space. One particular field of research are critical
angle transmission (CAT) gratings, see e.g. `Heilmann et al. (2015)`_
.. _Space Nanotechnology Laboratory: http://snl.mit.edu/
.. _Heilmann et al. (2015): http://dx.doi.org/10.1117/12.2188525
'''
import numpy as np
from numpy.core.umath_tests import inner1d
from scipy.interpolate import RectBivariateSpline, interp1d
import astropy.units as u
from astropy.utils.data import get_pkg_data_filename
from astropy.table import Table
from marxs.optics import (CATGrating,
OrderSelector, FlatStack,
FlatOpticalElement)
from marxs.math.utils import norm_vector
from marxs.optics.scatter import RandomGaussianScatter
__all__ = ['l1transtab', 'l1_order_selector',
'l1_dims', 'l2_dims',
'qualityfactor', 'd',
'load_table2d',
'InterpolateEfficiencyTable',
'QualityFactor',
'L1',
'L2Abs',
'L2Diffraction',
'CATL1L2Stack',
'NonParallelCATGrating',
]
d = 0.0002
'''Spacing of grating bars'''
l1transtab = Table.read(get_pkg_data_filename('SiTransmission.csv'), format='ascii.ecsv')
'''Transmission through 1 mu of Si'''
l1_order_selector = OrderSelector(orderlist=np.array([-4, -3, -2, -1, 0, 1, 2, 3, 4]),
p=np.array([0.006, 0.0135, 0.022, 0.028, 0.861, 0.028, 0.022, 0.0135, 0.006]))
'''Simple order selector for diffraction on L1.
The numbers here are calculated for the 2018 Arcus gratings assuming the L1 structure
is independent from the grating membrane itself (which is not true, but a valid first
approximation.)
'''
l1_dims = {'bardepth': 0.004 * u.mm, 'period': 0.005 * u.mm, 'barwidth': 0.0009 * u.mm}
'''Dimensions of L1 support bars running perpendicular to the grating bars'''
l2_dims = {'bardepth': 0.5 * u.mm, 'period': 0.966 * u.mm, 'barwidth': 0.1 * u.mm}
'''Dimensions of hexagonal L2 support'''
qualityfactor = {'d': 200. * u.um, 'sigma': 1.75 * u.um}
'''Scaling of grating efficiencies, parameterized as a Debye-Waller factor'''
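# A short worked example (values from ``qualityfactor`` above): the
# Debye-Waller-like scaling applied by `QualityFactor` below is
# exp(-(2 * pi * sigma / d)**2), so with sigma = 1.75 um and d = 200 um the
# first-order efficiency is multiplied by roughly
# exp(-(2 * pi * 1.75 / 200)**2) ~= 0.997, and order m by that factor to the
# power m**2.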
class DataFileFormatException(Exception):
'''Exception for grating efficiency files not matching expected format.'''
pass
def load_table2d(filename):
'''Get a 2d array from an ecsv input file.
In the table file, the data is flattened to a 1d form.
    The first two columns are x and y. The first column contains many
    duplicates and looks like this:
[1,1,1,1,1,1,2,2,2,2,2,2,3,3,3, ...].
Column B repeats like this: [1,2,3,4,5,6,1,2,3,4,5,6,1,2,3, ...].
All remaining columns are data on the same x-y grid, and the grid
has to be regular.
Parameters
----------
filename : string
Name and path of data file
Returns
-------
tab : `astropy.table.Table`
Table as read in. Useful to access units or other meta data.
x, y : `astropy.table.Column`
Unique entries in first and second column
dat : np.array
The remaining outputs are np.arrays of shape (len(x), len(y))
'''
tab = Table.read(filename, format='ascii.ecsv')
x = tab.columns[0]
y = tab.columns[1]
n_x = len(set(x))
n_y = len(set(y))
if len(x) != (n_x * n_y):
raise DataFileFormatException('Data is not on regular grid.')
x = x[::n_y]
y = y[:n_y]
coldat = [tab[d].data.reshape(n_x, n_y) for d in tab.columns[2:]]
return tab, x, y, coldat
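# A minimal sketch of a conforming input table (hypothetical filename and
# values): two grid columns followed by one data column, flattened so that x
# varies slowly and y varies quickly.
#
#   from astropy.table import Table
#   tab = Table({'wave': [1, 1, 2, 2], 'theta': [1, 2, 1, 2],
#                '0': [0.1, 0.2, 0.3, 0.4]})
#   tab.write('eff.ecsv', format='ascii.ecsv')
#   tab, x, y, dat = load_table2d('eff.ecsv')
#   # x == [1, 2], y == [1, 2], dat[0].shape == (2, 2)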
class InterpolateEfficiencyTable(object):
'''Order Selector for MARXS using a specific kind of data table.
Ralf Heilmann from the SNL typically writes simulated grating efficiencies
into Excel tables. Since Excel is hard to read in Python and not very
suited to version control with git, those tables are converted to csv files
of a certain format.
A short summary of this format is given here, to help reading the code.
The table contains data in 3-dimenasional (wave, n_theta, order) space,
flattened into a 2d table.
- Row 1 + 2: Column labels. Not used here.
- Column A: wavelength in nm.
- Column B: blaze angle in deg.
- Rest: data
For each wavelength there are multiple blaze angles listed, so Column A
contains
    many duplicates and looks like this: [1,1,1,1,1,1,2,2,2,2,2,2,3,3,3, ...].
Column B repeats like this: [1,2,3,4,5,6,1,2,3,4,5,6,1,2,3, ...].
Because the wave, theta grid is regular, this class can use the
`scipy.interpolate.RectBivariateSpline` for interpolation in each 2-d slice
(``order`` is an integer and not interpolated).
Parameters
----------
filename : string
path and name of data file
k : int
Degree of spline. See `scipy.interpolate.RectBivariateSpline`.
'''
def __init__(self, filename, k=3):
tab, wave, theta, orders = load_table2d(filename)
theta = theta.to(u.rad)
        # Order is an integer; we never interpolate over order.
self.orders = np.array([int(n) for n in tab.colnames[2:]])
self.interpolators = [RectBivariateSpline(wave, theta, d, kx=k, ky=k) for d in orders]
def probabilities(self, energies, pol, blaze):
'''Obtain the probabilties for photons to go into a particular order.
This has the same parameters as the ``__call__`` method, but it returns
the raw probabilities, while ``__call__`` will draw from these
probabilities and assign an order and a total survival probability to
each photon.
Parameters
----------
energies : np.array
Energy for each photons
pol : np.array
Polarization for each photon (not used in this class)
blaze : np.array
Blaze angle for each photon
Returns
-------
orders : np.array
Array of orders
interpprobs : np.array
This array contains a probability array for each photon to reach a
particular order
'''
# convert energy in keV to wavelength in nm
# (nm is the unit of the input table)
wave = (energies * u.keV).to(u.nm, equivalencies=u.spectral()).value
interpprobs = np.empty((len(self.orders), len(energies)))
for i, interp in enumerate(self.interpolators):
interpprobs[i, :] = interp.ev(wave, blaze)
return self.orders, interpprobs
def __call__(self, energies, pol, blaze):
orders, interpprobs = self.probabilities(energies, pol, blaze)
totalprob = np.sum(interpprobs, axis=0)
# Cumulative probability for orders, normalized to 1.
cumprob = np.cumsum(interpprobs, axis=0) / totalprob
ind_orders = np.argmax(cumprob > np.random.rand(len(energies)), axis=0)
return orders[ind_orders], totalprob
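# Illustrative sketch of the cumulative-probability draw used in
# InterpolateEfficiencyTable.__call__ above. The probability columns are
# made-up numbers; each column sums to that photon's survival probability.
def _example_order_draw():
    import numpy as np
    orders = np.array([-1, 0, 1])
    interpprobs = np.array([[0.1, 0.6],   # shape (n_orders, n_photons)
                            [0.3, 0.2],
                            [0.1, 0.1]])
    totalprob = np.sum(interpprobs, axis=0)
    cumprob = np.cumsum(interpprobs, axis=0) / totalprob
    # argmax returns the first order whose cumulative probability exceeds
    # the random draw, i.e. a sample from the discrete distribution.
    ind = np.argmax(cumprob > np.random.rand(interpprobs.shape[1]), axis=0)
    return orders[ind], totalprob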
class QualityFactor(FlatOpticalElement):
    '''Scale probabilities of theoretical curves to measured values.
All gratings look better in theory than in practice. This grating quality
factor scales the calculated diffraction probabilities to the observed
performance.
'''
def __init__(self, qualityfactor=qualityfactor, **kwargs):
self.factor = np.exp(- (2 * np.pi * qualityfactor['sigma'] /
qualityfactor['d'])**2)
super().__init__(**kwargs)
    def specific_process_photons(self, photons, intersect, interpos, intercoos):
        return {'probability': self.factor**(photons['order'][intersect]**2)}
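# Illustrative sketch: evaluating the Debye-Waller-like scaling from
# QualityFactor for a few diffraction orders, with numbers taken from the
# module-level `qualityfactor` defaults (units stripped for brevity).
def _example_quality_factor():
    import numpy as np
    sigma_over_d = 1.75 / 200.            # sigma / d in consistent units (um)
    factor = np.exp(-(2 * np.pi * sigma_over_d)**2)
    orders = np.array([0, 1, 2, 3])
    return factor**(orders**2)            # probability scaling per order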
def check_lx_dims(lx_dims):
'''Check that dimensions in l1_dims or l2_dims make sense'''
if not (lx_dims['barwidth'] < lx_dims['period']):
raise ValueError('Period of grating must be larger than bar width.')
class L1(CATGrating):
'''A CAT grating representing only the L1 structure
This is treated independently of the CAT grating layer itself
although the two gratings are not really in the far-field limit.
CAT gratings of this class determine (statistically) if a photon
passes through the grating bars or the L1 support.
    The L1 support is simplified as a solid Si layer of 4 mu thickness.
Parameters
----------
    l1_dims : dict
        Dimensions of the L1 support (period, bar width, and bar depth as
        `astropy.units.Quantity` values). See module level variable for format.
'''
blaze_name = 'blaze_L1'
order_name = 'order_L1'
def __init__(self, l1_dims=l1_dims, **kwargs):
check_lx_dims(l1_dims)
self.openfraction = 1 - l1_dims['barwidth'] / l1_dims['period']
energy = l1transtab['energy'].to(u.keV, equivalencies=u.spectral())
trans = np.exp(np.log(l1transtab['transmission']) * l1_dims['bardepth'] / (1 * u.micrometer))
self.transfunc = interp1d(energy, trans)
kwargs['d'] = l1_dims['period'].to(u.mm).value
super().__init__(**kwargs)
def specific_process_photons(self, photons, intersect,
interpos, intercoos):
catresult = super().specific_process_photons(photons, intersect, interpos, intercoos)
# Now select which photons go through the L1 support and
# set the numbers appropriately.
# It is easier to have the diffraction calculated for all photons
# and then re-set numbers for a small fraction here.
        # That way, I don't have to duplicate the blaze calculation, and no
        # crazy tricks are necessary to keep the indices correct.
l1 = np.random.rand(intersect.sum()) > self.openfraction
ind = intersect.nonzero()[0][l1]
catresult['dir'][l1] = photons['dir'].data[ind, :]
catresult['polarization'][l1] = photons['polarization'].data[ind, :]
catresult['order_L1'][l1] = 0
catresult['probability'][l1] = self.transfunc(photons['energy'][ind])
return catresult
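# Illustrative sketch of the boolean-mask bookkeeping in
# L1.specific_process_photons above: mapping the sub-selection of photons
# that hit an L1 bar back to row indices of the full photon table.
# Values are made up.
def _example_l1_index_mapping():
    import numpy as np
    intersect = np.array([True, False, True, True, False])
    l1 = np.array([True, False, True])    # one entry per intersecting photon
    ind = intersect.nonzero()[0][l1]      # -> array([0, 3]): rows in the full table
    return ind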
class L2Abs(FlatOpticalElement):
'''L2 absorption and shadowing
Some photons may pass through the CAT grating membrane and L1
support, but are absorbed by the L2 sidewalls. We treat this
statistically by reducing the overall probability. This
implementation ignores the effect that photons might scatter on
the L2 sidewall surface (those would be scattered away from the
CCDs anyway for most layouts).
Note that this does not read the L2 from a file, but calculates it
directly from the dimensions.
'''
def __init__(self, l2_dims=l2_dims, **kwargs):
check_lx_dims(l2_dims)
self.bardepth = l2_dims['bardepth']
self.period = l2_dims['period']
self.barwidth = l2_dims['barwidth']
super().__init__(**kwargs)
self.innerfree = self.period - self.barwidth
def specific_process_photons(self, photons, intersect,
interpos, intercoos):
p3 = norm_vector(photons['dir'].data[intersect])
ex, ey, en = self.geometry.get_local_euklid_bases(intercoos[intersect, :])
angle = np.arccos(np.abs(inner1d(p3, en)))
# fractional area NOT covered by the hexagon structure
openfraction = (self.innerfree / self.period)**2
# fractional area shadowed by inclined hexagon structure
shadowarea = (self.bardepth * self.innerfree * np.sin(angle))
totalarea = self.period**2 / 2 * np.sqrt(3)
shadowfraction = shadowarea / totalarea
return {'probability': openfraction - shadowfraction}
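# Illustrative sketch: evaluating the L2 open and shadow fractions from
# L2Abs.specific_process_photons for one incidence angle, with numbers taken
# from the module-level `l2_dims` defaults (all lengths in mm).
def _example_l2_fractions(angle=0.03):
    import numpy as np
    period, barwidth, bardepth = 0.966, 0.1, 0.5
    innerfree = period - barwidth
    openfraction = (innerfree / period)**2
    totalarea = period**2 / 2 * np.sqrt(3)
    shadowfraction = bardepth * innerfree * np.sin(angle) / totalarea
    return openfraction - shadowfraction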
class L2Diffraction(RandomGaussianScatter):
'''Very simple approximation of L2 diffraction effects.
L2 is a hexagonal pattern, but at such a large spacing, that diffraction
at higher orders can be safely neglected. The only thing that does
happen is a slight broadening due to the single-slit function, and again,
only the core of that matters. So, we simply approximate this here with
simple Gaussian Scattering using the radius of the Airy disk as estimate
for the broadening sigma.
'''
scattername = 'L2Diffraction'
def __init__(self, l2_dims=l2_dims, **kwargs):
check_lx_dims(l2_dims)
self.innerfree = l2_dims['period'] - l2_dims['barwidth']
super().__init__(**kwargs)
def scatter(self, photons, intersect, interpos, intercoos):
wave = (photons['energy'].data[intersect] * u.keV).to(u.mm, equivalencies=u.spectral())
# 1.22 from Airy disk formula https://en.wikipedia.org/wiki/Airy_disk
# 0.4 is approx factor between sigma and r (first minimum)
sigma = 1.22 * 0.4 * np.arcsin(wave / self.innerfree)
return np.random.normal(size=intersect.sum()) * sigma
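# Illustrative sketch: the Gaussian sigma that L2Diffraction.scatter assigns
# to a photon, here for an assumed 1 keV photon and the default L2 open width.
# The h*c constant is approximate; the original code uses astropy equivalencies.
def _example_l2_sigma(energy_kev=1.0):
    import numpy as np
    hc_kev_nm = 1.2398                       # approx. h*c in keV nm
    wave_mm = hc_kev_nm / energy_kev * 1e-6  # wavelength converted to mm
    innerfree_mm = 0.966 - 0.1               # open width from l2_dims, in mm
    return 1.22 * 0.4 * np.arcsin(wave_mm / innerfree_mm)   # sigma in rad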
class NonParallelCATGrating(CATGrating):
'''CAT Grating where the angle of the reflective wall changes.
    This element represents a CAT grating where not all grating bar walls
    are perpendicular to the surface of the grating; the walls are
    perpendicular only for a ray through the center. The blaze angle changes
    linearly with the distance to the center in the dispersion direction.
Each grating bar has a fixed angle, i.e. no change of the direction
happens along the grating bars (perpendicular to the dispersion direction).
Parameters
----------
d_blaze_mm : float
Change in direction of the reflecting grating bar sidewall, which
directly translates to a change in blaze angle [rad / mm].
blaze_center : float
Blaze angle at center of grating, ``0`` means grating bars are
perpendicular to element surface. [rad]
'''
def __init__(self, **kwargs):
self.d_blaze_mm = kwargs.pop('d_blaze_mm', 0)
self.blaze_center = kwargs.pop('blaze_center', 0)
super().__init__(**kwargs)
def blaze_angle_modifier(self, intercoos):
'''
Parameters
----------
intercoos : np.array
intercoos coordinates for photons interacting with optical element
'''
return self.blaze_center + intercoos[:, 0] * self.d_blaze_mm
def catsupportbars(photons):
'''Metal structure that holds grating facets will absorb all photons
that do not pass through a grating facet.
We might want to call this L3 support ;-)
'''
photons['probability'][photons['facet'] < 0] = 0.
return photons
class CATL1L2Stack(FlatStack):
'''SNL fabricated CAT grating
This element combines all parts of a CAT grating into a single object.
These include the grating membrane and the absorption and diffraction due
to the L1 and L2 support.
    Approximations are made for all those elements; see the individual classes
    for more details. Except for `order_selector`, all other parameters have
    defaults defined in module level variables.
Parameters
----------
order_selector : `marxs.optics.OrderSelector`
Order selector for the grating membrane
groove_angle : float
        Groove angle of grating bars (in rad). Default: 0
l1_order_selector : `marxs.optics.OrderSelector`
Order selector for L1 dispersion (cross-dispersion direction for grating)
qualityfactor : dict
        Parameterization of grating quality scaling factor. See module level variable
for format.
l1_dims : dict
Dimensions of L1 support. See module level variable for format.
l2_dims : dict
Dimensions of L2 support. See module level variable for format.
'''
def __init__(self, **kwargs):
kwargs['elements'] = [CATGrating,
QualityFactor,
L1,
L2Abs,
L2Diffraction,
]
groove_angle = kwargs.pop('groove_angle', 0.)
l2dim = kwargs.pop('l2_dims', l2_dims)
l1dim = kwargs.pop('l1_dims', l1_dims)
kwargs['keywords'] = [{'order_selector': kwargs.pop('order_selector'),
'd': kwargs.pop('d', d),
'groove_angle': groove_angle},
{'qualityfactor': kwargs.pop('qualityfactor', qualityfactor)},
{'l1_dims': l1dim,
'order_selector': kwargs.pop('l1_order_selector', l1_order_selector),
'groove_angle': np.pi / 2. + groove_angle},
{'l2_dims': l2dim},
{'l2_dims': l2dim}
]
super().__init__(**kwargs)
|
hamogu/marxs
|
marxs/missions/mitsnl/catgrating.py
|
Python
|
gpl-3.0
| 16,691
|
[
"Gaussian"
] |
de61b9f982846afff872039a0e47bae1f9be5c7c8840996f010e18af1286b1da
|
# mount.py
#
# Copyright (C) 2011-2015 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# Author(s): Brian C. Lane <bcl@redhat.com>
#
import logging
log = logging.getLogger("livemedia-creator")
import os
import pycdlib
from pycdlib.pycdlibexception import PyCdlibException
from pylorax.imgutils import mount, umount
class IsoMountpoint(object):
"""
    Mount the iso and check to make sure the vmlinuz and initrd.img files exist.
    Also check the iso for a stage2 image, set a flag, and extract the
    iso's label.
stage2 can be either LiveOS/squashfs.img or images/install.img
"""
def __init__(self, iso_path, initrd_path=None):
"""
Mount the iso
:param str iso_path: Path to the iso to mount
:param str initrd_path: Optional path to initrd
initrd_path can be used to point to a tree with a newer
initrd.img than the iso has. The iso is still used for stage2.
self.kernel and self.initrd point to the kernel and initrd.
self.stage2 is set to True if there is a stage2 image.
self.repo is the path to the mounted iso if there is a /repodata dir.
"""
self.label = None
self.iso_path = iso_path
self.initrd_path = initrd_path
if not self.initrd_path:
self.mount_dir = mount(self.iso_path, opts="loop")
else:
self.mount_dir = self.initrd_path
kernel_list = [("/isolinux/vmlinuz", "/isolinux/initrd.img"),
("/ppc/ppc64/vmlinuz", "/ppc/ppc64/initrd.img"),
("/images/pxeboot/vmlinuz", "/images/pxeboot/initrd.img")]
if os.path.isdir(self.mount_dir+"/repodata"):
self.repo = self.mount_dir
else:
self.repo = None
self.stage2 = os.path.exists(self.mount_dir+"/LiveOS/squashfs.img") or \
os.path.exists(self.mount_dir+"/images/install.img")
try:
for kernel, initrd in kernel_list:
if (os.path.isfile(self.mount_dir+kernel) and
os.path.isfile(self.mount_dir+initrd)):
self.kernel = self.mount_dir+kernel
self.initrd = self.mount_dir+initrd
break
else:
raise Exception("Missing kernel and initrd file in iso, failed"
" to search under: {0}".format(kernel_list))
except:
self.umount()
raise
self.get_iso_label()
    def umount(self):
"""Unmount the iso"""
if not self.initrd_path:
umount(self.mount_dir)
def get_iso_label(self):
"""
Get the iso's label using pycdlib
Sets self.label if one is found
"""
try:
iso = pycdlib.PyCdlib()
iso.open(self.iso_path)
self.label = iso.pvd.volume_identifier.decode("UTF-8").strip()
except PyCdlibException as e:
log.error("Problem reading label from %s: %s", self.iso_path, e)
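# Illustrative sketch (not part of the original module): the for/else search
# pattern used in IsoMountpoint.__init__ above, with a fake set of paths
# standing in for the mounted iso contents.
def _example_kernel_search(present=frozenset(["/isolinux/vmlinuz",
                                              "/isolinux/initrd.img"])):
    kernel_list = [("/isolinux/vmlinuz", "/isolinux/initrd.img"),
                   ("/images/pxeboot/vmlinuz", "/images/pxeboot/initrd.img")]
    for kernel, initrd in kernel_list:
        if kernel in present and initrd in present:
            found = (kernel, initrd)
            break
    else:
        # Runs only if the loop finished without a break
        raise Exception("Missing kernel and initrd file in iso")
    return found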
|
rhinstaller/lorax
|
src/pylorax/mount.py
|
Python
|
gpl-2.0
| 3,658
|
[
"Brian"
] |
bd4bb794b88b612c7752f14efa941bb160745eb1eea2a831312d9df192e98dff
|
from validate_app import validateApp
import os
from distutils import spawn
import sys
from parse_files import parseOutMapping, bringTogether
from bashSub import bashSub
def checkPreprocessApplications():
applications = ["bwa", "samtools", "bowtie2"]
source = ["http://bio-bwa.sourceforge.net/", "http://samtools.sourceforge.net/", "http://sourceforge.net/projects/bowtie-bio/files/bowtie2/2.2.6/"]
i = 0
for app in applications:
if spawn.find_executable(app) is None:
sys.stderr.write("It doesn't look like you have app - " + app + "\n")
sys.stderr.write("Download it here - " + source[i] + "\n")
exit(0)
else:
sys.stderr.write(app + " found\n")
        i += 1
def returnReads(dictSampleSeqFiles):
SE = ""
PE1 = ""
PE2 = ""
# data struct
# { (sampleKey, seqKey) : [[SE], [SE], [PE1, PE2], [PE1, PE2]] }
    # dive into each of the sub lists in the dictionary value
    for e in dictSampleSeqFiles:
        # if a sublist only has one element then it is an SE read
        if len(e) == 1:
if SE == "":
SE = e[0]
else:
SE += "," + e[0]
else:
if PE1 == "":
PE1 = e[0]
PE2 = e[1]
else:
PE1 += "," + e[0]
PE2 += "," + e[1]
return [SE, PE1, PE2]
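# Illustrative sketch: what returnReads produces for a mixed list of SE and PE
# sublists. File names are made up.
def exampleReturnReads():
    files = [["s1_SE.fastq"],
             ["s1_R1.fastq", "s1_R2.fastq"],
             ["s2_R1.fastq", "s2_R2.fastq"]]
    # -> ["s1_SE.fastq", "s1_R1.fastq,s2_R1.fastq", "s1_R2.fastq,s2_R2.fastq"]
    return returnReads(files)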
def check_dir(Dir):
if not os.path.exists(Dir):
os.mkdir(Dir)
class mappingCMD:
def __init__(self):
self.metaDataFolder = "MetaData"
def createIndex(self, ref, algorithm, forceIndex):
if os.path.exists(ref):
if "bowtie" in algorithm:
if not os.path.exists(ref + ".bt2") or forceIndex:
createTheIndex = bashSub("bowtie2-build ", [ref], [''], '', '/dev/null')
print createTheIndex.getCommand()
createTheIndex.runCmd("")
else:
if not os.path.exists(ref + ".sa") or forceIndex:
createTheIndex = bashSub("bwa index ", [ref], [''], '', '/dev/null')
print createTheIndex.getCommand()
createTheIndex.runCmd("")
else:
print "Doesn't seem ref - " + ref + "actually exists"
exit(1)
def index(self, ref, algorithm, forceIndex):
if ref == "":
print "Would you mind adding a reference file? (-R) Thank you."
exit(1)
else:
self.createIndex(ref, algorithm, forceIndex)
def execute(self, args):
time = 0
        checkPreprocessApplications()
logFiles = []
# checkPreprocessApplications()
validate = validateApp()
validate.setValidation(True)
dictSampleSeqFiles = validate.validateSampleSheet(args.readFolder, args.finalDir, args.samplesFile, args.force, True)
self.index(args.refFasta, args.mapping, args.forceIndex)
for key in dictSampleSeqFiles:
check_dir(args.finalDir)
check_dir(key[1])
meta = key[1] # NOT USED SO FAR
fileEnding = key[1].split("/")[-1]
endString = ' 2>/dev/null | tee >(samtools flagstat - >' + os.path.join(key[1], fileEnding + '.flagstats') + ') | samtools view -bS - | samtools sort - ' + os.path.join(key[1], fileEnding)
SEandPE = returnReads(dictSampleSeqFiles[key])
files = dictSampleSeqFiles[key][0]
RGstring = "-R '@RG\\tID:" + fileEnding + "\\tSM:" + fileEnding + "\\tPL:ILLUMINA\\tLB:whatever\\tPU:whatever\\tDS:Paired'"
if SEandPE[0] != "":
terminalString = []
if len(files) == 3 and args.forcePairs:
awkR1 = """
awk '{
printf "%s/1\\n", $0
getline
print substr($0, 0, length/2)
getline
print $0
getline
print substr($0, 0, length/2)
}' """ + files[2]
awkR2 = """awk 'BEGIN {
j = n = split("A C G T", t)
for (i = 0; ++i <= n;)
map[t[i]] = t[j--]
}
{
printf "%s/2\\n", $0
getline
for (i = length; i > length/2 ; i--) {
printf "%s", map[substr($0, i, 1)]
}
printf "\\n"
getline
print $0
getline
for (i = length; i > length/2; i--) {
printf "%s", substr($0, i, 1)
}
printf "\\n"
}' """ + files[2]
terminalString = []
terminalString.append(bashSub("bwa mem -M " + RGstring, [str(int(args.threads))], ['-t'], args.refFasta + " <(cat " + files[0] + " <(" + awkR1 + ")) <(cat " + files[1] + " <(" + awkR2 + ")) " + endString, "/dev/null"))
runIndex=bashSub("samtools index ", [os.path.join(key[1], fileEnding + ".bam")], [''], '', '/dev/null')
runIdxStats=bashSub("samtools idxstats ", [os.path.join(key[1], fileEnding + ".bam")], [''], '> ' + os.path.join(key[1], fileEnding + ".idxstats"), '/dev/null')
elif len(files) == 3:
terminalString = []
terminalString.append(bashSub("bwa mem -M " + RGstring, [str(int(args.threads)/2)], ['-t'], args.refFasta + " " + files[0] + " " + files[1] , "/dev/null"))
terminalString.append(bashSub("bwa mem -M " + RGstring, [str(int(args.threads)/2)], ['-t'], args.refFasta + " " + files[2] , "/dev/null"))
terminalString.append(bashSub("samtools merge - " + terminalString[-1].processSub()[0] + " " + terminalString[-2].processSub()[0] + " " + endString , [""], [""], "", "/dev/null"))
runIndex=bashSub("samtools index ", [os.path.join(key[1], fileEnding + ".bam")], [''], '', '/dev/null')
runIdxStats=bashSub("samtools idxstats ", [os.path.join(key[1], fileEnding + ".bam")], [''], '> ' + os.path.join(key[1], fileEnding + ".idxstats"), '/dev/null')
elif SEandPE[1] != "":
RGstring = "-R '@RG\\tID:" + fileEnding + "\\tSM:" + fileEnding + "\\tPL:ILLUMINA\\tLB:whatever\\tPU:whatever\\tDS:Paired'"
terminalString = []
terminalString.append(bashSub("bwa mem -M " + RGstring, [args.threads], ['-t'], args.refFasta + " " + SEandPE[1] + " " + SEandPE[2] + endString, "/dev/null"))
runIndex = bashSub("samtools index ", [os.path.join(key[1], fileEnding + ".bam")], [''], '', '/dev/null')
runIdxStats = bashSub("samtools idxstats ", [os.path.join(key[1], fileEnding + ".bam")], [''], '> ' + os.path.join(key[1], fileEnding + ".idxstats"), '/dev/null')
print "___ PE COMMANDS ___"
print terminalString[-1].getCommand()
terminalString[-1].runCmd("")
print runIndex.getCommand()
runIndex.runCmd("")
print runIdxStats.getCommand()
runIdxStats.runCmd("")
sys.stderr.flush()
time += runIndex.returnTime() + runIdxStats.returnTime() + terminalString[-1].returnTime()
logFiles.append(parseOutMapping(key[1], key[1].split("/")[-1]))
bringTogether(logFiles, os.path.join(args.finalDir, "Mapping_Summary.log"))
print "Total amount of seconds to run all samples"
print "Seconds: " + str(time)
|
msettles/expHTS
|
expHTS/mappingCMD.py
|
Python
|
apache-2.0
| 8,155
|
[
"BWA",
"Bowtie"
] |
8909d3ec3df2b30887af5f783521b2c743a32b322c15db3d47d06a6792ff8a22
|
# Copyright (C) 2010-2019 The ESPResSo project
#
# This file is part of ESPResSo.
#
# ESPResSo is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ESPResSo is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import espressomd
import unittest as ut
import unittest_decorators as utx
import numpy as np
@utx.skipIfMissingFeatures(["LB_ELECTROHYDRODYNAMICS"])
class LBEHTest(ut.TestCase):
from espressomd import lb
s = espressomd.System(box_l=[6.0, 6.0, 6.0])
def setUp(self):
self.params = {'time_step': 0.01,
'tau': 0.02,
'agrid': 0.5,
'dens': 0.85,
'viscosity': 30.0,
'friction': 3.0,
'temp': 0.0,
'skin': 0.2,
'muE': [0.1, 0.2, 0.3]}
self.s.periodicity = [1, 1, 1]
self.s.time_step = self.params['time_step']
self.s.cell_system.skin = self.params['skin']
self.lbf = self.lb.LBFluid(
visc=self.params['viscosity'],
dens=self.params['dens'],
agrid=self.params['agrid'],
tau=self.s.time_step,
kT=self.params['temp']
)
self.s.actors.add(self.lbf)
self.s.thermostat.set_lb(
LB_fluid=self.lbf,
gamma=self.params['friction'])
def tearDown(self):
self.s.actors.clear()
def test(self):
s = self.s
p = s.part.add(pos=0.5 * s.box_l, mu_E=self.params['muE'])
mu_E = np.array(self.params['muE'])
# Terminal velocity is mu_E minus the momentum the fluid
# got by accelerating the particle in the beginning.
v_term = (1. - 1. / (s.volume() * self.params['dens'])) * mu_E
s.integrator.run(steps=500)
np.testing.assert_allclose(v_term, np.copy(p.v), atol=1e-5)
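# Illustrative sketch (not part of the original test): the terminal-velocity
# estimate asserted in LBEHTest.test, evaluated with the setUp parameters
# (box 6.0^3, dens 0.85, muE = [0.1, 0.2, 0.3]).
def _example_v_term():
    import numpy as np
    volume = 6.0**3
    dens = 0.85
    mu_E = np.array([0.1, 0.2, 0.3])
    # Terminal velocity is mu_E minus the momentum transferred to the fluid.
    return (1. - 1. / (volume * dens)) * mu_E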
if __name__ == "__main__":
ut.main()
|
fweik/espresso
|
testsuite/python/lb_electrohydrodynamics.py
|
Python
|
gpl-3.0
| 2,426
|
[
"ESPResSo"
] |
3022ff8212a6f3b8fc0adc3600a2133165d77a1ce6b463aaf99e6b14e8d175b9
|
from __future__ import print_function
import json
import os
import os.path
import re
import sys
import warnings
from collections import defaultdict
from distutils.command.build_scripts import build_scripts as BuildScripts
from distutils.command.sdist import sdist as SDist
try:
from setuptools import setup, find_packages
from setuptools.command.build_py import build_py as BuildPy
from setuptools.command.install_lib import install_lib as InstallLib
from setuptools.command.install_scripts import install_scripts as InstallScripts
except ImportError:
print("Ansible now needs setuptools in order to build. Install it using"
" your package manager (usually python-setuptools) or via pip (pip"
" install setuptools).", file=sys.stderr)
sys.exit(1)
sys.path.insert(0, os.path.abspath('lib'))
from ansible.release import __version__, __author__
SYMLINK_CACHE = 'SYMLINK_CACHE.json'
def _find_symlinks(topdir, extension=''):
"""Find symlinks that should be maintained
Maintained symlinks exist in the bin dir or are modules which have
aliases. Our heuristic is that they are a link in a certain path which
point to a file in the same directory.
    .. warning::
We want the symlinks in :file:`bin/` that link into :file:`lib/ansible/*` (currently,
:command:`ansible`, :command:`ansible-test`, and :command:`ansible-connection`) to become
real files on install. Updates to the heuristic here *must not* add them to the symlink
cache.
"""
symlinks = defaultdict(list)
for base_path, dirs, files in os.walk(topdir):
for filename in files:
filepath = os.path.join(base_path, filename)
if os.path.islink(filepath) and filename.endswith(extension):
target = os.readlink(filepath)
if target.startswith('/'):
# We do not support absolute symlinks at all
continue
if os.path.dirname(target) == '':
link = filepath[len(topdir):]
if link.startswith('/'):
link = link[1:]
symlinks[os.path.basename(target)].append(link)
else:
# Count how many directory levels from the topdir we are
levels_deep = os.path.dirname(filepath).count('/')
# Count the number of directory levels higher we walk up the tree in target
target_depth = 0
for path_component in target.split('/'):
if path_component == '..':
target_depth += 1
# If we walk past the topdir, then don't store
if target_depth >= levels_deep:
break
else:
target_depth -= 1
else:
# If we managed to stay within the tree, store the symlink
link = filepath[len(topdir):]
if link.startswith('/'):
link = link[1:]
symlinks[target].append(link)
return symlinks
def _cache_symlinks(symlink_data):
with open(SYMLINK_CACHE, 'w') as f:
json.dump(symlink_data, f)
def _maintain_symlinks(symlink_type, base_path):
"""Switch a real file into a symlink"""
try:
# Try the cache first because going from git checkout to sdist is the
# only time we know that we're going to cache correctly
with open(SYMLINK_CACHE, 'r') as f:
symlink_data = json.load(f)
except (IOError, OSError) as e:
# IOError on py2, OSError on py3. Both have errno
if e.errno == 2:
# SYMLINKS_CACHE doesn't exist. Fallback to trying to create the
# cache now. Will work if we're running directly from a git
# checkout or from an sdist created earlier.
library_symlinks = _find_symlinks('lib', '.py')
library_symlinks.update(_find_symlinks('test/lib'))
symlink_data = {'script': _find_symlinks('bin'),
'library': library_symlinks,
}
# Sanity check that something we know should be a symlink was
# found. We'll take that to mean that the current directory
# structure properly reflects symlinks in the git repo
if 'ansible-playbook' in symlink_data['script']['ansible']:
_cache_symlinks(symlink_data)
else:
raise RuntimeError(
"Pregenerated symlink list was not present and expected "
"symlinks in ./bin were missing or broken. "
"Perhaps this isn't a git checkout?"
)
else:
raise
symlinks = symlink_data[symlink_type]
for source in symlinks:
for dest in symlinks[source]:
dest_path = os.path.join(base_path, dest)
if not os.path.islink(dest_path):
try:
os.unlink(dest_path)
except OSError as e:
if e.errno == 2:
# File does not exist which is all we wanted
pass
os.symlink(source, dest_path)
class BuildPyCommand(BuildPy):
def run(self):
BuildPy.run(self)
_maintain_symlinks('library', self.build_lib)
class BuildScriptsCommand(BuildScripts):
def run(self):
BuildScripts.run(self)
_maintain_symlinks('script', self.build_dir)
class InstallLibCommand(InstallLib):
def run(self):
InstallLib.run(self)
_maintain_symlinks('library', self.install_dir)
class InstallScriptsCommand(InstallScripts):
def run(self):
InstallScripts.run(self)
_maintain_symlinks('script', self.install_dir)
class SDistCommand(SDist):
def run(self):
# have to generate the cache of symlinks for release as sdist is the
# only command that has access to symlinks from the git repo
library_symlinks = _find_symlinks('lib', '.py')
library_symlinks.update(_find_symlinks('test/lib'))
symlinks = {'script': _find_symlinks('bin'),
'library': library_symlinks,
}
_cache_symlinks(symlinks)
SDist.run(self)
# Print warnings at the end because no one will see warnings before all the normal status
# output
if os.environ.get('_ANSIBLE_SDIST_FROM_MAKEFILE', False) != '1':
warnings.warn('When setup.py sdist is run from outside of the Makefile,'
' the generated tarball may be incomplete. Use `make snapshot`'
' to create a tarball from an arbitrary checkout or use'
' `cd packaging/release && make release version=[..]` for official builds.',
RuntimeWarning)
def read_file(file_name):
"""Read file and return its contents."""
with open(file_name, 'r') as f:
return f.read()
def read_requirements(file_name):
"""Read requirements file as a list."""
reqs = read_file(file_name).splitlines()
if not reqs:
raise RuntimeError(
"Unable to read requirements from the %s file"
"That indicates this copy of the source code is incomplete."
% file_name
)
return reqs
PYCRYPTO_DIST = 'pycrypto'
def get_crypto_req():
"""Detect custom crypto from ANSIBLE_CRYPTO_BACKEND env var.
pycrypto or cryptography. We choose a default but allow the user to
override it. This translates into pip install of the sdist deciding what
package to install and also the runtime dependencies that pkg_resources
knows about.
"""
crypto_backend = os.environ.get('ANSIBLE_CRYPTO_BACKEND', '').strip()
if crypto_backend == PYCRYPTO_DIST:
# Attempt to set version requirements
return '%s >= 2.6' % PYCRYPTO_DIST
return crypto_backend or None
def substitute_crypto_to_req(req):
"""Replace crypto requirements if customized."""
crypto_backend = get_crypto_req()
if crypto_backend is None:
return req
def is_not_crypto(r):
CRYPTO_LIBS = PYCRYPTO_DIST, 'cryptography'
return not any(r.lower().startswith(c) for c in CRYPTO_LIBS)
return [r for r in req if is_not_crypto(r)] + [crypto_backend]
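# Illustrative sketch (not part of the original setup.py): how
# substitute_crypto_to_req swaps crypto requirements when the
# ANSIBLE_CRYPTO_BACKEND environment variable is set. The requirement
# strings are made up.
def _example_substitute_crypto():
    os.environ['ANSIBLE_CRYPTO_BACKEND'] = PYCRYPTO_DIST
    reqs = ['cryptography', 'jinja2', 'PyYAML']
    # -> ['jinja2', 'PyYAML', 'pycrypto >= 2.6']
    return substitute_crypto_to_req(reqs)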
def read_extras():
"""Specify any extra requirements for installation."""
extras = dict()
extra_requirements_dir = 'packaging/requirements'
for extra_requirements_filename in os.listdir(extra_requirements_dir):
filename_match = re.search(r'^requirements-(\w*).txt$', extra_requirements_filename)
if not filename_match:
continue
extra_req_file_path = os.path.join(extra_requirements_dir, extra_requirements_filename)
try:
extras[filename_match.group(1)] = read_file(extra_req_file_path).splitlines()
except RuntimeError:
pass
return extras
def get_dynamic_setup_params():
"""Add dynamically calculated setup params to static ones."""
return {
# Retrieve the long description from the README
'long_description': read_file('README.rst'),
'install_requires': substitute_crypto_to_req(
read_requirements('requirements.txt'),
),
'extras_require': read_extras(),
}
static_setup_params = dict(
# Use the distutils SDist so that symlinks are not expanded
# Use a custom Build for the same reason
cmdclass={
'build_py': BuildPyCommand,
'build_scripts': BuildScriptsCommand,
'install_lib': InstallLibCommand,
'install_scripts': InstallScriptsCommand,
'sdist': SDistCommand,
},
name='ansible',
version=__version__,
description='Radically simple IT automation',
author=__author__,
author_email='info@ansible.com',
url='https://ansible.com/',
project_urls={
'Bug Tracker': 'https://github.com/ansible/ansible/issues',
'CI: Shippable': 'https://app.shippable.com/github/ansible/ansible',
'Code of Conduct': 'https://docs.ansible.com/ansible/latest/community/code_of_conduct.html',
'Documentation': 'https://docs.ansible.com/ansible/',
'Mailing lists': 'https://docs.ansible.com/ansible/latest/community/communication.html#mailing-list-information',
'Source Code': 'https://github.com/ansible/ansible',
},
license='GPLv3+',
# Ansible will also make use of a system copy of python-six and
# python-selectors2 if installed but use a Bundled copy if it's not.
python_requires='>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*',
package_dir={'': 'lib',
'ansible_test': 'test/lib/ansible_test'},
packages=find_packages('lib') + find_packages('test/lib'),
include_package_data=True,
classifiers=[
'Development Status :: 5 - Production/Stable',
'Environment :: Console',
'Intended Audience :: Developers',
'Intended Audience :: Information Technology',
'Intended Audience :: System Administrators',
'License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)',
'Natural Language :: English',
'Operating System :: POSIX',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Topic :: System :: Installation/Setup',
'Topic :: System :: Systems Administration',
'Topic :: Utilities',
],
scripts=[
'bin/ansible',
'bin/ansible-playbook',
'bin/ansible-pull',
'bin/ansible-doc',
'bin/ansible-galaxy',
'bin/ansible-console',
'bin/ansible-connection',
'bin/ansible-vault',
'bin/ansible-config',
'bin/ansible-inventory',
'bin/ansible-test',
],
data_files=[],
# Installing as zip files would break due to references to __file__
zip_safe=False
)
def main():
"""Invoke installation process using setuptools."""
setup_params = dict(static_setup_params, **get_dynamic_setup_params())
ignore_warning_regex = (
r"Unknown distribution option: '(project_urls|python_requires)'"
)
warnings.filterwarnings(
'ignore',
message=ignore_warning_regex,
category=UserWarning,
module='distutils.dist',
)
setup(**setup_params)
warnings.resetwarnings()
if __name__ == '__main__':
main()
|
Dhivyap/ansible
|
setup.py
|
Python
|
gpl-3.0
| 12,900
|
[
"Galaxy"
] |
d27f137b12765bc81e00c84a8ba782dbaffb7f5ba0684b5c17c8b3d54d885ef4
|
# This component assembles an analysis recipe for the annual adaptive comfort component
#
# Honeybee: A Plugin for Environmental Analysis (GPL) started by Mostapha Sadeghipour Roudsari
#
# This file is part of Honeybee.
#
# Copyright (c) 2013-2015, Chris Mackey <Chris@MackeyArchitecture.com>
# Honeybee is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation; either version 3 of the License,
# or (at your option) any later version.
#
# Honeybee is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Honeybee; If not, see <http://www.gnu.org/licenses/>.
#
# @license GPL-3.0+ <http://spdx.org/licenses/GPL-3.0+>
"""
Use this component to assemble an adaptive comfort recipe for the "Honeybee_Annual Indoor Comfort Analysis" component.
-
Provided by Honeybee 0.0.57
Args:
_viewFactorMesh: The data tree of view factor meshes that comes out of the "Honeybee_Indoor View Factor Calculator".
_viewFactorInfo: The python list that comes out of the "Honeybee_Indoor View Factor Calculator".
_epwFile: The epw file that was used to run the EnergyPlus model. This will be used to generate sun vectors and get radiation data for estimating the temperature delta for sun falling on occupants.
===============: ...
        _srfIndoorTemp: A list of surfaceIndoorTemp data out of the "Honeybee_Read EP Surface Result" component.
        srfOutdoorTemp_: A list of surfaceOutdoorTemp data out of the "Honeybee_Read EP Surface Result" component.
_zoneAirTemp: The airTemperature output of the "Honeybee_Read EP Result" component.
_zoneRelHumid: The relativeHumidity output of the "Honeybee_Read EP Result" component.
_zoneAirFlowVol: The airFlowVolume output of the "Honeybee_Read EP Result" component.
_zoneAirHeatGain: The airHeatGainRate output of the "Honeybee_Read EP Result" component.
        metabolicRate_: A number representing the metabolic rate of the human subject in met. If no value is input here, the component will assume a metabolic rate of 1 met, which is the metabolic rate of a seated human being. This input can also accept a list of 8760 metabolic rates to represent how an occupant's metabolic rate might change from hour to hour.
        clothingLevel_: A number representing the clothing level of the human subject in clo. If no value is input here, the component will assume a clothing level of 1 clo, which is roughly the insulation provided by a 3-piece suit. A person dressed in shorts and a T-shirt has a clothing level of roughly 0.5 clo and a person in a thick winter jacket can have a clothing level as high as 2 to 4 clo. This input can also accept a list of 8760 clothing levels to represent how an occupant's clothing might change from hour to hour.
===============: ...
comfortPar_: Optional comfort parameters from the "Ladybug_PMV Comfort Parameters" component. Use this to adjust maximum and minimum acceptable humidity ratios. These comfortPar can also change whether comfort is defined by eighty or ninety percent of people comfortable. By default, comfort is defined as 90% of the occupants comfortable and there are no limits on humidity when there is no thermal stress.
        wellMixedAirOverride_: Set to "True" if you know that your building will have a forced air system with diffusers meant to mix the air as well as possible. This will prevent the calculation from running the air stratification function and instead assume well mixed conditions. This input can also be a list of 8760 boolean values that represent the hours of the year when a forced air system or ceiling fans are run to mix the air. The default is set to 'False' to run the stratification calculation for every hour of the year, assuming no forced air heating/cooling system.
inletHeightOverride_: An optional list of float values that match the data tree of view factor meshes and represent the height, in meters, from the bottom of the view factor mesh to the window inlet height. This will override the default value used in the air stratification calculation, which sets the inlet height in the bottom half of the average glazing height.
        windowShadeTransmiss_: A decimal value between 0 and 1 that represents the transmissivity of the shades on the windows of a zone (1 is no shade and 0 is fully shaded). This input can also be a list of 8760 values between 0 and 1 that represents a list of hourly window shade transmissivities to be applied to all windows of the model. Finally and most importantly, this can be the 'windowTransmissivity' output of the 'Read EP Surface Result' component for an energy model that has been run with window shades. This final option ensures that the energy model and the comfort map results are always aligned although it is the most computationally expensive of the options. The default is set to 0, which assumes no additional shading to windows.
clothingAbsorptivity_: An optional decimal value between 0 and 1 that represents the fraction of solar radiation absorbed by the human body. The default is set to 0.7 for (average/brown) skin and average clothing. You may want to increase this value for darker skin or darker clothing.
additionalWindSpeed_: An additional value of indoor wind speed in m/s to be added to the base speed computed from the zone volume and hourly flow volume. Use this input to account for objects like ceiling fans that might increase the interior wind speed felt by the occupants while not affecting the total flow volume into the zone much. This input can also be a list of 8760 additional wind speed values that represent the hours of the year when wind speed is increased. Lastly, this input can be a data tree of values with branches that are each 8760 values long and correspond to the branches of the input viewFactorMesh_. This last option enables you to specify different wind speeds for different continuous air volumes.
        outdoorTerrain_: An integer from 0 to 3 that sets the terrain class associated with the wind speed used in outdoor wind calculations. Integer values represent the following terrain classes:
0 = Urban: large city centres, 50% of buildings above 21m over a distance of at least 2000m upwind.
1 = Suburban: suburbs, wooded areas.
2 = Country: open, with scattered objects generally less than 10m high.
3 = Water: Flat, unobstructed areas exposed to wind flowing over a large water body (no more than 500m inland).
Returns:
readMe!: ...
===============: ...
comfRecipe: An analysis recipe for the "Honeybee_Annual Indoor Comfort Analysis" component.
"""
ghenv.Component.Name = "Honeybee_PMV Comfort Analysis Recipe"
ghenv.Component.NickName = 'PMVComfRecipe'
ghenv.Component.Message = 'VER 0.0.57\nJUL_06_2015'
ghenv.Component.Category = "Honeybee"
ghenv.Component.SubCategory = "09 | Energy | Energy"
#compatibleHBVersion = VER 0.0.56\nFEB_01_2015
#compatibleLBVersion = VER 0.0.59\nJUN_25_2015
try: ghenv.Component.AdditionalHelpFromDocStrings = "6"
except: pass
from System import Object
from System import Drawing
import System
import Grasshopper.Kernel as gh
from Grasshopper import DataTree
from Grasshopper.Kernel.Data import GH_Path
import Rhino as rc
import scriptcontext as sc
import math
import os
w = gh.GH_RuntimeMessageLevel.Warning
tol = sc.doc.ModelAbsoluteTolerance
def checkTheInputs():
w = gh.GH_RuntimeMessageLevel.Warning
#Unpack the viewFactorInfo.
checkData25 = True
try:
testPtViewFactor, zoneSrfNames, testPtSkyView, testPtBlockedVec, testPtZoneWeights, testPtZoneNames, ptHeightWeights, zoneInletInfo, zoneHasWindows, outdoorIsThere, outdoorNonSrfViewFac, outdoorPtHeightWeights, testPtBlockName, zoneWindowTransmiss, zoneWindowNames, zoneFloorReflectivity, constantTransmis, finalAddShdTransmiss = _viewFactorInfo
except:
testPtViewFactor, zoneSrfNames, testPtSkyView, testPtBlockedVec, testPtZoneWeights, testPtZoneNames, ptHeightWeights, zoneInletInfo, zoneHasWindows, outdoorIsThere, outdoorNonSrfViewFac, outdoorPtHeightWeights, testPtBlockName, zoneWindowTransmiss, zoneWindowNames, zoneFloorReflectivity, constantTransmis, finalAddShdTransmiss = [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], True, []
checkData25 = False
warning = "_viewFactorInfo is not valid."
print warning
w = gh.GH_RuntimeMessageLevel.Warning
ghenv.Component.AddRuntimeMessage(w, warning)
#Convert the data tree of _viewFactorMesh to py data.
viewFactorMesh = []
checkData13 = True
pathCheck = 0
finalCheck = len(testPtViewFactor)
if _viewFactorMesh.BranchCount != 0:
if _viewFactorMesh.Branch(0)[0] != None:
treePaths = _viewFactorMesh.Paths
for path in treePaths:
i = path.Indices[0]
if i == pathCheck:
branchList = _viewFactorMesh.Branch(path)
dataVal = []
for item in branchList:
dataVal.append(item)
viewFactorMesh.append(dataVal)
pathCheck += 1
else:
while pathCheck < i:
viewFactorMesh.append([])
pathCheck += 1
if i == pathCheck:
branchList = _viewFactorMesh.Branch(path)
dataVal = []
for item in branchList:
dataVal.append(item)
viewFactorMesh.append(dataVal)
pathCheck += 1
if len(viewFactorMesh) < finalCheck:
while len(viewFactorMesh) < finalCheck:
viewFactorMesh.append([])
else:
checkData13 = False
print "Connect a data tree of view factor meshes from the 'Honeybee_Indoor View Factor Calculator' component."
else:
checkData13 = False
print "Connect a data tree of view factor meshes from the 'Honeybee_Indoor View Factor Calculator' component."
#Create a function to check and create a Python list from a datatree
def checkCreateDataTree(dataTree, dataName, dataType):
dataPyList = []
for i in range(dataTree.BranchCount):
branchList = dataTree.Branch(i)
dataVal = []
for item in branchList:
try: dataVal.append(float(item))
except: dataVal.append(item)
dataPyList.append(dataVal)
        #Test to see if the data has a header on it, which is necessary to know if it is the right data type. If there's no header, the data should not be visualized with this component.
checkHeader = []
dataHeaders = []
dataNumbers = []
for list in dataPyList:
if str(list[0]) == "key:location/dataType/units/frequency/startsAt/endsAt":
checkHeader.append(1)
dataHeaders.append(list[:7])
dataNumbers.append(list[7:])
else:
dataNumbers.append(list)
if sum(checkHeader) == len(dataPyList):
dataCheck2 = True
else:
dataCheck2 = False
warning = "Not all of the connected " + dataName + " has a Ladybug/Honeybee header on it. This header is necessary to generate an indoor temperture map with this component."
print warning
ghenv.Component.AddRuntimeMessage(w, warning)
        #Check to be sure that the lengths of data in the dataTree branches are all the same.
dataLength = len(dataNumbers[0])
dataLenCheck = []
for list in dataNumbers:
if len(list) == dataLength:
dataLenCheck.append(1)
else: pass
        if sum(dataLenCheck) == len(dataNumbers) and dataLength < 8761:
dataCheck4 = True
else:
dataCheck4 = False
warning = "Not all of the connected " + dataName + " branches are of the same length or there are more than 8760 values in the list."
print warning
ghenv.Component.AddRuntimeMessage(w, warning)
if dataCheck2 == True:
#Check to be sure that all of the data headers say that they are of the same type.
header = dataHeaders[0]
headerUnits = header[3]
headerStart = header[5]
headerEnd = header[6]
simStep = str(header[4])
headUnitCheck = []
headPeriodCheck = []
for head in dataHeaders:
if dataType in head[2]:
headUnitCheck.append(1)
if head[3] == headerUnits and str(head[4]) == simStep and head[5] == headerStart and head[6] == headerEnd:
headPeriodCheck.append(1)
else: pass
if sum(headPeriodCheck) == len(dataHeaders):
dataCheck5 = True
else:
dataCheck5 = False
warning = "Not all of the connected " + dataName + " branches are of the same timestep or same analysis period."
print warning
ghenv.Component.AddRuntimeMessage(w, warning)
if sum(headUnitCheck) == len(dataHeaders):
dataCheck6 = True
else:
dataCheck6 = False
warning = "Not all of the connected " + dataName + " data is for the correct data type."
print warning
ghenv.Component.AddRuntimeMessage(w, warning)
#See if the data is hourly.
if simStep == 'hourly' or simStep == 'Hourly': pass
else:
dataCheck6 = False
warning = "Simulation data must be hourly."
print warning
ghenv.Component.AddRuntimeMessage(w, warning)
        else:
            dataCheck5 = False
            dataCheck6 = False
if dataLength == 8760: annualData = True
else: annualData = False
simStep = 'unknown timestep'
headerUnits = 'unknown units'
dataHeaders = []
return dataCheck5, dataCheck6, headerUnits, dataHeaders, dataNumbers, [header[5], header[6]]
#Run all of the EnergyPlus data through the check function.
checkData1, checkData2, airTempUnits, airTempDataHeaders, airTempDataNumbers, analysisPeriod = checkCreateDataTree(_zoneAirTemp, "_zoneAirTemp", "Air Temperature")
checkData3, checkData4, srfTempUnits, srfTempHeaders, srfTempNumbers, analysisPeriod = checkCreateDataTree(_srfIndoorTemp, "_srfIndoorTemp", "Inner Surface Temperature")
checkData21, checkData22, flowVolUnits, flowVolDataHeaders, flowVolDataNumbers, analysisPeriod = checkCreateDataTree(_zoneAirFlowVol, "_zoneAirFlowVol", "Air Flow Volume")
checkData23, checkData24, heatGainUnits, heatGainDataHeaders, heatGainDataNumbers, analysisPeriod = checkCreateDataTree(_zoneAirHeatGain, "_zoneAirHeatGain", "Air Heat Gain Rate")
checkData17, checkData18, relHumidUnits, relHumidDataHeaders, relHumidDataNumbers, analysisPeriod = checkCreateDataTree(_zoneRelHumid, "_zoneRelHumid", "Relative Humidity")
#Try to bring in the outdoor surface temperatures.
outdoorClac = False
try:
checkData29, checkData30, outSrfTempUnits, outSrfTempHeaders, outSrfTempNumbers, analysisPeriod = checkCreateDataTree(srfOutdoorTemp_, "_srfOutdoorTemp_", "Outer Surface Temperature")
if outdoorIsThere == True: outdoorClac = True
except:
outdoorClac = False
checkData29, checkData30, outSrfTempUnits, outSrfTempHeaders, outSrfTempNumbers = True, True, 'C', [], []
#Check the windowShadeTransmiss_.
checkData14 = True
checkData32 = True
winStatusNumbers = []
winStatusHeaders = []
allWindowShadesSame = True
if windowShadeTransmiss_.BranchCount == 1:
if windowShadeTransmiss_ != []:
windowShadeTransmiss = []
for shadeValue in windowShadeTransmiss_.Branch(0):
windowShadeTransmiss.append(shadeValue)
if len(windowShadeTransmiss) == 8760:
allGood = True
for transVal in windowShadeTransmiss:
transFloat = float(transVal)
if transFloat <= 1.0 and transFloat >= 0.0: winStatusNumbers.append(transFloat)
else: allGood = False
if allGood == False:
checkData14 = False
warning = 'windowShadeTransmiss_ must be a value between 0 and 1.'
print warning
ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, warning)
elif len(windowShadeTransmiss) == 1:
if float(windowShadeTransmiss[0]) <= 1.0 and float(windowShadeTransmiss[0]) >= 0.0:
for count in range(8760):
winStatusNumbers.append(float(windowShadeTransmiss[0]))
else:
checkData14 = False
warning = 'windowShadeTransmiss_ must be a value between 0 and 1.'
print warning
ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, warning)
else:
checkData14 = False
warning = 'windowShadeTransmiss_ must be either a list of 8760 values that correspond to hourly changing transmissivity over the year or a single constant value for the whole year.'
print warning
ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, warning)
elif windowShadeTransmiss_.BranchCount > 1:
allWindowShadesSame = False
checkData14, checkData32, winStatusUnits, winStatusHeaders, winStatusNumbers, analysisPeriod = checkCreateDataTree(windowShadeTransmiss_, "windowShadeTransmiss_", "Surface Window System Solar Transmittance")
#Convert all of the numbers in shade status data tree to window transmissivities.
for winBCount, windowBranchList in enumerate(winStatusNumbers):
for shadHrCt, shadVal in enumerate(windowBranchList):
winStatusNumbers[winBCount][shadHrCt] = float(shadVal)
elif constantTransmis == True:
for count in range(8760):
winStatusNumbers.append(1)
print 'No value found for windowShadeTransmiss_. The window shade status will be set to 1 assuming no additional shading beyond the window glass transmissivity.'
#Check to see if there are hourly transmissivities for the additional shading.
if constantTransmis == False:
allWindowShadesSame = False
for transmisslistCount, transmissList in enumerate(finalAddShdTransmiss):
winStatusNumbers.append(transmissList)
srfName = 'AddShd' + str(transmisslistCount)
shdHeader = ['key:location/dataType/units/frequency/startsAt/endsAt', 'Location', 'Surface Window System Solar Transmittance for ' + srfName + ': Window', 'Fraction', 'Hourly', analysisPeriod[0], analysisPeriod[1]]
winStatusHeaders.append(shdHeader)
#Check the additionalWindSpeed_.
checkData33 = True
winSpeedNumbers = []
pathCheck = 0
allWindSpeedsSame = True
if additionalWindSpeed_.BranchCount == 1:
additionalWindSpeed = []
for windValue in additionalWindSpeed_.Branch(0):
additionalWindSpeed.append(windValue)
if len(additionalWindSpeed) == 8760:
allGood = True
for winSp in additionalWindSpeed:
windFloat = float(winSp)
if windFloat >= 0.0: winSpeedNumbers.append(windFloat)
else: allGood = False
if allGood == False:
checkData33 = False
                warning = 'additionalWindSpeed_ must be a value greater than or equal to 0.'
print warning
ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, warning)
elif len(additionalWindSpeed) == 1:
if float(additionalWindSpeed[0]) >= 0.0:
for count in range(8760):
winSpeedNumbers.append(float(additionalWindSpeed[0]))
else:
checkData33 = False
                warning = 'additionalWindSpeed_ must be a value greater than or equal to 0.'
print warning
ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, warning)
else:
checkData33 = False
warning = 'additionalWindSpeed_ must be either a list of 8760 values that correspond to hourly changing wind speeds over the year or a single constant value for the whole year.'
print warning
ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, warning)
elif additionalWindSpeed_.BranchCount > 1:
allWindSpeedsSame = False
treePaths = additionalWindSpeed_.Paths
for path in treePaths:
i = path.Indices[0]
if i == pathCheck:
branchList = additionalWindSpeed_.Branch(path)
dataVal = []
for item in branchList:
dataVal.append(item)
winSpeedNumbers.append(dataVal)
pathCheck += 1
else:
while pathCheck < i:
winSpeedNumbers.append([])
pathCheck += 1
if i == pathCheck:
branchList = additionalWindSpeed_.Branch(path)
dataVal = []
for item in branchList:
dataVal.append(item)
winSpeedNumbers.append(dataVal)
pathCheck += 1
if len(winSpeedNumbers) < finalCheck:
while len(winSpeedNumbers) < finalCheck:
winSpeedNumbers.append([])
for winCount, winList in enumerate(winSpeedNumbers):
if len(winList) != 0 and winCount != len(winSpeedNumbers) - 1: pass
elif winCount == len(winSpeedNumbers) - 1: pass
else:
checkData33 = False
warning = 'additionalWindSpeed_ data tree is not formatted correctly. Try simplifying the tree paths before connecting it.'
print warning
ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, warning)
else:
for count in range(8760):
winSpeedNumbers.append(0)
print 'No value found for additionalWindSpeed_. No additional value for wind speed will be added.'
#Check to be sure that the units of flowVol and heat gain are correct.
checkData9 = True
if flowVolUnits == "m3/s": pass
else:
checkData9 = False
warning = "_zoneFlowVol must be in m3/s."
print warning
ghenv.Component.AddRuntimeMessage(w, warning)
checkData10 = True
if heatGainUnits == "W": pass
else:
checkData10 = False
warning = "_zoneHeatGain must be in W."
print warning
ghenv.Component.AddRuntimeMessage(w, warning)
checkData11 = True
if airTempUnits == srfTempUnits == "C": pass
else:
checkData11 = False
warning = "_zoneAirTemp and _srfIndoorTemp must be in degrees C."
print warning
ghenv.Component.AddRuntimeMessage(w, warning)
checkData19 = True
if relHumidUnits == "%": pass
else:
checkData11 = False
warning = "_zoneRelHumid must be in %."
print warning
ghenv.Component.AddRuntimeMessage(w, warning)
checkData28 = True
if outSrfTempUnits == "C": pass
else:
checkData28 = False
warning = "_srfOutdoorTemp must be in degrees C."
print warning
ghenv.Component.AddRuntimeMessage(w, warning)
#Try to parse the weather file in order to get direct rad, diffuse rad, and location data.
checkData5 = True
if not os.path.isfile(_epwFile):
checkData5 = False
warningM = "Failed to find the file: " + str(_epwFile)
print warningM
ghenv.Component.AddRuntimeMessage(w, warningM)
else:
locationData = lb_preparation.epwLocation(_epwFile)
location = locationData[-1]
weatherData = lb_preparation.epwDataReader(_epwFile, locationData[0])
directNormalRadiation = weatherData[5]
diffuseHorizontalRadiation = weatherData[6]
globalHorizontalRadiation = weatherData[7]
outDryBulbTemp = weatherData[0]
outRelHumid = weatherData[2]
outWindSpeed = weatherData[3]
#Separate out the _dirNormRad, the diffuse Horizontal rad, and the location data.
directSolarRad = []
diffSolarRad = []
latitude = None
longitude = None
timeZone = None
if checkData5 == True:
directSolarRad = directNormalRadiation[7:]
diffSolarRad = diffuseHorizontalRadiation[7:]
globHorizRad = globalHorizontalRadiation[7:]
locList = location.split('\n')
for line in locList:
if "Latitude" in line: latitude = float(line.split(',')[0])
elif "Longitude" in line: longitude = float(line.split(',')[0])
elif "Time Zone" in line: timeZone = float(line.split(',')[0])
#Check to be sure that the number of mesh faces and test points match.
checkData8 = True
if checkData25 == True:
for zoneCount, zone in enumerate(viewFactorMesh):
if len(zone) != 1:
totalFaces = 0
for meshCount, mesh in enumerate(zone):
totalFaces = totalFaces +mesh.Faces.Count
if totalFaces == len(testPtViewFactor[zoneCount]): pass
else:
totalVertices = 0
for meshCount, mesh in enumerate(zone):
totalVertices = totalVertices +mesh.Vertices.Count
if totalVertices == len(testPtViewFactor[zoneCount]): pass
else:
checkData8 = False
warning = "For one of the meshes in the _viewFactorMesh, the number of faces in the mesh and test points in the _testPtViewFactor do not match.\n" + \
"This can sometimes happen when you have geometry created with one Rhino model tolerance and you generate a mesh off of it with a different tolerance.\n"+ \
"Try changing your Rhino model tolerance and seeing if it works."
print warning
ghenv.Component.AddRuntimeMessage(w, warning)
else:
if zone[0].Faces.Count == len(testPtViewFactor[zoneCount]): pass
else:
if zone[0].Vertices.Count == len(testPtViewFactor[zoneCount]): pass
else:
checkData8 = False
warning = "For one of the meshes in the _viewFactorMesh, the number of faces in the mesh and test points in the _testPtViewFactor do not match.\n" + \
"This can sometimes happen when you have geometry created with one Rhino model tolerance and you generate a mesh off of it with a different tolerance.\n"+ \
"Try changing your Rhino model tolerance and seeing if it works."
print warning
ghenv.Component.AddRuntimeMessage(w, warning)
#If there are no outdoor surface temperatures and there are outdoor view factors, remove it from the mesh.
if outdoorClac == False and outdoorIsThere == True:
zoneSrfNames = zoneSrfNames[:-1]
testPtViewFactor = testPtViewFactor[:-1]
viewFactorMesh = viewFactorMesh[:-1]
testPtSkyView = testPtSkyView[:-1]
testPtBlockedVec = testPtBlockedVec[:-1]
#Figure out the number of times to divide the sky based on the length of the blockedVec list.
numSkyPatchDivs = 0
checkData12 = True
if checkData25 == True:
for blockList in testPtBlockedVec:
if blockList != []:
if len(blockList[0]) == 145: numSkyPatchDivs = 0
elif len(blockList[0]) == 577: numSkyPatchDivs = 1
elif len(blockList[0]) == 1297: numSkyPatchDivs = 2
elif len(blockList[0]) == 2305: numSkyPatchDivs = 3
else:
checkData12 = False
warning = "You have an absurdly high number of view vectors from the 'Indoor View Factor' component such that it is not supported by the current component."
print warning
ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, warning)
#Check the metabolicRate_.
checkData20 = True
metabolicRate = []
if metabolicRate_ != []:
if len(metabolicRate_) == 8760:
for val in metabolicRate_:
metabolicRate.append(val)
elif len(metabolicRate_) == 1:
for count in range(8760):
metabolicRate.append(metabolicRate_[0])
else:
checkData20 = False
            warning = 'metabolicRate_ must be either a list of 8760 values that correspond to hourly changing metabolic rates over the year or a single constant value for the whole year.'
print warning
ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, warning)
else:
for count in range(8760):
metabolicRate.append(1.0)
print 'No value found for metabolicRate_. The metabolic rate will be set to 1 met.'
#Check the clothingLevel_.
checkData26 = True
clothingLevel = []
if clothingLevel_ != []:
if len(clothingLevel_) == 8760:
for val in clothingLevel_:
clothingLevel.append(val)
elif len(clothingLevel_) == 1:
for count in range(8760):
clothingLevel.append(clothingLevel_[0])
else:
checkData26 = False
            warning = 'clothingLevel_ must be either a list of 8760 values that correspond to hourly changing clothing levels over the year or a single constant value for the whole year.'
print warning
ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, warning)
else:
for count in range(8760):
clothingLevel.append(1.0)
print 'No value found for clothingLevel_. The clothing level will be set to 1 clo or a 3-piece suit.'
#Check the clothing absorptivity.
checkData7 = True
cloA = 0.7
if cloAbsorptivity_ != None:
        if cloAbsorptivity_ <= 1.0 and cloAbsorptivity_ >= 0.0: cloA = cloAbsorptivity_
else:
checkData7 = False
warning = 'cloAbsorptivity_ must be a value between 0 and 1.'
print warning
ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, warning)
else:
print 'No value found for cloAbsorptivity_. The absorptivity will be set to 0.7 for average brown skin and typical clothing.'
#Check the outdoor terrain.
# Evaluate the terrain type to get the right roughness length.
checkData31, terrainType, gradientHeightDiv, d, a, yValues, yAxisMaxRhinoHeight, nArrows, printMsg = lb_wind.terrain(outdoorTerrain_)
print printMsg
if checkData31 == False:
w = gh.GH_RuntimeMessageLevel.Warning
ghenv.Component.AddRuntimeMessage(w, printMsg)
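#Editorial note: 'd' and 'a' returned above are terrain-dependent wind profile
#parameters from the Ladybug wind speed class (presumably the gradient height
#and power-law exponent used by the recipe's wind speed calculation).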
#Check the inletHeightOverride_.
inletHeightOverride = []
checkData15 = True
if checkData25 == True and len(inletHeightOverride_) > 0:
if len(inletHeightOverride_) == len(viewFactorMesh): inletHeightOverride = inletHeightOverride_
else:
checkData15 = False
warning = 'The length of data in the inletHeightOverride_ does not match the number of branches in the data tree of the _viewFactorMesh.'
print warning
ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, warning)
#Check the wellMixedAirOverride_.
checkData16 = True
mixedAirOverride = []
if wellMixedAirOverride_ != []:
if len(wellMixedAirOverride_) == 8760:
for val in wellMixedAirOverride_:
mixedAirOverride.append(int(val))
elif len(wellMixedAirOverride_) == 1:
for count in range(8760):
mixedAirOverride.append(int(wellMixedAirOverride_[0]))
else:
checkData16 = False
warning = 'wellMixedAirOverride_ must be either a list of 8760 values that correspond to hourly air mixing over the year or a single constant value for the whole year.'
print warning
ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, warning)
else:
for count in range(8760):
mixedAirOverride.append(0)
print 'No value found for wellMixedAirOverride_. The stratification calculation will be run for every hour of the year.'
#If there are comfort parameters hooked up, read them out.
checkData27 = True
if comfortPar_ != []:
try:
PPDComfortThresh = float(comfortPar_[0])
humidRatioUp = float(comfortPar_[1])
humidRatioLow = float(comfortPar_[2])
except:
PPDComfortThresh = 10.0
humidRatioUp = 0.03
humidRatioLow = 0.0
checkData27 = False
warning = 'The comfortPar_ input does not contain valid comfort parameters from the Ladybug_Comfort Parameters component.'
print warning
ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, warning)
else:
PPDComfortThresh = 10.0
humidRatioUp = 0.03
humidRatioLow = 0.0
#Do a final check of everything.
allChecks = [checkData1, checkData2, checkData3, checkData4, checkData5, checkData7,
checkData8, checkData9, checkData10, checkData11, checkData12, checkData13,
checkData14, checkData15, checkData16, checkData17, checkData18, checkData19,
checkData20, checkData21, checkData22, checkData23, checkData24, checkData25,
checkData26, checkData27, checkData28, checkData29, checkData30, checkData31,
checkData32, checkData33]
checkData = all(allChecks)
return checkData, srfTempNumbers, srfTempHeaders, airTempDataNumbers, airTempDataHeaders, flowVolDataHeaders, flowVolDataNumbers, heatGainDataHeaders, heatGainDataNumbers, relHumidDataHeaders, relHumidDataNumbers, clothingLevel, metabolicRate, zoneSrfNames, testPtViewFactor, viewFactorMesh, latitude, longitude, timeZone, diffSolarRad, directSolarRad, globHorizRad, testPtSkyView, testPtBlockedVec, numSkyPatchDivs, winStatusNumbers, cloA, zoneFloorReflectivity, testPtZoneNames, testPtZoneWeights, ptHeightWeights, zoneInletInfo, inletHeightOverride, PPDComfortThresh, humidRatioUp, humidRatioLow, mixedAirOverride, zoneHasWindows, outdoorClac, outSrfTempHeaders, outSrfTempNumbers, outdoorNonSrfViewFac, outDryBulbTemp, outRelHumid, outWindSpeed, d, a, outdoorPtHeightWeights, allWindowShadesSame, winStatusHeaders, testPtBlockName, zoneWindowTransmiss, zoneWindowNames, allWindSpeedsSame, winSpeedNumbers, analysisPeriod
#Check to be sure that LB+HB are flying.
initCheck = False
if sc.sticky.has_key('honeybee_release') == False or sc.sticky.has_key('ladybug_release') == False:
print "You should first let Ladybug and Honeybee fly..."
ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, "You should first let Ladybug and Honeybee fly...")
else:
initCheck = True
lb_preparation = sc.sticky["ladybug_Preparation"]()
lb_wind = sc.sticky["ladybug_WindSpeed"]()
#Check the data input.
checkData = False
if _viewFactorMesh.BranchCount > 0 and len(_viewFactorInfo) > 0 and _epwFile != None and _srfIndoorTemp.BranchCount > 0 and _zoneAirTemp.BranchCount > 0 and _zoneAirFlowVol.BranchCount > 0 and _zoneAirHeatGain.BranchCount > 0 and _zoneRelHumid.BranchCount > 0 and initCheck == True:
if _viewFactorInfo[0] != None:
checkData, srfTempNumbers, srfTempHeaders, airTempDataNumbers, airTempDataHeaders, flowVolDataHeaders, flowVolDataNumbers, heatGainDataHeaders, heatGainDataNumbers, relHumidDataHeaders, relHumidDataNumbers, clothingLevel, metabolicRate, zoneSrfNames, testPtViewFactor, viewFactorMesh, latitude, longitude, timeZone, diffSolarRad, directSolarRad, globHorizRad, testPtSkyView, testPtBlockedVec, numSkyPatchDivs, winTrans, cloA, floorR, testPtZoneNames, testPtZoneWeights, ptHeightWeights, zoneInletInfo, inletHeightOverride, PPDComfortThresh, humidRatioUp, humidRatioLow, mixedAirOverride, zoneHasWindows, outdoorClac, outSrfTempHeaders, outSrfTempNumbers, outdoorNonSrfViewFac, outDryBulbTemp, outRelHumid, outWindSpeed, d, a, outdoorPtHeightWeights, allWindowShadesSame, winStatusHeaders, testPtBlockName, zoneWindowTransmiss, zoneWindowNames, allWindSpeedsSame, winSpeedNumbers, analysisPeriod = checkTheInputs()
if checkData == True:
comfRecipe = ["PMV", srfTempNumbers, srfTempHeaders, airTempDataNumbers, airTempDataHeaders, flowVolDataHeaders, flowVolDataNumbers, heatGainDataHeaders, heatGainDataNumbers, relHumidDataHeaders, relHumidDataNumbers, clothingLevel, metabolicRate, zoneSrfNames, testPtViewFactor, viewFactorMesh, latitude, longitude, timeZone, diffSolarRad, directSolarRad, globHorizRad, testPtSkyView, testPtBlockedVec, numSkyPatchDivs, winTrans, cloA, floorR, testPtZoneNames, testPtZoneWeights, ptHeightWeights, zoneInletInfo, inletHeightOverride, PPDComfortThresh, humidRatioUp, humidRatioLow, mixedAirOverride, zoneHasWindows, outdoorClac, outSrfTempHeaders, outSrfTempNumbers, outdoorNonSrfViewFac, outDryBulbTemp, outRelHumid, outWindSpeed, d, a, outdoorPtHeightWeights, allWindowShadesSame, winStatusHeaders, testPtBlockName, zoneWindowTransmiss, zoneWindowNames, allWindSpeedsSame, winSpeedNumbers, analysisPeriod]
|
samuto/Honeybee
|
src/Honeybee_PMV Comfort Analysis Recipe.py
|
Python
|
gpl-3.0
| 38,894
|
[
"EPW"
] |
4182565de5271371141c4e2a167b9a6379b8773114fc32c00dd78aa98785b4b2
|
"""
Testing for the forest module (sklearn.ensemble.forest).
"""
# Authors: Gilles Louppe,
# Brian Holt,
# Andreas Mueller,
# Arnaud Joly
# License: BSD 3 clause
import pickle
from collections import defaultdict
from itertools import product
import numpy as np
from scipy.sparse import csr_matrix, csc_matrix, coo_matrix
from sklearn.utils.testing import assert_almost_equal
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_false, assert_true
from sklearn.utils.testing import assert_less, assert_greater
from sklearn.utils.testing import assert_greater_equal
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import assert_warns
from sklearn.utils.testing import ignore_warnings
from sklearn import datasets
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import RandomTreesEmbedding
from sklearn.grid_search import GridSearchCV
from sklearn.svm import LinearSVC
from sklearn.utils.validation import check_random_state
from sklearn.tree.tree import SPARSE_SPLITTERS
# toy sample
X = [[-2, -1], [-1, -1], [-1, -2], [1, 1], [1, 2], [2, 1]]
y = [-1, -1, -1, 1, 1, 1]
T = [[-1, -1], [2, 2], [3, 2]]
true_result = [-1, 1, 1]
# also load the iris dataset
# and randomly permute it
iris = datasets.load_iris()
rng = check_random_state(0)
perm = rng.permutation(iris.target.size)
iris.data = iris.data[perm]
iris.target = iris.target[perm]
# also load the boston dataset
# and randomly permute it
boston = datasets.load_boston()
perm = rng.permutation(boston.target.size)
boston.data = boston.data[perm]
boston.target = boston.target[perm]
FOREST_CLASSIFIERS = {
"ExtraTreesClassifier": ExtraTreesClassifier,
"RandomForestClassifier": RandomForestClassifier,
}
FOREST_REGRESSORS = {
"ExtraTreesRegressor": ExtraTreesRegressor,
"RandomForestRegressor": RandomForestRegressor,
}
FOREST_TRANSFORMERS = {
"RandomTreesEmbedding": RandomTreesEmbedding,
}
FOREST_ESTIMATORS = dict()
FOREST_ESTIMATORS.update(FOREST_CLASSIFIERS)
FOREST_ESTIMATORS.update(FOREST_REGRESSORS)
FOREST_ESTIMATORS.update(FOREST_TRANSFORMERS)
def check_classification_toy(name):
"""Check classification on a toy dataset."""
ForestClassifier = FOREST_CLASSIFIERS[name]
clf = ForestClassifier(n_estimators=10, random_state=1)
clf.fit(X, y)
assert_array_equal(clf.predict(T), true_result)
assert_equal(10, len(clf))
clf = ForestClassifier(n_estimators=10, max_features=1, random_state=1)
clf.fit(X, y)
assert_array_equal(clf.predict(T), true_result)
assert_equal(10, len(clf))
# also test apply
leaf_indices = clf.apply(X)
assert_equal(leaf_indices.shape, (len(X), clf.n_estimators))
def test_classification_toy():
for name in FOREST_CLASSIFIERS:
yield check_classification_toy, name
def check_iris_criterion(name, criterion):
# Check consistency on dataset iris.
ForestClassifier = FOREST_CLASSIFIERS[name]
clf = ForestClassifier(n_estimators=10, criterion=criterion,
random_state=1)
clf.fit(iris.data, iris.target)
score = clf.score(iris.data, iris.target)
assert_greater(score, 0.9, "Failed with criterion %s and score = %f"
% (criterion, score))
clf = ForestClassifier(n_estimators=10, criterion=criterion,
max_features=2, random_state=1)
clf.fit(iris.data, iris.target)
score = clf.score(iris.data, iris.target)
assert_greater(score, 0.5, "Failed with criterion %s and score = %f"
% (criterion, score))
def test_iris():
for name, criterion in product(FOREST_CLASSIFIERS, ("gini", "entropy")):
yield check_iris_criterion, name, criterion
def check_boston_criterion(name, criterion):
# Check consistency on dataset boston house prices.
ForestRegressor = FOREST_REGRESSORS[name]
clf = ForestRegressor(n_estimators=5, criterion=criterion, random_state=1)
clf.fit(boston.data, boston.target)
score = clf.score(boston.data, boston.target)
assert_greater(score, 0.95, "Failed with max_features=None, criterion %s "
"and score = %f" % (criterion, score))
clf = ForestRegressor(n_estimators=5, criterion=criterion,
max_features=6, random_state=1)
clf.fit(boston.data, boston.target)
score = clf.score(boston.data, boston.target)
assert_greater(score, 0.95, "Failed with max_features=6, criterion %s "
"and score = %f" % (criterion, score))
def test_boston():
for name, criterion in product(FOREST_REGRESSORS, ("mse", )):
yield check_boston_criterion, name, criterion
def check_regressor_attributes(name):
# Regression models should not have a classes_ attribute.
r = FOREST_REGRESSORS[name](random_state=0)
assert_false(hasattr(r, "classes_"))
assert_false(hasattr(r, "n_classes_"))
r.fit([[1, 2, 3], [4, 5, 6]], [1, 2])
assert_false(hasattr(r, "classes_"))
assert_false(hasattr(r, "n_classes_"))
def test_regressor_attributes():
for name in FOREST_REGRESSORS:
yield check_regressor_attributes, name
def check_probability(name):
# Predict probabilities.
ForestClassifier = FOREST_CLASSIFIERS[name]
with np.errstate(divide="ignore"):
clf = ForestClassifier(n_estimators=10, random_state=1, max_features=1,
max_depth=1)
clf.fit(iris.data, iris.target)
assert_array_almost_equal(np.sum(clf.predict_proba(iris.data), axis=1),
np.ones(iris.data.shape[0]))
assert_array_almost_equal(clf.predict_proba(iris.data),
np.exp(clf.predict_log_proba(iris.data)))
def test_probability():
for name in FOREST_CLASSIFIERS:
yield check_probability, name
def check_importances(name, X, y):
# Check variable importances.
ForestClassifier = FOREST_CLASSIFIERS[name]
for n_jobs in [1, 2]:
clf = ForestClassifier(n_estimators=10, n_jobs=n_jobs)
clf.fit(X, y)
importances = clf.feature_importances_
n_important = np.sum(importances > 0.1)
assert_equal(importances.shape[0], 10)
assert_equal(n_important, 3)
X_new = clf.transform(X, threshold="mean")
assert_greater(X_new.shape[1], 0)
assert_less(X_new.shape[1], X.shape[1])
# Check with sample weights
sample_weight = np.ones(y.shape)
sample_weight[y == 1] *= 100
clf = ForestClassifier(n_estimators=50, n_jobs=n_jobs, random_state=0)
clf.fit(X, y, sample_weight=sample_weight)
importances = clf.feature_importances_
assert_true(np.all(importances >= 0.0))
clf = ForestClassifier(n_estimators=50, n_jobs=n_jobs, random_state=0)
clf.fit(X, y, sample_weight=3 * sample_weight)
importances_bis = clf.feature_importances_
assert_almost_equal(importances, importances_bis)
def test_importances():
X, y = datasets.make_classification(n_samples=1000, n_features=10,
n_informative=3, n_redundant=0,
n_repeated=0, shuffle=False,
random_state=0)
for name in FOREST_CLASSIFIERS:
yield check_importances, name, X, y
def check_unfitted_feature_importances(name):
assert_raises(ValueError, getattr, FOREST_ESTIMATORS[name](random_state=0),
"feature_importances_")
def test_unfitted_feature_importances():
for name in FOREST_ESTIMATORS:
yield check_unfitted_feature_importances, name
def check_oob_score(name, X, y, n_estimators=20):
# Check that oob prediction is a good estimation of the generalization
# error.
# Proper behavior
est = FOREST_ESTIMATORS[name](oob_score=True, random_state=0,
n_estimators=n_estimators, bootstrap=True)
n_samples = X.shape[0]
est.fit(X[:n_samples // 2, :], y[:n_samples // 2])
test_score = est.score(X[n_samples // 2:, :], y[n_samples // 2:])
if name in FOREST_CLASSIFIERS:
assert_less(abs(test_score - est.oob_score_), 0.1)
else:
assert_greater(test_score, est.oob_score_)
assert_greater(est.oob_score_, .8)
# Check warning if not enough estimators
with np.errstate(divide="ignore", invalid="ignore"):
est = FOREST_ESTIMATORS[name](oob_score=True, random_state=0,
n_estimators=1, bootstrap=True)
assert_warns(UserWarning, est.fit, X, y)
def test_oob_score():
for name in FOREST_CLASSIFIERS:
yield check_oob_score, name, iris.data, iris.target
# csc matrix
yield check_oob_score, name, csc_matrix(iris.data), iris.target
# non-contiguous targets in classification
yield check_oob_score, name, iris.data, iris.target * 2 + 1
for name in FOREST_REGRESSORS:
yield check_oob_score, name, boston.data, boston.target, 50
# csc matrix
yield check_oob_score, name, csc_matrix(boston.data), boston.target, 50
def check_oob_score_raise_error(name):
ForestEstimator = FOREST_ESTIMATORS[name]
if name in FOREST_TRANSFORMERS:
for oob_score in [True, False]:
assert_raises(TypeError, ForestEstimator, oob_score=oob_score)
assert_raises(NotImplementedError, ForestEstimator()._set_oob_score,
X, y)
else:
# Unfitted / no bootstrap / no oob_score
for oob_score, bootstrap in [(True, False), (False, True),
(False, False)]:
est = ForestEstimator(oob_score=oob_score, bootstrap=bootstrap,
random_state=0)
assert_false(hasattr(est, "oob_score_"))
# No bootstrap
assert_raises(ValueError, ForestEstimator(oob_score=True,
bootstrap=False).fit, X, y)
def test_oob_score_raise_error():
for name in FOREST_ESTIMATORS:
yield check_oob_score_raise_error, name
def check_gridsearch(name):
forest = FOREST_CLASSIFIERS[name]()
clf = GridSearchCV(forest, {'n_estimators': (1, 2), 'max_depth': (1, 2)})
clf.fit(iris.data, iris.target)
def test_gridsearch():
# Check that base trees can be grid-searched.
for name in FOREST_CLASSIFIERS:
yield check_gridsearch, name
def check_parallel(name, X, y):
"""Check parallel computations in classification"""
ForestEstimator = FOREST_ESTIMATORS[name]
forest = ForestEstimator(n_estimators=10, n_jobs=3, random_state=0)
forest.fit(X, y)
assert_equal(len(forest), 10)
forest.set_params(n_jobs=1)
y1 = forest.predict(X)
forest.set_params(n_jobs=2)
y2 = forest.predict(X)
assert_array_almost_equal(y1, y2, 3)
def test_parallel():
for name in FOREST_CLASSIFIERS:
yield check_parallel, name, iris.data, iris.target
for name in FOREST_REGRESSORS:
yield check_parallel, name, boston.data, boston.target
def check_pickle(name, X, y):
# Check pickability.
ForestEstimator = FOREST_ESTIMATORS[name]
obj = ForestEstimator(random_state=0)
obj.fit(X, y)
score = obj.score(X, y)
pickle_object = pickle.dumps(obj)
obj2 = pickle.loads(pickle_object)
assert_equal(type(obj2), obj.__class__)
score2 = obj2.score(X, y)
assert_equal(score, score2)
def test_pickle():
for name in FOREST_CLASSIFIERS:
yield check_pickle, name, iris.data[::2], iris.target[::2]
for name in FOREST_REGRESSORS:
yield check_pickle, name, boston.data[::2], boston.target[::2]
def check_multioutput(name):
# Check estimators on multi-output problems.
X_train = [[-2, -1], [-1, -1], [-1, -2], [1, 1], [1, 2], [2, 1], [-2, 1],
[-1, 1], [-1, 2], [2, -1], [1, -1], [1, -2]]
y_train = [[-1, 0], [-1, 0], [-1, 0], [1, 1], [1, 1], [1, 1], [-1, 2],
[-1, 2], [-1, 2], [1, 3], [1, 3], [1, 3]]
X_test = [[-1, -1], [1, 1], [-1, 1], [1, -1]]
y_test = [[-1, 0], [1, 1], [-1, 2], [1, 3]]
est = FOREST_ESTIMATORS[name](random_state=0, bootstrap=False)
y_pred = est.fit(X_train, y_train).predict(X_test)
assert_array_almost_equal(y_pred, y_test)
if name in FOREST_CLASSIFIERS:
with np.errstate(divide="ignore"):
proba = est.predict_proba(X_test)
assert_equal(len(proba), 2)
assert_equal(proba[0].shape, (4, 2))
assert_equal(proba[1].shape, (4, 4))
log_proba = est.predict_log_proba(X_test)
assert_equal(len(log_proba), 2)
assert_equal(log_proba[0].shape, (4, 2))
assert_equal(log_proba[1].shape, (4, 4))
def test_multioutput():
for name in FOREST_CLASSIFIERS:
yield check_multioutput, name
for name in FOREST_REGRESSORS:
yield check_multioutput, name
def check_classes_shape(name):
# Test that n_classes_ and classes_ have proper shape.
ForestClassifier = FOREST_CLASSIFIERS[name]
# Classification, single output
clf = ForestClassifier(random_state=0).fit(X, y)
assert_equal(clf.n_classes_, 2)
assert_array_equal(clf.classes_, [-1, 1])
# Classification, multi-output
_y = np.vstack((y, np.array(y) * 2)).T
clf = ForestClassifier(random_state=0).fit(X, _y)
assert_array_equal(clf.n_classes_, [2, 2])
assert_array_equal(clf.classes_, [[-1, 1], [-2, 2]])
def test_classes_shape():
for name in FOREST_CLASSIFIERS:
yield check_classes_shape, name
def test_random_trees_dense_type():
# Test that the `sparse_output` parameter of RandomTreesEmbedding
# works by returning a dense array.
# Create the RTE with sparse=False
hasher = RandomTreesEmbedding(n_estimators=10, sparse_output=False)
X, y = datasets.make_circles(factor=0.5)
X_transformed = hasher.fit_transform(X)
# Assert that type is ndarray, not scipy.sparse.csr.csr_matrix
assert_equal(type(X_transformed), np.ndarray)
def test_random_trees_dense_equal():
# Test that the `sparse_output` parameter of RandomTreesEmbedding
# works by returning the same array for both argument values.
# Create the RTEs
hasher_dense = RandomTreesEmbedding(n_estimators=10, sparse_output=False,
random_state=0)
hasher_sparse = RandomTreesEmbedding(n_estimators=10, sparse_output=True,
random_state=0)
X, y = datasets.make_circles(factor=0.5)
X_transformed_dense = hasher_dense.fit_transform(X)
X_transformed_sparse = hasher_sparse.fit_transform(X)
# Assert that dense and sparse hashers have same array.
assert_array_equal(X_transformed_sparse.toarray(), X_transformed_dense)
def test_random_hasher():
# test random forest hashing on circles dataset
# make sure that it is linearly separable.
# even after projected to two SVD dimensions
# Note: Not all random_states produce perfect results.
hasher = RandomTreesEmbedding(n_estimators=30, random_state=1)
X, y = datasets.make_circles(factor=0.5)
X_transformed = hasher.fit_transform(X)
# test fit and transform:
hasher = RandomTreesEmbedding(n_estimators=30, random_state=1)
assert_array_equal(hasher.fit(X).transform(X).toarray(),
X_transformed.toarray())
# one leaf active per data point per forest
assert_equal(X_transformed.shape[0], X.shape[0])
assert_array_equal(X_transformed.sum(axis=1), hasher.n_estimators)
svd = TruncatedSVD(n_components=2)
X_reduced = svd.fit_transform(X_transformed)
linear_clf = LinearSVC()
linear_clf.fit(X_reduced, y)
assert_equal(linear_clf.score(X_reduced, y), 1.)
def test_random_hasher_sparse_data():
X, y = datasets.make_multilabel_classification(random_state=0)
hasher = RandomTreesEmbedding(n_estimators=30, random_state=1)
X_transformed = hasher.fit_transform(X)
X_transformed_sparse = hasher.fit_transform(csc_matrix(X))
assert_array_equal(X_transformed_sparse.toarray(), X_transformed.toarray())
def test_parallel_train():
rng = check_random_state(12321)
n_samples, n_features = 80, 30
X_train = rng.randn(n_samples, n_features)
y_train = rng.randint(0, 2, n_samples)
clfs = [
RandomForestClassifier(n_estimators=20, n_jobs=n_jobs,
random_state=12345).fit(X_train, y_train)
for n_jobs in [1, 2, 3, 8, 16, 32]
]
X_test = rng.randn(n_samples, n_features)
probas = [clf.predict_proba(X_test) for clf in clfs]
for proba1, proba2 in zip(probas, probas[1:]):
assert_array_almost_equal(proba1, proba2)
def test_distribution():
rng = check_random_state(12321)
# Single variable with 4 values
X = rng.randint(0, 4, size=(1000, 1))
y = rng.rand(1000)
n_trees = 500
clf = ExtraTreesRegressor(n_estimators=n_trees, random_state=42).fit(X, y)
uniques = defaultdict(int)
for tree in clf.estimators_:
tree_repr = "".join(("%d,%d/" % (f, int(t)) if f >= 0 else "-")
for f, t in zip(tree.tree_.feature,
tree.tree_.threshold))
uniques[tree_repr] += 1
uniques = sorted([(1. * count / n_trees, tree)
for tree, count in uniques.items()])
# On a single variable problem where X_0 has 4 equiprobable values, there
# are 5 ways to build a random tree. The more compact (0,1/0,0/--0,2/--) of
# them has probability 1/3 while the 4 others have probability 1/6.
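# Editorial note: the assertions below leave sampling slack over the 500
# trees: the four rarer trees should land near 1/6 (~0.167, checked as
# < 0.20) and the compact tree near 1/3 (~0.333, checked as > 0.3).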
assert_equal(len(uniques), 5)
assert_greater(0.20, uniques[0][0]) # Rough approximation of 1/6.
assert_greater(0.20, uniques[1][0])
assert_greater(0.20, uniques[2][0])
assert_greater(0.20, uniques[3][0])
assert_greater(uniques[4][0], 0.3)
assert_equal(uniques[4][1], "0,1/0,0/--0,2/--")
# Two variables, one with 2 values, one with 3 values
X = np.empty((1000, 2))
X[:, 0] = rng.randint(0, 2, 1000)
X[:, 1] = rng.randint(0, 3, 1000)
y = rng.rand(1000)
clf = ExtraTreesRegressor(n_estimators=100, max_features=1,
random_state=1).fit(X, y)
uniques = defaultdict(int)
for tree in clf.estimators_:
tree_repr = "".join(("%d,%d/" % (f, int(t)) if f >= 0 else "-")
for f, t in zip(tree.tree_.feature,
tree.tree_.threshold))
uniques[tree_repr] += 1
uniques = [(count, tree) for tree, count in uniques.items()]
assert_equal(len(uniques), 8)
def check_max_leaf_nodes_max_depth(name, X, y):
# Test precedence of max_leaf_nodes over max_depth.
ForestEstimator = FOREST_ESTIMATORS[name]
est = ForestEstimator(max_depth=1, max_leaf_nodes=4,
n_estimators=1).fit(X, y)
assert_greater(est.estimators_[0].tree_.max_depth, 1)
est = ForestEstimator(max_depth=1, n_estimators=1).fit(X, y)
assert_equal(est.estimators_[0].tree_.max_depth, 1)
def test_max_leaf_nodes_max_depth():
X, y = datasets.make_hastie_10_2(n_samples=100, random_state=1)
for name in FOREST_ESTIMATORS:
yield check_max_leaf_nodes_max_depth, name, X, y
def check_min_samples_leaf(name, X, y):
# Test if leaves contain more than leaf_count training examples
ForestEstimator = FOREST_ESTIMATORS[name]
# test both DepthFirstTreeBuilder and BestFirstTreeBuilder
# by setting max_leaf_nodes
for max_leaf_nodes in (None, 1000):
est = ForestEstimator(min_samples_leaf=5,
max_leaf_nodes=max_leaf_nodes,
random_state=0)
est.fit(X, y)
out = est.estimators_[0].tree_.apply(X)
node_counts = np.bincount(out)
# drop inner nodes
leaf_count = node_counts[node_counts != 0]
assert_greater(np.min(leaf_count), 4,
"Failed with {0}".format(name))
def test_min_samples_leaf():
X, y = datasets.make_hastie_10_2(n_samples=100, random_state=1)
X = X.astype(np.float32)
for name in FOREST_ESTIMATORS:
yield check_min_samples_leaf, name, X, y
def check_min_weight_fraction_leaf(name, X, y):
# Test if leaves contain at least min_weight_fraction_leaf of the
# training set
ForestEstimator = FOREST_ESTIMATORS[name]
rng = np.random.RandomState(0)
weights = rng.rand(X.shape[0])
total_weight = np.sum(weights)
# test both DepthFirstTreeBuilder and BestFirstTreeBuilder
# by setting max_leaf_nodes
for max_leaf_nodes in (None, 1000):
for frac in np.linspace(0, 0.5, 6):
est = ForestEstimator(min_weight_fraction_leaf=frac,
max_leaf_nodes=max_leaf_nodes,
random_state=0)
if isinstance(est, (RandomForestClassifier,
RandomForestRegressor)):
est.bootstrap = False
est.fit(X, y, sample_weight=weights)
out = est.estimators_[0].tree_.apply(X)
node_weights = np.bincount(out, weights=weights)
# drop inner nodes
leaf_weights = node_weights[node_weights != 0]
assert_greater_equal(
np.min(leaf_weights),
total_weight * est.min_weight_fraction_leaf,
"Failed with {0} "
"min_weight_fraction_leaf={1}".format(
name, est.min_weight_fraction_leaf))
def test_min_weight_fraction_leaf():
X, y = datasets.make_hastie_10_2(n_samples=100, random_state=1)
X = X.astype(np.float32)
for name in FOREST_ESTIMATORS:
yield check_min_weight_fraction_leaf, name, X, y
def check_sparse_input(name, X, X_sparse, y):
ForestEstimator = FOREST_ESTIMATORS[name]
dense = ForestEstimator(random_state=0, max_depth=2).fit(X, y)
sparse = ForestEstimator(random_state=0, max_depth=2).fit(X_sparse, y)
assert_array_almost_equal(sparse.apply(X), dense.apply(X))
if name in FOREST_CLASSIFIERS or name in FOREST_REGRESSORS:
assert_array_almost_equal(sparse.predict(X), dense.predict(X))
assert_array_almost_equal(sparse.feature_importances_,
dense.feature_importances_)
if name in FOREST_CLASSIFIERS:
assert_array_almost_equal(sparse.predict_proba(X),
dense.predict_proba(X))
assert_array_almost_equal(sparse.predict_log_proba(X),
dense.predict_log_proba(X))
if name in FOREST_TRANSFORMERS:
assert_array_almost_equal(sparse.transform(X).toarray(),
dense.transform(X).toarray())
assert_array_almost_equal(sparse.fit_transform(X).toarray(),
dense.fit_transform(X).toarray())
def test_sparse_input():
X, y = datasets.make_multilabel_classification(random_state=0,
n_samples=40)
for name, sparse_matrix in product(FOREST_ESTIMATORS,
(csr_matrix, csc_matrix, coo_matrix)):
yield check_sparse_input, name, X, sparse_matrix(X), y
def check_memory_layout(name, dtype):
# Check that it works no matter the memory layout
est = FOREST_ESTIMATORS[name](random_state=0, bootstrap=False)
# Nothing
X = np.asarray(iris.data, dtype=dtype)
y = iris.target
assert_array_equal(est.fit(X, y).predict(X), y)
# C-order
X = np.asarray(iris.data, order="C", dtype=dtype)
y = iris.target
assert_array_equal(est.fit(X, y).predict(X), y)
# F-order
X = np.asarray(iris.data, order="F", dtype=dtype)
y = iris.target
assert_array_equal(est.fit(X, y).predict(X), y)
# Contiguous
X = np.ascontiguousarray(iris.data, dtype=dtype)
y = iris.target
assert_array_equal(est.fit(X, y).predict(X), y)
if est.base_estimator.splitter in SPARSE_SPLITTERS:
# csr matrix
X = csr_matrix(iris.data, dtype=dtype)
y = iris.target
assert_array_equal(est.fit(X, y).predict(X), y)
# csc_matrix
X = csc_matrix(iris.data, dtype=dtype)
y = iris.target
assert_array_equal(est.fit(X, y).predict(X), y)
# coo_matrix
X = coo_matrix(iris.data, dtype=dtype)
y = iris.target
assert_array_equal(est.fit(X, y).predict(X), y)
# Strided
X = np.asarray(iris.data[::3], dtype=dtype)
y = iris.target[::3]
assert_array_equal(est.fit(X, y).predict(X), y)
def test_memory_layout():
for name, dtype in product(FOREST_CLASSIFIERS, [np.float64, np.float32]):
yield check_memory_layout, name, dtype
for name, dtype in product(FOREST_REGRESSORS, [np.float64, np.float32]):
yield check_memory_layout, name, dtype
@ignore_warnings
def check_1d_input(name, X, X_2d, y):
ForestEstimator = FOREST_ESTIMATORS[name]
assert_raises(ValueError, ForestEstimator(random_state=0).fit, X, y)
est = ForestEstimator(random_state=0)
est.fit(X_2d, y)
if name in FOREST_CLASSIFIERS or name in FOREST_REGRESSORS:
assert_raises(ValueError, est.predict, X)
@ignore_warnings
def test_1d_input():
X = iris.data[:, 0]
X_2d = iris.data[:, 0].reshape((-1, 1))
y = iris.target
for name in FOREST_ESTIMATORS:
yield check_1d_input, name, X, X_2d, y
def check_class_weights(name):
# Check class_weights resemble sample_weights behavior.
ForestClassifier = FOREST_CLASSIFIERS[name]
# Iris is balanced, so no effect expected for using 'balanced' weights
clf1 = ForestClassifier(random_state=0)
clf1.fit(iris.data, iris.target)
clf2 = ForestClassifier(class_weight='balanced', random_state=0)
clf2.fit(iris.data, iris.target)
assert_almost_equal(clf1.feature_importances_, clf2.feature_importances_)
# Make a multi-output problem with three copies of Iris
iris_multi = np.vstack((iris.target, iris.target, iris.target)).T
# Create user-defined weights that should balance over the outputs
clf3 = ForestClassifier(class_weight=[{0: 2., 1: 2., 2: 1.},
{0: 2., 1: 1., 2: 2.},
{0: 1., 1: 2., 2: 2.}],
random_state=0)
clf3.fit(iris.data, iris_multi)
assert_almost_equal(clf2.feature_importances_, clf3.feature_importances_)
# Check against multi-output "balanced" which should also have no effect
clf4 = ForestClassifier(class_weight='balanced', random_state=0)
clf4.fit(iris.data, iris_multi)
assert_almost_equal(clf3.feature_importances_, clf4.feature_importances_)
# Inflate importance of class 1, check against user-defined weights
sample_weight = np.ones(iris.target.shape)
sample_weight[iris.target == 1] *= 100
class_weight = {0: 1., 1: 100., 2: 1.}
clf1 = ForestClassifier(random_state=0)
clf1.fit(iris.data, iris.target, sample_weight)
clf2 = ForestClassifier(class_weight=class_weight, random_state=0)
clf2.fit(iris.data, iris.target)
assert_almost_equal(clf1.feature_importances_, clf2.feature_importances_)
# Check that sample_weight and class_weight are multiplicative
clf1 = ForestClassifier(random_state=0)
clf1.fit(iris.data, iris.target, sample_weight ** 2)
clf2 = ForestClassifier(class_weight=class_weight, random_state=0)
clf2.fit(iris.data, iris.target, sample_weight)
assert_almost_equal(clf1.feature_importances_, clf2.feature_importances_)
def test_class_weights():
for name in FOREST_CLASSIFIERS:
yield check_class_weights, name
def check_class_weight_balanced_and_bootstrap_multi_output(name):
# Test that class_weight works for multi-output.
ForestClassifier = FOREST_CLASSIFIERS[name]
_y = np.vstack((y, np.array(y) * 2)).T
clf = ForestClassifier(class_weight='balanced', random_state=0)
clf.fit(X, _y)
clf = ForestClassifier(class_weight=[{-1: 0.5, 1: 1.}, {-2: 1., 2: 1.}],
random_state=0)
clf.fit(X, _y)
# smoke test for subsample and balanced subsample
clf = ForestClassifier(class_weight='balanced_subsample', random_state=0)
clf.fit(X, _y)
clf = ForestClassifier(class_weight='subsample', random_state=0)
ignore_warnings(clf.fit)(X, _y)
def test_class_weight_balanced_and_bootstrap_multi_output():
for name in FOREST_CLASSIFIERS:
yield check_class_weight_balanced_and_bootstrap_multi_output, name
def check_class_weight_errors(name):
# Test if class_weight raises errors and warnings when expected.
ForestClassifier = FOREST_CLASSIFIERS[name]
_y = np.vstack((y, np.array(y) * 2)).T
# Invalid preset string
clf = ForestClassifier(class_weight='the larch', random_state=0)
assert_raises(ValueError, clf.fit, X, y)
assert_raises(ValueError, clf.fit, X, _y)
# Warning warm_start with preset
clf = ForestClassifier(class_weight='auto', warm_start=True,
random_state=0)
assert_warns(UserWarning, clf.fit, X, y)
assert_warns(UserWarning, clf.fit, X, _y)
# Not a list or preset for multi-output
clf = ForestClassifier(class_weight=1, random_state=0)
assert_raises(ValueError, clf.fit, X, _y)
# Incorrect length list for multi-output
clf = ForestClassifier(class_weight=[{-1: 0.5, 1: 1.}], random_state=0)
assert_raises(ValueError, clf.fit, X, _y)
def test_class_weight_errors():
for name in FOREST_CLASSIFIERS:
yield check_class_weight_errors, name
def check_warm_start(name, random_state=42):
# Test if fitting incrementally with warm start gives a forest of the
# right size and the same results as a normal fit.
X, y = datasets.make_hastie_10_2(n_samples=20, random_state=1)
ForestEstimator = FOREST_ESTIMATORS[name]
clf_ws = None
for n_estimators in [5, 10]:
if clf_ws is None:
clf_ws = ForestEstimator(n_estimators=n_estimators,
random_state=random_state,
warm_start=True)
else:
clf_ws.set_params(n_estimators=n_estimators)
clf_ws.fit(X, y)
assert_equal(len(clf_ws), n_estimators)
clf_no_ws = ForestEstimator(n_estimators=10, random_state=random_state,
warm_start=False)
clf_no_ws.fit(X, y)
assert_equal(set([tree.random_state for tree in clf_ws]),
set([tree.random_state for tree in clf_no_ws]))
assert_array_equal(clf_ws.apply(X), clf_no_ws.apply(X),
err_msg="Failed with {0}".format(name))
def test_warm_start():
for name in FOREST_ESTIMATORS:
yield check_warm_start, name
def check_warm_start_clear(name):
# Test if fit clears state and grows a new forest when warm_start==False.
X, y = datasets.make_hastie_10_2(n_samples=20, random_state=1)
ForestEstimator = FOREST_ESTIMATORS[name]
clf = ForestEstimator(n_estimators=5, max_depth=1, warm_start=False,
random_state=1)
clf.fit(X, y)
clf_2 = ForestEstimator(n_estimators=5, max_depth=1, warm_start=True,
random_state=2)
clf_2.fit(X, y) # inits state
clf_2.set_params(warm_start=False, random_state=1)
clf_2.fit(X, y) # clears old state and equals clf
assert_array_almost_equal(clf_2.apply(X), clf.apply(X))
def test_warm_start_clear():
for name in FOREST_ESTIMATORS:
yield check_warm_start_clear, name
def check_warm_start_smaller_n_estimators(name):
# Test if warm start second fit with smaller n_estimators raises error.
X, y = datasets.make_hastie_10_2(n_samples=20, random_state=1)
ForestEstimator = FOREST_ESTIMATORS[name]
clf = ForestEstimator(n_estimators=5, max_depth=1, warm_start=True)
clf.fit(X, y)
clf.set_params(n_estimators=4)
assert_raises(ValueError, clf.fit, X, y)
def test_warm_start_smaller_n_estimators():
for name in FOREST_ESTIMATORS:
yield check_warm_start_smaller_n_estimators, name
def check_warm_start_equal_n_estimators(name):
# Test if warm start with equal n_estimators does nothing and returns the
# same forest and raises a warning.
X, y = datasets.make_hastie_10_2(n_samples=20, random_state=1)
ForestEstimator = FOREST_ESTIMATORS[name]
clf = ForestEstimator(n_estimators=5, max_depth=3, warm_start=True,
random_state=1)
clf.fit(X, y)
clf_2 = ForestEstimator(n_estimators=5, max_depth=3, warm_start=True,
random_state=1)
clf_2.fit(X, y)
# Now clf_2 equals clf.
clf_2.set_params(random_state=2)
assert_warns(UserWarning, clf_2.fit, X, y)
# If we had fit the trees again we would have got a different forest as we
# changed the random state.
assert_array_equal(clf.apply(X), clf_2.apply(X))
def test_warm_start_equal_n_estimators():
for name in FOREST_ESTIMATORS:
yield check_warm_start_equal_n_estimators, name
def check_warm_start_oob(name):
# Test that the warm start computes oob score when asked.
X, y = datasets.make_hastie_10_2(n_samples=20, random_state=1)
ForestEstimator = FOREST_ESTIMATORS[name]
# Use 15 estimators to avoid 'some inputs do not have OOB scores' warning.
clf = ForestEstimator(n_estimators=15, max_depth=3, warm_start=False,
random_state=1, bootstrap=True, oob_score=True)
clf.fit(X, y)
clf_2 = ForestEstimator(n_estimators=5, max_depth=3, warm_start=False,
random_state=1, bootstrap=True, oob_score=False)
clf_2.fit(X, y)
clf_2.set_params(warm_start=True, oob_score=True, n_estimators=15)
clf_2.fit(X, y)
assert_true(hasattr(clf_2, 'oob_score_'))
assert_equal(clf.oob_score_, clf_2.oob_score_)
# Test that oob_score is computed even if we don't need to train
# additional trees.
clf_3 = ForestEstimator(n_estimators=15, max_depth=3, warm_start=True,
random_state=1, bootstrap=True, oob_score=False)
clf_3.fit(X, y)
assert_false(hasattr(clf_3, 'oob_score_'))
clf_3.set_params(oob_score=True)
ignore_warnings(clf_3.fit)(X, y)
assert_equal(clf.oob_score_, clf_3.oob_score_)
def test_warm_start_oob():
for name in FOREST_CLASSIFIERS:
yield check_warm_start_oob, name
for name in FOREST_REGRESSORS:
yield check_warm_start_oob, name
def test_dtype_convert():
classifier = RandomForestClassifier()
CLASSES = 15
X = np.eye(CLASSES)
y = [ch for ch in 'ABCDEFGHIJKLMNOPQRSTU'[:CLASSES]]
result = classifier.fit(X, y).predict(X)
assert_array_equal(result, y)
|
djgagne/scikit-learn
|
sklearn/ensemble/tests/test_forest.py
|
Python
|
bsd-3-clause
| 35,292
|
[
"Brian"
] |
57fef763d2ca99aebf2837d9a40f3cf6be67be52fba85c4eb28d021cbca015e6
|
#!/usr/bin/env python
# Copyright 2014-2018 The PySCF Developers. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Author: Qiming Sun <osirpt.sun@gmail.com>
#
import unittest
import numpy
from pyscf import gto
from pyscf import symm
def get_so(atoms, basis, cart=False):
atoms = gto.mole.format_atom(atoms)
gpname, origin, axes = symm.detect_symm(atoms)
gpname, axes = symm.subgroup(gpname, axes)
atoms = gto.mole.format_atom(atoms, origin, axes, 'Bohr')
mol = gto.M(atom=atoms, basis=basis, unit='Bohr', spin=None)
mol.cart = cart
so = symm.basis.symm_adapted_basis(mol, gpname)[0]
n = 0
for c in so:
if c.size > 0:
n += c.shape[1]
assert n == mol.nao_nr()
return n, so
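# Editorial note: get_so returns the total AO count together with the
# symmetry-adapted basis, one coefficient block per irreducible
# representation. The tests below deliberately load the carbon cc-pVQZ
# (or def2-SVP) basis for every element, presumably so that the expected
# AO counts stay uniform across atoms.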
class KnowValues(unittest.TestCase):
def test_symm_orb_h2o(self):
atoms = [['O' , (1. , 0. , 0. ,)],
[1 , (0. , -.757 , 0.587,)],
[1 , (0. , 0.757 , 0.587,)] ]
basis = {'H': gto.basis.load('cc_pvqz', 'C'),
'O': gto.basis.load('cc_pvqz', 'C'),}
self.assertEqual(get_so(atoms,basis )[0], 165)
self.assertEqual(get_so(atoms,basis,1)[0], 210)
def test_symm_orb_d2h(self):
atoms = [[1, (0., 0., 0.)],
[1, (1., 0., 0.)],
[1, (0., 1., 0.)],
[1, (0., 0., 1.)],
[1, (-1, 0., 0.)],
[1, (0.,-1., 0.)],
[1, (0., 0.,-1.)]]
basis = {'H': gto.basis.load('cc_pvqz', 'C'),}
self.assertEqual(get_so(atoms,basis )[0], 385)
self.assertEqual(get_so(atoms,basis,1)[0], 490)
def test_symm_orb_c2v(self):
atoms = [[1, (1., 0., 2.)],
[2, (0., 1., 0.)],
[1, (-2.,0.,-1.)],
[2, (0.,-1., 0.)]]
basis = {'H' : gto.basis.load('cc_pvqz', 'C'),
'He': gto.basis.load('cc_pvqz', 'C'),}
self.assertEqual(get_so(atoms,basis)[0], 220)
def test_symm_orb_c2h(self):
atoms = [[1, (1., 0., 2.)],
[2, (0., 1., 0.)],
[1, (-1.,0.,-2.)],
[2, (0.,-1., 0.)]]
basis = {'H' : gto.basis.load('cc_pvqz', 'C'),
'He': gto.basis.load('cc_pvqz', 'C'),}
self.assertEqual(get_so(atoms,basis )[0], 220)
self.assertEqual(get_so(atoms,basis,1)[0], 280)
atoms = [[1, (1., 0., 1.)],
[1, (1., 0.,-1.)],
[2, (0., 0., 2.)],
[2, (2., 0.,-2.)],
[3, (1., 1., 0.)],
[3, (1.,-1., 0.)]]
basis = {'H' : gto.basis.load('cc_pvqz', 'C'),
'He': gto.basis.load('cc_pvqz', 'C'),
'Li': gto.basis.load('cc_pvqz', 'C'),}
self.assertEqual(get_so(atoms,basis )[0], 330)
self.assertEqual(get_so(atoms,basis,1)[0], 420)
def test_symm_orb_d2(self):
atoms = [[1, (1., 0., 1.)],
[1, (1., 0.,-1.)],
[2, (0., 0., 2.)],
[2, (2., 0., 2.)],
[2, (1., 1.,-2.)],
[2, (1.,-1.,-2.)]]
basis = {'H' : gto.basis.load('cc_pvqz', 'C'),
'He': gto.basis.load('cc_pvqz', 'C'),}
self.assertEqual(get_so(atoms,basis )[0], 330)
self.assertEqual(get_so(atoms,basis,1)[0], 420)
def test_symm_orb_ci(self):
atoms = [[1, ( 1., 0., 0.)],
[2, ( 0., 1., 0.)],
[3, ( 0., 0., 1.)],
[4, ( .5, .5, .5)],
[1, (-1., 0., 0.)],
[2, ( 0.,-1., 0.)],
[3, ( 0., 0.,-1.)],
[4, (-.5,-.5,-.5)]]
basis = {'H' : gto.basis.load('cc_pvqz', 'C'),
'He': gto.basis.load('cc_pvqz', 'C'),
'Li': gto.basis.load('cc_pvqz', 'C'),
'Be': gto.basis.load('cc_pvqz', 'C'),}
self.assertEqual(get_so(atoms,basis)[0], 440)
def test_symm_orb_cs(self):
atoms = [[1, (1., 0., 2.)],
[2, (1., 0., 0.)],
[3, (2., 0.,-1.)],
[4, (0., 0., 1.)]]
basis = {'H' : gto.basis.load('cc_pvqz', 'C'),
'He': gto.basis.load('cc_pvqz', 'C'),
'Li': gto.basis.load('cc_pvqz', 'C'),
'Be': gto.basis.load('cc_pvqz', 'C'),}
self.assertEqual(get_so(atoms,basis)[0], 220)
def test_symm_orb_c1(self):
atoms = [[1, ( 1., 0., 0.)],
[2, ( 0., 1., 0.)],
[3, ( 0., 0., 1.)],
[4, ( .5, .5, .5)]]
basis = {'H' : gto.basis.load('cc_pvqz', 'C'),
'He': gto.basis.load('cc_pvqz', 'C'),
'Li': gto.basis.load('cc_pvqz', 'C'),
'Be': gto.basis.load('cc_pvqz', 'C'),}
self.assertEqual(get_so(atoms,basis)[0], 220)
def test_symm_orb_c3v_as_cs(self):
atoms = [['Fe', ( 0.000000 , 0.000000 , 0.015198 )],
['C', ( 0.000000 , 0.000000 , -1.938396)],
['C', ( 0.000000 , -1.394127 , -1.614155)],
['C', ( -1.207349 , 0.697064 , -1.614155)],
['C', ( 1.207349 , 0.697064 , -1.614155)],
['H', ( -0.922915 , -1.965174 , -1.708739)],
['H', ( 0.922915 , -1.965174 , -1.708739)],
['H', ( -1.240433 , 1.781855 , -1.708739)],
['H', ( -2.163348 , 0.183319 , -1.708739)],
['H', ( 2.163348 , 0.183319 , -1.708739)],
['H', ( 1.240433 , 1.781855 , -1.708739)],
['C', ( 0.000000 , 1.558543 , 0.887110 )],
['C', ( 1.349738 , -0.779272 , 0.887110 )],
['C', ( -1.349738 , -0.779272 , 0.887110 )],
['O', ( 0.000000 , 2.572496 , 1.441607 )],
['O', ( 2.227847 , -1.286248 , 1.441607 )],
['O', ( -2.227847 , -1.286248 , 1.441607 )],]
basis = {'Fe':gto.basis.load('def2svp', 'C'),
'C': gto.basis.load('def2svp', 'C'),
'H': gto.basis.load('def2svp', 'C'),
'O': gto.basis.load('def2svp', 'C'),}
n, so = get_so(atoms, basis)
self.assertEqual([c.shape[1] for c in so], [134, 104])
if __name__ == "__main__":
print("Full Tests symm.basis")
unittest.main()
|
gkc1000/pyscf
|
pyscf/symm/test/test_basis.py
|
Python
|
apache-2.0
| 6,887
|
[
"PySCF"
] |
f8ad14f0c50dad2252409169dde1960b8be38dc4b5cae3484470715325bc3642
|
"""Gaussian Mixture Model."""
# Author: Wei Xue <xuewei4d@gmail.com>
# Modified by Thierry Guillemot <thierry.guillemot.work@gmail.com>
# License: BSD 3 clause
import numpy as np
from scipy import linalg
from .base import BaseMixture, _check_shape
from ..externals.six.moves import zip
from ..utils import check_array
from ..utils.validation import check_is_fitted
from ..utils.extmath import row_norms
###############################################################################
# Gaussian mixture shape checkers used by the GaussianMixture class
def _check_weights(weights, n_components):
"""Check the user provided 'weights'.
Parameters
----------
weights : array-like, shape (n_components,)
The proportions of components of each mixture.
n_components : int
Number of components.
Returns
-------
weights : array, shape (n_components,)
"""
weights = check_array(weights, dtype=[np.float64, np.float32],
ensure_2d=False)
_check_shape(weights, (n_components,), 'weights')
# check range
if (any(np.less(weights, 0.)) or
any(np.greater(weights, 1.))):
raise ValueError("The parameter 'weights' should be in the range "
"[0, 1], but got max value %.5f, min value %.5f"
% (np.max(weights), np.min(weights)))
# check normalization
if not np.allclose(np.abs(1. - np.sum(weights)), 0.):
raise ValueError("The parameter 'weights' should be normalized, "
"but got sum(weights) = %.5f" % np.sum(weights))
return weights
def _check_means(means, n_components, n_features):
"""Validate the provided 'means'.
Parameters
----------
means : array-like, shape (n_components, n_features)
The centers of the current components.
n_components : int
Number of components.
n_features : int
Number of features.
Returns
-------
means : array, (n_components, n_features)
"""
means = check_array(means, dtype=[np.float64, np.float32], ensure_2d=False)
_check_shape(means, (n_components, n_features), 'means')
return means
def _check_precision_positivity(precision, covariance_type):
"""Check a precision vector is positive-definite."""
if np.any(np.less_equal(precision, 0.0)):
raise ValueError("'%s precision' should be "
"positive" % covariance_type)
def _check_precision_matrix(precision, covariance_type):
"""Check a precision matrix is symmetric and positive-definite."""
if not (np.allclose(precision, precision.T) and
np.all(linalg.eigvalsh(precision) > 0.)):
raise ValueError("'%s precision' should be symmetric, "
"positive-definite" % covariance_type)
def _check_precisions_full(precisions, covariance_type):
"""Check the precision matrices are symmetric and positive-definite."""
for prec in precisions:
_check_precision_matrix(prec, covariance_type)
def _check_precisions(precisions, covariance_type, n_components, n_features):
"""Validate user provided precisions.
Parameters
----------
precisions : array-like
'full' : shape of (n_components, n_features, n_features)
'tied' : shape of (n_features, n_features)
'diag' : shape of (n_components, n_features)
'spherical' : shape of (n_components,)
covariance_type : string
n_components : int
Number of components.
n_features : int
Number of features.
Returns
-------
precisions : array
"""
precisions = check_array(precisions, dtype=[np.float64, np.float32],
ensure_2d=False,
allow_nd=covariance_type == 'full')
precisions_shape = {'full': (n_components, n_features, n_features),
'tied': (n_features, n_features),
'diag': (n_components, n_features),
'spherical': (n_components,)}
_check_shape(precisions, precisions_shape[covariance_type],
'%s precision' % covariance_type)
precision_checkers = {'full': _check_precisions_full,
'tied': _check_precision_matrix,
'diag': _check_precision_positivity,
'spherical': _check_precision_positivity}
precision_checkers[covariance_type](precisions, covariance_type)
return precisions
###############################################################################
# Gaussian mixture parameters estimators (used by the M-Step)
def _estimate_gaussian_covariances_full(resp, X, nk, means, reg_covar):
"""Estimate the full covariance matrices.
Parameters
----------
resp : array-like, shape (n_samples, n_components)
X : array-like, shape (n_samples, n_features)
nk : array-like, shape (n_components,)
means : array-like, shape (n_components, n_features)
reg_covar : float
Returns
-------
covariances : array, shape (n_components, n_features, n_features)
The covariance matrix of the current components.
"""
n_components, n_features = means.shape
covariances = np.empty((n_components, n_features, n_features))
for k in range(n_components):
diff = X - means[k]
covariances[k] = np.dot(resp[:, k] * diff.T, diff) / nk[k]
covariances[k].flat[::n_features + 1] += reg_covar
return covariances
def _estimate_gaussian_covariances_tied(resp, X, nk, means, reg_covar):
"""Estimate the tied covariance matrix.
Parameters
----------
resp : array-like, shape (n_samples, n_components)
X : array-like, shape (n_samples, n_features)
nk : array-like, shape (n_components,)
means : array-like, shape (n_components, n_features)
reg_covar : float
Returns
-------
covariance : array, shape (n_features, n_features)
The tied covariance matrix of the components.
"""
avg_X2 = np.dot(X.T, X)
avg_means2 = np.dot(nk * means.T, means)
covariance = avg_X2 - avg_means2
covariance /= nk.sum()
covariance.flat[::len(covariance) + 1] += reg_covar
return covariance
def _estimate_gaussian_covariances_diag(resp, X, nk, means, reg_covar):
"""Estimate the diagonal covariance vectors.
Parameters
----------
resp : array-like, shape (n_samples, n_components)
X : array-like, shape (n_samples, n_features)
nk : array-like, shape (n_components,)
means : array-like, shape (n_components, n_features)
reg_covar : float
Returns
-------
covariances : array, shape (n_components, n_features)
The covariance vector of the current components.
"""
avg_X2 = np.dot(resp.T, X * X) / nk[:, np.newaxis]
avg_means2 = means ** 2
avg_X_means = means * np.dot(resp.T, X) / nk[:, np.newaxis]
return avg_X2 - 2 * avg_X_means + avg_means2 + reg_covar
def _estimate_gaussian_covariances_spherical(resp, X, nk, means, reg_covar):
"""Estimate the spherical variance values.
Parameters
----------
resp : array-like, shape (n_samples, n_components)
X : array-like, shape (n_samples, n_features)
nk : array-like, shape (n_components,)
means : array-like, shape (n_components, n_features)
reg_covar : float
Returns
-------
variances : array, shape (n_components,)
The variance values of each components.
"""
return _estimate_gaussian_covariances_diag(resp, X, nk,
means, reg_covar).mean(1)
def _estimate_gaussian_parameters(X, resp, reg_covar, covariance_type):
"""Estimate the Gaussian distribution parameters.
Parameters
----------
X : array-like, shape (n_samples, n_features)
The input data array.
resp : array-like, shape (n_samples, n_components)
The responsibilities for each data sample in X.
reg_covar : float
The regularization added to the diagonal of the covariance matrices.
covariance_type : {'full', 'tied', 'diag', 'spherical'}
The type of precision matrices.
Returns
-------
nk : array-like, shape (n_components,)
The numbers of data samples in the current components.
means : array-like, shape (n_components, n_features)
The centers of the current components.
covariances : array-like
The covariance matrix of the current components.
The shape depends on the covariance_type.
"""
nk = resp.sum(axis=0) + 10 * np.finfo(resp.dtype).eps
means = np.dot(resp.T, X) / nk[:, np.newaxis]
covariances = {"full": _estimate_gaussian_covariances_full,
"tied": _estimate_gaussian_covariances_tied,
"diag": _estimate_gaussian_covariances_diag,
"spherical": _estimate_gaussian_covariances_spherical
}[covariance_type](resp, X, nk, means, reg_covar)
return nk, means, covariances
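# Editorial sketch of how these helpers are expected to feed the M-step
# (names here are illustrative, not part of this module's API):
#   nk, means, covariances = _estimate_gaussian_parameters(X, resp,
#                                                          reg_covar, 'full')
#   weights = nk / X.shape[0]  # mixing weights from effective counts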
def _compute_precision_cholesky(covariances, covariance_type):
"""Compute the Cholesky decomposition of the precisions.
Parameters
----------
covariances : array-like
The covariance matrix of the current components.
The shape depends on the covariance_type.
covariance_type : {'full', 'tied', 'diag', 'spherical'}
The type of precision matrices.
Returns
-------
precisions_cholesky : array-like
The Cholesky decomposition of the sample precisions of the current
components. The shape depends on the covariance_type.
"""
estimate_precision_error_message = (
"Fitting the mixture model failed because some components have "
"ill-defined empirical covariance (for instance caused by singleton "
"or collapsed samples). Try to decrease the number of components, "
"or increase reg_covar.")
if covariance_type == 'full':
n_components, n_features, _ = covariances.shape
precisions_chol = np.empty((n_components, n_features, n_features))
for k, covariance in enumerate(covariances):
try:
cov_chol = linalg.cholesky(covariance, lower=True)
except linalg.LinAlgError:
raise ValueError(estimate_precision_error_message)
precisions_chol[k] = linalg.solve_triangular(cov_chol,
np.eye(n_features),
lower=True).T
elif covariance_type == 'tied':
_, n_features = covariances.shape
try:
cov_chol = linalg.cholesky(covariances, lower=True)
except linalg.LinAlgError:
raise ValueError(estimate_precision_error_message)
precisions_chol = linalg.solve_triangular(cov_chol, np.eye(n_features),
lower=True).T
else:
if np.any(np.less_equal(covariances, 0.0)):
raise ValueError(estimate_precision_error_message)
precisions_chol = 1. / np.sqrt(covariances)
return precisions_chol
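# Editorial note: for the 'full' case, precisions_chol[k] equals
# inv(L_k).T where covariances[k] = np.dot(L_k, L_k.T), so the precision
# matrix is recovered as np.dot(precisions_chol[k], precisions_chol[k].T).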
###############################################################################
# Gaussian mixture probability estimators
def _compute_log_det_cholesky(matrix_chol, covariance_type, n_features):
"""Compute the log-det of the cholesky decomposition of matrices.
Parameters
----------
matrix_chol : array-like
Cholesky decompositions of the matrices.
'full' : shape of (n_components, n_features, n_features)
'tied' : shape of (n_features, n_features)
'diag' : shape of (n_components, n_features)
'spherical' : shape of (n_components,)
covariance_type : {'full', 'tied', 'diag', 'spherical'}
n_features : int
Number of features.
Returns
-------
log_det_precision_chol : array-like, shape (n_components,)
The log-determinant of the Cholesky factor of the precision matrix
for each component.
"""
if covariance_type == 'full':
n_components, _, _ = matrix_chol.shape
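# Striding the flattened (n_features, n_features) matrices by
# n_features + 1 extracts each diagonal, so this sums the logs of the
# Cholesky diagonal entries per component (editorial comment).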
log_det_chol = (np.sum(np.log(
matrix_chol.reshape(
n_components, -1)[:, ::n_features + 1]), 1))
elif covariance_type == 'tied':
log_det_chol = (np.sum(np.log(np.diag(matrix_chol))))
elif covariance_type == 'diag':
log_det_chol = (np.sum(np.log(matrix_chol), axis=1))
else:
log_det_chol = n_features * (np.log(matrix_chol))
return log_det_chol
def _estimate_log_gaussian_prob(X, means, precisions_chol, covariance_type):
"""Estimate the log Gaussian probability.
Parameters
----------
X : array-like, shape (n_samples, n_features)
means : array-like, shape (n_components, n_features)
precisions_chol : array-like
Cholesky decompositions of the precision matrices.
'full' : shape of (n_components, n_features, n_features)
'tied' : shape of (n_features, n_features)
'diag' : shape of (n_components, n_features)
'spherical' : shape of (n_components,)
covariance_type : {'full', 'tied', 'diag', 'spherical'}
Returns
-------
log_prob : array, shape (n_samples, n_components)
"""
n_samples, n_features = X.shape
n_components, _ = means.shape
# log-det(precision_chol) is half of log-det(precision)
log_det = _compute_log_det_cholesky(
precisions_chol, covariance_type, n_features)
if covariance_type == 'full':
log_prob = np.empty((n_samples, n_components))
for k, (mu, prec_chol) in enumerate(zip(means, precisions_chol)):
y = np.dot(X, prec_chol) - np.dot(mu, prec_chol)
log_prob[:, k] = np.sum(np.square(y), axis=1)
elif covariance_type == 'tied':
log_prob = np.empty((n_samples, n_components))
for k, mu in enumerate(means):
y = np.dot(X, precisions_chol) - np.dot(mu, precisions_chol)
log_prob[:, k] = np.sum(np.square(y), axis=1)
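# Editorial comment: for the diagonal and spherical branches below, the
# quadratic form (x - mu)^T P (x - mu) expands to
# sum(mu**2 * p) - 2 * x.dot(mu * p) + (x**2).dot(p) per component, which
# is what the vectorized expressions compute.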
elif covariance_type == 'diag':
precisions = precisions_chol ** 2
log_prob = (np.sum((means ** 2 * precisions), 1) -
2. * np.dot(X, (means * precisions).T) +
np.dot(X ** 2, precisions.T))
elif covariance_type == 'spherical':
precisions = precisions_chol ** 2
log_prob = (np.sum(means ** 2, 1) * precisions -
2 * np.dot(X, means.T * precisions) +
np.outer(row_norms(X, squared=True), precisions))
return -.5 * (n_features * np.log(2 * np.pi) + log_prob) + log_det
class GaussianMixture(BaseMixture):
"""Gaussian Mixture.
Representation of a Gaussian mixture model probability distribution.
This class allows estimation of the parameters of a Gaussian mixture
distribution.
Read more in the :ref:`User Guide <gmm>`.
.. versionadded:: 0.18
Parameters
----------
n_components : int, defaults to 1.
The number of mixture components.
covariance_type : {'full' (default), 'tied', 'diag', 'spherical'}
String describing the type of covariance parameters to use.
Must be one of:
'full'
each component has its own general covariance matrix
'tied'
all components share the same general covariance matrix
'diag'
each component has its own diagonal covariance matrix
'spherical'
each component has its own single variance
tol : float, defaults to 1e-3.
The convergence threshold. EM iterations will stop when the
lower bound average gain is below this threshold.
reg_covar : float, defaults to 1e-6.
Non-negative regularization added to the diagonal of covariance.
Helps ensure that the covariance matrices are all positive.
max_iter : int, defaults to 100.
The number of EM iterations to perform.
n_init : int, defaults to 1.
The number of initializations to perform. The best results are kept.
init_params : {'kmeans', 'random'}, defaults to 'kmeans'.
The method used to initialize the weights, the means and the
precisions.
Must be one of::
'kmeans' : responsibilities are initialized using kmeans.
'random' : responsibilities are initialized randomly.
weights_init : array-like, shape (n_components, ), optional
The user-provided initial weights, defaults to None.
If it is None, weights are initialized using the `init_params` method.
means_init : array-like, shape (n_components, n_features), optional
The user-provided initial means, defaults to None.
If it is None, means are initialized using the `init_params` method.
precisions_init : array-like, optional.
The user-provided initial precisions (inverse of the covariance
matrices), defaults to None.
If it is None, precisions are initialized using the 'init_params' method.
The shape depends on 'covariance_type'::
(n_components,) if 'spherical',
(n_features, n_features) if 'tied',
(n_components, n_features) if 'diag',
(n_components, n_features, n_features) if 'full'
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
warm_start : bool, defaults to False.
If 'warm_start' is True, the solution of the last fitting is used as
initialization for the next call of fit(). This can speed up
convergence when fit is called several times on similar problems.
In that case, 'n_init' is ignored and only a single initialization
occurs upon the first call.
See :term:`the Glossary <warm_start>`.
verbose : int, defaults to 0.
Enable verbose output. If 1 then it prints the current
initialization and each iteration step. If greater than 1 then
it prints also the log probability and the time needed
for each step.
verbose_interval : int, defaults to 10.
Number of iterations done before the next print.
Attributes
----------
weights_ : array-like, shape (n_components,)
The weights of each mixture component.
means_ : array-like, shape (n_components, n_features)
The mean of each mixture component.
covariances_ : array-like
The covariance of each mixture component.
The shape depends on `covariance_type`::
(n_components,) if 'spherical',
(n_features, n_features) if 'tied',
(n_components, n_features) if 'diag',
(n_components, n_features, n_features) if 'full'
precisions_ : array-like
The precision matrices for each component in the mixture. A precision
matrix is the inverse of a covariance matrix. A covariance matrix is
symmetric positive definite so the mixture of Gaussians can be
equivalently parameterized by the precision matrices. Storing the
precision matrices instead of the covariance matrices makes it more
efficient to compute the log-likelihood of new samples at test time.
The shape depends on `covariance_type`::
(n_components,) if 'spherical',
(n_features, n_features) if 'tied',
(n_components, n_features) if 'diag',
(n_components, n_features, n_features) if 'full'
precisions_cholesky_ : array-like
The Cholesky decomposition of the precision matrices of each mixture
component. A precision matrix is the inverse of a covariance matrix.
A covariance matrix is symmetric positive definite so the mixture of
Gaussians can be equivalently parameterized by the precision matrices.
Storing the precision matrices instead of the covariance matrices makes
it more efficient to compute the log-likelihood of new samples at test
time. The shape depends on `covariance_type`::
(n_components,) if 'spherical',
(n_features, n_features) if 'tied',
(n_components, n_features) if 'diag',
(n_components, n_features, n_features) if 'full'
converged_ : bool
True when convergence was reached in fit(), False otherwise.
n_iter_ : int
Number of steps used by the best fit of EM to reach convergence.
lower_bound_ : float
Lower bound value on the log-likelihood (of the training data with
respect to the model) of the best fit of EM.
See Also
--------
BayesianGaussianMixture : Gaussian mixture model fit with variational
inference.
"""
def __init__(self, n_components=1, covariance_type='full', tol=1e-3,
reg_covar=1e-6, max_iter=100, n_init=1, init_params='kmeans',
weights_init=None, means_init=None, precisions_init=None,
random_state=None, warm_start=False,
verbose=0, verbose_interval=10):
super(GaussianMixture, self).__init__(
n_components=n_components, tol=tol, reg_covar=reg_covar,
max_iter=max_iter, n_init=n_init, init_params=init_params,
random_state=random_state, warm_start=warm_start,
verbose=verbose, verbose_interval=verbose_interval)
self.covariance_type = covariance_type
self.weights_init = weights_init
self.means_init = means_init
self.precisions_init = precisions_init
def _check_parameters(self, X):
"""Check the Gaussian mixture parameters are well defined."""
_, n_features = X.shape
if self.covariance_type not in ['spherical', 'tied', 'diag', 'full']:
raise ValueError("Invalid value for 'covariance_type': %s "
"'covariance_type' should be in "
"['spherical', 'tied', 'diag', 'full']"
% self.covariance_type)
if self.weights_init is not None:
self.weights_init = _check_weights(self.weights_init,
self.n_components)
if self.means_init is not None:
self.means_init = _check_means(self.means_init,
self.n_components, n_features)
if self.precisions_init is not None:
self.precisions_init = _check_precisions(self.precisions_init,
self.covariance_type,
self.n_components,
n_features)
def _initialize(self, X, resp):
"""Initialization of the Gaussian mixture parameters.
Parameters
----------
X : array-like, shape (n_samples, n_features)
resp : array-like, shape (n_samples, n_components)
"""
n_samples, _ = X.shape
weights, means, covariances = _estimate_gaussian_parameters(
X, resp, self.reg_covar, self.covariance_type)
weights /= n_samples
self.weights_ = (weights if self.weights_init is None
else self.weights_init)
self.means_ = means if self.means_init is None else self.means_init
if self.precisions_init is None:
self.covariances_ = covariances
self.precisions_cholesky_ = _compute_precision_cholesky(
covariances, self.covariance_type)
elif self.covariance_type == 'full':
self.precisions_cholesky_ = np.array(
[linalg.cholesky(prec_init, lower=True)
for prec_init in self.precisions_init])
elif self.covariance_type == 'tied':
self.precisions_cholesky_ = linalg.cholesky(self.precisions_init,
lower=True)
else:
self.precisions_cholesky_ = self.precisions_init
def _m_step(self, X, log_resp):
"""M step.
Parameters
----------
X : array-like, shape (n_samples, n_features)
log_resp : array-like, shape (n_samples, n_components)
Logarithm of the posterior probabilities (or responsibilities) of
each sample in X.
"""
n_samples, _ = X.shape
self.weights_, self.means_, self.covariances_ = (
_estimate_gaussian_parameters(X, np.exp(log_resp), self.reg_covar,
self.covariance_type))
self.weights_ /= n_samples
self.precisions_cholesky_ = _compute_precision_cholesky(
self.covariances_, self.covariance_type)
def _estimate_log_prob(self, X):
return _estimate_log_gaussian_prob(
X, self.means_, self.precisions_cholesky_, self.covariance_type)
def _estimate_log_weights(self):
return np.log(self.weights_)
def _compute_lower_bound(self, _, log_prob_norm):
return log_prob_norm
def _check_is_fitted(self):
check_is_fitted(self, ['weights_', 'means_', 'precisions_cholesky_'])
def _get_parameters(self):
return (self.weights_, self.means_, self.covariances_,
self.precisions_cholesky_)
def _set_parameters(self, params):
(self.weights_, self.means_, self.covariances_,
self.precisions_cholesky_) = params
# Attributes computation
_, n_features = self.means_.shape
if self.covariance_type == 'full':
self.precisions_ = np.empty(self.precisions_cholesky_.shape)
for k, prec_chol in enumerate(self.precisions_cholesky_):
self.precisions_[k] = np.dot(prec_chol, prec_chol.T)
elif self.covariance_type == 'tied':
self.precisions_ = np.dot(self.precisions_cholesky_,
self.precisions_cholesky_.T)
else:
self.precisions_ = self.precisions_cholesky_ ** 2
def _n_parameters(self):
"""Return the number of free parameters in the model."""
_, n_features = self.means_.shape
if self.covariance_type == 'full':
cov_params = self.n_components * n_features * (n_features + 1) / 2.
elif self.covariance_type == 'diag':
cov_params = self.n_components * n_features
elif self.covariance_type == 'tied':
cov_params = n_features * (n_features + 1) / 2.
elif self.covariance_type == 'spherical':
cov_params = self.n_components
mean_params = n_features * self.n_components
return int(cov_params + mean_params + self.n_components - 1)
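# Worked example (hypothetical numbers): with covariance_type='full',
# n_components=3 and n_features=2 this returns
#   cov_params  = 3 * 2 * (2 + 1) / 2 = 9
#   mean_params = 2 * 3               = 6
#   weights     = 3 - 1               = 2   (the weights sum to one)
# i.e. 17 free parameters in total.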
def bic(self, X):
"""Bayesian information criterion for the current model on the input X.
Parameters
----------
X : array of shape (n_samples, n_dimensions)
Returns
-------
bic : float
The lower the better.
"""
return (-2 * self.score(X) * X.shape[0] +
self._n_parameters() * np.log(X.shape[0]))
def aic(self, X):
"""Akaike information criterion for the current model on the input X.
Parameters
----------
X : array of shape (n_samples, n_dimensions)
Returns
-------
aic : float
The lower the better.
"""
return -2 * self.score(X) * X.shape[0] + 2 * self._n_parameters()
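# Minimal usage sketch (assumes scikit-learn is installed; the data below is
# made up for illustration). Lower BIC/AIC indicates a better trade-off
# between fit quality and model complexity:
#
#   import numpy as np
#   from sklearn.mixture import GaussianMixture
#   rng = np.random.RandomState(0)
#   X = np.vstack([rng.randn(100, 2), rng.randn(100, 2) + 5.0])
#   bic = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
#          for k in (1, 2, 3)}
#   best_k = min(bic, key=bic.get)   # the two-blob data should favor k=2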
|
vortex-ape/scikit-learn
|
sklearn/mixture/gaussian_mixture.py
|
Python
|
bsd-3-clause
| 27,921
|
[
"Gaussian"
] |
636e9c50d12c3689fd4a7171a5e1c058786fb3db60f5ab0f1e55102ace155328
|
# coding: utf-8
# Copyright (c) Pymatgen Development Team.
# Distributed under the terms of the MIT License.
"""
This module contains the error classes for the chemenv package.
"""
__author__ = "David Waroquiers"
__copyright__ = "Copyright 2012, The Materials Project"
__credits__ = "Geoffroy Hautier"
__version__ = "2.0"
__maintainer__ = "David Waroquiers"
__email__ = "david.waroquiers@gmail.com"
__date__ = "Feb 20, 2016"
class AbstractChemenvError(Exception):
"""
Abstract class for Chemenv errors.
"""
def __init__(self, cls, method, msg):
"""
:param cls:
:param method:
:param msg:
"""
self.cls = cls
self.method = method
self.msg = msg
def __str__(self):
return str(self.cls) + ": " + self.method + "\n" + repr(self.msg)
class NeighborsNotComputedChemenvError(AbstractChemenvError):
"""
Neighbors not computed error.
"""
def __init__(self, site):
"""
:param site:
"""
self.site = site
def __str__(self):
return "The neighbors were not computed for the following site : \n" + str(self.site)
class EquivalentSiteSearchError(AbstractChemenvError):
"""
Equivalent site search error.
"""
def __init__(self, site):
"""
:param site:
"""
self.site = site
def __str__(self):
return "Equivalent site could not be found for the following site : {}".format(str(self.site))
class SolidAngleError(AbstractChemenvError):
"""
Solid angle error.
"""
def __init__(self, cosinus):
"""
:param cosinus:
"""
self.cosinus = cosinus
def __str__(self):
return "Value of cosinus ({}) from which an angle should be retrieved" "is not between -1.0 and 1.0".format(
self.cosinus
)
class ChemenvError(Exception):
"""
Chemenv error.
"""
def __init__(self, cls, method, msg):
"""
:param cls:
:param method:
:param msg:
"""
self.cls = cls
self.method = method
self.msg = msg
def __str__(self):
return str(self.cls) + ": " + self.method + "\n" + repr(self.msg)
|
richardtran415/pymatgen
|
pymatgen/analysis/chemenv/utils/chemenv_errors.py
|
Python
|
mit
| 2,241
|
[
"pymatgen"
] |
642d085eb541283a3a59a2225df1e6387c113ebd0c1601281407bd8f60388b82
|
import logging
from numpy.lib.twodim_base import diag, eye
from numpy.ma.core import asarray
import os
from os.path import expanduser
from pickle import dump
from independent_jobs.tools.Log import Log
from kameleon_mcmc.distribution.Gaussian import Gaussian
from kameleon_mcmc.mcmc.MCMCChain import MCMCChain
from kameleon_mcmc.mcmc.MCMCParams import MCMCParams
from kameleon_mcmc.mcmc.output.StatisticsOutput import StatisticsOutput
from kameleon_mcmc.mcmc.output.StoreChainOutput import StoreChainOutput
from kameleon_mcmc.mcmc.samplers.StandardMetropolis import StandardMetropolis
from ozone.distribution.OzonePosterior import OzonePosterior
def main():
Log.set_loglevel(logging.DEBUG)
prior = Gaussian(Sigma=eye(2) * 100)
posterior = OzonePosterior(prior, logdet_alg="scikits",
solve_method="scikits")
proposal_cov = diag([ 4.000000000000000e-05, 1.072091680000000e+02])
mcmc_sampler = StandardMetropolis(posterior, scale=1.0, cov=proposal_cov)
start = asarray([-11.35, -13.1])
mcmc_params = MCMCParams(start=start, num_iterations=5000)
chain = MCMCChain(mcmc_sampler, mcmc_params)
chain.append_mcmc_output(StatisticsOutput(print_from=1, lag=1))
home = expanduser("~")
folder = os.sep.join([home, "sample_ozone_posterior_average_serial"])
store_chain_output = StoreChainOutput(folder)
chain.append_mcmc_output(store_chain_output)
loaded = store_chain_output.load_last_stored_chain()
if loaded is None:
logging.info("Running chain from scratch")
else:
logging.info("Running chain from iteration %d" % loaded.iteration)
chain = loaded
chain.run()
f = open(folder + os.sep + "final_chain", "w")
dump(chain, f)
f.close()
if __name__ == "__main__":
main()
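# Note on restarts (assumption: StoreChainOutput pickles the complete chain
# state): re-running this script resumes from the last stored iteration,
# because load_last_stored_chain() returns the previously persisted MCMCChain
# and the code above swaps it in before chain.run().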
|
karlnapf/ozone-roulette
|
ozone/scripts/sample_ozone_posterior_ground_truth.py
|
Python
|
bsd-2-clause
| 1,847
|
[
"Gaussian"
] |
f4fc6481642a979fbd43f7f0784b72dbb80655b36ffa2878c6fcfb50cc1edc78
|
#* This file is part of the MOOSE framework
#* https://www.mooseframework.org
#*
#* All rights reserved, see COPYRIGHT for full restrictions
#* https://github.com/idaholab/moose/blob/master/COPYRIGHT
#*
#* Licensed under LGPL 2.1, please see LICENSE for details
#* https://www.gnu.org/licenses/lgpl-2.1.html
from PyQt5.QtCore import pyqtSignal, pyqtSlot, QObject, QProcess
import re, os
import mooseutils
from peacock.base import MooseWidget
from peacock.utils import TerminalUtils
class JobRunner(QObject, MooseWidget):
"""
Actually runs the process. It will read the output and
translate any terminal color codes into html.
It will also attempt to parse the output to check to
see if we are at a new time step and emit the
timestep_updated signal.
Signals:
started: Emitted when we start running.
finished: Emitted when we are finished. Arguments are exit code and status message.
outputAdded: Emitted when there is new output.
timeStepUpdated: A new time step has started
error: Emitted when an error is encountered. Arguments are QProcess code and error description
"""
started = pyqtSignal()
finished = pyqtSignal(int, str)
outputAdded = pyqtSignal(str)
timeStepUpdated = pyqtSignal(int)
error = pyqtSignal(int, str)
def __init__(self, **kwds):
super(JobRunner, self).__init__(**kwds)
self.process = QProcess(self)
self.process.setProcessChannelMode(QProcess.MergedChannels)
self.process.readyReadStandardOutput.connect(self._readOutput)
self.process.finished.connect(self._jobFinished)
self.process.started.connect(self.started)
self.process.error.connect(self._error)
self._error_map = { QProcess.FailedToStart: "Failed to start",
QProcess.Crashed: "Crashed",
QProcess.Timedout: "Timedout",
QProcess.WriteError: "Write error",
QProcess.ReadError: "Read error",
QProcess.UnknownError: "Unknown error",
}
self.killed = False
self.setup()
def run(self, cmd, args):
"""
Start the command.
Arguments:
cmd: The command to run
args: A list of string arguments
"""
self.killed = False
self._sendMessage("Running command: %s %s" % (cmd, ' '.join(args)))
self._sendMessage("Working directory: %s" % os.getcwd())
self.process.start(cmd, args)
self.process.waitForStarted()
def _sendMessage(self, msg):
mooseutils.mooseMessage(msg, color="MAGENTA")
self.outputAdded.emit('<span style="color:magenta;">%s</span>' % msg)
@pyqtSlot(QProcess.ProcessError)
def _error(self, err):
"""
Slot called when the QProcess encounters an error.
Inputs:
err: One of the QProcess.ProcessError enums
"""
if not self.killed:
msg = self._error_map.get(err, "Unknown error")
self.error.emit(int(err), msg)
mooseutils.mooseMessage(msg, color="RED")
self.outputAdded.emit(msg)
@pyqtSlot(int, QProcess.ExitStatus)
def _jobFinished(self, code, status):
"""
Slot called when the QProcess is finished.
Inputs:
code: Exit code of the process.
status: QProcess.ExitStatus
"""
exit_status = "Finished"
if status != QProcess.NormalExit:
if self.killed:
exit_status = "Killed by user"
else:
exit_status = "Crashed"
self.finished.emit(code, exit_status)
self._sendMessage("%s: Exit code: %s" % (exit_status, code))
def kill(self):
"""
Kills the QProcess
"""
self.killed = True
mooseutils.mooseMessage("Killing")
self.process.terminate()
self.process.waitForFinished(1000)
if self.isRunning():
mooseutils.mooseMessage("Failed to terminate job cleanly. Doing a hard kill.")
self.process.kill()
self.process.waitForFinished()
@pyqtSlot()
def _readOutput(self):
"""
Slot called when the QProcess produces output.
"""
lines = []
while self.process.canReadLine():
tmp = self.process.readLine().data().decode("utf-8").rstrip()
lines.append(TerminalUtils.terminalOutputToHtml(tmp))
match = re.search(r'Time\sStep\s*([0-9]{1,})', tmp)
if match:
ts = int(match.group(1))
self.timeStepUpdated.emit(ts)
output = '<pre style="display: inline; margin: 0;">%s</pre>' % '\n'.join(lines)
self.outputAdded.emit(output)
def isRunning(self):
return self.process.state() == QProcess.Running
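# Regex sketch (hypothetical log line) showing what timeStepUpdated parses
# out of the solver output in _readOutput():
#
#   import re
#   line = "Time Step 12, time = 0.12"
#   m = re.search(r'Time\sStep\s*([0-9]{1,})', line)
#   assert m is not None and int(m.group(1)) == 12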
|
nuclear-wizard/moose
|
python/peacock/Execute/JobRunner.py
|
Python
|
lgpl-2.1
| 4,858
|
[
"MOOSE"
] |
b62237dfdb05c769f49ad138b93112756f2e6428e20120ebff537b9284cff1f2
|
#!/usr/bin/env python
import sys
import gfxprim.core as core
import gfxprim.loaders as loaders
import gfxprim.filters as filters
def progress_callback1(perc):
sys.stdout.write("\rloading %3.2f%%" % perc)
sys.stdout.flush()
return 0
def progress_callback2(perc, args):
sys.stdout.write("\r%s %3.2f%%" % (args[1], perc))
sys.stdout.flush()
return 0
def main():
if len(sys.argv) != 2:
print("Takes an image as an argument")
sys.exit(1)
try:
img = loaders.load(sys.argv[1], progress_callback1)
print('')
except OSError as detail:
print("Failed to load image '%s': %s" % (sys.argv[1], detail))
sys.exit(1)
try:
callback = (progress_callback2, "Gaussian Blur")
img = img.filters.gaussian_blur_alloc(50, 50, callback)
print('')
except OSError:
print("Filter Aborted")
if __name__ == '__main__':
main()
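# Hedged note: the callbacks above return 0 so the operation continues;
# returning a non-zero value is the conventional way to request an abort,
# which surfaces as the OSError handled above. Illustrative sketch:
#
#   def abort_after_half(perc):
#       return 1 if perc >= 50.0 else 0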
|
gfxprim/gfxprim
|
demos/py_simple/progress_callback.py
|
Python
|
lgpl-2.1
| 923
|
[
"Gaussian"
] |
f021bd22d824f463b97bc83a9f38a11d1dede0edb537b701aca94bb310f5f674
|
# vim: expandtab ts=4 sw=4 sts=4 fileencoding=utf-8:
#
# Copyright (C) 2007-2010 GNS3 Development Team (http://www.gns3.net/team).
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation;
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
#
# http://www.gns3.net/contact
#
import os, sys, socket, glob, shutil, time, base64, subprocess, tempfile
import GNS3.NETFile as netfile
import GNS3.Dynagen.dynamips_lib as lib
import GNS3.Dynagen.qemu_lib as qemu_lib
import GNS3.Globals as globals
import GNS3.UndoFramework as undo
import GNS3.WindowManipulator as winm
import GNS3.Dynagen.portTracker_lib as tracker
from PyQt4 import QtGui, QtCore, QtNetwork
from PyQt4.QtGui import QMainWindow, QIcon, QWizard
from GNS3.Ui.Form_MainWindow import Ui_MainWindow
from GNS3.Ui.Form_About import Ui_AboutDialog
from GNS3.IOSDialog import IOSDialog
from GNS3.SymbolManager import SymbolManager
from GNS3.ProjectDialog import ProjectDialog
from GNS3.SnapshotDialog import SnapshotDialog
from GNS3.Utils import debug, translate, fileBrowser, showDetailedMsgBox, runTerminal
from GNS3.Config.Preferences import PreferencesDialog
from GNS3.Config.Objects import recentFilesConf
from GNS3.Node.IOSRouter import IOSRouter
from GNS3.Node.AnyEmuDevice import AnyEmuDevice, JunOS, IDS, QemuDevice
from GNS3.Node.AnyVBoxEmuDevice import AnyVBoxEmuDevice
from GNS3.Pixmap import Pixmap
from GNS3.Export.DeployementWizard import DeployementWizard
class Workspace(QMainWindow, Ui_MainWindow):
""" This class is for managing the whole GUI `Workspace'.
Currently a Workspace is similar to a MainWindow
"""
def __init__(self):
# Initialize some variables
self.projectFile = None
self.projectWorkdir = None
self.projectConfigs = None
self.isTemporaryProject = False
self.saveCaptures = False
# Ask to unbase when saving
self.unbase = False
# Initialize the windows
QMainWindow.__init__(self)
self.submenu_Docks = QtGui.QMenu(self)
self.submenu_RecentFiles = QtGui.QMenu(self)
Ui_MainWindow.setupUi(self, self)
self.__createSubMenus()
self.__connectActions()
self.setCorner(QtCore.Qt.TopLeftCorner, QtCore.Qt.LeftDockWidgetArea)
self.setCorner(QtCore.Qt.BottomLeftCorner, QtCore.Qt.LeftDockWidgetArea)
self.setCorner(QtCore.Qt.TopRightCorner, QtCore.Qt.RightDockWidgetArea)
self.setCorner(QtCore.Qt.BottomRightCorner, QtCore.Qt.RightDockWidgetArea)
# By default show hostnames
self.flg_showHostname = True
self.action_ShowHostnames.setChecked(True)
# By default don't show interface names
self.flg_showInterfaceNames = False
# By default show only saved interface names (after loading a topology)
self.flg_showOnlySavedInterfaceNames = False
# By default don't show layer positioning
self.flg_showLayerPos = False
# Load UndoView with the Undo Stack
self.UndoViewDock.setStack(globals.GApp.topology.undoStack)
# By default, don't show the UndoView
self.dockWidget_UndoView.hide()
# Add Undo & Redo actions to Edit menu
action = globals.GApp.topology.undoStack.createUndoAction(self, translate('Workspace', '&Undo'))
action.setIcon(QIcon(':/icons/edit-undo.svg'))
action.setShortcut(translate("Workspace", "Ctrl+Z"))
self.menu_Edit.addAction(action)
self.menu_Edit.insertAction(self.action_SelectAll, action)
action = globals.GApp.topology.undoStack.createRedoAction(self, translate('Workspace', '&Redo'))
action.setShortcut(translate("Workspace", "Ctrl+Y"))
action.setIcon(QIcon(':/icons/edit-redo.svg'))
self.menu_Edit.insertAction(self.action_SelectAll, action)
self.menu_Edit.insertAction(self.action_SelectAll, self.menu_Edit.addSeparator())
# Class to display error/warning messages once
self.errorMessage = QtGui.QErrorMessage(self)
self.errorMessage.setMinimumSize(350, 200)
# Auto save timer
self.timer = QtCore.QTimer()
QtCore.QObject.connect(self.timer, QtCore.SIGNAL("timeout()"), self.__action_Autosave)
# Network Manager (used to check for update)
self.networkManager = QtNetwork.QNetworkAccessManager(self)
# Automatic check for update every 2 weeks (1209600 seconds)
if globals.GApp.systconf['general'].auto_check_for_update:
currentEpoch = int(time.mktime(time.localtime()))
if currentEpoch - globals.GApp.systconf['general'].last_check_for_update > 1209600:
# let's check for an update
self.__action_CheckForUpdate(silent=True)
globals.GApp.systconf['general'].last_check_for_update = currentEpoch
# Port tracker
self.track = tracker.portTracker()
# Register local addresses into tracker
local_addresses = map(lambda addr: unicode(addr.toString()), QtNetwork.QNetworkInterface.allAddresses())
for addr in local_addresses:
self.track.addLocalAddress(addr)
try:
from GNS3.TipsDialog import TipsDialog
self.tips_dialog = TipsDialog(self)
except:
self.tips_dialog = None
self.createToolsMenu()
self.updateAction_addLink()
def createToolsMenu(self):
""" Populate Tools menu
"""
# First Clear the menu
self.menu_Tools.clear()
# Terminal
terminal_action = QtGui.QAction(translate("Workspace", "Terminal"), self.menu_Tools)
terminal_action.setShortcut(translate("Workspace", "Ctrl+T"))
self.menu_Tools.addAction(terminal_action)
# VPCS
vpcs_action = QtGui.QAction(translate("Workspace", "VPCS"), self.menu_Tools)
if sys.platform.startswith('win'):
if self.projectConfigs:
vpcs_action.setData(QtCore.QVariant('vpcs-start.cmd "' + self.projectConfigs + '"'))
else:
vpcs_action.setData(QtCore.QVariant("vpcs-start.cmd"))
elif sys.platform.startswith('darwin'):
if self.projectConfigs:
#vpcs_action.setData(QtCore.QVariant("cd \\\"" + self.projectConfigs + "\\\" ; " + os.getcwdu() + os.sep + 'vpcs'))
vpcs_action.setData(QtCore.QVariant("cd \\\"" + self.projectConfigs + "\\\" ; " + os.getcwdu() + os.sep + '../Resources/vpcs'))
else:
#vpcs_action.setData(QtCore.QVariant(os.getcwdu() + os.sep + 'vpcs'))
vpcs_action.setData(QtCore.QVariant(os.getcwdu() + os.sep + '../Resources/vpcs'))
else:
result = []
for path_dir in os.environ.get('PATH', '').split(os.pathsep):
p = os.path.join(path_dir, 'vpcs')
if os.access(p, os.X_OK):
result.append(p)
if not len(result):
vpcs_action = QtGui.QAction(translate("Workspace", "VPCS not installed"), self.menu_Tools)
elif self.projectConfigs:
vpcs_action.setData(QtCore.QVariant("cd \"" + self.projectConfigs + "\" ; vpcs # /vpcs"))
else:
vpcs_action.setData(QtCore.QVariant('vpcs'))
self.menu_Tools.addAction(vpcs_action)
# Loopback Manager (Windows only)
if sys.platform.startswith('win'):
loopback_manager_action = QtGui.QAction(translate("Workspace", "Loopback Manager"), self.menu_Tools)
loopback_manager_action.setData(QtCore.QVariant("loopback-manager.cmd"))
self.menu_Tools.addAction(loopback_manager_action)
# Network device list (Windows only)
if sys.platform.startswith('win'):
network_device_list_action = QtGui.QAction(translate("Workspace", "Network device list"), self.menu_Tools)
network_device_list_action.setData(QtCore.QVariant("network-device-list.cmd"))
self.menu_Tools.addAction(network_device_list_action)
# Config extractor (Windows only)
if sys.platform.startswith('win'):
config_extractor_action = QtGui.QAction(translate("Workspace", "Configuration extractor"), self.menu_Tools)
config_extractor_action.setData(QtCore.QVariant("config-extractor.cmd"))
self.menu_Tools.addAction(config_extractor_action)
# Dynamips server (Windows only)
if sys.platform.startswith('win'):
dynamips_server_action = QtGui.QAction(translate("Workspace", "Dynamips server"), self.menu_Tools)
dynamips_server_action.setData(QtCore.QVariant("dynamips-start.cmd"))
self.menu_Tools.addAction(dynamips_server_action)
# Qemuwrapper (Windows only)
if sys.platform.startswith('win'):
qemuwrapper_action = QtGui.QAction(translate("Workspace", "Qemuwrapper"), self.menu_Tools)
qemuwrapper_action.setData(QtCore.QVariant("qemuwrapper-start.cmd"))
self.menu_Tools.addAction(qemuwrapper_action)
# Vboxwrapper (Windows only)
if sys.platform.startswith('win'):
vboxwrapper_action = QtGui.QAction(translate("Workspace", "Vboxwrapper"), self.menu_Tools)
vboxwrapper_action.setData(QtCore.QVariant("vboxwrapper-start.cmd"))
self.menu_Tools.addAction(vboxwrapper_action)
# Lab instructions
# if self.projectFile and os.path.exists(os.path.dirname(self.projectFile)):
# instructions_files = glob.glob(os.path.dirname(self.projectFile) + os.sep + "instructions.*")
# instructions_files += glob.glob(os.path.dirname(self.projectFile) + os.sep + "instructions" + os.sep + "instructions*")
# if len(instructions_files):
# path = instructions_files[0]
# instructions_action = QtGui.QAction(translate("Workspace", "Instructions"), self.menu_Tools)
# instructions_action.setData(QtCore.QVariant(path))
# self.menu_Tools.addAction(instructions_action)
def slotRunTool(self, action):
""" Run a tool from Tools menu
"""
if action.text() == translate("Workspace", 'Terminal'):
runTerminal()
# elif action.text() == translate("Workspace", "Instructions"):
# if QtGui.QDesktopServices.openUrl(QtCore.QUrl('file:///' + action.data().toString(), QtCore.QUrl.TolerantMode)) == False:
# QtGui.QMessageBox.critical(self, translate("Workspace", "Instructions"), translate("Workspace", "Couldn't open " + action.data().toString()))
elif action.text() == translate("Workspace", "VPCS not installed"):
QtGui.QMessageBox.information(self, translate("Workspace", "VPCS"), translate("Workspace", "vpcs must be found in PATH and marked as executable"))
else:
# tool_path = action.data().toString()
debug("Running tool: %s" % action.data().toString())
# if not os.path.exists(tool_path):
# QtGui.QMessageBox.critical(self, translate("Workspace", "Tool"), translate("Workspace", "Cannot locate: %s") % tool_path)
# return
runTerminal(action.data().toString())
def __connectActions(self):
""" Connect all needed pair (action, SIGNAL)
"""
self.connect(self.action_Export, QtCore.SIGNAL('triggered()'), self.__action_Export)
self.connect(self.action_AddLink, QtCore.SIGNAL('triggered()'), self.__action_addLink)
self.connect(self.action_IOS_images, QtCore.SIGNAL('triggered()'), self.__action_IOSImages)
self.connect(self.action_Symbol_Manager, QtCore.SIGNAL('triggered()'), self.__action_Symbol_Manager)
self.connect(self.action_ShowHostnames, QtCore.SIGNAL('triggered()'), self.__action_ShowHostnames)
self.connect(self.action_ShowinterfaceNames, QtCore.SIGNAL('triggered()'), self.__action_ShowInterfaceNames)
self.connect(self.action_ZoomIn, QtCore.SIGNAL('triggered()'), self.__action_ZoomIn)
self.connect(self.action_ZoomOut, QtCore.SIGNAL('triggered()'), self.__action_ZoomOut)
self.connect(self.action_ZoomReset, QtCore.SIGNAL('triggered()'), self.__action_ZoomReset)
self.connect(self.action_BrowseAllDevices, QtCore.SIGNAL('triggered()'), self.__action_BrowseAllDevices)
self.connect(self.action_Router, QtCore.SIGNAL('triggered()'), self.__action_Router)
self.connect(self.action_Switch, QtCore.SIGNAL('triggered()'), self.__action_Switch)
self.connect(self.action_EndDevices, QtCore.SIGNAL('triggered()'), self.__action_EndDevices)
self.connect(self.action_SecurityDevices, QtCore.SIGNAL('triggered()'), self.__action_SecurityDevices)
self.connect(self.action_DefaultStyle, QtCore.SIGNAL('triggered()'), self.__action_DefaultStyle)
self.connect(self.action_EnergySavingStyle, QtCore.SIGNAL('triggered()'), self.__action_EnergySavingStyle)
#self.connect(self.action_HighContrastStyle, QtCore.SIGNAL('triggered()'), self.__action_HighContrastStyle)
self.connect(self.action_SelectAll, QtCore.SIGNAL('triggered()'), self.__action_SelectAll)
self.connect(self.action_SelectNone, QtCore.SIGNAL('triggered()'), self.__action_SelectNone)
self.connect(self.action_Console, QtCore.SIGNAL('triggered()'), self.__action_Console)
self.connect(self.action_TelnetAll, QtCore.SIGNAL('triggered()'), self.__action_TelnetAll)
self.connect(self.action_ConsoleAuxAll, QtCore.SIGNAL('triggered()'), self.__action_ConsoleAuxAll)
self.connect(self.action_StartAll, QtCore.SIGNAL('triggered()'), self.__action_StartAll)
self.connect(self.action_StopAll, QtCore.SIGNAL('triggered()'), self.__action_StopAll)
self.connect(self.action_SuspendAll, QtCore.SIGNAL('triggered()'), self.__action_SuspendAll)
self.connect(self.action_ReloadAll, QtCore.SIGNAL('triggered()'), self.__action_ReloadAll)
self.connect(self.action_ShowVirtualBoxManager, QtCore.SIGNAL('triggered()'), self.__action_ShowVirtualBoxManager)
self.connect(self.action_OnlineHelp, QtCore.SIGNAL('triggered()'), self.__action_Help)
self.connect(self.action_About, QtCore.SIGNAL('triggered()'), self.__action_About)
self.connect(self.action_AboutQt, QtCore.SIGNAL('triggered()'), self.__action_AboutQt)
self.connect(self.action_CheckForUpdate, QtCore.SIGNAL('triggered()'), self.__action_CheckForUpdate)
self.connect(self.action_Tips, QtCore.SIGNAL('triggered()'), self.__action_Tips)
self.connect(self.action_Instructions, QtCore.SIGNAL('triggered()'), self.__action_Instructions)
self.connect(self.action_New, QtCore.SIGNAL('triggered()'), self.__action_NewProject)
self.connect(self.action_SaveProjectAs, QtCore.SIGNAL('triggered()'), self.__action_SaveProjectAs)
self.connect(self.action_Open, QtCore.SIGNAL('triggered()'), self.__action_OpenFile)
self.connect(self.action_Save, QtCore.SIGNAL('triggered()'), self.__action_Save)
self.connect(self.action_Preferences, QtCore.SIGNAL('triggered()'), self.__action_Preferences)
self.connect(self.action_AddNote, QtCore.SIGNAL('triggered()'), self.__action_AddNote)
self.connect(self.action_config, QtCore.SIGNAL('triggered()'), self.__action_Config)
self.connect(self.action_InsertImage, QtCore.SIGNAL('triggered()'), self.__action_InsertImage)
self.connect(self.action_DrawRectangle, QtCore.SIGNAL('triggered()'), self.__action_DrawRectangle)
self.connect(self.action_DrawEllipse, QtCore.SIGNAL('triggered()'), self.__action_DrawEllipse)
self.connect(self.action_Snapshot, QtCore.SIGNAL('triggered()'), self.__action_Snapshot)
self.connect(self.action_Undo, QtCore.SIGNAL('triggered()'), self.__action_Undo)
self.connect(self.action_Redo, QtCore.SIGNAL('triggered()'), self.__action_Redo)
self.connect(self.action_ShowLayers, QtCore.SIGNAL('triggered()'), self.__action_ShowLayers)
self.connect(self.action_ResetInterfaceLabels, QtCore.SIGNAL('triggered()'), self.__action_ResetInterfaceLabels)
self.connect(self.action_Deployement_Wizard, QtCore.SIGNAL('triggered()'), self.__action_DisplayWizard)
# Device menu is contextual and is build on-the-fly
self.connect(self.menuDevice, QtCore.SIGNAL('aboutToShow()'), self.__action_ShowDeviceMenu)
# Connect tool menu to run tools
self.connect(self.menu_Tools, QtCore.SIGNAL("triggered(QAction *)"), self.slotRunTool)
def __action_DisplayWizard(self):
self.wizard = DeployementWizard()
self.wizard.show()
self.wizard.exec_()
def __action_ShowDeviceMenu(self):
self.menuDevice.clear()
globals.GApp.scene.makeContextualMenu(self.menuDevice)
def __createSubMenus(self):
""" Create new sub-menus
"""
# Create and populate docks submenu
self.submenu_Docks.addAction(self.dockWidget_NodeTypes.toggleViewAction())
self.submenu_Docks.addAction(self.dockWidget_TopoSum.toggleViewAction())
self.submenu_Docks.addAction(self.dockWidget_Console.toggleViewAction())
self.submenu_Docks.addAction(self.dockWidget_UndoView.toggleViewAction())
self.submenu_Docks.addAction(self.dockWidget_Capture.toggleViewAction())
self.menu_View.addSeparator().setText(translate("Workspace", "Docks"))
self.menu_View.addMenu(self.submenu_Docks)
# Create and populate recent files submenu
recent_files = list(globals.GApp.recentfiles)
recent_files.reverse()
for recent_file_conf in recent_files:
action = QtGui.QAction(recent_file_conf.path, self.submenu_RecentFiles)
self.submenu_RecentFiles.addAction(action)
# Add clear menu action
if len(globals.GApp.recentfiles):
self.submenu_RecentFiles.addSeparator()
clear_action = QtGui.QAction(translate("Workspace", "Clear Menu"), self.submenu_RecentFiles)
self.submenu_RecentFiles.addAction(clear_action)
# Insert recent files submenu in File menu
self.submenu_RecentFiles.setTitle(translate("Workspace", "Recent Files"))
self.submenu_RecentFiles.setIcon(QtGui.QIcon(":/icons/open.svg"))
separator = self.menu_File.insertSeparator(self.action_Save)
self.menu_File.insertMenu(separator, self.submenu_RecentFiles)
self.connect(self.submenu_RecentFiles, QtCore.SIGNAL("triggered(QAction *)"), self.slotLoadRecentFile)
def __action_Instructions(self, silent=False):
# Lab instructions
if self.projectFile and os.path.exists(os.path.dirname(self.projectFile)):
instructions_files = glob.glob(os.path.dirname(self.projectFile) + os.sep + "instructions.*")
instructions_files += glob.glob(os.path.dirname(self.projectFile) + os.sep + "instructions" + os.sep + "instructions*")
if len(instructions_files):
path = instructions_files[0]
if QtGui.QDesktopServices.openUrl(QtCore.QUrl('file:///' + path, QtCore.QUrl.TolerantMode)) == False and silent == False:
QtGui.QMessageBox.critical(self, translate("Workspace", "Instructions"), translate("Workspace", "Couldn't open " + path))
elif silent == False:
QtGui.QMessageBox.critical(self, translate("Workspace", "Instructions"), translate("Workspace", "No instructions found. Click <a href='http://www.gns3.net/documentation/instructions/'>here</a> to to see how to add instructions to your project"))
def slotLoadRecentFile(self, action):
""" Called when a file is selected from the Recent Files submenu
action: QtCore.QAction instance
"""
action_text = unicode(action.text(), 'utf-8', errors='replace')
# If action is Clear Menu, then we clear the recent files submenu
if translate("Workspace", "Clear Menu") == action_text:
globals.GApp.recentfiles = []
self.submenu_RecentFiles.clear()
return
self.loadNetfile(action_text)
def retranslateUi(self, MainWindow):
Ui_MainWindow.retranslateUi(self, MainWindow)
self.submenu_Docks.setTitle(translate('Workspace', 'Docks'))
# Retranslate dock contents...
try:
self.nodesDock.retranslateUi(self.nodesDock)
self.treeWidget_TopologySummary.retranslateUi(self.treeWidget_TopologySummary)
except Exception:
# Ignore if not implemented
pass
def centerDialog(self, dialog):
""" Manually center a dialog on the screen
"""
layoutSizeHint = dialog.layout().sizeHint()
p = dialog.geometry().center()
r = QtCore.QRect(QtCore.QPoint(0, 0), layoutSizeHint)
r.moveCenter(p)
dialog.setMinimumSize(QtCore.QSize(0, 0))
dialog.setGeometry(r)
dialog.setMinimumSize(layoutSizeHint)
def __export(self, path, format):
""" Take a screenshot
"""
if format == 'PDF':
#FIXME: seems PDF export doesn't work since Qt version 4.5.0 (on Linux)
printer = QtGui.QPrinter(QtGui.QPrinter.HighResolution)
printer.setOutputFormat(QtGui.QPrinter.PdfFormat)
printer.setOrientation(QtGui.QPrinter.Landscape)
printer.setOutputFileName(path)
painter = QtGui.QPainter(printer)
painter.setRenderHint(QtGui.QPainter.Antialiasing)
self.graphicsView.render(painter)
painter.end()
else:
# reply = QtGui.QMessageBox.question(self, translate("Workspace", "Message"), translate("Workspace", "Yes - Take all the workspace\nNo - Take only what I see"),
# QtGui.QMessageBox.Yes, QtGui.QMessageBox.No)
#
# if reply == QtGui.QMessageBox.Yes:
#
# items = self.graphicsView.scene().items()
# max_x = max_y = min_x = min_y = 0
# for item in items:
# if item.x() > max_x:
# max_x = item.x()
# if item.y() > max_y:
# max_y = item.y()
# if item.x() < min_x:
# min_x = item.x()
# if item.y() < min_y:
# min_y = item.y()
# x = min_x - 30
# y = min_y - 30
# width = abs(x) + max_x + 200
# height = abs(y) + max_y + 200
#
# else:
rect = self.graphicsView.viewport().rect()
width = rect.width()
height = rect.height()
pixmap = QtGui.QPixmap(width, height)
pixmap.fill(QtCore.Qt.white)
painter = QtGui.QPainter(pixmap)
painter.setRenderHint(QtGui.QPainter.Antialiasing, True)
painter.setRenderHint(QtGui.QPainter.TextAntialiasing, True)
painter.setRenderHint(QtGui.QPainter.SmoothPixmapTransform, True)
# if reply == QtGui.QMessageBox.Yes:
# self.graphicsView.scene().render(painter, QtCore.QRectF(0,0,pixmap.width(),pixmap.height()), QtCore.QRectF(x, y, width, height))
# else:
self.graphicsView.render(painter)
painter.end()
pixmap.save(path, format)
#pixmap = QtGui.QPixmap.grabWidget(self.graphicsView)
#pixmap.save(path, format)
def __action_Export(self):
""" Export the scene to an image file
"""
filedialog = QtGui.QFileDialog(self)
selected = QtCore.QString()
exports = 'PNG File (*.png);;JPG File (*.jpeg *.jpg);;BMP File (*.bmp);;XPM File (*.xpm *.xbm);;PDF File (*.pdf)'
if self.projectFile:
directory = os.path.dirname(self.projectFile)
else:
directory = globals.GApp.systconf['general'].project_path
path = QtGui.QFileDialog.getSaveFileName(filedialog, 'Screenshot', directory, exports, selected)
if not path:
return
path = unicode(path)
#FIXME: bug with Qt 4.5, selected always empty! Temporary work-around, users have to specify the extension:
if selected == '':
format = path[-3:]
else:
format = unicode(unicode(selected)[:3])
if str(selected) == 'PNG File (*.png)' and not path.endswith(".png"):
path = path + '.png'
if str(selected) == 'JPG File (*.jpeg *.jpg)' and not (path.endswith(".jpg") or path.endswith(".jpeg")):
path = path + '.jpeg'
if str(selected) == 'BMP File (*.bmp)' and not path.endswith(".bmp"):
path = path + '.bmp'
if str(selected) == 'XPM File (*.xpm *.xbm)' and not (path.endswith(".xpm") or path.endswith(".xbm")):
path = path + '.xpm'
if str(selected) == 'PDF File (*.pdf)' and not path.endswith(".pdf"):
path = path + '.pdf'
try:
self.__export(path, format.upper())
except IOError, (errno, strerror):
QtGui.QMessageBox.critical(self, translate("Workspace", "I/O Error"), translate("Workspace", "I/O Error: %s") % strerror)
def clear_workdir(self, projectWorkdir):
""" Delete useless working directory files
"""
if globals.GApp.systconf['dynamips'].clean_workdir:
# delete dynamips files
dynamips_files = glob.glob(os.path.normpath(globals.GApp.systconf['dynamips'].workdir) + os.sep + "c[0-9][0-9][0-9][0-9]_*")
dynamips_files += glob.glob(os.path.normpath(globals.GApp.systconf['dynamips'].workdir) + os.sep + "*ghost*")
dynamips_files += glob.glob(os.path.normpath(globals.GApp.systconf['dynamips'].workdir) + os.sep + "ilt_*")
dynamips_files += glob.glob(os.path.normpath(globals.GApp.systconf['dynamips'].workdir) + os.sep + "*_lock")
dynamips_files += glob.glob(os.path.normpath(globals.GApp.systconf['dynamips'].workdir) + os.sep + "*_log.txt")
if projectWorkdir:
# delete useless project files
dynamips_files += glob.glob(os.path.normpath(projectWorkdir) + os.sep + "*ghost*")
dynamips_files += glob.glob(os.path.normpath(projectWorkdir) + os.sep + "ilt_*")
dynamips_files += glob.glob(os.path.normpath(projectWorkdir) + os.sep + "*_lock")
dynamips_files += glob.glob(os.path.normpath(projectWorkdir) + os.sep + "c[0-9][0-9][0-9][0-9]_*_log.txt")
dynamips_files += glob.glob(os.path.normpath(projectWorkdir) + os.sep + "c[0-9][0-9][0-9][0-9]_*_rommon_vars")
dynamips_files += glob.glob(os.path.normpath(projectWorkdir) + os.sep + "c[0-9][0-9][0-9][0-9]_*_ssa")
for file in dynamips_files:
try:
debug("DELETING %s" % file)
os.remove(file)
except (OSError, IOError), e:
#print translate("Workspace", "Warning: Can't delete %s => %s") % (file, e.strerror)
continue
# delete temporary projects left behind
project_dirs = glob.glob(tempfile.gettempdir() + os.sep + 'GNS3_*')
for project_dir in project_dirs:
shutil.rmtree(project_dir, ignore_errors=True)
def clear(self):
""" Clear all the workspace
"""
# First stop all nodes
self.__action_StopAll()
globals.GApp.workspace.setWindowTitle("GNS3")
projectWorkdir = self.projectWorkdir
self.timer.stop()
self.projectFile = None
self.projectWorkdir = None
self.projectConfigs = None
self.saveCaptures = False
self.unbase = False
globals.GApp.topology.clear()
self.clear_workdir(projectWorkdir)
globals.GApp.mainWindow.capturesDock.refresh()
self.track.clearAllTcpPort()
def __action_Config(self):
""" Choose between extracting or importing configs
"""
options = [translate("Workspace", "Extract configs to a directory"), translate("Workspace", "Import configs from a directory")]
(selection, ok) = QtGui.QInputDialog.getItem(self, translate("Workspace", "Import/Export IOS Startup Configs"),
translate("Workspace", "Please choose an option:"), options, 0, False)
if ok:
selection = unicode(selection)
if selection == translate("Workspace", "Extract configs to a directory"):
self.extractConfigs()
elif selection == translate("Workspace", "Import configs from a directory"):
self.importConfigs()
def extractConfigs(self):
""" Extract all startup-config
"""
fb = fileBrowser(translate('Workspace', 'Directory to write startup-configs'), directory=os.path.normpath(globals.GApp.systconf['general'].project_path), parent=self)
path = fb.getDir()
if path:
path = os.path.normpath(path)
globals.GApp.workspace.projectConfigs = path
net = netfile.NETFile()
for node in globals.GApp.topology.items():
# record router configs
if isinstance(node, IOSRouter) and globals.GApp.workspace.projectConfigs:
device = node.get_dynagen_device()
try:
net.export_router_config(device)
except lib.DynamipsErrorHandled:
node.shutdownInterfaces()
node.state = device.state
node.updateToolTips()
globals.GApp.mainWindow.treeWidget_TopologySummary.changeNodeStatus(node.hostname, node.state)
continue
def importConfigs(self):
""" Import all startup-config
"""
fb = fileBrowser(translate('Workspace', 'Directory to read startup-configs'), directory=os.path.normpath(globals.GApp.systconf['general'].project_path), parent=self)
path = fb.getDir()
if path:
path = os.path.normpath(path)
try:
contents = os.listdir(path)
except OSError, e:
QtGui.QMessageBox.critical(self, translate("Workspace", "IO Error"), unicode(e))
return
for file in contents:
if file[-4:].lower() == '.cfg':
device = file[:-4]
print translate("Workspace", "Importing %s from %s") % (device, file)
try:
f = open(path + os.sep + file, 'r')
config = f.read()
config = '!\n' + config
f.close()
# Encodestring puts in a bunch of newlines. Split them out then join them back together
encoded = ("").join(base64.encodestring(config).split())
globals.GApp.dynagen.devices[device].config_b64 = encoded
except IOError, e:
QtGui.QMessageBox.critical(self, translate("Workspace", "IO Error"), unicode(e))
return
except KeyError:
print translate("Workspace", "Ignoring unknown device %s") % device
except lib.DynamipsError, e:
print translate("Workspace", "Dynamips Error: %s") % e
except lib.DynamipsWarning, e:
print translate("Workspace", "Dynamips Warning: %s") % e
except (lib.DynamipsErrorHandled, socket.error):
QtGui.QMessageBox.critical(self, translate("Workspace", "%s: Dynamips error") % device, translate("Workspace", "Connection lost"))
self.__action_Save(auto=True)
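# Encoding sketch (illustrative): base64.encodestring() wraps its output at
# 76 characters, so the split()/join() above strips the embedded newlines
# before the config is handed to Dynamips:
#
#   import base64
#   encoded = ("").join(base64.encodestring('!\nhostname R1\n').split())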
def __action_AddNote(self):
""" Add a note to the scene
"""
if not self.action_AddNote.isChecked():
globals.addingNote = False
globals.GApp.scene.setCursor(QtCore.Qt.ArrowCursor)
else:
globals.addingNote = True
globals.GApp.scene.setCursor(QtCore.Qt.IBeamCursor)
def __action_InsertImage(self):
""" Insert an image
"""
if self.projectFile:
directory = os.path.dirname(self.projectFile)
else:
directory = globals.GApp.systconf['general'].project_path
(path, selected) = fileBrowser(translate("Workspace", "Open a file"), \
filter='PNG File (*.png);;GIF File (*.gif);;JPG File (*.jpeg *.jpg);;BMP File (*.bmp);;XPM File (*.xpm *.xbm);;PBM File (*.pbm);;PGM File (*.pgm);;PPM File (*.ppm);;All files (*.*)',
directory=directory, parent=self).getFile()
if path != None and path != '':
path = os.path.normpath(path)
pixmap_image = QtGui.QPixmap(path)
if not pixmap_image.isNull():
# copy the image in the project directory
if self.projectWorkdir:
try:
shutil.copy(path, self.projectWorkdir)
path = self.projectWorkdir + os.sep + os.path.basename(path)
except (OSError, IOError), e:
debug("Warning: cannot copy " + path + " to " + self.projectWorkdir + ": " + e.strerror)
item = Pixmap(pixmap_image, path)
# center the image
pos_x = item.pos().x() - (item.boundingRect().width() / 2)
pos_y = item.pos().y() - (item.boundingRect().height() / 2)
item.setPos(pos_x, pos_y)
# add the image to the scene
command = undo.AddItem(globals.GApp.topology, item, translate('Workspace', 'picture'))
globals.GApp.topology.undoStack.push(command)
def stopAction_addLink(self):
""" Stop the add link action (called from the Scene)
"""
self.action_AddLink.setChecked(False)
self.action_AddLink.setText(translate('Workspace', 'Add a link'))
self.action_AddLink.setIcon(QIcon(':/icons/connection-new.svg'))
globals.addingLinkFlag = False
globals.GApp.scene.setCursor(QtCore.Qt.ArrowCursor)
def startAction_addLink(self):
""" Start the add link action (called from the Scene)
"""
self.action_AddLink.setChecked(True)
self.__action_addLink()
def updateAction_addLink(self):
""" Update the tooltip and status bar message for add a link action
"""
if globals.GApp.systconf['general'].manual_connection:
msg = translate("Workspace", "Add a link (press SHIFT to select link type and enable auto module insertion)")
else:
msg = translate("Workspace", "Add a link (auto module insertion enabled)")
self.action_AddLink.setToolTip(msg)
self.action_AddLink.setStatusTip(msg)
def __action_addLink(self):
""" Implement the QAction `addLink'
- This function manages the creation of a connection between two nodes.
"""
if not self.action_AddLink.isChecked():
self.action_AddLink.setText(translate('Workspace', 'Add a link'))
newLinkIcon = QtGui.QIcon()
newLinkIcon.addPixmap(QtGui.QPixmap(":/icons/connection-new.svg"), QtGui.QIcon.Normal, QtGui.QIcon.On)
newLinkIcon.addPixmap(QtGui.QPixmap(":/icons/connection-new-hover.svg"), QtGui.QIcon.Active, QtGui.QIcon.On)
self.action_AddLink.setIcon(newLinkIcon)
globals.addingLinkFlag = False
globals.GApp.scene.resetAddingLink()
globals.GApp.scene.setCursor(QtCore.Qt.ArrowCursor)
else:
modifiers = QtGui.QApplication.keyboardModifiers()
if not globals.GApp.systconf['general'].manual_connection or modifiers == QtCore.Qt.ShiftModifier:
menu = QtGui.QMenu()
for linktype in globals.linkTypes.keys():
menu.addAction(linktype)
menu.connect(menu, QtCore.SIGNAL("triggered(QAction *)"), self.__setLinkType)
menu.exec_(QtGui.QCursor.pos())
else:
globals.currentLinkType = globals.Enum.LinkType.Manual
self.action_AddLink.setText(translate('Workspace', 'Cancel'))
self.action_AddLink.setIcon(QIcon(':/icons/cancel-connection.svg'))
globals.addingLinkFlag = True
globals.GApp.scene.setCursor(QtCore.Qt.CrossCursor)
def __setLinkType(self, action):
""" Set the link type to use
"""
action = str(action.text())
globals.currentLinkType = globals.linkTypes[action]
def __action_IOSImages(self):
""" Implement the QAction `IOSImages'
- Show a dialog to configure IOSImages and hypervisors
- Add / Edit / Delete images
- Add / Edit / Delete hypervisors
"""
dialog = IOSDialog(self)
dialog.setModal(True)
dialog.show()
dialog.exec_()
self.dockWidget_NodeTypes.setVisible(False)
def __action_Symbol_Manager(self):
""" Implement the QAction `Symbol_Manager'
- Show a dialog to configure the symbols
"""
dialog = SymbolManager(self)
dialog.setModal(True)
dialog.show()
dialog.exec_()
globals.GApp.scene.reloadRenderers()
self.nodesDock.clear()
self.nodesDock.populateNodeDock(globals.GApp.workspace.dockWidget_NodeTypes.windowTitle())
def __action_Undo(self):
""" Implement the QAction `Undo'
- Undo an action
"""
globals.GApp.topology.undoStack.undo()
def __action_Redo(self):
""" Implement the QAction `Undo'
- Redo a action
"""
globals.GApp.topology.undoStack.redo()
def __action_ShowLayers(self):
""" Implement the QAction `Show layers'
- Show layer positioning for every items
"""
if self.flg_showLayerPos == False:
self.flg_showLayerPos = True
else:
self.flg_showLayerPos = False
for item in globals.GApp.topology.items():
item.update()
def __action_SelectAll(self):
""" Implement the QAction `SelectAll'
- Select all nodes on the topology
"""
for node in globals.GApp.topology.nodes.itervalues():
node.setSelected(True)
def __action_SelectNone(self):
""" Implement the QAction `SelectNone'
- Deselect all nodes on the topology
"""
for node in globals.GApp.topology.nodes.itervalues():
node.setSelected(False)
def __action_ZoomIn(self):
""" Scale in the scene (QGraphicsView)
"""
factor_in = pow(2.0, 120 / 240.0)
globals.GApp.scene.scaleView(factor_in)
def __action_ZoomOut(self):
""" Scale out the scene (QGraphicsView)
"""
factor_out = pow(2.0, -120 / 240.0)
globals.GApp.scene.scaleView(factor_out)
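# Zoom factor note: pow(2.0, 120 / 240.0) is sqrt(2) (about 1.41), so each
# zoom step per 120-unit mouse-wheel notch scales the view by a factor of
# ~1.41, and the negative exponent above gives the matching inverse for
# zooming out.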
def __action_ZoomReset(self):
""" Restore the default scale on the scene (QGraphicsView)
"""
globals.GApp.scene.resetMatrix()
def __doSlidingWindow(self, type):
""" Make the NodeDock appear (sliding effect is in progress)
with the appropriate title and the devices concerned listed.
Hide the window if the same category is clicked again.
"""
if self.dockWidget_NodeTypes.windowTitle() == type:
self.dockWidget_NodeTypes.setVisible(False)
self.dockWidget_NodeTypes.setWindowTitle('')
else:
self.dockWidget_NodeTypes.setWindowTitle(type)
self.dockWidget_NodeTypes.setVisible(True)
self.nodesDock.clear()
self.nodesDock.populateNodeDock(type)
def __action_BrowseAllDevices(self):
""" Display all devices from all categories.
"""
self.__doSlidingWindow('All devices')
def __action_Router(self):
""" Display all devices in the "routers" category.
"""
self.__doSlidingWindow('Routers')
def __action_Switch(self):
""" Display all devices in the "switches" category.
"""
self.__doSlidingWindow('Switches')
def __action_EndDevices(self):
""" Display all devices in the "end device" category.
"""
self.__doSlidingWindow('End devices')
def __action_SecurityDevices(self):
""" Display all devices in the "security devices" category.
"""
self.__doSlidingWindow('Security devices')
def __action_DefaultStyle(self):
""" Restore/Put stylesheet back to normal (and destroy the planet)
"""
self.setStyleSheet('')
self.__restoreIcons()
self.action_EnergySavingStyle.setChecked(False)
self.action_HighContrastStyle.setChecked(False)
def __action_EnergySavingStyle(self):
""" Put stylesheet meant to save energy, very popular these days
"""
self.setStyleSheet(' QMainWindow {} QMenuBar { background: black; } QDockWidget { background: black; color: white; } QToolBar { background: black; } QFrame { background: gray; } QToolButton { width: 30px; height: 30px; /*border:solid 1px black opacity 0.4;*/ /*background-none;*/ } QStatusBar { /* background-image: url(:/pictures/pictures/texture_blackgrid.png);*/ background: black; color: rgb(255,255,255); } ')
self.action_DefaultStyle.setChecked(False)
self.action_HighContrastStyle.setChecked(False)
def __action_HighContrastStyle(self):
""" Put stylesheet meant to display high contrast icons, useful for low vision people
"""
self.action_StartAll.setIcon(QtGui.QIcon(':/icons/play7-test.svg'))
self.action_SuspendAll.setIcon(QtGui.QIcon(':/icons/pause3-test.svg'))
self.action_StopAll.setIcon(QtGui.QIcon(':/icons/stop3-test.svg'))
self.action_EnergySavingStyle.setChecked(False)
self.action_DefaultStyle.setChecked(False)
def __restoreIcons(self):
""" Put normal icons back if the High Contrast Mode has been activated
and the user wants to go back to default style
"""
startAllIcon = QtGui.QIcon()
startAllIcon.addPixmap(QtGui.QPixmap(":/icons/play2-test.svg"), QtGui.QIcon.Normal, QtGui.QIcon.On)
startAllIcon.addPixmap(QtGui.QPixmap(":/icons/play7-test.svg"), QtGui.QIcon.Active, QtGui.QIcon.On)
self.action_StartAll.setIcon(startAllIcon)
pauseAllIcon = QtGui.QIcon()
pauseAllIcon.addPixmap(QtGui.QPixmap(":/icons/pause2-test.svg"), QtGui.QIcon.Normal, QtGui.QIcon.On)
pauseAllIcon.addPixmap(QtGui.QPixmap(":/icons/pause3-test.svg"), QtGui.QIcon.Active, QtGui.QIcon.On)
self.action_SuspendAll.setIcon(pauseAllIcon)
stopAllIcon = QtGui.QIcon()
stopAllIcon.addPixmap(QtGui.QPixmap(":/icons/stop2-test.svg"), QtGui.QIcon.Normal, QtGui.QIcon.On)
stopAllIcon.addPixmap(QtGui.QPixmap(":/icons/stop3-test.svg"), QtGui.QIcon.Active, QtGui.QIcon.On)
self.action_StopAll.setIcon(stopAllIcon)
def __action_ShowHostnames(self):
""" Display/Hide hostnames for all the nodes on the scene
"""
if self.flg_showHostname == False:
self.flg_showHostname = True
for node in globals.GApp.topology.nodes.itervalues():
node.showHostname()
else:
self.flg_showHostname = False
for node in globals.GApp.topology.nodes.itervalues():
node.removeHostname()
def __action_ShowInterfaceNames(self):
""" Display/Hide interface names for all the nodes on the scene
"""
if self.flg_showInterfaceNames == False:
if not len(globals.interfaceLabels) and self.flg_showOnlySavedInterfaceNames:
reply = QtGui.QMessageBox.question(self, translate("Workspace", "Message"), translate("Workspace", "Reset saved interface labels?"),
QtGui.QMessageBox.Yes, QtGui.QMessageBox.No)
if reply == QtGui.QMessageBox.Yes:
self.flg_showOnlySavedInterfaceNames = False
self.flg_showInterfaceNames = True
for link in globals.GApp.topology.links:
link.adjust()
else:
self.flg_showInterfaceNames = False
for link in globals.GApp.topology.links:
link.adjust()
def __action_ResetInterfaceLabels(self):
""" Reset saved Interface Labels
"""
if self.flg_showInterfaceNames:
QtGui.QMessageBox.warning(self, translate("Workspace", "Interface labels"), translate("Workspace", "Please hide the interface names before using this option"))
return
self.flg_showOnlySavedInterfaceNames = False
for link in globals.GApp.topology.links:
link.labelSouceIf = None
link.labelDestIf = None
link.adjust()
QtGui.QMessageBox.information(self, translate("Workspace", "Interface labels"), translate("Workspace", "Interface labels have been reset"))
def __action_Console(self):
menu = QtGui.QMenu()
menu.addAction(self.action_TelnetAll)
menu.addAction(self.action_ConsoleAuxAll)
menu.exec_(QtGui.QCursor.pos())
def __action_TelnetAll(self):
""" Telnet to all started IOS routers
"""
for node in globals.GApp.topology.nodes.itervalues():
if (isinstance(node, IOSRouter) or isinstance(node, AnyEmuDevice) or isinstance(node, AnyVBoxEmuDevice)) and node.get_dynagen_device().state == 'running':
time.sleep(globals.GApp.systconf['general'].console_delay)
node.console()
def __action_ConsoleAuxAll(self):
""" Console AUX to all started IOS routers
"""
for node in globals.GApp.topology.nodes.itervalues():
if isinstance(node, IOSRouter) and node.get_dynagen_device().state == 'running':
time.sleep(globals.GApp.systconf['general'].console_delay)
node.aux()
def __launchProgressDialog(self, action, text, autostart=False):
""" Launch a progress dialog and do a action
action: string
text: string
"""
errors = ""
translated_action = ""
node_list = []
if autostart == True:
for (hostname, value) in globals.GApp.dynagen.autostart.iteritems():
if value == True:
node = globals.GApp.topology.getNode(globals.GApp.topology.getNodeID(hostname))
node_list.append(node)
else:
for node in globals.GApp.topology.nodes.values():
if isinstance(node, IOSRouter) or isinstance(node, AnyEmuDevice) or isinstance(node, AnyVBoxEmuDevice):
node_list.append(node)
count = len(node_list)
if count == 0:
return
progress = QtGui.QProgressDialog(text, translate("Workspace", "Abort"), 0, count, globals.GApp.mainWindow)
progress.setWindowTitle("GNS3")
progress.setMinimum(1)
progress.setWindowModality(QtCore.Qt.WindowModal)
globals.GApp.processEvents(QtCore.QEventLoop.AllEvents)
current = 0
for node in node_list:
server = node.get_dynagen_device().dynamips.host + ':' + str(node.get_dynagen_device().dynamips.port)
progress.setValue(current)
globals.GApp.processEvents(QtCore.QEventLoop.AllEvents | QtCore.QEventLoop.WaitForMoreEvents, 1000)
if progress.wasCanceled():
progress.reset()
break
try:
if action == 'starting':
translated_action = translate("Workspace", "starting")
node.startNode(progress=True)
# Slow start feature
seconds = globals.GApp.systconf['general'].slow_start
if seconds > 0:
globals.GApp.processEvents(QtCore.QEventLoop.AllEvents | QtCore.QEventLoop.WaitForMoreEvents, 1000)
time.sleep(seconds)
if action == 'stopping':
translated_action = translate("Workspace", "stopping")
node.stopNode(progress=True)
if action == 'suspending':
translated_action = translate("Workspace", "suspending")
node.suspendNode(progress=True)
if action == 'reloading':
translated_action = translate("Workspace", "reloading")
node.reloadNode(progress=True)
except lib.DynamipsError, msg:
errors += translate("Workspace", "%s: error from server %s: %s") % (node.hostname, server, unicode(msg))
errors += "\n"
except lib.DynamipsWarning, msg:
errors += translate("Workspace", "%s: warning from server %s: %s") % (node.hostname, server, unicode(msg))
errors += "\n"
except (lib.DynamipsErrorHandled, socket.error):
errors += translate("Workspace", "%s: lost communication with server %s") % (node.hostname, server)
errors += "\n"
finally:
current += 1
progress.setValue(count)
progress.deleteLater()
progress = None
if errors:
showDetailedMsgBox(self, translate("Workspace", "%s nodes") % translated_action, translate("Workspace", "Issues have been detected while %s nodes, please check details ...") % translated_action, errors)
def __action_ShowVirtualBoxManager(self):
""" Show VirtualBox Manager
"""
if not self.bringVirtualBoxManagerToFront():
if sys.platform.startswith('win'):
if 'VBOX_INSTALL_PATH' not in os.environ:
QtGui.QMessageBox.critical(self, translate("Workspace", "VirtualBox Manager"), translate("Workspace", "VirtualBox is not installed!"))
return
subprocess.Popen(os.environ['VBOX_INSTALL_PATH'] + 'VirtualBox.exe', shell=False)
else:
subprocess.Popen("VirtualBox &", shell=True)
def bringVirtualBoxManagerToFront(self):
""" Attempts to bring VirtualBoxManager to front, and returns True if succeeds.
False means that further processing required.
"""
#Technologov: This code is experimental.
#FIXME: Maybe it should be based on PIDs, rather than window names?
if sys.platform.startswith('win'):
return winm.bringWindowToFront("Oracle VM VirtualBox Manager", "")
elif sys.platform.startswith('darwin'):
# Not implemented.
return False
else: # X11-based UNIX-like system
return winm.bringWindowToFront("", "VirtualBox Manager")
def __action_StartAll(self):
""" Start all nodes
"""
self.__launchProgressDialog('starting', translate("Workspace", "Starting nodes ..."))
def __action_StopAll(self):
""" Stop all nodes
"""
self.__launchProgressDialog('stopping', translate("Workspace", "Stopping nodes ..."))
def __action_SuspendAll(self):
""" Suspend all nodes
"""
self.__launchProgressDialog('suspending', translate("Workspace", "Suspending nodes ..."))
def __action_ReloadAll(self):
""" Reload all nodes
"""
self.__launchProgressDialog('reloading', translate("Workspace", "Reloading nodes ..."))
def __action_Help(self):
""" Launch a browser for the pointing to the documentation page
"""
QtGui.QDesktopServices.openUrl(QtCore.QUrl("http://www.gns3.net/documentation"))
def __action_DrawRectangle(self):
""" Draw a rectangle on the scene
"""
if not self.action_DrawRectangle.isChecked():
globals.drawingRectangle = False
globals.GApp.scene.setCursor(QtCore.Qt.ArrowCursor)
else:
globals.drawingRectangle = True
globals.GApp.scene.setCursor(QtCore.Qt.PointingHandCursor)
def __action_DrawEllipse(self):
""" Draw an ellipse on the scene
"""
if not self.action_DrawEllipse.isChecked():
globals.drawingEllipse = False
globals.GApp.scene.setCursor(QtCore.Qt.ArrowCursor)
else:
globals.drawingEllipse = True
globals.GApp.scene.setCursor(QtCore.Qt.PointingHandCursor)
def __action_About(self):
""" Show GNS3 about dialog
"""
dialog = QtGui.QDialog(self)
ui = Ui_AboutDialog()
ui.setupUi(dialog)
# Dynamically put current version number in About dialog
from __main__ import VERSION
text = ui.aboutText.text()
text = text.replace("%VERSION%", VERSION)  # replace() returns the result; reassign so plain str works too
ui.aboutText.setText(text)
dialog.setModal(True)
dialog.show()
dialog.exec_()
def __action_AboutQt(self):
""" Show Qt about dialog
"""
QtGui.QMessageBox.aboutQt(self)
def __action_CheckForUpdate(self, silent=False, url=None):
""" Check if a newer version is available
"""
if url:
request = QtNetwork.QNetworkRequest(url)
else:
request = QtNetwork.QNetworkRequest(QtCore.QUrl("http://update.gns3.net/latest_release.txt"))
request.setRawHeader("User-Agent", "GNS3 Check For Update")
request.setAttribute(QtNetwork.QNetworkRequest.User, QtCore.QVariant(silent))
reply = self.networkManager.get(request)
reply.finished[()].connect(self.__processCheckForUpdateReply)
def __action_Tips(self):
""" Show the Tips dialog
"""
if self.tips_dialog:
self.tips_dialog.timer.start()
self.tips_dialog.show()
self.tips_dialog.loadWebPage()
self.tips_dialog.exec_()
def __processCheckForUpdateReply(self):
""" Process reply for check for update
"""
from __main__ import VERSION
from distutils.version import LooseVersion
network_reply = self.sender()
isSilent = network_reply.request().attribute(QtNetwork.QNetworkRequest.User).toBool()
# Follow any redirection
possibleRedirect = network_reply.attribute(QtNetwork.QNetworkRequest.RedirectionTargetAttribute).toUrl()
if not possibleRedirect.isEmpty():
self.__action_CheckForUpdate(isSilent, possibleRedirect)
return
if network_reply.error() != QtNetwork.QNetworkReply.NoError and not isSilent:
QtGui.QMessageBox.critical(self, translate("Workspace", "Check For Update"), translate("Workspace", "Cannot check for update ... Try again later"))
else:
latest_release = str(network_reply.readAll()).rstrip()
try:
if LooseVersion(VERSION) < latest_release:
reply = QtGui.QMessageBox.question(self, translate("Workspace", "Check For Update"),
translate("Workspace", "Newer GNS3 version %s is available, do you want to visit our website to download it?") % latest_release, QtGui.QMessageBox.Yes, \
QtGui.QMessageBox.No)
if reply == QtGui.QMessageBox.Yes:
QtGui.QDesktopServices.openUrl(QtCore.QUrl("http://www.gns3.net/download"))
elif not isSilent:
QtGui.QMessageBox.information(self, translate("Workspace", "Check For Update"), translate("AbstractNode", "GNS3 is up-to-date!"))
except:
# File "GNS3\Workspace.pyo", line 957, in __processCheckForUpdateReply
# File "distutils\version.pyo", line 296, in __cmp__
#AttributeError: LooseVersion instance has no attribute 'version'
debug("Couldn't check for an update, exception in LooseVersion()!")
network_reply.deleteLater()
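# A short illustration of the LooseVersion comparison used above (version
# strings invented). LooseVersion compares component-wise, but can raise from
# inside __cmp__ when the server reply is not a parsable version string
# (e.g. an HTML error page), hence the blanket except:
#
#   from distutils.version import LooseVersion
#   LooseVersion("0.8.6") < "0.8.7"   # True  -> ask the user to download
#   LooseVersion("0.8.6") < "0.8.6"   # False -> "GNS3 is up-to-date!"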
def __action_Preferences(self):
""" Show the preferences dialog
"""
globals.preferencesWindow = PreferencesDialog(self)
globals.preferencesWindow.show()
globals.preferencesWindow.exec_()
globals.preferencesWindow = None
def load_netfile(self, file, load_instructions=False):
""" Load a .net file"""
if file == None:
return
path = unicode(os.path.abspath(file))
if not os.path.exists(path):
QtGui.QMessageBox.critical(self, translate("Workspace", "Loading"), translate("Workspace", "No such file: %s") % file)
return
if not os.path.isfile(path):
QtGui.QMessageBox.critical(self, translate("Workspace", "Loading"), translate("Workspace", "Not a regular file: %s") % file)
return
self.projectFile = path
self.setWindowTitle("GNS3 - " + self.projectFile)
net = netfile.NETFile()
globals.GApp.scene.resetMatrix()
net.import_net_file(path)
# refresh tool menu to reflect the current working directory
self.createToolsMenu()
self.__launchProgressDialog('starting', translate("Workspace", "Starting nodes ..."), autostart=True)
if load_instructions:
self.__action_Instructions(silent=True)
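# Hedged usage sketch (path invented): load_netfile() is the programmatic
# entry point used by loadNetfile() and restoreSnapshot(), e.g.:
#
#   workspace.load_netfile("/home/user/GNS3/Projects/lab1/topology.net", load_instructions=True)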
def __action_NewProject(self):
""" Create a new project
"""
if len(globals.GApp.dynagen.devices):
reply = QtGui.QMessageBox.question(self, translate("Workspace", "Message"),
translate("Workspace", "This will clear your current topology. Continue?"), QtGui.QMessageBox.Yes, \
QtGui.QMessageBox.No)
if reply == QtGui.QMessageBox.No:
return
self.clear()
projectDialog = ProjectDialog(parent=self, newProject=True)
projectDialog.pushButtonOpenProject.setEnabled(False)
projectDialog.pushButtonRecentFiles.setEnabled(False)
self.projectWorkdir = None
self.projectConfigs = None
projectDialog.setModal(True)
projectDialog.show()
projectDialog.exec_()
def __action_SaveProjectAs(self):
""" Save project in a new location
"""
running_nodes = False
for node in globals.GApp.topology.nodes.itervalues():
if (isinstance(node, IOSRouter) or isinstance(node, AnyEmuDevice) or isinstance(node, AnyVBoxEmuDevice)) and node.get_dynagen_device().state == 'running':
running_nodes = True
if running_nodes:
reply = QtGui.QMessageBox.question(self, translate("Workspace", "Message"), translate("Workspace", "This action is going to stop all your devices and captures, would you like to continue anyway?"),
QtGui.QMessageBox.Yes, QtGui.QMessageBox.No)
if reply == QtGui.QMessageBox.No:
return
if not self.projectFile:
new_project = True
else:
new_project = False
if not self.isTemporaryProject:
projectDialog = ProjectDialog(self, self.projectFile, self.projectWorkdir, self.projectConfigs, self.unbase, self.saveCaptures, new_project)
else:
projectDialog = ProjectDialog(self, self.projectFile, None, self.projectConfigs, None, None, new_project)
projectDialog.pushButtonOpenProject.setEnabled(False)
projectDialog.pushButtonRecentFiles.setEnabled(False)
if self.projectFile:
projectDialog.setWindowTitle("Save Project As...")
#self.projectWorkdir = None
#self.projectConfigs = None
projectDialog.setModal(True)
projectDialog.show()
projectDialog.exec_()
def createProject(self, settings):
""" Create a new project
"""
globals.GApp.workspace.setWindowTitle("GNS3")
self.projectWorkdir = None
self.projectConfigs = None
self.unbase = False
self.saveCaptures = False
(self.projectFile, self.projectWorkdir, self.projectConfigs, self.unbase, self.saveCaptures) = settings
# Create a project in a temporary location
if not self.projectFile and not self.projectWorkdir and not self.projectConfigs:
self.isTemporaryProject = True
try:
projectDir = tempfile.mkdtemp(prefix='GNS3_')
self.projectWorkdir = os.path.normpath(projectDir + os.sep + 'working')
self.projectConfigs = os.path.normpath(projectDir + os.sep + 'configs')
self.projectFile = os.path.normpath(projectDir + os.sep + 'topology.net')
except (OSError, IOError), e:
QtGui.QMessageBox.critical(self, translate('Workspace', 'createProject'),
translate("Workspace", "Cannot create directory %s: %s") % (projectDir, e.strerror))
else:
self.isTemporaryProject = False
# Always create a working directory for a project...
# self.projectWorkdir = os.path.normpath(os.path.dirname(self.projectFile) + os.sep + 'working')
if self.projectWorkdir and not os.access(self.projectWorkdir, os.F_OK):
try:
os.mkdir(self.projectWorkdir)
except (OSError, IOError), e:
print "Warning: cannot create directory: " + self.projectWorkdir + ": " + e.strerror
if self.projectConfigs and not os.access(self.projectConfigs, os.F_OK):
try:
os.mkdir(self.projectConfigs)
except (OSError, IOError), e:
print "Warning: cannot create directory: " + self.projectConfigs + ": " + e.strerror
if self.saveCaptures and not os.access(os.path.dirname(self.projectFile) + os.sep + 'captures', os.F_OK):
try:
os.mkdir(os.path.dirname(self.projectFile) + os.sep + 'captures')
except (OSError, IOError), e:
print "Warning: cannot create directory: " + os.path.dirname(self.projectFile) + os.sep + 'captures' + ": " + e.strerror
qemu_flash_drives_directory = os.path.dirname(self.projectFile) + os.sep + 'qemu-flash-drives'
if not os.access(qemu_flash_drives_directory, os.F_OK):
try:
os.mkdir(qemu_flash_drives_directory)
except (OSError, IOError), e:
print "Warning: cannot create directory: " + qemu_flash_drives_directory + ": " + e.strerror
if len(globals.GApp.dynagen.devices):
if self.projectConfigs:
for node in globals.GApp.topology.nodes.values():
if isinstance(node, IOSRouter) and node.router.cnfg:
try:
shutil.copy(node.router.cnfg, self.projectConfigs)
except (OSError, IOError), e:
debug("Warning: cannot copy " + node.router.cnfg + " to " + self.projectConfigs)
continue
except:
continue
config = os.path.basename(node.router.cnfg)
node.router.cnfg = self.projectConfigs + os.sep + config
for node in globals.GApp.topology.nodes.values():
if isinstance(node, AnyEmuDevice) and qemu_flash_drives_directory != node.qemu.workingdir:
# Stop this node
node.stopNode()
qemu_files = glob.glob(os.path.normpath(node.qemu.workingdir) + os.sep + node.hostname)
for qemu_file in qemu_files:
try:
dest = qemu_flash_drives_directory + os.sep + node.hostname
debug("MOVING %s to %s" % (qemu_file, dest))
shutil.copytree(qemu_file, dest)
except (OSError, IOError), e:
debug("Warning: cannot copy " + qemu_file + " to " + qemu_flash_drives_directory)
continue
except:
continue
# if self.unbase:
# debug("Unbasing %s" % node.hostname)
# node.get_dynagen_device().unbase()
if self.projectWorkdir:
# stop the node before moving files
for node in globals.GApp.topology.nodes.values():
if (isinstance(node, IOSRouter) and self.projectWorkdir != node.hypervisor.workingdir):
node.stopNode()
globals.GApp.mainWindow.capturesDock.stopAllCaptures()
# move dynamips & Qemu files
for node in globals.GApp.topology.nodes.values():
if isinstance(node, IOSRouter) and self.projectWorkdir != node.hypervisor.workingdir:
dynamips_files = glob.glob(os.path.normpath(node.hypervisor.workingdir) + os.sep + node.get_platform() + "_" + node.hostname + "_nvram*")
dynamips_files += glob.glob(os.path.normpath(node.hypervisor.workingdir) + os.sep + node.get_platform() + "_" + node.hostname + "_disk*")
dynamips_files += glob.glob(os.path.normpath(node.hypervisor.workingdir) + os.sep + node.get_platform() + "_" + node.hostname + "_slot*")
dynamips_files += glob.glob(os.path.normpath(node.hypervisor.workingdir) + os.sep + node.get_platform() + "_" + node.hostname + "_rom")
dynamips_files += glob.glob(os.path.normpath(node.hypervisor.workingdir) + os.sep + node.get_platform() + "_" + node.hostname + "_*flash*")
#dynamips_files += [os.path.normpath(node.hypervisor.workingdir) + os.sep + node.get_dynagen_device().formatted_ghost_file()]
for dynamips_file in dynamips_files:
try:
debug("MOVING %s to %s" % (dynamips_file, self.projectWorkdir))
shutil.copy(dynamips_file, self.projectWorkdir)
except (OSError, IOError), e:
debug("Warning: cannot copy " + dynamips_file + " to " + self.projectWorkdir)
continue
except:
continue
# clean the original working directory
#self.clear_workdir(os.path.normpath(node.hypervisor.workingdir))
# if (isinstance(node, QemuDevice) or isinstance(node, JunOS) or isinstance(node, IDS)) and self.unbase:
# node.get_dynagen_device().unbase()
# set the new working directory
try:
for hypervisor in globals.GApp.dynagen.dynamips.values():
if isinstance(hypervisor, qemu_lib.Qemu):
hypervisor.workingdir = qemu_flash_drives_directory
elif self.projectWorkdir:
hypervisor.workingdir = self.projectWorkdir
except lib.DynamipsError, msg:
QtGui.QMessageBox.critical(self, translate('Workspace', 'Setting new working dir'), translate("Workspace", "Dynamips error %s: %s") % (self.projectWorkdir, unicode(msg)))
if self.isTemporaryProject == False:
self.__action_Save()
self.setWindowTitle("GNS3 Project - " + os.path.split(os.path.dirname(self.projectFile))[1])
# refresh tool menu to reflect the current working directory
self.createToolsMenu()
def __action_Snapshot(self):
""" Open snapshot dialog
"""
self.snapDialog = SnapshotDialog(self)
self.snapDialog.setModal(True)
self.snapDialog.show()
self.snapDialog.exec_()
def createSnapshot(self, name):
""" Create a new snapshot of the current topology
"""
if self.projectFile is None:
if self.__action_SaveProjectAs() == False:
return
self.createSnapshot(name)
return
projectName = os.path.basename(self.projectFile)
projectDir = os.path.dirname(self.projectFile)
snapshotDir = os.path.join(projectDir, 'snapshots')
snapshot_workdir = None
snapshot_qemu_flash_drives = None
snapshot_captures = None
snapshot_dir = snapshotDir + os.sep + projectName.replace('.net', '') + '_' + name + '_snapshot_' + time.strftime("%d%m%y_%H%M%S")
snapshot_configs = snapshot_dir + os.sep + 'configs'
try:
os.makedirs(snapshot_dir)
if os.path.exists(projectDir + os.sep + 'working'):
snapshot_workdir = snapshot_dir + os.sep + 'working'
if os.path.exists(projectDir + os.sep + 'qemu-flash-drives'):
snapshot_qemu_flash_drives = snapshot_dir + os.sep + 'qemu-flash-drives'
if os.path.exists(projectDir + os.sep + 'captures'):
snapshot_captures = snapshot_dir + os.sep + 'captures'
os.mkdir(snapshot_configs)
if snapshot_workdir:
os.mkdir(snapshot_workdir)
if snapshot_qemu_flash_drives:
os.mkdir(snapshot_qemu_flash_drives)
except (OSError, IOError), e:
QtGui.QMessageBox.critical(self, translate("Workspace", "Snapshot"), translate("Workspace", "Cannot create directories in %s: %s") % (snapshot_dir, e.strerror))
return
splash = QtGui.QSplashScreen(QtGui.QPixmap(":images/logo_gns3_splash.png"))
splash.show()
splash.showMessage(translate("Workspace", "Please wait while creating a snapshot"))
globals.GApp.processEvents(QtCore.QEventLoop.AllEvents | QtCore.QEventLoop.WaitForMoreEvents, 1000)
# save configs directory content
try:
shutil.copytree(snapshotDir + os.sep + 'configs', snapshot_configs)
except (OSError, IOError), e:
debug("Warning: cannot copy config files to " + snapshot_configs)
# save captures directory content
if snapshot_captures:
try:
shutil.copytree(snapshotDir + os.sep + 'captures', snapshot_captures)
except (OSError, IOError), e:
debug("Warning: cannot copy capture files to " + snapshot_captures)
# copy dynamips working directory (only useful files)
for node in globals.GApp.topology.nodes.values():
if isinstance(node, IOSRouter):
if snapshot_workdir:
dynamips_files = glob.glob(os.path.normpath(node.hypervisor.workingdir) + os.sep + node.get_platform() + '_' + node.hostname + '_nvram*')
dynamips_files += glob.glob(os.path.normpath(node.hypervisor.workingdir) + os.sep + node.get_platform() + '_' + node.hostname + '_disk*')
dynamips_files += glob.glob(os.path.normpath(node.hypervisor.workingdir) + os.sep + node.get_platform() + '_' + node.hostname + '_slot*')
dynamips_files += glob.glob(os.path.normpath(node.hypervisor.workingdir) + os.sep + node.get_platform() + '_' + node.hostname + '_rom')
dynamips_files += glob.glob(os.path.normpath(node.hypervisor.workingdir) + os.sep + node.get_platform() + '_' + node.hostname + '_*flash*')
for file in dynamips_files:
try:
shutil.copy(file, snapshot_workdir)
except (OSError, IOError), e:
debug("Warning: cannot copy " + file + " to " + snapshot_workdir + ": " + e.strerror)
continue
if node.router.cnfg:
try:
shutil.copy(node.router.cnfg, snapshot_configs)
except (OSError, IOError), e:
debug("Warning: cannot copy " + node.router.cnfg + " to " + snapshot_configs)
continue
config = os.path.basename(node.router.cnfg)
node.router.cnfg = snapshot_configs + os.sep + config
if snapshot_qemu_flash_drives and isinstance(node, AnyEmuDevice):
qemu_files = glob.glob(os.path.normpath(node.qemu.workingdir) + os.sep + node.hostname)
for file in qemu_files:
try:
shutil.copytree(file, snapshot_qemu_flash_drives + os.sep + node.hostname)
except (OSError, IOError), e:
debug("Warning: cannot copy " + file + " to " + snapshot_qemu_flash_drives + ": " + e.strerror)
continue
try:
for hypervisor in globals.GApp.dynagen.dynamips.values():
if snapshot_qemu_flash_drives and isinstance(hypervisor, qemu_lib.Qemu):
hypervisor.workingdir = snapshot_qemu_flash_drives
elif snapshot_workdir:
hypervisor.workingdir = snapshot_workdir
except lib.DynamipsError, msg:
QtGui.QMessageBox.critical(self, translate("Workspace", "Dynamips error"), translate("Workspace", "Dynamips error: %s") % msg)
save_wd = self.projectWorkdir
if not self.projectWorkdir:
self.projectWorkdir = globals.GApp.systconf['dynamips'].workdir
save_cfg = self.projectConfigs
save_projectFile = self.projectFile
self.projectConfigs = snapshot_configs
self.projectWorkdir = snapshot_workdir
self.projectFile = unicode(snapshot_dir + os.sep + projectName)
self.__action_Save(auto=True, add_too_recent_files=False)
self.projectFile = save_projectFile
self.projectConfigs = save_cfg
self.projectWorkdir = save_wd
try:
qemu_flash_drives_directory = os.path.dirname(self.projectFile) + os.sep + 'qemu-flash-drives'
for hypervisor in globals.GApp.dynagen.dynamips.values():
if isinstance(hypervisor, qemu_lib.Qemu):
hypervisor.workingdir = qemu_flash_drives_directory
elif self.projectWorkdir:
hypervisor.workingdir = self.projectWorkdir
else:
hypervisor.workingdir = globals.GApp.systconf['dynamips'].workdir
if self.projectConfigs:
for node in globals.GApp.topology.nodes.values():
if isinstance(node, IOSRouter) and node.router.cnfg:
config = os.path.basename(node.router.cnfg)
node.router.cnfg = self.projectConfigs + os.sep + config
except lib.DynamipsError, msg:
QtGui.QMessageBox.critical(self, translate("Workspace", "Dynamips error"), translate("Workspace", "Dynamips error!!: %s") % msg)
def restoreSnapshot(self, path):
""" Restore a previously created snapshot
"""
# close snapshot dialog
self.snapDialog.close()
# stop all captures
globals.GApp.mainWindow.capturesDock.stopAllCaptures()
# stop all the devices
for node in globals.GApp.topology.nodes.values():
if isinstance(node, IOSRouter) or isinstance(node, AnyEmuDevice):
node.stopNode()
working_dir = os.path.dirname(path) + os.sep + 'working'
config_dir = os.path.dirname(path) + os.sep + 'configs'
capture_dir = os.path.dirname(path) + os.sep + 'captures'
qemu_flash_drives = os.path.dirname(path) + os.sep + 'qemu-flash-drives'
parent_project_dir = os.path.normpath(os.path.dirname(path) + os.sep + '..' + os.sep + '..' + os.sep)
parent_working_dir = parent_project_dir + os.sep + 'working'
parent_qemu_flash_drives = parent_project_dir + os.sep + 'qemu-flash-drives'
parent_config_dir = parent_project_dir + os.sep + 'configs'
parent_capture_dir = parent_project_dir + os.sep + 'captures'
try:
shutil.copyfile(path, parent_project_dir + os.sep + 'topology.net')
except (OSError, IOError), e:
debug("Warning: cannot copy topology.net to " + parent_project_dir)
try:
shutil.copyfile(os.path.dirname(path) + os.sep + 'topology.png', parent_project_dir + os.sep + 'topology.png')
except (OSError, IOError), e:
debug("Warning: cannot copy topology.png to " + parent_project_dir)
shutil.rmtree(parent_config_dir, ignore_errors=True)
try:
shutil.copytree(config_dir, parent_config_dir)
except (OSError, IOError), e:
debug("Warning: cannot copy config files to " + parent_config_dir)
if os.path.exists(working_dir):
# delete useless working dir files
workdir_files = glob.glob(working_dir + os.sep + "*ghost*")
workdir_files += glob.glob(working_dir + os.sep + "ilt_*")
workdir_files += glob.glob(working_dir + os.sep + "*_lock")
workdir_files += glob.glob(working_dir + os.sep + "c[0-9][0-9][0-9][0-9]_*_log.txt")
workdir_files += glob.glob(working_dir + os.sep + "c[0-9][0-9][0-9][0-9]_*_rommon_vars")
workdir_files += glob.glob(working_dir + os.sep + "c[0-9][0-9][0-9][0-9]_*_ssa")
for file in workdir_files:
try:
debug("DELETING %s" % file)
os.remove(file)
except (OSError, IOError), e:
continue
shutil.rmtree(parent_working_dir, ignore_errors=True)
try:
shutil.copytree(working_dir, parent_working_dir)
except (OSError, IOError), e:
debug("Warning: cannot copy working files to " + parent_working_dir)
if os.path.exists(qemu_flash_drives):
shutil.rmtree(parent_qemu_flash_drives, ignore_errors=True)
try:
shutil.copytree(qemu_flash_drives, parent_qemu_flash_drives)
except (OSError, IOError), e:
debug("Warning: cannot copy Qemu files to " + parent_qemu_flash_drives)
if os.path.exists(capture_dir):
shutil.rmtree(parent_capture_dir, ignore_errors=True)
try:
shutil.copytree(capture_dir, parent_capture_dir)
except (OSError, IOError), e:
debug("Warning: cannot copy capture files to " + parent_capture_dir)
self.load_netfile(parent_project_dir + os.sep + 'topology.net')
self.projectConfigs = parent_project_dir + os.sep + 'configs'
self.projectWorkdir = parent_project_dir + os.sep + 'working'
self.projectFile = parent_project_dir + os.sep + 'topology.net'
#debug("SNAPSHOT RESTORED")
def __action_OpenFile(self):
""" Open a file
"""
if len(globals.GApp.topology.nodes) and globals.GApp.topology.changed == True:
reply = QtGui.QMessageBox.question(self, translate("Workspace", "Message"), translate("Workspace", "Would you like to save the current topology?"),
QtGui.QMessageBox.Yes, QtGui.QMessageBox.No, QtGui.QMessageBox.Cancel)
if reply == QtGui.QMessageBox.Yes:
self.__action_Save()
elif reply == QtGui.QMessageBox.Cancel:
return
self.openFile()
def openFromDroppedFile(self, path):
""" Open a .net file from a dropped action
"""
if not path.endswith(".net"):
QtGui.QMessageBox.critical(self, translate("Workspace", "Message"), translate("Workspace", "The file '%s' does not have the right extension (.net)") % os.path.basename(path))
return
if len(globals.GApp.topology.nodes) and globals.GApp.topology.changed == True:
reply = QtGui.QMessageBox.question(self, translate("Workspace", "Message"), translate("Workspace", "Would you like to save the current topology?"),
QtGui.QMessageBox.Yes, QtGui.QMessageBox.No, QtGui.QMessageBox.Cancel)
if reply == QtGui.QMessageBox.Yes:
self.__action_Save()
elif reply == QtGui.QMessageBox.Cancel:
return
self.loadNetfile(path)
self.__action_Instructions(silent=True)
def __addToRecentFiles(self, path):
""" Add path to recent files menu
"""
# Check if the file is already in the list and remove it, so it can be re-appended as the most recent entry
index = 0
for recent_file_conf in globals.GApp.recentfiles:
if recent_file_conf.path == path:
globals.GApp.recentfiles.pop(index)
break
index += 1
# Limit number of recent file paths to 10
if len(globals.GApp.recentfiles) == 10:
globals.GApp.recentfiles.pop(0)
# Add to the list
if os.path.exists(path):
recent_file_conf = recentFilesConf()
recent_file_conf.path = unicode(path)
globals.GApp.recentfiles.append(recent_file_conf)
# Redraw recent files submenu
self.submenu_RecentFiles.clear()
recent_files = list(globals.GApp.recentfiles)
recent_files.reverse()
for recent_file_conf in recent_files:
action = QtGui.QAction(recent_file_conf.path, self.submenu_RecentFiles)
self.submenu_RecentFiles.addAction(action)
# Need to put back the clear menu action
self.submenu_RecentFiles.addSeparator()
clear_action = QtGui.QAction(translate("Workspace", "Clear Menu"), self.submenu_RecentFiles)
self.submenu_RecentFiles.addAction(clear_action)
def openFile(self):
if globals.GApp.systconf['dynamips'].path == '':
QtGui.QMessageBox.warning(self, translate("Workspace", "Open a file"), translate("Workspace", "The path to Dynamips must be configured"))
self.__action_Preferences()
return
(path, selected) = fileBrowser(translate("Workspace", "Open a file"), filter='NET file (*.net);;PNG file (*.png);;All files (*.*)',
directory=os.path.normpath(globals.GApp.systconf['general'].project_path), parent=self).getFile()
if path:
if selected == 'NET file (*.net)' or selected == '' or path.endswith(".net"):
self.loadNetfile(os.path.normpath(path))
elif selected == 'PNG file (*.png)' or path.endswith(".png"):
project_filename = os.path.splitext(os.path.basename(path))[0] + '.net'
project_path = os.path.dirname(path) + os.sep + project_filename
if not os.path.exists(project_path):
QtGui.QMessageBox.critical(self, translate("Workspace", "Project file"), translate("Workspace", "No such file %s") % project_filename)
return
self.loadNetfile(os.path.normpath(project_path))
def loadNetfile(self, path):
try:
# reset the project state, then load the file
self.projectWorkdir = None
self.projectConfigs = None
self.projectFile = None
self.isTemporaryProject = False
self.load_netfile(path)
self.__addToRecentFiles(path)
globals.GApp.topology.changed = False
#self.__action_Instructions(silent=True)
except IOError, (errno, strerror):
QtGui.QMessageBox.critical(self, 'Open', u'Open: ' + strerror)
except (lib.DynamipsErrorHandled, socket.error):
QtGui.QMessageBox.critical(self, translate("Workspace", "Dynamips error"), translate("Workspace", "Connection lost with Dynamips hypervisor (crashed?)"))
def __action_Autosave(self):
""" Autosave feature
"""
curtime = time.strftime("%H:%M:%S")
print translate("Workspace", "%s: Auto-saving ... Next one in %s seconds") % (curtime, str(globals.GApp.systconf['general'].autosave))
self.__action_Save(auto=True)
def __action_Save(self, auto=False, add_too_recent_files=True):
""" Save to a file (scenario or dynagen .NET format)
"""
if self.projectFile is None or self.isTemporaryProject:
return self.__action_SaveProjectAs()
try:
net = netfile.NETFile()
net.export_net_file(self.projectFile, auto)
if add_too_recent_files:
self.__addToRecentFiles(self.projectFile)
# unbase the qemu disk
if self.unbase == True:
for node in globals.GApp.topology.nodes.values():
if (isinstance(node, QemuDevice) or isinstance(node, JunOS) or isinstance(node, IDS)) and self.unbase and not node.unbased:
node.stopNode()
node.get_dynagen_device().unbase()
node.unbased = True
globals.GApp.topology.changed = False
autosave = globals.GApp.systconf['general'].autosave
if autosave > 0:
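# QTimer.start() takes milliseconds, hence the conversion from the autosave
# interval configured in seconds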
self.timer.start(autosave * 1000)
else:
self.timer.stop()
if len(globals.GApp.topology.nodes.values()) and globals.GApp.systconf['general'].auto_screenshot:
project_filename = os.path.splitext(os.path.basename(self.projectFile))[0]
self.__export(os.path.dirname(self.projectFile) + os.sep + project_filename + '.png', 'PNG')
except IOError, (errno, strerror):
QtGui.QMessageBox.critical(self, 'Open', u'Open: ' + strerror)
def closeEvent(self, event):
""" Ask to close GNS3
"""
running_nodes = False
for node in globals.GApp.topology.nodes.itervalues():
if (isinstance(node, IOSRouter) or isinstance(node, AnyEmuDevice) or isinstance(node, AnyVBoxEmuDevice)) and node.get_dynagen_device().state == 'running':
running_nodes = True
if len(globals.GApp.topology.nodes) and globals.GApp.topology.changed == True:
reply = QtGui.QMessageBox.question(self, translate("Workspace", "Message"), translate("Workspace", "Would you like to save the current topology?"),
QtGui.QMessageBox.Yes, QtGui.QMessageBox.No, QtGui.QMessageBox.Cancel)
if reply == QtGui.QMessageBox.Yes:
self.__action_Save()
elif reply == QtGui.QMessageBox.Cancel:
event.ignore()
return
elif running_nodes:
reply = QtGui.QMessageBox.question(self, translate("Workspace", "Message"), translate("Workspace", "You have running nodes and you may lose your configurations inside them, would you like to continue anyway?"),
QtGui.QMessageBox.Yes, QtGui.QMessageBox.No)
if reply == QtGui.QMessageBox.No:
event.ignore()
return
self.clear()
event.accept()
|
GNS3/gns3-legacy
|
src/GNS3/Workspace.py
|
Python
|
gpl-2.0
| 88,589
|
[
"VisIt"
] |
248604573f918623d20b788d06fcbb8cab5c41b74a66a57fd4e49dd2314208dc
|
#===============================================================================
#
# CPP11Profile.py
#
# This file is part of ANNarchy.
#
# Copyright (C) 2016-2018 Julien Vitay <julien.vitay@gmail.com>,
# Helge Uelo Dinkelbach <helge.dinkelbach@gmail.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ANNarchy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
#===============================================================================
from ANNarchy.core import Global
from ANNarchy.generator.Utils import tabify
from .ProfileGenerator import ProfileGenerator
from .ProfileTemplate import cpp11_profile_template, cpp11_omp_profile_template, cpp11_profile_header
class CPP11Profile(ProfileGenerator):
"""
Extend the generated code with profiling annotations based on the C++11
steady clock counter.
"""
def __init__(self, annarchy_dir, net_id):
ProfileGenerator.__init__(self, annarchy_dir, net_id)
def generate(self):
"""
Generate Profiling class code, called from Generator instance.
"""
# Generate header for profiling
with open(self.annarchy_dir+'/generate/net'+str(self._net_id)+'/Profiling.h', 'w') as ofile:
ofile.write(self._generate_header())
def generate_body_dict(self):
"""
Creates a dictionary containing profiling code snippets.
"""
if Global.config["num_threads"] == 1:
body_dict = {
'prof_include': cpp11_profile_template['include'],
'prof_step_pre': cpp11_profile_template['step_pre'],
'prof_step_post': cpp11_profile_template['step_post'],
'prof_run_pre': cpp11_profile_template['run_pre'],
'prof_run_post': cpp11_profile_template['run_post'],
'prof_proj_psp_pre': cpp11_profile_template['proj_psp_pre'],
'prof_proj_psp_post': cpp11_profile_template['proj_psp_post'],
'prof_proj_step_pre': cpp11_profile_template['proj_step_pre'],
'prof_proj_step_post': cpp11_profile_template['proj_step_post'],
'prof_neur_step_pre': cpp11_profile_template['neur_step_pre'],
'prof_neur_step_post': cpp11_profile_template['neur_step_post'],
'prof_rng_pre': cpp11_profile_template['rng_pre'],
'prof_rng_post': cpp11_profile_template['rng_post'],
'prof_record_pre': cpp11_profile_template['record_pre'],
'prof_record_post': cpp11_profile_template['record_post'],
'prof_global_ops_pre': cpp11_profile_template['global_op_pre'],
'prof_global_ops_post': cpp11_profile_template['global_op_post']
}
else:
body_dict = {
'prof_include': cpp11_omp_profile_template['include'],
'prof_step_pre': cpp11_omp_profile_template['step_pre'],
'prof_step_post': cpp11_omp_profile_template['step_post'],
'prof_run_pre': cpp11_omp_profile_template['run_pre'],
'prof_run_post': cpp11_omp_profile_template['run_post'],
'prof_proj_psp_pre': cpp11_omp_profile_template['proj_psp_pre'],
'prof_proj_psp_post': cpp11_omp_profile_template['proj_psp_post'],
'prof_proj_step_pre': cpp11_omp_profile_template['proj_step_pre'],
'prof_proj_step_post': cpp11_omp_profile_template['proj_step_post'],
'prof_neur_step_pre': cpp11_omp_profile_template['neur_step_pre'],
'prof_neur_step_post': cpp11_omp_profile_template['neur_step_post'],
'prof_rng_pre': cpp11_omp_profile_template['rng_pre'],
'prof_rng_post': cpp11_omp_profile_template['rng_post'],
'prof_record_pre': cpp11_omp_profile_template['record_pre'],
'prof_record_post': cpp11_omp_profile_template['record_post'],
'prof_global_ops_pre': cpp11_omp_profile_template['global_op_pre'],
'prof_global_ops_post': cpp11_omp_profile_template['global_op_post']
}
return body_dict
def generate_init_network(self):
if Global.config["num_threads"] == 1:
return cpp11_profile_template['init']
else:
return cpp11_omp_profile_template['init']
def generate_init_population(self, pop):
"""
Generate initialization code for population
"""
declare = """
// Profiling
Measurement* measure_step; // update ODE/non-ODE
Measurement* measure_rng; // draw random numbers
Measurement* measure_sc; // spike condition
"""
init = """ // Profiling
measure_step = Profiling::get_instance()->register_function("pop", "%(name)s", %(id)s, "step", "%(label)s");
measure_rng = Profiling::get_instance()->register_function("pop", "%(name)s", %(id)s, "rng", "%(label)s");
measure_sc = Profiling::get_instance()->register_function("pop", "%(name)s", %(id)s, "spike", "%(label)s");
""" % {'name': pop.name, 'id': pop.id, 'label': pop.name}
return declare, init
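# For illustration, for a hypothetical population named "pop0" with id 0 the
# rendered C++ init reads (pure substitution of the template above):
#
#   measure_step = Profiling::get_instance()->register_function("pop", "pop0", 0, "step", "pop0");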
def generate_init_projection(self, proj):
"""
Generate initialization code for projection
"""
declare = """
Measurement* measure_psp;
Measurement* measure_step;
"""
if isinstance(proj.target, str):
target = proj.target
else:
target = proj.target[0]
for tar in proj.target[1:]:
target += "_"+tar
init = """ // Profiling
measure_psp = Profiling::get_instance()->register_function("proj", "%(name)s", %(id_proj)s, "psp", "%(label)s");
measure_step = Profiling::get_instance()->register_function("proj", "%(name)s", %(id_proj)s, "step", "%(label)s");
""" % {'id_proj': proj.id, 'name': proj.name, 'label': proj.pre.name+'_'+proj.post.name+'_'+target}
return declare, init
def annotate_computesum_rate(self, proj, code):
"""
annotate the rate-coded compute-sum code
"""
if Global.config["num_threads"] == 1:
prof_begin = cpp11_profile_template['compute_psp']['before']
prof_end = cpp11_profile_template['compute_psp']['after']
else:
prof_begin = cpp11_omp_profile_template['compute_psp']['before']
prof_end = cpp11_omp_profile_template['compute_psp']['after']
prof_code = """
// first run, measuring average time
%(prof_begin)s
%(code)s
%(prof_end)s
""" % {
'code': code,
'prof_begin': tabify(prof_begin, 2),
'prof_end': tabify(prof_end,2)
}
return prof_code
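# Sketch of the resulting wrapper, with invented template values
# 'measure_psp->start_wall_time();' / 'measure_psp->stop_wall_time();'
# standing in for the real 'before'/'after' snippets:
#
#   // first run, measuring average time
#           measure_psp->start_wall_time();
#   <generated compute-sum code>
#           measure_psp->stop_wall_time();
#
# tabify() indents only the profiling markers (two levels here) so they line
# up with the generated C++ body.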
def annotate_computesum_spiking(self, proj, code):
"""
annotate the spiking compute-sum code
"""
if Global.config["num_threads"] == 1:
prof_begin = cpp11_profile_template['compute_psp']['before'] % {'name': 'proj'+str(proj.id)}
prof_end = cpp11_profile_template['compute_psp']['after'] % {'name': 'proj'+str(proj.id)}
else:
prof_begin = cpp11_omp_profile_template['compute_psp']['before'] % {'name': 'proj'+str(proj.id)}
prof_end = cpp11_omp_profile_template['compute_psp']['after'] % {'name': 'proj'+str(proj.id)}
prof_code = """
// first run, measuring average time
%(prof_begin)s
%(code)s
%(prof_end)s
""" % {'code': code,
'prof_begin': tabify(prof_begin, 2),
'prof_end': tabify(prof_end,2)
}
return prof_code
def annotate_update_synapse(self, proj, code):
"""
annotate the update synapse code, generated by ProjectionGenerator.update_synapse()
"""
if Global.config["num_threads"] == 1:
prof_begin = cpp11_profile_template['update_synapse']['before']
prof_end = cpp11_profile_template['update_synapse']['after']
else:
prof_begin = cpp11_omp_profile_template['update_synapse']['before']
prof_end = cpp11_omp_profile_template['update_synapse']['after']
prof_code = """
// first run, measuring average time
%(prof_begin)s
%(code)s
%(prof_end)s
""" % {'code': code,
'prof_begin': tabify(prof_begin, 2),
'prof_end': tabify(prof_end,2)
}
return prof_code
def annotate_update_neuron(self, pop, code):
"""
annotate the update neuron code
"""
if Global.config["num_threads"] == 1:
prof_begin = cpp11_profile_template['update_neuron']['before'] % {'name': pop.name}
prof_end = cpp11_profile_template['update_neuron']['after'] % {'name': pop.name}
else:
prof_begin = cpp11_omp_profile_template['update_neuron']['before'] % {'name': pop.name}
prof_end = cpp11_omp_profile_template['update_neuron']['after'] % {'name': pop.name}
prof_code = """
// first run, measuring average time
%(prof_begin)s
%(code)s
%(prof_end)s
""" % {'code': code,
'prof_begin': tabify(prof_begin, 2),
'prof_end': tabify(prof_end,2)
}
return prof_code
def annotate_spike_cond(self, pop, code):
"""
annotate the spike condition code
"""
if Global.config["num_threads"] == 1:
prof_begin = cpp11_profile_template['spike_gather']['before'] % {'name': pop.name}
prof_end = cpp11_profile_template['spike_gather']['after'] % {'name': pop.name}
else:
prof_begin = cpp11_omp_profile_template['spike_gather']['before'] % {'name': pop.name}
prof_end = cpp11_omp_profile_template['spike_gather']['after'] % {'name': pop.name}
prof_dict = {
'code': code,
'prof_begin': tabify(prof_begin,2),
'prof_end': tabify(prof_end,2)
}
prof_code = """
%(prof_begin)s
%(code)s
%(prof_end)s
""" % prof_dict
return prof_code
def annotate_update_rng(self, pop, code):
"""
annotate the RNG update kernel (only available on CPUs)
"""
if Global.config["num_threads"] == 1:
prof_begin = cpp11_profile_template['update_rng']['before'] % {'name': pop.name}
prof_end = cpp11_profile_template['update_rng']['after'] % {'name': pop.name}
else:
prof_begin = cpp11_omp_profile_template['update_rng']['before'] % {'name': pop.name}
prof_end = cpp11_omp_profile_template['update_rng']['after'] % {'name': pop.name}
prof_dict = {
'code': code,
'prof_begin': tabify(prof_begin,2),
'prof_end': tabify(prof_end,2)
}
prof_code = """
%(prof_begin)s
%(code)s
%(prof_end)s
"""
return prof_code % prof_dict
def _generate_header(self):
"""
generate Profiling.h
"""
config_xml = """
_out_file << " <config>" << std::endl;
_out_file << " <paradigm>%(paradigm)s</paradigm>" << std::endl;
_out_file << " <num_threads>%(num_threads)s</num_threads>" << std::endl;
_out_file << " </config>" << std::endl;
""" % {
'paradigm': Global.config["paradigm"],
'num_threads': Global.config["num_threads"]
}
config = Global.config["paradigm"] + '_' + str(Global.config["num_threads"]) + 'threads'
return cpp11_profile_header % {
'result_file': "results_%(config)s.xml" % {'config': config} if Global.config['profile_out'] is None else Global.config['profile_out'],
'config_xml': config_xml
}
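# Hedged usage sketch (directory and ids invented): the Generator is expected
# to drive this class roughly as follows:
#
#   profiler = CPP11Profile("annarchy", net_id=0)
#   profiler.generate()                        # writes generate/net0/Profiling.h
#   body_dict = profiler.generate_body_dict()  # snippets presumably spliced into the generated core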
|
vitay/ANNarchy
|
ANNarchy/generator/Profile/CPP11Profile.py
|
Python
|
gpl-2.0
| 12,173
|
[
"NEURON"
] |
3d79594263045b171f71194d0c89222e1073ff7c3ab6b8bd59e527ba01c34416
|
import random
def GenerateSlug():
colors = ['red', 'blue', 'white', 'yellow', 'green', 'black', 'orange']
numbers = ['1','2','3','4','5','6','7','8','9']
words = ['monkey', 'cow', 'chicken', 'bull', 'moose','hawk', 'lion', 'rat', 'dog', 'cat']
return random.choice(colors) + random.choice(numbers) + random.choice(words)
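# Example (output is random): GenerateSlug() might return "red3moose".
# With 7 colors x 9 digits x 10 words there are 7 * 9 * 10 = 630 possible
# slugs, so by the birthday bound collisions become likely after roughly
# 30 random draws; callers that need uniqueness should check for clashes.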
|
zwernberg/polls
|
polls/helpers.py
|
Python
|
mit
| 337
|
[
"MOOSE"
] |
7399e109a39b337883e055e68f62371bf8a1b8ad25473022639b471d9a33e101
|
""" This integration test only need the TaskQueueDB
(which should of course be properly defined also in the configuration),
and connects directly to it
Run this test with::
"python -m pytest tests/Integration/WorkloadManagementSystem/Test_TaskQueueDB.py"
Suggestion: for local testing, run this with::
python -m pytest -c ../pytest.ini -vv tests/Integration/WorkloadManagementSystem/Test_TaskQueueDB.py
"""
from DIRAC import gLogger
from DIRAC.Core.Base.Script import parseCommandLine
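# parseCommandLine() initializes the DIRAC configuration system; it has to run
# before TaskQueueDB is instantiated so the DB connection parameters can be
# resolved from the local configuration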
parseCommandLine()
from DIRAC.WorkloadManagementSystem.DB.TaskQueueDB import TaskQueueDB
gLogger.setLevel("DEBUG")
tqDB = TaskQueueDB()
def test_basicChain():
"""a basic put - remove"""
tqDefDict = {"OwnerDN": "/my/DN", "OwnerGroup": "myGroup", "Setup": "aSetup", "CPUTime": 50000}
result = tqDB.insertJob(123, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([123])
assert result["OK"] is True
assert 123 in result["Value"]
tq = result["Value"][123]
result = tqDB.deleteJob(123)
assert result["OK"] is True
result = tqDB.cleanOrphanedTaskQueues()
assert result["OK"] is True
result = tqDB.deleteTaskQueueIfEmpty(tq)
assert result["OK"] is True
def test_chainWithParameter():
"""put - remove with parameters"""
tqDefDict = {"OwnerDN": "/my/DN", "OwnerGroup": "myGroup", "Setup": "aSetup", "CPUTime": 50000}
# first job
result = tqDB.insertJob(123, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([123])
assert result["OK"] is True
tq = result["Value"][123]
result = tqDB.deleteTaskQueue(tq)
assert result["OK"] is False # This will fail because of the foreign key
result = tqDB.cleanOrphanedTaskQueues()
assert result["OK"] is True
result = tqDB.deleteTaskQueueIfEmpty(tq) # this won't delete anything
assert result["OK"] is True
# second job
result = tqDB.insertJob(125, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([125])
tq = result["Value"][125]
result = tqDB.deleteTaskQueue(tq)
assert result["OK"] is False # This will fail because of the foreign key
result = tqDB.deleteTaskQueueIfEmpty(tq) # this won't delete anything, as both 123 and 125 are in
assert result["OK"] is True # but still it won't fail
assert result["Value"] is False
result = tqDB.retrieveTaskQueues()
assert result["OK"] is True
assert list(result["Value"].values())[0] == {
"OwnerDN": "/my/DN",
"Jobs": 2,
"OwnerGroup": "myGroup",
"Setup": "aSetup",
"CPUTime": 86400,
"Priority": 1.0,
}
# now we will try to delete
result = tqDB.deleteJob(123)
assert result["OK"] is True
result = tqDB.deleteJob(125)
assert result["OK"] is True
result = tqDB.deleteTaskQueueIfEmpty(tq) # this should now delete tq
assert result["OK"] is True
result = tqDB.retrieveTaskQueues()
assert result["OK"] is True
assert result["Value"] == {}
def test_chainWithSites():
"""put - remove with parameters including sites"""
tqDefDict = {
"OwnerDN": "/my/DN",
"OwnerGroup": "myGroup",
"Setup": "aSetup",
"CPUTime": 5000,
"Sites": ["LCG.CERN.ch"],
}
result = tqDB.insertJob(201, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([201])
tq_job1 = result["Value"][201]
result = tqDB.insertJob(2011, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([2011])
tq_job11 = result["Value"][2011]
tqDefDict = {
"OwnerDN": "/my/DN",
"OwnerGroup": "myGroup",
"Setup": "aSetup",
"CPUTime": 5000,
"Sites": ["CLOUD.IN2P3.fr"],
}
# second job (202) targets IN2P3 only; the cleanup loop below deletes job 202
result = tqDB.insertJob(202, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([202])
tq_job2 = result["Value"][202]
tqDefDict = {
"OwnerDN": "/my/DN",
"OwnerGroup": "myGroup",
"Setup": "aSetup",
"CPUTime": 5000,
"Sites": ["LCG.CERN.ch", "CLOUD.IN2P3.fr"],
}
result = tqDB.insertJob(203, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([203])
tq_job3 = result["Value"][203]
# matching
# this should match everything
result = tqDB.matchAndGetTaskQueue({"Setup": "aSetup", "CPUTime": 50000}, numQueuesToGet=5)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job1, tq_job2, tq_job3, tq_job11}
# this should match those for CERN
result = tqDB.matchAndGetTaskQueue({"Setup": "aSetup", "CPUTime": 50000, "Site": "LCG.CERN.ch"}, numQueuesToGet=4)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job1, tq_job3, tq_job11}
# this should match those for IN2P3
result = tqDB.matchAndGetTaskQueue(
{"Setup": "aSetup", "CPUTime": 50000, "Site": "CLOUD.IN2P3.fr"}, numQueuesToGet=4
)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job2, tq_job3}
# now we will try to delete
for jobID in [201, 2011, 202, 203]:
result = tqDB.deleteJob(jobID)
assert result["OK"] is True
for tqID in [tq_job1, tq_job2, tq_job3]:
result = tqDB.deleteTaskQueueIfEmpty(tqID)
assert result["OK"] is True
def test_chainWithBannedSites():
"""put - remove with parameters including Banned sites"""
tqDefDict = {
"OwnerDN": "/my/DN",
"OwnerGroup": "myGroup",
"Setup": "aSetup",
"CPUTime": 5000,
"BannedSites": ["LCG.CERN.ch", "CLOUD.IN2P3.fr"],
}
result = tqDB.insertJob(127, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([127])
tq_job1 = result["Value"][127]
tqDefDict = {
"OwnerDN": "/my/DN",
"OwnerGroup": "myGroup",
"Setup": "aSetup",
"CPUTime": 5000,
"BannedSites": ["CLOUD.IN2P3.fr", "DIRAC.Test.org"],
}
result = tqDB.insertJob(128, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([128])
tq_job2 = result["Value"][128]
# matching
# this should match everything
result = tqDB.matchAndGetTaskQueue({"Setup": "aSetup", "CPUTime": 50000}, numQueuesToGet=4)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job1, tq_job2}
# this should match also everything
result = tqDB.matchAndGetTaskQueue({"Setup": "aSetup", "CPUTime": 50000, "Platform": "centos7"}, numQueuesToGet=4)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job1, tq_job2}
# this should match the first
result = tqDB.matchAndGetTaskQueue(
{"Setup": "aSetup", "CPUTime": 50000, "Site": "DIRAC.Test.org"}, numQueuesToGet=4
)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job1}
# this should match the second
result = tqDB.matchAndGetTaskQueue({"Setup": "aSetup", "CPUTime": 50000, "Site": "LCG.CERN.ch"}, numQueuesToGet=4)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job2}
# this should not match anything because of the banned site CLOUD.IN2P3.fr
result = tqDB.matchAndGetTaskQueue(
{"Setup": "aSetup", "CPUTime": 50000, "Site": "CLOUD.IN2P3.fr"}, numQueuesToGet=4
)
assert result["OK"] is True
assert result["Value"] == []
result = tqDB.deleteTaskQueueIfEmpty(tq_job1) # this won't delete anything, as 127 is in
assert result["OK"] is True # but still it won't fail
assert result["Value"] is False
result = tqDB.deleteJob(127)
assert result["OK"] is True
result = tqDB.deleteTaskQueueIfEmpty(tq_job1) # this should now delete tq
assert result["OK"] is True
result = tqDB.deleteJob(128)
assert result["OK"] is True
for tqId in [tq_job1, tq_job2]:
result = tqDB.deleteTaskQueueIfEmpty(tqId)
assert result["OK"] is True
result = tqDB.retrieveTaskQueues()
assert result["OK"] is True
assert result["Value"] == {}
def test_chainWithPlatforms():
"""put - remove with parameters including a platform"""
# We'll try the following case
#
# possible platforms: slc5, slc6, centos7, debian, ubuntu
# where:
# - centos7 > slc6 > slc5
# - ubuntu > debian
# and of course what runs on rhel family does not run on debian family
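# A hypothetical way to express that compatibility relation in Python
# (illustration only, invented names, not how TaskQueueDB stores it):
#
#   PLATFORM_COMPATIBILITY = {
#       "centos7": ["centos7", "slc6", "slc5"],  # a centos7 host can also run slc6/slc5 payloads
#       "slc6": ["slc6", "slc5"],
#       "slc5": ["slc5"],
#       "ubuntu": ["ubuntu", "debian"],
#       "debian": ["debian"],
#   }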
tqDefDict = {
"OwnerDN": "/my/DN",
"OwnerGroup": "myGroup",
"Setup": "aSetup",
"CPUTime": 5000,
"Platforms": ["centos7"],
}
result = tqDB.insertJob(1, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([1])
tq_job1 = result["Value"][1]
assert tq_job1 > 0
result = tqDB.insertJob(2, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([2])
tq_job2 = result["Value"][2]
assert tq_job1 == tq_job2
tqDefDict = {
"OwnerDN": "/my/DN",
"OwnerGroup": "myGroup",
"Setup": "aSetup",
"CPUTime": 5000,
"Platforms": ["ubuntu"],
}
result = tqDB.insertJob(3, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([3])
tq_job3 = result["Value"][3]
assert tq_job3 == tq_job1 + 1
tqDefDict = {
"OwnerDN": "/my/DN",
"OwnerGroup": "myGroup",
"Setup": "aSetup",
"CPUTime": 5000,
"Platforms": ["centos7", "slc6"],
}
result = tqDB.insertJob(4, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([4])
tq_job4 = result["Value"][4]
assert tq_job4 == tq_job3 + 1
tqDefDict = {
"OwnerDN": "/my/DN",
"OwnerGroup": "myGroup",
"Setup": "aSetup",
"CPUTime": 5000,
"Platforms": ["debian", "ubuntu"],
}
result = tqDB.insertJob(5, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([5])
tq_job5 = result["Value"][5]
assert tq_job5 == tq_job4 + 1
# We should be in this situation (TQIds are obviously invented):
#
# select TQId, JobId FROM `tq_Jobs`
# +--------+---------+
# | TQId | JobId |
# +--------+---------|
# | 101 | 1 |
# | 101 | 2 |
# | 102 | 3 |
# | 103 | 4 |
# | 104 | 5 |
# +--------+---------+
#
# select * FROM `tq_TQToPlatforms`
# +--------+---------+
# | TQId | Value |
# |--------+---------|
# | 101 | centos7 |
# | 102 | ubuntu |
# | 103 | centos7 |
# | 103 | slc6 |
# | 104 | debian |
# | 104 | ubuntu |
# +--------+---------+
# strict matching
# centos7
result = tqDB.matchAndGetTaskQueue({"Setup": "aSetup", "CPUTime": 50000, "Platform": "centos7"}, numQueuesToGet=4)
assert result["OK"] is True
# this should match one in [tq_job1, tq_job2, tq_job4]
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job1, tq_job2, tq_job4}
# ubuntu
result = tqDB.matchAndGetTaskQueue({"Setup": "aSetup", "CPUTime": 50000, "Platform": "ubuntu"}, numQueuesToGet=4)
assert result["OK"] is True
# this should match one in [tq_job3, tq_job5]
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job3, tq_job5}
# slc6
result = tqDB.matchAndGetTaskQueue({"Setup": "aSetup", "CPUTime": 50000, "Platform": "slc6"}, numQueuesToGet=4)
assert result["OK"] is True
# this should match only tq_job4, as this is the only one that can run on slc6
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job4}
# slc5
result = tqDB.matchAndGetTaskQueue({"Setup": "aSetup", "CPUTime": 50000, "Platform": "slc5"}, numQueuesToGet=4)
assert result["OK"] is True
# this should not match anything
assert result["Value"] == []
# compatibility matching
# ANY platform
result = tqDB.matchAndGetTaskQueue({"Setup": "aSetup", "CPUTime": 50000, "Platform": "ANY"}, numQueuesToGet=5)
assert result["OK"] is True
# this should match whatever
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job1, tq_job2, tq_job3, tq_job4, tq_job5}
# Now we insert a TQ without platform
tqDefDict = {"OwnerDN": "/my/DN", "OwnerGroup": "myGroup", "Setup": "aSetup", "CPUTime": 5000}
result = tqDB.insertJob(6, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([6])
tq_job6 = result["Value"][6]
assert tq_job6 == tq_job5 + 1
# matching for this one
# ANY platform
result = tqDB.matchAndGetTaskQueue({"Setup": "aSetup", "CPUTime": 50000, "Platform": "ANY"}, numQueuesToGet=6)
assert result["OK"] is True
# this should match whatever
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job1, tq_job2, tq_job3, tq_job4, tq_job5, tq_job6}
# ANY platform within a list
result = tqDB.matchAndGetTaskQueue({"Setup": "aSetup", "CPUTime": 50000, "Platform": ["ANY"]}, numQueuesToGet=6)
assert result["OK"] is True
# this should match whatever
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job1, tq_job2, tq_job3, tq_job4, tq_job5, tq_job6}
# no platform at all
result = tqDB.matchAndGetTaskQueue({"Setup": "aSetup", "CPUTime": 50000}, numQueuesToGet=6)
assert result["OK"] is True
# this should match whatever
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job1, tq_job2, tq_job3, tq_job4, tq_job5, tq_job6}
# slc5 -- this time it should match 1 (the one without specified platform)
result = tqDB.matchAndGetTaskQueue({"Setup": "aSetup", "CPUTime": 50000, "Platform": "slc5"}, numQueuesToGet=6)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job6}
# slc6
result = tqDB.matchAndGetTaskQueue({"Setup": "aSetup", "CPUTime": 50000, "Platform": "slc6"}, numQueuesToGet=6)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job4, tq_job6}
# slc5, slc6
result = tqDB.matchAndGetTaskQueue(
{"Setup": "aSetup", "CPUTime": 50000, "Platform": ["slc5", "slc6"]}, numQueuesToGet=6
)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job4, tq_job6}
# slc5, slc6, ubuntu
result = tqDB.matchAndGetTaskQueue(
{"Setup": "aSetup", "CPUTime": 50000, "Platform": ["slc5", "slc6", "ubuntu"]}, numQueuesToGet=6
)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job3, tq_job4, tq_job5, tq_job6}
# Now we insert a TQ with platform "ANY" (same as no platform)
tqDefDict = {"OwnerDN": "/my/DN", "OwnerGroup": "myGroup", "Setup": "aSetup", "CPUTime": 5000, "Platform": "ANY"}
result = tqDB.insertJob(7, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([7])
tq_job7 = result["Value"][7]
assert tq_job7 == tq_job6 # would be inserted in the same TQ
# matching for this one
# ANY platform
result = tqDB.matchAndGetTaskQueue({"Setup": "aSetup", "CPUTime": 50000, "Platform": "ANY"}, numQueuesToGet=7)
assert result["OK"] is True
# this should match whatever
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job1, tq_job2, tq_job3, tq_job4, tq_job5, tq_job6, tq_job7}
# NO platform
result = tqDB.matchAndGetTaskQueue({"Setup": "aSetup", "CPUTime": 50000}, numQueuesToGet=7)
assert result["OK"] is True
# this should match whatever
assert int(result["Value"][0][0]) in [tq_job1, tq_job2, tq_job3, tq_job4, tq_job5, tq_job6, tq_job7]
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job1, tq_job2, tq_job3, tq_job4, tq_job5, tq_job6, tq_job7}
# slc5 -- this time it should match 2
result = tqDB.matchAndGetTaskQueue({"Setup": "aSetup", "CPUTime": 50000, "Platform": "slc5"}, numQueuesToGet=7)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job6, tq_job7}
# slc6
result = tqDB.matchAndGetTaskQueue({"Setup": "aSetup", "CPUTime": 50000, "Platform": "slc6"}, numQueuesToGet=7)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job4, tq_job6, tq_job7}
# new platform appears
# centos8 (> centos7)
result = tqDB.matchAndGetTaskQueue({"Setup": "aSetup", "CPUTime": 50000, "Platform": "centos8"}, numQueuesToGet=7)
assert result["OK"] is True
# ideally this would match 3 TQs: those for slc6, centos7 and ANY,
# but by construction only 1 is returned (the one for ANY),
# so this could in theory be improved
assert len(result["Value"]) == 1
for jobId in range(1, 8):
result = tqDB.deleteJob(jobId)
assert result["OK"] is True
for tqId in [tq_job1, tq_job2, tq_job3, tq_job4, tq_job5, tq_job6, tq_job7]:
result = tqDB.deleteTaskQueueIfEmpty(tqId)
assert result["OK"] is True
def test_chainWithTags():
"""put - remove with parameters including one or more Tag(s) and/or RequiredTag(s)"""
# We'll try the following case
#
# Tags: MultiProcessor, SingleProcessor, GPU
#
# We'll insert 5 jobs:
# 1 : MultiProcessor
# 2 : SingleProcessor
# 3 : SingleProcessor, MultiProcessor
# 4 : MultiProcessor, GPU
# 5 : -- no tags
# 6 : MultiProcessor, 17Processors
tqDefDict = {
"OwnerDN": "/my/DN",
"OwnerGroup": "myGroup",
"Setup": "aSetup",
"CPUTime": 5000,
"Tags": ["MultiProcessor"],
}
result = tqDB.insertJob(1, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([1])
tq_job1 = result["Value"][1]
assert tq_job1 > 0
tqDefDict = {
"OwnerDN": "/my/DN",
"OwnerGroup": "myGroup",
"Setup": "aSetup",
"CPUTime": 5000,
"Tags": ["SingleProcessor"],
}
result = tqDB.insertJob(2, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([2])
tq_job2 = result["Value"][2]
assert tq_job2 > tq_job1
tqDefDict = {
"OwnerDN": "/my/DN",
"OwnerGroup": "myGroup",
"Setup": "aSetup",
"CPUTime": 5000,
"Tags": ["SingleProcessor", "MultiProcessor"],
}
result = tqDB.insertJob(3, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([3])
tq_job3 = result["Value"][3]
assert tq_job3 > tq_job2
tqDefDict = {
"OwnerDN": "/my/DN",
"OwnerGroup": "myGroup",
"Setup": "aSetup",
"CPUTime": 5000,
"Tags": ["MultiProcessor", "GPU"],
}
result = tqDB.insertJob(4, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([4])
tq_job4 = result["Value"][4]
assert tq_job4 > tq_job3
tqDefDict = {"OwnerDN": "/my/DN", "OwnerGroup": "myGroup", "Setup": "aSetup", "CPUTime": 5000}
result = tqDB.insertJob(5, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([5])
tq_job5 = result["Value"][5]
assert tq_job5 > tq_job4
tqDefDict = {
"OwnerDN": "/my/DN",
"OwnerGroup": "myGroup",
"Setup": "aSetup",
"CPUTime": 5000,
"Tags": ["MultiProcessor", "17Processors"],
}
result = tqDB.insertJob(6, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([6])
tq_job6 = result["Value"][6]
assert tq_job6 > tq_job5
# We should be in this situation (TQIds are obviously invented):
#
# mysql Dirac@localhost:TaskQueueDB> select `TQId`,`JobId` FROM `tq_Jobs`
# +--------+---------+
# | TQId | JobId |
# |--------+---------|
# | 101 | 1 |
# | 102 | 2 |
# | 103 | 3 |
# | 104 | 4 |
# | 105 | 5 |
# | 106 | 6 |
# +--------+---------+
#
# mysql Dirac@localhost:TaskQueueDB> select * FROM `tq_TQToTags`
# +--------+-----------------+
# | TQId | Value |
# |--------+-----------------|
# | 101 | MultiProcessor |
# | 102 | SingleProcessor |
# | 103 | MultiProcessor |
# | 103 | SingleProcessor |
# | 104 | GPU |
# | 104 | MultiProcessor |
# | 106 | MultiProcessor |
# | 106 | 17Processors |
# +--------+-----------------+
# Matching
# Matching Everything with Tag = "ANY"
result = tqDB.matchAndGetTaskQueue({"Setup": "aSetup", "CPUTime": 50000, "Tag": "ANY"}, numQueuesToGet=6)
assert result["OK"] is True
# this should match everything
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job1, tq_job2, tq_job3, tq_job4, tq_job5, tq_job6}
# Matching Everything with Tag = "aNy"
result = tqDB.matchAndGetTaskQueue({"Setup": "aSetup", "CPUTime": 50000, "Tag": "aNy"}, numQueuesToGet=6)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job1, tq_job2, tq_job3, tq_job4, tq_job5, tq_job6}
# Matching Everything with Tag contains "aNy"
result = tqDB.matchAndGetTaskQueue(
{"Setup": "aSetup", "CPUTime": 50000, "Tag": ["MultiProcessor", "aNy"]}, numQueuesToGet=6
)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job1, tq_job2, tq_job3, tq_job4, tq_job5, tq_job6}
# Matching only tq_job5 when no tag is specified
result = tqDB.matchAndGetTaskQueue({"Setup": "aSetup", "CPUTime": 50000}, numQueuesToGet=5)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job5}
# Matching only tq_job5 when Tag = ""
result = tqDB.matchAndGetTaskQueue({"Setup": "aSetup", "CPUTime": 50000, "Tag": ""}, numQueuesToGet=5)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job5}
# Matching only tq_job5 when Tag = []
result = tqDB.matchAndGetTaskQueue({"Setup": "aSetup", "CPUTime": 50000, "Tag": []}, numQueuesToGet=5)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job5}
# Matching MultiProcessor
# Tag: 'MultiProcessor'
# By doing this, we are basically saying that this CE is accepting ALSO MultiProcessor payloads
result = tqDB.matchAndGetTaskQueue({"Setup": "aSetup", "CPUTime": 50000, "Tag": "MultiProcessor"}, numQueuesToGet=4)
assert result["OK"] is True
# this matches the tq_job1, as it is the only one that requires ONLY MultiProcessor,
# AND the tq_job5, for which we have inserted no tags
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job1, tq_job5}
# Tags: ['MultiProcessor', 'GPU']
# By doing this, we are basically saying that this CE is accepting ALSO payloads that require MultiProcessor or GPU
result = tqDB.matchAndGetTaskQueue(
{"Setup": "aSetup", "CPUTime": 50000, "Tag": ["MultiProcessor", "GPU"]}, numQueuesToGet=4
)
assert result["OK"] is True
# this matches the tq_job1, as it requires ONLY MultiProcessor
# the tq_job4, as it is the only one that requires BOTH MultiProcessor and GPU,
# AND the tq_job5, for which we have inserted no tags
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job1, tq_job4, tq_job5}
# RequiredTag: 'MultiProcessor' (but no Tag)
# By doing this, we would be saying that this CE is accepting ONLY MultiProcessor payloads,
# BUT since there are no Tags, we can't know what's POSSIBLE to run, so nothing should be matched
result = tqDB.matchAndGetTaskQueue(
{"Setup": "aSetup", "CPUTime": 50000, "RequiredTag": "MultiProcessor"}, numQueuesToGet=4
)
assert result["OK"] is False
# Tag: 'MultiProcessor' + RequiredTag: 'MultiProcessor'
# By doing this, we are basically saying that this CE is accepting ONLY MultiProcessor payloads
# which have ONLY the 'MultiProcessor' tag
result = tqDB.matchAndGetTaskQueue(
{"Setup": "aSetup", "CPUTime": 50000, "Tag": "MultiProcessor", "RequiredTag": "MultiProcessor"},
numQueuesToGet=4,
)
assert result["OK"] is True
# this matches the tq_job1 as it is the only one that exposes the MultiProcessor tag ONLY
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job1}
# Tag: ['MultiProcessor', 'GPU'] + RequiredTag: 'MultiProcessor'
# By doing this, we are basically saying that this CE is accepting MultiProcessor and GPU payloads
# but requires to have the MultiProcessor tag
result = tqDB.matchAndGetTaskQueue(
{"Setup": "aSetup", "CPUTime": 50000, "Tag": ["MultiProcessor", "GPU"], "RequiredTag": "MultiProcessor"},
numQueuesToGet=4,
)
assert result["OK"] is True
# this matches the tq_job1 as it is the only one that exposes the MultiProcessor tag ONLY
# and tq_job4 because it has GPU and MultiProcessor tags
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job1, tq_job4}
# CINECA type
# We only want to have MultiProcessor payloads
result = tqDB.matchAndGetTaskQueue(
{
"Setup": "aSetup",
"CPUTime": 50000,
"Tag": ["MultiProcessor", "17Processors", "20Processors", "4Processors"],
"RequiredTag": "MultiProcessor",
},
numQueuesToGet=4,
)
assert result["OK"] is True
# this matches the tq_job1 as it is the only one that exposes the MultiProcessor tag ONLY
# and tq_job6 because it has 17Processors and MultiProcessor tags
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job1, tq_job6}
# NumberOfProcessors and MaxRAM
# This is translated to "#Processors" by the SiteDirector
result = tqDB.matchAndGetTaskQueue({"Setup": "aSetup", "CPUTime": 50000, "Tag": "4Processors"}, numQueuesToGet=4)
assert result["OK"] is True
# FIXME: this is not interpreted in any special way; is that correct?
# I believe it should instead be interpreted in a way similar to CPUTime
# FIXME: the MaxRAM parameter has a similar fate: it becomes "#GB",
# and then there is no specific matching on it.
for jobId in range(1, 8):
result = tqDB.deleteJob(jobId)
assert result["OK"] is True
for tqId in [tq_job1, tq_job2, tq_job3, tq_job4, tq_job5, tq_job6]:
result = tqDB.deleteTaskQueueIfEmpty(tqId)
assert result["OK"] is True
def test_chainWithTagsAndPlatforms():
"""put - remove with parameters including one or more Tag(s) and platforms"""
# platform only
tqDefDict = {
"OwnerDN": "/my/DN",
"OwnerGroup": "myGroup",
"Setup": "aSetup",
"CPUTime": 5000,
"Platforms": ["centos7"],
}
result = tqDB.insertJob(1, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([1])
tq_job1 = result["Value"][1]
assert tq_job1 > 0
# Tag only
tqDefDict = {
"OwnerDN": "/my/DN",
"OwnerGroup": "myGroup",
"Setup": "aSetup",
"CPUTime": 5000,
"Tags": ["MultiProcessor"],
}
result = tqDB.insertJob(2, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([2])
tq_job2 = result["Value"][2]
assert tq_job2 > tq_job1
# Platforms and Tag
tqDefDict = {
"OwnerDN": "/my/DN",
"OwnerGroup": "myGroup",
"Setup": "aSetup",
"CPUTime": 5000,
"Platforms": ["centos7"],
"Tags": ["MultiProcessor"],
}
result = tqDB.insertJob(3, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([3])
tq_job3 = result["Value"][3]
assert tq_job3 > tq_job2
# Tag and another platform
tqDefDict = {
"OwnerDN": "/my/DN",
"OwnerGroup": "myGroup",
"Setup": "aSetup",
"CPUTime": 5000,
"Platforms": ["slc6"],
"Tags": ["MultiProcessor"],
}
result = tqDB.insertJob(4, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([4])
tq_job4 = result["Value"][4]
assert tq_job4 > tq_job3
# We should be in this situation (TQIds are obviously invented):
#
# mysql Dirac@localhost:TaskQueueDB> select `TQId`,`JobId` FROM `tq_Jobs`
# +--------+---------+
# | TQId | JobId |
# |--------+---------|
# | 101 | 1 |
# | 102 | 2 |
# | 103 | 3 |
# | 104 | 4 |
# +--------+---------+
#
#
# select * FROM `tq_TQToPlatforms`
# +--------+---------+
# | TQId | Value |
# |--------+---------|
# | 101 | centos7 |
# | 103 | centos7 |
# | 104 | slc6 |
# +--------+---------+
#
# mysql Dirac@localhost:TaskQueueDB> select * FROM `tq_TQToTags`
# +--------+-----------------+
# | TQId | Value |
# |--------+-----------------|
# | 102 | MultiProcessor |
# | 103 | MultiProcessor |
# | 104 | MultiProcessor |
# +--------+-----------------+
# Matching
# Matching Everything
# No Tag, Platform = "ANY"
result = tqDB.matchAndGetTaskQueue({"Setup": "aSetup", "CPUTime": 50000, "Platform": "ANY"}, numQueuesToGet=4)
assert result["OK"] is True
# this should match everything that does not require tags, so only tq_job1
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job1}
# Tag = "ANY", Platform = "ANY"
result = tqDB.matchAndGetTaskQueue(
{"Setup": "aSetup", "CPUTime": 50000, "Platform": "ANY", "Tag": "ANY"}, numQueuesToGet=4
)
assert result["OK"] is True
# this should match everything
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job1, tq_job2, tq_job3, tq_job4}
# Tag = "ANY", Platform = "centos7"
result = tqDB.matchAndGetTaskQueue(
{"Setup": "aSetup", "CPUTime": 50000, "Platform": "centos7", "Tag": "MultiProcessor"}, numQueuesToGet=4
)
assert result["OK"] is True
# this should match everything with platform == centos7, or no platform,
# and either no tags or the MultiProcessor tag
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job1, tq_job2, tq_job3}
for jobId in range(1, 8):
result = tqDB.deleteJob(jobId)
assert result["OK"] is True
for tqId in [tq_job1, tq_job2, tq_job3, tq_job4]:
result = tqDB.deleteTaskQueueIfEmpty(tqId)
assert result["OK"] is True
def test_ComplexMatching():
"""test of a complex (realistic) matching. Something like:
{'NumberOfProcessors': 1,
'MaxRAM': 128000,
'Setup': 'aSetup',
'Site': ['Site_1', 'Site_2'],
'Community': 'vo',
'OwnerGroup': ['admin', 'prod', 'user'],
'Platform': ['slc6', 'centos7'],
'Tag': [],
'CPUTime': 9999999}
"""
# Let's first insert a few jobs (no tags for now, and always a platform)
tqDefDict = {
"OwnerDN": "/my/DN",
"OwnerGroup": "admin",
"Setup": "aSetup",
"CPUTime": 5000,
"Sites": ["Site_1", "Site_2"],
"Platforms": ["centos7"],
}
result = tqDB.insertJob(1, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([1])
tq_job1 = result["Value"][1]
tqDefDict = {
"OwnerDN": "/my/DN",
"OwnerGroup": "prod",
"Setup": "aSetup",
"CPUTime": 5000,
"Sites": ["Site_1"],
"Platforms": ["slc6", "centos7"],
}
result = tqDB.insertJob(2, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([2])
tq_job2 = result["Value"][2]
tqDefDict = {
"OwnerDN": "/my/DN",
"OwnerGroup": "user",
"Setup": "aSetup",
"CPUTime": 5000,
"Sites": ["Site_2"],
"Platforms": ["slc6", "centos7"],
}
result = tqDB.insertJob(3, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([3])
tq_job3 = result["Value"][3]
tqDefDict = {
"OwnerDN": "/my/DN",
"OwnerGroup": "user",
"Setup": "aSetup",
"CPUTime": 5000,
"Sites": ["Site_1", "Site_2"],
"Platforms": ["ubuntu"],
}
result = tqDB.insertJob(4, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([4])
tq_job4 = result["Value"][4]
# now let's try some matching
result = tqDB.matchAndGetTaskQueue(
{
"Setup": "aSetup",
"CPUTime": 9999999,
"Platform": ["slc6", "centos7"],
"OwnerGroup": ["admin", "prod", "user"],
"Site": "ANY",
},
numQueuesToGet=4,
)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job1, tq_job2, tq_job3}
result = tqDB.matchAndGetTaskQueue(
{
"Setup": "aSetup",
"CPUTime": 9999999,
"Platform": ["ubuntu"],
"Tag": [],
"OwnerGroup": ["admin", "prod", "user"],
"Site": "ANY",
},
numQueuesToGet=4,
)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job4}
result = tqDB.matchAndGetTaskQueue(
{
"Setup": "aSetup",
"CPUTime": 9999999,
"Platform": ["slc6", "centos7", "ubuntu"],
"Tag": [],
"OwnerGroup": ["prod", "user"],
"Site": "ANY",
},
numQueuesToGet=4,
)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job2, tq_job3, tq_job4}
result = tqDB.matchAndGetTaskQueue(
{
"Setup": "aSetup",
"CPUTime": 9999999,
"Platform": ["slc6", "centos7"],
"Tag": [],
"OwnerGroup": ["prod", "user"],
"Site": "ANY",
},
numQueuesToGet=4,
)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job2, tq_job3}
result = tqDB.matchAndGetTaskQueue(
{"Setup": "aSetup", "CPUTime": 9999999, "Platform": ["slc6", "centos7"], "OwnerGroup": ["prod", "user"]},
numQueuesToGet=4,
)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job2, tq_job3}
result = tqDB.matchAndGetTaskQueue(
{
"Setup": "aSetup",
"CPUTime": 9999999,
"Platform": ["slc6", "centos7"],
"OwnerGroup": ["prod", "user"],
"Site": ["Site_1", "Site_2"],
},
numQueuesToGet=4,
)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job2, tq_job3}
result = tqDB.matchAndGetTaskQueue(
{
"Setup": "aSetup",
"CPUTime": 9999999,
"Platform": ["slc6", "centos7"],
"OwnerGroup": ["prod", "user"],
"Site": ["Site_1"],
},
numQueuesToGet=4,
)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job2}
result = tqDB.matchAndGetTaskQueue(
{
"Setup": "aSetup",
"CPUTime": 10,
"Platform": ["slc6", "centos7"],
"OwnerGroup": ["prod", "user"],
"Site": ["Site_1", "Site_2"],
},
numQueuesToGet=4,
)
assert result["OK"] is True
assert len(result["Value"]) == 0
result = tqDB.matchAndGetTaskQueue(
{
"Setup": "aSetup",
"CPUTime": 9999999,
"Platform": "ANY",
"OwnerGroup": ["admin", "prod", "user"],
"Site": ["ANY"],
},
numQueuesToGet=4,
)
assert result["OK"] is True
assert len(result["Value"]) == 4
# now inserting one without platform, and try again
tqDefDict = {
"OwnerDN": "/my/DN",
"OwnerGroup": "user",
"Setup": "aSetup",
"CPUTime": 5000,
"Sites": ["Site_1", "Site_2"],
}
result = tqDB.insertJob(5, tqDefDict, 10)
assert result["OK"] is True
result = tqDB.getTaskQueueForJobs([5])
tq_job5 = result["Value"][5]
result = tqDB.matchAndGetTaskQueue(
{
"Setup": "aSetup",
"CPUTime": 9999999,
"Platform": ["slc6", "centos7"],
"OwnerGroup": ["admin", "prod", "user"],
"Site": "ANY",
},
numQueuesToGet=5,
)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job1, tq_job2, tq_job3, tq_job5}
result = tqDB.matchAndGetTaskQueue(
{
"Setup": "aSetup",
"CPUTime": 9999999,
"Platform": ["ubuntu"],
"OwnerGroup": ["admin", "prod", "user"],
"Site": "Any",
},
numQueuesToGet=5,
)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job4, tq_job5}
result = tqDB.matchAndGetTaskQueue(
{
"Setup": "aSetup",
"CPUTime": 9999999,
"Platform": ["ubuntu"],
"OwnerGroup": ["admin", "prod", "user"],
"Site": "Any",
"Tag": [],
},
numQueuesToGet=5,
)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job4, tq_job5}
result = tqDB.matchAndGetTaskQueue(
{
"Setup": "aSetup",
"CPUTime": 9999999,
"Platform": ["ubuntu"],
"OwnerGroup": ["admin", "prod", "user"],
"Site": ["Any", "Site_1"],
"Tag": [],
},
numQueuesToGet=5,
)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job4, tq_job5}
result = tqDB.matchAndGetTaskQueue(
{
"Setup": "aSetup",
"CPUTime": 9999999,
"Platform": ["ubuntu"],
"OwnerGroup": ["admin", "prod", "user"],
"Site": ["Any", "Site_1"],
"Tag": ["SomeTAG"],
},
numQueuesToGet=5,
)
assert result["OK"] is True
res = set([int(x[0]) for x in result["Value"]])
assert res == {tq_job4, tq_job5}
for jobId in range(1, 8):
result = tqDB.deleteJob(jobId)
assert result["OK"] is True
for tqId in [tq_job1, tq_job2, tq_job3, tq_job4, tq_job5]:
result = tqDB.deleteTaskQueueIfEmpty(tqId)
assert result["OK"] is True
def test_TQ():
"""test of various functions"""
tqDefDict = {"OwnerDN": "/my/DN", "OwnerGroup": "myGroup", "Setup": "aSetup", "CPUTime": 50000}
tqDB.insertJob(123, tqDefDict, 10)
result = tqDB.getNumTaskQueues()
assert result["OK"] is True
assert result["Value"] == 1
result = tqDB.retrieveTaskQueues()
assert result["OK"] is True
assert list(result["Value"].values())[0] == {
"OwnerDN": "/my/DN",
"Jobs": 1,
"OwnerGroup": "myGroup",
"Setup": "aSetup",
"CPUTime": 86400,
"Priority": 1.0,
}
result = tqDB.findOrphanJobs()
assert result["OK"] is True
result = tqDB.recalculateTQSharesForAll()
assert result["OK"] is True
# this will also remove the job
result = tqDB.matchAndGetJob({"Setup": "aSetup", "CPUTime": 300000})
assert result["OK"] is True
assert result["Value"]["matchFound"] is True
assert result["Value"]["jobId"] in [123, 125]
tq = result["Value"]["taskQueueId"]
result = tqDB.deleteTaskQueueIfEmpty(tq)
assert result["OK"] is True
| ic-hep/DIRAC | tests/Integration/WorkloadManagementSystem/Test_TaskQueueDB.py | Python | gpl-3.0 | 40,691 | ["DIRAC"] | b71c7ce281e9124c888b101fca93e74cd4041165dbcd1b9647c9072febbf08a7 |
"""
Functions to operate on polynomials.
"""
from __future__ import division, absolute_import, print_function
__all__ = ['poly', 'roots', 'polyint', 'polyder', 'polyadd',
'polysub', 'polymul', 'polydiv', 'polyval', 'poly1d',
'polyfit', 'RankWarning']
import re
import warnings
import numpy.core.numeric as NX
from numpy.core import (isscalar, abs, finfo, atleast_1d, hstack, dot, array,
ones)
from numpy.lib.twodim_base import diag, vander
from numpy.lib.function_base import trim_zeros
from numpy.lib.type_check import iscomplex, real, imag, mintypecode
from numpy.linalg import eigvals, lstsq, inv
class RankWarning(UserWarning):
"""
Issued by `polyfit` when the Vandermonde matrix is rank deficient.
For more information, a way to suppress the warning, and an example of
`RankWarning` being issued, see `polyfit`.
"""
pass
def poly(seq_of_zeros):
"""
Find the coefficients of a polynomial with the given sequence of roots.
Returns the coefficients of the polynomial whose leading coefficient
is one for the given sequence of zeros (multiple roots must be included
in the sequence as many times as their multiplicity; see Examples).
A square matrix (or array, which will be treated as a matrix) can also
be given, in which case the coefficients of the characteristic polynomial
of the matrix are returned.
Parameters
----------
seq_of_zeros : array_like, shape (N,) or (N, N)
A sequence of polynomial roots, or a square array or matrix object.
Returns
-------
c : ndarray
1D array of polynomial coefficients from highest to lowest degree:
``c[0] * x**(N) + c[1] * x**(N-1) + ... + c[N-1] * x + c[N]``
where c[0] always equals 1.
Raises
------
ValueError
If input is the wrong shape (the input must be a 1-D or square
2-D array).
See Also
--------
polyval : Compute polynomial values.
roots : Return the roots of a polynomial.
polyfit : Least squares polynomial fit.
poly1d : A one-dimensional polynomial class.
Notes
-----
Specifying the roots of a polynomial still leaves one degree of
freedom, typically represented by an undetermined leading
coefficient. [1]_ In the case of this function, that coefficient -
the first one in the returned array - is always taken as one. (If
for some reason you have one other point, the only automatic way
presently to leverage that information is to use ``polyfit``.)
The characteristic polynomial, :math:`p_a(t)`, of an `n`-by-`n`
matrix **A** is given by
:math:`p_a(t) = \\mathrm{det}(t\\, \\mathbf{I} - \\mathbf{A})`,
where **I** is the `n`-by-`n` identity matrix. [2]_
References
----------
.. [1] M. Sullivan and M. Sullivan, III, "Algebra and Trignometry,
Enhanced With Graphing Utilities," Prentice-Hall, pg. 318, 1996.
.. [2] G. Strang, "Linear Algebra and Its Applications, 2nd Edition,"
Academic Press, pg. 182, 1980.
Examples
--------
Given a sequence of a polynomial's zeros:
>>> np.poly((0, 0, 0)) # Multiple root example
array([1, 0, 0, 0])
The line above represents z**3 + 0*z**2 + 0*z + 0.
>>> np.poly((-1./2, 0, 1./2))
array([ 1. , 0. , -0.25, 0. ])
The line above represents z**3 - z/4
>>> np.poly((np.random.random(1.)[0], 0, np.random.random(1.)[0]))
array([ 1. , -0.77086955, 0.08618131, 0. ]) #random
Given a square array object:
>>> P = np.array([[0, 1./3], [-1./2, 0]])
>>> np.poly(P)
array([ 1. , 0. , 0.16666667])
Or a square matrix object:
>>> np.poly(np.matrix(P))
array([ 1. , 0. , 0.16666667])
Note how in all cases the leading coefficient is always 1.
"""
seq_of_zeros = atleast_1d(seq_of_zeros)
sh = seq_of_zeros.shape
if len(sh) == 2 and sh[0] == sh[1] and sh[0] != 0:
seq_of_zeros = eigvals(seq_of_zeros)
elif len(sh) == 1:
dt = seq_of_zeros.dtype
# Let object arrays slip through, e.g. for arbitrary precision
if dt != object:
seq_of_zeros = seq_of_zeros.astype(mintypecode(dt.char))
else:
raise ValueError("input must be 1d or non-empty square 2d array.")
if len(seq_of_zeros) == 0:
return 1.0
dt = seq_of_zeros.dtype
a = ones((1,), dtype=dt)
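# Multiply out the linear factors (x - r_k) one at a time; convolving
# coefficient arrays is polynomial multiplication.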
for k in range(len(seq_of_zeros)):
a = NX.convolve(a, array([1, -seq_of_zeros[k]], dtype=dt),
mode='full')
if issubclass(a.dtype.type, NX.complexfloating):
# if complex roots are all complex conjugates, the roots are real.
roots = NX.asarray(seq_of_zeros, complex)
if NX.all(NX.sort(roots) == NX.sort(roots.conjugate())):
a = a.real.copy()
return a
def roots(p):
"""
Return the roots of a polynomial with coefficients given in p.
The values in the rank-1 array `p` are coefficients of a polynomial.
If the length of `p` is n+1 then the polynomial is described by::
p[0] * x**n + p[1] * x**(n-1) + ... + p[n-1]*x + p[n]
Parameters
----------
p : array_like
Rank-1 array of polynomial coefficients.
Returns
-------
out : ndarray
An array containing the roots of the polynomial.
Raises
------
ValueError
When `p` cannot be converted to a rank-1 array.
See also
--------
poly : Find the coefficients of a polynomial with a given sequence
of roots.
polyval : Compute polynomial values.
polyfit : Least squares polynomial fit.
poly1d : A one-dimensional polynomial class.
Notes
-----
The algorithm relies on computing the eigenvalues of the
companion matrix [1]_.
References
----------
.. [1] R. A. Horn & C. R. Johnson, *Matrix Analysis*. Cambridge, UK:
Cambridge University Press, 1999, pp. 146-7.
Examples
--------
>>> coeff = [3.2, 2, 1]
>>> np.roots(coeff)
array([-0.3125+0.46351241j, -0.3125-0.46351241j])
"""
# If input is scalar, this makes it an array
p = atleast_1d(p)
if p.ndim != 1:
raise ValueError("Input must be a rank-1 array.")
# find non-zero array entries
non_zero = NX.nonzero(NX.ravel(p))[0]
# Return an empty array if polynomial is all zeros
if len(non_zero) == 0:
return NX.array([])
# find the number of trailing zeros -- this is the number of roots at 0.
trailing_zeros = len(p) - non_zero[-1] - 1
# strip leading and trailing zeros
p = p[int(non_zero[0]):int(non_zero[-1])+1]
# casting: if incoming array isn't floating point, make it floating point.
if not issubclass(p.dtype.type, (NX.floating, NX.complexfloating)):
p = p.astype(float)
N = len(p)
if N > 1:
# build companion matrix and find its eigenvalues (the roots)
A = diag(NX.ones((N-2,), p.dtype), -1)
A[0,:] = -p[1:] / p[0]
roots = eigvals(A)
else:
roots = NX.array([])
# tack any zeros onto the back of the array
roots = hstack((roots, NX.zeros(trailing_zeros, roots.dtype)))
return roots
def polyint(p, m=1, k=None):
"""
Return an antiderivative (indefinite integral) of a polynomial.
The returned order `m` antiderivative `P` of polynomial `p` satisfies
:math:`\\frac{d^m}{dx^m}P(x) = p(x)` and is defined up to `m - 1`
integration constants `k`. The constants determine the low-order
polynomial part
.. math:: \\frac{k_{m-1}}{0!} x^0 + \\ldots + \\frac{k_0}{(m-1)!}x^{m-1}
of `P` so that :math:`P^{(j)}(0) = k_{m-j-1}`.
Parameters
----------
p : array_like or poly1d
Polynomial to differentiate.
A sequence is interpreted as polynomial coefficients, see `poly1d`.
m : int, optional
Order of the antiderivative. (Default: 1)
k : list of `m` scalars or scalar, optional
Integration constants. They are given in the order of integration:
those corresponding to highest-order terms come first.
If ``None`` (default), all constants are assumed to be zero.
If `m = 1`, a single scalar can be given instead of a list.
See Also
--------
polyder : derivative of a polynomial
poly1d.integ : equivalent method
Examples
--------
The defining property of the antiderivative:
>>> p = np.poly1d([1,1,1])
>>> P = np.polyint(p)
>>> P
poly1d([ 0.33333333, 0.5 , 1. , 0. ])
>>> np.polyder(P) == p
True
The integration constants default to zero, but can be specified:
>>> P = np.polyint(p, 3)
>>> P(0)
0.0
>>> np.polyder(P)(0)
0.0
>>> np.polyder(P, 2)(0)
0.0
>>> P = np.polyint(p, 3, k=[6,5,3])
>>> P
poly1d([ 0.01666667, 0.04166667, 0.16666667, 3. , 5. , 3. ])
Note that 3 = 6 / 2!, and that the constants are given in the order of
integrations. Constant of the highest-order polynomial term comes first:
>>> np.polyder(P, 2)(0)
6.0
>>> np.polyder(P, 1)(0)
5.0
>>> P(0)
3.0
"""
m = int(m)
if m < 0:
raise ValueError("Order of integral must be positive (see polyder)")
if k is None:
k = NX.zeros(m, float)
k = atleast_1d(k)
if len(k) == 1 and m > 1:
k = k[0]*NX.ones(m, float)
if len(k) < m:
raise ValueError(
"k must be a scalar or a rank-1 array of length 1 or >m.")
truepoly = isinstance(p, poly1d)
p = NX.asarray(p)
if m == 0:
if truepoly:
return poly1d(p)
return p
else:
# Note: this must work also with object and integer arrays
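# Integrate once: divide each coefficient by its new power (len(p) down to 1)
# and append the constant k[0], then recurse for the remaining m - 1 orders.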
y = NX.concatenate((p.__truediv__(NX.arange(len(p), 0, -1)), [k[0]]))
val = polyint(y, m - 1, k=k[1:])
if truepoly:
return poly1d(val)
return val
def polyder(p, m=1):
"""
Return the derivative of the specified order of a polynomial.
Parameters
----------
p : poly1d or sequence
Polynomial to differentiate.
A sequence is interpreted as polynomial coefficients, see `poly1d`.
m : int, optional
Order of differentiation (default: 1)
Returns
-------
der : poly1d
A new polynomial representing the derivative.
See Also
--------
polyint : Anti-derivative of a polynomial.
poly1d : Class for one-dimensional polynomials.
Examples
--------
The derivative of the polynomial :math:`x^3 + x^2 + x^1 + 1` is:
>>> p = np.poly1d([1,1,1,1])
>>> p2 = np.polyder(p)
>>> p2
poly1d([3, 2, 1])
which evaluates to:
>>> p2(2.)
17.0
We can verify this, approximating the derivative with
``(f(x + h) - f(x))/h``:
>>> (p(2. + 0.001) - p(2.)) / 0.001
17.007000999997857
The fourth-order derivative of a 3rd-order polynomial is zero:
>>> np.polyder(p, 2)
poly1d([6, 2])
>>> np.polyder(p, 3)
poly1d([6])
>>> np.polyder(p, 4)
poly1d([ 0.])
"""
m = int(m)
if m < 0:
raise ValueError("Order of derivative must be positive (see polyint)")
truepoly = isinstance(p, poly1d)
p = NX.asarray(p)
n = len(p) - 1
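# Differentiate once: multiply each coefficient by its power and drop the constant term.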
y = p[:-1] * NX.arange(n, 0, -1)
if m == 0:
val = p
else:
val = polyder(y, m - 1)
if truepoly:
val = poly1d(val)
return val
def polyfit(x, y, deg, rcond=None, full=False, w=None, cov=False):
"""
Least squares polynomial fit.
Fit a polynomial ``p(x) = p[0] * x**deg + ... + p[deg]`` of degree `deg`
to points `(x, y)`. Returns a vector of coefficients `p` that minimises
the squared error.
Parameters
----------
x : array_like, shape (M,)
x-coordinates of the M sample points ``(x[i], y[i])``.
y : array_like, shape (M,) or (M, K)
y-coordinates of the sample points. Several data sets of sample
points sharing the same x-coordinates can be fitted at once by
passing in a 2D-array that contains one dataset per column.
deg : int
Degree of the fitting polynomial
rcond : float, optional
Relative condition number of the fit. Singular values smaller than
this relative to the largest singular value will be ignored. The
default value is len(x)*eps, where eps is the relative precision of
the float type, about 2e-16 in most cases.
full : bool, optional
Switch determining nature of return value. When it is False (the
default) just the coefficients are returned, when True diagnostic
information from the singular value decomposition is also returned.
w : array_like, shape (M,), optional
Weights to apply to the y-coordinates of the sample points. For
gaussian uncertainties, use 1/sigma (not 1/sigma**2).
cov : bool, optional
Return the estimate and the covariance matrix of the estimate.
If full is True, then cov is not returned.
Returns
-------
p : ndarray, shape (deg + 1,) or (deg + 1, K)
Polynomial coefficients, highest power first. If `y` was 2-D, the
coefficients for `k`-th data set are in ``p[:,k]``.
residuals, rank, singular_values, rcond
Present only if `full` = True. Residuals of the least-squares fit,
the effective rank of the scaled Vandermonde coefficient matrix,
its singular values, and the specified value of `rcond`. For more
details, see `linalg.lstsq`.
V : ndarray, shape (deg + 1, deg + 1) or (deg + 1, deg + 1, K)
Present only if `full` = False and `cov` = True. The covariance
matrix of the polynomial coefficient estimates. The diagonal of
this matrix contains the variance estimates for each coefficient. If y
is a 2-D array, then the covariance matrix for the `k`-th data set
is in ``V[:,:,k]``
Warns
-----
RankWarning
The rank of the coefficient matrix in the least-squares fit is
deficient. The warning is only raised if `full` = False.
The warnings can be turned off by
>>> import warnings
>>> warnings.simplefilter('ignore', np.RankWarning)
See Also
--------
polyval : Compute polynomial values.
linalg.lstsq : Computes a least-squares fit.
scipy.interpolate.UnivariateSpline : Computes spline fits.
Notes
-----
The solution minimizes the squared error
.. math ::
E = \\sum_{j=0}^k |p(x_j) - y_j|^2
in the equations::
x[0]**n * p[0] + ... + x[0] * p[n-1] + p[n] = y[0]
x[1]**n * p[0] + ... + x[1] * p[n-1] + p[n] = y[1]
...
x[k]**n * p[0] + ... + x[k] * p[n-1] + p[n] = y[k]
The coefficient matrix of the coefficients `p` is a Vandermonde matrix.
`polyfit` issues a `RankWarning` when the least-squares fit is badly
conditioned. This implies that the best fit is not well-defined due
to numerical error. The results may be improved by lowering the polynomial
degree or by replacing `x` by `x` - `x`.mean(). The `rcond` parameter
can also be set to a value smaller than its default, but the resulting
fit may be spurious: including contributions from the small singular
values can add numerical noise to the result.
Note that fitting polynomial coefficients is inherently badly conditioned
when the degree of the polynomial is large or the interval of sample points
is badly centered. The quality of the fit should always be checked in these
cases. When polynomial fits are not satisfactory, splines may be a good
alternative.
References
----------
.. [1] Wikipedia, "Curve fitting",
http://en.wikipedia.org/wiki/Curve_fitting
.. [2] Wikipedia, "Polynomial interpolation",
http://en.wikipedia.org/wiki/Polynomial_interpolation
Examples
--------
>>> x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
>>> y = np.array([0.0, 0.8, 0.9, 0.1, -0.8, -1.0])
>>> z = np.polyfit(x, y, 3)
>>> z
array([ 0.08703704, -0.81349206, 1.69312169, -0.03968254])
It is convenient to use `poly1d` objects for dealing with polynomials:
>>> p = np.poly1d(z)
>>> p(0.5)
0.6143849206349179
>>> p(3.5)
-0.34732142857143039
>>> p(10)
22.579365079365115
High-order polynomials may oscillate wildly:
>>> p30 = np.poly1d(np.polyfit(x, y, 30))
/... RankWarning: Polyfit may be poorly conditioned...
>>> p30(4)
-0.80000000000000204
>>> p30(5)
-0.99999999999999445
>>> p30(4.5)
-0.10547061179440398
Illustration:
>>> import matplotlib.pyplot as plt
>>> xp = np.linspace(-2, 6, 100)
>>> _ = plt.plot(x, y, '.', xp, p(xp), '-', xp, p30(xp), '--')
>>> plt.ylim(-2,2)
(-2, 2)
>>> plt.show()
"""
order = int(deg) + 1
x = NX.asarray(x) + 0.0
y = NX.asarray(y) + 0.0
# check arguments.
if deg < 0:
raise ValueError("expected deg >= 0")
if x.ndim != 1:
raise TypeError("expected 1D vector for x")
if x.size == 0:
raise TypeError("expected non-empty vector for x")
if y.ndim < 1 or y.ndim > 2:
raise TypeError("expected 1D or 2D array for y")
if x.shape[0] != y.shape[0]:
raise TypeError("expected x and y to have same length")
# set rcond
if rcond is None:
rcond = len(x)*finfo(x.dtype).eps
# set up least squares equation for powers of x
lhs = vander(x, order)
rhs = y
# apply weighting
if w is not None:
w = NX.asarray(w) + 0.0
if w.ndim != 1:
raise TypeError("expected a 1-d array for weights")
if w.shape[0] != y.shape[0]:
raise TypeError("expected w and y to have the same length")
lhs *= w[:, NX.newaxis]
if rhs.ndim == 2:
rhs *= w[:, NX.newaxis]
else:
rhs *= w
# scale lhs to improve condition number and solve
scale = NX.sqrt((lhs*lhs).sum(axis=0))
lhs /= scale
c, resids, rank, s = lstsq(lhs, rhs, rcond)
c = (c.T/scale).T # broadcast scale coefficients
# warn on rank reduction, which indicates an ill conditioned matrix
if rank != order and not full:
msg = "Polyfit may be poorly conditioned"
warnings.warn(msg, RankWarning, stacklevel=2)
if full:
return c, resids, rank, s, rcond
elif cov:
Vbase = inv(dot(lhs.T, lhs))
Vbase /= NX.outer(scale, scale)
# Some literature ignores the extra -2.0 factor in the denominator, but
# it is included here because the covariance of Multivariate Student-T
# (which is implied by a Bayesian uncertainty analysis) includes it.
# Plus, it gives a slightly more conservative estimate of uncertainty.
if len(x) <= order + 2:
raise ValueError("the number of data points must exceed order + 2 "
"for Bayesian estimate the covariance matrix")
fac = resids / (len(x) - order - 2.0)
if y.ndim == 1:
return c, Vbase * fac
else:
return c, Vbase[:,:, NX.newaxis] * fac
else:
return c
def polyval(p, x):
"""
Evaluate a polynomial at specific values.
If `p` is of length N, this function returns the value:
``p[0]*x**(N-1) + p[1]*x**(N-2) + ... + p[N-2]*x + p[N-1]``
If `x` is a sequence, then `p(x)` is returned for each element of `x`.
If `x` is another polynomial then the composite polynomial `p(x(t))`
is returned.
Parameters
----------
p : array_like or poly1d object
1D array of polynomial coefficients (including coefficients equal
to zero) from highest degree to the constant term, or an
instance of poly1d.
x : array_like or poly1d object
A number, an array of numbers, or an instance of poly1d, at
which to evaluate `p`.
Returns
-------
values : ndarray or poly1d
If `x` is a poly1d instance, the result is the composition of the two
polynomials, i.e., `x` is "substituted" in `p` and the simplified
result is returned. In addition, the type of `x` - array_like or
poly1d - governs the type of the output: `x` array_like => `values`
array_like, `x` a poly1d object => `values` is also.
See Also
--------
poly1d: A polynomial class.
Notes
-----
Horner's scheme [1]_ is used to evaluate the polynomial. Even so,
for polynomials of high degree the values may be inaccurate due to
rounding errors. Use carefully.
References
----------
.. [1] I. N. Bronshtein, K. A. Semendyayev, and K. A. Hirsch (Eng.
trans. Ed.), *Handbook of Mathematics*, New York, Van Nostrand
Reinhold Co., 1985, pg. 720.
Examples
--------
>>> np.polyval([3,0,1], 5) # 3 * 5**2 + 0 * 5**1 + 1
76
>>> np.polyval([3,0,1], np.poly1d(5))
poly1d([ 76.])
>>> np.polyval(np.poly1d([3,0,1]), 5)
76
>>> np.polyval(np.poly1d([3,0,1]), np.poly1d(5))
poly1d([ 76.])
"""
p = NX.asarray(p)
if isinstance(x, poly1d):
y = 0
else:
x = NX.asarray(x)
y = NX.zeros_like(x)
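# Horner's scheme: fold in one coefficient per step, y = y*x + p[i].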
for i in range(len(p)):
y = y * x + p[i]
return y
def polyadd(a1, a2):
"""
Find the sum of two polynomials.
Returns the polynomial resulting from the sum of two input polynomials.
Each input must be either a poly1d object or a 1D sequence of polynomial
coefficients, from highest to lowest degree.
Parameters
----------
a1, a2 : array_like or poly1d object
Input polynomials.
Returns
-------
out : ndarray or poly1d object
The sum of the inputs. If either input is a poly1d object, then the
output is also a poly1d object. Otherwise, it is a 1D array of
polynomial coefficients from highest to lowest degree.
See Also
--------
poly1d : A one-dimensional polynomial class.
poly, polyadd, polyder, polydiv, polyfit, polyint, polysub, polyval
Examples
--------
>>> np.polyadd([1, 2], [9, 5, 4])
array([9, 6, 6])
Using poly1d objects:
>>> p1 = np.poly1d([1, 2])
>>> p2 = np.poly1d([9, 5, 4])
>>> print(p1)
1 x + 2
>>> print(p2)
2
9 x + 5 x + 4
>>> print(np.polyadd(p1, p2))
2
9 x + 6 x + 6
"""
truepoly = (isinstance(a1, poly1d) or isinstance(a2, poly1d))
a1 = atleast_1d(a1)
a2 = atleast_1d(a2)
diff = len(a2) - len(a1)
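# Pad the shorter coefficient array with leading zeros so terms of equal degree line up.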
if diff == 0:
val = a1 + a2
elif diff > 0:
zr = NX.zeros(diff, a1.dtype)
val = NX.concatenate((zr, a1)) + a2
else:
zr = NX.zeros(abs(diff), a2.dtype)
val = a1 + NX.concatenate((zr, a2))
if truepoly:
val = poly1d(val)
return val
def polysub(a1, a2):
"""
Difference (subtraction) of two polynomials.
Given two polynomials `a1` and `a2`, returns ``a1 - a2``.
`a1` and `a2` can be either array_like sequences of the polynomials'
coefficients (including coefficients equal to zero), or `poly1d` objects.
Parameters
----------
a1, a2 : array_like or poly1d
Minuend and subtrahend polynomials, respectively.
Returns
-------
out : ndarray or poly1d
Array or `poly1d` object of the difference polynomial's coefficients.
See Also
--------
polyval, polydiv, polymul, polyadd
Examples
--------
.. math:: (2 x^2 + 10 x - 2) - (3 x^2 + 10 x -4) = (-x^2 + 2)
>>> np.polysub([2, 10, -2], [3, 10, -4])
array([-1, 0, 2])
"""
truepoly = (isinstance(a1, poly1d) or isinstance(a2, poly1d))
a1 = atleast_1d(a1)
a2 = atleast_1d(a2)
diff = len(a2) - len(a1)
if diff == 0:
val = a1 - a2
elif diff > 0:
zr = NX.zeros(diff, a1.dtype)
val = NX.concatenate((zr, a1)) - a2
else:
zr = NX.zeros(abs(diff), a2.dtype)
val = a1 - NX.concatenate((zr, a2))
if truepoly:
val = poly1d(val)
return val
def polymul(a1, a2):
"""
Find the product of two polynomials.
Finds the polynomial resulting from the multiplication of the two input
polynomials. Each input must be either a poly1d object or a 1D sequence
of polynomial coefficients, from highest to lowest degree.
Parameters
----------
a1, a2 : array_like or poly1d object
Input polynomials.
Returns
-------
out : ndarray or poly1d object
The polynomial resulting from the multiplication of the inputs. If
either inputs is a poly1d object, then the output is also a poly1d
object. Otherwise, it is a 1D array of polynomial coefficients from
highest to lowest degree.
See Also
--------
poly1d : A one-dimensional polynomial class.
poly, polyadd, polyder, polydiv, polyfit, polyint, polysub,
polyval
convolve : Array convolution. Same output as polymul, but has parameter
for overlap mode.
Examples
--------
>>> np.polymul([1, 2, 3], [9, 5, 1])
array([ 9, 23, 38, 17, 3])
Using poly1d objects:
>>> p1 = np.poly1d([1, 2, 3])
>>> p2 = np.poly1d([9, 5, 1])
>>> print(p1)
2
1 x + 2 x + 3
>>> print(p2)
2
9 x + 5 x + 1
>>> print(np.polymul(p1, p2))
4 3 2
9 x + 23 x + 38 x + 17 x + 3
"""
truepoly = (isinstance(a1, poly1d) or isinstance(a2, poly1d))
a1, a2 = poly1d(a1), poly1d(a2)
val = NX.convolve(a1, a2)
if truepoly:
val = poly1d(val)
return val
def polydiv(u, v):
"""
Returns the quotient and remainder of polynomial division.
The input arrays are the coefficients (including any coefficients
equal to zero) of the "numerator" (dividend) and "denominator"
(divisor) polynomials, respectively.
Parameters
----------
u : array_like or poly1d
Dividend polynomial's coefficients.
v : array_like or poly1d
Divisor polynomial's coefficients.
Returns
-------
q : ndarray
Coefficients, including those equal to zero, of the quotient.
r : ndarray
Coefficients, including those equal to zero, of the remainder.
See Also
--------
poly, polyadd, polyder, polydiv, polyfit, polyint, polymul, polysub,
polyval
Notes
-----
Both `u` and `v` must be 0-d or 1-d (ndim = 0 or 1), but `u.ndim` need
not equal `v.ndim`. In other words, all four possible combinations -
``u.ndim = v.ndim = 0``, ``u.ndim = v.ndim = 1``,
``u.ndim = 1, v.ndim = 0``, and ``u.ndim = 0, v.ndim = 1`` - work.
Examples
--------
.. math:: \\frac{3x^2 + 5x + 2}{2x + 1} = 1.5x + 1.75, remainder 0.25
>>> x = np.array([3.0, 5.0, 2.0])
>>> y = np.array([2.0, 1.0])
>>> np.polydiv(x, y)
(array([ 1.5 , 1.75]), array([ 0.25]))
"""
truepoly = (isinstance(u, poly1d) or isinstance(v, poly1d))
u = atleast_1d(u) + 0.0
v = atleast_1d(v) + 0.0
# w has the common type
w = u[0] + v[0]
m = len(u) - 1
n = len(v) - 1
scale = 1. / v[0]
q = NX.zeros((max(m - n + 1, 1),), w.dtype)
r = u.copy()
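# Long division: at each step, cancel the leading term of the remainder
# by subtracting the scaled divisor.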
for k in range(0, m-n+1):
d = scale * r[k]
q[k] = d
r[k:k+n+1] -= d*v
while NX.allclose(r[0], 0, rtol=1e-14) and (r.shape[-1] > 1):
r = r[1:]
if truepoly:
return poly1d(q), poly1d(r)
return q, r
_poly_mat = re.compile(r"[*][*]([0-9]*)")
def _raise_power(astr, wrap=70):
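"""
Raise the exponents in a polynomial string ('**N') onto the line above
the terms, wrapping at `wrap` columns. This produces the two-line
representation used by poly1d.__str__.
"""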
n = 0
line1 = ''
line2 = ''
output = ' '
while True:
mat = _poly_mat.search(astr, n)
if mat is None:
break
span = mat.span()
power = mat.groups()[0]
partstr = astr[n:span[0]]
n = span[1]
toadd2 = partstr + ' '*(len(power)-1)
toadd1 = ' '*(len(partstr)-1) + power
if ((len(line2) + len(toadd2) > wrap) or
(len(line1) + len(toadd1) > wrap)):
output += line1 + "\n" + line2 + "\n "
line1 = toadd1
line2 = toadd2
else:
line2 += partstr + ' '*(len(power)-1)
line1 += ' '*(len(partstr)-1) + power
output += line1 + "\n" + line2
return output + astr[n:]
class poly1d(object):
"""
A one-dimensional polynomial class.
A convenience class, used to encapsulate "natural" operations on
polynomials so that said operations may take on their customary
form in code (see Examples).
Parameters
----------
c_or_r : array_like
The polynomial's coefficients, in decreasing powers, or if
the value of the second parameter is True, the polynomial's
roots (values where the polynomial evaluates to 0). For example,
``poly1d([1, 2, 3])`` returns an object that represents
:math:`x^2 + 2x + 3`, whereas ``poly1d([1, 2, 3], True)`` returns
one that represents :math:`(x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x -6`.
r : bool, optional
If True, `c_or_r` specifies the polynomial's roots; the default
is False.
variable : str, optional
Changes the variable used when printing `p` from `x` to `variable`
(see Examples).
Examples
--------
Construct the polynomial :math:`x^2 + 2x + 3`:
>>> p = np.poly1d([1, 2, 3])
>>> print(np.poly1d(p))
2
1 x + 2 x + 3
Evaluate the polynomial at :math:`x = 0.5`:
>>> p(0.5)
4.25
Find the roots:
>>> p.r
array([-1.+1.41421356j, -1.-1.41421356j])
>>> p(p.r)
array([ -4.44089210e-16+0.j, -4.44089210e-16+0.j])
These numbers in the previous line represent (0, 0) to machine precision
Show the coefficients:
>>> p.c
array([1, 2, 3])
Display the order (the leading zero-coefficients are removed):
>>> p.order
2
Show the coefficient of the k-th power in the polynomial
(which is equivalent to ``p.c[-(i+1)]``):
>>> p[1]
2
Polynomials can be added, subtracted, multiplied, and divided
(returns quotient and remainder):
>>> p * p
poly1d([ 1, 4, 10, 12, 9])
>>> (p**3 + 4) / p
(poly1d([ 1., 4., 10., 12., 9.]), poly1d([ 4.]))
``asarray(p)`` gives the coefficient array, so polynomials can be
used in all functions that accept arrays:
>>> p**2 # square of polynomial
poly1d([ 1, 4, 10, 12, 9])
>>> np.square(p) # square of individual coefficients
array([1, 4, 9])
The variable used in the string representation of `p` can be modified,
using the `variable` parameter:
>>> p = np.poly1d([1,2,3], variable='z')
>>> print(p)
2
1 z + 2 z + 3
Construct a polynomial from its roots:
>>> np.poly1d([1, 2], True)
poly1d([ 1, -3, 2])
This is the same polynomial as obtained by:
>>> np.poly1d([1, -1]) * np.poly1d([1, -2])
poly1d([ 1, -3, 2])
"""
__hash__ = None
@property
def coeffs(self):
""" A copy of the polynomial coefficients """
return self._coeffs.copy()
@property
def variable(self):
""" The name of the polynomial variable """
return self._variable
# calculated attributes
@property
def order(self):
""" The order or degree of the polynomial """
return len(self._coeffs) - 1
@property
def roots(self):
""" The roots of the polynomial, where self(x) == 0 """
return roots(self._coeffs)
# our internal _coeffs property needs to be backed by __dict__['coeffs'] for
# scipy to work correctly.
@property
def _coeffs(self):
return self.__dict__['coeffs']
@_coeffs.setter
def _coeffs(self, coeffs):
self.__dict__['coeffs'] = coeffs
# alias attributes
r = roots
c = coef = coefficients = coeffs
o = order
def __init__(self, c_or_r, r=False, variable=None):
if isinstance(c_or_r, poly1d):
self._variable = c_or_r._variable
self._coeffs = c_or_r._coeffs
if set(c_or_r.__dict__) - set(self.__dict__):
msg = ("In the future extra properties will not be copied "
"across when constructing one poly1d from another")
warnings.warn(msg, FutureWarning, stacklevel=2)
self.__dict__.update(c_or_r.__dict__)
if variable is not None:
self._variable = variable
return
if r:
c_or_r = poly(c_or_r)
c_or_r = atleast_1d(c_or_r)
if c_or_r.ndim > 1:
raise ValueError("Polynomial must be 1d only.")
c_or_r = trim_zeros(c_or_r, trim='f')
if len(c_or_r) == 0:
c_or_r = NX.array([0.])
self._coeffs = c_or_r
if variable is None:
variable = 'x'
self._variable = variable
def __array__(self, t=None):
if t:
return NX.asarray(self.coeffs, t)
else:
return NX.asarray(self.coeffs)
def __repr__(self):
vals = repr(self.coeffs)
vals = vals[6:-1]
return "poly1d(%s)" % vals
def __len__(self):
return self.order
def __str__(self):
thestr = "0"
var = self.variable
# Remove leading zeros
coeffs = self.coeffs[NX.logical_or.accumulate(self.coeffs != 0)]
N = len(coeffs)-1
def fmt_float(q):
s = '%.4g' % q
if s.endswith('.0000'):
s = s[:-5]
return s
for k in range(len(coeffs)):
if not iscomplex(coeffs[k]):
coefstr = fmt_float(real(coeffs[k]))
elif real(coeffs[k]) == 0:
coefstr = '%sj' % fmt_float(imag(coeffs[k]))
else:
coefstr = '(%s + %sj)' % (fmt_float(real(coeffs[k])),
fmt_float(imag(coeffs[k])))
power = (N-k)
if power == 0:
if coefstr != '0':
newstr = '%s' % (coefstr,)
else:
if k == 0:
newstr = '0'
else:
newstr = ''
elif power == 1:
if coefstr == '0':
newstr = ''
elif coefstr == 'b':
newstr = var
else:
newstr = '%s %s' % (coefstr, var)
else:
if coefstr == '0':
newstr = ''
elif coefstr == 'b':
newstr = '%s**%d' % (var, power,)
else:
newstr = '%s %s**%d' % (coefstr, var, power)
if k > 0:
if newstr != '':
if newstr.startswith('-'):
thestr = "%s - %s" % (thestr, newstr[1:])
else:
thestr = "%s + %s" % (thestr, newstr)
else:
thestr = newstr
return _raise_power(thestr)
def __call__(self, val):
return polyval(self.coeffs, val)
def __neg__(self):
return poly1d(-self.coeffs)
def __pos__(self):
return self
def __mul__(self, other):
if isscalar(other):
return poly1d(self.coeffs * other)
else:
other = poly1d(other)
return poly1d(polymul(self.coeffs, other.coeffs))
def __rmul__(self, other):
if isscalar(other):
return poly1d(other * self.coeffs)
else:
other = poly1d(other)
return poly1d(polymul(self.coeffs, other.coeffs))
def __add__(self, other):
other = poly1d(other)
return poly1d(polyadd(self.coeffs, other.coeffs))
def __radd__(self, other):
other = poly1d(other)
return poly1d(polyadd(self.coeffs, other.coeffs))
def __pow__(self, val):
if not isscalar(val) or int(val) != val or val < 0:
raise ValueError("Power to non-negative integers only.")
res = [1]
for _ in range(val):
res = polymul(self.coeffs, res)
return poly1d(res)
def __sub__(self, other):
other = poly1d(other)
return poly1d(polysub(self.coeffs, other.coeffs))
def __rsub__(self, other):
other = poly1d(other)
return poly1d(polysub(other.coeffs, self.coeffs))
def __div__(self, other):
if isscalar(other):
return poly1d(self.coeffs/other)
else:
other = poly1d(other)
return polydiv(self, other)
__truediv__ = __div__
def __rdiv__(self, other):
if isscalar(other):
return poly1d(other/self.coeffs)
else:
other = poly1d(other)
return polydiv(other, self)
__rtruediv__ = __rdiv__
def __eq__(self, other):
if not isinstance(other, poly1d):
return NotImplemented
if self.coeffs.shape != other.coeffs.shape:
return False
return (self.coeffs == other.coeffs).all()
def __ne__(self, other):
if not isinstance(other, poly1d):
return NotImplemented
return not self.__eq__(other)
def __getitem__(self, val):
ind = self.order - val
if val > self.order:
return 0
if val < 0:
return 0
return self.coeffs[ind]
def __setitem__(self, key, val):
ind = self.order - key
if key < 0:
raise ValueError("Does not support negative powers.")
if key > self.order:
zr = NX.zeros(key-self.order, self.coeffs.dtype)
self._coeffs = NX.concatenate((zr, self.coeffs))
ind = 0
self._coeffs[ind] = val
return
def __iter__(self):
return iter(self.coeffs)
def integ(self, m=1, k=0):
"""
Return an antiderivative (indefinite integral) of this polynomial.
Refer to `polyint` for full documentation.
See Also
--------
polyint : equivalent function
"""
return poly1d(polyint(self.coeffs, m=m, k=k))
def deriv(self, m=1):
"""
Return a derivative of this polynomial.
Refer to `polyder` for full documentation.
See Also
--------
polyder : equivalent function
"""
return poly1d(polyder(self.coeffs, m=m))
# Stuff to do on module import
warnings.simplefilter('always', RankWarning)
| unnikrishnankgs/va | venv/lib/python3.5/site-packages/numpy/lib/polynomial.py | Python | bsd-2-clause | 38,572 | ["Gaussian"] | a9326df8c6ed8277261c353791dfbd102262e7f13c5e0e0e97d564a90d4b1ae9 |
"""
View for Courseware Index
"""
# pylint: disable=attribute-defined-outside-init
import logging
import urllib
from django.conf import settings
from django.contrib.auth.models import User
from django.contrib.auth.views import redirect_to_login
from django.urls import reverse
from django.http import Http404
from django.template.context_processors import csrf
from django.utils.decorators import method_decorator
from django.utils.functional import cached_property
from django.utils.translation import ugettext as _
from django.views.decorators.cache import cache_control
from django.views.decorators.csrf import ensure_csrf_cookie
from django.views.generic import View
from edx_django_utils.monitoring import set_custom_metrics_for_course_key
from opaque_keys.edx.keys import CourseKey
from web_fragments.fragment import Fragment
from edxmako.shortcuts import render_to_response, render_to_string
from lms.djangoapps.courseware.exceptions import CourseAccessRedirect
from lms.djangoapps.experiments.utils import get_experiment_user_metadata_context
from lms.djangoapps.gating.api import get_entrance_exam_score_ratio, get_entrance_exam_usage_key
from lms.djangoapps.grades.course_grade_factory import CourseGradeFactory
from openedx.core.djangoapps.crawlers.models import CrawlersConfig
from openedx.core.djangoapps.lang_pref import LANGUAGE_KEY
from openedx.core.djangoapps.user_api.preferences.api import get_user_preference
from openedx.core.djangoapps.util.user_messages import PageLevelMessages
from openedx.core.djangoapps.waffle_utils import WaffleSwitchNamespace, WaffleFlagNamespace, CourseWaffleFlag
from openedx.core.djangolib.markup import HTML, Text
from openedx.features.course_experience import COURSE_OUTLINE_PAGE_FLAG, default_course_url_name
from openedx.features.course_experience.views.course_sock import CourseSockFragmentView
from openedx.features.enterprise_support.api import data_sharing_consent_required
from shoppingcart.models import CourseRegistrationCode
from student.views import is_course_blocked
from util.views import ensure_valid_course_key
from xmodule.modulestore.django import modulestore
from xmodule.x_module import STUDENT_VIEW
from .views import CourseTabView
from ..access import has_access
from ..access_utils import check_course_open_for_learner
from ..courses import get_course_with_access, get_current_child, get_studio_url
from ..entrance_exams import (
course_has_entrance_exam,
get_entrance_exam_content,
user_can_skip_entrance_exam,
user_has_passed_entrance_exam
)
from ..masquerade import setup_masquerade
from ..model_data import FieldDataCache
from ..module_render import get_module_for_descriptor, toc_for_course
log = logging.getLogger("edx.courseware.views.index")
TEMPLATE_IMPORTS = {'urllib': urllib}
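# Depth of course content prefetched below the course block: chapters and their sections.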
CONTENT_DEPTH = 2
class CoursewareIndex(View):
"""
View class for the Courseware page.
"""
@cached_property
def enable_anonymous_courseware_access(self):
waffle_flag = CourseWaffleFlag(WaffleFlagNamespace(name='seo'), 'enable_anonymous_courseware_access')
return waffle_flag.is_enabled(self.course_key)
@method_decorator(ensure_csrf_cookie)
@method_decorator(cache_control(no_cache=True, no_store=True, must_revalidate=True))
@method_decorator(ensure_valid_course_key)
@method_decorator(data_sharing_consent_required)
def get(self, request, course_id, chapter=None, section=None, position=None):
"""
Displays courseware accordion and associated content. If course, chapter,
and section are all specified, renders the page, or returns an error if they
are invalid.
If section is not specified, displays the accordion opened to the right
chapter.
If neither chapter nor section is specified, displays the user's most
recent chapter, or the first chapter if this is the user's first visit.
Arguments:
request: HTTP request
course_id (unicode): course id
chapter (unicode): chapter url_name
section (unicode): section url_name
position (unicode): position in module, eg of <sequential> module
"""
self.course_key = CourseKey.from_string(course_id)
if not (request.user.is_authenticated or self.enable_anonymous_courseware_access):
return redirect_to_login(request.get_full_path())
self.original_chapter_url_name = chapter
self.original_section_url_name = section
self.chapter_url_name = chapter
self.section_url_name = section
self.position = position
self.chapter, self.section = None, None
self.course = None
self.url = request.path
try:
set_custom_metrics_for_course_key(self.course_key)
self._clean_position()
with modulestore().bulk_operations(self.course_key):
self.course = get_course_with_access(
request.user, 'load', self.course_key,
depth=CONTENT_DEPTH,
check_if_enrolled=not self.enable_anonymous_courseware_access,
)
self.is_staff = has_access(request.user, 'staff', self.course)
self._setup_masquerade_for_effective_user()
return self.render(request)
except Exception as exception: # pylint: disable=broad-except
return CourseTabView.handle_exceptions(request, self.course, exception)
def _setup_masquerade_for_effective_user(self):
"""
Setup the masquerade information to allow the request to
be processed for the requested effective user.
"""
self.real_user = self.request.user
self.masquerade, self.effective_user = setup_masquerade(
self.request,
self.course_key,
self.is_staff,
reset_masquerade_data=True
)
# Set the user in the request to the effective user.
self.request.user = self.effective_user
def render(self, request):
"""
Render the index page.
"""
self._redirect_if_needed_to_pay_for_course()
self._prefetch_and_bind_course(request)
if self.course.has_children_at_depth(CONTENT_DEPTH):
self._reset_section_to_exam_if_required()
self.chapter = self._find_chapter()
self.section = self._find_section()
if self.chapter and self.section:
self._redirect_if_not_requested_section()
self._save_positions()
self._prefetch_and_bind_section()
if not request.user.is_authenticated:
qs = urllib.urlencode({
'course_id': self.course_key,
'enrollment_action': 'enroll',
'email_opt_in': False,
})
PageLevelMessages.register_warning_message(
request,
Text(_("You are not signed in. To see additional course content, {sign_in_link} or "
"{register_link}, and enroll in this course.")).format(
sign_in_link=HTML('<a href="{url}">{sign_in_label}</a>').format(
sign_in_label=_('sign in'),
url='{}?{}'.format(reverse('signin_user'), qs),
),
register_link=HTML('<a href="{url}">{register_label}</a>').format(
register_label=_('register'),
url='{}?{}'.format(reverse('register_user'), qs),
),
)
)
return render_to_response('courseware/courseware.html', self._create_courseware_context(request))
def _redirect_if_not_requested_section(self):
"""
If the resulting section and chapter are different from what was initially
requested, redirect back to the index page, but with an updated URL that includes
the correct section and chapter values. We do this so that our analytics events
and error logs have the appropriate URLs.
"""
if (
self.chapter.url_name != self.original_chapter_url_name or
(self.original_section_url_name and self.section.url_name != self.original_section_url_name)
):
raise CourseAccessRedirect(
reverse(
'courseware_section',
kwargs={
'course_id': unicode(self.course_key),
'chapter': self.chapter.url_name,
'section': self.section.url_name,
},
)
)
def _clean_position(self):
"""
Verify that the given position is an integer. If it is not positive, set it to 1.
"""
if self.position is not None:
try:
self.position = max(int(self.position), 1)
except ValueError:
raise Http404(u"Position {} is not an integer!".format(self.position))
def _redirect_if_needed_to_pay_for_course(self):
"""
Redirect to dashboard if the course is blocked due to non-payment.
"""
redeemed_registration_codes = []
if self.request.user.is_authenticated:
self.real_user = User.objects.prefetch_related("groups").get(id=self.real_user.id)
redeemed_registration_codes = CourseRegistrationCode.objects.filter(
course_id=self.course_key,
registrationcoderedemption__redeemed_by=self.real_user
)
if is_course_blocked(self.request, redeemed_registration_codes, self.course_key):
# Registration codes may be generated via the Bulk Purchase scenario;
# for invoice-generated registration codes we must also check
# whether the associated invoice is still valid.
# TODO Update message to account for the fact that the user is not authenticated.
log.warning(
u'User %s cannot access the course %s because payment has not yet been received',
self.real_user,
unicode(self.course_key),
)
raise CourseAccessRedirect(reverse('dashboard'))
def _reset_section_to_exam_if_required(self):
"""
Check to see if an Entrance Exam is required for the user.
"""
if not user_can_skip_entrance_exam(self.effective_user, self.course):
exam_chapter = get_entrance_exam_content(self.effective_user, self.course)
if exam_chapter and exam_chapter.get_children():
exam_section = exam_chapter.get_children()[0]
if exam_section:
self.chapter_url_name = exam_chapter.url_name
self.section_url_name = exam_section.url_name
def _get_language_preference(self):
"""
Returns the preferred language for the actual user making the request.
"""
language_preference = settings.LANGUAGE_CODE
if self.request.user.is_authenticated:
language_preference = get_user_preference(self.real_user, LANGUAGE_KEY)
return language_preference
def _is_masquerading_as_student(self):
"""
Returns whether the current request is masquerading as a student.
"""
return self.masquerade and self.masquerade.role == 'student'
def _is_masquerading_as_specific_student(self):
"""
Returns whether the current request is masquerading as a specific student.
"""
return self._is_masquerading_as_student() and self.masquerade.user_name
def _find_block(self, parent, url_name, block_type, min_depth=None):
"""
Finds the block in the parent with the specified url_name.
If not found, calls get_current_child on the parent.
"""
child = None
if url_name:
child = parent.get_child_by(lambda m: m.location.block_id == url_name)
if not child:
# User may be trying to access a child that isn't live yet
if not self._is_masquerading_as_student():
raise Http404('No {block_type} found with name {url_name}'.format(
block_type=block_type,
url_name=url_name,
))
elif min_depth and not child.has_children_at_depth(min_depth - 1):
child = None
if not child:
child = get_current_child(parent, min_depth=min_depth, requested_child=self.request.GET.get("child"))
return child
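# Illustrative sketch (hedged; `course` and "week-1" are hypothetical names):
#
#     chapter = self._find_block(course, "week-1", "chapter", CONTENT_DEPTH - 1)
#
# returns the child whose location.block_id == "week-1"; when no such child
# exists it raises Http404 unless the request is masquerading as a student,
# in which case it falls back to get_current_child(course, ...).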
def _find_chapter(self):
"""
Finds the requested chapter.
"""
return self._find_block(self.course, self.chapter_url_name, 'chapter', CONTENT_DEPTH - 1)
def _find_section(self):
"""
Finds the requested section.
"""
if self.chapter:
return self._find_block(self.chapter, self.section_url_name, 'section')
def _prefetch_and_bind_course(self, request):
"""
Prefetches all descendant data for the requested course and
sets up the runtime, which binds the request user to the course.
"""
self.field_data_cache = FieldDataCache.cache_for_descriptor_descendents(
self.course_key,
self.effective_user,
self.course,
depth=CONTENT_DEPTH,
read_only=CrawlersConfig.is_crawler(request),
)
self.course = get_module_for_descriptor(
self.effective_user,
self.request,
self.course,
self.field_data_cache,
self.course_key,
course=self.course,
)
def _prefetch_and_bind_section(self):
"""
Prefetches all descendant data for the requested section and
sets up the runtime, which binds the request user to the section.
"""
# Pre-fetch all descendant data
self.section = modulestore().get_item(self.section.location, depth=None, lazy=False)
self.field_data_cache.add_descriptor_descendents(self.section, depth=None)
# Bind section to user
self.section = get_module_for_descriptor(
self.effective_user,
self.request,
self.section,
self.field_data_cache,
self.course_key,
self.position,
course=self.course,
)
def _save_positions(self):
"""
Save where we are in the course and chapter.
"""
save_child_position(self.course, self.chapter_url_name)
save_child_position(self.chapter, self.section_url_name)
def _create_courseware_context(self, request):
"""
Returns and creates the rendering context for the courseware.
Also returns the table of contents for the courseware.
"""
course_url_name = default_course_url_name(self.course.id)
course_url = reverse(course_url_name, kwargs={'course_id': unicode(self.course.id)})
courseware_context = {
'csrf': csrf(self.request)['csrf_token'],
'course': self.course,
'course_url': course_url,
'chapter': self.chapter,
'section': self.section,
'init': '',
'fragment': Fragment(),
'staff_access': self.is_staff,
'masquerade': self.masquerade,
'supports_preview_menu': True,
'studio_url': get_studio_url(self.course, 'course'),
'xqa_server': settings.FEATURES.get('XQA_SERVER', "http://your_xqa_server.com"),
'bookmarks_api_url': reverse('bookmarks'),
'language_preference': self._get_language_preference(),
'disable_optimizely': not WaffleSwitchNamespace('RET').is_enabled('enable_optimizely_in_courseware'),
'section_title': None,
'sequence_title': None,
'disable_accordion': COURSE_OUTLINE_PAGE_FLAG.is_enabled(self.course.id),
}
courseware_context.update(
get_experiment_user_metadata_context(
self.course,
self.effective_user,
)
)
table_of_contents = toc_for_course(
self.effective_user,
self.request,
self.course,
self.chapter_url_name,
self.section_url_name,
self.field_data_cache,
)
courseware_context['accordion'] = render_accordion(
self.request,
self.course,
table_of_contents['chapters'],
)
courseware_context['course_sock_fragment'] = CourseSockFragmentView().render_to_fragment(
request, course=self.course)
# entrance exam data
self._add_entrance_exam_to_context(courseware_context)
# staff masquerading data
if not check_course_open_for_learner(self.effective_user, self.course):
# Disable student view button if user is staff and
# course is not yet visible to students.
courseware_context['disable_student_access'] = True
courseware_context['supports_preview_menu'] = False
if self.section:
# chromeless data
if self.section.chrome:
chrome = [s.strip() for s in self.section.chrome.lower().split(",")]
if 'accordion' not in chrome:
courseware_context['disable_accordion'] = True
if 'tabs' not in chrome:
courseware_context['disable_tabs'] = True
# default tab
if self.section.default_tab:
courseware_context['default_tab'] = self.section.default_tab
# section data
courseware_context['section_title'] = self.section.display_name_with_default
section_context = self._create_section_context(
table_of_contents['previous_of_active_section'],
table_of_contents['next_of_active_section'],
)
courseware_context['fragment'] = self.section.render(STUDENT_VIEW, section_context)
if self.section.position and self.section.has_children:
self._add_sequence_title_to_context(courseware_context)
return courseware_context
def _add_sequence_title_to_context(self, courseware_context):
"""
Adds sequence title to the given context.
If we're rendering a section with some display items, but position
exceeds the length of the displayable items, default the position
to the first element.
"""
display_items = self.section.get_display_items()
if not display_items:
return
if self.section.position > len(display_items):
self.section.position = 1
courseware_context['sequence_title'] = display_items[self.section.position - 1].display_name_with_default
def _add_entrance_exam_to_context(self, courseware_context):
"""
Adds entrance exam related information to the given context.
"""
if course_has_entrance_exam(self.course) and getattr(self.chapter, 'is_entrance_exam', False):
courseware_context['entrance_exam_passed'] = user_has_passed_entrance_exam(self.effective_user, self.course)
courseware_context['entrance_exam_current_score'] = get_entrance_exam_score_ratio(
CourseGradeFactory().read(self.effective_user, self.course),
get_entrance_exam_usage_key(self.course),
)
def _create_section_context(self, previous_of_active_section, next_of_active_section):
"""
Returns and creates the rendering context for the section.
"""
def _compute_section_url(section_info, requested_child):
"""
Returns the section URL for the given section_info with the given child parameter.
"""
return "{url}?child={requested_child}".format(
url=reverse(
'courseware_section',
args=[unicode(self.course_key), section_info['chapter_url_name'], section_info['url_name']],
),
requested_child=requested_child,
)
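# e.g. (hedged; the ids below are hypothetical) for
# section_info = {'chapter_url_name': 'week-1', 'url_name': 'intro'} and
# requested_child='last', this yields something like
# "/courses/<course_id>/courseware/week-1/intro/?child=last".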
# NOTE (CCB): Pull the position from the URL for un-authenticated users. Otherwise, pull the saved
# state from the data store.
position = None if self.request.user.is_authenticated else self.position
section_context = {
'activate_block_id': self.request.GET.get('activate_block_id'),
'requested_child': self.request.GET.get("child"),
'progress_url': reverse('progress', kwargs={'course_id': unicode(self.course_key)}),
'user_authenticated': self.request.user.is_authenticated,
'position': position,
}
if previous_of_active_section:
section_context['prev_url'] = _compute_section_url(previous_of_active_section, 'last')
if next_of_active_section:
section_context['next_url'] = _compute_section_url(next_of_active_section, 'first')
# sections can hide data that masquerading staff should see when debugging issues with specific students
section_context['specific_masquerade'] = self._is_masquerading_as_specific_student()
return section_context
def render_accordion(request, course, table_of_contents):
"""
Returns the HTML that renders the navigation for the given course.
Expects the table_of_contents to have data on each chapter and section,
including which ones are active.
"""
context = dict(
[
('toc', table_of_contents),
('course_id', unicode(course.id)),
('csrf', csrf(request)['csrf_token']),
('due_date_display_format', course.due_date_display_format),
] + TEMPLATE_IMPORTS.items()
)
return render_to_string('courseware/accordion.html', context)
def save_child_position(seq_module, child_name):
"""
child_name: url_name of the child
"""
for position, child in enumerate(seq_module.get_display_items(), start=1):
if child.location.block_id == child_name:
# Only save if position changed
if position != seq_module.position:
seq_module.position = position
# Save this new position to the underlying KeyValueStore
seq_module.save()
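# Usage sketch (hedged; `sequential` and "unit-3" are hypothetical names):
#
#     save_child_position(sequential, "unit-3")
#
# If "unit-3" is the 3rd display item its position is persisted as 3 via
# seq_module.save(); nothing is written when the position is unchanged.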
def save_positions_recursively_up(user, request, field_data_cache, xmodule, course=None):
"""
Recurses up the course tree starting from a leaf
Saving the position property based on the previous node as it goes
"""
current_module = xmodule
while current_module:
parent_location = modulestore().get_parent_location(current_module.location)
parent = None
if parent_location:
parent_descriptor = modulestore().get_item(parent_location)
parent = get_module_for_descriptor(
user,
request,
parent_descriptor,
field_data_cache,
current_module.location.course_key,
course=course
)
if parent and hasattr(parent, 'position'):
save_child_position(parent, current_module.location.block_id)
current_module = parent
|
ahmedaljazzar/edx-platform
|
lms/djangoapps/courseware/views/index.py
|
Python
|
agpl-3.0
| 23,447
|
[
"VisIt"
] |
6344fef8431df495c7cbd88bc01a071aadf53b4aca73233794af37314205f53b
|
#!/usr/bin/env python
"""Script for checking the ABINIT automatic tests."""
from __future__ import print_function, division, absolute_import #, unicode_literals
import sys
import os
from pprint import pprint
from optparse import OptionParser
try:
from cStringIO import StringIO
except ImportError:
from io import StringIO
#try:
# import tests
#except ImportError:
# Add the directory [...]/abinit/tests to $PYTHONPATH
pack_dir, tail = os.path.split(os.path.abspath(__file__))
pack_dir, tail = os.path.split(pack_dir)
sys.path.insert(0, pack_dir)
import tests
from tests import abitests
abenv = tests.abenv
__version__ = "0.1"
__author__ = "Matteo Giantomassi"
def check_authors(suite):
def first_second_name(string):
idx = string.rfind(".")
if idx == -1:
first, second = "", string
else:
first, second = string[:idx], string[idx+1:]
return first.strip(), second.strip()
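# e.g. first_second_name("M. Giantomassi") -> ("M", "Giantomassi"), while a
# bare "Unknown" gives ("", "Unknown") because the string has no ".".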
second_names = []
for test in suite:
if not hasattr(test, "authors"):
authors = []
for t in test:
authors.extend(t.authors)
authors = set(authors)
else:
authors = test.authors
for string in authors:
f, s = first_second_name(string)
if not f and s and s != "Unknown":
print("author(s) first name is missing in file %s, string = %s " %(test.full_id, s))
second_names.append(s)
#print(test.id, first_second_name(test.authors[0])[1])
return set(second_names)
def get_allowed_cpp_vars():
"""
Inspect the libpaw header file, the autoconf macros and config.ac
Extract and return the set of allowed CPP options, used to check
the exclude_cpp_vars TEST_INFO section for possible typos.
Based on ~abinit/abichecks/scripts/check-cpp-options.
"""
import re
re_m4file = re.compile(r"\.m4$")
re_hdrfile = re.compile(r"\.h$")
re_acdef = re.compile(r"AC_DEFINE\(")
re_cppdef = re.compile(r"^([ ]?)+#([ ]?)+define [0-9A-Z_]*")
abidir = os.path.abspath("../")
# Extract CPP options from the libPAW header files
cpp_libpaw = set()
for root, dirs, files in os.walk(os.path.join(abidir, "src/42_libpaw")):
for src in files:
if not re_hdrfile.search(src): continue
with open(os.path.join(root, src), "rt") as fh:
for line in fh:
if not re_cppdef.search(line): continue
tmp_def = re.sub("^[# ]*define[ ]*([0-9A-Z_]*).*","\\1", line).strip()
cpp_libpaw.add(tmp_def)
# Extract CPP options from the build system
cpp_buildsys = set()
for root, dirs, files in os.walk(os.path.join(abidir, "config/m4")):
for src in files:
if not re_m4file.search(src): continue
with open(os.path.join(root, src), "rt") as fh:
for line in fh:
if not re_acdef.search(line): continue
tmp_def = re.sub(".*AC_DEFINE\\([\\[]?([^\\],]*).*","\\1",line).strip()
cpp_buildsys.add(tmp_def)
with open(os.path.join(abidir, "configure.ac"), "rt") as fh:
for line in fh:
if not re_acdef.search(line): continue
tmp_def = re.sub(".*AC_DEFINE\\([\\[]?([^\\],]*).*","\\1",line).strip()
cpp_buildsys.add(tmp_def)
return cpp_buildsys.union(cpp_libpaw)
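# Matching sketch (hedged; these sample lines are hypothetical, not copied
# from the ABINIT sources):
#
#     AC_DEFINE([HAVE_MPI], 1, [...])   -> re_acdef + re.sub extract "HAVE_MPI"
#     #define HAVE_LIBPAW 1             -> re_cppdef + re.sub extract "HAVE_LIBPAW"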
def main():
usage = "usage: %prog [suite_name] [options] [-h|--help] for help)"
version = "%prog "+ str(__version__)
parser = OptionParser(usage=usage, version=version)
parser.add_option("-v", "--verbose", dest="verbose", action="store_true", default=False,
help="verbose mode")
options, args = parser.parse_args()
# Get the full database.
# TODO should use with_disabled=True
full_database = abitests.build_database(with_disabled=False)
retcode = 0
print("Stale or lost reference files... ", end="")
err = full_database.find_stale_or_lost_refs()
if err:
retcode += len(err)
print("FAILED")
sys.stderr.write(err)
else:
print("OK")
print("Stale or lost inputs... ", end="")
err = full_database.find_stale_or_lost_inputs()
if err:
retcode += len(err)
print("FAILED")
sys.stderr.write(err)
else:
print("OK")
unknowns, wrong = full_database.find_unknown_wrong_keywords()
print("Unknown keywords... ", end="")
if unknowns:
retcode += len(unknowns)
print("FAILED")
print("The following keywords are not documented:\n\t%s" % unknowns)
print("ACTION: Add the corresponding documentation to the KNOWN_KEYWORDS dictionary defined in tests/__init__.py")
else:
print("OK")
print("Wrong keywords ... ", end="")
if wrong:
retcode += len(wrong)
print("FAILED")
print("The following keywords contain blank spaces:\n\t%s" % wrong)
print("ACTION: Replace blank spaces with underscores")
else:
print("OK")
print("Testing whether important TEST_INFO entries are present... ", end="")
errstr = full_database.check_testinfo_options()
if errstr:
retcode += 1
print("FAILED")
print(errstr)
else:
print("OK")
# Check authors.
#print("Testing whether authors are defined... ", end="")
#second_names = set()
#for suite_name, suite in full_database.items():
# second_names = second_names.union(check_authors(suite))
#if second_names:
# retcode += len(second_names)
# print("FAILED")
# pprint(second_names)
#else:
# print("OK")
# Add test on CPP options
allowed_cpp_vars = get_allowed_cpp_vars()
#print(allowed_cpp_vars)
for suite_name, suite in full_database.items():
for test in suite:
# Remove ! from string e.g. !HAVE_MPI
tvars = set(v[1:] if v.startswith("!") else v for v in test.need_cpp_vars)
diff = tvars.difference(allowed_cpp_vars)
if diff:
print("in test: ", test)
print("diff", diff)
return retcode
if __name__ == "__main__":
sys.exit(main())
|
jmbeuken/abinit
|
tests/check_testsuite.py
|
Python
|
gpl-3.0
| 6,283
|
[
"ABINIT"
] |
63a1b8baa1c50f71fbaafc7a459ac65b24a0a28ae0aa86b97b03c8620aae26e1
|
########################################################################
#
# (C) 2015, Chris Houseknecht <chouse@ansible.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
########################################################################
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import getpass
import json
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.module_utils.six.moves import input
from ansible.module_utils.six.moves.urllib.parse import quote as urlquote, urlparse
from ansible.module_utils.six.moves.urllib.error import HTTPError
from ansible.module_utils.urls import open_url
from ansible.utils.color import stringc
from ansible.utils.display import Display
display = Display()
class GalaxyLogin(object):
''' Class to handle authenticating user with Galaxy API prior to performing CUD operations '''
GITHUB_AUTH = 'https://api.github.com/authorizations'
def __init__(self, galaxy, github_token=None):
self.galaxy = galaxy
self.github_username = None
self.github_password = None
if github_token is None:
self.get_credentials()
def get_credentials(self):
display.display(u'\n\n' + "We need your " + stringc("Github login", 'bright cyan') +
" to identify you.", screen_only=True)
display.display("This information will " + stringc("not be sent to Galaxy", 'bright cyan') +
", only to " + stringc("api.github.com.", "yellow"), screen_only=True)
display.display("The password will not be displayed." + u'\n\n', screen_only=True)
display.display("Use " + stringc("--github-token", 'yellow') +
" if you do not want to enter your password." + u'\n\n', screen_only=True)
try:
self.github_username = input("Github Username: ")
except:
pass
try:
self.github_password = getpass.getpass("Password for %s: " % self.github_username)
except:
pass
if not self.github_username or not self.github_password:
raise AnsibleError("Invalid Github credentials. Username and password are required.")
def remove_github_token(self):
'''
If for some reason an ansible-galaxy token was left from a prior login, remove it. We cannot
retrieve the token after creation, so we are forced to create a new one.
'''
try:
tokens = json.load(open_url(self.GITHUB_AUTH, url_username=self.github_username,
url_password=self.github_password, force_basic_auth=True,))
except HTTPError as e:
res = json.load(e)
raise AnsibleError(res['message'])
for token in tokens:
if token['note'] == 'ansible-galaxy login':
display.vvvvv('removing token: %s' % token['token_last_eight'])
try:
open_url('https://api.github.com/authorizations/%d' % token['id'], url_username=self.github_username,
url_password=self.github_password, method='DELETE', force_basic_auth=True)
except HTTPError as e:
res = json.load(e)
raise AnsibleError(res['message'])
def create_github_token(self):
'''
Create a personal authorization token with a note of 'ansible-galaxy login'
'''
self.remove_github_token()
args = json.dumps({"scopes": ["public_repo"], "note": "ansible-galaxy login"})
try:
data = json.load(open_url(self.GITHUB_AUTH, url_username=self.github_username,
url_password=self.github_password, force_basic_auth=True, data=args))
except HTTPError as e:
res = json.load(e)
raise AnsibleError(res['message'])
return data['token']
|
veger/ansible
|
lib/ansible/galaxy/login.py
|
Python
|
gpl-3.0
| 4,553
|
[
"Galaxy"
] |
e7b2a247d63b741e996660c58acd4d185b534653d17d1259a01186874e98d1cc
|
__author__ = 'Zhouhao Zeng'
import HTSeq
import sys
from optparse import OptionParser
import numpy
import collections
def getSiteBodyProfile(ga, site_iv_set, genic_partition):
profile = collections.defaultdict(lambda: numpy.zeros(genic_partition, numpy.float64))
for site_name, site_iv in site_iv_set:
partition_size = site_iv.length / genic_partition
normalization = partition_size * 1.0 / 1000
index = 0
for site_pos in site_iv.xrange(partition_size):
count_in_window = 0
site_pos_window_iv = HTSeq.GenomicInterval(site_pos.chrom, site_pos.pos, site_pos.pos + partition_size, ".")
for step_iv, step_count in ga[site_pos_window_iv].steps():
count_in_window += step_count * step_iv.length
profile[site_name][index] += count_in_window / normalization
index += 1
if index >= genic_partition:
break
return profile
def getUpstreamProfile(ga, site_iv_set, window_size, resolution, upstream_extension, downstream_extension):
upstream_num_points = upstream_extension / resolution
downstream_num_points = downstream_extension / resolution
total_num_points = upstream_num_points + downstream_num_points + 1
profile = collections.defaultdict(lambda: numpy.zeros(total_num_points, numpy.float64))
for site_name, site_start_pos in [(site_name, site_iv.start_d_as_pos) for (site_name, site_iv) in site_iv_set]:
index = 0
while index < total_num_points:
count_in_window = 0
index_pos = site_start_pos.pos + (index - upstream_num_points) * resolution
index_pos_window_iv = HTSeq.GenomicInterval(site_start_pos.chrom, index_pos - window_size / 2,
index_pos + window_size / 2)
for step_iv, step_count in ga[index_pos_window_iv].steps():
count_in_window += step_count * step_iv.length
profile[site_name][index] += count_in_window
index += 1
return profile
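# Sizing sketch (hedged, hypothetical numbers): with resolution=5,
# upstream_extension=1000 and downstream_extension=0, the profile holds
# 1000/5 + 0 + 1 = 201 values per site, one count per window_size-wide
# window centred every `resolution` bp around the site start.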
def getDownstreamProfile(ga, site_iv_set, window_size, resolution, upstream_extension, downstream_extension):
upstream_num_points = upstream_extension / resolution
downstream_num_points = downstream_extension / resolution
total_num_points = upstream_num_points + downstream_num_points + 1
profile = collections.defaultdict(lambda: numpy.zeros(total_num_points, numpy.float64))
for site_name, site_end_pos in [(site_name, site_iv.end_d_as_pos) for (site_name, site_iv) in site_iv_set]:
index = 0
while index < total_num_points:
count_in_window = 0
index_pos = site_end_pos.pos + (index - upstream_num_points) * resolution
index_pos_window_iv = HTSeq.GenomicInterval(site_end_pos.chrom, index_pos - window_size / 2,
index_pos + window_size / 2)
for step_iv, step_count in ga[index_pos_window_iv].steps():
count_in_window += step_count * step_iv.length
profile[site_name][index] += count_in_window
index += 1
return profile
def getSiteBodyProfileWithStrand(ga, site_iv_set, genic_partition):
profile = collections.defaultdict(lambda: numpy.zeros(genic_partition, numpy.float64))
for site_name, site_iv in site_iv_set:
partition_size = site_iv.length / genic_partition
normalization = partition_size * 1.0 / 1000
index = 0
for site_pos in site_iv.xrange_d(partition_size):
count_in_window = 0
if site_pos.strand == "+":
site_pos_window_iv = HTSeq.GenomicInterval(site_pos.chrom, site_pos.pos, site_pos.pos + partition_size)
elif site_pos.strand == "-":
site_pos_window_iv = HTSeq.GenomicInterval(site_pos.chrom, site_pos.pos - partition_size + 1,
site_pos.pos + 1)
for step_iv, step_count in ga[site_pos_window_iv].steps():
count_in_window += step_count * step_iv.length
profile[site_name][index] += count_in_window / normalization
index += 1
if index >= genic_partition:
break
return profile
def getUpstreamProfileWithStrand(ga, site_iv_set, window_size, resolution, upstream_extension, downstream_extension):
upstream_num_points = upstream_extension / resolution
downstream_num_points = downstream_extension / resolution
total_num_points = upstream_num_points + downstream_num_points + 1
profile = collections.defaultdict(lambda: numpy.zeros(total_num_points, numpy.float64))
for site_name, site_start_pos in [(site_name, site_iv.start_d_as_pos) for (site_name, site_iv) in site_iv_set]:
index = 0
while index < total_num_points:
count_in_window = 0
if site_start_pos.strand == "+":
index_pos = site_start_pos.pos + (index - upstream_num_points) * resolution
index_pos_window_iv = HTSeq.GenomicInterval(site_start_pos.chrom, index_pos - window_size / 2,
index_pos + window_size / 2)
elif site_start_pos.strand == "-":
index_pos = site_start_pos.pos - (index - upstream_num_points) * resolution
index_pos_window_iv = HTSeq.GenomicInterval(site_start_pos.chrom, index_pos - window_size / 2 + 1,
index_pos + window_size / 2 + 1)
for step_iv, step_count in ga[index_pos_window_iv].steps():
count_in_window += step_count * step_iv.length
profile[site_name][index] += count_in_window
index += 1
return profile
def getDownstreamProfileWithStrand(ga, site_iv_set, window_size, resolution, upstream_extension, downstream_extension):
upstream_num_points = upstream_extension / resolution
downstream_num_points = downstream_extension / resolution
total_num_points = upstream_num_points + downstream_num_points + 1
profile = collections.defaultdict(lambda: numpy.zeros(total_num_points, numpy.float64))
for site_name, site_end_pos in [(site_name, site_iv.end_d_as_pos) for (site_name, site_iv) in site_iv_set]:
index = 0
while index < total_num_points:
count_in_window = 0
if site_end_pos.strand == "+":
index_pos = site_end_pos.pos + (index - upstream_num_points) * resolution
index_pos_window_iv = HTSeq.GenomicInterval(site_end_pos.chrom, index_pos - window_size / 2,
index_pos + window_size / 2)
elif site_end_pos.strand == "-":
index_pos = site_end_pos.pos - (index - upstream_num_points) * resolution
index_pos_window_iv = HTSeq.GenomicInterval(site_end_pos.chrom, index_pos - window_size / 2 + 1,
index_pos + window_size / 2 + 1)
for step_iv, step_count in ga[index_pos_window_iv].steps():
count_in_window += step_count * step_iv.length
profile[site_name][index] += count_in_window
index += 1
return profile
def main(argv):
parser = OptionParser()
parser.add_option("-b", "--tags_bed_file", action="store", type="string", dest="tagsfile", metavar="<file>",
help="input ChIP-seq tags bed file")
parser.add_option("-s", "--sites_file", action="store", type="string", dest="sitesfile", metavar="<file>",
help="sites bed file")
parser.add_option("-t", "--sites_strand_specific", action="store", type="string", dest="sites_strand_specific",
metavar="<str>", help="sites strand specific: yes, no", default='no')
parser.add_option("-n", "--normalization", action="store", type="float", dest="norm", metavar="<float>",
help="additional normalization in addition to number of sites, number of reads per million and window_size per 1K")
parser.add_option("-f", "--fragment_size", action="store", type="int", dest="fragment_size", metavar="<int>",
help="fragment size determines the shift (half of fragment_size of ChIP-seq read position, in bps")
parser.add_option("-r", "--resolution", action="store", type="int", dest="resolution", metavar="<int>",
help="resolution of the upstream and downstream profile, eg, 5")
parser.add_option("-u", "--upstream_extension", action="store", type="int", dest="upstream_extension",
metavar="<int>", help="upstream extension")
parser.add_option("-d", "--downstream_extension", action="store", type="int", dest="downstream_extension",
metavar="<int>", help="downstream extension")
parser.add_option("-w", "--window_size", action="store", type="int", dest="window_size", metavar="<int>",
help="window size for averaging. When window size > resolution, there is smoothing")
parser.add_option("-p", "--genic_partition", action="store", type="int", dest="genic_partition", metavar="<int>",
help="genic partition, eg, 20")
parser.add_option("-o", "--output_file", action="store", type="string", dest="outfile", metavar="<file>",
help="output profile around sites file")
(opt, args) = parser.parse_args(argv)
if len(argv) < 22:
parser.print_help()
sys.exit(1)
fragment_size = opt.fragment_size
window_size = opt.window_size
resolution = opt.resolution
upstream_extension = opt.upstream_extension
downstream_extension = opt.downstream_extension
genic_partition = opt.genic_partition
print "upstream extension: %i" % upstream_extension
print "downstream extension: %i" % downstream_extension
print "upstream and downstream resolution: %i" % resolution
print "upstream and downstream scanning window size: %i" % window_size
print "genic partition: %i" % genic_partition
num_tags = 0
ga = HTSeq.GenomicArray("auto", stranded=False, typecode="i")
tagsfile = HTSeq.BED_Reader(opt.tagsfile)
for alt in tagsfile:
if alt.iv.strand == "+":
alt_pos = HTSeq.GenomicPosition(alt.iv.chrom, alt.iv.start_d + fragment_size / 2)
elif alt.iv.strand == "-":
alt_pos = HTSeq.GenomicPosition(alt.iv.chrom, alt.iv.start_d - fragment_size / 2)
ga[alt_pos] += 1
num_tags += 1
site_iv_set = set()
sitesfile = HTSeq.BED_Reader(opt.sitesfile)
for alt in sitesfile:
site_iv_set.add((alt.name, alt.iv))
if opt.sites_strand_specific == "no":
site_body_profile = getSiteBodyProfile(ga, site_iv_set, genic_partition)
upstream_profile = getUpstreamProfile(ga, site_iv_set, window_size, resolution, upstream_extension, 0)
downstream_profile = getDownstreamProfile(ga, site_iv_set, window_size, resolution, 0, downstream_extension)
# A defaultdict cannot be scaled in place with *=; normalize each per-site
# array to reads per 1 kb of scanning window, as in the strand-specific
# branch below.
for site_name, site_iv in site_iv_set:
    upstream_profile[site_name] *= 1000.0 / window_size
    downstream_profile[site_name] *= 1000.0 / window_size
elif opt.sites_strand_specific == "yes":
site_body_profile = getSiteBodyProfileWithStrand(ga, site_iv_set, genic_partition)
upstream_profile = getUpstreamProfileWithStrand(ga, site_iv_set, window_size, resolution, upstream_extension, 0)
for site_name, site_iv in site_iv_set:
upstream_profile[site_name] *= 1000.0 / window_size
downstream_profile = getDownstreamProfileWithStrand(ga, site_iv_set, window_size, resolution, 0,
downstream_extension)
for site_name, site_iv in site_iv_set:
downstream_profile[site_name] *= 1000.0 / window_size
upstream_xValues = numpy.arange(0, upstream_extension + 1, resolution)[-1::-1] * (-1)
site_body_xValues = [0.0] * genic_partition
for i in xrange(genic_partition):
site_body_xValues[i] = (i + 0.5) / genic_partition
downstream_xValues = numpy.arange(0, downstream_extension + 1, resolution)
f = open(opt.outfile, "w")
header = 'site_id' + '\t' + '\t'.join([str(v) for v in upstream_xValues]) + '\t' + '\t'.join(
[str(v) for v in site_body_xValues]) + '\t' + '\t'.join([str(v) for v in downstream_xValues]) + '\n'
f.write(header)
normalization = num_tags / 1000000.0
normalization *= opt.norm
for site_name, site_iv in site_iv_set:
site_body_profile[site_name] = site_body_profile[site_name] / normalization
upstream_profile[site_name] = upstream_profile[site_name] / normalization
downstream_profile[site_name] = downstream_profile[site_name] / normalization
outline = site_name + '\t' + '\t'.join([str(v) for v in numpy.hstack(
[upstream_profile[site_name], site_body_profile[site_name], downstream_profile[site_name]])]) + '\n'
f.write(outline)
f.close()
if __name__ == "__main__":
main(sys.argv)
|
zhouhaozeng/bioinformatics-codebase
|
profile/generate_profile_matrix_around_sites.py
|
Python
|
gpl-3.0
| 13,148
|
[
"HTSeq"
] |
adaeab9426c08cf2d73bae39b18b281df0a1c0970850b12d4d666af6ea5ba49f
|
# -*- coding: utf-8 -*-
#
# Mayavi documentation build configuration file, created by
# sphinx-quickstart on Sat Apr 12 23:25:24 2008.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# The contents of this file are pickled, so don't put values in the namespace
# that aren't pickleable (module imports are okay, they're removed automatically).
#
# All configuration values have a default value; values that are commented out
# serve to show the default value.
# Adding the current directory to the path, so that sphinx finds the
# extensions.
import sys, os
sys.path.append(os.path.abspath('sphinxext'))
# General configuration
# ---------------------
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'traitsdoc']
## An autodocumentation processor, to insert title of functions before
## the auto-documented functions:
#def add_title(app, what, name, obj, options, signature, return_annotation):
# """ Add a section title with the name of the function before the
# docstring.
# """
# if what is not 'function':
# return
# short_name = name.split('.')[-1]
# extra_lines = """
#
#%s
#...........................................
#
# """ % short_name
# return extra_lines + signature, return_annotation
#
#
#def setup(app):
# """ Register our docstring processor.
# """
# app.connect('autodoc-process-signature', add_title)
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General substitutions.
project = 'mayavi'
copyright = u'2008-2015, Enthought Inc.'
# The default replacements for |version| and |release|, also used in various
# other places throughout the built documents.
d = {}
execfile(os.path.join('..', '..', '..', 'mayavi', '__init__.py'), d)
version = release = d['__version__']
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
today_fmt = '%B %d, %Y'
# List of documents that shouldn't be included in the build.
#unused_docs = []
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# Options for HTML output
# -----------------------
# The style sheet to use for HTML and HTML Help pages. A file of that name
# must exist either in Sphinx' static/ path, or in one of the custom paths
# given in html_static_path.
html_style = 'default.css'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Content template for the index page.
#html_index = ''
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
html_use_modindex = False
# If true, the reST sources are included in the HTML build as _sources/<name>.
#html_copy_source = True
# Output file base name for HTML help builder.
htmlhelp_basename = 'Mayavidoc'
# A logo displayed in the html sidebar.
html_logo = 'mayavi-logo.png'
# Options for LaTeX output
# ------------------------
# The paper size ('letter' or 'a4').
#latex_paper_size = 'letter'
# The font size ('10pt', '11pt' or '12pt').
#latex_font_size = '10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, document class [howto/manual]).
latex_documents = [('index', 'mayavi_user_guide.tex', 'Mayavi User Guide',
'Prabhu Ramachandran, Gael Varoquaux', 'manual')]
# Additional stuff for the LaTeX preamble.
latex_preamble = """
\definecolor{VerbatimColor}{rgb}{0.95,1,0.833}
\definecolor{VerbatimBorderColor}{rgb}{0.6,0.6,0.6}
"""
# A logo displayed on the cover page.
latex_logo = 'm2_about.jpg'
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_use_modindex = True
################################################################################
# A hack:
import shutil
shutil.copyfile('../../CHANGES.txt', './auto/changes.rst')
|
liulion/mayavi
|
docs/source/mayavi/conf.py
|
Python
|
bsd-3-clause
| 5,275
|
[
"Mayavi"
] |
28a2e79705e62ceb692d1d5352c866d8a4e37a6e931c0a8c688ac41e98783532
|
# proxy module
from __future__ import absolute_import
from mayavi.filters.elevation_filter import *
|
enthought/etsproxy
|
enthought/mayavi/filters/elevation_filter.py
|
Python
|
bsd-3-clause
| 100
|
[
"Mayavi"
] |
d346d8fdd635163fe9385f28dc2ed35ac73150157cf23ba8413fd9fc2b1b8de8
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
sessions2trash.py
Run this script in a web2py environment shell e.g. python web2py.py -S app
If models are loaded (-M option) auth.settings.expiration is assumed
for sessions without an expiration. If models are not loaded, sessions older
than 60 minutes are removed. Use the --expiration option to override these
values.
Typical usage:
# Delete expired sessions every 5 minutes
nohup python web2py.py -S app -M -R scripts/sessions2trash.py &
# Delete sessions older than 60 minutes regardless of expiration,
# with verbose output, then exit.
python web2py.py -S app -M -R scripts/sessions2trash.py -A -o -x 3600 -f -v
# Delete all sessions regardless of expiry and exit.
python web2py.py -S app -M -R scripts/sessions2trash.py -A -o -x 0
# Delete session in a module (move to the modules folder)
from sessions2trash import single_loop
def delete_sessions():
single_loop()
"""
from __future__ import with_statement
from gluon import current
from gluon.storage import Storage
from optparse import OptionParser
import cPickle
import datetime
import os
import stat
import time
EXPIRATION_MINUTES = 60
SLEEP_MINUTES = 5
VERSION = 0.3
class SessionSet(object):
"""Class representing a set of sessions"""
def __init__(self, expiration, force, verbose):
self.expiration = expiration
self.force = force
self.verbose = verbose
def get(self):
"""Get session files/records."""
raise NotImplementedError
def trash(self):
"""Trash expired sessions."""
now = datetime.datetime.now()
for item in self.get():
status = 'OK'
last_visit = item.last_visit_default()
try:
session = item.get()
if session.auth:
if session.auth.expiration and not self.force:
self.expiration = session.auth.expiration
if session.auth.last_visit:
last_visit = session.auth.last_visit
except:
pass
age = 0
if last_visit:
age = total_seconds(now - last_visit)
if age > self.expiration or not self.expiration:
item.delete()
status = 'trashed'
if self.verbose > 1:
print 'key: %s' % str(item)
print 'expiration: %s seconds' % self.expiration
print 'last visit: %s' % str(last_visit)
print 'age: %s seconds' % age
print 'status: %s' % status
print ''
elif self.verbose > 0:
print('%s %s' % (str(item), status))
class SessionSetDb(SessionSet):
"""Class representing a set of sessions stored in database"""
def __init__(self, expiration, force, verbose):
SessionSet.__init__(self, expiration, force, verbose)
def get(self):
"""Return list of SessionDb instances for existing sessions."""
sessions = []
table = current.response.session_db_table
if table:
for row in table._db(table.id > 0).select():
sessions.append(SessionDb(row))
return sessions
class SessionSetFiles(SessionSet):
"""Class representing a set of sessions stored in flat files"""
def __init__(self, expiration, force, verbose):
SessionSet.__init__(self, expiration, force, verbose)
def get(self):
"""Return list of SessionFile instances for existing sessions."""
root_path = os.path.join(current.request.folder, 'sessions')
return [SessionFile(os.path.join(path, x)) for path, dirs, files in os.walk(root_path) for x in files]
class SessionDb(object):
"""Class representing a single session stored in database"""
def __init__(self, row):
self.row = row
def delete(self):
table = current.response.session_db_table
self.row.delete_record()
table._db.commit()
def get(self):
session = Storage()
session.update(cPickle.loads(self.row.session_data))
return session
def last_visit_default(self):
if isinstance(self.row.modified_datetime, datetime.datetime):
return self.row.modified_datetime
else:
try:
return datetime.datetime.strptime(self.row.modified_datetime, '%Y-%m-%d %H:%M:%S.%f')
except:
print 'failed to retrieve last modified time (value: %s)' % self.row.modified_datetime
def __str__(self):
return self.row.unique_key
class SessionFile(object):
"""Class representing a single session stored as a flat file"""
def __init__(self, filename):
self.filename = filename
def delete(self):
try:
os.unlink(self.filename)
except:
pass
def get(self):
session = Storage()
with open(self.filename, 'rb+') as f:
session.update(cPickle.load(f))
return session
def last_visit_default(self):
return datetime.datetime.fromtimestamp(
os.stat(self.filename)[stat.ST_MTIME])
def __str__(self):
return self.filename
def total_seconds(delta):
"""
Adapted from Python 2.7's timedelta.total_seconds() method.
Args:
delta: datetime.timedelta instance.
"""
return (delta.microseconds + (delta.seconds + (delta.days * 24 * 3600)) *
10 ** 6) / 10 ** 6
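# e.g. total_seconds(datetime.timedelta(days=1, seconds=30)) == 86430;
# needed because timedelta.total_seconds() only exists from Python 2.7.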
def single_loop(expiration=None, force=False, verbose=False):
if expiration is None:
try:
expiration = auth.settings.expiration
except:
expiration = EXPIRATION_MINUTES * 60
set_db = SessionSetDb(expiration, force, verbose)
set_files = SessionSetFiles(expiration, force, verbose)
set_db.trash()
set_files.trash()
def main():
"""Main processing."""
usage = '%prog [options]' + '\nVersion: %s' % VERSION
parser = OptionParser(usage=usage)
parser.add_option('-f', '--force',
action='store_true', dest='force', default=False,
help=('Ignore session expiration. '
'Force expiry based on -x option or auth.settings.expiration.')
)
parser.add_option('-o', '--once',
action='store_true', dest='once', default=False,
help='Delete sessions, then exit.',
)
parser.add_option('-s', '--sleep',
dest='sleep', default=SLEEP_MINUTES * 60, type="int",
help='Number of seconds to sleep between executions. Default 300.',
)
parser.add_option('-v', '--verbose',
default=0, action='count',
help="print verbose output, a second -v increases verbosity")
parser.add_option('-x', '--expiration',
dest='expiration', default=None, type="int",
help='Expiration value for sessions without expiration (in seconds)',
)
(options, unused_args) = parser.parse_args()
expiration = options.expiration
while True:
single_loop(expiration, options.force, options.verbose)
if options.once:
break
else:
if options.verbose:
print 'Sleeping %s seconds' % (options.sleep)
time.sleep(options.sleep)
if __name__ == '__main__':
main()
|
pouyana/teireader
|
webui/scripts/sessions2trash.py
|
Python
|
mit
| 7,574
|
[
"VisIt"
] |
1e47a21b33e00f6c5a8d7743434d7cfe1ca2f3f9d1ab69cfae758f7ab5ac0526
|
# -*- coding: utf-8 -*-
import os
# App to demonstrate NearestNeighbor queries in action
import Tkinter
from kdtree import *
class KDTreeApp:
def __init__(self):
"""App for creating KD tree dynamically"""
self.tree = KDTree()
self.match = None
self.redraw = False
self.shortline = None
self.master = Tkinter.Tk()
self.w = Tkinter.Frame(self.master, width=410, height=410)
self.canvas = Tkinter.Canvas(self.w, width=400, height=400)
self.paint()
self.canvas.bind("<Button-1>", self.click)
self.canvas.bind("<Motion>", self.moved)
self.w.pack()
self.w.mainloop()
def toCartesian(self, y):
return self.w.winfo_height() - y
def toTk(self, y):
if y == maxValue:
return 0
tk_y = self.w.winfo_height()
if y != minValue:
tk_y -= y
return tk_y
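# Coordinate sketch (hedged; assumes winfo_height() reports 400): Tk puts
# y=0 at the top of the canvas while the KD tree stores Cartesian points:
#
#     self.toCartesian(10)  # -> 390 (10 px below the top edge)
#     self.toTk(390)        # -> 10  (and back again)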
def view(self):
"""Show window with points"""
def moved(self, event):
"""React to mouse move events"""
p = (event.x, self.toCartesian(event.y))
match = self.tree.find(p)
if match:
self.redraw = True
p = match.point
self.canvas.create_rectangle(
p[X_] - 4, self.toTk(p[Y_]) - 4, p[X_] + 4, self.toTk(p[Y_]) + 4, fill='Red')
self.canvas.delete(self.shortline)
self.shortline = None
else:
if self.redraw:
self.paint()
self.redraw = False
n = self.tree.nearest(p)
if n:
pn = n.point
if self.shortline is None:
self.shortline = self.canvas.create_line(pn[X_], self.toTk(
pn[Y_]), p[X_], self.toTk(p[Y_]), tags="shortline")
else:
self.canvas.coords("shortline", pn[X_], self.toTk(
pn[Y_]), p[X_], self.toTk(p[Y_]))
def click(self, event):
"""Add point to KDtree"""
p = (event.x, self.toCartesian(event.y))
self.tree.add(p)
if self.shortline:
self.canvas.delete(self.shortline)
self.shortline = None
self.paint()
def drawPartition(self, r, p, orient):
if orient == VERTICAL:
self.canvas.create_line(p[X_], self.toTk(
r.y_min), p[X_], self.toTk(r.y_max))
else:
xlow = r.x_min
if r.x_min == minValue:
xlow = 0
xhigh = r.x_max
if r.x_max == maxValue:
xhigh = self.w.winfo_width()
self.canvas.create_line(xlow, self.toTk(
p[Y_]), xhigh, self.toTk(p[Y_]))
self.canvas.create_rectangle(
p[X_] - 4, self.toTk(p[Y_]) - 4, p[X_] + 4, self.toTk(p[Y_]) + 4, fill='Black')
def visit(self, n):
if n is None:
return
self.drawPartition(n.region, n.point, n.orient)
self.visit(n.below)
self.visit(n.above)
def prepare(self, event):
"""prepare to add points"""
if self.label:
self.label.destroy()
self.label = None
self.canvas.pack()
def paint(self):
if self.tree.root:
for child in self.canvas.winfo_children():
child.destroy()
self.visit(self.tree.root)
else:
self.label = Tkinter.Label(
self.w, width=100, height=40, text="Click To Add Points")
self.label.bind("<Button-1>", self.prepare)
self.label.pack()
if __name__ == "__main__":
KDTreeApp()
"""
Change Log
1. 2014.05.23 import changed to use KDTree from kdtree.py file
"""
os.system("pause")
|
NicovincX2/Python-3.5
|
Algorithmique/Structure de données/Arbre (structure de données)/Arbre kd/app_nn.py
|
Python
|
gpl-3.0
| 3,778
|
[
"VisIt"
] |
db8f29ab20b655f73c23678d5de2e0a94ac03e5ef2f90afa75a68c4b3c39b60a
|
import numpy as np
from gpaw.utilities import unpack
from gpaw.utilities.blas import gemm
import gpaw.mpi as mpi
class LCAO:
"""Eigensolver for LCAO-basis calculation"""
def __init__(self, diagonalizer=None):
self.diagonalizer = diagonalizer
# ??? why should we be able to set
# this diagonalizer in both constructor and initialize?
self.has_initialized = False # XXX
def initialize(self, gd, dtype, nao, diagonalizer=None):
self.gd = gd
self.nao = nao
if diagonalizer is not None:
self.diagonalizer = diagonalizer
assert self.diagonalizer is not None
self.has_initialized = True # XXX
def error(self):
return 0.0
error = property(error)
def calculate_hamiltonian_matrix(self, hamiltonian, wfs, kpt, root=-1):
# XXX document parallel stuff, particularly root parameter
assert self.has_initialized
vt_G = hamiltonian.vt_sG[kpt.s]
H_MM = np.empty((wfs.ksl.mynao, wfs.ksl.nao), wfs.dtype)
wfs.timer.start('Potential matrix')
wfs.basis_functions.calculate_potential_matrix(vt_G, H_MM, kpt.q)
wfs.timer.stop('Potential matrix')
# Add atomic contribution
#
# -- a a a*
# H += > P dH P
# mu nu -- mu i ij nu j
# aij
#
wfs.timer.start('Atomic Hamiltonian')
Mstart = wfs.basis_functions.Mstart
Mstop = wfs.basis_functions.Mstop
for a, P_Mi in kpt.P_aMi.items():
dH_ii = np.asarray(unpack(hamiltonian.dH_asp[a][kpt.s]), wfs.dtype)
dHP_iM = np.zeros((dH_ii.shape[1], P_Mi.shape[0]), wfs.dtype)
# (ATLAS can't handle uninitialized output array)
gemm(1.0, P_Mi, dH_ii, 0.0, dHP_iM, 'c')
gemm(1.0, dHP_iM, P_Mi[Mstart:Mstop], 1.0, H_MM)
wfs.timer.stop('Atomic Hamiltonian')
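# Rough numpy equivalent of the two gemm calls above (a hedged sketch,
# assuming gemm(alpha, a, b, beta, c, transa) computes
# c <- alpha * b @ op(a) + beta * c, with op(a) = a.conj().T for 'c'):
#
#     dHP_iM = dH_ii @ P_Mi.conj().T
#     H_MM += P_Mi[Mstart:Mstop] @ dHP_iM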
wfs.timer.start('Distribute overlap matrix')
H_MM = wfs.ksl.distribute_overlap_matrix(H_MM, root)
wfs.timer.stop('Distribute overlap matrix')
H_MM += wfs.T_qMM[kpt.q]
return H_MM
def iterate(self, hamiltonian, wfs):
wfs.timer.start('LCAO eigensolver')
for kpt in wfs.kpt_u:
self.iterate_one_k_point(hamiltonian, wfs, kpt)
wfs.timer.stop('LCAO eigensolver')
def iterate_one_k_point(self, hamiltonian, wfs, kpt):
if wfs.bd.comm.size > 1 and wfs.bd.strided:
raise NotImplementedError
H_MM = self.calculate_hamiltonian_matrix(hamiltonian, wfs, kpt, root=0)
S_MM = wfs.S_qMM[kpt.q]
if kpt.eps_n is None:
kpt.eps_n = np.empty(wfs.bd.mynbands)
diagonalization_string = repr(self.diagonalizer)
wfs.timer.start(diagonalization_string)
self.diagonalizer.diagonalize(H_MM, kpt.C_nM, kpt.eps_n, S_MM)
wfs.timer.stop(diagonalization_string)
wfs.timer.start('Calculate projections')
# P_ani are not strictly necessary as required quantities can be
# evaluated directly using P_aMi. We should probably get rid
# of the places in the LCAO code using P_ani directly
for a, P_ni in kpt.P_ani.items():
# ATLAS can't handle uninitialized output array:
P_ni.fill(117)
gemm(1.0, kpt.P_aMi[a], kpt.C_nM, 0.0, P_ni, 'n')
wfs.timer.stop('Calculate projections')
def estimate_memory(self, mem, dtype):
pass
# self.diagonalizer.estimate_memory(mem, dtype) #XXX enable this
|
qsnake/gpaw
|
gpaw/lcao/eigensolver.py
|
Python
|
gpl-3.0
| 3,584
|
[
"GPAW"
] |
987c40e5b6699132466d5da5503f5273ac4649bec848379a9bae2c2dd9137f36
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2013, Jeroen Hoekx <jeroen.hoekx@dsquare.be>, Alexander Bulimov <lazywolf0@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
author:
- Jeroen Hoekx (@jhoekx)
- Alexander Bulimov (@abulimov)
module: lvol
short_description: Configure LVM logical volumes
description:
- This module creates, removes or resizes logical volumes.
version_added: "1.1"
options:
vg:
description:
- The volume group this logical volume is part of.
lv:
description:
- The name of the logical volume.
size:
description:
- The size of the logical volume, according to lvcreate(8) --size, by
default in megabytes or optionally with one of [bBsSkKmMgGtTpPeE] units; or
according to lvcreate(8) --extents as a percentage of [VG|PVS|FREE].
Float values must begin with a digit.
Resizing using percentage values was not supported prior to 2.1.
state:
description:
- Control if the logical volume exists. If C(present) and the
volume does not already exist then the C(size) option is required.
choices: [ absent, present ]
default: present
active:
description:
- Whether the volume is active and visible to the host.
type: bool
default: 'yes'
version_added: "2.2"
force:
description:
- Shrink or remove operations of volumes require this switch. Ensures that
filesystems never get corrupted/destroyed by mistake.
type: bool
default: 'no'
version_added: "1.5"
opts:
description:
- Free-form options to be passed to the lvcreate command.
version_added: "2.0"
snapshot:
description:
- The name of the snapshot volume
version_added: "2.1"
pvs:
description:
- Comma separated list of physical volumes (e.g. /dev/sda,/dev/sdb).
version_added: "2.2"
thinpool:
description:
- The thin pool volume name. When you want to create a thin provisioned volume, specify a thin pool volume name.
version_added: "2.5"
shrink:
description:
- Shrink if current size is higher than size requested.
type: bool
default: 'yes'
version_added: "2.2"
resizefs:
description:
- Resize the underlying filesystem together with the logical volume.
type: bool
default: 'no'
version_added: "2.5"
notes:
- You must specify lv (when managing the state of logical volumes) or thinpool (when managing a thin provisioned volume).
'''
EXAMPLES = '''
- name: Create a logical volume of 512m
lvol:
vg: firefly
lv: test
size: 512
- name: Create a logical volume of 512m with disks /dev/sda and /dev/sdb
lvol:
vg: firefly
lv: test
size: 512
pvs: /dev/sda,/dev/sdb
- name: Create cache pool logical volume
lvol:
vg: firefly
lv: lvcache
size: 512m
opts: --type cache-pool
- name: Create a logical volume of 512g.
lvol:
vg: firefly
lv: test
size: 512g
- name: Create a logical volume the size of all remaining space in the volume group
lvol:
vg: firefly
lv: test
size: 100%FREE
- name: Create a logical volume with special options
lvol:
vg: firefly
lv: test
size: 512g
opts: -r 16
- name: Extend the logical volume to 1024m.
lvol:
vg: firefly
lv: test
size: 1024
- name: Extend the logical volume to consume all remaining space in the volume group
lvol:
vg: firefly
lv: test
size: +100%FREE
- name: Extend the logical volume to take all remaining space of the PVs and resize the underlying filesystem
lvol:
vg: firefly
lv: test
size: 100%PVS
resizefs: true
- name: Resize the logical volume to % of VG
lvol:
vg: firefly
lv: test
size: 80%VG
force: yes
- name: Reduce the logical volume to 512m
lvol:
vg: firefly
lv: test
size: 512
force: yes
- name: Set the logical volume to 512m and do not try to shrink if size is lower than current one
lvol:
vg: firefly
lv: test
size: 512
shrink: no
- name: Remove the logical volume.
lvol:
vg: firefly
lv: test
state: absent
force: yes
- name: Create a snapshot volume of the test logical volume.
lvol:
vg: firefly
lv: test
snapshot: snap1
size: 100m
- name: Deactivate a logical volume
lvol:
vg: firefly
lv: test
active: false
- name: Create a deactivated logical volume
lvol:
vg: firefly
lv: test
size: 512g
active: false
- name: Create a thin pool of 512g
lvol:
vg: firefly
thinpool: testpool
size: 512g
- name: Create a thin volume of 128g
lvol:
vg: firefly
lv: test
thinpool: testpool
size: 128g
'''
import re
import locale
from ansible.module_utils.basic import AnsibleModule
locale.setlocale(locale.LC_ALL, '')
def mkversion(major, minor, patch):
return (1000 * 1000 * int(major)) + (1000 * int(minor)) + int(patch)
def parse_lvs(data):
lvs = []
for line in data.splitlines():
parts = line.strip().split(';')
lvs.append({
'name': parts[0].replace('[', '').replace(']', ''),
'size': locale.atof(parts[1]),
'active': (parts[2][4] == 'a'),
'thinpool': (parts[2][0] == 't'),
'thinvol': (parts[2][0] == 'V'),
})
return lvs
def parse_vgs(data):
vgs = []
for line in data.splitlines():
parts = line.strip().split(';')
vgs.append({
'name': parts[0],
'size': locale.atof(parts[1]),
'free': locale.atof(parts[2]),
'ext_size': locale.atof(parts[3])
})
return vgs
def get_lvm_version(module):
ver_cmd = module.get_bin_path("lvm", required=True)
rc, out, err = module.run_command("%s version" % (ver_cmd))
if rc != 0:
return None
m = re.search(r"LVM version:\s+(\d+)\.(\d+)\.(\d+).*(\d{4}-\d{2}-\d{2})", out)
if not m:
return None
return mkversion(m.group(1), m.group(2), m.group(3))
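# Hypothetical 'lvm version' output line matched by the regex above:
#   "  LVM version:     2.02.176(2) (2017-11-03)"  ->  mkversion(2, 2, 176)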
def main():
module = AnsibleModule(
argument_spec=dict(
vg=dict(type='str', required=True),
lv=dict(type='str'),
size=dict(type='str'),
opts=dict(type='str'),
state=dict(type='str', default='present', choices=['absent', 'present']),
force=dict(type='bool', default=False),
shrink=dict(type='bool', default=True),
active=dict(type='bool', default=True),
snapshot=dict(type='str'),
pvs=dict(type='str'),
resizefs=dict(type='bool', default=False),
thinpool=dict(type='str'),
),
supports_check_mode=True,
required_one_of=(
['lv', 'thinpool'],
),
)
# Determine if the "--yes" option should be used
version_found = get_lvm_version(module)
if version_found is None:
module.fail_json(msg="Failed to get LVM version number")
version_yesopt = mkversion(2, 2, 99) # First LVM with the "--yes" option
if version_found >= version_yesopt:
yesopt = "--yes"
else:
yesopt = ""
vg = module.params['vg']
lv = module.params['lv']
size = module.params['size']
opts = module.params['opts']
state = module.params['state']
force = module.boolean(module.params['force'])
shrink = module.boolean(module.params['shrink'])
active = module.boolean(module.params['active'])
resizefs = module.boolean(module.params['resizefs'])
thinpool = module.params['thinpool']
size_opt = 'L'
size_unit = 'm'
snapshot = module.params['snapshot']
pvs = module.params['pvs']
if pvs is None:
pvs = ""
else:
pvs = pvs.replace(",", " ")
if opts is None:
opts = ""
# Add --test option when running in check-mode
if module.check_mode:
test_opt = ' --test'
else:
test_opt = ''
if size:
# LVCREATE(8) -l --extents option with percentage
if '%' in size:
size_parts = size.split('%', 1)
size_percent = int(size_parts[0])
if size_percent > 100:
module.fail_json(msg="Size percentage cannot be larger than 100%")
size_whole = size_parts[1]
if size_whole == 'ORIGIN':
module.fail_json(msg="Snapshot Volumes are not supported")
elif size_whole not in ['VG', 'PVS', 'FREE']:
module.fail_json(msg="Specify extents as a percentage of VG|PVS|FREE")
size_opt = 'l'
size_unit = ''
if '%' not in size:
# LVCREATE(8) -L --size option unit
if size[-1].lower() in 'bskmgtpe':
size_unit = size[-1].lower()
size = size[0:-1]
try:
locale.atof(size)
if not size[0].isdigit():
raise ValueError()
except ValueError:
module.fail_json(msg="Bad size specification of '%s'" % size)
# when no unit, megabytes by default
if size_opt == 'l':
unit = 'm'
else:
unit = size_unit
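# Illustration of the size parsing above: size='512g' -> size_opt='L',
# size_unit='g', unit='g'; size='80%VG' -> size_opt='l', size_unit='',
# unit='m' (vgs/lvs below are then queried in megabytes).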
# Get information on volume group requested
vgs_cmd = module.get_bin_path("vgs", required=True)
rc, current_vgs, err = module.run_command(
"%s --noheadings --nosuffix -o vg_name,size,free,vg_extent_size --units %s --separator ';' %s" % (vgs_cmd, unit, vg))
if rc != 0:
if state == 'absent':
module.exit_json(changed=False, stdout="Volume group %s does not exist." % vg)
else:
module.fail_json(msg="Volume group %s does not exist." % vg, rc=rc, err=err)
vgs = parse_vgs(current_vgs)
this_vg = vgs[0]
# Get information on logical volume requested
lvs_cmd = module.get_bin_path("lvs", required=True)
rc, current_lvs, err = module.run_command(
"%s -a --noheadings --nosuffix -o lv_name,size,lv_attr --units %s --separator ';' %s" % (lvs_cmd, unit, vg))
if rc != 0:
if state == 'absent':
module.exit_json(changed=False, stdout="Volume group %s does not exist." % vg)
else:
module.fail_json(msg="Volume group %s does not exist." % vg, rc=rc, err=err)
changed = False
lvs = parse_lvs(current_lvs)
if snapshot:
# Check snapshot pre-conditions
for test_lv in lvs:
if test_lv['name'] == lv or test_lv['name'] == thinpool:
if not test_lv['thinpool'] and not thinpool:
break
else:
module.fail_json(msg="Snapshots of thin pool LVs are not supported.")
else:
module.fail_json(msg="Snapshot origin LV %s does not exist in volume group %s." % (lv, vg))
check_lv = snapshot
elif thinpool:
if lv:
# Check thin volume pre-conditions
for test_lv in lvs:
if test_lv['name'] == thinpool:
break
else:
module.fail_json(msg="Thin pool LV %s does not exist in volume group %s." % (thinpool, vg))
check_lv = lv
else:
check_lv = thinpool
else:
check_lv = lv
for test_lv in lvs:
if test_lv['name'] in (check_lv, check_lv.rsplit('/', 1)[-1]):
this_lv = test_lv
break
else:
this_lv = None
msg = ''
if this_lv is None:
if state == 'present':
# Require size argument except for snapshot of thin volumes
if (lv or thinpool) and not size:
for test_lv in lvs:
if test_lv['name'] == lv and test_lv['thinvol'] and snapshot:
break
else:
module.fail_json(msg="No size given.")
# create LV
lvcreate_cmd = module.get_bin_path("lvcreate", required=True)
if snapshot is not None:
if size:
cmd = "%s %s %s -%s %s%s -s -n %s %s %s/%s" % (lvcreate_cmd, test_opt, yesopt, size_opt, size, size_unit, snapshot, opts, vg, lv)
else:
cmd = "%s %s %s -s -n %s %s %s/%s" % (lvcreate_cmd, test_opt, yesopt, snapshot, opts, vg, lv)
elif thinpool and lv:
if size_opt == 'l':
module.fail_json(changed=False, msg="Thin volume sizing with percentage not supported.")
size_opt = 'V'
cmd = "%s %s -n %s -%s %s%s %s -T %s/%s" % (lvcreate_cmd, yesopt, lv, size_opt, size, size_unit, opts, vg, thinpool)
elif thinpool and not lv:
cmd = "%s %s -%s %s%s %s -T %s/%s" % (lvcreate_cmd, yesopt, size_opt, size, size_unit, opts, vg, thinpool)
else:
cmd = "%s %s %s -n %s -%s %s%s %s %s %s" % (lvcreate_cmd, test_opt, yesopt, lv, size_opt, size, size_unit, opts, vg, pvs)
rc, _, err = module.run_command(cmd)
if rc == 0:
changed = True
else:
module.fail_json(msg="Creating logical volume '%s' failed" % lv, rc=rc, err=err)
else:
if state == 'absent':
# remove LV
if not force:
module.fail_json(msg="Sorry, no removal of logical volume %s without force=yes." % (this_lv['name']))
lvremove_cmd = module.get_bin_path("lvremove", required=True)
rc, _, err = module.run_command("%s %s --force %s/%s" % (lvremove_cmd, test_opt, vg, this_lv['name']))
if rc == 0:
module.exit_json(changed=True)
else:
module.fail_json(msg="Failed to remove logical volume %s" % (lv), rc=rc, err=err)
elif not size:
pass
elif size_opt == 'l':
# Resize LV based on % value
tool = None
size_free = this_vg['free']
if size_whole == 'VG' or size_whole == 'PVS':
size_requested = size_percent * this_vg['size'] / 100
else: # size_whole == 'FREE':
size_requested = size_percent * this_vg['free'] / 100
if '+' in size:
size_requested += this_lv['size']
if this_lv['size'] < size_requested:
if (size_free > 0) and (('+' not in size) or (size_free >= (size_requested - this_lv['size']))):
tool = module.get_bin_path("lvextend", required=True)
else:
module.fail_json(
msg="Logical Volume %s could not be extended. Not enough free space left (%s%s required / %s%s available)" %
(this_lv['name'], (size_requested - this_lv['size']), unit, size_free, unit)
)
elif shrink and this_lv['size'] > size_requested + this_vg['ext_size']: # more than an extent too large
if size_requested == 0:
module.fail_json(msg="Sorry, no shrinking of %s to 0 permitted." % (this_lv['name']))
elif not force:
module.fail_json(msg="Sorry, no shrinking of %s without force=yes" % (this_lv['name']))
else:
tool = module.get_bin_path("lvreduce", required=True)
tool = '%s %s' % (tool, '--force')
if tool:
if resizefs:
tool = '%s %s' % (tool, '--resizefs')
cmd = "%s %s -%s %s%s %s/%s %s" % (tool, test_opt, size_opt, size, size_unit, vg, this_lv['name'], pvs)
rc, out, err = module.run_command(cmd)
if "Reached maximum COW size" in out:
module.fail_json(msg="Unable to resize %s to %s%s" % (lv, size, size_unit), rc=rc, err=err, out=out)
elif rc == 0:
changed = True
msg = "Volume %s resized to %s%s" % (this_lv['name'], size_requested, unit)
elif "matches existing size" in err:
module.exit_json(changed=False, vg=vg, lv=this_lv['name'], size=this_lv['size'])
elif "not larger than existing size" in err:
module.exit_json(changed=False, vg=vg, lv=this_lv['name'], size=this_lv['size'], msg="Original size is larger than requested size", err=err)
else:
module.fail_json(msg="Unable to resize %s to %s%s" % (lv, size, size_unit), rc=rc, err=err)
else:
# resize LV based on absolute values
tool = None
if locale.atof(size) > this_lv['size']:
tool = module.get_bin_path("lvextend", required=True)
elif shrink and locale.atof(size) < this_lv['size']:
if locale.atof(size) == 0:
module.fail_json(msg="Sorry, no shrinking of %s to 0 permitted." % (this_lv['name']))
if not force:
module.fail_json(msg="Sorry, no shrinking of %s without force=yes." % (this_lv['name']))
else:
tool = module.get_bin_path("lvreduce", required=True)
tool = '%s %s' % (tool, '--force')
if tool:
if resizefs:
tool = '%s %s' % (tool, '--resizefs')
cmd = "%s %s -%s %s%s %s/%s %s" % (tool, test_opt, size_opt, size, size_unit, vg, this_lv['name'], pvs)
rc, out, err = module.run_command(cmd)
if "Reached maximum COW size" in out:
module.fail_json(msg="Unable to resize %s to %s%s" % (lv, size, size_unit), rc=rc, err=err, out=out)
elif rc == 0:
changed = True
elif "matches existing size" in err:
module.exit_json(changed=False, vg=vg, lv=this_lv['name'], size=this_lv['size'])
elif "not larger than existing size" in err:
module.exit_json(changed=False, vg=vg, lv=this_lv['name'], size=this_lv['size'], msg="Original size is larger than requested size", err=err)
else:
module.fail_json(msg="Unable to resize %s to %s%s" % (lv, size, size_unit), rc=rc, err=err)
if this_lv is not None:
if active:
lvchange_cmd = module.get_bin_path("lvchange", required=True)
rc, _, err = module.run_command("%s -ay %s/%s" % (lvchange_cmd, vg, this_lv['name']))
if rc == 0:
module.exit_json(changed=((not this_lv['active']) or changed), vg=vg, lv=this_lv['name'], size=this_lv['size'])
else:
module.fail_json(msg="Failed to activate logical volume %s" % (lv), rc=rc, err=err)
else:
lvchange_cmd = module.get_bin_path("lvchange", required=True)
rc, _, err = module.run_command("%s -an %s/%s" % (lvchange_cmd, vg, this_lv['name']))
if rc == 0:
module.exit_json(changed=(this_lv['active'] or changed), vg=vg, lv=this_lv['name'], size=this_lv['size'])
else:
module.fail_json(msg="Failed to deactivate logical volume %s" % (lv), rc=rc, err=err)
module.exit_json(changed=changed, msg=msg)
if __name__ == '__main__':
main()
|
andmos/ansible
|
lib/ansible/modules/system/lvol.py
|
Python
|
gpl-3.0
| 19,500
|
[
"Firefly"
] |
024d2a98807f9e526c6e3e5a22fc370a3fbf5b9bd253cbc33044dc82fb579f01
|
import re
import string
import os.path
import os
import shutil
import multiprocessing
import pysam
import pprint
import time
class Fasta(object):
def __init__(self, fa=''):
if os.path.isfile(fa):
self.filename = fa
else:
raise RuntimeError("Fasta: \'{:s}\' is not found".format(fa))
def fasta_header(self):
header = []
with open(self.filename, 'r') as f:
for line in f:
if line.startswith('>'):
head = line.replace('>', '', 1).rstrip()
header.append(head)
return header
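# Usage sketch (the filename is hypothetical):
#   fa = Fasta('hg19.fa')
#   fa.fasta_header()  # -> ['chr1', 'chr2', ...]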
def split_by_blocks(self, n=1):
'''
Split the fasta into blocked files, one file per chromosome block.
Args:
n(int): number of blocks (typically the number of cpus)
'''
self.save_path = './block_fasta/'
if not os.path.isdir(self.save_path):
os.mkdir(self.save_path)
block_list = self.generate_chrom_blocks(n)
count = 0
block_size = len(block_list)
if block_size < 1:
raise ValueError("Block size must be at least 1")
for block in block_list:
name = "-".join(block)
out = open(self.save_path + str(block_size) + '_' + name + '.fa', 'w')
fa = open(self.filename, 'r')
write_record = False
for line in fa:
if line.startswith('>'):
head = line.replace('>', '', 1).rstrip()
# copy a record (header plus sequence lines) only while its header is in this block
write_record = head in block
if write_record:
out.write(line)
out.close()
fa.close()
block_size -= 1
def chr_size(self):
return len(self.fasta_header())
def split_by_length(self, length):
pass
def generate_chrom_blocks(self, cpus):
'''
Args:
cpus(int): number of cpus
Returns:
chrom(list): chromosome names split into one block per cpu
'''
MAX_CPUs = multiprocessing.cpu_count()
#MAX_CPUs = 24
if cpus > MAX_CPUs:
raise RuntimeError("Over the number of cpus are given")
human_chr = self.fasta_header()
num_threads = cpus
chr_size = len(human_chr)
try:
div, mod = divmod(chr_size, num_threads)
except ZeroDivisionError as e:
raise SystemExit(e)
start = 0
end = div
result, overflow = [], []
counter = num_threads
for i in range(0, chr_size):
if len(human_chr[start:end]):
if mod == 0:
result.append(human_chr[start:end])
end += div
start += div
counter -= 1
elif mod >= 1:
if counter > 0:
result.append(human_chr[start:end])
start += div
end += div
counter -= 1
else:
overflow.append(human_chr[start:end])
start += div
end += div
counter -= 1
def __merge_list(norm, over):
flatten = reduce(lambda x, y: x + y, over)
for i, _ in enumerate(norm):
for j, _ in enumerate(flatten):
norm[i].append(flatten[j])
del flatten[j]
break
return norm
if len(overflow) > 0:
return __merge_list(result, overflow)
elif len(overflow) == 0:
return result
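# Illustration of the splitting above: 5 headers on 2 cpus gives div=2,
# mod=1, so the overflow block ['chr5'] is folded back into an earlier
# block, e.g. [['chr1', 'chr2', 'chr5'], ['chr3', 'chr4']].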
def decode_chr_name_from_file(chroms):
'''
Args:
chroms(list): blocked fasta file names, e.g. 1_chr18-chr19-chr20.fa
Returns:
chromosome name(list): [1, chr18, chr19, chr20];
note that the 1st element is the serial number taken from the blocked fasta file name
'''
tmp = []
decode = []
files = [f.split("-") for f in chroms]
for fh in files:
for chrm in fh:
# 1_chrN type
p = re.compile(r'^(\d)+_{1}(chr.+)')
ma = p.findall(chrm)
if ma:
index, name = ma[0]
tmp.append(index)
tmp.append(name)
else:
# chrN type or chrN.fa
p = re.compile(r'^(chr[A-Za-z0-9]+)(?!=\.fa)')
ma = p.search(chrm)
if ma:
name = ma.group(0)
tmp.append(name)
decode.append(tmp)
tmp = []
return decode
def as_single(genome):
human_chr = ['chrM', 'chr1', 'chr2', 'chr3', 'chr4', 'chr5', 'chr6',
'chr7', 'chr8', 'chr9', 'chr10', 'chr11', 'chr12', 'chr13',
'chr14', 'chr15', 'chr16', 'chr17', 'chr18', 'chr19', 'chr20',
'chr21', 'chr22', 'chrX', 'chrY']
fafile = pysam.Fastafile(genome)
for c in human_chr:
l = len(fafile.fetch(reference=c, start=1, end=1000000000))
print "Chr: %s, Length: %d" % (c, l)
### Fetching fasta file by multiprocessing ###
def fetch_seq(fa):
'''
Fetch pileup columns for every chromosome encoded in a blocked fasta file.
Args:
fa(str): blocked fasta file name located under ./block_fasta/
'''
print multiprocessing.current_process()
path = './block_fasta/'
fafile = pysam.Fastafile(os.path.join(path, fa))
chroms = decode_chr_name_from_file([fa])
bam_file = '../../../data/sample_0.005.bam'
bam = pysam.Samfile(bam_file, 'rb')
seq = []
align_len = 0
for out in chroms:
for inn in out[1:]:
for col in bam.pileup(reference=inn):
ref = fafile.fetch(reference=inn, start=col.pos, end=col.pos+1)
align_len += len([_ for _ in col.pileups])
print "Result: Total mapped reads [%d] in [%s]" % (align_len, inn)
def run(cpus, fas):
'''
Args:
cpus(int): number of cpus
fas(list): blocked fasta file names to process in parallel
'''
p = multiprocessing.Pool(cpus)
seq = p.map(fetch_seq, fas)
return seq
def get_fa_list(path):
'''
Args:
path(str): directory containing blocked fasta files
Returns:
files(list): only the .fa files located in the given path
'''
fa = []
for fname in os.listdir(path):
if fname.endswith(".fa"):
fa.append(fname)
return fa
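# Sketch of the intended driver, assuming blocked fasta files were already
# written by Fasta.split_by_blocks():
#   fas = get_fa_list('./block_fasta/')
#   run(multiprocessing.cpu_count(), fas)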
|
soh-i/Ivy
|
src/Ivy/io/fasta.py
|
Python
|
gpl-2.0
| 6,403
|
[
"pysam"
] |
9ae1a87386982ea43eed9aca7fa2258c30e8a4db3c37e65f6342df6247df77af
|
# Authors: Matti Hämäläinen <msh@nmr.mgh.harvard.edu>
# Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Eric Larson <larson.eric.d@gmail.com>
# Lorenzo De Santis <lorenzo.de-santis@u-psud.fr>
#
# License: BSD (3-clause)
# The computations in this code were primarily derived from Matti Hämäläinen's
# C code.
from collections import OrderedDict
from functools import partial
import glob
import os
import os.path as op
import shutil
from copy import deepcopy
import numpy as np
from .io.constants import FIFF, FWD
from .io._digitization import _dig_kind_dict, _dig_kind_rev, _dig_kind_ints
from .io.write import (start_file, start_block, write_float, write_int,
write_float_matrix, write_int_matrix, end_block,
end_file)
from .io.tag import find_tag
from .io.tree import dir_tree_find
from .io.open import fiff_open
from .surface import (read_surface, write_surface, complete_surface_info,
_compute_nearest, _get_ico_surface, read_tri,
_fast_cross_nd_sum, _get_solids, _complete_sphere_surf)
from .transforms import _ensure_trans, apply_trans, Transform
from .utils import (verbose, logger, run_subprocess, get_subjects_dir, warn,
_pl, _validate_type, _TempDir, _check_freesurfer_home,
_check_fname, has_nibabel, _check_option, path_like,
_on_missing)
from .fixes import einsum
from .externals.h5io import write_hdf5, read_hdf5
# ############################################################################
# Compute BEM solution
# The following approach is based on:
#
# de Munck JC: "A linear discretization of the volume conductor boundary
# integral equation using analytically integrated elements",
# IEEE Trans Biomed Eng. 1992 39(9) : 986 - 990
#
class ConductorModel(dict):
"""BEM or sphere model."""
def __repr__(self): # noqa: D105
if self['is_sphere']:
center = ', '.join('%0.1f' % (x * 1000.) for x in self['r0'])
rad = self.radius
if rad is None: # no radius / MEG only
extra = 'Sphere (no layers): r0=[%s] mm' % center
else:
extra = ('Sphere (%s layer%s): r0=[%s] R=%1.f mm'
% (len(self['layers']) - 1, _pl(self['layers']),
center, rad * 1000.))
else:
extra = ('BEM (%s layer%s)' % (len(self['surfs']),
_pl(self['surfs'])))
return '<ConductorModel | %s>' % extra
def copy(self):
"""Return copy of ConductorModel instance."""
return deepcopy(self)
@property
def radius(self):
"""Sphere radius if an EEG sphere model."""
if not self['is_sphere']:
raise RuntimeError('radius undefined for BEM')
return None if len(self['layers']) == 0 else self['layers'][-1]['rad']
def _calc_beta(rk, rk_norm, rk1, rk1_norm):
"""Compute coefficients for calculating the magic vector omega."""
rkk1 = rk1[0] - rk[0]
size = np.linalg.norm(rkk1)
rkk1 /= size
num = rk_norm + np.dot(rk, rkk1)
den = rk1_norm + np.dot(rk1, rkk1)
res = np.log(num / den) / size
return res
def _lin_pot_coeff(fros, tri_rr, tri_nn, tri_area):
"""Compute the linear potential matrix element computations."""
omega = np.zeros((len(fros), 3))
# we replicate a little bit of the _get_solids code here for speed
# (we need some of the intermediate values later)
v1 = tri_rr[np.newaxis, 0, :] - fros
v2 = tri_rr[np.newaxis, 1, :] - fros
v3 = tri_rr[np.newaxis, 2, :] - fros
triples = _fast_cross_nd_sum(v1, v2, v3)
l1 = np.linalg.norm(v1, axis=1)
l2 = np.linalg.norm(v2, axis=1)
l3 = np.linalg.norm(v3, axis=1)
ss = l1 * l2 * l3
ss += einsum('ij,ij,i->i', v1, v2, l3)
ss += einsum('ij,ij,i->i', v1, v3, l2)
ss += einsum('ij,ij,i->i', v2, v3, l1)
solids = np.arctan2(triples, ss)
# We *could* subselect the good points from v1, v2, v3, triples, solids,
# l1, l2, and l3, but there are *very* few bad points. So instead we do
# some unnecessary calculations, and then omit them from the final
# solution. These three lines ensure we don't get invalid values in
# _calc_beta.
bad_mask = np.abs(solids) < np.pi / 1e6
l1[bad_mask] = 1.
l2[bad_mask] = 1.
l3[bad_mask] = 1.
# Calculate the magic vector vec_omega
beta = [_calc_beta(v1, l1, v2, l2)[:, np.newaxis],
_calc_beta(v2, l2, v3, l3)[:, np.newaxis],
_calc_beta(v3, l3, v1, l1)[:, np.newaxis]]
vec_omega = (beta[2] - beta[0]) * v1
vec_omega += (beta[0] - beta[1]) * v2
vec_omega += (beta[1] - beta[2]) * v3
area2 = 2.0 * tri_area
n2 = 1.0 / (area2 * area2)
# leave omega = 0 otherwise
# Put it all together...
yys = [v1, v2, v3]
idx = [0, 1, 2, 0, 2]
for k in range(3):
diff = yys[idx[k - 1]] - yys[idx[k + 1]]
zdots = _fast_cross_nd_sum(yys[idx[k + 1]], yys[idx[k - 1]], tri_nn)
omega[:, k] = -n2 * (area2 * zdots * 2. * solids -
triples * (diff * vec_omega).sum(axis=-1))
# omit the bad points from the solution
omega[bad_mask] = 0.
return omega
def _correct_auto_elements(surf, mat):
"""Improve auto-element approximation."""
pi2 = 2.0 * np.pi
tris_flat = surf['tris'].ravel()
misses = pi2 - mat.sum(axis=1)
for j, miss in enumerate(misses):
# How much is missing?
n_memb = len(surf['neighbor_tri'][j])
assert n_memb > 0 # should be guaranteed by our surface checks
# The node itself receives one half
mat[j, j] = miss / 2.0
# The rest is divided evenly among the member nodes...
miss /= (4.0 * n_memb)
members = np.where(j == tris_flat)[0]
mods = members % 3
offsets = np.array([[1, 2], [-1, 1], [-1, -2]])
tri_1 = members + offsets[mods, 0]
tri_2 = members + offsets[mods, 1]
for t1, t2 in zip(tri_1, tri_2):
mat[j, tris_flat[t1]] += miss
mat[j, tris_flat[t2]] += miss
return
def _fwd_bem_lin_pot_coeff(surfs):
"""Calculate the coefficients for linear collocation approach."""
# taken from fwd_bem_linear_collocation.c
nps = [surf['np'] for surf in surfs]
np_tot = sum(nps)
coeff = np.zeros((np_tot, np_tot))
offsets = np.cumsum(np.concatenate(([0], nps)))
for si_1, surf1 in enumerate(surfs):
rr_ord = np.arange(nps[si_1])
for si_2, surf2 in enumerate(surfs):
logger.info(" %s (%d) -> %s (%d) ..." %
(_bem_surf_name[surf1['id']], nps[si_1],
_bem_surf_name[surf2['id']], nps[si_2]))
tri_rr = surf2['rr'][surf2['tris']]
tri_nn = surf2['tri_nn']
tri_area = surf2['tri_area']
submat = coeff[offsets[si_1]:offsets[si_1 + 1],
offsets[si_2]:offsets[si_2 + 1]] # view
for k in range(surf2['ntri']):
tri = surf2['tris'][k]
if si_1 == si_2:
skip_idx = ((rr_ord == tri[0]) |
(rr_ord == tri[1]) |
(rr_ord == tri[2]))
else:
skip_idx = list()
# No contribution from a triangle that
# this vertex belongs to
# if sidx1 == sidx2 and (tri == j).any():
# continue
# Otherwise do the hard job
coeffs = _lin_pot_coeff(fros=surf1['rr'], tri_rr=tri_rr[k],
tri_nn=tri_nn[k], tri_area=tri_area[k])
coeffs[skip_idx] = 0.
submat[:, tri] -= coeffs
if si_1 == si_2:
_correct_auto_elements(surf1, submat)
return coeff
def _fwd_bem_multi_solution(solids, gamma, nps):
"""Do multi surface solution.
* Invert I - solids/(2*M_PI)
* Take deflation into account
* The matrix is destroyed after inversion
* This is the general multilayer case
"""
pi2 = 1.0 / (2 * np.pi)
n_tot = np.sum(nps)
assert solids.shape == (n_tot, n_tot)
nsurf = len(nps)
defl = 1.0 / n_tot
# Modify the matrix
offsets = np.cumsum(np.concatenate(([0], nps)))
for si_1 in range(nsurf):
for si_2 in range(nsurf):
mult = pi2 if gamma is None else pi2 * gamma[si_1, si_2]
slice_j = slice(offsets[si_1], offsets[si_1 + 1])
slice_k = slice(offsets[si_2], offsets[si_2 + 1])
solids[slice_j, slice_k] = defl - solids[slice_j, slice_k] * mult
solids += np.eye(n_tot)
return np.linalg.inv(solids)
def _fwd_bem_homog_solution(solids, nps):
"""Make a homogeneous solution."""
return _fwd_bem_multi_solution(solids, gamma=None, nps=nps)
def _fwd_bem_ip_modify_solution(solution, ip_solution, ip_mult, n_tri):
"""Modify the solution according to the IP approach."""
n_last = n_tri[-1]
mult = (1.0 + ip_mult) / ip_mult
logger.info(' Combining...')
offsets = np.cumsum(np.concatenate(([0], n_tri)))
for si in range(len(n_tri)):
# Pick the correct submatrix (right column) and multiply
sub = solution[offsets[si]:offsets[si + 1], np.sum(n_tri[:-1]):]
# Multiply
sub -= 2 * np.dot(sub, ip_solution)
# The lower right corner is a special case
sub[-n_last:, -n_last:] += mult * ip_solution
# Final scaling
logger.info(' Scaling...')
solution *= ip_mult
return
def _check_complete_surface(surf, copy=False, incomplete='raise', extra=''):
surf = complete_surface_info(surf, copy=copy, verbose=False)
fewer = np.where([len(t) < 3 for t in surf['neighbor_tri']])[0]
if len(fewer) > 0:
msg = ('Surface {} has topological defects: {:.0f} / {:.0f} vertices '
'have fewer than three neighboring triangles [{}]{}'
.format(_bem_surf_name[surf['id']], len(fewer), surf['ntri'],
', '.join(str(f) for f in fewer), extra))
if incomplete == 'raise':
raise RuntimeError(msg)
else:
warn(msg)
return surf
def _fwd_bem_linear_collocation_solution(bem):
"""Compute the linear collocation potential solution."""
# first, add surface geometries
for surf in bem['surfs']:
_check_complete_surface(surf)
logger.info('Computing the linear collocation solution...')
logger.info(' Matrix coefficients...')
coeff = _fwd_bem_lin_pot_coeff(bem['surfs'])
bem['nsol'] = len(coeff)
logger.info(" Inverting the coefficient matrix...")
nps = [surf['np'] for surf in bem['surfs']]
bem['solution'] = _fwd_bem_multi_solution(coeff, bem['gamma'], nps)
if len(bem['surfs']) == 3:
ip_mult = bem['sigma'][1] / bem['sigma'][2]
if ip_mult <= FWD.BEM_IP_APPROACH_LIMIT:
logger.info('IP approach required...')
logger.info(' Matrix coefficients (homog)...')
coeff = _fwd_bem_lin_pot_coeff([bem['surfs'][-1]])
logger.info(' Inverting the coefficient matrix (homog)...')
ip_solution = _fwd_bem_homog_solution(coeff,
[bem['surfs'][-1]['np']])
logger.info(' Modify the original solution to incorporate '
'IP approach...')
_fwd_bem_ip_modify_solution(bem['solution'], ip_solution, ip_mult,
nps)
bem['bem_method'] = FWD.BEM_LINEAR_COLL
logger.info("Solution ready.")
@verbose
def make_bem_solution(surfs, verbose=None):
"""Create a BEM solution using the linear collocation approach.
Parameters
----------
surfs : list of dict
The BEM surfaces to use (from :func:`mne.make_bem_model`).
%(verbose)s
Returns
-------
bem : instance of ConductorModel
The BEM solution.
See Also
--------
make_bem_model
read_bem_surfaces
write_bem_surfaces
read_bem_solution
write_bem_solution
Notes
-----
.. versionadded:: 0.10.0
"""
logger.info('Approximation method : Linear collocation\n')
bem = _ensure_bem_surfaces(surfs)
_add_gamma_multipliers(bem)
if len(bem['surfs']) == 3:
logger.info('Three-layer model surfaces loaded.')
elif len(bem['surfs']) == 1:
logger.info('Homogeneous model surface loaded.')
else:
raise RuntimeError('Only 1- or 3-layer BEM computations supported')
_check_bem_size(bem['surfs'])
_fwd_bem_linear_collocation_solution(bem)
logger.info('BEM geometry computations complete.')
return bem
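# Usage sketch for the public entry points in this section (the subject
# name is hypothetical):
#   surfs = make_bem_model('sample', ico=4, conductivity=(0.3, 0.006, 0.3))
#   bem = make_bem_solution(surfs)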
# ############################################################################
# Make BEM model
def _ico_downsample(surf, dest_grade):
"""Downsample the surface if isomorphic to a subdivided icosahedron."""
n_tri = len(surf['tris'])
bad_msg = ("Cannot decimate to requested ico grade %d. The provided "
"BEM surface has %d triangles, which cannot be isomorphic with "
"a subdivided icosahedron. Consider manually decimating the "
"surface to a suitable density and then use ico=None in "
"make_bem_model." % (dest_grade, n_tri))
if n_tri % 20 != 0:
raise RuntimeError(bad_msg)
n_tri = n_tri // 20
found = int(round(np.log(n_tri) / np.log(4)))
if n_tri != 4 ** found:
raise RuntimeError(bad_msg)
del n_tri
if dest_grade > found:
raise RuntimeError('For this surface, decimation grade should be %d '
'or less, not %s.' % (found, dest_grade))
source = _get_ico_surface(found)
dest = _get_ico_surface(dest_grade, patch_stats=True)
del dest['tri_cent']
del dest['tri_nn']
del dest['neighbor_tri']
del dest['tri_area']
if not np.array_equal(source['tris'], surf['tris']):
raise RuntimeError('The source surface has a matching number of '
'triangles but ordering is wrong')
logger.info('Going from %dth to %dth subdivision of an icosahedron '
'(n_tri: %d -> %d)' % (found, dest_grade, len(surf['tris']),
len(dest['tris'])))
# Find the mapping
dest['rr'] = surf['rr'][_get_ico_map(source, dest)]
return dest
def _get_ico_map(fro, to):
"""Get a mapping between ico surfaces."""
nearest, dists = _compute_nearest(fro['rr'], to['rr'], return_dists=True)
n_bads = (dists > 5e-3).sum()
if n_bads > 0:
raise RuntimeError('No matching vertex for %d destination vertices'
% (n_bads))
return nearest
def _order_surfaces(surfs):
"""Reorder the surfaces."""
if len(surfs) != 3:
return surfs
# we have three surfaces
surf_order = [FIFF.FIFFV_BEM_SURF_ID_HEAD,
FIFF.FIFFV_BEM_SURF_ID_SKULL,
FIFF.FIFFV_BEM_SURF_ID_BRAIN]
ids = np.array([surf['id'] for surf in surfs])
if set(ids) != set(surf_order):
raise RuntimeError('bad surface ids: %s' % ids)
order = [np.where(ids == id_)[0][0] for id_ in surf_order]
surfs = [surfs[idx] for idx in order]
return surfs
def _assert_complete_surface(surf, incomplete='raise'):
"""Check the sum of solid angles as seen from inside."""
# from surface_checks.c
# Center of mass....
cm = surf['rr'].mean(axis=0)
logger.info('%s CM is %6.2f %6.2f %6.2f mm' %
(_bem_surf_name[surf['id']],
1000 * cm[0], 1000 * cm[1], 1000 * cm[2]))
tot_angle = _get_solids(surf['rr'][surf['tris']], cm[np.newaxis, :])[0]
prop = tot_angle / (2 * np.pi)
if np.abs(prop - 1.0) > 1e-5:
msg = (f'Surface {_bem_surf_name[surf["id"]]} is not complete (sum of '
f'solid angles yielded {prop}, should be 1.)')
_on_missing(
incomplete, msg, name='incomplete', error_klass=RuntimeError)
def _assert_inside(fro, to):
"""Check one set of points is inside a surface."""
# this is "is_inside" in surface_checks.c
fro_name = _bem_surf_name[fro["id"]]
to_name = _bem_surf_name[to["id"]]
logger.info(
f'Checking that surface {fro_name} is inside surface {to_name} ...')
tot_angle = _get_solids(to['rr'][to['tris']], fro['rr'])
if (np.abs(tot_angle / (2 * np.pi) - 1.0) > 1e-5).any():
raise RuntimeError(
f'Surface {fro_name} is not completely inside surface {to_name}')
def _check_surfaces(surfs, incomplete='raise'):
"""Check that the surfaces are complete and non-intersecting."""
for surf in surfs:
_assert_complete_surface(surf, incomplete=incomplete)
# Then check the topology
for surf_1, surf_2 in zip(surfs[:-1], surfs[1:]):
_assert_inside(surf_2, surf_1)
def _check_surface_size(surf):
"""Check that the coordinate limits are reasonable."""
sizes = surf['rr'].max(axis=0) - surf['rr'].min(axis=0)
if (sizes < 0.05).any():
raise RuntimeError(
f'Dimensions of the surface {_bem_surf_name[surf["id"]]} seem too '
f'small ({1000 * sizes.min():9.5f}). Maybe the unit of measure'
' is meters instead of mm')
def _check_thicknesses(surfs):
"""Compute how close we are."""
for surf_1, surf_2 in zip(surfs[:-1], surfs[1:]):
min_dist = _compute_nearest(surf_1['rr'], surf_2['rr'],
return_dists=True)[1]
min_dist = min_dist.min()
fro = _bem_surf_name[surf_1['id']]
to = _bem_surf_name[surf_2['id']]
logger.info(f'Checking distance between {fro} and {to} surfaces...')
logger.info(f'Minimum distance between the {fro} and {to} surfaces is '
f'approximately {1000 * min_dist:6.1f} mm')
def _surfaces_to_bem(surfs, ids, sigmas, ico=None, rescale=True,
incomplete='raise', extra=''):
"""Convert surfaces to a BEM."""
# equivalent of mne_surf2bem
# surfs can be strings (filenames) or surface dicts
if len(surfs) not in (1, 3) or not (len(surfs) == len(ids) ==
len(sigmas)):
raise ValueError('surfs, ids, and sigmas must all have the same '
'number of elements (1 or 3)')
for si, surf in enumerate(surfs):
if isinstance(surf, str):
surfs[si] = read_surface(surf, return_dict=True)[-1]
# Downsampling if the surface is isomorphic with a subdivided icosahedron
if ico is not None:
for si, surf in enumerate(surfs):
surfs[si] = _ico_downsample(surf, ico)
for surf, id_ in zip(surfs, ids):
# Do topology checks (but don't save data) to fail early
surf['id'] = id_
_check_complete_surface(surf, copy=True, incomplete=incomplete,
extra=extra)
surf['coord_frame'] = surf.get('coord_frame', FIFF.FIFFV_COORD_MRI)
surf.update(np=len(surf['rr']), ntri=len(surf['tris']))
if rescale:
surf['rr'] /= 1000. # convert to meters
# Shifting surfaces is not implemented here...
# Order the surfaces for the benefit of the topology checks
for surf, sigma in zip(surfs, sigmas):
surf['sigma'] = sigma
surfs = _order_surfaces(surfs)
# Check topology as best we can
_check_surfaces(surfs, incomplete=incomplete)
for surf in surfs:
_check_surface_size(surf)
_check_thicknesses(surfs)
logger.info('Surfaces passed the basic topology checks.')
return surfs
@verbose
def make_bem_model(subject, ico=4, conductivity=(0.3, 0.006, 0.3),
subjects_dir=None, verbose=None):
"""Create a BEM model for a subject.
.. note:: To get a single layer bem corresponding to the --homog flag in
the command line tool set the ``conductivity`` parameter
to a list/tuple with a single value (e.g. [0.3]).
Parameters
----------
subject : str
The subject.
ico : int | None
The surface ico downsampling to use, e.g. 5=20484, 4=5120, 3=1280.
If None, no subsampling is applied.
conductivity : array of int, shape (3,) or (1,)
The conductivities to use for each shell. Should be a single element
for a one-layer model, or three elements for a three-layer model.
Defaults to ``[0.3, 0.006, 0.3]``. The MNE-C default for a
single-layer model would be ``[0.3]``.
%(subjects_dir)s
%(verbose)s
Returns
-------
surfaces : list of dict
The BEM surfaces. Use `make_bem_solution` to turn these into a
`~mne.bem.ConductorModel` suitable for forward calculation.
See Also
--------
make_bem_solution
make_sphere_model
read_bem_surfaces
write_bem_surfaces
Notes
-----
.. versionadded:: 0.10.0
"""
conductivity = np.array(conductivity, float)
if conductivity.ndim != 1 or conductivity.size not in (1, 3):
raise ValueError('conductivity must be 1D array-like with 1 or 3 '
'elements')
subjects_dir = get_subjects_dir(subjects_dir, raise_error=True)
subject_dir = op.join(subjects_dir, subject)
bem_dir = op.join(subject_dir, 'bem')
inner_skull = op.join(bem_dir, 'inner_skull.surf')
outer_skull = op.join(bem_dir, 'outer_skull.surf')
outer_skin = op.join(bem_dir, 'outer_skin.surf')
surfaces = [inner_skull, outer_skull, outer_skin]
ids = [FIFF.FIFFV_BEM_SURF_ID_BRAIN,
FIFF.FIFFV_BEM_SURF_ID_SKULL,
FIFF.FIFFV_BEM_SURF_ID_HEAD]
logger.info('Creating the BEM geometry...')
if len(conductivity) == 1:
surfaces = surfaces[:1]
ids = ids[:1]
surfaces = _surfaces_to_bem(surfaces, ids, conductivity, ico)
_check_bem_size(surfaces)
logger.info('Complete.\n')
return surfaces
# ############################################################################
# Compute EEG sphere model
def _fwd_eeg_get_multi_sphere_model_coeffs(m, n_terms):
"""Get the model depended weighting factor for n."""
nlayer = len(m['layers'])
if nlayer in (0, 1):
return 1.
# Initialize the arrays
c1 = np.zeros(nlayer - 1)
c2 = np.zeros(nlayer - 1)
cr = np.zeros(nlayer - 1)
cr_mult = np.zeros(nlayer - 1)
for k in range(nlayer - 1):
c1[k] = m['layers'][k]['sigma'] / m['layers'][k + 1]['sigma']
c2[k] = c1[k] - 1.0
cr_mult[k] = m['layers'][k]['rel_rad']
cr[k] = cr_mult[k]
cr_mult[k] *= cr_mult[k]
coeffs = np.zeros(n_terms - 1)
for n in range(1, n_terms):
# Increment the radius coefficients
for k in range(nlayer - 1):
cr[k] *= cr_mult[k]
# Multiply the matrices
M = np.eye(2)
n1 = n + 1.0
for k in range(nlayer - 2, -1, -1):
M = np.dot([[n + n1 * c1[k], n1 * c2[k] / cr[k]],
[n * c2[k] * cr[k], n1 + n * c1[k]]], M)
num = n * (2.0 * n + 1.0) ** (nlayer - 1)
coeffs[n - 1] = num / (n * M[1, 1] + n1 * M[1, 0])
return coeffs
def _compose_linear_fitting_data(mu, u):
"""Get the linear fitting data."""
from scipy import linalg
k1 = np.arange(1, u['nterms'])
mu1ns = mu[0] ** k1
# data to be fitted
y = u['w'][:-1] * (u['fn'][1:] - mu1ns * u['fn'][0])
# model matrix
M = u['w'][:-1, np.newaxis] * (mu[1:] ** k1[:, np.newaxis] -
mu1ns[:, np.newaxis])
uu, sing, vv = linalg.svd(M, full_matrices=False)
ncomp = u['nfit'] - 1
uu, sing, vv = uu[:, :ncomp], sing[:ncomp], vv[:ncomp]
return y, uu, sing, vv
def _compute_linear_parameters(mu, u):
"""Compute the best-fitting linear parameters."""
y, uu, sing, vv = _compose_linear_fitting_data(mu, u)
# Compute the residuals
vec = np.dot(y, uu)
resi = y - np.dot(uu, vec)
vec /= sing
lambda_ = np.zeros(u['nfit'])
lambda_[1:] = np.dot(vec, vv)
lambda_[0] = u['fn'][0] - np.sum(lambda_[1:])
rv = np.dot(resi, resi) / np.dot(y, y)
return rv, lambda_
def _one_step(mu, u):
"""Evaluate the residual sum of squares fit for one set of mu values."""
if np.abs(mu).max() > 1.0:
return 1.0
# Compose the data for the linear fitting, compute SVD, then residuals
y, uu, sing, vv = _compose_linear_fitting_data(mu, u)
resi = y - np.dot(uu, np.dot(y, uu))
return np.dot(resi, resi)
def _fwd_eeg_fit_berg_scherg(m, nterms, nfit):
"""Fit the Berg-Scherg equivalent spherical model dipole parameters."""
from scipy.optimize import fmin_cobyla
assert nfit >= 2
u = dict(nfit=nfit, nterms=nterms)
# (1) Calculate the coefficients of the true expansion
u['fn'] = _fwd_eeg_get_multi_sphere_model_coeffs(m, nterms + 1)
# (2) Calculate the weighting
f = (min([layer['rad'] for layer in m['layers']]) /
max([layer['rad'] for layer in m['layers']]))
# correct weighting
k = np.arange(1, nterms + 1)
u['w'] = np.sqrt((2.0 * k + 1) * (3.0 * k + 1.0) /
k) * np.power(f, (k - 1.0))
u['w'][-1] = 0
# Do the nonlinear minimization, constraining mu to the interval [-1, +1]
mu_0 = np.zeros(3)
fun = partial(_one_step, u=u)
max_ = 1. - 2e-4 # adjust for fmin_cobyla "catol" that not all scipy have
cons = list()
for ii in range(nfit):
def mycon(x, ii=ii):
return max_ - np.abs(x[ii])
cons.append(mycon)
mu = fmin_cobyla(fun, mu_0, cons, rhobeg=0.5, rhoend=1e-5, disp=0)
# (6) Do the final step: calculation of the linear parameters
rv, lambda_ = _compute_linear_parameters(mu, u)
order = np.argsort(mu)[::-1]
mu, lambda_ = mu[order], lambda_[order] # sort: largest mu first
m['mu'] = mu
# This division takes into account the actual conductivities
m['lambda'] = lambda_ / m['layers'][-1]['sigma']
m['nfit'] = nfit
return rv
@verbose
def make_sphere_model(r0=(0., 0., 0.04), head_radius=0.09, info=None,
relative_radii=(0.90, 0.92, 0.97, 1.0),
sigmas=(0.33, 1.0, 0.004, 0.33), verbose=None):
"""Create a spherical model for forward solution calculation.
Parameters
----------
r0 : array-like | str
Head center to use (in head coordinates). If 'auto', the head
center will be calculated from the digitization points in info.
head_radius : float | str | None
If float, compute spherical shells for EEG using the given radius.
If 'auto', estimate an appropriate radius from the dig points in Info,
If None, exclude shells (single layer sphere model).
info : instance of Info | None
Measurement info. Only needed if ``r0`` or ``head_radius`` are
``'auto'``.
relative_radii : array-like
Relative radii for the spherical shells.
sigmas : array-like
Sigma values for the spherical shells.
%(verbose)s
Returns
-------
sphere : instance of ConductorModel
The resulting spherical conductor model.
See Also
--------
make_bem_model
make_bem_solution
Notes
-----
The default model has::
relative_radii = (0.90, 0.92, 0.97, 1.0)
sigmas = (0.33, 1.0, 0.004, 0.33)
These correspond to compartments (with relative radii given as fractions
of ``head_radius`` and conductivities σ in ``S/m``) for the brain, CSF,
skull, and scalp, respectively.
.. versionadded:: 0.9.0
"""
for name in ('r0', 'head_radius'):
param = locals()[name]
if isinstance(param, str):
if param != 'auto':
raise ValueError('%s, if str, must be "auto" not "%s"'
% (name, param))
relative_radii = np.array(relative_radii, float).ravel()
sigmas = np.array(sigmas, float).ravel()
if len(relative_radii) != len(sigmas):
raise ValueError('relative_radii length (%s) must match that of '
'sigmas (%s)' % (len(relative_radii),
len(sigmas)))
if len(sigmas) <= 1 and head_radius is not None:
raise ValueError('at least 2 sigmas must be supplied if '
'head_radius is not None, got %s' % (len(sigmas),))
if (isinstance(r0, str) and r0 == 'auto') or \
(isinstance(head_radius, str) and head_radius == 'auto'):
if info is None:
raise ValueError('Info must not be None for auto mode')
head_radius_fit, r0_fit = fit_sphere_to_headshape(info, units='m')[:2]
if isinstance(r0, str):
r0 = r0_fit
if isinstance(head_radius, str):
head_radius = head_radius_fit
sphere = ConductorModel(is_sphere=True, r0=np.array(r0),
coord_frame=FIFF.FIFFV_COORD_HEAD)
sphere['layers'] = list()
if head_radius is not None:
# Eventually these could be configurable...
relative_radii = np.array(relative_radii, float)
sigmas = np.array(sigmas, float)
order = np.argsort(relative_radii)
relative_radii = relative_radii[order]
sigmas = sigmas[order]
for rel_rad, sig in zip(relative_radii, sigmas):
# sort layers by (relative) radius, and scale radii
layer = dict(rad=rel_rad, sigma=sig)
layer['rel_rad'] = layer['rad'] = rel_rad
sphere['layers'].append(layer)
# scale the radii
R = sphere['layers'][-1]['rad']
rR = sphere['layers'][-1]['rel_rad']
for layer in sphere['layers']:
layer['rad'] /= R
layer['rel_rad'] /= rR
#
# Setup the EEG sphere model calculations
#
# Scale the relative radii
for k in range(len(relative_radii)):
sphere['layers'][k]['rad'] = (head_radius *
sphere['layers'][k]['rel_rad'])
rv = _fwd_eeg_fit_berg_scherg(sphere, 200, 3)
logger.info('\nEquiv. model fitting -> RV = %g %%' % (100 * rv))
for k in range(3):
logger.info('mu%d = %g lambda%d = %g'
% (k + 1, sphere['mu'][k], k + 1,
sphere['layers'][-1]['sigma'] *
sphere['lambda'][k]))
logger.info('Set up EEG sphere model with scalp radius %7.1f mm\n'
% (1000 * head_radius,))
return sphere
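# Minimal usage sketch with the documented defaults:
#   sphere = make_sphere_model(r0=(0., 0., 0.04), head_radius=0.09)
#   # sphere['layers'][-1]['rad'] == head_radius after the rescaling above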
# #############################################################################
# Sphere fitting
@verbose
def fit_sphere_to_headshape(info, dig_kinds='auto', units='m', verbose=None):
"""Fit a sphere to the headshape points to determine head center.
Parameters
----------
info : instance of Info
Measurement info.
%(dig_kinds)s
units : str
Can be "m" (default) or "mm".
.. versionadded:: 0.12
%(verbose)s
Returns
-------
radius : float
Sphere radius.
origin_head : ndarray, shape (3,)
Head center in head coordinates.
origin_device : ndarray, shape (3,)
Head center in device coordinates.
Notes
-----
This function excludes any points that are low and frontal
(``z < 0 and y > 0``) to improve the fit.
"""
if not isinstance(units, str) or units not in ('m', 'mm'):
raise ValueError('units must be "m" or "mm"')
radius, origin_head, origin_device = _fit_sphere_to_headshape(
info, dig_kinds)
if units == 'mm':
radius *= 1e3
origin_head *= 1e3
origin_device *= 1e3
return radius, origin_head, origin_device
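# Usage sketch (``info`` would come from an existing measurement, e.g. via
# mne.io.read_info; that call is an assumption, not shown here):
#   radius, origin_head, origin_device = fit_sphere_to_headshape(info, units='mm')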
@verbose
def get_fitting_dig(info, dig_kinds='auto', exclude_frontal=True,
verbose=None):
"""Get digitization points suitable for sphere fitting.
Parameters
----------
info : instance of Info
The measurement info.
%(dig_kinds)s
%(exclude_frontal)s
Default is True.
.. versionadded:: 0.19
%(verbose)s
Returns
-------
dig : array, shape (n_pts, 3)
The digitization points (in head coordinates) to use for fitting.
Notes
-----
This will exclude digitization locations that have ``z < 0 and y > 0``,
i.e. points on the nose and below the nose on the face.
.. versionadded:: 0.14
"""
_validate_type(info, "info")
if info['dig'] is None:
raise RuntimeError('Cannot fit headshape without digitization, '
'info["dig"] is None')
if isinstance(dig_kinds, str):
if dig_kinds == 'auto':
# try "extra" first
try:
return get_fitting_dig(info, 'extra')
except ValueError:
pass
return get_fitting_dig(info, ('extra', 'eeg'))
else:
dig_kinds = (dig_kinds,)
# convert string args to ints (first make dig_kinds mutable in case tuple)
dig_kinds = list(dig_kinds)
for di, d in enumerate(dig_kinds):
dig_kinds[di] = _dig_kind_dict.get(d, d)
if dig_kinds[di] not in _dig_kind_ints:
raise ValueError('dig_kinds[#%d] (%s) must be one of %s'
% (di, d, sorted(list(_dig_kind_dict.keys()))))
# get head digitization points of the specified kind(s)
hsp = [p['r'] for p in info['dig'] if p['kind'] in dig_kinds]
if any(p['coord_frame'] != FIFF.FIFFV_COORD_HEAD for p in info['dig']):
raise RuntimeError('Digitization points not in head coordinates, '
'contact mne-python developers')
# exclude some frontal points (nose etc.)
if exclude_frontal:
hsp = [p for p in hsp if not (p[2] < -1e-6 and p[1] > 1e-6)]
hsp = np.array(hsp)
if len(hsp) <= 10:
kinds_str = ', '.join(['"%s"' % _dig_kind_rev[d]
for d in sorted(dig_kinds)])
msg = ('Only %s head digitization points of the specified kind%s (%s,)'
% (len(hsp), _pl(dig_kinds), kinds_str))
if len(hsp) < 4:
raise ValueError(msg + ', at least 4 required')
else:
warn(msg + ', fitting may be inaccurate')
return hsp
@verbose
def _fit_sphere_to_headshape(info, dig_kinds, verbose=None):
"""Fit a sphere to the given head shape."""
hsp = get_fitting_dig(info, dig_kinds)
radius, origin_head = _fit_sphere(np.array(hsp), disp=False)
# compute origin in device coordinates
dev_head_t = info['dev_head_t']
if dev_head_t is None:
dev_head_t = Transform('meg', 'head')
head_to_dev = _ensure_trans(dev_head_t, 'head', 'meg')
origin_device = apply_trans(head_to_dev, origin_head)
logger.info('Fitted sphere radius:'.ljust(30) + '%0.1f mm'
% (radius * 1e3,))
# 99th percentile on Wikipedia for Glabella to back of head is 21.7cm,
# i.e. 108mm "radius", so let's go with 110mm
# en.wikipedia.org/wiki/Human_head#/media/File:HeadAnthropometry.JPG
if radius > 0.110:
warn('Estimated head size (%0.1f mm) exceeded 99th '
'percentile for adult head size' % (1e3 * radius,))
# > 2 cm away from head center in X or Y is strange
if np.linalg.norm(origin_head[:2]) > 0.02:
warn('(X, Y) fit (%0.1f, %0.1f) more than 20 mm from '
'head frame origin' % tuple(1e3 * origin_head[:2]))
logger.info('Origin head coordinates:'.ljust(30) +
'%0.1f %0.1f %0.1f mm' % tuple(1e3 * origin_head))
logger.info('Origin device coordinates:'.ljust(30) +
'%0.1f %0.1f %0.1f mm' % tuple(1e3 * origin_device))
return radius, origin_head, origin_device
def _fit_sphere(points, disp='auto'):
"""Fit a sphere to an arbitrary set of points."""
from scipy.optimize import fmin_cobyla
if isinstance(disp, str) and disp == 'auto':
disp = True if logger.level <= 20 else False
# initial guess for center and radius
radii = (np.max(points, axis=1) - np.min(points, axis=1)) / 2.
radius_init = radii.mean()
center_init = np.median(points, axis=0)
# optimization
x0 = np.concatenate([center_init, [radius_init]])
def cost_fun(center_rad):
d = np.linalg.norm(points - center_rad[:3], axis=1) - center_rad[3]
d *= d
return d.sum()
def constraint(center_rad):
return center_rad[3] # radius must be >= 0
x_opt = fmin_cobyla(cost_fun, x0, constraint, rhobeg=radius_init,
rhoend=radius_init * 1e-6, disp=disp)
origin, radius = x_opt[:3], x_opt[3]
return radius, origin
def _check_origin(origin, info, coord_frame='head', disp=False):
"""Check or auto-determine the origin."""
if isinstance(origin, str):
if origin != 'auto':
raise ValueError('origin must be a numerical array, or "auto", '
'not %s' % (origin,))
if coord_frame == 'head':
R, origin = fit_sphere_to_headshape(info, verbose=False,
units='m')[:2]
logger.info(' Automatic origin fit: head of radius %0.1f mm'
% (R * 1000.,))
del R
else:
origin = (0., 0., 0.)
origin = np.array(origin, float)
if origin.shape != (3,):
raise ValueError('origin must be a 3-element array')
if disp:
origin_str = ', '.join(['%0.1f' % (o * 1000) for o in origin])
msg = (' Using origin %s mm in the %s frame'
% (origin_str, coord_frame))
if coord_frame == 'meg' and info['dev_head_t'] is not None:
o_dev = apply_trans(info['dev_head_t'], origin)
origin_str = ', '.join('%0.1f' % (o * 1000,) for o in o_dev)
msg += ' (%s mm in the head frame)' % (origin_str,)
logger.info(msg)
return origin
# ############################################################################
# Create BEM surfaces
@verbose
def make_watershed_bem(subject, subjects_dir=None, overwrite=False,
volume='T1', atlas=False, gcaatlas=False, preflood=None,
show=False, copy=False, T1=None, brainmask='ws.mgz',
verbose=None):
"""Create BEM surfaces using the FreeSurfer watershed algorithm.
Parameters
----------
subject : str
Subject name.
%(subjects_dir)s
%(overwrite)s
volume : str
Defaults to T1.
atlas : bool
Specify the --atlas option for mri_watershed.
gcaatlas : bool
Specify the --brain_atlas option for mri_watershed.
preflood : int
Change the preflood height.
show : bool
Show surfaces to visually inspect all three BEM surfaces (recommended).
.. versionadded:: 0.12
copy : bool
If True (default False), use copies instead of symlinks for surfaces
(if they do not already exist).
.. versionadded:: 0.18
T1 : bool | None
If True, pass the ``-T1`` flag.
By default (None), this takes the same value as ``gcaatlas``.
.. versionadded:: 0.19
brainmask : str
The filename for the brainmask output file relative to the
``$SUBJECTS_DIR/$SUBJECT/bem/watershed/`` directory.
Can be for example ``"../../mri/brainmask.mgz"`` to overwrite
the brainmask obtained via ``recon-all -autorecon1``.
.. versionadded:: 0.19
%(verbose)s
See Also
--------
mne.viz.plot_bem
Notes
-----
If your BEM meshes do not look correct when viewed in
:func:`mne.viz.plot_alignment` or :func:`mne.viz.plot_bem`, consider
potential solutions from the :ref:`FAQ <faq_watershed_bem_meshes>`.
.. versionadded:: 0.10
"""
from .viz.misc import plot_bem
env, mri_dir, bem_dir = _prepare_env(subject, subjects_dir)
tempdir = _TempDir() # fsl and Freesurfer create some random junk in CWD
run_subprocess_env = partial(run_subprocess, env=env,
cwd=tempdir)
subjects_dir = env['SUBJECTS_DIR'] # Set by _prepare_env() above.
subject_dir = op.join(subjects_dir, subject)
ws_dir = op.join(bem_dir, 'watershed')
T1_dir = op.join(mri_dir, volume)
T1_mgz = T1_dir
if not T1_dir.endswith('.mgz'):
T1_mgz += '.mgz'
if not op.isdir(bem_dir):
os.makedirs(bem_dir)
_check_fname(T1_mgz, overwrite='read', must_exist=True, name='MRI data')
if op.isdir(ws_dir):
if not overwrite:
raise RuntimeError('%s already exists. Use the --overwrite option'
' to recreate it.' % ws_dir)
else:
shutil.rmtree(ws_dir)
# put together the command
cmd = ['mri_watershed']
if preflood:
cmd += ["-h", "%s" % int(preflood)]
if T1 is None:
T1 = gcaatlas
if T1:
cmd += ['-T1']
if gcaatlas:
fname = op.join(env['FREESURFER_HOME'], 'average',
'RB_all_withskull_*.gca')
fname = sorted(glob.glob(fname))[::-1][0]
logger.info('Using GCA atlas: %s' % (fname,))
cmd += ['-atlas', '-brain_atlas', fname,
subject_dir + '/mri/transforms/talairach_with_skull.lta']
elif atlas:
cmd += ['-atlas']
if op.exists(T1_mgz):
cmd += ['-useSRAS', '-surf', op.join(ws_dir, subject), T1_mgz,
op.join(ws_dir, brainmask)]
else:
cmd += ['-useSRAS', '-surf', op.join(ws_dir, subject), T1_dir,
op.join(ws_dir, brainmask)]
# report and run
logger.info('\nRunning mri_watershed for BEM segmentation with the '
'following parameters:\n\nResults dir = %s\nCommand = %s\n'
% (ws_dir, ' '.join(cmd)))
os.makedirs(op.join(ws_dir))
run_subprocess_env(cmd)
del tempdir # clean up directory
if op.isfile(T1_mgz):
new_info = _extract_volume_info(T1_mgz) if has_nibabel() else dict()
if not new_info:
warn('nibabel is not available or the volume info is invalid. '
'Volume info not updated in the written surface.')
surfs = ['brain', 'inner_skull', 'outer_skull', 'outer_skin']
for s in surfs:
surf_ws_out = op.join(ws_dir, '%s_%s_surface' % (subject, s))
rr, tris, volume_info = read_surface(surf_ws_out,
read_metadata=True)
# replace volume info, 'head' stays
volume_info.update(new_info)
write_surface(surf_ws_out, rr, tris, volume_info=volume_info,
overwrite=True)
# Create symbolic links
surf_out = op.join(bem_dir, '%s.surf' % s)
if not overwrite and op.exists(surf_out):
skip_symlink = True
else:
if op.exists(surf_out):
os.remove(surf_out)
_symlink(surf_ws_out, surf_out, copy)
skip_symlink = False
if skip_symlink:
logger.info("Unable to create all symbolic links to .surf files "
"in bem folder. Use --overwrite option to recreate "
"them.")
dest = op.join(bem_dir, 'watershed')
else:
logger.info("Symbolic links to .surf files created in bem folder")
dest = bem_dir
logger.info("\nThank you for waiting.\nThe BEM triangulations for this "
"subject are now available at:\n%s." % dest)
# Write a head file for coregistration
fname_head = op.join(bem_dir, subject + '-head.fif')
if op.isfile(fname_head):
os.remove(fname_head)
surf = _surfaces_to_bem([op.join(ws_dir, subject + '_outer_skin_surface')],
[FIFF.FIFFV_BEM_SURF_ID_HEAD], sigmas=[1])
write_bem_surfaces(fname_head, surf)
# Show computed BEM surfaces
if show:
plot_bem(subject=subject, subjects_dir=subjects_dir,
orientation='coronal', slices=None, show=True)
logger.info('Created %s\n\nComplete.' % (fname_head,))
def _extract_volume_info(mgz):
"""Extract volume info from a mgz file."""
import nibabel
header = nibabel.load(mgz).header
version = header['version']
vol_info = dict()
if version == 1:
version = '%s # volume info valid' % version
vol_info['valid'] = version
vol_info['filename'] = mgz
vol_info['volume'] = header['dims'][:3]
vol_info['voxelsize'] = header['delta']
vol_info['xras'], vol_info['yras'], vol_info['zras'] = header['Mdc']
vol_info['cras'] = header['Pxyz_c']
return vol_info
# ############################################################################
# Read
@verbose
def read_bem_surfaces(fname, patch_stats=False, s_id=None, on_defects='raise',
verbose=None):
"""Read the BEM surfaces from a FIF file.
Parameters
----------
fname : str
The name of the file containing the surfaces.
patch_stats : bool, optional (default False)
Calculate and add cortical patch statistics to the surfaces.
s_id : int | None
If int, only read and return the surface with the given s_id.
An error will be raised if it doesn't exist. If None, all
surfaces are read and returned.
%(on_defects)s
.. versionadded:: 0.23
%(verbose)s
Returns
-------
surf : list | dict
A list of dictionaries that each contain a surface. If s_id
is not None, only the requested surface will be returned.
See Also
--------
write_bem_surfaces, write_bem_solution, make_bem_model
"""
# Open the file, create directory
_validate_type(s_id, ('int-like', None), 's_id')
fname = _check_fname(fname, 'read', True, 'fname')
if fname.endswith('.h5'):
surf = _read_bem_surfaces_h5(fname, s_id)
else:
surf = _read_bem_surfaces_fif(fname, s_id)
if s_id is not None and len(surf) != 1:
raise ValueError('surface with id %d not found' % s_id)
for this in surf:
if patch_stats or this['nn'] is None:
_check_complete_surface(this, incomplete=on_defects)
return surf[0] if s_id is not None else surf
def _read_bem_surfaces_h5(fname, s_id):
bem = read_hdf5(fname)
try:
[s['id'] for s in bem['surfs']]
except Exception: # not our format
raise ValueError('BEM data not found')
surf = bem['surfs']
if s_id is not None:
surf = [s for s in surf if s['id'] == s_id]
return surf
def _read_bem_surfaces_fif(fname, s_id):
# Default coordinate frame
coord_frame = FIFF.FIFFV_COORD_MRI
f, tree, _ = fiff_open(fname)
with f as fid:
# Find BEM
bem = dir_tree_find(tree, FIFF.FIFFB_BEM)
if bem is None or len(bem) == 0:
raise ValueError('BEM data not found')
bem = bem[0]
# Locate all surfaces
bemsurf = dir_tree_find(bem, FIFF.FIFFB_BEM_SURF)
if bemsurf is None:
raise ValueError('BEM surface data not found')
logger.info(' %d BEM surfaces found' % len(bemsurf))
# Coordinate frame possibly at the top level
tag = find_tag(fid, bem, FIFF.FIFF_BEM_COORD_FRAME)
if tag is not None:
coord_frame = tag.data
# Read all surfaces
if s_id is not None:
surf = [_read_bem_surface(fid, bsurf, coord_frame, s_id)
for bsurf in bemsurf]
surf = [s for s in surf if s is not None]
else:
surf = list()
for bsurf in bemsurf:
logger.info(' Reading a surface...')
this = _read_bem_surface(fid, bsurf, coord_frame)
surf.append(this)
logger.info('[done]')
logger.info(' %d BEM surfaces read' % len(surf))
return surf
def _read_bem_surface(fid, this, def_coord_frame, s_id=None):
"""Read one bem surface."""
# fid should be open as a context manager here
res = dict()
# Read all the interesting stuff
tag = find_tag(fid, this, FIFF.FIFF_BEM_SURF_ID)
if tag is None:
res['id'] = FIFF.FIFFV_BEM_SURF_ID_UNKNOWN
else:
res['id'] = int(tag.data)
if s_id is not None and res['id'] != s_id:
return None
tag = find_tag(fid, this, FIFF.FIFF_BEM_SIGMA)
res['sigma'] = 1.0 if tag is None else float(tag.data)
tag = find_tag(fid, this, FIFF.FIFF_BEM_SURF_NNODE)
if tag is None:
raise ValueError('Number of vertices not found')
res['np'] = int(tag.data)
tag = find_tag(fid, this, FIFF.FIFF_BEM_SURF_NTRI)
if tag is None:
raise ValueError('Number of triangles not found')
res['ntri'] = int(tag.data)
tag = find_tag(fid, this, FIFF.FIFF_MNE_COORD_FRAME)
if tag is None:
tag = find_tag(fid, this, FIFF.FIFF_BEM_COORD_FRAME)
if tag is None:
res['coord_frame'] = def_coord_frame
else:
res['coord_frame'] = tag.data
else:
res['coord_frame'] = tag.data
# Vertices, normals, and triangles
tag = find_tag(fid, this, FIFF.FIFF_BEM_SURF_NODES)
if tag is None:
raise ValueError('Vertex data not found')
res['rr'] = tag.data.astype(np.float64) # XXX : double because of mayavi
if res['rr'].shape[0] != res['np']:
raise ValueError('Vertex information is incorrect')
tag = find_tag(fid, this, FIFF.FIFF_MNE_SOURCE_SPACE_NORMALS)
if tag is None:
tag = find_tag(fid, this, FIFF.FIFF_BEM_SURF_NORMALS)
if tag is None:
res['nn'] = None
else:
res['nn'] = tag.data.copy()
if res['nn'].shape[0] != res['np']:
raise ValueError('Vertex normal information is incorrect')
tag = find_tag(fid, this, FIFF.FIFF_BEM_SURF_TRIANGLES)
if tag is None:
raise ValueError('Triangulation not found')
res['tris'] = tag.data - 1  # indices start at 0 in Python
if res['tris'].shape[0] != res['ntri']:
raise ValueError('Triangulation information is incorrect')
return res
@verbose
def read_bem_solution(fname, verbose=None):
"""Read the BEM solution from a file.
Parameters
----------
fname : str
The file containing the BEM solution.
%(verbose)s
Returns
-------
bem : instance of ConductorModel
The BEM solution.
See Also
--------
read_bem_surfaces
write_bem_surfaces
make_bem_solution
write_bem_solution
"""
fname = _check_fname(fname, 'read', True, 'fname')
# mirrors fwd_bem_load_surfaces from fwd_bem_model.c
if fname.endswith('.h5'):
logger.info('Loading surfaces and solution...')
bem = read_hdf5(fname)
else:
bem = _read_bem_solution_fif(fname)
if len(bem['surfs']) == 3:
logger.info('Three-layer model surfaces loaded.')
needed = np.array([FIFF.FIFFV_BEM_SURF_ID_HEAD,
FIFF.FIFFV_BEM_SURF_ID_SKULL,
FIFF.FIFFV_BEM_SURF_ID_BRAIN])
if not all(x['id'] in needed for x in bem['surfs']):
raise RuntimeError('Could not find necessary BEM surfaces')
# reorder surfaces as necessary (shouldn't need to?)
reorder = [None] * 3
for x in bem['surfs']:
reorder[np.where(x['id'] == needed)[0][0]] = x
bem['surfs'] = reorder
elif len(bem['surfs']) == 1:
if not bem['surfs'][0]['id'] == FIFF.FIFFV_BEM_SURF_ID_BRAIN:
raise RuntimeError('BEM Surfaces not found')
logger.info('Homogeneous model surface loaded.')
assert set(bem.keys()) == set(('surfs', 'solution', 'bem_method'))
bem = ConductorModel(bem)
bem['is_sphere'] = False
# sanity checks and conversions
_check_option('BEM approximation method', bem['bem_method'],
(FIFF.FIFFV_BEM_APPROX_LINEAR, FIFF.FIFFV_BEM_APPROX_CONST))
dim = 0
for surf in bem['surfs']:
if bem['bem_method'] == FIFF.FIFFV_BEM_APPROX_LINEAR:
dim += surf['np']
else: # method == FIFF.FIFFV_BEM_APPROX_CONST
dim += surf['ntri']
dims = bem['solution'].shape
if len(dims) != 2:
raise RuntimeError('Expected a two-dimensional solution matrix '
'instead of a %d dimensional one' % len(dims))
if dims[0] != dim or dims[1] != dim:
raise RuntimeError('Expected a %d x %d solution matrix instead of '
'a %d x %d one' % (dim, dim, dims[0], dims[1]))
bem['nsol'] = bem['solution'].shape[0]
# Gamma factors and multipliers
_add_gamma_multipliers(bem)
kind = {
FIFF.FIFFV_BEM_APPROX_CONST: 'constant collocation',
FIFF.FIFFV_BEM_APPROX_LINEAR: 'linear collocation',
}[bem['bem_method']]
logger.info('Loaded %s BEM solution from %s', kind, fname)
return bem
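# Example usage (a sketch; 'sample-bem-sol.fif' is a hypothetical file name):
#
#     bem = read_bem_solution('sample-bem-sol.fif')
#     print(bem['nsol'], bem['bem_method'])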
def _read_bem_solution_fif(fname):
logger.info('Loading surfaces...')
surfs = read_bem_surfaces(fname, patch_stats=True, verbose=False)
# convert from surfaces to solution
logger.info('\nLoading the solution matrix...\n')
f, tree, _ = fiff_open(fname)
with f as fid:
# Find the BEM data
nodes = dir_tree_find(tree, FIFF.FIFFB_BEM)
if len(nodes) == 0:
raise RuntimeError('No BEM data in %s' % fname)
bem_node = nodes[0]
# Approximation method
tag = find_tag(fid, bem_node, FIFF.FIFF_BEM_APPROX)
if tag is None:
raise RuntimeError('No BEM solution found in %s' % fname)
method = tag.data[0]
tag = find_tag(fid, bem_node, FIFF.FIFF_BEM_POT_SOLUTION)
sol = tag.data
return dict(solution=sol, bem_method=method, surfs=surfs)
def _add_gamma_multipliers(bem):
"""Add gamma and multipliers in-place."""
bem['sigma'] = np.array([surf['sigma'] for surf in bem['surfs']])
# Dirty trick for the zero conductivity outside
sigma = np.r_[0.0, bem['sigma']]
bem['source_mult'] = 2.0 / (sigma[1:] + sigma[:-1])
bem['field_mult'] = sigma[1:] - sigma[:-1]
# make sure subsequent "zip"s work correctly
assert len(bem['surfs']) == len(bem['field_mult'])
bem['gamma'] = ((sigma[1:] - sigma[:-1])[np.newaxis, :] /
(sigma[1:] + sigma[:-1])[:, np.newaxis])
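# Worked example (a sketch, using typical three-layer conductivities): surfaces
# with sigma = (0.3, 0.006, 0.3) give sigma = [0, 0.3, 0.006, 0.3] above, so
# source_mult ~= [6.67, 6.54, 6.54], field_mult = [0.3, -0.294, 0.294], and
# gamma[i, j] = (sigma[j + 1] - sigma[j]) / (sigma[i + 1] + sigma[i]).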
# In our BEM code we do not model the CSF, so we assign the innermost surface
# the id BRAIN. In our 4-layer sphere model we do model the CSF (at least by
# default), so when searching for and referring to surfaces we need to keep
# track of this.
_sm_surf_dict = OrderedDict([
('brain', FIFF.FIFFV_BEM_SURF_ID_BRAIN),
('inner_skull', FIFF.FIFFV_BEM_SURF_ID_CSF),
('outer_skull', FIFF.FIFFV_BEM_SURF_ID_SKULL),
('head', FIFF.FIFFV_BEM_SURF_ID_HEAD),
])
_bem_surf_dict = {
'inner_skull': FIFF.FIFFV_BEM_SURF_ID_BRAIN,
'outer_skull': FIFF.FIFFV_BEM_SURF_ID_SKULL,
'head': FIFF.FIFFV_BEM_SURF_ID_HEAD,
}
_bem_surf_name = {
FIFF.FIFFV_BEM_SURF_ID_BRAIN: 'inner skull',
FIFF.FIFFV_BEM_SURF_ID_SKULL: 'outer skull',
FIFF.FIFFV_BEM_SURF_ID_HEAD: 'outer skin ',
FIFF.FIFFV_BEM_SURF_ID_UNKNOWN: 'unknown ',
}
_sm_surf_name = {
FIFF.FIFFV_BEM_SURF_ID_BRAIN: 'brain',
FIFF.FIFFV_BEM_SURF_ID_CSF: 'csf',
FIFF.FIFFV_BEM_SURF_ID_SKULL: 'outer skull',
FIFF.FIFFV_BEM_SURF_ID_HEAD: 'outer skin ',
FIFF.FIFFV_BEM_SURF_ID_UNKNOWN: 'unknown ',
}
def _bem_find_surface(bem, id_):
"""Find surface from already-loaded conductor model."""
if bem['is_sphere']:
_surf_dict = _sm_surf_dict
_name_dict = _sm_surf_name
kind = 'Sphere model'
tri = 'boundary'
else:
_surf_dict = _bem_surf_dict
_name_dict = _bem_surf_name
kind = 'BEM'
tri = 'triangulation'
if isinstance(id_, str):
name = id_
id_ = _surf_dict[id_]
else:
name = _name_dict[id_]
kind = 'Sphere model' if bem['is_sphere'] else 'BEM'
idx = np.where(np.array([s['id'] for s in bem['surfs']]) == id_)[0]
if len(idx) != 1:
raise RuntimeError(f'{kind} does not have the {name} {tri}')
return bem['surfs'][idx[0]]
# ############################################################################
# Write
@verbose
def write_bem_surfaces(fname, surfs, overwrite=False, verbose=None):
"""Write BEM surfaces to a fiff file.
Parameters
----------
fname : str
Filename to write. Can end with ``.h5`` to write using HDF5.
surfs : dict | list of dict
The surfaces, or a single surface.
%(overwrite)s
%(verbose)s
"""
if isinstance(surfs, dict):
surfs = [surfs]
fname = _check_fname(fname, overwrite=overwrite, name='fname')
if fname.endswith('.h5'):
write_hdf5(fname, dict(surfs=surfs), overwrite=True)
else:
with start_file(fname) as fid:
start_block(fid, FIFF.FIFFB_BEM)
write_int(fid, FIFF.FIFF_BEM_COORD_FRAME, surfs[0]['coord_frame'])
_write_bem_surfaces_block(fid, surfs)
end_block(fid, FIFF.FIFFB_BEM)
end_file(fid)
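# Example usage (a sketch; the file names are hypothetical):
#
#     surfs = read_bem_surfaces('sample-bem.fif')
#     write_bem_surfaces('sample-bem-copy.fif', surfs, overwrite=True)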
@verbose
def write_head_bem(fname, rr, tris, on_defects='raise', overwrite=False,
verbose=None):
"""Write a head surface to a fiff file.
Parameters
----------
fname : str
Filename to write.
rr : array, shape (n_vertices, 3)
Coordinate points in the MRI coordinate system.
tris : ndarray of int, shape (n_tris, 3)
Triangulation (each line contains indices for three points which
together form a face).
%(on_defects)s
%(overwrite)s
%(verbose)s
"""
surf = _surfaces_to_bem([dict(rr=rr, tris=tris)],
[FIFF.FIFFV_BEM_SURF_ID_HEAD], [1], rescale=False,
incomplete=on_defects)
write_bem_surfaces(fname, surf, overwrite=overwrite)
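# Example usage (a sketch with a synthetic tetrahedron; the file name is
# hypothetical and real head surfaces are far denser):
#
#     rr = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
#     tris = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
#     write_head_bem('sample-head.fif', rr, tris, overwrite=True)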
def _write_bem_surfaces_block(fid, surfs):
"""Write bem surfaces to open file handle."""
for surf in surfs:
start_block(fid, FIFF.FIFFB_BEM_SURF)
write_float(fid, FIFF.FIFF_BEM_SIGMA, surf['sigma'])
write_int(fid, FIFF.FIFF_BEM_SURF_ID, surf['id'])
write_int(fid, FIFF.FIFF_MNE_COORD_FRAME, surf['coord_frame'])
write_int(fid, FIFF.FIFF_BEM_SURF_NNODE, surf['np'])
write_int(fid, FIFF.FIFF_BEM_SURF_NTRI, surf['ntri'])
write_float_matrix(fid, FIFF.FIFF_BEM_SURF_NODES, surf['rr'])
# indices start at 0 in Python
write_int_matrix(fid, FIFF.FIFF_BEM_SURF_TRIANGLES,
surf['tris'] + 1)
if 'nn' in surf and surf['nn'] is not None and len(surf['nn']) > 0:
write_float_matrix(fid, FIFF.FIFF_BEM_SURF_NORMALS, surf['nn'])
end_block(fid, FIFF.FIFFB_BEM_SURF)
@verbose
def write_bem_solution(fname, bem, overwrite=False, verbose=None):
"""Write a BEM model with solution.
Parameters
----------
fname : str
The filename to use. Can end with ``.h5`` to write using HDF5.
bem : instance of ConductorModel
The BEM model with solution to save.
%(overwrite)s
%(verbose)s
See Also
--------
read_bem_solution
"""
fname = _check_fname(fname, overwrite=overwrite, name='fname')
if fname.endswith('.h5'):
bem = {k: bem[k] for k in ('surfs', 'solution', 'bem_method')}
write_hdf5(fname, bem, overwrite=True)
else:
_write_bem_solution_fif(fname, bem)
def _write_bem_solution_fif(fname, bem):
_check_bem_size(bem['surfs'])
with start_file(fname) as fid:
start_block(fid, FIFF.FIFFB_BEM)
# Coordinate frame (mainly for backward compatibility)
write_int(fid, FIFF.FIFF_BEM_COORD_FRAME,
bem['surfs'][0]['coord_frame'])
# Surfaces
_write_bem_surfaces_block(fid, bem['surfs'])
# The potential solution
if 'solution' in bem:
if bem['bem_method'] != FWD.BEM_LINEAR_COLL:
raise RuntimeError('Only linear collocation supported')
write_int(fid, FIFF.FIFF_BEM_APPROX, FIFF.FIFFV_BEM_APPROX_LINEAR)
write_float_matrix(fid, FIFF.FIFF_BEM_POT_SOLUTION,
bem['solution'])
end_block(fid, FIFF.FIFFB_BEM)
end_file(fid)
# #############################################################################
# Create 3-Layers BEM model from Flash MRI images
def _prepare_env(subject, subjects_dir):
"""Prepare an env object for subprocess calls."""
env = os.environ.copy()
fs_home = _check_freesurfer_home()
_validate_type(subject, "str")
subjects_dir = get_subjects_dir(subjects_dir, raise_error=True)
subjects_dir = op.abspath(subjects_dir) # force use of an absolute path
subjects_dir = op.expanduser(subjects_dir)
if not op.isdir(subjects_dir):
raise RuntimeError('Could not find the MRI data directory "%s"'
% subjects_dir)
subject_dir = op.join(subjects_dir, subject)
if not op.isdir(subject_dir):
raise RuntimeError('Could not find the subject data directory "%s"'
% (subject_dir,))
env.update(SUBJECT=subject, SUBJECTS_DIR=subjects_dir,
FREESURFER_HOME=fs_home)
mri_dir = op.join(subject_dir, 'mri')
bem_dir = op.join(subject_dir, 'bem')
return env, mri_dir, bem_dir
@verbose
def convert_flash_mris(subject, flash30=True, convert=True, unwarp=False,
subjects_dir=None, verbose=None):
"""Convert DICOM files for use with make_flash_bem.
Parameters
----------
subject : str
Subject name.
flash30 : bool
Use 30-degree flip angle data.
convert : bool
If True (default), convert the DICOM Flash MRI images to mgz format.
If False, assume that the images have already been converted.
unwarp : bool
Run grad_unwarp with -unwarp option on each of the converted
data sets. It requires FreeSurfer's MATLAB toolbox to be properly
installed.
%(subjects_dir)s
%(verbose)s
Notes
-----
Before running this script, do the following (unless convert=False is
specified):
1. Copy all of your FLASH images in a single directory <source> and
create a directory <dest> to hold the output of mne_organize_dicom
2. cd to <dest> and run
$ mne_organize_dicom <source>
to create an appropriate directory structure
3. Create symbolic links to make flash05 and flash30 point to the
appropriate series:
$ ln -s <FLASH 5 series dir> flash05
$ ln -s <FLASH 30 series dir> flash30
Some partition formats (e.g. FAT32) do not support symbolic links.
In this case, copy the file to the appropriate series:
$ cp <FLASH 5 series dir> flash05
$ cp <FLASH 30 series dir> flash30
4. cd to the directory where flash05 and flash30 links are
5. Set SUBJECTS_DIR and SUBJECT environment variables appropriately
6. Run this script
This function assumes that the FreeSurfer segmentation of the subject
has been completed. In particular, the T1.mgz and brain.mgz MRI volumes
should be, as usual, in the subject's mri directory.
"""
env, mri_dir = _prepare_env(subject, subjects_dir)[:2]
tempdir = _TempDir() # fsl and Freesurfer create some random junk in CWD
run_subprocess_env = partial(run_subprocess, env=env,
cwd=tempdir)
# Step 1a : Data conversion to mgz format
if not op.exists(op.join(mri_dir, 'flash', 'parameter_maps')):
os.makedirs(op.join(mri_dir, 'flash', 'parameter_maps'))
echos_done = 0
if convert:
logger.info("\n---- Converting Flash images ----")
echos = ['001', '002', '003', '004', '005', '006', '007', '008']
if flash30:
flashes = ['05', '30']
else:
flashes = ['05']
#
missing = False
for flash in flashes:
for echo in echos:
if not op.isdir(op.join('flash' + flash, echo)):
missing = True
if missing:
echos = ['002', '003', '004', '005', '006', '007', '008', '009']
for flash in flashes:
for echo in echos:
if not op.isdir(op.join('flash' + flash, echo)):
raise RuntimeError("Directory %s is missing."
% op.join('flash' + flash, echo))
#
for flash in flashes:
for echo in echos:
if not op.isdir(op.join('flash' + flash, echo)):
raise RuntimeError("Directory %s is missing."
% op.join('flash' + flash, echo))
sample_file = glob.glob(op.join('flash' + flash, echo, '*'))[0]
dest_file = op.join(mri_dir, 'flash',
'mef' + flash + '_' + echo + '.mgz')
# do not redo if already present
if op.isfile(dest_file):
logger.info("The file %s is already there")
else:
cmd = ['mri_convert', sample_file, dest_file]
run_subprocess_env(cmd)
echos_done += 1
# Step 1b : Run grad_unwarp on converted files
flash_dir = op.join(mri_dir, "flash")
template = op.join(flash_dir, "mef*.mgz")
files = glob.glob(template)
if len(files) == 0:
raise ValueError('No suitable source files found (%s)' % template)
if unwarp:
logger.info("\n---- Unwarp mgz data sets ----")
for infile in files:
outfile = infile.replace(".mgz", "u.mgz")
cmd = ['grad_unwarp', '-i', infile, '-o', outfile, '-unwarp',
'true']
run_subprocess_env(cmd)
# Clear parameter maps if some of the data were reconverted
pm_dir = op.join(flash_dir, 'parameter_maps')
if echos_done > 0 and op.exists(pm_dir):
shutil.rmtree(pm_dir)
logger.info("\nParameter maps directory cleared")
if not op.exists(pm_dir):
os.makedirs(pm_dir)
# Step 2 : Create the parameter maps
if flash30:
logger.info("\n---- Creating the parameter maps ----")
if unwarp:
files = glob.glob(op.join(flash_dir, "mef05*u.mgz"))
if len(os.listdir(pm_dir)) == 0:
cmd = (['mri_ms_fitparms'] +
files +
[op.join(flash_dir, 'parameter_maps')])
run_subprocess_env(cmd)
else:
logger.info("Parameter maps were already computed")
# Step 3 : Synthesize the flash 5 images
logger.info("\n---- Synthesizing flash 5 images ----")
if not op.exists(op.join(pm_dir, 'flash5.mgz')):
cmd = ['mri_synthesize', '20', '5', '5',
op.join(pm_dir, 'T1.mgz'),
op.join(pm_dir, 'PD.mgz'),
op.join(pm_dir, 'flash5.mgz')
]
run_subprocess_env(cmd)
os.remove(op.join(pm_dir, 'flash5_reg.mgz'))
else:
logger.info("Synthesized flash 5 volume is already there")
else:
logger.info("\n---- Averaging flash5 echoes ----")
template = op.join(flash_dir,
"mef05*u.mgz" if unwarp else "mef05*.mgz")
files = glob.glob(template)
if len(files) == 0:
raise ValueError('No suitable source files found (%s)' % template)
cmd = (['mri_average', '-noconform'] +
files +
[op.join(pm_dir, 'flash5.mgz')])
run_subprocess_env(cmd)
if op.exists(op.join(pm_dir, 'flash5_reg.mgz')):
os.remove(op.join(pm_dir, 'flash5_reg.mgz'))
del tempdir # finally done running subprocesses
assert op.isfile(op.join(pm_dir, 'flash5.mgz'))
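# Example usage (a sketch; the subject name and SUBJECTS_DIR are placeholders):
#
#     convert_flash_mris('sample_subject', flash30=True, convert=True,
#                        subjects_dir='/path/to/subjects')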
@verbose
def make_flash_bem(subject, overwrite=False, show=True, subjects_dir=None,
flash_path=None, copy=False, verbose=None):
"""Create 3-Layer BEM model from prepared flash MRI images.
Parameters
----------
subject : str
Subject name.
overwrite : bool
Write over existing .surf files in bem folder.
show : bool
Show surfaces to visually inspect all three BEM surfaces (recommended).
%(subjects_dir)s
flash_path : str | None
Path to the flash images. If None (default), mri/flash/parameter_maps
within the subject reconstruction is used.
.. versionadded:: 0.13.0
copy : bool
If True (default False), use copies instead of symlinks for surfaces
(if they do not already exist).
.. versionadded:: 0.18
%(verbose)s
See Also
--------
convert_flash_mris
Notes
-----
This program assumes that FreeSurfer is installed and sourced properly.
This function extracts the BEM surfaces (outer skull, inner skull, and
outer skin) from multiecho FLASH MRI data with spin angles of 5 and 30
degrees, in mgz format.
"""
from .viz.misc import plot_bem
env, mri_dir, bem_dir = _prepare_env(subject, subjects_dir)
tempdir = _TempDir() # fsl and Freesurfer create some random junk in CWD
run_subprocess_env = partial(run_subprocess, env=env,
cwd=tempdir)
if flash_path is None:
flash_path = op.join(mri_dir, 'flash', 'parameter_maps')
else:
flash_path = op.abspath(flash_path)
subjects_dir = env['SUBJECTS_DIR']
logger.info('\nProcessing the flash MRI data to produce BEM meshes with '
'the following parameters:\n'
'SUBJECTS_DIR = %s\n'
'SUBJECT = %s\n'
'Result dir = %s\n' % (subjects_dir, subject,
op.join(bem_dir, 'flash')))
# Step 4 : Register with MPRAGE
logger.info("\n---- Registering flash 5 with MPRAGE ----")
flash5 = op.join(flash_path, 'flash5.mgz')
flash5_reg = op.join(flash_path, 'flash5_reg.mgz')
if not op.exists(flash5_reg):
if op.exists(op.join(mri_dir, 'T1.mgz')):
ref_volume = op.join(mri_dir, 'T1.mgz')
else:
ref_volume = op.join(mri_dir, 'T1')
cmd = ['fsl_rigid_register', '-r', ref_volume, '-i', flash5,
'-o', flash5_reg]
run_subprocess_env(cmd)
else:
logger.info("Registered flash 5 image is already there")
# Step 5a : Convert flash5 into COR
logger.info("\n---- Converting flash5 volume into COR format ----")
flash5_dir = op.join(mri_dir, 'flash5')
shutil.rmtree(flash5_dir, ignore_errors=True)
os.makedirs(flash5_dir)
cmd = ['mri_convert', flash5_reg, op.join(mri_dir, 'flash5')]
run_subprocess_env(cmd)
# Step 5b and c : Convert the mgz volumes into COR
convert_T1 = False
T1_dir = op.join(mri_dir, 'T1')
if not op.isdir(T1_dir) or len(glob.glob(op.join(T1_dir, 'COR*'))) == 0:
convert_T1 = True
convert_brain = False
brain_dir = op.join(mri_dir, 'brain')
if not op.isdir(brain_dir) or \
len(glob.glob(op.join(brain_dir, 'COR*'))) == 0:
convert_brain = True
logger.info("\n---- Converting T1 volume into COR format ----")
if convert_T1:
T1_fname = op.join(mri_dir, 'T1.mgz')
if not op.isfile(T1_fname):
raise RuntimeError("Both T1 mgz and T1 COR volumes missing.")
os.makedirs(T1_dir)
cmd = ['mri_convert', T1_fname, T1_dir]
run_subprocess_env(cmd)
else:
logger.info("T1 volume is already in COR format")
logger.info("\n---- Converting brain volume into COR format ----")
if convert_brain:
brain_fname = op.join(mri_dir, 'brain.mgz')
if not op.isfile(brain_fname):
raise RuntimeError("Both brain mgz and brain COR volumes missing.")
os.makedirs(brain_dir)
cmd = ['mri_convert', brain_fname, brain_dir]
run_subprocess_env(cmd)
else:
logger.info("Brain volume is already in COR format")
# Finally ready to go
logger.info("\n---- Creating the BEM surfaces ----")
cmd = ['mri_make_bem_surfaces', subject]
run_subprocess_env(cmd)
del tempdir # ran our last subprocess; clean up directory
logger.info("\n---- Converting the tri files into surf files ----")
flash_bem_dir = op.join(bem_dir, 'flash')
if not op.exists(flash_bem_dir):
os.makedirs(flash_bem_dir)
surfs = ['inner_skull', 'outer_skull', 'outer_skin']
for surf in surfs:
out_fname = op.join(flash_bem_dir, surf + '.tri')
shutil.move(op.join(bem_dir, surf + '.tri'), out_fname)
nodes, tris = read_tri(out_fname, swap=True)
# Do not write volume info here because the tris are already in
# standard Freesurfer coords
write_surface(op.splitext(out_fname)[0] + '.surf', nodes, tris,
overwrite=True)
# Cleanup section
logger.info("\n---- Cleaning up ----")
os.remove(op.join(bem_dir, 'inner_skull_tmp.tri'))
# os.chdir(mri_dir)
if convert_T1:
shutil.rmtree(T1_dir)
logger.info("Deleted the T1 COR volume")
if convert_brain:
shutil.rmtree(brain_dir)
logger.info("Deleted the brain COR volume")
shutil.rmtree(flash5_dir)
logger.info("Deleted the flash5 COR volume")
# Create symbolic links to the .surf files in the bem folder
logger.info("\n---- Creating symbolic links ----")
# os.chdir(bem_dir)
for surf in surfs:
surf = op.join(bem_dir, surf + '.surf')
if not overwrite and op.exists(surf):
skip_symlink = True
else:
if op.exists(surf):
os.remove(surf)
_symlink(op.join(flash_bem_dir, op.basename(surf)), surf, copy)
skip_symlink = False
if skip_symlink:
logger.info("Unable to create all symbolic links to .surf files "
"in bem folder. Use --overwrite option to recreate them.")
dest = op.join(bem_dir, 'flash')
else:
logger.info("Symbolic links to .surf files created in bem folder")
dest = bem_dir
logger.info("\nThank you for waiting.\nThe BEM triangulations for this "
"subject are now available at:\n%s.\nWe hope the BEM meshes "
"created will facilitate your MEG and EEG data analyses."
% dest)
# Show computed BEM surfaces
if show:
plot_bem(subject=subject, subjects_dir=subjects_dir,
orientation='coronal', slices=None, show=True)
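# Example usage (a sketch; the subject name and SUBJECTS_DIR are placeholders):
#
#     make_flash_bem('sample_subject', overwrite=True, show=False,
#                    subjects_dir='/path/to/subjects')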
def _check_bem_size(surfs):
"""Check bem surface sizes."""
if len(surfs) > 1 and surfs[0]['np'] > 10000:
warn('The bem surfaces have %s data points. 5120 (ico grade=4) '
'should be enough. Dense 3-layer bems may not save properly.' %
surfs[0]['np'])
def _symlink(src, dest, copy=False):
"""Create a relative symlink (or just copy)."""
if not copy:
src_link = op.relpath(src, op.dirname(dest))
try:
os.symlink(src_link, dest)
except OSError:
warn('Could not create symbolic link %s. Check that your '
'partition handles symbolic links. The file will be copied '
'instead.' % dest)
copy = True
if copy:
shutil.copy(src, dest)
def _ensure_bem_surfaces(bem, extra_allow=(), name='bem'):
# by default only allow path-like and list, but handle None and
# ConductorModel properly if need be. Always return a ConductorModel
# even though it's incomplete (and might have is_sphere=True).
assert all(extra in (None, ConductorModel) for extra in extra_allow)
allowed = ('path-like', list) + extra_allow
_validate_type(bem, allowed, name)
if isinstance(bem, path_like):
# Load the surfaces
logger.info(f'Loading BEM surfaces from {str(bem)}...')
bem = read_bem_surfaces(bem)
bem = ConductorModel(is_sphere=False, surfs=bem)
elif isinstance(bem, list):
for ii, this_surf in enumerate(bem):
_validate_type(this_surf, dict, f'{name}[{ii}]')
if isinstance(bem, list):
bem = ConductorModel(is_sphere=False, surfs=bem)
# add surfaces in the spherical case
if isinstance(bem, ConductorModel) and bem['is_sphere']:
bem = bem.copy()
bem['surfs'] = []
if len(bem['layers']) == 4:
for idx, id_ in enumerate(_sm_surf_dict.values()):
bem['surfs'].append(_complete_sphere_surf(
bem, idx, 4, complete=False))
bem['surfs'][-1]['id'] = id_
return bem
|
kambysese/mne-python
|
mne/bem.py
|
Python
|
bsd-3-clause
| 76,981
|
[
"Mayavi"
] |
be7529ca8c5244b4781f48647ef11058e228f423845514101ebff6e9d1fd1c1e
|
# -*- coding: utf-8 -*-
# Copyright: Michal Krassowski <krassowski.michal@gmail.com>
# License: GNU GPL, version 3 or later; http://www.gnu.org/copyleft/gpl.html
"""
This plugin adds a night mode feature, similar to the one implemented in AnkiDroid.
It adds a "View" menu entry (if it doesn't exist) with options like:
switching night mode
inverting colors of images or latex formulas
modifying some of the colors
It provides the shortcut Ctrl+N to quickly switch modes and a color picker to adjust some of the color parameters.
After enabling night mode, the add-on changes the colors of the menu bar, toolbar, bottom bars and content windows.
If you want to contribute visit GitHub page: https://github.com/krassowski/Anki-Night-Mode
Also, feel free to send me bug reports or feature requests.
Copyright: Michal Krassowski <krassowski.michal@gmail.com>
License: GNU GPL, version 3 or later; http://www.gnu.org/copyleft/gpl.html except when stated otherwise.
Special thanks to contributors: [github nickname (reason)]
- b50 (initial compatibility with 2.1),
- ankitest (compatibility with 1508882486),
- omega3 (useful bug reports and suggestions)
- colchizin
- JulyMorning
- nathanmalloy
- rathsky
- zjosua
- lovac42
And translators:
- Arman High (Armenian)
- Jeremias (Swedish)
- Is (German)
"""
import traceback
from anki.hooks import addHook, runHook
from aqt import appVersion
from aqt import mw
from PyQt5.QtWidgets import QMessageBox
from .actions_and_settings import *
from .internals import alert
from .config import Config, ConfigValueGetter
from .css_class import inject_css_class
from .icons import Icons
from .menu import get_or_create_menu, Menu
from .stylers import Styler
from .styles import Style, MessageBoxStyle
__addon_name__ = 'Night Mode'
__version__ = '2.3.3'
__anki_version__ = '2.1'
if not appVersion.startswith(__anki_version__):
print(
(
'Unsupported version of Anki. '
'Anki-Night-Mode 2.0 requires %s to work properly. '
'For older versions of Anki use Night-Mode 1.x'
) % __anki_version__
)
# Add your color replacement mappings here - old: new, comma separated
class StylingManager:
def __init__(self, app):
self.styles = Style.members
self.stylers = [
styler(app)
for styler in Styler.members
]
self.config = ConfigValueGetter(app.config)
@property
def active_stylers(self):
return [
styler
for styler in self.stylers
if styler.name not in self.config.disabled_stylers
]
def replace(self):
for styler in self.active_stylers:
styler.replace_attributes()
def restore(self):
for styler in self.stylers:
styler.restore_attributes()
class NightMode:
menu_layout = [
EnableNightMode,
EnableInDialogs,
'-',
InvertImage,
InvertLatex,
TransparentLatex,
'-',
BackgroundColor,
TextColor,
ResetColors,
'-',
ModeSettings,
UserColorMap,
DisabledStylers,
StyleScrollBars,
'-',
About
]
def __init__(self):
self.profile_loaded = False
self.config = Config(self, prefix='nm_')
self.config.init_settings()
self.icons = Icons(mw)
self.styles = StylingManager(self)
view_menu = get_or_create_menu('addon_view_menu', '&View')
self.menu = Menu(
self,
'&Night Mode',
self.menu_layout,
attach_to=view_menu
)
addHook('unloadProfile', self.save)
# Disabled, uses delay in __init__.py
# addHook('profileLoaded', self.load)
addHook('prepareQA', self.night_class_injection)
addHook('loadNote', self.background_bug_workaround)
def load(self):
"""
Load configuration from the profile, set the states of checkable menu
objects and turn on night mode if it was enabled in the previous session.
"""
self.config.load()
self.profile_loaded = True
self.refresh()
self.update_menu()
runHook("night_mode_config_loaded", self.config)
def update_menu(self):
self.menu.update_checkboxes(self.config.settings)
def save(self):
self.config.save()
def on(self):
"""Turn on night mode."""
self.styles.replace()
runHook("night_mode_state_changed", True)
def off(self):
"""Turn off night mode."""
self.styles.restore()
runHook("night_mode_state_changed", False)
def refresh(self, reload=False):
"""
Refresh the display by re-enabling night or normal mode and
regenerating the customizable CSS strings.
"""
state = self.config.state_on.value
if not self.profile_loaded:
alert(ERROR_NO_PROFILE)
return
try:
if state:
if reload:
self.off()
self.on()
else:
self.off()
except Exception:
alert(ERROR_SWITCH % traceback.format_exc())
return
# Reload current screen.
if mw.state == 'review':
mw.moveToState('overview')
mw.moveToState('review')
if mw.state == 'deckBrowser':
mw.deckBrowser.refresh()
if mw.state == 'overview':
mw.overview.refresh()
# Redraw toolbar (should be always visible).
mw.toolbar.draw()
self.update_menu()
return True
def about(self):
about_box = self.message_box()
about_box.setText(__addon_name__ + ' ' + __version__ + __doc__)
about_box.setGeometry(300, 300, 250, 150)
about_box.setWindowTitle('About ' + __addon_name__ + ' ' + __version__)
about_box.exec_()
def message_box(self):
box = QMessageBox()
if self.config.state_on.value:
box_style = MessageBoxStyle(self)
box.setStyleSheet(box_style.style)
return box
def night_class_injection(self, html, card, context):
html = inject_css_class(self.config.state_on.value, html)
return html
def background_bug_workaround(self, editor):
if self.config.state_on.value:
javascript = """
(function bg_bug_workaround()
{
function getTextNodeAtPosition(root, index){
// Copyright notice:
//
// following function is based on a function created by Pery Mimon:
// https://stackoverflow.com/a/38479462
// and is distributed under CC-BY SA 3.0 license terms:
// https://creativecommons.org/licenses/by-sa/3.0/
var lastNode = null;
var lastIndex = null
var treeWalker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT,function next(elem) {
if(index >= elem.textContent.length){
lastIndex = index
index -= elem.textContent.length;
lastNode = elem;
return NodeFilter.FILTER_REJECT
}
return NodeFilter.FILTER_ACCEPT;
});
var c = treeWalker.nextNode();
return {
node: c ? c : lastNode,
position: c ? index : lastIndex
};
}
var regex = /<(span|strong) style="background-color: rgb\(255, 255, 255\);">(.*?)<\/(span|strong)>/gm
function background_workaround_callback(raw_field)
{
function get_rid_of_background(){
var field = $(raw_field)
var html = field.html()
if(html.search(regex) == -1)
return
var selection = window.getSelection()
var range = selection.getRangeAt(0)
range.setStart(raw_field, 0)
var len = range.toString().length
field.html(html.replace(regex, '<$1>$2</$1>'))
var range = new Range()
var pos = getTextNodeAtPosition(raw_field, len)
range.setStart(pos.node, pos.position)
selection.removeAllRanges()
selection.addRange(range)
}
return get_rid_of_background
}
var field = $('.field')
field.on('keydown', function(e){
var raw_field = this
var get_rid_of_background = background_workaround_callback(raw_field)
if(e.which === 8 || e.which == 46){
window.setTimeout(get_rid_of_background, 0)
}
})
field.on('paste', function(){
var raw_field = this
var get_rid_of_background = background_workaround_callback(raw_field)
window.setTimeout(get_rid_of_background, 100)
})
})()
"""
else:
javascript = ''
editor.web.eval(javascript)
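# Example (a sketch): other add-ons can react to night mode switches via the
# hook that on()/off() run:
#
#     from anki.hooks import addHook
#
#     def on_night_mode_changed(enabled):
#         print('night mode is now', 'on' if enabled else 'off')
#
#     addHook('night_mode_state_changed', on_night_mode_changed)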
ERROR_NO_PROFILE = """Switching night mode failed: The profile is not loaded yet.
Probably it's a bug in Anki, or you tried to switch modes too quickly."""
ERROR_SWITCH = """Switching night mode failed: Something went really really wrong.
Contact add-on author to get help.
Please provide following traceback when reporting the issue:
%s
"""
|
krassowski/Anki-Night-Mode
|
night_mode/night_mode.py
|
Python
|
gpl-3.0
| 9,919
|
[
"VisIt"
] |
f746406168961a7d075fb0bf776f74a2f1154f0c26747ff37f0d84f5a98c1b46
|
#A* -------------------------------------------------------------------
#B* This file contains source code for the PyMOL computer program
#C* copyright Schrodinger LLC.
#D* -------------------------------------------------------------------
#E* It is unlawful to modify or remove this copyright notice.
#F* -------------------------------------------------------------------
#G* Please see the accompanying LICENSE file for further information.
#H* -------------------------------------------------------------------
#I* Additional authors of this source file include:
#-*
#-*
#-*
#Z* -------------------------------------------------------------------
from chempy.models import Indexed
from chempy import Storage,Atom,Bond,feedback
import string
class MOL2(Storage):
def __init__(self,**kwargs):
if 'cmd' in kwargs:
self.cmd = kwargs['cmd']
else:
self.cmd = None
# RTIs
_fields = { "alt_type" : "@<TRIPOS>ALT_TYPE\n",
"anchor_atom" : "@<TRIPOS>ANCHOR_ATOM\n",
"associated_annotation" : "@<TRIPOS>ASSOCIATED_ANNOTATION\n",
"atom" : "@<TRIPOS>ATOM\n",
"bond" : "@<TRIPOS>BOND\n",
"center_of_mass" : "@<TRIPOS>CENTER_OF_MASS\n",
"centroid" : "@<TRIPOS>CENTROID\n",
"comment" : "@<TRIPOS>COMMENT\n",
"crysin" : "@<TRIPOS>CRYSIN\n",
"dict" : "@<TRIPOS>DICT\n",
"data_file" : "@<TRIPOS>DATA_FILE\n",
"extension_point" : "@<TRIPOS>EXTENSION_POINT\n",
"ff_pbc" : "@<TRIPOS>FF_PBC\n",
"ffcon_angle" : "@<TRIPOS>FFCON_ANGLE\n",
"ffcon_dist" : "@<TRIPOS>FFCON_DIST\n",
"ffcon_multi" : "@<TRIPOS>FFCON_MULTI\n",
"ffcon_range" : "@<TRIPOS>FFCON_RANGE\n",
"ffcon_torsion" : "@<TRIPOS>FFCON_TORSION\n",
"line" : "@<TRIPOS>LINE\n",
"lsplane" : "@<TRIPOS>LSPLANE\n",
"molecule" : "@<TRIPOS>MOLECULE\n",
"normal" : "@<TRIPOS>NORMAL\n",
"qsar_align_rule" : "@<TRIPOS>QSAR_ALIGN_RULE\n",
"ring_closure" : "@<TRIPOS>RING_CLOSURE\n",
"rotatable_bond" : "@<TRIPOS>ROTATABLE_BOND\n",
"search_dist" : "@<TRIPOS>SEARCH_DIST\n",
"search_options" : "@<TRIPOS>SEARCH_OPTIONS\n",
"set" : "@<TRIPOS>SET\n",
"substructure" : "@<TRIPOS>SUBSTRUCTURE\n",
"u_feat" : "@<TRIPOS>U_FEAT\n",
"unity_atom_attr" : "@<TRIPOS>UNITY_ATOM_ATTR\n",
"unity_bond_attr" : "@<TRIPOS>UNITY_BOND_ATTR\n" }
_molType = { "small" : "SMALL\n",
"bio" : "BIOPOLYMER\n",
"prot" : "PROTEIN\n",
"nuc" : "NUCLEIC_ACID\n",
"sacc" : "SACCHARIDE\n" }
_chargeType = { "none" : "NO_CHARGES\n",
"del" : "DEL_RE\n",
"gast" : "GASTEIGER\n",
"gast_h" : "GAST_HUCK\n",
"huck" : "HUCKEL\n",
"pull" : "PULLMAN\n",
"gauss80" : "GAUSS80_CHARGES\n",
"ampac" : "AMPAC_CHARGES\n",
"mull" : "MULLIKEN_CHARGES\n",
"dict" : "DICT_ CHARGES\n",
"mmff94" : "MMFF94_CHARGES\n",
"user" : "USER_CHARGES\n" }
_bondTypes = { 1 : "1",
2 : "2",
3 : "3",
"amide" : "am",
4 : "ar",
"dummy" : "du",
"unknown" : "un",
0 : "nc",
"not_connected" : "nc" }
def fromList(self,molList):
pass
def toList(self,model,**kwargs):
molList = []
if 'state' in kwargs:
state = kwargs['state']
else:
state = None
if 'selection' in kwargs:
sel = kwargs['selection']
else:
sel = None
f = MOL2._fields
c = MOL2._chargeType
m = MOL2._molType
n = "\n"
molList.append("# created with PyMOL")
if model.molecule.comments!='':
molList.append("# COMMENTS:"+n)
molList.append("# " + model.molecule.comments)
# RTI MOLECULE
molList.append(n+f["molecule"])
molList.append(model.molecule.title+n)
subst=feat=sets=0
molList.append("%d\t%d\t%d\t%d\t%d\n" % (model.nAtom,model.nBond,subst,feat,sets))
# TODO: Guess this from the user's selection
mKey="prot"
## if self.cmd!=None and state!=None and sel!=None:
## nPoly = self.cmd.count_atoms("poly and (%s and state %s)" % (sel,state))
## nOrg = self.cmd.count_atoms("org and (%s and state %s)" % (sel,state))
## nIno = self.cmd.count_atoms("inorganic and (%s and state %s)" % (sel,state))
## nNuc = self.cmd.count_atoms("(resn DA+DG+DC+DT+A+C+G+U) and (%s and state %s)" (sel,state))
## # not too happy w/this
## # - poly
## if nPoly==0:
## # - nuc
## if nNuc==0:
## # - org
## if nOrg==0:
## # - ino
## if nIno==0:
## # what are you?!
## # you have no polymer, no organic, no nucleic, no inorganic
## # default, is wrong, but so are the others
## mKey = "prot"
## # ino - all
## else:
## mKey = "small"
## # org
## else:
## mKey = "small"
## # nuc
## else:
## if nIno==0 and nOrg==0:
## mKey = "nuc"
## else:
## mKey = "bio"
## # poly
## else:
## mKey="prot"
molList.append(m[mKey])
# TODO: Guess this
molList.append(c["user"])
# RTI ATOM
molList.append(f["atom"])
for a in range(len(model.atom)):
at = model.atom[a]
molList.append("%d\t%4s\t%.3f\t%.3f\t%.3f\t%2s\t%.3f\n" %
(at.index,
at.name or at.symbol or "X",
at.coord[0],at.coord[1],at.coord[2],
at.text_type, at.q))
# RTI BOND
molList.append(f["bond"])
for b in range(len(model.bond)):
bo = model.bond[b]
bOrder = MOL2._bondTypes[bo.order]
molList.append("%d %d %d %s\n" % (b,1+bo.index[0],1+bo.index[1],str(bOrder)))
molList.append("\n")
return molList
def strToFile(self,dat,fname,**params):
if feedback['io']:
print ' chempy: writing mol2 to file "%s".' % fname
fp = open(fname,'w')
result = fp.writelines(dat)
fp.close()
|
gratefulfrog/lib
|
python/chempy/mol2.py
|
Python
|
gpl-2.0
| 7,901
|
[
"ChemPy",
"PyMOL"
] |
6814ee7a1fbc594236aab3a14d0de538a181aa2d23d09980959b57b8bc242fdf
|
# $HeadURL$
'''
:mod: Pfn
.. module: Pfn
:synopsis: pfn URI (un)parsing
.. moduleauthor:: Krzysztof.Ciba@NOSPAMgmail.com
'''
__RCSID__ = "$Id:$"
## imports
import os
## from DIRAC
from DIRAC import S_OK, S_ERROR, gLogger
def pfnunparse( pfnDict ):
"""
Create PFN URI from pfnDict
:param dict pfnDict:
"""
## make sure all keys are in
allDict = dict.fromkeys( [ "Protocol", "Host", "Port", "WSUrl", "Path", "FileName" ], "" )
if type( allDict ) != type( pfnDict ):
return S_ERROR( "pfnunparse: wrong type for pfnDict argument, expected a dict, got %s" % type(pfnDict) )
allDict.update( pfnDict )
pfnDict = allDict
## c
## /a/b/c
filePath = os.path.normpath( '/' + pfnDict["Path"] + '/' + pfnDict["FileName"] ).replace( '//','/' )
## host
uri = pfnDict["Host"]
if pfnDict["Host"]:
if pfnDict["Port"]:
# host:port
uri = "%s:%s" % ( pfnDict["Host"], pfnDict["Port"] )
if pfnDict["WSUrl"]:
if "?" in pfnDict["WSUrl"] and "=" in pfnDict["WSUrl"]:
# host/wsurl
# host:port/wsurl
uri = "%s%s" % ( uri, pfnDict["WSUrl"] )
else:
# host/wsurl
# host:port/wsurl
uri = "%s%s?=" % ( uri, pfnDict["WSUrl"] )
if pfnDict["Protocol"]:
if uri:
# proto://host
# proto://host:port
# proto://host:port/wsurl
uri = "%s://%s" % ( pfnDict["Protocol"], uri )
else:
# proto:
uri = "%s:" % pfnDict["Protocol"]
pfn = "%s%s" % ( uri, filePath )
# c
# /a/b/c
# proto:/a/b/c
# proto://host/a/b/c
# proto://host:port/a/b/c
# proto://host:port/wsurl/a/b/c
return S_OK( pfn )
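# Example usage (a sketch; host and path are placeholders):
#
#   pfnunparse( { "Protocol" : "srm", "Host" : "se.example.org", "Port" : "8443",
#                 "WSUrl" : "", "Path" : "/a/b", "FileName" : "c" } )
#   # -> S_OK with Value "srm://se.example.org:8443/a/b/c"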
def pfnparse( pfn ):
"""
Parse pfn and save all bits of information into dictionary
:param str pfn: pfn string
"""
if not pfn:
return S_ERROR("wrong 'pfn' argument value in function call, expected non-empty string, got %s" % str(pfn) )
pfnDict = dict.fromkeys( [ "Protocol", "Host", "Port", "WSUrl", "Path", "FileName" ], "" )
try:
if ":" not in pfn:
# pfn = /a/b/c
pfnDict["Path"] = os.path.dirname( pfn )
pfnDict["FileName"] = os.path.basename( pfn )
else:
# pfn = protocol:/a/b/c
# pfn = protocol://host/a/b/c
# pfn = protocol://host:port/a/b/c
# pfn = protocol://host:port/wsurl?=/a/b/c
pfnDict["Protocol"] = pfn[ 0:pfn.index(":") ]
## remove protocol:
pfn = pfn[len(pfnDict["Protocol"]):]
## remove :// or :
pfn = pfn[3:] if pfn.startswith("://") else pfn[1:]
if pfn.startswith("/"):
## /a/b/c
pfnDict["Path"] = os.path.dirname( pfn )
pfnDict["FileName"] = os.path.basename( pfn )
else:
## host/a/b/c
## host:port/a/b/c
## host:port/wsurl?=/a/b/c
if ":" not in pfn:
## host/a/b/c
pfnDict["Host"] = pfn[ 0:pfn.index("/") ]
pfn = pfn[len(pfnDict["Host"]):]
pfnDict["Path"] = os.path.dirname( pfn )
pfnDict["FileName"] = os.path.basename( pfn )
else:
## host:port/a/b/c
## host:port/wsurl?=/a/b/c
pfnDict["Host"] = pfn[0:pfn.index(":")]
## port/a/b/c
## port/wsurl?=/a/b/c
pfn = pfn[ len(pfnDict["Host"])+1: ]
pfnDict["Port"] = pfn[0:pfn.index("/")]
## /a/b/c
## /wsurl?=/a/b/c
pfn = pfn[ len(pfnDict["Port"]): ]
WSUrl = pfn.find("?")
WSUrlEnd = pfn.find("=")
if WSUrl == -1 and WSUrlEnd == -1:
## /a/b/c
pfnDict["Path"] = os.path.dirname( pfn )
pfnDict["FileName"] = os.path.basename( pfn )
else:
## /wsurl?blah=/a/b/c
pfnDict["WSUrl"] = pfn[ 0:WSUrlEnd+1 ]
## /a/b/c
pfn = pfn[ len(pfnDict["WSUrl"]):]
pfnDict["Path"] = os.path.dirname( pfn )
pfnDict["FileName"] = os.path.basename( pfn )
return S_OK( pfnDict )
except Exception:
errStr = "Pfn.pfnparse: Exception while parsing pfn: " + str( pfn )
gLogger.exception( errStr )
return S_ERROR( errStr )
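# Round-trip example (a sketch; host and path are placeholders):
#
#   res = pfnparse( "srm://se.example.org:8443/a/b/c" )
#   # res["Value"] == { "Protocol" : "srm", "Host" : "se.example.org",
#   #                   "Port" : "8443", "WSUrl" : "", "Path" : "/a/b",
#   #                   "FileName" : "c" }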
|
miloszz/DIRAC
|
Core/Utilities/Pfn.py
|
Python
|
gpl-3.0
| 4,092
|
[
"DIRAC"
] |
f75ff1e517035f1188021a56b638548123ce390164bfead3292e010bc7435af6
|
"""
.. moduleauthor:: Johan Comparat <johan.comparat__at__gmail.com>
General purpose:
................
The class StellarPopulationModel is a wrapper dedicated to handling the fit of stellar population models on observed spectra.
It gathers all inputs : from the model and from the data.
*Imports*::
import numpy as np
import astropy.io.fits as pyfits
import astropy.units as u
import glob
import pandas as pd
import os
from firefly_instrument import *
from firefly_dust import *
from firefly_fitter import *
from firefly_library import *
"""
import numpy as np
import astropy.io.fits as pyfits
import astropy.units as u
import glob
import pandas as pd
import os
from os.path import join
import copy
#from scipy.stats import sigmaclip
#from firefly_dust import *
#import firefly_dust as f_dust
from firefly_dust import hpf, unred, determine_attenuation
from firefly_instrument import downgrade
from firefly_fitter import fitter
from firefly_library import airtovac, convert_chis_to_probs, light_weights_to_mass, calculate_averages_pdf, normalise_spec, match_data_models
class StellarPopulationModel:
"""
:param specObs: specObs observed spectrum object initiated with the GalaxySpectrumFIREFLY class.
:param models: choose between 'm11', 'bc03' or 'm09'.
* m11 corresponds to all the models compared in `Maraston and Stromback 2011 <http://adsabs.harvard.edu/abs/2011MNRAS.418.2785M>`_.
* m09 to `Maraston et al. 2009 <http://adsabs.harvard.edu/abs/2009A%26A...493..425M>`_.
* bc03 to the `Bruzual and Charlot 2003 models <http://adsabs.harvard.edu/abs/2003MNRAS.344.1000B>`_.
:param model_libs: only necessary if using m11.
Choose between `MILES <http://adsabs.harvard.edu/abs/2011A%26A...532A..95F>`_, MILES revisednearIRslope, MILES UVextended, `STELIB <http://adsabs.harvard.edu/abs/2003A%26A...402..433L>`_, `ELODIE <http://adsabs.harvard.edu/abs/2007astro.ph..3658P>`_, `MARCS <http://adsabs.harvard.edu/abs/2008A%26A...486..951G>`_.
* MILES, MILES revisednearIRslope, MILES UVextended, STELIB, ELODIE are empirical libraries.
* MARCS is a theoretical library.
:param imfs: choose the `initial mass function <https://en.wikipedia.org/wiki/Initial_mass_function>`_:
* 'ss' for `Salpeter <http://adsabs.harvard.edu/abs/1955ApJ...121..161S>`_ or
* 'kr' for `Kroupa <http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:1112.3340>`_ or
* 'cha' for `Chabrier <http://adsabs.harvard.edu/abs/2003PASP..115..763C>`_.
:param hpf_mode: 'on' means the code uses HPF to deredden the spectrum; if 'hpf_only', then EBV=0.
Notes
-----
.. note::
*This is how it proceeds :*
#. reads the parameter file by using parameters_obtain(parameters.py)
#. It opens the data file, model files, then it matches their resolutions by downgrading the models to instrumental and velocity dispersion resolution
#. Determines the dust attenuation curve to be applied to the models. Two options: through HPF fitting (3.1.) or through filtered values to determine SP properties (3.2.).
#. It fits the models to the data
#. Gets mass-weighted SSP contributions using saved M/L ratio.
#. Convert chis into probabilities and calculates all average properties and errors (assuming the number of degrees of freedom is the number of wavelength points)
#. Optionally produces a plot
#. Finally, it writes the output files
"""
def __init__(self, specObs, outputFile, cosmo, models = 'm11', model_libs = ['MILES_UVextended'], imfs = ['ss','kr'], hpf_mode = 'on', age_limits = [6,10.1], downgrade_models = True, dust_law = 'calzetti', max_ebv = 1.5, num_dust_vals = 200, dust_smoothing_length = 200, max_iterations = 10, fit_per_iteration_cap = 1000, pdf_sampling = 300, data_wave_medium = 'vacuum', Z_limits = [-0.1,0.1], wave_limits = [0,99999990], suffix = "-fireflyFits.fits",use_downgraded_models = False):
self.cosmo = cosmo
self.specObs = specObs
self.outputFile = outputFile
#################### STARTS HERE ####################
# sets the models
self.models = models # m11/bc03 / m09
self.model_libs = model_libs
self.suffix = suffix
self.deltal_libs = []
self.vdisp_round = int(round(self.specObs.vdisp/5.0)*5.0) # rounding vDisp for the models
self.use_downgraded_models = use_downgraded_models
if self.models == 'm11':
for m in self.model_libs:
if m == 'MILES' or m == 'MILES_revisednearIRslope' or m == 'MILES_UVextended':
self.deltal_libs.append(2.55)
elif m == 'STELIB':
self.deltal_libs.append(3.40)
elif m == 'ELODIE':
self.deltal_libs.append(0.55)
elif m == 'MARCS':
self.deltal_libs.append(0.1)
elif self.models=='bc03':
self.model_libs = ['STELIB_BC03']
imfs = ['cha']
self.deltal_libs = [3.00]
elif self.models == 'm09':
self.model_libs = ['M09']
if downgrade_models:
self.deltal_libs = [0.4]
else:
self.deltal_libs = [3.6]
# sets the Initial mass function
self.imfs = imfs
self.hpf_mode = hpf_mode
self.age_limits = age_limits
self.downgrade_models = downgrade_models
self.dust_law = dust_law
self.max_ebv = max_ebv
self.num_dust_vals = num_dust_vals
self.dust_smoothing_length = dust_smoothing_length
# Specific fitting options
self.max_iterations = max_iterations
self.fit_per_iteration_cap = fit_per_iteration_cap
# Sampling size when calculating the maximum pdf (100=recommended)
self.pdf_sampling = pdf_sampling
# Default is air, unless manga is used
self.data_wave_medium = data_wave_medium
self.Z_limits = Z_limits
self.wave_limits = wave_limits
def get_model(self, model_used, imf_used, deltal, vdisp, wave_instrument, r_instrument, ebv_mw):
"""
Retrieves all relevant model files, in their downgraded format.
If they aren't downgraded to the correct resolution / velocity dispersion,
takes the base models in their native form and converts to downgraded files.
:param model_used: list of models to be used, for example ['m11', 'm09'].
:param imf_used: list of imf to be used, for example ['ss', 'cha'].
:param deltal: delta lambda in the models.
:param vdisp: velocity dispersion observed in the galaxy.
:param wave_instrument: wavelength array from the observations
:param r_instrument: resolution array from the observations
:param ebv_mw: E(B-V) from the dust maps for the galaxy.
Workflow
----------
A. loads the models m11 or m09: maps parameters to the right files. Then it constructs the model array. Finally converts wavelengths to air or vacuum.
B. downgrades the model to match data resolution
C. applies attenuation
D. stores models in
self.model_wavelength,
self.model_flux,
self.age_model,
self.metal_model
and returns them as well
"""
# first the m11 case
if self.models == 'm11':
first_file = True
model_files = []
if self.use_downgraded_models :
if model_used == 'MILES_UVextended' or model_used == 'MILES_revisednearIRslope':
model_path = join(os.environ['STELLARPOPMODELS_DIR'],'data','SSP_M11_MILES_downgraded','ssp_M11_' + model_used+ '.' + imf_used)
else:
model_path = join(os.environ['STELLARPOPMODELS_DIR'],'data','SSP_M11_'+ model_used + '_downgraded', 'ssp_M11_' +model_used +'.' + imf_used)
else:
if model_used == 'MILES_UVextended' or model_used == 'MILES_revisednearIRslope':
model_path = join(os.environ['STELLARPOPMODELS_DIR'],'data','SSP_M11_MILES', 'ssp_M11_'+model_used+'.'+imf_used)
else:
model_path = join(os.environ['STELLARPOPMODELS_DIR'],'data','SSP_M11_'+model_used ,'ssp_M11_' +model_used +'.' + imf_used)
# Constructs the metallicity array of models :
all_metal_files = glob.glob(model_path+'*')
#print all_metal_files
metal_files = []
metal = []
for z in range(len(all_metal_files)):
zchar = all_metal_files[z][len(model_path):]
if zchar == 'z001':
znum = -0.3
elif zchar == 'z002':
znum = 0.0
elif zchar == 'z004':
znum = 0.3
elif zchar == 'z0001.bhb':
znum = -1.301
elif zchar == 'z0001.rhb':
znum = -1.302
elif zchar == 'z10m4.bhb':
znum = -2.301
elif zchar == 'z10m4.rhb':
znum = -2.302
elif zchar == 'z10m4':
znum = -2.300
else:
raise NameError('Unrecognised metallicity! Check model file names.')
if znum>self.Z_limits[0] and znum<self.Z_limits[1]:
metal_files.append(all_metal_files[z])
metal.append(znum)
# constructs the model array
model_flux, age_model, metal_model = [],[],[]
for zi,z in enumerate(metal_files):
print "Retrieving and downgrading models for "+z
model_table = pd.read_table(z,converters={'Age':np.float64}, header=None ,usecols=[0,2,3], names=['Age','wavelength_model','flux_model'], delim_whitespace=True)
age_data = np.unique(model_table['Age'].values.ravel())
for a in age_data:
logyrs_a = np.log10(a)+9.0
#print "age model selection:", self.age_limits[0], logyrs_a, self.age_limits[1]
if logyrs_a < self.age_limits[0] or logyrs_a > self.age_limits[1]:
continue
spectrum = model_table.loc[model_table.Age == a, ['wavelength_model', 'flux_model'] ].values
wavelength_int,flux = spectrum[:,0],spectrum[:,1]
# converts to vacuum wavelengths if the data are in vacuum
if self.data_wave_medium == 'vacuum':
wavelength = airtovac(wavelength_int)
else:
wavelength = wavelength_int
# downgrades the model
if self.downgrade_models:
mf = downgrade(wavelength,flux,deltal,self.vdisp_round, wave_instrument, r_instrument)
else:
mf = copy.copy(flux)
# Reddens the models
if ebv_mw != 0:
attenuations = unred(wavelength,ebv=0.0-ebv_mw)
model_flux.append(mf*attenuations)
else:
model_flux.append(mf)
age_model.append(a)
metal_model.append(metal[zi])
first_model = False
print "Retrieved all models!"
self.model_wavelength, self.model_flux, self.age_model, self.metal_model = wavelength, model_flux, age_model, metal_model
return wavelength, model_flux, age_model, metal_model
elif self.models == 'm09':
first_file = True
model_files = []
if self.use_downgraded_models:
model_path = join(os.environ['STELLARPOPMODELS_DIR'],'data', 'UVmodels_Marastonetal08b_downgraded')
else:
model_path = join(os.environ['STELLARPOPMODELS_DIR'],'data', 'UVmodels_Marastonetal08b')
# Gathers the list of models with metallicities and ages of interest:
all_metal_files = glob.glob(model_path+'*')
metal_files = []
metal = []
for z in range(len(all_metal_files)):
zchar = all_metal_files[z].split('.')[1][2:]
if zchar == 'z001':
znum = -0.3
elif zchar == 'z002':
znum = 0.0
elif zchar == 'z004':
znum = 0.3
elif zchar == 'z0001':
znum = -1.300
else:
raise NameError('Unrecognised metallicity! Check model file names.')
if znum>self.Z_limits[0] and znum<self.Z_limits[1]:
metal_files.append(all_metal_files[z])
metal.append(znum)
# constructs the model array
model_flux, age_model, metal_model = [],[],[]
for zi,z in enumerate(metal_files):
print "Retrieving and downgrading models for "+z
model_table = pd.read_table(z,converters={'Age':np.float64}, header=None ,usecols=[0,2,3], names=['Age','wavelength_model','flux_model'], delim_whitespace=True)
age_data = np.unique(model_table['Age'].values.ravel())
for a in age_data:
logyrs_a = np.log10(a)+9.0
#print "age model selection:", self.age_limits[0], logyrs_a, self.age_limits[1]
if logyrs_a < self.age_limits[0] or logyrs_a > self.age_limits[1]:
continue
spectrum = model_table.loc[model_table.Age == a, ['wavelength_model', 'flux_model'] ].values
wavelength_int,flux = spectrum[:,0],spectrum[:,1]
# converts to vacuum wavelengths if the data are in vacuum
if self.data_wave_medium == 'vacuum':
wavelength = airtovac(wavelength_int)
else:
wavelength = wavelength_int
# downgrades the model
if self.downgrade_models:
mf = downgrade(wavelength,flux,deltal,self.vdisp_round, wave_instrument, r_instrument)
else:
mf = copy.copy(flux)
# Reddens the models
if ebv_mw != 0:
attenuations = unred(wavelength,ebv=0.0-ebv_mw)
model_flux.append(mf*attenuations)
else:
model_flux.append(mf)
age_model.append(a)
metal_model.append(metal[zi])
first_model = False
print "Retrieved all models!"
self.model_wavelength, self.model_flux, self.age_model, self.metal_model = wavelength, model_flux, age_model, metal_model
return wavelength, model_flux, age_model, metal_model
def fit_models_to_data(self):
"""
Once the data and models are loaded, then execute this function to find the best model. It loops overs the models to be fitted on the data:
#. gets the models
#. matches the model and data to the same resolution
#. normalises the spectra
"""
for mi,mm in enumerate(self.model_libs):
# loop over the models
for ii in self.imfs:
# loop over the IMFs
# A. gets the models
print "getting the models"
deltal = self.deltal_libs[mi]
model_wave_int, model_flux_int, age, metal = self.get_model( mm, ii, deltal, self.specObs.vdisp, self.specObs.restframe_wavelength, self.specObs.r_instrument, self.specObs.ebv_mw)
# B. matches the model and data to the same resolution
print "Matching models to data"
wave, data_flux, error_flux, model_flux_raw = match_data_models( self.specObs.restframe_wavelength, self.specObs.flux, self.specObs.bad_flags, self.specObs.error, model_wave_int, model_flux_int, self.wave_limits[0], self.wave_limits[1], saveDowngradedModel = False)
# C. normalises the models to the median value of the data
print "Normalising the models"
model_flux, mass_factors = normalise_spec(data_flux, model_flux_raw)
# 3. Corrects from dust attenuation
if self.hpf_mode=='on':
# 3.1. Determining attenuation curve through HPF fitting, apply attenuation curve to models and renormalise spectra
best_ebv, attenuation_curve = determine_attenuation(wave, data_flux, error_flux, model_flux, self, age, metal)
model_flux_atten = np.zeros(np.shape(model_flux_raw))
for m in range(len(model_flux_raw)):
model_flux_atten[m] = attenuation_curve * model_flux_raw[m]
model_flux, mass_factors = normalise_spec(data_flux, model_flux_atten)
# 4. Fits the models to the data
light_weights, chis, branch = fitter(wave, data_flux, error_flux, model_flux, self)
elif self.hpf_mode == 'hpf_only':
# 3.2. Uses filtered values to determing SP properties only."
smoothing_length = self.dust_smoothing_length
hpf_data = hpf(data_flux)
hpf_models = np.zeros(np.shape(model_flux))
for m in range(len(model_flux)):
hpf_models[m] = hpf(model_flux[m])
zero_dat = np.where( (np.isnan(hpf_data)) & (np.isinf(hpf_data)) )
hpf_data[zero_dat] = 0.0
for m in range(len(model_flux)):
hpf_models[m,zero_dat] = 0.0
hpf_error = np.zeros(len(error_flux))
hpf_error[:] = np.median(error_flux)/np.median(data_flux) * np.median(hpf_data)
hpf_error[zero_dat] = np.max(hpf_error)*999999.9
best_ebv = 0.0
hpf_models,mass_factors = normalise_spec(hpf_data,hpf_models)
# 4. Fits the models to the data
light_weights, chis, branch = fitter(wave, hpf_data,hpf_error, hpf_models, self)
# 5. Get mass-weighted SSP contributions using saved M/L ratio.
unnorm_mass, mass_weights = light_weights_to_mass(light_weights, mass_factors)
print "Fitting complete"
print "Calculating average properties and outputting"
# 6. Convert chis into probabilities and calculates all average properties and errors
self.dof = len(wave)
probs = convert_chis_to_probs(chis, self.dof)
dist_lum = self.cosmo.luminosity_distance( self.specObs.redshift).to( u.cm ).value
averages = calculate_averages_pdf(probs, light_weights, mass_weights, unnorm_mass, age, metal, self.pdf_sampling, dist_lum)
unique_ages = np.unique(age)
marginalised_age_weights = np.zeros(np.shape(unique_ages))
marginalised_age_weights_int = np.sum(mass_weights.T,1)
for ua in range(len(unique_ages)):
marginalised_age_weights[ua] = np.sum(marginalised_age_weights_int[np.where(age==unique_ages[ua])])
best_fit_index = [np.argmin(chis)]
best_fit = np.dot(light_weights[best_fit_index],model_flux)[0]
# stores outputs in the object
self.best_fit_index = best_fit_index
self.best_fit = best_fit
self.model_flux = model_flux
self.dist_lum = dist_lum
self.age = np.array(age)
self.metal = np.array(metal)
self.mass_weights = mass_weights
self.light_weights = light_weights
self.chis = chis
self.branch = branch
self.unnorm_mass = unnorm_mass
self.probs = probs
self.wave = wave
self.best_fit = best_fit
self.averages = averages
bf_mass = (self.mass_weights[self.best_fit_index]>0)[0]
bf_light = (self.light_weights[self.best_fit_index]>0)[0]
mass_per_ssp = self.unnorm_mass[self.best_fit_index[0]][bf_mass]*10.0**(-17) * 4 * np.pi * self.dist_lum**2.0
age_per_ssp = self.age[bf_mass]*10**9
metal_per_ssp = self.metal[bf_mass]
weight_mass_per_ssp = self.mass_weights[self.best_fit_index[0]][bf_mass]
weight_light_per_ssp = self.light_weights[self.best_fit_index[0]][bf_light]
order = np.argsort(-weight_light_per_ssp)
print "M Msun", self.averages['stellar_mass'], np.log10(mass_per_ssp[order])
print "age Gyr", 10**self.averages['light_age'], 10**self.averages['mass_age'], age_per_ssp[order]/1e9
print "Z", self.averages['light_metal'], self.averages['mass_metal'], metal_per_ssp[order]
print "SFR Msun/yr", mass_per_ssp[order]/age_per_ssp[order]
print "wm", weight_mass_per_ssp[order]
print "wl", weight_light_per_ssp[order]
print "z, age Gyr", self.specObs.redshift, self.cosmo.age(self.specObs.redshift).value
# 8. It writes the output file
waveCol = pyfits.Column(name="wavelength",format="D", unit="Angstrom", array= wave)
best_fitCol = pyfits.Column(name="firefly_model",format="D", unit="1e-17erg/s/cm2/Angstrom", array= best_fit)
cols = pyfits.ColDefs([ waveCol, best_fitCol])
tbhdu = pyfits.BinTableHDU.from_columns(cols)
tbhdu.header['HIERARCH age_universe'] = np.log10(self.cosmo.age(self.specObs.redshift).value*10**9)
tbhdu.header['HIERARCH redshift'] = self.specObs.redshift
# mean quantities
tbhdu.header['HIERARCH age_lightW_mean'] = np.log10(10**9 * 10**averages['light_age'])
tbhdu.header['HIERARCH age_lightW_mean_up'] = np.log10(10**9 * 10**averages['light_age_1_sig_plus']) # log(Gyrs)
tbhdu.header['HIERARCH age_lightW_mean_low'] = np.log10(10**9 * 10**averages['light_age_1_sig_minus']) # log(Gyrs)
tbhdu.header['HIERARCH metallicity_lightW_mean'] = averages['light_metal']
tbhdu.header['HIERARCH metallicity_lightW_mean_up'] = averages['light_metal_1_sig_plus']
tbhdu.header['HIERARCH metallicity_lightW_mean_low'] = averages['light_metal_1_sig_minus']
tbhdu.header['HIERARCH age_massW_mean'] = np.log10(10**9 * 10**averages['mass_age'])
tbhdu.header['HIERARCH age_massW_mean_up'] = np.log10(10**9 * 10**averages['mass_age_1_sig_plus']) # log(Gyrs)
tbhdu.header['HIERARCH age_massW_mean_low'] = np.log10(10**9 * 10**averages['mass_age_1_sig_minus']) # log(Gyrs)
tbhdu.header['HIERARCH metallicity_massW_mean'] = averages['mass_metal']
tbhdu.header['HIERARCH metallicity_massW_mean_up'] = averages['mass_metal_1_sig_plus']
tbhdu.header['HIERARCH metallicity_massW_mean_low'] = averages['mass_metal_1_sig_minus']
tbhdu.header['HIERARCH EBV'] = best_ebv
tbhdu.header['HIERARCH stellar_mass_mean'] = averages['stellar_mass']
tbhdu.header['HIERARCH stellar_mass_mean_up'] = averages['stellar_mass_1_sig_plus']
tbhdu.header['HIERARCH stellar_mass_mean_low'] = averages['stellar_mass_1_sig_minus']
tbhdu.header['HIERARCH ssp_number'] =len(order)
# quantities per SSP
for iii in range(len(order)):
tbhdu.header['HIERARCH stellar_mass_ssp_'+str(iii)] = np.log10(mass_per_ssp[order])[iii]
tbhdu.header['HIERARCH age_ssp_'+str(iii)] = np.log10(age_per_ssp[order][iii])
tbhdu.header['HIERARCH metal_ssp_'+str(iii)] = metal_per_ssp[order][iii]
tbhdu.header['HIERARCH SFR_ssp_'+str(iii)] = mass_per_ssp[order][iii]/age_per_ssp[order][iii]
tbhdu.header['HIERARCH weightMass_ssp_'+str(iii)] = weight_mass_per_ssp[order][iii]
tbhdu.header['HIERARCH weightLight_ssp_'+str(iii)] = weight_light_per_ssp[order][iii]
prihdr = pyfits.Header()
prihdr['file'] = self.specObs.path_to_spectrum
prihdr['model'] = self.models
prihdr['ageMin'] = self.age_limits[0]
prihdr['ageMax'] = self.age_limits[1]
prihdr['Zmin'] = self.Z_limits[0]
prihdr['Zmax'] = self.Z_limits[1]
prihdu = pyfits.PrimaryHDU(header=prihdr)
thdulist = pyfits.HDUList([prihdu, tbhdu])
if os.path.isfile(self.outputFile + self.suffix):
os.remove(self.outputFile + self.suffix)
#print self.outputFile + self.suffix, thdulist, thdulist[1].data, thdulist[0].header
thdulist.writeto(self.outputFile + self.suffix)
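# --- Usage sketch (illustrative addition; not part of the original FIREFLY
# source). Reading back the mean quantities written to the header above,
# using the standard pyfits.open call:
#   hdus = pyfits.open(self.outputFile + self.suffix)
#   hdr = hdus[1].header
#   print hdr['age_lightW_mean'], hdr['metallicity_lightW_mean'], hdr['stellar_mass_mean']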
|
JohanComparat/pySU
|
spm/python/StellarPopulationModel.py
|
Python
|
cc0-1.0
| 21,167
|
[
"Galaxy"
] |
246238fbd1f3617fa5ef415c3442b29e8f9679c4a28ba4b9cc8258b23d96cba8
|
from ase.io import read
from ase.units import kJ
from ase.utils.eos import EquationOfState
configs = read('Ag.traj@0:5') # read 5 configurations
# Extract volumes and energies:
volumes = [ag.get_volume() for ag in configs]
energies = [ag.get_potential_energy() for ag in configs]
eos = EquationOfState(volumes, energies)
v0, e0, B = eos.fit()
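# Note (added for clarity): eos.fit() returns B in eV/Angstrom**3; dividing by
# kJ and multiplying by 1.0e24 converts it to GPa.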
print B / kJ * 1.0e24, 'GPa'
eos.plot('Ag-eos.png')
|
grhawk/ASE
|
tools/doc/tutorials/eos/eos2.py
|
Python
|
gpl-2.0
| 397
|
[
"ASE"
] |
aa64d5b5f16dc0fc730fb34dfa00060045e3063a0cf415c39afea22ad45ee18c
|
#!/usr/bin/env python
##################################################
## DEPENDENCIES
import sys
import os
import os.path
try:
import builtins as builtin
except ImportError:
import __builtin__ as builtin
from os.path import getmtime, exists
import time
import types
from Cheetah.Version import MinCompatibleVersion as RequiredCheetahVersion
from Cheetah.Version import MinCompatibleVersionTuple as RequiredCheetahVersionTuple
from Cheetah.Template import Template
from Cheetah.DummyTransaction import *
from Cheetah.NameMapper import NotFound, valueForName, valueFromSearchList, valueFromFrameOrSearchList
from Cheetah.CacheRegion import CacheRegion
import Cheetah.Filters as Filters
import Cheetah.ErrorCatchers as ErrorCatchers
##################################################
## MODULE CONSTANTS
VFFSL=valueFromFrameOrSearchList
VFSL=valueFromSearchList
VFN=valueForName
currentTime=time.time
__CHEETAH_version__ = '2.4.4'
__CHEETAH_versionTuple__ = (2, 4, 4, 'development', 0)
__CHEETAH_genTime__ = 1406885499.342535
__CHEETAH_genTimestamp__ = 'Fri Aug 1 18:31:39 2014'
__CHEETAH_src__ = '/home/wslee2/models/5-wo/force1plus/openpli3.0/build-force1plus/tmp/work/mips32el-oe-linux/enigma2-plugin-extensions-openwebif-1+git5+3c0c4fbdb28d7153bf2140459b553b3d5cdd4149-r0/git/plugin/controllers/views/ajax/config.tmpl'
__CHEETAH_srcLastModified__ = 'Fri Aug 1 18:30:05 2014'
__CHEETAH_docstring__ = 'Autogenerated by Cheetah: The Python-Powered Template Engine'
if __CHEETAH_versionTuple__ < RequiredCheetahVersionTuple:
raise AssertionError(
'This template was compiled with Cheetah version'
' %s. Templates compiled before version %s must be recompiled.'%(
__CHEETAH_version__, RequiredCheetahVersion))
##################################################
## CLASSES
class config(Template):
##################################################
## CHEETAH GENERATED METHODS
def __init__(self, *args, **KWs):
super(config, self).__init__(*args, **KWs)
if not self._CHEETAH__instanceInitialized:
cheetahKWArgs = {}
allowedKWs = 'searchList namespaces filter filtersLib errorCatcher'.split()
for k,v in KWs.items():
if k in allowedKWs: cheetahKWArgs[k] = v
self._initCheetahInstance(**cheetahKWArgs)
def select(self, config, **KWS):
## CHEETAH: generated from #def select($config) at line 1, col 1.
trans = KWS.get("trans")
if (not trans and not self._CHEETAH__isBuffering and not callable(self.transaction)):
trans = self.transaction # is None unless self.awake() was called
if not trans:
trans = DummyTransaction()
_dummyTrans = True
else: _dummyTrans = False
write = trans.response().write
SL = self._CHEETAH__searchList
_filter = self._CHEETAH__currentFilter
########################################
## START - generated method body
write(u'''<tr>
<td>''')
_v = VFFSL(SL,"config.description",True) # u'$config.description' on line 3, col 5
if _v is not None: write(_filter(_v, rawExpr=u'$config.description')) # from line 3, col 5.
write(u'''</td>
<td>
\t<select id="''')
_v = VFFSL(SL,"config.path",True) # u'$config.path' on line 5, col 14
if _v is not None: write(_filter(_v, rawExpr=u'$config.path')) # from line 5, col 14.
write(u'''" onchange="saveConfig(\'''')
_v = VFFSL(SL,"config.path",True) # u'$config.path' on line 5, col 50
if _v is not None: write(_filter(_v, rawExpr=u'$config.path')) # from line 5, col 50.
write(u'''\', this.value);">
''')
for choice in VFFSL(SL,"config.data.choices",True): # generated from line 6, col 3
if VFFSL(SL,"config.data.current",True) == VFFSL(SL,"choice",True)[0]: # generated from line 7, col 4
write(u'''\t\t\t\t<option value="''')
_v = VFFSL(SL,"choice",True)[0] # u'$choice[0]' on line 8, col 20
if _v is not None: write(_filter(_v, rawExpr=u'$choice[0]')) # from line 8, col 20.
write(u'''" selected="true">''')
_v = VFFSL(SL,"choice",True)[1] # u'$choice[1]' on line 8, col 48
if _v is not None: write(_filter(_v, rawExpr=u'$choice[1]')) # from line 8, col 48.
write(u'''</option>
''')
else: # generated from line 9, col 4
write(u'''\t\t\t\t<option value="''')
_v = VFFSL(SL,"choice",True)[0] # u'$choice[0]' on line 10, col 20
if _v is not None: write(_filter(_v, rawExpr=u'$choice[0]')) # from line 10, col 20.
write(u'''">''')
_v = VFFSL(SL,"choice",True)[1] # u'$choice[1]' on line 10, col 32
if _v is not None: write(_filter(_v, rawExpr=u'$choice[1]')) # from line 10, col 32.
write(u'''</option>
''')
write(u'''\t</select>
</td>
</tr>
''')
########################################
## END - generated method body
return _dummyTrans and trans.response().getvalue() or ""
def checkbox(self, config, **KWS):
## CHEETAH: generated from #def checkbox($config) at line 18, col 1.
trans = KWS.get("trans")
if (not trans and not self._CHEETAH__isBuffering and not callable(self.transaction)):
trans = self.transaction # is None unless self.awake() was called
if not trans:
trans = DummyTransaction()
_dummyTrans = True
else: _dummyTrans = False
write = trans.response().write
SL = self._CHEETAH__searchList
_filter = self._CHEETAH__currentFilter
########################################
## START - generated method body
write(u'''<tr>
<td>''')
_v = VFFSL(SL,"config.description",True) # u'$config.description' on line 20, col 5
if _v is not None: write(_filter(_v, rawExpr=u'$config.description')) # from line 20, col 5.
write(u'''</td>
<td>
''')
if VFFSL(SL,"config.data.current",True): # generated from line 22, col 2
write(u'''\t\t<input type="checkbox" id="''')
_v = VFFSL(SL,"config.path",True) # u'$config.path' on line 23, col 30
if _v is not None: write(_filter(_v, rawExpr=u'$config.path')) # from line 23, col 30.
write(u'''" checked="true" onclick="saveConfig(\'''')
_v = VFFSL(SL,"config.path",True) # u'$config.path' on line 23, col 80
if _v is not None: write(_filter(_v, rawExpr=u'$config.path')) # from line 23, col 80.
write(u'''\', this.checked);" />
''')
else: # generated from line 24, col 2
write(u'''\t\t<input type="checkbox" id="''')
_v = VFFSL(SL,"config.path",True) # u'$config.path' on line 25, col 30
if _v is not None: write(_filter(_v, rawExpr=u'$config.path')) # from line 25, col 30.
write(u'''" onclick="saveConfig(\'''')
_v = VFFSL(SL,"config.path",True) # u'$config.path' on line 25, col 65
if _v is not None: write(_filter(_v, rawExpr=u'$config.path')) # from line 25, col 65.
write(u'''\', this.checked);" />
''')
write(u'''</td>
</tr>
''')
########################################
## END - generated method body
return _dummyTrans and trans.response().getvalue() or ""
def multicheckbox(self, config, **KWS):
## CHEETAH: generated from #def multicheckbox($config) at line 31, col 1.
trans = KWS.get("trans")
if (not trans and not self._CHEETAH__isBuffering and not callable(self.transaction)):
trans = self.transaction # is None unless self.awake() was called
if not trans:
trans = DummyTransaction()
_dummyTrans = True
else: _dummyTrans = False
write = trans.response().write
SL = self._CHEETAH__searchList
_filter = self._CHEETAH__currentFilter
########################################
## START - generated method body
write(u'''<tr>
<td>''')
_v = VFFSL(SL,"config.description",True) # u'$config.description' on line 33, col 5
if _v is not None: write(_filter(_v, rawExpr=u'$config.description')) # from line 33, col 5.
write(u'''</td>
<td>
''')
for choice in VFFSL(SL,"config.data.choices",True): # generated from line 35, col 2
if VFFSL(SL,"choice",True) in VFFSL(SL,"config.data.current",True): # generated from line 36, col 3
write(u'''\t\t\t<input type="checkbox" id="''')
_v = VFFSL(SL,"config.path",True) # u'${config.path}' on line 37, col 31
if _v is not None: write(_filter(_v, rawExpr=u'${config.path}')) # from line 37, col 31.
write(u'''_''')
_v = VFFSL(SL,"choice",True) # u'${choice}' on line 37, col 46
if _v is not None: write(_filter(_v, rawExpr=u'${choice}')) # from line 37, col 46.
write(u'''" checked="true" onclick="saveConfig(\'''')
_v = VFFSL(SL,"config.path",True) # u'$config.path' on line 37, col 93
if _v is not None: write(_filter(_v, rawExpr=u'$config.path')) # from line 37, col 93.
write(u"""', """)
_v = VFFSL(SL,"choice",True) # u'$choice' on line 37, col 108
if _v is not None: write(_filter(_v, rawExpr=u'$choice')) # from line 37, col 108.
write(u''');" />
''')
else: # generated from line 38, col 3
write(u'''\t\t\t<input type="checkbox" id="''')
_v = VFFSL(SL,"config.path",True) # u'${config.path}' on line 39, col 31
if _v is not None: write(_filter(_v, rawExpr=u'${config.path}')) # from line 39, col 31.
write(u'''_''')
_v = VFFSL(SL,"choice",True) # u'${choice}' on line 39, col 46
if _v is not None: write(_filter(_v, rawExpr=u'${choice}')) # from line 39, col 46.
write(u'''" onclick="saveConfig(\'''')
_v = VFFSL(SL,"config.path",True) # u'$config.path' on line 39, col 78
if _v is not None: write(_filter(_v, rawExpr=u'$config.path')) # from line 39, col 78.
write(u"""', """)
_v = VFFSL(SL,"choice",True) # u'$choice' on line 39, col 93
if _v is not None: write(_filter(_v, rawExpr=u'$choice')) # from line 39, col 93.
write(u''');" />
''')
write(u'''\t\t''')
_v = VFFSL(SL,"choice",True) # u'$choice' on line 41, col 3
if _v is not None: write(_filter(_v, rawExpr=u'$choice')) # from line 41, col 3.
write(u'''
''')
write(u'''</td>
</tr>
''')
########################################
## END - generated method body
return _dummyTrans and trans.response().getvalue() or ""
def number(self, config, **KWS):
## CHEETAH: generated from #def number($config) at line 47, col 1.
trans = KWS.get("trans")
if (not trans and not self._CHEETAH__isBuffering and not callable(self.transaction)):
trans = self.transaction # is None unless self.awake() was called
if not trans:
trans = DummyTransaction()
_dummyTrans = True
else: _dummyTrans = False
write = trans.response().write
SL = self._CHEETAH__searchList
_filter = self._CHEETAH__currentFilter
########################################
## START - generated method body
write(u'''<tr>
<td>''')
_v = VFFSL(SL,"config.description",True) # u'$config.description' on line 49, col 5
if _v is not None: write(_filter(_v, rawExpr=u'$config.description')) # from line 49, col 5.
write(u'''</td>
<td>
\t<input type="text" id="''')
_v = VFFSL(SL,"config.path",True) # u'${config.path}' on line 51, col 25
if _v is not None: write(_filter(_v, rawExpr=u'${config.path}')) # from line 51, col 25.
write(u'''" value="''')
_v = VFFSL(SL,"config.data.current",True) # u'$config.data.current' on line 51, col 48
if _v is not None: write(_filter(_v, rawExpr=u'$config.data.current')) # from line 51, col 48.
write(u'''" onkeydown="numberTextboxKeydownFilter(event);" onchange="saveConfig(\'''')
_v = VFFSL(SL,"config.path",True) # u'$config.path' on line 51, col 139
if _v is not None: write(_filter(_v, rawExpr=u'$config.path')) # from line 51, col 139.
write(u'''\', this.value);">
</td>
</tr>
''')
########################################
## END - generated method body
return _dummyTrans and trans.response().getvalue() or ""
def respond(self, trans=None):
## CHEETAH: main method generated for this template
if (not trans and not self._CHEETAH__isBuffering and not callable(self.transaction)):
trans = self.transaction # is None unless self.awake() was called
if not trans:
trans = DummyTransaction()
_dummyTrans = True
else: _dummyTrans = False
write = trans.response().write
SL = self._CHEETAH__searchList
_filter = self._CHEETAH__currentFilter
########################################
## START - generated method body
write(u'''
<div id="content_main">
\t<h3>''')
_v = VFFSL(SL,"title",True) # u'$title' on line 57, col 6
if _v is not None: write(_filter(_v, rawExpr=u'$title')) # from line 57, col 6.
write(u'''</h3>
\t<hr />
\t<table id="configuration">
''')
for config in VFFSL(SL,"configs",True): # generated from line 60, col 3
if VFFSL(SL,"config.data.type",True) == "select": # generated from line 61, col 4
write(u'''\t\t\t\t''')
_v = VFFSL(SL,"select",False)(VFFSL(SL,"config",True)) # u'$select($config)' on line 62, col 5
if _v is not None: write(_filter(_v, rawExpr=u'$select($config)')) # from line 62, col 5.
write(u'''
''')
elif VFFSL(SL,"config.data.type",True) == "checkbox": # generated from line 63, col 4
write(u'''\t\t\t\t''')
_v = VFFSL(SL,"checkbox",False)(VFFSL(SL,"config",True)) # u'$checkbox($config)' on line 64, col 5
if _v is not None: write(_filter(_v, rawExpr=u'$checkbox($config)')) # from line 64, col 5.
write(u'''
''')
elif VFFSL(SL,"config.data.type",True) == "multicheckbox": # generated from line 65, col 4
write(u'''\t\t\t\t''')
_v = VFFSL(SL,"multicheckbox",False)(VFFSL(SL,"config",True)) # u'$multicheckbox($config)' on line 66, col 5
if _v is not None: write(_filter(_v, rawExpr=u'$multicheckbox($config)')) # from line 66, col 5.
write(u'''
''')
elif VFFSL(SL,"config.data.type",True) == "number": # generated from line 67, col 4
write(u'''\t\t\t\t''')
_v = VFFSL(SL,"number",False)(VFFSL(SL,"config",True)) # u'$number($config)' on line 68, col 5
if _v is not None: write(_filter(_v, rawExpr=u'$number($config)')) # from line 68, col 5.
write(u'''
''')
write(u'''\t</table>
</div>
''')
########################################
## END - generated method body
return _dummyTrans and trans.response().getvalue() or ""
##################################################
## CHEETAH GENERATED ATTRIBUTES
_CHEETAH__instanceInitialized = False
_CHEETAH_version = __CHEETAH_version__
_CHEETAH_versionTuple = __CHEETAH_versionTuple__
_CHEETAH_genTime = __CHEETAH_genTime__
_CHEETAH_genTimestamp = __CHEETAH_genTimestamp__
_CHEETAH_src = __CHEETAH_src__
_CHEETAH_srcLastModified = __CHEETAH_srcLastModified__
_mainCheetahMethod_for_config= 'respond'
## END CLASS DEFINITION
if not hasattr(config, '_initCheetahAttributes'):
templateAPIClass = getattr(config, '_CHEETAH_templateClass', Template)
templateAPIClass._addCheetahPlumbingCodeToClass(config)
# CHEETAH was developed by Tavis Rudd and Mike Orr
# with code, advice and input from many other volunteers.
# For more information visit http://www.CheetahTemplate.org/
##################################################
## if run from command line:
if __name__ == '__main__':
from Cheetah.TemplateCmdLineIface import CmdLineIface
CmdLineIface(templateObj=config()).run()
|
MOA-2011/enigma2-plugin-extensions-openwebif
|
plugin/controllers/views/ajax/config.py
|
Python
|
gpl-2.0
| 16,871
|
[
"VisIt"
] |
ea436159956728c653f11524b33da28aa03fd9fb9fdea350433c0dfb8ce6d830
|
# This file is part of Merlin/Arthur.
# Merlin/Arthur is the Copyright (C)2009,2010 of Elliot Rosemarine.
# Individual portions may be copyright by individual contributors, and
# are included in this collective work with permission of the copyright
# owners.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
from django.core.urlresolvers import reverse
from django.http import HttpResponseRedirect
from Core.config import Config
from Core.paconf import PA
from Core.maps import Galaxy, Scan
from Arthur.context import render
from Arthur.loadable import loadable, load
@load
class galaxy(loadable):
access = Config.get("Arthur", "scans")
def execute(self, request, user, x, y):
galaxy = Galaxy.load(x,y)
if galaxy is None:
return HttpResponseRedirect(reverse("galaxy_ranks"))
group = []
scans = []
for planet in galaxy.planets:
if not planet.active:
continue
group.append((planet, [],))
if planet.scan("P"):
group[-1][1].append(planet.scan("P"))
scans.append(planet.scan("P"))
if planet.scan("D"):
group[-1][1].append(planet.scan("D"))
scans.append(planet.scan("D"))
if planet.scan("A") or planet.scan("U"):
group[-1][1].append(planet.scan("A") or planet.scan("U"))
scans.append(planet.scan("A") or planet.scan("U"))
return render("scans/galaxy.tpl", request, galaxy=galaxy, group=group, scans=scans)
@load
class types(loadable):
access = Config.get("Arthur", "scans")
def execute(self, request, user, x, y, types):
types = types.upper()
galaxy = Galaxy.load(x,y)
if galaxy is None:
return HttpResponseRedirect(reverse("galaxy_ranks"))
group = []
scans = []
for planet in galaxy.planets:
if not planet.active:
continue
group.append((planet, [],))
for type in Scan._scan_types:
if type in types:
group[-1][1].append(planet.scan(type))
scans.append(planet.scan(type))
return render("scans/galaxy.tpl", request, galaxy=galaxy, group=group, scans=scans)
|
d7415/merlin
|
Arthur/views/scans/galaxy.py
|
Python
|
gpl-2.0
| 3,022
|
[
"Galaxy"
] |
4eb3af429eb0f07bf9f4d3d2d700e59aff929680bfd36b7cd3822a9bc3f36e1b
|
"""
makebamindex
~~~~~~~~~~~~
:Description: Use PySAM to make bam index
"""
# Use PySAM to make bam index
import pysam
import os
import sys
import logging
import coloredlogs
logger = logging.getLogger('iCallSV.makebamindex')
coloredlogs.install(level='DEBUG')
def MakeIndex(bamFile):
"""
This will make bam index if not there for a bam file using pysam.
:param str bamFile: Path to bam file
:return: None
:rtype: None
"""
logger.info("makebamindex: Trying to make index for bam file")
if(os.path.isfile(bamFile)):
try:
pysam.index(bamFile)
except IndexError as err:
exception = "Index error({0}): {1}".format(err.errno, err.strerror)
logger.info("%s", exception)
except IOError as err:
exception = "I/O error({0}): {1}".format(err.errno, err.strerror)
logger.info("%s", exception)
else:
logger.info("Bam File %s does not exists", bamFile)
sys.exit()
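# --- Usage sketch (illustrative addition; 'example.bam' is a hypothetical path) ---
if __name__ == '__main__':
    # Builds 'example.bam.bai' via pysam if the bam file exists; otherwise the
    # function logs a message and exits.
    MakeIndex("example.bam")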
|
rhshah/iCallSV
|
iCallSV/makebamindex.py
|
Python
|
apache-2.0
| 992
|
[
"pysam"
] |
4f80ece5e4ced9bdb627d5e1e153d154a87e5fe4089838991744de6f0769603b
|
# This file is part of cclib (http://cclib.github.io), a library for parsing
# and interpreting the results of computational chemistry packages.
#
# Copyright (C) 2007-2014, the cclib development team
#
# The library is free software, distributed under the terms of
# the GNU Lesser General Public version 2.1 or later. You should have
# received a copy of the license along with cclib. You can also access
# the full license online at http://www.gnu.org/copyleft/lgpl.html.
"""Test the Nuclear method in cclib"""
from __future__ import print_function
import os
import re
import logging
import unittest
import numpy
import bettertest
from testall import getfile
from cclib.method import Nuclear
from cclib.parser import QChem
from cclib.parser import utils
class NuclearTest(bettertest.TestCase):
def test_nre(self):
"""Testing nuclear repulsion energy for one logfile where it is printed."""
data, logfile = getfile(QChem, "basicQChem4.2", "water_mp4sdq.out")
nuclear = Nuclear(data)
nuclear.logger.setLevel(logging.ERROR)
with open(logfile.filename) as f:
output = f.read()
line = re.search('Nuclear Repulsion Energy = .* hartrees', output).group()
nre = float(line.split()[4])
nre = utils.convertor(nre, 'Angstrom', 'bohr')
self.assertInside(nuclear.repulsion_energy(), nre, 1E-8)
tests = [NuclearTest]
if __name__ == "__main__":
unittest.TextTestRunner(verbosity=2).run(unittest.makeSuite(NuclearTest))
|
Clyde-fare/cclib
|
test/testnuclear.py
|
Python
|
lgpl-2.1
| 1,511
|
[
"cclib"
] |
bcdc24773af5dc269b26308a65d4d7ab47004381d37a071888a40a1cb3a44f65
|
"""
Convolution functions.
Used for generating the kernel used in convolutional gridding.
We actually paramerize the functions at initialization and return a simple
callable with one parameter, the distance in pixels.
This allows us to pass the convolution routine the minimum of extra parameters.
"""
from __future__ import division
from abc import ABCMeta, abstractmethod
import numpy as np
from six import add_metaclass
@add_metaclass(ABCMeta)
class ConvFuncBase(object):
"""
Implements truncation (via __call__), numpy array reshaping.
Always returns 0 outside truncation radius, i.e.::
if np.fabs(x) > trunc:
conv_func(x)==0 # True
Args:
trunc: truncation radius.
"""
def __init__(self, trunc):
self.trunc = trunc
@abstractmethod
def f(self, radius):
"""The convolution function to be evaluated and truncated"""
pass
def __call__(self, radius_in_pix):
radius_in_pix = np.atleast_1d(radius_in_pix)
output = np.zeros_like(radius_in_pix, dtype=np.float)
inside_trunc_radius = np.fabs(radius_in_pix) < self.trunc
output[inside_trunc_radius] = self.f(radius_in_pix[inside_trunc_radius])
return output
class Triangle(ConvFuncBase):
"""
Linearly declines from 1.0 at origin to 0.0 at **half_base_width**, zero thereafter.
Symmetric about the origin.
Makes a terrible anti-aliasing function. But, because it's so
simple, it's easy to verify and therefore a useful tool in verifying
convolution codes.
Args:
half_base_width (float): Half-base width of the triangle.
"""
def __init__(self, half_base_width):
self.half_base_width = half_base_width
super(Triangle, self).__init__(half_base_width)
def f(self, radius_in_pix):
return np.maximum(
1.0 - np.fabs(radius_in_pix) / self.half_base_width,
np.zeros_like(radius_in_pix)
)
class Pillbox(ConvFuncBase):
"""
Valued 1.0 from origin to **half_base_width**, zero thereafter.
AKA 'TopHat' function.
Symmetric about the origin.
Makes a terrible anti-aliasing function. But, because it's so
simple, it's easy to verify and therefore a useful tool in verifying
convolution codes.
Args:
half_base_width (float): Half-base width of the pillbox.
"""
def __init__(self, half_base_width):
self.half_base_width = half_base_width
super(Pillbox, self).__init__(half_base_width)
def f(self, radius_in_pix):
return np.where(np.fabs(radius_in_pix) < self.half_base_width, 1.0, 0.0)
class Sinc(ConvFuncBase):
"""
Sinc function (with truncation).
"""
def __init__(self, trunc):
super(Sinc, self).__init__(trunc)
def f(self, radius_in_pix):
return np.sinc(radius_in_pix)
class Gaussian(ConvFuncBase):
"""
Gaussian function (with truncation).
evaluates the function::
exp(-(x/w)**2)
(Using the notation of Taylor 1998, p143, where x = u/delta_u and alpha==2.
Default value of ``w=1``).
Args:
trunc: truncation radius.
w (float): Width normalization of the Gaussian. Default = 1.0
"""
def __init__(self, trunc, w=1.0):
super(Gaussian, self).__init__(trunc)
self.w = w
def f(self, radius_in_pix):
radius_div_w = radius_in_pix / self.w
return np.exp(-1. * (radius_div_w * radius_div_w))
class GaussianSinc(ConvFuncBase):
"""
Gaussian times sinc function (with truncation).
evaluates the function::
exp(-(x/w1)**2) * sinc(x/w2)
(Using the notation of Taylor 1998, p143, where x = u/delta_u and alpha==2.
Default values for w1,w2 are chosen according to recommendation therein).
Args:
trunc: truncation radius.
w1 (float): Width normalization of the Gaussian. Default = 2.52
w2 (float): Width normalization of the sinc. Default = 1.55
"""
def __init__(self, trunc, w1=2.52, w2=1.55):
super(GaussianSinc, self).__init__(trunc)
self.w1 = w1
self.w2 = w2
def f(self, radius_in_pix):
radius_div_w1 = radius_in_pix / self.w1
return (
np.exp(-1. * (radius_div_w1 * radius_div_w1)) *
np.sinc(radius_in_pix / self.w2)
)
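# --- Usage sketch (illustrative addition; not part of the original module) ---
# Evaluates two of the kernels above on a small radius grid. Values at or
# beyond the truncation radius come back as exactly zero, by construction of
# ConvFuncBase.__call__.
if __name__ == '__main__':
    radii = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
    triangle = Triangle(half_base_width=2.0)
    print(triangle(radii))  # -> [1.0, 0.75, 0.5, 0.0, 0.0]
    gauss_sinc = GaussianSinc(trunc=3.0)
    print(gauss_sinc(radii))  # nonzero only where |radius| < 3.0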
|
SKA-ScienceDataProcessor/FastImaging-Python
|
src/fastimgproto/gridder/conv_funcs.py
|
Python
|
apache-2.0
| 4,357
|
[
"Gaussian"
] |
566947f0fbce41f643ea941fb6de6d66d4c28c552a2f5e7c4dfbf01a93112840
|
from __future__ import division
from __future__ import absolute_import
import numpy as np
from numpy.random import random
na = np.newaxis
import scipy.stats as stats
import scipy.special as special
import scipy.linalg
from scipy.misc import logsumexp
from numpy.core.umath_tests import inner1d
from . import general
from .general import any_none, all_none
### data abstraction
# the data type is ndarrays OR lists of ndarrays
# type Data = ndarray | [ndarray]
def atleast_2d(data):
# NOTE: can't use np.atleast_2d because if it's 1D we want axis 1 to be the
# singleton and axis 0 to be the sequence index
if data.ndim == 1:
return data.reshape((-1,1))
return data
def mask_data(data):
return np.ma.masked_array(np.nan_to_num(data),np.isnan(data),fill_value=0.,hard_mask=True)
def gi(data):
out = (np.isnan(atleast_2d(data)).sum(1) == 0).ravel()
return out if len(out) != 0 else None
def getdatasize(data):
if isinstance(data,np.ma.masked_array):
return data.shape[0] - data.mask.reshape((data.shape[0],-1))[:,0].sum()
elif isinstance(data,np.ndarray):
if len(data) == 0:
return 0
return data[gi(data)].shape[0]
elif isinstance(data,list):
return sum(getdatasize(d) for d in data)
else:
# handle unboxed case for convenience
assert isinstance(data,int) or isinstance(data,float)
return 1
def getdatadimension(data):
if isinstance(data,np.ndarray):
assert data.ndim > 1
return data.shape[1]
elif isinstance(data,list):
assert len(data) > 0
return getdatadimension(data[0])
else:
# handle unboxed case for convenience
assert isinstance(data,int) or isinstance(data,float)
return 1
def combinedata(datas):
ret = []
for data in datas:
if isinstance(data,np.ma.masked_array):
ret.append(np.ma.compress_rows(data))
elif isinstance(data,np.ndarray):
ret.append(data)
elif isinstance(data,list):
ret.extend(combinedata(data))
else:
# handle unboxed case for convenience
assert isinstance(data,int) or isinstance(data,float)
ret.append(np.atleast_1d(data))
return ret
def flattendata(data):
# data is either an array (possibly a maskedarray) or a list of arrays
if isinstance(data,np.ndarray):
return data
elif isinstance(data,list) or isinstance(data,tuple):
if any(isinstance(d,np.ma.MaskedArray) for d in data):
return np.concatenate([np.ma.compress_rows(d) for d in data])
else:
return np.concatenate(data)
else:
# handle unboxed case for convenience
assert isinstance(data,int) or isinstance(data,float)
return np.atleast_1d(data)
### misc
def cov(a):
# return np.cov(a,rowvar=0,bias=1)
mu = a.mean(0)
if isinstance(a,np.ma.MaskedArray):
return np.ma.dot(a.T,a)/a.count(0)[0] - np.ma.outer(mu,mu)
else:
return a.T.dot(a)/a.shape[0] - np.outer(mu,mu)
### Sampling functions
def sample_gaussian(mu=None,Sigma=None,J=None,h=None):
mean_params = mu is not None and Sigma is not None
info_params = J is not None and h is not None
assert mean_params or info_params
if not any_none(mu,Sigma):
return np.random.multivariate_normal(mu,Sigma)
else:
from scipy.linalg.lapack import dpotrs
L = np.linalg.cholesky(J)
x = np.random.randn(h.shape[0])
return scipy.linalg.solve_triangular(L,x,lower=True) \
+ dpotrs(L,h,lower=True)[0]
def sample_discrete(distn,size=[],dtype=np.int32):
'samples from a one-dimensional finite pmf'
distn = np.atleast_1d(distn)
assert (distn >=0).all() and distn.ndim == 1
if (0 == distn).all():
return np.random.randint(distn.shape[0],size=size)
cumvals = np.cumsum(distn)
return np.sum(np.array(random(size))[...,na] * cumvals[-1] > cumvals, axis=-1,dtype=dtype)
def sample_discrete_from_log(p_log,return_lognorms=False,axis=0,dtype=np.int32):
'samples log probability array along specified axis'
lognorms = logsumexp(p_log,axis=axis)
cumvals = np.exp(p_log - np.expand_dims(lognorms,axis)).cumsum(axis)
thesize = np.array(p_log.shape)
thesize[axis] = 1
randvals = random(size=thesize) * \
np.reshape(cumvals[[slice(None) if i != axis else -1
for i in range(p_log.ndim)]],thesize)
samples = np.sum(randvals > cumvals,axis=axis,dtype=dtype)
if return_lognorms:
return samples, lognorms
else:
return samples
def sample_markov(T,trans_matrix,init_state_distn):
out = np.empty(T,dtype=np.int32)
out[0] = sample_discrete(init_state_distn)
for t in range(1,T):
out[t] = sample_discrete(trans_matrix[out[t-1]])
return out
def sample_niw(mu,lmbda,kappa,nu):
'''
Returns a sample from the normal/inverse-wishart distribution, conjugate
prior for (simultaneously) unknown mean and unknown covariance in a
Gaussian likelihood model. Returns covariance.
'''
# code is based on Matlab's method
# reference: p. 87 in Gelman's Bayesian Data Analysis
assert nu > lmbda.shape[0] and kappa > 0
# first sample Sigma ~ IW(lmbda,nu)
lmbda = sample_invwishart(lmbda,nu)
# then sample mu | Lambda ~ N(mu, Lambda/kappa)
mu = np.random.multivariate_normal(mu,lmbda / kappa)
return mu, lmbda
def sample_invwishart(S,nu):
# TODO make a version that returns the cholesky
# TODO allow passing in chol/cholinv of matrix parameter lmbda
# TODO lowmem! memoize! dchud (eigen?)
n = S.shape[0]
chol = np.linalg.cholesky(S)
if (nu <= 81+n) and (nu == np.round(nu)):
x = np.random.randn(nu,n)
else:
x = np.diag(np.sqrt(np.atleast_1d(stats.chi2.rvs(nu-np.arange(n)))))
x[np.triu_indices_from(x,1)] = np.random.randn(n*(n-1)//2)
R = np.linalg.qr(x,'r')
T = scipy.linalg.solve_triangular(R.T,chol.T,lower=True).T
return np.dot(T,T.T)
def sample_wishart(sigma, nu):
n = sigma.shape[0]
chol = np.linalg.cholesky(sigma)
# use matlab's heuristic for choosing between the two different sampling schemes
if (nu <= 81+n) and (nu == round(nu)):
# direct
X = np.dot(chol,np.random.normal(size=(n,nu)))
else:
A = np.diag(np.sqrt(np.random.chisquare(nu - np.arange(n))))
A[np.tri(n,k=-1,dtype=bool)] = np.random.normal(size=n*(n-1)//2)
X = np.dot(chol,A)
return np.dot(X,X.T)
def sample_mn(M,U=None,Uinv=None,V=None,Vinv=None):
assert (U is None) ^ (Uinv is None)
assert (V is None) ^ (Vinv is None)
G = np.random.normal(size=M.shape)
if U is not None:
G = np.dot(np.linalg.cholesky(U),G)
else:
G = np.linalg.solve(np.linalg.cholesky(Uinv).T,G)
if V is not None:
G = np.dot(G,np.linalg.cholesky(V).T)
else:
G = np.linalg.solve(np.linalg.cholesky(Vinv).T,G.T).T
return M + G
def sample_mniw(nu,S,M,K=None,Kinv=None):
assert (K is None) ^ (Kinv is None)
Sigma = sample_invwishart(S,nu)
if K is not None:
return sample_mn(M=M,U=Sigma,V=K), Sigma
else:
return sample_mn(M=M,U=Sigma,Vinv=Kinv), Sigma
def mniw_expectedstats(nu,S,M,K=None,Kinv=None):
# NOTE: could speed this up with chol factorizing S, not re-solving
assert (K is None) ^ (Kinv is None)
m = M.shape[0]
K = K if K is not None else np.linalg.inv(Kinv)
E_Sigmainv = nu*np.linalg.inv(S)
E_Sigmainv_A = nu*np.linalg.solve(S,M)
E_AT_Sigmainv_A = m*K + nu*M.T.dot(np.linalg.solve(S,M))
E_logdetSigmainv = special.digamma((nu-np.arange(m))/2.).sum() \
+ m*np.log(2) - np.linalg.slogdet(S)[1]
return E_Sigmainv, E_Sigmainv_A, E_AT_Sigmainv_A, E_logdetSigmainv
def sample_pareto(x_m,alpha):
return x_m + np.random.pareto(alpha)
def sample_crp_tablecounts(concentration,customers,colweights):
m = np.zeros_like(customers)
tot = customers.sum()
randseq = np.random.random(tot)
starts = np.empty_like(customers)
starts[0,0] = 0
starts.flat[1:] = np.cumsum(np.ravel(customers)[:customers.size-1])
for (i,j), n in np.ndenumerate(customers):
w = colweights[j]
for k in xrange(n):
m[i,j] += randseq[starts[i,j]+k] \
< (concentration * w) / (k + concentration * w)
return m
### Entropy
def invwishart_entropy(sigma,nu,chol=None):
D = sigma.shape[0]
chol = np.linalg.cholesky(sigma) if chol is None else chol
Elogdetlmbda = special.digamma((nu-np.arange(D))/2).sum() + D*np.log(2) - 2*np.log(chol.diagonal()).sum()
return invwishart_log_partitionfunction(sigma,nu,chol)-(nu-D-1)/2*Elogdetlmbda + nu*D/2
def invwishart_log_partitionfunction(sigma,nu,chol=None):
D = sigma.shape[0]
chol = np.linalg.cholesky(sigma) if chol is None else chol
return -1*(nu*np.log(chol.diagonal()).sum() - (nu*D/2*np.log(2) + D*(D-1)/4*np.log(np.pi) \
+ special.gammaln((nu-np.arange(D))/2).sum()))
### Predictive
def multivariate_t_loglik(y,nu,mu,lmbda):
# returns the log value
d = len(mu)
yc = np.array(y-mu,ndmin=2)
L = np.linalg.cholesky(lmbda)
ys = scipy.linalg.solve_triangular(L,yc.T,overwrite_b=True,lower=True)
return scipy.special.gammaln((nu+d)/2.) - scipy.special.gammaln(nu/2.) \
- (d/2.)*np.log(nu*np.pi) - np.log(L.diagonal()).sum() \
- (nu+d)/2.*np.log1p(1./nu*inner1d(ys.T,ys.T))
def beta_predictive(priorcounts,newcounts):
prior_nsuc, prior_nfail = priorcounts
nsuc, nfail = newcounts
numer = scipy.special.gammaln(np.array([nsuc+prior_nsuc,
nfail+prior_nfail, prior_nsuc+prior_nfail])).sum()
denom = scipy.special.gammaln(np.array([prior_nsuc, prior_nfail,
prior_nsuc+prior_nfail+nsuc+nfail])).sum()
return numer - denom
### Statistical tests
def two_sample_t_statistic(pop1, pop2):
pop1, pop2 = (flattendata(p) for p in (pop1, pop2))
t = (pop1.mean(0) - pop2.mean(0)) / np.sqrt(pop1.var(0)/pop1.shape[0] + pop2.var(0)/pop2.shape[0])
p = 2*stats.t.sf(np.abs(t),np.minimum(pop1.shape[0],pop2.shape[0]))
return t,p
def f_statistic(pop1, pop2): # TODO test
pop1, pop2 = (flattendata(p) for p in (pop1, pop2))
var1, var2 = pop1.var(0), pop2.var(0)
n1, n2 = np.where(var1 >= var2, pop1.shape[0], pop2.shape[0]), \
np.where(var1 >= var2, pop2.shape[0], pop1.shape[0])
var1, var2 = np.maximum(var1,var2), np.minimum(var1,var2)
f = var1 / var2
p = stats.f.sf(f,n1,n2)
return f,p
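# --- Usage sketch (illustrative addition; not part of the original module) ---
if __name__ == '__main__':
    # sample_discrete works with unnormalized weights: the cumulative sum is
    # rescaled internally, so [2.,3.,5.] behaves like [0.2,0.3,0.5]
    print sample_discrete(np.array([2., 3., 5.]), size=10)
    # two-sample t statistic on two synthetic populations with shifted means
    pop1, pop2 = np.random.randn(200), np.random.randn(200) + 0.5
    print two_sample_t_statistic(pop1, pop2)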
|
michaelpacer/pybasicbayes
|
pybasicbayes/util/stats.py
|
Python
|
mit
| 10,647
|
[
"Gaussian"
] |
22894175816674f40aac8448e1afb7be47baf585839853edf335f3382269246e
|
# -*- coding: utf-8 -*-
from nodes import *
class Network:
"""A class to represent a Variational Bayesian Network.
Arguments
----------
nodes - list
the nodes in the network
Attributes
----------
nodes - list
Pointers to instances of all the nodes contained in the network
Notes
----------
It is possible to create a network by instantiating the class with a subset of the network's nodes and calling self.fetch_network(). This is rather slow for big networks.
Before learning, the network class creates a list of _iterable_ nodes, a subset of nodes with applicable update() and log_lower_bound() functions.
"""
def __init__(self,nodes=[]):
self.nodes = []
[self.addnode(n) for n in nodes]
def addnode(self,n):
"""Add a node (or list of nodes) to the network"""
if type(n) is list:
self.nodes.extend(n)
else:
self.nodes.append(n)
def find_iterable(self):
"""make a list of all of the nodes which are to be updated"""
self.iterable_nodes = [e for e in self.nodes if isinstance(e,Gaussian) or isinstance(e,Gamma) or isinstance(e,DiagonalGamma) or isinstance(e,Wishart)]
def learn(self,niters,tol=1e-3):
"""Iterate through the iterable nodeds in the network, updating each until the log_lower_bound converges or max_iter is reached. """
self.find_iterable()
print 'Found ' + str(len(self.iterable_nodes))+' iterable nodes\n'
old_llb = -np.inf
for i in range(niters):
for n in self.iterable_nodes:
n.update()
self.llb = np.sum([n.log_lower_bound() for n in self.iterable_nodes])
print niters-i,self.llb
#check for convergence
if self.llb-old_llb < tol:
print "Convergence!"
break
old_llb = self.llb
def fetch_network(self):
"""Find all of the nodes connected to the nodes in the network
Notes
---------
This is rather slow at the moment"""
N_starting_nodes = len(self.nodes)
new_nodes = True
while new_nodes:
new_nodes = 0
for n in self.nodes:
if isinstance(n,Gaussian):
new_children = [e for e in n.children if not e in self.nodes]
new_parents = [e for e in [n.mean_parent, n.precision_parent] if not e in self.nodes]
new_nodes += len(new_children)+len(new_parents)
self.nodes.extend(new_children)
self.nodes.extend(new_parents)
elif sum([isinstance(n,Addition), isinstance(n,Multiplication)]):
new_children = [e for e in n.children if not e in self.nodes]
new_parents = [e for e in [n.A, n.B] if not e in self.nodes]
new_nodes += len(new_children)+len(new_parents)
self.nodes.extend(new_children)
self.nodes.extend(new_parents)
elif isinstance(n,hstack):
new_children = [e for e in n.children if not e in self.nodes]
new_parents = [e for e in n.parents if not e in self.nodes]
new_nodes += len(new_children)+len(new_parents)
self.nodes.extend(new_children)
self.nodes.extend(new_parents)
if sum([isinstance(n,Gamma), isinstance(n,DiagonalGamma), isinstance(n,Wishart)]):
new_children = [e for e in n.children if not e in self.nodes]
new_nodes += len(new_children)
self.nodes.extend(new_children)
print "Found "+str(len(self.nodes)-N_starting_nodes)+" new nodes."
|
jameshensman/pyvb
|
src/pyvb/network.py
|
Python
|
gpl-3.0
| 3,265
|
[
"Gaussian"
] |
719cf7ce97ec8d2996e809bf2f8b77629e33c21d07f12a29a9585a3a9da94450
|
# -*- coding: utf-8 -*-
# Copyright 2007-2011 The HyperSpy developers
#
# This file is part of HyperSpy.
#
# HyperSpy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# HyperSpy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with HyperSpy. If not, see <http://www.gnu.org/licenses/>.
import copy
import os.path
import warnings
import math
import inspect
import numpy as np
import numpy.ma as ma
import scipy.interpolate
try:
from scipy.signal import savgol_filter
savgol_imported = True
except ImportError:
savgol_imported = False
import scipy as sp
from matplotlib import pyplot as plt
try:
from statsmodels.nonparametric.smoothers_lowess import lowess
statsmodels_installed = True
except:
statsmodels_installed = False
from hyperspy.axes import AxesManager
from hyperspy import io
from hyperspy.drawing import mpl_hie, mpl_hse, mpl_he
from hyperspy.learn.mva import MVA, LearningResults
import hyperspy.misc.utils
from hyperspy.misc.utils import DictionaryTreeBrowser
from hyperspy.drawing import signal as sigdraw
from hyperspy.decorators import auto_replot
from hyperspy.defaults_parser import preferences
from hyperspy.misc.io.tools import ensure_directory
from hyperspy.misc.progressbar import progressbar
from hyperspy.gui.tools import (
SpectrumCalibration,
SmoothingSavitzkyGolay,
SmoothingLowess,
SmoothingTV,
ButterworthFilter)
from hyperspy.misc.tv_denoise import _tv_denoise_1d
from hyperspy.gui.egerton_quantification import BackgroundRemoval
from hyperspy.decorators import only_interactive
from hyperspy.decorators import interactive_range_selector
from scipy.ndimage.filters import gaussian_filter1d
from hyperspy.misc.spectrum_tools import find_peaks_ohaver
from hyperspy.misc.image_tools import (shift_image, estimate_image_shift)
from hyperspy.misc.math_tools import symmetrize, antisymmetrize
from hyperspy.exceptions import SignalDimensionError, DataDimensionError
from hyperspy.misc import array_tools
from hyperspy.misc import spectrum_tools
from hyperspy.misc import rgb_tools
from hyperspy.gui.tools import IntegrateArea
from hyperspy import components
from hyperspy.misc.utils import underline
from hyperspy.misc.borrowed.astroML.histtools import histogram
from hyperspy.drawing.utils import animate_legend
class Signal2DTools(object):
def estimate_shift2D(self,
reference='current',
correlation_threshold=None,
chunk_size=30,
roi=None,
normalize_corr=False,
sobel=True,
medfilter=True,
hanning=True,
plot=False,
dtype='float',
show_progressbar=None):
"""Estimate the shifts in a image using phase correlation
This method can only estimate the shift by comparing
bidimensional features that should not change position
between frames. To decrease the memory usage, the time of
computation and the accuracy of the results it is convenient
to select a region of interest by setting the roi keyword.
Parameters
----------
reference : {'current', 'cascade' ,'stat'}
If 'current' (default) the image at the current
coordinates is taken as reference. If 'cascade' each image
is aligned with the previous one. If 'stat' the translation
of every image relative to all the others is estimated, and
the final shifts are obtained by statistical analysis of the
results.
correlation_threshold : {None, 'auto', float}
This parameter is only relevant when `reference` is 'stat'.
If float, the shift estimations with a maximum correlation
value lower than the given value are not used to compute
the estimated shifts. If 'auto' the threshold is calculated
automatically as the minimum maximum correlation value
of the automatically selected reference image.
chunk_size: {None, int}
If int and `reference`=='stat' the number of images used
as reference are limited to the given value.
roi : tuple of ints or floats (left, right, top, bottom)
Define the region of interest. If int(float) the position
is given axis index(value).
sobel : bool
apply a sobel filter for edge enhancement
medfilter : bool
apply a median filter for noise reduction
hanning : bool
Apply a 2d hanning filter
plot : bool
If True plots the images after applying the filters and
the phase correlation
dtype : str or dtype
Typecode or data-type in which the calculations must be
performed.
show_progressbar : None or bool
If True, display a progress bar. If None the default is set in
`preferences`.
Returns
-------
list of applied shifts
Notes
-----
The statistical analysis approach to the translation estimation
when using `reference`='stat' roughly follows [1]_ . If you use
it please cite their article.
References
----------
.. [1] Schaffer, Bernhard, Werner Grogger, and Gerald
Kothleitner. “Automated Spatial Drift Correction for EFTEM
Image Series.”
Ultramicroscopy 102, no. 1 (December 2004): 27–36.
"""
if show_progressbar is None:
show_progressbar = preferences.General.show_progressbar
self._check_signal_dimension_equals_two()
if roi is not None:
# Get the indices of the roi
yaxis = self.axes_manager.signal_axes[1]
xaxis = self.axes_manager.signal_axes[0]
roi = tuple([xaxis._get_index(i) for i in roi[2:]] +
[yaxis._get_index(i) for i in roi[:2]])
ref = None if reference == 'cascade' else \
self.__call__().copy()
shifts = []
nrows = None
images_number = self.axes_manager._max_index + 1
if reference == 'stat':
nrows = images_number if chunk_size is None else \
min(images_number, chunk_size)
pcarray = ma.zeros((nrows, self.axes_manager._max_index + 1,
),
dtype=np.dtype([('max_value', np.float),
('shift', np.int32,
(2,))]))
nshift, max_value = estimate_image_shift(
self(),
self(),
roi=roi,
sobel=sobel,
medfilter=medfilter,
hanning=hanning,
normalize_corr=normalize_corr,
plot=plot,
dtype=dtype)
np.fill_diagonal(pcarray['max_value'], max_value)
pbar = progressbar(maxval=nrows * images_number,
disabled=not show_progressbar).start()
else:
pbar = progressbar(maxval=images_number,
disabled=not show_progressbar).start()
# Main iteration loop. Fills the rows of pcarray when reference
# is stat
for i1, im in enumerate(self._iterate_signal()):
if reference in ['current', 'cascade']:
if ref is None:
ref = im.copy()
shift = np.array([0, 0])
nshift, max_val = estimate_image_shift(
ref, im, roi=roi, sobel=sobel, medfilter=medfilter,
hanning=hanning, plot=plot,
normalize_corr=normalize_corr, dtype=dtype)
if reference == 'cascade':
shift += nshift
ref = im.copy()
else:
shift = nshift
shifts.append(shift.copy())
pbar.update(i1 + 1)
elif reference == 'stat':
if i1 == nrows:
break
# Iterate to fill the columns of pcarray
for i2, im2 in enumerate(
self._iterate_signal()):
if i2 > i1:
nshift, max_value = estimate_image_shift(
im,
im2,
roi=roi,
sobel=sobel,
medfilter=medfilter,
hanning=hanning,
normalize_corr=normalize_corr,
plot=plot,
dtype=dtype)
pcarray[i1, i2] = max_value, nshift
del im2
pbar.update(i2 + images_number * i1 + 1)
del im
if reference == 'stat':
# Select the reference image as the one that has the
# higher max_value in the row
sqpcarr = pcarray[:, :nrows]
sqpcarr['max_value'][:] = symmetrize(sqpcarr['max_value'])
sqpcarr['shift'][:] = antisymmetrize(sqpcarr['shift'])
ref_index = np.argmax(pcarray['max_value'].min(1))
self.ref_index = ref_index
shifts = (pcarray['shift'] +
pcarray['shift'][ref_index, :nrows][:, np.newaxis])
if correlation_threshold is not None:
if correlation_threshold == 'auto':
correlation_threshold = \
(pcarray['max_value'].min(0)).max()
print("Correlation threshold = %1.2f" %
correlation_threshold)
shifts[pcarray['max_value'] <
correlation_threshold] = ma.masked
shifts.mask[ref_index, :] = False
shifts = shifts.mean(0)
else:
shifts = np.array(shifts)
del ref
return shifts
def align2D(self, crop=True, fill_value=np.nan, shifts=None,
roi=None,
sobel=True,
medfilter=True,
hanning=True,
plot=False,
normalize_corr=False,
reference='current',
dtype='float',
correlation_threshold=None,
chunk_size=30):
"""Align the images in place using user provided shifts or by
estimating the shifts.
Please, see `estimate_shift2D` docstring for details
on the rest of the parameters not documented in the following
section
Parameters
----------
crop : bool
If True, the data will be cropped not to include regions
with missing data
fill_value : int, float, nan
The areas with missing data are filled with the given value.
Default is nan.
shifts : None or list of tuples
If None the shifts are estimated using
`estimate_shift2D`.
Returns
-------
shifts : np.array
The shifts are returned only if `shifts` is None
Notes
-----
The statistical analysis approach to the translation estimation
when using `reference`='stat' roughly follows [1]_ . If you use
it please cite their article.
References
----------
.. [1] Schaffer, Bernhard, Werner Grogger, and Gerald
Kothleitner. “Automated Spatial Drift Correction for EFTEM
Image Series.”
Ultramicroscopy 102, no. 1 (December 2004): 27–36.
"""
self._check_signal_dimension_equals_two()
if shifts is None:
shifts = self.estimate_shift2D(
roi=roi,
sobel=sobel,
medfilter=medfilter,
hanning=hanning,
plot=plot,
reference=reference,
dtype=dtype,
correlation_threshold=correlation_threshold,
normalize_corr=normalize_corr,
chunk_size=chunk_size)
return_shifts = True
else:
return_shifts = False
# Translate with sub-pixel precision if necessary
for im, shift in zip(self._iterate_signal(),
shifts):
if np.any(shift):
shift_image(im, -shift,
fill_value=fill_value)
del im
# Crop the image to the valid size
if crop is True:
shifts = -shifts
bottom, top = (int(np.floor(shifts[:, 0].min())) if
shifts[:, 0].min() < 0 else None,
int(np.ceil(shifts[:, 0].max())) if
shifts[:, 0].max() > 0 else 0)
right, left = (int(np.floor(shifts[:, 1].min())) if
shifts[:, 1].min() < 0 else None,
int(np.ceil(shifts[:, 1].max())) if
shifts[:, 1].max() > 0 else 0)
self.crop_image(top, bottom, left, right)
shifts = -shifts
if return_shifts:
return shifts
def crop_image(self, top=None, bottom=None,
left=None, right=None):
"""Crops an image in place.
top, bottom, left, right : int or float
If int the values are taken as indices. If float the values are
converted to indices.
See also:
---------
crop
"""
self._check_signal_dimension_equals_two()
self.crop(self.axes_manager.signal_axes[1].index_in_axes_manager,
top,
bottom)
self.crop(self.axes_manager.signal_axes[0].index_in_axes_manager,
left,
right)
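# --- Usage sketch (illustrative addition; not part of the original source).
# Assuming `s` is a Signal with signal_dimension == 2 (an image stack):
#     shifts = s.estimate_shift2D(reference='stat', chunk_size=30)
#     s.align2D(shifts=shifts, crop=True)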
class Signal1DTools(object):
def shift1D(self,
shift_array,
interpolation_method='linear',
crop=True,
fill_value=np.nan,
show_progressbar=None):
"""Shift the data in place over the signal axis by the amount specified
by an array.
Parameters
----------
shift_array : numpy array
An array containing the shifting amount. It must have
`axes_manager._navigation_shape_in_array` shape.
interpolation_method : str or int
Specifies the kind of interpolation as a string ('linear',
'nearest', 'zero', 'slinear', 'quadratic', 'cubic') or as an
integer specifying the order of the spline interpolator to
use.
crop : bool
If True automatically crop the signal axis at both ends if
needed.
fill_value : float
If crop is False fill the data outside of the original
interval with the given value where needed.
show_progressbar : None or bool
If True, display a progress bar. If None the default is set in
`preferences`.
Raises
------
SignalDimensionError if the signal dimension is not 1.
"""
if show_progressbar is None:
show_progressbar = preferences.General.show_progressbar
self._check_signal_dimension_equals_one()
axis = self.axes_manager.signal_axes[0]
offset = axis.offset
original_axis = axis.axis.copy()
pbar = progressbar(
maxval=self.axes_manager.navigation_size,
disabled=not show_progressbar)
for i, (dat, shift) in enumerate(zip(
self._iterate_signal(),
shift_array.ravel())):
if np.isnan(shift):
continue
si = sp.interpolate.interp1d(original_axis,
dat,
bounds_error=False,
fill_value=fill_value,
kind=interpolation_method)
axis.offset = float(offset - shift)
dat[:] = si(axis.axis)
pbar.update(i + 1)
axis.offset = offset
if crop is True:
minimum, maximum = np.nanmin(shift_array), np.nanmax(shift_array)
if minimum < 0:
iminimum = 1 + axis.value2index(
axis.high_value + minimum,
rounding=math.floor)
self.crop(axis.index_in_axes_manager,
None,
iminimum)
if maximum > 0:
imaximum = axis.value2index(offset + maximum,
rounding=math.ceil)
self.crop(axis.index_in_axes_manager,
imaximum)
def interpolate_in_between(self, start, end, delta=3,
show_progressbar=None, **kwargs):
"""Replace the data in a given range by interpolation.
The operation is performed in place.
Parameters
----------
start, end : {int | float}
The limits of the interval. If int they are taken as the
axis index. If float they are taken as the axis value.
All extra keyword arguments are passed to
scipy.interpolate.interp1d. See the function documentation
for details.
show_progressbar : None or bool
If True, display a progress bar. If None the default is set in
`preferences`.
Raises
------
SignalDimensionError if the signal dimension is not 1.
"""
if show_progressbar is None:
show_progressbar = preferences.General.show_progressbar
self._check_signal_dimension_equals_one()
axis = self.axes_manager.signal_axes[0]
i1 = axis._get_index(start)
i2 = axis._get_index(end)
i0 = int(np.clip(i1 - delta, 0, np.inf))
i3 = int(np.clip(i2 + delta, 0, axis.size))
pbar = progressbar(
maxval=self.axes_manager.navigation_size,
disabled=not show_progressbar)
for i, dat in enumerate(self._iterate_signal()):
dat_int = sp.interpolate.interp1d(
range(i0, i1) + range(i2, i3),
dat[i0:i1].tolist() + dat[i2:i3].tolist(),
**kwargs)
dat[i1:i2] = dat_int(range(i1, i2))
pbar.update(i + 1)
def _check_navigation_mask(self, mask):
if mask is not None:
if not isinstance(mask, Signal):
raise ValueError("mask must be a Signal instance.")
elif mask.axes_manager.signal_dimension not in (0, 1):
raise ValueError("mask must be a Signal with signal_dimension "
"equal to 1")
elif (mask.axes_manager.navigation_dimension !=
self.axes_manager.navigation_dimension):
raise ValueError("mask must be a Signal with the same "
"navigation_dimension as the current signal.")
def estimate_shift1D(self,
start=None,
end=None,
reference_indices=None,
max_shift=None,
interpolate=True,
number_of_interpolation_points=5,
mask=None,
show_progressbar=None):
"""Estimate the shifts in the current signal axis using
cross-correlation.
This method can only estimate the shift by comparing
unidimensional features that should not change the position in
the signal axis. To decrease the memory usage, the time of
computation and the accuracy of the results it is convenient to
select the feature of interest providing sensible values for
`start` and `end`. By default interpolation is used to obtain
subpixel precision.
Parameters
----------
start, end : {int | float | None}
The limits of the interval. If int they are taken as the
axis index. If float they are taken as the axis value.
reference_indices : tuple of ints or None
Defines the coordinates of the spectrum that will be used
as reference. If None the spectrum at the current
coordinates is used for this purpose.
max_shift : int
"Saturation limit" for the shift.
interpolate : bool
If True, interpolation is used to provide sub-pixel
accuracy.
number_of_interpolation_points : int
Number of interpolation points. Warning: making this number
too big can saturate the memory
mask : Signal of bool data type.
It must have signal_dimension = 0 and navigation_shape equal to the
current signal. Where mask is True the shift is not computed
and set to nan.
show_progressbar : None or bool
If True, display a progress bar. If None the default is set in
`preferences`.
Returns
-------
An array with the result of the estimation in the axis units.
Raises
------
SignalDimensionError if the signal dimension is not 1.
"""
if show_progressbar is None:
show_progressbar = preferences.General.show_progressbar
self._check_signal_dimension_equals_one()
ip = number_of_interpolation_points + 1
axis = self.axes_manager.signal_axes[0]
self._check_navigation_mask(mask)
if reference_indices is None:
reference_indices = self.axes_manager.indices
i1, i2 = axis._get_index(start), axis._get_index(end)
shift_array = np.zeros(self.axes_manager._navigation_shape_in_array,
dtype=float)
ref = self.inav[reference_indices].data[i1:i2]
if interpolate is True:
ref = spectrum_tools.interpolate1D(ip, ref)
pbar = progressbar(
maxval=self.axes_manager.navigation_size,
disabled=not show_progressbar)
for i, (dat, indices) in enumerate(zip(
self._iterate_signal(),
self.axes_manager._array_indices_generator())):
if mask is not None and bool(mask.data[indices]) is True:
shift_array[indices] = np.nan
else:
dat = dat[i1:i2]
if interpolate is True:
dat = spectrum_tools.interpolate1D(ip, dat)
shift_array[indices] = np.argmax(
np.correlate(ref, dat, 'full')) - len(ref) + 1
pbar.update(i + 1)
pbar.finish()
if max_shift is not None:
if interpolate is True:
max_shift *= ip
shift_array = shift_array.clip(-max_shift, max_shift)
if interpolate is True:
shift_array /= ip
shift_array *= axis.scale
return shift_array
def align1D(self,
start=None,
end=None,
reference_indices=None,
max_shift=None,
interpolate=True,
number_of_interpolation_points=5,
interpolation_method='linear',
crop=True,
fill_value=np.nan,
also_align=[],
mask=None):
"""Estimate the shifts in the signal axis using
cross-correlation and use the estimation to align the data in place.
This method can only estimate the shift by comparing
unidimensional
features that should not change the position.
To decrease memory usage, time of computation and improve
accuracy it is convenient to select the feature of interest
setting the `start` and `end` keywords. By default interpolation is
used to obtain subpixel precision.
Parameters
----------
start, end : {int | float | None}
The limits of the interval. If int they are taken as the
axis index. If float they are taken as the axis value.
reference_indices : tuple of ints or None
Defines the coordinates of the spectrum that will be used
as reference. If None the spectrum at the current
coordinates is used for this purpose.
max_shift : int
"Saturation limit" for the shift.
interpolate : bool
If True, interpolation is used to provide sub-pixel
accuracy.
number_of_interpolation_points : int
Number of interpolation points. Warning: making this number
too big can saturate the memory
interpolation_method : str or int
Specifies the kind of interpolation as a string ('linear',
'nearest', 'zero', 'slinear', 'quadratic', 'cubic') or as an
integer specifying the order of the spline interpolator to
use.
crop : bool
If True automatically crop the signal axis at both ends if
needed.
fill_value : float
If crop is False fill the data outside of the original
interval with the given value where needed.
also_align : list of signals
A list of Signal instances that have exactly the same
dimensions as this one and that will be aligned using the
shift map estimated using this signal.
mask : Signal of bool data type.
It must have signal_dimension = 0 and navigation_shape equal to the
current signal. Where mask is True the shift is not computed
and set to nan.
Returns
-------
An array with the result of the estimation. The shift will be in the units of the signal axis.
Raises
------
SignalDimensionError if the signal dimension is not 1.
See also
--------
estimate_shift1D
"""
self._check_signal_dimension_equals_one()
shift_array = self.estimate_shift1D(
start=start,
end=end,
reference_indices=reference_indices,
max_shift=max_shift,
interpolate=interpolate,
number_of_interpolation_points=number_of_interpolation_points,
mask=mask)
for signal in also_align + [self]:
signal.shift1D(shift_array=shift_array,
interpolation_method=interpolation_method,
crop=crop,
fill_value=fill_value)
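# Hedged usage sketch (assumes a working HyperSpy installation; the data and
# parameter values are illustrative only): align a stack of spectra whose
# single peak drifts by one channel per row.
import numpy as np
from hyperspy.hspy import signals

data = np.zeros((4, 100))
for i in range(4):
    data[i, 30 + i] = 1.          # a delta-like peak drifting one channel per row
s = signals.Spectrum(data)
s.align1D(start=20, end=60)       # in-place alignment over the selected window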
def integrate_in_range(self, signal_range='interactive'):
""" Sums the spectrum over an energy range, giving the integrated
area.
The energy range can either be selected through a GUI or the command
line.
Parameters
----------
signal_range : {a tuple of this form (l, r), "interactive"}
l and r are the left and right limits of the range. They can be
numbers or None, where None indicates the extremes of the interval.
If l and r are floats the `signal_range` will be in axis units (for
example eV). If l and r are integers the `signal_range` will be in
index units. When `signal_range` is "interactive" (default) the
range is selected using a GUI.
Returns
-------
integrated_spectrum : Signal subclass
See Also
--------
integrate_simpson
Examples
--------
Using the GUI
>>> s.integrate_in_range()
Using the CLI
>>> s_int = s.integrate_in_range(signal_range=(560,None))
Selecting a range in the axis units, by specifying the
signal range with floats.
>>> s_int = s.integrate_in_range(signal_range=(560.,590.))
Selecting a range using the index, by specifying the
signal range with integers.
>>> s_int = s.integrate_in_range(signal_range=(100,120))
"""
if signal_range == 'interactive':
self_copy = self.deepcopy()
ia = IntegrateArea(self_copy, signal_range)
ia.edit_traits()
integrated_spectrum = self_copy
else:
integrated_spectrum = self._integrate_in_range_commandline(
signal_range)
return integrated_spectrum
def _integrate_in_range_commandline(self, signal_range):
e1 = signal_range[0]
e2 = signal_range[1]
integrated_spectrum = self[..., e1:e2].integrate1D(-1)
return integrated_spectrum
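# Hedged sketch of the command-line path above: slice the signal axis and
# integrate along it. `integrate1D(-1)` reduces the last axis, exactly as
# used in _integrate_in_range_commandline; the data here is illustrative.
import numpy as np
from hyperspy.hspy import signals

s = signals.Spectrum(np.ones((5, 100)))
s_int = s[..., 10:20].integrate1D(-1)   # one value per navigation position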
@only_interactive
def calibrate(self):
"""Calibrate the spectral dimension using a gui.
It displays a window where the new calibration can be set by:
* Setting the offset, units and scale directly
* Selection a range by dragging the mouse on the spectrum figure
and
setting the new values for the given range limits
Notes
-----
For this method to work the output_dimension must be 1. Set the
view
accordingly
Raises
------
SignalDimensionError if the signal dimension is not 1.
"""
self._check_signal_dimension_equals_one()
calibration = SpectrumCalibration(self)
calibration.edit_traits()
def smooth_savitzky_golay(self,
polynomial_order=None,
window_length=None,
differential_order=0):
"""Apply a Savitzky-Golay filter to the data in place.
If `polynomial_order` or `window_length` is None the method
is run in interactive mode.
Parameters
----------
window_length : int
The length of the filter window (i.e. the number of coefficients).
`window_length` must be a positive odd integer.
polynomial_order : int
The order of the polynomial used to fit the samples.
`polynomial_order` must be less than `window_length`.
differential_order: int, optional
The order of the derivative to compute. This must be a
nonnegative integer. The default is 0, which means to filter
the data without differentiating.
Notes
-----
More information about the filter in `scipy.signal.savgol_filter`.
"""
if not savgol_imported:
raise ImportError("scipy >= 0.14 needs to be installed to use"
"this feature.")
self._check_signal_dimension_equals_one()
if (polynomial_order is not None and
window_length is not None):
axis = self.axes_manager.signal_axes[0]
self.data = savgol_filter(
x=self.data,
window_length=window_length,
polyorder=polynomial_order,
deriv=differential_order,
delta=axis.scale,
axis=axis.index_in_array)
else:
# Interactive mode
smoother = SmoothingSavitzkyGolay(self)
smoother.differential_order = differential_order
if polynomial_order is not None:
smoother.polynomial_order = polynomial_order
if window_length is not None:
smoother.window_length = window_length
smoother.edit_traits()
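# Standalone sketch of the non-interactive branch (requires scipy >= 0.14,
# as checked above): scipy.signal.savgol_filter applied along the data axis.
# The signal and window parameters below are illustrative.
import numpy as np
from scipy.signal import savgol_filter

y = np.sin(np.linspace(0, 6, 200)) + 0.1 * np.random.randn(200)
y_smooth = savgol_filter(y, window_length=11, polyorder=3, deriv=0, delta=1.0)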
def smooth_lowess(self,
smoothing_parameter=None,
number_of_iterations=None,
show_progressbar=None):
"""Lowess data smoothing in place.
If `smoothing_parameter` or `number_of_iterations` are None the method
is run in interactive mode.
Parameters
----------
smoothing_parameter: float or None
Between 0 and 1. The fraction of the data used
when estimating each y-value.
number_of_iterations: int or None
The number of residual-based reweightings
to perform.
show_progressbar : None or bool
If True, display a progress bar. If None the default is set in
`preferences`.
Raises
------
SignalDimensionError if the signal dimension is not 1.
ImportError if statsmodels is not installed.
Notes
-----
This method uses the lowess algorithm from statsmodels. statsmodels
is required for this method.
"""
if not statsmodels_installed:
raise ImportError("statsmodels is not installed. This package is "
"required for this feature.")
self._check_signal_dimension_equals_one()
if smoothing_parameter is None or number_of_iterations is None:
smoother = SmoothingLowess(self)
if smoothing_parameter is not None:
smoother.smoothing_parameter = smoothing_parameter
if number_of_iterations is not None:
smoother.number_of_iterations = number_of_iterations
smoother.edit_traits()
else:
self.map(lowess,
exog=self.axes_manager[-1].axis,
frac=smoothing_parameter,
it=number_of_iterations,
is_sorted=True,
return_sorted=False,
show_progressbar=show_progressbar)
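# Hedged sketch of the non-interactive branch (requires statsmodels, as
# checked above): lowess evaluated over the signal-axis values, mirroring
# the keyword usage of the self.map(...) call in smooth_lowess.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

x = np.linspace(0, 10, 200)
y = np.sin(x) + 0.2 * np.random.randn(200)
y_smooth = lowess(y, x, frac=0.1, it=2, is_sorted=True, return_sorted=False)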
def smooth_tv(self, smoothing_parameter=None, show_progressbar=None):
"""Total variation data smoothing in place.
Parameters
----------
smoothing_parameter: float or None
Denoising weight relative to L2 minimization. If None the method
is run in interactive mode.
show_progressbar : None or bool
If True, display a progress bar. If None the default is set in
`preferences`.
Raises
------
SignalDimensionError if the signal dimension is not 1.
"""
self._check_signal_dimension_equals_one()
if smoothing_parameter is None:
smoother = SmoothingTV(self)
smoother.edit_traits()
else:
self.map(_tv_denoise_1d, weight=smoothing_parameter,
show_progressbar=show_progressbar)
def filter_butterworth(self,
cutoff_frequency_ratio=None,
type='low',
order=2):
"""Butterworth filter in place.
Raises
------
SignalDimensionError if the signal dimension is not 1.
"""
self._check_signal_dimension_equals_one()
smoother = ButterworthFilter(self)
if cutoff_frequency_ratio is not None:
smoother.cutoff_frequency_ratio = cutoff_frequency_ratio
smoother.apply()
else:
smoother.edit_traits()
def _remove_background_cli(
self, signal_range, background_estimator, estimate_background=True):
from hyperspy.model import Model
model = Model(self)
model.append(background_estimator)
if estimate_background:
background_estimator.estimate_parameters(
self,
signal_range[0],
signal_range[1],
only_current=False)
else:
model.set_signal_range(signal_range[0], signal_range[1])
model.multifit()
return self - model.as_signal()
def remove_background(
self,
signal_range='interactive',
background_type='PowerLaw',
polynomial_order=2,
estimate_background=True):
"""Remove the background, either in place using a gui or returned as a new
spectrum using the command line.
Parameters
----------
signal_range : tuple, optional
If this argument is not specified, the signal range has to be
selected using a GUI. And the original spectrum will be replaced.
If tuple is given, the a spectrum will be returned.
background_type : string
The type of component which should be used to fit the background.
Possible components: PowerLaw, Gaussian, Offset, Polynomial
If Polynomial is used, the polynomial order can be specified
polynomial_order : int, default 2
Specify the polynomial order if a Polynomial background is used.
estimate_background : bool
If True, estimate the background. If False, the signal is fitted
using a full model. This is slower compared to the estimation but
possibly more accurate.
Examples
--------
Using the GUI, replaces spectrum s
>>> s.remove_background()
Using the command line, returns a spectrum
>>> s = s.remove_background(signal_range=(400,450), background_type='PowerLaw')
Using a full model to fit the background
>>> s = s.remove_background(signal_range=(400,450), estimate_background=False)
Raises
------
SignalDimensionError if the signal dimension is not 1.
"""
self._check_signal_dimension_equals_one()
if signal_range == 'interactive':
br = BackgroundRemoval(self)
br.edit_traits()
else:
if background_type == 'PowerLaw':
background_estimator = components.PowerLaw()
elif background_type == 'Gaussian':
background_estimator = components.Gaussian()
elif background_type == 'Offset':
background_estimator = components.Offset()
elif background_type == 'Polynomial':
background_estimator = components.Polynomial(polynomial_order)
else:
raise ValueError(
"Background type: " +
background_type +
" not recognized")
spectra = self._remove_background_cli(
signal_range, background_estimator, estimate_background)
return spectra
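# Hedged usage sketch of the command-line path (assumes HyperSpy importable;
# the data and range are illustrative only): fit a PowerLaw over a float
# (axis-units) range and subtract it, returning a new spectrum.
import numpy as np
from hyperspy.hspy import signals

x = np.arange(1., 201.)
s = signals.Spectrum(3e4 * x ** -2.5)        # a pure power-law "background"
s_clean = s.remove_background(signal_range=(50., 150.),
                              background_type='PowerLaw')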
@interactive_range_selector
def crop_spectrum(self, left_value=None, right_value=None,):
"""Crop in place the spectral dimension.
Parameters
----------
left_value, right_value: {int | float | None}
If int the values are taken as indices. If float they are
converted to indices using the spectral axis calibration.
If left_value is None crops from the beginning of the axis.
If right_value is None crops up to the end of the axis. If
both are None the interactive cropping interface is activated,
enabling cropping the spectrum using a span selector in the
signal plot.
Raises
------
SignalDimensionError if the signal dimension is not 1.
"""
self._check_signal_dimension_equals_one()
self.crop(
axis=self.axes_manager.signal_axes[0].index_in_axes_manager,
start=left_value, end=right_value)
@auto_replot
def gaussian_filter(self, FWHM):
"""Applies a Gaussian filter in the spectral dimension in place.
Parameters
----------
FWHM : float
The Full Width at Half Maximum of the gaussian in the
spectral axis units
Raises
------
ValueError if FWHM is equal or less than zero.
SignalDimensionError if the signal dimension is not 1.
"""
self._check_signal_dimension_equals_one()
if FWHM <= 0:
raise ValueError(
"FWHM must be greater than zero")
axis = self.axes_manager.signal_axes[0]
FWHM *= 1 / axis.scale
self.data = gaussian_filter1d(
self.data,
axis=axis.index_in_array,
sigma=FWHM / 2.35482)
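# Standalone sketch of the conversion used above: a FWHM given in axis units
# is converted to channels by dividing by the axis scale, then to the
# Gaussian sigma via FWHM = 2*sqrt(2*ln 2)*sigma ~= 2.35482*sigma.
import numpy as np
from scipy.ndimage import gaussian_filter1d

scale = 0.5                       # axis units per channel (illustrative)
fwhm_units = 2.0                  # desired FWHM in axis units
sigma = (fwhm_units / scale) / 2.35482
y = np.random.randn(256)
y_filtered = gaussian_filter1d(y, sigma=sigma)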
@auto_replot
def hanning_taper(self, side='both', channels=None, offset=0):
"""Apply a hanning taper to the data in place.
Parameters
----------
side : {'left', 'right', 'both'}
channels : {None, int}
The number of channels to taper. If None 2% of the total
number of channels (but at least 20) are tapered.
offset : int
Number of channels from the edge that are zeroed before
the taper is applied.
Returns
-------
channels
Raises
------
SignalDimensionError if the signal dimension is not 1.
"""
# TODO: generalize it
self._check_signal_dimension_equals_one()
if channels is None:
channels = int(round(len(self()) * 0.02))
if channels < 20:
channels = 20
dc = self.data
if side == 'left' or side == 'both':
dc[..., offset:channels + offset] *= (
np.hanning(2 * channels)[:channels])
dc[..., :offset] *= 0.
if side == 'right' or side == 'both':
if offset == 0:
rl = None
else:
rl = -offset
dc[..., -channels - offset:rl] *= (
np.hanning(2 * channels)[-channels:])
if offset != 0:
dc[..., -offset:] *= 0.
return channels
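# Standalone sketch of the taper itself: multiply the first and last
# `channels` samples by the rising/falling halves of a Hann window of
# length 2*channels, as done above for the offset=0 case.
import numpy as np

channels = 20
dc = np.ones(100)
dc[:channels] *= np.hanning(2 * channels)[:channels]
dc[-channels:] *= np.hanning(2 * channels)[-channels:]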
def find_peaks1D_ohaver(self, xdim=None, slope_thresh=0, amp_thresh=None,
subchannel=True, medfilt_radius=5, maxpeakn=30000,
peakgroup=10):
"""Find peaks along a 1D line (peaks in spectrum/spectra).
Function to locate the positive peaks in a noisy x-y data set.
Detects peaks by looking for downward zero-crossings in the
first derivative that exceed 'slope_thresh'.
Returns an array containing position, height, and width of each
peak.
'slope_thresh' and 'amp_thresh' control sensitivity: higher
values will neglect smaller features.
'peakgroup' is the number of points around the top of the peak
that are searched.
Parameters
----------
slope_thresh : float (optional)
1st derivative threshold to count the peak;
higher values will neglect smaller features.
default is set to 0.
amp_thresh : float (optional)
intensity threshold above which peaks are counted;
higher values will neglect smaller features.
default is set to 10% of max(y).
medfilt_radius : int (optional)
median filter window to apply to smooth the data
(see scipy.signal.medfilt);
if 0, no filter will be applied.
default is set to 5.
peakgroup : int (optional)
number of points around the "top part" of the peak;
default is set to 10.
maxpeakn : int (optional)
number of maximum detectable peaks;
default is set to 30000.
subchannel : bool (optional)
if True, the peak position is refined to sub-channel
precision. default is set to True.
Returns
-------
peaks : structured array of shape _navigation_shape_in_array
Each cell contains a structured array with as many entries as
peaks found at that location. The fields 'position', 'height'
and 'width' contain the position, height, and width of each
peak.
Raises
------
SignalDimensionError if the signal dimension is not 1.
"""
# TODO: add scipy.signal.find_peaks_cwt
self._check_signal_dimension_equals_one()
axis = self.axes_manager.signal_axes[0].axis
arr_shape = (self.axes_manager._navigation_shape_in_array
if self.axes_manager.navigation_size > 0
else [1, ])
peaks = np.zeros(arr_shape, dtype=object)
for y, indices in zip(self._iterate_signal(),
self.axes_manager._array_indices_generator()):
peaks[indices] = find_peaks_ohaver(
y,
axis,
slope_thresh=slope_thresh,
amp_thresh=amp_thresh,
medfilt_radius=medfilt_radius,
maxpeakn=maxpeakn,
peakgroup=peakgroup,
subchannel=subchannel)
return peaks
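# Hedged usage sketch (assumes HyperSpy importable; thresholds illustrative):
# locate two synthetic peaks and read their positions from the structured
# array returned for the single navigation position.
import numpy as np
from hyperspy.hspy import signals

x = np.arange(100.)
s = signals.Spectrum(np.exp(-(x - 40.) ** 2 / 4.) +
                     np.exp(-(x - 70.) ** 2 / 9.))
peaks = s.find_peaks1D_ohaver(amp_thresh=0.1)
positions = peaks[0]['position']   # expected to lie near 40. and 70.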
def estimate_peak_width(self,
factor=0.5,
window=None,
return_interval=False,
show_progressbar=None):
"""Estimate the width of the highest intensity of peak
of the spectra at a given fraction of its maximum.
It can be used with asymmetric peaks. For accurate results any
background must be previously substracted.
The estimation is performed by interpolation using cubic splines.
Parameters
----------
factor : 0 < float < 1
The default, 0.5, estimates the FWHM.
window : None, float
The size of the window centred at the peak maximum
used to perform the estimation.
The window size must be chosen with care: if it is narrower
than the width of the peak at some positions or if it is
so wide that it includes other more intense peaks this
method cannot compute the width and a NaN is stored instead.
return_interval: bool
If True, returns 2 extra signals with the positions of the
desired height fraction at the left and right of the
peak.
show_progressbar : None or bool
If True, display a progress bar. If None the default is set in
`preferences`.
Returns
-------
width or [width, left, right], depending on the value of
`return_interval`.
"""
if show_progressbar is None:
show_progressbar = preferences.General.show_progressbar
self._check_signal_dimension_equals_one()
if not 0 < factor < 1:
raise ValueError("factor must be between 0 and 1.")
left, right = (self._get_navigation_signal(),
self._get_navigation_signal())
# The signals must be of dtype float to contain np.nan
left.change_dtype('float')
right.change_dtype('float')
axis = self.axes_manager.signal_axes[0]
x = axis.axis
maxval = self.axes_manager.navigation_size
if maxval > 0:
pbar = progressbar(maxval=maxval,
disabled=not show_progressbar)
for i, spectrum in enumerate(self):
if window is not None:
vmax = axis.index2value(spectrum.data.argmax())
spectrum = spectrum[vmax - window / 2.:vmax + window / 2.]
x = spectrum.axes_manager[0].axis
spline = scipy.interpolate.UnivariateSpline(
x,
spectrum.data - factor * spectrum.data.max(),
s=0)
roots = spline.roots()
if len(roots) == 2:
left[self.axes_manager.indices] = roots[0]
right[self.axes_manager.indices] = roots[1]
else:
left[self.axes_manager.indices] = np.nan
right[self.axes_manager.indices] = np.nan
if maxval > 0:
pbar.update(i)
if maxval > 0:
pbar.finish()
width = right - left
if factor == 0.5:
width.metadata.General.title = (
self.metadata.General.title + " FWHM")
left.metadata.General.title = (
self.metadata.General.title + " FWHM left position")
right.metadata.General.title = (
self.metadata.General.title + " FWHM right position")
else:
width.metadata.General.title = (
self.metadata.General.title +
" full-width at %.1f maximum" % factor)
left.metadata.General.title = (
self.metadata.General.title +
" full-width at %.1f maximum left position" % factor)
right.metadata.General.title = (
self.metadata.General.title +
" full-width at %.1f maximum right position" % factor)
if return_interval is True:
return [width, left, right]
else:
return width
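# Standalone sketch of the width estimate above: the crossings of
# factor*max are the roots of an interpolating spline through
# (x, y - factor*max). For a clean peak there are exactly two roots.
import numpy as np
import scipy.interpolate

x = np.linspace(-5, 5, 201)
y = np.exp(-x ** 2 / 2.)                              # Gaussian, FWHM ~= 2.35482
spline = scipy.interpolate.UnivariateSpline(x, y - 0.5 * y.max(), s=0)
left_x, right_x = spline.roots()                      # the two half-maximum crossings
width = right_x - left_x                              # ~= 2.355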
class MVATools(object):
# TODO: All of the plotting methods here should move to drawing
def _plot_factors_or_pchars(self, factors, comp_ids=None,
calibrate=True, avg_char=False,
same_window=None, comp_label='PC',
img_data=None,
plot_shifts=True, plot_char=4,
cmap=plt.cm.gray, quiver_color='white',
vector_scale=1,
per_row=3, ax=None):
"""Plot components from PCA or ICA, or peak characteristics
Parameters
----------
comp_ids : None, int, or list of ints
if None, returns maps of all components.
if int, returns maps of components with ids from 0 to given
int.
if list of ints, returns maps of components with ids in
given list.
calibrate : bool
if True, plots are calibrated according to the data in the
axes
manager.
same_window : bool
if True, plots each factor to the same window. They are
not scaled.
comp_label : string, the label that is either the plot title
(if plotting in
separate windows) or the label in the legend (if plotting
in the
same window)
cmap : a matplotlib colormap
The colormap used for factor images or
any peak characteristic scatter map
overlay.
Parameters only valid for peak characteristics (or pk char factors):
--------------------------------------------------------------------
img_data - 2D numpy array,
The array to overlay peak characteristics onto. If None,
defaults to the average image of your stack.
plot_shifts - bool, default is True
If true, plots a quiver (arrow) plot showing the shifts for
each
peak present in the component being plotted.
plot_char - None or int
If int, the id of the characteristic to plot as the colored
scatter plot.
Possible components are:
4: peak height
5: peak orientation
6: peak eccentricity
quiver_color : any color recognized by matplotlib
Determines the color of vectors drawn for
plotting peak shifts.
vector_scale : integer or None
Scales the quiver plot arrows. The vector
is defined as one data unit along the X axis.
If shifts are small, set vector_scale so
that when they are multiplied by vector_scale,
they are on the scale of the image plot.
If None, uses matplotlib's autoscaling.
"""
if same_window is None:
same_window = preferences.MachineLearning.same_window
if comp_ids is None:
comp_ids = xrange(factors.shape[1])
elif not hasattr(comp_ids, '__iter__'):
comp_ids = xrange(comp_ids)
n = len(comp_ids)
if same_window:
rows = int(np.ceil(n / float(per_row)))
fig_list = []
if n < per_row:
per_row = n
if same_window and self.axes_manager.signal_dimension == 2:
f = plt.figure(figsize=(4 * per_row, 3 * rows))
else:
f = plt.figure()
for i in xrange(len(comp_ids)):
if self.axes_manager.signal_dimension == 1:
if same_window:
ax = plt.gca()
else:
if i > 0:
f = plt.figure()
ax = f.add_subplot(111)
ax = sigdraw._plot_1D_component(
factors=factors,
idx=comp_ids[i],
axes_manager=self.axes_manager,
ax=ax,
calibrate=calibrate,
comp_label=comp_label,
same_window=same_window)
if same_window:
plt.legend(ncol=factors.shape[1] // 2, loc='best')
elif self.axes_manager.signal_dimension == 2:
if same_window:
ax = f.add_subplot(rows, per_row, i + 1)
else:
if i > 0:
f = plt.figure()
ax = f.add_subplot(111)
sigdraw._plot_2D_component(factors=factors,
idx=comp_ids[i],
axes_manager=self.axes_manager,
calibrate=calibrate, ax=ax,
cmap=cmap, comp_label=comp_label)
if not same_window:
fig_list.append(f)
try:
plt.tight_layout()
except:
pass
if not same_window:
return fig_list
else:
return f
def _plot_loadings(self, loadings, comp_ids=None, calibrate=True,
same_window=None, comp_label=None,
with_factors=False, factors=None,
cmap=plt.cm.gray, no_nans=False, per_row=3):
if same_window is None:
same_window = preferences.MachineLearning.same_window
if comp_ids is None:
comp_ids = xrange(loadings.shape[0])
elif not hasattr(comp_ids, '__iter__'):
comp_ids = xrange(comp_ids)
n = len(comp_ids)
if same_window:
rows = int(np.ceil(n / float(per_row)))
fig_list = []
if n < per_row:
per_row = n
if same_window and self.axes_manager.signal_dimension == 2:
f = plt.figure(figsize=(4 * per_row, 3 * rows))
else:
f = plt.figure()
for i in xrange(n):
if self.axes_manager.navigation_dimension == 1:
if same_window:
ax = plt.gca()
else:
if i > 0:
f = plt.figure()
ax = f.add_subplot(111)
elif self.axes_manager.navigation_dimension == 2:
if same_window:
ax = f.add_subplot(rows, per_row, i + 1)
else:
if i > 0:
f = plt.figure()
ax = f.add_subplot(111)
sigdraw._plot_loading(
loadings, idx=comp_ids[i], axes_manager=self.axes_manager,
no_nans=no_nans, calibrate=calibrate, cmap=cmap,
comp_label=comp_label, ax=ax, same_window=same_window)
if not same_window:
fig_list.append(f)
try:
plt.tight_layout()
except:
pass
if not same_window:
if with_factors:
return fig_list, self._plot_factors_or_pchars(
factors, comp_ids=comp_ids, calibrate=calibrate,
same_window=same_window, comp_label=comp_label,
per_row=per_row)
else:
return fig_list
else:
if self.axes_manager.navigation_dimension == 1:
plt.legend(ncol=loadings.shape[0] // 2, loc='best')
animate_legend()
if with_factors:
return f, self._plot_factors_or_pchars(factors,
comp_ids=comp_ids,
calibrate=calibrate,
same_window=same_window,
comp_label=comp_label,
per_row=per_row)
else:
return f
def _export_factors(self,
factors,
folder=None,
comp_ids=None,
multiple_files=None,
save_figures=False,
save_figures_format='png',
factor_prefix=None,
factor_format=None,
comp_label=None,
cmap=plt.cm.gray,
plot_shifts=True,
plot_char=4,
img_data=None,
same_window=False,
calibrate=True,
quiver_color='white',
vector_scale=1,
no_nans=True, per_row=3):
from hyperspy._signals.image import Image
from hyperspy._signals.spectrum import Spectrum
if multiple_files is None:
multiple_files = preferences.MachineLearning.multiple_files
if factor_format is None:
factor_format = preferences.MachineLearning.\
export_factors_default_file_format
# Select the desired factors
if comp_ids is None:
comp_ids = xrange(factors.shape[1])
elif not hasattr(comp_ids, '__iter__'):
comp_ids = range(comp_ids)
mask = np.zeros(factors.shape[1], dtype=np.bool)
for idx in comp_ids:
mask[idx] = 1
factors = factors[:, mask]
if save_figures is True:
plt.ioff()
fac_plots = self._plot_factors_or_pchars(factors,
comp_ids=comp_ids,
same_window=same_window,
comp_label=comp_label,
img_data=img_data,
plot_shifts=plot_shifts,
plot_char=plot_char,
cmap=cmap,
per_row=per_row,
quiver_color=quiver_color,
vector_scale=vector_scale)
for idx in xrange(len(comp_ids)):
filename = '%s_%02i.%s' % (factor_prefix, comp_ids[idx],
save_figures_format)
if folder is not None:
filename = os.path.join(folder, filename)
ensure_directory(filename)
fac_plots[idx].savefig(filename, format=save_figures_format,
dpi=600)
plt.ion()
elif multiple_files is False:
if self.axes_manager.signal_dimension == 2:
# factor images
axes_dicts = []
axes = self.axes_manager.signal_axes[::-1]
shape = (axes[1].size, axes[0].size)
factor_data = np.rollaxis(
factors.reshape((shape[0], shape[1], -1)), 2)
axes_dicts.append(axes[0].get_axis_dictionary())
axes_dicts.append(axes[1].get_axis_dictionary())
axes_dicts.append({'name': 'factor_index',
'scale': 1.,
'offset': 0.,
'size': int(factors.shape[1]),
'units': 'factor',
'index_in_array': 0, })
s = Image(factor_data,
axes=axes_dicts,
metadata={
'General': {'title': '%s from %s' % (
factor_prefix,
self.metadata.General.title),
}})
elif self.axes_manager.signal_dimension == 1:
axes = []
axes.append(
self.axes_manager.signal_axes[0].get_axis_dictionary())
axes[0]['index_in_array'] = 1
axes.append({
'name': 'factor_index',
'scale': 1.,
'offset': 0.,
'size': int(factors.shape[1]),
'units': 'factor',
'index_in_array': 0,
})
s = Spectrum(
factors.T, axes=axes, metadata={
"General": {
'title': '%s from %s' %
(factor_prefix, self.metadata.General.title), }})
filename = '%ss.%s' % (factor_prefix, factor_format)
if folder is not None:
filename = os.path.join(folder, filename)
s.save(filename)
else: # Separate files
if self.axes_manager.signal_dimension == 1:
axis_dict = self.axes_manager.signal_axes[0].\
get_axis_dictionary()
axis_dict['index_in_array'] = 0
for dim, index in zip(comp_ids, range(len(comp_ids))):
s = Spectrum(factors[:, index],
axes=[axis_dict, ],
metadata={
"General": {'title': '%s from %s' % (
factor_prefix,
self.metadata.General.title),
}})
filename = '%s-%i.%s' % (factor_prefix,
dim,
factor_format)
if folder is not None:
filename = os.path.join(folder, filename)
s.save(filename)
if self.axes_manager.signal_dimension == 2:
axes = self.axes_manager.signal_axes
axes_dicts = []
axes_dicts.append(axes[0].get_axis_dictionary())
axes_dicts.append(axes[1].get_axis_dictionary())
axes_dicts[0]['index_in_array'] = 0
axes_dicts[1]['index_in_array'] = 1
factor_data = factors.reshape(
self.axes_manager._signal_shape_in_array + [-1, ])
for dim, index in zip(comp_ids, range(len(comp_ids))):
im = Image(factor_data[..., index],
axes=axes_dicts,
metadata={
"General": {'title': '%s from %s' % (
factor_prefix,
self.metadata.General.title),
}})
filename = '%s-%i.%s' % (factor_prefix,
dim,
factor_format)
if folder is not None:
filename = os.path.join(folder, filename)
im.save(filename)
def _export_loadings(self,
loadings,
folder=None,
comp_ids=None,
multiple_files=None,
loading_prefix=None,
loading_format=None,
save_figures_format='png',
comp_label=None,
cmap=plt.cm.gray,
save_figures=False,
same_window=False,
calibrate=True,
no_nans=True,
per_row=3):
from hyperspy._signals.image import Image
from hyperspy._signals.spectrum import Spectrum
if multiple_files is None:
multiple_files = preferences.MachineLearning.multiple_files
if loading_format is None:
loading_format = preferences.MachineLearning.\
export_loadings_default_file_format
if comp_ids is None:
comp_ids = range(loadings.shape[0])
elif not hasattr(comp_ids, '__iter__'):
comp_ids = range(comp_ids)
mask = np.zeros(loadings.shape[0], dtype=np.bool)
for idx in comp_ids:
mask[idx] = 1
loadings = loadings[mask]
if save_figures is True:
plt.ioff()
sc_plots = self._plot_loadings(loadings, comp_ids=comp_ids,
calibrate=calibrate,
same_window=same_window,
comp_label=comp_label,
cmap=cmap, no_nans=no_nans,
per_row=per_row)
for idx in xrange(len(comp_ids)):
filename = '%s_%02i.%s' % (loading_prefix, comp_ids[idx],
save_figures_format)
if folder is not None:
filename = os.path.join(folder, filename)
ensure_directory(filename)
sc_plots[idx].savefig(filename, dpi=600)
plt.ion()
elif multiple_files is False:
if self.axes_manager.navigation_dimension == 2:
axes_dicts = []
axes = self.axes_manager.navigation_axes[::-1]
shape = (axes[1].size, axes[0].size)
loading_data = loadings.reshape((-1, shape[0], shape[1]))
axes_dicts.append(axes[0].get_axis_dictionary())
axes_dicts[0]['index_in_array'] = 1
axes_dicts.append(axes[1].get_axis_dictionary())
axes_dicts[1]['index_in_array'] = 2
axes_dicts.append({'name': 'loading_index',
'scale': 1.,
'offset': 0.,
'size': int(loadings.shape[0]),
'units': 'factor',
'index_in_array': 0, })
s = Image(loading_data,
axes=axes_dicts,
metadata={
"General": {'title': '%s from %s' % (
loading_prefix,
self.metadata.General.title),
}})
elif self.axes_manager.navigation_dimension == 1:
cal_axis = self.axes_manager.navigation_axes[0].\
get_axis_dictionary()
cal_axis['index_in_array'] = 1
axes = []
axes.append({'name': 'loading_index',
'scale': 1.,
'offset': 0.,
'size': int(loadings.shape[0]),
'units': 'comp_id',
'index_in_array': 0, })
axes.append(cal_axis)
s = Image(loadings,
axes=axes,
metadata={
"General": {'title': '%s from %s' % (
loading_prefix,
self.metadata.General.title),
}})
filename = '%ss.%s' % (loading_prefix, loading_format)
if folder is not None:
filename = os.path.join(folder, filename)
s.save(filename)
else: # Separate files
if self.axes_manager.navigation_dimension == 1:
axis_dict = self.axes_manager.navigation_axes[0].\
get_axis_dictionary()
axis_dict['index_in_array'] = 0
for dim, index in zip(comp_ids, range(len(comp_ids))):
s = Spectrum(loadings[index],
axes=[axis_dict, ])
filename = '%s-%i.%s' % (loading_prefix,
dim,
loading_format)
if folder is not None:
filename = os.path.join(folder, filename)
s.save(filename)
elif self.axes_manager.navigation_dimension == 2:
axes_dicts = []
axes = self.axes_manager.navigation_axes[::-1]
shape = (axes[0].size, axes[1].size)
loading_data = loadings.reshape((-1, shape[0], shape[1]))
axes_dicts.append(axes[0].get_axis_dictionary())
axes_dicts[0]['index_in_array'] = 0
axes_dicts.append(axes[1].get_axis_dictionary())
axes_dicts[1]['index_in_array'] = 1
for dim, index in zip(comp_ids, range(len(comp_ids))):
s = Image(loading_data[index, ...],
axes=axes_dicts,
metadata={
"General": {'title': '%s from %s' % (
loading_prefix,
self.metadata.General.title),
}})
filename = '%s-%i.%s' % (loading_prefix,
dim,
loading_format)
if folder is not None:
filename = os.path.join(folder, filename)
s.save(filename)
def plot_decomposition_factors(self,
comp_ids=None,
calibrate=True,
same_window=None,
comp_label='Decomposition factor',
per_row=3):
"""Plot factors from a decomposition.
Parameters
----------
comp_ids : None, int, or list of ints
if None, returns maps of all components.
if int, returns maps of components with ids from 0 to given
int.
if list of ints, returns maps of components with ids in
given list.
calibrate : bool
if True, calibrates plots where calibration is available
from
the axes_manager. If False, plots are in pixels/channels.
same_window : bool
if True, plots each factor to the same window. They are
not scaled.
comp_label : string, the label that is either the plot title
(if plotting in
separate windows) or the label in the legend (if plotting
in the
same window)
cmap : The colormap used for the factor image, or for peak
characteristics, the colormap used for the scatter plot of
some peak characteristic.
per_row : int, the number of plots in each row, when the
same_window
parameter is True.
See Also
--------
plot_decomposition_loadings, plot_decomposition_results.
"""
if self.axes_manager.signal_dimension > 2:
raise NotImplementedError("This method cannot plot factors of "
"signals of dimension higher than 2. "
"You can use "
"`plot_decomposition_results` instead.")
if same_window is None:
same_window = preferences.MachineLearning.same_window
factors = self.learning_results.factors
if comp_ids is None:
comp_ids = self.learning_results.output_dimension
return self._plot_factors_or_pchars(factors,
comp_ids=comp_ids,
calibrate=calibrate,
same_window=same_window,
comp_label=comp_label,
per_row=per_row)
def plot_bss_factors(self, comp_ids=None, calibrate=True,
same_window=None, comp_label='BSS factor',
per_row=3):
"""Plot factors from blind source separation results.
Parameters
----------
comp_ids : None, int, or list of ints
if None, returns maps of all components.
if int, returns maps of components with ids from 0 to
given int.
if list of ints, returns maps of components with ids in
given list.
calibrate : bool
if True, calibrates plots where calibration is available
from
the axes_manager. If False, plots are in pixels/channels.
same_window : bool
if True, plots each factor to the same window. They are
not scaled.
comp_label : string, the label that is either the plot title
(if plotting in
separate windows) or the label in the legend (if plotting
in the
same window)
cmap : The colormap used for the factor image, or for peak
characteristics, the colormap used for the scatter plot of
some peak characteristic.
per_row : int, the number of plots in each row, when the
same_window
parameter is True.
See Also
--------
plot_bss_loadings, plot_bss_results.
"""
if self.axes_manager.signal_dimension > 2:
raise NotImplementedError("This method cannot plot factors of "
"signals of dimension higher than 2. "
"You can use "
"`plot_bss_results` instead.")
if same_window is None:
same_window = preferences.MachineLearning.same_window
factors = self.learning_results.bss_factors
return self._plot_factors_or_pchars(factors,
comp_ids=comp_ids,
calibrate=calibrate,
same_window=same_window,
comp_label=comp_label,
per_row=per_row)
def plot_decomposition_loadings(self,
comp_ids=None,
calibrate=True,
same_window=None,
comp_label='Decomposition loading',
with_factors=False,
cmap=plt.cm.gray,
no_nans=False,
per_row=3):
"""Plot loadings from PCA.
Parameters
----------
comp_ids : None, int, or list of ints
if None, returns maps of all components.
if int, returns maps of components with ids from 0 to
given int.
if list of ints, returns maps of components with ids in
given list.
calibrate : bool
if True, calibrates plots where calibration is available
from
the axes_manager. If False, plots are in pixels/channels.
same_window : bool
if True, plots each factor to the same window. They are
not scaled.
comp_label : string,
The label that is either the plot title (if plotting in
separate windows) or the label in the legend (if plotting
in the same window). In this case, each loading line can be
toggled on and off by clicking on the legended line.
with_factors : bool
If True, also returns figure(s) with the factors for the
given comp_ids.
cmap : matplotlib colormap
The colormap used for the factor image, or for peak
characteristics, the colormap used for the scatter plot of
some peak characteristic.
no_nans : bool
If True, removes NaN's from the loading plots.
per_row : int
the number of plots in each row, when the same_window
parameter is True.
See Also
--------
plot_decomposition_factors, plot_decomposition_results.
"""
if self.axes_manager.navigation_dimension > 2:
raise NotImplementedError("This method cannot plot loadings of "
"dimension higher than 2. "
"You can use "
"`plot_decomposition_results` instead.")
if same_window is None:
same_window = preferences.MachineLearning.same_window
loadings = self.learning_results.loadings.T
if with_factors:
factors = self.learning_results.factors
else:
factors = None
if comp_ids is None:
comp_ids = self.learning_results.output_dimension
return self._plot_loadings(
loadings,
comp_ids=comp_ids,
with_factors=with_factors,
factors=factors,
same_window=same_window,
comp_label=comp_label,
cmap=cmap,
no_nans=no_nans,
per_row=per_row)
def plot_bss_loadings(self, comp_ids=None, calibrate=True,
same_window=None, comp_label='BSS loading',
with_factors=False, cmap=plt.cm.gray,
no_nans=False, per_row=3):
"""Plot loadings from ICA
Parameters
----------
comp_ids : None, int, or list of ints
if None, returns maps of all components.
if int, returns maps of components with ids from 0 to
given int.
if list of ints, returns maps of components with ids in
given list.
calibrate : bool
if True, calibrates plots where calibration is available
from
the axes_manager. If False, plots are in pixels/channels.
same_window : bool
if True, plots each factor to the same window. They are
not scaled.
comp_label : string,
The label that is either the plot title (if plotting in
separate windows) or the label in the legend (if plotting
in the same window). In this case, each loading line can be
toggled on and off by clicking on the legended line.
with_factors : bool
If True, also returns figure(s) with the factors for the
given comp_ids.
cmap : matplotlib colormap
The colormap used for the factor image, or for peak
characteristics, the colormap used for the scatter plot of
some peak characteristic.
no_nans : bool
If True, removes NaN's from the loading plots.
per_row : int
the number of plots in each row, when the same_window
parameter is True.
See Also
--------
plot_bss_factors, plot_bss_results.
"""
if self.axes_manager.navigation_dimension > 2:
raise NotImplementedError("This method cannot plot loadings of "
"dimension higher than 2. "
"You can use "
"`plot_bss_results` instead.")
if same_window is None:
same_window = preferences.MachineLearning.same_window
loadings = self.learning_results.bss_loadings.T
if with_factors:
factors = self.learning_results.bss_factors
else:
factors = None
return self._plot_loadings(
loadings,
comp_ids=comp_ids,
with_factors=with_factors,
factors=factors,
same_window=same_window,
comp_label=comp_label,
cmap=cmap,
no_nans=no_nans,
per_row=per_row)
def export_decomposition_results(self, comp_ids=None,
folder=None,
calibrate=True,
factor_prefix='factor',
factor_format=None,
loading_prefix='loading',
loading_format=None,
comp_label=None,
cmap=plt.cm.gray,
same_window=False,
multiple_files=None,
no_nans=True,
per_row=3,
save_figures=False,
save_figures_format='png'):
"""Export results from a decomposition to any of the supported
formats.
Parameters
----------
comp_ids : None, int, or list of ints
if None, returns all components/loadings.
if int, returns components/loadings with ids from 0 to
given int.
if list of ints, returns components/loadings with ids in
given list.
folder : str or None
The path to the folder where the file will be saved.
If `None` the
current folder is used by default.
factor_prefix : string
The prefix that any exported filenames for
factors/components
begin with
factor_format : string
The extension of the format that you wish to save to.
loading_prefix : string
The prefix that any exported filenames for
loadings begin with
loading_format : string
The extension of the format that you wish to save to.
Determines
the kind of output.
- For image formats (tif, png, jpg, etc.), plots are
created
using the plotting flags as below, and saved at
600 dpi.
One plot per loading is saved.
- For multidimensional formats (rpl, hdf5), arrays are
saved
in single files. All loadings are contained in the
one
file.
- For spectral formats (msa), each loading is saved to a
separate file.
multiple_files : Bool
If True, one file per factor and per loading will be
created on exporting. Otherwise only two files will be
created, one for the factors and another for the loadings.
The default value can be chosen in the preferences.
save_figures : Bool
If True the same figures that are obtained when using the
plot methods will be saved with 600 dpi resolution.
Plotting options (for save_figures = True ONLY)
----------------------------------------------
calibrate : bool
if True, calibrates plots where calibration is available
from
the axes_manager. If False, plots are in pixels/channels.
same_window : bool
if True, plots each factor to the same window.
comp_label : string, the label that is either the plot title
(if plotting in separate windows) or the label in the legend
(if plotting in the same window)
cmap : The colormap used for the factor image, or for peak
characteristics, the colormap used for the scatter plot of
some peak characteristic.
per_row : int, the number of plots in each row, when the
same_window
parameter is True.
save_figures_format : str
The image format extension.
See Also
--------
get_decomposition_factors,
get_decomposition_loadings.
"""
factors = self.learning_results.factors
loadings = self.learning_results.loadings.T
self._export_factors(
factors,
folder=folder,
comp_ids=comp_ids,
calibrate=calibrate,
multiple_files=multiple_files,
factor_prefix=factor_prefix,
factor_format=factor_format,
comp_label=comp_label,
save_figures=save_figures,
cmap=cmap,
no_nans=no_nans,
same_window=same_window,
per_row=per_row,
save_figures_format=save_figures_format)
self._export_loadings(
loadings,
comp_ids=comp_ids, folder=folder,
calibrate=calibrate,
multiple_files=multiple_files,
loading_prefix=loading_prefix,
loading_format=loading_format,
comp_label=comp_label,
cmap=cmap,
save_figures=save_figures,
same_window=same_window,
no_nans=no_nans,
per_row=per_row)
def export_bss_results(self,
comp_ids=None,
folder=None,
calibrate=True,
multiple_files=None,
save_figures=False,
factor_prefix='bss_factor',
factor_format=None,
loading_prefix='bss_loading',
loading_format=None,
comp_label=None, cmap=plt.cm.gray,
same_window=False,
no_nans=True,
per_row=3,
save_figures_format='png'):
"""Export results from ICA to any of the supported formats.
Parameters
----------
comp_ids : None, int, or list of ints
if None, returns all components/loadings.
if int, returns components/loadings with ids from 0 to given
int.
if list of ints, returns components/loadings with ids in
given list.
folder : str or None
The path to the folder where the file will be saved. If
`None` the
current folder is used by default.
factor_prefix : string
The prefix that any exported filenames for
factors/components
begin with
factor_format : string
The extension of the format that you wish to save to.
Determines
the kind of output.
- For image formats (tif, png, jpg, etc.), plots are
created
using the plotting flags as below, and saved at
600 dpi.
One plot per factor is saved.
- For multidimensional formats (rpl, hdf5), arrays are
saved
in single files. All factors are contained in the one
file.
- For spectral formats (msa), each factor is saved to a
separate file.
loading_prefix : string
The prefix that any exported filenames for
loadings begin with
loading_format : string
The extension of the format that you wish to save to.
multiple_files : Bool
If True, one file per factor and per loading will be
created on exporting. Otherwise only two files will be
created, one for the factors and another for the loadings.
The default value can be chosen in the preferences.
save_figures : Bool
If True the same figures that are obtained when using the
plot methods will be saved with 600 dpi resolution.
Plotting options (for save_figures = True ONLY)
----------------------------------------------
calibrate : bool
if True, calibrates plots where calibration is available
from
the axes_manager. If False, plots are in pixels/channels.
same_window : bool
if True, plots each factor to the same window.
comp_label : string
the label that is either the plot title (if plotting in
separate windows) or the label in the legend (if plotting
in the
same window)
cmap : The colormap used for the factor image, or for peak
characteristics, the colormap used for the scatter plot of
some peak characteristic.
per_row : int, the number of plots in each row, when the
same_window
parameter is True.
save_figures_format : str
The image format extension.
See Also
--------
get_bss_factors,
get_bss_loadings.
"""
factors = self.learning_results.bss_factors
loadings = self.learning_results.bss_loadings.T
self._export_factors(factors,
folder=folder,
comp_ids=comp_ids,
calibrate=calibrate,
multiple_files=multiple_files,
factor_prefix=factor_prefix,
factor_format=factor_format,
comp_label=comp_label,
save_figures=save_figures,
cmap=cmap,
no_nans=no_nans,
same_window=same_window,
per_row=per_row,
save_figures_format=save_figures_format)
self._export_loadings(loadings,
comp_ids=comp_ids,
folder=folder,
calibrate=calibrate,
multiple_files=multiple_files,
loading_prefix=loading_prefix,
loading_format=loading_format,
comp_label=comp_label,
cmap=cmap,
save_figures=save_figures,
same_window=same_window,
no_nans=no_nans,
per_row=per_row,
save_figures_format=save_figures_format)
def _get_loadings(self, loadings):
from hyperspy.hspy import signals
data = loadings.T.reshape(
(-1,) + self.axes_manager.navigation_shape[::-1])
signal = signals.Signal(
data,
axes=(
[{"size": data.shape[0], "navigate": True}] +
self.axes_manager._get_navigation_axes_dicts()))
signal.set_signal_origin(self.metadata.Signal.signal_origin)
for axis in signal.axes_manager._axes[1:]:
axis.navigate = False
return signal
def _get_factors(self, factors):
signal = self.__class__(
factors.T.reshape((-1,) + self.axes_manager.signal_shape[::-1]),
axes=[{"size": factors.shape[-1], "navigate": True}] +
self.axes_manager._get_signal_axes_dicts())
signal.set_signal_origin(self.metadata.Signal.signal_origin)
signal.set_signal_type(self.metadata.Signal.signal_type)
for axis in signal.axes_manager._axes[1:]:
axis.navigate = False
return signal
def get_decomposition_loadings(self):
"""Return the decomposition loadings as a Signal.
See Also
--------
get_decomposition_factors, export_decomposition_results.
"""
signal = self._get_loadings(self.learning_results.loadings)
signal.axes_manager._axes[0].name = "Decomposition component index"
signal.metadata.General.title = "Decomposition loadings of " + \
self.metadata.General.title
return signal
def get_decomposition_factors(self):
"""Return the decomposition factors as a Signal.
See Also
--------
get_decomposition_loadings, export_decomposition_results.
"""
signal = self._get_factors(self.learning_results.factors)
signal.axes_manager._axes[0].name = "Decomposition component index"
signal.metadata.General.title = ("Decomposition factors of " +
self.metadata.General.title)
return signal
def get_bss_loadings(self):
"""Return the blind source separtion loadings as a Signal.
See Also
-------
get_bss_factors, export_bss_results.
"""
signal = self._get_loadings(
self.learning_results.bss_loadings)
signal.axes_manager[0].name = "BSS component index"
signal.metadata.General.title = ("BSS loadings of " +
self.metadata.General.title)
return signal
def get_bss_factors(self):
"""Return the blind source separtion factors as a Signal.
See Also
-------
get_bss_loadings, export_bss_results.
"""
signal = self._get_factors(self.learning_results.bss_factors)
signal.axes_manager[0].name = "BSS component index"
signal.metadata.General.title = ("BSS factors of " +
self.metadata.General.title)
return signal
def plot_bss_results(self,
factors_navigator="auto",
loadings_navigator="auto",
factors_dim=2,
loadings_dim=2,):
"""Plot the blind source separation factors and loadings.
Unlike `plot_bss_factors` and `plot_bss_loadings`, this method displays
one component at a time. Therefore it provides a more compact
visualization than the other two methods. The loadings and factors
are displayed in different windows and each has its own
navigator/sliders to navigate them if they are multidimensional. The
component index axis is synchronized between the two.
Parameters
----------
factors_navigator, loadings_navigator : {"auto", None, "spectrum",
Signal}
See `plot` documentation for details.
factors_dim, loadings_dim: int
Currently HyperSpy cannot plot signals of dimension higher than
two. Therefore, to visualize the BSS results when the
factors or the loadings have signal dimension greater than 2
we can view the data as spectra(images) by setting this parameter
to 1(2). (Default 2)
See Also
--------
plot_bss_factors, plot_bss_loadings, plot_decomposition_results.
"""
factors = self.get_bss_factors()
loadings = self.get_bss_loadings()
factors.axes_manager._axes[0] = loadings.axes_manager._axes[0]
if loadings.axes_manager.signal_dimension > 2:
loadings.axes_manager.set_signal_dimension(loadings_dim)
if factors.axes_manager.signal_dimension > 2:
factors.axes_manager.set_signal_dimension(factors_dim)
loadings.plot(navigator=loadings_navigator)
factors.plot(navigator=factors_navigator)
def plot_decomposition_results(self,
factors_navigator="auto",
loadings_navigator="auto",
factors_dim=2,
loadings_dim=2):
"""Plot the decompostion factors and loadings.
Unlike `plot_factors` and `plot_loadings`, this method displays
one component at a time. Therefore it provides a more compact
visualization than then other two methods. The loadings and factors
are displayed in different windows and each has its own
navigator/sliders to navigate them if they are multidimensional. The
component index axis is syncronize between the two.
Parameters
----------
factors_navigator, loadings_navigator : {"auto", None, "spectrum",
Signal}
See `plot` documentation for details.
factors_dim, loadings_dim : int
Currently HyperSpy cannot plot signals of dimension higher than
two. Therefore, to visualize the decomposition results when the
factors or the loadings have signal dimension greater than 2
we can view the data as spectra(images) by setting this parameter
to 1(2). (Default 2)
See Also
--------
plot_decomposition_factors, plot_decomposition_loadings, plot_bss_results.
"""
factors = self.get_decomposition_factors()
loadings = self.get_decomposition_loadings()
factors.axes_manager._axes[0] = loadings.axes_manager._axes[0]
if loadings.axes_manager.signal_dimension > 2:
loadings.axes_manager.set_signal_dimension(loadings_dim)
if factors.axes_manager.signal_dimension > 2:
factors.axes_manager.set_signal_dimension(factors_dim)
loadings.plot(navigator=loadings_navigator)
factors.plot(navigator=factors_navigator)
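# Hedged end-to-end sketch (assumes a HyperSpy build where `decomposition`
# with the default SVD algorithm is available; the random data is purely
# illustrative): decompose a spectrum image, then browse factors and
# loadings one component at a time, as described above.
import numpy as np
from hyperspy.hspy import signals

s = signals.Spectrum(np.random.random((8, 8, 50)))
s.decomposition()                  # default algorithm (SVD)
s.plot_decomposition_results()     # two synchronized windows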
class Signal(MVA,
MVATools,
Signal1DTools,
Signal2DTools,):
_record_by = ""
_signal_type = ""
_signal_origin = ""
def __init__(self, data, **kwds):
"""Create a Signal from a numpy array.
Parameters
----------
data : numpy array
The signal data. It can be an array of any dimensions.
axes : dictionary (optional)
Dictionary to define the axes (see the
documentation of the AxesManager class for more details).
attributes : dictionary (optional)
A dictionary whose items are stored as attributes.
metadata : dictionary (optional)
A dictionary containing a set of parameters
that will be stored in the `metadata` attribute.
Some parameters might be mandatory in some cases.
original_metadata : dictionary (optional)
A dictionary containing a set of parameters
that will be stored in the `original_metadata` attribute. It
typically contains all the parameters that have been
imported from the original data file.
"""
self._create_metadata()
self.learning_results = LearningResults()
kwds['data'] = data
self._load_dictionary(kwds)
self._plot = None
self.auto_replot = True
self.inav = SpecialSlicers(self, True)
self.isig = SpecialSlicers(self, False)
def _create_metadata(self):
self.metadata = DictionaryTreeBrowser()
mp = self.metadata
mp.add_node("_HyperSpy")
mp.add_node("General")
mp.add_node("Signal")
mp._HyperSpy.add_node("Folding")
folding = mp._HyperSpy.Folding
folding.unfolded = False
folding.signal_unfolded = False
folding.original_shape = None
folding.original_axes_manager = None
mp.Signal.binned = False
self.original_metadata = DictionaryTreeBrowser()
self.tmp_parameters = DictionaryTreeBrowser()
def __repr__(self):
if self.metadata._HyperSpy.Folding.unfolded:
unfolded = "unfolded "
else:
unfolded = ""
string = '<'
string += self.__class__.__name__
string += ", title: %s" % self.metadata.General.title
string += ", %sdimensions: %s" % (
unfolded,
self.axes_manager._get_dimension_str())
string += '>'
return string.encode('utf8')
def __getitem__(self, slices, isNavigation=None):
try:
len(slices)
except TypeError:
slices = (slices,)
_orig_slices = slices
has_nav = True if isNavigation is None else isNavigation
has_signal = True if isNavigation is None else not isNavigation
# Create a deepcopy of self that contains a view of self.data
_signal = self._deepcopy_with_new_data(self.data)
nav_idx = [el.index_in_array for el in
_signal.axes_manager.navigation_axes]
signal_idx = [el.index_in_array for el in
_signal.axes_manager.signal_axes]
if not has_signal:
idx = nav_idx
elif not has_nav:
idx = signal_idx
else:
idx = nav_idx + signal_idx
# Add support for Ellipsis
if Ellipsis in _orig_slices:
_orig_slices = list(_orig_slices)
# Expand the first Ellipsis
ellipsis_index = _orig_slices.index(Ellipsis)
_orig_slices.remove(Ellipsis)
_orig_slices = (
_orig_slices[:ellipsis_index] +
[slice(None), ] * max(0, len(idx) - len(_orig_slices)) +
_orig_slices[ellipsis_index:])
# Replace all the following Ellipses by :
while Ellipsis in _orig_slices:
_orig_slices[_orig_slices.index(Ellipsis)] = slice(None)
_orig_slices = tuple(_orig_slices)
if len(_orig_slices) > len(idx):
raise IndexError("too many indices")
slices = np.array([slice(None,)] *
len(_signal.axes_manager._axes))
slices[idx] = _orig_slices + (slice(None),) * max(
0, len(idx) - len(_orig_slices))
array_slices = []
for slice_, axis in zip(slices, _signal.axes_manager._axes):
if (isinstance(slice_, slice) or
len(_signal.axes_manager._axes) < 2):
array_slices.append(axis._slice_me(slice_))
else:
if isinstance(slice_, float):
slice_ = axis.value2index(slice_)
array_slices.append(slice_)
_signal._remove_axis(axis.index_in_axes_manager)
_signal.data = _signal.data[array_slices]
if self.metadata.has_item('Signal.Noise_properties.variance'):
variance = self.metadata.Signal.Noise_properties.variance
if isinstance(variance, Signal):
_signal.metadata.Signal.Noise_properties.variance = \
variance.__getitem__(_orig_slices, isNavigation)
_signal.get_dimensions_from_data()
return _signal
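# Standalone sketch (plain Python) of the Ellipsis expansion performed in
# __getitem__ above: one Ellipsis is replaced by as many full slices as are
# needed to pad the key to the number of sliceable axes. `expand_ellipsis`
# is a hypothetical helper written only for this illustration.
def expand_ellipsis(key, naxes):
    key = list(key)
    i = key.index(Ellipsis)
    key.remove(Ellipsis)
    key[i:i] = [slice(None)] * max(0, naxes - len(key))
    # any remaining Ellipses behave like ':'
    return tuple(slice(None) if k is Ellipsis else k for k in key)

assert expand_ellipsis((0, Ellipsis, 5), 4) == (0, slice(None), slice(None), 5)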
def __setitem__(self, i, j):
"""x.__setitem__(i, y) <==> x[i]=y
"""
if isinstance(j, Signal):
j = j.data
self.__getitem__(i).data[:] = j
def _binary_operator_ruler(self, other, op_name):
exception_message = (
"Invalid dimensions for this operation")
if isinstance(other, Signal):
if other.data.shape != self.data.shape:
# Are they aligned?
are_aligned = array_tools.are_aligned(self.data.shape,
other.data.shape)
if are_aligned is True:
sdata, odata = array_tools.homogenize_ndim(self.data,
other.data)
else:
# Let's align them if possible
sig_and_nav = [s for s in [self, other] if
s.axes_manager.signal_size > 1 and
s.axes_manager.navigation_size > 1]
sig = [s for s in [self, other] if
s.axes_manager.signal_size > 1 and
s.axes_manager.navigation_size == 0]
if sig_and_nav and sig:
self = sig_and_nav[0]
other = sig[0]
if (self.axes_manager.signal_shape ==
other.axes_manager.signal_shape):
sdata = self.data
other_new_shape = [
axis.size if axis.navigate is False
else 1
for axis in self.axes_manager._axes]
odata = other.data.reshape(
other_new_shape)
elif (self.axes_manager.navigation_shape ==
other.axes_manager.signal_shape):
sdata = self.data
other_new_shape = [
axis.size if axis.navigate is True
else 1
for axis in self.axes_manager._axes]
odata = other.data.reshape(
other_new_shape)
else:
raise ValueError(exception_message)
elif len(sig) == 2:
sdata = self.data.reshape(
(1,) * other.axes_manager.signal_dimension +
self.data.shape)
odata = other.data.reshape(
other.data.shape +
(1,) * self.axes_manager.signal_dimension)
else:
raise ValueError(exception_message)
# The data are now aligned but the shapes are not the
# same and therefore we have to calculate the resulting
# axes
ref_axes = self if (
len(self.axes_manager._axes) >
len(other.axes_manager._axes)) else other
new_axes = []
for i, (ssize, osize) in enumerate(
zip(sdata.shape, odata.shape)):
if ssize > osize:
if are_aligned or len(sig) != 2:
new_axes.append(
self.axes_manager._axes[i].copy())
else:
new_axes.append(self.axes_manager._axes[
i - other.axes_manager.signal_dimension
].copy())
elif ssize < osize:
new_axes.append(
other.axes_manager._axes[i].copy())
else:
new_axes.append(
ref_axes.axes_manager._axes[i].copy())
else:
sdata = self.data
odata = other.data
new_axes = [axis.copy()
for axis in self.axes_manager._axes]
exec("result = sdata.%s(odata)" % op_name)
new_signal = self._deepcopy_with_new_data(result)
new_signal.axes_manager._axes = new_axes
new_signal.axes_manager.set_signal_dimension(
self.axes_manager.signal_dimension)
return new_signal
else:
exec("result = self.data.%s(other)" % op_name)
return self._deepcopy_with_new_data(result)
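# Standalone NumPy sketch of the alignment idea in _binary_operator_ruler:
# a navigation-only quantity (one scalar per navigation position) is
# reshaped with singleton signal axes so numpy broadcasting applies it
# across every channel. The shapes below are illustrative.
import numpy as np

data = np.random.random((4, 100))             # 4 spectra of 100 channels
per_spectrum = np.arange(4.)                  # one value per navigation position
result = data * per_spectrum.reshape(4, 1)    # broadcast over the signal axis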
def _unary_operator_ruler(self, op_name):
exec("result = self.data.%s()" % op_name)
return self._deepcopy_with_new_data(result)
def _check_signal_dimension_equals_one(self):
if self.axes_manager.signal_dimension != 1:
raise SignalDimensionError(self.axes_manager.signal_dimension, 1)
def _check_signal_dimension_equals_two(self):
if self.axes_manager.signal_dimension != 2:
raise SignalDimensionError(self.axes_manager.signal_dimension, 2)
def _deepcopy_with_new_data(self, data=None):
"""Returns a deepcopy of itself replacing the data.
This method has the advantage over deepcopy that it does not
copy the data what can save precious memory
Parameters
---------
data : {None | np.array}
Returns
-------
ns : Signal
"""
try:
old_data = self.data
self.data = None
old_plot = self._plot
self._plot = None
ns = self.deepcopy()
ns.data = data
return ns
finally:
self.data = old_data
self._plot = old_plot
def _print_summary(self):
string = "\n\tTitle: "
string += self.metadata.General.title.decode('utf8')
if self.metadata.has_item("Signal.signal_type"):
string += "\n\tSignal type: "
string += self.metadata.Signal.signal_type
string += "\n\tData dimensions: "
string += str(self.axes_manager.shape)
if self.metadata.has_item('Signal.record_by'):
string += "\n\tData representation: "
string += self.metadata.Signal.record_by
string += "\n\tData type: "
string += str(self.data.dtype)
print string
def _load_dictionary(self, file_data_dict):
"""Load data from dictionary.
Parameters
----------
file_data_dict : dictionary
A dictionary containing at least a 'data' keyword with an array of
arbitrary dimensions. Additionally the dictionary can contain the
following items:
data : numpy array
The signal data. It can be an array of any dimensions.
axes : dictionary (optional)
Dictionary to define the axes (see the
documentation of the AxesManager class for more details).
attributes : dictionary (optional)
A dictionary whose items are stored as attributes.
metadata : dictionary (optional)
A dictionary containing a set of parameters
that will be stored in the `metadata` attribute.
Some parameters might be mandatory in some cases.
original_metadata : dictionary (optional)
A dictionary containing a set of parameters
that will be stored in the `original_metadata` attribute. It
typically contains all the parameters that have been
imported from the original data file.
"""
self.data = np.asanyarray(file_data_dict['data'])
if 'axes' not in file_data_dict:
file_data_dict['axes'] = self._get_undefined_axes_list()
self.axes_manager = AxesManager(
file_data_dict['axes'])
if 'metadata' not in file_data_dict:
file_data_dict['metadata'] = {}
if 'original_metadata' not in file_data_dict:
file_data_dict['original_metadata'] = {}
if 'attributes' in file_data_dict:
for key, value in file_data_dict['attributes'].iteritems():
if hasattr(self, key):
if isinstance(value, dict):
for k, v in value.iteritems():
setattr(getattr(self, key), k, v)
else:
self.__setattr__(key, value)
self.original_metadata.add_dictionary(
file_data_dict['original_metadata'])
self.metadata.add_dictionary(
file_data_dict['metadata'])
if "title" not in self.metadata.General:
self.metadata.General.title = ''
if (self._record_by or
"Signal.record_by" not in self.metadata):
self.metadata.Signal.record_by = self._record_by
if (self._signal_origin or
"Signal.signal_origin" not in self.metadata):
self.metadata.Signal.signal_origin = self._signal_origin
if (self._signal_type or
not self.metadata.has_item("Signal.signal_type")):
self.metadata.Signal.signal_type = self._signal_type
def squeeze(self):
"""Remove single-dimensional entries from the shape of an array
and the axes.
"""
# We deepcopy everything but data
self = self._deepcopy_with_new_data(self.data)
for axis in self.axes_manager._axes:
if axis.size == 1:
self._remove_axis(axis.index_in_axes_manager)
self.data = self.data.squeeze()
return self
def _to_dictionary(self, add_learning_results=True):
"""Returns a dictionary that can be used to recreate the signal.
All items but `data` are copies.
Parameters
----------
add_learning_results : bool
Returns
-------
dic : dictionary
"""
dic = {}
dic['data'] = self.data
dic['axes'] = self.axes_manager._get_axes_dicts()
dic['metadata'] = \
self.metadata.deepcopy().as_dictionary()
dic['original_metadata'] = \
self.original_metadata.deepcopy().as_dictionary()
dic['tmp_parameters'] = \
self.tmp_parameters.deepcopy().as_dictionary()
if add_learning_results and hasattr(self, 'learning_results'):
dic['learning_results'] = copy.deepcopy(
self.learning_results.__dict__)
return dic
def _get_undefined_axes_list(self):
axes = []
for i in xrange(len(self.data.shape)):
axes.append({'size': int(self.data.shape[i]), })
return axes
def __call__(self, axes_manager=None):
if axes_manager is None:
axes_manager = self.axes_manager
return np.atleast_1d(
self.data.__getitem__(axes_manager._getitem_tuple))
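# __call__ returns the data at the current navigation indices as a numpy
# array of at least one dimension. A usage sketch:
#
#     s.axes_manager.indices = (2, 1)   # move the navigation cursor
#     current = s()                     # array with shape == signal shape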
def plot(self, navigator="auto", axes_manager=None, **kwargs):
"""Plot the signal at the current coordinates.
For multidimensional datasets an optional figure,
the "navigator", with a cursor to navigate that data is
raised. In any case it is possible to navigate the data using
the sliders. Currently only signals with signal_dimension equal to
0, 1 and 2 can be plotted.
Parameters
----------
navigator : {"auto", None, "slider", "spectrum", Signal}
If "auto", if navigation_dimension > 0, a navigator is
provided to explore the data.
If navigation_dimension is 1 and the signal is an image
the navigator is a spectrum obtained by integrating
over the signal axes (the image).
If navigation_dimension is 1 and the signal is a spectrum
the navigator is an image obtained by stacking horizontally
all the spectra in the dataset.
If navigation_dimension is > 1, the navigator is an image
obtained by integrating the data over the signal axes.
Additionally, if navigation_dimension > 2 a window
with one slider per axis is raised to navigate the data.
For example,
if the dataset consists of 3 navigation axes X, Y, Z and one
signal axis, E, the default navigator will be an image
obtained by integrating the data over E at the current Z
index and a window with sliders for the X, Y and Z axes
will be raised. Notice that changing the Z-axis index
changes the navigator in this case.
If "slider" and the navigation dimension > 0 a window
with one slider per axis is raised to navigate the data.
If "spectrum" and navigation_dimension > 0 the navigator
is always a spectrum obtained by integrating the data
over all other axes.
If None, no navigator will be provided.
Alternatively a Signal instance can be provided. The signal
dimension must be 1 (for a spectrum navigator) or 2 (for an
image navigator). Its navigation dimension must be 0 (for a
static navigator), or navigation_shape + signal_shape must be
equal to the navigator_shape of the current object (for a
dynamic navigator).
If the signal dtype is RGB or RGBA this parameter has no
effect and the navigator is always "slider".
axes_manager : {None, axes_manager}
If None, the signal's own `axes_manager` is used.
**kwargs : optional
Any extra keyword arguments are passed to the signal plot.
"""
if self._plot is not None:
try:
self._plot.close()
except Exception:
# If it was already closed it will raise an exception,
# but we want to carry on...
pass
if axes_manager is None:
axes_manager = self.axes_manager
if self.is_rgbx is True:
if axes_manager.navigation_size < 2:
navigator = None
else:
navigator = "slider"
if axes_manager.signal_dimension == 0:
self._plot = mpl_he.MPL_HyperExplorer()
elif axes_manager.signal_dimension == 1:
# Hyperspectrum
self._plot = mpl_hse.MPL_HyperSpectrum_Explorer()
elif axes_manager.signal_dimension == 2:
self._plot = mpl_hie.MPL_HyperImage_Explorer()
else:
raise ValueError('Plotting is not supported for this view')
self._plot.axes_manager = axes_manager
self._plot.signal_data_function = self.__call__
if self.metadata.General.title:
self._plot.signal_title = self.metadata.General.title
elif self.tmp_parameters.has_item('filename'):
self._plot.signal_title = self.tmp_parameters.filename
def get_static_explorer_wrapper(*args, **kwargs):
return navigator()
def get_1D_sum_explorer_wrapper(*args, **kwargs):
navigator = self
# Sum over all but the first navigation axis.
while len(navigator.axes_manager.shape) > 1:
navigator = navigator.sum(-1)
return np.nan_to_num(navigator.data).squeeze()
def get_dynamic_explorer_wrapper(*args, **kwargs):
navigator.axes_manager.indices = self.axes_manager.indices[
navigator.axes_manager.signal_dimension:]
navigator.axes_manager._update_attributes()
return navigator()
if not isinstance(navigator, Signal) and navigator == "auto":
if (self.axes_manager.navigation_dimension == 1 and
self.axes_manager.signal_dimension == 1):
navigator = "data"
elif self.axes_manager.navigation_dimension > 0:
if self.axes_manager.signal_dimension == 0:
navigator = self.deepcopy()
else:
navigator = self
while navigator.axes_manager.signal_dimension > 0:
navigator = navigator.sum(-1)
if navigator.axes_manager.navigation_dimension == 1:
navigator = navigator.as_spectrum(0)
else:
navigator = navigator.as_image((0, 1))
else:
navigator = None
# Navigator properties
if axes_manager.navigation_axes:
if navigator == "slider":
self._plot.navigator_data_function = "slider"
elif navigator is None:
self._plot.navigator_data_function = None
elif isinstance(navigator, Signal):
# Dynamic navigator
if (axes_manager.navigation_shape ==
navigator.axes_manager.signal_shape +
navigator.axes_manager.navigation_shape):
self._plot.navigator_data_function = \
get_dynamic_explorer_wrapper
elif (axes_manager.navigation_shape ==
navigator.axes_manager.signal_shape or
axes_manager.navigation_shape[:2] ==
navigator.axes_manager.signal_shape or
(axes_manager.navigation_shape[0],) ==
navigator.axes_manager.signal_shape):
self._plot.navigator_data_function = \
get_static_explorer_wrapper
else:
raise ValueError(
"The navigator dimensions are not compatible with "
"those of self.")
elif navigator == "data":
self._plot.navigator_data_function = \
lambda axes_manager=None: self.data
elif navigator == "spectrum":
self._plot.navigator_data_function = \
get_1D_sum_explorer_wrapper
else:
raise ValueError(
"navigator must be one of \"spectrum\",\"auto\","
" \"slider\", None, a Signal instance")
self._plot.plot(**kwargs)
def save(self, filename=None, overwrite=None, extension=None,
**kwds):
"""Saves the signal in the specified format.
The function gets the format from the extension:
- hdf5 for HDF5
- rpl for Ripple (useful to export to Digital Micrograph)
- msa for EMSA/MSA single spectrum saving.
- Many image formats such as png, tiff, jpeg...
If no extension is provided the default file format as defined
in the `preferences` is used.
Please note that not all formats support saving datasets of
arbitrary dimensions, e.g. msa only supports 1D data.
Each format accepts a different set of parameters. For details
see the specific format documentation.
Parameters
----------
filename : str or None
If None (default) and `tmp_parameters.filename` and
`tmp_parameters.folder` are defined, the
filename and path will be taken from there. A valid
extension can be provided e.g. "my_file.rpl", see `extension`.
overwrite : None, bool
If None, the user will be queried if the file exists. If
True (False), the file is (not) overwritten if it exists.
extension : {None, 'hdf5', 'rpl', 'msa',common image extensions e.g.
'tiff', 'png'}
The extension of the file that defines the file format.
If None, the extension is taken from the first not None in the
following list:
i) the filename
ii) `tmp_parameters.extension`
iii) `preferences.General.default_file_format` in this order.
"""
if filename is None:
if (self.tmp_parameters.has_item('filename') and
self.tmp_parameters.has_item('folder')):
filename = os.path.join(
self.tmp_parameters.folder,
self.tmp_parameters.filename)
extension = (self.tmp_parameters.extension
if not extension
else extension)
elif self.metadata.has_item('General.original_filename'):
filename = self.metadata.General.original_filename
else:
raise ValueError('File name not defined')
if extension is not None:
basename, ext = os.path.splitext(filename)
filename = basename + '.' + extension
io.save(filename, self, overwrite=overwrite, **kwds)
def _replot(self):
if self._plot is not None:
if self._plot.is_active() is True:
self.plot()
@auto_replot
def get_dimensions_from_data(self):
"""Get the dimension parameters from the data_cube. Useful when
the data_cube was externally modified, or when the SI was not
loaded from a file
"""
dc = self.data
for axis in self.axes_manager._axes:
axis.size = int(dc.shape[axis.index_in_array])
def crop(self, axis, start=None, end=None):
"""Crops the data in a given axis. The range is given in pixels
Parameters
----------
axis : {int | string}
Specify the data axis in which to perform the cropping
operation. The axis can be specified using the index of the
axis in `axes_manager` or the axis name.
start, end : {int | float | None}
The beginning and end of the cropping interval. If int
the value is taken as the axis index. If float the index
is calculated using the axis calibration. If start/end is
None crop from/to the low/high end of the axis.
"""
axis = self.axes_manager[axis]
i1, i2 = axis._get_index(start), axis._get_index(end)
if i1 is not None:
new_offset = axis.axis[i1]
# We take a copy to guarantee the continuity of the data
self.data = self.data[
(slice(None),) * axis.index_in_array + (slice(i1, i2),
Ellipsis)]
if i1 is not None:
axis.offset = new_offset
self.get_dimensions_from_data()
self.squeeze()
def swap_axes(self, axis1, axis2):
"""Swaps the axes.
Parameters
----------
axis1, axis2 : {int | str}
Specify the data axes in which to perform the operation.
The axis can be specified using the index of the
axis in `axes_manager` or the axis name.
Returns
-------
s : a copy of the object with the axes swapped.
"""
axis1 = self.axes_manager[axis1].index_in_array
axis2 = self.axes_manager[axis2].index_in_array
s = self._deepcopy_with_new_data(self.data.swapaxes(axis1, axis2))
c1 = s.axes_manager._axes[axis1]
c2 = s.axes_manager._axes[axis2]
s.axes_manager._axes[axis1] = c2
s.axes_manager._axes[axis2] = c1
s.axes_manager._update_attributes()
s._make_sure_data_is_contiguous()
return s
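# Usage sketch (axis identifiers are resolved through axes_manager, so
# names work as well as indices):
#
#     s2 = s.swap_axes(0, 2)   # returns a copy; s itself is left untouched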
def rollaxis(self, axis, to_axis):
"""Roll the specified axis backwards, until it lies in a given position.
Parameters
----------
axis : {int, str}
The axis to roll backwards. The positions of the other axes do not
change relative to one another.
to_axis : {int, str}
The axis is rolled until it lies before this other axis.
Returns
-------
s : Signal or subclass
Output signal.
See Also
--------
swap_axes
Examples
--------
>>> s = signals.Spectrum(np.ones((5,4,3,6)))
>>> s
<Spectrum, title: , dimensions: (3, 4, 5, 6)>
>>> s.rollaxis(3, 1)
<Spectrum, title: , dimensions: (3, 4, 5, 6)>
>>> s.rollaxis(2,0)
<Spectrum, title: , dimensions: (5, 3, 4, 6)>
"""
axis = self.axes_manager[axis].index_in_array
to_index = self.axes_manager[to_axis].index_in_array
if axis == to_index:
return self.deepcopy()
new_axes_indices = hyperspy.misc.utils.rollelem(
[axis_.index_in_array for axis_ in self.axes_manager._axes],
index=axis,
to_index=to_index)
s = self._deepcopy_with_new_data(self.data.transpose(new_axes_indices))
s.axes_manager._axes = hyperspy.misc.utils.rollelem(
s.axes_manager._axes,
index=axis,
to_index=to_index)
s.axes_manager._update_attributes()
s._make_sure_data_is_contiguous()
return s
def rebin(self, new_shape):
"""Returns the object with the data rebinned.
Parameters
----------
new_shape: tuple of ints
The new shape elements must be divisors of the original shape
elements.
Returns
-------
s : Signal subclass
Raises
------
ValueError
When there is a mismatch between the number of elements in the
signal shape and `new_shape` or `new_shape` elements are not
divisors of the original signal shape.
Examples
--------
>>> import hyperspy.hspy as hs
>>> s = hs.signals.Spectrum(np.zeros((10, 100)))
>>> s
<Spectrum, title: , dimensions: (10|100)>
>>> s.rebin((5, 100))
<Spectrum, title: , dimensions: (5|100)>
"""
if len(new_shape) != len(self.data.shape):
raise ValueError("Wrong shape size")
new_shape_in_array = []
for axis in self.axes_manager._axes:
new_shape_in_array.append(
new_shape[axis.index_in_axes_manager])
factors = (np.array(self.data.shape) /
np.array(new_shape_in_array))
s = self._deepcopy_with_new_data(
array_tools.rebin(self.data, new_shape_in_array))
for axis in s.axes_manager._axes:
axis.scale *= factors[axis.index_in_array]
s.get_dimensions_from_data()
if s.metadata.has_item('Signal.Noise_properties.variance'):
if isinstance(s.metadata.Signal.Noise_properties.variance, Signal):
var = s.metadata.Signal.Noise_properties.variance
s.metadata.Signal.Noise_properties.variance = var.rebin(
new_shape)
return s
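# For reference, a common pure-numpy way to rebin by integer factors, which
# is a sketch of what a helper like array_tools.rebin may do internally
# (the names below are illustrative, not the actual implementation):
#
#     def _rebin_sketch(a, new_shape):
#         # pair every new dimension with its reduction factor, then sum
#         # over the interleaved factor axes
#         pairs = [(d, a.shape[i] // d) for i, d in enumerate(new_shape)]
#         flat = [x for pair in pairs for x in pair]
#         return a.reshape(flat).sum(axis=tuple(range(1, 2 * a.ndim, 2)))
#
# e.g. a (6, 4) array rebinned to (3, 2) is reshaped to (3, 2, 2, 2) and
# summed over axes 1 and 3.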
def split(self,
axis='auto',
number_of_parts='auto',
step_sizes='auto'):
"""Splits the data into several signals.
The split can be defined by giving the number_of_parts, a homogeneous
step size or a list of customized step sizes. By default ('auto'),
the function is the reverse of utils.stack().
Parameters
----------
axis : {'auto' | int | string}
Specify the data axis in which to perform the splitting
operation. The axis can be specified using the index of the
axis in `axes_manager` or the axis name.
- If 'auto' and the object has been created with utils.stack,
split will return the former list of signals
(options stored in 'metadata._HyperSpy.Stacking_history');
otherwise the last navigation axis will be used.
number_of_parts : {'auto' | int}
Number of parts in which the SI will be split. The
splitting is homogeneous. When the axis size is not divisible
by the number_of_parts the remainder data is lost without
warning. If number_of_parts and step_sizes are both 'auto',
number_of_parts equals the length of the axis,
step_sizes equals one and the axis is suppressed from each
sub-spectrum.
step_sizes : {'auto' | list of ints | int}
Size of the split parts. If 'auto', each step size equals one.
If int, the splitting is homogeneous.
Examples
--------
>>> s = signals.Spectrum(np.random.random([4,3,2]))
>>> s
<Spectrum, title: , dimensions: (3, 4|2)>
>>> s.split()
[<Spectrum, title: , dimensions: (3 |2)>,
<Spectrum, title: , dimensions: (3 |2)>,
<Spectrum, title: , dimensions: (3 |2)>,
<Spectrum, title: , dimensions: (3 |2)>]
>>> s.split(step_sizes=2)
[<Spectrum, title: , dimensions: (3, 2|2)>,
<Spectrum, title: , dimensions: (3, 2|2)>]
>>> s.split(step_sizes=[1,2])
[<Spectrum, title: , dimensions: (3, 1|2)>,
<Spectrum, title: , dimensions: (3, 2|2)>]
Returns
-------
list of the split signals
"""
shape = self.data.shape
signal_dict = self._to_dictionary(add_learning_results=False)
if axis == 'auto':
mode = 'auto'
if hasattr(self.metadata._HyperSpy, 'Stacking_history'):
stack_history = self.metadata._HyperSpy.Stacking_history
axis_in_manager = stack_history.axis
step_sizes = stack_history.step_sizes
else:
axis_in_manager = \
self.axes_manager[-1 + 1j].index_in_axes_manager
else:
mode = 'manual'
axis_in_manager = self.axes_manager[axis].index_in_axes_manager
axis = self.axes_manager[axis_in_manager].index_in_array
len_axis = self.axes_manager[axis_in_manager].size
if number_of_parts == 'auto' and step_sizes == 'auto':
step_sizes = 1
number_of_parts = len_axis
elif number_of_parts != 'auto' and step_sizes != 'auto':
raise ValueError(
"You can define step_sizes or number_of_parts "
"but not both.")
elif step_sizes == 'auto':
if number_of_parts > shape[axis]:
raise ValueError(
"The number of parts is greater than "
"the axis size.")
else:
step_sizes = ([shape[axis] // number_of_parts, ] *
number_of_parts)
if isinstance(step_sizes, int):
step_sizes = [step_sizes] * int(len_axis / step_sizes)
splitted = []
cut_index = np.array([0] + step_sizes).cumsum()
axes_dict = signal_dict['axes']
for i in xrange(len(cut_index) - 1):
axes_dict[axis]['offset'] = \
self.axes_manager._axes[axis].index2value(cut_index[i])
axes_dict[axis]['size'] = cut_index[i + 1] - cut_index[i]
data = self.data[
(slice(None), ) * axis +
(slice(cut_index[i], cut_index[i + 1]), Ellipsis)]
signal_dict['data'] = data
splitted.append(self.__class__(**signal_dict))
if number_of_parts == len_axis \
or step_sizes == [1] * len_axis:
for i, spectrum in enumerate(splitted):
spectrum.data = spectrum.data[
spectrum.axes_manager._get_data_slice([(axis, 0)])]
spectrum._remove_axis(axis_in_manager)
if mode == 'auto' and hasattr(
self.original_metadata, 'stack_elements'):
for i, spectrum in enumerate(splitted):
se = self.original_metadata.stack_elements['element' + str(i)]
spectrum.metadata = copy.deepcopy(
se['metadata'])
spectrum.original_metadata = copy.deepcopy(
se['original_metadata'])
spectrum.metadata.General.title = se.metadata.General.title
return splitted
# TODO: remove in HyperSpy 0.9
def unfold_if_multidim(self):
"""Unfold the datacube if it is >2D
Deprecated method, please use unfold.
"""
warnings.warn(
"`unfold_if_multidim` is deprecated and will be removed in "
"HyperSpy 0.9. Please use `unfold` instead.")
return None
@auto_replot
def _unfold(self, steady_axes, unfolded_axis):
"""Modify the shape of the data by specifying the axes whose
dimension do not change and the axis over which the remaining axes will
be unfolded
Parameters
----------
steady_axes : list
The indices of the axes which dimensions do not change
unfolded_axis : int
The index of the axis over which all the rest of the axes (except
the steady axes) will be unfolded
See also
--------
fold
"""
# It doesn't make sense unfolding when dim < 2
if self.data.squeeze().ndim < 2:
return
# We need to store the original shape and coordinates to be used by
# the fold function, but only if they have not already been stored by
# a previous unfold
folding = self.metadata._HyperSpy.Folding
if folding.unfolded is False:
folding.original_shape = self.data.shape
folding.original_axes_manager = self.axes_manager
folding.unfolded = True
new_shape = [1] * len(self.data.shape)
for index in steady_axes:
new_shape[index] = self.data.shape[index]
new_shape[unfolded_axis] = -1
self.data = self.data.reshape(new_shape)
self.axes_manager = self.axes_manager.deepcopy()
uname = ''
uunits = ''
to_remove = []
for axis, dim in zip(self.axes_manager._axes, new_shape):
if dim == 1:
uname += ',' + unicode(axis)
uunits += ',' + unicode(axis.units)
to_remove.append(axis)
ua = self.axes_manager._axes[unfolded_axis]
ua.name = unicode(ua) + uname
ua.units = unicode(ua.units) + uunits
ua.size = self.data.shape[unfolded_axis]
for axis in to_remove:
self.axes_manager.remove(axis.index_in_axes_manager)
self.data = self.data.squeeze()
if self.metadata.has_item('Signal.Noise_properties.variance'):
variance = self.metadata.Signal.Noise_properties.variance
if isinstance(variance, Signal):
variance._unfold(steady_axes, unfolded_axis)
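# A worked example of the reshape performed above (a sketch): for data of
# shape (10, 20, 1024) with steady_axes=[2] and unfolded_axis=0, new_shape
# becomes [-1, 1, 1024], so the data is reshaped to (200, 1, 1024) and then
# squeezed to (200, 1024); the two navigation axes collapse into one.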
def unfold(self):
"""Modifies the shape of the data by unfolding the signal and
navigation dimensions separately
Returns
-------
needed_unfolding : bool
"""
nav_needed_unfolding = self.unfold_navigation_space()
sig_needed_unfolding = self.unfold_signal_space()
needed_unfolding = nav_needed_unfolding or sig_needed_unfolding
return needed_unfolding
def unfold_navigation_space(self):
"""Modify the shape of the data to obtain a navigation space of
dimension 1
Returns
-------
needed_unfolding : bool
"""
if self.axes_manager.navigation_dimension < 2:
needed_unfolding = False
else:
needed_unfolding = True
steady_axes = [
axis.index_in_array for axis in
self.axes_manager.signal_axes]
unfolded_axis = (
self.axes_manager.navigation_axes[0].index_in_array)
self._unfold(steady_axes, unfolded_axis)
return needed_unfolding
def unfold_signal_space(self):
"""Modify the shape of the data to obtain a signal space of
dimension 1
Returns
-------
needed_unfolding : bool
"""
if self.axes_manager.signal_dimension < 2:
needed_unfolding = False
else:
needed_unfolding = True
steady_axes = [
axis.index_in_array for axis in
self.axes_manager.navigation_axes]
unfolded_axis = self.axes_manager.signal_axes[0].index_in_array
self._unfold(steady_axes, unfolded_axis)
self.metadata._HyperSpy.Folding.signal_unfolded = True
return needed_unfolding
@auto_replot
def fold(self):
"""If the signal was previously unfolded, folds it back"""
folding = self.metadata._HyperSpy.Folding
# Note that `is True` cannot be used here because if the value
# was loaded from a file its type can be np.bool_, which is a
# different object from the builtin True.
if folding.unfolded:
self.data = self.data.reshape(folding.original_shape)
self.axes_manager = folding.original_axes_manager
folding.original_shape = None
folding.original_axes_manager = None
folding.unfolded = False
folding.signal_unfolded = False
if self.metadata.has_item('Signal.Noise_properties.variance'):
variance = self.metadata.Signal.Noise_properties.variance
if isinstance(variance, Signal):
variance.fold()
def _make_sure_data_is_contiguous(self):
if self.data.flags['C_CONTIGUOUS'] is False:
self.data = np.ascontiguousarray(self.data)
def _iterate_signal(self):
"""Iterates over the signal data.
It is faster than using the signal iterator.
"""
if self.axes_manager.navigation_size < 2:
yield self()
return
self._make_sure_data_is_contiguous()
axes = [axis.index_in_array for
axis in self.axes_manager.signal_axes]
unfolded_axis = (
self.axes_manager.navigation_axes[0].index_in_array)
new_shape = [1] * len(self.data.shape)
for axis in axes:
new_shape[axis] = self.data.shape[axis]
new_shape[unfolded_axis] = -1
# Warning! if the data is not contiguous, reshape will make a copy!
data = self.data.reshape(new_shape)
for i in xrange(data.shape[unfolded_axis]):
getitem = [0] * len(data.shape)
for axis in axes:
getitem[axis] = slice(None)
getitem[unfolded_axis] = i
yield data[tuple(getitem)]
def _remove_axis(self, axis):
axis = self.axes_manager[axis]
self.axes_manager.remove(axis.index_in_axes_manager)
if axis.navigate is False: # The removed axis is a signal axis
if self.axes_manager.signal_dimension == 2:
self._record_by = "image"
elif self.axes_manager.signal_dimension == 1:
self._record_by = "spectrum"
elif self.axes_manager.signal_dimension == 0:
self._record_by = ""
else:
return
self.metadata.Signal.record_by = self._record_by
self._assign_subclass()
def _apply_function_on_data_and_remove_axis(self, function, axis):
s = self._deepcopy_with_new_data(
function(self.data,
axis=self.axes_manager[axis].index_in_array))
s._remove_axis(axis)
return s
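# All the reducers that follow (sum, max, min, mean, std, var, indexmax)
# funnel through this helper: apply a numpy reduction along the resolved
# array axis, then drop that axis from the AxesManager. For example, sum()
# below is literally _apply_function_on_data_and_remove_axis(np.sum, axis).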
def sum(self, axis):
"""Sum the data over the given axis.
Parameters
----------
axis : {int, string}
The axis can be specified using the index of the axis in
`axes_manager` or the axis name.
Returns
-------
s : Signal
See also
--------
sum_in_mask, mean
Examples
--------
>>> import numpy as np
>>> s = Signal(np.random.random((64,64,1024)))
>>> s.data.shape
(64,64,1024)
>>> s.sum(-1).data.shape
(64,64)
If we just want to plot the result of the operation:
>>> s.sum(-1).plot()
"""
return self._apply_function_on_data_and_remove_axis(np.sum, axis)
def max(self, axis, return_signal=False):
"""Returns a signal with the maximum of the signal along an axis.
Parameters
----------
axis : {int | string}
The axis can be specified using the index of the axis in
`axes_manager` or the axis name.
Returns
-------
s : Signal
See also
--------
sum, mean, min
Examples
--------
>>> import numpy as np
>>> s = Signal(np.random.random((64,64,1024)))
>>> s.data.shape
(64,64,1024)
>>> s.max(-1).data.shape
(64,64)
"""
return self._apply_function_on_data_and_remove_axis(np.max, axis)
def min(self, axis):
"""Returns a signal with the minimum of the signal along an axis.
Parameters
----------
axis : {int | string}
The axis can be specified using the index of the axis in
`axes_manager` or the axis name.
Returns
-------
s : Signal
See also
--------
sum, mean, max, std, var
Examples
--------
>>> import numpy as np
>>> s = Signal(np.random.random((64,64,1024)))
>>> s.data.shape
(64,64,1024)
>>> s.min(-1).data.shape
(64,64)
"""
return self._apply_function_on_data_and_remove_axis(np.min, axis)
def mean(self, axis):
"""Returns a signal with the average of the signal along an axis.
Parameters
----------
axis : {int | string}
The axis can be specified using the index of the axis in
`axes_manager` or the axis name.
Returns
-------
s : Signal
See also
--------
sum, std, var
Examples
--------
>>> import numpy as np
>>> s = Signal(np.random.random((64,64,1024)))
>>> s.data.shape
(64,64,1024)
>>> s.mean(-1).data.shape
(64,64)
"""
return self._apply_function_on_data_and_remove_axis(np.mean,
axis)
def std(self, axis):
"""Returns a signal with the standard deviation of the signal along
an axis.
Parameters
----------
axis : {int | string}
The axis can be specified using the index of the axis in
`axes_manager` or the axis name.
Returns
-------
s : Signal
See also
--------
sum_in_mask, mean
Examples
--------
>>> import numpy as np
>>> s = Signal(np.random.random((64,64,1024)))
>>> s.data.shape
(64,64,1024)
>>> s.std(-1).data.shape
(64,64)
"""
return self._apply_function_on_data_and_remove_axis(np.std, axis)
def var(self, axis):
"""Returns a signal with the variances of the signal along an axis.
Parameters
----------
axis : {int | string}
The axis can be specified using the index of the axis in
`axes_manager` or the axis name.
Returns
-------
s : Signal
See also
--------
sum_in_mask, mean
Examples
--------
>>> import numpy as np
>>> s = Signal(np.random.random((64,64,1024)))
>>> s.data.shape
(64,64,1024)
>>> s.var(-1).data.shape
(64,64)
"""
return self._apply_function_on_data_and_remove_axis(np.var, axis)
def diff(self, axis, order=1):
"""Returns a signal with the n-th order discrete difference along
given axis.
Parameters
----------
axis : {int | string}
The axis can be specified using the index of the axis in
`axes_manager` or the axis name.
order : int, the order of the discrete difference.
See also
--------
mean, sum
Examples
--------
>>> import numpy as np
>>> s = Signal(np.random.random((64,64,1024)))
>>> s.data.shape
(64,64,1024)
>>> s.diff(-1).data.shape
(64,64,1023)
"""
s = self._deepcopy_with_new_data(
np.diff(self.data,
n=order,
axis=self.axes_manager[axis].index_in_array))
axis = s.axes_manager[axis]
axis.offset += (order * axis.scale / 2)
s.get_dimensions_from_data()
return s
def derivative(self, axis, order=1):
"""Numerical derivative along the given axis.
Currently only the first order finite difference method is implemented.
Parameters
----------
axis : {int | string}
The axis can be specified using the index of the axis in
`axes_manager` or the axis name.
order: int
The order of the derivative. (Note that this is the order of the
derivative i.e. `order=2` does not use second order finite
differences method.)
Returns
-------
der : Signal
Note that the size of the data on the given `axis` decreases by the
given `order`, i.e. if `axis` is "x" with dimension N and `order` is
2, der's x dimension is N - 2.
See also
--------
diff
"""
der = self.diff(order=order, axis=axis)
axis = self.axes_manager[axis]
der.data /= axis.scale ** order
return der
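# Numerically, derivative() is just diff() scaled by the axis sampling
# step: der.data = np.diff(data, n=order, axis=...) / scale**order. E.g.
# for an axis sampled with scale 0.5, s.derivative(-1) divides the first
# differences by 0.5, approximating d/dx rather than a plain difference.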
def integrate_simpson(self, axis):
"""Returns a signal with the result of calculating the integral
of the signal along an axis using Simpson's rule.
Parameters
----------
axis : {int | string}
The axis can be specified using the index of the axis in
`axes_manager` or the axis name.
Returns
-------
s : Signal
See also
--------
sum_in_mask, mean
Examples
--------
>>> import numpy as np
>>> s = Signal(np.random.random((64,64,1024)))
>>> s.data.shape
(64,64,1024)
>>> s.integrate_simpson(-1).data.shape
(64,64)
"""
axis = self.axes_manager[axis]
s = self._deepcopy_with_new_data(
sp.integrate.simps(y=self.data,
x=axis.axis,
axis=axis.index_in_array))
s._remove_axis(axis.index_in_axes_manager)
return s
def integrate1D(self, axis):
"""Integrate the signal over the given axis.
The integration is performed using Simpson's rule if
`metadata.Signal.binned` is False and summation over the given axis if
True.
Parameters
----------
axis : {int | string}
The axis can be specified using the index of the axis in
`axes_manager` or the axis name.
Returns
-------
s : Signal
See also
--------
sum_in_mask, mean
Examples
--------
>>> import numpy as np
>>> s = Signal(np.random.random((64,64,1024)))
>>> s.data.shape
(64,64,1024)
>>> s.integrate1D(-1).data.shape
(64,64)
"""
if self.metadata.Signal.binned is False:
return self.integrate_simpson(axis)
else:
return self.sum(axis)
def indexmax(self, axis):
"""Returns a signal with the index of the maximum along an axis.
Parameters
----------
axis : {int | string}
The axis can be specified using the index of the axis in
`axes_manager` or the axis name.
Returns
-------
s : Signal
The data dtype is always int.
See also
--------
sum, mean, min
Examples
--------
>>> import numpy as np
>>> s = Signal(np.random.random((64,64,1024)))
>>> s.data.shape
(64,64,1024)
>>> s.indexmax(-1).data.shape
(64,64)
"""
return self._apply_function_on_data_and_remove_axis(np.argmax, axis)
def valuemax(self, axis):
"""Returns a signal with the value of the maximum along an axis.
Parameters
----------
axis : {int | string}
The axis can be specified using the index of the axis in
`axes_manager` or the axis name.
Returns
-------
s : Signal
The data contains the axis values at the positions of the maxima.
See also
--------
sum, mean, min
Examples
--------
>>> import numpy as np
>>> s = Signal(np.random.random((64,64,1024)))
>>> s.data.shape
(64,64,1024)
>>> s.valuemax(-1).data.shape
(64,64)
"""
s = self.indexmax(axis)
s.data = self.axes_manager[axis].index2value(s.data)
return s
def get_histogram(self, bins='freedman', range_bins=None, **kwargs):
"""Return a histogram of the signal data.
More sophisticated algorithms for determining bins can be used.
Aside from the `bins` argument accepting a string that specifies how
the bins are computed, the parameters are the same as numpy.histogram().
Parameters
----------
bins : int or list or str, optional
If bins is a string, then it must be one of:
'knuth' : use Knuth's rule to determine bins
'scotts' : use Scott's rule to determine bins
'freedman' : use the Freedman-Diaconis rule to determine bins
'blocks' : use Bayesian blocks for dynamic bin widths
range_bins : tuple or None, optional
the minimum and maximum range for the histogram. If not specified,
it will be (x.min(), x.max())
**kwargs
other keyword arguments (weights and density) are described in
np.histogram().
Returns
-------
hist_spec : A 1D spectrum instance containing the histogram.
See Also
--------
print_summary_statistics
astroML.density_estimation.histogram, numpy.histogram : these are the
functions that hyperspy uses to compute the histogram.
Notes
-----
The bin-number estimators are taken from AstroML. Read
their documentation for more info.
Examples
--------
>>> s = signals.Spectrum(np.random.normal(size=(10, 100)))
Plot the data histogram
>>> s.get_histogram().plot()
Plot the histogram of the signal at the current coordinates
>>> s.get_current_signal().get_histogram().plot()
"""
from hyperspy import signals
hist, bin_edges = histogram(self.data.flatten(),
bins=bins,
range=range_bins,
**kwargs)
hist_spec = signals.Spectrum(hist)
if bins == 'blocks':
hist_spec.axes_manager.signal_axes[0].axis = bin_edges[:-1]
warnings.warn(
"The option `bins='blocks'` is not fully supported in this "
"version of hyperspy. It should be used for plotting purposes "
"only.")
else:
hist_spec.axes_manager[0].scale = bin_edges[1] - bin_edges[0]
hist_spec.axes_manager[0].offset = bin_edges[0]
hist_spec.axes_manager[0].name = 'value'
hist_spec.metadata.General.title = (self.metadata.General.title +
" histogram")
hist_spec.metadata.Signal.binned = True
return hist_spec
def map(self, function,
show_progressbar=None, **kwargs):
"""Apply a function to the signal data at all the coordinates.
The function must operate on numpy arrays and the output *must have the
same dimensions as the input*. The function is applied to the data at
each coordinate and the result is stored in the current signal i.e.
this method operates *in-place*. Any extra keyword argument is passed
to the function. The keywords can take different values at different
coordinates. If the function takes an `axis` or `axes` argument, the
function is assumed to be vectorial and the signal axes are assigned to
`axis` or `axes`. Otherwise, the signal is iterated over the
navigation axes and a progress bar is displayed to monitor the
progress.
Parameters
----------
function : function
A function that can be applied to the signal.
show_progressbar : None or bool
If True, display a progress bar. If None the default is set in
`preferences`.
keyword arguments : any valid keyword argument
All extra keyword arguments are passed to the function.
Notes
-----
This method is similar to Python's :func:`map`, which can also be
used with a :class:`Signal` instance for similar purposes. However,
this method has the advantage of being faster because it iterates over
the numpy array instead of the :class:`Signal`.
Examples
--------
Apply a gaussian filter to all the images in the dataset. The sigma
parameter is constant.
>>> import scipy.ndimage
>>> im = signals.Image(np.random.random((10, 64, 64)))
>>> im.map(scipy.ndimage.gaussian_filter, sigma=2.5)
Apply a gaussian filter to all the images in the dataset. The sigma
parameter is variable.
>>> im = signals.Image(np.random.random((10, 64, 64)))
>>> sigmas = signals.Signal(np.linspace(2,5,10))
>>> sigmas.axes_manager.set_signal_dimension(0)
>>> im.map(scipy.ndimage.gaussian_filter, sigma=sigmas)
"""
if show_progressbar is None:
show_progressbar = preferences.General.show_progressbar
# Separate out the keyword arguments whose values are signals (ndkwargs)
ndkwargs = ()
for key, value in kwargs.iteritems():
if isinstance(value, Signal):
ndkwargs += ((key, value),)
# Check if the signal axes have inhomogeneous scales and/or units and
# display a warning if so.
scale = set()
units = set()
for axis in self.axes_manager.signal_axes:
scale.add(axis.scale)
units.add(axis.units)
if len(units) != 1 or len(scale) != 1:
warnings.warn(
"The function you applied does not take into "
"account the difference of units and scales between "
"axes.")
# If the function has an axis argument and the signal dimension is 1,
# we suppose that it can operate on the full array and we don't
# iterate over the coordinates.
try:
fargs = inspect.getargspec(function).args
except TypeError:
# This is probably a Cython function that is not supported by
# inspect.
fargs = []
if not ndkwargs and (self.axes_manager.signal_dimension == 1 and
"axis" in fargs):
kwargs['axis'] = \
self.axes_manager.signal_axes[-1].index_in_array
self.data = function(self.data, **kwargs)
# If the function has an axes argument
# we suppose that it can operate on the full array and we don't
# iterate over the coordinates.
elif not ndkwargs and "axes" in fargs:
kwargs['axes'] = tuple([axis.index_in_array for axis in
self.axes_manager.signal_axes])
self.data = function(self.data, **kwargs)
else:
# Iteration over coordinates.
pbar = progressbar(
maxval=self.axes_manager.navigation_size,
disabled=not show_progressbar)
iterators = [signal[1]._iterate_signal() for signal in ndkwargs]
iterators = tuple([self._iterate_signal()] + iterators)
for data in zip(*iterators):
for (key, value), datum in zip(ndkwargs, data[1:]):
kwargs[key] = datum[0]
data[0][:] = function(data[0], **kwargs)
pbar.next()
pbar.finish()
def copy(self):
try:
backup_plot = self._plot
self._plot = None
return copy.copy(self)
finally:
self._plot = backup_plot
def __deepcopy__(self, memo):
dc = type(self)(**self._to_dictionary())
if dc.data is not None:
dc.data = dc.data.copy()
# The Signal subclasses might change the view on init
# The following code just copies the original view
for oaxis, caxis in zip(self.axes_manager._axes,
dc.axes_manager._axes):
caxis.navigate = oaxis.navigate
return dc
def deepcopy(self):
return copy.deepcopy(self)
def change_dtype(self, dtype):
"""Change the data type.
Parameters
----------
dtype : str or dtype
Typecode or data-type to which the array is cast. In
addition to all standard numpy dtypes HyperSpy
supports four extra dtypes for RGB images:
"rgb8", "rgba8", "rgb16" and "rgba16". Changing from
and to any rgbx dtype is more constrained than most
other dtype conversions. To change to a rgbx dtype
the signal `record_by` must be "spectrum",
`signal_size` must be 3 (4) for rgb (rgba) dtypes
and the dtype must be uint8(uint16) for rgbx8(rgbx16).
After conversion `record_by` becomes `image` and the
spectra dimension is removed. The dtype of images of
dtype rgbx8(rgbx16) can only be changed to uint8(uint16)
and the `record_by` becomes "spectrum".
Examples
--------
>>> import numpy as np
>>> from hyperspy.signals import Spectrum
>>> s = signals.Spectrum(np.array([1,2,3,4,5]))
>>> s.data
array([1, 2, 3, 4, 5])
>>> s.change_dtype('float')
>>> s.data
array([ 1., 2., 3., 4., 5.])
"""
if not isinstance(dtype, np.dtype):
if dtype in rgb_tools.rgb_dtypes:
if self.metadata.Signal.record_by != "spectrum":
raise AttributeError(
"Only spectrum signals can be converted "
"to RGB images.")
if "rgba" in dtype:
if self.axes_manager.signal_size != 4:
raise AttributeError(
"Only spectra with signal_size equal to 4 can "
"be converted to RGBA images")
else:
if self.axes_manager.signal_size != 3:
raise AttributeError(
"Only spectra with signal_size equal to 3 can "
"be converted to RGB images")
if "8" in dtype and self.data.dtype.name != "uint8":
raise AttributeError(
"Only signals with dtype uint8 can be converted to "
"rgb8 images")
elif "16" in dtype and self.data.dtype.name != "uint16":
raise AttributeError(
"Only signals with dtype uint16 can be converted to "
"rgb16 images")
dtype = rgb_tools.rgb_dtypes[dtype]
self.data = rgb_tools.regular_array2rgbx(self.data)
self.axes_manager.remove(-1)
self.metadata.Signal.record_by = "image"
self._assign_subclass()
return
else:
dtype = np.dtype(dtype)
if rgb_tools.is_rgbx(self.data) is True:
ddtype = self.data.dtype.fields["B"][0]
if ddtype != dtype:
raise ValueError(
"It is only possible to change to %s." %
ddtype)
self.data = rgb_tools.rgbx2regular_array(self.data)
self.get_dimensions_from_data()
self.metadata.Signal.record_by = "spectrum"
self.axes_manager[-1 + 2j].name = "RGB index"
self._assign_subclass()
return
else:
self.data = self.data.astype(dtype)
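# RGB round-trip sketch (the values are illustrative): a uint8 Spectrum
# whose signal axis has size 3 can be packed into an rgb8 image and back:
#
#     s = signals.Spectrum(np.zeros((4, 4, 3), dtype="uint8"))
#     s.change_dtype("rgb8")    # record_by becomes "image"
#     s.change_dtype("uint8")   # back to a spectrum with an RGB index axis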
def estimate_poissonian_noise_variance(self,
expected_value=None,
gain_factor=None,
gain_offset=None,
correlation_factor=None):
"""Estimate the poissonian noise variance of the signal.
The variance is stored in the
``metadata.Signal.Noise_properties.variance`` attribute.
A poissonian noise variance is equal to the expected value. With the
default arguments, this method simply sets the variance attribute to
the given `expected_value`. However, more generally (although then
noise is not strictly poissonian), the variance may be proportional to
the expected value. Moreover, when the noise is a mixture of white
(gaussian) and poissonian noise, the variance is described by the
following linear model:
.. math::
\mathrm{Var}[X] = (a * \mathrm{E}[X] + b) * c
Where `a` is the `gain_factor`, `b` is the `gain_offset` (the gaussian
noise variance) and `c` the `correlation_factor`. The correlation
factor accounts for correlation of adjacent signal elements that can
be modeled as a convolution with a gaussian point spread function.
Parameters
----------
expected_value : None or Signal instance.
If None, the signal data is taken as the expected value. Note that
this may be inaccurate where `data` is small.
gain_factor, gain_offset, correlation_factor: None or float.
All three must be positive. If None, take the values from
``metadata.Signal.Noise_properties.Variance_linear_model`` if
defined. Otherwise suppose poissonian noise i.e. ``gain_factor=1``,
``gain_offset=0``, ``correlation_factor=1``. If not None, the
values are stored in
``metadata.Signal.Noise_properties.Variance_linear_model``.
"""
if expected_value is None:
dc = self.data.copy()
else:
dc = expected_value.data.copy()
if self.metadata.has_item(
"Signal.Noise_properties.Variance_linear_model"):
vlm = self.metadata.Signal.Noise_properties.Variance_linear_model
else:
self.metadata.add_node(
"Signal.Noise_properties.Variance_linear_model")
vlm = self.metadata.Signal.Noise_properties.Variance_linear_model
if gain_factor is None:
if not vlm.has_item("gain_factor"):
vlm.gain_factor = 1
gain_factor = vlm.gain_factor
if gain_offset is None:
if not vlm.has_item("gain_offset"):
vlm.gain_offset = 0
gain_offset = vlm.gain_offset
if correlation_factor is None:
if not vlm.has_item("correlation_factor"):
vlm.correlation_factor = 1
correlation_factor = vlm.correlation_factor
if gain_offset < 0:
raise ValueError("`gain_offset` must be positive.")
if gain_factor < 0:
raise ValueError("`gain_factor` must be positive.")
if correlation_factor < 0:
raise ValueError("`correlation_factor` must be positive.")
variance = (dc * gain_factor + gain_offset) * correlation_factor
# The lower bound of the variance is the gaussian noise.
variance = np.clip(variance, gain_offset * correlation_factor, np.inf)
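# Worked example of the model above (numbers illustrative): with
# gain_factor a=0.5, gain_offset b=2.0 and correlation_factor c=1.0, a
# pixel with expected value 100 gets variance (0.5 * 100 + 2.0) * 1.0 =
# 52.0, and the clip above keeps every variance at or above the gaussian
# noise floor b * c = 2.0.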
variance = type(self)(variance)
variance.axes_manager = self.axes_manager
variance.metadata.General.title = ("Variance of " +
self.metadata.General.title)
self.metadata.set_item(
"Signal.Noise_properties.variance", variance)
def get_current_signal(self, auto_title=True, auto_filename=True):
"""Returns the data at the current coordinates as a Signal subclass.
The signal subclass is the same as that of the current object. All the
axes' navigation attributes are set to False.
Parameters
----------
auto_title : bool
If True, a space followed by the current indices in parentheses
is appended to the title.
auto_filename : bool
If True and `tmp_parameters.filename` is defined
(which is always the case when the Signal has been read from a
file), the filename is modified by appending an underscore and
the current indices in parentheses.
Returns
-------
cs : Signal subclass instance.
Examples
--------
>>> im = signals.Image(np.zeros((2,3, 32,32)))
>>> im
<Image, title: , dimensions: (3, 2, 32, 32)>
>>> im.axes_manager.indices = 2,1
>>> im.get_current_signal()
<Image, title: (2, 1), dimensions: (32, 32)>
"""
cs = self.__class__(
self(),
axes=self.axes_manager._get_signal_axes_dicts(),
metadata=self.metadata.as_dictionary(),)
if auto_filename is True and self.tmp_parameters.has_item('filename'):
cs.tmp_parameters.filename = (self.tmp_parameters.filename +
'_' +
str(self.axes_manager.indices))
cs.tmp_parameters.extension = self.tmp_parameters.extension
cs.tmp_parameters.folder = self.tmp_parameters.folder
if auto_title is True:
cs.metadata.General.title = (cs.metadata.General.title +
' ' + str(self.axes_manager.indices))
cs.axes_manager._set_axis_attribute_values("navigate", False)
return cs
def _get_navigation_signal(self, data=None, dtype=None):
"""Return a signal with the same axes as the navigation space.
Parameters
----------
data : {None, numpy array}, optional
If None the `Signal` data is an array of the same dtype as the
current one filled with zeros. If a numpy array, the array must
have the correct dimensions.
dtype : data-type, optional
The desired data-type for the data array when `data` is None,
e.g., `numpy.int8`. Default is the data type of the current signal
data.
"""
if data is not None:
ref_shape = (self.axes_manager._navigation_shape_in_array
if self.axes_manager.navigation_dimension != 0
else (1,))
if data.shape != ref_shape:
raise ValueError(
("data.shape %s is not equal to the current navigation "
"shape in array which is %s") %
(str(data.shape), str(ref_shape)))
else:
if dtype is None:
dtype = self.data.dtype
if self.axes_manager.navigation_dimension == 0:
data = np.array([0, ], dtype=dtype)
else:
data = np.zeros(self.axes_manager._navigation_shape_in_array,
dtype=dtype)
if self.axes_manager.navigation_dimension == 0:
s = Signal(data)
elif self.axes_manager.navigation_dimension == 1:
from hyperspy._signals.spectrum import Spectrum
s = Spectrum(data,
axes=self.axes_manager._get_navigation_axes_dicts())
elif self.axes_manager.navigation_dimension == 2:
from hyperspy._signals.image import Image
s = Image(data,
axes=self.axes_manager._get_navigation_axes_dicts())
else:
s = Signal(data,
axes=self.axes_manager._get_navigation_axes_dicts())
s.axes_manager.set_signal_dimension(
self.axes_manager.navigation_dimension)
return s
def _get_signal_signal(self, data=None, dtype=None):
"""Return a signal with the same axes as the signal space.
Parameters
----------
data : {None, numpy array}, optional
If None the `Signal` data is an array of the same dtype as the
current one filled with zeros. If a numpy array, the array must
have the correct dimensions.
dtype : data-type, optional
The desired data-type for the data array when `data` is None,
e.g., `numpy.int8`. Default is the data type of the current signal
data.
"""
if data is not None:
ref_shape = (self.axes_manager._signal_shape_in_array
if self.axes_manager.signal_dimension != 0
else (1,))
if data.shape != ref_shape:
raise ValueError(
"data.shape %s is not equal to the current signal shape in"
" array which is %s" % (str(data.shape), str(ref_shape)))
else:
if dtype is None:
dtype = self.data.dtype
if self.axes_manager.signal_dimension == 0:
data = np.array([0, ], dtype=dtype)
else:
data = np.zeros(self.axes_manager._signal_shape_in_array,
dtype=dtype)
if self.axes_manager.signal_dimension == 0:
s = Signal(data)
s.set_signal_type(self.metadata.Signal.signal_type)
else:
s = self.__class__(data,
axes=self.axes_manager._get_signal_axes_dicts())
return s
def __iter__(self):
# Reset AxesManager iteration index
self.axes_manager.__iter__()
return self
def next(self):
self.axes_manager.next()
return self.get_current_signal()
def __len__(self):
nitem = int(self.axes_manager.navigation_size)
nitem = nitem if nitem > 0 else 1
return nitem
def as_spectrum(self, spectral_axis):
"""Return the Signal as a spectrum.
The chosen spectral axis is moved to the last index in the
array and the data is made contiguous for efficient
iteration over spectra.
Parameters
----------
spectral_axis : {int, complex, str}
Select the axis that will become the spectral axis, using its
index or name.
Examples
--------
>>> img = signals.Image(np.ones((3,4,5,6)))
>>> img
<Image, title: , dimensions: (4, 3, 6, 5)>
>>> img.as_spectrum(-1+1j)
<Spectrum, title: , dimensions: (6, 5, 4, 3)>
>>> img.as_spectrum(0)
<Spectrum, title: , dimensions: (6, 5, 3, 4)>
"""
# Roll the spectral axis-to-be to the last index in the array
sp = self.rollaxis(spectral_axis, -1 + 3j)
sp.metadata.Signal.record_by = "spectrum"
sp._assign_subclass()
return sp
def as_image(self, image_axes):
"""Convert signal to image.
The chosen image axes are moved to the last indices in the
array and the data is made contiguous for efficient
iteration over images.
Parameters
----------
image_axes : tuple of {int, complex, str}
Select the image axes. Note that the order of the axes matters
and it is given in the "natural" i.e. X, Y, Z... order.
Examples
--------
>>> s = signals.Spectrum(np.ones((2,3,4,5)))
>>> s
<Spectrum, title: , dimensions: (4, 3, 2, 5)>
>>> s.as_image((0,1))
<Image, title: , dimensions: (5, 2, 4, 3)>
>>> s.as_image((1,2))
<Image, title: , dimensions: (4, 5, 3, 2)>
Raises
------
DataDimensionError : when data.ndim < 2
"""
if self.data.ndim < 2:
raise DataDimensionError(
"A Signal dimension must be >= 2 to be converted to an Image")
axes = (self.axes_manager[image_axes[0]],
self.axes_manager[image_axes[1]])
iaxes = [axis.index_in_array for axis in axes]
im = self.rollaxis(iaxes[0] + 3j, -1 + 3j).rollaxis(
iaxes[1] - np.argmax(iaxes) + 3j, -2 + 3j)
im.metadata.Signal.record_by = "image"
im._assign_subclass()
return im
def _assign_subclass(self):
mp = self.metadata
self.__class__ = hyperspy.io.assign_signal_subclass(
record_by=mp.Signal.record_by
if "Signal.record_by" in mp
else self._record_by,
signal_type=mp.Signal.signal_type
if "Signal.signal_type" in mp
else self._signal_type,
signal_origin=mp.Signal.signal_origin
if "Signal.signal_origin" in mp
else self._signal_origin)
self.__init__(**self._to_dictionary())
def set_signal_type(self, signal_type):
"""Set the signal type and change the current class
accordingly if pertinent.
The signal_type attribute specifies the kind of data that the signal
contains, e.g. "EELS" for electron energy-loss spectroscopy,
"PES" for photoemission spectroscopy. There are some methods that are
only available for certain kind of signals, so setting this
parameter can enable/disable features.
Parameters
----------
signal_type : {"EELS", "EDS_TEM", "EDS_SEM", "DielectricFunction"}
Currently there are special features for "EELS" (electron
energy-loss spectroscopy), "EDS_TEM" (energy dispersive X-rays of
thin samples, normally obtained in a transmission electron
microscope), "EDS_SEM" (energy dispersive X-rays of thick samples,
normally obtained in a scanning electron microscope) and
"DielectricFunction". Setting the signal_type to the correct acronym
is highly advisable when analyzing any signal for which HyperSpy
provides extra features. Even if HyperSpy does not provide extra
features for the signal that you are analyzing, it is good practice
to set signal_type to a value that best describes the data signal
type.
"""
self.metadata.Signal.signal_type = signal_type
self._assign_subclass()
def set_signal_origin(self, origin):
"""Set the origin of the signal and change the current class
accordingly if pertinent.
The signal_origin attribute specifies if the data was obtained
through experiment or simulation. There are some methods that are
only available for experimental or simulated data, so setting this
parameter can enable/disable features.
Parameters
----------
origin : {'experiment', 'simulation', None, ""}
None and the empty string mean that the signal origin is unknown.
Raises
------
ValueError if origin is not 'experiment' or 'simulation'
"""
if origin not in ['experiment', 'simulation', "", None]:
raise ValueError("`origin` must be one of: 'experiment', 'simulation', None or ''")
if origin is None:
origin = ""
self.metadata.Signal.signal_origin = origin
self._assign_subclass()
def print_summary_statistics(self, formatter="%.3f"):
"""Prints the five-number summary statistics of the data, the mean and
the standard deviation.
Prints the mean, standard deviation (std), maximum (max), minimum
(min), first quartile (Q1), median and third quartile (Q3). NaNs are
removed from the calculations.
Parameters
----------
formatter : str
Number format string, e.g. "%.3f".
See Also
--------
get_histogram
"""
data = self.data
# To make it work with nans
data = data[~np.isnan(data)]
print(underline("Summary statistics"))
print("mean:\t" + formatter % data.mean())
print("std:\t" + formatter % data.std())
print("")
print("min:\t" + formatter % data.min())
print("Q1:\t" + formatter % np.percentile(data,
25))
print("median:\t" + formatter % np.median(data))
print("Q3:\t" + formatter % np.percentile(data,
75))
print("max:\t" + formatter % data.max())
@property
def is_rgba(self):
return rgb_tools.is_rgba(self.data)
@property
def is_rgb(self):
return rgb_tools.is_rgb(self.data)
@property
def is_rgbx(self):
return rgb_tools.is_rgbx(self.data)
def add_marker(self, marker, plot_on_signal=True, plot_marker=True):
"""
Add a marker to the signal or navigator plot.
Plot the signal, if not yet plotted
Parameters
----------
marker: `hyperspy.drawing._markers`
the marker to add. see `utils.markers`
plot_on_signal: bool
If True, add the marker to the signal
If False, add the marker to the navigator
plot_marker: bool
if True, plot the marker
Examples
--------
>>> import scipy.misc
>>> im = signals.Image(scipy.misc.lena())
>>> m = utils.plot.markers.rectangle(x1=150, y1=100, x2=400,
... y2=400, color='red')
>>> im.add_marker(m)
"""
if self._plot is None:
self.plot()
if plot_on_signal:
self._plot.signal_plot.add_marker(marker)
else:
self._plot.navigator_plot.add_marker(marker)
if plot_marker:
marker.plot()
def create_model(self):
from hyperspy.model import Model
return Model(self)
# Implement binary operators
for name in (
# Arithmetic operators
"__add__",
"__sub__",
"__mul__",
"__floordiv__",
"__mod__",
"__divmod__",
"__pow__",
"__lshift__",
"__rshift__",
"__and__",
"__xor__",
"__or__",
"__div__",
"__truediv__",
# Comparison operators
"__lt__",
"__le__",
"__eq__",
"__ne__",
"__ge__",
"__gt__",
):
exec(
("def %s(self, other):\n" % name) +
(" return self._binary_operator_ruler(other, \'%s\')\n" %
name))
exec("%s.__doc__ = int.%s.__doc__" % (name, name))
exec("setattr(Signal, \'%s\', %s)" % (name, name))
# The following commented line enables the operators with swapped
# operands. They should be defined only for commutative operators
# but for simplicity we don't support this at all atm.
# exec("setattr(Signal, \'%s\', %s)" % (name[:2] + "r" + name[2:],
# name))
# Implement unary arithmetic operations
for name in (
"__neg__",
"__pos__",
"__abs__",
"__invert__",):
exec(
("def %s(self):" % name) +
(" return self._unary_operator_ruler(\'%s\')" % name))
exec("%s.__doc__ = int.%s.__doc__" % (name, name))
exec("setattr(Signal, \'%s\', %s)" % (name, name))
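# The two loops above attach the standard dunder methods to Signal by
# generating small forwarding functions with exec. An equivalent exec-free
# formulation (a sketch, not the code used here) would be:
#
#     def _make_binary(name):
#         def op(self, other):
#             return self._binary_operator_ruler(other, name)
#         op.__name__ = name
#         return op
#     # for name in (...): setattr(Signal, name, _make_binary(name))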
class SpecialSlicers(object):
def __init__(self, signal, isNavigation):
self.isNavigation = isNavigation
self.signal = signal
def __getitem__(self, slices):
return self.signal.__getitem__(slices, self.isNavigation)
def __setitem__(self, i, j):
"""x.__setitem__(i, y) <==> x[i]=y
"""
if isinstance(j, Signal):
j = j.data
self.signal.__getitem__(i, self.isNavigation).data[:] = j
def __len__(self):
return self.signal.axes_manager.signal_shape[0]
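# SpecialSlicers backs the signal's slicing attributes: an instance built
# with isNavigation=True slices only the navigation axes and one built with
# isNavigation=False slices only the signal axes (the `inav`/`isig` style
# interface), forwarding to Signal.__getitem__ with the appropriate flag.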
| sillvan/hyperspy | hyperspy/signal.py | Python | gpl-3.0 | 185,611 | ["Gaussian"] | 55432fb62acb49e32dbe5ba96ea5d9eafded671ca63d7cc07fe9da9602bd03f3 |
#!/usr/bin/env python
'''
This file contains the input/output routines for the HMM:
the functions that read and parse the human-made HMM definitions.
This package is needed in order to build an HMM from scratch.
'''
# A copy of Fariselli's original code extended for Gaussian Mixture Models.
# Here the observations are not discrete symbols but rather continuous vectors of a signal
# or a sequence profile. The emission symbols (EMISSION_ALPHABET) are only used for the
# human readable display. The dimensions of the means and covariance matrices are determined
# by including an integer value after the MIXTURES Yes tag, contained in the ".mod" grammar file header.
#
# There are additional tags used to denote the GMM parameters. See SR_theorecticalModel.mod
# file for an example.
# MIXTURE_NUM 3 --> denotes the number of mixtures in this hidden state
# MIXTURE_WEIGHTS w1 w2 w3 --> the coefs/weights for each mixture in this hidden state
# MIXTURE_MEANS mean1 mean2 mean3 --> the means for the mixtures in this hidden state
# where mean vectors are entered in order and the name for the text file containing
# the values is entered. Each mean will have shape (Ax1), where A is the number of
# components of the observation vectors and entered on one line as
# m11 m12 m13 m14 ... m1A. The file will have filetype ".mean".
#
# MIXTURE_COVARS covar1 covar2 covar3 --> the covariance matrices for the mixtures
# in this hidden state, with a similar entry format as the means, see above.
# Each covariance matrix has the shape (AxA) and there will be MIXTURE_NUM of them.
# The indexing works by (row x col) with each mixture's covariance being entered in a file
# covar111 ... covar11A ... covar1AA on multiple lines. Each line represents a row in the
# covariance matrix. The file will have filetype ".covar" (matching parse_covar below).
#
# Provide a check counter for each of the parameters to determine incorrect input.
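# A hypothetical state entry illustrating the GMM tags described above (the
# state name "helix" and the parameter file basenames are placeholders, not
# taken from SR_theorecticalModel.mod):
# NAME helix
# LINK helix coil
# TRANS 0.9 0.1
# ENDSTATE 0
# EM_LIST all
# EMISSION None
# MIXTURE_NUM 3
# MIXTURE_WEIGHTS helix_w --> read from helix_w.weight
# MIXTURE_MEANS helix_m1 helix_m2 helix_m3 --> read from helix_m*.mean
# MIXTURE_COVARS helix_c1 helix_c2 helix_c3 --> read from helix_c*.covar
# LABEL H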
# Import NumPy for matrix/vector notation for the mixture means and covariances.
# It might be better to import it locally in the functions that use it, rather
# than as a global import.
import numpy
def save_cPickle(hmm,file):
''' save the hmm into cPickle file '''
import cPickle
fs=open(file,'w')
cPickle.dump(hmm,fs)
fs.close()
def load_cPickle(file):
''' load the object from file '''
import cPickle
import copy
fs=open(file,'r')
hmm=cPickle.load(fs)
fs.close()
return hmm
def parse_weight(name):
'''
Parse the textfile for a mixture's weights
'''
fweight = open(name).readlines()
weights = []
for line in fweight:
weights.append(float(line.strip()))
return weights
def parse_mean(name,dimA):
'''
Parse the text file for a mixture's mean vector; the vector is read down one column across multiple rows.
'''
fmean = open(name).readlines()
mean_vec = []
for line in fmean:
mean_vec.append(float(line.strip()))
#elems = fmean.readline().rstrip().split()
#mean_vec = [float(e) for e in elems]
return mean_vec
def parse_covar(name,dimA):
'''
Parse the textfile for a mixture's covariance matrix.
'''
covarf = open(name)
covar = numpy.zeros((dimA,dimA))
i = 0
for row in covarf:
col = row.rstrip().split()
for j in range(len(col)):
covar[i][j] = float(col[j])
i += 1
covarf.close()
return covar
def parse_text(text,profileDim,gmm_param_dir):
''' parse_text(text) reads
the contents of the HMM passed in via text,
which in turn is derived from a ".mod" file.
The profileDim parameter is the dimension of the input encoding used. For a gmmhmm not using
mixtures, profileDim takes the default value of zero.
'''
ret={} # return dictionary
import sys
import re
import string # note: the functions used from this module are deprecated in newer releases
curr_name=None # current state name
for line in text:
if (re.search('^#',line) == None):
list=line.split()
if (len(list)>0):
if(list[0] == 'TRANSITION_ALPHABET' or list[0] == 'EMISSION_ALPHABET'):
ret[list[0]]=list[1:]
elif(list[0] == 'MIXTURES'): # check if we are dealing with a GMMHMM
ret[list[0]] = list[1:]
ret[list[0]][1] = profileDim #set the input encoding dimension externally from the grammar .mod file
elif(list[0] == 'NAME'):
curr_name=list[1] # set the name of the current hidden state
ret[curr_name]={}
ret[curr_name].update({'FIX_TR':None})
ret[curr_name].update({'FIX_EM':None})
ret[curr_name].update({'FIX_EM_MIX':None})
if(curr_name not in ret['TRANSITION_ALPHABET']): # check to ensure states are consistent
sys.exit(curr_name + " not in TRANSITION_ALPHABET ="+str(ret['TRANSITION_ALPHABET']))
elif(list[0] == 'FIX_TR'):
ret[curr_name].update({'FIX_TR':'YES'})
elif(list[0] == 'FIX_EM'):
ret[curr_name].update({'FIX_EM':'YES'})
elif(list[0] == 'FIX_EM_MIX'):
ret[curr_name].update({'FIX_EM_MIX':'YES'})
elif(list[0] == 'ENDSTATE'):
ret[curr_name].update({list[0]:int(list[1])})
elif(list[0] == 'LABEL'):
ret[curr_name].update({list[0]:list[1]})
elif(list[0] == 'TRANS'):
if(list[1] == 'None'):
tmplist=[]
else:
tmplist=list[1:]
for i in range(len(tmplist)):
try:
tmplist[i]=string.atof(tmplist[i])
except:
pass
ret[curr_name].update({list[0]:tmplist})
elif(list[0] == 'LINK'):
if(list[1] == 'None'):
tmplist=[]
else:
tmplist=list[1:]
ret[curr_name].update({list[0]:tmplist})
elif(list[0] == 'EMISSION'): # and ret['MIXTURES'][0] != 'Yes'): # Fariselli checks if EMISSION in a state is empty to see if it is a null/silent state
if(list[1] == 'None'): # For GMM I will check if MIX_NUM == 0 for null/silent state
tmplist=[]
else:
tmplist=list[1:]
for i in range(len(tmplist)):
try:
tmplist[i]=string.atof(tmplist[i])
except:
pass
ret[curr_name].update({list[0]:tmplist})
elif(list[0]=='EM_LIST'): # and ret['MIXTURES'][0] != 'Yes'): # emission list is left to denote the AA symbols
if(list[1] == 'None'):
tmplist=[]
else:
tmplist=list[1:]
ret[curr_name].update({list[0]:tmplist})
elif(list[0] == 'MIXTURE_NUM'): # and ret['MIXTURES'][0] == 'Yes'): # number of mixtures in this state
if(list[1] == 'None'):
tmplist = []
else:
tmplist = list[1:]
for i in range(len(tmplist)):
try:
tmplist[i] = string.atoi(tmplist[i])
except:
pass
ret[curr_name].update({list[0]:tmplist})
elif(list[0] == 'MIXTURE_WEIGHTS'): # and ret['MIXTURES'][0] == 'Yes'): # get the mixture weights
tmplist = []
if(list[1] == 'None'):
tmplist=[]
else:
weights = list[1:]
tmp_inner = parse_weight(gmm_param_dir+str(weights[0])+".weight")
tmplist = [float(w) for w in tmp_inner]
if(ret[curr_name]['MIXTURE_NUM'] != []) and (ret[curr_name]['MIXTURE_NUM'][0] != "tied") and (len(tmplist) != ret[curr_name]['MIXTURE_NUM'][0]): # check to ensure mixture weights are consistent
sys.exit(curr_name + " has incorrect number of mixture weights = " + str(ret[curr_name]['MIXTURE_NUM']))
ret[curr_name].update({list[0]:tmplist})
elif(list[0] == 'MIXTURE_MEANS'): # and ret['MIXTURES'][0] == 'Yes'): # get the mixture means these are vectors of shape (Ax1) stored as a list of vectors
if (ret[curr_name]['MIXTURE_NUM'] != []) and (ret[curr_name]['MIXTURE_NUM'][0] != 'tied'):
dimA = int(ret['MIXTURES'][1])
tmplist = []
if(list[1] == 'None'):
tmplist=[]
else:
means = list[1:]
for i in range(ret[curr_name]['MIXTURE_NUM'][0]):
tmp_inner = parse_mean(gmm_param_dir+str(means[i])+".mean",dimA)
tmplist.append(tmp_inner) #tmplist[i] = [float(elem) for elem in list[i*dimA:(i+1)*dimA]] # a list of mean vectors
numelems = sum([len(mean_vec) for mean_vec in tmplist])
if (ret[curr_name]['MIXTURE_NUM'] != []) and (numelems % dimA != 0): # check to ensure mean components are consistent
sys.exit(curr_name + " has incorrect number of mixture mean components")
else:
tmplist = []
ret[curr_name].update({list[0]:tmplist})
elif(list[0] == 'MIXTURE_COVARS'): # and ret['MIXTURES'][0] == 'Yes'): # get the mixture covariances, these are matrices with shape (AxA) stored as a list of matrices
if (ret[curr_name]['MIXTURE_NUM'] != []) and (ret[curr_name]['MIXTURE_NUM'][0] != 'tied'):
dimA = int(ret['MIXTURES'][1])
covars = list[1:]
tmplist = []
if(list[1] == 'None'):
tmplist=[]
else:
if (ret[curr_name]['MIXTURE_NUM'] != []) and (ret[curr_name]['MIXTURE_NUM'][0] != 'tied') and (len(covars) != ret[curr_name]['MIXTURE_NUM'][0]): # check to ensure covariance components are consistent in number
print "len(covar),dimA",len(covars),dimA
sys.exit(curr_name + " has incorrect number of mixture covariance components")
for covar_id in range(ret[curr_name]['MIXTURE_NUM'][0]):
tmp_covar = parse_covar(gmm_param_dir+str(covars[covar_id])+".covar",dimA)
tmplist.append(tmp_covar)
else:
tmplist = []
ret[curr_name].update({list[0]:tmplist}) # storing a list of numpy matrices
return(ret)
def write_for_humans(hmm,filename):
''' write_for_humans(hmm,filename) writes the hmm to filename in the same format that parse_text reads'''
import sys
try:
f=open(filename,'w')
except:
print "Can't open write_for_humans file, ", filename
sys.exit()
os_path = filename[:-10] + "/"
strPrint=""
separator="#############################\n"
strPrint+="# alphabets\n"
strPrint+="TRANSITION_ALPHABET "
for i in hmm.topo_order:
strPrint+=str(hmm.state_names[i])+" "
strPrint+="\n"
strPrint+="EMISSION_ALPHABET "+" ".join(hmm.emission_alphabet)+'\n'
strPrint+="MIXTURES " + hmm.mixtures + " " + str(hmm.dimProfile) +'\n'
strPrint+=separator
for i in hmm.topo_order: # for each state in topological order
strPrint+="NAME "+hmm.state_names[i]+'\n'
strPrint+="LINK " #trransitions
if(hmm.out_s[i]):
for j in hmm.out_s[i]:
strPrint+=hmm.state_names[j]+' '
strPrint+='\n'
else:
strPrint+=" None\n"
strPrint+="TRANS "
if(hmm.states[i].tied_t):
strPrint+="tied "+hmm.states[i].tied_t+'\n'
elif(hmm.out_s[i]):
for j in hmm.out_s[i]:
strPrint+=str(hmm.states[i].a(hmm.states[j]))+' '
strPrint+='\n'
else:
strPrint+=" None\n"
if(hmm.fix_tr[i]):
strPrint+="FIX_TR\n"
# end state
strPrint+="ENDSTATE "+str(hmm.states[i].end_state)+'\n'
#emissions
if(not hmm.states[i].is_null()): #if (hmm.states[i].em_letters):
strPrint+="EM_LIST "+" ".join(hmm.states[i].em_letters)+'\n'
else:
strPrint+="EM_LIST None\n"
strPrint+="EMISSION "
if(hmm.states[i].tied_e): # and hmm.mixtures != "Yes"):
strPrint+="tied "+hmm.states[i].tied_e+'\n'
elif(not hmm.states[i].is_null() and hmm.mixtures != "Yes"): #elif(hmm.states[i].em_letters):
v=hmm.states[i].get_emissions()
for j in v:
strPrint+=str(j)+' '
strPrint+='\n'
else:
strPrint+=" None\n"
#gaussian mixture model parameters
strPrint+= "MIXTURE_NUM "
if(not hmm.states[i].is_null() and hmm.mixtures == "Yes"):
if hmm.states[i].tied_e_mix and not hmm.states[i].tied_e:
strPrint+="tied "+hmm.states[i].tied_e_mix+'\n'
elif not hmm.states[i].tied_e_mix and not hmm.states[i].tied_e:
strPrint += str(hmm.states[i].get_emissions().get_mix_num()) + '\n'
else:
strPrint += "None\n"
else:
strPrint += "None\n"
strPrint+= "MIXTURE_WEIGHTS "
if(not hmm.states[i].is_null() and hmm.mixtures == "Yes"):
if not hmm.states[i].tied_e:
weights = []
for mix in range(hmm.states[i].get_emissions().get_mix_num()):
weights.append(hmm.states[i].get_emissions().get_mix_weight(mix))
# weight_name = str(hmm.states[i].name) + str(hmm.states[i].get_em_name()[3:]) + "_mix_%d"%mix + ".weight"
# strPrint += weight_name[:-7] + ' '
strPrint += str(hmm.states[i].name) + str(hmm.states[i].get_em_name()[3:])
write_weight(os_path + str(hmm.states[i].name) + str(hmm.states[i].get_em_name()[3:]) + ".weight",weights)
strPrint +="\n"
else:
strPrint += "None\n"
else:
strPrint += "None\n"
strPrint+= "MIXTURE_MEANS "
if(not hmm.states[i].is_null() and hmm.mixtures == "Yes"):
if not hmm.states[i].tied_e and not hmm.states[i].tied_e_mix:
mix_num = hmm.states[i].get_emissions().get_mix_num()
for mix in range(mix_num):
mean = hmm.states[i].get_emissions().get_mixture_density(mix).get_mean()
mean_name = str(hmm.states[i].name) + str(hmm.states[i].get_em_name()[3:]) + "_mix_%d"%mix + ".mean"
strPrint += mean_name[:-5] + " "
write_mean(os_path + mean_name,mean)
strPrint += "\n"
else:
strPrint += "None\n"
else:
strPrint += "None\n"
strPrint+= "MIXTURE_COVARS "
if(not hmm.states[i].is_null() and hmm.mixtures == "Yes"):
if not hmm.states[i].tied_e and not hmm.states[i].tied_e_mix:
mix_num = hmm.states[i].get_emissions().get_mix_num()
for mix in range(mix_num):
covar = hmm.states[i].get_emissions().get_mixture_density(mix).get_cov() #this is a numpy array shape (dimProfile,dimProfile)
covar_name = str(hmm.states[i].name) + str(hmm.states[i].get_em_name()[3:]) + "_mix_%d"%mix + ".covar"
strPrint += covar_name[:-6] + " "
write_covar(os_path + covar_name,covar,hmm.dimProfile)
strPrint +="\n"
else:
strPrint += "None\n"
else:
strPrint += "None\n"
if(hmm.fix_em[i]):
strPrint+="FIX_EM\n"
if(hmm.fix_em_mix[i]):
strPrint+="FIX_EM_MIX\n"
#labels
if(hmm.states[i].label):
strPrint+="LABEL "+hmm.states[i].label+'\n'
else:
strPrint+="LABEL None\n"
strPrint+=separator
f.write(strPrint)
f.close()
def write_weight(filename,weights):
'''
Writes a mixture's weight to a file output.
'''
fweight = open(filename,'w')
weightstr = ""
for w in weights:
weightstr += str(w) + "\n"
fweight.write(weightstr)
fweight.close()
def write_mean(filename,mean):
'''
Writes a mixture's mean vector to a file output.
'''
fmean = open(filename,'w')
meanstr = ""
for comp in mean:
meanstr += str(comp) + "\n"
fmean.write(meanstr)
fmean.close()
def write_covar(filename,covar,dimA):
'''
Writes a mixture's covariance matrix to a file output.
'''
fcovar = open(filename,'w')
covarstr = ""
for i in range(len(covar.flat)):
covarstr += str(covar.flat[i]) + " "
if (i+1)%dimA == 0:
covarstr += "\n"
fcovar.write(covarstr)
fcovar.close()
def Build_HMM(file,profileDim,gmm_param_dir):
'''
This function build an hmm starting from a file. It discriminates between a discrete symbol
version and a GMM version using different node_em class objects. Also searches for
MIXTURES = Yes/No indicator from the input file.
The profileDim parameter is the dimension of the input encoding used.
'''
import string
import sys
import State
import HMM
try:
lines = open(file).readlines()
except:
print "Can't open build_hmm file, ",file
info=parse_text(lines,profileDim,gmm_param_dir)
tr_al=info['TRANSITION_ALPHABET']
if(info['EMISSION_ALPHABET'][0] == 'range'): # used for integer emission symbols such as dice throws
em_al=[]
for i in range(string.atoi(info['EMISSION_ALPHABET'][1])):
em_al.append(str(i))
else:
em_al=info['EMISSION_ALPHABET']
tied_t={} # tied transitions None if not tied
tied_e={} # tied emissions None if not tied (this is at the state level)
tied_e_mix = {} # tied emission mixture densities, None if not tied (this is at the sub-state mixture-tying level)
links={} # temporary list for each state
states=[]
label={}
endstate={}
in_links={}
fix_tr={}
fix_em={}
fix_em_mix = {}
empty_tr=State.node_tr("_EMPTY_TR_",[]) #empty transition and emissions
if info["MIXTURES"][0] == "Yes": # check if node_em is for discrete symbols or vector profile GMMs
empty_em = State.node_em_gmm("_EMPTY_EM_",[],[]) # mix_weights = [], mix_densities = []
else:
empty_em = State.node_em("_EMPTY_EM_",[])
for name in tr_al: # initialize dictionaries
in_links[name]=[]
links[name]=[None,None]
for name in tr_al: # create in_link information
for in_name in info[name]['LINK']:
if(name not in in_links[in_name]):
in_links[in_name].append(name)
serial=0 # used as incremental internal number for transitions and emissions. It will be used to set node_tr and node_em
for name in tr_al: # set node_tr
if(info[name]['TRANS']!=[] and info[name]['TRANS'][0] != 'tied'):
if(info[name]['TRANS'][0] == 'uniform'): # set uniform
d=1.0/len(info[name]['LINK'])
info[name]['TRANS']=[d]*len(info[name]['LINK'])
obj=State.node_tr("_TR_"+str(serial),info[name]['TRANS'])
serial=serial + 1
links[name][0]=obj
tied_t[name]=None
elif(info[name]['TRANS']!=[] and info[name]['TRANS'][0] == 'tied'):
tmpname=info[name]['TRANS'][1]
links[name][0]=links[tmpname][0] # links[name][0] is for transitions
tied_t[name]=tmpname
elif(info[name]['TRANS']==[]):
links[name][0]=empty_tr
tied_t[name]=None
# This section implements the tying of emission density functions (either discrete/continuous at the state level or for GMM at the sub-state mixture level)
# For mixture-tying, the implementation is such that a collection of states will share one codebook of mixture densities while still having individual state
# mixture weights
serial=0
for name in tr_al: # set node_em
if (info["MIXTURES"][0] != "Yes"):
if(info[name]['EMISSION']!=[] and info[name]['EM_LIST'][0] == 'all'):
info[name]['EM_LIST']=em_al
elif(info[name]['EMISSION']!=[] and info[name]['EMISSION'][0] != 'tied'):
if(info[name]['EMISSION'][0] == 'uniform'): # set uniform
d=1.0/len(info[name]['EM_LIST'])
info[name]['EMISSION']=[d]*len(info[name]['EM_LIST'])
obj=State.node_em("_EM_"+str(serial),info[name]['EMISSION'])
serial=serial + 1
links[name][1]=obj
tied_e[name]=None
tied_e_mix[name] = None
elif(info[name]['EMISSION']==[]):
links[name][1]=empty_em
tied_e[name]=None
tied_e_mix[name] = None
elif(info[name]['EMISSION']!=[] and info[name]['EMISSION'][0] == 'tied'):
# check for states with tied emissions discrete and GMM at the state level
tmpname=info[name]['EMISSION'][1]
links[name][1]=links[tmpname][1]
tied_e[name]=tmpname
tied_e_mix[name] = tmpname
elif(info["MIXTURES"][0] == "Yes"):
if(info[name]['EMISSION']!=[] and info[name]['EMISSION'][0] == 'tied'):
# check for states with tied emissions discrete and GMM at the state level
tmpname=info[name]['EMISSION'][1]
links[name][1]=links[tmpname][1]
tied_e[name]=tmpname
tied_e_mix[name] = tmpname
elif (info[name]['MIXTURE_NUM'] != []) and (info[name]['MIXTURE_NUM'][0] != 'tied'):
#normalise mixture weights to sum to one
weight_sum = float(sum(info[name]["MIXTURE_WEIGHTS"]))
info[name]["MIXTURE_WEIGHTS"] = [w/weight_sum for w in info[name]["MIXTURE_WEIGHTS"]]
mix_densities = []
for k in range(int(info[name]["MIXTURE_NUM"][0])):
#tmp_mix_density = State.mixture_density("_EM_GMM_"+str(serial)+"_MIX_"+str(k),info[name]["MIXTURE_MEANS"][k],info[name]["MIXTURE_COVARS"][k])
#all mixture densities in a state share the same name, this is because tying of mixture densities works by tying all mixtures in a state to another state.
tmp_mix_density = State.mixture_density("_EM_GMM_"+str(serial)+"_MIX",info[name]["MIXTURE_MEANS"][k],info[name]["MIXTURE_COVARS"][k])
mix_densities.append(tmp_mix_density)
obj = State.node_em_gmm("_EM_GMM_"+str(serial),info[name]["MIXTURE_WEIGHTS"],mix_densities)
serial=serial+1
links[name][1]=obj
tied_e[name]=None
tied_e_mix[name] = None
elif (info[name]['MIXTURE_NUM'] == []):
links[name][1] = empty_em
tied_e[name] = None
tied_e_mix[name] = None
elif (info[name]['MIXTURE_NUM'][0] == 'tied'):
# check for states with tied emissions GMM at the sub-state mixture level
#normalise mixture weights to sum to one
weight_sum = float(sum(info[name]["MIXTURE_WEIGHTS"]))
info[name]["MIXTURE_WEIGHTS"] = [w/weight_sum for w in info[name]["MIXTURE_WEIGHTS"]]
tmpname = info[name]["MIXTURE_NUM"][1]
#this is used to obtain the reference pointer to the tied to state's mixture densities
tmp_node_em_gmm = links[tmpname][1]
tied_mixture_densities = tmp_node_em_gmm.get_mixtures()
obj = State.node_em_gmm("_EM_GMM_"+str(serial),info[name]["MIXTURE_WEIGHTS"],tied_mixture_densities)
serial=serial+1
links[name][1] = obj
tied_e[name]= None
tied_e_mix[name]=tmpname
for name in tr_al: # set labels
if(info[name]['FIX_TR']): # fixed transitions
fix_tr[name]='YES'
else:
fix_tr[name]=None
if(info[name]['FIX_EM']): # fixed emissions
fix_em[name]='YES'
else:
fix_em[name]=None
if(info[name]['FIX_EM_MIX']): # fixed emission mixture densities
fix_em_mix[name]='YES'
else:
fix_em_mix[name]=None
if(info[name]['LABEL'] == ['None']): # LABELS
label[name]=None
else:
label[name]=info[name]['LABEL']
endstate[name]=info[name]['ENDSTATE'] # set endstates
states.append(State.State(name,links[name][0],links[name][1],info[name]['LINK'],in_links[name],info[name]['EM_LIST'],tied_t[name],tied_e[name],tied_e_mix[name],endstate[name],label[name])) # set State[i] and append it to the state list
hmm = HMM.HMM(states,em_al,fix_tr,fix_em,fix_em_mix,info["MIXTURES"][0],profileDim) #int(info["MIXTURES"][1])) # set self.hmm => the HMM
return(hmm)
def main():
print "Test with file mod.mod"
print "read file and create an hmm"
hmm = Build_HMM("Test_Case2.mod",0,"./") # placeholder profileDim (0 = no mixtures) and gmm_param_dir; adjust for the actual test case
print "inlinks (<-) - outlinks (->) "
for i in range(hmm.num_states):
print hmm.states[i].name," -> ",hmm.states[i].out_links, " <- ",hmm.states[i].in_links
print "print the tied states"
for i in range(hmm.num_states):
n=hmm.states[i]
if(n.tied_e):
print "State ",n.name," has tied emission to ",n.tied_e
elif (n.tied_e_mix):
print "State ",n.name," has tied emission mixture densities to ",n.tied_e_mix
if(n.tied_t):
print "State ",n.name," has tied transitions to ",n.tied_t
print "###############################################################"
print"# state tr_name em_name # state tr_name em_name tying test tying test"
for i in range(hmm.num_states):
ni=hmm.states[i]
for j in range(i,hmm.num_states):
nj=hmm.states[j]
print "# <",ni.name,"> ",ni._node_tr.name,ni._node_em.name,
print "# <",nj.name,"> ",nj._node_tr.name,nj._node_em.name,
print " (TR1==TR2)? ",(hmm.states[i]._node_tr == hmm.states[j]._node_tr),
print " (EM1==EM2)? ",(hmm.states[i]._node_em == hmm.states[j]._node_em)
print "\n===================================Topo_order============================="
for i in hmm.topo_order:
print i,hmm.state_names[i]
# test case for the input and output of the hmm grammar file parsing
write_for_humans(hmm,"Test_Case_Output.mod")
if __name__ == '__main__':
main()
|
arthurfait/HMM-3
|
hmm/HMM_IO.py
|
Python
|
gpl-3.0
| 28,226
|
[
"Gaussian"
] |
cd45375e69d7a5d9349c0ef32016c960c921f86eeb6e72a8208f5ce3e5d92b4b
|
import numpy as np, SimPEG as simpeg, vtk, sys, os, time
import vtk.util.numpy_support as npsup
import polydata, extraction
# Functions that take any vtkData object as input.
# Function in vtk to work with dataset
def thresholdCellId2vtp(vtkObj,ind):
"""
Function to return a polygon from a cell in a data object.
vtkObj has to have an 'ind' cell array.
"""
thresObj = thresFilt(vtkObj,'ind',[ind-.1,ind+.1],thType='Between')
vtpObj = extraction.vtu2vtp(thresObj)
return polydata.normFilter(polydata.triangulatePolyData(vtpObj))
def getCell2vtp(vtkObj,ind):
"""
Function gets a cell by ind and constructs a polydata from it.
"""
# Get the cell
cE = vtkObj.GetCell(ind)
# Make the polygon
if cE.GetCellType() == 11:
# Use a cubeSource, much faster
cube = vtk.vtkCubeSource()
cube.SetBounds(cE.GetBounds())
cube.Update()
vtpObj = cube.GetOutput()
else:
polygons = vtk.vtkCellArray()
for iF in range(cE.GetNumberOfFaces()):
f = cE.GetFace(iF)
poly = vtk.vtkPolygon()
poly.GetPointIds().SetNumberOfIds(f.GetNumberOfPoints())
for nr in range(f.GetNumberOfPoints()):
poly.GetPointIds().SetId(nr,f.GetPointId(nr))
polygons.InsertNextCell(poly)
# Build the polydata
vtpObj = vtk.vtkPolyData()
vtpObj.SetPoints(vtkObj.GetPoints())
vtpObj.SetPolys(polygons)
return polydata.normFilter(polydata.triangulatePolyData(vtpObj))
# Add a numpy array to a VTKobject
def addNPDataArrays(vtkObj,arrDict,arrType='Cell'):
""" Function to add a nparray to vtkObject"""
for nameArr,npArr in arrDict.iteritems():
vtkArr = npsup.numpy_to_vtk(npArr,deep=1)
vtkArr.SetName(nameArr)
if arrType == 'Cell':
vtkObj.GetCellData().AddArray(vtkArr)
elif arrType == 'Point':
vtkObj.GetPointData().AddArray(vtkArr)
else:
raise Exception('Not a supported arrType')
def getDataArrayNames(vtkObj,arrType='Cell'):
"""Function that returns a list of all the names of cell/point data arrays."""
l = []
if arrType == 'Cell':
nameList = [ vtkObj.GetCellData().GetArrayName(i) for i in range(vtkObj.GetCellData().GetNumberOfArrays())]
elif arrType == 'Point':
nameList = [ vtkObj.GetPointData().GetArrayName(i) for i in range(vtkObj.GetPointData().GetNumberOfArrays())]
else:
raise Exception('Not a supported arrType')
return nameList
def getDataArray(vtkObj,name,arrType='Cell'):
"""Function that returns the cell/point data array. """
return npsup.vtk_to_numpy(vtkObj.GetCellData().GetArray(name))
if arrType == 'Cell':
return npsup.vtk_to_numpy(vtkObj.GetCellData().GetArray(name))
elif arrType == 'Point':
return npsup.vtk_to_numpy(vtkObj.GetPointData().GetArray(name))
else:
raise Exception('Not a support arrType')
def thresFilt(vtkObj,arrName,value,thType='Upper'):
thresFilt = vtk.vtkThreshold()
thresFilt.SetInputData(vtkObj)
if thType == 'Upper':
thresFilt.ThresholdByUpper(value)
elif thType == 'Lower':
thresFilt.ThresholdByLower(value)
elif thType == 'Between':
thresFilt.ThresholdBetween(value[0],value[1])
thresFilt.AllScalarsOn()
thresFilt.SetInputArrayToProcess(0,0,0,vtk.vtkDataObject.FIELD_ASSOCIATION_CELLS,arrName)
thresFilt.Update()
return thresFilt.GetOutput()
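# A minimal usage sketch (assuming a vtkUnstructuredGrid `mesh` that carries
# the 'ind' cell array required by thresholdCellId2vtp):
# from telluricpy.vtkTools import dataset
# cellPoly = dataset.thresholdCellId2vtp(mesh, 42)
# names = dataset.getDataArrayNames(mesh, arrType='Cell')
# arr = dataset.getDataArray(mesh, names[0], arrType='Cell')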
|
grosenkj/telluricpy
|
telluricpy/vtkTools/dataset.py
|
Python
|
mit
| 3,517
|
[
"VTK"
] |
8843884eee63073cd4265cb3c0db7c0b595bc705bc29e6e0babe670e4c40660d
|
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
__RCSID__ = "$Id$"
import six
import os
from DIRAC import S_OK, S_ERROR
from DIRAC.Core.Utilities.List import stringListToString, intListToString
from DIRAC.DataManagementSystem.DB.FileCatalogComponents.FileManager.FileManagerBase import FileManagerBase
class FileManagerFlat(FileManagerBase):
######################################################
#
# The all important _findFiles and _getDirectoryFiles methods
#
def _findFiles(self, lfns, metadata=['FileID'], connection=False):
""" Find file IDs, if they exist, for the given list of LFNs """
connection = self._getConnection(connection)
dirDict = self._getFileDirectories(lfns)
failed = {}
directoryIDs = {}
for dirPath in dirDict.keys():
res = self.db.dtree.findDir(dirPath)
if not res['OK'] or not res['Value']:
error = res.get('Message', 'No such file or directory')
for fileName in dirDict[dirPath]:
failed['%s/%s' % (dirPath, fileName)] = error
else:
directoryIDs[dirPath] = res['Value']
successful = {}
for dirPath in directoryIDs.keys():
fileNames = dirDict[dirPath]
res = self._getDirectoryFiles(directoryIDs[dirPath], fileNames, metadata, connection=connection)
if not res['OK'] or not res['Value']:
error = res.get('Message', 'No such file or directory')
for fileName in fileNames:
failed['%s/%s' % (dirPath, fileName)] = error
else:
for fileName, fileDict in res['Value'].items():
successful["%s/%s" % (dirPath, fileName)] = fileDict
return S_OK({"Successful": successful, "Failed": failed})
def _getDirectoryFiles(self, dirID, fileNames, metadata, allStatus=False, connection=False):
connection = self._getConnection(connection)
# metadata can be any of
# ['FileID','Size','UID','GID','Checksum','ChecksumType','Type','CreationDate','ModificationDate','Mode','Status']
req = "SELECT FileName,%s FROM FC_Files WHERE DirID=%d" % (intListToString(metadata), dirID)
if not allStatus:
statusIDs = []
res = self._getStatusInt('AprioriGood', connection=connection)
if res['OK']:
statusIDs.append(res['Value'])
if statusIDs:
req = "%s AND Status IN (%s)" % (req, intListToString(statusIDs))
if fileNames:
req = "%s AND FileName IN (%s)" % (req, stringListToString(fileNames))
res = self.db._query(req, connection)
if not res['OK']:
return res
files = {}
for fTuple in res['Value']:
fileName = fTuple[0]
files[fileName] = dict(zip(metadata, fTuple[1:]))
return S_OK(files)
######################################################
#
# _addFiles related methods
#
def _insertFiles(self, lfns, uid, gid, connection=False):
connection = self._getConnection(connection)
# Add the files
failed = {}
directoryFiles = {}
insertTuples = []
res = self._getStatusInt('AprioriGood', connection=connection)
statusID = 0
if res['OK']:
statusID = res['Value']
for lfn in sorted(lfns.keys()):
fileInfo = lfns[lfn]
size = fileInfo['Size']
guid = fileInfo.get('GUID', '')
checksum = fileInfo['Checksum']
checksumtype = fileInfo.get('ChecksumType', 'Adler32')
dirName = os.path.dirname(lfn)
dirID = fileInfo['DirID']
fileName = os.path.basename(lfn)
if dirName not in directoryFiles:
directoryFiles[dirName] = []
directoryFiles[dirName].append(fileName)
insertTuples.append("(%d,%d,%d,%d,%d,'%s','%s','%s','%s',UTC_TIMESTAMP(),UTC_TIMESTAMP(),%d)" %
(dirID, size, uid, gid, statusID, fileName, guid, checksum, checksumtype, self.db.umask))
fields = "DirID,Size,UID,GID,Status,FileName,GUID,Checksum,ChecksumType,CreationDate,ModificationDate,Mode"
req = "INSERT INTO FC_Files (%s) VALUES %s" % (fields, ','.join(insertTuples))
res = self.db._update(req, connection)
if not res['OK']:
return res
# Get the fileIDs for the inserted files
res = self._findFiles(list(lfns), ['FileID'], connection=connection)
if not res['OK']:
for lfn in list(lfns):
failed[lfn] = 'Failed post insert check'
lfns.pop(lfn)
else:
failed.update(res['Value']['Failed'])
for lfn, fileDict in res['Value']['Successful'].items():
lfns[lfn]['FileID'] = fileDict['FileID']
return S_OK({'Successful': lfns, 'Failed': failed})
def _getFileIDFromGUID(self, guid, connection=False):
connection = self._getConnection(connection)
if not guid:
return S_OK({})
if not isinstance(guid, (list, tuple)):
guid = [guid]
req = "SELECT FileID,GUID FROM FC_Files WHERE GUID IN (%s)" % stringListToString(guid)
res = self.db._query(req, connection)
if not res['OK']:
return res
guidDict = {}
for fileID, guid in res['Value']:
guidDict[guid] = fileID
return S_OK(guidDict)
######################################################
#
# _deleteFiles related methods
#
def _deleteFiles(self, fileIDs, connection=False):
connection = self._getConnection(connection)
replicaPurge = self.__deleteFileReplicas(fileIDs)
filePurge = self.__deleteFiles(fileIDs, connection=connection)
if not replicaPurge['OK']:
return replicaPurge
if not filePurge['OK']:
return filePurge
return S_OK()
def __deleteFileReplicas(self, fileIDs, connection=False):
connection = self._getConnection(connection)
if not fileIDs:
return S_OK()
req = "DELETE FROM FC_Replicas WHERE FileID in (%s)" % (intListToString(fileIDs))
return self.db._update(req, connection)
def __deleteFiles(self, fileIDs, connection=False):
connection = self._getConnection(connection)
if not fileIDs:
return S_OK()
req = "DELETE FROM FC_Files WHERE FileID in (%s)" % (intListToString(fileIDs))
return self.db._update(req, connection)
######################################################
#
# _addReplicas related methods
#
def _insertReplicas(self, lfns, master=False, connection=False):
connection = self._getConnection(connection)
res = self._getStatusInt('AprioriGood', connection=connection)
statusID = 0
if res['OK']:
statusID = res['Value']
replicaType = 'Replica'
if master:
replicaType = 'Master'
insertTuples = {}
deleteTuples = []
successful = {}
failed = {}
directorySESizeDict = {}
for lfn in sorted(lfns.keys()):
fileID = lfns[lfn]['FileID']
pfn = lfns[lfn]['PFN']
seName = lfns[lfn]['SE']
res = self.db.seManager.findSE(seName)
if not res['OK']:
failed[lfn] = res['Message']
continue
seID = res['Value']
if not master:
res = self.__existsReplica(fileID, seID, connection=connection)
if not res['OK']:
failed[lfn] = res['Message']
continue
elif res['Value']:
successful[lfn] = True
continue
dirID = lfns[lfn]['DirID']
if dirID not in directorySESizeDict:
directorySESizeDict[dirID] = {}
if seID not in directorySESizeDict[dirID]:
directorySESizeDict[dirID][seID] = {'Files': 0, 'Size': 0}
directorySESizeDict[dirID][seID]['Size'] += lfns[lfn]['Size']
directorySESizeDict[dirID][seID]['Files'] += 1
insertTuples[lfn] = (
"(%d,%d,%d,'%s',UTC_TIMESTAMP(),UTC_TIMESTAMP(),'%s')" %
(fileID, seID, statusID, replicaType, pfn))
deleteTuples.append((fileID, seID))
if insertTuples:
fields = "FileID,SEID,Status,RepType,CreationDate,ModificationDate,PFN"
req = "INSERT INTO FC_Replicas (%s) VALUES %s" % (fields, ','.join(insertTuples.values()))
res = self.db._update(req, connection)
if not res['OK']:
self.__deleteReplicas(deleteTuples, connection=connection)
for lfn in insertTuples.keys():
failed[lfn] = res['Message']
else:
# Update the directory usage
self._updateDirectoryUsage(directorySESizeDict, '+', connection=connection)
for lfn in insertTuples.keys():
successful[lfn] = True
return S_OK({'Successful': successful, 'Failed': failed})
def __existsReplica(self, fileID, seID, connection=False):
""" Check if a replica already exists """
# TODO: This is inefficient; it should perform a bulk operation
connection = self._getConnection(connection)
if isinstance(seID, six.string_types):
res = self.db.seManager.findSE(seID)
if not res['OK']:
return res
seID = res['Value']
req = "SELECT FileID FROM FC_Replicas WHERE FileID=%d AND SEID=%d" % (fileID, seID)
result = self.db._query(req, connection)
if not result['OK']:
return result
if not result['Value']:
return S_OK(False)
return S_OK(True)
######################################################
#
# _deleteReplicas related methods
#
def _deleteReplicas(self, lfns, connection=False):
connection = self._getConnection(connection)
successful = {}
res = self._findFiles(list(lfns), ['DirID', 'FileID', 'Size'], connection=connection)
if not res['OK']:
return res
failed = res['Value']['Failed']
lfnFileIDDict = res['Value']['Successful']
toRemove = []
directorySESizeDict = {}
for lfn, fileDict in lfnFileIDDict.items():
fileID = fileDict['FileID']
se = lfns[lfn]['SE']
toRemove.append((fileID, se))
# Now prepare the storage usage dict
res = self.db.seManager.findSE(se)
if not res['OK']:
return res
seID = res['Value']
dirID = fileDict['DirID']
if dirID not in directorySESizeDict:
directorySESizeDict[dirID] = {}
if seID not in directorySESizeDict[dirID]:
directorySESizeDict[dirID][seID] = {'Files': 0, 'Size': 0}
directorySESizeDict[dirID][seID]['Size'] += fileDict['Size']
directorySESizeDict[dirID][seID]['Files'] += 1
res = self.__deleteReplicas(toRemove)
if not res['OK']:
for lfn in lfnFileIDDict.keys():
failed[lfn] = res['Message']
else:
# Update the directory usage
self._updateDirectoryUsage(directorySESizeDict, '-', connection=connection)
for lfn in lfnFileIDDict.keys():
successful[lfn] = True
return S_OK({'Successful': successful, 'Failed': failed})
def __deleteReplicas(self, replicaTuples, connection=False):
connection = self._getConnection(connection)
deleteTuples = []
for fileID, seID in replicaTuples:
if isinstance(seID, six.string_types):
res = self.db.seManager.findSE(seID)
if not res['OK']:
return res
seID = res['Value']
deleteTuples.append("(%d,%d)" % (fileID, seID))
req = "DELETE FROM FC_Replicas WHERE (FileID,SEID) IN (%s)" % intListToString(deleteTuples)
return self.db._update(req, connection)
######################################################
#
# _setReplicaStatus _setReplicaHost _setReplicaParameter methods
# _setFileParameter method
#
def _setReplicaStatus(self, fileID, se, status, connection=False):
connection = self._getConnection(connection)
res = self._getStatusInt(status, connection=connection)
if not res['OK']:
return res
statusID = res['Value']
return self._setReplicaParameter(fileID, se, 'Status', statusID, connection=connection)
def _setReplicaHost(self, fileID, se, newSE, connection=False):
connection = self._getConnection(connection)
res = self.db.seManager.findSE(newSE)
if not res['OK']:
return res
newSE = res['Value']
return self._setReplicaParameter(fileID, se, 'SEID', newSE, connection=connection)
def _setReplicaParameter(self, fileID, seID, paramName, paramValue, connection=False):
connection = self._getConnection(connection)
if isinstance(seID, six.string_types):
res = self.db.seManager.findSE(seID)
if not res['OK']:
return res
seID = res['Value']
req = "UPDATE FC_Replicas SET %s='%s', ModificationDate=UTC_TIMESTAMP() WHERE FileID=%d AND SEID=%d;" % (
paramName,
paramValue,
fileID,
seID)
return self.db._update(req, connection)
def _setFileParameter(self, fileID, paramName, paramValue, connection=False):
connection = self._getConnection(connection)
if not isinstance(fileID, (list, tuple)):
fileID = [fileID]
req = "UPDATE FC_Files SET %s='%s', ModificationDate=UTC_TIMESTAMP() WHERE FileID IN (%s)" % (
paramName,
paramValue,
intListToString(fileID))
return self.db._update(req, connection)
######################################################
#
# _getFileReplicas related methods
#
def _getFileReplicas(self, fileIDs, fields=['PFN'], connection=False):
connection = self._getConnection(connection)
if not fileIDs:
return S_ERROR("No such file or directory")
req = "SELECT FileID,SEID,Status,%s FROM FC_Replicas WHERE FileID IN (%s);" % (
intListToString(fields), intListToString(fileIDs))
res = self.db._query(req, connection)
if not res['OK']:
return res
replicas = {}
for fTuple in res['Value']:
fileID = fTuple[0]
if fileID not in replicas:
replicas[fileID] = {}
seID = fTuple[1]
res = self.db.seManager.getSEName(seID)
if not res['OK']:
continue
seName = res['Value']
statusID = fTuple[2]
res = self._getIntStatus(statusID, connection=connection)
if not res['OK']:
continue
status = res['Value']
replicas[fileID][seName] = {'Status': status}
replicas[fileID][seName].update(dict(zip(fields, fTuple[3:])))
for fileID in fileIDs:
if fileID not in replicas:
replicas[fileID] = {}
return S_OK(replicas)
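# Illustrative shape of the replicas dictionary returned above, with the
# default fields=['PFN'] (the SE name and PFN are made-up example values):
# {1234: {'SOME-SE': {'Status': 'AprioriGood', 'PFN': 'srm://host/path'}}}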
|
yujikato/DIRAC
|
src/DIRAC/DataManagementSystem/DB/FileCatalogComponents/FileManager/FileManagerFlat.py
|
Python
|
gpl-3.0
| 13,929
|
[
"DIRAC"
] |
878965fc9b82643521c0732869f5eaad1c996e33f6d803527d2755e496616217
|
from anndata import AnnData
from typing import Optional
from ._dpt import _diffmap
from .._utils import AnyRandom
def diffmap(
adata: AnnData,
n_comps: int = 15,
neighbors_key: Optional[str] = None,
random_state: AnyRandom = 0,
copy: bool = False,
):
"""\
Diffusion Maps [Coifman05]_ [Haghverdi15]_ [Wolf18]_.
Diffusion maps [Coifman05]_ has been proposed for visualizing single-cell
data by [Haghverdi15]_. The tool uses the adapted Gaussian kernel suggested
by [Haghverdi16]_ in the implementation of [Wolf18]_.
The width ("sigma") of the connectivity kernel is implicitly determined by
the number of neighbors used to compute the single-cell graph in
:func:`~scanpy.pp.neighbors`. To reproduce the original implementation
using a Gaussian kernel, use `method=='gauss'` in
:func:`~scanpy.pp.neighbors`. To use an exponential kernel, use the default
`method=='umap'`. Differences between these options shouldn't usually be
dramatic.
Parameters
----------
adata
Annotated data matrix.
n_comps
The number of dimensions of the representation.
neighbors_key
If not specified, diffmap looks in .uns['neighbors'] for neighbors settings
and in .obsp['connectivities'] and .obsp['distances'] for connectivities and
distances, respectively (the default storage places of pp.neighbors).
If specified, diffmap looks in .uns[neighbors_key] for neighbors settings and
in .obsp[.uns[neighbors_key]['connectivities_key']] and
.obsp[.uns[neighbors_key]['distances_key']] for connectivities and distances,
respectively.
random_state
A numpy random seed
copy
Return a copy instead of writing to adata.
Returns
-------
Depending on `copy`, returns or updates `adata` with the following fields.
`X_diffmap` : :class:`numpy.ndarray` (`adata.obsm`)
Diffusion map representation of data, which is the right eigen basis of
the transition matrix with eigenvectors as columns.
`diffmap_evals` : :class:`numpy.ndarray` (`adata.uns`)
Array of size (number of eigen vectors).
Eigenvalues of transition matrix.
Notes
-----
The 0-th column in `adata.obsm["X_diffmap"]` is the steady-state solution,
which is non-informative in diffusion maps.
Therefore, the first diffusion component is at index 1,
e.g. `adata.obsm["X_diffmap"][:,1]`
"""
if neighbors_key is None:
neighbors_key = 'neighbors'
if neighbors_key not in adata.uns:
raise ValueError(
'You need to run `pp.neighbors` first to compute a neighborhood graph.'
)
if n_comps <= 2:
raise ValueError('Provide any value greater than 2 for `n_comps`. ')
adata = adata.copy() if copy else adata
_diffmap(
adata, n_comps=n_comps, neighbors_key=neighbors_key, random_state=random_state
)
return adata if copy else None
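# A minimal usage sketch (assuming the public scanpy API; the example
# dataset name is illustrative):
# import scanpy as sc
# adata = sc.datasets.pbmc68k_reduced()
# sc.pp.neighbors(adata, n_neighbors=15)
# sc.tl.diffmap(adata, n_comps=15)
# dc1 = adata.obsm['X_diffmap'][:, 1]  # first informative component, see Notes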
|
theislab/scanpy
|
scanpy/tools/_diffmap.py
|
Python
|
bsd-3-clause
| 2,969
|
[
"Gaussian"
] |
bb3bf9e40f873f8a1ff9a2415bf42f660989e2fbaf199d4014b99c0f68ff25db
|
import argparse
def commandargs(self):
# Parse the command-line arguments, if any were supplied
self.parser = argparse.ArgumentParser()
self.parser.add_argument("--runtype", help="Batch, single file or pictures",default='file')
self.parser.add_argument("--batchpool", help="run directory of videos",type=str)
self.parser.add_argument("--inDEST", help="path of single video",type=str,default='C:/Program Files (x86)/MotionMeerkat/PlotwatcherTest.tlv')
self.parser.add_argument("--fileD", help="output directory",default="C:/MotionMeerkat")
self.parser.add_argument("--adapt", help="Adaptive background averaging",action='store_true',default=True)
self.parser.add_argument("--accAvg", help="Fixed background averaging rate",default=0.35,type=float)
self.parser.add_argument("--frameHIT", help="Expected percentage of motion frames",default=0.5,type=float)
self.parser.add_argument("--threshT", help="Threshold of movement",default=30,type=int)
self.parser.add_argument("--drawSmall", help="'Draw' or 'enter' object size",type=str,default='enter')
self.parser.add_argument("--minSIZE", help="Minimum size of contour",default=0.1,type=float)
self.parser.add_argument("--burnin", help="Delay time",default=0,type=int)
self.parser.add_argument("--scan", help="Scan one of every X frames for motion",default=0,type=int)
self.parser.add_argument("--frameSET", help="Set frame_rate?",action='store_true',default=False)
self.parser.add_argument("--frame_rate", help="frames per second",default=1)
self.parser.add_argument("--moglearning", help="Speed of MOG background detector, lowering values are more sensitive to movement",default=0.09,type=float)
self.parser.add_argument("--subMethod", help="Accumulated Averaging [Acc] or Mixture of Gaussian [MOG] background method",default='MOG',type=str)
self.parser.add_argument("--mogvariance", help="Variance in MOG to select background",default=25,type=int)
self.parser.add_argument("--set_ROI", help="Set region of interest?",action='store_true',default=False)
self.parser.add_argument("--windy", help="Enable wind correction",action='store_true',default=False)
self.parser.add_argument("--windy_min", help="How many minutes of continious movement should be ignored?",default='3',type=int)
self.parser.add_argument("--remove_singles", help="Remove single frames with no neighbors of a given distance",action='store_true',default=False)
self.parser.add_argument("--single_distance", help="Eliminate frames witin no neighbors within X seconds",default='10',type=int)
self.parser.add_argument("--ROI_include", help="include or exclude?",default="exclude")
self.parser.add_argument("--set_areacounter", help="Set region to count area",action="store_true",default=False)
self.parser.add_argument("--todraw", help="Draw red boxes to highlight motion' ?",action="store_true",default=False)
self.parser.add_argument("--makeVID", help="Output images as 'frames','video','both', 'none' ?",default='frames',type=str)
self.args = self.parser.parse_args(namespace=self)
# Set pictures flag
if not self.runtype=="pictures":
self.pictures=False
else:
self.pictures=True
print "\n"
#standardize minimum size as a percent of the frame
self.minSIZE=float(self.minSIZE)/100
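# A minimal sketch of how this method might be attached and used (the class
# wrapper is assumed; the original file only defines the method body):
# class MotionArgs(object):
#     commandargs = commandargs
# args = MotionArgs()
# args.commandargs()  # parses sys.argv into attributes on the instance
# print args.runtype, args.minSIZE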
|
bw4sz/OpenCV_HummingbirdsMotion
|
MotionMeerkat/CommandArgs.py
|
Python
|
gpl-3.0
| 3,501
|
[
"Gaussian"
] |
08d89e276e27ebf1224039239fc7c6014bad87a15124ee46fdfb25d7447e0663
|
# coding: utf-8
# Copyright (c) Pymatgen Development Team.
# Distributed under the terms of the MIT License.
import unittest
import json
import os
from pymatgen.electronic_structure.cohp import CompleteCohp, Cohp, IcohpValue, IcohpCollection
from pymatgen.electronic_structure.core import Spin, Orbital
from pymatgen.util.testing import PymatgenTest
test_dir = os.path.join(os.path.dirname(__file__), "..", "..", "..",
"test_files", "cohp")
class CohpTest(unittest.TestCase):
def setUp(self):
with open(os.path.join(test_dir, "cohp.json"), "r") as f:
self.cohp = Cohp.from_dict(json.load(f))
self.cohp_only = Cohp(self.cohp.efermi,
self.cohp.energies,
self.cohp.cohp)
with open(os.path.join(test_dir, "coop.json"), "r") as f:
self.coop = Cohp.from_dict(json.load(f))
def test_as_from_dict(self):
with open(os.path.join(test_dir, "cohp.json"), "r") as f:
cohp_dict = json.load(f)
self.assertEqual(self.cohp.as_dict(), cohp_dict)
def test_attributes(self):
self.assertEqual(len(self.cohp.energies), 301)
self.assertEqual(self.cohp.efermi, 9.75576)
self.assertEqual(self.coop.efermi, 5.90043)
self.assertFalse(self.cohp.are_coops)
self.assertTrue(self.coop.are_coops)
def test_get_icohp(self):
self.assertEqual(self.cohp.get_icohp(),
self.cohp.get_cohp(integrated=True))
self.assertEqual(None, self.cohp_only.get_icohp())
def test_get_interpolated_value(self):
# icohp_ef are the ICOHP(Ef) values taken from
# the ICOHPLIST.lobster file.
icohp_ef_dict = {Spin.up: -0.10218, Spin.down: -0.19701}
icoop_ef_dict = {Spin.up: 0.24714}
icohp_ef = self.cohp.get_interpolated_value(self.cohp.efermi,
integrated=True)
icoop_ef = self.coop.get_interpolated_value(self.coop.efermi,
integrated=True)
self.assertAlmostEqual(icohp_ef_dict, icohp_ef)
self.assertAlmostEqual(icoop_ef_dict, icoop_ef)
with self.assertRaises(ValueError):
self.cohp_only.get_interpolated_value(5.0, integrated=True)
def test_str(self):
with open(os.path.join(test_dir, "cohp.str"), "rt") as f:
str_cohp = f.read()
with open(os.path.join(test_dir, "coop.str"), "rt") as f:
str_coop = f.read()
self.assertEqual(self.cohp.__str__(), str_cohp)
self.assertEqual(self.coop.__str__(), str_coop)
def test_antibnd_states_below_efermi(self):
self.assertDictEqual(self.cohp.has_antibnd_states_below_efermi(spin=None), {Spin.up: True, Spin.down: True})
self.assertDictEqual(self.cohp.has_antibnd_states_below_efermi(spin=None, limit=0.5),
{Spin.up: False, Spin.down: False})
self.assertDictEqual(self.cohp.has_antibnd_states_below_efermi(spin=Spin.up, limit=0.5), {Spin.up: False})
class IcohpValueTest(unittest.TestCase):
def setUp(self):
# without spin polarization
label = "1"
atom1 = "K1"
atom2 = "F2"
length = "2.3"
translation = [-1, 0, 0]
num = 1
icohp = {Spin.up: -2.0}
are_coops = False
self.icohpvalue = IcohpValue(label=label, atom1=atom1, atom2=atom2, length=length, translation=translation,
num=num, icohp=icohp, are_coops=are_coops)
label_sp = "1"
atom1_sp = "K1"
atom2_sp = "F2"
length_sp = "2.3"
translation_sp = [-1, 0, 0]
num_sp = 1
icohp_sp = {Spin.up: -1.1, Spin.down: -1.0}
are_coops_sp = False
self.icohpvalue_sp = IcohpValue(label=label_sp, atom1=atom1_sp, atom2=atom2_sp, length=length_sp,
translation=translation_sp, num=num_sp, icohp=icohp_sp, are_coops=are_coops_sp)
def test_attributes(self):
# without spin polarization
self.assertEqual(self.icohpvalue.num_bonds, 1)
self.assertEqual(self.icohpvalue.are_coops, False)
self.assertEqual(self.icohpvalue.is_spin_polarized, False)
self.assertDictEqual(self.icohpvalue.icohp, {Spin.up: -2.0})
# with spin polarization
self.assertEqual(self.icohpvalue_sp.num_bonds, 1)
self.assertEqual(self.icohpvalue_sp.are_coops, False)
self.assertEqual(self.icohpvalue_sp.is_spin_polarized, True)
self.assertDictEqual(self.icohpvalue_sp.icohp, {Spin.up: -1.1, Spin.down: -1.0})
def test_icohpvalue(self):
# without spin polarization
self.assertEqual(self.icohpvalue.icohpvalue(spin=Spin.up), -2.0)
# with spin polarization
self.assertEqual(self.icohpvalue_sp.icohpvalue(spin=Spin.up), -1.1)
self.assertEqual(self.icohpvalue_sp.icohpvalue(spin=Spin.down), -1.0)
def test_summed_icohp(self):
# without spin polarization
self.assertEqual(self.icohpvalue.summed_icohp, -2.0)
# with spin polarization
self.assertEqual(self.icohpvalue_sp.summed_icohp, -2.1)
class CombinedIcohpTest(unittest.TestCase):
def setUp(self):
# without spin polarization:
are_coops = False
is_spin_polarized = False
list_atom2 = ['K2', 'K2', 'K2', 'K2', 'K2', 'K2']
list_icohp = [{Spin.up: -0.40075}, {Spin.up: -0.40074}, {Spin.up: -0.40079}, {Spin.up: -0.40079},
{Spin.up: -0.40074}, {Spin.up: -0.40075}]
list_icoop = [{Spin.up: 0.02342}, {Spin.up: 0.02342}, {Spin.up: 0.02343}, {Spin.up: 0.02343},
{Spin.up: 0.02342}, {Spin.up: 0.02342}]
list_labels = ['1', '2', '3', '4', '5', '6']
list_length = [2.71199, 2.71199, 2.71199, 2.71199, 2.71199, 2.71199]
list_num = [1, 1, 1, 1, 1, 1]
list_atom1 = ['F1', 'F1', 'F1', 'F1', 'F1', 'F1']
list_translation = [[0, -1, -1], [-1, 0, -1], [0, 0, -1], [-1, -1, 0], [0, -1, 0], [-1, 0, 0]]
self.icohpcollection_KF = IcohpCollection(is_spin_polarized=is_spin_polarized, are_coops=are_coops,
list_labels=list_labels, list_atom1=list_atom1, list_atom2=list_atom2,
list_length=list_length, list_translation=list_translation,
list_num=list_num, list_icohp=list_icohp)
self.icoopcollection_KF = IcohpCollection(is_spin_polarized=is_spin_polarized, are_coops=True,
list_labels=list_labels, list_atom1=list_atom1, list_atom2=list_atom2,
list_length=list_length, list_translation=list_translation,
list_num=list_num, list_icohp=list_icoop)
# with spin polarization:
list_atom2_sp = ['Fe7', 'Fe9']
list_labels_sp = ['1', '2']
list_translation_sp = [[0, 0, 0], [0, 0, 0]]
list_length_sp = [2.83189, 2.45249]
list_atom1_sp = ['Fe8', 'Fe8']
is_spin_polarized_sp = True
are_coops_sp = False
list_num_sp = [2, 1]
list_icohp_sp = [{Spin.up: -0.10218, Spin.down: -0.19701}, {Spin.up: -0.28485, Spin.down: -0.58279}]
list_icoop_sp = [{Spin.up: -0.11389, Spin.down: -0.20828}, {Spin.up: -0.04087, Spin.down: -0.05756}]
self.icohpcollection_Fe = IcohpCollection(is_spin_polarized=is_spin_polarized_sp, are_coops=are_coops_sp,
list_labels=list_labels_sp, list_atom1=list_atom1_sp,
list_atom2=list_atom2_sp, list_length=list_length_sp,
list_translation=list_translation_sp, list_num=list_num_sp,
list_icohp=list_icohp_sp)
self.icoopcollection_Fe = IcohpCollection(is_spin_polarized=is_spin_polarized_sp, are_coops=True,
list_labels=list_labels_sp, list_atom1=list_atom1_sp,
list_atom2=list_atom2_sp, list_length=list_length_sp,
list_translation=list_translation_sp, list_num=list_num_sp,
list_icohp=list_icoop_sp)
def test_get_icohp_by_label(self):
# without spin polarization
# ICOHPs
self.assertEqual(self.icohpcollection_KF.get_icohp_by_label("1"), -0.40075)
self.assertEqual(self.icohpcollection_KF.get_icohp_by_label("2"), -0.40074)
self.assertEqual(self.icohpcollection_KF.get_icohp_by_label("3"), -0.40079)
self.assertEqual(self.icohpcollection_KF.get_icohp_by_label("4"), -0.40079)
self.assertEqual(self.icohpcollection_KF.get_icohp_by_label("5"), -0.40074)
self.assertEqual(self.icohpcollection_KF.get_icohp_by_label("6"), -0.40075)
# with spin polarization
# summed spin
# ICOHPs
self.assertEqual(self.icohpcollection_Fe.get_icohp_by_label("1"), -0.10218 - 0.19701)
self.assertEqual(self.icohpcollection_Fe.get_icohp_by_label("2"), -0.28485 - 0.58279)
# Spin up
# ICOHPs
self.assertEqual(self.icohpcollection_Fe.get_icohp_by_label("1", summed_spin_channels=False), -0.10218)
self.assertEqual(self.icohpcollection_Fe.get_icohp_by_label("2", summed_spin_channels=False), -0.28485)
# Spin down
# ICOHPs
self.assertEqual(self.icohpcollection_Fe.get_icohp_by_label("1", summed_spin_channels=False, spin=Spin.down),
-0.19701)
self.assertEqual(self.icohpcollection_Fe.get_icohp_by_label("2", summed_spin_channels=False, spin=Spin.down),
-0.58279)
def test_get_summed_icohp_by_label_list(self):
# without spin polarization
self.assertAlmostEqual(
self.icohpcollection_KF.get_summed_icohp_by_label_list(["1", "2", "3", "4", "5", "6"], divisor=6.0),
-0.40076)
# with spin polarization
sum1 = (-0.10218 - 0.19701 - 0.28485 - 0.58279) / 2.0
sum2 = (-0.10218 - 0.28485) / 2.0
sum3 = (-0.19701 - 0.58279) / 2.0
self.assertAlmostEqual(self.icohpcollection_Fe.get_summed_icohp_by_label_list(["1", "2"], divisor=2.0), sum1)
self.assertAlmostEqual(
self.icohpcollection_Fe.get_summed_icohp_by_label_list(["1", "2"], summed_spin_channels=False, divisor=2.0),
sum2)
self.assertAlmostEqual(
self.icohpcollection_Fe.get_summed_icohp_by_label_list(["1", "2"], summed_spin_channels=False,
spin=Spin.down, divisor=2.0), sum3)
def test_get_icohp_dict_by_bondlengths(self):
# without spin polarization
icohpvalue = {}
icohpvalue["1"] = {'@module': 'pymatgen.electronic_structure.cohp', 'num': 1, 'length': 2.71199,
'icohp': {Spin.up: -0.40075},
'are_coops': False, 'label': '1', 'atom2': 'K2', '@class': 'IcohpValue', 'atom1': 'F1',
'translation': [0, -1, -1]}
icohpvalue["2"] = {'@module': 'pymatgen.electronic_structure.cohp', 'num': 1, 'length': 2.71199,
'icohp': {Spin.up: -0.40074},
'are_coops': False, 'label': '2', 'atom2': 'K2', '@class': 'IcohpValue', 'atom1': 'F1',
'translation': [-1, 0, -1]}
icohpvalue["3"] = {'@module': 'pymatgen.electronic_structure.cohp', 'num': 1, 'length': 2.71199,
'icohp': {Spin.up: -0.40079},
'are_coops': False, 'label': '3', 'atom2': 'K2', '@class': 'IcohpValue', 'atom1': 'F1',
'translation': [0, 0, -1]}
icohpvalue["4"] = {'@module': 'pymatgen.electronic_structure.cohp', 'num': 1, 'length': 2.71199,
'icohp': {Spin.up: -0.40079},
'are_coops': False, 'label': '4', 'atom2': 'K2', '@class': 'IcohpValue', 'atom1': 'F1',
'translation': [-1, -1, 0]}
icohpvalue["5"] = {'@module': 'pymatgen.electronic_structure.cohp', 'num': 1, 'length': 2.71199,
'icohp': {Spin.up: -0.40074},
'are_coops': False, 'label': '5', 'atom2': 'K2', '@class': 'IcohpValue', 'atom1': 'F1',
'translation': [0, -1, 0]}
icohpvalue["6"] = {'@module': 'pymatgen.electronic_structure.cohp', 'num': 1, 'length': 2.71199,
'icohp': {Spin.up: -0.40075},
'are_coops': False, 'label': '6', 'atom2': 'K2', '@class': 'IcohpValue', 'atom1': 'F1',
'translation': [-1, 0, 0]}
dict_KF = self.icohpcollection_KF.get_icohp_dict_by_bondlengths(minbondlength=0.0, maxbondlength=8.0)
for key, value in sorted(dict_KF.items()):
self.assertDictEqual(value.as_dict(), icohpvalue[key])
self.assertDictEqual({}, self.icohpcollection_KF.get_icohp_dict_by_bondlengths(minbondlength=0.0,
maxbondlength=1.0))
# with spin polarization
icohpvalue_spin = {}
icohpvalue_spin["1"] = {'num': 2, 'atom2': 'Fe7', 'translation': [0, 0, 0],
'@module': 'pymatgen.electronic_structure.cohp',
'are_coops': False, 'atom1': 'Fe8',
'label': '1', 'length': 2.83189, '@class': 'IcohpValue',
'icohp': {Spin.up: -0.10218, Spin.down: -0.19701}}
icohpvalue_spin["2"] = {'num': 1, 'atom2': 'Fe9', 'translation': [0, 0, 0],
'@module': 'pymatgen.electronic_structure.cohp',
'are_coops': False, 'atom1': 'Fe8',
'label': '2', 'length': 2.45249, '@class': 'IcohpValue',
'icohp': {Spin.up: -0.28485, Spin.down: -0.58279}}
dict_Fe = self.icohpcollection_Fe.get_icohp_dict_by_bondlengths(minbondlength=0.0, maxbondlength=8.0)
for key, value in sorted(dict_Fe.items()):
self.assertDictEqual(value.as_dict(), icohpvalue_spin[key])
dict_Fe2 = self.icohpcollection_Fe.get_icohp_dict_by_bondlengths(minbondlength=2.5, maxbondlength=2.9)
self.assertEqual(len(dict_Fe2), 1)
for key, value in sorted(dict_Fe2.items()):
self.assertDictEqual(value.as_dict(), icohpvalue_spin[key])
def test_get_icohp_dict_of_site(self):
# without spin polarization
icohpvalue = {}
icohpvalue["1"] = {'translation': [0, -1, -1], 'are_coops': False,
'@module': 'pymatgen.electronic_structure.cohp', 'length': 2.71199,
'atom2': 'K2',
'@class': 'IcohpValue', 'atom1': 'F1', 'num': 1, 'label': '1', 'icohp': {Spin.up: -0.40075}}
icohpvalue["2"] = {'translation': [-1, 0, -1], 'are_coops': False,
'@module': 'pymatgen.electronic_structure.cohp', 'length': 2.71199,
'atom2': 'K2',
'@class': 'IcohpValue', 'atom1': 'F1', 'num': 1, 'label': '2', 'icohp': {Spin.up: -0.40074}}
icohpvalue["3"] = {'translation': [0, 0, -1], 'are_coops': False,
'@module': 'pymatgen.electronic_structure.cohp', 'length': 2.71199,
'atom2': 'K2',
'@class': 'IcohpValue', 'atom1': 'F1', 'num': 1, 'label': '3', 'icohp': {Spin.up: -0.40079}}
icohpvalue["4"] = {'translation': [-1, -1, 0], 'are_coops': False,
'@module': 'pymatgen.electronic_structure.cohp', 'length': 2.71199,
'atom2': 'K2',
'@class': 'IcohpValue', 'atom1': 'F1', 'num': 1, 'label': '4', 'icohp': {Spin.up: -0.40079}}
icohpvalue["5"] = {'translation': [0, -1, 0], 'are_coops': False,
'@module': 'pymatgen.electronic_structure.cohp', 'length': 2.71199,
'atom2': 'K2',
'@class': 'IcohpValue', 'atom1': 'F1', 'num': 1, 'label': '5', 'icohp': {Spin.up: -0.40074}}
icohpvalue["6"] = {'translation': [-1, 0, 0], 'are_coops': False,
'@module': 'pymatgen.electronic_structure.cohp', 'length': 2.71199,
'atom2': 'K2',
'@class': 'IcohpValue', 'atom1': 'F1', 'num': 1, 'label': '6', 'icohp': {Spin.up: -0.40075}}
dict_KF = self.icohpcollection_KF.get_icohp_dict_of_site(site=0)
for key, value in sorted(dict_KF.items()):
self.assertDictEqual(value.as_dict(), icohpvalue[key])
# compare number of results dependent on minsummedicohp, maxsummedicohp,minbondlength, maxbondlength, and only_bonds_to
dict_KF_2 = self.icohpcollection_KF.get_icohp_dict_of_site(site=0, minsummedicohp=None,
maxsummedicohp=-0.0, minbondlength=0.0,
maxbondlength=8.0)
dict_KF_3 = self.icohpcollection_KF.get_icohp_dict_of_site(site=0, minsummedicohp=None,
maxsummedicohp=-0.5, minbondlength=0.0,
maxbondlength=8.0)
dict_KF_4 = self.icohpcollection_KF.get_icohp_dict_of_site(site=0, minsummedicohp=0.0,
maxsummedicohp=None, minbondlength=0.0,
maxbondlength=8.0)
dict_KF_5 = self.icohpcollection_KF.get_icohp_dict_of_site(site=0, minsummedicohp=None,
maxsummedicohp=None, minbondlength=0.0,
maxbondlength=2.0)
dict_KF_6 = self.icohpcollection_KF.get_icohp_dict_of_site(site=0, minsummedicohp=None,
maxsummedicohp=None, minbondlength=3.0,
maxbondlength=8.0)
dict_KF_7 = self.icohpcollection_KF.get_icohp_dict_of_site(site=0, only_bonds_to=['K'])
dict_KF_8 = self.icohpcollection_KF.get_icohp_dict_of_site(site=1, only_bonds_to=['K'])
dict_KF_9 = self.icohpcollection_KF.get_icohp_dict_of_site(site=1, only_bonds_to=['F'])
self.assertEqual(len(dict_KF_2), 6)
self.assertEqual(len(dict_KF_3), 0)
self.assertEqual(len(dict_KF_4), 0)
self.assertEqual(len(dict_KF_5), 0)
self.assertEqual(len(dict_KF_6), 0)
self.assertEqual(len(dict_KF_7), 6)
self.assertEqual(len(dict_KF_8), 0)
self.assertEqual(len(dict_KF_9), 6)
# spin polarization
dict_Fe = self.icohpcollection_Fe.get_icohp_dict_of_site(site=0)
self.assertEqual(len(dict_Fe), 0)
# Fe8
dict_Fe2 = self.icohpcollection_Fe.get_icohp_dict_of_site(site=7)
self.assertEqual(len(dict_Fe2), 2)
# Test the values
icohplist_Fe = {}
icohplist_Fe["1"] = {'are_coops': False, 'translation': [0, 0, 0],
'icohp': {Spin.down: -0.19701, Spin.up: -0.10218}, 'length': 2.83189,
'@module': 'pymatgen.electronic_structure.cohp', 'atom1': 'Fe8', 'atom2': 'Fe7',
'label': '1',
'@class': 'IcohpValue', 'num': 2}
icohplist_Fe["2"] = {'are_coops': False, 'translation': [0, 0, 0],
'icohp': {Spin.down: -0.58279, Spin.up: -0.28485}, 'length': 2.45249,
'@module': 'pymatgen.electronic_structure.cohp', 'atom1': 'Fe8', 'atom2': 'Fe9',
'label': '2',
'@class': 'IcohpValue', 'num': 1}
for key, value in sorted(dict_Fe2.items()):
self.assertEqual(value.as_dict(), icohplist_Fe[key])
# Fe9
dict_Fe3 = self.icohpcollection_Fe.get_icohp_dict_of_site(site=8)
self.assertEqual(len(dict_Fe3), 1)
# compare number of results depending on minsummedicohp, maxsummedicohp, minbondlength, and maxbondlength
# Fe8
dict_Fe4 = self.icohpcollection_Fe.get_icohp_dict_of_site(site=7, minsummedicohp=-0.3,
maxsummedicohp=None, minbondlength=0.0,
maxbondlength=8.0)
self.assertEqual(len(dict_Fe4), 1)
values = []
for key, value in dict_Fe4.items():
values.append(value)
self.assertDictEqual(values[0].as_dict(), icohplist_Fe["1"])
dict_Fe5 = self.icohpcollection_Fe.get_icohp_dict_of_site(site=7, minsummedicohp=None,
maxsummedicohp=-0.3, minbondlength=0.0,
maxbondlength=8.0)
self.assertEqual(len(dict_Fe5), 1)
values = []
for key, value in dict_Fe5.items():
values.append(value)
self.assertDictEqual(values[0].as_dict(), icohplist_Fe["2"])
dict_Fe6 = self.icohpcollection_Fe.get_icohp_dict_of_site(site=7, minsummedicohp=None,
maxsummedicohp=None, minbondlength=0.0,
maxbondlength=2.5)
self.assertEqual(len(dict_Fe6), 1)
values = []
for key, value in dict_Fe6.items():
values.append(value)
self.assertDictEqual(values[0].as_dict(), icohplist_Fe["2"])
dict_Fe7 = self.icohpcollection_Fe.get_icohp_dict_of_site(site=7, minsummedicohp=None,
maxsummedicohp=None, minbondlength=2.5,
maxbondlength=8.0)
self.assertEqual(len(dict_Fe7), 1)
values = []
for key, value in dict_Fe7.items():
values.append(value)
self.assertDictEqual(values[0].as_dict(), icohplist_Fe["1"])
def test_extremum_icohpvalue(self):
# without spin polarization
# ICOHPs
self.assertEqual(self.icohpcollection_KF.extremum_icohpvalue(), -0.40079)
# ICOOPs
self.assertEqual(self.icoopcollection_KF.extremum_icohpvalue(), 0.02343)
# with spin polarization
# summed spin
# ICOHPs
self.assertEqual(self.icohpcollection_Fe.extremum_icohpvalue(), -0.86764)
# ICOOPs
self.assertAlmostEqual(self.icoopcollection_Fe.extremum_icohpvalue(), -0.09842999999999999)
# spin up
# ICOHPs
self.assertEqual(self.icohpcollection_Fe.extremum_icohpvalue(summed_spin_channels=False), -0.28485)
# ICOOPs
self.assertEqual(self.icoopcollection_Fe.extremum_icohpvalue(summed_spin_channels=False), -0.04087)
# spin down
# ICOHPs
self.assertEqual(self.icohpcollection_Fe.extremum_icohpvalue(summed_spin_channels=False, spin=Spin.down),
-0.58279)
# ICOOPs
self.assertEqual(self.icoopcollection_Fe.extremum_icohpvalue(summed_spin_channels=False, spin=Spin.down),
-0.05756)
class CompleteCohpTest(PymatgenTest):
def setUp(self):
filepath = os.path.join(test_dir, "complete_cohp_lobster.json")
with open(filepath, "r") as f:
self.cohp_lobster_dict = CompleteCohp.from_dict(json.load(f))
filepath = os.path.join(test_dir, "complete_coop_lobster.json")
with open(filepath, "r") as f:
self.coop_lobster_dict = CompleteCohp.from_dict(json.load(f))
filepath = os.path.join(test_dir, "complete_cohp_lmto.json")
with open(filepath, "r") as f:
self.cohp_lmto_dict = CompleteCohp.from_dict(json.load(f))
filepath = os.path.join(test_dir, "complete_cohp_orbitalwise.json")
with open(filepath, "r") as f:
self.cohp_orb_dict = CompleteCohp.from_dict(json.load(f))
# Lobster 3.0
filepath = os.path.join(test_dir, "complete_cohp_forb.json")
with open(filepath, "r") as f:
self.cohp_lobster_forb_dict = CompleteCohp.from_dict(json.load(f))
# Lobster 2.0
filepath = os.path.join(test_dir, "COPL.BiSe")
structure = os.path.join(test_dir, "CTRL.BiSe")
self.cohp_lmto = CompleteCohp.from_file("lmto", filename=filepath,
structure_file=structure)
filepath = os.path.join(test_dir, "COHPCAR.lobster")
structure = os.path.join(test_dir, "POSCAR")
self.cohp_lobster = CompleteCohp.from_file("lobster",
filename=filepath,
structure_file=structure)
filepath = os.path.join(test_dir, "COOPCAR.lobster.BiSe")
structure = os.path.join(test_dir, "POSCAR.BiSe")
self.coop_lobster = CompleteCohp.from_file("lobster",
filename=filepath,
structure_file=structure,
are_coops=True)
filepath = os.path.join(test_dir, "COHPCAR.lobster.orbitalwise")
structure = os.path.join(test_dir, "POSCAR.orbitalwise")
self.cohp_orb = CompleteCohp.from_file("lobster",
filename=filepath,
structure_file=structure)
filepath = os.path.join(test_dir, "COHPCAR.lobster.notot.orbitalwise")
self.cohp_notot = CompleteCohp.from_file("lobster",
filename=filepath,
structure_file=structure)
# Lobster 3.0
filepath = os.path.join(test_dir, "COHPCAR.lobster.Na2UO4")
structure = os.path.join(test_dir, "POSCAR.Na2UO4")
self.cohp_lobster_forb = CompleteCohp.from_file("lobster", filename=filepath, structure_file=structure)
def test_attributes(self):
self.assertFalse(self.cohp_lobster.are_coops)
self.assertFalse(self.cohp_lobster_dict.are_coops)
self.assertFalse(self.cohp_lmto.are_coops)
self.assertFalse(self.cohp_lmto_dict.are_coops)
self.assertTrue(self.coop_lobster.are_coops)
self.assertTrue(self.coop_lobster_dict.are_coops)
self.assertFalse(self.cohp_lobster_forb.are_coops)
self.assertFalse(self.cohp_lobster_forb_dict.are_coops)
self.assertEqual(len(self.cohp_lobster.energies), 301)
self.assertEqual(len(self.cohp_lmto.energies), 801)
self.assertEqual(len(self.coop_lobster.energies), 241)
self.assertEqual(len(self.cohp_lobster_forb.energies), 7)
self.assertEqual(self.cohp_lobster.efermi, 9.75576)
self.assertEqual(self.cohp_lmto.efermi, -2.3433)
self.assertEqual(self.coop_lobster.efermi, 5.90043)
self.assertEqual(self.cohp_lobster_forb.efermi, 4.12875)
def test_dict(self):
# The json files are dict representations of the COHPs from the LMTO
# and LOBSTER calculations and should thus be the same.
self.assertEqual(self.cohp_lobster.as_dict(),
self.cohp_lobster_dict.as_dict())
self.assertEqual(self.cohp_orb.as_dict(),
self.cohp_orb_dict.as_dict())
# Lobster 3.0, including f orbitals
self.assertEqual(self.cohp_lobster_forb.as_dict(),
self.cohp_lobster_forb_dict.as_dict())
# Testing the LMTO dicts will be more involved. Since the average
# is calculated and not read, there may be differences in rounding
# with a very small number of matrix elements, which would cause the
# test to fail
for key in ["COHP", "ICOHP"]:
self.assertArrayAlmostEqual(
self.cohp_lmto.as_dict()[key]["average"]["1"],
self.cohp_lmto_dict.as_dict()[key]["average"]["1"], 5)
for key in self.cohp_lmto.as_dict():
if key not in ["COHP", "ICOHP"]:
self.assertEqual(self.cohp_lmto.as_dict()[key],
self.cohp_lmto_dict.as_dict()[key])
else:
for bond in self.cohp_lmto.as_dict()[key]:
if bond != "average":
self.assertEqual(self.cohp_lmto.as_dict()[key][bond],
self.cohp_lmto_dict.as_dict()[key][bond])
def test_icohp_values(self):
# icohp_ef are the ICOHP(Ef) values taken from
# the ICOHPLIST.lobster file.
icohp_ef_dict = {"1": {Spin.up: -0.10218, Spin.down: -0.19701},
"2": {Spin.up: -0.28485, Spin.down: -0.58279}}
all_cohps_lobster = self.cohp_lobster.all_cohps
for bond in icohp_ef_dict:
icohp_ef = all_cohps_lobster[bond].get_interpolated_value(
self.cohp_lobster.efermi, integrated=True)
self.assertEqual(icohp_ef_dict[bond], icohp_ef)
icoop_ef_dict = {"1": {Spin.up: 0.14245},
"2": {Spin.up: -0.04118},
"3": {Spin.up: 0.14245},
"4": {Spin.up: -0.04118},
"5": {Spin.up: -0.03516},
"6": {Spin.up: 0.10745},
"7": {Spin.up: -0.03516},
"8": {Spin.up: 0.10745},
"9": {Spin.up: -0.12395},
"10": {Spin.up: 0.24714},
"11": {Spin.up: -0.12395}}
all_coops_lobster = self.coop_lobster.all_cohps
for bond in icoop_ef_dict:
icoop_ef = all_coops_lobster[bond].get_interpolated_value(
self.coop_lobster.efermi, integrated=True)
self.assertEqual(icoop_ef_dict[bond], icoop_ef)
def test_get_cohp_by_label(self):
self.assertEqual(self.cohp_orb.get_cohp_by_label("1").energies[0], -11.7225)
self.assertEqual(self.cohp_orb.get_cohp_by_label("1").energies[5], -11.47187)
self.assertFalse(self.cohp_orb.get_cohp_by_label("1").are_coops)
self.assertEqual(self.cohp_orb.get_cohp_by_label("1").cohp[Spin.up][0], 0.0)
self.assertEqual(self.cohp_orb.get_cohp_by_label("1").cohp[Spin.up][300], 0.03392)
self.assertEqual(self.cohp_orb.get_cohp_by_label("average").cohp[Spin.up][230], -0.08792)
self.assertEqual(self.cohp_orb.get_cohp_by_label("average").energies[230], -0.19368000000000007)
self.assertFalse(self.cohp_orb.get_cohp_by_label("average").are_coops)
# test methods from super class that could be overwritten
self.assertEqual(self.cohp_orb.get_icohp()[Spin.up][3], 0.0)
self.assertEqual(self.cohp_orb.get_cohp()[Spin.up][3], 0.0)
def test_get_summed_cohp_by_label_list(self):
self.assertEqual(self.cohp_orb.get_summed_cohp_by_label_list(["1"]).energies[0], -11.7225)
self.assertEqual(self.cohp_orb.get_summed_cohp_by_label_list(["1", "1"]).energies[0], -11.7225)
self.assertEqual(self.cohp_orb.get_summed_cohp_by_label_list(["1"]).energies[5], -11.47187)
self.assertFalse(self.cohp_orb.get_summed_cohp_by_label_list(["1"]).are_coops)
self.assertEqual(self.cohp_orb.get_summed_cohp_by_label_list(["1"]).cohp[Spin.up][0], 0.0)
self.assertEqual(self.cohp_orb.get_summed_cohp_by_label_list(["1", "1"]).cohp[Spin.up][0], 0.0)
self.assertEqual(self.cohp_orb.get_summed_cohp_by_label_list(["1", "1"]).cohp[Spin.up][300], 0.03392 * 2.0)
self.assertEqual(self.cohp_orb.get_summed_cohp_by_label_list(["1", "1"], divisor=2).cohp[Spin.up][300], 0.03392)
def test_get_summed_cohp_by_label_and_orbital_list(self):
ref = self.cohp_orb.orb_res_cohp["1"]["4s-4px"]
ref2 = self.cohp_orb.orb_res_cohp["1"]["4px-4pz"]
cohp_label = self.cohp_orb.get_summed_cohp_by_label_and_orbital_list(["1"], ["4s-4px"])
cohp_label2 = self.cohp_orb.get_summed_cohp_by_label_and_orbital_list(["1", "1"], ["4s-4px", "4s-4px"])
cohp_label2x = self.cohp_orb.get_summed_cohp_by_label_and_orbital_list(["1", "1"], ["4s-4px", "4s-4px"],
divisor=2)
cohp_label3 = self.cohp_orb.get_summed_cohp_by_label_and_orbital_list(["1", "1"], ["4px-4pz", "4s-4px"])
self.assertArrayEqual(cohp_label.cohp[Spin.up], ref["COHP"][Spin.up])
self.assertArrayEqual(cohp_label2.cohp[Spin.up], ref["COHP"][Spin.up] * 2.0)
self.assertArrayEqual(cohp_label3.cohp[Spin.up], ref["COHP"][Spin.up] + ref2["COHP"][Spin.up])
self.assertArrayEqual(cohp_label.icohp[Spin.up], ref["ICOHP"][Spin.up])
self.assertArrayEqual(cohp_label2.icohp[Spin.up], ref["ICOHP"][Spin.up] * 2.0)
self.assertArrayEqual(cohp_label2x.icohp[Spin.up], ref["ICOHP"][Spin.up])
self.assertArrayEqual(cohp_label3.icohp[Spin.up], ref["ICOHP"][Spin.up] + ref2["ICOHP"][Spin.up])
def test_orbital_resolved_cohp(self):
# When read from a COHPCAR file, total COHPs are calculated from
# the orbital-resolved COHPs if the total is missing. This may be the
# case for LOBSTER versions 2.2.0 and earlier due to a bug with the
# cohpgenerator keyword. The calculated total should be approximately
# the total COHP calculated by LOBSTER. Due to numerical errors in
# the LOBSTER calculation, the precision is not very high though.
self.assertArrayAlmostEqual(
self.cohp_orb.all_cohps["1"].cohp[Spin.up],
self.cohp_notot.all_cohps["1"].cohp[Spin.up], decimal=3)
self.assertArrayAlmostEqual(
self.cohp_orb.all_cohps["1"].icohp[Spin.up],
self.cohp_notot.all_cohps["1"].icohp[Spin.up], decimal=3)
# Tests different methods for getting orbital-resolved COHPs
ref = self.cohp_orb.orb_res_cohp["1"]["4s-4px"]
cohp_label = self.cohp_orb.get_orbital_resolved_cohp("1",
"4s-4px")
self.assertEqual(cohp_label.cohp, ref["COHP"])
self.assertEqual(cohp_label.icohp, ref["ICOHP"])
orbitals = [[Orbital.s, Orbital.px], ["s", "px"], [0, 3]]
cohps = [self.cohp_orb.get_orbital_resolved_cohp("1",
[[4, orb[0]], [4, orb[1]]]) for orb in orbitals]
for cohp in cohps:
self.assertEqual(cohp.as_dict(), cohp_label.as_dict())
if __name__ == "__main__":
unittest.main()
|
montoyjh/pymatgen
|
pymatgen/electronic_structure/tests/test_cohp.py
|
Python
|
mit
| 35,582
|
[
"pymatgen"
] |
2c78d2f267ec1e2d51c363b65e191985af3bba692da0a31c63aa42d8bbf14a98
|
# ============================================================================
#
# Copyright (C) 2007-2012 Conceptive Engineering bvba. All rights reserved.
# www.conceptive.be / project-camelot@conceptive.be
#
# This file is part of the Camelot Library.
#
# This file may be used under the terms of the GNU General Public
# License version 2.0 as published by the Free Software Foundation
# and appearing in the file license.txt included in the packaging of
# this file. Please review this information to ensure GNU
# General Public Licensing requirements will be met.
#
# If you are unsure which license is appropriate for your use, please
# visit www.python-camelot.com or contact project-camelot@conceptive.be
#
# This file is provided AS IS with NO WARRANTY OF ANY KIND, INCLUDING THE
# WARRANTY OF DESIGN, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
#
# For use of this library in commercial applications, please contact
# project-camelot@conceptive.be
#
# ============================================================================
from PyQt4 import QtGui
from PyQt4.QtCore import Qt
from customeditor import CustomEditor
from camelot.view.art import Icon
class StarEditor( CustomEditor ):
star_icon = Icon('tango/16x16/status/weather-clear.png')
no_star_icon = Icon('tango/16x16/status/weather-clear-noStar.png')
def __init__(self,
parent,
maximum = 5,
editable = True,
field_name = 'star',
**kwargs):
CustomEditor.__init__(self, parent)
self.setObjectName( field_name )
self.setFocusPolicy(Qt.StrongFocus)
layout = QtGui.QHBoxLayout(self)
layout.setContentsMargins( 0, 0, 0, 0)
layout.setSpacing(0)
self.maximum = maximum
self.buttons = []
# initialise the rating so get_value() is safe even if called before set_value()
self.stars = 0
for i in range(self.maximum):
button = QtGui.QToolButton(self)
button.setIcon(self.no_star_icon.getQIcon())
button.setFocusPolicy(Qt.ClickFocus)
# auto-raise is applied in both branches; only the enabled state
# depends on editability
button.setAutoRaise(True)
if not editable:
    button.setDisabled(True)
button.setFixedHeight(self.get_height())
self.buttons.append(button)
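# The factory function below freezes the current value of i for each
# button; a bare lambda inside the loop would late-bind i and every
# button would end up using its final value.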
def createStarClick(i):
return lambda:self.starClick(i+1)
for i in range(self.maximum):
self.buttons[i].clicked.connect(createStarClick(i))
for i in range(self.maximum):
layout.addWidget(self.buttons[i])
layout.addStretch()
self.setLayout(layout)
def get_value(self):
return CustomEditor.get_value(self) or self.stars
def set_enabled(self, editable=True):
for button in self.buttons:
button.setEnabled(editable)
button.update()
self.set_value(self.stars)
def starClick(self, value):
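# Clicking the star that already matches the current rating lowers the
# rating by one (acting as an undo); clicking any other star sets the
# rating to that star's position.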
if self.stars == value:
self.stars -= 1
else:
self.stars = int(value)
for i in range(self.maximum):
if i+1 <= self.stars:
self.buttons[i].setIcon(self.star_icon.getQIcon())
else:
self.buttons[i].setIcon(self.no_star_icon.getQIcon())
self.editingFinished.emit()
def set_value(self, value):
value = CustomEditor.set_value(self, value) or 0
self.stars = int(value)
for i in range(self.maximum):
if i+1 <= self.stars:
self.buttons[i].setIcon(self.star_icon.getQIcon())
else:
self.buttons[i].setIcon(self.no_star_icon.getQIcon())
def set_background_color(self, background_color):
return False
|
jeroendierckx/Camelot
|
camelot/view/controls/editors/stareditor.py
|
Python
|
gpl-2.0
| 3,808
|
[
"VisIt"
] |
4919190a301c02996a13f426caad37fdec075994bcebe7149b51a2b55db4f579
|
from twython import Twython
from config import Config
from streamcatcher import StreamCatcher
config = Config()
# Make sure we have the application details.
if not config.get('app', 'key'):
print 'You will need to register an application at https://dev.twitter.com/apps'
print 'When you have, enter the details below. You\'ll only need to do this once.'
APP_KEY = raw_input('The app key: ')
APP_SECRET = raw_input('The app secret: ')
if not APP_KEY or not APP_SECRET:
print 'The app key and secret are required!'
exit()
config.set('app', 'key', APP_KEY)
config.set('app', 'secret', APP_SECRET)
APP_KEY = config.get('app', 'key')
APP_SECRET = config.get('app', 'secret')
# Make sure we have auth tokens.
if not config.get('auth', 'token'):
print 'Fetching authentication URL...'
auth = Twython(APP_KEY, APP_SECRET)
tokens = auth.get_authentication_tokens()
OAUTH_TOKEN = tokens['oauth_token']
OAUTH_TOKEN_SECRET = tokens['oauth_token_secret']
print 'Please visit ' + tokens['auth_url'] + ' to log in, then copy/paste the PIN below.'
oauth_verifier = raw_input('Authorisation PIN: ')
auth = Twython(APP_KEY, APP_SECRET, OAUTH_TOKEN, OAUTH_TOKEN_SECRET)
tokens = auth.get_authorized_tokens(oauth_verifier)
config.set('auth', 'token', tokens['oauth_token'])
config.set('auth', 'secret', tokens['oauth_token_secret'])
# And... stream!
stream = StreamCatcher(APP_KEY, APP_SECRET, config.get('auth', 'token'), config.get('auth', 'secret'))
stream.start()
|
3ft9/twicli
|
src/twicli.py
|
Python
|
mit
| 1,482
|
[
"VisIt"
] |
7a94c0b5f09d7666ba20e4999f13b3f702cdc3b8f67557c0249b6a7310a1d131
|
# ----------------------------------------------------------------------------
# Copyright (c) 2013--, scikit-bio development team.
#
# Distributed under the terms of the Modified BSD License.
#
# The full license is in the file COPYING.txt, distributed with this software.
# ----------------------------------------------------------------------------
from __future__ import absolute_import, division, print_function
from future.utils import with_metaclass
from abc import ABCMeta, abstractproperty
from itertools import product
import numpy as np
from skbio.util import classproperty, overrides
from skbio.util._misc import MiniRegistry
from ._sequence import Sequence
class IUPACSequence(with_metaclass(ABCMeta, Sequence)):
"""Store biological sequence data conforming to the IUPAC character set.
This is an abstract base class (ABC) that cannot be instantiated.
Attributes
----------
id
description
values
quality
alphabet
gap_chars
nondegenerate_chars
degenerate_chars
degenerate_map
Raises
------
ValueError
If sequence characters are not in the IUPAC character set [1]_.
See Also
--------
NucleotideSequence
DNA
RNA
Protein
References
----------
.. [1] Nomenclature for incompletely specified bases in nucleic acid
sequences: recommendations 1984.
Nucleic Acids Res. May 10, 1985; 13(9): 3021-3030.
A Cornish-Bowden
"""
_number_of_extended_ascii_codes = 256
_ascii_lowercase_boundary = 90
__validation_mask = None
__degenerate_codes = None
__nondegenerate_codes = None
__gap_codes = None
@classproperty
def _validation_mask(cls):
# TODO These masks could be defined (as literals) on each concrete
# object. For now, memoize!
if cls.__validation_mask is None:
cls.__validation_mask = np.invert(np.bincount(
np.fromstring(''.join(cls.alphabet), dtype=np.uint8),
minlength=cls._number_of_extended_ascii_codes).astype(bool))
return cls.__validation_mask
@classproperty
def _degenerate_codes(cls):
if cls.__degenerate_codes is None:
degens = cls.degenerate_chars
cls.__degenerate_codes = np.asarray([ord(d) for d in degens])
return cls.__degenerate_codes
@classproperty
def _nondegenerate_codes(cls):
if cls.__nondegenerate_codes is None:
nondegens = cls.nondegenerate_chars
cls.__nondegenerate_codes = np.asarray([ord(d) for d in nondegens])
return cls.__nondegenerate_codes
@classproperty
def _gap_codes(cls):
if cls.__gap_codes is None:
gaps = cls.gap_chars
cls.__gap_codes = np.asarray([ord(g) for g in gaps])
return cls.__gap_codes
@classproperty
def alphabet(cls):
"""Return valid IUPAC characters.
This includes gap, non-degenerate, and degenerate characters.
Returns
-------
set
Valid IUPAC characters.
"""
return cls.degenerate_chars | cls.nondegenerate_chars | cls.gap_chars
@classproperty
def gap_chars(cls):
"""Return characters defined as gaps.
Returns
-------
set
Characters defined as gaps.
"""
return set('-.')
@classproperty
def degenerate_chars(cls):
"""Return degenerate IUPAC characters.
Returns
-------
set
Degenerate IUPAC characters.
"""
return set(cls.degenerate_map)
@abstractproperty
@classproperty
def nondegenerate_chars(cls):
"""Return non-degenerate IUPAC characters.
Returns
-------
set
Non-degenerate IUPAC characters.
"""
return set() # pragma: no cover
@abstractproperty
@classproperty
def degenerate_map(cls):
"""Return mapping of degenerate to non-degenerate characters.
Returns
-------
dict (set)
Mapping of each degenerate IUPAC character to the set of
non-degenerate IUPAC characters it represents.
"""
return {}  # pragma: no cover
@property
def _motifs(self):
return _motifs
@overrides(Sequence)
def __init__(self, sequence, id="", description="", quality=None,
validate=True, case_insensitive=False):
super(IUPACSequence, self).__init__(
sequence, id=id, description=description, quality=quality)
if case_insensitive:
self._convert_to_uppercase()
if validate:
self._validate()
def _convert_to_uppercase(self):
lowercase = self._bytes > self._ascii_lowercase_boundary
if np.any(lowercase):
with self._byte_ownership():
# ASCII is built such that the difference between uppercase and
# lowercase is the 6th bit.
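# e.g. ord('a') ^ 32 == ord('A')  (97 ^ 32 == 65)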
self._bytes[lowercase] ^= 32
def _validate(self):
# This is the fastest way that we have found to identify the
# presence or absence of certain characters (numbers).
# It works by multiplying a mask where the numbers which are
# permitted have a zero at their index, and all others have a one.
# The result is a vector which will propagate counts of invalid
# numbers and remove counts of valid numbers, so that we need only
# see if the array is empty to determine validity.
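# A minimal sketch of the same trick on a toy alphabet (illustrative
# only; 'ACGT' and the probe string are hypothetical):
#   mask = np.invert(np.bincount(
#       np.fromstring('ACGT', dtype=np.uint8), minlength=256).astype(bool))
#   np.any(np.bincount(np.fromstring('ACGX', dtype=np.uint8),
#                      minlength=256) * mask)  # -> True ('X' is invalid)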
invalid_characters = np.bincount(
self._bytes, minlength=self._number_of_extended_ascii_codes
) * self._validation_mask
if np.any(invalid_characters):
bad = list(np.where(
invalid_characters > 0)[0].astype(np.uint8).view('|S1'))
raise ValueError(
"Invalid character%s in sequence: %r. Valid IUPAC characters: "
"%r" % ('s' if len(bad) > 1 else '',
[str(b.tostring().decode("ascii")) for b in bad] if
len(bad) > 1 else bad[0],
list(self.alphabet)))
def gaps(self):
"""Find positions containing gaps in the biological sequence.
Returns
-------
1D np.ndarray (bool)
Boolean vector where ``True`` indicates a gap character is present
at that position in the biological sequence.
See Also
--------
has_gaps
Examples
--------
>>> from skbio import DNA
>>> s = DNA('AC-G-')
>>> s.gaps()
array([False, False, True, False, True], dtype=bool)
"""
return np.in1d(self._bytes, self._gap_codes)
def has_gaps(self):
"""Determine if the sequence contains one or more gap characters.
Returns
-------
bool
Indicates whether there are one or more occurrences of gap
characters in the biological sequence.
Examples
--------
>>> from skbio import DNA
>>> s = DNA('ACACGACGTT')
>>> s.has_gaps()
False
>>> t = DNA('A.CAC--GACGTT')
>>> t.has_gaps()
True
"""
# TODO use count, there aren't that many gap chars
return bool(self.gaps().any())
def degenerates(self):
"""Find positions containing degenerate characters in the sequence.
Returns
-------
1D np.ndarray (bool)
Boolean vector where ``True`` indicates a degenerate character is
present at that position in the biological sequence.
See Also
--------
has_degenerates
nondegenerates
has_nondegenerates
Examples
--------
>>> from skbio import DNA
>>> s = DNA('ACWGN')
>>> s.degenerates()
array([False, False, True, False, True], dtype=bool)
"""
return np.in1d(self._bytes, self._degenerate_codes)
def has_degenerates(self):
"""Determine if sequence contains one or more degenerate characters.
Returns
-------
bool
Indicates whether there are one or more occurrences of degenerate
characters in the biological sequence.
See Also
--------
degenerates
nondegenerates
has_nondegenerates
Examples
--------
>>> from skbio import DNA
>>> s = DNA('ACAC-GACGTT')
>>> s.has_degenerates()
False
>>> t = DNA('ANCACWWGACGTT')
>>> t.has_degenerates()
True
"""
# TODO use bincount!
return bool(self.degenerates().any())
def nondegenerates(self):
"""Find positions containing non-degenerate characters in the sequence.
Returns
-------
1D np.ndarray (bool)
Boolean vector where ``True`` indicates a non-degenerate character
is present at that position in the biological sequence.
See Also
--------
has_nondegenerates
degenerates
has_degenerates
Examples
--------
>>> from skbio import DNA
>>> s = DNA('ACWGN')
>>> s.nondegenerates()
array([ True, True, False, True, False], dtype=bool)
"""
return np.in1d(self._bytes, self._nondegenerate_codes)
def has_nondegenerates(self):
"""Determine if sequence contains one or more non-degenerate characters
Returns
-------
bool
Indicates whether there are one or more occurrences of
non-degenerate characters in the biological sequence.
See Also
--------
nondegenerates
degenerates
has_degenerates
Examples
--------
>>> from skbio import DNA
>>> s = DNA('NWNNNNNN')
>>> s.has_nondegenerates()
False
>>> t = DNA('ANCACWWGACGTT')
>>> t.has_nondegenerates()
True
"""
return bool(self.nondegenerates().any())
def degap(self):
"""Return a new sequence with gap characters removed.
Returns
-------
IUPACSequence
A new sequence with all gap characters removed.
See Also
--------
gap_chars
Notes
-----
The type, ID, and description of the result will be the same as the
biological sequence. If quality scores are present, they will be
filtered in the same manner as the sequence characters and included in
the resulting degapped sequence.
Examples
--------
>>> from skbio import DNA
>>> s = DNA('GGTC-C--ATT-C.', quality=range(14))
>>> t = s.degap()
>>> t
DNA('GGTCCATTC', length=9, quality=[0, 1, 2, 3, 5, 8, 9, 10, 12])
"""
return self[np.invert(self.gaps())]
def expand_degenerates(self):
"""Yield all possible non-degenerate versions of the sequence.
Yields
------
IUPACSequence
Non-degenerate version of the sequence.
See Also
--------
degenerate_map
Notes
-----
There is no guaranteed ordering to the non-degenerate sequences that
are yielded.
Each non-degenerate sequence will have the same type, ID, description,
and quality scores as the biological sequence.
Examples
--------
>>> from skbio import DNA
>>> seq = DNA('TRG')
>>> seq_generator = seq.expand_degenerates()
>>> for s in sorted(seq_generator, key=str):
... s
DNA('TAG', length=3)
DNA('TGG', length=3)
"""
degen_chars = self.degenerate_map
nonexpansion_chars = self.nondegenerate_chars.union(self.gap_chars)
expansions = []
for char in self:
char = str(char)
if char in nonexpansion_chars:
expansions.append(char)
else:
expansions.append(degen_chars[char])
result = product(*expansions)
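# e.g. for DNA('TRG'): expansions == ['T', {'A', 'G'}, 'G'], so
# product(*expansions) yields ('T', 'A', 'G') and ('T', 'G', 'G')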
return (self._to(sequence=''.join(nondegen_seq)) for nondegen_seq in
result)
def find_motifs(self, motif_type, min_length=1, ignore=None):
"""Search the biological sequence for motifs.
Options for `motif_type`:
Parameters
----------
motif_type : str
Type of motif to find.
min_length : int, optional
Only motifs at least as long as `min_length` will be returned.
ignore : 1D array_like (bool), optional
Boolean vector indicating positions to ignore when matching.
Yields
------
slice
Location of the motif in the biological sequence.
Raises
------
ValueError
If an unknown `motif_type` is specified.
Examples
--------
>>> from skbio import DNA
>>> s = DNA('ACGGGGAGGCGGAG')
>>> for motif_slice in s.find_motifs('purine-run', min_length=2):
... motif_slice
... s[motif_slice]
slice(2, 9, None)
DNA('GGGGAGG', length=7)
slice(10, 14, None)
DNA('GGAG', length=4)
Gap characters can disrupt motifs:
>>> s = DNA('GG-GG')
>>> for motif_slice in s.find_motifs('purine-run'):
... motif_slice
slice(0, 2, None)
slice(3, 5, None)
Gaps can be ignored by passing the gap boolean vector to `ignore`:
>>> s = DNA('GG-GG')
>>> for motif_slice in s.find_motifs('purine-run', ignore=s.gaps()):
... motif_slice
slice(0, 5, None)
"""
if motif_type not in self._motifs:
raise ValueError("Not a known motif (%r) for this sequence (%s)." %
(motif_type, self.__class__.__name__))
return self._motifs[motif_type](self, min_length, ignore)
@overrides(Sequence)
def _constructor(self, **kwargs):
return self.__class__(validate=False, case_insensitive=False, **kwargs)
_motifs = MiniRegistry()
# Leave this at the bottom
_motifs.interpolate(IUPACSequence, "find_motifs")
|
jensreeder/scikit-bio
|
skbio/sequence/_iupac_sequence.py
|
Python
|
bsd-3-clause
| 14,348
|
[
"scikit-bio"
] |
43581f76d8bbadf7a378eacc0673deec50dd0d18a64f8617424ba45367f11b77
|
# -*- coding: utf-8 -*-
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~IMPORTS~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
# Standard library imports
from collections import *
import warnings
# Third party imports
import numpy as np
import pandas as pd
import pysam as ps
# Local lib import
from pycoQC.common import *
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~GLOBAL SETTINGS~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
# Silence futurewarnings
warnings.filterwarnings("ignore", category=FutureWarning)
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~MAIN CLASS~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
class pycoQC_parse ():
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~INIT METHOD~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
def __init__ (self,
summary_file:str,
barcode_file:str="",
bam_file:str="",
runid_list:list=[],
filter_calibration:bool=False,
filter_duplicated:bool=False,
min_barcode_percent:float=0.1,
cleanup:bool=True,
verbose:bool=False,
quiet:bool=False):
"""
Parse Albacore sequencing_summary.txt file and clean up the data
* summary_file
Path to the sequencing_summary generated by Albacore 1.0.0+ (read_fast5_basecaller.py) / Guppy 2.1.3+ (guppy_basecaller).
One can also pass multiple space separated file paths or a UNIX style regex matching multiple files
* barcode_file
Path to the barcode_file generated by Guppy 2.1.3+ (guppy_barcoder) or Deepbinner 0.2.0+. This is not a required file.
One can also pass multiple space separated file paths or a UNIX style regex matching multiple files
* bam_file
Path to a Bam file corresponding to reads in the summary_file. Preferably aligned with Minimap2
One can also pass multiple space separated file paths or a UNIX style regex matching multiple files
* runid_list
Select only specific runids to be analysed. Can also be used to force pycoQC to order the runids for
temporal plots, if the sequencing_summary file contains several successive runs. By default pycoQC analyses
all the runids in the file and uses the runid order as defined in the file.
* filter_calibration
If True, reads flagged as calibration strands by the software are removed
* filter_duplicated
If True, duplicated read_ids are removed but the first occurrence is kept (Guppy sometimes outputs the same read multiple times)
* min_barcode_percent
Minimal percent of total reads required to retain a barcode label. Barcodes below this threshold are set to `unclassified`.
"""
# Set logging level
self.logger = get_logger(name=__name__, verbose=verbose, quiet=quiet)
# Save self variables
self.runid_list = runid_list
self.filter_calibration = filter_calibration
self.filter_duplicated = filter_duplicated
self.min_barcode_percent = min_barcode_percent
self.cleanup = cleanup
# Init object counter
self.counter = OrderedDict()
# Check input files
self.logger.warning ("Check input data files")
# Expand file names and test readability
self.summary_files_list = expand_file_names(summary_file)
self.logger.debug ("\t\tSequencing summary files found: {}".format(" ".join(self.summary_files_list)))
self.counter["Summary files found"] = len(self.summary_files_list)
if barcode_file:
self.barcode_files_list = expand_file_names(barcode_file)
self.logger.debug ("\t\tBarcode files found: {}".format(" ".join(self.barcode_files_list)))
self.counter["Barcode files found"] = len(self.barcode_files_list)
else:
self.barcode_files_list =[]
if bam_file:
self.bam_file_list = expand_file_names(bam_file, bam_check=True)
self.logger.debug ("\t\tBam files found: {}".format(" ".join(self.bam_file_list)))
self.counter["Bam files found"] = len(self.bam_file_list)
else:
self.bam_file_list =[]
self.logger.warning ("Parse data files")
summary_reads_df = self._parse_summary()
barcode_reads_df = self._parse_barcode()
bam_reads_df, self.alignments_df, self.ref_len_dict = self._parse_bam()
self.logger.warning ("Merge data")
self.reads_df = self._merge_reads_df(summary_reads_df, barcode_reads_df, bam_reads_df)
# Cleanup data
if self.cleanup:
self.logger.warning("Cleaning data")
self.reads_df = self._clean_reads_df(self.reads_df)
def __str__(self):
return dict_to_str(self.counter)
def __repr__(self):
return "[{}]\n".format(self.__class__.__name__)
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~PRIVATE METHODS~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
def _parse_summary (self):
""""""
self.logger.debug ("\tParse summary files")
df = merge_files_to_df (self.summary_files_list)
if self.cleanup:
# Standardise col names for all types of files
self.logger.debug ("\tRename summary sequencing columns")
rename_colnames = {
"sequence_length_template":"read_len",
"sequence_length_2d":"read_len",
"sequence_length":"read_len",
"mean_qscore_template":"mean_qscore",
"mean_qscore_2d":"mean_qscore",
"calibration_strand_genome_template":"calibration",
"barcode_arrangement":"barcode"}
df = df.rename(columns=rename_colnames)
# Verify the required and optional columns, Drop unused fields
self.logger.debug ("\tVerifying fields and discarding unused columns")
df = self._select_df_columns (
df = df,
required_colnames = ["read_id", "run_id", "channel", "start_time", "read_len", "mean_qscore"],
optional_colnames = ["calibration", "barcode"])
# Collect stats
n = len(df)
self.logger.debug ("\t\t{:,} reads found in initial file".format(n))
self.counter["Initial reads"] = n
return df
def _parse_barcode (self):
""""""
if not self.barcode_files_list:
return pd.DataFrame()
self.logger.debug ("\tParse barcode files")
df = merge_files_to_df (self.barcode_files_list)
# check presence of barcode details
if "read_id" in df and "barcode_arrangement" in df:
self.logger.debug ("\t\tFound valid Guppy barcode file")
df = df [["read_id", "barcode_arrangement"]]
df = df.rename(columns={"barcode_arrangement":"barcode"})
elif "read_ID" in df and "barcode_call" in df:
self.logger.debug ("\t\tFound valid Deepbinner barcode file")
df = df [["read_ID", "barcode_call"]]
df = df.rename(columns={"read_ID":"read_id", "barcode_call":"barcode"})
df['barcode'].replace("none", "unclassified", inplace=True)
else:
raise pycoQCError ("File {} does not contain required barcode information".format(fp))
n = len(df[df['barcode']!="unclassified"])
self.logger.debug ("\t\t{:,} reads with barcodes assigned".format(n))
self.counter["Reads with barcodes"] = n
return df
def _parse_bam (self):
""""""
if not self.bam_file_list:
return (pd.DataFrame(), pd.DataFrame(), OrderedDict())
# Init collections
ref_len_dict = OrderedDict()
alignments_dict = Counter()
read_dict = OrderedDict ()
for bam_fn in self.bam_file_list:
with ps.AlignmentFile(bam_fn, "rb") as bam:
# Save reference lengths information
for ref_id, ref_len in zip(bam.references, bam.lengths):
if not ref_id in ref_len_dict:
ref_len_dict[ref_id] = ref_len
# Parse reads
for read in bam:
if read.is_unmapped:
alignments_dict["Unmapped"]+=1
elif read.is_secondary:
alignments_dict["Secondary"]+=1
elif read.is_supplementary:
alignments_dict["Suplementary"]+=1
elif read.query_name in read_dict:
alignments_dict["Duplicated"]+=1
else:
alignments_dict["Primary"]+=1
read_dict[read.query_name] = self._get_read_stats(read)
# Convert aligments_dict to df
if alignments_dict:
alignments_df = pd.DataFrame.from_dict(alignments_dict, orient="index")
alignments_df.reset_index(inplace=True)
alignments_df.columns=["Alignments", "Counts"]
alignments_df["Percents"] = (alignments_df["Counts"]/alignments_df["Counts"].sum()*100).round(2)
else:
alignments_df = pd.DataFrame()
# Convert read_dict to df
if read_dict:
read_df = pd.DataFrame.from_dict(read_dict, orient="index")
read_df.index.name="read_id"
read_df.reset_index(inplace=True)
else:
read_df = pd.DataFrame()
return (read_df, alignments_df, ref_len_dict)
def _merge_reads_df(self, summary_reads_df, barcode_reads_df, bam_reads_df):
""""""
df = summary_reads_df
# Merge df and fill in missing barcode values
if not barcode_reads_df.empty:
df = pd.merge(df, barcode_reads_df, on="read_id", how="left")
df['barcode'].fillna('unclassified', inplace=True)
# Merge df and fill in missing barcode values
if not bam_reads_df.empty:
df = pd.merge(df, bam_reads_df, on="read_id", how="left")
return df
def _clean_reads_df (self, df):
""""""
# Drop lines containing NA values
self.logger.info ("\tDiscarding lines containing NA values")
l = len(df)
df = df.dropna(subset=["read_id", "run_id", "channel", "start_time", "read_len", "mean_qscore"])
n=l-len(df)
self.logger.info ("\t\t{:,} reads discarded".format(n))
self.counter["Reads with NA values discarded"] = n
if len(df) <= 1:
raise pycoQCError("No valid read left after NA values filtering")
# Filter out zero length reads
self.logger.info ("\tFiltering out zero length reads")
l = len(df)
df = df[(df["read_len"] > 0)]
n=l-len(df)
self.logger.info ("\t\t{:,} reads discarded".format(n))
self.counter["Zero length reads discarded"] = n
if len(df) <= 1:
raise pycoQCError("No valid read left after zero_len filtering")
# Filter out reads with duplicated read_id
if self.filter_duplicated:
self.logger.info ("\tFiltering out duplicated reads")
l = len(df)
df = df[~df.duplicated(subset="read_id", keep='first')]
n=l-len(df)
self.logger.info ("\t\t{:,} reads discarded".format(n))
self.counter["Duplicated reads discarded"] = n
if len(df) <= 1:
raise pycoQCError("No valid read left after calibration strand filtering")
# Filter out calibration strand reads if the "calibration_strand_genome_template" field is available
if self.filter_calibration and "calibration" in df:
self.logger.info ("\tFiltering out calibration strand reads")
l = len(df)
df = df[(df["calibration"].isin(["filtered_out", "no_match", "*"]))]
n=l-len(df)
self.logger.info ("\t\t{:,} reads discarded".format(n))
self.counter["Calibration reads discarded"] = n
if len(df) <= 1:
raise pycoQCError("No valid read left after calibration strand filtering")
# Filter and reorder based on runid_list list if passed by user
if self.runid_list:
self.logger.info ("\tSelecting run_ids passed by user")
l = len(df)
df = df[(df["run_id"].isin(self.runid_list))]
n=l-len(df)
self.logger.debug ("\t\t{:,} reads discarded".format(n))
self.counter["Excluded runid reads discarded"] = n
if len(df) <= 1:
raise pycoQCError("No valid read left after run ID filtering")
runid_list = self.runid_list
# Else sort the runids by output per time assuming that the throughput decreases over time
else:
self.logger.info ("\tSorting run IDs by decreasing throughput")
d = {}
for run_id, sdf in df.groupby("run_id"):
d[run_id] = len(sdf)/np.ptp(sdf["start_time"])
runid_list = [i for i, j in sorted (d.items(), key=lambda t: t[1], reverse=True)]
self.logger.info ("\t\tRun-id order {}".format(runid_list))
# Modify start time per run ids to order them following the runid_list
self.logger.info ("\tReordering runids")
increment_time = 0
runid_start = OrderedDict()
for runid in runid_list:
self.logger.info ("\t\tProcessing reads with Run_ID {} / time offset: {}".format(runid, increment_time))
max_val = df['start_time'][df["run_id"] == runid].max()
df.loc[df["run_id"] == runid, 'start_time'] += increment_time
runid_start[runid] = increment_time
increment_time += max_val+1
df = df.sort_values ("start_time")
# Unset low frequency barcodes
if "barcode" in df and self.min_barcode_percent:
self.logger.info ("\tCleaning up low frequency barcodes")
l = (df["barcode"]=="unclassified").sum()
barcode_counts = df["barcode"][df["barcode"]!="unclassified"].value_counts()
cutoff = int(barcode_counts.sum()*self.min_barcode_percent/100)
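# e.g. 1,000,000 classified reads with min_barcode_percent=0.1 give
# cutoff == 1000 reads per barcode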
low_barcode = barcode_counts[barcode_counts<cutoff].index
df.loc[df["barcode"].isin(low_barcode), "barcode"] = "unclassified"
n= int((df["barcode"]=="unclassified").sum()-l)
self.logger.info ("\t\t{:,} reads with low frequency barcode unset".format(n))
self.counter["Reads with low frequency barcode unset"] = n
# Cast values to required types
self.logger.info ("\tCast value to appropriate type")
df = df.astype({'channel':"uint16","start_time":"float32","read_len":"uint32","mean_qscore":"float32"})
# Reindex final df
self.logger.info ("\tReindexing dataframe by read_ids")
df = df.reset_index (drop=True)
df = df.set_index ("read_id")
self.logger.info ("\t\t{:,} Final valid reads".format(len(df)))
# Save final df
self.counter["Valid reads"] = len(df)
if len(df) < 500:
self.logger.warning ("WARNING: Low number of reads found. This is likely to lead to errors when trying to generate plots")
return df
def _get_read_stats(self, read):
""""""
d = OrderedDict()
# Extract general stats
d["ref_id"] = read.reference_name
d["ref_start"] = read.reference_start
d["ref_end"] = read.reference_end
d["align_len"] = read.query_alignment_length
d["mapq"] = read.mapping_quality
# Extract indel and soft_clip from cigar
c_stat = read.get_cigar_stats()[0]
d["insertion"] = c_stat[1]
d["deletion"] = c_stat[2]
d["soft_clip"] = c_stat[4]
# Compute alignment score from NM field if available
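# identity_freq = (align_len - edit_dist) / align_len, i.e. roughly the
# fraction of aligned bases that match the reference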
if read.has_tag("NM"):
edit_dist = read.get_tag("NM")
d["mismatch"] = edit_dist-(d["deletion"]+d["insertion"])
try:
d["identity_freq"] = (d["align_len"]-edit_dist)/d["align_len"]
except ZeroDivisionError:
d["identity_freq"] = 0
# If not NM try to compute score from MD field
elif read.has_tag("MD"):
md_err = 0
for i in read.get_tag("MD"):
if i in ["A","T","C","G","a","t","c","g"]:
md_err += 1
d["mismatch"] = md_err-d["deletion"]
edit_dist = d["mismatch"]+d["insertion"]+d["deletion"]
try:
d["identity_freq"] = (d["align_len"]-edit_dist)/d["align_len"]
except ZeroDivisionError:
d["identity_freq"] = 0
return d
def _select_df_columns(self, df, required_colnames, optional_colnames):
""""""
col_found = []
# Verify the presence of the columns required for pycoQC
for col in required_colnames:
if col in df:
col_found.append(col)
else:
raise pycoQCError("Column {} not found in the provided sequence_summary file".format(col))
for col in optional_colnames:
if col in df:
col_found.append(col)
return df[col_found]
|
a-slide/pycoQC
|
pycoQC/pycoQC_parse.py
|
Python
|
gpl-3.0
| 17,123
|
[
"pysam"
] |
d6eed3eebe33d22cec52affc08cc9d4e572b3901a8b347c7fedd892e8e9925b4
|
#!/usr/bin/env python
# vim: set fileencoding=utf-8 :
# Andre Anjos <andre.anjos@idiap.ch>
# Thu 30 Jan 08:45:49 2014 CET
bob_packages = ['bob.core', 'bob.io.base', 'bob.sp', 'bob.math']
from setuptools import setup, find_packages, dist
dist.Distribution(dict(setup_requires=['bob.extension', 'bob.blitz'] + bob_packages))
import bob.extension.utils
from bob.blitz.extension import Extension, Library, build_ext
from bob.extension.utils import load_requirements
build_requires = load_requirements()
# Define package version
version = open("version.txt").read().rstrip()
import os
packages = ['boost']
boost_modules = ['system']
class vl:
def __init__ (self, only_static=False):
"""
Searches for libvl in stock locations. Allows user to override.
If the user sets the environment variable BOB_PREFIX_PATH, that prefixes
the standard path locations.
Parameters:
only_static, boolean
A flag that indicates if we intend to link against the static library only.
This makes the library search disregard shared libraries.
"""
import os
self.name = 'vlfeat'
header = 'vl/sift.h'
module = 'vl'
self.include_directories = []
self.libraries = []
self.library_directories = []
self.macros = []
# get include directory
candidates = bob.extension.utils.find_header(header)
if not candidates:
# raise RuntimeError("could not find %s's `%s' - have you installed %s on this machine?" % (self.name, header, self.name))
return
directory = os.path.dirname(candidates[0])
# find library
prefix = os.path.dirname(os.path.dirname(directory))
candidates = bob.extension.utils.find_library(module, prefixes=[prefix], only_static=only_static)
if not candidates:
# raise RuntimeError("cannot find required %s binary module `%s' - make sure libvlfeat-dev is installed on `%s'" % (self.name, module, prefix))
return
# include directories
self.include_directories = [os.path.normpath(directory)]
# libraries
name, ext = os.path.splitext(os.path.basename(candidates[0]))
if ext in ['.so', '.a', '.dylib', '.dll']:
self.libraries.append(name[3:]) #strip 'lib' from the name
else: #link against the whole thing
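# a leading ':' asks GNU ld to link against this exact file name
# (e.g. -l:libvl.so.1) rather than the usual lib<name> convention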
self.libraries.append(':' + os.path.basename(candidates[0]))
# library path
self.library_directories = [os.path.dirname(candidates[0])]
# macros
self.macros = [('HAVE_%s' % self.name.upper(), '1')]
vl_pkg = vl()
system_include_dirs = vl_pkg.include_directories
setup(
name='bob.ip.base',
version=version,
description='Basic Image Processing Utilities for Bob',
url='http://github.com/bioidiap/bob.ip.base',
license='BSD',
author='Andre Anjos',
author_email='andre.anjos@idiap.ch',
long_description=open('README.rst').read(),
packages=find_packages(),
include_package_data=True,
zip_safe=False,
setup_requires = build_requires,
install_requires = build_requires,
ext_modules = [
Extension("bob.ip.base.version",
[
"bob/ip/base/version.cpp",
],
bob_packages = bob_packages,
packages = packages,
boost_modules = boost_modules,
version = version,
),
Library("bob.ip.base.bob_ip_base",
[
"bob/ip/base/cpp/GeomNorm.cpp",
"bob/ip/base/cpp/FaceEyesNorm.cpp",
"bob/ip/base/cpp/Affine.cpp",
"bob/ip/base/cpp/LBP.cpp",
"bob/ip/base/cpp/LBPTop.cpp",
"bob/ip/base/cpp/DCTFeatures.cpp",
"bob/ip/base/cpp/TanTriggs.cpp",
"bob/ip/base/cpp/Gaussian.cpp",
"bob/ip/base/cpp/MultiscaleRetinex.cpp",
"bob/ip/base/cpp/WeightedGaussian.cpp",
"bob/ip/base/cpp/SelfQuotientImage.cpp",
"bob/ip/base/cpp/GaussianScaleSpace.cpp",
"bob/ip/base/cpp/SIFT.cpp",
"bob/ip/base/cpp/HOG.cpp",
"bob/ip/base/cpp/GLCM.cpp",
"bob/ip/base/cpp/Wiener.cpp",
],
packages = packages,
boost_modules = boost_modules,
bob_packages = bob_packages,
system_include_dirs = vl_pkg.include_directories,
version = version,
library_dirs = vl_pkg.library_directories,
libraries = vl_pkg.libraries,
define_macros = vl_pkg.macros,
),
Extension("bob.ip.base._library",
[
"bob/ip/base/auxiliary.cpp",
"bob/ip/base/geom_norm.cpp",
"bob/ip/base/face_eyes_norm.cpp",
"bob/ip/base/affine.cpp",
"bob/ip/base/lbp.cpp",
"bob/ip/base/lbp_top.cpp",
"bob/ip/base/dct_features.cpp",
"bob/ip/base/tan_triggs.cpp",
"bob/ip/base/gaussian.cpp",
"bob/ip/base/multiscale_retinex.cpp",
"bob/ip/base/weighted_gaussian.cpp",
"bob/ip/base/self_quotient_image.cpp",
"bob/ip/base/gaussian_scale_space.cpp",
"bob/ip/base/sift.cpp",
"bob/ip/base/vl_feat.cpp",
"bob/ip/base/hog.cpp",
"bob/ip/base/glcm.cpp",
"bob/ip/base/filter.cpp",
"bob/ip/base/wiener.cpp",
"bob/ip/base/main.cpp",
],
packages = packages,
boost_modules = boost_modules,
bob_packages = bob_packages,
system_include_dirs = vl_pkg.include_directories,
version = version,
library_dirs = vl_pkg.library_directories,
libraries = vl_pkg.libraries,
define_macros = vl_pkg.macros,
),
],
cmdclass = {
'build_ext': build_ext
},
classifiers = [
'Framework :: Bob',
'Development Status :: 4 - Beta',
'Intended Audience :: Developers',
'License :: OSI Approved :: BSD License',
'Natural Language :: English',
'Programming Language :: Python',
'Programming Language :: Python :: 3',
'Topic :: Software Development :: Libraries :: Python Modules',
],
)
|
tiagofrepereira2012/bob.ip.base
|
setup.py
|
Python
|
bsd-3-clause
| 5,982
|
[
"Gaussian"
] |
5ef2f079b043cc14e778272ef5d116d8db105c8b144ce6fd565817396de635b6
|
'''
synbiochem (c) University of Manchester 2015
synbiochem is licensed under the MIT License.
To view a copy of this license, visit <http://opensource.org/licenses/MIT/>.
@author: neilswainston
'''
# pylint: disable=too-many-public-methods
import unittest
from sbcdb.mnxref_utils import MnxRefReader
class TestMnxRefReader(unittest.TestCase):
'''Test class for MnxRefReader.'''
def setUp(self):
unittest.TestCase.setUp(self)
reader = MnxRefReader()
self.__chem_data = reader.get_chem_data()
self.__reac_data = reader.get_reac_data()
def test_get_chem_data(self):
'''Tests get_chem_data method.'''
self.assertEqual(self.__chem_data['MNXM1354']['chebi'], 'CHEBI:58282')
def test_get_reac_data(self):
'''Tests get_reac_data method.'''
eqn = '1 MNXM1 + 1 MNXM6 + 1 MNXM97401 = 1 MNXM5 + 1 MNXM97393'
self.assertEqual(self.__reac_data['MNXR62989']['equation'], eqn)
if __name__ == "__main__":
# import sys;sys.argv = ['', 'Test.testName']
unittest.main()
|
synbiochem/biochem4j
|
sbcdb/test/test_mnxref_utils.py
|
Python
|
mit
| 1,061
|
[
"VisIt"
] |
d55d7dc8676a891d192f7b4409a3fe0704fd9eb729412505a181b0582c394869
|
# Copyright (C) 2016 Allen Li
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import re
from setuptools import setup
def find_version(path):
with open(path) as f:
text = f.read()
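# re.M anchors ^ at the start of every line, so this matches a line
# such as __version__ = '0.1.0' anywhere in the file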
version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]",
text, re.M)
if version_match:
return version_match.group(1)
raise RuntimeError("Unable to find version string.")
setup(
name='mir.dlsite',
version=find_version('mir/dlsite/__init__.py'),
description='API for DLsite',
long_description='',
keywords='',
url='https://github.com/darkfeline/mir.dlsite',
author='Allen Li',
author_email='darkfeline@felesatra.moe',
classifiers=[
'Development Status :: 4 - Beta',
'Environment :: Console',
'Intended Audience :: Developers',
'Programming Language :: Python :: 3.9',
],
packages=['mir.dlsite'],
install_requires=[
'beautifulsoup4~=4.6',
'lxml~=4.0',
],
)
|
darkfeline/mir.dlsite
|
setup.py
|
Python
|
apache-2.0
| 1,507
|
[
"MOE"
] |
9010242d8bbfb739dd07cb32fc90c159f6ab1285a7e5049b2c2958900a476fd9
|
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import sys
from bigdl.dllib.net.utils import find_placeholders, _check_the_same
from bigdl.orca.tfpark.tfnet import TFNet
from bigdl.orca.tfpark.tf_dataset import TFNdarrayDataset, check_data_compatible
from bigdl.orca.tfpark.tf_dataset import _standarize_feature_dataset
if sys.version >= '3':
long = int
unicode = str
class TFPredictor:
def __init__(self, sess, outputs, inputs=None, dataset=None):
'''
TFPredictor takes a list of TensorFlow tensors as the model outputs,
feeds all the elements of a TFDataset to produce those outputs, and
returns a Spark RDD in which each element represents the model
prediction for the corresponding input element.
:param sess: the current TensorFlow Session, you should first use this session
to load the trained variables then pass into TFPredictor
:param outputs: the output tensors of the TensorFlow model
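Example (a minimal sketch; `sess`, `logits` and the TFDataset feeding
it are assumed to have been created elsewhere)::

    predictor = TFPredictor(sess, [logits])
    result_rdd = predictor.predict()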
'''
if inputs is None:
dataset, inputs = TFPredictor._get_datasets_and_inputs(outputs)
self.sess = sess
self.dataset = dataset
self.inputs = inputs
self.tfnet = TFNet.from_session(sess, self.inputs, outputs)
if self.dataset.batch_per_thread <= 0:
raise ValueError("You should set batch_per_thread on TFDataset " +
"instead of batch_size for prediction")
@staticmethod
def _get_datasets_and_inputs(outputs):
import tensorflow as tf
all_required_inputs = find_placeholders(outputs)
dataset = tf.get_collection(all_required_inputs[0].name)[0]
inputs = dataset.tensors
_check_the_same(all_required_inputs, inputs)
return dataset, inputs
@classmethod
def from_outputs(cls, sess, outputs):
dataset, inputs = TFPredictor._get_datasets_and_inputs(outputs)
return cls(sess, outputs, inputs, dataset)
@classmethod
def from_keras(cls, keras_model, dataset):
import tensorflow.keras.backend as K
sess = K.get_session()
outputs = keras_model.outputs
inputs = keras_model.inputs
check_data_compatible(dataset, keras_model, mode="inference")
if isinstance(dataset, TFNdarrayDataset):
dataset = _standarize_feature_dataset(dataset, keras_model)
return cls(sess, outputs, inputs, dataset)
def predict(self):
return self.tfnet.predict(self.dataset.get_prediction_data(), mini_batch=True)
|
intel-analytics/BigDL
|
python/orca/src/bigdl/orca/tfpark/tf_predictor.py
|
Python
|
apache-2.0
| 3,080
|
[
"ORCA"
] |
d5917690716615a56af06f72e24768b5596acc4a6df1facc92134114e801b117
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# This module is also sponsored by E.T.A.I. (www.etai.fr)
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: vmware_guest
short_description: Manages virtual machines in vCenter
description: >
This module can be used to create new virtual machines from templates or other virtual machines,
manage power state of virtual machine such as power on, power off, suspend, shutdown, reboot, restart etc.,
modify various virtual machine components like network, disk, customization etc.,
rename a virtual machine and remove a virtual machine with associated components.
version_added: '2.2'
author:
- Loic Blot (@nerzhul) <loic.blot@unix-experience.fr>
- Philippe Dellaert (@pdellaert) <philippe@dellaert.org>
- Abhijeet Kasurde (@Akasurde) <akasurde@redhat.com>
requirements:
- python >= 2.6
- PyVmomi
notes:
- Please make sure the user used for vmware_guest has the correct level of privileges.
- For example, following is the list of minimum privileges required by users to create virtual machines.
- " DataStore > Allocate Space"
- " Virtual Machine > Configuration > Add New Disk"
- " Virtual Machine > Configuration > Add or Remove Device"
- " Virtual Machine > Inventory > Create New"
- " Network > Assign Network"
- " Resource > Assign Virtual Machine to Resource Pool"
- "Module may require additional privileges as well, which may be required for gathering facts - e.g. ESXi configurations."
- Tested on vSphere 5.5, 6.0 and 6.5
- "For additional information please visit Ansible VMware community wiki - U(https://github.com/ansible/community/wiki/VMware)."
options:
state:
description:
- Specify state of the virtual machine be in.
- 'If C(state) is set to C(present) and virtual machine exists, ensure the virtual machine
configurations conforms to task arguments.'
- 'If C(state) is set to C(absent) and virtual machine exists, then the specified virtual machine
is removed with its associated components.'
- 'If C(state) is set to one of the following C(poweredon), C(poweredoff), C(present), C(restarted), C(suspended)
and the virtual machine does not exist, then the virtual machine is deployed with the given parameters.'
- 'If C(state) is set to C(poweredon) and virtual machine exists with powerstate other than powered on,
then the specified virtual machine is powered on.'
- 'If C(state) is set to C(poweredoff) and virtual machine exists with powerstate other than powered off,
then the specified virtual machine is powered off.'
- 'If C(state) is set to C(restarted) and virtual machine exists, then the virtual machine is restarted.'
- 'If C(state) is set to C(suspended) and virtual machine exists, then the virtual machine is set to suspended mode.'
- 'If C(state) is set to C(shutdownguest) and virtual machine exists, then the virtual machine is shutdown.'
- 'If C(state) is set to C(rebootguest) and virtual machine exists, then the virtual machine is rebooted.'
default: present
choices: [ present, absent, poweredon, poweredoff, restarted, suspended, shutdownguest, rebootguest ]
name:
description:
- Name of the virtual machine to work with.
- Virtual machine names in vCenter are not necessarily unique, which may be problematic, see C(name_match).
    - 'If multiple virtual machines with the same name exist, then C(folder) is a required parameter to
      uniquely identify the virtual machine.'
    - This parameter is required, if C(state) is set to C(poweredon), C(poweredoff), C(present), C(restarted), C(suspended)
      and virtual machine does not exist.
- This parameter is case sensitive.
required: yes
name_match:
description:
    - If multiple virtual machines match the name, use the first or last found.
default: 'first'
choices: [ first, last ]
uuid:
description:
    - UUID of the virtual machine to manage if known; this is VMware's unique identifier.
    - This is required if C(name) is not supplied.
    - If virtual machine does not exist, then this parameter is ignored.
- Please note that a supplied UUID will be ignored on virtual machine creation, as VMware creates the UUID internally.
template:
description:
- Template or existing virtual machine used to create new virtual machine.
- If this value is not set, virtual machine is created without using a template.
- If the virtual machine already exists, this parameter will be ignored.
- This parameter is case sensitive.
aliases: [ 'template_src' ]
is_template:
description:
- Flag the instance as a template.
    - This will mark the given virtual machine as a template.
default: 'no'
type: bool
version_added: '2.3'
folder:
description:
- Destination folder, absolute path to find an existing guest or create the new guest.
- The folder should include the datacenter. ESX's datacenter is ha-datacenter.
- This parameter is case sensitive.
    - This parameter is required while deploying a new virtual machine. version_added 2.5.
    - 'If multiple machines are found with the same name, this parameter is used to uniquely identify
      the virtual machine. version_added 2.5'
- 'Examples:'
- ' folder: /ha-datacenter/vm'
- ' folder: ha-datacenter/vm'
- ' folder: /datacenter1/vm'
- ' folder: datacenter1/vm'
- ' folder: /datacenter1/vm/folder1'
- ' folder: datacenter1/vm/folder1'
- ' folder: /folder1/datacenter1/vm'
- ' folder: folder1/datacenter1/vm'
- ' folder: /folder1/datacenter1/vm/folder2'
hardware:
description:
- Manage virtual machine's hardware attributes.
    - All parameters are case sensitive.
- 'Valid attributes are:'
- ' - C(hotadd_cpu) (boolean): Allow virtual CPUs to be added while the virtual machine is running.'
- ' - C(hotremove_cpu) (boolean): Allow virtual CPUs to be removed while the virtual machine is running.
version_added: 2.5'
- ' - C(hotadd_memory) (boolean): Allow memory to be added while the virtual machine is running.'
- ' - C(memory_mb) (integer): Amount of memory in MB.'
- ' - C(nested_virt) (bool): Enable nested virtualization. version_added: 2.5'
- ' - C(num_cpus) (integer): Number of CPUs.'
    - ' - C(num_cpu_cores_per_socket) (integer): Number of Cores Per Socket. C(num_cpus) should be a multiple of this value.'
- ' - C(scsi) (string): Valid values are C(buslogic), C(lsilogic), C(lsilogicsas) and C(paravirtual) (default).'
    - ' - C(memory_reservation) (integer): Amount of memory in MB to set as the memory resource reservation for the virtual machine. version_added: 2.5'
- " - C(memory_reservation_lock) (boolean): If set true, memory resource reservation for the virtual machine
will always be equal to the virtual machine's memory size. version_added: 2.5"
    - ' - C(max_connections) (integer): Maximum number of active remote display connections for the virtual machine.
version_added: 2.5.'
- ' - C(mem_limit) (integer): The memory utilization of a virtual machine will not exceed this limit. Unit is MB.
version_added: 2.5'
- ' - C(mem_reservation) (integer): The amount of memory resource that is guaranteed available to the virtual
machine. Unit is MB. version_added: 2.5'
- ' - C(cpu_limit) (integer): The CPU utilization of a virtual machine will not exceed this limit. Unit is MHz.
version_added: 2.5'
- ' - C(cpu_reservation) (integer): The amount of CPU resource that is guaranteed available to the virtual machine.
Unit is MHz. version_added: 2.5'
    - ' - C(version) (integer): The virtual machine hardware version. Default is 10 (ESXi 5.5 and onwards).
Please check VMware documentation for correct virtual machine hardware version.
Incorrect hardware version may lead to failure in deployment. If hardware version is already equal to the given
version then no action is taken. version_added: 2.6'
guest_id:
description:
- Set the guest ID.
- This parameter is case sensitive.
- 'Examples:'
- " virtual machine with RHEL7 64 bit, will be 'rhel7_64Guest'"
- " virtual machine with CensOS 64 bit, will be 'centos64Guest'"
- " virtual machine with Ubuntu 64 bit, will be 'ubuntu64Guest'"
- This field is required when creating a virtual machine.
- >
Valid values are referenced here:
U(http://pubs.vmware.com/vsphere-6-5/topic/com.vmware.wssdk.apiref.doc/vim.vm.GuestOsDescriptor.GuestOsIdentifier.html)
version_added: '2.3'
disk:
description:
- A list of disks to add.
- This parameter is case sensitive.
- Resizing disks is not supported.
- Removing existing disks of the virtual machine is not supported.
- 'Valid attributes are:'
- ' - C(size_[tb,gb,mb,kb]) (integer): Disk storage size in specified unit.'
- ' - C(type) (string): Valid values are:'
- ' - C(thin) thin disk'
- ' - C(eagerzeroedthick) eagerzeroedthick disk, added in version 2.5'
    - ' Default: C(None), which creates a thick disk with no eager zeroing.'
- ' - C(datastore) (string): Datastore to use for the disk. If C(autoselect_datastore) is enabled, filter datastore selection.'
    - ' - C(autoselect_datastore) (bool): Select the least used datastore. Specify only if C(datastore) is not specified.'
- ' - C(disk_mode) (string): Type of disk mode. Added in version 2.6'
- ' - Available options are :'
- ' - C(persistent): Changes are immediately and permanently written to the virtual disk. This is default.'
- ' - C(independent_persistent): Same as persistent, but not affected by snapshots.'
- ' - C(independent_nonpersistent): Changes to virtual disk are made to a redo log and discarded at power off, but not affected by snapshots.'
cdrom:
description:
- A CD-ROM configuration for the virtual machine.
- 'Valid attributes are:'
- ' - C(type) (string): The type of CD-ROM, valid options are C(none), C(client) or C(iso). With C(none) the CD-ROM will be disconnected but present.'
- ' - C(iso_path) (string): The datastore path to the ISO file to use, in the form of C([datastore1] path/to/file.iso). Required if type is set C(iso).'
version_added: '2.5'
resource_pool:
description:
- Use the given resource pool for virtual machine operation.
- This parameter is case sensitive.
    - Resource pool should be a child of the selected host's parent.
version_added: '2.3'
wait_for_ip_address:
description:
- Wait until vCenter detects an IP address for the virtual machine.
    - This requires vmware-tools (vmtoolsd) to be working properly after creation.
- "vmware-tools needs to be installed on the given virtual machine in order to work with this parameter."
default: 'no'
type: bool
state_change_timeout:
description:
- If the C(state) is set to C(shutdownguest), by default the module will return immediately after sending the shutdown signal.
- If this argument is set to a positive integer, the module will instead wait for the virtual machine to reach the poweredoff state.
- The value sets a timeout in seconds for the module to wait for the state change.
default: 0
version_added: '2.6'
snapshot_src:
description:
- Name of the existing snapshot to use to create a clone of a virtual machine.
- This parameter is case sensitive.
version_added: '2.4'
linked_clone:
description:
- Whether to create a linked clone from the snapshot specified.
default: 'no'
type: bool
version_added: '2.4'
force:
description:
- Ignore warnings and complete the actions.
    - This parameter is useful when removing a virtual machine which is in a powered on state.
    - 'This module reflects the VMware vCenter API and UI workflow, as such, in some cases the C(force) flag will
      be mandatory to perform the action to ensure you are certain the action has to be taken, no matter what the consequence.
      This is specifically the case for removing a powered on virtual machine when C(state) is set to C(absent).'
default: 'no'
type: bool
datacenter:
description:
- Destination datacenter for the deploy operation.
- This parameter is case sensitive.
default: ha-datacenter
cluster:
description:
- The cluster name where the virtual machine will run.
- This is a required parameter, if C(esxi_hostname) is not set.
- C(esxi_hostname) and C(cluster) are mutually exclusive parameters.
- This parameter is case sensitive.
version_added: '2.3'
esxi_hostname:
description:
- The ESXi hostname where the virtual machine will run.
- This is a required parameter, if C(cluster) is not set.
- C(esxi_hostname) and C(cluster) are mutually exclusive parameters.
- This parameter is case sensitive.
annotation:
description:
- A note or annotation to include in the virtual machine.
version_added: '2.3'
customvalues:
description:
    - Define a list of custom values to set on a virtual machine.
    - A custom value object takes two fields C(key) and C(value).
    - Incorrect keys and values will be ignored.
version_added: '2.3'
networks:
description:
- A list of networks (in the order of the NICs).
    - Removing NICs is not allowed while reconfiguring the virtual machine.
    - All parameters and VMware object names are case sensitive.
- 'One of the below parameters is required per entry:'
- ' - C(name) (string): Name of the portgroup or distributed virtual portgroup for this interface.
When specifying distributed virtual portgroup make sure given C(esxi_hostname) or C(cluster) is associated with it.'
- ' - C(vlan) (integer): VLAN number for this interface.'
- 'Optional parameters per entry (used for virtual hardware):'
- ' - C(device_type) (string): Virtual network device (one of C(e1000), C(e1000e), C(pcnet32), C(vmxnet2), C(vmxnet3) (default), C(sriov)).'
- ' - C(mac) (string): Customize MAC address.'
- 'Optional parameters per entry (used for OS customization):'
- ' - C(type) (string): Type of IP assignment (either C(dhcp) or C(static)). C(dhcp) is default.'
- ' - C(ip) (string): Static IP address (implies C(type: static)).'
- ' - C(netmask) (string): Static netmask required for C(ip).'
- ' - C(gateway) (string): Static gateway.'
- ' - C(dns_servers) (string): DNS servers for this network interface (Windows).'
- ' - C(domain) (string): Domain name for this network interface (Windows).'
- ' - C(wake_on_lan) (bool): Indicates if wake-on-LAN is enabled on this virtual network adapter. version_added: 2.5'
    - ' - C(start_connected) (bool): Specifies whether the virtual network adapter starts connected when the associated virtual machine powers on. version_added: 2.5'
- ' - C(allow_guest_control) (bool): Enables guest control over whether the connectable device is connected. version_added: 2.5'
version_added: '2.3'
customization:
description:
- Parameters for OS customization when cloning from the template or the virtual machine.
    - Not all operating systems are supported for customization depending on the vCenter version,
      please check VMware documentation for the respective OS customization support.
    - For supported customization operating system matrix, (see U(http://partnerweb.vmware.com/programs/guestOS/guest-os-customization-matrix.pdf))
    - All parameters and VMware object names are case sensitive.
    - Linux-based OSes require the Perl package to be installed for OS customization.
- 'Common parameters (Linux/Windows):'
- ' - C(dns_servers) (list): List of DNS servers to configure.'
- ' - C(dns_suffix) (list): List of domain suffixes, also known as DNS search path (default: C(domain) parameter).'
- ' - C(domain) (string): DNS domain name to use.'
    - ' - C(hostname) (string): Computer hostname (default: shortened C(name) parameter). Allowed characters are alphanumeric (uppercase and lowercase)
      and minus, the rest of the characters are dropped as per RFC 952.'
- 'Parameters related to Windows customization:'
- ' - C(autologon) (bool): Auto logon after virtual machine customization (default: False).'
- ' - C(autologoncount) (int): Number of autologon after reboot (default: 1).'
    - ' - C(domainadmin) (string): User used to join the AD domain (mandatory with C(joindomain)).'
    - ' - C(domainadminpassword) (string): Password used to join the AD domain (mandatory with C(joindomain)).'
- ' - C(fullname) (string): Server owner name (default: Administrator).'
- ' - C(joindomain) (string): AD domain to join (Not compatible with C(joinworkgroup)).'
- ' - C(joinworkgroup) (string): Workgroup to join (Not compatible with C(joindomain), default: WORKGROUP).'
- ' - C(orgname) (string): Organisation name (default: ACME).'
- ' - C(password) (string): Local administrator password.'
- ' - C(productid) (string): Product ID.'
- ' - C(runonce) (list): List of commands to run at first user logon.'
- ' - C(timezone) (int): Timezone (See U(https://msdn.microsoft.com/en-us/library/ms912391.aspx)).'
version_added: '2.3'
vapp_properties:
description:
    - A list of vApp properties.
- 'For full list of attributes and types refer to: U(https://github.com/vmware/pyvmomi/blob/master/docs/vim/vApp/PropertyInfo.rst)'
- 'Basic attributes are:'
- ' - C(id) (string): Property id - required.'
- ' - C(value) (string): Property value.'
- ' - C(type) (string): Value type, string type by default.'
    - ' - C(operation) (string): Set to C(remove) to delete a property; this attribute is required only when removing properties.'
version_added: '2.6'
customization_spec:
description:
- Unique name identifying the requested customization specification.
- This parameter is case sensitive.
- If set, then overrides C(customization) parameter values.
version_added: '2.6'
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = r'''
- name: Create a virtual machine on given ESXi hostname
vmware_guest:
hostname: "{{ vcenter_ip }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: False
folder: /DC1/vm/
name: test_vm_0001
state: poweredon
guest_id: centos64Guest
# This is hostname of particular ESXi server on which user wants VM to be deployed
esxi_hostname: "{{ esxi_hostname }}"
disk:
- size_gb: 10
type: thin
datastore: datastore1
hardware:
memory_mb: 512
num_cpus: 4
scsi: paravirtual
networks:
- name: VM Network
mac: aa:bb:dd:aa:00:14
ip: 10.10.10.100
netmask: 255.255.255.0
device_type: vmxnet3
wait_for_ip_address: yes
delegate_to: localhost
register: deploy_vm
- name: Create a virtual machine from a template
vmware_guest:
hostname: "{{ vcenter_ip }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: False
folder: /testvms
name: testvm_2
state: poweredon
template: template_el7
disk:
- size_gb: 10
type: thin
datastore: g73_datastore
hardware:
memory_mb: 512
num_cpus: 6
num_cpu_cores_per_socket: 3
scsi: paravirtual
memory_reservation: 512
memory_reservation_lock: True
mem_limit: 8096
mem_reservation: 4096
cpu_limit: 8096
cpu_reservation: 4096
max_connections: 5
hotadd_cpu: True
hotremove_cpu: True
hotadd_memory: False
version: 12 # Hardware version of virtual machine
cdrom:
type: iso
iso_path: "[datastore1] livecd.iso"
networks:
- name: VM Network
mac: aa:bb:dd:aa:00:14
wait_for_ip_address: yes
delegate_to: localhost
register: deploy
- name: Clone a virtual machine from Template and customize
vmware_guest:
hostname: "{{ vcenter_ip }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: False
datacenter: datacenter1
cluster: cluster
name: testvm-2
template: template_windows
networks:
- name: VM Network
ip: 192.168.1.100
netmask: 255.255.255.0
gateway: 192.168.1.1
mac: aa:bb:dd:aa:00:14
domain: my_domain
dns_servers:
- 192.168.1.1
- 192.168.1.2
- vlan: 1234
type: dhcp
customization:
autologon: yes
dns_servers:
- 192.168.1.1
- 192.168.1.2
domain: my_domain
password: new_vm_password
runonce:
- powershell.exe -ExecutionPolicy Unrestricted -File C:\Windows\Temp\ConfigureRemotingForAnsible.ps1 -ForceNewSSLCert -EnableCredSSP
delegate_to: localhost
- name: Rename a virtual machine (requires the virtual machine's uuid)
vmware_guest:
hostname: "{{ vcenter_ip }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: False
uuid: "{{ vm_uuid }}"
name: new_name
state: present
delegate_to: localhost
- name: Remove a virtual machine by uuid
vmware_guest:
hostname: "{{ vcenter_ip }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: False
uuid: "{{ vm_uuid }}"
state: absent
delegate_to: localhost
- name: Manipulate vApp properties
vmware_guest:
hostname: "{{ vcenter_ip }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: False
name: vm_name
state: present
vapp_properties:
- id: remoteIP
category: Backup
label: Backup server IP
type: string
value: 10.10.10.1
- id: old_property
operation: remove
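# A minimal sketch, not part of the module's original examples: mark an
# existing virtual machine as a template via the is_template flag documented
# above. The vcenter_* variables and the VM name are assumed placeholders.
- name: Convert an existing virtual machine to a template
  vmware_guest:
    hostname: "{{ vcenter_ip }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: False
    name: testvm_2
    is_template: yes
  delegate_to: localhost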
- name: Set powerstate of a virtual machine to poweroff by using UUID
vmware_guest:
hostname: "{{ vcenter_ip }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: False
uuid: "{{ vm_uuid }}"
state: poweredoff
delegate_to: localhost
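# A minimal sketch, not part of the module's original examples: gracefully
# shut down the guest OS and wait up to 120 seconds for the poweredoff state,
# using the state_change_timeout option documented above.
- name: Shut down guest OS and wait for poweroff
  vmware_guest:
    hostname: "{{ vcenter_ip }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: False
    name: testvm_2
    state: shutdownguest
    state_change_timeout: 120
  delegate_to: localhost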
'''
RETURN = r'''
instance:
description: metadata about the new virtual machine
returned: always
type: dict
sample: None
'''
import re
import time
HAS_PYVMOMI = False
try:
import pyVmomi
from pyVmomi import vim, vmodl
HAS_PYVMOMI = True
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.vmware import (find_obj, gather_vm_facts, get_all_objs,
compile_folder_path_for_object, serialize_spec,
vmware_argument_spec, set_vm_power_state, PyVmomi)
class PyVmomiDeviceHelper(object):
""" This class is a helper to create easily VMWare Objects for PyVmomiHelper """
def __init__(self, module):
self.module = module
self.next_disk_unit_number = 0
@staticmethod
def create_scsi_controller(scsi_type):
scsi_ctl = vim.vm.device.VirtualDeviceSpec()
scsi_ctl.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
if scsi_type == 'lsilogic':
scsi_ctl.device = vim.vm.device.VirtualLsiLogicController()
elif scsi_type == 'paravirtual':
scsi_ctl.device = vim.vm.device.ParaVirtualSCSIController()
elif scsi_type == 'buslogic':
scsi_ctl.device = vim.vm.device.VirtualBusLogicController()
elif scsi_type == 'lsilogicsas':
scsi_ctl.device = vim.vm.device.VirtualLsiLogicSASController()
scsi_ctl.device.deviceInfo = vim.Description()
scsi_ctl.device.slotInfo = vim.vm.device.VirtualDevice.PciBusSlotInfo()
scsi_ctl.device.slotInfo.pciSlotNumber = 16
scsi_ctl.device.controllerKey = 100
scsi_ctl.device.unitNumber = 3
scsi_ctl.device.busNumber = 0
scsi_ctl.device.hotAddRemove = True
scsi_ctl.device.sharedBus = 'noSharing'
scsi_ctl.device.scsiCtlrUnitNumber = 7
return scsi_ctl
@staticmethod
def is_scsi_controller(device):
return isinstance(device, vim.vm.device.VirtualLsiLogicController) or \
isinstance(device, vim.vm.device.ParaVirtualSCSIController) or \
isinstance(device, vim.vm.device.VirtualBusLogicController) or \
isinstance(device, vim.vm.device.VirtualLsiLogicSASController)
@staticmethod
def create_ide_controller():
ide_ctl = vim.vm.device.VirtualDeviceSpec()
ide_ctl.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
ide_ctl.device = vim.vm.device.VirtualIDEController()
ide_ctl.device.deviceInfo = vim.Description()
ide_ctl.device.busNumber = 0
return ide_ctl
@staticmethod
def create_cdrom(ide_ctl, cdrom_type, iso_path=None):
cdrom_spec = vim.vm.device.VirtualDeviceSpec()
cdrom_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
cdrom_spec.device = vim.vm.device.VirtualCdrom()
cdrom_spec.device.controllerKey = ide_ctl.device.key
cdrom_spec.device.key = -1
cdrom_spec.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo()
cdrom_spec.device.connectable.allowGuestControl = True
cdrom_spec.device.connectable.startConnected = (cdrom_type != "none")
if cdrom_type in ["none", "client"]:
cdrom_spec.device.backing = vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo()
elif cdrom_type == "iso":
cdrom_spec.device.backing = vim.vm.device.VirtualCdrom.IsoBackingInfo(fileName=iso_path)
return cdrom_spec
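    # Illustrative usage sketch (comment only, nothing here is executed): both
    # helpers above are static methods, so an ISO-backed CD-ROM spec could be
    # built as:
    #   ide = PyVmomiDeviceHelper.create_ide_controller()
    #   cd = PyVmomiDeviceHelper.create_cdrom(ide, 'iso', iso_path='[datastore1] livecd.iso')
    # yielding a VirtualDeviceSpec whose backing points at the (assumed) ISO path.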
@staticmethod
def is_equal_cdrom(vm_obj, cdrom_device, cdrom_type, iso_path):
if cdrom_type == "none":
return (isinstance(cdrom_device.backing, vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo) and
cdrom_device.connectable.allowGuestControl and
not cdrom_device.connectable.startConnected and
(vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOn or not cdrom_device.connectable.connected))
elif cdrom_type == "client":
return (isinstance(cdrom_device.backing, vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo) and
cdrom_device.connectable.allowGuestControl and
cdrom_device.connectable.startConnected and
(vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOn or cdrom_device.connectable.connected))
elif cdrom_type == "iso":
return (isinstance(cdrom_device.backing, vim.vm.device.VirtualCdrom.IsoBackingInfo) and
cdrom_device.backing.fileName == iso_path and
cdrom_device.connectable.allowGuestControl and
cdrom_device.connectable.startConnected and
(vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOn or cdrom_device.connectable.connected))
def create_scsi_disk(self, scsi_ctl, disk_index=None):
diskspec = vim.vm.device.VirtualDeviceSpec()
diskspec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
diskspec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
diskspec.device = vim.vm.device.VirtualDisk()
diskspec.device.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
diskspec.device.controllerKey = scsi_ctl.device.key
if self.next_disk_unit_number == 7:
raise AssertionError()
if disk_index == 7:
raise AssertionError()
"""
Configure disk unit number.
"""
if disk_index is not None:
diskspec.device.unitNumber = disk_index
self.next_disk_unit_number = disk_index + 1
else:
diskspec.device.unitNumber = self.next_disk_unit_number
self.next_disk_unit_number += 1
        # unit number 7 is reserved for the SCSI controller, increase next index
if self.next_disk_unit_number == 7:
self.next_disk_unit_number += 1
return diskspec
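        # Illustrative note on the logic above: sequential calls without an
        # explicit disk_index hand out unit numbers 0, 1, ..., 6 and then skip
        # to 8, because unit number 7 is reserved for the SCSI controller
        # itself (see scsiCtlrUnitNumber in create_scsi_controller).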
def get_device(self, device_type, name):
nic_dict = dict(pcnet32=vim.vm.device.VirtualPCNet32(),
vmxnet2=vim.vm.device.VirtualVmxnet2(),
vmxnet3=vim.vm.device.VirtualVmxnet3(),
e1000=vim.vm.device.VirtualE1000(),
e1000e=vim.vm.device.VirtualE1000e(),
sriov=vim.vm.device.VirtualSriovEthernetCard(),
)
if device_type in nic_dict:
return nic_dict[device_type]
else:
self.module.fail_json(msg='Invalid device_type "%s"'
' for network "%s"' % (device_type, name))
def create_nic(self, device_type, device_label, device_infos):
nic = vim.vm.device.VirtualDeviceSpec()
nic.device = self.get_device(device_type, device_infos['name'])
nic.device.wakeOnLanEnabled = bool(device_infos.get('wake_on_lan', True))
nic.device.deviceInfo = vim.Description()
nic.device.deviceInfo.label = device_label
nic.device.deviceInfo.summary = device_infos['name']
nic.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo()
nic.device.connectable.startConnected = bool(device_infos.get('start_connected', True))
nic.device.connectable.allowGuestControl = bool(device_infos.get('allow_guest_control', True))
nic.device.connectable.connected = True
if 'mac' in device_infos and self.is_valid_mac_addr(device_infos['mac']):
nic.device.addressType = 'manual'
nic.device.macAddress = device_infos['mac']
else:
nic.device.addressType = 'generated'
return nic
@staticmethod
def is_valid_mac_addr(mac_addr):
"""
Function to validate MAC address for given string
Args:
mac_addr: string to validate as MAC address
Returns: (Boolean) True if string is valid MAC address, otherwise False
"""
mac_addr_regex = re.compile('[0-9a-f]{2}([-:])[0-9a-f]{2}(\\1[0-9a-f]{2}){4}$')
return bool(mac_addr_regex.match(mac_addr))
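    # Behavior sketch for the regex above (lowercase hex only, ':' or '-'
    # separated, and the separator must be consistent due to the \1 backreference):
    #   is_valid_mac_addr('aa:bb:cc:dd:ee:ff')  # True
    #   is_valid_mac_addr('aa-bb-cc-dd-ee-ff')  # True
    #   is_valid_mac_addr('AA:BB:CC:DD:EE:FF')  # False, uppercase is not matched
    #   is_valid_mac_addr('aa:bb-cc:dd-ee:ff')  # False, mixed separators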
class PyVmomiCache(object):
""" This class caches references to objects which are requested multiples times but not modified """
def __init__(self, content, dc_name=None):
self.content = content
self.dc_name = dc_name
self.networks = {}
self.clusters = {}
self.esx_hosts = {}
self.parent_datacenters = {}
def find_obj(self, content, types, name, confine_to_datacenter=True):
""" Wrapper around find_obj to set datacenter context """
result = find_obj(content, types, name)
if result and confine_to_datacenter:
if self.get_parent_datacenter(result).name != self.dc_name:
result = None
objects = self.get_all_objs(content, types, confine_to_datacenter=True)
for obj in objects:
if name is None or obj.name == name:
return obj
return result
def get_all_objs(self, content, types, confine_to_datacenter=True):
""" Wrapper around get_all_objs to set datacenter context """
objects = get_all_objs(content, types)
if confine_to_datacenter:
if hasattr(objects, 'items'):
# resource pools come back as a dictionary
# make a copy
tmpobjs = objects.copy()
for k, v in objects.items():
parent_dc = self.get_parent_datacenter(k)
if parent_dc.name != self.dc_name:
tmpobjs.pop(k, None)
objects = tmpobjs
else:
# everything else should be a list
objects = [x for x in objects if self.get_parent_datacenter(x).name == self.dc_name]
return objects
def get_network(self, network):
if network not in self.networks:
self.networks[network] = self.find_obj(self.content, [vim.Network], network)
return self.networks[network]
def get_cluster(self, cluster):
if cluster not in self.clusters:
self.clusters[cluster] = self.find_obj(self.content, [vim.ClusterComputeResource], cluster)
return self.clusters[cluster]
def get_esx_host(self, host):
if host not in self.esx_hosts:
self.esx_hosts[host] = self.find_obj(self.content, [vim.HostSystem], host)
return self.esx_hosts[host]
def get_parent_datacenter(self, obj):
""" Walk the parent tree to find the objects datacenter """
if isinstance(obj, vim.Datacenter):
return obj
if obj in self.parent_datacenters:
return self.parent_datacenters[obj]
datacenter = None
while True:
if not hasattr(obj, 'parent'):
break
obj = obj.parent
if isinstance(obj, vim.Datacenter):
datacenter = obj
break
self.parent_datacenters[obj] = datacenter
return datacenter
class PyVmomiHelper(PyVmomi):
def __init__(self, module):
super(PyVmomiHelper, self).__init__(module)
self.device_helper = PyVmomiDeviceHelper(self.module)
self.configspec = None
self.change_detected = False
self.customspec = None
self.cache = PyVmomiCache(self.content, dc_name=self.params['datacenter'])
def gather_facts(self, vm):
return gather_vm_facts(self.content, vm)
def remove_vm(self, vm):
# https://www.vmware.com/support/developer/converter-sdk/conv60_apireference/vim.ManagedEntity.html#destroy
if vm.summary.runtime.powerState.lower() == 'poweredon':
self.module.fail_json(msg="Virtual machine %s found in 'powered on' state, "
"please use 'force' parameter to remove or poweroff VM "
"and try removing VM again." % vm.name)
task = vm.Destroy()
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': False, 'failed': True, 'msg': task.info.error.msg}
else:
return {'changed': True, 'failed': False}
def configure_guestid(self, vm_obj, vm_creation=False):
# guest_id is not required when using templates
if self.params['template'] and not self.params['guest_id']:
return
# guest_id is only mandatory on VM creation
if vm_creation and self.params['guest_id'] is None:
self.module.fail_json(msg="guest_id attribute is mandatory for VM creation")
if self.params['guest_id'] and \
(vm_obj is None or self.params['guest_id'].lower() != vm_obj.summary.config.guestId.lower()):
self.change_detected = True
self.configspec.guestId = self.params['guest_id']
def configure_resource_alloc_info(self, vm_obj):
"""
Function to configure resource allocation information about virtual machine
:param vm_obj: VM object in case of reconfigure, None in case of deploy
:return: None
"""
rai_change_detected = False
memory_allocation = vim.ResourceAllocationInfo()
cpu_allocation = vim.ResourceAllocationInfo()
if 'hardware' in self.params:
if 'mem_limit' in self.params['hardware']:
mem_limit = None
try:
mem_limit = int(self.params['hardware'].get('mem_limit'))
except ValueError as e:
self.module.fail_json(msg="hardware.mem_limit attribute should be an integer value.")
memory_allocation.limit = mem_limit
if vm_obj is None or memory_allocation.limit != vm_obj.config.memoryAllocation.limit:
rai_change_detected = True
if 'mem_reservation' in self.params['hardware']:
mem_reservation = None
try:
mem_reservation = int(self.params['hardware'].get('mem_reservation'))
except ValueError as e:
self.module.fail_json(msg="hardware.mem_reservation should be an integer value.")
memory_allocation.reservation = mem_reservation
if vm_obj is None or \
memory_allocation.reservation != vm_obj.config.memoryAllocation.reservation:
rai_change_detected = True
if 'cpu_limit' in self.params['hardware']:
cpu_limit = None
try:
cpu_limit = int(self.params['hardware'].get('cpu_limit'))
except ValueError as e:
self.module.fail_json(msg="hardware.cpu_limit attribute should be an integer value.")
cpu_allocation.limit = cpu_limit
if vm_obj is None or cpu_allocation.limit != vm_obj.config.cpuAllocation.limit:
rai_change_detected = True
if 'cpu_reservation' in self.params['hardware']:
cpu_reservation = None
try:
cpu_reservation = int(self.params['hardware'].get('cpu_reservation'))
except ValueError as e:
self.module.fail_json(msg="hardware.cpu_reservation should be an integer value.")
cpu_allocation.reservation = cpu_reservation
if vm_obj is None or \
cpu_allocation.reservation != vm_obj.config.cpuAllocation.reservation:
rai_change_detected = True
if rai_change_detected:
self.configspec.memoryAllocation = memory_allocation
self.configspec.cpuAllocation = cpu_allocation
self.change_detected = True
def configure_cpu_and_memory(self, vm_obj, vm_creation=False):
# set cpu/memory/etc
if 'hardware' in self.params:
if 'num_cpus' in self.params['hardware']:
try:
num_cpus = int(self.params['hardware']['num_cpus'])
except ValueError as e:
self.module.fail_json(msg="hardware.num_cpus attribute should be an integer value.")
if 'num_cpu_cores_per_socket' in self.params['hardware']:
try:
num_cpu_cores_per_socket = int(self.params['hardware']['num_cpu_cores_per_socket'])
except ValueError as e:
self.module.fail_json(msg="hardware.num_cpu_cores_per_socket attribute "
"should be an integer value.")
if num_cpus % num_cpu_cores_per_socket != 0:
self.module.fail_json(msg="hardware.num_cpus attribute should be a multiple "
"of hardware.num_cpu_cores_per_socket")
self.configspec.numCoresPerSocket = num_cpu_cores_per_socket
if vm_obj is None or self.configspec.numCoresPerSocket != vm_obj.config.hardware.numCoresPerSocket:
self.change_detected = True
self.configspec.numCPUs = num_cpus
if vm_obj is None or self.configspec.numCPUs != vm_obj.config.hardware.numCPU:
self.change_detected = True
# num_cpu is mandatory for VM creation
elif vm_creation and not self.params['template']:
self.module.fail_json(msg="hardware.num_cpus attribute is mandatory for VM creation")
if 'memory_mb' in self.params['hardware']:
try:
self.configspec.memoryMB = int(self.params['hardware']['memory_mb'])
except ValueError:
self.module.fail_json(msg="Failed to parse hardware.memory_mb value."
" Please refer the documentation and provide"
" correct value.")
if vm_obj is None or self.configspec.memoryMB != vm_obj.config.hardware.memoryMB:
self.change_detected = True
# memory_mb is mandatory for VM creation
elif vm_creation and not self.params['template']:
self.module.fail_json(msg="hardware.memory_mb attribute is mandatory for VM creation")
if 'hotadd_memory' in self.params['hardware']:
self.configspec.memoryHotAddEnabled = bool(self.params['hardware']['hotadd_memory'])
if vm_obj is None or self.configspec.memoryHotAddEnabled != vm_obj.config.memoryHotAddEnabled:
self.change_detected = True
if 'hotadd_cpu' in self.params['hardware']:
self.configspec.cpuHotAddEnabled = bool(self.params['hardware']['hotadd_cpu'])
if vm_obj is None or self.configspec.cpuHotAddEnabled != vm_obj.config.cpuHotAddEnabled:
self.change_detected = True
if 'hotremove_cpu' in self.params['hardware']:
self.configspec.cpuHotRemoveEnabled = bool(self.params['hardware']['hotremove_cpu'])
if vm_obj is None or self.configspec.cpuHotRemoveEnabled != vm_obj.config.cpuHotRemoveEnabled:
self.change_detected = True
if 'memory_reservation' in self.params['hardware']:
memory_reservation_mb = 0
try:
memory_reservation_mb = int(self.params['hardware']['memory_reservation'])
except ValueError as e:
self.module.fail_json(msg="Failed to set memory_reservation value."
"Valid value for memory_reservation value in MB (integer): %s" % e)
mem_alloc = vim.ResourceAllocationInfo()
mem_alloc.reservation = memory_reservation_mb
self.configspec.memoryAllocation = mem_alloc
if vm_obj is None or self.configspec.memoryAllocation.reservation != vm_obj.config.memoryAllocation.reservation:
self.change_detected = True
if 'memory_reservation_lock' in self.params['hardware']:
self.configspec.memoryReservationLockedToMax = bool(self.params['hardware']['memory_reservation_lock'])
if vm_obj is None or self.configspec.memoryReservationLockedToMax != vm_obj.config.memoryReservationLockedToMax:
self.change_detected = True
def configure_cdrom(self, vm_obj):
# Configure the VM CD-ROM
if "cdrom" in self.params and self.params["cdrom"]:
if "type" not in self.params["cdrom"] or self.params["cdrom"]["type"] not in ["none", "client", "iso"]:
self.module.fail_json(msg="cdrom.type is mandatory")
if self.params["cdrom"]["type"] == "iso" and ("iso_path" not in self.params["cdrom"] or not self.params["cdrom"]["iso_path"]):
self.module.fail_json(msg="cdrom.iso_path is mandatory in case cdrom.type is iso")
if vm_obj and vm_obj.config.template:
# Changing CD-ROM settings on a template is not supported
return
cdrom_spec = None
cdrom_device = self.get_vm_cdrom_device(vm=vm_obj)
iso_path = self.params["cdrom"]["iso_path"] if "iso_path" in self.params["cdrom"] else None
if cdrom_device is None:
# Creating new CD-ROM
ide_device = self.get_vm_ide_device(vm=vm_obj)
if ide_device is None:
# Creating new IDE device
ide_device = self.device_helper.create_ide_controller()
self.change_detected = True
self.configspec.deviceChange.append(ide_device)
elif len(ide_device.device) > 3:
self.module.fail_json(msg="hardware.cdrom specified for a VM or template which already has 4 IDE devices of which none are a cdrom")
cdrom_spec = self.device_helper.create_cdrom(ide_ctl=ide_device, cdrom_type=self.params["cdrom"]["type"], iso_path=iso_path)
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
cdrom_spec.device.connectable.connected = (self.params["cdrom"]["type"] != "none")
elif not self.device_helper.is_equal_cdrom(vm_obj=vm_obj, cdrom_device=cdrom_device, cdrom_type=self.params["cdrom"]["type"], iso_path=iso_path):
# Updating an existing CD-ROM
if self.params["cdrom"]["type"] in ["client", "none"]:
cdrom_device.backing = vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo()
elif self.params["cdrom"]["type"] == "iso":
cdrom_device.backing = vim.vm.device.VirtualCdrom.IsoBackingInfo(fileName=iso_path)
cdrom_device.connectable = vim.vm.device.VirtualDevice.ConnectInfo()
cdrom_device.connectable.allowGuestControl = True
cdrom_device.connectable.startConnected = (self.params["cdrom"]["type"] != "none")
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
cdrom_device.connectable.connected = (self.params["cdrom"]["type"] != "none")
cdrom_spec = vim.vm.device.VirtualDeviceSpec()
cdrom_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
cdrom_spec.device = cdrom_device
if cdrom_spec:
self.change_detected = True
self.configspec.deviceChange.append(cdrom_spec)
def configure_hardware_params(self, vm_obj):
"""
Function to configure hardware related configuration of virtual machine
Args:
vm_obj: virtual machine object
"""
if 'hardware' in self.params:
if 'max_connections' in self.params['hardware']:
# maxMksConnections == max_connections
self.configspec.maxMksConnections = int(self.params['hardware']['max_connections'])
if vm_obj is None or self.configspec.maxMksConnections != vm_obj.config.hardware.maxMksConnections:
self.change_detected = True
if 'nested_virt' in self.params['hardware']:
self.configspec.nestedHVEnabled = bool(self.params['hardware']['nested_virt'])
if vm_obj is None or self.configspec.nestedHVEnabled != bool(vm_obj.config.nestedHVEnabled):
self.change_detected = True
if 'version' in self.params['hardware']:
hw_version_check_failed = False
temp_version = self.params['hardware'].get('version', 10)
try:
temp_version = int(temp_version)
except ValueError:
hw_version_check_failed = True
if temp_version not in range(3, 15):
hw_version_check_failed = True
if hw_version_check_failed:
self.module.fail_json(msg="Failed to set hardware.version '%s' value as valid"
" values range from 3 (ESX 2.x) to 14 (ESXi 6.5 and greater)." % temp_version)
# Hardware version is denoted as "vmx-10"
version = "vmx-%02d" % temp_version
self.configspec.version = version
if vm_obj is None or self.configspec.version != vm_obj.config.version:
self.change_detected = True
if vm_obj is not None:
# VM exists and we need to update the hardware version
current_version = vm_obj.config.version
# current_version = "vmx-10"
version_digit = int(current_version.split("-", 1)[-1])
if temp_version < version_digit:
self.module.fail_json(msg="Current hardware version '%d' which is greater than the specified"
" version '%d'. Downgrading hardware version is"
" not supported. Please specify version greater"
" than the current version." % (version_digit,
temp_version))
new_version = "vmx-%02d" % temp_version
try:
task = vm_obj.UpgradeVM_Task(new_version)
self.wait_for_task(task)
if task.info.state != 'error':
self.change_detected = True
except vim.fault.AlreadyUpgraded:
# Don't fail if VM is already upgraded.
pass
def get_vm_cdrom_device(self, vm=None):
if vm is None:
return None
for device in vm.config.hardware.device:
if isinstance(device, vim.vm.device.VirtualCdrom):
return device
return None
def get_vm_ide_device(self, vm=None):
if vm is None:
return None
for device in vm.config.hardware.device:
if isinstance(device, vim.vm.device.VirtualIDEController):
return device
return None
def get_vm_network_interfaces(self, vm=None):
if vm is None:
return []
device_list = []
for device in vm.config.hardware.device:
if isinstance(device, vim.vm.device.VirtualPCNet32) or \
isinstance(device, vim.vm.device.VirtualVmxnet2) or \
isinstance(device, vim.vm.device.VirtualVmxnet3) or \
isinstance(device, vim.vm.device.VirtualE1000) or \
isinstance(device, vim.vm.device.VirtualE1000e) or \
isinstance(device, vim.vm.device.VirtualSriovEthernetCard):
device_list.append(device)
return device_list
def sanitize_network_params(self):
"""
        Function to sanitize user provided network params
Returns: A sanitized list of network params, else fails
"""
network_devices = list()
# Clean up user data here
for network in self.params['networks']:
if 'name' not in network and 'vlan' not in network:
self.module.fail_json(msg="Please specify at least a network name or"
" a VLAN name under VM network list.")
if 'name' in network and find_obj(self.content, [vim.Network], network['name']) is None:
self.module.fail_json(msg="Network '%(name)s' does not exist." % network)
elif 'vlan' in network:
dvps = self.cache.get_all_objs(self.content, [vim.dvs.DistributedVirtualPortgroup])
for dvp in dvps:
if hasattr(dvp.config.defaultPortConfig, 'vlan') and \
isinstance(dvp.config.defaultPortConfig.vlan.vlanId, int) and \
str(dvp.config.defaultPortConfig.vlan.vlanId) == network['vlan']:
network['name'] = dvp.config.name
break
if dvp.config.name == network['vlan']:
network['name'] = dvp.config.name
break
else:
self.module.fail_json(msg="VLAN '%(vlan)s' does not exist." % network)
if 'type' in network:
if network['type'] not in ['dhcp', 'static']:
self.module.fail_json(msg="Network type '%(type)s' is not a valid parameter."
" Valid parameters are ['dhcp', 'static']." % network)
if network['type'] != 'static' and ('ip' in network or 'netmask' in network):
self.module.fail_json(msg='Static IP information provided for network "%(name)s",'
' but "type" is set to "%(type)s".' % network)
else:
# Type is optional parameter, if user provided IP or Subnet assume
# network type as 'static'
if 'ip' in network or 'netmask' in network:
network['type'] = 'static'
else:
# User wants network type as 'dhcp'
network['type'] = 'dhcp'
if network.get('type') == 'static':
if 'ip' in network and 'netmask' not in network:
self.module.fail_json(msg="'netmask' is required if 'ip' is"
" specified under VM network list.")
if 'ip' not in network and 'netmask' in network:
self.module.fail_json(msg="'ip' is required if 'netmask' is"
" specified under VM network list.")
validate_device_types = ['pcnet32', 'vmxnet2', 'vmxnet3', 'e1000', 'e1000e', 'sriov']
if 'device_type' in network and network['device_type'] not in validate_device_types:
self.module.fail_json(msg="Device type specified '%s' is not valid."
" Please specify correct device"
" type from ['%s']." % (network['device_type'],
"', '".join(validate_device_types)))
if 'mac' in network and not PyVmomiDeviceHelper.is_valid_mac_addr(network['mac']):
self.module.fail_json(msg="Device MAC address '%s' is invalid."
" Please provide correct MAC address." % network['mac'])
network_devices.append(network)
return network_devices
def configure_network(self, vm_obj):
        # Ignore an empty network list; this permits keeping the existing networks when deploying a template/cloning a VM
if len(self.params['networks']) == 0:
return
network_devices = self.sanitize_network_params()
# List current device for Clone or Idempotency
current_net_devices = self.get_vm_network_interfaces(vm=vm_obj)
if len(network_devices) < len(current_net_devices):
self.module.fail_json(msg="Given network device list is lesser than current VM device list (%d < %d). "
"Removing interfaces is not allowed"
% (len(network_devices), len(current_net_devices)))
for key in range(0, len(network_devices)):
nic_change_detected = False
network_name = network_devices[key]['name']
if key < len(current_net_devices) and (vm_obj or self.params['template']):
                # We are editing existing network devices; this is the case when
                # cloning from a VM or template
nic = vim.vm.device.VirtualDeviceSpec()
nic.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
nic.device = current_net_devices[key]
if ('wake_on_lan' in network_devices[key] and
nic.device.wakeOnLanEnabled != network_devices[key].get('wake_on_lan')):
nic.device.wakeOnLanEnabled = network_devices[key].get('wake_on_lan')
nic_change_detected = True
if ('start_connected' in network_devices[key] and
nic.device.connectable.startConnected != network_devices[key].get('start_connected')):
nic.device.connectable.startConnected = network_devices[key].get('start_connected')
nic_change_detected = True
if ('allow_guest_control' in network_devices[key] and
nic.device.connectable.allowGuestControl != network_devices[key].get('allow_guest_control')):
nic.device.connectable.allowGuestControl = network_devices[key].get('allow_guest_control')
nic_change_detected = True
if nic.device.deviceInfo.summary != network_name:
nic.device.deviceInfo.summary = network_name
nic_change_detected = True
if 'device_type' in network_devices[key]:
device = self.device_helper.get_device(network_devices[key]['device_type'], network_name)
device_class = type(device)
if not isinstance(nic.device, device_class):
self.module.fail_json(msg="Changing the device type is not possible when interface is already present. "
"The failing device type is %s" % network_devices[key]['device_type'])
# Changing mac address has no effect when editing interface
                if 'mac' in network_devices[key] and network_devices[key]['mac'] != nic.device.macAddress:
                    self.module.fail_json(msg="Changing the MAC address has no effect when the interface is already present. "
                                              "The failing new MAC address is %s" % network_devices[key]['mac'])
else:
                # Default device type is vmxnet3, a VMware best practice
device_type = network_devices[key].get('device_type', 'vmxnet3')
nic = self.device_helper.create_nic(device_type,
'Network Adapter %s' % (key + 1),
network_devices[key])
nic.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
nic_change_detected = True
if hasattr(self.cache.get_network(network_name), 'portKeys'):
# VDS switch
pg_obj = find_obj(self.content, [vim.dvs.DistributedVirtualPortgroup], network_name)
if (nic.device.backing and
(not hasattr(nic.device.backing, 'port') or
(nic.device.backing.port.portgroupKey != pg_obj.key or
nic.device.backing.port.switchUuid != pg_obj.config.distributedVirtualSwitch.uuid))):
nic_change_detected = True
dvs_port_connection = vim.dvs.PortConnection()
dvs_port_connection.portgroupKey = pg_obj.key
                # If the user specifies a distributed port group without associating it with the host system
                # on which the virtual machine is going to be deployed, then we get an error. We can infer that
                # there is no association between the given distributed port group and the host system.
host_system = self.params.get('esxi_hostname')
if host_system and host_system not in [host.config.host.name for host in pg_obj.config.distributedVirtualSwitch.config.host]:
self.module.fail_json(msg="It seems that host system '%s' is not associated with distributed"
" virtual portgroup '%s'. Please make sure host system is associated"
" with given distributed virtual portgroup" % (host_system, pg_obj.name))
# TODO: (akasurde) There is no way to find association between resource pool and distributed virtual portgroup
# For now, check if we are able to find distributed virtual switch
if not pg_obj.config.distributedVirtualSwitch:
self.module.fail_json(msg="Failed to find distributed virtual switch which is associated with"
" distributed virtual portgroup '%s'. Make sure hostsystem is associated with"
" the given distributed virtual portgroup." % pg_obj.name)
dvs_port_connection.switchUuid = pg_obj.config.distributedVirtualSwitch.uuid
nic.device.backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo()
nic.device.backing.port = dvs_port_connection
elif isinstance(self.cache.get_network(network_name), vim.OpaqueNetwork):
# NSX-T Logical Switch
nic.device.backing = vim.vm.device.VirtualEthernetCard.OpaqueNetworkBackingInfo()
network_id = self.cache.get_network(network_name).summary.opaqueNetworkId
nic.device.backing.opaqueNetworkType = 'nsx.LogicalSwitch'
nic.device.backing.opaqueNetworkId = network_id
nic.device.deviceInfo.summary = 'nsx.LogicalSwitch: %s' % network_id
else:
# vSwitch
if not isinstance(nic.device.backing, vim.vm.device.VirtualEthernetCard.NetworkBackingInfo):
nic.device.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo()
nic_change_detected = True
net_obj = self.cache.get_network(network_name)
if nic.device.backing.network != net_obj:
nic.device.backing.network = net_obj
nic_change_detected = True
if nic.device.backing.deviceName != network_name:
nic.device.backing.deviceName = network_name
nic_change_detected = True
if nic_change_detected:
self.configspec.deviceChange.append(nic)
self.change_detected = True
def configure_vapp_properties(self, vm_obj):
if len(self.params['vapp_properties']) == 0:
return
for x in self.params['vapp_properties']:
if not x.get('id'):
self.module.fail_json(msg="id is required to set vApp property")
new_vmconfig_spec = vim.vApp.VmConfigSpec()
# This is primarily for vcsim/integration tests, unset vAppConfig was not seen on my deployments
orig_spec = vm_obj.config.vAppConfig if vm_obj.config.vAppConfig else new_vmconfig_spec
vapp_properties_current = dict((x.id, x) for x in orig_spec.property)
vapp_properties_to_change = dict((x['id'], x) for x in self.params['vapp_properties'])
# each property must have a unique key
# init key counter with max value + 1
all_keys = [x.key for x in orig_spec.property]
new_property_index = max(all_keys) + 1 if all_keys else 0
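        # For example (a sketch): if the existing properties carry keys [0, 1, 3],
        # new properties are numbered starting from max + 1 = 4.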
for property_id, property_spec in vapp_properties_to_change.items():
is_property_changed = False
new_vapp_property_spec = vim.vApp.PropertySpec()
if property_id in vapp_properties_current:
if property_spec.get('operation') == 'remove':
new_vapp_property_spec.operation = 'remove'
new_vapp_property_spec.removeKey = vapp_properties_current[property_id].key
is_property_changed = True
else:
# this is 'edit' branch
new_vapp_property_spec.operation = 'edit'
new_vapp_property_spec.info = vapp_properties_current[property_id]
try:
for property_name, property_value in property_spec.items():
if property_name == 'operation':
# operation is not an info object property
# if set to anything other than 'remove' we don't fail
continue
# Updating attributes only if needed
if getattr(new_vapp_property_spec.info, property_name) != property_value:
setattr(new_vapp_property_spec.info, property_name, property_value)
is_property_changed = True
except Exception as e:
self.module.fail_json(msg="Failed to set vApp property field='%s' and value='%s'. Error: %s"
% (property_name, property_value, to_text(e)))
else:
if property_spec.get('operation') == 'remove':
                    # attempt to delete a non-existent property
continue
# this is add new property branch
new_vapp_property_spec.operation = 'add'
property_info = vim.vApp.PropertyInfo()
property_info.classId = property_spec.get('classId')
property_info.instanceId = property_spec.get('instanceId')
property_info.id = property_spec.get('id')
property_info.category = property_spec.get('category')
property_info.label = property_spec.get('label')
property_info.type = property_spec.get('type', 'string')
property_info.userConfigurable = property_spec.get('userConfigurable', True)
property_info.defaultValue = property_spec.get('defaultValue')
property_info.value = property_spec.get('value', '')
property_info.description = property_spec.get('description')
new_vapp_property_spec.info = property_info
new_vapp_property_spec.info.key = new_property_index
new_property_index += 1
is_property_changed = True
if is_property_changed:
new_vmconfig_spec.property.append(new_vapp_property_spec)
if new_vmconfig_spec.property:
self.configspec.vAppConfig = new_vmconfig_spec
self.change_detected = True
def customize_customvalues(self, vm_obj, config_spec):
if len(self.params['customvalues']) == 0:
return
vm_custom_spec = config_spec
vm_custom_spec.extraConfig = []
changed = False
facts = self.gather_facts(vm_obj)
for kv in self.params['customvalues']:
if 'key' not in kv or 'value' not in kv:
                self.module.fail_json(msg="customvalues items require both 'key' and 'value' fields.")
            # If kv differs from the value fetched from facts, change it
if kv['key'] not in facts['customvalues'] or facts['customvalues'][kv['key']] != kv['value']:
option = vim.option.OptionValue()
option.key = kv['key']
option.value = kv['value']
vm_custom_spec.extraConfig.append(option)
changed = True
if changed:
self.change_detected = True
def customize_vm(self, vm_obj):
# User specified customization specification
custom_spec_name = self.params.get('customization_spec')
if custom_spec_name:
cc_mgr = self.content.customizationSpecManager
if cc_mgr.DoesCustomizationSpecExist(name=custom_spec_name):
temp_spec = cc_mgr.GetCustomizationSpec(name=custom_spec_name)
self.customspec = temp_spec.spec
return
else:
self.module.fail_json(msg="Unable to find customization specification"
" '%s' in given configuration." % custom_spec_name)
# Network settings
adaptermaps = []
for network in self.params['networks']:
guest_map = vim.vm.customization.AdapterMapping()
guest_map.adapter = vim.vm.customization.IPSettings()
if 'ip' in network and 'netmask' in network:
guest_map.adapter.ip = vim.vm.customization.FixedIp()
guest_map.adapter.ip.ipAddress = str(network['ip'])
guest_map.adapter.subnetMask = str(network['netmask'])
elif 'type' in network and network['type'] == 'dhcp':
guest_map.adapter.ip = vim.vm.customization.DhcpIpGenerator()
if 'gateway' in network:
guest_map.adapter.gateway = network['gateway']
# On Windows, DNS domain and DNS servers can be set by network interface
# https://pubs.vmware.com/vi3/sdk/ReferenceGuide/vim.vm.customization.IPSettings.html
if 'domain' in network:
guest_map.adapter.dnsDomain = network['domain']
elif 'domain' in self.params['customization']:
guest_map.adapter.dnsDomain = self.params['customization']['domain']
if 'dns_servers' in network:
guest_map.adapter.dnsServerList = network['dns_servers']
elif 'dns_servers' in self.params['customization']:
guest_map.adapter.dnsServerList = self.params['customization']['dns_servers']
adaptermaps.append(guest_map)
# Global DNS settings
globalip = vim.vm.customization.GlobalIPSettings()
if 'dns_servers' in self.params['customization']:
globalip.dnsServerList = self.params['customization']['dns_servers']
        # TODO: Maybe list the different domains from the interfaces here by default?
if 'dns_suffix' in self.params['customization']:
globalip.dnsSuffixList = self.params['customization']['dns_suffix']
elif 'domain' in self.params['customization']:
globalip.dnsSuffixList = self.params['customization']['domain']
if self.params['guest_id']:
guest_id = self.params['guest_id']
else:
guest_id = vm_obj.summary.config.guestId
# For windows guest OS, use SysPrep
# https://pubs.vmware.com/vi3/sdk/ReferenceGuide/vim.vm.customization.Sysprep.html#field_detail
if 'win' in guest_id:
ident = vim.vm.customization.Sysprep()
ident.userData = vim.vm.customization.UserData()
            # Setting hostName, orgName and fullName is mandatory, so we set some defaults when missing
ident.userData.computerName = vim.vm.customization.FixedName()
ident.userData.computerName.name = str(self.params['customization'].get('hostname', self.params['name'].split('.')[0]))
ident.userData.fullName = str(self.params['customization'].get('fullname', 'Administrator'))
ident.userData.orgName = str(self.params['customization'].get('orgname', 'ACME'))
if 'productid' in self.params['customization']:
ident.userData.productId = str(self.params['customization']['productid'])
ident.guiUnattended = vim.vm.customization.GuiUnattended()
if 'autologon' in self.params['customization']:
ident.guiUnattended.autoLogon = self.params['customization']['autologon']
ident.guiUnattended.autoLogonCount = self.params['customization'].get('autologoncount', 1)
if 'timezone' in self.params['customization']:
ident.guiUnattended.timeZone = self.params['customization']['timezone']
ident.identification = vim.vm.customization.Identification()
if self.params['customization'].get('password', '') != '':
ident.guiUnattended.password = vim.vm.customization.Password()
ident.guiUnattended.password.value = str(self.params['customization']['password'])
ident.guiUnattended.password.plainText = True
if 'joindomain' in self.params['customization']:
if 'domainadmin' not in self.params['customization'] or 'domainadminpassword' not in self.params['customization']:
self.module.fail_json(msg="'domainadmin' and 'domainadminpassword' entries are mandatory in 'customization' section to use "
"joindomain feature")
ident.identification.domainAdmin = str(self.params['customization']['domainadmin'])
ident.identification.joinDomain = str(self.params['customization']['joindomain'])
ident.identification.domainAdminPassword = vim.vm.customization.Password()
ident.identification.domainAdminPassword.value = str(self.params['customization']['domainadminpassword'])
ident.identification.domainAdminPassword.plainText = True
elif 'joinworkgroup' in self.params['customization']:
ident.identification.joinWorkgroup = str(self.params['customization']['joinworkgroup'])
if 'runonce' in self.params['customization']:
ident.guiRunOnce = vim.vm.customization.GuiRunOnce()
ident.guiRunOnce.commandList = self.params['customization']['runonce']
else:
# FIXME: We have no clue whether this non-Windows OS is actually Linux, hence it might fail!
# For Linux guest OS, use LinuxPrep
# https://pubs.vmware.com/vi3/sdk/ReferenceGuide/vim.vm.customization.LinuxPrep.html
ident = vim.vm.customization.LinuxPrep()
            # TODO: Maybe add domain from interface if missing?
if 'domain' in self.params['customization']:
ident.domain = str(self.params['customization']['domain'])
ident.hostName = vim.vm.customization.FixedName()
hostname = str(self.params['customization'].get('hostname', self.params['name'].split('.')[0]))
# Remove all characters except alphanumeric and minus which is allowed by RFC 952
valid_hostname = re.sub(r"[^a-zA-Z0-9\-]", "", hostname)
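            # For example (a sketch): a requested hostname of "my_db.host!" is
            # sanitized to "mydbhost", since underscore, dot and '!' fall outside
            # the allowed [a-zA-Z0-9-] set.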
ident.hostName.name = valid_hostname
self.customspec = vim.vm.customization.Specification()
self.customspec.nicSettingMap = adaptermaps
self.customspec.globalIPSettings = globalip
self.customspec.identity = ident
def get_vm_scsi_controller(self, vm_obj):
# If vm_obj doesn't exist there is no SCSI controller to find
if vm_obj is None:
return None
for device in vm_obj.config.hardware.device:
if self.device_helper.is_scsi_controller(device):
scsi_ctl = vim.vm.device.VirtualDeviceSpec()
scsi_ctl.device = device
return scsi_ctl
return None
def get_configured_disk_size(self, expected_disk_spec):
# what size is it?
if [x for x in expected_disk_spec.keys() if x.startswith('size_') or x == 'size']:
# size, size_tb, size_gb, size_mb, size_kb
if 'size' in expected_disk_spec:
size_regex = re.compile(r'(\d+(?:\.\d+)?)([tgmkTGMK][bB])')
disk_size_m = size_regex.match(expected_disk_spec['size'])
try:
if disk_size_m:
expected = disk_size_m.group(1)
unit = disk_size_m.group(2)
else:
raise ValueError
if re.match(r'\d+\.\d+', expected):
# We found float value in string, let's typecast it
expected = float(expected)
else:
# We found int value in string, let's typecast it
expected = int(expected)
if not expected or not unit:
raise ValueError
except (TypeError, ValueError, NameError):
# Common failure
self.module.fail_json(msg="Failed to parse disk size please review value"
" provided using documentation.")
else:
param = [x for x in expected_disk_spec.keys() if x.startswith('size_')][0]
unit = param.split('_')[-1].lower()
expected = [x[1] for x in expected_disk_spec.items() if x[0].startswith('size_')][0]
expected = int(expected)
disk_units = dict(tb=3, gb=2, mb=1, kb=0)
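# disk_units maps a unit suffix to a power of 1024 relative to KB,
# e.g. size_gb=10 -> 10 * 1024**2 KB = 10485760 KB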
unit = unit.lower()
if unit in disk_units:
return expected * (1024 ** disk_units[unit])
else:
self.module.fail_json(msg="%s is not a supported unit for disk size."
" Supported units are ['%s']." % (unit,
"', '".join(disk_units.keys())))
# A disk entry was provided but without any size attribute, fail
self.module.fail_json(
msg="No size, size_kb, size_mb, size_gb or size_tb attribute found in disk configuration")
def configure_disks(self, vm_obj):
# Ignore an empty disk list, this permits keeping the existing disks when deploying a template/cloning a VM
if len(self.params['disk']) == 0:
return
scsi_ctl = self.get_vm_scsi_controller(vm_obj)
# Create scsi controller only if we are deploying a new VM, not a template or reconfiguring
if vm_obj is None or scsi_ctl is None:
scsi_ctl = self.device_helper.create_scsi_controller(self.get_scsi_type())
self.change_detected = True
self.configspec.deviceChange.append(scsi_ctl)
disks = [x for x in vm_obj.config.hardware.device if isinstance(x, vim.vm.device.VirtualDisk)] \
if vm_obj is not None else None
if disks is not None and self.params.get('disk') and len(self.params.get('disk')) < len(disks):
self.module.fail_json(msg="Provided disks configuration has less disks than "
"the target object (%d vs %d)" % (len(self.params.get('disk')), len(disks)))
disk_index = 0
for expected_disk_spec in self.params.get('disk'):
disk_modified = False
# If we are manipulating an existing object that has disks and disk_index points at one of them
if vm_obj is not None and disks is not None and disk_index < len(disks):
diskspec = vim.vm.device.VirtualDeviceSpec()
# set the operation to edit so that it knows to keep other settings
diskspec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
diskspec.device = disks[disk_index]
else:
diskspec = self.device_helper.create_scsi_disk(scsi_ctl, disk_index)
disk_modified = True
if 'disk_mode' in expected_disk_spec:
disk_mode = expected_disk_spec.get('disk_mode', 'persistent').lower()
valid_disk_mode = ['persistent', 'independent_persistent', 'independent_nonpersistent']
if disk_mode not in valid_disk_mode:
self.module.fail_json(msg="disk_mode specified is not valid."
" Should be one of ['%s']" % "', '".join(valid_disk_mode))
if (vm_obj and diskspec.device.backing.diskMode != disk_mode) or (vm_obj is None):
diskspec.device.backing.diskMode = disk_mode
disk_modified = True
else:
diskspec.device.backing.diskMode = "persistent"
# is it thin?
if 'type' in expected_disk_spec:
disk_type = expected_disk_spec.get('type', '').lower()
if disk_type == 'thin':
diskspec.device.backing.thinProvisioned = True
elif disk_type == 'eagerzeroedthick':
diskspec.device.backing.eagerlyScrub = True
# which datastore?
if expected_disk_spec.get('datastore'):
# TODO: This is already handled by the relocation spec,
# but it needs to eventually be handled for all the
# other disks defined
pass
# increment index for next disk search
disk_index += 1
# index 7 is reserved for the SCSI controller
if disk_index == 7:
disk_index += 1
kb = self.get_configured_disk_size(expected_disk_spec)
# VMware does not allow reducing disk size
if kb < diskspec.device.capacityInKB:
self.module.fail_json(
msg="Given disk size is smaller than found (%d < %d). Reducing disks is not allowed." %
(kb, diskspec.device.capacityInKB))
if kb != diskspec.device.capacityInKB or disk_modified:
diskspec.device.capacityInKB = kb
self.configspec.deviceChange.append(diskspec)
self.change_detected = True
def select_host(self):
hostsystem = self.cache.get_esx_host(self.params['esxi_hostname'])
if not hostsystem:
self.module.fail_json(msg='Failed to find ESX host "%(esxi_hostname)s"' % self.params)
if hostsystem.runtime.connectionState != 'connected' or hostsystem.runtime.inMaintenanceMode:
self.module.fail_json(msg='ESXi "%(esxi_hostname)s" is in invalid state or in maintenance mode.' % self.params)
return hostsystem
def autoselect_datastore(self):
datastore = None
datastores = self.cache.get_all_objs(self.content, [vim.Datastore])
if datastores is None or len(datastores) == 0:
self.module.fail_json(msg="Unable to find a datastore list when autoselecting")
datastore_freespace = 0
for ds in datastores:
if ds.summary.freeSpace > datastore_freespace:
datastore = ds
datastore_freespace = ds.summary.freeSpace
return datastore
def get_recommended_datastore(self, datastore_cluster_obj=None):
"""
Function to return Storage DRS recommended datastore from datastore cluster
Args:
datastore_cluster_obj: datastore cluster managed object
Returns: Name of recommended datastore from the given datastore cluster
"""
if datastore_cluster_obj is None:
return None
# Check if Datastore Cluster provided by user is SDRS ready
sdrs_status = datastore_cluster_obj.podStorageDrsEntry.storageDrsConfig.podConfig.enabled
if sdrs_status:
# We can get a storage recommendation only if SDRS is enabled on the given datastore cluster
pod_sel_spec = vim.storageDrs.PodSelectionSpec()
pod_sel_spec.storagePod = datastore_cluster_obj
storage_spec = vim.storageDrs.StoragePlacementSpec()
storage_spec.podSelectionSpec = pod_sel_spec
storage_spec.type = 'create'
try:
rec = self.content.storageResourceManager.RecommendDatastores(storageSpec=storage_spec)
rec_action = rec.recommendations[0].action[0]
return rec_action.destination.name
except Exception:
# The recommendation failed, fall back to the generic free-space workflow
pass
datastore = None
datastore_freespace = 0
for ds in datastore_cluster_obj.childEntity:
if isinstance(ds, vim.Datastore) and ds.summary.freeSpace > datastore_freespace:
datastore = ds
datastore_freespace = ds.summary.freeSpace
if datastore:
return datastore.name
return None
def select_datastore(self, vm_obj=None):
datastore = None
datastore_name = None
if len(self.params['disk']) != 0:
# TODO: really use the datastore for newly created disks
if 'autoselect_datastore' in self.params['disk'][0] and self.params['disk'][0]['autoselect_datastore']:
datastores = self.cache.get_all_objs(self.content, [vim.Datastore])
datastores = [x for x in datastores if self.cache.get_parent_datacenter(x).name == self.params['datacenter']]
if datastores is None or len(datastores) == 0:
self.module.fail_json(msg="Unable to find a datastore list when autoselecting")
datastore_freespace = 0
for ds in datastores:
if (ds.summary.freeSpace > datastore_freespace) or (ds.summary.freeSpace == datastore_freespace and not datastore):
# If datastore field is provided, filter destination datastores
if 'datastore' in self.params['disk'][0] and \
isinstance(self.params['disk'][0]['datastore'], str) and \
ds.name.find(self.params['disk'][0]['datastore']) < 0:
continue
datastore = ds
datastore_name = datastore.name
datastore_freespace = ds.summary.freeSpace
elif 'datastore' in self.params['disk'][0]:
datastore_name = self.params['disk'][0]['datastore']
# Check if user has provided datastore cluster first
datastore_cluster = self.cache.find_obj(self.content, [vim.StoragePod], datastore_name)
if datastore_cluster:
# The user specified a datastore cluster, so get the SDRS recommended datastore
datastore_name = self.get_recommended_datastore(datastore_cluster_obj=datastore_cluster)
# Check if get_recommended_datastore or user specified datastore exists or not
datastore = self.cache.find_obj(self.content, [vim.Datastore], datastore_name)
else:
self.module.fail_json(msg="Either datastore or autoselect_datastore should be provided to select datastore")
if not datastore and self.params['template']:
# use the template's existing DS
disks = [x for x in vm_obj.config.hardware.device if isinstance(x, vim.vm.device.VirtualDisk)]
if disks:
datastore = disks[0].backing.datastore
datastore_name = datastore.name
# validation
if datastore:
dc = self.cache.get_parent_datacenter(datastore)
if dc.name != self.params['datacenter']:
datastore = self.autoselect_datastore()
datastore_name = datastore.name
if not datastore:
if len(self.params['disk']) != 0 or self.params['template'] is None:
self.module.fail_json(msg="Unable to find the datastore with given parameters."
" This could mean, %s is a non-existent virtual machine and module tried to"
" deploy it as new virtual machine with no disk. Please specify disks parameter"
" or specify template to clone from." % self.params['name'])
self.module.fail_json(msg="Failed to find a matching datastore")
return datastore, datastore_name
def obj_has_parent(self, obj, parent):
if obj is None and parent is None:
raise AssertionError()
current_parent = obj
while True:
if current_parent.name == parent.name:
return True
# Check if we have reached till root folder
moid = current_parent._moId
if moid in ['group-d1', 'ha-folder-root']:
return False
current_parent = current_parent.parent
if current_parent is None:
return False
def select_resource_pool_by_name(self, resource_pool_name):
resource_pool = self.cache.find_obj(self.content, [vim.ResourcePool], resource_pool_name)
if resource_pool is None:
self.module.fail_json(msg='Could not find resource_pool "%s"' % resource_pool_name)
return resource_pool
def select_resource_pool_by_host(self, host):
resource_pools = self.cache.get_all_objs(self.content, [vim.ResourcePool])
for rp in resource_pools.items():
if not rp[0]:
continue
if not hasattr(rp[0], 'parent') or not rp[0].parent:
continue
# Find resource pool on host
if self.obj_has_parent(rp[0].parent, host.parent):
# If no resource_pool selected or it's the selected pool, return it
if self.module.params['resource_pool'] is None or rp[0].name == self.module.params['resource_pool']:
return rp[0]
if self.module.params['resource_pool'] is not None:
self.module.fail_json(msg="Could not find resource_pool %s for selected host %s"
% (self.module.params['resource_pool'], host.name))
else:
self.module.fail_json(msg="Failed to find a resource group for %s" % host.name)
def get_scsi_type(self):
disk_controller_type = "paravirtual"
# set cpu/memory/etc
if 'hardware' in self.params:
if 'scsi' in self.params['hardware']:
if self.params['hardware']['scsi'] in ['buslogic', 'paravirtual', 'lsilogic', 'lsilogicsas']:
disk_controller_type = self.params['hardware']['scsi']
else:
self.module.fail_json(msg="hardware.scsi attribute should be one of 'buslogic', 'paravirtual', 'lsilogic' or 'lsilogicsas'")
return disk_controller_type
def find_folder(self, searchpath):
""" Walk inventory objects one position of the searchpath at a time """
# split the searchpath so we can iterate through it
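# e.g. searchpath 'DC1/vm/prod' yields paths ['DC1', 'vm', 'prod']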
paths = [x.replace('/', '') for x in searchpath.split('/')]
paths_total = len(paths) - 1
position = 0
# recursive walk while looking for next element in searchpath
root = self.content.rootFolder
while root and position <= paths_total:
change = False
if hasattr(root, 'childEntity'):
for child in root.childEntity:
if child.name == paths[position]:
root = child
position += 1
change = True
break
elif isinstance(root, vim.Datacenter):
if hasattr(root, 'vmFolder'):
if root.vmFolder.name == paths[position]:
root = root.vmFolder
position += 1
change = True
else:
root = None
if not change:
root = None
return root
def get_resource_pool(self):
resource_pool = None
# highest priority, resource_pool given.
if self.params['resource_pool']:
resource_pool = self.select_resource_pool_by_name(self.params['resource_pool'])
# next priority, esxi hostname given.
elif self.params['esxi_hostname']:
host = self.select_host()
resource_pool = self.select_resource_pool_by_host(host)
# next priority, cluster given, take the root of the pool
elif self.params['cluster']:
cluster = self.cache.get_cluster(self.params['cluster'])
if cluster is None:
self.module.fail_json(msg="Unable to find cluster '%(cluster)s'" % self.params)
resource_pool = cluster.resourcePool
# fallback, try to look the pool up by name
else:
resource_pool = self.select_resource_pool_by_name(self.params['resource_pool'])
if resource_pool is None:
self.module.fail_json(msg='Unable to find resource pool, need esxi_hostname, resource_pool, or cluster')
return resource_pool
def deploy_vm(self):
# https://github.com/vmware/pyvmomi-community-samples/blob/master/samples/clone_vm.py
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.vm.CloneSpec.html
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.vm.ConfigSpec.html
# https://www.vmware.com/support/developer/vc-sdk/visdk41pubs/ApiReference/vim.vm.RelocateSpec.html
# FIXME:
# - static IPs
self.folder = self.params.get('folder', None)
if self.folder is None:
self.module.fail_json(msg="Folder is required parameter while deploying new virtual machine")
# Prepend / if it was missing from the folder path, also strip trailing slashes
if not self.folder.startswith('/'):
self.folder = '/%(folder)s' % self.params
self.folder = self.folder.rstrip('/')
datacenter = self.cache.find_obj(self.content, [vim.Datacenter], self.params['datacenter'])
if datacenter is None:
self.module.fail_json(msg='No datacenter named %(datacenter)s was found' % self.params)
dcpath = compile_folder_path_for_object(datacenter)
# Nested folder does not have trailing /
if not dcpath.endswith('/'):
dcpath += '/'
# Check for full path first in case it was already supplied
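# Illustrative folder values, assuming a datacenter named 'DC1' at the
# inventory root:
#   '/DC1/vm/prod' -> already a full inventory path, used as-is
#   '/vm/prod'     -> becomes '/DC1/vm/prod'
#   '/prod'        -> becomes '/DC1/vm/prod'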
if (self.folder.startswith(dcpath + self.params['datacenter'] + '/vm') or
self.folder.startswith(dcpath + '/' + self.params['datacenter'] + '/vm')):
fullpath = self.folder
elif self.folder.startswith('/vm/') or self.folder == '/vm':
fullpath = "%s%s%s" % (dcpath, self.params['datacenter'], self.folder)
elif self.folder.startswith('/'):
fullpath = "%s%s/vm%s" % (dcpath, self.params['datacenter'], self.folder)
else:
fullpath = "%s%s/vm/%s" % (dcpath, self.params['datacenter'], self.folder)
f_obj = self.content.searchIndex.FindByInventoryPath(fullpath)
# abort if no strategy was successful
if f_obj is None:
# Add some debugging values in failure.
details = {
'datacenter': datacenter.name,
'datacenter_path': dcpath,
'folder': self.folder,
'full_search_path': fullpath,
}
self.module.fail_json(msg='No folder %s matched in the search path : %s' % (self.folder, fullpath),
details=details)
destfolder = f_obj
if self.params['template']:
vm_obj = self.get_vm_or_template(template_name=self.params['template'])
if vm_obj is None:
self.module.fail_json(msg="Could not find a template named %(template)s" % self.params)
else:
vm_obj = None
# always get a resource_pool
resource_pool = self.get_resource_pool()
# set the destination datastore for VM & disks
(datastore, datastore_name) = self.select_datastore(vm_obj)
self.configspec = vim.vm.ConfigSpec()
self.configspec.deviceChange = []
self.configure_guestid(vm_obj=vm_obj, vm_creation=True)
self.configure_cpu_and_memory(vm_obj=vm_obj, vm_creation=True)
self.configure_hardware_params(vm_obj=vm_obj)
self.configure_resource_alloc_info(vm_obj=vm_obj)
self.configure_disks(vm_obj=vm_obj)
self.configure_network(vm_obj=vm_obj)
self.configure_cdrom(vm_obj=vm_obj)
# Find out whether we need network customizations (find keys in the dictionary that require customization)
network_changes = False
for nw in self.params['networks']:
for key in nw:
# We don't need customizations for these keys
if key not in ('device_type', 'mac', 'name', 'vlan'):
network_changes = True
break
if len(self.params['customization']) > 0 or network_changes is True:
self.customize_vm(vm_obj=vm_obj)
clonespec = None
clone_method = None
try:
if self.params['template']:
# create the relocation spec
relospec = vim.vm.RelocateSpec()
# Only select specific host when ESXi hostname is provided
if self.params['esxi_hostname']:
relospec.host = self.select_host()
relospec.datastore = datastore
# https://www.vmware.com/support/developer/vc-sdk/visdk41pubs/ApiReference/vim.vm.RelocateSpec.html
# > pool: For a clone operation from a template to a virtual machine, this argument is required.
relospec.pool = resource_pool
if self.params['snapshot_src'] is not None and self.params['linked_clone']:
relospec.diskMoveType = vim.vm.RelocateSpec.DiskMoveOptions.createNewChildDiskBacking
clonespec = vim.vm.CloneSpec(template=self.params['is_template'], location=relospec)
if self.customspec:
clonespec.customization = self.customspec
if self.params['snapshot_src'] is not None:
snapshot = self.get_snapshots_by_name_recursively(snapshots=vm_obj.snapshot.rootSnapshotList, snapname=self.params['snapshot_src'])
if len(snapshot) != 1:
self.module.fail_json(msg='virtual machine "%(template)s" does not contain snapshot named "%(snapshot_src)s"' % self.params)
clonespec.snapshot = snapshot[0].snapshot
clonespec.config = self.configspec
clone_method = 'Clone'
try:
task = vm_obj.Clone(folder=destfolder, name=self.params['name'], spec=clonespec)
except vim.fault.NoPermission as e:
self.module.fail_json(msg="Failed to clone virtual machine %s to folder %s "
"due to permission issue: %s" % (self.params['name'],
destfolder,
to_native(e.msg)))
self.change_detected = True
else:
# ConfigSpec requires a name for VM creation
self.configspec.name = self.params['name']
self.configspec.files = vim.vm.FileInfo(logDirectory=None,
snapshotDirectory=None,
suspendDirectory=None,
vmPathName="[" + datastore_name + "]")
clone_method = 'CreateVM_Task'
try:
task = destfolder.CreateVM_Task(config=self.configspec, pool=resource_pool)
except vmodl.fault.InvalidRequest as e:
self.module.fail_json(msg="Failed to create virtual machine due to invalid configuration "
"parameter %s" % to_native(e.msg))
except vim.fault.RestrictedVersion as e:
self.module.fail_json(msg="Failed to create virtual machine due to "
"product versioning restrictions: %s" % to_native(e.msg))
self.change_detected = True
self.wait_for_task(task)
except TypeError as e:
self.module.fail_json(msg="TypeError was returned, please ensure to give correct inputs. %s" % to_text(e))
if task.info.state == 'error':
# https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2021361
# https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2173
# provide these to the user for debugging
clonespec_json = serialize_spec(clonespec)
configspec_json = serialize_spec(self.configspec)
kwargs = {
'changed': self.change_detected,
'failed': True,
'msg': task.info.error.msg,
'clonespec': clonespec_json,
'configspec': configspec_json,
'clone_method': clone_method
}
return kwargs
else:
# set annotation
vm = task.info.result
if self.params['annotation']:
annotation_spec = vim.vm.ConfigSpec()
annotation_spec.annotation = str(self.params['annotation'])
task = vm.ReconfigVM_Task(annotation_spec)
self.wait_for_task(task)
if self.params['customvalues']:
vm_custom_spec = vim.vm.ConfigSpec()
self.customize_customvalues(vm_obj=vm, config_spec=vm_custom_spec)
task = vm.ReconfigVM_Task(vm_custom_spec)
self.wait_for_task(task)
if self.params['wait_for_ip_address'] or self.params['state'] in ['poweredon', 'restarted']:
set_vm_power_state(self.content, vm, 'poweredon', force=False)
if self.params['wait_for_ip_address']:
self.wait_for_vm_ip(vm)
vm_facts = self.gather_facts(vm)
return {'changed': self.change_detected, 'failed': False, 'instance': vm_facts}
def get_snapshots_by_name_recursively(self, snapshots, snapname):
snap_obj = []
for snapshot in snapshots:
if snapshot.name == snapname:
snap_obj.append(snapshot)
else:
snap_obj = snap_obj + self.get_snapshots_by_name_recursively(snapshot.childSnapshotList, snapname)
return snap_obj
def reconfigure_vm(self):
self.configspec = vim.vm.ConfigSpec()
self.configspec.deviceChange = []
self.configure_guestid(vm_obj=self.current_vm_obj)
self.configure_cpu_and_memory(vm_obj=self.current_vm_obj)
self.configure_hardware_params(vm_obj=self.current_vm_obj)
self.configure_disks(vm_obj=self.current_vm_obj)
self.configure_network(vm_obj=self.current_vm_obj)
self.configure_cdrom(vm_obj=self.current_vm_obj)
self.customize_customvalues(vm_obj=self.current_vm_obj, config_spec=self.configspec)
self.configure_resource_alloc_info(vm_obj=self.current_vm_obj)
self.configure_vapp_properties(vm_obj=self.current_vm_obj)
if self.params['annotation'] and self.current_vm_obj.config.annotation != self.params['annotation']:
self.configspec.annotation = str(self.params['annotation'])
self.change_detected = True
change_applied = False
relospec = vim.vm.RelocateSpec()
if self.params['resource_pool']:
relospec.pool = self.select_resource_pool_by_name(self.params['resource_pool'])
if relospec.pool is None:
self.module.fail_json(msg='Unable to find resource pool "%(resource_pool)s"' % self.params)
elif relospec.pool != self.current_vm_obj.resourcePool:
task = self.current_vm_obj.RelocateVM_Task(spec=relospec)
self.wait_for_task(task)
change_applied = True
# Only send a VMware reconfigure task if we detected a modification
if self.change_detected:
task = None
try:
task = self.current_vm_obj.ReconfigVM_Task(spec=self.configspec)
except vim.fault.RestrictedVersion as e:
self.module.fail_json(msg="Failed to reconfigure virtual machine due to"
" product versioning restrictions: %s" % to_native(e.msg))
self.wait_for_task(task)
change_applied = True
if task.info.state == 'error':
# https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2021361
# https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2173
return {'changed': change_applied, 'failed': True, 'msg': task.info.error.msg}
# Rename VM
if self.params['uuid'] and self.params['name'] and self.params['name'] != self.current_vm_obj.config.name:
task = self.current_vm_obj.Rename_Task(self.params['name'])
self.wait_for_task(task)
change_applied = True
if task.info.state == 'error':
return {'changed': change_applied, 'failed': True, 'msg': task.info.error.msg}
# Mark VM as Template
if self.params['is_template'] and not self.current_vm_obj.config.template:
try:
self.current_vm_obj.MarkAsTemplate()
except vmodl.fault.NotSupported as e:
self.module.fail_json(msg="Failed to mark virtual machine [%s] "
"as template: %s" % (self.params['name'], e.msg))
change_applied = True
# Mark Template as VM
elif not self.params['is_template'] and self.current_vm_obj.config.template:
if self.params['resource_pool']:
resource_pool = self.select_resource_pool_by_name(self.params['resource_pool'])
if resource_pool is None:
self.module.fail_json(msg='Unable to find resource pool "%(resource_pool)s"' % self.params)
self.current_vm_obj.MarkAsVirtualMachine(pool=resource_pool)
# Automatically update VMWare UUID when converting template to VM.
# This avoids an interactive prompt during VM startup.
uuid_action = [x for x in self.current_vm_obj.config.extraConfig if x.key == "uuid.action"]
if not uuid_action:
uuid_action_opt = vim.option.OptionValue()
uuid_action_opt.key = "uuid.action"
uuid_action_opt.value = "create"
self.configspec.extraConfig.append(uuid_action_opt)
self.change_detected = True
change_applied = True
else:
self.module.fail_json(msg="Resource pool must be specified when converting template to VM!")
vm_facts = self.gather_facts(self.current_vm_obj)
return {'changed': change_applied, 'failed': False, 'instance': vm_facts}
@staticmethod
def wait_for_task(task):
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.Task.html
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.TaskInfo.html
# https://github.com/virtdevninja/pyvmomi-community-samples/blob/master/samples/tools/tasks.py
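# Busy-poll the task state once per second until it reaches a terminal state.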
while task.info.state not in ['error', 'success']:
time.sleep(1)
def wait_for_vm_ip(self, vm, poll=100, sleep=5):
ips = None
facts = {}
thispoll = 0
while not ips and thispoll <= poll:
newvm = self.get_vm()
facts = self.gather_facts(newvm)
if facts['ipv4'] or facts['ipv6']:
ips = True
else:
time.sleep(sleep)
thispoll += 1
return facts
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(
state=dict(type='str', default='present',
choices=['absent', 'poweredoff', 'poweredon', 'present', 'rebootguest', 'restarted', 'shutdownguest', 'suspended']),
template=dict(type='str', aliases=['template_src']),
is_template=dict(type='bool', default=False),
annotation=dict(type='str', aliases=['notes']),
customvalues=dict(type='list', default=[]),
name=dict(type='str'),
name_match=dict(type='str', choices=['first', 'last'], default='first'),
uuid=dict(type='str'),
folder=dict(type='str'),
guest_id=dict(type='str'),
disk=dict(type='list', default=[]),
cdrom=dict(type='dict', default={}),
hardware=dict(type='dict', default={}),
force=dict(type='bool', default=False),
datacenter=dict(type='str', default='ha-datacenter'),
esxi_hostname=dict(type='str'),
cluster=dict(type='str'),
wait_for_ip_address=dict(type='bool', default=False),
state_change_timeout=dict(type='int', default=0),
snapshot_src=dict(type='str'),
linked_clone=dict(type='bool', default=False),
networks=dict(type='list', default=[]),
resource_pool=dict(type='str'),
customization=dict(type='dict', default={}, no_log=True),
customization_spec=dict(type='str', default=None),
vapp_properties=dict(type='list', default=[]),
)
module = AnsibleModule(argument_spec=argument_spec,
supports_check_mode=True,
mutually_exclusive=[
['cluster', 'esxi_hostname'],
],
required_one_of=[
['name', 'uuid'],
],
)
result = {'failed': False, 'changed': False}
pyv = PyVmomiHelper(module)
# Check if the VM exists before continuing
vm = pyv.get_vm()
# VM already exists
if vm:
if module.params['state'] == 'absent':
# destroy it
if module.check_mode:
result.update(
vm_name=vm.name,
changed=True,
current_powerstate=vm.summary.runtime.powerState.lower(),
desired_operation='remove_vm',
)
module.exit_json(**result)
if module.params['force']:
# has to be poweredoff first
set_vm_power_state(pyv.content, vm, 'poweredoff', module.params['force'])
result = pyv.remove_vm(vm)
elif module.params['state'] == 'present':
if module.check_mode:
result.update(
vm_name=vm.name,
changed=True,
desired_operation='reconfigure_vm',
)
module.exit_json(**result)
result = pyv.reconfigure_vm()
elif module.params['state'] in ['poweredon', 'poweredoff', 'restarted', 'suspended', 'shutdownguest', 'rebootguest']:
if module.check_mode:
result.update(
vm_name=vm.name,
changed=True,
current_powerstate=vm.summary.runtime.powerState.lower(),
desired_operation='set_vm_power_state',
)
module.exit_json(**result)
# set powerstate
tmp_result = set_vm_power_state(pyv.content, vm, module.params['state'], module.params['force'], module.params['state_change_timeout'])
if tmp_result['changed']:
result["changed"] = True
if not tmp_result["failed"]:
result["failed"] = False
result['instance'] = tmp_result['instance']
else:
# This should not happen
raise AssertionError()
# VM doesn't exist
else:
if module.params['state'] in ['poweredon', 'poweredoff', 'present', 'restarted', 'suspended']:
if module.check_mode:
result.update(
changed=True,
desired_operation='deploy_vm',
)
module.exit_json(**result)
result = pyv.deploy_vm()
if result['failed']:
module.fail_json(msg='Failed to create a virtual machine: %s' % result['msg'])
if result['failed']:
module.fail_json(**result)
else:
module.exit_json(**result)
if __name__ == '__main__':
main()
|
Nitaco/ansible
|
lib/ansible/modules/cloud/vmware/vmware_guest.py
|
Python
|
gpl-3.0
| 112,476
|
[
"VisIt"
] |
123ee8af0d22c2baf1b5c8270ff29a0833bd785acf77bd9dafe9b7035ed9fdb5
|
#!/usr/bin/env python
import os
from matplotlib import pyplot as plt, cm
from pymicro.view.vol_utils import AxShowPixelValue
from pymicro.file.file_utils import HST_read
from pymicro.crystal.microstructure import Microstructure
'''
This example first demonstrates how to plot a slice of a 3d image. Here
we use a custom random colormap to display grains nicely.
The example also shows how to modify the coordinate formatter to
display the pixel value when moving the mouse over the plotted image.
Run this example interactively to try it.
'''
display = False
data_dir = '../data'
scan_name = 'pure_Ti_216x216x141_uint16.raw'
scan_path = os.path.join(data_dir, scan_name)
# read only the first slice of the volume and make it a 2d array
data = HST_read(scan_path, autoparse_filename=True, zrange=range(0, 1), verbose=True)[:, :, 0]
rand_cmap = Microstructure.rand_cmap(N=2048, first_is_black=True)
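# rand_cmap builds a random colormap with 2048 entries; first_is_black=True
# keeps index 0 (the background grain id) black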
fig, ax = plt.subplots()
ax = AxShowPixelValue(ax)
ax.imshow(data.T, cmap=rand_cmap, interpolation='nearest', origin='upper')
image_name = os.path.splitext(__file__)[0] + '.png'
plt.savefig(image_name)
print('writing %s' % image_name)
from matplotlib import image
image.thumbnail(image_name, 'thumb_' + image_name, 0.2)
# display the plot in interactive mode
if display: plt.show()
|
heprom/pymicro
|
examples/plotting/pyplot_show_pixel_value.py
|
Python
|
mit
| 1,286
|
[
"CRYSTAL"
] |
7c85a2502709ac3e458496849d85fb5b3dce1b27631601004ee1c1996ddfe86a
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (C) 2015-2018 Hydriz Scholz
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import datetime
import os
import balchivist
class BALMMediacounts(object):
"""
This module is for archiving the statistics on visits to media files
provided by the Wikimedia Foundation (available at
<https://dumps.wikimedia.org/other/mediacounts/>) to the Internet
Archive.
"""
title = "Wikimedia statistics files for media files visits on %s"
desc = "This is the Wikimedia statistics files for visits to "
desc += "media files on upload.wikimedia.org on %s."
config = balchivist.BALConfig("mediacounts")
dbtable = "mediacounts"
conv = balchivist.BALConverter()
tempdir = config.get('dumpdir')
filelist = [
"mediacounts.%s.v00.tsv.bz2"
]
extrafilelist = [
"mediacounts.top1000.%s.v00.csv.zip"
]
jobs = [
"archive",
"check",
"update"
]
# A size hint for the Internet Archive, currently set at 100GB
sizehint = "107374182400"
def __init__(self, params={}, sqldb=None):
"""
This function is executed when a new BALMMediacounts instance is
initialized.
- params (dict): Information about what is to be done about a given
item. The "verbose" and "debug" parameters are necessary.
- sqldb (object): A call to the BALSqlDb class with the required
parameters.
"""
self.sqldb = sqldb
self.verbose = params['verbose']
self.debug = params['debug']
self.common = balchivist.BALCommon(verbose=self.verbose,
debug=self.debug)
@classmethod
def argparse(cls, parser=None):
"""
This function is used for declaring the valid arguments specific to
this module and should only be used during the argparse stage.
- parser (object): The parser object.
"""
group = parser.add_argument_group(
title="Media files visits statistics",
description="Statistics on visits to media files."
)
group.add_argument("--mediacounts-job", action="store",
choices=cls.jobs, default="archive",
dest="mediacountsjob", help="The job to execute.")
group.add_argument("--mediacounts-date", action="store",
dest="mediacountsdate",
help="The date of the dump to work on.")
group.add_argument("--mediacounts-path", action="store",
dest="mediacountspath",
help="The path to the dump directory.")
def getItemMetadata(self, dumpdate):
"""
This function is used for obtaining the metadata for the item on the
Internet Archive.
- dumpdate (string in %Y%m%d format): The date of the dump to work on.
Returns: Dict with the necessary item metadata.
"""
try:
datetime.datetime.strptime(dumpdate, '%Y%m%d')
except ValueError:
self.common.giveMessage('The date was given in the wrong format!')
return False
datename = self.conv.getDateFromWiki(dumpdate)
arcdate = self.conv.getDateFromWiki(dumpdate, archivedate=True)
metadata = {
'collection': self.config.get('collection'),
'contributor': self.config.get('contributor'),
'mediatype': self.config.get('mediatype'),
'rights': self.config.get('rights'),
'licenseurl': self.config.get('licenseurl'),
'date': arcdate,
'subject': self.config.get('subject'),
'title': self.title % (datename),
'description': self.desc % (datename)
}
return metadata
def getFiles(self, dumpdate):
"""
This function is for getting a list of dump files available to be
archived for the given dump date.
- dumpdate (string in %Y%m%d format): The date of the dump to work on.
Returns list of all files.
"""
arcdate = self.conv.getDateFromWiki(dumpdate, archivedate=True)
d = datetime.datetime.strptime(dumpdate, '%Y%m%d')
output = []
for dumpfile in self.filelist:
output.append(dumpfile % (arcdate))
for dumpfile in self.extrafilelist:
thefile = dumpfile % (arcdate)
fileurl = "%s/%s/%s" % (self.config.get('baseurl'),
d.strftime('%Y'), thefile)
if self.common.checkDownloadFileExistence(fileurl):
output.append(thefile)
else:
# File does not exist in the extra file list, continue
continue
return output
def removeFiles(self, filelist):
"""
This function is used for removing all the downloaded files for a
particular dump.
- filelist (list): The list of dump files.
Returns: True if the operation is successful, False if an error has
occurred.
"""
for dumpfile in filelist:
filepath = "%s/%s" % (self.tempdir, dumpfile)
try:
os.remove(filepath)
except OSError:
return False
return True
def getDumpDates(self, can_archive="all"):
"""
This function is for getting all the date of dumps presently stored in
the database.
- can_archive (string): Dumps with this can_archive status will be
returned, "all" for all can_archive statuses.
Returns: List of dump dates.
"""
dumps = []
if (can_archive == "all"):
conds = ''
else:
conds = 'can_archive="%s"' % (can_archive)
options = 'ORDER BY dumpdate DESC LIMIT 30'
results = self.sqldb.select(dbtable=self.dbtable,
columns=['dumpdate'],
conds=conds, options=options)
if results is not None:
for result in results:
dumps.append(result[0].strftime("%Y%m%d"))
return dumps
def getItemsLeft(self, job=None):
"""
This function is used for getting the number of items left to be done
for a specific job.
Note: The "update" job should not be using this!
- job (string): The job to obtain the count for.
Returns: Int with the number of items left to work on.
"""
conds = {}
if (job is None or job == "archive"):
conds['is_archived'] = "0"
conds['can_archive'] = "1"
return self.getNumberOfItems(params=conds)
elif (job == "check"):
conds['is_archived'] = "1"
conds['is_checked'] = "0"
return self.getNumberOfItems(params=conds)
else:
return 0
def getNumberOfItems(self, params={}):
"""
This function is used to get the number of items left to work with.
- params (dict): The conditions to put in the WHERE clause.
Returns: Int with number of items left to work with.
"""
conds = ['claimed_by IS NULL']
for key, val in params.iteritems():
conds.append('%s="%s"' % (key, val))
return self.sqldb.count(dbtable=self.dbtable,
conds=' AND '.join(conds))
def getRandomItem(self, job=None):
"""
This function is used for getting a random item to work on for a
specific job.
Returns: String in %Y%m%d format for the dump date of the item to work
on, None if otherwise.
"""
if (job is None or job == "archive"):
return self.getRandomItemSql(archived=False)
elif (job == "check"):
return self.getRandomItemSql(archived=True)
else:
return None
def getRandomItemSql(self, archived=False):
"""
This function is used to get a random item to work on.
- archived (boolean): Whether or not to obtain a random item that is
already archived.
Returns: Dict with the parameters to the archiving scripts.
"""
output = {}
columns = ['dumpdate']
options = 'ORDER BY RAND() LIMIT 1'
conds = ['claimed_by IS NULL']
if (archived):
extra = [
'is_archived="1"',
'is_checked="0"'
]
else:
extra = [
'is_archived="0"',
'can_archive="1"'
]
conds.extend(extra)
results = self.sqldb.select(dbtable=self.dbtable, columns=columns,
conds=' AND '.join(conds), options=options)
if results is None:
# This should not be triggered at all. Use self.getItemsLeft()
# to verify first before running this function.
output = None
else:
for result in results:
output = result[0].strftime("%Y%m%d")
return output
def addNewItem(self, dumpdate):
"""
This function is used for adding new dumps into the database.
- dumpdate (string in %Y%m%d format): The date of the dump to add.
Returns: True if the update is successful, False if an error occurred.
"""
try:
arcdate = self.conv.getDateFromWiki(dumpdate, archivedate=True)
except ValueError:
return False
values = {
'dumpdate': '"%s"' % (arcdate),
'claimed_by': 'NULL',
'can_archive': '"0"',
'is_archived': '"0"',
'is_checked': '"0"',
'comments': 'NULL'
}
return self.sqldb.insert(dbtable=self.dbtable, values=values)
def updateCanArchive(self, dumpdate, can_archive):
"""
This function is used to update the status of whether a dump can be
archived.
- dumpdate (string in %Y%m%d format): The date of the dump to work on.
- can_archive (string): The can_archive status of the dump.
Returns: True if update is successful, False if an error occurred.
"""
vals = {
'can_archive': '"%s"' % (can_archive)
}
arcdate = self.conv.getDateFromWiki(dumpdate, archivedate=True)
return self.sqldb.update(dbtable=self.dbtable, values=vals,
conds="dumpdate=\"%s\"" % (arcdate))
def markArchived(self, dumpdate):
"""
This function is used to mark an item as archived after doing so.
- dumpdate (string in %Y%m%d format): The date of the dump to work on.
Returns: True if update is successful, False if an error occurred.
"""
vals = {
'is_archived': '"1"',
'claimed_by': 'NULL'
}
arcdate = self.conv.getDateFromWiki(dumpdate, archivedate=True)
return self.sqldb.update(dbtable=self.dbtable, values=vals,
conds="dumpdate=\"%s\"" % (arcdate))
def markChecked(self, dumpdate):
"""
This function is used to mark an item as checked after doing so.
- dumpdate (string in %Y%m%d format): The date of the dump to work on.
Returns: True if update is successful, False if an error occurred.
"""
vals = {
'is_checked': '"1"',
'claimed_by': 'NULL'
}
arcdate = self.conv.getDateFromWiki(dumpdate, archivedate=True)
return self.sqldb.update(dbtable=self.dbtable, values=vals,
conds="dumpdate=\"%s\"" % (arcdate))
def markFailedArchive(self, dumpdate):
"""
This function is used to mark an item as failed when archiving it.
- dumpdate (string in %Y%m%d format): The date of the dump to work on.
Returns: True if update is successful, False if an error occurred.
"""
vals = {
'is_archived': '"2"',
'claimed_by': 'NULL'
}
arcdate = self.conv.getDateFromWiki(dumpdate, archivedate=True)
return self.sqldb.update(dbtable=self.dbtable, values=vals,
conds="dumpdate=\"%s\"" % (arcdate))
def markFailedCheck(self, dumpdate):
"""
This function is used to mark an item as failed when checking it.
- dumpdate (string in %Y%m%d format): The date of the dump to work on.
Returns: True if update is successful, False if an error occurred.
"""
vals = {
'is_checked': '"2"',
'claimed_by': 'NULL'
}
arcdate = self.conv.getDateFromWiki(dumpdate, archivedate=True)
return self.sqldb.update(dbtable=self.dbtable, values=vals,
conds="dumpdate=\"%s\"" % (arcdate))
def archive(self, dumpdate, path=None):
"""
This function is for doing the actual archiving process.
- dumpdate (string): The dumpdate of the dump in %Y%m%d format.
- path (string): The path to the dump directory.
Returns: True if process is successful, False if otherwise.
"""
identifier = "mediacounts-%s" % (dumpdate)
iaitem = balchivist.BALArchiver(identifier=identifier,
verbose=self.verbose, debug=self.debug)
allfiles = self.getFiles(dumpdate)
md = self.getItemMetadata(dumpdate)
headers = {
'x-archive-size-hint': self.sizehint
}
if (path is None):
dumps = self.tempdir
d = datetime.datetime.strptime(dumpdate, '%Y%m%d')
baseurl = "%s/%s" % (self.config.get('baseurl'), d.strftime('%Y'))
self.common.downloadFiles(filelist=allfiles, directory=dumps,
baseurl=baseurl)
else:
dumps = path
if (self.common.checkDumpDir(path=dumps, filelist=allfiles)):
pass
else:
# The dump directory is not suitable to be used, exit the function
return False
os.chdir(dumps)
upload = iaitem.upload(body=allfiles, metadata=md, headers=headers)
if (upload and path is None):
self.removeFiles(allfiles)
return True
else:
return upload
def check(self, dumpdate):
"""
This function checks if the uploaded dump is really complete.
- dumpdate (string in %Y%m%d format): The date of the dump to check.
Returns: True if complete, False if it isn't or errors have occurred.
"""
complete = True
allfiles = self.getFiles(dumpdate)
identifier = "mediacounts-%s" % (dumpdate)
iaitem = balchivist.BALArchiver(identifier=identifier,
verbose=self.verbose, debug=self.debug)
iafiles = iaitem.getFileList()
self.common.giveMessage("Checking if all files are uploaded for the "
"%s dump" % (dumpdate))
for dumpfile in allfiles:
if (dumpfile in iafiles):
continue
else:
# The Internet Archive item is missing some files
complete = False
return complete
def update(self):
"""
This function checks for new dumps and adds new entries into the
database.
Returns: True if complete, raises an Exception if an error has
occurred.
"""
# Variables for getting latest 3 days for updating/checking
oneday = datetime.datetime.now() - datetime.timedelta(days=1)
twoday = datetime.datetime.now() - datetime.timedelta(days=2)
theday = datetime.datetime.now() - datetime.timedelta(days=3)
yesterday = oneday.strftime("%Y%m%d")
daybefore = twoday.strftime("%Y%m%d")
threedays = theday.strftime("%Y%m%d")
alldumps = self.getDumpDates()
# Add yesterday's date into the database
if (yesterday in alldumps):
self.common.giveMessage("Dump on %s already in the database, "
"skipping" % (yesterday))
else:
self.addNewItem(dumpdate=yesterday)
# Allow dump for the day before to be archived
if (daybefore in alldumps):
self.common.giveMessage("Updating can_archive for dump on %s" %
(daybefore))
self.updateCanArchive(dumpdate=daybefore, can_archive=1)
else:
# The day before was not in the database, which should not happen
self.common.giveMessage("Adding dump on %s and updating its "
"can_archive status" % (daybefore))
self.addNewItem(dumpdate=daybefore)
self.updateCanArchive(dumpdate=daybefore, can_archive=1)
# Double-check dump for 3 days ago
if (threedays in alldumps):
# Ensure that it really can be archived
self.updateCanArchive(dumpdate=threedays, can_archive=1)
else:
# The dump from 3 days ago was not in the database, which really should
# not happen
self.common.giveMessage("Adding dump on %s and updating its "
"can_archive status" % (threedays))
self.addNewItem(dumpdate=threedays)
self.updateCanArchive(dumpdate=threedays, can_archive=1)
return True
def dispatch(self, job, date, path):
"""
This function is for dispatching an item to the various functions.
"""
# Claim the item from the database server if not in debug mode
if self.debug:
pass
else:
arcdate = self.conv.getDateFromWiki(date, archivedate=True)
itemdetails = {
'dumpdate': arcdate
}
self.sqldb.claimItem(params=itemdetails, dbtable=self.dbtable)
msg = "Running %s on the Wikimedia media files visit " % (job)
msg += "statistics on %s" % (date)
self.common.giveMessage(msg)
if (job == "archive"):
status = self.archive(dumpdate=date, path=path)
if (self.debug):
return status
elif (self.debug is False and status):
self.common.giveMessage("Marking %s as archived" % (date))
self.markArchived(dumpdate=date)
else:
self.common.giveMessage("Marking %s as failed archive" %
(date))
self.markFailedArchive(dumpdate=date)
elif (job == "check"):
status = self.check(dumpdate=date)
if (self.debug):
return status
elif (self.debug is False and status):
self.common.giveMessage("Marking %s as checked" % (date))
self.markChecked(dumpdate=date)
else:
self.common.giveMessage("Marking %s as failed check" % (date))
self.markFailedCheck(dumpdate=date)
def execute(self, args=None):
"""
This function is for the main execution of the module.
- args (namespace): A namespace for all arguments from argparse.
Returns: True if all processing has completed successfully, False if an
error has occurred.
"""
continuous = False
if (args is None):
continuous = True
elif (args.mediacountsjob == "update"):
return self.update()
elif (args.mediacountsdate is None):
continuous = True
else:
pass
if (continuous):
if (args is None):
# Default to performing the archive job
mediacountsjob = "archive"
mediacountspath = None
else:
mediacountsjob = args.mediacountsjob
mediacountspath = args.mediacountspath
while self.getItemsLeft(job=mediacountsjob) > 0:
date = self.getRandomItem(job=mediacountsjob)
self.dispatch(job=mediacountsjob, date=date,
path=mediacountspath)
else:
self.dispatch(job=args.mediacountsjob, date=args.mediacountsdate,
path=args.mediacountspath)
return True
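# Usage sketch (illustrative, not part of the module API): with a configured
# BALSqlDb instance `sqldb` and parsed command line arguments `args`:
#   instance = BALMMediacounts(params={'verbose': True, 'debug': False},
#                              sqldb=sqldb)
#   instance.execute(args=args)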
if __name__ == '__main__':
BALMessage = balchivist.BALMessage()
IncorrectUsage = balchivist.exception.IncorrectUsage
raise IncorrectUsage(BALMessage.getMessage('exception-incorrectusage'))
|
Hydriz/Balchivist
|
modules/mediacounts.py
|
Python
|
gpl-3.0
| 21,396
|
[
"VisIt"
] |
10f0c94add12b2acb9c1c05b8c9021268428f6e05c8f685a5b7a9d073b8a8d5f
|
"""
A class hierarchy for numerical integration on reference domains in 1,2, and 3D.
.. inheritance-diagram:: proteus.Quadrature
:parts: 1
"""
from __future__ import absolute_import
from __future__ import division
from builtins import zip
from builtins import range
from past.utils import old_div
from builtins import object
from .EGeometry import *
from .Profiling import logEvent
from math import *
class Q_base(object):
"""
The base class for quadrature methods.
"""
def __init__(self,order=1):
self.points=()
self.weights=()
self.pointsAll=(())
self.weightsAll=(())
self.order=order
def setOrder(self,k):
self.order=k
if self.order > len(self.pointsAll):
logEvent("WARNING Q_base requested order=%d > max allowed=%d setting order to max" % (self.order,
len(self.pointsAll)))
self.order= len(self.pointsAll)
self.points=self.pointsAll[self.order-1]
self.weights=self.weightsAll[self.order-1]
class GaussPoint(Q_base):
"""
A dummy class for integrating the boundary of the unit interval (i.e. a point).
"""
def __init__(self,order=1):
Q_base.__init__(self,order)
self.pointsAll=(
(EVec(0.0),),)
self.weightsAll=(
(1.0,),)
self.setOrder(order)
def setOrder(self,k):
self.order = k
self.points = self.pointsAll[0]
self.weights = self.weightsAll[0]
LobattoPoint = GaussPoint
class GaussEdge(Q_base):
"""
Gaussian Quadrature on the unit interval.
"""
def __init__(self,order=1):
#mql. Compute points and weights for Gauss quadrature with n=6,...,9
#mql. TODO: we should generalize this to any order.
#mql. Why are the points not in order?
[p6,w6]=numpy.polynomial.legendre.leggauss(6); p6=0.5*(p6+1); w6=0.5*w6
[p7,w7]=numpy.polynomial.legendre.leggauss(7); p7=0.5*(p7+1); w7=0.5*w7
[p8,w8]=numpy.polynomial.legendre.leggauss(8); p8=0.5*(p8+1); w8=0.5*w8
[p9,w9]=numpy.polynomial.legendre.leggauss(9); p9=0.5*(p9+1); w9=0.5*w9
#mwf for convenience, see Ern and Guermond
a1 = 0.5*sqrt(old_div((15.0+2.0*sqrt(30)),35.0))
a2 = 0.5*sqrt(old_div((15.0-2.0*sqrt(30)),35.0))
w1 = 0.25 - old_div(sqrt(old_div(5.,6.)),12.)
w2 = 0.25 + old_div(sqrt(old_div(5.,6.)),12.)
Q_base.__init__(self,order)
self.pointsAll=(
(EVec(0.5),), #n=1
(EVec(old_div((sqrt(3.0)-1.0),(2.0*sqrt(3.0)))), #n=2
EVec(old_div((sqrt(3.0)+1.0),(2.0*sqrt(3.0))))),
(EVec(old_div((sqrt(5.0) - sqrt(3.0)),(2.0*sqrt(5)))), #n=3
EVec(0.5),
EVec(old_div((sqrt(5.0) + sqrt(3.0)),(2.0*sqrt(5))))),
(EVec(0.5+a1),EVec(0.5-a1), #n=4
EVec(0.5+a2),EVec(0.5-a2)),
(EVec(0.5), #n=5
EVec(0.5*(old_div(sqrt(5.0-2.0*sqrt(old_div(10.0,7.0))),3.0)) + 0.5),
EVec(0.5*(old_div(-sqrt(5.0-2.0*sqrt(old_div(10.0,7.0))),3.0)) + 0.5),
EVec(0.5*(old_div(sqrt(5.0+2.0*sqrt(old_div(10.0,7.0))),3.0)) + 0.5),
EVec(0.5*(old_div(-sqrt(5.0+2.0*sqrt(old_div(10.0,7.0))),3.0)) + 0.5)),
(EVec(p6[0]), #n=6
EVec(p6[1]),
EVec(p6[2]),
EVec(p6[3]),
EVec(p6[4]),
EVec(p6[5])),
(EVec(p7[0]), #n=7
EVec(p7[1]),
EVec(p7[2]),
EVec(p7[3]),
EVec(p7[4]),
EVec(p7[5]),
EVec(p7[6])),
(EVec(p8[0]), #n=8
EVec(p8[1]),
EVec(p8[2]),
EVec(p8[3]),
EVec(p8[4]),
EVec(p8[5]),
EVec(p8[6]),
EVec(p8[7])),
(EVec(p9[0]), #n=9
EVec(p9[1]),
EVec(p9[2]),
EVec(p9[3]),
EVec(p9[4]),
EVec(p9[5]),
EVec(p9[6]),
EVec(p9[7]),
EVec(p9[8]))
)
self.weightsAll=(
(1.0,), #n=1
(0.5, #n=2
0.5),
(old_div(5.0,18.0), #n=3
old_div(8.0,18.0),
old_div(5.0,18.0)),
(w1,w1,w2,w2), #n=4
(0.5*(old_div(128.0,225.0)), #n=5
0.5*(322.0+13.0*sqrt(70.0))/900.0,
0.5*(322.0+13.0*sqrt(70.0))/900.0,
0.5*(322.0-13.0*sqrt(70.0))/900.0,
0.5*(322.0-13.0*sqrt(70.0))/900.0),
(w6[0],w6[1],w6[2],w6[3],w6[4],w6[5]), #n=6
(w7[0],w7[1],w7[2],w7[3],w7[4],w7[5],w7[6]), #n=7
(w8[0],w8[1],w8[2],w8[3],w8[4],w8[5],w8[6],w8[7]), #n=8
(w9[0],w9[1],w9[2],w9[3],w9[4],w9[5],w9[6],w9[7],w9[8]) #n=9
)
self.setOrder(order)
def setOrder(self,order,domain=[0.0,1.0]):
Q_base.setOrder(self,order)
points = self.points
weights = self.weights
self.points = []
self.weights = []
for i in range(order):
self.points.append(EVec((domain[1]-domain[0])*points[i][0] +domain[0]))
self.weights.append((domain[1]-domain[0])*weights[i])
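# Illustrative: setOrder(2, domain=[0.0, 2.0]) maps the 2-point rule onto
# [0, 2]; the mapped weights then sum to the interval length 2.0.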
class LobattoEdge(Q_base):
"""
Gauss-Lobatto quadrature on the unit interval.
"""
def __init__(self,order=1):
Q_base.__init__(self,order)
a1 = 0.2*sqrt(5.0)
a2 = old_div(sqrt(21.0),7.0)
self.pointsAll=(
(EVec(0.0),EVec(1.0)),
(EVec(0.0),EVec(0.5),EVec(1.0)),
(EVec(0.0),EVec(0.5*(1.-a1)),EVec(0.5*(1.+a1)),EVec(1.0)),
(EVec(0.0),EVec(0.5*(1.-a2)),EVec(0.5),EVec(0.5*(1.+a2)),EVec(1.0)),
)
self.weightsAll=(
(0.5,0.5),
(old_div(1.0,6.0), old_div(4.0,6.0), old_div(1.0,6.0)),
(old_div(1.0,12.0), old_div(5.0,12.0),old_div(5.0,12.0),old_div(1.0,12.0)),
(old_div(1.0,20.0),old_div(49.0,180.0),old_div(32.0,90.0),old_div(49.0,180.0),old_div(1.0,20.0))
)
self.setOrder(order)
class LobattoEdgeAlt(Q_base):
"""
Gauss-Lobatto quadrature on the [-1:1] interval.
"""
def __init__(self,order=1):
Q_base.__init__(self,order)
a1 = 0.2*sqrt(5.0)
a2 = old_div(sqrt(21),7.0)
self.pointsAll=(
(EVec(-1.0),EVec(1.0)),
(EVec(-1.0),EVec(0.0),EVec(1.0)),
(EVec(-1.0),EVec(-a1),EVec(a1) ,EVec(1.0)),
(EVec(-1.0),EVec(-a2),EVec(0.0),EVec(a2),EVec(1.0)),)
self.weightsAll=(
(1.0,1.0),
(old_div(1.0,3.0), old_div(4.0,3.0), old_div(1.0,3.0)),
(old_div(1.0,6.0), old_div(5.0,6.0),old_div(5.0,6.0),old_div(1.0,6.0)),
(old_div(1.0,10.0),old_div(49.0,90.0),old_div(32.0,45.0),old_div(49.0,90.0),old_div(1.0,10.0)),)
self.setOrder(order)
class CompositeTrapezoidalEdge(Q_base):
"""
Composite trapezoidal rule on the unit interval.
order is number of intervals
"""
def __init__(self,order=1,maxOrder=20):
assert order > 0, "Composite Trapezoidal Rule requires order > 0"
self.maxOrder = maxOrder
Q_base.__init__(self,order)
pointsList = []
weightsList= []
for nintervals in range(1,self.maxOrder+1):
dx = old_div(1.0,float(nintervals))
points=numpy.arange(nintervals+1,dtype='d')*dx
weights=numpy.zeros(nintervals+1,'d'); weights.fill(dx)
weights[0] *= 0.5; weights[-1] *= 0.5
pointsList.append([EVec(p) for p in points])
weightsList.append([w for w in weights])
self.pointsAll =tuple(tuple(pL) for pL in pointsList)
self.weightsAll=tuple(tuple(wL) for wL in weightsList)
self.setOrder(order)
def setOrder(self,k):
assert k > 0, "Composite Trapezoidal Rule requires order > 0"
assert k <= self.maxOrder, "Composite Trapezoidal Rule: order k=%s exceeds maxOrder=%s, increase maxOrder in the constructor" % (k,self.maxOrder)
self.order = k
self.points = self.pointsAll[k-1]
self.weights = self.weightsAll[k-1]
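# Illustrative: order=4 uses 5 equally spaced points on [0, 1] with the end
# points carrying half weight, so the weights sum to 1.0.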
class FaceBarycenterEdge(Q_base):
def __init__(self,order=1):
order=1
Q_base.__init__(self,order)
self.pointsAll=(
(EVec(1.0),EVec(0.0)),)
self.weightsAll=(
(0.5,0.5),)
self.setOrder(order)
def setOrder(self,k):
self.order = 1
self.points = self.pointsAll[0]
self.weights = self.weightsAll[0]
class GaussTriangle(Q_base):
"""
Gauss quadrature on the unit triangle.
"""
def __init__(self,order=1):
#for convenience, see Ern and Guermond p. 360
#4th order
a1 = 0.445948490915965
a2 = 0.091576213509771
w1 = 0.223381589678010*0.5
w2 = 0.109951743655322*0.5
#5th order
a5= (old_div((6.-sqrt(15.0)),21.0),old_div((6. + sqrt(15.0)),21.0))
w5= (0.5*(155. - sqrt(15))/1200.,0.5*(155. + sqrt(15))/1200.)
fifthOrderPoints = []; fifthOrderWeights = []
for i in range(2):
fifthOrderPoints.append((a5[i],a5[i])); fifthOrderWeights.append(w5[i])
fifthOrderPoints.append((1.-2.*a5[i],a5[i])); fifthOrderWeights.append(w5[i])
fifthOrderPoints.append((a5[i],1.-2.*a5[i])); fifthOrderWeights.append(w5[i])
fifthOrderPoints.append((old_div(1.,3.),old_div(1.,3.))); fifthOrderWeights.append(0.5*9.0/40.0)
#6th order
a6 = (0.063089014491502,0.249286745170910)
a6a = 0.310352451033785
a6b = 0.053145049844816
w6 =(0.5*0.050844906370206,0.5*0.116786275726378)
w6ab= 0.5*0.082851075618374
sixthOrderPoints = []; sixthOrderWeights = []
for i in range(2):
sixthOrderPoints.append((a6[i],a6[i])); sixthOrderWeights.append(w6[i])
sixthOrderPoints.append((1.-2.*a6[i],a6[i])); sixthOrderWeights.append(w6[i])
sixthOrderPoints.append((a6[i],1.-2.*a6[i])); sixthOrderWeights.append(w6[i])
abPerms = [(a6a,a6b),(1.-a6a-a6b,a6a),(a6b,1.-a6a-a6b),
(a6b,a6a),(1.-a6a-a6b,a6b),(a6a,1.-a6a-a6b)]
abWeights= [w6ab for i in range(6)]
sixthOrderPoints.extend(abPerms); sixthOrderWeights.extend(abWeights)
Q_base.__init__(self,order)
self.pointsAll=(
( EVec(old_div(1.0,3.0),old_div(1.0,3.0)),),
( EVec(old_div(1.0,2.0),old_div(1.0,2.0)), EVec(0.0,old_div(1.0,2.0)), EVec(old_div(1.0,2.0),0.0)),
( EVec(old_div(1.0,3.0),old_div(1.0,3.0)), EVec(old_div(3.0,5.0),old_div(1.0,5.0)), EVec(old_div(1.0,5.0),old_div(3.0,5.0)),EVec(old_div(1.0,5.0),old_div(1.0,5.0))),
( EVec(a1,a1),EVec(1.0-2.0*a1,a1),EVec(a1,1.0-2.0*a1),
EVec(a2,a2),EVec(1.0-2.0*a2,a2),EVec(a2,1.0-2.0*a2)),
tuple(EVec(p[0],p[1]) for p in fifthOrderPoints),
tuple(EVec(p[0],p[1]) for p in sixthOrderPoints)
)#points All
self.weightsAll=(
(0.5,),
(old_div(1.0,6.0), old_div(1.0,6.0), old_div(1.0,6.0)),
(old_div(-27.0,96.0), old_div(25.0,96.0), old_div(25.0,96.0), old_div(25.0,96.0)),
(w1,w1,w1,w2,w2,w2),
tuple(fifthOrderWeights),
tuple(sixthOrderWeights)
)#weightsAll
self.setOrder(order)
class CompositeTriangle(Q_base):
"""
Composite quadrature rule on the unit triangle.
"""
# uniformly refine the reference cell until its size < hk.
def __init__(self,quad_rule,hk):
N = int(floor(old_div(1.0,hk)))
h1= old_div(1.0,N)
h2= 1.0/N/N
npt = len(quad_rule.points)
self.weights = numpy.tile(quad_rule.weights, N*N)
self.weights *= h2
quad_points = numpy.asarray(quad_rule.points)
self.points = numpy.zeros((npt*N*N,3),'d')
k = 1
for j in range(N):
for i in range(N-j):
self.points[(k-1)*npt:k*npt, :] = quad_points*h1 + numpy.array([i*h1,j*h1,0.0])
k += 1
for j in range(1,N):
for i in range(1,N-j+1):
self.points[(k-1)*npt:k*npt, :] = -quad_points*h1 + numpy.array([i*h1,j*h1, 0.0])
k += 1
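# Usage sketch (illustrative): refine a 2nd order Gauss rule until each
# sub-triangle is smaller than hk; the composite weights still sum to the
# reference triangle area 1/2:
#   comp = CompositeTriangle(GaussTriangle(order=2), hk=0.1)
#   assert abs(comp.weights.sum() - 0.5) < 1e-12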
class CompositeTetrahedron(Q_base):
def get_detJ_and_J_from_ref(self, simplex_nodes):
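# Affine map x = J * x_ref + B from the reference tetrahedron onto this
# sub-tetrahedron, with node 2 taken as the origin; detJ rescales the
# quadrature weights.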
x = simplex_nodes[:, 0]
y = simplex_nodes[:, 1]
z = simplex_nodes[:, 2]
J = numpy.array([[x[0] - x[2], x[1] - x[2], x[3] - x[2]],
[y[0] - y[2], y[1] - y[2], y[3] - y[2]],
[z[0] - z[2], z[1] - z[2], z[3] - z[2]]])
detJ = numpy.linalg.det(J)
B = numpy.array([x[2], y[2], z[2]])
return detJ, J, B
def get_h_of_Tetrahedron(self, simplex_nodes):
# the six edge lengths of the tetrahedron
h1 = numpy.linalg.norm(simplex_nodes[0, :] - simplex_nodes[2, :])
h2 = numpy.linalg.norm(simplex_nodes[1, :] - simplex_nodes[2, :])
h3 = numpy.linalg.norm(simplex_nodes[3, :] - simplex_nodes[2, :])
h4 = numpy.linalg.norm(simplex_nodes[0, :] - simplex_nodes[1, :])
h5 = numpy.linalg.norm(simplex_nodes[0, :] - simplex_nodes[3, :])
h6 = numpy.linalg.norm(simplex_nodes[1, :] - simplex_nodes[3, :])
return max([h1, h2, h3, h4, h5, h6])
def get_max_h_of_all_tetrahedron(self, all_tet):
n_tet = old_div(all_tet.shape[0], 4)
h = []
for i in range(n_tet):
h.append(self.get_h_of_Tetrahedron(all_tet[i * 4:(i + 1) * 4, :]))
return max(h)
def get_8_sub_simplex(self, simplex_nodes):
r""" Return 8 sub-simplex of the simplex givne by simplex_nodes: 4 points ordered by right hand rule.
`<https://www.mathworks.com/matlabcentral/mlc-downloads/downloads/submissions/22430/versions/7/previews/tetrarefine3.m/index.html?access_key=>`_
"""
x1 = simplex_nodes[0]
x2 = simplex_nodes[1]
x3 = simplex_nodes[2]
x4 = simplex_nodes[3]
x5 = 0.5 * (x1 + x2)
x6 = 0.5 * (x1 + x3)
x7 = 0.5 * (x1 + x4)
x8 = 0.5 * (x2 + x3)
x9 = 0.5 * (x2 + x4)
x10 = 0.5 * (x3 + x4)
all_sub_simplex = numpy.zeros((8 * 4, 3), 'd')
all_sub_simplex[0] = x1
all_sub_simplex[1] = x5
all_sub_simplex[2] = x6
all_sub_simplex[3] = x7
all_sub_simplex[4] = x5
all_sub_simplex[5] = x2
all_sub_simplex[6] = x8
all_sub_simplex[7] = x9
all_sub_simplex[8] = x6
all_sub_simplex[9] = x8
all_sub_simplex[10] = x3
all_sub_simplex[11] = x10
all_sub_simplex[12] = x7
all_sub_simplex[13] = x9
all_sub_simplex[14] = x10
all_sub_simplex[15] = x4
all_sub_simplex[16] = x5
all_sub_simplex[17] = x6
all_sub_simplex[18] = x7
all_sub_simplex[19] = x9
all_sub_simplex[20] = x8
all_sub_simplex[21] = x6
all_sub_simplex[22] = x5
all_sub_simplex[23] = x9
all_sub_simplex[24] = x6
all_sub_simplex[25] = x7
all_sub_simplex[26] = x9
all_sub_simplex[27] = x10
all_sub_simplex[28] = x9
all_sub_simplex[29] = x8
all_sub_simplex[30] = x6
all_sub_simplex[31] = x10
return all_sub_simplex
def get_sub_tet_of_all_tet(self, all_tet):
n_tet = old_div(all_tet.shape[0], 4)
all_sub_tet = numpy.zeros((n_tet * 8 * 4, 3), 'd')
for i in range(n_tet):
all_sub_tet[i * 8 * 4:(i + 1) * 8 * 4,
:] = self.get_8_sub_simplex(all_tet[i * 4:(i + 1) * 4, :])
return all_sub_tet
def __init__(self, quad_rule, hk):
"""
refine the reference tetrahedron until the largest edge < hk
"""
all_tetrahedron = numpy.array(
[[1, 0, 0], [0, 1, 0], [0, 0, 0], [0, 0, 1]], 'd')
max_h = self.get_max_h_of_all_tetrahedron(all_tetrahedron)
while max_h > hk:
all_sub_tet = self.get_sub_tet_of_all_tet(all_tetrahedron)
all_tetrahedron = all_sub_tet
max_h = self.get_max_h_of_all_tetrahedron(all_tetrahedron)
# self.plot_all_tet(all_tetrahedron)
self.h = max_h
n_tet = old_div(all_tetrahedron.shape[0], 4)
quad_weights = numpy.asarray(quad_rule.weights, 'd')
quad_points = numpy.asarray(quad_rule.points, 'd')
n_quad_per_tet = quad_weights.shape[0]
self.weights = numpy.zeros((n_tet * n_quad_per_tet,), 'd')
self.points = numpy.zeros((n_tet * n_quad_per_tet, 3), 'd')
for i in range(n_tet):
detJ, J, B = self.get_detJ_and_J_from_ref(
all_tetrahedron[i * 4: (i + 1) * 4, :])
self.weights[i * n_quad_per_tet:(i + 1)
* n_quad_per_tet] = detJ * quad_weights
self.points[i * n_quad_per_tet:(i + 1)
* n_quad_per_tet, :] = numpy.dot(quad_points, J.T) + B
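# Usage sketch (assuming GaussTetrahedron below as the base rule): each pass of
# the refinement loop splits every tetrahedron into 8 sub-tetrahedra through the
# edge midpoints, and the affine map (detJ, J, B) pushes the base rule onto each
# sub-cell, so the composite weights always sum to the reference volume 1/6.
#
#   comp = CompositeTetrahedron(GaussTetrahedron(order=2), hk=0.5)
#   assert abs(comp.weights.sum() - 1.0/6.0) < 1.0e-12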
class LobattoTriangle(Q_base):
"""
Gauss-Lobatto quadrature on the unit triangle.
"""
def __init__(self,order=1):
#mwf allow higher order now
#order=1
Q_base.__init__(self,order)
#for third order Fekete rule (see Taylor etal SINUM 2000)
        #the 10 points below fall into orbits (classes of points); 'orbit' is the number of permutations
#orbit, 1st two barycentric coordinates, weight to be multiplied by area of triangle
#1, (1/3,1/3), 0.9
#3, (0,0), 0.0333333333
#6, (0,a), 0.1666666667
#where a = 0.2763932023
a = 0.2763932023; b = 1.0-a; wv = 0.03333333333333*0.25; wab = 0.16666666666667*0.25
self.pointsAll=(
( EVec(0.0,0.0),EVec(1.0,0.0),EVec(0.0,1.0)),
( EVec(old_div(1.0,3.0),old_div(1.0,3.0)),
EVec(0.,0.),EVec(1.0,0.0),EVec(0.0,1.0),
EVec(0,a),EVec(b,0),EVec(a,b),EVec(a,0.),EVec(b,a),EVec(0,b)),
)
self.weightsAll=(
(old_div(1.0,6.0),old_div(1.0,6.0),old_div(1.0,6.0)),
(0.5-3.0*wv-6*wab,
wv,wv,wv,
wab,wab,wab,wab,wab,wab),
)
self.setOrder(order)
def setOrder(self,k):
self.order = 1
self.points = self.pointsAll[0]
self.weights = self.weightsAll[0]
if k > 1:
self.order = 2
self.points = self.pointsAll[1]
self.weights = self.weightsAll[1]
class CompositeTrapezoidalTriangle(Q_base):
"""
Composite trapezoidal rule on the reference triangle
order is number of intervals
"""
def __init__(self,order=1,maxOrder=20):
assert order > 0, "Composite Trapezoidal Rule requires order > 0"
self.maxOrder = maxOrder
Q_base.__init__(self,order)
pointsList = []
weightsList= []
parentArea = 0.5
for nintervals in range(1,self.maxOrder+1):
dx = old_div(1.0,float(nintervals))
#uniform subdivisions in terms of barycentric coordinates
baryvals = numpy.arange(nintervals+1,dtype='d')*dx
combos = []; weights = []
for i in range(nintervals+1):
for j in range(nintervals+1-i):
combos.append(EVec(baryvals[i],baryvals[j]))
iInBoundary = 0; jInBoundary=0;
if i == 0 or i == nintervals: iInBoundary = 1
if j == 0 or j == nintervals-i: jInBoundary = 1
if iInBoundary+jInBoundary == 2:
weights.append(old_div(parentArea,3.0))
elif iInBoundary+jInBoundary == 1:
weights.append(parentArea)
else:
weights.append(parentArea*2.0)
pointsList.append([tuple(p) for p in combos])
weightsList.append([w for w in weights])
parentArea = 0.5*(old_div(1.0,float(nintervals+1)))**2
self.pointsAll =tuple(tuple(pL) for pL in pointsList)
self.weightsAll=tuple(tuple(wL) for wL in weightsList)
self.setOrder(order)
def setOrder(self,k):
assert k > 0, "Composite Trapezoidal Rule requires order > 0"
assert k <= self.maxOrder, "Composite Trapezoidal Rule k= %s maxOrder= %s need to increase in ctor " % (k,self.maxOrder)
self.order = k
self.points = self.pointsAll[k-1]
self.weights = self.weightsAll[k-1]
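# Usage sketch: for order = N the rule uses the (N+1)(N+2)/2 uniform barycentric
# points; a vertex of the reference triangle belongs to 1 sub-triangle, an edge
# point to 3, and an interior point to 6, which is exactly the 1/3 : 1 : 2
# weight pattern above, so the weights sum to the reference area 1/2.
#
#   q = CompositeTrapezoidalTriangle(order=3)
#   assert abs(sum(q.weights) - 0.5) < 1.0e-12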
class FaceBarycenterTriangle(Q_base):
def __init__(self,order=1):
order=1
Q_base.__init__(self,order)
self.pointsAll=(
( EVec(old_div(1.0,2.0),old_div(1.0,2.0)), EVec(0.0,old_div(1.0,2.0)), EVec(old_div(1.0,2.0),0.0)),
)
self.weightsAll=(
(old_div(1.0,6.0), old_div(1.0,6.0), old_div(1.0,6.0)),
)
self.setOrder(order)
def setOrder(self,k):
self.order = 1
self.points = self.pointsAll[0]
self.weights = self.weightsAll[0]
class GaussTetrahedron(Q_base):
"""
Gauss-Legendre quadrature on the unit tetrahedron.
"""
def __init__(self,order=1):
Q_base.__init__(self,order)
        #mwf for convenience, see Ern & Guermond p. 360
a1=old_div((7.0-sqrt(15.)),34.0)
a2=old_div((7.0+sqrt(15.)),34.0)
a =old_div((10.0-2.0*sqrt(5.)),40.0)
Vf=old_div(1.0,6.0)
w1=(2665.0+14.0*sqrt(15.0))/37800.0*Vf
w2=(2665.0-14.0*sqrt(15.0))/37800.0*Vf
wa=10.0/189.0*Vf
#
self.pointsAll=(
(EVec(0.25,0.25,0.25),),
(EVec(0.585410196624969,0.138196601125011,0.138196601125011),#2nd degree
EVec(0.138196601125011,0.585410196624969,0.138196601125011),
EVec(0.138196601125011,0.138196601125011,0.585410196624969),
EVec(0.138196601125011,0.138196601125011,0.138196601125011)),
(EVec(0.25,0.25,0.25),#3rd degree
EVec(old_div(1.0,2.0), old_div(1.0,6.0), old_div(1.0,6.0)),
EVec(old_div(1.0,6.0), old_div(1.0,2.0), old_div(1.0,6.0)),
EVec(old_div(1.0,6.0), old_div(1.0,6.0), old_div(1.0,2.0)),
EVec(old_div(1.0,6.0), old_div(1.0,6.0), old_div(1.0,6.0))),
(EVec(0.50,0.50,0.0),#4th degree
EVec(0.50,0.0,0.50),
EVec(0.0,0.50,0.50),
EVec(0.0,0.0,0.50),
EVec(0.0,0.50,0.0),
EVec(0.50,0.0,0.0),
EVec(0.100526765225204467,0.100526765225204467,0.100526765225204467),
EVec(0.100526765225204467,0.100526765225204467,0.698419704324386603),
EVec(0.100526765225204467,0.698419704324386603,0.100526765225204467),
EVec(0.698419704324386603,0.100526765225204467,0.100526765225204467),
EVec(0.314372873493192195,0.314372873493192195,0.314372873493192195),
EVec(0.314372873493192195,0.314372873493192195,0.568813795204234229e-1),
EVec(0.314372873493192195,0.568813795204234229e-1,0.314372873493192195),
EVec(0.568813795204234229e-1,0.314372873493192195,0.314372873493192195)),
(EVec(0.333333333333333333,0.333333333333333333,0.333333333333333333),#5th degree
EVec(0.0,0.333333333333333333,0.333333333333333333),
EVec(0.333333333333333333,0.0,0.333333333333333333),
EVec(0.333333333333333333,0.333333333333333333,0.0),
EVec(0.25,0.25,0.25),
EVec(0.909090909090909091e-1,0.909090909090909091e-1,0.909090909090909091e-1),
EVec(0.727272727272727273,0.909090909090909091e-1,0.909090909090909091e-1),
EVec(0.909090909090909091e-1,0.727272727272727273,0.909090909090909091e-1),
EVec(0.909090909090909091e-1,0.909090909090909091e-1,0.727272727272727273),
EVec(0.433449846426335728,0.665501535736642813e-1,0.665501535736642813e-1),
EVec(0.665501535736642813e-1,0.433449846426335728,0.665501535736642813e-1),
EVec(0.665501535736642813e-1,0.665501535736642813e-1,0.433449846426335728),
EVec(0.433449846426335728,0.433449846426335728,0.665501535736642813e-1),
EVec(0.433449846426335728,0.665501535736642813e-1,0.433449846426335728),
EVec(0.665501535736642813e-1,0.433449846426335728,0.433449846426335728)),
(EVec(0.214602871259151684,0.214602871259151684,0.214602871259151684),#6th degree
EVec(0.214602871259151684,0.214602871259151684,0.356191386222544953),
EVec(0.214602871259151684,0.356191386222544953,0.214602871259151684),
EVec(0.356191386222544953,0.214602871259151684,0.214602871259151684),
EVec(0.406739585346113397e-1,0.406739585346113397e-1,0.406739585346113397e-1),
EVec(0.406739585346113397e-1,0.406739585346113397e-1,0.877978124396165982),
EVec(0.406739585346113397e-1,0.877978124396165982,0.406739585346113397e-1),
EVec(0.877978124396165982,0.406739585346113397e-1,0.406739585346113397e-1),
EVec(0.322337890142275646,0.322337890142275646,0.322337890142275646),
EVec(0.322337890142275646,0.322337890142275646,0.329863295731730594e-1),
EVec(0.322337890142275646,0.329863295731730594e-1,0.322337890142275646),
EVec(0.329863295731730594e-1,0.322337890142275646,0.322337890142275646),
EVec(0.636610018750175299e-1,0.636610018750175299e-1,0.269672331458315867),
EVec(0.636610018750175299e-1,0.269672331458315867,0.636610018750175299e-1),
EVec(0.269672331458315867,0.636610018750175299e-1,0.636610018750175299e-1),
EVec(0.636610018750175299e-1,0.636610018750175299e-1,0.603005664791649076),
EVec(0.636610018750175299e-1,0.603005664791649076,0.636610018750175299e-1),
EVec(0.603005664791649076,0.636610018750175299e-1,0.636610018750175299e-1),
EVec(0.636610018750175299e-1,0.269672331458315867,0.603005664791649076),
EVec(0.636610018750175299e-1,0.603005664791649076,0.269672331458315867),
EVec(0.603005664791649076,0.636610018750175299e-1,0.269672331458315867),
EVec(0.603005664791649076,0.269672331458315867,0.636610018750175299e-1),
EVec(0.269672331458315867,0.603005664791649076,0.636610018750175299e-1),
EVec(0.269672331458315867,0.636610018750175299e-1,0.603005664791649076)),
(EVec(0.50,0.50,0.0),#7th degree
EVec(0.50,0.0,0.50),
EVec(0.0,0.50,0.50),
EVec(0.0,0.0,0.50),
EVec(0.0,0.50,0.0),
EVec(0.50,0.0,0.0),
EVec(0.25,0.25,0.25),
EVec(0.782131923303186549e-1,0.782131923303186549e-1,0.782131923303186549e-1),
EVec(0.782131923303186549e-1,0.782131923303186549e-1,0.765360423009044044),
EVec(0.782131923303186549e-1,0.765360423009044044,0.782131923303186549e-1),
EVec(0.765360423009044044,0.782131923303186549e-1,0.782131923303186549e-1),
EVec(0.121843216663904411,0.121843216663904411,0.121843216663904411),
EVec(0.121843216663904411,0.121843216663904411,0.634470350008286765),
EVec(0.121843216663904411,0.634470350008286765,0.121843216663904411),
EVec(0.634470350008286765,0.121843216663904411,0.121843216663904411),
EVec(0.332539164446420554,0.332539164446420554,0.332539164446420554),
EVec(0.332539164446420554,0.332539164446420554,0.238250666073834549e-2),
EVec(0.332539164446420554,0.238250666073834549e-2,0.332539164446420554),
EVec(0.238250666073834549e-2,0.332539164446420554,0.332539164446420554),
EVec(0.10,0.10,0.20),
EVec(0.10,0.20,0.10),
EVec(0.20,0.10,0.10),
EVec(0.10,0.10,0.60),
EVec(0.10,0.60,0.10),
EVec(0.60,0.10,0.10),
EVec(0.10,0.20,0.60),
EVec(0.10,0.60,0.20),
EVec(0.60,0.10,0.20),
EVec(0.60,0.20,0.10),
EVec(0.20,0.60,0.10),
EVec(0.20,0.10,0.60)),
(EVec(0.25,0.25,0.25),#8th degree
EVec(0.127470936566639015,0.127470936566639015,0.127470936566639015),
EVec(0.127470936566639015,0.127470936566639015,0.617587190300082967),
EVec(0.127470936566639015,0.617587190300082967,0.127470936566639015),
EVec(0.617587190300082967,0.127470936566639015,0.127470936566639015),
EVec(0.320788303926322960e-1,0.320788303926322960e-1,0.320788303926322960e-1),
EVec(0.320788303926322960e-1,0.320788303926322960e-1,0.903763508822103123),
EVec(0.320788303926322960e-1,0.903763508822103123,0.320788303926322960e-1),
EVec(0.903763508822103123,0.320788303926322960e-1,0.320788303926322960e-1),
EVec(0.497770956432810185e-1,0.497770956432810185e-1,0.450222904356718978),
EVec(0.497770956432810185e-1,0.450222904356718978,0.497770956432810185e-1),
EVec(0.450222904356718978,0.497770956432810185e-1,0.497770956432810185e-1),
EVec(0.450222904356718978,0.450222904356718978,0.497770956432810185e-1),
EVec(0.450222904356718978,0.497770956432810185e-1,0.450222904356718978),
EVec(0.497770956432810185e-1,0.450222904356718978,0.450222904356718978),
EVec(0.183730447398549945,0.183730447398549945,0.316269552601450060),
EVec(0.183730447398549945,0.316269552601450060,0.183730447398549945),
EVec(0.316269552601450060,0.183730447398549945,0.183730447398549945),
EVec(0.316269552601450060,0.316269552601450060,0.183730447398549945),
EVec(0.316269552601450060,0.183730447398549945,0.316269552601450060),
EVec(0.183730447398549945,0.316269552601450060,0.316269552601450060),
EVec(0.231901089397150906,0.231901089397150906,0.229177878448171174e-1),
EVec(0.231901089397150906,0.229177878448171174e-1,0.231901089397150906),
EVec(0.229177878448171174e-1,0.231901089397150906,0.231901089397150906),
EVec(0.231901089397150906,0.231901089397150906,0.513280033360881072),
EVec(0.231901089397150906,0.513280033360881072,0.231901089397150906),
EVec(0.513280033360881072,0.231901089397150906,0.231901089397150906),
EVec(0.231901089397150906,0.229177878448171174e-1,0.513280033360881072),
EVec(0.231901089397150906,0.513280033360881072,0.229177878448171174e-1),
EVec(0.513280033360881072,0.231901089397150906,0.229177878448171174e-1),
EVec(0.513280033360881072,0.229177878448171174e-1,0.231901089397150906),
EVec(0.229177878448171174e-1,0.513280033360881072,0.231901089397150906),
EVec(0.229177878448171174e-1,0.231901089397150906,0.513280033360881072),
EVec(0.379700484718286102e-1,0.379700484718286102e-1,0.730313427807538396),
EVec(0.379700484718286102e-1,0.730313427807538396,0.379700484718286102e-1),
EVec(0.730313427807538396,0.379700484718286102e-1,0.379700484718286102e-1),
EVec(0.379700484718286102e-1,0.379700484718286102e-1,0.193746475248804382),
EVec(0.379700484718286102e-1,0.193746475248804382,0.379700484718286102e-1),
EVec(0.193746475248804382,0.379700484718286102e-1,0.379700484718286102e-1),
EVec(0.379700484718286102e-1,0.730313427807538396,0.193746475248804382),
EVec(0.379700484718286102e-1,0.193746475248804382,0.730313427807538396),
EVec(0.193746475248804382,0.379700484718286102e-1,0.730313427807538396),
EVec(0.193746475248804382,0.730313427807538396,0.379700484718286102e-1),
EVec(0.730313427807538396,0.193746475248804382,0.379700484718286102e-1),
EVec(0.730313427807538396,0.379700484718286102e-1,0.193746475248804382)))
# ( EVec(0.333333333333333333,0.333333333333333333,0.333333333333333333),
# EVec(0.0,0.333333333333333333,0.333333333333333333),
# EVec(0.333333333333333333,0.0,0.333333333333333333),
# EVec(0.333333333333333333,0.333333333333333333,0.0),
# EVec(0.25,0.25,0.25),
# EVec(0.909090909090909091e-1,0.909090909090909091e-1,0.909090909090909091e-1),
# EVec(0.727272727272727273,0.909090909090909091e-1,0.909090909090909091e-1),
# EVec(0.909090909090909091e-1,0.727272727272727273,0.909090909090909091e-1),
# EVec(0.909090909090909091e-1,0.909090909090909091e-1,0.727272727272727273),
# EVec(0.433449846426335728,0.665501535736642813e-1,0.665501535736642813e-1),
# EVec(0.665501535736642813e-1,0.433449846426335728,0.665501535736642813e-1),
# EVec(0.665501535736642813e-1,0.665501535736642813e-1,0.433449846426335728),
# EVec(0.433449846426335728,0.433449846426335728,0.665501535736642813e-1),
# EVec(0.433449846426335728,0.665501535736642813e-1,0.433449846426335728),
# EVec(0.665501535736642813e-1,0.433449846426335728,0.433449846426335728)),
# ( EVec(0.25,0.25,0.25),
# EVec(a1,a1,a1),
# EVec(1.0-2.0*a1,a1,a1),
# EVec(a1,1.0-2.0*a1,a1),
# EVec(a1,a1,1.0-2.0*a1),
# EVec(a2,a2,a2),
# EVec(1.0-2.0*a2,a2,a2),
# EVec(a2,1.0-2.0*a2,a2),
# EVec(a2,a2,1.0-2.0*a2),
# EVec(0.5-a,a,a),
# EVec(a,0.5-a,a),
# EVec(a,a,0.5-a),
# EVec(0.5-a,0.5-a,a),
# EVec(a,0.5-a,0.5-a),
# EVec(0.5-a,a,0.5-a)))
self.weightsAll=(
(old_div(1.0,6.0),),
(old_div(1.0,24.0),#2nd degree
old_div(1.0,24.0),
old_div(1.0,24.0),
old_div(1.0,24.0)),
(old_div(-4.0,30.0),#3rd degree
old_div(9.0,120.0),
old_div(9.0,120.0),
old_div(9.0,120.0),
old_div(9.0,120.0)),
(0.317460317460317450e-2, #4th degree
0.317460317460317450e-2,
0.317460317460317450e-2,
0.317460317460317450e-2,
0.317460317460317450e-2,
0.317460317460317450e-2,
0.147649707904967828e-1,
0.147649707904967828e-1,
0.147649707904967828e-1,
0.147649707904967828e-1,
0.221397911142651221e-1,
0.221397911142651221e-1,
0.221397911142651221e-1,
0.221397911142651221e-1),
(0.602678571428571597e-2,#5th degree
0.602678571428571597e-2,
0.602678571428571597e-2,
0.602678571428571597e-2,
0.302836780970891856e-1,
0.116452490860289742e-1,
0.116452490860289742e-1,
0.116452490860289742e-1,
0.116452490860289742e-1,
0.109491415613864534e-1,
0.109491415613864534e-1,
0.109491415613864534e-1,
0.109491415613864534e-1,
0.109491415613864534e-1,
0.109491415613864534e-1),
(0.665379170969464506e-2,#6th degree
0.665379170969464506e-2,
0.665379170969464506e-2,
0.665379170969464506e-2,
0.167953517588677620e-2,
0.167953517588677620e-2,
0.167953517588677620e-2,
0.167953517588677620e-2,
0.922619692394239843e-2,
0.922619692394239843e-2,
0.922619692394239843e-2,
0.922619692394239843e-2,
0.803571428571428248e-2,
0.803571428571428248e-2,
0.803571428571428248e-2,
0.803571428571428248e-2,
0.803571428571428248e-2,
0.803571428571428248e-2,
0.803571428571428248e-2,
0.803571428571428248e-2,
0.803571428571428248e-2,
0.803571428571428248e-2,
0.803571428571428248e-2,
0.803571428571428248e-2),
(0.970017636684296702e-3,#7th degree
0.970017636684296702e-3,
0.970017636684296702e-3,
0.970017636684296702e-3,
0.970017636684296702e-3,
0.970017636684296702e-3,
0.182642234661087939e-1,
0.105999415244141609e-1,
0.105999415244141609e-1,
0.105999415244141609e-1,
0.105999415244141609e-1,
-0.625177401143299494e-1,
-0.625177401143299494e-1,
-0.625177401143299494e-1,
-0.625177401143299494e-1,
0.489142526307353653e-2,
0.489142526307353653e-2,
0.489142526307353653e-2,
0.489142526307353653e-2,
0.275573192239850917e-1,
0.275573192239850917e-1,
0.275573192239850917e-1,
0.275573192239850917e-1,
0.275573192239850917e-1,
0.275573192239850917e-1,
0.275573192239850917e-1,
0.275573192239850917e-1,
0.275573192239850917e-1,
0.275573192239850917e-1,
0.275573192239850917e-1,
0.275573192239850917e-1),
(-0.393270066412926145e-1,#8th degree
0.408131605934270525e-2,
0.408131605934270525e-2,
0.408131605934270525e-2,
0.408131605934270525e-2,
0.658086773304341943e-3,
0.658086773304341943e-3,
0.658086773304341943e-3,
0.658086773304341943e-3,
0.438425882512284693e-2,
0.438425882512284693e-2,
0.438425882512284693e-2,
0.438425882512284693e-2,
0.438425882512284693e-2,
0.438425882512284693e-2,
0.138300638425098166e-1,
0.138300638425098166e-1,
0.138300638425098166e-1,
0.138300638425098166e-1,
0.138300638425098166e-1,
0.138300638425098166e-1,
0.424043742468372453e-2,
0.424043742468372453e-2,
0.424043742468372453e-2,
0.424043742468372453e-2,
0.424043742468372453e-2,
0.424043742468372453e-2,
0.424043742468372453e-2,
0.424043742468372453e-2,
0.424043742468372453e-2,
0.424043742468372453e-2,
0.424043742468372453e-2,
0.424043742468372453e-2,
0.223873973961420164e-2,
0.223873973961420164e-2,
0.223873973961420164e-2,
0.223873973961420164e-2,
0.223873973961420164e-2,
0.223873973961420164e-2,
0.223873973961420164e-2,
0.223873973961420164e-2,
0.223873973961420164e-2,
0.223873973961420164e-2,
0.223873973961420164e-2,
0.223873973961420164e-2))
# (0.602678571428571597e-2,
# 0.602678571428571597e-2,
# 0.602678571428571597e-2,
# 0.602678571428571597e-2,
# 0.302836780970891856e-1,
# 0.116452490860289742e-1,
# 0.116452490860289742e-1,
# 0.116452490860289742e-1,
# 0.116452490860289742e-1,
# 0.109491415613864534e-1,
# 0.109491415613864534e-1,
# 0.109491415613864534e-1,
# 0.109491415613864534e-1,
# 0.109491415613864534e-1,
# 0.109491415613864534e-1),
# (16.0/135.*Vf,
# w1,w1,w1,w1,
# w2,w2,w2,w2,
# wa,wa,wa,wa,wa,wa))
self.setOrder(order)
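# Usage sketch: the tabulated weights at each degree sum to the reference
# tetrahedron volume 1/6, e.g. for the 3rd-degree rule -4/30 + 4*(9/120) = 1/6.
#
#   q = GaussTetrahedron(order=3)
#   assert abs(sum(q.weights) - 1.0/6.0) < 1.0e-12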
class LobattoTetrahedron(Q_base):
"""
Gauss-Lobatto quadrature on the unit tetrahedron.
"""
def __init__(self,order=1):
order=1
Q_base.__init__(self,order)
self.pointsAll=(
( EVec(0.0,0.0,0.0),
EVec(1.0,0.0,0.0),
EVec(0.0,1.0,0.0),
EVec(0.0,0.0,1.0)),)
self.weightsAll=(
(old_div(1.0,24.0),old_div(1.0,24.0),old_div(1.0,24.0),old_div(1.0,24.0)),)
self.setOrder(order)
def setOrder(self,k):
self.order = 1
self.points = self.pointsAll[0]
self.weights = self.weightsAll[0]
class FaceBarycenterTetrahedron(Q_base):
def __init__(self,order=1):
order=1
Q_base.__init__(self,order)
self.pointsAll=(
( EVec(old_div(1.0,3.0),old_div(1.0,3.0),old_div(1.0,3.0)), EVec(0.0,old_div(1.0,3.0),old_div(1.,3.)), EVec(old_div(1.0,3.0),0.0,old_div(1.0,3.0)),EVec(old_div(1.,3.),old_div(1.,3.),0.0)),
)
self.weightsAll=(
            (old_div(1.0,24.0), old_div(1.0,24.0), old_div(1.0,24.0), old_div(1.0,24.0)),
)
self.setOrder(order)
def setOrder(self,k):
self.order = 1
self.points = self.pointsAll[0]
self.weights = self.weightsAll[0]
class SimplexGaussQuadrature(Q_base):
""" A class which defines quadrature on unit simplices.
Arguments
---------
nd : int
Dimension of the finite element problem.
order : int
Polynomial order for which the integration is exact.
"""
def __init__(self,nd=3,order=1):
Q_base.__init__(self,order)
if nd == 0:
self.quadrature = GaussPoint(order)
        elif nd == 1:
self.quadrature = GaussEdge(order)
elif nd == 2:
self.quadrature = GaussTriangle(order)
elif nd == 3:
self.quadrature = GaussTetrahedron(order)
self.pointsAll = self.quadrature.pointsAll
self.weightsAll = self.quadrature.weightsAll
self.points = self.quadrature.points
self.weights = self.quadrature.weights
def setOrder(self,k):
self.quadrature.setOrder(k)
self.points = self.quadrature.points
self.weights = self.quadrature.weights
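# Usage sketch: the wrapper dispatches on the spatial dimension and forwards
# setOrder to the underlying rule.
#
#   q = SimplexGaussQuadrature(nd=2, order=3)  # a GaussTriangle under the hood
#   q.setOrder(4)                              # delegates to GaussTriangle.setOrder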
class CubeGaussQuadrature(Q_base):
"""
    Tensor-product Gauss quadrature on the reference cube [-1,1]^nd.
"""
def __init__(self,nd=3,order=1):
Q_base.__init__(self,order)
self.nd=nd
self.quadrature = GaussEdge(order=order)
self.setOrder(order)
def setOrder(self,order):
self.quadrature.setOrder(order,[-1.0,1.0])
if self.nd == 1:
self.points = self.quadrature.points
self.weights = self.quadrature.weights
if self.nd == 2:
self.points = []
self.weights = []
for i in range(order):
for j in range(order):
self.points.append(EVec(self.quadrature.points[i][0],self.quadrature.points[j][0]))
self.weights.append(self.quadrature.weights[i]*self.quadrature.weights[j])
if self.nd == 3:
self.points =[]
self.weights = []
for i in range(order):
for j in range(order):
for k in range(order):
self.points.append(EVec(self.quadrature.points[i][0],self.quadrature.points[j][0],self.quadrature.points[k][0]))
self.weights.append(self.quadrature.weights[i]*self.quadrature.weights[j]*self.quadrature.weights[k])
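# Usage sketch (assuming GaussEdge supplies `order` Gauss points on the given
# interval): the nd-dimensional rule is the tensor product of the 1D rule on
# [-1,1], so it has order**nd points and its weights sum to the volume 2**nd.
#
#   q = CubeGaussQuadrature(nd=2, order=3)
#   assert len(q.points) == 9
#   assert abs(sum(q.weights) - 4.0) < 1.0e-12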
class SimplexLobattoQuadrature(Q_base):
"""
A class for quadrature on unit simplices.
"""
def __init__(self,nd=3,order=1):
#mwf allow for higher order now
#order=1
Q_base.__init__(self,order)
if nd == 0:
self.quadrature = LobattoPoint(order)
        elif nd == 1:
self.quadrature = LobattoEdge(order)
elif nd == 2:
self.quadrature = LobattoTriangle(order)
elif nd == 3:
self.quadrature = LobattoTetrahedron(order)
self.pointsAll = self.quadrature.pointsAll
self.weightsAll = self.quadrature.weightsAll
self.points = self.quadrature.points
self.weights = self.quadrature.weights
def setOrder(self,k):
#mwf allow variable order now
#order=1
#self.quadrature.points = self.quadrature.pointsAll[0]
#self.quadrature.weights = self.quadrature.weightsAll[0]
self.quadrature.setOrder(k)
self.points = self.quadrature.points
self.weights = self.quadrature.weights
def buildUnion(quadratureDict):
    """
    Build the union of the quadrature points of several rules.

    The quadratureDict argument allows different quadrature for a set of
    integrals (the keys). This function builds a single array of points and,
    for each integral, an array of weights matching those points--zero weight
    if a given point isn't part of the quadrature rule for that integral.
    """
#
#
#First calculate the union of all element quadrature points.
#
quadraturePointSet = set()
for I,quadrature in quadratureDict.items():
quadraturePointSet |= set([(p[0],p[1],p[2]) for p in quadrature.points])
nQuadraturePoints = len(quadraturePointSet)
quadraturePointSet = sorted(quadraturePointSet)
#
#Now build a dictionary at each element quadrature point which
#contains the weights for each integral
#
# e.g. quadratureWeightDict[(I,p)] is the weight at p for the
# integral I
#
quadratureWeightDict={}
#mwf check to avoid float comparison
quadraturePointValid= {}
for I,quadrature in quadratureDict.items():
for p in quadraturePointSet:
quadratureWeightDict[(I,p)]=0.0
quadraturePointValid[(I,p)]=False
for w,p in zip(quadrature.weights,
quadrature.points):
quadratureWeightDict[(I,(p[0],p[1],p[2]))]=w
quadraturePointValid[(I,(p[0],p[1],p[2]))]=True
#
# Now create the desired point and weight arrays
#
quadraturePoints = numpy.zeros((nQuadraturePoints,3),
'd')
for k,p in enumerate(quadraturePointSet):
quadraturePoints[k,:]=p
quadratureWeights = {}
    #mwf add dictionary to get indices for points corresponding to each integral type
quadraturePointIndeces = {}
for I in list(quadratureDict.keys()):
quadratureWeights[I] = numpy.zeros(
(nQuadraturePoints,),'d')
quadraturePointIndeces[I]= []
for k,p in enumerate(quadraturePointSet):
quadratureWeights[I][k] = quadratureWeightDict[(I,p)]
if quadraturePointValid[(I,p)]:
                #mwf is this good enough to tell me what the correct indices are?
assert abs(quadratureWeightDict[(I,p)]) > 1.0e-10, "valid quadrature point zero weight"
quadraturePointIndeces[I].append(k)
return (quadraturePoints,quadratureWeights,quadraturePointIndeces)
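# Usage sketch (hypothetical integral keys): merge two rules of different order
# into one point set with per-integral, zero-padded weight arrays.
#
#   quadratureDict = {'mass': SimplexGaussQuadrature(nd=3, order=2),
#                     'stiffness': SimplexGaussQuadrature(nd=3, order=4)}
#   points, weights, indeces = buildUnion(quadratureDict)
#   # points.shape == (nUnionPoints, 3); weights['mass'][k] == 0.0 at points
#   # that belong only to the 'stiffness' rule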
## @}
|
erdc/proteus
|
proteus/Quadrature.py
|
Python
|
mit
| 47,052
|
[
"Gaussian"
] |
0078d1a832f2d22abc066b710e439734175d11d311c2b0270ff7910b0c9e55b5
|
#!/usr/bin/env python
'''unit testing code for pysam.
Execute in the :file:`tests` directory as it requires the Makefile
and data files located there.
'''
import pysam
import unittest
import os
import re
import sys
import subprocess
import shutil
from TestUtils import checkBinaryEqual
IS_PYTHON3 = sys.version_info[0] >= 3
SAMTOOLS = "samtools"
WORKDIR = "pysam_test_work"
DATADIR = "pysam_data"
def runSamtools(cmd):
'''run a samtools command'''
try:
retcode = subprocess.call(cmd, shell=True,
stderr=subprocess.PIPE)
if retcode < 0:
print("Child was terminated by signal", -retcode)
except OSError as e:
print("Execution failed:", e)
def getSamtoolsVersion():
'''return samtools version'''
with subprocess.Popen(SAMTOOLS, shell=True,
stderr=subprocess.PIPE).stderr as pipe:
lines = b"".join(pipe.readlines())
if IS_PYTHON3:
lines = lines.decode('ascii')
    return re.search(r"Version:\s+(\S+)", lines).groups()[0]
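# Example (assuming a samtools executable on the PATH; the exact string depends
# on the installed version):
#   >>> getSamtoolsVersion()
#   '0.1.19'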
class BinaryTest(unittest.TestCase):
'''test samtools command line commands and compare
against pysam commands.
Tests fail, if the output is not binary identical.
'''
first_time = True
# a dictionary of commands to test
# first entry: (samtools output file, samtools command)
# second entry: (pysam output file, (pysam function, pysam options) )
commands = \
{
"view":
(
("ex1.view", "view ex1.bam > ex1.view"),
("pysam_ex1.view", (pysam.view, "ex1.bam")),
),
"view2":
(
("ex1.view", "view -bT ex1.fa -o ex1.view2 ex1.sam"),
# note that -o ex1.view2 throws exception.
("pysam_ex1.view",
(pysam.view, "-bT ex1.fa -oex1.view2 ex1.sam")),
),
"sort":
(
("ex1.sort.bam", "sort ex1.bam ex1.sort"),
("pysam_ex1.sort.bam", (pysam.sort, "ex1.bam pysam_ex1.sort")),
),
"mpileup":
(
("ex1.pileup", "mpileup ex1.bam > ex1.pileup"),
("pysam_ex1.mpileup", (pysam.mpileup, "ex1.bam")),
),
"depth":
(
("ex1.depth", "depth ex1.bam > ex1.depth"),
("pysam_ex1.depth", (pysam.depth, "ex1.bam")),
),
"faidx":
(
("ex1.fa.fai", "faidx ex1.fa"),
("pysam_ex1.fa.fai", (pysam.faidx, "ex1.fa")),
),
"index":
(
("ex1.bam.bai", "index ex1.bam"),
("pysam_ex1.bam.bai", (pysam.index, "pysam_ex1.bam")),
),
"idxstats":
(
("ex1.idxstats", "idxstats ex1.bam > ex1.idxstats"),
("pysam_ex1.idxstats", (pysam.idxstats, "pysam_ex1.bam")),
),
"fixmate":
(
("ex1.fixmate.bam", "fixmate ex1.bam ex1.fixmate.bam"),
("pysam_ex1.fixmate.bam",
(pysam.fixmate, "pysam_ex1.bam pysam_ex1.fixmate.bam")),
),
"flagstat":
(
("ex1.flagstat", "flagstat ex1.bam > ex1.flagstat"),
("pysam_ex1.flagstat", (pysam.flagstat, "pysam_ex1.bam")),
),
"calmd":
(
("ex1.calmd.bam", "calmd ex1.bam ex1.fa > ex1.calmd.bam"),
("pysam_ex1.calmd.bam", (pysam.calmd, "pysam_ex1.bam ex1.fa")),
),
"merge":
(
("ex1.merge", "merge -f ex1.merge ex1.bam ex1.bam"),
# -f option does not work - following command will
# cause the subsequent command to fail
("pysam_ex1.merge",
(pysam.merge, "pysam_ex1.merge pysam_ex1.bam pysam_ex1.bam")),
),
"rmdup":
(
# use -s option, otherwise the following error in samtools 1.2:
# Samtools-htslib-API: bam_get_library() not yet implemented
("ex1.rmdup.bam", "rmdup -s ex1.bam ex1.rmdup.bam"),
("pysam_ex1.rmdup.bam",
(pysam.rmdup, "pysam_ex1.bam -s pysam_ex1.rmdup.bam")),
),
"reheader":
(
("ex1.reheader", "reheader ex1.bam ex1.bam > ex1.reheader"),
("pysam_ex1.reheader", (pysam.reheader, "ex1.bam ex1.bam")),
),
"cat":
(
("ex1.cat.bam", "cat -o ex1.cat.bam ex1.bam ex1.bam"),
("pysam_ex1.cat.bam",
(pysam.cat, " -o pysam_ex1.cat.bam ex1.bam ex1.bam")),
),
"targetcut":
(
("ex1.targetcut", "targetcut ex1.bam > ex1.targetcut"),
("pysam_ex1.targetcut", (pysam.targetcut, "pysam_ex1.bam")),
),
"phase":
(
("ex1.phase", "phase ex1.bam > ex1.phase"),
("pysam_ex1.phase", (pysam.phase, "pysam_ex1.bam")),
),
"import":
(
("ex1.bam", "import ex1.fa.fai ex1.sam.gz ex1.bam"),
("pysam_ex1.bam",
(pysam.samimport, "ex1.fa.fai ex1.sam.gz pysam_ex1.bam")),
),
"bam2fq":
(
("ex1.bam2fq", "bam2fq ex1.bam > ex1.bam2fq"),
("pysam_ex1.bam2fq", (pysam.bam2fq, "pysam_ex1.bam")),
),
"pad2unpad":
(
("ex2.unpad", "pad2unpad -T ex1.fa ex2.bam > ex2.unpad"),
("pysam_ex2.unpad", (pysam.pad2unpad, "-T ex1.fa ex2.bam")),
),
"bamshuf":
(
("ex1.bamshuf.bam", "bamshuf ex1.bam ex1.bamshuf"),
("pysam_ex1.bamshuf.bam",
(pysam.bamshuf, "ex1.bam pysam_ex1.bamshuf")),
),
"bedcov":
(
("ex1.bedcov", "bedcov ex1.bed ex1.bam > ex1.bedcov"),
("pysam_ex1.bedcov", (pysam.bedcov, "ex1.bed ex1.bam")),
),
}
# some tests depend on others. The order specifies in which order
# the samtools commands are executed.
# The first three (faidx, import, index) need to be in that order,
# the rest is arbitrary.
order = ('faidx',
'import',
'index',
'view',
'view2',
'sort',
'mpileup',
'depth',
'idxstats',
'fixmate',
'flagstat',
'calmd',
'merge',
'rmdup',
'reheader',
'cat',
'bedcov',
'targetcut',
'phase',
'bam2fq',
# Segmentation fault:
# 'bamshuf',
# File not binary identical
# 'pad2unpad',
)
def setUp(self):
'''setup tests.
For setup, all commands will be run before the first test is
executed. Individual tests will then just compare the output
files.
'''
if BinaryTest.first_time:
# remove previous files
if os.path.exists(WORKDIR):
shutil.rmtree(WORKDIR)
# copy the source files to WORKDIR
os.makedirs(WORKDIR)
for f in ("ex1.fa", "ex1.sam.gz",
"ex1.sam", "ex2.bam",
"ex1.bed"):
shutil.copy(os.path.join(DATADIR, f),
os.path.join(WORKDIR, f))
# cd to workdir
savedir = os.getcwd()
os.chdir(WORKDIR)
for label in self.order:
sys.stdout.write("preparing test {}".format(label))
command = self.commands[label]
# build samtools command and target and run
samtools_target, samtools_command = command[0]
runSamtools(" ".join((SAMTOOLS, samtools_command)))
sys.stdout.write(" samtools ok")
# get pysam command and run
try:
pysam_target, pysam_command = command[1]
except ValueError as msg:
raise ValueError("error while setting up %s=%s: %s" %
(label, command, msg))
pysam_method, pysam_options = pysam_command
try:
output = pysam_method(*pysam_options.split(" "),
raw=True,
catch_stdout=True)
except pysam.SamtoolsError as msg:
raise pysam.SamtoolsError(
"error while executing %s: options=%s: msg=%s" %
(label, pysam_options, msg))
sys.stdout.write(" pysam ok\n")
if ">" in samtools_command:
with open(pysam_target, "wb") as outfile:
if type(output) == list:
if IS_PYTHON3:
for line in output:
outfile.write(line.encode('ascii'))
else:
for line in output:
outfile.write(line)
else:
outfile.write(output)
os.chdir(savedir)
BinaryTest.first_time = False
samtools_version = getSamtoolsVersion()
def _r(s):
# patch - remove any of the alpha/beta suffixes, i.e., 0.1.12a ->
# 0.1.12
if s.count('-') > 0:
s = s[0:s.find('-')]
return re.sub("[^0-9.]", "", s)
if _r(samtools_version) != _r(pysam.__samtools_version__):
raise ValueError(
"versions of pysam/samtools and samtools differ: %s != %s" %
(pysam.__samtools_version__,
samtools_version))
def checkCommand(self, command):
if command:
samtools_target, pysam_target = self.commands[
command][0][0], self.commands[command][1][0]
samtools_target = os.path.join(WORKDIR, samtools_target)
pysam_target = os.path.join(WORKDIR, pysam_target)
self.assertTrue(
checkBinaryEqual(samtools_target, pysam_target),
"%s failed: files %s and %s are not the same" %
(command, samtools_target, pysam_target))
def testImport(self):
self.checkCommand("import")
def testIndex(self):
self.checkCommand("index")
def testSort(self):
self.checkCommand("sort")
def testMpileup(self):
self.checkCommand("mpileup")
def testCalmd(self):
self.checkCommand("calmd")
def testDepth(self):
self.checkCommand("depth")
def testIdxstats(self):
self.checkCommand("idxstats")
def testFixmate(self):
self.checkCommand("fixmate")
def testFlagstat(self):
self.checkCommand("flagstat")
def testMerge(self):
self.checkCommand("merge")
def testRmdup(self):
self.checkCommand("rmdup")
def testReheader(self):
self.checkCommand("reheader")
def testCat(self):
self.checkCommand("cat")
def testTargetcut(self):
self.checkCommand("targetcut")
def testPhase(self):
self.checkCommand("phase")
def testBam2fq(self):
self.checkCommand("bam2fq")
def testBedcov(self):
self.checkCommand("bedcov")
def testView(self):
self.checkCommand("view")
# def testBamshuf(self):
# self.checkCommand("bamshuf")
# def testPad2Unpad(self):
# self.checkCommand("pad2unpad")
def testEmptyIndex(self):
self.assertRaises(IOError, pysam.index, "exdoesntexist.bam")
def __del__(self):
if os.path.exists(WORKDIR):
shutil.rmtree(WORKDIR)
class StdoutTest(unittest.TestCase):
'''test if stdout can be redirected.'''
def testWithRedirectedStdout(self):
r = pysam.flagstat(os.path.join(DATADIR, "ex1.bam"))
self.assertTrue(len(r) > 0)
def testWithoutRedirectedStdout(self):
r = pysam.flagstat(os.path.join(DATADIR, "ex1.bam"),
catch_stdout=False)
self.assertTrue(len(r) == 0)
if __name__ == "__main__":
# build data files
print ("building data files")
subprocess.call("make -C %s" % DATADIR, shell=True)
print ("starting tests")
unittest.main()
print ("completed tests")
|
dannon/pysam
|
tests/samtools_test.py
|
Python
|
mit
| 12,882
|
[
"pysam"
] |
3c93dc835d94e7521fda68e0be7d6a73da0e3345b2639c0c87e75a7c3be5416a
|
import click
import os
import pathlib
import logging
from create_tiles import calc_output_filenames, create_tiles, list_tile_files
from ingester.utils import preserve_cwd
from netcdf_writer import append_to_netcdf, MultiVariableNetCDF, SingleVariableNetCDF
import eodatasets.drivers
import eodatasets.type
from eodatasets.serialise import read_yaml_metadata
_LOG = logging.getLogger(__name__)
CLICK_SETTINGS = dict(help_option_names=['-h', '--help'])
DEFAULT_TILE_OPTIONS = {
'output_format': 'GTiff',
'create_options': ['COMPRESS=DEFLATE', 'ZLEVEL=1']
}
def get_input_filenames(input_path, eodataset):
"""
Extracts absolute filenames from a DatasetMetadata object.
:type input_path: pathlib.Path
:type eodataset: eodatasets.type.DatasetMetadata
:return: list of filenames
"""
assert input_path.is_dir()
bands = sorted([band for band_num, band in eodataset.image.bands.items()], key=lambda band: band.number)
input_files = [input_path / band.path for band in bands]
return input_files
def is_yaml_file(path):
"""
Checks if this is a path to a yaml file
:type path: pathlib.Path
:rtype: boolean
"""
return path.is_file() and path.suffix == '.yaml'
def load_dataset(input_path):
"""
Loads a dataset metadata description
    :param input_path: str or pathlib.Path, a ga-metadata.yaml file or a dataset directory
:rtype: (pathlib.Path, eodataset.DatasetMetadata)
"""
input_path = pathlib.Path(input_path)
if is_yaml_file(input_path):
eodataset = read_yaml_metadata(input_path)
input_path = input_path.parent
elif input_path.is_dir():
eodriver = eodatasets.drivers.EODSDriver()
eodataset = eodatasets.type.DatasetMetadata()
eodriver.fill_metadata(eodataset, input_path)
else:
        raise Exception("Unknown dataset type at: {}".format(input_path))
return input_path, eodataset
def merge_tiles_to_netcdf(eodataset, filename_format, netcdf_class):
created_tiles = list_tile_files('test.csv')
tile_mappings = calc_output_filenames(created_tiles, filename_format, eodataset)
for geotiff, netcdf in tile_mappings:
append_to_netcdf(geotiff, netcdf, eodataset, netcdf_class=netcdf_class)
return [netcdf_path for _, netcdf_path in tile_mappings]
def setup_logging(verbosity):
"""
    Sets up logging; defaults to WARN
:param verbosity: 1 for INFO, 2 for DEBUG
:return:
"""
logging_level = logging.WARN - 10 * verbosity
logging.basicConfig(level=logging_level)
@preserve_cwd
def ingest(input_path, output_dir, filename_format, netcdf_class=MultiVariableNetCDF, tile=True, merge=True):
"""
    Runs a series of steps on an input dataset: stack, split into tiles, and re-merge into NetCDF
:param input_path: str, pathname to a ga-metadata.yaml file or directory that eo-datasets can process
:param output_dir: str, pathname
:param filename_format: string format for output filenames, extracts fields from the input EO-Dataset
:param netcdf_class: either MultiVariableNetCDF or SingleVariableNetCDF
:param tile: boolean, whether to run the tiling step
    :param merge: boolean, whether to run the merge-into-NetCDF step
    :return: list of paths to the created/updated NetCDF files
"""
os.chdir(output_dir)
input_path, eodataset = load_dataset(input_path)
input_files = get_input_filenames(input_path, eodataset)
basename = eodataset.ga_label
if tile:
created_tiles = create_tiles(input_files, basename, DEFAULT_TILE_OPTIONS)
_LOG.info("Created tiles: {}".format(created_tiles))
    # Import tiles into NetCDF files
    netcdf_paths = []
    if merge:
        netcdf_paths = merge_tiles_to_netcdf(eodataset, filename_format, netcdf_class)
        _LOG.info("Created/altered storage units: {}".format(netcdf_paths))
    return netcdf_paths
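# Usage sketch (hypothetical paths; the fields in filename_format are filled
# from the EO dataset metadata):
#
#   ingest('/data/LS8_SCENE/ga-metadata.yaml', '/tmp/tiles',
#          'combined_{x}_{y}.nc', netcdf_class=MultiVariableNetCDF)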
@click.command(help="Example output filename format: combined_{x}_{y}.nc", context_settings=CLICK_SETTINGS)
@click.option('--output-dir', '-o', default='.')
@click.option('--multi-variable', 'netcdf_class', flag_value=MultiVariableNetCDF, default=True)
@click.option('--single-variable', 'netcdf_class', flag_value=SingleVariableNetCDF)
@click.option('--tile/--no-tile', default=True, help="Allow partial processing")
@click.option('--merge/--no-merge', default=True, help="Allow partial processing")
@click.option('--verbose', '-v', count=True, help="Use multiple times for more verbosity")
@click.argument('input_path', type=click.Path(exists=True, readable=True))
@click.argument('filename-format')
def main(input_path, output_dir, filename_format, netcdf_class=MultiVariableNetCDF,
tile=True, merge=True, verbose=0):
"""
Runs ingest from the command line
"""
setup_logging(verbose)
ingest(input_path, output_dir, filename_format, netcdf_class, tile, merge)
if __name__ == '__main__':
try:
from ipdb import launch_ipdb_on_exception
with launch_ipdb_on_exception():
main()
except ImportError:
main()
|
omad/datacube-experiments
|
ingester/datacube_ingester.py
|
Python
|
bsd-3-clause
| 4,945
|
[
"NetCDF"
] |
51d112efaf24841301db4e88c0bf82ca18436d6cf2eeda0f480c2acbf29227e0
|
#!/usr/bin/env python
from __future__ import print_function, division
import argparse
import json
import re
import elasticsearch as es
import elasticsearch_dsl
parser = argparse.ArgumentParser(description="Cleans up Kibana objects and " +
"Elasticsearch indices for a given user " +
"and project")
parser.add_argument('user', action='store', type=str, help="User ID.")
parser.add_argument('project', action='store', type=str, help="Run ID.")
parser.add_argument('-e', '--elasticsearch', action='store_true',
help="Delete Elasticsearch indices with [user_project] prefix")
parser.add_argument('-k', '--kibana', action='store_true',
help="Delete Kibana objects with [user_project] prefix.")
parser.add_argument('-H', '--host', action='store', dest='host', type=str,
default='elk.crc.nd.edu', help="Elasticsearch host.")
parser.add_argument('-p', '--port', action='store', dest='port', type=int,
default=9200, help="Elasticsearch port number.")
args = parser.parse_args()
prefix = '[' + args.user + '_' + args.project + ']'
client = es.Elasticsearch([{'host': args.host, 'port': args.port}])
if args.kibana:
search = elasticsearch_dsl.Search(using=client, index='.kibana') \
.filter('prefix', _id=prefix)
response = search.execute()
for result in response:
client.delete(index='.kibana', doc_type=result.meta.doc_type,
id=result.meta.id)
if args.elasticsearch:
client.indices.delete(index=prefix + '_*')
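# Example invocation (hypothetical user/project; deletes both the Kibana
# objects and the Elasticsearch indices prefixed with [jdoe_run42]):
#   python cleanup_elk.py jdoe run42 -e -k -H elk.example.org -p 9200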
|
yannakopoulos/elk-admin
|
scripts/cleanup_elk.py
|
Python
|
mit
| 1,609
|
[
"Elk"
] |
96ff9624f77c7b41197cb4ba5b4c91bdd03cc9471feff366f7d287f916f1fc90
|
# coding=utf-8
from __future__ import unicode_literals
from ..person import Provider as PersonProvider
class Provider(PersonProvider):
formats = (
'{{first_name_male}} {{last_name}}',
'{{first_name_male}} {{last_name}}',
'{{first_name_male}} {{last_name}}',
'{{first_name_male}} {{last_name}}',
'{{first_name_male}} {{last_name}}-{{last_name}}',
'{{first_name_female}} {{last_name}}',
'{{first_name_female}} {{last_name}}',
'{{first_name_female}} {{last_name}}',
'{{first_name_female}} {{last_name}}',
'{{first_name_female}} {{last_name}}-{{last_name}}',
'{{prefix_male}} {{first_name_male}} {{last_name}}',
'{{prefix_female}} {{first_name_female}} {{last_name}}',
'{{prefix_male}} {{first_name_male}} {{last_name}}',
'{{prefix_female}} {{first_name_female}} {{last_name}}'
)
first_names_male = (
'Adam', 'Albert', 'Aksel', 'Alex', 'Alexander', 'Alf', 'Allan',
'Alvin', 'Anders', 'André', 'Andreas', 'Anton', 'Arne', 'Asger',
        'August', 'Benjamin', 'Benny', 'Bent', 'Bertil', 'Bertram', 'Birger',
'Bjarne', 'Bo', 'Bob', 'Bobby', 'Boe', 'Boris', 'Borris',
'Brian', 'Bruno', 'Bøje', 'Børge', 'Carl', 'Carlo', 'Carsten',
'Casper', 'Christian', 'Christoffer', 'Christopher', 'Claus', 'Clavs', 'Curt',
'Dan', 'Daniel', 'Danny', 'David', 'Dennis', 'Ebbe', 'Einar',
        'Einer', 'Elias', 'Emil', 'Eric', 'Erik', 'Erling', 'Ernst',
        'Esben', 'Finn', 'Flemming', 'Frank', 'Frans', 'Freddy', 'Frede',
        'Frederik', 'Frode', 'Georg', 'George', 'Gert', 'Gorm', 'Gunnar',
'Gunner', 'Gustav', 'Hans', 'Helge', 'Henrik', 'Henry', 'Herbert',
'Herman', 'Hjalte', 'Holger', 'Hugo', 'Ib', 'Ivan', 'Iver',
'Jack', 'Jacob', 'Jakob', 'James', 'Jan', 'Jano', 'Jarl',
'Jean', 'Jens', 'Jeppe', 'Jesper', 'Jim', 'Jimmy', 'Joachim',
'Joakim', 'Johan', 'Johannes', 'John', 'Johnnie', 'Johnny', 'Jon',
'Jonas', 'Jonathan', 'Julius', 'Jørgen', 'Karl', 'Karlo', 'Karsten',
'Kaspar', 'Kasper', 'Keld', 'Ken', 'Kenn', 'Kenneth', 'Kenny',
'Kent', 'Kim', 'Kjeld', 'Klaus', 'Klavs', 'Kristian', 'Kurt',
'Kåre', 'Lars', 'Lasse', 'Laurits', 'Laus', 'Laust', 'Leif',
'Lennarth', 'Lucas', 'Ludvig', 'Mads', 'Magnus', 'Malthe', 'Marcus',
'Marius', 'Mark', 'Martin', 'Mathias', 'Matthias', 'Michael', 'Mik',
'Mikael', 'Mike', 'Mikkel', 'Mogens', 'Morten', 'Nick', 'Nicklas',
'Nicolai', 'Nicolaj', 'Niels', 'Nikolai', 'Nikolaj', 'Nils', 'Noah',
'Ole', 'Olfert', 'Oliver', 'Oscar', 'Oskar', 'Osvald', 'Otto',
'Ove', 'Palle', 'Patrick', 'Paw', 'Peder', 'Per', 'Pete',
'Peter', 'Paul', 'Philip', 'Poul', 'Preben', 'Ragnar', 'Ragner',
'Rasmus', 'René', 'Richard', 'Richardt', 'Robert', 'Robin', 'Rolf',
'Ron', 'Ronni', 'Ronnie', 'Ronny', 'Ruben', 'Rune', 'Sam',
'Sebastian', 'Silas', 'Simon', 'Simon', 'Sonny', 'Steen', 'Stefan',
'Sten', 'Stephan', 'Steve', 'Steven', 'Stig', 'Svenning', 'Søren',
'Tage', 'Tejs', 'Thomas', 'Tim', 'Timmy', 'Tobias', 'Tom',
'Tommy', 'Tonny', 'Torben', 'Troels', 'Uffe', 'Ulf', 'Ulrik',
'Vagn', 'Valdemar', 'Verner', 'Victor', 'Villads', 'Werner', 'William',
'Yan', 'Yannick', 'Yngve', 'Zacharias', 'Ziggy', 'Øivind', 'Øjvind',
'Ørni', 'Øvli', 'Øystein', 'Øyvind', 'Åbjørn', 'Aage', 'Åge',
)
first_names_female = (
'Abelone', 'Agnes', 'Agnete', 'Alberte', 'Alma', 'Amalie', 'Amanda',
'Andrea', 'Ane', 'Anette', 'Anna', 'Anne', 'Annemette', 'Annette',
'Asta', 'Astrid', 'Benedicte', 'Benedikte', 'Bente', 'Benthe', 'Berit',
'Berta', 'Beth', 'Bettina', 'Birgit', 'Birgitte', 'Birte', 'Birthe',
'Bitten', 'Bodil', 'Britt', 'Britta', 'Camilla', 'Carina', 'Carla',
'Caroline', 'Cathrine', 'Catrine', 'Cecilie', 'Charlotte', 'Christina', 'Christine',
'Cirkeline', 'Clara', 'Connie', 'Conny', 'Dagmar', 'Dagny', 'Daniella',
'Dina', 'Ditte', 'Doris', 'Dorte', 'Dorthe', 'Edith', 'Elin',
'Elisabeth', 'Ella', 'Ellen', 'Elna', 'Else', 'Elsebeth', 'Emilie',
'Emily', 'Emma', 'Erna', 'Esmarelda', 'Ester', 'Filippa', 'Frederikke',
'Freja', 'Frida', 'Gerda', 'Gertrud', 'Gitte', 'Grete', 'Grethe',
'Gundhild', 'Gunhild', 'Gurli', 'Gyda', 'Hannah', 'Hanne', 'Heidi',
'Helen', 'Helle', 'Henriette', 'Herdis', 'Iben', 'Ida', 'Inga',
'Inge', 'Ingelise', 'Inger', 'Ingrid', 'Irma', 'Isabella', 'Jacobine',
'Jacqueline', 'Janne', 'Janni', 'Jannie', 'Jasmin', 'Jean', 'Jenny',
        'Joan', 'Johanne', 'Jonna', 'Josefine', 'Josephine', 'Julie', 'Justina',
'Jytte', 'Karen', 'Karin', 'Karina', 'Karla', 'Karoline', 'Katcha',
'Katja', 'Katrine', 'Kirsten', 'Kirstin', 'Kirstine', 'Klara', 'Kristina',
'Kristine', 'Laura', 'Lea', 'Lena', 'Lene', 'Leonora', 'Line',
'Liva', 'Lona', 'Lone', 'Lotte', 'Louise', 'Lærke', 'Maiken',
'Maja', 'Majken', 'Malene', 'Malou', 'Maren', 'Margit', 'Margrethe',
'Maria', 'Marianne', 'Marie', 'Marlene', 'Mathilde', 'Maya', 'Merete',
        'Merethe', 'Mette', 'Mia', 'Michala', 'Michelle', 'Mie', 'Mille',
'Mimi', 'Minna', 'Nadia', 'Naja', 'Nana', 'Nanna', 'Nanni',
'Natasha', 'Natasja', 'Nete', 'Nicoline', 'Nina', 'Nora', 'Oda',
'Odeline', 'Odette', 'Ofelia', 'Olga', 'Olivia', 'Patricia', 'Paula',
'Paulina', 'Pernille', 'Pia', 'Ragna', 'Ragnhild', 'Randi', 'Rebecca',
'Regitse', 'Regitze', 'Rikke', 'Rita', 'Ritt', 'Ronja', 'Rosa',
'Ruth', 'Sabine', 'Sandra', 'Sanne', 'Sara', 'Sarah', 'Selma',
'Signe', 'Sigrid', 'Silje', 'Sille', 'Simone', 'Sine', 'Sofia',
'Sofie', 'Solveig', 'Solvej', 'Sonja', 'Sophie', 'Stina', 'Stine',
'Susanne', 'Sussanne', 'Sussie', 'Sys', 'Sørine', 'Søs', 'Tammy',
'Tanja', 'Thea', 'Tilde', 'Tina', 'Tine', 'Tove', 'Trine',
'Ulla', 'Ulrike', 'Ursula', 'Vera', 'Victoria', 'Viola', 'Vivian',
'Weena', 'Winni', 'Winnie', 'Xenia', 'Yasmin', 'Yda', 'Yrsa',
'Yvonne', 'Zahra', 'Zara', 'Zehnia', 'Zelma', 'Zenia', 'Åse',
)
last_names = (
'Jensen', 'Nielsen', 'Hansen', 'Pedersen', 'Andersen', 'Christensen', 'Larsen',
'Sørensen', 'Rasmussen', 'Petersen', 'Jørgensen', 'Madsen', 'Kristensen', 'Olsen',
'Christiansen', 'Thomsen', 'Poulsen', 'Johansen', 'Knudsen', 'Mortensen', 'Møller',
'Jacobsen', 'Jakobsen', 'Olesen', 'Frederiksen', 'Mikkelsen', 'Henriksen', 'Laursen',
        'Lund', 'Schmidt', 'Eriksen', 'Holm', 'Kristiansen', 'Clausen', 'Simonsen',
'Svendsen', 'Andreasen', 'Iversen', 'Jeppesen', 'Mogensen', 'Jespersen', 'Nissen',
'Lauridsen', 'Frandsen', 'Østergaard', 'Jepsen', 'Kjær', 'Carlsen', 'Vestergaard',
'Jessen', 'Nørgaard', 'Dahl', 'Christoffersen', 'Skov', 'Søndergaard', 'Bertelsen',
'Bruun', 'Lassen', 'Bach', 'Gregersen', 'Friis', 'Johnsen', 'Steffensen',
        'Kjeldsen', 'Bech', 'Krogh', 'Lauritsen', 'Danielsen', 'Mathiesen', 'Andresen',
'Brandt', 'Winther', 'Toft', 'Ravn', 'Mathiasen', 'Dam', 'Holst',
'Nilsson', 'Lind', 'Berg', 'Schou', 'Overgaard', 'Kristoffersen', 'Schultz',
'Klausen', 'Karlsen', 'Paulsen', 'Hermansen', 'Thorsen', 'Koch', 'Thygesen',
)
prefixes_male = (
'Hr', 'Dr.', 'Prof.', 'Univ.Prof.'
)
prefixes_female = (
'Fru', 'Dr.', 'Prof.', 'Univ.Prof.'
)
@classmethod
def first_name(cls):
return cls.random_element((cls.first_name_male(), cls.first_name_female()))
@classmethod
def first_name_male(cls):
return cls.random_element(cls.first_names_male)
@classmethod
def first_name_female(cls):
return cls.random_element(cls.first_names_female)
@classmethod
def prefix(cls):
return cls.random_element((cls.prefix_male(), cls.prefix_female()))
@classmethod
def prefix_male(cls):
return cls.random_element(cls.prefixes_male)
@classmethod
def prefix_female(cls):
return cls.random_element(cls.prefixes_female)
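# Usage sketch via the faker API (names are drawn at random, so the output is
# only illustrative; very old faker releases expose Factory.create('dk_DK')
# instead of the Faker constructor):
#   from faker import Faker
#   fake = Faker('dk_DK')
#   fake.name()   # e.g. 'Frederik Jensen'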
|
DonHilborn/DataGenerator
|
faker/providers/dk_DK/person.py
|
Python
|
mit
| 8,224
|
[
"Brian"
] |
19adb785593918430be68138c043be775917121c02987916b6cd82bf736b8a2f
|
import numpy as np
from sklearn.neighbors import NearestNeighbors
from multiprocessing import Pool
from contextlib import closing
from itertools import repeat
from scipy import sparse as sp
import subprocess
import time
import re
import os
import sys
from .bruteforce_nn import knnsearch
def find_neighbors(data, k=30, metric='minkowski', p=2, method='brute', n_jobs=-1):
"""
Wraps sklearn.neighbors.NearestNeighbors
Find k nearest neighbors of every point in data and delete self-distances
:param data: n-by-d data matrix
:param k: number for nearest neighbors search
:param metric: string naming distance metric used to define neighbors
:param p: if metric == "minkowski", p=2 --> euclidean, p=1 --> manhattan; otherwise ignored.
:param method: 'brute' or 'kdtree'
:param n_jobs:
:return d: n-by-k matrix of distances
:return idx: n-by-k matrix of neighbor indices
"""
if metric.lower() == "euclidean":
metric = "minkowski"
p = 2
if metric.lower() == "manhattan":
metric = "minkowski"
p = 1
if metric.lower() == "minkowski":
algorithm = "auto"
elif metric.lower() == "cosine" or metric.lower() == "correlation":
algorithm = "brute"
else:
algorithm = "auto"
print("Finding {} nearest neighbors using {} metric and '{}' algorithm".format(k, metric, algorithm),
flush=True)
if method == 'kdtree':
nbrs = NearestNeighbors(n_neighbors=k+1, # k+1 because results include self
n_jobs=n_jobs, # use multiple cores if possible
metric=metric, # primary metric
p=p, # if metric == "minkowski", 2 --> euclidean, 1 --> manhattan
algorithm=algorithm # kd_tree is fastest for minkowski metrics
).fit(data)
d, idx = nbrs.kneighbors(data)
elif method == 'brute':
d, idx = knnsearch(data, k+1, metric)
else:
raise ValueError("Invalid argument to `method` parameters: {}".format(method))
# Remove self-distances if these are in fact included
if idx[0, 0] == 0:
idx = np.delete(idx, 0, axis=1)
d = np.delete(d, 0, axis=1)
else: # Otherwise delete the _last_ column of d and idx
idx = np.delete(idx, -1, axis=1)
d = np.delete(d, -1, axis=1)
return d, idx
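# Usage sketch (random data; 'kdtree' delegates to sklearn, 'brute' to the
# bundled knnsearch):
#   import numpy as np
#   X = np.random.randn(500, 10)
#   d, idx = find_neighbors(X, k=15, metric='euclidean', method='kdtree')
#   # d.shape == idx.shape == (500, 15), with self-distances removed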
def neighbor_graph(kernel, kernelargs):
"""
Apply kernel (i.e. affinity function) to kernelargs (containing information about the data)
and return graph as a sparse COO matrix
:param kernel: affinity function
:param kernelargs: dictionary of keyword arguments for kernel
:return graph: n-by-n COO sparse matrix
"""
i, j, s = kernel(**kernelargs)
n, k = kernelargs['idx'].shape
graph = sp.coo_matrix((s, (i, j)), shape=(n, n))
return graph
def gaussian_kernel(idx, d, sigma):
"""
For truncated list of k-nearest distances, apply Gaussian kernel
Assume distances in d are Euclidean
    :param idx: n-by-k matrix of neighbor indices
    :param d: n-by-k matrix of distances to those neighbors
    :param sigma: bandwidth of the Gaussian kernel
    :return (i, j, p): row indices, column indices, and affinities for a sparse COO matrix
"""
n, k = idx.shape
i = [np.tile(x, (k,)) for x in range(n)]
i = np.concatenate(np.array(i))
j = np.concatenate(idx)
d = np.concatenate(d)
f = np.vectorize(lambda x: 1/(sigma * (2 * np.pi) ** .5) * np.exp(-.5 * (x / sigma) ** 2))
# apply vectorized gaussian function
p = f(d)
return i, j, p
def jaccard_kernel(idx):
"""
Compute Jaccard coefficient between nearest-neighbor sets
:param idx: numpy array of nearest-neighbor indices
:return (i, j, s): tuple of indices and jaccard coefficients, suitable for constructing COO matrix
"""
n, k = idx.shape
s = list()
for i in range(n):
shared_neighbors = np.fromiter((len(set(idx[i]).intersection(set(idx[j]))) for j in idx[i]), dtype=float)
s.extend(shared_neighbors / (2 * k - shared_neighbors))
i = np.concatenate(np.array([np.tile(x, (k, )) for x in range(n)]))
j = np.concatenate(idx)
return i, j, s
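# Worked example: for k = 3 with neighbor sets A = {1, 2, 3} and B = {2, 3, 4},
# shared = 2 and |A union B| = 2*k - shared = 4, so the Jaccard coefficient is
# shared / (2*k - shared) = 2/4 = 0.5, matching the formula used above.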
def calc_jaccard(i, idx):
"""Compute the Jaccard coefficient between i and i's direct neighbors"""
coefficients = np.fromiter((len(set(idx[i]).intersection(set(idx[j]))) for j in idx[i]), dtype=float)
coefficients /= (2 * idx.shape[1] - coefficients)
return idx[i], coefficients
def parallel_jaccard_kernel(idx):
"""Compute Jaccard coefficient between nearest-neighbor sets in parallel
:param idx: n-by-k integer matrix of k-nearest neighbors
:return (i, j, s): row indices, column indices, and nonzero values for a sparse adjacency matrix
"""
n = len(idx)
with closing(Pool()) as pool:
jaccard_values = pool.starmap(calc_jaccard, zip(range(n), repeat(idx)))
graph = sp.lil_matrix((n, n), dtype=float)
for i, tup in enumerate(jaccard_values):
graph.rows[i] = tup[0]
graph.data[i] = tup[1]
i, j = graph.nonzero()
s = graph.tocoo().data
return i, j, s[s > 0]
def graph2binary(filename, graph):
"""
Write (weighted) graph to binary file filename.bin
:param filename:
:param graph:
:return None: graph is written to filename.bin
"""
tic = time.time()
# Unpack values in graph
i, j = graph.nonzero()
s = graph.data
# place i and j in single array as edge list
ij = np.hstack((i[:, np.newaxis], j[:, np.newaxis]))
# add dummy self-edges for vertices at the END of the list with no neighbors
ijmax = np.union1d(i, j).max()
n = graph.shape[0]
missing = np.arange(ijmax+1, n)
for q in missing:
ij = np.append(ij, [[q, q]], axis=0)
s = np.append(s, [0.], axis=0)
# Check data types: int32 for indices, float64 for weights
if ij.dtype != np.int32:
ij = ij.astype('int32')
if s.dtype != np.float64:
s = s.astype('float64')
# write to file (NB f.writelines is ~10x faster than np.tofile(f))
with open(filename + '.bin', 'w+b') as f:
f.writelines([e for t in zip(ij, s) for e in t])
print("Wrote graph to binary file in {} seconds".format(time.time() - tic))
def runlouvain(filename, max_runs=100, time_limit=2000, tol=1e-3):
"""
From binary graph file filename.bin, optimize modularity by running multiple random re-starts of
the Louvain C++ code.
Louvain is run repeatedly until modularity has not increased in some number (20) of runs
or if the total number of runs exceeds some larger number (max_runs) OR if a time limit (time_limit) is exceeded
:param filename: *.bin file generated by graph2binary
:param max_runs: maximum number of times to repeat Louvain before ending iterations and taking best result
:param time_limit: maximum number of seconds to repeat Louvain before ending iterations and taking best result
:param tol: precision for evaluating modularity increase
:return communities: community assignments
:return Q: modularity score corresponding to `communities`
"""
def get_modularity(msg):
# pattern = re.compile('modularity increased from -*0.\d+ to 0.\d+')
        pattern = re.compile(r'modularity increased from -*\d.\d+e*-*\d+ to \d.\d+')
matches = pattern.findall(msg.decode())
q = list()
for line in matches:
q.append(line.split(sep=" ")[-1])
return list(map(float, q))
print('Running Louvain modularity optimization', flush=True)
# Use package location to find Louvain code
# lpath = os.path.abspath(resource_filename(Requirement.parse("PhenoGraph"), 'louvain'))
lpath = os.path.join(os.path.dirname(__file__), 'louvain')
try:
assert os.path.isdir(lpath)
except AssertionError:
print("Could not find Louvain code, tried: {}".format(lpath), flush=True)
# Determine if we're using Windows, Mac, or Linux
if sys.platform == "win32" or sys.platform == "cygwin":
convert_binary = "convert.exe"
community_binary = "community.exe"
hierarchy_binary = "hierarchy.exe"
elif sys.platform.startswith("linux"):
convert_binary = "linux-convert"
community_binary = "linux-community"
hierarchy_binary = "linux-hierarchy"
elif sys.platform == "darwin":
convert_binary = "convert"
community_binary = "community"
hierarchy_binary = "hierarchy"
else:
raise RuntimeError("Operating system could not be determined or is not supported. "
"sys.platform == {}".format(sys.platform), flush=True)
# Prepend appropriate path separator
convert_binary = os.path.sep + convert_binary
community_binary = os.path.sep + community_binary
hierarchy_binary = os.path.sep + hierarchy_binary
tic = time.time()
# run convert
args = [lpath + convert_binary, '-i', filename + '.bin', '-o',
filename + '_graph.bin', '-w', filename + '_graph.weights']
p = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
# check for errors from convert
if bool(out) or bool(err):
print("stdout from convert: {}".format(out.decode()))
print("stderr from convert: {}".format(err.decode()))
Q = 0
run = 0
updated = 0
while run - updated < 20 and run < max_runs and (time.time() - tic) < time_limit:
# run community
fout = open(filename + '.tree', 'w')
args = [lpath + community_binary, filename + '_graph.bin', '-l', '-1', '-v', '-w', filename + '_graph.weights']
p = subprocess.Popen(args, stdout=fout, stderr=subprocess.PIPE)
# Here, we print communities to filename.tree and retain the modularity scores reported piped to stderr
_, msg = p.communicate()
fout.close()
# get modularity from err msg
q = get_modularity(msg)
run += 1
# continue only if we've reached a higher modularity than before
        if q and q[-1] - Q > tol:
Q = q[-1]
updated = run
# run hierarchy
args = [lpath + hierarchy_binary, filename + '.tree']
p = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
# find number of levels in hierarchy and number of nodes in graph
    nlevels = int(re.findall(r'\d+', out.decode())[0])
    nnodes = int(re.findall(r'level 0: \d+', out.decode())[0].split(sep=" ")[-1])
# get community assignments at each level in hierarchy
hierarchy = np.empty((nnodes, nlevels), dtype='int')
for level in range(nlevels):
args = [lpath + hierarchy_binary, filename + '.tree', '-l', str(level)]
p = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
h = np.empty((nnodes,))
for i, line in enumerate(out.decode().splitlines()):
h[i] = int(line.split(sep=' ')[-1])
hierarchy[:, level] = h
communities = hierarchy[:, nlevels-1]
print("After {} runs, maximum modularity is Q = {}".format(run, Q), flush=True)
print("Louvain completed {} runs in {} seconds".format(run, time.time() - tic), flush=True)
return communities, Q
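if __name__ == "__main__":
    # Hedged demo (assumes 'mygraph.bin' has already been written by graph2binary
    # above and that the bundled Louvain binaries are available):
    communities, Q = runlouvain('mygraph', max_runs=50, time_limit=600)
    print("Found {} communities, Q = {:.3f}".format(len(set(communities)), Q))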
|
jacoblevine/PhenoGraph
|
phenograph/core.py
|
Python
|
mit
| 11,451
|
[
"Gaussian"
] |
7121a3d27298d819086214afa943ba8356ac4d592d23aad4af4e35dd9f9d474e
|
# -*- coding: utf-8 -*-
import matplotlib
import matplotlib.pyplot as plt
import datetime as dt
import numpy as np
import netCDF4
import pandas as pd
import warnings
from pathlib import Path
from aps.load_region import load_region
from aps.analysis import describe
matplotlib.style.use('seaborn-notebook')
plt.rcParams['figure.figsize'] = (14, 6)
warnings.filterwarnings("ignore")
# Check versions
print('Numpy version: ', np.__version__)
print('Matplotlib version: ', matplotlib.__version__)
print('Today: ', dt.date.today())
# Select region and date
region_id = 3035
date_range = pd.date_range(start="2020-12-01", end="2021-05-31")
# Load region mask - only for data on 1km xgeo-grid
# !!!Important to ensure correct overlay between data- and region-raster
region_mask, y_min, y_max, x_min, x_max = load_region(region_id)
# Set path
nc_dir = Path(r"\\DM-CIFS-01\grid5\metdata\prognosis\meps\det\archive")
tmp_folder = Path(r'./tmp')
csv_file = tmp_folder / "new_snow_line_{0}_{1}_{2}.csv".format(region_id, date_range[0].strftime("%Y%m%d"), date_range[-1].strftime("%Y%m%d"))
new_snow_line = {"Date": [], "Hour": [], "Altitude": [], "Altitude_std": []}
# Load netcdf files for given date range
for d in date_range:
nc_date = dt.datetime(year=d.year, month=d.month, day=d.day, hour=6) # Use the 6 o'clock file
nc_datestring = nc_date.strftime("%Y%m%dT%HZ")
nc_file = "meps_det_1km_{0}.nc".format(nc_datestring)
nc_path = nc_dir / str(nc_date.year) / nc_file
# Load data
try:
nc_data = netCDF4.Dataset(nc_path, "r")
# Choose a time-step
t_index = 18 # 00-next day
except FileNotFoundError:
# if 6 o'clock is not available try the 9 o'clock
try:
nc_date = dt.datetime(year=d.year, month=d.month, day=d.day, hour=9)
nc_datestring = nc_date.strftime("%Y%m%dT%HZ")
nc_file = "meps_det_1km_{0}.nc".format(nc_datestring)
nc_path = nc_dir / str(nc_date.year) / nc_file
nc_data = netCDF4.Dataset(nc_path, "r")
# Choose a time-step
t_index = 15 # 00-next day
print("Using 9 o'clock run.")
except FileNotFoundError:
print("No data available for {0}".format(nc_date))
continue
time_v = nc_data.variables['time']
x_dim = nc_data.dimensions['x'].size
y_dim = nc_data.dimensions['y'].size
ts = netCDF4.num2date(time_v[t_index], time_v.units)
print("\n", ts)
# precip
precip_clip = region_mask * np.flipud(
nc_data.variables['precipitation_amount_acc'][t_index, (y_dim - y_max):(y_dim - y_min), x_min:x_max])
print(describe(precip_clip))
# Altitude data
wetb_clip = region_mask * np.flipud(nc_data.variables['altitude_of_isoTprimW_equal_0'][t_index, (y_dim-y_max):(y_dim-y_min), x_min:x_max])
print(describe(wetb_clip))
plt.imshow(wetb_clip, vmin=-500, vmax=2500)
plt.colorbar()
plt.title(ts)
plt.tight_layout()
plt.savefig(tmp_folder / "wet_bulb_alt_{0}_{1}.png".format(region_id, nc_datestring))
plt.clf()
# Loop over six-hour chunks of data
for k, f in enumerate(range(t_index, t_index+24, 6), start=1):
t = []
sl = []
sd = []
        # use a dedicated index name so the date 'd' from the outer loop is not shadowed
        for i, ti in enumerate(range(f, f + 6), start=1):
            wetb_clip = region_mask * np.flipud(
                nc_data.variables['altitude_of_isoTprimW_equal_0'][ti, (y_dim - y_max):(y_dim - y_min), x_min:x_max])
            _t = netCDF4.num2date(time_v[ti], time_v.units)
            _sl = np.nanmean(wetb_clip)  # alternative: np.nanpercentile(wetb_clip, 90)
            _sd = np.nanstd(wetb_clip)
            # print("\t", i, f, ti, _t, _sl, np.nanmean(wetb_clip))
t.append(_t)
sl.append(_sl)
sd.append(_sd)
sl = np.array(sl)
sd = np.array(sd)
print(_t, np.mean(sl), np.max(sl))
_i = np.argmax(sl)
new_snow_line["Date"].append(t[_i].strftime("%Y-%m-%d"))
new_snow_line["Hour"].append(t[_i].hour)
new_snow_line["Altitude"].append(sl[_i])
new_snow_line["Altitude_std"].append(sd[_i])
df = pd.DataFrame(new_snow_line)
df.to_csv(csv_file, sep=";", index=False)
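# A minimal sketch (an assumption, not part of this script; the helper name is
# hypothetical) of the clip-and-flip pattern used in the loop above: each 2D
# slice from the netCDF grid is flipped vertically so it overlays the region
# raster correctly before the mask is applied.
def clip_to_region(field_slice, mask):
    """field_slice = var[t, (y_dim - y_max):(y_dim - y_min), x_min:x_max]"""
    return mask * np.flipud(field_slice)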
#
# hour_range = [0, 24]
# date_str = '20190405T00Z'
# var_wet = 'altitude_of_isoTprimW_equal_0'
# var_dry = 'altitude_of_0_degree_isotherm'
# var_rr = 'precipitation_amount'
#
# met_folder = Path(r'\\nve.no\fil\grid\metdata\prognosis\meps\det\archive\2019')
#
# nc_file1 = met_folder / 'meps_det_extracted_1km_{0}.nc'.format(date_str)
# times, altitude, land_area_fraction, nc_vars = ga.nc_load(nc_file1, [var_dry, var_wet], time_period=hour_range)
#
# print("{2} hours: from {0} to {1}".format(times[0], times[-1], len(times)))
#
# # ## Calculating the freezing level
# #
# # We use the parameters "altitude_of_0_degree_isotherm" and "altitude_of_isoTprimW_equal_0" from [MEPS_extracted](http://thredds.met.no/thredds/catalog/meps25files/catalog.html).
# # Under dry conditions we use altitude_of_0_degree_isotherm and for timing we use the period with the highest values. With precipitation we use altitude_of_isoTprimW_equal_0 and the period with the highest amount of precipitation.
# #
# # - split data into four chunks: 0-6, 6-12, 12-18, 18-24
# # - compress time dimension to 1 by keeping only the maximum value in each cell for each chunk
# # - calculate the 90-percentile for all max-values within a region
# # - round 90-percentile for each region to the next 50 m
#
# def get_periods_with_precip():
# """
# Check which parts of the day exceed a precipitation of 2 mm / 6h.
# :return: a list containing one or more of the periods [0-6, 6-12, 12-18, 18-24]
# """
# pass
#
#
# precip_high = True
#
# if precip_high:
# var_nsl = var_wet
# calc_period = get_periods_with_precip() # choose between 0-6, 6-12, 12-18, 18-24
# else:
# var_nsl = var_dry
#
# # ### Compress time dimension
#
# nsl_max_00_06 = np.amax(nc_vars[var_nsl][0:6,:,:], axis=0)
# nsl_max_06_12 = np.amax(nc_vars[var_nsl][6:12,:,:], axis=0)
# nsl_max_12_18 = np.amax(nc_vars[var_nsl][12:18,:,:], axis=0)
# nsl_max_18_24 = np.amax(nc_vars[var_nsl][18:24,:,:], axis=0)
# nsl_max = np.amax(nc_vars[var_nsl][0:24,:,:], axis=0)
#
# nsl_list = [nsl_max_00_06, nsl_max_06_12, nsl_max_12_18,nsl_max_18_24, nsl_max]
# # ### Extract regions
#
# # Load region mask - only for data on 1km xgeo-grid
#
# region_mask, y_min, y_max, x_min, x_max = load_region(region_id)
# print(y_max-y_min, x_max-x_min)
#
# print(np.unique(region_mask))
# plt.imshow(region_mask)
# plt.savefig(tmp_folder / 'region_mask_nsl.png')
# plt.clf()
#
# # ### Clip to region
#
# ## TODO: Do this for each 6h-period and make distplots.
# t_index = 0
# nsl_region = clip_region(np.flipud(nsl_max), region_mask, t_index, y_min, y_max, x_min, x_max)
# print(np.count_nonzero(np.isnan(nsl_region)))
# print(np.unique(nsl_region))
# plt.imshow(nsl_region)
# plt.colorbar()
# plt.savefig(tmp_folder / 'nsl_region.png')
# plt.clf()
#
# nsl_region_mean = np.nanmean(nsl_region.flatten())
# print("Mean\t: ", nsl_region_mean)
# for p in [0, 5, 25, 50, 75, 80, 85, 90, 95, 100]:
# print(p, "\t: ", np.nanpercentile(nsl_region.flatten(), p))
#
# nsl_region_flat = nsl_region[~np.isnan(nsl_region)].data.flatten()
# sns.distplot(nsl_region_flat)
# plt.savefig(tmp_folder / 'nsl_dist.png')
# plt.clf()
#
# # ## Determine cells with precipitation
#
# nc_file2 = r"\\hdata\grid\metdata\prognosis\meps\det\archive\2019\meps_det_pp_1km_{0}.nc".format(date_str)
# times, altitude, land_area_fraction, nc_vars2 = ga.nc_load(nc_file2, [var_rr], time_period=hour_range)
#
# precip_sum = np.sum(nc_vars2[var_rr][0:24, :, :], axis=0)
#
# t_index = 0
# precip_sum_region = clip_region(np.flipud(precip_sum), region_mask, t_index, y_min, y_max, x_min, x_max)
#
# print(np.count_nonzero(np.isnan(precip_sum_region)))
# print(np.unique(precip_sum_region))
# plt.imshow(precip_sum_region)
# plt.colorbar()
# plt.savefig(tmp_folder / 'precip_sum_region.png')
# plt.clf()
#
# # Mask where the precipitation during the day exceeds a given value.
# psr_mask = np.where(precip_sum_region > 1., 1, np.nan)
#
# plt.imshow(psr_mask)
# plt.savefig(tmp_folder / 'psr_mask.png')
# plt.clf()
#
# nsl_region_wet = nsl_region * psr_mask
# nsl_region_wet_mean = np.nanmean(nsl_region_wet.flatten())
# print("Mean\t: ", nsl_region_wet_mean)
# for p in [0, 5, 25, 50, 75, 80, 85, 90, 95, 100]:
# print(p, "\t: ", np.nanpercentile(nsl_region_wet.flatten(), p))
#
# nsl_region_wet_flat = nsl_region_wet[~np.isnan(nsl_region_wet)].data.flatten()
# sns.distplot(nsl_region_wet_flat)
# plt.savefig(tmp_folder / 'nsl_wet_dist.png')
# plt.clf()
#
# print('####################\n# Mean (all): {0:.2} #\n# Mean (wet): {1:.2} #\n####################'.format(nsl_region_mean, nsl_region_wet_mean))
|
kmunve/APS
|
aps/scripts/new_snow_line.py
|
Python
|
mit
| 8,836
|
[
"NetCDF"
] |
9fbe8e0f9ac753ea3da74452ccac0aa224a5f2446b6950763b3e4c00030ba940
|
# -*- coding: utf-8 -*-
# Copyright (C) Brian Moe (2013-2014), Duncan Macleod (2014-)
#
# This file is part of LIGO CIS Core.
#
# LIGO CIS Core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# LIGO CIS Core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with LIGO CIS Core. If not, see <http://www.gnu.org/licenses/>.
"""Management commands for the `cisserver` django app
"""
from .. import version
__version__ = version.version
__author__ = 'Brian Moe, Duncan Macleod <duncan.macleod@ligo.org>'
__credits__ = 'The LIGO Scientific Collaboration, The LIGO Laboratory'
|
lscsoft/cis.server
|
cisserver/management/__init__.py
|
Python
|
gpl-3.0
| 1,017
|
[
"Brian",
"MOE"
] |
4a69138ba2605d2d830a25fb85d5fb2e8a5b73d30c6a9b61b8f74a293f4d0134
|
#pylint: disable=missing-docstring
####################################################################################################
# DO NOT MODIFY THIS HEADER #
# MOOSE - Multiphysics Object Oriented Simulation Environment #
# #
# (c) 2010 Battelle Energy Alliance, LLC #
# ALL RIGHTS RESERVED #
# #
# Prepared by Battelle Energy Alliance, LLC #
# Under Contract No. DE-AC07-05ID14517 #
# With the U. S. Department of Energy #
# #
# See COPYRIGHT for full restrictions #
####################################################################################################
#pylint: enable=missing-docstring
import os
import logging
import collections
import markdown
import anytree
import mooseutils
import MooseDocs
from . import common
LOG = logging.getLogger(__name__)
class MooseMarkdown(markdown.Markdown):
"""
A custom Markdown object for handling raw text, markdown files, or MarkdownNode objects.
The key to this class is allowing the Markdown object to work with MarkdownNode objects such
that the extension objects, namely MooseTemplate, could have access to the node object to allow
for searching the tree for other pages. This should allow for cross page figure, equation, and
table links to be created.
Args:
        config[dict]: The configuration dict; if not provided, getDefaultExtensions is used.
        default[bool]: If True (the default), the extensions returned by getDefaultExtensions
                       form the base configuration that the supplied 'config' is applied to.
"""
    EQUATION_COUNT = 0 # counter for numbered equations
CACHE = dict() # Cache for find method
@staticmethod
def getDefaultExtensions():
"""
Return an OrderedDict that defines the default configuration for extensions.
It returns an OrderedDict such that the dict() can serve to populate the list of extensions
(i.e., the keys) in the desired instantiate order as well as the configuration settings.
If no settings are needed for the entry simply set the entry to contain an empty dict()
"""
ext = collections.OrderedDict() # used to maintain insert order
# http://pythonhosted.org/Markdown/extensions/index.html
ext['toc'] = dict()
ext['smarty'] = dict()
ext['admonition'] = dict()
ext['extra'] = dict()
ext['meta'] = dict()
# MooseDocs
ext['MooseDocs.extensions.global'] = dict()
ext['MooseDocs.extensions.include'] = dict()
ext['MooseDocs.extensions.bibtex'] = dict()
ext['MooseDocs.extensions.css'] = dict()
ext['MooseDocs.extensions.devel'] = dict()
ext['MooseDocs.extensions.misc'] = dict()
ext['MooseDocs.extensions.media'] = dict()
ext['MooseDocs.extensions.tables'] = dict()
ext['MooseDocs.extensions.listings'] = dict()
ext['MooseDocs.extensions.refs'] = dict()
ext['MooseDocs.extensions.app_syntax'] = dict()
ext['MooseDocs.extensions.template'] = dict()
ext['MooseDocs.extensions.gchart'] = dict()
ext['MooseDocs.extensions.admonition'] = dict()
ext['MooseDocs.extensions.katex'] = dict()
return ext
def __init__(self, config=None, default=True):
# Create the default configuration
ext_config = self.getDefaultExtensions() if default else collections.OrderedDict()
# Apply the supplied configuration
if config is not None:
ext_config.update(config)
# Define storage for the current MarkdownNode
self.current = None
super(MooseMarkdown, self).__init__(extensions=ext_config.keys(),
extension_configs=ext_config)
def requireExtension(self, required):
"""
        Raise an exception if the supplied extension type is not registered.
"""
if not self.getExtension(required):
raise mooseutils.MooseException("The {} extension is required." \
.format(required.__name__))
def getExtension(self, etype):
"""
Return an extension instance.
Args:
etype[type]: The type of the extension to return.
"""
out = None
for ext in self.registeredExtensions:
if isinstance(ext, etype):
out = ext
break
return out
def convert(self, md):
"""
Convert the raw text, markdown file, or node to html.
Args:
md[str]: A markdown file, markdown content, or MarkdownNode
"""
self.EQUATION_COUNT = 0 #pylint: disable=invalid-name
self.current = None
if isinstance(md, str):
self.current = common.nodes.MarkdownNode('', content=md)
elif isinstance(md, common.nodes.MarkdownFileNodeBase):
self.current = md
LOG.debug('Converting %s to html.', self.current.filename) #pylint: disable=no-member
else:
raise mooseutils.MooseException("The supplied content must be a markdown str or a "
"MarkdownFileNodeBase object.")
return super(MooseMarkdown, self).convert(self.current.content)
def getFilename(self, desired, check_local=True):
"""
Locate nodes with a filename ending with provided string.
"""
if check_local:
local = os.path.join(MooseDocs.ROOT_DIR, desired)
if os.path.exists(local):
return local, None
nodes = self.find(self.current.root, desired)
if len(nodes) > 1:
msg = "Multiple filenames matching '{}' found:".format(desired)
for n in nodes:
msg += '\n {}'.format(n.filename)
LOG.error(msg)
if nodes:
return nodes[0].filename, nodes[0]
return None, None
@staticmethod
def find(node, desired):
"""
Find method for filenames (mainly for testing).
"""
if desired in MooseMarkdown.CACHE:
return MooseMarkdown.CACHE[desired]
        types = (common.nodes.FileNodeBase,)
filter_ = lambda n: isinstance(n, types) and n.filename.endswith(desired)
nodes = [n for n in anytree.iterators.PreOrderIter(node, filter_=filter_)]
MooseMarkdown.CACHE[desired] = nodes
return nodes
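if __name__ == '__main__':
    # Hedged demo: convert a raw markdown string using the default MooseDocs
    # extension stack (assumes the extensions listed above are importable).
    md = MooseMarkdown()
    print(md.convert('# Heading\n\nSome *markdown* content.'))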
|
yipenggao/moose
|
python/MooseDocs/MooseMarkdown.py
|
Python
|
lgpl-2.1
| 7,174
|
[
"MOOSE"
] |
bd252f834d3186d67709541f74ac6ea4062fbd988b6194e860abd5d6eeab51b6
|
#!/usr/bin/env python
import os
import sys
import glob
import re
import csv
import itertools
from collections import OrderedDict, defaultdict
import time
import yaml
import subprocess
import copy
import tempfile
import argparse
import bcbio.solexa.samplesheet
import bcbio.utils
from bcbio.pipeline.config_loader import load_config
from scilifelab.db.statusdb import ProjectSummaryConnection
from scilifelab.bcbio.qc import FlowcellRunMetricsParser
from scilifelab.log import minimal_logger
LOG = minimal_logger(__name__)
# The directory where CASAVA has written the demuxed output
CASAVA_OUTPUT_DIR = "Demultiplexing"
# The analysis script for running the pipeline in parallel mode (on one node)
PARALLELL_ANALYSIS_SCRIPT="automated_initial_analysis.py"
# The analysis script for running the pipeline in distributed mode (across multiple nodes/cores)
DISTRIBUTED_ANALYSIS_SCRIPT="distributed_nextgen_pipeline.py"
# For non-CASAVA analysis, this script is used to sanitize the run_info.yaml configuration file
PROCESS_YAML_SCRIPT = "process_run_info.py"
# If True, will sanitize the run_info.yaml configuration file when running non-CASAVA analysis
PROCESS_YAML = True
# If True, will assign the distributed master process and workers to a separate RabbitMQ queue for each flowcell
FC_SPECIFIC_AMPQ = True
# Number of attempts to upload the report to gdocs
REPORT_RETRIES = 10
def main(post_process_config_file, fc_dir, run_info_file=None, only_run=False, only_setup=False, ignore_casava=False, process_project=[], process_sample=[]):
run_arguments = [[os.getcwd(),post_process_config_file,fc_dir,run_info_file]]
if has_casava_output(fc_dir) and not ignore_casava:
if not only_run:
run_arguments = setup_analysis_directory_structure(post_process_config_file, fc_dir, run_info_file, process_project, process_sample)
else:
if not only_run:
run_arguments = setup_analysis(post_process_config_file, fc_dir, run_info_file)
if not only_setup:
for arguments in run_arguments:
run_analysis(*arguments)
def run_analysis(work_dir, post_process, fc_dir, run_info):
"""Changes into the supplied work_dir directory and submits
the job using the supplied arguments and with slurm parameters
obtained from the post_process.yaml configuration
"""
# Move to the working directory
start_dir = os.getcwd()
os.chdir(work_dir)
config = load_config(post_process)
if str(config["algorithm"]["num_cores"]) == "messaging":
analysis_script = DISTRIBUTED_ANALYSIS_SCRIPT
else:
analysis_script = PARALLELL_ANALYSIS_SCRIPT
# Launches the pipeline using PM module
project_to_run, sample_to_run, flowcell_to_run = fc_dir.split('/')[-3:]
cmd = ["pm",
"production",
"run",
project_to_run,
"--sample",
sample_to_run,
"--flowcell",
flowcell_to_run,
"--drmaa",
"--force"]
subprocess.check_call(cmd)
# Change back to the starting directory
os.chdir(start_dir)
def setup_analysis(post_process_config, archive_dir, run_info_file):
"""Does a non-casava pre-analysis setup and returns a list of arguments
that can be passed to the run_analysis function in order to start the
analysis.
"""
    # Set the barcode type in run_info.yaml to "illumina", strip the 7th nucleotide and set analysis to 'Align_illumina'
if run_info_file is not None and PROCESS_YAML:
LOG.info("---------\nProcessing run_info:")
run_info_backup = "%s.orig" % run_info_file
os.rename(run_info_file,run_info_backup)
cl = ["%s" % PROCESS_YAML_SCRIPT,run_info_backup,"--analysis","Align_illumina","--out_file",run_info_file,"--ascii","--clear_description"]
LOG.info(subprocess.check_output(cl))
LOG.info("\n---------\n")
# Check that the specified paths exist
LOG.info("Checking input paths")
for path in (post_process_config,archive_dir,run_info_file):
if path is not None and not os.path.exists(path):
raise Exception("The path %s does not exist" % path)
LOG.info("Getting base_dir from %s" % post_process_config)
# Parse the config to get the analysis directory
with open(post_process_config) as ppc:
config = yaml.load(ppc)
analysis = config.get("analysis",{})
base_dir = analysis["base_dir"]
LOG.info("Getting run name from %s" % archive_dir)
# Get the run name from the archive dir
_,run_name = os.path.split(os.path.normpath(archive_dir))
# Create the working directory if necessary and change into it
work_dir = os.path.join(base_dir,run_name)
os.chdir(base_dir)
LOG.info("Creating/changing to %s" % work_dir)
try:
os.mkdir(run_name,0770)
except OSError:
pass
os.chdir(run_name)
# make sure that the work dir exists
if not os.path.exists(work_dir):
raise Exception("The path %s does not exist and was not created" % work_dir)
    # if required, parse the machine id and flowcell position and use an AMQP vhost specific to them
if FC_SPECIFIC_AMPQ:
machine_id = None
flowcell_position = None
for p in run_name.upper().split("_"):
if p.startswith("SN"):
machine_id = p
elif p[0] in ("A","B") and p.endswith("XX"):
flowcell_position = p[0]
assert machine_id and flowcell_position, "Machine id and flowcell position could not be parsed from run name '%s'" % run_name
# write a dedicated post_process.yaml for the ampq queue
if config.get("distributed",False):
config["distributed"]["rabbitmq_vhost"] = "bionextgen-%s-%s" % (machine_id,flowcell_position)
post_process_config_orig = post_process_config
parts = os.path.splitext(post_process_config)
post_process_config = "%s-%s-%s%s" % (parts[0],machine_id,flowcell_position,parts[1])
with open(post_process_config,"w") as fh:
fh.write(yaml.safe_dump(config, default_flow_style=False, allow_unicode=True, width=1000))
return [[os.getcwd(),post_process_config,archive_dir,run_info_file]]
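# Illustrative example (hypothetical run name, not executed) of the parsing above:
# "120725_SN1025_0127_BC0V4TACXX".upper().split("_") contains the token "SN1025"
# (machine id) and "BC0V4TACXX", whose leading "B" gives the flowcell position,
# so the AMQP vhost would become "bionextgen-SN1025-B".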
def setup_analysis_directory_structure(post_process_config_file, fc_dir, custom_config_file, process_project=[], process_sample=[]):
"""Parse the bcl2fastq v2.17 generated flowcell directory and create a
corresponding directory structure suitable for bcbb analysis,
complete with sample-specific and project-specific configuration files.
Returns a list of arguments, both sample- and project-specific, that can
be passed to the run_analysis method for execution
"""
config = load_config(post_process_config_file)
analysis_dir = os.path.abspath(config["analysis"]["base_dir"])
assert os.path.exists(fc_dir), "ERROR: Flowcell directory %s does not exist" % fc_dir
assert os.path.exists(analysis_dir), "ERROR: Analysis top directory %s does not exist" % analysis_dir
couch_credentials = config.get('statusdb',{})
couch_credentials['url'] = couch_credentials.get('url','localhost')
couch_credentials['port'] = couch_credentials.get('port','5984')
couch_credentials['username'] = couch_credentials.get('username','anonymous')
couch_credentials['password'] = couch_credentials.get('password','password')
# A list with the arguments to each run, when running by sample
sample_run_arguments = []
# Parse the flowcell dir
fc_dir_structure = parse_casava_directory(fc_dir)
[fc_date, fc_name] = [fc_dir_structure['fc_date'],fc_dir_structure['fc_name']]
fc_run_id = "%s_%s" % (fc_date,fc_name)
    # Copy the basecall stats directory. This will cause an issue when multiple directories are present...
# syncing should be done from archive, preserving the Unaligned* structures
_copy_basecall_stats([os.path.join(fc_dir_structure['fc_dir'],d) for d in fc_dir_structure['basecall_stats_dir']], analysis_dir)
# Parse the custom_config_file
custom_config = []
if custom_config_file is not None:
with open(custom_config_file) as fh:
custom_config = yaml.load(fh)
# Iterate over the projects in the flowcell directory
for project in fc_dir_structure.get('projects',[]):
# If we only want to run the analysis for a particular project, skip if this is not it
project_name = project['project_name']
if len(process_project) > 0 and project_name not in process_project:
continue
# Create a project directory if it doesn't already exist
project_dir = os.path.join(analysis_dir,project_name)
if not os.path.exists(project_dir):
os.mkdir(project_dir,0770)
# Iterate over the samples in the project
for sample_no, sample in enumerate(project.get('samples',[])):
# If we only want to run the analysis for a particular sample, skip if this is not it
sample_name = sample['sample_name'].replace('__','.')
if len(process_sample) > 0 and sample_name not in process_sample:
continue
# Create a directory for the sample if it doesn't already exist
sample_dir = os.path.join(project_dir,sample_name)
if not os.path.exists(sample_dir):
os.mkdir(sample_dir,0770)
# Create a directory for the flowcell if it does not exist
dst_sample_dir = os.path.join(sample_dir,fc_run_id)
if not os.path.exists(dst_sample_dir):
os.mkdir(dst_sample_dir,0770)
# rsync the source files to the sample directory
src_sample_dir = os.path.join(fc_dir_structure['fc_dir'],project['data_dir'],project['project_dir'],sample['sample_dir'])
sample_files = do_rsync([os.path.join(src_sample_dir,f) for f in sample.get('files',[])],dst_sample_dir)
# Generate a sample-specific configuration yaml structure
samplesheet = os.path.join(src_sample_dir,sample['samplesheet'])
sample_config = bcbb_configuration_from_samplesheet(samplesheet, couch_credentials)
# Append the sequence files to the config
for lane in sample_config:
if 'multiplex' in lane:
                    # use 'plex' so the enclosing loop's 'sample' dict is not shadowed
                    for plex in lane['multiplex']:
                        plex['files'] = sorted([os.path.basename(f) for f in sample_files if f.find("_L00%d_" % int(lane['lane'])) >= 0])
else:
lane['files'] = sorted([os.path.basename(f) for f in sample_files if f.find("_L00%d_" % int(lane['lane'])) >= 0])
sample_config = override_with_custom_config(sample_config,custom_config)
arguments = _setup_config_files(dst_sample_dir,sample_config,post_process_config_file,fc_dir_structure['fc_dir'],sample_name,fc_date,fc_name)
sample_run_arguments.append([arguments[1],arguments[0],arguments[1],arguments[3]])
return sample_run_arguments
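# Resulting directory layout (hedged illustration; all names are hypothetical):
#
#   <base_dir>/
#       J.Doe_13_01/                        # project
#           P123_101/                       # sample
#               120725_BC0V4TACXX/          # <fc_date>_<fc_name>
#                   P123_101-bcbb-config.yaml
#                   P123_101-post_process.yaml
#                   P123_101-bcbb-command.txt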
def _copy_basecall_stats(source_dirs, destination_dir):
"""Copy relevant files from the Basecall_Stats_FCID directory
to the analysis directory
"""
for source_dir in source_dirs:
# First create the directory in the destination
dirname = os.path.join(destination_dir,os.path.basename(source_dir))
        try:
            os.mkdir(dirname)
        except OSError:
            # the directory may already exist
            pass
# List the files/directories to copy
files = glob.glob(os.path.join(source_dir,"*.htm"))
files += glob.glob(os.path.join(source_dir,"*.metrics"))
files += glob.glob(os.path.join(source_dir,"*.xml"))
files += glob.glob(os.path.join(source_dir,"*.xsl"))
for dir in ["Plots","css"]:
d = os.path.join(source_dir,dir)
if os.path.exists(d):
files += [d]
do_rsync(files,dirname)
def copy_undetermined_index_files(casava_data_dir, dst_dir):
"""Copy the fastq files with undetermined index reads to the destination directory
"""
# List of files to copy
copy_list = []
# List the directories containing the fastq files
    fastq_dir_pattern = os.path.join(casava_data_dir,"Undetermined_indices","Sample_lane*")
# Pattern matching the fastq_files
fastq_file_pattern = "Undetermined_*.fastq.gz"
# Samplesheet name
samplesheet_pattern = "SampleSheet.csv"
samplesheets = []
for dir in glob.glob(fastq_dir_pattern):
copy_list += glob.glob(os.path.join(dir,fastq_file_pattern))
samplesheet = os.path.join(dir,samplesheet_pattern)
if os.path.exists(samplesheet):
samplesheets.append(samplesheet)
# Merge the samplesheets into one
new_samplesheet = os.path.join(dst_dir,samplesheet_pattern)
new_samplesheet = _merge_samplesheets(samplesheets,new_samplesheet)
# Rsync the fastq files to the destination directory
do_rsync(copy_list,dst_dir)
def _merge_samplesheets(samplesheets, merged_samplesheet):
"""Merge several .csv samplesheets into one
"""
data = []
header = []
for samplesheet in samplesheets:
with open(samplesheet) as fh:
csvread = csv.DictReader(fh, dialect='excel')
header = csvread.fieldnames
for row in csvread:
data.append(row)
with open(merged_samplesheet,"w") as outh:
csvwrite = csv.DictWriter(outh,header)
csvwrite.writeheader()
csvwrite.writerows(sorted(data, key=lambda d: (d['Lane'],d['Index'])))
return merged_samplesheet
def override_with_custom_config(org_config, custom_config):
"""Override the default configuration from the .csv samplesheets
with a custom configuration. Will replace overlapping options
or add options that are missing from the samplesheet-generated
config.
"""
new_config = copy.deepcopy(org_config)
for item in new_config:
for custom_item in custom_config:
if item['lane'] != custom_item.get('lane',""):
continue
for key, val in custom_item.items():
if key == 'multiplex':
continue
item[key] = val
for sample in item.get('multiplex',[]):
if 'sequence' not in sample:
continue
for custom_sample in custom_item.get('multiplex',[]):
if sample['sequence'] == custom_sample.get('sequence',""):
for key, val in custom_sample.items():
sample[key] = val
break
break
return new_config
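# Hedged example (illustrative values) of a custom config entry that
# override_with_custom_config would merge into the samplesheet-derived config:
#
#   - lane: '1'
#     genome_build: hg19
#     multiplex:
#       - sequence: ACAGTG
#         analysis: Align_standard_seqcap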
def _setup_config_files(dst_dir,configs,post_process_config_file,fc_dir,sample_name="run",fc_date=None,fc_name=None):
# Setup the data structure
config_data_structure = {'details': configs}
if fc_date is not None:
config_data_structure['fc_date'] = fc_date
if fc_name is not None:
config_data_structure['fc_name'] = fc_name
# Dump the config to file
config_file = os.path.join(dst_dir,"%s-bcbb-config.yaml" % sample_name)
with open(config_file,'w') as fh:
fh.write(yaml.safe_dump(config_data_structure, default_flow_style=False, allow_unicode=True, width=1000))
# Copy post-process file
with open(post_process_config_file) as fh:
local_post_process = yaml.load(fh)
# Update galaxy config to point to the original location
local_post_process['galaxy_config'] = bcbio.utils.add_full_path(local_post_process['galaxy_config'],os.path.abspath(os.path.dirname(post_process_config_file)))
# Add job name and output paths to the cluster platform arguments
if 'distributed' in local_post_process and 'platform_args' in local_post_process['distributed']:
slurm_out = "%s-bcbb.log" % sample_name
local_post_process['distributed']['platform_args'] = "%s -J %s -o %s -D %s" % (local_post_process['distributed']['platform_args'], sample_name, slurm_out, dst_dir)
local_post_process_file = os.path.join(dst_dir,"%s-post_process.yaml" % sample_name)
with open(local_post_process_file,'w') as fh:
fh.write(yaml.safe_dump(local_post_process, default_flow_style=False, allow_unicode=True, width=1000))
# Write the command for running the pipeline with the configuration files
run_command_file = os.path.join(dst_dir,"%s-bcbb-command.txt" % sample_name)
with open(run_command_file,"w") as fh:
fh.write(" ".join([os.path.basename(__file__),"--only-run",os.path.basename(local_post_process_file), os.path.join("..",os.path.basename(dst_dir)), os.path.basename(config_file)]))
fh.write("\n")
return [os.path.basename(local_post_process_file), dst_dir, fc_dir, os.path.basename(config_file)]
def bcbb_configuration_from_samplesheet(csv_samplesheet, couch_credentials):
"""Parse an illumina csv-samplesheet and return a dictionary suitable for the bcbb-pipeline
"""
tfh, yaml_file = tempfile.mkstemp('.yaml','samplesheet')
os.close(tfh)
yaml_file = bcbio.solexa.samplesheet.csv2yaml(csv_samplesheet,yaml_file)
with open(yaml_file) as fh:
config = yaml.load(fh)
application_setup = {
'Amplicon': {'analysis': 'Align_standard'},
'ChIP-seq': {'analysis': 'Align_standard',
'genome_build': 'phix'},
'Custom capture': {'analysis': 'Align_standard_seqcap'},
'de novo': {'analysis': 'Align_standard',
'genome_build': 'unknown'},
'Exome capture': {'analysis': 'Align_standard_seqcap'},
'Finished library': {'analysis': 'Align_standard',
'genome_build': 'phix'},
'Mate-pair': {'analysis': 'Align_standard',
'genome_build': 'unknown'},
'Metagenome': {'analysis': 'Align_standard',
'genome_build': 'unknown'},
'miRNA-seq': {'analysis': 'Align_standard',
'genome_build': 'unknown'},
'RNA-seq (mRNA)': {'analysis': 'Align_standard',
'genome_build': 'phix'},
'RNA-seq (total RNA)': {'analysis': 'Align_standard',
'genome_build': 'phix'},
'stranded RNA-seq (total RNA)': {'analysis': 'Align_standard',
'genome_build': 'phix'},
'WG re-seq': {'analysis': 'Align_standard'},
'default': {'analysis': 'Align_standard'},
}
#Connect to statusdb to get project application
try:
p_con = ProjectSummaryConnection(**couch_credentials)
except:
print("Can't connect to statusdb to get application")
p_con = None
# Replace the default analysis
## TODO: This is an ugly hack, should be replaced by a custom config
for lane in config:
for plex in lane.get('multiplex',[]):
application='default'
if p_con is not None:
try:
Proj=plex.get('sample_prj','')
project = p_con.get_entry(Proj)
if project is not None:
application = project.get("application", 'default').strip()
except:
application='default'
setup = application_setup.get(application,application_setup['default'])
for key, val in setup.items():
plex[key] = val
# Remove the yaml file, we will write a new one later
os.remove(yaml_file)
return config
def do_rsync(src_files, dst_dir):
cl = ["rsync","-car"]
cl.extend(src_files)
cl.append(dst_dir)
cl = [str(i) for i in cl]
# For testing, just touch the files rather than copy them
# for f in src_files:
# open(os.path.join(dst_dir,os.path.basename(f)),"w").close()
subprocess.check_call(cl)
return [os.path.join(dst_dir,os.path.basename(f)) for f in src_files]
def parse_casava_directory(fc_dir):
"""Traverse a bcl2fastq v2.17 generated directory structure and return a dictionary
"""
projects = []
fc_dir = os.path.abspath(fc_dir)
parser = FlowcellRunMetricsParser(fc_dir)
run_info = parser.parseRunInfo()
runparams = parser.parseRunParameters()
fc_name = run_info.get('Flowcell',None)
fc_date = run_info.get('Date',None)
fc_pos = runparams.get('FCPosition','')
assert fc_name is not None and fc_date is not None, "Could not parse flowcell name and flowcell date"
unaligned_dir_pattern = os.path.join(fc_dir,"{}".format(CASAVA_OUTPUT_DIR))
basecall_stats_dir_pattern = os.path.join(unaligned_dir_pattern,"Basecall_Stats_*")
basecall_stats_dir = [os.path.relpath(d,fc_dir) for d in glob.glob(basecall_stats_dir_pattern)]
project_dir_pattern = os.path.join(unaligned_dir_pattern,"*__*_*_*")
data=read_ssheet_csv(fc_dir)
for project_dir in glob.glob(project_dir_pattern):
project_samples = []
sample_dir_pattern = os.path.join(project_dir,"Sample_*")
for sample_dir in glob.glob(sample_dir_pattern):
fastq_file_pattern = os.path.join(sample_dir,"*.fastq.gz")
fastq_files = [os.path.basename(file) for file in glob.glob(fastq_file_pattern)]
sample_name = os.path.basename(sample_dir).replace("Sample_","").replace('__','.')
samplesheet_pattern = os.path.join(sample_dir, "SampleSheet.csv")
if not os.path.exists(samplesheet_pattern):
write_samplesheet(samplesheet_pattern,data,sample_name,fc_name,fc_dir)
samplesheet = glob.glob(samplesheet_pattern)
project_samples.append({'sample_dir': os.path.basename(sample_dir),
'sample_name': sample_name,
'files': fastq_files,
'samplesheet': os.path.basename(samplesheet[0])})
project_name = os.path.basename(project_dir).replace('__','.')
projects.append({'data_dir': os.path.relpath(os.path.dirname(project_dir),fc_dir),
'project_dir': os.path.basename(project_dir),
'project_name': project_name,
'samples': project_samples})
return {'fc_dir': fc_dir, 'fc_name': '{}{}'.format(fc_pos,fc_name), 'fc_date': fc_date, 'basecall_stats_dir': basecall_stats_dir, 'projects': projects}
def read_ssheet_csv(fc_dir):
    """Read the [Data] section of the flowcell-level SampleSheet.csv"""
    ssheet_pattern = os.path.join(fc_dir, "SampleSheet.csv")
    rows = []
    # Locate the [Data] marker; default to 0 so a missing marker does not raise a NameError
    n = 0
    for num, line in enumerate(open(ssheet_pattern, 'rU'), 1):
        if '[Data]' in line:
            n = num
            break
    ssheet = open(ssheet_pattern, 'rU')
    # Skip everything up to and including the [Data] marker
    for i in range(n):
        ssheet.next()
    reader = csv.DictReader(ssheet)
    for row in reader:
        rows.append(row)
    return rows
def write_samplesheet(samplesheet_pattern,data,sample_name,fc_name,fc_dir):
#Create SampleSheet.csv in sample_dir if not exist
out_rows=[]
for d in data:
if d['Sample_Name'] == sample_name:
out_row=OrderedDict()
if d.has_key('FCID'):
out_row['FCID']=d['FCID']
else:
out_row['FCID']=fc_name
if d.has_key('Lane'):
out_row['Lane']=d['Lane']
elif _is_miseq_run(fc_dir):
out_row['Lane']="1"
out_row['SampleID']=d['Sample_Name']
if d.has_key('SampleRef'):
out_row['SampleRef']=d['SampleRef']
else:
out_row['SampleRef']=d['GenomeFolder'].split('/')[-3]
out_row['Index']=d.get('index','')
if 'index2' in d and len(d['index2']) >0:
out_row['Index']="{}-{}".format(out_row['Index'],d["index2"])
out_row['Description']=d['Description']
if d.has_key('Control'):
out_row['Control']=d['Control']
else:
out_row['Control']= 'None'
if d.has_key('Recipe'):
out_row['Recipe']=d['Recipe']
else:
out_row['Recipe']='None'
if d.has_key('Operator'):
out_row['Operator']=d['Operator']
else:
out_row['Operator']='None'
out_row['SampleProject']=d['Sample_Project']
out_rows.append(out_row)
with open(samplesheet_pattern, 'w') as outh:
writer=csv.DictWriter(outh,fieldnames=out_rows[0].keys())
writer.writeheader()
for row in out_rows:
writer.writerow(row)
def _is_miseq_run(fcdir):
#Return True if this is a miseq run folder, False otherwise
if not _is_run_folder_name(os.path.basename(fcdir)):
return False
p=os.path.basename(fcdir).split("_")[-1]
m=re.match(r'[AB][A-Z0-9]+XX',p)
return (m is None)
def _is_run_folder_name(name):
#Check if a name matches the format of run folders
m = re.match("\d{6}_[A-Za-z0-9]+_\d+_[AB]?[A-Z0-9\-]+", name)
return (m is not None)
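# Illustrative (hypothetical) run-folder names for the two helpers above:
#   "120725_SN1025_0127_BC0V4TACXX"      -> run folder; not MiSeq, since the
#                                           final token matches [AB][A-Z0-9]+XX
#   "130612_M00485_0029_000000000-A4FL3" -> run folder; _is_miseq_run returns True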
def has_casava_output(fc_dir):
try:
structure = parse_casava_directory(fc_dir)
if len(structure['projects']) > 0:
return True
except:
pass
return False
def report_to_gdocs(fc_dir, post_process_config_file):
"""Upload the run results to Google Docs using pm"""
# Call the report_to_gdocs script
runid = os.path.basename(os.path.abspath(fc_dir))
cmd = ["pm",
"report",
"report-to-gdocs",
"--run-id={}".format(runid)]
# Retry REPORT_RETRIES times to upload the report
succeeded = False
for i in xrange(REPORT_RETRIES):
try:
LOG.info("Uploading report to gdocs: {}".format(cmd))
subprocess.check_call(cmd)
succeeded = True
break
except Exception, e:
LOG.warn("Uploading failed for {} ('{}'), retrying.. ({} retries left)".format(runid,str(e),str(REPORT_RETRIES-i-1)))
if succeeded:
LOG.info("Uploading report for {} successful".format(runid))
else:
LOG.warn("Uploading report for {} failed, giving up after {} attempts".format(runid,REPORT_RETRIES))
return succeeded
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Wrapper script for bcbb pipeline. If given a .yaml configuration file, "\
"a run folder containing the sequence data from an illumina run and, optionally, "\
"a custom yaml file with options that should override what is specified in the "\
"config and samplesheet, the script will copy the relevant files from [store_dir] "\
"to [base_dir] and submit the automated_initial_analysis.py pipeline script for each "\
"sample to the cluster platform specified in the configuration.")
parser.add_argument("config", action="store", default=None, help="Path to the .yaml pipeline configuration file")
parser.add_argument("fcdir", action="store", default=None, help="Path to the archive run folder")
parser.add_argument("custom_config", action="store", default=None, help="Path to a custom configuration file with lane or sample specific options that will override the main configuration", nargs="?")
parser.add_argument("-r", "--only-run", dest="only_run", action="store_true", default=False, help="Don't setup the analysis directory, just start the pipeline")
parser.add_argument("-s", "--only-setup", dest="only_setup", action="store_true", default=False, help="Setup the analysis directory but don't start the pipeline")
parser.add_argument("-i", "--ignore-casava", dest="ignore_casava", action="store_true", default=False, help="Ignore any Casava 1.8+ file structure and just assume the pre-casava pipeline setup")
parser.add_argument("-g", "--no-google-report", dest="no_google_report", action="store_true", default=False, help="Don't upload any demultiplex statistics to Google Docs")
parser.add_argument("--process-project", dest="process_project", action="store", default=[], nargs='+', help="Only setup and run analysis for the specified list of projects")
parser.add_argument("--process-sample", dest="process_sample", action="store", default=[], nargs='+', help="Only setup and run analysis for the specified list of samples")
args = parser.parse_args()
main(args.config,args.fcdir,args.custom_config,args.only_run,args.only_setup,args.ignore_casava,args.process_project,args.process_sample)
if not args.no_google_report:
report_to_gdocs(args.fcdir, args.config)
|
SciLifeLab/scilifelab
|
scripts/bcbb_helpers/run_bcbb_pipeline.py
|
Python
|
mit
| 29,074
|
[
"Galaxy"
] |
daad6793d5fecb1e3dafc73b7aae04a526960cdcf54e3657e7c8ded6689545a2
|
# Copyright (C) 2010-2018 The ESPResSo project
#
# This file is part of ESPResSo.
#
# ESPResSo is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ESPResSo is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
"""This scripts demonstrates the measurement of the mean square displacement
using the Observables/Correlators framework."""
from __future__ import print_function
import numpy as np
import espressomd
import espressomd.observables
import espressomd.accumulators
# System setup
system = espressomd.System(box_l=[1.0, 1.0, 1.0])
system.set_random_state_PRNG()
#system.seed = system.cell_system.get_state()['n_nodes'] * [1234]
np.random.seed(seed=system.seed)
system.part.add(pos=(0, 0, 0), v=(1, 2, 3))
system.time_step = 0.01
system.cell_system.skin = 0
system.cell_system.set_n_square(use_verlet_lists=False)
system.thermostat.set_langevin(kT=1, gamma=10)
system.integrator.run(1000)
# Initialize an observable for the particle with id 0
p = espressomd.observables.ParticlePositions(ids=(0,))
# Ask the observable for its parameters
print(p.get_params())
# Calculate and return current value
print(p.calculate())
# Return stored current value
print(p.calculate())
# Instantiate a correlator correlating the observable p with itself,
# calculating the mean squared displacement (MSD).
c = espressomd.accumulators.Correlator(
tau_lin=16, tau_max=1000, delta_N=1, obs1=p,
corr_operation="square_distance_componentwise", compress1="discard1")
# Instantiate a correlator calculating the FCS autocorrelation function from
# particle positions, using the symmetric focal spot with wx=wy=wz=10
# (sigma)
fcs = espressomd.accumulators.Correlator(
tau_lin=16, tau_max=10000, delta_N=10, obs1=p,
corr_operation="fcs_acf", args=[10, 10, 10], compress1="discard2")
# Ask the correlator for its parameters
print(c.get_params())
# Register the correlator for auto updating at the interval given by its
# dt (currently every timestep)
system.auto_update_accumulators.add(c)
system.auto_update_accumulators.add(fcs)
# Integrate
system.integrator.run(300000)
# Finalize the correlation calculation and write the results to a file
c.finalize()
np.savetxt("res.dat", c.result())
fcs.finalize()
np.savetxt("fcs.dat", fcs.result())
|
hmenke/espresso
|
samples/observables_correlators.py
|
Python
|
gpl-3.0
| 2,781
|
[
"ESPResSo"
] |
b00eb400142e8d33258dfa2bd31d8cd8abf6c033ffda860e8985a890ca09f6ec
|
#!/usr/bin/env python
"""Universal feed parser
Handles RSS 0.9x, RSS 1.0, RSS 2.0, CDF, Atom 0.3, and Atom 1.0 feeds
Visit http://feedparser.org/ for the latest version
Visit http://feedparser.org/docs/ for the latest documentation
Required: Python 2.4 or later
Recommended: CJKCodecs and iconv_codec <http://cjkpython.i18n.org/>
"""
__version__ = "5.0.1"
__license__ = """Copyright (c) 2002-2008, Mark Pilgrim, All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 'AS IS'
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE."""
__author__ = "Mark Pilgrim <http://diveintomark.org/>"
__contributors__ = ["Jason Diamond <http://injektilo.org/>",
"John Beimler <http://john.beimler.org/>",
"Fazal Majid <http://www.majid.info/mylos/weblog/>",
"Aaron Swartz <http://aaronsw.com/>",
"Kevin Marks <http://epeus.blogspot.com/>",
"Sam Ruby <http://intertwingly.net/>",
"Ade Oshineye <http://blog.oshineye.com/>",
"Martin Pool <http://sourcefrog.net/>",
"Kurt McKee <http://kurtmckee.org/>"]
_debug = 0
# HTTP "User-Agent" header to send to servers when downloading feeds.
# If you are embedding feedparser in a larger application, you should
# change this to your application name and URL.
USER_AGENT = 'Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.2.11) Gecko/20101012 Firefox/3.6.11' # Changed by Kovid
# HTTP "Accept" header to send to servers when downloading feeds. If you don't
# want to send an Accept header, set this to None.
ACCEPT_HEADER = "application/atom+xml,application/rdf+xml,application/rss+xml,application/x-netcdf,application/xml;q=0.9,text/xml;q=0.2,*/*;q=0.1"
# List of preferred XML parsers, by SAX driver name. These will be tried first,
# but if they're not installed, Python will keep searching through its own list
# of pre-installed parsers until it finds one that supports everything we need.
PREFERRED_XML_PARSERS = ["drv_libxml2"]
# If you want feedparser to automatically run HTML markup through HTML Tidy, set
# this to 1. Requires mxTidy <http://www.egenix.com/files/python/mxTidy.html>
# or utidylib <http://utidylib.berlios.de/>.
TIDY_MARKUP = 0
# List of Python interfaces for HTML Tidy, in order of preference. Only useful
# if TIDY_MARKUP = 1
PREFERRED_TIDY_INTERFACES = ["uTidy", "mxTidy"]
# If you want feedparser to automatically resolve all relative URIs, set this
# to 1.
RESOLVE_RELATIVE_URIS = 1
# If you want feedparser to automatically sanitize all potentially unsafe
# HTML content, set this to 1.
SANITIZE_HTML = 1
# ---------- Python 3 modules (make it work if possible) ----------
try:
import rfc822
except ImportError:
from email import _parseaddr as rfc822
try:
# Python 3.1 introduces bytes.maketrans and simultaneously
# deprecates string.maketrans; use bytes.maketrans if possible
_maketrans = bytes.maketrans
except (NameError, AttributeError):
import string
_maketrans = string.maketrans
# base64 support for Atom feeds that contain embedded binary data
try:
import base64, binascii
# Python 3.1 deprecates decodestring in favor of decodebytes
_base64decode = getattr(base64, 'decodebytes', base64.decodestring)
except:
base64 = binascii = None
def _s2bytes(s):
# Convert a UTF-8 str to bytes if the interpreter is Python 3
try:
return bytes(s, 'utf8')
except (NameError, TypeError):
# In Python 2.5 and below, bytes doesn't exist (NameError)
# In Python 2.6 and above, bytes and str are the same (TypeError)
return s
def _l2bytes(l):
# Convert a list of ints to bytes if the interpreter is Python 3
try:
if bytes is not str:
# In Python 2.6 and above, this call won't raise an exception
# but it will return bytes([65]) as '[65]' instead of 'A'
return bytes(l)
raise NameError
except NameError:
return ''.join(map(chr, l))
# If you want feedparser to allow all URL schemes, set this to ()
# List culled from Python's urlparse documentation at:
# http://docs.python.org/library/urlparse.html
# as well as from "URI scheme" at Wikipedia:
# https://secure.wikimedia.org/wikipedia/en/wiki/URI_scheme
# Many more will likely need to be added!
ACCEPTABLE_URI_SCHEMES = (
'file', 'ftp', 'gopher', 'h323', 'hdl', 'http', 'https', 'imap', 'mailto',
'mms', 'news', 'nntp', 'prospero', 'rsync', 'rtsp', 'rtspu', 'sftp',
'shttp', 'sip', 'sips', 'snews', 'svn', 'svn+ssh', 'telnet', 'wais',
# Additional common-but-unofficial schemes
'aim', 'callto', 'cvs', 'facetime', 'feed', 'git', 'gtalk', 'irc', 'ircs',
'irc6', 'itms', 'mms', 'msnim', 'skype', 'ssh', 'smb', 'svn', 'ymsg',
)
#ACCEPTABLE_URI_SCHEMES = ()
# ---------- required modules (should come with any Python distribution) ----------
import sgmllib, re, sys, copy, urlparse, time, types, cgi, urllib, urllib2, datetime
try:
from io import BytesIO as _StringIO
except ImportError:
try:
from cStringIO import StringIO as _StringIO
except:
from StringIO import StringIO as _StringIO
# ---------- optional modules (feedparser will work without these, but with reduced functionality) ----------
# gzip is included with most Python distributions, but may not be available if you compiled your own
try:
import gzip
except:
gzip = None
try:
import zlib
except:
zlib = None
# If a real XML parser is available, feedparser will attempt to use it. feedparser has
# been tested with the built-in SAX parser, PyXML, and libxml2. On platforms where the
# Python distribution does not come with an XML parser (such as Mac OS X 10.2 and some
# versions of FreeBSD), feedparser will quietly fall back on regex-based parsing.
try:
import xml.sax
xml.sax.make_parser(PREFERRED_XML_PARSERS) # test for valid parsers
from xml.sax.saxutils import escape as _xmlescape
_XML_AVAILABLE = 1
except:
_XML_AVAILABLE = 0
    def _xmlescape(data, entities={}):
        data = data.replace('&', '&amp;')
        data = data.replace('>', '&gt;')
        data = data.replace('<', '&lt;')
        for char, entity in entities.items():
            data = data.replace(char, entity)
return data
# cjkcodecs and iconv_codec provide support for more character encodings.
# Both are available from http://cjkpython.i18n.org/
try:
import cjkcodecs.aliases
except:
pass
try:
import iconv_codec
except:
pass
# chardet library auto-detects character encodings
# Download from http://chardet.feedparser.org/
try:
import chardet
if _debug:
import chardet.constants
chardet.constants._debug = 1
except:
chardet = None
# reversible htmlentitydefs mappings for Python 2.2
try:
from htmlentitydefs import name2codepoint, codepoint2name
except:
import htmlentitydefs
name2codepoint={}
codepoint2name={}
for (name,codepoint) in htmlentitydefs.entitydefs.iteritems():
if codepoint.startswith('&#'): codepoint=unichr(int(codepoint[2:-1]))
name2codepoint[name]=ord(codepoint)
codepoint2name[ord(codepoint)]=name
# BeautifulSoup parser used for parsing microformats from embedded HTML content
# http://www.crummy.com/software/BeautifulSoup/
# feedparser is tested with BeautifulSoup 3.0.x, but it might work with the
# older 2.x series. If it doesn't, and you can figure out why, I'll accept a
# patch and modify the compatibility statement accordingly.
try:
import BeautifulSoup
except:
BeautifulSoup = None
# ---------- don't touch these ----------
class ThingsNobodyCaresAboutButMe(Exception): pass
class CharacterEncodingOverride(ThingsNobodyCaresAboutButMe): pass
class CharacterEncodingUnknown(ThingsNobodyCaresAboutButMe): pass
class NonXMLContentType(ThingsNobodyCaresAboutButMe): pass
class UndeclaredNamespace(Exception): pass
sgmllib.tagfind = re.compile('[a-zA-Z][-_.:a-zA-Z0-9]*')
sgmllib.special = re.compile('<!')
sgmllib.charref = re.compile('&#(\d+|[xX][0-9a-fA-F]+);')
if sgmllib.endbracket.search(' <').start(0):
class EndBracketRegEx:
def __init__(self):
# Overriding the built-in sgmllib.endbracket regex allows the
# parser to find angle brackets embedded in element attributes.
self.endbracket = re.compile('''([^'"<>]|"[^"]*"(?=>|/|\s|\w+=)|'[^']*'(?=>|/|\s|\w+=))*(?=[<>])|.*?(?=[<>])''')
def search(self,string,index=0):
match = self.endbracket.match(string,index)
if match is not None:
                # Returning a new object in the calling thread's context
                # resolves a thread-safety issue.
return EndBracketMatch(match)
return None
class EndBracketMatch:
def __init__(self, match):
self.match = match
def start(self, n):
return self.match.end(n)
sgmllib.endbracket = EndBracketRegEx()
SUPPORTED_VERSIONS = {'': 'unknown',
'rss090': 'RSS 0.90',
'rss091n': 'RSS 0.91 (Netscape)',
'rss091u': 'RSS 0.91 (Userland)',
'rss092': 'RSS 0.92',
'rss093': 'RSS 0.93',
'rss094': 'RSS 0.94',
'rss20': 'RSS 2.0',
'rss10': 'RSS 1.0',
'rss': 'RSS (unknown version)',
'atom01': 'Atom 0.1',
'atom02': 'Atom 0.2',
'atom03': 'Atom 0.3',
'atom10': 'Atom 1.0',
'atom': 'Atom (unknown version)',
'cdf': 'CDF',
'hotrss': 'Hot RSS'
}
try:
UserDict = dict
except NameError:
# Python 2.1 does not have dict
from UserDict import UserDict
def dict(aList):
rc = {}
for k, v in aList:
rc[k] = v
return rc
class FeedParserDict(UserDict):
keymap = {'channel': 'feed',
'items': 'entries',
'guid': 'id',
'date': 'updated',
'date_parsed': 'updated_parsed',
'description': ['summary', 'subtitle'],
'url': ['href'],
'modified': 'updated',
'modified_parsed': 'updated_parsed',
'issued': 'published',
'issued_parsed': 'published_parsed',
'copyright': 'rights',
'copyright_detail': 'rights_detail',
'tagline': 'subtitle',
'tagline_detail': 'subtitle_detail'}
def __getitem__(self, key):
if key == 'category':
return UserDict.__getitem__(self, 'tags')[0]['term']
if key == 'enclosures':
norel = lambda link: FeedParserDict([(name,value) for (name,value) in link.items() if name!='rel'])
return [norel(link) for link in UserDict.__getitem__(self, 'links') if link['rel']=='enclosure']
if key == 'license':
for link in UserDict.__getitem__(self, 'links'):
if link['rel']=='license' and link.has_key('href'):
return link['href']
if key == 'categories':
return [(tag['scheme'], tag['term']) for tag in UserDict.__getitem__(self, 'tags')]
realkey = self.keymap.get(key, key)
if type(realkey) == types.ListType:
for k in realkey:
if UserDict.__contains__(self, k):
return UserDict.__getitem__(self, k)
if UserDict.__contains__(self, key):
return UserDict.__getitem__(self, key)
return UserDict.__getitem__(self, realkey)
def __setitem__(self, key, value):
for k in self.keymap.keys():
if key == k:
key = self.keymap[k]
if type(key) == types.ListType:
key = key[0]
return UserDict.__setitem__(self, key, value)
def get(self, key, default=None):
if self.has_key(key):
return self[key]
else:
return default
def setdefault(self, key, value):
if not self.has_key(key):
self[key] = value
return self[key]
def has_key(self, key):
try:
return hasattr(self, key) or UserDict.__contains__(self, key)
except AttributeError:
return False
# This alias prevents the 2to3 tool from changing the semantics of the
# __contains__ function below and exhausting the maximum recursion depth
__has_key = has_key
def __getattr__(self, key):
try:
return self.__dict__[key]
except KeyError:
pass
try:
assert not key.startswith('_')
return self.__getitem__(key)
except:
raise AttributeError, "object has no attribute '%s'" % key
def __setattr__(self, key, value):
if key.startswith('_') or key == 'data':
self.__dict__[key] = value
else:
return self.__setitem__(key, value)
def __contains__(self, key):
return self.__has_key(key)
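# Hedged illustration of the keymap aliasing implemented above:
#   d = FeedParserDict()
#   d['modified'] = 'Mon, 01 Jan 2001 00:00:00 GMT'  # stored under 'updated'
#   d['updated'] == d['modified']                    # -> True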
def zopeCompatibilityHack():
global FeedParserDict
del FeedParserDict
def FeedParserDict(aDict=None):
rc = {}
if aDict:
rc.update(aDict)
return rc
_ebcdic_to_ascii_map = None
def _ebcdic_to_ascii(s):
global _ebcdic_to_ascii_map
if not _ebcdic_to_ascii_map:
emap = (
0,1,2,3,156,9,134,127,151,141,142,11,12,13,14,15,
16,17,18,19,157,133,8,135,24,25,146,143,28,29,30,31,
128,129,130,131,132,10,23,27,136,137,138,139,140,5,6,7,
144,145,22,147,148,149,150,4,152,153,154,155,20,21,158,26,
32,160,161,162,163,164,165,166,167,168,91,46,60,40,43,33,
38,169,170,171,172,173,174,175,176,177,93,36,42,41,59,94,
45,47,178,179,180,181,182,183,184,185,124,44,37,95,62,63,
186,187,188,189,190,191,192,193,194,96,58,35,64,39,61,34,
195,97,98,99,100,101,102,103,104,105,196,197,198,199,200,201,
202,106,107,108,109,110,111,112,113,114,203,204,205,206,207,208,
209,126,115,116,117,118,119,120,121,122,210,211,212,213,214,215,
216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,
123,65,66,67,68,69,70,71,72,73,232,233,234,235,236,237,
125,74,75,76,77,78,79,80,81,82,238,239,240,241,242,243,
92,159,83,84,85,86,87,88,89,90,244,245,246,247,248,249,
48,49,50,51,52,53,54,55,56,57,250,251,252,253,254,255
)
_ebcdic_to_ascii_map = _maketrans( \
_l2bytes(range(256)), _l2bytes(emap))
return s.translate(_ebcdic_to_ascii_map)
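# Illustrative example (editor's note, not part of the original module):
# the table above remaps each EBCDIC byte to its ASCII equivalent, so the
# EBCDIC bytes for 'ABC' (0xC1 0xC2 0xC3) decode as expected:
#
#     >>> _ebcdic_to_ascii(_l2bytes([0xC1, 0xC2, 0xC3]))
#     'ABC'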
_cp1252 = {
unichr(128): unichr(8364), # euro sign
unichr(130): unichr(8218), # single low-9 quotation mark
unichr(131): unichr( 402), # latin small letter f with hook
unichr(132): unichr(8222), # double low-9 quotation mark
unichr(133): unichr(8230), # horizontal ellipsis
unichr(134): unichr(8224), # dagger
unichr(135): unichr(8225), # double dagger
unichr(136): unichr( 710), # modifier letter circumflex accent
unichr(137): unichr(8240), # per mille sign
unichr(138): unichr( 352), # latin capital letter s with caron
unichr(139): unichr(8249), # single left-pointing angle quotation mark
unichr(140): unichr( 338), # latin capital ligature oe
unichr(142): unichr( 381), # latin capital letter z with caron
unichr(145): unichr(8216), # left single quotation mark
unichr(146): unichr(8217), # right single quotation mark
unichr(147): unichr(8220), # left double quotation mark
unichr(148): unichr(8221), # right double quotation mark
unichr(149): unichr(8226), # bullet
unichr(150): unichr(8211), # en dash
unichr(151): unichr(8212), # em dash
unichr(152): unichr( 732), # small tilde
unichr(153): unichr(8482), # trade mark sign
unichr(154): unichr( 353), # latin small letter s with caron
unichr(155): unichr(8250), # single right-pointing angle quotation mark
unichr(156): unichr( 339), # latin small ligature oe
unichr(158): unichr( 382), # latin small letter z with caron
unichr(159): unichr( 376)} # latin capital letter y with diaeresis
_urifixer = re.compile('^([A-Za-z][A-Za-z0-9+-.]*://)(/*)(.*?)')
def _urljoin(base, uri):
uri = _urifixer.sub(r'\1\3', uri)
try:
return urlparse.urljoin(base, uri)
except:
uri = urlparse.urlunparse([urllib.quote(part) for part in urlparse.urlparse(uri)])
return urlparse.urljoin(base, uri)
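# Illustrative example (editor's note, not part of the original module):
# _urifixer collapses stray slashes after the scheme that some ill-formed
# feeds emit, before falling back to the standard library join:
#
#     >>> _urljoin('http://example.com/feed/', 'http:////example.com/img.png')
#     'http://example.com/img.png'
#     >>> _urljoin('http://example.com/feed/', 'entry/1')
#     'http://example.com/feed/entry/1'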
class _FeedParserMixin:
namespaces = {'': '',
'http://backend.userland.com/rss': '',
'http://blogs.law.harvard.edu/tech/rss': '',
'http://purl.org/rss/1.0/': '',
'http://my.netscape.com/rdf/simple/0.9/': '',
'http://example.com/newformat#': '',
'http://example.com/necho': '',
'http://purl.org/echo/': '',
'uri/of/echo/namespace#': '',
'http://purl.org/pie/': '',
'http://purl.org/atom/ns#': '',
'http://www.w3.org/2005/Atom': '',
'http://purl.org/rss/1.0/modules/rss091#': '',
'http://webns.net/mvcb/': 'admin',
'http://purl.org/rss/1.0/modules/aggregation/': 'ag',
'http://purl.org/rss/1.0/modules/annotate/': 'annotate',
'http://media.tangent.org/rss/1.0/': 'audio',
'http://backend.userland.com/blogChannelModule': 'blogChannel',
'http://web.resource.org/cc/': 'cc',
'http://backend.userland.com/creativeCommonsRssModule': 'creativeCommons',
'http://purl.org/rss/1.0/modules/company': 'co',
'http://purl.org/rss/1.0/modules/content/': 'content',
'http://my.theinfo.org/changed/1.0/rss/': 'cp',
'http://purl.org/dc/elements/1.1/': 'dc',
'http://purl.org/dc/terms/': 'dcterms',
'http://purl.org/rss/1.0/modules/email/': 'email',
'http://purl.org/rss/1.0/modules/event/': 'ev',
'http://rssnamespace.org/feedburner/ext/1.0': 'feedburner',
'http://freshmeat.net/rss/fm/': 'fm',
'http://xmlns.com/foaf/0.1/': 'foaf',
'http://www.w3.org/2003/01/geo/wgs84_pos#': 'geo',
'http://postneo.com/icbm/': 'icbm',
'http://purl.org/rss/1.0/modules/image/': 'image',
'http://www.itunes.com/DTDs/PodCast-1.0.dtd': 'itunes',
'http://example.com/DTDs/PodCast-1.0.dtd': 'itunes',
'http://purl.org/rss/1.0/modules/link/': 'l',
'http://search.yahoo.com/mrss': 'media',
#Version 1.1.2 of the Media RSS spec added the trailing slash on the namespace
'http://search.yahoo.com/mrss/': 'media',
'http://madskills.com/public/xml/rss/module/pingback/': 'pingback',
'http://prismstandard.org/namespaces/1.2/basic/': 'prism',
'http://www.w3.org/1999/02/22-rdf-syntax-ns#': 'rdf',
'http://www.w3.org/2000/01/rdf-schema#': 'rdfs',
'http://purl.org/rss/1.0/modules/reference/': 'ref',
'http://purl.org/rss/1.0/modules/richequiv/': 'reqv',
'http://purl.org/rss/1.0/modules/search/': 'search',
'http://purl.org/rss/1.0/modules/slash/': 'slash',
'http://schemas.xmlsoap.org/soap/envelope/': 'soap',
'http://purl.org/rss/1.0/modules/servicestatus/': 'ss',
'http://hacks.benhammersley.com/rss/streaming/': 'str',
'http://purl.org/rss/1.0/modules/subscription/': 'sub',
'http://purl.org/rss/1.0/modules/syndication/': 'sy',
'http://schemas.pocketsoap.com/rss/myDescModule/': 'szf',
'http://purl.org/rss/1.0/modules/taxonomy/': 'taxo',
'http://purl.org/rss/1.0/modules/threading/': 'thr',
'http://purl.org/rss/1.0/modules/textinput/': 'ti',
'http://madskills.com/public/xml/rss/module/trackback/':'trackback',
'http://wellformedweb.org/commentAPI/': 'wfw',
'http://purl.org/rss/1.0/modules/wiki/': 'wiki',
'http://www.w3.org/1999/xhtml': 'xhtml',
'http://www.w3.org/1999/xlink': 'xlink',
'http://www.w3.org/XML/1998/namespace': 'xml'
}
_matchnamespaces = {}
can_be_relative_uri = ['link', 'id', 'wfw_comment', 'wfw_commentrss', 'docs', 'url', 'href', 'comments', 'icon', 'logo']
can_contain_relative_uris = ['content', 'title', 'summary', 'info', 'tagline', 'subtitle', 'copyright', 'rights', 'description']
can_contain_dangerous_markup = ['content', 'title', 'summary', 'info', 'tagline', 'subtitle', 'copyright', 'rights', 'description']
html_types = ['text/html', 'application/xhtml+xml']
def __init__(self, baseuri=None, baselang=None, encoding='utf-8'):
if _debug: sys.stderr.write('initializing FeedParser\n')
if not self._matchnamespaces:
for k, v in self.namespaces.items():
self._matchnamespaces[k.lower()] = v
self.feeddata = FeedParserDict() # feed-level data
self.encoding = encoding # character encoding
self.entries = [] # list of entry-level data
self.version = '' # feed type/version, see SUPPORTED_VERSIONS
self.namespacesInUse = {} # dictionary of namespaces defined by the feed
# the following are used internally to track state;
# this is really out of control and should be refactored
self.infeed = 0
self.inentry = 0
self.incontent = 0
self.intextinput = 0
self.inimage = 0
self.inauthor = 0
self.incontributor = 0
self.inpublisher = 0
self.insource = 0
self.sourcedata = FeedParserDict()
self.contentparams = FeedParserDict()
self._summaryKey = None
self.namespacemap = {}
self.elementstack = []
self.basestack = []
self.langstack = []
self.baseuri = baseuri or ''
self.lang = baselang or None
self.svgOK = 0
self.hasTitle = 0
if baselang:
self.feeddata['language'] = baselang.replace('_','-')
def unknown_starttag(self, tag, attrs):
if _debug: sys.stderr.write('start %s with %s\n' % (tag, attrs))
# normalize attrs
attrs = [(k.lower(), v) for k, v in attrs]
attrs = [(k, k in ('rel', 'type') and v.lower() or v) for k, v in attrs]
# the sgml parser doesn't handle entities in attributes, but
# strict xml parsers do -- account for this difference
if isinstance(self, _LooseFeedParser):
            attrs = [(k, v.replace('&amp;', '&')) for k, v in attrs]
# track xml:base and xml:lang
attrsD = dict(attrs)
baseuri = attrsD.get('xml:base', attrsD.get('base')) or self.baseuri
if type(baseuri) != type(u''):
try:
baseuri = unicode(baseuri, self.encoding)
except:
baseuri = unicode(baseuri, 'iso-8859-1')
# ensure that self.baseuri is always an absolute URI that
        # uses a whitelisted URI scheme (e.g. not `javascript:`)
if self.baseuri:
self.baseuri = _makeSafeAbsoluteURI(self.baseuri, baseuri) or self.baseuri
else:
self.baseuri = _urljoin(self.baseuri, baseuri)
lang = attrsD.get('xml:lang', attrsD.get('lang'))
if lang == '':
            # xml:lang could be explicitly set to ''; we need to capture that
lang = None
elif lang is None:
# if no xml:lang is specified, use parent lang
lang = self.lang
if lang:
if tag in ('feed', 'rss', 'rdf:RDF'):
self.feeddata['language'] = lang.replace('_','-')
self.lang = lang
self.basestack.append(self.baseuri)
self.langstack.append(lang)
# track namespaces
for prefix, uri in attrs:
if prefix.startswith('xmlns:'):
self.trackNamespace(prefix[6:], uri)
elif prefix == 'xmlns':
self.trackNamespace(None, uri)
# track inline content
if self.incontent and self.contentparams.has_key('type') and not self.contentparams.get('type', 'xml').endswith('xml'):
if tag in ['xhtml:div', 'div']: return # typepad does this 10/2007
# element declared itself as escaped markup, but it isn't really
self.contentparams['type'] = 'application/xhtml+xml'
if self.incontent and self.contentparams.get('type') == 'application/xhtml+xml':
if tag.find(':') <> -1:
prefix, tag = tag.split(':', 1)
namespace = self.namespacesInUse.get(prefix, '')
if tag=='math' and namespace=='http://www.w3.org/1998/Math/MathML':
attrs.append(('xmlns',namespace))
if tag=='svg' and namespace=='http://www.w3.org/2000/svg':
attrs.append(('xmlns',namespace))
if tag == 'svg': self.svgOK += 1
return self.handle_data('<%s%s>' % (tag, self.strattrs(attrs)), escape=0)
# match namespaces
if tag.find(':') <> -1:
prefix, suffix = tag.split(':', 1)
else:
prefix, suffix = '', tag
prefix = self.namespacemap.get(prefix, prefix)
if prefix:
prefix = prefix + '_'
# special hack for better tracking of empty textinput/image elements in illformed feeds
if (not prefix) and tag not in ('title', 'link', 'description', 'name'):
self.intextinput = 0
if (not prefix) and tag not in ('title', 'link', 'description', 'url', 'href', 'width', 'height'):
self.inimage = 0
# call special handler (if defined) or default handler
methodname = '_start_' + prefix + suffix
try:
method = getattr(self, methodname)
return method(attrsD)
except AttributeError:
            # Since there's no handler, or something has gone wrong, we
            # explicitly add the element and its attributes.
            unknown_tag = prefix + suffix
            if len(attrsD) == 0:
                # No attributes, so merge it into the enclosing dictionary
return self.push(unknown_tag, 1)
else:
# Has attributes so create it in its own dictionary
context = self._getContext()
context[unknown_tag] = attrsD
def unknown_endtag(self, tag):
if _debug: sys.stderr.write('end %s\n' % tag)
# match namespaces
if tag.find(':') <> -1:
prefix, suffix = tag.split(':', 1)
else:
prefix, suffix = '', tag
prefix = self.namespacemap.get(prefix, prefix)
if prefix:
prefix = prefix + '_'
if suffix == 'svg' and self.svgOK: self.svgOK -= 1
# call special handler (if defined) or default handler
methodname = '_end_' + prefix + suffix
try:
if self.svgOK: raise AttributeError()
method = getattr(self, methodname)
method()
except AttributeError:
self.pop(prefix + suffix)
# track inline content
if self.incontent and self.contentparams.has_key('type') and not self.contentparams.get('type', 'xml').endswith('xml'):
# element declared itself as escaped markup, but it isn't really
if tag in ['xhtml:div', 'div']: return # typepad does this 10/2007
self.contentparams['type'] = 'application/xhtml+xml'
if self.incontent and self.contentparams.get('type') == 'application/xhtml+xml':
tag = tag.split(':')[-1]
self.handle_data('</%s>' % tag, escape=0)
# track xml:base and xml:lang going out of scope
if self.basestack:
self.basestack.pop()
if self.basestack and self.basestack[-1]:
self.baseuri = self.basestack[-1]
if self.langstack:
self.langstack.pop()
if self.langstack: # and (self.langstack[-1] is not None):
self.lang = self.langstack[-1]
def handle_charref(self, ref):
        # called for each character reference, e.g. for '&#160;', ref will be '160'
if not self.elementstack: return
ref = ref.lower()
if ref in ('34', '38', '39', '60', '62', 'x22', 'x26', 'x27', 'x3c', 'x3e'):
text = '&#%s;' % ref
else:
if ref[0] == 'x':
c = int(ref[1:], 16)
else:
c = int(ref)
text = unichr(c).encode('utf-8')
self.elementstack[-1][2].append(text)
def handle_entityref(self, ref):
        # called for each entity reference, e.g. for '&copy;', ref will be 'copy'
if not self.elementstack: return
if _debug: sys.stderr.write('entering handle_entityref with %s\n' % ref)
if ref in ('lt', 'gt', 'quot', 'amp', 'apos'):
text = '&%s;' % ref
elif ref in self.entities.keys():
text = self.entities[ref]
if text.startswith('&#') and text.endswith(';'):
return self.handle_entityref(text)
else:
try: name2codepoint[ref]
except KeyError: text = '&%s;' % ref
else: text = unichr(name2codepoint[ref]).encode('utf-8')
self.elementstack[-1][2].append(text)
def handle_data(self, text, escape=1):
# called for each block of plain text, i.e. outside of any tag and
# not containing any character or entity references
if not self.elementstack: return
if escape and self.contentparams.get('type') == 'application/xhtml+xml':
text = _xmlescape(text)
self.elementstack[-1][2].append(text)
def handle_comment(self, text):
# called for each comment, e.g. <!-- insert message here -->
pass
def handle_pi(self, text):
# called for each processing instruction, e.g. <?instruction>
pass
def handle_decl(self, text):
pass
def parse_declaration(self, i):
# override internal declaration handler to handle CDATA blocks
if _debug: sys.stderr.write('entering parse_declaration\n')
if self.rawdata[i:i+9] == '<![CDATA[':
k = self.rawdata.find(']]>', i)
if k == -1:
# CDATA block began but didn't finish
k = len(self.rawdata)
return k
self.handle_data(_xmlescape(self.rawdata[i+9:k]), 0)
return k+3
else:
k = self.rawdata.find('>', i)
if k >= 0:
return k+1
else:
# We have an incomplete CDATA block.
return k
def mapContentType(self, contentType):
contentType = contentType.lower()
if contentType == 'text' or contentType == 'plain':
contentType = 'text/plain'
elif contentType == 'html':
contentType = 'text/html'
elif contentType == 'xhtml':
contentType = 'application/xhtml+xml'
return contentType
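    # Illustrative example (editor's note, not part of the original
    # module, `parser` being a hypothetical _FeedParserMixin instance):
    # Atom 0.3 shorthand type values are expanded to full MIME types,
    # and anything else passes through unchanged:
    #
    #     >>> parser.mapContentType('html')
    #     'text/html'
    #     >>> parser.mapContentType('xhtml')
    #     'application/xhtml+xml'
    #     >>> parser.mapContentType('text/css')
    #     'text/css'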
def trackNamespace(self, prefix, uri):
loweruri = uri.lower()
if (prefix, loweruri) == (None, 'http://my.netscape.com/rdf/simple/0.9/') and not self.version:
self.version = 'rss090'
if loweruri == 'http://purl.org/rss/1.0/' and not self.version:
self.version = 'rss10'
if loweruri == 'http://www.w3.org/2005/atom' and not self.version:
self.version = 'atom10'
if loweruri.find('backend.userland.com/rss') <> -1:
# match any backend.userland.com namespace
uri = 'http://backend.userland.com/rss'
loweruri = uri
if self._matchnamespaces.has_key(loweruri):
self.namespacemap[prefix] = self._matchnamespaces[loweruri]
self.namespacesInUse[self._matchnamespaces[loweruri]] = uri
else:
self.namespacesInUse[prefix or ''] = uri
def resolveURI(self, uri):
return _urljoin(self.baseuri or '', uri)
def decodeEntities(self, element, data):
return data
def strattrs(self, attrs):
return ''.join([' %s="%s"' % (t[0],_xmlescape(t[1],{'"':'"'})) for t in attrs])
def push(self, element, expectingText):
self.elementstack.append([element, expectingText, []])
def pop(self, element, stripWhitespace=1):
if not self.elementstack: return
if self.elementstack[-1][0] != element: return
element, expectingText, pieces = self.elementstack.pop()
if self.version == 'atom10' and self.contentparams.get('type','text') == 'application/xhtml+xml':
# remove enclosing child element, but only if it is a <div> and
# only if all the remaining content is nested underneath it.
# This means that the divs would be retained in the following:
# <div>foo</div><div>bar</div>
while pieces and len(pieces)>1 and not pieces[-1].strip():
del pieces[-1]
while pieces and len(pieces)>1 and not pieces[0].strip():
del pieces[0]
if pieces and (pieces[0] == '<div>' or pieces[0].startswith('<div ')) and pieces[-1]=='</div>':
depth = 0
for piece in pieces[:-1]:
if piece.startswith('</'):
depth -= 1
if depth == 0: break
elif piece.startswith('<') and not piece.endswith('/>'):
depth += 1
else:
pieces = pieces[1:-1]
# Ensure each piece is a str for Python 3
for (i, v) in enumerate(pieces):
if not isinstance(v, basestring):
pieces[i] = v.decode('utf-8')
output = ''.join(pieces)
if stripWhitespace:
output = output.strip()
if not expectingText: return output
# decode base64 content
if base64 and self.contentparams.get('base64', 0):
try:
output = _base64decode(output)
except binascii.Error:
pass
except binascii.Incomplete:
pass
except TypeError:
# In Python 3, base64 takes and outputs bytes, not str
# This may not be the most correct way to accomplish this
output = _base64decode(output.encode('utf-8')).decode('utf-8')
# resolve relative URIs
if (element in self.can_be_relative_uri) and output:
output = self.resolveURI(output)
# decode entities within embedded markup
if not self.contentparams.get('base64', 0):
output = self.decodeEntities(element, output)
if self.lookslikehtml(output):
self.contentparams['type']='text/html'
# remove temporary cruft from contentparams
try:
del self.contentparams['mode']
except KeyError:
pass
try:
del self.contentparams['base64']
except KeyError:
pass
is_htmlish = self.mapContentType(self.contentparams.get('type', 'text/html')) in self.html_types
# resolve relative URIs within embedded markup
if is_htmlish and RESOLVE_RELATIVE_URIS:
if element in self.can_contain_relative_uris:
output = _resolveRelativeURIs(output, self.baseuri, self.encoding, self.contentparams.get('type', 'text/html'))
# parse microformats
# (must do this before sanitizing because some microformats
# rely on elements that we sanitize)
if is_htmlish and element in ['content', 'description', 'summary']:
mfresults = _parseMicroformats(output, self.baseuri, self.encoding)
if mfresults:
for tag in mfresults.get('tags', []):
self._addTag(tag['term'], tag['scheme'], tag['label'])
for enclosure in mfresults.get('enclosures', []):
self._start_enclosure(enclosure)
for xfn in mfresults.get('xfn', []):
self._addXFN(xfn['relationships'], xfn['href'], xfn['name'])
vcard = mfresults.get('vcard')
if vcard:
self._getContext()['vcard'] = vcard
# sanitize embedded markup
if is_htmlish and SANITIZE_HTML:
if element in self.can_contain_dangerous_markup:
output = _sanitizeHTML(output, self.encoding, self.contentparams.get('type', 'text/html'))
if self.encoding and type(output) != type(u''):
try:
output = unicode(output, self.encoding)
except:
pass
# address common error where people take data that is already
# utf-8, presume that it is iso-8859-1, and re-encode it.
if self.encoding in ('utf-8', 'utf-8_INVALID_PYTHON_3') and type(output) == type(u''):
try:
output = unicode(output.encode('iso-8859-1'), 'utf-8')
except:
pass
# map win-1252 extensions to the proper code points
if type(output) == type(u''):
output = u''.join([c in _cp1252.keys() and _cp1252[c] or c for c in output])
# categories/tags/keywords/whatever are handled in _end_category
if element == 'category':
return output
if element == 'title' and self.hasTitle:
return output
# store output in appropriate place(s)
if self.inentry and not self.insource:
if element == 'content':
self.entries[-1].setdefault(element, [])
contentparams = copy.deepcopy(self.contentparams)
contentparams['value'] = output
self.entries[-1][element].append(contentparams)
elif element == 'link':
if not self.inimage:
# query variables in urls in link elements are improperly
# converted from `?a=1&b=2` to `?a=1&b;=2` as if they're
# unhandled character references. fix this special case.
output = re.sub("&([A-Za-z0-9_]+);", "&\g<1>", output)
self.entries[-1][element] = output
if output:
self.entries[-1]['links'][-1]['href'] = output
else:
if element == 'description':
element = 'summary'
self.entries[-1][element] = output
if self.incontent:
contentparams = copy.deepcopy(self.contentparams)
contentparams['value'] = output
self.entries[-1][element + '_detail'] = contentparams
elif (self.infeed or self.insource):# and (not self.intextinput) and (not self.inimage):
context = self._getContext()
if element == 'description':
element = 'subtitle'
context[element] = output
if element == 'link':
# fix query variables; see above for the explanation
output = re.sub("&([A-Za-z0-9_]+);", "&\g<1>", output)
context[element] = output
context['links'][-1]['href'] = output
elif self.incontent:
contentparams = copy.deepcopy(self.contentparams)
contentparams['value'] = output
context[element + '_detail'] = contentparams
return output
def pushContent(self, tag, attrsD, defaultContentType, expectingText):
self.incontent += 1
if self.lang: self.lang=self.lang.replace('_','-')
self.contentparams = FeedParserDict({
'type': self.mapContentType(attrsD.get('type', defaultContentType)),
'language': self.lang,
'base': self.baseuri})
self.contentparams['base64'] = self._isBase64(attrsD, self.contentparams)
self.push(tag, expectingText)
def popContent(self, tag):
value = self.pop(tag)
self.incontent -= 1
self.contentparams.clear()
return value
# a number of elements in a number of RSS variants are nominally plain
# text, but this is routinely ignored. This is an attempt to detect
# the most common cases. As false positives often result in silent
# data loss, this function errs on the conservative side.
def lookslikehtml(self, s):
if self.version.startswith('atom'): return
if self.contentparams.get('type','text/html') != 'text/plain': return
        # must have a close tag or an entity reference to qualify
if not (re.search(r'</(\w+)>',s) or re.search("&#?\w+;",s)): return
# all tags must be in a restricted subset of valid HTML tags
if filter(lambda t: t.lower() not in _HTMLSanitizer.acceptable_elements,
re.findall(r'</?(\w+)',s)): return
# all entities must have been defined as valid HTML entities
from htmlentitydefs import entitydefs
if filter(lambda e: e not in entitydefs.keys(),
re.findall(r'&(\w+);',s)): return
return 1
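    # Illustrative example (editor's note, not part of the original
    # module, `parser` being a hypothetical instance parsing a non-Atom
    # feed whose current content type is text/plain): a close tag made of
    # whitelisted elements is enough to qualify, while bare angle
    # brackets are not:
    #
    #     >>> parser.lookslikehtml('one <b>two</b> three')
    #     1
    #     >>> parser.lookslikehtml('a < b and c > d')    # returns None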
def _mapToStandardPrefix(self, name):
colonpos = name.find(':')
if colonpos <> -1:
prefix = name[:colonpos]
suffix = name[colonpos+1:]
prefix = self.namespacemap.get(prefix, prefix)
name = prefix + ':' + suffix
return name
def _getAttribute(self, attrsD, name):
return attrsD.get(self._mapToStandardPrefix(name))
def _isBase64(self, attrsD, contentparams):
if attrsD.get('mode', '') == 'base64':
return 1
if self.contentparams['type'].startswith('text/'):
return 0
if self.contentparams['type'].endswith('+xml'):
return 0
if self.contentparams['type'].endswith('/xml'):
return 0
return 1
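    # Illustrative summary (editor's note, not part of the original
    # module): content is treated as base64 when mode="base64" is given
    # explicitly, or when the MIME type is neither text/* nor XML:
    #     mode="base64"                 -> 1
    #     type="text/plain"             -> 0
    #     type="application/xhtml+xml"  -> 0
    #     type="image/png"              -> 1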
def _itsAnHrefDamnIt(self, attrsD):
href = attrsD.get('url', attrsD.get('uri', attrsD.get('href', None)))
if href:
try:
del attrsD['url']
except KeyError:
pass
try:
del attrsD['uri']
except KeyError:
pass
attrsD['href'] = href
return attrsD
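    # Illustrative example (editor's note, not part of the original
    # module): feeds spell a link target as url=, uri=, or href=; this
    # helper rewrites the first two to 'href' in place, so an attrsD of
    # {'url': 'http://example.com/'} comes back as
    # {'href': 'http://example.com/'}.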
def _save(self, key, value, overwrite=False):
context = self._getContext()
if overwrite:
context[key] = value
else:
context.setdefault(key, value)
def _start_rss(self, attrsD):
versionmap = {'0.91': 'rss091u',
'0.92': 'rss092',
'0.93': 'rss093',
'0.94': 'rss094'}
#If we're here then this is an RSS feed.
#If we don't have a version or have a version that starts with something
#other than RSS then there's been a mistake. Correct it.
if not self.version or not self.version.startswith('rss'):
attr_version = attrsD.get('version', '')
version = versionmap.get(attr_version)
if version:
self.version = version
elif attr_version.startswith('2.'):
self.version = 'rss20'
else:
self.version = 'rss'
def _start_dlhottitles(self, attrsD):
self.version = 'hotrss'
def _start_channel(self, attrsD):
self.infeed = 1
self._cdf_common(attrsD)
_start_feedinfo = _start_channel
def _cdf_common(self, attrsD):
if attrsD.has_key('lastmod'):
self._start_modified({})
self.elementstack[-1][-1] = attrsD['lastmod']
self._end_modified()
if attrsD.has_key('href'):
self._start_link({})
self.elementstack[-1][-1] = attrsD['href']
self._end_link()
def _start_feed(self, attrsD):
self.infeed = 1
versionmap = {'0.1': 'atom01',
'0.2': 'atom02',
'0.3': 'atom03'}
if not self.version:
attr_version = attrsD.get('version')
version = versionmap.get(attr_version)
if version:
self.version = version
else:
self.version = 'atom'
def _end_channel(self):
self.infeed = 0
_end_feed = _end_channel
def _start_image(self, attrsD):
context = self._getContext()
if not self.inentry:
context.setdefault('image', FeedParserDict())
self.inimage = 1
self.hasTitle = 0
self.push('image', 0)
def _end_image(self):
self.pop('image')
self.inimage = 0
def _start_textinput(self, attrsD):
context = self._getContext()
context.setdefault('textinput', FeedParserDict())
self.intextinput = 1
self.hasTitle = 0
self.push('textinput', 0)
_start_textInput = _start_textinput
def _end_textinput(self):
self.pop('textinput')
self.intextinput = 0
_end_textInput = _end_textinput
def _start_author(self, attrsD):
self.inauthor = 1
self.push('author', 1)
# Append a new FeedParserDict when expecting an author
context = self._getContext()
context.setdefault('authors', [])
context['authors'].append(FeedParserDict())
_start_managingeditor = _start_author
_start_dc_author = _start_author
_start_dc_creator = _start_author
_start_itunes_author = _start_author
def _end_author(self):
self.pop('author')
self.inauthor = 0
self._sync_author_detail()
_end_managingeditor = _end_author
_end_dc_author = _end_author
_end_dc_creator = _end_author
_end_itunes_author = _end_author
def _start_itunes_owner(self, attrsD):
self.inpublisher = 1
self.push('publisher', 0)
def _end_itunes_owner(self):
self.pop('publisher')
self.inpublisher = 0
self._sync_author_detail('publisher')
def _start_contributor(self, attrsD):
self.incontributor = 1
context = self._getContext()
context.setdefault('contributors', [])
context['contributors'].append(FeedParserDict())
self.push('contributor', 0)
def _end_contributor(self):
self.pop('contributor')
self.incontributor = 0
def _start_dc_contributor(self, attrsD):
self.incontributor = 1
context = self._getContext()
context.setdefault('contributors', [])
context['contributors'].append(FeedParserDict())
self.push('name', 0)
def _end_dc_contributor(self):
self._end_name()
self.incontributor = 0
def _start_name(self, attrsD):
self.push('name', 0)
_start_itunes_name = _start_name
def _end_name(self):
value = self.pop('name')
if self.inpublisher:
self._save_author('name', value, 'publisher')
elif self.inauthor:
self._save_author('name', value)
elif self.incontributor:
self._save_contributor('name', value)
elif self.intextinput:
context = self._getContext()
context['name'] = value
_end_itunes_name = _end_name
def _start_width(self, attrsD):
self.push('width', 0)
def _end_width(self):
value = self.pop('width')
try:
value = int(value)
except:
value = 0
if self.inimage:
context = self._getContext()
context['width'] = value
def _start_height(self, attrsD):
self.push('height', 0)
def _end_height(self):
value = self.pop('height')
try:
value = int(value)
except:
value = 0
if self.inimage:
context = self._getContext()
context['height'] = value
def _start_url(self, attrsD):
self.push('href', 1)
_start_homepage = _start_url
_start_uri = _start_url
def _end_url(self):
value = self.pop('href')
if self.inauthor:
self._save_author('href', value)
elif self.incontributor:
self._save_contributor('href', value)
_end_homepage = _end_url
_end_uri = _end_url
def _start_email(self, attrsD):
self.push('email', 0)
_start_itunes_email = _start_email
def _end_email(self):
value = self.pop('email')
if self.inpublisher:
self._save_author('email', value, 'publisher')
elif self.inauthor:
self._save_author('email', value)
elif self.incontributor:
self._save_contributor('email', value)
_end_itunes_email = _end_email
def _getContext(self):
if self.insource:
context = self.sourcedata
elif self.inimage and self.feeddata.has_key('image'):
context = self.feeddata['image']
elif self.intextinput:
context = self.feeddata['textinput']
elif self.inentry:
context = self.entries[-1]
else:
context = self.feeddata
return context
def _save_author(self, key, value, prefix='author'):
context = self._getContext()
context.setdefault(prefix + '_detail', FeedParserDict())
context[prefix + '_detail'][key] = value
self._sync_author_detail()
context.setdefault('authors', [FeedParserDict()])
context['authors'][-1][key] = value
def _save_contributor(self, key, value):
context = self._getContext()
context.setdefault('contributors', [FeedParserDict()])
context['contributors'][-1][key] = value
def _sync_author_detail(self, key='author'):
context = self._getContext()
detail = context.get('%s_detail' % key)
if detail:
name = detail.get('name')
email = detail.get('email')
if name and email:
context[key] = '%s (%s)' % (name, email)
elif name:
context[key] = name
elif email:
context[key] = email
else:
author, email = context.get(key), None
if not author: return
emailmatch = re.search(r'''(([a-zA-Z0-9\_\-\.\+]+)@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.)|(([a-zA-Z0-9\-]+\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\]?))(\?subject=\S+)?''', author)
if emailmatch:
email = emailmatch.group(0)
# probably a better way to do the following, but it passes all the tests
author = author.replace(email, '')
author = author.replace('()', '')
                author = author.replace('<>', '')
                author = author.replace('&lt;&gt;', '')
author = author.strip()
if author and (author[0] == '('):
author = author[1:]
if author and (author[-1] == ')'):
author = author[:-1]
author = author.strip()
if author or email:
context.setdefault('%s_detail' % key, FeedParserDict())
if author:
context['%s_detail' % key]['name'] = author
if email:
context['%s_detail' % key]['email'] = email
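    # Illustrative example (editor's note, not part of the original
    # module): given only a combined author string, the e-mail regex
    # above splits it apart, so
    #     context['author'] = 'Jane Doe (jane@example.com)'
    # yields
    #     context['author_detail'] = {'name': 'Jane Doe',
    #                                 'email': 'jane@example.com'}
    # while the branch at the top rebuilds 'Jane Doe (jane@example.com)'
    # from an existing author_detail.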
def _start_subtitle(self, attrsD):
self.pushContent('subtitle', attrsD, 'text/plain', 1)
_start_tagline = _start_subtitle
_start_itunes_subtitle = _start_subtitle
def _end_subtitle(self):
self.popContent('subtitle')
_end_tagline = _end_subtitle
_end_itunes_subtitle = _end_subtitle
def _start_rights(self, attrsD):
self.pushContent('rights', attrsD, 'text/plain', 1)
_start_dc_rights = _start_rights
_start_copyright = _start_rights
def _end_rights(self):
self.popContent('rights')
_end_dc_rights = _end_rights
_end_copyright = _end_rights
def _start_item(self, attrsD):
self.entries.append(FeedParserDict())
self.push('item', 0)
self.inentry = 1
self.guidislink = 0
self.hasTitle = 0
id = self._getAttribute(attrsD, 'rdf:about')
if id:
context = self._getContext()
context['id'] = id
self._cdf_common(attrsD)
_start_entry = _start_item
_start_product = _start_item
def _end_item(self):
self.pop('item')
self.inentry = 0
_end_entry = _end_item
def _start_dc_language(self, attrsD):
self.push('language', 1)
_start_language = _start_dc_language
def _end_dc_language(self):
self.lang = self.pop('language')
_end_language = _end_dc_language
def _start_dc_publisher(self, attrsD):
self.push('publisher', 1)
_start_webmaster = _start_dc_publisher
def _end_dc_publisher(self):
self.pop('publisher')
self._sync_author_detail('publisher')
_end_webmaster = _end_dc_publisher
def _start_published(self, attrsD):
self.push('published', 1)
_start_dcterms_issued = _start_published
_start_issued = _start_published
def _end_published(self):
value = self.pop('published')
self._save('published_parsed', _parse_date(value), overwrite=True)
_end_dcterms_issued = _end_published
_end_issued = _end_published
def _start_updated(self, attrsD):
self.push('updated', 1)
_start_modified = _start_updated
_start_dcterms_modified = _start_updated
_start_pubdate = _start_updated
_start_dc_date = _start_updated
_start_lastbuilddate = _start_updated
def _end_updated(self):
value = self.pop('updated')
parsed_value = _parse_date(value)
self._save('updated_parsed', parsed_value, overwrite=True)
_end_modified = _end_updated
_end_dcterms_modified = _end_updated
_end_pubdate = _end_updated
_end_dc_date = _end_updated
_end_lastbuilddate = _end_updated
def _start_created(self, attrsD):
self.push('created', 1)
_start_dcterms_created = _start_created
def _end_created(self):
value = self.pop('created')
self._save('created_parsed', _parse_date(value), overwrite=True)
_end_dcterms_created = _end_created
def _start_expirationdate(self, attrsD):
self.push('expired', 1)
def _end_expirationdate(self):
self._save('expired_parsed', _parse_date(self.pop('expired')), overwrite=True)
def _start_cc_license(self, attrsD):
context = self._getContext()
value = self._getAttribute(attrsD, 'rdf:resource')
attrsD = FeedParserDict()
attrsD['rel']='license'
if value: attrsD['href']=value
context.setdefault('links', []).append(attrsD)
def _start_creativecommons_license(self, attrsD):
self.push('license', 1)
_start_creativeCommons_license = _start_creativecommons_license
def _end_creativecommons_license(self):
value = self.pop('license')
context = self._getContext()
attrsD = FeedParserDict()
attrsD['rel']='license'
if value: attrsD['href']=value
context.setdefault('links', []).append(attrsD)
del context['license']
_end_creativeCommons_license = _end_creativecommons_license
def _addXFN(self, relationships, href, name):
context = self._getContext()
xfn = context.setdefault('xfn', [])
value = FeedParserDict({'relationships': relationships, 'href': href, 'name': name})
if value not in xfn:
xfn.append(value)
def _addTag(self, term, scheme, label):
context = self._getContext()
tags = context.setdefault('tags', [])
if (not term) and (not scheme) and (not label): return
value = FeedParserDict({'term': term, 'scheme': scheme, 'label': label})
if value not in tags:
tags.append(value)
def _start_category(self, attrsD):
if _debug: sys.stderr.write('entering _start_category with %s\n' % repr(attrsD))
term = attrsD.get('term')
scheme = attrsD.get('scheme', attrsD.get('domain'))
label = attrsD.get('label')
self._addTag(term, scheme, label)
self.push('category', 1)
_start_dc_subject = _start_category
_start_keywords = _start_category
def _start_media_category(self, attrsD):
attrsD.setdefault('scheme', 'http://search.yahoo.com/mrss/category_schema')
self._start_category(attrsD)
def _end_itunes_keywords(self):
for term in self.pop('itunes_keywords').split():
self._addTag(term, 'http://www.itunes.com/', None)
def _start_itunes_category(self, attrsD):
self._addTag(attrsD.get('text'), 'http://www.itunes.com/', None)
self.push('category', 1)
def _end_category(self):
value = self.pop('category')
if not value: return
context = self._getContext()
tags = context['tags']
if value and len(tags) and not tags[-1]['term']:
tags[-1]['term'] = value
else:
self._addTag(value, None, None)
_end_dc_subject = _end_category
_end_keywords = _end_category
_end_itunes_category = _end_category
_end_media_category = _end_category
def _start_cloud(self, attrsD):
self._getContext()['cloud'] = FeedParserDict(attrsD)
def _start_link(self, attrsD):
attrsD.setdefault('rel', 'alternate')
if attrsD['rel'] == 'self':
attrsD.setdefault('type', 'application/atom+xml')
else:
attrsD.setdefault('type', 'text/html')
context = self._getContext()
attrsD = self._itsAnHrefDamnIt(attrsD)
if attrsD.has_key('href'):
attrsD['href'] = self.resolveURI(attrsD['href'])
expectingText = self.infeed or self.inentry or self.insource
context.setdefault('links', [])
if not (self.inentry and self.inimage):
context['links'].append(FeedParserDict(attrsD))
if attrsD.has_key('href'):
expectingText = 0
if (attrsD.get('rel') == 'alternate') and (self.mapContentType(attrsD.get('type')) in self.html_types):
context['link'] = attrsD['href']
else:
self.push('link', expectingText)
_start_producturl = _start_link
def _end_link(self):
value = self.pop('link')
context = self._getContext()
_end_producturl = _end_link
def _start_guid(self, attrsD):
self.guidislink = (attrsD.get('ispermalink', 'true') == 'true')
self.push('id', 1)
def _end_guid(self):
value = self.pop('id')
self._save('guidislink', self.guidislink and not self._getContext().has_key('link'))
if self.guidislink:
# guid acts as link, but only if 'ispermalink' is not present or is 'true',
# and only if the item doesn't already have a link element
self._save('link', value)
def _start_title(self, attrsD):
if self.svgOK: return self.unknown_starttag('title', attrsD.items())
self.pushContent('title', attrsD, 'text/plain', self.infeed or self.inentry or self.insource)
_start_dc_title = _start_title
_start_media_title = _start_title
def _end_title(self):
if self.svgOK: return
value = self.popContent('title')
if not value: return
context = self._getContext()
self.hasTitle = 1
_end_dc_title = _end_title
def _end_media_title(self):
hasTitle = self.hasTitle
self._end_title()
self.hasTitle = hasTitle
def _start_description(self, attrsD):
context = self._getContext()
if context.has_key('summary'):
self._summaryKey = 'content'
self._start_content(attrsD)
else:
self.pushContent('description', attrsD, 'text/html', self.infeed or self.inentry or self.insource)
_start_dc_description = _start_description
def _start_abstract(self, attrsD):
self.pushContent('description', attrsD, 'text/plain', self.infeed or self.inentry or self.insource)
def _end_description(self):
if self._summaryKey == 'content':
self._end_content()
else:
value = self.popContent('description')
self._summaryKey = None
_end_abstract = _end_description
_end_dc_description = _end_description
def _start_info(self, attrsD):
self.pushContent('info', attrsD, 'text/plain', 1)
_start_feedburner_browserfriendly = _start_info
def _end_info(self):
self.popContent('info')
_end_feedburner_browserfriendly = _end_info
def _start_generator(self, attrsD):
if attrsD:
attrsD = self._itsAnHrefDamnIt(attrsD)
if attrsD.has_key('href'):
attrsD['href'] = self.resolveURI(attrsD['href'])
self._getContext()['generator_detail'] = FeedParserDict(attrsD)
self.push('generator', 1)
def _end_generator(self):
value = self.pop('generator')
context = self._getContext()
if context.has_key('generator_detail'):
context['generator_detail']['name'] = value
def _start_admin_generatoragent(self, attrsD):
self.push('generator', 1)
value = self._getAttribute(attrsD, 'rdf:resource')
if value:
self.elementstack[-1][2].append(value)
self.pop('generator')
self._getContext()['generator_detail'] = FeedParserDict({'href': value})
def _start_admin_errorreportsto(self, attrsD):
self.push('errorreportsto', 1)
value = self._getAttribute(attrsD, 'rdf:resource')
if value:
self.elementstack[-1][2].append(value)
self.pop('errorreportsto')
def _start_summary(self, attrsD):
context = self._getContext()
if context.has_key('summary'):
self._summaryKey = 'content'
self._start_content(attrsD)
else:
self._summaryKey = 'summary'
self.pushContent(self._summaryKey, attrsD, 'text/plain', 1)
_start_itunes_summary = _start_summary
def _end_summary(self):
if self._summaryKey == 'content':
self._end_content()
else:
self.popContent(self._summaryKey or 'summary')
self._summaryKey = None
_end_itunes_summary = _end_summary
def _start_enclosure(self, attrsD):
attrsD = self._itsAnHrefDamnIt(attrsD)
context = self._getContext()
attrsD['rel']='enclosure'
context.setdefault('links', []).append(FeedParserDict(attrsD))
def _start_source(self, attrsD):
if 'url' in attrsD:
# This means that we're processing a source element from an RSS 2.0 feed
self.sourcedata['href'] = attrsD[u'url']
self.push('source', 1)
self.insource = 1
self.hasTitle = 0
def _end_source(self):
self.insource = 0
value = self.pop('source')
if value:
self.sourcedata['title'] = value
self._getContext()['source'] = copy.deepcopy(self.sourcedata)
self.sourcedata.clear()
def _start_content(self, attrsD):
self.pushContent('content', attrsD, 'text/plain', 1)
src = attrsD.get('src')
if src:
self.contentparams['src'] = src
self.push('content', 1)
def _start_prodlink(self, attrsD):
self.pushContent('content', attrsD, 'text/html', 1)
def _start_body(self, attrsD):
self.pushContent('content', attrsD, 'application/xhtml+xml', 1)
_start_xhtml_body = _start_body
def _start_content_encoded(self, attrsD):
self.pushContent('content', attrsD, 'text/html', 1)
_start_fullitem = _start_content_encoded
def _end_content(self):
copyToSummary = self.mapContentType(self.contentparams.get('type')) in (['text/plain'] + self.html_types)
value = self.popContent('content')
if copyToSummary:
self._save('summary', value)
_end_body = _end_content
_end_xhtml_body = _end_content
_end_content_encoded = _end_content
_end_fullitem = _end_content
_end_prodlink = _end_content
def _start_itunes_image(self, attrsD):
self.push('itunes_image', 0)
if attrsD.get('href'):
self._getContext()['image'] = FeedParserDict({'href': attrsD.get('href')})
_start_itunes_link = _start_itunes_image
def _end_itunes_block(self):
value = self.pop('itunes_block', 0)
self._getContext()['itunes_block'] = (value == 'yes') and 1 or 0
def _end_itunes_explicit(self):
value = self.pop('itunes_explicit', 0)
# Convert 'yes' -> True, 'clean' to False, and any other value to None
# False and None both evaluate as False, so the difference can be ignored
# by applications that only need to know if the content is explicit.
self._getContext()['itunes_explicit'] = (None, False, True)[(value == 'yes' and 2) or value == 'clean' or 0]
def _start_media_content(self, attrsD):
context = self._getContext()
context.setdefault('media_content', [])
context['media_content'].append(attrsD)
def _start_media_thumbnail(self, attrsD):
context = self._getContext()
context.setdefault('media_thumbnail', [])
self.push('url', 1) # new
context['media_thumbnail'].append(attrsD)
def _end_media_thumbnail(self):
url = self.pop('url')
context = self._getContext()
if url != None and len(url.strip()) != 0:
if not context['media_thumbnail'][-1].has_key('url'):
context['media_thumbnail'][-1]['url'] = url
def _start_media_player(self, attrsD):
self.push('media_player', 0)
self._getContext()['media_player'] = FeedParserDict(attrsD)
def _end_media_player(self):
value = self.pop('media_player')
context = self._getContext()
context['media_player']['content'] = value
def _start_newlocation(self, attrsD):
self.push('newlocation', 1)
def _end_newlocation(self):
url = self.pop('newlocation')
context = self._getContext()
# don't set newlocation if the context isn't right
if context is not self.feeddata:
return
context['newlocation'] = _makeSafeAbsoluteURI(self.baseuri, url.strip())
if _XML_AVAILABLE:
class _StrictFeedParser(_FeedParserMixin, xml.sax.handler.ContentHandler):
def __init__(self, baseuri, baselang, encoding):
if _debug: sys.stderr.write('trying StrictFeedParser\n')
xml.sax.handler.ContentHandler.__init__(self)
_FeedParserMixin.__init__(self, baseuri, baselang, encoding)
self.bozo = 0
self.exc = None
self.decls = {}
def startPrefixMapping(self, prefix, uri):
self.trackNamespace(prefix, uri)
if uri == 'http://www.w3.org/1999/xlink':
self.decls['xmlns:'+prefix] = uri
def startElementNS(self, name, qname, attrs):
namespace, localname = name
lowernamespace = str(namespace or '').lower()
if lowernamespace.find('backend.userland.com/rss') <> -1:
# match any backend.userland.com namespace
namespace = 'http://backend.userland.com/rss'
lowernamespace = namespace
if qname and qname.find(':') > 0:
givenprefix = qname.split(':')[0]
else:
givenprefix = None
prefix = self._matchnamespaces.get(lowernamespace, givenprefix)
if givenprefix and (prefix == None or (prefix == '' and lowernamespace == '')) and not self.namespacesInUse.has_key(givenprefix):
raise UndeclaredNamespace, "'%s' is not associated with a namespace" % givenprefix
localname = str(localname).lower()
# qname implementation is horribly broken in Python 2.1 (it
# doesn't report any), and slightly broken in Python 2.2 (it
# doesn't report the xml: namespace). So we match up namespaces
# with a known list first, and then possibly override them with
# the qnames the SAX parser gives us (if indeed it gives us any
# at all). Thanks to MatejC for helping me test this and
# tirelessly telling me that it didn't work yet.
attrsD, self.decls = self.decls, {}
if localname=='math' and namespace=='http://www.w3.org/1998/Math/MathML':
attrsD['xmlns']=namespace
if localname=='svg' and namespace=='http://www.w3.org/2000/svg':
attrsD['xmlns']=namespace
if prefix:
localname = prefix.lower() + ':' + localname
elif namespace and not qname: #Expat
for name,value in self.namespacesInUse.items():
if name and value == namespace:
localname = name + ':' + localname
break
if _debug: sys.stderr.write('startElementNS: qname = %s, namespace = %s, givenprefix = %s, prefix = %s, attrs = %s, localname = %s\n' % (qname, namespace, givenprefix, prefix, attrs.items(), localname))
for (namespace, attrlocalname), attrvalue in attrs._attrs.items():
lowernamespace = (namespace or '').lower()
prefix = self._matchnamespaces.get(lowernamespace, '')
if prefix:
attrlocalname = prefix + ':' + attrlocalname
attrsD[str(attrlocalname).lower()] = attrvalue
for qname in attrs.getQNames():
attrsD[str(qname).lower()] = attrs.getValueByQName(qname)
self.unknown_starttag(localname, attrsD.items())
def characters(self, text):
self.handle_data(text)
def endElementNS(self, name, qname):
namespace, localname = name
lowernamespace = str(namespace or '').lower()
if qname and qname.find(':') > 0:
givenprefix = qname.split(':')[0]
else:
givenprefix = ''
prefix = self._matchnamespaces.get(lowernamespace, givenprefix)
if prefix:
localname = prefix + ':' + localname
elif namespace and not qname: #Expat
for name,value in self.namespacesInUse.items():
if name and value == namespace:
localname = name + ':' + localname
break
localname = str(localname).lower()
self.unknown_endtag(localname)
def error(self, exc):
self.bozo = 1
self.exc = exc
def fatalError(self, exc):
self.error(exc)
raise exc
class _BaseHTMLProcessor(sgmllib.SGMLParser):
special = re.compile('''[<>'"]''')
bare_ampersand = re.compile("&(?!#\d+;|#x[0-9a-fA-F]+;|\w+;)")
elements_no_end_tag = [
'area', 'base', 'basefont', 'br', 'col', 'command', 'embed', 'frame',
'hr', 'img', 'input', 'isindex', 'keygen', 'link', 'meta', 'param',
'source', 'track', 'wbr'
]
def __init__(self, encoding, _type):
self.encoding = encoding
self._type = _type
if _debug: sys.stderr.write('entering BaseHTMLProcessor, encoding=%s\n' % self.encoding)
sgmllib.SGMLParser.__init__(self)
def reset(self):
self.pieces = []
sgmllib.SGMLParser.reset(self)
def _shorttag_replace(self, match):
tag = match.group(1)
if tag in self.elements_no_end_tag:
return '<' + tag + ' />'
else:
return '<' + tag + '></' + tag + '>'
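    # Illustrative example (editor's note, not part of the original
    # module): XML-style empty tags are rewritten so the SGML parser sees
    # valid HTML -- void elements keep the self-closing form, everything
    # else gets an explicit end tag:
    #     <br/>   -> <br />
    #     <span/> -> <span></span>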
def parse_starttag(self,i):
j=sgmllib.SGMLParser.parse_starttag(self, i)
if self._type == 'application/xhtml+xml':
if j>2 and self.rawdata[j-2:j]=='/>':
self.unknown_endtag(self.lasttag)
return j
def feed(self, data):
        data = re.compile(r'<!((?!DOCTYPE|--|\[))', re.IGNORECASE).sub(r'&lt;!\1', data)
#data = re.sub(r'<(\S+?)\s*?/>', self._shorttag_replace, data) # bug [ 1399464 ] Bad regexp for _shorttag_replace
data = re.sub(r'<([^<>\s]+?)\s*/>', self._shorttag_replace, data)
data = data.replace(''', "'")
data = data.replace('"', '"')
try:
bytes
if bytes is str:
raise NameError
self.encoding = self.encoding + '_INVALID_PYTHON_3'
except NameError:
if self.encoding and type(data) == type(u''):
data = data.encode(self.encoding)
sgmllib.SGMLParser.feed(self, data)
sgmllib.SGMLParser.close(self)
def normalize_attrs(self, attrs):
if not attrs: return attrs
# utility method to be called by descendants
attrs = dict([(k.lower(), v) for k, v in attrs]).items()
attrs = [(k, k in ('rel', 'type') and v.lower() or v) for k, v in attrs]
attrs.sort()
return attrs
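    # Illustrative example (editor's note, not part of the original
    # module, `p` being a hypothetical instance): attribute names are
    # lowercased, rel/type values are lowercased, and the pairs are
    # sorted:
    #
    #     >>> p.normalize_attrs([('Rel', 'SELF'), ('HREF', 'X')])
    #     [('href', 'X'), ('rel', 'self')]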
def unknown_starttag(self, tag, attrs):
# called for each start tag
# attrs is a list of (attr, value) tuples
# e.g. for <pre class='screen'>, tag='pre', attrs=[('class', 'screen')]
if _debug: sys.stderr.write('_BaseHTMLProcessor, unknown_starttag, tag=%s\n' % tag)
uattrs = []
strattrs=''
if attrs:
for key, value in attrs:
                value=value.replace('&gt;','>').replace('&lt;','<').replace('&quot;','"')
                value = self.bare_ampersand.sub("&amp;", value)
# thanks to Kevin Marks for this breathtaking hack to deal with (valid) high-bit attribute values in UTF-8 feeds
if type(value) != type(u''):
try:
value = unicode(value, self.encoding)
except:
value = unicode(value, 'iso-8859-1')
try:
# Currently, in Python 3 the key is already a str, and cannot be decoded again
uattrs.append((unicode(key, self.encoding), value))
except TypeError:
uattrs.append((key, value))
strattrs = u''.join([u' %s="%s"' % (key, value) for key, value in uattrs])
if self.encoding:
try:
strattrs=strattrs.encode(self.encoding)
except:
pass
if tag in self.elements_no_end_tag:
self.pieces.append('<%(tag)s%(strattrs)s />' % locals())
else:
self.pieces.append('<%(tag)s%(strattrs)s>' % locals())
def unknown_endtag(self, tag):
# called for each end tag, e.g. for </pre>, tag will be 'pre'
# Reconstruct the original end tag.
if tag not in self.elements_no_end_tag:
self.pieces.append("</%(tag)s>" % locals())
def handle_charref(self, ref):
        # called for each character reference, e.g. for '&#160;', ref will be '160'
# Reconstruct the original character reference.
if ref.startswith('x'):
value = unichr(int(ref[1:],16))
else:
value = unichr(int(ref))
if value in _cp1252.keys():
self.pieces.append('&#%s;' % hex(ord(_cp1252[value]))[1:])
else:
self.pieces.append('&#%(ref)s;' % locals())
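    # Illustrative example (editor's note, not part of the original
    # module): character references in the Windows-1252 control range are
    # re-emitted as the Unicode code points they almost certainly meant,
    # e.g. &#146; (cp1252 right single quote) becomes &#x2019;, while
    # ordinary references such as &#160; pass through unchanged.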
def handle_entityref(self, ref):
        # called for each entity reference, e.g. for '&copy;', ref will be 'copy'
# Reconstruct the original entity reference.
if name2codepoint.has_key(ref):
self.pieces.append('&%(ref)s;' % locals())
else:
self.pieces.append('&%(ref)s' % locals())
def handle_data(self, text):
# called for each block of plain text, i.e. outside of any tag and
# not containing any character or entity references
# Store the original text verbatim.
if _debug: sys.stderr.write('_BaseHTMLProcessor, handle_data, text=%s\n' % text)
self.pieces.append(text)
def handle_comment(self, text):
# called for each HTML comment, e.g. <!-- insert Javascript code here -->
# Reconstruct the original comment.
self.pieces.append('<!--%(text)s-->' % locals())
def handle_pi(self, text):
# called for each processing instruction, e.g. <?instruction>
# Reconstruct original processing instruction.
self.pieces.append('<?%(text)s>' % locals())
def handle_decl(self, text):
# called for the DOCTYPE, if present, e.g.
# <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
# "http://www.w3.org/TR/html4/loose.dtd">
# Reconstruct original DOCTYPE
self.pieces.append('<!%(text)s>' % locals())
_new_declname_match = re.compile(r'[a-zA-Z][-_.a-zA-Z0-9:]*\s*').match
def _scan_name(self, i, declstartpos):
rawdata = self.rawdata
n = len(rawdata)
if i == n:
return None, -1
m = self._new_declname_match(rawdata, i)
if m:
s = m.group()
name = s.strip()
if (i + len(s)) == n:
return None, -1 # end of buffer
return name.lower(), m.end()
else:
self.handle_data(rawdata)
# self.updatepos(declstartpos, i)
return None, -1
def convert_charref(self, name):
return '&#%s;' % name
def convert_entityref(self, name):
return '&%s;' % name
def output(self):
'''Return processed HTML as a single string'''
return ''.join([str(p) for p in self.pieces])
def parse_declaration(self, i):
try:
return sgmllib.SGMLParser.parse_declaration(self, i)
except sgmllib.SGMLParseError:
# escape the doctype declaration and continue parsing
            self.handle_data('&lt;')
return i+1
class _LooseFeedParser(_FeedParserMixin, _BaseHTMLProcessor):
def __init__(self, baseuri, baselang, encoding, entities):
sgmllib.SGMLParser.__init__(self)
_FeedParserMixin.__init__(self, baseuri, baselang, encoding)
_BaseHTMLProcessor.__init__(self, encoding, 'application/xhtml+xml')
self.entities=entities
def decodeEntities(self, element, data):
        data = data.replace('&#60;', '&lt;')
        data = data.replace('&#x3c;', '&lt;')
        data = data.replace('&#x3C;', '&lt;')
        data = data.replace('&#62;', '&gt;')
        data = data.replace('&#x3e;', '&gt;')
        data = data.replace('&#x3E;', '&gt;')
        data = data.replace('&#38;', '&amp;')
        data = data.replace('&#x26;', '&amp;')
        data = data.replace('&#34;', '&quot;')
        data = data.replace('&#x22;', '&quot;')
        data = data.replace('&#39;', '&apos;')
        data = data.replace('&#x27;', '&apos;')
if self.contentparams.has_key('type') and not self.contentparams.get('type', 'xml').endswith('xml'):
            data = data.replace('&lt;', '<')
            data = data.replace('&gt;', '>')
            data = data.replace('&amp;', '&')
            data = data.replace('&quot;', '"')
            data = data.replace('&apos;', "'")
return data
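    # Design note (editor's note, not part of the original module): the
    # numeric forms are normalized to named entities first so embedded
    # markup survives the loose sgmllib-based parse; only when the
    # declared content type is not XML are the named entities collapsed
    # to literal characters, e.g. '&#60;' -> '&lt;' -> '<'.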
def strattrs(self, attrs):
return ''.join([' %s="%s"' % (n,v.replace('"','"')) for n,v in attrs])
class _MicroformatsParser:
STRING = 1
DATE = 2
URI = 3
NODE = 4
EMAIL = 5
known_xfn_relationships = ['contact', 'acquaintance', 'friend', 'met', 'co-worker', 'coworker', 'colleague', 'co-resident', 'coresident', 'neighbor', 'child', 'parent', 'sibling', 'brother', 'sister', 'spouse', 'wife', 'husband', 'kin', 'relative', 'muse', 'crush', 'date', 'sweetheart', 'me']
known_binary_extensions = ['zip','rar','exe','gz','tar','tgz','tbz2','bz2','z','7z','dmg','img','sit','sitx','hqx','deb','rpm','bz2','jar','rar','iso','bin','msi','mp2','mp3','ogg','ogm','mp4','m4v','m4a','avi','wma','wmv']
def __init__(self, data, baseuri, encoding):
self.document = BeautifulSoup.BeautifulSoup(data)
self.baseuri = baseuri
self.encoding = encoding
if type(data) == type(u''):
data = data.encode(encoding)
self.tags = []
self.enclosures = []
self.xfn = []
self.vcard = None
def vcardEscape(self, s):
if type(s) in (type(''), type(u'')):
s = s.replace(',', '\\,').replace(';', '\\;').replace('\n', '\\n')
return s
def vcardFold(self, s):
s = re.sub(';+$', '', s)
sFolded = ''
iMax = 75
sPrefix = ''
while len(s) > iMax:
sFolded += sPrefix + s[:iMax] + '\n'
s = s[iMax:]
sPrefix = ' '
iMax = 74
sFolded += sPrefix + s
return sFolded
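    # Illustrative example (editor's note, not part of the original
    # module, `mf` being a hypothetical _MicroformatsParser instance):
    # folding 'NOTE:' plus 80 'x' characters yields a 75-character first
    # line, a newline, then a single space and the remaining characters,
    # per the vCard (RFC 2426) line-folding rules:
    #
    #     >>> mf.vcardFold('NOTE:' + 'x' * 80).index('\n')
    #     75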
def normalize(self, s):
return re.sub(r'\s+', ' ', s).strip()
def unique(self, aList):
results = []
for element in aList:
if element not in results:
results.append(element)
return results
def toISO8601(self, dt):
return time.strftime('%Y-%m-%dT%H:%M:%SZ', dt)
def getPropertyValue(self, elmRoot, sProperty, iPropertyType=4, bAllowMultiple=0, bAutoEscape=0):
all = lambda x: 1
sProperty = sProperty.lower()
bFound = 0
bNormalize = 1
propertyMatch = {'class': re.compile(r'\b%s\b' % sProperty)}
if bAllowMultiple and (iPropertyType != self.NODE):
snapResults = []
containers = elmRoot(['ul', 'ol'], propertyMatch)
for container in containers:
snapResults.extend(container('li'))
bFound = (len(snapResults) != 0)
if not bFound:
snapResults = elmRoot(all, propertyMatch)
bFound = (len(snapResults) != 0)
if (not bFound) and (sProperty == 'value'):
snapResults = elmRoot('pre')
bFound = (len(snapResults) != 0)
bNormalize = not bFound
if not bFound:
snapResults = [elmRoot]
bFound = (len(snapResults) != 0)
arFilter = []
if sProperty == 'vcard':
snapFilter = elmRoot(all, propertyMatch)
for node in snapFilter:
if node.findParent(all, propertyMatch):
arFilter.append(node)
arResults = []
for node in snapResults:
if node not in arFilter:
arResults.append(node)
bFound = (len(arResults) != 0)
if not bFound:
if bAllowMultiple: return []
elif iPropertyType == self.STRING: return ''
elif iPropertyType == self.DATE: return None
elif iPropertyType == self.URI: return ''
elif iPropertyType == self.NODE: return None
else: return None
arValues = []
for elmResult in arResults:
sValue = None
if iPropertyType == self.NODE:
if bAllowMultiple:
arValues.append(elmResult)
continue
else:
return elmResult
sNodeName = elmResult.name.lower()
if (iPropertyType == self.EMAIL) and (sNodeName == 'a'):
sValue = (elmResult.get('href') or '').split('mailto:').pop().split('?')[0]
if sValue:
sValue = bNormalize and self.normalize(sValue) or sValue.strip()
if (not sValue) and (sNodeName == 'abbr'):
sValue = elmResult.get('title')
if sValue:
sValue = bNormalize and self.normalize(sValue) or sValue.strip()
if (not sValue) and (iPropertyType == self.URI):
if sNodeName == 'a': sValue = elmResult.get('href')
elif sNodeName == 'img': sValue = elmResult.get('src')
elif sNodeName == 'object': sValue = elmResult.get('data')
if sValue:
sValue = bNormalize and self.normalize(sValue) or sValue.strip()
if (not sValue) and (sNodeName == 'img'):
sValue = elmResult.get('alt')
if sValue:
sValue = bNormalize and self.normalize(sValue) or sValue.strip()
if not sValue:
sValue = elmResult.renderContents()
sValue = re.sub(r'<\S[^>]*>', '', sValue)
sValue = sValue.replace('\r\n', '\n')
sValue = sValue.replace('\r', '\n')
if sValue:
sValue = bNormalize and self.normalize(sValue) or sValue.strip()
if not sValue: continue
if iPropertyType == self.DATE:
sValue = _parse_date_iso8601(sValue)
if bAllowMultiple:
arValues.append(bAutoEscape and self.vcardEscape(sValue) or sValue)
else:
return bAutoEscape and self.vcardEscape(sValue) or sValue
return arValues
def findVCards(self, elmRoot, bAgentParsing=0):
sVCards = ''
if not bAgentParsing:
arCards = self.getPropertyValue(elmRoot, 'vcard', bAllowMultiple=1)
else:
arCards = [elmRoot]
for elmCard in arCards:
arLines = []
def processSingleString(sProperty):
sValue = self.getPropertyValue(elmCard, sProperty, self.STRING, bAutoEscape=1).decode(self.encoding)
if sValue:
arLines.append(self.vcardFold(sProperty.upper() + ':' + sValue))
return sValue or u''
def processSingleURI(sProperty):
sValue = self.getPropertyValue(elmCard, sProperty, self.URI)
if sValue:
sContentType = ''
sEncoding = ''
sValueKey = ''
if sValue.startswith('data:'):
sEncoding = ';ENCODING=b'
sContentType = sValue.split(';')[0].split('/').pop()
sValue = sValue.split(',', 1).pop()
else:
elmValue = self.getPropertyValue(elmCard, sProperty)
if elmValue:
if sProperty != 'url':
sValueKey = ';VALUE=uri'
sContentType = elmValue.get('type', '').strip().split('/').pop().strip()
sContentType = sContentType.upper()
if sContentType == 'OCTET-STREAM':
sContentType = ''
if sContentType:
sContentType = ';TYPE=' + sContentType.upper()
arLines.append(self.vcardFold(sProperty.upper() + sEncoding + sContentType + sValueKey + ':' + sValue))
def processTypeValue(sProperty, arDefaultType, arForceType=None):
arResults = self.getPropertyValue(elmCard, sProperty, bAllowMultiple=1)
for elmResult in arResults:
arType = self.getPropertyValue(elmResult, 'type', self.STRING, 1, 1)
if arForceType:
arType = self.unique(arForceType + arType)
if not arType:
arType = arDefaultType
sValue = self.getPropertyValue(elmResult, 'value', self.EMAIL, 0)
if sValue:
arLines.append(self.vcardFold(sProperty.upper() + ';TYPE=' + ','.join(arType) + ':' + sValue))
# AGENT
# must do this before all other properties because it is destructive
# (removes nested class="vcard" nodes so they don't interfere with
# this vcard's other properties)
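            # An embedded agent in markup looks roughly like (hypothetical):
            #   <div class="vcard"> ... <div class="agent vcard"> ... </div></div>
            # The nested card is serialized recursively, newlines and
            # semicolons are escaped, and the node is then extract()ed so it
            # cannot be picked up again as a property of the outer card.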
arAgent = self.getPropertyValue(elmCard, 'agent', bAllowMultiple=1)
for elmAgent in arAgent:
if re.compile(r'\bvcard\b').search(elmAgent.get('class')):
sAgentValue = self.findVCards(elmAgent, 1) + '\n'
sAgentValue = sAgentValue.replace('\n', '\\n')
sAgentValue = sAgentValue.replace(';', '\\;')
if sAgentValue:
arLines.append(self.vcardFold('AGENT:' + sAgentValue))
# Completely remove the agent element from the parse tree
elmAgent.extract()
else:
                    sAgentValue = self.getPropertyValue(elmAgent, 'value', self.URI, bAutoEscape=1)
if sAgentValue:
arLines.append(self.vcardFold('AGENT;VALUE=uri:' + sAgentValue))
# FN (full name)
sFN = processSingleString('fn')
# N (name)
elmName = self.getPropertyValue(elmCard, 'n')
if elmName:
sFamilyName = self.getPropertyValue(elmName, 'family-name', self.STRING, bAutoEscape=1)
sGivenName = self.getPropertyValue(elmName, 'given-name', self.STRING, bAutoEscape=1)
arAdditionalNames = self.getPropertyValue(elmName, 'additional-name', self.STRING, 1, 1) + self.getPropertyValue(elmName, 'additional-names', self.STRING, 1, 1)
arHonorificPrefixes = self.getPropertyValue(elmName, 'honorific-prefix', self.STRING, 1, 1) + self.getPropertyValue(elmName, 'honorific-prefixes', self.STRING, 1, 1)
arHonorificSuffixes = self.getPropertyValue(elmName, 'honorific-suffix', self.STRING, 1, 1) + self.getPropertyValue(elmName, 'honorific-suffixes', self.STRING, 1, 1)
arLines.append(self.vcardFold('N:' + sFamilyName + ';' +
sGivenName + ';' +
','.join(arAdditionalNames) + ';' +
','.join(arHonorificPrefixes) + ';' +
','.join(arHonorificSuffixes)))
elif sFN:
# implied "N" optimization
# http://microformats.org/wiki/hcard#Implied_.22N.22_Optimization
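                # e.g. an fn of "John Smith" becomes N:Smith;John, while
                # "Smith, John" or "Smith J." (trailing comma on the first
                # word, or an initial as the second) is taken family-name
                # first.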
arNames = self.normalize(sFN).split()
if len(arNames) == 2:
bFamilyNameFirst = (arNames[0].endswith(',') or
len(arNames[1]) == 1 or
((len(arNames[1]) == 2) and (arNames[1].endswith('.'))))
if bFamilyNameFirst:
arLines.append(self.vcardFold('N:' + arNames[0] + ';' + arNames[1]))
else:
arLines.append(self.vcardFold('N:' + arNames[1] + ';' + arNames[0]))
# SORT-STRING
sSortString = self.getPropertyValue(elmCard, 'sort-string', self.STRING, bAutoEscape=1)
if sSortString:
arLines.append(self.vcardFold('SORT-STRING:' + sSortString))
# NICKNAME
arNickname = self.getPropertyValue(elmCard, 'nickname', self.STRING, 1, 1)
if arNickname:
arLines.append(self.vcardFold('NICKNAME:' + ','.join(arNickname)))
# PHOTO
processSingleURI('photo')
# BDAY
dtBday = self.getPropertyValue(elmCard, 'bday', self.DATE)
if dtBday:
arLines.append(self.vcardFold('BDAY:' + self.toISO8601(dtBday)))
# ADR (address)
arAdr = self.getPropertyValue(elmCard, 'adr', bAllowMultiple=1)
for elmAdr in arAdr:
arType = self.getPropertyValue(elmAdr, 'type', self.STRING, 1, 1)
if not arType:
arType = ['intl','postal','parcel','work'] # default adr types, see RFC 2426 section 3.2.1
sPostOfficeBox = self.getPropertyValue(elmAdr, 'post-office-box', self.STRING, 0, 1)
sExtendedAddress = self.getPropertyValue(elmAdr, 'extended-address', self.STRING, 0, 1)
sStreetAddress = self.getPropertyValue(elmAdr, 'street-address', self.STRING, 0, 1)
sLocality = self.getPropertyValue(elmAdr, 'locality', self.STRING, 0, 1)
sRegion = self.getPropertyValue(elmAdr, 'region', self.STRING, 0, 1)
sPostalCode = self.getPropertyValue(elmAdr, 'postal-code', self.STRING, 0, 1)
sCountryName = self.getPropertyValue(elmAdr, 'country-name', self.STRING, 0, 1)
arLines.append(self.vcardFold('ADR;TYPE=' + ','.join(arType) + ':' +
sPostOfficeBox + ';' +
sExtendedAddress + ';' +
sStreetAddress + ';' +
sLocality + ';' +
sRegion + ';' +
sPostalCode + ';' +
sCountryName))
# LABEL
processTypeValue('label', ['intl','postal','parcel','work'])
# TEL (phone number)
processTypeValue('tel', ['voice'])
# EMAIL
processTypeValue('email', ['internet'], ['internet'])
# MAILER
processSingleString('mailer')
# TZ (timezone)
processSingleString('tz')
# GEO (geographical information)
elmGeo = self.getPropertyValue(elmCard, 'geo')
if elmGeo:
sLatitude = self.getPropertyValue(elmGeo, 'latitude', self.STRING, 0, 1)
sLongitude = self.getPropertyValue(elmGeo, 'longitude', self.STRING, 0, 1)
arLines.append(self.vcardFold('GEO:' + sLatitude + ';' + sLongitude))
# TITLE
processSingleString('title')
# ROLE
processSingleString('role')
# LOGO
processSingleURI('logo')
# ORG (organization)
elmOrg = self.getPropertyValue(elmCard, 'org')
if elmOrg:
sOrganizationName = self.getPropertyValue(elmOrg, 'organization-name', self.STRING, 0, 1)
if not sOrganizationName:
# implied "organization-name" optimization
# http://microformats.org/wiki/hcard#Implied_.22organization-name.22_Optimization
sOrganizationName = self.getPropertyValue(elmCard, 'org', self.STRING, 0, 1)
if sOrganizationName:
arLines.append(self.vcardFold('ORG:' + sOrganizationName))
else:
arOrganizationUnit = self.getPropertyValue(elmOrg, 'organization-unit', self.STRING, 1, 1)
arLines.append(self.vcardFold('ORG:' + sOrganizationName + ';' + ';'.join(arOrganizationUnit)))
# CATEGORY
arCategory = self.getPropertyValue(elmCard, 'category', self.STRING, 1, 1) + self.getPropertyValue(elmCard, 'categories', self.STRING, 1, 1)
if arCategory:
arLines.append(self.vcardFold('CATEGORIES:' + ','.join(arCategory)))
# NOTE
processSingleString('note')
# REV
processSingleString('rev')
# SOUND
processSingleURI('sound')
# UID
processSingleString('uid')
# URL
processSingleURI('url')
# CLASS
processSingleString('class')
# KEY
processSingleURI('key')
if arLines:
arLines = [u'BEGIN:vCard',u'VERSION:3.0'] + arLines + [u'END:vCard']
sVCards += u'\n'.join(arLines) + u'\n'
return sVCards.strip()
def isProbablyDownloadable(self, elm):
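        # Heuristic, not a guarantee: a link is treated as downloadable if
        # its declared MIME type is audio/*, video/*, or a non-XML
        # application/* type, or failing that, if the URL path ends in one
        # of self.known_binary_extensions.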
attrsD = elm.attrMap
if not attrsD.has_key('href'): return 0
linktype = attrsD.get('type', '').strip()
if linktype.startswith('audio/') or \
linktype.startswith('video/') or \
(linktype.startswith('application/') and not linktype.endswith('xml')):
return 1
path = urlparse.urlparse(attrsD['href'])[2]
if path.find('.') == -1: return 0
fileext = path.split('.').pop().lower()
return fileext in self.known_binary_extensions
def findTags(self):
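        # rel="tag" microformat: for a hypothetical
        #   <a rel="tag" href="http://example.com/tags/python/">Python</a>
        # the term is the last non-empty path segment ('python') and the
        # tag scheme is the href with that segment removed
        # ('http://example.com/tags/').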
all = lambda x: 1
for elm in self.document(all, {'rel': re.compile(r'\btag\b')}):
href = elm.get('href')
if not href: continue
urlscheme, domain, path, params, query, fragment = \
urlparse.urlparse(_urljoin(self.baseuri, href))
segments = path.split('/')
tag = segments.pop()
if not tag:
tag = segments.pop()
tagscheme = urlparse.urlunparse((urlscheme, domain, '/'.join(segments), '', '', ''))
if not tagscheme.endswith('/'):
tagscheme += '/'
self.tags.append(FeedParserDict({"term": tag, "scheme": tagscheme, "label": elm.string or ''}))
def findEnclosures(self):
all = lambda x: 1
enclosure_match = re.compile(r'\benclosure\b')
for elm in self.document(all, {'href': re.compile(r'.+')}):
if not enclosure_match.search(elm.get('rel', '')) and not self.isProbablyDownloadable(elm): continue
if elm.attrMap not in self.enclosures:
self.enclosures.append(elm.attrMap)
if elm.string and not elm.get('title'):
self.enclosures[-1]['title'] = elm.string
def findXFN(self):
all = lambda x: 1
for elm in self.document(all, {'rel': re.compile('.+'), 'href': re.compile('.+')}):
rels = elm.get('rel', '').split()
xfn_rels = []
for rel in rels:
if rel in self.known_xfn_relationships:
xfn_rels.append(rel)
if xfn_rels:
self.xfn.append({"relationships": xfn_rels, "href": elm.get('href', ''), "name": elm.string})
def _parseMicroformats(htmlSource, baseURI, encoding):
if not BeautifulSoup: return
if _debug: sys.stderr.write('entering _parseMicroformats\n')
try:
p = _MicroformatsParser(htmlSource, baseURI, encoding)
except UnicodeEncodeError:
# sgmllib throws this exception when performing lookups of tags
# with non-ASCII characters in them.
return
p.vcard = p.findVCards(p.document)
p.findTags()
p.findEnclosures()
p.findXFN()
return {"tags": p.tags, "enclosures": p.enclosures, "xfn": p.xfn, "vcard": p.vcard}
class _RelativeURIResolver(_BaseHTMLProcessor):
relative_uris = [('a', 'href'),
('applet', 'codebase'),
('area', 'href'),
('blockquote', 'cite'),
('body', 'background'),
('del', 'cite'),
('form', 'action'),
('frame', 'longdesc'),
('frame', 'src'),
('iframe', 'longdesc'),
('iframe', 'src'),
('head', 'profile'),
('img', 'longdesc'),
('img', 'src'),
('img', 'usemap'),
('input', 'src'),
('input', 'usemap'),
('ins', 'cite'),
('link', 'href'),
('object', 'classid'),
('object', 'codebase'),
('object', 'data'),
('object', 'usemap'),
('q', 'cite'),
('script', 'src')]
def __init__(self, baseuri, encoding, _type):
_BaseHTMLProcessor.__init__(self, encoding, _type)
self.baseuri = baseuri
def resolveURI(self, uri):
return _makeSafeAbsoluteURI(_urljoin(self.baseuri, uri.strip()))
def unknown_starttag(self, tag, attrs):
if _debug:
sys.stderr.write('tag: [%s] with attributes: [%s]\n' % (tag, str(attrs)))
attrs = self.normalize_attrs(attrs)
attrs = [(key, ((tag, key) in self.relative_uris) and self.resolveURI(value) or value) for key, value in attrs]
_BaseHTMLProcessor.unknown_starttag(self, tag, attrs)
def _resolveRelativeURIs(htmlSource, baseURI, encoding, _type):
if _debug:
sys.stderr.write('entering _resolveRelativeURIs\n')
p = _RelativeURIResolver(baseURI, encoding, _type)
p.feed(htmlSource)
return p.output()
def _makeSafeAbsoluteURI(base, rel=None):
# bail if ACCEPTABLE_URI_SCHEMES is empty
if not ACCEPTABLE_URI_SCHEMES:
return _urljoin(base, rel or u'')
if not base:
return rel or u''
if not rel:
scheme = urlparse.urlparse(base)[0]
if not scheme or scheme in ACCEPTABLE_URI_SCHEMES:
return base
return u''
uri = _urljoin(base, rel)
if uri.strip().split(':', 1)[0] not in ACCEPTABLE_URI_SCHEMES:
return u''
return uri
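# Illustrative behaviour of _makeSafeAbsoluteURI, assuming a default
# ACCEPTABLE_URI_SCHEMES whitelist that includes 'http' but not 'javascript':
#   _makeSafeAbsoluteURI('http://a/', 'b/c')           -> 'http://a/b/c'
#   _makeSafeAbsoluteURI('http://a/', 'javascript:x')  -> u''
#   _makeSafeAbsoluteURI('http://a/')                  -> 'http://a/'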
class _HTMLSanitizer(_BaseHTMLProcessor):
acceptable_elements = ['a', 'abbr', 'acronym', 'address', 'area',
'article', 'aside', 'audio', 'b', 'big', 'blockquote', 'br', 'button',
'canvas', 'caption', 'center', 'cite', 'code', 'col', 'colgroup',
'command', 'datagrid', 'datalist', 'dd', 'del', 'details', 'dfn',
'dialog', 'dir', 'div', 'dl', 'dt', 'em', 'event-source', 'fieldset',
'figcaption', 'figure', 'footer', 'font', 'form', 'header', 'h1',
'h2', 'h3', 'h4', 'h5', 'h6', 'hr', 'i', 'img', 'input', 'ins',
'keygen', 'kbd', 'label', 'legend', 'li', 'm', 'map', 'menu', 'meter',
'multicol', 'nav', 'nextid', 'ol', 'output', 'optgroup', 'option',
'p', 'pre', 'progress', 'q', 's', 'samp', 'section', 'select',
'small', 'sound', 'source', 'spacer', 'span', 'strike', 'strong',
'sub', 'sup', 'table', 'tbody', 'td', 'textarea', 'time', 'tfoot',
'th', 'thead', 'tr', 'tt', 'u', 'ul', 'var', 'video', 'noscript']
acceptable_attributes = ['abbr', 'accept', 'accept-charset', 'accesskey',
'action', 'align', 'alt', 'autocomplete', 'autofocus', 'axis',
'background', 'balance', 'bgcolor', 'bgproperties', 'border',
'bordercolor', 'bordercolordark', 'bordercolorlight', 'bottompadding',
'cellpadding', 'cellspacing', 'ch', 'challenge', 'char', 'charoff',
'choff', 'charset', 'checked', 'cite', 'class', 'clear', 'color', 'cols',
'colspan', 'compact', 'contenteditable', 'controls', 'coords', 'data',
'datafld', 'datapagesize', 'datasrc', 'datetime', 'default', 'delay',
'dir', 'disabled', 'draggable', 'dynsrc', 'enctype', 'end', 'face', 'for',
'form', 'frame', 'galleryimg', 'gutter', 'headers', 'height', 'hidefocus',
'hidden', 'high', 'href', 'hreflang', 'hspace', 'icon', 'id', 'inputmode',
'ismap', 'keytype', 'label', 'leftspacing', 'lang', 'list', 'longdesc',
'loop', 'loopcount', 'loopend', 'loopstart', 'low', 'lowsrc', 'max',
'maxlength', 'media', 'method', 'min', 'multiple', 'name', 'nohref',
'noshade', 'nowrap', 'open', 'optimum', 'pattern', 'ping', 'point-size',
'prompt', 'pqg', 'radiogroup', 'readonly', 'rel', 'repeat-max',
'repeat-min', 'replace', 'required', 'rev', 'rightspacing', 'rows',
'rowspan', 'rules', 'scope', 'selected', 'shape', 'size', 'span', 'src',
'start', 'step', 'summary', 'suppress', 'tabindex', 'target', 'template',
'title', 'toppadding', 'type', 'unselectable', 'usemap', 'urn', 'valign',
'value', 'variable', 'volume', 'vspace', 'vrml', 'width', 'wrap',
'xml:lang']
unacceptable_elements_with_end_tag = ['script', 'applet', 'style']
acceptable_css_properties = ['azimuth', 'background-color',
'border-bottom-color', 'border-collapse', 'border-color',
'border-left-color', 'border-right-color', 'border-top-color', 'clear',
'color', 'cursor', 'direction', 'display', 'elevation', 'float', 'font',
'font-family', 'font-size', 'font-style', 'font-variant', 'font-weight',
'height', 'letter-spacing', 'line-height', 'overflow', 'pause',
'pause-after', 'pause-before', 'pitch', 'pitch-range', 'richness',
'speak', 'speak-header', 'speak-numeral', 'speak-punctuation',
'speech-rate', 'stress', 'text-align', 'text-decoration', 'text-indent',
'unicode-bidi', 'vertical-align', 'voice-family', 'volume',
'white-space', 'width']
# survey of common keywords found in feeds
acceptable_css_keywords = ['auto', 'aqua', 'black', 'block', 'blue',
'bold', 'both', 'bottom', 'brown', 'center', 'collapse', 'dashed',
'dotted', 'fuchsia', 'gray', 'green', '!important', 'italic', 'left',
'lime', 'maroon', 'medium', 'none', 'navy', 'normal', 'nowrap', 'olive',
'pointer', 'purple', 'red', 'right', 'solid', 'silver', 'teal', 'top',
'transparent', 'underline', 'white', 'yellow']
valid_css_values = re.compile('^(#[0-9a-f]+|rgb\(\d+%?,\d*%?,?\d*%?\)?|' +
'\d{0,2}\.?\d{0,2}(cm|em|ex|in|mm|pc|pt|px|%|,|\))?)$')
mathml_elements = ['annotation', 'annotation-xml', 'maction', 'math',
'merror', 'mfenced', 'mfrac', 'mi', 'mmultiscripts', 'mn', 'mo', 'mover', 'mpadded',
'mphantom', 'mprescripts', 'mroot', 'mrow', 'mspace', 'msqrt', 'mstyle',
'msub', 'msubsup', 'msup', 'mtable', 'mtd', 'mtext', 'mtr', 'munder',
'munderover', 'none', 'semantics']
mathml_attributes = ['actiontype', 'align', 'columnalign', 'columnalign',
'columnalign', 'close', 'columnlines', 'columnspacing', 'columnspan', 'depth',
'display', 'displaystyle', 'encoding', 'equalcolumns', 'equalrows',
'fence', 'fontstyle', 'fontweight', 'frame', 'height', 'linethickness',
'lspace', 'mathbackground', 'mathcolor', 'mathvariant', 'mathvariant',
'maxsize', 'minsize', 'open', 'other', 'rowalign', 'rowalign', 'rowalign',
'rowlines', 'rowspacing', 'rowspan', 'rspace', 'scriptlevel', 'selection',
'separator', 'separators', 'stretchy', 'width', 'width', 'xlink:href',
'xlink:show', 'xlink:type', 'xmlns', 'xmlns:xlink']
# svgtiny - foreignObject + linearGradient + radialGradient + stop
svg_elements = ['a', 'animate', 'animateColor', 'animateMotion',
'animateTransform', 'circle', 'defs', 'desc', 'ellipse', 'foreignObject',
'font-face', 'font-face-name', 'font-face-src', 'g', 'glyph', 'hkern',
'linearGradient', 'line', 'marker', 'metadata', 'missing-glyph', 'mpath',
'path', 'polygon', 'polyline', 'radialGradient', 'rect', 'set', 'stop',
'svg', 'switch', 'text', 'title', 'tspan', 'use']
# svgtiny + class + opacity + offset + xmlns + xmlns:xlink
svg_attributes = ['accent-height', 'accumulate', 'additive', 'alphabetic',
'arabic-form', 'ascent', 'attributeName', 'attributeType',
'baseProfile', 'bbox', 'begin', 'by', 'calcMode', 'cap-height',
'class', 'color', 'color-rendering', 'content', 'cx', 'cy', 'd', 'dx',
'dy', 'descent', 'display', 'dur', 'end', 'fill', 'fill-opacity',
'fill-rule', 'font-family', 'font-size', 'font-stretch', 'font-style',
'font-variant', 'font-weight', 'from', 'fx', 'fy', 'g1', 'g2',
'glyph-name', 'gradientUnits', 'hanging', 'height', 'horiz-adv-x',
'horiz-origin-x', 'id', 'ideographic', 'k', 'keyPoints', 'keySplines',
'keyTimes', 'lang', 'mathematical', 'marker-end', 'marker-mid',
'marker-start', 'markerHeight', 'markerUnits', 'markerWidth', 'max',
'min', 'name', 'offset', 'opacity', 'orient', 'origin',
'overline-position', 'overline-thickness', 'panose-1', 'path',
'pathLength', 'points', 'preserveAspectRatio', 'r', 'refX', 'refY',
'repeatCount', 'repeatDur', 'requiredExtensions', 'requiredFeatures',
'restart', 'rotate', 'rx', 'ry', 'slope', 'stemh', 'stemv',
'stop-color', 'stop-opacity', 'strikethrough-position',
'strikethrough-thickness', 'stroke', 'stroke-dasharray',
'stroke-dashoffset', 'stroke-linecap', 'stroke-linejoin',
'stroke-miterlimit', 'stroke-opacity', 'stroke-width', 'systemLanguage',
'target', 'text-anchor', 'to', 'transform', 'type', 'u1', 'u2',
'underline-position', 'underline-thickness', 'unicode', 'unicode-range',
'units-per-em', 'values', 'version', 'viewBox', 'visibility', 'width',
'widths', 'x', 'x-height', 'x1', 'x2', 'xlink:actuate', 'xlink:arcrole',
'xlink:href', 'xlink:role', 'xlink:show', 'xlink:title', 'xlink:type',
'xml:base', 'xml:lang', 'xml:space', 'xmlns', 'xmlns:xlink', 'y', 'y1',
'y2', 'zoomAndPan']
svg_attr_map = None
svg_elem_map = None
acceptable_svg_properties = [ 'fill', 'fill-opacity', 'fill-rule',
'stroke', 'stroke-width', 'stroke-linecap', 'stroke-linejoin',
'stroke-opacity']
def reset(self):
_BaseHTMLProcessor.reset(self)
self.unacceptablestack = 0
self.mathmlOK = 0
self.svgOK = 0
def unknown_starttag(self, tag, attrs):
acceptable_attributes = self.acceptable_attributes
keymap = {}
if not tag in self.acceptable_elements or self.svgOK:
if tag in self.unacceptable_elements_with_end_tag:
self.unacceptablestack += 1
# add implicit namespaces to html5 inline svg/mathml
if self._type.endswith('html'):
if not dict(attrs).get('xmlns'):
if tag=='svg':
attrs.append( ('xmlns','http://www.w3.org/2000/svg') )
if tag=='math':
attrs.append( ('xmlns','http://www.w3.org/1998/Math/MathML') )
# not otherwise acceptable, perhaps it is MathML or SVG?
if tag=='math' and ('xmlns','http://www.w3.org/1998/Math/MathML') in attrs:
self.mathmlOK += 1
if tag=='svg' and ('xmlns','http://www.w3.org/2000/svg') in attrs:
self.svgOK += 1
        # choose acceptable attributes based on tag class, else bail
if self.mathmlOK and tag in self.mathml_elements:
acceptable_attributes = self.mathml_attributes
elif self.svgOK and tag in self.svg_elements:
# for most vocabularies, lowercasing is a good idea. Many
# svg elements, however, are camel case
if not self.svg_attr_map:
lower=[attr.lower() for attr in self.svg_attributes]
mix=[a for a in self.svg_attributes if a not in lower]
self.svg_attributes = lower
self.svg_attr_map = dict([(a.lower(),a) for a in mix])
lower=[attr.lower() for attr in self.svg_elements]
mix=[a for a in self.svg_elements if a not in lower]
self.svg_elements = lower
self.svg_elem_map = dict([(a.lower(),a) for a in mix])
acceptable_attributes = self.svg_attributes
tag = self.svg_elem_map.get(tag,tag)
keymap = self.svg_attr_map
elif not tag in self.acceptable_elements:
return
# declare xlink namespace, if needed
if self.mathmlOK or self.svgOK:
if filter(lambda (n,v): n.startswith('xlink:'),attrs):
if not ('xmlns:xlink','http://www.w3.org/1999/xlink') in attrs:
attrs.append(('xmlns:xlink','http://www.w3.org/1999/xlink'))
clean_attrs = []
for key, value in self.normalize_attrs(attrs):
if key in acceptable_attributes:
key=keymap.get(key,key)
# make sure the uri uses an acceptable uri scheme
if key == u'href':
value = _makeSafeAbsoluteURI(value)
clean_attrs.append((key,value))
elif key=='style':
clean_value = self.sanitize_style(value)
if clean_value: clean_attrs.append((key,clean_value))
_BaseHTMLProcessor.unknown_starttag(self, tag, clean_attrs)
def unknown_endtag(self, tag):
if not tag in self.acceptable_elements:
if tag in self.unacceptable_elements_with_end_tag:
self.unacceptablestack -= 1
if self.mathmlOK and tag in self.mathml_elements:
if tag == 'math' and self.mathmlOK: self.mathmlOK -= 1
elif self.svgOK and tag in self.svg_elements:
tag = self.svg_elem_map.get(tag,tag)
if tag == 'svg' and self.svgOK: self.svgOK -= 1
else:
return
_BaseHTMLProcessor.unknown_endtag(self, tag)
def handle_pi(self, text):
pass
def handle_decl(self, text):
pass
def handle_data(self, text):
if not self.unacceptablestack:
_BaseHTMLProcessor.handle_data(self, text)
def sanitize_style(self, style):
# disallow urls
style=re.compile('url\s*\(\s*[^\s)]+?\s*\)\s*').sub(' ',style)
# gauntlet
if not re.match("""^([:,;#%.\sa-zA-Z0-9!]|\w-\w|'[\s\w]+'|"[\s\w]+"|\([\d,\s]+\))*$""", style): return ''
# This replaced a regexp that used re.match and was prone to pathological back-tracking.
if re.sub("\s*[-\w]+\s*:\s*[^:;]*;?", '', style).strip(): return ''
clean = []
for prop,value in re.findall("([-\w]+)\s*:\s*([^:;]*)",style):
if not value: continue
if prop.lower() in self.acceptable_css_properties:
clean.append(prop + ': ' + value + ';')
elif prop.split('-')[0].lower() in ['background','border','margin','padding']:
for keyword in value.split():
if not keyword in self.acceptable_css_keywords and \
not self.valid_css_values.match(keyword):
break
else:
clean.append(prop + ': ' + value + ';')
elif self.svgOK and prop.lower() in self.acceptable_svg_properties:
clean.append(prop + ': ' + value + ';')
return ' '.join(clean)
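    # A sketch of sanitize_style in action: given 'color: red; position: fixed'
    # only the whitelisted property survives, yielding 'color: red;'.
    # background-* / border-* / margin-* / padding-* shorthands are
    # additionally vetted keyword-by-keyword against acceptable_css_keywords
    # and valid_css_values.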
def parse_comment(self, i, report=1):
ret = _BaseHTMLProcessor.parse_comment(self, i, report)
if ret >= 0:
return ret
# if ret == -1, this may be a malicious attempt to circumvent
# sanitization, or a page-destroying unclosed comment
match = re.compile(r'--[^>]*>').search(self.rawdata, i+4)
if match:
return match.end()
# unclosed comment; deliberately fail to handle_data()
return len(self.rawdata)
def _sanitizeHTML(htmlSource, encoding, _type):
p = _HTMLSanitizer(encoding, _type)
    htmlSource = htmlSource.replace('<![CDATA[', '&lt;![CDATA[')
p.feed(htmlSource)
data = p.output()
if TIDY_MARKUP:
# loop through list of preferred Tidy interfaces looking for one that's installed,
# then set up a common _tidy function to wrap the interface-specific API.
_tidy = None
for tidy_interface in PREFERRED_TIDY_INTERFACES:
try:
if tidy_interface == "uTidy":
from tidy import parseString as _utidy
def _tidy(data, **kwargs):
return str(_utidy(data, **kwargs))
break
elif tidy_interface == "mxTidy":
from mx.Tidy import Tidy as _mxtidy
def _tidy(data, **kwargs):
nerrors, nwarnings, data, errordata = _mxtidy.tidy(data, **kwargs)
return data
break
except:
pass
if _tidy:
            is_unicode = isinstance(data, unicode)
            if is_unicode:
                data = data.encode('utf-8')
            data = _tidy(data, output_xhtml=1, numeric_entities=1, wrap=0, char_encoding="utf8")
            if is_unicode:
                data = unicode(data, 'utf-8')
if data.count('<body'):
data = data.split('<body', 1)[1]
if data.count('>'):
data = data.split('>', 1)[1]
if data.count('</body'):
data = data.split('</body', 1)[0]
data = data.strip().replace('\r\n', '\n')
return data
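# Typical use, as a sketch:
#   _sanitizeHTML(u'<p onclick="x()">hi<script>evil()</script></p>',
#                 'utf-8', 'text/html')
# drops the script element together with its contents (script is in
# unacceptable_elements_with_end_tag, so handle_data is suppressed) and
# strips the onclick attribute, which is not in acceptable_attributes,
# yielding roughly u'<p>hi</p>'.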
class _FeedURLHandler(urllib2.HTTPDigestAuthHandler, urllib2.HTTPRedirectHandler, urllib2.HTTPDefaultErrorHandler):
def http_error_default(self, req, fp, code, msg, headers):
if ((code / 100) == 3) and (code != 304):
return self.http_error_302(req, fp, code, msg, headers)
infourl = urllib.addinfourl(fp, headers, req.get_full_url())
infourl.status = code
return infourl
def http_error_302(self, req, fp, code, msg, headers):
if headers.dict.has_key('location'):
infourl = urllib2.HTTPRedirectHandler.http_error_302(self, req, fp, code, msg, headers)
else:
infourl = urllib.addinfourl(fp, headers, req.get_full_url())
if not hasattr(infourl, 'status'):
infourl.status = code
return infourl
def http_error_301(self, req, fp, code, msg, headers):
if headers.dict.has_key('location'):
infourl = urllib2.HTTPRedirectHandler.http_error_301(self, req, fp, code, msg, headers)
else:
infourl = urllib.addinfourl(fp, headers, req.get_full_url())
if not hasattr(infourl, 'status'):
infourl.status = code
return infourl
http_error_300 = http_error_302
http_error_303 = http_error_302
http_error_307 = http_error_302
def http_error_401(self, req, fp, code, msg, headers):
# Check if
# - server requires digest auth, AND
# - we tried (unsuccessfully) with basic auth, AND
# - we're using Python 2.3.3 or later (digest auth is irreparably broken in earlier versions)
# If all conditions hold, parse authentication information
# out of the Authorization header we sent the first time
# (for the username and password) and the WWW-Authenticate
# header the server sent back (for the realm) and retry
# the request with the appropriate digest auth headers instead.
# This evil genius hack has been brought to you by Aaron Swartz.
host = urlparse.urlparse(req.get_full_url())[1]
try:
assert sys.version.split()[0] >= '2.3.3'
assert base64 != None
user, passw = _base64decode(req.headers['Authorization'].split(' ')[1]).split(':')
realm = re.findall('realm="([^"]*)"', headers['WWW-Authenticate'])[0]
self.add_password(realm, host, user, passw)
retry = self.http_error_auth_reqed('www-authenticate', host, req, headers)
self.reset_retry_count()
return retry
except:
return self.http_error_default(req, fp, code, msg, headers)
def _open_resource(url_file_stream_or_string, etag, modified, agent, referrer, handlers, request_headers):
"""URL, filename, or string --> stream
This function lets you define parsers that take any input source
(URL, pathname to local or network file, or actual data as a string)
and deal with it in a uniform manner. Returned object is guaranteed
to have all the basic stdio read methods (read, readline, readlines).
Just .close() the object when you're done with it.
If the etag argument is supplied, it will be used as the value of an
If-None-Match request header.
If the modified argument is supplied, it can be a tuple of 9 integers
(as returned by gmtime() in the standard Python time module) or a date
string in any format supported by feedparser. Regardless, it MUST
be in GMT (Greenwich Mean Time). It will be reformatted into an
RFC 1123-compliant date and used as the value of an If-Modified-Since
request header.
If the agent argument is supplied, it will be used as the value of a
User-Agent request header.
If the referrer argument is supplied, it will be used as the value of a
Referer[sic] request header.
If handlers is supplied, it is a list of handlers used to build a
urllib2 opener.
    If request_headers is supplied, it is a dictionary of HTTP request headers
that will override the values generated by FeedParser.
"""
if hasattr(url_file_stream_or_string, 'read'):
return url_file_stream_or_string
if url_file_stream_or_string == '-':
return sys.stdin
if urlparse.urlparse(url_file_stream_or_string)[0] in ('http', 'https', 'ftp', 'file', 'feed'):
# Deal with the feed URI scheme
if url_file_stream_or_string.startswith('feed:http'):
url_file_stream_or_string = url_file_stream_or_string[5:]
elif url_file_stream_or_string.startswith('feed:'):
url_file_stream_or_string = 'http:' + url_file_stream_or_string[5:]
if not agent:
agent = USER_AGENT
# test for inline user:password for basic auth
auth = None
if base64:
urltype, rest = urllib.splittype(url_file_stream_or_string)
realhost, rest = urllib.splithost(rest)
if realhost:
user_passwd, realhost = urllib.splituser(realhost)
if user_passwd:
url_file_stream_or_string = '%s://%s%s' % (urltype, realhost, rest)
auth = base64.standard_b64encode(user_passwd).strip()
# iri support
try:
if isinstance(url_file_stream_or_string,unicode):
url_file_stream_or_string = url_file_stream_or_string.encode('idna').decode('utf-8')
else:
url_file_stream_or_string = url_file_stream_or_string.decode('utf-8').encode('idna').decode('utf-8')
except:
pass
# try to open with urllib2 (to use optional headers)
request = _build_urllib2_request(url_file_stream_or_string, agent, etag, modified, referrer, auth, request_headers)
        opener = urllib2.build_opener(*(handlers + [_FeedURLHandler()]))
opener.addheaders = [] # RMK - must clear so we only send our custom User-Agent
try:
return opener.open(request)
finally:
opener.close() # JohnD
# try to open with native open function (if url_file_stream_or_string is a filename)
try:
return open(url_file_stream_or_string, 'rb')
except:
pass
# treat url_file_stream_or_string as string
return _StringIO(str(url_file_stream_or_string))
def _build_urllib2_request(url, agent, etag, modified, referrer, auth, request_headers):
request = urllib2.Request(url)
request.add_header('User-Agent', agent)
if etag:
request.add_header('If-None-Match', etag)
if type(modified) == type(''):
modified = _parse_date(modified)
elif isinstance(modified, datetime.datetime):
modified = modified.utctimetuple()
if modified:
# format into an RFC 1123-compliant timestamp. We can't use
# time.strftime() since the %a and %b directives can be affected
# by the current locale, but RFC 2616 states that dates must be
# in English.
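        # e.g. the struct_time for 2004-02-28 00:00:00 GMT (tm_wday == 5)
        # formats as 'Sat, 28 Feb 2004 00:00:00 GMT'.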
short_weekdays = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
request.add_header('If-Modified-Since', '%s, %02d %s %04d %02d:%02d:%02d GMT' % (short_weekdays[modified[6]], modified[2], months[modified[1] - 1], modified[0], modified[3], modified[4], modified[5]))
if referrer:
request.add_header('Referer', referrer)
if gzip and zlib:
request.add_header('Accept-encoding', 'gzip, deflate')
elif gzip:
request.add_header('Accept-encoding', 'gzip')
elif zlib:
request.add_header('Accept-encoding', 'deflate')
else:
request.add_header('Accept-encoding', '')
if auth:
request.add_header('Authorization', 'Basic %s' % auth)
if ACCEPT_HEADER:
request.add_header('Accept', ACCEPT_HEADER)
# use this for whatever -- cookies, special headers, etc
# [('Cookie','Something'),('x-special-header','Another Value')]
for header_name, header_value in request_headers.items():
request.add_header(header_name, header_value)
request.add_header('A-IM', 'feed') # RFC 3229 support
return request
_date_handlers = []
def registerDateHandler(func):
'''Register a date handler function (takes string, returns 9-tuple date in GMT)'''
_date_handlers.insert(0, func)
# ISO-8601 date parsing routines written by Fazal Majid.
# The ISO 8601 standard is very convoluted and irregular - a full ISO 8601
# parser is beyond the scope of feedparser and would be a worthwhile addition
# to the Python library.
# A single regular expression cannot parse ISO 8601 date formats into groups
# as the standard is highly irregular (for instance, is '030104' the date
# 2003-01-04 or 0301-04-01?), so we use templates instead.
# Please note the order in templates is significant because we need a
# greedy match.
_iso8601_tmpl = ['YYYY-?MM-?DD', 'YYYY-0MM?-?DD', 'YYYY-MM', 'YYYY-?OOO',
'YY-?MM-?DD', 'YY-?OOO', 'YYYY',
'-YY-?MM', '-OOO', '-YY',
'--MM-?DD', '--MM',
'---DD',
'CC', '']
_iso8601_re = [
tmpl.replace(
'YYYY', r'(?P<year>\d{4})').replace(
'YY', r'(?P<year>\d\d)').replace(
'MM', r'(?P<month>[01]\d)').replace(
'DD', r'(?P<day>[0123]\d)').replace(
'OOO', r'(?P<ordinal>[0123]\d\d)').replace(
'CC', r'(?P<century>\d\d$)')
+ r'(T?(?P<hour>\d{2}):(?P<minute>\d{2})'
+ r'(:(?P<second>\d{2}))?'
+ r'(\.(?P<fracsecond>\d+))?'
+ r'(?P<tz>[+-](?P<tzhour>\d{2})(:(?P<tzmin>\d{2}))?|Z)?)?'
for tmpl in _iso8601_tmpl]
try:
del tmpl
except NameError:
pass
_iso8601_matches = [re.compile(regex).match for regex in _iso8601_re]
try:
del regex
except NameError:
pass
def _parse_date_iso8601(dateString):
'''Parse a variety of ISO-8601-compatible formats like 20040105'''
m = None
for _iso8601_match in _iso8601_matches:
m = _iso8601_match(dateString)
if m: break
if not m: return
if m.span() == (0, 0): return
params = m.groupdict()
ordinal = params.get('ordinal', 0)
if ordinal:
ordinal = int(ordinal)
else:
ordinal = 0
year = params.get('year', '--')
if not year or year == '--':
year = time.gmtime()[0]
elif len(year) == 2:
# ISO 8601 assumes current century, i.e. 93 -> 2093, NOT 1993
year = 100 * int(time.gmtime()[0] / 100) + int(year)
else:
year = int(year)
month = params.get('month', '-')
if not month or month == '-':
# ordinals are NOT normalized by mktime, we simulate them
# by setting month=1, day=ordinal
if ordinal:
month = 1
else:
month = time.gmtime()[1]
month = int(month)
day = params.get('day', 0)
if not day:
# see above
if ordinal:
day = ordinal
elif params.get('century', 0) or \
params.get('year', 0) or params.get('month', 0):
day = 1
else:
day = time.gmtime()[2]
else:
day = int(day)
# special case of the century - is the first year of the 21st century
    # 2000 or 2001? The debate goes on...
if 'century' in params.keys():
year = (int(params['century']) - 1) * 100 + 1
# in ISO 8601 most fields are optional
for field in ['hour', 'minute', 'second', 'tzhour', 'tzmin']:
if not params.get(field, None):
params[field] = 0
hour = int(params.get('hour', 0))
minute = int(params.get('minute', 0))
second = int(float(params.get('second', 0)))
# weekday is normalized by mktime(), we can ignore it
weekday = 0
daylight_savings_flag = -1
tm = [year, month, day, hour, minute, second, weekday,
ordinal, daylight_savings_flag]
# ISO 8601 time zone adjustments
tz = params.get('tz')
if tz and tz != 'Z':
if tz[0] == '-':
tm[3] += int(params.get('tzhour', 0))
tm[4] += int(params.get('tzmin', 0))
elif tz[0] == '+':
tm[3] -= int(params.get('tzhour', 0))
tm[4] -= int(params.get('tzmin', 0))
else:
return None
# Python's time.mktime() is a wrapper around the ANSI C mktime(3c)
# which is guaranteed to normalize d/m/y/h/m/s.
# Many implementations have bugs, but we'll pretend they don't.
return time.localtime(time.mktime(tuple(tm)))
registerDateHandler(_parse_date_iso8601)
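# Formats the ISO-8601 handler accepts include '2003-12-31T10:14:55Z',
# '2003-12-31', the compact '20031231', and ordinal dates such as
# '2003-335' (day 335 of 2003).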
# 8-bit date handling routines written by ytrewq1.
_korean_year = u'\ub144' # b3e2 in euc-kr
_korean_month = u'\uc6d4' # bff9 in euc-kr
_korean_day = u'\uc77c' # c0cf in euc-kr
_korean_am = u'\uc624\uc804' # bfc0 c0fc in euc-kr
_korean_pm = u'\uc624\ud6c4' # bfc0 c8c4 in euc-kr
_korean_onblog_date_re = \
re.compile('(\d{4})%s\s+(\d{2})%s\s+(\d{2})%s\s+(\d{2}):(\d{2}):(\d{2})' % \
(_korean_year, _korean_month, _korean_day))
_korean_nate_date_re = \
re.compile(u'(\d{4})-(\d{2})-(\d{2})\s+(%s|%s)\s+(\d{,2}):(\d{,2}):(\d{,2})' % \
(_korean_am, _korean_pm))
def _parse_date_onblog(dateString):
'''Parse a string according to the OnBlog 8-bit date format'''
m = _korean_onblog_date_re.match(dateString)
if not m: return
w3dtfdate = '%(year)s-%(month)s-%(day)sT%(hour)s:%(minute)s:%(second)s%(zonediff)s' % \
{'year': m.group(1), 'month': m.group(2), 'day': m.group(3),\
'hour': m.group(4), 'minute': m.group(5), 'second': m.group(6),\
'zonediff': '+09:00'}
if _debug: sys.stderr.write('OnBlog date parsed as: %s\n' % w3dtfdate)
return _parse_date_w3dtf(w3dtfdate)
registerDateHandler(_parse_date_onblog)
def _parse_date_nate(dateString):
'''Parse a string according to the Nate 8-bit date format'''
m = _korean_nate_date_re.match(dateString)
if not m: return
hour = int(m.group(5))
ampm = m.group(4)
if (ampm == _korean_pm):
hour += 12
hour = str(hour)
if len(hour) == 1:
hour = '0' + hour
w3dtfdate = '%(year)s-%(month)s-%(day)sT%(hour)s:%(minute)s:%(second)s%(zonediff)s' % \
{'year': m.group(1), 'month': m.group(2), 'day': m.group(3),\
'hour': hour, 'minute': m.group(6), 'second': m.group(7),\
'zonediff': '+09:00'}
if _debug: sys.stderr.write('Nate date parsed as: %s\n' % w3dtfdate)
return _parse_date_w3dtf(w3dtfdate)
registerDateHandler(_parse_date_nate)
_mssql_date_re = \
re.compile('(\d{4})-(\d{2})-(\d{2})\s+(\d{2}):(\d{2}):(\d{2})(\.\d+)?')
def _parse_date_mssql(dateString):
'''Parse a string according to the MS SQL date format'''
m = _mssql_date_re.match(dateString)
if not m: return
w3dtfdate = '%(year)s-%(month)s-%(day)sT%(hour)s:%(minute)s:%(second)s%(zonediff)s' % \
{'year': m.group(1), 'month': m.group(2), 'day': m.group(3),\
'hour': m.group(4), 'minute': m.group(5), 'second': m.group(6),\
'zonediff': '+09:00'}
if _debug: sys.stderr.write('MS SQL date parsed as: %s\n' % w3dtfdate)
return _parse_date_w3dtf(w3dtfdate)
registerDateHandler(_parse_date_mssql)
# Unicode strings for Greek date strings
_greek_months = \
{ \
u'\u0399\u03b1\u03bd': u'Jan', # c9e1ed in iso-8859-7
u'\u03a6\u03b5\u03b2': u'Feb', # d6e5e2 in iso-8859-7
u'\u039c\u03ac\u03ce': u'Mar', # ccdcfe in iso-8859-7
u'\u039c\u03b1\u03ce': u'Mar', # cce1fe in iso-8859-7
u'\u0391\u03c0\u03c1': u'Apr', # c1f0f1 in iso-8859-7
u'\u039c\u03ac\u03b9': u'May', # ccdce9 in iso-8859-7
u'\u039c\u03b1\u03ca': u'May', # cce1fa in iso-8859-7
u'\u039c\u03b1\u03b9': u'May', # cce1e9 in iso-8859-7
u'\u0399\u03bf\u03cd\u03bd': u'Jun', # c9effded in iso-8859-7
u'\u0399\u03bf\u03bd': u'Jun', # c9efed in iso-8859-7
u'\u0399\u03bf\u03cd\u03bb': u'Jul', # c9effdeb in iso-8859-7
u'\u0399\u03bf\u03bb': u'Jul', # c9f9eb in iso-8859-7
u'\u0391\u03cd\u03b3': u'Aug', # c1fde3 in iso-8859-7
u'\u0391\u03c5\u03b3': u'Aug', # c1f5e3 in iso-8859-7
u'\u03a3\u03b5\u03c0': u'Sep', # d3e5f0 in iso-8859-7
u'\u039f\u03ba\u03c4': u'Oct', # cfeaf4 in iso-8859-7
u'\u039d\u03bf\u03ad': u'Nov', # cdefdd in iso-8859-7
u'\u039d\u03bf\u03b5': u'Nov', # cdefe5 in iso-8859-7
u'\u0394\u03b5\u03ba': u'Dec', # c4e5ea in iso-8859-7
}
_greek_wdays = \
{ \
u'\u039a\u03c5\u03c1': u'Sun', # caf5f1 in iso-8859-7
u'\u0394\u03b5\u03c5': u'Mon', # c4e5f5 in iso-8859-7
u'\u03a4\u03c1\u03b9': u'Tue', # d4f1e9 in iso-8859-7
u'\u03a4\u03b5\u03c4': u'Wed', # d4e5f4 in iso-8859-7
u'\u03a0\u03b5\u03bc': u'Thu', # d0e5ec in iso-8859-7
u'\u03a0\u03b1\u03c1': u'Fri', # d0e1f1 in iso-8859-7
u'\u03a3\u03b1\u03b2': u'Sat', # d3e1e2 in iso-8859-7
}
_greek_date_format_re = \
re.compile(u'([^,]+),\s+(\d{2})\s+([^\s]+)\s+(\d{4})\s+(\d{2}):(\d{2}):(\d{2})\s+([^\s]+)')
def _parse_date_greek(dateString):
'''Parse a string according to a Greek 8-bit date format.'''
m = _greek_date_format_re.match(dateString)
if not m: return
try:
wday = _greek_wdays[m.group(1)]
month = _greek_months[m.group(3)]
except:
return
rfc822date = '%(wday)s, %(day)s %(month)s %(year)s %(hour)s:%(minute)s:%(second)s %(zonediff)s' % \
{'wday': wday, 'day': m.group(2), 'month': month, 'year': m.group(4),\
'hour': m.group(5), 'minute': m.group(6), 'second': m.group(7),\
'zonediff': m.group(8)}
if _debug: sys.stderr.write('Greek date parsed as: %s\n' % rfc822date)
return _parse_date_rfc822(rfc822date)
registerDateHandler(_parse_date_greek)
# Unicode strings for Hungarian date strings
_hungarian_months = \
{ \
u'janu\u00e1r': u'01', # e1 in iso-8859-2
u'febru\u00e1ri': u'02', # e1 in iso-8859-2
u'm\u00e1rcius': u'03', # e1 in iso-8859-2
u'\u00e1prilis': u'04', # e1 in iso-8859-2
    u'm\u00e1jus': u'05', # e1 in iso-8859-2
u'j\u00fanius': u'06', # fa in iso-8859-2
u'j\u00falius': u'07', # fa in iso-8859-2
u'augusztus': u'08',
u'szeptember': u'09',
u'okt\u00f3ber': u'10', # f3 in iso-8859-2
u'november': u'11',
u'december': u'12',
}
_hungarian_date_format_re = \
re.compile(u'(\d{4})-([^-]+)-(\d{,2})T(\d{,2}):(\d{2})((\+|-)(\d{,2}:\d{2}))')
def _parse_date_hungarian(dateString):
'''Parse a string according to a Hungarian 8-bit date format.'''
m = _hungarian_date_format_re.match(dateString)
if not m: return
try:
month = _hungarian_months[m.group(2)]
day = m.group(3)
if len(day) == 1:
day = '0' + day
hour = m.group(4)
if len(hour) == 1:
hour = '0' + hour
except:
return
w3dtfdate = '%(year)s-%(month)s-%(day)sT%(hour)s:%(minute)s%(zonediff)s' % \
{'year': m.group(1), 'month': month, 'day': day,\
'hour': hour, 'minute': m.group(5),\
'zonediff': m.group(6)}
if _debug: sys.stderr.write('Hungarian date parsed as: %s\n' % w3dtfdate)
return _parse_date_w3dtf(w3dtfdate)
registerDateHandler(_parse_date_hungarian)
# W3DTF-style date parsing adapted from PyXML xml.utils.iso8601, written by
# Drake and licensed under the Python license. Removed all range checking
# for month, day, hour, minute, and second, since mktime will normalize
# these later
def _parse_date_w3dtf(dateString):
def __extract_date(m):
year = int(m.group('year'))
if year < 100:
year = 100 * int(time.gmtime()[0] / 100) + int(year)
if year < 1000:
return 0, 0, 0
julian = m.group('julian')
if julian:
julian = int(julian)
month = julian / 30 + 1
day = julian % 30 + 1
jday = None
while jday != julian:
t = time.mktime((year, month, day, 0, 0, 0, 0, 0, 0))
jday = time.gmtime(t)[-2]
diff = abs(jday - julian)
if jday > julian:
if diff < day:
day = day - diff
else:
month = month - 1
day = 31
elif jday < julian:
if day + diff < 28:
day = day + diff
else:
month = month + 1
return year, month, day
month = m.group('month')
day = 1
if month is None:
month = 1
else:
month = int(month)
day = m.group('day')
if day:
day = int(day)
else:
day = 1
return year, month, day
def __extract_time(m):
if not m:
return 0, 0, 0
hours = m.group('hours')
if not hours:
return 0, 0, 0
hours = int(hours)
minutes = int(m.group('minutes'))
seconds = m.group('seconds')
if seconds:
seconds = int(seconds)
else:
seconds = 0
return hours, minutes, seconds
def __extract_tzd(m):
'''Return the Time Zone Designator as an offset in seconds from UTC.'''
if not m:
return 0
tzd = m.group('tzd')
if not tzd:
return 0
if tzd == 'Z':
return 0
hours = int(m.group('tzdhours'))
minutes = m.group('tzdminutes')
if minutes:
minutes = int(minutes)
else:
minutes = 0
offset = (hours*60 + minutes) * 60
if tzd[0] == '+':
return -offset
return offset
__date_re = ('(?P<year>\d\d\d\d)'
'(?:(?P<dsep>-|)'
'(?:(?P<month>\d\d)(?:(?P=dsep)(?P<day>\d\d))?'
'|(?P<julian>\d\d\d)))?')
__tzd_re = '(?P<tzd>[-+](?P<tzdhours>\d\d)(?::?(?P<tzdminutes>\d\d))|Z)'
__tzd_rx = re.compile(__tzd_re)
__time_re = ('(?P<hours>\d\d)(?P<tsep>:|)(?P<minutes>\d\d)'
'(?:(?P=tsep)(?P<seconds>\d\d)(?:[.,]\d+)?)?'
+ __tzd_re)
__datetime_re = '%s(?:T%s)?' % (__date_re, __time_re)
__datetime_rx = re.compile(__datetime_re)
m = __datetime_rx.match(dateString)
if (m is None) or (m.group() != dateString): return
gmt = __extract_date(m) + __extract_time(m) + (0, 0, 0)
if gmt[0] == 0: return
return time.gmtime(time.mktime(gmt) + __extract_tzd(m) - time.timezone)
registerDateHandler(_parse_date_w3dtf)
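# e.g. '2003-12-31T10:14:55-08:00' parses as 18:14:55 GMT on the same day,
# and a trailing 'Z' denotes UTC.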
def _parse_date_rfc822(dateString):
'''Parse an RFC822, RFC1123, RFC2822, or asctime-style date'''
data = dateString.split()
if data[0][-1] in (',', '.') or data[0].lower() in rfc822._daynames:
del data[0]
if len(data) == 4:
s = data[3]
i = s.find('+')
if i > 0:
data[3:] = [s[:i], s[i+1:]]
else:
data.append('')
dateString = " ".join(data)
# Account for the Etc/GMT timezone by stripping 'Etc/'
elif len(data) == 5 and data[4].lower().startswith('etc/'):
data[4] = data[4][4:]
dateString = " ".join(data)
if len(data) < 5:
dateString += ' 00:00:00 GMT'
tm = rfc822.parsedate_tz(dateString)
if tm:
return time.gmtime(rfc822.mktime_tz(tm))
# rfc822.py defines several time zones, but we define some extra ones.
# 'ET' is equivalent to 'EST', etc.
_additional_timezones = {'AT': -400, 'ET': -500, 'CT': -600, 'MT': -700, 'PT': -800}
rfc822._timezones.update(_additional_timezones)
registerDateHandler(_parse_date_rfc822)
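# e.g. 'Thu, 01 Jan 2004 19:48:21 GMT' parses directly, and a date-only
# 'Thu, 01 Jan 2004' is padded with ' 00:00:00 GMT' before being handed to
# rfc822.parsedate_tz.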
def _parse_date_perforce(aDateString):
"""parse a date in yyyy/mm/dd hh:mm:ss TTT format"""
# Fri, 2006/09/15 08:19:53 EDT
_my_date_pattern = re.compile( \
r'(\w{,3}), (\d{,4})/(\d{,2})/(\d{2}) (\d{,2}):(\d{2}):(\d{2}) (\w{,3})')
dow, year, month, day, hour, minute, second, tz = \
_my_date_pattern.search(aDateString).groups()
months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
dateString = "%s, %s %s %s %s:%s:%s %s" % (dow, day, months[int(month) - 1], year, hour, minute, second, tz)
tm = rfc822.parsedate_tz(dateString)
if tm:
return time.gmtime(rfc822.mktime_tz(tm))
registerDateHandler(_parse_date_perforce)
def _parse_date(dateString):
'''Parses a variety of date formats into a 9-tuple in GMT'''
for handler in _date_handlers:
try:
date9tuple = handler(dateString)
if not date9tuple: continue
if len(date9tuple) != 9:
if _debug: sys.stderr.write('date handler function must return 9-tuple\n')
raise ValueError
map(int, date9tuple)
return date9tuple
except Exception as e:
if _debug: sys.stderr.write('%s raised %s\n' % (handler.__name__, repr(e)))
pass
return None
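# Note: registerDateHandler inserts at index 0, so handlers run in reverse
# registration order: _parse_date_perforce (registered last, above) is tried
# first, and the ISO-8601 handler registered earliest is tried last.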
def _getCharacterEncoding(http_headers, xml_data):
'''Get the character encoding of the XML document
http_headers is a dictionary
xml_data is a raw string (not Unicode)
This is so much trickier than it sounds, it's not even funny.
According to RFC 3023 ('XML Media Types'), if the HTTP Content-Type
is application/xml, application/*+xml,
application/xml-external-parsed-entity, or application/xml-dtd,
the encoding given in the charset parameter of the HTTP Content-Type
takes precedence over the encoding given in the XML prefix within the
document, and defaults to 'utf-8' if neither are specified. But, if
the HTTP Content-Type is text/xml, text/*+xml, or
text/xml-external-parsed-entity, the encoding given in the XML prefix
within the document is ALWAYS IGNORED and only the encoding given in
the charset parameter of the HTTP Content-Type header should be
respected, and it defaults to 'us-ascii' if not specified.
Furthermore, discussion on the atom-syntax mailing list with the
author of RFC 3023 leads me to the conclusion that any document
served with a Content-Type of text/* and no charset parameter
must be treated as us-ascii. (We now do this.) And also that it
must always be flagged as non-well-formed. (We now do this too.)
If Content-Type is unspecified (input was local file or non-HTTP source)
or unrecognized (server just got it totally wrong), then go by the
encoding given in the XML prefix of the document and default to
'iso-8859-1' as per the HTTP specification (RFC 2616).
Then, assuming we didn't find a character encoding in the HTTP headers
(and the HTTP Content-type allowed us to look in the body), we need
to sniff the first few bytes of the XML data and try to determine
whether the encoding is ASCII-compatible. Section F of the XML
specification shows the way here:
http://www.w3.org/TR/REC-xml/#sec-guessing-no-ext-info
If the sniffed encoding is not ASCII-compatible, we need to make it
ASCII compatible so that we can sniff further into the XML declaration
to find the encoding attribute, which will tell us the true encoding.
Of course, none of this guarantees that we will be able to parse the
feed in the declared character encoding (assuming it was declared
correctly, which many are not). CJKCodecs and iconv_codec help a lot;
you should definitely install them if you can.
http://cjkpython.i18n.org/
'''
def _parseHTTPContentType(content_type):
'''takes HTTP Content-Type header and returns (content type, charset)
If no charset is specified, returns (content type, '')
If no content type is specified, returns ('', '')
Both return parameters are guaranteed to be lowercase strings
'''
content_type = content_type or ''
content_type, params = cgi.parse_header(content_type)
return content_type, params.get('charset', '').replace("'", '')
sniffed_xml_encoding = ''
xml_encoding = ''
true_encoding = ''
http_content_type, http_encoding = _parseHTTPContentType(http_headers.get('content-type', http_headers.get('Content-type')))
# Must sniff for non-ASCII-compatible character encodings before
# searching for XML declaration. This heuristic is defined in
# section F of the XML specification:
# http://www.w3.org/TR/REC-xml/#sec-guessing-no-ext-info
try:
if xml_data[:4] == _l2bytes([0x4c, 0x6f, 0xa7, 0x94]):
# EBCDIC
xml_data = _ebcdic_to_ascii(xml_data)
elif xml_data[:4] == _l2bytes([0x00, 0x3c, 0x00, 0x3f]):
# UTF-16BE
sniffed_xml_encoding = 'utf-16be'
xml_data = unicode(xml_data, 'utf-16be').encode('utf-8')
elif (len(xml_data) >= 4) and (xml_data[:2] == _l2bytes([0xfe, 0xff])) and (xml_data[2:4] != _l2bytes([0x00, 0x00])):
# UTF-16BE with BOM
sniffed_xml_encoding = 'utf-16be'
xml_data = unicode(xml_data[2:], 'utf-16be').encode('utf-8')
elif xml_data[:4] == _l2bytes([0x3c, 0x00, 0x3f, 0x00]):
# UTF-16LE
sniffed_xml_encoding = 'utf-16le'
xml_data = unicode(xml_data, 'utf-16le').encode('utf-8')
elif (len(xml_data) >= 4) and (xml_data[:2] == _l2bytes([0xff, 0xfe])) and (xml_data[2:4] != _l2bytes([0x00, 0x00])):
# UTF-16LE with BOM
sniffed_xml_encoding = 'utf-16le'
xml_data = unicode(xml_data[2:], 'utf-16le').encode('utf-8')
elif xml_data[:4] == _l2bytes([0x00, 0x00, 0x00, 0x3c]):
# UTF-32BE
sniffed_xml_encoding = 'utf-32be'
xml_data = unicode(xml_data, 'utf-32be').encode('utf-8')
elif xml_data[:4] == _l2bytes([0x3c, 0x00, 0x00, 0x00]):
# UTF-32LE
sniffed_xml_encoding = 'utf-32le'
xml_data = unicode(xml_data, 'utf-32le').encode('utf-8')
elif xml_data[:4] == _l2bytes([0x00, 0x00, 0xfe, 0xff]):
# UTF-32BE with BOM
sniffed_xml_encoding = 'utf-32be'
xml_data = unicode(xml_data[4:], 'utf-32be').encode('utf-8')
elif xml_data[:4] == _l2bytes([0xff, 0xfe, 0x00, 0x00]):
# UTF-32LE with BOM
sniffed_xml_encoding = 'utf-32le'
xml_data = unicode(xml_data[4:], 'utf-32le').encode('utf-8')
elif xml_data[:3] == _l2bytes([0xef, 0xbb, 0xbf]):
# UTF-8 with BOM
sniffed_xml_encoding = 'utf-8'
xml_data = unicode(xml_data[3:], 'utf-8').encode('utf-8')
else:
# ASCII-compatible
pass
xml_encoding_match = re.compile(_s2bytes('^<\?.*encoding=[\'"](.*?)[\'"].*\?>')).match(xml_data)
except:
xml_encoding_match = None
if xml_encoding_match:
xml_encoding = xml_encoding_match.groups()[0].decode('utf-8').lower()
if sniffed_xml_encoding and (xml_encoding in ('iso-10646-ucs-2', 'ucs-2', 'csunicode', 'iso-10646-ucs-4', 'ucs-4', 'csucs4', 'utf-16', 'utf-32', 'utf_16', 'utf_32', 'utf16', 'u16')):
xml_encoding = sniffed_xml_encoding
acceptable_content_type = 0
application_content_types = ('application/xml', 'application/xml-dtd', 'application/xml-external-parsed-entity')
text_content_types = ('text/xml', 'text/xml-external-parsed-entity')
if (http_content_type in application_content_types) or \
(http_content_type.startswith('application/') and http_content_type.endswith('+xml')):
acceptable_content_type = 1
true_encoding = http_encoding or xml_encoding or 'utf-8'
elif (http_content_type in text_content_types) or \
(http_content_type.startswith('text/')) and http_content_type.endswith('+xml'):
acceptable_content_type = 1
true_encoding = http_encoding or 'us-ascii'
elif http_content_type.startswith('text/'):
true_encoding = http_encoding or 'us-ascii'
elif http_headers and (not (http_headers.has_key('content-type') or http_headers.has_key('Content-type'))):
true_encoding = xml_encoding or 'iso-8859-1'
else:
true_encoding = xml_encoding or 'utf-8'
# some feeds claim to be gb2312 but are actually gb18030.
# apparently MSIE and Firefox both do the following switch:
if true_encoding.lower() == 'gb2312':
true_encoding = 'gb18030'
return true_encoding, http_encoding, xml_encoding, sniffed_xml_encoding, acceptable_content_type
def _toUTF8(data, encoding):
'''Changes an XML data stream on the fly to specify a new encoding
data is a raw sequence of bytes (not Unicode) that is presumed to be in %encoding already
encoding is a string recognized by encodings.aliases
'''
if _debug: sys.stderr.write('entering _toUTF8, trying encoding %s\n' % encoding)
# strip Byte Order Mark (if present)
if (len(data) >= 4) and (data[:2] == _l2bytes([0xfe, 0xff])) and (data[2:4] != _l2bytes([0x00, 0x00])):
if _debug:
sys.stderr.write('stripping BOM\n')
if encoding != 'utf-16be':
sys.stderr.write('trying utf-16be instead\n')
encoding = 'utf-16be'
data = data[2:]
elif (len(data) >= 4) and (data[:2] == _l2bytes([0xff, 0xfe])) and (data[2:4] != _l2bytes([0x00, 0x00])):
if _debug:
sys.stderr.write('stripping BOM\n')
if encoding != 'utf-16le':
sys.stderr.write('trying utf-16le instead\n')
encoding = 'utf-16le'
data = data[2:]
elif data[:3] == _l2bytes([0xef, 0xbb, 0xbf]):
if _debug:
sys.stderr.write('stripping BOM\n')
if encoding != 'utf-8':
sys.stderr.write('trying utf-8 instead\n')
encoding = 'utf-8'
data = data[3:]
elif data[:4] == _l2bytes([0x00, 0x00, 0xfe, 0xff]):
if _debug:
sys.stderr.write('stripping BOM\n')
if encoding != 'utf-32be':
sys.stderr.write('trying utf-32be instead\n')
encoding = 'utf-32be'
data = data[4:]
elif data[:4] == _l2bytes([0xff, 0xfe, 0x00, 0x00]):
if _debug:
sys.stderr.write('stripping BOM\n')
if encoding != 'utf-32le':
sys.stderr.write('trying utf-32le instead\n')
encoding = 'utf-32le'
data = data[4:]
newdata = unicode(data, encoding)
if _debug: sys.stderr.write('successfully converted %s data to unicode\n' % encoding)
declmatch = re.compile('^<\?xml[^>]*?>')
newdecl = '''<?xml version='1.0' encoding='utf-8'?>'''
if declmatch.search(newdata):
newdata = declmatch.sub(newdecl, newdata)
else:
newdata = newdecl + u'\n' + newdata
return newdata.encode('utf-8')
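# A sketch of the effect: _toUTF8 on a document declared as
# "<?xml version='1.0' encoding='iso-8859-1'?>..." decodes the bytes as
# iso-8859-1, rewrites the declaration to encoding='utf-8', and returns
# UTF-8 bytes; documents without a declaration get one prepended.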
def _stripDoctype(data):
'''Strips DOCTYPE from XML document, returns (rss_version, stripped_data)
rss_version may be 'rss091n' or None
stripped_data is the same XML document, minus the DOCTYPE
'''
start = re.search(_s2bytes('<\w'), data)
start = start and start.start() or -1
head,data = data[:start+1], data[start+1:]
entity_pattern = re.compile(_s2bytes(r'^\s*<!ENTITY([^>]*?)>'), re.MULTILINE)
entity_results=entity_pattern.findall(head)
head = entity_pattern.sub(_s2bytes(''), head)
doctype_pattern = re.compile(_s2bytes(r'^\s*<!DOCTYPE([^>]*?)>'), re.MULTILINE)
doctype_results = doctype_pattern.findall(head)
doctype = doctype_results and doctype_results[0] or _s2bytes('')
if doctype.lower().count(_s2bytes('netscape')):
version = 'rss091n'
else:
version = None
# only allow in 'safe' inline entity definitions
replacement=_s2bytes('')
if len(doctype_results)==1 and entity_results:
safe_pattern=re.compile(_s2bytes('\s+(\w+)\s+"(&#\w+;|[^&"]*)"'))
safe_entities=filter(lambda e: safe_pattern.match(e),entity_results)
if safe_entities:
replacement=_s2bytes('<!DOCTYPE feed [\n <!ENTITY') + _s2bytes('>\n <!ENTITY ').join(safe_entities) + _s2bytes('>\n]>')
data = doctype_pattern.sub(replacement, head) + data
return version, data, dict(replacement and [(k.decode('utf-8'), v.decode('utf-8')) for k, v in safe_pattern.findall(replacement)])
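# A sketch of the effect: a DOCTYPE mentioning 'Netscape' marks the feed as
# 'rss091n'; inline <!ENTITY name "value"> definitions that match the safe
# pattern are re-emitted under a synthetic '<!DOCTYPE feed [...]>' wrapper,
# and everything else in the original DOCTYPE is discarded. The returned
# dict maps those safe entity names to their replacement values.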
def parse(url_file_stream_or_string, etag=None, modified=None, agent=None, referrer=None, handlers=[], request_headers={}, response_headers={}):
'''Parse a feed from a URL, file, stream, or string.
request_headers, if given, is a dict from http header name to value to add
to the request; this overrides internally generated values.
'''
result = FeedParserDict()
result['feed'] = FeedParserDict()
result['entries'] = []
if _XML_AVAILABLE:
result['bozo'] = 0
if not isinstance(handlers, list):
handlers = [handlers]
try:
f = _open_resource(url_file_stream_or_string, etag, modified, agent, referrer, handlers, request_headers)
data = f.read()
except Exception as e:
result['bozo'] = 1
result['bozo_exception'] = e
data = None
f = None
if hasattr(f, 'headers'):
result['headers'] = dict(f.headers)
# overwrite existing headers using response_headers
if 'headers' in result:
result['headers'].update(response_headers)
elif response_headers:
result['headers'] = copy.deepcopy(response_headers)
# if feed is gzip-compressed, decompress it
if f and data and 'headers' in result:
if gzip and result['headers'].get('content-encoding') == 'gzip':
try:
data = gzip.GzipFile(fileobj=_StringIO(data)).read()
except Exception as e:
# Some feeds claim to be gzipped but they're not, so
# we get garbage. Ideally, we should re-request the
# feed without the 'Accept-encoding: gzip' header,
# but we don't.
result['bozo'] = 1
result['bozo_exception'] = e
data = ''
elif zlib and result['headers'].get('content-encoding') == 'deflate':
try:
data = zlib.decompress(data, -zlib.MAX_WBITS)
except Exception as e:
result['bozo'] = 1
result['bozo_exception'] = e
data = ''
# save HTTP headers
if 'headers' in result:
if 'etag' in result['headers'] or 'ETag' in result['headers']:
etag = result['headers'].get('etag', result['headers'].get('ETag'))
if etag:
result['etag'] = etag
if 'last-modified' in result['headers'] or 'Last-Modified' in result['headers']:
modified = result['headers'].get('last-modified', result['headers'].get('Last-Modified'))
if modified:
result['modified'] = _parse_date(modified)
if hasattr(f, 'url'):
result['href'] = f.url
result['status'] = 200
if hasattr(f, 'status'):
result['status'] = f.status
if hasattr(f, 'close'):
f.close()
# there are four encodings to keep track of:
# - http_encoding is the encoding declared in the Content-Type HTTP header
# - xml_encoding is the encoding declared in the <?xml declaration
# - sniffed_encoding is the encoding sniffed from the first 4 bytes of the XML data
# - result['encoding'] is the actual encoding, as per RFC 3023 and a variety of other conflicting specifications
http_headers = result.get('headers', {})
result['encoding'], http_encoding, xml_encoding, sniffed_xml_encoding, acceptable_content_type = \
_getCharacterEncoding(http_headers, data)
if http_headers and (not acceptable_content_type):
if http_headers.has_key('content-type') or http_headers.has_key('Content-type'):
bozo_message = '%s is not an XML media type' % http_headers.get('content-type', http_headers.get('Content-type'))
else:
bozo_message = 'no Content-type specified'
result['bozo'] = 1
result['bozo_exception'] = NonXMLContentType(bozo_message)
if data is not None:
result['version'], data, entities = _stripDoctype(data)
# ensure that baseuri is an absolute uri using an acceptable URI scheme
contentloc = http_headers.get('content-location', http_headers.get('Content-Location', ''))
href = result.get('href', '')
baseuri = _makeSafeAbsoluteURI(href, contentloc) or _makeSafeAbsoluteURI(contentloc) or href
baselang = http_headers.get('content-language', http_headers.get('Content-Language', None))
# if server sent 304, we're done
if result.get('status', 0) == 304:
result['version'] = ''
result['debug_message'] = 'The feed has not changed since you last checked, ' + \
'so the server sent no data. This is a feature, not a bug!'
return result
# if there was a problem downloading, we're done
if data is None:
return result
# determine character encoding
use_strict_parser = 0
known_encoding = 0
tried_encodings = []
# try: HTTP encoding, declared XML encoding, encoding sniffed from BOM
for proposed_encoding in (result['encoding'], xml_encoding, sniffed_xml_encoding):
if not proposed_encoding: continue
if proposed_encoding in tried_encodings: continue
tried_encodings.append(proposed_encoding)
try:
data = _toUTF8(data, proposed_encoding)
known_encoding = use_strict_parser = 1
break
except:
pass
# if no luck and we have auto-detection library, try that
if (not known_encoding) and chardet:
try:
proposed_encoding = chardet.detect(data)['encoding']
if proposed_encoding and (proposed_encoding not in tried_encodings):
tried_encodings.append(proposed_encoding)
data = _toUTF8(data, proposed_encoding)
known_encoding = use_strict_parser = 1
except:
pass
# if still no luck and we haven't tried utf-8 yet, try that
if (not known_encoding) and ('utf-8' not in tried_encodings):
try:
proposed_encoding = 'utf-8'
tried_encodings.append(proposed_encoding)
data = _toUTF8(data, proposed_encoding)
known_encoding = use_strict_parser = 1
except:
pass
# if still no luck and we haven't tried windows-1252 yet, try that
if (not known_encoding) and ('windows-1252' not in tried_encodings):
try:
proposed_encoding = 'windows-1252'
tried_encodings.append(proposed_encoding)
data = _toUTF8(data, proposed_encoding)
known_encoding = use_strict_parser = 1
except:
pass
# if still no luck and we haven't tried iso-8859-2 yet, try that.
if (not known_encoding) and ('iso-8859-2' not in tried_encodings):
try:
proposed_encoding = 'iso-8859-2'
tried_encodings.append(proposed_encoding)
data = _toUTF8(data, proposed_encoding)
known_encoding = use_strict_parser = 1
except:
pass
# if still no luck, give up
if not known_encoding:
result['bozo'] = 1
result['bozo_exception'] = CharacterEncodingUnknown( \
'document encoding unknown, I tried ' + \
'%s, %s, utf-8, windows-1252, and iso-8859-2 but nothing worked' % \
(result['encoding'], xml_encoding))
result['encoding'] = ''
elif proposed_encoding != result['encoding']:
result['bozo'] = 1
result['bozo_exception'] = CharacterEncodingOverride( \
'document declared as %s, but parsed as %s' % \
(result['encoding'], proposed_encoding))
result['encoding'] = proposed_encoding
if not _XML_AVAILABLE:
use_strict_parser = 0
if use_strict_parser:
# initialize the SAX parser
feedparser = _StrictFeedParser(baseuri, baselang, 'utf-8')
saxparser = xml.sax.make_parser(PREFERRED_XML_PARSERS)
saxparser.setFeature(xml.sax.handler.feature_namespaces, 1)
saxparser.setContentHandler(feedparser)
saxparser.setErrorHandler(feedparser)
source = xml.sax.xmlreader.InputSource()
source.setByteStream(_StringIO(data))
if hasattr(saxparser, '_ns_stack'):
# work around bug in built-in SAX parser (doesn't recognize xml: namespace)
# PyXML doesn't have this problem, and it doesn't have _ns_stack either
saxparser._ns_stack.append({'http://www.w3.org/XML/1998/namespace':'xml'})
try:
saxparser.parse(source)
except Exception as e:
if _debug:
import traceback
traceback.print_stack()
traceback.print_exc()
sys.stderr.write('xml parsing failed\n')
result['bozo'] = 1
result['bozo_exception'] = feedparser.exc or e
use_strict_parser = 0
if not use_strict_parser:
feedparser = _LooseFeedParser(baseuri, baselang, 'utf-8', entities)
feedparser.feed(data.decode('utf-8', 'replace'))
result['feed'] = feedparser.feeddata
result['entries'] = feedparser.entries
result['version'] = result['version'] or feedparser.version
result['namespaces'] = feedparser.namespacesInUse
return result
class Serializer:
def __init__(self, results):
self.results = results
class TextSerializer(Serializer):
def write(self, stream=sys.stdout):
self._writer(stream, self.results, '')
def _writer(self, stream, node, prefix):
if not node: return
if hasattr(node, 'keys'):
            keys = sorted(node.keys())
            for k in keys:
                if k in ('description', 'link'): continue
                if (k + '_detail') in node: continue
                if (k + '_parsed') in node: continue
self._writer(stream, node[k], prefix + k + '.')
        elif isinstance(node, list):
index = 0
for n in node:
self._writer(stream, n, prefix[:-1] + '[' + str(index) + '].')
index += 1
else:
try:
s = str(node).encode('utf-8')
s = s.replace('\\', '\\\\')
s = s.replace('\r', '')
s = s.replace('\n', r'\n')
stream.write(prefix[:-1])
stream.write('=')
stream.write(s)
stream.write('\n')
except:
pass
class PprintSerializer(Serializer):
def write(self, stream=sys.stdout):
        if 'href' in self.results:
stream.write(self.results['href'] + '\n\n')
from pprint import pprint
pprint(self.results, stream)
stream.write('\n')
if __name__ == '__main__':
try:
from optparse import OptionParser
except:
OptionParser = None
if OptionParser:
optionParser = OptionParser(version=__version__, usage="%prog [options] url_or_filename_or_-")
optionParser.set_defaults(format="pprint")
optionParser.add_option("-A", "--user-agent", dest="agent", metavar="AGENT", help="User-Agent for HTTP URLs")
optionParser.add_option("-e", "--referer", "--referrer", dest="referrer", metavar="URL", help="Referrer for HTTP URLs")
optionParser.add_option("-t", "--etag", dest="etag", metavar="TAG", help="ETag/If-None-Match for HTTP URLs")
optionParser.add_option("-m", "--last-modified", dest="modified", metavar="DATE", help="Last-modified/If-Modified-Since for HTTP URLs (any supported date format)")
optionParser.add_option("-f", "--format", dest="format", metavar="FORMAT", help="output results in FORMAT (text, pprint)")
optionParser.add_option("-v", "--verbose", action="store_true", dest="verbose", default=False, help="write debugging information to stderr")
(options, urls) = optionParser.parse_args()
if options.verbose:
_debug = 1
if not urls:
optionParser.print_help()
sys.exit(0)
else:
if not sys.argv[1:]:
            print(__doc__)
sys.exit(0)
class _Options:
etag = modified = agent = referrer = None
format = 'pprint'
options = _Options()
urls = sys.argv[1:]
zopeCompatibilityHack()
serializer = globals().get(options.format.capitalize() + 'Serializer', Serializer)
for url in urls:
results = parse(url, etag=options.etag, modified=options.modified, agent=options.agent, referrer=options.referrer)
serializer(results).write(sys.stdout)
|
yeyanchao/calibre
|
src/calibre/web/feeds/feedparser.py
|
Python
|
gpl-3.0
| 167,411
|
[
"NetCDF",
"VisIt"
] |
e21f6f9b9b849376ee388a31d05bf42f5a1bc321d00ae301e7246b685e4b01f8
|
#!/usr/bin/python
"""
Run some VASP tests to ensure that the VASP calculator works. This
is conditional on the existence of the VASP_COMMAND or VASP_SCRIPT
environment variables
"""
from ase.test import NotAvailable
import os
vcmd = os.getenv('VASP_COMMAND')
vscr = os.getenv('VASP_SCRIPT')
if vcmd is None and vscr is None:
raise NotAvailable('Neither VASP_COMMAND nor VASP_SCRIPT defined')
from ase import Atoms
from ase.calculators.vasp import Vasp
import numpy as np
def array_almost_equal(a1, a2, tol=np.finfo(type(1.0)).eps):
"""Replacement for old numpy.testing.utils.array_almost_equal."""
return (np.abs(a1 - a2) < tol).all()
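# Illustrative check (made-up values, not VASP output): a difference far below
# machine epsilon passes, e.g.
#   array_almost_equal(np.array([1.0, 2.0]), np.array([1.0, 2.0]) + 1e-20)  # -> True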
d = 1.14
co = Atoms('CO', positions=[(0, 0, 0), (0, 0, d)],
pbc=True)
co.center(vacuum=5.)
calc = Vasp(
xc = 'PBE',
prec = 'Low',
algo = 'Fast',
ismear= 0,
sigma = 1.,
istart = 0,
lwave = False,
lcharg = False)
co.set_calculator(calc)
en = co.get_potential_energy()
assert abs(en + 14.918933) < 1e-4
# Next, check that restarting from the previously created VASP output works
calc2 = Vasp(restart=True)
co2 = calc2.get_atoms()
# Need tolerance of 1e-14 because VASP itself changes coordinates
# slightly between reading POSCAR and writing CONTCAR even if no ionic
# steps are made.
assert array_almost_equal(co.positions, co2.positions, 1e-14)
assert en - co2.get_potential_energy() == 0.
assert array_almost_equal(calc.get_stress(co), calc2.get_stress(co2))
assert array_almost_equal(calc.get_forces(co), calc2.get_forces(co2))
assert array_almost_equal(calc.get_eigenvalues(), calc2.get_eigenvalues())
assert calc.get_number_of_bands() == calc2.get_number_of_bands()
assert calc.get_xc_functional() == calc2.get_xc_functional()
# Cleanup
calc.clean()
|
JConwayAWT/PGSS14CC
|
lib/python/multimetallics/ase/test/vasp/vasp_co.py
|
Python
|
gpl-2.0
| 1,825
|
[
"ASE",
"VASP"
] |
d395bf88b559d237733c4e18abbe71fbabb9e94b2ca590ad6de73299dbe98085
|
"""
sfc_models.examples package
===========================
Example code is deployed here; currently it exists only in the form of scripts in the "scripts" sub-package. Eventually,
a deployment function will be developed to push the examples to a directory specified by the user.
The Quick2DPlot() function uses matplotlib, which is the only non-base library used. If matplotlib is
not installed, the plotting fails and the user is given a warning. The user can replace matplotlib with
another plotting package fairly easily, by rewriting the Quick2DPlot() code.
Copyright 2016 Brian Romanchuk
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import sfc_models.examples.install_example_scripts
def install_scripts(target_dir=None): # pragma: no cover
"""
Install all the example Python files into a target directory (and create an "output" subdirectory).
If a file already exists, the script will fail. You need to copy existing files out of the way.
:param target_dir: str
:return:
"""
if target_dir is None:
if __name__ == '__main__':
print("""
This function installs the sfc_model example scripts into a specified target directory.
The function will not overwrite existing files, so you will need to clean out the directory
(saving any of your own changes) before calling this function.
Example usage:
sfc_models.examples.install_scripts('c:\\my_working_directory\\sfc_examples')
This will install to c:\my_working_directory\sfc_examples
""")
return
sfc_models.examples.install_example_scripts.install(target_dir)
|
brianr747/SFC_models
|
sfc_models/examples/__init__.py
|
Python
|
apache-2.0
| 2,104
|
[
"Brian"
] |
8af8444beef1dff28da6e62d7520bd07ed2bdbb9c65390169ef40dc3810028e5
|
#
# @file TestRule_newSetters.py
# @brief Rule unit tests for new set function API
#
# @author Akiya Jouraku (Python conversion)
# @author Sarah Keating
#
# ====== WARNING ===== WARNING ===== WARNING ===== WARNING ===== WARNING ======
#
# DO NOT EDIT THIS FILE.
#
# This file was generated automatically by converting the file located at
# src/sbml/test/TestRule_newSetters.c
# using the conversion program dev/utilities/translateTests/translateTests.pl.
# Any changes made here will be lost the next time the file is regenerated.
#
# -----------------------------------------------------------------------------
# This file is part of libSBML. Please visit http://sbml.org for more
# information about SBML, and the latest version of libSBML.
#
# Copyright 2005-2010 California Institute of Technology.
# Copyright 2002-2005 California Institute of Technology and
# Japan Science and Technology Corporation.
#
# This library is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation. A copy of the license agreement is provided
# in the file named "LICENSE.txt" included with this software distribution
# and also available online as http://sbml.org/software/libsbml/license.html
# -----------------------------------------------------------------------------
import sys
import unittest
import libsbml
class TestRule_newSetters(unittest.TestCase):
global R
R = None
def setUp(self):
self.R = libsbml.AssignmentRule(2,4)
    if (self.R is None):
pass
pass
def tearDown(self):
_dummyList = [ self.R ]; _dummyList[:] = []; del _dummyList
pass
def test_Rule_setFormula1(self):
formula = "k1*X0";
i = self.R.setFormula(formula)
self.assert_( i == libsbml.LIBSBML_OPERATION_SUCCESS )
self.assert_(( formula == self.R.getFormula() ))
self.assertEqual( True, self.R.isSetFormula() )
pass
def test_Rule_setFormula2(self):
i = self.R.setFormula("")
self.assert_( i == libsbml.LIBSBML_OPERATION_SUCCESS )
self.assertEqual( False, self.R.isSetFormula() )
pass
def test_Rule_setFormula3(self):
formula = "k1 X0";
i = self.R.setFormula(formula)
self.assert_( i == libsbml.LIBSBML_INVALID_OBJECT )
self.assertEqual( False, self.R.isSetFormula() )
pass
def test_Rule_setMath1(self):
math = libsbml.ASTNode(libsbml.AST_TIMES)
a = libsbml.ASTNode()
b = libsbml.ASTNode()
a.setName( "a")
b.setName( "b")
math.addChild(a)
math.addChild(b)
i = self.R.setMath(math)
self.assert_( i == libsbml.LIBSBML_OPERATION_SUCCESS )
self.assertEqual( True, self.R.isSetMath() )
math1 = self.R.getMath()
self.assert_( math1 != None )
formula = libsbml.formulaToString(math1)
self.assert_( formula != None )
self.assert_(( "a * b" == formula ))
_dummyList = [ math ]; _dummyList[:] = []; del _dummyList
pass
def test_Rule_setMath2(self):
math = libsbml.ASTNode(libsbml.AST_DIVIDE)
a = libsbml.ASTNode()
a.setName( "a")
math.addChild(a)
i = self.R.setMath(math)
self.assert_( i == libsbml.LIBSBML_INVALID_OBJECT )
self.assertEqual( False, self.R.isSetMath() )
_dummyList = [ math ]; _dummyList[:] = []; del _dummyList
pass
def test_Rule_setMath3(self):
i = self.R.setMath(None)
self.assert_( i == libsbml.LIBSBML_OPERATION_SUCCESS )
self.assertEqual( False, self.R.isSetMath() )
pass
def test_Rule_setUnits1(self):
i = self.R.setUnits( "second")
self.assert_( i == libsbml.LIBSBML_UNEXPECTED_ATTRIBUTE )
self.assertEqual( False, self.R.isSetUnits() )
pass
def test_Rule_setUnits2(self):
R1 = libsbml.AssignmentRule(1,2)
R1.setL1TypeCode(libsbml.SBML_PARAMETER_RULE)
i = R1.setUnits( "second")
self.assert_( i == libsbml.LIBSBML_OPERATION_SUCCESS )
self.assertEqual( True, R1.isSetUnits() )
i = R1.unsetUnits()
self.assert_( i == libsbml.LIBSBML_OPERATION_SUCCESS )
self.assertEqual( False, R1.isSetUnits() )
_dummyList = [ R1 ]; _dummyList[:] = []; del _dummyList
pass
def test_Rule_setUnits3(self):
R1 = libsbml.AssignmentRule(1,2)
R1.setL1TypeCode(libsbml.SBML_PARAMETER_RULE)
i = R1.setUnits( "1second")
self.assert_( i == libsbml.LIBSBML_INVALID_ATTRIBUTE_VALUE )
self.assertEqual( False, R1.isSetUnits() )
i = R1.unsetUnits()
self.assert_( i == libsbml.LIBSBML_OPERATION_SUCCESS )
self.assertEqual( False, R1.isSetUnits() )
_dummyList = [ R1 ]; _dummyList[:] = []; del _dummyList
pass
def test_Rule_setUnits4(self):
R1 = libsbml.AssignmentRule(1,2)
R1.setL1TypeCode(libsbml.SBML_PARAMETER_RULE)
i = R1.setUnits( "second")
self.assert_( i == libsbml.LIBSBML_OPERATION_SUCCESS )
self.assertEqual( True, R1.isSetUnits() )
i = R1.setUnits("")
self.assert_( i == libsbml.LIBSBML_OPERATION_SUCCESS )
self.assertEqual( False, R1.isSetUnits() )
_dummyList = [ R1 ]; _dummyList[:] = []; del _dummyList
pass
def test_Rule_setVariable1(self):
i = self.R.setVariable( "1mole")
self.assert_( i == libsbml.LIBSBML_INVALID_ATTRIBUTE_VALUE )
self.assertEqual( False, self.R.isSetVariable() )
pass
def test_Rule_setVariable2(self):
i = self.R.setVariable( "mole")
self.assert_( i == libsbml.LIBSBML_OPERATION_SUCCESS )
self.assertEqual( True, self.R.isSetVariable() )
i = self.R.setVariable( "")
self.assert_( i == libsbml.LIBSBML_OPERATION_SUCCESS )
self.assertEqual( False, self.R.isSetVariable() )
pass
def test_Rule_setVariable3(self):
R1 = libsbml.AlgebraicRule(1,2)
i = R1.setVariable( "r")
self.assert_( i == libsbml.LIBSBML_UNEXPECTED_ATTRIBUTE )
self.assertEqual( False, R1.isSetVariable() )
_dummyList = [ R1 ]; _dummyList[:] = []; del _dummyList
pass
def suite():
suite = unittest.TestSuite()
suite.addTest(unittest.makeSuite(TestRule_newSetters))
return suite
if __name__ == "__main__":
if unittest.TextTestRunner(verbosity=1).run(suite()).wasSuccessful() :
sys.exit(0)
else:
sys.exit(1)
|
TheCoSMoCompany/biopredyn
|
Prototype/src/libsbml-5.10.0/src/bindings/python/test/sbml/TestRule_newSetters.py
|
Python
|
bsd-3-clause
| 6,208
|
[
"VisIt"
] |
ee07496c9a998e4fbb2376a143f138de09e835d8690a08fbc1ff52273f74899d
|
#------------------------------------------------------------------------------
# Copyright (c) 2013, Nucleic Development Team.
#
# Distributed under the terms of the Modified BSD License.
#
# The full license is in the file COPYING.txt, distributed with this software.
#------------------------------------------------------------------------------
import ast
import sys
import types
from .byteplay import (
Code, LOAD_FAST, CALL_FUNCTION, LOAD_GLOBAL, STORE_FAST, LOAD_CONST,
RETURN_VALUE, STORE_NAME, LOAD_NAME, DELETE_NAME, DELETE_FAST, SetLineno
)
from .code_tracing import inject_tracing, inject_inversion
# Increment this number whenever the compiler changes the code which it
# generates. This number is used by the import hooks to know which version
# of a .enamlc file is valid for the Enaml compiler version in use. If
# this number is not incremented on change, it may result in .enamlc
# files which fail on import.
#
# Version History
# ---------------
# 1 : Initial compiler version - 2 February 2012
# 2 : Update line number handling - 26 March 2012
# When compiling code objects with mode='eval', Python ignores the
# line number specified by the ast. The workaround is to compile the
# code object, then make a new copy of it with the proper firstlineno
# set via the types.CodeType constructor.
# 3 : Update the generated code to remove the toolkit - 21 June 2012
# This updates the compiler for the coming switch to async UI's
# which will see the removal of the Toolkit concept. The only
# magic scope maintained is for that of operators.
# 4 : Update component building - 27 July 2012
# This updates the compiler to handle the new Enaml creation semantics
# that don't rely on __enaml_call__. Instead the parent is passed
# directly to the component cls which is a subclass of Declarative.
# That class handles calling the builder functions upon instance
# creation. This allows us to get rid of the EnamlDef class and
# make enamldef constructs proper subclasses of Declarative.
# 5 : Change the import names - 28 July 2012
# This changes the imported helper name from _make_decl_subclass_
# to _make_enamldef_helper_ which is more descriptive, but equally
# mangled. It also updates the method name used on the Declarative
# component for adding attribute from _add_decl_attr to the more
# descriptive _add_user_attribute. Finally, it adds the eval_compile
# function for compiling Python code in 'eval' mode with proper line
# number handling.
# 6 : Compile with code tracing - 24 November 2012
# This updates the compiler to generate code using the idea of code
# tracing instead of monitors and inverters. The compiler compiles
# the expressions into functions which are augmented to accept
# additional arguments. These arguments are tracer objects which will
# have methods called in response to bytecode ops executing. These
# methods can then attach listeners as necessary. This is an easier
# paradigm to develop with than the previous incarnation. This new
# way also allows the compiler to generate the final code objects
# upfront, instead of needing to specialize at runtime for a given
# operator context. This results in a much smaller footprint since
# the number of code objects created is n instead of n x m.
# 7 : Fix bug with local deletes - 10 December 2012
# This fixes a bug in the locals optimization where the DELETE_NAME
# opcode was not being replaced with DELETE_FAST.
# 8 : Generate description dicts instead of builders - 27 January 2013
# This updates the compiler to generate marshalable description
# dicts instead of builder functions. The responsibility of building
# out the object tree has been shifted to the Declarative class. This
# is a touch slower, but provides a ton more flexibility and enables
# templated components like `Looper` and `Conditional`.
# 9 : Generate description dicts for attrs and events - 11 March 2013
# This augments the description dictionary for an enamldef with
# a list of dicts describing the 'attr' and 'event' keywords for
# the given enamldef block. These dicts are used by the compiler
# helper to generate atom members for the new class.
# 10 : Class time post processing and decorators - 17 March 2013
# This moves a large amount of processing from instantiation time
# to class definition time. In particular, operators are now bound
# at the class level. This also adds support for decorators on an
# enamldef block.
# 11 : Fix a bug in code generation for Python 2.6 - 18 March 2013
# On Python 2.6 the LIST_APPEND instruction consumes the TOS. This
# update adds a check for running on < 2.7 and dups the TOS.
# 12 : Post process an enamldef immediately. - 18 March 2013
# This removes the need for the 2.6 check from version 11 since it
# does not rely on the LIST_APPEND instruction. It also means
# that widget names must appear before they are used, just like in
# normal Python class bodies.
# 13 : Move the post processing of enamldefs to before running the
# decorators. This means a decorator gets a complete class.
# 14 : Updates to the parser and ast to be more structured - 22 March 2013
# This updates ast generated by the parser and updates the process
# for class creation when a module is imported. The serialized data
# which lives in the code object is unpacked into a construction
# tree which is then used for various transformations.
COMPILER_VERSION = 14
#------------------------------------------------------------------------------
# Compiler Helpers
#------------------------------------------------------------------------------
# Code that will be executed at the top of every enaml module
STARTUP = ['from enaml.core.compiler_helpers import __make_enamldef_helper']
# Cleanup code that will be included in every compiled enaml module
CLEANUP = ['del __make_enamldef_helper']
def update_firstlineno(code, firstlineno):
""" Returns a new code object with an updated first line number.
"""
return types.CodeType(
code.co_argcount, code.co_nlocals, code.co_stacksize, code.co_flags,
code.co_code, code.co_consts, code.co_names, code.co_varnames,
code.co_filename, code.co_name, firstlineno, code.co_lnotab,
code.co_freevars, code.co_cellvars,
)
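# Illustrative use of the workaround described in the version history above
# (py_ast is a hypothetical ast.Expression with a known lineno):
#   code = compile(py_ast, '<test>', mode='eval')    # compile ignores the ast lineno
#   code = update_firstlineno(code, py_ast.lineno)   # re-homed to the correct line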
#------------------------------------------------------------------------------
# Expression Compilers
#------------------------------------------------------------------------------
def replace_global_loads(codelist, explicit=None):
""" A code transformer which rewrites LOAD_GLOBAL opcodes.
This transform will replace the LOAD_GLOBAL opcodes with LOAD_NAME
opcodes. The operation is performed in-place.
Parameters
----------
codelist : list
The list of byteplay code ops to modify.
explicit : set or None
The set of global names declared explicitly and which should
remain untransformed.
"""
# Replacing LOAD_GLOBAL with LOAD_NAME enables dynamic scoping by
# way of a custom locals mapping. The `call_func` function in the
# `funchelper` module enables passing a locals map to a function.
explicit = explicit or set()
for idx, (op, op_arg) in enumerate(codelist):
if op == LOAD_GLOBAL and op_arg not in explicit:
codelist[idx] = (LOAD_NAME, op_arg)
def optimize_locals(codelist):
""" Optimize the given code object for fast locals access.
All STORE_NAME opcodes will be replaced with STORE_FAST. Names which
are stored and then loaded via LOAD_NAME are rewritten to LOAD_FAST
and DELETE_NAME is rewritten to DELETE_FAST. This transformation is
applied in-place.
Parameters
----------
codelist : list
The list of byteplay code ops to modify.
"""
fast_locals = set()
for idx, (op, op_arg) in enumerate(codelist):
if op == STORE_NAME:
fast_locals.add(op_arg)
codelist[idx] = (STORE_FAST, op_arg)
for idx, (op, op_arg) in enumerate(codelist):
if op == LOAD_NAME and op_arg in fast_locals:
codelist[idx] = (LOAD_FAST, op_arg)
elif op == DELETE_NAME and op_arg in fast_locals:
codelist[idx] = (DELETE_FAST, op_arg)
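# Illustrative effect of the two transforms above on a made-up byteplay op list,
# applied in the same order as the compile_* functions below:
#   ops = [(STORE_NAME, 'x'), (LOAD_NAME, 'x'), (LOAD_GLOBAL, 'y')]
#   replace_global_loads(ops)  # (LOAD_GLOBAL, 'y') -> (LOAD_NAME, 'y')
#   optimize_locals(ops)       # 'x' becomes STORE_FAST/LOAD_FAST; 'y' stays LOAD_NAME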
def compile_simple(py_ast, filename):
""" Compile an ast into a code object implementing operator `=`.
Parameters
----------
py_ast : ast.Expression
A Python ast Expression node.
filename : string
The filename which generated the expression.
Returns
-------
result : types.CodeType
A Python code object which implements the desired behavior.
"""
code = compile(py_ast, filename, mode='eval')
code = update_firstlineno(code, py_ast.lineno)
bp_code = Code.from_code(code)
replace_global_loads(bp_code.code)
optimize_locals(bp_code.code)
bp_code.newlocals = False
return bp_code.to_code()
def compile_notify(py_ast, filename):
""" Compile an ast into a code object implementing operator `::`.
Parameters
----------
py_ast : ast.Module
A Python ast Module node.
filename : string
The filename which generated the expression.
Returns
-------
result : types.CodeType
A Python code object which implements the desired behavior.
"""
explicit_globals = set()
for node in ast.walk(py_ast):
if isinstance(node, ast.Global):
explicit_globals.update(node.names)
code = compile(py_ast, filename, mode='exec')
bp_code = Code.from_code(code)
replace_global_loads(bp_code.code, explicit_globals)
optimize_locals(bp_code.code)
bp_code.newlocals = False
return bp_code.to_code()
def compile_subscribe(py_ast, filename):
""" Compile an ast into a code object implementing operator `<<`.
Parameters
----------
py_ast : ast.Expression
A Python ast Expression node.
filename : string
The filename which generated the expression.
Returns
-------
result : types.CodeType
A Python code object which implements the desired behavior.
"""
code = compile(py_ast, filename, mode='eval')
code = update_firstlineno(code, py_ast.lineno)
bp_code = Code.from_code(code)
replace_global_loads(bp_code.code)
optimize_locals(bp_code.code)
bp_code.code = inject_tracing(bp_code.code)
bp_code.newlocals = False
bp_code.args = ('_[tracer]',) + bp_code.args
return bp_code.to_code()
def compile_update(py_ast, filename):
""" Compile an ast into a code object implementing operator `>>`.
Parameters
----------
py_ast : ast.Expression
A Python ast Expression node.
filename : string
The filename which generated the expression.
Returns
-------
result : types.CodeType
A Python code object which implements the desired behavior.
"""
code = compile(py_ast, filename, mode='eval')
code = update_firstlineno(code, py_ast.lineno)
bp_code = Code.from_code(code)
replace_global_loads(bp_code.code)
optimize_locals(bp_code.code)
bp_code.code = inject_inversion(bp_code.code)
bp_code.newlocals = False
bp_code.args = ('_[inverter]', '_[value]') + bp_code.args
return bp_code.to_code()
def compile_delegate(py_ast, filename):
""" Compile an ast into a code object implementing operator `:=`.
This will generate two code objects: one which is equivalent to
operator `<<` and another which is equivalent to `>>`.
Parameters
----------
py_ast : ast.Expression
A Python ast Expression node.
filename : string
The filename which generated the expression.
Returns
-------
result : tuple
A 2-tuple of types.CodeType equivalent to operators `<<` and
`>>` respectively.
"""
code = compile(py_ast, filename, mode='eval')
code = update_firstlineno(code, py_ast.lineno)
bp_code = Code.from_code(code)
bp_code.newlocals = False
codelist = bp_code.code[:]
bp_args = tuple(bp_code.args)
replace_global_loads(codelist)
optimize_locals(codelist)
sub_list = inject_tracing(codelist)
bp_code.code = sub_list
bp_code.args = ('_[tracer]',) + bp_args
sub_code = bp_code.to_code()
upd_list = inject_inversion(codelist)
bp_code.code = upd_list
bp_code.args = ('_[inverter]', '_[value]') + bp_args
upd_code = bp_code.to_code()
return (sub_code, upd_code)
COMPILE_OP_MAP = {
'=': compile_simple,
'::': compile_notify,
'<<': compile_subscribe,
'>>': compile_update,
':=': compile_delegate,
}
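# Illustrative dispatch for a hypothetical binding `text << model.value`:
#   op_compiler = COMPILE_OP_MAP['<<']            # compile_subscribe
#   code = op_compiler(expr_ast, 'view.enaml')    # code object expecting a
#                                                 # '_[tracer]' first argument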
#------------------------------------------------------------------------------
# Node Visitor
#------------------------------------------------------------------------------
class _NodeVisitor(object):
""" A node visitor class that is used as base class for the various
Enaml compilers.
"""
def visit(self, node):
""" The main visitor dispatch method.
Unhandled nodes will raise an error.
"""
name = 'visit_%s' % node.__class__.__name__
try:
method = getattr(self, name)
except AttributeError:
method = self.default_visit
method(node)
def visit_nonstrict(self, node):
""" A nonstrict visitor dispatch method.
Unhandled nodes will be ignored.
"""
name = 'visit_%s' % node.__class__.__name__
try:
method = getattr(self, name)
except AttributeError:
pass
else:
method(node)
def default_visit(self, node):
""" The default visitor method. Raises an error since there
should not be any unhandled nodes.
"""
raise ValueError('Unhandled Node %s.' % node)
#------------------------------------------------------------------------------
# EnamlDef Compiler
#------------------------------------------------------------------------------
class EnamlDefCompiler(_NodeVisitor):
""" A visitor which compiles an EnamlDef into a marshallable dict.
"""
@classmethod
def compile(cls, node, filename):
""" The main entry point of the EnamlDefCompiler.
This compiler compiles the given EnamlDef node into a dictionary
which can be used to build out the component tree at run time.
Parameters
----------
node : EnamlDef
The EnamlDef node to compile.
filename : string
The string filename to use for the enamldef.
"""
compiler = cls(filename)
compiler.visit(node)
return compiler.stack.pop()
def __init__(self, filename):
self.filename = filename
self.stack = []
def visit_EnamlDef(self, node):
obj = {
'filename': self.filename,
'lineno': node.lineno,
'typename': node.typename,
'base': node.base,
'identifier': node.identifier,
'docstring': node.docstring,
'storage_defs': [],
'bindings': [],
'child_defs': [],
}
self.stack.append(obj)
for item in node.body:
self.visit(item)
def visit_StorageDef(self, node):
storage_def = {
'lineno': node.lineno,
'kind': node.kind,
'name': node.name,
'typename': node.typename,
}
self.stack[-1]['storage_defs'].append(storage_def)
if node.expr is not None:
self.visit_Binding(node)
def visit_Binding(self, node):
opexpr = node.expr
pyast = opexpr.value.ast
opcompiler = COMPILE_OP_MAP[opexpr.operator]
code = opcompiler(pyast, self.filename)
binding = {
'lineno': node.lineno,
'name': node.name,
'operator': opexpr.operator,
}
if isinstance(code, tuple):
code, auxcode = code
binding['code'] = code
binding['auxcode'] = auxcode
else:
binding['code'] = code
binding['auxcode'] = None
self.stack[-1]['bindings'].append(binding)
def visit_ChildDef(self, node):
obj = {
'lineno': node.lineno,
'typename': node.typename,
'identifier': node.identifier,
'filename': self.filename,
'storage_defs': [],
'bindings': [],
'child_defs': [],
}
self.stack.append(obj)
for item in node.body:
self.visit(item)
self.stack.pop()
self.stack[-1]['child_defs'].append(obj)
#------------------------------------------------------------------------------
# Enaml Compiler
#------------------------------------------------------------------------------
class EnamlCompiler(_NodeVisitor):
""" A visitor that will compile an enaml module ast node.
The entry point is the `compile` classmethod which will compile
the ast into an appropriate python code object for a module.
"""
@classmethod
def compile(cls, module_ast, filename):
""" The main entry point of the compiler.
Parameters
----------
module_ast : Instance(enaml_ast.Module)
The enaml module ast node that should be compiled.
filename : string
The string filename of the module ast being compiled.
Returns
-------
result : types.CodeType
The code object for the compiled module.
"""
# Protect against unicode filenames, which are incompatible
# with code objects created via types.CodeType
if isinstance(filename, unicode):
filename = filename.encode(sys.getfilesystemencoding())
# Generate the startup code for the module
module_ops = [(SetLineno, 1)]
for start in STARTUP:
start_code = compile(start, filename, mode='exec')
bp_code = Code.from_code(start_code)
            # Skip the SetLineno and ReturnValue codes
module_ops.extend(bp_code.code[1:-2])
# Add in the code ops for the module
compiler = cls(filename)
compiler.visit(module_ast)
module_ops.extend(compiler.code_ops)
# Generate the cleanup code for the module
for end in CLEANUP:
end_code = compile(end, filename, mode='exec')
bp_code = Code.from_code(end_code)
            # Skip the SetLineno and ReturnValue codes
module_ops.extend(bp_code.code[1:-2])
# Add in the final return value ops
module_ops.extend([
(LOAD_CONST, None),
(RETURN_VALUE, None),
])
# Generate and return the module code object.
mod_code = Code(
module_ops, [], [], False, False, False, '', filename, 0, '',
)
return mod_code.to_code()
def __init__(self, filename):
self.filename = filename
self.code_ops = []
def visit_Module(self, node):
for item in node.body:
self.visit(item)
def visit_Python(self, node):
py_code = compile(node.ast, self.filename, mode='exec')
bp_code = Code.from_code(py_code)
        # Skip the SetLineno and ReturnValue codes
self.code_ops.extend(bp_code.code[1:-2])
def visit_EnamlDef(self, node):
code_ops = self.code_ops
dct = EnamlDefCompiler.compile(node, self.filename)
for decorator in node.decorators:
code = compile(decorator.ast, self.filename, mode='eval')
bpcode = Code.from_code(code).code
code_ops.extend(bpcode[:-1]) # skip the return value op
code_ops.extend([
(SetLineno, node.lineno),
(LOAD_NAME, '__make_enamldef_helper'), # Foo = __make_enamldef_helper(dct, globals)
(LOAD_CONST, dct), # dct is a marshalable description dict
(LOAD_NAME, 'globals'),
(CALL_FUNCTION, 0x0000),
(CALL_FUNCTION, 0x0002),
])
for dec in node.decorators:
code_ops.append((CALL_FUNCTION, 0x0001))
code_ops.append((STORE_NAME, node.typename))
|
ContinuumIO/ashiba
|
enaml/enaml/core/enaml_compiler.py
|
Python
|
bsd-3-clause
| 20,427
|
[
"VisIt"
] |
eb1d0864989a88e231b46a0e763ea2d78170e09f10e74fafde335914a8d0d5af
|
import numpy as np
import pycuda.driver as cuda
import pycuda.gpuarray as garray
from pycuda.compiler import SourceModule
from pycuda.tools import dtype_to_ctype, context_dependent_memoize
import neurokernel.LPU.utils.curand as curand
from neurokernel.LPU.utils.simpleio import *
from neurokernel.LPU.NDComponents.MembraneModels.BaseMembraneModel import BaseMembraneModel
class PhotoreceptorModel(BaseMembraneModel):
accesses = ['photon', 'I']
updates = ['V']
def __init__(self, params_dict, access_buffers, dt, LPU_id=None,
debug=False, cuda_verbose = False):
self.num_microvilli = params_dict['num_microvilli'].get().astype(np.int32)
self.num_neurons = self.num_microvilli.size
self.dt = dt
# self.multiple = int(self.dt/self.run_dt)
# assert(self.multiple * self.run_dt == self.dt)
self.record_neuron = debug
self.debug = debug
self.LPU_id = LPU_id
self.dtype = np.double
if cuda_verbose:
self.compile_options = ['--ptxas-options=-v']
else:
self.compile_options = []
self.block_transduction = (128, 1, 1)
self.grid_transduction = (cuda.Context.get_device().MULTIPROCESSOR_COUNT*9, 1)
self.block_sum = (256,1,1)
self.grid_sum = (self.num_neurons,1)
self.block_re_sort = (256, 1, 1)
self.grid_re_sort = (cuda.Context.get_device().MULTIPROCESSOR_COUNT*5, 1)
self.block_hh = (256, 1, 1)
self.grid_hh = ((self.num_neurons-1)//self.block_hh[0] + 1, 1)
self.block_state = (32, 32, 1)
self.grid_state = ((self.num_neurons-1)//self.block_state[0] + 1, 1)
self.params_dict = params_dict
self.access_buffers = access_buffers
self._initialize(params_dict)
@property
def maximum_dt_allowed(self):
return 1e-4
@property
def internal_steps(self):
if self.dt > self.maximum_dt_allowed:
div = self.dt/self.maximum_dt_allowed
if np.abs(div - np.round(div)) < 1e-5:
return int(np.round(div))
else:
return int(np.ceil(div))
#raise ValueError('Simulation time step dt larger than maximum allowed dt of model {}'.format(type(self)))
else:
return 1
@property
def internal_dt(self):
return self.dt/self.internal_steps
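    # Illustrative numbers (an example, not a configuration): with dt = 1e-3 and
    # maximum_dt_allowed = 1e-4, internal_steps is 10 and internal_dt is 1e-4,
    # so run_step below integrates 10 internal steps per simulation step.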
def pre_run(self, update_pointers):
cuda.memcpy_dtod(int(update_pointers['V']),
self.params_dict['init_V'].gpudata,
self.params_dict['init_V'].nbytes)
def _initialize(self, params_dict):
self._setup_output()
self._setup_transduction()
self._setup_hh()
    def _setup_output(self):
        if self.record_neuron:
            outputfile = self.LPU_id + '_out'
            self.outputfile_I = h5py.File(outputfile+'I.h5', 'w')
            self.outputfile_I.create_dataset(
                '/array', (0, self.num_neurons), dtype = self.dtype,
                maxshape = (None, self.num_neurons))
def _setup_transduction(self, seed = 0):
self.photons = garray.zeros(self.num_neurons, self.dtype)
# setup RNG
self.randState = curand.curand_setup(
self.block_transduction[0]*self.grid_transduction[0], seed)
        # the transduction kernel treats each microvillus as a single unit;
        # therefore, we need to figure out which neuron each microvillus
        # belongs to, and over which index range we should sum up the current.
self.cum_microvilli = np.hstack((0, np.cumsum(self.num_microvilli)))
self.total_microvilli = self.cum_microvilli[-1]
tmp = np.zeros(self.total_microvilli, np.uint16)
tmp[self.cum_microvilli[1:-1]] = 1
self.microvilli_ind = np.cumsum(tmp).astype(np.uint16)
#self.d_num_microvilli = garray.to_gpu(self.num_microvilli)
self.d_num_microvilli = self.params_dict['num_microvilli']
self.count = garray.empty(1, np.int32)
self.d_cum_microvilli = garray.to_gpu(self.cum_microvilli.astype(np.int32))
self.d_microvilli_ind = garray.to_gpu(self.microvilli_ind.astype(np.uint16))
self.X = []
tmp = np.zeros(self.total_microvilli*2, np.uint16)
tmp[::2] = 50
# variables G, Gstar
self.X.append(garray.to_gpu(tmp.view(np.int32)))
tmp = np.zeros(self.total_microvilli*2, np.uint16)
# variables PLCstar, Dstar
self.X.append(garray.to_gpu(tmp.view(np.int32)))
tmp = np.zeros(self.total_microvilli*2, np.uint16)
# variables Cstar, Tstar
self.X.append(garray.to_gpu(tmp.view(np.int32)))
tmp = np.zeros(self.total_microvilli, np.uint16)
# variables Mstar
self.X.append(garray.to_gpu(tmp))
Xaddress = np.empty(5, np.int64)
for i in range(4):
Xaddress[i] = int(self.X[i].gpudata)
Xaddress[4] = int(self.d_microvilli_ind.gpudata)
change_ind1 = np.asarray([1, 1, 2, 3, 3, 2, 5, 4, 5, 5, 7, 6, 6, 1],
np.int32) - 1
change_ind2 = np.asarray([1, 1, 3, 4, 1, 1, 1, 1, 1, 7, 1, 1, 1, 1],
np.int32) - 1
change1 = np.asarray([0, -1, -1, -1, -1, 1, 1, -1, -1, -2, -1, 1, -1, 1],
np.int32)
change2 = np.asarray([0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
np.int32)
self.transduction_func = get_transduction_func(
self.dtype, self.block_transduction[0], Xaddress,
change_ind1, change_ind2,
change1, change2, self.compile_options)
self.re_sort_func = get_re_sort_func(
self.dtype, self.compile_options)
self.ns = garray.zeros(self.num_neurons, self.dtype) + 1
self.update_ns_func = get_update_ns_func(self.dtype, self.compile_options)
def _setup_hh(self):
self.I = garray.zeros(self.num_neurons, self.dtype)
self.I_fb = garray.zeros(self.num_neurons, self.dtype)
self.hhx = [garray.empty(self.num_neurons, self.dtype)
for _ in range(5)]
# mV
# V_init = np.empty(self.num_neurons, dtype=np.double)
# V_init.fill(-0.0819925*1000)
#cuda.memcpy_htod(int(self.V_p), V_init)
self.hhx[0].fill(0.2184)
self.hhx[1].fill(0.9653)
self.hhx[2].fill(0.0117)
self.hhx[3].fill(0.9998)
self.hhx[4].fill(0.0017)
self.sum_current_func = get_sum_current_func(self.dtype,
self.block_sum[0],
self.compile_options)
self.hh_func = get_hh_func(self.dtype, self.compile_options)
def run_step(self, update_pointers, st=None):
self.I_fb.fill(0)
if self.params_dict['pre']['I'].size > 0:
self.sum_in_variable('I', self.I_fb)
# what if no input processor is provided?
self.re_sort_func.prepared_async_call(
self.grid_re_sort, self.block_re_sort, st,
self.access_buffers['photon'].gpudata,
self.photons.gpudata,
self.params_dict['pre']['photon'].gpudata,
self.params_dict['npre']['photon'].gpudata,
self.params_dict['cumpre']['photon'].gpudata,
self.num_neurons)
for _ in range(self.internal_steps):
if self.debug:
minimum = min(self.photons.get())
if (minimum < 0):
raise ValueError('Inputs to photoreceptor should not '
'be negative, minimum value detected: {}'
.format(minimum))
# reset warp counter
self.count.fill(0)
# X, V, ns, photons -> X
self.transduction_func.prepared_async_call(
self.grid_transduction, self.block_transduction, st,
self.randState.gpudata, self.internal_dt,
update_pointers['V'], self.ns.gpudata,
self.photons.gpudata,
self.d_num_microvilli.gpudata,
self.total_microvilli, self.count.gpudata)
# X, V, I_fb -> I
self.sum_current_func.prepared_async_call(
self.grid_sum, self.block_sum, st,
self.X[2].gpudata, self.d_num_microvilli.gpudata,
self.d_cum_microvilli.gpudata,
update_pointers['V'], self.I.gpudata, self.I_fb.gpudata)
# hhX, I -> hhX, V
self.hh_func.prepared_async_call(
self.grid_hh, self.block_hh, st,
self.I.gpudata, update_pointers['V'], self.hhx[0].gpudata,
self.hhx[1].gpudata, self.hhx[2].gpudata, self.hhx[3].gpudata,
self.hhx[4].gpudata, self.num_neurons, self.internal_dt/10, 10)
self.update_ns_func.prepared_async_call(
( (self.num_neurons - 1) // 128 + 1, 1), (128, 1, 1), st,
self.ns.gpudata, self.num_neurons, update_pointers['V'], self.internal_dt)
def get_update_ns_func(dtype, compile_options):
template = """
#define RTAU 1.0
__global__ void
update_ns(%(type)s* g_ns, int num_neurons, %(type)s* V, %(type)s dt)
{
int tid = threadIdx.x + blockIdx.x * blockDim.x;
if(tid < num_neurons) {
%(type)s v = V[tid];
%(type)s ns = g_ns[tid];
%(type)s n_inf;
if(v >= -53)
n_inf = 8.5652*(v+53)+5;
else
n_inf = fmax%(fletter)s(1.0, 0.2354*(v+70)+1);
g_ns[tid] = ns + (n_inf-ns)*RTAU*dt;
}
}
"""
scalartype = dtype.type if isinstance(dtype, np.dtype) else dtype
mod = SourceModule(template % {"type": dtype_to_ctype(dtype),
"fletter": 'f' if scalartype == np.float32 else ''},
options = compile_options)
func = mod.get_function('update_ns')
func.prepare('PiP'+np.dtype(dtype).char)#[np.intp, np.int32, np.intp, scalartype])
return func
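# The kernel above is a forward-Euler step of dns/dt = RTAU*(n_inf(V) - ns).
# Pure-Python sketch of one step (illustrative only, not used by the class):
#   n_inf = 8.5652*(v + 53) + 5 if v >= -53 else max(1.0, 0.2354*(v + 70) + 1)
#   ns = ns + (n_inf - ns) * RTAU * dt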
def get_re_sort_func(dtype, compile_options):
template = """
__global__ void
resort(%(type)s* in_photons, %(type)s* out_photons, int* pre, int* npre,
int* cumpre, int num_neurons)
{
int tid = threadIdx.x + blockIdx.x * blockDim.x;
int total_threads = blockDim.x * gridDim.x;
for(int i = tid; i < num_neurons; i += total_threads)
{
if(npre[i])
            out_photons[i] = in_photons[pre[cumpre[i]]];
}
}
"""
mod = SourceModule(template % {"type": dtype_to_ctype(dtype)},
options = compile_options)
func = mod.get_function('resort')
func.prepare('PPPPPi')
return func
def get_transduction_func(dtype, block_size, Xaddress,
change_ind1, change_ind2, change1, change2, compile_options):
template = """
/* This is kept for documentation purposes; the actual code used is after
 * the end of this template. */
#include "curand_kernel.h"
extern "C" {
#include "stdio.h"
#define BLOCK_SIZE %(block_size)d
#define LA 0.5
/* Simulation Constants */
#define C_T 0.5 /* Total concentration of calmodulin */
#define G_T 50 /* Total number of G-protein */
#define PLC_T 100 /* Total number of PLC */
#define T_T 25 /* Total number of TRP/TRPL channels */
#define I_TSTAR 0.68 /* Average current through one opened TRP/TRPL channel (pA)*/
#define GAMMA_DSTAR 4.0 /* s^(-1) rate constant*/
#define GAMMA_GAP 3.0 /* s^(-1) rate constant*/
#define GAMMA_GSTAR 3.5 /* s^(-1) rate constant*/
#define GAMMA_MSTAR 3.7 /* s^(-1) rate constant*/
#define GAMMA_PLCSTAR 144 /* s^(-1) rate constant */
#define GAMMA_TSTAR 25 /* s^(-1) rate constant */
#define H_DSTAR 37.8 /* strength constant */
#define H_MSTAR 40 /* strength constant */
#define H_PLCSTAR 11.1 /* strength constant */
#define H_TSTARP 11.5 /* strength constant */
#define H_TSTARN 10 /* strength constant */
#define K_P 0.3 /* Dissociation coefficient for calcium positive feedback */
#define K_P_INV 3.3333 /* K_P inverse ( too many decimals are not important) */
#define K_N 0.18 /* Dissociation coefficient for calmodulin negative feedback */
#define K_N_INV 5.5555 /* K_N inverse ( too many decimals are not important) */
#define K_U 30 /* (mM^(-1)s^(-1)) Rate of Ca2+ uptake by calmodulin */
#define K_R 5.5 /* (mM^(-1)s^(-1)) Rate of Ca2+ release by calmodulin */
#define K_CA 1000 /* s^(-1) diffusion from microvillus to somata (tuned) */
#define K_NACA 3e-8 /* Scaling factor for Na+/Ca2+ exchanger model */
#define KAPPA_DSTAR 1300.0 /* s^(-1) rate constant - there is also a capital K_DSTAR */
#define KAPPA_GSTAR 7.05 /* s^(-1) rate constant */
#define KAPPA_PLCSTAR 15.6 /* s^(-1) rate constant */
#define KAPPA_TSTAR 150.0 /* s^(-1) rate constant */
#define K_DSTAR 100.0 /* rate constant */
#define F 96485 /* (mC/mol) Faraday constant (changed from paper)*/
#define N 4 /* Binding sites for calcium on calmodulin */
#define R 8.314 /* (J*K^-1*mol^-1)Gas constant */
#define T 293 /* (K) Absolute temperature */
#define VOL 3e-9 /* microvillus volume: 3e-12 microlitres expressed as
                  * 3e-9 nanolitres so that units agree */
#define N_S0_DIM 1 /* initial condition */
#define N_S0_BRIGHT 2
#define A_N_S0_DIM 4 /* upper bound for dynamic increase (of negative feedback) */
#define A_N_S0_BRIGHT 200
#define TAU_N_S0_DIM 3000 /* time constant for negative feedback */
#define TAU_N_S0_BRIGHT 1000
#define NA_CO 120 /* (mM) Extracellular sodium concentration */
#define NA_CI 8 /* (mM) Intracellular sodium concentration */
#define CA_CO 1.5 /* (mM) Extracellular calcium concentration */
#define G_TRP 8 /* conductance of a TRP channel */
#define TRP_REV 0 /* TRP channel reversal potential (mV) */
__device__ __constant__ long long int d_X[5];
__device__ __constant__ int change_ind1[14];
__device__ __constant__ int change1[14];
__device__ __constant__ int change_ind2[14];
__device__ __constant__ int change2[14];
/* cc = n/(NA*VOL) [6.0221413e+23 mol^-1 * 3e-21 m^3; 1 mM ~= 1806.6 molecules] */
__device__ float num_to_mM(int n)
{
return n * 5.5353e-4; // n/1806.6;
}
/* n = cc*VOL*NA [6.0221413e+23 mol^-1 * 3e-21 m^3; 1 mM ~= 1806.6 molecules] */
__device__ float mM_to_num(float cc)
{
return rintf(cc * 1806.6);
}
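/* Illustrative numbers behind the constants above: 1 mM in one microvillus is
 * n = 6.0221413e+23 mol^-1 * 3e-21 m^3 * 1 mol/m^3 ~= 1806.6 molecules,
 * hence the factors 1806.6 and 1/1806.6 ~= 5.5353e-4. */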
/* Assumes Hill constant (=2) for positive calcium feedback */
__device__ float compute_fp(float Ca_cc)
{
float tmp = Ca_cc*K_P_INV;
tmp *= tmp;
return tmp/(1 + tmp);
}
/* Assumes Hill constant(=3) for negative calmodulin feedback */
__device__ float compute_fn(float Cstar_cc, float ns)
{
float tmp = Cstar_cc*K_N_INV;
tmp *= tmp*tmp;
return ns*tmp/(1 + tmp);
}
/* Vm [V] */
__device__ float compute_ca(int Tstar, float Cstar_cc, float Vm)
{
float I_in = Tstar*G_TRP*fmaxf(-Vm + 0.001*TRP_REV, 0);
/* CaM = C_T - Cstar_cc */
float denom = (K_CA + (N*K_U*C_T) - (N*K_U)*Cstar_cc + 179.0952 * expf(-(F/(R*T))*Vm)); // (K_NACA*NA_CO^3/VOL*F)
/* I_Ca ~= 0.4*I_in */
float numer = (0.4*I_in)/(2*VOL*F) +
((K_NACA*CA_CO*NA_CI*NA_CI*NA_CI)/(VOL*F)) + // in paper it's -K_NACA... due to different conventions
N*K_R*Cstar_cc;
return fmaxf(1.6e-4, numer/denom);
}
__global__ void
transduction(curandStateXORWOW_t *state, float dt, %(type)s* d_Vm,
%(type)s* g_ns, %(type)s* input,
int* num_microvilli, int total_microvilli, int* count)
{
int tid = threadIdx.x;
int gid = threadIdx.x + blockIdx.x * blockDim.x;
int wid = tid %% 32;
int wrp = tid >> 5;
__shared__ int X[BLOCK_SIZE][7]; // number of molecules
__shared__ float Ca[BLOCK_SIZE];
__shared__ float fn[BLOCK_SIZE];
float Vm, ns, lambda;
float sumrate, dt_advanced;
int reaction_ind;
ushort2 tmp;
// copy random generator state locally to avoid accessing global memory
curandStateXORWOW_t localstate = state[gid];
int mid; // microvilli ID
    volatile __shared__ int mi[4]; // starting point of mid per warp
// use atomicAdd to obtain the starting mid for the warp
if(wid == 0)
{
mi[wrp] = atomicAdd(count, 32);
}
mid = mi[wrp] + wid;
int ind;
while(mid < total_microvilli)
{
// load photoreceptor index of the microvilli
ind = ((ushort*)d_X[4])[mid];
// load variables that are needed for computing calcium concentration
tmp = ((ushort2*)d_X[2])[mid];
X[tid][5] = tmp.x;
X[tid][6] = tmp.y;
Vm = d_Vm[ind]*1e-3;
ns = g_ns[ind];
// update calcium concentration
Ca[tid] = compute_ca(X[tid][6], num_to_mM(X[tid][5]), Vm);
fn[tid] = compute_fn(num_to_mM(X[tid][5]), ns);
lambda = input[ind]/num_microvilli[ind];
// load the rest of variables
tmp = ((ushort2*)d_X[1])[mid];
X[tid][4] = tmp.y;
X[tid][3] = tmp.x;
tmp = ((ushort2*)d_X[0])[mid];
X[tid][2] = tmp.y;
X[tid][1] = tmp.x;
X[tid][0] = ((ushort*)d_X[3])[mid];
// compute total rate of reaction
sumrate = lambda;
sumrate += mM_to_num(K_U) * Ca[tid] * (0.5 - num_to_mM(X[tid][5]) ); //11
sumrate += mM_to_num(K_R) * num_to_mM(X[tid][5]); //12
sumrate += GAMMA_TSTAR * (1 + H_TSTARN*fn[tid]) * X[tid][6]; // 10
sumrate += GAMMA_DSTAR * (1 + H_DSTAR*fn[tid]) * X[tid][4]; // 8
sumrate += GAMMA_PLCSTAR * (1 + H_PLCSTAR*fn[tid]) * X[tid][3]; // 7
sumrate += GAMMA_MSTAR * (1 + H_MSTAR*fn[tid]) * X[tid][0]; // 1
sumrate += KAPPA_DSTAR * X[tid][3]; // 6
sumrate += GAMMA_GAP * X[tid][2] * X[tid][3]; // 4
sumrate += KAPPA_PLCSTAR * X[tid][2] * (PLC_T-X[tid][3]); // 3
sumrate += GAMMA_GSTAR * (G_T - X[tid][2] - X[tid][1] - X[tid][3]); // 5
sumrate += KAPPA_GSTAR * X[tid][1] * X[tid][0]; // 2
sumrate += (KAPPA_TSTAR/(K_DSTAR*K_DSTAR)) *
(1 + H_TSTARP*compute_fp( Ca[tid] )) *
X[tid][4]*(X[tid][4]-1)*(T_T-X[tid][6])*0.5 ; // 9
// choose the next reaction time
dt_advanced = -logf(curand_uniform(&localstate))/(LA + sumrate);
// If the reaction time is smaller than dt,
// pick the reaction and update,
// then compute the total rate and next reaction time again
// until all dt_advanced is larger than dt.
// Note that you don't have to compensate for
// the last reaction time that exceeds dt.
// The reason is that the exponential distribution is MEMORYLESS.
while(dt_advanced <= dt)
{
reaction_ind = 0;
sumrate = curand_uniform(&localstate) * sumrate;
if(sumrate > 2e-5)
{
sumrate -= lambda;
reaction_ind = (sumrate<=2e-5) * 13;
if(!reaction_ind)
{
sumrate -= mM_to_num(K_U) * Ca[tid] * (0.5 - num_to_mM(X[tid][5]) );
reaction_ind = (sumrate<=2e-5) * 11;
if(!reaction_ind)
{
sumrate -= mM_to_num(K_R) * num_to_mM(X[tid][5]);
reaction_ind = (sumrate<=2e-5) * 12;
if(!reaction_ind)
{
sumrate -= GAMMA_TSTAR * (1 + H_TSTARN*fn[tid]) * X[tid][6];
reaction_ind = (sumrate<=2e-5) * 10;
if(!reaction_ind)
{
sumrate -= GAMMA_DSTAR * (1 + H_DSTAR*fn[tid]) * X[tid][4];
reaction_ind = (sumrate<=2e-5) * 8;
if(!reaction_ind)
{
sumrate -= GAMMA_PLCSTAR * (1 + H_PLCSTAR*fn[tid]) * X[tid][3];
reaction_ind = (sumrate<=2e-5) * 7;
if(!reaction_ind)
{
sumrate -= GAMMA_MSTAR * (1 + H_MSTAR*fn[tid]) * X[tid][0];
reaction_ind = (sumrate<=2e-5) * 1;
if(!reaction_ind)
{
sumrate -= KAPPA_DSTAR * X[tid][3];
reaction_ind = (sumrate<=2e-5) * 6;
if(!reaction_ind)
{
sumrate -= GAMMA_GAP * X[tid][2] * X[tid][3];
reaction_ind = (sumrate<=2e-5) * 4;
if(!reaction_ind)
{
sumrate -= KAPPA_PLCSTAR * X[tid][2] * (PLC_T-X[tid][3]);
reaction_ind = (sumrate<=2e-5) * 3;
if(!reaction_ind)
{
sumrate -= GAMMA_GSTAR * (G_T - X[tid][2] - X[tid][1] - X[tid][3]);
reaction_ind = (sumrate<=2e-5) * 5;
if(!reaction_ind)
{
sumrate -= KAPPA_GSTAR * X[tid][1] * X[tid][0];
reaction_ind = (sumrate<=2e-5) * 2;
if(!reaction_ind)
{
sumrate -= (KAPPA_TSTAR/(K_DSTAR*K_DSTAR)) *
(1 + H_TSTARP*compute_fp( Ca[tid] )) *
X[tid][4]*(X[tid][4]-1)*(T_T-X[tid][6])*0.5;
reaction_ind = (sumrate<=2e-5) * 9;
}
}
}
}
}
}
}
}
}
}
}
}
}
int ind;
// only up to two state variables are needed to be updated
// update the first one.
ind = change_ind1[reaction_ind];
X[tid][ind] += change1[reaction_ind];
//if(reaction_ind == 9)
//{
// X[tid][ind] = max(X[tid][ind], 0);
//}
ind = change_ind2[reaction_ind];
//update the second one
if(ind != 0)
{
X[tid][ind] += change2[reaction_ind];
}
// compute the advance time again
Ca[tid] = compute_ca(X[tid][6], num_to_mM(X[tid][5]), Vm);
fn[tid] = compute_fn( num_to_mM(X[tid][5]), ns );
//fp[tid] = compute_fp( Ca[tid] );
sumrate = lambda;
sumrate += mM_to_num(K_U) * Ca[tid] * (0.5 - num_to_mM(X[tid][5]) ); //11
sumrate += mM_to_num(K_R) * num_to_mM(X[tid][5]); //12
sumrate += GAMMA_TSTAR * (1 + H_TSTARN*fn[tid]) * X[tid][6]; // 10
sumrate += GAMMA_DSTAR * (1 + H_DSTAR*fn[tid]) * X[tid][4]; // 8
sumrate += GAMMA_PLCSTAR * (1 + H_PLCSTAR*fn[tid]) * X[tid][3]; // 7
sumrate += GAMMA_MSTAR * (1 + H_MSTAR*fn[tid]) * X[tid][0]; // 1
sumrate += KAPPA_DSTAR * X[tid][3]; // 6
sumrate += GAMMA_GAP * X[tid][2] * X[tid][3]; // 4
sumrate += KAPPA_PLCSTAR * X[tid][2] * (PLC_T-X[tid][3]); // 3
sumrate += GAMMA_GSTAR * (G_T - X[tid][2] - X[tid][1] - X[tid][3]); // 5
sumrate += KAPPA_GSTAR * X[tid][1] * X[tid][0]; // 2
sumrate += (KAPPA_TSTAR/(K_DSTAR*K_DSTAR)) *
(1 + H_TSTARP*compute_fp( Ca[tid] )) *
X[tid][4]*(X[tid][4]-1)*(T_T-X[tid][6])*0.5; // 9
dt_advanced -= logf(curand_uniform(&localstate))/(LA + sumrate);
} // end while
((ushort*)d_X[3])[mid] = X[tid][0];
((ushort2*)d_X[0])[mid] = make_ushort2(X[tid][1], X[tid][2]);
((ushort2*)d_X[1])[mid] = make_ushort2(X[tid][3], X[tid][4]);
((ushort2*)d_X[2])[mid] = make_ushort2(X[tid][5], X[tid][6]);
if(wid == 0)
{
mi[wrp] = atomicAdd(count, 32);
}
mid = mi[wrp] + wid;
}
// copy the updated random generator state back to global memory
state[gid] = localstate;
}
}
"""
template_run = """
#include "curand_kernel.h"
extern "C" {
#include "stdio.h"
#define BLOCK_SIZE %(block_size)d
#define LA 0.5
__device__ __constant__ long long int d_X[5];
__device__ __constant__ int change_ind1[14];
__device__ __constant__ int change1[14];
__device__ __constant__ int change_ind2[14];
__device__ __constant__ int change2[14];
__device__ float num_to_mM(int n)
{
return n * 5.5353e-4; // n/1806.6;
}
__device__ float mM_to_num(float cc)
{
return rintf(cc * 1806.6);
}
__device__ float compute_fp( float ca_cc)
{
float tmp = ca_cc*3.3333333333;
tmp *= tmp;
return tmp/(1+tmp);
}
__device__ float compute_fn( float Cstar_cc, float ns)
{
float tmp = Cstar_cc*5.55555555;
tmp *= tmp*tmp;
return ns*tmp/(1+tmp);
}
__device__ float compute_ca(int Tstar, float cstar_cc, float Vm)
{
float I_in = Tstar*8*fmaxf(-Vm,0);
float denom = (1060 - 120*cstar_cc + 179.0952 * expf(-39.60793*Vm));
float numer = I_in * 690.9537 + 0.0795979 + 22*cstar_cc;
return fmaxf(1.6e-4, numer/denom);
}
__global__ void
transduction(curandStateXORWOW_t *state, float dt, %(type)s* d_Vm,
%(type)s* g_ns, %(type)s* input,
int* num_microvilli, int total_microvilli, int* count)
{
int tid = threadIdx.x;
int gid = threadIdx.x + blockIdx.x * blockDim.x;
int wid = tid %% 32;
int wrp = tid >> 5;
__shared__ int X[BLOCK_SIZE][7]; // number of molecules
__shared__ float Ca[BLOCK_SIZE];
__shared__ float fn[BLOCK_SIZE];
float Vm, ns, lambda;
float sumrate, dt_advanced;
int reaction_ind;
ushort2 tmp;
// copy random generator state locally to avoid accessing global memory
curandStateXORWOW_t localstate = state[gid];
int mid; // microvilli ID
    volatile __shared__ int mi[4]; // starting point of mid per warp, blocksize must be 128
// use atomicAdd to obtain the starting mid for the warp
if(wid == 0)
{
mi[wrp] = atomicAdd(count, 32);
}
mid = mi[wrp] + wid;
int ind;
while(mid < total_microvilli)
{
ind = ((ushort*)d_X[4])[mid];
// load variables that are needed for computing calcium concentration
tmp = ((ushort2*)d_X[2])[mid];
X[tid][5] = tmp.x;
X[tid][6] = tmp.y;
Vm = d_Vm[ind]*1e-3;
ns = g_ns[ind];
// update calcium concentration
Ca[tid] = compute_ca(X[tid][6], num_to_mM(X[tid][5]), Vm);
fn[tid] = compute_fn( num_to_mM(X[tid][5]), ns);
lambda = input[ind]/(double)num_microvilli[ind];
// load the rest of variables
tmp = ((ushort2*)d_X[1])[mid];
X[tid][4] = tmp.y;
X[tid][3] = tmp.x;
tmp = ((ushort2*)d_X[0])[mid];
X[tid][2] = tmp.y;
X[tid][1] = tmp.x;
X[tid][0] = ((ushort*)d_X[3])[mid];
sumrate = lambda + 54198 * Ca[tid] * (0.5 - X[tid][5] * 5.5353e-4) + 5.5 * X[tid][5]; // 11, 12
sumrate += 25 * (1+10*fn[tid]) * X[tid][6]; // 10
sumrate += 4 * (1+37.8*fn[tid]) * X[tid][4] ; // 8
sumrate += (1444+1598.4*fn[tid]) * X[tid][3] ; // 7, 6
sumrate += (3.7*(1+40*fn[tid]) + 7.05 * X[tid][1]) * X[tid][0] ; // 1, 2
sumrate += (1560 - 12.6 * X[tid][3]) * X[tid][2]; // 3, 4
sumrate += 3.5 * (50 - X[tid][2] - X[tid][1] - X[tid][3]) ; // 5
sumrate += 0.015 * (1+11.5*compute_fp( Ca[tid] )) * X[tid][4]*(X[tid][4]-1)*(25-X[tid][6])*0.5 ; // 9
dt_advanced = -logf(curand_uniform(&localstate))/(LA+sumrate);
// If the reaction time is smaller than dt,
// pick the reaction and update,
// then compute the total rate and next reaction time again
// until dt_advanced exceeds dt.
// Note that you don't have to compensate for
// the last reaction time that exceeds dt.
// The reason is that the exponential distribution is MEMORYLESS.
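// Annotation: curand_uniform draws U in (0,1], so -logf(U)/rate is an
// inverse-CDF sample of an Exp(rate) waiting time; memorylessness,
// P(T > s+t | T > s) = P(T > t), justifies discarding the overshoot past dt.
// The cascade below picks reaction i with probability rate_i/sumrate by
// subtracting successive rates from U*sumrate until the remainder falls
// below the 2e-5 tolerance.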
while (dt_advanced <= dt) {
reaction_ind = 0;
sumrate = curand_uniform(&localstate) * sumrate;
if (sumrate > 2e-5) {
sumrate -= lambda;
reaction_ind = (sumrate<=2e-5) * 13;
if (!reaction_ind) {
sumrate -= mM_to_num(30) * Ca[tid] * (0.5 - num_to_mM(X[tid][5]) );
reaction_ind = (sumrate<=2e-5) * 11;
if (!reaction_ind) {
sumrate -= mM_to_num(5.5) * num_to_mM(X[tid][5]);
reaction_ind = (sumrate<=2e-5) * 12;
if (!reaction_ind) {
sumrate -= 25 * (1+10*fn[tid]) * X[tid][6];
reaction_ind = (sumrate<=2e-5) * 10;
if (!reaction_ind) {
sumrate -= 4 * (1+37.8*fn[tid]) * X[tid][4];
reaction_ind = (sumrate<=2e-5) * 8;
if (!reaction_ind) {
sumrate -= 144 * (1+11.1*fn[tid]) * X[tid][3];
reaction_ind = (sumrate<=2e-5) * 7;
if (!reaction_ind) {
sumrate -= 3.7*(1+40*fn[tid]) * X[tid][0];
reaction_ind = (sumrate<=2e-5) * 1;
if (!reaction_ind) {
sumrate -= 1300 * X[tid][3];
reaction_ind = (sumrate<=2e-5) * 6;
if (!reaction_ind) {
sumrate -= 3.0 * X[tid][2] * X[tid][3];
reaction_ind = (sumrate<=2e-5) * 4;
if (!reaction_ind) {
sumrate -= 15.6 * X[tid][2]
* (100-X[tid][3]);
reaction_ind = (sumrate<=2e-5) * 3;
if (!reaction_ind) {
sumrate -= 3.5 * (50 - X[tid][2]
- X[tid][1] - X[tid][3]);
reaction_ind = (sumrate<=2e-5) * 5;
if(!reaction_ind) {
sumrate -= 7.05 * X[tid][1]
* X[tid][0];
reaction_ind = (sumrate<=2e-5)
* 2;
if(!reaction_ind) {
sumrate -= 0.015 *
(1+11.5*compute_fp( Ca[tid] )) * X[tid][4]*(X[tid][4]-1)*(25-X[tid][6])*0.5;
reaction_ind = (sumrate<=2e-5) * 9;
}
}
}
}
}
}
}
}
}
}
}
}
}
//int ind;
// only up to two state variables are needed to be updated
// update the first one.
ind = change_ind1[reaction_ind];
X[tid][ind] += change1[reaction_ind];
//update the second one
ind = change_ind2[reaction_ind];
if (ind != 0)
X[tid][ind] += change2[reaction_ind];
// compute the advance time again
Ca[tid] = compute_ca(X[tid][6], num_to_mM(X[tid][5]), Vm);
fn[tid] = compute_fn( num_to_mM(X[tid][5]), ns );
sumrate = lambda + 54198*Ca[tid]*(0.5 - X[tid][5]*5.5353e-4)
+ 5.5*X[tid][5]; // 11, 12
sumrate += 25*(1 + 10*fn[tid])*X[tid][6]; // 10
sumrate += 4*(1 + 37.8*fn[tid])*X[tid][4]; // 8
sumrate += (1444 + 1598.4*fn[tid])*X[tid][3]; // 7, 6
sumrate += (3.7*(1 + 40*fn[tid]) + 7.05*X[tid][1])*X[tid][0]; // 1, 2
sumrate += (1560 - 12.6*X[tid][3])*X[tid][2]; // 3, 4
sumrate += 3.5*(50 - X[tid][2] - X[tid][1] - X[tid][3]); // 5
sumrate += 0.015*(1 + 11.5*compute_fp( Ca[tid] ))
*X[tid][4]*(X[tid][4] - 1)*(25 - X[tid][6])*0.5; // 9
dt_advanced -= logf(curand_uniform(&localstate))/(LA+sumrate);
} // end while
((ushort*)d_X[3])[mid] = X[tid][0];
((ushort2*)d_X[0])[mid] = make_ushort2(X[tid][1], X[tid][2]);
((ushort2*)d_X[1])[mid] = make_ushort2(X[tid][3], X[tid][4]);
((ushort2*)d_X[2])[mid] = make_ushort2(X[tid][5], X[tid][6]);
if(wid == 0)
{
mi[wrp] = atomicAdd(count, 32);
}
mid = mi[wrp] + wid;
}
// copy the updated random generator state back to global memory
state[gid] = localstate;
}
}
"""
try:
co = [compile_options[0]+' --maxrregcount=54']
except IndexError:
co = ['--maxrregcount=54']
scalartype = dtype.type if isinstance(dtype, np.dtype) else dtype
mod = SourceModule(
template_run % {
"type": dtype_to_ctype(dtype),
"block_size": block_size,
"fletter": 'f' if scalartype == np.float32 else ''
},
options = co,
no_extern_c = True)
func = mod.get_function('transduction')
d_X_address, d_X_nbytes = mod.get_global("d_X")
cuda.memcpy_htod(d_X_address, Xaddress)
d_change_ind1_address, d_change_ind1_nbytes = mod.get_global("change_ind1")
d_change_ind2_address, d_change_ind2_nbytes = mod.get_global("change_ind2")
d_change1_address, d_change1_nbytes = mod.get_global("change1")
d_change2_address, d_change2_nbytes = mod.get_global("change2")
cuda.memcpy_htod(d_change_ind1_address, change_ind1)
cuda.memcpy_htod(d_change_ind2_address, change_ind2)
cuda.memcpy_htod(d_change1_address, change1)
cuda.memcpy_htod(d_change2_address, change2)
func.prepare('PfPPPPiP')
func.set_cache_config(cuda.func_cache.PREFER_SHARED)
return func
def get_hh_func(dtype, compile_options):
template = """
#define E_K (-85)
#define E_Cl (-30)
#define G_s 1.6
#define G_dr 3.5
#define G_Cl 0.006
#define G_K 0.082
#define G_nov 3.0
#define C 4
__global__ void
hh(%(type)s* I_all, %(type)s* d_V, %(type)s* d_sa, %(type)s* d_si,
%(type)s* d_dra, %(type)s* d_dri, %(type)s* d_nov, int num_neurons,
%(type)s ddt, int multiple)
{
int tid = threadIdx.x + blockIdx.x * blockDim.x;
if (tid < num_neurons) {
%(type)s I = I_all[tid];
%(type)s V = d_V[tid]; //mV
%(type)s sa = d_sa[tid];
%(type)s si = d_si[tid];
%(type)s dra = d_dra[tid];
%(type)s dri = d_dri[tid];
%(type)s nov = d_nov[tid];
%(type)s x_inf, tau_x, dx;
%(type)s dt = 1000 * ddt;
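/* Annotation: ddt is presumably the outer time step in seconds, so dt is
   in ms; the loop below takes 'multiple' forward-Euler substeps, each
   relaxing the gating variables via dx/dt = (x_inf(V) - x)/tau_x(V)
   before integrating the membrane equation. */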
for(int i = 0; i < multiple; ++i) {
/* The precision of the power constant affects the result */
x_inf = cbrt%(fletter)s(1/(1+exp%(fletter)s((-23.7-V)/12.8)));
tau_x = 0.13+3.39*exp%(fletter)s(-(-73-V)*(-73-V)/400);
dx = (x_inf - sa)/tau_x;
sa += dt * dx;
x_inf = 0.9/(1+exp%(fletter)s((-55-V)/-3.9))
+ 0.1/(1+exp%(fletter)s( (-74.8-V)/-10.7));
tau_x = 113*exp%(fletter)s(-(-71-V)*(-71-V)/841);
dx = (x_inf - si)/tau_x;
si += dt * dx;
x_inf = sqrt%(fletter)s(1/(1+exp%(fletter)s((-1-V)/9.1)));
tau_x = 0.5+5.75*exp%(fletter)s(-(-25-V)*(-25-V)/1024);
dx = (x_inf - dra)/tau_x;
dra += dt * dx;
x_inf = 1/(1+exp%(fletter)s((-25.7-V)/-6.4));
tau_x = 890;
dx = (x_inf - dri)/tau_x;
dri += dt * dx;
x_inf = 1/(1+exp%(fletter)s((-12-V)/11));
tau_x = 3 + 166*exp%(fletter)s(-(-20-V)*(-20-V)/484);
dx = (x_inf - nov)/tau_x;
nov += dt * dx;
dx = (I - G_K*(V-E_K) - G_Cl * (V-E_Cl) -
G_s * sa*sa*sa * si * (V-E_K) -
G_dr * dra*dra * dri * (V-E_K)
- G_nov * nov * (V-E_K) )/C;
V += dt * dx;
}
d_V[tid] = V;
d_sa[tid] = sa;
d_si[tid] = si;
d_dra[tid] = dra;
d_dri[tid] = dri;
d_nov[tid] = nov;
}
}
"""
# Used 53 registers, 388 bytes cmem[0], 304 bytes cmem[2]
# float: Used 35 registers, 380 bytes cmem[0], 96 bytes cmem[2]
scalartype = dtype.type if isinstance(dtype, np.dtype) else dtype
mod = SourceModule(template % {"type": dtype_to_ctype(dtype), "fletter": 'f'
if scalartype == np.float32 else ''},
options = compile_options)
func = mod.get_function('hh')
func.prepare('PPPPPPPi'+np.dtype(dtype).char+'i')
return func
def get_sum_current_func(dtype, block_size, compile_options):
template = """
#define BLOCK_SIZE %(block_size)d
#define G_TRP 8 /* conductance of a TRP channel */
#define TRP_REV 0 /* mV */
__inline__ __device__
int warpReduction(volatile int* sdata, int tid){
sdata[tid] += sdata[tid+32];
sdata[tid] += sdata[tid+16];
sdata[tid] += sdata[tid+8];
sdata[tid] += sdata[tid+4];
sdata[tid] += sdata[tid+2];
return sdata[tid] + sdata[tid+1];
}
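/* Annotation: this tree reduction relies on implicit warp-synchronous
   execution over a volatile shared array, which is only safe on pre-Volta
   GPUs; on newer hardware one would insert __syncwarp() between steps or
   use __shfl_down_sync() instead. */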
__global__ void
sum_current(ushort2* d_Tstar, int* d_num_microvilli,
int* d_cum_microvilli,
%(type)s* d_Vm, %(type)s* I_all,
%(type)s* I_fb)
{
int tid = threadIdx.x;
int bid = blockIdx.x;
int num_microvilli = d_num_microvilli[bid];
int shift = d_cum_microvilli[bid];
int total_open_channel;
__shared__ int sum[BLOCK_SIZE];
sum[tid] = 0;
for(int i = tid; i < num_microvilli; i += BLOCK_SIZE)
sum[tid] += d_Tstar[i + shift].y;
__syncthreads();
if (tid < 64) {
#pragma unroll
for(int i = 1; i < BLOCK_SIZE/64; ++i)
sum[tid] += sum[tid + 64*i];
}
__syncthreads();
if (tid < 32) total_open_channel = warpReduction(sum, tid);
if (tid == 0) {
%(type)s Vm = (d_Vm[bid]-TRP_REV) * 0.001;
%(type)s I_in;
if(Vm < 0)
I_in = total_open_channel * G_TRP * (-Vm);
else
I_in = 0;
I_all[bid] = I_fb[bid] + I_in / 15.7; // convert pA into \muA/cm^2
}
}
"""
assert(block_size%64 == 0)
mod = SourceModule(template % {"type": dtype_to_ctype(dtype),
"block_size": block_size},
options = compile_options)
func = mod.get_function('sum_current')
func.prepare('PPPPPP')
return func
|
neurokernel/retina
|
retina/NDComponents/MembraneModels/PhotoreceptorModel.py
|
Python
|
bsd-3-clause
| 41,046
|
[
"NEURON"
] |
9738b9e22b8b28550c82d7c3d04c9bcf0d45e51b6deb06053fbdded3a90c2fc0
|
"""
oxd Python
----------
Python bindings for Gluu oxd server.
"""
import codecs
import os
import re
from setuptools import setup
def find_version(*file_paths):
here = os.path.abspath(os.path.dirname(__file__))
with codecs.open(os.path.join(here, *file_paths), 'r') as f:
version_file = f.read()
version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]",
version_file, re.M)
if version_match:
return version_match.group(1)
raise RuntimeError("Unable to find version string.")
setup(
name="oxdpython",
version=find_version("oxdpython", "__init__.py"),
url="https://github.com/GluuFederation/oxd-python",
license="MIT",
author="Gluu",
author_email="info@gluu.org",
description="Python binidings for Gluu oxd server",
long_description="oxd Python is a client library for the Gluu oxd Server. For information, visit http://oxd.gluu.org",
packages=["oxdpython"],
zip_safe=False,
install_requires=[],
classifiers=[
"Development Status :: 5 - Production/Stable",
"Environment :: Web Environment",
"Intended Audience :: Developers",
"License :: OSI Approved",
"License :: OSI Approved :: MIT License",
"Operating System :: POSIX",
"Operating System :: POSIX :: Linux",
"Topic :: Internet",
"Topic :: Internet :: WWW/HTTP",
"Topic :: Internet :: WWW/HTTP :: Dynamic Content",
"Topic :: Software Development",
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: Libraries :: Python Modules",
"Programming Language :: Python",
"Programming Language :: Python :: 2",
"Programming Language :: Python :: 2.7",
],
include_package_data=True,
entry_points={}
)
|
GluuFederation/oxd-python
|
setup.py
|
Python
|
mit
| 1,836
|
[
"VisIt"
] |
b2fa1137258d2a204d69515a11bdc5b9fad00f528e6702f9d784d80607613963
|
import numpy as np
from cogue.crystal.utility import klength2mesh
class DOS:
def __init__(self,
phonon,
distance=100):
self._phonon = phonon # Phonopy object
self._lattice = np.array(phonon.get_unitcell().get_cell().T,
dtype='double')
self._distance = distance
self._mesh = None
self._freqs = None
self._dos = None
def run(self):
self._set_mesh()
if self._run_mesh_sampling():
self._run_dos()
return True
return False
def get_lattice(self):
return self._lattice
def get_mesh(self):
return self._mesh
def get_dos(self):
return self._freqs, self._dos
def write_dos(self):
self._phonon.write_total_DOS()
def plot_dos(self, plt):
fig = plt.figure()
# fig.subplots_adjust(left=0.15, right=0.95, top=0.95, bottom=0.15)
plt.tick_params(axis='both', which='major', labelsize=10.5)
ax = fig.add_subplot(111)
plt.plot(self._freqs, self._dos, 'r-')
f_min, f_max = self._get_f_range()
plt.xlim(xmin=f_min, xmax=f_max)
plt.ylim(ymin=0)
ax.xaxis.set_ticks_position('both')
ax.yaxis.set_ticks_position('both')
ax.xaxis.set_tick_params(which='both', direction='in')
ax.yaxis.set_tick_params(which='both', direction='in')
xlim = ax.get_xlim()
ylim = ax.get_ylim()
# aspect = (xlim[1] - xlim[0]) / (ylim[1] - ylim[0])
# ax.set_aspect(aspect * 0.55)
plt.xlabel("Frequency (THz)")
plt.ylabel("Phonon DOS\n(States/THz$\cdot$unitcell)")
fig.tight_layout()
def save_dos(self, plt):
plt.savefig("dos.png")
def _get_f_range(self):
i_min = 0
i_max = 1000
for i, (f, d) in enumerate(zip(self._freqs, self._dos)):
if d > 1e-5:
i_min = i
break
for i, (f, d) in enumerate(zip(self._freqs[::-1], self._dos[::-1])):
if d > 1e-5:
i_max = len(self._freqs) - 1 - i
break
f_min = self._freqs[i_min]
if f_min > 0:
f_min = 0
f_max = self._freqs[i_max]
f_max += (f_max - f_min) * 0.05
return f_min, f_max
def _set_mesh(self):
self._mesh = klength2mesh(self._distance, self._lattice)
def _run_mesh_sampling(self):
return self._phonon.set_mesh(self._mesh)
def _run_dos(self, tetrahedron_method=True):
if self._phonon.set_total_DOS(tetrahedron_method=tetrahedron_method):
self._freqs, self._dos = self._phonon.get_total_DOS()
if __name__ == '__main__':
import os
import sys
import yaml
from phonopy import Phonopy
from phonopy.interface.phonopy_yaml import get_unitcell_from_phonopy_yaml
from phonopy.file_IO import parse_FORCE_SETS, parse_BORN
from cogue.crystal.utility import get_angles, get_lattice_parameters
import matplotlib
if len(sys.argv) > 1:
cell = get_unitcell_from_phonopy_yaml(sys.argv[1])
else:
cell = get_unitcell_from_phonopy_yaml("POSCAR-unitcell.yaml")
phonon_info = yaml.load(open("phonon.yaml"))
phonon = Phonopy(cell, phonon_info['supercell_matrix'])
force_sets = parse_FORCE_SETS()
phonon.set_displacement_dataset(force_sets)
phonon.produce_force_constants()
if os.path.isfile("BORN"):
with open("BORN") as f:
primitive = phonon.get_primitive()
nac_params = parse_BORN(primitive, filename="BORN")
nac_params['factor'] = 14.399652
phonon.set_nac_params(nac_params)
matplotlib.use('Agg')
matplotlib.rcParams.update({'figure.figsize': (5, 2.8),
'font.family': 'serif'})
import matplotlib.pyplot as plt
distance = 100
dos = DOS(phonon, distance=distance)
if dos.run():
dos.write_dos()
dos.plot_dos(plt)
lattice = dos.get_lattice()
print("a, b, c = %f %f %f" % tuple(get_lattice_parameters(lattice)))
print("alpha, beta, gamma = %f %f %f" % tuple(get_angles(lattice)))
print("mesh (x=%f) = %s" % (distance, dos.get_mesh()))
dos.save_dos(plt)
else:
print("DOS calculation failed.")
|
atztogo/phonondb
|
phonondb/phonopy/dos.py
|
Python
|
bsd-3-clause
| 4,461
|
[
"CRYSTAL",
"phonopy"
] |
cae2dbc6f0a34a25e027b8dfb0d71b04dd42cd05e141199a18ae75198b6e11ae
|
# Copyright (C) 2008 CSC - Scientific Computing Ltd.
"""This module defines an ASE interface to VASP.
Developed on the basis of modules by Jussi Enkovaara and John
Kitchin. The path of the directory containing the pseudopotential
directories (potpaw, potpaw_GGA, potpaw_PBE, ...) should be set
by the environment variable $VASP_PP_PATH.
The user should also set the environment variable $VASP_SCRIPT pointing
to a python script looking something like::
import os
exitcode = os.system('vasp')
Alternatively, the user can set the environment variable $VASP_COMMAND pointing
to the command used to launch vasp, e.g. 'vasp' or 'mpirun -n 16 vasp'
http://cms.mpi.univie.ac.at/vasp/
"""
import os
import sys
import re
from general import Calculator
from os.path import join, isfile, islink
import numpy as np
import ase
import ase.io
from ase.utils import devnull
# Parameters that can be set in INCAR. The values which are None
# are not written and default parameters of VASP are used for them.
float_keys = [
'aexx', # Fraction of exact/DFT exchange
'aggac', # Fraction of gradient correction to correlation
'aggax', # Fraction of gradient correction to exchange
'aldac', # Fraction of LDA correlation energy
'amin', #
'amix', #
'amix_mag', #
'bmix', # tags for mixing
'bmix_mag', #
'deper', # relative stopping criterion for optimization of eigenvalue
'ebreak', # absolute stopping criterion for optimization of eigenvalues (EDIFF/N-BANDS/4)
'efield', # applied electrostatic field
'emax', # energy-range for DOSCAR file
'emin', #
'enaug', # Density cutoff
'encut', # Planewave cutoff
'encutgw', # energy cutoff for response function
'encutfock', # FFT grid in the HF related routines
'hfscreen', # attribute to change from PBE0 to HSE
'kspacing', # determines the number of k-points if the KPOINTS
# file is not present. KSPACING is the smallest
# allowed spacing between k-points in units of
# $\AA$^{-1}$.
'potim', # time-step for ion-motion (fs)
'nelect', # total number of electrons
'param1', # Exchange parameter
'param2', # Exchange parameter
'pomass', # mass of ions in am
'sigma', # broadening in eV
'spring', # spring constant for NEB
'time', # special control tag
'weimin', # maximum weight for a band to be considered empty
'zab_vdw', # vdW-DF parameter
'zval', # ionic valence
#The next keywords pertain to the VTST add-ons from Graeme Henkelman's group at UT Austin
'jacobian', # Weight of lattice to atomic motion
'ddr', # (DdR) dimer separation
'drotmax', # (DRotMax) number of rotation steps per translation step
'dfnmin', # (DFNMin) rotational force below which dimer is not rotated
'dfnmax', # (DFNMax) rotational force below which dimer rotation stops
'stol', # convergence ratio for minimum eigenvalue
'sdr', # finite difference for setting up Lanczos matrix and step size when translating
'maxmove', # Max step for translation for IOPT > 0
'invcurve', # Initial curvature for LBFGS (IOPT = 1)
'timestep', # Dynamical timestep for IOPT = 3 and IOPT = 7
'sdalpha', # Ratio between force and step size for IOPT = 4
#The next keywords pertain to IOPT = 7 (i.e. FIRE)
'ftimemax', # Max time step
'ftimedec', # Factor to dec. dt
'ftimeinc', # Factor to inc. dt
'falpha', # Parameter for velocity damping
'falphadec', # Factor to dec. alpha
]
exp_keys = [
'ediff', # stopping-criterion for electronic upd.
'ediffg', # stopping-criterion for ionic upd.
'symprec', # precision in symmetry routines
#The next keywords pertain to the VTST add-ons from Graeme Henkelman's group at UT Austin
'fdstep', # Finite difference step for IOPT = 1 or 2
]
string_keys = [
'algo', # algorithm: Normal (Davidson) | Fast | Very_Fast (RMM-DIIS)
'gga', # xc-type: PW PB LM or 91
'prec', # Precision of calculation (Low, Normal, Accurate)
'system', # name of System
'tebeg', #
'teend', # temperature during run
'precfock', # FFT grid in the HF related routines
]
int_keys = [
'ialgo', # algorithm: use only 8 (CG) or 48 (RMM-DIIS)
'ibrion', # ionic relaxation: 0-MD 1-quasi-New 2-CG
'icharg', # charge: 0-WAVECAR 1-CHGCAR 2-atom 10-const
'idipol', # monopol/dipol and quadropole corrections
'images', # number of images for NEB calculation
'iniwav', # initial electr wf. : 0-lowe 1-rand
'isif', # calculate stress and what to relax
'ismear', # part. occupancies: -5 Blochl -4-tet -1-fermi 0-gaus >0 MP
'ispin', # spin-polarized calculation
'istart', # startjob: 0-new 1-cont 2-samecut
'isym', # symmetry: 0-nonsym 1-usesym 2-usePAWsym
'iwavpr', # prediction of wf.: 0-non 1-charg 2-wave 3-comb
'ldauprint', # 0-silent, 1-occ. matrix written to OUTCAR, 2-1+pot. matrix written
'ldautype', # L(S)DA+U: 1-Liechtenstein 2-Dudarev 4-Liechtenstein(LDAU)
'lmaxmix', #
'lorbit', # create PROOUT
'maxmix', #
'ngx', # FFT mesh for wavefunctions, x
'ngxf', # FFT mesh for charges x
'ngy', # FFT mesh for wavefunctions, y
'ngyf', # FFT mesh for charges y
'ngz', # FFT mesh for wavefunctions, z
'ngzf', # FFT mesh for charges z
'nbands', # Number of bands
'nblk', # blocking for some BLAS calls (Sec. 6.5)
'nbmod', # specifies mode for partial charge calculation
'nelm', # nr. of electronic steps (default 60)
'nelmdl', # nr. of initial electronic steps
'nelmin',
'nfree', # number of steps per DOF when calculating Hessian using finite differences
'nkred', # define sub grid of q-points for HF with nkredx=nkredy=nkredz
'nkredx', # define sub grid of q-points in x direction for HF
'nkredy', # define sub grid of q-points in y direction for HF
'nkredz', # define sub grid of q-points in z direction for HF
'nomega', # number of frequency points
'nomegar', # number of frequency points on real axis
'npar', # parallelization over bands
'nsim', # evaluate NSIM bands simultaneously if using RMM-DIIS
'nsw', # number of steps for ionic upd.
'nupdown', # fix spin moment to specified value
'nwrite', # verbosity write-flag (how much is written)
'smass', # Nose mass-parameter (am)
'vdwgr', # extra keyword for Andris program
'vdwrn', # extra keyword for Andris program
'voskown', # use Vosko, Wilk, Nusair interpolation
#The next keywords pertain to the VTST add-ons from Graeme Henkelman's group at UT Austin
'ichain', # Flag for controlling which method is being used (0=NEB, 1=DynMat, 2=Dimer, 3=Lanczos)
# if ichain > 3, then both IBRION and POTIM are automatically set in the INCAR file
'iopt', # Controls which optimizer to use. for iopt > 0, ibrion = 3 and potim = 0.0
'snl', # Maximum dimensionality of the Lanczos matrix
'lbfgsmem', # Steps saved for inverse Hessian for IOPT = 1 (LBFGS)
'fnmin', # Max iter. before adjusting dt and alpha for IOPT = 7 (FIRE)
]
bool_keys = [
'addgrid', # finer grid for augmentation charge density
'kgamma', # The generated kpoint grid (from KSPACING) is either
# centred at the $\Gamma$
# point (e.g. includes the $\Gamma$ point)
# (KGAMMA=.TRUE.)
'laechg', # write AECCAR0/AECCAR1/AECCAR2
'lasph', # non-spherical contributions to XC energy (and pot for VASP.5.X)
'lasync', # overlap communication with calculations
'lcharg', #
'lcorr', # Harris-correction to forces
'ldau', # L(S)DA+U
'ldiag', # algorithm: perform sub space rotation
'ldipol', # potential correction mode
'lelf', # create ELFCAR
'lepsilon', # enables to calculate and to print the BEC tensors
'lhfcalc', # switch to turn on Hartree Fock calculations
'loptics', # calculate the frequency dependent dielectric matrix
'lpard', # evaluate partial (band and/or k-point) decomposed charge density
'lplane', # parallelisation over the FFT grid
'lscalapack', # switch off scaLAPACK
'lscalu', # switch of LU decomposition
'lsepb', # write out partial charge of each band separately?
'lsepk', # write out partial charge of each k-point separately?
'lthomas', #
'luse_vdw', # Invoke vdW-DF implementation by Klimes et. al
'lvdw', # Invoke DFT-D2 method of Grimme
'lvhar', # write Hartree potential to LOCPOT (vasp 5.x)
'lvtot', # create WAVECAR/CHGCAR/LOCPOT
'lwave', #
#The next keywords pertain to the VTST add-ons from Graeme Henkelman's group at UT Austin
'lclimb', # Turn on CI-NEB
'ltangentold', # Old central difference tangent
'ldneb', # Turn on modified double nudging
'lnebcell', # Turn on SS-NEB
'lglobal', # Optimize NEB globally for LBFGS (IOPT = 1)
'llineopt', # Use force based line minimizer for translation (IOPT = 1)
]
list_keys = [
'dipol', # center of cell for dipol
'eint', # energy range to calculate partial charge for
'ferwe', # Fixed band occupation (spin-paired)
'ferdo', # Fixed band occupation (spin-polarized)
'iband', # bands to calculate partial charge for
'magmom', # initial magnetic moments
'kpuse', # k-point to calculate partial charge for
'ropt', # number of grid points for non-local proj in real space
'rwigs', # Wigner-Seitz radii
'ldauu', # LDA+U parameters; potentially redundant w.r.t. the dict
'ldaul', # key 'ldau_luj', but 'ldau_luj' can't be read directly from
'ldauj', # the INCAR (since it needs information about atomic
# species). In case of conflict, 'ldau_luj' gets written out
# when a calculation is set up
]
special_keys = [
'lreal', # non-local projectors in real space
]
dict_keys = [
'ldau_luj', # dictionary with L(S)DA+U parameters, e.g. {'Fe':{'L':2, 'U':4.0, 'J':0.9}, ...}
]
keys = [
# 'NBLOCK' and KBLOCK inner block; outer block
# 'NPACO' and APACO distance and nr. of slots for P.C.
# 'WEIMIN, EBREAK, DEPER special control tags
]
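# Usage sketch (annotation, not part of the original module): the key lists
# above map calculator keyword arguments onto INCAR tags, so a typical
# instantiation might look like
#
#   calc = Vasp(xc='PBE',             # selects the POTCAR family
#               prec='Accurate',      # string_keys -> PREC
#               encut=400,            # float_keys  -> ENCUT
#               ismear=0, sigma=0.05, # int_keys / float_keys
#               kpts=(4, 4, 4))       # input_params, written to KPOINTS
#   atoms.set_calculator(calc)
#
# Exact argument names and availability depend on the surrounding ASE
# version, so treat this as a sketch rather than a guaranteed interface.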
class Vasp(Calculator):
name = 'Vasp'
def __init__(self, restart=None, output_template='vasp',
track_output=False, command=None, **kwargs):
self.float_params = {}
self.exp_params = {}
self.string_params = {}
self.int_params = {}
self.bool_params = {}
self.list_params = {}
self.special_params = {}
self.dict_params = {}
for key in float_keys:
self.float_params[key] = None
for key in exp_keys:
self.exp_params[key] = None
for key in string_keys:
self.string_params[key] = None
for key in int_keys:
self.int_params[key] = None
for key in bool_keys:
self.bool_params[key] = None
for key in list_keys:
self.list_params[key] = None
for key in special_keys:
self.special_params[key] = None
for key in dict_keys:
self.dict_params[key] = None
self.string_params['prec'] = 'Normal'
if kwargs.get('xc', None):
if kwargs['xc'] not in ['PW91','LDA','PBE']:
raise ValueError(
'%s not supported for xc! use one of: PW91, LDA or PBE.' %
kwargs['xc'])
self.input_params = {'xc': kwargs['xc']} # exchange correlation functional
else:
self.input_params = {'xc': 'PW91'} # exchange correlation functional
self.input_params.update({
'setups': None, # Special setups (e.g. pv, sv, ...)
'txt': '-', # Where to send information
'kpts': (1,1,1), # k-points
'gamma': False, # Option to use gamma-sampling instead
# of Monkhorst-Pack
'kpts_nintersections': None, # number of points between points in band structures
'reciprocal': False, # Option to write explicit k-points in units
# of reciprocal lattice vectors
})
self.restart = restart
self.track_output = track_output
self.output_template = output_template
self.command = command
if restart:
self.restart_load()
return
self.nbands = self.int_params['nbands']
self.atoms = None
self.positions = None
self.run_counts = 0
self.set(**kwargs)
def set(self, **kwargs):
for key in kwargs:
if key in self.float_params:
self.float_params[key] = kwargs[key]
elif key in self.exp_params:
self.exp_params[key] = kwargs[key]
elif key in self.string_params:
self.string_params[key] = kwargs[key]
elif key in self.int_params:
self.int_params[key] = kwargs[key]
elif key in self.bool_params:
self.bool_params[key] = kwargs[key]
elif key in self.list_params:
self.list_params[key] = kwargs[key]
elif key in self.special_params:
self.special_params[key] = kwargs[key]
elif key in self.dict_params:
self.dict_params[key] = kwargs[key]
elif key in self.input_params:
self.input_params[key] = kwargs[key]
else:
raise TypeError('Parameter not defined: ' + key)
def update(self, atoms):
if self.calculation_required(atoms, ['energy']):
if (self.atoms is None or
self.atoms.positions.shape != atoms.positions.shape):
# Completely new calculation just reusing the same
# calculator, so delete any old VASP files found.
self.clean()
self.calculate(atoms)
def initialize(self, atoms):
"""Initialize a VASP calculation
Constructs the POTCAR file (does not actually write it).
User should specify the PATH
to the pseudopotentials in VASP_PP_PATH environment variable
The pseudopotentials are expected to be in:
LDA: $VASP_PP_PATH/potpaw/
PBE: $VASP_PP_PATH/potpaw_PBE/
PW91: $VASP_PP_PATH/potpaw_GGA/
if your pseudopotentials are somewhere else, or named
differently you should make symlinks at the paths above that
point to the right place.
"""
p = self.input_params
self.all_symbols = atoms.get_chemical_symbols()
self.natoms = len(atoms)
self.spinpol = atoms.get_initial_magnetic_moments().any()
atomtypes = atoms.get_chemical_symbols()
# Determine the number of atoms of each atomic species
# sorted after atomic species
special_setups = []
symbols = {}
if self.input_params['setups']:
for m in self.input_params['setups']:
try:
#special_setup[self.input_params['setups'][m]] = int(m)
special_setups.append(int(m))
except (TypeError, ValueError):
#print 'setup ' + m + ' is a group setup'
continue
#print 'special_setups' , special_setups
for m,atom in enumerate(atoms):
symbol = atom.symbol
if m in special_setups:
pass
else:
if symbol not in symbols:
symbols[symbol] = 1
else:
symbols[symbol] += 1
# Build the sorting list
self.sort = []
self.sort.extend(special_setups)
for symbol in symbols:
for m,atom in enumerate(atoms):
if m in special_setups:
pass
else:
if atom.symbol == symbol:
self.sort.append(m)
self.resort = range(len(self.sort))
for n in range(len(self.resort)):
self.resort[self.sort[n]] = n
self.atoms_sorted = atoms[self.sort]
# Check if the necessary POTCAR files exists and
# create a list of their paths.
self.symbol_count = []
for m in special_setups:
self.symbol_count.append([atomtypes[m],1])
for m in symbols:
self.symbol_count.append([m,symbols[m]])
#print 'self.symbol_count',self.symbol_count
sys.stdout.flush()
xc = '/'
#print 'p[xc]',p['xc']
if p['xc'] == 'PW91':
xc = '_gga/'
elif p['xc'] == 'PBE':
xc = '_pbe/'
if 'VASP_PP_PATH' in os.environ:
pppaths = os.environ['VASP_PP_PATH'].split(':')
else:
pppaths = []
self.ppp_list = []
# Setting the pseudopotentials, first special setups and
# then according to symbols
for m in special_setups:
name = 'potpaw'+xc.upper() + p['setups'][str(m)] + '/POTCAR'
found = False
for path in pppaths:
filename = join(path, name)
#print 'filename', filename
if isfile(filename) or islink(filename):
found = True
self.ppp_list.append(filename)
break
elif isfile(filename + '.Z') or islink(filename + '.Z'):
found = True
self.ppp_list.append(filename+'.Z')
break
if not found:
print 'Looking for %s' % name
raise RuntimeError('No pseudopotential for %s!' % name)
#print 'symbols', symbols
for symbol in symbols:
try:
name = 'potpaw'+xc.upper()+symbol + p['setups'][symbol]
except (TypeError, KeyError):
name = 'potpaw' + xc.upper() + symbol
name += '/POTCAR'
found = False
for path in pppaths:
filename = join(path, name)
#print 'filename', filename
if isfile(filename) or islink(filename):
found = True
self.ppp_list.append(filename)
break
elif isfile(filename + '.Z') or islink(filename + '.Z'):
found = True
self.ppp_list.append(filename+'.Z')
break
if not found:
print '''Looking for %s
The pseudopotentials are expected to be in:
LDA: $VASP_PP_PATH/potpaw/
PBE: $VASP_PP_PATH/potpaw_PBE/
PW91: $VASP_PP_PATH/potpaw_GGA/''' % name
raise RuntimeError('No pseudopotential for %s!' % symbol)
self.converged = None
self.setups_changed = None
def calculate(self, atoms):
"""Generate necessary files in the working directory and run VASP.
The method first write VASP input files, then calls the method
which executes VASP. When the VASP run is finished energy, forces,
etc. are read from the VASP output.
"""
# Initialize calculations
self.initialize(atoms)
# Write input
from ase.io.vasp import write_vasp
write_vasp('POSCAR',
self.atoms_sorted,
symbol_count=self.symbol_count)
self.write_incar(atoms)
self.write_potcar()
self.write_kpoints()
self.write_sort_file()
# Execute VASP
self.run()
# Read output
atoms_sorted = ase.io.read('CONTCAR', format='vasp')
if self.int_params['ibrion']>-1 and self.int_params['nsw']>0:
# Update atomic positions and unit cell with the ones read
# from CONTCAR.
atoms.positions = atoms_sorted[self.resort].positions
atoms.cell = atoms_sorted.cell
self.converged = self.read_convergence()
self.set_results(atoms)
def set_results(self, atoms):
self.read(atoms)
if self.spinpol:
self.magnetic_moment = self.read_magnetic_moment()
if (self.int_params['lorbit']>=10
or (self.int_params['lorbit']!=None
and self.list_params['rwigs'])):
self.magnetic_moments = self.read_magnetic_moments(atoms)
else:
self.magnetic_moments = None
self.old_float_params = self.float_params.copy()
self.old_exp_params = self.exp_params.copy()
self.old_string_params = self.string_params.copy()
self.old_int_params = self.int_params.copy()
self.old_input_params = self.input_params.copy()
self.old_bool_params = self.bool_params.copy()
self.old_list_params = self.list_params.copy()
self.old_dict_params = self.dict_params.copy()
self.atoms = atoms.copy()
self.name = 'vasp'
self.version = self.read_version()
self.niter = self.read_number_of_iterations()
self.sigma = self.read_electronic_temperature()
self.nelect = self.read_number_of_electrons()
def run(self):
"""Method which explicitely runs VASP."""
if self.track_output:
self.out = self.output_template+str(self.run_counts)+'.out'
self.run_counts += 1
else:
self.out = self.output_template+'.out'
stderr = sys.stderr
p=self.input_params
if p['txt'] is None:
sys.stderr = devnull
elif p['txt'] == '-':
pass
elif isinstance(p['txt'], str):
sys.stderr = open(p['txt'], 'w')
if self.command is not None:
vasp = self.command
exitcode = os.system('%s > %s' % (vasp, self.out))
elif 'VASP_COMMAND' in os.environ:
vasp = os.environ['VASP_COMMAND']
exitcode = os.system('%s > %s' % (vasp, self.out))
elif 'VASP_SCRIPT' in os.environ:
vasp = os.environ['VASP_SCRIPT']
locals={}
execfile(vasp, {}, locals)
exitcode = locals['exitcode']
else:
raise RuntimeError('Please set either VASP_COMMAND or VASP_SCRIPT environment variable')
sys.stderr = stderr
if exitcode != 0:
raise RuntimeError('Vasp exited with exit code: %d. ' % exitcode)
def restart_load(self):
"""Method which is called upon restart."""
# Try to read sorting file
if os.path.isfile('ase-sort.dat'):
self.sort = []
self.resort = []
file = open('ase-sort.dat', 'r')
lines = file.readlines()
file.close()
for line in lines:
data = line.split()
self.sort.append(int(data[0]))
self.resort.append(int(data[1]))
atoms = ase.io.read('CONTCAR', format='vasp')[self.resort]
else:
atoms = ase.io.read('CONTCAR', format='vasp')
self.sort = range(len(atoms))
self.resort = range(len(atoms))
self.atoms = atoms.copy()
self.read_incar()
self.read_outcar()
self.set_results(atoms)
self.read_kpoints()
self.read_potcar()
# self.old_incar_params = self.incar_params.copy()
self.old_input_params = self.input_params.copy()
self.converged = self.read_convergence()
def clean(self):
"""Method which cleans up after a calculation.
The default files generated by Vasp will be deleted IF this
method is called.
"""
files = ['CHG', 'CHGCAR', 'POSCAR', 'INCAR', 'CONTCAR',
'DOSCAR', 'EIGENVAL', 'IBZKPT', 'KPOINTS', 'OSZICAR',
'OUTCAR', 'PCDAT', 'POTCAR', 'vasprun.xml',
'WAVECAR', 'XDATCAR', 'PROCAR', 'ase-sort.dat',
'LOCPOT', 'AECCAR0', 'AECCAR1', 'AECCAR2']
for f in files:
try:
os.remove(f)
except OSError:
pass
def set_atoms(self, atoms):
if (atoms != self.atoms):
self.converged = None
self.atoms = atoms.copy()
def get_atoms(self):
atoms = self.atoms.copy()
atoms.set_calculator(self)
return atoms
def get_version(self):
self.update(self.atoms)
return self.version
def read_version(self):
version = None
for line in open('OUTCAR'):
if line.find(' vasp.') != -1: # find the first occurrence
version = line[len(' vasp.'):].split()[0]
break
return version
def get_potential_energy(self, atoms, force_consistent=False):
self.update(atoms)
if force_consistent:
return self.energy_free
else:
return self.energy_zero
def get_number_of_iterations(self):
self.update(self.atoms)
return self.niter
def read_number_of_iterations(self):
niter = None
for line in open('OUTCAR'):
if line.find('- Iteration') != -1: # find the last iteration number
niter = int(line.split(')')[0].split('(')[-1].strip())
return niter
def get_electronic_temperature(self):
self.update(self.atoms)
return self.sigma
def read_electronic_temperature(self):
sigma = None
for line in open('OUTCAR'):
if line.find('Fermi-smearing in eV SIGMA') != -1:
sigma = float(line.split('=')[1].strip())
return sigma
def get_default_number_of_electrons(self, filename='POTCAR'):
"""Get list of tuples (atomic symbol, number of valence electrons)
for each atomtype from a POTCAR file. """
return self.read_default_number_of_electrons(filename)
def read_default_number_of_electrons(self, filename='POTCAR'):
nelect = []
lines = open(filename).readlines()
for n, line in enumerate(lines):
if line.find('TITEL') != -1:
symbol = line.split('=')[1].split()[1].split('_')[0].strip()
valence = float(lines[n+4].split(';')[1].split('=')[1].split()[0].strip())
nelect.append((symbol, valence))
return nelect
def get_number_of_electrons(self):
self.update(self.atoms)
return self.nelect
def read_number_of_electrons(self):
nelect = None
for line in open('OUTCAR'):
if line.find('total number of electrons') != -1:
nelect = float(line.split('=')[1].split()[0].strip())
return nelect
def get_forces(self, atoms):
self.update(atoms)
return self.forces
def get_stress(self, atoms):
self.update(atoms)
return self.stress
def read_stress(self):
stress = None
for line in open('OUTCAR'):
if line.find(' in kB ') != -1:
stress = -np.array([float(a) for a in line.split()[2:]]) \
[[0, 1, 2, 4, 5, 3]] \
* 1e-1 * ase.units.GPa
return stress
def read_ldau(self):
ldau_luj = None
ldauprint = None
ldau = None
ldautype = None
atomtypes = []
# read ldau parameters from outcar
for line in open('OUTCAR'):
if line.find('TITEL') != -1: # What atoms are present
atomtypes.append(line.split()[3].split('_')[0].split('.')[0])
if line.find('LDAUTYPE') != -1: # Is this a DFT+U calculation
ldautype = int(line.split('=')[-1])
ldau = True
ldau_luj = {}
if line.find('LDAUL') != -1:
L = line.split('=')[-1].split()
if line.find('LDAUU') != -1:
U = line.split('=')[-1].split()
if line.find('LDAUJ') != -1:
J = line.split('=')[-1].split()
# create dictionary
if ldau:
for i,symbol in enumerate(atomtypes):
ldau_luj[symbol] = {'L': int(L[i]), 'U': float(U[i]), 'J': float(J[i])}
self.dict_params['ldau_luj'] = ldau_luj
return ldau, ldauprint, ldautype, ldau_luj
def calculation_required(self, atoms, quantities):
if (self.positions is None or
(self.atoms != atoms) or
(self.float_params != self.old_float_params) or
(self.exp_params != self.old_exp_params) or
(self.string_params != self.old_string_params) or
(self.int_params != self.old_int_params) or
(self.bool_params != self.old_bool_params) or
(self.list_params != self.old_list_params) or
(self.input_params != self.old_input_params) or
(self.dict_params != self.old_dict_params)
or not self.converged):
return True
if 'magmom' in quantities:
return not hasattr(self, 'magnetic_moment')
return False
def get_number_of_bands(self):
return self.nbands
def get_k_point_weights(self):
self.update(self.atoms)
return self.read_k_point_weights()
def get_number_of_spins(self):
if self.spinpol is None:
return 1
else:
return 1 + int(self.spinpol)
def get_eigenvalues(self, kpt=0, spin=0):
self.update(self.atoms)
return self.read_eigenvalues(kpt, spin)
def get_occupation_numbers(self, kpt=0, spin=0):
self.update(self.atoms)
return self.read_occupation_numbers(kpt, spin)
def get_fermi_level(self):
return self.fermi
def get_number_of_grid_points(self):
raise NotImplementedError
def get_pseudo_density(self):
raise NotImplementedError
def get_pseudo_wavefunction(self, n=0, k=0, s=0, pad=True):
raise NotImplementedError
def get_bz_k_points(self):
raise NotImplementedError
def get_ibz_kpoints(self):
self.update(self.atoms)
return self.read_ibz_kpoints()
def get_ibz_k_points(self):
return self.get_ibz_kpoints()
def get_spin_polarized(self):
if not hasattr(self, 'spinpol'):
self.spinpol = self.atoms.get_initial_magnetic_moments().any()
return self.spinpol
def get_magnetic_moment(self, atoms):
self.update(atoms)
return self.magnetic_moment
def get_magnetic_moments(self, atoms):
if self.int_params['lorbit']>=10 or self.list_params['rwigs']:
self.update(atoms)
return self.magnetic_moments
else:
return None
#raise RuntimeError(
# "The combination %s for lorbit with %s for rwigs not supported to calculate magnetic moments" % (p['lorbit'], p['rwigs']))
def get_dipole_moment(self, atoms):
"""Returns total dipole moment of the system."""
self.update(atoms)
return self.dipole
def get_xc_functional(self):
return self.input_params['xc']
def write_incar(self, atoms, **kwargs):
"""Writes the INCAR file."""
incar = open('INCAR', 'w')
incar.write('INCAR created by Atomic Simulation Environment\n')
for key, val in self.float_params.items():
if val is not None:
incar.write(' %s = %5.6f\n' % (key.upper(), val))
for key, val in self.exp_params.items():
if val is not None:
incar.write(' %s = %5.2e\n' % (key.upper(), val))
for key, val in self.string_params.items():
if val is not None:
incar.write(' %s = %s\n' % (key.upper(), val))
for key, val in self.int_params.items():
if val is not None:
incar.write(' %s = %d\n' % (key.upper(), val))
if key == 'ichain' and val > 0:
incar.write(' IBRION = 3\n POTIM = 0.0\n')
for key, val in self.int_params.items():
if key == 'iopt' and val is None:
print 'WARNING: optimization is set to LBFGS (IOPT = 1)'
incar.write(' IOPT = 1\n')
for key, val in self.exp_params.items():
if key == 'ediffg' and val is None:
raise RuntimeError('Please set EDIFFG < 0')
for key, val in self.list_params.items():
if val is not None:
incar.write(' %s = ' % key.upper())
if key in ('dipol', 'eint', 'ropt', 'rwigs'):
[incar.write('%.4f ' % x) for x in val]
elif key in ('ldauu', 'ldauj', 'ldaul') and \
self.dict_params['ldau_luj'] is None:
[incar.write('%.4f ' % x) for x in val]
elif key in ('ferwe', 'ferdo'):
[incar.write('%.1f ' % x) for x in val]
elif key in ('iband', 'kpuse'):
[incar.write('%i ' % x) for x in val]
elif key == 'magmom':
list = [[1, val[0]]]
for n in range(1, len(val)):
if val[n] == val[n-1]:
list[-1][0] += 1
else:
list.append([1, val[n]])
[incar.write('%i*%.4f ' % (mom[0], mom[1])) for mom in list]
incar.write('\n')
for key, val in self.bool_params.items():
if val is not None:
incar.write(' %s = ' % key.upper())
if val:
incar.write('.TRUE.\n')
else:
incar.write('.FALSE.\n')
for key, val in self.special_params.items():
if val is not None:
incar.write(' %s = ' % key.upper())
if key == 'lreal':
if type(val) == str:
incar.write(val+'\n')
elif type(val) == bool:
if val:
incar.write('.TRUE.\n')
else:
incar.write('.FALSE.\n')
for key, val in self.dict_params.items():
if val is not None:
if key == 'ldau_luj':
llist = ulist = jlist = ''
for symbol in self.symbol_count:
luj = val.get(symbol[0], {'L':-1, 'U': 0.0, 'J': 0.0}) # default: No +U
llist += ' %i' % luj['L']
ulist += ' %.3f' % luj['U']
jlist += ' %.3f' % luj['J']
incar.write(' LDAUL =%s\n' % llist)
incar.write(' LDAUU =%s\n' % ulist)
incar.write(' LDAUJ =%s\n' % jlist)
if self.spinpol:
if not self.int_params['ispin']:
incar.write(' ispin = 2\n'.upper())
# Write out initial magnetic moments
magmom = atoms.get_initial_magnetic_moments()[self.sort]
list = [[1, magmom[0]]]
for n in range(1, len(magmom)):
if magmom[n] == magmom[n-1]:
list[-1][0] += 1
else:
list.append([1, magmom[n]])
incar.write(' magmom = '.upper())
[incar.write('%i*%.4f ' % (mom[0], mom[1])) for mom in list]
incar.write('\n')
incar.close()
def write_kpoints(self, **kwargs):
"""Writes the KPOINTS file."""
p = self.input_params
kpoints = open('KPOINTS', 'w')
kpoints.write('KPOINTS created by Atomic Simulation Environment\n')
shape=np.array(p['kpts']).shape
if len(shape)==1:
kpoints.write('0\n')
if p['gamma']:
kpoints.write('Gamma\n')
else:
kpoints.write('Monkhorst-Pack\n')
[kpoints.write('%i ' % kpt) for kpt in p['kpts']]
kpoints.write('\n0 0 0\n')
elif len(shape)==2:
kpoints.write('%i \n' % (len(p['kpts'])))
if p['reciprocal']:
kpoints.write('Reciprocal\n')
else:
kpoints.write('Cartesian\n')
for n in range(len(p['kpts'])):
[kpoints.write('%f ' % kpt) for kpt in p['kpts'][n]]
if shape[1]==4:
kpoints.write('\n')
elif shape[1]==3:
kpoints.write('1.0 \n')
kpoints.close()
def write_potcar(self,suffix = ""):
"""Writes the POTCAR file."""
import tempfile
potfile = open('POTCAR'+suffix,'w')
for filename in self.ppp_list:
if filename.endswith('R'):
for line in open(filename, 'r'):
potfile.write(line)
elif filename.endswith('.Z'):
file_tmp = tempfile.NamedTemporaryFile()
os.system('gunzip -c %s > %s' % (filename, file_tmp.name))
for line in file_tmp.readlines():
potfile.write(line)
file_tmp.close()
potfile.close()
def write_sort_file(self):
"""Writes a sortings file.
This file contains information about how the atoms are sorted in
the first column and how they should be resorted in the second
column. It is used for restart purposes to get sorting right
when reading in an old calculation to ASE."""
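# Annotation: sort and resort are inverse permutations, i.e.
# resort[sort[n]] == n, so atoms_sorted = atoms[self.sort] and
# atoms = atoms_sorted[self.resort]; e.g. sort = [2, 0, 1] gives
# resort = [1, 2, 0].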
file = open('ase-sort.dat', 'w')
for n in range(len(self.sort)):
file.write('%5i %5i \n' % (self.sort[n], self.resort[n]))
# Methods for reading information from OUTCAR files:
def read_energy(self, all=None):
[energy_free, energy_zero]=[0, 0]
if all:
energy_free = []
energy_zero = []
for line in open('OUTCAR', 'r'):
# Free energy
if line.lower().startswith(' free energy toten'):
if all:
energy_free.append(float(line.split()[-2]))
else:
energy_free = float(line.split()[-2])
# Energy extrapolated to zero smearing (sigma -> 0)
if line.startswith(' energy without entropy'):
if all:
energy_zero.append(float(line.split()[-1]))
else:
energy_zero = float(line.split()[-1])
return [energy_free, energy_zero]
def read_forces(self, atoms, all=False):
"""Method that reads forces from OUTCAR file.
If 'all' is switched on, the forces for all ionic steps
in the OUTCAR file be returned, in other case only the
forces for the last ionic configuration is returned."""
file = open('OUTCAR','r')
lines = file.readlines()
file.close()
n=0
if all:
all_forces = []
for line in lines:
if line.rfind('TOTAL-FORCE') > -1:
forces=[]
for i in range(len(atoms)):
forces.append(np.array([float(f) for f in lines[n+2+i].split()[3:6]]))
if all:
all_forces.append(np.array(forces)[self.resort])
n+=1
if all:
return np.array(all_forces)
else:
return np.array(forces)[self.resort]
def read_fermi(self):
"""Method that reads Fermi energy from OUTCAR file"""
E_f=None
for line in open('OUTCAR', 'r'):
if line.rfind('E-fermi') > -1:
E_f=float(line.split()[2])
return E_f
def read_dipole(self):
dipolemoment=np.zeros([1,3])
for line in open('OUTCAR', 'r'):
if line.rfind('dipolmoment') > -1:
dipolemoment=np.array([float(f) for f in line.split()[1:4]])
return dipolemoment
def read_magnetic_moments(self, atoms):
magnetic_moments = np.zeros(len(atoms))
n = 0
lines = open('OUTCAR', 'r').readlines()
for line in lines:
if line.rfind('magnetization (x)') > -1:
for m in range(len(atoms)):
magnetic_moments[m] = float(lines[n + m + 4].split()[4])
n += 1
return np.array(magnetic_moments)[self.resort]
def read_magnetic_moment(self):
n=0
for line in open('OUTCAR','r'):
if line.rfind('number of electron ') > -1:
magnetic_moment=float(line.split()[-1])
n+=1
return magnetic_moment
def read_nbands(self):
for line in open('OUTCAR', 'r'):
if line.rfind('NBANDS') > -1:
return int(line.split()[-1])
def read_convergence(self):
"""Method that checks whether a calculation has converged."""
converged = None
# First check electronic convergence
for line in open('OUTCAR', 'r'):
if 0: # vasp always prints that!
if line.rfind('aborting loop') > -1: # scf failed
raise RuntimeError(line.strip())
break
if line.rfind('EDIFF ') > -1:
ediff = float(line.split()[2])
if line.rfind('total energy-change')>-1:
# I saw this in an atomic oxygen calculation. it
# breaks this code, so I am checking for it here.
if 'MIXING' in line:
continue
split = line.split(':')
a = float(split[1].split('(')[0])
b = split[1].split('(')[1][0:-2]
# sometimes this line looks like (second number wrong format!):
# energy-change (2. order) :-0.2141803E-08 ( 0.2737684-111)
# we are checking still the first number so
# let's "fix" the format for the second one
if 'e' not in b.lower():
# replace last occurence of - (assumed exponent) with -e
bsplit = b.split('-')
bsplit[-1] = 'e' + bsplit[-1]
b = '-'.join(bsplit).replace('-e','e-')
b = float(b)
if abs(a) < ediff and abs(b) < ediff:
converged = True
else:
converged = False
continue
# Then if ibrion in [1,2,3] check whether ionic relaxation
# condition been fulfilled
if (self.int_params['ibrion'] in [1,2,3]
and self.int_params['nsw'] not in [0]) :
if not self.read_relaxed():
converged = False
else:
converged = True
return converged
def read_ibz_kpoints(self):
lines = open('OUTCAR', 'r').readlines()
ibz_kpts = []
n = 0
i = 0
for line in lines:
if line.rfind('Following cartesian coordinates')>-1:
m = n+2
while i==0:
ibz_kpts.append([float(lines[m].split()[p]) for p in range(3)])
m += 1
if lines[m]==' \n':
i = 1
if i == 1:
continue
n += 1
ibz_kpts = np.array(ibz_kpts)
return np.array(ibz_kpts)
def read_k_point_weights(self):
file = open('IBZKPT')
lines = file.readlines()
file.close()
if 'Tetrahedra\n' in lines:
N = lines.index('Tetrahedra\n')
else:
N = len(lines)
kpt_weights = []
for n in range(3, N):
kpt_weights.append(float(lines[n].split()[3]))
kpt_weights = np.array(kpt_weights)
kpt_weights /= np.sum(kpt_weights)
return kpt_weights
def read_eigenvalues(self, kpt=0, spin=0):
file = open('EIGENVAL', 'r')
lines = file.readlines()
file.close()
eigs = []
for n in range(8+kpt*(self.nbands+2), 8+kpt*(self.nbands+2)+self.nbands):
eigs.append(float(lines[n].split()[spin+1]))
return np.array(eigs)
def read_occupation_numbers(self, kpt=0, spin=0):
lines = open('OUTCAR').readlines()
nspins = self.get_number_of_spins()
start = 0
if nspins == 1:
for n, line in enumerate(lines): # find it in the last iteration
m = re.search(' k-point *'+str(kpt+1)+' *:', line)
if m is not None:
start = n
else:
for n, line in enumerate(lines):
if line.find(' spin component '+str(spin+1)) != -1: # find it in the last iteration
start = n
for n2, line2 in enumerate(lines[start:]):
m = re.search(' k-point *'+str(kpt+1)+' *:', line2)
if m is not None:
start = start + n2
break
for n2, line2 in enumerate(lines[start+2:]):
if not line2.strip():
break
occ = []
for line in lines[start+2:start+2+n2]:
occ.append(float(line.split()[2]))
return np.array(occ)
def read_relaxed(self):
for line in open('OUTCAR', 'r'):
if line.rfind('reached required accuracy') > -1:
return True
return False
# The below functions are used to restart a calculation and are under early construction
def read_incar(self, filename='INCAR'):
"""Method that imports settings from INCAR file."""
self.spinpol = False
file=open(filename, 'r')
file.readline()
lines=file.readlines()
for line in lines:
try:
# Make multiplications easier to spot
line = line.replace("*", " * ")
data = line.split()
# Skip empty and commented lines.
if len(data) == 0:
continue
elif data[0][0] in ['#', '!']:
continue
key = data[0].lower()
if key in float_keys:
self.float_params[key] = float(data[2])
elif key in exp_keys:
self.exp_params[key] = float(data[2])
elif key in string_keys:
self.string_params[key] = str(data[2])
elif key in int_keys:
if key == 'ispin':
# JRK added. not sure why we would want to leave ispin out
self.int_params[key] = int(data[2])
if int(data[2]) == 2:
self.spinpol = True
else:
self.int_params[key] = int(data[2])
elif key in bool_keys:
if 'true' in data[2].lower():
self.bool_params[key] = True
elif 'false' in data[2].lower():
self.bool_params[key] = False
elif key in list_keys:
list = []
if key in ('dipol', 'eint', 'ferwe', 'ropt', 'rwigs',
'ldauu', 'ldaul', 'ldauj'):
for a in data[2:]:
if a in ["!", "#"]:
break
list.append(float(a))
elif key in ('iband', 'kpuse'):
for a in data[2:]:
if a in ["!", "#"]:
break
list.append(int(a))
self.list_params[key] = list
if key == 'magmom':
list = []
i = 2
while i < len(data):
if data[i] in ["#", "!"]:
break
if data[i] == "*":
b = list.pop()
i += 1
for j in range(int(b)):
list.append(float(data[i]))
else:
list.append(float(data[i]))
i += 1
self.list_params['magmom'] = list
list = np.array(list)
if self.atoms is not None:
self.atoms.set_initial_magnetic_moments(list[self.resort])
elif key in special_keys:
if key == 'lreal':
if 'true' in data[2].lower():
self.special_params[key] = True
elif 'false' in data[2].lower():
self.special_params[key] = False
else:
self.special_params[key] = data[2]
except KeyError:
raise IOError('Keyword "%s" in INCAR is not known by calculator.' % key)
except IndexError:
raise IOError('Value missing for keyword "%s".' % key)
def read_outcar(self):
# Spin polarized calculation?
file = open('OUTCAR', 'r')
lines = file.readlines()
file.close()
for line in lines:
if line.rfind('ISPIN') > -1:
if int(line.split()[2])==2:
self.spinpol = True
else:
self.spinpol = None
self.energy_free, self.energy_zero = self.read_energy()
self.forces = self.read_forces(self.atoms)
self.dipole = self.read_dipole()
self.fermi = self.read_fermi()
self.stress = self.read_stress()
self.nbands = self.read_nbands()
self.read_ldau()
p=self.int_params
q=self.list_params
if self.spinpol:
self.magnetic_moment = self.read_magnetic_moment()
if p['lorbit']>=10 or (p['lorbit']!=None and q['rwigs']):
self.magnetic_moments = self.read_magnetic_moments(self.atoms)
else:
self.magnetic_moments = None
self.set(nbands=self.nbands)
def read_kpoints(self, filename='KPOINTS'):
file = open(filename, 'r')
lines = file.readlines()
file.close()
ktype = lines[2].split()[0].lower()[0]
if ktype in ['g', 'm']:
if ktype=='g':
self.set(gamma=True)
kpts = np.array([int(lines[3].split()[i]) for i in range(3)])
self.set(kpts=kpts)
elif ktype in ['c', 'k']:
raise NotImplementedError('Only Monkhorst-Pack and gamma centered grid supported for restart.')
else:
raise NotImplementedError('Only Monkhorst-Pack and gamma centered grid supported for restart.')
def read_potcar(self):
""" Method that reads the Exchange Correlation functional from POTCAR file.
"""
file = open('POTCAR', 'r')
lines = file.readlines()
file.close()
# Search for key 'LEXCH' in POTCAR
xc_flag = None
for line in lines:
key = line.split()[0].upper()
if key == 'LEXCH':
xc_flag = line.split()[-1].upper()
break
if xc_flag is None:
raise ValueError('LEXCH flag not found in POTCAR file.')
# Values of parameter LEXCH and corresponding XC-functional
xc_dict = {'PE':'PBE', '91':'PW91', 'CA':'LDA'}
if xc_flag not in xc_dict.keys():
raise ValueError(
'Unknown xc-functional flag found in POTCAR, LEXCH=%s' % xc_flag)
self.input_params['xc'] = xc_dict[xc_flag]
class VaspChargeDensity(object):
"""Class for representing VASP charge density"""
def __init__(self, filename='CHG'):
# Instance variables
self.atoms = [] # List of Atoms objects
self.chg = [] # Charge density
self.chgdiff = [] # Charge density difference, if spin polarized
self.aug = '' # Augmentation charges, not parsed just a big string
self.augdiff = '' # Augmentation charge difference, if spin polarized
# Note that the augmentation charge is not a list, since they
# are needed only for CHGCAR files which store only a single
# image.
if filename != None:
self.read(filename)
def is_spin_polarized(self):
if len(self.chgdiff) > 0:
return True
return False
def _read_chg(self, fobj, chg, volume):
"""Read charge from file object
Utility method for reading the actual charge density (or
charge density difference) from a file object. On input, the
file object must be at the beginning of the charge block, on
output the file position will be left at the end of the
block. The chg array must be of the correct dimensions.
"""
# VASP writes charge density as
# WRITE(IU,FORM) (((C(NX,NY,NZ),NX=1,NGXC),NY=1,NGYZ),NZ=1,NGZC)
# Fortran nested implied do loops; innermost index fastest
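# Annotation: since NX varies fastest, an equivalent vectorized read
# would be (sketch):
#   chg[...] = np.fromfile(fobj, count=chg.size,
#                          sep=' ').reshape(chg.shape, order='F')
# the plane-by-plane loop below does the same thing incrementally.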
# First, just read it in
for zz in range(chg.shape[2]):
for yy in range(chg.shape[1]):
chg[:, yy, zz] = np.fromfile(fobj, count = chg.shape[0],
sep=' ')
chg /= volume
def read(self, filename='CHG'):
"""Read CHG or CHGCAR file.
If CHG contains charge density from multiple steps, all the
steps are read and stored in the object. By default VASP
writes out the charge density every 10 steps.
chgdiff is the difference between the spin up charge density
and the spin down charge density and is thus only read for a
spin-polarized calculation.
aug is the PAW augmentation charges found in CHGCAR. These are
not parsed, they are just stored as a string so that they can
be written again to a CHGCAR format file.
"""
import ase.io.vasp as aiv
f = open(filename)
self.atoms = []
self.chg = []
self.chgdiff = []
self.aug = ''
self.augdiff = ''
while True:
try:
atoms = aiv.read_vasp(f)
except (IOError, ValueError, IndexError):
# Probably an empty line, or we tried to read the
# augmentation occupancies in CHGCAR
break
f.readline()
ngr = f.readline().split()
ng = (int(ngr[0]), int(ngr[1]), int(ngr[2]))
chg = np.empty(ng)
self._read_chg(f, chg, atoms.get_volume())
self.chg.append(chg)
self.atoms.append(atoms)
# Check if the file has a spin-polarized charge density part, and
# if so, read it in.
fl = f.tell()
# First check if the file has an augmentation charge part (CHGCAR file.)
line1 = f.readline()
if line1=='':
break
elif line1.find('augmentation') != -1:
augs = [line1]
while True:
line2 = f.readline()
if line2.split() == ngr:
self.aug = ''.join(augs)
augs = []
chgdiff = np.empty(ng)
self._read_chg(f, chgdiff, atoms.get_volume())
self.chgdiff.append(chgdiff)
elif line2 == '':
break
else:
augs.append(line2)
if len(self.aug) == 0:
self.aug = ''.join(augs)
augs = []
else:
self.augdiff = ''.join(augs)
augs = []
elif line1.split() == ngr:
chgdiff = np.empty(ng)
self._read_chg(f, chgdiff, atoms.get_volume())
self.chgdiff.append(chgdiff)
else:
f.seek(fl)
f.close()
def _write_chg(self, fobj, chg, volume, format='chg'):
"""Write charge density
Utility function similar to _read_chg but for writing.
"""
# Make a 1D copy of chg, must take transpose to get ordering right
chgtmp=chg.T.ravel()
# Multiply by volume
chgtmp=chgtmp*volume
# Must be a tuple to pass to string conversion
chgtmp=tuple(chgtmp)
# CHG format - 10 columns
if format.lower() == 'chg':
# Write all but the last row
for ii in range((len(chgtmp)-1)/10):
fobj.write(' %#11.5G %#11.5G %#11.5G %#11.5G %#11.5G\
%#11.5G %#11.5G %#11.5G %#11.5G %#11.5G\n' % chgtmp[ii*10:(ii+1)*10]
)
# If the last row contains 10 values then write them without a newline
if len(chgtmp)%10==0:
fobj.write(' %#11.5G %#11.5G %#11.5G %#11.5G %#11.5G\
%#11.5G %#11.5G %#11.5G %#11.5G %#11.5G' % chgtmp[len(chgtmp)-10:len(chgtmp)])
# Otherwise write fewer columns without a newline
else:
for ii in range(len(chgtmp)%10):
fobj.write((' %#11.5G') % chgtmp[len(chgtmp)-len(chgtmp)%10+ii])
# Other formats - 5 columns
else:
# Write all but the last row
for ii in range((len(chgtmp)-1)/5):
fobj.write(' %17.10E %17.10E %17.10E %17.10E %17.10E\n' % chgtmp[ii*5:(ii+1)*5])
# If the last row contains 5 values then write them without a newline
if len(chgtmp)%5==0:
fobj.write(' %17.10E %17.10E %17.10E %17.10E %17.10E' % chgtmp[len(chgtmp)-5:len(chgtmp)])
# Otherwise write fewer columns without a newline
else:
for ii in range(len(chgtmp)%5):
fobj.write((' %17.10E') % chgtmp[len(chgtmp)-len(chgtmp)%5+ii])
# Write a newline whatever format it is
fobj.write('\n')
# Clean up
del chgtmp
def write(self, filename='CHG', format=None):
"""Write VASP charge density in CHG format.
filename: str
Name of file to write to.
format: str
String specifying whether to write in CHGCAR or CHG
format.
"""
import ase.io.vasp as aiv
        if format is None:
if filename.lower().find('chgcar') != -1:
format = 'chgcar'
elif filename.lower().find('chg') != -1:
format = 'chg'
elif len(self.chg) == 1:
format = 'chgcar'
else:
format = 'chg'
f = open(filename, 'w')
for ii, chg in enumerate(self.chg):
if format == 'chgcar' and ii != len(self.chg) - 1:
continue # Write only the last image for CHGCAR
aiv.write_vasp(f, self.atoms[ii], direct=True, long_format=False)
f.write('\n')
for dim in chg.shape:
f.write(' %4i' % dim)
f.write('\n')
vol = self.atoms[ii].get_volume()
self._write_chg(f, chg, vol, format)
if format == 'chgcar':
f.write(self.aug)
if self.is_spin_polarized():
if format == 'chg':
f.write('\n')
for dim in chg.shape:
f.write(' %4i' % dim)
self._write_chg(f, self.chgdiff[ii], vol, format)
if format == 'chgcar':
f.write('\n')
f.write(self.augdiff)
if format == 'chg' and len(self.chg) > 1:
f.write('\n')
f.close()
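# Hedged round-trip sketch for the reader/writer above (the class name
# VaspChargeDensity is an assumption about code outside this excerpt):
#
#     vcd = VaspChargeDensity('CHGCAR')
#     vcd.write('CHGCAR_copy', format='chgcar')  # augmentation charges are
#                                                # re-emitted verbatim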
class VaspDos(object):
"""Class for representing density-of-states produced by VASP
The energies are in property self.energy
    Site-projected DOS is accessible via the self.site_dos method.
    Total and integrated DOS is accessible as numpy.ndarray's in the
    properties self.dos and self.integrated_dos. If the calculation is
    spin polarized, the arrays will be of shape (2, NDOS), else
    (NDOS,).
The self.efermi property contains the currently set Fermi
level. Changing this value shifts the energies.
"""
def __init__(self, doscar='DOSCAR', efermi=0.0):
"""Initialize"""
self._efermi = 0.0
self.read_doscar(doscar)
self.efermi = efermi
def _set_efermi(self, efermi):
"""Set the Fermi level."""
ef = efermi - self._efermi
self._efermi = efermi
self._total_dos[0, :] = self._total_dos[0, :] - ef
try:
self._site_dos[:, 0, :] = self._site_dos[:, 0, :] - ef
except IndexError:
pass
def _get_efermi(self):
return self._efermi
efermi = property(_get_efermi, _set_efermi, None, "Fermi energy.")
def _get_energy(self):
"""Return the array with the energies."""
return self._total_dos[0, :]
energy = property(_get_energy, None, None, "Array of energies")
def site_dos(self, atom, orbital):
"""Return an NDOSx1 array with dos for the chosen atom and orbital.
atom: int
Atom index
orbital: int or str
Which orbital to plot
If the orbital is given as an integer:
If spin-unpolarized calculation, no phase factors:
s = 0, p = 1, d = 2
Spin-polarized, no phase factors:
s-up = 0, s-down = 1, p-up = 2, p-down = 3, d-up = 4, d-down = 5
If phase factors have been calculated, orbitals are
s, py, pz, px, dxy, dyz, dz2, dxz, dx2
double in the above fashion if spin polarized.
"""
# Integer indexing for orbitals starts from 1 in the _site_dos array
# since the 0th column contains the energies
if isinstance(orbital, int):
return self._site_dos[atom, orbital + 1, :]
n = self._site_dos.shape[1]
if n == 4:
norb = {'s':1, 'p':2, 'd':3}
elif n == 7:
norb = {'s+':1, 's-up':1, 's-':2, 's-down':2,
'p+':3, 'p-up':3, 'p-':4, 'p-down':4,
'd+':5, 'd-up':5, 'd-':6, 'd-down':6}
elif n == 10:
norb = {'s':1, 'py':2, 'pz':3, 'px':4,
'dxy':5, 'dyz':6, 'dz2':7, 'dxz':8,
'dx2':9}
elif n == 19:
norb = {'s+':1, 's-up':1, 's-':2, 's-down':2,
'py+':3, 'py-up':3, 'py-':4, 'py-down':4,
'pz+':5, 'pz-up':5, 'pz-':6, 'pz-down':6,
'px+':7, 'px-up':7, 'px-':8, 'px-down':8,
'dxy+':9, 'dxy-up':9, 'dxy-':10, 'dxy-down':10,
'dyz+':11, 'dyz-up':11, 'dyz-':12, 'dyz-down':12,
'dz2+':13, 'dz2-up':13, 'dz2-':14, 'dz2-down':14,
'dxz+':15, 'dxz-up':15, 'dxz-':16, 'dxz-down':16,
'dx2+':17, 'dx2-up':17, 'dx2-':18, 'dx2-down':18}
return self._site_dos[atom, norb[orbital.lower()], :]
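    # Hedged examples of the orbital indexing documented above:
    #
    #     dos = VaspDos('DOSCAR')
    #     s_dos = dos.site_dos(0, 's')     # s-DOS of the first atom
    #     d_up = dos.site_dos(0, 'd-up')   # spin-up d-DOS (spin-polarized run)
    #     col1 = dos.site_dos(0, 1)        # integer indexing: p, or s-down if
    #                                      # spin polarized (see table above)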
def _get_dos(self):
if self._total_dos.shape[0] == 3:
return self._total_dos[1, :]
elif self._total_dos.shape[0] == 5:
return self._total_dos[1:3, :]
dos = property(_get_dos, None, None, 'Average DOS in cell')
def _get_integrated_dos(self):
if self._total_dos.shape[0] == 3:
return self._total_dos[2, :]
elif self._total_dos.shape[0] == 5:
return self._total_dos[3:5, :]
integrated_dos = property(_get_integrated_dos, None, None,
'Integrated average DOS in cell')
def read_doscar(self, fname="DOSCAR"):
"""Read a VASP DOSCAR file"""
f = open(fname)
natoms = int(f.readline().split()[0])
[f.readline() for nn in range(4)] # Skip next 4 lines.
# First we have a block with total and total integrated DOS
ndos = int(f.readline().split()[2])
dos = []
for nd in xrange(ndos):
dos.append(np.array([float(x) for x in f.readline().split()]))
self._total_dos = np.array(dos).T
        # Next we have one block per atom, present only if the INCAR
        # settings requested site-projected DOS
dos = []
for na in xrange(natoms):
line = f.readline()
if line == '':
# No site-projected DOS
break
ndos = int(line.split()[2])
line = f.readline().split()
cdos = np.empty((ndos, len(line)))
cdos[0] = np.array(line)
for nd in xrange(1, ndos):
line = f.readline().split()
cdos[nd] = np.array([float(x) for x in line])
dos.append(cdos.T)
self._site_dos = np.array(dos)
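# Hedged usage sketch for VaspDos; the Fermi energy below is an illustrative
# value, typically taken from the OUTCAR of the same run:
#
#     dos = VaspDos('DOSCAR', efermi=5.2)  # energies shifted so E_F sits at 0
#     energies, total = dos.energy, dos.dos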
import pickle
class xdat2traj:
def __init__(self, trajectory=None, atoms=None, poscar=None,
xdatcar=None, sort=None, calc=None):
if not poscar:
self.poscar = 'POSCAR'
else:
self.poscar = poscar
if not atoms:
self.atoms = ase.io.read(self.poscar, format='vasp')
else:
self.atoms = atoms
if not xdatcar:
self.xdatcar = 'XDATCAR'
else:
self.xdatcar = xdatcar
if not trajectory:
self.trajectory = 'out.traj'
else:
self.trajectory = trajectory
if not calc:
self.calc = Vasp()
else:
self.calc = calc
if not sort:
if not hasattr(self.calc, 'sort'):
self.calc.sort = range(len(self.atoms))
else:
self.calc.sort = sort
self.calc.resort = range(len(self.calc.sort))
for n in range(len(self.calc.resort)):
self.calc.resort[self.calc.sort[n]] = n
self.out = ase.io.trajectory.PickleTrajectory(self.trajectory, mode='w')
self.energies = self.calc.read_energy(all=True)[1]
self.forces = self.calc.read_forces(self.atoms, all=True)
def convert(self):
lines = open(self.xdatcar).readlines()
if len(lines[7].split())==0:
del(lines[0:8])
elif len(lines[5].split())==0:
del(lines[0:6])
elif len(lines[4].split())==0:
del(lines[0:5])
step = 0
iatom = 0
scaled_pos = []
for line in lines:
if iatom == len(self.atoms):
if step == 0:
self.out.write_header(self.atoms[self.calc.resort])
scaled_pos = np.array(scaled_pos)
self.atoms.set_scaled_positions(scaled_pos)
d = {'positions': self.atoms.get_positions()[self.calc.resort],
'cell': self.atoms.get_cell(),
'momenta': None,
'energy': self.energies[step],
'forces': self.forces[step],
'stress': None}
pickle.dump(d, self.out.fd, protocol=-1)
scaled_pos = []
iatom = 0
step += 1
else:
iatom += 1
scaled_pos.append([float(line.split()[n]) for n in range(3)])
# Write also the last image
        # Surely there is a cleverer fix than repeating this block...
scaled_pos = np.array(scaled_pos)
self.atoms.set_scaled_positions(scaled_pos)
d = {'positions': self.atoms.get_positions()[self.calc.resort],
'cell': self.atoms.get_cell(),
'momenta': None,
'energy': self.energies[step],
'forces': self.forces[step],
'stress': None}
pickle.dump(d, self.out.fd, protocol=-1)
self.out.fd.close()
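# Hedged usage sketch for xdat2traj: convert a finished VASP MD run in the
# current directory, relying on the default POSCAR/XDATCAR file names
# (energies and forces are read through a default Vasp() calculator, so the
# matching OUTCAR must also be present):
#
#     x2t = xdat2traj(trajectory='md.traj')
#     x2t.convert()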
|
conwayje/ase-python
|
ase/calculators/vasp.py
|
Python
|
gpl-2.0
| 68,697
|
[
"ASE",
"VASP"
] |
ae525eb4968ee5e061a72638b0bf6d8d9e191b540aae802eb57e077ca97dde75
|
import os, sys, re, inspect, types, errno, pprint, subprocess, io, shutil, time, copy
import path_tool
path_tool.activate_module('FactorySystem')
path_tool.activate_module('argparse')
from ParseGetPot import ParseGetPot
from socket import gethostname
#from options import *
from util import *
from RunParallel import RunParallel
from CSVDiffer import CSVDiffer
from XMLDiffer import XMLDiffer
from Tester import Tester
from PetscJacobianTester import PetscJacobianTester
from InputParameters import InputParameters
from Factory import Factory
from Parser import Parser
from Warehouse import Warehouse
import argparse
from optparse import OptionParser, OptionGroup, Values
from timeit import default_timer as clock
class TestHarness:
@staticmethod
def buildAndRun(argv, app_name, moose_dir):
if '--store-timing' in argv:
harness = TestTimer(argv, app_name, moose_dir)
else:
harness = TestHarness(argv, app_name, moose_dir)
harness.findAndRunTests()
def __init__(self, argv, app_name, moose_dir):
self.factory = Factory()
        # Get dependent applications and load dynamic tester plugins
# If applications have new testers, we expect to find them in <app_dir>/scripts/TestHarness/testers
dirs = [os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))]
sys.path.append(os.path.join(moose_dir, 'framework', 'scripts')) # For find_dep_apps.py
        # Use the find_dep_apps script to get the dependent applications for an app
import find_dep_apps
depend_app_dirs = find_dep_apps.findDepApps(app_name)
dirs.extend([os.path.join(my_dir, 'scripts', 'TestHarness') for my_dir in depend_app_dirs.split('\n')])
# Finally load the plugins!
self.factory.loadPlugins(dirs, 'testers', Tester)
self.test_table = []
self.num_passed = 0
self.num_failed = 0
self.num_skipped = 0
self.num_pending = 0
self.host_name = gethostname()
self.moose_dir = moose_dir
self.run_tests_dir = os.path.abspath('.')
self.code = '2d2d6769726c2d6d6f6465'
self.error_code = 0x0
# Assume libmesh is a peer directory to MOOSE if not defined
if os.environ.has_key("LIBMESH_DIR"):
self.libmesh_dir = os.environ['LIBMESH_DIR']
else:
self.libmesh_dir = os.path.join(self.moose_dir, 'libmesh', 'installed')
self.file = None
# Parse arguments
self.parseCLArgs(argv)
self.checks = {}
self.checks['platform'] = getPlatforms()
# The TestHarness doesn't strictly require the existence of libMesh in order to run. Here we allow the user
# to select whether they want to probe for libMesh configuration options.
if self.options.skip_config_checks:
self.checks['compiler'] = set(['ALL'])
self.checks['petsc_version'] = 'N/A'
self.checks['library_mode'] = set(['ALL'])
self.checks['mesh_mode'] = set(['ALL'])
self.checks['dtk'] = set(['ALL'])
self.checks['unique_ids'] = set(['ALL'])
self.checks['vtk'] = set(['ALL'])
self.checks['tecplot'] = set(['ALL'])
self.checks['dof_id_bytes'] = set(['ALL'])
self.checks['petsc_debug'] = set(['ALL'])
self.checks['curl'] = set(['ALL'])
self.checks['tbb'] = set(['ALL'])
else:
self.checks['compiler'] = getCompilers(self.libmesh_dir)
self.checks['petsc_version'] = getPetscVersion(self.libmesh_dir)
self.checks['library_mode'] = getSharedOption(self.libmesh_dir)
self.checks['mesh_mode'] = getLibMeshConfigOption(self.libmesh_dir, 'mesh_mode')
self.checks['dtk'] = getLibMeshConfigOption(self.libmesh_dir, 'dtk')
self.checks['unique_ids'] = getLibMeshConfigOption(self.libmesh_dir, 'unique_ids')
self.checks['vtk'] = getLibMeshConfigOption(self.libmesh_dir, 'vtk')
self.checks['tecplot'] = getLibMeshConfigOption(self.libmesh_dir, 'tecplot')
self.checks['dof_id_bytes'] = getLibMeshConfigOption(self.libmesh_dir, 'dof_id_bytes')
self.checks['petsc_debug'] = getLibMeshConfigOption(self.libmesh_dir, 'petsc_debug')
self.checks['curl'] = getLibMeshConfigOption(self.libmesh_dir, 'curl')
self.checks['tbb'] = getLibMeshConfigOption(self.libmesh_dir, 'tbb')
# Override the MESH_MODE option if using '--parallel-mesh' option
if self.options.parallel_mesh == True or \
(self.options.cli_args != None and \
self.options.cli_args.find('--parallel-mesh') != -1):
option_set = set(['ALL', 'PARALLEL'])
self.checks['mesh_mode'] = option_set
method = set(['ALL', self.options.method.upper()])
self.checks['method'] = method
self.initialize(argv, app_name)
"""
Recursively walks the current tree looking for tests to run
Error codes:
0x0 - Success
0x0* - Parser error
0x1* - TestHarness error
"""
def findAndRunTests(self):
self.error_code = 0x0
self.preRun()
self.start_time = clock()
try:
# PBS STUFF
if self.options.pbs and os.path.exists(self.options.pbs):
self.options.processingPBS = True
self.processPBSResults()
else:
self.options.processingPBS = False
base_dir = os.getcwd()
for dirpath, dirnames, filenames in os.walk(base_dir, followlinks=True):
                    # Prune submodule paths when searching for tests
if base_dir != dirpath and os.path.exists(os.path.join(dirpath, '.git')):
dirnames[:] = []
# walk into directories that aren't contrib directories
if "contrib" not in os.path.relpath(dirpath, os.getcwd()):
for file in filenames:
# set cluster_handle to be None initially (happens for each test)
self.options.cluster_handle = None
# See if there were other arguments (test names) passed on the command line
if file == self.options.input_file_name: #and self.test_match.search(file):
saved_cwd = os.getcwd()
sys.path.append(os.path.abspath(dirpath))
os.chdir(dirpath)
if self.prunePath(file):
continue
# Build a Warehouse to hold the MooseObjects
warehouse = Warehouse()
# Build a Parser to parse the objects
parser = Parser(self.factory, warehouse)
# Parse it
self.error_code = self.error_code | parser.parse(file)
# Retrieve the tests from the warehouse
testers = warehouse.getAllObjects()
# Augment the Testers with additional information directly from the TestHarness
for tester in testers:
self.augmentParameters(file, tester)
if self.options.enable_recover:
testers = self.appendRecoverableTests(testers)
# Handle PBS tests.cluster file
if self.options.pbs:
(tester, command) = self.createClusterLauncher(dirpath, testers)
if command is not None:
self.runner.run(tester, command)
else:
# Go through the Testers and run them
for tester in testers:
                                        # Double the allotted time for tests when running with the valgrind option
tester.setValgrindMode(self.options.valgrind_mode)
# When running in valgrind mode, we end up with a ton of output for each failed
# test. Therefore, we limit the number of fails...
if self.options.valgrind_mode and self.num_failed > self.options.valgrind_max_fails:
(should_run, reason) = (False, 'Max Fails Exceeded')
elif self.num_failed > self.options.max_fails:
(should_run, reason) = (False, 'Max Fails Exceeded')
else:
(should_run, reason) = tester.checkRunnableBase(self.options, self.checks)
if should_run:
command = tester.getCommand(self.options)
# This method spawns another process and allows this loop to continue looking for tests
# RunParallel will call self.testOutputAndFinish when the test has completed running
# This method will block when the maximum allowed parallel processes are running
self.runner.run(tester, command)
else: # This job is skipped - notify the runner
if (reason != ''):
self.handleTestResult(tester.parameters(), '', reason)
self.runner.jobSkipped(tester.parameters()['test_name'])
os.chdir(saved_cwd)
sys.path.pop()
except KeyboardInterrupt:
print '\nExiting due to keyboard interrupt...'
sys.exit(0)
self.runner.join()
# Wait for all tests to finish
if self.options.pbs and self.options.processingPBS == False:
print '\n< checking batch status >\n'
self.options.processingPBS = True
self.processPBSResults()
self.cleanup()
if self.num_failed:
self.error_code = self.error_code | 0x10
sys.exit(self.error_code)
def createClusterLauncher(self, dirpath, testers):
self.options.test_serial_number = 0
command = None
tester = None
# Create the tests.cluster input file
# Loop through each tester and create a job
for tester in testers:
(should_run, reason) = tester.checkRunnableBase(self.options, self.checks)
if should_run:
if self.options.cluster_handle == None:
self.options.cluster_handle = open(dirpath + '/' + self.options.pbs + '.cluster', 'w')
self.options.cluster_handle.write('[Jobs]\n')
# This returns the command to run as well as builds the parameters of the test
# The resulting command once this loop has completed is sufficient to launch
# all previous jobs
command = tester.getCommand(self.options)
self.options.cluster_handle.write('[]\n')
self.options.test_serial_number += 1
else: # This job is skipped - notify the runner
if (reason != ''):
self.handleTestResult(tester.parameters(), '', reason)
self.runner.jobSkipped(tester.parameters()['test_name'])
# Close the tests.cluster file
if self.options.cluster_handle is not None:
self.options.cluster_handle.close()
self.options.cluster_handle = None
# Return the final tester/command (sufficient to run all tests)
return (tester, command)
def prunePath(self, filename):
test_dir = os.path.abspath(os.path.dirname(filename))
# Filter tests that we want to run
# Under the new format, we will filter based on directory not filename since it is fixed
prune = True
if len(self.tests) == 0:
prune = False # No filter
else:
for item in self.tests:
if test_dir.find(item) > -1:
prune = False
# Return the inverse of will_run to indicate that this path should be pruned
return prune
def augmentParameters(self, filename, tester):
params = tester.parameters()
# We are going to do some formatting of the path that is printed
# Case 1. If the test directory (normally matches the input_file_name) comes first,
# we will simply remove it from the path
# Case 2. If the test directory is somewhere in the middle then we should preserve
# the leading part of the path
test_dir = os.path.abspath(os.path.dirname(filename))
relative_path = test_dir.replace(self.run_tests_dir, '')
relative_path = relative_path.replace('/' + self.options.input_file_name + '/', ':')
relative_path = re.sub('^[/:]*', '', relative_path) # Trim slashes and colons
formatted_name = relative_path + '.' + tester.name()
params['test_name'] = formatted_name
params['test_dir'] = test_dir
params['relative_path'] = relative_path
params['executable'] = self.executable
params['hostname'] = self.host_name
params['moose_dir'] = self.moose_dir
if params.isValid('prereq'):
if type(params['prereq']) != list:
print "Option 'prereq' needs to be of type list in " + params['test_name']
sys.exit(1)
params['prereq'] = [relative_path.replace('/tests/', '') + '.' + item for item in params['prereq']]
    # This method splits a list of tests into two pieces each: the first piece will run the test for
# approx. half the number of timesteps and will write out a restart file. The second test will
# then complete the run using the MOOSE recover option.
def appendRecoverableTests(self, testers):
new_tests = []
for part1 in testers:
if part1.parameters()['recover'] == True:
# Clone the test specs
part2 = copy.deepcopy(part1)
# Part 1:
part1_params = part1.parameters()
part1_params['test_name'] += '_part1'
part1_params['cli_args'].append('--half-transient :Outputs/checkpoint=true')
part1_params['skip_checks'] = True
# Part 2:
part2_params = part2.parameters()
part2_params['prereq'].append(part1.parameters()['test_name'])
part2_params['delete_output_before_running'] = False
part2_params['cli_args'].append('--recover')
part2_params.addParam('caveats', ['recover'], "")
new_tests.append(part2)
testers.extend(new_tests)
return testers
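    # Hedged illustration of the split above: a recoverable test named "foo"
    # becomes
    #   foo_part1 -> runs with '--half-transient :Outputs/checkpoint=true'
    #   foo       -> runs with '--recover' and prereq ['foo_part1']
    # so part two only starts after part one has written its checkpoint.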
## Finish the test by inspecting the raw output
def testOutputAndFinish(self, tester, retcode, output, start=0, end=0):
caveats = []
test = tester.specs # Need to refactor
if test.isValid('caveats'):
caveats = test['caveats']
if self.options.pbs and self.options.processingPBS == False:
(reason, output) = self.buildPBSBatch(output, tester)
elif self.options.dry_run:
reason = 'DRY_RUN'
output += '\n'.join(tester.processResultsCommand(self.moose_dir, self.options))
else:
(reason, output) = tester.processResults(self.moose_dir, retcode, self.options, output)
if self.options.scaling and test['scale_refine']:
caveats.append('scaled')
did_pass = True
if reason == '':
            # It ran OK, but is this test set to be skipped on any platform, compiler, or other reason?
if self.options.extra_info:
checks = ['platform', 'compiler', 'petsc_version', 'mesh_mode', 'method', 'library_mode', 'dtk', 'unique_ids']
for check in checks:
if not 'ALL' in test[check]:
caveats.append(', '.join(test[check]))
if len(caveats):
result = '[' + ', '.join(caveats).upper() + '] OK'
elif self.options.pbs and self.options.processingPBS == False:
result = 'LAUNCHED'
else:
result = 'OK'
elif reason == 'DRY_RUN':
result = 'DRY_RUN'
else:
result = 'FAILED (%s)' % reason
did_pass = False
self.handleTestResult(tester.specs, output, result, start, end)
return did_pass
    def getTiming(self, output):
        # Return the reported active time, or an empty string if it is absent
        m = re.search(r"Active time=(\S+)", output)
        if m != None:
            return m.group(1)
        return ''
    def getSolveTime(self, output):
        # Return the reported solve time, or an empty string if it is absent
        m = re.search(r"solve().*", output)
        if m != None:
            return m.group().split()[5]
        return ''
def checkExpectError(self, output, expect_error):
if re.search(expect_error, output, re.MULTILINE | re.DOTALL) == None:
#print "%" * 100, "\nExpect Error Pattern not found:\n", expect_error, "\n", "%" * 100, "\n"
return False
else:
return True
# PBS Defs
def processPBSResults(self):
# If batch file exists, check the contents for pending tests.
if os.path.exists(self.options.pbs):
# Build a list of launched jobs
batch_file = open(self.options.pbs)
            batch_list = [line.split(':') for line in batch_file.read().split('\n')]
batch_file.close()
del batch_list[-1:]
            # Loop through launched jobs and match the TEST_NAME to determine the correct stdout (Output_Path)
for job in batch_list:
file = '/'.join(job[2].split('/')[:-2]) + '/' + job[3]
# Build a Warehouse to hold the MooseObjects
warehouse = Warehouse()
# Build a Parser to parse the objects
parser = Parser(self.factory, warehouse)
# Parse it
parser.parse(file)
# Retrieve the tests from the warehouse
testers = warehouse.getAllObjects()
for tester in testers:
self.augmentParameters(file, tester)
for tester in testers:
# Build the requested Tester object
if job[1] == tester.parameters()['test_name']:
# Create Test Type
# test = self.factory.create(tester.parameters()['type'], tester)
# Get job status via qstat
qstat = ['qstat', '-f', '-x', str(job[0])]
qstat_command = subprocess.Popen(qstat, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
qstat_stdout = qstat_command.communicate()[0]
if qstat_stdout != None:
output_value = re.search(r'job_state = (\w+)', qstat_stdout).group(1)
else:
return ('QSTAT NOT FOUND', '')
# Report the current status of JOB_ID
if output_value == 'F':
# F = Finished. Get the exit code reported by qstat
exit_code = int(re.search(r'Exit_status = (-?\d+)', qstat_stdout).group(1))
# Read the stdout file
if os.path.exists(job[2]):
output_file = open(job[2], 'r')
# Not sure I am doing this right: I have to change the TEST_DIR to match the temporary cluster_launcher TEST_DIR location, thus violating the tester.specs...
tester.parameters()['test_dir'] = '/'.join(job[2].split('/')[:-1])
outfile = output_file.read()
output_file.close()
                        else:
                            # I ran into this scenario when the cluster went down, but launched/completed my job :)
                            # Skip to the next tester; 'outfile' was never read so we cannot process results
                            self.handleTestResult(tester.specs, '', 'FAILED (NO STDOUT FILE)', 0, 0, True)
                            continue
                        self.testOutputAndFinish(tester, exit_code, outfile)
elif output_value == 'R':
# Job is currently running
self.handleTestResult(tester.specs, '', 'RUNNING', 0, 0, True)
elif output_value == 'E':
# Job is exiting
self.handleTestResult(tester.specs, '', 'EXITING', 0, 0, True)
elif output_value == 'Q':
# Job is currently queued
self.handleTestResult(tester.specs, '', 'QUEUED', 0, 0, True)
else:
return ('BATCH FILE NOT FOUND', '')
def buildPBSBatch(self, output, tester):
# Create/Update the batch file
if 'command not found' in output:
return ('QSUB NOT FOUND', '')
else:
# Get the PBS Job ID using qstat
results = re.findall(r'JOB_NAME: (\w+\d+) JOB_ID: (\d+) TEST_NAME: (\S+)', output, re.DOTALL)
if len(results) != 0:
file_name = self.options.pbs
job_list = open(os.path.abspath(os.path.join(tester.specs['executable'], os.pardir)) + '/' + file_name, 'a')
for result in results:
(test_dir, job_id, test_name) = result
qstat_command = subprocess.Popen(['qstat', '-f', '-x', str(job_id)], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
qstat_stdout = qstat_command.communicate()[0]
# Get the Output_Path from qstat stdout
if qstat_stdout != None:
output_value = re.search(r'Output_Path(.*?)(^ +)', qstat_stdout, re.S | re.M).group(1)
output_value = output_value.split(':')[1].replace('\n', '').replace('\t', '')
else:
job_list.close()
return ('QSTAT NOT FOUND', '')
                    # Write job_id, test['test_name'], and Output_Path to the batch file
job_list.write(str(job_id) + ':' + test_name + ':' + output_value + ':' + self.options.input_file_name + '\n')
# Return to TestHarness and inform we have launched the job
job_list.close()
return ('', 'LAUNCHED')
else:
return ('QSTAT INVALID RESULTS', '')
def cleanPBSBatch(self):
# Open the PBS batch file and assign it to a list
if os.path.exists(self.options.pbs_cleanup):
batch_file = open(self.options.pbs_cleanup, 'r')
            batch_list = [line.split(':') for line in batch_file.read().split('\n')]
batch_file.close()
del batch_list[-1:]
else:
print 'PBS batch file not found:', self.options.pbs_cleanup
sys.exit(1)
        # Loop through launched jobs and delete what's found.
for job in batch_list:
if os.path.exists(job[2]):
batch_dir = os.path.abspath(os.path.join(job[2], os.pardir)).split('/')
if os.path.exists('/'.join(batch_dir)):
shutil.rmtree('/'.join(batch_dir))
if os.path.exists('/'.join(batch_dir[:-1]) + '/' + self.options.pbs_cleanup + '.cluster'):
os.remove('/'.join(batch_dir[:-1]) + '/' + self.options.pbs_cleanup + '.cluster')
os.remove(self.options.pbs_cleanup)
# END PBS Defs
    ## Update global variables and print output based on the test result
    # A result containing 'OK' means the test passed, 'skipped' means it was skipped; anything else means it failed
def handleTestResult(self, specs, output, result, start=0, end=0, add_to_table=True):
timing = ''
if self.options.timing:
timing = self.getTiming(output)
elif self.options.store_time:
timing = self.getSolveTime(output)
# Only add to the test_table if told to. We now have enough cases where we wish to print to the screen, but not
# in the 'Final Test Results' area.
if add_to_table:
self.test_table.append( (specs, output, result, timing, start, end) )
if result.find('OK') != -1 or result.find('DRY_RUN') != -1:
self.num_passed += 1
elif result.find('skipped') != -1:
self.num_skipped += 1
elif result.find('deleted') != -1:
self.num_skipped += 1
elif result.find('LAUNCHED') != -1 or result.find('RUNNING') != -1 or result.find('QUEUED') != -1 or result.find('EXITING') != -1:
self.num_pending += 1
else:
self.num_failed += 1
self.postRun(specs, timing)
if self.options.show_directory:
print printResult(specs['relative_path'] + '/' + specs['test_name'].split('/')[-1], result, timing, start, end, self.options)
else:
print printResult(specs['test_name'], result, timing, start, end, self.options)
if self.options.verbose or ('FAILED' in result and not self.options.quiet):
output = output.replace('\r', '\n') # replace the carriage returns with newlines
lines = output.split('\n');
color = ''
if 'EXODIFF' in result or 'CSVDIFF' in result:
color = 'YELLOW'
elif 'FAILED' in result:
color = 'RED'
else:
color = 'GREEN'
test_name = colorText(specs['test_name'] + ": ", color, colored=self.options.colored, code=self.options.code)
output = ("\n" + test_name).join(lines)
print output
# Print result line again at the bottom of the output for failed tests
if self.options.show_directory:
print printResult(specs['relative_path'] + '/' + specs['test_name'].split('/')[-1], result, timing, start, end, self.options), "(reprint)"
else:
print printResult(specs['test_name'], result, timing, start, end, self.options), "(reprint)"
if not 'skipped' in result:
if self.options.file:
if self.options.show_directory:
self.file.write(printResult( specs['relative_path'] + '/' + specs['test_name'].split('/')[-1], result, timing, start, end, self.options, color=False) + '\n')
self.file.write(output)
else:
self.file.write(printResult( specs['test_name'], result, timing, start, end, self.options, color=False) + '\n')
self.file.write(output)
if self.options.sep_files or (self.options.fail_files and 'FAILED' in result) or (self.options.ok_files and result.find('OK') != -1):
fname = os.path.join(specs['test_dir'], specs['test_name'].split('/')[-1] + '.' + result[:6] + '.txt')
f = open(fname, 'w')
f.write(printResult( specs['test_name'], result, timing, start, end, self.options, color=False) + '\n')
f.write(output)
f.close()
# Write the app_name to a file, if the tests passed
def writeState(self, app_name):
        # If the BITTEN_STATUS_MOOSE environment variable is set, build an itemized list of applications which passed their tests
if os.environ.has_key("BITTEN_STATUS_MOOSE"):
result_file = open(os.path.join(self.moose_dir, 'test_results.log'), 'a')
result_file.write(os.path.split(app_name)[1].split('-')[0] + '\n')
result_file.close()
# Print final results, close open files, and exit with the correct error code
def cleanup(self):
# Print the results table again if a bunch of output was spewed to the screen between
# tests as they were running
if self.options.verbose or (self.num_failed != 0 and not self.options.quiet):
print '\n\nFinal Test Results:\n' + ('-' * (TERM_COLS-1))
for (test, output, result, timing, start, end) in sorted(self.test_table, key=lambda x: x[2], reverse=True):
if self.options.show_directory:
                    print printResult(test['relative_path'] + '/' + test['test_name'].split('/')[-1], result, timing, start, end, self.options)
else:
print printResult(test['test_name'], result, timing, start, end, self.options)
time = clock() - self.start_time
print '-' * (TERM_COLS-1)
print 'Ran %d tests in %.1f seconds' % (self.num_passed+self.num_failed, time)
if self.num_passed:
summary = '<g>%d passed</g>'
else:
summary = '<b>%d passed</b>'
summary += ', <b>%d skipped</b>'
if self.num_pending:
summary += ', <c>%d pending</c>'
else:
summary += ', <b>%d pending</b>'
if self.num_failed:
summary += ', <r>%d FAILED</r>'
else:
summary += ', <b>%d failed</b>'
# Mask off TestHarness error codes to report parser errors
if self.error_code & 0x0F:
summary += ', <r>FATAL PARSER ERROR</r>'
print colorText( summary % (self.num_passed, self.num_skipped, self.num_pending, self.num_failed), "", html = True, \
colored=self.options.colored, code=self.options.code )
if self.options.pbs:
print '\nYour PBS batch file:', self.options.pbs
if self.file:
self.file.close()
if self.num_failed == 0:
self.writeState(self.executable)
def initialize(self, argv, app_name):
# Initialize the parallel runner with how many tests to run in parallel
self.runner = RunParallel(self, self.options.jobs, self.options.load)
## Save executable-under-test name to self.executable
self.executable = os.getcwd() + '/' + app_name + '-' + self.options.method
# Save the output dir since the current working directory changes during tests
self.output_dir = os.path.join(os.path.abspath(os.path.dirname(sys.argv[0])), self.options.output_dir)
# Create the output dir if they ask for it. It is easier to ask for forgiveness than permission
if self.options.output_dir:
try:
os.makedirs(self.output_dir)
except OSError, ex:
if ex.errno == errno.EEXIST: pass
else: raise
# Open the file to redirect output to and set the quiet option for file output
if self.options.file:
self.file = open(os.path.join(self.output_dir, self.options.file), 'w')
if self.options.file or self.options.fail_files or self.options.sep_files:
self.options.quiet = True
## Parse command line options and assign them to self.options
def parseCLArgs(self, argv=sys.argv[1:]):
parser = argparse.ArgumentParser(description='A tool used to test MOOSE based applications')
parser.add_argument('test_name', nargs=argparse.REMAINDER)
parser.add_argument('--opt', action='store_const', dest='method', const='opt', help='test the app_name-opt binary')
parser.add_argument('--dbg', action='store_const', dest='method', const='dbg', help='test the app_name-dbg binary')
parser.add_argument('--devel', action='store_const', dest='method', const='devel', help='test the app_name-devel binary')
parser.add_argument('--oprof', action='store_const', dest='method', const='oprof', help='test the app_name-oprof binary')
parser.add_argument('--pro', action='store_const', dest='method', const='pro', help='test the app_name-pro binary')
parser.add_argument('-j', '--jobs', nargs=1, metavar='int', action='store', type=int, dest='jobs', default=1, help='run test binaries in parallel')
parser.add_argument('-e', action='store_true', dest='extra_info', help='Display "extra" information including all caveats and deleted tests')
parser.add_argument('-c', '--no-color', action='store_false', dest='colored', help='Do not show colored output')
parser.add_argument('--heavy', action='store_true', dest='heavy_tests', help='Run tests marked with HEAVY : True')
parser.add_argument('--all-tests', action='store_true', dest='all_tests', help='Run normal tests and tests marked with HEAVY : True')
parser.add_argument('-g', '--group', action='store', type=str, dest='group', default='ALL', help='Run only tests in the named group')
parser.add_argument('--not_group', action='store', type=str, dest='not_group', help='Run only tests NOT in the named group')
# parser.add_argument('--dofs', action='store', dest='dofs', help='This option is for automatic scaling which is not currently implemented in MOOSE 2.0')
parser.add_argument('--dbfile', nargs='?', action='store', dest='dbFile', help='Location to timings data base file. If not set, assumes $HOME/timingDB/timing.sqlite')
parser.add_argument('-l', '--load-average', action='store', type=float, dest='load', default=64.0, help='Do not run additional tests if the load average is at least LOAD')
parser.add_argument('-t', '--timing', action='store_true', dest='timing', help='Report Timing information for passing tests')
parser.add_argument('-s', '--scale', action='store_true', dest='scaling', help='Scale problems that have SCALE_REFINE set')
parser.add_argument('-i', nargs=1, action='store', type=str, dest='input_file_name', default='tests', help='The default test specification file to look for (default="tests").')
parser.add_argument('--libmesh_dir', nargs=1, action='store', type=str, dest='libmesh_dir', help='Currently only needed for bitten code coverage')
parser.add_argument('--skip-config-checks', action='store_true', dest='skip_config_checks', help='Skip configuration checks (all tests will run regardless of restrictions)')
parser.add_argument('--parallel', '-p', nargs='?', action='store', type=int, dest='parallel', const=1, help='Number of processors to use when running mpiexec')
parser.add_argument('--n-threads', nargs=1, action='store', type=int, dest='nthreads', default=1, help='Number of threads to use when running mpiexec')
parser.add_argument('-d', action='store_true', dest='debug_harness', help='Turn on Test Harness debugging')
parser.add_argument('--recover', action='store_true', dest='enable_recover', help='Run a test in recover mode')
parser.add_argument('--valgrind', action='store_const', dest='valgrind_mode', const='NORMAL', help='Run normal valgrind tests')
parser.add_argument('--valgrind-heavy', action='store_const', dest='valgrind_mode', const='HEAVY', help='Run heavy valgrind tests')
parser.add_argument('--valgrind-max-fails', nargs=1, type=int, dest='valgrind_max_fails', default=5, help='The number of valgrind tests allowed to fail before any additional valgrind tests will run')
parser.add_argument('--max-fails', nargs=1, type=int, dest='max_fails', default=50, help='The number of tests allowed to fail before any additional tests will run')
parser.add_argument('--pbs', nargs='?', metavar='batch_file', dest='pbs', const='generate', help='Enable launching tests via PBS. If no batch file is specified one will be created for you')
parser.add_argument('--pbs-cleanup', nargs=1, metavar='batch_file', help='Clean up the directories/files created by PBS. You must supply the same batch_file used to launch PBS.')
parser.add_argument('--re', action='store', type=str, dest='reg_exp', help='Run tests that match --re=regular_expression')
parser.add_argument('--parallel-mesh', action='store_true', dest='parallel_mesh', help="Pass --parallel-mesh to executable")
parser.add_argument('--error', action='store_true', help='Run the tests with warnings as errors')
parser.add_argument('--cli-args', nargs='?', type=str, dest='cli_args', help='Append the following list of arguments to the command line (Encapsulate the command in quotes)')
parser.add_argument('--dry-run', action='store_true', dest='dry_run', help="Pass --dry-run to print commands to run, but don't actually run them")
outputgroup = parser.add_argument_group('Output Options', 'These options control the output of the test harness. The sep-files options write output to files named test_name.TEST_RESULT.txt. All file output will overwrite old files')
outputgroup.add_argument('-v', '--verbose', action='store_true', dest='verbose', help='show the output of every test')
outputgroup.add_argument('-q', '--quiet', action='store_true', dest='quiet', help='only show the result of every test, don\'t show test output even if it fails')
outputgroup.add_argument('--show-directory', action='store_true', dest='show_directory', help='Print test directory path in out messages')
outputgroup.add_argument('-o', '--output-dir', nargs=1, metavar='directory', dest='output_dir', default='', help='Save all output files in the directory, and create it if necessary')
outputgroup.add_argument('-f', '--file', nargs=1, action='store', dest='file', help='Write verbose output of each test to FILE and quiet output to terminal')
        outputgroup.add_argument('-x', '--sep-files', action='store_true', dest='sep_files', help='Write the output of each test to a separate file. Only quiet output to terminal. This is equivalent to \'--sep-files-fail --sep-files-ok\'')
outputgroup.add_argument('--sep-files-ok', action='store_true', dest='ok_files', help='Write the output of each passed test to a separate file')
outputgroup.add_argument('-a', '--sep-files-fail', action='store_true', dest='fail_files', help='Write the output of each FAILED test to a separate file. Only quiet output to terminal.')
outputgroup.add_argument("--store-timing", action="store_true", dest="store_time", help="Store timing in the SQL database: $HOME/timingDB/timing.sqlite A parent directory (timingDB) must exist.")
outputgroup.add_argument("--revision", nargs=1, action="store", type=str, dest="revision", help="The current revision being tested. Required when using --store-timing.")
outputgroup.add_argument("--yaml", action="store_true", dest="yaml", help="Dump the parameters for the testers in Yaml Format")
outputgroup.add_argument("--dump", action="store_true", dest="dump", help="Dump the parameters for the testers in GetPot Format")
code = True
if self.code.decode('hex') in argv:
del argv[argv.index(self.code.decode('hex'))]
code = False
self.options = parser.parse_args()
self.tests = self.options.test_name
self.options.code = code
# Convert all list based options of length one to scalars
for key, value in vars(self.options).items():
if type(value) == list and len(value) == 1:
tmp_str = getattr(self.options, key)
setattr(self.options, key, value[0])
self.checkAndUpdateCLArgs()
## Called after options are parsed from the command line
# Exit if options don't make any sense, print warnings if they are merely weird
def checkAndUpdateCLArgs(self):
opts = self.options
if opts.output_dir and not (opts.file or opts.sep_files or opts.fail_files or opts.ok_files):
print 'WARNING: --output-dir is specified but no output files will be saved, use -f or a --sep-files option'
if opts.group == opts.not_group:
print 'ERROR: The group and not_group options cannot specify the same group'
sys.exit(1)
if opts.store_time and not (opts.revision):
            print 'ERROR: --store-timing is specified but no revision was supplied (use --revision)'
sys.exit(1)
if opts.store_time:
# timing returns Active Time, while store_timing returns Solve Time.
# Thus we need to turn off timing.
opts.timing = False
opts.scaling = True
if opts.valgrind_mode and (opts.parallel > 1 or opts.nthreads > 1):
print 'ERROR: --parallel and/or --threads can not be used with --valgrind'
sys.exit(1)
# Update any keys from the environment as necessary
if not self.options.method:
if os.environ.has_key('METHOD'):
self.options.method = os.environ['METHOD']
else:
self.options.method = 'opt'
if not self.options.valgrind_mode:
self.options.valgrind_mode = ''
# Update libmesh_dir to reflect arguments
if opts.libmesh_dir:
self.libmesh_dir = opts.libmesh_dir
        # Generate a batch file if the PBS argument was supplied without a file
if opts.pbs == 'generate':
largest_serial_num = 0
for name in os.listdir('.'):
m = re.search('pbs_(\d{3})', name)
if m != None and int(m.group(1)) > largest_serial_num:
largest_serial_num = int(m.group(1))
opts.pbs = "pbs_" + str(largest_serial_num+1).zfill(3)
def postRun(self, specs, timing):
return
def preRun(self):
if self.options.yaml:
self.factory.printYaml("Tests")
sys.exit(0)
elif self.options.dump:
self.factory.printDump("Tests")
sys.exit(0)
if self.options.pbs_cleanup:
self.cleanPBSBatch()
sys.exit(0)
def getOptions(self):
return self.options
#################################################################################################################################
# The TestTimer TestHarness
# This method finds and stores timing for individual tests. It is activated with --store-timing
#################################################################################################################################
CREATE_TABLE = """create table timing
(
app_name text,
test_name text,
revision text,
date int,
seconds real,
scale int,
load real
);"""
class TestTimer(TestHarness):
def __init__(self, argv, app_name, moose_dir):
TestHarness.__init__(self, argv, app_name, moose_dir)
try:
from sqlite3 import dbapi2 as sqlite
except:
print 'Error: --store-timing requires the sqlite3 python module.'
sys.exit(1)
self.app_name = app_name
self.db_file = self.options.dbFile
if not self.db_file:
home = os.environ['HOME']
self.db_file = os.path.join(home, 'timingDB/timing.sqlite')
if not os.path.exists(self.db_file):
print 'Warning: creating new database at default location: ' + str(self.db_file)
self.createDB(self.db_file)
else:
print 'Warning: Assuming database location ' + self.db_file
def createDB(self, fname):
from sqlite3 import dbapi2 as sqlite
print 'Creating empty database at ' + fname
con = sqlite.connect(fname)
cr = con.cursor()
cr.execute(CREATE_TABLE)
con.commit()
def preRun(self):
from sqlite3 import dbapi2 as sqlite
# Delete previous data if app_name and repo revision are found
con = sqlite.connect(self.db_file)
cr = con.cursor()
cr.execute('delete from timing where app_name = ? and revision = ?', (self.app_name, self.options.revision))
con.commit()
# After the run store the results in the database
def postRun(self, test, timing):
from sqlite3 import dbapi2 as sqlite
con = sqlite.connect(self.db_file)
cr = con.cursor()
timestamp = int(time.time())
load = os.getloadavg()[0]
# accumulate the test results
data = []
sum_time = 0
num = 0
parse_failed = False
        # We're only interested in storing scaled data
if timing != None and test['scale_refine'] != 0:
sum_time += float(timing)
num += 1
data.append( (self.app_name, test['test_name'].split('/').pop(), self.options.revision, timestamp, timing, test['scale_refine'], load) )
# Insert the data into the database
cr.executemany('insert into timing values (?,?,?,?,?,?,?)', data)
con.commit()
|
mellis13/moose
|
python/TestHarness/TestHarness.py
|
Python
|
lgpl-2.1
| 40,691
|
[
"MOOSE",
"VTK"
] |
4b799aed90f9ad1827e4d0b77058727245cd39585fd961a3179c317896c31346
|
import os
import cdsapi
import datetime
import numpy as np
import bluesky as bs
import netCDF4 as nc
from bluesky import settings, stack
from bluesky.core import timed_function
from bluesky.traffic.windsim import WindSim
datadir = settings.data_path + '/NetCDF/'
if not os.path.exists(datadir):
os.makedirs(datadir)
def init_plugin():
global windecmwf
windecmwf = WindECMWF()
config = {
'plugin_name': 'WINDECMWF',
'plugin_type': 'sim'
}
return config
class WindECMWF(WindSim):
def __init__(self):
super().__init__()
self.year = 0
self.month = 0
self.day = 0
self.hour = 0
self.lat0 = -90
self.lon0 = -180
self.lat1 = 90
self.lon1 = 180
        # Switch for periodic loading of new ECMWF data
self.autoload = True
def fetch_nc(self, year, month, day):
"""
Retrieve weather data via the CDS API for multiple pressure levels
"""
ymd = "%04d%02d%02d" % (year, month, day)
fname = 'p_levels_%s.nc' % (ymd)
fpath = datadir + fname
if not os.path.isfile(fpath):
bs.scr.echo("Downloading file, please wait...")
# Set client
c = cdsapi.Client()
# Retrieve data
c.retrieve(
'reanalysis-era5-pressure-levels',
{
'product_type': 'reanalysis',
'format': 'netcdf',
'pressure_level': [
'100', '125', '150',
'175', '200', '225',
'250', '300', '350',
'400', '450', '500',
'550', '600', '650',
'700', '750', '775',
'800'
],
'year': year,
'month': month,
'day': day,
'time': [
'00:00', '03:00', '06:00',
'09:00', '12:00', '15:00',
'18:00', '21:00',
],
'variable': [
'u_component_of_wind', 'v_component_of_wind'
],
},
fpath)
bs.scr.echo("Download completed.")
netcdf = nc.Dataset(fpath, mode='r')
return netcdf
def extract_wind(self, netcdf, lat0, lon0, lat1, lon1, hour):
# Load reanalysis data
level = netcdf['level'][:].data
lats = netcdf['latitude'][:].data
lons = netcdf['longitude'][:].data
vxs_ = netcdf['u'][:].squeeze().data
vys_ = netcdf['v'][:].squeeze().data
# Close data for performance
netcdf.close()
# Transform pressure levels to altitude
p = level * 100
h = (1 - (p / 101325.0)**0.190264) * 44330.76923 # in meters
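        # Worked example of the inversion above (ISA barometric formula):
        # p = 500 hPa gives h = (1 - (50000/101325)**0.190264) * 44330.76923,
        # which is roughly 5574 m, the standard-atmosphere 500 hPa level.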
        # Convert the hour to an index into the 3-hourly time axis
        hour = round(hour/3)
# Construct 2D array of all data points
lats_ = np.tile(np.repeat(lats, len(lons)), len(level))
lons_ = np.tile(lons, len(lats)*len(level))
alts_ = np.repeat(h, len(lats)*len(lons))
vxs_ = vxs_[hour,:,:,:].flatten()
vys_ = vys_[hour,:,:,:].flatten()
# Convert longitudes
lons_ = (lons_ + 180) % 360.0 - 180.0 # convert range from 0~360 to -180~180
# Reduce area based on lat lon limits
lat0_ = min(lat0, lat1)
lat1_ = max(lat0, lat1)
lon0_ = min(lon0, lon1)
lon1_ = max(lon0, lon1)
mask = (lats_ >= lat0_) & (lats_ <= lat1_) & (lons_ >= lon0_) & (lons_ <= lon1_)
data = np.array([lats_[mask], lons_[mask], alts_[mask], vxs_[mask], vys_[mask]])
return data
@stack.command(name='WINDECMWF')
def loadwind(self, lat0: 'lat', lon0: 'lon', lat1: 'lat', lon1: 'lon',
year: int=None, month: int=None, day: int=None, hour: int=None):
        ''' WINDECMWF: Load a wind field directly from the ECMWF ERA5 database.
Arguments:
- lat0, lon0, lat1, lon1 [deg]: Bounding box in which to generate wind field
- year, month, day, hour: Date and time of wind data (optional, will use
current simulation UTC if not specified).
'''
self.lat0, self.lon0, self.lat1, self.lon1 = min(lat0, lat1), \
min(lon0, lon1), max(lat0, lat1), max(lon0, lon1)
self.year = year or bs.sim.utc.year
self.month = month or bs.sim.utc.month
self.day = day or bs.sim.utc.day
self.hour = hour or bs.sim.utc.hour
# round hour to 3 hours
self.hour = round(self.hour/3) * 3
if self.hour == 24:
ymd0 = "%04d%02d%02d" % (self.year, self.month, self.day)
ymd1 = (datetime.datetime.strptime(ymd0, '%Y%m%d') +
datetime.timedelta(days=1))
self.year = ymd1.year
self.month = ymd1.month
self.day = ymd1.day
self.hour = 0
txt = "Loading wind field for %s-%s-%s..." % (self.year, self.month, self.day)
bs.scr.echo("%s" % txt)
netcdf = self.fetch_nc(self.year, self.month, self.day)
if netcdf is None or self.lat0 == self.lat1 or self.lon0 == self.lon1:
return False, "Wind data non-existend in area [%d, %d], [%d, %d]. " \
% (self.lat0, self.lat1, self.lon0, self.lon1) \
+ "time: %04d-%02d-%02d" \
% (self.year, self.month, self.day)
        # first clear the existing wind field
self.clear()
# add new wind field
data = self.extract_wind(netcdf, self.lat0, self.lon0, self.lat1, self.lon1, self.hour).T
data = data[np.lexsort((data[:, 2], data[:, 1], data[:, 0]))] # Sort by lat, lon, alt
reshapefactor = int((1 + (max(self.lat0, self.lat1) - min(self.lat0, self.lat1))*4) * \
(1 + (max(self.lon0, self.lon1) - min(self.lon0, self.lon1))*4))
lat = np.reshape(data[:,0], (reshapefactor, -1)).T[0,:]
lon = np.reshape(data[:,1], (reshapefactor, -1)).T[0,:]
veast = np.reshape(data[:,3], (reshapefactor, -1)).T
vnorth = np.reshape(data[:,4], (reshapefactor, -1)).T
windalt = np.reshape(data[:,2], (reshapefactor, -1)).T[:,0]
self.addpointvne(lat, lon, vnorth, veast, windalt)
return True, "Wind field updated in area [%d, %d], [%d, %d]. " \
% (self.lat0, self.lat1, self.lon0, self.lon1) \
+ "time: %04d-%02d-%02d" \
% (self.year, self.month, self.day)
@timed_function(name='WINDECMWF', dt=3600)
def update(self):
if self.autoload:
_, txt = self.loadwind(self.lat0, self.lon0, self.lat1, self.lon1)
bs.scr.echo("%s" % txt)
|
ProfHoekstra/bluesky
|
plugins/windecmwf.py
|
Python
|
gpl-3.0
| 7,031
|
[
"NetCDF"
] |
6485da75907645dd27abe908256a611cc97ac47879e86fdfd9f0c0e582413325
|