| text | repo_name | path | language | license | size | keyword | text_hash |
|---|---|---|---|---|---|---|---|
| string (12–1.05M chars) | string (5–86 chars) | string (4–191 chars) | string (1 distinct value) | string (15 distinct values) | int32 (12–1.05M) | list (1–23 items) | string (64 chars) |
# Copyright (C) 2012,2013
# Max Planck Institute for Polymer Research
# Copyright (C) 2008,2009,2010,2011
# Max-Planck-Institute for Polymer Research & Fraunhofer SCAI
#
# This file is part of ESPResSo++.
#
# ESPResSo++ is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ESPResSo++ is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
r"""
****************************************
**espressopp.analysis.IntraChainDistSq**
****************************************
.. function:: espressopp.analysis.IntraChainDistSq(system, fpl)
:param system:
:param fpl:
:type system:
:type fpl:
.. function:: espressopp.analysis.IntraChainDistSq.compute()
:rtype:
"""
from espressopp.esutil import cxxinit
from espressopp import pmi
from espressopp.analysis.AllParticlePos import *
from _espressopp import analysis_IntraChainDistSq
class IntraChainDistSqLocal(AllParticlePosLocal, analysis_IntraChainDistSq):
def __init__(self, system, fpl):
if not (pmi._PMIComm and pmi._PMIComm.isActive()) or pmi._MPIcomm.rank in pmi._PMIComm.getMPIcpugroup():
cxxinit(self, analysis_IntraChainDistSq, system, fpl)
def compute(self):
return self.cxxclass.compute(self)
if pmi.isController:
class IntraChainDistSq(AllParticlePos):
__metaclass__ = pmi.Proxy
pmiproxydefs = dict(
cls = 'espressopp.analysis.IntraChainDistSqLocal',
pmicall = [ "compute" ]
)
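# Hedged usage sketch (hypothetical setup): a configured `system` and a
# FixedPairList `fpl` holding the intra-chain bonds are assumed to exist.
#
#     obs = espressopp.analysis.IntraChainDistSq(system, fpl)
#     dist_sq = obs.compute()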
| junghans/espressopp | src/analysis/IntraChainDistSq.py | Python | gpl-3.0 | 1,969 | ["ESPResSo"] | fae810f9d257a3f9b8a32912d42f877373cbb060b7023bed28e532c209eee8ac |
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2016, Brian Coca <bcoca@ansible.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
module: systemd
author:
- Ansible Core Team
version_added: "2.2"
short_description: Manage systemd units
description:
- Controls systemd units (services, timers, and so on) on remote hosts.
options:
name:
description:
- Name of the unit. This parameter takes the name of exactly one unit to work with.
- When no extension is given, a C(.service) suffix is assumed, as systemd does.
- When used in a chroot environment you always need to specify the name of the unit with the extension, for example C(crond.service).
type: str
aliases: [ service, unit ]
state:
description:
- C(started)/C(stopped) are idempotent actions that will not run commands unless necessary.
C(restarted) will always bounce the unit. C(reloaded) will always reload.
type: str
choices: [ reloaded, restarted, started, stopped ]
enabled:
description:
- Whether the unit should start on boot. B(At least one of state and enabled is required.)
type: bool
force:
description:
- Whether to override existing symlinks.
type: bool
version_added: 2.6
masked:
description:
- Whether the unit should be masked or not. A masked unit is impossible to start.
type: bool
daemon_reload:
description:
- Run daemon-reload before doing any other operations, to make sure systemd has read any changes.
- When set to C(true), runs daemon-reload even if the module does not start or stop anything.
type: bool
default: no
aliases: [ daemon-reload ]
daemon_reexec:
description:
- Run the daemon_reexec command before doing any other operations; the systemd manager will serialize the manager state.
type: bool
default: no
aliases: [ daemon-reexec ]
version_added: "2.8"
scope:
description:
- Run systemctl within a given service manager scope, either as the default system scope C(system),
the current user's scope C(user), or the scope of all users C(global).
- "For systemd to work with 'user', the executing user must have its own instance of dbus started and accessible (systemd requirement)."
- "The user dbus process is normally started during normal login, but not during the run of Ansible tasks.
Otherwise you will probably get a 'Failed to connect to bus: no such file or directory' error."
- The user must have access, normally given via setting the ``XDG_RUNTIME_DIR`` variable, see example below.
type: str
choices: [ system, user, global ]
default: system
version_added: "2.7"
no_block:
description:
- Do not synchronously wait for the requested operation to finish.
Enqueued job will continue without Ansible blocking on its completion.
type: bool
default: no
version_added: "2.3"
extends_documentation_fragment: action_common_attributes
attributes:
check_mode:
support: full
diff_mode:
support: none
platform:
platforms: posix
notes:
- Since 2.4, one of the following options is required: C(state), C(enabled), C(masked), C(daemon_reload), (C(daemon_reexec) since 2.8),
and all except C(daemon_reload) and (C(daemon_reexec) since 2.8) also require C(name).
- Before 2.4 you always required C(name).
- Globs are not supported in name, for example ``postgres*.service``.
- The service names might vary by specific OS/distribution.
requirements:
- A system managed by systemd.
'''
EXAMPLES = '''
- name: Make sure a service unit is running
ansible.builtin.systemd:
state: started
name: httpd
- name: Stop service cron on debian, if running
ansible.builtin.systemd:
name: cron
state: stopped
- name: Restart service cron on centos, in all cases, also issue daemon-reload to pick up config changes
ansible.builtin.systemd:
state: restarted
daemon_reload: yes
name: crond
- name: Reload service httpd, in all cases
ansible.builtin.systemd:
name: httpd.service
state: reloaded
- name: Enable service httpd and ensure it is not masked
ansible.builtin.systemd:
name: httpd
enabled: yes
masked: no
- name: Enable a timer unit for dnf-automatic
ansible.builtin.systemd:
name: dnf-automatic.timer
state: started
enabled: yes
- name: Just force systemd to reread configs (2.4 and above)
ansible.builtin.systemd:
daemon_reload: yes
- name: Just force systemd to re-execute itself (2.8 and above)
ansible.builtin.systemd:
daemon_reexec: yes
- name: Run a user service when XDG_RUNTIME_DIR is not set on remote login
ansible.builtin.systemd:
name: myservice
state: started
scope: user
environment:
XDG_RUNTIME_DIR: "/run/user/{{ myuid }}"
'''
RETURN = '''
status:
description: A dictionary with the key=value pairs returned from `systemctl show`.
returned: success
type: complex
sample: {
"ActiveEnterTimestamp": "Sun 2016-05-15 18:28:49 EDT",
"ActiveEnterTimestampMonotonic": "8135942",
"ActiveExitTimestampMonotonic": "0",
"ActiveState": "active",
"After": "auditd.service systemd-user-sessions.service time-sync.target systemd-journald.socket basic.target system.slice",
"AllowIsolate": "no",
"Before": "shutdown.target multi-user.target",
"BlockIOAccounting": "no",
"BlockIOWeight": "1000",
"CPUAccounting": "no",
"CPUSchedulingPolicy": "0",
"CPUSchedulingPriority": "0",
"CPUSchedulingResetOnFork": "no",
"CPUShares": "1024",
"CanIsolate": "no",
"CanReload": "yes",
"CanStart": "yes",
"CanStop": "yes",
"CapabilityBoundingSet": "18446744073709551615",
"ConditionResult": "yes",
"ConditionTimestamp": "Sun 2016-05-15 18:28:49 EDT",
"ConditionTimestampMonotonic": "7902742",
"Conflicts": "shutdown.target",
"ControlGroup": "/system.slice/crond.service",
"ControlPID": "0",
"DefaultDependencies": "yes",
"Delegate": "no",
"Description": "Command Scheduler",
"DevicePolicy": "auto",
"EnvironmentFile": "/etc/sysconfig/crond (ignore_errors=no)",
"ExecMainCode": "0",
"ExecMainExitTimestampMonotonic": "0",
"ExecMainPID": "595",
"ExecMainStartTimestamp": "Sun 2016-05-15 18:28:49 EDT",
"ExecMainStartTimestampMonotonic": "8134990",
"ExecMainStatus": "0",
"ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }",
"ExecStart": "{ path=/usr/sbin/crond ; argv[]=/usr/sbin/crond -n $CRONDARGS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }",
"FragmentPath": "/usr/lib/systemd/system/crond.service",
"GuessMainPID": "yes",
"IOScheduling": "0",
"Id": "crond.service",
"IgnoreOnIsolate": "no",
"IgnoreOnSnapshot": "no",
"IgnoreSIGPIPE": "yes",
"InactiveEnterTimestampMonotonic": "0",
"InactiveExitTimestamp": "Sun 2016-05-15 18:28:49 EDT",
"InactiveExitTimestampMonotonic": "8135942",
"JobTimeoutUSec": "0",
"KillMode": "process",
"KillSignal": "15",
"LimitAS": "18446744073709551615",
"LimitCORE": "18446744073709551615",
"LimitCPU": "18446744073709551615",
"LimitDATA": "18446744073709551615",
"LimitFSIZE": "18446744073709551615",
"LimitLOCKS": "18446744073709551615",
"LimitMEMLOCK": "65536",
"LimitMSGQUEUE": "819200",
"LimitNICE": "0",
"LimitNOFILE": "4096",
"LimitNPROC": "3902",
"LimitRSS": "18446744073709551615",
"LimitRTPRIO": "0",
"LimitRTTIME": "18446744073709551615",
"LimitSIGPENDING": "3902",
"LimitSTACK": "18446744073709551615",
"LoadState": "loaded",
"MainPID": "595",
"MemoryAccounting": "no",
"MemoryLimit": "18446744073709551615",
"MountFlags": "0",
"Names": "crond.service",
"NeedDaemonReload": "no",
"Nice": "0",
"NoNewPrivileges": "no",
"NonBlocking": "no",
"NotifyAccess": "none",
"OOMScoreAdjust": "0",
"OnFailureIsolate": "no",
"PermissionsStartOnly": "no",
"PrivateNetwork": "no",
"PrivateTmp": "no",
"RefuseManualStart": "no",
"RefuseManualStop": "no",
"RemainAfterExit": "no",
"Requires": "basic.target",
"Restart": "no",
"RestartUSec": "100ms",
"Result": "success",
"RootDirectoryStartOnly": "no",
"SameProcessGroup": "no",
"SecureBits": "0",
"SendSIGHUP": "no",
"SendSIGKILL": "yes",
"Slice": "system.slice",
"StandardError": "inherit",
"StandardInput": "null",
"StandardOutput": "journal",
"StartLimitAction": "none",
"StartLimitBurst": "5",
"StartLimitInterval": "10000000",
"StatusErrno": "0",
"StopWhenUnneeded": "no",
"SubState": "running",
"SyslogLevelPrefix": "yes",
"SyslogPriority": "30",
"TTYReset": "no",
"TTYVHangup": "no",
"TTYVTDisallocate": "no",
"TimeoutStartUSec": "1min 30s",
"TimeoutStopUSec": "1min 30s",
"TimerSlackNSec": "50000",
"Transient": "no",
"Type": "simple",
"UMask": "0022",
"UnitFileState": "enabled",
"WantedBy": "multi-user.target",
"Wants": "system.slice",
"WatchdogTimestampMonotonic": "0",
"WatchdogUSec": "0",
}
''' # NOQA
import os
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.facts.system.chroot import is_chroot
from ansible.module_utils.service import sysv_exists, sysv_is_enabled, fail_if_missing
from ansible.module_utils._text import to_native
def is_running_service(service_status):
return service_status['ActiveState'] in set(['active', 'activating'])
def is_deactivating_service(service_status):
return service_status['ActiveState'] in set(['deactivating'])
def request_was_ignored(out):
return '=' not in out and ('ignoring request' in out or 'ignoring command' in out)
def parse_systemctl_show(lines):
# The output of 'systemctl show' can contain values that span multiple lines. At first glance it
# appears that such values are always surrounded by {}, so the previous version of this code
# assumed that any value starting with { was a multi-line value; it would then consume lines
# until it saw a line that ended with }. However, it is possible to have a single-line value
# that starts with { but does not end with } (this could happen in the value for Description=,
# for example), and the previous version of this code would then consume all remaining lines as
# part of that value. Cryptically, this would lead to Ansible reporting that the service file
# couldn't be found.
#
# To avoid this issue, the following code only accepts multi-line values for keys whose names
# start with Exec (e.g., ExecStart=), since these are the only keys whose values are known to
# span multiple lines.
parsed = {}
multival = []
k = None
for line in lines:
if k is None:
if '=' in line:
k, v = line.split('=', 1)
if k.startswith('Exec') and v.lstrip().startswith('{'):
if not v.rstrip().endswith('}'):
multival.append(v)
continue
parsed[k] = v.strip()
k = None
else:
multival.append(line)
if line.rstrip().endswith('}'):
parsed[k] = '\n'.join(multival).strip()
multival = []
k = None
return parsed
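# A minimal illustrative sketch of the parsing rule described above, using
# hypothetical 'systemctl show' output (not part of this module's tests):
def _demo_parse_systemctl_show():
    lines = [
        "Id=crond.service",
        "Description=Command Scheduler",
        "ExecStart={ path=/usr/sbin/crond ;",
        " argv[]=/usr/sbin/crond -n $CRONDARGS }",
    ]
    parsed = parse_systemctl_show(lines)
    # Plain values are stored as-is; only the Exec* value spanning two lines
    # is accumulated until the closing brace, then joined with a newline.
    assert parsed["Id"] == "crond.service"
    assert "\n" in parsed["ExecStart"]
    return parsed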
# ===========================================
# Main control flow
def main():
# initialize
module = AnsibleModule(
argument_spec=dict(
name=dict(type='str', aliases=['service', 'unit']),
state=dict(type='str', choices=['reloaded', 'restarted', 'started', 'stopped']),
enabled=dict(type='bool'),
force=dict(type='bool'),
masked=dict(type='bool'),
daemon_reload=dict(type='bool', default=False, aliases=['daemon-reload']),
daemon_reexec=dict(type='bool', default=False, aliases=['daemon-reexec']),
scope=dict(type='str', default='system', choices=['system', 'user', 'global']),
no_block=dict(type='bool', default=False),
),
supports_check_mode=True,
required_one_of=[['state', 'enabled', 'masked', 'daemon_reload', 'daemon_reexec']],
required_by=dict(
state=('name', ),
enabled=('name', ),
masked=('name', ),
),
)
unit = module.params['name']
if unit is not None:
for globpattern in (r"*", r"?", r"["):
if globpattern in unit:
module.fail_json(msg="This module does not currently support using glob patterns, found '%s' in service name: %s" % (globpattern, unit))
systemctl = module.get_bin_path('systemctl', True)
if os.getenv('XDG_RUNTIME_DIR') is None:
os.environ['XDG_RUNTIME_DIR'] = '/run/user/%s' % os.geteuid()
# Set CLI options depending on params
# if scope is 'system' or None, we can ignore as there is no extra switch.
# The other choices match the corresponding switch
if module.params['scope'] != 'system':
systemctl += " --%s" % module.params['scope']
if module.params['no_block']:
systemctl += " --no-block"
if module.params['force']:
systemctl += " --force"
rc = 0
out = err = ''
result = dict(
name=unit,
changed=False,
status=dict(),
)
# Run daemon-reload first, if requested
if module.params['daemon_reload'] and not module.check_mode:
(rc, out, err) = module.run_command("%s daemon-reload" % (systemctl))
if rc != 0:
module.fail_json(msg='failure %d during daemon-reload: %s' % (rc, err))
# Run daemon-reexec
if module.params['daemon_reexec'] and not module.check_mode:
(rc, out, err) = module.run_command("%s daemon-reexec" % (systemctl))
if rc != 0:
module.fail_json(msg='failure %d during daemon-reexec: %s' % (rc, err))
if unit:
found = False
is_initd = sysv_exists(unit)
is_systemd = False
# check service data, cannot error out on rc as it changes across versions, assume not found
(rc, out, err) = module.run_command("%s show '%s'" % (systemctl, unit))
if rc == 0 and not (request_was_ignored(out) or request_was_ignored(err)):
# load return of systemctl show into dictionary for easy access and return
if out:
result['status'] = parse_systemctl_show(to_native(out).split('\n'))
is_systemd = 'LoadState' in result['status'] and result['status']['LoadState'] != 'not-found'
is_masked = 'LoadState' in result['status'] and result['status']['LoadState'] == 'masked'
# Check for loading error
if is_systemd and not is_masked and 'LoadError' in result['status']:
module.fail_json(msg="Error loading unit file '%s': %s" % (unit, result['status']['LoadError']))
# Workaround for https://github.com/ansible/ansible/issues/71528
elif err and rc == 1 and 'Failed to parse bus message' in err:
result['status'] = parse_systemctl_show(to_native(out).split('\n'))
unit_base, sep, suffix = unit.partition('@')
unit_search = '{unit_base}{sep}'.format(unit_base=unit_base, sep=sep)
(rc, out, err) = module.run_command("{systemctl} list-unit-files '{unit_search}*'".format(systemctl=systemctl, unit_search=unit_search))
is_systemd = unit_search in out
(rc, out, err) = module.run_command("{systemctl} is-active '{unit}'".format(systemctl=systemctl, unit=unit))
result['status']['ActiveState'] = out.rstrip('\n')
else:
# list taken from man systemctl(1) for systemd 244
valid_enabled_states = [
"enabled",
"enabled-runtime",
"linked",
"linked-runtime",
"masked",
"masked-runtime",
"static",
"indirect",
"disabled",
"generated",
"transient"]
(rc, out, err) = module.run_command("%s is-enabled '%s'" % (systemctl, unit))
if out.strip() in valid_enabled_states:
is_systemd = True
else:
# fallback list-unit-files as show does not work on some systems (chroot)
# not used as primary as it skips some services (like those using init.d) and requires .service/etc notation
(rc, out, err) = module.run_command("%s list-unit-files '%s'" % (systemctl, unit))
if rc == 0:
is_systemd = True
else:
# Check for systemctl command
module.run_command(systemctl, check_rc=True)
# Does service exist?
found = is_systemd or is_initd
if is_initd and not is_systemd:
module.warn('The service (%s) is actually an init script but the system is managed by systemd' % unit)
# mask/unmask the service, if requested, can operate on services before they are installed
if module.params['masked'] is not None:
# state is not masked unless systemd affirms otherwise
(rc, out, err) = module.run_command("%s is-enabled '%s'" % (systemctl, unit))
masked = out.strip() == "masked"
if masked != module.params['masked']:
result['changed'] = True
if module.params['masked']:
action = 'mask'
else:
action = 'unmask'
if not module.check_mode:
(rc, out, err) = module.run_command("%s %s '%s'" % (systemctl, action, unit))
if rc != 0:
# some versions of systemd CAN mask/unmask non-existing services; we only fail on missing if they don't
fail_if_missing(module, found, unit, msg='host')
# Enable/disable service startup at boot if requested
if module.params['enabled'] is not None:
if module.params['enabled']:
action = 'enable'
else:
action = 'disable'
fail_if_missing(module, found, unit, msg='host')
# do we need to enable the service?
enabled = False
(rc, out, err) = module.run_command("%s is-enabled '%s'" % (systemctl, unit))
# check systemctl result or if it is a init script
if rc == 0:
enabled = True
elif rc == 1:
# if not a user or global user service and both init script and unit file exist stdout should have enabled/disabled, otherwise use rc entries
if module.params['scope'] == 'system' and \
is_initd and \
not out.strip().endswith('disabled') and \
sysv_is_enabled(unit):
enabled = True
# default to current state
result['enabled'] = enabled
# Change enable/disable if needed
if enabled != module.params['enabled']:
result['changed'] = True
if not module.check_mode:
(rc, out, err) = module.run_command("%s %s '%s'" % (systemctl, action, unit))
if rc != 0:
module.fail_json(msg="Unable to %s service %s: %s" % (action, unit, out + err))
result['enabled'] = not enabled
# set service state if requested
if module.params['state'] is not None:
fail_if_missing(module, found, unit, msg="host")
# default to desired state
result['state'] = module.params['state']
# What is current service state?
if 'ActiveState' in result['status']:
action = None
if module.params['state'] == 'started':
if not is_running_service(result['status']):
action = 'start'
elif module.params['state'] == 'stopped':
if is_running_service(result['status']) or is_deactivating_service(result['status']):
action = 'stop'
else:
if not is_running_service(result['status']):
action = 'start'
else:
action = module.params['state'][:-2] # remove 'ed' from restarted/reloaded
result['state'] = 'started'
if action:
result['changed'] = True
if not module.check_mode:
(rc, out, err) = module.run_command("%s %s '%s'" % (systemctl, action, unit))
if rc != 0:
module.fail_json(msg="Unable to %s service %s: %s" % (action, unit, err))
# check for chroot
elif is_chroot(module) or os.environ.get('SYSTEMD_OFFLINE') == '1':
module.warn("Target is a chroot or systemd is offline. This can lead to false positives or prevent the init system tools from working.")
else:
# this should not happen?
module.fail_json(msg="Service is in unknown state", status=result['status'])
module.exit_json(**result)
if __name__ == '__main__':
main()
| nitzmahone/ansible | lib/ansible/modules/systemd.py | Python | gpl-3.0 | 23,112 | ["Brian"] | a62490cad245633b12730cc8d3284cdc454c491ec90280221d0729da8db06330 |
# Copyright (c) Pymatgen Development Team.
# Distributed under the terms of the MIT License.
"""
This module provides classes and methods used to describe deformations and
strains, including applying those deformations to structure objects and
generating deformed structure sets for further calculations.
"""
import collections
import itertools
import numpy as np
import scipy
from pymatgen.core.lattice import Lattice
from pymatgen.core.tensors import SquareTensor, symmetry_reduce
__author__ = "Joseph Montoya"
__copyright__ = "Copyright 2012, The Materials Project"
__credits__ = "Maarten de Jong, Mark Asta, Anubhav Jain"
__version__ = "1.0"
__maintainer__ = "Joseph Montoya"
__email__ = "montoyjh@lbl.gov"
__status__ = "Production"
__date__ = "July 24, 2018"
class Deformation(SquareTensor):
"""
Subclass of SquareTensor that describes the deformation gradient tensor
"""
symbol = "d"
def __new__(cls, deformation_gradient):
"""
Create a Deformation object. Note that the constructor uses __new__
rather than __init__ according to the standard method of subclassing
numpy ndarrays.
Args:
deformation_gradient (3x3 array-like): the 3x3 array-like
representing the deformation gradient
"""
obj = super().__new__(cls, deformation_gradient)
return obj.view(cls)
def is_independent(self, tol=1e-8):
"""
checks to determine whether the deformation is independent
"""
return len(self.get_perturbed_indices(tol)) == 1
def get_perturbed_indices(self, tol=1e-8):
"""
Gets indices of perturbed elements of the deformation gradient,
i.e. those that differ from the identity
"""
indices = list(zip(*np.where(abs(self - np.eye(3)) > tol)))
return indices
@property
def green_lagrange_strain(self):
"""
Calculates the Green-Lagrange strain from
the deformation gradient
"""
return Strain.from_deformation(self)
def apply_to_structure(self, structure):
"""
Apply the deformation gradient to a structure.
Args:
structure (Structure object): the structure object to
be modified by the deformation
"""
def_struct = structure.copy()
old_latt = def_struct.lattice.matrix
new_latt = np.transpose(np.dot(self, np.transpose(old_latt)))
def_struct.lattice = Lattice(new_latt)
return def_struct
@classmethod
def from_index_amount(cls, matrixpos, amt):
"""
Factory method for constructing a Deformation object
from a matrix position and amount
Args:
matrixpos (tuple): tuple corresponding the matrix position to
have a perturbation added
amt (float): amount to add to the identity matrix at position
matrixpos
"""
f = np.identity(3)
f[matrixpos] += amt
return cls(f)
class DeformedStructureSet(collections.abc.Sequence):
"""
class that generates a set of independently deformed structures that
can be used to calculate linear stress-strain response
"""
def __init__(self, structure, norm_strains=None, shear_strains=None, symmetry=False):
"""
constructs the deformed geometries of a structure. Generates
m + n deformed structures according to the supplied parameters.
Args:
structure (Structure): structure to undergo deformation
norm_strains (list of floats): strain values to apply
to each normal mode.
shear_strains (list of floats): strain values to apply
to each shear mode.
symmetry (bool): whether or not to use symmetry reduction.
"""
norm_strains = norm_strains or [-0.01, -0.005, 0.005, 0.01]
shear_strains = shear_strains or [-0.06, -0.03, 0.03, 0.06]
self.undeformed_structure = structure
self.deformations = []
self.def_structs = []
# Generate deformations
for ind in [(0, 0), (1, 1), (2, 2)]:
for amount in norm_strains:
strain = Strain.from_index_amount(ind, amount)
self.deformations.append(strain.get_deformation_matrix())
for ind in [(0, 1), (0, 2), (1, 2)]:
for amount in shear_strains:
strain = Strain.from_index_amount(ind, amount)
self.deformations.append(strain.get_deformation_matrix())
# Perform symmetry reduction if specified
if symmetry:
self.sym_dict = symmetry_reduce(self.deformations, structure)
self.deformations = list(self.sym_dict.keys())
self.deformed_structures = [defo.apply_to_structure(structure) for defo in self.deformations]
def __iter__(self):
return iter(self.deformed_structures)
def __len__(self):
return len(self.deformed_structures)
def __getitem__(self, ind):
return self.deformed_structures[ind]
class Strain(SquareTensor):
"""
Subclass of SquareTensor that describes the Green-Lagrange strain tensor.
"""
symbol = "e"
def __new__(cls, strain_matrix):
"""
Create a Strain object. Note that the constructor uses __new__
rather than __init__ according to the standard method of
subclassing numpy ndarrays. Note also that the default constructor
does not include the deformation gradient
Args:
strain_matrix (3x3 array-like): the 3x3 array-like
representing the Green-Lagrange strain
"""
vscale = np.ones((6,))
vscale[3:] *= 2
obj = super().__new__(cls, strain_matrix, vscale=vscale)
if not obj.is_symmetric():
raise ValueError(
"Strain objects must be initialized "
"with a symmetric array or a voigt-notation "
"vector with six entries."
)
return obj.view(cls)
def __array_finalize__(self, obj):
if obj is None:
return
self.rank = getattr(obj, "rank", None)
self._vscale = getattr(obj, "_vscale", None)
@classmethod
def from_deformation(cls, deformation):
"""
Factory method that returns a Strain object from a deformation
gradient
Args:
deformation (3x3 array-like):
"""
dfm = Deformation(deformation)
return cls(0.5 * (np.dot(dfm.trans, dfm) - np.eye(3)))
@classmethod
def from_index_amount(cls, idx, amount):
"""
Like Deformation.from_index_amount, except generates
a strain from the zero 3x3 tensor or voigt vector with
the amount specified in the index location. Ensures
symmetric strain.
Args:
idx (tuple or integer): index to be perturbed, can be voigt or
full-tensor notation
amount (float): amount to perturb selected index
"""
if np.array(idx).ndim == 0:
v = np.zeros(6)
v[idx] = amount
return cls.from_voigt(v)
if np.array(idx).ndim == 1:
v = np.zeros((3, 3))
for i in itertools.permutations(idx):
v[i] = amount
return cls(v)
raise ValueError("Index must either be 2-tuple or integer corresponding to full-tensor or voigt index")
def get_deformation_matrix(self, shape="upper"):
"""
returns the deformation matrix
"""
return convert_strain_to_deformation(self, shape=shape)
@property
def von_mises_strain(self):
"""
Equivalent strain to von Mises stress
"""
eps = self - 1 / 3 * np.trace(self) * np.identity(3)
return np.sqrt(np.sum(eps * eps) * 2 / 3)
def convert_strain_to_deformation(strain, shape="upper"):
"""
This function converts a strain to a deformation gradient that will
produce that strain. Supports two methods:
Args:
strain (3x3 array-like): strain matrix
shape: (string): method for determining deformation, supports
"upper" produces an upper triangular defo
"lower" produces a lower triangular defo
"symmetric" produces a symmetric defo
"""
strain = SquareTensor(strain)
ftdotf = 2 * strain + np.eye(3)
if shape == "upper":
result = scipy.linalg.cholesky(ftdotf)
elif shape == "symmetric":
result = scipy.linalg.sqrtm(ftdotf)
else:
raise ValueError('shape must be "upper" or "symmetric"')
return Deformation(result)
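# A minimal sketch (hypothetical values) tying the classes above together:
# build a deformation F, take its Green-Lagrange strain E = 1/2 (F^T F - I),
# and recover an upper-triangular deformation via the Cholesky route above.
def _demo_strain_roundtrip():
    defo = Deformation.from_index_amount((0, 0), 0.01)  # F = I with F[0, 0] += 0.01
    strain = defo.green_lagrange_strain
    recovered = strain.get_deformation_matrix(shape="upper")
    # Both deformations share the same right Cauchy-Green tensor F^T F.
    assert np.allclose(np.dot(np.transpose(recovered), recovered),
                       np.dot(np.transpose(defo), defo))
    return recovered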
| materialsproject/pymatgen | pymatgen/analysis/elasticity/strain.py | Python | mit | 8,734 | ["pymatgen"] | cb5805fc31e9f4c1fa390c7ed80c501f7531aef07a746a6be834801f12d9fda9 |
# Copyright 1999-2000 by Jeffrey Chang. All rights reserved.
# This code is part of the Biopython distribution and governed by its
# license. Please see the LICENSE file that should have been included
# as part of this package.
"""Record classes to hold BLAST output.
Classes:
Blast Holds all the information from a blast search.
PSIBlast Holds all the information from a psi-blast search.
Header Holds information from the header.
Description Holds information about one hit description.
Alignment Holds information about one alignment hit.
HSP Holds information about one HSP.
MultipleAlignment Holds information about a multiple alignment.
DatabaseReport Holds information from the database report.
Parameters Holds information from the parameters.
"""
# XXX finish printable BLAST output
from Bio.Align import Generic
class Header(object):
"""Saves information from a blast header.
Members:
application The name of the BLAST flavor that generated this data.
version Version of blast used.
date Date this data was generated.
reference Reference for blast.
query Name of query sequence.
query_letters Number of letters in the query sequence. (int)
database Name of the database.
database_sequences Number of sequences in the database. (int)
database_letters Number of letters in the database. (int)
"""
def __init__(self):
self.application = ''
self.version = ''
self.date = ''
self.reference = ''
self.query = ''
self.query_letters = None
self.database = ''
self.database_sequences = None
self.database_letters = None
class Description(object):
"""Stores information about one hit in the descriptions section.
Members:
title Title of the hit.
score Number of bits. (int)
bits Bit score. (float)
e E value. (float)
num_alignments Number of alignments for the same subject. (int)
"""
def __init__(self):
self.title = ''
self.score = None
self.bits = None
self.e = None
self.num_alignments = None
def __str__(self):
return "%-66s %5s %s" % (self.title, self.score, self.e)
class Alignment(object):
"""Stores information about one hit in the alignments section.
Members:
title Name.
hit_id Hit identifier. (str)
hit_def Hit definition. (str)
length Length. (int)
hsps A list of HSP objects.
"""
def __init__(self):
self.title = ''
self.hit_id = ''
self.hit_def = ''
self.length = None
self.hsps = []
def __str__(self):
lines = self.title.split('\n')
lines.append("Length = %s\n" % self.length)
return '\n '.join(lines)
class HSP(object):
"""Stores information about one hsp in an alignment hit.
Members:
score BLAST score of hit. (float)
bits Number of bits for that score. (float)
expect Expect value. (float)
num_alignments Number of alignments for same subject. (int)
identities Number of identities (int) if using the XML parser.
Tuple of number of identities/total aligned (int, int)
if using the (obsolete) plain text parser.
positives Number of positives (int) if using the XML parser.
Tuple of number of positives/total aligned (int, int)
if using the (obsolete) plain text parser.
gaps Number of gaps (int) if using the XML parser.
Tuple of number of gaps/total aligned (int, int) if
using the (obsolete) plain text parser.
align_length Length of the alignment. (int)
strand Tuple of (query, target) strand.
frame Tuple of 1 or 2 frame shifts, depending on the flavor.
query The query sequence.
query_start The start residue for the query sequence. (1-based)
query_end The end residue for the query sequence. (1-based)
match The match sequence.
sbjct The sbjct sequence.
sbjct_start The start residue for the sbjct sequence. (1-based)
sbjct_end The end residue for the sbjct sequence. (1-based)
Not all flavors of BLAST return values for every attribute:
score expect identities positives strand frame
BLASTP X X X X
BLASTN X X X X X
BLASTX X X X X X
TBLASTN X X X X X
TBLASTX X X X X X/X
Note: for BLASTX, the query sequence is shown as a protein sequence,
but the numbering is based on the nucleotides. Thus, the numbering
is 3x larger than the number of amino acid residues. A similar effect
can be seen for the sbjct sequence in TBLASTN, and for both sequences
in TBLASTX.
Also, for negative frames, the sequence numbering starts from
query_start and counts down.
"""
def __init__(self):
self.score = None
self.bits = None
self.expect = None
self.num_alignments = None
self.identities = (None, None)
self.positives = (None, None)
self.gaps = (None, None)
self.align_length = None
self.strand = (None, None)
self.frame = ()
self.query = ''
self.query_start = None
self.query_end = None
self.match = ''
self.sbjct = ''
self.sbjct_start = None
self.sbjct_end = None
def __str__(self):
lines = ["Score %i (%i bits), expectation %0.1e, alignment length %i" \
% (self.score, self.bits, self.expect, self.align_length)]
if self.align_length < 50:
lines.append("Query:%s %s %s" % (str(self.query_start).rjust(8),
str(self.query),
str(self.query_end)))
lines.append(" %s" \
% (str(self.match)))
lines.append("Sbjct:%s %s %s" % (str(self.sbjct_start).rjust(8),
str(self.sbjct),
str(self.sbjct_end)))
else:
lines.append("Query:%s %s...%s %s" \
% (str(self.query_start).rjust(8),
str(self.query)[:45],
str(self.query)[-3:],
str(self.query_end)))
lines.append(" %s...%s" \
% (str(self.match)[:45],
str(self.match)[-3:]))
lines.append("Sbjct:%s %s...%s %s" \
% (str(self.sbjct_start).rjust(8),
str(self.sbjct)[:45],
str(self.sbjct)[-3:],
str(self.sbjct_end)))
return "\n".join(lines)
class MultipleAlignment(object):
"""Holds information about a multiple alignment.
Members:
alignment A list of tuples (name, start residue, sequence, end residue).
The start residue is 1-based. It may be blank, if that sequence is
not aligned in the multiple alignment.
"""
def __init__(self):
self.alignment = []
def to_generic(self, alphabet):
"""Retrieve generic alignment object for the given alignment.
Instead of the tuples, this returns an Alignment object from
Bio.Align.Generic, through which you can manipulate and query
the object.
alphabet is the specified alphabet for the sequences in the code (for
example, IUPAC.IUPACProtein).
Thanks to James Casbon for the code.
"""
#TODO - Switch to new Bio.Align.MultipleSeqAlignment class?
seq_parts = []
seq_names = []
parse_number = 0
n = 0
for name, start, seq, end in self.alignment:
if name == 'QUERY': #QUERY is the first in each alignment block
parse_number += 1
n = 0
if parse_number == 1: # create on first_parse, append on all others
seq_parts.append(seq)
seq_names.append(name)
else:
seq_parts[n] += seq
n += 1
generic = Generic.Alignment(alphabet)
for (name,seq) in zip(seq_names,seq_parts):
generic.add_sequence(name, seq)
return generic
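# A minimal sketch (hypothetical two-block alignment data; Bio.Alphabet is
# assumed available, as in the Biopython vintage this file targets):
def _demo_to_generic():
    from Bio.Alphabet import generic_protein
    ma = MultipleAlignment()
    ma.alignment = [("QUERY", 1, "ACDE", 4),
                    ("hit_1", 1, "AC-E", 4),
                    ("QUERY", 5, "FGHI", 8),
                    ("hit_1", 5, "FGHI", 8)]
    generic = ma.to_generic(generic_protein)
    # Rows are concatenated across blocks, so QUERY becomes "ACDEFGHI".
    return generic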
class Round(object):
"""Holds information from a PSI-BLAST round.
Members:
number Round number. (int)
reused_seqs Sequences in model, found again. List of Description objects.
new_seqs Sequences not found, or below threshold. List of Description.
alignments A list of Alignment objects.
multiple_alignment A MultipleAlignment object.
"""
def __init__(self):
self.number = None
self.reused_seqs = []
self.new_seqs = []
self.alignments = []
self.multiple_alignment = None
class DatabaseReport(object):
"""Holds information about a database report.
Members:
database_name List of database names. (can have multiple dbs)
num_letters_in_database Number of letters in the database. (int)
num_sequences_in_database List of number of sequences in the database.
posted_date List of the dates the databases were posted.
ka_params A tuple of (lambda, k, h) values. (floats)
gapped # XXX this isn't set right!
ka_params_gap A tuple of (lambda, k, h) values. (floats)
"""
def __init__(self):
self.database_name = []
self.posted_date = []
self.num_letters_in_database = []
self.num_sequences_in_database = []
self.ka_params = (None, None, None)
self.gapped = 0
self.ka_params_gap = (None, None, None)
class Parameters(object):
"""Holds information about the parameters.
Members:
matrix Name of the matrix.
gap_penalties Tuple of (open, extend) penalties. (floats)
sc_match Match score for nucleotide-nucleotide comparison
sc_mismatch Mismatch penalty for nucleotide-nucleotide comparison
num_hits Number of hits to the database. (int)
num_sequences Number of sequences. (int)
num_good_extends Number of extensions. (int)
num_seqs_better_e Number of sequences better than e-value. (int)
hsps_no_gap Number of HSP's better, without gapping. (int)
hsps_prelim_gapped Number of HSP's gapped in prelim test. (int)
hsps_prelim_gapped_attemped Number of HSP's attempted in prelim. (int)
hsps_gapped Total number of HSP's gapped. (int)
query_length Length of the query. (int)
query_id Identifier of the query sequence. (str)
database_length Number of letters in the database. (int)
effective_hsp_length Effective HSP length. (int)
effective_query_length Effective length of query. (int)
effective_database_length Effective length of database. (int)
effective_search_space Effective search space. (int)
effective_search_space_used Effective search space used. (int)
frameshift Frameshift window. Tuple of (int, float)
threshold Threshold. (int)
window_size Window size. (int)
dropoff_1st_pass Tuple of (score, bits). (int, float)
gap_x_dropoff Tuple of (score, bits). (int, float)
gap_x_dropoff_final Tuple of (score, bits). (int, float)
gap_trigger Tuple of (score, bits). (int, float)
blast_cutoff Tuple of (score, bits). (int, float)
"""
def __init__(self):
self.matrix = ''
self.gap_penalties = (None, None)
self.sc_match = None
self.sc_mismatch = None
self.num_hits = None
self.num_sequences = None
self.num_good_extends = None
self.num_seqs_better_e = None
self.hsps_no_gap = None
self.hsps_prelim_gapped = None
self.hsps_prelim_gapped_attemped = None
self.hsps_gapped = None
self.query_id = None
self.query_length = None
self.database_length = None
self.effective_hsp_length = None
self.effective_query_length = None
self.effective_database_length = None
self.effective_search_space = None
self.effective_search_space_used = None
self.frameshift = (None, None)
self.threshold = None
self.window_size = None
self.dropoff_1st_pass = (None, None)
self.gap_x_dropoff = (None, None)
self.gap_x_dropoff_final = (None, None)
self.gap_trigger = (None, None)
self.blast_cutoff = (None, None)
#TODO - Add a friendly __str__ method to BLAST results
class Blast(Header, DatabaseReport, Parameters):
"""Saves the results from a blast search.
Members:
descriptions A list of Description objects.
alignments A list of Alignment objects.
multiple_alignment A MultipleAlignment object.
+ members inherited from base classes
"""
def __init__(self):
Header.__init__(self)
DatabaseReport.__init__(self)
Parameters.__init__(self)
self.descriptions = []
self.alignments = []
self.multiple_alignment = None
class PSIBlast(Header, DatabaseReport, Parameters):
"""Saves the results from a blastpgp search.
Members:
rounds A list of Round objects.
converged Whether the search converged.
+ members inherited from base classes
"""
def __init__(self):
Header.__init__(self)
DatabaseReport.__init__(self)
Parameters.__init__(self)
self.rounds = []
self.converged = 0
| bryback/quickseq | genescript/Bio/Blast/Record.py | Python | mit | 14,376 | ["BLAST", "Biopython"] | 39277a716351b79e25fbfde346564ab3f485fe0e4e17467803662b403ec7e2be |
import sys
from pathlib import Path
import neurom
from neurom import load_morphology
from neurom.view import plotly_impl
import mock
from numpy.testing import assert_allclose
SWC_PATH = Path(__file__).parent.parent / 'data/swc'
MORPH_FILENAME = SWC_PATH / 'Neuron.swc'
m = load_morphology(MORPH_FILENAME)
def _reload_module(module):
"""Force module reload."""
import importlib
importlib.reload(module)
def test_plotly_extra_not_installed():
with mock.patch.dict(sys.modules, {'plotly': None}):
try:
_reload_module(neurom.view.plotly_impl)
assert False, "ImportError not triggered"
except ImportError as e:
assert (str(e) ==
'neurom[plotly] is not installed. '
'Please install it by doing: pip install neurom[plotly]')
def test_plotly_draw_morph3d():
plotly_impl.plot_morph3d(m, auto_open=False)
plotly_impl.plot_morph3d(m.neurites[0], auto_open=False)
fig = plotly_impl.plot_morph3d(load_morphology(SWC_PATH / 'simple-different-soma.swc'),
auto_open=False)
x, y, z = [fig['data'][2][key] for key in str('xyz')]
assert_allclose(x[0, 0], 2)
assert_allclose(x[33, 33], -1.8971143170299758)
assert_allclose(y[0, 0], 3)
assert_allclose(y[33, 33], 9.75)
assert_allclose(z[0, 0], 13)
assert_allclose(z[33, 33], 8.5)
def test_plotly_draw_morph2d():
plotly_impl.plot_morph(m, auto_open=False)
plotly_impl.plot_morph(m.neurites[0], auto_open=False)
| BlueBrain/NeuroM | tests/view/test_plotly_impl.py | Python | bsd-3-clause | 1,558 | ["NEURON"] | 2fa8cb286f7c0e26ae19edd8b2727eb542ac17476eedfbc86f369a43274f80cf |
"""test_correctilluminationcalculate.py - test the CorrectIlluminationCalculate module
CellProfiler is distributed under the GNU General Public License.
See the accompanying file LICENSE for details.
Copyright (c) 2003-2009 Massachusetts Institute of Technology
Copyright (c) 2009-2015 Broad Institute
All rights reserved.
Please see the AUTHORS file for credits.
Website: http://www.cellprofiler.org
"""
import base64
import numpy as np
from StringIO import StringIO
import unittest
import sys
import zlib
from cellprofiler.preferences import set_headless
set_headless()
import cellprofiler.pipeline as cpp
import cellprofiler.settings as cps
import cellprofiler.cpimage as cpi
import cellprofiler.workspace as cpw
import cellprofiler.objects as cpo
import cellprofiler.measurements as cpm
import cellprofiler.modules.injectimage as inj
import cellprofiler.modules.correctilluminationcalculate as calc
INPUT_IMAGE_NAME = "MyImage"
OUTPUT_IMAGE_NAME = "MyResult"
AVERAGE_IMAGE_NAME = "Ave"
DILATED_IMAGE_NAME = "Dilate"
class TestCorrectImage_Calculate(unittest.TestCase):
@classmethod
def setUpClass(cls):
'''Backwards compatibility for Python 2.6 unittest'''
if not hasattr(cls, "assertIn"):
cls.assertIn = lambda self, x, y: self.assertTrue(x in y)
if not hasattr(cls, "assertNotIn"):
cls.assertNotIn = lambda self, x, y: self.assertFalse(x in y)
def error_callback(self, caller, event):
if isinstance(event, cpp.RunExceptionEvent):
self.fail(event.error.message)
def make_workspaces(self, images_and_masks):
'''Make a workspace for each image set provided
images_and_masks - a collection of two-tuples: image+mask
returns a list of workspaces + the module
'''
image_set_list = cpi.ImageSetList()
workspaces = []
module = calc.CorrectIlluminationCalculate()
module.module_num = 1
module.image_name.value = INPUT_IMAGE_NAME
module.illumination_image_name.value = OUTPUT_IMAGE_NAME
module.average_image_name.value = AVERAGE_IMAGE_NAME
module.dilated_image_name.value = DILATED_IMAGE_NAME
pipeline = cpp.Pipeline()
pipeline.add_listener(self.error_callback)
measurements = cpm.Measurements()
for i, (image, mask) in enumerate(images_and_masks):
image_set = image_set_list.get_image_set(i)
if mask is None:
image = cpi.Image(image)
else:
image = cpi.Image(image, mask)
image_set.add(INPUT_IMAGE_NAME, image)
workspace = cpw.Workspace(
pipeline, module, image_set, cpo.ObjectSet(),
measurements, image_set_list)
workspaces.append(workspace)
return workspaces, module
def test_00_00_zeros(self):
"""Test all combinations of options with an image of all zeros"""
for image in (np.zeros((10,10)), np.zeros((10,10,3))):
pipeline = cpp.Pipeline()
pipeline.add_listener(self.error_callback)
inj_module = inj.InjectImage("MyImage", image)
inj_module.module_num = 1
pipeline.add_module(inj_module)
module = calc.CorrectIlluminationCalculate()
module.module_num = 2
pipeline.add_module(module)
module.image_name.value = "MyImage"
module.illumination_image_name.value = "OutputImage"
module.save_average_image.value = True
module.save_dilated_image.value = True
for ea in (calc.EA_EACH, calc.EA_ALL_ACROSS, calc.EA_ALL_FIRST):
module.each_or_all.value = ea
for intensity_choice in (calc.IC_BACKGROUND, calc.IC_REGULAR):
module.intensity_choice.value = intensity_choice
for dilate_objects in (True, False):
module.dilate_objects.value = dilate_objects
for rescale_option in (cps.YES, cps.NO, calc.RE_MEDIAN):
module.rescale_option.value = rescale_option
for smoothing_method \
in (calc.SM_NONE, calc.SM_FIT_POLYNOMIAL,
calc.SM_GAUSSIAN_FILTER, calc.SM_MEDIAN_FILTER,
calc.SM_TO_AVERAGE, calc.SM_SPLINES,
calc.SM_CONVEX_HULL):
module.smoothing_method.value = smoothing_method
for ow in (calc.FI_AUTOMATIC, calc.FI_MANUALLY,
calc.FI_OBJECT_SIZE):
module.automatic_object_width.value = ow
measurements = cpm.Measurements()
image_set_list = cpi.ImageSetList()
workspace = cpw.Workspace(
pipeline, None, None, None,
measurements, image_set_list)
pipeline.prepare_run(workspace)
inj_module.prepare_group(workspace, {}, [1])
module.prepare_group(workspace, {}, [1])
image_set = image_set_list.get_image_set(0)
object_set = cpo.ObjectSet()
workspace = cpw.Workspace(pipeline,
inj_module,
image_set,
object_set,
measurements,
image_set_list)
inj_module.run(workspace)
module.run(workspace)
image = image_set.get_image("OutputImage")
self.assertTrue(image is not None)
self.assertTrue(np.all(image.pixel_data == 0),
"""Failure case:
intensity_choice = %(intensity_choice)s
dilate_objects = %(dilate_objects)s
rescale_option = %(rescale_option)s
smoothing_method = %(smoothing_method)s
automatic_object_width = %(ow)s"""%locals())
def test_01_01_ones_image(self):
"""The illumination correction of an image of all ones should be uniform
"""
pipeline = cpp.Pipeline()
pipeline.add_listener(self.error_callback)
for image in ((np.ones((10, 10)), np.ones((10, 10, 3)))):
inj_module = inj.InjectImage("MyImage", image)
inj_module.module_num = 1
pipeline.add_module(inj_module)
module = calc.CorrectIlluminationCalculate()
module.module_num = 2
pipeline.add_module(module)
module.image_name.value = "MyImage"
module.illumination_image_name.value = "OutputImage"
module.rescale_option.value = cps.YES
for ea in (calc.EA_EACH, calc.EA_ALL_ACROSS, calc.EA_ALL_FIRST):
module.each_or_all.value = ea
for intensity_choice in (calc.IC_BACKGROUND, calc.IC_REGULAR):
module.intensity_choice.value = intensity_choice
for dilate_objects in (True, False):
module.dilate_objects.value = dilate_objects
for smoothing_method \
in (calc.SM_NONE, calc.SM_FIT_POLYNOMIAL,
calc.SM_GAUSSIAN_FILTER, calc.SM_MEDIAN_FILTER,
calc.SM_TO_AVERAGE, calc.SM_SPLINES,
calc.SM_CONVEX_HULL):
module.smoothing_method.value = smoothing_method
for ow in (calc.FI_AUTOMATIC, calc.FI_MANUALLY,
calc.FI_OBJECT_SIZE):
module.automatic_object_width.value = ow
measurements = cpm.Measurements()
image_set_list = cpi.ImageSetList()
workspace = cpw.Workspace(
pipeline, None, None, None,
measurements, image_set_list)
pipeline.prepare_run(workspace)
inj_module.prepare_group(workspace, {}, [1])
module.prepare_group(workspace, {}, [1])
image_set = image_set_list.get_image_set(0)
object_set = cpo.ObjectSet()
workspace = cpw.Workspace(pipeline,
inj_module,
image_set,
object_set,
measurements,
image_set_list)
inj_module.run(workspace)
module.run(workspace)
image = image_set.get_image("OutputImage")
self.assertTrue(image is not None)
self.assertTrue(np.all(np.std(image.pixel_data) < .00001),
"""Failure case:
each_or_all = %(ea)s
intensity_choice = %(intensity_choice)s
dilate_objects = %(dilate_objects)s
smoothing_method = %(smoothing_method)s
automatic_object_width = %(ow)s"""%locals())
def test_01_02_masked_image(self):
"""A masked image should be insensitive to points outside the mask"""
pipeline = cpp.Pipeline()
pipeline.add_listener(self.error_callback)
np.random.seed(12)
for image in (np.random.uniform(size=(10,10)),
np.random.uniform(size=(10, 10, 3))):
mask = np.zeros((10,10),bool)
mask[2:7,3:8] = True
image[mask] = 1
inj_module = inj.InjectImage("MyImage", image, mask)
inj_module.module_num = 1
pipeline.add_module(inj_module)
module = calc.CorrectIlluminationCalculate()
module.module_num = 2
pipeline.add_module(module)
module.image_name.value = "MyImage"
module.illumination_image_name.value = "OutputImage"
module.rescale_option.value = cps.YES
module.dilate_objects.value = False
for ea in (calc.EA_EACH, calc.EA_ALL_ACROSS, calc.EA_ALL_FIRST):
module.each_or_all.value = ea
for intensity_choice in (calc.IC_BACKGROUND, calc.IC_REGULAR):
module.intensity_choice.value = intensity_choice
for smoothing_method \
in (calc.SM_NONE, calc.SM_FIT_POLYNOMIAL,
calc.SM_GAUSSIAN_FILTER, calc.SM_MEDIAN_FILTER,
calc.SM_TO_AVERAGE, calc.SM_CONVEX_HULL):
module.smoothing_method.value = smoothing_method
for ow in (calc.FI_AUTOMATIC, calc.FI_MANUALLY,
calc.FI_OBJECT_SIZE):
module.automatic_object_width.value = ow
measurements = cpm.Measurements()
image_set_list = cpi.ImageSetList()
workspace = cpw.Workspace(
pipeline, None, None, None,
measurements, image_set_list)
pipeline.prepare_run(workspace)
inj_module.prepare_group(workspace, {}, [1])
module.prepare_group(workspace, {}, [1])
image_set = image_set_list.get_image_set(0)
object_set = cpo.ObjectSet()
workspace = cpw.Workspace(pipeline,
inj_module,
image_set,
object_set,
measurements,
image_set_list)
inj_module.run(workspace)
module.run(workspace)
image = image_set.get_image("OutputImage")
self.assertTrue(image is not None)
self.assertTrue(np.all(np.abs(image.pixel_data[mask] - 1) < .00001),
"""Failure case:
each_or_all = %(ea)s
intensity_choice = %(intensity_choice)s
smoothing_method = %(smoothing_method)s
automatic_object_width = %(ow)s"""%locals())
def test_01_03_filtered(self):
'''Regression test of issue #310
post_group should add the composite image to the image set
if CorrectIllumination_Calculate didn't run because the image
set was filtered.
'''
r = np.random.RandomState()
r.seed(13)
i0 = r.uniform(size=(11,13))
i1 = r.uniform(size=(11,13))
i2 = r.uniform(size=(11,13))
workspaces, module = self.make_workspaces((
( i0, None),
(i1, None),
(i2, None)))
module.each_or_all.value = calc.EA_ALL_ACROSS
module.smoothing_method.value = calc.SM_TO_AVERAGE
module.save_average_image.value = True
module.save_dilated_image.value = True
module.prepare_group(workspaces[0], None, [1, 2, 3])
assert isinstance(module, calc.CorrectIlluminationCalculate)
for workspace in workspaces[:-1]:
assert isinstance(workspace, cpw.Workspace)
module.run(workspace)
image_set = workspaces[-1].image_set
self.assertNotIn(OUTPUT_IMAGE_NAME, image_set.get_names())
self.assertNotIn(DILATED_IMAGE_NAME, image_set.get_names())
self.assertNotIn(AVERAGE_IMAGE_NAME, image_set.get_names())
module.post_group(workspaces[-1], None)
self.assertIn(OUTPUT_IMAGE_NAME, image_set.get_names())
self.assertIn(DILATED_IMAGE_NAME, image_set.get_names())
self.assertIn(AVERAGE_IMAGE_NAME, image_set.get_names())
def test_01_04_not_filtered(self):
'''Regression test of issue #310, negative case
post_group should not add the composite image to the image set
if CorrectIllumination_Calculate did run.
'''
r = np.random.RandomState()
r.seed(13)
i0 = r.uniform(size=(11,13))
i1 = r.uniform(size=(11,13))
i2 = r.uniform(size=(11,13))
workspaces, module = self.make_workspaces((
( i0, None),
(i1, None),
(i2, None)))
module.each_or_all.value = calc.EA_ALL_ACROSS
module.smoothing_method.value = calc.SM_TO_AVERAGE
module.save_average_image.value = True
module.save_dilated_image.value = True
module.prepare_group(workspaces[0], None, [1, 2, 3])
assert isinstance(module, calc.CorrectIlluminationCalculate)
for workspace in workspaces:
assert isinstance(workspace, cpw.Workspace)
module.run(workspace)
image_set = workspaces[-1].image_set
self.assertIn(OUTPUT_IMAGE_NAME, image_set.get_names())
self.assertIn(DILATED_IMAGE_NAME, image_set.get_names())
self.assertIn(AVERAGE_IMAGE_NAME, image_set.get_names())
module.post_group(workspaces[-1], None)
#
# Make sure it appears only once
#
for image_name in (
OUTPUT_IMAGE_NAME, DILATED_IMAGE_NAME, AVERAGE_IMAGE_NAME):
self.assertEqual(len(filter(lambda x: x == image_name,
image_set.get_names())), 1)
def test_02_02_Background(self):
"""Test an image with four distinct backgrounds"""
pipeline = cpp.Pipeline()
pipeline.add_listener(self.error_callback)
image = np.ones((40,40))
image[10,10] = .25
image[10,30] = .5
image[30,10] = .75
image[30,30] = .9
inj_module = inj.InjectImage("MyImage", image)
inj_module.module_num = 1
pipeline.add_module(inj_module)
module = calc.CorrectIlluminationCalculate()
module.module_num = 2
pipeline.add_module(module)
module.image_name.value = "MyImage"
module.illumination_image_name.value = "OutputImage"
module.intensity_choice.value = calc.IC_BACKGROUND
module.each_or_all.value = calc.EA_EACH
module.block_size.value = 20
module.rescale_option.value = cps.NO
module.dilate_objects.value = False
module.smoothing_method.value = calc.SM_NONE
measurements = cpm.Measurements()
image_set_list = cpi.ImageSetList()
workspace = cpw.Workspace(pipeline, None, None, None,
measurements, image_set_list)
pipeline.prepare_run(workspace)
inj_module.prepare_group(workspace, {}, [1])
module.prepare_group(workspace, {}, [1])
image_set = image_set_list.get_image_set(0)
object_set = cpo.ObjectSet()
workspace = cpw.Workspace(pipeline,
inj_module,
image_set,
object_set,
measurements,
image_set_list)
inj_module.run(workspace)
module.run(workspace)
image = image_set.get_image("OutputImage")
self.assertTrue(np.all(image.pixel_data[:20,:20] == .25))
self.assertTrue(np.all(image.pixel_data[:20,20:] == .5))
self.assertTrue(np.all(image.pixel_data[20:,:20] == .75))
self.assertTrue(np.all(image.pixel_data[20:,20:] == .9))
def test_03_00_no_smoothing(self):
"""Make sure that no smoothing takes place if smoothing is turned off"""
input_image = np.random.uniform(size=(10,10))
image_name = "InputImage"
pipeline = cpp.Pipeline()
pipeline.add_listener(self.error_callback)
inj_module = inj.InjectImage(image_name, input_image)
inj_module.module_num = 1
pipeline.add_module(inj_module)
module = calc.CorrectIlluminationCalculate()
module.module_num = 2
pipeline.add_module(module)
module.image_name.value = image_name
module.illumination_image_name.value = "OutputImage"
module.intensity_choice.value = calc.IC_REGULAR
module.each_or_all.value = calc.EA_EACH
module.smoothing_method.value = calc.SM_NONE
module.rescale_option.value = cps.NO
module.dilate_objects.value = False
measurements = cpm.Measurements()
image_set_list = cpi.ImageSetList()
workspace = cpw.Workspace(pipeline, None, None, None,
measurements, image_set_list)
pipeline.prepare_run(workspace)
inj_module.prepare_group(workspace, {}, [1])
module.prepare_group(workspace, {}, [1])
image_set = image_set_list.get_image_set(0)
object_set = cpo.ObjectSet()
workspace = cpw.Workspace(pipeline,
inj_module,
image_set,
object_set,
measurements,
image_set_list)
inj_module.run(workspace)
module.run(workspace)
image = image_set.get_image("OutputImage")
self.assertTrue(np.all(np.abs(image.pixel_data-input_image) < .001),
"Failed to fit polynomial to %s"%(image_name))
def test_03_01_FitPolynomial(self):
"""Test fitting a polynomial to different gradients"""
y,x = (np.mgrid[0:20,0:20]).astype(float)/20.0
image_x = x
image_y = y
image_x2 = x**2
image_y2 = y**2
image_xy = x*y
for input_image, image_name in ((image_x, "XImage"),
(image_y, "YImage"),
(image_x2, "X2Image"),
(image_y2, "Y2Image"),
(image_xy, "XYImage")):
pipeline = cpp.Pipeline()
pipeline.add_listener(self.error_callback)
inj_module = inj.InjectImage(image_name, input_image)
inj_module.module_num = 1
pipeline.add_module(inj_module)
module = calc.CorrectIlluminationCalculate()
module.module_num = 2
pipeline.add_module(module)
module.image_name.value = image_name
module.illumination_image_name.value = "OutputImage"
module.intensity_choice.value = calc.IC_REGULAR
module.each_or_all.value = calc.EA_EACH
module.smoothing_method.value = calc.SM_FIT_POLYNOMIAL
module.rescale_option.value = cps.NO
module.dilate_objects.value = False
measurements = cpm.Measurements()
image_set_list = cpi.ImageSetList()
workspace = cpw.Workspace(pipeline, None, None, None,
measurements, image_set_list)
pipeline.prepare_run(workspace)
inj_module.prepare_group(workspace, {}, [1])
module.prepare_group(workspace, {}, [1])
image_set = image_set_list.get_image_set(0)
object_set = cpo.ObjectSet()
workspace = cpw.Workspace(pipeline,
inj_module,
image_set,
object_set,
measurements,
image_set_list)
inj_module.run(workspace)
module.run(workspace)
image = image_set.get_image("OutputImage")
self.assertTrue(np.all(np.abs(image.pixel_data-input_image) < .001),
"Failed to fit polynomial to %s"%(image_name))
def test_03_02_gaussian_filter(self):
"""Test gaussian filtering a gaussian of a point"""
input_image = np.zeros((101,101))
input_image[50,50] = 1
image_name = "InputImage"
i,j = np.mgrid[-50:51,-50:51]
expected_image = np.e ** (- (i**2+j**2)/(2*(10.0/2.35)**2))
pipeline = cpp.Pipeline()
pipeline.add_listener(self.error_callback)
inj_module = inj.InjectImage(image_name, input_image)
inj_module.module_num = 1
pipeline.add_module(inj_module)
module = calc.CorrectIlluminationCalculate()
module.module_num = 2
pipeline.add_module(module)
module.image_name.value = image_name
module.illumination_image_name.value = "OutputImage"
module.intensity_choice.value = calc.IC_REGULAR
module.each_or_all.value = calc.EA_EACH
module.smoothing_method.value = calc.SM_GAUSSIAN_FILTER
module.automatic_object_width.value = calc.FI_MANUALLY
module.size_of_smoothing_filter.value = 10
module.rescale_option.value = cps.NO
module.dilate_objects.value = False
measurements = cpm.Measurements()
image_set_list = cpi.ImageSetList()
workspace = cpw.Workspace(pipeline, None, None, None,
measurements, image_set_list)
pipeline.prepare_run(workspace)
inj_module.prepare_group(workspace, {}, [1])
module.prepare_group(workspace, {}, [1])
image_set = image_set_list.get_image_set(0)
object_set = cpo.ObjectSet()
workspace = cpw.Workspace(pipeline,
inj_module,
image_set,
object_set,
measurements,
image_set_list)
inj_module.run(workspace)
module.run(workspace)
image = image_set.get_image("OutputImage")
ipd = image.pixel_data[40:61,40:61]
expected_image = expected_image[40:61,40:61]
        self.assertTrue(np.all(np.abs(ipd / ipd.mean() -
                                      expected_image / expected_image.mean())
                               < .001))
def test_03_03_median_filter(self):
"""Test median filtering of a point"""
input_image = np.zeros((101,101))
input_image[50,50] = 1
image_name = "InputImage"
expected_image = np.zeros((101,101))
filter_distance = int(.5 + 10/2.35)
expected_image[-filter_distance:filter_distance+1,
-filter_distance:filter_distance+1] = 1
pipeline = cpp.Pipeline()
pipeline.add_listener(self.error_callback)
inj_module = inj.InjectImage(image_name, input_image)
inj_module.module_num = 1
pipeline.add_module(inj_module)
module = calc.CorrectIlluminationCalculate()
module.module_num = 2
pipeline.add_module(module)
module.image_name.value = image_name
module.illumination_image_name.value = "OutputImage"
module.intensity_choice.value = calc.IC_REGULAR
        module.each_or_all.value = calc.EA_EACH
module.smoothing_method.value = calc.SM_MEDIAN_FILTER
module.automatic_object_width.value = calc.FI_MANUALLY
module.size_of_smoothing_filter.value = 10
module.rescale_option.value = cps.NO
module.dilate_objects.value = False
measurements = cpm.Measurements()
image_set_list = cpi.ImageSetList()
workspace = cpw.Workspace(pipeline, None, None, None,
measurements, image_set_list)
pipeline.prepare_run(workspace)
inj_module.prepare_group(workspace, {}, [1])
module.prepare_group(workspace, {}, [1])
image_set = image_set_list.get_image_set(0)
object_set = cpo.ObjectSet()
workspace = cpw.Workspace(pipeline,
inj_module,
image_set,
object_set,
measurements,
image_set_list)
inj_module.run(workspace)
module.run(workspace)
image = image_set.get_image("OutputImage")
self.assertTrue(np.all(image.pixel_data == expected_image))
def test_03_04_smooth_to_average(self):
"""Test smoothing to an average value"""
np.random.seed(0)
input_image = np.random.uniform(size=(10,10)).astype(np.float32)
image_name = "InputImage"
expected_image = np.ones((10,10))*input_image.mean()
pipeline = cpp.Pipeline()
pipeline.add_listener(self.error_callback)
inj_module = inj.InjectImage(image_name, input_image)
inj_module.module_num = 1
pipeline.add_module(inj_module)
module = calc.CorrectIlluminationCalculate()
module.module_num = 2
pipeline.add_module(module)
module.image_name.value = image_name
module.illumination_image_name.value = "OutputImage"
module.intensity_choice.value = calc.IC_REGULAR
        module.each_or_all.value = calc.EA_EACH
module.smoothing_method.value = calc.SM_TO_AVERAGE
module.automatic_object_width.value = calc.FI_MANUALLY
module.size_of_smoothing_filter.value = 10
module.rescale_option.value = cps.NO
module.dilate_objects.value = False
measurements = cpm.Measurements()
image_set_list = cpi.ImageSetList()
workspace = cpw.Workspace(pipeline, None, None, None,
measurements, image_set_list)
pipeline.prepare_run(workspace)
inj_module.prepare_group(workspace, {}, [1])
module.prepare_group(workspace, {}, [1])
image_set = image_set_list.get_image_set(0)
object_set = cpo.ObjectSet()
workspace = cpw.Workspace(pipeline,
inj_module,
image_set,
object_set,
measurements,
image_set_list)
inj_module.run(workspace)
module.run(workspace)
image = image_set.get_image("OutputImage")
np.testing.assert_almost_equal(image.pixel_data, expected_image)
def test_03_05_splines(self):
for automatic, bg_mode, spline_points, threshold, convergence, offset, hi, lo, succeed in (
(True, calc.MODE_AUTO, 5, 2, .001, 0, True, False, True),
(True, calc.MODE_AUTO, 5, 2, .001, .7, False, True, True),
(True, calc.MODE_AUTO, 5, 2, .001, .5, True, True, True),
(False, calc.MODE_AUTO, 5, 2, .001, 0, True, False, True),
(False, calc.MODE_AUTO, 5, 2, .001, .7, False, True, True),
(False, calc.MODE_AUTO, 5, 2, .001, .5, True, True, True),
(False, calc.MODE_BRIGHT, 5, 2, .001, .7, False, True, True),
(False, calc.MODE_DARK, 5, 2, .001, 0, True, False, True),
(False, calc.MODE_GRAY, 5, 2, .001, .5, True, True, True),
(False, calc.MODE_AUTO, 7, 2, .001, 0, True, False, True),
(False, calc.MODE_AUTO, 4, 2, .001, 0, True, False, True),
(False, calc.MODE_DARK, 5, 2, .001, .7, False, True, False),
(False, calc.MODE_BRIGHT, 5, 2, .001, 0, True, False, False)
):
#
# Make an image with a random background
#
np.random.seed(35)
image = np.random.uniform(size=(21,31)) * .05 + offset
if hi:
#
# Add some "foreground" pixels
#
fg = np.random.permutation(400)[:100]
image[fg % image.shape[0], (fg / image.shape[0]).astype(int)] *= 10
if lo:
#
# Add some "background" pixels
#
bg = np.random.permutation(400)[:100]
image[bg % image.shape[0], (bg / image.shape[0]).astype(int)] -= offset
#
# Make a background function
#
ii, jj = np.mgrid[-10:11,-15:16]
bg = ((ii.astype(float) / 10) ** 2) * ((jj.astype(float) / 15) ** 2)
bg *= .2
image += bg
workspaces, module = self.make_workspaces(((image, None),))
self.assertTrue(isinstance(module, calc.CorrectIlluminationCalculate))
module.intensity_choice.value = calc.IC_BACKGROUND
module.each_or_all.value = calc.EA_EACH
module.rescale_option.value = cps.NO
module.smoothing_method.value = calc.SM_SPLINES
module.automatic_splines.value = automatic
module.spline_bg_mode.value = bg_mode
module.spline_convergence.value = convergence
module.spline_threshold.value = threshold
module.spline_points.value = spline_points
module.spline_rescale.value = 1
module.prepare_group(workspaces[0], {}, [1])
module.run(workspaces[0])
img = workspaces[0].image_set.get_image(OUTPUT_IMAGE_NAME)
pixel_data = img.pixel_data
diff = pixel_data - np.min(pixel_data) - bg
if succeed:
self.assertTrue(np.all(diff < .05))
else:
self.assertFalse(np.all(diff < .05))
def test_03_06_splines_scaled(self):
#
# Make an image with a random background
#
np.random.seed(36)
image = np.random.uniform(size=(101,131)) * .05
#
# Add some "foreground" pixels
#
fg = np.random.permutation(np.prod(image.shape))[:200]
image[fg % image.shape[0], (fg / image.shape[0]).astype(int)] *= 15
#
# Make a background function
#
ii, jj = np.mgrid[-50:51,-65:66]
bg = ((ii.astype(float) / 10) ** 2) * ((jj.astype(float) / 15) ** 2)
bg *= .2
image += bg
workspaces, module = self.make_workspaces(((image, None),))
self.assertTrue(isinstance(module, calc.CorrectIlluminationCalculate))
module.intensity_choice.value = calc.IC_BACKGROUND
module.each_or_all.value = calc.EA_EACH
module.rescale_option.value = cps.NO
module.smoothing_method.value = calc.SM_SPLINES
module.automatic_splines.value = False
module.spline_rescale.value = 2
module.prepare_group(workspaces[0], {}, [1])
module.run(workspaces[0])
img = workspaces[0].image_set.get_image(OUTPUT_IMAGE_NAME)
pixel_data = img.pixel_data
diff = pixel_data - np.min(pixel_data) - bg
        self.assertTrue(np.all(diff < .05))
def test_03_07_splines_masked(self):
#
# Make an image with a random background
#
np.random.seed(37)
image = np.random.uniform(size=(21,31)) * .05
#
# Mask 1/2 of the pixels
#
mask = np.random.uniform(size=(21,31)) < .5
#
# Make a background function
#
ii, jj = np.mgrid[-10:11,-15:16]
bg = ((ii.astype(float) / 10) ** 2) * ((jj.astype(float) / 15) ** 2)
bg *= .2
image += bg
#
# Offset the background within the mask
#
image[~mask] += bg[~mask]
workspaces, module = self.make_workspaces(((image, mask),))
self.assertTrue(isinstance(module, calc.CorrectIlluminationCalculate))
module.intensity_choice.value = calc.IC_BACKGROUND
module.each_or_all.value = calc.EA_EACH
module.rescale_option.value = cps.NO
module.smoothing_method.value = calc.SM_SPLINES
module.automatic_splines.value = True
module.prepare_group(workspaces[0], {}, [1])
module.run(workspaces[0])
img = workspaces[0].image_set.get_image(OUTPUT_IMAGE_NAME)
pixel_data = img.pixel_data
diff = pixel_data - np.min(pixel_data) - bg
self.assertTrue(np.all(diff < .05))
#
# Make sure test fails w/o mask
#
workspaces, module = self.make_workspaces(((image, None),))
self.assertTrue(isinstance(module, calc.CorrectIlluminationCalculate))
module.intensity_choice.value = calc.IC_BACKGROUND
module.each_or_all.value = calc.EA_EACH
module.rescale_option.value = cps.NO
module.smoothing_method.value = calc.SM_SPLINES
module.automatic_splines.value = True
module.prepare_group(workspaces[0], {}, [1])
module.run(workspaces[0])
img = workspaces[0].image_set.get_image(OUTPUT_IMAGE_NAME)
pixel_data = img.pixel_data
diff = pixel_data - np.min(pixel_data) - bg
self.assertFalse(np.all(diff < .05))
    def test_03_08_splines_cropped(self):
#
# Make an image with a random background
#
np.random.seed(37)
image = np.random.uniform(size=(21,31)) * .05
#
# Mask 1/2 of the pixels
#
mask = np.zeros(image.shape, bool)
mask[4:-4,6:-6] = True
#
# Make a background function
#
ii, jj = np.mgrid[-10:11,-15:16]
bg = ((ii.astype(float) / 10) ** 2) * ((jj.astype(float) / 15) ** 2)
bg *= .2
image += bg
#
# Offset the background within the mask
#
image[~mask] += bg[~mask]
workspaces, module = self.make_workspaces(((image, mask),))
self.assertTrue(isinstance(module, calc.CorrectIlluminationCalculate))
module.intensity_choice.value = calc.IC_BACKGROUND
module.each_or_all.value = calc.EA_EACH
module.rescale_option.value = cps.NO
module.smoothing_method.value = calc.SM_SPLINES
module.automatic_splines.value = True
module.prepare_group(workspaces[0], {}, [1])
module.run(workspaces[0])
img = workspaces[0].image_set.get_image(OUTPUT_IMAGE_NAME)
pixel_data = img.pixel_data
diff = pixel_data - np.min(pixel_data) - bg
self.assertTrue(np.all(diff < .05))
#
# Make sure test fails w/o mask
#
workspaces, module = self.make_workspaces(((image, None),))
self.assertTrue(isinstance(module, calc.CorrectIlluminationCalculate))
module.intensity_choice.value = calc.IC_BACKGROUND
module.each_or_all.value = calc.EA_EACH
module.rescale_option.value = cps.NO
module.smoothing_method.value = calc.SM_SPLINES
module.automatic_splines.value = True
module.prepare_group(workspaces[0], {}, [1])
module.run(workspaces[0])
img = workspaces[0].image_set.get_image(OUTPUT_IMAGE_NAME)
pixel_data = img.pixel_data
diff = pixel_data - np.min(pixel_data) - bg
self.assertFalse(np.all(diff < .05))
def test_04_01_intermediate_images(self):
"""Make sure the average and dilated image flags work"""
for average_flag, dilated_flag in ((False,False),
(False,True),
(True, False),
(True,True)):
pipeline = cpp.Pipeline()
pipeline.add_listener(self.error_callback)
inj_module = inj.InjectImage("InputImage", np.zeros((10,10)))
inj_module.module_num = 1
pipeline.add_module(inj_module)
module = calc.CorrectIlluminationCalculate()
module.module_num = 2
pipeline.add_module(module)
module.image_name.value = "InputImage"
module.illumination_image_name.value = "OutputImage"
module.save_average_image.value = average_flag
module.average_image_name.value = "AverageImage"
module.save_dilated_image.value = dilated_flag
module.dilated_image_name.value = "DilatedImage"
measurements = cpm.Measurements()
image_set_list = cpi.ImageSetList()
workspace = cpw.Workspace(pipeline, None, None, None,
measurements, image_set_list)
pipeline.prepare_run(workspace)
inj_module.prepare_group(workspace, {}, [1])
module.prepare_group(workspace, {}, [1])
image_set = image_set_list.get_image_set(0)
object_set = cpo.ObjectSet()
workspace = cpw.Workspace(pipeline,
inj_module,
image_set,
object_set,
measurements,
image_set_list)
inj_module.run(workspace)
module.run(workspace)
if average_flag:
img = image_set.get_image("AverageImage")
else:
self.assertRaises(AssertionError,
image_set.get_image,
"AverageImage")
if dilated_flag:
img = image_set.get_image("DilatedImage")
else:
self.assertRaises(AssertionError,
image_set.get_image,
"DilatedImage")
def test_05_01_rescale(self):
"""Test basic rescaling of an image with two values"""
input_image = np.ones((10,10))
input_image[0:5,:] *= .5
image_name = "InputImage"
expected_image = input_image * 2
pipeline = cpp.Pipeline()
pipeline.add_listener(self.error_callback)
inj_module = inj.InjectImage(image_name, input_image)
inj_module.module_num = 1
pipeline.add_module(inj_module)
module = calc.CorrectIlluminationCalculate()
module.module_num = 2
pipeline.add_module(module)
module.image_name.value = image_name
module.illumination_image_name.value = "OutputImage"
module.intensity_choice.value = calc.IC_REGULAR
        module.each_or_all.value = calc.EA_EACH
module.smoothing_method.value = calc.SM_NONE
module.automatic_object_width.value = calc.FI_MANUALLY
module.size_of_smoothing_filter.value = 10
module.rescale_option.value = cps.YES
module.dilate_objects.value = False
measurements = cpm.Measurements()
image_set_list = cpi.ImageSetList()
workspace = cpw.Workspace(pipeline, None, None, None,
measurements, image_set_list)
pipeline.prepare_run(workspace)
inj_module.prepare_group(workspace, {}, [1])
module.prepare_group(workspace, {}, [1])
image_set = image_set_list.get_image_set(0)
object_set = cpo.ObjectSet()
workspace = cpw.Workspace(pipeline,
inj_module,
image_set,
object_set,
measurements,
image_set_list)
inj_module.run(workspace)
module.run(workspace)
image = image_set.get_image("OutputImage")
self.assertTrue(np.all(image.pixel_data == expected_image))
def test_05_02_rescale_outlier(self):
"""Test rescaling with one low outlier"""
input_image = np.ones((10,10))
input_image[0:5,:] *= .5
input_image[0,0] = .1
image_name = "InputImage"
expected_image = input_image * 2
expected_image[0,0] = 1
pipeline = cpp.Pipeline()
pipeline.add_listener(self.error_callback)
inj_module = inj.InjectImage(image_name, input_image)
inj_module.module_num = 1
pipeline.add_module(inj_module)
module = calc.CorrectIlluminationCalculate()
module.module_num = 2
pipeline.add_module(module)
module.image_name.value = image_name
module.illumination_image_name.value = "OutputImage"
module.intensity_choice.value = calc.IC_REGULAR
        module.each_or_all.value = calc.EA_EACH
module.smoothing_method.value = calc.SM_NONE
module.automatic_object_width.value = calc.FI_MANUALLY
module.size_of_smoothing_filter.value = 10
module.rescale_option.value = cps.YES
module.dilate_objects.value = False
measurements = cpm.Measurements()
image_set_list = cpi.ImageSetList()
workspace = cpw.Workspace(pipeline, None, None, None,
measurements, image_set_list)
pipeline.prepare_run(workspace)
inj_module.prepare_group(workspace, {}, [1])
module.prepare_group(workspace, {}, [1])
image_set = image_set_list.get_image_set(0)
object_set = cpo.ObjectSet()
workspace = cpw.Workspace(pipeline,
inj_module,
image_set,
object_set,
measurements,
image_set_list)
inj_module.run(workspace)
module.run(workspace)
image = image_set.get_image("OutputImage")
self.assertTrue(np.all(image.pixel_data == expected_image))
def test_06_01_load_matlab(self):
data = ('eJzzdQzxcXRSMNUzUPB1DNFNy8xJ1VEIyEksScsvyrVSCHAO9/TTUXAuSk0s'
'SU1RyM+zUvDNz1PwTSxSMDBUMDSxMrW0MrRQMDIwNFAgGTAwevryMzAwLGdk'
'YKiYczZkr99hA4F9S17yd4TJRk44dCxC7AiHCBvbrVWuoj4nfO9emSTs3tnL'
'Jdx/QPmjgA1Df9NltVyhRcda+OaUeL37U3s/Mv13FMOHUPYVJ/Odd/Fpr3bb'
'OO2DgVziuc5s9lCDBwan6j3klecv4Dya7MLKl5Bb+O/a3I2/xfP3lhxf1vRI'
'rmhSQqbtQ58N8l/SDQ2j5CawLlP+1KWYa5jTMd/TYYb0R+W/OWWx0z/63J32'
'xTX1Mrvucv6zLnZH4g+w5T958F3oR5nI/SCtOdo3F7ecq2z0U158uaP0V9Pq'
'D68l6yT4N+pqfJr+1Zq1Rvfo9WkVovPmPXpZcC3wcWjQHi6bU5uDHkpqzmM0'
'PzFr+tv3DRUzhMRXz/ns2CZ/zDaNjS+5Rk+e2+Hn7yJNi2IB9bAp4Rdvnn/R'
'8tHUOPaYr+CD/6s/r3v77e/Tq6p8mza+NX648vUWY6u3U/o872h+i+qs/ft1'
'9+q/b7ye826b711k1/LD0fHuYp+7Bu+M7h8Xi+8zfXSK+/yd5XqLpskEyRw+'
'vzNQ+0a73v9ZljZTf5ZFbYrby3J+wpnzj0XfP5xea3ezqV/3XD3zpczepQDs'
'fe/W')
pipeline = cpp.Pipeline()
        def callback(caller, event):
self.assertFalse(isinstance(event, cpp.LoadExceptionEvent))
pipeline.add_listener(callback)
pipeline.load(StringIO(zlib.decompress(base64.b64decode(data))))
self.assertEqual(len(pipeline.modules()), 1)
module = pipeline.modules()[0]
self.assertTrue(isinstance(module, calc.CorrectIlluminationCalculate))
self.assertEqual(module.image_name, "IllumBlue")
self.assertEqual(module.illumination_image_name, "IllumOut")
self.assertEqual(module.intensity_choice, calc.IC_REGULAR)
self.assertFalse(module.dilate_objects)
self.assertEqual(module.rescale_option, cps.YES)
self.assertEqual(module.each_or_all, calc.EA_EACH)
self.assertEqual(module.smoothing_method, calc.SM_NONE)
self.assertEqual(module.automatic_object_width, calc.FI_AUTOMATIC)
self.assertFalse(module.save_average_image)
self.assertFalse(module.save_dilated_image)
def test_06_02_load_v1(self):
data = r"""CellProfiler Pipeline: http://www.cellprofiler.org
Version:1
SVNRevision:9411
LoadImages:[module_num:1|svn_version:\'Unknown\'|variable_revision_number:4|show_window:True|notes:\x5B\x5D]
What type of files are you loading?:individual images
How do you want to load these files?:Text-Exact match
How many images are there in each group?:3
Type the text that the excluded images have in common:Do not use
Analyze all subfolders within the selected folder?:No
Image location:Default Image Folder
Enter the full path to the images:.
Do you want to check image sets for missing or duplicate files?:No
Do you want to group image sets by metadata?:No
Do you want to exclude certain files?:No
What metadata fields do you want to group by?:
Type the text that these images have in common (case-sensitive):D.TIF
What do you want to call this image in CellProfiler?:Image1
What is the position of this image in each group?:D.TIF
Do you want to extract metadata from the file name, the subfolder path or both?:None
Type the regular expression that finds metadata in the file name\x3A:None
Type the regular expression that finds metadata in the subfolder path\x3A:None
CorrectIlluminationCalculate:[module_num:2|svn_version:\'9401\'|variable_revision_number:1|show_window:True|notes:\x5B\x5D]
Select the input image:Image1
Name the output image:Illum1
Select how the illumination function is calculated:Regular
Dilate objects in the final averaged image?:No
Dilation radius:1
Block size:60
Rescale the illumination function?:Yes
Calculate function for each image individually, or based on all images?:All
Smoothing method:No smoothing
Method to calculate smoothing filter size:Automatic
Approximate object size:10
Smoothing filter size:10
Retain the averaged image for use later in the pipeline (for example, in SaveImages)?:Yes
Name the averaged image:Illum1Average
Retain the dilated image for use later in the pipeline (for example, in SaveImages)?:Yes
Name the dilated image:Illum1Dilated
CorrectIlluminationCalculate:[module_num:3|svn_version:\'9401\'|variable_revision_number:1|show_window:True|notes:\x5B\x5D]
Select the input image:Image2
Name the output image:Illum2
Select how the illumination function is calculated:Background
Dilate objects in the final averaged image?:Yes
Dilation radius:2
Block size:65
Rescale the illumination function?:No
Calculate function for each image individually, or based on all images?:All\x3A First cycle
Smoothing method:Median Filter
Method to calculate smoothing filter size:Manually
Approximate object size:15
Smoothing filter size:20
Retain the averaged image for use later in the pipeline (for example, in SaveImages)?:Yes
Name the averaged image:Illum2Avg
Retain the dilated image for use later in the pipeline (for example, in SaveImages)?:Yes
Name the dilated image:Illum2Dilated
CorrectIlluminationCalculate:[module_num:4|svn_version:\'9401\'|variable_revision_number:1|show_window:True|notes:\x5B\x5D]
Select the input image:Image3
Name the output image:Illum3
Select how the illumination function is calculated:Regular
Dilate objects in the final averaged image?:No
Dilation radius:1
Block size:60
Rescale the illumination function?:Median
Calculate function for each image individually, or based on all images?:All\x3A Across cycles
Smoothing method:Median Filter
Method to calculate smoothing filter size:Automatic
Approximate object size:10
Smoothing filter size:10
Retain the averaged image for use later in the pipeline (for example, in SaveImages)?:No
Name the averaged image:Illum3Avg
Retain the dilated image for use later in the pipeline (for example, in SaveImages)?:Yes
Name the dilated image:Illum3Dilated
CorrectIlluminationCalculate:[module_num:5|svn_version:\'9401\'|variable_revision_number:1|show_window:True|notes:\x5B\x5D]
Select the input image:Image4
Name the output image:Illum4
Select how the illumination function is calculated:Regular
Dilate objects in the final averaged image?:No
Dilation radius:1
Block size:60
Rescale the illumination function?:Median
Calculate function for each image individually, or based on all images?:Each
Smoothing method:Gaussian Filter
Method to calculate smoothing filter size:Object size
Approximate object size:15
Smoothing filter size:10
Retain the averaged image for use later in the pipeline (for example, in SaveImages)?:No
Name the averaged image:Illum4Avg
Retain the dilated image for use later in the pipeline (for example, in SaveImages)?:Yes
Name the dilated image:Illum4Dilated
CorrectIlluminationCalculate:[module_num:6|svn_version:\'9401\'|variable_revision_number:1|show_window:True|notes:\x5B\x5D]
Select the input image:Image5
Name the output image:Illum5
Select how the illumination function is calculated:Regular
Dilate objects in the final averaged image?:No
Dilation radius:1
Block size:60
Rescale the illumination function?:Median
Calculate function for each image individually, or based on all images?:All
Smoothing method:Smooth to Average
Method to calculate smoothing filter size:Object size
Approximate object size:15
Smoothing filter size:10
Retain the averaged image for use later in the pipeline (for example, in SaveImages)?:No
Name the averaged image:Illum5Avg
Retain the dilated image for use later in the pipeline (for example, in SaveImages)?:No
Name the dilated image:Illum5Dilated
"""
pipeline = cpp.Pipeline()
        def callback(caller, event):
self.assertFalse(isinstance(event, cpp.LoadExceptionEvent))
pipeline.add_listener(callback)
pipeline.load(StringIO(data))
self.assertEqual(len(pipeline.modules()), 6)
for i, (image_name, illumination_image_name, intensity_choice,
dilate_objects, object_dilation_radius, block_size,
rescale_option, each_or_all, smoothing_method,
automatic_object_width, object_width, size_of_smoothing_filter,
save_average_image, average_image_name, save_dilated_image,
dilated_image_name) in enumerate((
("Image1", "Illum1", calc.IC_REGULAR, False, 1, 60, cps.YES,
calc.EA_ALL_FIRST, calc.SM_NONE, calc.FI_AUTOMATIC, 10, 10, True,
"Illum1Average", True, "Illum1Dilated"),
("Image2", "Illum2", calc.IC_BACKGROUND, True, 2, 65, cps.NO,
calc.EA_ALL_FIRST, calc.SM_MEDIAN_FILTER, calc.FI_MANUALLY, 15, 20,
True, "Illum2Avg", True, "Illum2Dilated"),
("Image3", "Illum3", calc.IC_REGULAR, False, 1, 60,
calc.RE_MEDIAN, calc.EA_ALL_ACROSS, calc.SM_MEDIAN_FILTER,
calc.FI_AUTOMATIC, 10, 10, False, "Illum3Avg", True,
"Illum3Dilated"),
("Image4","Illum4",calc.IC_REGULAR, cps.NO, 1, 60,
calc.RE_MEDIAN, calc.EA_EACH, calc.SM_GAUSSIAN_FILTER,
calc.FI_OBJECT_SIZE, 15, 10, False, "Illum4Avg", True,
"Illum4Dilated"),
("Image5", "Illum5", calc.IC_REGULAR, cps.NO, 1, 60,
calc.RE_MEDIAN, calc.EA_ALL_ACROSS, calc.SM_TO_AVERAGE,
calc.FI_OBJECT_SIZE, 15, 10, False, "Illum5Avg",
False, "Illum5Dilated"))):
module = pipeline.modules()[i+1]
self.assertTrue(isinstance(module, calc.CorrectIlluminationCalculate))
self.assertEqual(module.image_name, image_name)
self.assertEqual(module.illumination_image_name, illumination_image_name)
self.assertEqual(module.intensity_choice, intensity_choice)
self.assertEqual(module.dilate_objects, dilate_objects)
self.assertEqual(module.object_dilation_radius, object_dilation_radius)
self.assertEqual(module.block_size, block_size)
self.assertEqual(module.rescale_option, rescale_option)
self.assertEqual(module.each_or_all, each_or_all)
self.assertEqual(module.smoothing_method, smoothing_method)
self.assertEqual(module.automatic_object_width, automatic_object_width)
self.assertEqual(module.object_width, object_width)
self.assertEqual(module.size_of_smoothing_filter, size_of_smoothing_filter)
self.assertEqual(module.save_average_image, save_average_image)
self.assertEqual(module.average_image_name, average_image_name)
self.assertEqual(module.save_dilated_image, save_dilated_image)
self.assertEqual(module.dilated_image_name, dilated_image_name)
def test_06_03_load_v2(self):
data = r"""CellProfiler Pipeline: http://www.cellprofiler.org
Version:1
SVNRevision:10125
CorrectIlluminationCalculate:[module_num:1|svn_version:\'10063\'|variable_revision_number:2|show_window:True|notes:\x5B\x5D]
Select the input image:Masked
Name the output image:Illum
Select how the illumination function is calculated:Background
Dilate objects in the final averaged image?:No
Dilation radius:2
Block size:55
Rescale the illumination function?:No
Calculate function for each image individually, or based on all images?:Each
Smoothing method:Splines
Method to calculate smoothing filter size:Automatic
Approximate object size:11
Smoothing filter size:12
Retain the averaged image for use later in the pipeline (for example, in SaveImages)?:No
Name the averaged image:IllumAverage
Retain the dilated image for use later in the pipeline (for example, in SaveImages)?:No
Name the dilated image:IllumDilated
Automatically calculate spline parameters?:No
Background mode:bright
# of spline points:4
Background threshold:2
Image resampling factor:2
Max # of iterations:40
Convergence:0.001
CorrectIlluminationCalculate:[module_num:2|svn_version:\'10063\'|variable_revision_number:2|show_window:True|notes:\x5B\x5D]
Select the input image:Masked
Name the output image:Illum
Select how the illumination function is calculated:Background
Dilate objects in the final averaged image?:No
Dilation radius:1
Block size:60
Rescale the illumination function?:No
Calculate function for each image individually, or based on all images?:Each
Smoothing method:Splines
Method to calculate smoothing filter size:Automatic
Approximate object size:10
Smoothing filter size:10
Retain the averaged image for use later in the pipeline (for example, in SaveImages)?:No
Name the averaged image:IllumBlueAvg
Retain the dilated image for use later in the pipeline (for example, in SaveImages)?:No
Name the dilated image:IllumBlueDilated
Automatically calculate spline parameters?:Yes
Background mode:auto
# of spline points:3
Background threshold:2
Image resampling factor:2
Max # of iterations:40
Convergence:0.001
CorrectIlluminationCalculate:[module_num:3|svn_version:\'10063\'|variable_revision_number:2|show_window:True|notes:\x5B\x5D]
Select the input image:Masked
Name the output image:Illum
Select how the illumination function is calculated:Background
Dilate objects in the final averaged image?:No
Dilation radius:1
Block size:60
Rescale the illumination function?:No
Calculate function for each image individually, or based on all images?:Each
Smoothing method:Splines
Method to calculate smoothing filter size:Automatic
Approximate object size:10
Smoothing filter size:10
Retain the averaged image for use later in the pipeline (for example, in SaveImages)?:No
Name the averaged image:IllumBlueAvg
Retain the dilated image for use later in the pipeline (for example, in SaveImages)?:No
Name the dilated image:IllumBlueDilated
Automatically calculate spline parameters?:Yes
Background mode:dark
# of spline points:3
Background threshold:2
Image resampling factor:2
Max # of iterations:40
Convergence:0.001
CorrectIlluminationCalculate:[module_num:4|svn_version:\'10063\'|variable_revision_number:2|show_window:True|notes:\x5B\x5D]
Select the input image:Masked
Name the output image:Illum
Select how the illumination function is calculated:Background
Dilate objects in the final averaged image?:No
Dilation radius:1
Block size:60
Rescale the illumination function?:No
Calculate function for each image individually, or based on all images?:Each
Smoothing method:Splines
Method to calculate smoothing filter size:Automatic
Approximate object size:10
Smoothing filter size:10
Retain the averaged image for use later in the pipeline (for example, in SaveImages)?:No
Name the averaged image:IllumBlueAvg
Retain the dilated image for use later in the pipeline (for example, in SaveImages)?:No
Name the dilated image:IllumBlueDilated
Automatically calculate spline parameters?:No
Background mode:gray
# of spline points:3
Background threshold:2
Image resampling factor:2
Max # of iterations:40
Convergence:0.001
CorrectIlluminationCalculate:[module_num:5|svn_version:\'10063\'|variable_revision_number:2|show_window:True|notes:\x5B\x5D]
Select the input image:Masked
Name the output image:Illum
Select how the illumination function is calculated:Background
Dilate objects in the final averaged image?:No
Dilation radius:1
Block size:60
Rescale the illumination function?:No
Calculate function for each image individually, or based on all images?:Each
Smoothing method:Convex Hull
Method to calculate smoothing filter size:Automatic
Approximate object size:10
Smoothing filter size:10
Retain the averaged image for use later in the pipeline (for example, in SaveImages)?:No
Name the averaged image:IllumBlueAvg
Retain the dilated image for use later in the pipeline (for example, in SaveImages)?:No
Name the dilated image:IllumBlueDilated
Automatically calculate spline parameters?:No
Background mode:gray
# of spline points:3
Background threshold:2
Image resampling factor:2
Max # of iterations:40
Convergence:0.001
"""
pipeline = cpp.Pipeline()
        def callback(caller, event):
self.assertFalse(isinstance(event, cpp.LoadExceptionEvent))
pipeline.add_listener(callback)
pipeline.load(StringIO(data))
self.assertEqual(len(pipeline.modules()), 5)
module = pipeline.modules()[0]
self.assertTrue(isinstance(module, calc.CorrectIlluminationCalculate))
self.assertEqual(module.image_name, "Masked")
self.assertEqual(module.illumination_image_name, "Illum")
self.assertEqual(module.intensity_choice, calc.IC_BACKGROUND)
self.assertFalse(module.dilate_objects)
self.assertEqual(module.object_dilation_radius, 2)
self.assertEqual(module.block_size, 55)
self.assertEqual(module.rescale_option, cps.NO)
self.assertEqual(module.each_or_all, calc.EA_EACH)
self.assertEqual(module.smoothing_method, calc.SM_SPLINES)
self.assertEqual(module.automatic_object_width, calc.FI_AUTOMATIC)
self.assertEqual(module.object_width, 11)
self.assertEqual(module.size_of_smoothing_filter, 12)
self.assertFalse(module.save_average_image)
self.assertEqual(module.average_image_name, "IllumAverage")
self.assertFalse(module.save_dilated_image)
self.assertEqual(module.dilated_image_name, "IllumDilated")
self.assertFalse(module.automatic_splines)
self.assertEqual(module.spline_bg_mode, calc.MODE_BRIGHT)
self.assertEqual(module.spline_points, 4)
self.assertEqual(module.spline_threshold, 2)
self.assertEqual(module.spline_rescale, 2)
self.assertEqual(module.spline_maximum_iterations, 40)
self.assertAlmostEqual(module.spline_convergence.value, 0.001)
self.assertTrue(pipeline.modules()[1].automatic_splines)
for module, spline_bg_mode in zip(pipeline.modules()[1:4], (
calc.MODE_AUTO, calc.MODE_DARK, calc.MODE_GRAY)):
self.assertTrue(isinstance(module, calc.CorrectIlluminationCalculate))
self.assertEqual(module.spline_bg_mode, spline_bg_mode)
module = pipeline.modules()[4]
self.assertEqual(module.smoothing_method, calc.SM_CONVEX_HULL)
|
LeeKamentsky/CellProfiler
|
cellprofiler/modules/tests/test_correctilluminationcalculate.py
|
Python
|
gpl-2.0
| 63,437
|
[
"Gaussian"
] |
2d3a1e47f075896bb468f75487d0a66f1179e07ef1ad191d9d19c10dd212caac
|
import six
import chainer
import chainer.functions as F
from chainer.functions.loss.vae import gaussian_kl_divergence
import chainer.links as L
class VAE(chainer.Chain):
"""Variational AutoEncoder"""
def __init__(self, n_in, n_latent, n_h):
super(VAE, self).__init__(
# encoder
le1=L.Linear(n_in, n_h),
le2_mu=L.Linear(n_h, n_latent),
le2_ln_var=L.Linear(n_h, n_latent),
# decoder
ld1=L.Linear(n_latent, n_h),
ld2=L.Linear(n_h, n_in),
)
def __call__(self, x, sigmoid=True):
"""AutoEncoder"""
return self.decode(self.encode(x)[0], sigmoid)
def encode(self, x):
h1 = F.tanh(self.le1(x))
mu = self.le2_mu(h1)
ln_var = self.le2_ln_var(h1) # log(sigma**2)
return mu, ln_var
def decode(self, z, sigmoid=True):
h1 = F.tanh(self.ld1(z))
h2 = self.ld2(h1)
if sigmoid:
return F.sigmoid(h2)
else:
return h2
def get_loss_func(self, C=1.0, k=1, train=True):
"""Get loss function of VAE.
The loss value is equal to ELBO (Evidence Lower Bound)
multiplied by -1.
Args:
C (int): Usually this is 1.0. Can be changed to control the
second term of ELBO bound, which works as regularization.
k (int): Number of Monte Carlo samples used in encoded vector.
train (bool): If true loss_function is used for training.
"""
def lf(x):
mu, ln_var = self.encode(x)
batchsize = len(mu.data)
# reconstruction loss
rec_loss = 0
for l in six.moves.range(k):
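                # Reparameterization trick: F.gaussian returns
                # z = mu + exp(ln_var / 2) * eps with eps ~ N(0, I), so the
                # sample stays differentiable w.r.t. mu and ln_var.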
z = F.gaussian(mu, ln_var)
rec_loss += F.bernoulli_nll(x, self.decode(z, sigmoid=False)) \
/ (k * batchsize)
self.rec_loss = rec_loss
self.loss = self.rec_loss + \
C * gaussian_kl_divergence(mu, ln_var) / batchsize
return self.loss
return lf
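# Minimal smoke-test sketch (an illustration, not part of the original
# example): it shows how the closure returned by get_loss_func() plugs into
# a standard Chainer optimizer. The random batch and the layer sizes below
# are illustrative assumptions.
if __name__ == '__main__':
    import numpy as np
    model = VAE(n_in=784, n_latent=20, n_h=500)
    optimizer = chainer.optimizers.Adam()
    optimizer.setup(model)
    # A fake batch of 16 flattened 28x28 "images" in [0, 1).
    x = np.random.uniform(size=(16, 784)).astype(np.float32)
    # update() evaluates the loss, backpropagates and applies one Adam step.
    optimizer.update(model.get_loss_func(C=1.0, k=1), x)
    print('rec_loss: %f, total loss: %f'
          % (float(model.rec_loss.data), float(model.loss.data)))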
|
benob/chainer
|
examples/vae/net.py
|
Python
|
mit
| 2,076
|
[
"Gaussian"
] |
620ca5f6f6d19fc4f88df2dc9c6f6d8c7b5b0879fe132a0e483229b1305cde22
|
# Copyright 2011-2015 Free Software Foundation, Inc.
#
# This is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see
# <http://www.gnu.org/licenses/>.
# This is a GCC plugin that computes some exception-handling data for
# gdb. This data can then be summarized and checked by the
# exsummary.py script.
# To use:
# * First, install the GCC Python plugin. See
# https://fedorahosted.org/gcc-python-plugin/
# * export PYTHON_PLUGIN=/full/path/to/plugin/directory
# This should be the directory holding "python.so".
# * cd build/gdb; make mostlyclean
# * make CC=.../gcc-with-excheck
# This will write a number of .py files in the build directory.
# * python .../exsummary.py
# This will show the violations.
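# For orientation, the facts emitted below take forms like
#   define_function('some_func', 'file.c:123')
#   function_call('callee', 'caller', 'file.c:456')
#   has_indirect_call('caller', 'file.c:789')
# plus declare_nothrow, declare_cleanup and declare_throw entries;
# exsummary.py executes these generated files to build its global view.
# (The concrete names above are illustrative.)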
import gcc
import gccutils
import sys
# Where our output goes.
output_file = None
# Cleanup functions require special treatment, because they take a
# function argument, but in theory the function must be nothrow.
cleanup_functions = {
'make_cleanup': 1,
'make_cleanup_dtor': 1,
'make_final_cleanup': 1,
'make_my_cleanup2': 1,
'make_my_cleanup': 1
}
# Functions which may throw but which we want to ignore.
ignore_functions = {
# This one is super special.
'exceptions_state_mc': 1,
# gdb generally pretends that internal_error cannot throw, even
# though it can.
'internal_error': 1,
# do_cleanups and friends are supposedly nothrow but we don't want
# to run afoul of the indirect function call logic.
'do_cleanups': 1,
'do_final_cleanups': 1
}
# Functions which take a function argument, but which are not
# interesting, usually because the argument is not called in the
# current context.
non_passthrough_functions = {
'signal': 1,
'add_internal_function': 1
}
# Return True if the type is from Python.
def type_is_pythonic(t):
if isinstance(t, gcc.ArrayType):
t = t.type
if not isinstance(t, gcc.RecordType):
return False
# Hack.
return str(t).find('struct Py') == 0
# Examine all the fields of a struct. We don't currently need any
# sort of recursion, so this is simple for now.
def examine_struct_fields(initializer):
global output_file
for idx2, value2 in initializer.elements:
if isinstance(idx2, gcc.Declaration):
if isinstance(value2, gcc.AddrExpr):
value2 = value2.operand
if isinstance(value2, gcc.FunctionDecl):
output_file.write("declare_nothrow(%s)\n"
% repr(str(value2.name)))
# Examine all global variables looking for pointers to functions in
# structures whose types were defined by Python.
def examine_globals():
global output_file
vars = gcc.get_variables()
for var in vars:
if not isinstance(var.decl, gcc.VarDecl):
continue
output_file.write("################\n")
output_file.write("# Analysis for %s\n" % var.decl.name)
if not var.decl.initial:
continue
if not type_is_pythonic(var.decl.type):
continue
if isinstance(var.decl.type, gcc.ArrayType):
for idx, value in var.decl.initial.elements:
examine_struct_fields(value)
else:
gccutils.check_isinstance(var.decl.type, gcc.RecordType)
examine_struct_fields(var.decl.initial)
# Called at the end of compilation to write out some data derived from
# globals and to close the output.
def close_output(*args):
global output_file
examine_globals()
output_file.close()
# The pass which derives some exception-checking information. We take
# a two-step approach: first we get a call graph from the compiler.
# This is emitted by the plugin as Python code. Then, we run a second
# program that reads all the generated Python and uses it to get a
# global view of exception routes in gdb.
class GdbExceptionChecker(gcc.GimplePass):
def __init__(self, output_file):
gcc.GimplePass.__init__(self, 'gdb_exception_checker')
self.output_file = output_file
def log(self, obj):
self.output_file.write("# %s\n" % str(obj))
# Return true if FN is a call to a method on a Python object.
# We know these cannot throw in the gdb sense.
def fn_is_python_ignorable(self, fn):
if not isinstance(fn, gcc.SsaName):
return False
stmt = fn.def_stmt
if not isinstance(stmt, gcc.GimpleAssign):
return False
if stmt.exprcode is not gcc.ComponentRef:
return False
rhs = stmt.rhs[0]
if not isinstance(rhs, gcc.ComponentRef):
return False
if not isinstance(rhs.field, gcc.FieldDecl):
return False
return rhs.field.name == 'tp_dealloc' or rhs.field.name == 'tp_free'
# Decode a function call and write something to the output.
# THIS_FUN is the enclosing function that we are processing.
# FNDECL is the call to process; it might not actually be a DECL
# node.
# LOC is the location of the call.
def handle_one_fndecl(self, this_fun, fndecl, loc):
callee_name = ''
if isinstance(fndecl, gcc.AddrExpr):
fndecl = fndecl.operand
if isinstance(fndecl, gcc.FunctionDecl):
# Ordinary call to a named function.
callee_name = str(fndecl.name)
self.output_file.write("function_call(%s, %s, %s)\n"
% (repr(callee_name),
repr(this_fun.decl.name),
repr(str(loc))))
elif self.fn_is_python_ignorable(fndecl):
# Call to tp_dealloc.
pass
elif (isinstance(fndecl, gcc.SsaName)
and isinstance(fndecl.var, gcc.ParmDecl)):
# We can ignore an indirect call via a parameter to the
# current function, because this is handled via the rule
# for passthrough functions.
pass
else:
# Any other indirect call.
self.output_file.write("has_indirect_call(%s, %s)\n"
% (repr(this_fun.decl.name),
repr(str(loc))))
return callee_name
# This does most of the work for examine_one_bb.
# THIS_FUN is the enclosing function.
# BB is the basic block to process.
# Returns True if this block is the header of a TRY_CATCH, False
# otherwise.
def examine_one_bb_inner(self, this_fun, bb):
if not bb.gimple:
return False
try_catch = False
for stmt in bb.gimple:
loc = stmt.loc
if not loc:
loc = this_fun.decl.location
if not isinstance(stmt, gcc.GimpleCall):
continue
callee_name = self.handle_one_fndecl(this_fun, stmt.fn, loc)
if callee_name == 'exceptions_state_mc_action_iter':
try_catch = True
global non_passthrough_functions
if callee_name in non_passthrough_functions:
continue
# We have to specially handle calls where an argument to
# the call is itself a function, e.g., qsort. In general
# we model these as "passthrough" -- we assume that in
# addition to the call the qsort there is also a call to
# the argument function.
for arg in stmt.args:
# We are only interested in arguments which are functions.
t = arg.type
if isinstance(t, gcc.PointerType):
t = t.dereference
if not isinstance(t, gcc.FunctionType):
continue
if isinstance(arg, gcc.AddrExpr):
arg = arg.operand
global cleanup_functions
if callee_name in cleanup_functions:
if not isinstance(arg, gcc.FunctionDecl):
gcc.inform(loc, 'cleanup argument not a DECL: %s' % repr(arg))
else:
# Cleanups must be nothrow.
self.output_file.write("declare_cleanup(%s)\n"
% repr(str(arg.name)))
else:
# Assume we have a passthrough function, like
# qsort or an iterator. We model this by
# pretending there is an ordinary call at this
# point.
self.handle_one_fndecl(this_fun, arg, loc)
return try_catch
# Examine all the calls in a basic block and generate output for
# them.
# THIS_FUN is the enclosing function.
# BB is the basic block to examine.
# BB_WORKLIST is a list of basic blocks to work on; we add the
# appropriate successor blocks to this.
# SEEN_BBS is a map whose keys are basic blocks we have already
# processed. We use this to ensure that we only visit a given
# block once.
def examine_one_bb(self, this_fun, bb, bb_worklist, seen_bbs):
try_catch = self.examine_one_bb_inner(this_fun, bb)
for edge in bb.succs:
if edge.dest in seen_bbs:
continue
seen_bbs[edge.dest] = 1
if try_catch:
# This is bogus, but we magically know the right
# answer.
if edge.false_value:
bb_worklist.append(edge.dest)
else:
bb_worklist.append(edge.dest)
# Iterate over all basic blocks in THIS_FUN.
def iterate_bbs(self, this_fun):
# Iteration must be in control-flow order, because if we see a
# TRY_CATCH construct we need to drop all the contained blocks.
bb_worklist = [this_fun.cfg.entry]
seen_bbs = {}
seen_bbs[this_fun.cfg.entry] = 1
for bb in bb_worklist:
self.examine_one_bb(this_fun, bb, bb_worklist, seen_bbs)
def execute(self, fun):
if fun and fun.cfg and fun.decl:
self.output_file.write("################\n")
self.output_file.write("# Analysis for %s\n" % fun.decl.name)
self.output_file.write("define_function(%s, %s)\n"
% (repr(fun.decl.name),
repr(str(fun.decl.location))))
global ignore_functions
if fun.decl.name not in ignore_functions:
self.iterate_bbs(fun)
def main(**kwargs):
global output_file
output_file = open(gcc.get_dump_base_name() + '.gdb_exc.py', 'w')
# We used to use attributes here, but there didn't seem to be a
# big benefit over hard-coding.
output_file.write('declare_throw("throw_exception")\n')
output_file.write('declare_throw("throw_verror")\n')
output_file.write('declare_throw("throw_vfatal")\n')
output_file.write('declare_throw("throw_error")\n')
gcc.register_callback(gcc.PLUGIN_FINISH_UNIT, close_output)
ps = GdbExceptionChecker(output_file)
ps.register_after('ssa')
main()
|
armoredsoftware/protocol
|
measurer/gdb-7.9/gdb/contrib/excheck.py
|
Python
|
bsd-3-clause
| 11,670
|
[
"VisIt"
] |
83a9c674673e79aa1d9bdd0199e6fac3c49b6ff8834e11ecbc64bb66198d8ed4
|
# -*- coding: utf-8 -*-
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import mock
import packaging.version
import grpc
from grpc.experimental import aio
import math
import pytest
from proto.marshal.rules.dates import DurationRule, TimestampRule
from google.api_core import client_options
from google.api_core import exceptions as core_exceptions
from google.api_core import future
from google.api_core import gapic_v1
from google.api_core import grpc_helpers
from google.api_core import grpc_helpers_async
from google.api_core import operation_async # type: ignore
from google.api_core import operations_v1
from google.auth import credentials as ga_credentials
from google.auth.exceptions import MutualTLSChannelError
from google.cloud.aiplatform_v1beta1.services.index_endpoint_service import (
IndexEndpointServiceAsyncClient,
)
from google.cloud.aiplatform_v1beta1.services.index_endpoint_service import (
IndexEndpointServiceClient,
)
from google.cloud.aiplatform_v1beta1.services.index_endpoint_service import pagers
from google.cloud.aiplatform_v1beta1.services.index_endpoint_service import transports
from google.cloud.aiplatform_v1beta1.services.index_endpoint_service.transports.base import (
_GOOGLE_AUTH_VERSION,
)
from google.cloud.aiplatform_v1beta1.types import index_endpoint
from google.cloud.aiplatform_v1beta1.types import index_endpoint as gca_index_endpoint
from google.cloud.aiplatform_v1beta1.types import index_endpoint_service
from google.cloud.aiplatform_v1beta1.types import machine_resources
from google.cloud.aiplatform_v1beta1.types import operation as gca_operation
from google.longrunning import operations_pb2
from google.oauth2 import service_account
from google.protobuf import field_mask_pb2 # type: ignore
from google.protobuf import timestamp_pb2 # type: ignore
import google.auth
# TODO(busunkim): Once google-auth >= 1.25.0 is required transitively
# through google-api-core:
# - Delete the auth "less than" test cases
# - Delete these pytest markers (Make the "greater than or equal to" tests the default).
requires_google_auth_lt_1_25_0 = pytest.mark.skipif(
packaging.version.parse(_GOOGLE_AUTH_VERSION) >= packaging.version.parse("1.25.0"),
reason="This test requires google-auth < 1.25.0",
)
requires_google_auth_gte_1_25_0 = pytest.mark.skipif(
packaging.version.parse(_GOOGLE_AUTH_VERSION) < packaging.version.parse("1.25.0"),
reason="This test requires google-auth >= 1.25.0",
)
def client_cert_source_callback():
return b"cert bytes", b"key bytes"
# If default endpoint is localhost, then default mtls endpoint will be the same.
# This method modifies the default endpoint so the client can produce a different
# mtls endpoint for endpoint testing purposes.
def modify_default_endpoint(client):
return (
"foo.googleapis.com"
if ("localhost" in client.DEFAULT_ENDPOINT)
else client.DEFAULT_ENDPOINT
)
def test__get_default_mtls_endpoint():
api_endpoint = "example.googleapis.com"
api_mtls_endpoint = "example.mtls.googleapis.com"
sandbox_endpoint = "example.sandbox.googleapis.com"
sandbox_mtls_endpoint = "example.mtls.sandbox.googleapis.com"
non_googleapi = "api.example.com"
assert IndexEndpointServiceClient._get_default_mtls_endpoint(None) is None
assert (
IndexEndpointServiceClient._get_default_mtls_endpoint(api_endpoint)
== api_mtls_endpoint
)
assert (
IndexEndpointServiceClient._get_default_mtls_endpoint(api_mtls_endpoint)
== api_mtls_endpoint
)
assert (
IndexEndpointServiceClient._get_default_mtls_endpoint(sandbox_endpoint)
== sandbox_mtls_endpoint
)
assert (
IndexEndpointServiceClient._get_default_mtls_endpoint(sandbox_mtls_endpoint)
== sandbox_mtls_endpoint
)
assert (
IndexEndpointServiceClient._get_default_mtls_endpoint(non_googleapi)
== non_googleapi
)
@pytest.mark.parametrize(
"client_class", [IndexEndpointServiceClient, IndexEndpointServiceAsyncClient,]
)
def test_index_endpoint_service_client_from_service_account_info(client_class):
creds = ga_credentials.AnonymousCredentials()
with mock.patch.object(
service_account.Credentials, "from_service_account_info"
) as factory:
factory.return_value = creds
info = {"valid": True}
client = client_class.from_service_account_info(info)
assert client.transport._credentials == creds
assert isinstance(client, client_class)
assert client.transport._host == "aiplatform.googleapis.com:443"
@pytest.mark.parametrize(
"transport_class,transport_name",
[
(transports.IndexEndpointServiceGrpcTransport, "grpc"),
(transports.IndexEndpointServiceGrpcAsyncIOTransport, "grpc_asyncio"),
],
)
def test_index_endpoint_service_client_service_account_always_use_jwt(
transport_class, transport_name
):
with mock.patch.object(
service_account.Credentials, "with_always_use_jwt_access", create=True
) as use_jwt:
creds = service_account.Credentials(None, None, None)
transport = transport_class(credentials=creds, always_use_jwt_access=True)
use_jwt.assert_called_once_with(True)
with mock.patch.object(
service_account.Credentials, "with_always_use_jwt_access", create=True
) as use_jwt:
creds = service_account.Credentials(None, None, None)
transport = transport_class(credentials=creds, always_use_jwt_access=False)
use_jwt.assert_not_called()
@pytest.mark.parametrize(
"client_class", [IndexEndpointServiceClient, IndexEndpointServiceAsyncClient,]
)
def test_index_endpoint_service_client_from_service_account_file(client_class):
creds = ga_credentials.AnonymousCredentials()
with mock.patch.object(
service_account.Credentials, "from_service_account_file"
) as factory:
factory.return_value = creds
client = client_class.from_service_account_file("dummy/file/path.json")
assert client.transport._credentials == creds
assert isinstance(client, client_class)
client = client_class.from_service_account_json("dummy/file/path.json")
assert client.transport._credentials == creds
assert isinstance(client, client_class)
assert client.transport._host == "aiplatform.googleapis.com:443"
def test_index_endpoint_service_client_get_transport_class():
transport = IndexEndpointServiceClient.get_transport_class()
available_transports = [
transports.IndexEndpointServiceGrpcTransport,
]
assert transport in available_transports
transport = IndexEndpointServiceClient.get_transport_class("grpc")
assert transport == transports.IndexEndpointServiceGrpcTransport
@pytest.mark.parametrize(
"client_class,transport_class,transport_name",
[
(
IndexEndpointServiceClient,
transports.IndexEndpointServiceGrpcTransport,
"grpc",
),
(
IndexEndpointServiceAsyncClient,
transports.IndexEndpointServiceGrpcAsyncIOTransport,
"grpc_asyncio",
),
],
)
@mock.patch.object(
IndexEndpointServiceClient,
"DEFAULT_ENDPOINT",
modify_default_endpoint(IndexEndpointServiceClient),
)
@mock.patch.object(
IndexEndpointServiceAsyncClient,
"DEFAULT_ENDPOINT",
modify_default_endpoint(IndexEndpointServiceAsyncClient),
)
def test_index_endpoint_service_client_client_options(
client_class, transport_class, transport_name
):
# Check that if channel is provided we won't create a new one.
with mock.patch.object(IndexEndpointServiceClient, "get_transport_class") as gtc:
transport = transport_class(credentials=ga_credentials.AnonymousCredentials())
client = client_class(transport=transport)
gtc.assert_not_called()
# Check that if channel is provided via str we will create a new one.
with mock.patch.object(IndexEndpointServiceClient, "get_transport_class") as gtc:
client = client_class(transport=transport_name)
gtc.assert_called()
# Check the case api_endpoint is provided.
options = client_options.ClientOptions(api_endpoint="squid.clam.whelk")
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class(client_options=options)
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host="squid.clam.whelk",
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
# Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is
# "never".
with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "never"}):
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class()
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_ENDPOINT,
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
# Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is
# "always".
with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "always"}):
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class()
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_MTLS_ENDPOINT,
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
# Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT has
# unsupported value.
with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "Unsupported"}):
with pytest.raises(MutualTLSChannelError):
client = client_class()
# Check the case GOOGLE_API_USE_CLIENT_CERTIFICATE has unsupported value.
with mock.patch.dict(
os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "Unsupported"}
):
with pytest.raises(ValueError):
client = client_class()
# Check the case quota_project_id is provided
options = client_options.ClientOptions(quota_project_id="octopus")
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class(client_options=options)
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_ENDPOINT,
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id="octopus",
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
@pytest.mark.parametrize(
"client_class,transport_class,transport_name,use_client_cert_env",
[
(
IndexEndpointServiceClient,
transports.IndexEndpointServiceGrpcTransport,
"grpc",
"true",
),
(
IndexEndpointServiceAsyncClient,
transports.IndexEndpointServiceGrpcAsyncIOTransport,
"grpc_asyncio",
"true",
),
(
IndexEndpointServiceClient,
transports.IndexEndpointServiceGrpcTransport,
"grpc",
"false",
),
(
IndexEndpointServiceAsyncClient,
transports.IndexEndpointServiceGrpcAsyncIOTransport,
"grpc_asyncio",
"false",
),
],
)
@mock.patch.object(
IndexEndpointServiceClient,
"DEFAULT_ENDPOINT",
modify_default_endpoint(IndexEndpointServiceClient),
)
@mock.patch.object(
IndexEndpointServiceAsyncClient,
"DEFAULT_ENDPOINT",
modify_default_endpoint(IndexEndpointServiceAsyncClient),
)
@mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "auto"})
def test_index_endpoint_service_client_mtls_env_auto(
client_class, transport_class, transport_name, use_client_cert_env
):
# This tests the endpoint autoswitch behavior. Endpoint is autoswitched to the default
# mtls endpoint, if GOOGLE_API_USE_CLIENT_CERTIFICATE is "true" and client cert exists.
# Check the case client_cert_source is provided. Whether client cert is used depends on
# GOOGLE_API_USE_CLIENT_CERTIFICATE value.
with mock.patch.dict(
os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env}
):
options = client_options.ClientOptions(
client_cert_source=client_cert_source_callback
)
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class(client_options=options)
if use_client_cert_env == "false":
expected_client_cert_source = None
expected_host = client.DEFAULT_ENDPOINT
else:
expected_client_cert_source = client_cert_source_callback
expected_host = client.DEFAULT_MTLS_ENDPOINT
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=expected_host,
scopes=None,
client_cert_source_for_mtls=expected_client_cert_source,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
# Check the case ADC client cert is provided. Whether client cert is used depends on
# GOOGLE_API_USE_CLIENT_CERTIFICATE value.
with mock.patch.dict(
os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env}
):
with mock.patch.object(transport_class, "__init__") as patched:
with mock.patch(
"google.auth.transport.mtls.has_default_client_cert_source",
return_value=True,
):
with mock.patch(
"google.auth.transport.mtls.default_client_cert_source",
return_value=client_cert_source_callback,
):
if use_client_cert_env == "false":
expected_host = client.DEFAULT_ENDPOINT
expected_client_cert_source = None
else:
expected_host = client.DEFAULT_MTLS_ENDPOINT
expected_client_cert_source = client_cert_source_callback
patched.return_value = None
client = client_class()
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=expected_host,
scopes=None,
client_cert_source_for_mtls=expected_client_cert_source,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
# Check the case client_cert_source and ADC client cert are not provided.
with mock.patch.dict(
os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env}
):
with mock.patch.object(transport_class, "__init__") as patched:
with mock.patch(
"google.auth.transport.mtls.has_default_client_cert_source",
return_value=False,
):
patched.return_value = None
client = client_class()
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_ENDPOINT,
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
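# Illustrative sketch only (not part of the generated test surface): how a
# caller might opt in to mTLS explicitly. Assumes
# GOOGLE_API_USE_CLIENT_CERTIFICATE=true in the environment; ``cert_callback``
# is any callable returning a (cert_bytes, key_bytes) tuple, mirroring
# ``client_cert_source_callback`` above. Not collected by pytest.
def _example_explicit_mtls_client(cert_callback):
    options = client_options.ClientOptions(client_cert_source=cert_callback)
    return IndexEndpointServiceClient(
        credentials=ga_credentials.AnonymousCredentials(), client_options=options,
    )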
@pytest.mark.parametrize(
"client_class,transport_class,transport_name",
[
(
IndexEndpointServiceClient,
transports.IndexEndpointServiceGrpcTransport,
"grpc",
),
(
IndexEndpointServiceAsyncClient,
transports.IndexEndpointServiceGrpcAsyncIOTransport,
"grpc_asyncio",
),
],
)
def test_index_endpoint_service_client_client_options_scopes(
client_class, transport_class, transport_name
):
# Check the case scopes are provided.
options = client_options.ClientOptions(scopes=["1", "2"],)
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class(client_options=options)
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_ENDPOINT,
scopes=["1", "2"],
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
@pytest.mark.parametrize(
"client_class,transport_class,transport_name",
[
(
IndexEndpointServiceClient,
transports.IndexEndpointServiceGrpcTransport,
"grpc",
),
(
IndexEndpointServiceAsyncClient,
transports.IndexEndpointServiceGrpcAsyncIOTransport,
"grpc_asyncio",
),
],
)
def test_index_endpoint_service_client_client_options_credentials_file(
client_class, transport_class, transport_name
):
# Check the case credentials file is provided.
options = client_options.ClientOptions(credentials_file="credentials.json")
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class(client_options=options)
patched.assert_called_once_with(
credentials=None,
credentials_file="credentials.json",
host=client.DEFAULT_ENDPOINT,
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
def test_index_endpoint_service_client_client_options_from_dict():
with mock.patch(
"google.cloud.aiplatform_v1beta1.services.index_endpoint_service.transports.IndexEndpointServiceGrpcTransport.__init__"
) as grpc_transport:
grpc_transport.return_value = None
client = IndexEndpointServiceClient(
client_options={"api_endpoint": "squid.clam.whelk"}
)
grpc_transport.assert_called_once_with(
credentials=None,
credentials_file=None,
host="squid.clam.whelk",
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
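# The method tests that follow all share one pattern: patch the transport's
# per-method stub callable, designate a return value, invoke the client
# method, then assert on ``call.mock_calls``. A minimal, self-contained
# sketch of that pattern (illustrative only; not collected by pytest):
def _example_mocked_unary_call():
    client = IndexEndpointServiceClient(
        credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
    )
    with mock.patch.object(
        type(client.transport.get_index_endpoint), "__call__"
    ) as call:
        call.return_value = index_endpoint.IndexEndpoint(name="name_value")
        response = client.get_index_endpoint(request={})
    return response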
def test_create_index_endpoint(
transport: str = "grpc",
request_type=index_endpoint_service.CreateIndexEndpointRequest,
):
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_index_endpoint), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/spam")
response = client.create_index_endpoint(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == index_endpoint_service.CreateIndexEndpointRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
def test_create_index_endpoint_from_dict():
test_create_index_endpoint(request_type=dict)
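# The *_from_dict variants re-run the proto-typed test with
# ``request_type=dict`` to confirm that a plain dict request is coerced into
# the corresponding proto message.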
def test_create_index_endpoint_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_index_endpoint), "__call__"
) as call:
client.create_index_endpoint()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == index_endpoint_service.CreateIndexEndpointRequest()
@pytest.mark.asyncio
async def test_create_index_endpoint_async(
transport: str = "grpc_asyncio",
request_type=index_endpoint_service.CreateIndexEndpointRequest,
):
client = IndexEndpointServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_index_endpoint), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/spam")
)
response = await client.create_index_endpoint(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == index_endpoint_service.CreateIndexEndpointRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
@pytest.mark.asyncio
async def test_create_index_endpoint_async_from_dict():
await test_create_index_endpoint_async(request_type=dict)
def test_create_index_endpoint_field_headers():
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = index_endpoint_service.CreateIndexEndpointRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_index_endpoint), "__call__"
) as call:
call.return_value = operations_pb2.Operation(name="operations/op")
client.create_index_endpoint(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_create_index_endpoint_field_headers_async():
client = IndexEndpointServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = index_endpoint_service.CreateIndexEndpointRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_index_endpoint), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/op")
)
await client.create_index_endpoint(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_create_index_endpoint_flattened():
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_index_endpoint), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/op")
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.create_index_endpoint(
parent="parent_value",
index_endpoint=gca_index_endpoint.IndexEndpoint(name="name_value"),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
assert args[0].index_endpoint == gca_index_endpoint.IndexEndpoint(
name="name_value"
)
def test_create_index_endpoint_flattened_error():
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.create_index_endpoint(
index_endpoint_service.CreateIndexEndpointRequest(),
parent="parent_value",
index_endpoint=gca_index_endpoint.IndexEndpoint(name="name_value"),
)
@pytest.mark.asyncio
async def test_create_index_endpoint_flattened_async():
client = IndexEndpointServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_index_endpoint), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/spam")
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.create_index_endpoint(
parent="parent_value",
index_endpoint=gca_index_endpoint.IndexEndpoint(name="name_value"),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
assert args[0].index_endpoint == gca_index_endpoint.IndexEndpoint(
name="name_value"
)
@pytest.mark.asyncio
async def test_create_index_endpoint_flattened_error_async():
client = IndexEndpointServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.create_index_endpoint(
index_endpoint_service.CreateIndexEndpointRequest(),
parent="parent_value",
index_endpoint=gca_index_endpoint.IndexEndpoint(name="name_value"),
)
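# Flattened keyword arguments and an explicit request object are mutually
# exclusive, as the *_flattened_error tests above show: passing both raises
# ValueError before any RPC is attempted. Minimal sketch (illustrative only):
#
#     client.create_index_endpoint(parent="parent_value")        # ok
#     client.create_index_endpoint(request, parent="...")        # ValueError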
def test_get_index_endpoint(
transport: str = "grpc", request_type=index_endpoint_service.GetIndexEndpointRequest
):
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_index_endpoint), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = index_endpoint.IndexEndpoint(
name="name_value",
display_name="display_name_value",
description="description_value",
etag="etag_value",
network="network_value",
)
response = client.get_index_endpoint(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == index_endpoint_service.GetIndexEndpointRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, index_endpoint.IndexEndpoint)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.description == "description_value"
assert response.etag == "etag_value"
assert response.network == "network_value"
def test_get_index_endpoint_from_dict():
test_get_index_endpoint(request_type=dict)
def test_get_index_endpoint_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_index_endpoint), "__call__"
) as call:
client.get_index_endpoint()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == index_endpoint_service.GetIndexEndpointRequest()
@pytest.mark.asyncio
async def test_get_index_endpoint_async(
transport: str = "grpc_asyncio",
request_type=index_endpoint_service.GetIndexEndpointRequest,
):
client = IndexEndpointServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_index_endpoint), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
index_endpoint.IndexEndpoint(
name="name_value",
display_name="display_name_value",
description="description_value",
etag="etag_value",
network="network_value",
)
)
response = await client.get_index_endpoint(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == index_endpoint_service.GetIndexEndpointRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, index_endpoint.IndexEndpoint)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.description == "description_value"
assert response.etag == "etag_value"
assert response.network == "network_value"
@pytest.mark.asyncio
async def test_get_index_endpoint_async_from_dict():
await test_get_index_endpoint_async(request_type=dict)
def test_get_index_endpoint_field_headers():
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = index_endpoint_service.GetIndexEndpointRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_index_endpoint), "__call__"
) as call:
call.return_value = index_endpoint.IndexEndpoint()
client.get_index_endpoint(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_get_index_endpoint_field_headers_async():
client = IndexEndpointServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = index_endpoint_service.GetIndexEndpointRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_index_endpoint), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
index_endpoint.IndexEndpoint()
)
await client.get_index_endpoint(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_get_index_endpoint_flattened():
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_index_endpoint), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = index_endpoint.IndexEndpoint()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.get_index_endpoint(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
def test_get_index_endpoint_flattened_error():
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.get_index_endpoint(
index_endpoint_service.GetIndexEndpointRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_get_index_endpoint_flattened_async():
client = IndexEndpointServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_index_endpoint), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
index_endpoint.IndexEndpoint()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.get_index_endpoint(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
@pytest.mark.asyncio
async def test_get_index_endpoint_flattened_error_async():
client = IndexEndpointServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.get_index_endpoint(
index_endpoint_service.GetIndexEndpointRequest(), name="name_value",
)
def test_list_index_endpoints(
transport: str = "grpc",
request_type=index_endpoint_service.ListIndexEndpointsRequest,
):
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_index_endpoints), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = index_endpoint_service.ListIndexEndpointsResponse(
next_page_token="next_page_token_value",
)
response = client.list_index_endpoints(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == index_endpoint_service.ListIndexEndpointsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListIndexEndpointsPager)
assert response.next_page_token == "next_page_token_value"
def test_list_index_endpoints_from_dict():
test_list_index_endpoints(request_type=dict)
def test_list_index_endpoints_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_index_endpoints), "__call__"
) as call:
client.list_index_endpoints()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == index_endpoint_service.ListIndexEndpointsRequest()
@pytest.mark.asyncio
async def test_list_index_endpoints_async(
transport: str = "grpc_asyncio",
request_type=index_endpoint_service.ListIndexEndpointsRequest,
):
client = IndexEndpointServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_index_endpoints), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
index_endpoint_service.ListIndexEndpointsResponse(
next_page_token="next_page_token_value",
)
)
response = await client.list_index_endpoints(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == index_endpoint_service.ListIndexEndpointsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListIndexEndpointsAsyncPager)
assert response.next_page_token == "next_page_token_value"
@pytest.mark.asyncio
async def test_list_index_endpoints_async_from_dict():
await test_list_index_endpoints_async(request_type=dict)
def test_list_index_endpoints_field_headers():
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = index_endpoint_service.ListIndexEndpointsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_index_endpoints), "__call__"
) as call:
call.return_value = index_endpoint_service.ListIndexEndpointsResponse()
client.list_index_endpoints(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_list_index_endpoints_field_headers_async():
client = IndexEndpointServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = index_endpoint_service.ListIndexEndpointsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_index_endpoints), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
index_endpoint_service.ListIndexEndpointsResponse()
)
await client.list_index_endpoints(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_list_index_endpoints_flattened():
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_index_endpoints), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = index_endpoint_service.ListIndexEndpointsResponse()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.list_index_endpoints(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
def test_list_index_endpoints_flattened_error():
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.list_index_endpoints(
index_endpoint_service.ListIndexEndpointsRequest(), parent="parent_value",
)
@pytest.mark.asyncio
async def test_list_index_endpoints_flattened_async():
client = IndexEndpointServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_index_endpoints), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
index_endpoint_service.ListIndexEndpointsResponse()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.list_index_endpoints(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
@pytest.mark.asyncio
async def test_list_index_endpoints_flattened_error_async():
client = IndexEndpointServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.list_index_endpoints(
index_endpoint_service.ListIndexEndpointsRequest(), parent="parent_value",
)
def test_list_index_endpoints_pager():
client = IndexEndpointServiceClient(
        credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_index_endpoints), "__call__"
) as call:
# Set the response to a series of pages.
call.side_effect = (
index_endpoint_service.ListIndexEndpointsResponse(
index_endpoints=[
index_endpoint.IndexEndpoint(),
index_endpoint.IndexEndpoint(),
index_endpoint.IndexEndpoint(),
],
next_page_token="abc",
),
index_endpoint_service.ListIndexEndpointsResponse(
index_endpoints=[], next_page_token="def",
),
index_endpoint_service.ListIndexEndpointsResponse(
index_endpoints=[index_endpoint.IndexEndpoint(),],
next_page_token="ghi",
),
index_endpoint_service.ListIndexEndpointsResponse(
index_endpoints=[
index_endpoint.IndexEndpoint(),
index_endpoint.IndexEndpoint(),
],
),
RuntimeError,
)
        metadata = (
            gapic_v1.routing_header.to_grpc_metadata((("parent", ""),)),
        )
pager = client.list_index_endpoints(request={})
assert pager._metadata == metadata
        results = list(pager)
assert len(results) == 6
assert all(isinstance(i, index_endpoint.IndexEndpoint) for i in results)
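# ``list_index_endpoints`` returns a pager that transparently fetches
# subsequent pages while being iterated item-by-item, as above. Page-level
# access is also available; illustrative sketch, assuming a ``pager`` built
# the same way:
#
#     for page in pager.pages:
#         for endpoint in page.index_endpoints:
#             ...  # each page wraps one ListIndexEndpointsResponse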
def test_list_index_endpoints_pages():
client = IndexEndpointServiceClient(
        credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_index_endpoints), "__call__"
) as call:
# Set the response to a series of pages.
call.side_effect = (
index_endpoint_service.ListIndexEndpointsResponse(
index_endpoints=[
index_endpoint.IndexEndpoint(),
index_endpoint.IndexEndpoint(),
index_endpoint.IndexEndpoint(),
],
next_page_token="abc",
),
index_endpoint_service.ListIndexEndpointsResponse(
index_endpoints=[], next_page_token="def",
),
index_endpoint_service.ListIndexEndpointsResponse(
index_endpoints=[index_endpoint.IndexEndpoint(),],
next_page_token="ghi",
),
index_endpoint_service.ListIndexEndpointsResponse(
index_endpoints=[
index_endpoint.IndexEndpoint(),
index_endpoint.IndexEndpoint(),
],
),
RuntimeError,
)
pages = list(client.list_index_endpoints(request={}).pages)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
@pytest.mark.asyncio
async def test_list_index_endpoints_async_pager():
client = IndexEndpointServiceAsyncClient(
        credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_index_endpoints),
"__call__",
new_callable=mock.AsyncMock,
) as call:
# Set the response to a series of pages.
call.side_effect = (
index_endpoint_service.ListIndexEndpointsResponse(
index_endpoints=[
index_endpoint.IndexEndpoint(),
index_endpoint.IndexEndpoint(),
index_endpoint.IndexEndpoint(),
],
next_page_token="abc",
),
index_endpoint_service.ListIndexEndpointsResponse(
index_endpoints=[], next_page_token="def",
),
index_endpoint_service.ListIndexEndpointsResponse(
index_endpoints=[index_endpoint.IndexEndpoint(),],
next_page_token="ghi",
),
index_endpoint_service.ListIndexEndpointsResponse(
index_endpoints=[
index_endpoint.IndexEndpoint(),
index_endpoint.IndexEndpoint(),
],
),
RuntimeError,
)
async_pager = await client.list_index_endpoints(request={},)
assert async_pager.next_page_token == "abc"
responses = []
async for response in async_pager:
responses.append(response)
assert len(responses) == 6
assert all(isinstance(i, index_endpoint.IndexEndpoint) for i in responses)
@pytest.mark.asyncio
async def test_list_index_endpoints_async_pages():
client = IndexEndpointServiceAsyncClient(
        credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_index_endpoints),
"__call__",
new_callable=mock.AsyncMock,
) as call:
# Set the response to a series of pages.
call.side_effect = (
index_endpoint_service.ListIndexEndpointsResponse(
index_endpoints=[
index_endpoint.IndexEndpoint(),
index_endpoint.IndexEndpoint(),
index_endpoint.IndexEndpoint(),
],
next_page_token="abc",
),
index_endpoint_service.ListIndexEndpointsResponse(
index_endpoints=[], next_page_token="def",
),
index_endpoint_service.ListIndexEndpointsResponse(
index_endpoints=[index_endpoint.IndexEndpoint(),],
next_page_token="ghi",
),
index_endpoint_service.ListIndexEndpointsResponse(
index_endpoints=[
index_endpoint.IndexEndpoint(),
index_endpoint.IndexEndpoint(),
],
),
RuntimeError,
)
pages = []
async for page_ in (await client.list_index_endpoints(request={})).pages:
pages.append(page_)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
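# Note the asymmetry with the sync pager: the async client method must be
# awaited to obtain the pager, which is then consumed with ``async for``
# over items or over ``pager.pages``, exactly as the two tests above do.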
def test_update_index_endpoint(
transport: str = "grpc",
request_type=index_endpoint_service.UpdateIndexEndpointRequest,
):
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_index_endpoint), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = gca_index_endpoint.IndexEndpoint(
name="name_value",
display_name="display_name_value",
description="description_value",
etag="etag_value",
network="network_value",
)
response = client.update_index_endpoint(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == index_endpoint_service.UpdateIndexEndpointRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, gca_index_endpoint.IndexEndpoint)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.description == "description_value"
assert response.etag == "etag_value"
assert response.network == "network_value"
def test_update_index_endpoint_from_dict():
test_update_index_endpoint(request_type=dict)
def test_update_index_endpoint_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_index_endpoint), "__call__"
) as call:
client.update_index_endpoint()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == index_endpoint_service.UpdateIndexEndpointRequest()
@pytest.mark.asyncio
async def test_update_index_endpoint_async(
transport: str = "grpc_asyncio",
request_type=index_endpoint_service.UpdateIndexEndpointRequest,
):
client = IndexEndpointServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_index_endpoint), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
gca_index_endpoint.IndexEndpoint(
name="name_value",
display_name="display_name_value",
description="description_value",
etag="etag_value",
network="network_value",
)
)
response = await client.update_index_endpoint(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == index_endpoint_service.UpdateIndexEndpointRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, gca_index_endpoint.IndexEndpoint)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.description == "description_value"
assert response.etag == "etag_value"
assert response.network == "network_value"
@pytest.mark.asyncio
async def test_update_index_endpoint_async_from_dict():
await test_update_index_endpoint_async(request_type=dict)
def test_update_index_endpoint_field_headers():
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = index_endpoint_service.UpdateIndexEndpointRequest()
request.index_endpoint.name = "index_endpoint.name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_index_endpoint), "__call__"
) as call:
call.return_value = gca_index_endpoint.IndexEndpoint()
client.update_index_endpoint(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
"x-goog-request-params",
"index_endpoint.name=index_endpoint.name/value",
) in kw["metadata"]
@pytest.mark.asyncio
async def test_update_index_endpoint_field_headers_async():
client = IndexEndpointServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = index_endpoint_service.UpdateIndexEndpointRequest()
request.index_endpoint.name = "index_endpoint.name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_index_endpoint), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
gca_index_endpoint.IndexEndpoint()
)
await client.update_index_endpoint(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
"x-goog-request-params",
"index_endpoint.name=index_endpoint.name/value",
) in kw["metadata"]
def test_update_index_endpoint_flattened():
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_index_endpoint), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = gca_index_endpoint.IndexEndpoint()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.update_index_endpoint(
index_endpoint=gca_index_endpoint.IndexEndpoint(name="name_value"),
update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].index_endpoint == gca_index_endpoint.IndexEndpoint(
name="name_value"
)
assert args[0].update_mask == field_mask_pb2.FieldMask(paths=["paths_value"])
def test_update_index_endpoint_flattened_error():
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.update_index_endpoint(
index_endpoint_service.UpdateIndexEndpointRequest(),
index_endpoint=gca_index_endpoint.IndexEndpoint(name="name_value"),
update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]),
)
@pytest.mark.asyncio
async def test_update_index_endpoint_flattened_async():
client = IndexEndpointServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_index_endpoint), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
gca_index_endpoint.IndexEndpoint()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.update_index_endpoint(
index_endpoint=gca_index_endpoint.IndexEndpoint(name="name_value"),
update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].index_endpoint == gca_index_endpoint.IndexEndpoint(
name="name_value"
)
assert args[0].update_mask == field_mask_pb2.FieldMask(paths=["paths_value"])
@pytest.mark.asyncio
async def test_update_index_endpoint_flattened_error_async():
client = IndexEndpointServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.update_index_endpoint(
index_endpoint_service.UpdateIndexEndpointRequest(),
index_endpoint=gca_index_endpoint.IndexEndpoint(name="name_value"),
update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]),
)
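# ``update_index_endpoint`` is a partial update: the FieldMask in
# ``update_mask`` selects which IndexEndpoint fields are overwritten.
# Illustrative sketch of updating only the display name (assumes a real,
# non-mocked client; the resource name is a placeholder):
#
#     client.update_index_endpoint(
#         index_endpoint=gca_index_endpoint.IndexEndpoint(
#             name="projects/p/locations/l/indexEndpoints/e",
#             display_name="new name",
#         ),
#         update_mask=field_mask_pb2.FieldMask(paths=["display_name"]),
#     )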
def test_delete_index_endpoint(
transport: str = "grpc",
request_type=index_endpoint_service.DeleteIndexEndpointRequest,
):
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_index_endpoint), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/spam")
response = client.delete_index_endpoint(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == index_endpoint_service.DeleteIndexEndpointRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
def test_delete_index_endpoint_from_dict():
test_delete_index_endpoint(request_type=dict)
def test_delete_index_endpoint_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_index_endpoint), "__call__"
) as call:
client.delete_index_endpoint()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == index_endpoint_service.DeleteIndexEndpointRequest()
@pytest.mark.asyncio
async def test_delete_index_endpoint_async(
transport: str = "grpc_asyncio",
request_type=index_endpoint_service.DeleteIndexEndpointRequest,
):
client = IndexEndpointServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_index_endpoint), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/spam")
)
response = await client.delete_index_endpoint(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == index_endpoint_service.DeleteIndexEndpointRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
@pytest.mark.asyncio
async def test_delete_index_endpoint_async_from_dict():
await test_delete_index_endpoint_async(request_type=dict)
def test_delete_index_endpoint_field_headers():
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = index_endpoint_service.DeleteIndexEndpointRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_index_endpoint), "__call__"
) as call:
call.return_value = operations_pb2.Operation(name="operations/op")
client.delete_index_endpoint(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_delete_index_endpoint_field_headers_async():
client = IndexEndpointServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = index_endpoint_service.DeleteIndexEndpointRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_index_endpoint), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/op")
)
await client.delete_index_endpoint(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_delete_index_endpoint_flattened():
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_index_endpoint), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/op")
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.delete_index_endpoint(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
def test_delete_index_endpoint_flattened_error():
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.delete_index_endpoint(
index_endpoint_service.DeleteIndexEndpointRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_delete_index_endpoint_flattened_async():
client = IndexEndpointServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_index_endpoint), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/spam")
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.delete_index_endpoint(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
@pytest.mark.asyncio
async def test_delete_index_endpoint_flattened_error_async():
client = IndexEndpointServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.delete_index_endpoint(
index_endpoint_service.DeleteIndexEndpointRequest(), name="name_value",
)
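# ``delete_index_endpoint`` (like create/deploy/undeploy) returns a
# long-running operation. Outside of tests a caller typically blocks on it;
# illustrative sketch (``lro`` is the google.api_core.operation.Operation
# returned by the client, and the timeout value is arbitrary):
#
#     lro = client.delete_index_endpoint(name="projects/p/locations/l/indexEndpoints/e")
#     lro.result(timeout=300)  # raises if the operation failed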
def test_deploy_index(
transport: str = "grpc", request_type=index_endpoint_service.DeployIndexRequest
):
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.deploy_index), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/spam")
response = client.deploy_index(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == index_endpoint_service.DeployIndexRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
def test_deploy_index_from_dict():
test_deploy_index(request_type=dict)
def test_deploy_index_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.deploy_index), "__call__") as call:
client.deploy_index()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == index_endpoint_service.DeployIndexRequest()
@pytest.mark.asyncio
async def test_deploy_index_async(
transport: str = "grpc_asyncio",
request_type=index_endpoint_service.DeployIndexRequest,
):
client = IndexEndpointServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.deploy_index), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/spam")
)
response = await client.deploy_index(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == index_endpoint_service.DeployIndexRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
@pytest.mark.asyncio
async def test_deploy_index_async_from_dict():
await test_deploy_index_async(request_type=dict)
def test_deploy_index_field_headers():
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = index_endpoint_service.DeployIndexRequest()
request.index_endpoint = "index_endpoint/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.deploy_index), "__call__") as call:
call.return_value = operations_pb2.Operation(name="operations/op")
client.deploy_index(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "index_endpoint=index_endpoint/value",) in kw[
"metadata"
]
@pytest.mark.asyncio
async def test_deploy_index_field_headers_async():
client = IndexEndpointServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = index_endpoint_service.DeployIndexRequest()
request.index_endpoint = "index_endpoint/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.deploy_index), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/op")
)
await client.deploy_index(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "index_endpoint=index_endpoint/value",) in kw[
"metadata"
]
def test_deploy_index_flattened():
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.deploy_index), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/op")
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.deploy_index(
index_endpoint="index_endpoint_value",
deployed_index=gca_index_endpoint.DeployedIndex(id="id_value"),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].index_endpoint == "index_endpoint_value"
assert args[0].deployed_index == gca_index_endpoint.DeployedIndex(id="id_value")
def test_deploy_index_flattened_error():
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.deploy_index(
index_endpoint_service.DeployIndexRequest(),
index_endpoint="index_endpoint_value",
deployed_index=gca_index_endpoint.DeployedIndex(id="id_value"),
)
@pytest.mark.asyncio
async def test_deploy_index_flattened_async():
client = IndexEndpointServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.deploy_index), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/spam")
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.deploy_index(
index_endpoint="index_endpoint_value",
deployed_index=gca_index_endpoint.DeployedIndex(id="id_value"),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].index_endpoint == "index_endpoint_value"
assert args[0].deployed_index == gca_index_endpoint.DeployedIndex(id="id_value")
@pytest.mark.asyncio
async def test_deploy_index_flattened_error_async():
client = IndexEndpointServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.deploy_index(
index_endpoint_service.DeployIndexRequest(),
index_endpoint="index_endpoint_value",
deployed_index=gca_index_endpoint.DeployedIndex(id="id_value"),
)
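# Deploy and undeploy requests are routed by the ``index_endpoint`` field
# rather than ``name`` or ``parent``; the field-header tests above verify
# that the x-goog-request-params metadata carries ``index_endpoint=...``
# accordingly.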
def test_undeploy_index(
transport: str = "grpc", request_type=index_endpoint_service.UndeployIndexRequest
):
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.undeploy_index), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/spam")
response = client.undeploy_index(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == index_endpoint_service.UndeployIndexRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
def test_undeploy_index_from_dict():
test_undeploy_index(request_type=dict)
def test_undeploy_index_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.undeploy_index), "__call__") as call:
client.undeploy_index()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == index_endpoint_service.UndeployIndexRequest()
@pytest.mark.asyncio
async def test_undeploy_index_async(
transport: str = "grpc_asyncio",
request_type=index_endpoint_service.UndeployIndexRequest,
):
client = IndexEndpointServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.undeploy_index), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/spam")
)
response = await client.undeploy_index(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == index_endpoint_service.UndeployIndexRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
@pytest.mark.asyncio
async def test_undeploy_index_async_from_dict():
await test_undeploy_index_async(request_type=dict)
def test_undeploy_index_field_headers():
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = index_endpoint_service.UndeployIndexRequest()
request.index_endpoint = "index_endpoint/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.undeploy_index), "__call__") as call:
call.return_value = operations_pb2.Operation(name="operations/op")
client.undeploy_index(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "index_endpoint=index_endpoint/value",) in kw[
"metadata"
]
@pytest.mark.asyncio
async def test_undeploy_index_field_headers_async():
client = IndexEndpointServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = index_endpoint_service.UndeployIndexRequest()
request.index_endpoint = "index_endpoint/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.undeploy_index), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/op")
)
await client.undeploy_index(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "index_endpoint=index_endpoint/value",) in kw[
"metadata"
]
def test_undeploy_index_flattened():
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.undeploy_index), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/op")
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.undeploy_index(
index_endpoint="index_endpoint_value",
deployed_index_id="deployed_index_id_value",
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].index_endpoint == "index_endpoint_value"
assert args[0].deployed_index_id == "deployed_index_id_value"
def test_undeploy_index_flattened_error():
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.undeploy_index(
index_endpoint_service.UndeployIndexRequest(),
index_endpoint="index_endpoint_value",
deployed_index_id="deployed_index_id_value",
)
@pytest.mark.asyncio
async def test_undeploy_index_flattened_async():
client = IndexEndpointServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.undeploy_index), "__call__") as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            operations_pb2.Operation(name="operations/spam")
        )
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.undeploy_index(
index_endpoint="index_endpoint_value",
deployed_index_id="deployed_index_id_value",
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].index_endpoint == "index_endpoint_value"
assert args[0].deployed_index_id == "deployed_index_id_value"
@pytest.mark.asyncio
async def test_undeploy_index_flattened_error_async():
client = IndexEndpointServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.undeploy_index(
index_endpoint_service.UndeployIndexRequest(),
index_endpoint="index_endpoint_value",
deployed_index_id="deployed_index_id_value",
)
def test_credentials_transport_error():
# It is an error to provide credentials and a transport instance.
transport = transports.IndexEndpointServiceGrpcTransport(
credentials=ga_credentials.AnonymousCredentials(),
)
with pytest.raises(ValueError):
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# It is an error to provide a credentials file and a transport instance.
transport = transports.IndexEndpointServiceGrpcTransport(
credentials=ga_credentials.AnonymousCredentials(),
)
with pytest.raises(ValueError):
client = IndexEndpointServiceClient(
client_options={"credentials_file": "credentials.json"},
transport=transport,
)
# It is an error to provide scopes and a transport instance.
transport = transports.IndexEndpointServiceGrpcTransport(
credentials=ga_credentials.AnonymousCredentials(),
)
with pytest.raises(ValueError):
client = IndexEndpointServiceClient(
client_options={"scopes": ["1", "2"]}, transport=transport,
)
def test_transport_instance():
# A client may be instantiated with a custom transport instance.
transport = transports.IndexEndpointServiceGrpcTransport(
credentials=ga_credentials.AnonymousCredentials(),
)
client = IndexEndpointServiceClient(transport=transport)
assert client.transport is transport
def test_transport_get_channel():
# A client may be instantiated with a custom transport instance.
transport = transports.IndexEndpointServiceGrpcTransport(
credentials=ga_credentials.AnonymousCredentials(),
)
channel = transport.grpc_channel
assert channel
transport = transports.IndexEndpointServiceGrpcAsyncIOTransport(
credentials=ga_credentials.AnonymousCredentials(),
)
channel = transport.grpc_channel
assert channel
@pytest.mark.parametrize(
"transport_class",
[
transports.IndexEndpointServiceGrpcTransport,
transports.IndexEndpointServiceGrpcAsyncIOTransport,
],
)
def test_transport_adc(transport_class):
# Test default credentials are used if not provided.
with mock.patch.object(google.auth, "default") as adc:
adc.return_value = (ga_credentials.AnonymousCredentials(), None)
transport_class()
adc.assert_called_once()
def test_transport_grpc_default():
# A client should use the gRPC transport by default.
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
)
assert isinstance(client.transport, transports.IndexEndpointServiceGrpcTransport,)
def test_index_endpoint_service_base_transport_error():
# Passing both a credentials object and credentials_file should raise an error
with pytest.raises(core_exceptions.DuplicateCredentialArgs):
transport = transports.IndexEndpointServiceTransport(
credentials=ga_credentials.AnonymousCredentials(),
credentials_file="credentials.json",
)
def test_index_endpoint_service_base_transport():
# Instantiate the base transport.
with mock.patch(
"google.cloud.aiplatform_v1beta1.services.index_endpoint_service.transports.IndexEndpointServiceTransport.__init__"
) as Transport:
Transport.return_value = None
transport = transports.IndexEndpointServiceTransport(
credentials=ga_credentials.AnonymousCredentials(),
)
# Every method on the transport should just blindly
# raise NotImplementedError.
methods = (
"create_index_endpoint",
"get_index_endpoint",
"list_index_endpoints",
"update_index_endpoint",
"delete_index_endpoint",
"deploy_index",
"undeploy_index",
)
for method in methods:
with pytest.raises(NotImplementedError):
getattr(transport, method)(request=object())
# Additionally, the LRO client (a property) should
# also raise NotImplementedError
with pytest.raises(NotImplementedError):
transport.operations_client
@requires_google_auth_gte_1_25_0
def test_index_endpoint_service_base_transport_with_credentials_file():
# Instantiate the base transport with a credentials file
with mock.patch.object(
google.auth, "load_credentials_from_file", autospec=True
) as load_creds, mock.patch(
"google.cloud.aiplatform_v1beta1.services.index_endpoint_service.transports.IndexEndpointServiceTransport._prep_wrapped_messages"
) as Transport:
Transport.return_value = None
load_creds.return_value = (ga_credentials.AnonymousCredentials(), None)
transport = transports.IndexEndpointServiceTransport(
credentials_file="credentials.json", quota_project_id="octopus",
)
load_creds.assert_called_once_with(
"credentials.json",
scopes=None,
default_scopes=("https://www.googleapis.com/auth/cloud-platform",),
quota_project_id="octopus",
)
@requires_google_auth_lt_1_25_0
def test_index_endpoint_service_base_transport_with_credentials_file_old_google_auth():
# Instantiate the base transport with a credentials file
with mock.patch.object(
google.auth, "load_credentials_from_file", autospec=True
) as load_creds, mock.patch(
"google.cloud.aiplatform_v1beta1.services.index_endpoint_service.transports.IndexEndpointServiceTransport._prep_wrapped_messages"
) as Transport:
Transport.return_value = None
load_creds.return_value = (ga_credentials.AnonymousCredentials(), None)
transport = transports.IndexEndpointServiceTransport(
credentials_file="credentials.json", quota_project_id="octopus",
)
load_creds.assert_called_once_with(
"credentials.json",
scopes=("https://www.googleapis.com/auth/cloud-platform",),
quota_project_id="octopus",
)
def test_index_endpoint_service_base_transport_with_adc():
# Test the default credentials are used if credentials and credentials_file are None.
with mock.patch.object(google.auth, "default", autospec=True) as adc, mock.patch(
"google.cloud.aiplatform_v1beta1.services.index_endpoint_service.transports.IndexEndpointServiceTransport._prep_wrapped_messages"
) as Transport:
Transport.return_value = None
adc.return_value = (ga_credentials.AnonymousCredentials(), None)
transport = transports.IndexEndpointServiceTransport()
adc.assert_called_once()
@requires_google_auth_gte_1_25_0
def test_index_endpoint_service_auth_adc():
# If no credentials are provided, we should use ADC credentials.
with mock.patch.object(google.auth, "default", autospec=True) as adc:
adc.return_value = (ga_credentials.AnonymousCredentials(), None)
IndexEndpointServiceClient()
adc.assert_called_once_with(
scopes=None,
default_scopes=("https://www.googleapis.com/auth/cloud-platform",),
quota_project_id=None,
)
@requires_google_auth_lt_1_25_0
def test_index_endpoint_service_auth_adc_old_google_auth():
# If no credentials are provided, we should use ADC credentials.
with mock.patch.object(google.auth, "default", autospec=True) as adc:
adc.return_value = (ga_credentials.AnonymousCredentials(), None)
IndexEndpointServiceClient()
adc.assert_called_once_with(
scopes=("https://www.googleapis.com/auth/cloud-platform",),
quota_project_id=None,
)
@pytest.mark.parametrize(
"transport_class",
[
transports.IndexEndpointServiceGrpcTransport,
transports.IndexEndpointServiceGrpcAsyncIOTransport,
],
)
@requires_google_auth_gte_1_25_0
def test_index_endpoint_service_transport_auth_adc(transport_class):
# If credentials and host are not provided, the transport class should use
# ADC credentials.
with mock.patch.object(google.auth, "default", autospec=True) as adc:
adc.return_value = (ga_credentials.AnonymousCredentials(), None)
transport_class(quota_project_id="octopus", scopes=["1", "2"])
adc.assert_called_once_with(
scopes=["1", "2"],
default_scopes=("https://www.googleapis.com/auth/cloud-platform",),
quota_project_id="octopus",
)
@pytest.mark.parametrize(
"transport_class",
[
transports.IndexEndpointServiceGrpcTransport,
transports.IndexEndpointServiceGrpcAsyncIOTransport,
],
)
@requires_google_auth_lt_1_25_0
def test_index_endpoint_service_transport_auth_adc_old_google_auth(transport_class):
# If credentials and host are not provided, the transport class should use
# ADC credentials.
with mock.patch.object(google.auth, "default", autospec=True) as adc:
adc.return_value = (ga_credentials.AnonymousCredentials(), None)
transport_class(quota_project_id="octopus")
adc.assert_called_once_with(
scopes=("https://www.googleapis.com/auth/cloud-platform",),
quota_project_id="octopus",
)
@pytest.mark.parametrize(
"transport_class,grpc_helpers",
[
(transports.IndexEndpointServiceGrpcTransport, grpc_helpers),
(transports.IndexEndpointServiceGrpcAsyncIOTransport, grpc_helpers_async),
],
)
def test_index_endpoint_service_transport_create_channel(transport_class, grpc_helpers):
# If credentials and host are not provided, the transport class should use
# ADC credentials.
with mock.patch.object(
google.auth, "default", autospec=True
) as adc, mock.patch.object(
grpc_helpers, "create_channel", autospec=True
) as create_channel:
creds = ga_credentials.AnonymousCredentials()
adc.return_value = (creds, None)
transport_class(quota_project_id="octopus", scopes=["1", "2"])
create_channel.assert_called_with(
"aiplatform.googleapis.com:443",
credentials=creds,
credentials_file=None,
quota_project_id="octopus",
default_scopes=("https://www.googleapis.com/auth/cloud-platform",),
scopes=["1", "2"],
default_host="aiplatform.googleapis.com",
ssl_credentials=None,
options=[
("grpc.max_send_message_length", -1),
("grpc.max_receive_message_length", -1),
],
)
@pytest.mark.parametrize(
"transport_class",
[
transports.IndexEndpointServiceGrpcTransport,
transports.IndexEndpointServiceGrpcAsyncIOTransport,
],
)
def test_index_endpoint_service_grpc_transport_client_cert_source_for_mtls(
transport_class,
):
cred = ga_credentials.AnonymousCredentials()
# Check ssl_channel_credentials is used if provided.
with mock.patch.object(transport_class, "create_channel") as mock_create_channel:
mock_ssl_channel_creds = mock.Mock()
transport_class(
host="squid.clam.whelk",
credentials=cred,
ssl_channel_credentials=mock_ssl_channel_creds,
)
mock_create_channel.assert_called_once_with(
"squid.clam.whelk:443",
credentials=cred,
credentials_file=None,
scopes=None,
ssl_credentials=mock_ssl_channel_creds,
quota_project_id=None,
options=[
("grpc.max_send_message_length", -1),
("grpc.max_receive_message_length", -1),
],
)
# Check if ssl_channel_credentials is not provided, then client_cert_source_for_mtls
# is used.
with mock.patch.object(transport_class, "create_channel", return_value=mock.Mock()):
with mock.patch("grpc.ssl_channel_credentials") as mock_ssl_cred:
transport_class(
credentials=cred,
client_cert_source_for_mtls=client_cert_source_callback,
)
expected_cert, expected_key = client_cert_source_callback()
mock_ssl_cred.assert_called_once_with(
certificate_chain=expected_cert, private_key=expected_key
)
def test_index_endpoint_service_host_no_port():
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
client_options=client_options.ClientOptions(
api_endpoint="aiplatform.googleapis.com"
),
)
assert client.transport._host == "aiplatform.googleapis.com:443"
def test_index_endpoint_service_host_with_port():
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
client_options=client_options.ClientOptions(
api_endpoint="aiplatform.googleapis.com:8000"
),
)
assert client.transport._host == "aiplatform.googleapis.com:8000"
def test_index_endpoint_service_grpc_transport_channel():
channel = grpc.secure_channel("http://localhost/", grpc.local_channel_credentials())
# Check that channel is used if provided.
transport = transports.IndexEndpointServiceGrpcTransport(
host="squid.clam.whelk", channel=channel,
)
assert transport.grpc_channel == channel
assert transport._host == "squid.clam.whelk:443"
    assert transport._ssl_channel_credentials is None
def test_index_endpoint_service_grpc_asyncio_transport_channel():
channel = aio.secure_channel("http://localhost/", grpc.local_channel_credentials())
# Check that channel is used if provided.
transport = transports.IndexEndpointServiceGrpcAsyncIOTransport(
host="squid.clam.whelk", channel=channel,
)
assert transport.grpc_channel == channel
assert transport._host == "squid.clam.whelk:443"
    assert transport._ssl_channel_credentials is None
# Remove this test when deprecated arguments (api_mtls_endpoint, client_cert_source) are
# removed from grpc/grpc_asyncio transport constructor.
@pytest.mark.parametrize(
"transport_class",
[
transports.IndexEndpointServiceGrpcTransport,
transports.IndexEndpointServiceGrpcAsyncIOTransport,
],
)
def test_index_endpoint_service_transport_channel_mtls_with_client_cert_source(
transport_class,
):
with mock.patch(
"grpc.ssl_channel_credentials", autospec=True
) as grpc_ssl_channel_cred:
with mock.patch.object(
transport_class, "create_channel"
) as grpc_create_channel:
mock_ssl_cred = mock.Mock()
grpc_ssl_channel_cred.return_value = mock_ssl_cred
mock_grpc_channel = mock.Mock()
grpc_create_channel.return_value = mock_grpc_channel
cred = ga_credentials.AnonymousCredentials()
with pytest.warns(DeprecationWarning):
with mock.patch.object(google.auth, "default") as adc:
adc.return_value = (cred, None)
transport = transport_class(
host="squid.clam.whelk",
api_mtls_endpoint="mtls.squid.clam.whelk",
client_cert_source=client_cert_source_callback,
)
adc.assert_called_once()
grpc_ssl_channel_cred.assert_called_once_with(
certificate_chain=b"cert bytes", private_key=b"key bytes"
)
grpc_create_channel.assert_called_once_with(
"mtls.squid.clam.whelk:443",
credentials=cred,
credentials_file=None,
scopes=None,
ssl_credentials=mock_ssl_cred,
quota_project_id=None,
options=[
("grpc.max_send_message_length", -1),
("grpc.max_receive_message_length", -1),
],
)
assert transport.grpc_channel == mock_grpc_channel
assert transport._ssl_channel_credentials == mock_ssl_cred
# Remove this test when deprecated arguments (api_mtls_endpoint, client_cert_source) are
# removed from grpc/grpc_asyncio transport constructor.
@pytest.mark.parametrize(
"transport_class",
[
transports.IndexEndpointServiceGrpcTransport,
transports.IndexEndpointServiceGrpcAsyncIOTransport,
],
)
def test_index_endpoint_service_transport_channel_mtls_with_adc(transport_class):
mock_ssl_cred = mock.Mock()
with mock.patch.multiple(
"google.auth.transport.grpc.SslCredentials",
__init__=mock.Mock(return_value=None),
ssl_credentials=mock.PropertyMock(return_value=mock_ssl_cred),
):
with mock.patch.object(
transport_class, "create_channel"
) as grpc_create_channel:
mock_grpc_channel = mock.Mock()
grpc_create_channel.return_value = mock_grpc_channel
mock_cred = mock.Mock()
with pytest.warns(DeprecationWarning):
transport = transport_class(
host="squid.clam.whelk",
credentials=mock_cred,
api_mtls_endpoint="mtls.squid.clam.whelk",
client_cert_source=None,
)
grpc_create_channel.assert_called_once_with(
"mtls.squid.clam.whelk:443",
credentials=mock_cred,
credentials_file=None,
scopes=None,
ssl_credentials=mock_ssl_cred,
quota_project_id=None,
options=[
("grpc.max_send_message_length", -1),
("grpc.max_receive_message_length", -1),
],
)
assert transport.grpc_channel == mock_grpc_channel
def test_index_endpoint_service_grpc_lro_client():
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
transport = client.transport
    # Ensure that we have an api-core operations client.
assert isinstance(transport.operations_client, operations_v1.OperationsClient,)
# Ensure that subsequent calls to the property send the exact same object.
assert transport.operations_client is transport.operations_client
def test_index_endpoint_service_grpc_lro_async_client():
client = IndexEndpointServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc_asyncio",
)
transport = client.transport
    # Ensure that we have an api-core operations client.
assert isinstance(transport.operations_client, operations_v1.OperationsAsyncClient,)
# Ensure that subsequent calls to the property send the exact same object.
assert transport.operations_client is transport.operations_client
def test_index_path():
project = "squid"
location = "clam"
index = "whelk"
expected = "projects/{project}/locations/{location}/indexes/{index}".format(
project=project, location=location, index=index,
)
actual = IndexEndpointServiceClient.index_path(project, location, index)
assert expected == actual
def test_parse_index_path():
expected = {
"project": "octopus",
"location": "oyster",
"index": "nudibranch",
}
path = IndexEndpointServiceClient.index_path(**expected)
# Check that the path construction is reversible.
actual = IndexEndpointServiceClient.parse_index_path(path)
assert expected == actual
def test_index_endpoint_path():
project = "cuttlefish"
location = "mussel"
index_endpoint = "winkle"
expected = "projects/{project}/locations/{location}/indexEndpoints/{index_endpoint}".format(
project=project, location=location, index_endpoint=index_endpoint,
)
actual = IndexEndpointServiceClient.index_endpoint_path(
project, location, index_endpoint
)
assert expected == actual
def test_parse_index_endpoint_path():
expected = {
"project": "nautilus",
"location": "scallop",
"index_endpoint": "abalone",
}
path = IndexEndpointServiceClient.index_endpoint_path(**expected)
# Check that the path construction is reversible.
actual = IndexEndpointServiceClient.parse_index_endpoint_path(path)
assert expected == actual
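# A minimal, dependency-free sketch of the round-trip property the path tests
# above assert: formatting a resource path and re-parsing it recovers the
# original components. The regex below is an assumption that mirrors
# parse_index_endpoint_path, not the client's actual implementation.
def _example_index_endpoint_round_trip():
    import re

    expected = {"project": "p", "location": "l", "index_endpoint": "ie"}
    path = "projects/{project}/locations/{location}/indexEndpoints/{index_endpoint}".format(
        **expected
    )
    match = re.match(
        r"^projects/(?P<project>.+?)/locations/(?P<location>.+?)"
        r"/indexEndpoints/(?P<index_endpoint>.+?)$",
        path,
    )
    assert match and match.groupdict() == expected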
def test_common_billing_account_path():
billing_account = "squid"
expected = "billingAccounts/{billing_account}".format(
billing_account=billing_account,
)
actual = IndexEndpointServiceClient.common_billing_account_path(billing_account)
assert expected == actual
def test_parse_common_billing_account_path():
expected = {
"billing_account": "clam",
}
path = IndexEndpointServiceClient.common_billing_account_path(**expected)
# Check that the path construction is reversible.
actual = IndexEndpointServiceClient.parse_common_billing_account_path(path)
assert expected == actual
def test_common_folder_path():
folder = "whelk"
expected = "folders/{folder}".format(folder=folder,)
actual = IndexEndpointServiceClient.common_folder_path(folder)
assert expected == actual
def test_parse_common_folder_path():
expected = {
"folder": "octopus",
}
path = IndexEndpointServiceClient.common_folder_path(**expected)
# Check that the path construction is reversible.
actual = IndexEndpointServiceClient.parse_common_folder_path(path)
assert expected == actual
def test_common_organization_path():
organization = "oyster"
expected = "organizations/{organization}".format(organization=organization,)
actual = IndexEndpointServiceClient.common_organization_path(organization)
assert expected == actual
def test_parse_common_organization_path():
expected = {
"organization": "nudibranch",
}
path = IndexEndpointServiceClient.common_organization_path(**expected)
# Check that the path construction is reversible.
actual = IndexEndpointServiceClient.parse_common_organization_path(path)
assert expected == actual
def test_common_project_path():
project = "cuttlefish"
expected = "projects/{project}".format(project=project,)
actual = IndexEndpointServiceClient.common_project_path(project)
assert expected == actual
def test_parse_common_project_path():
expected = {
"project": "mussel",
}
path = IndexEndpointServiceClient.common_project_path(**expected)
# Check that the path construction is reversible.
actual = IndexEndpointServiceClient.parse_common_project_path(path)
assert expected == actual
def test_common_location_path():
project = "winkle"
location = "nautilus"
expected = "projects/{project}/locations/{location}".format(
project=project, location=location,
)
actual = IndexEndpointServiceClient.common_location_path(project, location)
assert expected == actual
def test_parse_common_location_path():
expected = {
"project": "scallop",
"location": "abalone",
}
path = IndexEndpointServiceClient.common_location_path(**expected)
# Check that the path construction is reversible.
actual = IndexEndpointServiceClient.parse_common_location_path(path)
assert expected == actual
def test_client_withDEFAULT_CLIENT_INFO():
client_info = gapic_v1.client_info.ClientInfo()
with mock.patch.object(
transports.IndexEndpointServiceTransport, "_prep_wrapped_messages"
) as prep:
client = IndexEndpointServiceClient(
credentials=ga_credentials.AnonymousCredentials(), client_info=client_info,
)
prep.assert_called_once_with(client_info)
with mock.patch.object(
transports.IndexEndpointServiceTransport, "_prep_wrapped_messages"
) as prep:
transport_class = IndexEndpointServiceClient.get_transport_class()
transport = transport_class(
credentials=ga_credentials.AnonymousCredentials(), client_info=client_info,
)
prep.assert_called_once_with(client_info)
| sasha-gitg/python-aiplatform | tests/unit/gapic/aiplatform_v1beta1/test_index_endpoint_service.py | Python | apache-2.0 | 113,731 | ["Octopus"] | 8addb944732d5d71c388eb38750a117e1a4d799d45eb4b59dd6b60cad9942d57 |
from . import base
from grow.common import sdk_utils
from protorpc import messages
from xtermcolor import colorize
import os
import re
import shlex
import subprocess
import sys
class Config(messages.Message):
build_task = messages.StringField(1, default='build')
run_task = messages.StringField(2, default='')
command = messages.StringField(3, default='gulp')
class GulpPreprocessor(base.BasePreprocessor):
KIND = 'gulp'
Config = Config
def run(self, build=True):
# Avoid restarting the Gulp subprocess if the preprocessor is
# being run as a result of restarting the server.
if 'RESTARTED' in os.environ:
return
args = sdk_utils.get_popen_args(self.pod)
task = self.config.build_task if build else self.config.run_task
raw_command = '{} {}'.format(self.config.command, task)
command = shlex.split(raw_command)
process = subprocess.Popen(command, **args)
if not build:
return
code = process.wait()
if code != 0:
text = 'Failed to run: {}'.format(raw_command)
raise base.PreprocessorError(text)
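# A minimal sketch of how the shell command above is assembled, assuming the
# Config defaults; the pod/sdk_utils plumbing that feeds subprocess.Popen is
# deliberately omitted.
def _example_build_command(config=None):
    config = config or Config()
    raw_command = '{} {}'.format(config.command, config.build_task)
    return shlex.split(raw_command)  # -> ['gulp', 'build']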
| denmojo/pygrow | grow/preprocessors/gulp.py | Python | mit | 1,161 | ["GULP"] | 4bc13f47bb50fa5fab883f59fa4b082ef44e5fc051368bf7d68f03e0c72414ac |
"""
Convert Delft3D output (NetCDF) file to a set of vtk files
Usage:
d3d_to_vtk <file> [--scale_z=<sz>]
Options:
--scale_z=<sz> scale z of all grid points due to some instabilities in ParaView [default: 10]
"""
import logging
import docopt
import sources
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
arguments = docopt.docopt(__doc__)
sources.curvi_to_vtk(arguments['<file>'], int(arguments['--scale_z']))
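# Example invocation (hypothetical input file name), matching the usage
# string above:
#
#   python d3d_to_vtk.py trih-output.nc --scale_z=10
#
# docopt returns every option value as a string, which is why --scale_z is
# passed through int() before being handed to sources.curvi_to_vtk.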
| openearth/vtkanim | vtkanim/d3d_to_vtk.py | Python | gpl-3.0 | 453 | ["NetCDF", "ParaView", "VTK"] | eb13ccb5eb0968d42ce616eca932f65d9cf54bc6d09129ad8af6bbc8008ea567 |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
################################################################################
#
# qooxdoo - the new era of web development
#
# http://qooxdoo.org
#
# Copyright:
# 2006-2013 1&1 Internet AG, Germany, http://www.1und1.de
#
# License:
# LGPL: http://www.gnu.org/licenses/lgpl.html
# EPL: http://www.eclipse.org/org/documents/epl-v10.php
# See the LICENSE file in the project's top-level directory for details.
#
# Authors:
# * Thomas Herchenroeder (thron7)
#
################################################################################
##
# Lint-derived checker specifically for unknown globals.
#
# Allows to run a dedicated check for unknown globals when it is necessary and
# a list of build classes is available.
##
import os, sys, re, types
from collections import defaultdict
from ecmascript.frontend import treeutil, lang
from ecmascript.transform.check import lint
from ecmascript.transform.check import scopes
##
# A visitor on a Scope() tree, but uses the same check code for globals checking
# as lint.LintChecker (hence the subclassing).
# Supposed to be faster for dedicated globals checking (when lint.LintChecker does
# all checks, and traverses the (bigger) syntax tree).
#
class GlobalsChecker(lint.LintChecker):
def __init__(self, root_node, file_name_, opts):
super(GlobalsChecker, self).__init__(root_node, file_name_, opts)
def visit(self, scope_node):
if not self.opts.ignore_undefined_globals:
self.unknown_globals(scope_node)
# recurse
for cld in scope_node.children:
self.visit(cld)
# - ---------------------------------------------------------------------------
def globals_check(node, file_name, opts):
    node = scopes.create_scopes(node)  # update scopes
    # Use a dedicated name for the checker so the imported 'lint' module
    # is not shadowed.
    checker = GlobalsChecker(node, file_name, opts)
    checker.visit(node.scope)
    return checker.issues
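# Intended call pattern, sketched under the assumption that the caller already
# holds a parsed syntax tree and a lint options object exposing an
# 'ignore_undefined_globals' attribute (both come from the surrounding tool
# chain):
#
#   issues = globals_check(tree, "qx/Class.js", opts)
#   for issue in issues:
#       print issue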
| sebastienhupin/qxrad | qooxdoo/tool/pylib/ecmascript/transform/check/check_globals.py | Python | lgpl-3.0 | 2,132 | ["VisIt"] | 1ce4d20fc059f311a2621a8684c053f69071bea1ecc63644361cbdcfc6b9eb71 |
# $Id$
#
# Copyright (C) 2001-2008 greg Landrum and Rational Discovery LLC
# All Rights Reserved
#
""" unit testing code for composite models
"""
import unittest
import cPickle
from rdkit.ML.Composite import Composite
from rdkit.ML.DecTree.DecTree import DecTreeNode as Node
from rdkit import RDConfig
class TestCase(unittest.TestCase):
def setUp(self):
#print '\n%s: '%self.shortDescription(),
self.examples = cPickle.load(open(RDConfig.RDCodeDir+'/ML/Composite/test_data/ferro.pkl','rb'))
self.varNames = ['composition','max_atomic','has3d','has4d','has5d','elconc','atvol','isferro']
self.qBounds = [[],[1.89,3.53],[],[],[],[0.55,0.73],[11.81,14.52],[]]
self.nPoss= [0,3,2,2,2,3,3,2]
self.attrs = range(1,len(self.varNames)-1)
from rdkit.ML.Data import DataUtils
DataUtils.InitRandomNumbers((23,43))
def testQuantize(self):
" testing data quantization "
qBounds = [[],[1,2,3]]
examples = [['foo',0],['foo',1.5],['foo',5.5],['foo',2.5]]
answers = [['foo',0],['foo',1],['foo',3],['foo',2]]
nPoss= [0,4]
composite = Composite.Composite()
composite.SetQuantBounds(qBounds,nPoss)
for i in xrange(len(examples)):
qEx = composite.QuantizeExample(examples[i])
assert qEx == answers[i],'quantization of %s failed'%(str(examples[i]))
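  # Worked example of the quantization just tested: with bounds [1, 2, 3] a
  # raw value is replaced by the number of bounds it equals or exceeds, so
  # 0 -> 0, 1.5 -> 1, 2.5 -> 2 and 5.5 -> 3 (hence nPoss of 4 buckets).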
def testTreeGrow(self):
" testing tree-based composite "
self.refCompos = cPickle.load(open(RDConfig.RDCodeDir+'/ML/Composite/test_data/composite_base.pkl','rb'))
composite = Composite.Composite()
composite._varNames=self.varNames
composite.SetQuantBounds(self.qBounds,self.nPoss)
from rdkit.ML.DecTree import CrossValidate
driver = CrossValidate.CrossValidationDriver
pruner = None
composite.Grow(self.examples,self.attrs,[],buildDriver=driver,
pruner=pruner,nTries=100,silent=1)
composite.AverageErrors()
composite.SortModels()
self.treeComposite = composite
assert len(composite) == len(self.refCompos),'length mismatch %d!=%d'%(len(composite),len(self.refCompos))
for i in xrange(len(composite)):
t1,c1,e1 = composite[i]
t2,c2,e2 = self.refCompos[i]
assert e1 == e2, 'error mismatch'
# we used to check for equality here, but since there are redundant errors,
# that's non-trivial.
#assert t1 == t2, 'tree mismatch'
#assert c1 == c2, 'count mismatch'
def testTreeScreen(self):
" testing tree-based composite screening "
self.refCompos = cPickle.load(open(RDConfig.RDCodeDir+'/ML/Composite/test_data/composite_base.pkl','rb'))
testCompos = cPickle.load(open(RDConfig.RDCodeDir+'/ML/Composite/test_data/composite_base.unittree.pkl','rb'))
for example in self.examples:
res,conf = testCompos.ClassifyExample(example)
cRes,cConf = self.refCompos.ClassifyExample(example)
assert res==cRes,'result mismatch'
assert conf==cConf,'confidence mismatch'
def testErrorEstimate(self):
" testing out-of-bag error estimates "
compos = Composite.Composite()
compos.SetQuantBounds([(0.5,),(0.5,),(0.5,),[]],[2,2,2,2])
compos.SetDescriptorNames(('D0','D1','D2','Act'))
compos.SetInputOrder(('ID','D0','D1','D2','Act'))
data = [['A',0,0,0,0],
['B',1,0,0,1],
['C',0,1,0,0],
['D',1,1,1,1]]
#
# Build and validate three simple trees:
#
t1 = Node(None,'D0',0)
n = Node(t1,'D1',1)
t1.AddChildNode(n)
n.AddChildNode(Node(n,'0',0,isTerminal=1))
n.AddChildNode(Node(n,'1',1,isTerminal=1))
n = Node(t1,'D2',2)
t1.AddChildNode(n)
n.AddChildNode(Node(n,'1',1,isTerminal=1))
n.AddChildNode(Node(n,'0',0,isTerminal=1))
assert t1.ClassifyExample(data[0][1:])==0
assert t1.ClassifyExample(data[1][1:])==1
assert t1.ClassifyExample(data[2][1:])==1
assert t1.ClassifyExample(data[3][1:])==0
t1._trainIndices = (0,1)
compos.AddModel(t1,.5)
t2 = Node(None,'D1',1)
n = Node(t2,'D0',0)
t2.AddChildNode(n)
n.AddChildNode(Node(n,'0',0,isTerminal=1))
n.AddChildNode(Node(n,'1',1,isTerminal=1))
n = Node(t2,'D2',2)
t2.AddChildNode(n)
n.AddChildNode(Node(n,'0',0,isTerminal=1))
n.AddChildNode(Node(n,'1',1,isTerminal=1))
assert t2.ClassifyExample(data[0][1:])==0
assert t2.ClassifyExample(data[1][1:])==1
assert t2.ClassifyExample(data[2][1:])==0
assert t2.ClassifyExample(data[3][1:])==1
t2._trainIndices = (1,2)
compos.AddModel(t2,0.0)
t3 = Node(None,'D0',0)
n = Node(t3,'D2',2)
t3.AddChildNode(n)
n.AddChildNode(Node(n,'0',0,isTerminal=1))
n.AddChildNode(Node(n,'1',1,isTerminal=1))
n = Node(t3,'D1',1)
t3.AddChildNode(n)
n.AddChildNode(Node(n,'0',0,isTerminal=1))
n.AddChildNode(Node(n,'1',1,isTerminal=1))
assert t3.ClassifyExample(data[0][1:])==0
assert t3.ClassifyExample(data[1][1:])==0
assert t3.ClassifyExample(data[2][1:])==0
assert t3.ClassifyExample(data[3][1:])==1
t3._trainIndices = (2,3)
compos.AddModel(t3,0.25)
#
# validate the composite itself:
#
pred,conf=compos.ClassifyExample(data[0])
assert pred==0
assert conf==1.0
pred,conf=compos.ClassifyExample(data[1])
assert pred==1
assert conf==2./3.
pred,conf=compos.ClassifyExample(data[2])
assert pred==0
assert conf==2./3.
pred,conf=compos.ClassifyExample(data[3])
assert pred==1
assert conf==2./3.
#
# now test the out-of-bag calculation:
#
pred,conf=compos.ClassifyExample(data[0],onlyModels=(1,2))
assert pred==0
assert conf==1.0
pred,conf=compos.ClassifyExample(data[1],onlyModels=(2,))
assert pred==0
assert conf==1.0
pred,conf=compos.ClassifyExample(data[2],onlyModels=(0,))
assert pred==1
assert conf==1.0
pred,conf=compos.ClassifyExample(data[3],onlyModels=(0,1))
assert pred==0
assert conf==0.5
if __name__ == '__main__':
unittest.main()
| rdkit/rdkit-orig | rdkit/ML/Composite/UnitTestComposite.py | Python | bsd-3-clause | 5,980 | ["RDKit"] | 78742594293e794cb4b62be55a47e4972858d2789929bdea377a2ca873356352 |
# -*- coding: utf-8 -*-
"""
Created on Wed Oct 21 12:08:08 2015
@author: noore
"""
import os, re, csv
import pandas as pd
import matplotlib
import seaborn as sns
from colorsys import hls_to_rgb
from cycler import cycler
import numpy as np
import matplotlib.pyplot as plt
from collections import OrderedDict
import definitions as D
from hashlib import sha1
from matplotlib import rcParams
sns.set()
sns.set(style="white", context="paper", font="monospaced")
rcParams['font.size'] = 14.0
rcParams['font.family'] = 'sans-serif'
rcParams['font.sans-serif'] = 'Arial'
rcParams['legend.fontsize'] = 'medium'
rcParams['axes.labelsize'] = 14.0
rcParams['axes.titlesize'] = 14.0
rcParams['xtick.labelsize'] = 12.0
rcParams['ytick.labelsize'] = 12.0
matplotlib.rcParams['text.usetex'] = True
matplotlib.rcParams['text.latex.preamble'] = \
r'\usepackage{txfonts},\usepackage{lmodern},\usepackage{cmbright}'
matplotlib.rcParams['font.family'] = 'sans-serif'
matplotlib.rcParams['font.sans-serif'] = 'Helvetica'
matplotlib.rcParams['font.weight'] = 'medium'
matplotlib.rcParams['font.style'] = 'normal'
matplotlib.rcParams['mathtext.fontset'] = 'stixsans'
BASE_DIR = os.path.expanduser('~/git/flux-enzyme-cost-minimization')
DATA_DIR = os.path.join(BASE_DIR, 'data')
TEMP_DIR = os.path.join(BASE_DIR, 'tmp')
OUTPUT_DIR = os.path.join(BASE_DIR, 'res')
ZIP_SVG_FNAME = os.path.join(OUTPUT_DIR, 'all_efms.zip')
INPUT_SVG_FNAME = os.path.join(DATA_DIR, 'Ecoli_Carlson_2016_05_09.svg')
REACTION_NAME_FNAME = os.path.join(DATA_DIR, 'reaction_name_mapping.csv')
PROTEOME_FNAME = os.path.join(DATA_DIR, 'protein_abundance_from_schmidt_et_al.csv')
MODEL_DIR = os.path.join(BASE_DIR, 'models/Escherichia_coli_Wortel_et_al_2017')
ECOLI_MODEL_FNAME = os.path.join(MODEL_DIR, 'Ecoli_Central_Metabolism.xlsx')
DATA_FILES = {
'standard': [['n39-p1'], None],
'anaerobic': [['n39-p7'], None],
'sweep_oxygen': [['n39-p11'], 'mext-oxygen-'],
'sweep_kcat_r6r': [['n39-p13'], 'kcat-r6r-'],
'low_kcat_r6r': [['n39-p14'], None],
'sweep_kcat_r70': [['n39-p15'], 'kcat-r70-'],
'sweep_glucose': [['n39-p16'], 'mext-glucoseExt-'],
'monod_glucose_aero': [['n39-p17'], 'mext-glucoseExt-'],
'monod_glucose_anae': [['n39-p18'], 'mext-glucoseExt-'],
}
for v in DATA_FILES.values():
v[0] = [os.path.join(DATA_DIR, f + '.zip') for f in v[0]]
SELECT_EFM_DF = pd.read_csv(os.path.join(DATA_DIR, 'n39-select_EFMs.csv'),
index_col=0)
SELECT_EFM_DF.index.name = 'efm'
efm_dict = OrderedDict()
for efm in SELECT_EFM_DF.index:
efm_dict[int(efm)] = SELECT_EFM_DF.loc[efm, ['color', 'label']]
R_BIOMASS = 'R70'
R_GLUCOSE_IN = 'R1'
R_ACETATE_OUT = 'R91'
R_FORMATE_OUT = 'R96'
R_SUCCINATE_OUT = 'R95'
R_LACTATE_OUT = 'R94'
R_NH3_IN = 'R93'
R_OXYGEN_DEPENDENT = ['R80', 'R27'] # oxphos and sdh
#R_OXYGEN_SENSITIVE = ['R20', 'R27b'] # pfl and frd
R_OXYGEN_SENSITIVE = ['R20'] # pfl (without frd)
R_MAINTENANCE = 'R82'
R_PPP = 'R10a'
R_TCA = 'R22'
R_ED = 'R60'
R_PDH = 'R21'
R_UPPER_GLYCOLYSIS = 'R5r'
R_SUC_FUM_CYCLE = ['R27', 'R27b']
C_IN_BIOMASS = 580
BIOMASS_MW = 20666 # [mg/mmol]
C_IN_GLUCOSE = 6.0
C_IN_ACETATE = 2.0
C_IN_FORMATE = 1.0
C_IN_SUCCINATE = 4.0
C_IN_LACTATE = 3.0
SECONDS_IN_HOUR = 3600.0
# settings for sweeps and Monod surface parameters
STD_CONC = {'glucoseExt': 1e2, 'oxygen': 2.1e-1} # in mM
MIN_CONC = {'glucoseExt': 1e-4, 'oxygen': 1e-3} # in mM
MAX_CONC = {'glucoseExt': 1e4, 'oxygen': 1e1} # in mM
LOW_CONC = {'glucoseExt': 1e-1, 'oxygen': 2.1e-3} # in mM
GLU_COL = 'glucose level (mM)'
OX_COL = 'O$_2$ level (mM)'
ALPHA_CCM = 0.25 # The mass fraction of metabolic enzyme within the proteome
ALPHA_PROT = 0.5 # The mass fraction of protein within biomass
GR_PARAM_A = 0.27 # unitless
GR_PARAM_B = 0.2 # in [h]
# linear growth rate function:
#GR_FUNCTION = lambda r_BM : r_BM * ALPHA_CCM * ALPHA_PROT
# non-linear growth rate function:
GR_FUNCTION = lambda r_BM : r_BM * GR_PARAM_A * ALPHA_PROT * \
(1.0 + GR_PARAM_B * ALPHA_PROT * r_BM)**(-1)
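# Worked example: for r_BM = 1.0, the non-linear form gives
# 1.0 * 0.27 * 0.5 / (1 + 0.2 * 0.5 * 1.0) = 0.135 / 1.1 ~= 0.123 [h^-1].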
GENERAL_CMAP = 'magma_r'
GR_HEATMAP_CMAP = 'magma_r'
EPISTATIS_CMAP = 'RdBu'
def FIX_REACTION_ID(r, throw_exception=True):
hits = re.findall('(^[Rr]+)(\d+.*)', r)
if len(hits) != 1:
if throw_exception:
raise ValueError('reaction name does not match the required pattern: ' + r)
else:
return r
return hits[0][0].upper() + hits[0][1].lower()
with open(REACTION_NAME_FNAME, 'r') as fp:
REACTION_DICT = {r['id']: r['name'] for r in csv.DictReader(fp)}
# read the protein abundance measurements and
# convert from copies/cell to mM
SINGLE_CELL_VOLUME = 2.4e-15 # Liters, Schmidt et al. 2015 (for glucose chemostat mu=0.35)
AVOGADRO = 6.02e23
copies_per_cell_to_mM = lambda x: float(x) / SINGLE_CELL_VOLUME / AVOGADRO * 1.0e3
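# Worked example: 1000 copies/cell -> 1000 / 2.4e-15 / 6.02e23 * 1e3
# ~= 6.9e-4 mM, i.e. roughly 0.7 uM.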
with open(PROTEOME_FNAME, 'r') as fp:
PROTEOME_DICT = {r['id']: copies_per_cell_to_mM(r['abundance'])
for r in csv.DictReader(fp)}
def GET_REACTION_NAME(r):
if r == 'other':
return 'other'
return REACTION_DICT[FIX_REACTION_ID(r)]
GLUCOER_UPTAKE_L = r'glucose uptake [mM / s]'
GROWTH_RATE_L = r'growth rate [h$^{-1}$]'
BIOMASS_PROD_PER_ENZ_L = 'biomass production [gr dw h$^{-1}$ / gr enz]'
YIELD_L = 'biomass yield [gr dw / mol C glc]'
YIELD_MOL_L = 'biomass yield [mol C biomass / mol C glc]'
ACE_L = 'acetate secretion [mol C ace / mol C glc]'
FORMATE_L = 'formate secretion [mol for / mol C glc]'
SUCCINATE_L = 'succinate secretion [mol C suc / mol C glc]'
LACTATE_L = 'lactate secretion [mol C lac / mol C glc]'
NH3_L = 'ammonia uptake [mol NH3 / mol C glc]'
PPP_L = 'pentose phosphate flux (relative to uptake)'
TCA_L = 'TCA cycle flux (citrate synthase relative to uptake)'
ED_L = 'ED pathway flux (relative to uptake)'
UPPER_GLYCOLYSIS_L = 'upper glycolysis (relative to uptake)'
PDH_L = 'pyruvate dehydrogenase (relative to uptake)'
MAITENANCE_L = 'maintenance cost [mM ATP / s]'
STRICTLY_AEROBIC_L = 'aerobic'
STRICTLY_ANAEROBIC_L = 'anaerobic'
SUC_FUM_CYCLE_L = 'succinate-fumarate cycle'
COST_UNITS = '[gr enz / gr dw h$^{-1}$]'
OXYGEN_L = r'oxygen uptake [mol O$_2$ / mol C glc]'
N_REACTION_L = 'number of active reactions'
TOT_FLUX_L = 'sum of fluxes relative to uptake [a.u.]'
TOT_FLUX_SA_L = r'ideal cost %s' % COST_UNITS
INV_TOT_FLUX_SA_L = r'Pathway specific activity'
TOT_ENZYME_L = 'total enzyme %s' % COST_UNITS
## labels for measured flux data
MEAS_FLUX_L = 'measured fluxes from Gerosa et al. [mM/s]'
MEAS_STDEV_L = 'standard deviation [mM/s]'
PRED_FLUX_L = 'projected fluxes [mM/s]'
RESID_L = 'residual [mM/s]'
## colors for plots:
PARETO_NEUTRAL_COLOR = (0.9, 0.7, 0.7)
PARETO_STRONG_COLOR = (0.4, 0.2, 0.9)
BAR_COLOR = (0.8, 0.4, 0.5)
PARETO_CMAP_LOWEST = (0.8, 0.8, 0.8)
PARETO_CMAP_HIGHEST = (0.1, 0.1, 0.1)
# H values between 0.165 and 0.5 are banned (too green)
# hls_to_rgb(0.165, 0.5, 1) -> (1.0, 1.0, 0.0)
# hls_to_rgb(0.50, 0.5, 1) -> (0.0, 1.0, 1.0)
GAP_START = 0.165
GAP_SIZE = 0.335
rand2hue = lambda x: x*(1.0-GAP_SIZE) + GAP_SIZE*(x > GAP_START/(1.0-GAP_SIZE))
REMAINDER_L = 'other'
ANAEROBIC_OXYGEN_LEVEL = 1e-2 # in mM, i.e. equal to 10 uM
PREDIFINED_COLORS = {REMAINDER_L: hls_to_rgb(0, 0.8, 0),
'R80': hls_to_rgb(0.0, 0.5, 0.8), # red - ox phos
'R1': hls_to_rgb(0.6, 0.5, 0.7), # blue - glucose uptake
'RR9': hls_to_rgb(0.1, 0.5, 0.7)} # yellow - PEP->PYR
def get_figure_data():
fig_names = DATA_FILES.keys()
fig_dfs = map(lambda s: pd.read_pickle(os.path.join(TEMP_DIR, s + '.pkl')),
fig_names)
return dict(zip(fig_names, fig_dfs))
def get_projected_exp_fluxes():
fluxes_df = pd.read_pickle(os.path.join(TEMP_DIR, 'measured_fluxes.pkl'))
return fluxes_df
def pareto_cmap(h_mid):
"""
Creates a colormap where the edges are light and dark grey, and the
    center is the colour derived from the given HLS hue h_mid
"""
rgb_mid = hls_to_rgb(h_mid, 0.5, 0.6)
return matplotlib.colors.LinearSegmentedColormap.from_list(
'bright-color-dark', [PARETO_CMAP_LOWEST, rgb_mid, PARETO_CMAP_HIGHEST])
def plot_basic_pareto(data, ax, x, y, s=10, marker='o', c=None,
facecolors=(0.85, 0.85, 0.85), edgecolors='none',
paretofacecolors='none', paretoedgecolors='none',
paretosize=20, paretomarker='s',
efm_dict=None,
show_efm_labels=True,
**kwargs):
"""
make plot gr vs yield for all EFMs
"""
xdata = data[x]
ydata = data[y]
if c is not None:
# if the c-value of all the data points is 0, use gray color
# (otherwise, by default, the cmap will give 0 the middle color)
cdata = data.loc[:, c]
CS = ax.scatter(xdata, ydata, s=s, c=cdata, marker=marker,
facecolors=facecolors, edgecolors=edgecolors,
**kwargs)
cbar = plt.colorbar(CS, ax=ax)
cbar.set_label(c)
else:
cdata = None
CS = ax.scatter(xdata, ydata, s=s, marker=marker,
facecolors=facecolors, edgecolors=edgecolors,
**kwargs)
ax.set_xlabel(x)
ax.set_ylabel(y)
if efm_dict is not None:
for efm, (col, lab) in efm_dict.items():
if efm in data.index:
ax.plot(data.at[efm, x], data.at[efm, y], markersize=5,
marker=marker, color=col, label=None)
if show_efm_labels:
ax.annotate(lab, xy=(data.at[efm, x], data.at[efm, y]),
xytext=(0, 5), textcoords='offset points',
ha='center', va='bottom', color=col)
if paretofacecolors != 'none' or paretoedgecolors != 'none':
# find the EFMs which are on the pareto front and mark them
pareto_idx = []
for i in ydata.sort_values(ascending=False).index:
if pareto_idx == [] or xdata[i] > xdata[pareto_idx[-1]]:
pareto_idx.append(i)
xpareto = xdata[pareto_idx]
ypareto = ydata[pareto_idx]
cpareto = pd.Series(index=pareto_idx,
data=[paretofacecolors]*len(pareto_idx))
if efm_dict is not None:
for efm, (col, lab) in efm_dict.items():
if efm in pareto_idx:
cpareto[efm] = col
ax.scatter(x=xpareto, y=ypareto, s=paretosize, c=cpareto,
marker=paretomarker, edgecolors=paretoedgecolors,
label=None)
return CS
def get_pareto(data, x, y):
xdata = data[x]
ydata = data[y]
pareto_idx = []
for i in ydata.sort_values(ascending=False).index:
if pareto_idx == [] or xdata[i] > xdata[pareto_idx[-1]]:
pareto_idx.append(i)
return data.loc[pareto_idx, [x, y]]
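# Worked example of the Pareto scan above: candidates are visited in order of
# decreasing y and kept only if their x beats the last point kept, so for the
# (x, y) pairs (1, 5), (3, 4), (2, 3), (4, 1) the front is (1, 5), (3, 4), (4, 1).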
def plot_dual_pareto(data0, label0, data1, label1, ax, x, y,
s=15, marker='o', c0=None, c1=None, draw_lines=True,
**kwargs):
"""
Plot a comparative Pareto plot, where data0 is the standard condition
and data1 is a perturbation. In addition, use a colormap for the data1
scatter plot (using the label 'c').
"""
if c0 is None:
c0 = D.PARETO_NEUTRAL_COLOR
if c1 is None:
c1 = D.PARETO_STRONG_COLOR
# a grey Pareto plot for data0
plot_basic_pareto(data0, ax, x, y, s=s, marker=marker,
edgecolors=(0.7, 0.7, 1),
facecolors=(0.7, 0.7, 1),
paretofacecolors=(0, 0, 0.5),
paretoedgecolors=(0, 0, 0.5),
paretosize=40,
paretomarker='v',
label=label0, show_efm_labels=False,
**kwargs)
# a full-blown Pareto plot for data1
plot_basic_pareto(data1, ax, x, y, s=s, marker=marker,
edgecolors=(1, 0.7, 0.7),
facecolors=(1, 0.7, 0.7),
paretofacecolors=(0.5, 0, 0),
paretoedgecolors=(0.5, 0, 0),
paretosize=40,
paretomarker='^',
label=label1, show_efm_labels=False,
**kwargs)
# add lines connecting the two conditions
if draw_lines:
data = data0[[x,y]].join(data1[[x,y]], lsuffix='', rsuffix='_1')
for i in data.index:
x0,y0 = data.loc[i, [x, y ]]
x1,y1 = data.loc[i, [x+'_1', y+'_1']]
ax.plot([x0, x1], [y0, y1], '-', color=(0, 0, 0), linewidth=0.2,
label=None, alpha=0.15)
ax.legend(loc='upper center', fontsize=12)
def plot_scatter_with_all_labels(data, ax, x, y,
facecolors='blue', edgecolors='none', alpha=1):
"""
make plot gr vs yield for all EFMs
"""
xdata = data.loc[:, x]
ydata = data.loc[:, y]
ax.scatter(xdata, ydata, s=5, marker='o', alpha=alpha,
facecolors=facecolors, edgecolors=edgecolors)
ax.set_xlabel(x)
ax.set_ylabel(y)
for i in data.index:
ax.text(data[x][i], data[y][i], '%d' % i, fontsize=4)
def plot_sweep(data, xlabel, efm_dict, ax, legend_loc='lower right', legend_fontsize=10):
"""make line plots of gr vs parameter for all selected EFMs"""
colors, labels = zip(*efm_dict.values())
efm_cycler = cycler('color', colors)
ax.set_prop_cycle(efm_cycler)
efm_data = data.loc[efm_dict.keys(), :]
efm_data.transpose().plot(kind='line', ax=ax, linewidth=2)
ax.legend(labels, loc=legend_loc, fontsize=legend_fontsize, labelspacing=0.2)
ax.set_xlabel(xlabel)
ax.set_ylabel(r'growth rate [h$^{-1}$]')
ax.set_ylim([0, None])
def string_to_random_rgb(s, min_l=0.1, max_l=0.6, min_s=0.1, max_s=0.6):
"""
generate 3 pseudorandom numbers from the hash-function of the name
"""
seed = int(int(sha1(s.encode('utf-8')).hexdigest(), 16) % 1e7)
np.random.seed(seed)
h = rand2hue(np.random.rand())
l = min_l + np.random.rand() * (max_l - min_l)
s = min_s + np.random.rand() * (max_s - min_s)
return hls_to_rgb(h, l, s)
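# Determinism note: the generator is reseeded from sha1(s) on every call, so
# the same input string always maps to the same colour, e.g.
# string_to_random_rgb('R22') == string_to_random_rgb('R22').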
def reaction_to_rgb(r):
"""
Use a hash function to randomly choose a color (in a certain range
of hues, without green, and range of saturation and brightness).
The function is pseudorandom, but will always return the same
color for the same input string.
"""
if r in PREDIFINED_COLORS:
return PREDIFINED_COLORS[r]
else:
return string_to_random_rgb(r, min_l=0.1, max_l=0.6, min_s=0.1, max_s=0.6)
def efm_to_hex(efm):
"""
Use a hash function to randomly choose a color (in a certain range
of hues, without green, and range of saturation and brightness).
The function is pseudorandom, but will always return the same
color for the same input string.
"""
if efm in efm_dict:
return efm_dict[efm]['color']
else:
rgb = string_to_random_rgb(str(efm), min_l=0.3, max_l=0.8, min_s=0.1, max_s=0.8)
return matplotlib.colors.rgb2hex(rgb)
def cycle_colors(n, h0=0.0, l=0.5, s=0.8):
for x in np.linspace(0, 1, n+1)[:-1]:
h = rand2hue((h0 + x) % 1.0)
rgb = hls_to_rgb(h, l, s)
yield matplotlib.colors.rgb2hex(rgb)
def cycle_colors_rand(n, min_l=0.4, max_l=0.5, min_s=0.8, max_s=0.8, seed=1984):
np.random.seed(seed)
for i in range(n):
x = np.random.rand()
h = rand2hue(x)
l = min_l + np.random.rand() * (max_l - min_l)
s = min_s + np.random.rand() * (max_s - min_s)
rgb = hls_to_rgb(h, l, s)
yield matplotlib.colors.rgb2hex(rgb)
def allocation_area_plot(data, ax0=None, ax1=None, xlabel='',
n_best=10):
"""
data - a DataFrame of index=parameter, columns=reactions, values=cost
"""
normed_data = data.div(data.sum(axis=1), axis=0)
# sort the reactions in decreasing order of the mean value across
    # all conditions
sorted_reactions = normed_data.mean(axis=0).sort_values(ascending=False).index
sig_cols = sorted_reactions[:n_best]
rem_cols = sorted_reactions[n_best:]
remainder = data[rem_cols].sum(1)
remainder.name = REMAINDER_L
lumped_data = data[sig_cols].join(remainder).abs()
normed_remainder = normed_data[rem_cols].sum(1)
normed_remainder.name = REMAINDER_L
lumped_normed_data = normed_data[sig_cols].join(normed_remainder).abs()
if ax0 is not None:
ax0.stackplot(lumped_data.index.tolist(), lumped_data.as_matrix().T,
colors=map(D.reaction_to_rgb, lumped_data.columns))
ax0.set_xlabel(xlabel)
ax0.set_ylabel('absolute enzyme costs\n%s' % COST_UNITS)
ax0.set_xscale('log')
ax0.set_xlim(lumped_data.index.min(), lumped_data.index.max())
ax0.legend(list(map(D.GET_REACTION_NAME, lumped_data.columns)),
fontsize='small')
if ax1 is not None:
# Then, also make a copy with normalize values (i.e. each row
# sums up to 1, i.e. the fraction of the total cost)
ax1.stackplot(lumped_normed_data.index.tolist(),
lumped_normed_data.as_matrix().T,
colors=map(D.reaction_to_rgb, lumped_normed_data.columns))
ax1.set_xlabel(xlabel)
ax1.set_ylabel('fraction of enzyme costs')
ax1.set_xscale('log')
ax1.set_ylim(0.0, 1.0)
ax1.set_xlim(lumped_normed_data.index.min(),
lumped_normed_data.index.max())
if ax0 is None:
ax1.legend(list(map(D.GET_REACTION_NAME, lumped_normed_data.columns)),
fontsize='small')
def as_base10_exp(x):
return '$10^{%d}$' % int(np.round(np.log10(x)))
def savefig(fig, fignum):
fig.savefig(os.path.join(D.OUTPUT_DIR, 'Fig%s.pdf' % fignum))
fig.savefig(os.path.join(D.OUTPUT_DIR, 'Fig%s.svg' % fignum))
fig.savefig(os.path.join(D.OUTPUT_DIR, 'Fig%s.eps' % fignum))
| liebermeister/flux-enzyme-cost-minimization | scripts/definitions.py | Python | gpl-2.0 | 18,294 | ["Avogadro"] | b6708069f73a23bebb6db84638ca9b531bf957fa222233e545db20d587a67df3 |
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Implements the graph generation for computation of gradients."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import contextlib
import warnings
import numpy as np
import six
from six.moves import xrange # pylint: disable=redefined-builtin
from tensorflow.core.framework import attr_value_pb2
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.framework import tensor_shape
from tensorflow.python.framework import tensor_util
from tensorflow.python.ops import array_grad # pylint: disable=unused-import
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import control_flow_grad # pylint: disable=unused-import
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import functional_ops
from tensorflow.python.ops import image_grad # pylint: disable=unused-import
from tensorflow.python.ops import linalg_grad # pylint: disable=unused-import
from tensorflow.python.ops import linalg_ops # pylint: disable=unused-import
from tensorflow.python.ops import logging_ops # pylint: disable=unused-import
from tensorflow.python.ops import math_grad # pylint: disable=unused-import
from tensorflow.python.ops import math_ops
from tensorflow.python.platform import tf_logging as logging
# Warn the user if we convert a sparse representation to dense with at
# least this number of elements.
_LARGE_SPARSE_NUM_ELEMENTS = 100000000
def _IndexedSlicesToTensor(value, dtype=None, name=None, as_ref=False):
"""Converts an IndexedSlices object `value` to a Tensor.
NOTE(mrry): This function is potentially expensive.
Args:
value: An ops.IndexedSlices object.
dtype: The dtype of the Tensor to be returned.
name: Optional name to use for the returned Tensor.
as_ref: True if a ref is requested.
Returns:
A dense Tensor representing the values in the given IndexedSlices.
Raises:
ValueError: If the IndexedSlices does not have the same dtype.
"""
_ = as_ref
if dtype and not dtype.is_compatible_with(value.dtype):
raise ValueError(
"Tensor conversion requested dtype %s for IndexedSlices with dtype %s" %
(dtype.name, value.dtype.name))
if value.dense_shape is None:
raise ValueError(
"Tensor conversion requested for IndexedSlices without dense_shape: %s"
% str(value))
# TODO(mrry): Consider adding static shape information to
# IndexedSlices, to avoid using numpy here.
dense_shape_value = tensor_util.constant_value(value.dense_shape)
if dense_shape_value is not None:
num_elements = np.prod(dense_shape_value)
if num_elements >= _LARGE_SPARSE_NUM_ELEMENTS:
warnings.warn(
"Converting sparse IndexedSlices to a dense Tensor with %d elements. "
"This may consume a large amount of memory." % num_elements)
else:
warnings.warn(
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
"This may consume a large amount of memory.")
return math_ops.unsorted_segment_sum(
value.values, value.indices, value.dense_shape[0], name=name)
ops.register_tensor_conversion_function(ops.IndexedSlices,
_IndexedSlicesToTensor)
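# Sketch of what the registration above enables (variable names are
# illustrative):
#
#   slices = ops.IndexedSlices(values=v, indices=i, dense_shape=d)
#   dense = ops.convert_to_tensor(slices)  # dispatches to _IndexedSlicesToTensor
#
# i.e. any op that requests a plain Tensor can consume IndexedSlices, at the
# cost of a dense materialization via unsorted_segment_sum.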
def _MarkReachedOps(from_ops, reached_ops):
"""Mark all ops reached from "from_ops".
Args:
from_ops: list of Operations.
reached_ops: list of booleans, indexed by operation id.
"""
queue = collections.deque()
queue.extend(from_ops)
while queue:
op = queue.popleft()
if not reached_ops[op._id]:
reached_ops[op._id] = True
for output in op.outputs:
queue.extend(output.consumers())
def _GatherInputs(to_ops, reached_ops):
"""List all inputs of to_ops that are in reached_ops.
Args:
to_ops: list of Operations.
reached_ops: list of booleans, indexed by operation id.
Returns:
The list of all inputs of to_ops that are in reached_ops.
That list includes all elements of to_ops.
"""
inputs = []
queue = collections.deque()
queue.extend(to_ops)
while queue:
op = queue.popleft()
# We are interested in this op.
if reached_ops[op._id]:
inputs.append(op)
# Clear the boolean so we won't add the inputs again.
reached_ops[op._id] = False
for inp in op.inputs:
queue.append(inp.op)
return inputs
def _PendingCount(graph, to_ops, from_ops, colocate_gradients_with_ops):
"""Initialize the pending count for ops between two lists of Operations.
'pending_count[op._id]' indicates the number of backprop inputs
to this operation.
Args:
graph: a Graph.
to_ops: list of Operations.
from_ops: list of Operations.
colocate_gradients_with_ops: Python bool. See docstring of gradients().
Returns:
A tuple containing: (1) a list of integers indexed by operation id,
indicating the number of backprop inputs to this operation, and (2)
a ControlFlowState object which is not None if the ops between from_ops
and to_ops contain control flow loops.
"""
# Mark reachable ops from from_ops.
reached_ops = [False] * (graph._last_id + 1)
for op in to_ops:
reached_ops[op._id] = True
_MarkReachedOps(from_ops, reached_ops)
# Mark between ops.
between_ops = [False] * (graph._last_id + 1)
between_op_list = []
queue = collections.deque()
queue.extend(to_ops)
while queue:
op = queue.popleft()
# We are interested in this op.
if reached_ops[op._id]:
between_ops[op._id] = True
between_op_list.append(op)
# Clear the boolean so we won't add the inputs again.
reached_ops[op._id] = False
for inp in op.inputs:
queue.append(inp.op)
# 'loop_state' is None if there are no while loops.
loop_state = control_flow_ops.MaybeCreateControlFlowState(
between_op_list, between_ops, colocate_gradients_with_ops)
# Initialize pending count for between ops.
pending_count = [0] * (graph._last_id + 1)
for op in between_op_list:
for x in op.inputs:
if between_ops[x.op._id]:
pending_count[x.op._id] += 1
return pending_count, loop_state
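# A minimal pure-Python sketch of the pending-count bookkeeping above (toy
# graph with hypothetical op names, not TensorFlow objects): an op's count
# is the number of "between" consumers that will hand it a gradient during
# backprop.
#
#   inputs_of = {"y": ["a", "x"], "a": ["x"]}  # op -> its inputs
#   pending = {}
#   for op, inputs in inputs_of.items():
#       for inp in inputs:
#           pending[inp] = pending.get(inp, 0) + 1
#   # pending == {"a": 1, "x": 2}: x must wait for gradients from a and y.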
def _AsList(x):
return x if isinstance(x, (list, tuple)) else [x]
def _DefaultGradYs(grad_ys, ys, colocate_gradients_with_ops):
"""Fill in default values for grad_ys.
Args:
grad_ys: List of gradients, can contain None.
ys: List of tensors.
colocate_gradients_with_ops: If True, try colocating gradients with
the corresponding op.
Returns:
A list of gradients to use, without None.
Raises:
ValueError: If sizes of gradients and inputs don't match
TypeError: If type of any gradient is not valid for its input.
"""
if len(grad_ys) != len(ys):
raise ValueError("Passed %d grad_ys for %d ys" % (len(grad_ys), len(ys)))
grad_ys = ops.convert_n_to_tensor_or_indexed_slices(grad_ys, name="grad_y")
for i in xrange(len(grad_ys)):
grad_y = grad_ys[i]
y = ys[i]
if grad_y is None:
with _maybe_colocate_with(y.op, colocate_gradients_with_ops):
grad_ys[i] = array_ops.fill(
array_ops.shape(y), constant_op.constant(
1, dtype=y.dtype))
continue
if y.dtype.is_real or y.dtype.is_integer:
if not grad_y.dtype.is_real and not grad_y.dtype.is_integer:
raise TypeError("Gradient type %s generated for real or "
"integer-valued tensor %s with type %s must be "
"real or integer" %
(dtypes.as_dtype(grad_y.dtype).name, y,
dtypes.as_dtype(y.dtype).name))
elif y.dtype.is_complex:
if not grad_y.dtype.is_complex:
raise TypeError("Gradient type %s generated for complex-valued "
"tensor %s with type %s must be real" %
(dtypes.as_dtype(grad_y.dtype).name, y,
dtypes.as_dtype(y.dtype).name))
else:
raise TypeError("Tensor %s with type %s must be numeric "
"to obtain a default gradient" %
(y, dtypes.as_dtype(y.dtype).name))
return grad_ys
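# A minimal sketch of the default seeding above (illustrative): when a
# grad_y is None, the seed gradient becomes a ones tensor shaped and typed
# like y, so `gradients()` effectively differentiates sum(ys).
#
#   grad_ys = _DefaultGradYs([None], [y], colocate_gradients_with_ops=False)
#   # grad_ys[0] == fill(shape(y), constant(1, dtype=y.dtype))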
def _IsTrainable(tensor):
dtype = dtypes.as_dtype(tensor.dtype)
return dtype.base_dtype in (dtypes.float16, dtypes.float32, dtypes.float64,
dtypes.complex64, dtypes.complex128)
def _VerifyGeneratedGradients(grads, op):
"""Verify that gradients are valid in number and type.
Args:
grads: List of generated gradients.
    op: Operation for which the gradients were generated.
Raises:
ValueError: if sizes of gradients and inputs don't match.
TypeError: if type of any gradient is not valid for its input.
"""
if len(grads) != len(op.inputs):
raise ValueError("Num gradients %d generated for op %s do not match num "
"inputs %d" % (len(grads), op.node_def, len(op.inputs)))
for i in xrange(len(grads)):
grad = grads[i]
inp = op.inputs[i]
if grad is None:
continue
if grad.dtype.is_real:
if not inp.dtype.is_real:
raise TypeError("Gradient type %s generated for real-valued op %s "
"with type %s must be real" %
(dtypes.as_dtype(grad.dtype).name, op.node_def,
dtypes.as_dtype(inp.dtype).name))
elif grad.dtype.is_complex:
if not inp.dtype.is_complex:
raise TypeError("Gradient type %s generated for complex-valued op %s"
" with type %s must be complex" %
(dtypes.as_dtype(grad.dtype).name, op.node_def,
dtypes.as_dtype(inp.dtype).name))
else:
raise TypeError("Gradient type %s generated for op %s "
"with type %s must be either real or complex" %
(dtypes.as_dtype(grad.dtype).name, op.node_def,
dtypes.as_dtype(inp.dtype).name))
def _StopOps(from_ops, pending_count):
"""The set of ops that terminate the gradient computation.
This computes the frontier of the forward graph *before* which backprop
should stop. Operations in the returned set will not be differentiated.
This set is defined as the subset of `from_ops` containing ops that have
no predecessor in `from_ops`. `pending_count` is the result of
  `_PendingCount(graph, to_ops, from_ops, colocate_gradients_with_ops)`.
  An 'op' has predecessors in `from_ops` iff pending_count[op._id] > 0.
Args:
from_ops: list of Operations.
pending_count: List of integers, indexed by operation id.
Returns:
The set of operations.
"""
stop_ops = set()
for op in from_ops:
is_stop_op = True
for inp in op.inputs:
if pending_count[inp.op._id] > 0:
is_stop_op = False
break
if is_stop_op:
stop_ops.add(op._id)
return stop_ops
@contextlib.contextmanager
def _maybe_colocate_with(op, colocate_gradients_with_ops):
"""Context to colocate with `op` if `colocate_gradients_with_ops`."""
if colocate_gradients_with_ops:
with ops.colocate_with(op):
yield
else:
yield
def _SymGrad(op, out_grads):
"""Backprop through a function call node op given its outputs' gradients."""
f_in = [x for x in op.inputs] + out_grads
f_types = [x.dtype for x in op.inputs]
f = attr_value_pb2.NameAttrList()
f.name = op.type
for k in op.node_def.attr:
f.attr[k].CopyFrom(op.node_def.attr[k])
# pylint: disable=protected-access
in_grads = functional_ops._symbolic_gradient(input=f_in, Tout=f_types, f=f)
# pylint: enable=protected-access
return in_grads
def gradients(ys,
xs,
grad_ys=None,
name="gradients",
colocate_gradients_with_ops=False,
gate_gradients=False,
aggregation_method=None):
"""Constructs symbolic partial derivatives of sum of `ys` w.r.t. x in `xs`.
`ys` and `xs` are each a `Tensor` or a list of tensors. `grad_ys`
is a list of `Tensor`, holding the gradients received by the
`ys`. The list must be the same length as `ys`.
`gradients()` adds ops to the graph to output the partial
derivatives of `ys` with respect to `xs`. It returns a list of
`Tensor` of length `len(xs)` where each tensor is the `sum(dy/dx)`
for y in `ys`.
`grad_ys` is a list of tensors of the same length as `ys` that holds
the initial gradients for each y in `ys`. When `grad_ys` is None,
we fill in a tensor of '1's of the shape of y for each y in `ys`. A
user can provide their own initial `grad_ys` to compute the
derivatives using a different initial gradient for each y (e.g., if
one wanted to weight the gradient differently for each value in
each y).
Args:
ys: A `Tensor` or list of tensors to be differentiated.
xs: A `Tensor` or list of tensors to be used for differentiation.
grad_ys: Optional. A `Tensor` or list of tensors the same size as
`ys` and holding the gradients computed for each y in `ys`.
name: Optional name to use for grouping all the gradient ops together.
defaults to 'gradients'.
colocate_gradients_with_ops: If True, try colocating gradients with
the corresponding op.
gate_gradients: If True, add a tuple around the gradients returned
      for each operation. This avoids some race conditions.
aggregation_method: Specifies the method used to combine gradient terms.
Accepted values are constants defined in the class `AggregationMethod`.
Returns:
A list of `sum(dy/dx)` for each x in `xs`.
Raises:
LookupError: if one of the operations between `x` and `y` does not
have a registered gradient function.
ValueError: if the arguments are invalid.
"""
ys = _AsList(ys)
xs = _AsList(xs)
if grad_ys is None:
grad_ys = [None] * len(ys)
else:
grad_ys = _AsList(grad_ys)
with ops.name_scope(name, "gradients", ys + xs + grad_ys):
ys = ops.convert_n_to_tensor_or_indexed_slices(ys, name="y")
xs = ops.convert_n_to_tensor_or_indexed_slices(xs, name="x")
grad_ys = _DefaultGradYs(grad_ys, ys, colocate_gradients_with_ops)
# The approach we take here is as follows: Create a list of all ops in the
# subgraph between the ys and xs. Visit these ops in reverse order of ids
# to ensure that when we visit an op the gradients w.r.t its outputs have
# been collected. Then aggregate these gradients if needed, call the op's
# gradient function, and add the generated gradients to the gradients for
# its input.
# Initialize the pending count for ops in the connected subgraph from ys
# to the xs.
to_ops = [t.op for t in ys]
from_ops = [t.op for t in xs]
pending_count, loop_state = _PendingCount(ops.get_default_graph(), to_ops,
from_ops,
colocate_gradients_with_ops)
# Iterate over the collected ops.
#
# grads: op => list of gradients received on each output endpoint of the
# op. The gradients for each endpoint are initially collected as a list.
# When it is time to call the op's gradient function, for each endpoint we
# aggregate the list of received gradients into a Add() Operation if there
# is more than one.
grads = {}
# Add the initial gradients for the ys.
for y, grad_y in zip(ys, grad_ys):
_SetGrad(grads, y, grad_y)
# Initialize queue with to_ops.
queue = collections.deque()
# Add the ops in 'to_ops' into the queue.
to_ops_set = set()
for op in to_ops:
# 'ready' handles the case where one output gradient relies on
# another output's gradient.
# pylint: disable=protected-access
ready = (pending_count[op._id] == 0)
if ready and op._id not in to_ops_set:
to_ops_set.add(op._id)
queue.append(op)
# pylint: enable=protected-access
if loop_state:
loop_exits = loop_state.ProcessUnusedLoopExits(pending_count, to_ops_set)
for y in loop_exits:
if _IsTrainable(y):
_SetGrad(grads, y, loop_state.ZerosLikeForExit(y))
queue.append(y.op)
# The set of 'from_ops'.
stop_ops = _StopOps(from_ops, pending_count)
while queue:
# generate gradient subgraph for op.
op = queue.popleft()
with _maybe_colocate_with(op, colocate_gradients_with_ops):
if loop_state:
loop_state.EnterGradWhileContext(op, before=True)
out_grads = _AggregatedGrads(grads, op, loop_state, aggregation_method)
if loop_state:
loop_state.ExitGradWhileContext(op, before=True)
grad_fn = None
# pylint: disable=protected-access
is_func_call = ops.get_default_graph()._is_function(op.type)
has_out_grads = any(isinstance(g, ops.Tensor) or g for g in out_grads)
if has_out_grads and (op._id not in stop_ops):
if is_func_call:
grad_fn = ops.get_default_graph()._get_function(
op.type).python_grad_func
# pylint: enable=protected-access
else:
# A grad_fn must be defined, either as a function or as None
# for ops that do not have gradients.
try:
grad_fn = ops.get_gradient_function(op)
except LookupError:
raise LookupError(
"No gradient defined for operation '%s' (op type: %s)" %
(op.name, op.type))
if loop_state:
loop_state.EnterGradWhileContext(op, before=False)
if (grad_fn or is_func_call) and has_out_grads:
# NOTE: If _AggregatedGrads didn't compute a value for the i'th
# output, it means that the cost does not depend on output[i],
# therefore dC/doutput[i] is 0.
for i, out_grad in enumerate(out_grads):
if (not isinstance(out_grad, ops.Tensor) and
not out_grad) and _IsTrainable(op.outputs[i]):
# Only floating-point outputs get a zero gradient. Gradient
# functions should ignore the gradient for other outputs.
if loop_state:
out_grads[i] = loop_state.ZerosLike(op, i)
else:
out_grads[i] = control_flow_ops.ZerosLikeOutsideLoop(op, i)
with ops.name_scope(op.name + "_grad"):
# pylint: disable=protected-access
with ops.get_default_graph()._original_op(op):
# pylint: enable=protected-access
if grad_fn:
# If grad_fn was found, do not use SymbolicGradient even for
# functions.
in_grads = grad_fn(op, *out_grads)
else:
# For function call ops, we add a 'SymbolicGradient'
# node to the graph to compute gradients.
in_grads = _SymGrad(op, out_grads)
in_grads = _AsList(in_grads)
_VerifyGeneratedGradients(in_grads, op)
if gate_gradients and len(
[x for x in in_grads if x is not None]) > 1:
in_grads = control_flow_ops.tuple(in_grads)
_LogOpGradients(op, out_grads, in_grads)
else:
# If no grad_fn is defined or none of out_grads is available,
# just propagate a list of None backwards.
in_grads = [None] * len(op.inputs)
for t_in, in_grad in zip(op.inputs, in_grads):
if in_grad is not None:
if isinstance(in_grad, ops.Tensor):
in_grad.set_shape(t_in.get_shape())
_SetGrad(grads, t_in, in_grad)
if loop_state:
loop_state.ExitGradWhileContext(op, before=False)
# Update pending count for the inputs of op and enqueue ready ops.
_UpdatePendingAndEnqueueReady(grads, op, queue, pending_count, loop_state)
if loop_state:
loop_state.PostProcessing()
return [_GetGrad(grads, x) for x in xs]
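# A minimal usage sketch through the public `tf.gradients` endpoint, which
# forwards to this function (illustrative names; graph mode assumed):
#
#   import tensorflow as tf
#   x = tf.constant(3.0)
#   y = x * x
#   dy_dx, = tf.gradients(y, [x])    # symbolic 2 * x
#   with tf.Session() as sess:
#       print(sess.run(dy_dx))       # 6.0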
def _HasAnyNotNoneGrads(grads, op):
"""Return true iff op has real gradient."""
out_grads = _GetGrads(grads, op)
for out_grad in out_grads:
if isinstance(out_grad, (ops.Tensor, ops.IndexedSlices)):
return True
if out_grad and isinstance(out_grad, collections.Sequence):
if any([g is not None for g in out_grad]):
return True
return False
def _UpdatePendingAndEnqueueReady(grads, op, queue, pending_count, loop_state):
"""Update pending count for the inputs of op and enqueue ready ops."""
for x in op.inputs:
# pylint: disable=protected-access
pending_count[x.op._id] -= 1
ready = (pending_count[x.op._id] == 0)
if loop_state and not ready:
ready = (pending_count[x.op._id] > 0 and
control_flow_ops.IsLoopSwitch(x.op))
# pylint: enable=protected-access
if ready:
if control_flow_ops.IsLoopExit(x.op):
        # If x is an exit without a real gradient, defer processing it.
grad_state = loop_state.GetGradState(x.op, before=False)
grad_state.deferred_exits.append(x)
grad_state.pending_exits_count -= 1
if grad_state.pending_exits_count == 0:
# We now have all the exits so process them.
has_real_grad = False
for y in grad_state.deferred_exits:
if _HasAnyNotNoneGrads(grads, y.op):
has_real_grad = True
queue.append(y.op)
else:
grad_state.unused_exits.append(y)
if has_real_grad:
# For an unused exit, if it has floating-point outputs, backprop
# a zero gradient. Otherwise, just ignore it.
for y in grad_state.unused_exits:
if _IsTrainable(y):
_SetGrad(grads, y, loop_state.ZerosLikeForExit(y))
queue.append(y.op)
else:
# All exits are "unused" so use None as gradient.
for y in grad_state.unused_exits:
queue.append(y.op)
else:
queue.append(x.op)
def _SetGrad(grads, t, grad):
"""Sets gradient "grad" in "grads" for tensor "t"."""
op = t.op
op_grads = grads.get(op)
if not op_grads:
op_grads = [[] for _ in xrange(len(op.outputs))]
grads[op] = op_grads
t_grads = op_grads[t.value_index]
if isinstance(t_grads, list):
t_grads.append(grad)
else:
assert control_flow_ops.IsLoopSwitch(op)
op_grads[t.value_index] = grad
def _GetGrad(grads, t):
"""Gets gradient for tensor "t"."""
op = t.op
op_grads = grads.get(op)
if not op_grads:
return None
t_grad = op_grads[t.value_index]
assert not isinstance(t_grad, list), (
"gradients list should have been aggregated by now.")
return t_grad
def _GetGrads(grads, op):
"""Gets all gradients for op."""
if op in grads:
return grads[op]
else:
return [[] for _ in xrange(len(op.outputs))]
def _HandleNestedIndexedSlices(grad):
assert isinstance(grad, ops.IndexedSlices)
if isinstance(grad.values, ops.Tensor):
return grad
else:
assert isinstance(grad.values, ops.IndexedSlices)
g = _HandleNestedIndexedSlices(grad.values)
return ops.IndexedSlices(g.values,
array_ops.gather(grad.indices, g.indices),
g.dense_shape)
def _AccumulatorShape(inputs):
shape = tensor_shape.unknown_shape()
for i in inputs:
if isinstance(i, ops.Tensor):
shape = shape.merge_with(i.get_shape())
return shape
def _LogOpGradients(op, out_grads, in_grads):
"""Log the in and out grads of an op."""
logging.vlog(1, "Gradient for '" + op.name + "'")
def _FilterGrad(x):
if x is None:
return False
if isinstance(x, (list, tuple)):
return bool(x)
else:
return True
logging.vlog(1, " in --> %s",
", ".join([x.name for x in out_grads if _FilterGrad(x)]))
logging.vlog(1, " out --> %s",
", ".join([x.name for x in in_grads if _FilterGrad(x)]))
def _MultiDeviceAddN(tensor_list):
"""Adds tensors from potentially multiple devices."""
# Basic function structure comes from control_flow_ops.group().
# Sort tensors according to their devices.
tensors_on_device = collections.defaultdict(lambda: [])
for tensor in tensor_list:
tensors_on_device[tensor.device].append(tensor)
# For each device, add the tensors on that device first.
# Then gather the partial sums from multiple devices.
# TODO(sjhwang): Create hierarchical aggregation tree as pbar's suggestion.
# E.g., aggregate per GPU, then per task, and so on.
summands = []
def DeviceKey(dev):
return "" if dev is None else dev
for dev in sorted(six.iterkeys(tensors_on_device), key=DeviceKey):
tensors = tensors_on_device[dev]
with ops.colocate_with(tensors[0].op, ignore_existing=True):
summands.append(math_ops.add_n(tensors))
return math_ops.add_n(summands)
class AggregationMethod(object):
"""A class listing aggregation methods used to combine gradients.
Computing partial derivatives can require aggregating gradient
contributions. This class lists the various methods that can
be used to combine gradients in the graph:
* `ADD_N`: All of the gradient terms are summed as part of one
operation using the "AddN" op. It has the property that all
gradients must be ready before any aggregation is performed.
* `DEFAULT`: The system-chosen default aggregation method.
"""
ADD_N = 0
DEFAULT = ADD_N
# The following are experimental and may not be supported in future releases.
EXPERIMENTAL_TREE = 1
EXPERIMENTAL_ACCUMULATE_N = 2
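# A minimal sketch of selecting an aggregation method (hypothetical ys/xs):
# EXPERIMENTAL_TREE sums gradients pairwise, which can lower peak memory at
# some cost in speed, as described in _AggregatedGrads below.
#
#   grads = gradients(ys, xs,
#                     aggregation_method=AggregationMethod.EXPERIMENTAL_TREE)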
def _AggregatedGrads(grads, op, loop_state, aggregation_method=None):
"""Get the aggregated gradients for op.
Args:
grads: The map of memoized gradients.
op: The op to get gradients for.
loop_state: An object for maintaining the state of the while loops in the
graph. It is of type ControlFlowState. None if the graph
contains no while loops.
aggregation_method: Specifies the method used to combine gradient terms.
Accepted values are constants defined in the class `AggregationMethod`.
Returns:
    A list of gradients, one for each output of `op`. If the gradients
    for a particular output arrive as a list, this function aggregates them
    before returning.
Raises:
TypeError: if the incoming grads are not Tensors or IndexedSlices.
ValueError: if the arguments are invalid.
"""
if aggregation_method is None:
aggregation_method = AggregationMethod.DEFAULT
if aggregation_method not in [
AggregationMethod.ADD_N, AggregationMethod.EXPERIMENTAL_TREE,
AggregationMethod.EXPERIMENTAL_ACCUMULATE_N
]:
raise ValueError("Invalid aggregation_method specified %s." %
aggregation_method)
out_grads = _GetGrads(grads, op)
for i, out_grad in enumerate(out_grads):
if loop_state:
if isinstance(out_grad, (ops.Tensor, ops.IndexedSlices)):
assert control_flow_ops.IsLoopSwitch(op)
continue
# Grads have to be Tensors or IndexedSlices
if (isinstance(out_grad, collections.Sequence) and not all([
isinstance(g, (ops.Tensor, ops.IndexedSlices)) for g in out_grad
if g is not None
])):
raise TypeError("gradients have to be either all Tensors "
"or all IndexedSlices")
# Aggregate multiple gradients, and convert [] to None.
if out_grad:
if len(out_grad) < 2:
used = "nop"
out_grads[i] = out_grad[0]
elif all([isinstance(g, ops.Tensor) for g in out_grad if g is not None]):
tensor_shape = _AccumulatorShape(out_grad)
if (aggregation_method == AggregationMethod.EXPERIMENTAL_ACCUMULATE_N
and len(out_grad) > 2 and tensor_shape.is_fully_defined()):
# The benefit of using AccumulateN is that its inputs can be combined
# in any order and this can allow the expression to be evaluated with
# a smaller memory footprint. When used with gpu_allocator_retry,
# it is possible to compute a sum of terms which are much larger than
# total GPU memory.
# AccumulateN can currently only be used if we know the shape for
# an accumulator variable. If this is not known, or if we only have
# 2 grads then we fall through to the "tree" case below.
used = "accumulate_n"
out_grads[i] = math_ops.accumulate_n(out_grad)
elif aggregation_method in [
AggregationMethod.EXPERIMENTAL_TREE,
AggregationMethod.EXPERIMENTAL_ACCUMULATE_N
]:
# Aggregate all gradients by doing pairwise sums: this may
# reduce performance, but it can improve memory because the
# gradients can be released earlier.
#
# TODO(vrv): Consider replacing this with a version of
# tf.AddN() that eagerly frees its inputs as soon as they are
# ready, so the order of this tree does not become a problem.
used = "tree"
with ops.name_scope(op.name + "_gradient_sum"):
running_sum = out_grad[0]
for grad in out_grad[1:]:
running_sum = math_ops.add_n([running_sum, grad])
out_grads[i] = running_sum
else:
used = "add_n"
out_grads[i] = _MultiDeviceAddN(out_grad)
logging.vlog(2, " _AggregatedGrads %d x %s using %s",
len(out_grad), tensor_shape, used)
else:
out_grad = math_ops._as_indexed_slices_list(
[g for g in out_grad if g is not None])
out_grad = [_HandleNestedIndexedSlices(x) for x in out_grad]
# Form IndexedSlices out of the concatenated values and
# indices.
out_grads[i] = ops.IndexedSlices(
array_ops.concat([x.values for x in out_grad], 0),
array_ops.concat([x.indices for x in out_grad], 0),
out_grad[0].dense_shape)
else: # not out_grad
# out_grads[i] is [], thus its aggregation is simply None.
out_grads[i] = None
return out_grads
# TODO(vrv): Make this available when we want to make it public.
def _hessian_vector_product(ys, xs, v):
"""Multiply the Hessian of `ys` wrt `xs` by `v`.
This is an efficient construction that uses a backprop-like approach
to compute the product between the Hessian and another vector. The
Hessian is usually too large to be explicitly computed or even
represented, but this method allows us to at least multiply by it
for the same big-O cost as backprop.
Implicit Hessian-vector products are the main practical, scalable way
of using second derivatives with neural networks. They allow us to
do things like construct Krylov subspaces and approximate conjugate
gradient descent.
Example: if `y` = 1/2 `x`^T A `x`, then `hessian_vector_product(y,
x, v)` will return an expression that evaluates to the same values
  as 1/2 (A + A.T) `v`.
Args:
ys: A scalar value, or a tensor or list of tensors to be summed to
yield a scalar.
xs: A list of tensors that we should construct the Hessian over.
v: A list of tensors, with the same shapes as xs, that we want to
multiply by the Hessian.
Returns:
A list of tensors (or if the list would be length 1, a single tensor)
containing the product between the Hessian and `v`.
Raises:
ValueError: `xs` and `v` have different length.
"""
# Validate the input
length = len(xs)
if len(v) != length:
raise ValueError("xs and v must have the same length.")
# First backprop
grads = gradients(ys, xs)
assert len(grads) == length
elemwise_products = [
math_ops.multiply(grad_elem, array_ops.stop_gradient(v_elem))
for grad_elem, v_elem in zip(grads, v) if grad_elem is not None
]
# Second backprop
return gradients(elemwise_products, xs)
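# A minimal sketch of the docstring's quadratic form (illustrative values;
# graph mode assumed). With A symmetric, the Hessian of 1/2 x^T A x is A,
# so the product below evaluates to A v = [2., 4.]:
#
#   import tensorflow as tf
#   x = tf.constant([1.0, 2.0])
#   A = tf.constant([[2.0, 0.0], [0.0, 4.0]])
#   xc = tf.reshape(x, [2, 1])
#   y = 0.5 * tf.matmul(tf.matmul(tf.transpose(xc), A), xc)[0, 0]
#   hvp = _hessian_vector_product(y, [x], [tf.constant([1.0, 1.0])])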
def hessians(ys, xs, name="hessians", colocate_gradients_with_ops=False,
gate_gradients=False, aggregation_method=None):
"""Constructs the Hessian of sum of `ys` with respect to `x` in `xs`.
`hessians()` adds ops to the graph to output the Hessian matrix of `ys`
with respect to `xs`. It returns a list of `Tensor` of length `len(xs)`
where each tensor is the Hessian of `sum(ys)`. This function currently
only supports evaluating the Hessian with respect to (a list of) one-
dimensional tensors.
The Hessian is a matrix of second-order partial derivatives of a scalar
tensor (see https://en.wikipedia.org/wiki/Hessian_matrix for more details).
Args:
ys: A `Tensor` or list of tensors to be differentiated.
xs: A `Tensor` or list of tensors to be used for differentiation.
name: Optional name to use for grouping all the gradient ops together.
defaults to 'hessians'.
colocate_gradients_with_ops: See `gradients()` documentation for details.
gate_gradients: See `gradients()` documentation for details.
aggregation_method: See `gradients()` documentation for details.
Returns:
A list of Hessian matrices of `sum(y)` for each `x` in `xs`.
Raises:
LookupError: if one of the operations between `xs` and `ys` does not
have a registered gradient function.
ValueError: if the arguments are invalid or not supported. Currently,
this function only supports one-dimensional `x` in `xs`.
"""
xs = _AsList(xs)
kwargs = {
'colocate_gradients_with_ops': colocate_gradients_with_ops,
'gate_gradients': gate_gradients,
'aggregation_method': aggregation_method
}
# Compute a hessian matrix for each x in xs
hessians = []
for i, x in enumerate(xs):
# Check dimensions
ndims = x.get_shape().ndims
if ndims is None:
raise ValueError('Cannot compute Hessian because the dimensionality of '
'element number %d of `xs` cannot be determined' % i)
elif ndims != 1:
raise ValueError('Computing hessians is currently only supported for '
'one-dimensional tensors. Element number %d of `xs` has '
'%d dimensions.' % (i, ndims))
with ops.name_scope(name + '_first_derivative'):
# Compute the partial derivatives of the input with respect to all
# elements of `x`
_gradients = gradients(ys, x, **kwargs)[0]
# Unpack the gradients into a list so we can take derivatives with
# respect to each element
_gradients = array_ops.unstack(_gradients)
with ops.name_scope(name + '_second_derivative'):
# Compute the partial derivatives with respect to each element of the list
_hess = [gradients(_gradient, x, **kwargs)[0] for _gradient in _gradients]
# Pack the list into a matrix and add to the list of hessians
hessians.append(array_ops.stack(_hess, name=name))
return hessians
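# A minimal usage sketch through the public `tf.hessians` endpoint
# (illustrative names; graph mode assumed, and each x must be 1-D):
#
#   import tensorflow as tf
#   x = tf.constant([1.0, 2.0])
#   y = tf.reduce_sum(x * x)     # Hessian of sum(x_i^2) is 2 * I
#   hess, = tf.hessians(y, x)
#   with tf.Session() as sess:
#       print(sess.run(hess))    # [[2., 0.], [0., 2.]]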
|
anilmuthineni/tensorflow
|
tensorflow/python/ops/gradients_impl.py
|
Python
|
apache-2.0
| 35,591
|
[
"VisIt"
] |
a826df85f837e69ac069b1fced0175c5b955baffb9abc74416b9641e9bcce615
|
from mushroom_rl.core import Serializable
from mushroom_rl.utils.table import Table
import numpy as np
def to_parameter(x):
if isinstance(x, Parameter):
return x
else:
return Parameter(x)
class Parameter(Serializable):
"""
    This class implements functions to manage parameters, such as the
    learning rate. It also allows having a single parameter for each state
    or state-action tuple.
"""
def __init__(self, value, min_value=None, max_value=None, size=(1,)):
"""
Constructor.
Args:
value (float): initial value of the parameter;
min_value (float, None): minimum value that the parameter can reach
when decreasing;
max_value (float, None): maximum value that the parameter can reach
when increasing;
size (tuple, (1,)): shape of the matrix of parameters; this shape
can be used to have a single parameter for each state or
state-action tuple.
"""
self._initial_value = value
self._min_value = min_value
self._max_value = max_value
self._n_updates = Table(size)
self._add_save_attr(
_initial_value='primitive',
_min_value='primitive',
_max_value='primitive',
_n_updates='mushroom',
)
def __call__(self, *idx, **kwargs):
"""
        Update and return the parameter at the provided index.
Args:
*idx (list): index of the parameter to return.
Returns:
The updated parameter in the provided index.
"""
if self._n_updates.table.size == 1:
idx = list()
self.update(*idx, **kwargs)
return self.get_value(*idx, **kwargs)
def get_value(self, *idx, **kwargs):
"""
        Return the current value of the parameter at the provided index.
Args:
*idx (list): index of the parameter to return.
Returns:
The current value of the parameter in the provided index.
"""
new_value = self._compute(*idx, **kwargs)
if self._min_value is None and self._max_value is None:
return new_value
else:
return np.clip(new_value, self._min_value, self._max_value)
def _compute(self, *idx, **kwargs):
"""
Returns:
            The value of the parameter at the provided index.
"""
return self._initial_value
def update(self, *idx, **kwargs):
"""
        Update the number of visits of the parameter at the provided index.
Args:
*idx (list): index of the parameter whose number of visits has to be
updated.
"""
self._n_updates[idx] += 1
@property
def shape(self):
"""
Returns:
The shape of the table of parameters.
"""
return self._n_updates.table.shape
@property
def initial_value(self):
"""
Returns:
The initial value of the parameters.
"""
return self._initial_value
class LinearParameter(Parameter):
"""
This class implements a linearly changing parameter according to the number
of times it has been used.
"""
def __init__(self, value, threshold_value, n, size=(1,)):
self._coeff = (threshold_value - value) / n
if self._coeff >= 0:
super().__init__(value, None, threshold_value, size)
else:
super().__init__(value, threshold_value, None, size)
self._add_save_attr(_coeff='primitive')
def _compute(self, *idx, **kwargs):
return self._coeff * self._n_updates[idx] + self._initial_value
class ExponentialParameter(Parameter):
"""
    This class implements an exponentially changing parameter according to the
number of times it has been used.
"""
def __init__(self, value, exp=1., min_value=None, max_value=None,
size=(1,)):
self._exp = exp
super().__init__(value, min_value, max_value, size)
self._add_save_attr(_exp='primitive')
def _compute(self, *idx, **kwargs):
n = np.maximum(self._n_updates[idx], 1)
return self._initial_value / n ** self._exp
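# A minimal usage sketch (illustrative): an exploration rate that decays as
# 1 / sqrt(n) with the number of calls, clipped from below at 0.1. Each call
# first bumps the visit counter, then returns the freshly computed value.
#
#   eps = ExponentialParameter(value=1., exp=.5, min_value=.1)
#   eps()  # 1.0 after the first call
#   eps()  # 1 / sqrt(2) ~= 0.707 after the second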
|
carloderamo/mushroom
|
mushroom_rl/utils/parameters.py
|
Python
|
mit
| 4,314
|
[
"VisIt"
] |
0879719095f8bd34c07aa32b607baf79aed0f70004e6707d03be600ea8cdc9ae
|
import fileinput
import sys
import re
import os
import HTSeq
def replaceAll(file, searchExp, replaceExp):
"""
Replace an expression in a file
"""
for line in fileinput.input(file, inplace=1):
if searchExp in line:
line = line.replace(searchExp, replaceExp)
sys.stdout.write(line)
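# A minimal usage sketch (hypothetical file name and strings): rewrite every
# occurrence of a search string in place.
#
#   replaceAll('settings.py', 'DEBUG = True', 'DEBUG = False')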
## Patch the HTSeq library to increase the buffer size
def patch_htseq():
    """
    Patch the HTSeq library, increasing the buffer size
    """
    initfile = os.path.splitext(HTSeq.__file__)[0] + '.py'
    print("Set patch for %s" % initfile)
    replaceAll(initfile, 'max_buffer_size=3000000', 'max_buffer_size=9000000')
if __name__ == "__main__":
    patch_htseq()
|
frankosan/pypers
|
apply_patches.py
|
Python
|
gpl-3.0
| 687
|
[
"HTSeq"
] |
74042aa210c44a9b0f3f78512b51fb23561b1dadd9a2585a5ff4cd0073ad0143
|
"""Test OpenBabel executables from Python
Note: Python bindings not used
On Windows or Linux, you can run these tests at the commandline
in the build folder with:
"C:\Program Files\CMake 2.6\bin\ctest.exe" -C CTestTestfile.cmake
-R pytest -VV
You could also "chdir" into build/test and run the test file directly:
python ../../../test/testsym.py
In both cases, the test file is run directly from the source folder,
and so you can quickly develop the tests and try them out.
"""
import os
import unittest
from testbabel import run_exec, BaseTest
class TestSym(BaseTest):
"""Base class for a series of tests relating to symmetry"""
def testInChItoSMI(self):
"""Verify that the InChI is read correctly"""
output, error = run_exec(self.inchi, "babel -iinchi -ocan")
self.assertEqual(output.rstrip(), self.cansmi)
def testSMItoInChI(self):
"""Verify that all molecules give the same InChI"""
output, error = run_exec("\n".join(self.smiles), "babel -ismi -oinchi")
output = "\n".join([x.rstrip() for x in output.split("\n")])
self.assertEqual(output.rstrip(), "\n".join([self.inchi] * len(self.smiles)))
def testSMItoCAN(self):
"""Verify that all molecules give the same cansmi"""
output, error = run_exec("\n".join(self.smiles), "babel -ismi -ocan")
output = "\n".join([x.rstrip() for x in output.split("\n")])
self.assertEqual(output.rstrip(), "\n".join([self.cansmi] * len(self.smiles)))
def testSMIthruXML(self):
"""Verify that roundtripping through CML preserves stereo"""
output, error = run_exec("\n".join(self.smiles), "babel -ismi -ocml tmp.cml")
output, error = run_exec(output.rstrip(), "babel -icml tmp.cml -ocan")
output = "\n".join([x.rstrip() for x in output.split("\n")])
self.assertEqual(output.rstrip(), "\n".join([self.cansmi] * len(self.smiles)))
os.remove("tmp.cml")
class TestTetSym(TestSym):
"""A series of tests relating to tetrahedral symmetry"""
def setUp(self):
self.canFindExecutable("babel")
# The following all represent the same molecule
self.cansmi = "C[C@](Br)(Cl)F"
self.inchi = "InChI=1S/C2H3BrClF/c1-2(3,4)5/h1H3/t2-/m0/s1"
self.smiles = [
'C[C@@](Cl)(Br)F',
'C[C@](Cl)(F)Br',
'C[C@](Br)(Cl)F',
'C[C@@](Br)(F)Cl',
'C[C@@](F)(Cl)Br',
'C[C@](F)(Br)Cl',
'Cl[C@](C)(Br)F',
'Cl[C@@](C)(F)Br',
'Cl[C@@](Br)(C)F',
'Cl[C@](Br)(F)C',
'Cl[C@](F)(C)Br',
'Cl[C@@](F)(Br)C',
'Br[C@@](C)(Cl)F',
'Br[C@](C)(F)Cl',
'Br[C@](Cl)(C)F',
'Br[C@@](Cl)(F)C',
'Br[C@@](F)(C)Cl',
'Br[C@](F)(Cl)C',
'F[C@](C)(Cl)Br',
'F[C@@](C)(Br)Cl',
'F[C@@](Cl)(C)Br',
'F[C@](Cl)(Br)C',
'F[C@](Br)(C)Cl',
'F[C@@](Br)(Cl)C'
]
class TestCisTransSym(TestSym):
"""A series of tests relating to cistrans symmetry"""
def setUp(self):
self.canFindExecutable("babel")
# The following all represent the same molecule
self.cansmi = "Cl/C=C/C=C\\Br"
self.inchi = "InChI=1S/C4H4BrCl/c5-3-1-2-4-6/h1-4H/b3-1-,4-2+"
self.smiles = [
"C(=C\C=C/Br)/Cl",
"Cl/C=C/C=C\Br",
"Br/C=C\C=C\Cl",
"C(=C\Cl)/C=C\Br",
"C(=C\C=C\Cl)\Br",
"C(=C\Br)\C=C\Cl"
]
class TestLonePairTetSym(TestSym):
"""A series of tests relating to tet symmetry involving a lone pair"""
def setUp(self):
self.canFindExecutable("babel")
# The following all represent the same molecule
self.cansmi = "C[S@](=O)Cl"
self.inchi = "InChI=1S/CH3ClOS/c1-4(2)3/h1H3/t4-/m0/s1"
self.smiles = [
self.cansmi,
"O=[S@](Cl)C",
"O=[S@@](C)Cl",
"[S@](Cl)(=O)C",
]
class TestRingBondCisTransSym(TestSym):
"""A series of tests relating to tet symmetry involving a lone pair"""
def setUp(self):
self.canFindExecutable("babel")
# The following all represent the same molecule
self.cansmi = r"I/C=C/1\CN1"
self.inchi = "InChI=1S/C3H4IN/c4-1-3-2-5-3/h1,5H,2H2/b3-1+"
self.smiles = [
self.cansmi,
r"I/C=C\1/NC1",
r"I/C=C1NC/1",
"I/C=C/1/NC/1",
]
class TestConversions(BaseTest):
"""A series of tests relating to file format conversions and symmetry"""
def setUp(self):
self.canFindExecutable("babel")
self.data = [
('ClC=CF', 'FC=CCl', 'InChI=1S/C2H2ClF/c3-1-2-4/h1-2H'),
('ClC=CF', 'FC=CCl', 'InChI=1S/C2H2ClF/c3-1-2-4/h1-2H'),
('Cl/C=C/F', 'F/C=C/Cl', 'InChI=1S/C2H2ClF/c3-1-2-4/h1-2H/b2-1+'),
(r"Cl/C=C\F", r"F/C=C\Cl", 'InChI=1S/C2H2ClF/c3-1-2-4/h1-2H/b2-1-'),
('Cl[C@@](Br)(F)I', 'F[C@](I)(Br)Cl', 'InChI=1S/CBrClFI/c2-1(3,4)5/t1-/m0/s1'),
('Cl[C@](Br)(F)I', 'F[C@@](I)(Br)Cl', 'InChI=1S/CBrClFI/c2-1(3,4)5/t1-/m1/s1'),
('ClC(Br)(F)I', 'FC(I)(Br)Cl', 'InChI=1S/CBrClFI/c2-1(3,4)5'),
('O=[S@@](Cl)I', "Cl[S@](=O)I", "InChI=1S/ClIOS/c1-4(2)3/t4-/m0/s1"),
('O=[S@](Cl)I', "Cl[S@@](=O)I", "InChI=1S/ClIOS/c1-4(2)3/t4-/m1/s1"),
('O=S(Cl)I', "ClS(=O)I", "InChI=1S/ClIOS/c1-4(2)3"),
(r"IC=C1NC1", r"IC=C1CN1", "InChI=1S/C3H4IN/c4-1-3-2-5-3/h1,5H,2H2"),
(r"I/C=C\1/NC1", r"I/C=C/1\CN1", "InChI=1S/C3H4IN/c4-1-3-2-5-3/h1,5H,2H2/b3-1+"),
(r"I/C=C/1\NC1", r"I/C=C\1/CN1", "InChI=1S/C3H4IN/c4-1-3-2-5-3/h1,5H,2H2/b3-1-"),
]
def testSMILEStoInChI(self):
# Tests interconversions between the SMILES on the left versus
# the InChI on the right.
# The canonical smiles (in the middle) were derived from the SMILES.
for smiles, can, inchi in self.data:
output, error = run_exec(smiles, "babel -ismi -oinchi")
self.assertEqual(output.rstrip(), inchi)
output, error = run_exec(inchi, "babel -iinchi -ocan")
self.assertEqual(output.rstrip(), can)
def parseMDL(self, text):
lines = text.split("\n")
broken = lines[3].split()
Natoms = int(broken[0])
Nbonds = int(broken[1])
atoms = []
for i in range(Natoms):
broken = lines[i+4].split()
atoms.append({'parity':int(broken[6])})
bonds = []
for i in range(Nbonds):
broken = lines[i+4+Natoms].split()
bonds.append({'stereo':int(broken[3])})
return atoms, bonds
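    # A sketch of the V2000 molfile fields parseMDL reads (illustrative):
    # line 4, the counts line, e.g. "  5  4  0 ..." -> Natoms=5, Nbonds=4;
    # in each atom line the 7th whitespace-separated field is the stereo
    # parity (0 none, 1/2 chiral, 3 unknown), and in each bond line the
    # 4th field is the wedge/stereo flag.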
def testSMILESto2D(self):
"""Test gen2d for some basic cases"""
for smi, can, inchi in self.data:
output, error = run_exec(smi, "obabel -ismi --gen2d -omdl")
output, error = run_exec(output.rstrip(), "obabel -imdl -ocan")
self.assertEqual(can, output.rstrip())
def testSMILESto3DMDL(self):
"""Test interconversion between SMILES and 3D MDL"""
data = [
([0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 3]), # 'ClC=CF'
([0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 3]), # 'ClC=CF'
([0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0]), # 'Cl/C=C/F'
([0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0]), # 'Cl/C=C\\F'
# The bond parities are irrelevant/meaningless for the next two
([0, 0, 0, 0, 1], []), # 'Cl[C@@](Br)(F)I'
([0, 0, 0, 0, 2], []), # 'Cl[C@](Br)(F)I'
([0, 0, 0, 0, 3], [0, 0, 0, 0]), # 'ClC(Br)(F)I'
([0, 0, 0, 1], []), # 'O=[S@@](Cl)I),
([0, 0, 0, 2], []), # 'O=[S@](Cl)I),
([0, 0, 0, 3], []), # 'O=S(Cl)I),
([0]*9, [0]*8 + [3]), # "IC=C1NC1"
([0]*9, [0]*9), # r"I/C=C\1/NC1"
([0]*9, [0]*9), # r"I/C=C/1\NC1"
]
for i, (atompar, bondstereo) in enumerate(data):
smiles, can = self.data[i][0:2]
output, error = run_exec(smiles, "babel -ismi -osdf --gen3d")
atoms, bonds = self.parseMDL(output)
parities = [atom['parity'] for atom in atoms]
parities.sort()
stereos = [bond['stereo'] for bond in bonds]
stereos.sort()
self.assertEqual(atompar, parities)
if bondstereo:
self.assertEqual(bondstereo, stereos)
output, error = run_exec(output, "obabel -isdf -as -ocan")
# "-as" is necessary to identify the unknown stereo
self.assertEqual(output.rstrip(), can)
def testXYZtoSMILESand3DMDL(self):
"""Test conversion from XYZ to SMILES and 3D MDL"""
# Since the XYZ format does not trigger stereo perception,
# this test makes sure that the SMILES and 3D MDL formats
# perceive stereo themselves.
data = [
([0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 3]), # 'ClC=CF'
([0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 3]), # 'ClC=CF'
([0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0]), # 'Cl/C=C/F'
([0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0]), # 'Cl/C=C\\F'
# The bond parities are irrelevant/meaningless for the next two
([0, 0, 0, 0, 1], []), # 'Cl[C@@](Br)(F)I'
([0, 0, 0, 0, 2], []), # 'Cl[C@](Br)(F)I'
([0, 0, 0, 0, 3], [0, 0, 0, 4]), # 'ClC(Br)(F)I'
([0, 0, 0, 1], []), # 'O=[S@@](Cl)I),
([0, 0, 0, 2], []), # 'O=[S@](Cl)I),
([0, 0, 0, 3], []), # 'O=S(Cl)I),
([0]*9, [0]*8 + [3]), # "IC=C1NC1"
([0]*9, [0]*9), # r"I/C=C\1/NC1"
([0]*9, [0]*9), # r"I/C=C/1\NC1"
]
for i, (atompar, bondstereo) in enumerate(data):
if i in [0, 1, 6, 10]: continue # ambiguous stereo is lost in XYZ
if i in [7, 8, 9]: continue # perception of S=O from XYZ fails
smiles, can = self.data[i][0:2]
output, error = run_exec(smiles, "babel -ismi -oxyz --gen3d")
canoutput, error = run_exec(output, "babel -ixyz -ocan")
self.assertEqual(canoutput.rstrip(), can)
sdfoutput, error = run_exec(output, "babel -ixyz -osdf")
atoms, bonds = self.parseMDL(sdfoutput)
parities = [atom['parity'] for atom in atoms]
parities.sort()
stereos = [bond['stereo'] for bond in bonds]
stereos.sort()
self.assertEqual(atompar, parities)
if bondstereo:
self.assertEqual(bondstereo, stereos)
def test2DMDLto0D(self):
"""Test conversion for 2D MDL to CAN and InChI"""
# The following file was created using RDKit starting from
# the SMILES strings in data[x][0] below.
filename = self.getTestFile("testsym_2Dtests.sdf")
output, error = run_exec("babel -isdf %s -ocan" % filename)
for i, smiles in enumerate(output.rstrip().split("\n")):
self.assertEqual(smiles.rstrip(), self.data[i][1])
output, error = run_exec("babel -isdf %s -oinchi" % filename)
for i, inchi in enumerate(output.rstrip().split("\n")):
self.assertEqual(inchi.rstrip(), self.data[i][2])
def test2DMDLto0D_more(self):
"""Test various combinations of stereobonds in 2D perception"""
filenames = [self.getTestFile(x) for x in
["testsym_2Dtests_more.sdf",
"testsym_2Dtests_threeligands.sdf"]]
# The test files have the correct canonical SMILES string
# stored in the data field "smiles"
output, error = run_exec("obabel -isdf %s %s -ocan --append smiles" %
(filenames[0], filenames[1]))
for line in output.rstrip().split("\n"):
result, correct_answer = line.split()
self.assertEqual(result, correct_answer)
def test2DMDLto2DMDL(self):
"""Make sure that stereo is preserved when writing wedge bonds"""
filenames = [self.getTestFile(x) for x in
["testsym_2Dtests_more.sdf",
"testsym_2Dtests_threeligands.sdf"]]
# The test files have the correct canonical SMILES string
# stored in the data field "smiles"
output, error = run_exec("obabel -isdf %s %s -osdf --append smiles" %
(filenames[0], filenames[1]))
finaloutput, error = run_exec(output, "obabel -isdf -ocan")
for line in finaloutput.rstrip().split("\n"):
result, correct_answer = line.split()
self.assertEqual(result, correct_answer)
def testSMILESto0DMDL(self):
"""Test interconversion between SMILES and 0D MDL"""
data = [
([0, 0, 0, 0, 1], [0, 0, 0, 0]), # 'Cl[C@@](Br)(F)I'
([0, 0, 0, 0, 2], [0, 0, 0, 0]), # 'Cl[C@](Br)(F)I'
([0, 0, 0, 0, 0], [0, 0, 0, 0]), # 'ClC(Br)(F)I'
([0, 0, 0, 0, 3], [0, 0, 0, 0]) # 'ClC(Br)(F)I' option 'S' when reading
]
for i, (atompar, bondstereo) in enumerate(data):
if i == 3:
smiles, can = self.data[6][0:2]
output, error = run_exec(smiles, "babel -ismi -osdf -aS")
else:
smiles, can = self.data[i + 4][0:2]
output, error = run_exec(smiles, "babel -ismi -osdf")
atoms, bonds = self.parseMDL(output)
parities = [atom['parity'] for atom in atoms]
parities.sort()
stereos = [bond['stereo'] for bond in bonds]
stereos.sort()
self.assertEqual(atompar, parities)
self.assertEqual(bondstereo, stereos)
output, error = run_exec(output, "babel -isdf -as -ocan")
self.assertEqual(output.rstrip(), can)
class TestStereoConversion(BaseTest):
"""Random tests relating to roundtripping stereochemistry"""
def setUp(self):
self.canFindExecutable("babel")
def testInChIToSMILES_Bug(self):
"""PR#2101034- InChI <-> SMILES conv misrepresents stereo"""
test_inchi = 'InChI=1S/C10H10/c1-2-3-7-10-8-5-4-6-9-10/h2-9H,1H2/b7-3+'
output, error = run_exec(test_inchi, "babel -iinchi -osmi")
self.assertEqual(output.rstrip(), "C=C/C=C/c1ccccc1")
test_smiles = "C=C\C=C/c1ccccc1"
output, error = run_exec(test_smiles, "babel -ismi -oinchi")
self.assertEqual(output.rstrip(), "InChI=1S/C10H10/c1-2-3-7-10-8-5-4-6-9-10/h2-9H,1H2/b7-3-")
def testChiralToLonePair(self):
"""PR#3058701 - Handle stereochemistry at lone pair on S"""
# Note to self: Need to ensure that roundtripping through the various
# 2D and 3D formats works. In the meanwhile, this test at least ensures
# that SMILES reading and writing works fine.
can = 'C[S@](=O)Cl'
smiles = [can, '[S@](Cl)(=O)C', 'O=[S@](Cl)C']
for smile in smiles:
output, error = run_exec(smile, "babel -ismi -ocan")
self.assertEqual(output.rstrip(), can)
# Check that regular chiral S still work fine
smi = "[S@](=O)(=N)(C)O"
output, error = run_exec(smi, "babel -ismi -osmi")
self.assertEqual(output.rstrip(), smi)
del TestSym # remove base class to avoid tests
if __name__ == "__main__":
unittest.main()
|
Reinis/openbabel
|
test/testsym.py
|
Python
|
gpl-2.0
| 15,030
|
[
"RDKit"
] |
cb95cb89016f3c4adde2cabb8de0e1b1c369e307fa06f85f6519ccddc998f3a3
|
# -*- coding: utf-8 -*-
# Part of Odoo. See LICENSE file for full copyright and licensing details.
from datetime import datetime, date
from lxml import etree
import time
from openerp import api
from openerp import SUPERUSER_ID
from openerp import tools
from openerp.osv import fields, osv
from openerp.tools.translate import _
from openerp.exceptions import UserError
class project_task_type(osv.osv):
_name = 'project.task.type'
_description = 'Task Stage'
_order = 'sequence'
_columns = {
'name': fields.char('Stage Name', required=True, translate=True),
'description': fields.text('Description', translate=True),
'sequence': fields.integer('Sequence'),
'project_ids': fields.many2many('project.project', 'project_task_type_rel', 'type_id', 'project_id', 'Projects'),
'legend_priority': fields.char(
'Priority Management Explanation', translate=True,
help='Explanation text to help users using the star and priority mechanism on stages or issues that are in this stage.'),
'legend_blocked': fields.char(
'Kanban Blocked Explanation', translate=True,
help='Override the default value displayed for the blocked state for kanban selection, when the task or issue is in that stage.'),
'legend_done': fields.char(
'Kanban Valid Explanation', translate=True,
help='Override the default value displayed for the done state for kanban selection, when the task or issue is in that stage.'),
'legend_normal': fields.char(
'Kanban Ongoing Explanation', translate=True,
help='Override the default value displayed for the normal state for kanban selection, when the task or issue is in that stage.'),
'fold': fields.boolean('Folded in Tasks Pipeline',
help='This stage is folded in the kanban view when '
'there are no records in that stage to display.'),
}
def _get_default_project_ids(self, cr, uid, ctx=None):
if ctx is None:
ctx = {}
default_project_id = ctx.get('default_project_id')
return [default_project_id] if default_project_id else None
_defaults = {
'sequence': 1,
'project_ids': _get_default_project_ids,
}
_order = 'sequence'
class project(osv.osv):
_name = "project.project"
_description = "Project"
_inherits = {'account.analytic.account': "analytic_account_id",
"mail.alias": "alias_id"}
_inherit = ['mail.thread', 'ir.needaction_mixin']
_period_number = 5
def _auto_init(self, cr, context=None):
""" Installation hook: aliases, project.project """
# create aliases for all projects and avoid constraint errors
alias_context = dict(context, alias_model_name='project.task')
return self.pool.get('mail.alias').migrate_to_alias(cr, self._name, self._table, super(project, self)._auto_init,
'project.task', self._columns['alias_id'], 'id', alias_prefix='project+', alias_defaults={'project_id':'id'}, context=alias_context)
def onchange_partner_id(self, cr, uid, ids, part=False, context=None):
partner_obj = self.pool.get('res.partner')
val = {}
if not part:
return {'value': val}
if 'pricelist_id' in self.fields_get(cr, uid, context=context):
pricelist = partner_obj.read(cr, uid, part, ['property_product_pricelist'], context=context)
pricelist_id = pricelist.get('property_product_pricelist', False) and pricelist.get('property_product_pricelist')[0] or False
val['pricelist_id'] = pricelist_id
return {'value': val}
def unlink(self, cr, uid, ids, context=None):
alias_ids = []
mail_alias = self.pool.get('mail.alias')
analytic_account_to_delete = set()
for proj in self.browse(cr, uid, ids, context=context):
if proj.tasks:
raise UserError(_('You cannot delete a project containing tasks. You can either delete all the project\'s tasks and then delete the project or simply deactivate the project.'))
elif proj.alias_id:
alias_ids.append(proj.alias_id.id)
if proj.analytic_account_id and not proj.analytic_account_id.line_ids:
analytic_account_to_delete.add(proj.analytic_account_id.id)
res = super(project, self).unlink(cr, uid, ids, context=context)
mail_alias.unlink(cr, uid, alias_ids, context=context)
self.pool['account.analytic.account'].unlink(cr, uid, list(analytic_account_to_delete), context=context)
return res
def _get_attached_docs(self, cr, uid, ids, field_name, arg, context):
res = {}
attachment = self.pool.get('ir.attachment')
task = self.pool.get('project.task')
for id in ids:
project_attachments = attachment.search(cr, uid, [('res_model', '=', 'project.project'), ('res_id', '=', id)], context=context, count=True)
task_ids = task.search(cr, uid, [('project_id', '=', id)], context=context)
task_attachments = attachment.search(cr, uid, [('res_model', '=', 'project.task'), ('res_id', 'in', task_ids)], context=context, count=True)
res[id] = (project_attachments or 0) + (task_attachments or 0)
return res
def _task_count(self, cr, uid, ids, field_name, arg, context=None):
if context is None:
context = {}
res={}
for project in self.browse(cr, uid, ids, context=context):
res[project.id] = len(project.task_ids)
return res
def _task_needaction_count(self, cr, uid, ids, field_name, arg, context=None):
Task = self.pool['project.task']
res = dict.fromkeys(ids, 0)
projects = Task.read_group(cr, uid, [('project_id', 'in', ids), ('message_needaction', '=', True)], ['project_id'], ['project_id'], context=context)
res.update({project['project_id'][0]: int(project['project_id_count']) for project in projects})
return res
def _get_alias_models(self, cr, uid, context=None):
""" Overriden in project_issue to offer more options """
return [('project.task', "Tasks")]
def _get_visibility_selection(self, cr, uid, context=None):
""" Overriden in portal_project to offer more options """
return [('portal', _('Customer Project: visible in portal if the customer is a follower')),
('employees', _('All Employees Project: all employees can access')),
('followers', _('Private Project: followers only'))]
def attachment_tree_view(self, cr, uid, ids, context):
task_ids = self.pool.get('project.task').search(cr, uid, [('project_id', 'in', ids)])
domain = [
'|',
'&', ('res_model', '=', 'project.project'), ('res_id', 'in', ids),
'&', ('res_model', '=', 'project.task'), ('res_id', 'in', task_ids)]
res_id = ids and ids[0] or False
return {
'name': _('Attachments'),
'domain': domain,
'res_model': 'ir.attachment',
'type': 'ir.actions.act_window',
'view_id': False,
'view_mode': 'kanban,tree,form',
'view_type': 'form',
'help': _('''<p class="oe_view_nocontent_create">
Documents are attached to the tasks and issues of your project.</p><p>
Send messages or log internal notes with attachments to link
documents to your project.
</p>'''),
'limit': 80,
'context': "{'default_res_model': '%s','default_res_id': %d}" % (self._name, res_id)
}
# Lambda indirection method to avoid passing a copy of the overridable method when declaring the field
_alias_models = lambda self, *args, **kwargs: self._get_alias_models(*args, **kwargs)
_visibility_selection = lambda self, *args, **kwargs: self._get_visibility_selection(*args, **kwargs)
_columns = {
'active': fields.boolean('Active', help="If the active field is set to False, it will allow you to hide the project without removing it."),
'sequence': fields.integer('Sequence', help="Gives the sequence order when displaying a list of Projects."),
'analytic_account_id': fields.many2one(
'account.analytic.account', 'Contract/Analytic',
help="Link this project to an analytic account if you need financial management on projects. "
"It enables you to connect projects with budgets, planning, cost and revenue analysis, timesheets on projects, etc.",
ondelete="cascade", required=True, auto_join=True),
'label_tasks': fields.char('Use Tasks as', help="Gives label to tasks on project's kanban view."),
'tasks': fields.one2many('project.task', 'project_id', "Task Activities"),
'resource_calendar_id': fields.many2one('resource.calendar', 'Working Time', help="Timetable working hours to adjust the gantt diagram report", states={'close':[('readonly',True)]} ),
'type_ids': fields.many2many('project.task.type', 'project_task_type_rel', 'project_id', 'type_id', 'Tasks Stages', states={'close':[('readonly',True)], 'cancelled':[('readonly',True)]}),
'task_count': fields.function(_task_count, type='integer', string="Tasks",),
'task_needaction_count': fields.function(_task_needaction_count, type='integer', string="Tasks",),
'task_ids': fields.one2many('project.task', 'project_id',
domain=['|', ('stage_id.fold', '=', False), ('stage_id', '=', False)]),
'color': fields.integer('Color Index'),
'user_id': fields.many2one('res.users', 'Project Manager'),
'alias_id': fields.many2one('mail.alias', 'Alias', ondelete="restrict", required=True,
help="Internal email associated with this project. Incoming emails are automatically synchronized "
"with Tasks (or optionally Issues if the Issue Tracker module is installed)."),
'alias_model': fields.selection(_alias_models, "Alias Model", select=True, required=True,
help="The kind of document created when an email is received on this project's email alias"),
'privacy_visibility': fields.selection(_visibility_selection, 'Privacy / Visibility', required=True,
help="Holds visibility of the tasks or issues that belong to the current project:\n"
"- Portal : employees see everything;\n"
" if portal is activated, portal users see the tasks or issues followed by\n"
" them or by someone of their company\n"
"- Employees Only: employees see all tasks or issues\n"
"- Followers Only: employees see only the followed tasks or issues; if portal\n"
" is activated, portal users see the followed tasks or issues."),
'state': fields.selection([('draft','New'),
('open','In Progress'),
('cancelled', 'Cancelled'),
('pending','Pending'),
('close','Closed')],
'Status', required=True, copy=False),
'doc_count': fields.function(
_get_attached_docs, string="Number of documents attached", type='integer'
),
'date_start': fields.date('Start Date'),
'date': fields.date('Expiration Date', select=True, track_visibility='onchange'),
}
_order = "sequence, name, id"
_defaults = {
'active': True,
'type': 'contract',
'label_tasks': 'Tasks',
'state': 'open',
'sequence': 10,
'user_id': lambda self,cr,uid,ctx: uid,
'alias_model': 'project.task',
'privacy_visibility': 'employees',
}
    # TODO: Why not use a SQL constraint?
def _check_dates(self, cr, uid, ids, context=None):
for leave in self.read(cr, uid, ids, ['date_start', 'date'], context=context):
if leave['date_start'] and leave['date']:
if leave['date_start'] > leave['date']:
return False
return True
_constraints = [
        (_check_dates, 'Error! Project start date must be earlier than project end date.', ['date_start', 'date'])
]
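    # A minimal sketch of the constraint firing (hypothetical values): a
    # write that puts date_start after date is rejected by _check_dates.
    #
    #   self.write(cr, uid, [project_id],
    #              {'date_start': '2016-02-01', 'date': '2016-01-01'})
    #   # -> raises a validation error with the message above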
def set_template(self, cr, uid, ids, context=None):
return self.setActive(cr, uid, ids, value=False, context=context)
def reset_project(self, cr, uid, ids, context=None):
return self.setActive(cr, uid, ids, value=True, context=context)
def map_tasks(self, cr, uid, old_project_id, new_project_id, context=None):
""" copy and map tasks from old to new project """
if context is None:
context = {}
map_task_id = {}
task_obj = self.pool.get('project.task')
proj = self.browse(cr, uid, old_project_id, context=context)
for task in proj.tasks:
# preserve task name and stage, normally altered during copy
defaults = {'stage_id': task.stage_id.id,
'name': task.name}
map_task_id[task.id] = task_obj.copy(cr, uid, task.id, defaults, context=context)
self.write(cr, uid, [new_project_id], {'tasks':[(6,0, map_task_id.values())]})
task_obj.duplicate_task(cr, uid, map_task_id, context=context)
return True
def copy(self, cr, uid, id, default=None, context=None):
if default is None:
default = {}
context = dict(context or {})
context['active_test'] = False
proj = self.browse(cr, uid, id, context=context)
if not default.get('name'):
default.update(name=_("%s (copy)") % (proj.name))
res = super(project, self).copy(cr, uid, id, default, context)
for follower in proj.message_follower_ids:
self.message_subscribe(cr, uid, res, partner_ids=[follower.partner_id.id], subtype_ids=[subtype.id for subtype in follower.subtype_ids])
self.map_tasks(cr, uid, id, res, context=context)
return res
def duplicate_template(self, cr, uid, ids, context=None):
context = dict(context or {})
data_obj = self.pool.get('ir.model.data')
result = []
for proj in self.browse(cr, uid, ids, context=context):
context.update({'analytic_project_copy': True})
new_date_start = time.strftime('%Y-%m-%d')
new_date_end = False
if proj.date_start and proj.date:
start_date = date(*time.strptime(proj.date_start,'%Y-%m-%d')[:3])
end_date = date(*time.strptime(proj.date,'%Y-%m-%d')[:3])
new_date_end = (datetime(*time.strptime(new_date_start,'%Y-%m-%d')[:3])+(end_date-start_date)).strftime('%Y-%m-%d')
context.update({'copy':True})
new_id = self.copy(cr, uid, proj.id, default = {
'name':_("%s (copy)") % (proj.name),
'state':'open',
'date_start':new_date_start,
'date':new_date_end}, context=context)
result.append(new_id)
if result and len(result):
res_id = result[0]
form_view_id = data_obj._get_id(cr, uid, 'project', 'edit_project')
form_view = data_obj.read(cr, uid, form_view_id, ['res_id'])
tree_view_id = data_obj._get_id(cr, uid, 'project', 'view_project')
tree_view = data_obj.read(cr, uid, tree_view_id, ['res_id'])
search_view_id = data_obj._get_id(cr, uid, 'project', 'view_project_project_filter')
search_view = data_obj.read(cr, uid, search_view_id, ['res_id'])
return {
'name': _('Projects'),
'view_type': 'form',
'view_mode': 'form,tree',
'res_model': 'project.project',
'view_id': False,
'res_id': res_id,
'views': [(form_view['res_id'],'form'),(tree_view['res_id'],'tree')],
'type': 'ir.actions.act_window',
'search_view_id': search_view['res_id'],
}
@api.multi
def setActive(self, value=True):
""" Set a project as active/inactive, and its tasks as well. """
self.write({'active': value})
def create(self, cr, uid, vals, context=None):
if context is None:
context = {}
# Prevent double project creation when 'use_tasks' is checked + alias management
create_context = dict(context, project_creation_in_progress=True,
alias_model_name=vals.get('alias_model', 'project.task'),
alias_parent_model_name=self._name,
mail_create_nosubscribe=True)
ir_values = self.pool.get('ir.values').get_default(cr, uid, 'project.config.settings', 'generate_project_alias')
if ir_values:
vals['alias_name'] = vals.get('alias_name') or vals.get('name')
project_id = super(project, self).create(cr, uid, vals, context=create_context)
project_rec = self.browse(cr, uid, project_id, context=context)
values = {'alias_parent_thread_id': project_id, 'alias_defaults': {'project_id': project_id}}
self.pool.get('mail.alias').write(cr, uid, [project_rec.alias_id.id], values, context=context)
return project_id
def write(self, cr, uid, ids, vals, context=None):
# if alias_model has been changed, update alias_model_id accordingly
if vals.get('alias_model'):
model_ids = self.pool.get('ir.model').search(cr, uid, [('model', '=', vals.get('alias_model', 'project.task'))])
vals.update(alias_model_id=model_ids[0])
res = super(project, self).write(cr, uid, ids, vals, context=context)
if 'active' in vals:
            # archiving/unarchiving a project also archives/unarchives its tasks
projects = self.browse(cr, uid, ids, context)
tasks = projects.with_context(active_test=False).mapped('tasks')
tasks.write({'active': vals['active']})
return res
class task(osv.osv):
_name = "project.task"
_description = "Task"
_date_name = "date_start"
_inherit = ['mail.thread', 'ir.needaction_mixin']
_mail_post_access = 'read'
def _get_default_partner(self, cr, uid, context=None):
if context is None:
context = {}
if 'default_project_id' in context:
project = self.pool.get('project.project').browse(cr, uid, context['default_project_id'], context=context)
if project and project.partner_id:
return project.partner_id.id
return False
def _get_default_stage_id(self, cr, uid, context=None):
""" Gives default stage_id """
if context is None:
context = {}
return self.stage_find(cr, uid, [], context.get('default_project_id'), [('fold', '=', False)], context=context)
def _read_group_stage_ids(self, cr, uid, ids, domain, read_group_order=None, access_rights_uid=None, context=None):
if context is None:
context = {}
stage_obj = self.pool.get('project.task.type')
order = stage_obj._order
access_rights_uid = access_rights_uid or uid
if read_group_order == 'stage_id desc':
order = '%s desc' % order
if 'default_project_id' in context:
search_domain = ['|', ('project_ids', '=', context['default_project_id']), ('id', 'in', ids)]
else:
search_domain = [('id', 'in', ids)]
stage_ids = stage_obj._search(cr, uid, search_domain, order=order, access_rights_uid=access_rights_uid, context=context)
result = stage_obj.name_get(cr, access_rights_uid, stage_ids, context=context)
# restore order of the search
result.sort(lambda x, y: cmp(stage_ids.index(x[0]), stage_ids.index(y[0])))
fold = {}
for stage in stage_obj.browse(cr, access_rights_uid, stage_ids, context=context):
fold[stage.id] = stage.fold or False
return result, fold
_group_by_full = {
'stage_id': _read_group_stage_ids,
}
def onchange_remaining(self, cr, uid, ids, remaining=0.0, planned=0.0):
if remaining and not planned:
return {'value': {'planned_hours': remaining}}
return {}
def onchange_planned(self, cr, uid, ids, planned=0.0, effective=0.0):
return {'value': {'remaining_hours': planned - effective}}
@api.cr_uid_ids_context
def onchange_project(self, cr, uid, id, project_id, context=None):
if project_id:
project = self.pool.get('project.project').browse(cr, uid, project_id, context=context)
if project and project.partner_id:
return {'value': {'partner_id': project.partner_id.id}}
return {}
def onchange_user_id(self, cr, uid, ids, user_id, context=None):
vals = {}
if user_id:
vals['date_start'] = fields.datetime.now()
return {'value': vals}
def duplicate_task(self, cr, uid, map_ids, context=None):
mapper = lambda t: map_ids.get(t.id, t.id)
for task in self.browse(cr, uid, map_ids.values(), context):
new_child_ids = set(map(mapper, task.child_ids))
new_parent_ids = set(map(mapper, task.parent_ids))
if new_child_ids or new_parent_ids:
task.write({'parent_ids': [(6,0,list(new_parent_ids))],
'child_ids': [(6,0,list(new_child_ids))]})
def copy_data(self, cr, uid, id, default=None, context=None):
if default is None:
default = {}
current = self.browse(cr, uid, id, context=context)
if not default.get('name'):
default['name'] = _("%s (copy)") % current.name
if 'remaining_hours' not in default:
default['remaining_hours'] = current.planned_hours
return super(task, self).copy_data(cr, uid, id, default, context)
_columns = {
'active': fields.boolean('Active'),
'name': fields.char('Task Title', track_visibility='onchange', size=128, required=True, select=True),
'description': fields.html('Description'),
'priority': fields.selection([('0','Normal'), ('1','High')], 'Priority', select=True),
'sequence': fields.integer('Sequence', select=True, help="Gives the sequence order when displaying a list of tasks."),
'stage_id': fields.many2one('project.task.type', 'Stage', track_visibility='onchange', select=True,
domain="[('project_ids', '=', project_id)]", copy=False),
'tag_ids': fields.many2many('project.tags', string='Tags', oldname='categ_ids'),
'kanban_state': fields.selection([('normal', 'In Progress'),('done', 'Ready for next stage'),('blocked', 'Blocked')], 'Kanban State',
track_visibility='onchange',
help="A task's kanban state indicates special situations affecting it:\n"
" * Normal is the default situation\n"
" * Blocked indicates something is preventing the progress of this task\n"
" * Ready for next stage indicates the task is ready to be pulled to the next stage",
required=True, copy=False),
'create_date': fields.datetime('Create Date', readonly=True, select=True),
'write_date': fields.datetime('Last Modification Date', readonly=True, select=True), #not displayed in the view but it might be useful with base_action_rule module (and it needs to be defined first for that)
'date_start': fields.datetime('Starting Date', select=True, copy=False),
'date_end': fields.datetime('Ending Date', select=True, copy=False),
'date_assign': fields.datetime('Assigning Date', select=True, copy=False, readonly=True),
'date_deadline': fields.date('Deadline', select=True, copy=False),
'date_last_stage_update': fields.datetime('Last Stage Update', select=True, copy=False, readonly=True),
'project_id': fields.many2one('project.project', 'Project', ondelete='set null', select=True, track_visibility='onchange', change_default=True),
'parent_ids': fields.many2many('project.task', 'project_task_parent_rel', 'task_id', 'parent_id', 'Parent Tasks'),
'child_ids': fields.many2many('project.task', 'project_task_parent_rel', 'parent_id', 'task_id', 'Delegated Tasks'),
'notes': fields.text('Notes'),
'planned_hours': fields.float('Initially Planned Hours', help='Estimated time to do the task, usually set by the project manager when the task is in draft state.'),
'remaining_hours': fields.float('Remaining Hours', digits=(16,2), help="Total remaining time, can be re-estimated periodically by the assignee of the task."),
'user_id': fields.many2one('res.users', 'Assigned to', select=True, track_visibility='onchange'),
'partner_id': fields.many2one('res.partner', 'Customer'),
'manager_id': fields.related('project_id', 'user_id', type='many2one', relation='res.users', string='Project Manager'),
'company_id': fields.many2one('res.company', 'Company'),
'id': fields.integer('ID', readonly=True),
'color': fields.integer('Color Index'),
'user_email': fields.related('user_id', 'email', type='char', string='User Email', readonly=True),
'attachment_ids': fields.one2many('ir.attachment', 'res_id', domain=lambda self: [('res_model', '=', self._name)], auto_join=True, string='Attachments'),
        # In the domain of displayed_image_id, we couldn't use attachment_ids because a one2many is represented as a list of commands, so we used res_model & res_id
'displayed_image_id': fields.many2one('ir.attachment', domain="[('res_model', '=', 'project.task'), ('res_id', '=', id), ('mimetype', 'ilike', 'image')]", string='Displayed Image'),
'legend_blocked': fields.related("stage_id", "legend_blocked", type="char", string='Kanban Blocked Explanation'),
'legend_done': fields.related("stage_id", "legend_done", type="char", string='Kanban Valid Explanation'),
'legend_normal': fields.related("stage_id", "legend_normal", type="char", string='Kanban Ongoing Explanation'),
}
_defaults = {
'stage_id': _get_default_stage_id,
'project_id': lambda self, cr, uid, ctx=None: ctx.get('default_project_id') if ctx is not None else False,
'date_last_stage_update': fields.datetime.now,
'kanban_state': 'normal',
'priority': '0',
'sequence': 10,
'active': True,
'user_id': lambda obj, cr, uid, ctx=None: uid,
'company_id': lambda self, cr, uid, ctx=None: self.pool.get('res.company')._company_default_get(cr, uid, 'project.task', context=ctx),
'partner_id': lambda self, cr, uid, ctx=None: self._get_default_partner(cr, uid, context=ctx),
'date_start': fields.datetime.now,
}
_order = "priority desc, sequence, date_start, name, id"
def _check_recursion(self, cr, uid, ids, context=None):
for id in ids:
visited_branch = set()
visited_node = set()
res = self._check_cycle(cr, uid, id, visited_branch, visited_node, context=context)
if not res:
return False
return True
def _check_cycle(self, cr, uid, id, visited_branch, visited_node, context=None):
        if id in visited_branch: # cycle detected
            return False
        if id in visited_node: # already verified; no need to redo the work
            return True
visited_branch.add(id)
visited_node.add(id)
#visit child using DFS
task = self.browse(cr, uid, id, context=context)
for child in task.child_ids:
res = self._check_cycle(cr, uid, child.id, visited_branch, visited_node, context=context)
if not res:
return False
visited_branch.remove(id)
return True
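    # Illustrative sketch of the DFS above, with hypothetical task ids:
    # given edges 1 -> 2 -> 3 and 3 -> 1, visiting task 1 pushes 1, 2 and 3
    # onto `visited_branch`; re-entering 1 while it is still on the branch
    # means a cycle, so _check_cycle() returns False and the constraint on
    # parent_ids fails.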
    def _check_dates(self, cr, uid, ids, context=None):
        if context is None:
            context = {}
        obj_task = self.browse(cr, uid, ids[0], context=context)
        start = obj_task.date_start or False
        end = obj_task.date_end or False
        if start and end:
            if start > end:
                return False
        return True
_constraints = [
(_check_recursion, 'Error ! You cannot create recursive tasks.', ['parent_ids']),
        (_check_dates, 'Error ! Task starting date must be earlier than its ending date.', ['date_start','date_end'])
]
# Override view according to the company definition
def fields_view_get(self, cr, uid, view_id=None, view_type='form', context=None, toolbar=False, submenu=False):
users_obj = self.pool.get('res.users')
if context is None: context = {}
        # read uom as admin to avoid access rights issues, e.g. for portal/share users,
        # this should be safe (no context passed to avoid side-effects)
        obj_tm = users_obj.browse(cr, SUPERUSER_ID, uid, context=context).company_id.project_time_mode_id
        res = super(task, self).fields_view_get(cr, uid, view_id=view_id, view_type=view_type, context=context, toolbar=toolbar, submenu=submenu)
try:
# using get_object to get translation value
uom_hour = self.pool['ir.model.data'].get_object(cr, uid, 'product', 'product_uom_hour', context=context)
except ValueError:
uom_hour = False
if not obj_tm or not uom_hour or obj_tm.id == uom_hour.id:
return res
eview = etree.fromstring(res['arch'])
# if the project_time_mode_id is not in hours (so in days), display it as a float field
def _check_rec(eview):
if eview.attrib.get('widget','') == 'float_time':
eview.set('widget','float')
for child in eview:
_check_rec(child)
return True
_check_rec(eview)
res['arch'] = etree.tostring(eview)
# replace reference of 'Hours' to 'Day(s)'
for f in res['fields']:
            # TODO: this does NOT work in languages other than English:
            # the field 'Initially Planned Hours' should become 'Initially Planned Days',
            # but the string 'Initially Planned Days' is not available in the translations
if 'Hours' in res['fields'][f]['string']:
res['fields'][f]['string'] = res['fields'][f]['string'].replace('Hours', obj_tm.name)
return res
def get_empty_list_help(self, cr, uid, help, context=None):
context = dict(context or {})
context['empty_list_help_id'] = context.get('default_project_id')
context['empty_list_help_model'] = 'project.project'
context['empty_list_help_document_name'] = _("tasks")
return super(task, self).get_empty_list_help(cr, uid, help, context=context)
# ----------------------------------------
# Case management
# ----------------------------------------
def stage_find(self, cr, uid, cases, section_id, domain=[], order='sequence', context=None):
""" Override of the base.stage method
Parameter of the stage search taken from the lead:
- section_id: if set, stages must belong to this section or
be a default stage; if not set, stages must be default
stages
"""
if isinstance(cases, (int, long)):
cases = self.browse(cr, uid, cases, context=context)
# collect all section_ids
section_ids = []
if section_id:
section_ids.append(section_id)
for task in cases:
if task.project_id:
section_ids.append(task.project_id.id)
search_domain = []
if section_ids:
            search_domain = ['|'] * (len(section_ids) - 1)
for section_id in section_ids:
search_domain.append(('project_ids', '=', section_id))
search_domain += list(domain)
# perform search, return the first found
stage_ids = self.pool.get('project.task.type').search(cr, uid, search_domain, order=order, context=context)
if stage_ids:
return stage_ids[0]
return False
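    # The search domain built above uses Odoo's prefix ("polish") notation,
    # where each '|' combines the next two terms. A sketch with hypothetical
    # project ids [1, 2, 3]: ['|'] * 2 plus one leaf per id yields
    #
    #   ['|', '|', ('project_ids', '=', 1),
    #              ('project_ids', '=', 2),
    #              ('project_ids', '=', 3)]
    #
    # i.e. stages linked to project 1 OR project 2 OR project 3.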
def _check_child_task(self, cr, uid, ids, context=None):
if context == None:
context = {}
tasks = self.browse(cr, uid, ids, context=context)
for task in tasks:
if task.child_ids:
for child in task.child_ids:
if child.stage_id and not child.stage_id.fold:
raise UserError(_("Child task still open.\nPlease cancel or complete child task first."))
return True
def _store_history(self, cr, uid, ids, context=None):
for task in self.browse(cr, uid, ids, context=context):
self.pool.get('project.task.history').create(cr, uid, {
'task_id': task.id,
'remaining_hours': task.remaining_hours,
'planned_hours': task.planned_hours,
'kanban_state': task.kanban_state,
'type_id': task.stage_id.id,
'user_id': task.user_id.id
}, context=context)
return True
# ------------------------------------------------
# CRUD overrides
# ------------------------------------------------
def create(self, cr, uid, vals, context=None):
context = dict(context or {})
# for default stage
if vals.get('project_id') and not context.get('default_project_id'):
context['default_project_id'] = vals.get('project_id')
# user_id change: update date_assign
if vals.get('user_id'):
vals['date_assign'] = fields.datetime.now()
# context: no_log, because subtype already handle this
create_context = dict(context, mail_create_nolog=True)
task_id = super(task, self).create(cr, uid, vals, context=create_context)
self._store_history(cr, uid, [task_id], context=context)
return task_id
def write(self, cr, uid, ids, vals, context=None):
if isinstance(ids, (int, long)):
ids = [ids]
# stage change: update date_last_stage_update
if 'stage_id' in vals:
vals['date_last_stage_update'] = fields.datetime.now()
# user_id change: update date_assign
if vals.get('user_id'):
vals['date_assign'] = fields.datetime.now()
# Overridden to reset the kanban_state to normal whenever
# the stage (stage_id) of the task changes.
        if vals and 'kanban_state' not in vals and 'stage_id' in vals:
new_stage = vals.get('stage_id')
vals_reset_kstate = dict(vals, kanban_state='normal')
for t in self.browse(cr, uid, ids, context=context):
write_vals = vals_reset_kstate if t.stage_id.id != new_stage else vals
super(task, self).write(cr, uid, [t.id], write_vals, context=context)
result = True
else:
result = super(task, self).write(cr, uid, ids, vals, context=context)
if any(item in vals for item in ['stage_id', 'remaining_hours', 'user_id', 'kanban_state']):
self._store_history(cr, uid, ids, context=context)
return result
def unlink(self, cr, uid, ids, context=None):
if context is None:
context = {}
self._check_child_task(cr, uid, ids, context=context)
res = super(task, self).unlink(cr, uid, ids, context)
return res
def _get_total_hours(self):
return self.remaining_hours
def _generate_task(self, cr, uid, tasks, ident=4, context=None):
context = context or {}
result = ""
ident = ' '*ident
company = self.pool["res.users"].browse(cr, uid, uid, context=context).company_id
duration_uom = {
'day(s)': 'd', 'days': 'd', 'day': 'd', 'd': 'd',
            'month(s)': 'm', 'months': 'm', 'month': 'm', 'm': 'm',
'week(s)': 'w', 'weeks': 'w', 'week': 'w', 'w': 'w',
'hour(s)': 'H', 'hours': 'H', 'hour': 'H', 'h': 'H',
}.get(company.project_time_mode_id.name.lower(), "hour(s)")
for task in tasks:
if task.stage_id and task.stage_id.fold:
continue
result += '''
%sdef Task_%s():
%s todo = \"%.2f%s\"
%s effort = \"%.2f%s\"''' % (ident, task.id, ident, task.remaining_hours, duration_uom, ident, task._get_total_hours(), duration_uom)
start = []
for t2 in task.parent_ids:
start.append("up.Task_%s.end" % (t2.id,))
if start:
result += '''
%s start = max(%s)
''' % (ident,','.join(start))
if task.user_id:
result += '''
%s resource = %s
''' % (ident, 'User_'+str(task.user_id.id))
result += "\n"
return result
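    # Sketch of the text emitted by _generate_task() for a single task, with
    # hypothetical values (task id 42 depending on task 41, 8 hours left):
    #
    #     def Task_42():
    #         todo = "8.00H"
    #         effort = "8.00H"
    #         start = max(up.Task_41.end)
    #         resource = User_5
    #
    # The exact consumer of this scheduling mini-DSL lives outside this file.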
# ---------------------------------------------------
# Mail gateway
# ---------------------------------------------------
def _track_subtype(self, cr, uid, ids, init_values, context=None):
record = self.browse(cr, uid, ids[0], context=context)
if 'kanban_state' in init_values and record.kanban_state == 'blocked':
return 'project.mt_task_blocked'
elif 'kanban_state' in init_values and record.kanban_state == 'done':
return 'project.mt_task_ready'
elif 'user_id' in init_values and record.user_id: # assigned -> new
return 'project.mt_task_new'
elif 'stage_id' in init_values and record.stage_id and record.stage_id.sequence <= 1: # start stage -> new
return 'project.mt_task_new'
elif 'stage_id' in init_values:
return 'project.mt_task_stage'
return super(task, self)._track_subtype(cr, uid, ids, init_values, context=context)
def _notification_group_recipients(self, cr, uid, ids, message, recipients, done_ids, group_data, context=None):
""" Override the mail.thread method to handle project users and officers
recipients. Indeed those will have specific action in their notification
emails: creating tasks, assigning it. """
group_project_user = self.pool['ir.model.data'].xmlid_to_res_id(cr, uid, 'project.group_project_user')
for recipient in recipients:
if recipient.id in done_ids:
continue
if recipient.user_ids and group_project_user in recipient.user_ids[0].groups_id.ids:
group_data['group_project_user'] |= recipient
done_ids.add(recipient.id)
return super(task, self)._notification_group_recipients(cr, uid, ids, message, recipients, done_ids, group_data, context=context)
def _notification_get_recipient_groups(self, cr, uid, ids, message, recipients, context=None):
res = super(task, self)._notification_get_recipient_groups(cr, uid, ids, message, recipients, context=context)
take_action = self._notification_link_helper(cr, uid, ids, 'assign', context=context)
new_action_id = self.pool['ir.model.data'].xmlid_to_res_id(cr, uid, 'project.action_view_task')
new_action = self._notification_link_helper(cr, uid, ids, 'new', context=context, action_id=new_action_id)
task_record = self.browse(cr, uid, ids[0], context=context)
actions = []
if not task_record.user_id:
actions.append({'url': take_action, 'title': _('I take it')})
else:
actions.append({'url': new_action, 'title': _('New Task')})
res['group_project_user'] = {
'actions': actions
}
return res
@api.cr_uid_context
def message_get_reply_to(self, cr, uid, ids, default=None, context=None):
""" Override to get the reply_to of the parent project. """
tasks = self.browse(cr, SUPERUSER_ID, ids, context=context)
project_ids = set([task.project_id.id for task in tasks if task.project_id])
aliases = self.pool['project.project'].message_get_reply_to(cr, uid, list(project_ids), default=default, context=context)
return dict((task.id, aliases.get(task.project_id and task.project_id.id or 0, False)) for task in tasks)
def email_split(self, cr, uid, ids, msg, context=None):
email_list = tools.email_split((msg.get('to') or '') + ',' + (msg.get('cc') or ''))
# check left-part is not already an alias
task_ids = self.browse(cr, uid, ids, context=context)
aliases = [task.project_id.alias_name for task in task_ids if task.project_id]
return filter(lambda x: x.split('@')[0] not in aliases, email_list)
def message_new(self, cr, uid, msg, custom_values=None, context=None):
""" Override to updates the document according to the email. """
if custom_values is None:
custom_values = {}
defaults = {
'name': msg.get('subject'),
'planned_hours': 0.0,
'partner_id': msg.get('author_id', False)
}
defaults.update(custom_values)
res = super(task, self).message_new(cr, uid, msg, custom_values=defaults, context=context)
email_list = self.email_split(cr, uid, [res], msg, context=context)
partner_ids = filter(None, self._find_partner_from_emails(cr, uid, [res], email_list, force_create=False, context=context))
self.message_subscribe(cr, uid, [res], partner_ids, context=context)
return res
def message_update(self, cr, uid, ids, msg, update_vals=None, context=None):
""" Override to update the task according to the email. """
if update_vals is None:
update_vals = {}
maps = {
'cost': 'planned_hours',
}
for line in msg['body'].split('\n'):
line = line.strip()
res = tools.command_re.match(line)
if res:
match = res.group(1).lower()
field = maps.get(match)
if field:
try:
update_vals[field] = float(res.group(2).lower())
except (ValueError, TypeError):
pass
email_list = self.email_split(cr, uid, ids, msg, context=context)
partner_ids = filter(None, self._find_partner_from_emails(cr, uid, ids, email_list, force_create=False, context=context))
self.message_subscribe(cr, uid, ids, partner_ids, context=context)
return super(task, self).message_update(cr, uid, ids, msg, update_vals=update_vals, context=context)
def message_get_suggested_recipients(self, cr, uid, ids, context=None):
recipients = super(task, self).message_get_suggested_recipients(cr, uid, ids, context=context)
for data in self.browse(cr, uid, ids, context=context):
if data.partner_id:
reason = _('Customer Email') if data.partner_id.email else _('Customer')
data._message_add_suggested_recipient(recipients, partner=data.partner_id, reason=reason)
return recipients
class account_analytic_account(osv.osv):
_inherit = 'account.analytic.account'
_description = 'Analytic Account'
def _compute_project_count(self, cr, uid, ids, fieldnames, args, context=None):
result = dict.fromkeys(ids, 0)
for account in self.browse(cr, uid, ids, context=context):
result[account.id] = len(account.project_ids)
return result
_columns = {
'use_tasks': fields.boolean('Tasks', help="Check this box to manage internal activities through this project"),
'company_uom_id': fields.related('company_id', 'project_time_mode_id', string="Company UOM", type='many2one', relation='product.uom'),
'project_ids': fields.one2many('project.project', 'analytic_account_id', 'Projects'),
'project_count': fields.function(_compute_project_count, 'Project Count', type='integer')
}
def on_change_template(self, cr, uid, ids, template_id, date_start=False, context=None):
res = super(account_analytic_account, self).on_change_template(cr, uid, ids, template_id, date_start=date_start, context=context)
if template_id and 'value' in res:
template = self.browse(cr, uid, template_id, context=context)
res['value']['use_tasks'] = template.use_tasks
return res
def _trigger_project_creation(self, cr, uid, vals, context=None):
        '''
        Decide whether a project needs to be created automatically when an
        analytic account is created. Returns True if so, False otherwise.
        '''
if context is None: context = {}
        return vals.get('use_tasks') and 'project_creation_in_progress' not in context
@api.cr_uid_id_context
def project_create(self, cr, uid, analytic_account_id, vals, context=None):
        '''
        Called at analytic account creation time; creates a project
        automatically linked to the account if the conditions are met.
        '''
project_pool = self.pool.get('project.project')
project_id = project_pool.search(cr, uid, [('analytic_account_id','=', analytic_account_id)])
if not project_id and self._trigger_project_creation(cr, uid, vals, context=context):
project_values = {
'name': vals.get('name'),
'analytic_account_id': analytic_account_id,
'use_tasks': True,
}
return project_pool.create(cr, uid, project_values, context=context)
return False
def create(self, cr, uid, vals, context=None):
if context is None:
context = {}
if vals.get('child_ids', False) and context.get('analytic_project_copy', False):
vals['child_ids'] = []
analytic_account_id = super(account_analytic_account, self).create(cr, uid, vals, context=context)
self.project_create(cr, uid, analytic_account_id, vals, context=context)
return analytic_account_id
def write(self, cr, uid, ids, vals, context=None):
if isinstance(ids, (int, long)):
ids = [ids]
vals_for_project = vals.copy()
for account in self.browse(cr, uid, ids, context=context):
if not vals.get('name'):
vals_for_project['name'] = account.name
self.project_create(cr, uid, account.id, vals_for_project, context=context)
return super(account_analytic_account, self).write(cr, uid, ids, vals, context=context)
def unlink(self, cr, uid, ids, context=None):
proj_ids = self.pool['project.project'].search(cr, uid, [('analytic_account_id', 'in', ids)])
has_tasks = self.pool['project.task'].search(cr, uid, [('project_id', 'in', proj_ids)], count=True, context=context)
if has_tasks:
raise UserError(_('Please remove existing tasks in the project linked to the accounts you want to delete.'))
return super(account_analytic_account, self).unlink(cr, uid, ids, context=context)
def name_search(self, cr, uid, name, args=None, operator='ilike', context=None, limit=100):
if args is None:
args = []
if context is None:
context={}
if context.get('current_model') == 'project.project':
project_ids = self.search(cr, uid, args + [('name', operator, name)], limit=limit, context=context)
return self.name_get(cr, uid, project_ids, context=context)
return super(account_analytic_account, self).name_search(cr, uid, name, args=args, operator=operator, context=context, limit=limit)
def projects_action(self, cr, uid, ids, context=None):
accounts = self.browse(cr, uid, ids, context=context)
project_ids = sum([account.project_ids.ids for account in accounts], [])
result = {
"type": "ir.actions.act_window",
"res_model": "project.project",
"views": [[False, "tree"], [False, "form"]],
"domain": [["id", "in", project_ids]],
"context": {"create": False},
"name": "Projects",
}
if len(project_ids) == 1:
result['views'] = [(False, "form")]
result['res_id'] = project_ids[0]
else:
result = {'type': 'ir.actions.act_window_close'}
return result
class project_project(osv.osv):
_inherit = 'project.project'
_defaults = {
'use_tasks': True
}
class project_task_history(osv.osv):
"""
Tasks History, used for cumulative flow charts (Lean/Agile)
"""
_name = 'project.task.history'
_description = 'History of Tasks'
_rec_name = 'task_id'
_log_access = False
def _get_date(self, cr, uid, ids, name, arg, context=None):
result = {}
for history in self.browse(cr, uid, ids, context=context):
if history.type_id and history.type_id.fold:
result[history.id] = history.date
continue
cr.execute('''select
date
from
project_task_history
where
task_id=%s and
id>%s
order by id limit 1''', (history.task_id.id, history.id))
res = cr.fetchone()
result[history.id] = res and res[0] or False
return result
def _get_related_date(self, cr, uid, ids, context=None):
result = []
for history in self.browse(cr, uid, ids, context=context):
cr.execute('''select
id
from
project_task_history
where
task_id=%s and
id<%s
order by id desc limit 1''', (history.task_id.id, history.id))
res = cr.fetchone()
if res:
result.append(res[0])
return result
_columns = {
'task_id': fields.many2one('project.task', 'Task', ondelete='cascade', required=True, select=True),
'type_id': fields.many2one('project.task.type', 'Stage'),
'kanban_state': fields.selection([('normal', 'Normal'), ('blocked', 'Blocked'), ('done', 'Ready for next stage')], 'Kanban State', required=False),
'date': fields.date('Date', select=True),
'end_date': fields.function(_get_date, string='End Date', type="date", store={
'project.task.history': (_get_related_date, None, 20)
}),
'remaining_hours': fields.float('Remaining Time', digits=(16, 2)),
'planned_hours': fields.float('Planned Time', digits=(16, 2)),
'user_id': fields.many2one('res.users', 'Responsible'),
}
_defaults = {
'date': fields.date.context_today,
}
class project_task_history_cumulative(osv.osv):
_name = 'project.task.history.cumulative'
_table = 'project_task_history_cumulative'
_inherit = 'project.task.history'
_auto = False
_columns = {
'end_date': fields.date('End Date'),
'nbr_tasks': fields.integer('# of Tasks', readonly=True),
'project_id': fields.many2one('project.project', 'Project'),
}
def init(self, cr):
tools.drop_view_if_exists(cr, 'project_task_history_cumulative')
cr.execute(""" CREATE VIEW project_task_history_cumulative AS (
SELECT
history.date::varchar||'-'||history.history_id::varchar AS id,
history.date AS end_date,
*
FROM (
SELECT
h.id AS history_id,
h.date+generate_series(0, CAST((coalesce(h.end_date, DATE 'tomorrow')::date - h.date) AS integer)-1) AS date,
h.task_id, h.type_id, h.user_id, h.kanban_state,
count(h.task_id) as nbr_tasks,
greatest(h.remaining_hours, 1) AS remaining_hours, greatest(h.planned_hours, 1) AS planned_hours,
t.project_id
FROM
project_task_history AS h
JOIN project_task AS t ON (h.task_id = t.id)
GROUP BY
h.id,
h.task_id,
t.project_id
) AS history
)
""")
class project_tags(osv.Model):
""" Tags of project's tasks (or issues) """
_name = "project.tags"
_description = "Tags of project's tasks, issues..."
_columns = {
'name': fields.char('Name', required=True),
'color': fields.integer('Color Index'),
}
_sql_constraints = [
('name_uniq', 'unique (name)', "Tag name already exists !"),
]
|
vileopratama/vitech
|
src/addons/project/project.py
|
Python
|
mit
| 53,636
|
[
"VisIt"
] |
eb6552dce52da4474e365c4edf1869a989cc50a6b52d70e5f49869ce324fb68b
|
"""
(c) RIKEN 2015. All rights reserved.
Author: Keitaro Yamashita
This software is released under the new BSD License; see LICENSE.
"""
import numpy
from cctbx.array_family import flex
from cctbx import miller
from cctbx import crystal
class DenzoXfile:
"""
Reference:
http://www.hkl-xray.com/sites/default/files/manual_online.pdf
p. 44
"""
def __init__(self, filein, *args, **kwargs):
if filein is not None:
self.read_file(filein, *args, **kwargs)
# __init__()
def read_file(self, filein, onlyfull=False, onlypartial=False, read_I_sum=False):
assert (onlyfull,onlypartial).count(True) < 2
ifs = open(filein)
self.comment = ifs.readline()
        if self.comment.startswith(" 999 ") and len(self.comment.split()) == 3:
            raise RuntimeError("%s does not contain useful info" % filein)
self.amatrix, self.umatrix = numpy.zeros((3,3)), numpy.zeros((3,3))
for i in xrange(3):
vals = map(float, ifs.readline().strip().split())
self.amatrix[i,] = vals[0:3]
self.umatrix[i,] = vals[3:6]
# osc. start osc. end dist. in pixel units wavelength cryst rotz roty rotx mosaicity
l = ifs.readline()
tmp = l[:12], l[12:24], l[24:36], l[36:48], l[48:58], l[58:68], l[68:78], l[78:88]
self.osc_start, self.osc_end, self.distance, self.wavelength, self.rotz,self.roty,self.rotx, self.mosaicity = map(float, tmp)
        # These are the actual reflections now. Columns:
        #  1,2,3  h, k, l (not reduced)
        #  4      flag: 0 full, 1 partial
        #  5      I by profile fitting
        #  6      I by profile summation
        #  7      chi^2 of profile fitting
        #  8      sigma of I
        #  9      cosine of incidence angle at detector
        #  10     x coordinate of predicted centroid, in pixel units
        #  11     y coordinate of predicted centroid, in pixel units
        #  12     Lorentz, polarization, obliquity factor
        #  13     strength of averaged profile, arbitrary units
        # In columns 5, 6 and 8 the intensity of very strong reflections
        # (i.e. overflows) is written as an integer rather than as a floating
        # point number (see the example in the manual referenced above).
        # Scalepack understands this Denzo convention.
self.miller_indices = flex.miller_index()
self.data = flex.double()
self.sigmas = flex.double()
for l in ifs:
if l.startswith(" 999 "):
boxsize = map(int, l.strip().split())[1:]
break
            # fixed-width columns (a plain whitespace split breaks when fields run together)
            sp = l[:4], l[4:8], l[8:12], l[13], l[14:22], l[22:30], l[30:37], l[37:43], l[43:49], l[49:56], l[56:63], l[63:69], l[69:77]
h, k, l, isfull = map(int, sp[:4])
Ipf, Ips, chisq, sigma, cos, calx, caly, lpo, strength = map(float, sp[4:])
if onlyfull and not isfull: continue
if onlypartial and isfull: continue
self.miller_indices.append((h,k,l))
self.data.append(Ips if read_I_sum else Ipf)
self.sigmas.append(sigma)
self.intbox = [] # 1: background area, 0: guard area, 2: spot area
for l in ifs:
if l.startswith(" "):
self.intbox.append(map(int, l.strip()))
elif l.startswith("unit cell"):
self.unit_cell = map(float, l[9:].strip().split())
elif l.startswith("space group"):
self.space_group = l[12:].strip()
# read_file()
def miller_set(self, anomalous_flag=True):
return miller.set(crystal_symmetry=crystal.symmetry(unit_cell=self.unit_cell, space_group_symbol=self.space_group),
indices=self.miller_indices, anomalous_flag=anomalous_flag)
# miller_set()
def miller_array(self, anomalous_flag=True):
return miller.array(miller_set=self.miller_set(anomalous_flag),
data=self.data, sigmas=self.sigmas)
# miller_array()
# class DenzoXfile
if __name__ == "__main__":
import sys
x = DenzoXfile(sys.argv[1])
a = x.miller_array()
a.show_summary()
print "A="
print x.amatrix
print "U="
print x.umatrix
|
keitaroyam/yamtbx
|
yamtbx/dataproc/hkl2000/xfile.py
|
Python
|
bsd-3-clause
| 4,175
|
[
"CRYSTAL"
] |
61830d9fdab6a002676168884d5d9c2a4877d5aad971c15949908424c553a36c
|
# -*- coding: utf-8 -*-
r"""
Linear elasticity with given displacements.
Find :math:`\ul{u}` such that:
.. math::
\int_{\Omega} D_{ijkl}\ e_{ij}(\ul{v}) e_{kl}(\ul{u})
= 0
\;, \quad \forall \ul{v} \;,
where
.. math::
D_{ijkl} = \mu (\delta_{ik} \delta_{jl}+\delta_{il} \delta_{jk}) +
\lambda \ \delta_{ij} \delta_{kl}
\;.
This example models a cylinder that is fixed at one end while the second end
has a specified displacement of 0.01 in the x direction (this boundary
condition is named ``'Displaced'``). There is also a specified displacement of
0.005 in the z direction for points in the region labeled
``'SomewhereTop'``. This boundary condition is named
``'PerturbedSurface'``. The region ``'SomewhereTop'`` is specified as those
vertices for which::
(z > 0.017) & (x > 0.03) & (x < 0.07)
The displacement field (three DOFs/node) in the ``'Omega'`` region is
approximated using P1 (four-node tetrahedral) finite elements. The material is
linear elastic and its properties are specified as Lamé parameters
:math:`\lambda` and :math:`\mu` (see
http://en.wikipedia.org/wiki/Lam%C3%A9_parameters)
The output is the displacement for each vertex, saved by default to
cylinder.vtk. View the results using::
$ ./postproc.py cylinder.vtk --wireframe -b --only-names=u -d'u,plot_displacements,rel_scaling=1'
"""
from __future__ import absolute_import
from sfepy import data_dir
from sfepy.mechanics.matcoefs import stiffness_from_lame
filename_mesh = data_dir + '/meshes/3d/cylinder.mesh'
regions = {
'Omega' : 'all',
'Left' : ('vertices in (x < 0.001)', 'facet'),
'Right' : ('vertices in (x > 0.099)', 'facet'),
'SomewhereTop' : ('vertices in (z > 0.017) & (x > 0.03) & (x < 0.07)',
'vertex'),
}
materials = {
'solid' : ({'D': stiffness_from_lame(dim=3, lam=1e1, mu=1e0)},),
}
fields = {
'displacement': ('real', 'vector', 'Omega', 1),
}
integrals = {
'i' : 1,
}
variables = {
'u' : ('unknown field', 'displacement', 0),
'v' : ('test field', 'displacement', 'u'),
}
ebcs = {
'Fixed' : ('Left', {'u.all' : 0.0}),
'Displaced' : ('Right', {'u.0' : 0.01, 'u.[1,2]' : 0.0}),
'PerturbedSurface' : ('SomewhereTop', {'u.2' : 0.005}),
}
equations = {
'balance_of_forces' :
"""dw_lin_elastic.i.Omega(solid.D, v, u) = 0""",
}
solvers = {
'ls' : ('ls.scipy_direct', {}),
'newton' : ('nls.newton', {
'i_max' : 1,
'eps_a' : 1e-10,
}),
}
|
lokik/sfepy
|
examples/linear_elasticity/linear_elastic.py
|
Python
|
bsd-3-clause
| 2,486
|
[
"VTK"
] |
3d9ea8609544c073890e7c557d3bf8c011d82579c80b962c5996aafdf23c721e
|
# Copyright Iris contributors
#
# This file is part of Iris and is released under the LGPL license.
# See COPYING and COPYING.LESSER in the root of the repository for full
# licensing details.
"""
Definitions of coordinates and other dimensional metadata.
"""
from abc import ABCMeta, abstractmethod
from collections import namedtuple
from collections.abc import Iterator
import copy
from itertools import chain, zip_longest
import operator
import warnings
import zlib
import cftime
import dask.array as da
import numpy as np
import numpy.ma as ma
from iris._data_manager import DataManager
import iris._lazy_data as _lazy
from iris.common import (
AncillaryVariableMetadata,
BaseMetadata,
CellMeasureMetadata,
CFVariableMixin,
CoordMetadata,
DimCoordMetadata,
metadata_manager_factory,
)
import iris.exceptions
import iris.time
import iris.util
class _DimensionalMetadata(CFVariableMixin, metaclass=ABCMeta):
"""
Superclass for dimensional metadata.
"""
_MODE_ADD = 1
_MODE_SUB = 2
_MODE_MUL = 3
_MODE_DIV = 4
_MODE_RDIV = 5
_MODE_SYMBOL = {
_MODE_ADD: "+",
_MODE_SUB: "-",
_MODE_MUL: "*",
_MODE_DIV: "/",
_MODE_RDIV: "/",
}
@abstractmethod
def __init__(
self,
values,
standard_name=None,
long_name=None,
var_name=None,
units=None,
attributes=None,
):
"""
Constructs a single dimensional metadata object.
Args:
* values:
The values of the dimensional metadata.
Kwargs:
* standard_name:
CF standard name of the dimensional metadata.
* long_name:
Descriptive name of the dimensional metadata.
* var_name:
The netCDF variable name for the dimensional metadata.
* units
The :class:`~cf_units.Unit` of the dimensional metadata's values.
Can be a string, which will be converted to a Unit object.
* attributes
A dictionary containing other cf and user-defined attributes.
"""
# Note: this class includes bounds handling code for convenience, but
# this can only run within instances which are also Coords, because
# only they may actually have bounds. This parent class has no
# bounds-related getter/setter properties, and no bounds keywords in
# its __init__ or __copy__ methods. The only bounds-related behaviour
# it provides is a 'has_bounds()' method, which always returns False.
# Configure the metadata manager.
if not hasattr(self, "_metadata_manager"):
self._metadata_manager = metadata_manager_factory(BaseMetadata)
#: CF standard name of the quantity that the metadata represents.
self.standard_name = standard_name
#: Descriptive name of the metadata.
self.long_name = long_name
#: The netCDF variable name for the metadata.
self.var_name = var_name
#: Unit of the quantity that the metadata represents.
self.units = units
#: Other attributes, including user specified attributes that
#: have no meaning to Iris.
self.attributes = attributes
# Set up DataManager attributes and values.
self._values_dm = None
self._values = values
self._bounds_dm = None # Only ever set on Coord-derived instances.
def __getitem__(self, keys):
"""
Returns a new dimensional metadata whose values are obtained by
conventional array indexing.
.. note::
Indexing of a circular coordinate results in a non-circular
coordinate if the overall shape of the coordinate changes after
indexing.
"""
# Note: this method includes bounds handling code, but it only runs
# within Coord type instances, as only these allow bounds to be set.
# Fetch the values.
values = self._values_dm.core_data()
# Index values with the keys.
_, values = iris.util._slice_data_with_keys(values, keys)
# Copy values after indexing to avoid making metadata that is a
# view on another metadata. This will not realise lazy data.
values = values.copy()
# If the metadata is a coordinate and it has bounds, repeat the above
# with the bounds.
copy_args = {}
if self.has_bounds():
bounds = self._bounds_dm.core_data()
_, bounds = iris.util._slice_data_with_keys(bounds, keys)
# Pass into the copy method : for Coords, it has a 'bounds' key.
copy_args["bounds"] = bounds.copy()
# The new metadata is a copy of the old one with replaced content.
new_metadata = self.copy(values, **copy_args)
return new_metadata
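    # Illustrative sketch of the indexing above (assumed usage, not from
    # this file):
    #
    #   coord = iris.coords.AuxCoord([0., 10., 20.], long_name="x")
    #   sub = coord[1:]   # a new coordinate with points [10., 20.]
    #
    # `sub` owns a copy of the indexed values, so it is never a view on
    # `coord`'s array.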
def copy(self, values=None):
"""
Returns a copy of this dimensional metadata object.
Kwargs:
* values
An array of values for the new dimensional metadata object.
This may be a different shape to the original values array being
copied.
"""
# Note: this is overridden in Coord subclasses, to add bounds handling
# and a 'bounds' keyword.
new_metadata = copy.deepcopy(self)
if values is not None:
new_metadata._values_dm = None
new_metadata._values = values
return new_metadata
@abstractmethod
def cube_dims(self, cube):
"""
Identify the cube dims of any _DimensionalMetadata object.
Return the dimensions in the cube of a matching _DimensionalMetadata
object, if any.
Equivalent to cube.coord_dims(self) for a Coord,
or cube.cell_measure_dims for a CellMeasure, and so on.
Simplifies generic code to handle any _DimensionalMetadata objects.
"""
# Only makes sense for specific subclasses.
raise NotImplementedError()
def _sanitise_array(self, src, ndmin):
if _lazy.is_lazy_data(src):
# Lazy data : just ensure ndmin requirement.
ndims_missing = ndmin - src.ndim
if ndims_missing <= 0:
result = src
else:
extended_shape = tuple([1] * ndims_missing + list(src.shape))
result = src.reshape(extended_shape)
else:
# Real data : a few more things to do in this case.
# Ensure the array is writeable.
# NB. Returns the *same object* if src is already writeable.
result = np.require(src, requirements="W")
# Ensure the array has enough dimensions.
# NB. Returns the *same object* if result.ndim >= ndmin
func = ma.array if ma.isMaskedArray(result) else np.array
result = func(result, ndmin=ndmin, copy=False)
# We don't need to copy the data, but we do need to have our
# own view so we can control the shape, etc.
result = result.view()
return result
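    # Sketch of the ndmin handling above (assumed shapes): a 0-d input such
    # as np.float64(5.0) comes out with shape (1,), and a lazy array of
    # shape () is reshaped to (1,), so downstream code never sees scalars.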
@property
def _values(self):
"""The _DimensionalMetadata values as a NumPy array."""
return self._values_dm.data.view()
@_values.setter
def _values(self, values):
# Set the values to a new array - as long as it's the same shape.
# Ensure values has an ndmin of 1 and is either a numpy or lazy array.
# This will avoid Scalar _DimensionalMetadata with values of shape ()
# rather than the desired (1,).
values = self._sanitise_array(values, 1)
# Set or update DataManager.
if self._values_dm is None:
self._values_dm = DataManager(values)
else:
self._values_dm.data = values
def _lazy_values(self):
"""
Returns a lazy array representing the dimensional metadata values.
"""
return self._values_dm.lazy_data()
def _core_values(self):
"""
The values array of this dimensional metadata which may be a NumPy
array or a dask array.
"""
result = self._values_dm.core_data()
if not _lazy.is_lazy_data(result):
result = result.view()
return result
def _has_lazy_values(self):
"""
Returns a boolean indicating whether the metadata's values array is a
lazy dask array or not.
"""
return self._values_dm.has_lazy_data()
def _repr_other_metadata(self):
fmt = ""
if self.long_name:
fmt = ", long_name={self.long_name!r}"
if self.var_name:
fmt += ", var_name={self.var_name!r}"
if len(self.attributes) > 0:
fmt += ", attributes={self.attributes}"
result = fmt.format(self=self)
return result
def _str_dates(self, dates_as_numbers):
date_obj_array = self.units.num2date(dates_as_numbers)
kwargs = {"separator": ", ", "prefix": " "}
return np.core.arrayprint.array2string(
date_obj_array, formatter={"all": str}, **kwargs
)
def __str__(self):
# Note: this method includes bounds handling code, but it only runs
# within Coord type instances, as only these allow bounds to be set.
if self.units.is_time_reference():
fmt = (
"{cls}({values}{bounds}"
", standard_name={self.standard_name!r}"
", calendar={self.units.calendar!r}{other_metadata})"
)
if self.units.is_long_time_interval():
# A time unit with a long time interval ("months" or "years")
# cannot be converted to a date using `num2date` so gracefully
# fall back to printing points as numbers, not datetimes.
values = self._values
else:
values = self._str_dates(self._values)
bounds = ""
if self.has_bounds():
if self.units.is_long_time_interval():
bounds_vals = self.bounds
else:
bounds_vals = self._str_dates(self.bounds)
bounds = ", bounds={vals}".format(vals=bounds_vals)
result = fmt.format(
self=self,
cls=type(self).__name__,
values=values,
bounds=bounds,
other_metadata=self._repr_other_metadata(),
)
else:
result = repr(self)
return result
def __repr__(self):
# Note: this method includes bounds handling code, but it only runs
# within Coord type instances, as only these allow bounds to be set.
fmt = (
"{cls}({self._values!r}{bounds}"
", standard_name={self.standard_name!r}, units={self.units!r}"
"{other_metadata})"
)
bounds = ""
# if coordinate, handle the bounds
if self.has_bounds():
bounds = ", bounds=" + repr(self.bounds)
result = fmt.format(
self=self,
cls=type(self).__name__,
bounds=bounds,
other_metadata=self._repr_other_metadata(),
)
return result
def __eq__(self, other):
# Note: this method includes bounds handling code, but it only runs
# within Coord type instances, as only these allow bounds to be set.
eq = NotImplemented
# If the other object has a means of getting its definition, then do
# the comparison, otherwise return a NotImplemented to let Python try
# to resolve the operator elsewhere.
if hasattr(other, "metadata"):
# metadata comparison
eq = self.metadata == other.metadata
# data values comparison
if eq and eq is not NotImplemented:
eq = iris.util.array_equal(
self._core_values(), other._core_values(), withnans=True
)
# Also consider bounds, if we have them.
# (N.B. though only Coords can ever actually *have* bounds).
if eq and eq is not NotImplemented:
if self.has_bounds() and other.has_bounds():
eq = iris.util.array_equal(
self.core_bounds(), other.core_bounds(), withnans=True
)
else:
eq = not self.has_bounds() and not other.has_bounds()
return eq
def __ne__(self, other):
result = self.__eq__(other)
if result is not NotImplemented:
result = not result
return result
# Must supply __hash__ as Python 3 does not enable it if __eq__ is defined.
# NOTE: Violates "objects which compare equal must have the same hash".
# We ought to remove this, as equality of two dimensional metadata can
# *change*, so they really should not be hashable.
# However, current code needs it, e.g. so we can put them in sets.
# Fixing it will require changing those uses. See #962 and #1772.
def __hash__(self):
return hash(id(self))
def __binary_operator__(self, other, mode_constant):
"""
Common code which is called by add, sub, mul and div
Mode constant is one of ADD, SUB, MUL, DIV, RDIV
.. note::
            The unit is *not* changed when doing scalar operations on a
            metadata object. This means that a metadata object which
            represents "10 meters", when multiplied by a scalar (e.g. 1000),
            results in a metadata object of "10000 meters". An alternative
            approach could be to multiply the *unit* by 1000, so that the
            resultant metadata object would represent "10 kilometers".
"""
# Note: this method includes bounds handling code, but it only runs
# within Coord type instances, as only these allow bounds to be set.
if isinstance(other, _DimensionalMetadata):
emsg = (
f"{self.__class__.__name__} "
f"{self._MODE_SYMBOL[mode_constant]} "
f"{other.__class__.__name__}"
)
raise iris.exceptions.NotYetImplementedError(emsg)
if isinstance(other, (int, float, np.number)):
def op(values):
if mode_constant == self._MODE_ADD:
new_values = values + other
elif mode_constant == self._MODE_SUB:
new_values = values - other
elif mode_constant == self._MODE_MUL:
new_values = values * other
elif mode_constant == self._MODE_DIV:
new_values = values / other
elif mode_constant == self._MODE_RDIV:
new_values = other / values
return new_values
new_values = op(self._values_dm.core_data())
result = self.copy(new_values)
if self.has_bounds():
result.bounds = op(self._bounds_dm.core_data())
else:
# must return NotImplemented to ensure invocation of any
# associated reflected operator on the "other" operand
# see https://docs.python.org/3/reference/datamodel.html#emulating-numeric-types
result = NotImplemented
return result
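    # Illustrative sketch of the scalar arithmetic (assumed usage): for a
    # coordinate `c` with points [1., 2.] and units 'm',
    #
    #   d = c + 10   # points [11., 12.], units still 'm'
    #   e = 2 * c    # points [2., 4.], via the reflected `__rmul__` alias below
    #
    # while `c + other_coord` raises NotYetImplementedError, as coded above.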
def __add__(self, other):
return self.__binary_operator__(other, self._MODE_ADD)
def __sub__(self, other):
return self.__binary_operator__(other, self._MODE_SUB)
def __mul__(self, other):
return self.__binary_operator__(other, self._MODE_MUL)
def __div__(self, other):
return self.__binary_operator__(other, self._MODE_DIV)
def __truediv__(self, other):
return self.__binary_operator__(other, self._MODE_DIV)
__radd__ = __add__
def __rsub__(self, other):
return (-self) + other
def __rdiv__(self, other):
return self.__binary_operator__(other, self._MODE_RDIV)
def __rtruediv__(self, other):
return self.__binary_operator__(other, self._MODE_RDIV)
__rmul__ = __mul__
def __neg__(self):
values = -self._core_values()
copy_args = {}
if self.has_bounds():
copy_args["bounds"] = -self.core_bounds()
return self.copy(values, **copy_args)
def convert_units(self, unit):
"""Change the units, converting the values of the metadata."""
# If the coord has units convert the values in points (and bounds if
# present).
# Note: this method includes bounds handling code, but it only runs
# within Coord type instances, as only these allow bounds to be set.
if self.units.is_unknown():
raise iris.exceptions.UnitConversionError(
"Cannot convert from unknown units. "
'The "units" attribute may be set directly.'
)
# Set up a delayed conversion for use if either values or bounds (if
# present) are lazy.
# Make fixed copies of old + new units for a delayed conversion.
old_unit = self.units
new_unit = unit
# Define a delayed conversion operation (i.e. a callback).
def pointwise_convert(values):
return old_unit.convert(values, new_unit)
if self._has_lazy_values():
new_values = _lazy.lazy_elementwise(
self._lazy_values(), pointwise_convert
)
else:
new_values = self.units.convert(self._values, unit)
self._values = new_values
if self.has_bounds():
if self.has_lazy_bounds():
new_bounds = _lazy.lazy_elementwise(
self.lazy_bounds(), pointwise_convert
)
else:
new_bounds = self.units.convert(self.bounds, unit)
self.bounds = new_bounds
self.units = unit
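    # Illustrative sketch (assumed usage, not from this file):
    #
    #   t = iris.coords.AuxCoord([0., 3600.], units='seconds')
    #   t.convert_units('hours')   # points become [0., 1.], units 'hours'
    #
    # Lazy points stay lazy: the conversion is deferred into the dask graph.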
def is_compatible(self, other, ignore=None):
"""
Return whether the current dimensional metadata object is compatible
with another.
"""
compatible = self.name() == other.name() and self.units == other.units
if compatible:
common_keys = set(self.attributes).intersection(other.attributes)
if ignore is not None:
if isinstance(ignore, str):
ignore = (ignore,)
common_keys = common_keys.difference(ignore)
for key in common_keys:
if np.any(self.attributes[key] != other.attributes[key]):
compatible = False
break
return compatible
@property
def dtype(self):
"""
The NumPy dtype of the current dimensional metadata object, as
specified by its values.
"""
return self._values_dm.dtype
@property
def ndim(self):
"""
Return the number of dimensions of the current dimensional metadata
object.
"""
return self._values_dm.ndim
def has_bounds(self):
"""
Return a boolean indicating whether the current dimensional metadata
object has a bounds array.
"""
# Allows for code to handle unbounded dimensional metadata agnostic of
# whether the metadata is a coordinate or not.
return False
@property
def shape(self):
"""The fundamental shape of the metadata, expressed as a tuple."""
return self._values_dm.shape
def xml_element(self, doc):
"""
Create the :class:`xml.dom.minidom.Element` that describes this
:class:`_DimensionalMetadata`.
Args:
* doc:
The parent :class:`xml.dom.minidom.Document`.
Returns:
The :class:`xml.dom.minidom.Element` that will describe this
:class:`_DimensionalMetadata`.
"""
# Create the XML element as the camelCaseEquivalent of the
# class name.
element_name = type(self).__name__
element_name = element_name[0].lower() + element_name[1:]
element = doc.createElement(element_name)
element.setAttribute("id", self._xml_id())
if self.standard_name:
element.setAttribute("standard_name", str(self.standard_name))
if self.long_name:
element.setAttribute("long_name", str(self.long_name))
if self.var_name:
element.setAttribute("var_name", str(self.var_name))
element.setAttribute("units", repr(self.units))
if isinstance(self, Coord):
if self.climatological:
element.setAttribute(
"climatological", str(self.climatological)
)
if self.attributes:
attributes_element = doc.createElement("attributes")
for name in sorted(self.attributes.keys()):
attribute_element = doc.createElement("attribute")
attribute_element.setAttribute("name", name)
attribute_element.setAttribute(
"value", str(self.attributes[name])
)
attributes_element.appendChild(attribute_element)
element.appendChild(attributes_element)
if isinstance(self, Coord):
if self.coord_system:
element.appendChild(self.coord_system.xml_element(doc))
# Add the values
element.setAttribute("value_type", str(self._value_type_name()))
element.setAttribute("shape", str(self.shape))
# The values are referred to "points" of a coordinate and "data"
# otherwise.
if isinstance(self, Coord):
values_term = "points"
# TODO: replace with isinstance(self, Connectivity) once Connectivity
# is re-integrated here (currently in experimental.ugrid).
elif hasattr(self, "indices"):
values_term = "indices"
else:
values_term = "data"
element.setAttribute(values_term, self._xml_array_repr(self._values))
return element
def _xml_id_extra(self, unique_value):
return unique_value
def _xml_id(self):
# Returns a consistent, unique string identifier for this coordinate.
unique_value = b""
if self.standard_name:
unique_value += self.standard_name.encode("utf-8")
unique_value += b"\0"
if self.long_name:
unique_value += self.long_name.encode("utf-8")
unique_value += b"\0"
unique_value += str(self.units).encode("utf-8") + b"\0"
for k, v in sorted(self.attributes.items()):
unique_value += (str(k) + ":" + str(v)).encode("utf-8") + b"\0"
# Extra modifications to unique_value that are specialised in child
# classes
unique_value = self._xml_id_extra(unique_value)
# Mask to ensure consistency across Python versions & platforms.
crc = zlib.crc32(unique_value) & 0xFFFFFFFF
return "%08x" % (crc,)
@staticmethod
def _xml_array_repr(data):
if hasattr(data, "to_xml_attr"):
result = data._values.to_xml_attr()
else:
result = iris.util.format_array(data)
return result
def _value_type_name(self):
"""
A simple, readable name for the data type of the dimensional metadata
values.
"""
dtype = self._core_values().dtype
kind = dtype.kind
if kind in "SU":
# Establish the basic type name for 'string' type data.
if kind == "S":
value_type_name = "bytes"
else:
value_type_name = "string"
else:
value_type_name = dtype.name
return value_type_name
class AncillaryVariable(_DimensionalMetadata):
def __init__(
self,
data,
standard_name=None,
long_name=None,
var_name=None,
units=None,
attributes=None,
):
"""
Constructs a single ancillary variable.
Args:
* data:
The values of the ancillary variable.
Kwargs:
* standard_name:
CF standard name of the ancillary variable.
* long_name:
Descriptive name of the ancillary variable.
* var_name:
The netCDF variable name for the ancillary variable.
* units
The :class:`~cf_units.Unit` of the ancillary variable's values.
Can be a string, which will be converted to a Unit object.
* attributes
A dictionary containing other cf and user-defined attributes.
"""
# Configure the metadata manager.
if not hasattr(self, "_metadata_manager"):
self._metadata_manager = metadata_manager_factory(
AncillaryVariableMetadata
)
super().__init__(
values=data,
standard_name=standard_name,
long_name=long_name,
var_name=var_name,
units=units,
attributes=attributes,
)
@property
def data(self):
return self._values
@data.setter
def data(self, data):
self._values = data
def lazy_data(self):
"""
Return a lazy array representing the ancillary variable's data.
Accessing this method will never cause the data values to be loaded.
Similarly, calling methods on, or indexing, the returned Array
will not cause the ancillary variable to have loaded data.
If the data have already been loaded for the ancillary variable, the
returned Array will be a new lazy array wrapper.
Returns:
A lazy array, representing the ancillary variable data array.
"""
return super()._lazy_values()
def core_data(self):
"""
The data array at the core of this ancillary variable, which may be a
NumPy array or a dask array.
"""
return super()._core_values()
def has_lazy_data(self):
"""
Return a boolean indicating whether the ancillary variable's data array
is a lazy dask array or not.
"""
return super()._has_lazy_values()
def cube_dims(self, cube):
"""
Return the cube dimensions of this AncillaryVariable.
Equivalent to "cube.ancillary_variable_dims(self)".
"""
return cube.ancillary_variable_dims(self)
class CellMeasure(AncillaryVariable):
"""
A CF Cell Measure, providing area or volume properties of a cell
where these cannot be inferred from the Coordinates and
Coordinate Reference System.
"""
def __init__(
self,
data,
standard_name=None,
long_name=None,
var_name=None,
units=None,
attributes=None,
measure=None,
):
"""
Constructs a single cell measure.
Args:
* data:
The values of the measure for each cell.
Either a 'real' array (:class:`numpy.ndarray`) or a 'lazy' array
(:class:`dask.array.Array`).
Kwargs:
* standard_name:
CF standard name of the coordinate.
* long_name:
Descriptive name of the coordinate.
* var_name:
The netCDF variable name for the coordinate.
* units
The :class:`~cf_units.Unit` of the coordinate's values.
Can be a string, which will be converted to a Unit object.
* attributes
A dictionary containing other CF and user-defined attributes.
* measure
A string describing the type of measure. Supported values are
'area' and 'volume'. The default is 'area'.
"""
# Configure the metadata manager.
self._metadata_manager = metadata_manager_factory(CellMeasureMetadata)
super().__init__(
data=data,
standard_name=standard_name,
long_name=long_name,
var_name=var_name,
units=units,
attributes=attributes,
)
if measure is None:
measure = "area"
#: String naming the measure type.
self.measure = measure
@property
def measure(self):
return self._metadata_manager.measure
@measure.setter
def measure(self, measure):
if measure not in ["area", "volume"]:
emsg = f"measure must be 'area' or 'volume', got {measure!r}"
raise ValueError(emsg)
self._metadata_manager.measure = measure
def __str__(self):
result = repr(self)
return result
def __repr__(self):
fmt = (
"{cls}({self.data!r}, "
"measure={self.measure!r}, standard_name={self.standard_name!r}, "
"units={self.units!r}{other_metadata})"
)
result = fmt.format(
self=self,
cls=type(self).__name__,
other_metadata=self._repr_other_metadata(),
)
return result
def cube_dims(self, cube):
"""
Return the cube dimensions of this CellMeasure.
Equivalent to "cube.cell_measure_dims(self)".
"""
return cube.cell_measure_dims(self)
def xml_element(self, doc):
"""
Create the :class:`xml.dom.minidom.Element` that describes this
:class:`CellMeasure`.
Args:
* doc:
The parent :class:`xml.dom.minidom.Document`.
Returns:
The :class:`xml.dom.minidom.Element` that describes this
:class:`CellMeasure`.
"""
# Create the XML element as the camelCaseEquivalent of the
# class name
element = super().xml_element(doc=doc)
# Add the 'measure' property
element.setAttribute("measure", self.measure)
return element
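# A minimal usage sketch for CellMeasure (hypothetical values). The 'measure'
# property is validated on assignment, so anything other than 'area' or
# 'volume' raises a ValueError:
#
#     >>> import numpy as np
#     >>> cm = CellMeasure(np.ones((3, 4)), standard_name="cell_area",
#     ...                  units="m2", measure="area")
#     >>> cm.measure
#     'area'
#     >>> cm.measure = "length"
#     Traceback (most recent call last):
#     ...
#     ValueError: measure must be 'area' or 'volume', got 'length'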
class CoordExtent(
namedtuple(
"_CoordExtent",
[
"name_or_coord",
"minimum",
"maximum",
"min_inclusive",
"max_inclusive",
],
)
):
"""Defines a range of values for a coordinate."""
def __new__(
cls,
name_or_coord,
minimum,
maximum,
min_inclusive=True,
max_inclusive=True,
):
"""
Create a CoordExtent for the specified coordinate and range of
values.
Args:
* name_or_coord
Either a coordinate name or a coordinate, as defined in
:meth:`iris.cube.Cube.coords()`.
* minimum
The minimum value of the range to select.
* maximum
The maximum value of the range to select.
Kwargs:
* min_inclusive
If True, coordinate values equal to `minimum` will be included
in the selection. Default is True.
* max_inclusive
If True, coordinate values equal to `maximum` will be included
in the selection. Default is True.
"""
return super().__new__(
cls, name_or_coord, minimum, maximum, min_inclusive, max_inclusive
)
__slots__ = ()
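# A minimal usage sketch for CoordExtent (hypothetical values): select the
# half-open longitude range [0, 90) by excluding the maximum:
#
#     >>> extent = CoordExtent("longitude", 0, 90, max_inclusive=False)
#     >>> extent.minimum, extent.maximum, extent.max_inclusive
#     (0, 90, False)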
# Coordinate cell styles. Used in plot and cartography.
POINT_MODE = 0
BOUND_MODE = 1
BOUND_POSITION_START = 0
BOUND_POSITION_MIDDLE = 0.5
BOUND_POSITION_END = 1
# Private named tuple class for coordinate groups.
_GroupbyItem = namedtuple("GroupbyItem", "groupby_point, groupby_slice")
def _get_2d_coord_bound_grid(bounds):
"""
Creates a grid using the bounds of a 2D coordinate with 4 sided cells.
Assumes that the four vertices of the cells are in an anti-clockwise order
(bottom-left, bottom-right, top-right, top-left).
Selects the zeroth vertex of each cell. A final column is added, which
contains the first vertex of the cells in the final column. A final row
is added, which contains the third vertex of all the cells in the final
row, except for in the final column where it uses the second vertex.
e.g.
# 0-0-0-0-1
# 0-0-0-0-1
# 3-3-3-3-2
Args:
* bounds: (array)
Coordinate bounds array of shape (Y, X, 4)
Returns:
* grid: (array)
Grid of shape (Y+1, X+1)
"""
# Check bds has the shape (ny, nx, 4)
if not (bounds.ndim == 3 and bounds.shape[-1] == 4):
raise ValueError(
"Bounds for 2D coordinates must be 3-dimensional and "
"have 4 bounds per point."
)
bounds_shape = bounds.shape
result = np.zeros((bounds_shape[0] + 1, bounds_shape[1] + 1))
result[:-1, :-1] = bounds[:, :, 0]
result[:-1, -1] = bounds[:, -1, 1]
result[-1, :-1] = bounds[-1, :, 3]
result[-1, -1] = bounds[-1, -1, 2]
return result
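# For instance, two rows of two cells whose x-bounds run 0-1 and 1-2 (corners
# ordered anti-clockwise: bottom-left, bottom-right, top-right, top-left)
# collapse to a 3x3 grid of cell corners:
#
#     >>> import numpy as np
#     >>> x_bounds = np.array([[[0, 1, 1, 0], [1, 2, 2, 1]],
#     ...                      [[0, 1, 1, 0], [1, 2, 2, 1]]])
#     >>> _get_2d_coord_bound_grid(x_bounds)
#     array([[0., 1., 2.],
#            [0., 1., 2.],
#            [0., 1., 2.]])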
class Cell(namedtuple("Cell", ["point", "bound"])):
"""
An immutable representation of a single cell of a coordinate, including the
sample point and/or boundary position.
Notes on cell comparison:
Cells are compared in two ways, depending on whether they are
compared to another Cell, or to a number/string.
Cell-Cell comparison is defined to produce a strict ordering. If
two cells are not exactly equal (i.e. including whether they both
define bounds or not) then they will have a consistent relative
order.
Cell-number and Cell-string comparison is defined to support
Constraint matching. The number/string will equal the Cell if, and
only if, it is within the Cell (including on the boundary). The
relative comparisons (lt, le, ..) are defined to be consistent with
this interpretation. So for a given value `n` and Cell `cell`, only
one of the following can be true:
| n < cell
| n == cell
| n > cell
Similarly, `n <= cell` implies either `n < cell` or `n == cell`.
And `n >= cell` implies either `n > cell` or `n == cell`.
"""
# This subclass adds no attributes.
__slots__ = ()
# Make this class's comparison operators override those of numpy
__array_priority__ = 100
def __new__(cls, point=None, bound=None):
"""
Construct a Cell from point or point-and-bound information.
"""
if point is None:
raise ValueError("Point must be defined.")
if bound is not None:
bound = tuple(bound)
if isinstance(point, np.ndarray):
point = tuple(point.flatten())
if isinstance(point, (tuple, list)):
if len(point) != 1:
raise ValueError(
"Point may only be a list or tuple if it has " "length 1."
)
point = point[0]
return super().__new__(cls, point, bound)
def __mod__(self, mod):
point = self.point
bound = self.bound
if point is not None:
point = point % mod
if bound is not None:
bound = tuple([val % mod for val in bound])
return Cell(point, bound)
def __add__(self, mod):
point = self.point
bound = self.bound
if point is not None:
point = point + mod
if bound is not None:
bound = tuple([val + mod for val in bound])
return Cell(point, bound)
def __hash__(self):
return super().__hash__()
def __eq__(self, other):
"""
Compares Cell equality depending on the type of the object to be
compared.
"""
if isinstance(other, (int, float, np.number)) or hasattr(
other, "timetuple"
):
if self.bound is not None:
return self.contains_point(other)
else:
return self.point == other
elif isinstance(other, Cell):
return (self.point == other.point) and (self.bound == other.bound)
elif (
isinstance(other, str)
and self.bound is None
and isinstance(self.point, str)
):
return self.point == other
else:
return NotImplemented
# Must supply __ne__, Python does not defer to __eq__ for negative equality
def __ne__(self, other):
result = self.__eq__(other)
if result is not NotImplemented:
result = not result
return result
def __common_cmp__(self, other, operator_method):
"""
Common method called by the rich comparison operators. The method of
checking equality depends on the type of the object to be compared.
Cell vs Cell comparison is used to define a strict order.
Non-Cell vs Cell comparison is used to define Constraint matching.
"""
if (isinstance(other, list) and len(other) == 1) or (
isinstance(other, np.ndarray) and other.shape == (1,)
):
other = other[0]
if isinstance(other, np.ndarray) and other.shape == ():
other = float(other)
if not (
isinstance(other, (int, float, np.number, Cell))
or hasattr(other, "timetuple")
):
raise TypeError(
"Unexpected type of other " "{}.".format(type(other))
)
if operator_method not in (
operator.gt,
operator.lt,
operator.ge,
operator.le,
):
raise ValueError("Unexpected operator_method")
# Prevent silent errors resulting from missing cftime
# behaviour.
if isinstance(other, cftime.datetime) or (
isinstance(self.point, cftime.datetime)
and not isinstance(other, iris.time.PartialDateTime)
):
raise TypeError(
"Cannot determine the order of " "cftime.datetime objects"
)
if isinstance(other, Cell):
# Cell vs Cell comparison for providing a strict sort order
if self.bound is None:
if other.bound is None:
# Point vs point
# - Simple ordering
result = operator_method(self.point, other.point)
else:
# Point vs point-and-bound
# - Simple ordering of point values, but if the two
# points are equal, we make the arbitrary choice
# that the point-only Cell is defined as less than
# the point-and-bound Cell.
if self.point == other.point:
result = operator_method in (operator.lt, operator.le)
else:
result = operator_method(self.point, other.point)
else:
if other.bound is None:
# Point-and-bound vs point
# - Simple ordering of point values, but if the two
# points are equal, we make the arbitrary choice
# that the point-only Cell is defined as less than
# the point-and-bound Cell.
if self.point == other.point:
result = operator_method in (operator.gt, operator.ge)
else:
result = operator_method(self.point, other.point)
else:
# Point-and-bound vs point-and-bound
# - Primarily ordered on minimum-bound. If the
# minimum-bounds are equal, then ordered on
# maximum-bound. If the maximum-bounds are also
# equal, then ordered on point values.
if self.bound[0] == other.bound[0]:
if self.bound[1] == other.bound[1]:
result = operator_method(self.point, other.point)
else:
result = operator_method(
self.bound[1], other.bound[1]
)
else:
result = operator_method(self.bound[0], other.bound[0])
else:
# Cell vs number (or string, or datetime-like) for providing
# Constraint behaviour.
if self.bound is None:
# Point vs number
# - Simple matching
me = self.point
else:
if hasattr(other, "timetuple"):
raise TypeError(
"Cannot determine whether a point lies "
"within a bounded region for "
"datetime-like objects."
)
# Point-and-bound vs number
# - Match if "within" the Cell
if operator_method in [operator.gt, operator.le]:
me = min(self.bound)
else:
me = max(self.bound)
# Work around to handle cftime.datetime comparison, which
# doesn't return NotImplemented on failure in some versions of the
# library
try:
result = operator_method(me, other)
except TypeError:
rop = {
operator.lt: operator.gt,
operator.gt: operator.lt,
operator.le: operator.ge,
operator.ge: operator.le,
}[operator_method]
result = rop(other, me)
return result
def __ge__(self, other):
return self.__common_cmp__(other, operator.ge)
def __le__(self, other):
return self.__common_cmp__(other, operator.le)
def __gt__(self, other):
return self.__common_cmp__(other, operator.gt)
def __lt__(self, other):
return self.__common_cmp__(other, operator.lt)
def __str__(self):
if self.bound is not None:
return repr(self)
else:
return str(self.point)
def contains_point(self, point):
"""
For a bounded cell, returns whether the given point lies within the
bounds.
.. note:: The test carried out is equivalent to min(bound)
<= point <= max(bound).
"""
if self.bound is None:
raise ValueError("Point cannot exist inside an unbounded cell.")
if hasattr(point, "timetuple") or np.any(
[hasattr(val, "timetuple") for val in self.bound]
):
raise TypeError(
"Cannot determine whether a point lies within "
"a bounded region for datetime-like objects."
)
return np.min(self.bound) <= point <= np.max(self.bound)
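# The comparison semantics described in the Cell docstring, in brief
# (hypothetical values):
#
#     >>> cell = Cell(point=15.0, bound=(10.0, 20.0))
#     >>> 12.0 == cell      # within the bounds, so "equal"
#     True
#     >>> 20.0 == cell      # boundary values are included
#     True
#     >>> 25.0 > cell       # relative ordering is consistent with the above
#     True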
class Coord(_DimensionalMetadata):
"""
Abstract base class for coordinates.
"""
@abstractmethod
def __init__(
self,
points,
standard_name=None,
long_name=None,
var_name=None,
units=None,
bounds=None,
attributes=None,
coord_system=None,
climatological=False,
):
"""
Coordinate abstract base class. As of ``v3.0.0`` you **cannot** create an instance of :class:`Coord`.
Args:
* points:
The values (or value in the case of a scalar coordinate) for each
cell of the coordinate.
Kwargs:
* standard_name:
CF standard name of the coordinate.
* long_name:
Descriptive name of the coordinate.
* var_name:
The netCDF variable name for the coordinate.
* units
The :class:`~cf_units.Unit` of the coordinate's values.
Can be a string, which will be converted to a Unit object.
* bounds
An array of values describing the bounds of each cell. Given n
bounds for each cell, the shape of the bounds array should be
points.shape + (n,). For example, a 1D coordinate with 100 points
and two bounds per cell would have a bounds array of shape
(100, 2)
Note if the data is a climatology, `climatological`
should be set.
* attributes
A dictionary containing other CF and user-defined attributes.
* coord_system
A :class:`~iris.coord_systems.CoordSystem` representing the
coordinate system of the coordinate,
e.g., a :class:`~iris.coord_systems.GeogCS` for a longitude coordinate.
* climatological (bool):
When True: the coordinate is a NetCDF climatological time axis.
When True: saving in NetCDF will give the coordinate variable a
'climatology' attribute and will create a boundary variable called
'<coordinate-name>_climatology' in place of a standard bounds
attribute and bounds variable.
Will set to True when a climatological time axis is loaded
from NetCDF.
Always False if no bounds exist.
"""
# Configure the metadata manager.
if not hasattr(self, "_metadata_manager"):
self._metadata_manager = metadata_manager_factory(CoordMetadata)
super().__init__(
values=points,
standard_name=standard_name,
long_name=long_name,
var_name=var_name,
units=units,
attributes=attributes,
)
#: Relevant coordinate system (if any).
self.coord_system = coord_system
# Set up bounds DataManager attributes and the bounds values.
self._bounds_dm = None
self.bounds = bounds
self.climatological = climatological
def copy(self, points=None, bounds=None):
"""
Returns a copy of this coordinate.
Kwargs:
* points: A points array for the new coordinate.
This may be a different shape to the points of the coordinate
being copied.
* bounds: A bounds array for the new coordinate.
Given n bounds for each cell, the shape of the bounds array
should be points.shape + (n,). For example, a 1d coordinate
with 100 points and two bounds per cell would have a bounds
array of shape (100, 2).
.. note:: If the points argument is specified and bounds are not, the
resulting coordinate will have no bounds.
"""
if points is None and bounds is not None:
raise ValueError(
"If bounds are specified, points must also be " "specified"
)
new_coord = super().copy(values=points)
if points is not None:
# Regardless of whether bounds are provided as an argument, new
# points will result in new bounds, discarding those copied from
# self.
new_coord.bounds = bounds
return new_coord
@classmethod
def from_coord(cls, coord):
"""Create a new Coord of this type, from the given coordinate."""
kwargs = {
"points": coord.core_points(),
"bounds": coord.core_bounds(),
"standard_name": coord.standard_name,
"long_name": coord.long_name,
"var_name": coord.var_name,
"units": coord.units,
"attributes": coord.attributes,
"coord_system": copy.deepcopy(coord.coord_system),
"climatological": coord.climatological,
}
if issubclass(cls, DimCoord):
# DimCoord introduces an extra constructor keyword.
kwargs["circular"] = getattr(coord, "circular", False)
return cls(**kwargs)
@property
def points(self):
"""The coordinate points values as a NumPy array."""
return self._values
@points.setter
def points(self, points):
self._values = points
@property
def bounds(self):
"""
The coordinate bounds values, as a NumPy array,
or None if no bound values are defined.
.. note:: The shape of the bound array should be: ``points.shape +
(n_bounds, )``.
"""
bounds = None
if self.has_bounds():
bounds = self._bounds_dm.data.view()
return bounds
@bounds.setter
def bounds(self, bounds):
# Ensure the bounds are a compatible shape.
if bounds is None:
self._bounds_dm = None
self.climatological = False
else:
bounds = self._sanitise_array(bounds, 2)
if self.shape != bounds.shape[:-1]:
raise ValueError(
"Bounds shape must be compatible with points " "shape."
)
if (
not self.has_bounds()
or self.core_bounds().shape != bounds.shape
):
# Construct a new bounds DataManager.
self._bounds_dm = DataManager(bounds)
else:
self._bounds_dm.data = bounds
@property
def coord_system(self):
"""The coordinate-system of the coordinate."""
return self._metadata_manager.coord_system
@coord_system.setter
def coord_system(self, value):
self._metadata_manager.coord_system = value
@property
def climatological(self):
"""
A boolean that controls whether the coordinate is a climatological
time axis, in which case the bounds represent a climatological period
rather than a normal period.
Always reads as False if there are no bounds.
On set, the input value is cast to a boolean, exceptions raised
if units are not time units or if there are no bounds.
"""
if not self.has_bounds():
self._metadata_manager.climatological = False
if not self.units.is_time_reference():
self._metadata_manager.climatological = False
return self._metadata_manager.climatological
@climatological.setter
def climatological(self, value):
# Ensure the bounds are a compatible shape.
value = bool(value)
if value:
if not self.units.is_time_reference():
emsg = (
"Cannot set climatological coordinate, does not have"
" valid time reference units, got {!r}."
)
raise TypeError(emsg.format(self.units))
if not self.has_bounds():
emsg = "Cannot set climatological coordinate, no bounds exist."
raise ValueError(emsg)
self._metadata_manager.climatological = value
def lazy_points(self):
"""
Return a lazy array representing the coord points.
Accessing this method will never cause the points values to be loaded.
Similarly, calling methods on, or indexing, the returned Array
will not cause the coord to have loaded points.
If the data have already been loaded for the coord, the returned
Array will be a new lazy array wrapper.
Returns:
A lazy array, representing the coord points array.
"""
return super()._lazy_values()
def lazy_bounds(self):
"""
Return a lazy array representing the coord bounds.
Accessing this method will never cause the bounds values to be loaded.
Similarly, calling methods on, or indexing, the returned Array
will not cause the coord to have loaded bounds.
If the data have already been loaded for the coord, the returned
Array will be a new lazy array wrapper.
Returns:
A lazy array representing the coord bounds array or `None` if the
coord does not have bounds.
"""
lazy_bounds = None
if self.has_bounds():
lazy_bounds = self._bounds_dm.lazy_data()
return lazy_bounds
def core_points(self):
"""
The points array at the core of this coord, which may be a NumPy array
or a dask array.
"""
return super()._core_values()
def core_bounds(self):
"""
        The bounds array at the core of this coord, which may be a NumPy array
or a dask array.
"""
result = None
if self.has_bounds():
result = self._bounds_dm.core_data()
if not _lazy.is_lazy_data(result):
result = result.view()
return result
def has_lazy_points(self):
"""
Return a boolean indicating whether the coord's points array is a
lazy dask array or not.
"""
return super()._has_lazy_values()
def has_lazy_bounds(self):
"""
Return a boolean indicating whether the coord's bounds array is a
lazy dask array or not.
"""
result = False
if self.has_bounds():
result = self._bounds_dm.has_lazy_data()
return result
def _repr_other_metadata(self):
result = super()._repr_other_metadata()
if self.coord_system:
result += ", coord_system={}".format(self.coord_system)
if self.climatological:
result += ", climatological={}".format(self.climatological)
return result
# Must supply __hash__ as Python 3 does not enable it if __eq__ is defined.
# NOTE: Violates "objects which compare equal must have the same hash".
# We ought to remove this, as equality of two coords can *change*, so they
# really should not be hashable.
# However, current code needs it, e.g. so we can put them in sets.
# Fixing it will require changing those uses. See #962 and #1772.
def __hash__(self):
return hash(id(self))
def cube_dims(self, cube):
"""
Return the cube dimensions of this Coord.
Equivalent to "cube.coord_dims(self)".
"""
return cube.coord_dims(self)
def convert_units(self, unit):
r"""
Change the coordinate's units, converting the values in its points
and bounds arrays.
For example, if a coordinate's :attr:`~iris.coords.Coord.units`
attribute is set to radians then::
coord.convert_units('degrees')
will change the coordinate's
:attr:`~iris.coords.Coord.units` attribute to degrees and
multiply each value in :attr:`~iris.coords.Coord.points` and
:attr:`~iris.coords.Coord.bounds` by 180.0/:math:`\pi`.
"""
super().convert_units(unit=unit)
def cells(self):
"""
Returns an iterable of Cell instances for this Coord.
For example::
for cell in self.cells():
...
"""
return _CellIterator(self)
def _sanity_check_bounds(self):
if self.ndim == 1:
if self.nbounds != 2:
raise ValueError(
"Invalid operation for {!r}, with {} "
"bound(s). Contiguous bounds are only "
"defined for 1D coordinates with 2 "
"bounds.".format(self.name(), self.nbounds)
)
elif self.ndim == 2:
if self.nbounds != 4:
raise ValueError(
"Invalid operation for {!r}, with {} "
"bound(s). Contiguous bounds are only "
"defined for 2D coordinates with 4 "
"bounds.".format(self.name(), self.nbounds)
)
else:
raise ValueError(
"Invalid operation for {!r}. Contiguous bounds "
"are not defined for coordinates with more than "
"2 dimensions.".format(self.name())
)
def _discontiguity_in_bounds(self, rtol=1e-5, atol=1e-8):
"""
Checks that the bounds of the coordinate are contiguous.
Kwargs:
* rtol: (float)
Relative tolerance that is used when checking contiguity. Defaults
to 1e-5.
* atol: (float)
Absolute tolerance that is used when checking contiguity. Defaults
to 1e-8.
Returns:
* contiguous: (boolean)
True if there are no discontiguities.
* diffs: (array or tuple of arrays)
The diffs along the bounds of the coordinate. If self is a 2D
coord of shape (Y, X), a tuple of arrays is returned, where the
first is an array of differences along the x-axis, of the shape
(Y, X-1) and the second is an array of differences along the
y-axis, of the shape (Y-1, X).
"""
self._sanity_check_bounds()
if self.ndim == 1:
contiguous = np.allclose(
self.bounds[1:, 0], self.bounds[:-1, 1], rtol=rtol, atol=atol
)
diffs = np.abs(self.bounds[:-1, 1] - self.bounds[1:, 0])
elif self.ndim == 2:
def mod360_adjust(compare_axis):
bounds = self.bounds.copy()
if compare_axis == "x":
upper_bounds = bounds[:, :-1, 1]
lower_bounds = bounds[:, 1:, 0]
elif compare_axis == "y":
upper_bounds = bounds[:-1, :, 3]
lower_bounds = bounds[1:, :, 0]
if self.name() in ["longitude", "grid_longitude"]:
# If longitude, adjust for longitude wrapping
diffs = upper_bounds - lower_bounds
index = diffs > 180
if index.any():
sign = np.sign(diffs)
modification = (index.astype(int) * 360) * sign
upper_bounds -= modification
                # Mirror the np.allclose tolerance used for the 1D case:
                # cells are contiguous where |upper - lower| is within
                # atol + rtol * |upper|.
                diffs_along_axis = np.abs(upper_bounds - lower_bounds)
                points_close_enough = diffs_along_axis <= (
                    atol + rtol * np.abs(upper_bounds)
                )
                contiguous_along_axis = np.all(points_close_enough)
                return diffs_along_axis, contiguous_along_axis
diffs_along_x, match_cell_x1 = mod360_adjust(compare_axis="x")
diffs_along_y, match_cell_y1 = mod360_adjust(compare_axis="y")
contiguous = match_cell_x1 and match_cell_y1
diffs = (diffs_along_x, diffs_along_y)
return contiguous, diffs
def is_contiguous(self, rtol=1e-05, atol=1e-08):
"""
Return True if, and only if, this Coord is bounded with contiguous
bounds to within the specified relative and absolute tolerances.
1D coords are contiguous if the upper bound of a cell aligns,
within a tolerance, to the lower bound of the next cell along.
2D coords, with 4 bounds, are contiguous if the lower right corner of
each cell aligns with the lower left corner of the cell to the right of
it, and the upper left corner of each cell aligns with the lower left
corner of the cell above it.
Args:
* rtol:
The relative tolerance parameter (default is 1e-05).
* atol:
The absolute tolerance parameter (default is 1e-08).
Returns:
Boolean.
"""
if self.has_bounds():
contiguous, _ = self._discontiguity_in_bounds(rtol=rtol, atol=atol)
else:
contiguous = False
return contiguous
def contiguous_bounds(self):
"""
Returns the N+1 bound values for a contiguous bounded 1D coordinate
of length N, or the (N+1, M+1) bound values for a contiguous bounded 2D
coordinate of shape (N, M).
Only 1D or 2D coordinates are supported.
.. note::
If the coordinate has bounds, this method assumes they are
contiguous.
If the coordinate is 1D and does not have bounds, this method will
return bounds positioned halfway between the coordinate's points.
If the coordinate is 2D and does not have bounds, an error will be
raised.
"""
if not self.has_bounds():
if self.ndim == 1:
warnings.warn(
"Coordinate {!r} is not bounded, guessing "
"contiguous bounds.".format(self.name())
)
bounds = self._guess_bounds()
elif self.ndim == 2:
raise ValueError(
"2D coordinate {!r} is not bounded. Guessing "
"bounds of 2D coords is not currently "
"supported.".format(self.name())
)
else:
self._sanity_check_bounds()
bounds = self.bounds
if self.ndim == 1:
c_bounds = np.resize(bounds[:, 0], bounds.shape[0] + 1)
c_bounds[-1] = bounds[-1, 1]
elif self.ndim == 2:
c_bounds = _get_2d_coord_bound_grid(bounds)
return c_bounds
def is_monotonic(self):
"""Return True if, and only if, this Coord is monotonic."""
if self.ndim != 1:
raise iris.exceptions.CoordinateMultiDimError(self)
if self.shape == (1,):
return True
if self.points is not None:
if not iris.util.monotonic(self.points, strict=True):
return False
if self.has_bounds():
for b_index in range(self.nbounds):
if not iris.util.monotonic(
self.bounds[..., b_index], strict=True
):
return False
return True
def is_compatible(self, other, ignore=None):
"""
Return whether the coordinate is compatible with another.
Compatibility is determined by comparing
:meth:`iris.coords.Coord.name()`, :attr:`iris.coords.Coord.units`,
:attr:`iris.coords.Coord.coord_system` and
:attr:`iris.coords.Coord.attributes` that are present in both objects.
Args:
* other:
An instance of :class:`iris.coords.Coord`,
:class:`iris.common.CoordMetadata` or
:class:`iris.common.DimCoordMetadata`.
* ignore:
A single attribute key or iterable of attribute keys to ignore when
comparing the coordinates. Default is None. To ignore all
attributes, set this to other.attributes.
Returns:
Boolean.
"""
compatible = False
if self.coord_system == other.coord_system:
compatible = super().is_compatible(other=other, ignore=ignore)
return compatible
@property
def bounds_dtype(self):
"""
The NumPy dtype of the coord's bounds. Will be `None` if the coord
does not have bounds.
"""
result = None
if self.has_bounds():
result = self._bounds_dm.dtype
return result
@property
def nbounds(self):
"""
Return the number of bounds that this coordinate has (0 for no bounds).
"""
nbounds = 0
if self.has_bounds():
nbounds = self._bounds_dm.shape[-1]
return nbounds
def has_bounds(self):
"""Return a boolean indicating whether the coord has a bounds array."""
return self._bounds_dm is not None
def cell(self, index):
"""
Return the single :class:`Cell` instance which results from slicing the
points/bounds with the given index.
"""
index = iris.util._build_full_slice_given_keys(index, self.ndim)
point = tuple(np.array(self.points[index], ndmin=1).flatten())
if len(point) != 1:
raise IndexError(
"The index %s did not uniquely identify a single "
"point to create a cell with." % (index,)
)
bound = None
if self.has_bounds():
bound = tuple(np.array(self.bounds[index], ndmin=1).flatten())
if self.units.is_time_reference():
point = self.units.num2date(point)
if bound is not None:
bound = self.units.num2date(bound)
return Cell(point, bound)
def collapsed(self, dims_to_collapse=None):
"""
Returns a copy of this coordinate, which has been collapsed along
the specified dimensions.
Replaces the points & bounds with a simple bounded region.
"""
# Ensure dims_to_collapse is a tuple to be able to pass
# through to numpy
if isinstance(dims_to_collapse, (int, np.integer)):
dims_to_collapse = (dims_to_collapse,)
if isinstance(dims_to_collapse, list):
dims_to_collapse = tuple(dims_to_collapse)
if np.issubdtype(self.dtype, np.str_):
# Collapse the coordinate by serializing the points and
# bounds as strings.
def serialize(x):
return "|".join([str(i) for i in x.flatten()])
bounds = None
if self.has_bounds():
shape = self._bounds_dm.shape[1:]
bounds = []
for index in np.ndindex(shape):
index_slice = (slice(None),) + tuple(index)
bounds.append(serialize(self.bounds[index_slice]))
dtype = np.dtype("U{}".format(max(map(len, bounds))))
bounds = np.array(bounds, dtype=dtype).reshape((1,) + shape)
points = serialize(self.points)
dtype = np.dtype("U{}".format(len(points)))
# Create the new collapsed coordinate.
coord = self.copy(
points=np.array(points, dtype=dtype), bounds=bounds
)
else:
# Collapse the coordinate by calculating the bounded extremes.
if self.ndim > 1:
msg = (
"Collapsing a multi-dimensional coordinate. "
"Metadata may not be fully descriptive for {!r}."
)
warnings.warn(msg.format(self.name()))
elif not self.is_contiguous():
msg = (
"Collapsing a non-contiguous coordinate. "
"Metadata may not be fully descriptive for {!r}."
)
warnings.warn(msg.format(self.name()))
if self.has_bounds():
item = self.core_bounds()
if dims_to_collapse is not None:
# Express main dims_to_collapse as non-negative integers
# and add the last (bounds specific) dimension.
dims_to_collapse = tuple(
dim % self.ndim for dim in dims_to_collapse
) + (-1,)
else:
item = self.core_points()
# Determine the array library for stacking
al = da if _lazy.is_lazy_data(item) else np
# Calculate the bounds and points along the right dims
bounds = al.stack(
[
item.min(axis=dims_to_collapse),
item.max(axis=dims_to_collapse),
],
axis=-1,
)
points = al.array(bounds.sum(axis=-1) * 0.5, dtype=self.dtype)
# Create the new collapsed coordinate.
coord = self.copy(points=points, bounds=bounds)
return coord
def _guess_bounds(self, bound_position=0.5):
"""
Return bounds for this coordinate based on its points.
Kwargs:
* bound_position:
The desired position of the bounds relative to the position
of the points.
Returns:
A numpy array of shape (len(self.points), 2).
.. note::
This method only works for coordinates with ``coord.ndim == 1``.
"""
# XXX Consider moving into DimCoord
# ensure we have monotonic points
if not self.is_monotonic():
raise ValueError(
"Need monotonic points to generate bounds for %s" % self.name()
)
if self.ndim != 1:
raise iris.exceptions.CoordinateMultiDimError(self)
if self.shape[0] < 2:
raise ValueError(
"Cannot guess bounds for a coordinate of length " "1."
)
if self.has_bounds():
raise ValueError(
"Coord already has bounds. Remove the bounds "
"before guessing new ones."
)
if getattr(self, "circular", False):
points = np.empty(self.shape[0] + 2)
points[1:-1] = self.points
direction = 1 if self.points[-1] > self.points[0] else -1
points[0] = self.points[-1] - (self.units.modulus * direction)
points[-1] = self.points[0] + (self.units.modulus * direction)
diffs = np.diff(points)
else:
diffs = np.diff(self.points)
diffs = np.insert(diffs, 0, diffs[0])
diffs = np.append(diffs, diffs[-1])
min_bounds = self.points - diffs[:-1] * bound_position
max_bounds = self.points + diffs[1:] * (1 - bound_position)
bounds = np.array([min_bounds, max_bounds]).transpose()
if (
self.name() in ("latitude", "grid_latitude")
and self.units == "degree"
):
points = self.points
if (points >= -90).all() and (points <= 90).all():
np.clip(bounds, -90, 90, out=bounds)
return bounds
def guess_bounds(self, bound_position=0.5):
"""
Add contiguous bounds to a coordinate, calculated from its points.
Puts a cell boundary at the specified fraction between each point and
the next, plus extrapolated lowermost and uppermost bound points, so
that each point lies within a cell.
With regularly spaced points, the resulting bounds will also be
regular, and all points lie at the same position within their cell.
With irregular points, the first and last cells are given the same
widths as the ones next to them.
Kwargs:
* bound_position:
The desired position of the bounds relative to the position
of the points.
.. note::
An error is raised if the coordinate already has bounds, is not
one-dimensional, or is not monotonic.
.. note::
            Unevenly spaced values, such as from a wrapped longitude range,
            can produce unexpected results: in such cases you should assign
            suitable values directly to the bounds property instead.
"""
self.bounds = self._guess_bounds(bound_position)
def intersect(self, other, return_indices=False):
"""
Returns a new coordinate from the intersection of two coordinates.
Both coordinates must be compatible as defined by
:meth:`~iris.coords.Coord.is_compatible`.
Kwargs:
* return_indices:
If True, changes the return behaviour to return the intersection
indices for the "self" coordinate.
"""
if not self.is_compatible(other):
msg = (
"The coordinates cannot be intersected. They are not "
"compatible because of differing metadata."
)
raise ValueError(msg)
# Cache self.cells for speed. We can also use the index operation on a
# list conveniently.
self_cells = [cell for cell in self.cells()]
# Maintain a list of indices on self for which cells exist in both self
# and other.
self_intersect_indices = []
for cell in other.cells():
try:
self_intersect_indices.append(self_cells.index(cell))
except ValueError:
pass
if return_indices is False and self_intersect_indices == []:
raise ValueError(
"No intersection between %s coords possible." % self.name()
)
self_intersect_indices = np.array(self_intersect_indices)
# Return either the indices, or a Coordinate instance of the
# intersection.
if return_indices:
return self_intersect_indices
else:
return self[self_intersect_indices]
def nearest_neighbour_index(self, point):
"""
Returns the index of the cell nearest to the given point.
Only works for one-dimensional coordinates.
For example:
>>> cube = iris.load_cube(iris.sample_data_path('ostia_monthly.nc'))
>>> cube.coord('latitude').nearest_neighbour_index(0)
9
>>> cube.coord('longitude').nearest_neighbour_index(10)
12
.. note:: If the coordinate contains bounds, these will be used to
determine the nearest neighbour instead of the point values.
.. note:: For circular coordinates, the 'nearest' point can wrap around
to the other end of the values.
"""
points = self.points
bounds = self.bounds if self.has_bounds() else np.array([])
if self.ndim != 1:
raise ValueError(
"Nearest-neighbour is currently limited"
" to one-dimensional coordinates."
)
do_circular = getattr(self, "circular", False)
if do_circular:
wrap_modulus = self.units.modulus
# wrap 'point' to a range based on lowest points or bounds value.
wrap_origin = np.min(np.hstack((points, bounds.flatten())))
point = wrap_origin + (point - wrap_origin) % wrap_modulus
# Calculate the nearest neighbour.
# The algorithm: given a single value (V),
# if coord has bounds,
# make bounds cells complete and non-overlapping
# return first cell containing V
# else (no bounds),
# find the point which is closest to V
# or if two are equally close, return the lowest index
if self.has_bounds():
# make bounds ranges complete+separate, so point is in at least one
increasing = self.bounds[0, 1] > self.bounds[0, 0]
bounds = bounds.copy()
# sort the bounds cells by their centre values
sort_inds = np.argsort(np.mean(bounds, axis=1))
bounds = bounds[sort_inds]
# replace all adjacent bounds with their averages
if increasing:
mid_bounds = 0.5 * (bounds[:-1, 1] + bounds[1:, 0])
bounds[:-1, 1] = mid_bounds
bounds[1:, 0] = mid_bounds
else:
mid_bounds = 0.5 * (bounds[:-1, 0] + bounds[1:, 1])
bounds[:-1, 0] = mid_bounds
bounds[1:, 1] = mid_bounds
# if point lies beyond either end, fix the end cell to include it
bounds[0, 0] = min(point, bounds[0, 0])
bounds[-1, 1] = max(point, bounds[-1, 1])
# get index of first-occurring cell that contains the point
inside_cells = np.logical_and(
point >= np.min(bounds, axis=1),
point <= np.max(bounds, axis=1),
)
result_index = np.where(inside_cells)[0][0]
# return the original index of the cell (before the bounds sort)
result_index = sort_inds[result_index]
# Or, if no bounds, we always have points ...
else:
if do_circular:
# add an extra, wrapped max point (simpler than bounds case)
# NOTE: circular implies a DimCoord, so *must* be monotonic
if points[-1] >= points[0]:
# ascending value order : add wrapped lowest value to end
index_offset = 0
points = np.hstack((points, points[0] + wrap_modulus))
else:
# descending order : add wrapped lowest value at start
index_offset = 1
points = np.hstack((points[-1] + wrap_modulus, points))
# return index of first-occurring nearest point
distances = np.abs(points - point)
result_index = np.where(distances == np.min(distances))[0][0]
if do_circular:
# convert index back from circular-adjusted points
result_index = (result_index - index_offset) % self.shape[0]
return result_index
def xml_element(self, doc):
"""
Create the :class:`xml.dom.minidom.Element` that describes this
:class:`Coord`.
Args:
* doc:
The parent :class:`xml.dom.minidom.Document`.
Returns:
The :class:`xml.dom.minidom.Element` that will describe this
:class:`DimCoord`.
"""
# Create the XML element as the camelCaseEquivalent of the
# class name
element = super().xml_element(doc=doc)
# Add bounds, points are handled by the parent class.
if self.has_bounds():
element.setAttribute("bounds", self._xml_array_repr(self.bounds))
return element
def _xml_id_extra(self, unique_value):
"""Coord specific stuff for the xml id"""
unique_value += str(self.coord_system).encode("utf-8") + b"\0"
return unique_value
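# A short sketch of the bounds-guessing behaviour documented above, using a
# concrete (hypothetical) latitude coordinate:
#
#     >>> lat = AuxCoord([10.0, 20.0, 30.0], standard_name="latitude",
#     ...                units="degrees")
#     >>> lat.has_bounds()
#     False
#     >>> lat.guess_bounds()
#     >>> lat.bounds
#     array([[ 5., 15.],
#            [15., 25.],
#            [25., 35.]])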
class DimCoord(Coord):
"""
A coordinate that is 1D, and numeric, with values that have a strict monotonic ordering. Missing values are not
permitted in a :class:`DimCoord`.
"""
@classmethod
def from_regular(
cls,
zeroth,
step,
count,
standard_name=None,
long_name=None,
var_name=None,
units=None,
attributes=None,
coord_system=None,
circular=False,
climatological=False,
with_bounds=False,
):
"""
Create a :class:`DimCoord` with regularly spaced points, and
optionally bounds.
The majority of the arguments are defined as for
:class:`Coord`, but those which differ are defined below.
Args:
* zeroth:
The value *prior* to the first point value.
* step:
The numeric difference between successive point values.
* count:
The number of point values.
Kwargs:
* with_bounds:
If True, the resulting DimCoord will possess bound values
which are equally spaced around the points. Otherwise no
bounds values will be defined. Defaults to False.
"""
points = (zeroth + step) + step * np.arange(count, dtype=np.float32)
_, regular = iris.util.points_step(points)
if not regular:
points = (zeroth + step) + step * np.arange(
count, dtype=np.float64
)
points.flags.writeable = False
if with_bounds:
delta = 0.5 * step
bounds = np.concatenate([[points - delta], [points + delta]]).T
bounds.flags.writeable = False
else:
bounds = None
return cls(
points,
standard_name=standard_name,
long_name=long_name,
var_name=var_name,
units=units,
bounds=bounds,
attributes=attributes,
coord_system=coord_system,
circular=circular,
climatological=climatological,
)
def __init__(
self,
points,
standard_name=None,
long_name=None,
var_name=None,
units=None,
bounds=None,
attributes=None,
coord_system=None,
circular=False,
climatological=False,
):
"""
Create a 1D, numeric, and strictly monotonic coordinate with **immutable** points and bounds.
Missing values are not permitted.
Args:
* points:
1D numpy array-like of values (or single value in the case of a
scalar coordinate) for each cell of the coordinate. The values
must be strictly monotonic and masked values are not allowed.
Kwargs:
* standard_name:
CF standard name of the coordinate.
* long_name:
Descriptive name of the coordinate.
* var_name:
The netCDF variable name for the coordinate.
* units:
The :class:`~cf_units.Unit` of the coordinate's values.
Can be a string, which will be converted to a Unit object.
* bounds:
An array of values describing the bounds of each cell. Given n
bounds and m cells, the shape of the bounds array should be
(m, n). For each bound, the values must be strictly monotonic along
the cells, and the direction of monotonicity must be consistent
across the bounds. For example, a DimCoord with 100 points and two
bounds per cell would have a bounds array of shape (100, 2), and
the slices ``bounds[:, 0]`` and ``bounds[:, 1]`` would be monotonic
in the same direction. Masked values are not allowed.
Note if the data is a climatology, `climatological`
should be set.
* attributes:
A dictionary containing other CF and user-defined attributes.
* coord_system:
A :class:`~iris.coord_systems.CoordSystem` representing the
coordinate system of the coordinate,
e.g., a :class:`~iris.coord_systems.GeogCS` for a longitude coordinate.
* circular (bool):
Whether the coordinate wraps by the :attr:`~iris.coords.DimCoord.units.modulus`
i.e., the longitude coordinate wraps around the full great circle.
* climatological (bool):
When True: the coordinate is a NetCDF climatological time axis.
When True: saving in NetCDF will give the coordinate variable a
'climatology' attribute and will create a boundary variable called
'<coordinate-name>_climatology' in place of a standard bounds
attribute and bounds variable.
Will set to True when a climatological time axis is loaded
from NetCDF.
Always False if no bounds exist.
"""
# Configure the metadata manager.
self._metadata_manager = metadata_manager_factory(DimCoordMetadata)
super().__init__(
points,
standard_name=standard_name,
long_name=long_name,
var_name=var_name,
units=units,
bounds=bounds,
attributes=attributes,
coord_system=coord_system,
climatological=climatological,
)
#: Whether the coordinate wraps by ``coord.units.modulus``.
self.circular = circular
def __deepcopy__(self, memo):
"""
coord.__deepcopy__() -> Deep copy of coordinate.
Used if copy.deepcopy is called on a coordinate.
"""
new_coord = copy.deepcopy(super(), memo)
# Ensure points and bounds arrays are read-only.
new_coord._values_dm.data.flags.writeable = False
if new_coord._bounds_dm is not None:
new_coord._bounds_dm.data.flags.writeable = False
return new_coord
@property
def circular(self):
return self._metadata_manager.circular
@circular.setter
def circular(self, circular):
self._metadata_manager.circular = bool(circular)
def copy(self, points=None, bounds=None):
new_coord = super().copy(points=points, bounds=bounds)
# Make the arrays read-only.
new_coord._values_dm.data.flags.writeable = False
if bounds is not None:
new_coord._bounds_dm.data.flags.writeable = False
return new_coord
def __eq__(self, other):
result = NotImplemented
if isinstance(other, DimCoord):
# The "circular" member participates in DimCoord to DimCoord
# equivalence. We require to do this explicitly here
# as the "circular" member does NOT participate in
# DimCoordMetadata to DimCoordMetadata equivalence.
result = self.circular == other.circular and super().__eq__(other)
return result
# The __ne__ operator from Coord implements the not __eq__ method.
# For Python 3, we must explicitly re-implement the '__hash__' method, as
# defining an '__eq__' has blocked its inheritance. See ...
# https://docs.python.org/3.1/reference/datamodel.html#object.__hash__
# "If a class that overrides __eq__() needs to retain the
# implementation of __hash__() from a parent class, the interpreter
# must be told this explicitly".
__hash__ = Coord.__hash__
def __getitem__(self, key):
coord = super().__getitem__(key)
coord.circular = self.circular and coord.shape == self.shape
return coord
def collapsed(self, dims_to_collapse=None):
coord = Coord.collapsed(self, dims_to_collapse=dims_to_collapse)
if self.circular and self.units.modulus is not None:
bnds = coord.bounds.copy()
bnds[0, 1] = coord.bounds[0, 0] + self.units.modulus
coord.bounds = bnds
coord.points = np.array(
np.sum(coord.bounds) * 0.5, dtype=self.points.dtype
)
# XXX This isn't actually correct, but is ported from the old world.
coord.circular = False
return coord
def _repr_other_metadata(self):
result = Coord._repr_other_metadata(self)
if self.circular:
result += ", circular=%r" % self.circular
return result
def _new_points_requirements(self, points):
"""
Confirm that a new set of coord points adheres to the requirements for
:class:`~iris.coords.DimCoord` points, being:
* points are scalar or 1D,
* points are numeric,
* points are not masked, and
* points are monotonic.
"""
if points.ndim not in (0, 1):
emsg = "The {!r} {} points array must be scalar or 1-dimensional."
raise ValueError(emsg.format(self.name(), self.__class__.__name__))
if not np.issubdtype(points.dtype, np.number):
emsg = "The {!r} {} points array must be numeric."
raise ValueError(emsg.format(self.name(), self.__class__.__name__))
if ma.is_masked(points):
emsg = "A {!r} {} points array must not be masked."
raise TypeError(emsg.format(self.name(), self.__class__.__name__))
if points.size > 1 and not iris.util.monotonic(points, strict=True):
emsg = "The {!r} {} points array must be strictly monotonic."
raise ValueError(emsg.format(self.name(), self.__class__.__name__))
@Coord._values.setter
def _values(self, points):
# DimCoord always realises the points, to allow monotonicity checks.
# Ensure it is an actual array, and also make our own copy so that we
# can make it read-only.
points = _lazy.as_concrete_data(points)
# Make sure that we have an array (any type of array).
points = np.asanyarray(points)
# Check validity requirements for dimension-coordinate points.
self._new_points_requirements(points)
# Cast to a numpy array for masked arrays with no mask.
points = np.array(points)
super(DimCoord, self.__class__)._values.fset(self, points)
if self._values_dm is not None:
# Re-fetch the core array, as the super call may replace it.
points = self._values_dm.core_data()
# N.B. always a *real* array, as we realised 'points' at the start.
# Make the array read-only.
points.flags.writeable = False
def _new_bounds_requirements(self, bounds):
"""
Confirm that a new set of coord bounds adheres to the requirements for
:class:`~iris.coords.DimCoord` bounds, being:
* bounds are compatible in shape with the points
* bounds are numeric,
* bounds are not masked, and
* bounds are monotonic in the first dimension.
"""
# Ensure the bounds are a compatible shape.
if self.shape != bounds.shape[:-1] and not (
self.shape == (1,) and bounds.ndim == 1
):
emsg = (
"The shape of the {!r} {} bounds array should be "
"points.shape + (n_bounds)"
)
raise ValueError(emsg.format(self.name(), self.__class__.__name__))
# Checks for numeric.
if not np.issubdtype(bounds.dtype, np.number):
emsg = "The {!r} {} bounds array must be numeric."
raise ValueError(emsg.format(self.name(), self.__class__.__name__))
# Check not masked.
if ma.is_masked(bounds):
emsg = "A {!r} {} bounds array must not be masked."
raise TypeError(emsg.format(self.name(), self.__class__.__name__))
# Check bounds are monotonic.
if bounds.ndim > 1:
n_bounds = bounds.shape[-1]
n_points = bounds.shape[0]
if n_points > 1:
directions = set()
for b_index in range(n_bounds):
monotonic, direction = iris.util.monotonic(
bounds[:, b_index], strict=True, return_direction=True
)
if not monotonic:
emsg = (
"The {!r} {} bounds array must be strictly "
"monotonic."
)
raise ValueError(
emsg.format(self.name(), self.__class__.__name__)
)
directions.add(direction)
if len(directions) != 1:
emsg = (
"The direction of monotonicity for {!r} {} must "
"be consistent across all bounds."
)
raise ValueError(
emsg.format(self.name(), self.__class__.__name__)
)
@Coord.bounds.setter
def bounds(self, bounds):
if bounds is not None:
# Ensure we have a realised array of new bounds values.
bounds = _lazy.as_concrete_data(bounds)
# Make sure we have an array (any type of array).
bounds = np.asanyarray(bounds)
# Check validity requirements for dimension-coordinate bounds.
self._new_bounds_requirements(bounds)
# Cast to a numpy array for masked arrays with no mask.
bounds = np.array(bounds)
# Call the parent bounds setter.
super(DimCoord, self.__class__).bounds.fset(self, bounds)
if self._bounds_dm is not None:
# Re-fetch the core array, as the super call may replace it.
bounds = self._bounds_dm.core_data()
# N.B. always a *real* array, as we realised 'bounds' at the start.
# Ensure the array is read-only.
bounds.flags.writeable = False
def is_monotonic(self):
return True
def xml_element(self, doc):
"""
Create the :class:`xml.dom.minidom.Element` that describes this
:class:`DimCoord`.
Args:
* doc:
The parent :class:`xml.dom.minidom.Document`.
Returns:
The :class:`xml.dom.minidom.Element` that describes this
:class:`DimCoord`.
"""
element = super().xml_element(doc)
if self.circular:
element.setAttribute("circular", str(self.circular))
return element
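# A short sketch of DimCoord.from_regular as documented above (hypothetical
# values). 'zeroth' is the value *before* the first point, so the points
# start at zeroth + step:
#
#     >>> coord = DimCoord.from_regular(zeroth=-0.5, step=1.0, count=3,
#     ...                               long_name="x", with_bounds=True)
#     >>> coord.points
#     array([0.5, 1.5, 2.5], dtype=float32)
#     >>> coord.bounds
#     array([[0., 1.],
#            [1., 2.],
#            [2., 3.]], dtype=float32)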
class AuxCoord(Coord):
"""
A CF auxiliary coordinate.
"""
def __init__(self, *args, **kwargs):
"""
Create a coordinate with **mutable** points and bounds.
Args:
* points:
The values (or value in the case of a scalar coordinate) for each
cell of the coordinate.
Kwargs:
* standard_name:
CF standard name of the coordinate.
* long_name:
Descriptive name of the coordinate.
* var_name:
The netCDF variable name for the coordinate.
* units
The :class:`~cf_units.Unit` of the coordinate's values.
Can be a string, which will be converted to a Unit object.
* bounds
An array of values describing the bounds of each cell. Given n
bounds for each cell, the shape of the bounds array should be
points.shape + (n,). For example, a 1D coordinate with 100 points
and two bounds per cell would have a bounds array of shape
(100, 2)
Note if the data is a climatology, `climatological`
should be set.
* attributes
A dictionary containing other CF and user-defined attributes.
* coord_system
A :class:`~iris.coord_systems.CoordSystem` representing the
coordinate system of the coordinate,
e.g., a :class:`~iris.coord_systems.GeogCS` for a longitude coordinate.
* climatological (bool):
When True: the coordinate is a NetCDF climatological time axis.
When True: saving in NetCDF will give the coordinate variable a
'climatology' attribute and will create a boundary variable called
'<coordinate-name>_climatology' in place of a standard bounds
attribute and bounds variable.
Will set to True when a climatological time axis is loaded
from NetCDF.
Always False if no bounds exist.
"""
super().__init__(*args, **kwargs)
# Logically, :class:`Coord` is an abstract class and all actual coords must
# be members of some concrete subclass, i.e. an :class:`AuxCoord` or
# a :class:`DimCoord`.
# So we retain :class:`AuxCoord` as a distinct concrete subclass.
# This provides clarity, backwards compatibility, and so we can add
# AuxCoord-specific code if needed in future.
class CellMethod(iris.util._OrderedHashable):
"""
Represents a sub-cell pre-processing operation.
"""
# Declare the attribute names relevant to the _OrderedHashable behaviour.
_names = ("method", "coord_names", "intervals", "comments")
#: The name of the operation that was applied. e.g. "mean", "max", etc.
method = None
#: The tuple of coordinate names over which the operation was applied.
coord_names = None
#: A description of the original intervals over which the operation
#: was applied.
intervals = None
#: Additional comments.
comments = None
def __init__(self, method, coords=None, intervals=None, comments=None):
"""
Args:
* method:
The name of the operation.
Kwargs:
* coords:
A single instance or sequence of :class:`.Coord` instances or
coordinate names.
* intervals:
A single string, or a sequence strings, describing the intervals
within the cell method.
* comments:
A single string, or a sequence strings, containing any additional
comments.
"""
if not isinstance(method, str):
raise TypeError(
"'method' must be a string - got a '%s'" % type(method)
)
default_name = BaseMetadata.DEFAULT_NAME
_coords = []
if coords is None:
pass
elif isinstance(coords, Coord):
_coords.append(coords.name(token=True))
elif isinstance(coords, str):
_coords.append(BaseMetadata.token(coords) or default_name)
else:
normalise = (
lambda coord: coord.name(token=True)
if isinstance(coord, Coord)
else BaseMetadata.token(coord) or default_name
)
_coords.extend([normalise(coord) for coord in coords])
_intervals = []
if intervals is None:
pass
elif isinstance(intervals, str):
_intervals = [intervals]
else:
_intervals.extend(intervals)
_comments = []
if comments is None:
pass
elif isinstance(comments, str):
_comments = [comments]
else:
_comments.extend(comments)
self._init(method, tuple(_coords), tuple(_intervals), tuple(_comments))
def __str__(self):
"""Return a custom string representation of CellMethod"""
# Group related coord names intervals and comments together
cell_components = zip_longest(
self.coord_names, self.intervals, self.comments, fillvalue=""
)
collection_summaries = []
cm_summary = "%s: " % self.method
for coord_name, interval, comment in cell_components:
other_info = ", ".join(filter(None, chain((interval, comment))))
if other_info:
coord_summary = "%s (%s)" % (coord_name, other_info)
else:
coord_summary = "%s" % coord_name
collection_summaries.append(coord_summary)
return cm_summary + ", ".join(collection_summaries)
def __add__(self, other):
# Disable the default tuple behaviour of tuple concatenation
raise NotImplementedError()
def xml_element(self, doc):
"""
Create the :class:`xml.dom.minidom.Element` that describes this
:class:`CellMethod`.
Args:
* doc:
The parent :class:`xml.dom.minidom.Document`.
Returns:
The :class:`xml.dom.minidom.Element` that describes this
:class:`CellMethod`.
"""
cellMethod_xml_element = doc.createElement("cellMethod")
cellMethod_xml_element.setAttribute("method", self.method)
for coord_name, interval, comment in zip_longest(
self.coord_names, self.intervals, self.comments
):
coord_xml_element = doc.createElement("coord")
if coord_name is not None:
coord_xml_element.setAttribute("name", coord_name)
if interval is not None:
coord_xml_element.setAttribute("interval", interval)
if comment is not None:
coord_xml_element.setAttribute("comment", comment)
cellMethod_xml_element.appendChild(coord_xml_element)
return cellMethod_xml_element
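# A short sketch of the CellMethod string form produced by __str__ above
# (hypothetical values):
#
#     >>> cm = CellMethod("mean", coords="time", intervals="1 hour")
#     >>> str(cm)
#     'mean: time (1 hour)'
#     >>> cm.coord_names, cm.intervals
#     (('time',), ('1 hour',))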
# See Coord.cells() for the description/context.
class _CellIterator(Iterator):
def __init__(self, coord):
self._coord = coord
if coord.ndim != 1:
raise iris.exceptions.CoordinateMultiDimError(coord)
self._indices = iter(range(coord.shape[0]))
def __next__(self):
# NB. When self._indices runs out it will raise StopIteration for us.
i = next(self._indices)
return self._coord.cell(i)
next = __next__
# See ExplicitCoord._group() for the description/context.
class _GroupIterator(Iterator):
def __init__(self, points):
self._points = points
self._start = 0
def __next__(self):
num_points = len(self._points)
if self._start >= num_points:
raise StopIteration
stop = self._start + 1
m = self._points[self._start]
while stop < num_points and self._points[stop] == m:
stop += 1
group = _GroupbyItem(m, slice(self._start, stop))
self._start = stop
return group
next = __next__
| rcomer/iris | lib/iris/coords.py | Python | lgpl-3.0 | 100,378 | ["NetCDF"] | 3542e4e61de829c81088172c55d406d0002eb7ce14c4d7a4eea22552d35cb408 |
"""Example of interfacing with scipy solvers.
This example solves the problem
-Laplacian(x) = b
where b is a gaussian peak at the origin.
"""
import numpy as np
import scipy.sparse.linalg as scipy_solvers
import odl
# Create discrete space, a square from [-1, 1] x [-1, 1] with (11 x 11) points
space = odl.uniform_discr([-1, -1], [1, 1], [11, 11])
# Create odl operator for negative laplacian
laplacian = -odl.Laplacian(space)
# Create right hand side, a gaussian around the point (0, 0)
rhs = space.element(lambda x: np.exp(-(x[0]**2 + x[1]**2) / 0.1**2))
# Convert laplacian to scipy operator
scipy_laplacian = odl.operator.oputils.as_scipy_operator(laplacian)
# Convert to array and flatten
rhs_arr = rhs.asarray().ravel()
# Solve using scipy
result, info = scipy_solvers.cg(scipy_laplacian, rhs_arr)
# Other options include
# result, info = scipy_solvers.cgs(scipy_laplacian, rhs_arr)
# result, info = scipy_solvers.gmres(scipy_laplacian, rhs_arr)
# result, info = scipy_solvers.lgmres(scipy_laplacian, rhs_arr)
# result, info = scipy_solvers.bicg(scipy_laplacian, rhs_arr)
# result, info = scipy_solvers.bicgstab(scipy_laplacian, rhs_arr)
# Convert back to odl and display result
result_odl = space.element(result.reshape(space.shape)) # result is flat
result_odl.show('Result')
(rhs - laplacian(result_odl)).show('Residual', force_show=True)
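# As a quick convergence check (illustrative, not part of the solve itself):
# scipy's cg returns info == 0 on success, and the residual norm of the
# flattened system should be small. LinearOperators expose matvec, so:
print('cg info (0 means converged):', info)
print('residual norm:',
      np.linalg.norm(scipy_laplacian.matvec(result) - rhs_arr))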
| kohr-h/odl | examples/solvers/scipy_solvers.py | Python | mpl-2.0 | 1,338 | ["Gaussian"] | adbf0746ddb0e62713455520a0e5a615118b0377ff32536beafd5742271755dd |
# Copyright 2009 Brian Quinlan. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.
from __future__ import with_statement
import logging
import threading
import time
try:
from collections import namedtuple
except ImportError:
from concurrent.futures._compat import namedtuple
__author__ = 'Brian Quinlan (brian@sweetapp.com)'
FIRST_COMPLETED = 'FIRST_COMPLETED'
FIRST_EXCEPTION = 'FIRST_EXCEPTION'
ALL_COMPLETED = 'ALL_COMPLETED'
_AS_COMPLETED = '_AS_COMPLETED'
# Possible future states (for internal use by the futures package).
PENDING = 'PENDING'
RUNNING = 'RUNNING'
# The future was cancelled by the user...
CANCELLED = 'CANCELLED'
# ...and _Waiter.add_cancelled() was called by a worker.
CANCELLED_AND_NOTIFIED = 'CANCELLED_AND_NOTIFIED'
FINISHED = 'FINISHED'
_FUTURE_STATES = [
PENDING,
RUNNING,
CANCELLED,
CANCELLED_AND_NOTIFIED,
FINISHED
]
_STATE_TO_DESCRIPTION_MAP = {
PENDING: "pending",
RUNNING: "running",
CANCELLED: "cancelled",
CANCELLED_AND_NOTIFIED: "cancelled",
FINISHED: "finished"
}
# Logger for internal use by the futures package.
LOGGER = logging.getLogger("concurrent.futures")
class Error(Exception):
"""Base class for all future-related exceptions."""
pass
class CancelledError(Error):
"""The Future was cancelled."""
pass
class TimeoutError(Error):
"""The operation exceeded the given deadline."""
pass
class _Waiter(object):
"""Provides the event that wait() and as_completed() block on."""
def __init__(self):
self.event = threading.Event()
self.finished_futures = []
def add_result(self, future):
self.finished_futures.append(future)
def add_exception(self, future):
self.finished_futures.append(future)
def add_cancelled(self, future):
self.finished_futures.append(future)
class _AsCompletedWaiter(_Waiter):
"""Used by as_completed()."""
def __init__(self):
super(_AsCompletedWaiter, self).__init__()
self.lock = threading.Lock()
def add_result(self, future):
with self.lock:
super(_AsCompletedWaiter, self).add_result(future)
self.event.set()
def add_exception(self, future):
with self.lock:
super(_AsCompletedWaiter, self).add_exception(future)
self.event.set()
def add_cancelled(self, future):
with self.lock:
super(_AsCompletedWaiter, self).add_cancelled(future)
self.event.set()
class _FirstCompletedWaiter(_Waiter):
"""Used by wait(return_when=FIRST_COMPLETED)."""
def add_result(self, future):
super(_FirstCompletedWaiter, self).add_result(future)
self.event.set()
def add_exception(self, future):
super(_FirstCompletedWaiter, self).add_exception(future)
self.event.set()
def add_cancelled(self, future):
super(_FirstCompletedWaiter, self).add_cancelled(future)
self.event.set()
class _AllCompletedWaiter(_Waiter):
"""Used by wait(return_when=FIRST_EXCEPTION and ALL_COMPLETED)."""
def __init__(self, num_pending_calls, stop_on_exception):
self.num_pending_calls = num_pending_calls
self.stop_on_exception = stop_on_exception
self.lock = threading.Lock()
super(_AllCompletedWaiter, self).__init__()
def _decrement_pending_calls(self):
with self.lock:
self.num_pending_calls -= 1
if not self.num_pending_calls:
self.event.set()
def add_result(self, future):
super(_AllCompletedWaiter, self).add_result(future)
self._decrement_pending_calls()
def add_exception(self, future):
super(_AllCompletedWaiter, self).add_exception(future)
if self.stop_on_exception:
self.event.set()
else:
self._decrement_pending_calls()
def add_cancelled(self, future):
super(_AllCompletedWaiter, self).add_cancelled(future)
self._decrement_pending_calls()
class _AcquireFutures(object):
"""A context manager that does an ordered acquire of Future conditions."""
def __init__(self, futures):
self.futures = sorted(futures, key=id)
def __enter__(self):
for future in self.futures:
future._condition.acquire()
def __exit__(self, *args):
for future in self.futures:
future._condition.release()
def _create_and_install_waiters(fs, return_when):
if return_when == _AS_COMPLETED:
waiter = _AsCompletedWaiter()
elif return_when == FIRST_COMPLETED:
waiter = _FirstCompletedWaiter()
else:
pending_count = sum(
f._state not in [CANCELLED_AND_NOTIFIED, FINISHED] for f in fs)
if return_when == FIRST_EXCEPTION:
waiter = _AllCompletedWaiter(pending_count, stop_on_exception=True)
elif return_when == ALL_COMPLETED:
waiter = _AllCompletedWaiter(pending_count, stop_on_exception=False)
else:
raise ValueError("Invalid return condition: %r" % return_when)
for f in fs:
f._waiters.append(waiter)
return waiter
def as_completed(fs, timeout=None):
"""An iterator over the given futures that yields each as it completes.
Args:
fs: The sequence of Futures (possibly created by different Executors) to
iterate over.
timeout: The maximum number of seconds to wait. If None, then there
is no limit on the wait time.
Returns:
An iterator that yields the given Futures as they complete (finished or
cancelled).
Raises:
TimeoutError: If the entire result iterator could not be generated
before the given timeout.
"""
if timeout is not None:
end_time = timeout + time.time()
with _AcquireFutures(fs):
finished = set(
f for f in fs
if f._state in [CANCELLED_AND_NOTIFIED, FINISHED])
pending = set(fs) - finished
waiter = _create_and_install_waiters(fs, _AS_COMPLETED)
try:
for future in finished:
yield future
while pending:
if timeout is None:
wait_timeout = None
else:
wait_timeout = end_time - time.time()
if wait_timeout < 0:
raise TimeoutError(
'%d (of %d) futures unfinished' % (
len(pending), len(fs)))
waiter.event.wait(wait_timeout)
with waiter.lock:
finished = waiter.finished_futures
waiter.finished_futures = []
waiter.event.clear()
for future in finished:
yield future
pending.remove(future)
finally:
for f in fs:
f._waiters.remove(waiter)
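# A minimal usage sketch (illustrative only, not part of the module's public
# code): drain futures in completion order. It assumes the ThreadPoolExecutor
# shipped with this same futures package is importable.
def _demo_as_completed():
    from concurrent.futures import ThreadPoolExecutor
    with ThreadPoolExecutor(max_workers=2) as executor:
        fs = [executor.submit(pow, 2, n) for n in range(5)]
        for future in as_completed(fs, timeout=10):
            print(future.result())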
DoneAndNotDoneFutures = namedtuple(
'DoneAndNotDoneFutures', 'done not_done')
def wait(fs, timeout=None, return_when=ALL_COMPLETED):
"""Wait for the futures in the given sequence to complete.
Args:
fs: The sequence of Futures (possibly created by different Executors) to
wait upon.
timeout: The maximum number of seconds to wait. If None, then there
is no limit on the wait time.
return_when: Indicates when this function should return. The options
are:
FIRST_COMPLETED - Return when any future finishes or is
cancelled.
FIRST_EXCEPTION - Return when any future finishes by raising an
exception. If no future raises an exception
then it is equivalent to ALL_COMPLETED.
ALL_COMPLETED - Return when all futures finish or are cancelled.
Returns:
A named 2-tuple of sets. The first set, named 'done', contains the
futures that completed (is finished or cancelled) before the wait
completed. The second set, named 'not_done', contains uncompleted
futures.
"""
with _AcquireFutures(fs):
done = set(f for f in fs
if f._state in [CANCELLED_AND_NOTIFIED, FINISHED])
not_done = set(fs) - done
if (return_when == FIRST_COMPLETED) and done:
return DoneAndNotDoneFutures(done, not_done)
elif (return_when == FIRST_EXCEPTION) and done:
if any(f for f in done
if not f.cancelled() and f.exception() is not None):
return DoneAndNotDoneFutures(done, not_done)
if len(done) == len(fs):
return DoneAndNotDoneFutures(done, not_done)
waiter = _create_and_install_waiters(fs, return_when)
waiter.event.wait(timeout)
for f in fs:
f._waiters.remove(waiter)
done.update(waiter.finished_futures)
return DoneAndNotDoneFutures(done, set(fs) - done)
class Future(object):
"""Represents the result of an asynchronous computation."""
def __init__(self):
"""Initializes the future. Should not be called by clients."""
self._condition = threading.Condition()
self._state = PENDING
self._result = None
self._exception = None
self._waiters = []
self._done_callbacks = []
def _invoke_callbacks(self):
for callback in self._done_callbacks:
try:
callback(self)
except Exception:
LOGGER.exception('exception calling callback for %r', self)
def __repr__(self):
with self._condition:
if self._state == FINISHED:
if self._exception:
return '<Future at %s state=%s raised %s>' % (
hex(id(self)),
_STATE_TO_DESCRIPTION_MAP[self._state],
self._exception.__class__.__name__)
else:
return '<Future at %s state=%s returned %s>' % (
hex(id(self)),
_STATE_TO_DESCRIPTION_MAP[self._state],
self._result.__class__.__name__)
return '<Future at %s state=%s>' % (
hex(id(self)),
_STATE_TO_DESCRIPTION_MAP[self._state])
def cancel(self):
"""Cancel the future if possible.
Returns True if the future was cancelled, False otherwise. A future
cannot be cancelled if it is running or has already completed.
"""
with self._condition:
if self._state in [RUNNING, FINISHED]:
return False
if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
return True
self._state = CANCELLED
self._condition.notify_all()
self._invoke_callbacks()
return True
def cancelled(self):
"""Return True if the future has cancelled."""
with self._condition:
return self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]
def running(self):
"""Return True if the future is currently executing."""
with self._condition:
return self._state == RUNNING
def done(self):
"""Return True of the future was cancelled or finished executing."""
with self._condition:
return self._state in [CANCELLED, CANCELLED_AND_NOTIFIED, FINISHED]
def __get_result(self):
if self._exception:
raise self._exception
else:
return self._result
def add_done_callback(self, fn):
"""Attaches a callable that will be called when the future finishes.
Args:
fn: A callable that will be called with this future as its only
argument when the future completes or is cancelled. The callable
will always be called by a thread in the same process in which
it was added. If the future has already completed or been
cancelled then the callable will be called immediately. These
callables are called in the order that they were added.
"""
with self._condition:
if self._state not in [CANCELLED, CANCELLED_AND_NOTIFIED, FINISHED]:
self._done_callbacks.append(fn)
return
fn(self)
def result(self, timeout=None):
"""Return the result of the call that the future represents.
Args:
timeout: The number of seconds to wait for the result if the future
isn't done. If None, then there is no limit on the wait time.
Returns:
The result of the call that the future represents.
Raises:
CancelledError: If the future was cancelled.
TimeoutError: If the future didn't finish executing before the given
timeout.
Exception: If the call raised then that exception will be raised.
"""
with self._condition:
if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
raise CancelledError()
elif self._state == FINISHED:
return self.__get_result()
self._condition.wait(timeout)
if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
raise CancelledError()
elif self._state == FINISHED:
return self.__get_result()
else:
raise TimeoutError()
def exception(self, timeout=None):
"""Return the exception raised by the call that the future represents.
Args:
timeout: The number of seconds to wait for the exception if the
future isn't done. If None, then there is no limit on the wait
time.
Returns:
The exception raised by the call that the future represents or None
if the call completed without raising.
Raises:
CancelledError: If the future was cancelled.
TimeoutError: If the future didn't finish executing before the given
timeout.
"""
with self._condition:
if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
raise CancelledError()
elif self._state == FINISHED:
return self._exception
self._condition.wait(timeout)
if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
raise CancelledError()
elif self._state == FINISHED:
return self._exception
else:
raise TimeoutError()
# The following methods should only be used by Executors and in tests.
def set_running_or_notify_cancel(self):
"""Mark the future as running or process any cancel notifications.
Should only be used by Executor implementations and unit tests.
If the future has been cancelled (cancel() was called and returned
        True) then any threads waiting on the future completing (through calls
to as_completed() or wait()) are notified and False is returned.
If the future was not cancelled then it is put in the running state
(future calls to running() will return True) and True is returned.
This method should be called by Executor implementations before
executing the work associated with this future. If this method returns
False then the work should not be executed.
Returns:
False if the Future was cancelled, True otherwise.
Raises:
RuntimeError: if this method was already called or if set_result()
or set_exception() was called.
"""
with self._condition:
if self._state == CANCELLED:
self._state = CANCELLED_AND_NOTIFIED
for waiter in self._waiters:
waiter.add_cancelled(self)
# self._condition.notify_all() is not necessary because
# self.cancel() triggers a notification.
return False
elif self._state == PENDING:
self._state = RUNNING
return True
else:
                LOGGER.critical('Future %s in unexpected state: %s',
                                id(self),
                                self._state)
raise RuntimeError('Future in unexpected state')
def set_result(self, result):
"""Sets the return value of work associated with the future.
Should only be used by Executor implementations and unit tests.
"""
with self._condition:
self._result = result
self._state = FINISHED
for waiter in self._waiters:
waiter.add_result(self)
self._condition.notify_all()
self._invoke_callbacks()
def set_exception(self, exception):
"""Sets the result of the future as being the given exception.
Should only be used by Executor implementations and unit tests.
"""
with self._condition:
self._exception = exception
self._state = FINISHED
for waiter in self._waiters:
waiter.add_exception(self)
self._condition.notify_all()
self._invoke_callbacks()
class Executor(object):
"""This is an abstract base class for concrete asynchronous executors."""
def submit(self, fn, *args, **kwargs):
"""Submits a callable to be executed with the given arguments.
Schedules the callable to be executed as fn(*args, **kwargs) and returns
a Future instance representing the execution of the callable.
Returns:
A Future representing the given call.
"""
raise NotImplementedError()
def map(self, fn, *iterables, **kwargs):
"""Returns a iterator equivalent to map(fn, iter).
Args:
fn: A callable that will take as many arguments as there are
passed iterables.
timeout: The maximum number of seconds to wait. If None, then there
is no limit on the wait time.
Returns:
An iterator equivalent to: map(func, *iterables) but the calls may
be evaluated out-of-order.
Raises:
TimeoutError: If the entire result iterator could not be generated
before the given timeout.
Exception: If fn(*args) raises for any values.
"""
timeout = kwargs.get('timeout')
if timeout is not None:
end_time = timeout + time.time()
fs = [self.submit(fn, *args) for args in zip(*iterables)]
try:
for future in fs:
if timeout is None:
yield future.result()
else:
yield future.result(end_time - time.time())
finally:
for future in fs:
future.cancel()
def shutdown(self, wait=True):
"""Clean-up the resources associated with the Executor.
        It is safe to call this method several times. However, no other
        methods can be called after this one.
Args:
wait: If True then shutdown will not return until all running
futures have finished executing and the resources used by the
executor have been reclaimed.
"""
pass
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.shutdown(wait=True)
return False
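# A minimal usage sketch (illustrative only) of wait(): block until the first
# future completes, then until all of them do. It assumes the
# ThreadPoolExecutor shipped with this same futures package.
def _demo_wait():
    from concurrent.futures import ThreadPoolExecutor
    with ThreadPoolExecutor(max_workers=2) as executor:
        fs = [executor.submit(divmod, n, 3) for n in range(4)]
        first = wait(fs, return_when=FIRST_COMPLETED)
        assert first.done  # at least one future has finished
        done, not_done = wait(fs, return_when=ALL_COMPLETED)
        assert len(done) == 4 and not not_done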
|
schmeichelinho/buildroot-weio-v2
|
weio/fs-overlay/usr/lib/python2.7/site-packages/concurrent/futures/_base.py
|
Python
|
gpl-2.0
| 19,642
|
[
"Brian"
] |
caeb126d29477b4590c699d78339e6911bc6940f0389bb6cd482a096aeea9643
|
# -*- coding: UTF-8 -*-
'''
This module implements a set of base types representing different kinds of
fuzzy subsets.
'''
from .common import ACCURACY
from .domain import RationalRange, IntegerRange
##from .algebra import SubsetAlgebra, NumbersAlgebra
import pylab as p
import math
class Subset(object):
'''
    Implements a general-form fuzzy subset.
    Has attributes indicating the beginning and end of the interval on which
    the subset is defined (for subsets defined on R).
>>> A=Subset(0.0, 1.0)
>>> A.begin
0.0
>>> A.end
1.0
Attributes:
values
points
domain
'''
def __init__(self, begin=0.0,
end=1.0,
domain=None):
self.domain = domain or RationalRange(begin, end)
self.values = {}
self.points = {}
self.values[self.domain.begin] = 0.0
self.values[self.domain.end] = 0.0
self.points[self.domain.begin] = 0.0
self.points[self.domain.end] = 0.0
self._algebra = SubsetAlgebra()
def value(self, key):
'''
        Returns the membership level of a point in the fuzzy subset.
        This method is the direct software implementation of the
        membership function.
>>> A=Gaussian(1.0, 1.0)
>>> A.value(0.5)
0.8825
>>> A.value(1.5)
0.8825
>>> A.value(1.0)
1.0
>>> A.value(0.0)
0.60653
'''
        if key not in self.domain:
return 0.0
try:
return self.values[key]
except KeyError:
sort = sorted(self.values.keys())
sort1 = sorted(self.values.keys())
sort1.pop(0)
for (i, j) in zip(sort, sort1):
if i < key < j:
return (key-i)*(self[j]-self[i]) / (j-i) + self[i]
def char(self):
'''
        Prints the support elements and the corresponding membership values
        of the fuzzy set. The iteration step over a continuous support
        matches the discretization frequency used in numerical computations.
        Syntax:
>>> A=Triangle(1.0, 2.0, 4.0)
>>> A.Domain.acc=5
>>> A.char()
1.0 0.0
1.6 0.6
2.2 0.9
2.8 0.6
3.4 0.3
4.0 -0.0
'''
for i in self.domain:
print i, self.value(i)
def normalize(self):
'''
        Returns the fuzzy set normalized by height.
        Syntax:
>>> A=Triangle(1.0, 2.0, 4.0)
>>> A.Domain.acc=5
>>> B=A*0.5
>>> print B.card() #doctest: +SKIP
1.49985
>>> print A.card() #doctest: +SKIP
29.997
>>> C=B.normalize()
>>> print C.card() #doctest: +SKIP
2.97
>>> print round(B.value(B.mode()), 2)
0.5
>>> print round(C.value(C.mode()), 2)
1.0
'''
sup = self.sup()
if sup == 0.0:
return self
res = Subset(self.domain.begin, self.domain.end)
for i in self.domain:
res[i] = self.value(i)/sup
return res
def sup(self):
sup = 0.0
for i in self.domain:
if self.value(i) > sup:
sup = self.value(i)
return sup
def plot(self, verbose=True):
'''
        Displays the fuzzy set graphically. Only for fuzzy sets defined on a
        support of type RationalRange. The verbose parameter controls whether
        additional information is shown on the plot.
        Syntax:
>>> A=Triangle(2.5, 3.8, 10.2)
>>> A.plot()
>>> A.plot(verbose=True)
>>> A.plot(verbose=False)
'''
xxx = []
yyy = []
for i in self.domain:
xxx.append(i)
yyy.append(self.value(i))
p.plot(xxx, yyy)
if isinstance(self.domain, IntegerRange):
            # TODO: plotting of fuzzy membership functions on integer ranges.
pass
p.plot(self.domain.begin, 1.2)
p.plot(self.domain.end+(self.domain.end-self.domain.begin)/3, -0.1)
if verbose:
p.text(self.domain.begin, 0.0, str(self.domain.begin))
p.text(self.domain.end, 0.0, str(self.domain.end))
for i in self.points.iterkeys():
p.text(i, self.points[i], str(i))
def level(self, lvl):
begin = self.domain.begin
end = self.domain.end
for i in self.domain:
if self.value(i) >= float(lvl):
begin = i
break
for i in self.domain:
if (self.value(i) <= lvl) and (i > begin):
end = i
break
res = Interval(begin, end)
return res
def __getitem__(self, key):
return self.value(key)
def __setitem__(self, key, value):
if not key in self.domain:
raise KeyError
self.values[key] = value
def centr(self):
'''
        Computes the centroid (center of mass) of the fuzzy subset.
        The result depends on the shape of the membership function. Works
        both on continuous membership functions of a given form and on
        membership functions of arbitrary form.
>>> A=Triangle(0.2, 0.3, 0.4)
>>> print round(A.centr(), 3)
0.3
>>> A=Trapezoidal(begin=1.0, begin_tol=2.0, end_tol=5.0, end=6.0)
>>> print round(A.centr(), 3)
3.5
'''
sum_ = 0.0
j = 0.0
for i in self.domain:
sum_ += self[i] * i
j += self[i]
try:
return sum_/j
except ZeroDivisionError:
return None
def card(self):
'''
        Returns the cardinality of the fuzzy subset.
        Syntax:
>>> T=Triangle(-1.4, 0.0, 2.6)
>>> print round(T.card(), 2) # doctest: +SKIP
4.0
'''
sum_ = 0.0
for i in self.domain:
sum_ += self.value(i)
return sum_*(self.domain.end-self.domain.begin) / self.domain.acc
def mode(self):
'''
        Returns the mode (point of maximum) of the fuzzy subset.
        Syntax:
>>> A=Triangle(10, 20, 40)
>>> A.mode()
20.0
>>> B=Triangle(20, 40, 50)
>>> B.mode()
40.0
>>> C=A+B
>>> print round(C.mode(), 2)
20.0
'''
res = self.domain.begin
for i in self.domain:
if self.value(i) > self.value(res):
res = i
return res
def euclid_distance(self, other):
begin = min(self.domain.begin, other.domain.begin)
end = max(self.domain.end, other.domain.end)
acc = max(self.domain.acc, other.domain.acc)
domain = RationalRange(begin, end, acc=acc)
summ = 0.0
for i in domain:
summ += (self.value(i)-other.value(i))**2
return math.sqrt(summ/acc)
def hamming_distance(self, other):
begin = min(self.domain.begin, other.domain.begin)
end = max(self.domain.end, other.domain.end)
acc = max(self.domain.acc, other.domain.acc)
domain = RationalRange(begin, end, acc=acc)
summ = 0.0
for i in domain:
summ += abs(self.value(i)-other.value(i))
return summ/acc
def __add__(self, other):
return self._algebra.__add__(self, other)
def __sub__(self, other):
return self._algebra.__sub__(self, other)
def __mul__(self, other):
return self._algebra.__mul__(self, other)
def __div__(self, other):
return self._algebra.__div__(self, other)
def __pow__(self, other):
return self._algebra.__pow__(self, other)
def __invert__(self):
return self.__neg__()
def __not__(self):
return self.__neg__()
def __neg__(self):
res = Subset(domain=self.domain)
for i in res.domain:
res[i] = 1 - self.value(i)
return res
    def __and__(self, other):
        return self._algebra._fuzzy_algebra(self, other, lambda x, y: min(x, y))
    def __or__(self, other):
        return self._algebra._fuzzy_algebra(self, other, lambda x, y: max(x, y))
def __abs__(self):
return self.card()
def __str__(self):
return str(self.centr())
    # TODO: test the following three functions
def __cmp__(self, other):
sum_ = 0.0
sum2 = 0.0
card = (self.card()*other.card())
for i in self.domain:
for j in other.domain:
chances = self.value(i)*other.value(j)/card
sum_ += chances
if i < j:
sum2 += chances
risk = sum2/sum_
return max(-risk*2+1, 0.0)
def __eq__(self, other):
begin = min(self.domain.begin, other.domain.begin)
end = max(self.domain.end, other.domain.end)
i = begin
delta = (end-begin)/ACCURACY
res = True
while i <= end:
if self.value(i) != other.value(i):
res = False
i += delta
return res
def __ne__(self, other):
return not self == other
class Trapezoidal(Subset):
'''
    A fuzzy set with a trapezoidal membership function.
    Syntax (the four bounds are passed as a single tuple, matching the
    constructor):
        >>> A=Trapezoidal((0.0, 1.5, 2.8, 6.6),
                          domain=RationalRange(begin=0, end=10))
    Parameters:
        begin
            lower bound of the left slope of the trapezoid. The membership
            value at this point is 0.
        begin_tol
            lower bound of the tolerance interval. The membership value
            is 1.
        end_tol
            upper bound of the tolerance interval. The membership value is 1.
        end
            upper bound of the right slope of the trapezoid. The membership
            value is 0.
        domain
            This parameter can be used to set the bounds of the domain of the
            fuzzy set. See RationalRange and IntegerRange for details.
Attributes:
begin_tol
end_tol
'''
def __init__(self, points, domain=None):
(begin, begin_tol, end_tol, end) = points
super(Trapezoidal, self).__init__(begin, end)
self.domain.begin = float(begin)
self.begin_tol = float(begin_tol)
self.end_tol = float(end_tol)
self.domain.end = float(end)
self[begin] = 0.0
self[begin_tol] = 1.0
self[end_tol] = 1.0
self[end] = 0.0
def card(self):
return (self.begin_tol-self.domain.begin)/2 + \
self.end_tol-self.begin_tol + \
(self.domain.end-self.end_tol)/2
def mom(self):
return (self.end_tol+self.begin_tol)/2
def mode(self):
return self.begin_tol
def median(self):
return (self.domain.begin+self.begin_tol+self.domain.end+self.end_tol)/4
    def __eq__(self, other):
        if isinstance(other, Trapezoidal):
            return self.domain.begin == other.domain.begin and \
                   self.begin_tol == other.begin_tol and \
                   self.end_tol == other.end_tol and \
                   self.domain.end == other.domain.end
        else:
            return Subset.__eq__(self, other)
class Triangle(Trapezoidal):
'''
    A fuzzy set with a triangular membership function.
    In effect, this is a special case of a trapezoidal fuzzy set whose
    tolerance interval degenerates to a point. The class exists for quickly
    creating fuzzy sets of the most common (triangular) shape.
    Syntax:
        >>> A=Triangle(1.0, 2.3, 5.6)
    Parameters:
        Takes three parameters, in order: lower bound of the slope, the mode,
        upper bound of the slope. The numbers must be in ascending order.
Attributes:
a
b
c
'''
def __init__(self, a, b, c, domain=None):
super(Triangle, self).__init__((a, b, b, c))
def mode(self):
return self.begin_tol
def card(self):
return (self.domain.end-self.domain.begin)/2
class Interval(Trapezoidal):
'''
    Defines a crisp interval as a special kind of fuzzy set. The constructor
    takes two parameters: the bounds of the interval.
    Syntax:
>>> A=Interval(0.5, 6.4)
'''
def __init__(self, a, b):
super(Interval, self).__init__((a, a, b, b))
def card(self):
return self.end_tol-self.begin_tol
def value(self, value):
if value in self.domain:
return 1.0
else:
return 0.0
class Point(Trapezoidal):
'''
    Implements a fuzzy set consisting of a single point.
    Syntax:
>>> A=Point(2.0)
'''
def __init__(self, a):
super(Point, self).__init__((a, a, a, a))
    def value(self, x):
        # Membership is 1.0 exactly at the point, 0.0 everywhere else.
        if x == self.domain.begin:
            return 1.0
        return 0.0
def plot(self, verbose=True):
p.scatter([self.domain.begin], [1.0], 20)
p.plot(self.domain.begin, 1.0)
def card(self):
return 0.0
class Gaussian(Subset):
'''
    Defines a fuzzy set with a Gaussian membership function.
    Syntax:
        >>> A=Gaussian(0.0, 1.0) # standard distribution
    The first parameter is the mode of the gaussian, the second is the
    standard deviation (omega).
Attributes:
mu
omega
'''
def __init__(self, mu, omega):
super(Gaussian, self).__init__(mu-5*omega, mu+5*omega)
self.median = float(mu)
self.omega = float(omega)
def value(self, x):
return round(math.exp(-((x-self.median)**2)/(2*self.omega**2)), 5)
def plot(self, verbose=True):
xxx = []
yyy = []
for i in self.domain:
xxx.append(i)
yyy.append(self.value(i))
p.plot(xxx, yyy)
p.plot(self.domain.end+(self.domain.end-self.domain.begin)/3, -0.1)
p.text(self.median, 1.00, str(self.median))
def centr(self):
return self.median
def mode(self):
return self.median
def card(self):
return round(math.sqrt(2*math.pi)*self.omega, 5)
class Algebra():
pass
class SubsetAlgebra(Algebra):
def __init__(self):
pass
def _fuzzy_algebra(self, one, other, operation):
## if isinstance(self, Point) or isinstance(other, Point):
## raise NotImplementedError
if isinstance(other, float) or isinstance(other, int):
raise NotImplementedError
begin = min(one.domain.begin, other.domain.begin)
end = max(one.domain.end, other.domain.end)
acc = max(one.domain.acc, other.domain.acc)
domain = RationalRange(begin, end, acc=acc)
res = Subset(domain=domain)
for i in res.domain:
res[i] = max(
min(
operation(one[i], other[i]),
1),
0)
return res
def __add__(self, one, other):
return self._fuzzy_algebra(one, other, lambda x, y: x + y)
def __sub__(self, one, other):
return self._fuzzy_algebra(one, other, lambda x, y: x - y)
def __mul__(self, one, other):
return self._fuzzy_algebra(one, other, lambda x, y: x * y)
def __div__(self, one, other):
raise NotImplementedError
def __pow__(self, one, other):
if not(isinstance(other, float) or isinstance(other, int)):
raise NotImplementedError
begin = one.domain.begin
end = one.domain.end
res = Subset(begin, end)
for i in res.domain:
res[i] = min(one.value(i)**other, 1)
return res
class NumbersAlgebra(Algebra):
pass
if __name__ == "__main__":
import doctest
doctest.testmod(verbose=False)
#~ doctest.testmod(verbose=True)
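# A minimal usage sketch (not part of the library): build two triangular
# fuzzy numbers, inspect their characteristics, and combine them with the
# fuzzy union. Wrapped in a function so importing the module stays
# side-effect free.
def _demo_subset():
    a = Triangle(1.0, 2.0, 4.0)
    b = Triangle(2.0, 3.0, 5.0)
    print a.mode(), a.centr(), a.card()
    union = a | b  # pointwise maximum of the two membership functions
    print union.value(2.5)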
|
sejros/FuzzyCalcPy
|
fuzzycalc/subset.py
|
Python
|
gpl-3.0
| 18,953
|
[
"Gaussian"
] |
38c8965c33ec16b1c37853a3a7c361f4d4cfe0656ac8e29eb32058c57b99e48b
|
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at https://mozilla.org/MPL/2.0/.
import numpy as np
import sisl
from sisl.physics.spin import Spin
from ..plot import entry_point
from .bands import BandsPlot
from ..plotutils import random_color
from ..input_fields import OrbitalQueries, TextInput, BoolInput, ColorInput, FloatInput, SileInput
class FatbandsPlot(BandsPlot):
"""Colorful representation of orbital weights in bands.
Parameters
-------------
wfsx_file: wfsxSileSiesta, optional
The WFSX file to get the weights of the different orbitals in the
bands. In standard SIESTA nomenclature, this should be
the *.bands.WFSX file, as it is the one that contains the
weights that correspond to the bands. This
file is only meaningful (and required) if fatbands are plotted from
the .bands file. Otherwise, the bands and weights will be
generated from the hamiltonian by sisl. If the *.bands
file is provided but the wfsx one isn't, we will try to find it.
If `bands_file` is SystemLabel.bands, we will look for
SystemLabel.bands.WFSX
scale: float, optional
The factor by which the width of all fatbands should be multiplied.
Note that each group has an additional individual factor that you can
also tweak.
groups: array-like of dict, optional
The different groups that are displayed in the fatbands Each item
is a dict. Structure of the dict: { 'name':
'species': 'atoms': Structure of the dict: {
'index': Structure of the dict: { 'in': } 'fx':
'fy': 'fz': 'x': 'y': 'z':
'Z': 'neighbours': Structure of the dict: {
'range': 'R': 'neigh_tag': } 'tag':
'seq': } 'orbitals': 'spin': 'normalize':
'color': 'scale': }
bands_file: bandsSileSiesta, optional
This parameter explicitly sets a .bands file. Otherwise, the bands
file is attempted to read from the fdf file
band_structure: BandStructure, optional
A band structure. it can either be provided as a sisl.BandStructure
object or as a list of points, which will be parsed into a
band structure object. Each item is a dict. Structure
of the dict: { 'x': 'y': 'z':
'divisions': 'names': Tick that should be displayed at this
corner of the path. }
aiida_bands: optional
An aiida BandsData node.
add_band_data: optional
This function receives each band and should return a dictionary with
additional arguments that are passed to the band drawing
routine. It also receives the plot as the second argument.
See the docs of `sisl.viz.backends.templates.Backend.draw_line` to
understand what are the supported arguments to be
returned. Notice that the arguments that the backend is able to
process can be very framework dependant.
Erange: array-like of shape (2,), optional
Energy range where the bands are displayed.
E0: float, optional
The energy to which all energies will be referenced (including
Erange).
bands_range: array-like of shape (2,), optional
The bands that should be displayed. Only relevant if Erange is None.
spin: optional
Determines how the different spin configurations should be displayed.
In spin polarized calculations, it allows you to choose between spin
0 and 1. In non-colinear spin calculations, it allows you
to ask for a given spin texture, by specifying the
direction.
spin_texture_colorscale: str, optional
The plotly colorscale to use for the spin texture (if displayed)
gap: bool, optional
Whether the gap should be displayed in the plot
direct_gaps_only: bool, optional
Whether to show only gaps that are direct, according to the gap
tolerance
gap_tol: float, optional
The difference in k that must exist to consider to gaps
different. If two gaps' positions differ in less than
this, only one gap will be drawn. Useful in cases
where there are degenerated bands with exactly the same values.
gap_color: str, optional
Color to display the gap
custom_gaps: array-like of dict, optional
List of all the gaps that you want to display. Each item is a dict.
Structure of the dict: { 'from': K value where to start
measuring the gap. It can be either the label of
the k-point or the numeric value in the plot. 'to': K value
where to end measuring the gap. It can be either
the label of the k-point or the numeric value in the plot.
'color': The color with which the gap should be displayed
'spin': The spin components where the gap should be calculated. }
bands_width: float, optional
Width of the lines that represent the bands
bands_color: str, optional
Choose the color to display the bands. This will be used for the
spin up bands if the calculation is spin polarized
spindown_color: str, optional
Choose the color for the spin down bands.Only used if the
calculation is spin polarized.
root_fdf: fdfSileSiesta, optional
Path to the fdf file that is the 'parent' of the results.
results_path: str, optional
Directory where the files with the simulations results are
located. This path has to be relative to the root fdf.
entry_points_order: array-like, optional
Order with which entry points will be attempted.
backend: optional
Directory where the files with the simulations results are
located. This path has to be relative to the root fdf.
"""
_plot_type = 'Fatbands'
_update_methods = {
"read_data": [],
"set_data": ["_draw_gaps", "_get_groups_weights"],
"get_figure": []
}
_parameters = (
FloatInput(key='scale', name='Scale factor',
default=None,
help="""The factor by which the width of all fatbands should be multiplied.
Note that each group has an additional individual factor that you can also tweak."""
# Probably scale should not multiply but normalize everything relative to the energy range!
),
OrbitalQueries(
key="groups", name="Fatbands groups",
default=None,
help="""The different groups that are displayed in the fatbands""",
queryForm=[
TextInput(
key="name", name="Name",
default="Group",
params={
"placeholder": "Name of the line..."
},
),
'species', 'atoms', 'orbitals', 'spin',
BoolInput(
key="normalize", name="Normalize",
default=True,
params={
"offLabel": "No",
"onLabel": "Yes"
}
),
ColorInput(
key="color", name="Color",
default=None,
),
FloatInput(
key="scale", name="Scale factor",
default=1,
),
]
),
)
@property
def weights(self):
return self.bands_data["weight"]
@entry_point("wfsx file", 0)
def _read_from_wfsx(self, root_fdf, wfsx_file):
"""Generates fatbands from SIESTA output.
Uses the `.wfsx` file to retrieve the eigenstates. From them, it computes
all the needed quantities (eigenvalues, orbital contribution, ...).
"""
self._entry_point_with_extra_vars(super()._read_from_wfsx, need_H=True)
@entry_point("hamiltonian", 1)
def _read_from_H(self):
"""Calculates the fatbands from a sisl hamiltonian."""
self._entry_point_with_extra_vars(super()._read_from_H)
def _entry_point_with_extra_vars(self, entry_point, **kwargs):
# Define the function that will "catch" each eigenstate and
# build the weights array. See BandsPlot._read_from_H to understand where
# this will go exactly
def _weights_from_eigenstate(eigenstate, plot, spin_index):
weights = eigenstate.norm2(sum=False)
if not plot.spin.is_diagonal:
# If it is a non-colinear or spin orbit calculation, we have two weights for each
# orbital (one for each spin component of the state), so we just pair them together
# and sum their contributions to get the weight of the orbital.
weights = weights.reshape(len(weights), -1, 2).sum(2)
return weights.real
# We make bands plot read the bands, which will also populate the weights
# thanks to the above step
        bands_read = False
        err = None
try:
entry_point(extra_vars=[{"coords": ("band", "orb"), "name": "weight", "getter": _weights_from_eigenstate}], **kwargs)
bands_read = True
except Exception as e:
# Let's keep this error, we are going to at least set the group options so that the
# user knows what can they choose (specially important for the GUI)
err = e
self._set_group_options()
if not bands_read:
raise err
def _set_group_options(self):
# Try to find a geometry if there isn't already one
if not hasattr(self, "geometry"):
# From the hamiltonian
band_struct = self.get_setting("band_structure")
if band_struct is not None:
self.geometry = band_struct.parent.geometry
self.get_param('groups').update_options(self.geometry, self.spin)
def _set_data(self):
# We get the information that the Bandsplot wants to send to the drawer
from_bandsplot = super()._set_data()
# And add some extra information related to the weights.
return {
**from_bandsplot,
**self._get_groups_weights()
}
def _get_groups_weights(self, groups, E0, bands_range, scale):
"""Returns a dictionary with information about all the weights that have been requested
The return of this function is expected to be passed to the drawers.
"""
# We get the bands range that is going to be plotted
# Remember that the BandsPlot will have updated this setting accordingly,
# so it's safe to use it directly
min_band, max_band = bands_range
# Get the weights that matter
plot_weights = self.weights.sel(band=slice(min_band, max_band))
if groups is None:
groups = ()
if scale is None:
# Probably we can calculate a more suitable scale
scale = 1
groups_weights = {}
groups_metadata = {}
# Here we get the values of the weights for each group of orbitals.
for i, group in enumerate(groups):
group = {**group}
# Use only the active requests
if not group.get("active", True):
continue
# Give a name to the request in case it didn't have one.
if group.get("name") is None:
group["name"] = f"Group {i}"
# Multiply the groups' scale by the global scale
group["scale"] = group.get("scale", 1) * scale
# Get the weight values for the request and store them to send to the drawer
self._get_group_weights(group, plot_weights, values_storage=groups_weights, metadata_storage=groups_metadata)
return {"groups_weights": groups_weights, "groups_metadata": groups_metadata}
def _get_group_weights(self, group, weights=None, values_storage=None, metadata_storage=None):
"""Extracts the weight values that correspond to a specific fatbands request.
Parameters
--------------
group: dict
the request to process.
weights: DataArray, optional
the part of the weights dataarray that falls in the energy range that we want to draw.
If not provided, the full weights data stored in `self.weights` is used.
values_storage: dict, optional
a dictionary where the weights values will be stored using the request's name as the key.
metadata_storage: dict, optional
a dictionary where metadata for the request will be stored using the request's name as the key.
Returns
----------
xarray.DataArray
The weights resulting from the request. They are indexed by spin, band and k value.
"""
if weights is None:
weights = self.weights
if "spin" not in weights.coords:
weights = weights.expand_dims("spin")
groups_param = self.get_param("groups")
group = groups_param.complete_query(group)
orb = groups_param.get_orbitals(group)
# Get the weights for the requested orbitals
weights = weights.sel(orb=orb)
# Now get a particular spin component if the user wants it
if group["spin"] is not None:
weights = weights.sel(spin=group["spin"])
if group["normalize"]:
weights = weights.mean("orb")
else:
weights = weights.sum("orb")
if group["color"] is None:
group["color"] = random_color()
group_name = group["name"]
values = weights.transpose("spin", "band", "k") * group["scale"]
if values_storage is not None:
if group_name in values_storage:
raise ValueError(f"There are multiple groups that are named '{group_name}'")
values_storage[group_name] = values
if metadata_storage is not None:
# Build the dictionary that contains metadata for this group.
metadata = {
"style": {
"line": {"color": group["color"]}
}
}
metadata_storage[group_name] = metadata
return values
# -------------------------------------
# Convenience methods
# -------------------------------------
def split_groups(self, on="species", only=None, exclude=None, clean=True, colors=(), **kwargs):
"""
Builds groups automatically to draw their contributions.
Works exactly the same as `PdosPlot.split_DOS`
Parameters
--------
on: str, {"species", "atoms", "Z", "orbitals", "n", "l", "m", "zeta", "spin"} or list of str
the parameter to split along.
Note that you can combine parameters with a "+" to split along multiple parameters
at the same time. You can get the same effect also by passing a list.
only: array-like, optional
if desired, the only values that should be plotted out of
all of the values that come from the splitting.
exclude: array-like, optional
values that should not be plotted
clean: boolean, optional
whether the plot should be cleaned before drawing.
If False, all the groups that come from the method will
be drawn on top of what is already there.
colors: array-like, optional
A list of colors to be used. There can be more colors than
needed, or less. If there are less colors than groups, the colors
will just be repeated.
**kwargs:
keyword arguments that go directly to each request.
This is useful to add extra filters. For example:
`plot.split_groups(on="orbitals", species=["C"])`
will split the PDOS on the different orbitals but will take
only those that belong to carbon atoms.
Examples
-----------
>>> plot = H.plot.fatbands()
>>>
>>> # Split the fatbands in n and l but show only the fatbands from Au
>>> # Also use "Au $ns" as a template for the name, where $n will
>>> # be replaced by the value of n.
>>> plot.split_groups(on="n+l", species=["Au"], name="Au $ns")
"""
groups = self.get_param('groups')._generate_queries(
on=on, only=only, exclude=exclude, **kwargs)
if len(colors) > 0:
# Repeat the colors in case there are more groups than colors
colors = np.tile(colors, len(groups) // len(colors) + 1)
        # Assign colors
for i, _ in enumerate(groups):
groups[i]['color'] = colors[i]
# If the user doesn't want to clean the plot, we will just add the groups to the existing ones
if not clean:
groups = [*self.get_setting("groups"), *groups]
return self.update_settings(groups=groups)
def scale_fatbands(self, factor, from_current=False):
"""Scales all bands by a given factor.
Basically, it updates 'scale' setting.
Parameters
-----------
factor: float
the factor that should be used to scale.
from_current: boolean, optional
whether 'factor' is meant to multiply the current scaling factor.
If False, it will just replace the current factor.
"""
if from_current:
scale = self.get_setting('scale') * factor
else:
scale = factor
return self.update_settings(scale=scale)
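# A minimal, illustrative sketch (not part of sisl itself) of the typical
# workflow with this class, following the split_groups docstring above.
# `H` is assumed to be a sisl.Hamiltonian with an attached geometry.
def _demo_fatbands(H):
    plot = H.plot.fatbands(Erange=[-5, 5])
    plot.split_groups(on="n+l")               # one group per (n, l) shell
    plot.scale_fatbands(2, from_current=True)  # double the current widths
    return plot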
|
zerothi/sisl
|
sisl/viz/plots/fatbands.py
|
Python
|
mpl-2.0
| 18,164
|
[
"SIESTA"
] |
f89228e07a05b28654bfb73861cff09635060e54b387ab8f680f48aab3a6ebd7
|
#!/usr/bin/env python3
import logging
import os
import pathlib
import numpy as np
import pytest
from pysisyphus.calculators.ORCA import ORCA
from pysisyphus.constants import ANG2BOHR
from pysisyphus.Geometry import Geometry
from pysisyphus.irc.GonzalesSchlegel import GonzalesSchlegel
from pysisyphus.irc.DampedVelocityVerlet import DampedVelocityVerlet
from pysisyphus.irc.Euler import Euler
from pysisyphus.irc.ParamPlot import ParamPlot
from qchelper.geometry import parse_xyz_file
THIS_DIR = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
def prepare_geometry():
keywords = "HF 4-22GSP TightSCF"
xyz_fn = "01_irc_sn2_fluour_transfer_optts.xyz"
#blocks = "%pal nprocs 3 end"
blocks = ""
atoms, coords = parse_xyz_file(THIS_DIR / xyz_fn)
coords *= ANG2BOHR
geometry = Geometry(atoms, coords.flatten())
geometry.set_calculator(ORCA(keywords, charge=-1, mult=1, blocks=blocks))
hessian = geometry.hessian
return geometry, THIS_DIR
def param_plot(irc, prefix):
p1inds = (4, 0)
p2inds = (5, 0)
    plot = ParamPlot(irc.all_coords, p1inds, p2inds)
    plot.plot()
    plot.save(THIS_DIR, prefix)
    plot.save_coords(THIS_DIR, prefix)
@pytest.mark.orca_irc
def test_irc_sn2_ftransfer_gs():
geometry, THIS_DIR = prepare_geometry()
prefix = "sn2_ftransfer_422gsp_gs"
irc = GonzalesSchlegel(geometry, max_steps=5, step_length=0.1)
irc.run()
irc.write_trj(THIS_DIR, prefix)
param_plot(irc, prefix)
@pytest.mark.orca_irc
def test_irc_sn2_ftransfer_gs03():
# Fails
geometry, THIS_DIR = prepare_geometry()
prefix = "sn2_ftransfer_422gsp_gs03"
irc = GonzalesSchlegel(geometry, max_steps=15, step_length=0.3)
irc.run()
irc.write_trj(THIS_DIR, prefix)
param_plot(irc, prefix)
@pytest.mark.orca_irc
def test_irc_sn2_ftransfer_dvv():
geometry, THIS_DIR = prepare_geometry()
prefix = "sn2_ftransfer_422gsp_dvv"
irc = DampedVelocityVerlet(geometry, max_steps=50)
irc.run()
irc.write_trj(THIS_DIR, prefix)
param_plot(irc, prefix)
@pytest.mark.orca_irc
def test_irc_sn2_ftransfer_euler():
geometry, THIS_DIR = prepare_geometry()
prefix = "sn2_ftransfer_422gsp_euler"
irc = Euler(geometry, max_steps=50)
irc.run()
irc.write_trj(THIS_DIR, prefix)
param_plot(irc, prefix)
if __name__ == "__main__":
test_irc_sn2_ftransfer_gs()
#test_irc_sn2_ftransfer_gs03()
#test_irc_sn2_ftransfer_dvv()
#test_irc_sn2_ftransfer_euler()
|
eljost/pysisyphus
|
tests_staging/irc_sn2_ftransfer/test_sn2_ftransfer_irc.py
|
Python
|
gpl-3.0
| 2,520
|
[
"ORCA"
] |
4abdc25acb573907fd682496e08debacb37161bf890057e216926abd348bf362
|
"""
@name: Modules/Housing/Security/_test/test_security.py
@author: D. Brian Kimmel
@contact: D.BrianKimmel@gmail.com
@copyright: (c) 2016-2019 by D. Brian Kimmel
@license: MIT License
@note: Created on Nov 1, 2016
@summary: Test
Passed all 13 tests - DBK - 2018-02-13
"""
__updated__ = '2020-02-02'
# Import system type stuff
from twisted.trial import unittest
# Import PyMh files and modules.
from _test.testing_mixin import SetupPyHouseObj
from Modules.House.Security.security import Api as securityApi
from Modules.Core.Utilities.debug_tools import PrettyFormatAny
class SetupMixin(object):
def setUp(self):
self.m_pyhouse_obj = SetupPyHouseObj().BuildPyHouseObj()
class A0(unittest.TestCase):
def setUp(self):
pass
def test_00_Print(self):
_x = PrettyFormatAny.form('_test', 'title', 190) # so it is defined when printing is cleaned up.
print('Id: test_security')
class A1_Setup(SetupMixin, unittest.TestCase):
"""
This section tests the above setup for things we will need further down in the tests.
"""
def setUp(self):
SetupMixin.setUp(self)
self.m_api = securityApi(self.m_pyhouse_obj)
def test_01_Pyhouse(self):
"""
"""
print(PrettyFormatAny.form(self.m_pyhouse_obj, 'A1-01-A - PyHouse'))
self.assertIsNotNone(self.m_pyhouse_obj)
def test_02_House(self):
"""
"""
print(PrettyFormatAny.form(self.m_pyhouse_obj.House, 'A1-02-A - House'))
self.assertIsNotNone(self.m_pyhouse_obj.House)
def test_03_Security(self):
"""
"""
print(PrettyFormatAny.form(self.m_pyhouse_obj.House.Security, 'A1-03-A - Security'))
self.assertIsNotNone(self.m_pyhouse_obj.House.Security)
class B1(SetupMixin, unittest.TestCase):
""" This section tests the reading and writing of XML used by lighting_controllers.
"""
def setUp(self):
SetupMixin.setUp(self)
def test_01_FindXml(self):
""" Be sure that the XML contains the right stuff.
"""
# print(PrettyFormatAny.form(self.m_xml, 'A1-01-A - Tags'))
self.assertEqual(self.m_xml.root.tag, TESTING_PYHOUSE)
self.assertEqual(self.m_xml.security_sect.tag, 'SecuritySection')
self.assertEqual(self.m_xml.garagedoor_sect.tag, 'GarageDoorSection')
self.assertEqual(self.m_xml.garagedoor.tag, 'GarageDoor')
self.assertEqual(self.m_xml.motiondetector_sect.tag, 'MotionDetectorSection')
self.assertEqual(self.m_xml.motiondetector.tag, 'Motion')
def test_02_Xml(self):
# print(PrettyFormatAny.form(self.m_pyhouse_obj._Config.XmlRoot))
pass
def test_03_Family(self):
self.assertEqual(self.m_family['Insteon'].Name, 'Insteon')
# ## END DBK
|
DBrianKimmel/PyHouse
|
Project/src/Modules/House/Security/_test/test_security.py
|
Python
|
mit
| 2,816
|
[
"Brian"
] |
5c4ac6f3a16f519d272842a2b2eca1468152b68c4ad0cf109fc87d27eabebad5
|
# Copyright (C) 2009-2014 CEA/DEN, EDF R&D
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
#
# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
#
#
# example taken from test_hexa1.cxx
import os
import GEOM
import SALOMEDS
import hexablock
print "test pipe..."
doc = hexablock.addDocument()
orig = doc.addVertex(50, 0, 0)
vz = doc.addVector(0, 0, 1)
int_radius = 3
ext_radius = 4
height = 5
pi = doc.addPipe(orig, vz, int_radius, ext_radius, height)
vb = doc.addVector(1, 0, 0)
nr = 4
na = 9
nl = 5
elts = doc.makePipe(pi, vb, nr, na, nl)
file_name = os.path.join(os.environ['TMP'], 'pipe.vtk')
#### elts.saveVtk(file_name)
print "...test pipe OK"
|
FedoraScientific/salome-hexablock
|
doc/pyplots/test_make_pipe.py
|
Python
|
lgpl-2.1
| 1,366
|
[
"VTK"
] |
20126f03fb9a75863374effb2638b370f8409b954257135634826b055600ac65
|
#########################################################################################
# SLURM.py
# 10.11.2014
# Author: A.T.
#########################################################################################
""" SLURM.py is a DIRAC independent class representing SLURM batch system.
SLURM objects are used as backend batch system representation for
LocalComputingElement and SSHComputingElement classes
"""
from __future__ import print_function
from __future__ import absolute_import
from __future__ import division
# TODO: This should be modernised to use subprocess(32)
try:
import commands
except ImportError:
# Python 3's subprocess module contains a compatibility layer
import subprocess as commands
import os
import re
__RCSID__ = "$Id$"
class SLURM(object):
def submitJob(self, **kwargs):
""" Submit nJobs to the OAR batch system
"""
resultDict = {}
MANDATORY_PARAMETERS = ['Executable', 'OutputDir', 'ErrorDir',
'Queue', 'SubmitOptions']
for argument in MANDATORY_PARAMETERS:
if argument not in kwargs:
resultDict['Status'] = -1
resultDict['Message'] = 'No %s' % argument
return resultDict
nJobs = kwargs.get('NJobs')
if not nJobs:
nJobs = 1
outputDir = kwargs['OutputDir']
errorDir = kwargs['ErrorDir']
queue = kwargs['Queue']
submitOptions = kwargs['SubmitOptions']
executable = kwargs['Executable']
numberOfProcessors = kwargs['NumberOfProcessors']
preamble = kwargs.get('Preamble')
outFile = os.path.join(outputDir, "%jobid%")
errFile = os.path.join(errorDir, "%jobid%")
outFile = os.path.expandvars(outFile)
errFile = os.path.expandvars(errFile)
executable = os.path.expandvars(executable)
jobIDs = []
for _i in range(nJobs):
jid = ''
cmd = '%s; ' % preamble if preamble else ''
# By default, all the environment variables of the submitter node are propagated to the workers
# It can create conflicts during the installation of the pilots
# --export restricts the propagation to the PATH variable to get a clean environment in the workers
cmd += "sbatch --export=PATH -o %s/%%j.out --partition=%s -n %s %s %s " % (
outputDir, queue, numberOfProcessors, submitOptions, executable)
status, output = commands.getstatusoutput(cmd)
if status != 0 or not output:
break
lines = output.split('\n')
for line in lines:
result = re.search(r'Submitted batch job (\d*)', line)
if result:
jid = result.groups()[0]
break
if not jid:
break
jid = jid.strip()
jobIDs.append(jid)
if jobIDs:
resultDict['Status'] = 0
resultDict['Jobs'] = jobIDs
else:
resultDict['Status'] = status
resultDict['Message'] = output
return resultDict
def killJob(self, **kwargs):
""" Delete a job from OAR batch scheduler. Input: list of jobs output: int
"""
resultDict = {}
MANDATORY_PARAMETERS = ['JobIDList', 'Queue']
for argument in MANDATORY_PARAMETERS:
if argument not in kwargs:
resultDict['Status'] = -1
resultDict['Message'] = 'No %s' % argument
return resultDict
jobIDList = kwargs['JobIDList']
if not jobIDList:
resultDict['Status'] = -1
resultDict['Message'] = 'Empty job list'
return resultDict
queue = kwargs['Queue']
successful = []
failed = []
for job in jobIDList:
cmd = 'scancel --partition=%s %s' % (queue, job)
status, output = commands.getstatusoutput(cmd)
if status != 0:
failed.append(job)
else:
successful.append(job)
resultDict['Status'] = 0
if failed:
resultDict['Status'] = 1
resultDict['Message'] = output
resultDict['Successful'] = successful
resultDict['Failed'] = failed
return resultDict
def getJobStatus(self, **kwargs):
""" Get status of the jobs in the given list
"""
resultDict = {}
if 'JobIDList' not in kwargs or not kwargs['JobIDList']:
resultDict['Status'] = -1
resultDict['Message'] = 'Empty job list'
return resultDict
jobIDList = kwargs['JobIDList']
jobIDs = ""
for jobID in jobIDList:
jobIDs += jobID + ","
# displays accounting data for all jobs in the Slurm job accounting log or Slurm database
cmd = "sacct -j %s -o JobID,STATE" % jobIDs
status, output = commands.getstatusoutput(cmd)
if status != 0:
resultDict['Status'] = 1
resultDict['Message'] = output
return resultDict
statusDict = {}
lines = output.split('\n')
jids = set()
for line in lines[1:]:
jid, status = line.split()
jids.add(jid)
if jid in jobIDList:
if status in ['PENDING', 'SUSPENDED', 'CONFIGURING']:
statusDict[jid] = 'Waiting'
elif status in ['RUNNING', 'COMPLETING']:
statusDict[jid] = 'Running'
elif status in ['CANCELLED', 'PREEMPTED']:
statusDict[jid] = 'Aborted'
elif status in ['COMPLETED']:
statusDict[jid] = 'Done'
elif status in ['FAILED', 'TIMEOUT', 'NODE_FAIL']:
statusDict[jid] = 'Failed'
else:
statusDict[jid] = 'Unknown'
leftJobs = set(jobIDList) - jids
for jid in leftJobs:
statusDict[jid] = 'Unknown'
# Final output
resultDict['Status'] = 0
resultDict['Jobs'] = statusDict
return resultDict
def getCEStatus(self, **kwargs):
""" Get the overall status of the CE
"""
resultDict = {}
MANDATORY_PARAMETERS = ['Queue']
for argument in MANDATORY_PARAMETERS:
if argument not in kwargs:
resultDict['Status'] = -1
resultDict['Message'] = 'No %s' % argument
return resultDict
user = kwargs.get('User')
if not user:
user = os.environ.get('USER')
if not user:
resultDict['Status'] = -1
resultDict['Message'] = 'No user name'
return resultDict
queue = kwargs['Queue']
cmd = "squeue --partition=%s --user=%s --format='%%j %%T' " % (queue, user)
status, output = commands.getstatusoutput(cmd)
    if status != 0:
      resultDict['Status'] = 1
      resultDict['Message'] = output
      return resultDict
waitingJobs = 0
runningJobs = 0
lines = output.split('\n')
for line in lines[1:]:
_jid, status = line.split()
if status in ['PENDING', 'SUSPENDED', 'CONFIGURING']:
waitingJobs += 1
elif status in ['RUNNING', 'COMPLETING']:
runningJobs += 1
# Final output
resultDict['Status'] = 0
resultDict["Waiting"] = waitingJobs
resultDict["Running"] = runningJobs
return resultDict
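# A minimal usage sketch (illustrative only; the executable path and queue
# name are placeholders) showing how a computing element drives this class:
# submit, poll, then cancel whatever is still waiting.
def _demo_slurm():
  batch = SLURM()
  result = batch.submitJob(Executable='/path/to/job.sh',
                           OutputDir='/tmp', ErrorDir='/tmp',
                           Queue='compute', SubmitOptions='',
                           NumberOfProcessors=1, NJobs=2)
  if result['Status'] != 0:
    return result
  status = batch.getJobStatus(JobIDList=result['Jobs'])
  waiting = [jid for jid, state in status.get('Jobs', {}).items()
             if state == 'Waiting']
  if waiting:
    return batch.killJob(JobIDList=waiting, Queue='compute')
  return status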
|
yujikato/DIRAC
|
src/DIRAC/Resources/Computing/BatchSystems/SLURM.py
|
Python
|
gpl-3.0
| 6,735
|
[
"DIRAC"
] |
6a532158a3d0be1a8ea218dbb2e791ac990e3061f2274a12df9e9c646bf476cb
|
# Copyright 2018 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
The core code for applying Canonical Correlation Analysis to deep networks.
This module contains the core functions to apply canonical correlation analysis
to deep neural networks. The main function is get_cca_similarity, which takes in
two sets of activations, typically the neurons in two layers and their outputs
on all of the datapoints D = [d_1,...,d_m] that have been passed through.
Inputs have shape (num_neurons1, m), (num_neurons2, m). This can be directly
applied used on fully connected networks. For convolutional layers, the 3d block
of neurons can either be flattened entirely, along channels, or alternatively,
the dft_ccas (Discrete Fourier Transform) module can be used.
See:
https://arxiv.org/abs/1706.05806
https://arxiv.org/abs/1806.05759
for full details.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
num_cca_trials = 5
def positivedef_matrix_sqrt(array):
"""Stable method for computing matrix square roots, supports complex matrices.
Args:
array: A numpy 2d array, can be complex valued that is a positive
definite symmetric (or hermitian) matrix
Returns:
sqrtarray: The matrix square root of array
"""
w, v = np.linalg.eigh(array)
    # Sanity check: array should equal np.dot(v, np.dot(np.diag(w), np.conj(v).T))
wsqrt = np.sqrt(w)
sqrtarray = np.dot(v, np.dot(np.diag(wsqrt), np.conj(v).T))
return sqrtarray
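# A quick numerical check (illustrative only, not part of the original
# module): verify that the returned matrix square root reconstructs a random
# positive definite input.
def _check_matrix_sqrt(dim=5, seed=0):
    rng = np.random.RandomState(seed)
    m = rng.randn(dim, dim)
    array = np.dot(m, m.T) + dim * np.eye(dim)  # positive definite by construction
    root = positivedef_matrix_sqrt(array)
    assert np.allclose(np.dot(root, root), array)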
def remove_small(sigma_xx, sigma_xy, sigma_yx, sigma_yy, epsilon):
"""Takes covariance between X, Y, and removes values of small magnitude.
Args:
sigma_xx: 2d numpy array, variance matrix for x
sigma_xy: 2d numpy array, crossvariance matrix for x,y
    sigma_yx: 2d numpy array, crossvariance matrix for x,y,
(conjugate) transpose of sigma_xy
sigma_yy: 2d numpy array, variance matrix for y
epsilon : cutoff value for norm below which directions are thrown
away
Returns:
sigma_xx_crop: 2d array with low x norm directions removed
sigma_xy_crop: 2d array with low x and y norm directions removed
    sigma_yx_crop: 2d array with low x and y norm directions removed
sigma_yy_crop: 2d array with low y norm directions removed
x_idxs: indexes of sigma_xx that were removed
y_idxs: indexes of sigma_yy that were removed
"""
x_diag = np.abs(np.diagonal(sigma_xx))
y_diag = np.abs(np.diagonal(sigma_yy))
x_idxs = (x_diag >= epsilon)
y_idxs = (y_diag >= epsilon)
sigma_xx_crop = sigma_xx[x_idxs][:, x_idxs]
sigma_xy_crop = sigma_xy[x_idxs][:, y_idxs]
sigma_yx_crop = sigma_yx[y_idxs][:, x_idxs]
sigma_yy_crop = sigma_yy[y_idxs][:, y_idxs]
return (sigma_xx_crop, sigma_xy_crop, sigma_yx_crop, sigma_yy_crop,
x_idxs, y_idxs)
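# Illustrative note (not in the original file): x_idxs/y_idxs are boolean
# masks over the diagonals. For example, if np.abs(np.diagonal(sigma_xx)) is
# [1.0, 1e-9, 2.0] and epsilon is 1e-6, then x_idxs is [True, False, True]
# and the middle row and column are dropped from all four matrices.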
def compute_ccas(sigma_xx, sigma_xy, sigma_yx, sigma_yy, epsilon,
verbose=True):
"""Main cca computation function, takes in variances and crossvariances.
This function takes in the covariances and cross covariances of X, Y,
preprocesses them (removing small magnitudes) and outputs the raw results of
the cca computation, including cca directions in a rotated space, and the
cca correlation coefficient values.
Args:
sigma_xx: 2d numpy array, (num_neurons_x, num_neurons_x)
variance matrix for x
sigma_xy: 2d numpy array, (num_neurons_x, num_neurons_y)
crossvariance matrix for x,y
sigma_yx: 2d numpy array, (num_neurons_y, num_neurons_x)
crossvariance matrix for x,y (conj) transpose of sigma_xy
sigma_yy: 2d numpy array, (num_neurons_y, num_neurons_y)
variance matrix for y
epsilon: small float to help with stabilizing computations
verbose: boolean on whether to print intermediate outputs
Returns:
[ux, sx, vx]: [numpy 2d array, numpy 1d array, numpy 2d array]
ux and vx are (conj) transposes of each other, being
the canonical directions in the X subspace.
sx is the set of canonical correlation coefficients-
how well corresponding directions in vx, Vy correlate
with each other.
[uy, sy, vy]: Same as above, but for Y space
invsqrt_xx: Inverse square root of sigma_xx to transform canonical
directions back to original space
invsqrt_yy: Same as above but for sigma_yy
x_idxs: The indexes of the input sigma_xx that were pruned
by remove_small
y_idxs: Same as above but for sigma_yy
"""
(sigma_xx, sigma_xy, sigma_yx, sigma_yy,
x_idxs, y_idxs) = remove_small(sigma_xx, sigma_xy, sigma_yx, sigma_yy, epsilon)
numx = sigma_xx.shape[0]
numy = sigma_yy.shape[0]
if numx == 0 or numy == 0:
return ([0, 0, 0], [0, 0, 0], np.zeros_like(sigma_xx),
np.zeros_like(sigma_yy), x_idxs, y_idxs)
if verbose:
print("adding eps to diagonal and taking inverse")
sigma_xx += epsilon * np.eye(numx)
sigma_yy += epsilon * np.eye(numy)
inv_xx = np.linalg.pinv(sigma_xx)
inv_yy = np.linalg.pinv(sigma_yy)
if verbose:
print("taking square root")
invsqrt_xx = positivedef_matrix_sqrt(inv_xx)
invsqrt_yy = positivedef_matrix_sqrt(inv_yy)
if verbose:
print("dot products...")
arr = np.dot(invsqrt_xx, np.dot(sigma_xy, invsqrt_yy))
if verbose:
print("trying to take final svd")
u, s, v = np.linalg.svd(arr)
if verbose:
print("computed everything!")
return [u, np.abs(s), v], invsqrt_xx, invsqrt_yy, x_idxs, y_idxs
def sum_threshold(array, threshold):
"""Computes threshold index of decreasing nonnegative array by summing.
  This function takes in a decreasing array of nonnegative floats, and a
threshold between 0 and 1. It returns the index i at which the sum of the
array up to i is threshold*total mass of the array.
Args:
array: a 1d numpy array of decreasing, nonnegative floats
threshold: a number between 0 and 1
Returns:
i: index at which np.sum(array[:i]) >= threshold
"""
  assert (threshold >= 0) and (threshold <= 1), "incorrect threshold"
  total = np.sum(array)
  for i in range(len(array)):
    if np.sum(array[:i])/total >= threshold:
      return i
  return len(array)
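# Illustrative example (not in the original file): for the decreasing array
# [0.5, 0.3, 0.2], the prefix sums relative to the total mass are 0.0, 0.5,
# 0.8, so sum_threshold(np.array([0.5, 0.3, 0.2]), 0.7) returns 2.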
def create_zero_dict(compute_dirns, dimension):
"""Outputs a zero dict when neuron activation norms too small.
This function creates a return_dict with appropriately shaped zero entries
when all neuron activations are very small.
Args:
compute_dirns: boolean, whether to have zero vectors for directions
dimension: int, defines shape of directions
Returns:
return_dict: a dict of appropriately shaped zero entries
"""
return_dict = {}
return_dict["mean"] = (np.asarray(0), np.asarray(0))
return_dict["sum"] = (np.asarray(0), np.asarray(0))
return_dict["cca_coef1"] = np.asarray(0)
return_dict["cca_coef2"] = np.asarray(0)
return_dict["idx1"] = 0
return_dict["idx2"] = 0
if compute_dirns:
return_dict["cca_dirns1"] = np.zeros((1, dimension))
return_dict["cca_dirns2"] = np.zeros((1, dimension))
return return_dict
def get_cca_similarity(acts1, acts2, epsilon=0., threshold=0.98,
compute_coefs=True,
compute_dirns=False,
verbose=True):
"""The main function for computing cca similarities.
This function computes the cca similarity between two sets of activations,
returning a dict with the cca coefficients, a few statistics of the cca
coefficients, and (optionally) the actual directions.
Args:
acts1: (num_neurons1, data_points) a 2d numpy array of neurons by
datapoints where entry (i,j) is the output of neuron i on
datapoint j.
acts2: (num_neurons2, data_points) same as above, but (potentially)
for a different set of neurons. Note that acts1 and acts2
can have different numbers of neurons, but must agree on the
number of datapoints
epsilon: small float to help stabilize computations
threshold: float between 0, 1 used to get rid of trailing zeros in
the cca correlation coefficients to output more accurate
summary statistics of correlations.
compute_coefs: boolean value determining whether coefficients
over neurons are computed. Needed for computing
directions
compute_dirns: boolean value determining whether actual cca
directions are computed. (For very large neurons and
datasets, may be better to compute these on the fly
instead of store in memory.)
verbose: Boolean, whether intermediate outputs are printed
Returns:
return_dict: A dictionary with outputs from the cca computations.
Contains neuron coefficients (combinations of neurons
that correspond to cca directions), the cca correlation
coefficients (how well aligned directions correlate),
x and y idxs (for computing cca directions on the fly
if compute_dirns=False), and summary statistics. If
compute_dirns=True, the cca directions are also
computed.
"""
# assert dimensionality equal
assert acts1.shape[1] == acts2.shape[1], "dimensions don't match"
  # check that acts1, acts2 are oriented as (neurons, datapoints), not transposed
  assert acts1.shape[0] < acts1.shape[1], ("input must be number of neurons "
                                           "by datapoints")
return_dict = {}
# compute covariance with numpy function for extra stability
numx = acts1.shape[0]
numy = acts2.shape[0]
covariance = np.cov(acts1, acts2)
sigmaxx = covariance[:numx, :numx]
sigmaxy = covariance[:numx, numx:]
sigmayx = covariance[numx:, :numx]
sigmayy = covariance[numx:, numx:]
# rescale covariance to make cca computation more stable
xmax = np.max(np.abs(sigmaxx))
ymax = np.max(np.abs(sigmayy))
sigmaxx /= xmax
sigmayy /= ymax
sigmaxy /= np.sqrt(xmax * ymax)
sigmayx /= np.sqrt(xmax * ymax)
([u, s, v], invsqrt_xx, invsqrt_yy,
x_idxs, y_idxs) = compute_ccas(sigmaxx, sigmaxy, sigmayx, sigmayy,
epsilon=epsilon,
verbose=verbose)
# if x_idxs or y_idxs is all false, return_dict has zero entries
if (not np.any(x_idxs)) or (not np.any(y_idxs)):
return create_zero_dict(compute_dirns, acts1.shape[1])
if compute_coefs:
# also compute full coefficients over all neurons
x_mask = np.dot(x_idxs.reshape((-1, 1)), x_idxs.reshape((1, -1)))
y_mask = np.dot(y_idxs.reshape((-1, 1)), y_idxs.reshape((1, -1)))
return_dict["coef_x"] = u.T
return_dict["invsqrt_xx"] = invsqrt_xx
return_dict["full_coef_x"] = np.zeros((numx, numx))
np.place(return_dict["full_coef_x"], x_mask,
return_dict["coef_x"])
return_dict["full_invsqrt_xx"] = np.zeros((numx, numx))
np.place(return_dict["full_invsqrt_xx"], x_mask,
return_dict["invsqrt_xx"])
return_dict["coef_y"] = v
return_dict["invsqrt_yy"] = invsqrt_yy
return_dict["full_coef_y"] = np.zeros((numy, numy))
np.place(return_dict["full_coef_y"], y_mask,
return_dict["coef_y"])
return_dict["full_invsqrt_yy"] = np.zeros((numy, numy))
np.place(return_dict["full_invsqrt_yy"], y_mask,
return_dict["invsqrt_yy"])
# compute means
neuron_means1 = np.mean(acts1, axis=1, keepdims=True)
neuron_means2 = np.mean(acts2, axis=1, keepdims=True)
return_dict["neuron_means1"] = neuron_means1
return_dict["neuron_means2"] = neuron_means2
if compute_dirns:
# orthonormal directions that are CCA directions
cca_dirns1 = np.dot(np.dot(return_dict["full_coef_x"],
return_dict["full_invsqrt_xx"]),
(acts1 - neuron_means1)) + neuron_means1
cca_dirns2 = np.dot(np.dot(return_dict["full_coef_y"],
return_dict["full_invsqrt_yy"]),
(acts2 - neuron_means2)) + neuron_means2
# get rid of trailing zeros in the cca coefficients
idx1 = sum_threshold(s, threshold)
idx2 = sum_threshold(s, threshold)
return_dict["cca_coef1"] = s
return_dict["cca_coef2"] = s
return_dict["x_idxs"] = x_idxs
return_dict["y_idxs"] = y_idxs
# summary statistics
return_dict["mean"] = (np.mean(s[:idx1]), np.mean(s[:idx2]))
return_dict["sum"] = (np.sum(s), np.sum(s))
if compute_dirns:
return_dict["cca_dirns1"] = cca_dirns1
return_dict["cca_dirns2"] = cca_dirns2
return return_dict
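# --- Usage sketch (illustrative; not part of the original file) ---
# Comparing two layers' activations over the same 1000 datapoints; inputs
# must be shaped (num_neurons, num_datapoints), as the assertions above
# require.
#
#   acts1 = np.random.randn(64, 1000)
#   acts2 = np.random.randn(32, 1000)
#   result = get_cca_similarity(acts1, acts2, epsilon=1e-10, verbose=False)
#   print(result["mean"])  # pair of mean CCA correlation coefficients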
def robust_cca_similarity(acts1, acts2, threshold=0.98, epsilon=1e-6,
compute_dirns=True):
"""Calls get_cca_similarity multiple times while adding noise.
This function is very similar to get_cca_similarity, and can be used if
get_cca_similarity doesn't converge for some pair of inputs. This function
adds some noise to the activations to help convergence.
Args:
acts1: (num_neurons1, data_points) a 2d numpy array of neurons by
datapoints where entry (i,j) is the output of neuron i on
datapoint j.
acts2: (num_neurons2, data_points) same as above, but (potentially)
for a different set of neurons. Note that acts1 and acts2
can have different numbers of neurons, but must agree on the
number of datapoints
threshold: float between 0, 1 used to get rid of trailing zeros in
the cca correlation coefficients to output more accurate
summary statistics of correlations.
epsilon: small float to help stabilize computations
compute_dirns: boolean value determining whether actual cca
directions are computed. (For very large neurons and
datasets, may be better to compute these on the fly
instead of store in memory.)
Returns:
return_dict: A dictionary with outputs from the cca computations.
Contains neuron coefficients (combinations of neurons
that correspond to cca directions), the cca correlation
coefficients (how well aligned directions correlate),
x and y idxs (for computing cca directions on the fly
if compute_dirns=False), and summary statistics. If
compute_dirns=True, the cca directions are also
computed.
"""
for trial in range(num_cca_trials):
    try:
      return_dict = get_cca_similarity(acts1, acts2, epsilon=epsilon,
                                       threshold=threshold,
                                       compute_dirns=compute_dirns)
      break
    except np.linalg.LinAlgError:
      acts1 = acts1*1e-1 + np.random.normal(size=acts1.shape)*epsilon
      acts2 = acts2*1e-1 + np.random.normal(size=acts2.shape)*epsilon
      if trial + 1 == num_cca_trials:
        raise
return return_dict
|
google/svcca
|
cca_core.py
|
Python
|
apache-2.0
| 16,148
|
[
"NEURON"
] |
0ea1f39a5a5ee44226e65e269113a4e2ed6d6a640d812cd5a5fa1eb7a0312c71
|
from ddapp import asynctaskqueue as atq
from ddapp import segmentation
from ddapp import visualization as vis
import ddapp.objectmodel as om
from ddapp import propertyset
from ddapp import pointpicker
from ddapp import planplayback
from ddapp.timercallback import TimerCallback
from ddapp.simpletimer import SimpleTimer
from ddapp import ikplanner
from ddapp import callbacks
from ddapp import robotsystem
from ddapp import transformUtils
from ddapp import affordanceitems
from ddapp import vtkNumpy as vnp
from ddapp.debugVis import DebugData
from ddapp.robotplanlistener import ManipulationPlanItem
from ddapp.footstepsdriver import FootstepPlanItem
from ddapp import vtkAll as vtk
import drc as lcmdrc
import numpy as np
import copy
import pickle
import PythonQt
from PythonQt import QtCore, QtGui
import re
import inspect
robotSystem = None
def _splitCamelCase(name):
name = re.sub('(.)([A-Z][a-z]+)', r'\1 \2', name)
return re.sub('([a-z0-9])([A-Z])', r'\1 \2', name)
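# Illustrative examples (not in the original file):
#   _splitCamelCase('PrintTask')            -> 'Print Task'
#   _splitCamelCase('WaitForAtlasBehavior') -> 'Wait For Atlas Behavior'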
class AsyncTask(object):
    '''
    Base class for asynchronous tasks. Subclasses implement run(), which may
    be a plain method or a generator that yields control back to the task
    queue, and declare their user-editable settings via getDefaultProperties().
    '''
def __init__(self, **kwargs):
self.statusMessage = ''
self.failReason = ''
self.properties = propertyset.PropertySet()
self.properties.addProperty('Name', _splitCamelCase(self.__class__.__name__).lower())
for cls in reversed(inspect.getmro(self.__class__)):
if hasattr(cls, 'getDefaultProperties'):
cls.getDefaultProperties(self.properties)
for name, value in kwargs.iteritems():
self.properties.setProperty(_splitCamelCase(name).capitalize(), value)
def __call__(self):
return self.run()
def stop(self):
pass
def run(self):
pass
def fail(self, reason):
self.failReason = reason
raise atq.AsyncTaskQueue.FailException(reason)
def copy(self):
return copy.deepcopy(self)
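# Illustrative subclass (not in the original file): run() may be a plain
# method or a generator that yields control back to the task queue, and
# user-editable settings are declared in getDefaultProperties().
#
#   class CountdownTask(AsyncTask):
#       @staticmethod
#       def getDefaultProperties(properties):
#           properties.addProperty('Count', 3)
#       def run(self):
#           for i in xrange(self.properties.getProperty('Count')):
#               self.statusMessage = 'counting %d' % i
#               yield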
class PrintTask(AsyncTask):
'''
Name: Print Task
Short Description: prints a string
Description:
This task prints a message string.
'''
printFunction = None
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Message', '<empty message>')
def run(self):
if self.printFunction:
self.printFunction(self.properties.message)
else:
print self.properties.message
class CallbackTask(AsyncTask):
def __init__(self, callback=None, **kwargs):
AsyncTask.__init__(self, **kwargs)
self.callback = callback
def run(self):
if self.callback:
self.callback()
class ExceptionTask(AsyncTask):
def run(self):
raise Exception('Task exception')
class UserPromptTask(AsyncTask):
promptsEnabled = True
promptFunction = None
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Message', 'continue?')
properties.addProperty('Always', False)
def showUserPrompt(self):
if self.promptFunction:
self.promptFunction(self, self.properties.message)
else:
self.showDialog()
def showDialog(self):
self.d = QtGui.QDialog()
buttons = QtGui.QDialogButtonBox()
buttons.addButton('Yes', QtGui.QDialogButtonBox.AcceptRole)
buttons.addButton('No', QtGui.QDialogButtonBox.RejectRole)
buttons.connect('accepted()', self.d.accept)
buttons.connect('rejected()', self.d.reject)
l = QtGui.QVBoxLayout(self.d)
l.addWidget(QtGui.QLabel(self.properties.message))
l.addWidget(buttons)
self.d.setAttribute(QtCore.Qt.WA_QuitOnClose, False)
self.d.show()
self.d.raise_()
self.d.connect('accepted()', self.accept)
self.d.connect('rejected()', self.reject)
def accept(self):
self.result = True
def reject(self):
self.result = False
def run(self):
if not self.promptsEnabled and not self.properties.getProperty('Always'):
return
self.result = None
self.showUserPrompt()
while self.result is None:
yield
if not self.result:
raise atq.AsyncTaskQueue.PauseException()
class CheckPlanInfo(UserPromptTask):
@staticmethod
def getDefaultProperties(properties):
UserPromptTask.getDefaultProperties(properties)
properties.setProperty('Message', 'Plan is invalid. Do you want to accept it anyway?')
def run(self):
if robotSystem.ikPlanner.lastManipPlan and max(robotSystem.ikPlanner.lastManipPlan.plan_info) <= 10:
return
else:
return UserPromptTask.run(self)
class DelayTask(AsyncTask):
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Delay time', 1.0, attributes=propertyset.PropertyAttributes(minimum=0.0, maximum=1e4, singleStep=0.1))
def run(self):
delayTime = self.properties.getProperty('Delay time')
t = SimpleTimer()
while True:
elapsed = t.elapsed()
if elapsed >= delayTime:
break
self.statusMessage = 'Waiting %.1f seconds' % (delayTime - elapsed)
yield
class PauseTask(AsyncTask):
def run(self):
raise atq.AsyncTaskQueue.PauseException()
class QuitTask(AsyncTask):
def run(self):
QtCore.QCoreApplication.instance().quit()
class WaitForMultisenseLidar(AsyncTask):
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Number of sweeps', 1, attributes=propertyset.PropertyAttributes(minimum=0, maximum=100))
def run(self):
self.multisenseDriver = robotSystem.multisenseDriver
currentRevolution = self.multisenseDriver.displayedRevolution
desiredRevolution = currentRevolution + self.properties.getProperty('Number of sweeps')
while self.multisenseDriver.displayedRevolution < desiredRevolution:
self.statusMessage = 'Waiting for multisense sweep'
yield
class SnapshotPointcloud(AsyncTask):
def run(self):
polyData = self.getPointCloud()
om.removeFromObjectModel(om.findObjectByName('pointcloud snapshot'))
vis.showPolyData(polyData, 'pointcloud snapshot', parent='segmentation', visible=False)
class SnapshotMultisensePointcloud(SnapshotPointcloud):
def getPointCloud(self):
return segmentation.getCurrentRevolutionData()
class SnapshotSelectedPointcloud(SnapshotPointcloud):
def getPointCloud(self):
obj = om.getActiveObject()
if obj and obj.getProperty('Name') == 'Multisense':
return SnapshotMultisensePointcloud().getPointCloud()
elif obj and obj.getProperty('Name') == 'stereo point cloud':
return SnapshotStereoPointcloud().getPointCloud()
elif obj and hasattr(obj, 'polyData'):
return obj.polyData
else:
self.fail('no pointcloud is selected')
class SnapshotStereoPointcloud(SnapshotPointcloud):
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Remove Outliers', False)
def getPointCloud(self):
return segmentation.getDisparityPointCloud(decimation=1, removeOutliers=self.getProperty('Remove Outliers'))
class PointCloudAlgorithmBase(AsyncTask):
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Use pointcloud snapshot', True)
def getPointCloud(self):
if self.properties.getProperty('Use pointcloud snapshot'):
obj = om.findObjectByName('pointcloud snapshot')
if obj is None:
self.fail('pointcloud snapshot not found')
if not obj.polyData.GetNumberOfPoints():
self.fail('input pointcloud is empty')
return obj.polyData
else:
return SnapshotSelectedPointcloud().getPointCloud()
class FitDrill(PointCloudAlgorithmBase):
def run(self):
polyData = self.getPointCloud()
segmentation.findAndFitDrillBarrel(polyData)
class FindDrillOnTableByAnnotation(PointCloudAlgorithmBase):
def getAnnotationInputPoint(self):
obj = om.findObjectByName('drill on table annotation')
if obj is None:
self.fail('user annotation not found')
return obj.annotationPoints[0]
def run(self):
point = self.getAnnotationInputPoint()
polyData = self.getPointCloud()
segmentation.segmentTableThenFindDrills(polyData, point)
class FindRotaryDrillByAnnotation(PointCloudAlgorithmBase):
def getAnnotationInputPoint(self):
obj = om.findObjectByName('rotary drill annotation')
if obj is None:
self.fail('user annotation not found')
return obj.annotationPoints[0]
def run(self):
point = self.getAnnotationInputPoint()
polyData = self.getPointCloud()
#segmentation.segmentDrillAuto(point, polyData)
om.removeFromObjectModel(om.findObjectByName('drill'))
segmentation.segmentDrillAlignedWithTable(point, polyData)
class WaitForAtlasBehavior(AsyncTask):
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Behavior name', '')
def run(self):
behaviorName = self.properties.getProperty('Behavior name')
assert behaviorName in robotSystem.atlasDriver.getBehaviorMap().values()
while robotSystem.atlasDriver.getCurrentBehaviorName() != behaviorName:
yield
class WaitForWalkExecutionBDI(AsyncTask):
def run(self):
self.statusMessage = 'Waiting for BDI walking to begin...'
while robotSystem.atlasDriver.getCurrentBehaviorName() != 'step':
yield
self.statusMessage = 'Waiting for BDI walk execution...'
while robotSystem.atlasDriver.getCurrentBehaviorName() != 'stand':
yield
class WaitForPlanExecution(AsyncTask):
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Timeout', 5.0, attributes=propertyset.PropertyAttributes(minimum=0.0, maximum=1e4, singleStep=0.1, decimals=2))
def promptUserForPlanRecommit(self):
prompt = UserPromptTask(message='Plan appears dropped. Recommit?')
return prompt.run()
def run(self):
def getMsg():
return robotSystem.atlasDriver.lastControllerStatusMessage
def isExecuting():
return getMsg().execution_status == lcmdrc.plan_status_t.EXECUTION_STATUS_EXECUTING
# wait for first status message
while not getMsg():
yield
if isExecuting():
raise Exception('error, invoked during plan execution and cannot guarantee safety.')
t = SimpleTimer()
lastPlanStartTime = getMsg().last_plan_start_utime
# wait for next plan to begin
self.statusMessage = 'Waiting for %s to begin...' % self.getTypeLabel()
while getMsg().last_plan_start_utime == lastPlanStartTime:
if t.elapsed() > self.properties.getProperty('Timeout'):
yield self.promptUserForPlanRecommit()
t.reset()
self.recommitPlan()
else:
yield
# wait for execution
self.statusMessage = 'Waiting for %s execution...' % self.getTypeLabel()
while getMsg().execution_status == lcmdrc.plan_status_t.EXECUTION_STATUS_EXECUTING:
if getMsg().plan_type != self.getType():
raise Exception('error, unexpected execution plan type: %s' % getMsg().plan_type)
yield
self.statusMessage = 'Waiting for recent robot state...'
while robotSystem.robotStateJointController.lastRobotStateMessage.utime < getMsg().last_plan_start_utime:
yield
class WaitForManipulationPlanExecution(WaitForPlanExecution):
def getType(self):
return lcmdrc.plan_status_t.MANIPULATING
def getTypeLabel(self):
return 'manipulation'
def recommitPlan(self):
lastPlan = robotSystem.manipPlanner.committedPlans.pop()
robotSystem.manipPlanner.commitManipPlan(lastPlan)
class WaitForWalkExecution(WaitForPlanExecution):
def getType(self):
return lcmdrc.plan_status_t.WALKING
def getTypeLabel(self):
return 'walking'
def recommitPlan(self):
lastPlan = robotSystem.footstepsDriver.committedPlans.pop()
robotSystem.footstepsDriver.commitFootstepPlan(lastPlan)
class UserSelectAffordanceCandidate(AsyncTask):
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Candidate name prefix', '')
properties.addProperty('New name', '')
properties.addProperty('Delete candidates', True)
def getCandidates(self):
namePrefix = self.properties.getProperty('Candidate name prefix')
matchStr = '^%s [0-9]+$' % namePrefix
return [obj for obj in om.getObjects() if re.match(matchStr, obj.getProperty('Name'))]
def selectCandidate(self, selectedObj, candidates):
if self.properties.getProperty('Delete candidates'):
for obj in candidates:
if obj != selectedObj:
om.removeFromObjectModel(obj)
newName = self.properties.getProperty('New name')
if newName:
oldName = selectedObj.getProperty('Name')
selectedObj.setProperty('Name', newName)
for child in selectedObj.children():
child.setProperty('Name', child.getProperty('Name').replace(oldName, newName))
def run(self):
candidates = self.getCandidates()
if not candidates:
self.fail('no affordance candidates found')
om.clearSelection()
self.statusMessage = 'Please select affordance candidate: %s' % self.properties.getProperty('Candidate name prefix')
while True:
obj = om.getActiveObject()
if obj and obj in candidates:
break
else:
yield
self.selectCandidate(obj, candidates)
class TransformFrame(AsyncTask):
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Frame input name', '')
properties.addProperty('Frame output name', '')
properties.addProperty('Translation', [0.0, 0.0, 0.0], attributes=propertyset.PropertyAttributes(decimals=3, minimum=-1e5, maximum=1e5))
properties.addProperty('Rotation', [0.0, 0.0, 0.0], attributes=propertyset.PropertyAttributes(decimals=3, minimum=-360, maximum=360))
def getInputFrame(self):
name = self.properties.getProperty('Frame input name')
frame = om.findObjectByName(name)
if not isinstance(frame, vis.FrameItem):
self.fail('frame not found: %s' % name)
return frame
def run(self):
inputFrame = self.getInputFrame()
translation = self.properties.getProperty('Translation')
rpy = self.properties.getProperty('Rotation')
offset = transformUtils.frameFromPositionAndRPY(translation, rpy)
offset.PostMultiply()
offset.Concatenate(transformUtils.copyFrame(inputFrame.transform))
outputFrame = vis.updateFrame(offset, self.properties.getProperty('Frame output name'), scale=0.2, parent=inputFrame.parent())
if not hasattr(inputFrame, 'frameSync'):
inputFrame.frameSync = vis.FrameSync()
inputFrame.frameSync.addFrame(inputFrame)
inputFrame.frameSync.addFrame(outputFrame, ignoreIncoming=True)
class ComputeRobotFootFrame(AsyncTask):
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Pose name', 'EST_ROBOT_STATE')
properties.addProperty('Frame output name', 'robot foot frame')
def run(self):
poseName = self.properties.getProperty('Pose name')
if poseName == 'EST_ROBOT_STATE':
pose = robotSystem.robotStateJointController.q.copy()
else:
pose = robotSystem.ikPlanner.jointController.getPose(poseName)
robotModel = robotSystem.ikPlanner.getRobotModelAtPose(pose)
footFrame = robotSystem.footstepsDriver.getFeetMidPoint(robotModel)
vis.updateFrame(footFrame, self.properties.getProperty('Frame output name'), scale=0.2)
class FindAffordance(AsyncTask):
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Affordance name', '')
def run(self):
affordanceName = self.properties.getProperty('Affordance name')
obj = om.findObjectByName(affordanceName)
if not obj:
self.fail('could not find affordance: %s' % affordanceName)
class ProjectAffordanceToGround(PointCloudAlgorithmBase):
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Affordance name', '')
properties.addProperty('Ground frame name', '')
properties.addProperty('Frame output name', '')
def getSelectedAffordance(self):
affordanceName = self.properties.getProperty('Affordance name')
if affordanceName:
obj = om.findObjectByName(affordanceName)
if not obj:
self.fail('could not find affordance: %s' % affordanceName)
else:
obj = om.getActiveObject()
if obj is None:
self.fail('no affordance is selected')
try:
frame = obj.getChildFrame()
except AttributeError:
frame = None
if frame is None:
self.fail('affordance does not have a frame')
return obj
def getGroundFrame(self):
frame = om.findObjectByName(self.properties.getProperty('Ground frame name'))
if not frame:
self.fail('could not find ground frame')
return frame
def run(self):
aff = self.getSelectedAffordance()
affFrame = aff.getChildFrame().transform
groundFrame = self.getGroundFrame().transform
projectedXYZ = np.hstack([affFrame.GetPosition()[0:2], groundFrame.GetPosition()[2]])
result = transformUtils.copyFrame(affFrame)
result.Translate(projectedXYZ - np.array(result.GetPosition()))
outputName = self.properties.getProperty('Frame output name')
outputName = outputName or '%s ground frame' % aff.getProperty('Name')
vis.updateFrame(result, outputName, scale=0.2)
class UserAnnotatePointCloud(PointCloudAlgorithmBase):
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Annotation name', 'user annotation')
properties.addProperty('Number of points', 2)
def clearPicker(self):
self.picker.stop()
self.picker.clear()
self.picker = None
def onAnnotationAborted(self):
om.removeFromObjectModel(self.picker.annotationObj)
self.aborted = True
self.clearPicker()
def onAnnotationComplete(self, *pts):
self.picker.annotationObj.annotationPoints = pts
self.clearPicker()
def startAnnotationPicker(self):
view = robotSystem.view
polyData = self.getPointCloud()
self.picker = pointpicker.PointPicker(view, numberOfPoints=self.properties.getProperty('Number of points'), drawLines=True, callback=self.onAnnotationComplete, abortCallback=self.onAnnotationAborted)
self.picker.annotationName = self.properties.getProperty('Annotation name')
self.picker.annotationFolder = 'annotations'
self.picker.pickType = 'points' if polyData.GetNumberOfCells() == polyData.GetNumberOfVerts() else 'render'
self.aborted = False
self.picker.start()
def run(self):
self.startAnnotationPicker()
self.statusMessage = 'Annotate point cloud (shift+click) to select points'
while self.picker is not None:
yield
if self.aborted:
self.fail('user abort')
class FindHorizontalSurfaces(PointCloudAlgorithmBase):
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Normal estimation search radius', 0.03, attributes=propertyset.PropertyAttributes(decimals=3, minimum=0.0, maximum=100))
properties.addProperty('Cluster tolerance', 0.02, attributes=propertyset.PropertyAttributes(decimals=3, minimum=0.0, maximum=10))
properties.addProperty('Distance to plane threshold', 0.01, attributes=propertyset.PropertyAttributes(decimals=4, minimum=0.0, maximum=1))
properties.addProperty('Normals dot up range', [0.9, 1.0], attributes=propertyset.PropertyAttributes(decimals=2, minimum=0.0, maximum=1))
def run(self):
polyData = self.getPointCloud()
segmentation.findHorizontalSurfaces(polyData,
removeGroundFirst=True,
showClusters=True,
normalEstimationSearchRadius=self.properties.getProperty('Normal estimation search radius'),
clusterTolerance=self.properties.getProperty('Cluster tolerance'),
distanceToPlaneThreshold=self.properties.getProperty('Distance to plane threshold'),
normalsDotUpRange=self.properties.getProperty('Normals dot up range')
)
class SetNeckPitch(AsyncTask):
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Angle', 0, attributes=om.PropertyAttributes(minimum=-35, maximum=90))
def run(self):
robotSystem.neckDriver.setNeckPitch(self.properties.getProperty('Angle'))
class CloseHand(AsyncTask):
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Side', 0, attributes=om.PropertyAttributes(enumNames=['Left', 'Right']))
properties.addProperty('Mode', 0, attributes=om.PropertyAttributes(enumNames=['Basic', 'Pinch']))
properties.addProperty('Amount', 100, attributes=propertyset.PropertyAttributes(minimum=0, maximum=100))
def getHandDriver(self, side):
assert side in ('left', 'right')
return robotSystem.lHandDriver if side == 'left' else robotSystem.rHandDriver
def run(self):
side = self.properties.getPropertyEnumValue('Side').lower()
self.getHandDriver(side).sendCustom(self.properties.getProperty('Amount'), 100, 100, self.properties.getProperty('Mode'))
class OpenHand(AsyncTask):
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Side', 0, attributes=om.PropertyAttributes(enumNames=['Left', 'Right']))
properties.addProperty('Mode', 0, attributes=om.PropertyAttributes(enumNames=['Basic', 'Pinch']))
properties.addProperty('Amount', 100, attributes=propertyset.PropertyAttributes(minimum=0, maximum=100))
def getHandDriver(self, side):
assert side in ('left', 'right')
return robotSystem.lHandDriver if side == 'left' else robotSystem.rHandDriver
def run(self):
side = self.properties.getPropertyEnumValue('Side').lower()
self.getHandDriver(side).sendOpen()
self.getHandDriver(side).sendCustom(100-self.properties.getProperty('Amount'), 100, 100, self.properties.getProperty('Mode'))
class CommitFootstepPlan(AsyncTask):
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Plan name', '')
def run(self):
#planName = self.properties.getProperty('Plan name')
#plan = om.findObjectByName(planName)
#if not isinstance(plan, FootstepPlanItem):
# self.fail('could not find footstep plan')
#plan = plan.plan
plan = robotSystem.footstepsDriver.lastFootstepPlan
robotSystem.footstepsDriver.commitFootstepPlan(plan)
class CommitManipulationPlan(AsyncTask):
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Plan name', '')
def run(self):
planName = self.properties.getProperty('Plan name')
plan = om.findObjectByName(planName)
if not isinstance(plan, ManipulationPlanItem):
self.fail('could not find manipulation plan')
plan = plan.plan
robotSystem.manipPlanner.commitManipPlan(plan)
class RequestWalkingPlan(AsyncTask):
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Start pose name', 'EST_ROBOT_STATE')
properties.addProperty('Footstep plan name', '')
def run(self):
poseName = self.properties.getProperty('Start pose name')
if poseName == 'EST_ROBOT_STATE':
pose = robotSystem.robotStateJointController.q.copy()
else:
pose = robotSystem.ikPlanner.jointController.getPose(poseName)
planName = self.properties.getProperty('Footstep plan name')
plan = om.findObjectByName(planName)
if not isinstance(plan, FootstepPlanItem):
self.fail('could not find footstep plan: %s' % planName)
plan = plan.plan
robotSystem.footstepsDriver.sendWalkingPlanRequest(plan, pose, waitForResponse=True)
def _addPlanItem(plan, name, itemClass):
assert plan is not None
item = itemClass(name)
item.plan = plan
om.removeFromObjectModel(om.findObjectByName(name))
om.addToObjectModel(item, om.getOrCreateContainer('segmentation'))
return item
class RequestFootstepPlan(AsyncTask):
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Stance frame name', 'stance frame')
properties.addProperty('Start pose name', 'EST_ROBOT_STATE')
def run(self):
poseName = self.properties.getProperty('Start pose name')
if poseName == 'EST_ROBOT_STATE':
pose = robotSystem.robotStateJointController.q.copy()
else:
pose = robotSystem.ikPlanner.jointController.getPose(poseName)
goalFrame = om.findObjectByName(self.properties.getProperty('Stance frame name')).transform
request = robotSystem.footstepsDriver.constructFootstepPlanRequest(pose, goalFrame)
footstepPlan = robotSystem.footstepsDriver.sendFootstepPlanRequest(request, waitForResponse=True)
if not footstepPlan:
self.fail('failed to get a footstep plan response')
_addPlanItem(footstepPlan, self.properties.getProperty('Stance frame name') + ' footstep plan', FootstepPlanItem)
class PlanPostureGoal(AsyncTask):
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Posture group', 'General')
properties.addProperty('Posture name', 'arm up pregrasp')
properties.addProperty('Side', 1, attributes=om.PropertyAttributes(enumNames=['Default', 'Left', 'Right']))
def run(self):
startPosture = robotSystem.robotStateJointController.q.copy()
side = [None, 'left', 'right'][self.properties.getProperty('Side')]
pose = robotSystem.ikPlanner.getMergedPostureFromDatabase(startPosture, self.properties.getProperty('Posture group'), self.properties.getProperty('Posture name'), side)
plan = robotSystem.ikPlanner.computePostureGoal(startPosture, pose)
_addPlanItem(plan, self.properties.getProperty('Posture name') + ' posture plan', ManipulationPlanItem)
class PlanStandPosture(AsyncTask):
def run(self):
startPosture = robotSystem.robotStateJointController.q.copy()
plan = robotSystem.ikPlanner.computeStandPlan(startPosture)
_addPlanItem(plan, 'stand pose plan', ManipulationPlanItem)
class PlanNominalPosture(AsyncTask):
def run(self):
startPosture = robotSystem.robotStateJointController.q.copy()
plan = robotSystem.ikPlanner.computeNominalPlan(startPosture)
_addPlanItem(plan, 'nominal pose plan', ManipulationPlanItem)
class PlanReachToFrame(AsyncTask):
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Frame input name', '')
properties.addProperty('Side', 1, attributes=om.PropertyAttributes(enumNames=['Left', 'Right']))
def getInputFrame(self):
name = self.properties.getProperty('Frame input name')
frame = om.findObjectByName(name)
if not isinstance(frame, vis.FrameItem):
self.fail('frame not found: %s' % name)
return frame
def run(self):
side = self.properties.getPropertyEnumValue('Side').lower()
startPose = robotSystem.robotStateJointController.q.copy()
targetFrame = self.getInputFrame()
constraintSet = robotSystem.ikPlanner.planEndEffectorGoal(startPose, side, targetFrame, lockBase=False, lockBack=True)
endPose, info = constraintSet.runIk()
plan = constraintSet.runIkTraj()
_addPlanItem(plan, '%s reach plan' % targetFrame.getProperty('Name'), ManipulationPlanItem)
class FitWallFrameFromAnnotation(PointCloudAlgorithmBase):
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Annotation input name', '')
def getAnnotationInput(self):
obj = om.findObjectByName(self.properties.getProperty('Annotation input name'))
if obj is None:
self.fail('user annotation not found')
return obj
def run(self):
polyData = self.getPointCloud()
annotation = self.getAnnotationInput()
annotationPoint = annotation.annotationPoints[0]
planePoints, normal = segmentation.applyLocalPlaneFit(polyData, annotationPoint, searchRadius=0.1, searchRadiusEnd=0.2)
viewDirection = segmentation.SegmentationContext.getGlobalInstance().getViewDirection()
if np.dot(normal, viewDirection) < 0:
normal = -normal
xaxis = normal
zaxis = [0, 0, 1]
yaxis = np.cross(zaxis, xaxis)
xaxis = np.cross(yaxis, zaxis)
xaxis /= np.linalg.norm(xaxis)
yaxis /= np.linalg.norm(yaxis)
t = transformUtils.getTransformFromAxes(xaxis, yaxis, zaxis)
t.PostMultiply()
t.Translate(annotationPoint)
polyData = annotation.polyData
polyData = segmentation.transformPolyData(polyData, t.GetLinearInverse())
annotation.setProperty('Visible', False)
om.removeFromObjectModel(om.findObjectByName('wall'))
obj = vis.showPolyData(polyData, 'wall')
obj.actor.SetUserTransform(t)
vis.showFrame(t, 'wall frame', scale=0.2, parent=obj)
class FitShelfItem(PointCloudAlgorithmBase):
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Annotation input name', '')
properties.addProperty('Cluster tolerance', 0.02, attributes=propertyset.PropertyAttributes(decimals=3, minimum=0.0, maximum=10))
def getAnnotationInput(self):
obj = om.findObjectByName(self.properties.getProperty('Annotation input name'))
if obj is None:
self.fail('user annotation not found')
return obj
def run(self):
polyData = self.getPointCloud()
annotation = self.getAnnotationInput()
annotationPoint = annotation.annotationPoints[0]
mesh = segmentation.fitShelfItem(polyData, annotationPoint, clusterTolerance=self.properties.getProperty('Cluster tolerance'))
annotation.setProperty('Visible', False)
om.removeFromObjectModel(om.findObjectByName('shelf item'))
obj = vis.showPolyData(mesh, 'shelf item', color=[0,1,0])
t = transformUtils.frameFromPositionAndRPY(segmentation.computeCentroid(mesh), [0,0,0])
segmentation.makeMovable(obj, t)
class SpawnValveAffordance(AsyncTask):
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Radius', 0.195, attributes=om.PropertyAttributes(decimals=4, minimum=0, maximum=10))
properties.addProperty('Position', [0.7, -0.22, 1.21], attributes=om.PropertyAttributes(decimals=3, minimum=-1e4, maximum=1e4))
properties.addProperty('Rotation', [180, -90, 16], attributes=om.PropertyAttributes(decimals=2, minimum=-360, maximum=360))
def getGroundFrame(self):
        return vtk.vtkTransform()  # early return (stub); the code below is unreachable
robotModel = robotSystem.robotStateModel
baseLinkFrame = robotModel.model.getLinkFrame(robotModel.model.getLinkNames()[0])
#baseLinkFrame.PostMultiply()
#baseLinkFrame.Translate(0,0,-baseLinkFrame.GetPosition()[2])
return baseLinkFrame
#return robotSystem.footstepsDriver.getFeetMidPoint(robotModel)
def computeValveFrame(self):
position = self.properties.getProperty('Position')
rpy = self.properties.getProperty('Rotation')
t = transformUtils.frameFromPositionAndRPY(position, rpy)
t.Concatenate(self.getGroundFrame())
return t
def run(self):
radius = self.properties.getProperty('Radius')
thickness = 0.03
folder = om.getOrCreateContainer('affordances')
frame = self.computeValveFrame()
d = DebugData()
d.addLine(np.array([0, 0, -thickness/2.0]), np.array([0, 0, thickness/2.0]), radius=radius)
mesh = d.getPolyData()
params = dict(radius=radius, length=thickness, xwidth=radius, ywidth=radius, zwidth=thickness, otdf_type='steering_cyl', friendly_name='valve')
affordance = vis.showPolyData(mesh, 'valve', color=[0.0, 1.0, 0.0], cls=affordanceitems.FrameAffordanceItem, parent=folder, alpha=1.0)
frame = vis.showFrame(frame, 'valve frame', parent=affordance, visible=False, scale=radius)
affordance.actor.SetUserTransform(frame.transform)
affordance.setAffordanceParams(params)
affordance.updateParamsFromActorTransform()
class SpawnDrillBarrelAffordance(AsyncTask):
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Position', [0.5, -0.22, 1.2], attributes=om.PropertyAttributes(decimals=3, minimum=-1e4, maximum=1e4))
properties.addProperty('Rotation', [0, 0, 0], attributes=om.PropertyAttributes(decimals=2, minimum=-360, maximum=360))
def getGroundFrame(self):
        return vtk.vtkTransform()  # early return (stub); the code below is unreachable
robotModel = robotSystem.robotStateModel
baseLinkFrame = robotModel.model.getLinkFrame(robotModel.model.getLinkNames()[0])
#baseLinkFrame.PostMultiply()
#baseLinkFrame.Translate(0,0,-baseLinkFrame.GetPosition()[2])
return baseLinkFrame
#return robotSystem.footstepsDriver.getFeetMidPoint(robotModel)
def computeAffordanceFrame(self):
position = self.properties.getProperty('Position')
rpy = self.properties.getProperty('Rotation')
t = transformUtils.frameFromPositionAndRPY(position, rpy)
t.Concatenate(self.getGroundFrame())
return t
def run(self):
folder = om.getOrCreateContainer('affordances')
frame = self.computeAffordanceFrame()
mesh = segmentation.getDrillBarrelMesh()
params = segmentation.getDrillAffordanceParams(np.array(frame.GetPosition()), [1,0,0], [0,1,0], [0,0,1], 'dewalt_barrel')
affordance = vis.showPolyData(mesh, 'drill', color=[0.0, 1.0, 0.0], cls=affordanceitems.FrameAffordanceItem, parent=folder)
frame = vis.showFrame(frame, 'drill frame', parent=affordance, visible=False, scale=0.2)
affordance.actor.SetUserTransform(frame.transform)
affordance.setAffordanceParams(params)
affordance.updateParamsFromActorTransform()
class SpawnDrillRotaryAffordance(AsyncTask):
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Position', [0.5, -0.22, 1.2], attributes=om.PropertyAttributes(decimals=3, minimum=-1e4, maximum=1e4))
properties.addProperty('Rotation', [0, 0, 0], attributes=om.PropertyAttributes(decimals=2, minimum=-360, maximum=360))
def getGroundFrame(self):
        return vtk.vtkTransform()  # early return (stub); the code below is unreachable
robotModel = robotSystem.robotStateModel
baseLinkFrame = robotModel.model.getLinkFrame(robotModel.model.getLinkNames()[0])
#baseLinkFrame.PostMultiply()
#baseLinkFrame.Translate(0,0,-baseLinkFrame.GetPosition()[2])
return baseLinkFrame
#return robotSystem.footstepsDriver.getFeetMidPoint(robotModel)
def computeAffordanceFrame(self):
position = self.properties.getProperty('Position')
rpy = self.properties.getProperty('Rotation')
t = transformUtils.frameFromPositionAndRPY(position, rpy)
t.Concatenate(self.getGroundFrame())
return t
def run(self):
folder = om.getOrCreateContainer('affordances')
frame = self.computeAffordanceFrame()
mesh = segmentation.getDrillMesh()
params = segmentation.getDrillAffordanceParams(np.array(frame.GetPosition()), [1,0,0], [0,1,0], [0,0,1])
affordance = vis.showPolyData(mesh, 'drill', color=[0.0, 1.0, 0.0], cls=affordanceitems.FrameAffordanceItem, parent=folder)
frame = vis.showFrame(frame, 'drill frame', parent=affordance, visible=False, scale=0.2)
affordance.actor.SetUserTransform(frame.transform)
affordance.setAffordanceParams(params)
affordance.updateParamsFromActorTransform()
class PlanGazeTrajectory(AsyncTask):
@staticmethod
def getDefaultProperties(properties):
properties.addProperty('Target frame name', '')
properties.addProperty('Annotation input name', '')
properties.addProperty('Side', 1, attributes=om.PropertyAttributes(enumNames=['Left', 'Right']))
properties.addProperty('Cone threshold degrees', 5.0, attributes=om.PropertyAttributes(decimals=1, minimum=0, maximum=360))
properties.addProperty('Palm offset', 0.0, attributes=om.PropertyAttributes(decimals=3, minimum=-1e4, maximum=1e4))
def getAnnotationInputPoints(self):
obj = om.findObjectByName(self.properties.getProperty('Annotation input name'))
if obj is None:
self.fail('user annotation not found')
return obj.annotationPoints
def appendPositionConstraintForTargetFrame(self, goalFrame, t):
positionConstraint, _ = self.ikPlanner.createPositionOrientationGraspConstraints(self.graspingHand, goalFrame, self.graspToHandLinkFrame)
positionConstraint.tspan = [t, t]
self.constraintSet.constraints.append(positionConstraint)
def initGazeConstraintSet(self, goalFrame):
# create constraint set
startPose = robotSystem.robotStateJointController.q.copy()
startPoseName = 'gaze_plan_start'
endPoseName = 'gaze_plan_end'
self.ikPlanner.addPose(startPose, startPoseName)
self.ikPlanner.addPose(startPose, endPoseName)
self.constraintSet = ikplanner.ConstraintSet(self.ikPlanner, [], startPoseName, endPoseName)
self.constraintSet.endPose = startPose
# add body constraints
bodyConstraints = self.ikPlanner.createMovingBodyConstraints(startPoseName, lockBase=True, lockBack=False, lockLeftArm=self.graspingHand=='right', lockRightArm=self.graspingHand=='left')
self.constraintSet.constraints.extend(bodyConstraints)
# add gaze constraint
self.graspToHandLinkFrame = self.ikPlanner.newPalmOffsetGraspToHandFrame(self.graspingHand, self.properties.getProperty('Palm offset'))
gazeConstraint = self.ikPlanner.createGazeGraspConstraint(self.graspingHand, goalFrame, self.graspToHandLinkFrame, coneThresholdDegrees=self.properties.getProperty('Cone threshold degrees'))
self.constraintSet.constraints.insert(0, gazeConstraint)
def getGazeTargetFrame(self):
frame = om.findObjectByName(self.properties.getProperty('Target frame name'))
if not frame:
            self.fail('could not find gaze target frame: %s' % self.properties.getProperty('Target frame name'))
return frame
def run(self):
self.ikPlanner = robotSystem.ikPlanner
side = self.properties.getPropertyEnumValue('Side').lower()
self.graspingHand = side
targetPoints = self.getAnnotationInputPoints()
gazeTargetFrame = self.getGazeTargetFrame()
self.initGazeConstraintSet(gazeTargetFrame)
numberOfSamples = len(targetPoints)
for i in xrange(numberOfSamples):
targetPos = targetPoints[i]
targetFrame = transformUtils.copyFrame(gazeTargetFrame.transform)
targetFrame.Translate(targetPos - np.array(targetFrame.GetPosition()))
self.appendPositionConstraintForTargetFrame(targetFrame, i+1)
gazeConstraint = self.constraintSet.constraints[0]
assert isinstance(gazeConstraint, ikplanner.ik.WorldGazeDirConstraint)
gazeConstraint.tspan = [1.0, numberOfSamples]
plan = self.constraintSet.runIkTraj()
_addPlanItem(plan, '%s gaze plan' % gazeTargetFrame.getProperty('Name'), ManipulationPlanItem)
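# --- Usage sketch (illustrative; not part of the original file) ---
# Tasks are meant to be driven by ddapp's AsyncTaskQueue (imported above as
# atq); the addTask/start names below are assumptions about that API.
#
#   queue = atq.AsyncTaskQueue()
#   queue.addTask(PrintTask(message='starting'))
#   queue.addTask(DelayTask(delayTime=2.0))
#   queue.addTask(UserPromptTask(message='continue?'))
#   queue.start()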
|
gizatt/director
|
src/python/ddapp/tasks/robottasks.py
|
Python
|
bsd-3-clause
| 40,748
|
[
"VTK"
] |
e1c6af4431db1cd1987d426d1d8276dd7d0b69ee55df04b258d33f91240bb835
|
# jhbuild - a tool to ease building collections of source packages
# Copyright (C) 2008 Igalia S.L., John Carr, Frederic Peters
#
# waterfall.py: custom waterfall display
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
#
# heavily based on buildbot code,
# Copyright (C) Brian Warner <warner-buildbot@lothar.com>
import time, urllib

from twisted.python import log
from buildbot import version
from buildbot.changes.changes import Change
from buildbot import interfaces, util
from buildbot.status import builder
from buildbot.status.web.waterfall import WaterfallStatusResource, Spacer
from buildbot.status.web.base import Box, HtmlResource, IBox, ICurrentBox, \
ITopBox, td, build_get_class, path_to_build, path_to_step, map_branches
from feeds import Rss20StatusResource, Atom10StatusResource
def insertGaps(g, lastEventTime, idleGap=2, showEvents=False):
# summary of changes between this function and the one from buildbot:
# - do not insert time gaps for events that are not shown
debug = False
e = g.next()
starts, finishes = e.getTimes()
if debug: log.msg("E0", starts, finishes)
if finishes == 0:
finishes = starts
if debug: log.msg("E1 finishes=%s, gap=%s, lET=%s" % \
(finishes, idleGap, lastEventTime))
if finishes is not None and finishes + idleGap < lastEventTime:
if debug: log.msg(" spacer0")
yield Spacer(finishes, lastEventTime)
followingEventStarts = starts
if debug: log.msg(" fES0", starts)
yield e
while 1:
e = g.next()
if isinstance(e, builder.Event) and not showEvents:
continue
starts, finishes = e.getTimes()
if debug: log.msg("E2", starts, finishes)
if finishes == 0:
finishes = starts
if finishes is not None and finishes + idleGap < followingEventStarts:
# there is a gap between the end of this event and the beginning
# of the next one. Insert an idle event so the waterfall display
# shows a gap here.
if debug:
log.msg(" finishes=%s, gap=%s, fES=%s" % \
(finishes, idleGap, followingEventStarts))
yield Spacer(finishes, followingEventStarts)
yield e
followingEventStarts = starts
if debug: log.msg(" fES1", starts)
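# Illustrative note (not in the original file): insertGaps walks events from
# newest to oldest; whenever an event finishes more than idleGap seconds
# before the event that follows it in time, a Spacer(finishes, start) is
# yielded so the waterfall display shows the idle period.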
class JhWaterfallStatusResource(WaterfallStatusResource):
""" Override the standard Waterfall class to add RSS and Atom feeds """
def __init__(self, *args, **kwargs):
WaterfallStatusResource.__init__(self, *args, **kwargs)
rss = Rss20StatusResource(self.categories)
self.putChild("rss", rss)
atom = Atom10StatusResource(self.categories)
self.putChild("atom", atom)
self.module_name = self.categories[0]
def getTitle(self, request):
status = self.getStatus(request)
p = status.getProjectName()
if p:
return '%s: %s' % (p, self.module_name)
else:
return "BuildBot"
def buildGrid(self, request, builders):
# summary of changes between this method and the overriden one:
# - don't show events (master started...) by default
# - only display changes related to the current module
debug = False
# TODO: see if we can use a cached copy
showEvents = False
if request.args.get("show_events", ["false"])[0].lower() == "true":
showEvents = True
filterBranches = [b for b in request.args.get("branch", []) if b]
filterBranches = map_branches(filterBranches)
maxTime = int(request.args.get("last_time", [util.now()])[0])
if "show_time" in request.args:
minTime = maxTime - int(request.args["show_time"][0])
elif "first_time" in request.args:
minTime = int(request.args["first_time"][0])
else:
minTime = None
spanLength = 10 # ten-second chunks
maxPageLen = int(request.args.get("num_events", [200])[0])
# first step is to walk backwards in time, asking each column
# (commit, all builders) if they have any events there. Build up the
# array of events, and stop when we have a reasonable number.
commit_source = self.getChangemaster(request)
lastEventTime = util.now()
sources = [commit_source] + builders
changeNames = ["changes"]
builderNames = map(lambda builder: builder.getName(), builders)
sourceNames = changeNames + builderNames
sourceEvents = []
sourceGenerators = []
projectName = str(self.categories[0])
def get_event_from(g):
try:
while True:
e = g.next()
# e might be builder.BuildStepStatus,
# builder.BuildStatus, builder.Event,
# waterfall.Spacer(builder.Event), or changes.Change .
# The showEvents=False flag means we should hide
# builder.Event .
if not showEvents and isinstance(e, builder.Event):
continue
if isinstance(e, Change) and hasattr(e, 'project') and \
e.project != projectName:
continue
break
event = interfaces.IStatusEvent(e)
if debug:
log.msg("gen %s gave1 %s" % (g, event.getText()))
except StopIteration:
event = None
return event
for s in sources:
            gen = insertGaps(s.eventGenerator(filterBranches), lastEventTime,
                             showEvents=showEvents)
sourceGenerators.append(gen)
# get the first event
sourceEvents.append(get_event_from(gen))
eventGrid = []
timestamps = []
lastEventTime = 0
for e in sourceEvents:
if e and e.getTimes()[0] > lastEventTime:
lastEventTime = e.getTimes()[0]
if lastEventTime == 0:
lastEventTime = util.now()
spanStart = lastEventTime - spanLength
debugGather = 0
while 1:
if debugGather: log.msg("checking (%s,]" % spanStart)
# the tableau of potential events is in sourceEvents[]. The
# window crawls backwards, and we examine one source at a time.
# If the source's top-most event is in the window, is it pushed
# onto the events[] array and the tableau is refilled. This
# continues until the tableau event is not in the window (or is
# missing).
spanEvents = [] # for all sources, in this span. row of eventGrid
firstTimestamp = None # timestamp of first event in the span
lastTimestamp = None # last pre-span event, for next span
for c in range(len(sourceGenerators)):
events = [] # for this source, in this span. cell of eventGrid
event = sourceEvents[c]
while event and spanStart < event.getTimes()[0]:
# to look at windows that don't end with the present,
# condition the .append on event.time <= spanFinish
if not IBox(event, None):
log.msg("BAD EVENT", event, event.getText())
assert 0
if debug:
log.msg("pushing", event.getText(), event)
# Fixing event text (removing module name at the start)
t = event.getText()
if t and t[0].startswith(self.module_name):
text = t[0][len(self.module_name)+1:]
if text == 'updated':
text = 'update'
event.setText([text] + t[1:])
events.append(event)
starts, finishes = event.getTimes()
firstTimestamp = util.earlier(firstTimestamp, starts)
event = get_event_from(sourceGenerators[c])
if debug:
log.msg("finished span")
if event:
# this is the last pre-span event for this source
lastTimestamp = util.later(lastTimestamp,
event.getTimes()[0])
if debugGather:
log.msg(" got %s from %s" % (events, sourceNames[c]))
sourceEvents[c] = event # refill the tableau
spanEvents.append(events)
# only show events older than maxTime. This makes it possible to
# visit a page that shows what it would be like to scroll off the
# bottom of this one.
if firstTimestamp is not None and firstTimestamp <= maxTime:
eventGrid.append(spanEvents)
timestamps.append(firstTimestamp)
if lastTimestamp:
spanStart = lastTimestamp - spanLength
else:
# no more events
break
if minTime is not None and lastTimestamp < minTime:
break
if len(timestamps) > maxPageLen:
break
# now loop
# loop is finished. now we have eventGrid[] and timestamps[]
if debugGather: log.msg("finished loop")
assert(len(timestamps) == len(eventGrid))
return (changeNames, builderNames, timestamps, eventGrid, sourceEvents)
def body(self, request):
# summary of changes between this method and the overriden one:
# - more structural markup and CSS
# - removal of the phase stuff, only keep one
"This method builds the main waterfall display."
status = self.getStatus(request)
data = ''
projectName = status.getProjectName()
projectURL = status.getProjectURL()
# we start with all Builders available to this Waterfall: this is
# limited by the config-file -time categories= argument, and defaults
# to all defined Builders.
allBuilderNames = status.getBuilderNames(categories=self.categories)
builders = [status.getBuilder(name) for name in allBuilderNames]
# but if the URL has one or more builder= arguments (or the old show=
# argument, which is still accepted for backwards compatibility), we
# use that set of builders instead. We still don't show anything
# outside the config-file time set limited by categories=.
showBuilders = request.args.get("show", [])
showBuilders.extend(request.args.get("builder", []))
if showBuilders:
builders = [b for b in builders if b.name in showBuilders]
        # now, if the URL has one or more category= arguments, use them as a
# filter: only show those builders which belong to one of the given
# categories.
showCategories = request.args.get("category", [])
if showCategories:
builders = [b for b in builders if b.category in showCategories]
builderNames = [b.name for b in builders]
(changeNames, builderNames, timestamps, eventGrid, sourceEvents) = \
self.buildGrid(request, builders)
# start the table: top-header material
data += '<table class="waterfall">\n'
data += '<thead>\n'
data += '<tr>\n'
data += '<td colspan="2"></td>'
for b in builders:
state, builds = b.getState()
builder_name = b.name[len(self.module_name)+1:]
data += '<th class="%s" title="%s"><a href="%s">%s</a></th>' % (
state, state,
request.childLink('../builders/%s' % urllib.quote(b.name, safe='')),
builder_name)
data += '</tr>\n'
data += '<tr>'
data += '<th>time<br/>(%s)</th>' % time.tzname[time.localtime()[-1]]
data += '<th class="Change">changes</th>'
for b in builders:
box = ITopBox(b).getBox(request)
data += box.td(align="center")
data += '</tr>'
data += '</thead>'
data += '<tbody>'
data += self.phase2(request, changeNames + builderNames, timestamps, eventGrid,
sourceEvents)
data += '</tbody>\n'
data += '<tfoot>\n'
def with_args(req, remove_args=[], new_args=[], new_path=None):
# sigh, nevow makes this sort of manipulation easier
newargs = req.args.copy()
for argname in remove_args:
newargs[argname] = []
if "branch" in newargs:
newargs["branch"] = [b for b in newargs["branch"] if b]
for k,v in new_args:
if k in newargs:
newargs[k].append(v)
else:
newargs[k] = [v]
newquery = "&".join(["%s=%s" % (k, v)
for k in newargs
for v in newargs[k]
])
if new_path:
new_url = new_path
elif req.prepath:
new_url = req.prepath[-1]
else:
new_url = ''
if newquery:
new_url += "?" + newquery
return new_url
if timestamps:
data += '<tr>'
bottom = timestamps[-1]
nextpage = with_args(request, ["last_time"],
[("last_time", str(int(bottom)))])
data += '<td class="Time"><a href="%s">next page</a></td>\n' % nextpage
data += '</tr>'
data += '</tfoot>\n'
data += "</table>\n"
return data
def phase2(self, request, sourceNames, timestamps, eventGrid,
sourceEvents):
data = ""
if not timestamps:
return data
# first pass: figure out the height of the chunks, populate grid
grid = []
for i in range(1+len(sourceNames)):
grid.append([])
# grid is a list of columns, one for the timestamps, and one per
# event source. Each column is exactly the same height. Each element
# of the list is a single <td> box.
lastDate = time.strftime("<b>%Y-%m-%d</b>",
time.localtime(util.now()))
for r in range(0, len(timestamps)):
chunkstrip = eventGrid[r]
# chunkstrip is a horizontal strip of event blocks. Each block
# is a vertical list of events, all for the same source.
assert(len(chunkstrip) == len(sourceNames))
maxRows = reduce(lambda x,y: max(x,y),
map(lambda x: len(x), chunkstrip))
for i in range(maxRows):
if i != maxRows-1:
grid[0].append(None)
else:
# timestamp goes at the bottom of the chunk
stuff = []
# add the date at the beginning (if it is not the same as
# today's date), and each time it changes
today = time.strftime("<b>%Y-%m-%d</b>",
time.localtime(timestamps[r]))
if today != lastDate:
stuff.append(today)
lastDate = today
stuff.append(
time.strftime("%H:%M:%S",
time.localtime(timestamps[r])))
grid[0].append(Box(text=stuff, class_="Time",
valign="bottom", align="center"))
# at this point the timestamp column has been populated with
# maxRows boxes, most None but the last one has the time string
for c in range(0, len(chunkstrip)):
block = chunkstrip[c]
assert(block != None) # should be [] instead
for i in range(maxRows - len(block)):
# fill top of chunk with blank space
grid[c+1].append(None)
for i in range(len(block)):
# so the events are bottom-justified
b = IBox(block[i]).getBox(request)
b.parms['valign'] = "top"
b.parms['align'] = "center"
grid[c+1].append(b)
# now all the other columns have maxRows new boxes too
# populate the last row, if empty
gridlen = len(grid[0])
for i in range(len(grid)):
strip = grid[i]
assert(len(strip) == gridlen)
if strip[-1] == None:
if sourceEvents[i-1]:
filler = IBox(sourceEvents[i-1]).getBox(request)
else:
# this can happen if you delete part of the build history
filler = Box(text=["?"], align="center")
strip[-1] = filler
strip[-1].parms['rowspan'] = 1
# second pass: bubble the events upwards to un-occupied locations
# Every square of the grid that has a None in it needs to have
# something else take its place.
noBubble = request.args.get("nobubble",['0'])
noBubble = int(noBubble[0])
if not noBubble:
for col in range(len(grid)):
strip = grid[col]
if col == 1: # changes are handled differently
for i in range(2, len(strip)+1):
# only merge empty boxes. Don't bubble commit boxes.
if strip[-i] == None:
next = strip[-i+1]
assert(next)
if next:
#if not next.event:
if next.spacer:
# bubble the empty box up
strip[-i] = next
strip[-i].parms['rowspan'] += 1
strip[-i+1] = None
else:
# we are above a commit box. Leave it
# be, and turn the current box into an
# empty one
strip[-i] = Box([], rowspan=1,
comment="commit bubble")
strip[-i].spacer = True
else:
# we are above another empty box, which
# somehow wasn't already converted.
# Shouldn't happen
pass
else:
for i in range(2, len(strip)+1):
# strip[-i] will go from next-to-last back to first
if strip[-i] == None:
# bubble previous item up
assert(strip[-i+1] != None)
strip[-i] = strip[-i+1]
strip[-i].parms['rowspan'] += 1
strip[-i+1] = None
else:
strip[-i].parms['rowspan'] = 1
# third pass: render the HTML table
for i in range(gridlen):
data += " <tr>\n";
for strip in grid:
b = strip[i]
if b:
data += b.td()
else:
if noBubble:
data += td([])
# Nones are left empty, rowspan should make it all fit
data += " </tr>\n"
return data
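    # Worked illustration of the bubbling pass above (comments only): a grid
    # column such as
    #     [Box(a), None, None, Box(b)]
    # is rewritten bottom-up into
    #     [Box(a, rowspan=1), Box(b, rowspan=3), None, None]
    # i.e. each event box moves up over the empty cells beneath it, and its
    # increased rowspan makes the rendered <td> span the vacated rows.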
|
ahmeier/jhbuild
|
jhbuild/buildbot/status/web/waterfall.py
|
Python
|
gpl-2.0
| 20,632
|
[
"Brian",
"VisIt"
] |
05034ee7c6cc4997885ef3f11c0680dea518ee5f0c8b1ab2d4507900f140251d
|
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""An analysis that determines the reach of a function definition.
A function definition is said to reach a statement if that function may exist
(and therefore may be called) when that statement executes.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import gast
from tensorflow.python.autograph.pyct import anno
from tensorflow.python.autograph.pyct import cfg
from tensorflow.python.autograph.pyct import transformer
class Definition(object):
"""Definition objects describe a unique definition of a function."""
def __init__(self, def_node):
self.def_node = def_node
class _NodeState(object):
"""Abstraction for the state of the CFG walk for reaching definition analysis.
This is a value type. Only implements the strictly necessary operators.
Attributes:
value: Dict[qual_names.QN, Set[Definition, ...]], the defined symbols and
their possible definitions
"""
def __init__(self, init_from=None):
if init_from:
self.value = set(init_from)
else:
self.value = set()
def __eq__(self, other):
return self.value == other.value
def __ne__(self, other):
return self.value != other.value
def __or__(self, other):
assert isinstance(other, _NodeState)
result = _NodeState(self.value)
result.value.update(other.value)
return result
def __add__(self, value):
result = _NodeState(self.value)
result.value.add(value)
return result
def __repr__(self):
return 'NodeState[%s]=%s' % (id(self), repr(self.value))
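# A tiny worked example of the value-type semantics above (illustrative
# only; d1 and d2 stand for arbitrary Definition objects):
#   a = _NodeState(init_from={d1})
#   b = _NodeState(init_from={d2})
#   (a | b).value == {d1, d2}   # set union, taken at CFG join points
#   (a + d2).value == {d1, d2}  # record one more reaching definition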
class Analyzer(cfg.GraphVisitor):
"""CFG visitor that determines reaching definitions at statement level."""
def __init__(self, graph, external_defs):
super(Analyzer, self).__init__(graph)
# This allows communicating that nodes have extra reaching definitions,
# e.g. those that a function closes over.
self.external_defs = external_defs
def init_state(self, _):
return _NodeState()
def visit_node(self, node):
prev_defs_out = self.out[node]
if node is self.graph.entry:
defs_in = _NodeState(self.external_defs)
else:
defs_in = prev_defs_out
for n in node.prev:
defs_in |= self.out[n]
defs_out = defs_in
if isinstance(node.ast_node, (gast.Lambda, gast.FunctionDef)):
defs_out += node.ast_node
self.in_[node] = defs_in
self.out[node] = defs_out
return prev_defs_out != defs_out
class TreeAnnotator(transformer.Base):
"""AST visitor that annotates each symbol name with its reaching definitions.
Simultaneously, the visitor runs the dataflow analysis on each function node,
accounting for the effect of closures. For example:
def foo():
def f():
pass
def g():
# `def f` reaches here
"""
def __init__(self, source_info, graphs):
super(TreeAnnotator, self).__init__(source_info)
self.graphs = graphs
self.allow_skips = False
self.current_analyzer = None
  def _process_function(self, node):
parent_analyzer = self.current_analyzer
subgraph = self.graphs[node]
if (self.current_analyzer is not None
and node in self.current_analyzer.graph.index):
cfg_node = self.current_analyzer.graph.index[node]
defined_in = self.current_analyzer.in_[cfg_node].value
else:
defined_in = ()
analyzer = Analyzer(subgraph, defined_in)
analyzer.visit_forward()
self.current_analyzer = analyzer
node = self.generic_visit(node)
self.current_analyzer = parent_analyzer
return node
def visit_FunctionDef(self, node):
    return self._process_function(node)
def visit_Lambda(self, node):
    return self._process_function(node)
def visit(self, node):
# This can happen before entering the top level function
if (self.current_analyzer is not None
and node in self.current_analyzer.graph.index):
cfg_node = self.current_analyzer.graph.index[node]
anno.setanno(node, anno.Static.DEFINED_FNS_IN,
self.current_analyzer.in_[cfg_node].value)
extra_node = anno.getanno(node, anno.Basic.EXTRA_LOOP_TEST, default=None)
if extra_node is not None:
cfg_node = self.current_analyzer.graph.index[extra_node]
anno.setanno(extra_node, anno.Static.DEFINED_FNS_IN,
self.current_analyzer.in_[cfg_node].value)
return super(TreeAnnotator, self).visit(node)
def resolve(node, source_info, graphs):
"""Resolves reaching definitions for each symbol.
Args:
node: ast.AST
source_info: transformer.SourceInfo
graphs: Dict[ast.FunctionDef, cfg.Graph]
Returns:
ast.AST
"""
visitor = TreeAnnotator(source_info, graphs)
node = visitor.visit(node)
return node
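# Illustrative wiring of `resolve` (a sketch, not part of the upstream
# module; the parse/CFG calls below are hypothetical shorthand for the
# entry points in tensorflow.python.autograph.pyct):
#   from tensorflow.python.autograph.pyct import cfg, parser, qual_names
#   node, source = parser.parse_entity(fn, future_features=())
#   node = qual_names.resolve(node)
#   graphs = cfg.build(node)
#   node = resolve(node, source_info, graphs)
#   # statement nodes now carry the anno.Static.DEFINED_FNS_IN annotation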
|
karllessard/tensorflow
|
tensorflow/python/autograph/pyct/static_analysis/reaching_fndefs.py
|
Python
|
apache-2.0
| 5,429
|
[
"VisIt"
] |
f83ea9f43b41986a0f18c3e0768ecbfd5717a20be6517af63221c5cdcbecddff
|
"""Deep Dream using the Inception v5 network.
Creative Applications of Deep Learning w/ Tensorflow.
Kadenze, Inc.
Copyright Parag K. Mital, June 2016.
"""
import os
import numpy as np
import tensorflow as tf
from scipy.ndimage.filters import gaussian_filter
from skimage.transform import resize
from scipy.misc import imsave
from . import inception, vgg16, i2v
from . import gif
def get_labels(model='inception'):
"""Return labels corresponding to the `neuron_i` parameter of deep dream.
Parameters
----------
model : str, optional
Which model to load. Must be one of: ['inception'], 'i2v_tag', 'i2v',
'vgg16', or 'vgg_face'.
Raises
------
ValueError
Unknown model. Must be one of: ['inception'], 'i2v_tag', 'i2v',
'vgg16', or 'vgg_face'.
"""
if model == 'inception':
net = inception.get_inception_model()
return net['labels']
elif model == 'i2v_tag':
net = i2v.get_i2v_tag_model()
return net['labels']
elif model == 'vgg16':
net = vgg16.get_vgg_model()
return net['labels']
elif model == 'vgg_face':
net = vgg16.get_vgg_face_model()
return net['labels']
else:
raise ValueError("Unknown model or this model does not have labels!")
def get_layer_names(model='inception'):
"""Retun every layer's index and name in the given model.
Parameters
----------
model : str, optional
Which model to load. Must be one of: ['inception'], 'i2v_tag', 'i2v',
'vgg16', or 'vgg_face'.
Returns
-------
names : list of tuples
The index and layer's name for every layer in the given model.
"""
g = tf.Graph()
with tf.Session(graph=g):
if model == 'inception':
net = inception.get_inception_model()
elif model == 'vgg_face':
net = vgg16.get_vgg_face_model()
elif model == 'vgg16':
net = vgg16.get_vgg_model()
elif model == 'i2v':
net = i2v.get_i2v_model()
        elif model == 'i2v_tag':
net = i2v.get_i2v_tag_model()
tf.import_graph_def(net['graph_def'], name='net')
names = [(i, op.name) for i, op in enumerate(g.get_operations())]
return names
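# Example (hedged -- the exact operation names depend on the downloaded
# graph definition):
#   names = get_layer_names('inception')
#   names[:2]  # e.g. [(0, 'net/input'), (1, 'net/conv2d0_w')]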
def _setup(input_img, model, downsize):
"""Internal use only. Load the given model's graph and preprocess an image.
Parameters
----------
input_img : np.ndarray
        Image to process with the model's normalization process.
model : str
Which model to load. Must be one of: ['inception'], 'i2v_tag', 'i2v',
'vgg16', or 'vgg_face'.
downsize : bool
Optionally crop/resize the input image to the standard shape. Only
applies to inception network which is all convolutional.
Returns
-------
net, img, preprocess, deprocess : dict, np.ndarray, function, function
net : The networks graph_def and labels
img : The preprocessed input image
preprocess: Function for preprocessing an image
deprocess: Function for deprocessing an image
Raises
------
ValueError
If model is unknown.
"""
if model == 'inception':
net = inception.get_inception_model()
img = inception.preprocess(input_img, resize=downsize, crop=downsize)[np.newaxis]
deprocess, preprocess = inception.deprocess, inception.preprocess
elif model == 'vgg_face':
net = vgg16.get_vgg_face_model()
img = vgg16.preprocess(input_img)[np.newaxis]
deprocess, preprocess = vgg16.deprocess, vgg16.preprocess
elif model == 'vgg16':
net = vgg16.get_vgg_model()
img = vgg16.preprocess(input_img)[np.newaxis]
deprocess, preprocess = vgg16.deprocess, vgg16.preprocess
elif model == 'i2v':
net = i2v.get_i2v_model()
img = i2v.preprocess(input_img)[np.newaxis]
deprocess, preprocess = i2v.deprocess, i2v.preprocess
elif model == 'i2v_tag':
net = i2v.get_i2v_tag_model()
img = i2v.preprocess(input_img)[np.newaxis]
deprocess, preprocess = i2v.deprocess, i2v.preprocess
else:
raise ValueError(
"Unknown model name! Supported: " +
"['inception', 'vgg_face', 'vgg16', 'i2v', 'i2v_tag']")
return net, img, preprocess, deprocess
def _apply(img,
gradient,
it_i,
decay=0.998,
sigma=1.5,
blur_step=10,
step=1.0,
crop=0,
crop_step=1,
pth=0):
"""Interal use only. Apply the gradient to an image with the given params.
Parameters
----------
img : np.ndarray
Tensor to apply gradient ascent to.
gradient : np.ndarray
Gradient to ascend to.
it_i : int
Current iteration (used for step modulos)
decay : float, optional
Amount to decay.
sigma : float, optional
Sigma for Gaussian Kernel.
blur_step : int, optional
How often to blur.
step : float, optional
Step for gradient ascent.
crop : int, optional
Amount to crop from each border.
crop_step : int, optional
How often to crop.
pth : int, optional
Percentile to mask out.
Returns
-------
img : np.ndarray
Ascended image.
"""
    gradient /= (np.std(gradient) + 1e-10)
    img += gradient * step
    img *= decay
    if pth:
        mask = (np.abs(img) < np.percentile(np.abs(img), pth))
        img -= img * mask
    if blur_step and it_i % blur_step == 0:
        for ch_i in range(3):
            img[..., ch_i] = gaussian_filter(img[..., ch_i], sigma)
    if crop and it_i % crop_step == 0:
        height, width, *ch = img[0].shape
        # Crop a border of `crop` pixels from height and width
        img = img[:, crop:-crop, crop:-crop, :]
        # Resize back to the original height and width
        img = resize(img[0], (height, width), order=3,
                     clip=False, preserve_range=True
                     )[np.newaxis].astype(np.float32)
    # The crop/resize branch rebinds `img`, so return the array explicitly
    # (as the docstring promises) and have callers use the returned value.
    return img
def deep_dream(input_img,
downsize=False,
model='inception',
layer_i=-1,
neuron_i=-1,
n_iterations=100,
save_gif=None,
save_images='imgs',
device='/cpu:0',
**kwargs):
"""Deep Dream with the given parameters.
Parameters
----------
input_img : np.ndarray
        Image to apply deep dream to. Should be 3-dimensional H x W x C
RGB uint8 or float32.
downsize : bool, optional
Whether or not to downsize the image. Only applies to
model=='inception'.
model : str, optional
Which model to load. Must be one of: ['inception'], 'i2v_tag', 'i2v',
'vgg16', or 'vgg_face'.
layer_i : int, optional
Which layer to use for finding the gradient. E.g. the softmax layer
for inception is -1, for vgg networks it is -2. Use the function
"get_layer_names" to find the layer number that you need.
neuron_i : int, optional
Which neuron to use. -1 for the entire layer.
n_iterations : int, optional
Number of iterations to dream.
save_gif : bool, optional
Save a GIF.
save_images : str, optional
Folder to save images to.
device : str, optional
Which device to use, e.g. ['/cpu:0'] or '/gpu:0'.
**kwargs : dict
See "_apply" for additional parameters.
Returns
-------
imgs : list of np.array
Images of every iteration
"""
net, img, preprocess, deprocess = _setup(input_img, model, downsize)
batch, height, width, *ch = img.shape
g = tf.Graph()
with tf.Session(graph=g) as sess, g.device(device):
tf.import_graph_def(net['graph_def'], name='net')
names = [op.name for op in g.get_operations()]
input_name = names[0] + ':0'
x = g.get_tensor_by_name(input_name)
layer = g.get_tensor_by_name(names[layer_i] + ':0')
layer_shape = sess.run(tf.shape(layer), feed_dict={x: img})
layer_vec = np.ones(layer_shape) / layer_shape[-1]
layer_vec[..., neuron_i] = 1.0 - (1.0 / layer_shape[-1])
ascent = tf.gradients(layer, x)
imgs = []
for it_i in range(n_iterations):
print(it_i, np.min(img), np.max(img))
if neuron_i == -1:
this_res = sess.run(
ascent, feed_dict={x: img})[0]
else:
this_res = sess.run(
ascent, feed_dict={x: img, layer: layer_vec})[0]
            img = _apply(img, this_res, it_i, **kwargs)
imgs.append(deprocess(img[0]))
if save_images is not None:
imsave(os.path.join(save_images,
'frame{}.png'.format(it_i)), imgs[-1])
if save_gif is not None:
gif.build_gif(imgs, saveto=save_gif)
return imgs
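# Minimal usage sketch (not part of the original module; assumes a local
# RGB image 'input.png' and that inception.get_inception_model() can fetch
# the model weights):
#   import matplotlib.image as mpimg
#   frames = deep_dream(mpimg.imread('input.png'), downsize=True,
#                       n_iterations=20, save_images=None, save_gif=None)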
def guided_dream(input_img,
guide_img=None,
downsize=False,
layers=[162, 183, 184, 247],
label_i=962,
layer_i=-1,
feature_loss_weight=1.0,
tv_loss_weight=1.0,
l2_loss_weight=1.0,
softmax_loss_weight=1.0,
model='inception',
neuron_i=920,
n_iterations=100,
save_gif=None,
save_images='imgs',
device='/cpu:0',
**kwargs):
"""Deep Dream v2. Use an optional guide image and other techniques.
Parameters
----------
input_img : np.ndarray
        Image to apply deep dream to. Should be 3-dimensional H x W x C
RGB uint8 or float32.
guide_img : np.ndarray, optional
Optional image to find features at different layers for. Must pass in
a list of layers that you want to find features for. Then the guided
dream will try to match this images features at those layers.
downsize : bool, optional
Whether or not to downsize the image. Only applies to
model=='inception'.
layers : list, optional
A list of layers to find features for in the "guide_img".
label_i : int, optional
Which label to use for the softmax layer. Use the "get_labels" function
to find the index corresponding the object of interest. If None, not
used.
layer_i : int, optional
Which layer to use for finding the gradient. E.g. the softmax layer
for inception is -1, for vgg networks it is -2. Use the function
"get_layer_names" to find the layer number that you need.
feature_loss_weight : float, optional
Weighting for the feature loss from the guide_img.
tv_loss_weight : float, optional
Total variational loss weighting. Enforces smoothness.
l2_loss_weight : float, optional
L2 loss weighting. Enforces smaller values and reduces saturation.
softmax_loss_weight : float, optional
Softmax loss weighting. Must set label_i.
model : str, optional
Which model to load. Must be one of: ['inception'], 'i2v_tag', 'i2v',
'vgg16', or 'vgg_face'.
neuron_i : int, optional
Which neuron to use. -1 for the entire layer.
n_iterations : int, optional
Number of iterations to dream.
save_gif : bool, optional
Save a GIF.
save_images : str, optional
Folder to save images to.
device : str, optional
Which device to use, e.g. ['/cpu:0'] or '/gpu:0'.
**kwargs : dict
See "_apply" for additional parameters.
Returns
-------
imgs : list of np.ndarray
Images of the dream.
"""
net, img, preprocess, deprocess = _setup(input_img, model, downsize)
print(img.shape, input_img.shape)
print(img.min(), img.max())
if guide_img is not None:
guide_img = preprocess(guide_img.copy(), model)[np.newaxis]
assert(guide_img.shape == img.shape)
batch, height, width, *ch = img.shape
g = tf.Graph()
with tf.Session(graph=g) as sess, g.device(device):
tf.import_graph_def(net['graph_def'], name='net')
names = [op.name for op in g.get_operations()]
input_name = names[0] + ':0'
x = g.get_tensor_by_name(input_name)
features = [names[layer_i] + ':0' for layer_i in layers]
feature_loss = tf.Variable(0.0)
for feature_i in features:
layer = g.get_tensor_by_name(feature_i)
if guide_img is None:
feature_loss += tf.reduce_mean(layer)
else:
# Reshape it to 2D vector
layer = tf.reshape(layer, [-1, 1])
# Do the same for our guide image
guide_layer = sess.run(layer, feed_dict={x: guide_img})
guide_layer = guide_layer.reshape(-1, 1)
# Now calculate their dot product
correlation = tf.matmul(guide_layer.T, layer)
feature_loss += feature_loss_weight * tf.reduce_mean(correlation)
softmax_loss = tf.Variable(0.0)
if label_i is not None:
layer = g.get_tensor_by_name(names[layer_i] + ':0')
layer_shape = sess.run(tf.shape(layer), feed_dict={x: img})
layer_vec = np.ones(layer_shape) / layer_shape[-1]
            layer_vec[..., neuron_i] = 1.0 - 1.0 / layer_shape[-1]
softmax_loss += softmax_loss_weight * tf.reduce_mean(tf.nn.l2_loss(layer - layer_vec))
dx = tf.square(x[:, :height - 1, :width - 1, :] - x[:, :height - 1, 1:, :])
dy = tf.square(x[:, :height - 1, :width - 1, :] - x[:, 1:, :width - 1, :])
tv_loss = tv_loss_weight * tf.reduce_mean(tf.pow(dx + dy, 1.2))
l2_loss = l2_loss_weight * tf.reduce_mean(tf.nn.l2_loss(x))
ascent = tf.gradients(feature_loss + softmax_loss + tv_loss + l2_loss, x)[0]
sess.run(tf.global_variables_initializer())
imgs = []
for it_i in range(n_iterations):
this_res, this_feature_loss, this_softmax_loss, this_tv_loss, this_l2_loss = sess.run(
[ascent, feature_loss, softmax_loss, tv_loss, l2_loss], feed_dict={x: img})
print('feature:', this_feature_loss,
'softmax:', this_softmax_loss,
'tv', this_tv_loss,
'l2', this_l2_loss)
            img = _apply(img, -this_res, it_i, **kwargs)
imgs.append(deprocess(img[0]))
if save_images is not None:
imsave(os.path.join(save_images,
'frame{}.png'.format(it_i)), imgs[-1])
if save_gif is not None:
gif.build_gif(imgs, saveto=save_gif)
return imgs
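# Guided variant, sketched with hypothetical file names ('input.png' and
# 'guide.png' are assumptions, not shipped with the library):
#   dreamed = guided_dream(mpimg.imread('input.png'),
#                          guide_img=mpimg.imread('guide.png'),
#                          n_iterations=20,
#                          save_images=None, save_gif=None)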
|
pkmital/CADL
|
session-5/libs/deepdream.py
|
Python
|
apache-2.0
| 14,815
|
[
"Gaussian",
"NEURON"
] |
90695525667b795ea6a53a6f64413945c6c01a80f767746e910f0dfd73363a06
|
import os
import deepchem as dc
import numpy as np
def load_gaussian_cdf_data():
"""Load example with numbers sampled from Gaussian normal distribution.
Each feature and task is a column of values that is sampled
from a normal distribution of mean 0, stdev 1."""
current_dir = os.path.dirname(os.path.abspath(__file__))
features = ["feat0", "feat1"]
featurizer = dc.feat.UserDefinedFeaturizer(features)
tasks = ["task0", "task1"]
input_file = os.path.join(current_dir, "assets/gaussian_cdf_example.csv")
loader = dc.data.UserCSVLoader(
tasks=tasks, featurizer=featurizer, id_field="id")
return loader.create_dataset(input_file)
def test_power_X_transformer():
"""Test Power transformer on Gaussian normal dataset."""
N = 10
n_feat = 2
powers = [1, 2, 0.5]
X = np.random.rand(N, n_feat)
y = np.random.normal(size=(N,))
gaussian_dataset = dc.data.NumpyDataset(X, y)
power_transformer = dc.trans.PowerTransformer(transform_X=True, powers=powers)
X, y, w, ids = (gaussian_dataset.X, gaussian_dataset.y, gaussian_dataset.w,
gaussian_dataset.ids)
gaussian_dataset2 = power_transformer.transform(gaussian_dataset)
X_t, y_t, w_t, ids_t = (gaussian_dataset2.X, gaussian_dataset2.y,
gaussian_dataset2.w, gaussian_dataset2.ids)
# Check ids are unchanged.
for id_elt, id_t_elt in zip(ids, ids_t):
assert id_elt == id_t_elt
# Check y is unchanged since this is an X transformer
np.testing.assert_allclose(y.flatten(), y_t.flatten())
# Check w is unchanged since this is an X transformer
np.testing.assert_allclose(w, w_t)
# Check X is now holding the proper values in each column.
np.testing.assert_allclose(X_t.shape[1], len(powers) * X.shape[1])
np.testing.assert_allclose(X, X_t[:, :2])
np.testing.assert_allclose(np.power(X, 2), X_t[:, 2:4])
np.testing.assert_allclose(np.power(X, 0.5), X_t[:, 4:])
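# Concrete layout of the expansion checked above (illustrative): with
# n_feat == 2 and powers == [1, 2, 0.5], X_t has 6 columns grouped by power:
#   [X[:, 0], X[:, 1], X[:, 0]**2, X[:, 1]**2, X[:, 0]**0.5, X[:, 1]**0.5]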
def test_power_y_transformer():
"""Test Power transformer on Gaussian normal dataset."""
N = 10
n_feat = 2
powers = [1, 2, 0.5]
X = np.random.rand(N, n_feat)
y = np.random.rand(N)
gaussian_dataset = dc.data.NumpyDataset(X, y)
power_transformer = dc.trans.PowerTransformer(transform_y=True, powers=powers)
X, y, w, ids = (gaussian_dataset.X, gaussian_dataset.y, gaussian_dataset.w,
gaussian_dataset.ids)
gaussian_dataset2 = power_transformer.transform(gaussian_dataset)
X_t, y_t, w_t, ids_t = (gaussian_dataset2.X, gaussian_dataset2.y,
gaussian_dataset2.w, gaussian_dataset2.ids)
# Check ids are unchanged.
for id_elt, id_t_elt in zip(ids, ids_t):
assert id_elt == id_t_elt
  # Check X is unchanged since this is a y transformer
np.testing.assert_allclose(X, X_t)
  # Check w is unchanged since this is a y transformer
np.testing.assert_allclose(w, w_t)
# Check y is now holding the proper values in each column.
np.testing.assert_allclose(y_t.shape[1], len(powers))
np.testing.assert_allclose(y, y_t[:, :1].flatten())
np.testing.assert_allclose(np.power(y, 2), y_t[:, 1:2].flatten())
np.testing.assert_allclose(np.power(y, 0.5), y_t[:, 2:].flatten())
# Check that untransform does the right thing.
np.testing.assert_allclose(power_transformer.untransform(y_t).flatten(), y)
|
peastman/deepchem
|
deepchem/trans/tests/test_power.py
|
Python
|
mit
| 3,314
|
[
"Gaussian"
] |
d2413411e2e81fdc6461754ad0078d99cc3a2729045ed8c9a6c13e1878773e64
|
import networkx as nx
import pybel.constants as pc
from indra.assemblers.pybel import assembler as pa
from indra.databases import hgnc_client
from indra.statements import *
from pybel.dsl import abundance, activity, bioprocess, complex_abundance, hgvs, pmod, protein, reaction
def id(gene_name):
return hgnc_client.get_hgnc_id(gene_name)
phos_dsl = pmod('Ph', 'Ser', 218)
ub_dsl = pmod('Ub', 'Ser', 218)
egfr_phos_dsl = pmod('Ph', 'Tyr', 1173)
braf_dsl = protein(namespace='HGNC', name='BRAF')
map2k1_dsl = protein(namespace='HGNC', name='MAP2K1')
tp53_dsl = protein(namespace='HGNC', name='TP53')
mdm2_dsl = protein(namespace='HGNC', name='MDM2')
egfr_dsl = protein(namespace='HGNC', name='EGFR')
chebi_17534 = abundance(namespace='CHEBI', name='17634')
chebi_4170 = abundance(namespace='CHEBI', name='4170')
chebi_17534_to_4170 = reaction(chebi_17534, chebi_4170)
grb2_dsl = protein(namespace='HGNC', name='GRB2')
sos1_dsl = protein(namespace='HGNC', name='SOS1')
sos1_phosphorylated_dsl = sos1_dsl.with_variants(pmod('Ph'))
kras_node = protein(namespace='HGNC', name='KRAS')
egfr_grb2_sos1_complex_dsl = complex_abundance([
egfr_dsl,
grb2_dsl,
sos1_dsl,
])
egfr_grb2_sos1_phos_complex_dsl = complex_abundance([
egfr_dsl,
grb2_dsl,
sos1_phosphorylated_dsl,
])
def draw(g, filename):
ag = nx.nx_agraph.to_agraph(g)
ag.draw(filename, prog='dot')
def get_edge_data(g, u, v):
return list(g.get_edge_data(u, v).values())[0]
def get_first_edge_data(g):
return list(g.edges(data=True))[0][2]
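# Shared pattern for the tests below, sketched once (uses the module-level
# fixtures defined above; illustrative only):
#   stmt = Phosphorylation(braf_agent, mek_agent, 'S', '218')
#   belgraph = pa.PybelAssembler([stmt]).make_model()  # a pybel.BELGraph
#   braf_dsl in belgraph                        # nodes are PyBEL DSL objects
#   get_first_edge_data(belgraph)[pc.RELATION]  # e.g. pc.INCREASES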
def test_simple_modification_no_evidence():
braf = Agent('BRAF', db_refs={'HGNC': '1097', 'UP': 'P15056'})
braf_kin = Agent('BRAF', activity=ActivityCondition('kinase', True),
db_refs={'HGNC': '1097', 'UP': 'P15056'})
braf_cat = Agent('BRAF', activity=ActivityCondition('catalytic', True),
db_refs={'HGNC': '1097', 'UP': 'P15056'})
mek = Agent('MAP2K1', db_refs={'HGNC': '6840', 'UP': 'Q02750'})
stmt1 = Phosphorylation(braf, mek, 'S', '218')
stmt2 = Phosphorylation(braf_kin, mek, 'S', '218')
stmt3 = Ubiquitination(braf_cat, mek, 'S', '218')
# Edge info for subject
edge1 = None
edge2 = activity('kin')
edge3 = activity('cat')
for stmt, modtuple, subj_edge in ((stmt1, phos_dsl, edge1),
(stmt2, phos_dsl, edge2),
(stmt3, ub_dsl, edge3)):
pba = pa.PybelAssembler([stmt])
belgraph = pba.make_model()
assert len(belgraph.nodes()) == 3
assert braf_dsl in belgraph
map2k1_mod_dsl = map2k1_dsl.with_variants(modtuple)
assert map2k1_mod_dsl in belgraph
assert belgraph.number_of_edges() == 2
edge_data = get_edge_data(belgraph, braf_dsl, map2k1_mod_dsl)
assert edge_data.get(pc.SUBJECT) == subj_edge
assert edge_data[pc.RELATION] == pc.INCREASES
def test_modification_with_evidences():
braf_kin = Agent('BRAF', activity=ActivityCondition('kinase', True),
db_refs={'HGNC': '1097', 'UP': 'P15056'})
mek = Agent('MAP2K1', db_refs={'HGNC': '6840', 'UP': 'Q02750'})
evidence = Evidence(source_api='test', text='evidence text', pmid='1234')
stmt = Phosphorylation(braf_kin, mek, 'S', '218', evidence=evidence)
pba = pa.PybelAssembler([stmt])
belgraph = pba.make_model()
assert len(belgraph.nodes()) == 3
assert braf_dsl in belgraph
map2k1_mod_dsl = map2k1_dsl.with_variants(phos_dsl)
assert map2k1_mod_dsl in belgraph
assert belgraph.number_of_edges() == 2
edge_data = get_edge_data(belgraph, braf_dsl, map2k1_mod_dsl)
assert edge_data.get(pc.SUBJECT) == activity('kin')
assert edge_data[pc.RELATION] == pc.INCREASES
assert edge_data[pc.EVIDENCE] == 'evidence text'
assert edge_data[pc.CITATION] == {
pc.CITATION_TYPE: pc.CITATION_TYPE_PUBMED,
pc.CITATION_REFERENCE: '1234',
}
assert 'source_api' in edge_data[pc.ANNOTATIONS]
assert edge_data[pc.ANNOTATIONS]['source_api'] == 'test'
assert 'source_id' not in edge_data[pc.ANNOTATIONS]
def test_modification_with_mutation():
braf = Agent('BRAF', mutations=[MutCondition('600', 'V', 'E')],
db_refs={'HGNC': '1097', 'UP': 'P15056'})
mek = Agent('MAP2K1', db_refs={'HGNC': '6840', 'UP': 'Q02750'})
stmt = Phosphorylation(braf, mek, 'S', '218')
pba = pa.PybelAssembler([stmt])
belgraph = pba.make_model()
# Adds in the base protein nodes as well as the variants (so 4 nodes)
assert len(belgraph.nodes()) == 4
braf_mut_dsl = braf_dsl.with_variants(hgvs('p.Val600Glu'))
assert braf_mut_dsl in belgraph
def test_activation():
braf_no_act = Agent('BRAF', db_refs={'HGNC': '1097', 'UP': 'P15056'})
braf_kin = Agent('BRAF', activity=ActivityCondition('kinase', True),
db_refs={'HGNC': '1097', 'UP': 'P15056'})
mek = Agent('MAP2K1', db_refs={'HGNC': '6840', 'UP': 'Q02750'})
stmt1 = Activation(braf_no_act, mek)
stmt2 = Activation(braf_kin, mek, 'kinase')
edge1 = {
pc.RELATION: pc.INCREASES,
pc.OBJECT: {pc.MODIFIER: pc.ACTIVITY}
}
edge2 = {
pc.RELATION: pc.INCREASES,
pc.SUBJECT: activity('kin'),
pc.OBJECT: activity('kin')
}
for stmt, edge in ((stmt1, edge1), (stmt2, edge2)):
pba = pa.PybelAssembler([stmt])
belgraph = pba.make_model()
assert len(belgraph.nodes()) == 2
assert braf_dsl in belgraph
assert map2k1_dsl in belgraph
assert belgraph.number_of_edges() == 1
edge_data = get_first_edge_data(belgraph)
assert edge_data == edge
def test_direct_activation():
braf_no_act = Agent('BRAF', db_refs={'HGNC': '1097', 'UP': 'P15056'})
braf_kin = Agent('BRAF', activity=ActivityCondition('kinase', True),
db_refs={'HGNC': '1097', 'UP': 'P15056'})
mek = Agent('MAP2K1', db_refs={'HGNC': '6840', 'UP': 'Q02750'})
stmt1_ev = Evidence(
pmid='1234',
epistemics={'direct': True},
)
stmt1 = Activation(braf_no_act, mek, evidence=stmt1_ev)
stmt2 = Activation(braf_kin, mek, 'kinase', evidence=stmt1_ev)
edge1 = {
pc.RELATION: pc.DIRECTLY_INCREASES,
pc.OBJECT: {pc.MODIFIER: pc.ACTIVITY},
pc.EVIDENCE: 'No evidence text.',
pc.CITATION: {
pc.CITATION_TYPE: pc.CITATION_TYPE_PUBMED,
pc.CITATION_REFERENCE: '1234',
},
}
edge2 = {
pc.RELATION: pc.DIRECTLY_INCREASES,
pc.SUBJECT: activity('kin'),
pc.OBJECT: activity('kin'),
pc.EVIDENCE: 'No evidence text.',
pc.CITATION: {
pc.CITATION_TYPE: pc.CITATION_TYPE_PUBMED,
pc.CITATION_REFERENCE: '1234',
},
}
for stmt, expected_edge in ((stmt1, edge1), (stmt2, edge2)):
pba = pa.PybelAssembler([stmt])
belgraph = pba.make_model()
assert len(belgraph.nodes()) == 2
assert braf_dsl in belgraph
assert map2k1_dsl in belgraph
assert belgraph.number_of_edges() == 1
edge_data = get_first_edge_data(belgraph)
assert expected_edge == edge_data
def test_inhibition():
braf_kin = Agent('BRAF', activity=ActivityCondition('kinase', True),
db_refs={'HGNC': '1097', 'UP': 'P15056'})
mek = Agent('MAP2K1', db_refs={'HGNC': '6840', 'UP': 'Q02750'})
stmt = Inhibition(braf_kin, mek, 'kinase')
edge = {
pc.RELATION: pc.DECREASES,
pc.SUBJECT: activity('kin'),
pc.OBJECT: activity('kin')
}
pba = pa.PybelAssembler([stmt])
belgraph = pba.make_model()
assert len(belgraph.nodes()) == 2
assert braf_dsl in belgraph
assert map2k1_dsl in belgraph
assert belgraph.number_of_edges() == 1
edge_data = get_first_edge_data(belgraph)
assert edge_data == edge
def test_increase_amount():
tp53 = Agent('TP53', db_refs={'HGNC': '11998'})
mdm2 = Agent('MDM2', db_refs={'HGNC': '6973'})
stmt = IncreaseAmount(tp53, mdm2)
pba = pa.PybelAssembler([stmt])
belgraph = pba.make_model()
assert len(belgraph.nodes()) == 2
assert mdm2_dsl in belgraph
assert tp53_dsl in belgraph
assert belgraph.number_of_edges() == 1
edge_data = get_first_edge_data(belgraph)
assert edge_data[pc.RELATION] == pc.INCREASES
def test_increase_amount_tscript():
tp53 = Agent('TP53', activity=ActivityCondition('transcription', True),
db_refs={'HGNC': '11998'})
mdm2 = Agent('MDM2', db_refs={'HGNC': '6973'})
stmt = IncreaseAmount(tp53, mdm2)
pba = pa.PybelAssembler([stmt])
belgraph = pba.make_model()
assert len(belgraph.nodes()) == 2
assert mdm2_dsl in belgraph
assert tp53_dsl in belgraph
assert belgraph.number_of_edges() == 1
edge_data = get_first_edge_data(belgraph)
assert edge_data[pc.RELATION] == pc.INCREASES
assert edge_data[pc.SUBJECT] == activity('tscript')
def test_gef():
gef = Agent('SOS1', mods=[ModCondition('phosphorylation')],
db_refs={'HGNC': '11187'})
ras = Agent('KRAS', db_refs={'HGNC': '6407'})
stmt = Gef(gef, ras)
pba = pa.PybelAssembler([stmt])
belgraph = pba.make_model()
assert len(belgraph) == 3
assert belgraph.number_of_edges() == 2
gef_reference_node = protein(namespace='HGNC', name='SOS1')
gef_node = gef_reference_node.with_variants(pmod('Ph'))
assert gef_reference_node in belgraph
assert gef_node in belgraph
assert kras_node in belgraph
edge_data = get_edge_data(belgraph, gef_node, kras_node)
edge = {
pc.RELATION: pc.DIRECTLY_INCREASES,
pc.SUBJECT: activity('gef'),
pc.OBJECT: activity('gtp')
}
assert edge_data == edge
def test_gap():
gap = Agent('RASA1', mods=[ModCondition('phosphorylation')],
db_refs={'HGNC': '9871'})
ras = Agent('KRAS', db_refs={'HGNC': '6407'})
stmt = Gap(gap, ras)
pba = pa.PybelAssembler([stmt])
belgraph = pba.make_model()
assert len(belgraph) == 3
assert belgraph.number_of_edges() == 2
gap_reference_node = protein(namespace='HGNC', name='RASA1')
gap_node = gap_reference_node.with_variants(pmod('Ph'))
ras_node = protein(namespace='HGNC', name='KRAS')
assert gap_reference_node in belgraph
assert gap_node in belgraph
assert ras_node in belgraph
edge_data = get_edge_data(belgraph, gap_node, ras_node)
edge = {
pc.RELATION: pc.DIRECTLY_DECREASES,
pc.SUBJECT: activity('gap'),
pc.OBJECT: activity('gtp')
}
assert edge_data == edge
def test_active_form():
ras = Agent('KRAS', mutations=[MutCondition('12', 'G', 'V')],
db_refs={'HGNC': '6407'})
mapk1_p = Agent('MAP2K1',
mods=[ModCondition('phosphorylation', 'T', '185')],
db_refs={'HGNC': hgnc_client.get_hgnc_id('MAP2K1')})
mapk1_pp = Agent('MAP2K1',
mods=[ModCondition('phosphorylation', 'T', '185'),
ModCondition('phosphorylation', 'Y', '187')],
db_refs={'HGNC': hgnc_client.get_hgnc_id('MAP2K1')})
stmt1 = ActiveForm(ras, 'gtpbound', True)
stmt2 = ActiveForm(mapk1_p, 'kinase', True)
stmt3 = ActiveForm(mapk1_pp, 'kinase', True)
for stmt in (stmt1, stmt2, stmt3):
pba = pa.PybelAssembler([stmt])
belgraph = pba.make_model()
assert len(belgraph) == 2
def test_complex():
egfr = Agent('EGFR', db_refs={'HGNC': id('EGFR')})
grb2 = Agent('GRB2', db_refs={'HGNC': id('GRB2')})
sos = Agent('SOS1', db_refs={'HGNC': id('SOS1')})
stmt = Complex([egfr, grb2, sos])
pba = pa.PybelAssembler([stmt])
belgraph = pba.make_model()
# The graph should contain the node for the complex as well as nodes
# for all of the members
assert len(belgraph) == 4
assert egfr_grb2_sos1_complex_dsl in belgraph
for member in egfr_grb2_sos1_complex_dsl.members:
assert member in belgraph
def test_rxn_no_controller():
glu = Agent('D-GLUCOSE', db_refs={'CHEBI': 'CHEBI:17634'})
g6p = Agent('GLUCOSE-6-PHOSPHATE', db_refs={'CHEBI': 'CHEBI:4170'})
stmt = Conversion(None, [glu], [g6p])
pba = pa.PybelAssembler([stmt])
belgraph = pba.make_model()
# The graph should contain the node for the reaction as well as nodes
# for all of the members
assert len(belgraph) == 3
assert chebi_17534_to_4170 in belgraph
for reactant in chebi_17534_to_4170.reactants:
assert reactant in belgraph
# TODO check edge chebi_17534_to_4170 hasReactant chebi_17534
for product in chebi_17534_to_4170.products:
assert product in belgraph
# TODO check edge chebi_17534_to_4170 hasProduct chebi_4170
def test_rxn_with_controller():
hk1 = Agent('HK1', db_refs={'HGNC': id('HK1')})
glu = Agent('D-GLUCOSE', db_refs={'CHEBI': 'CHEBI:17634'})
g6p = Agent('GLUCOSE-6-PHOSPHATE', db_refs={'CHEBI': 'CHEBI:4170'})
stmt = Conversion(hk1, [glu], [g6p])
pba = pa.PybelAssembler([stmt])
belgraph = pba.make_model()
# The graph should contain the node for the reaction as well as nodes
# for all of the members
assert len(belgraph) == 4
# check the catalyst makes it
assert protein(namespace='HGNC', name='HK1') in belgraph
# The reaction data should be the same as before
assert chebi_17534 in belgraph
assert chebi_4170 in belgraph
assert chebi_17534_to_4170 in belgraph
def test_autophosphorylation():
egfr = Agent('EGFR', db_refs={'HGNC': id('EGFR')})
stmt = Autophosphorylation(egfr, 'Y', '1173')
pba = pa.PybelAssembler([stmt])
belgraph = pba.make_model()
assert len(belgraph) == 2
assert egfr_dsl in belgraph
egfr_phos_node = egfr_dsl.with_variants(egfr_phos_dsl)
assert egfr_dsl in belgraph
assert egfr_phos_node in belgraph
assert belgraph.number_of_nodes() == 2
assert belgraph.number_of_edges() == 2
# There will be two edges between these nodes
edge_dicts = list(belgraph.get_edge_data(egfr_dsl,
egfr_phos_node).values())
assert {pc.RELATION: pc.DIRECTLY_INCREASES} in edge_dicts
# Test an autophosphorylation with a bound condition
tab1 = Agent('TAB1', db_refs={'HGNC': id('TAB1')})
p38_tab1 = Agent('MAPK14', bound_conditions=[BoundCondition(tab1)],
db_refs={'HGNC': id('MAPK14')})
stmt = Autophosphorylation(p38_tab1, 'Y', '100')
pba = pa.PybelAssembler([stmt])
belgraph = pba.make_model()
assert belgraph.number_of_nodes() == 4
assert belgraph.number_of_edges() == 4
def test_bound_condition():
egfr = Agent('EGFR', db_refs={'HGNC': id('EGFR')})
grb2 = Agent('GRB2', db_refs={'HGNC': id('GRB2')})
ras = Agent('KRAS', db_refs={'HGNC': '6407'})
sos1_bound = Agent('SOS1', mods=[ModCondition('phosphorylation')],
bound_conditions=[BoundCondition(egfr), BoundCondition(grb2)],
db_refs={'HGNC': id('SOS1')})
stmt = Gef(sos1_bound, ras)
pba = pa.PybelAssembler([stmt])
belgraph = pba.make_model()
assert len(belgraph) == 6
assert belgraph.number_of_edges() == 5
# Don't bother to check the tuple, which is now generated by
# PyBEL directly, but check the node data
assert egfr_grb2_sos1_phos_complex_dsl in belgraph
assert kras_node in belgraph
assert (egfr_grb2_sos1_phos_complex_dsl, kras_node) in belgraph.edges()
edge_data = (egfr_grb2_sos1_phos_complex_dsl, kras_node,
{
pc.RELATION: pc.DIRECTLY_INCREASES,
pc.OBJECT: activity('gtp')
})
assert edge_data in belgraph.edges(data=True)
def test_transphosphorylation():
egfr = Agent('EGFR', db_refs={'HGNC': id('EGFR')})
egfr_dimer = Agent('EGFR', bound_conditions=[BoundCondition(egfr)],
db_refs={'HGNC': id('EGFR')})
stmt = Transphosphorylation(egfr_dimer, 'Y', '1173')
pba = pa.PybelAssembler([stmt])
belgraph = pba.make_model()
assert belgraph.number_of_nodes() == 3
assert belgraph.number_of_edges() == 3
egfr_dimer_node = complex_abundance([egfr_dsl, egfr_dsl])
egfr_phos_node = egfr_dsl.with_variants(pmod('Ph', 'Tyr', 1173))
edge_data = get_edge_data(belgraph, egfr_dimer_node, egfr_phos_node)
assert edge_data == {pc.RELATION: pc.DIRECTLY_INCREASES}
"""
def test_translocation():
foxo = Agent('FOXO1', db_refs={'HGNC': id('FOXO1')})
stmt = Translocation(foxo, 'cytoplasm', 'nucleus')
nuc_go = 'GO:0005634'
cyto_go = 'GO:0005737'
pba = pa.PybelAssembler([stmt])
belgraph = pba.make_model()
assert len(belgraph) == 1
"""
def test_complex_with_pmod():
sos1_phos = Agent('SOS1',
mods=[ModCondition('phosphorylation', 'Y', '100')],
db_refs={'HGNC': id('SOS1')})
grb2 = Agent('GRB2', db_refs={'HGNC': id('GRB2')})
egfr = Agent('EGFR', db_refs={'HGNC': id('EGFR')})
stmt = Complex([sos1_phos, grb2, egfr])
pba = pa.PybelAssembler([stmt])
belgraph = pba.make_model()
assert belgraph.number_of_nodes() == 5
assert belgraph.number_of_edges() == 4
egfr_grb2_sos_phos_tyr_100 = complex_abundance([
egfr_dsl,
grb2_dsl,
sos1_dsl.with_variants(pmod('Ph', 'Tyr', 100))
])
assert sos1_dsl in belgraph
assert egfr_grb2_sos_phos_tyr_100 in belgraph
for member in egfr_grb2_sos_phos_tyr_100.members:
assert member in belgraph
def test_complex_with_complex():
grb2 = Agent('GRB2', db_refs={'HGNC': id('GRB2')})
egfr_grb2 = Agent('EGFR', db_refs={'HGNC': id('EGFR')},
bound_conditions=[BoundCondition(grb2)])
sos1_phos = Agent('SOS1',
mods=[ModCondition('phosphorylation', 'Y', '100')],
db_refs={'HGNC': id('SOS1')})
stmt = Complex([sos1_phos, egfr_grb2])
pba = pa.PybelAssembler([stmt])
belgraph = pba.make_model()
assert len(belgraph) == 6
assert belgraph.number_of_edges() == 5
egfr_grb2_complex = complex_abundance([egfr_dsl, grb2_dsl])
egfr_grb2_complex_sos1_phos_complex = complex_abundance([
egfr_grb2_complex,
sos1_dsl.with_variants(pmod('Ph', 'Tyr', 100))
])
assert egfr_grb2_complex in belgraph
for member in egfr_grb2_complex.members:
assert member in belgraph
assert egfr_grb2_complex_sos1_phos_complex in belgraph
for member in egfr_grb2_complex_sos1_phos_complex.members:
assert member in belgraph
def test_no_activity_on_bioprocess():
yfg_agent = Agent('PPP1R13L', db_refs={'HGNC': id('PPP1R13L')})
apoptosis_agent = Agent('apoptotic process', db_refs={'GO': 'GO:0006915'})
stmt = Activation(yfg_agent, apoptosis_agent)
pba = pa.PybelAssembler([stmt])
belgraph = pba.make_model()
assert len(belgraph) == 2
assert belgraph.number_of_edges() == 1
yfg_pybel = protein('HGNC', 'PPP1R13L')
apoptosis_pybel = bioprocess('GO', 'GO:0006915')
assert yfg_pybel in belgraph
assert apoptosis_pybel in belgraph
_, _, e = list(belgraph.edges(data=True))[0]
assert pc.OBJECT not in e
|
pvtodorov/indra
|
indra/tests/test_pybel_assembler.py
|
Python
|
bsd-2-clause
| 19,310
|
[
"Pybel"
] |
83b6cd3d18c2bb3eb652ee89c79f9a642b2df7dde308bfb9dccf306f5f16ccca
|
import os
import sys
from os.path import expanduser
SEPARATOR = " @ "
ENV_SSH_USER = os.getenv('AWS_FUZZ_USER')
ENV_KEY_PATH = os.getenv('AWS_FUZZ_KEY_PATH')
ENV_USE_PRIVATE_IP = os.getenv('AWS_FUZZ_PRIVATE_IP')
ENV_USE_SSM = os.getenv('AWS_FUZZ_SSM') # use AWS Secure Session Manager instead of ssh
ENV_USE_PUBLIC_DNS_OVER_IP = os.getenv('AWS_FUZZ_DNS_OVER_IP', False) # use public DNS over IP (both public or private)
ENV_TUNNEL_SSH_USER = os.getenv('AWS_FUZZ_TUNNEL_USER')
ENV_TUNNEL_KEY_PATH = os.getenv('AWS_FUZZ_TUNNEL_KEY_PATH')
ENV_SSH_COMMAND_TEMPLATE = os.getenv('AWS_FUZZ_SSH_COMMAND_TEMPLATE', "ssh {key} {user}{host}")
ENV_SSM_COMMAND_TEMPLATE = os.getenv('AWS_FUZZ_SSM_COMMAND_TEMPLATE', "aws ssm start-session --profile {profile} --target {target}")
ENV_AWS_REGIONS = os.getenv('AWS_FUZZ_AWS_REGIONS', '')
CACHE_EXPIRY_TIME = int(os.getenv('AWS_FUZZ_CACHE_EXPIRY', 3600))
CACHE_ENABLED = os.getenv('AWS_FUZZ_USE_CACHE', False)
AWS_DEFAULT_PROFILE=os.getenv('AWS_DEFAULT_PROFILE', 'default')
CACHE_DIR = '{}/{}'.format(expanduser("~"), '.aws_fuzzy_finder_cache')
CACHE_PATH = '{}/{}'.format(CACHE_DIR, os.getenv('AWS_PROFILE', AWS_DEFAULT_PROFILE))
fzf_base = 'fzf-0.17.0'
is_64_bit = sys.maxsize > 2**32
if is_64_bit:
arch = 'amd64'
else:
arch = '386'
if sys.platform.startswith('linux'):
system = 'linux'
elif sys.platform == 'darwin':
system = 'darwin'
else:
    print('Currently only macOS and Linux are supported, exiting.')
exit(1)
lib = '{}-{}_{}'.format(fzf_base, system, arch)
LIBRARY_PATH = '{}/libs/{}'.format(
os.path.dirname(os.path.abspath(__file__)),
lib
)
if ENV_AWS_REGIONS:
regions = ENV_AWS_REGIONS.split(",")
else:
regions = [os.getenv("AWS_DEFAULT_REGION")]
AWS_REGIONS = regions
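# Illustrative expansion of the SSH command template above (hypothetical
# values):
#   ENV_SSH_COMMAND_TEMPLATE.format(key='-i ~/.ssh/key.pem',
#                                   user='ubuntu@', host='203.0.113.7')
#   -> 'ssh -i ~/.ssh/key.pem ubuntu@203.0.113.7'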
NO_REGION_ERROR = """No AWS region specified.
Specify region in your boto config or add a "AWS_DEFAULT_REGION" environment variable.
$ export AWS_DEFAULT_REGION="<your_region_code>"
For more info visit:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions"""
NO_CREDENTIALS_ERROR = """No AWS credentials specified.
Make sure to set your aws_access_key_id, aws_secret_access_key and region in your boto config
as described here: http://boto3.readthedocs.io/en/latest/guide/configuration.html"""
WRONG_CREDENTIALS_ERROR = "Authentication failure, check your AWS credentials"
|
pmazurek/aws-fuzzy-finder
|
aws_fuzzy_finder/settings.py
|
Python
|
mit
| 2,400
|
[
"VisIt"
] |
e9455cb3bdfdb0310d288017bd41fe33b552541867cd6fb26008b35a9782bea3
|
# -*- Mode: python; tab-width: 4; indent-tabs-mode:nil; coding:utf-8 -*-
# vim: tabstop=4 expandtab shiftwidth=4 softtabstop=4 fileencoding=utf-8
#
# MDAnalysis --- https://www.mdanalysis.org
# Copyright (c) 2006-2017 The MDAnalysis Development Team and contributors
# (see the file AUTHORS for the full list of names)
#
# Released under the GNU Public Licence, v2 or any higher version
#
# Please cite your use of MDAnalysis in published work:
#
# R. J. Gowers, M. Linke, J. Barnoud, T. J. E. Reddy, M. N. Melo, S. L. Seyler,
# D. L. Dotson, J. Domanski, S. Buchoux, I. M. Kenney, and O. Beckstein.
# MDAnalysis: A Python package for the rapid analysis of molecular dynamics
# simulations. In S. Benthall and S. Rostrup editors, Proceedings of the 15th
# Python in Science Conference, pages 102-109, Austin, TX, 2016. SciPy.
# doi: 10.25080/majora-629e541a-00e
#
# N. Michaud-Agrawal, E. J. Denning, T. B. Woolf, and O. Beckstein.
# MDAnalysis: A Toolkit for the Analysis of Molecular Dynamics Simulations.
# J. Comput. Chem. 32 (2011), 2319--2327, doi:10.1002/jcc.21787
#
import MDAnalysis as mda
import numpy as np
import pytest
from MDAnalysisTests.datafiles import (COORDINATES_XTC, COORDINATES_TOPOLOGY)
from numpy.testing import assert_almost_equal, assert_equal
def test_get_bad_auxreader_format_raises_ValueError():
    # should raise a ValueError when no AuxReader matches the specified format
with pytest.raises(ValueError):
mda.auxiliary.core.get_auxreader_for(format='bad-format')
class BaseAuxReference(object):
## assumes the reference auxiliary data has 5 steps, with three values
## for each step: i, 2*i and 2^i, where i is the step number.
## If a particular AuxReader is such that auxiliary data is read in a
## format other than np.array([i, 2*i, 2**i]), format_data() should be
    ## overridden to return the appropriate format
def __init__(self):
self.n_steps = 5
self.dt = 1
self.initial_time = 0
self.name = 'test'
# reference description of the (basic) auxiliary reader. Will
# have to add 'format' and 'auxdata' when creating the reference
# for each particular reader
self.description= {'dt':self.dt, 'represent_ts_as':'closest',
'initial_time':self.initial_time, 'time_selector':None,
'data_selector':None, 'constant_dt':True,
'cutoff':-1, 'auxname':self.name}
def reference_auxstep(i):
# create a reference AuxStep for step i
auxstep = mda.auxiliary.base.AuxStep(dt=self.dt,
initial_time=self.initial_time)
auxstep.step = i
auxstep._data = self.format_data([i, 2*i, 2**i])
return auxstep
self.auxsteps = [reference_auxstep(i) for i in range(self.n_steps)]
## testing __getitem__ with slice and list. Should allow us to iterate
## through the specified auxiliary steps...
self.iter_list = [1, -2]
self.iter_list_auxsteps = [self.auxsteps[1], self.auxsteps[3]]
self.iter_slice = slice(None, None, 2) # every second step
self.iter_slice_auxsteps = [self.auxsteps[0], self.auxsteps[2],
self.auxsteps[4]]
def reference_timestep(dt=1, offset=0):
# return a trajectory timestep with specified dt, offset + move to
# frame 1; for use in auxiliary reading of different timesteps
ts = mda.coordinates.base.Timestep(0, dt=dt,
time_offset=offset)
ts.frame = 1
return ts
## test reading a timestep with lower frequency. Auxiliary steps with
## times between [1ps, 3ps) will be assigned to this timestep, i.e.
## step 1 (1 ps) and step 2 (2 ps).
self.lower_freq_ts = reference_timestep(dt=2, offset=0)
# 'closest' representative value will match step 2
self.lowf_closest_rep = self.format_data([2, 2*2, 2**2])
# 'average' representative value
self.lowf_average_rep = self.format_data([1.5, 3, 3])
        ## test reading a timestep with higher frequency. Auxiliary steps with
## times between [0.25ps, 0.75ps) will be assigned to this timestep, i.e.
## no auxiliary steps
self.higher_freq_ts = reference_timestep(dt=0.5, offset=0)
self.highf_rep = self.format_data([np.nan, np.nan, np.nan])
## test reading a timestep that is offset. Auxiliary steps with
## times between [0.75ps, 1.75ps) will be assigned to this timestep, i.e.
## step 1 (1 ps)
self.offset_ts = reference_timestep(dt=1, offset=0.25)
# 'closest' representative value will match step 1 data
self.offset_closest_rep = self.format_data([1, 2*1, 2**1])
## testing cutoff for representative values
self.cutoff = 0
        # for 'average': use low frequency timestep, only step 2 within 0ps cutoff
self.lowf_cutoff_average_rep = self.format_data([2, 2*2, 2**2])
# for 'closest': use offset timestep; no timestep within 0ps cutoff
self.offset_cutoff_closest_rep = self.format_data([np.nan, np.nan, np.nan])
        ## testing selection of time/data. Overload for each auxiliary format
        ## as appropriate.
# default None behavior set here so won't get errors when time/data
# selection not implemented.
self.time_selector = None
self.select_time_ref = np.arange(self.n_steps)
self.data_selector = None
self.select_data_ref = [self.format_data([2*i, 2**i]) for i in range(self.n_steps)]
def format_data(self, data):
## overload if auxiliary reader will read data with a format different
## to e.g. np.array([0, 0, 1])
return np.array(data)
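    # For step i the reference data is np.array([i, 2*i, 2**i]); e.g. step 3
    # yields array([3, 6, 8]). Reference classes for on-disk formats (such
    # as XVG) override format_data() to wrap the values appropriately.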
class BaseAuxReaderTest(object):
def test_n_steps(self, ref, reader):
assert len(reader) == ref.n_steps, "number of steps does not match"
def test_dt(self, ref, reader):
assert reader.dt == ref.dt, "dt does not match"
def test_initial_time(self, ref, reader):
assert reader.initial_time == ref.initial_time, "initial time does not match"
def test_first_step(self, ref, reader):
# on first loading we should start at step 0
assert_auxstep_equal(reader.auxstep, ref.auxsteps[0])
def test_next(self, ref, reader):
# should take us to step 1
next(reader)
assert_auxstep_equal(reader.auxstep, ref.auxsteps[1])
def test_rewind(self, ref, reader):
# move to step 1...
reader.next()
# now rewind should read step 0
reader.rewind()
assert_auxstep_equal(reader.auxstep, ref.auxsteps[0])
def test_move_to_step(self, ref, reader):
# should take us to step 3
reader[3]
assert_auxstep_equal(reader.auxstep, ref.auxsteps[3])
def test_last_step(self, ref, reader):
# should take us to the last step
reader[-1]
assert_auxstep_equal(reader.auxstep, ref.auxsteps[-1])
def test_next_past_last_step_raises_StopIteration(self, ref, reader):
# should take us to the last step
reader[-1]
# if we try to move to next step from here, should raise StopIteration
with pytest.raises(StopIteration):
reader.next()
def test_move_to_invalid_step_raises_IndexError(self, ref, reader):
        # last step is number n_steps - 1; if we try to move to step number
        # n_steps we should get an IndexError
with pytest.raises(IndexError):
reader[ref.n_steps]
def test_invalid_step_to_time_raises_ValueError(self, reader):
# last step is number n_steps-1; if we try to run step_to_time on
# step n_steps we should get a ValueError
with pytest.raises(ValueError):
reader.step_to_time(reader.n_steps)
def test_iter(self,ref, reader):
for i, val in enumerate(reader):
assert_auxstep_equal(val, ref.auxsteps[i])
def test_iter_list(self, ref, reader):
# test using __getitem__ with a list
for i, val in enumerate(reader[ref.iter_list]):
assert_auxstep_equal(val, ref.iter_list_auxsteps[i])
def test_iter_slice(self, ref, reader):
# test using __getitem__ with a slice
for i, val in enumerate(reader[ref.iter_slice]):
assert_auxstep_equal(val, ref.iter_slice_auxsteps[i])
def test_slice_start_after_stop_raises_IndexError(self, reader):
#should raise IndexError if start frame after end frame
with pytest.raises(IndexError):
reader[2:1]
def test_slice_out_of_range_raises_IndexError(self, ref, reader):
# should raise IndexError if indices our of range
with pytest.raises(IndexError):
reader[ref.n_steps:]
def test_slice_non_int_raises_TypeError(self, reader):
# should raise TypeError if try pass in non-integer to slice
with pytest.raises(TypeError):
reader['a':]
def test_bad_represent_raises_ValueError(self, reader):
# if we try to set represent_ts_as to something not listed as a
# valid option, we should get a ValueError
with pytest.raises(ValueError):
reader.represent_ts_as = 'invalid-option'
def test_time_selector(self, ref):
# reload the reader, passing a time selector
reader = ref.reader(ref.testdata,
time_selector = ref.time_selector)
# time should still match reference time for each step
for i, val in enumerate(reader):
assert val.time == ref.select_time_ref[i], "time for step {} does not match".format(i)
def test_time_non_constant_dt(self, reader):
reader.constant_dt = False
with pytest.raises(ValueError, match="If dt is not constant, must have a valid time selector"):
reader.time
def test_time_selector_manual(self, ref):
reader = ref.reader(ref.testdata,
time_selector = ref.time_selector)
# Manually set time selector
reader.time_selector = ref.time_selector
for i, val in enumerate(reader):
assert val.time == ref.select_time_ref[i], "time for step {} does not match".format(i)
def test_data_selector(self, ref):
# reload reader, passing in a data selector
reader = ref.reader(ref.testdata,
data_selector=ref.data_selector)
# data should match reference data for each step
for i, val in enumerate(reader):
assert_equal(val.data, ref.select_data_ref[i], "data for step {0} does not match".format(i))
def test_no_constant_dt(self, ref):
## assume we can select time...
# reload reader, without assuming constant dt
reader = ref.reader(ref.testdata,
time_selector=ref.time_selector,
constant_dt=False)
# time should match reference for selecting time, for each step
for i, val in enumerate(reader):
assert val.time == ref.select_time_ref[i], "data for step {} does not match".format(i)
def test_update_ts_without_auxname_raises_ValueError(self, ref):
# reload reader without auxname
with pytest.raises(ValueError):
reader = ref.reader(ref.testdata)
ts = ref.lower_freq_ts
reader.update_ts(ts)
def test_read_lower_freq_timestep(self, ref, reader):
# test reading a timestep with lower frequency
ts = ref.lower_freq_ts
reader.update_ts(ts)
# check the value set in ts is as we expect
assert_almost_equal(ts.aux.test, ref.lowf_closest_rep,
err_msg="Representative value in ts.aux does not match")
def test_represent_as_average(self, ref, reader):
# test the 'average' option for 'represent_ts_as'
# reset the represent method to 'average'...
reader.represent_ts_as = 'average'
# read timestep; use the low freq timestep
ts = ref.lower_freq_ts
reader.update_ts(ts)
# check the representative value set in ts is as expected
assert_almost_equal(ts.aux.test, ref.lowf_average_rep,
err_msg="Representative value does not match when "
"using with option 'average'")
def test_represent_as_average_with_cutoff(self, ref, reader):
# test the 'represent_ts_as' 'average' option when we have a cutoff set
# set the cutoff...
reader.cutoff = ref.cutoff
# read timestep; use the low frequency timestep
ts = ref.lower_freq_ts
reader.update_ts(ts)
# check representative value set in ts is as expected
assert_almost_equal(ts.aux.test, ref.lowf_cutoff_average_rep,
err_msg="Representative value does not match when "
"applying cutoff")
def test_read_offset_timestep(self, ref, reader):
# try reading a timestep offset from auxiliary
ts = ref.offset_ts
reader.update_ts(ts)
assert_almost_equal(ts.aux.test, ref.offset_closest_rep,
err_msg="Representative value in ts.aux does not match")
def test_represent_as_closest_with_cutoff(self, ref, reader):
# test the 'represent_ts_as' 'closest' option when we have a cutoff set
# set the cutoff...
reader.cutoff = ref.cutoff
# read timestep; use the offset timestep
ts = ref.offset_ts
reader.update_ts(ts)
# check representative value set in ts is as expected
assert_almost_equal(ts.aux.test, ref.offset_cutoff_closest_rep,
err_msg="Representative value does not match when "
"applying cutoff")
def test_read_higher_freq_timestep(self, ref, reader):
# try reading a timestep with higher frequency
ts = ref.higher_freq_ts
reader.update_ts(ts)
assert_almost_equal(ts.aux.test, ref.highf_rep,
err_msg="Representative value in ts.aux does not match")
def test_get_auxreader_for(self, ref, reader):
# check guesser gives us right reader
reader = mda.auxiliary.core.get_auxreader_for(ref.testdata)
assert reader == ref.reader
def test_iterate_through_trajectory(self, ref, ref_universe):
# check the representative values of aux for each frame are as expected
# trajectory here has same dt, offset; so there's a direct correspondence
# between frames and steps
for i, ts in enumerate(ref_universe.trajectory):
assert_equal(ts.aux.test, ref.auxsteps[i].data,
"representative value does not match when "
"iterating through all trajectory timesteps")
def test_iterate_as_auxiliary_from_trajectory(self, ref, ref_universe):
# check representative values of aux for each frame are as expected
# trajectory here has same dt, offset, so there's a direct correspondence
# between frames and steps, and iter_as_aux will run through all frames
for i, ts in enumerate(ref_universe.trajectory.iter_as_aux('test')):
assert_equal(ts.aux.test, ref.auxsteps[i].data,
"representative value does not match when "
"iterating through all trajectory timesteps")
def test_auxiliary_read_ts_rewind(self, ref_universe):
# AuxiliaryBase.read_ts() should retrieve the correct step after
# reading the last one. Issue #2674 describes a case in which the
# object gets stuck on the last frame.
aux_info_0 = ref_universe.trajectory[0].aux.test
ref_universe.trajectory[-1]
aux_info_0_rewind = ref_universe.trajectory[0].aux.test
assert_equal(aux_info_0, aux_info_0_rewind,
"aux info was retrieved incorrectly "
"after reading the last step")
def test_get_description(self, ref, reader):
description = reader.get_description()
for attr in ref.description:
assert description[attr] == ref.description[attr], "'Description' does not match for {}".format(attr)
def test_load_from_description(self, reader):
description = reader.get_description()
new = mda.auxiliary.core.auxreader(**description)
assert new == reader, "AuxReader reloaded from description does not match"
def test_step_to_frame_out_of_bounds(self, reader, ref):
ts = mda.coordinates.base.Timestep(0, dt=ref.dt)
assert reader.step_to_frame(-1, ts) is None
assert reader.step_to_frame(reader.n_steps, ts) is None
def test_step_to_frame_no_time_diff(self, reader, ref):
ts = mda.coordinates.base.Timestep(0, dt=ref.dt)
for idx in range(reader.n_steps):
assert reader.step_to_frame(idx, ts) == idx
def test_step_to_frame_time_diff(self, reader, ref):
# Timestep is 0.1 longer than auxiliary data
ts = mda.coordinates.base.Timestep(0, dt=ref.dt + 0.1)
# Test all 5 frames
for idx in range(5):
frame, time_diff = reader.step_to_frame(idx, ts, return_time_diff=True)
assert frame == idx
np.testing.assert_almost_equal(time_diff, idx * 0.1)
def test_go_to_step_fail(self, reader):
with pytest.raises(ValueError, match="Step index [0-9]* is not valid for auxiliary"):
reader._go_to_step(reader.n_steps)
@pytest.mark.parametrize("constant", [True, False])
def test_set_constant_dt(self, reader, constant):
reader.constant_dt = constant
assert reader.constant_dt == constant
def assert_auxstep_equal(A, B):
if not isinstance(A, mda.auxiliary.base.AuxStep):
raise AssertionError('A is not of type AuxStep')
if not isinstance(B, mda.auxiliary.base.AuxStep):
raise AssertionError('B is not of type AuxStep')
if A.step != B.step:
raise AssertionError('A and B refer to different steps: A.step = {}, '
'B.step = {}'.format(A.step, B.step))
if A.time != B.time:
raise AssertionError('A and B have different times: A.time = {}, '
'B.time = {}'.format(A.time, B.time))
    if any(A.data != B.data):
raise AssertionError('A and B have different data: A.data = {}, '
'B.data = {}'.format(A.data, B.data))
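# Hedged usage sketch: compare the current steps of two freshly-opened
# auxiliary readers. The file name below is a placeholder; any data file
# with a guessable auxiliary format works with mda.auxiliary.core.auxreader.
def _example_assert_auxstep_equal(auxdata='aux.xvg'):  # pragma: no cover
    r1 = mda.auxiliary.core.auxreader(auxdata)
    r2 = mda.auxiliary.core.auxreader(auxdata)
    # both readers start on step 0, so their AuxSteps should compare equal
    assert_auxstep_equal(r1.auxstep, r2.auxstep)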
|
MDAnalysis/mdanalysis
|
testsuite/MDAnalysisTests/auxiliary/base.py
|
Python
|
gpl-2.0
| 18,784
|
[
"MDAnalysis"
] |
70cf220d0f85bdeb3e9b75b1c8fa30f4ed96830b620ea1b1a9235436eea4399d
|
########################################################################
# File : ResourcesDefaults.py
# Author : Ricardo Graciani
########################################################################
"""
Some Helper class to access Default options for Different Resources (CEs, SEs, Catalags,...)
"""
from __future__ import print_function
from __future__ import absolute_import
from __future__ import division
from diraccfg import CFG
from DIRAC.ConfigurationSystem.Client.Helpers.Path import cfgResourceSection, cfgPath, cfgInstallPath, cfgPathToList
__RCSID__ = "$Id$"
def defaultSection(resource):
"""
Build the path for the Defaults section
"""
return cfgPath(cfgResourceSection, "Defaults", resource)
def getComputingElementDefaults(ceName="", ceType="", cfg=None, currentSectionPath=""):
"""
Return cfgDefaults with defaults for the given CEs defined either in arguments or in the provided cfg
"""
cesCfg = CFG()
if cfg:
try:
cesCfg.loadFromFile(cfg)
cesPath = cfgInstallPath("ComputingElements")
if cesCfg.isSection(cesPath):
for section in cfgPathToList(cesPath):
cesCfg = cesCfg[section]
except Exception:
return CFG()
# Overwrite the cfg with Command line arguments
if ceName:
if not cesCfg.isSection(ceName):
cesCfg.createNewSection(ceName)
if currentSectionPath:
# Add Options from Command Line
optionsDict = __getExtraOptions(currentSectionPath)
for name, value in optionsDict.items():
cesCfg[ceName].setOption(name, value) # pylint: disable=no-member
if ceType:
cesCfg[ceName].setOption("CEType", ceType) # pylint: disable=no-member
ceDefaultSection = cfgPath(defaultSection("ComputingElements"))
# Load Default for the given type from Central configuration is defined
ceDefaults = __gConfigDefaults(ceDefaultSection)
for ceName in cesCfg.listSections():
if "CEType" in cesCfg[ceName]:
ceType = cesCfg[ceName]["CEType"]
if ceType in ceDefaults:
for option in ceDefaults[ceType].listOptions(): # pylint: disable=no-member
if option not in cesCfg[ceName]:
cesCfg[ceName].setOption(
option, ceDefaults[ceType][option] # pylint: disable=unsubscriptable-object
)
return cesCfg
def __gConfigDefaults(defaultPath):
"""
Build a cfg from a Default Section
"""
from DIRAC import gConfig
cfgDefaults = CFG()
result = gConfig.getSections(defaultPath)
if not result["OK"]:
return cfgDefaults
for name in result["Value"]:
typePath = cfgPath(defaultPath, name)
cfgDefaults.createNewSection(name)
result = gConfig.getOptionsDict(typePath)
if result["OK"]:
optionsDict = result["Value"]
for option, value in optionsDict.items():
cfgDefaults[name].setOption(option, value)
return cfgDefaults
def __getExtraOptions(currentSectionPath):
from DIRAC import gConfig
optionsDict = {}
if not currentSectionPath:
return optionsDict
result = gConfig.getOptionsDict(currentSectionPath)
if not result["OK"]:
return optionsDict
return result["Value"]
|
ic-hep/DIRAC
|
src/DIRAC/ConfigurationSystem/Client/Helpers/ResourcesDefaults.py
|
Python
|
gpl-3.0
| 3,443
|
[
"DIRAC"
] |
7ba066f55ae8ce044ba4ab8f0295ed95dfe70578e65bcd82b0c3cbc0cf26b05a
|
from __future__ import print_function
import os
import numpy as np
import subprocess
# flopy imports
from ..modflow.mfdisu import ModflowDisU
from .util_array import read1d, Util2d
from ..export.shapefile_utils import shp2recarray
from ..mbase import which
try:
import shapefile
except ImportError:
raise Exception('Error importing shapefile: ' +
'try pip install pyshp')
# todo
# creation of line and polygon shapefiles from features (holes!)
# program layer functionality for plot method
# support an asciigrid option for top and bottom interpolation
# add intersection capability
def features_to_shapefile(features, featuretype, filename):
"""
Write a shapefile for the features of type featuretype.
Parameters
----------
features : list
List of point, line, or polygon features
featuretype : str
Must be 'point', 'line', or 'polygon'
filename : string
name of the shapefile to write
Returns
-------
None
"""
if featuretype.lower() not in ['point', 'line', 'polygon']:
raise Exception('Unrecognized feature type: {}'.format(featuretype))
if featuretype.lower() == 'line':
wr = shapefile.Writer(shapeType=shapefile.POLYLINE)
wr.field("SHAPEID", "N", 20, 0)
for i, line in enumerate(features):
wr.line(line)
wr.record(i)
elif featuretype.lower() == 'point':
wr = shapefile.Writer(shapeType=shapefile.POINT)
wr.field("SHAPEID", "N", 20, 0)
for i, point in enumerate(features):
wr.point(point[0], point[1])
wr.record(i)
elif featuretype.lower() == 'polygon':
wr = shapefile.Writer(shapeType=shapefile.POLYGON)
wr.field("SHAPEID", "N", 20, 0)
for i, polygon in enumerate(features):
wr.poly(polygon)
wr.record(i)
wr.save(filename)
return
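# Hedged usage sketch: the coordinates and file name are made up. A 'point'
# feature is an (x, y) tuple; 'line'/'polygon' features are lists of parts,
# each part being a list of (x, y) tuples.
def _example_features_to_shapefile():  # pragma: no cover
    points = [(0., 0.), (100., 50.)]
    features_to_shapefile(points, 'point', 'example_points')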
def ndarray_to_asciigrid(fname, a, extent, nodata=1.e30):
# extent info
xmin, xmax, ymin, ymax = extent
ncol, nrow = a.shape
dx = (xmax - xmin) / ncol
assert dx == (ymax - ymin) / nrow
# header
header = 'ncols {}\n'.format(ncol)
header += 'nrows {}\n'.format(nrow)
header += 'xllcorner {}\n'.format(xmin)
header += 'yllcorner {}\n'.format(ymin)
header += 'cellsize {}\n'.format(dx)
header += 'NODATA_value {}\n'.format(np.float(nodata))
# replace nan with nodata
idx = np.isnan(a)
a[idx] = np.float(nodata)
# write
with open(fname, 'wb') as f:
f.write(header.encode('ascii'))
np.savetxt(f, a, fmt='%15.6e')
return
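# Hedged usage sketch: write a 10 x 10 array of zeros over a made-up
# 0-100 square extent (xmin, xmax, ymin, ymax); note the function asserts
# square cells, i.e. (xmax - xmin) / ncol == (ymax - ymin) / nrow.
def _example_ndarray_to_asciigrid():  # pragma: no cover
    a = np.zeros((10, 10), dtype=np.float32)
    ndarray_to_asciigrid('example.asc', a, (0., 100., 0., 100.))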
class Gridgen(object):
"""
Class to work with the gridgen program to create layered quadtree grids.
Parameters
----------
dis : flopy.modflow.ModflowDis
Flopy discretization object
model_ws : str
workspace location for creating gridgen files (default is '.')
exe_name : str
path and name of the gridgen program. (default is gridgen)
surface_interpolation : str
Default gridgen method for interpolating elevations. Valid options
include 'replicate' (default) and 'interpolate'
Notes
-----
For the surface elevations, the top of a layer uses the same surface as
the bottom of the overlying layer.
"""
def __init__(self, dis, model_ws='.', exe_name='gridgen',
surface_interpolation='replicate'):
self.nodes = 0
self.nja = 0
self.nodelay = np.zeros((dis.nlay), dtype=np.int)
self._vertdict = {}
self.dis = dis
self.model_ws = model_ws
exe_name = which(exe_name)
if exe_name is None:
raise Exception('Cannot find gridgen binary executable')
self.exe_name = os.path.abspath(exe_name)
# Set default surface interpolation for all surfaces (nlay + 1)
surface_interpolation = surface_interpolation.upper()
if surface_interpolation not in ['INTERPOLATE', 'REPLICATE']:
raise Exception('Error. Unknown surface interpolation method: '
'{}. Must be INTERPOLATE or '
'REPLICATE'.format(surface_interpolation))
self.surface_interpolation = [surface_interpolation
for k in range(dis.nlay + 1)]
# Set up a blank _active_domain list with None for each layer
self._addict = {}
self._active_domain = []
for k in range(dis.nlay):
self._active_domain.append(None)
# Set up a blank _refinement_features list with empty list for
# each layer
self._rfdict = {}
self._refinement_features = []
for k in range(dis.nlay):
self._refinement_features.append([])
# Set up blank _elev and _elev_extent dictionaries
self._asciigrid_dict = {}
return
def set_surface_interpolation(self, isurf, type, elev=None,
elev_extent=None):
"""
Parameters
----------
isurf : int
surface number where 0 is top and nlay + 1 is bottom
type : str
Must be 'INTERPOLATE', 'REPLICATE' or 'ASCIIGRID'.
elev : numpy.ndarray of shape (nr, nc) or str
Array that is used as an asciigrid. If elev is a string, then
it is assumed to be the name of the asciigrid.
elev_extent : list-like
list of xmin, xmax, ymin, ymax extents of the elev grid.
Returns
-------
None
"""
assert 0 <= isurf <= self.dis.nlay + 1
type = type.upper()
if type not in ['INTERPOLATE', 'REPLICATE', 'ASCIIGRID']:
            raise Exception('Error. Unknown surface interpolation type: '
                            '{}. Must be INTERPOLATE, REPLICATE or '
                            'ASCIIGRID'.format(type))
else:
self.surface_interpolation[isurf] = type
if type == 'ASCIIGRID':
if isinstance(elev, np.ndarray):
if elev_extent is None:
raise Exception('Error. ASCIIGRID was specified but '
'elev_extent was not.')
try:
xmin, xmax, ymin, ymax = elev_extent
                except Exception:
raise Exception('Cannot cast elev_extent into xmin, xmax, '
'ymin, ymax: {}'.format(elev_extent))
nm = '_gridgen.lay{}.asc'.format(isurf)
fname = os.path.join(self.model_ws, nm)
ndarray_to_asciigrid(fname, elev, elev_extent)
self._asciigrid_dict[isurf] = nm
elif isinstance(elev, str):
if not os.path.isfile(elev):
raise Exception('Error. elev is not a valid file: '
'{}'.format(elev))
self._asciigrid_dict[isurf] = elev
else:
                raise Exception('Error. ASCIIGRID was specified but '
                                'elev was not specified as a numpy ndarray '
                                'or valid asciigrid file.')
return
def add_active_domain(self, feature, layers):
"""
Parameters
----------
feature : str or list
feature can be either a string containing the name of a polygon
shapefile or it can be a list of polygons
layers : list
A list of layers (zero based) for which this active domain
applies.
Returns
-------
None
"""
# set nodes and nja to 0 to indicate that grid must be rebuilt
self.nodes = 0
self.nja = 0
# Create shapefile or set shapefile to feature
adname = 'ad{}'.format(len(self._addict))
if isinstance(feature, list):
# Create a shapefile
adname_w_path = os.path.join(self.model_ws, adname)
features_to_shapefile(feature, 'polygon', adname_w_path)
shapefile = adname
else:
shapefile = feature
self._addict[adname] = shapefile
sn = os.path.join(self.model_ws, shapefile + '.shp')
assert os.path.isfile(sn), 'Shapefile does not exist: {}'.format(sn)
for k in layers:
self._active_domain[k] = adname
return
def add_refinement_features(self, features, featuretype, level, layers):
"""
Parameters
----------
features : str or list
features can be either a string containing the name of a shapefile
or it can be a list of points, lines, or polygons
featuretype : str
Must be either 'point', 'line', or 'polygon'
level : int
The level of refinement for this features
layers : list
A list of layers (zero based) for which this refinement features
applies.
Returns
-------
None
"""
# set nodes and nja to 0 to indicate that grid must be rebuilt
self.nodes = 0
self.nja = 0
# Create shapefile or set shapefile to feature
rfname = 'rf{}'.format(len(self._rfdict))
if isinstance(features, list):
rfname_w_path = os.path.join(self.model_ws, rfname)
features_to_shapefile(features, featuretype, rfname_w_path)
shapefile = rfname
else:
shapefile = features
self._rfdict[rfname] = [shapefile, featuretype, level]
sn = os.path.join(self.model_ws, shapefile + '.shp')
assert os.path.isfile(sn), 'Shapefile does not exist: {}'.format(sn)
for k in layers:
self._refinement_features[k].append(rfname)
return
def build(self, verbose=False):
"""
Build the quadtree grid
Parameters
----------
verbose : bool
If true, print the results of the gridgen command to the terminal
(default is False)
Returns
-------
None
"""
fname = os.path.join(self.model_ws, '_gridgen_build.dfn')
f = open(fname, 'w')
# Write the basegrid information
f.write(self._mfgrid_block())
f.write(2 * '\n')
# Write the quadtree builder block
f.write(self._builder_block())
f.write(2 * '\n')
# Write the active domain blocks
f.write(self._ad_blocks())
f.write(2 * '\n')
# Write the refinement features
f.write(self._rf_blocks())
f.write(2 * '\n')
f.close()
# Command: gridgen quadtreebuilder _gridgen_build.dfn
qtgfname = os.path.join(self.model_ws, 'quadtreegrid.dfn')
if os.path.isfile(qtgfname):
os.remove(qtgfname)
cmds = [self.exe_name, 'quadtreebuilder', '_gridgen_build.dfn']
buff = subprocess.check_output(cmds, cwd=self.model_ws)
if verbose:
print(buff)
assert os.path.isfile(qtgfname)
# Export the grid to shapefiles, usgdata, and vtk files
self.export(verbose)
# Create a dictionary that relates nodenumber to vertices
self._mkvertdict()
# read and save nodelay array to self
fname = os.path.join(self.model_ws, 'qtg.nodesperlay.dat')
f = open(fname, 'r')
self.nodelay = read1d(f, self.nodelay)
f.close()
# Create a recarray of the grid polygon shapefile
shapename = os.path.join(self.model_ws, 'qtgrid')
self.qtra = shp2recarray(shapename)
return
def get_vertices(self, nodenumber):
"""
Return a list of 5 vertices for the cell. The first vertex should
be the same as the last vertex.
Parameters
----------
nodenumber
Returns
-------
list of vertices : list
"""
return self._vertdict[nodenumber]
def get_center(self, nodenumber):
"""
Return the cell center x and y coordinates
Parameters
----------
nodenumber
Returns
-------
(x, y) : tuple
"""
vts = self.get_vertices(nodenumber)
xmin = vts[0][0]
xmax = vts[1][0]
ymin = vts[2][1]
ymax = vts[0][1]
return ((xmin + xmax) * 0.5, (ymin + ymax) * 0.5)
def export(self, verbose=False):
"""
Export the quadtree grid to shapefiles, usgdata, and vtk
Returns
-------
None
"""
# Create the export definition file
fname = os.path.join(self.model_ws, '_gridgen_export.dfn')
f = open(fname, 'w')
f.write('LOAD quadtreegrid.dfn\n')
f.write('\n')
f.write(self._grid_export_blocks())
f.close()
assert os.path.isfile(fname), \
'Could not create export dfn file: {}'.format(fname)
# Export shapefiles
cmds = [self.exe_name, 'grid_to_shapefile_poly', '_gridgen_export.dfn']
buff = []
try:
buff = subprocess.check_output(cmds, cwd=self.model_ws)
if verbose:
print(buff)
fn = os.path.join(self.model_ws, 'qtgrid.shp')
assert os.path.isfile(fn)
        except Exception:
print('Error. Failed to export polygon shapefile of grid', buff)
cmds = [self.exe_name, 'grid_to_shapefile_point',
'_gridgen_export.dfn']
buff = []
try:
buff = subprocess.check_output(cmds, cwd=self.model_ws)
if verbose:
print(buff)
fn = os.path.join(self.model_ws, 'qtgrid_pt.shp')
assert os.path.isfile(fn)
        except Exception:
            print('Error. Failed to export point shapefile of grid', buff)
# Export the usg data
cmds = [self.exe_name, 'grid_to_usgdata', '_gridgen_export.dfn']
buff = []
try:
buff = subprocess.check_output(cmds, cwd=self.model_ws)
if verbose:
print(buff)
fn = os.path.join(self.model_ws, 'qtg.nod')
assert os.path.isfile(fn)
        except Exception:
print('Error. Failed to export usgdata', buff)
# Export vtk
cmds = [self.exe_name, 'grid_to_vtk', '_gridgen_export.dfn']
buff = []
try:
buff = subprocess.check_output(cmds, cwd=self.model_ws)
if verbose:
print(buff)
fn = os.path.join(self.model_ws, 'qtg.vtu')
assert os.path.isfile(fn)
        except Exception:
print('Error. Failed to export vtk file', buff)
cmds = [self.exe_name, 'grid_to_vtk_sv', '_gridgen_export.dfn']
buff = []
try:
buff = subprocess.check_output(cmds, cwd=self.model_ws)
if verbose:
print(buff)
fn = os.path.join(self.model_ws, 'qtg_sv.vtu')
assert os.path.isfile(fn)
        except Exception:
print('Error. Failed to export shared vertex vtk file', buff)
return
def plot(self, ax=None, layer=0, edgecolor='k', facecolor='none',
cmap='Dark2', a=None, masked_values=None, **kwargs):
"""
Plot the grid. This method will plot the grid using the shapefile
that was created as part of the build method.
Note that the layer option is not working yet.
Parameters
----------
ax : matplotlib.pyplot axis
The plot axis. If not provided it, plt.gca() will be used.
If there is not a current axis then a new one will be created.
layer : int
Layer number to plot
cmap : string
Name of colormap to use for polygon shading (default is 'Dark2')
edgecolor : string
Color name. (Default is 'scaled' to scale the edge colors.)
facecolor : string
Color name. (Default is 'scaled' to scale the face colors.)
a : numpy.ndarray
Array to plot.
masked_values : iterable of floats, ints
Values to mask.
kwargs : dictionary
Keyword arguments that are passed to
PatchCollection.set(``**kwargs``). Some common kwargs would be
'linewidths', 'linestyles', 'alpha', etc.
Returns
-------
pc : matplotlib.collections.PatchCollection
"""
import matplotlib.pyplot as plt
from ..plot import plot_shapefile, shapefile_extents
if ax is None:
ax = plt.gca()
shapename = os.path.join(self.model_ws, 'qtgrid')
xmin, xmax, ymin, ymax = shapefile_extents(shapename)
idx = np.where(self.qtra.layer == layer)[0]
pc = plot_shapefile(shapename, ax=ax, edgecolor=edgecolor,
facecolor=facecolor, cmap=cmap, a=a,
masked_values=masked_values, idx=idx, **kwargs)
plt.xlim(xmin, xmax)
plt.ylim(ymin, ymax)
return pc
def get_nod_recarray(self):
"""
Load the qtg.nod file and return as a numpy recarray
Returns
-------
node_ra : ndarray
Recarray representation of the node file with zero-based indexing
"""
# nodes, nlay, ivsd, itmuni, lenuni, idsymrd, laycbd
fname = os.path.join(self.model_ws, 'qtg.nod')
dt = np.dtype([('node', np.int), ('layer', np.int),
('x', np.float), ('y', np.float), ('z', np.float),
('dx', np.float), ('dy', np.float), ('dz', np.float),
])
node_ra = np.genfromtxt(fname, dtype=dt, skip_header=1)
node_ra['layer'] -= 1
node_ra['node'] -= 1
return node_ra
def get_disu(self, model, nper=1, perlen=1, nstp=1, tsmult=1, steady=True,
itmuni=4, lenuni=2):
# nodes, nlay, ivsd, itmuni, lenuni, idsymrd, laycbd
fname = os.path.join(self.model_ws, 'qtg.nod')
f = open(fname, 'r')
line = f.readline()
ll = line.strip().split()
nodes = int(ll.pop(0))
f.close()
nlay = self.dis.nlay
ivsd = 0
idsymrd = 0
laycbd = 0
# Save nodes
self.nodes = nodes
# nodelay
nodelay = np.empty((nlay), dtype=np.int)
fname = os.path.join(self.model_ws, 'qtg.nodesperlay.dat')
f = open(fname, 'r')
nodelay = read1d(f, nodelay)
f.close()
# top
top = [0] * nlay
for k in range(nlay):
fname = os.path.join(self.model_ws,
'quadtreegrid.top{}.dat'.format(k + 1))
f = open(fname, 'r')
tpk = np.empty((nodelay[k]), dtype=np.float32)
tpk = read1d(f, tpk)
f.close()
if tpk.min() == tpk.max():
tpk = tpk.min()
else:
tpk = Util2d(model, (1, nodelay[k]), np.float32,
np.reshape(tpk, (1, nodelay[k])),
name='top {}'.format(k + 1))
top[k] = tpk
# bot
bot = [0] * nlay
for k in range(nlay):
fname = os.path.join(self.model_ws,
'quadtreegrid.bot{}.dat'.format(k + 1))
f = open(fname, 'r')
btk = np.empty((nodelay[k]), dtype=np.float32)
btk = read1d(f, btk)
f.close()
if btk.min() == btk.max():
btk = btk.min()
else:
btk = Util2d(model, (1, nodelay[k]), np.float32,
np.reshape(btk, (1, nodelay[k])),
name='bot {}'.format(k + 1))
bot[k] = btk
# area
area = [0] * nlay
fname = os.path.join(self.model_ws, 'qtg.area.dat')
f = open(fname, 'r')
anodes = np.empty((nodes), dtype=np.float32)
anodes = read1d(f, anodes)
f.close()
istart = 0
for k in range(nlay):
istop = istart + nodelay[k]
ark = anodes[istart: istop]
if ark.min() == ark.max():
ark = ark.min()
else:
ark = Util2d(model, (1, nodelay[k]), np.float32,
np.reshape(ark, (1, nodelay[k])),
name='area layer {}'.format(k + 1))
area[k] = ark
istart = istop
# iac
iac = np.empty((nodes), dtype=np.int)
fname = os.path.join(self.model_ws, 'qtg.iac.dat')
f = open(fname, 'r')
iac = read1d(f, iac)
f.close()
# Calculate njag and save as nja to self
njag = iac.sum()
self.nja = njag
# ja
ja = np.empty((njag), dtype=np.int)
fname = os.path.join(self.model_ws, 'qtg.ja.dat')
f = open(fname, 'r')
ja = read1d(f, ja)
f.close()
# ivc
fldr = np.empty((njag), dtype=np.int)
fname = os.path.join(self.model_ws, 'qtg.fldr.dat')
f = open(fname, 'r')
fldr = read1d(f, fldr)
ivc = np.where(abs(fldr) == 3, 1, 0)
f.close()
cl1 = None
cl2 = None
# cl12
cl12 = np.empty((njag), dtype=np.float32)
fname = os.path.join(self.model_ws, 'qtg.c1.dat')
f = open(fname, 'r')
cl12 = read1d(f, cl12)
f.close()
# fahl
fahl = np.empty((njag), dtype=np.float32)
fname = os.path.join(self.model_ws, 'qtg.fahl.dat')
f = open(fname, 'r')
fahl = read1d(f, fahl)
f.close()
# create dis object instance
disu = ModflowDisU(model, nodes=nodes, nlay=nlay, njag=njag, ivsd=ivsd,
nper=nper, itmuni=itmuni, lenuni=lenuni,
idsymrd=idsymrd, laycbd=laycbd, nodelay=nodelay,
top=top, bot=bot, area=area, iac=iac, ja=ja,
ivc=ivc, cl1=cl1, cl2=cl2, cl12=cl12, fahl=fahl,
perlen=perlen, nstp=nstp, tsmult=tsmult,
steady=steady)
# return dis object instance
return disu
def get_gridprops(self):
gridprops = {}
# nodes, nlay, ivsd, itmuni, lenuni, idsymrd, laycbd
fname = os.path.join(self.model_ws, 'qtg.nod')
f = open(fname, 'r')
line = f.readline()
ll = line.strip().split()
nodes = int(ll.pop(0))
f.close()
nlay = self.dis.nlay
gridprops['nodes'] = nodes
gridprops['nlay'] = nlay
# nodelay
nodelay = np.empty((nlay), dtype=np.int)
fname = os.path.join(self.model_ws, 'qtg.nodesperlay.dat')
f = open(fname, 'r')
nodelay = read1d(f, nodelay)
f.close()
gridprops['nodelay'] = nodelay
# top
top = np.empty((nodes), dtype=np.float32)
istart = 0
for k in range(nlay):
istop = istart + nodelay[k]
fname = os.path.join(self.model_ws,
'quadtreegrid.top{}.dat'.format(k + 1))
f = open(fname, 'r')
tpk = np.empty((nodelay[k]), dtype=np.float32)
tpk = read1d(f, tpk)
f.close()
top[istart:istop] = tpk
istart = istop
gridprops['top'] = top
# bot
bot = np.empty((nodes), dtype=np.float32)
istart = 0
for k in range(nlay):
istop = istart + nodelay[k]
fname = os.path.join(self.model_ws,
'quadtreegrid.bot{}.dat'.format(k + 1))
f = open(fname, 'r')
btk = np.empty((nodelay[k]), dtype=np.float32)
btk = read1d(f, btk)
f.close()
bot[istart:istop] = btk
istart = istop
gridprops['bot'] = bot
# area
fname = os.path.join(self.model_ws, 'qtg.area.dat')
f = open(fname, 'r')
area = np.empty((nodes), dtype=np.float32)
area = read1d(f, area)
f.close()
gridprops['area'] = area
# iac
iac = np.empty((nodes), dtype=np.int)
fname = os.path.join(self.model_ws, 'qtg.iac.dat')
f = open(fname, 'r')
iac = read1d(f, iac)
f.close()
gridprops['iac'] = iac
# Calculate njag and save as nja to self
njag = iac.sum()
gridprops['nja'] = njag
# ja
ja = np.empty((njag), dtype=np.int)
fname = os.path.join(self.model_ws, 'qtg.ja.dat')
f = open(fname, 'r')
ja = read1d(f, ja)
f.close()
gridprops['ja'] = ja
# fldr
fldr = np.empty((njag), dtype=np.int)
fname = os.path.join(self.model_ws, 'qtg.fldr.dat')
f = open(fname, 'r')
fldr = read1d(f, fldr)
f.close()
gridprops['fldr'] = fldr
# ivc
ivc = np.zeros(fldr.shape, dtype=np.int)
idx = (abs(fldr) == 3)
ivc[idx] = 1
gridprops['ivc'] = ivc
cl1 = None
cl2 = None
# cl12
cl12 = np.empty((njag), dtype=np.float32)
fname = os.path.join(self.model_ws, 'qtg.c1.dat')
f = open(fname, 'r')
cl12 = read1d(f, cl12)
f.close()
gridprops['cl12'] = cl12
# fahl
fahl = np.empty((njag), dtype=np.float32)
fname = os.path.join(self.model_ws, 'qtg.fahl.dat')
f = open(fname, 'r')
fahl = read1d(f, fahl)
f.close()
gridprops['fahl'] = fahl
# ihc
ihc = np.empty(fldr.shape, dtype=np.int)
ihc = np.where(abs(fldr) == 0, 0, ihc)
ihc = np.where(abs(fldr) == 1, 1, ihc)
ihc = np.where(abs(fldr) == 2, 1, ihc)
ihc = np.where(abs(fldr) == 3, 0, ihc)
gridprops['ihc'] = ihc
#hwva
hwva = fahl.copy()
ipos = 0
for n in range(nodes):
for j in range(iac[n]):
if j == 0:
pass
elif ihc[ipos] == 0:
pass
else:
m = ja[ipos] - 1
dzn = top[n] - bot[n]
dzm = top[m] - bot[m]
dzavg = 0.5 * (dzn + dzm)
hwva[ipos] = hwva[ipos] / dzavg
ipos += 1
gridprops['hwva'] = hwva
# angldegx
angldegx = np.zeros(fldr.shape, dtype=np.float)
angldegx = np.where(fldr == 0, 1.e30, angldegx)
angldegx = np.where(abs(fldr) == 3, 1.e30, angldegx)
angldegx = np.where(fldr == 2, 90, angldegx)
angldegx = np.where(fldr == -1, 180, angldegx)
angldegx = np.where(fldr == -2, 270, angldegx)
gridprops['angldegx'] = angldegx
# vertices -- not optimized for redundant vertices yet
nvert = nodes * 4
vertices = np.empty((nvert, 2), dtype=np.float)
ipos = 0
for n in range(nodes):
vs = self.get_vertices(n)
for x, y in vs[:-1]: # do not include last vertex
vertices[ipos, 0] = x
vertices[ipos, 1] = y
ipos += 1
gridprops['nvert'] = nvert
gridprops['vertices'] = vertices
cellxy = np.empty((nodes, 2), dtype=np.float)
for n in range(nodes):
x, y = self.get_center(n)
cellxy[n, 0] = x
cellxy[n, 1] = y
gridprops['cellxy'] = cellxy
return gridprops
def to_disu8(self, fname, writevertices=True):
gridprops = self.get_gridprops()
f = open(fname, 'w')
# opts
f.write('BEGIN OPTIONS\n')
f.write('END OPTIONS\n\n')
# dims
f.write('BEGIN DIMENSIONS\n')
f.write(' NODES {}\n'.format(gridprops['nodes']) )
f.write(' NJA {}\n'.format(gridprops['nja']) )
if writevertices:
f.write(' NVERT {}\n'.format(gridprops['nvert']))
f.write('END DIMENSIONS\n\n')
# disdata
f.write('BEGIN DISDATA\n')
for prop in ['top', 'bot', 'area']:
f.write(' {}\n'.format(prop.upper()))
f.write(' INTERNAL 1 (FREE)\n')
a = gridprops[prop]
for aval in a:
f.write('{} '.format(aval))
f.write('\n')
f.write('END DISDATA\n\n')
# condata
f.write('BEGIN CONNECTIONDATA\n')
for prop in ['iac', 'ja', 'ihc', 'cl12', 'hwva', 'angldegx']:
f.write(' {}\n'.format(prop.upper()))
f.write(' INTERNAL 1 (FREE)\n')
a = gridprops[prop]
for aval in a:
f.write('{} '.format(aval))
f.write('\n')
f.write('END CONNECTIONDATA\n\n')
if writevertices:
# vertices -- not optimized for redundant vertices yet
f.write('BEGIN VERTICES\n')
vertices = gridprops['vertices']
for i, row in enumerate(vertices):
x = row[0]
y = row[1]
s = ' {} {} {}\n'.format(i + 1, x, y)
f.write(s)
f.write('END VERTICES\n\n')
# celldata -- not optimized for redundant vertices yet
f.write('BEGIN CELL2D\n')
cellxy = gridprops['cellxy']
iv = 1
for n, row in enumerate(cellxy):
xc = row[0]
yc = row[1]
s = ' {} {} {} {} {} {} {} {}\n'.format(n + 1, xc, yc, 4, iv,
iv+1, iv+2, iv+3)
f.write(s)
iv += 4
f.write('END CELL2D\n\n')
f.close()
return
def intersect(self, features, featuretype, layer):
"""
Parameters
----------
features : str or list
features can be either a string containing the name of a shapefile
or it can be a list of points, lines, or polygons
featuretype : str
Must be either 'point', 'line', or 'polygon'
layer : int
Layer (zero based) to intersect with. Zero based.
Returns
-------
result : np.recarray
Recarray of the intersection properties.
"""
ifname = 'intersect_feature'
if isinstance(features, list):
ifname_w_path = os.path.join(self.model_ws, ifname)
if os.path.exists(ifname_w_path + '.shp'):
os.remove(ifname_w_path + '.shp')
features_to_shapefile(features, featuretype, ifname_w_path)
shapefile = ifname
else:
shapefile = features
sn = os.path.join(self.model_ws, shapefile + '.shp')
assert os.path.isfile(sn), 'Shapefile does not exist: {}'.format(sn)
fname = os.path.join(self.model_ws, '_intersect.dfn')
if os.path.isfile(fname):
os.remove(fname)
f = open(fname, 'w')
f.write('LOAD quadtreegrid.dfn\n')
f.write(1 * '\n')
f.write(self._intersection_block(shapefile, featuretype, layer))
f.close()
# Intersect
cmds = [self.exe_name, 'intersect', '_intersect.dfn']
buff = []
fn = os.path.join(self.model_ws, 'intersection.ifo')
if os.path.isfile(fn):
os.remove(fn)
try:
buff = subprocess.check_output(cmds, cwd=self.model_ws)
        except Exception:
print('Error. Failed to perform intersection', buff)
# Make sure new intersection file was created.
if not os.path.isfile(fn):
            s = 'Error. Failed to perform intersection: {}'.format(buff)
            raise Exception(s)
# Calculate the number of columns to import
# The extra comma causes one too many columns, so calculate the length
f = open(fn, 'r')
line = f.readline()
f.close()
ncol = len(line.strip().split(',')) - 1
# Load the intersection results as a recarray, convert nodenumber
# to zero-based and return
result = np.genfromtxt(fn, dtype=None, names=True, delimiter=',',
usecols=tuple(range(ncol)))
result = result.view(np.recarray)
result['nodenumber'] -= 1
return result
def _intersection_block(self, shapefile, featuretype, layer):
s = ''
s += 'BEGIN GRID_INTERSECTION intersect' + '\n'
s += ' GRID = quadtreegrid\n'
s += ' LAYER = {}\n'.format(layer + 1)
s += ' SHAPEFILE = {}\n'.format(shapefile)
s += ' FEATURE_TYPE = {}\n'.format(featuretype)
s += ' OUTPUT_FILE = {}\n'.format('intersection.ifo')
s += 'END GRID_INTERSECTION intersect' + '\n'
return s
def _mfgrid_block(self):
# Need to adjust offsets and rotation because gridgen rotates around
# lower left corner, whereas flopy rotates around upper left.
# gridgen rotation is counter clockwise, whereas flopy rotation is
# clock wise. Crazy.
sr = self.dis.parent.sr
xll = sr.xul
yll = sr.yul - sr.yedge[0]
xllrot, yllrot = sr.rotate(xll, yll, sr.rotation, xorigin=sr.xul,
yorigin=sr.yul)
s = ''
s += 'BEGIN MODFLOW_GRID basegrid' + '\n'
s += ' ROTATION_ANGLE = {}\n'.format(sr.rotation)
s += ' X_OFFSET = {}\n'.format(xllrot)
s += ' Y_OFFSET = {}\n'.format(yllrot)
s += ' NLAY = {}\n'.format(self.dis.nlay)
s += ' NROW = {}\n'.format(self.dis.nrow)
s += ' NCOL = {}\n'.format(self.dis.ncol)
# delr
delr = self.dis.delr.array
if delr.min() == delr.max():
s += ' DELR = CONSTANT {}\n'.format(delr.min())
else:
s += ' DELR = OPEN/CLOSE delr.dat\n'
fname = os.path.join(self.model_ws, 'delr.dat')
np.savetxt(fname, delr)
# delc
delc = self.dis.delc.array
if delc.min() == delc.max():
s += ' DELC = CONSTANT {}\n'.format(delc.min())
else:
s += ' DELC = OPEN/CLOSE delc.dat\n'
fname = os.path.join(self.model_ws, 'delc.dat')
np.savetxt(fname, delc)
# top
top = self.dis.top.array
if top.min() == top.max():
s += ' TOP = CONSTANT {}\n'.format(top.min())
else:
s += ' TOP = OPEN/CLOSE top.dat\n'
fname = os.path.join(self.model_ws, 'top.dat')
np.savetxt(fname, top)
# bot
botm = self.dis.botm
for k in range(self.dis.nlay):
bot = botm[k].array
if bot.min() == bot.max():
s += ' BOTTOM LAYER {} = CONSTANT {}\n'.format(k + 1,
bot.min())
else:
                s += ' BOTTOM LAYER {0} = OPEN/CLOSE bot{0}.dat\n'.format(k + 1)
fname = os.path.join(self.model_ws, 'bot{}.dat'.format(k + 1))
np.savetxt(fname, bot)
s += 'END MODFLOW_GRID' + '\n'
return s
def _rf_blocks(self):
s = ''
for rfname, rf in self._rfdict.items():
shapefile, featuretype, level = rf
s += 'BEGIN REFINEMENT_FEATURES {}\n'.format(rfname)
s += ' SHAPEFILE = {}\n'.format(shapefile)
s += ' FEATURE_TYPE = {}\n'.format(featuretype)
s += ' REFINEMENT_LEVEL = {}\n'.format(level)
s += 'END REFINEMENT_FEATURES\n'
s += 2 * '\n'
return s
def _ad_blocks(self):
s = ''
for adname, shapefile in self._addict.items():
s += 'BEGIN ACTIVE_DOMAIN {}\n'.format(adname)
s += ' SHAPEFILE = {}\n'.format(shapefile)
s += ' FEATURE_TYPE = {}\n'.format('polygon')
s += ' INCLUDE_BOUNDARY = {}\n'.format('True')
s += 'END ACTIVE_DOMAIN\n'
s += 2 * '\n'
return s
def _builder_block(self):
s = 'BEGIN QUADTREE_BUILDER quadtreebuilder\n'
s += ' MODFLOW_GRID = basegrid\n'
# Write active domain information
for k, adk in enumerate(self._active_domain):
if adk is None:
continue
s += ' ACTIVE_DOMAIN LAYER {} = {}\n'.format(k + 1, adk)
# Write refinement feature information
for k, rfkl in enumerate(self._refinement_features):
if len(rfkl) == 0:
continue
s += ' REFINEMENT_FEATURES LAYER {} = '.format(k + 1)
for rf in rfkl:
s += rf + ' '
s += '\n'
s += ' SMOOTHING = full\n'
for k in range(self.dis.nlay):
if self.surface_interpolation[k] == 'ASCIIGRID':
grd = '_gridgen.lay{}.asc'.format(k)
else:
grd = 'basename'
s += ' TOP LAYER {} = {} {}\n'.format(k + 1,
self.surface_interpolation[k],
grd)
for k in range(self.dis.nlay):
if self.surface_interpolation[k + 1] == 'ASCIIGRID':
grd = '_gridgen.lay{}.asc'.format(k + 1)
else:
grd = 'basename'
s += ' BOTTOM LAYER {} = {} {}\n'.format(k + 1,
self.surface_interpolation[k + 1],
grd)
s += ' GRID_DEFINITION_FILE = quadtreegrid.dfn\n'
s += 'END QUADTREE_BUILDER\n'
return s
def _grid_export_blocks(self):
s = 'BEGIN GRID_TO_SHAPEFILE grid_to_shapefile_poly\n'
s += ' GRID = quadtreegrid\n'
s += ' SHAPEFILE = qtgrid\n'
s += ' FEATURE_TYPE = polygon\n'
s += 'END GRID_TO_SHAPEFILE\n'
s += '\n'
s += 'BEGIN GRID_TO_SHAPEFILE grid_to_shapefile_point\n'
s += ' GRID = quadtreegrid\n'
s += ' SHAPEFILE = qtgrid_pt\n'
s += ' FEATURE_TYPE = point\n'
s += 'END GRID_TO_SHAPEFILE\n'
s += '\n'
s += 'BEGIN GRID_TO_USGDATA grid_to_usgdata\n'
s += ' GRID = quadtreegrid\n'
s += ' USG_DATA_PREFIX = qtg\n'
s += 'END GRID_TO_USGDATA\n'
s += '\n'
s += 'BEGIN GRID_TO_VTKFILE grid_to_vtk\n'
s += ' GRID = quadtreegrid\n'
s += ' VTKFILE = qtg\n'
s += ' SHARE_VERTEX = False\n'
s += 'END GRID_TO_VTKFILE\n'
s += '\n'
s += 'BEGIN GRID_TO_VTKFILE grid_to_vtk_sv\n'
s += ' GRID = quadtreegrid\n'
s += ' VTKFILE = qtg_sv\n'
s += ' SHARE_VERTEX = True\n'
s += 'END GRID_TO_VTKFILE\n'
return s
def _mkvertdict(self):
"""
Create the self._vertdict dictionary that maps the nodenumber to
the vertices
Returns
-------
None
"""
# ensure there are active leaf cells from gridgen
fname = os.path.join(self.model_ws, 'qtg.nod')
if not os.path.isfile(fname):
raise Exception('File {} should have been created by gridgen.'.
format(fname))
f = open(fname, 'r')
line = f.readline()
ll = line.strip().split()
nodes = int(ll[0])
if nodes == 0:
raise Exception('Gridgen resulted in no active cells.')
# ensure shape file was created by gridgen
fname = os.path.join(self.model_ws, 'qtgrid.shp')
assert os.path.isfile(fname), 'gridgen shape file does not exist'
# read vertices from shapefile
sf = shapefile.Reader(fname)
shapes = sf.shapes()
fields = sf.fields
attributes = [l[0] for l in fields[1:]]
records = sf.records()
idx = attributes.index('nodenumber')
for i in range(len(shapes)):
nodenumber = int(records[i][idx]) - 1
self._vertdict[nodenumber] = shapes[i].points
return
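# Hedged end-to-end sketch of the Gridgen workflow. The dis object comes
# from an existing flopy model, the refinement polygon is made up, and the
# gridgen binary must be on the path for build() to succeed.
def _example_gridgen_workflow(dis):  # pragma: no cover
    g = Gridgen(dis, model_ws='.')
    # one polygon feature: a list of parts, each part a closed ring of xy pairs
    poly = [[[(10., 10.), (40., 10.), (40., 40.), (10., 10.)]]]
    g.add_refinement_features(poly, 'polygon', level=2, layers=[0])
    g.build()
    return g.get_gridprops()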
|
bdestombe/flopy-1
|
flopy/utils/gridgen.py
|
Python
|
bsd-3-clause
| 40,297
|
[
"VTK"
] |
f8c31862bba82212542a07bfd7d0246e05adfa195ca216d50af21d90a8754f2a
|
# Copyright (c) 2013 Huan Do, http://huan.do
import ast
from collections import deque
class VariableFinder(ast.NodeVisitor):
def __init__(self, env):
self.env = env
self.visit_queue = deque()
self._conditional_stack = []
self._global = False
def visit(self, node):
"""Does a bfs, visit_queue will elements put inside of it
as it visits."""
ast.NodeVisitor.visit(self, node)
while self.visit_queue:
node = self.visit_queue.popleft()
with self.env.Frame(node):
ast.NodeVisitor.generic_visit(self, node)
def visit_arguments(self, node):
for arg in node.args:
self.generic_declare(arg)
if node.vararg:
self.generic_declare(node.vararg)
if node.kwarg:
self.generic_declare(node.kwarg)
def visit_Assign(self, node):
for target in node.targets:
self.generic_declare(target)
ast.NodeVisitor.generic_visit(self, node)
def visit_Lambda(self, node):
with self.env.extend_frame(node):
self.visit_queue.append(node)
def new_scope(self, node):
self.generic_declare(node.name)
with self.env.extend_frame(node):
self.visit_queue.append(node)
visit_ClassDef = visit_FunctionDef = new_scope
    def visit_ExceptHandler(self, node):
        if isinstance(node.name, ast.Name):  # Python 2: name is a Name node
            self.generic_declare(node.name)
        elif isinstance(node.name, str):  # Python 3: name is a plain string
            self.generic_declare(node.name)
        ast.NodeVisitor.generic_visit(self, node)
def visit_For(self, node):
self._conditional_stack.append(node)
self.generic_declare(node.target)
ast.NodeVisitor.generic_visit(self, node)
assert node == self._conditional_stack.pop()
def visit_Global(self, node):
for name in node.names:
self._global = True
self.generic_declare(name)
self._global = False
def visit_ImportFrom(self, node):
if node.module != '__future__':
for alias in node.names:
if alias.name == '*':
self.env.starred = True
continue
if alias.asname is None:
alias.asname = alias.name
self.generic_declare(alias.asname)
def visit_Import(self, node):
for alias in node.names:
if '.' in alias.name:
name = alias.name[:alias.name.index('.')]
self.env.current_frame.add(
name,
False,
bool(self._conditional_stack))
continue
if alias.asname is None:
alias.asname = alias.name
self.generic_declare(alias.asname)
def visit_If(self, node):
self._conditional_stack.append(node)
self.generic_visit(node)
assert node == self._conditional_stack.pop()
visit_While = visit_TryExcept = visit_If
def visit_With(self, node):
# XXX: Python >= 3, each with statement can have multiple with items
if hasattr(node, 'items'):
for with_item in node.items:
self.generic_visit(with_item)
if hasattr(node, 'optional_vars'):
self.generic_declare(node.optional_vars)
self.generic_visit(node)
visit_withitem = visit_With
def scope_generators(self, generators):
if generators:
first = generators[0]
rest = generators[1:]
with self.env.extend_frame(first):
self.visit_comprehension(first)
self.scope_generators(rest)
def visit_Delete(self, node):
for target in node.targets:
if isinstance(target, ast.Name):
self.notify_delete(target)
def notify_delete(self, node):
decl = self.env.current_frame.declarations.get(node.id)
if decl:
decl.delete = True
def visit_Comprehensions(self, node):
self.scope_generators(node.generators)
visit_DictComp = visit_ListComp = visit_SetComp = visit_Comprehensions
def visit_comprehension(self, node):
self.generic_declare(node.target)
def generic_declare(self, target):
specific_declare = 'declare_' + type(target).__name__
getattr(self, specific_declare)(target)
def declare_str(self, name):
self.env.current_frame.add(
name,
self._global,
bool(self._conditional_stack))
def declare_Name(self, node):
self.generic_declare(node.id)
def declare_arg(self, node):
self.generic_declare(node.arg)
def declare_Subscript(self, node):
ast.NodeVisitor.generic_visit(self, node)
declare_Attribute = declare_Subscript
def declare_Tuple(self, node):
for element in node.elts:
self.generic_declare(element)
declare_List = declare_Tuple
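# Hedged usage sketch. VariableFinder depends on an `env` object that
# supplies Frame/extend_frame context managers and a current_frame with an
# add() method; that interface is assumed here and must come from the caller.
def _example_find_variables(env, source='x = 1\ndef f(y):\n    return x + y\n'):
    tree = ast.parse(source)
    finder = VariableFinder(env)
    finder.visit(tree)  # populates env's frames with declared names
    return finder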
|
huan/Underscore
|
underscore/variable_finder.py
|
Python
|
mit
| 4,915
|
[
"VisIt"
] |
07b6e17767c51b7bb1dfe077979044b42967186cc177fc2be369b57baa305d0a
|
#
# @BEGIN LICENSE
#
# Psi4: an open-source quantum chemistry software package
#
# Copyright (c) 2007-2018 The Psi4 Developers.
#
# The copyrights for code used from other parties are included in
# the corresponding files.
#
# This file is part of Psi4.
#
# Psi4 is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, version 3.
#
# Psi4 is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License along
# with Psi4; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# @END LICENSE
#
from .proc_table import procedures, hooks, energy_only_methods
from .proc import scf_helper, scf_wavefunction_factory
from .empirical_dispersion import EmpericalDispersion
from . import dft_funcs
from . import response
|
amjames/psi4
|
psi4/driver/procrouting/__init__.py
|
Python
|
lgpl-3.0
| 1,133
|
[
"Psi4"
] |
7a49b3d25de9e79390a7a0827826374fec0d302db0670b7b1b770f16e63736ce
|
# Copyright (C) 2013, Thomas Leonard
# See the README file for details, or visit http://0install.net.
import argparse
import os
import sys
import logging
from os.path import join, dirname, abspath
import zeroinstall.injector.config
from zeroinstall import SafeException
from repo import archives
def main(argv):
parser = argparse.ArgumentParser(description='Manage a 0install repository.')
parser.add_argument('--verbose', '-v', help='more verbose output', action='count')
subparsers = parser.add_subparsers(dest='subcommand')
parser_import = subparsers.add_parser('add', help='import pre-existing feeds into 0repo')
parser_import.add_argument('path', metavar='PATH', nargs='+',
help='the signed feeds to import')
parser_modify = subparsers.add_parser('modify', help='modify implementations in an existing feed')
parser_modify.add_argument('uri', metavar='URI', help='the URI of the feed to modify')
parser_modify.add_argument('id', metavar='ID', help='the ID or version number of the implementations to modify')
parser_modify.add_argument('--stability', metavar='STABILITY', help='the new stability rating to set for the implementations')
parser_create = subparsers.add_parser('create', help='create a new repository')
parser_create.add_argument('path', metavar='DIR',
help='the directory to create to hold the new repository')
parser_create.add_argument('key', metavar='GPGKEY',
help='the GPG key used to sign the generated feeds and commits; use "-" to disable GPG signing (e.g., for testing)')
subparsers.add_parser('register', help='add this repository location to ~/.config/...')
subparsers.add_parser('reindex', help='update archives.db from archives directory')
subparsers.add_parser('update', help='process "incoming" and generate output files')
parser_proxy = subparsers.add_parser('proxy', help='run a http proxy which serves all repository URLs directly from the "public" directory')
parser_proxy.add_argument('-p', '--port', help='the port to run the HTTP proxy on', default=8080, type=int)
if len(argv) == 1:
argv = argv + ['update']
args = parser.parse_args(argv[1:])
if args.verbose == 1:
logging.getLogger().setLevel(logging.INFO)
elif args.verbose is not None and args.verbose > 1:
logging.getLogger().setLevel(logging.DEBUG)
logging.info("Starting 0repo")
cmd = __import__('repo.cmd.' + args.subcommand, globals(), locals(), [args.subcommand], 0)
cmd.handle(args)
def find_config(missing_ok = False):
"""Change to parent directory until we find one with 0repo-config.py."""
def is_root_dir():
if os.name == 'nt':
# Top-level directories on Windows are always three characters long (e.g. 'C:\')
return len(os.getcwd()) == 3
else:
return os.path.samefile('.', '..')
# Walk up the directory tree to find the root of the repository
while not os.path.isfile('0repo-config.py'):
if is_root_dir():
if missing_ok:
return False
raise SafeException('0repo must be run from a repository directory (a directory that contains\n'
'a "0repo-config.py" file). To create a new repository, use "0repo create"')
os.chdir('..')
return True
def load_config():
"""Load 0repo-config.py from the current directory."""
import importlib
sys.path.insert(0, abspath('.'))
config = importlib.import_module('0repo-config')
del sys.path[0]
config.default_resources = join(dirname(dirname(dirname(abspath(__file__)))), 'resources')
for setting in ['REPOSITORY_BASE_URL', 'ARCHIVES_BASE_URL', 'LOCAL_ARCHIVES_BACKUP_DIR']:
value = getattr(config, setting)
if not value.endswith('/'):
setattr(config, setting, value + '/')
config.archive_db = archives.ArchiveDB("archives.db")
config.zconfig = zeroinstall.injector.config.load_config()
return config
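# Hedged usage sketch: walk up to the repository root, then load its
# configuration; find_config raises SafeException outside a repository
# unless missing_ok is set.
def _example_load(missing_ok=False):
    if find_config(missing_ok=missing_ok):
        return load_config()
    return None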
|
0install/0repo
|
repo/cmd/__init__.py
|
Python
|
lgpl-2.1
| 3,768
|
[
"VisIt"
] |
8eabb005c1e9d225bc720c6b157c1f0263e6e8aa2aae1c64528beca9cfe17f69
|
#
# -*- coding: utf-8 -*-
#
# Copyright (C) 2008-2011 Red Hat, Inc.
# This file is part of python-fedora
#
# python-fedora is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# python-fedora is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with python-fedora; if not, see <http://www.gnu.org/licenses/>
#
'''
Cross-site Request Forgery Protection.
http://en.wikipedia.org/wiki/Cross-site_request_forgery
.. moduleauthor:: John (J5) Palmieri <johnp@redhat.com>
.. moduleauthor:: Luke Macken <lmacken@redhat.com>
.. versionadded:: 0.3.17
'''
import logging
from bunch import Bunch
from kitchen.text.converters import to_bytes
from webob import Request
try:
# webob > 1.0
from webob.headers import ResponseHeaders
except ImportError:
# webob < 1.0
from webob.headerdict import HeaderDict as ResponseHeaders
from paste.httpexceptions import HTTPFound
from paste.response import replace_header
from repoze.who.interfaces import IMetadataProvider
from zope.interface import implements
try:
from hashlib import sha1
except ImportError:
from sha import sha as sha1
from fedora.urlutils import update_qs
log = logging.getLogger(__name__)
class CSRFProtectionMiddleware(object):
'''
CSRF Protection WSGI Middleware.
A layer of WSGI middleware that is responsible for making sure
authenticated requests originated from the user inside of the app's domain
and not a malicious website.
This middleware works with the :mod:`repoze.who` middleware, and requires
that it is placed below :mod:`repoze.who` in the WSGI stack,
since it relies upon ``repoze.who.identity`` to exist in the environ before
it is called.
To utilize this middleware, you can just add it to your WSGI stack below
the :mod:`repoze.who` middleware. Here is an example of utilizing the
`CSRFProtectionMiddleware` within a TurboGears2 application.
In your ``project/config/middleware.py``, you would wrap your main
application with the `CSRFProtectionMiddleware`, like so:
.. code-block:: python
from fedora.wsgi.csrf import CSRFProtectionMiddleware
def make_app(global_conf, full_stack=True, **app_conf):
app = make_base_app(global_conf, wrap_app=CSRFProtectionMiddleware,
full_stack=full_stack, **app_conf)
You then need to add the CSRF token to every url that you need to be
authenticated for. When used with TurboGears2, an overridden version of
:func:`tg.url` is provided. You can use it directly by calling::
from fedora.tg2.utils import url
[...]
url = url('/authentication_needed')
An easier and more portable way to use that is from within TG2 to set this
up is to use :func:`fedora.tg2.utils.enable_csrf` when you setup your
application. This function will monkeypatch TurboGears2's :func:`tg.url`
so that it adds a csrf token to urls. This way, you can keep the same
code in your templates and controller methods whether or not you configure
the CSRF middleware to provide you with protection via
:func:`~fedora.tg2.utils.enable_csrf`.
'''
def __init__(self, application, csrf_token_id='_csrf_token',
clear_env='repoze.who.identity repoze.what.credentials',
token_env='CSRF_TOKEN', auth_state='CSRF_AUTH_STATE'):
'''
Initialize the CSRF Protection WSGI Middleware.
:csrf_token_id: The name of the CSRF token variable
:clear_env: Variables to clear out of the `environ` on invalid token
:token_env: The name of the token variable in the environ
:auth_state: The environ key that will be set when we are logging in
'''
log.info('Creating CSRFProtectionMiddleware')
self.application = application
self.csrf_token_id = csrf_token_id
self.clear_env = clear_env.split()
self.token_env = token_env
self.auth_state = auth_state
def _clean_environ(self, environ):
''' Delete the ``keys`` from the supplied ``environ`` '''
log.debug('clean_environ(%s)' % to_bytes(self.clear_env))
for key in self.clear_env:
if key in environ:
log.debug('Deleting %(key)s from environ' %
{'key': to_bytes(key)})
del(environ[key])
def __call__(self, environ, start_response):
'''
This method is called for each request. It looks for a user-supplied
CSRF token in the GET/POST parameters, and compares it to the token
attached to ``environ['repoze.who.identity']['_csrf_token']``. If it
does not match, or if a token is not provided, it will remove the
user from the ``environ``, based on the ``clear_env`` setting.
'''
request = Request(environ)
log.debug('CSRFProtectionMiddleware(%(r_path)s)' %
{'r_path': to_bytes(request.path)})
token = environ.get('repoze.who.identity', {}).get(self.csrf_token_id)
csrf_token = environ.get(self.token_env)
if token and csrf_token and token == csrf_token:
log.debug('User supplied CSRF token matches environ!')
else:
if not environ.get(self.auth_state):
log.debug('Clearing identity')
self._clean_environ(environ)
if 'repoze.who.identity' not in environ:
environ['repoze.who.identity'] = Bunch()
if 'repoze.who.logins' not in environ:
# For compatibility with friendlyform
environ['repoze.who.logins'] = 0
if csrf_token:
log.warning('Invalid CSRF token. User supplied'
' (%(u_token)s) does not match what\'s in our'
' environ (%(e_token)s)' %
{'u_token': to_bytes(csrf_token),
'e_token': to_bytes(token)})
response = request.get_response(self.application)
if environ.get(self.auth_state):
log.debug('CSRF_AUTH_STATE; rewriting headers')
token = environ.get('repoze.who.identity', {})\
.get(self.csrf_token_id)
loc = update_qs(
response.location, {self.csrf_token_id: str(token)})
response.location = loc
log.debug('response.location = %(r_loc)s' %
{'r_loc': to_bytes(response.location)})
environ[self.auth_state] = None
return response(environ, start_response)
class CSRFMetadataProvider(object):
'''
Repoze.who CSRF Metadata Provider Plugin.
This metadata provider is called with an authenticated users identity
automatically by repoze.who. It will then take the SHA1 hash of the
users session cookie, and set it as the CSRF token in
``environ['repoze.who.identity']['_csrf_token']``.
This plugin will also set ``CSRF_AUTH_STATE`` in the environ if the user
has just authenticated during this request.
To enable this plugin in a TurboGears2 application, you can
add the following to your ``project/config/app_cfg.py``
.. code-block:: python
from fedora.wsgi.csrf import CSRFMetadataProvider
base_config.sa_auth.mdproviders = [('csrfmd', CSRFMetadataProvider())]
Note: If you use the faswho plugin, this is turned on automatically.
'''
implements(IMetadataProvider)
def __init__(self, csrf_token_id='_csrf_token', session_cookie='tg-visit',
clear_env='repoze.who.identity repoze.what.credentials',
login_handler='/post_login', token_env='CSRF_TOKEN',
auth_session_id='CSRF_AUTH_SESSION_ID',
auth_state='CSRF_AUTH_STATE'):
'''
Create the CSRF Metadata Provider Plugin.
:kwarg csrf_token_id: The name of the CSRF token variable. The
identity will contain an entry with this as key and the
computed csrf_token as the value.
:kwarg session_cookie: The name of the session cookie
:kwarg login_handler: The path to the login handler, used to determine
if the user logged in during this request
:kwarg token_env: The name of the token variable in the environ.
The environ will contain the token from the request
:kwarg auth_session_id: The environ key containing an optional
session id
:kwarg auth_state: The environ key that indicates when we are
logging in
'''
self.csrf_token_id = csrf_token_id
self.session_cookie = session_cookie
self.clear_env = clear_env
self.login_handler = login_handler
self.token_env = token_env
self.auth_session_id = auth_session_id
self.auth_state = auth_state
def strip_script(self, environ, path):
# Strips the script portion of a url path so the middleware works even
# when mounted under a path other than root
if path.startswith('/') and 'SCRIPT_NAME' in environ:
prefix = environ.get('SCRIPT_NAME')
if prefix.endswith('/'):
prefix = prefix[:-1]
if path.startswith(prefix):
path = path[len(prefix):]
return path
def add_metadata(self, environ, identity):
request = Request(environ)
log.debug('CSRFMetadataProvider.add_metadata(%(r_path)s)'
% {'r_path': to_bytes(request.path)})
session_id = environ.get(self.auth_session_id)
if not session_id:
session_id = request.cookies.get(self.session_cookie)
log.debug('session_id = %(s_id)r' % {'s_id':
to_bytes(session_id)})
if session_id and session_id != 'Set-Cookie:':
environ[self.auth_session_id] = session_id
token = sha1(session_id).hexdigest()
identity.update({self.csrf_token_id: token})
log.debug('Identity updated with CSRF token')
path = self.strip_script(environ, request.path)
if path == self.login_handler:
log.debug('Setting CSRF_AUTH_STATE')
environ[self.auth_state] = True
environ[self.token_env] = token
else:
environ[self.token_env] = self.extract_csrf_token(request)
app = environ.get('repoze.who.application')
if app:
# This occurs during login in some application configurations
if isinstance(app, HTTPFound) and environ.get(self.auth_state):
log.debug('Got HTTPFound(302) from'
' repoze.who.application')
# What possessed people to make this a string or
# a function?
location = app.location
if hasattr(location, '__call__'):
location = location()
loc = update_qs(location, {self.csrf_token_id:
str(token)})
headers = app.headers.items()
replace_header(headers, 'location', loc)
app.headers = ResponseHeaders(headers)
log.debug('Altered headers: %(headers)s' % {
'headers': to_bytes(app.headers)})
else:
log.warning('Invalid session cookie %(s_id)r, not setting CSRF'
' token!' % {'s_id': to_bytes(session_id)})
def extract_csrf_token(self, request):
'''Extract and remove the CSRF token from a given
:class:`webob.Request`
'''
csrf_token = None
if self.csrf_token_id in request.GET:
log.debug("%(token)s in GET" % {'token':
to_bytes(self.csrf_token_id)})
csrf_token = request.GET[self.csrf_token_id]
del(request.GET[self.csrf_token_id])
request.query_string = '&'.join(['%s=%s' % (k, v) for k, v in
request.GET.items()])
if self.csrf_token_id in request.POST:
log.debug("%(token)s in POST" % {'token':
to_bytes(self.csrf_token_id)})
csrf_token = request.POST[self.csrf_token_id]
del(request.POST[self.csrf_token_id])
return csrf_token
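# Hedged wiring sketch for a bare WSGI stack (outside TurboGears). The
# `wsgi_app` below is an assumption; in a real deployment this middleware
# must sit below repoze.who so that repoze.who.identity is already set.
def _example_wrap(wsgi_app):  # pragma: no cover
    return CSRFProtectionMiddleware(wsgi_app, csrf_token_id='_csrf_token')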
|
nirik/python-fedora
|
fedora/wsgi/csrf.py
|
Python
|
gpl-2.0
| 13,006
|
[
"VisIt"
] |
fdfcc82ab007225f1a56799a56dd7d6f58ff3422ff1d4bad0a168f380c247235
|
#!/usr/bin/env python
# Copyright 2018 Informatics Matters Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import sys, subprocess
from pipelines_utils import utils
def execute(input, output, extension, format, ph, noGzip):
# TODO - convert this to use the Python API rather than an external process
filename = output + "." + extension
base_args = ["obabel", "-ipdb", input, format, "-O", filename]
if ph:
base_args.append("-p")
base_args.append(str(ph))
utils.log("Command: " + " ".join(base_args))
subprocess.check_call(base_args, stdout=sys.stderr, stderr=sys.stderr)
# NOTE the -z argument does not seem to work correctly with obabel (truncated files generated) so we
# fall back to good old gzip to handle the compression once the uncompressed file is created
if not noGzip:
subprocess.check_call(['gzip', filename], stdout=sys.stderr, stderr=sys.stderr)
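
# For illustration, execute('receptor.pdb', 'receptor', 'pdbqt', '-opdbqt',
# 7.4, False) assembles and runs (file names here are assumptions):
#   obabel -ipdb receptor.pdb -opdbqt -O receptor.pdbqt -p 7.4
# and then gzips the result to receptor.pdbqt.gz.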
def main():
parser = argparse.ArgumentParser(description='Open babel PDB prepare')
parser.add_argument('--no-gzip', action='store_true', help='Do not compress the output')
parser.add_argument('-i', '--input', help="PDB file for converting")
parser.add_argument('-o', '--output', help="Base name for output files (no extension).")
parser.add_argument('-mol2', '--mol2', action='store_true', help='Output as Mol2 format.')
parser.add_argument('-pdbqt', '--pdbqt', action='store_true', help='Output as pdbqt format.')
parser.add_argument('--meta', action='store_true', help='Write metrics files')
parser.add_argument('-prot', '--protonate', type=float, help="protonate at this pH (optional)")
args = parser.parse_args()
utils.log("Prepare Args: ", args)
if not (args.mol2 or args.pdbqt):
        raise ValueError("Must specify at least one output format: mol2 and/or pdbqt")
if args.pdbqt:
utils.log("Preparing as pdbqt")
execute(args.input, args.output, "pdbqt", "-opdbqt", args.protonate, args.no_gzip)
if args.mol2:
utils.log("Preparing as mol2")
execute(args.input, args.output, "mol2", "-omol2", args.protonate, args.no_gzip)
utils.log("Preparation complete")
if __name__ == "__main__":
main()
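
# Illustrative command line (input/output names are assumptions):
#   python obabel_prepare_pdb.py -i receptor.pdb -o receptor -pdbqt -prot 7.4
# writes receptor.pdbqt.gz; pass --no-gzip to keep the uncompressed file.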
|
InformaticsMatters/pipelines
|
src/python/pipelines/docking/obabel_prepare_pdb.py
|
Python
|
apache-2.0
| 2,778
|
[
"Open Babel"
] |
66afc1845eccb08aa96c91b60ef131337075401be084945fce5f563c3c284bef
|
from copy import deepcopy
from random import choice
from flask import render_template
from flask_ask import statement, question, session
from . import alexa, db
from .utils.recipes import find_recipes, all_recipes
from .dialog import reply, recipe_card
from .models import User
#####
#
# Helpers for long-term session persistence
#
####
def load_state(user):
'''
Loads the current state from the DB and sets session.attributes
'''
for k, v in user.state_machine.items():
session.attributes[k] = v
# Check if the session is correctly loaded...
all_keys = session.attributes.keys()
for k in ['state','ingredient_list','recipe_list','recipe','step','confirm_for_restart']:
if k not in all_keys:
break
else:
return True
return False
def save_state(user):
'''
Save the current state to the DB
'''
if session.attributes.get('state', False):
user.state_machine = deepcopy(dict(session.attributes))
def reset_state():
session.attributes['state'] = 'ingredients'
session.attributes['ingredient_list'] = []
session.attributes['recipe_list'] = []
session.attributes['recipe'] = -1
session.attributes['step'] = 0 # Step is used both during prepare and cook to keep track of the current step
session.attributes['confirm_for_restart'] = False # Need to roll our own confirmation dialog...
def start_session():
'''
This starts or resumes a previous session.
Typically happens when the user (re-)enters the skill.
'''
user_id = session.user['userId']
user = User.query.get(user_id)
# Complete reset only if new user
    if user is None:
reset_state()
# Create new user and save state
user = User(user_id = user_id)
db.session.add(user)
save_state(user)
# Resume from last time
else:
user.update_online()
success = load_state(user)
# If something is not ok, reset everything...
if not success:
reset_state()
save_state(user)
db.session.commit()
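
# For orientation: the per-user state machine persisted above is a plain dict;
# a freshly reset session looks roughly like this (a sketch, not captured data):
#   {'state': 'ingredients', 'ingredient_list': [], 'recipe_list': [],
#    'recipe': -1, 'step': 0, 'confirm_for_restart': False}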
#####
#
# The standard events
#
#####
@alexa.launch
def launched():
# Start session if not already done
if not session.attributes.get('state', False):
        print 'TODO: investigate why this happens'
start_session()
if session.attributes['state'] == 'ingredients':
answer = '''
Welcome to food world! Discover exciting new recipes!
First add some ingredients you want to use. For example: alexa, add {}.
'''
answer = answer.format(choice(['tomatoes', 'potatoes', 'flour', 'coconut', 'avocado']))
return question(answer)
else:
answer = u'Welcome back to food world! Let\'s continue where we left off. '
answer += u'You can restart any time using: alexa, restart. ' + reply()
# Resume, also send card if recipe is selected
if session.attributes['recipe'] == -1:
return question(answer)
else:
return question(answer).simple_card(**recipe_card())
@alexa.on_session_started
def new_session():
'''
Reset the state. (we start by selecting ingredients)
'''
# Start session if not already done
if not session.attributes.get('state'):
start_session()
@alexa.intent('AMAZON.HelpIntent')
def help():
helping = '''
    Food world lets you discover recipes from all over the world!
You can add ingredients using for example: alexa, add eggs.
You can also search for recipes using: alexa, search recipes.
'''
return question(helping)
@alexa.intent('AMAZON.CancelIntent')
def cancel():
user = User.query.get(session.user['userId'])
save_state(user)
db.session.commit()
return statement('Thanks for using food world!')
@alexa.intent('AMAZON.StopIntent')
def stop():
user = User.query.get(session.user['userId'])
save_state(user)
db.session.commit()
return statement('Thanks for using food world!')
@alexa.session_ended
def session_ended():
user = User.query.get(session.user['userId'])
save_state(user)
db.session.commit()
return statement('Thanks for using food world!')
#####
#
# Events tightly bound to the state machine:
# ingredient -> search -> prepare -> cook
#
# Session variables:
# status the current status (e.g. search)
# step relevant for keeping track of ingredient and cooking steps
# ingredient_list all the ingredients the user wants to cook with
# recipe the selected recipe
# recipe_list in search we only calculate recipes once
#
#####
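# Rough happy-path transitions, read off the handlers below (a sketch, not an
# authoritative spec):
#   ingredients --SearchIntent--> search --YesIntent--> prepare
#   prepare --YesIntent (all ingredients listed)--> cook
#   cook --NextIntent past the last step--> state is reset, session ends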
@alexa.intent('YesIntent')
def confirm():
'''
"Let's start", "Yes", "Here we go!"
Confirm a question
Used in search, prepare and cook
'''
if not session.attributes.get('state', False):
start_session()
sess = session.attributes
state = sess.get('state')
if not state:
pass
# User says yes to reset, we reset.
elif sess['confirm_for_restart']:
reset_state()
# Update the state machine
elif state == 'ingredients':
pass
# Yes to recipe = start preparing (listing amounts)
elif state == 'search':
sess['state'] = 'prepare'
sess['step'] = 0
# Check all the ingredients
elif state == 'prepare':
recipe = all_recipes[sess['recipe']]
# Advance among the list of ingredients
# Reply() will handle if less than three are left...
max_step = len(recipe[1][0]) - 1
# Done with all the ingredients
if sess['step'] + 4 >= max_step:
sess['state'] = 'cook'
sess['step'] = 0
# Some more left to show
else:
sess['step'] += 4
else:
return question('Ok with me...')
return question(reply())
@alexa.intent('NoIntent')
def deny():
'''
"No." "Never." "This sounds awful."
Deny a question
Used in search and prepare
'''
if not session.attributes.get('state', False):
start_session()
sess = session.attributes
state = sess.get('state')
# A card is sent if a new recipe is displayed
send_card = False
if not state:
pass
# No to reset: proceed as usual
elif sess['confirm_for_restart']:
sess['confirm_for_restart'] = False
# Update the state machine
elif state == 'ingredients':
pass
# No in search = move to next recipe
# No in prepare = also move to next recipe
elif state in ('search', 'prepare'):
next = (sess['recipe_list'].index(sess['recipe']) + 1) % len(sess['recipe_list'])
sess['recipe'] = sess['recipe_list'][next]
# Needed for prepare
sess['state'] = 'search'
send_card = True
else:
return question('Ok with me...')
if not send_card:
return question(reply())
else:
return question(reply()).simple_card(**recipe_card())
@alexa.intent('RepeatIntent')
def repeat():
'''
"Can you repeat that?"
Repeat the last statement.
Used in search, prepare and cook.
'''
if not session.attributes.get('state', False):
start_session()
# The simplest of them all, no changes to state machine = same result:)
return question(reply())
@alexa.intent('PreviousIntent')
def back():
'''
"Can we go back to the last step?"
Used in cook or search
'''
if not session.attributes.get('state', False):
start_session()
sess = session.attributes
state = sess['state']
# Previous in cook
if state == 'cook':
recipe = all_recipes[sess['recipe']]
if sess['step'] > 0:
sess['step'] -= 1
# Previous in prepare
elif state == 'prepare':
recipe = all_recipes[sess['recipe']]
sess['step'] -= 4
if sess['step'] < 0:
sess['step'] = 0
# Previous in search
elif state == 'search':
next = (sess['recipe_list'].index(sess['recipe']) - 1) % len(sess['recipe_list'])
sess['recipe'] = sess['recipe_list'][next]
else:
return question('Fine with me!')
return question(reply())
@alexa.intent('NextIntent')
def next():
'''
"next step"
'''
if not session.attributes.get('state', False):
start_session()
sess = session.attributes
state = sess['state']
if state == 'cook':
recipe = all_recipes[sess['recipe']]
sess['step'] += 1
# Finished cooking (app & session ends)
if sess['step'] >= len(recipe[2]):
# We reset the state and save it
reset_state()
save_state(User.query.get(session.user['userId']))
db.session.commit()
            answer = u'You\'re done preparing {}! Thanks for using food world and enjoy your meal!'.format(recipe[0])
return statement(answer)
# Next in search = new recipe
elif state == 'search':
next = (sess['recipe_list'].index(sess['recipe']) + 1) % len(sess['recipe_list'])
sess['recipe'] = sess['recipe_list'][next]
else:
return question('Fine with me!')
return question(reply())
@alexa.intent('AddIntent')
def add_ingredient(ingredient):
'''
"I want to use ..."
User specifies ingredients to be used in recipe.
'''
if ingredient is None:
return question('Please specify an ingredient, for example: alexa, add parsley.')
if not session.attributes.get('state', False):
start_session()
sess = session.attributes
# Add ingredient to ingredient_list
if ingredient not in sess['ingredient_list']:
sess['ingredient_list'].append(ingredient)
prefix = u'Added ' + ingredient + u'. '
# Go back to the adding stage
sess['state'] = 'ingredients'
else:
prefix = ingredient + u' is already on the ingredient list. '
return question(prefix + reply())
@alexa.intent('RemoveIntent')
def remove_ingredient(ingredient):
'''
"I want to remove ..."
Remove ingredients from the ingredient list
'''
if ingredient is None:
return question('Please specify an ingredient, for example: alexa, remove coconut.')
if not session.attributes.get('state', False):
start_session()
sess = session.attributes
# Add ingredient to ingredient_list
if ingredient not in sess['ingredient_list']:
prefix = ingredient + u' is not on the ingredient list. '
else:
sess['ingredient_list'].remove(ingredient)
        prefix = u'Removed ' + ingredient + u' from the ingredient list. '
return question(prefix + reply())
@alexa.intent('SearchIntent')
def search():
'''
"Search recipes"
Based on previously collected ingredient list
'''
if not session.attributes.get('state', False):
start_session()
sess = session.attributes
state = sess.get('state')
if not state:
return question(reply())
# Search for recipes
if len(sess['ingredient_list']) > 0:
# Search for recipes
recipes = find_recipes(sess['ingredient_list'])
if recipes:
# Update the state machine
sess['state'] = 'search'
sess['recipe_list'] = recipes
sess['recipe'] = recipes[0]
else:
answer = u'Could not find a recipe. Try removing an ingredient, your current list is: '
answer += u', '.join(sess['ingredient_list'][:-1]) + u' and ' + sess['ingredient_list'][-1]
return question(answer)
else:
return question(u'I need at least one ingredient. Add ingredients using for example: alexa, add marshmallows.')
return question(reply()).simple_card(**recipe_card())
#####
#
# More general purpose intents
#
#####
@alexa.intent('ClarificationIntent')
def clarification(ingredient):
'''
"How many eggs did i need again?"
'''
if ingredient is None:
return question('Please specify an ingredient, for example: alexa, how much bread did i need?')
if not session.attributes.get('state', False):
start_session()
sess = session.attributes
recipe = all_recipes[sess['recipe']]
for line in recipe[1][0]:
if ingredient in line:
answer = u'You need ' + line + u'. Repeat the last step using: alexa, repeat.'
break
else:
answer = u'Could not find any {} in this recipe.'.format(ingredient)
return question(answer)
@alexa.intent('RestartIntent')
def restart():
'''
"Let's start from the top."
    We need to implement our own confirmation dialog because the built-in one
    is only available in the Skill Builder beta.
'''
if not session.attributes.get('state', False):
start_session()
session.attributes['confirm_for_restart'] = True
return question('Restarting ends the current recipe and resets the ingredients. Are you sure?')
|
oOo0oOo/FoodWorld
|
app/views.py
|
Python
|
mit
| 13,070
|
[
"exciting"
] |
3ca0e12e368bc9b56cc482b21ddb214fea71caa22700ba85e3bcc2a2322930a0
|
#!/usr/bin/env python
# AlGDock Pipeline script to actually run BPMF calculations
import glob
import numpy as np
import os
from os.path import abspath, basename, dirname, exists, getsize, \
isdir, isfile, join
import tarfile
def nonzero(path):
return (isfile(path) and getsize(path)>0) or \
(isdir(path))
# Look for AlGDock and qsub_command.py
import inspect
dirs = {}
dirs['current'] = os.getcwd()
dirs['script'] = dirname(abspath(\
inspect.getfile(inspect.currentframe())))
import sys
sys.path.append(abspath(join(dirs['script'],'..','AlGDock')))
import IO
execfile(join(dirs['script'],'_external_paths.py'))
command_paths = findPaths(['qsub_command'])
algdock_path = findPath(search_paths['algdock'])
# Parse arguments
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--script', default=algdock_path, help='AlGDock script')
parser.add_argument('--saved_arguments', default='saved_arguments.py',
  help='File containing default values of parameters ' + \
    '(can be overwritten by flags)')
parser.add_argument('--include_ligand', nargs='+', default=None, \
help='Only runs AlGDock for these ligands')
parser.add_argument('--exclude_ligand', nargs='+', default=None, \
help='Does not run AlGDock for these ligands')
parser.add_argument('--new_instance_per_ligand', action='store_true', \
default=False, \
help='Runs run_AlGDock.py for each ligand')
parser.add_argument('--include_receptor', nargs='+', default=None, \
help='Only runs AlGDock for these receptors')
parser.add_argument('--exclude_receptor', nargs='+', default=None, \
  help='Does not run AlGDock for these receptors')
parser.add_argument('--older_than', type=int, default=None, \
help='Only runs redo_free_energies jobs if f_RL.pkl.gz ' + \
'is older than OLDER_THAN hours')
parser.add_argument('--check_complete', action='store_true', default=False, \
help='Checks whether f_RL{0}.pkl.gz, where {0} is pose_string, is complete')
parser.add_argument('--clear_locks', action='store_true', default=False, \
help='clears locks')
parser.add_argument('--create_empty_files', action='store_true', default=False)
parser.add_argument('--jobname_prefix', default=None,
help='Prefix for job names')
# Arguments related to file locations
parser.add_argument('--forcefield', help='File for Generalized AMBER Force Field')
parser.add_argument('--ligand', default='../ligand/AlGDock_in/', \
help='The directory/file tree to look for ligand files')
parser.add_argument('--library_requirement', default=None, \
help='The ligand file name must contain the string LIBRARY_REQUIREMENT')
parser.add_argument('--receptor', default='../receptor/amber_in', \
help='The directory/file to look for receptor files (prmtop and inpcrd)')
parser.add_argument('--receptor_grids', default='../receptor/AlGDock_in', \
help='The directory to look for receptor grids (nc)')
parser.add_argument('--complex', default='../complex/AlGDock_in', \
help='The directory tree to look for complex files (prmtop and inpcrd)')
parser.add_argument('--site_info', \
default='../receptor/2-binding_site/measured_binding_site.py', \
help='Python script with binding site parameters ' + \
'(site_R, half_edge_length)')
parser.add_argument('--dock6', default='../dock6/', \
help='The directory to look for dock6 results (*.mol2.gz)')
parser.add_argument('--tree_cool', default='cool/',
help='Directory tree to store cooling results')
parser.add_argument('--tree_dock', default='dock/',
help='Directory tree to store docking results')
# Arguments related to job management
parser.add_argument('--reps', default=None, nargs=2, type=int, \
help='Range of repetitions')
parser.add_argument('--max_jobs', default=None, type=int)
parser.add_argument('--calcs_per_job', default=1, type=int,
help='Number of jobs to include in a single submission script.')
parser.add_argument('--first_ligand', default=None, type=int)
parser.add_argument('--max_ligands', default=None, type=int)
parser.add_argument('--dry', action='store_true', default=False, \
help='Does not actually submit the job to the queue')
parser.add_argument('--no_release', action='store_true', default=False, \
help='Does not release held jobs')
parser.add_argument('--interactive', action='store_true', default=False, \
help='Output command for running in an interactive python environment')
parser.add_argument('--check_tarballs', action='store_true', default=False, \
help='Check inside tarballs for files')
parser.add_argument('--skip_onq', action='store_true', default=False, \
help='Skips looking for the job on the queue')
# Arguments related to scoring and assessment
parser.add_argument('--score', default=None,
help='File for starting pose or keyword: xtal (crystal structure), ' + \
'dock (another docking program), xtal_plus_dock')
parser.add_argument('--poses', default=[-1,0], nargs=2, type=int, \
help='Confine the ligand to specific poses from the "score" file. ' + \
'If the argument is "-1 0" there is no pose restriction. ' + \
'Otherwise the argument is the range of poses.')
parser.add_argument('--rmsd', choices=['xtal',None], default=None,
  help='Calculates the RMSD between snapshots and a reference configuration. ' + \
'xtal means the crystal structure.')
# Simulation settings and constants
# Run-dependent
parser.add_argument('--cool_repX_cycles', type=int,
help='Number of replica exchange cycles for cooling')
parser.add_argument('--dock_repX_cycles', type=int,
help='Number of replica exchange cycles for docking')
parser.add_argument('--run_type',
choices=['configuration_energies', \
'minimized_configuration_energies',
'store_params', 'initial_cool', 'cool', \
'initial_dock', 'dock' ,'postprocess', \
'redo_postprocess','free_energies','redo_free_energies', 'all', \
'timed', 'timed_cool', 'timed_dock', \
'render_docked', 'render_intermediates', \
'clear_intermediates', None], \
help='Type of calculation to run')
parser.add_argument('--max_time', type=int, default = 180, \
help='For timed calculations, the maximum amount of wall clock time, ' + \
'in minutes')
parser.add_argument('--keep_tar', action='store_true', \
help='Keep files extracted from tar input')
parser.add_argument('--cores', type=int, \
help='Number of CPU cores to use')
# Defaults
parser.add_argument('--protocol', choices=['Adaptive','Geometric'],
help='Approach to determining series of thermodynamic states')
parser.add_argument('--therm_speed', type=float,
help='Thermodynamic speed during adaptive simulation')
parser.add_argument('--sampler',
choices=['HMC','MixedHMC','NUTS','VV'],
help='Sampling method')
parser.add_argument('--MCMC_moves', type=int,
help='Types of MCMC moves to use')
parser.add_argument('--T_HIGH', type=float,
help='High temperature')
parser.add_argument('--T_SIMMIN', type=float,
help='Lowest simulated temperature')
parser.add_argument('--T_TARGET', type=float,
help='Target temperature')
parser.add_argument('--temperature_scaling', \
choices=['Linear','Quadratic'], \
help='Determines whether the temperature changes linearly with the docking progress variable or quadratically with the grid scaling progress variable. (only for docking)')
parser.add_argument('--H_mass', type=float,
help='The repartitioned mass of hydrogen. Set negative to turn off HMR')
parser.add_argument('--delta_t', type=float,
help='The default time step, in fs')
# For initialization
parser.add_argument('--seeds_per_state', type=int,
help='Number of starting configurations in each state during initialization')
parser.add_argument('--steps_per_seed', type=int,
help='Number of MD steps per state during initialization')
parser.add_argument('--darts_per_seed', type=int,
help='Number of smart darting attempts for each seed during initialization')
# For replica exchange
parser.add_argument('--repX_cycles', type=int,
help='Number of replica exchange cycles for docking and cooling')
parser.add_argument('--sweeps_per_cycle', type=int,
help='Number of replica exchange sweeps per cycle')
parser.add_argument('--snaps_per_cycle', type=int,
help='Number of snapshots to save per cycle')
parser.add_argument('--attempts_per_sweep', type=int,
  help='Number of replica exchange attempts per sweep')
parser.add_argument('--steps_per_sweep', type=int,
help='Number of MD steps per replica exchange sweep')
parser.add_argument('--darts_per_sweep', type=int,
help='Number of smart darting attempts per replica exchange sweep')
parser.add_argument('--sampling_importance_resampling', action='store_true', default=None,
  help='Use sampling importance resampling')
parser.add_argument('--solvation', default='Desolvated', \
choices=['Desolvated','Reduced','Full','Fractional'],
help='How to use OBC implicit solvent during sampling.')
parser.add_argument('--keep_intermediate', action='store_true', default=None,
help='Keep configurations for intermediate states?')
parser.add_argument('--min_repX_acc', type=float,
help='Minimum value for replica exchange acceptance rate')
# For postprocessing
parser.add_argument('--phases', nargs='+', \
help='Phases to use in postprocessing')
# Stored in dir_cool
parser.add_argument('--cool_protocol', choices=['Adaptive','Set'],
help='Approach to determining series of thermodynamic states')
parser.add_argument('--cool_therm_speed', type=float,
help='Thermodynamic speed during adaptive simulation')
parser.add_argument('--cool_sampler',
choices=['HMC','NUTS','VV'],
help='Sampling method')
# For initialization
parser.add_argument('--cool_seeds_per_state', type=int,
help='Number of starting configurations in each state during initialization')
parser.add_argument('--cool_steps_per_seed', type=int,
help='Number of MD steps per state during initialization')
# For replica exchange
parser.add_argument('--cool_sweeps_per_cycle', type=int,
help='Number of replica exchange sweeps per cycle')
parser.add_argument('--cool_attempts_per_sweep', type=int,
  help='Number of replica exchange attempts per sweep')
parser.add_argument('--cool_steps_per_sweep', type=int,
help='Number of MD steps per replica exchange sweep')
parser.add_argument('--cool_keep_intermediate', action='store_true',
default=None, help='Keep configurations for intermediate states?')
# Stored in dir_dock
parser.add_argument('--dock_protocol', choices=['Adaptive','Set'],
help='Approach to determining series of thermodynamic states')
parser.add_argument('--dock_therm_speed', type=float,
help='Thermodynamic speed during adaptive simulation')
parser.add_argument('--dock_sampler',
choices=['HMC','NUTS','VV'],
help='Sampling method')
# For initialization
parser.add_argument('--dock_seeds_per_state', type=int,
help='Number of starting configurations in each state during initialization')
parser.add_argument('--dock_attempts_per_sweep', type=int,
  help='Number of replica exchange attempts per sweep')
parser.add_argument('--dock_steps_per_seed', type=int,
help='Number of MD steps per state during initialization')
# For replica exchange
parser.add_argument('--dock_sweeps_per_cycle', type=int,
help='Number of replica exchange sweeps per cycle')
parser.add_argument('--dock_steps_per_sweep', type=int,
help='Number of MD steps per replica exchange sweep')
parser.add_argument('--dock_keep_intermediate', action='store_true',
default=None, help='Keep configurations for intermediate states?')
parser.add_argument('--site',
choices=['Sphere','Cylinder','Measure'], \
help='Type of binding site. "Measure" means that parameters' + \
' for a sphere will be measured from docked configurations.')
parser.add_argument('--site_center', nargs=3, type=float,
help='Position of binding site center')
# parser.add_argument('--site_direction', nargs=3, type=float,
# help='Principal axis of a cylindrical binding site')
# parser.add_argument('--site_max_X', type=float,
# help='Maximum position along principal axis in a cylindrical binding site')
parser.add_argument('--site_max_R', type=float,
help='Maximum radial position for a spherical or cylindrical binding site')
parser.add_argument('--site_density', type=float,
help='Density of center-of-mass points in the first docking stage')
# Additional calculations
parser.add_argument('--receptor_Gas', type=float, nargs='+',
help='Receptor potential energies in AMBER Gas implicit solvent (in units of kJ/mol)')
parser.add_argument('--receptor_GBSA', type=float, nargs='+',
help='Receptor potential energies in AMBER GBSA implicit solvent (in units of kJ/mol)')
parser.add_argument('--receptor_PBSA', type=float, nargs='+',
help='Receptor potential energies in AMBER PBSA implicit solvent (in units of kJ/mol)')
parser.add_argument('--receptor_NAMD_Gas', type=float, nargs='+',
help='Receptor potential energies in gas phase (in units of kJ/mol)')
parser.add_argument('--receptor_NAMD_OBC', type=float, nargs='+',
help='Receptor potential energies in NAMD GBSA implicit solvent (in units of kJ/mol)')
args_in = parser.parse_args()
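
# Illustrative invocation (the flags are real arguments defined above, but the
# values and workflow are assumptions, not a prescribed pipeline):
#   python run_AlGDock.py --run_type dock --reps 0 3 --max_jobs 50 --score dock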
# Convert argument paths to absolute paths
for path in ['ligand','receptor','receptor_grids','complex',\
'tree_dock','tree_cool','dock6','forcefield']:
  if getattr(args_in, path, None) is not None:
setattr(args_in,path,abspath(getattr(args_in,path)))
# Determine which jobs are on the queue
if not args_in.skip_onq:
execfile(join(dirs['script'],'_jobs_on_queue.py'))
onq = jobs_on_queue()
else:
onq = []
# Get other parameters
general_sim_arg_keys = ['protocol', 'therm_speed',\
'sampler', 'seeds_per_state', 'steps_per_seed', 'repX_cycles', \
'sweeps_per_cycle', 'snaps_per_cycle', \
'attempts_per_sweep', 'steps_per_sweep', 'darts_per_sweep', \
'sampling_importance_resampling', 'solvation', \
'keep_intermediate', 'phases']
sim_arg_keys = general_sim_arg_keys + \
['cool_'+a for a in general_sim_arg_keys] + \
['dock_'+a for a in general_sim_arg_keys] + \
['MCMC_moves', 'score', 'rmsd', 'pose', 'run_type', \
'max_time', 'keep_tar', 'cores', \
'site', 'site_center', 'site_direction', \
'site_max_X', 'site_max_R', 'site_density', \
'T_HIGH', 'T_TARGET', 'T_SIMMIN', 'reps']
# Load saved arguments
if (args_in.saved_arguments is not None) and nonzero(args_in.saved_arguments):
print 'Passed arguments:'
print 'Loading default simulation arguments from '+args_in.saved_arguments
execfile(args_in.saved_arguments)
for key in sim_arg_keys:
if hasattr(args_saved,key) and \
((not hasattr(args_in,key)) or (getattr(args_in,key) is None)):
setattr(args_in,key,getattr(args_saved,key))
else:
class args_saved:
pass
# Report arguments in args_in that are not listed in sim_arg_keys
print "\nThese arguments will not be passed to AlGDock:"
for key in args_in.__dict__.keys():
if key not in sim_arg_keys and getattr(args_in,key) is not None:
print '{0}={1}'.format(key,getattr(args_in,key))
print
# Determine the binding site radius and half edge length
site = args_in.site
if (args_in.site_center is not None) and (args_in.site_max_R is not None):
site_center = args_in.site_center
site_max_R = args_in.site_max_R
else:
# If no argument, load in a file
if isfile(args_in.site_info):
execfile(args_in.site_info)
# These should be in nanometers, not Angstroms
site_center = [half_edge_length*0.1, half_edge_length*0.1, half_edge_length*0.1]
site_max_R = site_R/10.
else:
print 'No binding site information'
# Look for ligand files
if isfile(args_in.ligand):
ligand_FNs = [abspath(args_in.ligand)]
elif isdir(args_in.ligand):
ligand_FNs = glob.glob(join(args_in.ligand,'*/*.tar.gz'))
ligand_FNs = sorted([abspath(FN) for FN in ligand_FNs])
else:
raise Exception('Ligand input %s is not a file or directory!'%args_in.ligand)
# Check that ligand tarball has nonzero size
ligand_FNs = [FN for FN in ligand_FNs if getsize(FN)>0]
if len(ligand_FNs)>1:
# Filter ligands
if args_in.include_ligand is not None:
ligand_FNs = [FN for FN in ligand_FNs \
if np.array([FN.find(ligN)!=-1 \
for ligN in args_in.include_ligand]).any()]
if args_in.exclude_ligand is not None:
ligand_FNs = [FN for FN in ligand_FNs \
if not np.array([FN.find(ligN)!=-1 \
for ligN in args_in.exclude_ligand]).any()]
if args_in.new_instance_per_ligand:
import sys
for ligand_FN in ligand_FNs:
command = ' '.join(sys.argv + ['--ligand',ligand_FN]) + ' &'
print command
os.system(command)
sys.exit()
# Sort ligands in reverse order by size. This way,
# slower jobs will be queued sooner and
# jobs that start around the same time will likely finish around the same time
ligand_FNs.sort(key=lambda x:getsize(x), reverse=True)
# Look for receptor files
if isfile(args_in.receptor):
receptor_FNs = [args_in.receptor]
elif isdir(args_in.receptor):
receptor_FNs = glob.glob(join(args_in.receptor,'*.prmtop'))
else:
raise Exception('Receptor input %s is not a file or directory!'%args_in.receptor)
receptor_FNs = [abspath(FN) for FN in receptor_FNs]
# Filter receptors
if args_in.include_receptor is not None:
receptor_FNs = [FN for FN in receptor_FNs \
if np.array([FN.find(recN)!=-1 \
for recN in args_in.include_receptor]).any()]
if args_in.exclude_receptor is not None:
receptor_FNs = [FN for FN in receptor_FNs \
if not np.array([FN.find(recN)!=-1 \
for recN in args_in.exclude_receptor]).any()]
# Require inpcrd as well as prmtop files
receptor_FNs = [FN for FN in receptor_FNs if
np.array([nonzero(FN[:-6]+key) \
for key in ['prmtop','inpcrd']]).all()]
# Require grid files
receptor_FNs = [FN for FN in receptor_FNs if np.array(\
[nonzero(join(args_in.receptor_grids,basename(FN)[:-6]+key)) \
for key in ['LJa.25.nc','LJr.25.nc','PB.nc']]).all()]
# Look for complex files
if isfile(args_in.complex):
complex_FNs = [args_in.complex]
elif isdir(args_in.complex):
complex_FNs = glob.glob(join(args_in.complex,'*/*/*.tar.gz'))
if len(complex_FNs)==0:
complex_FNs = glob.glob(join(args_in.complex,'*/*.tar.gz'))
else:
raise Exception('Complex input %s is not a file or directory!'%args_in.complex)
# Require that complex tarball has nonzero size
complex_FNs = [FN for FN in complex_FNs if nonzero(FN)]
print 'Found %d ligands, %d receptors, and %d complexes ready for AlGDock'%(\
len(ligand_FNs),len(receptor_FNs),len(complex_FNs))
if (args_in.library_requirement is not None):
ligand_FNs = [FN for FN in ligand_FNs \
if FN.find(args_in.library_requirement)>-1]
print '%d ligand(s) meet the library requirement'%(len(ligand_FNs))
if args_in.reps is None:
args_in.reps = [0,1]
if args_in.first_ligand is None:
args_in.first_ligand = 0
if args_in.max_ligands is None:
args_in.max_ligands = len(ligand_FNs)
print 'Arguments:'
print args_in
namespace = locals()
status = {'jobs':0, 'submitted':0, 'skipped':0, \
'no_complex':0, 'no_configurations':0, 'no_cool':0, 'no_dock6':0, \
'missing_file':0, 'onq':0, 'recently_redone':0, 'locked':0, 'complete':0}
checked = []
terminal_commands = []
# Loops are over ligands, repetitions, receptors, and poses
### Loop over ligands
for ligand_FN in ligand_FNs[args_in.first_ligand:args_in.first_ligand+args_in.max_ligands]:
labels = {}
if isfile(args_in.ligand):
lib_and_key_prefix = ligand_FN.split('/')[-2]
else:
lib_and_key_prefix = dirname(ligand_FN[len(args_in.ligand)+1:])
labels['library'] = '.'.join(lib_and_key_prefix.split('.')[:-1])
labels['key'] = basename(ligand_FN[:-7])
labels['lib_subdir'] = labels['library']+'.'+labels['key'][:-2]+'__'
labels['ligand'] = labels['library']+'.'+labels['key']
# Define and check files within the ligand tarball
paths_in_tar = {'ligand_database':labels['ligand'].lower()+'.db'}
for key in [('ligand_prmtop','prmtop'),('ligand_inpcrd','inpcrd'), \
('ligand_mol2','mol2'), \
('frcmodList','frcmod')]:
paths_in_tar[key[0]] = labels['ligand']+'.'+key[1]
if args_in.check_tarballs and (ligand_FN not in checked):
tarF = tarfile.open(ligand_FN)
names = [m.name for m in tarF.getmembers()]
not_found = [paths_in_tar[key] for key in paths_in_tar.keys() \
if not paths_in_tar[key] in names]
if len(not_found)>0:
print 'The following files were missing in '+ligand_FN+':'
print ' '.join(not_found)
continue
else:
checked.append(ligand_FN)
### Loop over repetitions
for rep in range(args_in.reps[0],args_in.reps[1]):
paths = {'dir_cool':join(args_in.tree_cool, \
labels['lib_subdir'], '%s-%d'%(labels['key'],rep)),
'forcefield':args_in.forcefield,
'ligand_tarball':ligand_FN}
if not isdir(paths['dir_cool']):
os.system('mkdir -p '+paths['dir_cool'])
if nonzero(join(paths['dir_cool'],'f_L.pkl.gz')):
if (args_in.run_type in ['initial_cool','cool','timed_cool']):
status['complete'] += 1
continue # Cooling is already done
else:
if (args_in.run_type in ['dock','timed_dock']):
status['no_cool'] += 1
continue # Not ready for docking because cooling isn't complete
### Loop over receptor files
for receptor_FN in receptor_FNs:
labels['receptor'] = basename(receptor_FN)[:-7]
labels['complex'] = labels['library']+'.'+labels['key']+'-'+labels['receptor']
# Identify receptor files
for key in ['prmtop','inpcrd']:
paths['receptor_'+key] = abspath(receptor_FN[:-6]+key)
if isfile(abspath(receptor_FN[:-6]+'pdb')):
paths['receptor_fixed_atoms'] = abspath(receptor_FN[:-6]+'pdb')
for key in [('grid_LJa','LJa.25.nc'), ('grid_LJr','LJr.25.nc'), \
('grid_ELE','PB.nc')]:
paths[key[0]] = join(args_in.receptor_grids, \
'%s.%s'%(labels['receptor'],key[1]))
# Identify complex tarball
complex_tar_FN = join(args_in.complex, \
labels['lib_subdir'], labels['key'], labels['receptor']+'.tar.gz')
if not (isfile(complex_tar_FN)):
complex_tar_FN = join(args_in.complex, labels['lib_subdir'], \
'{0}.{1}-{2}.tar.gz'.format(\
labels['library'],labels['key'],labels['receptor']))
if not (isfile(complex_tar_FN)):
print 'No complex tarfile ' + complex_tar_FN
status['no_complex'] += 1
continue # Complex files are missing
paths['complex_tarball'] = complex_tar_FN
# Identify docking directory
paths['dir_dock'] = join(args_in.tree_dock, \
labels['lib_subdir'], labels['key'], '%s-%d'%(labels['receptor'],rep))
if not isdir(paths['dir_dock']):
os.system('mkdir -p '+paths['dir_dock'])
# Convert relative path to absolute paths
for key in paths.keys():
paths[key] = abspath(paths[key])
# Define and check files within the complex tarball
for key in ['prmtop','inpcrd']:
paths_in_tar['complex_'+key] = labels['complex']+'.'+key
if 'receptor_fixed_atoms' in paths.keys():
paths_in_tar['complex_fixed_atoms'] = labels['complex']+'.pdb'
if args_in.check_tarballs and (complex_tar_FN not in checked):
tarF = tarfile.open(complex_tar_FN)
names = [m.name for m in tarF.getmembers()]
not_found = [paths_in_tar[key] for key in paths_in_tar.keys() \
if key.startswith('complex') and not paths_in_tar[key] in names]
if len(not_found)>0:
print 'The following files were missing in '+complex_tar_FN+':'
print ' '.join(not_found)
continue
else:
checked.append(complex_tar_FN)
input_FNs = paths.values()
input_FNs_missing = np.array([not nonzero(FN) for FN in input_FNs])
if input_FNs_missing.any():
print 'Necessary files:'
print paths
print 'Files are missing: ' + ', '.join(np.array(input_FNs)[input_FNs_missing])
status['missing_file'] += 1
continue # Files are missing
# Determine the score argument
if args_in.score is None:
score = False
elif args_in.score=='xtal':
score = 'default'
elif args_in.score=='dock':
score_path = abspath(join(args_in.dock6, \
labels['lib_subdir'], labels['key'], \
labels['receptor'] + '.nc'))
if not nonzero(score_path):
if isfile(score_path[:-3]+'.mol2.gz'):
status['no_configurations'] += 1
skip_job = True
break # No configurations in dock6
else:
            print 'No dock6 output at '+score_path
            status['no_dock6'] += 1
            skip_job = True
            break # Dock6 files are missing
        else:
          # `score` would otherwise be left unset when the dock6 file exists;
          # pointing it at the dock6 poses file is the assumed intent.
          score = score_path
elif args_in.score=='xtal_plus_dock':
score = abspath(join(dir_dock6,\
'xtal_plus_dock6_scored.mol2'))
else:
score = args_in.score
# Check for completed configuration energies
if args_in.run_type.endswith('configuration_energies') and \
(not args_in.interactive):
prefix = 'xtal' if score=='default' else \
os.path.basename(score).split('.')[0]
if args_in.run_type=='minimized_configuration_energies':
prefix = 'min_' + prefix
FN = join(paths['dir_dock'], prefix + '.pkl.gz')
complete = False
if os.path.isfile(FN):
import gzip, pickle
F = gzip.open(FN, 'r')
(confs, Es) = pickle.load(F)
F.close()
complete = True
for phase in args_in.phases:
            for moiety in ['R','L','RL']:
              if moiety+phase not in Es.keys():
                print '{0} is missing {1}'.format(score, moiety+phase)
complete = False
# Pose energy is done
if complete:
status['complete'] += 1
continue
# Find out the number of poses
pose_upper_range = args_in.poses[1]
if isfile(complex_tar_FN):
tarF = tarfile.open(complex_tar_FN)
try:
poses_in_mdcrd = 'poses.mdcrd' in tarF.getnames()
except IOError:
print 'Error with '+complex_tar_FN
import sys
sys.exit()
if poses_in_mdcrd:
tarF.extract('poses.mdcrd')
IO_crd = IO.crd()
(crd, title) = IO_crd.read('poses.mdcrd', return_title=True)
os.remove('poses.mdcrd')
pose_upper_range = min(int(title.split()[0]), args_in.poses[1])
### Loop over poses
for pose in range(args_in.poses[0],pose_upper_range):
if pose==-1:
pose_string = ''
else:
pose_string = '_pose%03d'%pose
f_RL_FN = join(paths['dir_dock'],'f_RL{0}.pkl.gz'.format(pose_string))
if nonzero(f_RL_FN):
import time
# Check if the calculation was recently redone
if (args_in.run_type=='redo_free_energies') and \
(args_in.older_than is not None) and \
(time.time()-os.path.getmtime(f_RL_FN))/60./60.<args_in.older_than:
status['recently_redone'] += 1
continue
if (args_in.run_type in \
['random_dock','initial_dock', 'dock','all','timed']):
if args_in.check_complete:
import gzip, pickle
F = gzip.open(f_RL_FN,'r')
dat = pickle.load(F)
F.close()
try:
completed_cycles = np.min([len(dat[-1][p+'_MBAR']) for p in args_in.phases])
complete = (completed_cycles >= int(args_in.dock_repX_cycles))
except:
print 'Error in '+f_RL_FN
completed_cycles = 0
complete = False
if complete:
status['complete'] += 1 # Docking is done
continue
else:
print '%d/%d cycles in %s'%(\
completed_cycles, args_in.dock_repX_cycles, paths['dir_dock'])
else:
status['complete'] += 1 # Docking is done
continue
elif (args_in.run_type=='redo_free_energies'):
status['missing_file'] += 1
continue
jobname = '' if args_in.jobname_prefix is None \
else (args_in.jobname_prefix + '.')
jobname += '{0}-{1}{2}.{3}'.format(labels['complex'], \
rep, pose_string, args_in.run_type)
if jobname in onq:
status['onq'] += 1
print jobname + ' is on the queue'
continue # Job is on the queue
if args_in.clear_locks:
if not (jobname in onq):
if exists(join(paths['dir_cool'],'.lock')):
# There is a lock in the cooling directory
print '# Removing lock in cooling directory %s'%(paths['dir_cool'])
os.remove(join(paths['dir_cool'],'.lock'))
if exists(join(paths['dir_dock'],'.lock'+pose_string)):
# There is a lock in the docking directory
print '# Removing lock in docking directory %s'%(paths['dir_dock'])
os.remove(join(paths['dir_dock'],'.lock'+pose_string))
# Consolidate paths to pass
paths_to_pass = {}
for key in paths.keys():
paths_to_pass[key] = paths[key]
for key in paths_in_tar.keys():
paths_to_pass[key] = paths_in_tar[key]
interactive_to_pass = []
terminal_to_pass = []
skip_job = False
for key in sorted(paths_to_pass.keys() + sim_arg_keys):
# Priority is passed arguments (which may include saved arguments),
# local variables,
# and then the path dictionary
if hasattr(args_in,key) and (getattr(args_in,key) is not None):
val = getattr(args_in,key)
elif (key in namespace.keys()) and (namespace[key] is not None):
val = namespace[key]
elif key in paths_to_pass.keys():
val = paths_to_pass[key]
else:
continue
# Special cases
if key=='rmsd':
val = {None: False, 'xtal':True}[val]
elif key=='score':
val = score
elif key=='frcmodList':
val = [val]
# Actual strings to pass
if isinstance(val,str):
interactive_to_pass.append("%s = '%s'"%(key,val))
terminal_to_pass.append("--%s %s"%(key,val))
elif isinstance(val,bool):
if val:
interactive_to_pass.append("%s = %s"%(key,val))
terminal_to_pass.append("--%s"%(key))
elif isinstance(val,int):
interactive_to_pass.append("%s = %d"%(key,val))
terminal_to_pass.append("--%s %d"%(key,val))
elif isinstance(val,float):
interactive_to_pass.append("%s = %.5f"%(key,val))
terminal_to_pass.append("--%s %.5f"%(key,val))
elif isinstance(val,list):
if isinstance(val[0],str):
interactive_to_pass.append("%s = ['%s']"%(key,
"', '".join([a for a in val])))
terminal_to_pass.append("--%s %s"%(key,
" ".join([a for a in val])))
elif isinstance(val[0],float):
interactive_to_pass.append("%s = [%s]"%(key,
", ".join(['%.5f'%a for a in val])))
terminal_to_pass.append("--%s %s"%(key,
" ".join(['%.5f'%a for a in val])))
else:
print 'Value:', val
raise Exception('Type not known!')
if skip_job:
continue
outputFNs = {}
for FN in ['cool_log.txt',
'cool_progress.pkl.gz','cool_progress.pkl.gz.BAK',
'cool_data.pkl.gz','cool_data.pkl.gz.BAK',
'f_L.pkl.gz']:
outputFNs[FN] = join(paths_to_pass['dir_cool'],FN)
for FN in ['dock%s_log.txt'%pose_string,
'dock_progress%s.pkl.gz'%pose_string,
'dock_progress%s.pkl.gz.BAK'%pose_string,
'dock_data%s.pkl.gz'%pose_string,
'dock_data%s.pkl.gz.BAK'%pose_string,
'f_RL%s.pkl.gz'%pose_string]:
outputFNs[FN] = join(paths_to_pass['dir_dock'],FN)
for k in outputFNs.keys():
if args_in.create_empty_files and (not isfile(outputFNs[k])):
open(outputFNs[k], 'a').close()
transfer_output_remaps = [f(key) \
for key in outputFNs.keys() \
for f in (lambda x:key,lambda x:outputFNs[key])]
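        # The comprehension above interleaves names and paths into a flat
        # alternating list, e.g. (sketch only, the paths are illustrative):
        #   ['f_L.pkl.gz', '/abs/dir_cool/f_L.pkl.gz', 'cool_log.txt', ...]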
interactive_command = "self = AlGDock.BindingPMF.BPMF(" + \
', \\\n '.join(sorted(interactive_to_pass)) + \
")"
if args_in.script is None:
terminal_command = '$ALGDOCK '
else:
if args_in.script.find('.py')!=-1:
terminal_command = 'python ' + args_in.script + ' '
else:
terminal_command = args_in.script + ' '
terminal_command += ' \\\n '.join(terminal_to_pass)
if args_in.interactive:
print interactive_command
else:
import subprocess
if args_in.calcs_per_job > 1:
terminal_commands.append(terminal_command)
if len(terminal_commands)==args_in.calcs_per_job:
command_string = ' &; '.join([\
command.replace(' \\\n','').replace(' ',' ') \
for command in terminal_commands]) + ' &;'
subprocess.call(['python', command_paths['qsub_command'], \
jobname, command_string] + \
{True:['--dry'],False:[]}[args_in.dry])
status['jobs'] += 1
terminal_commands = []
else:
subprocess.call(['python', command_paths['qsub_command'], \
jobname, terminal_command, '--mem', '4'] + \
['--input_files'] + outputFNs.values() + \
['--output_files'] + list(outputFNs) + \
['--output_remaps'] + transfer_output_remaps + \
['--comment', interactive_command.replace(' \\\n','')] + \
{True:['--dry'],False:[]}[args_in.dry] + \
{True:['--no_release'],False:[]}[args_in.no_release])
status['jobs'] += 1
status['submitted'] += 1
if (args_in.max_jobs is not None) and \
(status['jobs']>=args_in.max_jobs):
break
if (args_in.max_jobs is not None) and \
(status['jobs']>=args_in.max_jobs):
break
if (args_in.max_jobs is not None) and \
(status['jobs']>=args_in.max_jobs):
break
if (args_in.max_jobs is not None) and \
(status['jobs']>=args_in.max_jobs):
break
# Submit the remaining calculations in a final job
if (args_in.calcs_per_job > 1) and len(terminal_commands)>0 \
and (not args_in.interactive):
import subprocess
command_string = ' &; '.join([\
command.replace(' \\\n','').replace(' ',' ') \
for command in terminal_commands]) + ' &;'
subprocess.call(['python', command_paths['qsub_command'], \
jobname, command_string] + \
{True:['--dry'],False:[]}[args_in.dry])
status['jobs'] += 1
terminal_commands = []
format_str = "Calculation status: {jobs} jobs, {submitted} submitted, " + \
"{skipped} skipped, " + \
"{no_complex} without complex files, {no_cool} without cooling files" + \
"{no_dock6} without dock6 files, " + \
"{no_configurations} have no docked configurations, " + \
"{missing_file} missing other files, {onq} on the queue, " + \
"{recently_redone} recently redone, {complete} complete"
print format_str.format(**status)
|
CCBatIIT/AlGDock
|
Pipeline/run_AlGDock.py
|
Python
|
mit
| 35,605
|
[
"Amber",
"CRYSTAL",
"NAMD"
] |
350bf77678d73e6cca8c1391f723dc7b765a989f2ade8293d0c5efbe6f958b1b
|
"""
Acceptance tests for Studio relating to the course outline page.
"""
import json
from datetime import datetime, timedelta
import itertools
from pytz import UTC
from bok_choy.promise import EmptyPromise
from nose.plugins.attrib import attr
from ...pages.studio.settings_advanced import AdvancedSettingsPage
from ...pages.studio.overview import CourseOutlinePage, ContainerPage, ExpandCollapseLinkState
from ...pages.studio.utils import add_discussion, drag, verify_ordering
from ...pages.lms.courseware import CoursewarePage
from ...pages.lms.course_nav import CourseNavPage
from ...pages.lms.staff_view import StaffPage
from ...fixtures.config import ConfigModelFixture
from ...fixtures.course import XBlockFixtureDesc
from base_studio_test import StudioCourseTest
from ..helpers import load_data_str
from ...pages.lms.progress import ProgressPage
SECTION_NAME = 'Test Section'
SUBSECTION_NAME = 'Test Subsection'
UNIT_NAME = 'Test Unit'
class CourseOutlineTest(StudioCourseTest):
"""
Base class for all course outline tests
"""
def setUp(self):
"""
Install a course with no content using a fixture.
"""
super(CourseOutlineTest, self).setUp()
self.course_outline_page = CourseOutlinePage(
self.browser, self.course_info['org'], self.course_info['number'], self.course_info['run']
)
self.advanced_settings = AdvancedSettingsPage(
self.browser, self.course_info['org'], self.course_info['number'], self.course_info['run']
)
def populate_course_fixture(self, course_fixture):
""" Install a course with sections/problems, tabs, updates, and handouts """
course_fixture.add_children(
XBlockFixtureDesc('chapter', SECTION_NAME).add_children(
XBlockFixtureDesc('sequential', SUBSECTION_NAME).add_children(
XBlockFixtureDesc('vertical', UNIT_NAME).add_children(
XBlockFixtureDesc('problem', 'Test Problem 1', data=load_data_str('multiple_choice.xml')),
XBlockFixtureDesc('html', 'Test HTML Component'),
XBlockFixtureDesc('discussion', 'Test Discussion Component')
)
)
)
)
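        # The fixture above produces this outline (a sketch for orientation):
        #   Test Section
        #     Test Subsection
        #       Test Unit
        #         Test Problem 1, Test HTML Component, Test Discussion Component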
def do_action_and_verify(self, outline_page, action, expected_ordering):
"""
Perform the supplied action and then verify the resulting ordering.
"""
if outline_page is None:
outline_page = self.course_outline_page.visit()
action(outline_page)
verify_ordering(self, outline_page, expected_ordering)
# Reload the page and expand all subsections to see that the change was persisted.
outline_page = self.course_outline_page.visit()
outline_page.q(css='.outline-item.outline-subsection.is-collapsed .ui-toggle-expansion').click()
verify_ordering(self, outline_page, expected_ordering)
@attr('shard_3')
class CourseOutlineDragAndDropTest(CourseOutlineTest):
"""
Tests of drag and drop within the outline page.
"""
__test__ = True
def populate_course_fixture(self, course_fixture):
"""
Create a course with one section, two subsections, and four units
"""
# with collapsed outline
self.chap_1_handle = 0
self.chap_1_seq_1_handle = 1
# with first sequential expanded
self.seq_1_vert_1_handle = 2
self.seq_1_vert_2_handle = 3
self.chap_1_seq_2_handle = 4
course_fixture.add_children(
XBlockFixtureDesc('chapter', "1").add_children(
XBlockFixtureDesc('sequential', '1.1').add_children(
XBlockFixtureDesc('vertical', '1.1.1'),
XBlockFixtureDesc('vertical', '1.1.2')
),
XBlockFixtureDesc('sequential', '1.2').add_children(
XBlockFixtureDesc('vertical', '1.2.1'),
XBlockFixtureDesc('vertical', '1.2.2')
)
)
)
def drag_and_verify(self, source, target, expected_ordering, outline_page=None):
self.do_action_and_verify(
outline_page,
            lambda outline: drag(outline, source, target),
expected_ordering
)
def test_drop_unit_in_collapsed_subsection(self):
"""
Drag vertical "1.1.2" from subsection "1.1" into collapsed subsection "1.2" which already
have its own verticals.
"""
course_outline_page = self.course_outline_page.visit()
# expand first subsection
course_outline_page.q(css='.outline-item.outline-subsection.is-collapsed .ui-toggle-expansion').first.click()
expected_ordering = [{"1": ["1.1", "1.2"]},
{"1.1": ["1.1.1"]},
{"1.2": ["1.1.2", "1.2.1", "1.2.2"]}]
self.drag_and_verify(self.seq_1_vert_2_handle, self.chap_1_seq_2_handle, expected_ordering, course_outline_page)
@attr('shard_3')
class WarningMessagesTest(CourseOutlineTest):
"""
Feature: Warning messages on sections, subsections, and units
"""
__test__ = True
STAFF_ONLY_WARNING = 'Contains staff only content'
LIVE_UNPUBLISHED_WARNING = 'Unpublished changes to live content'
FUTURE_UNPUBLISHED_WARNING = 'Unpublished changes to content that will release in the future'
NEVER_PUBLISHED_WARNING = 'Unpublished units will not be released'
class PublishState(object):
"""
Default values for representing the published state of a unit
"""
NEVER_PUBLISHED = 1
UNPUBLISHED_CHANGES = 2
PUBLISHED = 3
VALUES = [NEVER_PUBLISHED, UNPUBLISHED_CHANGES, PUBLISHED]
class UnitState(object):
""" Represents the state of a unit """
def __init__(self, is_released, publish_state, is_locked):
""" Creates a new UnitState with the given properties """
self.is_released = is_released
self.publish_state = publish_state
self.is_locked = is_locked
@property
def name(self):
""" Returns an appropriate name based on the properties of the unit """
result = "Released " if self.is_released else "Unreleased "
if self.publish_state == WarningMessagesTest.PublishState.NEVER_PUBLISHED:
result += "Never Published "
elif self.publish_state == WarningMessagesTest.PublishState.UNPUBLISHED_CHANGES:
result += "Unpublished Changes "
else:
result += "Published "
result += "Locked" if self.is_locked else "Unlocked"
return result
def populate_course_fixture(self, course_fixture):
""" Install a course with various configurations that could produce warning messages """
# Define the dimensions that map to the UnitState constructor
features = [
[True, False], # Possible values for is_released
self.PublishState.VALUES, # Possible values for publish_state
[True, False] # Possible values for is_locked
]
# Add a fixture for every state in the product of features
course_fixture.add_children(*[
self._build_fixture(self.UnitState(*state)) for state in itertools.product(*features)
])
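        # itertools.product over 2 release values x 3 publish states x 2 lock
        # states yields 12 UnitState combinations, one section fixture each.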
def _build_fixture(self, unit_state):
""" Returns an XBlockFixtureDesc with a section, subsection, and possibly unit that has the given state. """
name = unit_state.name
start = (datetime(1984, 3, 4) if unit_state.is_released else datetime.now(UTC) + timedelta(1)).isoformat()
subsection = XBlockFixtureDesc('sequential', name, metadata={'start': start})
# Children of never published subsections will be added on demand via _ensure_unit_present
return XBlockFixtureDesc('chapter', name).add_children(
subsection if unit_state.publish_state == self.PublishState.NEVER_PUBLISHED
else subsection.add_children(
XBlockFixtureDesc('vertical', name, metadata={
'visible_to_staff_only': True if unit_state.is_locked else None
})
)
)
def test_released_never_published_locked(self):
""" Tests that released never published locked units display staff only warnings """
self._verify_unit_warning(
self.UnitState(is_released=True, publish_state=self.PublishState.NEVER_PUBLISHED, is_locked=True),
self.STAFF_ONLY_WARNING
)
def test_released_never_published_unlocked(self):
""" Tests that released never published unlocked units display 'Unpublished units will not be released' """
self._verify_unit_warning(
self.UnitState(is_released=True, publish_state=self.PublishState.NEVER_PUBLISHED, is_locked=False),
self.NEVER_PUBLISHED_WARNING
)
def test_released_unpublished_changes_locked(self):
""" Tests that released unpublished changes locked units display staff only warnings """
self._verify_unit_warning(
self.UnitState(is_released=True, publish_state=self.PublishState.UNPUBLISHED_CHANGES, is_locked=True),
self.STAFF_ONLY_WARNING
)
def test_released_unpublished_changes_unlocked(self):
""" Tests that released unpublished changes unlocked units display 'Unpublished changes to live content' """
self._verify_unit_warning(
self.UnitState(is_released=True, publish_state=self.PublishState.UNPUBLISHED_CHANGES, is_locked=False),
self.LIVE_UNPUBLISHED_WARNING
)
def test_released_published_locked(self):
""" Tests that released published locked units display staff only warnings """
self._verify_unit_warning(
self.UnitState(is_released=True, publish_state=self.PublishState.PUBLISHED, is_locked=True),
self.STAFF_ONLY_WARNING
)
def test_released_published_unlocked(self):
""" Tests that released published unlocked units display no warnings """
self._verify_unit_warning(
self.UnitState(is_released=True, publish_state=self.PublishState.PUBLISHED, is_locked=False),
None
)
def test_unreleased_never_published_locked(self):
""" Tests that unreleased never published locked units display staff only warnings """
self._verify_unit_warning(
self.UnitState(is_released=False, publish_state=self.PublishState.NEVER_PUBLISHED, is_locked=True),
self.STAFF_ONLY_WARNING
)
def test_unreleased_never_published_unlocked(self):
""" Tests that unreleased never published unlocked units display 'Unpublished units will not be released' """
self._verify_unit_warning(
self.UnitState(is_released=False, publish_state=self.PublishState.NEVER_PUBLISHED, is_locked=False),
self.NEVER_PUBLISHED_WARNING
)
def test_unreleased_unpublished_changes_locked(self):
""" Tests that unreleased unpublished changes locked units display staff only warnings """
self._verify_unit_warning(
self.UnitState(is_released=False, publish_state=self.PublishState.UNPUBLISHED_CHANGES, is_locked=True),
self.STAFF_ONLY_WARNING
)
def test_unreleased_unpublished_changes_unlocked(self):
"""
Tests that unreleased unpublished changes unlocked units display 'Unpublished changes to content that will
release in the future'
"""
self._verify_unit_warning(
self.UnitState(is_released=False, publish_state=self.PublishState.UNPUBLISHED_CHANGES, is_locked=False),
self.FUTURE_UNPUBLISHED_WARNING
)
def test_unreleased_published_locked(self):
""" Tests that unreleased published locked units display staff only warnings """
self._verify_unit_warning(
self.UnitState(is_released=False, publish_state=self.PublishState.PUBLISHED, is_locked=True),
self.STAFF_ONLY_WARNING
)
def test_unreleased_published_unlocked(self):
""" Tests that unreleased published unlocked units display no warnings """
self._verify_unit_warning(
self.UnitState(is_released=False, publish_state=self.PublishState.PUBLISHED, is_locked=False),
None
)
def _verify_unit_warning(self, unit_state, expected_status_message):
"""
Verifies that the given unit's messages match the expected messages.
If expected_status_message is None, then the unit status message is expected to not be present.
"""
self._ensure_unit_present(unit_state)
self.course_outline_page.visit()
section = self.course_outline_page.section(unit_state.name)
subsection = section.subsection_at(0)
subsection.expand_subsection()
unit = subsection.unit_at(0)
if expected_status_message == self.STAFF_ONLY_WARNING:
self.assertEqual(section.status_message, self.STAFF_ONLY_WARNING)
self.assertEqual(subsection.status_message, self.STAFF_ONLY_WARNING)
self.assertEqual(unit.status_message, self.STAFF_ONLY_WARNING)
else:
self.assertFalse(section.has_status_message)
self.assertFalse(subsection.has_status_message)
if expected_status_message:
self.assertEqual(unit.status_message, expected_status_message)
else:
self.assertFalse(unit.has_status_message)
def _ensure_unit_present(self, unit_state):
""" Ensures that a unit with the given state is present on the course outline """
if unit_state.publish_state == self.PublishState.PUBLISHED:
return
name = unit_state.name
self.course_outline_page.visit()
subsection = self.course_outline_page.section(name).subsection(name)
subsection.expand_subsection()
if unit_state.publish_state == self.PublishState.UNPUBLISHED_CHANGES:
unit = subsection.unit(name).go_to()
add_discussion(unit)
elif unit_state.publish_state == self.PublishState.NEVER_PUBLISHED:
subsection.add_unit()
unit = ContainerPage(self.browser, None)
unit.wait_for_page()
if unit.is_staff_locked != unit_state.is_locked:
unit.toggle_staff_lock()
@attr('shard_3')
class EditingSectionsTest(CourseOutlineTest):
"""
Feature: Editing Release date, Due date and grading type.
"""
__test__ = True
def test_can_edit_subsection(self):
"""
Scenario: I can edit settings of subsection.
Given that I have created a subsection
Then I see release date, due date and grading policy of subsection in course outline
When I click on the configuration icon
Then edit modal window is shown
And release date, due date and grading policy fields present
And they have correct initial values
Then I set new values for these fields
And I click save button on the modal
Then I see release date, due date and grading policy of subsection in course outline
"""
self.course_outline_page.visit()
subsection = self.course_outline_page.section(SECTION_NAME).subsection(SUBSECTION_NAME)
        # Verify that Release date is visible by default
self.assertTrue(subsection.release_date)
        # Verify that Due date and Policy are hidden by default
self.assertFalse(subsection.due_date)
self.assertFalse(subsection.policy)
modal = subsection.edit()
# Verify fields
self.assertTrue(modal.has_release_date())
self.assertTrue(modal.has_release_time())
self.assertTrue(modal.has_due_date())
self.assertTrue(modal.has_due_time())
self.assertTrue(modal.has_policy())
# Verify initial values
self.assertEqual(modal.release_date, u'1/1/1970')
self.assertEqual(modal.release_time, u'00:00')
self.assertEqual(modal.due_date, u'')
self.assertEqual(modal.due_time, u'')
self.assertEqual(modal.policy, u'Not Graded')
# Set new values
modal.release_date = '3/12/1972'
modal.release_time = '04:01'
modal.due_date = '7/21/2014'
modal.due_time = '23:39'
modal.policy = 'Lab'
modal.save()
self.assertIn(u'Released: Mar 12, 1972', subsection.release_date)
self.assertIn(u'04:01', subsection.release_date)
self.assertIn(u'Due: Jul 21, 2014', subsection.due_date)
self.assertIn(u'23:39', subsection.due_date)
self.assertIn(u'Lab', subsection.policy)
def test_can_edit_section(self):
"""
Scenario: I can edit settings of section.
Given that I have created a section
Then I see release date of section in course outline
When I click on the configuration icon
Then edit modal window is shown
And release date field present
And it has correct initial value
Then I set new value for this field
And I click save button on the modal
Then I see release date of section in course outline
"""
self.course_outline_page.visit()
section = self.course_outline_page.section(SECTION_NAME)
        # Verify that Release date is visible by default
self.assertTrue(section.release_date)
# Verify that Due date and Policy are not present
self.assertFalse(section.due_date)
self.assertFalse(section.policy)
modal = section.edit()
# Verify fields
self.assertTrue(modal.has_release_date())
self.assertFalse(modal.has_due_date())
self.assertFalse(modal.has_policy())
# Verify initial value
self.assertEqual(modal.release_date, u'1/1/1970')
# Set new value
modal.release_date = '5/14/1969'
modal.save()
self.assertIn(u'Released: May 14, 1969', section.release_date)
# Verify that Due date and Policy are not present
self.assertFalse(section.due_date)
self.assertFalse(section.policy)
def test_subsection_is_graded_in_lms(self):
"""
Scenario: I can grade subsection from course outline page.
Given I visit progress page
And I see that problem in subsection has grading type "Practice"
Then I visit course outline page
And I click on the configuration icon of subsection
And I set grading policy to "Lab"
And I click save button on the modal
Then I visit progress page
And I see that problem in subsection has grading type "Problem"
"""
progress_page = ProgressPage(self.browser, self.course_id)
progress_page.visit()
progress_page.wait_for_page()
self.assertEqual(u'Practice', progress_page.grading_formats[0])
self.course_outline_page.visit()
subsection = self.course_outline_page.section(SECTION_NAME).subsection(SUBSECTION_NAME)
modal = subsection.edit()
# Set new values
modal.policy = 'Lab'
modal.save()
progress_page.visit()
self.assertEqual(u'Problem', progress_page.grading_formats[0])
def test_unchanged_release_date_is_not_saved(self):
"""
Scenario: Saving a subsection without changing the release date will not override the release date
Given that I have created a section with a subsection
When I open the settings modal for the subsection
And I pressed save
And I open the settings modal for the section
And I change the release date to 07/20/1969
And I press save
Then the subsection and the section have the release date 07/20/1969
"""
self.course_outline_page.visit()
modal = self.course_outline_page.section_at(0).subsection_at(0).edit()
modal.save()
modal = self.course_outline_page.section_at(0).edit()
modal.release_date = '7/20/1969'
modal.save()
release_text = 'Released: Jul 20, 1969'
self.assertIn(release_text, self.course_outline_page.section_at(0).release_date)
self.assertIn(release_text, self.course_outline_page.section_at(0).subsection_at(0).release_date)
@attr('shard_3')
class StaffLockTest(CourseOutlineTest):
"""
Feature: Sections, subsections, and units can be locked and unlocked from the course outline.
"""
__test__ = True
def populate_course_fixture(self, course_fixture):
""" Create a course with one section, two subsections, and four units """
course_fixture.add_children(
XBlockFixtureDesc('chapter', '1').add_children(
XBlockFixtureDesc('sequential', '1.1').add_children(
XBlockFixtureDesc('vertical', '1.1.1'),
XBlockFixtureDesc('vertical', '1.1.2')
),
XBlockFixtureDesc('sequential', '1.2').add_children(
XBlockFixtureDesc('vertical', '1.2.1'),
XBlockFixtureDesc('vertical', '1.2.2')
)
)
)
def _verify_descendants_are_staff_only(self, item):
"""Verifies that all the descendants of item are staff only"""
self.assertTrue(item.is_staff_only)
if hasattr(item, 'children'):
for child in item.children():
self._verify_descendants_are_staff_only(child)
def _remove_staff_lock_and_verify_warning(self, outline_item, expect_warning):
"""Removes staff lock from a course outline item and checks whether or not a warning appears."""
modal = outline_item.edit()
modal.is_explicitly_locked = False
if expect_warning:
self.assertTrue(modal.shows_staff_lock_warning())
else:
self.assertFalse(modal.shows_staff_lock_warning())
modal.save()
def _toggle_lock_on_unlocked_item(self, outline_item):
"""Toggles outline_item's staff lock on and then off, verifying the staff lock warning"""
self.assertFalse(outline_item.has_staff_lock_warning)
outline_item.set_staff_lock(True)
self.assertTrue(outline_item.has_staff_lock_warning)
self._verify_descendants_are_staff_only(outline_item)
outline_item.set_staff_lock(False)
self.assertFalse(outline_item.has_staff_lock_warning)
def _verify_explicit_staff_lock_remains_after_unlocking_parent(self, child_item, parent_item):
"""Verifies that child_item's explicit staff lock remains after removing parent_item's staff lock"""
child_item.set_staff_lock(True)
parent_item.set_staff_lock(True)
self.assertTrue(parent_item.has_staff_lock_warning)
self.assertTrue(child_item.has_staff_lock_warning)
parent_item.set_staff_lock(False)
self.assertFalse(parent_item.has_staff_lock_warning)
self.assertTrue(child_item.has_staff_lock_warning)
def test_units_can_be_locked(self):
"""
Scenario: Units can be locked and unlocked from the course outline page
Given I have a course with a unit
When I click on the configuration icon
And I enable explicit staff locking
And I click save
Then the unit shows a staff lock warning
And when I click on the configuration icon
And I disable explicit staff locking
And I click save
Then the unit does not show a staff lock warning
"""
self.course_outline_page.visit()
self.course_outline_page.expand_all_subsections()
unit = self.course_outline_page.section_at(0).subsection_at(0).unit_at(0)
self._toggle_lock_on_unlocked_item(unit)
def test_subsections_can_be_locked(self):
"""
Scenario: Subsections can be locked and unlocked from the course outline page
Given I have a course with a subsection
When I click on the subsection's configuration icon
And I enable explicit staff locking
And I click save
Then the subsection shows a staff lock warning
And all its descendants are staff locked
And when I click on the subsection's configuration icon
And I disable explicit staff locking
And I click save
            Then the subsection does not show a staff lock warning
"""
self.course_outline_page.visit()
self.course_outline_page.expand_all_subsections()
subsection = self.course_outline_page.section_at(0).subsection_at(0)
self._toggle_lock_on_unlocked_item(subsection)
def test_sections_can_be_locked(self):
"""
Scenario: Sections can be locked and unlocked from the course outline page
Given I have a course with a section
When I click on the section's configuration icon
And I enable explicit staff locking
And I click save
Then the section shows a staff lock warning
And all its descendants are staff locked
And when I click on the section's configuration icon
And I disable explicit staff locking
And I click save
Then the section does not show a staff lock warning
"""
self.course_outline_page.visit()
self.course_outline_page.expand_all_subsections()
section = self.course_outline_page.section_at(0)
self._toggle_lock_on_unlocked_item(section)
def test_explicit_staff_lock_remains_after_unlocking_section(self):
"""
Scenario: An explicitly locked unit is still locked after removing an inherited lock from a section
Given I have a course with sections, subsections, and units
And I have enabled explicit staff lock on a section and one of its units
When I click on the section's configuration icon
And I disable explicit staff locking
And I click save
Then the unit still shows a staff lock warning
"""
self.course_outline_page.visit()
self.course_outline_page.expand_all_subsections()
section = self.course_outline_page.section_at(0)
unit = section.subsection_at(0).unit_at(0)
self._verify_explicit_staff_lock_remains_after_unlocking_parent(unit, section)
def test_explicit_staff_lock_remains_after_unlocking_subsection(self):
"""
Scenario: An explicitly locked unit is still locked after removing an inherited lock from a subsection
Given I have a course with sections, subsections, and units
And I have enabled explicit staff lock on a subsection and one of its units
When I click on the subsection's configuration icon
And I disable explicit staff locking
And I click save
Then the unit still shows a staff lock warning
"""
self.course_outline_page.visit()
self.course_outline_page.expand_all_subsections()
subsection = self.course_outline_page.section_at(0).subsection_at(0)
unit = subsection.unit_at(0)
self._verify_explicit_staff_lock_remains_after_unlocking_parent(unit, subsection)
def test_section_displays_lock_when_all_subsections_locked(self):
"""
Scenario: All subsections in section are explicitly locked, section should display staff only warning
Given I have a course one section and two subsections
When I enable explicit staff lock on all the subsections
Then the section shows a staff lock warning
"""
self.course_outline_page.visit()
section = self.course_outline_page.section_at(0)
section.subsection_at(0).set_staff_lock(True)
section.subsection_at(1).set_staff_lock(True)
self.assertTrue(section.has_staff_lock_warning)
def test_section_displays_lock_when_all_units_locked(self):
"""
Scenario: All units in a section are explicitly locked, section should display staff only warning
Given I have a course with one section, two subsections, and four units
When I enable explicit staff lock on all the units
Then the section shows a staff lock warning
"""
self.course_outline_page.visit()
self.course_outline_page.expand_all_subsections()
section = self.course_outline_page.section_at(0)
section.subsection_at(0).unit_at(0).set_staff_lock(True)
section.subsection_at(0).unit_at(1).set_staff_lock(True)
section.subsection_at(1).unit_at(0).set_staff_lock(True)
section.subsection_at(1).unit_at(1).set_staff_lock(True)
self.assertTrue(section.has_staff_lock_warning)
def test_subsection_displays_lock_when_all_units_locked(self):
"""
Scenario: All units in subsection are explicitly locked, subsection should display staff only warning
Given I have a course with one subsection and two units
When I enable explicit staff lock on all the units
Then the subsection shows a staff lock warning
"""
self.course_outline_page.visit()
self.course_outline_page.expand_all_subsections()
subsection = self.course_outline_page.section_at(0).subsection_at(0)
subsection.unit_at(0).set_staff_lock(True)
subsection.unit_at(1).set_staff_lock(True)
self.assertTrue(subsection.has_staff_lock_warning)
def test_section_does_not_display_lock_when_some_subsections_locked(self):
"""
Scenario: Only some subsections in section are explicitly locked, section should NOT display staff only warning
Given I have a course with one section and two subsections
When I enable explicit staff lock on one subsection
Then the section does not show a staff lock warning
"""
self.course_outline_page.visit()
section = self.course_outline_page.section_at(0)
section.subsection_at(0).set_staff_lock(True)
self.assertFalse(section.has_staff_lock_warning)
def test_section_does_not_display_lock_when_some_units_locked(self):
"""
Scenario: Only some units in section are explicitly locked, section should NOT display staff only warning
Given I have a course with one section, two subsections, and four units
When I enable explicit staff lock on three units
Then the section does not show a staff lock warning
"""
self.course_outline_page.visit()
self.course_outline_page.expand_all_subsections()
section = self.course_outline_page.section_at(0)
section.subsection_at(0).unit_at(0).set_staff_lock(True)
section.subsection_at(0).unit_at(1).set_staff_lock(True)
section.subsection_at(1).unit_at(1).set_staff_lock(True)
self.assertFalse(section.has_staff_lock_warning)
def test_subsection_does_not_display_lock_when_some_units_locked(self):
"""
Scenario: Only some units in subsection are explicitly locked, subsection should NOT display staff only warning
Given I have a course with one subsection and two units
When I enable explicit staff lock on one unit
Then the subsection does not show a staff lock warning
"""
self.course_outline_page.visit()
self.course_outline_page.expand_all_subsections()
subsection = self.course_outline_page.section_at(0).subsection_at(0)
subsection.unit_at(0).set_staff_lock(True)
self.assertFalse(subsection.has_staff_lock_warning)
def test_locked_sections_do_not_appear_in_lms(self):
"""
Scenario: A locked section is not visible to students in the LMS
Given I have a course with two sections
When I enable explicit staff lock on one section
And I click the View Live button to switch to staff view
Then I see two sections in the sidebar
And when I switch the view mode to student view
Then I see one section in the sidebar
"""
self.course_outline_page.visit()
self.course_outline_page.add_section_from_top_button()
self.course_outline_page.section_at(1).set_staff_lock(True)
self.course_outline_page.view_live()
courseware = CoursewarePage(self.browser, self.course_id)
courseware.wait_for_page()
self.assertEqual(courseware.num_sections, 2)
StaffPage(self.browser, self.course_id).set_staff_view_mode('Student')
self.assertEqual(courseware.num_sections, 1)
def test_locked_subsections_do_not_appear_in_lms(self):
"""
Scenario: A locked subsection is not visible to students in the LMS
Given I have a course with two subsections
When I enable explicit staff lock on one subsection
And I click the View Live button to switch to staff view
Then I see two subsections in the sidebar
And when I switch the view mode to student view
Then I see one section in the sidebar
"""
self.course_outline_page.visit()
self.course_outline_page.section_at(0).subsection_at(1).set_staff_lock(True)
self.course_outline_page.view_live()
courseware = CoursewarePage(self.browser, self.course_id)
courseware.wait_for_page()
self.assertEqual(courseware.num_subsections, 2)
StaffPage(self.browser, self.course_id).set_staff_view_mode('Student')
self.assertEqual(courseware.num_subsections, 1)
def test_toggling_staff_lock_on_section_does_not_publish_draft_units(self):
"""
Scenario: Locking and unlocking a section will not publish its draft units
Given I have a course with a section and unit
And the unit has a draft and published version
When I enable explicit staff lock on the section
And I disable explicit staff lock on the section
And I click the View Live button to switch to staff view
Then I see the published version of the unit
"""
self.course_outline_page.visit()
self.course_outline_page.expand_all_subsections()
unit = self.course_outline_page.section_at(0).subsection_at(0).unit_at(0).go_to()
add_discussion(unit)
self.course_outline_page.visit()
self.course_outline_page.expand_all_subsections()
section = self.course_outline_page.section_at(0)
section.set_staff_lock(True)
section.set_staff_lock(False)
unit = section.subsection_at(0).unit_at(0).go_to()
unit.view_published_version()
courseware = CoursewarePage(self.browser, self.course_id)
courseware.wait_for_page()
self.assertEqual(courseware.num_xblock_components, 0)
def test_toggling_staff_lock_on_subsection_does_not_publish_draft_units(self):
"""
Scenario: Locking and unlocking a subsection will not publish its draft units
Given I have a course with a subsection and unit
And the unit has a draft and published version
When I enable explicit staff lock on the subsection
And I disable explicit staff lock on the subsection
And I click the View Live button to switch to staff view
Then I see the published version of the unit
"""
self.course_outline_page.visit()
self.course_outline_page.expand_all_subsections()
unit = self.course_outline_page.section_at(0).subsection_at(0).unit_at(0).go_to()
add_discussion(unit)
self.course_outline_page.visit()
self.course_outline_page.expand_all_subsections()
subsection = self.course_outline_page.section_at(0).subsection_at(0)
subsection.set_staff_lock(True)
subsection.set_staff_lock(False)
unit = subsection.unit_at(0).go_to()
unit.view_published_version()
courseware = CoursewarePage(self.browser, self.course_id)
courseware.wait_for_page()
self.assertEqual(courseware.num_xblock_components, 0)
def test_removing_staff_lock_from_unit_without_inherited_lock_shows_warning(self):
"""
Scenario: Removing explicit staff lock from a unit which does not inherit staff lock displays a warning.
Given I have a course with a subsection and unit
When I enable explicit staff lock on the unit
And I disable explicit staff lock on the unit
Then I see a modal warning.
"""
self.course_outline_page.visit()
self.course_outline_page.expand_all_subsections()
unit = self.course_outline_page.section_at(0).subsection_at(0).unit_at(0)
unit.set_staff_lock(True)
self._remove_staff_lock_and_verify_warning(unit, True)
def test_removing_staff_lock_from_subsection_without_inherited_lock_shows_warning(self):
"""
Scenario: Removing explicit staff lock from a subsection which does not inherit staff lock displays a warning.
Given I have a course with a section and subsection
When I enable explicit staff lock on the subsection
And I disable explicit staff lock on the subsection
Then I see a modal warning.
"""
self.course_outline_page.visit()
self.course_outline_page.expand_all_subsections()
subsection = self.course_outline_page.section_at(0).subsection_at(0)
subsection.set_staff_lock(True)
self._remove_staff_lock_and_verify_warning(subsection, True)
def test_removing_staff_lock_from_unit_with_inherited_lock_shows_no_warning(self):
"""
Scenario: Removing explicit staff lock from a unit which also inherits staff lock displays no warning.
Given I have a course with a subsection and unit
When I enable explicit staff lock on the subsection
And I enable explicit staff lock on the unit
When I disable explicit staff lock on the unit
Then I do not see a modal warning.
"""
self.course_outline_page.visit()
self.course_outline_page.expand_all_subsections()
subsection = self.course_outline_page.section_at(0).subsection_at(0)
unit = subsection.unit_at(0)
subsection.set_staff_lock(True)
unit.set_staff_lock(True)
self._remove_staff_lock_and_verify_warning(unit, False)
def test_removing_staff_lock_from_subsection_with_inherited_lock_shows_no_warning(self):
"""
Scenario: Removing explicit staff lock from a subsection which also inherits staff lock displays no warning.
Given I have a course with a section and subsection
When I enable explicit staff lock on the section
And I enable explicit staff lock on the subsection
When I disable explicit staff lock on the subsection
Then I do not see a modal warning.
"""
self.course_outline_page.visit()
self.course_outline_page.expand_all_subsections()
section = self.course_outline_page.section_at(0)
subsection = section.subsection_at(0)
section.set_staff_lock(True)
subsection.set_staff_lock(True)
self._remove_staff_lock_and_verify_warning(subsection, False)
@attr('shard_3')
class EditNamesTest(CourseOutlineTest):
"""
Feature: Click-to-edit section/subsection names
"""
__test__ = True
def set_name_and_verify(self, item, old_name, new_name, expected_name):
"""
Changes the display name of item from old_name to new_name, then verifies that its value is expected_name.
"""
self.assertEqual(item.name, old_name)
item.change_name(new_name)
self.assertFalse(item.in_editable_form())
self.assertEqual(item.name, expected_name)
def test_edit_section_name(self):
"""
Scenario: Click-to-edit section name
Given that I have created a section
When I click on the name of section
Then the section name becomes editable
And given that I have edited the section name
When I click outside of the edited section name
Then the section name saves
And becomes non-editable
"""
self.course_outline_page.visit()
self.set_name_and_verify(
self.course_outline_page.section_at(0),
'Test Section',
'Changed',
'Changed'
)
def test_edit_subsection_name(self):
"""
Scenario: Click-to-edit subsection name
Given that I have created a subsection
When I click on the name of subsection
Then the subsection name becomes editable
And given that I have edited the subsection name
When I click outside of the edited subsection name
Then the subsection name saves
And becomes non-editable
"""
self.course_outline_page.visit()
self.set_name_and_verify(
self.course_outline_page.section_at(0).subsection_at(0),
'Test Subsection',
'Changed',
'Changed'
)
def test_edit_empty_section_name(self):
"""
Scenario: Click-to-edit section name, enter empty name
Given that I have created a section
And I have clicked to edit the name of the section
And I have entered an empty section name
When I click outside of the edited section name
Then the section name does not change
And becomes non-editable
"""
self.course_outline_page.visit()
self.set_name_and_verify(
self.course_outline_page.section_at(0),
'Test Section',
'',
'Test Section'
)
def test_edit_empty_subsection_name(self):
"""
Scenario: Click-to-edit subsection name, enter empty name
Given that I have created a subsection
And I have clicked to edit the name of the subsection
And I have entered an empty subsection name
When I click outside of the edited subsection name
Then the subsection name does not change
And becomes non-editable
"""
self.course_outline_page.visit()
self.set_name_and_verify(
self.course_outline_page.section_at(0).subsection_at(0),
'Test Subsection',
'',
'Test Subsection'
)
def test_editing_names_does_not_expand_collapse(self):
"""
Scenario: A section stays in the same expand/collapse state while its name is edited
Given that I have created a section
And the section is collapsed
When I click on the name of the section
Then the section is collapsed
And given that I have entered a new name
Then the section is collapsed
And given that I press ENTER to finalize the name
Then the section is collapsed
"""
self.course_outline_page.visit()
self.course_outline_page.section_at(0).expand_subsection()
self.assertFalse(self.course_outline_page.section_at(0).in_editable_form())
self.assertTrue(self.course_outline_page.section_at(0).is_collapsed)
self.course_outline_page.section_at(0).edit_name()
self.assertTrue(self.course_outline_page.section_at(0).in_editable_form())
self.assertTrue(self.course_outline_page.section_at(0).is_collapsed)
self.course_outline_page.section_at(0).enter_name('Changed')
self.assertTrue(self.course_outline_page.section_at(0).is_collapsed)
self.course_outline_page.section_at(0).finalize_name()
self.assertTrue(self.course_outline_page.section_at(0).is_collapsed)
@attr('shard_3')
class CreateSectionsTest(CourseOutlineTest):
"""
Feature: Create new sections/subsections/units
"""
__test__ = True
def populate_course_fixture(self, course_fixture):
""" Start with a completely empty course to easily test adding things to it """
pass
def test_create_new_section_from_top_button(self):
"""
Scenario: Create new section from button at top of page
Given that I am on the course outline
When I click the "+ Add section" button at the top of the page
Then I see a new section added to the bottom of the page
And the display name is in its editable form.
"""
self.course_outline_page.visit()
self.course_outline_page.add_section_from_top_button()
self.assertEqual(len(self.course_outline_page.sections()), 1)
self.assertTrue(self.course_outline_page.section_at(0).in_editable_form())
def test_create_new_section_from_bottom_button(self):
"""
Scenario: Create new section from button at bottom of page
Given that I am on the course outline
When I click the "+ Add section" button at the bottom of the page
Then I see a new section added to the bottom of the page
And the display name is in its editable form.
"""
self.course_outline_page.visit()
self.course_outline_page.add_section_from_bottom_button()
self.assertEqual(len(self.course_outline_page.sections()), 1)
self.assertTrue(self.course_outline_page.section_at(0).in_editable_form())
def test_create_new_section_from_bottom_button_plus_icon(self):
"""
Scenario: Create new section from button plus icon at bottom of page
Given that I am on the course outline
When I click the plus icon in "+ Add section" button at the bottom of the page
Then I see a new section added to the bottom of the page
And the display name is in its editable form.
"""
self.course_outline_page.visit()
self.course_outline_page.add_section_from_bottom_button(click_child_icon=True)
self.assertEqual(len(self.course_outline_page.sections()), 1)
self.assertTrue(self.course_outline_page.section_at(0).in_editable_form())
def test_create_new_subsection(self):
"""
Scenario: Create new subsection
Given that I have created a section
When I click the "+ Add subsection" button in that section
Then I see a new subsection added to the bottom of the section
And the display name is in its editable form.
"""
self.course_outline_page.visit()
self.course_outline_page.add_section_from_top_button()
self.assertEqual(len(self.course_outline_page.sections()), 1)
self.course_outline_page.section_at(0).add_subsection()
subsections = self.course_outline_page.section_at(0).subsections()
self.assertEqual(len(subsections), 1)
self.assertTrue(subsections[0].in_editable_form())
def test_create_new_unit(self):
"""
Scenario: Create new unit
Given that I have created a section
And that I have created a subsection within that section
When I click the "+ Add unit" button in that subsection
Then I am redirected to a New Unit page
And the display name is in its editable form.
"""
self.course_outline_page.visit()
self.course_outline_page.add_section_from_top_button()
self.assertEqual(len(self.course_outline_page.sections()), 1)
self.course_outline_page.section_at(0).add_subsection()
self.assertEqual(len(self.course_outline_page.section_at(0).subsections()), 1)
self.course_outline_page.section_at(0).subsection_at(0).add_unit()
unit_page = ContainerPage(self.browser, None)
EmptyPromise(unit_page.is_browser_on_page, 'Browser is on the unit page').fulfill()
self.assertTrue(unit_page.is_inline_editing_display_name())
@attr('shard_3')
class DeleteContentTest(CourseOutlineTest):
"""
Feature: Deleting sections/subsections/units
"""
__test__ = True
def test_delete_section(self):
"""
Scenario: Delete section
Given that I am on the course outline
When I click the delete button for a section on the course outline
Then I should receive a confirmation message, asking me if I really want to delete the section
When I click "Yes, I want to delete this component"
Then the confirmation message should close
And the section should immediately be deleted from the course outline
"""
self.course_outline_page.visit()
self.assertEqual(len(self.course_outline_page.sections()), 1)
self.course_outline_page.section_at(0).delete()
self.assertEqual(len(self.course_outline_page.sections()), 0)
def test_cancel_delete_section(self):
"""
Scenario: Cancel delete of section
            Given that I clicked the delete button for a section on the course outline
And I received a confirmation message, asking me if I really want to delete the component
When I click "Cancel"
Then the confirmation message should close
And the section should remain in the course outline
"""
self.course_outline_page.visit()
self.assertEqual(len(self.course_outline_page.sections()), 1)
self.course_outline_page.section_at(0).delete(cancel=True)
self.assertEqual(len(self.course_outline_page.sections()), 1)
def test_delete_subsection(self):
"""
Scenario: Delete subsection
Given that I am on the course outline
When I click the delete button for a subsection on the course outline
Then I should receive a confirmation message, asking me if I really want to delete the subsection
When I click "Yes, I want to delete this component"
            Then the confirmation message should close
And the subsection should immediately be deleted from the course outline
"""
self.course_outline_page.visit()
self.assertEqual(len(self.course_outline_page.section_at(0).subsections()), 1)
self.course_outline_page.section_at(0).subsection_at(0).delete()
self.assertEqual(len(self.course_outline_page.section_at(0).subsections()), 0)
def test_cancel_delete_subsection(self):
"""
Scenario: Cancel delete of subsection
Given that I clicked the delete button for a subsection on the course outline
And I received a confirmation message, asking me if I really want to delete the subsection
            When I click "Cancel"
Then the confirmation message should close
And the subsection should remain in the course outline
"""
self.course_outline_page.visit()
self.assertEqual(len(self.course_outline_page.section_at(0).subsections()), 1)
self.course_outline_page.section_at(0).subsection_at(0).delete(cancel=True)
self.assertEqual(len(self.course_outline_page.section_at(0).subsections()), 1)
def test_delete_unit(self):
"""
Scenario: Delete unit
Given that I am on the course outline
When I click the delete button for a unit on the course outline
Then I should receive a confirmation message, asking me if I really want to delete the unit
When I click "Yes, I want to delete this unit"
Then the confirmation message should close
And the unit should immediately be deleted from the course outline
"""
self.course_outline_page.visit()
self.course_outline_page.section_at(0).subsection_at(0).expand_subsection()
self.assertEqual(len(self.course_outline_page.section_at(0).subsection_at(0).units()), 1)
self.course_outline_page.section_at(0).subsection_at(0).unit_at(0).delete()
self.assertEqual(len(self.course_outline_page.section_at(0).subsection_at(0).units()), 0)
def test_cancel_delete_unit(self):
"""
Scenario: Cancel delete of unit
Given that I clicked the delete button for a unit on the course outline
And I received a confirmation message, asking me if I really want to delete the unit
When I click "Cancel"
Then the confirmation message should close
And the unit should remain in the course outline
"""
self.course_outline_page.visit()
self.course_outline_page.section_at(0).subsection_at(0).expand_subsection()
self.assertEqual(len(self.course_outline_page.section_at(0).subsection_at(0).units()), 1)
self.course_outline_page.section_at(0).subsection_at(0).unit_at(0).delete(cancel=True)
self.assertEqual(len(self.course_outline_page.section_at(0).subsection_at(0).units()), 1)
def test_delete_all_no_content_message(self):
"""
Scenario: Delete all sections/subsections/units in a course, "no content" message should appear
Given that I delete all sections, subsections, and units in a course
When I visit the course outline
Then I will see a message that says, "You haven't added any content to this course yet"
            And see a + Add Section button
"""
self.course_outline_page.visit()
self.assertFalse(self.course_outline_page.has_no_content_message)
self.course_outline_page.section_at(0).delete()
self.assertEqual(len(self.course_outline_page.sections()), 0)
self.assertTrue(self.course_outline_page.has_no_content_message)
@attr('shard_3')
class ExpandCollapseMultipleSectionsTest(CourseOutlineTest):
"""
Feature: Courses with multiple sections can expand and collapse all sections.
"""
__test__ = True
def populate_course_fixture(self, course_fixture):
""" Start with a course with two sections """
course_fixture.add_children(
XBlockFixtureDesc('chapter', 'Test Section').add_children(
XBlockFixtureDesc('sequential', 'Test Subsection').add_children(
XBlockFixtureDesc('vertical', 'Test Unit')
)
),
XBlockFixtureDesc('chapter', 'Test Section 2').add_children(
XBlockFixtureDesc('sequential', 'Test Subsection 2').add_children(
XBlockFixtureDesc('vertical', 'Test Unit 2')
)
)
)
def verify_all_sections(self, collapsed):
"""
Verifies that all sections are collapsed if collapsed is True, otherwise all expanded.
"""
for section in self.course_outline_page.sections():
self.assertEqual(collapsed, section.is_collapsed)
def toggle_all_sections(self):
"""
Toggles the expand collapse state of all sections.
"""
for section in self.course_outline_page.sections():
section.expand_subsection()
def test_expanded_by_default(self):
"""
Scenario: The default layout for the outline page is to show sections in expanded view
Given I have a course with sections
When I navigate to the course outline page
Then I see the "Collapse All Sections" link
And all sections are expanded
"""
self.course_outline_page.visit()
self.assertEquals(self.course_outline_page.expand_collapse_link_state, ExpandCollapseLinkState.COLLAPSE)
self.verify_all_sections(collapsed=False)
def test_no_expand_link_for_empty_course(self):
"""
Scenario: Collapse link is removed after last section of a course is deleted
Given I have a course with multiple sections
And I navigate to the course outline page
When I will confirm all alerts
And I press the "section" delete icon
Then I do not see the "Collapse All Sections" link
And I will see a message that says "You haven't added any content to this course yet"
"""
self.course_outline_page.visit()
for section in self.course_outline_page.sections():
section.delete()
self.assertEquals(self.course_outline_page.expand_collapse_link_state, ExpandCollapseLinkState.MISSING)
self.assertTrue(self.course_outline_page.has_no_content_message)
def test_collapse_all_when_all_expanded(self):
"""
Scenario: Collapse all sections when all sections are expanded
Given I navigate to the outline page of a course with sections
And all sections are expanded
When I click the "Collapse All Sections" link
Then I see the "Expand All Sections" link
And all sections are collapsed
"""
self.course_outline_page.visit()
self.verify_all_sections(collapsed=False)
self.course_outline_page.toggle_expand_collapse()
self.assertEquals(self.course_outline_page.expand_collapse_link_state, ExpandCollapseLinkState.EXPAND)
self.verify_all_sections(collapsed=True)
def test_collapse_all_when_some_expanded(self):
"""
Scenario: Collapsing all sections when 1 or more sections are already collapsed
Given I navigate to the outline page of a course with sections
And all sections are expanded
When I collapse the first section
And I click the "Collapse All Sections" link
Then I see the "Expand All Sections" link
And all sections are collapsed
"""
self.course_outline_page.visit()
self.verify_all_sections(collapsed=False)
self.course_outline_page.section_at(0).expand_subsection()
self.course_outline_page.toggle_expand_collapse()
self.assertEquals(self.course_outline_page.expand_collapse_link_state, ExpandCollapseLinkState.EXPAND)
self.verify_all_sections(collapsed=True)
def test_expand_all_when_all_collapsed(self):
"""
Scenario: Expanding all sections when all sections are collapsed
Given I navigate to the outline page of a course with multiple sections
And I click the "Collapse All Sections" link
When I click the "Expand All Sections" link
Then I see the "Collapse All Sections" link
And all sections are expanded
"""
self.course_outline_page.visit()
self.course_outline_page.toggle_expand_collapse()
self.assertEquals(self.course_outline_page.expand_collapse_link_state, ExpandCollapseLinkState.EXPAND)
self.course_outline_page.toggle_expand_collapse()
self.assertEquals(self.course_outline_page.expand_collapse_link_state, ExpandCollapseLinkState.COLLAPSE)
self.verify_all_sections(collapsed=False)
def test_expand_all_when_some_collapsed(self):
"""
Scenario: Expanding all sections when 1 or more sections are already expanded
Given I navigate to the outline page of a course with multiple sections
And I click the "Collapse All Sections" link
When I expand the first section
And I click the "Expand All Sections" link
Then I see the "Collapse All Sections" link
And all sections are expanded
"""
self.course_outline_page.visit()
self.course_outline_page.toggle_expand_collapse()
self.assertEquals(self.course_outline_page.expand_collapse_link_state, ExpandCollapseLinkState.EXPAND)
self.course_outline_page.section_at(0).expand_subsection()
self.course_outline_page.toggle_expand_collapse()
self.assertEquals(self.course_outline_page.expand_collapse_link_state, ExpandCollapseLinkState.COLLAPSE)
self.verify_all_sections(collapsed=False)
@attr('shard_3')
class ExpandCollapseSingleSectionTest(CourseOutlineTest):
"""
Feature: Courses with a single section can expand and collapse all sections.
"""
__test__ = True
def test_no_expand_link_for_empty_course(self):
"""
Scenario: Collapse link is removed after last section of a course is deleted
Given I have a course with one section
And I navigate to the course outline page
When I will confirm all alerts
And I press the "section" delete icon
Then I do not see the "Collapse All Sections" link
And I will see a message that says "You haven't added any content to this course yet"
"""
self.course_outline_page.visit()
self.course_outline_page.section_at(0).delete()
self.assertEquals(self.course_outline_page.expand_collapse_link_state, ExpandCollapseLinkState.MISSING)
self.assertTrue(self.course_outline_page.has_no_content_message)
def test_old_subsection_stays_collapsed_after_creation(self):
"""
Scenario: Collapsed subsection stays collapsed after creating a new subsection
Given I have a course with one section and subsection
And I navigate to the course outline page
Then the subsection is collapsed
And when I create a new subsection
Then the first subsection is collapsed
And the second subsection is expanded
"""
self.course_outline_page.visit()
self.assertTrue(self.course_outline_page.section_at(0).subsection_at(0).is_collapsed)
self.course_outline_page.section_at(0).add_subsection()
self.assertTrue(self.course_outline_page.section_at(0).subsection_at(0).is_collapsed)
self.assertFalse(self.course_outline_page.section_at(0).subsection_at(1).is_collapsed)
@attr('shard_3')
class ExpandCollapseEmptyTest(CourseOutlineTest):
"""
Feature: Courses with no sections initially can expand and collapse all sections after addition.
"""
__test__ = True
def populate_course_fixture(self, course_fixture):
""" Start with an empty course """
pass
def test_no_expand_link_for_empty_course(self):
"""
Scenario: Expand/collapse for a course with no sections
Given I have a course with no sections
When I navigate to the course outline page
Then I do not see the "Collapse All Sections" link
"""
self.course_outline_page.visit()
self.assertEquals(self.course_outline_page.expand_collapse_link_state, ExpandCollapseLinkState.MISSING)
def test_link_appears_after_section_creation(self):
"""
Scenario: Collapse link appears after creating first section of a course
Given I have a course with no sections
When I navigate to the course outline page
And I add a section
Then I see the "Collapse All Sections" link
And all sections are expanded
"""
self.course_outline_page.visit()
self.assertEquals(self.course_outline_page.expand_collapse_link_state, ExpandCollapseLinkState.MISSING)
self.course_outline_page.add_section_from_top_button()
self.assertEquals(self.course_outline_page.expand_collapse_link_state, ExpandCollapseLinkState.COLLAPSE)
self.assertFalse(self.course_outline_page.section_at(0).is_collapsed)
@attr('shard_3')
class DefaultStatesEmptyTest(CourseOutlineTest):
"""
Feature: Misc course outline default states/actions when starting with an empty course
"""
__test__ = True
def populate_course_fixture(self, course_fixture):
""" Start with an empty course """
pass
def test_empty_course_message(self):
"""
Scenario: Empty course state
Given that I am in a course with no sections, subsections, nor units
When I visit the course outline
Then I will see a message that says "You haven't added any content to this course yet"
And see a + Add Section button
"""
self.course_outline_page.visit()
self.assertTrue(self.course_outline_page.has_no_content_message)
self.assertTrue(self.course_outline_page.bottom_add_section_button.is_present())
@attr('shard_3')
class DefaultStatesContentTest(CourseOutlineTest):
"""
Feature: Misc course outline default states/actions when starting with a course with content
"""
__test__ = True
def test_view_live(self):
"""
Scenario: View Live version from course outline
Given that I am on the course outline
When I click the "View Live" button
Then a new tab will open to the course on the LMS
"""
self.course_outline_page.visit()
self.course_outline_page.view_live()
courseware = CoursewarePage(self.browser, self.course_id)
courseware.wait_for_page()
self.assertEqual(courseware.num_xblock_components, 3)
self.assertEqual(courseware.xblock_component_type(0), 'problem')
self.assertEqual(courseware.xblock_component_type(1), 'html')
self.assertEqual(courseware.xblock_component_type(2), 'discussion')
@attr('shard_3')
class UnitNavigationTest(CourseOutlineTest):
"""
Feature: Navigate to units
"""
__test__ = True
def test_navigate_to_unit(self):
"""
Scenario: Click unit name to navigate to unit page
Given that I have expanded a section/subsection so I can see unit names
When I click on a unit name
Then I will be taken to the appropriate unit page
"""
self.course_outline_page.visit()
self.course_outline_page.section_at(0).subsection_at(0).expand_subsection()
unit = self.course_outline_page.section_at(0).subsection_at(0).unit_at(0).go_to()
self.assertTrue(unit.is_browser_on_page)
@attr('shard_3')
class PublishSectionTest(CourseOutlineTest):
"""
Feature: Publish sections.
"""
__test__ = True
def populate_course_fixture(self, course_fixture):
"""
Sets up a course structure with 2 subsections inside a single section.
The first subsection has 2 units, and the second subsection has one unit.
"""
self.courseware = CoursewarePage(self.browser, self.course_id)
self.course_nav = CourseNavPage(self.browser)
course_fixture.add_children(
XBlockFixtureDesc('chapter', SECTION_NAME).add_children(
XBlockFixtureDesc('sequential', SUBSECTION_NAME).add_children(
XBlockFixtureDesc('vertical', UNIT_NAME),
XBlockFixtureDesc('vertical', 'Test Unit 2'),
),
XBlockFixtureDesc('sequential', 'Test Subsection 2').add_children(
XBlockFixtureDesc('vertical', 'Test Unit 3'),
),
),
)
def test_unit_publishing(self):
"""
Scenario: Can publish a unit and see published content in LMS
Given I have a section with 2 subsections and 3 unpublished units
When I go to the course outline
Then I see publish button for the first unit, subsection, section
When I publish the first unit
Then I see that publish button for the first unit disappears
And I see publish buttons for subsection, section
And I see the changed content in LMS
"""
self._add_unpublished_content()
self.course_outline_page.visit()
section, subsection, unit = self._get_items()
self.assertTrue(unit.publish_action)
self.assertTrue(subsection.publish_action)
self.assertTrue(section.publish_action)
unit.publish()
self.assertFalse(unit.publish_action)
self.assertTrue(subsection.publish_action)
self.assertTrue(section.publish_action)
self.courseware.visit()
self.assertEqual(1, self.courseware.num_xblock_components)
def test_subsection_publishing(self):
"""
Scenario: Can publish a subsection and see published content in LMS
Given I have a section with 2 subsections and 3 unpublished units
When I go to the course outline
Then I see publish button for the unit, subsection, section
When I publish the first subsection
Then I see that publish button for the first subsection disappears
And I see that publish buttons disappear for the child units of the subsection
And I see publish button for section
And I see the changed content in LMS
"""
self._add_unpublished_content()
self.course_outline_page.visit()
section, subsection, unit = self._get_items()
self.assertTrue(unit.publish_action)
self.assertTrue(subsection.publish_action)
self.assertTrue(section.publish_action)
self.course_outline_page.section(SECTION_NAME).subsection(SUBSECTION_NAME).publish()
self.assertFalse(unit.publish_action)
self.assertFalse(subsection.publish_action)
self.assertTrue(section.publish_action)
self.courseware.visit()
self.assertEqual(1, self.courseware.num_xblock_components)
self.courseware.go_to_sequential_position(2)
self.assertEqual(1, self.courseware.num_xblock_components)
def test_section_publishing(self):
"""
Scenario: Can publish a section and see published content in LMS
Given I have a section with 2 subsections and 3 unpublished units
When I go to the course outline
Then I see publish button for the unit, subsection, section
When I publish the section
            Then I see that the publish buttons disappear
And I see the changed content in LMS
"""
self._add_unpublished_content()
self.course_outline_page.visit()
section, subsection, unit = self._get_items()
self.assertTrue(subsection.publish_action)
self.assertTrue(section.publish_action)
self.assertTrue(unit.publish_action)
self.course_outline_page.section(SECTION_NAME).publish()
self.assertFalse(subsection.publish_action)
self.assertFalse(section.publish_action)
self.assertFalse(unit.publish_action)
self.courseware.visit()
self.assertEqual(1, self.courseware.num_xblock_components)
self.courseware.go_to_sequential_position(2)
self.assertEqual(1, self.courseware.num_xblock_components)
self.course_nav.go_to_section(SECTION_NAME, 'Test Subsection 2')
self.assertEqual(1, self.courseware.num_xblock_components)
def _add_unpublished_content(self):
"""
Adds unpublished HTML content to first three units in the course.
"""
for index in xrange(3):
self.course_fixture.create_xblock(
self.course_fixture.get_nested_xblocks(category="vertical")[index].locator,
XBlockFixtureDesc('html', 'Unpublished HTML Component ' + str(index)),
)
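        # Note (illustrative): get_nested_xblocks(category="vertical") returns the
        # three verticals created in populate_course_fixture, so each of them
        # receives exactly one unpublished HTML component.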
def _get_items(self):
"""
Returns first section, subsection, and unit on the page.
"""
section = self.course_outline_page.section(SECTION_NAME)
subsection = section.subsection(SUBSECTION_NAME)
unit = subsection.expand_subsection().unit(UNIT_NAME)
return (section, subsection, unit)
@attr('shard_3')
class DeprecationWarningMessageTest(CourseOutlineTest):
"""
Feature: Verify deprecation warning message.
"""
HEADING_TEXT = 'This course uses features that are no longer supported.'
COMPONENT_LIST_HEADING = 'You must delete or replace the following components.'
ADVANCE_MODULES_REMOVE_TEXT = ('To avoid errors, edX strongly recommends that you remove unsupported features '
'from the course advanced settings. To do this, go to the Advanced Settings '
'page, locate the "Advanced Module List" setting, and then delete the following '
'modules from the list.')
DEFAULT_DISPLAYNAME = "Deprecated Component"
def _add_deprecated_advance_modules(self, block_types):
"""
Add `block_types` into `Advanced Module List`
Arguments:
block_types (list): list of block types
"""
self.advanced_settings.visit()
self.advanced_settings.set_values({"Advanced Module List": json.dumps(block_types)})
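        # Illustration (hypothetical values): block_types=['poll', 'survey']
        # serializes to the string '["poll", "survey"]', which is the format the
        # "Advanced Module List" field on the Advanced Settings page expects.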
def _create_deprecated_components(self):
"""
Create deprecated components.
"""
parent_vertical = self.course_fixture.get_nested_xblocks(category="vertical")[0]
self.course_fixture.create_xblock(
parent_vertical.locator,
XBlockFixtureDesc('poll', "Poll", data=load_data_str('poll_markdown.xml'))
)
self.course_fixture.create_xblock(parent_vertical.locator, XBlockFixtureDesc('survey', 'Survey'))
def _verify_deprecation_warning_info(
self,
deprecated_blocks_present,
components_present,
components_display_name_list=None,
deprecated_modules_list=None
):
"""
Verify deprecation warning
Arguments:
            deprecated_blocks_present (bool): deprecated blocks remove text and
                modules list are visible if True, hidden otherwise
components_present (bool): components list shown if True else False
components_display_name_list (list): list of components display name
deprecated_modules_list (list): list of deprecated advance modules
"""
self.assertTrue(self.course_outline_page.deprecated_warning_visible)
self.assertEqual(self.course_outline_page.warning_heading_text, self.HEADING_TEXT)
self.assertEqual(self.course_outline_page.modules_remove_text_shown, deprecated_blocks_present)
if deprecated_blocks_present:
self.assertEqual(self.course_outline_page.modules_remove_text, self.ADVANCE_MODULES_REMOVE_TEXT)
self.assertEqual(self.course_outline_page.deprecated_advance_modules, deprecated_modules_list)
self.assertEqual(self.course_outline_page.components_visible, components_present)
if components_present:
self.assertEqual(self.course_outline_page.components_list_heading, self.COMPONENT_LIST_HEADING)
self.assertItemsEqual(self.course_outline_page.components_display_names, components_display_name_list)
def test_no_deprecation_warning_message_present(self):
"""
        Scenario: Verify that the deprecation warning message is not shown if no
            deprecated advance modules are present and no deprecated components exist
            in the course outline.
        When I go to course outline
Then I don't see any deprecation warning
"""
self.course_outline_page.visit()
self.assertFalse(self.course_outline_page.deprecated_warning_visible)
def test_deprecation_warning_message_present(self):
"""
Scenario: Verify deprecation warning message if deprecated modules
and components are present.
Given I have "poll" advance modules present in `Advanced Module List`
And I have created 2 poll components
When I go to course outline
Then I see poll deprecated warning
And I see correct poll deprecated warning heading text
And I see correct poll deprecated warning advance modules remove text
And I see list of poll components with correct display names
"""
self._add_deprecated_advance_modules(block_types=['poll', 'survey'])
self._create_deprecated_components()
self.course_outline_page.visit()
self._verify_deprecation_warning_info(
deprecated_blocks_present=True,
components_present=True,
components_display_name_list=['Poll', 'Survey'],
deprecated_modules_list=['poll', 'survey']
)
def test_deprecation_warning_with_no_displayname(self):
"""
Scenario: Verify deprecation warning message if poll components are present.
Given I have created 1 poll deprecated component
When I go to course outline
Then I see poll deprecated warning
And I see correct poll deprecated warning heading text
And I see list of poll components with correct message
"""
parent_vertical = self.course_fixture.get_nested_xblocks(category="vertical")[0]
        # Create a deprecated component with an empty display_name and make sure
        # the deprecation warning is displayed with the default display name
self.course_fixture.create_xblock(
parent_vertical.locator,
XBlockFixtureDesc(category='poll', display_name="", data=load_data_str('poll_markdown.xml'))
)
self.course_outline_page.visit()
self._verify_deprecation_warning_info(
deprecated_blocks_present=False,
components_present=True,
components_display_name_list=[self.DEFAULT_DISPLAYNAME],
)
def test_warning_with_poll_advance_modules_only(self):
"""
        Scenario: Verify that deprecation warning message is shown if only
            poll advance modules are present and no poll components exist.
Given I have poll advance modules present in `Advanced Module List`
When I go to course outline
Then I see poll deprecated warning
And I see correct poll deprecated warning heading text
And I see correct poll deprecated warning advance modules remove text
And I don't see list of poll components
"""
self._add_deprecated_advance_modules(block_types=['poll', 'survey'])
self.course_outline_page.visit()
self._verify_deprecation_warning_info(
deprecated_blocks_present=True,
components_present=False,
deprecated_modules_list=['poll', 'survey']
)
def test_warning_with_poll_components_only(self):
"""
        Scenario: Verify that deprecation warning message is shown if only
            poll components exist and no poll advance modules are present.
Given I have created two poll components
When I go to course outline
Then I see poll deprecated warning
And I see correct poll deprecated warning heading text
And I don't see poll deprecated warning advance modules remove text
And I see list of poll components with correct display names
"""
self._create_deprecated_components()
self.course_outline_page.visit()
self._verify_deprecation_warning_info(
deprecated_blocks_present=False,
components_present=True,
components_display_name_list=['Poll', 'Survey']
)
@attr('shard_4')
class SelfPacedOutlineTest(CourseOutlineTest):
"""Test the course outline for a self-paced course."""
def populate_course_fixture(self, course_fixture):
course_fixture.add_children(
XBlockFixtureDesc('chapter', SECTION_NAME).add_children(
XBlockFixtureDesc('sequential', SUBSECTION_NAME).add_children(
XBlockFixtureDesc('vertical', UNIT_NAME)
)
),
)
self.course_fixture.add_course_details({
'self_paced': True,
'start_date': datetime.now() + timedelta(days=1)
})
ConfigModelFixture('/config/self_paced', {'enabled': True}).install()
def test_release_dates_not_shown(self):
"""
Scenario: Ensure that block release dates are not shown on the
course outline page of a self-paced course.
Given I am the author of a self-paced course
When I go to the course outline
Then I should not see release dates for course content
"""
self.course_outline_page.visit()
section = self.course_outline_page.section(SECTION_NAME)
self.assertEqual(section.release_date, '')
subsection = section.subsection(SUBSECTION_NAME)
self.assertEqual(subsection.release_date, '')
def test_edit_section_and_subsection(self):
"""
Scenario: Ensure that block release/due dates are not shown
in their settings modals.
Given I am the author of a self-paced course
When I go to the course outline
And I click on settings for a section or subsection
Then I should not see release or due date settings
"""
self.course_outline_page.visit()
section = self.course_outline_page.section(SECTION_NAME)
modal = section.edit()
self.assertFalse(modal.has_release_date())
self.assertFalse(modal.has_due_date())
modal.cancel()
subsection = section.subsection(SUBSECTION_NAME)
modal = subsection.edit()
self.assertFalse(modal.has_release_date())
self.assertFalse(modal.has_due_date())
|
ampax/edx-platform
|
common/test/acceptance/tests/studio/test_studio_outline.py
|
Python
|
agpl-3.0
| 81,678
|
[
"VisIt"
] |
d28ad69fc05e01914d0caa8865e03ac0f56a86dae1e1ab72e897513949657d29
|
# TODO: linkify bug numbers in task descriptions
import os, json, re, glob
import sys
from wsgiref.simple_server import make_server
import mimetypes
from cgi import parse_qs, escape
from siteinfo import load_data, find_screenshot, get_test_steps_as_strings
import redis
redisDB = redis.from_url(os.environ.get("REDIS_URL"))
def head_html(title):
    return str("""<!DOCTYPE html>
<html>
<head>
<!--<script type="text/javascript" src="data/masterbugtable.js"></script>-->
<link href="/css/tabzilla.css" rel="stylesheet" />
<link rel="stylesheet" type="text/css" href="/css/main.css">
<title>Are We Compatible Yet? - %s</title>
</head>
<div id="wrapper">
<div id="tabzilla">
<a href="https://www.mozilla.org/">Mozilla</a>
</div>
<h1>%s</h1>
""" % (title,title))
# On Stackato, paths to static files were handled in the wsgi config.
# As a temporary solution on Heroku, we serve static files
# through Python... for now.
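# Illustrative usage of head_html above: head_html('Bugs') yields the shared
# page opening whose <title> reads "Are We Compatible Yet? - Bugs" and whose
# <h1> reads "Bugs".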
serve_path_directly = [ '/timeline.html', '/screenshots/index.htm' ]
for fn in glob.glob( './data/*.*'):
serve_path_directly.append(fn[1:].replace('\\', '/'))
for fn in glob.glob( './data/testing/*.*'):
serve_path_directly.append(fn[1:].replace('\\', '/'))
for fn in glob.glob( './js/*.*'):
serve_path_directly.append(fn[1:].replace('\\', '/'))
for fn in glob.glob( './screenshots/js/*.*'):
serve_path_directly.append(fn[1:].replace('\\', '/'))
for fn in glob.glob( './screenshots/css/*.*'):
serve_path_directly.append(fn[1:].replace('\\', '/'))
for fn in glob.glob( './css/*.*'):
serve_path_directly.append(fn[1:].replace('\\', '/'))
for fn in glob.glob( './images/*.*'):
serve_path_directly.append(fn[1:].replace('\\', '/'))
for fn in glob.glob( './extensions/*.*'):
serve_path_directly.append(fn[1:].replace('\\', '/'))
for fn in glob.glob( './lists/*.*'):
serve_path_directly.append(fn[1:].replace('\\', '/'))
# Files that are generated by a worker script
# are served from redis
serve_path_redis = {
'/data/masterbugtable.js': {
'headers' : [
('Content-type', 'text/javascript'),
            ('Cache-control', 'max-age=%s' % (60*60*24))
# Cache this resource for a day
# Data is updated from bug systems more frequently,
# but the data in those systems do not change that much
# This is of course a trade-off, especially given that
# the whole point of AWCY is an updated and different view
# of what needs doing right now..
],
'key': 'masterbugtable'
},
'/data/history.json': {
'headers' : [
('Content-type', 'text/json'),
            ('Cache-control', 'max-age=%s' % (60*60*24*7))
# This is updated weekly, so let's tell the browser to cache it
],
'key': 'history'
}
}
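# A minimal sketch (not part of the original code) of how a worker script
# might populate the redis keys consumed above; build_table()/build_history()
# are hypothetical helpers named here for illustration only:
#
#     redisDB.set('masterbugtable', build_table())
#     redisDB.set('history', build_history())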
def get_mime_type(f):
m_type = mimetypes.guess_type(f)[0]
if m_type is None:
if '.json' in f:
m_type = 'application/json'
elif '.xpi' in f:
m_type = 'application/x-xpinstall'
elif '.ttf' in f:
m_type = 'application/x-font-ttf'
else:
m_type = ''
return m_type
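# e.g. get_mime_type('data/history.json') -> 'application/json'
# (via the '.json' fallback above if the mimetypes table lacks an entry)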
# import - ish: execute screenshots/headers.py in this namespace
# to pull in check_url(), which is used for the headers endpoint below
f = open('screenshots/headers.py', 'r')
exec(f.read())
f.close()
def buglist_to_table(title, bugs, the_data, output):
    if len(bugs) == 0:
        return
    output.append(str('<h2>%s</h2>\n<table>' % title))
    for bug in bugs:
bug_data = the_data['masterbugtable']['bugs'].get(str(bug))
steps = get_test_steps_as_strings(the_data['tests_parsed'], bug)
test_result = the_data['test_results'].get(str(bug))
output.append(str('<tr class="bug"><td><a href="%s">%s</a></td><td><a href="%s">%s</a></td><td>%s</td><td>%s</td></tr>' % (bug_data['link'], bug_data['id'], bug_data['link'], bug_data['summary'], bug_data['status'], bug_data.get('resolution', ''))))
if test_result:
if test_result['test_state'] == 'false':
test_result['test_state'] = 'Failed'
elif test_result['test_state'] == 'true':
test_result['test_state'] = 'Passed'
output.append(str('<tr><th style="text-align:left">Tested: </th><td colspan="3">%s, Result: <b>%s</b></td></tr>' % (test_result['test_date'], test_result['test_state'])))
if steps:
output.append(str('<tr><th style="text-align:left">Test code: </th><td colspan="3"><pre>%s</pre></td></tr>' % str('\n'.join(steps))))
output.append('</table>')
# The first argument passed to the function
# is a dictionary containing CGI-style envrironment variables and the
# second variable is the callable object (see PEP 333).
def arewecompatibleyet(environ, start_response):
parameters = parse_qs(environ.get('QUERY_STRING', ''))
output=[]
    print(environ['PATH_INFO'])
    sys.stdout.flush()
    if environ['PATH_INFO'] == '/':
f = open('timeline.html', 'r')
contents = f.read()
f.close()
headers = [('Content-type', 'text/html;charset=utf-8'), ('X-served-with', 'wsgi.py')]
start_response('200 OK', headers)
return [contents]
if environ['PATH_INFO'] in serve_path_redis:
data = serve_path_redis[environ['PATH_INFO']]
if data:
content = redisDB.get(data['key'])
if content:
data['headers'].append(('X-served-with', 'wsgi.py from redis'))
start_response('200 OK', data['headers'])
return [content]
if environ['PATH_INFO'] in serve_path_directly:
# sigh.. we tried --static-map configuration, but it seems we have to serve this "manually" anyway
m_type = get_mime_type(environ['PATH_INFO'])
if 'text/' in m_type:
m_type += ';charset=utf-8'
headers = [('Content-type', m_type), ('X-served-with', 'wsgi.py')]
start_response('200 OK', headers)
f = open('.' + environ['PATH_INFO'], 'r')
contents = f.read()
f.close()
return [contents]
status = '200 OK' # HTTP Status
headers = [('Content-type', 'text/html;charset=utf-8')] # HTTP Headers
start_response(status, headers)
f = open('data/masterbugtable.js', 'r')
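    # seek past the JS variable-assignment prefix so the rest parses as JSON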
f.seek(89)
data = json.loads(f.read())
f.close()
if environ['PATH_INFO'] == '/newsite':
        f = open('data/toplists.js', 'r')
        f.seek(61)  # seek past the JS variable-assignment prefix so the rest parses as JSON
        lists = json.loads(f.read())
        f.close()
output = [head_html('Site lists')]
output.append('<ul>')
for lst in lists:
            num_sites = len(data['lists'][lst['id']]['data'])
            output.append(str('<li><a href="lists/%s.html">%s</a> (%i sites)' % (str(lst['id']), str(lst['name']), num_sites)))
output.append('</ul>')
elif '/taskdetails/' in environ['PATH_INFO']:
# task types: check, writetest, contact, recontact, analyze, regcheck
the_type = sanitize(parameters['type'][0])
the_bug = sanitize(parameters['bug'][0])
the_link = sanitize(parameters['link'][0])
the_desc = sanitize(parameters['desc'][0])
the_host = sanitize(parameters['host'][0])
output.append(head_html(the_desc))
output.append('<p>Thank you and welcome! We need your help. This is a small guide to how to complete your task - if you already know what to do, simply <a href="%s">jump right in</a>!</p>' % the_link)
#output.append('<h2>%s</h2>' % the_desc)
if 'webcompat.com' in the_link or 'github.com' in the_link or the_type == 'writetest':
output.append('<p><b>Before you start</b>: for this task you need a <a href="https://github.com">GitHub</a> account. If you don\'t have one, please register before you continue.</p>')
elif 'bugzilla.mozilla.org' in the_link:
output.append('<p><b>Before you start</b>: for this task you need a <a href="https://bugzilla.mozilla.org">Bugzilla</a> account. If you don\'t have one, please register before you continue.</p>')
if the_type == 'check' or the_type == 'regcheck':
output.append('<h3>Howto</h3>')
output.append('<ol>\n<li>Load <a href="%s">the bug report</a>, check the description\n<li>Visit the website and see if you can find the problem \n<li>Leave a comment in the bug report explaining if you found it or not' % the_link)
if the_type == 'regcheck':
output.append('<li>The bug report is likely closed. If you find the problem, re-open it.\n<li>If you don\'t see a way to reopen the bug, make sure to say so in the comment and somebody will help you.')
output.append('<li>If you <b>don\'t</b> see any problem, we need to fix the test. Comment in the bug that the test needs fixing.')
output.append('</ol>')
elif the_type == 'findcontact':
output.append('<h3>Howto</h3>')
output.append('<p><a href="%s">The bug report</a> should explain what the site needs to know. Your task is to reach out and try to find a contact that can help us fix the website.</p>' % the_link)
output.append('<ol>')
if the_host in data['hostIndex']:
if len(data['hostIndex'][the_host]['resolved']) > 0:
output.append('<li><b>Pro tip:</b> Look at old, resolved bugs <a href="/site/%s">listed here</a> - if we contacted them already, closed bugs may have useful contact details' % the_host)
                    # the format arguments below were swapped (href got the bug
                    # number instead of the link), which is likely why this loop
                    # appeared broken; fixed here
                    for bug_number in data['hostIndex'][the_host]['resolved']:
                        output.append('<li>..indeed, we may already have contacted this site about <a href="%s">bug %s</a>. Check if that bug has contact details!' % (data['bugs'][str(bug_number)]['link'], bug_number))
output.append('<li>You can look for "Contact us" forms or E-mail addresses on the site..')
output.append('<li>..you can try to find developers for the site on GitHub or LinkedIn..')
output.append('<li>..you can look for the site\'s accounts on Twitter or Facebook..')
output.append('<li>Once you\'ve found a promising way to contact the site, add a comment to the bug with this information.')
if 'webcompat.com' in the_link or 'github.com' in the_link:
output.append('<li>Please also remove the <b>needscontact</b> label and add a <b>contactready</b> label')
elif 'bugzilla.mozilla.org' in the_link:
output.append('<li>Please also remove [needscontact] and add [contactready] to the Whiteboard field if Bugzilla lets you do so. (If not just comment and someone will help)')
output.append('</ol>')
elif the_type == 'contact' or the_type == 'recontact':
output.append('<h3>Howto</h3>')
output.append('<p><a href="%s">The bug report</a> should explain what the site needs to know. Your task is to reach out to get them to fix the website.</p>' % the_link)
output.append('<p>The bug should already have some information about how to contact the site. Now try to write a polite letter..</p>')
output.append('<ol>')
if the_type == 'recontact':
output.append('<li>(We already tried to contact the site, but the problem seems to still be there.. Maybe find another contact?)')
output.append('<li>(It\'s probably a good idea to first check if they fixed the problem already without telling us. It happens..)</li>')
output.append('<li>You may find <a href="https://wiki.mozilla.org/Compatibility/StdLetters">our letter templates</a> useful')
output.append('<li>Say who you are, that you volunteer for Firefox, and that you\'re trying to reach a web developer')
if 'webcompat.com' in the_link or 'github.com' in the_link:
output.append('<li>Make sure you include the bug report link, <b>%s</b> '%the_link)
output.append('<li>Having sent a request to the site, please comment on the bug and add a <b>sitewait</b> label')
elif 'bugzilla.mozilla.org' in the_link:
output.append('<li>Please include this link: <b>https://webcompat.com/simplebug/index.html#mozilla/%s</b>' % the_bug)
output.append('<li>Having sent a request to the site, please comment on the bug and add [sitewait] to the Whiteboard field if Bugzilla lets you do so. (If not just comment and someone will help)')
output.append('</ol>')
elif the_type == 'writetest':
output.append('<p>Hey, cool! You want to write a test? A test is a small JavaScript that will return false if the problem exists, true if it is fixed.</p>')
output.append('<ol><li>Have a look at <a href="https://github.com/hallvors/sitecomptester-extension/blob/master/README.md#writing-tests">the documentation</a>')
output.append('<li>Start by cloning the <a href="https://github.com/hallvors/sitecomptester-extension/">sitecomptester-extension</a> repository on GitHub')
output.append('<li>Study the <a href="%s">bug report</a> and make sure you understand the problem' % the_link)
output.append('<li>Use two instances of Firefox, one of which is spoofing as the browser affected by the bug, and one spoofing as a browser that gets the content we want')
output.append('<li>See if you can come up with a script that says "true" when ran in the console of the "right content" browser but "false" in the other one')
output.append('<li>Add that script with the required meta data at the top of <a href="https://github.com/hallvors/sitecomptester-extension/blob/master/data/sitedata.js">sitedata.js</a> and create a pull request')
output.append('</ol>')
elif the_type == 'analyze':
output.append('<p>You rock! Plenty of sites need some help figuring out why they are broken.</p>')
output.append('<ol>')
output.append('<li>If possible, load the site in two browsers, one where it works and one where it fails')
output.append('<li>Study the <a href="%s">bug report</a> and make sure you understand the problem' % the_link)
output.append('<li>Use the developer tools to investigate. Good luck!')
output.append('<li>Comment in the bug when you found the cause. If you\'re confident you\'ve found it.. ')
if 'webcompat.com' in the_link or 'github.com' in the_link:
output.append(' add a <b>contactready</b> label')
elif 'bugzilla.mozilla.org' in the_link:
output.append(' add [contactready] to the Whiteboard field if Bugzilla lets you do so. (If not just comment and someone will help)')
output.append('</ol>')
elif '/site/' in environ['PATH_INFO']:
site = environ['PATH_INFO'][6:].strip()
output.append(head_html(site))
the_data = load_data()
ss = find_screenshot(the_data['masterbugtable'], site)
if ss:
output.append(str('<p><img src="%s" alt="Comparison screenshots of %s"></p>' % (ss, site)))
if site in the_data['masterbugtable']['hostIndex']:
buglist_to_table('Open bugs', the_data['masterbugtable']['hostIndex'][site]['open'], the_data, output)
buglist_to_table('Closed bugs', the_data['masterbugtable']['hostIndex'][site]['resolved'], the_data, output)
else:
output.append('<p>No information known about this site.</p>')
# data = data['lists'][site]
# output.append('<ul>')
# for domain in data['data']:
# output.append('<li><a href="/sites/%s">%s</a>' % (domain, domain))
# output.append('<pre>')
# output.append(json.dumps(data, indent=4))
elif 'screenshots/headers.py' in environ['PATH_INFO']:
url = parameters['url'][0]
if not '://' in url:
url = 'http://%s'%url
return check_url(url)
return output
def sanitize(dirty_html):
    dirty_html = dirty_html.replace('&', '&amp;')
    dirty_html = dirty_html.replace('<', '&lt;')
    dirty_html = dirty_html.replace('>', '&gt;')
    dirty_html = dirty_html.replace('"', '&quot;')
return dirty_html
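# Example of the escaping behaviour (a sketch):
#     sanitize('<a href="x">Q&A</a>')
#     -> '&lt;a href=&quot;x&quot;&gt;Q&amp;A&lt;/a&gt;'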
application = arewecompatibleyet
if __name__ == '__main__':
port = int(os.getenv('PORT', '8000'))
srv = make_server('0.0.0.0', port, application)
srv.serve_forever()
|
hallvors/mobilewebcompat
|
wsgi.py
|
Python
|
mpl-2.0
| 15,424
|
[
"VisIt"
] |
7a98eb636c1eedcc4cf2ebe2f0c3feb418f8e8ed5ebd7e9311874b492c9b19d8
|
from distutils.core import setup
import os
def version():
setupDir = os.path.dirname(os.path.realpath(__file__))
versionFile = open(os.path.join(setupDir, 'mingle', 'VERSION'))
return versionFile.read().strip()
setup(
name='mingle',
version=version(),
author='Joel Boyd, Ben Woodcroft, Donovan Parks',
author_email='donovan.parks@gmail.com',
packages=['mingle'],
scripts=['bin/mingle'],
package_data={'mingle': ['VERSION']},
url='http://pypi.python.org/pypi/mingle/',
license='GPL3',
description='Infer gene trees with HMMs or BLAST homology search.',
long_description=open('README.md').read(),
install_requires=[
"biopython >= 1.63",
"graftm >= 0.0.1",
"biolib >= 0.0.7",
"dendropy >= 4.0.0"],
)
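# Typical usage, a sketch relying only on standard distutils commands:
#     python setup.py sdist      # build a source distribution
#     python setup.py install    # install the package and the 'mingle' script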
|
dparks1134/mingle
|
setup.py
|
Python
|
gpl-3.0
| 793
|
[
"BLAST",
"Biopython"
] |
b8d433e49c7fe1cb73228b0ed12531f2104e6c4efc0ba9984c1d374a8037b031
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import pybel
def smi2mol(smiles):
"""
    Read a SMILES string and return 3D mol file content.
    :param smiles: the molecule as a SMILES string
    :return: the molecule serialized in MDL mol format, with 3D coordinates
"""
mol = pybel.readstring("smi", smiles)
mol.make3D()
return mol.write("mol")
|
a358003542/expython
|
expython/chem/babelit.py
|
Python
|
mit
| 288
|
[
"Pybel"
] |
d0a6be277c3a649d868b385eb9316060f7d10c59a11d1d9cacf4ab4a221984d2
|
#!/usr/bin/python
# ------------------------------------------------------------------
# BSOF/I - Block structured factorization and inversion codes
# for CPU+GPU platforms
# Copyright (c) 2013, Sergiy Gogolenko
# e-mail: sgogolenko@ucdavis.edu
# ------------------------------------------------------------------
# Description:
# Benchmarking script for Dirac (temporary solution).
# ------------------------------------------------------------------
import os
############################################################
# Parameters
############################################################
L = 4
Lmax = 20
n_list = (1024,)# 512, 256)
nthreads = 12
implementations = ('hybrid',)# 'cpu')
platform = 'dirac_tesla' #'dirac_fermi'
ofolder = r"./results/%s" % platform
############################################################
def call_minimalistic():
"""
    Job-submission routine with minimal requirements
    on the Python interpreter version
"""
cmd_format = r"./bench/%s/bench -t %s -n %s -L %s -f %s/%04i_%02i.blk"
job_file_format = r"%s/%s%04i.job"
job_template_format = r"config/jobs/%s.sh.in"
ntests = Lmax*(1024/n) - 3
cmd = cmd_format % (implementation, ntests, n, L,
ofolder_local, n, nthreads)
job_template_handle = open(job_template_format % platform, "r")
job_script = job_template_handle.read()
job_template_handle.close()
job_script = job_script.replace(r"@cmd@", cmd)
# print job_script
job_file = job_file_format % (ofolder_local, platform,n)
job_script_handle = open(job_file, "w")
job_script_handle.write(job_script)
job_script_handle.close()
os.system(r"qsub -S /bin/bash %s" % job_file)
# ############################################################
# def call_modern():
# """
# Implementation of job running
# for Python version >= 2.5
# """
# cmd_format = r"./bench/{implement}/bench -t {ntests} -n {blksize} -L {blknum}" \
# " -f {ofolder}/{blksize:04d}_{nthreads:02d}.blk"
# job_file_format = r"{ofolder}/{platform}{blksize:04d}.job"
# job_template_format = r"config/jobs/{platform}.sh.in"
# ntests = Lmax*(1024/n) - 3
# cmd = cmd_format.format(implement = implementation,
# ofolder = ofolder_local,
# nthreads=nthreads, ntests=ntests,
# blknum=L, blksize=n)
# with open(job_template_format.format(platform=platform), "r") as job_template_handle:
# job_script = job_template_handle.read()
# job_script = job_script.replace(r"@cmd@", cmd)
# print job_script
# job_file = job_file_format.format(ofolder = ofolder_local,
# platform = platform, blksize = n)
# with open(job_file, "w") as job_script_handle:
# job_script_handle.write(job_script)
# os.system(r"qsub -S /bin/bash %s" % job_file)
############################################################
# Cycle over commands to run
############################################################
if not os.path.exists(ofolder): os.makedirs(ofolder)
for implementation in implementations:
ofolder_local = os.path.join(ofolder, implementation)
if not os.path.exists(ofolder_local): os.mkdir(ofolder_local)
# for nthreads in (12,): # 6, 4, 2)
# os.environ['OMP_NUM_THREADS'] = str(nthreads)
# os.environ['MKL_NUM_THREADS'] = str(nthreads)
for n in n_list:
call_minimalistic() # call_modern() #
|
SGo-Go/BSOFI
|
bench/dirac_bench.py
|
Python
|
gpl-2.0
| 3,529
|
[
"DIRAC"
] |
d4a4d25da115adda6c65cb09128a29f2efa2a50276ca400722f5c903cced73dc
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
'''
=========================================================================
Program: Visualization Toolkit
Module: TestNamedColorsIntegration.py
Copyright (c) Ken Martin, Will Schroeder, Bill Lorensen
All rights reserved.
See Copyright.txt or http://www.kitware.com/Copyright.htm for details.
This software is distributed WITHOUT ANY WARRANTY; without even
the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the above copyright notice for more information.
=========================================================================
'''
import sys
import vtk
import vtk.test.Testing
from vtk.util.misc import vtkGetDataRoot
VTK_DATA_ROOT = vtkGetDataRoot()
'''
Prevent .pyc files from being created.
Stops the vtk source being polluted
by .pyc files.
'''
sys.dont_write_bytecode = True
import TestFixedPointRayCasterNearest
class TestFixedPointRayCasterLinearCropped(vtk.test.Testing.vtkTest):
def testFixedPointRayCasterLinearCropped(self):
ren = vtk.vtkRenderer()
renWin = vtk.vtkRenderWindow()
iRen = vtk.vtkRenderWindowInteractor()
tFPRCN = TestFixedPointRayCasterNearest.FixedPointRayCasterNearest(ren, renWin, iRen)
volumeProperty = tFPRCN.GetVolumeProperty()
volumeMapper = tFPRCN.GetVolumeMapper()
for j in range(0, 5):
for i in range(0, 5):
volumeMapper[i][j].SetCroppingRegionPlanes(10, 20, 10, 20, 10, 20)
volumeMapper[i][j].SetCroppingRegionFlags(253440)
volumeMapper[i][j].CroppingOn()
volumeProperty[i][j].SetInterpolationTypeToLinear()
# render and interact with data
renWin.Render()
img_file = "TestFixedPointRayCasterLinearCropped.png"
vtk.test.Testing.compareImage(iRen.GetRenderWindow(), vtk.test.Testing.getAbsImagePath(img_file), threshold=10)
vtk.test.Testing.interact()
if __name__ == "__main__":
vtk.test.Testing.main([(TestFixedPointRayCasterLinearCropped, 'test')])
|
HopeFOAM/HopeFOAM
|
ThirdParty-0.1/ParaView-5.0.1/VTK/Rendering/Volume/Testing/Python/TestFixedPointRayCasterLinearCropped.py
|
Python
|
gpl-3.0
| 2,104
|
[
"VTK"
] |
6ab4fe6c6d0f784106cd459f1a7f45b2478701ef953e599f0b1d6c9037bf16ce
|
""" Example of ExecutorMindHandler implementation
"""
from __future__ import print_function
import time
import random
from DIRAC import S_OK, gLogger
from DIRAC.Core.Utilities import DEncode
from DIRAC.Core.Base.ExecutorMindHandler import ExecutorMindHandler
random.seed()
class PingPongMindHandler(ExecutorMindHandler):
MSG_DEFINITIONS = {'StartReaction': {'numBounces': (int, long)}}
auth_msg_StartReaction = ['all']
def msg_StartReaction(self, msgObj):
bouncesLeft = msgObj.numBounces
taskid = time.time() + random.random()
taskData = {'bouncesLeft': bouncesLeft}
        return self.executeTask(taskid, taskData)
auth_startPingOfDeath = ['all']
types_startPingOfDeath = [int]
def export_startPingOfDeath(self, numBounces):
taskData = {'bouncesLeft': numBounces}
gLogger.info("START TASK = %s" % taskData)
return self.executeTask(int(time.time() + random.random()), taskData)
@classmethod
def exec_executorConnected(cls, trid, eTypes):
"""
This function will be called any time an executor reactor connects
eTypes is a list of executor modules the reactor runs
"""
gLogger.info("EXECUTOR CONNECTED OF TYPE %s" % eTypes)
return S_OK()
@classmethod
def exec_executorDisconnected(cls, trid):
"""
This function will be called any time an executor disconnects
"""
return S_OK()
@classmethod
def exec_dispatch(cls, taskid, taskData, pathExecuted):
"""
Before a task can be executed, the mind has to know which executor module can process it
"""
gLogger.info("IN DISPATCH %s" % taskData)
if taskData['bouncesLeft'] > 0:
gLogger.info("SEND TO PLACE")
return S_OK("Test/PingPongExecutor")
return S_OK()
@classmethod
def exec_prepareToSend(cls, taskId, taskData, trid):
"""
"""
return S_OK()
@classmethod
def exec_serializeTask(cls, taskData):
gLogger.info("SERIALIZE %s" % taskData)
return S_OK(DEncode.encode(taskData))
@classmethod
def exec_deserializeTask(cls, taskStub):
gLogger.info("DESERIALIZE %s" % taskStub)
return S_OK(DEncode.decode(taskStub)[0])
@classmethod
def exec_taskProcessed(cls, taskid, taskData, eType):
"""
This function will be called when a task has been processed and by which executor module
"""
gLogger.info("PROCESSED %s" % taskData)
taskData['bouncesLeft'] -= 1
return cls.executeTask(taskid, taskData)
@classmethod
def exec_taskError(cls, taskid, taskData, errorMsg):
print("OOOOOO THERE WAS AN ERROR!!", errorMsg)
return S_OK()
@classmethod
def exec_taskFreeze(cls, taskid, taskData, eType):
"""
A task can be frozen either because there are no executors connected that can handle it
or becase an executor has requested it.
"""
print("OOOOOO THERE WAS A TASK FROZEN")
return S_OK()
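# A client-side sketch (an assumption, not part of this handler) of how the
# ping-pong reaction could be triggered; the service name 'Test/PingPongMind'
# is illustrative and depends on the actual configuration:
#
#     from DIRAC.Core.DISET.RPCClient import RPCClient
#     pingPong = RPCClient('Test/PingPongMind')
#     result = pingPong.startPingOfDeath(10)  # bounce the task 10 times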
|
fstagni/DIRAC
|
docs/source/DeveloperGuide/AddingNewComponents/DevelopingExecutors/PingPongMindHandler.py
|
Python
|
gpl-3.0
| 2,878
|
[
"DIRAC"
] |
a86d7328c53dddac5cb0c4491f9061e514e78695f28593d63063a287e2159bde
|
# -*- coding: utf-8 -*-
"""Module containing the nc2map Maps class
Basic control class for the nc2map module, governing multiple CbarManager
instances, multiple EvaluatorBase instances and multiple MapBase and
LinePlot instances."""
import glob
import logging
import pickle
from copy import copy, deepcopy
from collections import OrderedDict
from itertools import izip, izip_longest, chain, permutations, product
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
import numpy as np
import matplotlib as mpl
from matplotlib.animation import FuncAnimation
from _cbar_manager import CbarManager
import readers
from .warning import warn, critical
from _maps_manager import MapsManager, _props
from mapos import MapBase, FieldPlot, WindPlot, returnbounds, round_to_05
import formatoptions
from formatoptions import FmtBase
from evaluators import ViolinEval, FldMeanEvaluator
evaluatorsdict = { # dictionary with evaluator classes
'violin': ViolinEval,
'fldmean': FldMeanEvaluator
}
class Maps(MapsManager):
"""object controlling multiple :class:`~nc2map.mapos.MapBase` instances
This is the base class of the :mod:`nc2map` module, that controls the plot
and interactive usage of multiple variables at the same time.
Parameters
----------
linesonly: bool
plots lines only and uses the :meth:`addline` method
Further ``*args`` and ``**kwargs`` are determined by the :meth:`addmap`
(or :meth:`addline`) method.
Examples
--------
Open all variables in a NetCDF file::
ncfile = "my-own-ncfile.nc"
mymaps = nc2map.Maps(ncfile)
Open specific variables by their name for the first and second time step::
mymaps = nc2map.Maps(ncfile, vlst=['t2m', 'u'], time=[0, 1])
Open the 1st of April 2015::
mymaps = nc2map.Maps(ncfile, time=["2015-04-01"])
Attributes
----------
maps: list
List of :class:`~nc2map.mapos.MapBase` instances (see also
:meth:`get_maps` method)
lines: list
List of :class:`~nc2map.mapos.LinePlot` instances (see also
:meth:`get_maps` method)
evaluators: list
List of :class:`~nc2map.evaluators.EvaluatorBase` instances (see also
the :meth:`evaluate` method
meta
Methods
-------
addmap(ncfile, vlst, ...)
add another map to the Maps instance
addline(ncfile, vlst, ...)
add another LinePlot to the Maps instance
update(...)
update the plots
output(filename, ...)
Saves the specified figures to a file
make_movie(filename, ...)
makes a movie of the specified figures and saves it
update_cbar(...)
updates and creates shared colorbars to handle multiple
:class:`~nc2map.mapos.MapBase` instances
evaluate(maps, ...)
evaluate your maps with %s evaluators
nextt(*args, **kwargs)
updates specified maps to the next timestep
prevt(*args, **kwargs)
updates all maps to the previous timestep
reset(...)
reinitializes the Maps instance, creates new figures and makes a new
plot
show()
show all figures
undo(num=-1)
undo the last `num` changes made
redo(num=1)
redo `num` changes that were undone by the :meth:`undo`
method
disable_maps(**kwargs)
disables the specified :class:`~nc2map.mapos.MapBase` from the
updating process (see :meth:`update`)
enable_maps(**kwargs)
enables the specified :class:`~nc2map.mapos.MapBase` in the
updating process (see :meth:`update`)
dump_nc(output, ...)
save your data into a new NetCDF file
get_disabled(**kwargs)
Returns disabled maps specified by ``**kwargs``
get_maps(mode='both', maps=[], _meta={}, **kwargs)
returns a list of all :class:`~nc2map.mapos.MapBase` instances
contained in the :class:`Maps` instance
get_evaluator(*args, **kwargs)
return the class of one of the above evaluators to see their
documentations
get_cbars(*args, **kwargs)
returns a list of shared colorbars (i.e. the corresponding
:class:`~nc2map.CbarManager` instances)
get_figs(*args, **kwargs)
returns a dictionary with figs as keys and the corresponding
:class:`~nc2map.mapos.MapBase` instance
get_winds(*args, **kwargs)
if not windonly: Returns the :class:`~nc2map.mapos.WindPlot` instances
of the :class:`~nc2map.mapos.FieldPlot` instances
asdict(*args)
returns the current formatoptions of the Maps instance as dictionary
save(filename, ...)
creates a pickle file which can be used to reload the Maps instance
with the :func:`nc2map.load` function
removecbars(*args)
removes the specified colorbars if any cbars are drawn with the
:meth:`update_cbar` method
close(num, **kwargs)
closes the Maps instance and all (or some) corresponding
:class:`~nc2map.mapos.MapBase` instances and figures
See methods below for further details.
"""
__doc__ %= ', '.join(evaluatorsdict)
# ------------------ define properties here -----------------------
# mapdata dictionary property
evaluators = _props.default(
'evaluators', """
List containing the evaluator instances of the Maps instance""")
def __init__(self, *args, **kwargs):
"""Initialization method for Maps instance
Parameters
----------
linesonly: bool
plots lines only and uses the :meth:`addline` method
Further ``*args`` and ``**kwargs`` are determined by the
:meth:`~nc2map.Maps.addmap` method (or :meth:`~nc2map.Maps.addline`
method)."""
self.set_logger()
super(Maps, self).__init__()
self._cbars = []
self.evaluators = []
self._disabled = set()
try:
linesonly = kwargs.pop('linesonly')
except KeyError:
linesonly = False
if not kwargs.get('_noadd'):
if linesonly:
self.addline(*args, **kwargs)
else:
self.addmap(*args, **kwargs)
def evaluate(self, evalname, *args, **kwargs):
"""Perform an evaluation of your plots
Makes visual evaluations of the specified
:class:`~nc2map.mapos.MapBase`
Parameters
----------
evalname: {%s}
Name of the evaluator
``*args`` and ``**kwargs`` depend on the chosen evalutor.
See Also
--------
nc2map.Maps.get_evaluator: method for documentation"""
# docstring is extended below
try:
evaluator = evaluatorsdict[evalname]
except KeyError:
raise KeyError(
"Unknown evaluator %s! Possible evaluator names are %s!" % (
evalname, ', '.join(evaluatorsdict)))
self.evaluators.append(
evaluator(*args, mapsin=self, **kwargs))
return self.evaluators[-1]
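    # A usage sketch (hedged; any 'violin' arguments beyond the name depend
    # on the ViolinEval signature and are omitted here):
    #     mymaps = Maps('my-own-ncfile.nc', vlst=['t2m'])
    #     ev = mymaps.evaluate('violin')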
def get_evaluator(self, evalname):
"""Returns the evaluator class
Returns the evaluator class specified by `evalname` for out of the
:mod:`nc2map.evaluators` module.
Parameters
----------
evalname: {%s}
Name of the evaluator
Returns
-------
evaluator: object
:class:`~nc2map.evaluators.EvaluatorBase` subclass corresponding
to `evalname`
See Also
--------
nc2map.Maps.evaluate: method for documentation"""
# docstring is extended below
try:
return evaluatorsdict[evalname]
except KeyError:
raise KeyError(
"Unknown evaluator %s! Possible evaluator names are %s!" % (
evalname, ', '.join(evaluatorsdict)))
def disable_maps(self, **kwargs):
"""Disables maps and lines.
Disables the specified :class:`~nc2map.mapos.MapBase` and
:class:`~nc2map.mapos.LinePlot` instances from the updating
(:meth:~nc2map.Maps.update), output, etc..
``**kwargs`` are determined by the :meth:`~nc2map.Maps.get_maps`
method.
See Also
--------
        nc2map.Maps.enable_maps: Method to reenable the instances
nc2map.Maps.get_disabled: Method to get disabled Maps"""
maps = self.get_maps(**kwargs)
self._disabled.update(maps)
for mapo in maps:
try:
self.maps.remove(mapo)
except ValueError:
self.lines.remove(mapo)
def enable_maps(self, **kwargs):
"""Enables maps and lines
Enables the specified :class:`~nc2map.mapos.MapBase` and
:class:`~nc2map.mapos.LinePlot` instances for the updating method
(:meth:~nc2map.Maps.update), output, etc..
``**kwargs`` are determined by the :meth:`~nc2map.Maps.get_maps`
method.
See Also
--------
        nc2map.Maps.disable_maps: Method to disable the maps or lines
nc2map.Maps.get_disabled: Method to get disabled Maps"""
try:
maps = kwargs.pop('maps')
except KeyError:
maps = self._disabled
maps = self.get_maps(maps=maps, **kwargs)
self._disabled -= set(maps)
for mapo in maps:
if isinstance(mapo, MapBase):
self.maps.append(mapo)
else:
self.lines.append(mapo)
def get_disabled(self, **kwargs):
"""Returns disabled maps and lines
Gives the specified disabled :class:`~nc2map.mapos.MapBase` and
:class:`~nc2map.mapos.LinePlot`.
``**kwargs`` are determined by the :meth:`~nc2map.Maps.get_maps`
method.
Returns
-------
maps and lines: list
List of the disabled :class:`~nc2map.mapos.MapBase` and
:class:`~nc2map.mapos.LinePlot` instances
See Also
--------
        nc2map.Maps.disable_maps: Method to disable the maps or lines
        nc2map.Maps.enable_maps: Method to reenable the instances"""
try:
maps = kwargs.pop('maps')
except KeyError:
maps = self._disabled
return self.get_maps(maps=self._disabled, **kwargs)
def output(self, output, *args, **kwargs):
"""Save the figures.
Save the figures into a file. Just setting output = `filename.pdf`
will save all figures of the maps object to `filename.pdf`
Parameters
----------
output: string, 1D-array of strings or object
- An object may be an open
matplotlib.backends.backend_pdf.PdfPages instance
- If string: %%(key)s will be replaced by the meta informations
contained in the MapBase instances.
*args
- Either figures or MapBase instances which shall be saved (in
case of MapBase, the corresponding figure will be saved)
- 'tight' making the bbox_inches of the plot tight, i.e. reduce
the output to the plot margins
**kwargs
- returnpdf: bool, optional
Default: False. If True and all files are plotted into one
matplotlib.backends.backend_pdf.PdfPages instance, this will be
returned
- any keyword that specifies the MapBase instances to save (see
get_maps method). Only enabled mapos are considered (see
disable_mapo and enable_mapo method).
- any other keyword as specified in the matplotlib.pyplot.savefig
function.
Returns
-------
pdf : matplotlib.backends.backend_pdf.PdfPages
if the `returnpdf` keyword is True
See Also
--------
nc2map.Maps.make_movie: Method to create a movie over the timesteps"""
# the docstring is extended by the plt.savefig docstring below
saveops = {key: value for key, value in kwargs.items()
if key not in self.meta.keys() + ['mode', 'maps']}
try:
returnpdf = kwargs.pop('returnpdf')
except KeyError:
returnpdf = False
if 'tight' in args:
saveops['bbox_inches'] = 'tight'
args = tuple([arg for arg in args if arg != 'tight'])
kwargs = {key: value for key, value in kwargs.items()
if key in self.meta.keys() + ['mode', 'maps']}
if not args:
figs = self.get_figs(**kwargs).keys()
elif isinstance(args[0], MapBase):
labels = self.get_label_dict(*args)
figs = self.get_figs(**labels)
else:
figs = args
no_new = hasattr(output, 'savefig')
if isinstance(output, str) or no_new:
if no_new or output[-4:] in ['.pdf', '.PDF']:
if not no_new:
output = self._replace(output, *figs, delimiter='-')
pdf = output if no_new else PdfPages(output)
for fig in figs:
pdf.savefig(fig, **saveops)
if not no_new and not returnpdf:
self.logger.info('Saving plot to %s', output)
pdf.close()
return None
else:
return pdf
else:
                strout = output
                output = []
                for fig in figs:
                    output.append(self._replace(strout, fig, delimiter='-'))
else:
pass
# test output
try:
if len(np.shape(output)) > 1:
raise ValueError(
'Output array must be a 1D-array!')
if len(figs) != len(output):
raise ValueError((
'Length of output names (%i) does not fit to the number '
                    'of figures (%i).') % (len(output), len(figs)))
except TypeError:
raise TypeError((
'Output names must be either a string or an 1D-array of '
'strings!'))
for fig in figs:
fig.savefig(output[figs.index(fig)], **saveops)
self.logger.info('Plot saved to %s', output[figs.index(fig)])
return output
def make_movie(self, output, fmt={}, onecbar={}, steps=None,
checklen=False, calc_bounds=True, *args, **kwargs):
"""Create a movie of the maps
Method to create a movie out of the specified
:class:`~nc2map.mapos.MapBase` instances with the specified (or
current) formatoptions.
Parameters
----------
output: string or 1D-array of strings.
- If string: meta attributes (see :attr:`meta`) will be replaced
by the attributes contained in the figures.
- If 1D-array: The length of the array must fit to the specified
figures
fmt: dict, optional
Default: {}. Formatoptions (same hierarchical order as in the
:meth:`~nc2map.Maps.addmap` method) where the values of the
formatoption keywords need to be (iterable) 1D-arrays
onecbar: dict or list of dictionaries, optional
Default: {}. Same settings as for update_cbar function but (like
`fmt`) with values of formatoption keywords being 1D-arrays with
same length as number of `steps`
steps: List of integers or None, optional
If None, all timesteps in the nc-file are used for the movie.
Otherwise set the timesteps as list
checklen: bool, optional
Default: False. If False, the formatoption keywords are simply
iterated (and possibly repeated). If True, their lenghts have to
match the number of steps.
calc_bounds: bool, optional
If True and bounds are set automatically, they are computed to
match the whole period. Otherwise the current bounds are used.
Further ``*args`` and ``**kwargs`` may be figures, MapBase instances,
var=[...], etc. as used in the :meth:`~nc2map.Maps.get_figs` method to
specify the figures to make movies of.
Furthermore any valid keyword of the matplotlib.animation.FuncAnimation
save function can be set. Default value for writer is 'imagemagick',
extra_args are ['-vcodec', 'libx264'].
Notes
-----
if filename is in the additional ``**kwargs``, it will replace the
output variable.
See Also
--------
nc2map.Maps.output: Method to create a single picture of the plots"""
# docstring will be extended below
# default options for kwargs if not in self.meta attribute, etc.
defaults = {'dpi': None, 'fps': 3, 'writer': 'imagemagick',
'extra_args': ['-vcodec', 'libx264']}
dimnames = self.meta.keys() + ['maps', 'mode', '_meta']
# options as set in kwargs
movieops = {key: value for key, value in kwargs.items()
if key not in dimnames}
for key, value in defaults.items():
movieops.setdefault(key, value)
# delete options from kwargs
kwargs = {key: value for key, value in kwargs.items()
if key in dimnames}
# reset output to 'filename' in movieops if given
if 'filename' in movieops:
output = movieops.pop('filename')
fmt = self._setupfmt(fmt)
figs = self.get_figs(*args, **kwargs)
        for i, fig in enumerate(figs):
if isinstance(output, str):
out = self._replace(output, fig, delimiter='-')
else:
out = output[i]
maps = figs[fig]
cbars = self.get_cbars(*maps)
if steps is None:
steps = range(len(maps[0].reader.time))
# check lengths
if checklen:
for name in [mapo.name for mapo in maps
if mapo.name in fmt]:
try:
valuelens = map(lambda x: len(x) != len(steps),
fmt[name].values())
except TypeError, e:
print(
"Could not estimate lengths. To use iterables set "
"checklon to False.\n")
raise e
if any(valuelens):
wrongkeys = [key for key, value in fmt[name].items()
if len(value) != len(steps)]
raise ValueError((
"Lengths of arguments for %s do not match to "
"number of steps (%i)! Set checklen keyword to "
"False to use iterables.") % (
', '.join(map(
lambda x: '%s (%i)' % (x[0], x[1]),
[(key, len(fmt[name][key]))
for key in wrongkeys])),
len(steps)))
# save bound options
bounds = [getattr(mapo.fmt, 'bounds') for mapo in maps]
windbounds = [getattr(mapo.wind.fmt, 'bounds') for mapo in maps
if hasattr(mapo, 'wind') and mapo.wind is not None]
if calc_bounds:
# modify bounds
self.logger.info("Calculate bounds")
# handle the mapobject coordinated by one single cbar
for cbar in cbars:
cbar.set_global_bounds(
maps=set(set(maps) & set(cbar.maps)),
time=steps)
                # the rest: maps not already governed by a shared cbar
                self.set_global_bounds(maps=set(maps) - set(chain(*(
                    cbar.maps for cbar in cbars))), time=steps)
# izip has no __len__ method which is required by the animation
# function. Therefore we define a subclass and use it for the data
# generator
class myizip(izip):
def __len__(self):
return len(steps)
# data generator
if cbars != []:
data_gen = myizip(
myizip(*(mapo._moviedata(steps, **fmt[mapo.name])
for mapo in maps)),
myizip(*(cbar._moviedata(steps, **onecbar)
for cbar in cbars)))
else:
data_gen = myizip(*(mapo._moviedata(steps, **fmt[mapo.name])
for mapo in maps))
# run function
if cbars != []:
def runmovie(args):
return [mapo._runmovie(args[0][maps.index(mapo)])
for mapo in maps] + \
[cbar._runmovie(args[1][cbars.index(cbar)])
for cbar in cbars]
else:
def runmovie(args):
return [mapo._runmovie(args[maps.index(mapo)])
for mapo in maps]
            # movie initialization function
            def init_func():
                plt.figure(fig.number)
                plt.draw()
                #self.update({}, add=False, delete=False)
                #if self._cbars:
                    #self.update_cbar({}, add=False, delete=False)
            self.logger.info("Make movie")
            ani = FuncAnimation(fig, runmovie, frames=data_gen, repeat=True,
                                init_func=init_func)
for mapo in maps: mapo._resize=False
if out == 'show':
plt.show(block=False)
return ani
ani.save(out, **movieops)
self.logger.info('Saved movie to %s', out)
# restore initial settings
self.update(self._fmt[-1][0], add=False, delete=False,
todefault=True)
if not self._fmt[-1][2]:
for cbar in self._cbars:
cbar._removecbar()
else:
self.update_cbar(*self._fmt[-1][2], add=False,
delete=False, todefault=True)
return
def reinit(self, *args, **kwargs):
"""Reinitialize the specified maps
This method makes the specified :class:`~nc2map.mapos.MapBase`
instances in the :attr:`~nc2map.Maps.maps` to get the data
from the reader (may be useful if you changed the reader)
``*args`` and ``**kwargs`` are determined by the nc2map.Maps.get_map
method"""
maps = self.get_maps(*args, **kwargs)
for mapo in maps:
mapo._reinitialize = 1
self.update(maps=maps)
def update(self, fmt={}, add=True, delete=True, todefault=False,
force=False, windonly=False, **kwargs):
"""Update the MapBase instances.
This method can be used for the interactive usage of the
:class:`~nc2map.Maps` instance to update the specified
:class:`~nc2map.mapos.MapBase` instances
Parameters
----------
fmt: dictionary
The same shape and options like in the :meth:`~nc2map.Maps.addmap`
method.
add: bool
Default: True. Adds the new formatoptions to old formatoptions
allowing a undoing via the :meth:`~nc2map.Maps.undo` method
delete: bool
Default: True. Deletes the newer formatoptions for the
:meth:`~nc2map.Maps.redo` method.
todefault: bool
Default: False. Sets all formatoptions which are not specified by
`fmt` or ``**kwargs`` to default.
force: bool
By default the formatoption keywords whose values correspond to
what is already set, are removed if not `force` is True.
windonly: bool
If True, ``**kwargs`` and `fmt` are only affecting the WindPlot
instances of the FieldPlot instances (same as::
fmt={'windplot': {key :val}}
for a key, value pair in `fmt` or ``**kwargs``.
**kwargs
may be any valid formatoption keyword or a key from the meta
attribute to specifically select :class:`~nc2map.mapos.MapBase`
instances (see :meth:`~nc2map.Maps.get_maps` method).
Notes
-----
You have to use the `fmt` keyword if you want to update the dimensions
(e.g. time, etc.). Otherwise it will be regarded as a specifier to
select the corresponding MapBase instances (see below). In other
words::
mymaps.update(fmt={'time': 1}, title='test')
will update all MapBase instances to time=1 and title='test', whereas::
mymaps.update(time=1, title='test')
will only update the title of the MapBase with time=1 to title='test'
See Also
--------
nc2map.Maps.update_lines: Method to update the LinePlots
nc2map.Maps.update_cbar: Method to update shared colorbars"""
# if not deepcopied, the update in the next line will use previous fmts
# given to the update function
fmt = deepcopy(fmt)
fmt.update({key: value for key, value in kwargs.items()
if key not in self.meta.keys() + ['maps', '_meta']})
fmt = self._setupfmt(fmt)
if windonly:
get_func = self.get_winds
else:
get_func = self.get_maps
maps = get_func(mode='maps',
**{key: value for key, value in kwargs.items()
if key in self.meta.keys() + ['maps', '_meta']})
# update maps
for mapo in maps:
mapo.update(todefault=todefault, force=force, **fmt[mapo.name])
# update figure window title and draw
for cbar in self.get_cbars(*maps):
cbar._draw_colorbar()
for fig in self.get_figs(*maps):
plt.figure(fig.number)
self._set_window_title(fig)
# if it is part of a cbar, it has already been drawn above
plt.draw()
# add to old fmts
if add:
self._fmt.append(self.asdict('maps', 'lines', 'cbars'))
# delete new fmts
if delete:
self._newfmt = []
del fmt
def set_global_bounds(self, maps=None, time=slice(None)):
"""Calculate colorbar bounds considering all time steps
Sets the bounds in the specified :class:`~nc2map.mapos.MapBase`
instances to the limits over all timesteps
Parameters
----------
maps: list
List of MapBase instances. If None, self.maps is used
time: iterable or slice
Alternate time slice to use in the calculation"""
# now handle the rest of the mapobjects
if maps is None:
maps = self.maps
for mapo in maps:
boundsnames = ['rounded', 'sym', 'minmax', 'roundedsym']
if (isinstance(mapo.fmt.bounds, tuple)
and isinstance(mapo.fmt.bounds[0], str)):
if isinstance(mapo, FieldPlot):
if (mapo.fmt.bounds[0] in boundsnames
and len(mapo.fmt.bounds) == 2):
boundsdata = map(
lambda x: (np.ma.min(x), np.ma.max(x)),
(data[:] for data in mapo.gen_data(
time=time)))
boundsdata = np.ma.array(
boundsdata, mask=np.isnan(boundsdata))
mapo.fmt.bounds = returnbounds(
boundsdata[:], mapo.fmt.bounds)
elif (mapo.fmt.bounds[0] in boundsnames
and len(mapo.fmt.bounds) == 3):
mapo.fmt.bounds = returnbounds(np.ma.concatenate(
tuple(data[:] for data in mapo.gen_data(
                                time=time))), mapo.fmt.bounds)
if (isinstance(mapo, WindPlot)
or (hasattr(mapo, 'wind')
and mapo.wind is not None
and mapo.wind._bounds is not None)):
if isinstance(mapo, WindPlot):
wind = mapo
else:
wind = mapo.wind
if wind.fmt.bounds[0] in boundsnames:
try:
wind.fmt.color = wind.set_bounds(wind.fmt.color,
                                                             time=time)
except (TypeError, ValueError):
pass
def update_lines(self, fmt={}, add=True, delete=True,
todefault=False, **kwargs):
"""Function to update the lines
This method can be used for the interactive usage of the
:class:`~nc2map.Maps` instance to update the specified
:class:`~nc2map.mapos.LinePlot` instances
Parameters
----------
fmt: dictionary
the same shape and options like in the addline method
add: bool
Default: True. Adds the new formatoptions to old formatoptions
allowing a undoing via the :meth:`~nc2map.Maps.undo` method
delete: bool
Default: True. Deletes the newer formatoptions for the
:meth:`~nc2map.Maps.redo` method.
todefault: bool
Default: False. Sets all formatoptions which are not specified by
`fmt` or ``**kwargs`` to default.
**kwargs
may be any valid formatoption keyword or a key from the meta
attribute to specifically select :class:`~nc2map.mapos.MapBase`
instances (see :meth:`~nc2map.Maps.get_maps` method).
See Also
--------
nc2map.Maps.update: Method to update the maps
nc2map.Maps.update_cbar: Method to update shared colorbars"""
        # if not copied, the update in the next line will use previous fmts
        # given to the update function
        fmt = copy(fmt)
dims_identifiers = dict(item for item in kwargs.items()
if item[0] in self.meta.keys())
for dim in dims_identifiers:
del kwargs[dim]
fmt.update(kwargs)
lines = self.get_maps(mode='lines', **dims_identifiers)
names = [line.name for line in lines]
default_fmt = {key: val for key, val in fmt.items()
if key not in names}
        final_fmt = {name: copy(default_fmt) for name in names}  # copy so names don't share one dict
for name, name_dict in final_fmt.items():
name_dict.update(fmt.get(name, {}))
# update maps
for line in lines:
line.update(todefault=todefault, **final_fmt[line.name])
for fig in self.get_figs(name=names, mode='lines'):
plt.figure(fig.number)
self._set_window_title(fig)
# if it is part of a cbar, it has already been drawn above
plt.draw()
# add to old fmts
if add:
self._fmt.append(self.asdict('maps', 'lines', 'cbars'))
# delete new fmts
if delete:
self._newfmt = []
del fmt
def addline(self, *args, **kwargs):
# docstring is set below to be equal to MapsManager.addline method
try:
add = kwargs.pop('add')
except KeyError:
add = True
try:
delete = kwargs.pop('delete')
except KeyError:
delete = True
super(Maps, self).addline(*args, **kwargs)
# add to old fmts
if add:
# reset old fmts (for function undo)
self._fmt = [self.asdict('maps', 'lines', 'cbars')]
if delete:
# reset future fmts (for function redo)
self._newfmt = []
def reset(self, num=0, fromscratch=False, ax=None, sort=None,
sortlines=False, **kwargs):
"""Reinitializes the instance
Reinitializes the :class:`Maps` instance, even if
:meth:`~nc2map.Maps.undo` method fails.
Parameters
----------
num: int
Number of formatoptions (like :meth:`~nc2map.Maps.undo` method).
0 is current, -1 the one before (often the last one working), etc.
fromscratch: bool
If False, only figures will be closed and recreated (if ax is not
None) or the axes will be reset if ax is None. If True the whole
Maps instance will be closed and reopend by loading the data from
the readers
ax: tuple, subplot or list of subplots
see :meth:`~nc2map.Maps.addmap` method. Specify the subplots to
plot on
sort: string or list of strings
see :meth:`~nc2map.Maps.addmap` method. Specifies how the
:class:`nc2map.mapos.MapBase` instances are sorted to the subplots
**kwargs
anything that is passed to the nc2map.subplots function to create
the figures (e.g. figsize).
See Also
--------
nc2map.Maps.undo: less hard undo function"""
        if not self._fmt:
            raise ValueError('No stored formatoptions to reset to')
if num > 0 and num >= len(self._fmt)-1:
raise ValueError(
                'Too high number! Maximal number is %i' % (len(self._fmt)-1))
elif num < 0 and num < -len(self._fmt):
raise ValueError(
                'Too small number! Minimal number is %i' % (-len(self._fmt)+1))
name_dicts = self.name_dicts
# try to save readers
linenames = self.linenames
mapnames = self.mapnames
vlst = self.vlst
readers = [mapo.reader for mapo in self.maps]
# reset cbars
self.removecbars()
self._cbars = []
# set new subplots
if ax is not None or fromscratch:
maps_subplots = []
lines_subplots = []
if ax is None:
subplots = []
for fig, maps in self.get_figs().items():
shape = maps[0]._get_axes_shape()
try:
maps[0]._ax._AxesWrapper__init_kwargs.pop('num')
except KeyError:
pass
maps[0]._ax._AxesWrapper__init_kwargs.update(kwargs)
this_subplots = self._make_subplots(
shape, len(maps), *maps[0]._ax._AxesWrapper__init_args,
**maps[0]._ax._AxesWrapper__init_kwargs)
mapos = self.get_maps(name=[mapo.name for mapo in maps],
mode='maps')
maps_subplots += [this_subplots[mapo._get_axes_num()-1]
for mapo in mapos]
linos = self.get_maps(name=[line.name for line in maps],
mode='lines')
lines_subplots += [this_subplots[line._get_axes_num()-1]
for line in linos]
subplots += this_subplots
else:
subplots = self._make_subplots(
ax, len(self.get_maps()), **kwargs)
maps_subplots = subplots[:len(self.maps)]
lines_subplots = subplots[len(self.maps):]
else:
maps_subplots = [mapo.ax for mapo in self.maps]
lines_subplots = [line.ax for line in self.lines]
subplots = maps_subplots + lines_subplots
for axes in maps_subplots + lines_subplots:
axes.clear()
if sort is not None and all(name in sort for name in self.names):
maps_subplots = [subplots[i] for i, name in enumerate(sort)
if name in mapnames]
lines_subplots = [subplots[i] for i, name in enumerate(sort)
if name in linenames]
for name in linenames:
sort.remove(name)
# close the Maps instance
if fromscratch:
try:
self.logger.debug("Try to close data and figures...")
self.close(30, mode='maps')
self.close(15, mode='lines')
except (AttributeError, KeyError):
self.logger.debug("Could not close figures", exc_info=1)
warn("Could not close the figures but anyway will draw new "
"figures")
elif ax is not None:
try:
self.logger.debug("Try to close figures...")
self.close(3, remove=False)
except (AttributeError, KeyError):
self.logger.debug("Could not close figures", exc_info=1)
warn("Could not close the figures but anyway will draw new "
"figures")
# set new figures
# change names sorting
if sort is not None:
name_dicts = self._setupnames(name_dicts, vlst, sort, rename=False)
if not fromscratch:
self._resort_maps(name_dicts.keys())
# set up MapBase instances
for subplot, name_dict, reader in izip_longest(
maps_subplots, name_dicts.items(), readers):
self._setupfigs(dict([name_dict]), fmt=self._fmt[num-1][0],
subplots=[subplot], reader=reader,
fromscratch=fromscratch)
# set up lineplots
lines_subplots = iter(lines_subplots)
for line in self.lines:
line.ax = next(lines_subplots)
line.update(**self._fmt[num-1][1][line.name])
if self.plot:
self.logger.info("Setting up projections...")
for mapo in self.get_maps(mode='maps'):
mapo._make_plot = 1
if hasattr(mapo, 'wind') and mapo.wind is not None:
mapo.wind._make_plot = 1
mapo._setupproj()
self.logger.info("Making plots...")
self.make_plot(mode='maps')
for fig in self.get_figs():
plt.figure(fig.number)
self._set_window_title(fig)
plt.draw()
if self._fmt[num-1][2]:
self.update_cbar(*self._fmt[num-1][2], add=False, delete=False)
# shift to new fmt
if num != 0:
self._newfmt = self._fmt[num:] + self._newfmt
            if num <= 0:
                del self._fmt[len(self._fmt)+num:]
            else:
                del self._fmt[num:]
def update_cbar(self, *args, **kwargs):
"""Update or create a shared colorbar
Shared colorbars may be used to control the appearance of multiple
:class:`~nc2map.mapos.MapBase` instances at once. They are especially
useful if you use automatically calculated bounds
Parameters
----------
*args
dictionaries::
onecbar = {'meta_key':..., 'fmt_key':...}
'meta_key' may be anything suitable for the
:meth:`~nc2map.Maps.get_maps` method). `fmt_key` may be any
formatoption keyword controlling the colorbar and colormap
aesthetics.
add: bool
Default: True. Adds the new formatoptions to old formatoptions
allowing a undoing via the :meth:`~nc2map.Maps.undo` method
delete: bool
Default: True. Deletes the newer formatoptions for the
:meth:`~nc2map.Maps.redo` method.
todefault: bool
Default: False. Sets all formatoptions which are not specified by
`fmt` or ``**kwargs`` to default.
**kwargs
additionally to `add`, `delete` and `todefault` keywords, any
formatoption keywords for colorbars or anything for the
:meth:`~nc2map.Maps.get_maps` method is possible. Those will then
be treated like a single dictionary (this is just to avoid nasty
typing of :, {}, etc.).
Notes
-----
If no colorbar with any of the specified dimensions is found, a new
:class:`~nc2map.CbarManager` instance is created.
See Also
--------
nc2map.Maps.update: Method for updating maps
nc2map.Maps.update_lines: Method for updating lines"""
dimnames = self.meta.keys() + ['_meta', 'maps']
add = kwargs.get('add', True)
delete = kwargs.get('delete', True)
todefault = kwargs.get('todefault', False)
plot = kwargs.get('plot', True)
kwargs = {key: value for key, value in kwargs.items()
if key not in ['add', 'delete', 'todefault', 'plot']}
if kwargs != {}:
newops = list(args) + [kwargs]
else:
newops = list(args)
if not newops:
newops = [{}]
# first set colorbars
for cbarops in newops:
if 'windplot' in cbarops:
args = tuple(['wind'])
cbarops.update(cbarops.pop('windplot'))
wind = True
get_func = self.get_winds
else:
args = ()
wind = False
get_func = self.get_maps
dims = {
key: cbarops[key] for key in dimnames if key in cbarops}
for key in dims:
del cbarops[key]
# if no colorbars are set up to now and no specific var, time and
# level options are set, make colorbars for each figure
if not self._cbars and not dims:
figs = self.get_figs(*args, mode='maps')
for fig in figs:
self._cbars.append(CbarManager(
maps=figs[fig], fig=[fig], cbar={}, mapsin=self,
wind=wind))
# now update colorbar objects or create them if they are not
# existent
dims['mode'] = 'maps'
cbars = self.get_cbars(*args, **dims)
if not cbars:
self._cbars.append(CbarManager(
maps=get_func(**dims), cbar={}, mapsin=self, wind=wind,
fig=self.get_figs(*args, **dims).keys()))
cbars = [self._cbars[-1]]
# now draw and update colorbars
for cbar in cbars:
cbar.update(fmt=cbarops, todefault=todefault, plot=plot)
if plot:
for fig in cbar.get_figs():
plt.figure(fig.number)
plt.draw()
if add:
self._fmt.append(self.asdict('maps', 'lines', 'cbars'))
if delete:
self._newfmt = []
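    # A usage sketch (hedged; 'var' as a meta key and 'cmap' as a colormap
    # formatoption are assumptions for illustration):
    #     mymaps.update_cbar({'var': 't2m', 'cmap': 'RdBu_r'})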
def get_cbars(self, *args, **kwargs):
"""Get the cbars
Method to return the shared cbars (:class:`~nc2map.CbarManager`) of
the specified input.
Parameters
----------
*args
instances of :class:`~nc2map.mapos.MapBase`, figures or
:class:`~nc2map.CbarManagers`
**kwargs
any keyword of the :meth:`~nc2map.Maps.get_maps` method (i.e. by
the keys in the meta attribute)
Returns
-------
colorbars: list
list of :class:`~nc2map.CbarManagers`"""
maps = []
args = list(args)
cbars = [cbar for cbar in args if isinstance(cbar, CbarManager)]
kwargs['mode'] = 'maps'
if not args:
maps = self.get_maps(**kwargs)
elif args == ['wind']:
maps = self.get_winds(**kwargs)
else:
figs = self.get_figs(*(arg for arg in args if arg == 'wind'),
mode='maps')
for fig in figs:
if fig in args:
maps = maps + figs[fig]
args.remove(fig)
maps += list(
arg for arg in args if not isinstance(arg, CbarManager))
cbars = cbars + [cbar for cbar in self._cbars
if any(mapo in cbar.maps for mapo in maps)]
return cbars
def removecbars(self, *args, **kwargs):
"""Method to remove share colorbars
This method removes the specified :class:`~nc2map.CbarManager`
instances from the plot and reenables the manual setting of the bounds
via the :meth:`~nc2map.Maps.update` method.
Parameters
----------
*args
instances of :class:`~nc2map.mapos.MapBase`, figures or
:class:`~nc2map.CbarManagers`
**kwargs
any keyword of the :meth:`~nc2map.Maps.get_maps` method (i.e. by
the keys in the meta attribute)
"""
cbars = self.get_cbars(*args, **kwargs)
maps = chain(*(cbar.maps for cbar in cbars))
for cbar in cbars:
cbar._removecbar()
self._cbars.remove(cbar)
# draw figures
for fig in self.get_figs(maps=list(maps)):
plt.figure(fig.number)
plt.draw()
def undo(self, num=-1):
"""Undo the changes made.
Parameters
----------
num: int
number of changes to go back.
See Also
--------
nc2map.Maps.redo: Redo formatoption changes that were undone by this
method"""
        if not self._fmt or len(self._fmt) == 1:
            raise ValueError('Nothing to undo')
if num > 0 and num >= len(self._fmt)-1:
raise ValueError(
                'Too high number! Maximal number is %i' % (len(self._fmt)-1))
elif num < 0 and num < -len(self._fmt):
raise ValueError(
'Too small number! Minimal number is %i' % (-len(self._fmt)+1))
if not self._fmt[num-1][2]:
self.removecbars()
self.update(self._fmt[num-1][0], add=False, delete=False,
todefault=True)
self.update_lines(add=False, delete=False, todefault=True,
**self._fmt[num-1][1])
if self._fmt[num-1][2]:
self.update_cbar(*self._fmt[num-1][2], add=False, delete=False,
todefault=True)
# shift to new fmt
self._newfmt = self._fmt[num:] + self._newfmt
        if num <= 0:
            del self._fmt[len(self._fmt)+num:]
        else:
            del self._fmt[num:]
def redo(self, num=1):
"""Redo the changes made
Parameters
----------
num: int
number of changes to use.
See Also
--------
nc2map.Maps.undo: Undo formatoption changes"""
        if not self._newfmt:
            raise ValueError('Nothing to redo')
if num > 0 and num > len(self._newfmt):
raise ValueError(
'Too high number! Maximal number is %i' % len(self._newfmt))
elif num < 0 and num < -len(self._newfmt):
raise ValueError(
'Too small number! Minimal number is %i' % (
-len(self._newfmt)-1))
if not self._newfmt[num-1][2]:
self.removecbars()
self.update(self._newfmt[num-1][0], add=False, delete=False,
todefault=True)
self.update_lines(add=False, delete=False, todefault=True,
**self._newfmt[num-1][1])
if self._newfmt[num-1][2]:
self.update_cbar(*self._newfmt[num-1][2], add=False, delete=False,
todefault=True)
# shift to old fmt
self._fmt += self._newfmt[:num]
        if num > 0:
            del self._newfmt[:num]
        else:
            del self._newfmt[:len(self._newfmt)+num]
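    # A minimal usage sketch of the undo/redo stack (assuming an existing
    # Maps instance ``m``; the formatoption value is hypothetical):
    #
    #     m.update({'cmap': 'RdBu_r'})  # change a formatoption
    #     m.undo()                      # revert to the previous formatoptions
    #     m.redo()                      # re-apply the reverted change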
def close(self, num=0, remove=True, **kwargs):
# docstring is set below
cbars = self.get_cbars(**kwargs)
super(Maps, self).close(num=num, remove=remove, **kwargs)
if not num % 2 or not num % 3:
for cbar in self.get_cbars(*cbars):
if not set(self.maps) & set(cbar.maps):
cbar.close(num=num)
self._cbars.remove(cbar)
def asdict(self, *args, **kwargs):
"""Get the current formatoptions
This method gives the formatoptions of the specified
:class:`~nc2map.mapos.MapBase` instances,
:class:`~nc2map.CbarManager` and :class:`~nc2map.mapos.LinePlot`
instances as a list of dictionaries.
Parameters
----------
*args
- 'maps' for MapBase instances in :attr:`nc2map.Maps.maps`
attribute
- 'lines' for SimplePlot instances in :attr:`nc2map.Maps.lines`
attribute
- 'cbars' to return only the dictionary controlling the
CbarManager instances (see onecbar in the initialization)
**kwargs
Any key suitable for the :meth:`~nc2map.Maps.get_maps` method."""
fmt = {}
returns = []
cbars = []
if not args or any(x in args for x in ['maps', 'lines', 'frominit']):
returns += super(Maps, self).asdict(*args, **kwargs)
if 'cbars' in args:
for cbar in self.get_cbars(**kwargs):
cbars.append(cbar.asdict())
cbars[-1]['name'] = cbar.names
returns.append(cbars)
return tuple(returns)
def addmap(self, ncfile, names=None, vlst=None, ax=(1, 1), sort=None,
fmt=None, onecbar=False, u=None, v=None, mode=None, dims={},
windonly=False, plot=True, add=True, delete=True, meta={},
**kwargs):
"""
onecbar: bool, dict or tuple of dictionaries
Default: False. If True, one colorbar will be drawn for each
figure. If dictionary: the syntax is as follows::
onecbar = {'meta_key':..., 'fmt_key':...}
'meta_key' may be anything suitable for the
:meth:`~nc2map.Maps.get_maps` method). `fmt_key` may be any
formatoption keyword controlling the colorbar and colormap
aesthetics.
add: bool
            Default: True. Adds the new formatoptions to the old
            formatoptions, allowing an undo via the :meth:`~nc2map.Maps.undo`
            method
delete: bool
            Default: True. Deletes the stored newer formatoptions, i.e.
            clears the history used by the :meth:`~nc2map.Maps.redo` method.
"""
# docstring is set below from MapsManager
super(Maps, self).addmap(ncfile=ncfile, names=names, vlst=vlst, ax=ax,
sort=sort, fmt=fmt, u=u, v=v, mode=mode,
meta=meta, dims=dims, windonly=windonly,
plot=plot, **kwargs)
if onecbar is not False:
if onecbar is True:
self.update_cbar(
*(self.get_label_dict(fig) for fig in self.get_figs()),
add=False, delete=False, plot=plot)
elif isinstance(onecbar, dict):
self.update_cbar(onecbar, add=False, delete=False)
else:
self.update_cbar(*onecbar, add=False, delete=False)
if add:
# reset old fmts (for function undo)
self._fmt = [self.asdict('maps', 'lines', 'cbars')]
if delete:
# reset future fmts (for function redo)
self._newfmt = []
def save(self, output=None, ask=True, ask_ax=True, ncnames=[]):
"""Saves the settings of the Maps instance (not the data!)
This method creates a pickle file in order to reinitialize the
:class:`~nc2map.Maps` instance with the :func:`~nc2map.load` function
Parameters
----------
output: str
Name of the pickle file (e.g. 'mymaps.pkl') or None if only the
dictionary shall be returned and no file shall be created
ask: bool
Default: True. If True and if the initialization keywords of a
reader can not be determined (like it is the case for
ArrayReader instances), it will be asked for a filename to dump
the reader into a NetCDF file.
ask_ax: bool
Default: True. If True and if the initialization keywords of a
subplot (i.e. the subplot shape of a figure) can not be
determined (like it is the case for user defined subplots from
pyplot.subplots function), it will be asked for the subplot shape
and subplot number
ncnames: List of strings
E.g. 'my-ncfile.nc'. NetCDF files that shall be used if the reader
is not already dumped (like it is the case for
:class:`~nc2map.readers.ArrayReader` instances).
Returns
-------
settings_dict: dict
            Dictionary containing all information needed to reload the
            instance with the :func:`~nc2map.load` function
See Also
--------
nc2map.load: Function to load from the settings_dict or the pickle
file that you just saved
"""
# extract reader settings
self.logger.debug('Save Maps instance ...')
self.logger.debug(' Extract reader settings')
readers_dict = OrderedDict(
[(rd, 'reader%i' % i) for i, rd in enumerate(self.get_readers())])
reader_settings = {}
ncnames = iter(ncnames)
for reader, name in readers_dict.items():
reader_settings[name] = [reader.__class__.__name__, [], {}]
try:
reader_settings[name][1] = reader._init_args
except AttributeError:
pass
try:
reader_settings[name][2] = reader._init_kwargs
except AttributeError:
pass
            if not any(reader_settings[name][1:]):
                try:
                    fname = next(ncnames)
                except StopIteration:
                    if ask:
                        fname = raw_input(
                            "Could not estimate how to initialize the reader. "
                            "Enter a file name as string or nothing to "
                            "ignore it.\n")
                    else:
                        fname = None
                        warn("Could not estimate how to initialize the "
                             "reader of %s." % ', '.join(
                                 mapo.name for mapo in self.get_readers()[
                                     reader]))
                if fname:
                    reader.dump_nc(fname)
                    reader_settings[name][0] = 'NCReader'
                    reader_settings[name][1] = [fname]
                else:
                    reader_settings[name] = None
if not reader_settings[name] is None:
reader_settings[name][2].update({
'timenames': reader._timenames,
'levelnames': reader._levelnames,
'lonnames': reader._lonnames,
'latnames': reader._latnames})
# extract figure settings
self.logger.debug(' Extract figure settings')
figures_dict = self.get_figs()
figure_settings = OrderedDict()
for fig, maps in figures_dict.items():
try:
mode = 'ask' if ask_ax else 'raise'
figure_settings[fig.number] = [
maps[0]._get_axes_shape(mode=mode),
maps[0]._ax._AxesWrapper__init_args,
maps[0]._ax._AxesWrapper__init_kwargs]
except ValueError:
warn("Could not estimate figure options for %s" %
', '.join(mapo.name for mapo in maps))
figure_settings[fig.number] = None
self.logger.debug(' Extract maps settings')
mapo_settings = OrderedDict((mapo.name, {}) for mapo in self.maps)
mode = 'ask' if ask_ax else 'ignore'
for mapo, mdict in zip(self.maps, mapo_settings.values()):
self.logger.debug(' mapo %s', mapo.name)
mdict['fmt'] = mapo.asdict()
mdict['dims'] = mapo.dims
mdict['fig'] = mapo.ax.get_figure().number
mdict['reader'] = readers_dict[mapo.reader]
mdict['meta'] = mapo._meta
try:
mdict['num'] = mapo._get_axes_num(mode=mode)
except ValueError:
pass
mdict['class'] = mapo.__class__.__name__
mdict['name'] = mapo.name
self.logger.debug(' Extract shared info...')
share_settings = {mapo.name: mapo.asdict(shared=True)['_shared']
for mapo in self.maps}
self.logger.debug(' Extract line settings')
line_settings = OrderedDict((line.name, {}) for line in self.lines)
for line, ldict in zip(self.lines, line_settings.values()):
self.logger.debug(' line %s', line.name)
ldict['init'] = line.asdict()
ldict['fig'] = line.ax.get_figure().number
ldict['reader'] = readers_dict[line.reader]
ldict['num'] = line._get_axes_num(mode='ignore')
ldict['class'] = line.__class__.__name__
ldict['name'] = line.name
try:
ldict['meta'] = line._meta
except AttributeError:
pass
self.logger.debug(' Extract cbars')
cbar_settings = self.asdict('cbars')[0]
self.logger.debug('Dump to %s', output)
out_dict = {'readers': reader_settings,
'figures': figure_settings,
'maps': mapo_settings,
'lines': line_settings,
'cbars': cbar_settings,
'share': share_settings}
if output:
with open(output, 'w') as f:
pickle.dump(out_dict, f)
return out_dict
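    # A minimal round-trip sketch for save (the file name is hypothetical):
    #
    #     settings = m.save('mymaps.pkl')   # pickle the settings
    #     m2 = nc2map.load('mymaps.pkl')    # recreate the Maps instance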
def dump_nc(self, output, maps=None, full_data=False, mask_data=True,
**kwargs):
"""Method for creating a NetCDF file out of the given maps.
Parameters
----------
maps: list of MapBase instances
If None, defaults to the list in :attr:`~nc2map.Maps.maps` and
:attr:`~nc2map.Maps.lines`
full_data: bool
Default: False. If True, the full data as stored in the readers
are used. Otherwise only the current time and level step is used.
mask_data: bool
Default: True. If True, the formatoption masking options (including
            lonlatbox, mask, maskbelow, etc.) are used. Otherwise the full field
as stored in the corresponding :class:`~nc2map.readers.ReaderBase`
instance is used.
**kwargs
anything that is passed to the
:meth:`~nc2map.readers.ReaderBase.dump_nc` method
Returns
-------
nco : object
:class:`~nc2map.readers.NCReader` instance that is created
Warnings
--------
        The readers of the `maps` are merged into one reader, so make sure that
they match according to the :meth:`~nc2map.readers.ReaderBase.merge`
method
Notes
-----
By default the `nco` is closed, unless you set close=False
See Also
--------
nc2map.readers.ReaderBase.dump_nc: Basic method that is used"""
if maps is None:
maps = self.get_maps()
self.logger.debug("Dump mapos %s to NetCDF file...",
', '.join(mapo.name for mapo in maps))
if len(maps) == 1:
try:
return maps[0].extract_in_reader(
full_data=full_data, mask_data=mask_data).dump_nc(
output=output, **kwargs)
except TypeError: # line does not take the mask_data keyword
return maps[0].extract_in_reader(full_data=full_data).dump_nc(
output=output, **kwargs)
readers = []
for mapo in maps:
self.logger.debug("Extracting mapo %s", mapo.name)
try:
self.logger.debug("Try with mask_data")
readers.append(mapo.extract_in_reader(
full_data=full_data, mask_data=mask_data))
except TypeError: # line does not take the mask_data keyword
self.logger.debug("Failed. --> Assume line", exc_info = True)
readers.append(mapo.extract_in_reader(
full_data=full_data))
return readers[0].merge(*readers[1:]).dump_nc(
output=output, **kwargs)
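    # A minimal usage sketch for dump_nc (the output name is hypothetical;
    # ``close=False`` keeps the returned NCReader open, see the Notes above):
    #
    #     nco = m.dump_nc('export.nc', full_data=True, close=False)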
def get_label_dict(self, *args, **kwargs):
"""Returns dictionary with meta attributes
Parameters
----------
*args
instances of :class:`~nc2map.mapos.MapBase`, figures,
:class:`~nc2map.mapos.LinePlot`, :class:`~nc2map.CbarManager`, etc.
delimiter: str
string that shall be used for separating values in a string, if
multiple meta values are found in the specified input. If not given
(or None), sets will be returned, not strings
Returns
-------
meta: dict
concatenated dictionary with meta informations as sets or as string
(if delimiter keyword is given)"""
args = list(args)
        if 'lines' not in args:
for cbar in self.get_cbars('wind') + self.get_cbars():
if cbar in args:
args += cbar.maps
args.remove(cbar)
return super(Maps, self).get_label_dict(*args, **kwargs)
# ------------------ modify docstrings here --------------------------
evaluate.__doc__ %= ', '.join(map(repr, evaluatorsdict.keys()))
get_evaluator.__doc__ %= ', '.join(map(repr, evaluatorsdict.keys()))
close.__doc__ = MapsManager.close.__doc__
addline.__doc__ = MapsManager.addline.__doc__
addmap.__doc__ = MapsManager.addmap.__doc__[
:MapsManager.addmap.__doc__.find('fmt: ')] \
+ addmap.__doc__ + \
MapsManager.addmap.__doc__[
MapsManager.addmap.__doc__.find('fmt: '):]
|
Chilipp/nc2map
|
_maps.py
|
Python
|
gpl-2.0
| 62,613
|
[
"NetCDF"
] |
23848e90eaab31308c66ee74a0aa727735cfd6ba1d74029e7d41f3510503220d
|
''' EmailAction
This action writes all the necessary data to a cache file ( cache.db ) that
will be used later by the EmailAgent in order to send the emails for each site.
'''
import os
import sqlite3
from DIRAC import S_ERROR, S_OK
from DIRAC.ResourceStatusSystem.PolicySystem.Actions.BaseAction import BaseAction
from DIRAC.Core.Utilities.SiteSEMapping import getSitesForSE
__RCSID__ = '$Id: $'
class EmailAction( BaseAction ):
def __init__( self, name, decisionParams, enforcementResult, singlePolicyResults,
clients = None ):
super( EmailAction, self ).__init__( name, decisionParams, enforcementResult,
singlePolicyResults, clients )
if 'DIRAC' in os.environ:
self.cacheFile = os.path.join( os.getenv('DIRAC'), 'work/ResourceStatus/cache.db' )
else:
self.cacheFile = os.path.realpath('cache.db')
def run( self ):
    ''' Checks it has the parameters it needs and writes the data to a cache file.
'''
# Minor security checks
element = self.decisionParams[ 'element' ]
if element is None:
return S_ERROR( 'element should not be None' )
name = self.decisionParams[ 'name' ]
if name is None:
return S_ERROR( 'name should not be None' )
statusType = self.decisionParams[ 'statusType' ]
if statusType is None:
return S_ERROR( 'statusType should not be None' )
previousStatus = self.decisionParams[ 'status' ]
if previousStatus is None:
return S_ERROR( 'status should not be None' )
status = self.enforcementResult[ 'Status' ]
if status is None:
return S_ERROR( 'status should not be None' )
reason = self.enforcementResult[ 'Reason' ]
if reason is None:
return S_ERROR( 'reason should not be None' )
if self.decisionParams[ 'element' ] == 'Site':
siteName = self.decisionParams[ 'name' ]
else:
siteName = getSitesForSE(name)
if not siteName['OK']:
self.log.error('Resource %s does not exist at any site: %s' % (name, siteName['Message']))
siteName = "Unassigned Resources"
elif not siteName['Value']:
siteName = "Unassigned Resources"
else:
siteName = siteName['Value'][0]
with sqlite3.connect(self.cacheFile) as conn:
try:
conn.execute('''CREATE TABLE IF NOT EXISTS ResourceStatusCache(
SiteName VARCHAR(64) NOT NULL,
ResourceName VARCHAR(64) NOT NULL,
Status VARCHAR(8) NOT NULL DEFAULT "",
PreviousStatus VARCHAR(8) NOT NULL DEFAULT "",
StatusType VARCHAR(128) NOT NULL DEFAULT "all",
Time TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);''')
except sqlite3.OperationalError:
self.log.error('Email cache database is locked')
conn.execute("INSERT INTO ResourceStatusCache (SiteName, ResourceName, Status, PreviousStatus, StatusType)"
" VALUES ('" + siteName + "', '" + name + "', '" + status + "', '" + previousStatus + "', '" + statusType + "' ); "
)
conn.commit()
return S_OK()
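  # A minimal sketch of how the EmailAgent could read the cached rows back
  # later (illustrative only; the real agent is a separate component):
  #
  #   with sqlite3.connect( self.cacheFile ) as conn:
  #     rows = conn.execute( 'SELECT SiteName, ResourceName, Status, '
  #                          'PreviousStatus, StatusType, Time '
  #                          'FROM ResourceStatusCache' ).fetchall()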
################################################################################
#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF
|
Andrew-McNab-UK/DIRAC
|
ResourceStatusSystem/PolicySystem/Actions/EmailAction.py
|
Python
|
gpl-3.0
| 3,347
|
[
"DIRAC"
] |
f732575aecbd3713974cd69062430d2e475d61cb30085a2ec584ab3de7bbbc81
|
########################################################################
# $HeadURL $
# File: TracedTests.py
# Author: Krzysztof.Ciba@NOSPAMgmail.com
# Date: 2012/08/08 15:21:32
########################################################################
""" :mod: TracedTests
=======================
.. module: TracedTests
:synopsis: Traced test cases
.. moduleauthor:: Krzysztof.Ciba@NOSPAMgmail.com
Traced test cases
"""
__RCSID__ = "$Id $"
##
# @file TracedTests.py
# @author Krzysztof.Ciba@NOSPAMgmail.com
# @date 2012/08/08 15:21:44
# @brief Definition of TracedTests class.
## imports
import unittest
## SUT
from DIRAC.Core.Utilities.Traced import Traced, TracedDict, TracedList
########################################################################
class TracedTests(unittest.TestCase):
"""
.. class:: TracedTests
"""
def setUp( self ):
"""c'tor
:param self: self reference
"""
self.tracedDict = TracedDict( { 1 : 1 } )
self.tracedList = TracedList( [ 1 ] )
class TracedClass( object ):
__metaclass__ = Traced
classArg = None
def __init__( self ):
instanceArg = None
self.tracedClass = TracedClass()
  def testTracedDict( self ):
""" TracedDict tests """
self.assertEqual( self.tracedDict.updated(), [] )
## update, not changing value
self.tracedDict[1] = 1
self.assertEqual( self.tracedDict.updated(), [] )
## update, changing value
self.tracedDict[1] = 2
self.assertEqual( self.tracedDict.updated(), [1] )
## set new
self.tracedDict[2] = 2
self.assertEqual( self.tracedDict.updated(), [ 1, 2 ] )
## update from diff dict
self.tracedDict.update( { 3: 3 } )
self.assertEqual( self.tracedDict.updated(), [ 1, 2, 3 ] )
def testTracedList( self ):
""" traced list """
self.assertEqual( self.tracedList.updated(), [] )
## no value change
self.tracedList[0] = 1
self.assertEqual( self.tracedList.updated(), [] )
## value change
self.tracedList[0] = 2
self.assertEqual( self.tracedList.updated(), [0] )
## append
self.tracedList.append( 1 )
self.assertEqual( self.tracedList.updated(), [0, 1] )
def testTracedClass( self ):
""" traced class """
self.assertEqual( self.tracedClass.updated(), [] )
self.tracedClass.instanceArg = 1
self.assertEqual( self.tracedClass.updated(), [ "instanceArg" ] )
self.tracedClass.classArg = 1
self.assertEqual( self.tracedClass.updated(), [ "instanceArg" , "classArg" ] )
## test execution
if __name__ == "__main__":
TESTLOADER = unittest.TestLoader()
SUITE = TESTLOADER.loadTestsFromTestCase( TracedTests )
unittest.TextTestRunner(verbosity=3).run( SUITE )
|
Sbalbp/DIRAC
|
Core/Utilities/test/TracedTests.py
|
Python
|
gpl-3.0
| 2,744
|
[
"DIRAC"
] |
4220bab64d97c1cf6abd8d5556048736bfc49d84a7966e15b6932b722d28f148
|
# Copyright 2018 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Implements commands for running and interacting with Fuchsia on devices."""
from __future__ import print_function
import amber_repo
import boot_data
import filecmp
import logging
import os
import re
import subprocess
import sys
import target
import tempfile
import time
import uuid
from common import SDK_ROOT, EnsurePathExists, GetHostToolPathFromPlatform
# The maximum times to attempt mDNS resolution when connecting to a freshly
# booted Fuchsia instance before aborting.
BOOT_DISCOVERY_ATTEMPTS = 30
# Number of failed connection attempts before redirecting system logs to stdout.
CONNECT_RETRY_COUNT_BEFORE_LOGGING = 10
# Number of seconds to wait for device discovery.
BOOT_DISCOVERY_TIMEOUT_SECS = 2 * 60
# The timeout limit for one call to the device-finder tool.
_DEVICE_FINDER_TIMEOUT_LIMIT_SECS = \
BOOT_DISCOVERY_TIMEOUT_SECS / BOOT_DISCOVERY_ATTEMPTS
# Time between a reboot command is issued and when connection attempts from the
# host begin.
_REBOOT_SLEEP_PERIOD = 20
def GetTargetType():
return DeviceTarget
class DeviceTarget(target.Target):
"""Prepares a device to be used as a deployment target. Depending on the
command line parameters, it automatically handling a number of preparatory
steps relating to address resolution.
If |_node_name| is unset:
If there is one running device, use it for deployment and execution.
If there are more than one running devices, then abort and instruct the
user to re-run the command with |_node_name|
If |_node_name| is set:
If there is a running device with a matching nodename, then it is used
for deployment and execution.
If |_host| is set:
Deploy to a device at the host IP address as-is."""
def __init__(self,
out_dir,
target_cpu,
host=None,
node_name=None,
port=None,
ssh_config=None,
fuchsia_out_dir=None,
os_check='update',
system_log_file=None):
"""out_dir: The directory which will contain the files that are
generated to support the deployment.
target_cpu: The CPU architecture of the deployment target. Can be
"x64" or "arm64".
host: The address of the deployment target device.
node_name: The node name of the deployment target device.
port: The port of the SSH service on the deployment target device.
ssh_config: The path to SSH configuration data.
fuchsia_out_dir: The path to a Fuchsia build output directory, for
deployments to devices paved with local Fuchsia builds.
os_check: If 'check', the target's SDK version must match.
If 'update', the target will be repaved if the SDK versions
mismatch.
If 'ignore', the target's SDK version is ignored."""
super(DeviceTarget, self).__init__(out_dir, target_cpu)
self._system_log_file = system_log_file
self._host = host
self._port = port
self._fuchsia_out_dir = None
self._node_name = node_name
self._os_check = os_check
self._amber_repo = None
if self._host and self._node_name:
      raise Exception('Only one of "--host" or "--node-name" can be specified.')
if fuchsia_out_dir:
if ssh_config:
raise Exception('Only one of "--fuchsia-out-dir" or "--ssh_config" can '
'be specified.')
self._fuchsia_out_dir = os.path.expanduser(fuchsia_out_dir)
# Use SSH keys from the Fuchsia output directory.
self._ssh_config_path = os.path.join(self._fuchsia_out_dir, 'ssh-keys',
'ssh_config')
self._os_check = 'ignore'
elif ssh_config:
# Use the SSH config provided via the commandline.
self._ssh_config_path = os.path.expanduser(ssh_config)
else:
# Default to using an automatically generated SSH config and keys.
boot_data.ProvisionSSH(out_dir)
self._ssh_config_path = boot_data.GetSSHConfigPath(out_dir)
@staticmethod
def CreateFromArgs(args):
return DeviceTarget(args.out_dir, args.target_cpu, args.host,
args.node_name, args.port, args.ssh_config,
args.fuchsia_out_dir, args.os_check,
args.system_log_file)
@staticmethod
def RegisterArgs(arg_parser):
device_args = arg_parser.add_argument_group(
'device', 'External device deployment arguments')
device_args.add_argument('--host',
help='The IP of the target device. Optional.')
device_args.add_argument('--node-name',
help='The node-name of the device to boot or '
'deploy to. Optional, will use the first '
'discovered device if omitted.')
device_args.add_argument('--port',
'-p',
type=int,
default=None,
help='The port of the SSH service running on the '
'device. Optional.')
device_args.add_argument('--ssh-config',
'-F',
help='The path to the SSH configuration used for '
'connecting to the target device.')
device_args.add_argument(
'--os-check',
choices=['check', 'update', 'ignore'],
default='update',
help="Sets the OS version enforcement policy. If 'check', then the "
"deployment process will halt if the target\'s version doesn\'t "
"match. If 'update', then the target device will automatically "
"be repaved. If 'ignore', then the OS version won\'t be checked.")
def _ProvisionDeviceIfNecessary(self):
if self._Discover():
self._WaitUntilReady()
else:
raise Exception('Could not find device. If the device is connected '
'to the host remotely, make sure that --host flag is '
'set and that remote serving is set up.')
def _Discover(self):
"""Queries mDNS for the IP address of a booted Fuchsia instance whose name
matches |_node_name| on the local area network. If |_node_name| isn't
specified, and there is only one device on the network, then returns the
    IP address of that device.
    Sets |_host| and returns True if the device was found,
or waits up to |timeout| seconds and returns False if the device couldn't
be found."""
dev_finder_path = GetHostToolPathFromPlatform('device-finder')
if self._node_name:
command = [
dev_finder_path,
'resolve',
'-timeout',
"%ds" % _DEVICE_FINDER_TIMEOUT_LIMIT_SECS,
'-device-limit',
'1', # Exit early as soon as a host is found.
self._node_name
]
else:
command = [
dev_finder_path, 'list', '-full', '-timeout',
"%ds" % _DEVICE_FINDER_TIMEOUT_LIMIT_SECS
]
proc = subprocess.Popen(command,
stdout=subprocess.PIPE,
stderr=open(os.devnull, 'w'))
output = set(proc.communicate()[0].strip().split('\n'))
if proc.returncode != 0:
return False
if self._node_name:
# Handle the result of "device-finder resolve".
self._host = output.pop().strip()
else:
name_host_pairs = [x.strip().split(' ') for x in output]
# Handle the output of "device-finder list".
if len(name_host_pairs) > 1:
print('More than one device was discovered on the network.')
print('Use --node-name <name> to specify the device to use.')
print('\nList of devices:')
for pair in name_host_pairs:
print(' ' + pair[1])
print()
raise Exception('Ambiguous target device specification.')
assert len(name_host_pairs) == 1
self._host, self._node_name = name_host_pairs[0]
logging.info('Found device "%s" at address %s.' % (self._node_name,
self._host))
return True
def Start(self):
if self._host:
self._WaitUntilReady()
else:
self._ProvisionDeviceIfNecessary()
def GetAmberRepo(self):
if not self._amber_repo:
if self._fuchsia_out_dir:
# Deploy to an already-booted device running a local Fuchsia build.
self._amber_repo = amber_repo.ExternalAmberRepo(
os.path.join(self._fuchsia_out_dir, 'amber-files'))
else:
# Create an ephemeral Amber repo, then start both "pm serve" as well as
# the bootserver.
self._amber_repo = amber_repo.ManagedAmberRepo(self)
return self._amber_repo
def _ParseNodename(self, output):
# Parse the nodename from bootserver stdout.
m = re.search(r'.*Proceeding with nodename (?P<nodename>.*)$', output,
re.MULTILINE)
if not m:
raise Exception('Couldn\'t parse nodename from bootserver output.')
self._node_name = m.groupdict()['nodename']
logging.info('Booted device "%s".' % self._node_name)
    # Repeatedly query mDNS until we find the device, or until we hit the
    # timeout of BOOT_DISCOVERY_TIMEOUT_SECS.
logging.info('Waiting for device to join network.')
for _ in xrange(BOOT_DISCOVERY_ATTEMPTS):
if self._Discover():
break
if not self._host:
raise Exception('Device %s couldn\'t be discovered via mDNS.' %
self._node_name)
    self._WaitUntilReady()
def _GetEndpoint(self):
return (self._host, self._port)
def _GetSshConfigPath(self):
return self._ssh_config_path
def Restart(self):
"""Restart the device."""
self.RunCommandPiped('dm reboot')
time.sleep(_REBOOT_SLEEP_PERIOD)
self.Start()
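# A minimal usage sketch (argument values are hypothetical; in practice the
# target is usually built from parsed command line args via CreateFromArgs):
#
#   device = DeviceTarget(out_dir='/tmp/out', target_cpu='x64',
#                         node_name='my-fuchsia-node')
#   device.Start()
#   repo = device.GetAmberRepo()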
|
youtube/cobalt
|
build/fuchsia/device_target.py
|
Python
|
bsd-3-clause
| 10,010
|
[
"Amber"
] |
7fcf7eb5cb890ae686a93c10cca6b67e59ddcc124bb2a5d2eebb61d7a4290742
|
import pyspeckit
from pyspeckit.spectrum.readers import read_class
from pyspeckit import cubes
import numpy as np
from numpy import ma  # needed by the MAD() helper below
from astropy import wcs
from astropy import coordinates
from astropy import units as u
from astropy import constants
try:
from .progressbar import ProgressBar
except:
from astropy.utils.console import ProgressBar
from astropy.convolution import convolve, Gaussian1DKernel, Gaussian2DKernel
from sdpy import makecube
from astropy.io import fits
from FITS_tools import cube_regrid
from FITS_tools.load_header import get_cd
from astropy.wcs import WCS
import FITS_tools
import scipy.ndimage
import scipy.linalg
import time
from astropy.time import Time
import mpl_plot_templates
import pylab as pl
import os
import errno
from astropy import log
import glob
from scipy.ndimage import filters
from scipy import signal,interpolate
import warnings
import image_tools
import spectral_cube
from spectral_cube import SpectralCube,BooleanArrayMask
from agpy import mad
import matplotlib
from lines import all_lines
import paths
matplotlib.rc_file(paths.pcpath('pubfiguresrc'))
# http://www.apex-telescope.org/heterodyne/shfi/calibration/calfactor/
# Apparently all data taken on MPI and ESO time in the first half of 2014 were
# off by 15-25%. The data need to be multiplied by these factors.
calibration_factors = {'2014-04-23:2014-06-13': 0.875, # from Sgr B2
'2014-02-01:2014-04-23': 0.78,
None: 1,
}
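def get_calibration_factor(obsdate):
    """Look up the calibration factor for an ``astropy.time.Time``.
    A minimal helper sketch, not part of the original pipeline: it parses the
    'YYYY-MM-DD:YYYY-MM-DD' keys of ``calibration_factors`` above and falls
    back to the ``None`` entry when no range matches.
    """
    for key, factor in calibration_factors.items():
        if key is None:
            continue
        start, stop = [Time(datestr) for datestr in key.split(':')]
        if start <= obsdate <= stop:
            return factor
    return calibration_factors[None]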
datasets_ao = ['O-085.F-9311A-2010_merge','E-085.B-0964A-2010_merge']
datasets_2013 = ['M-091.F-0019-2013-2013-06-08',
'M-091.F-0019-2013-2013-06-11',
'M-091.F-0019-2013-2013-06-12',
'M-091.F-0019-2013-2013-06-13']
datasets_2014 = {
'E-093.C-0144A.2014APR02/E-093.C-0144A-2014-2014-04-01': ('MAP_001',),
'E-093.C-0144A.2014APR03/E-093.C-0144A-2014-2014-04-02': ('MAP_001',),
'M-093.F-0009-2014-2014-04/M-093.F-0009-2014-2014-04-24': ('MAP_115','MAP_116',),
'M-093.F-0009-2014-2014-04/M-093.F-0009-2014-2014-04-30': ('MAP_116',),
'M-093.F-0009-2014-2014-04/M-093.F-0009-2014-2014-05-02': ('MAP_116',),
'M-093.F-0009-2014-2014-04/M-093.F-0009-2014-2014-05-04': ('MAP_115','MAP_116',),
'M-093.F-0009-2014-2014-04/M-093.F-0009-2014-2014-05-08': ('MAP_117','MAP_118',),
'M-093.F-0009-2014-2014-04/M-093.F-0009-2014-2014-05-09': ('MAP_119','MAP_118',),
'M-093.F-0009-2014-2014-04/M-093.F-0009-2014-2014-05-10': ('MAP_120','MAP_121','MAP_119',),
'M-093.F-0009-2014-2014-04/M-093.F-0009-2014-2014-05-11': ('MAP_121','MAP_122','MAP_123','MAP_124',),
'M-093.F-0009-2014-2014-04/M-093.F-0009-2014-2014-05-12': ('MAP_055','MAP_056','MAP_124',),
'M-093.F-0009-2014-2014-04/M-093.F-0009-2014-2014-05-13': ('MAP_031','MAP_032','MAP_057','MAP_058',),
'E-093.C-0144A.2014MAY30/E-093.C-0144A-2014-2014-05-29': ('MAP_002','MAP_003','MAP_004'),
'E-093.C-0144A.2014MAY31/E-093.C-0144A-2014-2014-05-30': ('MAP_005','MAP_006'),
'E-093.C-0144A.2014JUN01/E-093.C-0144A-2014-2014-05-31': ('MAP_007',),
'E-093.C-0144A.2014JUN02/E-093.C-0144A-2014-2014-06-01': ('MAP_009','MAP_010','MAP_008',),
'E-093.C-0144A.2014JUN03/E-093.C-0144A-2014-2014-06-02': ('MAP_011','MAP_012','MAP_013', 'MAP_018', 'MAP_019'),
'E-093.C-0144A.2014JUN06/E-093.C-0144A-2014-2014-06-05': ('Map_020', 'Map_021', 'Map_022', 'Map_023', 'Map_024', 'Map_025'),
# There is some corrupt data in 06-06
'E-093.C-0144A.2014JUN07/E-093.C-0144A-2014-2014-06-06': ('Map_001', 'Map_026', 'Map_027', 'Map_028', 'Map_029', 'Map_030'),
# should be 05-07: map117
'M-093.F-0009-2014-2014-07-10':['MAP_031', 'MAP_030'],
'M-093.F-0009-2014-2014-07-11':['MAP_029', 'MAP_030'],
'M-093.F-0009-2014-2014-07-12':['MAP_028', 'MAP_029'],
'M-093.F-0009-2014-2014-07-13':['MAP_028', 'MAP_026', 'MAP_027', 'MAP_024', 'MAP_025'],
'M-093.F-0009-2014-2014-07-14':['MAP_024','MAP_025'],
'M-093.F-0009-2014-2014-07-19':['MAP_024',],
'M-093.F-0009-2014-2014-07-20':['MAP_024',],
'E-093.C-0144A.2014JUL29/E-093.C-0144A-2014-2014-07-28': ['MAP_002', 'MAP_001'],
'E-093.C-0144A.2014JUL29/E-093.C-0144A-2014-2014-07-29': ['MAP_002',],
'E-093.C-0144A.2014JUL30/E-093.C-0144A-2014-2014-07-29': ['MAP_004', 'MAP_002', 'MAP_003'],
'E-093.C-0144A.2014JUL31/E-093.C-0144A-2014-2014-07-30': ['MAP_005', 'MAP_006'],
'E-093.C-0144A.2014AUG01/E-093.C-0144A-2014-2014-07-31': ['MAP_006', 'MAP_007', 'MAP_008', 'MAP_009', 'MAP_012', 'MAP_011', 'MAP_010'],
'E-093.C-0144A.2014AUG01/E-093.C-0144A-2014-2014-08-01': ['MAP_013',],
'E-093.C-0144A.2014AUG02/E-093.C-0144A-2014-2014-08-01': ['MAP_013', 'MAP_018'],
'E-093.C-0144A.2014AUG09/E-093.C-0144A-2014-2014-08-07': ['MAP_024', 'MAP_022', 'MAP_023', 'MAP_025'],
'E-093.C-0144A.2014AUG09/E-093.C-0144A-2014-2014-08-08': ['MAP_027', 'MAP_026'],
'E-093.C-0144A.2014OCT07/E-093.C-0144A-2014-2014-10-06':['MAP_020', 'MAP_019', 'MAP_021'],
'E-093.C-0144A.2014OCT06/E-093.C-0144A-2014-2014-10-05':['MAP_020'],
'E-093.C-0144A.2014OCT08/E-093.C-0144A-2014-2014-10-07':['MAP_056', 'MAP_022', 'MAP_031', 'MAP_023', 'MAP_019', 'MAP_032'],
'E-093.C-0144A.2014OCT09/E-093.C-0144A-2014-2014-10-08':['MAP_022', 'MAP_117', 'MAP_122', 'MAP_057', 'MAP_121'],
'E-093.C-0144A.2014OCT11/E-093.C-0144A-2014-2014-10-10':['MAP_011', 'MAP_058', 'MAP_018'],
'E-093.C-0144A.2014OCT12/E-093.C-0144A-2014-2014-10-11':['MAP_005', 'MAP_011', 'MAP_030', 'MAP_028'],
'E-093.C-0144A.2014OCT13/E-093.C-0144A-2014-2014-10-12':['MAP_001'],
'E-095.C-0242A.2015APR24/E-095.C-0242A-2015-2015-04-23':['MAP_032','MAP_123','MAP_124','MAP_118','MAP_031'],
'E-095.C-0242A.2015APR25/E-095.C-0242A-2015-2015-04-24':['MAP_123','MAP_124','MAP_118','MAP_115','MAP_055','MAP_031'],
'E-095.C-0242A.2015APR26/E-095.C-0242A-2015-2015-04-25':['MAP_115','MAP_019','MAP_116','MAP_122'],
'E-095.C-0242A.2015APR27/E-095.C-0242A-2015-2015-04-26':['MAP_056','MAP_021'],
'E-095.C-0242A.2015MAY04/E-095.C-0242A-2015-2015-05-03':['MAP_056'],
}
#M-093.F-0009-2014-2014-04/M-093.F-0009-2014-2014-04-24 ['MAP_115', 'MAP_116']
#M-093.F-0009-2014-2014-04/M-093.F-0009-2014-2014-04-30 ['MAP_116']
#E-093.C-0144A.2014JUN01/E-093.C-0144A-2014-2014-05-31 ['MAP_007']
#E-093.C-0144A.2014APR03/E-093.C-0144A-2014-2014-04-02 ['MAP_001']
#E-093.C-0144A.2014JUN02/E-093.C-0144A-2014-2014-06-01 ['MAP_008', 'MAP_009', 'MAP_010']
#M-093.F-0009-2014-2014-04/M-093.F-0009-2014-2014-05-09 ['MAP_118', 'MAP_119']
#M-093.F-0009-2014-2014-04/M-093.F-0009-2014-2014-05-08 ['MAP_118', 'MAP_117']
#E-093.C-0144A.2014JUN03/E-093.C-0144A-2014-2014-06-02 ['MAP_013', 'MAP_012', 'MAP_011', 'MAP_019', 'MAP_018']
#E-093.C-0144A.2014MAY31/E-093.C-0144A-2014-2014-05-30 ['MAP_005', 'MAP_006']
#E-093.C-0144A.2014MAY30/E-093.C-0144A-2014-2014-05-29 ['MAP_004', 'MAP_002', 'MAP_003']
#M-093.F-0009-2014-2014-04/M-093.F-0009-2014-2014-05-02 ['MAP_116']
#M-093.F-0009-2014-2014-04/M-093.F-0009-2014-2014-05-04 ['MAP_115', 'MAP_116']
#E-093.C-0144A.2014APR02/E-093.C-0144A-2014-2014-04-01 ['MAP_001']
#M-093.F-0009-2014-2014-04/M-093.F-0009-2014-2014-05-10 ['MAP_119', 'MAP_121', 'MAP_120']
#M-093.F-0009-2014-2014-04/M-093.F-0009-2014-2014-05-11 ['MAP_123', 'MAP_122', 'MAP_121', 'MAP_124']
#M-093.F-0009-2014-2014-04/M-093.F-0009-2014-2014-05-12 ['MAP_124', 'MAP_056', 'MAP_055']
#M-093.F-0009-2014-2014-04/M-093.F-0009-2014-2014-05-13 ['MAP_031', 'MAP_058', 'MAP_057', 'MAP_032']
june2013datapath = '/Users/adam/work/h2co/apex/june2013/raw/M-091.F-0019-2013/'
june2013path = '/Users/adam/work/h2co/apex/june2013/'
april2014path = '/Users/adam/work/h2co/apex/april2014/'
h2copath = '/Users/adam/work/h2co/apex/h2co_cubes/'
mergepath = '/Users/adam/work/h2co/apex/merged_datasets/'
molpath = os.path.join(mergepath, 'molecule_cubes/')
aorawpath = '/Users/adam/work/h2co/apex/2010_reduced/2010_raw/'
aopath = '/Users/adam/work/h2co/apex/2010_reduced/'
diagplotdir = '/Users/adam/work/h2co/apex/diagnostic_plots/'
bright_lines = {k:all_lines[k] for k in
['H2CO_303_202', 'H2CO_322_221', 'H2CO_321_220', 'SiO_54',
'CH3OH_422_312', 'C18O', '13CO']}
bandwidths = {'H2CO_303_202':25,
'H2CO_322_221':25,
'H2CO_321_220':25,
'SiO_54':25,
'CH3OH_422_312':25,
'CH3OH_514_422':25,
'CH3OH_633_716':25,
'HCCCH_65': 25,
'OCS_18_17':25,
'CH3OCHO_17_16':25,
'C18O':75,
'13CO':75,
#'H2S 2(2,0)-2(1,1)': 216.71044, ??
}
lines218 = {x:v for x,v in all_lines.iteritems()
if 'H2CO' in x or 'CH3OH_422_312' in x}
all_apexfiles = ([os.path.join(june2013datapath, k)+".apex"
for k in datasets_2013] +
[os.path.join(april2014path, k)+".apex"
for k in datasets_2014] +
[os.path.join(aorawpath, k)+".apex"
for k in datasets_ao]
)
def mkdir_p(path):
""" http://stackoverflow.com/questions/600268/mkdir-p-functionality-in-python """
try:
os.makedirs(path)
except OSError as exc: # Python >2.5
if exc.errno == errno.EEXIST and os.path.isdir(path):
pass
else: raise
def checkdir_makedir(path):
dpath = os.path.split(path)[0]
if not os.path.exists(dpath) and dpath:
mkdir_p(dpath)
def MAD(a, c=0.6745, axis=None):
"""
Median Absolute Deviation along given axis of an array:
median(abs(a - median(a))) / c
c = 0.6745 is the constant to convert from MAD to std; it is used by
default
"""
a = ma.masked_where(a!=a, a)
if a.ndim == 1:
d = ma.median(a)
m = ma.median(ma.fabs(a - d) / c)
else:
d = ma.median(a, axis=axis)
# I don't want the array to change so I have to copy it?
if axis > 0:
aswp = ma.swapaxes(a,0,axis)
else:
aswp = a
m = ma.median(ma.fabs(aswp - d) / c, axis=0)
return m
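# A small hand-worked check of MAD (illustrative values, not pipeline data):
# for a = [1, 2, 3, 4, 100], np.std(a) is ~39.0, while
# MAD(a) = median(|a - 3|) / 0.6745 = 1 / 0.6745 ~= 1.48, i.e. the single
# outlier barely moves the robust estimate.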
def debug_and_load(test='test'):
spectra,headers,indices,data,hdrs,gal = load_dataset_for_debugging(skip_data=False, lowhigh='high')
make_blanks_freq(gal, hdrs[0], test, clobber=True)
dmeansub,gal,hdrs = process_data(data, gal, hdrs, dataset=test,
subspectralmeans=True, scanblsub=False)
add_apex_data(dmeansub, hdrs, gal, test, retfreq=True, varweight=True,)
dscube = cube_regrid.downsample_cube(fits.open(test+".fits")[0], factor=4)
dscube.writeto(test+"_ds.fits",clobber=True)
make_blanks_freq(gal, hdrs[0], test+"_blsub", clobber=True)
dspecsub,gal,hdrs = process_data(data, gal, hdrs, dataset=test+"_blsub",
subspectralmeans=True, scanblsub=True)
add_apex_data(dspecsub, hdrs, gal, test+"_blsub", retfreq=True, varweight=True,)
dscube = cube_regrid.downsample_cube(fits.open(test+"_blsub.fits")[0], factor=4)
dscube.writeto(test+"_blsub_ds.fits",clobber=True)
make_blanks_freq(gal, hdrs[0], test+"_pcasub", clobber=True)
dpcasub,gal,hdrs = process_data(data, gal, hdrs, dataset=test+"_pcasub",
subspectralmeans=True, scanblsub=True,
pca_clean=True, pcakwargs={})
add_apex_data(dpcasub, hdrs, gal, test+"_pcasub", retfreq=True, varweight=True,)
dscube = cube_regrid.downsample_cube(fits.open(test+"_pcasub.fits")[0], factor=4)
dscube.writeto(test+"_pcasub_ds.fits",clobber=True)
freq = hdr_to_freq(hdrs[0])
mask = make_line_mask(freq)
return spectra,headers,indices,data,hdrs,gal,dspecsub,dmeansub,dpcasub,freq,mask
def load_dataset_for_debugging(lowhigh='low', downsample_factor=8,
dataset='M-091.F-0019-2013-2013-06-11',
datapath=june2013datapath,
xscan=37986,
sourcename='SGRA',
shapeselect=4096,
backend='xffts',
skip_data=True):
"""
Example:
spectra,headers,indices, data,hdrs,gal = load_dataset_for_debugging(skip_data=False)
make_blanks_freq(gal, hdrs[0], 'test', clobber=True)
noise = np.std(data,axis=1)
freq_step = np.array([h['FRES'] for h in hdrs])
exptime = np.array([h['EXPOSURE'] for h in hdrs])
tsys = np.array([h['TSYS'] for h in hdrs])
diagplot(data, tsys, noise, 'test')
add_apex_data(data, hdrs, gal, cubefilename, retfreq=True, varweight=True,)
"""
if lowhigh not in ('low','high'):
        raise ValueError("lowhigh must be 'low' or 'high'")
if backend == 'xffts':
xtel = 'AP-H201-X202' if lowhigh=='low' else 'AP-H201-X201'
else:
xtel = 'AP-H201-F101' if lowhigh == 'high' else 'AP-H201-F102'
apex_filename=datapath+dataset+".apex"
spectra,headers,indices = load_apex_cube(apex_filename,
downsample_factor=downsample_factor,
xtel=xtel,
sourcename=sourcename)
data, hdrs, gal = select_apex_data(spectra, headers, indices,
sourcename=sourcename,
shapeselect=shapeselect,
tsysrange=[100,325],
xtel=xtel,
rchanrange=None,
xscan=xscan,
skip_data=skip_data)
return spectra,headers,indices, data,hdrs,gal
def get_sourcenames(headers):
return list(set([h['SOURC'].strip() for h in headers]))
def load_apex_cube(apex_filename='data/E-085.B-0964A-2010.apex',
skip_data=False, DEBUG=False, downsample_factor=None,
sourcename=None, xtel=None,
memmap=True, **kwargs):
found_data = read_class.read_class(apex_filename,
downsample_factor=downsample_factor,
sourcename=sourcename, telescope=xtel,
**kwargs)
return found_data
def select_apex_data(spectra,headers,indices, sourcename=None,
shapeselect=None, tsysrange=None, rchanrange=None,
xscan=None,
xtel=None,
skip_data=False,
dont_flag_sgrb2=True,
galactic_coordinate_range=[[-2,2],[-2,2]]):
log.info("Determining RA/Dec")
ra,dec = zip(*[(h['RA']+h['RAoff']/np.cos(h['DEC']/180.*np.pi),
h['DEC']+h['DECoff']) for h in headers])
log.info("Determining Galactic coordinates")
gal = coordinates.SkyCoord(np.array(ra)*u.deg,
np.array(dec)*u.deg,
frame='icrs').galactic
#gal.l.wrap_angle = 180*u.deg
if galactic_coordinate_range is not None:
(lmin,lmax),(bmin,bmax) = galactic_coordinate_range
galOK = ((gal.l.wrap_at(180*u.deg).deg > lmin) &
(gal.l.wrap_at(180*u.deg).deg < lmax) &
(gal.b.deg > bmin) &
(gal.b.deg < bmax))
else:
galOK = True
sourceOK = True
#if isinstance(sourcename, (list,tuple)):
# sourceOK = np.array([h['SOURC'].strip() in sourcename for h in headers])
#elif sourcename is not None:
# sourceOK = np.array([h['SOURC'].strip()==sourcename for h in headers])
#else:
# sourceOK = True
if xscan is not None:
xscanOK = np.array([h['SCAN']==xscan for h in headers])
else:
xscanOK = True
xtelOK = True
#if xtel is not None:
# xtelOK = np.array([h['XTEL'].strip()==xtel for h in headers])
#else:
# xtelOK = True
if tsysrange is not None:
tsys = np.array([h['TSYS'] for h in headers])
tsysOK = (tsys>tsysrange[0]) & (tsys<tsysrange[1])
if dont_flag_sgrb2:
sgrb2 = ((gal.l.wrap_at(180*u.deg).deg > 0.64) &
(gal.l.wrap_at(180*u.deg).deg<0.7) &
(gal.b.deg>-0.06) &
(gal.b.deg<-0.01))
tsysOK[sgrb2] = True
else:
tsysOK = True
if rchanrange is not None:
rchan = np.array([h['RCHAN'] if 'RCHAN' in h else np.inf for h in headers])
rchanOK = (rchan>rchanrange[0]) & (rchan<rchanrange[1])
else:
rchanOK = True
mostOK = galOK & sourceOK & tsysOK & rchanOK & xtelOK & xscanOK
if not skip_data:
log.info("Shaping data")
data1 = np.array(spectra)
shapes = np.array([d.shape for d in data1])
if shapeselect is not None:
OKshapes = (shapes == shapeselect).squeeze()
elif len(np.unique(shapes[mostOK])) > 1:
raise ValueError("Inconsistent shapes.")
else:
OKshapes = True
else:
OKshapes = True
allOK = mostOK & OKshapes
if allOK.sum() == 0:
raise ValueError("Data selection yielded empty. Sourcename={0}".format(sourcename))
if skip_data:
data = None
else:
data = np.array(data1[allOK].tolist())
hdrs = [h for h,K in zip(headers,allOK) if K]
gal = gal[allOK]
return data,hdrs,gal
def process_data(data, gal, hdrs, dataset, scanblsub=False,
subspectralmeans=True, verbose=False, noisefactor=3.0,
linemask=False, automask=2,
zero_edge_pixels=0,
subtract_time_average=False,
pca_clean=False,
timewise_pca=True,
pcakwargs={},
**kwargs):
timeaxis = 0
freqaxis = 1
log.info("Processing {0}".format(dataset))
if zero_edge_pixels:
# Force the Nth first/last frequency pixels to zero
data[:,:zero_edge_pixels] = 0
data[:,-zero_edge_pixels:] = 0
# flag extremely bad pixels (don't know where these come from, scary!)
extremely_bad = (data > 1e10) | (data < -1e10)
# Set to zero rather than nan to avoid masking-related issues below
data[extremely_bad] = 0
if subspectralmeans:
data = data - data.mean(axis=freqaxis)[:,None]
obsids = np.array([h['SCAN'] for h in hdrs])
# for plotting and masking, determine frequency array
freq = hdr_to_freq(hdrs[0])
scans = identify_scans_fromcoords(gal)
if scanblsub:
data_diagplot(data, dataset+"_presub", scans=scans, freq=freq,
**kwargs)
for ii,xscan in enumerate(np.unique(obsids)):
match = obsids == xscan
# maybe mask=mask_pix.max(axis=timeaxis), ?
#mask=mask_pix[ii],
data_diagplot(data[match], dataset+"_presub_obs%i" % xscan,
freq=freq, **kwargs)
if linemask:
mask = make_line_mask(freq)
else:
mask = None
dsub,mask_pix = subtract_scan_linear_fit(data, scans, mask_pixels=mask,
verbose=verbose,
automask=automask,
smooth_all=True,
return_mask=True)
if len(mask_pix) == 0:
mask = None
else:
mask = mask_pix.max(axis=timeaxis).astype('bool')
elif subtract_time_average:
# subtracting mean spectrum from all spectra
dsub = data - data.mean(axis=timeaxis)
mask = None
else:
mask = None
dsub = data
if pca_clean:
t0 = time.time()
if timewise_pca:
dsub = PCA_clean(dsub.T, smoothing_scale=False,
diagplotfilename=os.path.join(diagplotdir,
dataset+"_time_pca_diagnostic.png"),
**pcakwargs).T
else:
# DON'T remove the mean: that's dealt with in 'spectral baselining' in
# a more conservative fashion
dmean = dsub.mean(axis=0)
dsub = PCA_clean(dsub-dmean,
diagplotfilename=os.path.join(diagplotdir,
dataset+"_pca_diagnostic.png"),
**pcakwargs) + dmean
log.info("PCA cleaning took {0} seconds".format(time.time()-t0))
# Standard Deviation can be fooled by obscene outliers
#noise = MAD(dsub,axis=freqaxis)
noise = np.std(dsub,axis=freqaxis)
freq_step = np.array([h['FRES'] for h in hdrs])
exptime = np.array([h['EXPOSURE'] for h in hdrs])
tsys = np.array([h['TSYS'] for h in hdrs])
# 2 for 2 polarizations; otherwise this is Wilson 2009 eqn 4.41
theoretical_rms = tsys/(2.*np.abs(freq_step*1.0e6)*exptime)**0.5
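    # Worked example (illustrative numbers): for TSYS = 150 K,
    # |FRES| = 0.1 MHz and a 1 s integration, the expected rms is
    # 150 / sqrt(2 * 1e5 * 1) ~= 0.34 K.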
# extra factor 3.0 to avoid overflagging; this means flagging
# only 3-sigma outliers.
bad = noise > (theoretical_rms*noisefactor)
# SgrB2 has higher noise. Don't flag it out.
sgrb2 = ((gal.l.wrap_at(180*u.deg).deg > 0.64) &
(gal.l.wrap_at(180*u.deg).deg<0.7) &
(gal.b.deg>-0.06) &
(gal.b.deg<-0.01))
bad[sgrb2] = False
# pre-flagging diagnostic
diagplot(dsub, tsys, noise, dataset+"_preflag", freq=freq, mask=mask,
scans=scans, **kwargs)
if np.count_nonzero(bad) == bad.size:
import ipdb; ipdb.set_trace()
raise ValueError("All data will be flagged out; something is amiss.")
    dsub = dsub[~bad]
    obsids = obsids[~bad]
    tsys = tsys[~bad]
    noise = noise[~bad]
    gal = gal[~bad]
hdrs = [h for h,b in zip(hdrs,bad) if not b]
log.info("Flagged out %i bad values (%0.1f%%)." % (bad.sum(),bad.sum()/float(bad.size)))
diagplot(dsub, tsys, noise, dataset, freq=freq, mask=mask, scans=scans,
**kwargs)
for xscan in np.unique(obsids):
match = obsids == xscan
diagplot(dsub[match], tsys[match], noise[match],
dataset+"_obs%i" % xscan, freq=freq, mask=mask, **kwargs)
return dsub,gal,hdrs
def classheader_to_fitsheader(header, axisnumber=1):
header['CRPIX{0}'.format(axisnumber)] = header['RCHAN']
header['CRVAL{0}'.format(axisnumber)] = header['VOFF']
header['CDELT{0}'.format(axisnumber)] = header['VRES']
    header['RESTFRQ'] = header['RESTF']
header['CUNIT{0}'.format(axisnumber)] = 'km s-1'
hdr = fits.Header()
for k in header:
if k == 'DATEOBS':
hdr[k] = header[k].datetime.isoformat()
elif isinstance(header[k], (np.ndarray, list, tuple)):
for ii,val in enumerate(header[k]):
hdr[k[:7]+str(ii)] = val
else:
hdr[k[:8]] = header[k]
hdr['TREC'] = 0
#hdr.insert(axisnumber+2, ('NAXIS{0}'.format(axisnumber), header['DATALEN']))
#assert hdr.cards[3][0] == 'NAXIS1'
return hdr
def hdr_to_freq(h):
freqarr = ((np.arange(h['NCHAN'])+1-h['RCHAN']) * h['FRES'] +
h['FOFF'] + h['RESTF'])
return freqarr
def hdr_to_velo(h):
veloarr = (np.arange(h['NCHAN'])+1-h['RCHAN']) * h['VRES'] + h['VOFF']
return veloarr
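# A minimal sketch of the axis construction above (header values are
# hypothetical): with NCHAN=4, RCHAN=1, FRES=0.5 MHz, FOFF=0 and
# RESTF=218222.192 MHz, hdr_to_freq returns
# [218222.192, 218222.692, 218223.192, 218223.692] MHz, i.e. FRES-spaced
# channels anchored so that channel RCHAN lands at FOFF + RESTF.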
def add_apex_data(data, hdrs, gal, cubefilename, noisecut=np.inf,
retfreq=False, excludefitrange=None, varweight=True,
debug=False, kernel_fwhm=10./3600.):
if debug and log.level > 10:
log.level = 10
log.info("Data shape: {}. Next step is gridding.".format(data.shape))
if data.ndim != 2:
        raise ValueError('Data must be 2-dimensional (nspectra, nchannels).')
if data.shape[0] != len(hdrs):
raise ValueError('Data and headers do not match')
if data.shape[0] != len(gal):
        raise ValueError('Data and coords do not match')
def data_iterator(data=data, continuum=False, fsw=False):
shape0 = data.shape[0]
for ii in xrange(shape0):
#for ii in xrange(1000):
yield data[ii,:]
# as defined on http://www.apex-telescope.org/heterodyne/shfi/het230/lines/
linefreq = 218222.192
def velo_iterator(data=None, linefreq=linefreq, headers=hdrs):
for h in headers:
if retfreq:
freqarr = hdr_to_freq(h)
#veloarr = ((freqarr-linefreq)/linefreq * constants.c).to(u.km/u.s).value
# needs to be in hz
yield freqarr*1e6*u.Hz
else:
veloarr = hdr_to_velo(h)
yield veloarr*u.km/u.s
def coord_iterator(data=None, coordsys_out='galactic', gal=gal):
for c in gal:
yield c.l.deg, c.b.deg
nhits = cubefilename+"_nhits.fits"
flatheader = fits.getheader(nhits)
cubeheader = fits.getheader(cubefilename+".fits")
makecube.add_data_to_cube(cubefilename+".fits", data=data,
flatheader=flatheader,
cubeheader=cubeheader, linefreq=218.22219,
allow_smooth=True,
nhits=nhits,
data_iterator=data_iterator,
coord_iterator=coord_iterator,
velo_iterator=velo_iterator,
progressbar=True, coordsys='galactic',
velocity_offset=0.0, negative_mean_cut=None,
add_with_kernel=True, kernel_fwhm=kernel_fwhm,
fsw=False,
diagnostic_plot_name=None, chmod=False,
default_unit=u.GHz if retfreq else u.km/u.s,
smoothto=2,
noisecut=noisecut,
excludefitrange=None,
varweight=varweight,
continuum_prefix=None)
def add_pipeline_parameters_to_file(fileprefix, pipeline_type, **kwargs):
if not os.path.exists(fileprefix+".fits"):
return False
f = fits.open(fileprefix+".fits")
f[0].header['PIPECALL'] = (pipeline_type,'build_cube function called')
for ii,(k,v) in enumerate(kwargs.iteritems()):
try:
kw = ('P{pipetype:_<4s}K{n:02d}'.format(n=ii,
pipetype=pipeline_type[:4])
.upper())
keypair = "{k}:{v}".format(k=k, v=v)
f[0].header[kw] = keypair
except Exception as ex:
log.warning("Header could not be updated with key/value pair"
"{k}:{v}. Error: {ex}".format(k=k, v=v, ex=ex))
f.writeto(fileprefix+".fits", clobber=True, output_verify='fix')
def add_pipeline_header_data(header):
header['PIPELINE'] = 'Ginsburg 2014 SHFI OTF Pipeline'
header['TELESCOP'] = 'APEX'
header['INSTRUME'] = 'SHFI-1'
header['PIPEDATE'] = (time.strftime("%y_%m_%d_%H:%M:%S"), 'Date pipeline was run')
from .version import version,githash
header['PIPEVERS'] = version
header['PIPEGIT'] = githash
import sdpy.version
header['SDPYVERS'] = (sdpy.version.version, 'sdpy version')
import astropy.version
header['ASTROPYV'] = (astropy.version.version,'Astropy version')
try:
import pyspeckit
header['PYSPECKV'] = pyspeckit.__version__
    except (ImportError, AttributeError):
pass
import FITS_tools.version
header['FITSTOOV'] = (FITS_tools.version.version,'FITS_tools version')
import scipy.version
header['SCIPYVER'] = (scipy.version.version,'scipy version')
import numpy.version
header['NUMPYVER'] = (numpy.version.version,'numpy version')
import spectral_cube.version
header['SPCUBEVE'] = (spectral_cube.version.version,'spectral_cube version')
header['BUNIT'] = ('K', 'T_A*; ETAMB has efficiency')
header['ETAMB'] = (0.75, 'http://www.apex-telescope.org/telescope/efficiency/')
def make_blanks(gal, header, cubefilename, clobber=True, pixsize=7.2*u.arcsec):
lrange = (gal.l.wrap_at(180*u.deg).deg.min()+15/3600.,
gal.l.wrap_at(180*u.deg).deg.max()+15/3600.)
brange = gal.b.deg.min()+15/3600.,gal.b.deg.max()+15/3600.
log.info("Map extent automatically determined: "
"%0.2f < l < %0.2f, %0.2f < b < %0.2f" % (lrange[0], lrange[1],
brange[0], brange[1]))
naxis1 = (lrange[1]-lrange[0])/(pixsize.to(u.deg).value)
naxis2 = (brange[1]-brange[0])/(pixsize.to(u.deg).value)
restfreq = (header['RESTF']*u.MHz)
# beam major/minor axis are the same, gaussian for 12m telescope
# we convolved with a 10" FWHM Gaussian kernel, so we add that in quadrature
    bmaj_ = (1.22*restfreq.to(u.m,u.spectral())/(12*u.m))*u.radian
    bmaj = (bmaj_**2 + (10*u.arcsec)**2)**0.5
cubeheader, flatheader = makecube.generate_header(np.mean(lrange),
np.mean(brange),
naxis1=naxis1,
naxis2=naxis2,
naxis3=4096,
coordsys='galactic',
ctype3='VRAD',
bmaj=bmaj.to(u.deg).value,
bmin=bmaj.to(u.deg).value,
pixsize=pixsize.to(u.arcsec).value,
cunit3='km/s',
output_flatheader='header.txt',
output_cubeheader='cubeheader.txt',
cd3=header['VRES'],
crval3=-1*header['VRES']*header['RCHAN'],
crpix3=1, clobber=True,
bunit="K",
restfreq=restfreq.to(u.Hz).value,
radio=True)
add_pipeline_header_data(cubeheader)
add_pipeline_header_data(flatheader)
makecube.make_blank_images(cubefilename, cubeheader=cubeheader,
flatheader=flatheader, clobber=clobber,
dtype='float32')
def make_blanks_freq(gal, header, cubefilename, clobber=True, pixsize=7.2*u.arcsec):
    """ Make blank cube/flat images covering the complete frequency range """
lrange = gal.l.wrap_at(180*u.deg).deg.min()+15/3600.,gal.l.wrap_at(180*u.deg).deg.max()+15/3600.
brange = gal.b.deg.min()+15/3600.,gal.b.deg.max()+15/3600.
log.info("Map extent: %0.2f < l < %0.2f, %0.2f < b < %0.2f" % (lrange[0],
lrange[1],
brange[0],
brange[1]))
naxis1 = int((lrange[1]-lrange[0])/(pixsize.to(u.deg).value)+10)
naxis2 = int((brange[1]-brange[0])/(pixsize.to(u.deg).value)+10)
restfreq = (header['RESTF']*u.MHz)
# beam major/minor axis are the same, gaussian for 12m telescope
# we convolved with a 10" FWHM Gaussian kernel, so we add that in quadrature
    bmaj_ = (1.22*restfreq.to(u.m,u.spectral())/(12*u.m))*u.radian
    bmaj = (bmaj_**2 + (10*u.arcsec)**2)**0.5
rchan = header['RCHAN']
#scalefactor = 1./downsample_factor
#crpix3 = (rchan-1)*scalefactor+0.5+scalefactor/2.
cubeheader, flatheader = makecube.generate_header(np.mean(lrange),
np.mean(brange),
naxis1=naxis1,
naxis2=naxis2,
naxis3=header['NCHAN'],
coordsys='galactic',
bmaj=bmaj.to(u.deg).value,
bmin=bmaj.to(u.deg).value,
pixsize=pixsize.to(u.arcsec).value,
cunit3='Hz',
ctype3='FREQ',
output_flatheader='header.txt',
output_cubeheader='cubeheader.txt',
cd3=header['FRES']*1e6,
crval3=restfreq.to(u.Hz).value,
crpix3=rchan,
clobber=True, bunit="K",
restfreq=restfreq.to(u.Hz).value,
radio=True)
add_pipeline_header_data(cubeheader)
add_pipeline_header_data(flatheader)
makecube.make_blank_images(cubefilename, flatheader=flatheader,
cubeheader=cubeheader, clobber=clobber,
dtype='float32')
def make_blanks_merge(cubefilename, lowhigh='low', clobber=True,
width=1.0*u.GHz, lowest_freq=None, pixsize=7.2*u.arcsec,
restfreq=218222.192*u.MHz):
# total size is 2.3 x 0.4 degrees
# 1150x
# center is 0.55 -0.075
naxis1 = 1150
naxis2 = 200
# beam major/minor axis are the same, gaussian for 12m telescope
# we convolved with a 10" FWHM Gaussian kernel, so we add that in quadrature
    bmaj_ = (1.22*restfreq.to(u.m,u.spectral())/(12*u.m))*u.radian
    bmaj = (bmaj_**2 + (10*u.arcsec)**2)**0.5
cd3 = ((1*u.km/u.s)/constants.c * 218.2*u.GHz).to(u.Hz).value
naxis3 = int(np.ceil(((width / (218.2*u.GHz) * constants.c) / (u.km/u.s)).decompose().value))
if lowest_freq is None:
lowest_freq = 216.8e9 if lowhigh=='low' else 218e9
cubeheader, flatheader = makecube.generate_header(0.55, -0.075,
naxis1=naxis1,
naxis2=naxis2,
naxis3=naxis3,
coordsys='galactic',
bmaj=bmaj.to(u.deg).value,
bmin=bmaj.to(u.deg).value,
pixsize=pixsize.to(u.arcsec).value,
cunit3='Hz',
ctype3='FREQ',
output_flatheader='header.txt',
output_cubeheader='cubeheader.txt',
cd3=cd3,
crval3=lowest_freq,
crpix3=1, clobber=True,
bunit="K",
restfreq=restfreq.to(u.Hz).value,
radio=True)
add_pipeline_header_data(cubeheader)
add_pipeline_header_data(flatheader)
makecube.make_blank_images(cubefilename, flatheader=flatheader,
cubeheader=cubeheader, clobber=clobber,
dtype='float32')
def data_diagplot(data, dataset, ext='png', newfig=False,
max_size=1024, freq=None, scans=None,
figure=None, axis=None):
log.info("Doing diagnostics in "+dataset)
if figure:
pass
elif newfig:
figure = pl.figure()
else:
figure = pl.figure(1)
figure.clf()
if (np.isnan(data)).all():
log.exception("ALL data is NaN in {0}".format(dataset))
import ipdb; ipdb.set_trace()
if np.any([d > max_size for d in data.shape]):
# downsample to *not less than* max_size
factors = [max([1,int(np.floor(d / max_size))]) for d in data.shape]
data = image_tools.downsample(data, min(factors))
if axis is None:
axis = figure.gca()
axis = mpl_plot_templates.imdiagnostics(data, axis=axis,
second_xaxis=freq)
if freq is not None:
#axis.set_xticklabels(np.interp(axis.get_xticks(),
# np.arange(freq.size),
# freq))
axis.figure.axes[5].set_xlabel("Frequency")
else:
axis.set_xlabel("Channel #")
axis.set_ylabel("Integration #")
if scans is not None and len(scans) < 50:
xlim = axis.get_xlim()
ylim = axis.get_ylim()
axis.hlines(scans, xlim[0], xlim[1], color='k', linestyle='--',
alpha=0.5)
axis.set_xlim(*xlim)
figfilename = os.path.join(diagplotdir, dataset+"_diagnostics."+ext)
checkdir_makedir(figfilename)
try:
pl.savefig(figfilename,bbox_inches='tight')
except Exception as ex:
log.exception(ex)
print ex
return axis
def diagplot(data, tsys, noise, dataset, freq=None, mask=None, ext='png',
newfig=False, **kwargs):
"""
Generate a set of diagnostic plots
Parameters
----------
data : `numpy.ndarray`
A 2D data set, with scans along the y-axis and frequency along the
x-axis
tsys : `numpy.ndarray`
A 1D data set giving TSYS at each time
noise : `numpy.ndarray`
The measured noise in each scan
freq : `numpy.ndarray` or None
The frequencies to plot along the X-axis
mask : `numpy.ndarray`
A boolean mask array with True = good values to be plotted
ext : str
The image extension to use when saving
"""
if newfig:
pl.figure()
else:
pl.figure(2)
pl.clf()
pl.subplot(2,1,1)
pl.plot(tsys,np.arange(tsys.size),alpha=0.5)
pl.xlabel("TSYS")
pl.ylabel("Integration")
pl.subplot(2,1,2)
pl.plot(tsys, noise, '.',alpha=0.5)
pl.xlabel("TSYS")
pl.ylabel("Noise")
figfilename = os.path.join(diagplotdir, dataset+"_tsys."+ext)
checkdir_makedir(figfilename)
pl.savefig(figfilename,bbox_inches='tight')
if newfig:
pl.figure()
else:
pl.figure(3)
pl.clf()
if freq is None:
freq = np.arange(data.shape[1])
pl.plot(freq, data.mean(axis=0))
if mask is not None:
# Avoid the incorrect appearance of interpolation by masking out
# intermediate values
d_to_plot = data.mean(axis=0)
d_to_plot[mask] = np.nan
pl.plot(freq, d_to_plot)
pl.xlabel("Frequency")
pl.ylabel("Mean Counts")
figfilename = os.path.join(diagplotdir, dataset+"_masked."+ext)
checkdir_makedir(figfilename)
pl.savefig(figfilename,bbox_inches='tight')
data_diagplot(data, dataset, ext=ext, newfig=newfig, freq=freq, **kwargs)
def build_cube_generic(window, freq=True, mergefile=None, datapath='./',
outpath='./', datasets=[], scanblsub=False,
shapeselect=None,
sourcename=None,
tsysrange=[100,250],
excludefitrange=None,
downsample_factor=None,
pixsize=7.2*u.arcsec,
kernel_fwhm=10/3600.,
pca_clean=False,
timewise_pca=True,
memmap=True,
mask_level_sigma=3,
blsub=True,
contsub=False,
verbose=False, debug=False, **kwargs):
"""
TODO: comment!
kwargs are passed to process_data
Parameters
----------
window : 'low' or 'high'
Which of the two APEX SHFI windows to use
freq : bool
If True, the cube will be in frequency units and will fully cover the
observed spectral range. If False, the cube will be in velocity units
centered on the observed rest frequency. This is ignored if mergefile
is set
"""
if window not in ('low','high'):
        raise ValueError("window must be 'low' or 'high'")
if mergefile:
cubefilename=os.path.join(outpath,"{0}_{1}".format(mergefile, window))
else:
# assume that we want a cube for EACH data set
cubefilename = None
#rcr = [-1000,0] if window == 'low' else [0,5000]
#xtel = 'AP-H201-F101' if window == 'high' else 'AP-H201-F102'
xtel = 'AP-H201-X202' if window=='low' else 'AP-H201-X201'
all_data,all_hdrs,all_gal = {},{},{}
for dataset in datasets:
apex_filename = os.path.join(datapath,dataset+".apex")
spectra,headers,indices = load_apex_cube(apex_filename,
downsample_factor=downsample_factor,
xtel=xtel,
sourcename=sourcename)
data,hdrs,gal = select_apex_data(spectra, headers, indices,
sourcename=sourcename,
shapeselect=shapeselect, xtel=xtel,
rchanrange=None,
galactic_coordinate_range=None,
tsysrange=tsysrange)
log.info("Selected %i spectra from %s" % (len(hdrs), dataset))
all_data[dataset] = data
all_hdrs[dataset] = hdrs
all_gal[dataset] = gal
all_gal_vect = coordinates.SkyCoord(np.hstack([all_gal[g].l.to(u.radian).value
for g in all_gal]) * u.radian,
np.hstack([all_gal[g].b.to(u.radian).value
for g in all_gal]) * u.radian,
frame='galactic')
all_gal_vect.l.wrap_angle = 180*u.deg
log.info("Data has been collected and flagged, now adding to cube.")
headerpars = dict(kernel_fwhm=kernel_fwhm, pca_clean=pca_clean,
timewise_pca=timewise_pca,
scanblsub=scanblsub)
if 'pcakwargs' in kwargs:
headerpars.update(kwargs['pcakwargs'])
if cubefilename is not None:
add_pipeline_parameters_to_file(cubefilename, 'generic', **headerpars)
for dataset in all_data:
if not mergefile:
cubefilename = os.path.join(outpath, "{0}_{1}_cube".format(dataset,window))
log.debug("Creating blanks for {0}".format(cubefilename))
if freq:
make_blanks_freq(all_gal_vect, hdrs[0], cubefilename,
clobber=True, pixsize=pixsize)
else:
make_blanks(all_gal_vect, hdrs[0], cubefilename, clobber=True,
pixsize=pixsize)
add_pipeline_parameters_to_file(cubefilename, 'generic', **headerpars)
if 'raw' in cubefilename:
import ipdb; ipdb.set_trace()
data = all_data[dataset]
hdrs = all_hdrs[dataset]
gal = all_gal[dataset]
data, gal, hdrs = process_data(data, gal, hdrs, dataset+"_"+xtel,
scanblsub=scanblsub, verbose=verbose,
timewise_pca=timewise_pca,
pca_clean=pca_clean, **kwargs)
add_apex_data(data, hdrs, gal, cubefilename,
excludefitrange=excludefitrange,
retfreq=freq,
varweight=True,
kernel_fwhm=kernel_fwhm,
debug=debug)
if not mergefile:
if contsub:
log.info("Continuum subtraction: {0}.".format(cubefilename))
contsub_cube(cubefilename)
elif blsub:
log.info("Baseline subtraction: {0}.".format(cubefilename))
baseline_cube(cubefilename+".fits",
mask_level_sigma=mask_level_sigma)
if mergefile and contsub:
log.info("Completed cubemaking. Continuum subtraction now.")
contsub_cube(cubefilename)
elif mergefile and blsub:
log.info("Completed cubemaking. Baseline subtraction now.")
baseline_cube(cubefilename, mask_level_sigma=mask_level_sigma)
# Downsample by some factor?
if downsample_factor:
downsample_cube(cubefilename, downsample_factor)
log.info("Done with "+cubefilename)
def downsample_cube(cubefilename, downsample_factor):
log.info("Downsampling "+cubefilename)
cube = fits.open(cubefilename+".fits")
avg = FITS_tools.downsample.downsample_axis(cube[0].data, downsample_factor, 0)
cube[0].data = avg
cube[0].header['CDELT3'] *= downsample_factor
scalefactor = 1./downsample_factor
crpix3 = (cube[0].header['CRPIX3']-1)*scalefactor+0.5+scalefactor/2.
cube[0].header['CRPIX3'] = crpix3
cube.writeto(cubefilename+'_downsampled.fits', clobber=True)
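# Illustrative sketch (an addition, not part of the original pipeline): the
# CRPIX rescaling used in downsample_cube, and repeated throughout this
# module, keeps CRVAL3 attached to the same world coordinate after every
# block of ``downsample_factor`` channels is averaged into one:
#     crpix_new = (crpix_old - 1)/f + 0.5 + 1/(2*f)
def _example_rescaled_crpix(crpix_old, downsample_factor):
    """Toy demonstration of the reference-pixel rescaling formula above."""
    scalefactor = 1./downsample_factor
    return (crpix_old - 1)*scalefactor + 0.5 + scalefactor/2.
# e.g. _example_rescaled_crpix(1.0, 2) == 0.75: the reference pixel shifts
# by half of the new, coarser channel width.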
def build_cube_ao(window, freq=False, mergefile=None,
mergefilename=None,
datapath=aorawpath,
outpath=aopath,
datasets=datasets_ao,
kernel_fwhm=10/3600.,
scanblsub=False,
verbose=False,
debug=False,
pca_clean=True,
timewise_pca=True,
extra_suffix="",
**kwargs):
"""
    Build the Sgr A ('SGRA') cube from the Ao APEX datasets.
kwargs are passed to process_data
"""
    if window not in ('low','high'):
        raise ValueError("window must be 'low' or 'high'")
if mergefile:
if mergefilename is not None:
cubefilename = mergefilename
else:
cubefilename=os.path.join(outpath,'APEX_H2CO_merge_%s' % window)
elif freq:
cubefilename=os.path.join(outpath,'APEX_H2CO_Ao_Freq_%s' % window)
else:
cubefilename=os.path.join(outpath,'APEX_H2CO_Ao_%s' % window)
if extra_suffix:
cubefilename = cubefilename + extra_suffix
#rcr = [-1000,0] if window == 'low' else [0,5000]
xtel = 'AP-H201-F101' if window == 'high' else 'AP-H201-F102'
all_data,all_hdrs,all_gal = {},{},{}
for dataset in datasets:
apex_filename = os.path.join(datapath,dataset+".apex")
spectra,headers,indices = load_apex_cube(apex_filename,
#downsample_factor=downsample_factor,
xtel=xtel,
sourcename='SGRA')
data,hdrs,gal = select_apex_data(spectra, headers, indices,
sourcename='SGRA', shapeselect=4096,
xtel=xtel,
rchanrange=None,
#rchanrange=rcr,
tsysrange=[100,250])
log.info("Selected %i spectra from %s" % (len(hdrs), dataset))
#This flagging is more appropriately done in the process_data step
# # noise_cut = 4 determined by looking at a plot of noise vs time; 0.7%
# # of data is above 4
# # Extreme noise appears independent of TSYS!
# # 4% of data >0.75, but it's pretty bad
# noise = np.std(data,axis=1)
# freq_step = np.array([h['FRES'] for h in hdrs])
# exptime = np.array([h['EXPOSURE'] for h in hdrs])
# tsys = np.array([h['TSYS'] for h in hdrs])
# theoretical_rms = 2.0**0.5*tsys/(np.abs(freq_step*1.0e6)*exptime)**0.5
# bad = noise > theoretical_rms
# data = data[True-bad]
# gal = gal[True-bad]
# hdrs = [h for h,b in zip(hdrs,bad) if not b]
# print "Flagged out %i bad values (%0.1f%%)." % (bad.sum(),bad.sum()/float(bad.size))
all_data[dataset] = data
all_hdrs[dataset] = hdrs
all_gal[dataset] = gal
all_gal_vect = coordinates.SkyCoord(np.hstack([all_gal[g].l.to(u.radian).value
for g in all_gal]) * u.radian,
np.hstack([all_gal[g].b.to(u.radian).value
for g in all_gal]) * u.radian,
frame='galactic')
all_gal_vect.l.wrap_angle = 180*u.deg
if not mergefile:
if freq:
make_blanks_freq(all_gal_vect, hdrs[0], cubefilename, clobber=True)
else:
make_blanks(all_gal_vect, hdrs[0], cubefilename, clobber=True)
if freq:
excludefitrange=None
else:
excludefitrange = [700,1300] # FIX THIS when velos are fixed
headerpars = dict(kernel_fwhm=kernel_fwhm, pca_clean=pca_clean,
timewise_pca=timewise_pca,
scanblsub=scanblsub)
if 'pcakwargs' in kwargs:
headerpars.update(kwargs['pcakwargs'])
add_pipeline_parameters_to_file(cubefilename, 'ao', **headerpars)
log.info("Data has been collected and flagged, now adding to cube.")
for dataset in all_data:
data = all_data[dataset]
hdrs = all_hdrs[dataset]
gal = all_gal[dataset]
data, gal, hdrs = process_data(data, gal, hdrs, dataset+"_"+xtel,
scanblsub=scanblsub, verbose=verbose,
pca_clean=pca_clean,
timewise_pca=timewise_pca,
**kwargs)
add_apex_data(data, hdrs, gal, cubefilename,
excludefitrange=excludefitrange,
kernel_fwhm=kernel_fwhm,
retfreq=freq,
varweight=True,
debug=debug)
cube = fits.open(cubefilename+'.fits', memmap=False)
cont = fits.getdata(cubefilename+'_continuum.fits')
data = cube[0].data
cube[0].data = data - cont
cube.writeto(cubefilename+'_sub.fits', clobber=True)
if not mergefile:
        # Downsample by averaging over a factor of 4 (freq) or 8 (velocity)
        downsample_factor = 4 if freq else 8
avg = np.mean([cube[0].data[ii::downsample_factor,:,:] for ii in
xrange(downsample_factor)], axis=0)
cube[0].data = avg
cube[0].header['CDELT3'] *= float(downsample_factor)
scalefactor = 1./downsample_factor
crpix3 = (cube[0].header['CRPIX3']-1)*scalefactor+0.5+scalefactor/2.
cube[0].header['CRPIX3'] = crpix3
# from FITS_tools/hcongrid h['CRPIX2'] = (h['CRPIX2']-1)*scalefactor + scalefactor/2. + 0.5
cube.writeto(cubefilename+'_downsampled.fits', clobber=True)
def build_cube_2013(mergefile=None,
lowhigh='low',
downsample_factor=8,
datapath=june2013datapath,
outpath=june2013path,
datasets=datasets_2013,
kernel_fwhm=10/3600.,
scanblsub=False,
timewise_pca=False, # 2013 data can't handle cleaning.
pca_clean=False, # 2013 data can't handle cleaning. =(
extra_suffix="",
noisefactor=1.0,
verbose=True, **kwargs):
"""
Notes:
noisefactor set to 1.0 on Nov 15 because there is no data below that value
and only 18 above that value in one data set. The noise is lower than the
theoretical value for some reason.
"""
if mergefile:
cubefilename=os.path.join(outpath,mergefile)
else:
cubefilename=os.path.join(outpath,
'APEX_H2CO_2013_%s' % lowhigh)
if extra_suffix:
cubefilename = cubefilename + extra_suffix
xtel = 'AP-H201-X202' if lowhigh=='low' else 'AP-H201-X201'
    # Flag out the central pixel (it is terrible) and the final edge channel.
flag_array = np.zeros(32768, dtype='bool')
flag_array[16384] = True
flag_array[-1] = True
if not mergefile:
# Need two loops. First one is just to determine map extent.
all_gal = {}
for dataset in datasets:
apex_filename=datapath+dataset+".apex"
spectra,headers,indices = load_apex_cube(apex_filename,
downsample_factor=downsample_factor,
xtel=xtel,
flag_array=flag_array,
sourcename='SGRA')
data, hdrs, gal = select_apex_data(spectra, headers, indices,
sourcename='SGRA',
shapeselect=32768/downsample_factor,
tsysrange=[100,325],
xtel=xtel,
rchanrange=None,
skip_data=True)
all_gal[dataset] = gal
all_gal_vect = coordinates.SkyCoord(np.hstack([all_gal[g].l.to(u.radian).value
for g in all_gal]) * u.radian,
np.hstack([all_gal[g].b.to(u.radian).value
for g in all_gal]) * u.radian,
frame='galactic')
all_gal_vect.l.wrap_angle = 180*u.deg
make_blanks_freq(all_gal_vect, hdrs[0], cubefilename, clobber=True)
headerpars = dict(kernel_fwhm=kernel_fwhm, pca_clean=pca_clean,
timewise_pca=timewise_pca,
scanblsub=scanblsub)
if 'pcakwargs' in kwargs:
headerpars.update(kwargs['pcakwargs'])
add_pipeline_parameters_to_file(cubefilename, '2013', **headerpars)
# need two loops to avoid loading too much stuff into memory
for dataset in datasets:
log.info("Adding data set {0} to cube file {1}".format(dataset, cubefilename))
apex_filename=datapath+dataset+".apex"
spectra,headers,indices = load_apex_cube(apex_filename,
downsample_factor=downsample_factor,
xtel=xtel,
flag_array=flag_array,
sourcename='SGRA')
if dataset == 'M-091.F-0019-2013-2013-06-13':
tsysrange=[100,260]
else:
tsysrange=[100,325]
data, hdrs, gal = select_apex_data(spectra, headers, indices,
sourcename='SGRA',
# NOT ignored, even though it's not used above...
# this is only OK because the bad shapes are from
# Saturn
shapeselect=32768/downsample_factor,
tsysrange=tsysrange,
xtel=xtel,
rchanrange=None,
skip_data=False)
data, gal, hdrs = process_data(data, gal, hdrs, dataset+"_"+xtel,
scanblsub=scanblsub, verbose=verbose,
timewise_pca=timewise_pca,
pca_clean=pca_clean,
**kwargs)
add_apex_data(data, hdrs, gal, cubefilename, retfreq=True,
kernel_fwhm=kernel_fwhm,
varweight=True,)
# FORCE cleanup
del data,hdrs,gal
cube = fits.open(cubefilename+'.fits', memmap=False)
cont = fits.getdata(cubefilename+'_continuum.fits')
data = cube[0].data
cube[0].data = data - cont
cube.writeto(cubefilename+'_sub.fits', clobber=True)
    # Downsample by averaging over an extra factor of 2
    # (on top of the downsample_factor applied at load time)
    avg = np.mean([cube[0].data[ii::2,:,:] for ii in xrange(2)], axis=0)
cube[0].data = avg
cube[0].header['CDELT3'] *= 2
scalefactor = 1./2.
crpix3 = (cube[0].header['CRPIX3']-1)*scalefactor+0.5+scalefactor/2.
cube[0].header['CRPIX3'] = crpix3
cube.writeto(cubefilename+'_downsampled.fits', clobber=True)
def build_cube_2014(sourcename,
mergefile=None,
lowhigh='low',
downsample_factor=8,
datapath=april2014path,
kernel_fwhm=10/3600.,
outpath=april2014path,
datasets=None,
scanblsub=False,
verbose=True,
pca_clean=False,
timewise_pca=False,
extra_suffix='',
tsysrange=[100,325],
posang=None,
calibration_factors=calibration_factors,
**kwargs
):
"""
    Wrapper. Because each field has its own name in 2014, this will need to
    be modified so that mergefile mode accepts wildcards (or similar) for
    sourcename selection.
"""
if mergefile:
cubefilename=os.path.join(outpath,mergefile)
elif isinstance(sourcename, str):
cubefilename=os.path.join(outpath,
'APEX_H2CO_2014_%s_%s' % (sourcename, lowhigh))
else:
raise ValueError("Use a mergefile")
if extra_suffix:
cubefilename = cubefilename + extra_suffix
log.info("Building cubes for "+cubefilename)
assert lowhigh in ('low','high')
xtel = 'AP-H201-X202' if lowhigh=='low' else 'AP-H201-X201'
t0 = time.time()
    # Flag out the central pixel (it is terrible) and the final edge channel.
flag_array = np.zeros(32768, dtype='bool')
flag_array[16384] = True
flag_array[-1] = True
if not mergefile:
# Need two loops. First one is just to determine map extent.
all_gal = {}
for dataset in datasets:
apex_filename=datapath+dataset+".apex"
log.info("".join(("Pre-Loading data for dataset ", dataset,
" to filename ", apex_filename, " t=",
str(time.time()-t0))))
found_data = load_apex_cube(apex_filename,
downsample_factor=downsample_factor,
xtel=xtel, sourcename=sourcename,
flag_array=flag_array,
posang=posang)
if found_data is None:
log.info("Skipping dataset {0} because it doesn't contain "
"{1} or {2}".format(dataset, sourcename, xtel))
continue
else:
spectra,headers,indices = found_data
data, hdrs, gal = select_apex_data(spectra, headers, indices,
sourcename=sourcename,
shapeselect=32768/downsample_factor,
                                               tsysrange=tsysrange,
xtel=xtel,
rchanrange=None,
skip_data=True)
all_gal[dataset] = gal
all_gal_vect = coordinates.SkyCoord(np.hstack([all_gal[g].l.to(u.radian).value
for g in all_gal]) * u.radian,
np.hstack([all_gal[g].b.to(u.radian).value
for g in all_gal]) * u.radian,
frame='galactic')
all_gal_vect.l.wrap_angle = 180*u.deg
log.info("Making blanks for "+cubefilename)
make_blanks_freq(all_gal_vect, hdrs[0], cubefilename, clobber=True)
headerpars = dict(kernel_fwhm=kernel_fwhm, pca_clean=pca_clean,
timewise_pca=timewise_pca,
scanblsub=scanblsub)
if 'pcakwargs' in kwargs:
headerpars.update(kwargs['pcakwargs'])
add_pipeline_parameters_to_file(cubefilename, '2014', **headerpars)
# need two loops to avoid loading too much stuff into memory
# (actually, we don't any more with memmaping)
for dataset in datasets:
apex_filename=datapath+dataset+".apex"
log.info("".join(("Loading data for dataset ",dataset," in filename ",
apex_filename," t=",str(time.time()-t0))))
found_data = load_apex_cube(apex_filename,
downsample_factor=downsample_factor,
xtel=xtel, sourcename=sourcename,
flag_array=flag_array,
posang=posang)
if found_data is None:
log.info("Skipping dataset {0} because it doesn't contain "
"{1} or {2}".format(dataset, sourcename, xtel))
continue
else:
spectra,headers,indices = found_data
log.info("".join(("Selecting data for dataset ", dataset,
" in filename ", apex_filename, " t=",
str(time.time()-t0))))
data, hdrs, gal = select_apex_data(spectra, headers, indices,
sourcename=sourcename,
# NOT ignored, even though it's not used above...
# this is only OK because the bad shapes are from
# Saturn
#shapeselect=4096,
shapeselect=32768/downsample_factor,
tsysrange=tsysrange,
xtel=xtel,
rchanrange=None,
skip_data=False)
log.info("".join(("Processing data for dataset ", dataset,
" in filename ", apex_filename, " t=",
str(time.time()-t0))))
data, gal, hdrs = process_data(data, gal, hdrs, os.path.join(outpath,
dataset)+"_"+xtel,
scanblsub=scanblsub, verbose=verbose,
timewise_pca=timewise_pca,
pca_clean=pca_clean,
**kwargs)
log.info("".join(("Adding data for dataset ", dataset,
" to filename ", cubefilename, " t=",
str(time.time()-t0))))
# TODO: parse date and determine whether a calibration factor needs to
# be applied
obsdates = [Time(h['DOBS'], format='jyear') for h in hdrs]
start,end = min(obsdates),max(obsdates)
calkey = cal_date_overlap([start,end])
calfactor = calibration_factors[calkey]
add_apex_data(data*calfactor, hdrs, gal, cubefilename, retfreq=True,
kernel_fwhm=kernel_fwhm, varweight=True,
# downsample factor for freqarr
)
# FORCE cleanup
log.info("".join(("Clearing data for dataset ", dataset,
" to filename ", cubefilename, " t=",
str(time.time()-t0))))
del data,hdrs,gal
log.info("".join(("Continuum subtracting ",cubefilename)))
cube = fits.open(cubefilename+'.fits', memmap=False)
cont = fits.getdata(cubefilename+'_continuum.fits')
data = cube[0].data
cube[0].data = data - cont
cube.writeto(cubefilename+'_sub.fits', clobber=True)
log.info("Downsampling "+cubefilename)
    # Downsample by averaging over an extra factor of 2
avg = FITS_tools.downsample.downsample_axis(cube[0].data, 2, 0)
cube[0].data = avg
cube[0].header['CDELT3'] *= 2
scalefactor = 1./2.
crpix3 = (cube[0].header['CRPIX3']-1)*scalefactor+0.5+scalefactor/2.
cube[0].header['CRPIX3'] = crpix3
cube.writeto(cubefilename+'_downsampled.fits', clobber=True)
log.info("Done with "+cubefilename)
def make_high_mergecube(pca_clean={'2014':False,
'2013':False,
'ao':False},
scanblsub={'2014':False, '2013':False, 'ao':False},
timewise_pca={'2014': True, '2013':False, 'ao':True},
mergefile2=None):
if mergefile2 is None:
raise ValueError("Must specify a merge filename")
#if pca_clean:
# if timewise_pca:
# mergefile2 = 'APEX_H2CO_merge_high_timepca'
# else:
# mergefile2 = 'APEX_H2CO_merge_high'
#else:
# mergefile2 = 'APEX_H2CO_merge_high_nopca'
for suff in ("_2014_bscans", "_2014_lscans", "_2013","_ao"):
make_blanks_merge(os.path.join(mergepath,mergefile2+suff),
lowhigh='high', lowest_freq=218e9, width=1.0*u.GHz)
mapnames = ['MAP_{0:03d}'.format(ii) for ii in range(1,130)]
log.info("Building cubes: "+str(mapnames))
# Frequency: (216.9, 219.4)
build_cube_2014(mapnames,
mergefile=mergefile2+"_2014_bscans",
posang=[140,160],
outpath=mergepath,
datapath=april2014path,
lowhigh='low',
pca_clean=pca_clean['2014'],
timewise_pca=timewise_pca['2014'],
scanblsub=scanblsub['2014'],
datasets=datasets_2014)
build_cube_2014(mapnames,
mergefile=mergefile2+"_2014_lscans",
posang=[50,70],
outpath=mergepath,
datapath=april2014path,
lowhigh='low',
pca_clean=pca_clean['2014'],
timewise_pca=timewise_pca['2014'],
scanblsub=scanblsub['2014'],
datasets=datasets_2014)
log.info("Building Ao cubes")
# ('ao', 'high'): (218.0, 219.0),
build_cube_ao(window='high', mergefile=True, freq=True, outpath=mergepath,
pca_clean=pca_clean['ao'], timewise_pca=timewise_pca['ao'],
mergefilename=os.path.join(mergepath, mergefile2+"_ao"),
scanblsub=scanblsub['ao'],
datapath=aorawpath)
log.info("Building 2013 cubes")
# (2013, 'high'): (217.5, 220.0)
build_cube_2013(mergefile=mergefile2+"_2013",
outpath=mergepath,
datapath=june2013datapath,
lowhigh='high',
timewise_pca=timewise_pca['2013'],
pca_clean=pca_clean['2013'],
scanblsub=scanblsub['2013'])
do_plait_h2comerge(mergepath=mergepath, mergefile2=mergefile2)
def do_plait_h2comerge(mergepath=mergepath, mergefile2=None):
"""
doplait, not yoplait
(create the merged, plaited cube)
default is
do_plait(mergefile2='APEX_H2CO_merge_high')
"""
from sdpy import plait
# plaiting doesn't work well for unequal weights or large swathes
# of missing data
all_targets = ("_2014_bscans", "_2014_lscans", "_2013","_ao")
plait_targets = all_targets[:2]
def fnify(suff, end='.fits'):
return os.path.join(mergepath, mergefile2+suff+end)
headers = [fits.getheader(fnify(suff))
for suff in plait_targets]
header = headers[0]
for h in headers:
for k in h:
header[k] = h[k]
cubes = [fits.getdata(fnify(suff))
for suff in plait_targets]
angles = [0, 90]#, 58.6, 58.6]
cube_comb = plait.plait_cube(cubes, angles=angles, scale=3)
hdu = fits.PrimaryHDU(data=cube_comb, header=header)
hdu.writeto(fnify("_plait"), clobber=True)
    comb_weights = np.sum([fits.getdata(fnify(suff, '_nhits.fits'))
                           for suff in plait_targets], axis=0)
    whdu = fits.PrimaryHDU(data=comb_weights,
                           header=fits.getheader(fnify(plait_targets[0],
                                                       '_nhits.fits')))
    whdu.writeto(fnify('_nhits'), clobber=True)
# Add back the 2013 and Ao data without plaiting (since that doesn't work)
data = [cube_comb] + [np.nan_to_num(fits.getdata(fnify(suff)))
for suff in all_targets[2:]]
weights = ([comb_weights] +
[fits.getdata(fnify(suff, '_nhits.fits'))
for suff in all_targets[2:]])
sweights = np.sum(weights, axis=0)
total_stack = (np.sum([(d*w) for d,w in zip(data,weights)], axis=0) /
sweights)
total_stack[:,sweights<0.5] = np.nan
for h in [fits.getheader(fnify(suff)) for suff in all_targets[2:]]:
for k in h:
header[k] = h[k]
hdu = fits.PrimaryHDU(data=total_stack, header=header)
hdu.writeto(fnify('_plait_all'), clobber=True)
whdu = fits.PrimaryHDU(data=sweights, header=header)
whdu.writeto(fnify('_plait_all_nhits'), clobber=True)
# Smooth and downsample finally...
cube = spectral_cube.SpectralCube.read(fnify('_plait_all'))
outheader = cube.header.copy()
outheader['CRPIX3'] = 1
outheader['CRVAL3'] = 218e9
outheader['CUNIT3'] = 'Hz'
outheader['CDELT3'] = 1453333. # about 2km/s
    outheader['NAXIS3'] = int(1e9 / outheader['CDELT3']) # 688 pixels
# kw = 2 pix
cubesm = cube_regrid.spatial_smooth_cube(cube.filled_data[:], 2,
use_fft=False,
numcores=4)
cubesm = cube_regrid.spectral_smooth_cube(cubesm, 2,
use_fft=False,
numcores=4)
cubesm[cubesm==0] = np.nan
hdu = fits.PrimaryHDU(data=cubesm, header=cube.header)
newhdu = cube_regrid.regrid_cube_hdu(hdu, outheader, order=2,
prefilter=False)
newhdu.writeto(fnify('_plait_all_smooth'), output_verify='fix', clobber=True)
baseline_cube(fnify('_plait_all'), polyspline='spline', mask_level_sigma=5,
order=3)
# Can't get this to work - apparently there are some entirely flagged-out
# data sets
baseline_cube(fnify('_plait_all_smooth'), polyspline='spline',
mask_level_sigma=5, order=3, splinesampling=50)
def make_low_mergecube(pca_clean={'2014':False,
'2013':False,
'ao':False},
scanblsub={'2014':False, '2013':False, 'ao':False},
timewise_pca={'2014': True, '2013':False, 'ao':True},
mergefile1 = 'APEX_H2CO_merge_low',):
make_blanks_merge(os.path.join(mergepath,mergefile1), lowhigh='low')
for suff in ("_2014_bscans", "_2014_lscans", "_2013","_ao"):
make_blanks_merge(os.path.join(mergepath,mergefile1+suff),
lowhigh='high', lowest_freq=216.9e9, width=2.0*u.GHz)
mapnames = ['MAP_{0:03d}'.format(ii) for ii in range(1,130)]
log.info("Building cubes: "+str(mapnames)+" low bscans")
build_cube_2014(mapnames,
mergefile=mergefile1+"_2014_bscans",
posang=[140,160],
outpath=mergepath,
datapath=april2014path,
lowhigh='low',
pca_clean=pca_clean['2014'],
timewise_pca=timewise_pca['2014'],
scanblsub=scanblsub['2014'],
datasets=datasets_2014)
log.info("Building cubes: "+str(mapnames)+" low lscans")
build_cube_2014(mapnames,
mergefile=mergefile1+"_2014_lscans",
posang=[50,70],
outpath=mergepath,
datapath=april2014path,
lowhigh='low',
pca_clean=pca_clean['2014'],
timewise_pca=timewise_pca['2014'],
scanblsub=scanblsub['2014'],
datasets=datasets_2014)
log.info("Building Ao cubes")
# ('ao', 'high'): (218.0, 219.0),
build_cube_ao(window='low', mergefile=True, freq=True, outpath=mergepath,
pca_clean=pca_clean['ao'], timewise_pca=timewise_pca['ao'],
mergefilename=os.path.join(mergepath, mergefile1+"_ao"),
scanblsub=scanblsub['ao'],
datapath=aorawpath)
log.info("Building 2013 cubes")
# (2013, 'high'): (217.5, 220.0)
build_cube_2013(mergefile=mergefile1+"_2013",
outpath=mergepath,
datapath=june2013datapath,
lowhigh='low',
timewise_pca=timewise_pca['2013'],
pca_clean=pca_clean['2013'],
scanblsub=scanblsub['2013'])
print "TODO: plait the low-frequency merge."
print "TODO: possible merge the ao low/high into the low-merge?"
def integrate_slices_high(prefix='merged_datasets/APEX_H2CO_merge_high_sub'):
ffile = fits.open(prefix+'.fits')
cd3 = (ffile[0].header['CD3_3'] if 'CD3_3' in ffile[0].header else
ffile[0].header['CDELT3']) / 1e3 # convert to km/s (I hope)
integ1,hdr = cubes.integ(ffile, [235,344], average=np.nansum) # first H2CO line: blue
hdu1 = fits.PrimaryHDU(data=integ1/cd3, header=hdr)
hdu1.writeto(prefix+"_H2CO_303-202_blue.fits", clobber=True)
integ2,hdr = cubes.integ(ffile, [161,235], average=np.nansum) # first H2CO line: red
hdu2 = fits.PrimaryHDU(data=integ2/cd3, header=hdr)
hdu2.writeto(prefix+"_H2CO_303-202_red.fits", clobber=True)
integ4,hdr = cubes.integ(ffile, [161,344], average=np.nansum) # first H2CO line: red
hdu4 = fits.PrimaryHDU(data=integ4/cd3, header=hdr)
hdu4.writeto(prefix+"_H2CO_303-202.fits", clobber=True)
integ3,hdr = cubes.integ(ffile, [513,615], average=np.nansum) # second H2CO line: blue
hdu3 = fits.PrimaryHDU(data=integ3/cd3, header=hdr)
hdu3.writeto(prefix+"_H2CO_322-221_blue.fits", clobber=True)
def integrate_slices_low(prefix='merged_datasets/APEX_H2CO_merge_low_sub'):
ffile = fits.open(prefix+'.fits')
integ1,hdr = cubes.integ(ffile, [335,446], average=np.nansum)
hdu1 = fits.PrimaryHDU(data=integ1, header=hdr)
hdu1.writeto(prefix+"_SiO5-4.fits", clobber=True)
def integrate_mask(prefix, mask=h2copath+'APEX_H2CO_303_202_mask.fits',
maskpre=''):
"""
Integrate a cube with name specified by 'prefix' using a specific mask
"""
if isinstance(mask,str):
mask = fits.getdata(mask).astype('bool')
ffile = fits.open(prefix+'.fits')
cd = ffile[0].header['CDELT3']
ffile[0].data *= mask * cd
ffile[0].data[~mask.astype('bool')] = np.nan
integ1,hdr = cubes.integ(ffile, [0,ffile[0].shape[0]], average=np.nansum)
hdr['BUNIT'] = ('K km/s',"Integrated over masked region")
hdu1 = fits.PrimaryHDU(data=integ1, header=hdr)
hdu1.writeto("{0}_{1}mask_integ.fits".format(prefix, maskpre),
clobber=True)
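# Illustrative sketch (an addition): the masked integration above amounts to
# sum(T * dv) over the channels selected by the mask, yielding K km/s per
# spatial pixel when CDELT3 is in km/s. A toy equivalent:
def _example_masked_integral(data, mask, dv):
    """Toy equivalent of integrate_mask's core operation."""
    masked = np.where(mask, data, np.nan)
    return np.nansum(masked * dv, axis=0)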
def integrate_h2co_by_freq(filename):
import spectral_cube
cube = spectral_cube.SpectralCube.read(filename)
#if 'high' in filename:
# cocube = cube
#else:
# cocube = spectral_cube.SpectralCube.read(filename.replace('low','high'))
#mcube = cocube.with_spectral_unit(u.km/u.s,
# rest_value=bright_lines['13CO']*u.GHz,
# velocity_convention='radio')
#coscube = mcube.spectral_slab(-100*u.km/u.s, 150*u.km/u.s)
#mask = coscube > 1
for line in bright_lines:
scube = cube.with_spectral_unit(u.km/u.s,
rest_value=bright_lines[line]*u.GHz,
velocity_convention='radio')
subcube1 = scube.spectral_slab(-100*u.km/u.s, 150*u.km/u.s)
ncube = scube.spectral_slab(-150*u.km/u.s, -100*u.km/u.s)
noise = ncube.apply_numpy_function(np.std, axis=0)
#mask._wcs = subcube1.wcs
subcube = subcube1.with_mask(subcube1>noise)#.with_mask(mask)
if subcube.shape[0] == 1:
# implies out of range
continue
mom0 = subcube.moment0()
mom1 = subcube.moment1()
mom2 = subcube.moment2()
fn = os.path.split(filename)[1]
outfn = 'projections/'+fn.replace(".fits","_{line}_{mom}.fits")
mom0.hdu.writeto(outfn.format(line=line, mom='mom0'),clobber=True)
mom1.hdu.writeto(outfn.format(line=line, mom='mom1'),clobber=True)
mom2.hdu.writeto(outfn.format(line=line, mom='mom2'),clobber=True)
def compute_noise_high(prefix=mergepath+'APEX_H2CO_merge_high_sub',
pixrange=[700,900]):
ffile = fits.open(prefix+'.fits')
    try:
        # use the fast bottleneck-based MAD if it is available
        mad.bottleneck_MAD([0,1,2,3,4])
        integ1,hdr = cubes.integ(ffile, pixrange, average=mad.bottleneck_MAD)
    except Exception:
        # fall back to the pure-python MAD implementation
        integ1,hdr = cubes.integ(ffile, pixrange, average=mad.MAD)
integ1.fill_value = np.nan
hdu1 = fits.PrimaryHDU(data=integ1.filled(), header=hdr)
hdu1.writeto(prefix+"_noise.fits", clobber=True)
def compute_noise_low(prefix=mergepath+'APEX_H2CO_merge_low_sub',pixrange=[512,675]):
ffile = fits.open(prefix+'.fits')
integ1,hdr = cubes.integ(ffile, pixrange, average=np.nanstd)
hdu1 = fits.PrimaryHDU(data=integ1, header=hdr)
hdu1.writeto(prefix+"_noise.fits", clobber=True)
def compute_noise_extras(prefix=june2013path+'APEX_H2CO_2013_%s_sub',
lowhigh='high',
pixrange=[0,4096]):
ffile = fits.open((prefix % lowhigh)+'.fits')
integ1,hdr = cubes.integ(ffile, pixrange, average=np.nanstd)
hdu1 = fits.PrimaryHDU(data=integ1, header=hdr)
    hdu1.writeto((prefix % lowhigh)+"_noise.fits", clobber=True)
def signal_to_noise_mask_cube(prefix=None, cube=None, noise=None,
kernelsize=[2,2,2], grow=1, sigmacut=3,
mask_hc3n=False):
"""
Generate a signal-to-noise mask and use it to select the detected pixels in
a cube.
The algorithm finds all pixels in a smoothed version of the cube with
values >``sigmacut``*noise. It then grows that mask by ``grow`` pixels in
each direction.
Parameters
----------
prefix : str
The prefix for the FITS input and output files
cube : np.ndarray
Alternative to prefix: can pass in a cube directly
noise : np.ndarray
an array that is broadcastable to the cube shape
kernelsize : (int,int,int)
A length-3 list or tuple specifying the size of the kernel to smooth
the cube with.
grow : int
The number of pixels to grow the mask in each direction
sigmacut : float
The significance level of the pixels to include
"""
if prefix is not None:
ffile = fits.open(prefix+'.fits')
cube = ffile[0].data
if noise is None:
noise = fits.getdata(prefix+'_noise.fits')
log.info("Initiating cube smooth of {0}.".format(prefix))
    elif cube is None or noise is None:
raise ValueError("Must specify cube and noise if you do not "
"specify a prefix")
t0 = time.time()
smcube = cube_regrid.gsmooth_cube(cube, kernelsize, use_fft=False,
kernelsize_mult=3)
log.info("Completed cube smooth in %i seconds" % (time.time()-t0))
mask = smcube > noise*sigmacut
mask_grow = scipy.ndimage.morphology.binary_dilation(mask, iterations=grow)
cube[~mask_grow] = np.nan
if prefix is None:
return cube, mask_grow
ffile[0].data = cube
ffile[0].writeto(prefix+"_snmasked.fits", clobber=True)
ffile[0].data = mask_grow.astype('int')
if mask_hc3n:
maskhdu = mask_out_hc3n(ffile[0])
maskhdu.writeto(prefix+"_mask.fits", clobber=True)
else:
ffile[0].writeto(prefix+"_mask.fits", clobber=True)
def do_sncube_masking_hi(prefix=h2copath+'APEX_H2CO_303_202'):
# 0-25 not checked! arbitrary choice.
compute_noise_high(prefix, pixrange=[0,25])
signal_to_noise_mask_cube(prefix)
integrate_slices_high(prefix+'_snmasked')
def extract_subcube(cubefilename, outfilename, linefreq=218.22219*u.GHz,
debug=False, smooth=False, vsmooth=False, naxis3=300,
vmin=-155*u.km/u.s, vmax=155*u.km/u.s):
# Picked a tighter range to avoid other lines contaminating H2CO
#vmin=-225*u.km/u.s, vmax=275*u.km/u.s):
t0 = time.time()
log.info(("Extracting subcube at {0} from {1}"
" with smooth={2} and vsmooth={3}").format(linefreq,
cubefilename, smooth,
vsmooth))
cube = spectral_cube.SpectralCube.read(cubefilename)
vcube = cube.with_spectral_unit(u.km/u.s, rest_value=linefreq,
velocity_convention='radio')
svcube = vcube.spectral_slab(vmin, vmax)
crval3 = vmin.to(u.km/u.s).value
outheader = svcube.header
outheader['CRPIX3'] = 1
outheader['CRVAL3'] = crval3
outheader['CUNIT3'] = 'km/s'
outheader['CDELT3'] = 1.0
outheader['NAXIS3'] = naxis3
outheader['NAXIS2'] = svcube.shape[1]
outheader['NAXIS1'] = svcube.shape[2]
if smooth:
#cubesm = gsmooth_cube(ffile[0].data, [3,2,2], use_fft=True,
# psf_pad=False, fft_pad=False)
# smoothed with 2 pixels -> sigma=10", fwhm=23"
# this is an "optimal smooth", boosting s/n and smoothing to 36"
# resolution.
kw = 2 if not vsmooth else 4
cubesm = cube_regrid.spatial_smooth_cube(svcube.filled_data[:], kw,
use_fft=False,
numcores=4)
cubesm = cube_regrid.spectral_smooth_cube(cubesm, 3/2.35,
use_fft=False,
numcores=4)
svcube._data = cubesm
        outheader['CDELT3'] = outheader['CDELT3'] * kw
        outheader['NAXIS3'] = int(outheader['NAXIS3'] / kw)
crpix3 = (outheader['CRPIX3']-1)*(1./kw)+0.5+(1./kw)/2.
outheader['CRPIX3'] = crpix3
# Now that we've written this out, we use interpolation to force the cube
# onto a grid that starts at *exactly* vmin
newhdu = cube_regrid.regrid_cube_hdu(svcube.hdu, outheader, order=1, prefilter=False)
newhdu.writeto(outfilename, output_verify='fix', clobber=True)
log.info("Completed cube extraction to {1} in {0} seconds.".format(time.time()-t0,
outfilename))
return newhdu
def make_smooth_noise(noisefilename, outfilename, kernelwidth=2, clobber=True):
data = fits.getdata(noisefilename)
kernel = Gaussian2DKernel(stddev=kernelwidth)
kernel.normalize('integral')
smdata = convolve(data, kernel)
kernel.normalize('peak')
npix = kernel.array.sum()
# Average down the noise by sqrt(npix)
hdu = fits.PrimaryHDU(data=(smdata/npix**0.5).astype(data.dtype),
header=fits.getheader(noisefilename))
hdu.writeto(outfilename, clobber=clobber)
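# Illustrative sketch (an addition): the npix**0.5 division above treats the
# peak-normalized kernel sum as an effective number of independently averaged
# pixels (~2*pi*kernelwidth**2 for a Gaussian). This is an approximation,
# since smoothing does not average exactly npix independent samples.
def _example_effective_npix(kernelwidth=2):
    """Effective number of averaged pixels for a Gaussian smoothing kernel."""
    kernel = Gaussian2DKernel(kernelwidth)
    kernel.normalize('peak')
    return kernel.array.sum()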
def make_line_mask(freqarr, lines=bright_lines):
mask = np.ones(freqarr.size, dtype='bool')
for ln,lf in lines.iteritems():
bw = bandwidths[ln]
wh = (lf*1e3-bw < freqarr) & (lf*1e3+bw > freqarr)
mask[wh] = False
return mask
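# Illustrative usage sketch (an addition): make_line_mask assumes ``freqarr``
# is in MHz while the line rest frequencies are in GHz (hence the 1e3 factor
# above). A toy version with a hypothetical line list and bandwidth:
def _example_line_mask():
    """Toy demo of make_line_mask with a hypothetical line and bandwidth."""
    freqarr = np.linspace(216e3, 220e3, 4096)    # MHz
    toy_lines = {'H2CO_303_202': 218.22219}      # GHz
    toy_bandwidths = {'H2CO_303_202': 50.}       # MHz (hypothetical value)
    mask = np.ones(freqarr.size, dtype='bool')
    for ln, lf in toy_lines.items():
        bw = toy_bandwidths[ln]
        mask[(lf*1e3 - bw < freqarr) & (lf*1e3 + bw > freqarr)] = False
    return mask    # True = line-free channels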
def do_extract_subcubes(outdir=molpath, merge_prefix='APEX_H2CO_merge',
cubefilename=None,
frange=None, lines=all_lines,
suffix="_sub",
vsmooth=False,
integrate=False):
"""
Parameters
----------
integrate : bool
Integrate the extracted cube using a mask. WARNING: doesn't check
if the mask exists!
Examples
--------
>>> do_extract_subcubes(outdir='/Volumes/passport/apex/merged_datasets/molecule_cubes',
... suffix='', merge_prefix='APEX_H2CO_2014_merge')
>>> do_extract_subcubes(lines=lines, merge_prefix='APEX_H2CO_merge',
... suffix='_plait_all')
"""
if cubefilename is None:
cubefilenames = [os.path.join(mergepath,
merge_prefix+'_low{0}.fits'.format(suffix)),
os.path.join(mergepath,
merge_prefix+'_high{0}.fits'.format(suffix))]
else:
cubefilenames = [cubefilename]
# For each cube, (maybe) load it, check it, then move on
# (the previous method would only match things in the first cube selected...)
for cubefilename in cubefilenames:
if not os.path.exists(cubefilename):
log.info("File {0} does not exist. Skipping.".format(cubefilename))
continue
for line,freq in lines.iteritems():
if frange is not None:
if freq<frange[0] or freq>frange[1]:
log.info("Skipping line {0}".format(line))
continue
log.info("Extracting {0} from {1}".format(line,cubefilename))
header = fits.getheader(cubefilename)
ww = wcs.WCS(header)
wspec = ww.sub([wcs.WCSSUB_SPECTRAL])
nax = header['NAXIS%i' % (ww.wcs.spec+1)]
freqarr = wspec.wcs_pix2world(np.arange(nax),0)[0]
# Note that this leaves open the possibility of extracting incomplete
# cubes from the edges of the high/low cubes...
if freq*1e9 > freqarr.min() and freq*1e9 < freqarr.max():
extract_subcube(cubefilename,
os.path.join(outdir, 'APEX_{0}.fits').format(line),
linefreq=freq*u.GHz)
extract_subcube(cubefilename,
os.path.join(outdir, 'APEX_{0}_smooth.fits').format(line),
linefreq=freq*u.GHz, smooth=True)
if vsmooth:
extract_subcube(cubefilename,
os.path.join(outdir, 'APEX_{0}_vsmooth.fits').format(line),
linefreq=freq*u.GHz, smooth=True, vsmooth=True)
if integrate:
integrate_mask(os.path.join(outdir, 'APEX_{0}'.format(line)))
integrate_mask(os.path.join(outdir, 'APEX_{0}_smooth'.format(line)),
mask=h2copath+'APEX_H2CO_303_202_smooth_mask.fits')
integrate_mask(os.path.join(outdir, 'APEX_{0}'.format(line)),
mask=h2copath+'APEX_13CO_matched_H2CO_mask.fits',
maskpre='13co',
)
integrate_mask(os.path.join(outdir, 'APEX_{0}_smooth'.format(line)),
mask=h2copath+'APEX_13CO_matched_H2CO_smooth_mask.fits',
maskpre='13co',
)
else:
log.info("Skipping line {0}".format(line))
def do_everything(pca_clean={'2014':False, '2013':False, 'ao':False},
scanblsub={'2014':False, '2013':False, 'ao':False},
timewise_pca={'2014':True, '2013':False, 'ao':True},
mergefile2='APEX_H2CO_merge_high',
mergepath=mergepath, molpath=molpath, h2copath=h2copath):
make_high_mergecube(mergefile2=mergefile2, pca_clean=pca_clean,
scanblsub=scanblsub, timewise_pca=timewise_pca)
do_postprocessing(mergepath=mergepath, molpath=molpath, h2copath=h2copath)
extract_co_subcubes(mergepath=mergepath)
def do_postprocessing(molpath=molpath, mergepath=mergepath, h2copath=h2copath):
#make_low_mergecube() # there's only one really useful overlap region
#os.chdir(mergepath)
# vsmoothds is made here:
#os.system('./APEX_H2CO_merge_high_starlink_custom.sh')
#os.chdir('../')
# OLD: merge_prefix = 'APEX_H2CO_merge_high' # Oct 4, 2014
merge_prefix='APEX_H2CO_merge_high_plait_all'
do_extract_subcubes(outdir=molpath, frange=[218,219],
cubefilename=os.path.join(mergepath,
merge_prefix+".fits"),
lines=lines218)
# Because I really want to see SiO...
do_extract_subcubes(outdir=molpath,
lines={'SiO_54':217.10498},
merge_prefix='APEX_H2CO_2014_merge', suffix="")
compute_noise_high(prefix=mergepath+merge_prefix, pixrange=[700,900])
compute_noise_high(prefix=mergepath+merge_prefix+"_smooth", pixrange=[320,400])
#compute_noise_high(mergepath+merge_prefix+'_smooth',[203,272])
#compute_noise_high(mergepath+'APEX_H2CO_merge_high_vsmoothds',[203,272])
#compute_noise_high(mergepath+'APEX_H2CO_303_202_vsmooth',[75,100])
#compute_noise_low()
signal_to_noise_mask_cube(os.path.join(molpath,'APEX_H2CO_303_202'),
noise=fits.getdata(os.path.join(mergepath,
'APEX_H2CO_merge_high_plait_all_noise.fits')),
sigmacut=2,
grow=2,
mask_hc3n=False) # unfortunately, flagged out brick & Sgr A
signal_to_noise_mask_cube(molpath+'APEX_H2CO_303_202_smooth',
noise=fits.getdata(mergepath+'APEX_H2CO_merge_high_plait_all_smooth_noise.fits'),
sigmacut=3,
mask_hc3n=False)
signal_to_noise_mask_cube(os.path.join(molpath,'APEX_H2CO_321_220'),
noise=fits.getdata(os.path.join(mergepath,
'APEX_H2CO_merge_high_plait_all_noise.fits')),
sigmacut=2,
grow=2)
signal_to_noise_mask_cube(molpath+'APEX_H2CO_321_220_smooth',
noise=fits.getdata(mergepath+'APEX_H2CO_merge_high_plait_all_smooth_noise.fits'),
sigmacut=2)
integrate_mask(molpath+'APEX_H2CO_303_202',
mask=molpath+'APEX_H2CO_303_202_mask.fits')
integrate_mask(molpath+'APEX_H2CO_303_202_smooth',
mask=molpath+'APEX_H2CO_303_202_smooth_mask.fits')
integrate_mask(molpath+'APEX_H2CO_303_202',
mask=molpath+'APEX_H2CO_321_220_mask.fits',
maskpre='321')
integrate_mask(molpath+'APEX_H2CO_303_202_smooth',
mask=molpath+'APEX_H2CO_321_220_smooth_mask.fits',
maskpre='321')
for fn in glob.glob(os.path.join(mergepath,'APEX_H2CO_30*fits')):
try:
os.symlink(fn,
os.path.join(h2copath,os.path.split(fn)[-1]))
except OSError:
log.debug("Skipped file {0} because it exists".format(fn))
# Create a few integrated H2CO 303 maps
integrate_slices_high(molpath+'APEX_H2CO_303_202_snmasked')
# Use spectral_cube to do a bunch of integrations
# PATH SENSITIVE
# integrate_h2co_by_freq(mergepath+mergefile2+".fits")
# On second thought, let's not go to camelot
# (this function proved ineffective)
for line in lines218:
fn = mergepath+'APEX_{0}.fits'.format(line)
if os.path.exists(fn):
integrate_mask(molpath+'APEX_{0}'.format(line),
mask=molpath+'APEX_H2CO_303_202_mask.fits')
integrate_mask(molpath+'APEX_{0}'.format(line),
mask=molpath+'APEX_H2CO_321_220_mask.fits',
maskpre='321')
integrate_mask(molpath+'APEX_{0}_smooth'.format(line),
mask=molpath+'APEX_H2CO_303_202_smooth_mask.fits')
integrate_mask(molpath+'APEX_{0}_smooth'.format(line),
mask=molpath+'APEX_H2CO_321_220_smooth_mask.fits',
maskpre='321')
log.debug("Integrated masked file {0}".format(fn))
else:
log.debug("File {0} does not exist".format(fn))
for line in lines218:
if os.path.exists(molpath+'APEX_{0}.fits'.format(line)):
baseline_cube(molpath+'APEX_{0}.fits'.format(line),
maskfn=molpath+'APEX_H2CO_303_202_mask.fits',
order=7)
baseline_cube(molpath+'APEX_{0}_smooth.fits'.format(line),
maskfn=molpath+'APEX_H2CO_303_202_smooth_mask.fits',
order=7)
#compute_noise_high(molpath+'APEX_H2CO_303_202_bl',[350,400])
#compute_noise_high(molpath+'APEX_H2CO_303_202_smooth_bl',[175,200])
#compute_noise_high(molpath+'APEX_H2CO_303_202_vsmooth_bl',[80,100])
signal_to_noise_mask_cube(molpath+'APEX_H2CO_303_202_bl',
noise=fits.getdata(mergepath+'APEX_H2CO_merge_high_plait_all_noise.fits'),
grow=2,
sigmacut=2,
mask_hc3n=False)
signal_to_noise_mask_cube(molpath+'APEX_H2CO_303_202_smooth_bl',
noise=fits.getdata(mergepath+'APEX_H2CO_merge_high_plait_all_smooth_noise.fits'),
sigmacut=3,
mask_hc3n=False)
signal_to_noise_mask_cube(molpath+'APEX_H2CO_321_220_bl',
noise=fits.getdata(mergepath+'APEX_H2CO_merge_high_plait_all_noise.fits'),
sigmacut=2,
grow=2)
signal_to_noise_mask_cube(molpath+'APEX_H2CO_321_220_smooth_bl',
noise=fits.getdata(mergepath+'APEX_H2CO_merge_high_plait_all_noise.fits'),
sigmacut=2,
grow=2)
for line in lines218:
if os.path.exists(molpath+'APEX_{0}_bl.fits'.format(line)):
integrate_mask(molpath+'APEX_{0}_bl'.format(line),
mask=molpath+'APEX_H2CO_303_202_bl_mask.fits')
integrate_mask(molpath+'APEX_{0}_smooth_bl'.format(line),
mask=molpath+'APEX_H2CO_303_202_smooth_bl_mask.fits')
integrate_mask(molpath+'APEX_{0}_bl'.format(line),
mask=molpath+'APEX_H2CO_321_220_bl_mask.fits',
maskpre='321')
integrate_mask(molpath+'APEX_{0}_smooth_bl'.format(line),
mask=molpath+'APEX_H2CO_321_220_smooth_bl_mask.fits',
maskpre='321')
do_mask_ch3oh(dpath=molpath)
for fn in glob.glob(os.path.join(molpath,'APEX_H2CO_3*fits')):
try:
os.symlink(fn,
os.path.join(h2copath,os.path.split(fn)[-1]))
log.info("Linked file {0} to {1}".format(fn, h2copath))
except OSError:
log.debug("Skipped file {0} because it exists".format(fn))
# moved to analysis doratio(h2copath=h2copath)
# moved to analysis do_temperature(ratio=False, h2copath=h2copath)
def contsub_cube(cubefilename,):
cube = fits.open(cubefilename+'.fits', memmap=False)
cont = fits.getdata(cubefilename+'_continuum.fits')
data = cube[0].data
cube[0].data = data - cont
cube.writeto(cubefilename+'_sub.fits', clobber=True)
def neighborly_masking(cube, sigma=1, roll=2):
    """
    Mask low-significance points by their neighbors: a channel is kept only
    if at least 3 of the channels within ``roll`` steps of it (inclusive)
    exceed ``sigma`` times the per-pixel noise.
    """
    noise = cube.std(axis=0)
    mcube = cube > (noise*sigma)
    mcube[:roll,:,:] = mcube[-roll:,:,:] = False
    mcube2 = np.sum([np.roll(mcube, shift, axis=0)
                     for shift in range(-roll, roll+1)], axis=0)
    mask = mcube2 >= 3
    return mask
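# Illustrative usage sketch (an addition): keep only spectral features that
# persist over a few adjacent channels, assuming the spectral axis is first:
#     mask = neighborly_masking(cube, sigma=1, roll=2)
#     cube[~mask] = np.nan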
def baseline_cube(cubefn, mask=None, maskfn=None, mask_level=None,
mask_level_sigma=None, order=5,
outfilename=None,
polyspline='poly', splinesampling=100):
"""
Baseline-subtract a data cube with polynomials or splines.
Can mask the cube first.
"""
from pyspeckit.cubes.cubes import baseline_cube
f = fits.open(cubefn)
cube = f[0].data
if mask is None:
if maskfn is not None:
mask = fits.getdata(maskfn).astype('bool')
if cube.shape != mask.shape:
raise ValueError("Cube and mask don't match.")
elif mask_level is not None:
mask = cube > mask_level
elif mask_level_sigma is not None:
mask = ((cube-cube.mean(axis=0)) >
(cube.std(axis=0)*mask_level_sigma))
t0 = time.time()
if polyspline == 'poly':
log.info("Baselining cube {0} with order {1}...".format(cubefn, order))
bc = baseline_cube(cube, polyorder=order, cubemask=mask)
elif polyspline == 'spline':
log.info("Baselining cube {0} with sample scale {1}...".format(cubefn,
splinesampling))
# Splines can't be pickled
bc = baseline_cube(cube, splineorder=order,
sampling=splinesampling, cubemask=mask,
numcores=1)
log.info("Baselining done ({0} seconds)".format(time.time()-t0))
f[0].data = bc
if outfilename is None:
outfilename = cubefn.replace(".fits","_bl.fits")
f.writeto(outfilename, clobber=True)
def do_everything_2013extrafreqs():
build_cube_2013(lowhigh='low',
scanblsub=False)
build_cube_2013(lowhigh='high',
scanblsub=False)
#raise NotImplementedError
#compute_noise_extras(lowhigh='low',pixrange=[0,4096])
#compute_noise_extras(lowhigh='high',pixrange=[0,4096])
def dopeaksn():
from FITS_tools import strip_headers
f = fits.open(h2copath+'APEX_H2CO_303_202.fits')
header = strip_headers.flatten_header(f[0].header)
f[0].header=header
f[0].data = f[0].data.max(axis=0)
n = fits.getdata(h2copath+'APEX_H2CO_merge_high_sub_noise.fits')
f[0].data /= n
f.writeto(h2copath+'APEX_H2CO_303_202_peaksn.fits',clobber=True)
f = fits.open(h2copath+'APEX_H2CO_303_202_smooth.fits')
header = strip_headers.flatten_header(f[0].header)
f[0].header=header
f[0].data = f[0].data.max(axis=0)
n = fits.getdata(h2copath+'APEX_H2CO_merge_high_smooth_noise.fits')
f[0].data /= n
f.writeto(h2copath+'APEX_H2CO_303_202_peaksn_smooth.fits',clobber=True)
def docleannhits():
""" not really used now """
f = fits.open(h2copath+'APEX_H2CO_merge_high_nhits.fits')
nh = f[0].data
nhm = scipy.ndimage.median_filter(nh, 5)
f[0].data = nhm
def mask_out_hc3n(maskhdu):
"""
HC3N shows up in some of the H2CO 303 cubes. We can exclude it from the
signal masks by shifting the mask to HC3N velocities and excluding any
regions detected in H2CO
"""
nu_hc3n = all_lines['HC3N_24-23']
nu_h2co = all_lines['H2CO_303_202']
v_hc3n = ((nu_hc3n - nu_h2co)/nu_h2co * constants.c).to(u.km/u.s).value
mask = maskhdu.data
dv = maskhdu.header['CDELT3']
    shift = int(np.round(v_hc3n / dv))
newmask = np.zeros_like(mask, dtype='bool')
newmask[np.abs(shift):,:,:] = mask[:-np.abs(shift),:,:].astype('bool')
maskhdu.data[newmask] = 0
# Need to convert to int because fits doesn't support bool
maskhdu.data = maskhdu.data.astype('int')
return maskhdu
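# Worked check (an addition): for HC3N 24-23 at ~218.32472 GHz against H2CO
# 303-202 at ~218.22219 GHz, the apparent offset is ~+141 km/s, so the H2CO
# mask is shifted by v_hc3n/CDELT3 channels to land on the HC3N emission.
def _example_hc3n_velocity_offset():
    """Toy check of the HC3N->H2CO apparent velocity offset."""
    nu_hc3n = 218.32472*u.GHz   # approximate rest frequency (assumed here)
    nu_h2co = 218.22219*u.GHz
    return ((nu_hc3n - nu_h2co)/nu_h2co * constants.c).to(u.km/u.s)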
def mask_out_ch3oh(smooth='_smooth', dpath=mergepath):
nu_ch3oh = all_lines['CH3OH_422_312']
nu_h2co = all_lines['H2CO_322_221']
v_ch3oh = ((nu_ch3oh - nu_h2co)/nu_h2co * constants.c).to(u.km/u.s).value
hdu = fits.open(dpath+'APEX_H2CO_322_221{0}.fits'.format(smooth))[0]
dv = hdu.header['CDELT3']
    shift = int(np.round(v_ch3oh / dv))
log.info("CH3OH Masking: dv: {0} shift: {1} ".format(dv,shift))
mask = fits.getdata(dpath+'APEX_H2CO_303_202{0}_mask.fits'.format(smooth)).astype('bool')
log.info("CH3OH Masking: Mask shape: {0}".format(mask.shape))
    newmask = np.zeros_like(mask)
log.info("CH3OH Masking: NewMask shape: {0}".format(newmask.shape))
newmask[np.abs(shift):,:,:] = mask[:-np.abs(shift),:,:]
log.info("CH3OH Masking: NewMask number of masked pixels: {0}".format(newmask.sum()))
hdu.data[newmask] = np.nan
hdu.writeto(dpath+'APEX_H2CO_322_221{0}_CH3OHchomped.fits'.format(smooth), clobber=True)
    hdu.data[~mask] = np.nan
hdu.writeto(dpath+'APEX_H2CO_322_221{0}_CH3OHchomped_masked.fits'.format(smooth), clobber=True)
integrate_mask(dpath+'APEX_H2CO_322_221{0}_CH3OHchomped'.format(smooth),
mask=dpath+'APEX_H2CO_303_202{0}_mask.fits'.format(smooth))
def do_mask_ch3oh(dpath=mergepath, vsmooth=False):
mask_out_ch3oh('', dpath=dpath)
# spatial smoothing = 2pix
mask_out_ch3oh('_smooth', dpath=dpath)
if vsmooth:
# spatial smoothing = 4pix
mask_out_ch3oh('_vsmooth', dpath=dpath)
mask_out_ch3oh('_bl', dpath=dpath)
# spatial smoothing = 2pix
mask_out_ch3oh('_smooth_bl', dpath=dpath)
if vsmooth:
# spatial smoothing = 4pix
mask_out_ch3oh('_vsmooth_bl', dpath=dpath)
def do_2014(datasets=datasets_2014, scanblsub=False):
#datasets = ['E-093.C-0144A.2014APR02/E-093.C-0144A-2014-2014-04-01',
# 'E-093.C-0144A.2014APR03/E-093.C-0144A-2014-2014-04-02']
#build_cube_2014('MAP_001', datasets=datasets, scanblsub=True, lowhigh='low')
#build_cube_2014('MAP_001', datasets=datasets, scanblsub=True, lowhigh='high')
#build_cube_2014('MAP_001', datasets=datasets, scanblsub=False, lowhigh='high_nosub')
for dataset in datasets:
for source in datasets[dataset]:
build_cube_2014(source, datasets=[dataset], scanblsub=scanblsub,
outpath=mergepath,
datapath=april2014path,
lowhigh='low')
build_cube_2014(source, datasets=[dataset], scanblsub=scanblsub,
outpath=mergepath,
datapath=april2014path,
lowhigh='high')
def do_2014_merge(datasets=datasets_2014,
lowhigh=('low','high')):
log.info("Starting merge")
if not isinstance(lowhigh, (tuple,list)):
if isinstance(lowhigh, str):
lowhigh = (lowhigh,)
else:
raise ValueError("Invalid lowhigh.")
for lh in lowhigh:
mergefile = 'APEX_H2CO_2014_merge_{0}'.format(lh)
log.info("Making blanks")
lowest_freq = 218.4e9 if lh=='high' else 216.9e9
make_blanks_merge(os.path.join(mergepath,mergefile), lowhigh=lh,
lowest_freq=lowest_freq, width=2.5*u.GHz)
mapnames = ['MAP_{0:03d}'.format(ii) for ii in range(1,130)]
log.info("Building cubes: "+str(mapnames)+" "+lh)
build_cube_2014(mapnames,
mergefile=mergefile,
outpath=mergepath,
datapath=april2014path,
lowhigh=lh,
datasets=datasets)
baseline_cube(os.path.join(mergepath,mergefile+".fits"),
polyspline='spline', mask_level_sigma=5)
def get_info_2014(datapath='/Users/adam/work/h2co/apex/april2014/',
datasets=datasets_2014):
info = {}
for dataset in datasets:
apex_filename=os.path.join(datapath,dataset)+".apex"
spectra,headers,indices = load_apex_cube(apex_filename)
info[dataset] = set([h['OBJECT'] for h in headers])
log.info("{0}:{1}".format(dataset, str(info[dataset])))
return info
def identify_scans_fromcoords(gal):
# identify where the *derivative* changes signs
# each np.diff shifts 1 to the left
# 2 np.diffs -> +2 to index
scans = 2+np.where(np.diff(np.sign(np.diff(gal.l.wrap_at(180*u.deg)))))[0]
return scans
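# Illustrative sketch (an addition): the double-difference in
# identify_scans_fromcoords flags indices where the longitude track reverses
# direction. A toy triangle-wave track:
def _example_identify_scans():
    """Toy demo of the turnaround logic on a synthetic zig-zag track."""
    ell = np.abs(np.arange(100) - 50).astype(float)
    turnarounds = 2 + np.where(np.diff(np.sign(np.diff(ell))))[0]
    return turnarounds    # -> array([51]), just past the vertex at index 50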
def per_scan_fourier_clean(data, scans, mask_pixels=None,
verbose=False, smoothing_width=10,
automask=False, smooth_all=False,
smoothing_kernel_size_scale=40,
nsigma_ignore=1, return_mask=False):
"""
An implementation of the Emerson 1988 prescription for "scan noise" removal
performed in "scan space" rather than map space.
Parameters
----------
data : np.ndarray
2D data, with time along axis 0 and frequency along axis 1
scans : np.ndarray
The endpoints of the scans. Should not include 0 or naxis
verbose : bool
Print out simple stats about the fits
"""
raise NotImplementedError("Work in progress - maybe a bad idea")
# Create a new array for hosting the subtracted data
dsub = data*0
timeaxis = 0
freqaxis = 1
# Kernel must be ODD
kernel_size = smoothing_kernel_size_scale * smoothing_width
if kernel_size % 2 == 0:
kernel_size += 1
masklist = []
for ii,jj in zip([0]+scans.tolist(),
scans.tolist()+[data.shape[timeaxis]]):
x = np.arange(jj-ii)
y = data[ii:jj,:]
fty = np.fft.fft(y,axis=0)
ftf = np.fft.fftfreq(x)
# The components to suppress should be decided in the map plane...
return dsub
def subtract_scan_linear_fit(data, scans, mask_pixels=None,
verbose=False, smoothing_width=10,
automask=False, smooth_all=False,
smoothing_kernel_size_scale=40,
nsigma_ignore=1, return_mask=False):
"""
Use linear algebra to fit a time-baseline to each scan to remove spectral
baseline drifts.
WARNING: This may remove map-spanning signals!! That can be BAD for 13CO!
Source:
http://stackoverflow.com/questions/20343500/efficient-1d-linear-regression-for-each-element-of-3d-numpy-array
(includes a solution for masked arrays: this will be EXTREMELY useful!)
Parameters
----------
data : np.ndarray
2D data, with time along axis 0 and frequency along axis 1
scans : np.ndarray
The endpoints of the scans. Should not include 0 or naxis
divscale : bool
DISABLED: this is crazy
If True, will use only the slope and will divide out the normalized
slope rather than subtracting
mask_pixels : None or np.ndarray
A mask array to select pixels to interpolate the fits across in
the *Frequency* axis
automask : bool
Mask any scans with a mean > the overall mean + 1 stddev. The data are
slightly smoothed first if automask > 1.
verbose : bool
Print out simple stats about the fits
smoothing_kernel_size_scale : int
The size multiplier of the smoothing kernel used for interpolation in
the frequency domain; smoothing_kernel_size_scale * smoothing_width
defines the number of pixels to use when interpolating
nsigma_ignore : float
Fit masking control parameter. Pixels with values greater than the
mean noise + nsigma_ignore * std(mean_spectrum) will be ignored for
fitting then interpolated back over later
return_mask : bool
Return an array of the mask used for each scan
"""
#dmeans = data[:,percentile*data.shape[1]:(1-percentile)*data.shape[1]].mean(axis=1)
dsub = data*0
timeaxis = 0
freqaxis = 1
# Kernel must be ODD
kernel_size = smoothing_kernel_size_scale * smoothing_width
if kernel_size % 2 == 0:
kernel_size += 1
masklist = []
for ii,jj in zip([0]+scans.tolist(),
scans.tolist()+[data.shape[timeaxis]]):
x = np.arange(jj-ii)
if automask:
mean_spectrum = data[ii:jj,:].mean(axis=timeaxis)
if automask > 1:
mean_spectrum = convolve(mean_spectrum,
Gaussian1DKernel(stddev=automask))
mask_pixels = (mean_spectrum < (mean_spectrum.mean() +
nsigma_ignore*mean_spectrum.std()))
if verbose:
nflag = (~mask_pixels).sum()
log.info(("Masked {0} pixels for scanblsub fitting"
" in scan {1}-{2} "
"({3}%)").format(nflag, ii, jj,
nflag/float(mask_pixels.size),)
)
if mask_pixels is None:
y = data[ii:jj,:]
else:
# mask_pixels is an include mask
inds = np.arange(data.shape[freqaxis])[mask_pixels]
y = data[ii:jj,mask_pixels]
if return_mask and automask > 0:
masklist.append(mask_pixels)
# X is a vector of the X-values and a constant (1)
# Becomes set of equations y = m x + b || y = X mb
X = np.c_[x,np.ones(jj-ii)]
mb = np.linalg.lstsq(X,y)[0]
if mask_pixels is not None:
# Mask out the bad values, interpolate using a wide gaussian that
# ignores nans
m = np.zeros(data.shape[freqaxis]) + np.nan
m[inds] = mb[0,:]
m = convolve(m, Gaussian1DKernel(stddev=smoothing_width,
x_size=kernel_size))
b = np.zeros(data.shape[freqaxis]) + np.nan
b[inds] = mb[1,:]
b = convolve(b, Gaussian1DKernel(stddev=smoothing_width,
x_size=kernel_size))
# restore initial sampling unless we want smooth
if not smooth_all:
m[inds] = mb[0,:]
b[inds] = mb[1,:]
mb = np.array([m,b])
dsub[ii:jj,:] = data[ii:jj,:] - np.inner(X,mb.T)
log.info("Fit {0} scans with mean slopes {1} and offset {2}".format(len(scans)+1,
mb.mean(axis=1)[0],
mb.mean(axis=1)[1]))
if np.any(np.isnan(dsub)):
warnings.warn("There were NaNs left over from time-baseline subtraction.")
dsub[np.isnan(dsub)] = 0
if return_mask:
return dsub, np.array(masklist)
return dsub
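# Illustrative usage sketch (an addition): a per-scan linear time drift is
# fit and removed, leaving only the noise. ``scans`` marks scan boundaries
# as returned by identify_scans_fromcoords.
def _example_scan_linear_fit():
    """Toy demo: subtract_scan_linear_fit removes a linear time drift."""
    ntime, nfreq = 100, 64
    drift = np.linspace(0, 1, ntime)[:, None]*np.ones((1, nfreq))
    data = drift + 0.01*np.random.randn(ntime, nfreq)
    dsub = subtract_scan_linear_fit(data, np.array([50]), automask=False)
    return dsub.std()    # ~0.01: only the noise survives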
def efuncs(arr, neig=None, return_others=False, huge_limit=500):
"""
Determine eigenfunctions of an array for use with
PCA cleaning
Parameters
----------
arr : `numpy.ndarray`
The array (2D)
neig : None or int
The number of eigenvalues to compute. Smaller = faster!
None = All!
huge_limit : int
The limit above which an error will be raised (for large arrays, this
can take *forever*)
return_others : bool
Return the evals, evects, and covmat or just the efuncs?
Returns
-------
efuncarr : np.ndarray
The eigenfunctions
Optional Returns
----------------
covmat : np.ndarray
Symmetric covariance matrix
evals : np.ndarray
1D array of eigenvalues
evects : np.ndarray
Eigenvectors
"""
if hasattr(arr,'filled'):
arr = arr.filled(0)
if arr.shape[1] > huge_limit and not neig:
log.critical("Very large eigenvalue computation!"
" Danger stranger! Stranger danger!")
import ipdb; ipdb.set_trace()
covmat = np.dot(arr.T.conj(),arr)
# assert covariance matrix is Hermitian
# (symmetric under transpose + conjugation)
if not (covmat.T.conj() == covmat).all():
diff = (covmat.T.conj() - covmat)
worst_diff_ind = np.argmax(np.abs(diff))
worst_diff = diff.flat[worst_diff_ind]/covmat.flat[worst_diff_ind]
log.warning("There are differences between the upper "
"and lower triangular components of the "
"covariance matrix; this is probably a "
"floating point error and should not be terrible."
" The relative error is {wd}.".format(wd=worst_diff))
if np.abs(worst_diff) > 1e-4:
log.warning("Actually, that's a pretty large error. "
"You may be in trouble.")
# Changed from np.linalg.eig to scipy.linalg.eigh
# and numpy.linalg.eigh, which both return values in
# the opposite order from np.linalg.eig
if neig:
sz = covmat.shape[1]
eva, eve = scipy.linalg.eigh(covmat,
eigvals=(sz-neig,sz-1))
        # eigh returns values in opposite order from np.linalg.eig
# we also want a fully populated matrix so the size stays
# the same
inds = np.argsort(eva)[::-1]
evals = np.zeros(sz)
evals[:neig] = eva[inds]
evects = np.zeros([sz,sz])
evects[:, :neig] = eve[:,inds]
else:
evals,evects = np.linalg.eigh(covmat)
inds = np.argsort(evals)[::-1]
evals = evals[inds]
evects = evects[:,inds]
efuncarr = np.dot(arr,evects)
if return_others:
return efuncarr,covmat,evals,evects
else:
return efuncarr
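# Illustrative usage sketch (an addition): with two injected rank-1
# components, the first two eigenvalues returned by efuncs should dominate.
def _example_efuncs():
    """Toy demo of efuncs on data containing two correlated components."""
    t = np.linspace(0, 1, 200)
    comp1 = np.sin(2*np.pi*3*t)[:, None]*np.random.randn(1, 50)
    comp2 = t[:, None]*np.random.randn(1, 50)
    arr = comp1 + comp2 + 0.01*np.random.randn(200, 50)
    efuncarr, covmat, evals, evects = efuncs(arr, neig=5, return_others=True)
    return evals[:5]    # the first two carry nearly all of the variance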
def PCA_clean(data,
smoothing_scale=25., # should be ~200 for SEDIGISM
timeaxis=0,
freqaxis=1,
ncomponents=3,
diagplotfilename=None,
scans=None,
maxntimes=5000,
):
"""
Remove N PCA components in the time direction
TODO: speed up by downsampling in TIME as well; we don't expect large
second-to-second variations REVISE: No, actually, there are sharp
jumps in time.
Maybe scan-by-scan pca is faster?
Smoothing scale is ~200 in total, which means 25 for pre-downsampled
CMZ data
Parameters
----------
data : `numpy.ndarray`
2D data, with dimensions ``[times, frequencies]`` (or reversed if
``timeaxis`` and ``freqaxis`` are appropriately specified)
smoothing_scale : float
The scale over which frequencies should be smoothed prior to performing
the PCA analysis. This is the width of a gaussian. The data will be
downsampled by a factor (1/5)*smoothing_scale
timeaxis : int
freqaxis : int
The axis #'s of the frequency and time data
ncomponents : int
The number of PCA components to remove. 3 is empirically decent, but
it's very important to test this #
diagplotfilename : None or str
A filename to save a diagnostic plot in. The plot shows the first
``ncomponents`` eigenfunctions.
scans : list
A list of scans. If these are specified, the PCA analysis will be done
on a scan-by-scan basis, in which the most-correlated N components will
be identified in each scan. This is not obviously the best thing to
do, but it can be useful.
maxntimes : int or None
If specified, the timestream will be chunked out into sections with
length < maxntimes before doing PCA computations. In principle, this
can be used to overcome memory limitations, but it should be used with
caution as the locations of the splits are somewhat arbitrary and could
        result in different principal component selections if the data aren't
well-behaved.
"""
if freqaxis == 0 and timeaxis == 1:
data = data.swapaxes(0,1)
elif freqaxis != 1 or timeaxis != 0:
raise ValueError("Invalid axis specification.")
if np.any(np.isnan(data)):
warnings.warn("There were NaNs in the PCA-target data")
import ipdb; ipdb.set_trace()
data = np.nan_to_num(data)
if maxntimes and scans is None:
ntimes = data.shape[0]
if ntimes > maxntimes:
nsplits = np.ceil(ntimes/float(maxntimes))
length = ntimes/nsplits
# Split with equal length, but leave out the starting point
# and the end point since those are both added
splits = np.linspace(0, ntimes, nsplits+1)[1:-1]
scans = splits.astype('int')
if scans is not None:
all_data = data
all_dsub = np.empty(data.shape)
for start,end in zip([0]+scans.tolist(),
scans.tolist()+[data.shape[0]]):
log.info("Computing PCA on an array with shape"
" {0}".format(data[start:end,:].shape))
dsub,efuncarr = PCA_subtract(data[start:end,:],
smoothing_scale=smoothing_scale,
ncomponents=ncomponents)
if start == 0:
efuncs = efuncarr[:,:ncomponents]
else:
efuncs += efuncarr[:,:ncomponents]
all_dsub[start:end,:] = dsub
dsub = all_dsub
efuncarr = efuncs / (len(scans)+1.) # Average removed efuncs
else:
log.info("Computing PCA on an array with shape"
" {0}".format(data.shape))
dsub,efuncarr = PCA_subtract(data,
smoothing_scale=smoothing_scale,
ncomponents=ncomponents)
if diagplotfilename is not None:
fig = pl.figure(4)
fig.clf()
ax = fig.gca()
for ii in range(ncomponents):
ax.plot(efuncarr[:,ii], label=str(ii), linewidth=2, alpha=0.5)
ax.legend(loc='best')
checkdir_makedir(diagplotfilename)
fig.savefig(diagplotfilename, bbox_inches='tight')
if freqaxis == 0 and timeaxis == 1:
dsub = dsub.swapaxes(0,1)
return dsub.real
def PCA_subtract(data, smoothing_scale=None, ncomponents=3):
"""
Parameters
----------
data : `numpy.ndarray`
2D data, with dimensions (times, frequencies)
smoothing_scale : float
The scale over which frequencies should be smoothed prior to performing
the PCA analysis. This is the width of a gaussian. The data will be
downsampled by a factor (1/5)*smoothing_scale
Returns
-------
dsub : `numpy.ndarray`
        The data with ``ncomponents`` principal components removed
    efuncarr : `numpy.ndarray`
        The data projected onto the covariance eigenvectors, with all
        components beyond ``ncomponents`` zeroed out
"""
t0 = time.time()
log.info("PCA will remove {0} components".format(ncomponents))
if smoothing_scale:
log.info(("PCA cleaning an image with size {0},"
" which will downsample to {1}").format(data.shape,
(data.shape[0],
data.shape[1]/(smoothing_scale/5))))
sm_data = filters.gaussian_filter1d(data, smoothing_scale,
axis=1, mode='mirror').real
efuncarr,covmat,evals,evects = efuncs(sm_data[:,::smoothing_scale/5].T,
neig=ncomponents,
huge_limit=1000,
return_others=True)
else:
log.info("PCA cleaning an image with size {0}".format(data.shape))
efuncarr,covmat,evals,evects = efuncs(data.T,
neig=ncomponents,
huge_limit=1000,
return_others=True)
log.info("Completed PCA (eigenfunction/vector) computation"
" in {0} seconds.".format(time.time()-t0))
    # Zero out everything beyond the first ncomponents, so that only the
    # components being removed contribute to the subtraction model
    # (technically no longer necessary: this should be a null operation,
    # since efuncs() already zeroed those components when neig was given)
efuncarr[:,ncomponents:] = 0
to_subtract = np.inner(efuncarr,evects).T
if smoothing_scale:
ifunc = interpolate.interp1d(np.arange(to_subtract.shape[1]),
to_subtract,
axis=1)
to_subtract = ifunc(np.linspace(0, to_subtract.shape[1]-1, data.shape[1]))
dsub = data - to_subtract
return dsub, efuncarr
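# A hedged usage sketch for PCA_clean/PCA_subtract above (the shapes,
# smoothing scale, and component count are illustrative assumptions, not
# values used by the pipeline):
if __name__ == "__main__":
    _times, _freqs = 200, 512
    _fake = np.random.randn(_times, _freqs)
    # inject a common mode across all times so PCA has something to remove
    _fake += 10 * np.sin(np.linspace(0, 10, _freqs))[np.newaxis, :]
    _cleaned = PCA_clean(_fake, smoothing_scale=25, ncomponents=1)
    assert _cleaned.shape == _fake.shape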
def _is_sci(source, sourcereg='MAP'):
return (((sourcereg in source)) and
('SKY' not in source) and
('TCAL' not in source) and
('TREC' not in source) and
('TSYS' not in source) and
('HOT' not in source) and
('COLD' not in source))
def get_source_tel_line(apex_filename):
if 'M-093' in apex_filename or 'E-093' in apex_filename:
sourcereg = 'MAP'
line = 'shfi219ghz'
telescopes = ['AP-H201-X201', 'AP-H201-X202']
elif 'M-091' in apex_filename:
sourcereg = 'SGRA'
line = 'H2CO(3-2)'
telescopes = ['AP-H201-X201', 'AP-H201-X202']
elif 'O-085' in apex_filename:
sourcereg = 'SGRA'
line = 'H2CO(3-2)'
telescopes = ['AP-H201-F101', 'AP-H201-F102']
elif 'E-085' in apex_filename:
sourcereg = 'SGRA'
line = 'H2CO32'
telescopes = ['AP-H201-F101', 'AP-H201-F102']
else:
raise ValueError("Data selected is not from ao, 2013 or 2014")
return sourcereg,line,telescopes
def compute_and_save_pca_components(apex_filename, ncomponents=5,
suppress_endpoints=4, redo=True):
log.info("Starting {0}".format(apex_filename))
outdir = os.path.join(os.path.dirname(apex_filename),
os.path.splitext(os.path.basename(apex_filename))[0])
if not os.path.isdir(outdir):
os.mkdir(outdir)
sourcereg,line,telescopes = get_source_tel_line(apex_filename)
if not redo and all([os.path.exists(
os.path.join(outdir,
'{1}_pca_component_{0}_els0.fits'.
format(ii,tel)))
for ii in range(ncomponents)
for tel in telescopes]):
log.info("Skipping {0} because it's been done".format(apex_filename))
return
log.info("Outdir is {0}".format(outdir))
cl = read_class.ClassObject(apex_filename)
for telescope in cl.getinfo()['tels']:
if 'PA' not in telescope:
selection = [x
for source in cl.sources
if _is_sci(source, sourcereg)
for x in cl.select_spectra(telescope=telescope,
line=line,
source=source)]
mmdata,headers = zip(*cl.read_observations(selection, progressbar=True))
log.info("Converting data to an array by every 1000 elts"
" out of {0} total (memory use should rise here)".
format(len(mmdata)))
for jj in range(len(mmdata) / 1000 + 1):
log.info('Elements {0}-{1}'.format(jj*1000,
min((jj+1)*1000,
len(mmdata))))
data = np.asarray(mmdata[jj*1000:(jj+1)*1000])
# Endpoints can be ~1e14
bad = abs(data) > 1e9
nbad = np.count_nonzero(bad)
if nbad > 0:
log.info("Found {0} bad values".format(nbad))
data[bad] = 0
log.info('Computing eigenfunctions (intensive step)')
efuncarr,covmat,evals,evects = efuncs(data.T,
neig=ncomponents,
huge_limit=1000,
return_others=True)
log.info("Writing PCA components to disk. This step should be fast.")
header = classheader_to_fitsheader(headers[0])
evals_norm = evals/evals.sum()
for ii in range(ncomponents):
header['PCACOMP'] = ii
header['EVAL'] = evals_norm[ii]
hdu = fits.PrimaryHDU(data=efuncarr[:,ii], header=header)
hdu.writeto(os.path.join(outdir,
'{2}_pca_component_{0}_els{1}.fits'.
format(ii,jj,telescope)),
clobber=True,
output_verify='fix')
# Re-do the correlations using those PCA components
log.info("Re-computing PCA using the sub-components.")
data = np.array([fits.getdata(os.path.join(outdir,
'{2}_pca_component_{0}_els{1}.fits'.
format(ii,jj,telescope)))
for ii in range(ncomponents)
for jj in range(len(mmdata) / 1000 + 1)])
efuncarr,covmat,evals,evects = efuncs(data.T,
neig=ncomponents,
huge_limit=1000,
return_others=True)
evals_norm = evals/evals.sum()
for ii in range(ncomponents):
header['PCACOMP'] = ii
header['EVAL'] = evals_norm[ii]
hdu = fits.PrimaryHDU(data=efuncarr[:,ii], header=header)
hdu.writeto(os.path.join(outdir,
'{1}_pca_component_{0}.fits'.
format(ii,telescope)),
clobber=True,
output_verify='fix')
log.info("Completed {0}".format(apex_filename))
def do_all_pcacomponents(redo=True, **kwargs):
for fn in all_apexfiles:
try:
compute_and_save_pca_components(fn, redo=redo, **kwargs)
plot_pca_components(fn)
except Exception as ex:
log.error("Error: {0}".format(ex))
print(ex)
continue
def plot_pca_components(apex_filename, ncomponents=3):
log.info("Plotting {0}".format(apex_filename))
outdir = os.path.join(os.path.dirname(apex_filename),
os.path.splitext(os.path.basename(apex_filename))[0])
fig1 = pl.figure(1)
fig1.clf()
fig2 = pl.figure(2)
fig2.clf()
figs = [fig1,fig2]
for fglob in [os.path.join(outdir, '*_pca_component_{0}.fits'.format(ii))
for ii in range(ncomponents)]:
files = glob.glob(fglob)
for jj,(fn,fig) in enumerate(zip(files,figs)):
data = fits.getdata(fn)
ax1 = fig.add_subplot(2,1,1)
ax1.plot(data, ',', label=str(jj))
ft = np.fft.fft(data)
ftf = np.fft.fftfreq(data.size)
ax2 = fig.add_subplot(2,1,2)
ax2.loglog(ftf[ftf>=0], abs(ft[ftf>=0]), label=str(jj), alpha=0.5)
fig1.savefig(files[0].replace(".fits",".png"))
fig2.savefig(files[1].replace(".fits",".png"))
log.info("Done plotting {0}".format(apex_filename))
def extract_mean_abs_spectra(apex_filename):
outdir = os.path.join(os.path.dirname(apex_filename),
os.path.splitext(os.path.basename(apex_filename))[0])
if not os.path.isdir(outdir):
os.mkdir(outdir)
sourcereg,line,telescopes = get_source_tel_line(apex_filename)
cl = read_class.ClassObject(apex_filename)
for telescope in cl.getinfo()['tels']:
if 'PA' not in telescope:
selection = [x
for source in cl.sources
if _is_sci(source, sourcereg)
for x in cl.select_spectra(telescope=telescope,
line=line,
source=source)]
# Only do first 10000
# 1e4 * 2**15 * 4 = 1.31 GB
mmdata,headers = zip(*cl.read_observations(selection[:10000], progressbar=True))
header = classheader_to_fitsheader(headers[0])
header['LINE1'] = 'mean(abs)'
header['LINE2'] = 'std(abs)'
del headers
data = np.abs(np.array(mmdata, dtype='float32'))
del mmdata
dft = np.fft.fft(data, axis=1)
dftmeanabs = np.abs(dft).mean(axis=0).astype('float32')
del dft
absdata = np.abs(data).astype('float32')
del data
meanabs = (absdata).mean(axis=0).astype('float32')
stdabs = (absdata).std(axis=0).astype('float32')
darr = np.array([meanabs,stdabs,dftmeanabs])
assert darr.shape == (3, meanabs.size)
hdu = fits.PrimaryHDU(data=darr, header=header)
hdu.writeto(os.path.join(outdir,
'{0}_meanabsspec.fits'.format(telescope)),
clobber=True,
output_verify='fix')
def plot_mean_abs_spectrum(apex_filename, ncomponents=3):
log.info("Plotting {0}".format(apex_filename))
basename = os.path.splitext(os.path.basename(apex_filename))[0]
outdir = os.path.join(os.path.dirname(apex_filename), basename)
fig1 = pl.figure(1)
fig1.clf()
pl.title(basename)
fig2 = pl.figure(2)
fig2.clf()
figs = [fig1,fig2]
fglob = os.path.join(outdir, '*_meanabsspec.fits')
files = glob.glob(fglob)
for jj,(fn,fig) in enumerate(zip(files,figs)):
mspec, sspec, ftabs = fits.getdata(fn)
ax1 = fig.add_subplot(2,1,1)
ax1.plot(mspec-np.median(mspec), ',', label=str(jj))
mmad = mad.MAD(mspec)
ax1.set_ylim(mmad*-10, mmad*10)
ax1.set_title(basename)
ft = np.fft.fft(mspec)
ftf = np.fft.fftfreq(mspec.size)
ax2 = fig.add_subplot(2,1,2)
ax2.loglog(ftf[ftf>=0], abs(ft[ftf>=0]), label=str(jj), alpha=0.5)
ax2.loglog(ftf[ftf>=0], abs(ftabs[ftf>=0]), alpha=0.5)
ax2.set_xlim(ftf.min(), ftf.max())
fig.savefig(fn.replace(".fits",".png"), bbox_inches='tight')
log.info("Done plotting {0}".format(apex_filename))
def do_all_meanabsspectra(**kwargs):
for fn in all_apexfiles:
extract_mean_abs_spectra(fn, **kwargs)
plot_mean_abs_spectrum(fn)
#except Exception as ex:
# log.error("Error: {0}".format(ex))
# print(ex)
# continue
def extract_co_subcubes(mergepath=april2014path):
extract_subcube(os.path.join(mergepath,'APEX_H2CO_2014_merge_high.fits'),
os.path.join(mergepath,'APEX_13CO_2014_merge.fits'),
linefreq=220.39868*u.GHz, naxis3=500, vmin=-225*u.km/u.s,
vmax=275*u.km/u.s)
extract_subcube(os.path.join(mergepath,'APEX_H2CO_2014_merge_high.fits'),
os.path.join(mergepath,'APEX_C18O_2014_merge.fits'),
linefreq=219.56036*u.GHz, naxis3=500, vmin=-225*u.km/u.s,
vmax=275*u.km/u.s)
extract_subcube(os.path.join(mergepath,'APEX_H2CO_2014_merge_high.fits'),
os.path.join(h2copath,'APEX_13CO_matched_H2CO.fits'),
linefreq=220.39868*u.GHz,)
extract_subcube(os.path.join(mergepath,'APEX_H2CO_2014_merge_high.fits'),
os.path.join(h2copath,'APEX_C18O_matched_H2CO.fits'),
linefreq=219.56036*u.GHz,)
extract_subcube(os.path.join(mergepath,'APEX_H2CO_2014_merge_high.fits'),
os.path.join(h2copath,'APEX_13CO_matched_H2CO_smooth.fits'),
linefreq=220.39868*u.GHz, smooth=True)
extract_subcube(os.path.join(mergepath,'APEX_H2CO_2014_merge_high.fits'),
os.path.join(h2copath,'APEX_C18O_matched_H2CO_smooth.fits'),
linefreq=219.56036*u.GHz, smooth=True)
signal_to_noise_mask_cube(os.path.join(h2copath,'APEX_13CO_matched_H2CO_smooth'),
noise=fits.getdata(os.path.join(mergepath,
'APEX_H2CO_merge_high_plait_all_noise.fits')))
signal_to_noise_mask_cube(os.path.join(h2copath,'APEX_13CO_matched_H2CO'),
noise=fits.getdata(os.path.join(mergepath,
'APEX_H2CO_merge_high_plait_all_smooth_noise.fits')))
def quick_extract_13cocube(fn, snthreshold=3, overwrite=True, intrange=None):
if fits.getheader(fn)['NAXIS'] > 2:
cube = SpectralCube.read(fn).with_spectral_unit(u.km/u.s,
rest_value=220.39868*u.GHz,
velocity_convention='radio')
cocube = cube.spectral_slab(-200*u.km/u.s, 200*u.km/u.s)
cocube.write(fn[:-5]+"_13COcube.fits", overwrite=overwrite)
noise = cube.std(axis=0)
noise.hdu.writeto(fn[:-5]+"_noise.fits", clobber=overwrite)
sn = cocube.filled_data[:]/noise
comask = cocube.with_mask(BooleanArrayMask(sn > snthreshold, wcs=cocube._wcs))
if intrange is None:
coint = comask.moment0()
else:
coint = comask.spectral_slab(intrange[0], intrange[1]).moment0()
coint.write(fn[:-5]+"_13COmaskintegrated.fits", overwrite=overwrite)
        if intrange is None:
            coint2 = cocube.moment0()
        else:
            coint2 = cocube.spectral_slab(intrange[0], intrange[1]).moment0()
coint2.write(fn[:-5]+"_13COintegrated.fits", overwrite=overwrite)
def cal_date_overlap(dates1, calibration_factors=calibration_factors):
for k in calibration_factors:
if k is not None:
d1,d2 = Time(k.split(":"))
if dates1[0] < d2 and dates1[1] > d1:
return k
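# A hedged example of how cal_date_overlap above might be exercised: keys are
# "start:stop" ISO-date ranges mapping to calibration factors, and Time is
# astropy.time.Time as used above. (The dates and factors below are invented
# illustrations, not real calibration entries.)
if __name__ == "__main__":
    _fake_factors = {'2014-04-01:2014-05-01': 0.9, None: 1.0}
    _obs_dates = Time(['2014-04-10', '2014-04-12'])
    assert (cal_date_overlap(_obs_dates, calibration_factors=_fake_factors)
            == '2014-04-01:2014-05-01')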
|
adamginsburg/APEX_CMZ_H2CO
|
shfi_otf_pipeline/make_apex_cubes.py
|
Python
|
bsd-3-clause
| 136,320
|
[
"Gaussian"
] |
74d3a6d6c218aa29657a9ccbd0471ec9bf9388bbb9d3d40f2a2069758fa47cde
|
# Author: Idan Gutman
# Modified by jkaberg, https://github.com/jkaberg for SceneAccess
# URL: http://code.google.com/p/sickbeard/
#
# This file is part of SickRage.
#
# SickRage is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# SickRage is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with SickRage. If not, see <http://www.gnu.org/licenses/>.
import re
import traceback
import datetime
import urlparse
import sickbeard
import generic
from sickbeard.common import Quality, cpu_presets
from sickbeard import logger
from sickbeard import tvcache
from sickbeard import db
from sickbeard import classes
from sickbeard import helpers
from sickbeard import show_name_helpers
from sickbeard.exceptions import ex, AuthException
from sickbeard import clients
from lib import requests
from lib.requests import exceptions
from sickbeard.bs4_parser import BS4Parser
from lib.unidecode import unidecode
from sickbeard.helpers import sanitizeSceneName
class HDTorrentsProvider(generic.TorrentProvider):
urls = {'base_url': 'https://hdts.ru/index.php',
'login': 'https://hdts.ru/login.php',
'detail': 'https://www.hdts.ru/details.php?id=%s',
'search': 'https://hdts.ru/torrents.php?search=%s&active=1&options=0%s',
'download': 'https://www.sceneaccess.eu/%s',
'home': 'https://www.hdts.ru/%s'
}
def __init__(self):
generic.TorrentProvider.__init__(self, "HDTorrents")
self.supportsBacklog = True
self.enabled = False
self._uid = None
self._hash = None
self.username = None
self.password = None
self.ratio = None
self.minseed = None
self.minleech = None
self.cache = HDTorrentsCache(self)
self.url = self.urls['base_url']
self.categories = "&category[]=59&category[]=60&category[]=30&category[]=38"
self.cookies = None
def isEnabled(self):
return self.enabled
def imageName(self):
return 'hdtorrents.png'
def getQuality(self, item, anime=False):
quality = Quality.sceneQuality(item[0])
return quality
def _checkAuth(self):
if not self.username or not self.password:
raise AuthException("Your authentication credentials for " + self.name + " are missing, check your config.")
return True
def _doLogin(self):
if any(requests.utils.dict_from_cookiejar(self.session.cookies).values()):
return True
if self._uid and self._hash:
requests.utils.add_dict_to_cookiejar(self.session.cookies, self.cookies)
else:
login_params = {'uid': self.username,
'pwd': self.password,
'submit': 'Confirm',
}
try:
response = self.session.post(self.urls['login'], data=login_params, timeout=30, verify=False)
except (requests.exceptions.ConnectionError, requests.exceptions.HTTPError), e:
logger.log(u'Unable to connect to ' + self.name + ' provider: ' + ex(e), logger.ERROR)
return False
if re.search('You need cookies enabled to log in.', response.text) \
or response.status_code == 401:
logger.log(u'Invalid username or password for ' + self.name + ' Check your settings', logger.ERROR)
return False
self._uid = requests.utils.dict_from_cookiejar(self.session.cookies)['uid']
self._hash = requests.utils.dict_from_cookiejar(self.session.cookies)['pass']
self.cookies = {'uid': self._uid,
'pass': self._hash
}
return True
def _get_season_search_strings(self, ep_obj):
search_string = {'Season': []}
for show_name in set(show_name_helpers.allPossibleShowNames(self.show)):
if ep_obj.show.air_by_date or ep_obj.show.sports:
ep_string = show_name + ' ' + str(ep_obj.airdate).split('-')[0]
elif ep_obj.show.anime:
ep_string = show_name + ' ' + "%d" % ep_obj.scene_absolute_number
else:
ep_string = show_name + ' S%02d' % int(ep_obj.scene_season) #1) showName SXX
search_string['Season'].append(ep_string)
return [search_string]
def _get_episode_search_strings(self, ep_obj, add_string=''):
search_string = {'Episode': []}
if not ep_obj:
return []
if self.show.air_by_date:
for show_name in set(show_name_helpers.allPossibleShowNames(self.show)):
ep_string = sanitizeSceneName(show_name) + ' ' + \
str(ep_obj.airdate).replace('-', '|')
search_string['Episode'].append(ep_string)
elif self.show.sports:
for show_name in set(show_name_helpers.allPossibleShowNames(self.show)):
ep_string = sanitizeSceneName(show_name) + ' ' + \
str(ep_obj.airdate).replace('-', '|') + '|' + \
ep_obj.airdate.strftime('%b')
search_string['Episode'].append(ep_string)
elif self.show.anime:
for show_name in set(show_name_helpers.allPossibleShowNames(self.show)):
ep_string = sanitizeSceneName(show_name) + ' ' + \
"%i" % int(ep_obj.scene_absolute_number)
search_string['Episode'].append(ep_string)
else:
for show_name in set(show_name_helpers.allPossibleShowNames(self.show)):
ep_string = show_name_helpers.sanitizeSceneName(show_name) + ' ' + \
sickbeard.config.naming_ep_type[2] % {'seasonnumber': ep_obj.scene_season,
'episodenumber': ep_obj.scene_episode}
search_string['Episode'].append(re.sub('\s+', ' ', ep_string))
return [search_string]
def _doSearch(self, search_params, search_mode='eponly', epcount=0, age=0):
results = []
items = {'Season': [], 'Episode': [], 'RSS': []}
if not self._doLogin():
return []
for mode in search_params.keys():
for search_string in search_params[mode]:
if isinstance(search_string, unicode):
search_string = unidecode(search_string)
if search_string == '':
continue
search_string = str(search_string).replace('.', ' ')
searchURL = self.urls['search'] % (search_string, self.categories)
logger.log(u"Search string: " + searchURL, logger.DEBUG)
data = self.getURL(searchURL)
if not data:
continue
# Remove HDTorrents NEW list
split_data = data.partition('<!-- show New Torrents After Last Visit -->\n\n\n\n')
data = split_data[2]
try:
with BS4Parser(data, features=["html5lib", "permissive"]) as html:
#Get first entry in table
entries = html.find_all('td', attrs={'align': 'center'})
if not entries:
logger.log(u"The Data returned from " + self.name + " do not contains any torrent",
logger.DEBUG)
continue
try:
                            # str.strip() takes a set of characters, so use replace() to drop the prefix
                            title = entries[22].find('a')['title'].replace('History - ', '').replace('Blu-ray', 'bd50')
url = self.urls['home'] % entries[15].find('a')['href']
download_url = self.urls['home'] % entries[15].find('a')['href']
id = entries[23].find('div')['id']
seeders = int(entries[20].get_text())
leechers = int(entries[21].get_text())
except (AttributeError, TypeError):
continue
if mode != 'RSS' and (seeders < self.minseed or leechers < self.minleech):
continue
if not title or not download_url:
continue
item = title, download_url, id, seeders, leechers
logger.log(u"Found result: " + title + "(" + searchURL + ")", logger.DEBUG)
items[mode].append(item)
#Now attempt to get any others
result_table = html.find('table', attrs={'class': 'mainblockcontenttt'})
if not result_table:
continue
entries = result_table.find_all('td', attrs={'align': 'center', 'class': 'listas'})
if not entries:
continue
for result in entries:
block2 = result.find_parent('tr').find_next_sibling('tr')
if not block2:
continue
cells = block2.find_all('td')
try:
title = cells[1].find('b').get_text().strip('\t ').replace('Blu-ray', 'bd50')
url = self.urls['home'] % cells[4].find('a')['href']
download_url = self.urls['home'] % cells[4].find('a')['href']
detail = cells[1].find('a')['href']
id = detail.replace('details.php?id=', '')
seeders = int(cells[9].get_text())
leechers = int(cells[10].get_text())
except (AttributeError, TypeError):
continue
if mode != 'RSS' and (seeders < self.minseed or leechers < self.minleech):
continue
if not title or not download_url:
continue
item = title, download_url, id, seeders, leechers
logger.log(u"Found result: " + title + "(" + searchURL + ")", logger.DEBUG)
items[mode].append(item)
except Exception, e:
logger.log(u"Failed parsing " + self.name + " Traceback: " + traceback.format_exc(), logger.ERROR)
#For each search mode sort all the items by seeders
items[mode].sort(key=lambda tup: tup[3], reverse=True)
results += items[mode]
return results
def _get_title_and_url(self, item):
title, url, id, seeders, leechers = item
if title:
title = u'' + title
title = title.replace(' ', '.')
if url:
url = str(url).replace('&', '&')
return (title, url)
def findPropers(self, search_date=datetime.datetime.today()):
results = []
myDB = db.DBConnection()
sqlResults = myDB.select(
'SELECT s.show_name, e.showid, e.season, e.episode, e.status, e.airdate FROM tv_episodes AS e' +
' INNER JOIN tv_shows AS s ON (e.showid = s.indexer_id)' +
' WHERE e.airdate >= ' + str(search_date.toordinal()) +
' AND (e.status IN (' + ','.join([str(x) for x in Quality.DOWNLOADED]) + ')' +
' OR (e.status IN (' + ','.join([str(x) for x in Quality.SNATCHED]) + ')))'
)
if not sqlResults:
return []
for sqlshow in sqlResults:
self.show = curshow = helpers.findCertainShow(sickbeard.showList, int(sqlshow["showid"]))
if not self.show: continue
curEp = curshow.getEpisode(int(sqlshow["season"]), int(sqlshow["episode"]))
searchString = self._get_episode_search_strings(curEp, add_string='PROPER|REPACK')
for item in self._doSearch(searchString[0]):
title, url = self._get_title_and_url(item)
results.append(classes.Proper(title, url, datetime.datetime.today(), self.show))
return results
def seedRatio(self):
return self.ratio
class HDTorrentsCache(tvcache.TVCache):
def __init__(self, provider):
tvcache.TVCache.__init__(self, provider)
        # only poll HDTorrents every 20 minutes max
self.minTime = 20
def _getRSSData(self):
search_params = {'RSS': []}
return self.provider._doSearch(search_params)
provider = HDTorrentsProvider()
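# A minimal, self-contained sketch of the result-selection logic inside
# _doSearch above: items are (title, download_url, id, seeders, leechers)
# tuples, dropped when below the seeder/leecher minima and then sorted by
# seeders, descending. (The sample tuples are invented for illustration.)
if __name__ == "__main__":
    _minseed, _minleech = 2, 0
    _found = [('Show.S01E01.720p', 'u1', 1, 5, 3),
              ('Show.S01E01.1080p', 'u2', 2, 1, 0),
              ('Show.S01E02.720p', 'u3', 3, 9, 4)]
    _kept = [t for t in _found if t[3] >= _minseed and t[4] >= _minleech]
    _kept.sort(key=lambda tup: tup[3], reverse=True)
    assert [t[2] for t in _kept] == [3, 1]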
|
whitepyro/debian_server_setup
|
sickbeard/providers/hdtorrents.py
|
Python
|
gpl-3.0
| 13,199
|
[
"VisIt"
] |
3edaeb94f8d92c841228b7d6a8cb87eae749f7bbc21c15918dd4a4ea74aded37
|
# -*- coding: utf-8 -*-
from django.test import TestCase
from website.models import Bucketlist, BucketlistItem, UserProfile
from django.contrib.auth.models import User
class BucketListTestCase(TestCase):
def setUp(self):
self.title = "Before I turn 80"
user = User.objects.create_user(
username="john_doe",
first_name="John",
last_name="Doe",
email="john.doe@anon.ms")
user.save()
self.user = user
bucketlist = Bucketlist(
name=self.title,
user_id=self.user.id)
bucketlist.save()
def test_that_bucket_list_can_be_created(self):
"""Ensures that bucketlists can be created.
"""
bucketlist = Bucketlist.objects.get(
name=self.title)
self.assertEqual(type(bucketlist.id), int)
def test_that_bucket_list_item_can_be_created(self):
"""Ensures that bucketlist items can be created.
"""
bucketlist = Bucketlist.objects.get(
name=self.title)
bucketlist_item = BucketlistItem(
name="Visit India",
done=False,
bucketlist=bucketlist,
user_id=self.user.id)
bucketlist_item.save()
bucketlist_item = BucketlistItem.objects.get(
name="Visit India")
self.assertEqual(bucketlist_item.done, False)
self.assertEqual(
bucketlist_item.bucketlist_id,
bucketlist.id)
def test_that_models_can_be_queried(self):
"""Ensures that models can be queried.
"""
bucketlist = Bucketlist.objects.get(
name=self.title)
bucketlist_item = BucketlistItem(
name="Visit India",
done=False,
bucketlist=bucketlist,
user_id=self.user.id)
bucketlist_item.save()
query_results = BucketlistItem.search('Visit India')
self.assertIn(bucketlist_item, query_results)
def test_that_items_done_can_be_retrieved(self):
"""Ensure that items done can be retrieved
"""
bucketlist = Bucketlist.objects.get(
name=self.title)
bucketlist_item = BucketlistItem(
name="Visit India",
done=False,
bucketlist=bucketlist,
user_id=self.user.id)
bucketlist_item.save()
self.assertEqual(bucketlist.num_items_done(), 0)
bucketlist_item.done = True
bucketlist_item.save()
self.assertEqual(bucketlist.num_items_done(), 1)
def test_that_user_profile_can_be_created(self):
"""Ensure that user profile can be created
"""
user_profile = UserProfile(bio="Some bio info", age=24, user=self.user)
user_profile.save()
test_user_profile = UserProfile.objects.get(user=self.user)
self.assertEqual(test_user_profile.id, user_profile.id)
|
andela-osule/django-bucketlist-application
|
bucketlist/website/tests/test_models.py
|
Python
|
gpl-2.0
| 2,916
|
[
"VisIt"
] |
75886e793a056bc61379251332a4726e4cfca390f87ec59eef91f83e030083b4
|
# ------------------------------------------------------------
# MC911 - Compiler construction laboratory.
# IC - UNICAMP
#
# RA094139 - Marcelo Mingatos de Toledo
# RA093175 - Victor Fernando Pompeo Barbosa
#
# lya_codegen.py
# Lya Decorated AST Code Generator
#
# ------------------------------------------------------------
from .astnodevisitor import ASTNodeVisitor
from .lya_ast import *
from .lya_errors import *
from .lya_builtins import *
from .lya_lvminstruction import *
from .lya_scope import LyaScope
class CodeGenerator(ASTNodeVisitor):
"""
"""
def __init__(self):
super().__init__()
self.environment = None
self.current_scope = None # type: LyaScope
self.instructions = []
self.instructions_index = 0
self.labels_map = {}
self.errors = []
self.string_constants = []
def visit(self, node):
try:
super().visit(node)
except LyaError as err:
print(LyaColor.WARNING + str(err) + LyaColor.ENDC)
self.errors.append(err)
exit()
else:
# Called if no errors raised.
pass
finally:
# Called always.
pass
def _add_instruction(self, instruction: LyaInstruction):
instruction.index = self.instructions_index
self.instructions.append(instruction)
if isinstance(instruction, LBL):
self.labels_map[instruction.i] = instruction.index
self.instructions_index += 1
def _lookup_procedure(self, proc_call: ProcedureCall):
entry_procedure = self.current_scope.procedure_lookup(proc_call.identifier.name, proc_call.lineno)
if entry_procedure is None:
raise LyaNameError(proc_call.lineno, proc_call.identifier.name)
return entry_procedure
# Code Generation ----
def visit_Program(self, program: Program):
self.current_scope = program.scope
self._add_instruction(STP())
if program.offset != 0:
self._add_instruction(ALC(program.offset))
for stmts in program.statements:
self.visit(stmts)
if program.offset != 0:
self._add_instruction(DLC(program.offset))
self._add_instruction(END())
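    # For reference, the instruction layout emitted for a program is:
    #   STP            (program setup)
    #   ALC <offset>   (allocate locals; only emitted if offset != 0)
    #   ...visited statements...
    #   DLC <offset>   (deallocate locals; only emitted if offset != 0)
    #   END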
# Statement -------------------------------------------------
def visit_Declaration(self, declaration: Declaration):
if declaration.init is not None:
# TODO: Load string constant
for identifier in declaration.ids:
if declaration.init.exp_value is not None:
self._add_instruction(LDC(declaration.init.exp_value))
else:
self.visit(declaration.init)
self._add_instruction(STV(self.current_scope.level, identifier.displacement))
# Mode -----------------------------------------------
def visit_ArrayMode(self, array_mode: ArrayMode):
pass
# Procedure ------------------------------------------
def visit_ProcedureStatement(self, procedure: ProcedureStatement):
self.current_scope = procedure.scope
self._add_instruction(JMP(procedure.end_label))
self._add_instruction(LBL(procedure.start_label))
self._add_instruction(ENF(self.current_scope.level))
if procedure.offset != 0:
self._add_instruction(ALC(procedure.offset))
self.visit(procedure.definition)
self._add_instruction(LBL(procedure.return_label))
if procedure.offset != 0:
self._add_instruction(DLC(procedure.offset))
if procedure.identifier.raw_type is not LyaVoidType:
# Calculating the number of parameters received
mem_size = 0
for p in procedure.definition.parameters:
for i in p.ids:
if isinstance(i.raw_type, LyaArrayType):
mem_size += 1
else:
mem_size += i.raw_type.memory_size
self._add_instruction(RET(self.current_scope.level, mem_size))
self._add_instruction(LBL(procedure.end_label))
self.current_scope = self.current_scope.parent
def visit_ProcedureCall(self, call: ProcedureCall):
procedure = self._lookup_procedure(call)
ret = procedure.scope.ret
if ret is not None and ret.raw_type.memory_size > 0:
self._add_instruction(ALC(ret.raw_type.memory_size))
for expression in reversed(call.expressions):
sub_exp = expression.sub_expression
if isinstance(sub_exp, Location) and isinstance(sub_exp.type, Identifier):
if isinstance(sub_exp.type.raw_type, LyaArrayType):
self._add_instruction(LDR(sub_exp.type.scope_level, sub_exp.type.displacement))
else:
self.visit(expression)
elif isinstance(sub_exp, Expression):
if sub_exp.exp_value:
self._add_instruction(LDC(sub_exp.exp_value))
else:
self.visit(expression)
else:
self.visit(expression)
self._add_instruction(CFU(call.start_label))
# if procedure.definition.result.loc is QualifierType.ref_location:
# self._add_instruction(GRC())
def visit_ReturnAction(self, return_action: ReturnAction):
procedure = self.current_scope.enclosure # type: ProcedureStatement
if return_action.expression is not None:
self.visit(return_action.expression)
self._add_instruction(STV(self.current_scope.level, return_action.displacement))
self._add_instruction(JMP(procedure.return_label))
def visit_ResultAction(self, result: ResultAction):
result.expression.sub_expression.qualifier = QualifierType.location
self.visit(result.expression)
self._add_instruction(STV(self.current_scope.level, result.displacement))
def visit_BuiltinCall(self, builtin_call: BuiltinCall):
name = builtin_call.name
if name == 'print':
print_arg_list = builtin_call.expressions # type: Expression
for print_arg in print_arg_list:
self.visit(print_arg)
if isinstance(print_arg.raw_type, LyaStringType):
if isinstance(print_arg.sub_expression, StringConstant):
self._add_instruction(PRC(print_arg.sub_expression.heap_position))
                    # TODO: what to do if it's not constant?
elif isinstance(print_arg.sub_expression, LyaArrayType):
# TODO: Improve array printing
self._add_instruction(PRT(print_arg.sub_expression.length))
else:
self._add_instruction(PRV())
# if name == 'print':
# print_arg = builtin_call.expressions[0] # type: Expression
#
# self.visit(print_arg)
# if isinstance(print_arg.raw_type, LyaStringType):
# self._add_instruction(PRS())
# elif isinstance(print_arg.sub_expression, StringConstant):
# self._add_instruction(PRC(print_arg.sub_expression.heap_position))
# elif isinstance(print_arg.sub_expression, LyaArrayType):
        #         # TODO: Improve array printing
# self._add_instruction(PRT(print_arg.sub_expression.length))
# else:
# self._add_instruction(PRV())
if name == 'read':
read_arg_list = builtin_call.expressions # type: Expression
for read_arg in read_arg_list:
location = read_arg.sub_expression # type: Location
if isinstance(read_arg.raw_type, LyaStringType):
self._add_instruction(RDS())
# TODO: Test String
self._add_instruction(STS(read_arg.raw_type.length))
else:
if isinstance(location.type, Identifier):
self._add_instruction(RDV())
self._add_instruction(STV(location.type.scope_level, location.type.displacement))
elif isinstance(location.type, Element):
self.visit(location.type)
self._add_instruction(RDV())
self._add_instruction(SMV(location.type.raw_type.memory_size))
if name == 'lower':
read_arg = builtin_call.expressions[0] # type: Expression
location = read_arg.sub_expression # type: Location
raw_type = location.raw_type # type: LyaArrayType
self._add_instruction(LDC(raw_type.index_range[0]))
if name == 'upper':
read_arg = builtin_call.expressions[0] # type: Expression
location = read_arg.sub_expression # type: Location
raw_type = location.raw_type # type: LyaArrayType
self._add_instruction(LDC(raw_type.index_range[1]))
if name == 'length':
read_arg = builtin_call.expressions[0] # type: Expression
location = read_arg.sub_expression # type: Location
raw_type = location.raw_type
self._add_instruction(LDC(raw_type.length))
# Location
def visit_Location(self, location: Location):
if isinstance(location.type, Identifier):
# TODO: Other location types
if location.type.synonym_value is not None:
self._add_instruction(LDC(location.type.synonym_value))
elif location.type.qualifier is QualifierType.location:
self._add_instruction(LRV(location.type.scope_level, location.type.displacement))
elif location.type.qualifier is QualifierType.ref_location:
self._add_instruction(LDR(location.type.scope_level, location.type.displacement))
else:
self._add_instruction(LDV(location.type.scope_level, location.type.displacement))
else:
self.visit(location.type)
def visit_DereferencedReference(self, dereferenced_reference: DereferencedReference):
if isinstance(dereferenced_reference.loc.type, Identifier):
self._add_instruction(LRV(dereferenced_reference.loc.type.scope_level, dereferenced_reference.loc.type.displacement))
else:
self.visit(dereferenced_reference.loc.type)
def visit_ReferencedLocation(self, referenced_location: ReferencedLocation):
if isinstance(referenced_location.loc.type, Identifier):
self._add_instruction(LDR(referenced_location.loc.type.scope_level, referenced_location.loc.type.displacement))
else:
self.visit(referenced_location.loc.type)
def visit_Element(self, element: Element):
if isinstance(element.location, Identifier):
if element.location.displacement < 0: # Function Argument
self._add_instruction(LDV(element.location.scope_level, element.location.displacement))
else:
self._add_instruction(LDR(element.location.scope_level, element.location.displacement))
# self.visit(location.type)
# TODO: Not identifier?
else:
self.visit(element.location)
# TODO: More levels (len(expression) > 0)
exp = element.expressions[0]
self.visit(exp)
if isinstance(element.location.raw_type, LyaArrayType):
self._add_instruction(LDC(element.location.raw_type.index_range[0]))
# TODO: StringElement
self._add_instruction(SUB())
self._add_instruction(IDX(element.raw_type.memory_size))
# for expression in element.expressions:
# self.visit(expression)
# Constants / Literals ----------------------------------
def visit_IntegerConstant(self, iconst: IntegerConstant):
self._add_instruction(LDC(iconst.value))
def visit_BooleanConstant(self, bconst: BooleanConstant):
self._add_instruction(LDC(bconst.value))
def visit_CharacterConstant(self, cconst: CharacterConstant):
cconst.raw_type = LTF.char_type()
# def visit_EmptyConstant(self, econst: EmptyConstant):
# econst.raw_type = LTF.void_type()
# def visit_StringConstant(self, sconst: StringConstant):
# sconst.heap_position = self.environment.store_string_constant(sconst.value)
# sconst.raw_type = LTF.string_type(sconst.length)
# Expression
def visit_Expression(self, expression: Expression):
self.visit(expression.sub_expression)
if isinstance(expression.sub_expression, Location):
if isinstance(expression.sub_expression.type, Element):
# TODO: What if Element is another array, or ref?
self._add_instruction(GRC())
def visit_BinaryExpression(self, binary_expression: BinaryExpression):
left = binary_expression.left
right = binary_expression.right
op = binary_expression.operation
if isinstance(left, Location):
if isinstance(left.type, Identifier):
if left.type.synonym_value is not None:
self._add_instruction(LDC(left.type.synonym_value))
else:
self._add_instruction(LDV(left.type.scope_level, left.type.displacement))
elif isinstance(left.type, Element):
self.visit(left)
self._add_instruction(GRC())
else:
self.visit(left)
elif isinstance(left, Expression):
if left.exp_value and not isinstance(left.exp_value, StringConstant): # STRConstants are loaded into the heap
self._add_instruction(LDC(left.exp_value))
else:
self.visit(left)
else:
self.visit(left)
if isinstance(right, Location):
if isinstance(right.type, Identifier):
if right.type.synonym_value is not None:
self._add_instruction(LDC(right.type.synonym_value))
else:
self._add_instruction(LDV(right.type.scope_level, right.type.displacement))
elif isinstance(right.type, Element):
self.visit(right)
self._add_instruction(GRC())
else:
self.visit(right)
elif isinstance(right, Expression):
if right.exp_value and not isinstance(right.exp_value, StringConstant):
self._add_instruction(LDC(right.exp_value))
else:
self.visit(right)
else:
self.visit(right)
self._add_instruction(left.raw_type.get_binary_instruction(op))
def visit_RelationalExpression(self, relational_expression: RelationalExpression):
        # TODO: Refactor RelationalExpression
left = relational_expression.l_value
right = relational_expression.r_value
op = relational_expression.op
if isinstance(left, Location):
if isinstance(left.type, Identifier):
if left.type.synonym_value is not None:
self._add_instruction(LDC(left.type.synonym_value))
else:
self._add_instruction(LDV(left.type.scope_level, left.type.displacement))
elif isinstance(left.type, Element):
self.visit(left)
self._add_instruction(GRC())
else:
self.visit(left)
elif isinstance(left, Expression):
if left.exp_value and not isinstance(left.exp_value, StringConstant):
self._add_instruction(LDC(left.exp_value))
else:
self.visit(left)
else:
self.visit(left)
if isinstance(right, Location):
if isinstance(right.type, Identifier):
if right.type.synonym_value is not None:
self._add_instruction(LDC(right.type.synonym_value))
else:
self._add_instruction(LDV(right.type.scope_level, right.type.displacement))
elif isinstance(right.type, Element):
self.visit(right)
self._add_instruction(GRC())
else:
self.visit(right)
elif isinstance(right, Expression):
if right.exp_value and not isinstance(right.exp_value, StringConstant):
self._add_instruction(LDC(right.exp_value))
else:
self.visit(right)
else:
self.visit(right)
self._add_instruction(left.raw_type.get_relational_instruction(op))
def visit_UnaryExpression(self, unary_expression: UnaryExpression):
value = unary_expression.value
op = unary_expression.op
if isinstance(value, Location):
if isinstance(value.type, Identifier):
                if value.type.synonym_value is not None:
self._add_instruction(LDC(value.type.synonym_value))
else:
self._add_instruction(LDV(value.type.scope_level, value.type.displacement))
elif isinstance(value.type, Element):
self.visit(value)
self._add_instruction(GRC())
else:
self.visit(value)
elif isinstance(value, Expression):
if value.exp_value and not isinstance(value.exp_value, StringConstant):
self._add_instruction(LDC(value.exp_value))
else:
self.visit(value)
else:
self.visit(value)
self._add_instruction(value.raw_type.get_unary_instruction(op))
# Action -----------------------------------------------------------------------------------------------------------
def visit_LabeledAction(self, labeled_action: LabeledAction):
self.visit(labeled_action.action)
self._add_instruction(LBL(labeled_action.label))
# def visit_Action(self, action: Action):
# def visit_BracketedAction(self, bracketed_action: BracketedAction):
def visit_AssignmentAction(self, assignment: AssignmentAction):
if isinstance(assignment.location.type, ProcedureCall):
self.visit(assignment.location.type)
# Assignment Location
if isinstance(assignment.location.type, Identifier):
self.visit(assignment.expression)
if assignment.location.type.qualifier == QualifierType.location:
self._add_instruction(SRV(assignment.location.type.scope_level, assignment.location.type.displacement))
elif assignment.location.type.qualifier == QualifierType.ref_location:
self._add_instruction(SRV(assignment.location.type.scope_level, assignment.location.type.displacement))
else:
self._add_instruction(STV(assignment.location.type.scope_level, assignment.location.type.displacement))
elif isinstance(assignment.location.type, Element):
# Element assignment. Array.
self.visit(assignment.location)
self.visit(assignment.expression)
# TODO: String?
self._add_instruction(SMV(assignment.location.type.raw_type.memory_size))
elif isinstance(assignment.location.type, DereferencedReference):
self.visit(assignment.expression)
identifier = assignment.location.type.loc.type # type: Identifier
self._add_instruction(SRV(identifier.scope_level, identifier.displacement))
else:
self.visit(assignment.expression)
# Assignment Expression
if hasattr(assignment.expression, 'sub_expression'):
if isinstance(assignment.expression.sub_expression, Location):
if isinstance(assignment.expression.sub_expression.type, ProcedureCall):
procedure_call = assignment.expression.sub_expression.type
procedure_statement = self._lookup_procedure(procedure_call)
if procedure_statement.definition.result.loc == QualifierType.ref_location:
self._add_instruction(GRC())
# if isinstance(assignment.expression.sub_expression.type, Element):
            #     # TODO: Resolve the element and emit a GRC instruction
# pass
if isinstance(assignment.location.type, ProcedureCall):
            # TODO: Shouldn't we check whether the call has a return value before doing this?
self._add_instruction(SMV(assignment.location.type.raw_type.memory_size))
# IfAction ---------------------------------------------------------------------------------------------------------
def visit_IfAction(self, if_action: IfAction):
# IfAction
self.visit(if_action.boolean_expression)
self._add_instruction(JOF(if_action.next_label))
# ThenClause
self.visit(if_action.then_clause)
# ElseClause
if if_action.else_clause is not None:
self._add_instruction(JMP(if_action.exit_label))
self._add_instruction(LBL(if_action.next_label))
self.visit(if_action.else_clause)
self._add_instruction(LBL(if_action.exit_label))
else:
self._add_instruction(LBL(if_action.next_label))
# if if_action.exit_label is not None:
# self._add_instruction(LBL(if_action.exit_label))
def visit_ElsIfClause(self, else_if_clause: ElsIfClause):
# If
self.visit(else_if_clause.boolean_expression)
self._add_instruction(JOF(else_if_clause.next_label))
# Then
self.visit(else_if_clause.then_clause)
# Else
if else_if_clause.else_clause is not None:
self._add_instruction(JMP(else_if_clause.exit_label))
self._add_instruction(LBL(else_if_clause.next_label))
self.visit(else_if_clause.else_clause)
else:
self._add_instruction(LBL(else_if_clause.next_label))
# DoAction ---------------------------------------------------------------------------------------------------------
def visit_DoAction(self, do_action: DoAction):
# For setup.
if do_action.control.for_control is not None:
iteration = do_action.control.for_control.iteration
if isinstance(iteration, StepEnumeration):
self.visit(iteration.start_expression)
self._add_instruction(STV(iteration.identifier.scope_level,
iteration.identifier.displacement))
else:
# TODO: RangeEnumeration
pass
self._add_instruction(LBL(do_action.start_label))
# While Control. Stopping condition.
if do_action.control.while_control is not None:
self.visit(do_action.control.while_control.boolean_expression)
self._add_instruction(JOF(do_action.end_label))
for action in do_action.actions:
self.visit(action)
# For Control. Stopping condition.
if do_action.control.for_control is not None:
iteration = do_action.control.for_control.iteration
if isinstance(iteration, StepEnumeration):
# Push i (control identifier)
self._add_instruction(LDV(iteration.identifier.scope_level,
iteration.identifier.displacement))
if iteration.step_expression is not None:
# Push step value.
self.visit(iteration.step_expression)
else:
# Push 1.
self._add_instruction(LDC(1))
if iteration.down:
# i - step
self._add_instruction(SUB())
else:
# i + step
self._add_instruction(ADD())
# Storing updated i.
self._add_instruction(STV(iteration.identifier.scope_level,
iteration.identifier.displacement))
# Pushing i back to memory.
self._add_instruction(LDV(iteration.identifier.scope_level,
iteration.identifier.displacement))
# Push end value.
self.visit(iteration.end_expression)
# Checking if stopping condition reached.
if iteration.down:
# i < end
self._add_instruction(LES())
else:
# i > end
self._add_instruction(GRT())
# If not stopping condition, go to next iteration.
self._add_instruction(JOF(do_action.start_label))
else:
# TODO: RangeEnumeration
pass
# Next iteration.
if do_action.control.while_control is not None \
and do_action.control.for_control is None:
self._add_instruction(JMP(do_action.start_label))
if do_action.end_label is not None:
self._add_instruction(LBL(do_action.end_label))
# Exit Action
def visit_ExitAction(self, exit_action: ExitAction):
self._add_instruction(JMP(exit_action.exit_label))
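# A tiny, self-contained illustration of the label bookkeeping performed by
# _add_instruction above: LBL instructions record their final instruction
# index in labels_map so that jumps can later be resolved to positions.
# (MiniLBL is an invented stand-in, not part of the compiler.)
if __name__ == "__main__":
    class MiniLBL:
        def __init__(self, i):
            self.i = i
            self.index = None
    _instructions = []
    _labels_map = {}
    for _n, _instr in enumerate(["STP", MiniLBL(1), "ADD", MiniLBL(2)]):
        if isinstance(_instr, MiniLBL):
            _instr.index = _n  # mirrors instruction.index = self.instructions_index
            _labels_map[_instr.i] = _instr.index
        _instructions.append(_instr)
    assert _labels_map == {1: 1, 2: 3}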
|
MC911-MV-1s2016/lya-compiler-python
|
lyacompiler/lya_codegen.py
|
Python
|
bsd-3-clause
| 25,472
|
[
"VisIt"
] |
e7390bd668a7ec4cbaa1fdf023e90b6d9e46437086702f8710c86b3c72de3468
|
from pathlib import Path
import unittest
import unittest.mock as mock
from rt1.rtprocess import RTprocess
from rt1.rtresults import RTresults
from rt1.rtfits import MultiFits
import warnings
warnings.simplefilter("ignore")
# use "test_0_---" "test_1_---" to ensure test-order
class TestRTfits(unittest.TestCase):
def test_0_parallel_processing(self):
reader_args = [dict(gpi=i) for i in [1, 2, 3, 4]]
config_path = Path(__file__).parent.absolute() / "test_config.ini"
proc = RTprocess(config_path, autocontinue=True)
proc.run_processing(ncpu=4, reader_args=reader_args)
# run again to check what happens if files already exist
proc.run_processing(ncpu=4, reader_args=reader_args)
# ----------------------------------------- check if files have been copied
assert Path(
"tests/proc_test/dump01/cfg"
).exists(), "folder-generation did not work"
assert Path(
"tests/proc_test/dump01/results"
).exists(), "folder-generation did not work"
assert Path(
"tests/proc_test/dump01/dumps"
).exists(), "folder-generation did not work"
assert Path(
"tests/proc_test/dump01/cfg/test_config.ini"
).exists(), "copying did not work"
assert Path(
"tests/proc_test/dump01/cfg/parallel_processing_config.py"
).exists(), "copying did not work"
def test_1_rtresults(self):
results = RTresults("tests/proc_test")
assert hasattr(results, "dump01"), "dumpfolder not found by RTresults"
dumpfiles = [i for i in results.dump01.dump_files]
print(dumpfiles)
fit = results.dump01.load_fit()
cfg = results.dump01.load_cfg()
with results.dump01.load_nc() as ncfile:
processed_ids = list(ncfile.ID.values)
processed_ids.sort()
assert processed_ids == [1, 2, 3], "NetCDF export does not include all IDs"
# check if NetCDF_variables works as expected
results.dump01.NetCDF_variables
# remove the save_path directory
# print('deleting save_path directory...')
# shutil.rmtree(results._parent_path)
def test_2_single_core_processing(self):
reader_args = [dict(gpi=i) for i in [1, 2, 3, 4]]
config_path = Path(__file__).parent.absolute() / "test_config.ini"
# mock inputs as shown here: https://stackoverflow.com/a/37467870/9703451
with self.assertRaises(SystemExit):
with mock.patch("builtins.input", side_effect=["N"]):
proc = RTprocess(config_path, autocontinue=False)
proc.setup()
with self.assertRaises(SystemExit):
with mock.patch("builtins.input", side_effect=["N"]):
proc = RTprocess(config_path, autocontinue=False)
proc.setup()
with mock.patch("builtins.input", side_effect=["REMOVE", "Y"]):
proc = RTprocess(config_path, autocontinue=False)
proc.setup()
assert (
len(list(Path("tests/proc_test/dump01/dumps").iterdir())) == 0
), "user-input REMOVE did not work"
with mock.patch("builtins.input", side_effect=["REMOVE", "Y"]):
proc = RTprocess(config_path, autocontinue=False, copy=False)
proc.run_processing(ncpu=1, reader_args=reader_args)
# ----------------------------------------- check if files have been copied
assert Path(
"tests/proc_test/dump01/cfg"
).exists(), "folder-generation did not work"
assert Path(
"tests/proc_test/dump01/results"
).exists(), "folder-generation did not work"
assert Path(
"tests/proc_test/dump01/dumps"
).exists(), "folder-generation did not work"
assert not Path(
"tests/proc_test/dump01/cfg/test_config.ini"
).exists(), "NOT copying did not work"
assert not Path(
"tests/proc_test/dump01/cfg/parallel_processing_config.py"
).exists(), "NOT copying did not work"
# remove the save_path directory
# print('deleting save_path directory...')
# shutil.rmtree(proc.dumppath.parent)
def test_3_parallel_processing_init_kwargs(self):
# test overwriting keyword-args from .ini file
reader_args = [dict(gpi=i) for i in [1, 2, 3, 4]]
config_path = Path(__file__).parent.absolute() / "test_config.ini"
proc = RTprocess(config_path, autocontinue=True)
proc.override_config(
PROCESS_SPECS=dict(
path__save_path="tests/proc_test2",
dumpfolder="dump02",
)
)
proc.run_processing(ncpu=4, reader_args=reader_args)
# ----------------------------------------- check if files have been copied
assert Path(
"tests/proc_test2/dump02/cfg"
).exists(), "folder-generation did not work"
assert Path(
"tests/proc_test2/dump02/results"
).exists(), "folder-generation did not work"
assert Path(
"tests/proc_test2/dump02/dumps"
).exists(), "folder-generation did not work"
assert Path(
"tests/proc_test2/dump02/cfg/test_config.ini"
).exists(), "copying did not work"
assert Path(
"tests/proc_test2/dump02/cfg/parallel_processing_config.py"
).exists(), "copying did not work"
def test_4_postprocess_and_finalout(self):
config_path = Path(__file__).parent.absolute() / "test_config.ini"
reader_args = [dict(gpi=i) for i in [1, 2, 3, 4]]
proc = RTprocess(config_path)
proc.override_config(
PROCESS_SPECS=dict(path__save_path="tests/proc_test3", dumpfolder="dump03")
)
proc.run_processing(ncpu=4, reader_args=reader_args, postprocess=False)
results = RTresults("tests/proc_test3")
assert hasattr(results, "dump03"), "dumpfolder dump02 not found by RTresults"
finalout_name = results.dump03.load_cfg().get_process_specs()["finalout_name"]
assert not Path(
f"tests/proc_test3/dump03/results/{finalout_name}.nc"
).exists(), "disabling postprocess did not work"
proc.run_finaloutput(ncpu=1, finalout_name="ncpu_1.nc")
assert Path(
"tests/proc_test3/dump03/results/ncpu_1.nc"
).exists(), "run_finalout with ncpu=1 not work"
proc.run_finaloutput(ncpu=4, finalout_name="ncpu_2.nc")
assert Path(
"tests/proc_test3/dump03/results/ncpu_2.nc"
).exists(), "run_finalout with ncpu=2 not work"
def test_5_multiconfig(self):
config_path = Path(__file__).parent.absolute() / "test_config_multi.ini"
reader_args = [dict(gpi=i) for i in [1, 2, 3, 4]]
proc = RTprocess(config_path)
proc.run_processing(ncpu=4, reader_args=reader_args, postprocess=True)
for cfg in ["cfg_0", "cfg_1"]:
# check if all folders are properly initialized
# assert Path(
# f"tests/proc_multi/dump01/dumps/{cfg}"
# ).exists(), "multiconfig folder generation did not work"
# assert Path(
# f"tests/proc_multi/dump01/dumps/{cfg}"
# ).exists(), "multiconfig folder generation did not work"
# check if model-definition files are written correctly
assert Path(
f"tests/proc_multi/dump01/cfg/model_definition__{cfg}.txt"
).exists(), "multiconfig model_definition.txt export did not work"
# # check if netcdf have been exported
# assert Path(
# f"tests/proc_multi/dump01/results/{cfg}/results.nc"
# ).exists(), "multiconfig NetCDF export did not work"
# check if netcdf have been exported
assert Path(
f"tests/proc_multi/dump01/results/results__{cfg}.nc"
).exists(), "multiconfig NetCDF export did not work"
def test_7_multiconfig_finalout(self):
config_path = Path(__file__).parent.absolute() / "test_config_multi.ini"
proc = RTprocess(config_path)
proc.run_finaloutput(
ncpu=1,
finalout_name="ncpu1.nc",
)
proc.run_finaloutput(
ncpu=3,
finalout_name="ncpu3.nc",
)
for cfg in ["cfg_0", "cfg_1"]:
# check if all folders are properly initialized
assert Path(
f"tests/proc_multi/dump01/results/ncpu1__{cfg}.nc"
).exists(), "multiconfig finaloutput with ncpu=1 did not work"
assert Path(
f"tests/proc_multi/dump01/results/ncpu3__{cfg}.nc"
).exists(), "multiconfig finaloutput with ncpu=3 did not work"
def test_8_multiconfig_rtresults(self):
res = RTresults("tests/proc_multi")
mfit = res.dump01.load_fit()
assert isinstance(mfit, MultiFits), "the dumped fitobject is not a MultiFits!"
for cfg in ["cfg_0", "cfg_1"]:
# check if all configs are added
assert hasattr(mfit.configs, cfg), "multi-results are not properly added"
def test_9_multiconfig_props(self):
# check if properties have been set correctly
res = RTresults("tests/proc_multi")
mfit = res.dump01.load_fit()
fit = mfit.configs.cfg_0
assert fit.defdict["omega"][0] is False, "multiconfig props not correct"
assert fit.defdict["omega"][1] == 0.5, "multiconfig props not correct"
assert fit.int_Q is False, "multiconfig props not correct"
assert str(fit.V.t) == "t_v", "multiconfig props not correct"
assert fit.SRF.ncoefs == 10, "multiconfig props not correct"
assert fit.lsq_kwargs["ftol"] == 0.0001, "multiconfig props not correct"
# -----------------------------
fit = mfit.configs.cfg_1
assert fit.defdict["omega"][0] is True, "multiconfig props not correct"
assert fit.defdict["omega"][1] == 0.05, "multiconfig props not correct"
assert fit.defdict["omega"][2] == "2M", "multiconfig props not correct"
assert fit.int_Q is True, "multiconfig props not correct"
assert fit.V.t == 0.25, "multiconfig props not correct"
assert fit.SRF.ncoefs == 5, "multiconfig props not correct"
assert fit.lsq_kwargs["ftol"] == 0.001, "multiconfig props not correct"
def test_91_export_results(self):
for folder in ["proc_test", "proc_test2", "proc_test3", "proc_multi"]:
proc = RTprocess(f"tests/{folder}")
parameters = dict(t_s=dict(long_name="bare soil directionality"),
tau=dict(long_name="optical depth"),
sig2=dict(long_name="sigma0 squared")
)
metrics = dict(R=["pearson", "sig", "tot",
dict(long_name="sig0 pearson correlation")],
RMSD=["rmsd", "sig", "tot",
dict(long_name="sig0 RMSD")]
)
export_functions = dict(sig2=lambda fit: fit.dataset.sig**2)
attributes = dict(info="some info")
res = proc.export_data(parameters=parameters,
metrics=metrics,
export_functions=export_functions,
attributes=attributes,
index_col='gpi')
if folder != "proc_multi":
# make single-fit results a dict as well so that they can be treated
# in the same way as multi-fits
res = dict(single_fit=res)
for cfg_name, useres in res.items():
assert useres.R.dims == ('gpi',), "metric dim is wrong"
assert useres.RMSD.dims == ('gpi',), "metric dim is wrong"
assert useres.t_s.dims == ('gpi',), "static parameter dim is wrong"
assert useres.tau.dims == ('gpi', 'date'), "dynamic parameter dim is wrong"
assert useres.sig2.dims == ('gpi', 'date'), "dynamic parameter dim is wrong"
# check if attributes are correctly attached
assert useres.attrs['info'] == attributes['info'], "attributes not correctly attached"
for key, attrs in parameters.items():
for name, a in attrs.items():
assert useres[key].attrs[name] == a, "attributes not correctly attached"
assert "model_definition" in useres.attrs, "model_definition not correctly attached"
if __name__ == "__main__":
unittest.main()
|
TUW-GEO/rt1
|
tests/test_parallel_processing.py
|
Python
|
apache-2.0
| 12,780
|
[
"NetCDF"
] |
0573e2c505a1a145c8d2d4e0fc4039de2b03a8977d1d2547781db4bba8422000
|
from __future__ import absolute_import
import glob, mmap, os, struct
from bup import _helpers
from bup.helpers import log, mmap_read
MIDX_VERSION = 4
extract_bits = _helpers.extract_bits
_total_searches = 0
_total_steps = 0
class PackMidx:
"""Wrapper which contains data from multiple index files.
Multiple index (.midx) files constitute a wrapper around index (.idx) files
and make it possible for bup to expand Git's indexing capabilities to vast
amounts of files.
"""
def __init__(self, filename):
self.name = filename
self.force_keep = False
self.map = None
assert(filename.endswith('.midx'))
self.map = mmap_read(open(filename))
if str(self.map[0:4]) != 'MIDX':
log('Warning: skipping: invalid MIDX header in %r\n' % filename)
self.force_keep = True
return self._init_failed()
ver = struct.unpack('!I', self.map[4:8])[0]
if ver < MIDX_VERSION:
log('Warning: ignoring old-style (v%d) midx %r\n'
% (ver, filename))
self.force_keep = False # old stuff is boring
return self._init_failed()
if ver > MIDX_VERSION:
log('Warning: ignoring too-new (v%d) midx %r\n'
% (ver, filename))
self.force_keep = True # new stuff is exciting
return self._init_failed()
self.bits = _helpers.firstword(self.map[8:12])
self.entries = 2**self.bits
self.fanout = buffer(self.map, 12, self.entries*4)
self.sha_ofs = 12 + self.entries*4
self.nsha = nsha = self._fanget(self.entries-1)
self.shatable = buffer(self.map, self.sha_ofs, nsha*20)
self.which_ofs = self.sha_ofs + 20*nsha
self.whichlist = buffer(self.map, self.which_ofs, nsha*4)
self.idxnames = str(self.map[self.which_ofs + 4*nsha:]).split('\0')
def __del__(self):
self.close()
def _init_failed(self):
self.bits = 0
self.entries = 1
self.fanout = buffer('\0\0\0\0')
self.shatable = buffer('\0'*20)
self.idxnames = []
def _fanget(self, i):
start = i*4
s = self.fanout[start:start+4]
return _helpers.firstword(s)
def _get(self, i):
return str(self.shatable[i*20:(i+1)*20])
def _get_idx_i(self, i):
return struct.unpack('!I', self.whichlist[i*4:(i+1)*4])[0]
def _get_idxname(self, i):
return self.idxnames[self._get_idx_i(i)]
def close(self):
if self.map is not None:
self.map.close()
self.map = None
def exists(self, hash, want_source=False):
"""Return nonempty if the object exists in the index files."""
global _total_searches, _total_steps
_total_searches += 1
want = str(hash)
el = extract_bits(want, self.bits)
if el:
start = self._fanget(el-1)
startv = el << (32-self.bits)
else:
start = 0
startv = 0
end = self._fanget(el)
endv = (el+1) << (32-self.bits)
_total_steps += 1 # lookup table is a step
hashv = _helpers.firstword(hash)
#print '(%08x) %08x %08x %08x' % (extract_bits(want, 32), startv, hashv, endv)
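        # Interpolation search (illustrative note): rather than bisecting,
        # estimate where 'want' falls between start and end by linearly
        # interpolating on the first 32 bits of the hash; for uniformly
        # distributed SHA-1s this converges in far fewer steps than a
        # plain binary search.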
while start < end:
_total_steps += 1
#print '! %08x %08x %08x %d - %d' % (startv, hashv, endv, start, end)
mid = start + (hashv-startv)*(end-start-1)/(endv-startv)
#print ' %08x %08x %08x %d %d %d' % (startv, hashv, endv, start, mid, end)
v = self._get(mid)
#print ' %08x' % self._num(v)
if v < want:
start = mid+1
startv = _helpers.firstword(v)
elif v > want:
end = mid
endv = _helpers.firstword(v)
else: # got it!
return want_source and self._get_idxname(mid) or True
return None
def __iter__(self):
for i in xrange(self._fanget(self.entries-1)):
yield buffer(self.shatable, i*20, 20)
def __len__(self):
return int(self._fanget(self.entries-1))
def clear_midxes(dir=None):
for midx in glob.glob(os.path.join(dir, '*.midx')):
os.unlink(midx)
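# Illustrative sketch (not part of bup): typical PackMidx usage. The midx
# path and the 20-byte object id below are hypothetical placeholders.
if __name__ == '__main__':
    m = PackMidx('/repo/objects/pack/multi.midx')  # hypothetical path
    sha = '\x00' * 20  # hypothetical binary object id
    print 'indexed in:', m.exists(sha, want_source=True)
    m.close()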
|
ToxicFrog/bup
|
lib/bup/midx.py
|
Python
|
lgpl-2.1
| 4,297
|
[
"exciting"
] |
c5431211eef07cc7d5d73cfdf07a73867deafad0202d48b0a4915ad603d8423a
|
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import importlib
import pandas as pd
import numpy as np
from pyspark.ml.feature import Bucketizer
from pyspark.mllib.stat import KernelDensity # type: ignore[no-redef]
from pyspark.sql import functions as F
from pandas.core.base import PandasObject
from pandas.core.dtypes.inference import is_integer
from pyspark.pandas.missing import unsupported_function
from pyspark.pandas.config import get_option
from pyspark.pandas.spark import functions as SF
from pyspark.pandas.utils import name_like_string
class TopNPlotBase:
def get_top_n(self, data):
from pyspark.pandas import DataFrame, Series
max_rows = get_option("plotting.max_rows")
        # Use only the first `plotting.max_rows` elements (fetch one extra row
        # to detect truncation) and convert them into a pandas DataFrame.
# For categorical variables, it is likely called from df.x.value_counts().plot.xxx().
if isinstance(data, (Series, DataFrame)):
data = data.head(max_rows + 1)._to_pandas()
else:
raise TypeError("Only DataFrame and Series are supported for plotting.")
self.partial = False
if len(data) > max_rows:
self.partial = True
data = data.iloc[:max_rows]
return data
def set_result_text(self, ax):
max_rows = get_option("plotting.max_rows")
assert hasattr(self, "partial")
if self.partial:
ax.text(
1,
1,
"showing top {} elements only".format(max_rows),
size=6,
ha="right",
va="bottom",
transform=ax.transAxes,
)
class SampledPlotBase:
def get_sampled(self, data):
from pyspark.pandas import DataFrame, Series
fraction = get_option("plotting.sample_ratio")
if fraction is None:
fraction = 1 / (len(data) / get_option("plotting.max_rows"))
fraction = min(1.0, fraction)
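        # e.g. 10,000 rows with plotting.max_rows=1000 gives fraction 0.1;
        # the cap at 1.0 means small frames are simply taken whole.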
self.fraction = fraction
if isinstance(data, (DataFrame, Series)):
if isinstance(data, Series):
data = data.to_frame()
sampled = data._internal.resolved_copy.spark_frame.sample(fraction=self.fraction)
return DataFrame(data._internal.with_new_sdf(sampled))._to_pandas()
else:
raise TypeError("Only DataFrame and Series are supported for plotting.")
def set_result_text(self, ax):
assert hasattr(self, "fraction")
if self.fraction < 1:
ax.text(
1,
1,
"showing the sampled result by fraction %s" % self.fraction,
size=6,
ha="right",
va="bottom",
transform=ax.transAxes,
)
class NumericPlotBase:
@staticmethod
def prepare_numeric_data(data):
from pyspark.pandas.series import Series
if isinstance(data, Series):
data = data.to_frame()
numeric_data = data.select_dtypes(
include=["byte", "decimal", "integer", "float", "long", "double", np.datetime64]
)
# no empty frames or series allowed
if len(numeric_data.columns) == 0:
raise TypeError(
"Empty {0!r}: no numeric data to " "plot".format(numeric_data.__class__.__name__)
)
return data, numeric_data
class HistogramPlotBase(NumericPlotBase):
@staticmethod
def prepare_hist_data(data, bins):
data, numeric_data = NumericPlotBase.prepare_numeric_data(data)
if is_integer(bins):
# computes boundaries for the column
bins = HistogramPlotBase.get_bins(data._to_spark(), bins)
return numeric_data, bins
@staticmethod
def get_bins(sdf, bins):
# 'data' is a Spark DataFrame that selects all columns.
if len(sdf.columns) > 1:
min_col = F.least(*map(F.min, sdf))
max_col = F.greatest(*map(F.max, sdf))
else:
min_col = F.min(sdf.columns[-1])
max_col = F.max(sdf.columns[-1])
boundaries = sdf.select(min_col, max_col).first()
# divides the boundaries into bins
if boundaries[0] == boundaries[1]:
boundaries = (boundaries[0] - 0.5, boundaries[1] + 0.5)
return np.linspace(boundaries[0], boundaries[1], bins + 1)
@staticmethod
def compute_hist(psdf, bins):
# 'data' is a Spark DataFrame that selects one column.
assert isinstance(bins, (np.ndarray, np.generic))
sdf = psdf._internal.spark_frame
scols = []
input_column_names = []
for label in psdf._internal.column_labels:
input_column_name = name_like_string(label)
input_column_names.append(input_column_name)
scols.append(psdf._internal.spark_column_for(label).alias(input_column_name))
sdf = sdf.select(*scols)
# 1. Make the bucket output flat to:
# +----------+-------+
# |__group_id|buckets|
# +----------+-------+
# |0 |0.0 |
# |0 |0.0 |
# |0 |1.0 |
# |0 |2.0 |
# |0 |3.0 |
# |0 |3.0 |
# |1 |0.0 |
# |1 |1.0 |
# |1 |1.0 |
# |1 |2.0 |
# |1 |1.0 |
# |1 |0.0 |
# +----------+-------+
colnames = sdf.columns
bucket_names = ["__{}_bucket".format(colname) for colname in colnames]
output_df = None
for group_id, (colname, bucket_name) in enumerate(zip(colnames, bucket_names)):
# creates a Bucketizer to get corresponding bin of each value
bucketizer = Bucketizer(
splits=bins, inputCol=colname, outputCol=bucket_name, handleInvalid="skip"
)
bucket_df = bucketizer.transform(sdf)
if output_df is None:
output_df = bucket_df.select(
SF.lit(group_id).alias("__group_id"), F.col(bucket_name).alias("__bucket")
)
else:
output_df = output_df.union(
bucket_df.select(
SF.lit(group_id).alias("__group_id"), F.col(bucket_name).alias("__bucket")
)
)
# 2. Calculate the count based on each group and bucket.
# +----------+-------+------+
# |__group_id|buckets| count|
# +----------+-------+------+
# |0 |0.0 |2 |
# |0 |1.0 |1 |
# |0 |2.0 |1 |
# |0 |3.0 |2 |
# |1 |0.0 |2 |
# |1 |1.0 |3 |
# |1 |2.0 |1 |
# +----------+-------+------+
result = (
output_df.groupby("__group_id", "__bucket")
.agg(F.count("*").alias("count"))
.toPandas()
.sort_values(by=["__group_id", "__bucket"])
)
# 3. Fill empty bins and calculate based on each group id. From:
# +----------+--------+------+
# |__group_id|__bucket| count|
# +----------+--------+------+
# |0 |0.0 |2 |
# |0 |1.0 |1 |
# |0 |2.0 |1 |
# |0 |3.0 |2 |
# +----------+--------+------+
# +----------+--------+------+
# |__group_id|__bucket| count|
# +----------+--------+------+
# |1 |0.0 |2 |
# |1 |1.0 |3 |
# |1 |2.0 |1 |
# +----------+--------+------+
#
# to:
# +-----------------+
# |__values1__bucket|
# +-----------------+
# |2 |
# |1 |
# |1 |
# |2 |
# |0 |
# +-----------------+
# +-----------------+
# |__values2__bucket|
# +-----------------+
# |2 |
# |3 |
# |1 |
# |0 |
# |0 |
# +-----------------+
output_series = []
for i, (input_column_name, bucket_name) in enumerate(zip(input_column_names, bucket_names)):
current_bucket_result = result[result["__group_id"] == i]
# generates a pandas DF with one row for each bin
# we need this as some of the bins may be empty
indexes = pd.DataFrame({"__bucket": np.arange(0, len(bins) - 1)})
# merges the bins with counts on it and fills remaining ones with zeros
pdf = indexes.merge(current_bucket_result, how="left", on=["__bucket"]).fillna(0)[
["count"]
]
pdf.columns = [input_column_name]
output_series.append(pdf[input_column_name])
return output_series
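    # Illustrative sketch (not part of the pyspark API): driving the helpers
    # above by hand, assuming a running Spark session.
    #
    #   import pyspark.pandas as ps
    #   psdf = ps.DataFrame({"x": [1.0, 2.0, 2.5, 4.0]})
    #   numeric_data, bins = HistogramPlotBase.prepare_hist_data(psdf, bins=4)
    #   counts = HistogramPlotBase.compute_hist(numeric_data, bins)
    #   # 'counts' is a list with one pd.Series of per-bin counts per column.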
class BoxPlotBase:
@staticmethod
def compute_stats(data, colname, whis, precision):
# Computes mean, median, Q1 and Q3 with approx_percentile and precision
pdf = data._psdf._internal.resolved_copy.spark_frame.agg(
*[
F.expr(
"approx_percentile(`{}`, {}, {})".format(colname, q, int(1.0 / precision))
).alias("{}_{}%".format(colname, int(q * 100)))
for q in [0.25, 0.50, 0.75]
],
F.mean("`%s`" % colname).alias("{}_mean".format(colname)),
).toPandas()
# Computes IQR and Tukey's fences
iqr = "{}_iqr".format(colname)
p75 = "{}_75%".format(colname)
p25 = "{}_25%".format(colname)
pdf.loc[:, iqr] = pdf.loc[:, p75] - pdf.loc[:, p25]
pdf.loc[:, "{}_lfence".format(colname)] = pdf.loc[:, p25] - whis * pdf.loc[:, iqr]
pdf.loc[:, "{}_ufence".format(colname)] = pdf.loc[:, p75] + whis * pdf.loc[:, iqr]
qnames = ["25%", "50%", "75%", "mean", "lfence", "ufence"]
col_summ = pdf[["{}_{}".format(colname, q) for q in qnames]]
col_summ.columns = qnames
lfence, ufence = col_summ["lfence"], col_summ["ufence"]
stats = {
"mean": col_summ["mean"].values[0],
"med": col_summ["50%"].values[0],
"q1": col_summ["25%"].values[0],
"q3": col_summ["75%"].values[0],
}
return stats, (lfence.values[0], ufence.values[0])
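    # Note (illustrative): with whis=1.5 these are the classic Tukey fences,
    # lfence = Q1 - 1.5*IQR and ufence = Q3 + 1.5*IQR; values outside them
    # are the fliers handled by outliers() and get_fliers() below.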
@staticmethod
def outliers(data, colname, lfence, ufence):
# Builds expression to identify outliers
expression = F.col("`%s`" % colname).between(lfence, ufence)
# Creates a column to flag rows as outliers or not
return data._psdf._internal.resolved_copy.spark_frame.withColumn(
"__{}_outlier".format(colname), ~expression
)
@staticmethod
def calc_whiskers(colname, outliers):
# Computes min and max values of non-outliers - the whiskers
minmax = (
outliers.filter("not `__{}_outlier`".format(colname))
            .agg(F.min("`%s`" % colname).alias("min"), F.max("`%s`" % colname).alias("max"))
.toPandas()
)
return minmax.iloc[0][["min", "max"]].values
@staticmethod
def get_fliers(colname, outliers, min_val):
# Filters only the outliers, should "showfliers" be True
fliers_df = outliers.filter("`__{}_outlier`".format(colname))
        # If fliers are shown, take up to ~1k of them: normalize the values by
        # subtracting the minimum from each, then order by absolute value.
order_col = F.abs(F.col("`{}`".format(colname)) - min_val.item())
fliers = (
fliers_df.select(F.col("`{}`".format(colname)))
.orderBy(order_col)
.limit(1001)
.toPandas()[colname]
.values
)
return fliers
class KdePlotBase(NumericPlotBase):
@staticmethod
def prepare_kde_data(data):
_, numeric_data = NumericPlotBase.prepare_numeric_data(data)
return numeric_data
@staticmethod
def get_ind(sdf, ind):
def calc_min_max():
if len(sdf.columns) > 1:
min_col = F.least(*map(F.min, sdf))
max_col = F.greatest(*map(F.max, sdf))
else:
min_col = F.min(sdf.columns[-1])
max_col = F.max(sdf.columns[-1])
return sdf.select(min_col, max_col).first()
if ind is None:
min_val, max_val = calc_min_max()
sample_range = max_val - min_val
ind = np.linspace(
min_val - 0.5 * sample_range,
max_val + 0.5 * sample_range,
1000,
)
elif is_integer(ind):
min_val, max_val = calc_min_max()
sample_range = max_val - min_val
ind = np.linspace(
min_val - 0.5 * sample_range,
max_val + 0.5 * sample_range,
ind,
)
return ind
@staticmethod
def compute_kde(sdf, bw_method=None, ind=None):
# 'sdf' is a Spark DataFrame that selects one column.
        # Using the RDD API is slow, so this may switch to a Dataset-based
        # implementation once Spark provides one.
sample = sdf.rdd.map(lambda x: float(x[0]))
kd = KernelDensity()
kd.setSample(sample)
        assert isinstance(bw_method, (int, float)), "'bw_method' must be set as a scalar number."
        # Match the bandwidth with Spark; the assert above already rules out
        # None, so no separate None check is needed.
        kd.setBandwidth(float(bw_method))
return kd.estimate(list(map(float, ind)))
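# Illustrative sketch (not part of the pyspark API): evaluating the KDE
# directly, assuming 'sdf' is a one-column Spark DataFrame of floats.
#
#   ind = KdePlotBase.get_ind(sdf, ind=100)            # 100 evaluation points
#   y = KdePlotBase.compute_kde(sdf, bw_method=0.3, ind=ind)
#   # 'y' holds the estimated density at each point in 'ind'.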
class PandasOnSparkPlotAccessor(PandasObject):
"""
Series/Frames plotting accessor and method.
Uses the backend specified by the
option ``plotting.backend``. By default, plotly is used.
Plotting methods can also be accessed by calling the accessor as a method
with the ``kind`` argument:
``s.plot(kind='hist')`` is equivalent to ``s.plot.hist()``
"""
pandas_plot_data_map = {
"pie": TopNPlotBase().get_top_n,
"bar": TopNPlotBase().get_top_n,
"barh": TopNPlotBase().get_top_n,
"scatter": TopNPlotBase().get_top_n,
"area": SampledPlotBase().get_sampled,
"line": SampledPlotBase().get_sampled,
}
_backends = {} # type: ignore[var-annotated]
def __init__(self, data):
self.data = data
@staticmethod
def _find_backend(backend):
"""
Find a pandas-on-Spark plotting backend
"""
try:
return PandasOnSparkPlotAccessor._backends[backend]
except KeyError:
try:
module = importlib.import_module(backend)
except ImportError:
# We re-raise later on.
pass
else:
if hasattr(module, "plot") or hasattr(module, "plot_pandas_on_spark"):
# Validate that the interface is implemented when the option
# is set, rather than at plot time.
PandasOnSparkPlotAccessor._backends[backend] = module
return module
raise ValueError(
"Could not find plotting backend '{backend}'. Ensure that you've installed "
"the package providing the '{backend}' entrypoint, or that the package has a "
"top-level `.plot` method.".format(backend=backend)
)
@staticmethod
def _get_plot_backend(backend=None):
backend = backend or get_option("plotting.backend")
# Shortcut
if backend in PandasOnSparkPlotAccessor._backends:
return PandasOnSparkPlotAccessor._backends[backend]
if backend == "matplotlib":
# Because matplotlib is an optional dependency,
# we need to attempt an import here to raise an ImportError if needed.
try:
# test if matplotlib can be imported
import matplotlib # noqa: F401
from pyspark.pandas.plot import matplotlib as module
except ImportError:
raise ImportError(
"matplotlib is required for plotting when the "
"default backend 'matplotlib' is selected."
) from None
PandasOnSparkPlotAccessor._backends["matplotlib"] = module
elif backend == "plotly":
try:
# test if plotly can be imported
import plotly # noqa: F401
from pyspark.pandas.plot import plotly as module
except ImportError:
raise ImportError(
"plotly is required for plotting when the "
"default backend 'plotly' is selected."
) from None
PandasOnSparkPlotAccessor._backends["plotly"] = module
else:
module = PandasOnSparkPlotAccessor._find_backend(backend)
PandasOnSparkPlotAccessor._backends[backend] = module
return module
def __call__(self, kind="line", backend=None, **kwargs):
plot_backend = PandasOnSparkPlotAccessor._get_plot_backend(backend)
plot_data = self.data
kind = {"density": "kde"}.get(kind, kind)
if hasattr(plot_backend, "plot_pandas_on_spark"):
# use if there's pandas-on-Spark specific method.
return plot_backend.plot_pandas_on_spark(plot_data, kind=kind, **kwargs)
else:
            # Fall back to pandas' plot machinery.
            if not PandasOnSparkPlotAccessor.pandas_plot_data_map.get(kind):
raise NotImplementedError(
"'%s' plot is not supported with '%s' plot "
"backend yet." % (kind, plot_backend.__name__)
)
plot_data = PandasOnSparkPlotAccessor.pandas_plot_data_map[kind](plot_data)
return plot_backend.plot(plot_data, kind=kind, **kwargs)
def line(self, x=None, y=None, **kwargs):
"""
Plot DataFrame/Series as lines.
This function is useful to plot lines using Series's values
as coordinates.
Parameters
----------
x : int or str, optional
Columns to use for the horizontal axis.
Either the location or the label of the columns to be used.
By default, it will use the DataFrame indices.
y : int, str, or list of them, optional
The values to be plotted.
Either the location or the label of the columns to be used.
By default, it will use the remaining DataFrame numeric columns.
**kwds
Keyword arguments to pass on to :meth:`Series.plot` or :meth:`DataFrame.plot`.
Returns
-------
:class:`plotly.graph_objs.Figure`
            Return a custom object when ``backend!=plotly``.
Return an ndarray when ``subplots=True`` (matplotlib-only).
See Also
--------
plotly.express.line : Plot y versus x as lines and/or markers (plotly).
matplotlib.pyplot.plot : Plot y versus x as lines and/or markers (matplotlib).
Examples
--------
Basic plot.
For Series:
.. plotly::
>>> s = ps.Series([1, 3, 2])
>>> s.plot.line() # doctest: +SKIP
For DataFrame:
.. plotly::
The following example shows the populations for some animals
over the years.
>>> df = ps.DataFrame({'pig': [20, 18, 489, 675, 1776],
... 'horse': [4, 25, 281, 600, 1900]},
... index=[1990, 1997, 2003, 2009, 2014])
>>> df.plot.line() # doctest: +SKIP
.. plotly::
The following example shows the relationship between both
populations.
>>> df = ps.DataFrame({'pig': [20, 18, 489, 675, 1776],
... 'horse': [4, 25, 281, 600, 1900]},
... index=[1990, 1997, 2003, 2009, 2014])
>>> df.plot.line(x='pig', y='horse') # doctest: +SKIP
"""
return self(kind="line", x=x, y=y, **kwargs)
def bar(self, x=None, y=None, **kwds):
"""
Vertical bar plot.
Parameters
----------
x : label or position, optional
Allows plotting of one column versus another.
If not specified, the index of the DataFrame is used.
y : label or position, optional
Allows plotting of one column versus another.
If not specified, all numerical columns are used.
**kwds : optional
Additional keyword arguments are documented in
:meth:`pyspark.pandas.Series.plot` or
:meth:`pyspark.pandas.DataFrame.plot`.
Returns
-------
:class:`plotly.graph_objs.Figure`
            Return a custom object when ``backend!=plotly``.
Return an ndarray when ``subplots=True`` (matplotlib-only).
Examples
--------
Basic plot.
For Series:
.. plotly::
>>> s = ps.Series([1, 3, 2])
>>> s.plot.bar() # doctest: +SKIP
For DataFrame:
.. plotly::
>>> df = ps.DataFrame({'lab': ['A', 'B', 'C'], 'val': [10, 30, 20]})
>>> df.plot.bar(x='lab', y='val') # doctest: +SKIP
Plot a whole dataframe to a bar plot. Each column is stacked with a
distinct color along the horizontal axis.
.. plotly::
>>> speed = [0.1, 17.5, 40, 48, 52, 69, 88]
>>> lifespan = [2, 8, 70, 1.5, 25, 12, 28]
>>> index = ['snail', 'pig', 'elephant',
... 'rabbit', 'giraffe', 'coyote', 'horse']
>>> df = ps.DataFrame({'speed': speed,
... 'lifespan': lifespan}, index=index)
>>> df.plot.bar() # doctest: +SKIP
Instead of stacking, the figure can be split by column with plotly
APIs.
.. plotly::
>>> from plotly.subplots import make_subplots
>>> speed = [0.1, 17.5, 40, 48, 52, 69, 88]
>>> lifespan = [2, 8, 70, 1.5, 25, 12, 28]
>>> index = ['snail', 'pig', 'elephant',
... 'rabbit', 'giraffe', 'coyote', 'horse']
>>> df = ps.DataFrame({'speed': speed,
... 'lifespan': lifespan}, index=index)
>>> fig = (make_subplots(rows=2, cols=1)
... .add_trace(df.plot.bar(y='speed').data[0], row=1, col=1)
... .add_trace(df.plot.bar(y='lifespan').data[0], row=2, col=1))
>>> fig # doctest: +SKIP
Plot a single column.
.. plotly::
>>> speed = [0.1, 17.5, 40, 48, 52, 69, 88]
>>> lifespan = [2, 8, 70, 1.5, 25, 12, 28]
>>> index = ['snail', 'pig', 'elephant',
... 'rabbit', 'giraffe', 'coyote', 'horse']
>>> df = ps.DataFrame({'speed': speed,
... 'lifespan': lifespan}, index=index)
>>> df.plot.bar(y='speed') # doctest: +SKIP
Plot only selected categories for the DataFrame.
.. plotly::
>>> speed = [0.1, 17.5, 40, 48, 52, 69, 88]
>>> lifespan = [2, 8, 70, 1.5, 25, 12, 28]
>>> index = ['snail', 'pig', 'elephant',
... 'rabbit', 'giraffe', 'coyote', 'horse']
>>> df = ps.DataFrame({'speed': speed,
... 'lifespan': lifespan}, index=index)
>>> df.plot.bar(x='lifespan') # doctest: +SKIP
"""
from pyspark.pandas import DataFrame, Series
if isinstance(self.data, Series):
return self(kind="bar", **kwds)
elif isinstance(self.data, DataFrame):
return self(kind="bar", x=x, y=y, **kwds)
def barh(self, x=None, y=None, **kwargs):
"""
Make a horizontal bar plot.
A horizontal bar plot is a plot that presents quantitative data with
rectangular bars with lengths proportional to the values that they
represent. A bar plot shows comparisons among discrete categories. One
axis of the plot shows the specific categories being compared, and the
other axis represents a measured value.
Parameters
----------
x : label or position, default DataFrame.index
Column to be used for categories.
y : label or position, default All numeric columns in dataframe
Columns to be plotted from the DataFrame.
**kwds
Keyword arguments to pass on to
:meth:`pyspark.pandas.DataFrame.plot` or :meth:`pyspark.pandas.Series.plot`.
Returns
-------
:class:`plotly.graph_objs.Figure`
            Return a custom object when ``backend!=plotly``.
Return an ndarray when ``subplots=True`` (matplotlib-only).
See Also
--------
plotly.express.bar : Plot a vertical bar plot using plotly.
matplotlib.axes.Axes.bar : Plot a vertical bar plot using matplotlib.
Examples
--------
For Series:
.. plotly::
>>> df = ps.DataFrame({'lab': ['A', 'B', 'C'], 'val': [10, 30, 20]})
>>> df.val.plot.barh() # doctest: +SKIP
For DataFrame:
.. plotly::
>>> df = ps.DataFrame({'lab': ['A', 'B', 'C'], 'val': [10, 30, 20]})
>>> df.plot.barh(x='lab', y='val') # doctest: +SKIP
Plot a whole DataFrame to a horizontal bar plot
.. plotly::
>>> speed = [0.1, 17.5, 40, 48, 52, 69, 88]
>>> lifespan = [2, 8, 70, 1.5, 25, 12, 28]
>>> index = ['snail', 'pig', 'elephant',
... 'rabbit', 'giraffe', 'coyote', 'horse']
>>> df = ps.DataFrame({'speed': speed,
... 'lifespan': lifespan}, index=index)
>>> df.plot.barh() # doctest: +SKIP
Plot a column of the DataFrame to a horizontal bar plot
.. plotly::
>>> speed = [0.1, 17.5, 40, 48, 52, 69, 88]
>>> lifespan = [2, 8, 70, 1.5, 25, 12, 28]
>>> index = ['snail', 'pig', 'elephant',
... 'rabbit', 'giraffe', 'coyote', 'horse']
>>> df = ps.DataFrame({'speed': speed,
... 'lifespan': lifespan}, index=index)
>>> df.plot.barh(y='speed') # doctest: +SKIP
Plot DataFrame versus the desired column
.. plotly::
>>> speed = [0.1, 17.5, 40, 48, 52, 69, 88]
>>> lifespan = [2, 8, 70, 1.5, 25, 12, 28]
>>> index = ['snail', 'pig', 'elephant',
... 'rabbit', 'giraffe', 'coyote', 'horse']
>>> df = ps.DataFrame({'speed': speed,
... 'lifespan': lifespan}, index=index)
>>> df.plot.barh(x='lifespan') # doctest: +SKIP
"""
from pyspark.pandas import DataFrame, Series
if isinstance(self.data, Series):
return self(kind="barh", **kwargs)
elif isinstance(self.data, DataFrame):
return self(kind="barh", x=x, y=y, **kwargs)
def box(self, **kwds):
"""
Make a box plot of the Series columns.
Parameters
----------
**kwds : optional
Additional keyword arguments are documented in
:meth:`pyspark.pandas.Series.plot`.
precision: scalar, default = 0.01
This argument is used by pandas-on-Spark to compute approximate statistics
for building a boxplot. Use *smaller* values to get more precise
statistics (matplotlib-only).
Returns
-------
:class:`plotly.graph_objs.Figure`
            Return a custom object when ``backend!=plotly``.
Return an ndarray when ``subplots=True`` (matplotlib-only).
Notes
-----
There are behavior differences between pandas-on-Spark and pandas.
* pandas-on-Spark computes approximate statistics - expect differences between
pandas and pandas-on-Spark boxplots, especially regarding 1st and 3rd quartiles.
* The `whis` argument is only supported as a single number.
* pandas-on-Spark doesn't support the following argument(s) (matplotlib-only).
* `bootstrap` argument is not supported
* `autorange` argument is not supported
Examples
--------
Draw a box plot from a DataFrame with four columns of randomly
generated data.
For Series:
.. plotly::
>>> data = np.random.randn(25, 4)
>>> df = ps.DataFrame(data, columns=list('ABCD'))
>>> df['A'].plot.box() # doctest: +SKIP
        This function is not supported for the DataFrame type.
"""
from pyspark.pandas import DataFrame, Series
if isinstance(self.data, Series):
return self(kind="box", **kwds)
elif isinstance(self.data, DataFrame):
return unsupported_function(class_name="pd.DataFrame", method_name="box")()
def hist(self, bins=10, **kwds):
"""
Draw one histogram of the DataFrame’s columns.
A `histogram`_ is a representation of the distribution of data.
This function calls :meth:`plotting.backend.plot`,
on each series in the DataFrame, resulting in one histogram per column.
.. _histogram: https://en.wikipedia.org/wiki/Histogram
Parameters
----------
bins : integer or sequence, default 10
Number of histogram bins to be used. If an integer is given, bins + 1
bin edges are calculated and returned. If bins is a sequence, gives
bin edges, including left edge of first bin and right edge of last
bin. In this case, bins is returned unmodified.
**kwds
All other plotting keyword arguments to be passed to
plotting backend.
Returns
-------
:class:`plotly.graph_objs.Figure`
            Return a custom object when ``backend!=plotly``.
Return an ndarray when ``subplots=True`` (matplotlib-only).
Examples
--------
Basic plot.
For Series:
.. plotly::
>>> s = ps.Series([1, 3, 2])
>>> s.plot.hist() # doctest: +SKIP
For DataFrame:
.. plotly::
>>> df = pd.DataFrame(
... np.random.randint(1, 7, 6000),
... columns=['one'])
>>> df['two'] = df['one'] + np.random.randint(1, 7, 6000)
>>> df = ps.from_pandas(df)
>>> df.plot.hist(bins=12, alpha=0.5) # doctest: +SKIP
"""
return self(kind="hist", bins=bins, **kwds)
def kde(self, bw_method=None, ind=None, **kwargs):
"""
Generate Kernel Density Estimate plot using Gaussian kernels.
Parameters
----------
bw_method : scalar
The method used to calculate the estimator bandwidth.
See KernelDensity in PySpark for more information.
ind : NumPy array or integer, optional
Evaluation points for the estimated PDF. If None (default),
1000 equally spaced points are used. If `ind` is a NumPy array, the
KDE is evaluated at the points passed. If `ind` is an integer,
`ind` number of equally spaced points are used.
**kwargs : optional
Keyword arguments to pass on to :meth:`pandas-on-Spark.Series.plot`.
Returns
-------
:class:`plotly.graph_objs.Figure`
            Return a custom object when ``backend!=plotly``.
Return an ndarray when ``subplots=True`` (matplotlib-only).
Examples
--------
A scalar bandwidth should be specified. Using a small bandwidth value can
lead to over-fitting, while using a large bandwidth value may result
in under-fitting:
.. plotly::
>>> s = ps.Series([1, 2, 2.5, 3, 3.5, 4, 5])
>>> s.plot.kde(bw_method=0.3) # doctest: +SKIP
.. plotly::
>>> s = ps.Series([1, 2, 2.5, 3, 3.5, 4, 5])
>>> s.plot.kde(bw_method=3) # doctest: +SKIP
The `ind` parameter determines the evaluation points for the
        plot of the estimated KDE:
.. plotly::
>>> s = ps.Series([1, 2, 2.5, 3, 3.5, 4, 5])
>>> s.plot.kde(ind=[1, 2, 3, 4, 5], bw_method=0.3) # doctest: +SKIP
For DataFrame, it works in the same way as Series:
.. plotly::
>>> df = ps.DataFrame({
... 'x': [1, 2, 2.5, 3, 3.5, 4, 5],
... 'y': [4, 4, 4.5, 5, 5.5, 6, 6],
... })
>>> df.plot.kde(bw_method=0.3) # doctest: +SKIP
.. plotly::
>>> df = ps.DataFrame({
... 'x': [1, 2, 2.5, 3, 3.5, 4, 5],
... 'y': [4, 4, 4.5, 5, 5.5, 6, 6],
... })
>>> df.plot.kde(bw_method=3) # doctest: +SKIP
.. plotly::
>>> df = ps.DataFrame({
... 'x': [1, 2, 2.5, 3, 3.5, 4, 5],
... 'y': [4, 4, 4.5, 5, 5.5, 6, 6],
... })
>>> df.plot.kde(ind=[1, 2, 3, 4, 5, 6], bw_method=0.3) # doctest: +SKIP
"""
return self(kind="kde", bw_method=bw_method, ind=ind, **kwargs)
density = kde
def area(self, x=None, y=None, **kwds):
"""
Draw a stacked area plot.
An area plot displays quantitative data visually.
This function wraps the plotly area function.
Parameters
----------
x : label or position, optional
Coordinates for the X axis. By default uses the index.
y : label or position, optional
Column to plot. By default uses all columns.
stacked : bool, default True
            Area plots are stacked by default. Set to False to create an
            unstacked plot (matplotlib-only).
**kwds : optional
Additional keyword arguments are documented in
:meth:`DataFrame.plot`.
Returns
-------
:class:`plotly.graph_objs.Figure`
            Return a custom object when ``backend!=plotly``.
Return an ndarray when ``subplots=True`` (matplotlib-only).
Examples
--------
For Series
.. plotly::
>>> df = ps.DataFrame({
... 'sales': [3, 2, 3, 9, 10, 6],
... 'signups': [5, 5, 6, 12, 14, 13],
... 'visits': [20, 42, 28, 62, 81, 50],
... }, index=pd.date_range(start='2018/01/01', end='2018/07/01',
... freq='M'))
>>> df.sales.plot.area() # doctest: +SKIP
For DataFrame
.. plotly::
>>> df = ps.DataFrame({
... 'sales': [3, 2, 3, 9, 10, 6],
... 'signups': [5, 5, 6, 12, 14, 13],
... 'visits': [20, 42, 28, 62, 81, 50],
... }, index=pd.date_range(start='2018/01/01', end='2018/07/01',
... freq='M'))
>>> df.plot.area() # doctest: +SKIP
"""
from pyspark.pandas import DataFrame, Series
if isinstance(self.data, Series):
return self(kind="area", **kwds)
elif isinstance(self.data, DataFrame):
return self(kind="area", x=x, y=y, **kwds)
def pie(self, **kwds):
"""
Generate a pie plot.
A pie plot is a proportional representation of the numerical data in a
column. This function wraps :meth:`plotly.express.pie` for the
specified column.
Parameters
----------
y : int or label, optional
Label or position of the column to plot.
If not provided, ``subplots=True`` argument must be passed (matplotlib-only).
**kwds
Keyword arguments to pass on to :meth:`pandas-on-Spark.Series.plot`.
Returns
-------
:class:`plotly.graph_objs.Figure`
            Return a custom object when ``backend!=plotly``.
Return an ndarray when ``subplots=True`` (matplotlib-only).
Examples
--------
For Series:
.. plotly::
>>> df = ps.DataFrame({'mass': [0.330, 4.87, 5.97],
... 'radius': [2439.7, 6051.8, 6378.1]},
... index=['Mercury', 'Venus', 'Earth'])
>>> df.mass.plot.pie() # doctest: +SKIP
For DataFrame:
.. plotly::
>>> df = ps.DataFrame({'mass': [0.330, 4.87, 5.97],
... 'radius': [2439.7, 6051.8, 6378.1]},
... index=['Mercury', 'Venus', 'Earth'])
>>> df.plot.pie(y='mass') # doctest: +SKIP
"""
from pyspark.pandas import DataFrame, Series
if isinstance(self.data, Series):
return self(kind="pie", **kwds)
else:
            # pandas will raise an error if y is None and subplots is not True
if (
isinstance(self.data, DataFrame)
and kwds.get("y", None) is None
and not kwds.get("subplots", False)
):
raise ValueError(
"pie requires either y column or 'subplots=True' (matplotlib-only)"
)
return self(kind="pie", **kwds)
def scatter(self, x, y, **kwds):
"""
Create a scatter plot with varying marker point size and color.
The coordinates of each point are defined by two dataframe columns and
filled circles are used to represent each point. This kind of plot is
useful to see complex correlations between two variables. Points could
be for instance natural 2D coordinates like longitude and latitude in
a map or, in general, any pair of metrics that can be plotted against
each other.
Parameters
----------
x : int or str
The column name or column position to be used as horizontal
coordinates for each point.
y : int or str
The column name or column position to be used as vertical
coordinates for each point.
s : scalar or array_like, optional
(matplotlib-only).
c : str, int or array_like, optional
(matplotlib-only).
**kwds: Optional
Keyword arguments to pass on to :meth:`pyspark.pandas.DataFrame.plot`.
Returns
-------
:class:`plotly.graph_objs.Figure`
            Return a custom object when ``backend!=plotly``.
Return an ndarray when ``subplots=True`` (matplotlib-only).
See Also
--------
plotly.express.scatter : Scatter plot using multiple input data
formats (plotly).
matplotlib.pyplot.scatter : Scatter plot using multiple input data
formats (matplotlib).
Examples
--------
Let's see how to draw a scatter plot using coordinates from the values
in a DataFrame's columns.
.. plotly::
>>> df = ps.DataFrame([[5.1, 3.5, 0], [4.9, 3.0, 0], [7.0, 3.2, 1],
... [6.4, 3.2, 1], [5.9, 3.0, 2]],
... columns=['length', 'width', 'species'])
>>> df.plot.scatter(x='length', y='width') # doctest: +SKIP
And now with dark scheme:
.. plotly::
>>> df = ps.DataFrame([[5.1, 3.5, 0], [4.9, 3.0, 0], [7.0, 3.2, 1],
... [6.4, 3.2, 1], [5.9, 3.0, 2]],
... columns=['length', 'width', 'species'])
>>> fig = df.plot.scatter(x='length', y='width')
>>> fig.update_layout(template="plotly_dark") # doctest: +SKIP
"""
return self(kind="scatter", x=x, y=y, **kwds)
def hexbin(self, **kwds):
return unsupported_function(class_name="pd.DataFrame", method_name="hexbin")()
|
shaneknapp/spark
|
python/pyspark/pandas/plot/core.py
|
Python
|
apache-2.0
| 41,560
|
[
"Gaussian"
] |
5c41540e4beebf2721e1942ee21fc88217ca6856ca309a2459eb05a42e23ffd8
|
#! /usr/bin/env python
from Email import *
from Job import *
import subprocess
from os import listdir, walk
from os.path import isfile, join, getmtime
import re
import sys
from shutil import move
from collections import deque
import cPickle as pickle
PICKLE_PATH = "/opt/octoqueue/pickle/octoqueue.pickle"
QUEUE_PATH = "/data/GaussJobs/waiting_room"
INPUT_PATH = "/data/GaussJobs/inp"
OUTPUT_PATH = "/data/GaussJobs/out"
def isgaussianrunning():
ps = subprocess.Popen("ps -eaf | grep g09", shell=True, stdout=subprocess.PIPE)
output = ps.stdout.read()
ps.stdout.close()
ps.wait()
rg = re.compile('(\\/)'+'(opt)'+'(\\/)'+'(g09)'+'(\\/)'+'(l)'+'(\\d+)'+'(\\.)'+'(exe)',
re.IGNORECASE | re.DOTALL)
if not re.search(rg, output):
return False
else:
return True
def generate_job(inp):
# First convert to UNIX format
    ps = subprocess.Popen('dos2unix %s' % join(QUEUE_PATH, inp), shell=True, stdout=subprocess.PIPE)
    ps.wait()
    inp_file = open(join(QUEUE_PATH, inp), 'r')
inp_file_info = inp_file.read().split('\n')[0]
inp_file.close()
# Initial comment in Gaussian input file looks like:
# ! Name#E-mail
info = inp_file_info.strip('!').strip(" ").split("#")
return Job(uname=info[0], inp_name=inp, out_name=inp.split('.')[0]+".out", email=info[1])
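# Illustrative example: an input file 'job1.inp' whose first line is
#   ! Alice#alice@example.com
# yields Job(uname='Alice', inp_name='job1.inp', out_name='job1.out',
#            email='alice@example.com').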
def get_new_jobs(queue):
dir_list = sorted(listdir(QUEUE_PATH), key=lambda x: getmtime(join(QUEUE_PATH, x)))
if not dir_list:
pass # No files waiting to be run
else: # Now check each file in QUEUE_PATH to see if it's in the queue
if queue: # There's stuff in the queue to check
for inp_file in dir_list:
if inp_file in [f.inp_name for f in queue]:
print inp_file
continue
else:
queue.append(generate_job(inp_file))
else: # Nothing in the queue, but stuff to add
for inp_file in dir_list:
queue.append(generate_job(inp_file))
return queue
def main():
# First check to see if pickled queue is available
if isfile(PICKLE_PATH):
current_queue = pickle.load(open(PICKLE_PATH, 'rb'))
# Check to see if Gaussian is done with top job on queue
if not isgaussianrunning():
if current_queue: # There's at least one item in the queue and it's done
cur_job = current_queue.popleft()
Email(dest=cur_job.email, job_json=cur_job.to_json(), status='end')
else: # Nothing in the queue, null
pass
else: # Gaussian is still working
sys.exit()
# Now time to check if there's any new jobs to add to queue
current_queue = get_new_jobs(current_queue)
# Otherwise make a new queue
else:
current_queue = get_new_jobs(deque())
if not current_queue:
sys.exit() # Nothing to do
else: # Let's start the next process!
cur_job = current_queue.popleft()
current_queue.appendleft(cur_job) # Put it back in
input_path = join(INPUT_PATH, cur_job.inp_name)
output_path = join(OUTPUT_PATH, cur_job.out_name)
move(join(QUEUE_PATH, cur_job.inp_name), input_path)
        subprocess.Popen('g09 < %s > %s' % (input_path, output_path), shell=True)
Email(dest=cur_job.email, job_json=cur_job.to_json(), status='start')
# Pickle the queue and quit
pickle.dump(current_queue, open(PICKLE_PATH, 'wb'))
sys.exit()
|
nseifert/octowolf_queue
|
octoqueue/Queuer.py
|
Python
|
mit
| 3,520
|
[
"Gaussian"
] |
78357b4ff0c109e35af6fb34e0317c52b16d382718b4502d29793da81527cdf4
|
# ----------------------------------------------------------------------------
# Copyright (c) 2013--, scikit-bio development team.
#
# Distributed under the terms of the Modified BSD License.
#
# The full license is in the file COPYING.txt, distributed with this software.
# ----------------------------------------------------------------------------
import numpy as np
import numpy.testing as npt
from unittest import TestCase, main
from skbio.stats.ordination import corr, mean_and_std, e_matrix, f_matrix
class TestUtils(TestCase):
def setUp(self):
self.x = np.array([[1, 2, 3], [4, 5, 6]])
self.y = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
self.matrix = np.arange(1, 7).reshape(2, 3)
self.matrix2 = np.arange(1, 10).reshape(3, 3)
self.small_mat = np.array([[7, 5, 5], [4, 4, 9], [7, 5, 3]])
def test_mean_and_std(self):
obs = mean_and_std(self.x)
npt.assert_almost_equal((3.5, 1.707825127), obs)
obs = mean_and_std(self.x, with_std=False)
self.assertEqual((3.5, None), obs)
obs = mean_and_std(self.x, ddof=2)
npt.assert_almost_equal((3.5, 2.091650066), obs)
def test_mean_and_std_no_mean_no_std(self):
with npt.assert_raises(ValueError):
mean_and_std(self.x, with_mean=False, with_std=False)
def test_corr(self):
obs = corr(self.small_mat)
npt.assert_almost_equal(np.array([[1, 1, -0.94491118],
[1, 1, -0.94491118],
[-0.94491118, -0.94491118, 1]]),
obs)
def test_corr_shape_mismatch(self):
with npt.assert_raises(ValueError):
corr(self.x, self.y)
def test_e_matrix(self):
E = e_matrix(self.matrix)
expected_E = np.array([[-0.5, -2., -4.5],
[-8., -12.5, -18.]])
npt.assert_almost_equal(E, expected_E)
def test_f_matrix(self):
F = f_matrix(self.matrix2)
expected_F = np.zeros((3, 3))
# Note that `test_make_F_matrix` in cogent is wrong
npt.assert_almost_equal(F, expected_F)
if __name__ == '__main__':
main()
|
anderspitman/scikit-bio
|
skbio/stats/ordination/tests/test_util.py
|
Python
|
bsd-3-clause
| 2,214
|
[
"scikit-bio"
] |
3b2ec4d433b71a7916bf14b53102bdba4a7ba68789ad8fbe927bc5bc3d2d4db1
|
__author__ = "Mario Lukas"
__copyright__ = "Copyright 2017"
__license__ = "GPL v2"
__maintainer__ = "Mario Lukas"
__email__ = "info@mariolukas.de"
import math
import logging
import numpy as np
import scipy.ndimage
import scipy
import cv2
from fabscan.FSConfig import ConfigInterface
from fabscan.FSSettings import SettingsInterface
from fabscan.lib.util.FSInject import inject
from fabscan.scanner.interfaces.FSImageProcessor import ImageProcessorInterface
class LinearLeastSquares2D(object):
'''
2D linear least squares using the hesse normal form:
d = x*sin(theta) + y*cos(theta)
which allows you to have vertical lines.
'''
def fit(self, data):
data_mean = data.mean(axis=0)
x0, y0 = data_mean
if data.shape[0] > 2: # over determined
u, v, w = np.linalg.svd(data - data_mean)
vec = w[0]
theta = math.atan2(vec[0], vec[1])
elif data.shape[0] == 2: # well determined
theta = math.atan2(data[1, 0] - data[0, 0], data[1, 1] - data[0, 1])
theta = (theta + math.pi * 5 / 2) % (2 * math.pi)
d = x0 * math.sin(theta) + y0 * math.cos(theta)
return d, theta
def residuals(self, model, data):
d, theta = model
dfit = data[:, 0] * math.sin(theta) + data[:, 1] * math.cos(theta)
return np.abs(d - dfit)
def is_degenerate(self, sample):
return False
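    # Example (illustrative): the vertical line x = 3 is (d=3, theta=pi/2)
    # in this form, since 3 = x*sin(pi/2) + y*cos(pi/2); slope-intercept
    # form cannot represent such lines.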
@inject(
config=ConfigInterface,
settings=SettingsInterface,
)
class ImageProcessor(ImageProcessorInterface):
def __init__(self, config, settings):
self._logger = logging.getLogger(__name__)
self.settings = settings
self.config = config
self.laser_color_channel = self.config.file.laser.color
self.threshold_enable = False
self.threshold_value = 0
self.blur_enable = True
self.blur_value = 0
self.window_enable = False
self.window_value = 0
self.color = (255, 255, 255)
        self.refinement_method = 'SGF'  # possible values: 'SGF', 'RANSAC'
self.image_height = self.config.file.camera.resolution.width
self.image_width = self.config.file.camera.resolution.height
self.high_resolution = (self.config.file.camera.resolution.height, self.config.file.camera.resolution.width)
self.preview_resolution = (self.config.file.camera.preview_resolution.height, self.config.file.camera.preview_resolution.width)
self.mem_debug_count = 0
#aruco.DICT_5X5_250
# Note: Pattern generated using the following link
# https://calib.io/pages/camera-calibration-pattern-generator
self.aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_100)
self.charuco_board = cv2.aruco.CharucoBoard_create(11, 9, 1, 0.5, self.aruco_dict)
self._logger.debug("config ID: {0}".format(id(self.config)))
if self.config.high_weighted_matrix.size == 0:
self._logger.debug("Initializing weighted Matrix")
self.config.high_weighted_matrix = self._compute_weight_matrix(resolution=self.high_resolution)
self.config.low_weighted_matrix = self._compute_weight_matrix(resolution=self.preview_resolution)
self._criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
self.object_pattern_points = self.create_object_pattern_points()
def get_aruco_board(self):
return self.charuco_board
def get_aruco_dict(self):
return self.aruco_dict
def _compute_weight_matrix(self, resolution):
_weight_matrix = np.array(
(
np.matrix(np.linspace(0, resolution[0] - 1, resolution[0])).T *
np.matrix(np.ones(resolution[1]))
).T
)
return _weight_matrix
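    # Note (illustrative): the weight matrix stores each pixel's coordinate
    # along the measurement axis, so in compute_2d_points the expression
    # (weights * image).sum(axis=1) / image.sum(axis=1) is the per-row
    # intensity-weighted centroid of the laser line (sub-pixel peak).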
def create_object_pattern_points(self):
objp = np.zeros((self.config.file.calibration.pattern.rows * self.config.file.calibration.pattern.columns, 3), np.float32)
objp[:, :2] = np.mgrid[0:self.config.file.calibration.pattern.columns,
0:self.config.file.calibration.pattern.rows].T.reshape(-1, 2)
objp = np.multiply(objp, self.config.file.calibration.pattern.square_size)
return objp
def ransac(self, data, model_class, min_samples, threshold, max_trials=100):
'''
Fits a model to data with the RANSAC algorithm.
:param data: numpy.ndarray
data set to which the model is fitted, must be of shape NxD where
N is the number of data points and D the dimensionality of the data
:param model_class: object
object with the following methods implemented:
* fit(data): return the computed model
* residuals(model, data): return residuals for each data point
* is_degenerate(sample): return boolean value if sample choice is
degenerate
see LinearLeastSquares2D class for a sample implementation
:param min_samples: int
the minimum number of data points to fit a model
:param threshold: int or float
maximum distance for a data point to count as an inlier
:param max_trials: int, optional
maximum number of iterations for random sample selection, default 100
:returns: tuple
best model returned by model_class.fit, best inlier indices
'''
best_model = None
best_inlier_num = 0
best_inliers = None
data_idx = np.arange(data.shape[0])
for _ in range(max_trials):
sample = data[np.random.randint(0, data.shape[0], 2)]
if model_class.is_degenerate(sample):
continue
sample_model = model_class.fit(sample)
sample_model_residua = model_class.residuals(sample_model, data)
sample_model_inliers = data_idx[sample_model_residua < threshold]
inlier_num = sample_model_inliers.shape[0]
if inlier_num > best_inlier_num:
best_inlier_num = inlier_num
best_inliers = sample_model_inliers
if best_inliers is not None:
best_model = model_class.fit(data[best_inliers])
return best_model
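    # Illustrative sketch (not part of FabScan): fitting a line with ransac().
    #
    #   pts = np.array([[0, 0], [1, 1], [2, 2], [10, -3]])  # one outlier
    #   d, theta = self.ransac(pts, LinearLeastSquares2D(),
    #                          min_samples=2, threshold=0.5)
    #   # the outlier is ignored; inliers satisfy d = x*sin(theta) + y*cos(theta).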
def _window_mask(self, image, window_enable=True):
height, width = image.shape
window_value = 3
mask = 0
if window_enable:
peak = image.argmax(axis=1)
_min = peak - window_value
_max = peak + window_value + 1
mask = np.zeros_like(image)
for i in range(height):
mask[i, _min[i]:_max[i]] = 255
# Apply mask
image = cv2.bitwise_and(image, mask)
mask = None
return image
def _threshold_image(self, image, blur_enable=True):
if self.settings.file.auto_threshold == True:
threshold_value = 0
threshold_settings = cv2.THRESH_TOZERO+cv2.THRESH_OTSU
else:
threshold_value = self.settings.file.threshold
threshold_settings = cv2.THRESH_TOZERO
image = cv2.threshold(
image, threshold_value, 255, threshold_settings)[1]
if blur_enable:
image = cv2.GaussianBlur(image, (7, 7), 0)
image = cv2.threshold(image, threshold_value, 255, threshold_settings)[1]
return image
def _obtain_red_channel(self, image):
if self.laser_color_channel == 'R (RGB)':
image = cv2.split(image)[2]
elif self.laser_color_channel == 'G (RGB)':
image = cv2.split(image)[1]
elif self.laser_color_channel == 'Cr (YCrCb)':
image = cv2.split(cv2.cvtColor(image, cv2.COLOR_RGB2YCR_CB))[1]
elif self.laser_color_channel == 'U (YUV)':
image = cv2.split(cv2.cvtColor(image, cv2.COLOR_RGB2YUV))[1]
elif self.laser_color_channel == 'R (HSV)':
hsv_frame = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
redHueArea = 30
redRange = ((hsv_frame[:, :, 0] + 360 + redHueArea) % 360)
hsv_frame[np.where((2 * redHueArea) > redRange)] = [0, 0, 0]
hsv_frame[np.where(hsv_frame[:, :, 1] < 95)] = [0, 0, 0]
rgb = cv2.cvtColor(hsv_frame, cv2.COLOR_HSV2RGB)
image = cv2.split(rgb)[2]
return image
def compute_line_segmentation(self, image, index=0, roi_mask=False):
if image is not None:
if roi_mask is True:
image = self.mask_image(image, index)
image = self._obtain_red_channel(image)
image = self._threshold_image(image)
# Window mask
image = self._window_mask(image)
return image
    # Segmented Gaussian filter (the 'SGF' refinement option)
def _sgf(self, u, v, s):
if len(u) > 1:
i = 0
sigma = 2.0
f = np.array([])
segments = [s[_r] for _r in np.ma.clump_unmasked(np.ma.masked_equal(s, 0))]
# Detect stripe segments
for segment in segments:
j = len(segment)
# Apply gaussian filter
fseg = scipy.ndimage.gaussian_filter(u[i:i + j], sigma=sigma)
f = np.concatenate((f, fseg))
i += j
segments = None
return f, v
else:
return u, v
# RANSAC implementation: https://github.com/ahojnnes/numpy-snippets/blob/master/ransac.py
def _ransac(self, u, v):
if len(u) > 1:
data = np.vstack((v.ravel(), u.ravel())).T
dr, thetar = self.ransac(data, LinearLeastSquares2D(), 2, 2)
u = (dr - v * math.sin(thetar)) / math.cos(thetar)
return u, v
def compute_2d_points(self, image, index=0, roi_mask=True, preview=False, refinement_method='SGF'):
if image is not None:
image = self.compute_line_segmentation(image, index, roi_mask=roi_mask)
# Peak detection: center of mass
s = image.sum(axis=1)
v = np.where(s > 0)[0]
if preview:
u = (self.config.low_weighted_matrix * image).sum(axis=1)[v] / s[v]
else:
u = (self.config.high_weighted_matrix * image).sum(axis=1)[v] / s[v]
if refinement_method == 'SGF':
# Segmented gaussian filter
u, v = self._sgf(u, v, s)
elif refinement_method == 'RANSAC':
# Random sample consensus
u, v = self._ransac(u, v)
s = None
return (u, v)
def get_texture_stream_frame(self, cam_image):
cam_image = self.rotate_image(cam_image)
return cam_image
def get_settings_stream_frame(self, cam_image):
cam_image = self.rotate_image(cam_image)
return cam_image
def get_calibration_stream_frame(self, cam_image):
cam_image = self.rotate_image(cam_image)
gray_image = cv2.cvtColor(cam_image, cv2.COLOR_RGB2GRAY)
corners = cv2.goodFeaturesToTrack(gray_image, self.config.file.calibration.pattern.columns*self.config.file.calibration.pattern.rows, 0.01, 10)
corners = np.int0(corners)
for i in corners:
x, y = i.ravel()
cv2.circle(cam_image, (x, y), 3, (0, 0, 255), -1)
gray_image = None
return cam_image
def get_adjustment_stream_frame(self, cam_image):
cam_image = self.rotate_image(cam_image)
        cam_image = cv2.resize(cam_image, (self.config.file.camera.preview_resolution.width, self.config.file.camera.preview_resolution.height))
cv2.line(cam_image, (int(0.5*cam_image.shape[1]),0), (int(0.5*cam_image.shape[1]), cam_image.shape[0]), (0,255,0), thickness=3, lineType=8, shift=0)
return cam_image
def drawCorners(self, image):
corners = self.detect_corners(image)
cv2.drawChessboardCorners(
image, (self.config.file.calibration.pattern.columns, self.config.file.calibration.pattern.rows), corners, True)
return image
def get_laser_stream_frame(self, image, type='CAMERA'):
try:
image = self.rotate_image(image)
if bool(self.settings.file.show_laser_overlay):
points = self.compute_2d_points(image, roi_mask=False, preview=True)
u, v = points
c = list(zip(u, v))
[cv2.line(image, (int(t[0]) - 1, int(t[1])), (int(t[0]) + 1, int(t[1])), (255, 0, 0), thickness=1, lineType=8, shift=0) for t in c]
if bool(self.settings.file.show_calibration_pattern):
cv2.line(image, (int(0.5*image.shape[1]), 0), (int(0.5*image.shape[1]), image.shape[0]), (0, 255, 0), thickness=1, lineType=8, shift=0)
cv2.line(image, (0, int(0.5*image.shape[0])), (image.shape[1], int(0.5*image.shape[0])), (0, 255, 0), thickness=1, lineType=8, shift=0)
except Exception as e:
return image
return image
def rotate_image(self, image):
if self.config.file.camera.rotate == "True":
image = cv2.transpose(image)
if self.config.file.camera.hflip == "True":
image = cv2.flip(image, 1)
if self.config.file.camera.vflip == "True":
image = cv2.flip(image, 0)
return image
    # FIXME: settle on a single name for the texture/color image parameter
    def process_image(self, angle, laser_image, texture_image=None, index=0):
        '''Converts a laser image taken at the given turntable angle (in
        degrees) into an Nx6 array of points (x, y, z, r, g, b).'''
res = np.array([])
try:
_theta = np.deg2rad(-angle)
points_2d = self.compute_2d_points(laser_image, index, roi_mask=True)
point_cloud = self.compute_point_cloud(_theta, points_2d, index=index)
if type(point_cloud) is np.ndarray:
if bool(self.settings.file.color) and texture_image is not None:
u, v = points_2d
point_color = texture_image[v, np.around(u).astype(int)].T
point_cloud, point_color = self.mask_point_cloud(point_cloud, point_color)
else:
if index == 0: #laser 1
point_color = np.array([255, 255, 255], np.uint8).T
else: #laser 2
point_color = np.array([200, 200, 200], np.uint8).T
point_cloud, _ = self.mask_point_cloud(point_cloud, None)
num_rows, num_cols = point_cloud.shape
res = np.zeros((num_cols, 6), np.float32)
res[:, 3:] = point_color.T
res[:, :3] = point_cloud.T
color = None
texture_image = None
point_cloud = None
else:
point_cloud = None
color = None
return res
except Exception as e:
res = None
laser_image = None
point_color = None
self._logger.exception("Process Error: {0}".format(e))
return np.array([])
def mask_image(self, image, index):
if index == 0:
mask = np.zeros(image.shape, np.uint8)
mask[0:self.image_height, (self.image_width // 2):self.image_width] = image[0:self.image_height, (self.image_width // 2):self.image_width]
else:
mask = np.zeros(image.shape, np.uint8)
mask[0:self.image_height, 0:(self.image_width // 2)] = image[0:self.image_height, 0:(self.image_width // 2)]
return mask
def mask_point_cloud(self, point_cloud, texture):
if point_cloud is not None and len(point_cloud) > 0:
rho = np.sqrt(np.square(point_cloud[0, :]) + np.square(point_cloud[1, :]))
z = point_cloud[2, :]
turntable_radius = int(self.config.file.turntable.radius)
idx = np.where((z >= 0) &
(z <= self.config.file.turntable.height) &
(rho >= -turntable_radius) &
(rho <= turntable_radius))[0]
z = None
if texture is None:
return point_cloud, None
else:
return point_cloud[:, idx], texture[:, idx]
else:
return point_cloud, texture
def compute_point_cloud(self, theta, points_2d, index):
if points_2d[0].size == 0 or points_2d[1].size == 0:
return None
# Load calibration values
R = np.matrix(self.config.file.calibration.platform_rotation)
t = np.matrix(self.config.file.calibration.platform_translation).T
# Compute platform transformation
Xwo = self.compute_platform_point_cloud(points_2d, R, t, index)
# Rotate to world coordinates
c, s = np.cos(-theta), np.sin(-theta)
Rz = np.matrix([[c, -s, 0], [s, c, 0], [0, 0, 1]])
Xw = Rz * Xwo
# Return point cloud
if Xw.size > 0:
return np.array(Xw)
else:
return None
def compute_platform_point_cloud(self, points_2d, R, t, index):
# Load calibration values
n = self.config.file.calibration.laser_planes[index]['normal']
d = self.config.file.calibration.laser_planes[index]['distance']
# Camera system
Xc = self.compute_camera_point_cloud(points_2d, d, n)
# Compute platform transformation
return R.T * Xc - R.T * t
def compute_camera_point_cloud(self, points_2d, d, n):
# Load calibration values
fx = self.config.file.calibration.camera_matrix[0][0]
fy = self.config.file.calibration.camera_matrix[1][1]
cx = self.config.file.calibration.camera_matrix[0][2]
cy = self.config.file.calibration.camera_matrix[1][2]
# Compute projection point
u, v = points_2d
x = np.concatenate(((u - cx) / fx, (v - cy) / fy, np.ones(len(u)))).reshape(3, len(u))
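        # Each column of x is a unit-depth camera ray through pixel (u, v);
        # scaling by d / (n . x) intersects that ray with the laser plane
        # n . X = d, giving the 3D point in camera coordinates.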
return d / np.dot(n, x) * x
def detect_corners(self, image, flags=None, type="chessboard"):
ret = None
corners = None
ids = None
imsize = None
if type == "chessboard":
ret, corners = self._detect_chessboard(image, flags)
elif type == "charucoboard":
ret, corners, ids, imsize = self._detect_charucoboard(image)
return ret, corners, ids, imsize
def detect_pose(self, image, flags=None):
_, corners, ids, imsize = self.detect_corners(image, flags=flags, type=self.config.file.calibration.pattern.type )
if corners is not None:
ret, rvecs, tvecs = cv2.solvePnP(
self.object_pattern_points, corners,
np.array(self.config.file.calibration.camera_matrix), np.array(self.config.file.calibration.distortion_vector))
if ret:
return (cv2.Rodrigues(rvecs)[0], tvecs, corners)
def detect_pattern_plane(self, pose):
if pose is not None:
R = pose[0]
t = pose[1].T[0]
c = pose[2]
n = R.T[2]
d = np.dot(n, t)
return (d, n, c)
else:
return None
def pattern_mask(self, image, corners):
if image is not None:
h, w, d = image.shape
if corners is not None:
                corners = corners.astype(int)  # np.int was removed in NumPy 1.24
p1 = corners[0][0]
p2 = corners[self.config.file.calibration.pattern.columns - 1][0]
p3 = corners[self.config.file.calibration.pattern.columns * (self.config.file.calibration.pattern.rows - 1)][0]
p4 = corners[self.config.file.calibration.pattern.columns * self.config.file.calibration.pattern.rows - 1][0]
mask = np.zeros((h, w), np.uint8)
points = np.array([p1, p2, p4, p3])
cv2.fillConvexPoly(mask, points, 255)
image = cv2.bitwise_and(image, image, mask=mask)
return image
def _detect_charucoboard(self, image):
"""
        Detects the ChArUco board corners used for pose estimation.
"""
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
corners, ids, _ = cv2.aruco.detectMarkers(gray, self.aruco_dict)
        if len(corners) > 0:
            ret, c_corners, c_ids = cv2.aruco.interpolateCornersCharuco(markerCorners=corners, markerIds=ids, image=gray, board=self.charuco_board, minMarkers=0)
            # ret is the number of detected corners
            if ret > 0:
                imsize = gray.shape
                return ret, c_corners, c_ids, imsize
        self._logger.debug('Charuco detection failed!')
        # Always return a four-tuple so detect_corners() can unpack safely.
        return None, None, None, None
def _detect_chessboard(self, image, flags=None):
if image is not None:
if self.config.file.calibration.pattern.rows > 2 and self.config.file.calibration.pattern.columns > 2:
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
if flags is None:
ret, corners = cv2.findChessboardCorners(gray, (self.config.file.calibration.pattern.columns, self.config.file.calibration.pattern.rows), None)
else:
ret, corners = cv2.findChessboardCorners(gray, (self.config.file.calibration.pattern.columns, self.config.file.calibration.pattern.rows), flags=flags)
if ret:
cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), self._criteria)
return ret, corners
            else:
                return None, None
        # No image was supplied: keep the same two-tuple contract.
        return None, None
|
mariolukas/FabScanPi-Server
|
src/fabscan/scanner/laserscanner/FSImageProcessor.py
|
Python
|
gpl-2.0
| 21,487
|
[
"Gaussian"
] |
fbb0718d63d029493aa9e7b8e8ea06c2b0154392b7d595b1d343c1107ec39cba
|
"""Solid objects are used by the segment module. A solid has a position, and
orientation (defined by a rotation matrix). This module also contains the
class definition for stadium objects, which are used to construct
StadiumSolid solids. The Solid class has two subclasses: the StadiumSolid and
Semiellipsoid classes.
"""
# Use Python3 integer division rules.
from __future__ import division
import warnings
import numpy as np
from . import inertia
from .utils import printoptions
class Stadium(object):
"""Stadium, the 2D shape.
"""
validStadiaLabels = {
'Ls0': 'hip joint centre',
'Ls1': 'umbilicus',
'Ls2': 'lowest front rib',
'Ls3': 'nipple',
'Ls4': 'shoulder joint centre',
'Ls5': 'acromion',
'Ls6': 'beneath nose',
'Ls7': 'above ear',
'La0': 'shoulder joint centre',
'La1': 'mid-arm',
'La2': 'elbow joint centre',
'La3': 'maximum forearm perimeter',
'La4': 'wrist joint centre',
'La5': 'base of thumb',
'La6': 'knuckles',
'La7': 'fingernails',
'Lb0': 'shoulder joint centre',
'Lb1': 'mid-arm',
'Lb2': 'elbow joint centre',
'Lb3': 'maximum forearm perimeter',
'Lb4': 'wrist joint centre',
'Lb5': 'base of thumb',
'Lb6': 'knuckles',
'Lb7': 'fingernails',
'Lj0': 'hip joint centre',
'Lj1': 'crotch',
'Lj2': 'mid-thigh',
'Lj3': 'knee joint centre',
'Lj4': 'maximum calf perimeter',
'Lj5': 'ankle joint centre',
'Lj6': 'heel',
'Lj7': 'arch',
'Lj8': 'ball',
'Lj9': 'toe nails',
'Lk0': 'hip joint centre',
'Lk1': 'crotch',
'Lk2': 'mid-thigh',
'Lk3': 'knee joint centre',
'Lk4': 'maximum calf perimeter',
'Lk5': 'ankle joint centre',
'Lk6': 'heel',
'Lk7': 'arch',
'Lk8': 'ball',
'Lk9': 'toe nails'}
def __init__(self, label, inID, in1, in2=None, alignment='ML'):
"""Defines a 2D stadium shape and checks inputs for errors. A stadium,
described in Yeadon 1990-ii, is defined by two parameters. Stadia can
        degenerate to circles if their "thickness" is 0.
Parameters
----------
label : str
Name of the stadium level, according to Yeadon 1990-ii.
inID : str
Identifies the type of information for the next two inputs.
'perimwidth' for perimeter and width input, 'depthwidth' for
depth and width input, 'perimeter' or 'radius' for a circle,
'thicknessradius' for thickness and radius input.
in1 : float
Either perimeter, depth, or thickness, as determined by inID
in2 : float
Either width, or radius, as determined by inID
alignment = 'ML' : str
Identifies the long direction of the stadium. 'ML' stands for
            medio-lateral. Alternatively, 'AP' (anteroposterior) can be
supplied. The only 'AP' stadiums should be at the heels.
"""
if label == 'Ls5: acromion/bottom of neck':
self.label = label
elif label in [lab + ': ' + desc for lab, desc in
self.validStadiaLabels.items()]:
self.label = label
else:
raise ValueError("'{}' is not a valid label.".format(label))
if inID == 'perimwidth':
self.perimeter = in1
self.width = in2
self.thickness = ((np.pi * self.width - self.perimeter) /
(2.0 * np.pi - 4.0))
self.radius = ((self.perimeter - 2.0 * self.width) /
(2.0 * np.pi - 4.0))
elif inID == 'depthwidth':
self.width = in2
self.perimeter = 2.0 * in2 + (np.pi - 2.0) * in1
self.thickness = ((np.pi * self.width - self.perimeter) /
(2.0 * np.pi - 4.0))
self.radius = (self.perimeter - 2.0 * self.width) / (2.0 * np.pi -
4.0)
elif inID == 'perimeter':
self._set_as_circle(in1 / (2.0 * np.pi))
elif inID == 'radius':
self._set_as_circle(in1)
elif inID == 'thicknessradius':
self.thickness = in1
self.radius = in2
self.perimeter = 4.0 * self.thickness + 2.0 * np.pi * self.radius
self.width = 2.0 * self.thickness + 2.0 * self.radius
else:
raise ValueError("Stadium " + self.label +
" not defined properly, " + inID + " is not valid. You must " +
"use inID= perimwidth, depthwidth, perimeter, or radius.")
if self.radius == 0:
raise ValueError("Radius of stadium '%' is zero." % self.radius)
if self.radius < 0 or self.thickness < 0:
warnings.warn("Stadium '{}' is defined "
"incorrectly, r must be positive and t must be nonnegative. "
"r = {} and t = {} . This means that 2 < perimeter/width < pi. "
"Currently, this ratio is {}.\n".format(self.label,
self.radius, self.thickness, self.perimeter / self.width))
if inID == 'perimwidth':
self._set_as_circle(in1 / (2.0 * np.pi))
print("Fix: stadium set as circle with perimeter as given.")
elif inID == 'depthwidth':
self._set_as_circle(0.5 * in2)
print("Fix: stadium set as circle with diameter of given "
"width.")
else:
raise ValueError("Negative radius/thickness cannot be "
"corrected.")
if alignment != 'AP' and alignment != 'ML':
raise ValueError("Error: stadium " + self.label +
" alignment is not valid, must be either AP or ML")
else:
self.alignment = alignment
def _set_as_circle(self, radius):
"""Sets radius, perimeter, thickness, and width if thickness is 0."""
self.radius = radius
self.perimeter = 2.0 * np.pi * self.radius
self.thickness = 0.0
self.width = self.perimeter / np.pi
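def _example_stadium_usage():
    """Hedged usage sketch, not part of yeadon: the 'perimwidth' branch above
    inverts p = 4*t + 2*pi*r and w = 2*t + 2*r; this checks the round trip
    for a made-up perimeter/width pair with 2 < p/w < pi."""
    stad = Stadium('Ls3: nipple', 'perimwidth', 0.9, 0.3)
    assert abs(stad.perimeter - (4.0 * stad.thickness +
                                 2.0 * np.pi * stad.radius)) < 1e-12
    assert abs(stad.width - (2.0 * stad.thickness + 2.0 * stad.radius)) < 1e-12
    return stad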
class Solid(object):
"""Solid. Has two subclasses, stadiumsolid and semiellipsoid. This base
class manages setting orientation, and calculating properties.
"""
# Transparency for plotting.
alpha = .5
@property
def mass(self):
"""Mass of the solid, a float in units of kg."""
return self._mass
@property
def center_of_mass(self):
"""Center of mass of the solid, a np.ndarray of shape (3,1), in
units of m, expressed in the global frame, from the bottom center of
the pelvis (Ls0)."""
return self._center_of_mass
@property
def inertia(self):
"""Inertia matrix of the solid, a np.matrix of shape (3,3), in units
of kg-m^2, about the center of mass of the human, expressed in the
global frame.
"""
return self._inertia
@property
def rel_center_of_mass(self):
"""Center of mass of the solid, a np.ndarray of shape (3,1), in
units of m, expressed in the frame of the solid, from the origin of
the solid."""
return self._rel_center_of_mass
@property
def rel_inertia(self):
"""Inertia matrix of the solid, a np.matrix of shape (3,3), in units
of kg-m^2, about the center of mass of the solid, expressed in the
frame of the solid."""
return self._rel_inertia
@property
def pos(self):
"""Position of the origin of the solid, which is the center of the
surface closest to the pelvis, a np.ndarray of shape (3,1), in units
of m, expressed in the global frame, from the bottom center of the
pelvis (Ls0)."""
return self._pos
@property
def end_pos(self):
"""Position of the point on the solid farthest from the origin along
the longitudinal axis of the segment, a np.ndarray of shape (3,1),
in units of m, expressed in the global frame, from the bottom center
of the pelvis (Ls0)."""
return self._end_pos
def __init__(self, label, density, height):
"""Defines a solid. This is a base class. Sets the alpha value to
be used for drawing with MayaVi.
Parameters
----------
label : str
Name of the solid
density : float
In units (kg/m^3), used to calculate the solid's mass
height : float
Distance from bottom to top of the solid
"""
#TODO: Check for valid labels
self.label = label
#TODO: Check that these two are floats
self.density = density
self.height = height
self._rel_inertia = np.zeros((3, 3)) # this gets set in subclasses
self._mass = 0.0
self._rel_center_of_mass = np.array([[0.0], [0.0], [0.0]])
def set_orientation(self, proximal_pos, rot_mat, build_toward_positive_z):
"""Sets the position, rotation matrix of the solid, and calculates
the "absolute" properties (center of mass, and inertia tensor) of the
solid.
Parameters
----------
proximal_pos : np.array (3,1)
Position of center of proximal end of solid in the absolute fixed
coordinates of the human.
rot_mat : np.matrix (3,3)
Orientation of solid, with respect to the fixed coordinate system.
build_toward_positive_z : bool, optional
The order of the solids in the parent segment matters. By default
they are stacked on top of each other in the segment's local +z
direction. If this is set to False, then they are stacked in the
local -z direction. This is done so that, for example, in the default
configuration, the arms are directed down.
"""
self._rot_mat = rot_mat
if build_toward_positive_z:
self._pos = proximal_pos
self._end_pos = self.pos + (self.height * self._rot_mat *
np.array([[0], [0], [1]]))
else:
self._end_pos = proximal_pos
self._pos = self._end_pos - (self.height * self._rot_mat *
np.array([[0], [0], [1]]))
self.calc_properties()
def calc_properties(self):
"""Sets the center of mass and inertia of the solid, both with respect
to the fixed human frame.
"""
try:
try:
# Here is v_a = R * v_b, where A is global frame and B is
# rotated frame relative to A.
self._center_of_mass = (self.pos + self._rot_mat *
self.rel_center_of_mass)
except AttributeError as err:
message = str(err) + \
'. You must set the orientation before attempting ' + \
'to calculate the properties.'
raise AttributeError(message)
except AttributeError as e:
print(e)
self._inertia = inertia.rotate_inertia(self._rot_mat, self.rel_inertia)
def print_properties(self, precision=5, suppress=True):
"""Prints mass, center of mass (in solid and global frames), and
inertia (in solid and global frames).
The solid's origin is at the bottom center of the proximal stadium
(or stadium closest to the pelvis, Ls0).
Parameters
----------
precision : integer, default=5
The precision for floating point representation.
suppress : boolean, default=True
Print very small values as 0 instead of scientific notation.
Notes
-----
See numpy.set_printoptions for more details on the optional
arguments.
"""
# self.COM, etc. needs to be defined first.
if not hasattr(self, 'center_of_mass') or not hasattr(self, 'inertia'):
self.calc_properties()
template = \
"""\
{label} properties:
Mass (kg):
{mass:1.{precision}f}
COM in solid's frame from solid's origin (m):
{rel_center_of_mass}
COM in global frame from bottom center of pelvis (Ls0) (m):
{center_of_mass}
Inertia tensor in solid's frame about solid's COM (kg-m^2):
{rel_inertia}
Inertia tensor in global frame about solid's COM (kg-m^2):
{inertia}
"""
with printoptions(precision=precision, suppress=suppress):
print(template.format(label=self.label,
mass=self.mass,
precision=precision,
rel_center_of_mass=self.rel_center_of_mass,
center_of_mass=self.center_of_mass,
rel_inertia=self.rel_inertia,
inertia=self.inertia))
def draw_mayavi(self, mlabobj, col):
raise NotImplementedError()
class StadiumSolid(Solid):
"""Stadium solid. Derived from the solid class.
"""
def __init__(self, label, density, stadium0, stadium1, height):
"""Defines a stadium solid object. Creates its base object, and
calculates relative/local inertia properties.
Parameters
----------
label : str
Name of the solid.
density : float
Density of the solid (kg/m^3).
stadium0 : :py:class:`Stadium`
Lower stadium of the stadium solid.
stadium1 : :py:class:`Stadium`
Upper stadium of the stadium solid.
height : float
Distance between the lower and upper stadia.
"""
super(StadiumSolid, self).__init__(label, density, height)
self.stads = [stadium0, stadium1]
self.alignment = 'ML'
# if either stadium is oriented anteroposteriorly.
# inertia must be rotated, and the plots must be modified
if (self.stads[0].alignment == 'AP' or
self.stads[1].alignment == 'AP'):
self.alignment = 'AP'
if stadium0.thickness == 0:
self.degenerate_by_t0 = True
else:
self.degenerate_by_t0 = False
self.calc_rel_properties()
self._orig_mesh_points = list()
self._orig_mesh_points.append(self._make_mesh(0))
self._orig_mesh_points.append(self._make_mesh(1))
def calc_rel_properties(self):
"""Calculates mass, relative center of mass, and relative/local
inertia, according to formulae in Appendix B of Yeadon 1990-ii. If the
stadium solid is arranged anteroposteriorly, the inertia is rotated
by pi/2 about the z axis.
"""
# There are two cases of stadium solid degeneracy to consider:
# t0 = 0, and t0 = t1 = 0. The degeneracy arises when b has a
# denominator of 0. The case that t1 = 0 is not an issue, then.
# The way the case of t0 = 0 is handled is by switching the two stadia.
        # Note that this affects how the relative center of mass is set, but
# does not affect the mass or moments of inertia calculations.
# The case in which t0 = t1 = 0, we set b to 1. That is because t = t0
# (1 + bz) is going to be zero anyway, since t0 = 0.
D = self.density
h = self.height
if self.degenerate_by_t0:
# Swap the stadia.
r0 = self.stads[1].radius
t0 = self.stads[1].thickness
r1 = self.stads[0].radius
t1 = self.stads[0].thickness
else:
r0 = self.stads[0].radius
t0 = self.stads[0].thickness
r1 = self.stads[1].radius
t1 = self.stads[1].thickness
a = (r1 - r0) / r0
if t0 == 0:
# Truncated cone, since both thicknesses are zero.
# b can be anything, because t = t0(1 + bz) = (0)(1 + bz) = 0.
b = 1
else:
b = (t1 - t0) / t0
self._mass = D * h * r0 * (4.0 * t0 * self._F1(a,b) +
np.pi * r0 * self._F1(a,a))
zcom = D * (h**2.0) * (4.0 * r0 * t0 * self._F2(a,b) +
np.pi * (r0**2.0) * self._F2(a,a)) / self.mass
if self.degenerate_by_t0 and t0 != 0:
# We swapped the stadia, and it's not a truncated cone.
# Must define this intermediate because zcom above is still what
# must be used for the parallel axis theorem below.
adjusted_zcom = h - zcom
else:
adjusted_zcom = zcom
self._rel_center_of_mass = np.array([[0.0],[0.0],[adjusted_zcom]])
# moments of inertia
Izcom = D * h * (4.0 * r0 * (t0**3.0) * self._F4(a,b) / 3.0 +
np.pi * (r0**2.0) * (t0**2.0) * self._F5(a,b) +
4.0 * (r0**3.0) * t0 * self._F4(b,a) +
np.pi * (r0**4.0) * self._F4(a,a) * 0.5 )
# CAUGHT AN (minor) ERROR IN YEADON'S PAPER HERE. The Dh^3 in the
# formula below is missing from the second formula for Iy^0 on page 73
# of Yeadon1990-ii.
Iy = (D * h * (4.0 * r0 * (t0**3.0) * self._F4(a,b) / 3.0 +
np.pi * (r0**2.0) * (t0**2.0) * self._F5(a,b) +
8.0 * (r0**3.0) * t0*self._F4(b,a) / 3.0 +
np.pi * (r0**4.0) * self._F4(a,a) * 0.25) +
D * (h**3.0) * (4.0 * r0 * t0 * self._F3(a,b) +
np.pi * (r0**2.0) * self._F3(a,a)))
Iycom = Iy - self.mass * (zcom**2.0)
Ix = (D * h * (4.0 * r0 * (t0**3.0) * self._F4(a,b) / 3.0 +
np.pi * (r0**4.0) * self._F4(a,a) * 0.25) +
D * (h**3.0) * (4.0 * r0 * t0 * self._F3(a,b) +
np.pi * (r0**2.0) * self._F3(a,a)))
Ixcom = Ix - self.mass*(zcom**2.0)
self._rel_inertia = np.mat([[Ixcom,0.0,0.0],
[0.0,Iycom,0.0],
[0.0,0.0,Izcom]])
if self.alignment == 'AP':
# rearrange to anterorposterior orientation
self._rel_inertia = inertia.rotate_inertia(
inertia.rotate_space_123([0, 0, np.pi/2]), self.rel_inertia)
def draw_mayavi(self, mlabobj, col):
"""Draws the initial stadium in 3D using MayaVi.
Parameters
----------
        mlabobj : mayavi.mlab module
The MayaVi object we can draw on.
col : tuple (3,)
Color as an rgb tuple, with values between 0 and 1.
"""
self._generate_mesh()
self._mesh = mlabobj.mesh(self._mesh_points['x'], self._mesh_points['y'],
self._mesh_points['z'], color=col, opacity=Solid.alpha)
def _update_mayavi(self):
"""Updates the mesh in MayaVi."""
self._generate_mesh()
self._mesh.mlab_source.set(x=self._mesh_points['x'],
y=self._mesh_points['y'], z=self._mesh_points['z'])
def _generate_mesh(self):
"""Generates grid points for a MayaVi mesh."""
X0, Y0, Z0 = self._make_pos(0)
X1, Y1, Z1 = self._make_pos(1)
Xpts = np.array(np.concatenate( (X0, X1), axis=0))
Ypts = np.array(np.concatenate( (Y0, Y1), axis=0))
Zpts = np.array(np.concatenate( (Z0, Z1), axis=0))
self._mesh_points = {'x': Xpts, 'y': Ypts, 'z': Zpts}
def _make_mesh(self, i):
"""Generates the un-rotated coordinates of the solid. These values are
saved at instantiation.
Parameters
----------
i : int
Identifies which stadium to generate the mesh points for (the top
or bottom).
"""
theta = [np.linspace(0.0,np.pi/2,5)]
        x = self.stads[i].thickness + self.stads[i].radius * np.cos(theta)
        y = self.stads[i].radius * np.sin(theta)
        if self.alignment == 'AP':
            x, y = y, x
xrev = x[:, ::-1]
yrev = y[:, ::-1]
X = np.concatenate( (x, -xrev, -x, xrev), axis=1)
Y = np.concatenate( (y, yrev, -y, -yrev), axis=1)
Z = i*self.height*np.ones((1,20))
POSES = np.concatenate( (X, Y, Z), axis=0)
return POSES
def _make_pos(self, i):
"""Generates coordinates to be used for 3D visualization purposes.
"""
rotated_points = self._rot_mat * self._orig_mesh_points[i]
X, Y, Z = np.vsplit(rotated_points, 3)
X = X + self.pos[0]
Y = Y + self.pos[1]
Z = Z + self.pos[2]
return X, Y, Z
@staticmethod
def _F1(a, b):
"""Integration term. See Yeadon 1990-ii Appendix 2."""
return 1.0 + (a + b) * 0.5 + a * b / 3.0
@staticmethod
def _F2(a, b):
"""Integration term. See Yeadon 1990-ii Appendix 2."""
return 0.5 + (a + b) / 3.0 + a * b * 0.25
@staticmethod
def _F3(a, b):
"""Integration term. See Yeadon 1990-ii Appendix 2."""
return 1.0/3.0 + (a + b) / 4.0 + a * b *0.2
@staticmethod
def _F4(a, b):
"""Integration term. See Yeadon 1990-ii Appendix 2."""
return (1.0 + (a + 3.0 * b) * 0.5 + (a * b + b**2.0) +
(3.0 * a * b**2.0 + b**3.0) * 0.25 + a * (b**3.0) * 0.2)
@staticmethod
def _F5(a, b):
"""Integration term. See Yeadon 1990-ii Appendix 2."""
return (1.0 + (a + b) + (a**2.0 + 4.0 * a * b + b**2.0) / 3.0 +
a * b * (a + b) * 0.5 + (a**2.0) * (b**2.0) * 0.2)
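def _example_check_F1():
    """Hedged numeric check, not part of yeadon: _F1(a, b) above is the
    integral of (1 + a*z)*(1 + b*z) over z in [0, 1] (Yeadon 1990-ii,
    Appendix 2); a and b here are arbitrary taper parameters."""
    a, b = 0.3, -0.2
    z = np.linspace(0.0, 1.0, 100001)
    numeric = np.trapz((1.0 + a * z) * (1.0 + b * z), z)
    assert abs(numeric - StadiumSolid._F1(a, b)) < 1e-8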
class Semiellipsoid(Solid):
"""Semiellipsoid."""
n_mesh_points = 30
def __init__(self,label,density,baseperim,height):
"""Defines a semiellipsoid (solid) object. Creates its base object, and
calculates relative/local inertia properties. The base is circular (its
height axis is pointed upwards), so only 2 parameters are needed to
define the semiellipsoid.
Parameters
----------
label : str
Name of the solid.
density : float
Density of the solid (kg/m^3).
        baseperim : float
            Perimeter of the base, which is circular.
height : float
The remaining minor axis.
"""
super(Semiellipsoid, self).__init__(label, density, height)
self.baseperimeter = baseperim
self.radius = self.baseperimeter/(2.0*np.pi)
self.calc_rel_properties()
self._mesh_x, self._mesh_y, self._mesh_z = self._make_mesh()
def calc_rel_properties(self):
"""Calculates mass, relative center of mass, and relative/local
        inertia, according to somewhat commonly available formulae.
"""
D = self.density
r = self.radius
h = self.height
self._mass = D * 2.0/3.0 * np.pi * (r**2) * h
self._rel_center_of_mass = np.array([[0.0],[0.0],[3.0/8.0 * h]])
Izcom = D * 4.0/15.0 * np.pi * (r**4.0) * h
Iycom = D * np.pi * (2.0/15.0 * (r**2.0) * h * (r**2.0 + h**2.0) -
3.0/32.0 * (r**2.0) * (h**3.0))
Ixcom = Iycom
self._rel_inertia = np.mat([[Ixcom,0.0,0.0],
[0.0,Iycom,0.0],
[0.0,0.0,Izcom]])
def draw_mayavi(self, mlabobj, col):
"""Draws the semiellipsoid in 3D using MayaVi.
Parameters
----------
mlabobj :
The MayaVi object we can draw on.
col : tuple (3,)
Color as an rgb tuple, with values between 0 and 1.
"""
self._generate_mesh()
self._mesh = mlabobj.mesh(*self._mesh_points, color=col,
opacity=Solid.alpha)
def _update_mayavi(self):
"""Updates the mesh in MayaVi."""
self._generate_mesh()
self._mesh.mlab_source.set(x=self._mesh_points[0],
y=self._mesh_points[1], z=self._mesh_points[2])
def _generate_mesh(self):
"""Generates a mesh for MayaVi."""
self._mesh_points = self._make_pos()
def _make_mesh(self):
"""Generates the un-rotated coordinates of the solid. These values are
saved at instantiation.
"""
u = np.linspace(0, 2.0 * np.pi, self.n_mesh_points)
v = np.linspace(0, np.pi / 2.0, self.n_mesh_points)
x = self.radius * np.outer(np.cos(u), np.sin(v))
y = self.radius * np.outer(np.sin(u), np.sin(v))
z = self.height * np.outer(np.ones(np.size(u)), np.cos(v))
return x, y, z
def _make_pos(self):
"""Generates coordinates to be used for 3D visualization purposes,
given the position and orientation of the solid.
"""
x = np.zeros(self._mesh_x.shape)
y = np.zeros(self._mesh_y.shape)
z = np.zeros(self._mesh_z.shape)
for i in np.arange(self.n_mesh_points):
for j in np.arange(self.n_mesh_points):
POS = np.array([
[self._mesh_x[i,j]],
[self._mesh_y[i,j]],
[self._mesh_z[i,j]]])
POS = self._rot_mat * POS
x[i,j] = POS[0,0]
y[i,j] = POS[1,0]
z[i,j] = POS[2,0]
x = self.pos[0,0] + x
y = self.pos[1,0] + y
z = self.pos[2,0] + z
return x, y, z
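def _example_stadium_solid_usage():
    """Hedged usage sketch, not part of yeadon: stack two stadia into a
    StadiumSolid, orient it with an identity rotation at the origin, and
    read off its mass and centre of mass. Density and dimensions here are
    made up for illustration."""
    bottom = Stadium('Ls0: hip joint centre', 'thicknessradius', 0.05, 0.10)
    top = Stadium('Ls1: umbilicus', 'thicknessradius', 0.04, 0.09)
    solid = StadiumSolid('example solid', 1000.0, bottom, top, 0.2)
    solid.set_orientation(np.array([[0.0], [0.0], [0.0]]),
                          np.matrix(np.eye(3)), True)
    return solid.mass, solid.center_of_mass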
|
chrisdembia/yeadon
|
yeadon/solid.py
|
Python
|
bsd-3-clause
| 25,424
|
[
"Mayavi"
] |
7b21a301339733f92cd07f9fcf0d60e22ad072c8a57ea70ff09d805fdccc5b89
|
#!/usr/bin/env python
import vtk
from vtk.test import Testing
from vtk.util.misc import vtkGetDataRoot
VTK_DATA_ROOT = vtkGetDataRoot()
# create a rendering window and renderer
ren1 = vtk.vtkRenderer()
renWin = vtk.vtkRenderWindow()
renWin.AddRenderer(ren1)
renWin.StereoCapableWindowOn()
iren = vtk.vtkRenderWindowInteractor()
iren.SetRenderWindow(renWin)
reader = vtk.vtkGenericEnSightReader()
reader.SetCaseFileName("" + str(VTK_DATA_ROOT) + "/Data/EnSight/elements6-bin.case")
reader.UpdateInformation()
reader.GetOutputInformation(0).Set(vtk.vtkStreamingDemandDrivenPipeline.UPDATE_TIME_STEP(), 0.1)
geom = vtk.vtkGeometryFilter()
geom.SetInputConnection(reader.GetOutputPort())
calc = vtk.vtkArrayCalculator()
calc.SetInputConnection(geom.GetOutputPort())
calc.SetAttributeModeToUsePointData()
calc.SetFunction("pointTensors_XZ - pointTensors_YZ")
calc.AddScalarVariable("pointTensors_XZ","pointTensors", 5)
calc.AddScalarVariable("pointTensors_YZ","pointTensors", 4)
calc.SetResultArrayName("test")
mapper = vtk.vtkHierarchicalPolyDataMapper()
mapper.SetInputConnection(calc.GetOutputPort())
mapper.SetColorModeToMapScalars()
mapper.SetScalarModeToUsePointFieldData()
mapper.ColorByArrayComponent("test",0)
mapper.SetScalarRange(-0.1,0.1)
actor = vtk.vtkActor()
actor.SetMapper(mapper)
# assign our actor to the renderer
ren1.AddActor(actor)
# enable user interface interactor
iren.Initialize()
renWin.Render()
# prevent the tk window from showing up then start the event loop
reader.SetDefaultExecutivePrototype(None)
# --- end of script ---
|
hlzz/dotfiles
|
graphics/VTK-7.0.0/IO/EnSight/Testing/Python/EnSightTensorsInversionBin.py
|
Python
|
bsd-3-clause
| 1,592
|
[
"VTK"
] |
197cd32cf1b5a13225f38465e4fe84cf3a29e50ec05183e60fc0944f474a829b
|
from __future__ import (division as _, print_function as _,
absolute_import as _, unicode_literals as _)
import numpy as np
import scipy as sp
from .degrade_spec import degrade_spec
from scipy import interp
from scipy import ndimage
__all__ = ['convolve_spec']
def convolve_spec(Ahr, lamhr, filters, forceTopHat=False):
# if wl grid is backwards reverse it
    if not lamhr[1] > lamhr[0]:
        lamhr = lamhr[::-1]
        Ahr = Ahr[::-1]
# Sort filters by wavelength
tdict = sorted(filters.__dict__.items(), key=lambda x: x[1].bandcenter)
F = []
for x in tdict:
if (x[1].wl is None) or (x[1].response is None) or forceTopHat:
# Use FWHM with tophat convolution
Fi = tophat_instrument(Ahr, lamhr, x[1].bandcenter, FWHM=x[1].FWHM)
else:
Fi = convolve_filter_response(lamhr, Ahr, x[1].wl, x[1].response, degrade=True)
F.append(Fi)
return np.array(F)
def convolve_filter_response(wlh, fh, wlf, response, degrade=False):
# if wl grid is backwards reverse it
    if not wlh[1] > wlh[0]:
        wlh = wlh[::-1]
        fh = fh[::-1]
Nfilt = len(wlf)
wlmin = np.min(wlf)
wlmax = np.max(wlf)
dlo = wlf[1] - wlf[0]
dhi = wlf[-1] - wlf[-2]
# Select only this wavelength region
wlhr = wlh[(wlh > (wlmin-dlo*10)) & (wlh < (wlmax+dhi*10))]
fhr = fh[(wlh > (wlmin-dlo*10)) & (wlh < (wlmax+dhi*10))]
Nspec = len(wlhr)
if degrade:
try:
F = degrade_spec(fhr, wlhr, wlf)
R = response
except ValueError:
print('Error in degrade_spec, switching to numpy interpolation.')
if Nspec > Nfilt:
# Interpolate filter response to hi-res spectral grid
R = np.interp(wlhr, wlf, response)
F = fhr
else:
# Interpolate spectrum to hi-res filter grid
F = np.interp(wlf, wlhr, fhr)
R = response
elif Nspec > Nfilt:
# Interpolate filter response to hi-res spectral grid
R = np.interp(wlhr, wlf, response)
F = fhr
else:
# Interpolate spectrum to hi-res filter grid
F = np.interp(wlf, wlhr, fhr)
R = response
# Convolve with normalized filter response function
F = F * (R / np.sum(R))
# Integrate by summing
Fc = np.sum(F)
if Fc < 0.0:
print(Fc, wlf, degrade)
return Fc
def tophat_instrument(Fp, wl_hr, wlgrid, FWHM=0.035):
Fratio11=tophatfold(wl_hr, Fp, FWHM)
Fratio=interp(wlgrid,wl_hr,Fratio11)
return Fratio
def tophatfold(lam, flux, FWHM=0.035):
lammin=min(lam)
lammax=max(lam)
dlambda=FWHM/17.
interlam=np.arange(lammin,lammax,dlambda)
interflux=interp(interlam,lam,flux)
    #convolve flux array with a top-hat (boxcar) filter via uniform_filter
fold=sp.ndimage.filters.uniform_filter(interflux,size=17)
#interpolate back to original grid
fluxfold=interp(lam,interlam,fold)
return fluxfold
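# Hedged usage sketch, not part of coronagraph: run the top-hat instrument
# model on a made-up hi-res spectrum and sample it onto a coarse grid.
def _example_tophat():
    wl_hr = np.linspace(0.4, 1.0, 2000)       # hi-res wavelengths [um]
    Fp = 1.0 + 0.1 * np.sin(40.0 * wl_hr)     # synthetic planet spectrum
    wlgrid = np.linspace(0.45, 0.95, 20)      # coarse instrument grid
    return tophat_instrument(Fp, wl_hr, wlgrid, FWHM=0.035)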
|
jlustigy/coronagraph
|
coronagraph/convolve_spec.py
|
Python
|
mit
| 3,041
|
[
"Gaussian"
] |
95d1ef428454a31e35ef0a5a33228ad09b00a5fa7b24978c34bbb897714da7d9
|
# Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
# Autocompletion config for YouCompleteMe in Chromium.
#
# USAGE:
#
# 1. Install YCM [https://github.com/Valloric/YouCompleteMe]
# (Googlers should check out [go/ycm])
#
# 2. Create a symbolic link to this file called .ycm_extra_conf.py in the
# directory above your Chromium checkout (i.e. next to your .gclient file).
#
# cd src
# ln -rs tools/vim/chromium.ycm_extra_conf.py ../.ycm_extra_conf.py
#
# 3. (optional) Whitelist the .ycm_extra_conf.py from step #2 by adding the
# following to your .vimrc:
#
# let g:ycm_extra_conf_globlist=['<path to .ycm_extra_conf.py>']
#
# You can also add other .ycm_extra_conf.py files you want to use to this
# list to prevent excessive prompting each time you visit a directory
# covered by a config file.
#
# 4. Profit
#
#
# Usage notes:
#
# * You must use ninja & clang to build Chromium.
#
# * You must have built Chromium recently.
#
#
# Hacking notes:
#
# * The purpose of this script is to construct an accurate enough command line
# for YCM to pass to clang so it can build and extract the symbols.
#
# * Right now, we only pull the -I and -D flags. That seems to be sufficient
# for everything I've used it for.
#
# * That whole ninja & clang thing? We could support other configs if someone
# were willing to write the correct commands and a parser.
#
# * This has only been tested on Linux and macOS.
import os
import os.path
import re
import shlex
import subprocess
import sys
# Flags from YCM's default config.
_default_flags = [
'-DUSE_CLANG_COMPLETER',
'-std=c++14',
'-x',
'c++',
]
_header_alternates = ('.cc', '.cpp', '.c', '.mm', '.m')
_extension_flags = {
'.m': ['-x', 'objective-c'],
'.mm': ['-x', 'objective-c++'],
}
def PathExists(*args):
return os.path.exists(os.path.join(*args))
def FindChromeSrcFromFilename(filename):
"""Searches for the root of the Chromium checkout.
Simply checks parent directories until it finds .gclient and src/.
Args:
filename: (String) Path to source file being edited.
Returns:
(String) Path of 'src/', or None if unable to find.
"""
curdir = os.path.normpath(os.path.dirname(filename))
while not (
os.path.basename(curdir) == 'src' and PathExists(curdir, 'DEPS') and
(PathExists(curdir, '..', '.gclient') or PathExists(curdir, '.git'))):
nextdir = os.path.normpath(os.path.join(curdir, '..'))
if nextdir == curdir:
return None
curdir = nextdir
return curdir
def GetDefaultSourceFile(chrome_root, filename):
"""Returns the default source file to use as an alternative to |filename|.
Compile flags used to build the default source file is assumed to be a
close-enough approximation for building |filename|.
Args:
chrome_root: (String) Absolute path to the root of Chromium checkout.
filename: (String) Absolute path to the source file.
Returns:
(String) Absolute path to substitute source file.
"""
blink_root = os.path.join(chrome_root, 'third_party', 'WebKit')
if filename.startswith(blink_root):
return os.path.join(blink_root, 'Source', 'core', 'CoreInitializer.cpp')
else:
if 'test.' in filename:
return os.path.join(chrome_root, 'base', 'logging_unittest.cc')
return os.path.join(chrome_root, 'base', 'logging.cc')
def GetNinjaBuildOutputsForSourceFile(out_dir, filename):
"""Returns a list of build outputs for filename.
The list is generated by invoking 'ninja -t query' tool to retrieve a list of
inputs and outputs of |filename|. This list is then filtered to only include
.o and .obj outputs.
Args:
out_dir: (String) Absolute path to ninja build output directory.
filename: (String) Absolute path to source file.
Returns:
(List of Strings) List of target names. Will return [] if |filename| doesn't
yield any .o or .obj outputs.
"""
# Ninja needs the path to the source file relative to the output build
# directory.
rel_filename = os.path.relpath(filename, out_dir)
p = subprocess.Popen(
['ninja', '-C', out_dir, '-t', 'query', rel_filename],
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
universal_newlines=True)
stdout, _ = p.communicate()
if p.returncode != 0:
return []
# The output looks like:
# ../../relative/path/to/source.cc:
# outputs:
  # obj/relative/path/to/target.source.o
# obj/some/other/target2.source.o
# another/target.txt
#
outputs_text = stdout.partition('\n outputs:\n')[2]
output_lines = [line.strip() for line in outputs_text.split('\n')]
return [
target for target in output_lines
if target and (target.endswith('.o') or target.endswith('.obj'))
]
def GetClangCommandLineForNinjaOutput(out_dir, build_target):
"""Returns the Clang command line for building |build_target|
Asks ninja for the list of commands used to build |filename| and returns the
final Clang invocation.
Args:
out_dir: (String) Absolute path to ninja build output directory.
build_target: (String) A build target understood by ninja
Returns:
(String or None) Clang command line or None if a Clang command line couldn't
be determined.
"""
p = subprocess.Popen(
['ninja', '-v', '-C', out_dir, '-t', 'commands', build_target],
stdout=subprocess.PIPE,
universal_newlines=True)
stdout, stderr = p.communicate()
if p.returncode != 0:
return None
# Ninja will return multiple build steps for all dependencies up to
# |build_target|. The build step we want is the last Clang invocation, which
# is expected to be the one that outputs |build_target|.
for line in reversed(stdout.split('\n')):
if 'clang' in line:
return line
return None
def GetClangCommandLineFromNinjaForSource(out_dir, filename):
"""Returns a Clang command line used to build |filename|.
The same source file could be built multiple times using different tool
chains. In such cases, this command returns the first Clang invocation. We
currently don't prefer one toolchain over another. Hopefully the tool chain
corresponding to the Clang command line is compatible with the Clang build
used by YCM.
Args:
out_dir: (String) Absolute path to Chromium checkout.
filename: (String) Absolute path to source file.
Returns:
(String or None): Command line for Clang invocation using |filename| as a
source. Returns None if no such command line could be found.
"""
build_targets = GetNinjaBuildOutputsForSourceFile(out_dir, filename)
for build_target in build_targets:
command_line = GetClangCommandLineForNinjaOutput(out_dir, build_target)
if command_line:
return command_line
return None
def GetClangOptionsFromCommandLine(clang_commandline, out_dir,
additional_flags):
"""Extracts relevant command line options from |clang_commandline|
Args:
clang_commandline: (String) Full Clang invocation.
out_dir: (String) Absolute path to ninja build directory. Relative paths in
the command line are relative to |out_dir|.
additional_flags: (List of String) Additional flags to return.
Returns:
(List of Strings) The list of command line flags for this source file. Can
be empty.
"""
clang_flags = [] + additional_flags
def abspath(path):
return os.path.normpath(os.path.join(out_dir, path))
# Parse flags that are important for YCM's purposes.
clang_tokens = shlex.split(clang_commandline)
include_pattern = re.compile(r'^(-I|-isystem|-F)(.+)$')
for flag_index, flag in enumerate(clang_tokens):
include_match = include_pattern.match(flag)
if include_match:
# Relative paths need to be resolved, because they're relative to the
# output dir, not the source.
path = abspath(include_match.group(2))
clang_flags.append(include_match.group(1) + path)
elif flag.startswith('-std') or flag == '-nostdinc++':
clang_flags.append(flag)
elif flag.startswith('-march=arm'):
# Value armv7-a of this flag causes a parsing error with a message
# "ClangParseError: Failed to parse the translation unit."
continue
elif flag.startswith('-') and flag[1] in 'DWfmO':
if flag == '-Wno-deprecated-register' or flag == '-Wno-header-guard':
# These flags causes libclang (3.3) to crash. Remove it until things
# are fixed.
continue
clang_flags.append(flag)
elif flag == '-isysroot' or flag == '-isystem' or flag == '-I':
if flag_index + 1 < len(clang_tokens):
clang_flags.append(flag)
clang_flags.append(abspath(clang_tokens[flag_index + 1]))
elif flag.startswith('--sysroot='):
# On Linux we use a sysroot image.
      sysroot_path = flag[len('--sysroot='):]
if sysroot_path.startswith('/'):
clang_flags.append(flag)
else:
clang_flags.append('--sysroot=' + abspath(sysroot_path))
return clang_flags
def GetClangOptionsFromNinjaForFilename(chrome_root, filename):
"""Returns the Clang command line options needed for building |filename|.
Command line options are based on the command used by ninja for building
|filename|. If |filename| is a .h file, uses its companion .cc or .cpp file.
If a suitable companion file can't be located or if ninja doesn't know about
|filename|, then uses default source files in Blink and Chromium for
determining the commandline.
Args:
chrome_root: (String) Path to src/.
filename: (String) Absolute path to source file being edited.
Returns:
(List of Strings) The list of command line flags for this source file. Can
be empty.
"""
if not chrome_root:
return []
# Generally, everyone benefits from including Chromium's src/, because all of
# Chromium's includes are relative to that.
additional_flags = ['-I' + os.path.join(chrome_root)]
  # Version of Clang used to compile Chromium can be newer than the version of
# libclang that YCM uses for completion. So it's possible that YCM's libclang
# doesn't know about some used warning options, which causes compilation
# warnings (and errors, because of '-Werror');
additional_flags.append('-Wno-unknown-warning-option')
sys.path.append(os.path.join(chrome_root, 'tools', 'vim'))
from ninja_output import GetNinjaOutputDirectory
out_dir = GetNinjaOutputDirectory(chrome_root)
basename, extension = os.path.splitext(filename)
if extension == '.h':
candidates = [basename + ext for ext in _header_alternates]
else:
candidates = [filename]
clang_line = None
buildable_extension = extension
for candidate in candidates:
clang_line = GetClangCommandLineFromNinjaForSource(out_dir, candidate)
if clang_line:
buildable_extension = os.path.splitext(candidate)[1]
break
additional_flags += _extension_flags.get(buildable_extension, [])
if not clang_line:
# If ninja didn't know about filename or it's companion files, then try a
# default build target. It is possible that the file is new, or build.ninja
# is stale.
clang_line = GetClangCommandLineFromNinjaForSource(out_dir,
GetDefaultSourceFile(
chrome_root,
filename))
if not clang_line:
return additional_flags
return GetClangOptionsFromCommandLine(clang_line, out_dir, additional_flags)
# The FlagsForFile entry point is deprecated in YCM and has been replaced by
# Settings.
def FlagsForFile(filename):
"""This is the old entry point for YCM. Its interface is fixed.
Args:
filename: (String) Path to source file being edited.
Returns:
(Dictionary)
'flags': (List of Strings) Command line flags.
'do_cache': (Boolean) True if the result should be cached.
"""
return Settings(filename=filename)
def Settings(**kwargs):
filename = kwargs['filename']
abs_filename = os.path.abspath(filename)
chrome_root = FindChromeSrcFromFilename(abs_filename)
clang_flags = GetClangOptionsFromNinjaForFilename(chrome_root, abs_filename)
# If clang_flags could not be determined, then assume that was due to a
# transient failure. Preventing YCM from caching the flags allows us to try to
# determine the flags again.
should_cache_flags_for_file = bool(clang_flags)
final_flags = _default_flags + clang_flags
return {'flags': final_flags, 'do_cache': should_cache_flags_for_file}
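# Hedged usage sketch: this is roughly how YCM invokes the config. The path
# below is made up; with no Chromium checkout found, Settings falls back to
# the default flags and disables caching.
def _example_settings_call():
  result = Settings(filename='/path/to/chromium/src/base/logging.cc')
  return result['flags'], result['do_cache']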
|
scheib/chromium
|
tools/vim/chromium.ycm_extra_conf.py
|
Python
|
bsd-3-clause
| 12,726
|
[
"VisIt"
] |
d9a527cc7c7c4f9316ddc47483e71f93c07e29828e9290ee43f7b5a018a52acf
|
from datetime import datetime
import numpy as np
from netcdf import netcdf as nc
import logging
from linketurbidity import instrument as linke
from noaadem import instrument as dem
from helpers import short
import os
class Cache(object):
def __init__(self, filenames, tile_cut={}, read_only=False):
self._attrs = {}
self.filenames = filenames
self.tile_cut = tile_cut
self.root = nc.tailor(filenames, dimensions=tile_cut,
read_only=read_only)
def __getattr__(self, name):
if name not in self._attrs.keys():
self.load(name)
return self._attrs[name]
def getvar(self, var_name):
return nc.getvar(self.root, var_name)
def load(self, name):
var_name = name[4:] if name[0:4] == 'ref_' else name
key = 'ref_{:s}'.format(var_name)
if key not in self._attrs.keys():
var = self.getvar(var_name)
self._attrs[key] = var
else:
            var = self._attrs[key]
self._attrs[var_name] = var[:]
def dump(self):
for k in self._attrs.keys():
self._attrs.pop(k, None)
nc.close(self.root)
class StaticCache(Cache):
@classmethod
def project_dem(cls, root, lat, lon):
logging.info("Projecting DEM's map... ")
dem_var = nc.getvar(root, 'dem', 'f4', source=lon)
dem_var[:] = dem.obtain(lat[0], lon[0])
@classmethod
def project_linke(cls, root, lat, lon):
logging.info("Projecting Linke's turbidity index... ")
dts = map(lambda m: datetime(2014, m, 15), range(1, 13))
linkes = map(lambda dt: linke.obtain(dt, compressed=True), dts)
linkes = map(lambda l: linke.transform_data(l, lat[0],
lon[0]), linkes)
linkes = np.vstack([[linkes]])
nc.getdim(root, 'months', 12)
linke_var = nc.getvar(root, 'linke', 'f4', ('months', 'yc', 'xc'))
        # Dividing linkes by 20. uncompresses the Linke coefficients so that
        # they can be saved as floats.
linke_var[:] = linkes / 20.
@classmethod
def construct(cls, static_file, ref_filename):
# At first it should have: lat, lon, dem, linke
logging.info("This is the first execution from the deployment... ")
with nc.loader(ref_filename) as root_ref:
with nc.loader(static_file) as root:
lat = nc.getvar(root_ref, 'lat')
lon = nc.getvar(root_ref, 'lon')
nc.getvar(root, 'lat', source=lat)
nc.getvar(root, 'lon', source=lon)
cls.project_dem(root, lat, lon)
cls.project_linke(root, lat, lon)
def __init__(self, static_filename, data_filenames, tile_cut):
if not os.path.exists(static_filename):
StaticCache.construct(static_filename,
data_filenames[0])
super(StaticCache, self).__init__(static_filename, tile_cut)
@property
def dem(self):
if not hasattr(self, '_cached_dem'):
self._cached_dem = nc.getvar(self.root, 'dem')[:]
return self._cached_dem
@property
def linke(self):
if not hasattr(self, '_linke'):
self._linke = nc.getvar(self.root, 'linke')[:]
return self._linke
class OutputCache(Cache):
def __init__(self, product, tile_cut, ref_filenames):
super(OutputCache, self).__init__(ref_filenames,
tile_cut)
self.product = product
self.initialize_variables(self.filenames)
def create_1px_dimensions(self, root):
nc.getdim(root, 'xc_k', 1)
nc.getdim(root, 'yc_k', 1)
nc.getdim(root, 'time', 1)
def initialize_path(self, filenames, images):
if not os.path.exists(self.output_path):
os.makedirs(self.output_path)
self.output = Cache(map(self.get_output_file, self.filenames),
tile_cut=self.tile_cut)
self.root = self.output.root
map(self.create_1px_dimensions, self.root.roots)
self.root.getvar('time', source=images.getvar('time'))
self.root.getvar('cloudindex', 'f4', source=images.getvar('data'))
self.root.getvar('globalradiation', 'f4', source=images.getvar('data'))
def initialize_variables(self, filenames):
self.path = '/'.join(filenames[0].split('/')[0:-1])
self.output_path = self.product
with nc.loader(self.filenames, dimensions=self.tile_cut) as images:
if self.output_path:
self.initialize_path(filenames, images)
else:
data_shape = images.getvar('data').shape
self.time = np.zeros(images.getvar('time').shape)
self.ref_cloudindex = np.zeros(data_shape)
self.cloudindex = self.ref_cloudindex
self.ref_globalradiation = np.zeros(data_shape)
self.globalradiation = self.ref_globalradiation
def get_output_file(self, filename):
return '{:s}/{:s}'.format(self.output_path,
short(filename, None, None))
class memoize(object):
def __init__(self, function):
self.function = function
self.memoized = {}
def __call__(self, *args):
try:
return self.memoized[args]
except KeyError:
self.memoized[args] = self.function(*args)
return self.memoized[args]
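# Hedged usage sketch, not part of the model: memoize keys its cache on the
# positional-argument tuple, so a second identical call is a dict lookup.
@memoize
def _example_slow_square(x):
    return x * x
# _example_slow_square(4) computes 16; calling it again returns the cached 16.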
|
gersolar/solar_radiation_model
|
models/cache.py
|
Python
|
mit
| 5,500
|
[
"NetCDF"
] |
bb73b72bf5cc2f64eb646373ba8154be544ab94b5a86ca2881511dbad81ec48a
|
# -*- coding: utf-8 -*-
# Copyright 2007-2016 The HyperSpy developers
#
# This file is part of HyperSpy.
#
# HyperSpy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# HyperSpy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with HyperSpy. If not, see <http://www.gnu.org/licenses/>.
from __future__ import division
import warnings
import numpy as np
import math
import logging
from hyperspy.misc.utils import stash_active_state
from hyperspy.misc.eds.utils import _get_element_and_line
from hyperspy.models.model1d import Model1D
from hyperspy._signals.eds import EDSSpectrum
from hyperspy.misc.elements import elements as elements_db
from hyperspy.misc.eds import utils as utils_eds
import hyperspy.components1d as create_component
from hyperspy.misc.test_utils import ignore_warning
_logger = logging.getLogger(__name__)
eV2keV = 1000.
sigma2fwhm = 2 * math.sqrt(2 * math.log(2))
def _get_weight(element, line, weight_line=None):
if weight_line is None:
weight_line = elements_db[
element]['Atomic_properties']['Xray_lines'][line]['weight']
return "x * {}".format(weight_line)
def _get_sigma(E, E_ref, units_factor, return_f=False):
"""
Calculates an approximate sigma value, accounting for peak broadening due
to the detector, for a peak at energy E given a known width at a reference
energy.
    The factor 2.5 is a constant derived by Fiori & Newbury, as referenced
    below.
    Parameters
    ----------
    E : float
        Energy of the peak, in the units of the energy axis.
    E_ref : float
        Reference energy at which the peak width is known, in the same units.
    units_factor : float
        1000. if the energy axis is in eV, 1. if it is in keV.
    return_f : bool
        If True, return a function of the reference sigma; otherwise return
        an expression string in terms of ``sig_ref``.
    Returns
    -------
    function or str
        The sigma of the peak at energy E, given the sigma at the reference
        energy.
Notes
-----
This method implements the equation derived by Fiori and Newbury as is
documented in the following:
Fiori, C. E., and Newbury, D. E. (1978). In SEM/1978/I, SEM, Inc.,
AFM O'Hare, Illinois, p. 401.
Goldstein et al. (2003). "Scanning Electron Microscopy & X-ray
Microanalysis", Plenum, third edition, p 315.
"""
energy2sigma_factor = 2.5 / (eV2keV * (sigma2fwhm**2))
if return_f:
return lambda sig_ref: math.sqrt(abs(
energy2sigma_factor * (E - E_ref) * units_factor +
np.power(sig_ref, 2)))
else:
return "sqrt(abs({} * ({} - {}) * {} + sig_ref ** 2))".format(
energy2sigma_factor, E, E_ref, units_factor)
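def _example_detector_broadening():
    """Hedged numeric sketch, not part of HyperSpy's API: propagate a
    130 eV FWHM at Mn Ka (5.8987 keV) to Cu Ka (8.0478 keV) with the
    Fiori & Newbury relation implemented by _get_sigma above; the result
    is roughly 149 eV."""
    sig_ref = (130. / eV2keV) / sigma2fwhm             # sigma at Mn Ka, keV
    f = _get_sigma(8.0478, 5.8987, 1., return_f=True)  # keV energy axis
    return f(sig_ref) * sigma2fwhm * eV2keV            # FWHM at Cu Ka, eV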
def _get_offset(diff):
return "x + {}".format(diff)
def _get_scale(E1, E_ref1, fact):
return "{} + {} * (x - {})".format(E1, fact, E_ref1)
class EDSModel(Model1D):
"""Build and fit a model of an EDS Signal1D.
Parameters
----------
spectrum : an EDSSpectrum (or any EDSSpectrum subclass) instance.
auto_add_lines : boolean
If True, automatically add Gaussians for all X-rays generated
in the energy range by an element, using the edsmodel.add_family_lines
method.
auto_background : boolean
If True, adds automatically a polynomial order 6 to the model,
using the edsmodel.add_polynomial_background method.
Any extra arguments are passed to the Model creator.
Example
-------
>>> m = s.create_model()
>>> m.fit()
>>> m.fit_background()
>>> m.calibrate_energy_axis('resolution')
>>> m.calibrate_xray_lines('energy', ['Au_Ma'])
>>> m.calibrate_xray_lines('sub_weight',['Mn_La'], bound=10)
"""
def __init__(self, spectrum,
auto_background=True,
auto_add_lines=True,
*args, **kwargs):
Model1D.__init__(self, spectrum, *args, **kwargs)
self.xray_lines = list()
self.family_lines = list()
end_energy = self.axes_manager.signal_axes[0].high_value
self.end_energy = min(end_energy, self.signal._get_beam_energy())
self.start_energy = self.axes_manager.signal_axes[0].low_value
self.background_components = list()
if 'dictionary' in kwargs or len(args) > 1:
auto_add_lines = False
auto_background = False
d = args[1] if len(args) > 1 else kwargs['dictionary']
if len(d['xray_lines']) > 0:
self.xray_lines.extend(
[self[name] for name in d['xray_lines']])
if len(d['background_components']) > 0:
self.background_components.extend(
[self[name] for name in d['background_components']])
if auto_background is True:
self.add_polynomial_background()
if auto_add_lines is True:
# Will raise an error if no elements are specified, so check:
if 'Sample.elements' in self.signal.metadata:
self.add_family_lines()
def as_dictionary(self, fullcopy=True):
dic = super(EDSModel, self).as_dictionary(fullcopy)
dic['xray_lines'] = [c.name for c in self.xray_lines]
dic['background_components'] = [c.name for c in
self.background_components]
return dic
@property
def units_factor(self):
units_name = self.axes_manager.signal_axes[0].units
if units_name == 'eV':
return 1000.
elif units_name == 'keV':
return 1.
else:
raise ValueError("Energy units, %s, not supported" %
str(units_name))
@property
def spectrum(self):
return self._signal
@spectrum.setter
def spectrum(self, value):
if isinstance(value, EDSSpectrum):
self._signal = value
else:
raise ValueError(
"This attribute can only contain an EDSSpectrum "
"but an object of type %s was provided" %
str(type(value)))
@property
def _active_xray_lines(self):
return [xray_line for xray_line
in self.xray_lines if xray_line.active]
def add_family_lines(self, xray_lines='from_elements'):
"""Create the Xray-lines instances and configure them appropiately
If a X-ray line is given, all the the lines of the familiy is added.
For instance if Zn Ka is given, Zn Kb is added too. The main lines
(alpha) is added to self.xray_lines
Parameters
-----------
xray_lines: {None, 'from_elements', list of string}
            If None, use the `xray_lines` list in `metadata` if it exists;
            otherwise derive all lines from the elements in `metadata`. If
            'from_elements', add all lines from the elements contained
            in `metadata`. Alternatively, provide an iterable containing
            a list of valid X-ray line symbols (e.g. ('Al_Ka', 'Zn_Ka')).
"""
only_one = False
only_lines = ("Ka", "La", "Ma")
if xray_lines is None or xray_lines == 'from_elements':
if 'Sample.xray_lines' in self.signal.metadata \
and xray_lines != 'from_elements':
xray_lines = self.signal.metadata.Sample.xray_lines
elif 'Sample.elements' in self.signal.metadata:
xray_lines = self.signal._get_lines_from_elements(
self.signal.metadata.Sample.elements,
only_one=only_one,
only_lines=only_lines)
else:
raise ValueError(
"No elements defined, set them with `add_elements`")
components_names = [xr.name for xr in self.xray_lines]
xray_lines = filter(lambda x: x not in components_names, xray_lines)
xray_lines, xray_not_here = self.signal.\
_get_xray_lines_in_spectral_range(xray_lines)
for xray in xray_not_here:
warnings.warn("%s is not in the data energy range." % (xray))
for xray_line in xray_lines:
element, line = utils_eds._get_element_and_line(xray_line)
line_energy, line_FWHM = self.signal._get_line_energy(
xray_line,
FWHM_MnKa='auto')
component = create_component.Gaussian()
component.centre.value = line_energy
component.fwhm = line_FWHM
component.centre.free = False
component.sigma.free = False
component.name = xray_line
self.append(component)
self.xray_lines.append(component)
self[xray_line].A.map[
'values'] = self.signal.isig[line_energy].data * \
line_FWHM / self.signal.axes_manager[-1].scale
self[xray_line].A.map['is_set'] = (
np.ones(self.signal.isig[line_energy].data.shape) == 1)
component.A.ext_force_positive = True
for li in elements_db[element]['Atomic_properties']['Xray_lines']:
if line[0] in li and line != li:
xray_sub = element + '_' + li
if self.signal.\
_get_xray_lines_in_spectral_range(
[xray_sub])[0] != []:
line_energy, line_FWHM = self.signal.\
_get_line_energy(
xray_sub, FWHM_MnKa='auto')
component_sub = create_component.Gaussian()
component_sub.centre.value = line_energy
component_sub.fwhm = line_FWHM
component_sub.centre.free = False
component_sub.sigma.free = False
component_sub.name = xray_sub
component_sub.A.twin_function_expr = _get_weight(
element, li)
component_sub.A.twin = component.A
self.append(component_sub)
self.family_lines.append(component_sub)
self.fetch_stored_values()
@property
def _active_background_components(self):
return [bc for bc in self.background_components
if bc.free_parameters]
def add_polynomial_background(self, order=6):
"""
Add a polynomial background.
the background is added to self.background_components
Parameters
----------
order: int
The order of the polynomial
"""
with ignore_warning(message="The API of the `Polynomial` component"):
background = create_component.Polynomial(order=order, legacy=False)
background.name = 'background_order_' + str(order)
background.isbackground = True
self.append(background)
self.background_components.append(background)
def free_background(self):
"""
Free the yscale of the background components.
"""
for component in self.background_components:
component.set_parameters_free()
def fix_background(self):
"""
Fix the background components.
"""
for component in self._active_background_components:
component.set_parameters_not_free()
def enable_xray_lines(self):
"""Enable the X-ray lines components.
"""
for component in self.xray_lines:
component.active = True
def disable_xray_lines(self):
"""Disable the X-ray lines components.
"""
for component in self._active_xray_lines:
component.active = False
def _make_position_adjuster(self, component, fix_it, show_label):
# Override to ensure formatting of labels of xray lines
super(EDSModel, self)._make_position_adjuster(
component, fix_it, show_label)
if show_label and component in (self.xray_lines + self.family_lines):
label = self._position_widgets[component._position][1]
label.string = (r"$\mathrm{%s}_{\mathrm{%s}}$" %
_get_element_and_line(component.name))
def fit_background(self,
start_energy=None,
end_energy=None,
windows_sigma=(4., 3.),
kind='single',
**kwargs):
"""
Fit the background in the energy range containing no X-ray line.
After the fit, the background is fixed.
Parameters
----------
start_energy : {float, None}
If float, limit the range of energies from the left to the
given value.
end_energy : {float, None}
If float, limit the range of energies from the right to the
given value.
windows_sigma: tuple of two float
The (lower, upper) bounds around each X-ray line, each as a float,
to define the energy range free of X-ray lines.
kind : {'single', 'multi'}
If 'single' fit only the current location. If 'multi'
use multifit.
**kwargs : extra key word arguments
            All extra key word arguments are passed to fit or multifit,
            depending on the value of kind.
See also
--------
free_background
"""
if end_energy is None:
end_energy = self.end_energy
if start_energy is None:
start_energy = self.start_energy
        # deactivate the X-ray lines
self.free_background()
with stash_active_state(self):
self.disable_xray_lines()
self.set_signal_range(start_energy, end_energy)
for component in self:
if component.isbackground is False:
self.remove_signal_range(
component.centre.value -
windows_sigma[0] * component.sigma.value,
component.centre.value +
windows_sigma[1] * component.sigma.value)
if kind == 'single':
self.fit(**kwargs)
if kind == 'multi':
self.multifit(**kwargs)
self.reset_signal_range()
self.fix_background()
def _twin_xray_lines_width(self, xray_lines):
"""
Twin the width of the peaks
The twinning models the energy resolution of the detector
Parameters
----------
xray_lines: list of str or 'all_alpha'
The Xray lines. If 'all_alpha', fit all using all alpha lines
"""
if xray_lines == 'all_alpha':
xray_lines = [compo.name for compo in self.xray_lines]
for i, xray_line in enumerate(xray_lines):
component = self[xray_line]
if i == 0:
component_ref = component
component_ref.sigma.free = True
E_ref = component_ref.centre.value
else:
component.sigma.free = True
E = component.centre.value
component.sigma.twin_inverse_function_expr = _get_sigma(
E_ref, E, self.units_factor)
component.sigma.twin_function_expr = _get_sigma(
E, E_ref, self.units_factor)
def _set_energy_resolution(self, xray_lines, *args, **kwargs):
"""
Adjust the width of all lines and set the fitted energy resolution
to the spectrum
Parameters
----------
xray_lines: list of str or 'all_alpha'
The Xray lines. If 'all_alpha', fit all using all alpha lines
"""
if xray_lines == 'all_alpha':
xray_lines = [compo.name for compo in self.xray_lines]
energy_Mn_Ka, FWHM_MnKa_old = self.signal._get_line_energy('Mn_Ka',
'auto')
FWHM_MnKa_old *= eV2keV / self.units_factor
get_sigma_Mn_Ka = _get_sigma(
energy_Mn_Ka, self[xray_lines[0]].centre.value, self.units_factor,
return_f=True)
FWHM_MnKa = get_sigma_Mn_Ka(self[xray_lines[0]].sigma.value
) * eV2keV / self.units_factor * sigma2fwhm
if FWHM_MnKa < 110:
raise ValueError("FWHM_MnKa of " + str(FWHM_MnKa) +
" smaller than" + "physically possible")
else:
self.signal.set_microscope_parameters(
energy_resolution_MnKa=FWHM_MnKa)
_logger.info("Energy resolution (FWHM at Mn Ka) changed from " +
"{:.2f} to {:.2f} eV".format(
FWHM_MnKa_old, FWHM_MnKa))
for component in self:
if component.isbackground is False:
line_FWHM = self.signal._get_line_energy(
component.name, FWHM_MnKa='auto')[1]
component.fwhm = line_FWHM
def _twin_xray_lines_scale(self, xray_lines):
"""
Twin the scale of the peaks
Parameters
----------
xray_lines: list of str or 'all_alpha'
The Xray lines. If 'all_alpha', fit all using all alpha lines
"""
if xray_lines == 'all_alpha':
xray_lines = [compo.name for compo in self.xray_lines]
ax = self.signal.axes_manager[-1]
ref = []
for i, xray_line in enumerate(xray_lines):
component = self[xray_line]
if i == 0:
component_ref = component
component_ref.centre.free = True
E_ref = component_ref.centre.value
ref.append(E_ref)
else:
component.centre.free = True
E = component.centre.value
fact = float(ax.value2index(E)) / ax.value2index(E_ref)
component.centre.twin_function_expr = _get_scale(
E, E_ref, fact)
component.centre.twin = component_ref.centre
ref.append(E)
return ref
def _set_energy_scale(self, xray_lines, ref):
"""
Adjust the width of all lines and set the fitted energy resolution
to the spectrum
Parameters
----------
xray_lines: list of str or 'all_alpha'
The X-ray lines. If 'all_alpha', fit all using all alpha lines
ref: list of float
The centres, before fitting, of the X-ray lines included
"""
if xray_lines == 'all_alpha':
xray_lines = [compo.name for compo in self.xray_lines]
ax = self.signal.axes_manager[-1]
scale_old = self.signal.axes_manager[-1].scale
ind = np.argsort(np.array(
[compo.centre.value for compo in self.xray_lines]))[-1]
E = self[xray_lines[ind]].centre.value
scale = (ref[ind] - ax.offset) / ax.value2index(E)
ax.scale = scale
for i, xray_line in enumerate(xray_lines):
component = self[xray_line]
component.centre.value = ref[i]
_logger.info("Scale changed from %lf to %lf", scale_old, scale)
def _twin_xray_lines_offset(self, xray_lines):
"""
Twin the offset of the peaks
Parameters
----------
xray_lines: list of str or 'all_alpha'
The Xray lines. If 'all_alpha', fit all using all alpha lines
"""
if xray_lines == 'all_alpha':
xray_lines = [compo.name for compo in self.xray_lines]
ref = []
for i, xray_line in enumerate(xray_lines):
component = self[xray_line]
if i == 0:
component_ref = component
component_ref.centre.free = True
E_ref = component_ref.centre.value
ref.append(E_ref)
else:
component.centre.free = True
E = component.centre.value
diff = E_ref - E
component.centre.twin_function_expr = _get_offset(-diff)
component.centre.twin = component_ref.centre
ref.append(E)
return ref
def _set_energy_offset(self, xray_lines, ref):
"""
Adjust the width of all lines and set the fitted energy resolution
to the spectrum
Parameters
----------
xray_lines: list of str or 'all_alpha'
The Xray lines. If 'all_alpha', fit all using all alpha lines
ref: list of float
The centres, before fitting, of the X-ray lines included
"""
if xray_lines == 'all_alpha':
xray_lines = [compo.name for compo in self.xray_lines]
diff = self[xray_lines[0]].centre.value - ref[0]
offset_old = self.signal.axes_manager[-1].offset
self.signal.axes_manager[-1].offset -= diff
offset = self.signal.axes_manager[-1].offset
_logger.info("Offset changed from %lf to %lf", offset_old, offset)
for i, xray_line in enumerate(xray_lines):
component = self[xray_line]
component.centre.value = ref[i]
def calibrate_energy_axis(self,
calibrate='resolution',
xray_lines='all_alpha',
**kwargs):
"""
Calibrate the resolution, the scale or the offset of the energy axis
by fitting.
Parameters
----------
calibrate: 'resolution' or 'scale' or 'offset'
If 'resolution', fits the width of Gaussians place at all x-ray
lines. The width is given by a model of the detector resolution,
            obtained by extrapolating the `energy_resolution_MnKa` in
            `metadata`.
This method will update the value of `energy_resolution_MnKa`.
If 'scale', calibrate the scale of the energy axis
If 'offset', calibrate the offset of the energy axis
xray_lines: list of str or 'all_alpha'
The Xray lines. If 'all_alpha', fit all using all alpha lines
**kwargs : extra key word arguments
            All extra key word arguments are passed to fit.
"""
if calibrate == 'resolution':
free = self._twin_xray_lines_width
fix = self.fix_xray_lines_width
scale = self._set_energy_resolution
elif calibrate == 'scale':
free = self._twin_xray_lines_scale
fix = self.fix_xray_lines_energy
scale = self._set_energy_scale
elif calibrate == 'offset':
free = self._twin_xray_lines_offset
fix = self.fix_xray_lines_energy
scale = self._set_energy_offset
ref = free(xray_lines=xray_lines)
self.fit(**kwargs)
fix(xray_lines=xray_lines)
scale(xray_lines=xray_lines, ref=ref)
self.update_plot()
def free_sub_xray_lines_weight(self, xray_lines='all', bound=0.01):
"""
        Free the weights of the sub X-ray lines
        Remove the twin on the height of sub-X-ray lines (non-alpha)
Parameters
----------
xray_lines: list of str or 'all'
The Xray lines. If 'all', fit all lines
bounds: float
Bound the height of the peak to a fraction of
its height
"""
def free_twin(component):
component.A.twin = None
component.A.free = True
if component.A.value - bound * component.A.value <= 0:
component.A.bmin = 1e-10
else:
component.A.bmin = component.A.value - \
bound * component.A.value
component.A.bmax = component.A.value + \
bound * component.A.value
component.A.ext_force_positive = True
xray_families = [
utils_eds._get_xray_lines_family(line) for line in xray_lines]
for component in self:
if component.isbackground is False:
if xray_lines == 'all':
free_twin(component)
elif utils_eds._get_xray_lines_family(
component.name) in xray_families:
free_twin(component)
def fix_sub_xray_lines_weight(self, xray_lines='all'):
"""
        Fix the weight of the sub X-ray lines to the main X-ray lines
Establish the twin on the height of sub-Xray lines (non alpha)
"""
def fix_twin(component):
component.A.bmin = 0.0
component.A.bmax = None
element, line = utils_eds._get_element_and_line(component.name)
for li in elements_db[element]['Atomic_properties']['Xray_lines']:
if line[0] in li and line != li:
xray_sub = element + '_' + li
if xray_sub in self:
component_sub = self[xray_sub]
component_sub.A.bmin = 1e-10
component_sub.A.bmax = None
weight_line = component_sub.A.value / component.A.value
component_sub.A.twin_function_expr = _get_weight(
element, li, weight_line)
component_sub.A.twin = component.A
else:
warnings.warn("The X-ray line expected to be in the "
"model was not found")
for component in self.xray_lines:
if xray_lines == 'all' or component.name in xray_lines:
fix_twin(component)
self.fetch_stored_values()
def free_xray_lines_energy(self, xray_lines='all', bound=0.001):
"""
Free the X-ray line energy (shift or centre of the Gaussian)
Parameters
----------
xray_lines: list of str or 'all'
The Xray lines. If 'all', fit all lines
bound: float
the bound around the actual energy, in keV or eV
"""
for component in self:
if component.isbackground is False:
if xray_lines == 'all' or component.name in xray_lines:
component.centre.free = True
component.centre.bmin = component.centre.value - bound
component.centre.bmax = component.centre.value + bound
def fix_xray_lines_energy(self, xray_lines='all'):
"""
Fix the X-ray line energy (shift or centre of the Gaussian)
Parameters
----------
xray_lines: list of str, 'all', or 'all_alpha'
The Xray lines. If 'all', fit all lines. If 'all_alpha' fit all
using all alpha lines.
"""
if xray_lines == 'all_alpha':
xray_lines = [compo.name for compo in self.xray_lines]
for component in self:
if component.isbackground is False:
if xray_lines == 'all' or component.name in xray_lines:
component.centre.twin = None
component.centre.free = False
component.centre.bmin = None
component.centre.bmax = None
def free_xray_lines_width(self, xray_lines='all', bound=0.01):
"""
Free the X-ray line width (sigma of the Gaussian)
Parameters
----------
xray_lines: list of str or 'all'
The Xray lines. If 'all', fit all lines
bound: float
the bound around the actual energy, in keV or eV
"""
for component in self:
if component.isbackground is False:
if xray_lines == 'all' or component.name in xray_lines:
component.sigma.free = True
component.sigma.bmin = component.sigma.value - bound
component.sigma.bmax = component.sigma.value + bound
def fix_xray_lines_width(self, xray_lines='all'):
"""
Fix the X-ray line width (sigma of the Gaussian)
Parameters
----------
xray_lines: list of str, 'all', or 'all_alpha'
The Xray lines. If 'all', fit all lines. If 'all_alpha' fit all
using all alpha lines.
"""
if xray_lines == 'all_alpha':
xray_lines = [compo.name for compo in self.xray_lines]
for component in self:
if component.isbackground is False:
if xray_lines == 'all' or component.name in xray_lines:
component.sigma.twin = None
component.sigma.free = False
component.sigma.bmin = None
component.sigma.bmax = None
def calibrate_xray_lines(self,
calibrate='energy',
xray_lines='all',
bound=1,
kind='single',
**kwargs):
"""
Calibrate individually the X-ray line parameters.
The X-ray line energy, the weight of the sub-lines and the X-ray line
width can be calibrated.
Parameters
----------
calibrate: 'energy' or 'sub_weight' or 'width'
If 'energy', calibrate the X-ray line energy.
If 'sub_weight', calibrate the ratio between the main line
alpha and the other sub-lines of the family
If 'width', calibrate the X-ray line width.
xray_lines: list of str or 'all'
The Xray lines. If 'all', fit all lines
        bound: float
            For 'energy' and 'width', the bound around the actual value,
            in eV. For 'sub_weight', the fraction of the peak height used
            to bound it.
kind : {'single', 'multi'}
If 'single' fit only the current location. If 'multi'
use multifit.
**kwargs : extra key word arguments
All extra key word arguments are passed to fit or
multifit, depending on the value of kind.
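        Examples
        --------
        A hedged sketch; ``m`` is assumed to be a fitted EDS model:
        >>> m.calibrate_xray_lines(calibrate='energy', bound=10, kind='multi')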
"""
if calibrate == 'energy':
bound = (bound / eV2keV) * self.units_factor
free = self.free_xray_lines_energy
fix = self.fix_xray_lines_energy
elif calibrate == 'sub_weight':
free = self.free_sub_xray_lines_weight
fix = self.fix_sub_xray_lines_weight
elif calibrate == 'width':
bound = (bound / eV2keV) * self.units_factor
free = self.free_xray_lines_width
fix = self.fix_xray_lines_width
free(xray_lines=xray_lines, bound=bound)
if kind == 'single':
self.fit(bounded=True, fitter='mpfit', **kwargs)
elif kind == 'multi':
self.multifit(bounded=True, fitter='mpfit', **kwargs)
fix(xray_lines=xray_lines)
def get_lines_intensity(self,
xray_lines=None,
plot_result=False,
only_one=True,
only_lines=("a",),
**kwargs):
"""
Return the fitted intensity of the X-ray lines.
        Return the area under the Gaussian corresponding to each X-ray line
Parameters
----------
xray_lines: {None, list of string}
If None,
if `metadata.Sample.elements.xray_lines` contains a
list of lines use those.
If `metadata.Sample.elements.xray_lines` is undefined
or empty but `metadata.Sample.elements` is defined,
use the same syntax as `add_line` to select a subset of lines
for the operation.
Alternatively, provide an iterable containing
a list of valid X-ray lines symbols.
plot_result : bool
If True, plot the calculated line intensities. If the current
object is a single spectrum it prints the result instead.
only_one : bool
If False, use all the lines of each element in the data spectral
range. If True use only the line at the highest energy
above an overvoltage of 2 (< beam energy / 2).
only_lines : {None, list of strings}
If not None, use only the given lines.
kwargs
The extra keyword arguments for plotting. See
`utils.plot.plot_signals`
Returns
-------
intensities : list
A list containing the intensities as Signal subclasses.
Examples
--------
>>> m.multifit()
>>> m.get_lines_intensity(["C_Ka", "Ta_Ma"])
"""
from hyperspy import utils
intensities = []
if xray_lines is None:
xray_lines = [component.name for component in self.xray_lines]
else:
xray_lines = self.signal._parse_xray_lines(
xray_lines, only_one, only_lines)
xray_lines = list(filter(lambda x: x in [a.name for a in
self], xray_lines))
if len(xray_lines) == 0:
raise ValueError("These X-ray lines are not part of the model.")
for xray_line in xray_lines:
element, line = utils_eds._get_element_and_line(xray_line)
line_energy = self.signal._get_line_energy(xray_line)
data_res = self[xray_line].A.map['values']
if self.axes_manager.navigation_dimension == 0:
data_res = data_res[0]
img = self.signal.isig[0:1].integrate1D(-1)
img.data = data_res
img.metadata.General.title = (
'Intensity of %s at %.2f %s from %s' %
(xray_line,
line_energy,
self.signal.axes_manager.signal_axes[0].units,
self.signal.metadata.General.title))
img.axes_manager.set_signal_dimension(0)
if plot_result and img.axes_manager.signal_dimension == 0:
print("%s at %s %s : Intensity = %.2f"
% (xray_line,
line_energy,
self.signal.axes_manager.signal_axes[0].units,
img.data))
img.metadata.set_item("Sample.elements", ([element]))
img.metadata.set_item("Sample.xray_lines", ([xray_line]))
intensities.append(img)
if plot_result and img.axes_manager.signal_dimension != 0:
utils.plot.plot_signals(intensities, **kwargs)
return intensities
def remove(self, thing):
thing = self._get_component(thing)
if not np.iterable(thing):
thing = [thing, ]
for comp in thing:
if comp in self.xray_lines:
self.xray_lines.remove(comp)
elif comp in self.family_lines:
self.family_lines.remove(comp)
elif comp in self.background_components:
self.background_components.remove(comp)
super().remove(thing)
|
sem-geologist/hyperspy
|
hyperspy/models/edsmodel.py
|
Python
|
gpl-3.0
| 35,363
|
[
"Gaussian"
] |
854e00a8c180cbaad40afb9ea0b35db1d02d6d7ef3ed4728710d6393a8d8658b
|
# -*- coding: utf-8 -*-
"""Test sequences for graphiness.
"""
# Copyright (C) 2004-2019 by
# Aric Hagberg <hagberg@lanl.gov>
# Dan Schult <dschult@colgate.edu>
# Pieter Swart <swart@lanl.gov>
# All rights reserved.
# BSD license.
import heapq
import networkx as nx
__author__ = "\n".join(['Aric Hagberg (hagberg@lanl.gov)',
'Pieter Swart (swart@lanl.gov)',
'Dan Schult (dschult@colgate.edu)'
'Joel Miller (joel.c.miller.research@gmail.com)'
'Ben Edwards'
'Brian Cloteaux <brian.cloteaux@nist.gov>'])
__all__ = ['is_graphical',
'is_multigraphical',
'is_pseudographical',
'is_digraphical',
'is_valid_degree_sequence_erdos_gallai',
'is_valid_degree_sequence_havel_hakimi',
]
def is_graphical(sequence, method='eg'):
"""Returns True if sequence is a valid degree sequence.
A degree sequence is valid if some graph can realize it.
Parameters
----------
sequence : list or iterable container
A sequence of integer node degrees
method : "eg" | "hh" (default: 'eg')
The method used to validate the degree sequence.
"eg" corresponds to the Erdős-Gallai algorithm, and
"hh" to the Havel-Hakimi algorithm.
Returns
-------
valid : bool
True if the sequence is a valid degree sequence and False if not.
Examples
--------
>>> G = nx.path_graph(4)
>>> sequence = (d for n, d in G.degree())
>>> nx.is_graphical(sequence)
True
References
----------
Erdős-Gallai
[EG1960]_, [choudum1986]_
Havel-Hakimi
[havel1955]_, [hakimi1962]_, [CL1996]_
"""
if method == 'eg':
valid = is_valid_degree_sequence_erdos_gallai(list(sequence))
elif method == 'hh':
valid = is_valid_degree_sequence_havel_hakimi(list(sequence))
else:
msg = "`method` must be 'eg' or 'hh'"
raise nx.NetworkXException(msg)
return valid
def _basic_graphical_tests(deg_sequence):
# Sort and perform some simple tests on the sequence
deg_sequence = nx.utils.make_list_of_ints(deg_sequence)
p = len(deg_sequence)
num_degs = [0] * p
dmax, dmin, dsum, n = 0, p, 0, 0
for d in deg_sequence:
# Reject if degree is negative or larger than the sequence length
if d < 0 or d >= p:
raise nx.NetworkXUnfeasible
# Process only the non-zero integers
elif d > 0:
dmax, dmin, dsum, n = max(dmax, d), min(dmin, d), dsum + d, n + 1
num_degs[d] += 1
# Reject sequence if it has odd sum or is oversaturated
if dsum % 2 or dsum > n * (n - 1):
raise nx.NetworkXUnfeasible
return dmax, dmin, dsum, n, num_degs
def is_valid_degree_sequence_havel_hakimi(deg_sequence):
r"""Returns True if deg_sequence can be realized by a simple graph.
The validation proceeds using the Havel-Hakimi theorem.
Worst-case run time is $O(s)$ where $s$ is the sum of the sequence.
Parameters
----------
deg_sequence : list
A list of integers where each element specifies the degree of a node
in a graph.
Returns
-------
valid : bool
True if deg_sequence is graphical and False if not.
Notes
-----
The ZZ condition says that for the sequence d if
.. math::
        |d| \geq \frac{(\max(d) + \min(d) + 1)^2}{4\min(d)}
then d is graphical. This was shown in Theorem 6 in [1]_.
References
----------
.. [1] I.E. Zverovich and V.E. Zverovich. "Contributions to the theory
of graphic sequences", Discrete Mathematics, 105, pp. 292-303 (1992).
[havel1955]_, [hakimi1962]_, [CL1996]_
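    Examples
    --------
    Two quick checks on small sequences:
    >>> nx.is_valid_degree_sequence_havel_hakimi([3, 3, 2, 2, 1, 1])
    True
    >>> nx.is_valid_degree_sequence_havel_hakimi([3, 3, 3, 1])
    False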
"""
try:
dmax, dmin, dsum, n, num_degs = _basic_graphical_tests(deg_sequence)
except nx.NetworkXUnfeasible:
return False
# Accept if sequence has no non-zero degrees or passes the ZZ condition
if n == 0 or 4 * dmin * n >= (dmax + dmin + 1) * (dmax + dmin + 1):
return True
modstubs = [0] * (dmax + 1)
# Successively reduce degree sequence by removing the maximum degree
while n > 0:
# Retrieve the maximum degree in the sequence
while num_degs[dmax] == 0:
dmax -= 1
# If there are not enough stubs to connect to, then the sequence is
# not graphical
if dmax > n - 1:
return False
# Remove largest stub in list
num_degs[dmax], n = num_degs[dmax] - 1, n - 1
# Reduce the next dmax largest stubs
mslen = 0
k = dmax
for i in range(dmax):
while num_degs[k] == 0:
k -= 1
num_degs[k], n = num_degs[k] - 1, n - 1
if k > 1:
modstubs[mslen] = k - 1
mslen += 1
# Add back to the list any non-zero stubs that were removed
for i in range(mslen):
stub = modstubs[i]
num_degs[stub], n = num_degs[stub] + 1, n + 1
return True
def is_valid_degree_sequence_erdos_gallai(deg_sequence):
r"""Returns True if deg_sequence can be realized by a simple graph.
The validation is done using the Erdős-Gallai theorem [EG1960]_.
Parameters
----------
deg_sequence : list
A list of integers
Returns
-------
valid : bool
True if deg_sequence is graphical and False if not.
Notes
-----
This implementation uses an equivalent form of the Erdős-Gallai criterion.
Worst-case run time is $O(n)$ where $n$ is the length of the sequence.
Specifically, a sequence d is graphical if and only if the
sum of the sequence is even and for all strong indices k in the sequence,
.. math::
       \sum_{i=1}^{k} d_i \leq k(k-1) + \sum_{j=k+1}^{n} \min(d_j,k)
= k(n-1) - ( k \sum_{j=0}^{k-1} n_j - \sum_{j=0}^{k-1} j n_j )
A strong index k is any index where d_k >= k and the value n_j is the
number of occurrences of j in d. The maximal strong index is called the
Durfee index.
This particular rearrangement comes from the proof of Theorem 3 in [2]_.
The ZZ condition says that for the sequence d if
.. math::
        |d| \geq \frac{(\max(d) + \min(d) + 1)^2}{4\min(d)}
then d is graphical. This was shown in Theorem 6 in [2]_.
References
----------
.. [1] A. Tripathi and S. Vijay. "A note on a theorem of Erdős & Gallai",
Discrete Mathematics, 265, pp. 417-420 (2003).
.. [2] I.E. Zverovich and V.E. Zverovich. "Contributions to the theory
of graphic sequences", Discrete Mathematics, 105, pp. 292-303 (1992).
[EG1960]_, [choudum1986]_
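    Examples
    --------
    The degree sequence of the complete graph on four nodes is graphical:
    >>> nx.is_valid_degree_sequence_erdos_gallai([3, 3, 3, 3])
    True
    >>> nx.is_valid_degree_sequence_erdos_gallai([3, 3, 3, 1])
    False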
"""
try:
dmax, dmin, dsum, n, num_degs = _basic_graphical_tests(deg_sequence)
except nx.NetworkXUnfeasible:
return False
# Accept if sequence has no non-zero degrees or passes the ZZ condition
if n == 0 or 4 * dmin * n >= (dmax + dmin + 1) * (dmax + dmin + 1):
return True
# Perform the EG checks using the reformulation of Zverovich and Zverovich
k, sum_deg, sum_nj, sum_jnj = 0, 0, 0, 0
for dk in range(dmax, dmin - 1, -1):
if dk < k + 1: # Check if already past Durfee index
return True
if num_degs[dk] > 0:
run_size = num_degs[dk] # Process a run of identical-valued degrees
if dk < k + run_size: # Check if end of run is past Durfee index
run_size = dk - k # Adjust back to Durfee index
sum_deg += run_size * dk
for v in range(run_size):
sum_nj += num_degs[k + v]
sum_jnj += (k + v) * num_degs[k + v]
k += run_size
if sum_deg > k * (n - 1) - k * sum_nj + sum_jnj:
return False
return True
def is_multigraphical(sequence):
"""Returns True if some multigraph can realize the sequence.
Parameters
----------
sequence : list
A list of integers
Returns
-------
valid : bool
True if deg_sequence is a multigraphic degree sequence and False if not.
Notes
-----
The worst-case run time is $O(n)$ where $n$ is the length of the sequence.
References
----------
.. [1] S. L. Hakimi. "On the realizability of a set of integers as
degrees of the vertices of a linear graph", J. SIAM, 10, pp. 496-506
(1962).
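    Examples
    --------
    A sequence with an odd sum can never be realized:
    >>> nx.is_multigraphical([4, 1, 1, 1, 1])
    True
    >>> nx.is_multigraphical([4, 1, 1, 1])
    False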
"""
try:
deg_sequence = nx.utils.make_list_of_ints(sequence)
except nx.NetworkXError:
return False
dsum, dmax = 0, 0
for d in deg_sequence:
if d < 0:
return False
dsum, dmax = dsum + d, max(dmax, d)
if dsum % 2 or dsum < 2 * dmax:
return False
return True
def is_pseudographical(sequence):
"""Returns True if some pseudograph can realize the sequence.
Every nonnegative integer sequence with an even sum is pseudographical
(see [1]_).
Parameters
----------
sequence : list or iterable container
A sequence of integer node degrees
Returns
-------
valid : bool
True if the sequence is a pseudographic degree sequence and False if not.
Notes
-----
The worst-case run time is $O(n)$ where n is the length of the sequence.
References
----------
.. [1] F. Boesch and F. Harary. "Line removal algorithms for graphs
and their degree lists", IEEE Trans. Circuits and Systems, CAS-23(12),
pp. 778-782 (1976).
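    Examples
    --------
    >>> nx.is_pseudographical([1, 2, 3, 4])
    True
    >>> nx.is_pseudographical([1, 2, 4])
    False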
"""
try:
deg_sequence = nx.utils.make_list_of_ints(sequence)
except nx.NetworkXError:
return False
return sum(deg_sequence) % 2 == 0 and min(deg_sequence) >= 0
def is_digraphical(in_sequence, out_sequence):
r"""Returns True if some directed graph can realize the in- and out-degree
sequences.
Parameters
----------
in_sequence : list or iterable container
A sequence of integer node in-degrees
out_sequence : list or iterable container
A sequence of integer node out-degrees
Returns
-------
valid : bool
True if in and out-sequences are digraphic False if not.
Notes
-----
This algorithm is from Kleitman and Wang [1]_.
The worst case runtime is $O(s \times \log n)$ where $s$ and $n$ are the
sum and length of the sequences respectively.
References
----------
.. [1] D.J. Kleitman and D.L. Wang
Algorithms for Constructing Graphs and Digraphs with Given Valences
and Factors, Discrete Mathematics, 6(1), pp. 79-88 (1973)
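    Examples
    --------
    A directed 2-cycle realizes the in- and out-degree sequences [1, 1]:
    >>> nx.is_digraphical([1, 1], [1, 1])
    True
    >>> nx.is_digraphical([1, 0], [0, 0])
    False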
"""
try:
in_deg_sequence = nx.utils.make_list_of_ints(in_sequence)
out_deg_sequence = nx.utils.make_list_of_ints(out_sequence)
except nx.NetworkXError:
return False
# Process the sequences and form two heaps to store degree pairs with
# either zero or non-zero out degrees
sumin, sumout, nin, nout = 0, 0, len(in_deg_sequence), len(out_deg_sequence)
maxn = max(nin, nout)
maxin = 0
if maxn == 0:
return True
stubheap, zeroheap = [], []
for n in range(maxn):
in_deg, out_deg = 0, 0
if n < nout:
out_deg = out_deg_sequence[n]
if n < nin:
in_deg = in_deg_sequence[n]
if in_deg < 0 or out_deg < 0:
return False
sumin, sumout, maxin = sumin + in_deg, sumout + out_deg, max(maxin, in_deg)
if in_deg > 0:
stubheap.append((-1 * out_deg, -1 * in_deg))
elif out_deg > 0:
zeroheap.append(-1 * out_deg)
if sumin != sumout:
return False
heapq.heapify(stubheap)
heapq.heapify(zeroheap)
modstubs = [(0, 0)] * (maxin + 1)
# Successively reduce degree sequence by removing the maximum out degree
while stubheap:
# Take the first value in the sequence with non-zero in degree
(freeout, freein) = heapq.heappop(stubheap)
freein *= -1
if freein > len(stubheap) + len(zeroheap):
return False
# Attach out stubs to the nodes with the most in stubs
mslen = 0
for i in range(freein):
if zeroheap and (not stubheap or stubheap[0][0] > zeroheap[0]):
stubout = heapq.heappop(zeroheap)
stubin = 0
else:
(stubout, stubin) = heapq.heappop(stubheap)
if stubout == 0:
return False
# Check if target is now totally connected
if stubout + 1 < 0 or stubin < 0:
modstubs[mslen] = (stubout + 1, stubin)
mslen += 1
# Add back the nodes to the heap that still have available stubs
for i in range(mslen):
stub = modstubs[i]
if stub[1] < 0:
heapq.heappush(stubheap, stub)
else:
heapq.heappush(zeroheap, stub[0])
if freeout < 0:
heapq.heappush(zeroheap, freeout)
return True
|
sserrot/champion_relationships
|
venv/Lib/site-packages/networkx/algorithms/graphical.py
|
Python
|
mit
| 13,029
|
[
"Brian"
] |
39ed33bc8a99ae7cbdd4fcd4ccbf745e7571b09888e92810c6fbe7cc0a17e909
|
# -*- coding: utf-8 -*-
# Copyright 2016-2021 The pyXem developers
#
# This file is part of pyXem.
#
# pyXem is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# pyXem is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with pyXem. If not, see <http://www.gnu.org/licenses/>.
from warnings import warn
import numpy as np
from transforms3d.euler import mat2euler
import hyperspy.api as hs
from hyperspy.signal import BaseSignal
from orix.quaternion import Rotation
from orix.crystal_map import CrystalMap
from pyxem.signals.diffraction_vectors import generate_marker_inputs_from_peaks
from pyxem.utils.signal import transfer_navigation_axes
from pyxem.utils.indexation_utils import get_nth_best_solution
def crystal_from_vector_matching(z_matches):
"""Takes vector matching results for a single navigation position and
returns the best matching phase and orientation with correlation and
reliability to define a crystallographic map.
Parameters
----------
z_matches : numpy.array
        Vector matching results in an array of shape (m,5) sorted by
total_error (ascending) within each phase, with entries
[phase, R, match_rate, ehkls, total_error]
Returns
-------
results_array : numpy.array
Crystallographic mapping results in an array of shape (3) with entries
[phase, np.array((z, x, z)), dict(metrics)]
"""
if z_matches.shape == (1,): # pragma: no cover
z_matches = z_matches[0]
# Create empty array for results.
results_array = np.empty(3, dtype="object")
# get best matching phase
best_match = get_nth_best_solution(
z_matches, "vector", key="total_error", descending=False
)
results_array[0] = best_match.phase_index
# get best matching orientation Euler angles
results_array[1] = np.rad2deg(mat2euler(best_match.rotation_matrix, "rzxz"))
# get vector matching metrics
metrics = dict()
metrics["match_rate"] = best_match.match_rate
metrics["ehkls"] = best_match.error_hkls
metrics["total_error"] = best_match.total_error
results_array[2] = metrics
return results_array
def _peaks_from_best_template(single_match_result, library, rank=0):
"""Takes a TemplateMatchingResults object and return the associated peaks,
to be used in combination with map().
Parameters
----------
single_match_result : ndarray
An entry in a TemplateMatchingResults.
library : DiffractionLibrary
Diffraction library containing the phases and rotations.
rank : int
Get peaks from nth best orientation (default: 0, best vector match)
Returns
-------
peaks : array
Coordinates of peaks in the matching results object in calibrated units.
"""
best_fit = get_nth_best_solution(single_match_result, "template", rank=rank)
phase_names = list(library.keys())
phase_index = int(best_fit[0])
phase = phase_names[phase_index]
simulation = library.get_library_entry(phase=phase, angle=tuple(best_fit[1:4]))[
"Sim"
]
peaks = simulation.coordinates[:, :2] # cut z
return peaks
def _get_best_match(z):
"""Returns the match with the highest score for a given navigation pixel
Parameters
----------
z : np.array
array with shape (5,n_matches), the 5 elements are phase, alpha, beta, gamma, score
Returns
-------
z_best : np.array
array with shape (5,)
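    Examples
    --------
    A minimal sketch with two candidate matches; the row with the highest
    score (last column) wins:
    >>> z = np.array([[0, 10, 20, 30, 0.5],
    ...               [1, 15, 25, 35, 0.9]])
    >>> float(_get_best_match(z)[-1])
    0.9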
"""
    return z[np.argmax(z[:, -1]), :]
class GenericMatchingResults:
def __init__(self, data):
self.data = hs.signals.Signal2D(data)
def to_crystal_map(self):
"""
        Exports an indexation result with multiple results per navigation
        position to a crystal map with one result per pixel
Returns
-------
orix.CrystalMap
"""
_s = self.data.map(_get_best_match, inplace=False)
""" Gets properties """
phase_id = _s.isig[0].data.flatten()
alpha = _s.isig[1].data.flatten()
beta = _s.isig[2].data.flatten()
gamma = _s.isig[3].data.flatten()
score = _s.isig[4].data.flatten()
""" Gets navigation placements """
xy = np.indices(_s.data.shape[:2])
x = xy[1].flatten()
y = xy[0].flatten()
""" Tidies up so we can put these things into CrystalMap """
euler = np.vstack((alpha, beta, gamma)).T
rotations = Rotation.from_euler(
euler, convention="bunge", direction="crystal2lab"
)
properties = {"score": score}
return CrystalMap(
rotations=rotations, phase_id=phase_id, x=x, y=y, prop=properties
)
class TemplateMatchingResults(GenericMatchingResults):
"""Template matching results containing the top n best matching crystal
phase and orientation at each navigation position with associated metrics.
Examples
--------
Saving the signal containing all potential matches at each pixel
>>> TemplateMatchingResult.data.save("filename")
Exporting the best matches to a crystal map
>>> xmap = TemplateMatchingResult.to_crystal_map()
"""
def plot_best_matching_results_on_signal(
self, signal, library, permanent_markers=True, *args, **kwargs
):
"""Plot the best matching diffraction vectors on a signal.
Parameters
----------
signal : ElectronDiffraction2D
The ElectronDiffraction2D signal object on which to plot the peaks.
This signal must have the same navigation dimensions as the peaks.
library : DiffractionLibrary
Diffraction library containing the phases and rotations
permanent_markers : bool
Permanently save the peaks as markers on the signal
*args :
Arguments passed to signal.plot()
**kwargs :
Keyword arguments passed to signal.plot()
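        Examples
        --------
        A hedged sketch; ``dp`` is assumed to be the indexed
        ElectronDiffraction2D signal and ``lib`` the DiffractionLibrary used
        for matching:
        >>> TemplateMatchingResult.plot_best_matching_results_on_signal(
        ...     dp, library=lib)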
"""
match_peaks = self.data.map(
_peaks_from_best_template, library=library, inplace=False
)
mmx, mmy = generate_marker_inputs_from_peaks(match_peaks)
signal.plot(*args, **kwargs)
for mx, my in zip(mmx, mmy):
m = hs.markers.point(x=mx, y=my, color="red", marker="x")
signal.add_marker(m, plot_marker=True, permanent=permanent_markers)
class VectorMatchingResults(BaseSignal):
"""Vector matching results containing the top n best matching crystal
phase and orientation at each navigation position with associated metrics.
Attributes
----------
vectors : DiffractionVectors
Diffraction vectors indexed.
hkls : BaseSignal
Miller indices associated with each diffraction vector.
"""
_signal_dimension = 0
_signal_type = "vector_matching"
def __init__(self, *args, **kwargs):
BaseSignal.__init__(self, *args, **kwargs)
# self.axes_manager.set_signal_dimension(2)
self.vectors = None
self.hkls = None
def get_crystallographic_map(self, *args, **kwargs):
"""Obtain a crystallographic map specifying the best matching phase and
orientation at each probe position with corresponding metrics.
Returns
-------
cryst_map : Signal2D
Crystallographic mapping results containing the best matching phase
and orientation at each navigation position with associated metrics.
            The Signal at each navigation position is an array of
[phase, np.array((z,x,z)), dict(metrics)]
which defines the phase, orientation as Euler angles in the zxz
convention and metrics associated with the matching.
            Metrics for vector matching results are
'match_rate'
'total_error'
'orientation_reliability'
'phase_reliability'
"""
crystal_map = self.map(
crystal_from_vector_matching, inplace=False, *args, **kwargs
)
crystal_map = transfer_navigation_axes(crystal_map, self)
return crystal_map
def get_indexed_diffraction_vectors(
self, vectors, overwrite=False, *args, **kwargs
):
"""Obtain an indexed diffraction vectors object.
Parameters
----------
vectors : DiffractionVectors
A diffraction vectors object to be indexed.
Returns
-------
indexed_vectors : DiffractionVectors
An indexed diffraction vectors object.
"""
if overwrite is False:
if vectors.hkls is not None:
warn(
"The vectors supplied are already associated with hkls set "
"overwrite=True to replace these hkls."
)
else:
vectors.hkls = self.hkls
elif overwrite is True:
vectors.hkls = self.hkls
return vectors
|
pycrystem/pycrystem
|
pyxem/signals/indexation_results.py
|
Python
|
gpl-3.0
| 9,375
|
[
"CRYSTAL"
] |
60fa15ae69ce9b62cda57474c874c6ba8017e697d57e6278d7444b665cbff712
|
import numpy, os
import matplotlib.pyplot as plt
from labutil.plugins.pwscf import run_qe_pwscf, PWscf_inparam, parse_qe_pwscf_output
from labutil.objects import Struc, Dir, ase2struc, Kpoints, PseudoPotential
from ase.spacegroup import crystal
from ase.io import write
from ase.build import bulk
def make_struc(alat):
"""
Creates the crystal structure using ASE.
:param alat: Lattice parameter in angstrom
:return: structure object converted from ase
"""
fecell = bulk('Fe', 'hcp', a=alat)
    # check what your cell looks like
    # write('s.cif', fecell)
print(fecell, fecell.get_atomic_numbers())
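    # Relabel the second hcp atom as Co (Z=27) so the two sublattices can be
    # given independent starting magnetizations; the 'Co' entry in the
    # pseudopotential dict below reuses the Fe potential.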
fecell.set_atomic_numbers([26, 27])
structure = Struc(ase2struc(fecell))
print(structure.species)
return structure
def compute_energy(alat, nk, ecut):
"""
    Make an input template, select the pseudopotential and structure, and set
    the run path for a spin-polarized (ferromagnetic) SCF calculation
"""
potname = 'Fe.pbe-nd-rrkjus.UPF'
potpath = os.path.join(os.environ['QE_POTENTIALS'], potname)
pseudopots = {'Fe': PseudoPotential(path=potpath, ptype='uspp', element='Fe',
functional='GGA', name=potname),
'Co': PseudoPotential(path=potpath, ptype='uspp', element='Fe',
functional='GGA', name=potname)
}
struc = make_struc(alat=alat)
kpts = Kpoints(gridsize=[nk, nk, nk], option='automatic', offset=False)
dirname = 'Fe_a_{}_ecut_{}_nk_{}'.format(alat, ecut, nk)
runpath = Dir(path=os.path.join(os.environ['WORKDIR'], "Lab3/Problem1", dirname))
input_params = PWscf_inparam({
'CONTROL': {
'calculation': 'scf',
'pseudo_dir': os.environ['QE_POTENTIALS'],
'outdir': runpath.path,
'tstress': True,
'tprnfor': True,
'disk_io': 'none',
},
'SYSTEM': {
'ecutwfc': ecut,
'ecutrho': ecut * 8,
'nspin': 2,
'starting_magnetization(1)': 0.7,
'occupations': 'smearing',
'smearing': 'mp',
'degauss': 0.02
},
'ELECTRONS': {
'diagonalization': 'david',
'mixing_beta': 0.5,
'conv_thr': 1e-7,
},
'IONS': {},
'CELL': {},
})
output_file = run_qe_pwscf(runpath=runpath, struc=struc, pseudopots=pseudopots,
params=input_params, kpoints=kpts, ncpu=2)
output = parse_qe_pwscf_output(outfile=output_file)
return output
def compute_energy_anti(alat, nk, ecut):
"""
    Same as compute_energy, but with opposite starting magnetizations on the
    two sublattices for an antiferromagnetic SCF calculation
"""
potname = 'Fe.pbe-nd-rrkjus.UPF'
potpath = os.path.join(os.environ['QE_POTENTIALS'], potname)
pseudopots = {'Fe': PseudoPotential(path=potpath, ptype='uspp', element='Fe',
functional='GGA', name=potname),
'Co': PseudoPotential(path=potpath, ptype='uspp', element='Fe',
functional='GGA', name=potname)
}
struc = make_struc(alat=alat)
kpts = Kpoints(gridsize=[nk, nk, nk], option='automatic', offset=False)
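    # Note: this dirname pattern matches compute_energy's, so ferromagnetic
    # and antiferromagnetic runs with identical parameters share a directory.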
dirname = 'Fe_a_{}_ecut_{}_nk_{}'.format(alat, ecut, nk)
runpath = Dir(path=os.path.join(os.environ['WORKDIR'], "Lab3/Problem1", dirname))
input_params = PWscf_inparam({
'CONTROL': {
'calculation': 'scf',
'pseudo_dir': os.environ['QE_POTENTIALS'],
'outdir': runpath.path,
'tstress': True,
'tprnfor': True,
'disk_io': 'none',
},
'SYSTEM': {
'ecutwfc': ecut,
'ecutrho': ecut * 8,
'nspin': 2,
'starting_magnetization(1)': 1,
'starting_magnetization(2)': -1,
'occupations': 'smearing',
'smearing': 'mp',
'degauss': 0.02
},
'ELECTRONS': {
'diagonalization': 'david',
'mixing_beta': 0.5,
'conv_thr': 1e-7,
},
'IONS': {},
'CELL': {},
})
output_file = run_qe_pwscf(runpath=runpath, struc=struc, pseudopots=pseudopots,
params=input_params, kpoints=kpts, ncpu=2)
output = parse_qe_pwscf_output(outfile=output_file)
return output
def lattice_scan():
nk = 3
ecut = 30
alat = 3.0
output = compute_energy_anti(alat=alat, ecut=ecut, nk=nk)
print(output)
if __name__ == '__main__':
# put here the function that you actually want to run
lattice_scan()
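# A hedged sketch of a scan over the lattice parameter (illustrative values;
# it assumes the dict returned by parse_qe_pwscf_output carries an 'energy'
# entry). Left commented out so the script's behavior is unchanged:
#
# def lattice_scan_alat():
#     alats = numpy.linspace(2.2, 2.8, 7)
#     energies = []
#     for alat in alats:
#         output = compute_energy(alat=alat, ecut=30, nk=3)
#         energies.append(output['energy'])
#     plt.plot(alats, energies, 'o-')
#     plt.xlabel('a (Angstrom)')
#     plt.ylabel('Total energy')
#     plt.show()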
|
bkoz37/labutil
|
samples/lab3_samples/Fe_sample.py
|
Python
|
mit
| 4,650
|
[
"ASE",
"CRYSTAL"
] |
b68a8ed57a5634ae84e842b0b4394b676c2e3b4ff5d02fde545ef7a83e9999ec
|
from setuptools import setup, find_packages
setup(
name='tst',
version="0.14.0",
description='TST Simple Test Runner',
url='http://github.com/daltonserey/tst',
author='Dalton Serey',
author_email='daltonserey@gmail.com',
maintainer='Dalton Serey',
maintainer_email='daltonserey@gmail.com',
license='MIT',
packages=find_packages(),
include_package_data=True,
scripts=[
'bin/runjava',
],
python_requires='>=3.6',
install_requires=[
'pyyaml>=5.4.1',
'requests>=2.6.1',
'cachecontrol[filecache]'
],
entry_points = {
'console_scripts': [
'tst=tst.commands:main',
]
},
zip_safe=False
)
|
daltonserey/tst
|
setup.py
|
Python
|
agpl-3.0
| 717
|
[
"Dalton"
] |
63a472743a1d4039ff3ba4e0570dda77a827169aeacd8fbada347eac8b674132
|
"""
Validators that check component classes for recursive aliases, unresolved
symbols, assignments to reserved mathematical identifiers, and dimensional
consistency.
:copyright: Copyright 2010-2017 by the NineML Python team, see AUTHORS.
:license: BSD-3, see LICENSE for details.
"""
from past.builtins import basestring
from nineml.exceptions import NineMLUsageError, NineMLDimensionError
from nineml.abstraction.expressions.utils import is_valid_lhs_target
from nineml.abstraction.expressions import reserved_identifiers, Expression
from nineml.base import BaseNineMLObject
import operator
import sympy
from sympy import sympify
from nineml.base import SendPortBase
from sympy.logic.boolalg import BooleanTrue, BooleanFalse
from nineml.visitors import BaseVisitor, BaseVisitorWithContext
from functools import reduce
class AliasesAreNotRecursiveComponentValidator(BaseVisitor):
"""Check that aliases are not self-referential"""
def __init__(self, component_class, **kwargs): # @UnusedVariable
BaseVisitor.__init__(self)
self.visit(component_class)
def action_componentclass(self, component_class, **kwargs): # @UnusedVariable @IgnorePep8
unresolved_aliases = dict((a.lhs, a) for a in component_class.aliases)
def alias_contains_unresolved_symbols(alias):
unresolved = [sym for sym in alias.rhs_symbol_names
if sym in unresolved_aliases]
return len(unresolved) != 0
def get_resolved_aliases():
return [alias for alias in list(unresolved_aliases.values())
if not alias_contains_unresolved_symbols(alias)]
        while unresolved_aliases:
resolved_aliases = get_resolved_aliases()
if resolved_aliases:
for r in resolved_aliases:
del unresolved_aliases[r.lhs]
else:
raise NineMLUsageError(
"Unable to resolve all aliases, you may have a recursion "
"issue. Remaining Aliases: {}".format(
','.join(list(unresolved_aliases.keys()))))
def default_action(self, obj, nineml_cls, **kwargs):
pass
class NoUnresolvedSymbolsComponentValidator(BaseVisitor):
"""
Check that aliases and timederivatives are defined in terms of other
parameters, aliases, statevariables and ports
"""
def __init__(self, component_class, **kwargs): # @UnusedVariable @IgnorePep8
BaseVisitor.__init__(self)
self.available_symbols = []
self.aliases = []
self.time_derivatives = []
self.state_assignments = []
self.component_class = component_class
self.visit(component_class)
# Check Aliases:
for alias in self.aliases:
for rhs_atom in alias.rhs_symbol_names:
if rhs_atom in reserved_identifiers:
continue
if rhs_atom not in self.available_symbols:
raise NineMLUsageError(
"Unresolved Symbol in Alias: {} [{}]"
.format(rhs_atom, alias))
# Check TimeDerivatives:
for timederivative in self.time_derivatives:
for rhs_atom in timederivative.rhs_symbol_names:
if (rhs_atom not in self.available_symbols and
rhs_atom not in reserved_identifiers):
raise NineMLUsageError(
"Unresolved Symbol in Time Derivative: {} [{}]"
.format(rhs_atom, timederivative))
# Check StateAssignments
for state_assignment in self.state_assignments:
for rhs_atom in state_assignment.rhs_symbol_names:
if (rhs_atom not in self.available_symbols and
rhs_atom not in reserved_identifiers):
raise NineMLUsageError(
'Unresolved Symbol in Assignment: {} [{}]'
.format(rhs_atom, state_assignment))
def add_symbol(self, symbol):
if symbol in self.available_symbols:
raise NineMLUsageError(
"Duplicate Symbol '{}' found".format(symbol))
self.available_symbols.append(symbol)
def action_alias(self, alias, **kwargs): # @UnusedVariable
if alias in self.component_class.aliases:
self.add_symbol(symbol=alias.lhs)
self.aliases.append(alias)
def action_parameter(self, parameter, **kwargs): # @UnusedVariable @IgnorePep8
self.add_symbol(symbol=parameter.name)
def action_constant(self, constant, **kwargs): # @UnusedVariable @IgnorePep8
self.add_symbol(constant.name)
def default_action(self, obj, nineml_cls, **kwargs):
pass
class CheckNoLHSAssignmentsToMathsNamespaceComponentValidator(
BaseVisitor):
"""
This class checks that there is not a mathematical symbols, (e.g. pi, e)
on the left-hand-side of an equation
"""
def __init__(self, component_class, **kwargs): # @UnusedVariable
BaseVisitor.__init__(self)
self.visit(component_class)
def check_lhssymbol_is_valid(self, symbol):
assert isinstance(symbol, basestring)
if not is_valid_lhs_target(symbol):
err = 'Symbol: %s found on left-hand-side of an equation'
raise NineMLUsageError(err)
def action_parameter(self, parameter, **kwargs): # @UnusedVariable
self.check_lhssymbol_is_valid(parameter.name)
def action_alias(self, alias, **kwargs): # @UnusedVariable
self.check_lhssymbol_is_valid(alias.lhs)
def action_constant(self, constant, **kwargs): # @UnusedVariable
self.check_lhssymbol_is_valid(constant.name)
def default_action(self, obj, nineml_cls, **kwargs):
pass
class DimensionalityComponentValidator(BaseVisitorWithContext):
_RECURSION_MAX = 450
class DeclaredDimensionsVisitor(BaseVisitor):
"""
Inserts declared dimensions into dimensionality dictionary
before inferring dimensions from derived expressions
"""
def __init__(self, component_class, as_class, **kwargs):
BaseVisitor.__init__(self)
self._dimensions = {}
self.as_class = as_class
self.visit(component_class, **kwargs)
def default_action(self, obj, nineml_cls, **kwargs): # @UnusedVariable
if not isinstance(obj, SendPortBase):
try:
self._dimensions[obj.id] = sympify(obj.dimension)
except AttributeError:
# If element doesn't have dimension attribute
try:
self._dimensions[obj.id] = sympify(obj.units.dimension)
except AttributeError:
pass # If element doesn't have units attribute
@property
def dimensions(self):
return self._dimensions
def __init__(self, component_class, **kwargs): # @UnusedVariable @IgnorePep8
BaseVisitorWithContext.__init__(self)
self.component_class = component_class
self._dimensions = self.DeclaredDimensionsVisitor(
component_class, self.as_class, **kwargs).dimensions
self._recursion_count = 0
self.visit(component_class)
def _get_dimensions(self, element):
if isinstance(element, (sympy.Symbol, basestring)):
if element == sympy.Symbol('t'): # Reserved symbol 't'
return sympy.Symbol('t') # representation of the time dim.
name = Expression.symbol_to_str(element)
# Look back through the scope stack to find the referenced
# element
element = None
for context in reversed(self.contexts):
try:
element = context.parent.element(
name, child_types=context.parent_cls.nineml_children)
except KeyError:
pass
if element is None:
raise NineMLUsageError(
"Did not find '{}' in '{}' dynamics class (scopes: {})"
.format(name, self.component_class.name,
list(reversed([c.parent for c in self.contexts]))))
try:
expr = element.rhs
except AttributeError: # for basic sympy expressions
expr = element
try:
dims = self._dimensions[element.id]
self._recursion_count = 0
except (KeyError, AttributeError): # for derived dimensions
if self._recursion_count > self._RECURSION_MAX:
assert False, (
"'{}' is not defined.\nDefined symbols:\n{}"
"\n\nElements:\n{}".format(
expr, "\n".join(
str(e) for e in self._dimensions.keys()),
"\n".join(
str(e) for e in self.component_class.elements(
child_types=(
self.as_class.nineml_children)))
))
self._recursion_count += 1
dims = self._flatten_dims(expr, element)
self._dimensions[element.id] = dims
return dims
def _flatten_dims(self, expr, element):
if isinstance(expr, (sympy.Integer, sympy.Float, int, float)):
dims = 1
elif isinstance(expr, (BooleanTrue, BooleanFalse)):
dims = 0
elif isinstance(expr, sympy.Symbol):
dims = self._get_dimensions(expr)
elif isinstance(expr, sympy.Mul):
dims = reduce(operator.mul,
(self._flatten_dims(a, element) for a in expr.args))
if isinstance(dims, sympy.Basic):
dims = dims.powsimp()
elif isinstance(expr, sympy.Pow):
base = expr.args[0]
exponent = expr.args[1]
exp_dims = self._flatten_dims(exponent, element)
if exp_dims != 1:
raise NineMLDimensionError(self._construct_error_message(
"Exponents are required to be dimensionless arguments,"
" which was not the case in", exp_dims, expr, element))
base_dims = self._flatten_dims(base, element)
if base_dims != 1:
if not isinstance(exponent, (sympy.Integer, int,
sympy.numbers.NegativeOne)):
raise NineMLDimensionError(self._construct_error_message(
"Integer exponents are required for non-dimensionless "
"bases, which was not the case in", exp_dims, expr,
element))
dims = (self._flatten_dims(base, element) ** exponent)
elif isinstance(expr, sympy.Add):
dims = None
for arg in expr.args:
arg_dims = self._flatten_dims(arg, element)
if dims is None:
dims = arg_dims
elif arg_dims - dims != 0:
raise NineMLDimensionError(self._construct_error_message(
"Dimensions do not match within",
' + '.join(str(self._flatten_dims(a, element))
for a in expr.args), expr, element))
elif isinstance(expr, (sympy.GreaterThan, sympy.LessThan,
sympy.StrictGreaterThan, sympy.StrictLessThan)):
lhs_dims = self._flatten_dims(expr.args[0], element)
rhs_dims = self._flatten_dims(expr.args[1], element)
if lhs_dims - rhs_dims != 0:
raise NineMLDimensionError(self._construct_error_message(
"LHS/RHS dimensions of boolean expression",
lhs_dims - rhs_dims, expr, postamble="do not match"))
dims = 0 # boolean expression
elif isinstance(expr, (sympy.And, sympy.Or, sympy.Not)):
for arg in expr.args:
dims = self._flatten_dims(arg, element)
# boolean expression == 0
if dims != 0 and dims != 1: # FIXME: allow dimless until bool params @IgnorePep8
raise NineMLDimensionError(self._construct_error_message(
"Logical expression provided non-boolean argument '{}'"
.format(arg), dims, expr))
elif isinstance(type(expr), sympy.FunctionClass):
for arg in expr.args:
arg_dims = self._flatten_dims(arg, element)
if arg_dims != 1:
raise NineMLDimensionError(self._construct_error_message(
"Dimensionless arguments required for function",
arg_dims, element=element, expr=arg))
dims = 1
elif (type(expr).__name__ in ('Pi',) or
isinstance(expr, sympy.Rational)):
dims = 1
elif isinstance(element, BaseNineMLObject):
assert False, ("{} was not added to pre-determined dimensions"
.format(element))
else:
raise NotImplementedError(
"Unrecognised type {} of expression '{}'"
.format(type(expr), expr))
return dims
def _compare_dimensionality(self, dimension, reference, element, ref_name):
if dimension - sympify(reference) != 0:
raise NineMLDimensionError(self._construct_error_message(
"Dimension of", dimension, element=element,
postamble=(" match that declared for '{}', {} ('{}')".format(
ref_name, sympify(reference), reference.name))))
def _check_send_port(self, port):
# Get the state variable or alias associated with the analog send
# port
element = self.component_class.element(
port.name, child_types=self.as_class.nineml_children)
try:
if element.dimension != port.dimension:
raise NineMLDimensionError(self._construct_error_message(
"Dimension of", sympify(element.dimension),
element=element, postamble=(
"does match attached send port dimension {} ('{}')"
.format(sympify(port.dimension),
port.dimension.name))))
except AttributeError: # If element doesn't have explicit dimension
self._compare_dimensionality(self._get_dimensions(element),
port.dimension, element, port.name)
def _construct_error_message(self, preamble, dimension, expr=None,
element=None, postamble=None):
if expr is None:
try:
expr = element.rhs
symbols = element.rhs_symbol_names
except AttributeError:
expr = ''
symbols = []
else:
symbols = expr.free_symbols
msg = preamble
if element is None:
msg += ' expression'
else:
msg += " {} '{}' in '{}'".format(
element.__class__.__name__, element.key,
self.component_class.name)
msg += ", {} [{}, with {}], ".format(
dimension, expr, ', '.join(
'{}={}'.format(a, self._get_dimensions(a)) for a in symbols))
if postamble is not None:
msg += postamble
return msg
def action_alias(self, alias, **kwargs): # @UnusedVariable
self._get_dimensions(alias)
def default_action(self, obj, nineml_cls, **kwargs):
pass
|
INCF/lib9ML
|
nineml/abstraction/componentclass/visitors/validators/general.py
|
Python
|
bsd-3-clause
| 15,659
|
[
"VisIt"
] |
9f6cd0feaf008ed54f2f1915c2395385c94a367aeb6b8bf84ae36aaf0623cdc3
|
#!/usr/bin/env python
# Copyright 2013 National Renewable Energy Laboratory, Golden CO, USA
# This file is part of NREL MatDB.
#
# NREL MatDB is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# NREL MatDB is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with NREL MatDB. If not, see <http://www.gnu.org/licenses/>.
import copy, json, math, re, sys
import numpy as np
# Reads an xyz format file, determines likely bonds,
# and writes a smol file.
#
#
#
#
# XYZ format:
#
# numberOfAtoms
# description
# sym x y z
# sym x y z
# ...
#
# Example xyz file format:
# 3
# Carbon dioxide
# O -1 0 0
# C 0 0 0
# O 1 0 0
#
#
#
#
#
#
# Smol format (JSON):
#
# The output smol file is a single JSON object with the keys:
#
#   description   Description of the file or the study.
#
#   elementMap    Map from element symbol to a submap of element
#                 attributes (the Element fields whose names start
#                 with 'e', e.g. eradiusAtomic_pm, ecolorHex).
#
#   coordType     'cartesian' or 'direct': the type of coords in
#                 the atom entries.
#
#   posScale      Position scale factor, such that
#                 cartesianPosMatInAngstrom
#                   = posScale * fracPosMat * basisMat
#
#   basisMat      3x3 basis matrix as a list of rows.
#
#   atoms         List of atom entries, each with the keys
#                 aix, asym, directCoords, isReflection.
#
#   bonds         List of [aixa, aixb] pairs, each specifying a bond
#                 between two atoms using the indices of the atoms,
#                 origin 0.
#
# Example smol file (abridged):
#
# {
#   "description": "Carbon dioxide",
#   "coordType": "cartesian",
#   "posScale": 1.0,
#   "basisMat": [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
#   "atoms": [
#     {"aix": 0, "asym": "O", "directCoords": [-1, 0, 0], "isReflection": false},
#     {"aix": 1, "asym": "C", "directCoords": [0, 0, 0], "isReflection": false},
#     {"aix": 2, "asym": "O", "directCoords": [1, 0, 0], "isReflection": false}
#   ],
#   "bonds": [[0, 1], [1, 2]]
# }
class Atom:
def __init__(self, aix, asym, directCoords, elementMap):
self.aix = aix
self.asym = asym
self.directCoords = directCoords
self.elementMap = elementMap
self.acluster = None
self.origAtom = self # Reflected atoms point to the original,
# Used for bond counting.
self.numBond = 0 # Num bonds. Used for origAtoms only.
self.bonds = [] # list of bond atoms
if not self.elementMap.has_key(asym):
throwerr('Atom: invalid symbol: "%s"' % (asym,))
gp = self.elementMap[asym].egroup
# He, Ne, Ar, Kr, Xe, ...
if gp == 18: self.maxBond = 0
# H, Li, Na, K, ... and F, Cl, Br, I, ...
elif gp == 1 or gp == 17: self.maxBond = 1
# Be, Mg, Ca, Sr, ... and O, S, Se, Te, ...
elif gp == 2 or gp == 16: self.maxBond = 2
# Sc, Y, La, Ac, ... or N, P, As, Sb, ...
elif gp == 3 or gp == 15: self.maxBond = 3
else: self.maxBond = float('inf')
self.maxBond = float('inf') # xxx allow any num bonds
def __str__(self):
res = '%s%d' % (self.asym, self.aix,)
return res
def format(self):
res = 'aix: %d asym: %s directCoords: %s' \
% (self.aix, self.asym, self.directCoords,)
res += ' bonds: '
for atomb in self.bonds:
            res += ' %s%d' % (atomb.asym, atomb.aix,)
if self.acluster != None: res += ' acluster: %d' % (self.acluster.cid,)
return res
def addBond( self, atom):
self.bonds.append( atom)
self.origAtom.numBond += 1
if self.origAtom.numBond > self.origAtom.maxBond:
throwerr('too many bonds: atom: %s' % (self.format(),))
class Cluster:
idCounter = 0
def __init__(self):
self.cid = Cluster.idCounter
Cluster.idCounter += 1
self.atoms = []
self.kida = None
self.kidb = None
self.distance = None
def addAtom( self, atom):
self.atoms.append( atom)
atom.acluster = self
def merge(self, cluster, distance):
newClus = Cluster()
newClus.atoms = self.atoms + cluster.atoms
for atom in newClus.atoms:
atom.acluster = newClus
newClus.kida = self
newClus.kidb = cluster
newClus.distance = distance
return newClus
def __str__(self):
atomStgs = [str(atom) for atom in self.atoms]
atomMsg = ' '.join( atomStgs)
if self.distance == None:
if self.kida != None or self.kidb != None:
throwerr('invalid kids')
res = 'cid: %d atoms: %s' % (self.cid, atomMsg,)
else:
res = 'cid: %d kids: %d %d dist: %.4f atoms: %s' \
% (self.cid, self.kida.cid, self.kidb.cid, self.distance, atomMsg,)
return res
class Bond:
def __init__( self, atoma, atomb, distance, ca, cb, cc):
self.atoma = atoma
self.atomb = atomb
self.distance = distance
self.clustera = ca # old A
self.clusterb = cb # old B
self.clusterc = cc # new merged
atoma.addBond( atomb)
atomb.addBond( atoma)
def __str__(self):
eleMap = self.atoma.elementMap
res = '%-5s %-5s dist: %.4f rada: %.3f radb: %.3f ca: %3d cb: %3d cc: %3d' \
% (self.atoma, self.atomb, self.distance,
0.01 * eleMap[self.atoma.asym].eradiusAtomic_pm, # pm to Angstrom
0.01 * eleMap[self.atomb.asym].eradiusAtomic_pm, # pm to Angstrom
self.clustera.cid, self.clusterb.cid, self.clusterc.cid,)
return res
def badparms(msg):
print 'Error: ', msg
print 'Parms:'
print ''
print ' -bugLev Debug level'
print ''
print ' -distType Center or shell: distance from centers or atom shells'
print ''
print ' -posScale Scale factor for input dists, such that'
print ' cartesianPosMatInAngstrom'
print ' = posScale * fracPosMat * basisMat'
print ''
print ' -inFile Name of input xyz format file'
print ''
print ' -outFile Name of output smol format file'
print ''
print 'See pgm doc for info on xyz and smol formats.'
print ''
print 'Example:'
print './xyzToSmol.py -distType shell -posScale 1 -inFile tempa -outFile tempb'
sys.exit(1)
def main():
bugLev = None
distType = None
posScale = None
inFile = None
outFile = None
if len(sys.argv) % 2 != 1: badparms("parms must be key/value pairs")
for iarg in range( 1, len(sys.argv), 2):
key = sys.argv[iarg]
val = sys.argv[iarg+1]
if key == '-bugLev': bugLev = int( val)
elif key == '-distType': distType = val
elif key == '-posScale': posScale = float( val)
elif key == '-inFile': inFile = val
elif key == '-outFile': outFile = val
else: badparms('unknown key: "' + key + '"')
if bugLev == None: badparms('parm not specified: -bugLev')
if distType == None: badparms('parm not specified: -distType')
if posScale == None: badparms('parm not specified: -posScale')
if inFile == None: badparms('parm not specified: -inFile')
if outFile == None: badparms('parm not specified: -outFile')
basisMat = np.array( [
[ 1, 0, 0],
[ 0, 1, 0],
[ 0, 0, 1]
])
elementMap = getElementMap()
(description, atoms) = readXyz( inFile, elementMap)
# The basisMat is such that
# cartesianPosMatInAngstrom = posScale * fracPosMat * basisMat
# For an xyz file, we have only the cartesian coords
# and no basis matrix.
# Normally posScale = 1.0 (for Angstroms),
# but for testing calcDist we use other values.
coordType = 'cartesian'
addReflections( bugLev, atoms, elementMap)
bonds = calcBonds( bugLev, distType, posScale, basisMat, atoms)
smolStg = formatSmol( bugLev, description, elementMap, coordType,
posScale, basisMat, atoms, bonds)
with open( outFile, 'w') as fout:
fout.write( smolStg)
def addReflections( bugLev, atoms, elementMap):
# When an atom coordinate is 0, add a matching
# atom with that coordinate at 1.
# When an atom coordinate is 1, add a matching
# atom with that coordinate at 0.
def reflectIt( atom, jj):
if bugLev >= 5:
print 'reflectIt: entry: jj: %d atom: %s directCoords: %s' \
% (jj, atom, atom.directCoords,)
if jj < 2: reflectIt( atom, jj+1)
epsilon = 0.02
incr = 0
if abs( atom.directCoords[jj]) < epsilon: incr = 1
elif abs( 1 - atom.directCoords[jj]) < epsilon: incr = -1
if incr != 0:
directCoords = copy.copy( atom.directCoords)
directCoords[jj] += incr
if directCoords[jj] < 0: directCoords[jj] = 0
if directCoords[jj] > 1: directCoords[jj] = 1
atomId = len( atoms)
newAtom = Atom( atomId, atom.asym, directCoords, elementMap)
newAtom.origAtom = atom
atoms.append( newAtom)
if bugLev >= 5:
print ' reflectIt: jj: %d atom: %s reflect low new id: %d' \
% (jj, atom, len( atoms),)
if jj < 2: reflectIt( newAtom, jj+1)
natom = len( atoms)
for ii in range(natom):
reflectIt( atoms[ii], 0)
def calcBonds( bugLev, distType, posScale, basisMat, atoms):
if atoms == None or len(atoms) < 2: bonds = []
else: bonds = calcBondsSub( bugLev, distType, posScale, basisMat, atoms)
return bonds
def calcBondsSub( bugLev, distType, posScale, basisMat, atoms):
if type(basisMat).__name__ == 'list':
basisMat = np.array( basisMat)
norig = len( atoms) # num original atoms
if basisMat.shape != (3,3): throwerr('basisMat is not 3x3')
basisMatTranspose = basisMat.T # transpose
# Find all pairwise distances
naug = len( atoms) # num augmented atoms
ndist = naug * (naug-1) / 2
distances = ndist * [None]
kk = 0
for ia in range( naug):
for ib in range( ia+1, naug):
dist = calcDist(
distType, posScale, basisMatTranspose,
atoms[ia], atoms[ib])
distances[kk] = (ia, ib, dist,)
kk += 1
# Sort by increasing distance
    distances.sort( key=lambda x: x[2])
avgDist = sum([dist[2] for dist in distances]) / ndist
# Start with one cluster per atom
clusters = naug * [None]
for ii in range( naug):
clusters[ii] = Cluster()
clusters[ii].addAtom( atoms[ii])
for ii in range( naug):
cluster = clusters[ii]
atom = cluster.atoms[0]
if atom.origAtom != atom:
origCluster = atom.origAtom.acluster
origCluster.merge( cluster, 0)
# For each distance, if the atoms are in different clusters,
# join the two clusters.
bonds = [] # list of tuples (atoma, atomb, distance)
for idist in range( ndist):
(ia, ib, distance) = distances[idist]
atoma = atoms[ia]
atomb = atoms[ib]
if atoma.acluster != atomb.acluster \
and not(
atoma.origAtom.numBond >= atoma.origAtom.maxBond or
atomb.origAtom.numBond >= atomb.origAtom.maxBond):
# Create new cluster by merging atoma, atomb clusters
ca = atoma.acluster
cb = atomb.acluster
cnew = ca.merge( cb, distance)
bonds.append( Bond(atoma, atomb, distance, ca, cb, cnew))
if bugLev >= 1:
print 'norig: %d naug: %d' % (norig, naug,)
if naug > 0:
print '\nClusters'
printCluster( atoms[0].acluster, 0) # indent = 0
print '\nBonds:'
for bond in bonds:
print bond
return bonds
def printCluster( clus, indent):
indstg = indent * ' '
print '%s%s' % (indstg, clus,)
if clus.kida != None: printCluster( clus.kida, indent + 1)
if clus.kidb != None: printCluster( clus.kidb, indent + 1)
def readXyz( inFile, elementMap):
with open( inFile) as fin:
iline = 0
# Get count, having format like:
# 26
(iline, line) = getLine( iline, fin)
if line == None: throwerrLine('empty input file', iline, line)
try: count = int( line)
except ValueError, exc:
throwerrLine('invalid count', iline, line)
# Get description, having format like:
# icsd_098129 Co2 Cu4 O14 Sr4 Y2
(iline, line) = getLine( iline, fin)
if line == None: throwerrLine('invalid description', iline, line)
description = line
# Get coordinate lines, having format like:
# Co 0.2478457246260035 0.8330718276390030 0.4999011568620020
atoms = [] # list of Atom
while True:
(iline, line) = getLine( iline, fin)
if line == None: break
toks = line.split()
if len(toks) != 4: throwerrLine('invalid coord line', iline, line)
atomSym = toks[0]
atomSym = atomSym[0].upper() + atomSym[1:].lower() # make case like 'Si'
try: directCoords = map( float, toks[1:])
except ValueError, exc:
throwerrLine('invalid coord line', iline, line)
atomId = len(atoms)
atoms.append( Atom( atomId, atomSym, directCoords, elementMap))
if len(atoms) != count:
throwerr('count==%d != len(atoms)==%d' % (count, len(atoms),))
return (description, atoms)
def formatSmol( bugLev, description, elementMap, coordType,
posScale, basisMat, atoms, bonds):
if type(basisMat).__name__ == 'list':
basisMat = np.array( basisMat)
# Make a map of element entries. Each entry is a submap.
# For a submap, convert an Element object with attributes
# to a map with keys.
jsonEleMap = {}
syms = set( [atom.asym for atom in atoms]) # unique atom symbols
for sym in syms:
ele = elementMap[sym]
keys = dir( ele)
keys.sort()
jele = {}
for key in keys:
if key.startswith('e'):
jele[key] = getattr( ele, key)
jsonEleMap[sym] = jele
    # Make a JSON-compatible structure for atoms
jsonAtoms = []
for atom in atoms:
isReflection = False
if atom.origAtom != atom: isReflection = True
jatom = dict(
aix = atom.aix,
asym = atom.asym,
directCoords = atom.directCoords,
isReflection = isReflection)
jsonAtoms.append( jatom)
    # Make a JSON-compatible structure for bonds
jsonBonds = []
for bond in bonds:
jbond = [ bond.atoma.aix, bond.atomb.aix]
jsonBonds.append( jbond)
smol = dict(
description = description,
elementMap = jsonEleMap,
coordType = coordType,
posScale = posScale,
basisMat = basisMat.tolist(), # convert numpy array to list for JSON
atoms = jsonAtoms,
bonds = jsonBonds)
smolStg = json.dumps( smol, sort_keys=True, indent=2, separators=(',', ': '))
return smolStg
# cartesianPosMatInAngstrom = posScale * fracPosMat * basisMat
#
# cartVec = posScale * basisMatTranspose * directVec
def calcDist(
distType,
posScale,
basisMatTranspose,
atoma,
atomb):
directa = atoma.directCoords
directb = atomb.directCoords
# Convert to cartesian coords in Angstroms
carta = posScale * np.dot( basisMatTranspose, directa)
cartb = posScale * np.dot( basisMatTranspose, directb)
# Calc distance between centers in Angstrom
if len(carta) != 3 or len(cartb) != 3: throwerr('invalid directCoords')
sumsq = 0
for ii in range(len(carta)):
sumsq += (carta[ii] - cartb[ii]) ** 2
centerDist = math.sqrt( sumsq)
# Subtract radii to get distance between shells
eleMap = atoma.elementMap
ra = 0.01 * eleMap[atoma.asym].eradiusAtomic_pm # pm to Angstrom
rb = 0.01 * eleMap[atomb.asym].eradiusAtomic_pm # pm to Angstrom
if distType == 'center': edgeDist = centerDist
elif distType == 'shell':
# Distance between spheres in Angstrom
edgeDist = centerDist - ra - rb
if edgeDist < 0: edgeDist = 0
else: badparms('invalid distType: %s' % (distType,))
return edgeDist
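# Illustrative sketch (not part of the original script): with
# basisMatTranspose = 4 * np.eye(3), posScale = 1.0, and atoms at direct
# coords (0, 0, 0) and (0.5, 0, 0), the center-to-center distance is
# 2.0 Angstrom. With distType == 'shell' the atomic radii of both
# elements (eradiusAtomic_pm / 100) are then subtracted, floored at 0.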
def getLine( iline, fin):
line = fin.readline() # includes final '\n'
if line == '': line = None
else:
line = line.strip()
iline += 1
print '### getLine exit: iline: %d line: %s' % (iline, line,)
return (iline, line)
def throwerrLine( msg, iline, line):
print '\nError: %s' % (msg,)
print ' iline: %d' % (iline,)
print ' line: %s' % (line,)
sys.exit( 1)
def throwerr( msg):
print '\nError: %s' % (msg,)
sys.exit( 1)
class Element:
def __init__( self,
# Following values are from
# http://en.wikipedia.org/wiki/List_of_elements
# Missing value: -999
enum, # Atomic number for oxygen: 8
esym, # Atomic symbol for oxygen: 'O'
ename, # Name, 'Oxygen'
egroup, # Periodic table group (column) for oxygen: 16
eperiod, # Periodic table period (row) for oxygen: 2
eweighta, # Atomic weight, version A for oxygen: 15.999
edensity_g_cm3, # Density, g/cm3 for oxygen: 0.001429
emelt_K, # Melting point, K for oxygen: 54.36
eboil_K, # Boiling point, K for oxygen: 90.20
eheatCap_J_gK, # Specific heat capacity, J/gK for oxygen: 0.918
eelectroNega, # Electronegativity, version A for oxygen: 3.44
eabundance_mg_kg, # Abundance in the earth's crust,
# mg/kg. for oxygen: 461000
# The special value "0.00099" means "<0.001".
# Following values are from
# http://en.wikipedia.org/wiki/List_of_elements_by_atomic_properties
# Missing value: -999
eweightb, # Atomic weight, version B for oxygen: 15.9994
eelectroNegb, # Electronegativity, version B for oxygen: 3.44
efirstIon_eV, # First ionization energy, eV for oxygen: 13.61806
eradiusAtomic_pm, # Atomic radius, pm for oxygen: 60
erasiusVdw_pm, # Van der Waals radius, pm for oxygen: 152
eradiusCov_pm, # Covalent radius, pm for oxygen: 73
enumValence, # Num valence electrons for oxygen: 6
# Following values are from
# http://jmol.sourceforge.net/jscolors/
ecolorHex, # Color used by Jmol, integer for oxygen: 0xFF0D0D
):
self.enum = enum
self.esym = esym
self.ename = ename
self.egroup = egroup
self.eperiod = eperiod
self.eweighta = eweighta
self.edensity_g_cm3 = edensity_g_cm3
self.emelt_K = emelt_K
self.eboil_K = eboil_K
self.eheatCap_J_gK = eheatCap_J_gK
self.eelectroNega = eelectroNega
self.eabundance_mg_kg = eabundance_mg_kg
self.eweightb = eweightb
self.eelectroNegb = eelectroNegb
self.efirstIon_eV = efirstIon_eV
self.eradiusAtomic_pm = eradiusAtomic_pm
self.erasiusVdw_pm = erasiusVdw_pm
self.eradiusCov_pm = eradiusCov_pm
self.enumValence = enumValence
self.ecolorHex = ecolorHex
def __str__( self):
res = self.esym
return res
def format( self):
res = ''
res += ' enum: %s\n' % (self.enum,)
res += ' esym: %s\n' % (self.esym,)
res += ' ename: %s\n' % (self.ename,)
res += ' egroup: %s\n' % (self.egroup,)
res += ' eperiod: %s\n' % (self.eperiod,)
res += ' eweighta: %s\n' % (self.eweighta,)
res += ' edensity_g_cm3: %s\n' % (self.edensity_g_cm3,)
res += ' emelt_K: %s\n' % (self.emelt_K,)
res += ' eboil_K: %s\n' % (self.eboil_K,)
res += ' eheatCap_J_gK: %s\n' % (self.eheatCap_J_gK,)
res += ' eelectroNega: %s\n' % (self.eelectroNega,)
res += ' eabundance_mg_kg: %s\n' % (self.eabundance_mg_kg,)
res += ' eweightb: %s\n' % (self.eweightb,)
res += ' eelectroNegb: %s\n' % (self.eelectroNegb,)
res += ' efirstIon_eV: %s\n' % (self.efirstIon_eV,)
res += ' eradiusAtomic_pm: %s\n' % (self.eradiusAtomic_pm,)
res += ' erasiusVdw_pm: %s\n' % (self.erasiusVdw_pm,)
res += ' eradiusCov_pm: %s\n' % (self.eradiusCov_pm,)
res += ' enumValence: %s\n' % (self.enumValence,)
res += ' ecolorHex: 0x%06x\n' % (self.ecolorHex,)
return res
def getElementMap():
elements = [
Element( 1 , "H" , "Hydrogen" , 1 , 1, 1.008 , 0.00008988, 14.01 , 20.28 , 14.304, 2.20, 1400 , 1.00794 , 2.2 , 13.59844, 25 , 120 , 38 , 1 , 0xFFFFFF ),
Element( 2 , "He" , "Helium" , 18 , 1, 4.002602 , 0.0001785 , 0.956 , 4.22 , 5.193 , -999, 0.008 , 4.002602 , -999, 24.58741, 31 , 140 , 32 , 2 , 0xD9FFFF ),
Element( 3 , "Li" , "Lithium" , 1 , 2, 6.94 , 0.534 , 453.69 , 1560 , 3.582 , 0.98, 20 , 6.941 , 0.98, 5.39172 , 145 , 182 , 134 , 1 , 0xCC80FF ),
Element( 4 , "Be" , "Beryllium" , 2 , 2, 9.012182 , 1.85 , 1560 , 2742 , 1.825 , 1.57, 2.8 , 9.012182 , 1.57, 9.3227 , 105 , -999, 90 , 2 , 0xC2FF00 ),
Element( 5 , "B" , "Boron" , 13 , 2, 10.81 , 2.34 , 2349 , 4200 , 1.026 , 2.04, 10 , 10.811 , 2.04, 8.29803 , 85 , -999, 82 , 3 , 0xFFB5B5 ),
Element( 6 , "C" , "Carbon" , 14 , 2, 12.011 , 2.267 , 3800 , 4300 , 0.709 , 2.55, 200 , 12.0107 , 2.55, 11.2603 , 70 , 170 , 77 , 4 , 0x909090 ),
Element( 7 , "N" , "Nitrogen" , 15 , 2, 14.007 , 0.0012506 , 63.15 , 77.36 , 1.04 , 3.04, 19 , 14.0067 , 3.04, 14.53414, 65 , 155 , 75 , 5 , 0x3050F8 ),
Element( 8 , "O" , "Oxygen" , 16 , 2, 15.999 , 0.001429 , 54.36 , 90.20 , 0.918 , 3.44, 461000 , 15.9994 , 3.44, 13.61806, 60 , 152 , 73 , 6 , 0xFF0D0D ),
Element( 9 , "F" , "Fluorine" , 17 , 2, 18.9984032 , 0.001696 , 53.53 , 85.03 , 0.824 , 3.98, 585 , 18.9984032 , 3.98, 17.42282, 50 , 147 , 71 , 7 , 0x90E050 ),
Element( 10 , "Ne" , "Neon" , 18 , 2, 20.1797 , 0.0008999 , 24.56 , 27.07 , 1.03 , -999, 0.005 , 20.10097 , -999, 21.5646 , 38 , 154 , 69 , 8 , 0xB3E3F5 ),
Element( 11 , "Na" , "Sodium" , 1 , 3, 22.98976928, 0.971 , 370.87 , 1156 , 1.228 , 0.93, 23600 , 22.98976928, 0.93, 5.13908 , 180 , 227 , 154 , 1 , 0xAB5CF2 ),
Element( 12 , "Mg" , "Magnesium" , 2 , 3, 24.3050 , 1.738 , 923 , 1363 , 1.023 , 1.31, 23300 , 24.3050 , 1.31, 7.64624 , 150 , 173 , 130 , 2 , 0x8AFF00 ),
Element( 13 , "Al" , "Aluminium" , 13 , 3, 26.9815386 , 2.698 , 933.47 , 2792 , 0.897 , 1.61, 82300 , 26.9815386 , 1.61, 5.98577 , 125 , -999, 118 , 3 , 0xBFA6A6 ),
Element( 14 , "Si" , "Silicon" , 14 , 3, 28.085 , 2.3296 , 1687 , 3538 , 0.705 , 1.9 , 282000 , 28.0855 , 1.9 , 8.15169 , 110 , 210 , 111 , 4 , 0xF0C8A0 ),
Element( 15 , "P" , "Phosphorus" , 15 , 3, 30.973762 , 1.82 , 317.30 , 550 , 0.769 , 2.19, 1050 , 30.973762 , 2.19, 10.48669, 100 , 180 , 106 , 5 , 0xFF8000 ),
Element( 16 , "S" , "Sulfur" , 16 , 3, 32.06 , 2.067 , 388.36 , 717.87, 0.71 , 2.58, 350 , 32.065 , 2.58, 10.36001, 100 , 180 , 102 , 6 , 0xFFFF30 ),
Element( 17 , "Cl" , "Chlorine" , 17 , 3, 35.45 , 0.003214 , 171.6 , 239.11, 0.479 , 3.16, 145 , 35.453 , 3.16, 12.96764, 100 , 175 , 99 , 7 , 0x1FF01F ),
Element( 18 , "Ar" , "Argon" , 18 , 3, 39.948 , 0.0017837 , 83.80 , 87.30 , 0.52 , -999, 3.5 , 39.948 , -999, 15.75962, 71 , 188 , 97 , 8 , 0x80D1E3 ),
Element( 19 , "K" , "Potassium" , 1 , 4, 39.0983 , 0.862 , 336.53 , 1032 , 0.757 , 0.82, 20900 , 39.0983 , 0.82, 4.34066 , 220 , 275 , 196 , 1 , 0x8F40D4 ),
Element( 20 , "Ca" , "Calcium" , 2 , 4, 40.078 , 1.54 , 1115 , 1757 , 0.647 , 1 , 41500 , 40.078 , 1 , 6.11316 , 180 , -999, 174 , 2 , 0x3DFF00 ),
Element( 21 , "Sc" , "Scandium" , 3 , 4, 44.955912 , 2.989 , 1814 , 3109 , 0.568 , 1.36, 22 , 44.955912 , 1.36, 6.5615 , 160 , -999, 144 , 2 , 0xE6E6E6 ),
Element( 22 , "Ti" , "Titanium" , 4 , 4, 47.867 , 4.54 , 1941 , 3560 , 0.523 , 1.54, 5650 , 47.867 , 1.54, 6.8281 , 140 , -999, 136 , 2 , 0xBFC2C7 ),
Element( 23 , "V" , "Vanadium" , 5 , 4, 50.9415 , 6.11 , 2183 , 3680 , 0.489 , 1.63, 120 , 50.9415 , 1.63, 6.7462 , 135 , -999, 125 , 2 , 0xA6A6AB ),
Element( 24 , "Cr" , "Chromium" , 6 , 4, 51.9961 , 7.15 , 2180 , 2944 , 0.449 , 1.66, 102 , 51.9961 , 1.66, 6.7665 , 140 , -999, 127 , 1 , 0x8A99C7 ),
Element( 25 , "Mn" , "Manganese" , 7 , 4, 54.938045 , 7.44 , 1519 , 2334 , 0.479 , 1.55, 950 , 54.938045 , 1.55, 7.43402 , 140 , -999, 139 , 2 , 0x9C7AC7 ),
Element( 26 , "Fe" , "Iron" , 8 , 4, 55.845 , 7.874 , 1811 , 3134 , 0.449 , 1.83, 56300 , 55.845 , 1.83, 7.9024 , 140 , -999, 125 , 2 , 0xE06633 ),
Element( 27 , "Co" , "Cobalt" , 9 , 4, 58.933195 , 8.86 , 1768 , 3200 , 0.421 , 1.88, 25 , 58.933195 , 1.91, 7.6398 , 135 , 163 , 121 , 2 , 0xF090A0 ),
Element( 28 , "Ni" , "Nickel" , 10 , 4, 58.6934 , 8.912 , 1728 , 3186 , 0.444 , 1.91, 84 , 58.6934 , 1.88, 7.881 , 135 , -999, 126 , 2 , 0x50D050 ),
Element( 29 , "Cu" , "Copper" , 11 , 4, 63.546 , 8.96 , 1357.77 , 2835 , 0.385 , 1.9 , 60 , 63.546 , 1.9 , 7.72638 , 135 , 140 , 138 , 1 , 0xC88033 ),
Element( 30 , "Zn" , "Zinc" , 12 , 4, 65.38 , 7.134 , 692.88 , 1180 , 0.388 , 1.65, 70 , 65.38 , 1.65, 9.3942 , 135 , 139 , 131 , 2 , 0x7D80B0 ),
Element( 31 , "Ga" , "Gallium" , 13 , 4, 69.723 , 5.907 , 302.9146, 2477 , 0.371 , 1.81, 19 , 69.723 , 1.81, 5.9993 , 130 , 187 , 126 , 3 , 0xC28F8F ),
Element( 32 , "Ge" , "Germanium" , 14 , 4, 72.63 , 5.323 , 1211.40 , 3106 , 0.32 , 2.01, 1.5 , 72.64 , 2.01, 7.8994 , 125 , -999, 122 , 4 , 0x668F8F ),
Element( 33 , "As" , "Arsenic" , 15 , 4, 74.92160 , 5.776 , 1090 , 887 , 0.329 , 2.18, 1.8 , 74.92160 , 2.18, 9.7886 , 115 , 185 , 119 , 5 , 0xBD80E3 ),
Element( 34 , "Se" , "Selenium" , 16 , 4, 78.96 , 4.809 , 453 , 958 , 0.321 , 2.55, 0.05 , 78.96 , 2.55, 9.75238 , 115 , 190 , 116 , 6 , 0xFFA100 ),
Element( 35 , "Br" , "Bromine" , 17 , 4, 79.904 , 3.122 , 265.8 , 332.0 , 0.474 , 2.96, 2.4 , 79.904 , 2.96, 11.81381, 115 , 185 , 114 , 7 , 0xA62929 ),
Element( 36 , "Kr" , "Krypton" , 18 , 4, 83.798 , 0.003733 , 115.79 , 119.93, 0.248 , 3 , 0.00099, 83.798 , 3 , 13.99961, 88 , 202 , 110 , 8 , 0x5CB8D1 ),
Element( 37 , "Rb" , "Rubidium" , 1 , 5, 85.4678 , 1.532 , 312.46 , 961 , 0.363 , 0.82, 90 , 85.4678 , 0.82, 4.17713 , 235 , -999, 211 , 1 , 0x702EB0 ),
Element( 38 , "Sr" , "Strontium" , 2 , 5, 87.62 , 2.64 , 1050 , 1655 , 0.301 , 0.95, 370 , 87.62 , 0.95, 5.6949 , 200 , -999, 192 , 2 , 0x00FF00 ),
Element( 39 , "Y" , "Yttrium" , 3 , 5, 88.90585 , 4.469 , 1799 , 3609 , 0.298 , 1.22, 33 , 88.90585 , 1.22, 6.2171 , 180 , -999, 162 , 2 , 0x94FFFF ),
Element( 40 , "Zr" , "Zirconium" , 4 , 5, 91.224 , 6.506 , 2128 , 4682 , 0.278 , 1.33, 165 , 91.224 , 1.33, 6.6339 , 155 , -999, 148 , 2 , 0x94E0E0 ),
Element( 41 , "Nb" , "Niobium" , 5 , 5, 92.90638 , 8.57 , 2750 , 5017 , 0.265 , 1.6 , 20 , 92.90638 , 1.6 , 6.75885 , 145 , -999, 137 , 1 , 0x73C2C9 ),
Element( 42 , "Mo" , "Molybdenum" , 6 , 5, 95.96 , 10.22 , 2896 , 4912 , 0.251 , 2.16, 1.2 , 95.96 , 2.16, 7.09243 , 145 , -999, 145 , 1 , 0x54B5B5 ),
Element( 43 , "Tc" , "Technetium" , 7 , 5, 98 , 11.5 , 2430 , 4538 , -999 , 1.9 , 0.00099, 98 , 1.9 , 7.28 , 135 , -999, 156 , 1 , 0x3B9E9E ),
Element( 44 , "Ru" , "Ruthenium" , 8 , 5, 101.07 , 12.37 , 2607 , 4423 , 0.238 , 2.2 , 0.001 , 101.07 , 2.2 , 7.3605 , 130 , -999, 126 , 1 , 0x248F8F ),
Element( 45 , "Rh" , "Rhodium" , 9 , 5, 102.90550 , 12.41 , 2237 , 3968 , 0.243 , 2.28, 0.001 , 102.90550 , 2.28, 7.4589 , 135 , -999, 135 , 1 , 0x0A7D8C ),
Element( 46 , "Pd" , "Palladium" , 10 , 5, 106.42 , 12.02 , 1828.05 , 3236 , 0.244 , 2.2 , 0.015 , 106.42 , 2.2 , 8.3369 , 140 , 163 , 131 , -999, 0x006985 ),
Element( 47 , "Ag" , "Silver" , 11 , 5, 107.8682 , 10.501 , 1234.93 , 2435 , 0.235 , 1.93, 0.075 , 107.8682 , 1.93, 7.5762 , 160 , 172 , 153 , 1 , 0xC0C0C0 ),
Element( 48 , "Cd" , "Cadmium" , 12 , 5, 112.411 , 8.69 , 594.22 , 1040 , 0.232 , 1.69, 0.159 , 112.411 , 1.69, 8.9938 , 155 , 158 , 148 , 2 , 0xFFD98F ),
Element( 49 , "In" , "Indium" , 13 , 5, 114.818 , 7.31 , 429.75 , 2345 , 0.233 , 1.78, 0.25 , 114.818 , 1.78, 5.78636 , 155 , 193 , 144 , 3 , 0xA67573 ),
Element( 50 , "Sn" , "Tin" , 14 , 5, 118.710 , 7.287 , 505.08 , 2875 , 0.228 , 1.96, 2.3 , 118.710 , 1.96, 7.3439 , 145 , 217 , 141 , 4 , 0x668080 ),
Element( 51 , "Sb" , "Antimony" , 15 , 5, 121.760 , 6.685 , 903.78 , 1860 , 0.207 , 2.05, 0.2 , 121.760 , 2.05, 8.6084 , 145 , -999, 138 , 5 , 0x9E63B5 ),
Element( 52 , "Te" , "Tellurium" , 16 , 5, 127.60 , 6.232 , 722.66 , 1261 , 0.202 , 2.1 , 0.001 , 127.60 , 2.1 , 9.0096 , 140 , 206 , 135 , 6 , 0xD47A00 ),
Element( 53 , "I" , "Iodine" , 17 , 5, 126.90447 , 4.93 , 386.85 , 457.4 , 0.214 , 2.66, 0.45 , 126.90447 , 2.66, 10.45126, 140 , 198 , 133 , 7 , 0x940094 ),
Element( 54 , "Xe" , "Xenon" , 18 , 5, 131.293 , 0.005887 , 161.4 , 165.03, 0.158 , 2.6 , 0.00099, 131.293 , 2.6 , 12.1298 , 108 , 216 , 130 , 8 , 0x429EB0 ),
Element( 55 , "Cs" , "Caesium" , 1 , 6, 132.9054519, 1.873 , 301.59 , 944 , 0.242 , 0.79, 3 , 132.9054519, 0.79, 3.8939 , 260 , -999, 225 , 1 , 0x57178F ),
Element( 56 , "Ba" , "Barium" , 2 , 6, 137.327 , 3.594 , 1000 , 2170 , 0.204 , 0.89, 425 , 137.327 , 0.89, 5.2117 , 215 , -999, 198 , 2 , 0x00C900 ),
Element( 57 , "La" , "Lanthanum" , -999, 6, 138.90547 , 6.145 , 1193 , 3737 , 0.195 , 1.1 , 39 , 138.90547 , 1.1 , 5.5769 , 195 , -999, 169 , 2 , 0x70D4FF ),
Element( 58 , "Ce" , "Cerium" , -999, 6, 140.116 , 6.77 , 1068 , 3716 , 0.192 , 1.12, 66.5 , 140.116 , 1.12, 5.5387 , 185 , -999, -999, 2 , 0xFFFFC7 ),
Element( 59 , "Pr" , "Praseodymium" , -999, 6, 140.90765 , 6.773 , 1208 , 3793 , 0.193 , 1.13, 9.2 , 140.90765 , 1.13, 5.473 , 185 , -999, -999, 2 , 0xD9FFC7 ),
Element( 60 , "Nd" , "Neodymium" , -999, 6, 144.242 , 7.007 , 1297 , 3347 , 0.19 , 1.14, 41.5 , 144.242 , 1.14, 5.525 , 185 , -999, -999, 2 , 0xC7FFC7 ),
Element( 61 , "Pm" , "Promethium" , -999, 6, 145 , 7.26 , 1315 , 3273 , -999 , -999, 0.00099, 145 , -999, 5.582 , 185 , -999, -999, 2 , 0xA3FFC7 ),
Element( 62 , "Sm" , "Samarium" , -999, 6, 150.36 , 7.52 , 1345 , 2067 , 0.197 , 1.17, 7.05 , 150.36 , 1.17, 5.6436 , 185 , -999, -999, 2 , 0x8FFFC7 ),
Element( 63 , "Eu" , "Europium" , -999, 6, 151.964 , 5.243 , 1099 , 1802 , 0.182 , 1.2 , 2 , 151.964 , -999, 5.6704 , 185 , -999, -999, 2 , 0x61FFC7 ),
Element( 64 , "Gd" , "Gadolinium" , -999, 6, 157.25 , 7.895 , 1585 , 3546 , 0.236 , 1.2 , 6.2 , 157.25 , 1.2 , 6.1501 , 180 , -999, -999, 2 , 0x45FFC7 ),
Element( 65 , "Tb" , "Terbium" , -999, 6, 158.92535 , 8.229 , 1629 , 3503 , 0.182 , 1.2 , 1.2 , 158.92535 , -999, 5.8638 , 175 , -999, -999, 2 , 0x30FFC7 ),
Element( 66 , "Dy" , "Dysprosium" , -999, 6, 162.500 , 8.55 , 1680 , 2840 , 0.17 , 1.22, 5.2 , 162.500 , 1.22, 5.9389 , 175 , -999, -999, 2 , 0x1FFFC7 ),
Element( 67 , "Ho" , "Holmium" , -999, 6, 164.93032 , 8.795 , 1734 , 2993 , 0.165 , 1.23, 1.3 , 164.93032 , 1.23, 6.0215 , 175 , -999, -999, 2 , 0x00FF9C ),
Element( 68 , "Er" , "Erbium" , -999, 6, 167.259 , 9.066 , 1802 , 3141 , 0.168 , 1.24, 3.5 , 167.259 , 1.24, 6.1077 , 175 , -999, -999, 2 , 0x00E675 ),
Element( 69 , "Tm" , "Thulium" , -999, 6, 168.93421 , 9.321 , 1818 , 2223 , 0.16 , 1.25, 0.52 , 168.93421 , 1.25, 6.18431 , 175 , -999, -999, 2 , 0x00D452 ),
Element( 70 , "Yb" , "Ytterbium" , -999, 6, 173.054 , 6.965 , 1097 , 1469 , 0.155 , 1.1 , 3.2 , 173.054 , -999, 6.25416 , 175 , -999, -999, 2 , 0x00BF38 ),
Element( 71 , "Lu" , "Lutetium" , 3 , 6, 174.9668 , 9.84 , 1925 , 3675 , 0.154 , 1.27, 0.8 , 174.9668 , 1.27, 5.4259 , 175 , -999, 160 , 2 , 0x00AB24 ),
Element( 72 , "Hf" , "Hafnium" , 4 , 6, 178.49 , 13.31 , 2506 , 4876 , 0.144 , 1.3 , 3 , 178.49 , 1.3 , 6.82507 , 155 , -999, 150 , 2 , 0x4DC2FF ),
Element( 73 , "Ta" , "Tantalum" , 5 , 6, 180.94788 , 16.654 , 3290 , 5731 , 0.14 , 1.5 , 2 , 180.94788 , 1.5 , 7.5496 , 145 , -999, 138 , 2 , 0x4DA6FF ),
Element( 74 , "W" , "Tungsten" , 6 , 6, 183.84 , 19.25 , 3695 , 5828 , 0.132 , 2.36, 1.3 , 183.84 , 2.36, 7.864 , 135 , -999, 146 , 2 , 0x2194D6 ),
Element( 75 , "Re" , "Rhenium" , 7 , 6, 186.207 , 21.02 , 3459 , 5869 , 0.137 , 1.9 , 0.00099, 186.207 , 1.9 , 7.8335 , 135 , -999, 159 , 2 , 0x267DAB ),
Element( 76 , "Os" , "Osmium" , 8 , 6, 190.23 , 22.61 , 3306 , 5285 , 0.13 , 2.2 , 0.002 , 190.23 , 2.2 , 8.4382 , 130 , -999, 128 , 2 , 0x266696 ),
Element( 77 , "Ir" , "Iridium" , 9 , 6, 192.217 , 22.56 , 2719 , 4701 , 0.131 , 2.2 , 0.001 , 192.217 , 2.2 , 8.967 , 135 , -999, 137 , 2 , 0x175487 ),
Element( 78 , "Pt" , "Platinum" , 10 , 6, 195.084 , 21.46 , 2041.4 , 4098 , 0.133 , 2.28, 0.005 , 195.084 , 2.28, 8.9587 , 135 , 175 , 128 , 1 , 0xD0D0E0 ),
Element( 79 , "Au" , "Gold" , 11 , 6, 196.966569 , 19.282 , 1337.33 , 3129 , 0.129 , 2.54, 0.004 , 196.966569 , 2.54, 9.2255 , 135 , 166 , 144 , 1 , 0xFFD123 ),
Element( 80 , "Hg" , "Mercury" , 12 , 6, 200.59 , 13.5336 , 234.43 , 629.88, 0.14 , 2 , 0.085 , 200.59 , 2 , 10.4375 , 150 , 155 , 149 , 2 , 0xB8B8D0 ),
Element( 81 , "Tl" , "Thallium" , 13 , 6, 204.38 , 11.85 , 577 , 1746 , 0.129 , 1.62, 0.85 , 204.3833 , 1.62, 6.1082 , 190 , 196 , 148 , 3 , 0xA6544D ),
Element( 82 , "Pb" , "Lead" , 14 , 6, 207.2 , 11.342 , 600.61 , 2022 , 0.129 , 2.33, 14 , 207.2 , 2.33, 7.41666 , 180 , 202 , 147 , 4 , 0x575961 ),
Element( 83 , "Bi" , "Bismuth" , 15 , 6, 208.98040 , 9.807 , 544.7 , 1837 , 0.122 , 2.02, 0.009 , 208.98040 , 2.02, 7.2856 , 160 , -999, 146 , 5 , 0x9E4FB5 ),
Element( 84 , "Po" , "Polonium" , 16 , 6, 209 , 9.32 , 527 , 1235 , -999 , 2 , 0.00099, 209 , 2 , 8.417 , 190 , -999, -999, 6 , 0xAB5C00 ),
Element( 85 , "At" , "Astatine" , 17 , 6, 210 , 7 , 575 , 610 , -999 , 2.2 , 0.00099, 210 , 2.2 , -999 , -999, -999, -999, 7 , 0x754F45 ),
Element( 86 , "Rn" , "Radon" , 18 , 6, 222 , 0.00973 , 202 , 211.3 , 0.094 , -999, 0.00099, 222 , -999, 10.7485 , 120 , -999, 145 , 8 , 0x428296 ),
Element( 87 , "Fr" , "Francium" , 1 , 7, 223 , 1.87 , 300 , 950 , -999 , 0.7 , 0.00099, 223 , 0.7 , 4.0727 , -999, -999, -999, 1 , 0x420066 ),
Element( 88 , "Ra" , "Radium" , 2 , 7, 226 , 5.5 , 973 , 2010 , -999 , 0.9 , 0.00099, 226 , 0.9 , 5.2784 , 215 , -999, -999, 2 , 0x007D00 ),
Element( 89 , "Ac" , "Actinium" , -999, 7, 227 , 10.07 , 1323 , 3471 , 0.12 , 1.1 , 0.00099, 227 , 1.1 , 5.17 , 195 , -999, -999, 2 , 0x70ABFA ),
Element( 90 , "Th" , "Thorium" , -999, 7, 232.03806 , 11.72 , 2115 , 5061 , 0.113 , 1.3 , 9.6 , 232.03806 , 1.3 , 6.3067 , 180 , -999, -999, 2 , 0x00BAFF ),
Element( 91 , "Pa" , "Protactinium" , -999, 7, 231.03588 , 15.37 , 1841 , 4300 , -999 , 1.5 , 0.00099, 231.03588 , 1.5 , 5.89 , 180 , -999, -999, 2 , 0x00A1FF ),
Element( 92 , "U" , "Uranium" , -999, 7, 238.02891 , 18.95 , 1405.3 , 4404 , 0.116 , 1.38, 2.7 , 238.02891 , 1.38, 6.19405 , 175 , 186 , -999, 2 , 0x008FFF ),
Element( 93 , "Np" , "Neptunium" , -999, 7, 237 , 20.45 , 917 , 4273 , -999 , 1.36, 0.00099, 237 , 1.36, 6.2657 , 175 , -999, -999, 2 , 0x0080FF ),
Element( 94 , "Pu" , "Plutonium" , -999, 7, 244 , 19.84 , 912.5 , 3501 , -999 , 1.28, 0.00099, 244 , 1.28, 6.0262 , 175 , -999, -999, 2 , 0x006BFF ),
Element( 95 , "Am" , "Americium" , -999, 7, 243 , 13.69 , 1449 , 2880 , -999 , 1.3 , 0.00099, 243 , 1.3 , 5.9738 , 175 , -999, -999, 2 , 0x545CF2 ),
Element( 96 , "Cm" , "Curium" , -999, 7, 247 , 13.51 , 1613 , 3383 , -999 , 1.3 , 0.00099, 247 , 1.3 , 5.9915 , -999, -999, -999, 2 , 0x785CE3 ),
Element( 97 , "Bk" , "Berkelium" , -999, 7, 247 , 14.79 , 1259 , -999 , -999 , 1.3 , 0.00099, 247 , 1.3 , 6.1979 , -999, -999, -999, 2 , 0x8A4FE3 ),
Element( 98 , "Cf" , "Californium" , -999, 7, 251 , 15.1 , 1173 , -999 , -999 , 1.3 , 0.00099, 251 , 1.3 , 6.2817 , -999, -999, -999, 2 , 0xA136D4 ),
Element( 99 , "Es" , "Einsteinium" , -999, 7, 252 , 13.5 , 1133 , -999 , -999 , 1.3 , 0 , 252 , 1.3 , 6.42 , -999, -999, -999, 2 , 0xB31FD4 ),
Element( 100, "Fm" , "Fermium" , -999, 7, 257 , -999 , 1800 , -999 , -999 , 1.3 , 0 , 257 , 1.3 , 6.5 , -999, -999, -999, 2 , 0xB31FBA ),
Element( 101, "Md" , "Mendelevium" , -999, 7, 258 , -999 , 1100 , -999 , -999 , 1.3 , 0 , 258 , 1.3 , 6.58 , -999, -999, -999, 2 , 0xB30DA6 ),
Element( 102, "No" , "Nobelium" , -999, 7, 259 , -999 , 1100 , -999 , -999 , 1.3 , 0 , 259 , 1.3 , 6.65 , -999, -999, -999, 2 , 0xBD0D87 ),
Element( 103, "Lr" , "Lawrencium" , 3 , 7, 262 , -999 , 1900 , -999 , -999 , 1.3 , 0 , 262 , -999, 4.9 , -999, -999, -999, 3 , 0xC70066 ),
Element( 104, "Rf" , "Rutherfordium", 4 , 7, 267 , -999 , -999 , -999 , -999 , -999, 0 , 261 , -999, 6 , -999, -999, -999, -999, 0xCC0059 ),
Element( 105, "Db" , "Dubnium" , 5 , 7, 268 , -999 , -999 , -999 , -999 , -999, 0 , 262 , -999, -999 , -999, -999, -999, -999, 0xD1004F ),
Element( 106, "Sg" , "Seaborgium" , 6 , 7, 269 , -999 , -999 , -999 , -999 , -999, 0 , 263 , -999, -999 , -999, -999, -999, -999, 0xD90045 ),
Element( 107, "Bh" , "Bohrium" , 7 , 7, 270 , -999 , -999 , -999 , -999 , -999, 0 , 262 , -999, -999 , -999, -999, -999, -999, 0xE00038 ),
Element( 108, "Hs" , "Hassium" , 8 , 7, 269 , -999 , -999 , -999 , -999 , -999, 0 , 265 , -999, -999 , -999, -999, -999, -999, 0xE6002E ),
Element( 109, "Mt" , "Meitnerium" , 9 , 7, 278 , -999 , -999 , -999 , -999 , -999, 0 , 266 , -999, -999 , -999, -999, -999, -999, 0xEB0026 ),
Element( 110, "Ds" , "Darmstadtium" , 10 , 7, 281 , -999 , -999 , -999 , -999 , -999, 0 , 269 , -999, -999 , -999, -999, -999, -999, 0xFF0000 ),
Element( 111, "Rg" , "Roentgenium" , 11 , 7, 281 , -999 , -999 , -999 , -999 , -999, 0 , 272 , -999, -999 , -999, -999, -999, -999, 0xFF0000 ),
Element( 112, "Cn" , "Copernicium" , 12 , 7, 285 , -999 , -999 , -999 , -999 , -999, 0 , 277 , -999, -999 , -999, -999, -999, -999, 0xFF0000 ),
Element( 113, "Uut", "Ununtrium" , 13 , 7, 286 , -999 , -999 , -999 , -999 , -999, 0 , 283 , -999, -999 , -999, -999, -999, -999, 0xFF0000 ),
Element( 114, "Fl" , "Flerovium" , 14 , 7, 289 , -999 , -999 , -999 , -999 , -999, 0 , 285 , -999, -999 , -999, -999, -999, -999, 0xFF0000 ),
Element( 115, "Uup", "Ununpentium" , 15 , 7, 288 , -999 , -999 , -999 , -999 , -999, 0 , 287 , -999, -999 , -999, -999, -999, -999, 0xFF0000 ),
Element( 116, "Lv" , "Livermorium" , 16 , 7, 293 , -999 , -999 , -999 , -999 , -999, 0 , 289 , -999, -999 , -999, -999, -999, -999, 0xFF0000 ),
Element( 117, "Uus", "Ununseptium" , 17 , 7, 294 , -999 , -999 , -999 , -999 , -999, 0 , 291 , -999, -999 , -999, -999, -999, -999, 0xFF0000 ),
Element( 118, "Uuo", "Ununoctium" , 18 , 7, 294 , -999 , -999 , -999 , -999 , -999, 0 , 293 , -999, -999 , -999, -999, -999, -999, 0xFF0000 ),
]
print 'getElementMap: eles[0]:\n%s' % (elements[0].format(),)
elementMap = {}
for ele in elements:
elementMap[ele.esym] = ele
return elementMap
if __name__ == '__main__': main()
|
ssullivangh/nrelmat
|
web/TestMako/testmako/xyzToSmol.py
|
Python
|
gpl-3.0
| 44,701
|
[
"Jmol"
] |
dbc7a39b080810083894dc9c6968e0ec640b1625f8b78b25dbf3d8631a347a33
|
#!/usr/bin/env python -i
# preceding line should have path for Python on your machine
# vizplotgui_pymol.py
# Purpose: viz running LAMMPS simulation via PyMol with plot and GUI
# Syntax: vizplotgui_pymol.py in.lammps Nfreq compute-ID
# in.lammps = LAMMPS input script
# Nfreq = plot data point and viz shapshot every this many steps
# compute-ID = ID of compute that calculates temperature
# (or any other scalar quantity)
# IMPORTANT: this script cannot yet be run in parallel via Pypar,
# because I can't seem to do an MPI-style broadcast in Pypar
from __future__ import print_function
import sys,time
sys.path.append("./pizza")
# methods called by GUI
def run():
global runflag
runflag = 1
def stop():
global runflag
runflag = 0
def settemp(value):
global temptarget
temptarget = slider.get()
def quit():
global breakflag
breakflag = 1
# method called by timestep loop every Nfreq steps
# read dump snapshot and viz it, update plot with compute value
def update(ntimestep):
d.next()
d.unscale()
p.single(ntimestep)
pm.load("tmp.pdb")
pm.forward()
value = lmp.extract_compute(compute,0,0)
xaxis.append(ntimestep)
yaxis.append(value)
gn.plot(xaxis,yaxis)
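# Note added for clarity: update() advances the Pizza.py dump reader one
# snapshot, converts it to a PDB file (presumably tmp.pdb, which PyMol then
# loads), and appends the latest compute value to the GnuPlot time series.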
# parse command line
argv = sys.argv
if len(argv) != 4:
print("Syntax: vizplotgui_pymol.py in.lammps Nfreq compute-ID")
sys.exit()
infile = sys.argv[1]
nfreq = int(sys.argv[2])
compute = sys.argv[3]
me = 0
# uncomment if running in parallel via Pypar
#import pypar
#me = pypar.rank()
#nprocs = pypar.size()
from lammps import lammps
lmp = lammps()
# run infile all at once
# assumed to have no run command in it
# dump a file in native LAMMPS dump format for Pizza.py dump tool
lmp.file(infile)
lmp.command("thermo %d" % nfreq)
lmp.command("dump python all atom %d tmp.dump" % nfreq)
# initial 0-step run to generate initial 1-point plot, dump file, and image
lmp.command("run 0 pre yes post no")
value = lmp.extract_compute(compute,0,0)
ntimestep = 0
xaxis = [ntimestep]
yaxis = [value]
breakflag = 0
runflag = 0
temptarget = 1.0
# wrapper on PyMol
# just proc 0 handles reading of dump file and viz
if me == 0:
import pymol
pymol.finish_launching()
from dump import dump
from pdbfile import pdbfile
from pymol import cmd as pm
d = dump("tmp.dump",0)
p = pdbfile(d)
d.next()
d.unscale()
p.single(ntimestep)
pm.load("tmp.pdb")
pm.show("spheres","tmp")
# display GUI with run/stop buttons and slider for temperature
if me == 0:
try:
from Tkinter import *
except:
from tkinter import *
tkroot = Tk()
tkroot.withdraw()
root = Toplevel(tkroot)
root.title("LAMMPS GUI")
frame = Frame(root)
Button(frame,text="Run",command=run).pack(side=LEFT)
Button(frame,text="Stop",command=stop).pack(side=LEFT)
slider = Scale(frame,from_=0.0,to=5.0,resolution=0.1,
orient=HORIZONTAL,label="Temperature")
slider.bind('<ButtonRelease-1>',settemp)
slider.set(temptarget)
slider.pack(side=LEFT)
Button(frame,text="Quit",command=quit).pack(side=RIGHT)
frame.pack()
tkroot.update()
# wrapper on GnuPlot via Pizza.py gnu tool
if me == 0:
from gnu import gnu
gn = gnu()
gn.plot(xaxis,yaxis)
gn.title(compute,"Timestep","Temperature")
# endless loop, checking status of GUI settings every Nfreq steps
# run with pre yes/no and post yes/no depending on go/stop status
# re-invoke fix langevin with new seed when temperature slider changes
# after re-invoke of fix langevin, run with pre yes
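# Summary of the go/stop transitions handled in the loop below:
#   stopped -> running : "run N pre yes post no" (full re-init after start or fix change)
#   running -> running : "run N pre no post no"  (fast continuation)
#   running -> stopped : "run N pre no post yes" (finalize timings)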
running = 0
temp = temptarget
seed = 12345
lmp.command("fix 2 all langevin %g %g 0.1 %d" % (temp,temp,seed))
while 1:
if me == 0: tkroot.update()
if temp != temptarget:
temp = temptarget
seed += me+1
lmp.command("fix 2 all langevin %g %g 0.1 12345" % (temp,temp))
running = 0
if runflag and running:
lmp.command("run %d pre no post no" % nfreq)
ntimestep += nfreq
if me == 0: update(ntimestep)
elif runflag and not running:
lmp.command("run %d pre yes post no" % nfreq)
ntimestep += nfreq
if me == 0: update(ntimestep)
elif not runflag and running:
lmp.command("run %d pre no post yes" % nfreq)
ntimestep += nfreq
if me == 0: update(ntimestep)
if breakflag: break
if runflag: running = 1
else: running = 0
time.sleep(0.01)
lmp.command("run 0 pre no post yes")
# uncomment if running in parallel via Pypar
#print("Proc %d out of %d procs has" % (me,nprocs), lmp)
#pypar.finalize()
|
Dixon3/lammps
|
python/examples/vizplotgui_pymol.py
|
Python
|
gpl-2.0
| 4,489
|
[
"LAMMPS",
"PyMOL"
] |
2573c4cb25321bac1712850818d78994886a1a6e350c88b0e3d351de97783e72
|
#!/usr/bin/env python
########################################################################
# File : dirac-admin-proxy-upload.py
# Author : Adrian Casajus
########################################################################
"""
Upload proxy.
Example:
$ dirac-admin-proxy-upload
"""
import sys
from DIRAC.Core.Base.Script import Script
from DIRAC.FrameworkSystem.Client.ProxyUpload import CLIParams, uploadProxy
@Script()
def main():
cliParams = CLIParams()
cliParams.registerCLISwitches()
Script.parseCommandLine()
retVal = uploadProxy(cliParams)
if not retVal["OK"]:
print(retVal["Message"])
sys.exit(1)
sys.exit(0)
if __name__ == "__main__":
main()
|
DIRACGrid/DIRAC
|
src/DIRAC/FrameworkSystem/scripts/dirac_admin_proxy_upload.py
|
Python
|
gpl-3.0
| 716
|
[
"DIRAC"
] |
696bb39b54c55f4dcd578fa52f8a7d6202f1c273597f23127f5bdd1bced6db07
|
# -*- coding: utf-8 -*-
"""
Copyright (C) 2007-2008 Matthew Perry
Copyright (C) 2008-2010 Borys Jurgiel
/***************************************************************************
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
***************************************************************************/
"""
import sys
import time
from PyQt4.QtCore import *
from PyQt4.QtGui import *
from qgis.core import QgsApplication, QgsContextHelp
from ui_qgsplugininstallerfetchingbase import Ui_QgsPluginInstallerFetchingDialogBase
from ui_qgsplugininstallerinstallingbase import Ui_QgsPluginInstallerInstallingDialogBase
from ui_qgsplugininstallerrepositorybase import Ui_QgsPluginInstallerRepositoryDetailsDialogBase
from ui_qgsplugininstallerpluginerrorbase import Ui_QgsPluginInstallerPluginErrorDialogBase
from ui_qgsplugininstallerbase import Ui_QgsPluginInstallerDialogBase
from installer_data import *
try:
from qgis.utils import startPlugin, unloadPlugin, loadPlugin # QGIS >= 1.4
from qgis.utils import reloadPlugin, updateAvailablePlugins # QGIS >= 1.5
except Exception:
pass
# --- common functions ------------------------------------------------------------------- #
def removeDir(path):
result = QString()
if not QFile(path).exists():
result = QCoreApplication.translate("QgsPluginInstaller","Nothing to remove! Plugin directory doesn't exist:")+"\n"+path
elif QFile(path).remove(): # if it is only link, just remove it without resolving.
pass
else:
fltr = QDir.Dirs | QDir.Files | QDir.Hidden
iterator = QDirIterator(path, fltr, QDirIterator.Subdirectories)
while iterator.hasNext():
item = iterator.next()
if QFile(item).remove():
pass
fltr = QDir.Dirs | QDir.Hidden
iterator = QDirIterator(path, fltr, QDirIterator.Subdirectories)
while iterator.hasNext():
item = iterator.next()
if QDir().rmpath(item):
pass
if QFile(path).exists():
result = QCoreApplication.translate("QgsPluginInstaller","Failed to remove the directory:")+"\n"+path+"\n"+QCoreApplication.translate("QgsPluginInstaller","Check permissions or remove it manually")
# restore plugin directory if removed by QDir().rmpath()
pluginDir = QFileInfo(QgsApplication.qgisUserDbFilePath()).path() + "/python/plugins"
if not QDir(pluginDir).exists():
QDir().mkpath(pluginDir)
return result
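# Note added for clarity: removeDir() returns an empty QString on success
# and a translated error message on failure; callers treat any non-empty
# result as an error (see uninstallPlugin below).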
# --- /common functions ------------------------------------------------------------------ #
# --- class QgsPluginInstallerFetchingDialog --------------------------------------------------------------- #
class QgsPluginInstallerFetchingDialog(QDialog, Ui_QgsPluginInstallerFetchingDialogBase):
# ----------------------------------------- #
def __init__(self, parent):
QDialog.__init__(self, parent)
self.setupUi(self)
self.progressBar.setRange(0,len(repositories.allEnabled())*100)
self.itemProgress = {}
self.item = {}
for key in repositories.allEnabled():
self.item[key] = QTreeWidgetItem(self.treeWidget)
self.item[key].setText(0,key)
if repositories.all()[key]["state"] > 1:
self.itemProgress[key] = 100
self.displayState(key,0)
else:
self.itemProgress[key] = 0
self.displayState(key,2)
self.treeWidget.resizeColumnToContents(0)
QObject.connect(repositories, SIGNAL("repositoryFetched(QString)"), self.repositoryFetched)
QObject.connect(repositories, SIGNAL("anythingChanged(QString, int, int)"), self.displayState)
# ----------------------------------------- #
def displayState(self,key,state,state2=None):
messages=[self.tr("Success"),self.tr("Resolving host name..."),self.tr("Connecting..."),self.tr("Host connected. Sending request..."),self.tr("Downloading data..."),self.tr("Idle"),self.tr("Closing connection..."),self.tr("Error")]
message = messages[state]
if state2:
message += " (%s%%)" % state2
self.item[key].setText(1,message)
if state == 4 and state2:
self.itemProgress[key] = state2
totalProgress = sum(self.itemProgress.values())
self.progressBar.setValue(totalProgress)
# ----------------------------------------- #
def repositoryFetched(self, repoName):
self.itemProgress[repoName] = 100
if repositories.all()[repoName]["state"] == 2:
self.displayState(repoName,0)
else:
self.displayState(repoName,7)
if not repositories.fetchingInProgress():
self.close()
# --- /class QgsPluginInstallerFetchingDialog -------------------------------------------------------------- #
# --- class QgsPluginInstallerRepositoryDialog ------------------------------------------------------------- #
class QgsPluginInstallerRepositoryDialog(QDialog, Ui_QgsPluginInstallerRepositoryDetailsDialogBase):
# ----------------------------------------- #
def __init__(self, parent=None):
QDialog.__init__(self, parent)
self.setupUi(self)
self.editURL.setText("http://")
self.connect(self.editName, SIGNAL("textChanged(const QString &)"), self.textChanged)
self.connect(self.editURL, SIGNAL("textChanged(const QString &)"), self.textChanged)
self.textChanged(None)
# ----------------------------------------- #
def textChanged(self, string):
enable = (self.editName.text().count() > 0 and self.editURL.text().count() > 0)
self.buttonBox.button(QDialogButtonBox.Ok).setEnabled(enable)
# --- /class QgsPluginInstallerRepositoryDialog ------------------------------------------------------------ #
# --- class QgsPluginInstallerInstallingDialog --------------------------------------------------------------- #
class QgsPluginInstallerInstallingDialog(QDialog, Ui_QgsPluginInstallerInstallingDialogBase):
# ----------------------------------------- #
def __init__(self, parent, plugin):
QDialog.__init__(self, parent)
self.setupUi(self)
self.plugin = plugin
self.mResult = QString()
self.progressBar.setRange(0,0)
self.progressBar.setFormat(QString("%p%"))
self.labelName.setText(QString(plugin["name"]))
self.connect(self.buttonBox, SIGNAL("clicked(QAbstractButton*)"), self.abort)
url = QUrl(plugin["url"])
path = QString(url.toPercentEncoding(url.path(), "!$&'()*+,;=:/@"))
fileName = plugin["filename"]
tmpDir = QDir.tempPath()
tmpPath = QDir.cleanPath(tmpDir+"/"+fileName)
self.file = QFile(tmpPath)
port = url.port()
if port < 0:
port = 80
self.http = QPHttp(url.host(), port)
self.connect(self.http, SIGNAL("stateChanged ( int )"), self.stateChanged)
self.connect(self.http, SIGNAL("dataReadProgress ( int , int )"), self.readProgress)
self.connect(self.http, SIGNAL("requestFinished (int, bool)"), self.requestFinished)
self.httpGetId = self.http.get(path, self.file)
# ----------------------------------------- #
def result(self):
return self.mResult
# ----------------------------------------- #
def stateChanged(self, state):
messages=[self.tr("Installing..."),self.tr("Resolving host name..."),self.tr("Connecting..."),self.tr("Host connected. Sending request..."),self.tr("Downloading data..."),self.tr("Idle"),self.tr("Closing connection..."),self.tr("Error")]
self.labelState.setText(messages[state])
# ----------------------------------------- #
def readProgress(self, done, total):
self.progressBar.setMaximum(total)
self.progressBar.setValue(done)
# ----------------------------------------- #
def requestFinished(self, requestId, state):
if requestId != self.httpGetId:
return
self.buttonBox.setEnabled(False)
if state:
self.mResult = self.http.errorString()
self.reject()
return
self.file.close()
pluginDir = QFileInfo(QgsApplication.qgisUserDbFilePath()).path() + "/python/plugins"
tmpPath = self.file.fileName()
# make sure that the parent directory exists
if not QDir(pluginDir).exists():
QDir().mkpath(pluginDir)
# if the target directory already exists as a link, remove the link without resolving:
QFile(pluginDir+QString(QDir.separator())+self.plugin["localdir"]).remove()
try:
un = unzip()
un.extract(unicode(tmpPath), unicode(pluginDir)) # test extract. If fails, then exception will be raised and no removing occurs
# removing old plugin files if exist
removeDir(QDir.cleanPath(pluginDir+"/"+self.plugin["localdir"])) # remove old plugin if exists
un.extract(unicode(tmpPath), unicode(pluginDir)) # final extract.
except:
self.mResult = self.tr("Failed to unzip the plugin package. Probably it's broken or missing from the repository. You may also want to make sure that you have write permission to the plugin directory:") + "\n" + pluginDir
self.reject()
return
try:
# cleaning: removing the temporary zip file
QFile(tmpPath).remove()
except:
pass
self.close()
# ----------------------------------------- #
def abort(self):
self.http.abort()
self.mResult = self.tr("Aborted by user")
self.reject()
# --- /class QgsPluginInstallerInstallingDialog ------------------------------------------------------------- #
# --- class QgsPluginInstallerPluginErrorDialog -------------------------------------------------------------- #
class QgsPluginInstallerPluginErrorDialog(QDialog, Ui_QgsPluginInstallerPluginErrorDialogBase):
# ----------------------------------------- #
def __init__(self, parent, errorMessage):
QDialog.__init__(self, parent)
self.setupUi(self)
if not errorMessage:
errorMessage = self.tr("no error message received")
self.textBrowser.setText(errorMessage)
# --- /class QgsPluginInstallerPluginErrorDialog ------------------------------------------------------------- #
# --- class QgsPluginInstallerDialog ------------------------------------------------------------------------- #
class QgsPluginInstallerDialog(QDialog, Ui_QgsPluginInstallerDialogBase):
# ----------------------------------------- #
def __init__(self, parent):
QDialog.__init__(self, parent)
self.setupUi(self)
self.reposGroup = "/Qgis/plugin-repos"
self.connect(self.lineFilter, SIGNAL("textChanged (QString)"), self.filterChanged)
self.connect(self.comboFilter1, SIGNAL("currentIndexChanged (int)"), self.filterChanged)
self.connect(self.comboFilter2, SIGNAL("currentIndexChanged (int)"), self.filterChanged)
# grab clicks on trees
self.connect(self.treePlugins, SIGNAL("itemSelectionChanged()"), self.pluginTreeClicked)
self.connect(self.treeRepositories, SIGNAL("itemSelectionChanged()"), self.repositoryTreeClicked)
# buttons
self.connect(self.buttonUpgradeAll, SIGNAL("clicked()"), self.upgradeAllClicked)
self.connect(self.buttonInstall, SIGNAL("clicked()"), self.installPluginClicked)
self.connect(self.buttonUninstall, SIGNAL("clicked()"), self.uninstallPluginClicked)
self.buttonInstall.setEnabled(False)
self.buttonUninstall.setEnabled(False)
self.buttonHelp.setEnabled(QGIS_14)
self.connect(self.buttonHelp, SIGNAL("clicked()"), self.runHelp)
# repositories handling
self.connect(self.treeRepositories, SIGNAL("doubleClicked(QModelIndex)"), self.editRepository)
#self.connect(self.buttonFetchRepositories, SIGNAL("clicked()"), self.addKnownRepositories)
self.connect(self.buttonAddRep, SIGNAL("clicked()"), self.addRepository)
self.connect(self.buttonEditRep, SIGNAL("clicked()"), self.editRepository)
self.connect(self.buttonDeleteRep, SIGNAL("clicked()"), self.deleteRepository)
self.buttonEditRep.setEnabled(False)
self.buttonDeleteRep.setEnabled(False)
# configuration widgets
self.connect(self.checkUpdates, SIGNAL("toggled (bool)"), self.changeCheckingPolicy)
self.connect(self.comboInterval, SIGNAL("currentIndexChanged (int)"), self.changeCheckingInterval)
self.connect(self.radioPluginType0, SIGNAL("toggled (bool)"), self.changePluginPolicy)
self.connect(self.radioPluginType1, SIGNAL("toggled (bool)"), self.changePluginPolicy)
self.connect(self.radioPluginType2, SIGNAL("toggled (bool)"), self.changePluginPolicy)
# increase default icon size
self.treePlugins.setIconSize(QSize(22, 22))
if repositories.checkingOnStart():
self.checkUpdates.setChecked(Qt.Checked)
else:
self.checkUpdates.setChecked(Qt.Unchecked)
interval = repositories.checkingOnStartInterval()
intervals = [0,1,3,7,14,30] # days
if intervals.count(interval):
index = intervals.index(interval)
else:
index = 1
self.comboInterval.setCurrentIndex(index)
self.populateMostWidgets()
# ----------------------------------------- #
def getAllAvailablePlugins(self):
""" fetch plugins from all repositories """
repositories.load()
plugins.getAllInstalled()
for key in repositories.allEnabled():
repositories.requestFetching(key)
if repositories.fetchingInProgress():
self.fetchDlg = QgsPluginInstallerFetchingDialog(self)
self.fetchDlg.exec_()
del self.fetchDlg
for key in repositories.all():
repositories.killConnection(key)
# display error messages for every unavailable repository, unless Shift is pressed or all repositories are unavailable
keepQuiet = QgsApplication.keyboardModifiers() == Qt.KeyboardModifiers(Qt.ShiftModifier)
if repositories.allUnavailable() and repositories.allUnavailable() != repositories.allEnabled():
for key in repositories.allUnavailable():
if not keepQuiet:
QMessageBox.warning(self, self.tr("QGIS Python Plugin Installer"), self.tr("Error reading repository:") + " " + key + "\n" + repositories.all()[key]["error"])
if QgsApplication.keyboardModifiers() == Qt.KeyboardModifiers(Qt.ShiftModifier):
keepQuiet = True
# ----------------------------------------- #
def populateMostWidgets(self):
self.comboFilter1.clear()
self.comboFilter1.addItem(self.tr("all repositories"))
self.treeRepositories.clear()
for key in repositories.all():
a = QTreeWidgetItem(self.treeRepositories)
a.setText(1,key)
a.setText(2,repositories.all()[key]["url"])
if repositories.all()[key]["enabled"] and repositories.all()[key]["valid"]:
if repositories.all()[key]["state"] == 2:
a.setText(0,self.tr("connected"))
a.setIcon(0,QIcon(":/plugins/installer/repoConnected.png"))
a.setToolTip(0,self.tr("This repository is connected"))
self.comboFilter1.addItem(key)
else:
a.setText(0,self.tr("unavailable"))
a.setIcon(0,QIcon(":/plugins/installer/repoUnavailable.png"))
a.setToolTip(0,self.tr("This repository is enabled, but unavailable"))
self.comboFilter1.addItem(key)
else:
a.setText(0,self.tr("disabled"))
a.setIcon(0,QIcon(":/plugins/installer/repoDisabled.png"))
if repositories.all()[key]["valid"]:
a.setToolTip(0,self.tr("This repository is disabled"))
else:
a.setToolTip(0,self.tr("This repository is blocked due to incompatibility with your Quantum GIS version"))
for i in [0,1,2]:
a.setForeground(i,QBrush(QColor(Qt.gray)))
for i in [0,1,2]:
self.treeRepositories.resizeColumnToContents(i)
self.comboFilter1.addItem(self.tr("orphans"))
# fill the status filter comboBox
self.comboFilter2.clear()
self.comboFilter2.addItem(self.tr("any status"))
self.comboFilter2.addItem(self.tr("not installed", "plural"))
self.comboFilter2.addItem(self.tr("installed", "plural"))
if plugins.isThereAnythingNew():
self.comboFilter2.addItem(self.tr("upgradeable and news"))
settings = QSettings()
(i, ok) = settings.value(settingsGroup+"/allowedPluginType", QVariant(1)).toInt()
if i == 1:
self.radioPluginType0.setChecked(Qt.Checked)
elif i == 2:
self.radioPluginType1.setChecked(Qt.Checked)
else:
self.radioPluginType2.setChecked(Qt.Checked)
# ----------------------------------------- #
def filterChanged(self,i):
""" one of the filter widgets has been changed """
self.populatePluginTree()
# ----------------------------------------- #
def filterCheck(self,plugin):
""" the filter for the pluginsTree """
if self.comboFilter1.currentIndex() != 0 and self.comboFilter1.currentText() != self.tr("orphans"):
if self.comboFilter1.currentText() != plugin["repository"]:
return False
elif self.comboFilter1.currentText() == self.tr("orphans"):
if plugin["status"] != "orphan":
return False
if self.comboFilter2.currentIndex() == 1 and not plugin["status"] in ["not installed","new"]:
return False
if self.comboFilter2.currentIndex() == 2 and not plugin["status"] in ["installed","upgradeable","newer","orphan"]:
return False
if self.comboFilter2.currentIndex() == 3 and not plugin["status"] in ["upgradeable","new"]:
return False
if self.lineFilter.text() == "":
return True
else:
for i in ["name","version_inst","version_avail","desc_repo","desc_local","author","status","repository"]:
item = QString(plugin[i])
if item != None:
if item.contains(self.lineFilter.text(), Qt.CaseInsensitive):
return True
return False
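# Note added for clarity: filterCheck() combines three filters with AND
# semantics: the repository filter (comboFilter1), the status filter
# (comboFilter2), and the free-text lineFilter, which is matched
# case-insensitively against every descriptive field of the plugin record.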
# ----------------------------------------- #
def populatePluginTree(self):
""" fill up the pluginTree """
statusTips={"not installed" : self.tr("This plugin is not installed"),
"installed" : self.tr("This plugin is installed"),
"upgradeable" : self.tr("This plugin is installed, but there is an updated version available"),
"orphan" : self.tr("This plugin is installed, but I can't find it in any enabled repository"),
"new" : self.tr("This plugin is not installed and is seen for the first time"),
"newer" : self.tr("This plugin is installed and is newer than its version available in a repository"),
"incompatible" : self.tr("This plugin is incompatible with your Quantum GIS version and probably won't work."),
"dependent" : self.tr("The required Python module is not installed.\nFor more information, please visit its homepage and Quantum GIS wiki."),
"broken" : self.tr("This plugin seems to be broken.\nIt has been installed but can't be loaded.\nHere is the error message:")}
statuses ={"not installed" : self.tr("not installed", "singular"),
"installed" : self.tr("installed", "singular"),
"upgradeable" : self.tr("upgradeable", "singular"),
"orphan" : self.tr("installed", "singular"),
"new" : self.tr("new!", "singular"),
"newer" : self.tr("installed", "singular"),
"incompatible" : self.tr("invalid", "singular"),
"dependent" : self.tr("invalid", "singular"),
"broken" : self.tr("invalid", "singular")}
orderInvalid = ["incompatible","broken","dependent"]
orderValid = ["upgradeable","new","not installed","installed","orphan","newer"]
def addItem(p):
if self.filterCheck(p):
statusTip = statusTips[p["status"]]
if p["read-only"]:
statusTip = statusTip + "\n" + self.tr("Note that it's an uninstallable core plugin")
installedVersion = p["version_inst"]
if not installedVersion:
installedVersion = "?"
availableVersion = p["version_avail"]
if not availableVersion:
availableVersion = "?"
if p["status"] == "upgradeable":
ver = installedVersion + " -> " + availableVersion
elif p["status"] == "newer":
ver = installedVersion + " (" + availableVersion + ")"
elif p["status"] in ["not installed", "new"]:
ver = availableVersion
else:
ver = installedVersion
if p["status"] in ["upgradeable","newer"] or p["error"]:
verTip = self.tr("installed version") + ": " + installedVersion + "\n" + self.tr("available version") + ": " + availableVersion
elif p["status"] in ["not installed", "new"]:
verTip = self.tr("available version") + ": " + availableVersion
elif p["status"] == "installed":
verTip = self.tr("installed version") + ": " + installedVersion + "\n" + self.tr("That's the newest available version")
elif p["status"] == "orphan":
verTip = self.tr("installed version") + ": " + installedVersion + "\n" + self.tr("There is no version available for download")
else:
verTip = ""
if p["error"] == "broken":
desc = self.tr("This plugin is broken")
descTip = statusTips[p["error"]] + "\n" + p["error_details"]
statusTip = descTip
elif p["error"] == "incompatible":
desc = self.tr("This plugin requires a newer version of Quantum GIS") + " (" + self.tr("at least")+ " " + p["error_details"] + ")"
descTip = statusTips[p["error"]]
statusTip = descTip
elif p["error"] == "dependent":
desc = self.tr("This plugin requires a missing module") + " (" + p["error_details"] + ")"
descTip = statusTips[p["error"]]
statusTip = descTip
else:
desc = p["desc_local"]
descTip = p["desc_repo"]
if not desc:
desc = descTip
if not p["repository"]:
repository = self.tr("only locally available")
else:
repository = p["repository"]
a = QgsPluginTreeItem(self.treePlugins)
if p["experimental"]:
a.setIcon(0, QIcon(":/plugins/installer/pluginExperimental.png"))
a.setToolTip(0, self.tr("Experimental plugin. Use at own risk"))
a.setData(0, Qt.UserRole, QVariant(0))
else:
# set empty icon to keep row height same for all plugins
a.setIcon(0, QIcon(":/plugins/installer/pluginStable.png"))
a.setData(0, Qt.UserRole, QVariant(1))
if p["error"]:
a.setText(1,statuses[p["error"]])
else:
a.setText(1,statuses[p["status"]])
a.setToolTip(1,statusTip)
a.setText(2,p["name"])
a.setText(3,ver)
a.setToolTip(3,verTip)
a.setText(4,desc)
# split the tooltip into multiple lines when they are too long
tmp = ""
splitTip = ""
for word in descTip.split(" "):
if len(tmp + word) < 80:
tmp = tmp + " " + word
else:
splitTip += tmp + "\n"
tmp = word
a.setToolTip(4, splitTip+tmp)
a.setText(5,p["author"])
if p["homepage"]:
a.setToolTip(5,p["homepage"])
else:
a.setToolTip(6,"")
a.setText(6,repository)
a.setToolTip(6,p["url"])
# set fonts and colors
for i in [0,1,2,3,4,5,6]:
if p["error"]:
a.setForeground(i,QBrush(QColor(Qt.red)))
if p["status"] in ["new","upgradeable"] or p["error"]:
font = QFont()
font.setWeight(QFont.Bold)
a.setFont(i,font)
# -------- #
if not plugins.all():
return
self.treePlugins.clear()
for i in orderInvalid:
for p in plugins.all().values():
if p["error"] == i:
addItem(p)
n = 0 # displayed plugins count
self.upgradeablePlugins = [] # list of plugins able to update
for i in orderValid:
for p in plugins.all().values():
if p["status"] == i and not p["error"]:
addItem(p)
if p["status"] == "upgradeable": self.upgradeablePlugins += [p["localdir"]]
n +=1
self.setWindowTitle(self.tr("QGIS Python Plugin Installer") + self.tr(" - %d plugins available" % len(plugins.all())))
self.buttonUpgradeAll.setEnabled( len(self.upgradeablePlugins) )
# initially, keep insert order
self.treePlugins.sortItems(100,Qt.AscendingOrder)
# resize the columns
for i in [0,1,2,3,4,5]:
self.treePlugins.resizeColumnToContents(i)
for i in [0,1,2,4,5]:
if self.treePlugins.columnWidth(i) > 260:
self.treePlugins.setColumnWidth(i, 260)
if self.treePlugins.columnWidth(3) > 560:
self.treePlugins.setColumnWidth(3, 560)
# ----------------------------------------- #
def pluginTreeClicked(self):
""" the pluginsTree has been clicked """
buttons={"not installed":(True,False,self.tr("Install plugin")),
"installed":(True,True,self.tr("Reinstall plugin")),
"upgradeable":(True,True,self.tr("Upgrade plugin")),
"orphan":(False,True,self.tr("Install/upgrade plugin")),
"new":(True,False,self.tr("Install plugin")),
"newer":(True,True,self.tr("Downgrade plugin"))}
self.buttonInstall.setEnabled(False)
self.buttonInstall.setText(self.tr("Install/upgrade plugin"))
self.buttonUninstall.setEnabled(False)
if not self.treePlugins.selectedItems():
return
item = self.treePlugins.currentItem()
if not item:
return
key = plugins.keyByUrl(item.toolTip(6))
if not key:
return
plugin = plugins.all()[key]
if not plugin:
return
self.buttonInstall.setEnabled(buttons[plugin["status"]][0])
self.buttonUninstall.setEnabled(buttons[plugin["status"]][1])
self.buttonInstall.setText(buttons[plugin["status"]][2])
if plugin["read-only"]:
self.buttonUninstall.setEnabled(False)
# ----------------------------------------- #
def upgradeAllClicked(self):
for key in self.upgradeablePlugins:
self.installPlugin(key, quiet=True)
# ----------------------------------------- #
def installPluginClicked(self):
if not self.treePlugins.currentItem():
return
key = plugins.keyByUrl(self.treePlugins.currentItem().toolTip(6))
self.installPlugin(key)
# ----------------------------------------- #
def uninstallPluginClicked(self):
if not self.treePlugins.currentItem():
return
key = plugins.keyByUrl(self.treePlugins.currentItem().toolTip(6))
self.uninstallPlugin(key)
# ----------------------------------------- #
def installPlugin(self, key, quiet=False):
""" install currently selected plugin """
infoString = ('','')
plugin = plugins.all()[key]
previousStatus = plugin["status"]
if not plugin:
return
if plugin["status"] == "newer" and not plugin["error"]: # ask for confirmation if user downgrades an usable plugin
if QMessageBox.warning(self, self.tr("QGIS Python Plugin Installer"), self.tr("Are you sure you want to downgrade the plugin to the latest available version? The installed one is newer!"), QMessageBox.Yes, QMessageBox.No) == QMessageBox.No:
return
dlg = QgsPluginInstallerInstallingDialog(self,plugin)
dlg.exec_()
if dlg.result():
infoString = (self.tr("Plugin installation failed"), dlg.result())
elif not QDir(QDir.cleanPath(QgsApplication.qgisSettingsDirPath() + "/python/plugins/" + key)).exists():
infoString = (self.tr("Plugin has disappeared"), self.tr("The plugin seems to have been installed but I don't know where. Probably the plugin package contained a wrong named directory.\nPlease search the list of installed plugins. I'm nearly sure you'll find the plugin there, but I just can't determine which of them it is. It also means that I won't be able to determine if this plugin is installed and inform you about available updates. However the plugin may work. Please contact the plugin author and submit this issue."))
QApplication.setOverrideCursor(Qt.WaitCursor)
plugins.getAllInstalled()
plugins.rebuild()
QApplication.restoreOverrideCursor()
else:
if QGIS_14:
if QGIS_15: # update the list of plugins in plugin handling routines
updateAvailablePlugins()
# try to load the plugin
loadPlugin(plugin["localdir"])
else: # QGIS < 1.4
try:
exec ("sys.path_importer_cache.clear()")
exec ("import %s" % plugin["localdir"])
exec ("reload (%s)" % plugin["localdir"])
except:
pass
plugins.getAllInstalled(testLoad=True)
plugins.rebuild()
plugin = plugins.all()[key]
if not plugin["error"]:
if previousStatus in ["not installed", "new"]:
if QGIS_14: # plugins can be started in python from QGIS >= 1.4
infoString = (self.tr("Plugin installed successfully"), self.tr("Plugin installed successfully"))
settings = QSettings()
settings.setValue("/PythonPlugins/"+plugin["localdir"], QVariant(True))
startPlugin(plugin["localdir"])
else: infoString = (self.tr("Plugin installed successfully"), self.tr("Python plugin installed.\nNow you need to enable it in Plugin Manager."))
else:
if QGIS_15: # plugins can be reloaded on the fly in QGIS >= 1.5
settings = QSettings()
if key != 'plugin_installer' and settings.value("/PythonPlugins/"+key).toBool(): # plugin will be reloaded on the fly only if currently loaded
infoString = (self.tr("Plugin reinstalled successfully"), self.tr("Plugin reinstalled successfully"))
reloadPlugin(key)
else:
infoString = (self.tr("Plugin reinstalled successfully"), self.tr("Python plugin reinstalled.\nYou need to restart Quantum GIS in order to reload it."))
else: infoString = (self.tr("Plugin reinstalled successfully"), self.tr("Python plugin reinstalled.\nYou need to restart Quantum GIS in order to reload it."))
if quiet:
infoString = (None, None)
else:
if plugin["error"] == "incompatible":
message = self.tr("The plugin is designed for a newer version of Quantum GIS. The minimum required version is:")
message += " <b>" + plugin["error_details"] + "</b>"
elif plugin["error"] == "dependent":
message = self.tr("The plugin depends on some components missing on your system. You need to install the following Python module in order to enable it:")
message += "<b> " + plugin["error_details"] + "</b>"
else:
message = self.tr("The plugin is broken. Python said:")
message += "<br><b>" + plugin["error_details"] + "</b>"
dlg = QgsPluginInstallerPluginErrorDialog(self,message)
dlg.exec_()
if dlg.result():
# revert installation
plugins.getAllInstalled()
plugins.rebuild()
pluginDir = QFileInfo(QgsApplication.qgisUserDbFilePath()).path() + "/python/plugins/" + plugin["localdir"]
result = removeDir(pluginDir)
if QDir(pluginDir).exists():
infoString = (self.tr("Plugin uninstall failed"), result)
try:
exec ("sys.path_importer_cache.clear()")
exec ("import %s" % plugin["localdir"])
exec ("reload (%s)" % plugin["localdir"])
except:
pass
else:
try:
exec ("del sys.modules[%s]" % plugin["localdir"])
except:
pass
plugins.getAllInstalled()
plugins.rebuild()
if plugins.all().has_key(key) and not plugins.all()[key]["status"] in ["not installed", "new"]:
if previousStatus in ["not installed", "new"]:
history.markChange(key,'A')
else:
history.markChange(key,'R')
self.populatePluginTree()
if infoString[0]:
QMessageBox.information(self, infoString[0], infoString[1])
# ----------------------------------------- #
def uninstallPlugin(self,key):
""" uninstall currently selected plugin """
plugin = plugins.all()[key]
if not plugin:
return
warning = self.tr("Are you sure you want to uninstall the following plugin?") + "\n(" + plugin["name"] + ")"
if plugin["status"] == "orphan" and not plugin["error"]:
warning += "\n\n"+self.tr("Warning: this plugin isn't available in any accessible repository!")
if QMessageBox.warning(self, self.tr("QGIS Python Plugin Installer"), warning , QMessageBox.Yes, QMessageBox.No) == QMessageBox.No:
return
# unload the plugin if it's not plugin_installer itself (otherwise, do it after removing its directory):
if key != "plugin_installer":
try:
unloadPlugin(key)
except:
pass
pluginDir = QFileInfo(QgsApplication.qgisUserDbFilePath()).path() + "/python/plugins/" + plugin["localdir"]
result = removeDir(pluginDir)
if result:
QMessageBox.warning(self, self.tr("Plugin uninstall failed"), result)
else:
# if the uninstalled plugin is the installer itself, reload it and quit
if key == "plugin_installer":
if QGIS_15:
try:
QMessageBox.information(self, self.tr("QGIS Python Plugin Installer"), self.tr("Plugin Installer update uninstalled. Plugin Installer will now close and revert to its primary version. You can find it in the Plugins menu and continue operation."))
reloadPlugin(key)
return
except:
pass
else:
QMessageBox.information(self, self.tr("QGIS Python Plugin Installer"), self.tr("Plugin Installer update uninstalled. Please restart QGIS in order to load its primary version."))
# safe remove
try:
unloadPlugin(plugin["localdir"])
except:
pass
try:
exec ("plugins[%s].unload()" % plugin["localdir"])
exec ("del plugins[%s]" % plugin["localdir"])
except:
pass
try:
exec ("del sys.modules[%s]" % plugin["localdir"])
except:
pass
plugins.getAllInstalled()
plugins.rebuild()
self.populatePluginTree()
if QGIS_14: QMessageBox.information(self, self.tr("Plugin uninstalled successfully"), self.tr("Plugin uninstalled successfully"))
else: QMessageBox.information(self, self.tr("Plugin uninstalled successfully"), self.tr("Python plugin uninstalled. Note that you may need to restart Quantum GIS in order to remove it completely."))
history.markChange(key,'D')
# ----------------------------------------- #
def repositoryTreeClicked(self):
""" the repositoryTree has been clicked """
if self.treeRepositories.selectedItems():
self.buttonEditRep.setEnabled(True)
self.buttonDeleteRep.setEnabled(True)
else:
self.buttonEditRep.setEnabled(False)
self.buttonDeleteRep.setEnabled(False)
# ----------------------------------------- #
def changeCheckingPolicy(self,policy):
""" the Checking On Start checkbox has been clicked """
if policy:
repositories.setCheckingOnStart(True)
else:
repositories.setCheckingOnStart(False)
# ----------------------------------------- #
def changeCheckingInterval(self,interval):
""" the Checking on start interval combobox has been clicked """
intervals = [0,1,3,7,14,30]
repositories.setCheckingOnStartInterval(intervals[interval])
# ----------------------------------------- #
def changePluginPolicy(self, state):
""" one of the plugin type radiobuttons has been clicked """
if not state: # radio button released
return
if self.radioPluginType0.isChecked():
i = 1
elif self.radioPluginType1.isChecked():
i = 2
else:
i = 3
settings = QSettings()
settings.setValue(settingsGroup+"/allowedPluginType", QVariant(i))
plugins.rebuild()
self.populatePluginTree()
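# Illustrative note: the three radio buttons persist the integer policy 1, 2
# or 3 under settingsGroup + "/allowedPluginType"; plugins.rebuild() is then
# called, presumably so it can filter the plugin tree against the stored value.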
## deprecated in QGIS 1.8 until we use 3rd party repos again
# ----------------------------------------- #
#def addKnownRepositories(self):
#""" update list of known repositories - in the future it will be replaced with an online fetching """
#message = self.tr("You are about to add several plugin repositories that are neither authorized nor supported by the Quantum GIS team. Plugin authors generally make efforts to ensure that their work is useful and safe, however, we can assume no responsibility for them.")
#if QMessageBox.question(self, self.tr("QGIS Python Plugin Installer"), message, QMessageBox.Ok, QMessageBox.Abort) == QMessageBox.Ok:
#repositories.addKnownRepos()
## refresh lists and populate widgets
#QApplication.setOverrideCursor(Qt.WaitCursor)
#self.getAllAvailablePlugins()
#plugins.rebuild()
#self.populateMostWidgets()
#self.populatePluginTree()
#QApplication.restoreOverrideCursor()
# ----------------------------------------- #
def addRepository(self):
""" add repository button has been clicked """
dlg = QgsPluginInstallerRepositoryDialog(self)
dlg.checkBoxEnabled.setCheckState(Qt.Checked)
if not dlg.exec_():
return
for i in repositories.all().values():
if dlg.editURL.text().trimmed() == i["url"]:
QMessageBox.warning(self, self.tr("QGIS Python Plugin Installer"), self.tr("Unable to add another repository with the same URL!"))
return
settings = QSettings()
settings.beginGroup(self.reposGroup)
reposName = dlg.editName.text()
reposURL = dlg.editURL.text().trimmed()
if repositories.all().has_key(reposName):
reposName = reposName + "(2)"
# add to settings
settings.setValue(reposName+"/url", QVariant(reposURL))
settings.setValue(reposName+"/enabled", QVariant(bool(dlg.checkBoxEnabled.checkState())))
# refresh lists and populate widgets
QApplication.setOverrideCursor(Qt.WaitCursor)
plugins.removeRepository(reposName)
self.getAllAvailablePlugins()
plugins.rebuild()
self.populateMostWidgets()
self.populatePluginTree()
QApplication.restoreOverrideCursor()
# ----------------------------------------- #
def editRepository(self):
""" edit repository button has been clicked """
checkState={False:Qt.Unchecked,True:Qt.Checked}
current = self.treeRepositories.currentItem()
if current == None:
return
reposName = current.text(1)
dlg = QgsPluginInstallerRepositoryDialog(self)
dlg.editName.setText(reposName)
dlg.editURL.setText(repositories.all()[reposName]["url"])
dlg.checkBoxEnabled.setCheckState(checkState[repositories.all()[reposName]["enabled"]])
if repositories.all()[reposName]["valid"]:
dlg.checkBoxEnabled.setEnabled(True)
dlg.labelInfo.setText("")
else:
dlg.checkBoxEnabled.setEnabled(False)
dlg.labelInfo.setText(self.tr("This repository is blocked due to incompatibility with your Quantum GIS version"))
dlg.labelInfo.setFrameShape(QFrame.Box)
if not dlg.exec_():
return # nothing to do if cancelled
for i in repositories.all().values():
if dlg.editURL.text().trimmed() == i["url"] and dlg.editURL.text().trimmed() != repositories.all()[reposName]["url"]:
QMessageBox.warning(self, self.tr("QGIS Python Plugin Installer"), self.tr("Unable to add another repository with the same URL!"))
return
# delete old repo from QSettings and create new one
settings = QSettings()
settings.beginGroup(self.reposGroup)
settings.remove(reposName)
newName = dlg.editName.text()
if repositories.all().has_key(newName) and newName != reposName:
newName = newName + "(2)"
settings.setValue(newName+"/url", QVariant(dlg.editURL.text().trimmed()))
settings.setValue(newName+"/enabled", QVariant(bool(dlg.checkBoxEnabled.checkState())))
if dlg.editURL.text().trimmed() == repositories.all()[reposName]["url"] and dlg.checkBoxEnabled.checkState() == checkState[repositories.all()[reposName]["enabled"]]:
repositories.rename(reposName, newName)
self.populateMostWidgets()
return # nothing else to do if only repository name was changed
# refresh lists and populate widgets
QApplication.setOverrideCursor(Qt.WaitCursor)
plugins.removeRepository(reposName)
self.getAllAvailablePlugins()
plugins.rebuild()
self.populateMostWidgets()
self.populatePluginTree()
QApplication.restoreOverrideCursor()
# ----------------------------------------- #
def deleteRepository(self):
""" delete repository button has been clicked """
current = self.treeRepositories.currentItem()
if current == None:
return
warning = self.tr("Are you sure you want to remove the following repository?") + "\n" + current.text(1)
if QMessageBox.warning(self, self.tr("QGIS Python Plugin Installer"), warning , QMessageBox.Yes, QMessageBox.No) == QMessageBox.No:
return
reposName = current.text(1)
# delete from the settings, refresh data and repopulate all the widgets
settings = QSettings()
settings.beginGroup(self.reposGroup)
settings.remove(reposName)
repositories.remove(reposName)
plugins.removeRepository(reposName)
plugins.rebuild()
self.populateMostWidgets()
self.populatePluginTree()
# ----------------------------------------- #
def runHelp(self):
""" open the context help browser """
QgsContextHelp.run("QgsPluginInstallerDialog")
# ----------------------------------------- #
def reject(self):
""" update the list of seen plugins before exit (both 'done' and 'x' buttons emit 'reject' signal) """
plugins.updateSeenPluginsList()
QDialog.reject(self)
# --- /class QgsPluginInstallerDialog ------------------------------------------------------------------------ #
class QgsPluginTreeItem(QTreeWidgetItem):
def __init__(self, parent=None):
QTreeWidgetItem.__init__(self, parent)
def __lt__(self, otherItem):
column = self.treeWidget().sortColumn()
if column == 0:
return self.data(column, Qt.UserRole).toInt()[0] < otherItem.data(column, Qt.UserRole).toInt()[0]
else:
return self.text(column) < otherItem.text(column)
|
bstroebl/QGIS
|
python/plugins/plugin_installer/installer_gui.py
|
Python
|
gpl-2.0
| 42,256
|
[
"VisIt"
] |
ce557d95bf4c21286ff8d60ca1b4db6416239319c5e24d4766dadc52cc95ddb0
|
"""Handle extraction of final files from processing pipelines into storage.
"""
import datetime
import os
import six
import toolz as tz
from bcbio import log, utils
from bcbio.upload import shared, filesystem, galaxy, s3, irods
from bcbio.pipeline import run_info
from bcbio.variation import vcfutils
import bcbio.pipeline.datadict as dd
from bcbio.rnaseq.ericscript import EricScriptConfig
_approaches = {"filesystem": filesystem,
"galaxy": galaxy,
"s3": s3,
"irods": irods}
def project_from_sample(sample):
upload_config = sample.get("upload")
if upload_config:
approach = _approaches[upload_config.get("method", "filesystem")]
for finfo in _get_files_project(sample, upload_config):
approach.update_file(finfo, None, upload_config)
return [[sample]]
def from_sample(sample):
"""Upload results of processing from an analysis pipeline sample.
"""
upload_config = sample.get("upload")
if upload_config:
approach = _approaches[upload_config.get("method", "filesystem")]
for finfo in _get_files(sample):
approach.update_file(finfo, sample, upload_config)
return [[sample]]
def get_all_upload_paths_from_sample(sample):
upload_path_mapping = dict()
upload_config = sample.get("upload")
if upload_config:
method = upload_config.get("method", "filesystem")
if method == "filesystem":
approach = _approaches[method]
for finfo in _get_files_project(sample, upload_config):
path = approach.get_upload_path(finfo, None, upload_config)
upload_path_mapping[finfo["path"]] = path
for finfo in _get_files(sample):
path = approach.get_upload_path(finfo, sample, upload_config)
upload_path_mapping[finfo["path"]] = path
return upload_path_mapping
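# Illustrative example (hypothetical paths): with a filesystem upload target
# of /final, the returned mapping might contain
#   {"/work/s1/s1-ready.bam": "/final/s1/s1-ready.bam"}
# letting callers translate work-directory paths into their final uploaded
# locations without re-running the upload itself.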
# ## File information from sample
def _get_files(sample):
"""Retrieve files for the sample, dispatching by analysis type.
Each file is a dictionary containing the path plus associated
metadata about the file and pipeline versions.
"""
analysis = sample.get("analysis")
if analysis.lower() in ["variant", "snp calling", "variant2", "standard"]:
return _get_files_variantcall(sample)
elif analysis.lower() in ["rna-seq", "fastrna-seq"]:
return _get_files_rnaseq(sample)
elif analysis.lower() in ["smallrna-seq"]:
return _get_files_srnaseq(sample)
elif analysis.lower() in ["chip-seq"]:
return _get_files_chipseq(sample)
elif analysis.lower() in ["scrna-seq"]:
return _get_files_scrnaseq(sample)
else:
return []
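# Illustrative example (hypothetical sample dict): a sample such as
#   {"analysis": "RNA-seq", "config": {"algorithm": {}}, ...}
# is routed to _get_files_rnaseq; an analysis value outside the known set
# yields an empty file list.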
def _get_files_rnaseq(sample):
out = []
algorithm = sample["config"]["algorithm"]
out = _maybe_add_summary(algorithm, sample, out)
out = _maybe_add_alignment(algorithm, sample, out)
out = _maybe_add_transcriptome_alignment(sample, out)
out = _maybe_add_disambiguate(algorithm, sample, out)
out = _maybe_add_counts(algorithm, sample, out)
out = _maybe_add_cufflinks(algorithm, sample, out)
out = _maybe_add_stringtie(algorithm, sample, out)
out = _maybe_add_oncofuse(algorithm, sample, out)
out = _maybe_add_pizzly(algorithm, sample, out)
out = _maybe_add_rnaseq_variant_file(algorithm, sample, out)
out = _maybe_add_sailfish_files(algorithm, sample, out)
out = _maybe_add_salmon_files(algorithm, sample, out)
out = _maybe_add_kallisto_files(algorithm, sample, out)
out = _maybe_add_ericscript_files(algorithm, sample, out)
out = _maybe_add_arriba_files(algorithm, sample, out)
out = _maybe_add_junction_files(algorithm, sample, out)
return _add_meta(out, sample)
def _get_files_srnaseq(sample):
out = []
algorithm = sample["config"]["algorithm"]
out = _maybe_add_summary(algorithm, sample, out)
out = _maybe_add_trimming(algorithm, sample, out)
out = _maybe_add_seqbuster(algorithm, sample, out)
out = _maybe_add_trna(algorithm, sample, out)
out = _maybe_add_transcriptome_alignment(sample, out)
return _add_meta(out, sample)
def _get_files_scrnaseq(sample):
out = []
algorithm = sample["config"]["algorithm"]
out = _maybe_add_summary(algorithm, sample, out)
out = _maybe_add_transcriptome_alignment(sample, out)
out = _maybe_add_scrnaseq(algorithm, sample, out)
out = _maybe_add_barcode_histogram(algorithm, sample, out)
return _add_meta(out, sample)
def _get_files_chipseq(sample):
out = []
algorithm = sample["config"]["algorithm"]
out = _maybe_add_summary(algorithm, sample, out)
out = _maybe_add_alignment(algorithm, sample, out)
out = _maybe_add_nucleosome_alignments(algorithm, sample, out)
out = _maybe_add_peaks(algorithm, sample, out)
out = _maybe_add_greylist(algorithm, sample, out)
return _add_meta(out, sample)
def _add_meta(xs, sample=None, config=None):
"""Add top level information about the sample or flowcell to output.
Sorts outputs into sample names (sample input) and project (config input).
"""
out = []
for x in xs:
if not isinstance(x["path"], six.string_types) or not os.path.exists(x["path"]):
raise ValueError("Unexpected path for upload: %s" % x)
x["mtime"] = shared.get_file_timestamp(x["path"])
if sample:
sample_name = dd.get_sample_name(sample)
if "sample" not in x:
x["sample"] = sample_name
elif x["sample"] != sample_name:
x["run"] = sample_name
if config:
fc_name = config.get("fc_name") or "project"
fc_date = config.get("fc_date") or datetime.datetime.now().strftime("%Y-%m-%d")
x["run"] = "%s_%s" % (fc_date, fc_name)
out.append(x)
return out
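# Illustrative example (hypothetical config): with
#   config = {"fc_name": "run1", "fc_date": "2020-01-15"}
# every project-level file is tagged with x["run"] == "2020-01-15_run1";
# missing keys fall back to today's date and the literal name "project".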
def _get_files_variantcall(sample):
"""Return output files for the variant calling pipeline.
"""
out = []
algorithm = sample["config"]["algorithm"]
out = _maybe_add_summary(algorithm, sample, out)
out = _maybe_add_alignment(algorithm, sample, out)
out = _maybe_add_callable(sample, out)
out = _maybe_add_disambiguate(algorithm, sample, out)
out = _maybe_add_variant_file(algorithm, sample, out)
out = _maybe_add_sv(algorithm, sample, out)
out = _maybe_add_hla(algorithm, sample, out)
out = _maybe_add_heterogeneity(algorithm, sample, out)
out = _maybe_add_validate(algorithm, sample, out)
return _add_meta(out, sample)
def _maybe_add_validate(algorithm, sample, out):
for i, plot in enumerate(tz.get_in(("validate", "grading_plots"), sample, [])):
ptype = os.path.splitext(plot)[-1].replace(".", "")
out.append({"path": plot,
"type": ptype,
"ext": "validate%s" % ("" if i == 0 else "-%s" % (i + 1))})
return out
def _maybe_add_rnaseq_variant_file(algorithm, sample, out):
vfile = sample.get("vrn_file")
if vfile:
ftype = "vcf.gz" if vfile.endswith(".gz") else "vcf"
out.append({"path": vfile,
"type": ftype})
if utils.file_exists(vfile + ".tbi"):
out.append({"path": vfile + ".tbi",
"type": "vcf.gz.tbi",
"index": True})
return out
def _maybe_add_callable(data, out):
"""Add callable and depth regions to output folder.
"""
callable_bed = dd.get_sample_callable(data)
if callable_bed:
out.append({"path": callable_bed, "type": "bed", "ext": "callable"})
perbase_bed = tz.get_in(["depth", "variant_regions", "per_base"], data)
if perbase_bed:
out.append({"path": perbase_bed, "type": "bed.gz", "ext": "depth-per-base"})
return out
def _maybe_add_variant_file(algorithm, sample, out):
if sample.get("align_bam") is not None and sample.get("vrn_file"):
for x in sample["variants"]:
if not _sample_variant_file_in_population(x):
out.extend(_get_variant_file(x, ("vrn_file",)))
if x.get("bed_file"):
out.append({"path": x["bed_file"],
"type": "bed",
"ext": "%s-callregions" % x["variantcaller"],
"variantcaller": x["variantcaller"]})
if x.get("vrn_stats"):
for extra, fname in x["vrn_stats"].items():
ext = utils.splitext_plus(fname)[-1].replace(".", "")
out.append({"path": fname,
"type": ext,
"ext": "%s-%s" % (x["variantcaller"], extra),
"variantcaller": x["variantcaller"]})
if x.get("germline") and os.path.exists(x["germline"]):
out.extend(_get_variant_file(x, ("germline",), "-germline"))
return out
def _maybe_add_hla(algorithm, sample, out):
if sample.get("align_bam") is not None and sample.get("hla") and "call_file" in sample["hla"]:
out.append({"path": sample["hla"]["call_file"],
"type": "csv",
"ext": "hla-%s" % (sample["hla"]["hlacaller"])})
return out
def _maybe_add_heterogeneity(algorithm, sample, out):
for hetinfo in sample.get("heterogeneity", []):
report = hetinfo.get("report")
if report and os.path.exists(report):
out.append({"path": report,
"type": utils.splitext_plus(report)[-1].replace(".", "").replace("-", ""),
"ext": "%s-report" % (hetinfo["caller"])})
for plot_type, plot_file in hetinfo.get("plots", {}).items():
if plot_file and os.path.exists(plot_file):
out.append({"path": plot_file,
"type": utils.splitext_plus(plot_file)[-1].replace(".", ""),
"ext": "%s-%s-plot" % (hetinfo["caller"], plot_type)})
return out
def _get_batch_name(sample):
"""Retrieve batch name for use in SV calling outputs.
Handles multiple batches split via SV calling.
"""
batch = dd.get_batch(sample) or dd.get_sample_name(sample)
if isinstance(batch, (list, tuple)) and len(batch) > 1:
batch = dd.get_sample_name(sample)
return batch
def _maybe_add_sv(algorithm, sample, out):
if sample.get("align_bam") is not None and sample.get("sv"):
batch = _get_batch_name(sample)
for svcall in sample["sv"]:
if svcall.get("variantcaller") == "seq2c":
out.extend(_get_variant_file(svcall, ("calls",), sample=batch))
out.extend(_get_variant_file(svcall, ("gender_predicted",), sample=batch))
for key in ["vrn_file", "cnr", "cns", "seg", "gainloss",
"segmetrics", "vrn_bed", "vrn_bedpe"]:
out.extend(_get_variant_file(svcall, (key,), sample=batch))
out.extend(_get_variant_file(svcall, ("background",), suffix="-background", sample=batch))
out.extend(_get_variant_file(svcall, ("call_file",), suffix="-call", sample=batch))
out.extend(_get_variant_file(svcall, ("priority",), suffix="-priority", sample=batch))
if "plot" in svcall:
for plot_name, fname in svcall["plot"].items():
ext = os.path.splitext(fname)[-1].replace(".", "")
out.append({"path": fname,
"sample": batch,
"type": ext,
"ext": "%s-%s" % (svcall["variantcaller"], plot_name),
"variantcaller": svcall["variantcaller"]})
if "raw_files" in svcall:
for caller, fname in svcall["raw_files"].items():
ext = utils.splitext_plus(fname)[-1][1:]
out.append({"path": fname,
"sample": batch,
"type": ext,
"ext": "%s-%s" % (svcall["variantcaller"], caller),
"variantcaller": svcall["variantcaller"]})
if utils.file_exists(fname + ".tbi"):
out.append({"path": fname + ".tbi",
"sample": batch,
"type": "vcf.gz.tbi",
"index": True,
"ext": "%s-%s" % (svcall["variantcaller"], caller),
"variantcaller": svcall["variantcaller"]})
for extra in ["subclones", "contamination", "hetsummary", "lohsummary", "read_evidence"]:
svfile = svcall.get(extra)
if svfile and os.path.exists(svfile):
ftype = os.path.splitext(svfile)[-1].replace(".", "")
out.append({"path": svfile,
"sample": dd.get_sample_name(sample) if ftype == "bam" else batch,
"type": ftype,
"ext": "%s-%s" % (svcall["variantcaller"], extra),
"variantcaller": svcall["variantcaller"]})
fext = ".bai" if ftype == "bam" else ""
if fext and os.path.exists(svfile + fext):
out.append({"path": svfile + fext,
"sample": dd.get_sample_name(sample) if ftype == "bam" else batch,
"type": ftype + fext,
"index": True,
"ext": "%s-%s" % (svcall["variantcaller"], extra),
"variantcaller": svcall["variantcaller"]})
if "sv-validate" in sample:
for vkey in ["csv", "plot", "df"]:
vfile = tz.get_in(["sv-validate", vkey], sample)
if vfile:
to_u = []
if isinstance(vfile, dict):
for svtype, fname in vfile.items():
to_u.append((fname, "-%s" % svtype))
else:
to_u.append((vfile, "-%s" % vkey if vkey in ["df"] else ""))
for vfile, ext in to_u:
vext = os.path.splitext(vfile)[-1].replace(".", "")
out.append({"path": vfile,
"sample": batch,
"type": vext,
"ext": "sv-validate%s" % ext})
return out
def _sample_variant_file_in_population(x):
"""Check if a sample file is the same as the population file.
This is true for batches where we don't extract into samples and do not
run decomposition for gemini.
'"""
if "population" in x:
a = _get_project_vcf(x)
b = _get_variant_file(x, ("vrn_file",))
decomposed = tz.get_in(("population", "decomposed"), x)
if (a and b and not decomposed and len(a) > 0 and len(b) > 0 and
vcfutils.get_samples(a[0]["path"]) == vcfutils.get_samples(b[0]["path"])):
return True
return False
def _get_variant_file(x, key, suffix="", sample=None, ignore_do_upload=False):
"""Retrieve VCF file with the given key if it exists, handling bgzipped.
"""
out = []
fname = utils.get_in(x, key)
upload_key = list(key)
upload_key[-1] = "do_upload"
do_upload = tz.get_in(tuple(upload_key), x, True)
if fname and (ignore_do_upload or do_upload):
if fname.endswith(".vcf.gz"):
out.append({"path": fname,
"type": "vcf.gz",
"ext": "%s%s" % (x["variantcaller"], suffix),
"variantcaller": x["variantcaller"]})
if utils.file_exists(fname + ".tbi"):
out.append({"path": fname + ".tbi",
"type": "vcf.gz.tbi",
"index": True,
"ext": "%s%s" % (x["variantcaller"], suffix),
"variantcaller": x["variantcaller"]})
elif fname.endswith((".vcf", ".bed", ".bedpe", ".bedgraph", ".cnr", ".cns", ".cnn", ".txt", ".tsv")):
ftype = utils.splitext_plus(fname)[-1][1:]
if ftype == "txt":
extended_ftype = fname.split("-")[-1]
if "/" not in extended_ftype:
ftype = extended_ftype
out.append({"path": fname,
"type": ftype,
"ext": "%s%s" % (x["variantcaller"], suffix),
"variantcaller": x["variantcaller"]})
if sample:
out_sample = []
for x in out:
x["sample"] = sample
out_sample.append(x)
return out_sample
else:
return out
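# Illustrative example (hypothetical values): for
#   x = {"variantcaller": "gatk-haplotype", "vrn_file": "/work/b1-gatk.vcf.gz"}
# _get_variant_file(x, ("vrn_file",)) emits one vcf.gz entry plus a matching
# vcf.gz.tbi index entry when the index file exists; a sibling "do_upload"
# flag (the key tuple with its last element replaced) can veto the upload.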
def _maybe_add_sailfish_files(algorithm, sample, out):
sailfish_dir = os.path.join(dd.get_work_dir(sample), "sailfish",
dd.get_sample_name(sample), "quant")
if os.path.exists(sailfish_dir):
out.append({"path": sailfish_dir,
"type": "directory",
"ext": "sailfish"})
return out
def _maybe_add_salmon_files(algorithm, sample, out):
salmon_dir = os.path.join(dd.get_work_dir(sample), "salmon",
dd.get_sample_name(sample))
if os.path.exists(salmon_dir):
out.append({"path": salmon_dir,
"type": "directory",
"ext": "salmon"})
return out
def _maybe_add_kallisto_files(algorithm, sample, out):
kallisto_dir = os.path.join(dd.get_work_dir(sample), "kallisto",
dd.get_sample_name(sample), "quant")
if os.path.exists(kallisto_dir):
out.append({"path": kallisto_dir,
"type": "directory",
"ext": "kallisto"})
return out
def _maybe_add_ericscript_files(algorithm, sample, out):
config = EricScriptConfig(sample)
if os.path.exists(config.sample_out_dir):
out.append({
'path': config.sample_out_dir,
'type': 'directory',
'ext': 'ericscript',
})
return out
def _flatten_file_with_secondary(file_info, out_dir):
"""Flatten a file representation with secondary indices (CWL-like).
"""
out = []
orig_dir = os.path.dirname(file_info["base"])
for finfo in [file_info["base"]] + file_info.get("secondary", []):
cur_dir = os.path.dirname(finfo)
if cur_dir != orig_dir and cur_dir.startswith(orig_dir):
cur_out_dir = os.path.join(out_dir, cur_dir.replace(orig_dir + "/", ""))
else:
cur_out_dir = out_dir
out.append({"path": finfo, "dir": cur_out_dir})
return out
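# Illustrative example (hypothetical paths): for
#   {"base": "/qc/a/report.html", "secondary": ["/qc/a/img/plot.png"]}
# and out_dir "qc/multiqc", the secondary file keeps its relative
# subdirectory:
#   [{"path": "/qc/a/report.html", "dir": "qc/multiqc"},
#    {"path": "/qc/a/img/plot.png", "dir": "qc/multiqc/img"}]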
def _maybe_add_summary(algorithm, sample, out):
out = []
if "summary" in sample:
if sample["summary"].get("pdf"):
out.append({"path": sample["summary"]["pdf"],
"type": "pdf",
"ext": "summary"})
if sample["summary"].get("qc"):
for program, finfo in sample["summary"]["qc"].items():
out.extend(_flatten_file_with_secondary(finfo, os.path.join("qc", program)))
if utils.get_in(sample, ("summary", "researcher")):
out.append({"path": sample["summary"]["researcher"],
"type": "tsv",
"sample": run_info.clean_name(utils.get_in(sample, ("upload", "researcher"))),
"ext": "summary"})
return out
def _maybe_add_alignment(algorithm, sample, out):
if _has_alignment_file(algorithm, sample) and dd.get_phenotype(sample) != "germline":
for (fname, ext, isplus) in [(sample.get("work_bam"), "ready", False),
(sample.get("umi_bam"), "umi", False),
(sample.get("bigwig"), "ready", False),
(dd.get_disc_bam(sample), "disc", True),
(dd.get_sr_bam(sample), "sr", True)]:
if fname and os.path.exists(fname):
if fname.endswith("bam"):
ftype, fext = "bam", ".bai"
elif fname.endswith("cram"):
ftype, fext = "cram", ".crai"
elif fname.endswith("bw"):
ftype, fext = "bw", ".bw"
else:
raise ValueError("Unexpected alignment file type %s" % fname)
out.append({"path": fname,
"type": ftype,
"plus": isplus,
"ext": ext})
if utils.file_exists(fname + fext):
out.append({"path": fname + fext,
"type": ftype + fext,
"plus": isplus,
"index": True,
"ext": ext})
return out
def _maybe_add_disambiguate(algorithm, sample, out):
if "disambiguate" in sample and _has_alignment_file(algorithm, sample):
for extra_name, fname in sample["disambiguate"].items():
ftype = os.path.splitext(fname)[-1].replace(".", "")
fext = ".bai" if ftype == "bam" else ""
if fname and os.path.exists(fname):
out.append({"path": fname,
"type": ftype,
"plus": True,
"ext": "disambiguate-%s" % extra_name})
if fext and utils.file_exists(fname + fext):
out.append({"path": fname + fext,
"type": ftype + fext,
"plus": True,
"index": True,
"ext": "disambiguate-%s" % extra_name})
return out
def _maybe_add_transcriptome_alignment(sample, out):
transcriptome_bam = dd.get_transcriptome_bam(sample)
if transcriptome_bam and utils.file_exists(transcriptome_bam):
out.append({"path": transcriptome_bam,
"type": "bam",
"ext": "transcriptome"})
return out
def _maybe_add_nucleosome_alignments(algorithm, sample, out):
"""
for ATAC-seq, also upload NF, MN, DN and TN bam files
"""
atac_align = tz.get_in(("atac", "align"), sample, {})
for alignment, fname in atac_align.items():
out.append({"path": fname,
"type": "bam",
"ext": alignment})
return out
def _maybe_add_counts(algorithm, sample, out):
if not dd.get_count_file(sample):
return out
out.append({"path": sample["count_file"],
"type": "counts",
"ext": "ready"})
stats_file = os.path.splitext(sample["count_file"])[0] + ".stats"
if utils.file_exists(stats_file):
out.append({"path": stats_file,
"type": "count_stats",
"ext": "ready"})
return out
def _maybe_add_scrnaseq(algorithm, sample, out):
count_file = dd.get_count_file(sample)
if not count_file:
return out
else:
out.append({"path": count_file,
"type": "mtx"})
out.append({"path": count_file + ".rownames",
"type": "rownames"})
out.append({"path": count_file + ".colnames",
"type": "colnames"})
umi_file = os.path.splitext(count_file)[0] + "-dupes.mtx"
if utils.file_exists(umi_file):
out.append({"path": umi_file,
"type": "mtx"})
out.append({"path": umi_file + ".rownames",
"type": "rownames"})
out.append({"path": umi_file + ".colnames",
"type": "colnames"})
return out
def _maybe_add_barcode_histogram(algorithm, sample, out):
if not dd.get_count_file(sample):
return out
count_file = sample["count_file"]
histogram_file = os.path.join(os.path.dirname(count_file), "cb-histogram.txt")
histogram_filtered_file = os.path.join(os.path.dirname(count_file), "cb-histogram-filtered.txt")
out.append({"path": histogram_file,
"type": "tsv",
"ext": "barcodes"})
out.append({"path": histogram_file,
"type": "tsv",
"ext": "barcodes-filtered"})
return out
def _maybe_add_oncofuse(algorithm, sample, out):
if sample.get("oncofuse_file", None) is not None:
out.append({"path": sample["oncofuse_file"],
"type": "tsv",
"dir": "oncofuse",
"ext": "ready"})
return out
def _maybe_add_pizzly(algorithm, sample, out):
pizzly_dir = dd.get_pizzly_dir(sample)
if pizzly_dir:
out.append({"path": pizzly_dir,
"type": "directory",
"ext": "pizzly"})
return out
def _maybe_add_arriba_files(algorithm, sample, out):
arriba = dd.get_arriba(sample)
if arriba:
out.append({"path": arriba["fusions"],
"type": "tsv",
"ext": "arriba-fusions",
"dir": "arriba"})
out.append({"path": arriba["discarded"],
"type": "tsv",
"ext": "arriba-discarded-fusions",
"dir": "arriba"})
return out
def _maybe_add_junction_files(algorithm, sample, out):
"""
add splice junction files from STAR, if available
"""
junction_bed = dd.get_junction_bed(sample)
if junction_bed:
out.append({"path": junction_bed,
"type": "bed",
"ext": "SJ",
"dir": "STAR"})
chimeric_file = dd.get_chimericjunction(sample)
if chimeric_file:
out.append({"path": chimeric_file,
"type": "tsv",
"ext": "chimericSJ",
"dir": "STAR"})
sj_file = dd.get_starjunction(sample)
if sj_file:
out.append({"path": sj_file,
"type": "tab",
"ext": "SJ",
"dir": "STAR"})
star_summary = dd.get_summary_qc(sample).get("star", None)
if star_summary:
star_log = star_summary["base"]
if star_log:
out.append({"path": star_log,
"type": "log",
"dir": "STAR"})
return out
def _maybe_add_cufflinks(algorithm, sample, out):
if "cufflinks_dir" in sample:
out.append({"path": sample["cufflinks_dir"],
"type": "directory",
"ext": "cufflinks"})
return out
def _maybe_add_stringtie(algorithm, sample, out):
if "stringtie_dir" in sample:
out.append({"path": sample["stringtie_dir"],
"type": "directory",
"ext": "stringtie"})
return out
def _maybe_add_trimming(algorithm, sample, out):
fn = sample["collapse"] + "_size_stats"
if utils.file_exists(fn):
out.append({"path": fn,
"type": "trimming_stats",
"ext": "ready"})
return out
def _maybe_add_seqbuster(algorithm, sample, out):
if "seqbuster" not in sample:
return out
fn = sample["seqbuster"]
if utils.file_exists(fn):
out.append({"path": fn,
"type": "counts",
"ext": "mirbase-ready"})
fn = sample.get("seqbuster_novel")
fn = sample["mirtop"]
if utils.file_exists(fn):
out.append({"path": fn,
"type": "gff",
"ext": "mirbase-ready"})
if "seqbuster_novel" in sample and utils.file_exists(sample["seqbuster_novel"]):
fn = sample["seqbuster_novel"]
out.append({"path": fn,
"type": "counts",
"ext": "novel-ready"})
return out
def _maybe_add_trna(algorithm, sample, out):
if "trna" not in sample:
return out
fn = sample["trna"]
if utils.file_exists(fn):
out.append({"path": fn,
"type": "directory",
"ext": "mintmap"})
return out
def _maybe_add_peaks(algorithm, sample, out):
out_dir = sample.get("peaks_files", {})
if dd.get_chip_method(sample) == "atac":
for files in out_dir.values():
for caller in files:
if caller == "main":
continue
for fn in files[caller]:
if os.path.exists(fn):
out.append({"path": fn,
"dir": caller,
"ext": utils.splitext_plus(fn)[1]})
else:
for caller in out_dir:
if caller == "main":
continue
for fn in out_dir[caller]:
if os.path.exists(fn):
out.append({"path": fn,
"dir": caller,
"ext": utils.splitext_plus(fn)[1]})
return out
def _maybe_add_greylist(algorithm, sample, out):
greylist = sample.get("greylist", None)
if greylist:
out.append({"path": greylist,
"type": "directory",
"ext": "greylist"})
return out
def _has_alignment_file(algorithm, sample):
return (((algorithm.get("aligner") or algorithm.get("realign")
or algorithm.get("recalibrate") or algorithm.get("bam_clean")
or algorithm.get("mark_duplicates", algorithm.get("aligner")))) and
sample.get("work_bam") is not None and
"upload_alignment" not in dd.get_tools_off(sample))
# ## File information from full project
def _add_batch(x, sample):
"""Potentially add batch name to an upload file.
"""
added = False
for batch in sorted(dd.get_batches(sample) or [], key=len, reverse=True):
if batch and os.path.basename(x["path"]).startswith(("%s-" % batch, "%s.vcf" % batch)):
x["batch"] = batch
added = True
break
if not added:
x["batch"] = dd.get_sample_name(sample)
return x
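# Illustrative example (hypothetical names): with batches ["b1", "b10"] the
# longest batch name is tried first, so "b10-gatk.vcf.gz" is tagged with
# batch "b10" instead of matching "b1" or falling through to the sample name.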
def _get_project_vcf(x, suffix=""):
"""Get our project VCF, either from the population or the variant batch file.
"""
vcfs = _get_variant_file(x, ("population", "vcf"), suffix=suffix)
if not vcfs:
vcfs = _get_variant_file(x, ("vrn_file_batch", ), suffix=suffix, ignore_do_upload=True)
if not vcfs and x.get("variantcaller") == "ensemble":
vcfs = _get_variant_file(x, ("vrn_file", ), suffix=suffix)
return vcfs
def _get_files_project(sample, upload_config):
"""Retrieve output files associated with an entire analysis project.
"""
out = [{"path": sample["provenance"]["programs"]}]
if os.path.exists(tz.get_in(["provenance", "data"], sample) or ""):
out.append({"path": sample["provenance"]["data"]})
for fname in ["bcbio-nextgen.log", "bcbio-nextgen-commands.log"]:
if os.path.exists(os.path.join(log.get_log_dir(sample["config"]), fname)):
out.append({"path": os.path.join(log.get_log_dir(sample["config"]), fname),
"type": "external_command_log",
"ext": ""})
if "summary" in sample and sample["summary"].get("project"):
out.append({"path": sample["summary"]["project"]})
if "summary" in sample and sample["summary"].get("metadata"):
out.append({"path": sample["summary"]["metadata"]})
mixup_check = tz.get_in(["summary", "mixup_check"], sample)
if mixup_check:
out.append({"path": sample["summary"]["mixup_check"],
"type": "directory", "ext": "mixup_check"})
report = os.path.join(dd.get_work_dir(sample), "report")
if utils.file_exists(report):
out.append({"path": report,
"type": "directory", "ext": "report"})
multiqc = tz.get_in(["summary", "multiqc"], sample)
if multiqc:
out.extend(_flatten_file_with_secondary(multiqc, "multiqc"))
if sample.get("seqcluster", {}):
out.append({"path": sample["seqcluster"].get("out_dir"),
"type": "directory", "ext": "seqcluster"})
if sample.get("mirge", {}):
for fn in sample["mirge"]:
out.append({"path": fn,
"dir": "mirge"})
if sample.get("report", None):
out.append({"path": os.path.dirname(sample["report"]),
"type": "directory", "ext": "seqclusterViz"})
for x in sample.get("variants", []):
if "pop_db" in x:
out.append({"path": x["pop_db"],
"type": "sqlite",
"variantcaller": x["variantcaller"]})
for x in sample.get("variants", []):
if "population" in x:
pop_db = tz.get_in(["population", "db"], x)
if pop_db:
out.append({"path": pop_db,
"type": "sqlite",
"variantcaller": x["variantcaller"]})
suffix = "-annotated-decomposed" if tz.get_in(("population", "decomposed"), x) else "-annotated"
vcfs = _get_project_vcf(x, suffix)
out.extend([_add_batch(f, sample) for f in vcfs])
for x in sample.get("variants", []):
if x.get("validate") and x["validate"].get("grading_summary"):
out.append({"path": x["validate"]["grading_summary"]})
break
sv_project = set([])
for svcall in sample.get("sv", []):
if svcall.get("variantcaller") == "seq2c":
if svcall.get("calls_all") and svcall["calls_all"] not in sv_project:
out.append({"path": svcall["coverage_all"], "batch": "seq2c", "ext": "coverage", "type": "tsv"})
out.append({"path": svcall["read_mapping"], "batch": "seq2c", "ext": "read_mapping", "type": "txt"})
out.append({"path": svcall["calls_all"], "batch": "seq2c", "ext": "calls", "type": "tsv"})
sv_project.add(svcall["calls_all"])
if "coverage" in sample:
cov_db = tz.get_in(["coverage", "summary"], sample)
if cov_db:
out.append({"path": cov_db, "type": "sqlite", "ext": "coverage"})
all_coverage = tz.get_in(["coverage", "all"], sample)
if all_coverage:
out.append({"path": all_coverage, "type": "bed", "ext": "coverage"})
if dd.get_mirna_counts(sample):
out.append({"path": dd.get_mirna_counts(sample)})
if dd.get_isomir_counts(sample):
out.append({"path": dd.get_isomir_counts(sample)})
if dd.get_novel_mirna_counts(sample):
out.append({"path": dd.get_novel_mirna_counts(sample)})
if dd.get_novel_isomir_counts(sample):
out.append({"path": dd.get_novel_isomir_counts(sample)})
if dd.get_combined_counts(sample):
count_file = dd.get_combined_counts(sample)
if sample["analysis"].lower() == "scrna-seq":
out.append({"path": count_file,
"type": "mtx"})
out.append({"path": count_file + ".rownames",
"type": "rownames"})
out.append({"path": count_file + ".colnames",
"type": "colnames"})
out.append({"path": count_file + ".metadata",
"type": "metadata"})
umi_file = os.path.splitext(count_file)[0] + "-dupes.mtx"
if utils.file_exists(umi_file):
out.append({"path": umi_file,
"type": "mtx"})
out.append({"path": umi_file + ".rownames",
"type": "rownames"})
out.append({"path": umi_file + ".colnames",
"type": "colnames"})
if dd.get_combined_histogram(sample):
out.append({"path": dd.get_combined_histogram(sample),
"type": "txt"})
rda = os.path.join(os.path.dirname(count_file), "se.rda")
if utils.file_exists(rda):
out.append({"path": rda,
"type": "rda"})
else:
out.append({"path": dd.get_combined_counts(sample)})
if dd.get_tximport(sample):
out.append({"path": dd.get_tximport(sample)["gene_tpm"], "dir": "tpm"})
out.append({"path": dd.get_tximport(sample)["gene_counts"], "dir": "counts"})
if dd.get_annotated_combined_counts(sample):
out.append({"path": dd.get_annotated_combined_counts(sample)})
if dd.get_combined_fpkm(sample):
out.append({"path": dd.get_combined_fpkm(sample)})
if dd.get_combined_fpkm_isoform(sample):
out.append({"path": dd.get_combined_fpkm_isoform(sample)})
if dd.get_transcript_assembler(sample):
out.append({"path": dd.get_merged_gtf(sample)})
if dd.get_dexseq_counts(sample):
out.append({"path": dd.get_dexseq_counts(sample)})
out.append({"path": "%s.ann" % dd.get_dexseq_counts(sample)})
if dd.get_express_counts(sample):
out.append({"path": dd.get_express_counts(sample)})
if dd.get_express_fpkm(sample):
out.append({"path": dd.get_express_fpkm(sample)})
if dd.get_express_tpm(sample):
out.append({"path": dd.get_express_tpm(sample)})
if dd.get_isoform_to_gene(sample):
out.append({"path": dd.get_isoform_to_gene(sample)})
if dd.get_square_vcf(sample):
out.append({"path": dd.get_square_vcf(sample)})
if dd.get_sailfish_transcript_tpm(sample):
out.append({"path": dd.get_sailfish_transcript_tpm(sample)})
if dd.get_sailfish_gene_tpm(sample):
out.append({"path": dd.get_sailfish_gene_tpm(sample)})
if dd.get_tx2gene(sample):
out.append({"path": dd.get_tx2gene(sample)})
if dd.get_spikein_counts(sample):
out.append({"path": dd.get_spikein_counts(sample)})
transcriptome_dir = os.path.join(dd.get_work_dir(sample), "inputs",
"transcriptome")
if os.path.exists(transcriptome_dir):
out.append({"path": transcriptome_dir, "type": "directory",
"ext": "transcriptome"})
return _add_meta(out, config=upload_config)
|
a113n/bcbio-nextgen
|
bcbio/upload/__init__.py
|
Python
|
mit
| 38,400
|
[
"Galaxy"
] |
cd1d93025dc782cbf42a12c6366b2a5b046d2ae8e8db7e6a54daaad3139d7464
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright (C) 2005-2007 Carabos Coop. V. All rights reserved
# Copyright (C) 2008-2017 Vicent Mas. All rights reserved
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# Author: Vicent Mas - vmas@vitables.org
"""
Here is defined the HelpBrowserGUI class.
This module creates the main window for the documentation browser.
"""
__docformat__ = 'restructuredtext'
import re
from qtpy import QtCore
from qtpy import QtGui
from qtpy import QtWidgets
import vitables.utils
from vitables.vtsite import DOCDIR
translate = QtWidgets.QApplication.translate
class HelpBrowserGUI(QtWidgets.QMainWindow) :
"""
The main window of the documentation browser.
:Parameters:
- `browser`: an instance of the docs browser controller
:meth:`vitables.docbrowser.helpbrowser.HelpBrowser`
- `parent`: the parent widget.
"""
def __init__(self, browser, parent=None) :
"""
Initializes the browser.
"""
super(HelpBrowserGUI, self).__init__(parent)
self.setIconSize(QtCore.QSize(22, 22))
self.setWindowTitle(translate('HelpBrowserGUI',
'Documentation browser', 'The window title'))
self.icons = vitables.utils.getHBIcons()
self.setWindowIcon(self.icons['vitables_wm'])
self.browser = browser
# The browser widget
self.text_browser = QtWidgets.QTextBrowser()
self.text_browser.setSearchPaths([DOCDIR])
self.setCentralWidget(self.text_browser)
self.text_browser.setAcceptRichText(True)
self.text_browser.setReadOnly(True)
# The popup menus
self.actions = self.setupActions()
self.initPopups()
self.connectSignals()
self.setupHistoryCombo()
self.statusBar().showMessage(translate('HelpBrowserGUI', 'Ready...',
'Status bar startup message'))
def setupActions(self):
"""Provide actions to the menubar and the toolbars.
"""
actions = {}
actions['exitBrowser'] = QtWidgets.QAction(
translate('HelpBrowserGUI', 'E&xit', 'File --> Exit'), self,
shortcut=QtGui.QKeySequence.Quit,
triggered=self.browser.exitBrowser,
icon=self.icons['application-exit'],
statusTip=translate('HelpBrowserGUI', 'Close Help Browser',
'Status bar text for the File --> Exit action'))
actions['zoomIn'] = QtWidgets.QAction(
translate('HelpBrowserGUI', 'Zoom &in', 'View --> Zoom in'), self,
shortcut=QtGui.QKeySequence.ZoomIn,
triggered=self.browser.zoomIn,
icon=self.icons['zoom-in'],
statusTip=translate('HelpBrowserGUI', 'Increases the font size',
'Status bar text for the View --> Zoom in action'))
actions['zoomOut'] = QtWidgets.QAction(
translate('HelpBrowserGUI', 'Zoom &out', 'View --> Zoom out'),
self,
shortcut=QtGui.QKeySequence.ZoomOut,
triggered=self.browser.zoomOut,
icon=self.icons['zoom-out'],
statusTip=translate('HelpBrowserGUI', 'Decreases the font size',
'Status bar text for the View --> Zoom out action'))
actions['goHome'] = QtWidgets.QAction(
translate('HelpBrowserGUI', '&Home', 'Go --> Home'), self,
shortcut=QtGui.QKeySequence.UnknownKey,
triggered=self.text_browser.home,
icon=self.icons['go-first-view'],
statusTip=translate('HelpBrowserGUI',
'Go to the first visited page',
'Status bar text for the Go --> Home action'))
actions['goBackward'] = QtWidgets.QAction(
translate('HelpBrowserGUI', '&Backward', ' Go --> Backward'),
self,
shortcut=QtGui.QKeySequence.Back,
triggered=self.text_browser.backward,
icon=self.icons['go-previous-view'],
statusTip=translate('HelpBrowserGUI', 'Go to previous page',
'Status bar text for the Go --> Backward action'))
actions['goForward'] = QtWidgets.QAction(
translate('HelpBrowserGUI', '&Forward', ' Go --> Forward'), self,
shortcut=QtGui.QKeySequence.Forward,
triggered=self.text_browser.forward,
icon=self.icons['go-next-view'],
statusTip=translate('HelpBrowserGUI', 'Go to next page',
'Status bar text for the Go --> Forward action'))
actions['goReload'] = QtWidgets.QAction(
translate('HelpBrowserGUI', '&Reload', 'Go --> Reload'), self,
shortcut=QtGui.QKeySequence.Refresh,
triggered=self.text_browser.reload,
icon=self.icons['view-refresh'],
statusTip=translate('HelpBrowserGUI', 'Reload the current page',
'Status bar text for the Go --> Reload action'))
actions['bookmarksAdd'] = QtWidgets.QAction(
translate('HelpBrowserGUI', '&Add bookmark',
'Bookmarks --> Add bookmark'),
self,
shortcut=QtGui.QKeySequence('Ctrl+Alt+N'),
triggered=self.browser.addBookmark,
icon=self.icons['bookmark_add'],
statusTip=translate('HelpBrowserGUI', 'Bookmark the current page',
'Status bar text for Bookmarks --> Add bookmark action'))
actions['bookmarksEdit'] = QtWidgets.QAction(
translate('HelpBrowserGUI', '&Edit bookmarks...',
'Bookmarks --> Edit bookmarks'),
self,
shortcut=QtGui.QKeySequence('Ctrl+Alt+E'),
triggered=self.browser.editBookmarks,
icon=self.icons['bookmarks'],
statusTip=translate('HelpBrowserGUI', 'Edit bookmarks',
'Status bar text for Bookmarks --> Edit bookmarks action'))
actions['bookmarksClear'] = QtWidgets.QAction(
translate('HelpBrowserGUI', '&Clear All',
'Bookmarks --> Clear bookmark'),
self,
shortcut=QtGui.QKeySequence('Ctrl+Alt+C'),
triggered=self.browser.clearBookmarks,
statusTip=translate('HelpBrowserGUI',
'Clear all existing bookmarks',
'Status bar text for Bookmarks --> Add bookmark action'))
actions['about'] = QtWidgets.QAction(
translate('HelpBrowserGUI', '&About HelpBrowser',
'Help --> About HelpBrowser'),
self,
shortcut=QtGui.QKeySequence.UnknownKey,
triggered=self.browser.aboutBrowser,
statusTip=translate('HelpBrowserGUI', 'About HelpBrowser',
'Status bar text for Help --> About HelpBrowser action'))
actions['aboutQt'] = QtWidgets.QAction(
translate('HelpBrowserGUI', 'About &Qt', 'Help --> About Qt'),
self,
shortcut=QtGui.QKeySequence.UnknownKey,
triggered=self.browser.aboutQt,
statusTip=translate('HelpBrowserGUI', 'About Qt',
'Status bar text for the Help --> About Qt action'))
actions['clearSession'] = QtWidgets.QAction(
translate('HelpBrowserGUI', 'Clear history', ''), self,
shortcut=QtGui.QKeySequence.UnknownKey,
triggered=self.browser.clearHistory,
icon=self.icons['edit-clear-history'],
statusTip=translate('HelpBrowserGUI',
'Clear the content of the history combobox', ''))
return actions
def initPopups(self) :
"""
Setup the menubar and the toolbar of the main window.
The menubar contains the menus `File`, `Go`, `Bookmarks` and `Help`.
The toolbar contains the buttons: `home`, `backward`, `forward`,
`combobox` and `clear history`.
"""
# Create the File menu and add actions/submenus/separators to it
file_menu = self.menuBar().addMenu(
translate('HelpBrowserGUI', "&File", 'The File menu entry'))
file_actions = ['exitBrowser']
vitables.utils.addActions(file_menu, file_actions, self.actions)
# Create the View menu and toolbar
view_menu = self.menuBar().addMenu(
translate('HelpBrowserGUI', "&View", 'The View menu entry'))
view_toolbar = QtWidgets.QToolBar(
translate('HelpBrowserGUI', 'View operations', 'Toolbar title'),
self)
self.addToolBar(view_toolbar)
view_actions = ['zoomIn', 'zoomOut']
vitables.utils.addActions(view_menu, view_actions, self.actions)
vitables.utils.addActions(view_toolbar, view_actions, self.actions)
# Create the Go menu and toolbar
go_menu = self.menuBar().addMenu(translate('HelpBrowserGUI', "&Go",
'The Go menu entry'))
go_toolbar = QtWidgets.QToolBar(
translate('HelpBrowserGUI', 'Go operations', 'Toolbar title'),
self)
self.addToolBar(go_toolbar)
go_actions = ['goHome', 'goBackward', 'goForward', 'goReload']
vitables.utils.addActions(go_menu, go_actions, self.actions)
vitables.utils.addActions(go_toolbar, go_actions, self.actions)
# Create the Bookmarks menu and toolbar
self.bookmarks_menu = self.menuBar().addMenu(
translate('HelpBrowserGUI', "&Bookmarks", 'Bookmarks menu entry'))
bookmarks_toolbar = QtWidgets.QToolBar(
translate('HelpBrowserGUI', 'Bookmarks operations',
'Toolbar title'), self)
self.addToolBar(bookmarks_toolbar)
bookmark_actions = ['bookmarksAdd', 'bookmarksEdit', 'bookmarksClear',
None]
vitables.utils.addActions(self.bookmarks_menu, bookmark_actions,
self.actions)
vitables.utils.addActions(bookmarks_toolbar, bookmark_actions[:2],
self.actions)
# Create the Help menu and add actions/submenus/separators to it
help_menu = self.menuBar().addMenu(
translate('HelpBrowserGUI', "&Help", 'The Help menu entry'))
help_actions = ['about', 'aboutQt']
vitables.utils.addActions(help_menu, help_actions, self.actions)
# Create the History toolbar
history_toolbar = QtWidgets.QToolBar(
translate('HelpBrowserGUI', 'History operations', 'Toolbar title'),
self)
self.addToolBar(history_toolbar)
history_actions = ['clearSession']
vitables.utils.addActions(history_toolbar, history_actions,
self.actions)
go_selector = QtWidgets.QLabel(
translate('HelpBrowserGUI', 'Go: ', 'Text of the Go: label'),
history_toolbar)
history_toolbar.addWidget(go_selector)
self.combo_history = QtWidgets.QComboBox(history_toolbar)
self.combo_history.setSizeAdjustPolicy(
QtWidgets.QComboBox.AdjustToContents)
history_toolbar.addWidget(self.combo_history)
def connectSignals(self):
"""
Connect signals to slots.
Signals coming from GUI are connected to slots in the docs browser
controller, :meth:`vitables.docbrowser.helpbrowser.HelpBrowser`.
"""
self.combo_history.activated[str].connect(self.browser.displaySrc)
# This is the most subtle connection. It encompasses source
# changes coming from anywhere, including slots (home, backward
# and forward), menus (Go and Bookmarks), clicked links and
# programmatic changes (setSource calls).
self.text_browser.sourceChanged.connect(self.browser.updateHistory)
self.text_browser.backwardAvailable.connect(
self.browser.updateBackward)
self.text_browser.forwardAvailable.connect(
self.browser.updateForward)
# The Bookmarks menu is special case due to its dynamic nature.
# The menu content vary every time a bookmark is added/deleted
# In order to track changes and keep it updated, the menu is reloaded
# every time it is about to be displayed.
self.bookmarks_menu.aboutToShow.connect(self.updateRecentSubmenu)
def updateRecentSubmenu(self):
"""Update the content of the Bookmarks menu."""
# Clear the current bookmarks from the Bookmarks menu
for action in self.bookmarks_menu.actions():
if re.search("^(\s?\d)", action.text()):
self.bookmarks_menu.removeAction(action)
# and refresh it
index = 0
for filepath in self.browser.bookmarks:
index += 1
action = QtWidgets.QAction('{0:>2}. {1}'.format(index, filepath),
self.bookmarks_menu)
action.setData(filepath)
self.bookmarks_menu.addAction(action)
action.triggered.connect(self.browser.displaySrc)
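# Illustrative example (hypothetical bookmarks): entries are rendered as
# ' 1. /path/to/usersguide.html', so the r"^(\s?\d)" pattern above matches
# exactly the numbered bookmark actions (and nothing else) each time the
# menu is rebuilt.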
def setupHistoryCombo(self):
"""
Initializes history combobox.
"""
# Setup combobox
self.combo_history.setEditable(False)
self.combo_history.setWhatsThis(translate('HelpBrowserGUI',
"""<qt>
<h3>Page selector</h3>Select the page you want to visit.
</qt>""",
'WhatsThis text for the combobox of visited pages')
)
for item in self.browser.history :
self.combo_history.addItem(item)
def closeEvent(self, event) :
"""
Reimplements the event handler for `QCloseEvent` events.
Before the close event is accepted we need to do some stuff. This can
be done in two different ways: via event filters or reimplementing the
event handler. We have chosen the second possibility.
:Parameter event: the event being handled
"""
# When the help browser window is closed via File --> Exit,
# exitBrowser() is called and the browsing history is saved.
# But if the window is closed with the title bar close button,
# the history would not be saved at all.
# We fix this misbehavior by overriding this event handler.
self.browser.exitBrowser()
QtWidgets.QMainWindow.closeEvent(self, event)
|
ankostis/ViTables
|
vitables/docbrowser/browsergui.py
|
Python
|
gpl-3.0
| 14,862
|
[
"VisIt"
] |
293e93448b501ba6bd2c86dafa7834dbb0f3607ff069625f905ecd82c1185543
|
# This file is part of BHMM (Bayesian Hidden Markov Models).
#
# Copyright (c) 2016 Frank Noe (Freie Universitaet Berlin)
# and John D. Chodera (Memorial Sloan-Kettering Cancer Center, New York)
#
# BHMM is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import time
import copy
import numpy as np
# TODO: reactivate multiprocessing, parallelize model fitting and forward-backward
# from multiprocessing import Queue, Process, cpu_count
# BHMM imports
import bhmm
import bhmm.hidden as hidden
from bhmm.util.logger import logger
from bhmm.util import config
class MaximumLikelihoodEstimator(object):
"""
Maximum likelihood Hidden Markov model (HMM).
This class is used to fit a maximum-likelihood HMM to data.
Examples
--------
>>> import bhmm
>>> bhmm.config.verbose = False
>>>
>>> from bhmm import testsystems
>>> [model, O, S] = testsystems.generate_synthetic_observations(ntrajectories=5, length=1000)
>>> mlhmm = MaximumLikelihoodEstimator(O, model.nstates)
>>> model = mlhmm.fit()
References
----------
[1] L. E. Baum and J. A. Egon, "An inequality with applications to statistical
estimation for probabilistic functions of a Markov process and to a model
for ecology," Bull. Amer. Meteorol. Soc., vol. 73, pp. 360-363, 1967.
"""
def __init__(self, observations, nstates, initial_model=None, output='gaussian',
reversible=True, stationary=False, p=None, accuracy=1e-3, maxit=1000, maxit_P=100000):
"""Initialize a Bayesian hidden Markov model sampler.
Parameters
----------
observations : list of numpy arrays representing temporal data
`observations[i]` is a 1d numpy array corresponding to the observed
trajectory index `i`
nstates : int
The number of states in the model.
initial_model : HMM, optional, default=None
If specified, the given initial model will be used to initialize the
BHMM. Otherwise, a heuristic scheme is used to generate an initial guess.
output : str, optional, default='gaussian'
Output model type, one of ['gaussian', 'discrete'].
reversible : bool, optional, default=True
If True, a prior that enforces reversible transition matrices (detailed
balance) is used; otherwise, a standard non-reversible prior is used.
stationary : bool, optional, default=False
If True, the initial distribution of hidden states is self-consistently
computed as the stationary distribution of the transition matrix. If
False, it will be estimated from the starting states.
p : ndarray (nstates), optional, default=None
Initial or fixed stationary distribution. If given and stationary=True,
transition matrices will be estimated with the constraint that they
have p as their stationary distribution. If given and stationary=False,
p is the fixed initial distribution of hidden states.
accuracy : float
convergence threshold for the EM iteration. When the likelihood does
not increase by more than `accuracy` between two iterations, the
iteration is stopped successfully.
maxit : int
maximum number of EM iterations. If the requested accuracy has not
been reached after this many iterations, the estimation stops without
convergence (a warning is given)
maxit_P : int
maximum number of iterations for reversible transition matrix estimation.
Only used with reversible=True.
"""
# Store a copy of the observations.
self._observations = copy.deepcopy(observations)
self._nobs = len(observations)
self._Ts = [len(o) for o in observations]
self._maxT = np.max(self._Ts)
# Set parameters
self._nstates = nstates
self._reversible = reversible
self._stationary = stationary
if initial_model is not None:
# Use user-specified initial model, if provided.
self._hmm = copy.deepcopy(initial_model)
else:
# Generate our own initial model.
self._hmm = bhmm.init_hmm(observations, nstates, output=output)
# stationary and initial distribution
self._fixed_stationary_distribution = None
self._fixed_initial_distribution = None
if p is not None:
if stationary:
self._fixed_stationary_distribution = np.array(p)
else:
self._fixed_initial_distribution = np.array(p)
# pre-construct hidden variables
self._alpha = np.zeros((self._maxT, self._nstates), config.dtype, order='C')
self._beta = np.zeros((self._maxT, self._nstates), config.dtype, order='C')
self._pobs = np.zeros((self._maxT, self._nstates), config.dtype, order='C')
self._gammas = [np.zeros((len(self._observations[i]), self._nstates), config.dtype, order='C')
for i in range(self._nobs)]
self._Cs = [np.zeros((self._nstates, self._nstates), config.dtype, order='C') for _ in range(self._nobs)]
# convergence options
self._accuracy = accuracy
self._maxit = maxit
self._maxit_P = maxit_P
self._likelihoods = None
# Kernel for computing things
hidden.set_implementation(config.kernel)
self._hmm.output_model.set_implementation(config.kernel)
@property
def observations(self):
r""" Observation trajectories """
return self._observations
@property
def nobservations(self):
r""" Number of observation trajectories """
return self._nobs
@property
def observation_lengths(self):
r""" Lengths of observation trajectories """
return self._Ts
@property
def is_reversible(self):
r""" Whether the transition matrix is estimated with detailed balance constraints """
return self._reversible
@property
def nstates(self):
r""" Number of hidden states """
return self._nstates
@property
def accuracy(self):
r""" Convergence threshold for EM iteration """
return self._accuracy
@property
def maxit(self):
r""" Maximum number of iterations """
return self._maxit
@property
def likelihood(self):
r""" Estimated HMM likelihood """
return self._likelihoods[-1]
@property
def likelihoods(self):
r""" Sequence of likelihoods generated from the iteration """
return self._likelihoods
@property
def hidden_state_probabilities(self):
r""" Probabilities of hidden states at every trajectory and time point """
return self._gammas
@property
def hmm(self):
r""" The estimated HMM """
return self._hmm
@property
def output_model(self):
r""" The HMM output model """
return self._hmm.output_model
@property
def transition_matrix(self):
r""" Hidden transition matrix """
return self._hmm.Tij
@property
def initial_probability(self):
r""" Initial probability """
return self._hmm.Pi
@property
def stationary_probability(self):
r""" Stationary probability, if the model is stationary """
assert self._stationary, 'Estimator is not stationary'
return self._hmm.Pi
def _forward_backward(self, itraj):
"""
Estimation step: runs the forward-backward algorithm on the trajectory with index itraj
Parameters
----------
itraj : int
index of the observation trajectory to process
Returns
-------
logprob : float
The log probability of observing the observation sequence given the
current HMM parameters.
As side effects, the state probabilities (gamma) for each t are stored
in self._gammas[itraj] and the Baum-Welch transition count matrix in
self._Cs[itraj].
"""
# get parameters
A = self._hmm.transition_matrix
pi = self._hmm.initial_distribution
obs = self._observations[itraj]
T = len(obs)
# compute output probability matrix
# t1 = time.time()
self._hmm.output_model.p_obs(obs, out=self._pobs)
# t2 = time.time()
# self._fbtimings[0] += t2-t1
# forward variables
logprob = hidden.forward(A, self._pobs, pi, T=T, alpha_out=self._alpha)[0]
# t3 = time.time()
# self._fbtimings[1] += t3-t2
# backward variables
hidden.backward(A, self._pobs, T=T, beta_out=self._beta)
# t4 = time.time()
# self._fbtimings[2] += t4-t3
# gamma
hidden.state_probabilities(self._alpha, self._beta, T=T, gamma_out=self._gammas[itraj])
# t5 = time.time()
# self._fbtimings[3] += t5-t4
# count matrix
hidden.transition_counts(self._alpha, self._beta, A, self._pobs, T=T, out=self._Cs[itraj])
# t6 = time.time()
# self._fbtimings[4] += t6-t5
# return results
return logprob
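    # --- Added illustration (not part of the original bhmm sources) ----------
    # Minimal pure-NumPy sketch of the scaled forward recursion that the
    # `hidden.forward` kernel above computes. The argument names A, pobs and
    # pi mirror the local variables of _forward_backward; the helper itself is
    # hypothetical and exists only to document the math.
    @staticmethod
    def _forward_sketch(A, pobs, pi):
        T, N = pobs.shape
        alpha = np.zeros((T, N))
        logprob = 0.0
        for t in range(T):
            alpha[t] = pi * pobs[0] if t == 0 else alpha[t - 1].dot(A) * pobs[t]
            c = alpha[t].sum()        # per-step rescaling avoids underflow
            alpha[t] /= c
            logprob += np.log(c)      # log-likelihood is the sum of log-scales
        return logprob, alpha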
def _init_counts(self, gammas):
gamma0_sum = np.zeros(self._nstates)
for k in range(len(self._observations)): # update state counts
gamma0_sum += gammas[k][0]
return gamma0_sum
def _transition_counts(self, count_matrices):
C = np.zeros((self._nstates, self._nstates))
for k in range(len(self._observations)): # update count matrix
C += count_matrices[k]
return C
def _update_model(self, gammas, count_matrices, maxiter=10000000):
"""
Maximization step: Updates the HMM model given the hidden state assignment and count matrices
Parameters
----------
gamma : [ ndarray(T,N, dtype=float) ]
list of state probabilities for each trajectory
count_matrix : [ ndarray(N,N, dtype=float) ]
list of the Baum-Welch transition count matrices for each hidden
state trajectory
maxiter : int
maximum number of iterations of the transition matrix estimation if
an iterative method is used.
"""
gamma0_sum = self._init_counts(gammas)
C = self._transition_counts(count_matrices)
logger().info("Initial count = \n"+str(gamma0_sum))
logger().info("Count matrix = \n"+str(C))
# compute new transition matrix
from bhmm.estimators._tmatrix_disconnected import estimate_P, stationary_distribution
T = estimate_P(C, reversible=self._hmm.is_reversible, fixed_statdist=self._fixed_stationary_distribution,
maxiter=maxiter, maxerr=1e-12, mincount_connectivity=1e-16)
# print 'P:\n', T
# estimate stationary or init distribution
if self._stationary:
if self._fixed_stationary_distribution is None:
pi = stationary_distribution(T, C=C, mincount_connectivity=1e-16)
else:
pi = self._fixed_stationary_distribution
else:
if self._fixed_initial_distribution is None:
pi = gamma0_sum / np.sum(gamma0_sum)
else:
pi = self._fixed_initial_distribution
# print 'pi: ', pi, ' stationary = ', self._hmm.is_stationary
# update model
self._hmm.update(pi, T)
logger().info("T: \n"+str(T))
logger().info("pi: \n"+str(pi))
# update output model
self._hmm.output_model.estimate(self._observations, gammas)
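    # --- Added illustration (not part of the original bhmm sources) ----------
    # Without the detailed-balance constraint, the M-step carried out by
    # estimate_P above reduces to row-normalising the pooled count matrix.
    # Hypothetical helper, shown only to document that special case:
    @staticmethod
    def _nonreversible_mstep_sketch(C):
        return C / C.sum(axis=1, keepdims=True)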
def compute_viterbi_paths(self):
"""
Computes the viterbi paths using the current HMM model
"""
# get parameters
K = len(self._observations)
A = self._hmm.transition_matrix
pi = self._hmm.initial_distribution
# compute viterbi path for each trajectory
paths = np.empty(K, dtype=object)
for itraj in range(K):
obs = self._observations[itraj]
# compute output probability matrix
pobs = self._hmm.output_model.p_obs(obs)
# hidden path
paths[itraj] = hidden.viterbi(A, pobs, pi)
# done
return paths
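    # --- Added illustration (not part of the original bhmm sources) ----------
    # Log-space sketch of the Viterbi recursion that `hidden.viterbi` performs
    # for each trajectory above; zero probabilities become -inf, which argmax
    # handles correctly. The helper name is made up.
    @staticmethod
    def _viterbi_sketch(A, pobs, pi):
        T, N = pobs.shape
        with np.errstate(divide='ignore'):
            logA = np.log(A)
            V = np.log(pi * pobs[0])
        ptr = np.zeros((T, N), dtype=int)
        for t in range(1, T):
            scores = V[:, None] + logA      # scores[i, j]: best path into j via i
            ptr[t] = scores.argmax(axis=0)
            with np.errstate(divide='ignore'):
                V = scores.max(axis=0) + np.log(pobs[t])
        path = np.zeros(T, dtype=int)
        path[-1] = V.argmax()
        for t in range(T - 2, -1, -1):      # backtrace
            path[t] = ptr[t + 1, path[t + 1]]
        return path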
def fit(self):
"""
Maximum-likelihood estimation of the HMM using the Baum-Welch algorithm
Returns
-------
model : HMM
The maximum likelihood HMM model.
"""
logger().info("=================================================================")
logger().info("Running Baum-Welch:")
logger().info(" input observations: "+str(self.nobservations)+" of lengths "+str(self.observation_lengths))
logger().info(" initial HMM guess:"+str(self._hmm))
initial_time = time.time()
it = 0
self._likelihoods = np.zeros(self.maxit)
loglik = 0.0
# flag if connectivity has changed (e.g. state lost) - in that case the likelihood
# is discontinuous and can't be used as a convergence criterion in that iteration.
tmatrix_nonzeros = self.hmm.transition_matrix.nonzero()
converged = False
while not converged and it < self.maxit:
# self._fbtimings = np.zeros(5)
t1 = time.time()
loglik = 0.0
for k in range(self._nobs):
loglik += self._forward_backward(k)
t2 = time.time()
# convergence check
if it > 0:
dL = loglik - self._likelihoods[it-1]
# print 'dL ', dL, 'iter_P ', maxiter_P
if dL < self._accuracy:
# print "CONVERGED! Likelihood change = ",(loglik - self.likelihoods[it-1])
converged = True
# update model
self._update_model(self._gammas, self._Cs, maxiter=self._maxit_P)
t3 = time.time()
# connectivity change check
tmatrix_nonzeros_new = self.hmm.transition_matrix.nonzero()
if not np.array_equal(tmatrix_nonzeros, tmatrix_nonzeros_new):
converged = False # unset converged
tmatrix_nonzeros = tmatrix_nonzeros_new
# print 't_fb: ', str(1000.0*(t2-t1)), 't_up: ', str(1000.0*(t3-t2)), 'L = ', loglik, 'dL = ', (loglik - self._likelihoods[it-1])
# print ' fb timings (ms): pobs', (1000.0*self._fbtimings).astype(int)
logger().info(str(it) + " ll = " + str(loglik))
# print self.model.output_model
# print "---------------------"
# end of iteration
self._likelihoods[it] = loglik
it += 1
# final update with high precision
# self._update_model(self._gammas, self._Cs, maxiter=10000000)
# truncate likelihood history
self._likelihoods = self._likelihoods[:it]
# set final likelihood
self._hmm.likelihood = loglik
# set final count matrix
self.count_matrix = self._transition_counts(self._Cs)
self.initial_count = self._init_counts(self._gammas)
final_time = time.time()
elapsed_time = final_time - initial_time
logger().info("maximum likelihood HMM:"+str(self._hmm))
logger().info("Elapsed time for Baum-Welch solution: %.3f s" % elapsed_time)
logger().info("Computing Viterbi path:")
initial_time = time.time()
# Compute hidden state trajectories using the Viterbi algorithm.
self._hmm.hidden_state_trajectories = self.compute_viterbi_paths()
final_time = time.time()
elapsed_time = final_time - initial_time
logger().info("Elapsed time for Viterbi path computation: %.3f s" % elapsed_time)
logger().info("=================================================================")
return self._hmm
    # TODO: reactivate multiprocessing
# ###################
# # MULTIPROCESSING
#
# def _forward_backward_worker(self, work_queue, done_queue):
# try:
# for k in iter(work_queue.get, 'STOP'):
# (weight, gamma, count_matrix) = self._forward_backward(k)
# done_queue.put((k, weight, gamma, count_matrix))
# except Exception, e:
# done_queue.put(e.message)
# return True
#
#
# def fit_parallel(self):
# """
# Maximum-likelihood estimation of the HMM using the Baum-Welch algorithm
#
# Returns
# -------
# The hidden markov model
#
# """
# K = len(self.observations)#, len(A), len(B[0])
# gammas = np.empty(K, dtype=object)
# count_matrices = np.empty(K, dtype=object)
#
# it = 0
# converged = False
#
# num_threads = min(cpu_count(), K)
# work_queue = Queue()
# done_queue = Queue()
# processes = []
#
# while (not converged):
# print "it", it
# loglik = 0.0
#
# # fill work queue
# for k in range(K):
# work_queue.put(k)
#
# # start processes
# for w in xrange(num_threads):
# p = Process(target=self._forward_backward_worker, args=(work_queue, done_queue))
# p.start()
# processes.append(p)
# work_queue.put('STOP')
#
# # end processes
# for p in processes:
# p.join()
#
# # done signal
# done_queue.put('STOP')
#
# # get results
# for (k, ll, gamma, count_matrix) in iter(done_queue.get, 'STOP'):
# loglik += ll
# gammas[k] = gamma
# count_matrices[k] = count_matrix
#
# # update T, pi
# self._update_model(gammas, count_matrices)
#
# self.likelihoods[it] = loglik
#
# if it > 0:
# if loglik - self.likelihoods[it-1] < self.accuracy:
# converged = True
#
# it += 1
#
# return self.model
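# --- Added usage illustration (not part of the original bhmm sources) --------
# Hedged sketch of driving this estimator end to end, assuming the class
# defined above is bhmm's MaximumLikelihoodEstimator; the data and the state
# count below are made up.
#
#   import numpy as np
#   obs = [np.random.randn(1000), np.random.randn(800)]
#   ml = MaximumLikelihoodEstimator(obs, nstates=2, output='gaussian')
#   model = ml.fit()
#   print(model.transition_matrix)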
|
jchodera/bhmm
|
bhmm/estimators/maximum_likelihood.py
|
Python
|
lgpl-3.0
| 18,891
|
[
"Gaussian"
] |
a98893efb6d6b4e5b7ec02aa6eaefdeba35f9638e91b5ed3ccc293bfa9990bc2
|
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
#
# The MIT License (MIT)
#
# Copyright (c) 2016 Joan Puigcerver <joapuipe@prhlt.upv.es>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import argparse
import math
import re
import sys
REGEX_FLOAT = r'(?:[+-]?(?:\d+(?:\.\d*)?|\.\d+)(?:[eE][+-]?\d+)?)'
class GMM(object):
def __init__(self):
self._mixes = []
@property
def dimension(self):
return (len(self._mixes[0][0]) if len(self._mixes) > 0 else None)
@property
def num_mixtures(self):
return (len(self._mixes) if self._mixes is not None else None)
def add_mixture(self, m, v, w):
        assert isinstance(w, float), 'Mixture weight must be a float'
if not isinstance(m, tuple): m = tuple(m)
if not isinstance(v, tuple): v = tuple(v)
assert len(m) == len(v), \
'Dimension of mean and variance vectors must agree'
self._mixes.append((m, v, w))
if len(self._mixes) > 1:
assert len(self._mixes[0][0]) == len(self._mixes[-1][0]), \
'All mixtures in a GMM must have the same dimension'
def write(self, f):
f.write('<DiagGMM>\n')
f.write('<WEIGHTS> [ %s ]\n' % ' '.join(
map(lambda x: str(x[-1]), self._mixes)))
f.write('<MEANS_INVVARS> [\n')
for (m, v, _) in self._mixes:
for mi, vi in zip(m, v):
f.write(' %g' % (mi / vi))
f.write('\n')
f.write(']\n')
f.write('<INV_VARS> [\n')
for (_, v, _) in self._mixes:
for vi in v:
f.write(' %g' % (1.0 / vi))
f.write('\n')
f.write(']\n')
f.write('</DiagGMM>\n')
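    # --- Added illustration (not part of the original script) ----------------
    # write() above emits the GMM in Kaldi's natural parameters (mean/var and
    # 1/var). Hedged sketch of recovering a per-frame log-likelihood from the
    # same mixtures, using the max over components as a cheap stand-in for
    # log-sum-exp; the method name is made up.
    def loglike_sketch(self, x):
        best = None
        for (m, v, w) in self._mixes:
            ll = math.log(w)
            for xi, mi, vi in zip(x, m, v):
                ll += -0.5 * (math.log(2.0 * math.pi * vi) + (xi - mi) ** 2 / vi)
            best = ll if best is None else max(best, ll)
        return best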
class HMM(object):
def __init__(self):
self._states = []
self._pinit = None
@property
def dimension(self):
return self._states[0][0].dimension if len(self._states) > 0 \
else None
def add_state(self, gmm):
self._states.append([gmm, []])
def set_initial(self, pinit):
self._pinit = pinit
def set_transition(self, i, ptrans):
assert i >= 0 and i < len(self._states), 'Unknown state %d' % i
if not isinstance(ptrans, tuple): ptrans = tuple(ptrans)
self._states[i][1] = ptrans
def write_transitions(self, f):
if self._pinit[1] < 1.0:
# Initial State
f.write('<State> 0\n')
for j, p in enumerate(self._pinit):
if p <= 0.0: continue
f.write('<Transition> %d %g\n' % (j, p))
# Emitting states
for s, (_, trans) in enumerate(self._states):
f.write('<State> %d <PdfClass> %d\n' % (s + 1, s))
for j, p in enumerate(trans):
if p <= 0.0: continue
f.write('<Transition> %d %g\n' % (j, p))
f.write('</State>\n')
# Final state
f.write('<State> %d\n' % (len(self._states) + 1))
f.write('</State>\n')
else:
# Emitting states
for s, (_, trans) in enumerate(self._states):
f.write('<State> %d <PdfClass> %d\n' % (s, s))
for j, p in enumerate(trans):
if p <= 0.0: continue
f.write('<Transition> %d %g\n' % (j - 1, p))
f.write('</State>\n')
# Final state
f.write('<State> %d\n' % len(self._states))
f.write('</State>\n')
def write_pdfs(self, f):
for (gmm, _) in self._states:
gmm.write(f)
def get_states_pdfs(self):
if self._pinit[1] < 1.0:
return map(lambda x: (x + 1, x), xrange(len(self._states)))
else:
return map(lambda x: (x, x), xrange(len(self._states)))
def get_transition_logprobs(self):
logprobs = []
if self._pinit[1] < 1.0:
for p in self._pinit:
if p <= 0.0: continue
logprobs.append(math.log(p))
for (_, trans) in self._states:
for p in trans:
if p <= 0.0: continue
logprobs.append(math.log(p))
return logprobs
class Model(object):
def __init__(self):
self._dim = None
self._hmm = {}
@property
def dimension(self):
return self._dim
@dimension.setter
def dimension(self, value):
assert isinstance(value, int)
self._dim = value
def add_hmm(self, i, hmm):
assert isinstance(i, int)
assert self._dim == hmm.dimension
self._hmm[i] = hmm
def write(self, f):
if isinstance(f, str) or isinstance(f, unicode):
f = open(f, 'w')
f.write('<TransitionModel>\n')
f.write('<Topology>\n')
for hmm_id, hmm in self._hmm.iteritems():
f.write('<TopologyEntry>\n')
f.write('<ForPhones> %d </ForPhones>\n' % hmm_id)
hmm.write_transitions(f)
f.write('</TopologyEntry>\n')
f.write('</Topology>\n')
# Triples
triples, logprobs = [], [0]
for hmm_id, hmm in self._hmm.iteritems():
base_pdfs = len(triples)
for (s, pdf) in hmm.get_states_pdfs():
triples.append((hmm_id, s, pdf + base_pdfs))
logprobs.extend(hmm.get_transition_logprobs())
f.write('<Triples> %d\n' % len(triples))
for (h, s, p) in triples:
f.write('%d %d %d\n' % (h, s, p))
f.write('</Triples>\n')
f.write('<LogProbs>\n[ %s ]\n</LogProbs>\n' % \
' '.join(map(lambda x: str(x), logprobs)))
f.write('</TransitionModel>\n')
num_pdfs = max(map(lambda x: x[-1], triples)) + 1
f.write('<DIMENSION> %d <NUMPDFS> %d\n' % (self.dimension, num_pdfs))
for hmm in self._hmm.itervalues():
hmm.write_pdfs(f)
f.close()
class HTKParser(object):
def __init__(self, symbols_table):
assert isinstance(symbols_table, dict)
self._n = 0
self._line = None
self._data = None
self._model = None
self._symbols = symbols_table
def _next(self):
if not self._data:
return False
self._n += 1
if self._n < len(self._data):
self._line = self._data[self._n]
return True
else:
self._line = None
return False
def _msg(self, s):
return '%s at line %d: "%s"' % (s, self._n + 1, self._line)
def _read_dim(self):
assert self._line is not None, 'Unexpected EOF'
m = re.match('^<STREAMINFO> 1 ([0-9]+)$', self._line)
if not m: return False
self._model.dimension = int(m.group(1))
return self._next()
def _read_hmm(self):
assert self._line is not None, 'Unexpected EOF'
m = re.match('^~h "([0-9A-Za-z]+)"$', self._line)
if not m: return False
hmm_sym = m.group(1)
hmm_id = self._symbols.get(hmm_sym, None)
assert hmm_id is not None, self._msg(
'HMM symbol \"%s\" is not found in the symbols table' % hmm_sym)
hmm = HMM()
# Read <BEGINHMM>
assert self._next(), 'Unexpected EOF'
assert re.match('^<BEGINHMM>$', self._line), \
self._msg('Expected "<BEGINHMM>"')
# Read <NUMSTATES>
assert self._next(), 'Unexpected EOF'
m = re.match('^<NUMSTATES> ([0-9]+)$', self._line)
assert m, self._msg('Expected "<NUMSTATES> %%d"')
num_states = int(m.group(1))
assert num_states > 2, self._msg('Unexpected number of states')
# Process all <STATE>s
assert self._next(), 'Unexpected EOF'
for s in xrange(2, num_states):
# Check state ID
m = re.match('^<STATE> %d$' % s, self._line)
assert m, self._msg('Expected "<STATE> %d"' % s)
# Read number of mixtures
assert self._next(), 'Unexpected EOF'
m = re.match('^<NUMMIXES> ([0-9]+)$', self._line)
gmm = GMM()
if not m:
## Special case: 1 Gaussian
num_mixes = 1
weight = 1.0
# Check <MEAN>
m = re.match('^<MEAN> %d$' % self._model.dimension, self._line)
assert m, self._msg('Expected "<MEAN> %d"' % \
self._model.dimension)
# Read mean vector
assert self._next(), 'Unexpected EOF'
mean = tuple([float(x) for x in self._line.split()])
assert len(mean) == self._model.dimension, \
self._msg('Invalid mean vector')
# Check <VARIANCE>
assert self._next(), 'Unexpected EOF'
m = re.match('^<VARIANCE> %d$' % self._model.dimension,
self._line)
assert m, self._msg('Expected "<VARIANCE> %d"' % \
self._model.dimension)
# Read variance vector
assert self._next(), 'Unexpected EOF'
var = tuple([ float(x) for x in self._line.split() ])
assert len(var) == self._model.dimension, \
self._msg('Invalid variance vector')
# Read Gconst. It is not used, since it can be obtained from
# the mean and the variance vectors.
assert self._next(), 'Unexpected EOF'
m = re.match('^<GCONST> (%s)$' % REGEX_FLOAT, self._line)
assert m, self._msg('Expected "<GCONST> %%f"')
# Add mixture to the gmm
gmm.add_mixture(mean, var, weight)
assert self._next(), 'Unexpected EOF'
else:
## General case, Gaussian Mixture Model
num_mixes = int(m.group(1))
assert num_mixes > 0, self._msg('Invalid number of mixtures')
# Process all <MIXTURE>s
while True:
# Read mixture weight
assert self._next(), 'Unexpected EOF'
m = re.match('^<MIXTURE> \d+ (%s)$' % REGEX_FLOAT,
self._line)
if not m: break
weight = float(m.group(1))
assert weight > 0.0 and weight <= 1.0, \
self._msg('Invalid mixture weight')
# Check <MEAN>
assert self._next(), 'Unexpected EOF'
m = re.match('^<MEAN> %d$' % self._model.dimension,
self._line)
assert m, self._msg('Expected "<MEAN> %d"' % \
self._model.dimension)
# Read mean vector
assert self._next(), 'Unexpected EOF'
mean = tuple([float(x) for x in self._line.split()])
assert len(mean) == self._model.dimension, \
self._msg('Invalid mean vector')
# Check <VARIANCE>
assert self._next(), 'Unexpected EOF'
m = re.match('^<VARIANCE> %d$' % self._model.dimension,
self._line)
assert m, self._msg('Expected "<VARIANCE> %d"' % \
self._model.dimension)
# Read variance vector
assert self._next(), 'Unexpected EOF'
var = tuple([ float(x) for x in self._line.split() ])
assert len(var) == self._model.dimension, \
self._msg('Invalid variance vector')
# Read Gconst. It is not used, since it can be obtained from
# the mean and the variance vectors.
assert self._next(), 'Unexpected EOF'
m = re.match('^<GCONST> (%s)$' % REGEX_FLOAT, self._line)
assert m, self._msg('Expected "<GCONST> %%f"')
# Add mixture to the gmm
gmm.add_mixture(mean, var, weight)
# Add state to the HMM
hmm.add_state(gmm)
        # Process transitions: read <TRANSP>.
        # No self._next() here: both mixture branches above leave the current
        # line at the <TRANSP> header.
        m = re.match('^<TRANSP> %d$' % num_states, self._line)
        assert m, self._msg('Expected "<TRANSP> %d"' % num_states)
# Process initial state
assert self._next(), 'Unexpected EOF'
t = tuple([float(x) for x in self._line.split()])
assert len(t) == num_states, \
self._msg('Unexpected number of transitions')
hmm.set_initial(t)
# Process emitting states
for i in range(num_states - 2):
assert self._next(), 'Unexpected EOF'
t = tuple([ float(x) for x in self._line.split() ])
assert len(t) == num_states, \
                self._msg('Unexpected number of transitions')
hmm.set_transition(i, t)
# Process final state (just ignore it)
assert self._next(), 'Unexpected EOF'
t = tuple([ float(x) for x in self._line.split() ])
assert len(t) == num_states, \
'Unexpected number of transitions at line %d' % self._n
# Check <ENDHMM>
assert self._next(), 'Unexpected EOF'
        m = re.match('^<ENDHMM>$', self._line)
        assert m, self._msg('Expected "<ENDHMM>"')
        self._model.add_hmm(hmm_id, hmm)
return True
def parse(self, f):
if isinstance(f, str) or isinstance(f, unicode):
f = open(f, 'r')
self._model = Model()
self._data = [line.strip() for line in f]
f.close()
assert len(self._data) > 0, 'Unexpected EOF'
self._n = -1
while self._next():
if not self._read_dim() and not self._read_hmm():
sys.stderr.write('IGNORED LINE: \"%s\"\n' % self._line)
return self._model
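# --- Added illustration (not part of the original script) --------------------
# A minimal, hypothetical MMF snippet of the shape HTKParser accepts when
# written to a file: one 3-state HMM (1 emitting state) with a single
# 2-dimensional unit-variance Gaussian. The symbol "a" must have an entry in
# the symbols table; GCONST = dim * log(2*pi) for unit variances. The "~o"
# line is reported as ignored, and the "<VECSIZE>" line is silently consumed
# by the dimension handler's look-ahead.
EXAMPLE_HTK_MODEL = """\
~o
<STREAMINFO> 1 2
<VECSIZE> 2
~h "a"
<BEGINHMM>
<NUMSTATES> 3
<STATE> 2
<MEAN> 2
0.0 0.0
<VARIANCE> 2
1.0 1.0
<GCONST> 3.67575
<TRANSP> 3
0.0 1.0 0.0
0.0 0.6 0.4
0.0 0.0 0.0
<ENDHMM>
"""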
if __name__ == '__main__':
parser = argparse.ArgumentParser(
description='Convert a HMM/GMM model file from HTK to Kaldi format.')
parser.add_argument('symbols_table', type=argparse.FileType('r'),
help='HMM symbols table')
parser.add_argument('input', type=argparse.FileType('r'),
nargs='?', default=sys.stdin,
help='Input model in HTK text format')
parser.add_argument('output', type=argparse.FileType('w'),
nargs='?', default=sys.stdout,
help='Output model in Kaldi text format')
args = parser.parse_args()
# Read symbols table
symbols_table = {}
for line in args.symbols_table:
line = line.split()
assert len(line) == 2, 'Wrong symbols table format'
symbols_table[line[0]] = int(line[1])
htk_parser = HTKParser(symbols_table)
model = htk_parser.parse(args.input)
model.write(args.output)
|
jpuigcerver/htk2kaldi
|
htk2kaldi.py
|
Python
|
mit
| 15,982
|
[
"Gaussian"
] |
f41f425afe7320b0b0578eecac16969eeceefd64c9590bd9b5666bb957f14474
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import (absolute_import, division,
print_function, unicode_literals)
import sys
import os
import importlib
import subprocess
import funannotate
from pkg_resources import get_distribution
global package_name
global __version__
package_name = 'funannotate'
__version__ = get_distribution(package_name).version
default_help = """
Usage: {:} <command> <arguments>
version: {:}
Description: Funannotate is a genome prediction, annotation, and comparison pipeline.
Commands:
clean Find/remove small repetitive contigs
sort Sort by size and rename contig headers
mask Repeatmask genome assembly
train RNA-seq mediated training of Augustus/GeneMark
predict Run gene prediction pipeline
fix Fix annotation errors (generate new GenBank file)
update RNA-seq/PASA mediated gene model refinement
remote Partial functional annotation using remote servers
iprscan InterProScan5 search (Docker or local)
annotate Assign functional annotation to gene predictions
compare Compare funannotated genomes
util Format conversion and misc utilities
setup Setup/Install databases
test Download/Run funannotate installation tests
check Check Python, Perl, and External dependencies [--show-versions]
species list pre-trained Augustus species
database Manage databases
outgroups Manage outgroups for funannotate compare
Written by Jon Palmer (2016-2019) nextgenusfs@gmail.com
""".format(package_name, __version__)
cleanHelp = """
Usage: {:} clean <arguments>
version: {:}
Description: The script sorts contigs by size; starting with the shortest contigs, it uses
             minimap2 to find contigs duplicated elsewhere, and then removes the duplicated contigs.
Arguments:
-i, --input Multi-fasta genome file (Required)
-o, --out Cleaned multi-fasta output file (Required)
-p, --pident Percent identity of overlap. Default = 95
-c, --cov Percent coverage of overlap. Default = 95
-m, --minlen Minimum length of contig to keep. Default = 500
--exhaustive Test every contig. Default is to stop at N50 value.
""".format(package_name, __version__)
sortHelp = """
Usage: {:} sort <arguments>
version: {:}
Description: This script sorts the input contigs by size (longest->shortest) and then relabels
the contigs with a simple name (e.g. scaffold_1). Augustus can have problems with
some complicated contig names. Alternatively pass -s,--simplify in order
to split fasta headers at first space.
Arguments:
-i, --input Multi-fasta genome file. (Required)
-o, --out Sorted by size and relabeled output file. (Required)
-s, --simplify Try to simplify the FASTA headers, split at first space.
-b, --base Base name to relabel contigs. Default: scaffold
--minlen Shorter contigs are discarded. Default: 0
""".format(package_name, __version__)
maskHelp = """
Usage: {:} mask <arguments>
version: {:}
Description: This script is a wrapper for repeat masking. Default is to run very simple
repeat masking with tantan. The script can also run RepeatMasker and/or
RepeatModeler. It will generate a softmasked genome. Tantan is probably not
sufficient for soft-masking an assembly, but with RepBase no longer being
available RepeatMasker/Modeler may not be functional for many users.
Arguments:
-i, --input Multi-FASTA genome file. (Required)
-o, --out Output softmasked FASTA file. (Required)
Optional:
-m, --method Method to use. Default: tantan [repeatmasker, repeatmodeler]
-s, --repeatmasker_species Species to use for RepeatMasker
-l, --repeatmodeler_lib Custom repeat database (FASTA format)
--cpus Number of cpus to use. Default: 2
--debug Keep intermediate files
""".format(package_name, __version__)
trainHelp = """
Usage: {:} train <arguments>
version: {:}
Description: Script is a wrapper for de novo genome-guided transcriptome assembly using
Trinity followed by PASA. Illumina and Long-read (nanopore/pacbio) RNA-seq
is also supported. Dependencies are hisat2, Trinity, samtools, fasta,
minimap2, PASA.
Required:
-i, --input Genome multi-fasta file
-o, --out Output folder name
-l, --left Left/Forward FASTQ Illumina reads (R1)
-r, --right Right/Reverse FASTQ Illumina reads (R2)
-s, --single Single ended FASTQ reads
Optional:
--stranded If RNA-seq library stranded. [RF,FR,F,R,no]
--left_norm Normalized left FASTQ reads (R1)
--right_norm Normalized right FASTQ reads (R2)
--single_norm Normalized single-ended FASTQ reads
--pacbio_isoseq PacBio long-reads
--nanopore_cdna Nanopore cDNA long-reads
--nanopore_mrna Nanopore mRNA direct long-reads
--trinity Pre-computed Trinity transcripts (FASTA)
--jaccard_clip Turn on jaccard clip for dense genomes [Recommended for fungi]
--no_normalize_reads Skip read Normalization
--no_trimmomatic Skip Quality Trimming of reads
--memory RAM to use for Jellyfish. Default: 50G
-c, --coverage Depth to normalize reads. Default: 50
-m, --min_coverage Min depth for normalizing reads. Default: 5
--pasa_db Database to use. Default: sqlite [mysql,sqlite]
--pasa_alignment_overlap PASA --stringent_alignment_overlap. Default: 30.0
--aligners Aligners to use with PASA: Default: minimap2 blat [gmap]
--pasa_min_pct_aligned PASA --MIN_PERCENT_ALIGNED. Default: 90
--pasa_min_avg_per_id PASA --MIN_AVG_PER_ID. Default: 95
--pasa_num_bp_splice PASA --NUM_BP_PERFECT_SPLICE_BOUNDARY. Default: 3
--max_intronlen Maximum intron length. Default: 3000
--species Species name, use quotes for binomial, e.g. "Aspergillus fumigatus"
--strain Strain name
--isolate Isolate name
--cpus Number of CPUs to use. Default: 2
--no-progress Do not print progress to stdout for long sub jobs
ENV Vars: If not passed, will try to load from your $PATH.
--PASAHOME
--TRINITYHOME
""".format(package_name, __version__)
predictHelp = """
Usage: {:} predict <arguments>
version: {:}
Description: Script takes genome multi-fasta file and a variety of inputs to do a comprehensive whole
genome gene prediction. Uses AUGUSTUS, GeneMark, Snap, GlimmerHMM, BUSCO, EVidence Modeler,
tbl2asn, tRNAScan-SE, Exonerate, minimap2.
Required:
-i, --input Genome multi-FASTA file (softmasked repeats)
-o, --out Output folder name
-s, --species Species name, use quotes for binomial, e.g. "Aspergillus fumigatus"
Optional:
    -p, --parameters       Ab initio parameters JSON file to use for gene predictors
--isolate Isolate name, e.g. Af293
--strain Strain name, e.g. FGSCA4
--name Locus tag name (assigned by NCBI?). Default: FUN_
--numbering Specify where gene numbering starts. Default: 1
--maker_gff MAKER2 GFF file. Parse results directly to EVM.
--pasa_gff PASA generated gene models. filename:weight
--other_gff Annotation pass-through to EVM. filename:weight
--rna_bam RNA-seq mapped to genome to train Augustus/GeneMark-ET
--stringtie StringTie GTF result
-w, --weights Ab-initio predictor and EVM weight. Example: augustus:2 or pasa:10
--augustus_species Augustus species config. Default: uses species name
--min_training_models Minimum number of models to train Augustus. Default: 200
--genemark_mode GeneMark mode. Default: ES [ES,ET]
--genemark_mod GeneMark ini mod file
--busco_seed_species Augustus pre-trained species to start BUSCO. Default: anidulans
    --optimize_augustus      Run 'optimize_augustus.pl' to refine training (long runtime)
--busco_db BUSCO models. Default: dikarya. `funannotate outgroups --show_buscos`
--organism Fungal-specific options. Default: fungus. [fungus,other]
--ploidy Ploidy of assembly. Default: 1
-t, --tbl2asn Assembly parameters for tbl2asn. Default: "-l paired-ends"
-d, --database Path to funannotate database. Default: $FUNANNOTATE_DB
--protein_evidence Proteins to map to genome (prot1.fa prot2.fa uniprot.fa). Default: uniprot.fa
--protein_alignments Pre-computed protein alignments in GFF3 format
--p2g_pident Exonerate percent identity. Default: 80
--p2g_diamond_db Premade diamond genome database for protein2genome mapping
--p2g_prefilter Pre-filter hits software selection. Default: diamond [tblastn]
--transcript_evidence mRNA/ESTs to align to genome (trans1.fa ests.fa trinity.fa). Default: none
--transcript_alignments Pre-computed transcript alignments in GFF3 format
--augustus_gff Pre-computed AUGUSTUS GFF3 results (must use --stopCodonExcludedFromCDS=False)
--genemark_gtf Pre-computed GeneMark GTF results
--trnascan Pre-computed tRNAscanSE results
--min_intronlen Minimum intron length. Default: 10
--max_intronlen Maximum intron length. Default: 3000
--soft_mask Softmasked length threshold for GeneMark. Default: 2000
--min_protlen Minimum protein length. Default: 50
--repeats2evm Use repeats in EVM consensus model building
--keep_evm Keep existing EVM results (for rerunning pipeline)
    --evm-partition-interval Min length between genes to make a partition. Default: 1500
--no-evm-partitions Do not split contigs into partitions
--repeat_filter Repetitive gene model filtering. Default: overlap blast [overlap,blast,none]
--keep_no_stops Keep gene models without valid stops
    --SeqCenter              Sequencing facility for NCBI tbl file. Default: CFMR
--SeqAccession Sequence accession number for NCBI tbl file. Default: 12345
--force Annotated unmasked genome
--cpus Number of CPUs to use. Default: 2
--no-progress Do not print progress to stdout for long sub jobs
--tmpdir Volume/location to write temporary files. Default: /tmp
ENV Vars: If not specified at runtime, will be loaded from your $PATH
--EVM_HOME
--AUGUSTUS_CONFIG_PATH
--GENEMARK_PATH
--BAMTOOLS_PATH
""".format(package_name, __version__)
updateHelp = """
Usage: {:} update <arguments>
version: {:}
Description: Script will run PASA mediated update of gene models. It can directly update
the annotation from an NCBI downloaded GenBank file using RNA-seq data or can be
used after funannotate predict to refine UTRs and gene model predictions. Kallisto
is used to evidence filter most likely PASA gene models. Dependencies are
hisat2, Trinity, samtools, fasta, minimap2, PASA, kallisto, bedtools.
Required:
-i, --input Funannotate folder or Genome in GenBank format (.gbk,.gbff).
or
-f, --fasta Genome in FASTA format
-g, --gff Annotation in GFF3 format
--species Species name, use quotes for binomial, e.g. "Aspergillus fumigatus"
Optional:
-o, --out Output folder name
-l, --left Left/Forward FASTQ Illumina reads (R1)
-r, --right Right/Reverse FASTQ Illumina reads (R2)
-s, --single Single ended FASTQ reads
--stranded If RNA-seq library stranded. [RF,FR,F,R,no]
--left_norm Normalized left FASTQ reads (R1)
--right_norm Normalized right FASTQ reads (R2)
--single_norm Normalized single-ended FASTQ reads
--pacbio_isoseq PacBio long-reads
--nanopore_cdna Nanopore cDNA long-reads
--nanopore_mrna Nanopore mRNA direct long-reads
--trinity Pre-computed Trinity transcripts (FASTA)
--jaccard_clip Turn on jaccard clip for dense genomes [Recommended for fungi]
--no_normalize_reads Skip read Normalization
--no_trimmomatic Skip Quality Trimming of reads
--memory RAM to use for Jellyfish. Default: 50G
-c, --coverage Depth to normalize reads. Default: 50
-m, --min_coverage Min depth for normalizing reads. Default: 5
--pasa_config PASA assembly config file, i.e. from previous PASA run
--pasa_db Database to use. Default: sqlite [mysql,sqlite]
--pasa_alignment_overlap PASA --stringent_alignment_overlap. Default: 30.0
--aligners Aligners to use with PASA: Default: minimap2 blat [gmap]
--pasa_min_pct_aligned PASA --MIN_PERCENT_ALIGNED. Default: 90
--pasa_min_avg_per_id PASA --MIN_AVG_PER_ID. Default: 95
--pasa_num_bp_splice PASA --NUM_BP_PERFECT_SPLICE_BOUNDARY. Default: 3
--max_intronlen Maximum intron length. Default: 3000
--min_protlen Minimum protein length. Default: 50
--alt_transcripts Expression threshold (percent) to keep alt transcripts. Default: 0.1 [0-1]
--p2g NCBI p2g file (if updating NCBI annotation)
-t, --tbl2asn Assembly parameters for tbl2asn. Example: "-l paired-ends"
--name Locus tag name (assigned by NCBI?). Default: use existing
--sbt NCBI Submission file
--species Species name, use quotes for binomial, e.g. "Aspergillus fumigatus"
--strain Strain name
--isolate Isolate name
    --SeqCenter              Sequencing facility for NCBI tbl file. Default: CFMR
--SeqAccession Sequence accession number for NCBI tbl file. Default: 12345
--cpus Number of CPUs to use. Default: 2
--no-progress Do not print progress to stdout for long sub jobs
ENV Vars: If not passed, will try to load from your $PATH.
--PASAHOME
--TRINITYHOME
""".format(package_name, __version__)
testHelp = """
Usage: {:} test <arguments>
version: {:}
Description: This script runs several unit tests. It will download data and run
             several different tests to determine if the installation is functioning properly. If
             you cannot download from the machine funannotate is installed on, then download
             the 7 tar.gz files from https://osf.io/bj7v4/files/ and run the script from that directory.
Arguments:
-t, --tests Test sets to run. [all,clean,mask,predict,busco,rna-seq,annotate,compare]
--cpus Number of cpus to use. Default: 2
--debug Keep output files
""".format(package_name, __version__)
fixHelp = """
Usage: {:} fix <arguments>
version: {:}
Description: Script takes a GenBank genome annotation file and an NCBI tbl file to
generate updated annotation. Script is used to fix problematic gene models
after running funannotate predict or funannotate update.
Required:
-i, --input Annotated genome in GenBank format.
-t, --tbl NCBI tbl annotation file.
-d, --drop Gene models to remove/drop from annotation. File with locus_tag 1 per line.
Optional:
-o, --out Output folder
--tbl2asn Parameters for tbl2asn. Default: "-l paired-ends"
""".format(package_name, __version__)
remoteHelp = """
Usage: {:} remote <arguments>
version: {:}
Description: Script runs remote server functional annotation for Phobius and
antiSMASH (fungi). These searches are slow, if you can setup these services
locally it will be much faster to do that. PLEASE do not abuse services!
Required:
-m, --methods Which services to run, space separated [phobius,antismash,all]
-e, --email Email address to identify yourself to services.
-i, --input Funannotate input folder.
or
-g, --genbank GenBank file (must be annotated).
-o, --out Output folder name.
--force Force query even if antiSMASH server looks busy
""".format(package_name, __version__)
setupHelp = """
Usage: {:} setup <arguments>
version: {:}
Description: Script will download/format necessary databases for funannotate.
Options:
-i, --install Download format databases. Default: all
[merops,uniprot,dbCAN,pfam,repeats,go,
mibig,interpro,busco_outgroups,gene2product]
-b, --busco_db Busco Databases to install. Default: dikarya [all,fungi,aves,etc]
-d, --database Path to funannotate database
-u, --update Check remote md5 and update if newer version found
-f, --force Force overwriting database
-w, --wget Use wget to download instead of python requests
-l, --local Use local resource JSON file instead of current on github
""".format(package_name, __version__)
iprscanHelp = """
Usage: {:} iprscan <arguments>
version: {:}
Description: This script is a wrapper for running InterProScan5 using Docker or from a
local installation. The script splits proteins into smaller chunks and then
launches several interproscan.sh "processes". It then combines the results.
Arguments:
-i, --input Funannotate folder or FASTA protein file. (Required)
-m, --method Search method to use: [local, docker] (Required)
-n, --num Number of fasta files per chunk. Default: 1000
-o, --out Output XML InterProScan5 file
--debug Keep intermediate files
--no-progress Do not print progress to stdout for long sub jobs
Docker arguments:
-c, --cpus Number of CPUs (total). Default: 12
--cpus_per_chunk Number of cpus per Docker instance. Default: 4
Local arguments:
--iprscan_path Path to interproscan.sh. Default: which(interproscan.sh)
-c, --cpus Number of InterProScan instances to run
(configure cpu/thread control in interproscan.properties file)
""".format(package_name, __version__)
annotateHelp = """
Usage: {:} annotate <arguments>
version: {:}
Description: Script functionally annotates the results from funannotate predict. It pulls
annotation from PFAM, InterPro, EggNog, UniProtKB, MEROPS, CAZyme, and GO ontology.
Required:
-i, --input Folder from funannotate predict
or
--genbank Genome in GenBank format
-o, --out Output folder for results
or
--gff Genome GFF3 annotation file
--fasta Genome in multi-fasta format
-s, --species Species name, use quotes for binomial, e.g. "Aspergillus fumigatus"
-o, --out Output folder for results
Optional:
--sbt NCBI submission template file. (Recommended)
-a, --annotations Custom annotations (3 column tsv file)
-m, --mito-pass-thru Mitochondrial genome/contigs. append with :mcode
--eggnog Eggnog-mapper annotations file (if NOT installed)
--antismash antiSMASH secondary metabolism results (GBK file from output)
--iprscan InterProScan5 XML file
--phobius Phobius pre-computed results (if phobius NOT installed)
--isolate Isolate name
--strain Strain name
--rename Rename GFF gene models with locus_tag from NCBI.
--fix Gene/Product names fixed (TSV: GeneID\tName\tProduct)
--remove Gene/Product names to remove (TSV: Gene\tProduct)
--busco_db BUSCO models. Default: dikarya
-t, --tbl2asn Additional parameters for tbl2asn. Default: "-l paired-ends"
-d, --database Path to funannotate database. Default: $FUNANNOTATE_DB
--force Force over-write of output folder
--cpus Number of CPUs to use. Default: 2
--tmpdir Volume/location to write temporary files. Default: /tmp
--no-progress Do not print progress to stdout for long sub jobs
""".format(package_name, __version__)
compareHelp = """
Usage: {:} compare <arguments>
version: {:}
Description: Script does light-weight comparative genomics between funannotated genomes. Output
is graphs, phylogeny, CSV files, etc --> visualized in web-browser.
Required:
-i, --input List of funannotate genome folders or GBK files
Optional:
-o, --out Output folder name. Default: funannotate_compare
-d, --database Path to funannotate database. Default: $FUNANNOTATE_DB
--cpus Number of CPUs to use. Default: 2
--run_dnds Calculate dN/dS ratio on all orthologs. [estimate,full]
--go_fdr P-value for FDR GO-enrichment. Default: 0.05
--heatmap_stdev Cut-off for heatmap. Default: 1.0
--num_orthos Number of Single-copy orthologs to use for ML. Default: 500
    --bootstrap         Number of bootstrap replicates to run with RAxML. Default: 100
--outgroup Name of species to use for ML outgroup. Default: no outgroup
--proteinortho Proteinortho POFF results. in TSV format.
    --ml_method         Maximum Likelihood method. Default: iqtree [raxml,iqtree]
--no-progress Do not print progress to stdout for long sub jobs
""".format(package_name, __version__)
outgroupHelp = """
Usage: {:} outgroups <arguments>
version: {:}
Description: Managing the outgroups folder for funannotate compare
Arguments:
-i, --input Proteome multi-fasta file. Required.
-s, --species Species name for adding a species. Required.
    -b, --busco_db      BUSCO db to use. Default: dikarya
-c, --cpus Number of CPUs to use for BUSCO search.
-d, --database Path to funannotate database. Default: $FUNANNOTATE_DB
""".format(package_name, __version__)
utilHelp = """
Usage: {:} util <arguments>
version: {:}
Commands:
stats Generate assembly and annotation stats
contrast Compare annotations to reference (GFF3 or GBK annotations)
tbl2gbk Convert TBL format to GenBank format
gbk2parts Convert GBK file to individual components
gff2prot Convert GFF3 + FASTA files to protein FASTA
gff2tbl Convert GFF3 format to NCBI annotation table (tbl)
bam2gff3 Convert BAM coord-sorted transcript alignments to GFF3
prot2genome Map proteins to genome generating GFF3 protein alignments
stringtie2gff3 Convert GTF (stringTIE) to GFF3 format
quarry2gff3 Convert CodingQuarry output to proper GFF3 format
gff-rename Sort GFF3 file and rename gene models
""".format(package_name, __version__)
statsHelp = """
Usage: {:} util stats <arguments>
version: {:}
Description: Generate JSON file with genome assembly and annotation stats.
Arguments:
-f, --fasta Genome FASTA file (Required)
-o, --out Output file (JSON format)
-g, --gff3 Genome Annotation (GFF3 format)
-t, --tbl Genome Annotation (NCBI TBL format)
--transcript_alignments Transcript alignments (GFF3 format)
--protein_alignments Protein alignments (GFF3 format)
""".format(package_name, __version__)
gff2tblHelp = """
Usage: {:} util gff2tbl <arguments>
version: {:}
Description: Convert GFF3 file into NCBI tbl format. Tbl output to stdout.
Arguments:
-g, --gff3 Reference Annotation. GFF3 format
-f, --fasta Genome FASTA file.
""".format(package_name, __version__)
prot2genomeHelp = """
Usage: {:} util prot2genome <arguments>
version: {:}
Description: Map proteins to genome using exonerate. Output is EVM compatible GFF3 file.
Arguments: -g, --genome Genome FASTA format (Required)
-p, --proteins Proteins FASTA format (Required)
-o, --out GFF3 output file (Required)
-f, --filter Pre-filtering method. Default: diamond [diamond,tblastn]
-t, --tblastn_out Output to save tblastn results. Default: off
--tblastn Use existing tblastn results
--ploidy Ploidy of assembly. Default: 1
--maxintron Max intron length. Default: 3000
--cpus Number of cpus to use. Default: 2
--EVM_HOME Location of Evidence Modeler home directory. Default: $EVM_HOME
--tmpdir Volume/location to write temporary files. Default: /tmp
--logfile Logfile output file
""".format(package_name, __version__)
gff2protHelp = """
Usage: {:} util gff2prot <arguments>
version: {:}
Description: Convert GFF3 file and genome FASTA to protein sequences. FASTA output to stdout.
Arguments: -g, --gff3 Reference Annotation. GFF3 format
-f, --fasta Genome FASTA file.
    --no_stop          Don't print stop codons
""".format(package_name, __version__)
gbk2partsHelp = """
Usage: {:} util gbk2parts <arguments>
version: {:}
Description: Convert GenBank file to its individual components (parts) tbl, protein
FASTA, transcript FASTA, and contig/scaffold FASTA.
Arguments: -g, --gbk Input Genome in GenBank format
-o, --output Output basename
""".format(package_name, __version__)
contrastHelp = """
Usage: {:} util contrast <arguments>
version: {:}
Description: Compare/contrast annotations to reference. Annotations in either GBK or GFF3 format.
Arguments: -r, --reference Reference Annotation. GFF3 or GBK format
-f, --fasta Genome FASTA. Required if GFF3 used
-q, --query Annotation query. GFF3 or GBK format
-o, --output Output basename
-c, --calculate_pident Measure protein percent identity between query and reference
""".format(package_name, __version__)
tbl2gbkHelp = """
Usage: {:} util tbl2gbk <arguments>
version: {:}
Description: Convert NCBI TBL annotations + Genome FASTA to GenBank format.
Required: -i, --tbl Annotation in NCBI tbl format
-f, --fasta Genome FASTA file.
-s, --species Species name, use quotes for binomial, e.g. "Aspergillus fumigatus"
Optional:
--isolate Isolate name
--strain Strain name
--sbt NCBI Submission Template file
-t, --tbl2asn Assembly parameters for tbl2asn. Example: "-l paired-ends"
-o, --output Output basename
""".format(package_name, __version__)
bam2gff3Help = """
Usage: {:} util bam2gff3 <arguments>
version: {:}
Description: Convert BAM coord-sorted transcript alignments to GFF3 format.
Arguments: -i, --bam BAM file (coord-sorted)
-o, --output GFF3 output file
""".format(package_name, __version__)
stringtieHelp = """
Usage: {:} util stringtie2gff3 <arguments>
version: {:}
Description: Convert StringTIE GTF format to GFF3 funannotate compatible format. Output
to stdout.
Arguments: -i, --input GTF file from stringTIE
""".format(package_name, __version__)
quarryHelp = """
Usage: {:} util quarry2gff3 <arguments>
version: {:}
Description: Convert CodingQuarry output GFF to proper GFF3 format. Output to stdout.
Arguments: -i, --input CodingQuarry output GFF file. (PredictedPass.gff3)
""".format(package_name, __version__)
gffrenameHelp = """
Usage: {:} util gff-rename <arguments>
version: {:}
Description: Sort GFF3 file by contigs and rename gene models.
Arguments: -g, --gff3 Reference Annotation. GFF3 format
-f, --fasta Genome FASTA file.
-o, --out Output GFF3 file
-l, --locus_tag Locus tag to use. Default: FUN
-n, --numbering Start number for genes. Default: 1
""".format(package_name, __version__)
# Add subcmds into dictionary
info = {'clean': {'cmd': 'clean', 'append': None, 'help': cleanHelp, 'dir': '.'},
'sort': {'cmd': 'sort', 'append': None, 'help': sortHelp, 'dir': '.'},
'mask': {'cmd': 'mask', 'append': None, 'help': maskHelp, 'dir': '.'},
'train': {'cmd': 'train', 'append': None, 'help': trainHelp, 'dir': '.'},
'predict': {'cmd': 'predict', 'append': None, 'help': predictHelp, 'dir': '.'},
'update': {'cmd': 'update', 'append': None, 'help': updateHelp, 'dir': '.'},
'fix': {'cmd': 'fix', 'append': None, 'help': fixHelp, 'dir': '.'},
'remote': {'cmd': 'remote', 'append': None, 'help': remoteHelp, 'dir': '.'},
'check': {'cmd': 'check', 'append': None, 'help': '', 'dir': '.'},
'database': {'cmd': 'database', 'append': None, 'help': '', 'dir': '.'},
'setup': {'cmd': 'setupDB', 'append': None, 'help': setupHelp, 'dir': '.'},
'annotate': {'cmd': 'annotate', 'append': None, 'help': annotateHelp, 'dir': '.'},
'outgroups': {'cmd': 'outgroups', 'append': None, 'help': outgroupHelp, 'dir': '.'},
'compare': {'cmd': 'compare', 'append': None, 'help': compareHelp, 'dir': '.'},
'iprscan': {'cmd': 'iprscan', 'append': None, 'help': iprscanHelp, 'dir': '.'},
'species': {'cmd': 'species', 'append': None, 'help': '', 'dir': '.'},
'util': {'cmd': None, 'append': None, 'help': utilHelp, 'dir': '.'},
'stats': {'cmd': 'stats', 'append': None, 'help': statsHelp, 'dir': 'utilities'},
'gff2tbl': {'cmd': 'gff2tbl', 'append': None, 'help': gff2tblHelp, 'dir': 'utilities'},
'gff2prot': {'cmd': 'gff2prot', 'append': None, 'help': gff2protHelp, 'dir': 'utilities'},
'gbk2parts': {'cmd': 'gbk2parts', 'append': None, 'help': gbk2partsHelp, 'dir': 'utilities'},
'contrast': {'cmd': 'contrast', 'append': None, 'help': contrastHelp, 'dir': 'utilities'},
'tbl2gbk': {'cmd': 'tbl2gbk', 'append': None, 'help': tbl2gbkHelp, 'dir': 'utilities'},
'bam2gff3': {'cmd': 'bam2gff3', 'append': None, 'help': bam2gff3Help, 'dir': 'utilities'},
'stringtie2gff3': {'cmd': 'stringtie2gff3', 'append': None, 'help': stringtieHelp, 'dir': 'utilities'},
'quarry2gff3': {'cmd': 'quarry2gff3', 'append': None, 'help': quarryHelp, 'dir': 'utilities'},
'prot2genome': {'cmd': 'funannotate-p2g.py', 'append': None, 'help': prot2genomeHelp, 'dir': 'aux_scripts', 'subprocess': True},
'test': {'cmd': 'test', 'append': None, 'help': testHelp, 'dir': '.'},
'gff-rename': {'cmd': 'gff_reformat', 'append': None, 'help': gffrenameHelp, 'dir': 'utilities'}
}
# Note: each record's 'cmd' names the script to import, e.g. 'clean' -> package_name/clean.py
# the append key is to pass a command silently to the script
# the help key is to reference the above help menu strings
# dir key is to deal with any nested folder structure
# if subprocess present, then run subprocess module so multithreading is functional
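# --- Added illustration (not part of the original file) ----------------------
# Following the notes above, a hypothetical new subcommand `funannotate hello`
# would only need one more record here, plus a funannotate/hello.py module
# exposing main(arguments):
#   info['hello'] = {'cmd': 'hello', 'append': None,
#                    'help': 'Usage: funannotate hello', 'dir': '.'}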
# main function: will display help menu for each subcommand and import it and run main() for that script
def main():
# start here
cmdName = None
if len(sys.argv) < 2:
print(default_help)
sys.exit(1)
elif sys.argv[1] == 'version' or sys.argv[1] == '--version' or sys.argv[1] == '-version' or sys.argv[1] == '-v':
print("{:} v{:}".format(package_name, __version__))
sys.exit(1)
elif sys.argv[1] == 'util':
try:
cmdName = sys.argv[2]
except IndexError:
print(info['util']['help'])
sys.exit(1)
try:
arguments = sys.argv[3:]
except IndexError:
print(info[cmdName]['help'])
sys.exit(1)
elif sys.argv[1] in info:
cmdName = sys.argv[1]
arguments = sys.argv[2:]
else:
print(default_help)
sys.exit(1)
if cmdName and cmdName in info:
if len(arguments) > 0 or cmdName == 'check' or cmdName == 'database' or cmdName == 'species':
if '-h' in arguments or '--help' in arguments:
print(info[cmdName]['help'])
sys.exit(1)
if info[cmdName]['append']:
arguments.append(info[cmdName]['append'])
if 'subprocess' in info[cmdName]:
# run as subprocess to be able to use multiprocessing
cmd = [sys.executable, os.path.join(os.path.dirname(
funannotate.__file__), info[cmdName]['dir'], info[cmdName]['cmd'])]
cmd += arguments
subprocess.call(cmd)
else:
if info[cmdName]['dir'] != '.':
mod = importlib.import_module('{:}.{:}.{:}'.format(
package_name, info[cmdName]['dir'], info[cmdName]['cmd']))
else:
mod = importlib.import_module(
'{:}.{:}'.format(package_name, info[cmdName]['cmd']))
mod.main(arguments)
else:
print(info[cmdName]['help'])
sys.exit(1)
else:
print(default_help)
sys.exit(1)
if __name__ == "__main__":
main()
|
nextgenusfs/funannotate
|
funannotate/funannotate.py
|
Python
|
bsd-2-clause
| 34,108
|
[
"BLAST"
] |
0062c7627e1a829aaf025f1dfe6470cedc4a3e998693714dc08ef55f2e77990a
|
# encoding: utf-8
'''
OCTANT is a toolkit for working with ocean models on a C-Grid
Octant (Ocean C-grid model seTup and Analysis Toolkit) is based on the
python/numpy/matplotlib scientific python suite. NetCDF I/O is based on the
NetCDF4-python package. The toolkit contains general modeling tools for dealing
with arrays, diagnosing standard properties, curvilinear grid generation, and
interpolation. Specific tools are also included for ROMS and GETM.
External tools included within the source tree for convenience:
python-gsw : Python implementation of the Thermodynamic Equation Of Seawater - 2010 (TEOS-10)
Available at https://code.google.com/p/python-gsw/
'''
import depths
import extern
import grid
import ocean
import tools
__authors__ = ['Robert Hetland <hetland@tamu.edu>',
'Richard Hofmeister <richard.hofmeister@io-warnemuende.de>']
__version__ = '0.2.0'
|
kthyng/octant
|
octant/__init__.py
|
Python
|
bsd-3-clause
| 937
|
[
"NetCDF"
] |
77a03c1742ffc5c67df2b9b70c1a88b927b6136cd9fd3039072033abd2383072
|
'''
The settings for OSMC are handled by the OSMC Settings Addon (OSA).
In order to more easily accommodate future changes and enhancements, each OSMC settings bundle (module) is a separate addon.
The module can take the form of an xbmc service, an xbmc script, or an xbmc module, but it must be installed into the users'
/usr/share/kodi/addons folder.
The OSA collects the modules it can find, loads their icons, and launches them individually when the user clicks on an icon.
The modules can either have their own GUI, or they can leverage the settings interface provided by XBMC. If the module uses the XBMC
settings interface, then all of its settings must be stored in the addon's settings.xml. This is true even if the source of record
is a separate config file.
An example of this type is the Pi settings module; the actual settings are read from the config.txt, then written to the
settings.xml for display in kodi, then finally all changes are written back to the config.txt. The Pi module detects user
changes to the settings by identifying the differences between a newly read settings.xml and the values from a previously
read settings.xml.
The values of the settings displayed by this module are only ever populated by the items in the settings.xml. [Note: meaning that
if the settings data is retrieved from a different source, it will need to be populated in the module before it is displayed
to the user.]
Each module must have in its folder, a sub-folder called 'resources/osmc'. Within that folder must reside this script (OSMCSetting.py),
and the icons to be used in the OSG to represent the module (FX_Icon.png and FO_Icon.png for unfocused and focused images
respectively).
When the OSA creates the OSMC Settings GUI (OSG), these modules are identified and the OSMCSetting.py script in each of them
is imported. This script provides the mechanism for the OSG to apply the changes required from a change in a setting.
The OSMCSetting.py file must have a class called OSMCSettingClass as shown below.
The key variables in this class are:
addonid : The id for the addon. This must be the id declared in the addons addon.xml.
description : The description for the module, shown in the OSA
reboot_required : A boolean to declare if the OS needs to be rebooted. If a change in a specific setting
                          requires an OS reboot to take effect, this is the flag that will let the OSG know.
setting_data_method : This dictionary contains:
- the name of all settings in the module
- the current value of those settings
- [optional] apply - a method to call for each setting when the value changes
- [optional] translate - a method to call to translate the data before adding it to the
setting_data_method dict. The translate method must have a 'reverse' argument which
when set to True, reverses the transformation.
The key methods of this class are:
open_settings_window : This is called by the OSG when the icon is clicked. This will open the settings window.
Usually this would be __addon__.OpenSettings(), but it could be any other script.
This allows the creation of action buttons in the GUI, as well as allowing developers
to script and skin their own user interfaces.
[optional] first_method : called before any individual settings changes are applied.
[optional] final_method : called after all the individual settings changes are done.
[optional] boot_method : called when the OSA is first started.
apply_settings : This is called by the OSG to apply the changes to any settings that have changed.
It calls the first setting method, if it exists.
Then it calls the method listed in setting_data_method for each setting. Then it
calls the final method, again, if it exists.
populate_setting_data_method : This method is used to populate the setting_data_method with the current settings data.
Usually this will be from the addons setting data stored in settings.xml and retrieved
using the settings_retriever_xml method.
Sometimes the user is able to edit external setting files (such as the Pi's config.txt).
If the developer wants to use this source in place of the data stored in the
settings.xml, then they should edit this method to include a mechanism to retrieve and
parse that external data. As the window shown in the OSG populates only with data from
the settings.xml, the developer should ensure that the external data is loaded into that
xml before the settings window is opened.
settings_retriever_xml : This method is used to retrieve all the data for the settings listed in the
setting_data_method from the addons settings.xml.
The developer is free to create any methods they see fit, but the ones listed above are specifically used by the OSA.
Specifically, the apply_settings method is called when the OSA closes.
Settings changes are applied when the OSG is called to close. But this behaviour can be changed to occur when the addon
settings window closes by editing the open_settings_window. The method apply_settings will still be called by OSA, so
keep that in mind.
'''
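# --- Added illustration (not part of the original module) --------------------
# The docstring above requires per-setting 'translate' callables that accept a
# reverse argument. A minimal hypothetical example, mapping between the
# config.txt convention ('1'/'0') and the settings.xml one ('true'/'false'):
def _translate_bool_example(data, reverse=False):
    if not reverse:
        return 'true' if str(data) == '1' else 'false'  # config.txt -> settings.xml
    return '1' if data == 'true' else '0'               # settings.xml -> config.txt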
# XBMC Modules
import xbmc
import xbmcaddon
import xbmcgui
import subprocess
import sys
import os
import threading
addonid = "script.module.osmcsetting.pi"
__addon__ = xbmcaddon.Addon(addonid)
DIALOG = xbmcgui.Dialog()
# Custom modules
sys.path.append(xbmc.translatePath(os.path.join(xbmcaddon.Addon(addonid).getAddonInfo('path'), 'resources','lib')))
# OSMC SETTING Modules
import config_tools as ct
from CompLogger import comprehensive_logger as clog
def lang(id):
san = __addon__.getLocalizedString(id).encode( 'utf-8', 'ignore' )
return san
def log(message):
xbmc.log('OSMC PI ' + str(message), level=xbmc.LOGDEBUG)
class OSMCSettingClass(threading.Thread):
'''
    An OSMCSettingClass is a way to expose the settings of an OSMC settings module, and make them available to the
OSMC Settings Addon (OSA).
'''
def __init__(self):
'''
The pi_settings_dict contains all the settings in the settings group, as well as the methods to call when a
setting_value has changed and the existing setting_value.
'''
super(OSMCSettingClass, self).__init__()
self.addonid = addonid
self.me = xbmcaddon.Addon(self.addonid)
# this is what is displayed in the main settings gui
self.shortname = 'Pi Config'
self.description = """This is the text that is shown on the OSG. [CR][CR]It should describe:[CR] - what the settings module is for,[CR] - the settings it controls,[CR] - and anything else you want, I suppose."""
self.description = """The Raspberry Pi doesn't have a conventional BIOS. System configuration parameters are stored in a "config.txt" file. For more detail, visit http://elinux.org/RPiconfig[CR]
This settings module allows you to edit your config.txt from within OSMC using a graphical interface.
The module includes:
- display rotation
- hdmi_safe & hdmi_boost
- hdmi_group & hdmi_mode
- function to save edid to file
- sdtv_mode & sdtv_aspect
- GPU memory split
- MPG2 & WVC1 licences (including status)
- your Pi's serial number
Finally, there is a Config Editor that will allow you to quickly add, edit, or delete lines in your config.txt.
Overclock settings are set using the Pi Overclock module."""
self.not_going_to_config = [ 'store_hdmi_to_file',
'gpu_mem',
]
self.values_set_elsewhere = [ 'hdmi_edid_file',
'hdmi_force_hotplug',
]
# The setting_value in this dict is what is used in the settings.xml. They need to be translated from any external source,
# like the config.txt, and then translated again for writing back.
# I have added a translate method to translate the data received from an external source before adding it to the setting dict
# I have also added a default setting here, because the settings stored in the settings.xml cannot be relied upon,
# because if the user adds a setting, then deletes it offline, the settings.xml will add it back in when the addon exits.
# A default value of configignore means that the setting should never be passed to the config parser.
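# Each entry follows this shape (illustrative):
# 'setting_name': {'setting_value': <current value>,
#                  'default': <fallback used when the key is absent from config.txt>,
#                  'translate': <optional two-way translate method>}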
self.pi_settings_dict = {
'hdmi_safe': {'setting_value' : '',
'default': 'false',
'translate': self.translate_bool
},
'hdmi_ignore_edid': {'setting_value' : '',
'default': 'false',
'translate': self.translate_bool
},
'store_hdmi_to_file': {'setting_value' : '',
'default': 'false',
'translate': self.translate_store_hdmi,
},
'hdmi_edid_file': {'setting_value' : '',
'default': 'false',
'translate': self.translate_bool
},
'hdmi_force_hotplug': {'setting_value' : '',
'default': 'false',
'translate': self.translate_bool,
},
'hdmi_ignore_cec': {'setting_value' : '',
'default': 'false',
'translate': self.translate_bool
},
'hdmi_ignore_cec_init': {'setting_value' : '',
'default': 'true',
'translate': self.translate_bool
},
'hdmi_boost': {'setting_value' : '',
'default': '0',
},
'hdmi_group': {'setting_value' : '',
'default': '0',
},
'hdmi_mode': {'setting_value' : '',
'default': '0',
},
'hdmi_pixel_encoding': {'setting_value' : '',
'default': '0',
},
'display_rotate': {'setting_value' : '',
'default': '0',
},
'sdtv_mode': {'setting_value' : '',
'default': '0',
},
'sdtv_aspect': {'setting_value' : '',
'default': '0',
'translate': self.translate_sdtv_aspect
},
'gpu_mem': {'setting_value' : '',
'default': 'false',
'translate': self.translate_gpu_mem
},
'gpu_mem_256': {'setting_value' : '',
'default': '112',
},
'gpu_mem_512': {'setting_value' : '',
'default': '144',
},
'gpu_mem_1024': {'setting_value' : '',
'default': '192',
},
'decode_MPG2': {'setting_value' : '',
'default': '',
},
'decode_WVC1': {'setting_value' : '',
'default': '',
},
'max_usb_current': {'setting_value' : '',
'default': 'false',
'translate': self.translate_bool,
},
'dtoverlay': {'setting_value' : '',
'default': '',
'translate': self.translate_dtoverlay,
},
'device_tree': {'setting_value' : '',
'default': '',
'translate': self.translate_device_tree,
},
# 'other_settings_string': {'setting_value' : '',
# 'default': '',
# 'translate': self.translate_other_string
# },
}
# list to hold the keys for the other string settings
self.unknown_setting_keys = []
# list to hold the keys for the settings that need to be removed from the config.txt
self.remove_list = []
# the location of the config file; falls back to a local test path if the Pi-specific calls below fail
try:
self.test_config = '/boot/config.txt'
# populate the settings data in the pi_settings_dict
# self.populate_pi_settings_dict()
# a flag to determine whether a setting change requires a reboot to take effect
self.reboot_required = False
# grab the Pi serial number and check to see whether the codec licences are enabled
mpg = subprocess.check_output(["/opt/vc/bin/vcgencmd", "codec_enabled", "MPG2"])
wvc = subprocess.check_output(["/opt/vc/bin/vcgencmd", "codec_enabled", "WVC1"])
serial_raw = subprocess.check_output(["cat", "/proc/cpuinfo"])
# grab just the serial number
serial = serial_raw[serial_raw.index('Serial') + len('Serial'):].replace('\n','').replace(':','').replace(' ','').replace('\t','')
# load the values into the settings gui
__addon__.setSetting('codec_check', mpg.replace('\n','') + ', ' + wvc.replace('\n',''))
__addon__.setSetting('serial', serial)
except:
self.test_config = '/home/kubkev/Documents/config.txt'
log('START')
for x, k in self.pi_settings_dict.iteritems():
log("%s = %s" % (x, k.get('setting_value','no setting value')))
@clog(log)
def populate_pi_settings_dict(self):
'''
Populates the setting_value in the pi_settings_dict.
'''
# # this is the method to use if you are populating the dict from the settings.xml
# latest_settings = self.settings_retriever_xml()
# but I am going to set up my own process in addition to the xml one, I will be reading some
# settings from the config.txt, and getting the rest from the settings.xml
self.config_settings = ct.read_config(self.test_config)
log('Config settings received from the parser: %s' % self.config_settings)
# cycle through the pi_settings_dict dict, and populate with the settings values
for key in self.pi_settings_dict.keys():
# if the value of the setting is to be assigned by another setting, then just ignore it here
# note: this will mean that the other setting will have to populate both the settings_dict and the settings.xml
if key in self.values_set_elsewhere:
continue
# grab the translate method (if there is one)
translate_method = self.pi_settings_dict.get(key,{}).get('translate',{})
# if the key is in the config.txt
if key in self.config_settings:
setting_value = self.config_settings[key]
else:
# if the key ISNT in the config.txt then set the value from the default stored in
# the pi_settings_dict dict
setting_value = self.pi_settings_dict[key].get('default','')
# get the setting value, translate it if needed
if translate_method:
setting_value = translate_method(setting_value)
# if default is setting_value, then the setting has been set in the translation so ignore it
if setting_value not in self.not_going_to_config:
self.pi_settings_dict[key]['setting_value'] = setting_value
# also set the value in the settings.xml
self.me.setSetting(key, str(setting_value))
@clog(log, nowait=True)
def run(self):
'''
The method determines what happens when the item is clicked in the settings GUI.
Usually this would be __addon__.OpenSettings(), but it could be any other script.
This allows the creation of action buttons in the GUI, as well as allowing developers to script and skin their
own user interfaces.
'''
# read the config.txt file every time the settings are opened. This is unavoidable because it is possible for
# the user to have made manual changes to the config.txt while OSG is active.
self.populate_pi_settings_dict()
for x, k in self.pi_settings_dict.iteritems():
log("%s = %s" % (x, k.get('setting_value','no setting value')))
self.me.openSettings()
# code placed here will run when the modules settings window is closed
self.apply_permitted = True
self.apply_settings()
self.apply_permitted = False
# apply_permitted will prevent the apply function being called by anything other than this method.
# This stops it from being called twice, once when the settings are closed and again when the OSG is closed
''' FOR TESTING ONLY '''
log('END')
for x, k in self.pi_settings_dict.iteritems():
log("%s = %s" % (x, k.get('setting_value','no setting value')))
@clog(log)
def apply_settings(self):
'''
This method will apply all of the settings. It calls the first_method, if it exists.
Then it calls the method listed in pi_settings_dict for each setting. Then it calls the
final_method, again, if it exists.
'''
# this prevents the method running when called by the OSG. Rather, the method is only being run when the settings
# window is closed.
if not self.apply_permitted:
return 'apply not permitted'
# retrieve the current settings from the settings.xml (this is where the user has made changes)
new_settings = self.settings_retriever_xml()
# dict to hold the keys of the changed settings
self.changed_settings = {}
# call the first method, if there is one
self.first_method()
# apply the individual settings changes
for k, v in self.pi_settings_dict.iteritems():
# if the value of the setting is set elsewhere, then the adding of the settings to changed settings will also
# have to be handled by the apply method of that other setting.
if k in self.values_set_elsewhere:
continue
# get the application method and stored setting value from the dictionary
method = v.get('apply', False)
value = v.get('setting_value', '')
# if the new setting is different to the stored setting then change the dict and run the 'apply' method
if new_settings[k] != value:
# change stored setting_value to the new value
self.pi_settings_dict[k]['setting_value'] = new_settings[k]
# add it to the changed settings dict
self.changed_settings[k] = new_settings[k]
# if a specific apply method exists for the setting, then call that
try:
method(new_settings[k])
except:
pass
# call the final method if there is one
self.final_method()
ok = DIALOG.notification(lang(32095), lang(32096))
def settings_retriever_xml(self):
'''
Reads the stored settings (in settings.xml) and returns a dictionary with the setting_name: setting_value. This
method cannot be overwritten.
'''
latest_settings = {}
addon = xbmcaddon.Addon(self.addonid)
for key in self.pi_settings_dict.keys():
latest_settings[key] = addon.getSetting(key)
return latest_settings
##############################################################################################################################
# #
def first_method(self):
'''
The method to call before all the other setting methods are called.
For example, this could be a call to stop a service. The final method could then restart the service again.
This can be used to apply the setting changes.
'''
@clog(log)
def final_method(self):
'''
The method to call after all the other setting methods have been called.
For example, in the case of the Raspberry Pi's settings module, the final writing to the config.txt can be delayed
until all the settings have been updated in the pi_settings_dict.
'''
''' This method will write the changed settings to the config.txt file. '''
# translate the changed settings into values that can be used in the config.txt
self.translated_changed_settings = {}
for k, v in self.changed_settings.iteritems():
# translate the setting if needed
# in some cases this translation can be used to set the values for other settings and have them added
# to the translated_changed_settings dict
translate_method = self.pi_settings_dict.get(k,{}).get('translate', False)
if translate_method:
value = translate_method(v, reverse=True)
else:
value = v #.get('setting_value','')
# if the setting is not to be added to the config.txt, then don't add it to the self.translated_changed_settings dict
if k in self.not_going_to_config:
continue
# # if this is the other_settings_string then break up into the individual settings
# elif k == 'other_settings_string':
# for key, svalue in value.iteritems():
# self.translated_changed_settings[key] = svalue
# add the setting to the translated settings dict, this is now ready to send to the config.txt writer
else:
self.translated_changed_settings[k] = value
# transfer the remove list into the changes dict
# this will make sure that existing settings in the config.txt that need to be removed, will be removed
for remove_key in self.remove_list:
self.translated_changed_settings[remove_key] = 'remove'
# reset the remove list
self.remove_list = []
# start_x=1 added by default to every config.txt
# popcornmix: I would remove start_file=start_x.elf and fixup_file=fixup_x.dat and use the shortcut start_x=1
self.translated_changed_settings['start_x'] = 1
# write the settings to the config.txt
ct.write_config(self.test_config, self.translated_changed_settings)
def boot_method(self):
'''
The method to call when the OSA is first activated (on reboot)
'''
pass
# #
##############################################################################################################################
##############################################################################################################################
# #
'''
Methods beyond this point are for specific settings.
'''
def translate_sdtv_aspect(self, data, reverse=False):
''' Translates sdtv_aspect between the 1-based config.txt value and the 0-based Kodi list index '''
if not reverse:
if data:
return int(data) - 1
else:
return 0
else:
return int(data) + 1
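# e.g. config.txt "sdtv_aspect=1" maps to Kodi index 0, and Kodi index 0 maps back to config value 1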
def translate_bool(self, data, reverse=False):
''' Converts between config.txt style flags ('1'/'0') and the 'true'/'false' strings used in settings.xml '''
if not reverse:
if data in [1, '1']:
return 'true'
else:
return 'false'
else:
if data in [1, '1', 'true']:
return '1'
else:
return '0'
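# e.g. translate_bool('1') -> 'true'; translate_bool('true', reverse=True) -> '1'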
# def translate_other_string(self, data='', reverse=False):
# '''
# Method to collate all the unknown settings from the config.txt into a single string, delimited by |:-:|.
# The reverse function returns a dictionary with {setting_name: setting_value, ... }
# '''
# if not reverse:
# config_keys = set(self.config_settings.keys())
# xml_keys = set(self.pi_settings_dict.keys())
# self.unknown_setting_keys = list(config_keys.difference(xml_keys))
# unknown_settings = [str(x) + '=' + str(self.config_settings[x]) for x in self.unknown_setting_keys]
# return "|:-:|".join(unknown_settings)
# else:
# no_space_data = data.replace(" ",'')
# setting_pairs = no_space_data.split("|:-:|")
# other_settings = []
# for setting in setting_pairs:
# set_list = setting.split('=')
# if len(set_list) == 2:
# other_settings.append(tuple(set_list))
# new_unknown_settings = dict(other_settings)
# # construct a list of keys that are in self.unknown_setting_keys but not in new_unknown_settings_keys
# new_unknown_settings_keys = set(new_unknown_settings.keys())
# unknown_settings_keys = set(self.unknown_setting_keys)
# removals = list(unknown_settings_keys.difference(new_unknown_settings_keys))
# # setup the removed unknown settings to be removed from the config.txt
# for rem in removals:
# new_unknown_settings[rem] = 'remove'
# # change the self.unknown_setting_keys list to the current list of unknown keys
# self.unknown_setting_keys = list(new_unknown_settings_keys)
# return new_unknown_settings
@clog(log)
def translate_device_tree(self, data, reverse=False):
'''
Checks for the presence of an empty device_tree setting, which disables device tree overlays.
'''
datalist = data.split('\n')
if not reverse:
if 'device_tree' in self.config_settings and '' in datalist:
self.me.setSetting('suppress_dtoverlay', 'true')
else:
self.me.setSetting('suppress_dtoverlay', 'false')
else:
if self.me.getSetting('suppress_dtoverlay') == 'true':
return ['']
else:
return ['[remove]']
@clog(log)
def translate_dtoverlay(self, data, reverse=False):
'''
Parses the dtoverlay list. There can be multiple dtoverlays, so the config_tool puts them all into
a single list.
'''
# setting: the set of settings in the group
# value: the value to assign to the kodi displayed settings if the overlay is active
overlay_settings = {
'hifiberry-dac-overlay' : {'setting': 'soundcard_dac', 'value': '1'},
'hifiberry-dacplus-overlay' : {'setting': 'soundcard_dac', 'value': '2'},
'hifiberry-digi-overlay' : {'setting': 'soundcard_dac', 'value': '3'},
'iqaudio-dac-overlay' : {'setting': 'soundcard_dac', 'value': '4'},
'iqaudio-dacplus-overlay' : {'setting': 'soundcard_dac', 'value': '5'},
'w1-gpio-overlay' : {'setting': 'w1gpio', 'value': '1'},
'w1-gpio-pullup-overlay' : {'setting': 'w1gpio', 'value': '2'},
'lirc-rpi-overlay' : {'setting': 'lirc-rpi-overlay', 'value': 'true'},
'spi-bcm2835-overlay' : {'setting': 'spi-bcm2835-overlay', 'value': 'true'},
}
dac_all = ['hifiberry-dac-overlay', 'hifiberry-dacplus-overlay','hifiberry-digi-overlay', 'iqaudio-dac-overlay','iqaudio-dacplus-overlay']
w1gpio = ['w1-gpio-overlay', 'w1-gpio-pullup-overlay']
datalist = data.split('\n')
log('datalist = %s' % datalist)
if not reverse:
# do this when reading the items into Kodi
self.me.setSetting('lirc-rpi-overlay', 'false')
self.me.setSetting('spi-bcm2835-overlay', 'false')
self.me.setSetting('soundcard_dac', '0')
self.me.setSetting('w1gpio', '0')
# dtoverlay=lirc-rpi:gpio_out_pin=19,gpio_in_pin=23,gpio_in_pull=down
for overlay in datalist:
log('individual overlay=%s' % overlay)
# lirc has to be handled individually as it may include extra parameters
if 'lirc-rpi' in overlay:
self.me.setSetting('lirc-rpi-overlay', 'true')
sub_params = ['gpio_out_pin', 'gpio_in_pin', 'gpio_in_pull']
if ':' in overlay:
params = [x.split('=') for x in overlay[overlay.index(':')+1:].split(',')]
log('lirc-rpi params=%s' % params)
for param in params:
for sub in sub_params:
if param[0] == sub:
self.me.setSetting(sub, param[1].strip())
continue
if overlay not in overlay_settings:
log('%s not in overlay_settings' % overlay)
continue
else:
ovl = overlay_settings[overlay]
self.me.setSetting(ovl['setting'], ovl['value'])
log('overlay: %s, setting: %s, value: %s' % (overlay, ovl['setting'], ovl['value']))
else:
# do this when writing the Kodi settings back to config.txt
new_dtoverlay = []
pos = self.me.getSetting('soundcard_dac')
if pos == '0':
new_dtoverlay.extend([x + '[remove]' for x in dac_all])
else:
soundcard = dac_all[int(pos)-1]
# add the soundcard overlay
new_dtoverlay.append(soundcard)
#remove the unneeded entries
new_dtoverlay.extend([x + '[remove]' for x in dac_all if x != soundcard])
wgp = self.me.getSetting('w1gpio')
if wgp != '0':
new_dtoverlay.append(w1gpio[int(wgp)-1])
else:
new_dtoverlay.extend([x + '[remove]' for x in w1gpio])
rpi = self.me.getSetting('lirc-rpi-overlay')
if rpi == 'true':
# dtoverlay=lirc-rpi:gpio_out_pin=19,gpio_in_pin=23,gpio_in_pull=down
out_pin = self.me.getSetting('gpio_out_pin')
in_pin = self.me.getSetting('gpio_in_pin')
pull_pin = self.me.getSetting('gpio_in_pull')
lirc = 'lirc-rpi:' + 'gpio_out_pin=' + str(out_pin) + ',gpio_in_pin=' + str(in_pin)
if pull_pin != 'off':
lirc = lirc + ',gpio_in_pull=' + pull_pin
new_dtoverlay.append(lirc)
else:
new_dtoverlay.append('lirc-rpi-overlay' + '[remove]')
spi = self.me.getSetting('spi-bcm2835-overlay')
if spi == 'true':
new_dtoverlay.append('spi-bcm2835-overlay')
else:
new_dtoverlay.append('spi-bcm2835-overlay' + '[remove]')
log("NEW DT OVERLAY = %s" % new_dtoverlay)
return new_dtoverlay
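# A minimal sketch (hypothetical helper, not part of the original module) of how a
# parameterised overlay line such as "lirc-rpi:gpio_out_pin=19,gpio_in_pin=23"
# splits into a name and a parameter dict, mirroring the parsing above:
def _parse_overlay_line(overlay):
    # "name:key1=val1,key2=val2" -> ("name", {"key1": "val1", "key2": "val2"})
    if ':' not in overlay:
        return overlay, {}
    name, _, param_str = overlay.partition(':')
    params = dict(p.split('=', 1) for p in param_str.split(',') if '=' in p)
    return name, params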
@clog(log)
def translate_store_hdmi(self, data, reverse=False):
'''
Sets the settings_dict and settings.xml values for hdmi_edid_file and hdmi_force_hotplug.
'''
if not reverse:
# set the pi_settings_dict and settings.xml values for these two settings
hdmi_edid_file = self.translate_bool(self.config_settings.get('hdmi_edid_file', 0))
hdmi_force_hotplug = self.translate_bool(self.config_settings.get('hdmi_force_hotplug', 0))
# popcornmix says that if either of these settings are active, then both should be active
tethered_settings = all([hdmi_edid_file == 'true', hdmi_force_hotplug == 'true'])
self.pi_settings_dict['hdmi_edid_file']['setting_value'] = tethered_settings
self.pi_settings_dict['hdmi_force_hotplug']['setting_value'] = tethered_settings
# return the appropriate value for the parent setting
if tethered_settings: return 'true'
return 'false'
else:
# if the parent setting is true, then both child settings should be set to '1'
# if it isn't true, then both settings should be removed from the config.txt
if data == 'true':
self.translated_changed_settings['hdmi_edid_file'] = '1'
self.translated_changed_settings['hdmi_force_hotplug'] = '1'
# run the sub_process : "tvservice -d /boot/edid.dat"
subprocess.call(["sudo", "/opt/vc/bin/tvservice", "-d", "/boot/edid.dat"])
else:
# if the parent setting is false, then remove these two child settings from the config.txt
self.remove_list.append('hdmi_edid_file')
self.remove_list.append('hdmi_force_hotplug')
return 'remove'
@clog(log)
def translate_gpu_mem(self, data, reverse=False):
'''
If a legacy gpu_mem value is present in the config.txt, apply it (suitably capped) to
gpu_mem_256, gpu_mem_512 and gpu_mem_1024. Any new config.txt should be missing the gpu_mem setting.
'''
if not reverse:
memgpu = self.config_settings.get('gpu_mem', False)
# if gpu_mem is not in the config.txt then just return
if not memgpu:
return 'remove'
# set gpu_mem for removal from the config.txt
self.remove_list.append('gpu_mem')
# get the values for the other memory setting variants
mem1024 = self.config_settings.get('gpu_mem_1024', False)
mem512 = self.config_settings.get('gpu_mem_512', False)
mem256 = self.config_settings.get('gpu_mem_256', False)
if mem1024:
return 'remove'
elif memgpu:
# set the value in the pi_settings_dict and the settings.xml for display
val1024 = min(768, int(memgpu))
self.me.setSetting('gpu_mem_1024', str(val1024))
self.pi_settings_dict['gpu_mem_1024']['setting_value'] = val1024
# if gpu_mem_512 is in the config, then use that, otherwise use gpu_mem, otherwise use default
if mem512:
return 'remove'
elif memgpu:
# set the value in the pi_settings_dict and the settings.xml for display
val512 = min(448, int(memgpu))
self.me.setSetting('gpu_mem_512', str(val512))
self.pi_settings_dict['gpu_mem_512']['setting_value'] = val512
# if gpu_mem_256 is in the config, then use that, otherwise use gpu_mem, otherwise use default
if mem256:
return 'remove'
elif memgpu:
# set the value in the pi_settings_dict and the settings.xml for display
val256 = min(192, int(memgpu))
self.me.setSetting('gpu_mem_256', str(val256))
self.pi_settings_dict['gpu_mem_256']['setting_value'] = val256
return 'remove'
else:
return 'remove'
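# (For reference, the caps applied above: gpu_mem_256 <= 192, gpu_mem_512 <= 448, gpu_mem_1024 <= 768.)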
# #
##############################################################################################################################
if __name__ == "__main__":
pass
|
ojengwa/osmc
|
package/mediacenter-addon-osmc/src/script.module.osmcsetting.pi/resources/osmc/OSMCSetting.py
|
Python
|
gpl-2.0
| 31,987
|
[
"VisIt"
] |
7b837cfc1193793f8a818a23e3d3e76ba028c238aba2cc81909903cdc7559dd6
|
"""
Tests for discussion pages
"""
import datetime
from pytz import UTC
from uuid import uuid4
from nose.plugins.attrib import attr
from .helpers import BaseDiscussionTestCase
from ..helpers import UniqueCourseTest
from ...pages.lms.auto_auth import AutoAuthPage
from ...pages.lms.courseware import CoursewarePage
from ...pages.lms.discussion import (
DiscussionTabSingleThreadPage,
InlineDiscussionPage,
InlineDiscussionThreadPage,
DiscussionUserProfilePage,
DiscussionTabHomePage,
DiscussionSortPreferencePage,
)
from ...pages.lms.learner_profile import LearnerProfilePage
from ...fixtures.course import CourseFixture, XBlockFixtureDesc
from ...fixtures.discussion import (
SingleThreadViewFixture,
UserProfileViewFixture,
SearchResultFixture,
Thread,
Response,
Comment,
SearchResult,
MultipleThreadFixture)
from .helpers import BaseDiscussionMixin
THREAD_CONTENT_WITH_LATEX = """Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt
ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation
ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in
reprehenderit in voluptate velit sse cillum dolore eu fugiat nulla pariatur.
\n\n----------\n\nLorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt
ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation
ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in
reprehenderit in voluptate velit sse cillum dolore eu fugiat nulla pariatur. (b).\n\n
**(a)** $H_1(e^{j\\omega}) = \\sum_{n=-\\infty}^{\\infty}h_1[n]e^{-j\\omega n} =
\\sum_{n=-\\infty} ^{\\infty}h[n]e^{-j\\omega n}+\\delta_2e^{-j\\omega n_0}$
$= H(e^{j\\omega})+\\delta_2e^{-j\\omega n_0}=A_e (e^{j\\omega}) e^{-j\\omega n_0}
+\\delta_2e^{-j\\omega n_0}=e^{-j\\omega n_0} (A_e(e^{j\\omega})+\\delta_2)
$H_3(e^{j\\omega})=A_e(e^{j\\omega})+\\delta_2$. Dummy $A_e(e^{j\\omega})$ dummy post $.
$A_e(e^{j\\omega}) \\ge -\\delta_2$, it follows that $H_3(e^{j\\omega})$ is real and
$H_3(e^{j\\omega})\\ge 0$.\n\n**(b)** Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt
ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation
ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in
reprehenderit in voluptate velit sse cillum dolore eu fugiat nulla pariatur.\n\n
**Case 1:** If $re^{j\\theta}$ is a Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt
ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation
ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in
reprehenderit in voluptate velit sse cillum dolore eu fugiat nulla pariatur.
\n\n**Case 3:** Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt
ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation
ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in
reprehenderit in voluptate velit sse cillum dolore eu fugiat nulla pariatur.
Lorem $H_3(e^{j\\omega}) = P(cos\\omega)(cos\\omega - cos\\theta)^k$,
Lorem Lorem Lorem Lorem Lorem Lorem $P(cos\\omega)$ has no
$(cos\\omega - cos\\theta)$ factor.
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt
ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation
ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in
reprehenderit in voluptate velit sse cillum dolore eu fugiat nulla pariatur.
$P(cos\\theta) \\neq 0$. Since $P(cos\\omega)$ this is a dummy data post $\\omega$,
dummy $\\delta > 0$ such that for all $\\omega$ dummy $|\\omega - \\theta|
< \\delta$, $P(cos\\omega)$ Lorem ipsum dolor sit amet, consectetur adipiscing elit,
sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim
veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo
consequat. Duis aute irure dolor in reprehenderit in voluptate velit sse cillum dolore
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt
ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation
ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in
reprehenderit in voluptate velit sse cillum dolore eu fugiat nulla pariatur.
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt
ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation
ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in
reprehenderit in voluptate velit sse cillum dolore eu fugiat nulla pariatur.
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt
ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation
ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in
reprehenderit in voluptate velit sse cillum dolore eu fugiat nulla pariatur.
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt
ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation
ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in
reprehenderit in voluptate velit sse cillum dolore eu fugiat nulla pariatur.
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt
ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation
ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in
reprehenderit in voluptate velit sse cillum dolore eu fugiat nulla pariatur.
"""
class DiscussionResponsePaginationTestMixin(BaseDiscussionMixin):
"""
A mixin containing tests for response pagination for use by both inline
discussion and the discussion tab
"""
def assert_response_display_correct(self, response_total, displayed_responses):
"""
Assert that various aspects of the display of responses are all correct:
* Text indicating total number of responses
* Presence of "Add a response" button
* Number of responses actually displayed
* Presence and text of indicator of how many responses are shown
* Presence and text of button to load more responses
"""
self.assertEqual(
self.thread_page.get_response_total_text(),
str(response_total) + " responses"
)
self.assertEqual(self.thread_page.has_add_response_button(), response_total != 0)
self.assertEqual(self.thread_page.get_num_displayed_responses(), displayed_responses)
self.assertEqual(
self.thread_page.get_shown_responses_text(),
(
None if response_total == 0 else
"Showing all responses" if response_total == displayed_responses else
"Showing first {} responses".format(displayed_responses)
)
)
self.assertEqual(
self.thread_page.get_load_responses_button_text(),
(
None if response_total == displayed_responses else
"Load all responses" if response_total - displayed_responses < 100 else
"Load next 100 responses"
)
)
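# e.g. assert_response_display_correct(150, 25) expects "150 responses",
# 25 rendered responses, "Showing first 25 responses" and a
# "Load next 100 responses" button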
def test_pagination_no_responses(self):
self.setup_thread(0)
self.assert_response_display_correct(0, 0)
def test_pagination_few_responses(self):
self.setup_thread(5)
self.assert_response_display_correct(5, 5)
def test_pagination_two_response_pages(self):
self.setup_thread(50)
self.assert_response_display_correct(50, 25)
self.thread_page.load_more_responses()
self.assert_response_display_correct(50, 50)
def test_pagination_exactly_two_response_pages(self):
self.setup_thread(125)
self.assert_response_display_correct(125, 25)
self.thread_page.load_more_responses()
self.assert_response_display_correct(125, 125)
def test_pagination_three_response_pages(self):
self.setup_thread(150)
self.assert_response_display_correct(150, 25)
self.thread_page.load_more_responses()
self.assert_response_display_correct(150, 125)
self.thread_page.load_more_responses()
self.assert_response_display_correct(150, 150)
def test_add_response_button(self):
self.setup_thread(5)
self.assertTrue(self.thread_page.has_add_response_button())
self.thread_page.click_add_response_button()
def test_add_response_button_closed_thread(self):
self.setup_thread(5, closed=True)
self.assertFalse(self.thread_page.has_add_response_button())
@attr('shard_2')
class DiscussionHomePageTest(UniqueCourseTest):
"""
Tests for the discussion home page.
"""
SEARCHED_USERNAME = "gizmo"
def setUp(self):
super(DiscussionHomePageTest, self).setUp()
CourseFixture(**self.course_info).install()
AutoAuthPage(self.browser, course_id=self.course_id).visit()
self.page = DiscussionTabHomePage(self.browser, self.course_id)
self.page.visit()
def test_new_post_button(self):
"""
Scenario: I can create new posts from the Discussion home page.
Given that I am on the Discussion home page
When I click on the 'New Post' button
Then I should be shown the new post form
"""
self.assertIsNotNone(self.page.new_post_button)
self.page.click_new_post_button()
self.assertIsNotNone(self.page.new_post_form)
@attr('shard_2')
class DiscussionTabSingleThreadTest(BaseDiscussionTestCase, DiscussionResponsePaginationTestMixin):
"""
Tests for the discussion page displaying a single thread
"""
def setUp(self):
super(DiscussionTabSingleThreadTest, self).setUp()
AutoAuthPage(self.browser, course_id=self.course_id).visit()
def setup_thread_page(self, thread_id):
self.thread_page = self.create_single_thread_page(thread_id) # pylint: disable=attribute-defined-outside-init
self.thread_page.visit()
def test_mathjax_rendering(self):
thread_id = "test_thread_{}".format(uuid4().hex)
thread_fixture = SingleThreadViewFixture(
Thread(
id=thread_id,
body=THREAD_CONTENT_WITH_LATEX,
commentable_id=self.discussion_id,
thread_type="discussion"
)
)
thread_fixture.push()
self.setup_thread_page(thread_id)
self.assertTrue(self.thread_page.is_discussion_body_visible())
self.thread_page.verify_mathjax_preview_available()
self.thread_page.verify_mathjax_rendered()
def test_markdown_reference_link(self):
"""
Check markdown editor renders reference link correctly
and that the colon (:) in a reference link is not converted to %3a
"""
sample_link = "http://example.com/colon:test"
thread_content = """[enter link description here][1]\n[1]: http://example.com/colon:test"""
thread_id = "test_thread_{}".format(uuid4().hex)
thread_fixture = SingleThreadViewFixture(
Thread(
id=thread_id,
body=thread_content,
commentable_id=self.discussion_id,
thread_type="discussion"
)
)
thread_fixture.push()
self.setup_thread_page(thread_id)
self.assertEqual(self.thread_page.get_link_href(), sample_link)
def test_marked_answer_comments(self):
thread_id = "test_thread_{}".format(uuid4().hex)
response_id = "test_response_{}".format(uuid4().hex)
comment_id = "test_comment_{}".format(uuid4().hex)
thread_fixture = SingleThreadViewFixture(
Thread(id=thread_id, commentable_id=self.discussion_id, thread_type="question")
)
thread_fixture.addResponse(
Response(id=response_id, endorsed=True),
[Comment(id=comment_id)]
)
thread_fixture.push()
self.setup_thread_page(thread_id)
self.assertFalse(self.thread_page.is_comment_visible(comment_id))
self.assertFalse(self.thread_page.is_add_comment_visible(response_id))
self.assertTrue(self.thread_page.is_show_comments_visible(response_id))
self.thread_page.show_comments(response_id)
self.assertTrue(self.thread_page.is_comment_visible(comment_id))
self.assertTrue(self.thread_page.is_add_comment_visible(response_id))
self.assertFalse(self.thread_page.is_show_comments_visible(response_id))
@attr('shard_2')
class DiscussionTabMultipleThreadTest(BaseDiscussionTestCase):
"""
Tests for the discussion page with multiple threads
"""
def setUp(self):
super(DiscussionTabMultipleThreadTest, self).setUp()
AutoAuthPage(self.browser, course_id=self.course_id).visit()
self.thread_count = 2
self.thread_ids = []
self.setup_multiple_threads(thread_count=self.thread_count)
self.thread_page_1 = DiscussionTabSingleThreadPage(
self.browser,
self.course_id,
self.discussion_id,
self.thread_ids[0]
)
self.thread_page_2 = DiscussionTabSingleThreadPage(
self.browser,
self.course_id,
self.discussion_id,
self.thread_ids[1]
)
self.thread_page_1.visit()
def setup_multiple_threads(self, thread_count):
threads = []
for i in range(thread_count):
thread_id = "test_thread_{}_{}".format(i, uuid4().hex)
thread_body = "Dummy Long text body." * 50
threads.append(
Thread(id=thread_id, commentable_id=self.discussion_id, body=thread_body),
)
self.thread_ids.append(thread_id)
view = MultipleThreadFixture(threads)
view.push()
def test_page_scroll_on_thread_change_view(self):
"""
Check that switching between threads scrolls the window back to the top of the page
"""
# verify threads are rendered on the page
self.assertTrue(
self.thread_page_1.check_threads_rendered_successfully(thread_count=self.thread_count)
)
# From the thread_page_1 open & verify next thread
self.thread_page_1.click_and_open_thread(thread_id=self.thread_ids[1])
self.assertTrue(self.thread_page_2.is_browser_on_page())
# Verify that window is on top of page.
self.thread_page_2.check_window_is_on_top()
@attr('shard_2')
class DiscussionOpenClosedThreadTest(BaseDiscussionTestCase):
"""
Tests for checking the display of attributes on open and closed threads
"""
def setUp(self):
super(DiscussionOpenClosedThreadTest, self).setUp()
self.thread_id = "test_thread_{}".format(uuid4().hex)
def setup_user(self, roles=[]):
roles_str = ','.join(roles)
self.user_id = AutoAuthPage(self.browser, course_id=self.course_id, roles=roles_str).visit().get_user_id()
def setup_view(self, **thread_kwargs):
thread_kwargs.update({'commentable_id': self.discussion_id})
view = SingleThreadViewFixture(
Thread(id=self.thread_id, **thread_kwargs)
)
view.addResponse(Response(id="response1"))
view.push()
def setup_openclosed_thread_page(self, closed=False):
self.setup_user(roles=['Moderator'])
if closed:
self.setup_view(closed=True)
else:
self.setup_view()
page = self.create_single_thread_page(self.thread_id)
page.visit()
page.close_open_thread()
return page
def test_originally_open_thread_vote_display(self):
page = self.setup_openclosed_thread_page()
self.assertFalse(page._is_element_visible('.forum-thread-main-wrapper .action-vote'))
self.assertTrue(page._is_element_visible('.forum-thread-main-wrapper .display-vote'))
self.assertFalse(page._is_element_visible('.response_response1 .action-vote'))
self.assertTrue(page._is_element_visible('.response_response1 .display-vote'))
def test_originally_closed_thread_vote_display(self):
page = self.setup_openclosed_thread_page(True)
self.assertTrue(page._is_element_visible('.forum-thread-main-wrapper .action-vote'))
self.assertFalse(page._is_element_visible('.forum-thread-main-wrapper .display-vote'))
self.assertTrue(page._is_element_visible('.response_response1 .action-vote'))
self.assertFalse(page._is_element_visible('.response_response1 .display-vote'))
@attr('shard_2')
class DiscussionCommentDeletionTest(BaseDiscussionTestCase):
"""
Tests for deleting comments displayed beneath responses in the single thread view.
"""
def setup_user(self, roles=[]):
roles_str = ','.join(roles)
self.user_id = AutoAuthPage(self.browser, course_id=self.course_id, roles=roles_str).visit().get_user_id()
def setup_view(self):
view = SingleThreadViewFixture(Thread(id="comment_deletion_test_thread", commentable_id=self.discussion_id))
view.addResponse(
Response(id="response1"), [
Comment(id="comment_other_author"),
Comment(id="comment_self_author", user_id=self.user_id, thread_id="comment_deletion_test_thread")
]
)
view.push()
def test_comment_deletion_as_student(self):
self.setup_user()
self.setup_view()
page = self.create_single_thread_page("comment_deletion_test_thread")
page.visit()
self.assertTrue(page.is_comment_deletable("comment_self_author"))
self.assertTrue(page.is_comment_visible("comment_other_author"))
self.assertFalse(page.is_comment_deletable("comment_other_author"))
page.delete_comment("comment_self_author")
def test_comment_deletion_as_moderator(self):
self.setup_user(roles=['Moderator'])
self.setup_view()
page = self.create_single_thread_page("comment_deletion_test_thread")
page.visit()
self.assertTrue(page.is_comment_deletable("comment_self_author"))
self.assertTrue(page.is_comment_deletable("comment_other_author"))
page.delete_comment("comment_self_author")
page.delete_comment("comment_other_author")
@attr('shard_2')
class DiscussionResponseEditTest(BaseDiscussionTestCase):
"""
Tests for editing responses displayed beneath thread in the single thread view.
"""
def setup_user(self, roles=[]):
roles_str = ','.join(roles)
self.user_id = AutoAuthPage(self.browser, course_id=self.course_id, roles=roles_str).visit().get_user_id()
def setup_view(self):
view = SingleThreadViewFixture(Thread(id="response_edit_test_thread", commentable_id=self.discussion_id))
view.addResponse(
Response(id="response_other_author", user_id="other", thread_id="response_edit_test_thread"),
)
view.addResponse(
Response(id="response_self_author", user_id=self.user_id, thread_id="response_edit_test_thread"),
)
view.push()
def edit_response(self, page, response_id):
self.assertTrue(page.is_response_editable(response_id))
page.start_response_edit(response_id)
new_response = "edited body"
page.set_response_editor_value(response_id, new_response)
page.submit_response_edit(response_id, new_response)
def test_edit_response_as_student(self):
"""
Scenario: Students should be able to edit the responses they created but not the responses of other users
Given that I am on the discussion page with a student logged in
When I try to edit the response created by the student
Then the response should be edited and rendered successfully
And responses from other users should be shown there
And the student should not be able to edit the responses of other people
"""
self.setup_user()
self.setup_view()
page = self.create_single_thread_page("response_edit_test_thread")
page.visit()
self.assertTrue(page.is_response_visible("response_other_author"))
self.assertFalse(page.is_response_editable("response_other_author"))
self.edit_response(page, "response_self_author")
def test_edit_response_as_moderator(self):
"""
Scenario: Moderator should be able to edit the response they created and responses of other users
Given that I am on discussion page with moderator logged in
When I try to edit the response created by moderator
Then the response should be edited and rendered successfully
And I try to edit the response created by other users
Then the response should be edited and rendered successfully
"""
self.setup_user(roles=["Moderator"])
self.setup_view()
page = self.create_single_thread_page("response_edit_test_thread")
page.visit()
self.edit_response(page, "response_self_author")
self.edit_response(page, "response_other_author")
def test_vote_report_endorse_after_edit(self):
"""
Scenario: Moderator should be able to vote, report or endorse after editing the response.
Given that I am on discussion page with moderator logged in
When I try to edit the response created by moderator
Then the response should be edited and rendered successfully
And I try to edit the response created by other users
Then the response should be edited and rendered successfully
And I try to vote the response created by moderator
Then the response should be voted successfully
And I try to vote the response created by other users
Then the response should be voted successfully
And I try to report the response created by moderator
Then the response should be reported successfully
And I try to report the response created by other users
Then the response should be reported successfully
And I try to endorse the response created by moderator
Then the response should be endorsed successfully
And I try to endorse the response created by other users
Then the response should be endorsed successfully
"""
self.setup_user(roles=["Moderator"])
self.setup_view()
page = self.create_single_thread_page("response_edit_test_thread")
page.visit()
self.edit_response(page, "response_self_author")
self.edit_response(page, "response_other_author")
page.vote_response('response_self_author')
page.vote_response('response_other_author')
page.report_response('response_self_author')
page.report_response('response_other_author')
page.endorse_response('response_self_author')
page.endorse_response('response_other_author')
@attr('shard_2')
class DiscussionCommentEditTest(BaseDiscussionTestCase):
"""
Tests for editing comments displayed beneath responses in the single thread view.
"""
def setup_user(self, roles=[]):
roles_str = ','.join(roles)
self.user_id = AutoAuthPage(self.browser, course_id=self.course_id, roles=roles_str).visit().get_user_id()
def setup_view(self):
view = SingleThreadViewFixture(Thread(id="comment_edit_test_thread", commentable_id=self.discussion_id))
view.addResponse(
Response(id="response1"),
[Comment(id="comment_other_author", user_id="other"), Comment(id="comment_self_author", user_id=self.user_id)])
view.push()
def edit_comment(self, page, comment_id):
page.start_comment_edit(comment_id)
new_comment = "edited body"
page.set_comment_editor_value(comment_id, new_comment)
page.submit_comment_edit(comment_id, new_comment)
def test_edit_comment_as_student(self):
self.setup_user()
self.setup_view()
page = self.create_single_thread_page("comment_edit_test_thread")
page.visit()
self.assertTrue(page.is_comment_editable("comment_self_author"))
self.assertTrue(page.is_comment_visible("comment_other_author"))
self.assertFalse(page.is_comment_editable("comment_other_author"))
self.edit_comment(page, "comment_self_author")
def test_edit_comment_as_moderator(self):
self.setup_user(roles=["Moderator"])
self.setup_view()
page = self.create_single_thread_page("comment_edit_test_thread")
page.visit()
self.assertTrue(page.is_comment_editable("comment_self_author"))
self.assertTrue(page.is_comment_editable("comment_other_author"))
self.edit_comment(page, "comment_self_author")
self.edit_comment(page, "comment_other_author")
def test_cancel_comment_edit(self):
self.setup_user()
self.setup_view()
page = self.create_single_thread_page("comment_edit_test_thread")
page.visit()
self.assertTrue(page.is_comment_editable("comment_self_author"))
original_body = page.get_comment_body("comment_self_author")
page.start_comment_edit("comment_self_author")
page.set_comment_editor_value("comment_self_author", "edited body")
page.cancel_comment_edit("comment_self_author", original_body)
def test_editor_visibility(self):
"""Only one editor should be visible at a time within a single response"""
self.setup_user(roles=["Moderator"])
self.setup_view()
page = self.create_single_thread_page("comment_edit_test_thread")
page.visit()
self.assertTrue(page.is_comment_editable("comment_self_author"))
self.assertTrue(page.is_comment_editable("comment_other_author"))
self.assertTrue(page.is_add_comment_visible("response1"))
original_body = page.get_comment_body("comment_self_author")
page.start_comment_edit("comment_self_author")
self.assertFalse(page.is_add_comment_visible("response1"))
self.assertTrue(page.is_comment_editor_visible("comment_self_author"))
page.set_comment_editor_value("comment_self_author", "edited body")
page.start_comment_edit("comment_other_author")
self.assertFalse(page.is_comment_editor_visible("comment_self_author"))
self.assertTrue(page.is_comment_editor_visible("comment_other_author"))
self.assertEqual(page.get_comment_body("comment_self_author"), original_body)
page.start_response_edit("response1")
self.assertFalse(page.is_comment_editor_visible("comment_other_author"))
self.assertTrue(page.is_response_editor_visible("response1"))
original_body = page.get_comment_body("comment_self_author")
page.start_comment_edit("comment_self_author")
self.assertFalse(page.is_response_editor_visible("response1"))
self.assertTrue(page.is_comment_editor_visible("comment_self_author"))
page.cancel_comment_edit("comment_self_author", original_body)
self.assertFalse(page.is_comment_editor_visible("comment_self_author"))
self.assertTrue(page.is_add_comment_visible("response1"))
@attr('shard_2')
class InlineDiscussionTest(UniqueCourseTest, DiscussionResponsePaginationTestMixin):
"""
Tests for inline discussions
"""
def setUp(self):
super(InlineDiscussionTest, self).setUp()
self.thread_ids = []
self.discussion_id = "test_discussion_{}".format(uuid4().hex)
self.additional_discussion_id = "test_discussion_{}".format(uuid4().hex)
self.course_fix = CourseFixture(**self.course_info).add_children(
XBlockFixtureDesc("chapter", "Test Section").add_children(
XBlockFixtureDesc("sequential", "Test Subsection").add_children(
XBlockFixtureDesc("vertical", "Test Unit").add_children(
XBlockFixtureDesc(
"discussion",
"Test Discussion",
metadata={"discussion_id": self.discussion_id}
),
XBlockFixtureDesc(
"discussion",
"Test Discussion 1",
metadata={"discussion_id": self.additional_discussion_id}
)
)
)
)
).install()
self.user_id = AutoAuthPage(self.browser, course_id=self.course_id).visit().get_user_id()
self.courseware_page = CoursewarePage(self.browser, self.course_id)
self.courseware_page.visit()
self.discussion_page = InlineDiscussionPage(self.browser, self.discussion_id)
self.additional_discussion_page = InlineDiscussionPage(self.browser, self.additional_discussion_id)
def setup_thread_page(self, thread_id):
self.discussion_page.expand_discussion()
self.assertEqual(self.discussion_page.get_num_displayed_threads(), 1)
self.thread_page = InlineDiscussionThreadPage(self.browser, thread_id) # pylint: disable=attribute-defined-outside-init
self.thread_page.expand()
def setup_multiple_inline_threads(self, thread_count):
"""
Set up multiple threads on the page by passing 'thread_count'
"""
threads = []
for i in range(thread_count):
thread_id = "test_thread_{}_{}".format(i, uuid4().hex)
threads.append(
Thread(id=thread_id, commentable_id=self.discussion_id),
)
self.thread_ids.append(thread_id)
thread_fixture = MultipleThreadFixture(threads)
thread_fixture.add_response(
Response(id="response1"),
[Comment(id="comment1", user_id="other"), Comment(id="comment2", user_id=self.user_id)],
threads[0]
)
thread_fixture.push()
def test_page_while_expanding_inline_discussion(self):
"""
Tests for the Inline Discussion page with multiple threads. Page should not focus 'thread-wrapper'
after loading responses.
"""
self.setup_multiple_inline_threads(thread_count=3)
self.discussion_page.expand_discussion()
thread_page = InlineDiscussionThreadPage(self.browser, self.thread_ids[0])
thread_page.expand()
# Check if 'thread-wrapper' is focused after expanding thread
self.assertFalse(thread_page.check_if_selector_is_focused(selector='.thread-wrapper'))
def test_initial_render(self):
self.assertFalse(self.discussion_page.is_discussion_expanded())
def test_expand_discussion_empty(self):
self.discussion_page.expand_discussion()
self.assertEqual(self.discussion_page.get_num_displayed_threads(), 0)
def check_anonymous_to_peers(self, is_staff):
thread = Thread(id=uuid4().hex, anonymous_to_peers=True, commentable_id=self.discussion_id)
thread_fixture = SingleThreadViewFixture(thread)
thread_fixture.push()
self.setup_thread_page(thread.get("id"))
self.assertEqual(self.thread_page.is_thread_anonymous(), not is_staff)
def test_anonymous_to_peers_threads_as_staff(self):
AutoAuthPage(self.browser, course_id=self.course_id, roles="Administrator").visit()
self.courseware_page.visit()
self.check_anonymous_to_peers(True)
def test_anonymous_to_peers_threads_as_peer(self):
self.check_anonymous_to_peers(False)
def test_discussion_blackout_period(self):
now = datetime.datetime.now(UTC)
self.course_fix.add_advanced_settings(
{
u"discussion_blackouts": {
"value": [
[
(now - datetime.timedelta(days=14)).isoformat(),
(now + datetime.timedelta(days=2)).isoformat()
]
]
}
}
)
self.course_fix._add_advanced_settings()
self.browser.refresh()
thread = Thread(id=uuid4().hex, commentable_id=self.discussion_id)
thread_fixture = SingleThreadViewFixture(thread)
thread_fixture.addResponse(
Response(id="response1"),
[Comment(id="comment1", user_id="other"), Comment(id="comment2", user_id=self.user_id)])
thread_fixture.push()
self.setup_thread_page(thread.get("id"))
self.assertFalse(self.discussion_page.element_exists(".new-post-btn"))
self.assertFalse(self.thread_page.has_add_response_button())
self.assertFalse(self.thread_page.is_response_editable("response1"))
self.assertFalse(self.thread_page.is_add_comment_visible("response1"))
self.assertFalse(self.thread_page.is_comment_editable("comment1"))
self.assertFalse(self.thread_page.is_comment_editable("comment2"))
self.assertFalse(self.thread_page.is_comment_deletable("comment1"))
self.assertFalse(self.thread_page.is_comment_deletable("comment2"))
def test_dual_discussion_module(self):
"""
Scenario: Two discussion modules in one unit shouldn't override each other's actions
Given that I'm on a courseware page where there are two inline discussions
When I click on one discussion module new post button
Then it should add new post form of that module in DOM
And I should be shown new post form of that module
And I shouldn't be shown second discussion module new post form
And I click on second discussion module new post button
Then it should add new post form of second module in DOM
And I should be shown second discussion new post form
And I shouldn't be shown first discussion module new post form
And I have two new post forms in the DOM
When I click back on first module new post button
And I should be shown new post form of that module
And I shouldn't be shown second discussion module new post form
"""
self.discussion_page.wait_for_page()
self.additional_discussion_page.wait_for_page()
self.discussion_page.click_new_post_button()
with self.discussion_page.handle_alert():
self.discussion_page.click_cancel_new_post()
self.additional_discussion_page.click_new_post_button()
self.assertFalse(self.discussion_page._is_element_visible(".new-post-article"))
with self.additional_discussion_page.handle_alert():
self.additional_discussion_page.click_cancel_new_post()
self.discussion_page.click_new_post_button()
self.assertFalse(self.additional_discussion_page._is_element_visible(".new-post-article"))
@attr('shard_2')
class DiscussionUserProfileTest(UniqueCourseTest):
"""
Tests for user profile page in discussion tab.
"""
PAGE_SIZE = 20 # django_comment_client.forum.views.THREADS_PER_PAGE
PROFILED_USERNAME = "profiled-user"
def setUp(self):
super(DiscussionUserProfileTest, self).setUp()
CourseFixture(**self.course_info).install()
# The following line creates a user enrolled in our course, whose
# threads will be viewed, but not the one who will view the page.
# It isn't necessary to log them in, but using the AutoAuthPage
# saves a lot of code.
self.profiled_user_id = AutoAuthPage(
self.browser,
username=self.PROFILED_USERNAME,
course_id=self.course_id
).visit().get_user_id()
# now create a second user who will view the profile.
self.user_id = AutoAuthPage(
self.browser,
course_id=self.course_id
).visit().get_user_id()
def check_pages(self, num_threads):
# set up the stub server to return the desired amount of thread results
threads = [Thread(id=uuid4().hex) for _ in range(num_threads)]
UserProfileViewFixture(threads).push()
# navigate to default view (page 1)
page = DiscussionUserProfilePage(
self.browser,
self.course_id,
self.profiled_user_id,
self.PROFILED_USERNAME
)
page.visit()
current_page = 1
total_pages = max(num_threads - 1, 1) / self.PAGE_SIZE + 1
all_pages = range(1, total_pages + 1)
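# (Python 2 integer division: the line above is equivalent to ceil(num_threads / PAGE_SIZE)
# with a minimum of one page, e.g. 0 threads -> 1 page, 21 -> 2 pages, 151 -> 8 pages)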
def _check_page():
# ensure the page being displayed as "current" is the expected one
self.assertEqual(page.get_current_page(), current_page)
# ensure the expected threads are being shown in the right order
threads_expected = threads[(current_page - 1) * self.PAGE_SIZE:current_page * self.PAGE_SIZE]
self.assertEqual(page.get_shown_thread_ids(), [t["id"] for t in threads_expected])
# ensure the clickable page numbers are the expected ones
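# (i.e. pages within two of the current page, plus page 1 when past page 2,
# plus the last page when not already on it)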
self.assertEqual(page.get_clickable_pages(), [
p for p in all_pages
if p != current_page
and p - 2 <= current_page <= p + 2
or (current_page > 2 and p == 1)
or (current_page < total_pages and p == total_pages)
])
# ensure the previous button is shown, but only if it should be.
# when it is shown, make sure it works.
if current_page > 1:
self.assertTrue(page.is_prev_button_shown(current_page - 1))
page.click_prev_page()
self.assertEqual(page.get_current_page(), current_page - 1)
page.click_next_page()
self.assertEqual(page.get_current_page(), current_page)
else:
self.assertFalse(page.is_prev_button_shown())
# ensure the next button is shown, but only if it should be.
if current_page < total_pages:
self.assertTrue(page.is_next_button_shown(current_page + 1))
else:
self.assertFalse(page.is_next_button_shown())
# click all the way up through each page
for i in range(current_page, total_pages):
_check_page()
if current_page < total_pages:
page.click_on_page(current_page + 1)
current_page += 1
# click all the way back down
for i in range(current_page, 0, -1):
_check_page()
if current_page > 1:
page.click_on_page(current_page - 1)
current_page -= 1
return page
def test_0_threads(self):
self.check_pages(0)
def test_1_thread(self):
self.check_pages(1)
def test_20_threads(self):
self.check_pages(20)
def test_21_threads(self):
self.check_pages(21)
def test_151_threads(self):
self.check_pages(151)
def test_pagination_window_reposition(self):
page = self.check_pages(50)
page.click_next_page()
page.wait_for_ajax()
self.assertTrue(page.is_window_on_top())
def test_redirects_to_learner_profile(self):
"""
Scenario: Verify that learner-profile link is present on forum discussions page and we can navigate to it.
Given that I am on discussion forum user's profile page.
And I can see a username on left sidebar
When I click on my username.
Then I will be navigated to Learner Profile page.
And I can see my username on Learner Profile page
"""
learner_profile_page = LearnerProfilePage(self.browser, self.PROFILED_USERNAME)
page = self.check_pages(1)
page.click_on_sidebar_username()
learner_profile_page.wait_for_page()
self.assertTrue(learner_profile_page.field_is_visible('username'))
@attr('shard_2')
class DiscussionSearchAlertTest(UniqueCourseTest):
"""
Tests for spawning and dismissing alerts related to user search actions and their results.
"""
SEARCHED_USERNAME = "gizmo"
def setUp(self):
super(DiscussionSearchAlertTest, self).setUp()
CourseFixture(**self.course_info).install()
# first auto auth call sets up a user that we will search for in some tests
self.searched_user_id = AutoAuthPage(
self.browser,
username=self.SEARCHED_USERNAME,
course_id=self.course_id
).visit().get_user_id()
# this auto auth call creates the actual session user
AutoAuthPage(self.browser, course_id=self.course_id).visit()
self.page = DiscussionTabHomePage(self.browser, self.course_id)
self.page.visit()
def setup_corrected_text(self, text):
SearchResultFixture(SearchResult(corrected_text=text)).push()
def check_search_alert_messages(self, expected):
actual = self.page.get_search_alert_messages()
self.assertTrue(all(map(lambda msg, sub: msg.lower().find(sub.lower()) >= 0, actual, expected)))
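# (Python 2 map() pairs each actual message with its expected substring,
# padding with None on a length mismatch, so a count mismatch surfaces as an error)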
def test_no_rewrite(self):
self.setup_corrected_text(None)
self.page.perform_search()
self.check_search_alert_messages(["no threads"])
def test_rewrite_dismiss(self):
self.setup_corrected_text("foo")
self.page.perform_search()
self.check_search_alert_messages(["foo"])
self.page.dismiss_alert_message("foo")
self.check_search_alert_messages([])
def test_new_search(self):
self.setup_corrected_text("foo")
self.page.perform_search()
self.check_search_alert_messages(["foo"])
self.setup_corrected_text("bar")
self.page.perform_search()
self.check_search_alert_messages(["bar"])
self.setup_corrected_text(None)
self.page.perform_search()
self.check_search_alert_messages(["no threads"])
def test_rewrite_and_user(self):
self.setup_corrected_text("foo")
self.page.perform_search(self.SEARCHED_USERNAME)
self.check_search_alert_messages(["foo", self.SEARCHED_USERNAME])
def test_user_only(self):
self.setup_corrected_text(None)
self.page.perform_search(self.SEARCHED_USERNAME)
self.check_search_alert_messages(["no threads", self.SEARCHED_USERNAME])
# make sure clicking the link leads to the user profile page
UserProfileViewFixture([]).push()
self.page.get_search_alert_links().first.click()
DiscussionUserProfilePage(
self.browser,
self.course_id,
self.searched_user_id,
self.SEARCHED_USERNAME
).wait_for_page()
@attr('shard_2')
class DiscussionSortPreferenceTest(UniqueCourseTest):
"""
    Tests for the sort preferences on the discussion page.
"""
def setUp(self):
super(DiscussionSortPreferenceTest, self).setUp()
# Create a course to register for.
CourseFixture(**self.course_info).install()
AutoAuthPage(self.browser, course_id=self.course_id).visit()
self.sort_page = DiscussionSortPreferencePage(self.browser, self.course_id)
self.sort_page.visit()
def test_default_sort_preference(self):
"""
        Test to check the default sorting preference of a user. (Default = date)
"""
selected_sort = self.sort_page.get_selected_sort_preference()
self.assertEqual(selected_sort, "date")
def test_change_sort_preference(self):
"""
        Test that the user's sorting preference changes properly.
"""
selected_sort = ""
for sort_type in ["votes", "comments", "date"]:
self.assertNotEqual(selected_sort, sort_type)
self.sort_page.change_sort_preference(sort_type)
selected_sort = self.sort_page.get_selected_sort_preference()
self.assertEqual(selected_sort, sort_type)
def test_last_preference_saved(self):
"""
        Test that the user's last preference is saved.
"""
selected_sort = ""
for sort_type in ["votes", "comments", "date"]:
self.assertNotEqual(selected_sort, sort_type)
self.sort_page.change_sort_preference(sort_type)
selected_sort = self.sort_page.get_selected_sort_preference()
self.assertEqual(selected_sort, sort_type)
self.sort_page.refresh_page()
selected_sort = self.sort_page.get_selected_sort_preference()
self.assertEqual(selected_sort, sort_type)
|
hamzehd/edx-platform
|
common/test/acceptance/tests/discussion/test_discussion.py
|
Python
|
agpl-3.0
| 46,777
|
[
"VisIt"
] |
db2c8bc3663fb749ca174945e9aae7c6de9286cf8c1e9dc8a28e3d0beb4758ab
|
# Ground Temperature Calculator
#
# Ladybug: A Plugin for Environmental Analysis (GPL) started by Mostapha Sadeghipour Roudsari
#
# This file is part of Ladybug.
#
# Copyright (c) 2013-2015, Anton Szilasi with help from Chris Mackey <ajszilasi@gmail.com>
# Ladybug is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation; either version 3 of the License,
# or (at your option) any later version.
#
# Ladybug is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ladybug; If not, see <http://www.gnu.org/licenses/>.
#
# @license GPL-3.0+ <http://spdx.org/licenses/GPL-3.0+>
"""
Use this component to visualise ground temperatures throughout the year at specific depths. Please note that epw files usually only provide ground temperature data at depths of 0.5 meters, 2 meters and 4 meters, so data has been interpolated for all other depths. In particular, this interpolation assumes that ground temperatures do not vary over the seasons once the depth has reached 9 meters below the ground surface.
-
Provided by Ladybug 0.0.60
Args:
_groundTemperatureData: ...
_epwFile: An .epw file path on your system as a string
visualisedata_Season: Set to true to visualise the ground temperature data as an average for every season
visualisedata_Month: Set to true to visualise the ground temperature data for every month
Returns:
readMe!: ...
groundtemp1st: In every epw file there are monthly ground temperatures at 3 different depths this is the 1st
groundtemp2nd: In every epw file there are monthly ground temperatures at 3 different depths this is the 2nd
groundtemp3rd: In every epw file there are monthly ground temperatures at 3 different depths this is the 3rd
        graphAxes: This output draws the axes of the graph; it doesn't need to be connected to anything
        graphtext: This output draws the text of the graph; it doesn't need to be connected to anything
        crvColors: This output draws the colours of the temperature curves; connect it to S of the Grasshopper component Custom Preview
        profileCrvs: This output draws the temperature curves themselves; connect it to G of the Grasshopper component Custom Preview
"""
ghenv.Component.Name = "Ladybug_Import Ground Temp"
ghenv.Component.NickName = 'Importgroundtemp'
ghenv.Component.Message = 'VER 0.0.60\nAUG_24_2015'
ghenv.Component.Category = "Ladybug"
ghenv.Component.SubCategory = "2 | VisualizeWeatherData"
#compatibleLBVersion = VER 0.0.58\nJAN_10_2015
try: ghenv.Component.AdditionalHelpFromDocStrings = "5"
except: pass
import scriptcontext as sc
import os
import itertools
from clr import AddReference
AddReference('Grasshopper')
import Grasshopper.Kernel as gh
import Rhino as rc
import System
def drawLegend(colors):
# A function which draws the legend box in the Rhino Viewport
dataMeshes = []
def draw_Legendbox(x,z1,z2,color):
dataMeshes = []
facePt1 = rc.Geometry.Point3d(rectangleCenterPt.X-5, rectangleCenterPt.Y, rectangleCenterPt.Z-z1)
facePt2 = rc.Geometry.Point3d(rectangleCenterPt.X-5+x, rectangleCenterPt.Y, rectangleCenterPt.Z-z1)
facePt3 = rc.Geometry.Point3d(rectangleCenterPt.X-5, rectangleCenterPt.Y, rectangleCenterPt.Z-z2)
facePt4 = rc.Geometry.Point3d(rectangleCenterPt.X-5+x, rectangleCenterPt.Y, rectangleCenterPt.Z-z2)
# Create the mesh of the bars themselves
barMesh = rc.Geometry.Mesh()
for point in [facePt1, facePt2, facePt3, facePt4]:
barMesh.Vertices.Add(point)
barMesh.Faces.AddFace(0, 1, 3, 2)
barMesh.Flip(True, True, True)
# Color the mesh faces
barMesh.VertexColors.CreateMonotoneMesh(color)
dataMeshes.append(barMesh)
return dataMeshes
def draw_Legendboxlabel(x,z,text):
legPlane = rc.Geometry.Plane(rc.Geometry.Point3d(x,5,z), rc.Geometry.Vector3d(1,0,0), rc.Geometry.Vector3d(1,0,1))
legPt = rc.Geometry.Point3d(x,0,z)
textSrfs = lb_visualization.text2srf([text], [legPt],'Verdana', 0.3, False, legPlane)
for txt in textSrfs:
graphtext.extend(txt)
return graphtext
dataMeshes.extend(draw_Legendbox(2,10,10.5,colors[0]))
dataMeshes.extend(draw_Legendbox(2,10.75,11.25,colors[1]))
dataMeshes.extend(draw_Legendbox(2,11.5,12,colors[2]))
dataMeshes.extend(draw_Legendbox(2,12.25,12.75,colors[3]))
dataMeshes.extend(draw_Legendboxlabel(2.5,-10.25,'winter'))
dataMeshes.extend(draw_Legendboxlabel(2.5,-11,'spring'))
dataMeshes.extend(draw_Legendboxlabel(2.5,-11.75,'autumn'))
dataMeshes.extend(draw_Legendboxlabel(2.5,-12.5,'summer'))
return dataMeshes
def drawAxes(groundtemp1st,groundtemp2nd,groundtemp3rd):
#Create a surface to represent the ground plane
xyPlane = rc.Geometry.Plane.WorldXY
rectangle = rc.Geometry.Rectangle3d(xyPlane, 10, 10)
rectangleCurve = rectangle.ToNurbsCurve()
rectangleBrep = rc.Geometry.Brep.CreatePlanarBreps(rectangleCurve)[0]
global rectangleCenterPt
    rectangleCenterPt = rc.Geometry.AreaMassProperties.Compute(rectangleBrep).Centroid # Create a reference point for all lines to refer to
#Create a line to represent the vertical axis.
rectanglePt1 = rc.Geometry.Point3d(rectangleCenterPt.X-5, rectangleCenterPt.Y, rectangleCenterPt.Z)
rectanglePt2 = rc.Geometry.Point3d(rectangleCenterPt.X-5, rectangleCenterPt.Y, rectangleCenterPt.Z-10)
verticalAxis = rc.Geometry.LineCurve(rectanglePt1, rectanglePt2)
#Create markings along this vertical axis every meter.
divisionParams = rc.Geometry.Curve.DivideByLength(verticalAxis, 1, False)
divisionPts = []
for param in divisionParams:
divisionPts.append(verticalAxis.PointAt(param))
divisionLines = []
for point in divisionPts:
otherPt = rc.Geometry.Point3d(point.X+0.25, point.Y, point.Z)
divisionLines.append(rc.Geometry.Line(point, otherPt))
#Create a line that represents the horizontal axis.
horizPt1 = rc.Geometry.Point3d(rectangleCenterPt.X+5, rectangleCenterPt.Y, rectangleCenterPt.Z)
horizPt2 = rc.Geometry.Point3d(rectangleCenterPt.X-5, rectangleCenterPt.Y, rectangleCenterPt.Z)
    horizAxis = rc.Geometry.LineCurve(horizPt1, horizPt2) # Draw a line between these two points making the horizontal axis
    #Create markings along this horizontal axis every degree C.
divisionParams1 = rc.Geometry.Curve.DivideByLength(horizAxis, 1, False)
divisionPts1 = []
for param in divisionParams1:
divisionPts1.append(horizAxis.PointAt(param))
divisionLines1 = []
for point in divisionPts1:
otherPt1 = rc.Geometry.Point3d(point.X, point.Y, point.Z-0.25)
divisionLines1.append(rc.Geometry.Line(point, otherPt1))
    # Creating range of ground temp data to display on horizontal axis of graph
global groundtempall
groundtempall = groundtemp1st[7:] + groundtemp2nd[7:] + groundtemp3rd[7:] # Adding the ground temp data from each of the ground temp depths (1st,2nd and 3rd) in the epw file
    groundtempCtext = [] # The text (numbers) that will be shown as labels on the horizontal axis
    global ratio
    ratio = (max(groundtempall)-min(groundtempall))/(len(divisionPts1)-1) # How many degrees C on the horizontal axis one Rhino square corresponds to
groundtempCtext.append(min(groundtempall))
tot = groundtempCtext[0]
for i in range(len(divisionPts1)-1):
tot += ratio
groundtempCtext.append(tot)
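    # Worked example (illustrative numbers): with a 10 C minimum, a 20 C
    # maximum and 11 division points, ratio = (20 - 10) / 10 = 1 C per grid
    # square, so the axis labels run 10, 11, ..., 20.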
#Put all of the above into a list for the graphAxes output.
graphAxes = [rectangleBrep, verticalAxis, horizAxis]
graphAxes.extend(divisionLines) # Markers on vertical axis
    graphAxes.extend(divisionLines1) # Markers on horizontal axis
return divisionPts,divisionPts1,groundtempCtext,graphAxes
# These 3 function inputs are taken from drawAxes function
def drawText(divisionPts,divisionPts1,groundtempCtext):
graphtext = []
# Drawing the labels on the vertical axis
graphtextvert = []
for point in divisionPts:
textPlane = rc.Geometry.Plane(rc.Geometry.Point3d(point[0]-1.5, point[1], point[2]), rc.Geometry.Vector3d(1,0,0), rc.Geometry.Vector3d(0,0,1)) # A new point is made to offset labels from vertical axis
textSrfs = lb_visualization.text2srf([str(point[2]) + ' m'], [rc.Geometry.Point3d(point[0]-1.5, point[1], point[2])],'Verdana', 0.25, False, textPlane)
for txt in textSrfs:
graphtextvert.extend(txt)
    # Drawing the labels on the horizontal axis
graphtexthort = []
divisionPts1.sort()
for point,i in zip(divisionPts1,groundtempCtext):
textPlane = rc.Geometry.Plane(rc.Geometry.Point3d(point[0], point[1], point[2]+0.3), rc.Geometry.Vector3d(1,1,1), rc.Geometry.Vector3d(0,1,1))
textSrfs2 = lb_visualization.text2srf([str(round(i,1)) + ' C'], [rc.Geometry.Point3d(point[0], point[1], point[2]+0.3)],'Verdana', 0.25, False, textPlane) # Point should be groundtemp data
for txt in textSrfs2: # Adding text surfaces to graphtext so that they will be displayed in Rhino View ports
graphtexthort.extend(txt)
for labels in graphtexthort:
xtozxrotation = rc.Geometry.Transform.Translation(0,0,0)
labels.Transform(xtozxrotation)
for txt in textSrfs:
graphtext.extend(txt)
graphtext = graphtexthort + graphtextvert
# Create title on vertical axis
legPlane = rc.Geometry.Plane(rc.Geometry.Point3d(-3,5,-5), rc.Geometry.Vector3d(1,0,0), rc.Geometry.Vector3d(0,0,1))
legPt = rc.Geometry.Point3d(-2,-1,0)
textSrfs3 = lb_visualization.text2srf(['Depth'], [legPt],'Verdana', 0.3, False, legPlane)
for txt in textSrfs3:
graphtext.extend(txt)
    # Create title on horizontal axis
legPlane1 = rc.Geometry.Plane(rc.Geometry.Point3d(3.5,5,1.5), rc.Geometry.Vector3d(1,0,0), rc.Geometry.Vector3d(1,0,1))
legPt1 = rc.Geometry.Point3d(0,0,0)
textSrfs4 = lb_visualization.text2srf(['Ground Temperature'], [legPt1],'Verdana', 0.3, False, legPlane1)
for txt in textSrfs4:
graphtext.extend(txt)
return graphtext
def drawprofileCrvs_Season(groundtemp1st,groundtemp2nd,groundtemp3rd):
#Create a list of the depths that each list corresponds to.
depthsList = [0.5, 2, 4]
#Find the annual average temperature, which will be the temperature at very low depths.
annualAvg = sum(groundtemp1st[7:])/12
    #Find the maximum deviation around this to get a scale for the horizontal axis.
allValues = []
allValues.extend(groundtemp1st[7:])
allValues.extend(groundtemp2nd[7:])
allValues.extend(groundtemp3rd[7:])
    # The function orderbyseason returns a list of dictionaries with temperatures for each season and their corresponding depths
def orderbyseason(groundtemp1st,groundtemp2nd,groundtemp3rd):
alllists = []
alllists.append(groundtemp1st)
alllists.append(groundtemp2nd)
alllists.append(groundtemp3rd)
#Before defining seasons test whether site is in Northern or Southern Hemisphere
if latitude.find("-") == -1: # If true site is in the Northern Hemisphere
winter = {}
spring = {}
summer = {}
autumn = {}
for count,i in enumerate(alllists):
if count == 0: # Depth at 0.5 m
winter['0.5']= sum(groundtemp1st[7:9]+groundtemp1st[18:])/3
spring['0.5']= sum(groundtemp1st[9:12])/3
summer['0.5']= sum(groundtemp1st[12:15])/3
autumn['0.5']= sum(groundtemp1st[15:18])/3
if count == 1: # Depth at 2 m
winter['2'] = sum(groundtemp2nd[7:9]+groundtemp2nd[18:])/3
spring['2'] = sum(groundtemp2nd[9:12])/3
summer['2'] = sum(groundtemp2nd[12:15])/3
autumn['2'] = sum(groundtemp2nd[15:18])/3
if count == 2: # Depth at 4 m
winter['4'] = sum(groundtemp3rd[7:9]+groundtemp3rd[18:])/3
spring['4'] = sum(groundtemp3rd[9:12])/3
summer['4'] = sum(groundtemp3rd[12:15])/3
autumn['4']= sum(groundtemp3rd[15:18])/3
else: # Site is in the Southern Hemisphere
winter = {}
spring = {}
summer = {}
autumn = {}
for count,i in enumerate(alllists):
if count == 0: # Depth at 0.5 m
winter['0.5']= sum(groundtemp1st[12:15])/3
spring['0.5']= sum(groundtemp1st[15:18])/3
summer['0.5']= sum(groundtemp1st[7:9]+groundtemp1st[18:])/3
autumn['0.5']= sum(groundtemp1st[9:12])/3
if count == 1: # Depth at 2 m
winter['2']= sum(groundtemp2nd[12:15])/3
spring['2']= sum(groundtemp2nd[15:18])/3
summer['2']= sum(groundtemp2nd[7:9]+groundtemp2nd[18:])/3
autumn['2']= sum(groundtemp2nd[9:12])/3
if count == 2: # Depth at 4 m
winter['4']= sum(groundtemp3rd[12:15])/3
spring['4']= sum(groundtemp3rd[15:18])/3
summer['4']= sum(groundtemp3rd[7:9]+groundtemp3rd[18:])/3
autumn['4']= sum(groundtemp3rd[9:12])/3
return winter,spring,autumn,summer, # Return the seasons in this order
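    # Indexing assumption (inferred from the slices above): each ground
    # temperature list carries 7 header items followed by 12 monthly values,
    # so indices 7..18 correspond to January..December and [7:9] + [18:]
    # picks Jan, Feb and Dec.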
seasons = orderbyseason(groundtemp1st,groundtemp2nd,groundtemp3rd)
#Create the points for the season temperature curves
ptsList = []
crvColors = []
colors = System.Drawing.Color.LightBlue,System.Drawing.Color.ForestGreen,System.Drawing.Color.Yellow,System.Drawing.Color.Tomato
# Colors for season curves in order of winter,spring,autumn,summer
for i,season in enumerate(seasons): # Drawing season curves and their corresponding colours.
pt1 = rc.Geometry.Point3d(rectangleCenterPt.X+((season['0.5']-min(groundtempall))/ratio)-4,rectangleCenterPt.Y, rectangleCenterPt.Z-0.5)
pt2 = rc.Geometry.Point3d(rectangleCenterPt.X+((season['2']-min(groundtempall))/ratio)-4, rectangleCenterPt.Y, rectangleCenterPt.Z-2)
pt3 = rc.Geometry.Point3d(rectangleCenterPt.X+((season['4']-min(groundtempall))/ratio)-4, rectangleCenterPt.Y, rectangleCenterPt.Z-4)
pt4 = rc.Geometry.Point3d(rectangleCenterPt.X, rectangleCenterPt.Y, rectangleCenterPt.Z-9)
ptsList.append([pt1, pt2, pt3,pt4])
crvColors.append(colors[i]) # Appending colors to color curves in order of the list colors
#Create the ground profile curves.
profileCrvs = []
for list in ptsList:
seasonCrv = rc.Geometry.Curve.CreateInterpolatedCurve(list, 3)
profileCrvs.append(seasonCrv)
return profileCrvs,crvColors,colors
def drawprofileCrvs_Month(groundtemp1st,groundtemp2nd,groundtemp3rd):
#Find the annual average temperature, which will be the temperature at very low depths.
annualAvg = sum(groundtemp1st[7:])/12
    #Find the maximum deviation around this to get a scale for the horizontal axis.
allValues = []
allValues.extend(groundtemp1st[7:])
allValues.extend(groundtemp2nd[7:])
allValues.extend(groundtemp3rd[7:])
allValues.sort()
maxDiff = max(allValues) - annualAvg
minDiff = annualAvg - min(allValues)
if maxDiff > minDiff: diffFactor = maxDiff/4
else: diffFactor = minDiff/4
lowB, highB, numSeg, customColors, legendBasePoint, legendScale, legendFont, legendFontSize, legendBold = lb_preparation.readLegendParameters([], False)
    # colors = lb_visualization.gradientColor(range(12), 0, 11, customColors) # orphaned code: initially each month line was a different colour; now colouring is by season
#Create the points for the temperature profile curves
ptsList = []
crvColors = []
if latitude.find("-") == -1: # If true site is in the Northern Hemisphere
for count in range(12):
pt1 = rc.Geometry.Point3d(rectangleCenterPt.X + (groundtemp1st[count+7]- annualAvg)/diffFactor, rectangleCenterPt.Y, rectangleCenterPt.Z-0.5)
pt2 = rc.Geometry.Point3d(rectangleCenterPt.X + (groundtemp2nd[count+7]- annualAvg)/diffFactor, rectangleCenterPt.Y, rectangleCenterPt.Z-2)
pt3 = rc.Geometry.Point3d(rectangleCenterPt.X + (groundtemp3rd[count+7]- annualAvg)/diffFactor, rectangleCenterPt.Y, rectangleCenterPt.Z-4)
pt4 = rc.Geometry.Point3d(rectangleCenterPt.X, rectangleCenterPt.Y, rectangleCenterPt.Z-9)
ptsList.append([pt1, pt2, pt3, pt4])
if (count == 0) or (count == 1) or (count == 11): # Winter in Northern Hemisphere
crvColors.append(System.Drawing.Color.LightBlue)
elif (count == 2) or (count == 3) or (count == 4): # Spring in Northern Hemisphere
crvColors.append(System.Drawing.Color.ForestGreen)
elif (count == 5) or (count == 6) or (count == 7): # Summer in Northern Hemisphere
crvColors.append(System.Drawing.Color.Tomato)
elif (count == 8) or (count == 9) or (count == 10): # Autumn in Northern Hemisphere
crvColors.append(System.Drawing.Color.Yellow) # Autumn
else: # Site in Southern Hemisphere
for count in range(12):
pt1 = rc.Geometry.Point3d(rectangleCenterPt.X + (groundtemp1st[count+7]- annualAvg)/diffFactor, rectangleCenterPt.Y, rectangleCenterPt.Z-0.5)
pt2 = rc.Geometry.Point3d(rectangleCenterPt.X + (groundtemp2nd[count+7]- annualAvg)/diffFactor, rectangleCenterPt.Y, rectangleCenterPt.Z-2)
pt3 = rc.Geometry.Point3d(rectangleCenterPt.X + (groundtemp3rd[count+7]- annualAvg)/diffFactor, rectangleCenterPt.Y, rectangleCenterPt.Z-4)
pt4 = rc.Geometry.Point3d(rectangleCenterPt.X, rectangleCenterPt.Y, rectangleCenterPt.Z-9)
ptsList.append([pt1, pt2, pt3, pt4])
if (count == 0) or (count == 1) or (count == 11): # Summer in Southern Hemisphere
crvColors.append(System.Drawing.Color.Tomato)
elif (count == 2) or (count == 3) or (count == 4): # Autumn in Southern Hemisphere
crvColors.append(System.Drawing.Color.Yellow)
elif (count == 5) or (count == 6) or (count == 7): # Winter in Southern Hemisphere
crvColors.append(System.Drawing.Color.LightBlue)
            elif (count == 8) or (count == 9) or (count == 10): # Spring in Southern Hemisphere
crvColors.append(System.Drawing.Color.ForestGreen)
#Create the ground profile curves.
profileCrvs = []
for list in ptsList:
monthCrv = rc.Geometry.Curve.CreateInterpolatedCurve(list, 3)
profileCrvs.append(monthCrv)
return profileCrvs,crvColors
def main(_epw_file):
# import the classes
if sc.sticky.has_key('ladybug_release'):
try:
if not sc.sticky['ladybug_release'].isCompatible(ghenv.Component): return -1
except:
warning = "You need a newer version of Ladybug to use this compoent." + \
"Use updateLadybug component to update userObjects.\n" + \
"If you have already updated userObjects drag Ladybug_Ladybug component " + \
"into canvas and try again."
w = gh.GH_RuntimeMessageLevel.Warning
ghenv.Component.AddRuntimeMessage(w, warning)
return -1
# Create an instance of the lb_preparation class
lb_preparation = sc.sticky["ladybug_Preparation"]()
lb_visualization = sc.sticky["ladybug_ResultVisualization"]()
if not os.path.isfile(_epw_file):
warningM = "Failed to find the file: " + str(_epw_file)
print warningM
w = gh.GH_RuntimeMessageLevel.Warning
ghenv.Component.AddRuntimeMessage(w, warningM)
return -1
locationData = lb_preparation.epwLocation(_epw_file)
groundtemp = lb_preparation.groundTempData(_epw_file,locationData[0])
lb_preparation.printgroundTempData(lb_preparation.groundtemp)
return locationData, groundtemp, lb_visualization, lb_preparation
else:
warningM = "First please let the Ladybug fly..."
print warningM
w = gh.GH_RuntimeMessageLevel.Warning
ghenv.Component.AddRuntimeMessage(w, warningM)
return -1
# Collecting Data from epw
if _epwFile and _epwFile.endswith('.epw') and _epwFile != 'C:\Example.epw':
result = main(_epwFile)
if result!= -1:
location, locName, latitude = result[0][-1], result[0][0], result[0][1]
groundtemp1st,groundtemp2nd,groundtemp3rd = result[1][0],result[1][1],result[1][2]
lb_visualization, lb_preparation = result[2], result[3]
elif _epwFile == 'C:\Example.epw': pass
else:
print "Please connect a valid epw file address to _epw_file input..."
# Graphing the ground temperature data
if visualisedata_Season == True and visualisedata_Month == True and (result!= -1):
print "This component cannot draw both season and month curves please only set visualisedata_Season or visualisedata_Month to True but not both"
w = gh.GH_RuntimeMessageLevel.Warning
ghenv.Component.AddRuntimeMessage(w, "This component cannot draw both season and month curves please only set visualisedata_Season or visualisedata_Month to True but not both")
elif visualisedata_Season == True:
divisionPts,divisionPts1,groundtempCtext,graphAxes = drawAxes(groundtemp1st,groundtemp2nd,groundtemp3rd)
graphtext = drawText(divisionPts,divisionPts1,groundtempCtext)
# Draw text above graph which shows location
Plane = rc.Geometry.Plane(rc.Geometry.Point3d(3.5,5,2.5), rc.Geometry.Vector3d(1,0,0), rc.Geometry.Vector3d(0,0,1))
textSrfs5 = lb_visualization.text2srf([str(location.split(",")[1])], [rc.Geometry.Point3d(-2,5,0)],'Verdana', 0.3, False, Plane)
for txt in textSrfs5:
graphtext.extend(txt)
profileCrvs,crvColors,colors = drawprofileCrvs_Season(groundtemp1st,groundtemp2nd,groundtemp3rd)
Legend = drawLegend(colors)
elif visualisedata_Month == True:
# Draw graph axes
divisionPts,divisionPts1,groundtempCtext,graphAxes = drawAxes(groundtemp1st,groundtemp2nd,groundtemp3rd)
# Draw graph text
graphtext = drawText(divisionPts,divisionPts1,groundtempCtext)
# Draw text above graph which shows location
Plane = rc.Geometry.Plane(rc.Geometry.Point3d(3.5,5,2.5), rc.Geometry.Vector3d(1,0,0), rc.Geometry.Vector3d(0,0,1))
textSrfs5 = lb_visualization.text2srf([str(location.split(",")[1])], [rc.Geometry.Point3d(-2,5,0)],'Verdana', 0.3, False, Plane)
for txt in textSrfs5:
graphtext.extend(txt)
#locationtext = lb_visualization.text2srf([str() + ' m'], [rc.Geometry.Point3d(point[0]-1.5, point[1], point[2])],'Verdana', 0.25, False, textPlane)
profileCrvs,crvColors = drawprofileCrvs_Month(groundtemp1st,groundtemp2nd,groundtemp3rd)
# Create legend colours in order of winter, spring, Autumn and Summer
Legendcolors = [System.Drawing.Color.LightBlue,System.Drawing.Color.ForestGreen,System.Drawing.Color.Yellow,System.Drawing.Color.Tomato]
Legend = drawLegend(Legendcolors)
|
samuto/ladybug
|
src/Ladybug_Import Ground Temp.py
|
Python
|
gpl-3.0
| 23,919
|
[
"EPW"
] |
c6c155e56a200dec7c10eddd73f1b50f8c50eaf808adf4a887b9f51c97689fec
|
#-------------------------------------------------------------------------------
# Name: Hamilton
# Purpose:
#
# Author: winxos
#
# Created: 30/01/2012
# Copyright: (c) wzx 2012
# Licence: free
#-------------------------------------------------------------------------------
#!/usr/bin/env python
import random
hd=[[1,2],[0,3,4],[0,5,6],[1,7],[1,5,8], \
[2,4,10],[2,11],[3,8,12],[4,7,9],[8,10,13], \
[5,9,11],[6,10,14],[7,13],[9,12,14],[11,13]]
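# hd is an adjacency list: hd[i] holds the indices of the cities reachable
# from city i (15 cities, most with three neighbours). The functions below
# perform a randomized depth-first walk over this graph, looking for a
# Hamiltonian path, i.e. a route that visits every city exactly once.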
visited=[0] * len(hd)
routh=[]
def isVisited(city):
return visited[city]==1
def isAllVisited(city):
citys=hd[city]
Flag=True
for c in citys:
if visited[c]==0:
Flag=False
break
return Flag
def visit(startcity):
if len(routh)==len(visited):
return
if isVisited(startcity):
return
if isAllVisited(startcity):
visited[startcity]=0
routh.pop()
visited[startcity]=1
routh.append(startcity)
city=hd[startcity][random.randint(0,len(hd[startcity])-1)]
#for city in hd[startcity]:
if not isVisited(city):
visit(city)
else:
visit(startcity)
def main():
visit(0)
print routh
if __name__ == '__main__':
main()
|
winxos/python
|
hamilton.py
|
Python
|
mit
| 1,228
|
[
"VisIt"
] |
31ebd11557e0a568088b00f53823138c77cdc9f454b835f9a110c79da1401592
|
#!/usr/bin/env python
# Copyright 2014-2019 The PySCF Developers. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Author: Qiming Sun <osirpt.sun@gmail.com>
#
'''
Non-relativistic unrestricted Kohn-Sham electron spin-rotation coupling
(In testing)
Refs:
J. Phys. Chem. A. 114, 9246, 2010
Mol. Phys. 9, 6, 585, 1964
'''
from functools import reduce
import numpy, sys
from pyscf import lib
from pyscf.lib import logger
from pyscf.dft import numint
from pyscf.prop.nmr import uks as uks_nmr
from pyscf.prop.esr import uhf as uhf_esr
from pyscf.prop.esr.uhf import _write, align
from pyscf.data import nist
from pyscf.grad import rks as rks_grad
# Note mo10 is the imaginary part of MO^1
def para(obj, mo10, mo_coeff, mo_occ, qed_fac=1):
mol = obj.mol
effspin = mol.spin * .5
muB = .5 # Bohr magneton
#qed_fac = (nist.G_ELECTRON - 1)
orboa = mo_coeff[0][:,mo_occ[0]>0]
orbob = mo_coeff[1][:,mo_occ[1]>0]
dm0a = numpy.dot(orboa, orboa.T)
dm0b = numpy.dot(orbob, orbob.T)
dm10a = [reduce(numpy.dot, (mo_coeff[0], x, orboa.T)) for x in mo10[0]]
dm10b = [reduce(numpy.dot, (mo_coeff[1], x, orbob.T)) for x in mo10[1]]
dm10a = numpy.asarray([x-x.T for x in dm10a])
dm10b = numpy.asarray([x-x.T for x in dm10b])
hso1e = uhf_esr.make_h01_soc1e(obj, mo_coeff, mo_occ, qed_fac)
para1e =-numpy.einsum('xji,yij->xy', dm10a, hso1e)
para1e+= numpy.einsum('xji,yij->xy', dm10b, hso1e)
para1e *= 1./effspin / muB
#_write(obj, align(para1e)[0], 'SOC(1e)/OZ')
if obj.para_soc2e:
        raise NotImplementedError('para_soc2e = %s' % obj.para_soc2e)
para = para1e
return para
# Treat Vxc as one-particle operator Vnuc
def get_vxc_soc(ni, mol, grids, xc_code, dms, max_memory=2000, verbose=None):
xctype = ni._xc_type(xc_code)
make_rho, nset, nao = ni._gen_rho_evaluator(mol, dms, hermi=1)
ngrids = len(grids.weights)
BLKSIZE = numint.BLKSIZE
blksize = min(int(max_memory/12*1e6/8/nao/BLKSIZE)*BLKSIZE, ngrids)
shls_slice = (0, mol.nbas)
ao_loc = mol.ao_loc_nr()
vmat = numpy.zeros((2,3,nao,nao))
if xctype == 'LDA':
buf = numpy.empty((4,blksize,nao))
ao_deriv = 1
for ao, mask, weight, coords \
in ni.block_loop(mol, grids, nao, ao_deriv, max_memory,
blksize=blksize, buf=buf):
rho_a = make_rho(0, ao[0], mask, 'LDA')
rho_b = make_rho(1, ao[0], mask, 'LDA')
vxc = ni.eval_xc(xc_code, (rho_a, rho_b), 1, deriv=1)[1]
vrho = vxc[0]
aow = numpy.einsum('xpi,p->xpi', ao[1:], weight*vrho[:,0])
_cross3x3_(vmat[0], mol, aow, ao[1:], mask, shls_slice, ao_loc)
aow = numpy.einsum('xpi,p->xpi', ao[1:], weight*vrho[:,1])
_cross3x3_(vmat[1], mol, aow, ao[1:], mask, shls_slice, ao_loc)
rho = vxc = vrho = aow = None
elif xctype == 'GGA':
buf = numpy.empty((10,blksize,nao))
ao_deriv = 2
for ao, mask, weight, coords \
in ni.block_loop(mol, grids, nao, ao_deriv, max_memory,
blksize=blksize, buf=buf):
rho_a = make_rho(0, ao, mask, 'GGA')
rho_b = make_rho(1, ao, mask, 'GGA')
vxc = ni.eval_xc(xc_code, (rho_a,rho_b), 1, deriv=1)[1]
wva, wvb = numint._uks_gga_wv0((rho_a, rho_b), vxc, weight)
ip_ao = ao[1:4]
ipip_ao = ao[4:]
aow = rks_grad._make_dR_dao_w(ao, wva)
_cross3x3_(vmat[0], mol, aow, ip_ao, mask, shls_slice, ao_loc)
aow = rks_grad._make_dR_dao_w(ao, wvb)
_cross3x3_(vmat[1], mol, aow, ip_ao, mask, shls_slice, ao_loc)
rho = vxc = vrho = vsigma = wv = aow = None
vmat = vmat - vmat.transpose(0,1,3,2)
else:
raise NotImplementedError('meta-GGA')
return vmat
def _cross3x3_(out, mol, ao1, ao2, mask, shls_slice, ao_loc):
out[0] += numint._dot_ao_ao(mol, ao1[1], ao2[2], mask, shls_slice, ao_loc)
out[0] -= numint._dot_ao_ao(mol, ao1[2], ao2[1], mask, shls_slice, ao_loc)
out[1] += numint._dot_ao_ao(mol, ao1[2], ao2[0], mask, shls_slice, ao_loc)
out[1] -= numint._dot_ao_ao(mol, ao1[0], ao2[2], mask, shls_slice, ao_loc)
out[2] += numint._dot_ao_ao(mol, ao1[0], ao2[1], mask, shls_slice, ao_loc)
out[2] -= numint._dot_ao_ao(mol, ao1[1], ao2[0], mask, shls_slice, ao_loc)
return out
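# _cross3x3_ accumulates the antisymmetric (cross-product-like) contractions
# of two sets of x/y/z-derivative AOs: out_x += a_y.b_z - a_z.b_y, with the
# cyclic permutations for out_y and out_z (indices 0, 1, 2 = x, y, z).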
# Jia, start to work here
class ESR(uhf_esr.ESR):
'''dE = B dot gtensor dot s'''
def __init__(self, scf_method):
uhf_esr.ESR.__init__(self, scf_method)
self.dia_soc2e = False
self.para_soc2e = False
def para(self, mo10=None, mo_coeff=None, mo_occ=None):
if mo_coeff is None: mo_coeff = self._scf.mo_coeff
if mo_occ is None: mo_occ = self._scf.mo_occ
if mo10 is None:
self.mo10, self.mo_e10 = self.solve_mo1()
mo10 = self.mo10
return para(self, mo10, mo_coeff, mo_occ)
#make_para_soc2e = make_para_soc2e
get_fock = uks_nmr.get_fock
if __name__ == '__main__':
from pyscf import gto, scf
mol = gto.M(atom='H 0 0.1 0; H 0 0 1.',
basis='ccpvdz', spin=1, charge=-1, verbose=3)
mf = scf.UKS(mol).set(xc='bp86').run()
esr_obj = ESR(mf)
esr_obj.gauge_orig = (0,0,0)
esr_obj.para_soc2e = False
esr_obj.so_eff_charge = True
print(esr_obj.kernel())
mol = gto.M(atom='''
H 0 0 1
H 1.2 0 1
H .1 1.1 0.3
H .8 .7 .6
''',
basis='ccpvdz', spin=1, charge=1, verbose=3)
mf = scf.UKS(mol).set(xc='bp86').run()
    gobj = ESR(mf)
#print(gobj.kernel())
gobj.para_soc2e = 'SSO'
gobj.dia_soc2e = None
gobj.so_eff_charge = False
nao, nmo = mf.mo_coeff[0].shape
nelec = mol.nelec
numpy.random.seed(1)
mo10 =[numpy.random.random((3,nmo,nelec[0])),
numpy.random.random((3,nmo,nelec[1]))]
print(lib.finger(para(gobj, mo10, mf.mo_coeff, mf.mo_occ)) - -2.1813250579863279e-05)
numpy.random.seed(1)
dm0 = numpy.random.random((2,nao,nao))
dm0 = dm0 + dm0.transpose(0,2,1)
dm10 = numpy.random.random((2,3,nao,nao))
dm10 = dm10 - dm10.transpose(0,1,3,2)
    # make_para_soc2e is not defined in this module (see the commented-out
    # assignment in the ESR class above), so this check is disabled:
    #print(lib.finger(make_para_soc2e(gobj, dm0, dm10)) - 0.0036073897889263721)
|
gkc1000/pyscf
|
pyscf/prop/esr/uks.py
|
Python
|
apache-2.0
| 6,901
|
[
"PySCF"
] |
613db58372f4876e8bede46362c5168f0abb6421594cda7738df283aae5cfa11
|
# -*- coding: utf-8 -*-
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import sys
import os
import os.path
import difflib
import urllib
import urllib2
import re
import json
import logging
import cPickle as pickle
from hashlib import sha1
from datetime import datetime, timedelta
from collections import defaultdict
import shlex
import socket
from functools import partial
import tg
import genshi.template
import chardet
import pkg_resources
from formencode.validators import FancyValidator
from dateutil.parser import parse
from bson import ObjectId
from paste.deploy import appconfig
from pymongo.errors import InvalidId
from contextlib import contextmanager
from pylons import tmpl_context as c, app_globals as g
from pylons import response, request
from tg.decorators import before_validate
from formencode.variabledecode import variable_decode
import formencode
from jinja2 import Markup
from jinja2.filters import contextfilter
from paste.deploy.converters import asbool, aslist
from webhelpers import date, feedgenerator, html, number, misc, text
from webob.exc import HTTPUnauthorized
from allura.lib import exceptions as exc
# Reimport to make available to templates
from allura.lib import AsciiDammit
from .security import has_access
log = logging.getLogger(__name__)
# validates project, subproject, and user names
re_project_name = re.compile(r'^[a-z][-a-z0-9]{2,14}$')
# validates tool mount point names
re_tool_mount_point = re.compile(r'^[a-z][-a-z0-9]{0,62}$')
re_tool_mount_point_fragment = re.compile(r'[a-z][-a-z0-9]*')
re_relaxed_tool_mount_point = re.compile(
r'^[a-zA-Z0-9][-a-zA-Z0-9_\.\+]{0,62}$')
re_relaxed_tool_mount_point_fragment = re.compile(
r'[a-zA-Z0-9][-a-zA-Z0-9_\.\+]*')
re_clean_vardec_key = re.compile(r'''\A
( # first part
\w+# name...
(-\d+)?# with optional -digits suffix
)
(\. # next part(s)
\w+# name...
(-\d+)?# with optional -digits suffix
)+
\Z''', re.VERBOSE)
# markdown escaping regexps
re_amp = re.compile(r'''
[&] # amp
(?= # look ahead for:
([a-zA-Z0-9]+;) # named HTML entity
|
(\#[0-9]+;) # decimal entity
|
(\#x[0-9A-F]+;) # hex entity
)
''', re.VERBOSE)
re_leading_spaces = re.compile(r'^[\t ]+', re.MULTILINE)
re_preserve_spaces = re.compile(r'''
[ ] # space
(?=[ ]) # lookahead for a space
''', re.VERBOSE)
re_angle_bracket_open = re.compile('<')
re_angle_bracket_close = re.compile('>')
md_chars_matcher_all = re.compile(r"([`\*_{}\[\]\(\)#!\\\.+-])")
def make_safe_path_portion(ustr, relaxed=True):
"""Return an ascii representation of ``ustr`` that conforms to mount point
naming :attr:`rules <re_tool_mount_point_fragment>`.
Will return an empty string if no char in ``ustr`` is latin1-encodable.
:param relaxed: Use relaxed mount point naming rules (allows more
characters. See :attr:`re_relaxed_tool_mount_point_fragment`.
:returns: The converted string.
"""
regex = (re_relaxed_tool_mount_point_fragment if relaxed else
re_tool_mount_point_fragment)
ustr = really_unicode(ustr)
s = ustr.encode('latin1', 'ignore')
s = AsciiDammit.asciiDammit(s)
if not relaxed:
s = s.lower()
s = '-'.join(regex.findall(s))
s = s.replace('--', '-')
return s
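# For example (illustrative input): make_safe_path_portion('My Tool!') gives
# 'My-Tool' under the relaxed rules, and 'my-tool' with relaxed=False.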
def monkeypatch(*objs):
def patchem(func):
for obj in objs:
setattr(obj, func.__name__, func)
return patchem
def urlquote(url, safe="/"):
try:
return urllib.quote(str(url), safe=safe)
except UnicodeEncodeError:
return urllib.quote(url.encode('utf-8'), safe=safe)
def urlquoteplus(url, safe=""):
try:
return urllib.quote_plus(str(url), safe=safe)
except UnicodeEncodeError:
return urllib.quote_plus(url.encode('utf-8'), safe=safe)
def _attempt_encodings(s, encodings):
if s is None:
return u''
for enc in encodings:
try:
if enc is None:
return unicode(s) # try default encoding
else:
return unicode(s, enc)
except (UnicodeDecodeError, LookupError):
pass
# Return the repr of the str -- should always be safe
return unicode(repr(str(s)))[1:-1]
def really_unicode(s):
# Try to guess the encoding
def encodings():
yield None
yield 'utf-8'
yield chardet.detect(s[:1024])['encoding']
yield chardet.detect(s)['encoding']
yield 'latin-1'
return _attempt_encodings(s, encodings())
def find_user(email=None, username=None, display_name=None):
from allura import model as M
user = None
if email:
user = M.User.by_email_address(email)
if not user and username:
user = M.User.by_username(username)
if not user and display_name:
user = M.User.by_display_name(display_name)
return user
def find_project(url_path):
from allura import model as M
for n in M.Neighborhood.query.find():
if url_path.strip("/").startswith(n.url_prefix.strip("/")):
break
else:
return None, url_path
# easily off-by-one, might be better to join together everything but
# url_prefix
project_part = n.shortname_prefix + url_path[len(n.url_prefix):]
parts = project_part.split('/')
length = len(parts)
while length:
shortname = '/'.join(parts[:length])
p = M.Project.query.get(shortname=shortname, deleted=False,
neighborhood_id=n._id)
if p:
return p, parts[length:]
length -= 1
return None, url_path.split('/')
def make_neighborhoods(ids):
return _make_xs('Neighborhood', ids)
def make_projects(ids):
return _make_xs('Project', ids)
def make_users(ids):
return _make_xs('User', ids)
def make_roles(ids):
return _make_xs('ProjectRole', ids)
def _make_xs(X, ids):
from allura import model as M
X = getattr(M, X)
ids = list(ids)
results = dict(
(r._id, r)
for r in X.query.find(dict(_id={'$in': ids})))
result = (results.get(i) for i in ids)
return (r for r in result if r is not None)
def make_app_admin_only(app):
from allura.model.auth import ProjectRole
admin_role = ProjectRole.by_name('Admin', app.project)
for ace in [ace for ace in app.acl if ace.role_id != admin_role._id]:
app.acl.remove(ace)
@contextmanager
def push_config(obj, **kw):
saved_attrs = {}
new_attrs = []
for k, v in kw.iteritems():
try:
saved_attrs[k] = getattr(obj, k)
except AttributeError:
new_attrs.append(k)
setattr(obj, k, v)
try:
yield obj
finally:
for k, v in saved_attrs.iteritems():
setattr(obj, k, v)
for k in new_attrs:
delattr(obj, k)
def sharded_path(name, num_parts=2):
parts = [
name[:i + 1]
for i in range(num_parts)]
return '/'.join(parts)
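# For example: sharded_path('myproject') -> 'm/my'; with num_parts=2 it takes
# the one- and two-character prefixes of the name and joins them with '/'.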
def set_context(project_shortname_or_id, mount_point=None, app_config_id=None, neighborhood=None):
from allura import model
try:
p = model.Project.query.get(_id=ObjectId(str(project_shortname_or_id)))
except InvalidId:
p = None
if p is None and type(project_shortname_or_id) != ObjectId:
if neighborhood is None:
raise TypeError('neighborhood is required; it must not be None')
if not isinstance(neighborhood, model.Neighborhood):
n = model.Neighborhood.query.get(name=neighborhood)
if n is None:
try:
n = model.Neighborhood.query.get(
_id=ObjectId(str(neighborhood)))
except InvalidId:
pass
if n is None:
raise exc.NoSuchNeighborhoodError(
"Couldn't find neighborhood %s" %
repr(neighborhood))
neighborhood = n
query = dict(shortname=project_shortname_or_id,
neighborhood_id=neighborhood._id)
p = model.Project.query.get(**query)
if p is None:
raise exc.NoSuchProjectError("Couldn't find project %s nbhd %s" %
(project_shortname_or_id, neighborhood))
c.project = p
if app_config_id is None:
c.app = p.app_instance(mount_point)
else:
if isinstance(app_config_id, basestring):
app_config_id = ObjectId(app_config_id)
app_config = model.AppConfig.query.get(_id=app_config_id)
c.app = p.app_instance(app_config)
@contextmanager
def push_context(project_id, mount_point=None, app_config_id=None, neighborhood=None):
project = getattr(c, 'project', ())
app = getattr(c, 'app', ())
set_context(project_id, mount_point, app_config_id, neighborhood)
try:
yield
finally:
if project == ():
del c.project
else:
c.project = project
if app == ():
del c.app
else:
c.app = app
def encode_keys(d):
'''Encodes the unicode keys of d, making the result
a valid kwargs argument'''
return dict(
(k.encode('utf-8'), v)
for k, v in d.iteritems())
def vardec(fun):
def vardec_hook(remainder, params):
new_params = variable_decode(dict(
(k, v) for k, v in params.items()
if re_clean_vardec_key.match(k)))
params.update(new_params)
before_validate(vardec_hook)(fun)
return fun
def nonce(length=4):
return sha1(ObjectId().binary + os.urandom(10)).hexdigest()[:length]
def cryptographic_nonce(length=40):
hex_format = '%.2x' * length
return hex_format % tuple(map(ord, os.urandom(length)))
def ago(start_time, show_date_after=7):
"""
Return time since starting time as a rounded, human readable string.
E.g., "3 hours ago"
"""
if start_time is None:
return 'unknown'
granularities = ['century', 'decade', 'year', 'month', 'day', 'hour',
'minute']
end_time = datetime.utcnow()
if show_date_after is not None and end_time - start_time > timedelta(days=show_date_after):
return start_time.strftime('%Y-%m-%d')
while True:
granularity = granularities.pop()
ago = date.distance_of_time_in_words(
start_time, end_time, granularity, round=True)
rounded_to_one_granularity = 'and' not in ago
if rounded_to_one_granularity:
break
return ago + ' ago'
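# For example: a start_time roughly three hours in the past yields a string
# like '3 hours ago'; anything older than show_date_after days falls back to
# a plain 'YYYY-MM-DD' date.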
def ago_ts(timestamp):
return ago(datetime.utcfromtimestamp(timestamp))
def ago_string(s):
try:
return ago(parse(s, ignoretz=True))
except (ValueError, AttributeError):
return 'unknown'
class DateTimeConverter(FancyValidator):
def _to_python(self, value, state):
try:
return parse(value)
except ValueError:
if self.if_invalid != formencode.api.NoDefault:
return self.if_invalid
else:
raise
def _from_python(self, value, state):
return value.isoformat()
def absurl(url):
"""
Given a root-relative URL, return a full URL including protocol and host
"""
if url is None:
return None
if '://' in url:
return url
try:
# try request first, so we can get proper http/https value
host = request.host_url
except TypeError:
# for tests, etc
host = tg.config['base_url'].rstrip('/')
return host + url
def diff_text(t1, t2, differ=None):
t1_lines = t1.replace('\r', '').split('\n')
t2_lines = t2.replace('\r', '').split('\n')
t1_words = []
for line in t1_lines:
for word in line.split(' '):
t1_words.append(word)
t1_words.append('\n')
t2_words = []
for line in t2_lines:
for word in line.split(' '):
t2_words.append(word)
t2_words.append('\n')
if differ is None:
differ = difflib.SequenceMatcher(None, t1_words, t2_words)
result = []
for tag, i1, i2, j1, j2 in differ.get_opcodes():
if tag in ('delete', 'replace'):
result += ['<del>'] + t1_words[i1:i2] + ['</del>']
if tag in ('insert', 'replace'):
result += ['<ins>'] + t2_words[j1:j2] + ['</ins>']
if tag == 'equal':
result += t1_words[i1:i2]
return ' '.join(result).replace('\n', '<br/>\n')
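# For example (illustrative): diff_text('a b', 'a c') marks the changed word,
# producing roughly 'a <del> b </del> <ins> c </ins>' with newlines rendered
# as '<br/>'.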
def gen_message_id(_id=None):
if not _id:
_id = nonce(40)
if getattr(c, 'project', None):
parts = c.project.url().split('/')[1:-1]
else:
parts = ['mail']
if getattr(c, 'app', None):
addr = '%s.%s' % (_id, c.app.config.options['mount_point'])
else:
addr = _id
return '%s@%s.%s' % (
addr, '.'.join(reversed(parts)), tg.config['domain'])
class ProxiedAttrMeta(type):
def __init__(cls, name, bases, dct):
for v in dct.itervalues():
if isinstance(v, attrproxy):
v.cls = cls
class attrproxy(object):
cls = None
def __init__(self, *attrs):
self.attrs = attrs
def __repr__(self):
return '<attrproxy on %s for %s>' % (
self.cls, self.attrs)
def __get__(self, obj, klass=None):
if obj is None:
obj = klass
for a in self.attrs:
obj = getattr(obj, a)
return proxy(obj)
def __getattr__(self, name):
if self.cls is None:
return promised_attrproxy(lambda: self.cls, name)
return getattr(
attrproxy(self.cls, *self.attrs),
name)
class promised_attrproxy(attrproxy):
def __init__(self, promise, *attrs):
super(promised_attrproxy, self).__init__(*attrs)
self._promise = promise
def __repr__(self):
return '<promised_attrproxy for %s>' % (self.attrs,)
def __getattr__(self, name):
cls = self._promise()
return getattr(cls, name)
class proxy(object):
def __init__(self, obj):
self._obj = obj
def __getattr__(self, name):
return getattr(self._obj, name)
def __call__(self, *args, **kwargs):
return self._obj(*args, **kwargs)
def render_genshi_plaintext(template_name, **template_vars):
assert os.path.exists(template_name)
fd = open(template_name)
try:
tpl_text = fd.read()
finally:
fd.close()
filepath = os.path.dirname(template_name)
tt = genshi.template.NewTextTemplate(tpl_text,
filepath=filepath, filename=template_name)
stream = tt.generate(**template_vars)
return stream.render(encoding='utf-8').decode('utf-8')
@tg.expose(content_type='text/plain')
def json_validation_error(controller, **kwargs):
result = dict(status='Validation Error',
errors=c.validation_exception.unpack_errors(),
value=c.validation_exception.value,
params=kwargs)
response.status = 400
return json.dumps(result, indent=2)
def pop_user_notifications(user=None):
from allura import model as M
if user is None:
user = c.user
mbox = M.Mailbox.query.get(user_id=user._id, is_flash=True)
if mbox:
notifications = M.Notification.query.find(
dict(_id={'$in': mbox.queue}))
mbox.queue = []
mbox.queue_empty = True
for n in notifications:
# clean it up so it doesn't hang around
M.Notification.query.remove({'_id': n._id})
yield n
def config_with_prefix(d, prefix):
'''Return a subdictionary keys with a given prefix,
with the prefix stripped
'''
plen = len(prefix)
return dict((k[plen:], v) for k, v in d.iteritems()
if k.startswith(prefix))
@contextmanager
def twophase_transaction(*engines):
connections = [
e.contextual_connect()
for e in engines]
txns = []
to_rollback = []
try:
for c in connections:
txn = c.begin_twophase()
txns.append(txn)
to_rollback.append(txn)
yield
to_rollback = []
for txn in txns:
txn.prepare()
to_rollback.append(txn)
for txn in txns:
txn.commit()
except:
for txn in to_rollback:
txn.rollback()
raise
class log_action(object):
extra_proto = dict(
action=None,
action_type=None,
tool_type=None,
tool_mount=None,
project=None,
neighborhood=None,
username=None,
url=None,
ip_address=None)
def __init__(self, logger, action):
self._logger = logger
self._action = action
def log(self, level, message, *args, **kwargs):
kwargs = dict(kwargs)
extra = kwargs.setdefault('extra', {})
meta = kwargs.pop('meta', {})
kwpairs = extra.setdefault('kwpairs', {})
for k, v in meta.iteritems():
kwpairs['meta_%s' % k] = v
extra.update(self._make_extra())
self._logger.log(level, self._action + ': ' + message, *args, **kwargs)
def info(self, message, *args, **kwargs):
self.log(logging.INFO, message, *args, **kwargs)
def debug(self, message, *args, **kwargs):
self.log(logging.DEBUG, message, *args, **kwargs)
def error(self, message, *args, **kwargs):
self.log(logging.ERROR, message, *args, **kwargs)
def critical(self, message, *args, **kwargs):
self.log(logging.CRITICAL, message, *args, **kwargs)
    def exception(self, message, *args, **kwargs):
        # logging has no EXCEPTION level; log exceptions at ERROR
        self.log(logging.ERROR, message, *args, **kwargs)
    def warning(self, message, *args, **kwargs):
        self.log(logging.WARNING, message, *args, **kwargs)
    warn = warning
def _make_extra(self):
result = dict(self.extra_proto, action=self._action)
try:
if hasattr(c, 'app') and c.app:
result['tool_type'] = c.app.config.tool_name
result['tool_mount'] = c.app.config.options['mount_point']
if hasattr(c, 'project') and c.project:
result['project'] = c.project.shortname
result['neighborhood'] = c.project.neighborhood.name
if hasattr(c, 'user') and c.user:
result['username'] = c.user.username
else:
result['username'] = '*system'
try:
result['url'] = request.url
ip_address = request.headers.get(
'X_FORWARDED_FOR', request.remote_addr)
if ip_address is not None:
ip_address = ip_address.split(',')[0].strip()
result['ip_address'] = ip_address
else:
result['ip_address'] = '0.0.0.0'
except TypeError:
pass
return result
except:
self._logger.warning(
'Error logging to rtstats, some info may be missing', exc_info=True)
return result
def paging_sanitizer(limit, page, total_count, zero_based_pages=True):
"""Return limit, page - both converted to int and constrained to
valid ranges based on total_count.
Useful for sanitizing limit and page query params.
"""
limit = max(int(limit), 1)
max_page = (total_count / limit) + (1 if total_count % limit else 0)
max_page = max(0, max_page - (1 if zero_based_pages else 0))
page = min(max(int(page), (0 if zero_based_pages else 1)), max_page)
return limit, page
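# Worked example: paging_sanitizer(10, 5, 35) -> (10, 3); with 35 items and a
# limit of 10 there are 4 pages, so the zero-based page index is clamped to 3.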
def _add_inline_line_numbers_to_text(text):
markup_text = '<div class="codehilite"><pre>'
for line_num, line in enumerate(text.splitlines(), 1):
markup_text = markup_text + \
'<span id="l%s" class="code_block"><span class="lineno">%s</span> %s</span>' % (
line_num, line_num, line)
markup_text = markup_text + '</pre></div>'
return markup_text
def _add_table_line_numbers_to_text(text):
def _prepend_whitespaces(num, max_num):
num, max_num = str(num), str(max_num)
diff = len(max_num) - len(num)
return ' ' * diff + num
def _len_to_str_column(l, start=1):
max_num = l + start
return '\n'.join(map(_prepend_whitespaces, range(start, max_num), [max_num] * l))
lines = text.splitlines(True)
linenumbers = '<td class="linenos"><div class="linenodiv"><pre>' + \
_len_to_str_column(len(lines)) + '</pre></div></td>'
markup_text = '<table class="codehilitetable"><tbody><tr>' + \
linenumbers + '<td class="code"><div class="codehilite"><pre>'
for line_num, line in enumerate(lines, 1):
markup_text = markup_text + \
'<span id="l%s" class="code_block">%s</span>' % (line_num, line)
markup_text = markup_text + '</pre></div></td></tr></tbody></table>'
return markup_text
INLINE = 'inline'
TABLE = 'table'
def render_any_markup(name, text, code_mode=False, linenumbers_style=TABLE):
"""
    renders markdown using allura enhancements if the file is in markdown format
renders any other markup format using the pypeline
Returns jinja-safe text
"""
if text == '':
text = '<p><em>Empty File</em></p>'
else:
fmt = g.pypeline_markup.can_render(name)
if fmt == 'markdown':
text = g.markdown.convert(text)
else:
text = g.pypeline_markup.render(name, text)
if not fmt:
if code_mode and linenumbers_style == INLINE:
text = _add_inline_line_numbers_to_text(text)
elif code_mode and linenumbers_style == TABLE:
text = _add_table_line_numbers_to_text(text)
else:
text = '<pre>%s</pre>' % text
return Markup(text)
# copied from jinja2 dev
# latest release, 2.6, implements this incorrectly
# can remove and use jinja2 implementation after upgrading to 2.7
def do_filesizeformat(value, binary=False):
"""Format the value like a 'human-readable' file size (i.e. 13 kB,
4.1 MB, 102 Bytes, etc). Per default decimal prefixes are used (Mega,
Giga, etc.), if the second parameter is set to `True` the binary
prefixes are used (Mebi, Gibi).
"""
bytes = float(value)
base = binary and 1024 or 1000
prefixes = [
(binary and 'KiB' or 'kB'),
(binary and 'MiB' or 'MB'),
(binary and 'GiB' or 'GB'),
(binary and 'TiB' or 'TB'),
(binary and 'PiB' or 'PB'),
(binary and 'EiB' or 'EB'),
(binary and 'ZiB' or 'ZB'),
(binary and 'YiB' or 'YB')
]
if bytes == 1:
return '1 Byte'
elif bytes < base:
return '%d Bytes' % bytes
else:
for i, prefix in enumerate(prefixes):
unit = base ** (i + 2)
if bytes < unit:
return '%.1f %s' % ((base * bytes / unit), prefix)
return '%.1f %s' % ((base * bytes / unit), prefix)
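# For example: do_filesizeformat(1300000) -> '1.3 MB', and
# do_filesizeformat(1024, binary=True) -> '1.0 KiB'.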
def log_if_changed(artifact, attr, new_val, message):
"""Set `artifact.attr` to `new_val` if changed. Add AuditLog record."""
from allura import model as M
if not hasattr(artifact, attr):
return
if getattr(artifact, attr) != new_val:
M.AuditLog.log(message)
setattr(artifact, attr, new_val)
def get_tool_packages(tool_name):
"Return package for given tool (e.g. 'forgetracker' for 'tickets')"
from allura.app import Application
app = g.entry_points['tool'].get(tool_name.lower())
if not app:
return []
classes = [c for c in app.mro() if c not in (Application, object)]
return [cls.__module__.split('.')[0] for cls in classes]
def get_first(d, key):
"""Return value for d[key][0] if d[key] is a list with elements, else return d[key].
Useful to retrieve values from solr index (e.g. `title` and `text` fields),
which are stored as lists.
"""
v = d.get(key)
if isinstance(v, list):
return v[0] if len(v) > 0 else None
return v
def datetimeformat(value, format='%Y-%m-%d %H:%M:%S'):
return value.strftime(format)
@contextmanager
def log_output(log):
class Writer(object):
def __init__(self, func):
self.func = func
self.closed = False
def write(self, buf):
self.func(buf)
def flush(self):
pass
_stdout = sys.stdout
_stderr = sys.stderr
sys.stdout = Writer(log.info)
sys.stderr = Writer(log.error)
try:
yield log
finally:
sys.stdout = _stdout
sys.stderr = _stderr
def topological_sort(items, partial_order):
"""Perform topological sort.
items is a list of items to be sorted.
partial_order is a list of pairs. If pair (a,b) is in it, it means
that item a should appear before item b.
Returns a list of the items in one of the possible orders, or None
if partial_order contains a loop.
Modified from: http://www.bitformation.com/art/python_toposort.html
"""
# Original topological sort code written by Ofer Faigon
# (www.bitformation.com) and used with permission
def add_arc(graph, fromnode, tonode):
"""Add an arc to a graph. Can create multiple arcs.
The end nodes must already exist."""
graph[fromnode].append(tonode)
# Update the count of incoming arcs in tonode.
graph[tonode][0] = graph[tonode][0] + 1
# step 1 - create a directed graph with an arc a->b for each input
# pair (a,b).
# The graph is represented by a dictionary. The dictionary contains
# a pair item:list for each node in the graph. /item/ is the value
# of the node. /list/'s 1st item is the count of incoming arcs, and
# the rest are the destinations of the outgoing arcs. For example:
# {'a':[0,'b','c'], 'b':[1], 'c':[1]}
# represents the graph: c <-- a --> b
# The graph may contain loops and multiple arcs.
# Note that our representation does not contain reference loops to
# cause GC problems even when the represented graph contains loops,
# because we keep the node names rather than references to the nodes.
graph = defaultdict(lambda: [0])
for a, b in partial_order:
add_arc(graph, a, b)
# Step 2 - find all roots (nodes with zero incoming arcs).
roots = [n for n in items if graph[n][0] == 0]
roots.reverse() # keep sort stable
# step 3 - repeatedly emit a root and remove it from the graph. Removing
# a node may convert some of the node's direct children into roots.
# Whenever that happens, we append the new roots to the list of
# current roots.
sorted = []
while roots:
# If len(roots) is always 1 when we get here, it means that
# the input describes a complete ordering and there is only
# one possible output.
# When len(roots) > 1, we can choose any root to send to the
# output; this freedom represents the multiple complete orderings
# that satisfy the input restrictions. We arbitrarily take one of
# the roots using pop(). Note that for the algorithm to be efficient,
# this operation must be done in O(1) time.
root = roots.pop()
sorted.append(root)
for child in graph[root][1:]:
graph[child][0] = graph[child][0] - 1
if graph[child][0] == 0:
roots.append(child)
del graph[root]
if len(graph) > 0:
# There is a loop in the input.
return None
return sorted
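# For example: topological_sort(['a', 'b', 'c'], [('a', 'b'), ('a', 'c')])
# returns one valid order; with this implementation, ['a', 'c', 'b'].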
@contextmanager
def ming_config(**conf):
"""Temporarily swap in a new ming configuration, restoring the previous
one when the contextmanager exits.
:param \*\*conf: keyword arguments defining the new ming configuration
"""
import ming
from ming.session import Session
datastores = Session._datastores
try:
ming.configure(**conf)
yield
finally:
Session._datastores = datastores
for name, session in Session._registry.iteritems():
session.bind = datastores.get(name, None)
session._name = name
@contextmanager
def ming_config_from_ini(ini_path):
"""Temporarily swap in a new ming configuration, restoring the previous
one when the contextmanager exits.
:param ini_path: Path to ini file containing the ming configuration
"""
root = pkg_resources.get_distribution('allura').location
conf = appconfig('config:%s' % os.path.join(root, ini_path))
with ming_config(**conf):
yield
def split_select_field_options(field_options):
try:
        # shlex has problems with parsing unicode;
        # it's better to pass a properly encoded byte-string
        field_options = shlex.split(field_options.encode('utf-8'))
        # convert the split strings back to unicode
        field_options = map(really_unicode, field_options)
except ValueError:
field_options = field_options.split()
# After regular split field_options might contain a " characters,
# which would break html when rendered inside tag's value attr.
# Escaping doesn't help here, 'cause it breaks EasyWidgets' validation,
# so we're getting rid of those.
field_options = [o.replace('"', '') for o in field_options]
return field_options
@contextmanager
def notifications_disabled(project, disabled=True):
"""Temporarily disable email notifications on a project.
"""
orig = project.notifications_disabled
try:
project.notifications_disabled = disabled
yield
finally:
project.notifications_disabled = orig
@contextmanager
def null_contextmanager(*args, **kw):
"""A no-op contextmanager.
"""
yield
class exceptionless(object):
'''Decorator making the decorated function return 'error_result' on any
exceptions rather than propagating exceptions up the stack
'''
def __init__(self, error_result, log=None):
self.error_result = error_result
self.log = log
def __call__(self, fun):
fname = 'exceptionless(%s)' % fun.__name__
def inner(*args, **kwargs):
try:
return fun(*args, **kwargs)
except Exception as e:
if self.log:
self.log.exception(
'Error calling %s(args=%s, kwargs=%s): %s',
fname, args, kwargs, str(e))
return self.error_result
inner.__name__ = fname
return inner
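# Illustrative usage (a sketch, not from the original source): any exception
# raised inside the wrapped function is logged (when a logger is supplied)
# and the fallback value is returned instead of propagating:
#
#   @exceptionless([], log=log)
#   def parse_tags(raw):
#       return raw.split(',')  # yields [] instead of raising when raw is None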
def urlopen(url, retries=3, codes=(408,), timeout=None):
"""Open url, optionally retrying if an error is encountered.
Socket timeouts will always be retried if retries > 0.
HTTP errors are retried if the error code is passed in ``codes``.
    :param retries: Number of times to retry.
:param codes: HTTP error codes that should be retried.
"""
attempts = 0
while True:
try:
return urllib2.urlopen(url, timeout=timeout)
except (urllib2.HTTPError, socket.timeout) as e:
if attempts < retries and (isinstance(e, socket.timeout) or
e.code in codes):
attempts += 1
continue
else:
try:
url_string = url.get_full_url() # if url is Request obj
except Exception:
url_string = url
if timeout is None:
timeout = socket.getdefaulttimeout()
log.exception(
'Failed after %s retries on url with a timeout of %s: %s: %s',
attempts, timeout, url_string, e)
raise e
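# Illustrative usage (a sketch): retry a flaky feed up to 5 times, also
# retrying on HTTP 503, with a 10-second socket timeout per attempt:
#
#   resp = urlopen('http://example.com/feed', retries=5,
#                  codes=(408, 503), timeout=10)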
def plain2markdown(text, preserve_multiple_spaces=False, has_html_entities=False):
if not has_html_entities:
        # prevent &foo; and &#123; from becoming HTML entities
        text = re_amp.sub('&amp;', text)
# avoid accidental 4-space indentations creating code blocks
if preserve_multiple_spaces:
text = text.replace('\t', ' ' * 4)
        text = re_preserve_spaces.sub('&nbsp;', text)
else:
text = re_leading_spaces.sub('', text)
try:
# try to use html2text for most of the escaping
import html2text
html2text.BODY_WIDTH = 0
text = html2text.escape_md_section(text, snob=True)
except ImportError:
# fall back to just escaping any MD-special chars
text = md_chars_matcher_all.sub(r"\\\1", text)
# prevent < and > from becoming tags
    text = re_angle_bracket_open.sub('&lt;', text)
    text = re_angle_bracket_close.sub('&gt;', text)
return text
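# Illustrative example (a sketch): plain text comes out safe for the
# Markdown renderer; '<', '>' and '&' become entities and MD-special
# characters such as '*' are backslash-escaped:
#
#   safe = plain2markdown('use <b>tags</b> & *stars*')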
def iter_entry_points(group, *a, **kw):
"""Yields entry points that have not been disabled in the config.
If ``group`` is "allura" (Allura tool entry points), this function also
checks for multiple entry points with the same name. If there are
multiple entry points with the same name, and one of them is a subclass
of the other(s), it will be yielded, and the other entry points with that
name will be ignored. If a subclass is not found, an ImportError will be
raised.
This treatment of "allura" entry points allows tool authors to subclass
another tool while reusing the original entry point name.
"""
def active_eps():
disabled = aslist(
tg.config.get('disable_entry_points.' + group), sep=',')
return [ep for ep in pkg_resources.iter_entry_points(group, *a, **kw)
if ep.name not in disabled]
def unique_eps(entry_points):
by_name = defaultdict(list)
for ep in entry_points:
by_name[ep.name].append(ep)
for name, eps in by_name.iteritems():
ep_count = len(eps)
if ep_count == 1:
yield eps[0]
else:
yield subclass(eps)
def subclass(entry_points):
loaded = dict((ep, ep.load()) for ep in entry_points)
for ep, cls in loaded.iteritems():
others = loaded.values()[:]
others.remove(cls)
if all([issubclass(cls, other) for other in others]):
return ep
raise ImportError('Ambiguous [allura] entry points detected. ' +
'Multiple entry points with name "%s".' % entry_points[0].name)
return iter(unique_eps(active_eps()) if group == 'allura' else active_eps())
# http://stackoverflow.com/a/1060330/79697
def daterange(start_date, end_date):
for n in range(int((end_date - start_date).days)):
yield start_date + timedelta(n)
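# Illustrative usage: iterates over each day in the half-open range
# [start_date, end_date):
#
#   >>> from datetime import date
#   >>> list(daterange(date(2020, 1, 1), date(2020, 1, 4)))
#   [datetime.date(2020, 1, 1), datetime.date(2020, 1, 2), datetime.date(2020, 1, 3)]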
@contextmanager
def login_overlay(exceptions=None):
"""
Override the default behavior of redirecting to the auth.login_url and
instead display an overlay with content from auth.login_fragment_url.
    This is for pages that require authentication for any action but not
    for the initial view, so that it is more apparent what you will get
    once logged in.
This should be wrapped around call to `require_access()` (presumably in
the `_check_security()` method on a controller). The `exceptions` param
can be given a list of exposed views to leave with the original behavior.
For example:
    class MyController(BaseController):
def _check_security(self):
with login_overlay(exceptions=['process']):
require_access(self.neighborhood, 'register')
@expose
def index(self, *args, **kw):
return {}
@expose
def list(self, *args, **kw):
return {}
@expose
def process(self, *args, **kw):
return {}
This would show the overlay to unauthenticated users who visit `/`
or `/list` but would perform the normal redirect when `/process` is
visited.
"""
try:
yield
except HTTPUnauthorized as e:
if exceptions:
for exception in exceptions:
if request.path.rstrip('/').endswith('/%s' % exception):
raise
c.show_login_overlay = True
def get_filter(ctx, filter_name):
"""
Gets a named Jinja2 filter, passing through
any context requested by the filter.
"""
filter_ = ctx.environment.filters[filter_name]
if getattr(filter_, 'contextfilter', False):
return partial(filter_, ctx)
elif getattr(filter_, 'evalcontextfilter', False):
return partial(filter_, ctx.eval_ctx)
    elif getattr(filter_, 'environmentfilter', False):
        return partial(filter_, ctx.environment)
    return filter_
@contextfilter
def map_jinja_filter(ctx, seq, filter_name, *a, **kw):
"""
A Jinja2 filter that applies the named filter with the
given args to the sequence this filter is applied to.
"""
filter_ = get_filter(ctx, filter_name)
return [filter_(value, *a, **kw) for value in seq]
def unidiff(old, new):
"""Returns unified diff between `one` and `two`."""
return '\n'.join(difflib.unified_diff(
a=old.splitlines(),
b=new.splitlines(),
fromfile='old',
tofile='new',
lineterm=''))
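# Illustrative usage:
#
#   >>> print(unidiff('a\nb\n', 'a\nc\n'))
#   --- old
#   +++ new
#   @@ -1,2 +1,2 @@
#    a
#   -b
#   +c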
| apache/incubator-allura | Allura/allura/lib/helpers.py | Python | apache-2.0 | 38,038 | ["VisIt"] | bba6eeec80e9d63cc9b24c75bee1f66b1a1e8ffc59aed621f2339f642b210fe6 |
# -*- coding: utf-8 -*-
# vim: tabstop=4 shiftwidth=4 softtabstop=4
#
# LICENSE
#
# Copyright (c) 2010-2017, GEM Foundation, G. Weatherill, M. Pagani,
# D. Monelli.
#
# The Hazard Modeller's Toolkit is free software: you can redistribute
# it and/or modify it under the terms of the GNU Affero General Public
# License as published by the Free Software Foundation, either version
# 3 of the License, or (at your option) any later version.
#
# You should have received a copy of the GNU Affero General Public License
# along with OpenQuake. If not, see <http://www.gnu.org/licenses/>
#
# DISCLAIMER
#
# The software Hazard Modeller's Toolkit (hmtk) provided herein
# is released as a prototype implementation on behalf of
# scientists and engineers working within the GEM Foundation (Global
# Earthquake Model).
#
# It is distributed for the purpose of open collaboration and in the
# hope that it will be useful to the scientific, engineering, disaster
# risk and software design communities.
#
# The software is NOT distributed as part of GEM’s OpenQuake suite
# (http://www.globalquakemodel.org/openquake) and must be considered as a
# separate entity. The software provided herein is designed and implemented
# by scientific staff. It is not developed to the design standards, nor
# subject to same level of critical review by professional software
# developers, as GEM’s OpenQuake software suite.
#
# Feedback and contribution to the software is welcome, and can be
# directed to the hazard scientific staff of the GEM Model Facility
# (hazard@globalquakemodel.org).
#
# The Hazard Modeller's Toolkit (hmtk) is therefore distributed WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
# for more details.
#
# The GEM Foundation, and the authors of the software, assume no
# liability for use of the software.
#!/usr/bin/env python
'''
Utility functions for seismicity calculations
'''
from __future__ import division
import numpy as np
from shapely import geometry
try:
from scipy.stats._continuous_distns import (truncnorm_gen,
_norm_cdf, _norm_sf,
_norm_ppf, _norm_isf)
class hmtk_truncnorm_gen(truncnorm_gen):
"""
At present, the scipy.stats.truncnorm.rvs object does not support
vector inputs for the bounds - this piece of duck punching changes that
"""
def _argcheck(self, a, b):
self.a = a
self.b = b
self._nb = _norm_cdf(b)
self._na = _norm_cdf(a)
self._sb = _norm_sf(b)
self._sa = _norm_sf(a)
self._delta = self._nb - self._na
idx = self.a > 0
self._delta[idx] = -(self._sb[idx] - self._sa[idx])
self._logdelta = np.log(self._delta)
return (a != b)
def _ppf(self, q, a, b):
output = np.zeros_like(self.a)
idx = self.a > 0
if np.any(idx):
output[idx] = _norm_isf(q[idx]*self._sb[idx] +
self._sa[idx]*(-q[idx] + 1.0))
idx = np.logical_not(idx)
if np.any(idx):
output[idx] = _norm_ppf(q[idx]*self._nb[idx] +
self._na[idx]*(-q[idx] + 1.0))
return output
hmtk_truncnorm = hmtk_truncnorm_gen(name="hmtk_truncnorm")
except ImportError:
print("Continuous distributions not available on Scipy version < 0.15\n")
print("Bootstrap sampling of the depth distribution will raise an error")
hmtk_truncnorm = None
MARKER_NORMAL = np.array([0, 31, 59, 90, 120, 151, 181,
212, 243, 273, 304, 334])
MARKER_LEAP = np.array([0, 31, 60, 91, 121, 152, 182,
213, 244, 274, 305, 335])
SECONDS_PER_DAY = 86400.0
def decimal_year(year, month, day):
"""
    Calculates the decimal year for a vector of dates
    (TODO: this is legacy code kept to maintain comparability with previous
    declustering algorithms!)
:param year: year column from catalogue matrix
:type year: numpy.ndarray
:param month: month column from catalogue matrix
:type month: numpy.ndarray
:param day: day column from catalogue matrix
:type day: numpy.ndarray
:returns: decimal year column
:rtype: numpy.ndarray
"""
marker = np.array([0., 31., 59., 90., 120., 151., 181.,
212., 243., 273., 304., 334.])
tmonth = (month - 1).astype(int)
day_count = marker[tmonth] + day - 1.
dec_year = year + (day_count / 365.)
return dec_year
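# Illustrative usage (a sketch): 1 July is day 181 of a non-leap year, so
# the decimal year for 1 July 1990 is 1990 + 181 / 365:
#
#   >>> decimal_year(np.array([1990]), np.array([7]), np.array([1]))
#   array([ 1990.49589041])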
def leap_check(year):
"""
Returns logical array indicating if year is a leap year
"""
return np.logical_and((year % 4) == 0,
np.logical_or((year % 100 != 0), (year % 400) == 0))
def decimal_time(year, month, day, hour, minute, second):
"""
Returns the full time as a decimal value
:param year:
Year of events (integer numpy.ndarray)
:param month:
Month of events (integer numpy.ndarray)
:param day:
Days of event (integer numpy.ndarray)
:param hour:
Hour of event (integer numpy.ndarray)
:param minute:
Minute of event (integer numpy.ndarray)
:param second:
Second of event (float numpy.ndarray)
:returns decimal_time:
Decimal representation of the time (as numpy.ndarray)
"""
tmo = np.ones_like(year, dtype=int)
tda = np.ones_like(year, dtype=int)
tho = np.zeros_like(year, dtype=int)
tmi = np.zeros_like(year, dtype=int)
tse = np.zeros_like(year, dtype=float)
if any(month):
tmo = month
if any(day):
tda = day
if any(hour):
tho = hour
if any(minute):
tmi = minute
if any(second):
tse = second
tmonth = tmo - 1
day_count = MARKER_NORMAL[tmonth] + tda - 1
id_leap = leap_check(year)
leap_loc = np.where(id_leap)[0]
day_count[leap_loc] = MARKER_LEAP[tmonth[leap_loc]] + tda[leap_loc] - 1
year_secs = (day_count.astype(float) * SECONDS_PER_DAY) + tse + \
(60. * tmi.astype(float)) + (3600. * tho.astype(float))
dtime = year.astype(float) + (year_secs / (365. * 24. * 3600.))
dtime[leap_loc] = year[leap_loc].astype(float) + \
(year_secs[leap_loc] / (366. * 24. * 3600.))
return dtime
def haversine(lon1, lat1, lon2, lat2, radians=False, earth_rad=6371.227):
"""
    Calculates the geographical distance between two sets of locations
    using the haversine formula.
    :param lon1: longitude of the first set of locations
    :type lon1: numpy.ndarray
    :param lat1: latitude of the first set of locations
:type lat1: numpy.ndarray
:param lon2: longitude of the second set of locations
:type lon2: numpy.float64
:param lat2: latitude of the second set of locations
:type lat2: numpy.float64
:keyword radians: states if locations are given in terms of radians
:type radians: bool
:keyword earth_rad: radius of the earth in km
:type earth_rad: float
:returns: geographical distance in km
:rtype: numpy.ndarray
"""
if not radians:
cfact = np.pi / 180.
lon1 = cfact * lon1
lat1 = cfact * lat1
lon2 = cfact * lon2
lat2 = cfact * lat2
# Number of locations in each set of points
if not np.shape(lon1):
nlocs1 = 1
lon1 = np.array([lon1])
lat1 = np.array([lat1])
else:
nlocs1 = np.max(np.shape(lon1))
if not np.shape(lon2):
nlocs2 = 1
lon2 = np.array([lon2])
lat2 = np.array([lat2])
else:
nlocs2 = np.max(np.shape(lon2))
# Pre-allocate array
distance = np.zeros((nlocs1, nlocs2))
i = 0
while i < nlocs2:
# Perform distance calculation
dlat = lat1 - lat2[i]
dlon = lon1 - lon2[i]
aval = (np.sin(dlat / 2.) ** 2.) + (np.cos(lat1) * np.cos(lat2[i]) *
(np.sin(dlon / 2.) ** 2.))
distance[:, i] = (2. * earth_rad * np.arctan2(np.sqrt(aval),
np.sqrt(1 - aval))).T
i += 1
return distance
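# Illustrative usage: one degree of longitude along the equator spans
# roughly 111.2 km:
#
#   >>> haversine(0.0, 0.0, 1.0, 0.0)
#   array([[ 111.19...]])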
def greg2julian(year, month, day, hour, minute, second):
"""
Function to convert a date from Gregorian to Julian format
:param year:
Year of events (integer numpy.ndarray)
:param month:
Month of events (integer numpy.ndarray)
:param day:
Days of event (integer numpy.ndarray)
:param hour:
Hour of event (integer numpy.ndarray)
:param minute:
Minute of event (integer numpy.ndarray)
:param second:
Second of event (float numpy.ndarray)
:returns julian_time:
Julian representation of the time (as float numpy.ndarray)
"""
year = year.astype(float)
month = month.astype(float)
day = day.astype(float)
timeut = hour.astype(float) + (minute.astype(float) / 60.0) + \
(second / 3600.0)
julian_time = ((367.0 * year) -
np.floor(
7.0 * (year + np.floor((month + 9.0) / 12.0)) / 4.0) -
np.floor(3.0 *
(np.floor((year + (month - 9.0) / 7.0) / 100.0) +
1.0) / 4.0) +
np.floor((275.0 * month) / 9.0) +
day + 1721028.5 + (timeut / 24.0))
return julian_time
def piecewise_linear_scalar(params, xval):
'''Piecewise linear function for a scalar variable xval (float).
:param params:
Piecewise linear parameters (numpy.ndarray) in the following form:
[slope_i,... slope_n, turning_point_i, ..., turning_point_n, intercept]
        Length of params == 2 * number_segments, e.g.
[slope_1, slope_2, slope_3, turning_point1, turning_point_2, intercept]
:param xval:
Value for evaluation of function (float)
:returns:
Piecewise linear function evaluated at point xval (float)
'''
n_params = len(params)
n_seg, remainder = divmod(n_params, 2)
if remainder:
raise ValueError(
'Piecewise Function requires 2 * nsegments parameters')
if n_seg == 1:
return params[1] + params[0] * xval
gradients = params[0:n_seg]
turning_points = params[n_seg: -1]
c_val = np.array([params[-1]])
for iloc in range(1, n_seg):
c_val = np.hstack(
[c_val, (c_val[iloc - 1] + gradients[iloc - 1] *
turning_points[iloc - 1]) - (gradients[iloc] *
turning_points[iloc - 1])])
if xval <= turning_points[0]:
return gradients[0] * xval + c_val[0]
elif xval > turning_points[-1]:
return gradients[-1] * xval + c_val[-1]
else:
select = np.nonzero(turning_points <= xval)[0][-1] + 1
return gradients[select] * xval + c_val[select]
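# Illustrative usage (a sketch): two segments with slopes 2.0 then 0.5 and a
# turning point at x = 1, i.e. params = [slope_1, slope_2, turning_point_1,
# intercept]:
#
#   >>> piecewise_linear_scalar(np.array([2.0, 0.5, 1.0, 0.0]), 0.5)
#   1.0
#   >>> piecewise_linear_scalar(np.array([2.0, 0.5, 1.0, 0.0]), 2.0)
#   2.5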
def sample_truncated_gaussian_vector(data, uncertainties, bounds=None):
'''
Samples a Gaussian distribution subject to boundaries on the data
:param numpy.ndarray data:
Vector of N data values
:param numpy.ndarray uncertainties:
Vector of N data uncertainties
:param tuple bounds:
(Lower, Upper) bound of data space
'''
nvals = len(data)
if bounds:
#if bounds[0] or (fabs(bounds[0]) < 1E-12):
if bounds[0] is not None:
lower_bound = (bounds[0] - data) / uncertainties
else:
lower_bound = -np.inf * np.ones_like(data)
#if bounds[1] or (fabs(bounds[1]) < 1E-12):
if bounds[1] is not None:
upper_bound = (bounds[1] - data) / uncertainties
else:
upper_bound = np.inf * np.ones_like(data)
sample = hmtk_truncnorm.rvs(lower_bound, upper_bound, size=nvals)
else:
sample = np.random.normal(0., 1., nvals)
return data + uncertainties * sample
def hmtk_histogram_1D(values, intervals, offset=1.0E-10):
"""
So, here's the problem. We tend to refer to certain data (like magnitudes)
rounded to the nearest 0.1 (or similar, i.e. 4.1, 5.7, 8.3 etc.). We also
like our tables to fall on on the same interval, i.e. 3.1, 3.2, 3.3 etc.
We usually assume that the counter should correspond to the low edge,
i.e. 3.1 is in the group 3.1 to 3.2 (i.e. L <= M < U).
    Floating point precision can be a bitch! When we read magnitudes in from
    files, 3.1 might be represented as 3.0999999999 or as 3.1000000000001,
    seemingly at random. Similarly, if np.arange() is used to generate the
    bin intervals then similar floating point problems emerge, and we
    frequently encounter density plots with empty rows or columns where data
    should be but isn't, because it has been assigned to the wrong group.
    Instead of using numpy's own histogram function we use a slower version
    that allows us to offset the intervals by a small amount and ensure that
    3.0999999999, 3.1, and 3.10000000001 all fall in the group 3.1 - 3.2!
:param numpy.ndarray values:
Values of data
:param numpy.ndarray intervals:
Data bins
:param float offset:
Small amount to offset the bins for floating point precision
:returns:
Count in each bin (as float)
"""
nbins = len(intervals) - 1
counter = np.zeros(nbins, dtype=float)
x_ints = intervals - offset
for i in range(nbins):
idx = np.logical_and(values >= x_ints[i], values < x_ints[i + 1])
counter[i] += float(np.sum(idx))
return counter
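# Illustrative example: values that are floating point artefacts of 3.1 all
# land in the 3.1 - 3.2 bin, whereas np.histogram would push 3.0999999999
# into the 3.0 - 3.1 bin:
#
#   >>> vals = np.array([3.0999999999, 3.1, 3.1000000001])
#   >>> hmtk_histogram_1D(vals, np.array([3.0, 3.1, 3.2]))
#   array([ 0.,  3.])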
def hmtk_histogram_2D(xvalues, yvalues, bins, x_offset=1.0E-10,
y_offset=1.0E-10):
"""
See the explanation for the 1D case - now applied to 2D.
:param numpy.ndarray xvalues:
Values of x-data
:param numpy.ndarray yvalues:
Values of y-data
:param tuple bins:
Tuple containing bin intervals for x-data and y-data (as numpy arrays)
:param float x_offset:
Small amount to offset the x-bins for floating point precision
:param float y_offset:
Small amount to offset the y-bins for floating point precision
:returns:
Count in each bin (as float)
"""
xbins, ybins = (bins[0] - x_offset, bins[1] - y_offset)
n_x = len(xbins) - 1
n_y = len(ybins) - 1
counter = np.zeros([n_y, n_x], dtype=float)
for j in range(n_y):
y_idx = np.logical_and(yvalues >= ybins[j], yvalues < ybins[j + 1])
x_vals = xvalues[y_idx]
for i in range(n_x):
idx = np.logical_and(x_vals >= xbins[i], x_vals < xbins[i + 1])
counter[j, i] += float(np.sum(idx))
return counter.T
def bootstrap_histogram_1D(
values, intervals, uncertainties=None,
normalisation=False, number_bootstraps=None, boundaries=None):
'''
Bootstrap samples a set of vectors
:param numpy.ndarray values:
The data values
:param numpy.ndarray intervals:
The bin edges
:param numpy.ndarray uncertainties:
The standard deviations of each observation
:param bool normalisation:
If True then returns the histogram as a density function
:param int number_bootstraps:
Number of bootstraps
:param tuple boundaries:
(Lower, Upper) bounds on the data
    :returns:
        1-D histogram of data
'''
    if not number_bootstraps or np.all(np.fabs(uncertainties) < 1E-12):
        # No bootstraps or all uncertainties are zero - return ordinary
        # histogram
#output = np.histogram(values, intervals)[0]
output = hmtk_histogram_1D(values, intervals)
if normalisation:
output = output / float(np.sum(output))
else:
output = output
return output
else:
temp_hist = np.zeros([len(intervals) - 1, number_bootstraps],
dtype=float)
for iloc in range(0, number_bootstraps):
sample = sample_truncated_gaussian_vector(values,
uncertainties,
boundaries)
#output = np.histogram(sample, intervals)[0]
output = hmtk_histogram_1D(sample, intervals)
temp_hist[:, iloc] = output
output = np.sum(temp_hist, axis=1)
if normalisation:
output = output / float(np.sum(output))
else:
output = output / float(number_bootstraps)
return output
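# Illustrative usage (a sketch; ``mags``, ``sigmas`` and ``bins`` are
# hypothetical arrays): bootstrap the magnitude histogram 50 times,
# truncating samples to the interval [3.0, 8.0]:
#
#   counts = bootstrap_histogram_1D(mags, bins, uncertainties=sigmas,
#                                   number_bootstraps=50,
#                                   boundaries=(3.0, 8.0))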
def bootstrap_histogram_2D(
xvalues, yvalues, xbins, ybins,
boundaries=[None, None], xsigma=None, ysigma=None,
normalisation=False, number_bootstraps=None):
'''
Calculates a 2D histogram of data, allowing for normalisation and
bootstrap sampling
:param numpy.ndarray xvalues:
Data values of the first variable
:param numpy.ndarray yvalues:
Data values of the second variable
:param numpy.ndarray xbins:
Bin edges for the first variable
:param numpy.ndarray ybins:
Bin edges for the second variable
:param list boundaries:
List of (Lower, Upper) tuples corresponding to the bounds of the
two data sets
:param numpy.ndarray xsigma:
Error values (standard deviatons) on first variable
:param numpy.ndarray ysigma:
Error values (standard deviatons) on second variable
:param bool normalisation:
If True then returns the histogram as a density function
:param int number_bootstraps:
Number of bootstraps
    :returns:
        2-D histogram of data
'''
    if (xsigma is None and ysigma is None) or not number_bootstraps:
        # No sampling - return simple 2-D histogram
#output = np.histogram2d(xvalues, yvalues, bins=[xbins, ybins])[0]
output = hmtk_histogram_2D(xvalues, yvalues, bins=(xbins, ybins))
if normalisation:
output = output / float(np.sum(output))
return output
else:
if xsigma is None:
xsigma = np.zeros(len(xvalues), dtype=float)
if ysigma is None:
ysigma = np.zeros(len(yvalues), dtype=float)
temp_hist = np.zeros(
[len(xbins) - 1, len(ybins) - 1, number_bootstraps],
dtype=float)
for iloc in range(0, number_bootstraps):
xsample = sample_truncated_gaussian_vector(xvalues, xsigma,
boundaries[0])
            ysample = sample_truncated_gaussian_vector(yvalues, ysigma,
                                                       boundaries[1])
#temp_hist[:, :, iloc] = np.histogram2d(xsample,
# ysample,
# bins=[xbins, ybins])[0]
temp_hist[:, :, iloc] = hmtk_histogram_2D(xsample,
ysample,
bins=(xbins, ybins))
if normalisation:
output = np.sum(temp_hist, axis=2)
output = output / np.sum(output)
else:
output = np.sum(temp_hist, axis=2) / float(number_bootstraps)
return output
# Parameters of WGS84 projection (in km)
WGS84 = {"a": 6378.137, "e": 0.081819191, "1/f": 298.257223563}
WGS84["e2"] = WGS84["e"] ** 2.
# Parameters of WGS84 projection (in m)
WGS84m = {"a": 6378137., "e": 0.081819191, "1/f": 298.2572221}
WGS84m["e2"] = WGS84m["e"] ** 2.
TO_Q = lambda lat: (
(1.0 - WGS84["e2"]) * (
(np.sin(lat) / (1.0 - (WGS84["e2"] * (np.sin(lat) ** 2.))) -
((1. / (2.0 * WGS84["e"])) * np.log((1.0 - WGS84["e"] * np.sin(lat)) /
(1.0 + WGS84["e"] * np.sin(lat))))))
)
TO_Qm = lambda lat: (
(1.0 - WGS84m["e2"]) * (
(np.sin(lat) / (1.0 - (WGS84m["e2"] * (np.sin(lat) ** 2.))) -
((1. / (2.0 * WGS84m["e"])) * np.log((1.0 - WGS84m["e"] * np.sin(lat)) /
(1.0 + WGS84m["e"] * np.sin(lat))))))
)
def lonlat_to_laea(lon, lat, lon0, lat0, f_e=0.0, f_n=0.0):
"""
Converts vectors of longitude and latitude into Lambert Azimuthal
Equal Area projection (km), with respect to an origin point
:param numpy.ndarray lon:
Longitudes
:param numpy.ndarray lat:
Latitude
:param float lon0:
Central longitude
:param float lat0:
Central latitude
:param float f_e:
False easting (km)
    :param float f_n:
        False northing (km)
:returns:
* easting (km)
* northing (km)
"""
lon = np.radians(lon)
lat = np.radians(lat)
lon0 = np.radians(lon0)
lat0 = np.radians(lat0)
q_0 = TO_Q(lat0)
q_p = TO_Q(np.pi / 2.)
q_val = TO_Q(lat)
beta = np.arcsin(q_val / q_p)
beta0 = np.arcsin(q_0 / q_p)
r_q = WGS84["a"] * np.sqrt(q_p / 2.)
dval = WGS84["a"] * (
np.cos(lat0) / np.sqrt(1.0 - (WGS84["e2"] * (np.sin(lat0) ** 2.))) /
(r_q * np.cos(beta0)))
bval = r_q * np.sqrt(
2. / (1.0 + (np.sin(beta0) * np.sin(beta)) + (np.cos(beta) *
np.cos(beta0) * np.cos(lon - lon0))))
easting = f_e + ((bval * dval) * (np.cos(beta) * np.sin(lon - lon0)))
northing = f_n + (bval / dval) * ((np.cos(beta0) * np.sin(beta)) -
(np.sin(beta0) * np.cos(beta) * np.cos(lon - lon0)))
return easting, northing
def area_of_polygon(polygon):
"""
Returns the area of an OpenQuake polygon in square kilometres
"""
lon0 = np.mean(polygon.lons)
lat0 = np.mean(polygon.lats)
    # Transform to Lambert azimuthal equal area projection
x, y = lonlat_to_laea(polygon.lons, polygon.lats, lon0, lat0)
# Build shapely polygons
poly = geometry.Polygon(zip(x, y))
return poly.area
| gem/oq-hazardlib | openquake/hmtk/seismicity/utils.py | Python | agpl-3.0 | 21,990 | ["Gaussian"] | 36764635470ef2c083104777a47f14066b84479d95273055b51b973ecef75fc4 |
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.conf import settings
from django.conf.urls import include, url
from django.conf.urls.static import static
from django.contrib import admin
from django.views.generic import TemplateView
from qa_feedback.views import homepage
urlpatterns = [
url(r'^$', TemplateView.as_view(template_name='pages/home.html'), name="home"),
url(r'^about/$', TemplateView.as_view(template_name='pages/about.html'), name="about"),
# Django Admin
url(r'^admin/', include(admin.site.urls)),
url(r'^home', homepage, name='home'),
# User management
url(r'^users/', include("django_qa.users.urls", namespace="users")),
url(r'^accounts/', include('allauth.urls')),
url(r'^api/', include('qa_feedback.api.urls')),
# Your stuff: custom urls includes go here
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
if settings.DEBUG:
    # This allows the error pages to be debugged during development; just
    # visit these URLs in a browser to see what the error pages look like.
urlpatterns += [
url(r'^400/$', 'django.views.defaults.bad_request'),
url(r'^403/$', 'django.views.defaults.permission_denied'),
url(r'^404/$', 'django.views.defaults.page_not_found'),
url(r'^500/$', 'django.views.defaults.server_error'),
]
# url(r'^api/', include('qa_feedback.api.urls'),
| milin/django-qa | config/urls.py | Python | bsd-3-clause | 1,418 | ["VisIt"] | 540d464d1e69658a8de807d0f62b2aeff20878e15ac5bc71a295d9050a30c51f |
r"""
=====================================================================
The Johnson-Lindenstrauss bound for embedding with random projections
=====================================================================
The `Johnson-Lindenstrauss lemma`_ states that any high dimensional
dataset can be randomly projected into a lower dimensional Euclidean
space while controlling the distortion in the pairwise distances.
.. _`Johnson-Lindenstrauss lemma`: https://en.wikipedia.org/wiki/Johnson%E2%80%93Lindenstrauss_lemma
Theoretical bounds
==================
The distortion introduced by a random projection `p` is controlled by
the fact that `p` defines an eps-embedding with good probability,
as defined by:
.. math::
(1 - eps) \|u - v\|^2 < \|p(u) - p(v)\|^2 < (1 + eps) \|u - v\|^2
Where u and v are any rows taken from a dataset of shape [n_samples,
n_features] and p is a projection by a random Gaussian N(0, 1) matrix
with shape [n_components, n_features] (or a sparse Achlioptas matrix).
The minimum number of components that guarantees the eps-embedding is
given by:
.. math::
n\_components >= 4 log(n\_samples) / (eps^2 / 2 - eps^3 / 3)
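For instance, with ``eps=0.5``, roughly 663 dimensions suffice for one
million samples (an illustrative check using the scikit-learn helper that
this example also relies on)::

    >>> from sklearn.random_projection import johnson_lindenstrauss_min_dim
    >>> johnson_lindenstrauss_min_dim(n_samples=1e6, eps=0.5)
    663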
The first plot shows that with an increasing number of samples ``n_samples``,
the minimal number of dimensions ``n_components`` increases logarithmically
in order to guarantee an ``eps``-embedding.
The second plot shows that an increase of the admissible
distortion ``eps`` allows the minimal number of dimensions
``n_components`` to be reduced drastically for a given number of samples
``n_samples``.
Empirical validation
====================
We validate the above bounds on the digits dataset or on the 20 newsgroups
text document (TF-IDF word frequencies) dataset:
- for the digits dataset, some 8x8 gray level pixels data for 500
handwritten digits pictures are randomly projected to spaces for various
larger number of dimensions ``n_components``.
- for the 20 newsgroups dataset some 500 documents with 100k
features in total are projected using a sparse random matrix to smaller
euclidean spaces with various values for the target number of dimensions
``n_components``.
The default dataset is the digits dataset. To run the example on the twenty
newsgroups dataset, pass the --twenty-newsgroups command line argument to this
script.
For each value of ``n_components``, we plot:
- 2D distribution of sample pairs with pairwise distances in original
and projected spaces as x and y axis respectively.
- 1D histogram of the ratio of those distances (projected / original).
We can see that for low values of ``n_components`` the distribution is wide
with many distorted pairs and a skewed distribution (due to the hard
limit of zero ratio on the left as distances are always positives)
while for larger values of n_components the distortion is controlled
and the distances are well preserved by the random projection.
Remarks
=======
According to the JL lemma, projecting 500 samples without too much distortion
will require at least several thousand dimensions, irrespective of the
number of features of the original dataset.
Hence using random projections on the digits dataset which only has 64 features
in the input space does not make sense: it does not allow for dimensionality
reduction in this case.
On the twenty newsgroups on the other hand the dimensionality can be decreased
from 56436 down to 10000 while reasonably preserving pairwise distances.
"""
print(__doc__)
import sys
from time import time
import numpy as np
import matplotlib.pyplot as plt
from sklearn.random_projection import johnson_lindenstrauss_min_dim
from sklearn.random_projection import SparseRandomProjection
from sklearn.datasets import fetch_20newsgroups_vectorized
from sklearn.datasets import load_digits
from sklearn.metrics.pairwise import euclidean_distances
# Part 1: plot the theoretical dependency between n_components_min and
# n_samples
# range of admissible distortions
eps_range = np.linspace(0.1, 0.99, 5)
colors = plt.cm.Blues(np.linspace(0.3, 1.0, len(eps_range)))
# range of number of samples (observation) to embed
n_samples_range = np.logspace(1, 9, 9)
plt.figure()
for eps, color in zip(eps_range, colors):
min_n_components = johnson_lindenstrauss_min_dim(n_samples_range, eps=eps)
plt.loglog(n_samples_range, min_n_components, color=color)
plt.legend(["eps = %0.1f" % eps for eps in eps_range], loc="lower right")
plt.xlabel("Number of observations to eps-embed")
plt.ylabel("Minimum number of dimensions")
plt.title("Johnson-Lindenstrauss bounds:\nn_samples vs n_components")
# range of admissible distortions
eps_range = np.linspace(0.01, 0.99, 100)
# range of number of samples (observation) to embed
n_samples_range = np.logspace(2, 6, 5)
colors = plt.cm.Blues(np.linspace(0.3, 1.0, len(n_samples_range)))
plt.figure()
for n_samples, color in zip(n_samples_range, colors):
min_n_components = johnson_lindenstrauss_min_dim(n_samples, eps=eps_range)
plt.semilogy(eps_range, min_n_components, color=color)
plt.legend(["n_samples = %d" % n for n in n_samples_range], loc="upper right")
plt.xlabel("Distortion eps")
plt.ylabel("Minimum number of dimensions")
plt.title("Johnson-Lindenstrauss bounds:\nn_components vs eps")
# Part 2: perform sparse random projection of some digits images which are
# quite low dimensional and dense or documents of the 20 newsgroups dataset
# which is both high dimensional and sparse
if '--twenty-newsgroups' in sys.argv:
# Need an internet connection hence not enabled by default
data = fetch_20newsgroups_vectorized().data[:500]
else:
data = load_digits().data[:500]
n_samples, n_features = data.shape
print("Embedding %d samples with dim %d using various random projections"
% (n_samples, n_features))
n_components_range = np.array([300, 1000, 10000])
dists = euclidean_distances(data, squared=True).ravel()
# select only non-identical samples pairs
nonzero = dists != 0
dists = dists[nonzero]
for n_components in n_components_range:
t0 = time()
rp = SparseRandomProjection(n_components=n_components)
projected_data = rp.fit_transform(data)
print("Projected %d samples from %d to %d in %0.3fs"
% (n_samples, n_features, n_components, time() - t0))
if hasattr(rp, 'components_'):
n_bytes = rp.components_.data.nbytes
n_bytes += rp.components_.indices.nbytes
print("Random matrix with size: %0.3fMB" % (n_bytes / 1e6))
projected_dists = euclidean_distances(
projected_data, squared=True).ravel()[nonzero]
plt.figure()
plt.hexbin(dists, projected_dists, gridsize=100, cmap=plt.cm.PuBu)
plt.xlabel("Pairwise squared distances in original space")
plt.ylabel("Pairwise squared distances in projected space")
plt.title("Pairwise distances distribution for n_components=%d" %
n_components)
cb = plt.colorbar()
cb.set_label('Sample pairs counts')
rates = projected_dists / dists
print("Mean distances rate: %0.2f (%0.2f)"
% (np.mean(rates), np.std(rates)))
plt.figure()
    plt.hist(rates, bins=50, density=True, range=(0., 2.))
plt.xlabel("Squared distances rate: projected / original")
plt.ylabel("Distribution of samples pairs")
plt.title("Histogram of pairwise distance rates for n_components=%d" %
n_components)
# TODO: compute the expected value of eps and add them to the previous plot
# as vertical lines / region
plt.show()
| RPGOne/Skynet | scikit-learn-0.18.1/examples/plot_johnson_lindenstrauss_bound.py | Python | bsd-3-clause | 7,474 | ["Gaussian"] | 27c0cbe9208a67017ea976d1bdbef669116726946a9fff768ce7345d4b3bf854 |
"""Functions to plot epochs data."""
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Denis Engemann <denis.engemann@gmail.com>
# Martin Luessi <mluessi@nmr.mgh.harvard.edu>
# Eric Larson <larson.eric.d@gmail.com>
# Jaakko Leppakangas <jaeilepp@student.jyu.fi>
# Jona Sassenhagen <jona.sassenhagen@gmail.com>
# Stefan Repplinger <stefan.repplinger@ovgu.de>
# Daniel McCloy <dan@mccloy.info>
#
# License: Simplified BSD
from collections import Counter
from copy import deepcopy
import warnings
import numpy as np
from .raw import _setup_channel_selections
from ..defaults import _handle_default
from ..utils import verbose, logger, warn, fill_doc, _check_option
from ..io.meas_info import create_info, _validate_type
from ..io.pick import (_get_channel_types, _picks_to_idx, _DATA_CH_TYPES_SPLIT,
_VALID_CHANNEL_TYPES)
from .utils import (tight_layout, _setup_vmin_vmax, plt_show,
_check_cov, _handle_precompute,
_compute_scalings, DraggableColorbar, _setup_cmap,
_handle_decim, _set_title_multiple_electrodes,
_make_combine_callable, _set_window_title,
_make_event_color_dict, _get_channel_plotting_order)
@fill_doc
def plot_epochs_image(epochs, picks=None, sigma=0., vmin=None,
vmax=None, colorbar=True, order=None, show=True,
units=None, scalings=None, cmap=None, fig=None,
axes=None, overlay_times=None, combine=None,
group_by=None, evoked=True, ts_args=None, title=None,
clear=False):
"""Plot Event Related Potential / Fields image.
Parameters
----------
epochs : instance of Epochs
The epochs.
%(picks_good_data)s
``picks`` interacts with ``group_by`` and ``combine`` to determine the
number of figures generated; see Notes.
sigma : float
The standard deviation of a Gaussian smoothing window applied along
the epochs axis of the image. If 0, no smoothing is applied.
Defaults to 0.
vmin : None | float | callable
The min value in the image (and the ER[P/F]). The unit is µV for
EEG channels, fT for magnetometers and fT/cm for gradiometers.
If vmin is None and multiple plots are returned, the limit is
equalized within channel types.
Hint: to specify the lower limit of the data, use
``vmin=lambda data: data.min()``.
vmax : None | float | callable
The max value in the image (and the ER[P/F]). The unit is µV for
EEG channels, fT for magnetometers and fT/cm for gradiometers.
        If vmax is None and multiple plots are returned, the limit is
equalized within channel types.
colorbar : bool
Display or not a colorbar.
order : None | array of int | callable
If not ``None``, order is used to reorder the epochs along the y-axis
of the image. If it is an array of :class:`int`, its length should
match the number of good epochs. If it is a callable it should accept
two positional parameters (``times`` and ``data``, where
``data.shape == (len(good_epochs), len(times))``) and return an
:class:`array <numpy.ndarray>` of indices that will sort ``data`` along
its first axis.
show : bool
Show figure if True.
units : dict | None
The units of the channel types used for axes labels. If None,
defaults to ``units=dict(eeg='µV', grad='fT/cm', mag='fT')``.
scalings : dict | None
The scalings of the channel types to be applied for plotting.
If None, defaults to ``scalings=dict(eeg=1e6, grad=1e13, mag=1e15,
eog=1e6)``.
cmap : None | colormap | (colormap, bool) | 'interactive'
Colormap. If tuple, the first value indicates the colormap to use and
the second value is a boolean defining interactivity. In interactive
mode the colors are adjustable by clicking and dragging the colorbar
with left and right mouse button. Left mouse button moves the scale up
and down and right mouse button adjusts the range. Hitting space bar
resets the scale. Up and down arrows can be used to change the
colormap. If 'interactive', translates to ('RdBu_r', True).
If None, "RdBu_r" is used, unless the data is all positive, in which
case "Reds" is used.
fig : Figure | None
:class:`~matplotlib.figure.Figure` instance to draw the image to.
Figure must contain the correct number of axes for drawing the epochs
image, the evoked response, and a colorbar (depending on values of
``evoked`` and ``colorbar``). If ``None`` a new figure is created.
Defaults to ``None``.
axes : list of Axes | dict of list of Axes | None
List of :class:`~matplotlib.axes.Axes` objects in which to draw the
image, evoked response, and colorbar (in that order). Length of list
must be 1, 2, or 3 (depending on values of ``colorbar`` and ``evoked``
parameters). If a :class:`dict`, each entry must be a list of Axes
objects with the same constraints as above. If both ``axes`` and
``group_by`` are dicts, their keys must match. Providing non-``None``
values for both ``fig`` and ``axes`` results in an error. Defaults to
``None``.
overlay_times : array_like, shape (n_epochs,) | None
Times (in seconds) at which to draw a line on the corresponding row of
the image (e.g., a reaction time associated with each epoch). Note that
``overlay_times`` should be ordered to correspond with the
:class:`~mne.Epochs` object (i.e., ``overlay_times[0]`` corresponds to
``epochs[0]``, etc).
%(combine)s
If callable, the callable must accept one positional input (data of
shape ``(n_epochs, n_channels, n_times)``) and return an
:class:`array <numpy.ndarray>` of shape ``(n_epochs, n_times)``. For
example::
combine = lambda data: np.median(data, axis=1)
If ``combine`` is ``None``, channels are combined by computing GFP,
unless ``group_by`` is also ``None`` and ``picks`` is a list of
specific channels (not channel types), in which case no combining is
performed and each channel gets its own figure. See Notes for further
details. Defaults to ``None``.
group_by : None | dict
Specifies which channels are aggregated into a single figure, with
aggregation method determined by the ``combine`` parameter. If not
``None``, one :class:`~matplotlib.figure.Figure` is made per dict
entry; the dict key will be used as the figure title and the dict
values must be lists of picks (either channel names or integer indices
of ``epochs.ch_names``). For example::
group_by=dict(Left_ROI=[1, 2, 3, 4], Right_ROI=[5, 6, 7, 8])
Note that within a dict entry all channels must have the same type.
``group_by`` interacts with ``picks`` and ``combine`` to determine the
number of figures generated; see Notes. Defaults to ``None``.
evoked : bool
Draw the ER[P/F] below the image or not.
ts_args : None | dict
Arguments passed to a call to `~mne.viz.plot_compare_evokeds` to style
the evoked plot below the image. Defaults to an empty dictionary,
meaning `~mne.viz.plot_compare_evokeds` will be called with default
parameters.
title : None | str
If :class:`str`, will be plotted as figure title. Otherwise, the
title will indicate channel(s) or channel type being plotted. Defaults
to ``None``.
clear : bool
Whether to clear the axes before plotting (if ``fig`` or ``axes`` are
provided). Defaults to ``False``.
Returns
-------
figs : list of Figure
One figure per channel, channel type, or group, depending on values of
``picks``, ``group_by``, and ``combine``. See Notes.
Notes
-----
You can control how channels are aggregated into one figure or plotted in
separate figures through a combination of the ``picks``, ``group_by``, and
``combine`` parameters. If ``group_by`` is a :class:`dict`, the result is
one :class:`~matplotlib.figure.Figure` per dictionary key (for any valid
values of ``picks`` and ``combine``). If ``group_by`` is ``None``, the
number and content of the figures generated depends on the values of
``picks`` and ``combine``, as summarized in this table:
.. cssclass:: table-bordered
.. rst-class:: midvalign
+----------+----------------------------+------------+-------------------+
| group_by | picks | combine | result |
+==========+============================+============+===================+
| | None, int, list of int, | None, | |
| dict | ch_name, list of ch_names, | string, or | 1 figure per |
| | ch_type, list of ch_types | callable | dict key |
+----------+----------------------------+------------+-------------------+
| | None, | None, | |
| | ch_type, | string, or | 1 figure per |
| | list of ch_types | callable | ch_type |
| None +----------------------------+------------+-------------------+
| | int, | None | 1 figure per pick |
| | ch_name, +------------+-------------------+
| | list of int, | string or | 1 figure |
| | list of ch_names | callable | |
+----------+----------------------------+------------+-------------------+
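    As an illustrative call (``epochs`` standing for any preloaded
    :class:`~mne.Epochs` instance), the following yields one figure per
    channel type, with channels combined by global field power::

        figs = plot_epochs_image(epochs, combine='gfp')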
"""
from scipy.ndimage import gaussian_filter1d
from .. import EpochsArray
_validate_type(group_by, (dict, None), 'group_by')
units = _handle_default('units', units)
scalings = _handle_default('scalings', scalings)
if set(units) != set(scalings):
raise ValueError('Scalings and units must have the same keys.')
# is picks a channel type (or None)?
picks, picked_types = _picks_to_idx(epochs.info, picks, return_kind=True)
ch_types = _get_channel_types(epochs.info, picks)
# `combine` defaults to 'gfp' unless picks are specific channels and
# there was no group_by passed
combine_given = combine is not None
if combine is None and (group_by is not None or picked_types):
combine = 'gfp'
# convert `combine` into callable (if None or str)
combine_func = _make_combine_callable(combine)
# handle ts_args (params for the evoked time series)
ts_args = dict() if ts_args is None else ts_args
manual_ylims = 'ylim' in ts_args
if combine is not None:
ts_args['show_sensors'] = False
vlines = [0] if (epochs.times[0] < 0 < epochs.times[-1]) else []
ts_defaults = dict(colors={'cond': 'k'}, title='', show=False,
truncate_yaxis=False, truncate_xaxis=False,
vlines=vlines, legend=False)
ts_defaults.update(**ts_args)
ts_args = ts_defaults.copy()
# construct a group_by dict if one wasn't supplied
if group_by is None:
if picked_types:
# one fig per ch_type
group_by = {ch_type: picks[np.array(ch_types) == ch_type]
for ch_type in set(ch_types)
if ch_type in _DATA_CH_TYPES_SPLIT}
elif combine is None:
# one fig per pick
group_by = {epochs.ch_names[pick]: [pick] for pick in picks}
else:
# one fig to rule them all
ch_names = np.array(epochs.ch_names)[picks].tolist()
key = _set_title_multiple_electrodes(None, combine, ch_names)
group_by = {key: picks}
else:
group_by = deepcopy(group_by)
# check for heterogeneous sensor type combinations / "combining" 1 channel
for this_group, these_picks in group_by.items():
this_ch_type = np.array(ch_types)[np.in1d(picks, these_picks)]
if len(set(this_ch_type)) > 1:
types = ', '.join(set(this_ch_type))
raise ValueError('Cannot combine sensors of different types; "{}" '
'contains types {}.'.format(this_group, types))
# now we know they're all the same type...
group_by[this_group] = dict(picks=these_picks, ch_type=this_ch_type[0],
title=title)
# are they trying to combine a single channel?
if len(these_picks) < 2 and combine_given:
warn('Only one channel in group "{}"; cannot combine by method '
'"{}".'.format(this_group, combine))
# check for compatible `fig` / `axes`; instantiate figs if needed; add
# fig(s) and axes into group_by
group_by = _validate_fig_and_axes(fig, axes, group_by, evoked, colorbar,
clear=clear)
# prepare images in advance to get consistent vmin/vmax.
# At the same time, create a subsetted epochs object for each group
data = epochs.get_data()
vmin_vmax = {ch_type: dict(images=list(), norm=list())
for ch_type in set(ch_types)}
for this_group, this_group_dict in group_by.items():
these_picks = this_group_dict['picks']
this_ch_type = this_group_dict['ch_type']
this_ch_info = [epochs.info['chs'][n] for n in these_picks]
these_ch_names = np.array(epochs.info['ch_names'])[these_picks]
this_data = data[:, these_picks]
# create subsetted epochs object
this_info = create_info(sfreq=epochs.info['sfreq'],
ch_names=list(these_ch_names),
ch_types=[this_ch_type] * len(these_picks))
with this_info._unlock():
this_info['chs'] = this_ch_info
this_epochs = EpochsArray(this_data, this_info, tmin=epochs.times[0])
# apply scalings (only to image, not epochs object), combine channels
this_image = combine_func(this_data * scalings[this_ch_type])
# handle `order`. NB: this can potentially yield different orderings
# in each figure!
this_image, _overlay_times = _order_epochs(this_image, epochs.times,
order, overlay_times)
this_norm = np.all(this_image > 0)
# apply smoothing
if sigma > 0.:
this_image = gaussian_filter1d(this_image, sigma=sigma, axis=0,
mode='nearest')
# update the group_by and vmin_vmax dicts
group_by[this_group].update(image=this_image, epochs=this_epochs,
norm=this_norm)
vmin_vmax[this_ch_type]['images'].append(this_image)
vmin_vmax[this_ch_type]['norm'].append(this_norm)
# compute overall vmin/vmax for images
for ch_type, this_vmin_vmax_dict in vmin_vmax.items():
image_list = this_vmin_vmax_dict['images']
image_stack = np.stack(image_list)
norm = all(this_vmin_vmax_dict['norm'])
vmin_vmax[ch_type] = _setup_vmin_vmax(image_stack, vmin, vmax, norm)
del image_stack, vmin, vmax
# prepare to plot
auto_ylims = {ch_type: [0., 0.] for ch_type in set(ch_types)}
# plot
for this_group, this_group_dict in group_by.items():
this_ch_type = this_group_dict['ch_type']
this_axes_dict = this_group_dict['axes']
vmin, vmax = vmin_vmax[this_ch_type]
# plot title
if this_group_dict['title'] is None:
title = _handle_default('titles').get(this_group, this_group)
if isinstance(combine, str) and len(title):
_comb = combine.upper() if combine == 'gfp' else combine
_comb = 'std. dev.' if _comb == 'std' else _comb
title += f' ({_comb})'
# plot the image
this_fig = _plot_epochs_image(
this_group_dict['image'], epochs=this_group_dict['epochs'],
picks=picks, colorbar=colorbar, vmin=vmin, vmax=vmax, cmap=cmap,
style_axes=True, norm=this_group_dict['norm'],
unit=units[this_ch_type], ax=this_axes_dict, show=False,
title=title, combine=combine, combine_given=combine_given,
overlay_times=_overlay_times, evoked=evoked, ts_args=ts_args)
group_by[this_group].update(fig=this_fig)
# detect ylims across figures
if evoked and not manual_ylims:
# ensure get_ylim works properly
this_axes_dict['evoked'].figure.canvas.draw_idle()
this_bot, this_top = this_axes_dict['evoked'].get_ylim()
this_min = min(this_bot, this_top)
this_max = max(this_bot, this_top)
            curr_min, curr_max = auto_ylims[this_ch_type]
auto_ylims[this_ch_type] = [min(curr_min, this_min),
max(curr_max, this_max)]
# equalize ylims across figures (does not adjust ticks)
if evoked:
for this_group_dict in group_by.values():
ax = this_group_dict['axes']['evoked']
ch_type = this_group_dict['ch_type']
if not manual_ylims:
args = auto_ylims[ch_type]
if 'invert_y' in ts_args:
args = args[::-1]
ax.set_ylim(*args)
plt_show(show)
# impose deterministic order of returned objects
return_order = np.array(sorted(group_by))
are_ch_types = np.in1d(return_order, _VALID_CHANNEL_TYPES)
if any(are_ch_types):
return_order = np.concatenate((return_order[are_ch_types],
return_order[~are_ch_types]))
return [group_by[group]['fig'] for group in return_order]
def _validate_fig_and_axes(fig, axes, group_by, evoked, colorbar, clear=False):
"""Check user-provided fig/axes compatibility with plot_epochs_image."""
from matplotlib.pyplot import figure, Axes, subplot2grid
n_axes = 1 + int(evoked) + int(colorbar)
ax_names = ('image', 'evoked', 'colorbar')
ax_names = np.array(ax_names)[np.where([True, evoked, colorbar])]
prefix = 'Since evoked={} and colorbar={}, '.format(evoked, colorbar)
# got both fig and axes
if fig is not None and axes is not None:
raise ValueError('At least one of "fig" or "axes" must be None; got '
'fig={}, axes={}.'.format(fig, axes))
# got fig=None and axes=None: make fig(s) and axes
if fig is None and axes is None:
axes = dict()
colspan = 9 if colorbar else 10
rowspan = 2 if evoked else 3
shape = (3, 10)
for this_group in group_by:
this_fig = figure()
_set_window_title(this_fig, this_group)
subplot2grid(shape, (0, 0), colspan=colspan, rowspan=rowspan,
fig=this_fig)
if evoked:
subplot2grid(shape, (2, 0), colspan=colspan, rowspan=1,
fig=this_fig)
if colorbar:
subplot2grid(shape, (0, 9), colspan=1, rowspan=rowspan,
fig=this_fig)
axes[this_group] = this_fig.axes
# got a Figure instance
if fig is not None:
# If we're re-plotting into a fig made by a previous call to
# `plot_image`, be forgiving of presence/absence of sensor inset axis.
if len(fig.axes) not in (n_axes, n_axes + 1):
raise ValueError('{}"fig" must contain {} axes, got {}.'
''.format(prefix, n_axes, len(fig.axes)))
if len(list(group_by)) != 1:
raise ValueError('When "fig" is not None, "group_by" can only '
'have one group (got {}: {}).'
.format(len(group_by), ', '.join(group_by)))
key = list(group_by)[0]
if clear: # necessary if re-plotting into previous figure
_ = [ax.clear() for ax in fig.axes]
if len(fig.axes) > n_axes: # get rid of sensor inset
fig.axes[-1].remove()
_set_window_title(fig, key)
axes = {key: fig.axes}
# got an Axes instance, be forgiving (if evoked and colorbar are False)
if isinstance(axes, Axes):
axes = [axes]
# got an ndarray; be forgiving
if isinstance(axes, np.ndarray):
axes = axes.ravel().tolist()
# got a list of axes, make it a dict
if isinstance(axes, list):
if len(axes) != n_axes:
raise ValueError('{}"axes" must be length {}, got {}.'
''.format(prefix, n_axes, len(axes)))
# for list of axes to work, must be only one group
if len(list(group_by)) != 1:
raise ValueError('When axes is a list, can only plot one group '
'(got {} groups: {}).'
.format(len(group_by), ', '.join(group_by)))
key = list(group_by)[0]
axes = {key: axes}
# got a dict of lists of axes, make it dict of dicts
if isinstance(axes, dict):
# in theory a user could pass a dict of axes but *NOT* pass a group_by
# dict, but that is forbidden in the docstring so it shouldn't happen.
# The next test could fail in that case because we've constructed a
# group_by dict and the user won't have known what keys we chose.
if set(axes) != set(group_by):
raise ValueError('If "axes" is a dict its keys ({}) must match '
'the keys in "group_by" ({}).'
.format(list(axes), list(group_by)))
for this_group, this_axes_list in axes.items():
if len(this_axes_list) != n_axes:
raise ValueError('{}each value in "axes" must be a list of {} '
'axes, got {}.'.format(prefix, n_axes,
len(this_axes_list)))
# NB: next line assumes all axes in each list are in same figure
group_by[this_group]['fig'] = this_axes_list[0].get_figure()
group_by[this_group]['axes'] = {key: axis for key, axis in
zip(ax_names, this_axes_list)}
return group_by
def _order_epochs(data, times, order=None, overlay_times=None):
"""Sort epochs image data (2D). Helper for plot_epochs_image."""
n_epochs = len(data)
if overlay_times is not None:
if len(overlay_times) != n_epochs:
raise ValueError(
f'size of overlay_times parameter ({len(overlay_times)}) does '
f'not match the number of epochs ({n_epochs}).')
overlay_times = np.array(overlay_times)
times_min = np.min(overlay_times)
times_max = np.max(overlay_times)
if (times_min < times[0]) or (times_max > times[-1]):
warn('Some values in overlay_times fall outside of the epochs '
f'time interval (between {times[0]} s and {times[-1]} s)')
if callable(order):
order = order(times, data)
if order is not None:
if len(order) != n_epochs:
raise ValueError(f'If order is a {type(order).__name__}, its '
f'length ({len(order)}) must match the length of '
f'the data ({n_epochs}).')
order = np.array(order)
data = data[order]
if overlay_times is not None:
overlay_times = overlay_times[order]
return data, overlay_times
def _plot_epochs_image(image, style_axes=True, epochs=None, picks=None,
vmin=None, vmax=None, colorbar=False, show=False,
unit=None, cmap=None, ax=None, overlay_times=None,
title=None, evoked=False, ts_args=None, combine=None,
combine_given=False, norm=False):
"""Plot epochs image. Helper function for plot_epochs_image."""
from matplotlib.ticker import AutoLocator
if cmap is None:
cmap = 'Reds' if norm else 'RdBu_r'
tmin = epochs.times[0]
tmax = epochs.times[-1]
ax_im = ax['image']
fig = ax_im.get_figure()
# draw the image
cmap = _setup_cmap(cmap, norm=norm)
n_epochs = len(image)
extent = [tmin, tmax, 0, n_epochs]
im = ax_im.imshow(image, vmin=vmin, vmax=vmax, cmap=cmap[0], aspect='auto',
origin='lower', interpolation='nearest', extent=extent)
# optional things
if style_axes:
ax_im.set_title(title)
ax_im.set_ylabel('Epochs')
if not evoked:
ax_im.set_xlabel('Time (s)')
ax_im.axis('auto')
ax_im.axis('tight')
ax_im.axvline(0, color='k', linewidth=1, linestyle='--')
if overlay_times is not None:
ax_im.plot(overlay_times, 0.5 + np.arange(n_epochs), 'k',
linewidth=2)
ax_im.set_xlim(tmin, tmax)
# draw the evoked
if evoked:
from . import plot_compare_evokeds
pass_combine = (combine if combine_given else None)
_picks = [0] if len(picks) == 1 else None # prevent applying GFP
plot_compare_evokeds({'cond': list(epochs.iter_evoked(copy=False))},
picks=_picks, axes=ax['evoked'],
combine=pass_combine, **ts_args)
ax['evoked'].set_xlim(tmin, tmax)
ax['evoked'].lines[0].set_clip_on(True)
ax['evoked'].collections[0].set_clip_on(True)
ax['evoked'].get_shared_x_axes().join(ax['evoked'], ax_im)
# fix the axes for proper updating during interactivity
loc = ax_im.xaxis.get_major_locator()
ax['evoked'].xaxis.set_major_locator(loc)
ax['evoked'].yaxis.set_major_locator(AutoLocator())
# draw the colorbar
if colorbar:
from matplotlib.pyplot import colorbar as cbar
this_colorbar = cbar(im, cax=ax['colorbar'])
this_colorbar.ax.set_ylabel(unit, rotation=270, labelpad=12)
if cmap[1]:
ax_im.CB = DraggableColorbar(this_colorbar, im)
with warnings.catch_warnings(record=True):
warnings.simplefilter('ignore')
tight_layout(fig=fig)
# finish
plt_show(show)
return fig
def plot_drop_log(drop_log, threshold=0, n_max_plot=20, subject=None,
color='lightgray', width=0.8, ignore=('IGNORED',),
show=True):
"""Show the channel stats based on a drop_log from Epochs.
Parameters
----------
drop_log : list of list
Epoch drop log from Epochs.drop_log.
threshold : float
The percentage threshold to use to decide whether or not to
plot. Default is zero (always plot).
n_max_plot : int
Maximum number of channels to show stats for.
subject : str | None
The subject name to use in the title of the plot. If ``None``, do not
display a subject name.
.. versionchanged:: 0.23
Added support for ``None``.
.. versionchanged:: 1.0
Defaults to ``None``.
color : tuple | str
Color to use for the bars.
width : float
Width of the bars.
ignore : list
The drop reasons to ignore.
show : bool
Show figure if True.
Returns
-------
fig : instance of matplotlib.figure.Figure
The figure.
"""
import matplotlib.pyplot as plt
from ..epochs import _drop_log_stats
percent = _drop_log_stats(drop_log, ignore)
if percent < threshold:
logger.info('Percent dropped epochs < supplied threshold; not '
'plotting drop log.')
return
absolute = len([x for x in drop_log if len(x)
if not any(y in ignore for y in x)])
n_epochs_before_drop = len([x for x in drop_log
if not any(y in ignore for y in x)])
scores = Counter([ch for d in drop_log for ch in d if ch not in ignore])
ch_names = np.array(list(scores.keys()))
counts = np.array(list(scores.values()))
# init figure, handle easy case (no drops)
fig, ax = plt.subplots()
title = (f'{absolute} of {n_epochs_before_drop} epochs removed '
f'({percent:.1f}%)')
if subject is not None:
title = f'{subject}: {title}'
ax.set_title(title)
if len(ch_names) == 0:
ax.text(0.5, 0.5, 'No drops', ha='center', fontsize=14)
return fig
# count epochs that aren't fully caught by `ignore`
n_used = sum([any(ch not in ignore for ch in d) or len(d) == 0
for d in drop_log])
# calc plot values
n_bars = min(n_max_plot, len(ch_names))
x = np.arange(n_bars)
y = 100 * counts / n_used
order = np.flipud(np.argsort(y))
ax.bar(x, y[order[:n_bars]], color=color, width=width, align='center')
ax.set_xticks(x)
ax.set_xticklabels(ch_names[order[:n_bars]], rotation=45, size=10,
horizontalalignment='right')
ax.set_ylabel('% of epochs removed')
ax.grid(axis='y')
tight_layout(pad=1, fig=fig)
plt_show(show)
return fig
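# Illustrative usage (a sketch, not part of the original module): after
# dropping epochs via amplitude rejection, inspect which channels caused
# the most drops:
#
#   epochs.drop_bad(reject=dict(eeg=100e-6))
#   fig = plot_drop_log(epochs.drop_log, subject='sub-01')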
@fill_doc
def plot_epochs(epochs, picks=None, scalings=None, n_epochs=20, n_channels=20,
title=None, events=None, event_color=None,
order=None, show=True, block=False, decim='auto',
noise_cov=None, butterfly=False, show_scrollbars=True,
show_scalebars=True, epoch_colors=None, event_id=None,
group_by='type', precompute=None, use_opengl=None):
"""Visualize epochs.
Bad epochs can be marked with a left click on top of the epoch. Bad
channels can be selected by clicking the channel name on the left side of
the main axes. Calling this function drops all the selected bad epochs as
well as bad epochs marked beforehand with rejection parameters.
Parameters
----------
epochs : instance of Epochs
The epochs object.
%(picks_good_data)s
%(scalings)s
n_epochs : int
The number of epochs per view. Defaults to 20.
n_channels : int
The number of channels per view. Defaults to 20.
    title : str | None
        The title of the window. If None, the name of the epochs object is
        displayed. Defaults to None.
events : None | array, shape (n_events, 3)
Events to show with vertical bars. You can use `~mne.viz.plot_events`
as a legend for the colors. By default, the coloring scheme is the
same. Defaults to ``None``.
.. warning:: If the epochs have been resampled, the events no longer
align with the data.
.. versionadded:: 0.14.0
%(event_color)s
Defaults to ``None``.
order : array of str | None
Order in which to plot channel types.
.. versionadded:: 0.18.0
show : bool
Show figure if True. Defaults to True.
block : bool
Whether to halt program execution until the figure is closed.
Useful for rejecting bad trials on the fly by clicking on an epoch.
Defaults to False.
decim : int | 'auto'
Amount to decimate the data during display for speed purposes.
You should only decimate if the data are sufficiently low-passed,
otherwise aliasing can occur. The 'auto' mode (default) uses
the decimation that results in a sampling rate at least three times
larger than ``info['lowpass']`` (e.g., a 40 Hz lowpass will result in
at least a 120 Hz displayed sample rate).
.. versionadded:: 0.15.0
noise_cov : instance of Covariance | str | None
Noise covariance used to whiten the data while plotting.
Whitened data channels are scaled by ``scalings['whitened']``,
and their channel names are shown in italic.
Can be a string to load a covariance from disk.
See also :meth:`mne.Evoked.plot_white` for additional inspection
of noise covariance properties when whitening evoked data.
        For data processed with SSS, the effective dependence between
        magnetometers and gradiometers may introduce differences in scaling;
        consider using :meth:`mne.Evoked.plot_white`.
.. versionadded:: 0.16.0
butterfly : bool
Whether to directly call the butterfly view.
.. versionadded:: 0.18.0
%(show_scrollbars)s
%(show_scalebars)s
.. versionadded:: 0.24.0
epoch_colors : list of (n_epochs) list (of n_channels) | None
Colors to use for individual epochs. If None, use default colors.
event_id : dict | None
Dictionary of event labels (e.g. 'aud_l') as keys and associated event
integers as values. Useful when ``events`` contains event numbers not
present in ``epochs.event_id`` (e.g., because of event subselection).
Values in ``event_id`` will take precedence over those in
``epochs.event_id`` when there are overlapping keys.
.. versionadded:: 0.20
%(browse_group_by)s
%(precompute)s
%(use_opengl)s
Returns
-------
fig : instance of matplotlib.figure.Figure
The figure.
Notes
-----
    The arrow keys (up/down/left/right) can be used to navigate between
    channels and epochs, and the scaling can be adjusted with - and + (or =)
    keys, but this depends on the backend matplotlib is configured to use
    (e.g., ``mpl.use('TkAgg')`` should work). Full-screen mode can be toggled
    with the F11 key. The number of epochs and channels per view can be
    adjusted with the home/end and page down/page up keys. The ``h`` key
    plots a histogram of peak-to-peak values along with the rejection
    thresholds that were used. The butterfly plot can be toggled with the
    ``b`` key. A left mouse click adds a vertical line to the plot. Click the
    'help' button at the bottom left corner of the plotter to view all the
    options.
.. versionadded:: 0.10.0
"""
from ._figure import _get_browser
epochs.drop_bad()
info = epochs.info.copy()
sfreq = info['sfreq']
projs = info['projs']
projs_on = np.full_like(projs, epochs.proj, dtype=bool)
if not epochs.proj:
with info._unlock():
info['projs'] = list()
# handle defaults / check arg validity
color = _handle_default('color', None)
scalings = _compute_scalings(scalings, epochs)
scalings = _handle_default('scalings_plot_raw', scalings)
if scalings['whitened'] == 'auto':
scalings['whitened'] = 1.
units = _handle_default('units', None)
unit_scalings = _handle_default('scalings', None)
decim, picks_data = _handle_decim(epochs.info.copy(), decim, None)
noise_cov = _check_cov(noise_cov, epochs.info)
event_id_rev = {v: k for k, v in (event_id or {}).items()}
_check_option('group_by', group_by,
('selection', 'position', 'original', 'type'))
# validate epoch_colors
_validate_type(epoch_colors, (list, None), 'epoch_colors')
if epoch_colors is not None:
if len(epoch_colors) != len(epochs.events):
msg = ('epoch_colors must have length equal to the number of '
f'epochs ({len(epochs)}); got length {len(epoch_colors)}.')
raise ValueError(msg)
for ix, this_colors in enumerate(epoch_colors):
_validate_type(this_colors, list, f'epoch_colors[{ix}]')
if len(this_colors) != len(epochs.ch_names):
msg = (f'epoch colors for epoch {ix} has length '
f'{len(this_colors)}, expected {len(epochs.ch_names)}.')
raise ValueError(msg)
# handle time dimension
n_epochs = min(n_epochs, len(epochs))
n_times = len(epochs) * len(epochs.times)
duration = n_epochs * len(epochs.times) / sfreq
# NB: this includes start and end of data:
boundary_times = np.arange(len(epochs) + 1) * len(epochs.times) / sfreq
# events
if events is not None:
event_nums = events[:, 2]
event_samps = events[:, 0]
epoch_n_samps = len(epochs.times)
# handle overlapping epochs (each event may show up in multiple places)
boundaries = (epochs.events[:, [0]] + np.array([-1, 1])
* epochs.time_as_index(0))
in_bounds = np.logical_and(boundaries[:, [0]] <= event_samps,
event_samps < boundaries[:, [1]])
event_ixs = [np.nonzero(a)[0] for a in in_bounds.T]
warned = False
event_times = list()
event_numbers = list()
for samp, num, _ixs in zip(event_samps, event_nums, event_ixs):
relevant_epoch_events = epochs.events[:, 0][_ixs]
if len(relevant_epoch_events) > 1 and not warned:
logger.info('You seem to have overlapping epochs. Some event '
'lines may be duplicated in the plot.')
warned = True
offsets = samp - relevant_epoch_events + epochs.time_as_index(0)
this_event_times = (_ixs * epoch_n_samps + offsets) / sfreq
event_times.extend(this_event_times)
event_numbers.extend([num] * len(_ixs))
event_nums = np.array(event_numbers)
event_times = np.array(event_times)
else:
event_nums = None
event_times = None
event_color_dict = _make_event_color_dict(event_color, events, event_id)
# determine trace order
picks = _picks_to_idx(info, picks)
n_channels = min(n_channels, len(picks))
ch_names = np.array(epochs.ch_names)
ch_types = np.array(epochs.get_channel_types())
order = _get_channel_plotting_order(order, ch_types, picks)
selections = None
if group_by in ('selection', 'position'):
selections = _setup_channel_selections(epochs, group_by, order)
order = np.concatenate(list(selections.values()))
default_selection = list(selections)[0]
n_channels = len(selections[default_selection])
# generate window title
if title is None:
title = epochs._name
if title is None or len(title) == 0:
title = 'Epochs'
elif not isinstance(title, str):
raise TypeError(f'title must be None or a string, got a {type(title)}')
precompute = _handle_precompute(precompute)
params = dict(inst=epochs,
info=info,
n_epochs=n_epochs,
# channels and channel order
ch_names=ch_names,
ch_types=ch_types,
ch_order=order,
picks=order[:n_channels],
n_channels=n_channels,
picks_data=picks_data,
group_by=group_by,
ch_selections=selections,
# time
t_start=0,
duration=duration,
n_times=n_times,
first_time=0,
time_format='float',
decim=decim,
boundary_times=boundary_times,
# events
event_id_rev=event_id_rev,
event_color_dict=event_color_dict,
event_nums=event_nums,
event_times=event_times,
# preprocessing
projs=projs,
projs_on=projs_on,
apply_proj=epochs.proj,
remove_dc=True,
filter_coefs=None,
filter_bounds=None,
noise_cov=noise_cov,
use_noise_cov=noise_cov is not None,
# scalings
scalings=scalings,
units=units,
unit_scalings=unit_scalings,
# colors
ch_color_bad='lightgray',
ch_color_dict=color,
epoch_color_bad=(1, 0, 0),
epoch_colors=epoch_colors,
# display
butterfly=butterfly,
clipping=None,
scrollbars_visible=show_scrollbars,
scalebars_visible=show_scalebars,
window_title=title,
xlabel='Epoch number',
# pyqtgraph-specific
precompute=precompute,
use_opengl=use_opengl)
fig = _get_browser(show=show, block=block, **params)
return fig
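

# --------------------------------------------------------------------------
# Hedged usage sketch (not part of the original module) showing a typical
# interactive call to ``plot_epochs``. With ``block=True`` the call blocks
# until the browser window is closed, at which point epochs/channels marked
# bad by clicking are dropped. The file name is an illustrative placeholder.
def _example_plot_epochs():
    import mne
    raw = mne.io.read_raw_fif('sample_raw.fif', preload=True)  # placeholder
    events = mne.find_events(raw)
    epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=0.5, preload=True)
    # Show 10 epochs x 15 channels per view; event lines are drawn from the
    # same event array that built the epochs.
    fig = plot_epochs(epochs, n_epochs=10, n_channels=15,
                      events=events, block=True)
    return fig
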
@verbose
def plot_epochs_psd(epochs, fmin=0, fmax=np.inf, tmin=None, tmax=None,
proj=False, bandwidth=None, adaptive=False, low_bias=True,
normalization='length', picks=None, ax=None, color='black',
xscale='linear', area_mode='std', area_alpha=0.33,
dB=True, estimate='auto', show=True, n_jobs=1,
average=False, line_alpha=None, spatial_colors=True,
sphere=None, exclude='bads', verbose=None):
"""%(plot_psd_doc)s.
Parameters
----------
epochs : instance of Epochs
The epochs object.
fmin : float
Start frequency to consider.
fmax : float
End frequency to consider.
tmin : float | None
Start time to consider.
tmax : float | None
End time to consider.
proj : bool
Apply projection.
    bandwidth : float
        The bandwidth of the multitaper windowing function in Hz. The default
        value is a window half-bandwidth of 4 Hz.
adaptive : bool
Use adaptive weights to combine the tapered spectra into PSD
(slow, use n_jobs >> 1 to speed up computation).
low_bias : bool
Only use tapers with more than 90%% spectral concentration within
bandwidth.
%(normalization)s
%(plot_psd_picks_good_data)s
ax : instance of Axes | None
Axes to plot into. If None, axes will be created.
%(plot_psd_color)s
%(plot_psd_xscale)s
%(plot_psd_area_mode)s
%(plot_psd_area_alpha)s
%(plot_psd_dB)s
%(plot_psd_estimate)s
%(show)s
%(n_jobs)s
%(plot_psd_average)s
%(plot_psd_line_alpha)s
%(plot_psd_spatial_colors)s
%(topomap_sphere_auto)s
    exclude : list of str | 'bads'
        Channel names to exclude from being shown. If 'bads', the bad channels
        are excluded. Pass an empty list to plot all channels (including
        channels marked "bad", if any).
.. versionadded:: 0.24.0
%(verbose)s
Returns
-------
fig : instance of Figure
Figure with frequency spectra of the data channels.
"""
from ._mpl_figure import _psd_figure
# generate figure
# epochs always use multitaper, not Welch, so no need to allow "window"
# param above
fig = _psd_figure(
inst=epochs, proj=proj, picks=picks, axes=ax, tmin=tmin, tmax=tmax,
fmin=fmin, fmax=fmax, sphere=sphere, xscale=xscale, dB=dB,
average=average, estimate=estimate, area_mode=area_mode,
line_alpha=line_alpha, area_alpha=area_alpha, color=color,
spatial_colors=spatial_colors, n_jobs=n_jobs, bandwidth=bandwidth,
adaptive=adaptive, low_bias=low_bias, normalization=normalization,
window='hamming', exclude=exclude)
plt_show(show)
return fig
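

# --------------------------------------------------------------------------
# Hedged usage sketch (not part of the original module) for the multitaper
# PSD plot. Frequency band, picks, and file name are illustrative choices.
def _example_plot_epochs_psd():
    import mne
    raw = mne.io.read_raw_fif('sample_raw.fif', preload=True)  # placeholder
    events = mne.find_events(raw)
    epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=0.5, preload=True)
    # Average the per-channel spectra and shade the spread across channels
    # (``area_mode='std'`` is the default), plotted in dB from 1-40 Hz.
    fig = plot_epochs_psd(epochs, fmin=1., fmax=40., picks='eeg',
                          average=True, dB=True)
    return fig
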
|
larsoner/mne-python
|
mne/viz/epochs.py
|
Python
|
bsd-3-clause
| 43,523
|
[
"Gaussian"
] |
03a7bca89eff9735875a6716bf87ec0eab425e52c257259ca7186c9f3258f447
|