# -*- coding: utf-8 -*-
r"""
.. _disc-filtering:
===================================
Background information on filtering
===================================
Here we give some background information on filtering in general, and
how it is done in MNE-Python in particular.
Recommended reading for practical applications of digital
filter design can be found in
Parks & Burrus (1987) :footcite:`ParksBurrus1987`
and Ifeachor & Jervis (2002) :footcite:`IfeachorJervis2002`,
and for filtering in an M/EEG context we recommend reading
Widmann *et al.* (2015) :footcite:`WidmannEtAl2015`.
.. note::
    This tutorial goes pretty deep into the mathematics of filtering and the
    design decisions that go into choosing a filter. If you just want to know
    how to apply the default filters in MNE-Python to your data, skip this
    tutorial and read :ref:`tut-filter-resample` instead (but someday, you
    should come back and read this one too 🙂).
Problem statement
=================
Practical issues with filtering electrophysiological data are covered
in Widmann *et al.* (2012) :footcite:`WidmannSchroger2012`, where they
conclude with this statement:
    Filtering can result in considerable distortions of the time course
    (and amplitude) of a signal as demonstrated by VanRullen (2011)
    :footcite:`VanRullen2011`.
    Thus, filtering should not be used lightly. However, if effects of
    filtering are cautiously considered and filter artifacts are minimized,
    a valid interpretation of the temporal dynamics of filtered
    electrophysiological data is possible and signals missed otherwise
    can be detected with filtering.
In other words, filtering can increase signal-to-noise ratio (SNR), but if it
is not used carefully, it can distort data. Here we hope to cover some
filtering basics so users can better understand filtering trade-offs and why
MNE-Python has chosen particular defaults.
.. _tut_filtering_basics:
Filtering basics
================
Let's get some of the basic math down. In the frequency domain, digital
filters have a transfer function that is given by:
.. math::
    H(z) &= \frac{b_0 + b_1 z^{-1} + b_2 z^{-2} + \ldots + b_M z^{-M}}
                 {1 + a_1 z^{-1} + a_2 z^{-2} + \ldots + a_N z^{-N}} \\
         &= \frac{\sum_{k=0}^M b_k z^{-k}}{1 + \sum_{k=1}^N a_k z^{-k}}
In the time domain, the numerator coefficients :math:`b_k` and denominator
coefficients :math:`a_k` can be used to obtain our output data
:math:`y(n)` in terms of our input data :math:`x(n)` as:
.. math::
    :label: summations

    y(n) &= b_0 x(n) + b_1 x(n-1) + \ldots + b_M x(n-M)
            - a_1 y(n-1) - a_2 y(n-2) - \ldots - a_N y(n-N) \\
         &= \sum_{k=0}^M b_k x(n-k) - \sum_{k=1}^N a_k y(n-k)
In other words, the output at time :math:`n` is determined by a sum over
1. the numerator coefficients :math:`b_k`, which get multiplied by
the previous input values :math:`x(n-k)`, and
2. the denominator coefficients :math:`a_k`, which get multiplied by
the previous output values :math:`y(n-k)`.
Note that these summations correspond to (1) a weighted `moving average`_ and
(2) an autoregression_.
Filters are broken into two classes: FIR_ (finite impulse response) and
IIR_ (infinite impulse response) based on these coefficients.
FIR filters use a finite number of numerator
coefficients :math:`b_k` (:math:`\forall k, a_k=0`), and thus each output
value :math:`y(n)` depends only on the current and :math:`M` previous input
values. IIR filters depend on the previous input and output values, and thus
can have effectively infinite impulse responses.
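
For example, a direct evaluation of equation :eq:`summations` matches the
output of :func:`scipy.signal.lfilter` (a minimal sketch using arbitrary
example coefficients, not a filter you would use in practice)::

    import numpy as np
    from scipy import signal

    rng = np.random.RandomState(0)
    x = rng.randn(100)
    b, a = [0.5, 0.5], [1., -0.2]  # arbitrary example coefficients
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        y[n] -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
    np.testing.assert_allclose(y, signal.lfilter(b, a, x), atol=1e-12)
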
As outlined in Parks & Burrus (1987) :footcite:`ParksBurrus1987`,
FIR and IIR have different trade-offs:
* A causal FIR filter can be linear-phase -- i.e., have the same time delay
  across all frequencies -- whereas a causal IIR filter cannot. The phase
  and group delay characteristics are also usually better for FIR filters.
* IIR filters can generally have a steeper cutoff than an FIR filter of
  equivalent order.
* IIR filters are generally less numerically stable, in part due to error
  that accumulates in their recursive calculations.
In MNE-Python we default to using FIR filtering. As noted in Widmann *et al.*
(2015) :footcite:`WidmannEtAl2015`:
    Despite IIR filters often being considered as computationally more
    efficient, they are recommended only when high throughput and sharp
    cutoffs are required
    (Ifeachor and Jervis, 2002 :footcite:`IfeachorJervis2002`, p. 321)...
    FIR filters are easier to control, are always stable, have a
    well-defined passband, can be corrected to zero-phase without
    additional computations, and can be converted to minimum-phase.
    We therefore recommend FIR filters for most purposes in
    electrophysiological data analysis.
When designing a filter (FIR or IIR), there are always trade-offs that
need to be considered, including but not limited to:
1. Ripple in the pass-band
2. Attenuation of the stop-band
3. Steepness of roll-off
4. Filter order (i.e., length for FIR filters)
5. Time-domain ringing
In general, the sharper something is in frequency, the broader it is in time,
and vice-versa. This is a fundamental time-frequency trade-off, and it will
show up below.
FIR Filters
===========
First, we will focus on FIR filters, which are the default filters used by
MNE-Python.
"""
###############################################################################
# Designing FIR filters
# ---------------------
# Here we'll try to design a low-pass filter and look at trade-offs in terms
# of time- and frequency-domain filter characteristics. Later, in
# :ref:`tut_effect_on_signals`, we'll look at how such filters can affect
# signals when they are used.
#
# First let's import some useful tools for filtering, and set some default
# values for our data that are reasonable for M/EEG.
import numpy as np
from numpy.fft import fft, fftfreq
from scipy import signal
import matplotlib.pyplot as plt
from mne.time_frequency.tfr import morlet
from mne.viz import plot_filter, plot_ideal_filter
import mne
sfreq = 1000.
f_p = 40.
flim = (1., sfreq / 2.) # limits for plotting
###############################################################################
# Take for example an ideal low-pass filter, which would give a magnitude
# response of 1 in the pass-band (up to frequency :math:`f_p`) and a magnitude
# response of 0 in the stop-band (down to frequency :math:`f_s`) such that
# :math:`f_p=f_s=40` Hz here (shown to a lower limit of -60 dB for simplicity):
nyq = sfreq / 2. # the Nyquist frequency is half our sample rate
freq = [0, f_p, f_p, nyq]
gain = [1, 1, 0, 0]
third_height = np.array(plt.rcParams['figure.figsize']) * [1, 1. / 3.]
ax = plt.subplots(1, figsize=third_height)[1]
plot_ideal_filter(freq, gain, ax, title='Ideal %s Hz lowpass' % f_p, flim=flim)
###############################################################################
# This filter hypothetically achieves zero ripple in the frequency domain,
# perfect attenuation, and perfect steepness. However, due to the discontinuity
# in the frequency response, the filter would require infinite ringing in the
# time domain (i.e., infinite order) to be realized. Another way to think of
# this is that a rectangular window in the frequency domain is actually a sinc_
# function in the time domain, which requires an infinite number of samples
# (and thus infinite time) to represent. So although this filter has ideal
# frequency suppression, it has poor time-domain characteristics.
#
# Let's try to naïvely make a brick-wall filter of length 0.1 s, and look
# at the filter itself in the time domain and the frequency domain:
n = int(round(0.1 * sfreq))
n -= n % 2 - 1 # make it odd
t = np.arange(-(n // 2), n // 2 + 1) / sfreq # center our sinc
h = np.sinc(2 * f_p * t) / (4 * np.pi)
plot_filter(h, sfreq, freq, gain, 'Sinc (0.1 s)', flim=flim, compensate=True)
###############################################################################
# This is not so good! Making the filter 10 times longer (1 s) gets us a
# slightly better stop-band suppression, but still has a lot of ringing in
# the time domain. Note the x-axis is an order of magnitude longer here,
# and the filter has a correspondingly much longer group delay (again equal
# to half the filter length, or 0.5 seconds):
n = int(round(1. * sfreq))
n -= n % 2 - 1 # make it odd
t = np.arange(-(n // 2), n // 2 + 1) / sfreq
h = np.sinc(2 * f_p * t) / (4 * np.pi)
plot_filter(h, sfreq, freq, gain, 'Sinc (1.0 s)', flim=flim, compensate=True)
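###############################################################################
# We can confirm the claimed group delay numerically -- for a symmetric
# (linear-phase) FIR filter it is simply half the filter length:
print('Group delay: %0.1f s' % ((len(h) - 1) / 2. / sfreq))  # -> 0.5 s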
###############################################################################
# Let's make the stop-band tighter still with a longer filter (10 s),
# with a resulting larger x-axis:
n = int(round(10. * sfreq))
n -= n % 2 - 1 # make it odd
t = np.arange(-(n // 2), n // 2 + 1) / sfreq
h = np.sinc(2 * f_p * t) / (4 * np.pi)
plot_filter(h, sfreq, freq, gain, 'Sinc (10.0 s)', flim=flim, compensate=True)
###############################################################################
# Now we have very sharp frequency suppression, but our filter rings for the
# entire 10 seconds. So this naïve method is probably not a good way to build
# our low-pass filter.
#
# Fortunately, there are multiple established methods to design FIR filters
# based on desired response characteristics. These include:
#
# 1. The Remez_ algorithm (:func:`scipy.signal.remez`, `MATLAB firpm`_)
# 2. Windowed FIR design (:func:`scipy.signal.firwin2`,
# :func:`scipy.signal.firwin`, and `MATLAB fir2`_)
# 3. Least squares designs (:func:`scipy.signal.firls`, `MATLAB firls`_)
# 4. Frequency-domain design (construct filter in Fourier
# domain and use an :func:`IFFT <numpy.fft.ifft>` to invert it)
#
# .. note:: Remez and least squares designs have advantages when there are
# "do not care" regions in our frequency response. However, we want
# well controlled responses in all frequency regions.
# Frequency-domain construction is good when an arbitrary response
# is desired, but generally less clean (due to sampling issues) than
# a windowed approach for more straightforward filter applications.
# Since our filters (low-pass, high-pass, band-pass, band-stop)
# are fairly simple and we require precise control of all frequency
# regions, we will primarily use and explore windowed FIR design.
#
# If we relax our frequency-domain filter requirements a little bit, we can
# use these functions to construct a lowpass filter that instead has a
# *transition band*, or a region between the pass frequency :math:`f_p`
# and stop frequency :math:`f_s`, e.g.:
trans_bandwidth = 10 # 10 Hz transition band
f_s = f_p + trans_bandwidth # = 50 Hz
freq = [0., f_p, f_s, nyq]
gain = [1., 1., 0., 0.]
ax = plt.subplots(1, figsize=third_height)[1]
title = '%s Hz lowpass with a %s Hz transition' % (f_p, trans_bandwidth)
plot_ideal_filter(freq, gain, ax, title=title, flim=flim)
###############################################################################
# Accepting a shallower roll-off of the filter in the frequency domain makes
# our time-domain response potentially much better. We end up with a more
# gradual slope through the transition region, but a *much* cleaner time
# domain signal. Here again for the 1 s filter:
n = int(round(sfreq * 1.)) + 1  # reset to a 1.0 s filter length
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, sfreq, freq, gain, 'Windowed 10 Hz transition (1.0 s)',
flim=flim, compensate=True)
###############################################################################
# Since our lowpass is around 40 Hz with a 10 Hz transition, we can actually
# use a shorter filter (5 cycles at 10 Hz = 0.5 s) and still get acceptable
# stop-band attenuation:
n = int(round(sfreq * 0.5)) + 1
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, sfreq, freq, gain, 'Windowed 10 Hz transition (0.5 s)',
flim=flim, compensate=True)
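###############################################################################
# For comparison, the same specification can also be met with a least-squares
# design (:func:`scipy.signal.firls`), which treats the transition region as a
# "do not care" band -- a minimal sketch (the response between :math:`f_p` and
# :math:`f_s` is left uncontrolled by design):
h_ls = signal.firls(n, freq, gain, nyq=nyq)
plot_filter(h_ls, sfreq, freq, gain, 'Least-squares 10 Hz transition (0.5 s)',
            flim=flim, compensate=True)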
###############################################################################
# But if we shorten the filter too much (2 cycles of 10 Hz = 0.2 s),
# our effective stop frequency gets pushed out past 60 Hz:
n = int(round(sfreq * 0.2)) + 1
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, sfreq, freq, gain, 'Windowed 10 Hz transition (0.2 s)',
flim=flim, compensate=True)
###############################################################################
# If we want to keep the filter this short (0.2 s), we should use a wider
# transition band, something more like 25 Hz (0.2 s = 5 cycles @ 25 Hz):
trans_bandwidth = 25
f_s = f_p + trans_bandwidth
freq = [0, f_p, f_s, nyq]
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, sfreq, freq, gain, 'Windowed 25 Hz transition (0.2 s)',
            flim=flim, compensate=True)
###############################################################################
# So far, we have only discussed *non-causal* filtering, which means that each
# sample at each time point :math:`t` is filtered using samples that come
# after (:math:`t + \Delta t`) *and* before (:math:`t - \Delta t`) the current
# time point :math:`t`.
# In this sense, each sample is influenced by samples that come both before
# and after it. This is useful in many cases, especially because it does not
# delay the timing of events.
#
# However, sometimes it can be beneficial to use *causal* filtering,
# whereby each sample :math:`t` is filtered using only time points that came
# before it. One standard way to obtain a causal filter with small delay is
# to convert a linear-phase filter to its *minimum-phase* equivalent.
#
# Note that the delay of a minimum-phase filter is variable (whereas for
# linear/zero-phase filters it is constant) but small in the pass-band.
# Unlike zero-phase filters, which require time-shifting the output of a
# linear-phase filtering stage backward (and thus become non-causal),
# minimum-phase filters do not require any compensation to achieve small
# delays in the pass-band. Note that as an artifact of the minimum-phase
# filter construction step, the filter does not end up being as steep as the
# linear/zero-phase version.
#
# We can construct a minimum-phase filter from our existing linear-phase
# filter with the :func:`scipy.signal.minimum_phase` function, and note
# that the falloff is not as steep:
h_min = signal.minimum_phase(h)
plot_filter(h_min, sfreq, freq, gain, 'Minimum-phase', flim=flim)
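###############################################################################
# We can quantify the pass-band delay of the two versions with
# :func:`scipy.signal.group_delay` (a quick numerical check; the delay is
# returned in samples, so we convert to seconds):
w, gd_min = signal.group_delay((h_min, 1.))
f_hz = w * sfreq / (2. * np.pi)  # convert rad/sample -> Hz
print('Linear-phase delay:  %0.3f s' % ((len(h) - 1) / 2. / sfreq))
print('Minimum-phase delay: %0.3f s (mean in pass-band)'
      % (gd_min[f_hz <= f_p].mean() / sfreq))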
###############################################################################
# .. _tut_effect_on_signals:
#
# Applying FIR filters
# --------------------
#
# Now let's look at some practical effects of these filters by applying
# them to some data.
#
# Let's construct a Gaussian-windowed sinusoid (i.e., Morlet imaginary part)
# plus noise (random and line). Note that the original clean signal contains
# frequency content in both the pass band and transition bands of our
# low-pass filter.
dur = 10.
center = 2.
morlet_freq = f_p
tlim = [center - 0.2, center + 0.2]
tticks = [tlim[0], center, tlim[1]]
flim = [20, 70]
x = np.zeros(int(sfreq * dur) + 1)
blip = morlet(sfreq, [morlet_freq], n_cycles=7)[0].imag / 20.
n_onset = int(center * sfreq) - len(blip) // 2
x[n_onset:n_onset + len(blip)] += blip
x_orig = x.copy()
rng = np.random.RandomState(0)
x += rng.randn(len(x)) / 1000.
x += np.sin(2. * np.pi * 60. * np.arange(len(x)) / sfreq) / 2000.
###############################################################################
# Filter it with a shallow cutoff, linear-phase FIR (which allows us to
# compensate for the constant filter delay):
transition_band = 0.25 * f_p
f_s = f_p + transition_band
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
# Design the filter using MNE-Python's 0.16 default parameters:
h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
fir_design='firwin', verbose=True)
x_v16 = np.convolve(h, x)
# this is the linear->zero phase, causal-to-non-causal conversion / shift
x_v16 = x_v16[len(h) // 2:]
plot_filter(h, sfreq, freq, gain, 'MNE-Python 0.16 default', flim=flim,
compensate=True)
###############################################################################
# Filter it with a different design method ``fir_design="firwin2"``, and also
# compensate for the constant filter delay. This method does not produce
# quite as sharp a transition compared to ``fir_design="firwin"``, despite
# being twice as long:
transition_band = 0.25 * f_p
f_s = f_p + transition_band
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
# This would be equivalent:
# filter_dur = 6.6 / transition_band # sec
# n = int(sfreq * filter_dur)
# h = signal.firwin2(n, freq, gain, nyq=sfreq / 2.)
h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
fir_design='firwin2', verbose=True)
x_v14 = np.convolve(h, x)[len(h) // 2:]
plot_filter(h, sfreq, freq, gain, 'MNE-Python 0.14 default', flim=flim,
compensate=True)
###############################################################################
# Let's also filter with the MNE-Python 0.13 default, which is a
# long-duration, steep cutoff FIR that gets applied twice:
transition_band = 0.5 # Hz
f_s = f_p + transition_band
filter_dur = 10. # sec
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
# This would be equivalent
# n = int(sfreq * filter_dur)
# h = signal.firwin2(n, freq, gain, nyq=sfreq / 2.)
h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
h_trans_bandwidth=transition_band,
filter_length='%ss' % filter_dur,
fir_design='firwin2', verbose=True)
x_v13 = np.convolve(np.convolve(h, x)[::-1], h)[::-1][len(h) - 1:-len(h) - 1]
# the effective h is one that is applied to the time-reversed version of itself
h_eff = np.convolve(h, h[::-1])
plot_filter(h_eff, sfreq, freq, gain, 'MNE-Python 0.13 default', flim=flim,
compensate=True)
###############################################################################
# Let's also filter it with the MNE-C default, which is a long-duration
# steep-slope FIR filter designed using frequency-domain techniques:
h = mne.filter.design_mne_c_filter(sfreq, l_freq=None, h_freq=f_p + 2.5)
x_mne_c = np.convolve(h, x)[len(h) // 2:]
transition_band = 5 # Hz (default in MNE-C)
f_s = f_p + transition_band
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
plot_filter(h, sfreq, freq, gain, 'MNE-C default', flim=flim, compensate=True)
###############################################################################
# And now an example of a minimum-phase filter:
h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
phase='minimum', fir_design='firwin',
verbose=True)
x_min = np.convolve(h, x)
transition_band = 0.25 * f_p
f_s = f_p + transition_band
filter_dur = 6.6 / transition_band # sec
n = int(sfreq * filter_dur)
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
plot_filter(h, sfreq, freq, gain, 'Minimum-phase filter', flim=flim)
###############################################################################
# Both the MNE-Python 0.13 and MNE-C filters have excellent frequency
# attenuation, but it comes at a cost of potential
# ringing (long-lasting ripples) in the time domain. Ringing can occur with
# steep filters, especially in signals with frequency content around the
# transition band. Our Morlet wavelet signal has power in our transition band,
# and the time-domain ringing is thus more pronounced for the steep-slope,
# long-duration filter than the shorter, shallower-slope filter:
axes = plt.subplots(1, 2)[1]
def plot_signal(x, offset):
    """Plot a signal."""
    t = np.arange(len(x)) / sfreq
    axes[0].plot(t, x + offset)
    axes[0].set(xlabel='Time (s)', xlim=t[[0, -1]])
    X = fft(x)
    freqs = fftfreq(len(x), 1. / sfreq)
    mask = freqs >= 0
    X = X[mask]
    freqs = freqs[mask]
    axes[1].plot(freqs, 20 * np.log10(np.maximum(np.abs(X), 1e-16)))
    axes[1].set(xlim=flim)
yscale = 30
yticklabels = ['Original', 'Noisy', 'FIR-firwin (0.16)', 'FIR-firwin2 (0.14)',
'FIR-steep (0.13)', 'FIR-steep (MNE-C)', 'Minimum-phase']
yticks = -np.arange(len(yticklabels)) / yscale
plot_signal(x_orig, offset=yticks[0])
plot_signal(x, offset=yticks[1])
plot_signal(x_v16, offset=yticks[2])
plot_signal(x_v14, offset=yticks[3])
plot_signal(x_v13, offset=yticks[4])
plot_signal(x_mne_c, offset=yticks[5])
plot_signal(x_min, offset=yticks[6])
axes[0].set(xlim=tlim, title='FIR, Lowpass=%d Hz' % f_p, xticks=tticks,
ylim=[-len(yticks) / yscale, 1. / yscale],
yticks=yticks, yticklabels=yticklabels)
for text in axes[0].get_yticklabels():
    text.set(rotation=45, size=8)
axes[1].set(xlim=flim, ylim=(-60, 10), xlabel='Frequency (Hz)',
ylabel='Magnitude (dB)')
mne.viz.tight_layout()
plt.show()
###############################################################################
# IIR filters
# ===========
#
# MNE-Python also offers IIR filtering functionality that is based on the
# methods from :mod:`scipy.signal`. Specifically, we use the general-purpose
# functions :func:`scipy.signal.iirfilter` and :func:`scipy.signal.iirdesign`,
# which provide unified interfaces to IIR filter design.
#
# Designing IIR filters
# ---------------------
#
# Let's continue with our design of a 40 Hz low-pass filter and look at
# some trade-offs of different IIR filters.
#
# Often the default IIR filter is a `Butterworth filter`_, which is designed
# to have a *maximally flat pass-band*. Let's look at a few filter orders,
# i.e., a few different numbers of coefficients, and the corresponding
# steepness of the filter:
#
# .. note:: Notice that the group delay (which is related to the phase) of
#           the IIR filters below is not constant. In the FIR case, we can
#           design so-called linear-phase filters that have a constant group
#           delay, and thus compensate for the delay (making the filter
#           non-causal) if necessary. This cannot be done with IIR filters, as
#           they have a non-linear phase (non-constant group delay). As the
#           filter order increases, the phase distortion near and in the
#           transition band worsens. However, if non-causal (forward-backward)
#           filtering can be used, e.g. with :func:`scipy.signal.filtfilt`,
#           these phase issues can theoretically be mitigated.
sos = signal.iirfilter(2, f_p / nyq, btype='low', ftype='butter', output='sos')
plot_filter(dict(sos=sos), sfreq, freq, gain, 'Butterworth order=2', flim=flim,
compensate=True)
x_shallow = signal.sosfiltfilt(sos, x)
del sos
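###############################################################################
# Note that above we applied the filter with :func:`scipy.signal.sosfiltfilt`,
# i.e., forward-backward, which makes the net result zero-phase (and squares
# the magnitude response). A one-pass application of the same filter would be
# causal but carry the non-constant delay discussed in the note above -- a
# minimal sketch:
sos_demo = signal.iirfilter(2, f_p / nyq, btype='low', ftype='butter',
                            output='sos')
x_causal = signal.sosfilt(sos_demo, x)  # one-pass: causal, varying delay
x_zero = signal.sosfiltfilt(sos_demo, x)  # two-pass: non-causal, zero-phase
del sos_demo, x_causal, x_zero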
###############################################################################
# The falloff of this filter is not very steep.
#
# .. note:: Here we have made use of second-order sections (SOS)
# by using :func:`scipy.signal.sosfilt` and, under the
# hood, :func:`scipy.signal.zpk2sos` when passing the
# ``output='sos'`` keyword argument to
# :func:`scipy.signal.iirfilter`. The filter definitions
# given :ref:`above <tut_filtering_basics>` use the polynomial
# numerator/denominator (sometimes called "tf") form ``(b, a)``,
# which are theoretically equivalent to the SOS form used here.
# In practice, however, the SOS form can give much better results
# due to issues with numerical precision (see
# :func:`scipy.signal.sosfilt` for an example), so SOS should be
# used whenever possible.
#
# Let's increase the order, and note that now we have better attenuation,
# with a longer impulse response. Let's also switch to using the MNE filter
# design function, which simplifies a few things and gives us some information
# about the resulting filter:
iir_params = dict(order=8, ftype='butter')
filt = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
method='iir', iir_params=iir_params,
verbose=True)
plot_filter(filt, sfreq, freq, gain, 'Butterworth order=8', flim=flim,
compensate=True)
x_steep = signal.sosfiltfilt(filt['sos'], x)
###############################################################################
# There are other types of IIR filters that we can use. For a complete list,
# check out the documentation for :func:`scipy.signal.iirdesign`. Let's
# try a Chebychev (type I) filter, which trades off ripple in the pass-band
# to get better attenuation in the stop-band:
iir_params.update(ftype='cheby1',
rp=1., # dB of acceptable pass-band ripple
)
filt = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
method='iir', iir_params=iir_params,
verbose=True)
plot_filter(filt, sfreq, freq, gain,
'Chebychev-1 order=8, ripple=1 dB', flim=flim, compensate=True)
###############################################################################
# If we can live with even more ripple, we can get it slightly steeper,
# but the impulse response begins to ring substantially longer (note the
# different x-axis scale):
iir_params['rp'] = 6.
filt = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
method='iir', iir_params=iir_params,
verbose=True)
plot_filter(filt, sfreq, freq, gain,
'Chebychev-1 order=8, ripple=6 dB', flim=flim,
compensate=True)
###############################################################################
# Applying IIR filters
# --------------------
#
# Now let's look at how our shallow and steep Butterworth IIR filters
# perform on our Morlet signal from before:
axes = plt.subplots(1, 2)[1]
yticks = np.arange(4) / -30.
yticklabels = ['Original', 'Noisy', 'Butterworth-2', 'Butterworth-8']
plot_signal(x_orig, offset=yticks[0])
plot_signal(x, offset=yticks[1])
plot_signal(x_shallow, offset=yticks[2])
plot_signal(x_steep, offset=yticks[3])
axes[0].set(xlim=tlim, title='IIR, Lowpass=%d Hz' % f_p, xticks=tticks,
ylim=[-0.125, 0.025], yticks=yticks, yticklabels=yticklabels,)
for text in axes[0].get_yticklabels():
    text.set(rotation=45, size=8)
axes[1].set(xlim=flim, ylim=(-60, 10), xlabel='Frequency (Hz)',
ylabel='Magnitude (dB)')
mne.viz.adjust_axes(axes)
mne.viz.tight_layout()
plt.show()
###############################################################################
# Some pitfalls of filtering
# ==========================
#
# Multiple recent papers have noted potential risks of drawing
# errant inferences due to misapplication of filters.
#
# Low-pass problems
# -----------------
#
# Filters in general, especially those that are non-causal (zero-phase), can
# make activity appear to occur earlier or later than it truly did. As
# mentioned in VanRullen (2011) :footcite:`VanRullen2011`, an investigation
# of commonly used (at the time) low-pass filters showed that they created
# artifacts when applied to simulated data. However, such deleterious effects
# were minimal in many real-world examples in Rousselet (2012)
# :footcite:`Rousselet2012`.
#
# Perhaps more revealing, it was noted in Widmann & Schröger (2012)
# :footcite:`WidmannSchroger2012` that the problematic low-pass filters from
# VanRullen (2011) :footcite:`VanRullen2011`:
#
# 1. Used a least-squares design (like :func:`scipy.signal.firls`) that
# included "do-not-care" transition regions, which can lead to
# uncontrolled behavior.
# 2. Had a filter length that was independent of the transition bandwidth,
# which can cause excessive ringing and signal distortion.
#
# .. _tut_filtering_hp_problems:
#
# High-pass problems
# ------------------
#
# When it comes to high-pass filtering, corner frequencies above 0.1 Hz were
# found in Acunzo *et al.* (2012) :footcite:`AcunzoEtAl2012` to:
#
#     "... generate a systematic bias easily leading to misinterpretations of
#     neural activity."
#
# In a related paper, Widmann *et al.* (2015) :footcite:`WidmannEtAl2015`
# also came to suggest a 0.1 Hz highpass. More evidence of such distortions
# followed in Tanner *et al.* (2015) :footcite:`TannerEtAl2015`: using data
# from language ERP studies of semantic and syntactic processing (i.e., N400
# and P600), they showed that a high-pass above 0.3 Hz caused significant
# effects to be introduced implausibly early when compared to the unfiltered
# data. From this, the authors suggested the optimal high-pass value for
# language processing to be 0.1 Hz.
#
# We can recreate a problematic simulation from
# Tanner *et al.* (2015) :footcite:`TannerEtAl2015`:
#
# "The simulated component is a single-cycle cosine wave with an amplitude
# of 5µV [sic], onset of 500 ms poststimulus, and duration of 800 ms. The
# simulated component was embedded in 20 s of zero values to avoid
# filtering edge effects... Distortions [were] caused by 2 Hz low-pass
# and high-pass filters... No visible distortion to the original
# waveform [occurred] with 30 Hz low-pass and 0.01 Hz high-pass filters...
# Filter frequencies correspond to the half-amplitude (-6 dB) cutoff
# (12 dB/octave roll-off)."
#
# .. note:: This simulated signal contains energy not just within the
# pass-band, but also within the transition and stop-bands -- perhaps
# most easily understood because the signal has a non-zero DC value,
# but also because it is a shifted cosine that has been
# *windowed* (here multiplied by a rectangular window), which
# makes the cosine and DC frequencies spread to other frequencies
# (multiplication in time is convolution in frequency, so multiplying
# by a rectangular window in the time domain means convolving a sinc
# function with the impulses at DC and the cosine frequency in the
# frequency domain).
#
x = np.zeros(int(2 * sfreq))
t = np.arange(0, len(x)) / sfreq - 0.2
onset = np.where(t >= 0.5)[0][0]
cos_t = np.arange(0, int(sfreq * 0.8)) / sfreq
sig = 2.5 - 2.5 * np.cos(2 * np.pi * (1. / 0.8) * cos_t)
x[onset:onset + len(sig)] = sig
# scipy's Wn is normalized to the Nyquist frequency (sfreq / 2.)
iir_lp_30 = signal.iirfilter(2, 30. / nyq, btype='lowpass')
iir_hp_p1 = signal.iirfilter(2, 0.1 / nyq, btype='highpass')
iir_lp_2 = signal.iirfilter(2, 2. / nyq, btype='lowpass')
iir_hp_2 = signal.iirfilter(2, 2. / nyq, btype='highpass')
x_lp_30 = signal.filtfilt(iir_lp_30[0], iir_lp_30[1], x, padlen=0)
x_hp_p1 = signal.filtfilt(iir_hp_p1[0], iir_hp_p1[1], x, padlen=0)
x_lp_2 = signal.filtfilt(iir_lp_2[0], iir_lp_2[1], x, padlen=0)
x_hp_2 = signal.filtfilt(iir_hp_2[0], iir_hp_2[1], x, padlen=0)
xlim = t[[0, -1]]
ylim = [-2, 6]
xlabel = 'Time (sec)'
ylabel = r'Amplitude ($\mu$V)'
tticks = [0, 0.5, 1.3, t[-1]]
axes = plt.subplots(2, 2)[1].ravel()
for ax, x_f, title in zip(axes, [x_lp_2, x_lp_30, x_hp_2, x_hp_p1],
                          ['LP$_2$', 'LP$_{30}$', 'HP$_2$', 'HP$_{0.1}$']):
    ax.plot(t, x, color='0.5')
    ax.plot(t, x_f, color='k', linestyle='--')
    ax.set(ylim=ylim, xlim=xlim, xticks=tticks,
           title=title, xlabel=xlabel, ylabel=ylabel)
mne.viz.adjust_axes(axes)
mne.viz.tight_layout()
plt.show()
###############################################################################
# Similarly, in a P300 paradigm reported by
# Kappenman & Luck (2010) :footcite:`KappenmanLuck2010`,
# they found that applying a 1 Hz high-pass decreased the probability of
# finding a significant difference in the N100 response, likely because
# the P300 response was smeared (and inverted) in time by the high-pass
# filter such that it tended to cancel out the increased N100. However,
# they nonetheless note that some high-passing can still be useful to deal
# with drifts in the data.
#
# Even though these papers generally advise a 0.1 Hz or lower frequency for
# a high-pass, it is important to keep in mind (as most authors note) that
# filtering choices should depend on the frequency content of both the
# signal(s) of interest and the noise to be suppressed. For example, in
# some of the MNE-Python examples involving the :ref:`sample-dataset` dataset,
# high-pass values of around 1 Hz are used when looking at auditory
# or visual N100 responses, because we analyze standard (not deviant) trials
# and thus expect that contamination by later or slower components will
# be limited.
#
# Baseline problems (or solutions?)
# ---------------------------------
#
# In an evolving discussion, Tanner *et al.* (2015) :footcite:`TannerEtAl2015`
# suggest using baseline correction to remove slow drifts in data. However,
# Maess *et al.* (2016) :footcite:`MaessEtAl2016`
# suggest that baseline correction, which is a form of high-passing, does
# not offer substantial advantages over standard high-pass filtering.
# Tanner *et al.* (2016) :footcite:`TannerEtAl2016`
# rebutted that baseline correction can correct for problems with filtering.
#
# To see what they mean, consider again our old simulated signal ``x`` from
# before:
def baseline_plot(x):
    all_axes = plt.subplots(3, 2)[1]
    for ri, (axes, freq) in enumerate(zip(all_axes, [0.1, 0.3, 0.5])):
        for ci, ax in enumerate(axes):
            if ci == 0:
                # Wn is normalized to the Nyquist frequency
                iir_hp = signal.iirfilter(4, freq / nyq, btype='highpass',
                                          output='sos')
                x_hp = signal.sosfiltfilt(iir_hp, x, padlen=0)
            else:
                x_hp -= x_hp[t < 0].mean()
            ax.plot(t, x, color='0.5')
            ax.plot(t, x_hp, color='k', linestyle='--')
            if ri == 0:
                ax.set(title=('No ' if ci == 0 else '') +
                       'Baseline Correction')
            ax.set(xticks=tticks, ylim=ylim, xlim=xlim, xlabel=xlabel)
            ax.set_ylabel('%0.1f Hz' % freq, rotation=0,
                          horizontalalignment='right')
    mne.viz.adjust_axes(all_axes)
    mne.viz.tight_layout()
    plt.suptitle('High-pass filtering with and without baseline correction')
    plt.show()
baseline_plot(x)
###############################################################################
# In response, Maess *et al.* (2016) :footcite:`MaessEtAl2016a`
# note that these simulations do not
# address cases of pre-stimulus activity that is shared across conditions, as
# applying baseline correction will effectively copy the topology outside the
# baseline period. We can see this if we give our signal ``x`` some
# consistent pre-stimulus activity, which makes everything look bad.
#
# .. note:: An important thing to keep in mind with these plots is that they
# are for a single simulated sensor. In multi-electrode recordings
# the topology (i.e., spatial pattern) of the pre-stimulus activity
# will leak into the post-stimulus period. This will likely create a
# spatially varying distortion of the time-domain signals, as the
# averaged pre-stimulus spatial pattern gets subtracted from the
# sensor time courses.
#
# Putting some activity in the baseline period:
n_pre = (t < 0).sum()
sig_pre = 1 - np.cos(2 * np.pi * np.arange(n_pre) / (0.5 * n_pre))
x[:n_pre] += sig_pre
baseline_plot(x)
###############################################################################
# Both groups seem to acknowledge that the choices of filtering cutoffs, and
# perhaps even the application of baseline correction, depend on the
# characteristics of the data being investigated, especially when it comes to:
#
# 1. The frequency content of the underlying evoked activity relative
# to the filtering parameters.
# 2. The validity of the assumption of no consistent evoked activity
# in the baseline period.
#
# We thus recommend carefully applying baseline correction and/or high-pass
# values based on the characteristics of the data to be analyzed.
#
#
# Filtering defaults
# ==================
#
# .. _tut_filtering_in_python:
#
# Defaults in MNE-Python
# ----------------------
#
# Most often, filtering in MNE-Python is done at the :class:`mne.io.Raw` level,
# and thus :func:`mne.io.Raw.filter` is used. This function under the hood
# (among other things) calls :func:`mne.filter.filter_data` to actually
# filter the data, which by default applies a zero-phase FIR filter designed
# using :func:`scipy.signal.firwin`.
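#
# For example (a minimal sketch, assuming ``raw`` is an already-loaded
# :class:`mne.io.Raw` instance)::
#
#     raw.filter(l_freq=0.1, h_freq=40.)  # zero-phase FIR band-pass
#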
# In Widmann *et al.* (2015) :footcite:`WidmannEtAl2015`, they
# suggest a specific set of parameters to use for high-pass filtering,
# including:
#
# "... providing a transition bandwidth of 25% of the lower passband
# edge but, where possible, not lower than 2 Hz and otherwise the
# distance from the passband edge to the critical frequency.”
#
# In practice, this means that for each high-pass value ``l_freq`` or
# low-pass value ``h_freq`` below, you would get this corresponding
# ``l_trans_bandwidth`` or ``h_trans_bandwidth``, respectively,
# if the sample rate were 100 Hz (i.e., Nyquist frequency of 50 Hz):
#
# +------------------+-------------------+-------------------+
# | l_freq or h_freq | l_trans_bandwidth | h_trans_bandwidth |
# +==================+===================+===================+
# | 0.01 | 0.01 | 2.0 |
# +------------------+-------------------+-------------------+
# | 0.1 | 0.1 | 2.0 |
# +------------------+-------------------+-------------------+
# | 1.0 | 1.0 | 2.0 |
# +------------------+-------------------+-------------------+
# | 2.0 | 2.0 | 2.0 |
# +------------------+-------------------+-------------------+
# | 4.0 | 2.0 | 2.0 |
# +------------------+-------------------+-------------------+
# | 8.0 | 2.0 | 2.0 |
# +------------------+-------------------+-------------------+
# | 10.0 | 2.5 | 2.5 |
# +------------------+-------------------+-------------------+
# | 20.0 | 5.0 | 5.0 |
# +------------------+-------------------+-------------------+
# | 40.0 | 10.0 | 10.0 |
# +------------------+-------------------+-------------------+
# | 50.0 | 12.5 | 12.5 |
# +------------------+-------------------+-------------------+
#
# MNE-Python has adopted this definition for its high-pass (and low-pass)
# transition bandwidth choices when using ``l_trans_bandwidth='auto'`` and
# ``h_trans_bandwidth='auto'``.
#
# To choose the filter length automatically with ``filter_length='auto'``,
# the reciprocal of the shortest transition bandwidth is used to ensure
# decent attenuation at the stop frequency. Specifically, the reciprocal
# (in samples) is multiplied by 3.1, 3.3, or 5.0 for the Hann, Hamming,
# or Blackman windows, respectively, as selected by the ``fir_window``
# argument for ``fir_design='firwin'``, and double these for
# ``fir_design='firwin2'`` mode.
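#
# As a minimal sketch of these two rules (the authoritative implementation
# lives in :func:`mne.filter.create_filter`; the helper name below is ours,
# and edge cases near DC and the Nyquist frequency are ignored):

def auto_lowpass_params(h_freq, sfreq, fir_window='hamming'):
    """Approximate 'auto' transition bandwidth and firwin filter length."""
    h_trans = min(max(h_freq * 0.25, 2.), sfreq / 2. - h_freq)
    factor = dict(hann=3.1, hamming=3.3, blackman=5.0)[fir_window]
    n_taps = int(round(factor / h_trans * sfreq))  # doubled for firwin2
    return h_trans, n_taps

print(auto_lowpass_params(40., sfreq=100.))  # -> (10.0, 33), as in the table

###############################################################################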
#
# .. note:: For ``fir_design='firwin2'``, the multiplicative factors are
# doubled compared to what is given in
# Ifeachor & Jervis (2002) :footcite:`IfeachorJervis2002`
# (p. 357), as :func:`scipy.signal.firwin2` has a smearing effect
# on the frequency response, which we compensate for by
# increasing the filter length. This is why
# ``fir_design='firwin'`` is preferred to ``fir_design='firwin2'``.
#
# In 0.14, we default to using a Hamming window in filter design, as it
# provides up to 53 dB of stop-band attenuation with small pass-band ripple.
#
# .. note:: In band-pass applications, often a low-pass filter can operate
# effectively with fewer samples than the high-pass filter, so
# it is advisable to apply the high-pass and low-pass separately
# when using ``fir_design='firwin2'``. For design mode
# ``fir_design='firwin'``, there is no need to separate the
# operations, as the lowpass and highpass elements are constructed
# separately to meet the transition band requirements.
#
# For more information on how to use the
# MNE-Python filtering functions with real data, consult the preprocessing
# tutorial on :ref:`tut-filter-resample`.
#
# Defaults in MNE-C
# -----------------
# MNE-C by default uses:
#
# 1. 5 Hz transition band for low-pass filters.
# 2. 3-sample transition band for high-pass filters.
# 3. Filter length of 8197 samples.
#
# The filter is designed in the frequency domain, creating a linear-phase
# filter such that the delay is compensated for as is done with the MNE-Python
# ``phase='zero'`` filtering option.
#
# Squared-cosine ramps are used in the transition regions. Because these
# are used in place of more gradual (e.g., linear) transitions,
# a given transition width will result in more temporal ringing but also more
# rapid attenuation than the same transition width in windowed FIR designs.
#
# The default filter length will generally have excellent attenuation
# but long ringing for the sample rates typically encountered in M/EEG data
# (e.g. 500-2000 Hz).
#
# Defaults in other software
# --------------------------
# A good but possibly outdated comparison of filtering in various software
# packages is available in Widmann *et al.* (2015) :footcite:`WidmannEtAl2015`.
# Briefly:
#
# * EEGLAB
# MNE-Python 0.14 defaults to behavior very similar to that of EEGLAB
# (see the `EEGLAB filtering FAQ`_ for more information).
# * FieldTrip
# By default FieldTrip applies a forward-backward Butterworth IIR filter
# of order 4 (band-pass and band-stop filters) or 2 (for low-pass and
# high-pass filters). Similar filters can be achieved in MNE-Python when
# filtering with :meth:`raw.filter(..., method='iir') <mne.io.Raw.filter>`
# (see also :func:`mne.filter.construct_iir_filter` for options).
# For more information, see e.g. the
# `FieldTrip band-pass documentation <ftbp_>`_.
#
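# For example, a FieldTrip-like 4th-order Butterworth low-pass could be
# approximated in MNE-Python with (a sketch, assuming ``raw`` is an
# already-loaded :class:`mne.io.Raw` instance)::
#
#     raw.filter(l_freq=None, h_freq=40., method='iir',
#                iir_params=dict(order=4, ftype='butter'))
#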
# Reporting Filters
# =================
# On page 45 in Widmann *et al.* (2015) :footcite:`WidmannEtAl2015`,
# there is a convenient list of
# important filter parameters that should be reported with each publication:
#
# 1. Filter type (high-pass, low-pass, band-pass, band-stop, FIR, IIR)
# 2. Cutoff frequency (including definition)
# 3. Filter order (or length)
# 4. Roll-off or transition bandwidth
# 5. Passband ripple and stopband attenuation
# 6. Filter delay (zero-phase, linear-phase, non-linear phase) and causality
# 7. Direction of computation (one-pass forward/reverse, or two-pass forward
# and reverse)
#
# In the following, we will address how to deal with these parameters in MNE:
#
#
# Filter type
# -----------
# Depending on the function or method used, the filter type can be specified.
# To name an example, in :func:`mne.filter.create_filter`, the relevant
# arguments would be ``l_freq``, ``h_freq``, ``method``, and, if the method is
# FIR, ``fir_window`` and ``fir_design``.
#
#
# Cutoff frequency
# ----------------
# The cutoff of FIR filters in MNE is defined as half-amplitude cutoff in the
# middle of the transition band. That is, if you construct a lowpass FIR filter
# with ``h_freq = 40``, the filter function will provide a transition
# bandwidth that depends on the ``h_trans_bandwidth`` argument. The desired
# half-amplitude cutoff of the lowpass FIR filter is then at
# ``h_freq + transition_bandwidth/2.``.
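#
# A quick numerical check of this convention (``data=None`` skips the data
# sanity checks, as in the example further below):

h_check = mne.filter.create_filter(None, 1000., l_freq=None, h_freq=40.,
                                   h_trans_bandwidth=10.,
                                   fir_design='firwin', verbose=False)
w, H = signal.freqz(h_check, worN=8192)
f_hz = w * 1000. / (2. * np.pi)  # convert rad/sample -> Hz
print('Half amplitude at %0.1f Hz' % f_hz[np.argmin(np.abs(np.abs(H) - 0.5))])
# -> ~45.0 Hz, i.e., h_freq + h_trans_bandwidth / 2.

###############################################################################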
#
# Filter length (order) and transition bandwidth (roll-off)
# ---------------------------------------------------------
# In the :ref:`tut_filtering_in_python` section, we have already talked about
# the default filter lengths and transition bandwidths that are used when no
# custom values are specified using the respective filter function's arguments.
#
# If you want to find out about the filter length and transition bandwidth that
# were used through the 'auto' setting, you can use
# :func:`mne.filter.create_filter` to print out the settings once more:
# Use the same settings as when calling e.g., `raw.filter()`
fir_coefs = mne.filter.create_filter(
data=None, # data is only used for sanity checking, not strictly needed
sfreq=1000., # sfreq of your data in Hz
l_freq=None,
h_freq=40., # assuming a lowpass of 40 Hz
method='fir',
fir_window='hamming',
fir_design='firwin',
verbose=True)
# See the printed log for the transition bandwidth and filter length.
# Alternatively, get the filter length through:
filter_length = fir_coefs.shape[0]
###############################################################################
# .. note:: If you are using an IIR filter, :func:`mne.filter.create_filter`
# will not print a filter length and transition bandwidth to the log.
# Instead, you can specify the roll-off with the ``iir_params``
# argument or stay with the default, which is a fourth order
# (Butterworth) filter.
#
# Passband ripple and stopband attenuation
# ----------------------------------------
#
# When using the standard :func:`scipy.signal.firwin` design (as for FIR
# filters in MNE), the passband ripple and stopband attenuation depend on the
# window used in the design. For standard windows the values are listed in
# this table (see Ifeachor & Jervis (2002) :footcite:`IfeachorJervis2002`,
# p. 357):
#
# +-------------------------+-----------------+----------------------+
# | Name of window function | Passband ripple | Stopband attenuation |
# +=========================+=================+======================+
# | Hann | 0.0545 dB | 44 dB |
# +-------------------------+-----------------+----------------------+
# | Hamming | 0.0194 dB | 53 dB |
# +-------------------------+-----------------+----------------------+
# | Blackman | 0.0017 dB | 74 dB |
# +-------------------------+-----------------+----------------------+
#
#
# Filter delay and direction of computation
# -----------------------------------------
# For reporting this information, it might be sufficient to read the docstring
# of the filter function or method that you apply. For example in the
# docstring of `mne.filter.create_filter`, for the phase parameter it says:
#
# Phase of the filter, only used if ``method='fir'``.
# By default, a symmetric linear-phase FIR filter is constructed.
# If ``phase='zero'`` (default), the delay of this filter
# is compensated for. If ``phase='zero-double'``, then this filter
# is applied twice, once forward, and once backward. If 'minimum',
# then a minimum-phase, causal filter will be used.
#
#
# Summary
# =======
#
# When filtering, there are always trade-offs that should be considered.
# One important trade-off is between time-domain characteristics (like ringing)
# and frequency-domain attenuation characteristics (like effective transition
# bandwidth). Filters with sharp frequency cutoffs can produce outputs that
# ring for a long time when they operate on signals with frequency content
# in the transition band. In general, therefore, the wider a transition band
# that can be tolerated, the better behaved the filter will be in the time
# domain.
#
# References
# ==========
# .. footbibliography::
#
# .. _FIR: https://en.wikipedia.org/wiki/Finite_impulse_response
# .. _IIR: https://en.wikipedia.org/wiki/Infinite_impulse_response
# .. _sinc: https://en.wikipedia.org/wiki/Sinc_function
# .. _moving average: https://en.wikipedia.org/wiki/Moving_average
# .. _autoregression: https://en.wikipedia.org/wiki/Autoregressive_model
# .. _Remez: https://en.wikipedia.org/wiki/Remez_algorithm
# .. _matlab firpm: https://www.mathworks.com/help/signal/ref/firpm.html
# .. _matlab fir2: https://www.mathworks.com/help/signal/ref/fir2.html
# .. _matlab firls: https://www.mathworks.com/help/signal/ref/firls.html
# .. _Butterworth filter: https://en.wikipedia.org/wiki/Butterworth_filter
# .. _eeglab filtering faq: https://sccn.ucsd.edu/wiki/Firfilt_FAQ
# .. _ftbp: http://www.fieldtriptoolbox.org/reference/ft_preproc_bandpassfilter

================================================================================
End of file: tutorials/preprocessing/25_background_filtering.py
(repo: rkmaddox/mne-python, language: Python, license: bsd-3-clause)
================================================================================

# Mantid Repository : https://github.com/mantidproject/mantid
#
# Copyright © 2018 ISIS Rutherford Appleton Laboratory UKRI,
# NScD Oak Ridge National Laboratory, European Spallation Source
# & Institut Laue - Langevin
# SPDX-License-Identifier: GPL-3.0+
from __future__ import (absolute_import, division, print_function)
from mantid.kernel import logger
import AbinsModules
import six
from mantid.kernel import Atom
class GeneralAbInitioProgramName(type):
    def __str__(self):
        return self.__name__


# noinspection PyMethodMayBeStatic
@six.add_metaclass(GeneralAbInitioProgramName)
class GeneralAbInitioProgram(object):
    """
    A general class which groups all methods which should be inherited or implemented by an ab initio program used
    in INS analysis.
    """
    def __init__(self, input_ab_initio_filename=None):
        self._num_k = None
        self._num_atoms = None
        self._sample_form = None
        self._ab_initio_program = None
        self._clerk = AbinsModules.IOmodule(
            input_filename=input_ab_initio_filename,
            group_name=AbinsModules.AbinsParameters.ab_initio_group)
    def read_vibrational_or_phonon_data(self):
        """
        This method is different for different ab initio programs, so it has to be overridden by the inheriting
        class. It reads vibrational or phonon data produced by an ab initio program and should do the following:

        1) Open the file with vibrational or phonon data (CASTEP: foo.phonon). The name of the file should be
           stored in self._input_filename. There must be no spaces in the name of the file. The extension of the
           file (the part of the name after '.') is arbitrary.

        2) Read from the ab initio file the information about frequencies, atomic displacements,
           k-point vectors, weights of k-points and ions.

        3) Reconstruct the data for symmetry equivalent k-points
           (protected method _recover_symmetry_points).

           **Notice: this step is not implemented now. At the moment only Gamma point calculations are supported.**

        4) Determine symmetry equivalent atoms.

           **Notice: this step is not implemented now.**

        5) Calculate the hash of the file with vibrational or phonon data (protected method _calculateHash).

        6) Store the vibrational or phonon data in an hdf file (inherited method save()). The name of the hdf
           file is foo.hdf5 (CASTEP: foo.phonon -> foo.hdf5). In order to save the data to the hdf file the
           following fields should be set:

               self._hdf_filename
               self._group_name
               self._attributes
               self._datasets

           The datasets should be a dictionary with the following entries:

               "frequencies" - frequencies for all k-points grouped in one numpy.array in cm^-1
               "weights" - weights of all k-points in one numpy.array
               "k_vectors" - all k-points in one numpy array

                   **Notice: both symmetry equivalent and inequivalent points should be stored;
                   at the moment only Gamma point calculations are supported**

               "atomic_displacements" - atomic displacements for all atoms and all k-points in one numpy array
               "unit_cell" - numpy array with unit cell vectors in Angstroms

           The following structured datasets should also be defined:

               "atoms" - Python dictionary with the information about ions. Each entry in the dictionary has
                         the form 'atom_n', where n is the number of the atom in the unit cell. Each entry
                         'atom_n' is itself a dictionary with the following entries:

                             "symbol" - chemical symbol of the element (for example hydrogen -> H)

                             "sort" - defines symmetry equivalent atoms, e.g., atoms with the same sort are
                                      symmetry equivalent
                                      **Notice: at the moment this parameter is not functional in LoadCastep**

                             "coord" - equilibrium position of the atom in Angstroms;
                                       it has the form of a numpy array with three floats

                             "mass" - mass of the atom

           The attributes should be a dictionary with the following entries:

               "hash" - hash of the file with the vibrational or phonon data. It should be a string
                        representation of the hash.

               "ab_initio_program" - name of the ab initio program which was used to obtain the vibrational
                                     or phonon data (for CASTEP -> CASTEP).

               "filename" - name of the input ab initio file

        For more details about these fields please look at the documentation of the IOmodule class.

        :returns: This method should return an object of type AbinsData.
        """
        return None
    def load_formatted_data(self):
        """
        Loads data from an hdf file. After the data is loaded it is put into an AbinsData object.

        :returns: object of type AbinsData
        """
        data = self._clerk.load(list_of_datasets=["frequencies", "weights", "k_vectors",
                                                  "atomic_displacements", "unit_cell", "atoms"])
        datasets = data["datasets"]
        self._num_k = datasets["k_vectors"].shape[0]
        self._num_atoms = len(datasets["atoms"])
        loaded_data = {"frequencies": datasets["frequencies"],
                       "weights": datasets["weights"],
                       "k_vectors": datasets["k_vectors"],
                       "atomic_displacements": datasets["atomic_displacements"],
                       "unit_cell": datasets["unit_cell"],
                       "atoms": datasets["atoms"]}
        return self._rearrange_data(data=loaded_data)
    # Protected methods which should be reused by classes which read ab initio phonon data
    def _recover_symmetry_points(self, data=None):
        """
        This method reconstructs symmetry equivalent k-points.

        :param data: dictionary with the data for only the symmetry inequivalent k-points. This method
                     adds phonon data for the symmetry equivalent k-points to this dictionary.
        """
        pass
    def _rearrange_data(self, data=None):
        """
        This method rearranges data read from the input ab initio file.

        :param data: dictionary with the data to rearrange
        :returns: an object of type AbinsData
        """
        k_points = AbinsModules.KpointsData(num_atoms=self._num_atoms, num_k=self._num_k)
        # 1D [k] (one entry corresponds to the weight of one k-point)
        k_points.set({"weights": data["weights"],
                      # 2D [k][3] (one entry corresponds to one coordinate of a particular k-point)
                      "k_vectors": data["k_vectors"],
                      # 2D array [k][freq] (one entry corresponds to one frequency for the k-point k)
                      "frequencies": data["frequencies"],
                      # 4D array [k][atom_n][freq][3] (one entry corresponds to
                      # one coordinate for atom atom_n, frequency freq and k-point k)
                      "atomic_displacements": data["atomic_displacements"],
                      "unit_cell": data["unit_cell"]
                      })
        atoms = AbinsModules.AtomsDaTa(num_atoms=self._num_atoms)
        atoms.set(data["atoms"])
        result_data = AbinsModules.AbinsData()
        result_data.set(k_points_data=k_points, atoms_data=atoms)
        return result_data
    def save_ab_initio_data(self, data=None):
        """
        Saves ab initio data to an HDF5 file.

        :param data: dictionary with data to be saved.
        """
        for name in data:
            self._clerk.add_data(name=name, value=data[name])
        self._clerk.add_file_attributes()
        self._clerk.add_attribute("ab_initio_program", self._ab_initio_program)
        self._clerk.save()
    def get_formatted_data(self):
        # try to load ab initio data from the *.hdf5 file
        try:
            if self._ab_initio_program != self._clerk.get_previous_ab_initio_program():
                raise ValueError("Different ab initio program was used in the previous calculation. Data in the hdf "
                                 "file will be erased.")
            self._clerk.check_previous_data()
            ab_initio_data = self.load_formatted_data()
            logger.notice(str(ab_initio_data) + " has been loaded from the HDF file.")
        # if loading from the *.hdf5 file failed, then read the data directly from the input ab initio file
        # and erase the hdf file
        except (IOError, ValueError) as err:
            logger.notice(str(err))
            self._clerk.erase_hdf_file()
            ab_initio_data = self.read_vibrational_or_phonon_data()
            logger.notice(str(ab_initio_data) + " from ab initio input file has been loaded.")
        return ab_initio_data
    def check_isotopes_substitution(self, atoms=None, masses=None, approximate=False):
        """
        Updates atomic masses in case of isotopic substitution.

        :param atoms: dictionary with atoms to check
        :param masses: atomic masses read from an ab initio file
        :param approximate: whether or not to look for isotopes in an approximate way
        """
        num_atoms = len(atoms)
        eps = AbinsModules.AbinsConstants.MASS_EPS
        if approximate:
            isotopes_found = [abs(round(atoms["atom_%s" % i]["mass"]) - round(masses[i])) > eps
                              for i in range(num_atoms)]
        else:
            isotopes_found = [abs(atoms["atom_%s" % i]["mass"] - masses[i]) > eps for i in range(num_atoms)]
        if any(isotopes_found):
            for i in range(num_atoms):
                if isotopes_found[i]:
                    z_num = Atom(symbol=atoms["atom_{}".format(i)]["symbol"]).z_number
                    a_num = int(round(masses[i]))
                    try:
                        temp = Atom(a_number=a_num, z_number=z_num).mass
                        atoms["atom_{}".format(i)]["mass"] = temp
                    # no mass available for this isotope; assume no isotopic substitution for this atom
                    except RuntimeError:
                        pass
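
# For illustration only -- a minimal, hypothetical usage sketch of
# ``check_isotopes_substitution`` (the atom dictionary layout follows the
# docstring of ``read_vibrational_or_phonon_data`` above; ``loader`` stands
# in for an instance of any concrete subclass):
#
#     loader = SomeConcreteAbInitioLoader(input_ab_initio_filename="foo.phonon")
#     atoms = {"atom_0": {"symbol": "H", "sort": 0,
#                         "coord": np.asarray([0., 0., 0.]), "mass": 1.008}}
#     masses = [2.014]  # mass read from the file suggests deuterium
#     loader.check_isotopes_substitution(atoms=atoms, masses=masses)
#     # atoms["atom_0"]["mass"] now holds the deuterium mass from mantid.kernel.Atom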

================================================================================
End of file: scripts/AbinsModules/GeneralAbInitioProgram.py
(repo: mganeva/mantid, language: Python, license: gpl-3.0)
================================================================================

import pytest
from events import renderers
from events.models import Location, CustomEvent
@pytest.fixture
def simple_component_renderer(mocker):
    mocker.patch.multiple(
        'events.renderers',
        render_event=str,
        render_block_location=lambda v: '{} '.format(v),
    )
@pytest.fixture
def time_map(events):
    times = set()
    for e in events.values():
        times.add(e.begin_time)
        times.add(e.end_time)
    return {time: i for i, time in enumerate(sorted(times))}
# Hack to get all possible location values by introspection.
POSSIBLE_LOCATIONS = sorted(
getattr(Location, k)
for k in Location.__dict__ if k.isupper() and k != 'OTHER'
)
@pytest.mark.parametrize('location', POSSIBLE_LOCATIONS)
def test_render_block_location(parser, utils, location):
    rendered = renderers.render_block_location(location)
    assert utils.is_safe(rendered)
    expected = {
        Location.ALL: (
            '<div class="slot-item__label slot-item__label--all"></div>'
        ),
        Location.R012: (
            '<div class="slot-item__label slot-item__label--r012">'
            'R0 R1 R2</div>'
        ),
        Location.R0: (
            '<div class="slot-item__label slot-item__label--r0">'
            'R0</div>'
        ),
        Location.R1: (
            '<div class="slot-item__label slot-item__label--r1">'
            'R1</div>'
        ),
        Location.R2: (
            '<div class="slot-item__label slot-item__label--r2">'
            'R2</div>'
        ),
        Location.R3: (
            '<div class="slot-item__label slot-item__label--r3">'
            'R3</div>'
        ),
        Location.R4: (
            '<div class="slot-item__label slot-item__label--r4">'
            'R4</div>'
        ),
    }[location]
    assert parser.arrange(rendered) == parser.arrange(expected)
@pytest.mark.parametrize('event_key', [
'custom_event', 'keynote_event', 'proposed_talk_event', 'sponsored_event',
])
@pytest.mark.usefixtures('simple_component_renderer')
def test_render_block(parser, utils, time_map, events, event_key):
    e = events[event_key]
    rendered = renderers.render_block(e, time_map, [e])
    assert utils.is_safe(rendered)
    expected = {
        'custom_event': """
            <div class="slot-item slot-item--w3 slot-item--hsmall">
            3-r012 Job Fair
            </div>""",
        'keynote_event': """
            <div class="slot-item slot-item--w4 slot-item--hsmall">
            2-all Keynote: Amber Brown
            </div>""",
        'proposed_talk_event': """
            <div class="slot-item slot-item--w1 slot-item--h1">
            4-r0 Beyond the Style Guides<br>
            </div>""",
        'sponsored_event': """
            <div class="slot-item slot-item--w1 slot-item--h1">
            6-r2 Camera engine office woman lights
            </div>""",
    }[event_key]
    assert parser.arrange(rendered) == parser.arrange(expected)
@pytest.mark.parametrize('event_key,begin,end', [
('custom_event', '14:45', '15:15'),
('keynote_event', '9:00', '10:00'),
('proposed_talk_event', '16:00', '16:45'),
('sponsored_event', '11:00', '11:25'),
])
def test_render_attached_period(utils, events, event_key, begin, end):
e = events[event_key]
rendered = renderers.render_attached_period(e.begin_time, e.end_time)
assert utils.is_safe(rendered)
assert rendered == (
'<div class="attached time-table__time">{} – {}</div>'.format(
begin, end,
)
)
@pytest.mark.parametrize('time_count', [2, 3, 4])
def test_render_columned_period(parser, utils, make_time, time_count):
times = [make_time(h) for h in range(time_count)]
rendered, _ = renderers.render_columned_period(times, [
CustomEvent(
title='M<3', location=Location.ALL,
begin_time=begin_time, end_time=end_time,
)
for begin_time, end_time in zip(times[:-1], times[1:])
])
assert utils.is_safe(rendered)
expected = {
2: (
'<div class="columned time-table__time time-table__time--row-span '
'time-table__time--hsmall">'
' <div class="time__cell">0:00<br>|<br>1:00</div>'
'</div>'
),
3: (
'<div class="columned time-table__time time-table__time--row-span '
'time-table__time--h2">'
' <div class="time__cell">0:00<br>|<br>1:00</div>'
' <div class="time__cell">1:00<br>|<br>2:00</div>'
'</div>'
),
4: (
'<div class="columned time-table__time time-table__time--row-span '
'time-table__time--h3">'
' <div class="time__cell">0:00<br>|<br>1:00</div>'
' <div class="time__cell">1:00<br>|<br>2:00</div>'
' <div class="time__cell">2:00<br>|<br>3:00</div>'
'</div>'
),
}[time_count]
assert parser.arrange(rendered) == parser.arrange(expected)
|
pycontw/pycontw2016
|
src/events/tests/renderers/test_render_block.py
|
Python
|
mit
| 4,987
|
[
"Amber"
] |
9952fc50057c4d09e1a7699403441f4f36362f4f60fa6a3edc893374a77b36ae
|
from __future__ import print_function
import json
import os
import urllib2
import boto
from boto.s3.key import Key
import assaytools.version
if assaytools.version.release:
# The secret key is available as a secure environment variable
# on travis-ci to push the build documentation to Amazon S3.
AWS_ACCESS_KEY_ID = os.environ['AWS_ACCESS_KEY_ID']
AWS_SECRET_ACCESS_KEY = os.environ['AWS_SECRET_ACCESS_KEY']
BUCKET_NAME = 'assaytools'
bucket_name = AWS_ACCESS_KEY_ID.lower() + '-' + BUCKET_NAME
conn = boto.connect_s3(AWS_ACCESS_KEY_ID,
AWS_SECRET_ACCESS_KEY)
bucket = conn.get_bucket(BUCKET_NAME)
root = 'doc/_build'
versions = json.load(urllib2.urlopen('http://mdtraj.org/versions.json'))
# new release so all the others are now old
for i in xrange(len(versions)):
versions[i]['latest'] = False
    versions.append({'version': assaytools.version.short_version, 'latest': True})
k = Key(bucket)
k.key = 'versions.json'
k.set_contents_from_string(json.dumps(versions))
else:
print("This is not a release.")
|
MehtapIsik/assaytools
|
devtools/travis-ci/update-versions.py
|
Python
|
lgpl-2.1
| 1,078
|
[
"MDTraj"
] |
5178f4eec837c801e3d2ff056cd5fe0495108b754461a8a685856c00f3f11d1e
|
"""
Multilayer Perceptron
"""
__authors__ = "Ian Goodfellow"
__copyright__ = "Copyright 2012-2013, Universite de Montreal"
__credits__ = ["Ian Goodfellow", "David Warde-Farley"]
__license__ = "3-clause BSD"
__maintainer__ = "LISA Lab"
import logging
import math
import operator
import sys
import warnings
import numpy as np
from theano.compat import six
from theano.compat.six.moves import reduce, xrange
from theano import config
from theano.gof.op import get_debug_values
from theano.sandbox.rng_mrg import MRG_RandomStreams
from theano.sandbox.cuda.dnn import dnn_available, dnn_pool
from theano.tensor.signal.downsample import max_pool_2d
import theano.tensor as T
from pylearn2.compat import OrderedDict
from pylearn2.costs.mlp import Default
from pylearn2.expr.probabilistic_max_pooling import max_pool_channels
from pylearn2.linear import conv2d
from pylearn2.linear.matrixmul import MatrixMul
from pylearn2.model_extensions.norm_constraint import MaxL2FilterNorm
from pylearn2.models.model import Model
from pylearn2.monitor import get_monitor_doc
from pylearn2.expr.nnet import arg_of_softmax
from pylearn2.expr.nnet import pseudoinverse_softmax_numpy
from pylearn2.space import CompositeSpace
from pylearn2.space import Conv2DSpace
from pylearn2.space import Space
from pylearn2.space import VectorSpace, IndexSpace
from pylearn2.utils import function
from pylearn2.utils import is_iterable
from pylearn2.utils import py_float_types
from pylearn2.utils import py_integer_types
from pylearn2.utils import safe_union
from pylearn2.utils import safe_zip
from pylearn2.utils import safe_izip
from pylearn2.utils import sharedX
from pylearn2.utils import wraps
from pylearn2.utils import contains_inf
from pylearn2.utils import isfinite
from pylearn2.utils.data_specs import DataSpecsMapping
from pylearn2.expr.nnet import (elemwise_kl, kl, compute_precision,
compute_recall, compute_f1)
# Only to be used by the deprecation warning wrapper functions
from pylearn2.costs.mlp import L1WeightDecay as _L1WD
from pylearn2.costs.mlp import WeightDecay as _WD
from pylearn2.sandbox.rnn.models.mlp_hook import RNNWrapper
logger = logging.getLogger(__name__)
logger.debug("MLP changing the recursion limit.")
# We need this to be high enough that the big theano graphs we make
# when doing max pooling via subtensors don't cause python to complain.
# python intentionally declares stack overflow well before the stack
# segment is actually exceeded. But we can't make this value too big
# either, or we'll get seg faults when the python interpreter really
# does go over the stack segment.
# IG encountered seg faults on eos3 (a machine at LISA labo) when using
# 50000 so for now it is set to 40000.
# I think the actual safe recursion limit can't be predicted in advance
# because you don't know how big of a stack frame each function will
# make, so there is not really a "correct" way to do this. Really the
# python interpreter should provide an option to raise the error
# precisely when you're going to exceed the stack segment.
sys.setrecursionlimit(40000)
if six.PY3:
LayerBase = six.with_metaclass(RNNWrapper, Model)
else:
LayerBase = Model
class BadInputSpaceError(TypeError):
    """
    Error raised by a Layer's set_input_space when it is given an input
    space it cannot handle (caught by MLP._update_layer_input_spaces).
    """
class Layer(LayerBase):
"""
Abstract class. A Layer of an MLP.
May only belong to one MLP.
Parameters
----------
kwargs : dict
Passed on to the superclass.
Notes
-----
This is not currently a Block because as far as I know the Block interface
assumes every input is a single matrix. It doesn't support using Spaces to
work with composite inputs, stacked multichannel image inputs, etc. If the
Block interface were upgraded to be that flexible, then we could make this
a block.
"""
# This enables RNN compatibility
__metaclass__ = RNNWrapper
# When applying dropout to a layer's input, use this for masked values.
# Usually this will be 0, but certain kinds of layers may want to override
# this behaviour.
dropout_input_mask_value = 0.
def get_mlp(self):
"""
Returns the MLP that this layer belongs to.
Returns
-------
mlp : MLP
The MLP that this layer belongs to, or None if it has not been
assigned to an MLP yet.
"""
if hasattr(self, 'mlp'):
return self.mlp
return None
def set_mlp(self, mlp):
"""
Assigns this layer to an MLP. This layer will then use the MLP's
random number generator, batch size, etc. This layer's name must
be unique within the MLP.
Parameters
----------
mlp : MLP
"""
assert self.get_mlp() is None
self.mlp = mlp
def get_layer_monitoring_channels(self, state_below=None,
state=None, targets=None):
"""
Returns monitoring channels.
Parameters
----------
state_below : member of self.input_space
A minibatch of states that this Layer took as input.
            Most of the time providing state_below is unnecessary when
            state is given.
state : member of self.output_space
A minibatch of states that this Layer took on during fprop.
Provided externally so that we don't need to make a second
expression for it. This helps keep the Theano graph smaller
so that function compilation runs faster.
targets : member of self.output_space
Should be None unless this is the last layer.
If specified, it should be a minibatch of targets for the
last layer.
Returns
-------
channels : OrderedDict
A dictionary mapping channel names to monitoring channels of
interest for this layer.
"""
return OrderedDict()
def fprop(self, state_below):
"""
Does the forward prop transformation for this layer.
Parameters
----------
state_below : member of self.input_space
A minibatch of states of the layer below.
Returns
-------
state : member of self.output_space
A minibatch of states of this layer.
"""
raise NotImplementedError(
str(type(self)) + " does not implement fprop.")
def cost(self, Y, Y_hat):
"""
The cost of outputting Y_hat when the true output is Y.
Parameters
----------
Y : theano.gof.Variable
The targets
Y_hat : theano.gof.Variable
The predictions.
Assumed to be the output of the layer's `fprop` method.
            The implementation is permitted to do things like look at the
ancestors of `Y_hat` in the theano graph. This is useful for
e.g. computing numerically stable *log* probabilities when
`Y_hat` is the *probability*.
Returns
-------
cost : theano.gof.Variable
A Theano scalar describing the cost.
"""
raise NotImplementedError(
str(type(self)) + " does not implement mlp.Layer.cost.")
def cost_from_cost_matrix(self, cost_matrix):
"""
        The final scalar cost computed from the cost matrix.
Parameters
----------
cost_matrix : WRITEME
Examples
--------
>>> # C = model.cost_matrix(Y, Y_hat)
>>> # Do something with C like setting some values to 0
>>> # cost = model.cost_from_cost_matrix(C)
"""
raise NotImplementedError(
str(type(self)) + " does not implement "
"mlp.Layer.cost_from_cost_matrix.")
def cost_matrix(self, Y, Y_hat):
"""
        The element-wise cost of outputting Y_hat when the true output is Y.
Parameters
----------
Y : WRITEME
Y_hat : WRITEME
Returns
-------
WRITEME
"""
raise NotImplementedError(
str(type(self)) + " does not implement mlp.Layer.cost_matrix")
def set_weights(self, weights):
"""
Sets the weights of the layer.
Parameters
----------
weights : ndarray
A numpy ndarray containing the desired weights of the layer. This
docstring is provided by the Layer base class. Layer subclasses
should add their own docstring explaining the subclass-specific
format of the ndarray.
"""
raise NotImplementedError(
str(type(self)) + " does not implement set_weights.")
def get_biases(self):
"""
Returns the value of the biases of the layer.
Returns
-------
biases : ndarray
A numpy ndarray containing the biases of the layer. This docstring
is provided by the Layer base class. Layer subclasses should add
their own docstring explaining the subclass-specific format of the
ndarray.
"""
raise NotImplementedError(
str(type(self)) + " does not implement "
"get_biases (perhaps because the class has no biases).")
def set_biases(self, biases):
"""
Sets the biases of the layer.
Parameters
----------
biases : ndarray
A numpy ndarray containing the desired biases of the layer. This
docstring is provided by the Layer base class. Layer subclasses
should add their own docstring explaining the subclass-specific
format of the ndarray.
"""
raise NotImplementedError(
str(type(self)) + " does not implement "
"set_biases (perhaps because the class has no biases).")
def get_weights_format(self):
"""
Returns a description of how to interpret the weights of the layer.
Returns
-------
format: tuple
Either ('v', 'h') or ('h', 'v').
('v', 'h') means a weight matrix of shape
(num visible units, num hidden units),
while ('h', 'v') means the transpose of it.
"""
raise NotImplementedError
def get_weight_decay(self, coeff):
"""
Provides an expression for a squared L2 penalty on the weights.
Parameters
----------
coeff : float or tuple
The coefficient on the weight decay penalty for this layer.
This docstring is provided by the Layer base class. Individual
Layer subclasses should add their own docstring explaining the
format of `coeff` for that particular layer. For most ordinary
layers, `coeff` is a single float to multiply by the weight
decay term. Layers containing many pieces may take a tuple or
nested tuple of floats, and should explain the semantics of
the different elements of the tuple.
Returns
-------
weight_decay : theano.gof.Variable
An expression for the weight decay penalty term for this
layer.
"""
raise NotImplementedError(
str(type(self)) + " does not implement get_weight_decay.")
def get_l1_weight_decay(self, coeff):
"""
Provides an expression for an L1 penalty on the weights.
Parameters
----------
coeff : float or tuple
The coefficient on the L1 weight decay penalty for this layer.
This docstring is provided by the Layer base class. Individual
Layer subclasses should add their own docstring explaining the
format of `coeff` for that particular layer. For most ordinary
layers, `coeff` is a single float to multiply by the weight
decay term. Layers containing many pieces may take a tuple or
nested tuple of floats, and should explain the semantics of
the different elements of the tuple.
Returns
-------
weight_decay : theano.gof.Variable
An expression for the L1 weight decay penalty term for this
layer.
"""
raise NotImplementedError(
str(type(self)) + " does not implement get_l1_weight_decay.")
def set_input_space(self, space):
"""
Tells the layer to prepare for input formatted according to the
given space.
Parameters
----------
space : Space
The Space the input to this layer will lie in.
Notes
-----
This usually resets parameters.
"""
raise NotImplementedError(
str(type(self)) + " does not implement set_input_space.")
class MLP(Layer):
"""
A multilayer perceptron.
Note that it's possible for an entire MLP to be a single layer of a larger
MLP.
Parameters
----------
layers : list
A list of Layer objects. The final layer specifies the output space
of this MLP.
    batch_size : int, optional
        If specified, it must be a positive integer. Mostly useful if
        one of your layers involves a Theano op like convolution that
        requires a hard-coded batch size.
    nvis : int, optional
        Number of "visible units" (input units). Equivalent to specifying
        `input_space=VectorSpace(dim=nvis)`. Note that certain methods require
        a different type of input space (e.g. a Conv2DSpace in the case of
        convnets). Use the input_space parameter in such cases. Should be
        None if the MLP is part of another MLP.
input_space : Space object, optional
A Space specifying the kind of input the MLP accepts. If None,
input space is specified by nvis. Should be None if the MLP is
part of another MLP.
    input_source : string or (nested) tuple of strings, optional
        A (nested) tuple of strings specifying the input sources this
        MLP accepts. The structure should match that of input_space. The
        default is 'features'. Note that this argument is ignored when
        the MLP is nested.
    target_source : string or (nested) tuple of strings, optional
        A (nested) tuple of strings specifying the target sources this
        MLP accepts. The structure should match that of target_space. The
        default is 'targets'. Note that this argument is ignored when
        the MLP is nested.
    layer_name : string, optional
        The name of the MLP layer. Should be None if the MLP is
        not part of another MLP.
seed : WRITEME
monitor_targets : bool, optional
Default: True
If true, includes monitoring channels that are functions of the
targets. This can be disabled to allow monitoring on monitoring
datasets that do not include targets.
kwargs : dict
Passed on to the superclass.
"""
def __init__(self, layers, batch_size=None, input_space=None,
input_source='features', target_source='targets',
nvis=None, seed=None, layer_name=None, monitor_targets=True,
**kwargs):
super(MLP, self).__init__(**kwargs)
self.seed = seed
assert isinstance(layers, list)
assert all(isinstance(layer, Layer) for layer in layers)
assert len(layers) >= 1
self.layer_name = layer_name
self.layer_names = set()
for layer in layers:
assert layer.get_mlp() is None
if layer.layer_name in self.layer_names:
raise ValueError("MLP.__init__ given two or more layers "
"with same name: " + layer.layer_name)
layer.set_mlp(self)
self.layer_names.add(layer.layer_name)
self.layers = layers
self.batch_size = batch_size
self.force_batch_size = batch_size
self._input_source = input_source
self._target_source = target_source
self.monitor_targets = monitor_targets
if input_space is not None or nvis is not None:
self._nested = False
self.setup_rng()
# check if the layer_name is None (the MLP is the outer MLP)
assert layer_name is None
if nvis is not None:
input_space = VectorSpace(nvis)
# Check whether the input_space and input_source structures match
try:
DataSpecsMapping((input_space, input_source))
except ValueError:
raise ValueError("The structures of `input_space`, %s, and "
"`input_source`, %s do not match. If you "
"specified a CompositeSpace as an input, "
"be sure to specify the data sources as well."
% (input_space, input_source))
self.input_space = input_space
self._update_layer_input_spaces()
else:
self._nested = True
self.freeze_set = set([])
@property
def input_source(self):
assert not self._nested, "A nested MLP does not have an input source"
return self._input_source
@property
def target_source(self):
assert not self._nested, "A nested MLP does not have a target source"
return self._target_source
def setup_rng(self):
"""
.. todo::
WRITEME
"""
assert not self._nested, "Nested MLPs should use their parent's RNG"
if self.seed is None:
self.seed = [2013, 1, 4]
self.rng = np.random.RandomState(self.seed)
@wraps(Layer.get_default_cost)
def get_default_cost(self):
return Default()
@wraps(Layer.get_output_space)
def get_output_space(self):
return self.layers[-1].get_output_space()
@wraps(Layer.get_target_space)
def get_target_space(self):
return self.layers[-1].get_target_space()
@wraps(Layer.set_input_space)
def set_input_space(self, space):
if hasattr(self, "mlp"):
assert self._nested
self.rng = self.mlp.rng
self.batch_size = self.mlp.batch_size
self.input_space = space
self._update_layer_input_spaces()
def _update_layer_input_spaces(self):
"""
Tells each layer what its input space should be.
Notes
-----
This usually resets the layer's parameters!
"""
layers = self.layers
try:
layers[0].set_input_space(self.get_input_space())
except BadInputSpaceError as e:
raise TypeError("Layer 0 (" + str(layers[0]) + " of type " +
str(type(layers[0])) +
") does not support the MLP's "
+ "specified input space (" +
str(self.get_input_space()) +
" of type " + str(type(self.get_input_space())) +
"). Original exception: " + str(e))
for i in xrange(1, len(layers)):
layers[i].set_input_space(layers[i - 1].get_output_space())
def add_layers(self, layers):
"""
Add new layers on top of the existing hidden layers
Parameters
----------
layers : WRITEME
"""
existing_layers = self.layers
assert len(existing_layers) > 0
for layer in layers:
assert layer.get_mlp() is None
layer.set_mlp(self)
# In the case of nested MLPs, input/output spaces may have not yet
# been initialized
if not self._nested or hasattr(self, 'input_space'):
layer.set_input_space(existing_layers[-1].get_output_space())
existing_layers.append(layer)
assert layer.layer_name not in self.layer_names
self.layer_names.add(layer.layer_name)
def freeze(self, parameter_set):
"""
Freezes some of the parameters (new theano functions that implement
learning will not use them; existing theano functions will continue
to modify them).
Parameters
----------
parameter_set : set
Set of parameters to freeze.
"""
self.freeze_set = self.freeze_set.union(parameter_set)
@wraps(Layer.get_monitoring_channels)
def get_monitoring_channels(self, data):
        # if the MLP is the outer MLP
        # (i.e. the MLP is not contained in another structure)
if self.monitor_targets:
X, Y = data
else:
X = data
Y = None
state = X
rval = self.get_layer_monitoring_channels(state_below=X,
targets=Y)
return rval
@wraps(Layer.get_layer_monitoring_channels)
def get_layer_monitoring_channels(self, state_below=None,
state=None, targets=None):
rval = OrderedDict()
state = state_below
for layer in self.layers:
# We don't go through all the inner layers recursively
state_below = state
state = layer.fprop(state)
args = [state_below, state]
if layer is self.layers[-1] and targets is not None:
args.append(targets)
ch = layer.get_layer_monitoring_channels(*args)
if not isinstance(ch, OrderedDict):
raise TypeError(str((type(ch), layer.layer_name)))
for key in ch:
value = ch[key]
doc = get_monitor_doc(value)
if doc is None:
doc = str(type(layer)) + \
".get_monitoring_channels_from_state did" + \
" not provide any further documentation for" + \
" this channel."
doc = 'This channel came from a layer called "' + \
layer.layer_name + '" of an MLP.\n' + doc
value.__doc__ = doc
rval[layer.layer_name + '_' + key] = value
return rval
def get_monitoring_data_specs(self):
"""
Returns data specs requiring both inputs and targets.
Returns
-------
data_specs: TODO
The data specifications for both inputs and targets.
"""
if not self.monitor_targets:
return (self.get_input_space(), self.get_input_source())
space = CompositeSpace((self.get_input_space(),
self.get_target_space()))
source = (self.get_input_source(), self.get_target_source())
return (space, source)
@wraps(Layer.get_params)
def get_params(self):
if not hasattr(self, "input_space"):
raise AttributeError("Input space has not been provided.")
rval = []
for layer in self.layers:
for param in layer.get_params():
if param.name is None:
logger.info(type(layer))
layer_params = layer.get_params()
assert not isinstance(layer_params, set)
for param in layer_params:
if param not in rval:
rval.append(param)
rval = [elem for elem in rval if elem not in self.freeze_set]
assert all([elem.name is not None for elem in rval])
return rval
@wraps(Layer.get_weight_decay)
def get_weight_decay(self, coeffs):
# check the case where coeffs is a scalar
if not hasattr(coeffs, '__iter__'):
coeffs = [coeffs] * len(self.layers)
layer_costs = []
for layer, coeff in safe_izip(self.layers, coeffs):
if coeff != 0.:
layer_costs += [layer.get_weight_decay(coeff)]
if len(layer_costs) == 0:
return T.constant(0, dtype=config.floatX)
total_cost = reduce(operator.add, layer_costs)
return total_cost
@wraps(Layer.get_l1_weight_decay)
def get_l1_weight_decay(self, coeffs):
# check the case where coeffs is a scalar
if not hasattr(coeffs, '__iter__'):
coeffs = [coeffs] * len(self.layers)
layer_costs = []
for layer, coeff in safe_izip(self.layers, coeffs):
if coeff != 0.:
layer_costs += [layer.get_l1_weight_decay(coeff)]
if len(layer_costs) == 0:
return T.constant(0, dtype=config.floatX)
total_cost = reduce(operator.add, layer_costs)
return total_cost
@wraps(Model.set_batch_size)
def set_batch_size(self, batch_size):
self.batch_size = batch_size
self.force_batch_size = batch_size
for layer in self.layers:
layer.set_batch_size(batch_size)
@wraps(Layer._modify_updates)
def _modify_updates(self, updates):
for layer in self.layers:
layer.modify_updates(updates)
@wraps(Layer.get_lr_scalers)
def get_lr_scalers(self):
return get_lr_scalers_from_layers(self)
@wraps(Layer.get_weights)
def get_weights(self):
if not hasattr(self, "input_space"):
raise AttributeError("Input space has not been provided.")
return self.layers[0].get_weights()
@wraps(Layer.get_weights_view_shape)
def get_weights_view_shape(self):
if not hasattr(self, "input_space"):
raise AttributeError("Input space has not been provided.")
return self.layers[0].get_weights_view_shape()
@wraps(Layer.get_weights_format)
def get_weights_format(self):
if not hasattr(self, "input_space"):
raise AttributeError("Input space has not been provided.")
return self.layers[0].get_weights_format()
@wraps(Layer.get_weights_topo)
def get_weights_topo(self):
if not hasattr(self, "input_space"):
raise AttributeError("Input space has not been provided.")
return self.layers[0].get_weights_topo()
def dropout_fprop(self, state_below, default_input_include_prob=0.5,
input_include_probs=None, default_input_scale=2.,
input_scales=None, per_example=True):
"""
Returns the output of the MLP, when applying dropout to the input and
intermediate layers.
Parameters
----------
        state_below : member of self.input_space
            The input to the MLP.
        default_input_include_prob : float, optional
            Include probability used for the input of any layer not
            listed in `input_include_probs`. Defaults to 0.5.
        input_include_probs : dict, optional
            Maps layer names to the include probability for their input.
        default_input_scale : float, optional
            Scale applied to surviving inputs of any layer not listed in
            `input_scales`. Defaults to 2.
        input_scales : dict, optional
            Maps layer names to the scale applied to their input.
per_example : bool, optional
Sample a different mask value for every example in a batch.
Defaults to `True`. If `False`, sample one mask per mini-batch.
Notes
-----
Each input to each layer is randomly included or
excluded for each example. The probability of inclusion is independent
for each input and each example. Each layer uses
`default_input_include_prob` unless that layer's name appears as a key
in input_include_probs, in which case the input inclusion probability
is given by the corresponding value.
        Each feature is also multiplied by a scale factor. The scale
        factor for each layer's input is determined by the same scheme as
        the input inclusion probabilities.
"""
if input_include_probs is None:
input_include_probs = {}
if input_scales is None:
input_scales = {}
self._validate_layer_names(list(input_include_probs.keys()))
self._validate_layer_names(list(input_scales.keys()))
theano_rng = MRG_RandomStreams(max(self.rng.randint(2 ** 15), 1))
for layer in self.layers:
layer_name = layer.layer_name
if layer_name in input_include_probs:
include_prob = input_include_probs[layer_name]
else:
include_prob = default_input_include_prob
if layer_name in input_scales:
scale = input_scales[layer_name]
else:
scale = default_input_scale
state_below = self.apply_dropout(
state=state_below,
include_prob=include_prob,
theano_rng=theano_rng,
scale=scale,
mask_value=layer.dropout_input_mask_value,
input_space=layer.get_input_space(),
per_example=per_example
)
state_below = layer.fprop(state_below)
return state_below
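    # Numerical sketch (not part of the original class): with the default
    # include_prob p=0.5 and scale=2., each input unit survives with
    # probability p and survivors are multiplied by 1/p, so expectations
    # are preserved: E[mask * scale * x] = p * (1/p) * x = x.
    # A hypothetical call overriding the input of a layer named 'h0':
    #
    #     y_hat = mlp.dropout_fprop(X,
    #                               input_include_probs={'h0': 0.8},
    #                               input_scales={'h0': 1. / 0.8})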
def masked_fprop(self, state_below, mask, masked_input_layers=None,
default_input_scale=2., input_scales=None):
"""
Forward propagate through the network with a dropout mask
determined by an integer (the binary representation of
which is used to generate the mask).
Parameters
----------
state_below : tensor_like
The (symbolic) output state of the layer below.
mask : int
An integer indexing possible binary masks. It should be
< 2 ** get_total_input_dimension(masked_input_layers)
and greater than or equal to 0.
masked_input_layers : list, optional
A list of layer names to mask. If `None`, the input to all layers
(including the first hidden layer) is masked.
default_input_scale : float, optional
The amount to scale inputs in masked layers that do not appear in
`input_scales`. Defaults to 2.
input_scales : dict, optional
A dictionary mapping layer names to floating point numbers
indicating how much to scale input to a given layer.
Returns
-------
masked_output : tensor_like
The output of the forward propagation of the masked network.
"""
        if input_scales is not None:
            self._validate_layer_names(input_scales)
        else:
            input_scales = {}
        # Default and validate masked_input_layers *before* it is used in
        # the input_scales consistency check below.
        if masked_input_layers is not None:
            self._validate_layer_names(masked_input_layers)
        else:
            masked_input_layers = self.layer_names
        if any(n not in masked_input_layers for n in input_scales):
            layers = [n for n in input_scales if n not in masked_input_layers]
            raise ValueError("input scales provided for layers that are "
                             "not masked: %s" % ", ".join(layers))
num_inputs = self.get_total_input_dimension(masked_input_layers)
assert mask >= 0, "Mask must be a non-negative integer."
        if mask >= 2 ** num_inputs:
            raise ValueError("mask value of %d too large; only %d "
                             "inputs to layers (%s)" %
                             (mask, num_inputs,
                              ", ".join(masked_input_layers)))
def binary_string(x, length, dtype):
"""
Create the binary representation of an integer `x`, padded to
`length`, with dtype `dtype`.
Parameters
----------
length : WRITEME
dtype : WRITEME
Returns
-------
WRITEME
"""
s = np.empty(length, dtype=dtype)
for i in range(length - 1, -1, -1):
if x // (2 ** i) == 1:
s[i] = 1
else:
s[i] = 0
x = x % (2 ** i)
return s
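        # Worked example (illustrative): binary_string(5, 4, 'uint8')
        # returns array([1, 0, 1, 0]), i.e. s[i] is bit i of x, so the
        # encoding is least-significant-bit first. That matches the order
        # in which per-layer masks are peeled off `remaining_mask` below.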
remaining_mask = mask
for layer in self.layers:
if layer.layer_name in masked_input_layers:
scale = input_scales.get(layer.layer_name,
default_input_scale)
n_inputs = layer.get_input_space().get_total_dimension()
layer_dropout_mask = remaining_mask & (2 ** n_inputs - 1)
remaining_mask >>= n_inputs
                bits = binary_string(layer_dropout_mask, n_inputs,
                                     'uint8')
                shape = layer.get_input_space().get_origin_batch(1).shape
                s_mask = T.as_tensor_variable(bits).reshape(shape)
if layer.dropout_input_mask_value == 0:
state_below = state_below * s_mask * scale
else:
state_below = T.switch(s_mask, state_below * scale,
layer.dropout_input_mask_value)
state_below = layer.fprop(state_below)
return state_below
def _validate_layer_names(self, layers):
"""
.. todo::
WRITEME
"""
if any(layer not in self.layer_names for layer in layers):
unknown_names = [layer for layer in layers
if layer not in self.layer_names]
raise ValueError("MLP has no layer(s) named %s" %
", ".join(unknown_names))
def get_total_input_dimension(self, layers):
"""
Get the total number of inputs to the layers whose
names are listed in `layers`. Used for computing the
total number of dropout masks.
Parameters
----------
layers : WRITEME
Returns
-------
WRITEME
"""
self._validate_layer_names(layers)
total = 0
for layer in self.layers:
if layer.layer_name in layers:
total += layer.get_input_space().get_total_dimension()
return total
@wraps(Layer.fprop)
def fprop(self, state_below, return_all=False):
if not hasattr(self, "input_space"):
raise AttributeError("Input space has not been provided.")
rval = self.layers[0].fprop(state_below)
rlist = [rval]
for layer in self.layers[1:]:
rval = layer.fprop(rval)
rlist.append(rval)
if return_all:
return rlist
return rval
def apply_dropout(self, state, include_prob, scale, theano_rng,
input_space, mask_value=0, per_example=True):
"""
        Applies dropout to `state`: units are kept with probability
        `include_prob` and survivors are multiplied by `scale`.
Parameters
----------
        state : tensor_like or tuple
            The state to apply dropout to.
        include_prob : float
            Probability of including each unit.
        scale : float
            Scale applied to units that survive dropout.
        theano_rng : MRG_RandomStreams
            Random number generator used to sample masks.
        input_space : Space
            The space `state` lives in.
        mask_value : float, optional
            Value assigned to dropped units (default 0).
per_example : bool, optional
Sample a different mask value for every example in a batch.
Defaults to `True`. If `False`, sample one mask per mini-batch.
"""
if include_prob in [None, 1.0, 1]:
return state
assert scale is not None
        if isinstance(state, tuple):
            # Recurse into composite states, pairing each sub-state with
            # the matching component of `input_space` (assumed to be a
            # CompositeSpace here).
            return tuple(self.apply_dropout(substate, include_prob, scale,
                                            theano_rng, component,
                                            mask_value, per_example)
                         for substate, component
                         in safe_zip(state, input_space.components))
# TODO: all of this assumes that if it's not a tuple, it's
# a dense tensor. It hasn't been tested with sparse types.
# A method to format the mask (or any other values) as
# the given symbolic type should be added to the Spaces
# interface.
if per_example:
mask = theano_rng.binomial(p=include_prob, size=state.shape,
dtype=state.dtype)
else:
batch = input_space.get_origin_batch(1)
mask = theano_rng.binomial(p=include_prob, size=batch.shape,
dtype=state.dtype)
rebroadcast = T.Rebroadcast(*zip(xrange(batch.ndim),
[s == 1 for s in batch.shape]))
mask = rebroadcast(mask)
if mask_value == 0:
rval = state * mask * scale
else:
rval = T.switch(mask, state * scale, mask_value)
return T.cast(rval, state.dtype)
@wraps(Layer.cost)
def cost(self, Y, Y_hat):
return self.layers[-1].cost(Y, Y_hat)
@wraps(Layer.cost_matrix)
def cost_matrix(self, Y, Y_hat):
return self.layers[-1].cost_matrix(Y, Y_hat)
@wraps(Layer.cost_from_cost_matrix)
def cost_from_cost_matrix(self, cost_matrix):
return self.layers[-1].cost_from_cost_matrix(cost_matrix)
def cost_from_X(self, data):
"""
Computes self.cost, but takes data=(X, Y) rather than Y_hat as an
argument.
This is just a wrapper around self.cost that computes Y_hat by
calling Y_hat = self.fprop(X)
Parameters
----------
data : WRITEME
"""
self.cost_from_X_data_specs()[0].validate(data)
X, Y = data
Y_hat = self.fprop(X)
return self.cost(Y, Y_hat)
def cost_from_X_data_specs(self):
"""
Returns the data specs needed by cost_from_X.
This is useful if cost_from_X is used in a MethodCost.
"""
space = CompositeSpace((self.get_input_space(),
self.get_target_space()))
source = (self.get_input_source(), self.get_target_source())
return (space, source)
def __str__(self):
"""
Summarizes the MLP by printing the size and format of the input to all
layers. Feel free to add reasonably concise info as needed.
"""
rval = []
for layer in self.layers:
rval.append(layer.layer_name)
input_space = layer.get_input_space()
rval.append('\tInput space: ' + str(input_space))
rval.append('\tTotal input dimension: ' +
str(input_space.get_total_dimension()))
rval = '\n'.join(rval)
return rval
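# Minimal usage sketch, kept as a comment so it does not execute on
# import (`Linear` is defined further below in this module; nvis=784 is a
# placeholder input dimensionality):
#
#     mlp = MLP(layers=[Linear(dim=100, layer_name='h0', irange=0.05),
#                       Softmax(n_classes=10, layer_name='y', irange=0.05)],
#               nvis=784)
#     Y_hat = mlp.fprop(X)            # X: symbolic batch of features
#     cost = mlp.cost_from_X((X, Y))  # Y: one-hot targets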
class Softmax(Layer):
"""
A layer that can apply an optional affine transformation
to vectorial inputs followed by a softmax nonlinearity.
Parameters
----------
n_classes : int
Number of classes for softmax targets.
layer_name : string
Name of Softmax layers.
irange : float
If specified, initialized each weight randomly in
U(-irange, irange).
istdev : float
If specified, initialize each weight randomly from
N(0,istdev).
sparse_init : int
If specified, initial sparse_init number of weights
for each unit from N(0,1).
W_lr_scale : float
Scale for weight learning rate.
b_lr_scale : float
Scale for bias learning rate.
max_row_norm : float
Maximum norm for a row of the weight matrix.
no_affine : boolean
If True, softmax nonlinearity is applied directly to
inputs.
max_col_norm : float
Maximum norm for a column of the weight matrix.
init_bias_target_marginals : dataset
Take the probability distribution of the targets into account to
intelligently initialize biases.
binary_target_dim : int, optional
If your targets are class labels (i.e. a binary vector) then set the
number of targets here so that an IndexSpace of the proper dimension
can be used as the target space. This allows the softmax to compute
the cost much more quickly than if it needs to convert the targets
into a VectorSpace. With binary_target_dim>1, you can use one layer
to simultaneously predict a bag of words (i.e. order is not important,
the same element can be included more than once).
non_redundant : bool
If True, learns only n_classes - 1 biases and weight vectors
"""
def __init__(self, n_classes, layer_name, irange=None,
istdev=None,
sparse_init=None, W_lr_scale=None,
b_lr_scale=None, max_row_norm=None,
no_affine=False,
max_col_norm=None, init_bias_target_marginals=None,
binary_target_dim=None, non_redundant=False):
super(Softmax, self).__init__()
if max_col_norm is not None:
self.extensions.append(MaxL2FilterNorm(max_col_norm))
if non_redundant:
if init_bias_target_marginals:
msg = ("init_bias_target_marginals currently only works "
"with the overcomplete parameterization.")
raise NotImplementedError(msg)
if isinstance(W_lr_scale, str):
W_lr_scale = float(W_lr_scale)
self.__dict__.update(locals())
del self.self
del self.init_bias_target_marginals
if not isinstance(n_classes, py_integer_types):
raise TypeError("n_classes is of type %s, but must be integer" %
type(n_classes))
if binary_target_dim is not None:
assert isinstance(binary_target_dim, py_integer_types)
self._has_binary_target = True
self._target_space = IndexSpace(dim=binary_target_dim,
max_labels=n_classes)
else:
self._has_binary_target = False
self.output_space = VectorSpace(n_classes)
if not no_affine:
self.b = sharedX(np.zeros((n_classes - self.non_redundant,)),
name='softmax_b')
if init_bias_target_marginals:
y = init_bias_target_marginals.y
if init_bias_target_marginals.y_labels is None:
marginals = y.mean(axis=0)
else:
# compute class frequencies
if np.max(y.shape) != np.prod(y.shape):
raise AssertionError("Use of "
"`init_bias_target_marginals` "
"requires that each example has "
"a single label.")
marginals = np.bincount(y.flat) / float(y.shape[0])
assert marginals.ndim == 1
b = pseudoinverse_softmax_numpy(marginals).astype(self.b.dtype)
assert b.ndim == 1
assert b.dtype == self.b.dtype
self.b.set_value(b)
else:
assert init_bias_target_marginals is None
def __setstate__(self, state):
super(Softmax, self).__setstate__(state)
# Patch old pickle files
if not hasattr(self, 'non_redundant'):
self.non_redundant = False
if not hasattr(self, 'mask_weights'):
self.mask_weights = None
@wraps(Layer.get_lr_scalers)
def get_lr_scalers(self):
rval = OrderedDict()
if self.W_lr_scale is not None:
assert isinstance(self.W_lr_scale, float)
rval[self.W] = self.W_lr_scale
if not hasattr(self, 'b_lr_scale'):
self.b_lr_scale = None
if self.b_lr_scale is not None:
assert isinstance(self.b_lr_scale, float)
rval[self.b] = self.b_lr_scale
return rval
@wraps(Layer.get_layer_monitoring_channels)
def get_layer_monitoring_channels(self, state_below=None,
state=None, targets=None):
rval = OrderedDict()
if not self.no_affine:
W = self.W
assert W.ndim == 2
sq_W = T.sqr(W)
row_norms = T.sqrt(sq_W.sum(axis=1))
col_norms = T.sqrt(sq_W.sum(axis=0))
rval.update(OrderedDict([('row_norms_min', row_norms.min()),
('row_norms_mean', row_norms.mean()),
('row_norms_max', row_norms.max()),
('col_norms_min', col_norms.min()),
('col_norms_mean', col_norms.mean()),
('col_norms_max', col_norms.max()), ]))
if (state_below is not None) or (state is not None):
if state is None:
state = self.fprop(state_below)
mx = state.max(axis=1)
rval.update(OrderedDict([('mean_max_class', mx.mean()),
('max_max_class', mx.max()),
('min_max_class', mx.min())]))
if (targets is not None):
if ((not self._has_binary_target) or
self.binary_target_dim == 1):
# if binary_target_dim>1, the misclass rate is ill-defined
y_hat = T.argmax(state, axis=1)
y = (targets.reshape(y_hat.shape)
if self._has_binary_target
else T.argmax(targets, axis=1))
misclass = T.neq(y, y_hat).mean()
misclass = T.cast(misclass, config.floatX)
rval['misclass'] = misclass
rval['nll'] = self.cost(Y_hat=state, Y=targets)
return rval
@wraps(Layer.set_input_space)
def set_input_space(self, space):
self.input_space = space
if not isinstance(space, Space):
raise TypeError("Expected Space, got " +
str(space) + " of type " + str(type(space)))
self.input_dim = space.get_total_dimension()
self.needs_reformat = not isinstance(space, VectorSpace)
if self.no_affine:
desired_dim = self.n_classes - self.non_redundant
assert self.input_dim == desired_dim
else:
desired_dim = self.input_dim
self.desired_space = VectorSpace(desired_dim)
if not self.needs_reformat:
assert self.desired_space == self.input_space
rng = self.mlp.rng
if self.no_affine:
self._params = []
else:
num_cols = self.n_classes - self.non_redundant
if self.irange is not None:
assert self.istdev is None
assert self.sparse_init is None
W = rng.uniform(-self.irange,
self.irange,
(self.input_dim, num_cols))
elif self.istdev is not None:
assert self.sparse_init is None
W = rng.randn(self.input_dim, num_cols) * self.istdev
else:
assert self.sparse_init is not None
W = np.zeros((self.input_dim, num_cols))
for i in xrange(num_cols):
for j in xrange(self.sparse_init):
idx = rng.randint(0, self.input_dim)
while W[idx, i] != 0.:
idx = rng.randint(0, self.input_dim)
W[idx, i] = rng.randn()
self.W = sharedX(W, 'softmax_W')
self._params = [self.b, self.W]
@wraps(Layer.get_weights_topo)
def get_weights_topo(self):
if not isinstance(self.input_space, Conv2DSpace):
raise NotImplementedError()
desired = self.W.get_value().T
ipt = self.desired_space.np_format_as(desired, self.input_space)
rval = Conv2DSpace.convert_numpy(ipt,
self.input_space.axes,
('b', 0, 1, 'c'))
return rval
@wraps(Layer.get_weights)
def get_weights(self):
if not isinstance(self.input_space, VectorSpace):
raise NotImplementedError()
return self.W.get_value()
@wraps(Layer.set_weights)
def set_weights(self, weights):
self.W.set_value(weights)
@wraps(Layer.set_biases)
def set_biases(self, biases):
self.b.set_value(biases)
@wraps(Layer.get_biases)
def get_biases(self):
return self.b.get_value()
@wraps(Layer.get_weights_format)
def get_weights_format(self):
return ('v', 'h')
@wraps(Layer.fprop)
def fprop(self, state_below):
self.input_space.validate(state_below)
if self.needs_reformat:
state_below = self.input_space.format_as(state_below,
self.desired_space)
self.desired_space.validate(state_below)
assert state_below.ndim == 2
if not hasattr(self, 'no_affine'):
self.no_affine = False
if self.no_affine:
Z = state_below
else:
assert self.W.ndim == 2
b = self.b
Z = T.dot(state_below, self.W) + b
if self.non_redundant:
zeros = T.alloc(0., Z.shape[0], 1)
Z = T.concatenate((zeros, Z), axis=1)
rval = T.nnet.softmax(Z)
for value in get_debug_values(rval):
if self.mlp.batch_size is not None:
assert value.shape[0] == self.mlp.batch_size
return rval
def _cost(self, Y, Y_hat):
z = arg_of_softmax(Y_hat)
assert z.ndim == 2
z = z - z.max(axis=1).dimshuffle(0, 'x')
log_prob = z - T.log(T.exp(z).sum(axis=1).dimshuffle(0, 'x'))
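        # The two lines above form a numerically stable log-softmax:
        # subtracting the row-wise max leaves softmax(z) unchanged (it
        # cancels in the ratio) while preventing T.exp from overflowing,
        # so log_prob[i, j] = z[i, j] - log(sum_k(exp(z[i, k]))).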
# we use sum and not mean because this is really one variable per row
if self._has_binary_target:
# The following code is the equivalent of accessing log_prob by the
# indices in Y, but it is written such that the computation can
# happen on the GPU rather than CPU.
flat_Y = Y.flatten()
flat_Y.name = 'flat_Y'
flat_log_prob = log_prob.flatten()
flat_log_prob.name = 'flat_log_prob'
range_ = T.arange(Y.shape[0])
if self.binary_target_dim > 1:
# because of an error in optimization (local_useless_tile)
# when tiling with (1, 1)
range_ = T.tile(range_.dimshuffle(0, 'x'),
(1, self.binary_target_dim)).flatten()
flat_indices = flat_Y + range_ * self.n_classes
flat_indices.name = 'flat_indices'
log_prob_of = flat_log_prob[flat_indices].reshape(Y.shape, ndim=2)
log_prob_of.name = 'log_prob_of'
else:
log_prob_of = (Y * log_prob)
return log_prob_of
@wraps(Layer.cost)
def cost(self, Y, Y_hat):
log_prob_of = self._cost(Y, Y_hat).sum(axis=1)
assert log_prob_of.ndim == 1
rval = log_prob_of.mean()
return - rval
@wraps(Layer.cost_matrix)
def cost_matrix(self, Y, Y_hat):
log_prob_of = self._cost(Y, Y_hat)
if self._has_binary_target:
flat_Y = Y.flatten()
flat_matrix = T.alloc(0, (Y.shape[0] * log_prob_of.shape[1]))
flat_indices = flat_Y + T.extra_ops.repeat(
T.arange(Y.shape[0]) * log_prob_of.shape[1], Y.shape[1]
)
log_prob_of = T.set_subtensor(flat_matrix[flat_indices], flat_Y)
return -log_prob_of
@wraps(Layer.get_weight_decay)
def get_weight_decay(self, coeff):
if isinstance(coeff, str):
coeff = float(coeff)
assert isinstance(coeff, float) or hasattr(coeff, 'dtype')
return coeff * T.sqr(self.W).sum()
@wraps(Layer.get_l1_weight_decay)
def get_l1_weight_decay(self, coeff):
if isinstance(coeff, str):
coeff = float(coeff)
assert isinstance(coeff, float) or hasattr(coeff, 'dtype')
W = self.W
return coeff * abs(W).sum()
@wraps(Layer._modify_updates)
def _modify_updates(self, updates):
if self.no_affine:
return
if self.max_row_norm is not None:
W = self.W
if W in updates:
updated_W = updates[W]
row_norms = T.sqrt(T.sum(T.sqr(updated_W), axis=1))
desired_norms = T.clip(row_norms, 0, self.max_row_norm)
scales = desired_norms / (1e-7 + row_norms)
updates[W] = updated_W * scales.dimshuffle(0, 'x')
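# Hedged example of the binary-target mode described in the Softmax
# docstring: with binary_target_dim=1 the layer's target space is an
# IndexSpace, so targets are integer class labels of shape (batch, 1)
# rather than one-hot vectors, letting `cost` index log-probabilities
# directly:
#
#     y_layer = Softmax(n_classes=10, layer_name='y', irange=0.05,
#                       binary_target_dim=1)
#     # targets are then e.g. [[3], [7], ...] instead of one-hot rows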
class SoftmaxPool(Layer):
"""
A hidden layer that uses the softmax function to do max pooling over groups
of units. When the pooling size is 1, this reduces to a standard sigmoidal
MLP layer.
Parameters
----------
detector_layer_dim : WRITEME
layer_name : WRITEME
pool_size : WRITEME
irange : WRITEME
sparse_init : WRITEME
sparse_stdev : WRITEME
include_prob : float, optional
Probability of including a weight element in the set of weights
initialized to U(-irange, irange). If not included it is
initialized to 0.
init_bias : WRITEME
W_lr_scale : WRITEME
b_lr_scale : WRITEME
mask_weights : WRITEME
max_col_norm : WRITEME
"""
def __init__(self,
detector_layer_dim,
layer_name,
pool_size=1,
irange=None,
sparse_init=None,
sparse_stdev=1.,
include_prob=1.0,
init_bias=0.,
W_lr_scale=None,
b_lr_scale=None,
mask_weights=None,
max_col_norm=None):
super(SoftmaxPool, self).__init__()
self.__dict__.update(locals())
del self.self
self.b = sharedX(np.zeros((self.detector_layer_dim,)) + init_bias,
name=(layer_name + '_b'))
@wraps(Layer.get_lr_scalers)
def get_lr_scalers(self):
if not hasattr(self, 'W_lr_scale'):
self.W_lr_scale = None
if not hasattr(self, 'b_lr_scale'):
self.b_lr_scale = None
rval = OrderedDict()
if self.W_lr_scale is not None:
W, = self.transformer.get_params()
rval[W] = self.W_lr_scale
if self.b_lr_scale is not None:
rval[self.b] = self.b_lr_scale
return rval
@wraps(Layer.set_input_space)
def set_input_space(self, space):
self.input_space = space
if isinstance(space, VectorSpace):
self.requires_reformat = False
self.input_dim = space.dim
else:
self.requires_reformat = True
self.input_dim = space.get_total_dimension()
self.desired_space = VectorSpace(self.input_dim)
if not (self.detector_layer_dim % self.pool_size == 0):
raise ValueError("detector_layer_dim = %d, pool_size = %d. "
"Should be divisible but remainder is %d" %
(self.detector_layer_dim,
self.pool_size,
self.detector_layer_dim % self.pool_size))
self.h_space = VectorSpace(self.detector_layer_dim)
self.pool_layer_dim = self.detector_layer_dim / self.pool_size
self.output_space = VectorSpace(self.pool_layer_dim)
rng = self.mlp.rng
if self.irange is not None:
assert self.sparse_init is None
W = rng.uniform(-self.irange,
self.irange,
(self.input_dim, self.detector_layer_dim)) * \
(rng.uniform(0., 1., (self.input_dim, self.detector_layer_dim))
< self.include_prob)
else:
assert self.sparse_init is not None
W = np.zeros((self.input_dim, self.detector_layer_dim))
def mask_rejects(idx, i):
if self.mask_weights is None:
return False
return self.mask_weights[idx, i] == 0.
for i in xrange(self.detector_layer_dim):
assert self.sparse_init <= self.input_dim
for j in xrange(self.sparse_init):
idx = rng.randint(0, self.input_dim)
while W[idx, i] != 0 or mask_rejects(idx, i):
idx = rng.randint(0, self.input_dim)
W[idx, i] = rng.randn()
W *= self.sparse_stdev
W = sharedX(W)
W.name = self.layer_name + '_W'
self.transformer = MatrixMul(W)
W, = self.transformer.get_params()
assert W.name is not None
if self.mask_weights is not None:
expected_shape = (self.input_dim, self.detector_layer_dim)
if expected_shape != self.mask_weights.shape:
raise ValueError("Expected mask with shape " +
str(expected_shape) +
" but got " +
str(self.mask_weights.shape))
self.mask = sharedX(self.mask_weights)
@wraps(Layer._modify_updates)
def _modify_updates(self, updates):
if self.mask_weights is not None:
W, = self.transformer.get_params()
if W in updates:
updates[W] = updates[W] * self.mask
if self.max_col_norm is not None:
W, = self.transformer.get_params()
if W in updates:
updated_W = updates[W]
col_norms = T.sqrt(T.sum(T.sqr(updated_W), axis=0))
desired_norms = T.clip(col_norms, 0, self.max_col_norm)
updates[W] = updated_W * (desired_norms / (1e-7 + col_norms))
@wraps(Layer.get_params)
def get_params(self):
assert self.b.name is not None
W, = self.transformer.get_params()
assert W.name is not None
rval = self.transformer.get_params()
assert not isinstance(rval, set)
rval = list(rval)
assert self.b not in rval
rval.append(self.b)
return rval
@wraps(Layer.get_weight_decay)
def get_weight_decay(self, coeff):
if isinstance(coeff, str):
coeff = float(coeff)
assert isinstance(coeff, float) or hasattr(coeff, 'dtype')
W, = self.transformer.get_params()
return coeff * T.sqr(W).sum()
@wraps(Layer.get_l1_weight_decay)
def get_l1_weight_decay(self, coeff):
if isinstance(coeff, str):
coeff = float(coeff)
assert isinstance(coeff, float) or hasattr(coeff, 'dtype')
W, = self.transformer.get_params()
return coeff * abs(W).sum()
@wraps(Layer.get_weights)
def get_weights(self):
if self.requires_reformat:
# This is not really an unimplemented case.
# We actually don't know how to format the weights
# in design space. We got the data in topo space
# and we don't have access to the dataset
raise NotImplementedError()
W, = self.transformer.get_params()
return W.get_value()
@wraps(Layer.set_weights)
def set_weights(self, weights):
W, = self.transformer.get_params()
W.set_value(weights)
@wraps(Layer.set_biases)
def set_biases(self, biases):
"""
.. todo::
WRITEME
"""
self.b.set_value(biases)
@wraps(Layer.get_biases)
def get_biases(self):
return self.b.get_value()
@wraps(Layer.get_weights_format)
def get_weights_format(self):
return ('v', 'h')
@wraps(Layer.get_weights_view_shape)
def get_weights_view_shape(self):
total = self.detector_layer_dim
cols = self.pool_size
if cols == 1:
# Let the PatchViewer decide how to arrange the units
# when they're not pooled
raise NotImplementedError()
# When they are pooled, make each pooling unit have one row
rows = total / cols
return rows, cols
@wraps(Layer.get_weights_topo)
def get_weights_topo(self):
if not isinstance(self.input_space, Conv2DSpace):
raise NotImplementedError()
W, = self.transformer.get_params()
W = W.T
W = W.reshape((self.detector_layer_dim,
self.input_space.shape[0],
self.input_space.shape[1],
self.input_space.num_channels))
W = Conv2DSpace.convert(W, self.input_space.axes, ('b', 0, 1, 'c'))
return function([], W)()
@wraps(Layer.get_layer_monitoring_channels)
def get_layer_monitoring_channels(self, state_below=None,
state=None, **kwargs):
W, = self.transformer.get_params()
assert W.ndim == 2
sq_W = T.sqr(W)
row_norms = T.sqrt(sq_W.sum(axis=1))
col_norms = T.sqrt(sq_W.sum(axis=0))
rval = OrderedDict([('row_norms_min', row_norms.min()),
('row_norms_mean', row_norms.mean()),
('row_norms_max', row_norms.max()),
('col_norms_min', col_norms.min()),
('col_norms_mean', col_norms.mean()),
('col_norms_max', col_norms.max()), ])
if (state_below is not None) or (state is not None):
if state is None:
P = self.fprop(state_below)
else:
P = state
if self.pool_size == 1:
vars_and_prefixes = [(P, '')]
else:
vars_and_prefixes = [(P, 'p_')]
for var, prefix in vars_and_prefixes:
v_max = var.max(axis=0)
v_min = var.min(axis=0)
v_mean = var.mean(axis=0)
v_range = v_max - v_min
                # max_x.mean_u is "the mean over *u*nits of the max over
                # e*x*amples". The x and u are included in the name because
                # otherwise it's hard to remember which axis is which when
                # reading the monitor. I use inner.outer rather than
                # outer_of_inner or something like that because I want
                # mean_x.* to appear next to each other in the alphabetical
                # list, as these are commonly plotted together.
for key, val in [('max_x.max_u', v_max.max()),
('max_x.mean_u', v_max.mean()),
('max_x.min_u', v_max.min()),
('min_x.max_u', v_min.max()),
('min_x.mean_u', v_min.mean()),
('min_x.min_u', v_min.min()),
('range_x.max_u', v_range.max()),
('range_x.mean_u', v_range.mean()),
('range_x.min_u', v_range.min()),
('mean_x.max_u', v_mean.max()),
('mean_x.mean_u', v_mean.mean()),
('mean_x.min_u', v_mean.min())]:
rval[prefix + key] = val
return rval
@wraps(Layer.fprop)
def fprop(self, state_below):
self.input_space.validate(state_below)
if self.requires_reformat:
state_below = self.input_space.format_as(state_below,
self.desired_space)
z = self.transformer.lmul(state_below) + self.b
        if self.layer_name is not None:
            z.name = self.layer_name + '_z'
        p, h = max_pool_channels(z, self.pool_size)
        if self.layer_name is not None:
            p.name = self.layer_name + '_p_'
return p
class Linear(Layer):
"""
A "linear model" in machine learning terminology. This would be more
accurately described as an affine model because it adds an offset to
the output as well as doing a matrix multiplication. The output is:
output = T.dot(weights, input) + biases
This class may be used as the output layer of an MLP for regression.
It may also be used as a hidden layer. Most hidden layers classes are
subclasses of this class that add apply a fixed nonlinearity to the
output of the affine transformation provided by this class.
One notable use of this class is to provide "bottleneck" layers.
By using a Linear layer with few hidden units followed by a nonlinear
layer such as RectifiedLinear with many hidden units, one essentially
gets a RectifiedLinear layer with a factored weight matrix, which can
reduce the number of parameters in the model (by making the effective
weight matrix low rank).
Parameters
----------
dim : int
The number of elements in the output of the layer.
layer_name : str
The name of the layer. All layers in an MLP must have a unique name.
irange : WRITEME
istdev : WRITEME
sparse_init : WRITEME
sparse_stdev : WRITEME
include_prob : float
Probability of including a weight element in the set of weights
initialized to U(-irange, irange). If not included it is
initialized to 0.
    init_bias : float or ndarray, optional
        Anything that can be broadcasted to a numpy vector.
Provides the initial value of the biases of the model.
When using this class as an output layer (specifically the Linear
class, or subclasses that don't change the output like
LinearGaussian, but not subclasses that change the output, like
Softmax) it can be a good idea to set this to the return value of
the `mean_of_targets` function. This provides the mean value of
all the targets in the training set, so the model is initialized
to a dummy model that predicts the expected value of each output
variable.
W_lr_scale : float, optional
Multiply the learning rate on the weights by this constant.
b_lr_scale : float, optional
Multiply the learning rate on the biases by this constant.
mask_weights : ndarray, optional
If provided, the weights will be multiplied by this mask after each
learning update.
max_row_norm : WRITEME
max_col_norm : WRITEME
min_col_norm : WRITEME
copy_input : REMOVED
use_abs_loss : bool, optional
If True, the cost function will be mean absolute error rather
than mean squared error.
You can think of mean squared error as fitting a Gaussian
distribution with variance 1, or as learning to predict the mean
of the data.
You can think of mean absolute error as fitting a Laplace
distribution with variance 1, or as learning to predict the
median of the data.
use_bias : bool, optional
If False, does not add the bias term to the output.
"""
def __init__(self,
dim,
layer_name,
irange=None,
istdev=None,
sparse_init=None,
sparse_stdev=1.,
include_prob=1.0,
init_bias=0.,
W_lr_scale=None,
b_lr_scale=None,
mask_weights=None,
max_row_norm=None,
max_col_norm=None,
min_col_norm=None,
copy_input=None,
use_abs_loss=False,
use_bias=True):
if copy_input is not None:
raise AssertionError(
"The copy_input option had a bug and has "
"been removed from the library.")
super(Linear, self).__init__()
if use_bias and init_bias is None:
init_bias = 0.
self.__dict__.update(locals())
del self.self
if use_bias:
self.b = sharedX(np.zeros((self.dim,)) + init_bias,
name=(layer_name + '_b'))
else:
assert b_lr_scale is None
            # init_bias is unused when use_bias is False
@wraps(Layer.get_lr_scalers)
def get_lr_scalers(self):
if not hasattr(self, 'W_lr_scale'):
self.W_lr_scale = None
if not hasattr(self, 'b_lr_scale'):
self.b_lr_scale = None
rval = OrderedDict()
if self.W_lr_scale is not None:
W, = self.transformer.get_params()
rval[W] = self.W_lr_scale
if self.b_lr_scale is not None:
rval[self.b] = self.b_lr_scale
return rval
@wraps(Layer.set_input_space)
def set_input_space(self, space):
self.input_space = space
if isinstance(space, VectorSpace):
self.requires_reformat = False
self.input_dim = space.dim
else:
self.requires_reformat = True
self.input_dim = space.get_total_dimension()
self.desired_space = VectorSpace(self.input_dim)
self.output_space = VectorSpace(self.dim)
rng = self.mlp.rng
if self.irange is not None:
assert self.istdev is None
assert self.sparse_init is None
W = rng.uniform(-self.irange,
self.irange,
(self.input_dim, self.dim)) * \
(rng.uniform(0., 1., (self.input_dim, self.dim))
< self.include_prob)
elif self.istdev is not None:
assert self.sparse_init is None
W = rng.randn(self.input_dim, self.dim) * self.istdev
else:
assert self.sparse_init is not None
W = np.zeros((self.input_dim, self.dim))
def mask_rejects(idx, i):
if self.mask_weights is None:
return False
return self.mask_weights[idx, i] == 0.
for i in xrange(self.dim):
assert self.sparse_init <= self.input_dim
for j in xrange(self.sparse_init):
idx = rng.randint(0, self.input_dim)
while W[idx, i] != 0 or mask_rejects(idx, i):
idx = rng.randint(0, self.input_dim)
W[idx, i] = rng.randn()
W *= self.sparse_stdev
W = sharedX(W)
W.name = self.layer_name + '_W'
self.transformer = MatrixMul(W)
W, = self.transformer.get_params()
assert W.name is not None
if self.mask_weights is not None:
expected_shape = (self.input_dim, self.dim)
if expected_shape != self.mask_weights.shape:
raise ValueError("Expected mask with shape " +
str(expected_shape) + " but got " +
str(self.mask_weights.shape))
self.mask = sharedX(self.mask_weights)
@wraps(Layer._modify_updates)
def _modify_updates(self, updates):
if self.mask_weights is not None:
W, = self.transformer.get_params()
if W in updates:
updates[W] = updates[W] * self.mask
if self.max_row_norm is not None:
W, = self.transformer.get_params()
if W in updates:
updated_W = updates[W]
row_norms = T.sqrt(T.sum(T.sqr(updated_W), axis=1))
desired_norms = T.clip(row_norms, 0, self.max_row_norm)
scales = desired_norms / (1e-7 + row_norms)
updates[W] = updated_W * scales.dimshuffle(0, 'x')
if self.max_col_norm is not None or self.min_col_norm is not None:
assert self.max_row_norm is None
if self.max_col_norm is not None:
max_col_norm = self.max_col_norm
if self.min_col_norm is None:
self.min_col_norm = 0
W, = self.transformer.get_params()
if W in updates:
updated_W = updates[W]
col_norms = T.sqrt(T.sum(T.sqr(updated_W), axis=0))
if self.max_col_norm is None:
max_col_norm = col_norms.max()
desired_norms = T.clip(col_norms,
self.min_col_norm,
max_col_norm)
updates[W] = updated_W * desired_norms / (1e-7 + col_norms)
@wraps(Layer.get_params)
def get_params(self):
W, = self.transformer.get_params()
assert W.name is not None
rval = self.transformer.get_params()
assert not isinstance(rval, set)
rval = list(rval)
if self.use_bias:
assert self.b.name is not None
assert self.b not in rval
rval.append(self.b)
return rval
@wraps(Layer.get_weight_decay)
def get_weight_decay(self, coeff):
if isinstance(coeff, str):
coeff = float(coeff)
assert isinstance(coeff, float) or hasattr(coeff, 'dtype')
W, = self.transformer.get_params()
return coeff * T.sqr(W).sum()
@wraps(Layer.get_l1_weight_decay)
def get_l1_weight_decay(self, coeff):
if isinstance(coeff, str):
coeff = float(coeff)
assert isinstance(coeff, float) or hasattr(coeff, 'dtype')
W, = self.transformer.get_params()
return coeff * abs(W).sum()
@wraps(Layer.get_weights)
def get_weights(self):
if self.requires_reformat:
# This is not really an unimplemented case.
# We actually don't know how to format the weights
# in design space. We got the data in topo space
# and we don't have access to the dataset
raise NotImplementedError()
W, = self.transformer.get_params()
W = W.get_value()
return W
@wraps(Layer.set_weights)
def set_weights(self, weights):
W, = self.transformer.get_params()
W.set_value(weights)
@wraps(Layer.set_biases)
def set_biases(self, biases):
self.b.set_value(biases)
@wraps(Layer.get_biases)
def get_biases(self):
"""
        Returns
        -------
        biases : ndarray
            The current value of the bias vector.
"""
return self.b.get_value()
@wraps(Layer.get_weights_format)
def get_weights_format(self):
return ('v', 'h')
@wraps(Layer.get_weights_topo)
def get_weights_topo(self):
if not isinstance(self.input_space, Conv2DSpace):
raise NotImplementedError()
W, = self.transformer.get_params()
W = W.T
W = W.reshape((self.dim, self.input_space.shape[0],
self.input_space.shape[1],
self.input_space.num_channels))
W = Conv2DSpace.convert(W, self.input_space.axes, ('b', 0, 1, 'c'))
return function([], W)()
@wraps(Layer.get_layer_monitoring_channels)
def get_layer_monitoring_channels(self, state_below=None,
state=None, targets=None):
W, = self.transformer.get_params()
assert W.ndim == 2
sq_W = T.sqr(W)
row_norms = T.sqrt(sq_W.sum(axis=1))
col_norms = T.sqrt(sq_W.sum(axis=0))
rval = OrderedDict([('row_norms_min', row_norms.min()),
('row_norms_mean', row_norms.mean()),
('row_norms_max', row_norms.max()),
('col_norms_min', col_norms.min()),
('col_norms_mean', col_norms.mean()),
('col_norms_max', col_norms.max()), ])
if (state is not None) or (state_below is not None):
if state is None:
state = self.fprop(state_below)
mx = state.max(axis=0)
mean = state.mean(axis=0)
mn = state.min(axis=0)
rg = mx - mn
rval['range_x_max_u'] = rg.max()
rval['range_x_mean_u'] = rg.mean()
rval['range_x_min_u'] = rg.min()
rval['max_x_max_u'] = mx.max()
rval['max_x_mean_u'] = mx.mean()
rval['max_x_min_u'] = mx.min()
rval['mean_x_max_u'] = mean.max()
rval['mean_x_mean_u'] = mean.mean()
rval['mean_x_min_u'] = mean.min()
rval['min_x_max_u'] = mn.max()
rval['min_x_mean_u'] = mn.mean()
rval['min_x_min_u'] = mn.min()
return rval
def _linear_part(self, state_below):
"""
Parameters
----------
state_below : member of input_space
Returns
-------
output : theano matrix
Affine transformation of state_below
"""
self.input_space.validate(state_below)
if self.requires_reformat:
state_below = self.input_space.format_as(state_below,
self.desired_space)
z = self.transformer.lmul(state_below)
if self.use_bias:
z += self.b
if self.layer_name is not None:
z.name = self.layer_name + '_z'
return z
@wraps(Layer.fprop)
def fprop(self, state_below):
p = self._linear_part(state_below)
return p
@wraps(Layer.cost)
def cost(self, Y, Y_hat):
return self.cost_from_cost_matrix(self.cost_matrix(Y, Y_hat))
@wraps(Layer.cost_from_cost_matrix)
def cost_from_cost_matrix(self, cost_matrix):
return cost_matrix.sum(axis=1).mean()
@wraps(Layer.cost_matrix)
def cost_matrix(self, Y, Y_hat):
if(self.use_abs_loss):
return T.abs_(Y - Y_hat)
else:
return T.sqr(Y - Y_hat)
class Tanh(Linear):
"""
A layer that performs an affine transformation of its (vectorial)
input followed by a hyperbolic tangent elementwise nonlinearity.
Parameters
----------
kwargs : dict
Keyword arguments to pass through to `Linear` class constructor.
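
    Examples
    --------
    A numeric sketch of the computation this layer performs, using plain
    NumPy stand-ins for the Theano graph (values are illustrative):

    >>> import numpy as np
    >>> W = np.array([[0.5], [-0.25]]); b = np.array([0.1])
    >>> x = np.array([[1., 2.]])
    >>> float(np.tanh(x.dot(W) + b).round(4))   # tanh(xW + b)
    0.0997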
"""
@wraps(Layer.fprop)
def fprop(self, state_below):
p = self._linear_part(state_below)
p = T.tanh(p)
return p
@wraps(Layer.cost)
def cost(self, *args, **kwargs):
raise NotImplementedError()
class Sigmoid(Linear):
"""
A layer that performs an affine transformation of its
input followed by a logistic sigmoid elementwise nonlinearity.
Parameters
----------
monitor_style : string
Values can be any of ['detection', 'one_hot_class',
'bit_vector_class']
'detection' is the default.
- 'detection' : get_monitor_from_state makes no assumptions about
target, reports info about how good model is at
detecting positive bits.
This will monitor precision, recall, and F1 score
based on a detection threshold of 0.5. Note that
these quantities are computed *per-minibatch* and
averaged together. Unless your entire monitoring
dataset fits in one minibatch, this is not the same
as the true F1 score, etc., and will usually
seriously overestimate your performance.
- 'one_hot_class' : get_monitor_from_state assumes target is
one-hot class indicator, even though you're training the
model as k independent sigmoids. Gives info on how
good the argmax over the sigmoids behaves as a classifier.
- 'bit_vector_class' : get_monitor_from_state treats each
sigmoid as predicting a 1 iff its value is > 0.5. Each
example is counted as correct iff all of the bits in its
target are predicted correctly.
This includes as a special case the situation where the
target is a single 0 or 1 label.
- 'classification' : deprecated; originally this string was
used for 'one_hot_class', then due to a miscommunication
it was changed to be used for 'bit_vector_class'.
kwargs : dict
        Passed through to the `Linear` class constructor.
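
    Examples
    --------
    A NumPy sketch of how the 'bit_vector_class' misclassification
    channel is computed (an example counts as correct only if every bit
    is predicted correctly):

    >>> import numpy as np
    >>> probs = np.array([[0.9, 0.2], [0.6, 0.7]])
    >>> targets = np.array([[1, 0], [1, 0]])
    >>> pred = (probs > 0.5).astype(int)
    >>> float((pred != targets).max(axis=1).mean())
    0.5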
"""
def __init__(self, monitor_style='detection', **kwargs):
super(Sigmoid, self).__init__(**kwargs)
if monitor_style == 'classification':
monitor_style = 'bit_vector_class'
warnings.warn("The 'classification' monitor style is deprecated."
" Switch to 'bit_vector_class' (or possibly"
" 'one_hot_class' if your code predates 8f4b62b3df)."
" 'classification' may be removed on or after "
"2015-04-21.")
assert monitor_style in ['one_hot_class', 'bit_vector_class',
'detection']
self.monitor_style = monitor_style
@wraps(Layer.fprop)
def fprop(self, state_below):
p = self._linear_part(state_below)
p = T.nnet.sigmoid(p)
return p
@wraps(Layer.cost)
def cost(self, Y, Y_hat):
"""
        Returns the KL divergence cost averaged across the batch.
        Parameters
        ----------
        Y : theano.gof.Variable
            Targets
        Y_hat : theano.gof.Variable
            Output of `fprop`
        Returns
        -------
        ave : theano.gof.Variable
            Mean across units and across the batch of the KL divergence.
Notes
-----
Uses KL(P || Q) where P is defined by Y and Q is defined by Y_hat
Currently Y must be purely binary. If it's not, you'll still
get the right gradient, but the value in the monitoring channel
will be wrong.
Y_hat must be generated by fprop, i.e., it must be a symbolic
sigmoid.
p log p - p log q + (1-p) log (1-p) - (1-p) log (1-q)
For binary p, some terms drop out:
- p log q - (1-p) log (1-q)
- p log sigmoid(z) - (1-p) log sigmoid(-z)
p softplus(-z) + (1-p) softplus(z)
"""
total = self.kl(Y=Y, Y_hat=Y_hat)
ave = total.mean()
return ave
def kl(self, Y, Y_hat):
"""
Computes the KL divergence.
Parameters
----------
Y : Variable
targets for the sigmoid outputs. Currently Y must be purely binary.
If it's not, you'll still get the right gradient, but the
value in the monitoring channel will be wrong.
Y_hat : Variable
predictions made by the sigmoid layer. Y_hat must be generated by
fprop, i.e., it must be a symbolic sigmoid.
Returns
-------
ave : Variable
average kl divergence between Y and Y_hat.
Notes
-----
Warning: This function expects a sigmoid nonlinearity in the
        output layer and uses the `kl` function from `pylearn2.expr.nnet`.
Returns a batch (vector) of mean across units of KL
divergence for each example,
KL(P || Q) where P is defined by Y and Q is defined by Y_hat:
p log p - p log q + (1-p) log (1-p) - (1-p) log (1-q)
For binary p, some terms drop out:
- p log q - (1-p) log (1-q)
- p log sigmoid(z) - (1-p) log sigmoid(-z)
p softplus(-z) + (1-p) softplus(z)
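
        Examples
        --------
        A numeric check of the softplus identity above, using NumPy
        stand-ins (p is a binary target, q = sigmoid(z)):

        >>> import numpy as np
        >>> p, z = 1.0, 0.3
        >>> q = 1. / (1. + np.exp(-z))
        >>> softplus = lambda v: np.log1p(np.exp(v))
        >>> np.allclose(-p * np.log(q) - (1 - p) * np.log(1 - q),
        ...             p * softplus(-z) + (1 - p) * softplus(z))
        True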
"""
batch_axis = self.output_space.get_batch_axis()
div = kl(Y=Y, Y_hat=Y_hat, batch_axis=batch_axis)
return div
@wraps(Layer.cost_matrix)
def cost_matrix(self, Y, Y_hat):
rval = elemwise_kl(Y, Y_hat)
assert rval.ndim == 2
return rval
def get_detection_channels_from_state(self, state, target):
"""
Returns monitoring channels when using the layer to do detection
of binary events.
Parameters
----------
state : theano.gof.Variable
Output of `fprop`
target : theano.gof.Variable
The targets from the dataset
Returns
-------
channels : OrderedDict
Dictionary mapping channel names to Theano channel values.
"""
rval = OrderedDict()
y_hat = state > 0.5
y = target > 0.5
wrong_bit = T.cast(T.neq(y, y_hat), state.dtype)
rval['01_loss'] = wrong_bit.mean()
rval['kl'] = self.cost(Y_hat=state, Y=target)
y = T.cast(y, state.dtype)
y_hat = T.cast(y_hat, state.dtype)
tp = (y * y_hat).sum()
fp = ((1 - y) * y_hat).sum()
precision = compute_precision(tp, fp)
recall = compute_recall(y, tp)
f1 = compute_f1(precision, recall)
rval['precision'] = precision
rval['recall'] = recall
rval['f1'] = f1
tp = (y * y_hat).sum(axis=0)
fp = ((1 - y) * y_hat).sum(axis=0)
precision = compute_precision(tp, fp)
rval['per_output_precision_max'] = precision.max()
rval['per_output_precision_mean'] = precision.mean()
rval['per_output_precision_min'] = precision.min()
recall = compute_recall(y, tp)
rval['per_output_recall_max'] = recall.max()
rval['per_output_recall_mean'] = recall.mean()
rval['per_output_recall_min'] = recall.min()
f1 = compute_f1(precision, recall)
rval['per_output_f1_max'] = f1.max()
rval['per_output_f1_mean'] = f1.mean()
rval['per_output_f1_min'] = f1.min()
return rval
@wraps(Layer.get_layer_monitoring_channels)
def get_layer_monitoring_channels(self, state_below=None,
state=None, targets=None):
rval = super(Sigmoid, self).get_layer_monitoring_channels(
state=state, targets=targets)
if (targets is not None) and \
((state_below is not None) or (state is not None)):
if state is None:
state = self.fprop(state_below)
if self.monitor_style == 'detection':
rval.update(self.get_detection_channels_from_state(state,
targets))
elif self.monitor_style == 'one_hot_class':
# For this monitor style, we know (by assumption) that
# exactly one bit is always on, so we pick
# the single most likely bit under the model, regardless
# of whether its probability exceeds 0.5
prediction = state.argmax(axis=1)
labels = targets.argmax(axis=1)
incorrect = T.neq(prediction, labels)
misclass = T.cast(incorrect, config.floatX).mean()
rval['misclass'] = misclass
else:
assert self.monitor_style == 'bit_vector_class'
# Threshold Y_hat at 0.5.
prediction = T.gt(state, 0.5)
# If even one feature is wrong for a given training example,
# it's considered incorrect, so we max over columns.
incorrect = T.neq(targets, prediction).max(axis=1)
rval['misclass'] = T.cast(incorrect, config.floatX).mean()
return rval
class RectifiedLinear(Linear):
"""
Rectified linear MLP layer (Glorot and Bengio 2011).
Parameters
----------
left_slope : float
The slope the line should have left of 0.
kwargs : dict
Keyword arguments to pass to `Linear` class constructor.
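
    Examples
    --------
    A NumPy sketch of the rectifier; with the default left_slope of 0.0
    this is max(0, z), and a nonzero left_slope gives a "leaky" rectifier:

    >>> import numpy as np
    >>> z = np.array([-2., 0., 3.])
    >>> np.where(z > 0., z, 0.01 * z).tolist()   # left_slope = 0.01
    [-0.02, 0.0, 3.0]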
"""
def __init__(self, left_slope=0.0, **kwargs):
super(RectifiedLinear, self).__init__(**kwargs)
self.left_slope = left_slope
@wraps(Layer.fprop)
def fprop(self, state_below):
p = self._linear_part(state_below)
# Original: p = p * (p > 0.) + self.left_slope * p * (p < 0.)
# T.switch is faster.
# For details, see benchmarks in
# pylearn2/scripts/benchmark/time_relu.py
p = T.switch(p > 0., p, self.left_slope * p)
return p
@wraps(Layer.cost)
def cost(self, *args, **kwargs):
raise NotImplementedError()
class Softplus(Linear):
"""
An MLP layer using the softplus nonlinearity
h = log(1 + exp(Wx + b))
Parameters
----------
kwargs : dict
Keyword arguments to `Linear` constructor.
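
    Examples
    --------
    A NumPy sketch of the nonlinearity (softplus is a smooth
    approximation to the rectifier):

    >>> import numpy as np
    >>> [round(v, 4) for v in np.log1p(np.exp([-1., 0., 1.]))]
    [0.3133, 0.6931, 1.3133]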
"""
def __init__(self, **kwargs):
super(Softplus, self).__init__(**kwargs)
@wraps(Layer.fprop)
def fprop(self, state_below):
p = self._linear_part(state_below)
p = T.nnet.softplus(p)
return p
@wraps(Layer.cost)
def cost(self, *args, **kwargs):
raise NotImplementedError()
class SpaceConverter(Layer):
"""
A Layer with no parameters that converts the input from
one space to another.
Parameters
----------
layer_name : str
Name of the layer.
output_space : Space
The space to convert to.
"""
def __init__(self, layer_name, output_space):
super(SpaceConverter, self).__init__()
self.__dict__.update(locals())
del self.self
self._params = []
@wraps(Layer.set_input_space)
def set_input_space(self, space):
self.input_space = space
@wraps(Layer.fprop)
def fprop(self, state_below):
return self.input_space.format_as(state_below, self.output_space)
class ConvNonlinearity(object):
"""
Abstract convolutional nonlinearity class.
"""
def apply(self, linear_response):
"""
Applies the nonlinearity over the convolutional layer.
Parameters
----------
linear_response: Variable
linear response of the layer.
Returns
-------
p: Variable
the response of the layer after the activation function
is applied over.
"""
p = linear_response
return p
def _get_monitoring_channels_for_activations(self, state):
"""
Computes the monitoring channels which does not require targets.
Parameters
----------
state : member of self.output_space
A minibatch of states that this Layer took on during fprop.
Provided externally so that we don't need to make a second
expression for it. This helps keep the Theano graph smaller
so that function compilation runs faster.
Returns
-------
rval : OrderedDict
A dictionary mapping channel names to monitoring channels of
interest for this layer.
"""
rval = OrderedDict({})
mx = state.max(axis=0)
mean = state.mean(axis=0)
mn = state.min(axis=0)
rg = mx - mn
rval['range_x_max_u'] = rg.max()
rval['range_x_mean_u'] = rg.mean()
rval['range_x_min_u'] = rg.min()
rval['max_x_max_u'] = mx.max()
rval['max_x_mean_u'] = mx.mean()
rval['max_x_min_u'] = mx.min()
rval['mean_x_max_u'] = mean.max()
rval['mean_x_mean_u'] = mean.mean()
rval['mean_x_min_u'] = mean.min()
rval['min_x_max_u'] = mn.max()
rval['min_x_mean_u'] = mn.mean()
rval['min_x_min_u'] = mn.min()
return rval
def get_monitoring_channels_from_state(self, state, target,
cost_fn=None):
"""
Override the default get_monitoring_channels_from_state function.
Parameters
----------
state : member of self.output_space
A minibatch of states that this Layer took on during fprop.
Provided externally so that we don't need to make a second
expression for it. This helps keep the Theano graph smaller
so that function compilation runs faster.
target : member of self.output_space
Should be None unless this is the last layer.
If specified, it should be a minibatch of targets for the
last layer.
cost_fn : theano computational graph or None
This is the theano computational graph of a cost function.
Returns
-------
rval : OrderedDict
A dictionary mapping channel names to monitoring channels of
interest for this layer.
"""
rval = self._get_monitoring_channels_for_activations(state)
return rval
class IdentityConvNonlinearity(ConvNonlinearity):
"""
Linear convolutional nonlinearity class.
"""
def __init__(self):
self.non_lin_name = "linear"
@wraps(ConvNonlinearity.get_monitoring_channels_from_state)
def get_monitoring_channels_from_state(self,
state,
target,
cost_fn=False):
rval = super(IdentityConvNonlinearity,
self).get_monitoring_channels_from_state(state,
target,
cost_fn)
if target is not None:
prediction = T.gt(state, 0.5)
            incorrect = T.neq(target, prediction).max(axis=1)
rval["misclass"] = T.cast(incorrect, config.floatX).mean()
return rval
class RectifierConvNonlinearity(ConvNonlinearity):
"""
A simple rectifier nonlinearity class for convolutional layers.
Parameters
----------
left_slope : float
The slope of the left half of the activation function.
"""
def __init__(self, left_slope=0.0):
"""
Parameters
----------
left_slope : float, optional
left slope for the linear response of the rectifier function.
default is 0.0.
"""
self.non_lin_name = "rectifier"
self.left_slope = left_slope
@wraps(ConvNonlinearity.apply)
def apply(self, linear_response):
"""
Applies the rectifier nonlinearity over the convolutional layer.
"""
        # T.switch is faster than masking and adding; see the benchmark
        # note in RectifiedLinear.fprop above.
        p = T.switch(linear_response > 0., linear_response,
                     self.left_slope * linear_response)
return p
class SigmoidConvNonlinearity(ConvNonlinearity):
"""
Sigmoid nonlinearity class for convolutional layers.
Parameters
----------
monitor_style : str, optional
default monitor_style is "classification".
This determines whether to do classification or detection.
"""
def __init__(self, monitor_style="classification"):
assert monitor_style in ['classification', 'detection']
self.monitor_style = monitor_style
self.non_lin_name = "sigmoid"
@wraps(ConvNonlinearity.apply)
def apply(self, linear_response):
"""
Applies the sigmoid nonlinearity over the convolutional layer.
"""
p = T.nnet.sigmoid(linear_response)
return p
@wraps(ConvNonlinearity.get_monitoring_channels_from_state)
def get_monitoring_channels_from_state(self, state, target,
cost_fn=None):
rval = super(SigmoidConvNonlinearity,
self).get_monitoring_channels_from_state(state,
target,
cost_fn)
if target is not None:
y_hat = state > 0.5
y = target > 0.5
wrong_bit = T.cast(T.neq(y, y_hat), state.dtype)
rval['01_loss'] = wrong_bit.mean()
rval['kl'] = cost_fn(Y_hat=state, Y=target)
y = T.cast(y, state.dtype)
y_hat = T.cast(y_hat, state.dtype)
tp = (y * y_hat).sum()
fp = ((1 - y) * y_hat).sum()
precision = compute_precision(tp, fp)
recall = compute_recall(y, tp)
f1 = compute_f1(precision, recall)
rval['precision'] = precision
rval['recall'] = recall
rval['f1'] = f1
tp = (y * y_hat).sum(axis=[0, 1])
fp = ((1 - y) * y_hat).sum(axis=[0, 1])
precision = compute_precision(tp, fp)
rval['per_output_precision_max'] = precision.max()
rval['per_output_precision_mean'] = precision.mean()
rval['per_output_precision_min'] = precision.min()
recall = compute_recall(y, tp)
rval['per_output_recall_max'] = recall.max()
rval['per_output_recall_mean'] = recall.mean()
rval['per_output_recall_min'] = recall.min()
f1 = compute_f1(precision, recall)
rval['per_output_f1_max'] = f1.max()
rval['per_output_f1_mean'] = f1.mean()
rval['per_output_f1_min'] = f1.min()
return rval
class TanhConvNonlinearity(ConvNonlinearity):
"""
Tanh nonlinearity class for convolutional layers.
"""
def __init__(self):
self.non_lin_name = "tanh"
@wraps(ConvNonlinearity.apply)
def apply(self, linear_response):
"""
Applies the tanh nonlinearity over the convolutional layer.
"""
p = T.tanh(linear_response)
return p
class ConvElemwise(Layer):
"""
Generic convolutional elemwise layer.
Takes the ConvNonlinearity object as an argument and implements
convolutional layer with the specified nonlinearity.
    This class can implement:
* Linear convolutional layer
* Rectifier convolutional layer
* Sigmoid convolutional layer
* Tanh convolutional layer
    based on the nonlinearity argument that it receives.
Parameters
----------
output_channels : int
The number of output channels the layer should have.
kernel_shape : tuple
The shape of the convolution kernel.
pool_shape : tuple
The shape of the spatial max pooling. A two-tuple of ints.
pool_stride : tuple
        The stride of the spatial max pooling. Also a two-tuple of ints.
layer_name : str
A name for this layer that will be prepended to monitoring channels
related to this layer.
nonlinearity : object
An instance of a nonlinearity object which might be inherited
from the ConvNonlinearity class.
irange : float, optional
if specified, initializes each weight randomly in
U(-irange, irange)
border_mode : str, optional
A string indicating the size of the output:
- "full" : The output is the full discrete linear convolution of the
inputs.
- "valid" : The output consists only of those elements that do not
rely on the zero-padding. (Default)
    sparse_init : int, optional
        if specified, each kernel is initialized with this many nonzero
        entries drawn from N(0, 1); may not be used together with irange.
include_prob : float, optional
probability of including a weight element in the set of weights
initialized to U(-irange, irange). If not included it is initialized
        to 0.
init_bias : float, optional
All biases are initialized to this number. Default is 0.
W_lr_scale : float or None
The learning rate on the weights for this layer is multiplied by this
scaling factor
b_lr_scale : float or None
The learning rate on the biases for this layer is multiplied by this
scaling factor
max_kernel_norm : float or None
If specified, each kernel is constrained to have at most this norm.
pool_type : str or None
        The type of the pooling operation performed after the convolution.
Default pooling type is max-pooling.
    tied_b : bool, optional
        If True, all biases in the same channel are constrained to be the
        same as each other. Otherwise, each bias at each location is
        learned independently. Default is None, which behaves like False.
detector_normalization : callable or None
See `output_normalization`.
If pooling argument is not provided, detector_normalization
is not applied on the layer.
    output_normalization : callable or None
        if specified, should be a callable object. The state of the
        network is optionally replaced with normalization(state) at two
        points in processing:
        - detector: the detector units can be normalized prior to the
          spatial pooling
        - output: the output of the layer, after spatial pooling, can
          be normalized as well
kernel_stride : 2-tuple of ints, optional
The stride of the convolution kernel. Default is (1, 1).
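
    Examples
    --------
    A quick check of the detector-space shape arithmetic used in
    `set_input_space` (pure Python; a 32-pixel axis, 5-pixel kernel and
    stride 1 are illustrative values):

    >>> rows, kernel, stride = 32, 5, 1
    >>> (rows - kernel) // stride + 1   # border_mode='valid'
    28
    >>> (rows + kernel) // stride - 1   # border_mode='full'
    36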
"""
def __init__(self,
output_channels,
kernel_shape,
layer_name,
nonlinearity,
irange=None,
border_mode='valid',
sparse_init=None,
include_prob=1.0,
init_bias=0.,
W_lr_scale=None,
b_lr_scale=None,
max_kernel_norm=None,
pool_type=None,
pool_shape=None,
pool_stride=None,
tied_b=None,
detector_normalization=None,
output_normalization=None,
kernel_stride=(1, 1),
monitor_style="classification"):
super(ConvElemwise, self).__init__()
if (irange is None) and (sparse_init is None):
raise AssertionError("You should specify either irange or "
"sparse_init when calling the constructor of "
"ConvElemwise.")
elif (irange is not None) and (sparse_init is not None):
raise AssertionError("You should specify either irange or "
"sparse_init when calling the constructor of "
"ConvElemwise and not both.")
if pool_type is not None:
assert pool_shape is not None, (
"You should specify the shape of "
"the spatial %s-pooling." % pool_type)
assert pool_stride is not None, (
"You should specify the strides of "
"the spatial %s-pooling." % pool_type)
assert nonlinearity is not None
self.nonlin = nonlinearity
self.__dict__.update(locals())
        assert monitor_style in ['classification', 'detection'], (
            "%s.monitor_style should be either "
            "detection or classification" % self.__class__.__name__)
del self.self
def initialize_transformer(self, rng):
"""
This function initializes the transformer of the class. Re-running
this function will reset the transformer.
Parameters
----------
rng : object
random number generator object.
"""
if self.irange is not None:
assert self.sparse_init is None
self.transformer = conv2d.make_random_conv2D(
irange=self.irange,
input_space=self.input_space,
output_space=self.detector_space,
kernel_shape=self.kernel_shape,
subsample=self.kernel_stride,
border_mode=self.border_mode,
rng=rng)
elif self.sparse_init is not None:
self.transformer = conv2d.make_sparse_random_conv2D(
num_nonzero=self.sparse_init,
input_space=self.input_space,
output_space=self.detector_space,
kernel_shape=self.kernel_shape,
subsample=self.kernel_stride,
border_mode=self.border_mode,
rng=rng)
def initialize_output_space(self):
"""
Initializes the output space of the ConvElemwise layer by taking
pooling operator and the hyperparameters of the convolutional layer
into consideration as well.
"""
dummy_batch_size = self.mlp.batch_size
if dummy_batch_size is None:
dummy_batch_size = 2
dummy_detector =\
sharedX(self.detector_space.get_origin_batch(dummy_batch_size))
if self.pool_type is not None:
assert self.pool_type in ['max', 'mean']
if self.pool_type == 'max':
dummy_p = max_pool(bc01=dummy_detector,
pool_shape=self.pool_shape,
pool_stride=self.pool_stride,
image_shape=self.detector_space.shape)
elif self.pool_type == 'mean':
dummy_p = mean_pool(bc01=dummy_detector,
pool_shape=self.pool_shape,
pool_stride=self.pool_stride,
image_shape=self.detector_space.shape)
dummy_p = dummy_p.eval()
self.output_space = Conv2DSpace(shape=[dummy_p.shape[2],
dummy_p.shape[3]],
num_channels=self.output_channels,
axes=('b', 'c', 0, 1))
else:
dummy_detector = dummy_detector.eval()
self.output_space = Conv2DSpace(shape=[dummy_detector.shape[2],
dummy_detector.shape[3]],
num_channels=self.output_channels,
axes=('b', 'c', 0, 1))
logger.info('Output space: {0}'.format(self.output_space.shape))
@wraps(Layer.set_input_space)
def set_input_space(self, space):
""" Note: this function will reset the parameters! """
self.input_space = space
if not isinstance(space, Conv2DSpace):
raise BadInputSpaceError(self.__class__.__name__ +
".set_input_space "
"expected a Conv2DSpace, got " +
str(space) + " of type " +
str(type(space)))
rng = self.mlp.rng
if self.border_mode == 'valid':
output_shape = [int((self.input_space.shape[0]
- self.kernel_shape[0])
/ self.kernel_stride[0]) + 1,
int((self.input_space.shape[1]
- self.kernel_shape[1])
/ self.kernel_stride[1]) + 1]
elif self.border_mode == 'full':
output_shape = [int((self.input_space.shape[0]
+ self.kernel_shape[0])
/ self.kernel_stride[0]) - 1,
int((self.input_space.shape[1]
+ self.kernel_shape[1])
/ self.kernel_stride[1]) - 1]
self.detector_space = Conv2DSpace(shape=output_shape,
num_channels=self.output_channels,
axes=('b', 'c', 0, 1))
self.initialize_transformer(rng)
W, = self.transformer.get_params()
W.name = self.layer_name + '_W'
if self.tied_b:
self.b = sharedX(np.zeros((self.detector_space.num_channels)) +
self.init_bias)
else:
self.b = sharedX(self.detector_space.get_origin() + self.init_bias)
self.b.name = self.layer_name + '_b'
logger.info('Input shape: {0}'.format(self.input_space.shape))
logger.info('Detector space: {0}'.format(self.detector_space.shape))
self.initialize_output_space()
@wraps(Layer._modify_updates)
def _modify_updates(self, updates):
if self.max_kernel_norm is not None:
W, = self.transformer.get_params()
if W in updates:
updated_W = updates[W]
row_norms = T.sqrt(T.sum(T.sqr(updated_W), axis=(1, 2, 3)))
desired_norms = T.clip(row_norms, 0, self.max_kernel_norm)
updates[W] = updated_W * (
desired_norms /
(1e-7 + row_norms)).dimshuffle(0, 'x', 'x', 'x')
@wraps(Layer.get_params)
def get_params(self):
assert self.b.name is not None
W, = self.transformer.get_params()
assert W.name is not None
rval = self.transformer.get_params()
assert not isinstance(rval, set)
rval = list(rval)
assert self.b not in rval
rval.append(self.b)
return rval
@wraps(Layer.get_weight_decay)
def get_weight_decay(self, coeff):
if isinstance(coeff, str):
coeff = float(coeff)
assert isinstance(coeff, float) or hasattr(coeff, 'dtype')
W, = self.transformer.get_params()
return coeff * T.sqr(W).sum()
@wraps(Layer.get_l1_weight_decay)
def get_l1_weight_decay(self, coeff):
if isinstance(coeff, str):
coeff = float(coeff)
assert isinstance(coeff, float) or hasattr(coeff, 'dtype')
W, = self.transformer.get_params()
return coeff * abs(W).sum()
@wraps(Layer.set_weights)
def set_weights(self, weights):
W, = self.transformer.get_params()
W.set_value(weights)
@wraps(Layer.set_biases)
def set_biases(self, biases):
self.b.set_value(biases)
@wraps(Layer.get_biases)
def get_biases(self):
return self.b.get_value()
@wraps(Layer.get_weights_format)
def get_weights_format(self):
return ('v', 'h')
@wraps(Layer.get_lr_scalers)
def get_lr_scalers(self):
if not hasattr(self, 'W_lr_scale'):
self.W_lr_scale = None
if not hasattr(self, 'b_lr_scale'):
self.b_lr_scale = None
rval = OrderedDict()
if self.W_lr_scale is not None:
W, = self.transformer.get_params()
rval[W] = self.W_lr_scale
if self.b_lr_scale is not None:
rval[self.b] = self.b_lr_scale
return rval
@wraps(Layer.get_weights_topo)
def get_weights_topo(self):
outp, inp, rows, cols = range(4)
raw = self.transformer._filters.get_value()
return np.transpose(raw, (outp, rows, cols, inp))
@wraps(Layer.get_layer_monitoring_channels)
def get_layer_monitoring_channels(self, state_below=None,
state=None, targets=None):
W, = self.transformer.get_params()
assert W.ndim == 4
sq_W = T.sqr(W)
row_norms = T.sqrt(sq_W.sum(axis=(1, 2, 3)))
rval = OrderedDict([
('kernel_norms_min', row_norms.min()),
('kernel_norms_mean', row_norms.mean()),
('kernel_norms_max', row_norms.max()),
])
cst = self.cost
orval = self.nonlin.get_monitoring_channels_from_state(state,
targets,
cost_fn=cst)
rval.update(orval)
return rval
@wraps(Layer.fprop)
def fprop(self, state_below):
self.input_space.validate(state_below)
z = self.transformer.lmul(state_below)
if not hasattr(self, 'tied_b'):
self.tied_b = False
if self.tied_b:
b = self.b.dimshuffle('x', 0, 'x', 'x')
else:
b = self.b.dimshuffle('x', 0, 1, 2)
z = z + b
d = self.nonlin.apply(z)
if self.layer_name is not None:
d.name = self.layer_name + '_z'
self.detector_space.validate(d)
if self.pool_type is not None:
if not hasattr(self, 'detector_normalization'):
self.detector_normalization = None
if self.detector_normalization:
d = self.detector_normalization(d)
            assert self.pool_type in ['max', 'mean'], ("pool_type should be "
                                                       "either max or mean "
                                                       "pooling.")
if self.pool_type == 'max':
p = max_pool(bc01=d, pool_shape=self.pool_shape,
pool_stride=self.pool_stride,
image_shape=self.detector_space.shape)
elif self.pool_type == 'mean':
p = mean_pool(bc01=d, pool_shape=self.pool_shape,
pool_stride=self.pool_stride,
image_shape=self.detector_space.shape)
self.output_space.validate(p)
else:
p = d
if not hasattr(self, 'output_normalization'):
self.output_normalization = None
if self.output_normalization:
p = self.output_normalization(p)
return p
def cost(self, Y, Y_hat):
"""
Cost for convnets is hardcoded to be the cost for sigmoids.
TODO: move the cost into the non-linearity class.
Parameters
----------
        Y : theano.gof.Variable
            Targets
        Y_hat : theano.gof.Variable
            Output of `fprop`
Returns
-------
cost : theano.gof.Variable
0-D tensor describing the cost
Notes
-----
Cost mean across units, mean across batch of KL divergence
KL(P || Q) where P is defined by Y and Q is defined by Y_hat
KL(P || Q) = p log p - p log q + (1-p) log (1-p) - (1-p) log (1-q)
"""
        assert self.nonlin.non_lin_name == "sigmoid", ("ConvElemwise "
                                                       "currently supports "
                                                       "a cost function "
                                                       "only for the "
                                                       "sigmoid "
                                                       "nonlinearity.")
batch_axis = self.output_space.get_batch_axis()
ave_total = kl(Y=Y, Y_hat=Y_hat, batch_axis=batch_axis)
ave = ave_total.mean()
return ave
class ConvRectifiedLinear(ConvElemwise):
"""
A convolutional rectified linear layer, based on theano's B01C
formatted convolution.
Parameters
----------
output_channels : int
The number of output channels the layer should have.
kernel_shape : tuple
The shape of the convolution kernel.
pool_shape : tuple
The shape of the spatial max pooling. A two-tuple of ints.
pool_stride : tuple
        The stride of the spatial max pooling. Also a two-tuple of ints.
layer_name : str
A name for this layer that will be prepended to monitoring channels
related to this layer.
irange : float
if specified, initializes each weight randomly in
U(-irange, irange)
border_mode : str
A string indicating the size of the output:
- "full" : The output is the full discrete linear convolution of the
inputs.
- "valid" : The output consists only of those elements that do not
rely on the zero-padding. (Default)
include_prob : float
probability of including a weight element in the set of weights
initialized to U(-irange, irange). If not included it is initialized
to 0.
init_bias : float
All biases are initialized to this number
W_lr_scale : float
The learning rate on the weights for this layer is multiplied by this
scaling factor
b_lr_scale : float
The learning rate on the biases for this layer is multiplied by this
scaling factor
left_slope : float
The slope of the left half of the activation function
max_kernel_norm : float
        If specified, each kernel is constrained to have at most this norm.
    pool_type : str
        The type of the pooling operation performed after the convolution.
        Default pooling type is max-pooling.
tied_b : bool
If true, all biases in the same channel are constrained to be the
same as each other. Otherwise, each bias at each location is
learned independently.
detector_normalization : callable
See `output_normalization`
    output_normalization : callable
        if specified, should be a callable object. The state of the
        network is optionally replaced with normalization(state) at two
        points in processing:
- detector: the rectifier units can be normalized prior to the
spatial pooling
- output: the output of the layer, after spatial pooling, can
be normalized as well
kernel_stride : tuple
The stride of the convolution kernel. A two-tuple of ints.
"""
def __init__(self,
output_channels,
kernel_shape,
pool_shape,
pool_stride,
layer_name,
irange=None,
border_mode='valid',
sparse_init=None,
include_prob=1.0,
init_bias=0.,
W_lr_scale=None,
b_lr_scale=None,
left_slope=0.0,
max_kernel_norm=None,
pool_type='max',
tied_b=False,
detector_normalization=None,
output_normalization=None,
kernel_stride=(1, 1),
monitor_style="classification"):
nonlinearity = RectifierConvNonlinearity(left_slope)
if (irange is None) and (sparse_init is None):
raise AssertionError("You should specify either irange or "
"sparse_init when calling the constructor of "
"ConvRectifiedLinear.")
elif (irange is not None) and (sparse_init is not None):
raise AssertionError("You should specify either irange or "
"sparse_init when calling the constructor of "
"ConvRectifiedLinear and not both.")
# Alias the variables for pep8
mkn = max_kernel_norm
dn = detector_normalization
on = output_normalization
super(ConvRectifiedLinear, self).__init__(output_channels,
kernel_shape,
layer_name,
nonlinearity,
irange=irange,
border_mode=border_mode,
sparse_init=sparse_init,
include_prob=include_prob,
init_bias=init_bias,
W_lr_scale=W_lr_scale,
b_lr_scale=b_lr_scale,
pool_shape=pool_shape,
pool_stride=pool_stride,
max_kernel_norm=mkn,
pool_type=pool_type,
tied_b=tied_b,
detector_normalization=dn,
output_normalization=on,
kernel_stride=kernel_stride,
monitor_style=monitor_style)
def pool_dnn(bc01, pool_shape, pool_stride, mode='max'):
"""
cuDNN pooling op.
Parameters
----------
bc01 : theano tensor
Minibatch in format (batch size, channels, rows, cols).
pool_shape : tuple
Shape of the pool region (rows, cols).
pool_stride : tuple
Strides between pooling regions (row stride, col stride).
mode : str
Flag for `mean` or `max` pooling.
Returns
-------
mx : theano tensor
The output of pooling applied to `bc01`.
"""
assert mode in ['max', 'mean']
if mode == 'mean':
raise NotImplementedError('Mean pooling is not implemented '
'in Pylearn2 using cuDNN as of '
'January 19th, 2015.')
mx = dnn_pool(bc01, tuple(pool_shape), tuple(pool_stride), mode)
return mx
def max_pool(bc01, pool_shape, pool_stride, image_shape, try_dnn=True):
"""
Theano's max pooling op only supports pool_stride = pool_shape
so here we have a graph that does max pooling with strides
Parameters
----------
bc01 : theano tensor
minibatch in format (batch size, channels, rows, cols)
pool_shape : tuple
shape of the pool region (rows, cols)
pool_stride : tuple
strides between pooling regions (row stride, col stride)
    image_shape : tuple
        (rows, cols) tuple to avoid doing some of the arithmetic in theano
try_dnn : bool
Flag to set cuDNN use (default: True).
Returns
-------
pooled : theano tensor
The output of pooling applied to `bc01`
See Also
--------
max_pool_c01b : Same functionality but with ('c', 0, 1, 'b') axes
sandbox.cuda_convnet.pool.max_pool_c01b : Same functionality as
`max_pool_c01b` but GPU-only and considerably faster.
mean_pool : Mean pooling instead of max pooling
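
    Examples
    --------
    A small sketch of the exact-tiling CPU path (assuming a working
    Theano installation; when pool_shape == pool_stride this reduces to
    Theano's built-in max pooling):

    >>> import numpy as np
    >>> import theano
    >>> import theano.tensor as T
    >>> X = np.arange(16., dtype=theano.config.floatX).reshape(1, 1, 4, 4)
    >>> X_sym = T.tensor4('X')
    >>> pool_it = max_pool(X_sym, pool_shape=(2, 2), pool_stride=(2, 2),
    ...                    image_shape=(4, 4), try_dnn=False)
    >>> f = theano.function([X_sym], pool_it)
    >>> f(X)[0, 0].tolist()
    [[5.0, 7.0], [13.0, 15.0]]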
"""
mx = None
r, c = image_shape
pr, pc = pool_shape
rs, cs = pool_stride
assert pr <= r
assert pc <= c
name = bc01.name
if name is None:
name = 'anon_bc01'
if try_dnn and bc01.dtype == "float32":
use_dnn = dnn_available()
else:
use_dnn = False
if pool_shape == pool_stride and not use_dnn:
mx = max_pool_2d(bc01, pool_shape, False)
mx.name = 'max_pool(' + name + ')'
return mx
# Compute index in pooled space of last needed pool
# (needed = each input pixel must appear in at least one pool)
def last_pool(im_shp, p_shp, p_strd):
rval = int(np.ceil(float(im_shp - p_shp) / p_strd))
assert p_strd * rval + p_shp >= im_shp
assert p_strd * (rval - 1) + p_shp < im_shp
# Catch case where p_strd > p_shp causes pool
# to be set outside of im_shp.
if p_strd * rval >= im_shp:
rval -= 1
return rval
# Compute starting row of the last pool
last_pool_r = last_pool(image_shape[0],
pool_shape[0],
pool_stride[0]) * pool_stride[0]
# Compute number of rows needed in image for all indexes to work out
required_r = last_pool_r + pr
last_pool_c = last_pool(image_shape[1],
pool_shape[1],
pool_stride[1]) * pool_stride[1]
required_c = last_pool_c + pc
for bc01v in get_debug_values(bc01):
assert not contains_inf(bc01v)
assert bc01v.shape[2] == image_shape[0]
assert bc01v.shape[3] == image_shape[1]
if (required_r > r) or (required_c > c):
small_r = min(required_r, r)
small_c = min(required_c, c)
assert bc01.dtype.startswith('float')
wide_infinity = T.alloc(T.constant(-np.inf, dtype=bc01.dtype),
bc01.shape[0],
bc01.shape[1],
required_r,
required_c)
bc01 = T.set_subtensor(wide_infinity[:, :, 0:small_r, 0:small_c],
bc01[:, :, 0:small_r, 0:small_c])
name = 'infinite_padded_' + name
if use_dnn:
mx = pool_dnn(bc01, pool_shape, pool_stride, 'max')
else:
for row_within_pool in xrange(pool_shape[0]):
row_stop = last_pool_r + row_within_pool + 1
for col_within_pool in xrange(pool_shape[1]):
col_stop = last_pool_c + col_within_pool + 1
cur = bc01[:,
:,
row_within_pool:row_stop:rs,
col_within_pool:col_stop:cs]
cur.name = ('max_pool_cur_' + name + '_' +
str(row_within_pool) + '_' + str(col_within_pool))
if mx is None:
mx = cur
else:
mx = T.maximum(mx, cur)
mx.name = ('max_pool_mx_' + name + '_' +
str(row_within_pool) + '_' +
str(col_within_pool))
mx.name = 'max_pool(' + name + ')'
for mxv in get_debug_values(mx):
assert isfinite(mxv)
return mx
def max_pool_c01b(c01b, pool_shape, pool_stride, image_shape):
"""
Theano's max pooling op only supports pool_stride = pool_shape
so here we have a graph that does max pooling with strides
Parameters
----------
c01b : theano tensor
minibatch in format (channels, rows, cols, batch size)
pool_shape : tuple
shape of the pool region (rows, cols)
pool_stride : tuple
strides between pooling regions (row stride, col stride)
    image_shape : tuple
        (rows, cols) tuple to avoid doing some of the arithmetic in theano
Returns
-------
pooled : theano tensor
The output of pooling applied to `c01b`
See Also
--------
sandbox.cuda_convnet.pool.max_pool_c01b : Same functionality but GPU-only
and considerably faster.
max_pool : Same functionality but with ('b', 0, 1, 'c') axes
"""
mx = None
r, c = image_shape
pr, pc = pool_shape
rs, cs = pool_stride
assert pr > 0
assert pc > 0
assert pr <= r
assert pc <= c
# Compute index in pooled space of last needed pool
# (needed = each input pixel must appear in at least one pool)
def last_pool(im_shp, p_shp, p_strd):
rval = int(np.ceil(float(im_shp - p_shp) / p_strd))
assert p_strd * rval + p_shp >= im_shp
assert p_strd * (rval - 1) + p_shp < im_shp
return rval
# Compute starting row of the last pool
last_pool_r = last_pool(image_shape[0],
pool_shape[0],
pool_stride[0]) * pool_stride[0]
# Compute number of rows needed in image for all indexes to work out
required_r = last_pool_r + pr
last_pool_c = last_pool(image_shape[1],
pool_shape[1],
pool_stride[1]) * pool_stride[1]
required_c = last_pool_c + pc
for c01bv in get_debug_values(c01b):
assert not contains_inf(c01bv)
assert c01bv.shape[1] == r
assert c01bv.shape[2] == c
wide_infinity = T.alloc(-np.inf,
c01b.shape[0],
required_r,
required_c,
c01b.shape[3])
name = c01b.name
if name is None:
        name = 'anon_c01b'
c01b = T.set_subtensor(wide_infinity[:, 0:r, 0:c, :], c01b)
c01b.name = 'infinite_padded_' + name
for row_within_pool in xrange(pool_shape[0]):
row_stop = last_pool_r + row_within_pool + 1
for col_within_pool in xrange(pool_shape[1]):
col_stop = last_pool_c + col_within_pool + 1
cur = c01b[:,
row_within_pool:row_stop:rs,
col_within_pool:col_stop:cs,
:]
cur.name = ('max_pool_cur_' + c01b.name + '_' +
str(row_within_pool) + '_' + str(col_within_pool))
if mx is None:
mx = cur
else:
mx = T.maximum(mx, cur)
mx.name = ('max_pool_mx_' + c01b.name + '_' +
str(row_within_pool) + '_' + str(col_within_pool))
mx.name = 'max_pool(' + name + ')'
for mxv in get_debug_values(mx):
assert isfinite(mxv)
return mx
def mean_pool(bc01, pool_shape, pool_stride, image_shape):
"""
Does mean pooling (aka average pooling) via a Theano graph.
Parameters
----------
bc01 : theano tensor
minibatch in format (batch size, channels, rows, cols)
pool_shape : tuple
shape of the pool region (rows, cols)
pool_stride : tuple
strides between pooling regions (row stride, col stride)
image_shape : tuple
(rows, cols) tuple to avoid doing some arithmetic in theano
Returns
-------
pooled : theano tensor
The output of pooling applied to `bc01`
See Also
--------
max_pool : Same thing but with max pooling
Examples
--------
>>> import theano
>>> import theano.tensor as T
>>> from pylearn2.models.mlp import mean_pool
>>> import numpy as np
>>> t = np.array([[1, 1, 3, 3],
... [1, 1, 3, 3],
... [5, 5, 7, 7],
... [5, 5, 7, 7],
... [9, 9, 11, 11],
... [9, 9, 11, 11]])
>>> X = np.zeros((3, t.shape[0], t.shape[1]))
>>> X[:] = t
>>> X = X[np.newaxis]
>>> X_sym = T.tensor4('X')
>>> pool_it = mean_pool(X_sym, pool_shape=(2, 2), pool_stride=(2, 2),
... image_shape=(6, 4))
>>> f = theano.function(inputs=[X_sym], outputs=pool_it)
    This will pool over windows of size (2, 2) while also stepping by this
    same amount, shrinking the example input to [[1, 3], [5, 7], [9, 11]].
"""
mx = None
r, c = image_shape
pr, pc = pool_shape
rs, cs = pool_stride
# Compute index in pooled space of last needed pool
# (needed = each input pixel must appear in at least one pool)
def last_pool(im_shp, p_shp, p_strd):
rval = int(np.ceil(float(im_shp - p_shp) / p_strd))
assert p_strd * rval + p_shp >= im_shp
assert p_strd * (rval - 1) + p_shp < im_shp
return rval
# Compute starting row of the last pool
last_pool_r = last_pool(image_shape[0],
pool_shape[0],
pool_stride[0]) * pool_stride[0]
# Compute number of rows needed in image for all indexes to work out
required_r = last_pool_r + pr
last_pool_c = last_pool(image_shape[1],
pool_shape[1],
pool_stride[1]) * pool_stride[1]
required_c = last_pool_c + pc
for bc01v in get_debug_values(bc01):
assert not contains_inf(bc01v)
assert bc01v.shape[2] == image_shape[0]
assert bc01v.shape[3] == image_shape[1]
    # Pad with zeros rather than -inf: padded positions contribute
    # nothing to the sums, and the count mask below excludes them from
    # the divisor, so means over boundary pools stay finite.
    wide_zeros = T.alloc(T.constant(0., dtype=bc01.dtype),
                         bc01.shape[0],
                         bc01.shape[1],
                         required_r,
                         required_c)
    name = bc01.name
    if name is None:
        name = 'anon_bc01'
    bc01 = T.set_subtensor(wide_zeros[:, :, 0:r, 0:c], bc01)
    bc01.name = 'zero_padded_' + name
# Create a 'mask' used to keep count of the number of elements summed for
# each position
wide_infinity_count = T.alloc(0, bc01.shape[0], bc01.shape[1], required_r,
required_c)
bc01_count = T.set_subtensor(wide_infinity_count[:, :, 0:r, 0:c], 1)
for row_within_pool in xrange(pool_shape[0]):
row_stop = last_pool_r + row_within_pool + 1
for col_within_pool in xrange(pool_shape[1]):
col_stop = last_pool_c + col_within_pool + 1
cur = bc01[:,
:,
row_within_pool:row_stop:rs,
col_within_pool:col_stop:cs]
cur.name = ('mean_pool_cur_' + bc01.name + '_' +
str(row_within_pool) + '_' + str(col_within_pool))
cur_count = bc01_count[:,
:,
row_within_pool:row_stop:rs,
col_within_pool:col_stop:cs]
if mx is None:
mx = cur
count = cur_count
else:
mx = mx + cur
count = count + cur_count
mx.name = ('mean_pool_mx_' + bc01.name + '_' +
str(row_within_pool) + '_' + str(col_within_pool))
mx /= count
mx.name = 'mean_pool(' + name + ')'
for mxv in get_debug_values(mx):
assert isfinite(mxv)
return mx
@wraps(_WD)
def WeightDecay(*args, **kwargs):
warnings.warn("pylearn2.models.mlp.WeightDecay has moved to "
"pylearn2.costs.mlp.WeightDecay. This link"
"may be removed after 2015-05-13.")
return _WD(*args, **kwargs)
@wraps(_L1WD)
def L1WeightDecay(*args, **kwargs):
warnings.warn("pylearn2.models.mlp.L1WeightDecay has moved to "
"pylearn2.costs.mlp.WeightDecay. This link"
"may be removed after 2015-05-13.")
return _L1WD(*args, **kwargs)
class LinearGaussian(Linear):
"""
A Linear layer augmented with a precision vector, for modeling
conditionally Gaussian data.
    Specifically, given an input x, this layer models the distribution over
the output as
y ~ p(y | x) = N(y | Wx + b, beta^-1)
i.e., y is conditionally Gaussian with mean Wx + b and variance
beta^-1.
beta is a diagonal precision matrix so beta^-1 is a diagonal covariance
matrix.
Internally, beta is stored as the vector of diagonal values on this
matrix.
Since the output covariance is not a function of the input, this does
not provide an example-specific estimate of the error in the mean.
However, the vector-valued beta does mean that maximizing log p(y | x)
    will reweight the mean squared error so that variables that can be
    estimated more easily receive a higher penalty. This is one way of
    adapting the model better to heterogeneous data.
Parameters
----------
init_beta : float or ndarray
Any value > 0 that can be broadcasted to a vector of shape (dim, ).
The elements of beta are initialized to this value.
A good value is often the precision (inverse variance) of the target
variables in the training set, as provided by the
`beta_from_targets` function. This is the optimal beta for a dummy
model that just predicts the mean target value from the training set.
min_beta : float
The elements of beta are constrained to be >= this value.
This value must be > 0., otherwise the output conditional is not
constrained to be a valid probability distribution.
A good value is often the precision (inverse variance) of the target
variables in the training set, as provided by the
`beta_from_targets` function. This is the optimal beta for a dummy
model that just predicts the mean target value from the training set.
A trained model should always be able to obtain at least this much
precision, at least on the training set.
max_beta : float
The elements of beta are constrained to be <= this value.
We impose this constraint because for problems
where the training set values can be predicted
exactly, beta can grow without bound, which also makes the
gradients grow without bound, resulting in numerical problems.
    beta_lr_scale : float or None
        If not None, the learning rate on beta is multiplied by this
        scaling factor.
    kwargs : dict
        Arguments to the `Linear` superclass.
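
    Examples
    --------
    A NumPy sketch of the negative log likelihood that `cost` computes
    (the values of Y, Y_hat and beta are illustrative):

    >>> import numpy as np
    >>> Y = np.array([[1., 2.]]); Y_hat = np.array([[0., 2.]])
    >>> beta = np.array([2., 1.])
    >>> nll = (0.5 * np.dot((Y - Y_hat) ** 2, beta).mean()
    ...        - 0.5 * np.log(beta).sum())
    >>> round(nll, 4)
    0.6534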
"""
def __init__(self, init_beta, min_beta, max_beta, beta_lr_scale, **kwargs):
super(LinearGaussian, self).__init__(**kwargs)
self.__dict__.update(locals())
del self.self
del self.kwargs
@wraps(Layer.set_input_space)
def set_input_space(self, space):
super(LinearGaussian, self).set_input_space(space)
assert isinstance(self.output_space, VectorSpace)
self.beta = sharedX(self.output_space.get_origin() + self.init_beta,
'beta')
@wraps(Layer.get_layer_monitoring_channels)
def get_layer_monitoring_channels(self, state_below=None,
state=None, targets=None):
rval = super(LinearGaussian,
self).get_layer_monitoring_channels(state_below,
state,
targets)
assert isinstance(rval, OrderedDict)
rval['beta_min'] = self.beta.min()
rval['beta_mean'] = self.beta.mean()
rval['beta_max'] = self.beta.max()
        if targets is not None:
rval['mse'] = T.sqr(state - targets).mean()
return rval
@wraps(Linear.cost)
def cost(self, Y, Y_hat):
return (0.5 * T.dot(T.sqr(Y - Y_hat), self.beta).mean() -
0.5 * T.log(self.beta).sum())
@wraps(Linear.cost_matrix)
def cost_matrix(self, Y, Y_hat):
return 0.5 * T.sqr(Y - Y_hat) * self.beta - 0.5 * T.log(self.beta)
@wraps(Layer._modify_updates)
def _modify_updates(self, updates):
super(LinearGaussian, self)._modify_updates(updates)
if self.beta in updates:
updates[self.beta] = T.clip(updates[self.beta],
self.min_beta,
self.max_beta)
@wraps(Layer.get_lr_scalers)
def get_lr_scalers(self):
rval = super(LinearGaussian, self).get_lr_scalers()
if self.beta_lr_scale is not None:
rval[self.beta] = self.beta_lr_scale
return rval
@wraps(Layer.get_params)
def get_params(self):
return super(LinearGaussian, self).get_params() + [self.beta]
def beta_from_design(design, min_var=1e-6, max_var=1e6):
"""
Returns the marginal precision of a design matrix.
Parameters
----------
design : ndarray
A numpy ndarray containing a design matrix
min_var : float
max_var : float
All variances are constrained to lie in the range [min_var, max_var]
to avoid numerical issues like infinite precision.
Returns
-------
beta : ndarray
A 1D vector containing the marginal precision of each variable in the
design matrix.
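
    Examples
    --------
    A small sanity check: each column's precision is the inverse of its
    variance.

    >>> import numpy as np
    >>> design = np.array([[0., 0.], [2., 4.]])
    >>> beta_from_design(design).tolist()
    [1.0, 0.25]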
"""
return 1. / np.clip(design.var(axis=0), min_var, max_var)
def beta_from_targets(dataset, **kwargs):
"""
Returns the marginal precision of the targets in a dataset.
Parameters
----------
dataset : DenseDesignMatrix
A DenseDesignMatrix with a targets field `y`
kwargs : dict
Extra arguments to `beta_from_design`
Returns
-------
beta : ndarray
A 1-D vector containing the marginal precision of the *targets* in
`dataset`.
"""
return beta_from_design(dataset.y, **kwargs)
def beta_from_features(dataset, **kwargs):
"""
Returns the marginal precision of the features in a dataset.
Parameters
----------
dataset : DenseDesignMatrix
The dataset to compute the precision on.
kwargs : dict
Passed through to `beta_from_design`
Returns
-------
beta : ndarray
Vector of precision values for each feature in `dataset`
"""
return beta_from_design(dataset.X, **kwargs)
def mean_of_targets(dataset):
"""
Returns the mean of the targets in a dataset.
Parameters
----------
dataset : DenseDesignMatrix
Returns
-------
mn : ndarray
A 1-D vector with entry i giving the mean of target i
"""
return dataset.y.mean(axis=0)
class PretrainedLayer(Layer):
"""
A layer whose weights are initialized, and optionally fixed,
based on prior training.
Parameters
----------
layer_content : Model
Should implement "upward_pass" (RBM and Autoencoder do this)
    freeze_params : bool
        If True, regard layer_content's parameters as fixed.
If False, they become parameters of this layer and can be
fine-tuned to optimize the MLP's cost function.
"""
def __init__(self, layer_name, layer_content, freeze_params=False):
super(PretrainedLayer, self).__init__()
self.__dict__.update(locals())
del self.self
@wraps(Layer.set_input_space)
def set_input_space(self, space):
assert self.get_input_space() == space
@wraps(Layer.get_params)
def get_params(self):
if self.freeze_params:
return []
return self.layer_content.get_params()
@wraps(Layer.get_input_space)
def get_input_space(self):
return self.layer_content.get_input_space()
@wraps(Layer.get_output_space)
def get_output_space(self):
return self.layer_content.get_output_space()
@wraps(Layer.get_layer_monitoring_channels)
def get_layer_monitoring_channels(self, state_below=None,
state=None, targets=None):
return OrderedDict([])
@wraps(Layer.fprop)
def fprop(self, state_below):
return self.layer_content.upward_pass(state_below)
class CompositeLayer(Layer):
"""
A Layer that runs several layers in parallel. Its default behavior
is to pass the layer's input to each of the components.
Alternatively, it can take a CompositeSpace as an input and a mapping
from inputs to layers i.e. providing each component layer with a
subset of the inputs.
Parameters
----------
layer_name : str
The name of this layer
layers : tuple or list
The component layers to run in parallel.
inputs_to_layers : dict mapping int to list of ints, optional
Can only be used if the input space is a CompositeSpace.
If inputs_to_layers[i] contains j, it means input i will
be given as input to component j. Note that if multiple inputs are
passed on to e.g. an inner CompositeLayer, the same order will
be maintained. If the list is empty, the input will be discarded.
If an input does not appear in the dictionary, it will be given to
all components.
Examples
--------
>>> composite_layer = CompositeLayer(
... layer_name='composite_layer',
... layers=[Tanh(7, 'h0', 0.1), Sigmoid(5, 'h1', 0.1)],
... inputs_to_layers={
... 0: [1],
... 1: [0]
... })
This CompositeLayer has a CompositeSpace with 2 subspaces as its
input space. The first input is given to the Sigmoid layer, the second
input is given to the Tanh layer.
>>> wrapper_layer = CompositeLayer(
... layer_name='wrapper_layer',
... layers=[Linear(9, 'h2', 0.1),
... composite_layer,
... Tanh(7, 'h3', 0.1)],
... inputs_to_layers={
... 0: [1],
... 2: []
... })
This CompositeLayer takes 3 inputs. The first one is given to the
inner CompositeLayer. The second input is passed on to each component
layer i.e. to the Tanh, Linear as well as CompositeLayer. The third
    input is discarded. Note that the inner CompositeLayer will receive
the inputs with the same ordering i.e. [0, 1], and never [1, 0].
"""
def __init__(self, layer_name, layers, inputs_to_layers=None):
self.num_layers = len(layers)
if inputs_to_layers is not None:
if not isinstance(inputs_to_layers, dict):
raise TypeError("CompositeLayer expected inputs_to_layers to "
"be dict, got " + str(type(inputs_to_layers)))
self.inputs_to_layers = OrderedDict()
for key in sorted(inputs_to_layers):
assert isinstance(key, py_integer_types)
value = inputs_to_layers[key]
assert is_iterable(value)
assert all(isinstance(v, py_integer_types) for v in value)
# Check 'not value' to support case of empty list
assert not value or all(0 <= v < self.num_layers
for v in value)
self.inputs_to_layers[key] = sorted(value)
super(CompositeLayer, self).__init__()
self.__dict__.update(locals())
del self.self
@property
def routing_needed(self):
return self.inputs_to_layers is not None
@wraps(Layer.set_input_space)
def set_input_space(self, space):
if not isinstance(space, CompositeSpace):
if self.inputs_to_layers is not None:
raise ValueError("CompositeLayer received an inputs_to_layers "
"mapping, but does not have a CompositeSpace "
"as its input space, so there is nothing to "
"map. Received " + str(space) + " as input "
"space.")
elif self.routing_needed:
if not max(self.inputs_to_layers) < len(space.components):
raise ValueError("The inputs_to_layers mapping of "
"CompositeSpace contains they key " +
str(max(self.inputs_to_layers)) + " "
"(0-based) but the input space only "
"contains " + str(self.num_layers) + " "
"layers.")
# Invert the dictionary
self.layers_to_inputs = OrderedDict()
for i in xrange(self.num_layers):
inputs = []
for j in xrange(len(space.components)):
if j in self.inputs_to_layers:
if i in self.inputs_to_layers[j]:
inputs.append(j)
else:
inputs.append(j)
self.layers_to_inputs[i] = inputs
for i, layer in enumerate(self.layers):
if self.routing_needed and i in self.layers_to_inputs:
cur_space = space.restrict(self.layers_to_inputs[i])
else:
cur_space = space
layer.set_input_space(cur_space)
self.input_space = space
self.output_space = CompositeSpace(tuple(layer.get_output_space()
for layer in self.layers))
self._target_space = CompositeSpace(tuple(layer.get_target_space()
for layer in self.layers))
@wraps(Layer.get_params)
def get_params(self):
rval = []
for layer in self.layers:
rval = safe_union(layer.get_params(), rval)
return rval
@wraps(Layer.fprop)
def fprop(self, state_below):
rvals = []
for i, layer in enumerate(self.layers):
if self.routing_needed and i in self.layers_to_inputs:
cur_state_below = [state_below[j]
for j in self.layers_to_inputs[i]]
# This is to mimic the behavior of CompositeSpace's restrict
# method, which only returns a CompositeSpace when the number
# of components is greater than 1
if len(cur_state_below) == 1:
cur_state_below, = cur_state_below
else:
cur_state_below = state_below
rvals.append(layer.fprop(cur_state_below))
return tuple(rvals)
def _weight_decay_aggregate(self, method_name, coeff):
if isinstance(coeff, py_float_types):
return T.sum([getattr(layer, method_name)(coeff)
for layer in self.layers])
elif is_iterable(coeff):
assert all(layer_coeff >= 0 for layer_coeff in coeff)
return T.sum([getattr(layer, method_name)(layer_coeff) for
layer, layer_coeff in safe_zip(self.layers, coeff)
if layer_coeff > 0], dtype=config.floatX)
else:
raise TypeError("CompositeLayer's " + method_name + " received "
"coefficients of type " + str(type(coeff)) + " "
"but must be provided with a float or list/tuple")
def get_weight_decay(self, coeff):
"""
Provides an expression for a squared L2 penalty on the weights,
which is the weighted sum of the squared L2 penalties of the layer
components.
Parameters
----------
coeff : float or tuple/list
The coefficient on the squared L2 weight decay penalty for
this layer. If a single value is provided, this coefficient is
used for each component layer. If a list or tuple of
coefficients is given, they are passed on to the component
layers in the given order.
Returns
-------
weight_decay : theano.gof.Variable
An expression for the squared L2 weight decay penalty term for
this layer.
"""
return self._weight_decay_aggregate('get_weight_decay', coeff)
def get_l1_weight_decay(self, coeff):
"""
Provides an expression for an L1 penalty on the weights,
which is the weighted sum of the L1 penalties of the layer
components.
Parameters
----------
coeff : float or tuple/list
The coefficient on the L1 weight decay penalty for this layer.
If a single value is provided, this coefficient is used for
each component layer. If a list or tuple of coefficients is
given, they are passed on to the component layers in the
given order.
Returns
-------
weight_decay : theano.gof.Variable
An expression for the L1 weight decay penalty term for this
layer.
"""
return self._weight_decay_aggregate('get_l1_weight_decay', coeff)
@wraps(Layer.cost)
def cost(self, Y, Y_hat):
return sum(layer.cost(Y_elem, Y_hat_elem)
for layer, Y_elem, Y_hat_elem in
safe_zip(self.layers, Y, Y_hat))
@wraps(Layer.set_mlp)
def set_mlp(self, mlp):
super(CompositeLayer, self).set_mlp(mlp)
for layer in self.layers:
layer.set_mlp(mlp)
@wraps(Layer.get_layer_monitoring_channels)
def get_layer_monitoring_channels(self, state_below=None,
state=None, targets=None):
rval = OrderedDict()
# TODO: reduce redundancy with fprop method
for i, layer in enumerate(self.layers):
if self.routing_needed and i in self.layers_to_inputs:
cur_state_below = [state_below[j]
for j in self.layers_to_inputs[i]]
# This is to mimic the behavior of CompositeSpace's restrict
# method, which only returns a CompositeSpace when the number
# of components is greater than 1
if len(cur_state_below) == 1:
cur_state_below, = cur_state_below
else:
cur_state_below = state_below
if state is not None:
cur_state = state[i]
else:
cur_state = None
if targets is not None:
cur_targets = targets[i]
else:
cur_targets = None
d = layer.get_layer_monitoring_channels(
cur_state_below, cur_state, cur_targets)
for key in d:
rval[layer.layer_name + '_' + key] = d[key]
return rval
@wraps(Model._modify_updates)
def _modify_updates(self, updates):
for layer in self.layers:
layer.modify_updates(updates)
@wraps(Layer.get_lr_scalers)
def get_lr_scalers(self):
return get_lr_scalers_from_layers(self)
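# --- Editor's note: a minimal, hedged sketch (not part of the original
# module). CompositeLayer.set_input_space inverts the ``inputs_to_layers``
# mapping (input component index -> layer indices) into ``layers_to_inputs``
# (layer index -> input component indices); components absent from the
# mapping are routed to every layer. A pure-Python illustration:
def _example_invert_routing(inputs_to_layers, num_layers, num_components):
    """Toy re-implementation of the inversion done in set_input_space."""
    layers_to_inputs = {}
    for i in range(num_layers):
        inputs = []
        for j in range(num_components):
            if j in inputs_to_layers:
                if i in inputs_to_layers[j]:
                    inputs.append(j)
            else:
                # unmapped input components feed every layer
                inputs.append(j)
        layers_to_inputs[i] = inputs
    return layers_to_inputs

# _example_invert_routing({0: [0], 1: [1]}, num_layers=2, num_components=3)
# returns {0: [0, 2], 1: [1, 2]}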
class FlattenerLayer(Layer):
"""
A wrapper around a different layer that flattens
the original layer's output.
The cost works by unflattening the target and then
calling the wrapped Layer's cost.
This is mostly intended for use with CompositeLayer as the wrapped
Layer, and is mostly useful as a workaround for theano not having
a TupleVariable with which to represent a composite target.
There are obvious memory, performance, and readability issues with doing
this, so really it would be better for theano to support TupleTypes.
See pylearn2.sandbox.tuple_var and the theano-dev e-mail thread
"TupleType".
Parameters
----------
raw_layer : Layer
Layer that FlattenerLayer wraps.
"""
def __init__(self, raw_layer):
super(FlattenerLayer, self).__init__()
self.__dict__.update(locals())
del self.self
self.layer_name = raw_layer.layer_name
@wraps(Layer.set_input_space)
def set_input_space(self, space):
self.raw_layer.set_input_space(space)
total_dim = self.raw_layer.get_output_space().get_total_dimension()
self.output_space = VectorSpace(total_dim)
@wraps(Layer.get_input_space)
def get_input_space(self):
return self.raw_layer.get_input_space()
@wraps(Layer.get_monitoring_channels)
def get_monitoring_channels(self, data):
return self.raw_layer.get_monitoring_channels(data)
@wraps(Layer.get_layer_monitoring_channels)
def get_layer_monitoring_channels(self, state_below=None,
state=None, targets=None):
raw_space = self.raw_layer.get_output_space()
state = raw_space.undo_format_as(state,
self.get_output_space())
if targets is not None:
targets = self.get_target_space().format_as(
targets, self.raw_layer.get_target_space())
return self.raw_layer.get_layer_monitoring_channels(
state_below=state_below,
state=state,
targets=targets
)
@wraps(Layer.get_monitoring_data_specs)
def get_monitoring_data_specs(self):
return self.raw_layer.get_monitoring_data_specs()
@wraps(Layer.get_params)
def get_params(self):
return self.raw_layer.get_params()
@wraps(Layer.get_weights)
def get_weights(self):
return self.raw_layer.get_weights()
@wraps(Layer.get_weight_decay)
def get_weight_decay(self, coeffs):
return self.raw_layer.get_weight_decay(coeffs)
@wraps(Layer.get_l1_weight_decay)
def get_l1_weight_decay(self, coeffs):
return self.raw_layer.get_l1_weight_decay(coeffs)
@wraps(Layer.set_batch_size)
def set_batch_size(self, batch_size):
self.raw_layer.set_batch_size(batch_size)
@wraps(Layer._modify_updates)
def _modify_updates(self, updates):
self.raw_layer.modify_updates(updates)
@wraps(Layer.get_lr_scalers)
def get_lr_scalers(self):
return self.raw_layer.get_lr_scalers()
@wraps(Layer.fprop)
def fprop(self, state_below):
raw = self.raw_layer.fprop(state_below)
return self.raw_layer.get_output_space().format_as(raw,
self.output_space)
@wraps(Layer.cost)
def cost(self, Y, Y_hat):
raw_space = self.raw_layer.get_output_space()
target_space = self.output_space
raw_Y = target_space.format_as(Y, raw_space)
raw_Y_hat = raw_space.undo_format_as(Y_hat, target_space)
raw_space.validate(raw_Y_hat)
return self.raw_layer.cost(raw_Y, raw_Y_hat)
@wraps(Layer.set_mlp)
def set_mlp(self, mlp):
super(FlattenerLayer, self).set_mlp(mlp)
self.raw_layer.set_mlp(mlp)
class WindowLayer(Layer):
"""
Layer used to select a window of an image input.
The input of the layer must be Conv2DSpace.
Parameters
----------
layer_name : str
A name for this layer.
window : tuple
A four-tuple of ints indicating respectively
the top left x and y position, and
the bottom right x and y position of the window.
"""
def __init__(self, layer_name, window):
super(WindowLayer, self).__init__()
self.__dict__.update(locals())
del self.self
if window[0] < 0 or window[0] > window[2] or \
window[1] < 0 or window[1] > window[3]:
raise ValueError("WindowLayer: bad window parameter")
@wraps(Layer.fprop)
def fprop(self, state_below):
extracts = [slice(None), slice(None), slice(None), slice(None)]
extracts[self.rows] = slice(self.window[0], self.window[2] + 1)
extracts[self.cols] = slice(self.window[1], self.window[3] + 1)
extracts = tuple(extracts)
return state_below[extracts]
@wraps(Layer.set_input_space)
def set_input_space(self, space):
self.input_space = space
if not isinstance(space, Conv2DSpace):
raise TypeError("The input to a Window layer should be a "
"Conv2DSpace, but layer " + self.layer_name +
" got " + str(type(self.input_space)))
axes = space.axes
self.rows = axes.index(0)
self.cols = axes.index(1)
nrows = space.shape[0]
ncols = space.shape[1]
if self.window[2] + 1 > nrows or self.window[3] + 1 > ncols:
raise ValueError("WindowLayer: bad window shape. "
"Input is [" + str(nrows) + ", " +
str(ncols) + "], "
"but layer " + self.layer_name + " has window "
+ str(self.window))
self.output_space = Conv2DSpace(
shape=[self.window[2] - self.window[0] + 1,
self.window[3] - self.window[1] + 1],
num_channels=space.num_channels,
axes=axes)
@wraps(Layer.get_params)
def get_params(self):
return []
@wraps(Layer.get_monitoring_channels)
def get_monitoring_channels(self):
return []
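# --- Editor's note: a hedged sketch (not part of the original module) of the
# crop that WindowLayer.fprop performs, written in plain numpy and assuming a
# ('b', 0, 1, 'c') axis layout (rows on axis 1, columns on axis 2):
def _example_window_crop():
    """Crop a toy batch with the same inclusive bounds WindowLayer uses."""
    batch = np.zeros((16, 32, 32, 3))   # (batch, rows, cols, channels)
    window = (4, 4, 11, 11)             # top-left x, y and bottom-right x, y
    crop = batch[:, window[0]:window[2] + 1, window[1]:window[3] + 1, :]
    assert crop.shape == (16, 8, 8, 3)  # 11 - 4 + 1 = 8 rows and 8 cols
    return crop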
def generate_dropout_mask(mlp, default_include_prob=0.5,
input_include_probs=None, rng=(2013, 5, 17)):
"""
Generate a dropout mask (as an integer) given inclusion
probabilities.
Parameters
----------
mlp : object
An MLP object.
default_include_prob : float, optional
The probability of including an input to a hidden
layer, for layers not listed in `input_include_probs`.
Default is 0.5.
input_include_probs : dict, optional
A dictionary mapping layer names to probabilities
of input inclusion for that layer. Default is `None`,
in which case `default_include_prob` is used for all
layers.
rng : RandomState object or seed, optional
A `numpy.random.RandomState` object or a seed used to
create one.
Returns
-------
mask : int
An integer indexing a dropout mask for the network,
drawn with the appropriate probability given the
inclusion probabilities.
"""
if input_include_probs is None:
input_include_probs = {}
if not hasattr(rng, 'uniform'):
rng = np.random.RandomState(rng)
total_units = 0
mask = 0
for layer in mlp.layers:
if layer.layer_name in input_include_probs:
p = input_include_probs[layer.layer_name]
else:
p = default_include_prob
for _ in xrange(layer.get_input_space().get_total_dimension()):
mask |= int(rng.uniform() < p) << total_units
total_units += 1
return mask
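# --- Editor's note: a hedged sketch (not part of the original module).
# generate_dropout_mask packs one inclusion bit per input unit into an
# integer, least-significant bit first; decoding it back:
def _example_decode_dropout_mask(mask, total_units):
    """Return the list of 0/1 inclusion flags encoded in an integer mask."""
    return [(mask >> unit) & 1 for unit in range(total_units)]

# _example_decode_dropout_mask(0b1011, 4) returns [1, 1, 0, 1]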
def sampled_dropout_average(mlp, inputs, num_masks,
default_input_include_prob=0.5,
input_include_probs=None,
default_input_scale=2.,
input_scales=None,
rng=(2013, 5, 17),
per_example=False):
"""
Take the geometric mean over a number of randomly sampled
dropout masks for an MLP with softmax outputs.
Parameters
----------
mlp : object
An MLP object.
inputs : tensor_like
A Theano variable representing a minibatch appropriate
for fpropping through the MLP.
num_masks : int
The number of masks to sample.
default_input_include_prob : float, optional
The probability of including an input to a hidden
layer, for layers not listed in `input_include_probs`.
Default is 0.5.
input_include_probs : dict, optional
A dictionary mapping layer names to probabilities
of input inclusion for that layer. Default is `None`,
in which case `default_include_prob` is used for all
layers.
default_input_scale : float, optional
The amount to scale input in dropped out layers.
input_scales : dict, optional
A dictionary mapping layer names to constants by
which to scale the input.
rng : RandomState object or seed, optional
A `numpy.random.RandomState` object or a seed used to
create one.
per_example : bool, optional
If `True`, generate a different mask for every single
test example, so you have `num_masks` masks per example
instead of `num_masks` networks total. If `False`,
`num_masks` masks are fixed in the graph.
Returns
-------
geo_mean : tensor_like
A symbolic graph for the geometric mean prediction of
all the networks.
"""
if input_include_probs is None:
input_include_probs = {}
if input_scales is None:
input_scales = {}
if not hasattr(rng, 'uniform'):
rng = np.random.RandomState(rng)
mlp._validate_layer_names(list(input_include_probs.keys()))
mlp._validate_layer_names(list(input_scales.keys()))
if per_example:
outputs = [mlp.dropout_fprop(inputs, default_input_include_prob,
input_include_probs,
default_input_scale,
input_scales)
for _ in xrange(num_masks)]
else:
masks = [generate_dropout_mask(mlp, default_input_include_prob,
input_include_probs, rng)
for _ in xrange(num_masks)]
outputs = [mlp.masked_fprop(inputs, mask, None,
default_input_scale, input_scales)
for mask in masks]
return geometric_mean_prediction(outputs)
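# --- Editor's note: hedged usage sketch (not part of the original module).
# Compiling an averaged predictor over 16 sampled masks; `mlp` is assumed to
# be an already-trained MLP with a softmax output:
#
#   import theano
#   X = T.matrix('X')
#   y_hat = sampled_dropout_average(mlp, X, num_masks=16)
#   predict = theano.function([X], y_hat)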
def exhaustive_dropout_average(mlp, inputs, masked_input_layers=None,
default_input_scale=2., input_scales=None):
"""
Take the geometric mean over all dropout masks of an
MLP with softmax outputs.
Parameters
----------
mlp : object
An MLP object.
inputs : tensor_like
A Theano variable representing a minibatch appropriate
for fpropping through the MLP.
masked_input_layers : list, optional
A list of layer names whose input should be masked.
Default is all layers (including the first hidden
layer, i.e. mask the input).
default_input_scale : float, optional
The amount to scale input in dropped out layers.
input_scales : dict, optional
A dictionary mapping layer names to constants by
which to scale the input.
Returns
-------
geo_mean : tensor_like
A symbolic graph for the geometric mean prediction
of all exponentially many masked subnetworks.
Notes
-----
This is exponential in the number of masked input units,
so only use it for tiny toy networks.
"""
if masked_input_layers is None:
masked_input_layers = mlp.layer_names
mlp._validate_layer_names(masked_input_layers)
if input_scales is None:
input_scales = {}
mlp._validate_layer_names(input_scales.keys())
if any(key not in masked_input_layers for key in input_scales):
not_in = [key for key in input_scales
if key not in mlp.layer_names]
raise ValueError(", ".join(not_in) + " in input_scales"
" but not masked")
num_inputs = mlp.get_total_input_dimension(masked_input_layers)
outputs = [mlp.masked_fprop(inputs, mask, masked_input_layers,
default_input_scale, input_scales)
for mask in xrange(2 ** num_inputs)]
return geometric_mean_prediction(outputs)
def geometric_mean_prediction(forward_props):
"""
Take the geometric mean over all dropout masks of an
MLP with softmax outputs.
Parameters
----------
forward_props : list
A list of Theano graphs corresponding to forward
propagations through the network with different
dropout masks.
Returns
-------
geo_mean : tensor_like
A symbolic graph for the geometric mean prediction
of all exponentially many masked subnetworks.
Notes
-----
This is obviously exponential in the size of the network,
don't do this except for tiny toy networks.
"""
presoftmax = []
for out in forward_props:
assert isinstance(out.owner.op, T.nnet.Softmax)
assert len(out.owner.inputs) == 1
presoftmax.append(out.owner.inputs[0])
average = reduce(operator.add, presoftmax) / float(len(presoftmax))
return T.nnet.softmax(average)
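# --- Editor's note: a hedged numpy check (not part of the original module)
# that averaging pre-softmax activations and re-applying softmax equals the
# renormalized geometric mean of the individual softmax outputs, i.e.
# softmax(mean_k(z_k)) is proportional to prod_k softmax(z_k)**(1/K):
def _example_geometric_mean_equivalence():
    rng = np.random.RandomState(0)
    logits = rng.randn(3, 5)                  # K=3 networks, 5 classes
    softmax = lambda z: np.exp(z) / np.exp(z).sum()
    via_logits = softmax(logits.mean(axis=0))
    geo = np.prod([softmax(z) for z in logits], axis=0) ** (1. / 3)
    assert np.allclose(via_logits, geo / geo.sum())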
class BadInputSpaceError(TypeError):
"""
An error raised by an MLP layer when set_input_space is given an
object that is not one of the Spaces that layer supports.
"""
def get_lr_scalers_from_layers(owner):
"""
Get the learning rate scalers for all member layers of
`owner`.
Parameters
----------
owner : Model
Any Model with a `layers` field
Returns
-------
lr_scalers : OrderedDict
A dictionary mapping parameters of `owner` to learning
rate scalers.
"""
rval = OrderedDict()
params = owner.get_params()
for layer in owner.layers:
contrib = layer.get_lr_scalers()
assert isinstance(contrib, OrderedDict)
# No two layers can contend to scale a parameter
assert not any([key in rval for key in contrib])
# Don't try to scale anything that's not a parameter
assert all([key in params for key in contrib])
rval.update(contrib)
assert all([isinstance(val, float) for val in rval.values()])
return rval
goodfeli/pylearn2 | pylearn2/models/mlp.py | Python | bsd-3-clause | 166,284 | ["Gaussian"] | d73bd79c5792f44a4e515d55736a712e6e72daf92be5d6b01fc25d473facc3ad
import nglview
import tempfile
import os
import mdtraj as md
import numpy as np
from rdkit import Chem
from rdkit.Chem import Draw
from itertools import islice
from IPython.display import Image, HTML, display
def combine_mdtraj(protein, ligand):
chain = protein.topology.add_chain()
residue = protein.topology.add_residue("LIG", chain, resSeq=1)
for atom in ligand.topology.atoms:
protein.topology.add_atom(atom.name, atom.element, residue)
protein.xyz = np.hstack([protein.xyz, ligand.xyz])
protein.topology.create_standard_bonds()
return protein
def visualize_complex(complex_mdtraj):
ligand_atoms = [a.index for a in complex_mdtraj.topology.atoms if "LIG" in str(a.residue)]
binding_pocket_atoms = md.compute_neighbors(complex_mdtraj, 0.5, ligand_atoms)[0]
binding_pocket_residues = list(set([complex_mdtraj.topology.atom(a).residue.resSeq for a in binding_pocket_atoms]))
binding_pocket_residues = [str(r) for r in binding_pocket_residues]
binding_pocket_residues = " or ".join(binding_pocket_residues)
traj = nglview.MDTrajTrajectory(complex_mdtraj)  # wrap the mdtraj trajectory for nglview
ngltraj = nglview.NGLWidget(traj)
ngltraj.representations = [
{ "type": "cartoon", "params": {
"sele": "protein", "color": "residueindex"
} },
{ "type": "licorice", "params": {
"sele": "(not hydrogen) and (%s)" % binding_pocket_residues
} },
{ "type": "ball+stick", "params": {
"sele": "LIG"
} }
]
return ngltraj
def visualize_ligand(ligand_mdtraj):
traj = nglview.MDTrajTrajectory(ligand_mdtraj)  # wrap the mdtraj trajectory for nglview
ngltraj = nglview.NGLWidget(traj)
ngltraj.representations = [
{ "type": "ball+stick", "params": {"sele": "all" } } ]
return ngltraj
def convert_lines_to_mdtraj(molecule_lines):
tempdir = tempfile.mkdtemp()
molecule_file = os.path.join(tempdir, "molecule.pdb")
with open(molecule_file, "wb") as f:
f.writelines(molecule_lines)
molecule_mdtraj = md.load(molecule_file)
return molecule_mdtraj
def display_images(filenames):
"""Helper to pretty-print images."""
imagesList=''.join(
["<img style='width: 140px; margin: 0px; float: left; border: 1px solid black;' src='%s' />"
% str(s) for s in sorted(filenames)])
display(HTML(imagesList))
def mols_to_pngs(mols, basename="test"):
"""Helper to write RDKit mols to png files."""
filenames = []
for i, mol in enumerate(mols):
filename = "%s%d.png" % (basename, i)
Draw.MolToFile(mol, filename)
filenames.append(filename)
return filenames
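# --- Editor's note: hedged usage sketch (not part of the original module).
# Typical notebook use of the two helpers above; the SMILES strings are
# arbitrary examples:
#
#   mols = [Chem.MolFromSmiles(s) for s in ("CCO", "c1ccccc1", "CC(=O)O")]
#   display_images(mols_to_pngs(mols, basename="demo"))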
deepchem/deepchem | contrib/visualization/utils.py | Python | mit | 2,568 | ["MDTraj", "RDKit"] | 8ec14f6d92f87a6ec5af7f2ce40414b4c4917b8695b426e4a6e42f7d674d2a08
"""models for the ``group_messaging`` app
"""
from askbot.mail import send_mail #todo: remove dependency?
from askbot.mail.messages import GroupMessagingEmailAlert
from django.conf import settings as django_settings
from django.contrib.auth.models import Group
from django.contrib.auth.models import User
from django.contrib.sites.models import Site
from django.db import models
from django.db.models import signals
from django.template import Context
from django.template.loader import get_template
from django.utils.importlib import import_module
from django.utils.translation import ugettext as _
from group_messaging.signals import response_created
from group_messaging.signals import thread_created
import copy
import datetime
import urllib
MAX_HEADLINE_LENGTH = 80
MAX_SENDERS_INFO_LENGTH = 64
MAX_SUBJECT_LINE_LENGTH = 30
#dummy parse message function
parse_message = lambda v: v
GROUP_NAME_TPL = '_personal_%s'
def get_recipient_names(recipient_groups):
"""returns list of user names if groups are private,
or group names, otherwise"""
names = set()
for group in recipient_groups:
if group.name.startswith('_personal_'):
names.add(group.user_set.all()[0].username)
else:
names.add(group.name)
return names
def get_personal_group_by_user_id(user_id):
return Group.objects.get(name=GROUP_NAME_TPL % user_id)
def get_personal_groups_for_users(users):
"""for a given list of users return their personal groups"""
group_names = [(GROUP_NAME_TPL % user.id) for user in users]
return Group.objects.filter(name__in=group_names)
def get_personal_group(user):
"""returns personal group for the user"""
return get_personal_group_by_user_id(user.id)
def get_unread_inbox_counter(user):
"""returns unread inbox counter for the user"""
counter, junk = UnreadInboxCounter.objects.get_or_create(user=user)
return counter
def create_personal_group(user):
"""creates a personal group for the user"""
group = Group(name=GROUP_NAME_TPL % user.id)
group.save()
return group
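# --- Editor's note (hedged): personal groups are ordinary auth Groups whose
# name embeds the user id via GROUP_NAME_TPL, so user id 42 maps to the
# group named '_personal_42':
#
#   assert (GROUP_NAME_TPL % 42) == '_personal_42'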
class LastVisitTime(models.Model):
"""just remembers when a user has
last visited a given thread
"""
user = models.ForeignKey(User)
message = models.ForeignKey('Message')
at = models.DateTimeField(auto_now_add=True)
class Meta:
unique_together = ('user', 'message')
class SenderListManager(models.Manager):
"""model manager for the :class:`SenderList`"""
def get_senders_for_user(self, user=None):
"""returns query set of :class:`User`"""
user_groups = user.groups.all()
lists = self.filter(recipient__in=user_groups)
user_ids = lists.values_list(
'senders__id', flat=True
).distinct()
return User.objects.filter(id__in=user_ids)
class SenderList(models.Model):
"""a model to store denormalized data
about who sends messages to any given person
sender list is populated automatically
as new messages are created
"""
recipient = models.ForeignKey(Group, unique=True)
senders = models.ManyToManyField(User)
objects = SenderListManager()
class MessageMemo(models.Model):
"""A bridge between message recipients and messages
these records are only created when user sees a message.
The idea is that using groups as recipients, we can send
messages to massive numbers of users, without cluttering
the database.
Instead we'll be creating a "seen" message after user
reads the message.
"""
SEEN = 0
ARCHIVED = 1
DELETED = 2
STATUS_CHOICES = (
(SEEN, 'seen'),
(ARCHIVED, 'archived'),
(DELETED, 'deleted')
)
user = models.ForeignKey(User)
message = models.ForeignKey('Message', related_name='memos')
status = models.SmallIntegerField(
choices=STATUS_CHOICES, default=SEEN
)
class Meta:
unique_together = ('user', 'message')
class MessageManager(models.Manager):
"""model manager for the :class:`Message`"""
def get_sent_threads(self, sender=None):
"""returns list of threads for the "sent" mailbox
this function does not deal with deleted=True
"""
responses = self.filter(sender=sender)
responded_to = models.Q(descendants__in=responses, root=None)
seen_filter = models.Q(
memos__status=MessageMemo.SEEN,
memos__user=sender
)
seen_responses = self.filter(responded_to & seen_filter)
unseen_responses = self.filter(responded_to & ~models.Q(memos__user=sender))
return (
self.get_threads(sender=sender) \
| seen_responses.distinct() \
| unseen_responses.distinct()
).distinct()
def get_threads(self, recipient=None, sender=None, deleted=False):
"""returns query set of first messages in conversations,
based on recipient, sender and whether to
load deleted messages or not"""
if sender and recipient and sender.pk == recipient.pk:
raise ValueError('sender cannot be the same as recipient')
filter_kwargs = {
'root': None,
'message_type': Message.STORED
}
if recipient:
filter_kwargs['recipients__in'] = recipient.groups.all()
else:
#todo: possibly a confusing hack - for this branch -
#sender but no recipient in the args - we need "sent" origin threads
recipient = sender
user_thread_filter = models.Q(**filter_kwargs)
message_filter = user_thread_filter
if sender:
message_filter = message_filter & models.Q(sender=sender)
if deleted:
deleted_filter = models.Q(
memos__status=MessageMemo.ARCHIVED,
memos__user=recipient
)
return self.filter(message_filter & deleted_filter)
else:
#rather a tricky query (may need to change the idea to get rid of this)
#select threads that have a memo for the user, but the memo is not ARCHIVED
#in addition, select threads that have zero memos for the user
marked_as_non_deleted_filter = models.Q(
memos__status=MessageMemo.SEEN,
memos__user=recipient
)
#part1 - marked as non-archived
part1 = self.filter(message_filter & marked_as_non_deleted_filter)
#part2 - messages for the user without an attached memo
part2 = self.filter(message_filter & ~models.Q(memos__user=recipient))
#strange that (part1 | part2).distinct() sometimes gives wrong result
threads = list(set(part1) | set(part2))
thread_ids = [thread.id for thread in threads]
return Message.objects.filter(id__in=thread_ids).distinct()
def create(self, **kwargs):
"""creates a message"""
root = kwargs.get('root', None)
if root is None:
parent = kwargs.get('parent', None)
if parent:
if parent.root:
root = parent.root
else:
root = parent
kwargs['root'] = root
headline = kwargs.get('headline', kwargs['text'])
kwargs['headline'] = headline[:MAX_HEADLINE_LENGTH]
kwargs['html'] = parse_message(kwargs['text'])
message = super(MessageManager, self).create(**kwargs)
#creator of the message saw it by definition;
#create a "seen" memo for the sender, because we
#don't want to inform the user about his/her own post
sender = kwargs['sender']
MessageMemo.objects.create(
message=message, user=sender, status=MessageMemo.SEEN
)
return message
def create_thread(self, sender=None, recipients=None, text=None):
"""creates a stored message and adds recipients"""
message = self.create(
message_type=Message.STORED,
sender=sender,
senders_info=sender.username,
text=text,
)
now = datetime.datetime.now()
LastVisitTime.objects.create(message=message, user=sender, at=now)
names = get_recipient_names(recipients)
message.add_recipient_names_to_senders_info(recipients)
message.save()
message.add_recipients(recipients)
thread_created.send(None, message=message)
return message
def create_response(self, sender=None, text=None, parent=None):
message = self.create(
parent=parent,
message_type=Message.STORED,
sender=sender,
text=text,
)
#recipients are parent's recipients + sender
#creator of response gets memo in the "read" status
recipients = set(parent.recipients.all())
if sender != parent.sender:
senders_group = get_personal_group(parent.sender)
parent.add_recipients([senders_group])
recipients.add(senders_group)
message.add_recipients(recipients)
#add author of the parent as a recipient to parent
#update headline
message.root.headline = text[:MAX_HEADLINE_LENGTH]
#mark last active timestamp for the root message
message.root.last_active_at = datetime.datetime.now()
#update senders info - stuff that is shown in the thread heading
message.root.update_senders_info()
#signal response as created, upon signal increment counters
response_created.send(None, message=message)
#move the thread to inboxes of all recipients
message.root.move_to_inbox()
return message
class Message(models.Model):
"""the message model allowing users to send
messages to other users and groups, via
personal groups.
"""
STORED = 0
TEMPORARY = 1
ONE_TIME = 2
MESSAGE_TYPE_CHOICES = (
(STORED, 'email-like message, stored in the inbox'),
(ONE_TIME, 'will be shown just once'),
(TEMPORARY, 'will be shown until certain time')
)
message_type = models.SmallIntegerField(
choices=MESSAGE_TYPE_CHOICES,
default=STORED,
)
sender = models.ForeignKey(User, related_name='group_messaging_sent_messages')
senders_info = models.CharField(
max_length=MAX_SENDERS_INFO_LENGTH,
default=''
)#comma-separated list of a few names
recipients = models.ManyToManyField(Group)
root = models.ForeignKey(
'self', null=True,
blank=True, related_name='descendants'
)
parent = models.ForeignKey(
'self', null=True,
blank=True, related_name='children'
)
headline = models.CharField(max_length=MAX_HEADLINE_LENGTH)
text = models.TextField(
null=True, blank=True,
help_text='source text for the message, e.g. in markdown format'
)
html = models.TextField(
null=True, blank=True,
help_text='rendered html of the message'
)
sent_at = models.DateTimeField(auto_now_add=True)
last_active_at = models.DateTimeField(auto_now_add=True)
active_until = models.DateTimeField(blank=True, null=True)
objects = MessageManager()
def add_recipient_names_to_senders_info(self, recipient_groups):
names = get_recipient_names(recipient_groups)
old_names = set(self.senders_info.split(','))
names |= old_names
self.senders_info = ','.join(names)
def add_recipients(self, recipients):
"""adds recipients to the message
and updates the sender lists for all recipients
todo: sender lists may be updated in a lazy way - per user
"""
self._cached_recipients_users = None #invalidate internal cache
self.recipients.add(*recipients)
for recipient in recipients:
sender_list, created = SenderList.objects.get_or_create(recipient=recipient)
sender_list.senders.add(self.sender)
def get_absolute_url(self, user=None):
"""returns absolute url to the thread"""
assert user is not None
settings = django_settings.GROUP_MESSAGING
func_path = settings['BASE_URL_GETTER_FUNCTION']
path_bits = func_path.split('.')
url_getter = getattr(
import_module('.'.join(path_bits[:-1])),
path_bits[-1]
)
params = copy.copy(settings['BASE_URL_PARAMS'])
params['thread_id'] = self.id
url = url_getter(user) + '?' + urllib.urlencode(params)
#if include_domain_name: #don't need this b/c
# site = Site.objects.get_current()
# url = 'http://' + site.domain + url
return url
def get_email_subject_line(self):
"""forms subject line based on the root message
and prepends 'Re': if message is non-root
"""
subject = self.get_root_message().text[:MAX_SUBJECT_LINE_LENGTH]
if self.root:
subject = _('Re: ') + subject
return subject
def get_root_message(self):
"""returns root message or self
if current message is root
"""
if getattr(self, '_cached_root', None):
return self._cached_root
self._cached_root = self.root or self
return self._cached_root
def get_recipients_users(self):
"""returns query set of users"""
if getattr(self, '_cached_recipients_users', None):
return self._cached_recipients_users
groups = self.recipients.all()
recipients_users = User.objects.filter(
groups__in=groups
).exclude(
id=self.sender.id
).distinct()
self._cached_recipients_users = recipients_users
return recipients_users
def get_timeline(self):
"""returns ordered query set of messages in the thread
with the newest first"""
root = self.get_root_message()
root_qs = Message.objects.filter(id=root.id)
return (root.descendants.all() | root_qs).order_by('-sent_at')
def is_archived_or_deleted(self, user):
memos = MessageMemo.objects.filter(
user=user,
message=self,
status__gt=MessageMemo.SEEN
)
return bool(memos.count())
def is_unread_by_user(self, user, ignore_message=None):
"""True, if there is no "last visit timestamp"
or if there are new child messages created after
the last visit timestamp"""
try:
timer = LastVisitTime.objects.get(user=user, message=self)
except LastVisitTime.DoesNotExist:
#no last visit timestamp, so indeed unread
return True
else:
#see if there are new messages after the last visit
last_visit_timestamp = timer.at
descendants_filter = models.Q(sent_at__gt=last_visit_timestamp)
if ignore_message:
#ignore message used for the newly posted message
#in the same request cycle. The idea is that
#this way we avoid multiple-counting of the unread
#threads
descendants_filter &= ~models.Q(id=ignore_message.id)
follow_up_messages = self.descendants.filter(descendants_filter)
#unread, if we have new followup messages
return bool(follow_up_messages.count())
def send_email_alert(self):
"""signal handler for the message post-save"""
root_message = self.get_root_message()
data = {
'messages': self.get_timeline(),
'message': self
}
for user in self.get_recipients_users():
#todo change url scheme so that all users have the same
#urls within their personal areas of the user profile
#so that we don't need to have loops like this one
thread_url = root_message.get_absolute_url(user)
thread_url = thread_url.replace('&', '&')
#in the template we have a placeholder to be replaced like this:
data['recipient_user'] = user
email = GroupMessagingEmailAlert(data)
body_text = email.render_body()
body_text = body_text.replace('THREAD_URL_HOLE', thread_url)
send_mail(
email.render_subject(),
body_text,
django_settings.DEFAULT_FROM_EMAIL,
[user.email,],
)
def update_senders_info(self):
"""update the contributors info,
meant to be used on a root message only
"""
senders_names = self.senders_info.split(',')
if self.sender.username in senders_names:
senders_names.remove(self.sender.username)
senders_names.insert(0, self.sender.username)
self.senders_info = (','.join(senders_names))[:MAX_SENDERS_INFO_LENGTH]
self.save()
def move_to_inbox(self, user=None):
"""unarchive message for all recipients"""
archived_filter = {}
if user:
archived_filter['user'] = user
memos = self.memos.filter(**archived_filter)
memos.delete()
def set_status_for_user(self, status, user):
"""set specific status to the message for the user"""
memo, created = MessageMemo.objects.get_or_create(user=user, message=self)
memo.status = status
memo.save()
return created
def archive(self, user):
"""mark message as archived"""
return self.set_status_for_user(MessageMemo.ARCHIVED, user)
def mark_as_seen(self, user):
"""mark message as seen"""
is_first_time = self.set_status_for_user(MessageMemo.SEEN, user)
root = self.get_root_message()
if is_first_time or root.is_unread_by_user(user):
inbox_counter = get_unread_inbox_counter(user)
inbox_counter.decrement()
inbox_counter.save()
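# --- Editor's note: hedged sketch (not part of the original module) of the
# per-user memo lifecycle implemented by the methods above:
#
#   message.mark_as_seen(user)    # creates/updates the memo -> SEEN
#   message.archive(user)         # memo -> ARCHIVED, thread leaves inbox
#   message.move_to_inbox(user)   # deletes memos, thread is unarchived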
class UnreadInboxCounter(models.Model):
"""Stores number of unread messages
per recipient group.
It is relatively expensive to calculate this number,
therefore we store it in the database.
In order to know number of uread messages for a given
user, one has to get all groups user belongs to
and add up the corresponding counts of unread messages.
"""
user = models.ForeignKey(User)
count = models.PositiveIntegerField(default=0)
def decrement(self):
"""decrements count if > 1
does not save the object"""
if self.count > 0:
self.count -= 1
def increment(self):
self.count += 1
def reset(self):
self.count = 0
def recalculate(self):
"""recalculates count of unread messages
for the user and sets the updated value.
Does not call .save()"""
self.reset()
for thread in Message.objects.get_threads(recipient=self.user):
if thread.is_unread_by_user(self.user):
self.increment()
def increment_unread_inbox_counters(sender, message, **kwargs):
root_message = message.get_root_message()
for user in message.get_recipients_users():
if message == root_message \
or not root_message.is_unread_by_user(user, ignore_message=message) \
or root_message.is_archived_or_deleted(user):
# 1) if message is root - we have new thread,
# so it's safe to increment the inbox counter
# 2) if the message is a reply - the counter might
# have already been incremented. Therefore - we check
# whether the message was unread by the user,
# excluding the current message, which is obviously unread
# 3) if root message is deleted or archived then increment
counter = get_unread_inbox_counter(user)
counter.increment()
counter.save()
def send_email(sender, message, **kwargs):
message.send_email_alert()
thread_created.connect(
receiver=send_email,
dispatch_uid="thread_send_email"
)
thread_created.connect(
receiver=increment_unread_inbox_counters,
dispatch_uid="thread_increment_unread_inbox_counters"
)
response_created.connect(
receiver=send_email,
dispatch_uid="message_reply_send_email"
)
response_created.connect(
receiver=increment_unread_inbox_counters,
dispatch_uid="response_increment_unread_inbox_counters"
)
openpgh/askpgh | askbot/deps/group_messaging/models.py | Python | gpl-3.0 | 20,920 | ["VisIt"] | d32df48a3ff72fdc5b1ad30eb81111ef40cb9fdb2dd92c6355f7eeb3ec18c8c3
# coding=utf-8
# Copyright 2022 The Uncertainty Baselines Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Bidirectional encoder representations from transformers (BERT) with SNGP.
Spectral-normalized neural Gaussian process (SNGP) [1] is a simple method to
improve a deterministic neural network's uncertainty. It simply applies spectral
normalization to the hidden layers, and then replaces the dense output layer
with a Gaussian process layer.
## Note:
Different from the paper, this implementation computes the posterior using the
Laplace approximation based on the Gaussian likelihood (i.e., squared loss)
rather than that based on cross-entropy loss. As a result, the logits for all
classes share the same covariance. In the experiments, this approach is shown to
perform better and computationally more scalable when the number of output
classes are large.
## References:
[1]: Jeremiah Liu et al. Simple and Principled Uncertainty Estimation with
Deterministic Deep Learning via Distance Awareness.
_arXiv preprint arXiv:2006.10108_, 2020.
https://arxiv.org/abs/2006.10108
[2]: Zhiyun Lu, Eugene Ie, Fei Sha. Uncertainty Estimation with Infinitesimal
Jackknife. _arXiv preprint arXiv:2006.07584_, 2020.
https://arxiv.org/abs/2006.07584
[3]: Tsung-Yi Lin et al. Focal Loss for Dense Object Detection. In
_International Conference on Computer Vision_, 2018.
https://arxiv.org/abs/1708.02002.
[4]: Jishnu Mukhoti et al. Calibrating Deep Neural Networks using Focal Loss.
_arXiv preprint arXiv:2002.09437_, 2020.
https://arxiv.org/abs/2002.09437
"""
import os
import time
from absl import app
from absl import flags
from absl import logging
import edward2 as ed
import robustness_metrics as rm
import tensorflow as tf
from tensorflow_addons import losses as tfa_losses
from tensorflow_addons import metrics as tfa_metrics
import uncertainty_baselines as ub
import metrics as tc_metrics # local file import from baselines.toxic_comments
import utils # local file import from baselines.toxic_comments
from uncertainty_baselines.datasets import toxic_comments as ds
from tensorboard.plugins.hparams import api as hp
# Data flags
flags.DEFINE_string(
'in_dataset_dir', None,
'Path to in-domain dataset (WikipediaToxicityDataset).')
flags.DEFINE_string(
'ood_dataset_dir', None,
'Path to out-of-domain dataset (CivilCommentsDataset).')
flags.DEFINE_string(
'identity_dataset_dir', None,
'Path to out-of-domain dataset with identity labels '
'(CivilCommentsIdentitiesDataset).')
# Model flags
flags.DEFINE_string('model_family', 'bert',
'Types of model to use. Can be either TextCNN or BERT.')
# Model flags, BERT.
flags.DEFINE_string(
'bert_dir', None,
'Directory to BERT pre-trained checkpoints and config files.')
flags.DEFINE_string(
'bert_ckpt_dir', None, 'Directory to BERT pre-trained checkpoints. '
'If None then default to {bert_dir}/bert_model.ckpt.')
flags.DEFINE_string(
'bert_config_dir', None, 'Directory to BERT config files. '
'If None then default to {bert_dir}/bert_config.json.')
# Normalization flags.
flags.DEFINE_bool(
'use_layer_norm_att', True,
'Whether to apply layer normalization to the self-attention layers.')
flags.DEFINE_bool(
'use_layer_norm_ffn', True,
'Whether to apply layer normalization to the feedforward layers.')
flags.DEFINE_bool(
'use_spec_norm_att', False,
'Whether to apply spectral normalization to the self-attention layers.')
flags.DEFINE_bool(
'use_spec_norm_ffn', False,
'Whether to apply spectral normalization to the feedforward layers.')
flags.DEFINE_bool(
'use_spec_norm_plr', True,
'Whether to apply spectral normalization to the final CLS pooler layer.')
flags.DEFINE_integer(
'spec_norm_iteration', 1,
'Number of power iterations to perform for estimating '
'the spectral norm of weight matrices.')
flags.DEFINE_float('spec_norm_bound', .95,
'Upper bound to spectral norm of weight matrices.')
# Gaussian process flags.
flags.DEFINE_bool('use_gp_layer', True,
'Whether to use Gaussian process as the output layer.')
flags.DEFINE_float('gp_bias', 0., 'The bias term for GP layer.')
flags.DEFINE_float(
'gp_scale', 2.,
'The length-scale parameter for the RBF kernel of the GP layer.')
flags.DEFINE_integer(
'gp_hidden_dim', 1024,
'The hidden dimension of the GP layer, which corresponds to the number of '
'random features used for the approximation.')
flags.DEFINE_bool(
'gp_input_normalization', True,
'Whether to normalize the input using LayerNorm for GP layer. '
'This is similar to automatic relevance determination (ARD) in the classic '
'GP learning.')
flags.DEFINE_float('gp_cov_ridge_penalty', 1e-3,
'Ridge penalty parameter for GP posterior covariance.')
flags.DEFINE_float(
'gp_cov_discount_factor', 0.999,
'The discount factor to compute the moving average of precision matrix.')
flags.DEFINE_float(
'gp_mean_field_factor', 1e-1,
'The tunable multiplicative factor used in the mean-field approximation '
'for the posterior mean of softmax Gaussian process. If -1 then use '
'posterior mode instead of posterior mean. See [2] for detail.')
# Optimization and evaluation flags
flags.DEFINE_integer('seed', 42, 'Random seed.')
flags.DEFINE_integer('per_core_batch_size', 32, 'Batch size per TPU core/GPU.')
flags.DEFINE_float(
'base_learning_rate', 2.5e-5,
'Base learning rate when total batch size is 128. It is '
'scaled by the ratio of the total batch size to 128.')
flags.DEFINE_float('one_minus_momentum', 0.1, 'Optimizer momentum.')
flags.DEFINE_integer(
'checkpoint_interval', 5,
'Number of epochs between saving checkpoints. Use -1 to '
'never save checkpoints.')
flags.DEFINE_integer('evaluation_interval', 1,
'Number of epochs between evaluation.')
flags.DEFINE_integer('num_ece_bins', 15, 'Number of bins for ECE.')
flags.DEFINE_integer(
'num_approx_bins', 1000,
'Number of bins for approximating collaborative and abstention metrics.')
flags.DEFINE_list(
'fractions',
['0.0', '0.001', '0.005', '0.01', '0.02', '0.05', '0.1', '0.15', '0.2'],
'A list of fractions of total examples to send to '
'the moderators (up to 1).')
flags.DEFINE_string('output_dir', '/tmp/toxic_comments', 'Output directory.')
flags.DEFINE_integer('train_epochs', 5, 'Number of training epochs.')
flags.DEFINE_float(
'warmup_proportion', 0.1,
'Proportion of training to perform linear learning rate warmup for. '
'E.g., 0.1 = 10% of training.')
flags.DEFINE_float(
'ece_label_threshold', 0.7,
'Threshold used to convert toxicity score into binary labels for computing '
'Expected Calibration Error (ECE). Default is 0.7 which is the threshold '
'value recommended by Jigsaw Conversation AI team.')
flags.DEFINE_integer(
'num_mc_samples', 1,
'Number of Monte Carlo forward passes to collect for ensemble prediction. '
'Currently can only be 1 since the model is deterministic.')
# Loss type
flags.DEFINE_enum('loss_type', 'cross_entropy',
['cross_entropy', 'focal_cross_entropy', 'mse', 'mae'],
'Type of loss function to use.')
flags.DEFINE_float(
'focal_loss_alpha', 0.1,
'Multiplicative factor used in the focal loss [3]-[4] to '
'upweight low-confidence examples.')
flags.DEFINE_float(
'focal_loss_gamma', 1.,
'Exponent factor used in the focal loss [3]-[4] to '
'upweight low-confidence examples.')
# Accelerator flags.
flags.DEFINE_bool('use_gpu', False, 'Whether to run on GPU or otherwise TPU.')
flags.DEFINE_bool('use_bfloat16', False, 'Whether to use mixed precision.')
flags.DEFINE_integer('num_cores', 8, 'Number of TPU cores or number of GPUs.')
flags.DEFINE_string('tpu', None,
'Name of the TPU. Only used if use_gpu is False.')
FLAGS = flags.FLAGS
_MAX_SEQ_LENGTH = 512
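# --- Editor's note: a hedged sketch (not part of this baseline) of the
# mean-field adjustment applied in test_step below via
# ed.layers.utils.mean_field_logits: each example's logits are divided by
# sqrt(1 + lambda * sigma^2), where sigma^2 is the diagonal of the GP
# posterior covariance and lambda is FLAGS.gp_mean_field_factor.
def _example_mean_field_logits(logits, covmat, mean_field_factor):
  """Illustrative mean-field scaling of (batch, classes) logits."""
  variances = tf.linalg.diag_part(covmat)              # (batch,)
  scale = tf.sqrt(1. + mean_field_factor * variances)  # (batch,)
  return logits / scale[:, tf.newaxis]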
def main(argv):
del argv # unused arg
tf.io.gfile.makedirs(FLAGS.output_dir)
logging.info('Saving checkpoints at %s', FLAGS.output_dir)
tf.random.set_seed(FLAGS.seed)
if FLAGS.use_gpu:
logging.info('Use GPU')
strategy = tf.distribute.MirroredStrategy()
else:
logging.info('Use TPU at %s',
FLAGS.tpu if FLAGS.tpu is not None else 'local')
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=FLAGS.tpu)
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)
batch_size = FLAGS.per_core_batch_size * FLAGS.num_cores
test_batch_size = batch_size
data_buffer_size = batch_size * 10
train_dataset_builder = ds.WikipediaToxicityDataset(
split='train',
data_dir=FLAGS.in_dataset_dir,
shuffle_buffer_size=data_buffer_size)
ind_dataset_builder = ds.WikipediaToxicityDataset(
split='test',
data_dir=FLAGS.in_dataset_dir,
drop_remainder=True,
shuffle_buffer_size=data_buffer_size)
ood_dataset_builder = ds.CivilCommentsDataset(
split='test',
data_dir=FLAGS.ood_dataset_dir,
drop_remainder=True,
shuffle_buffer_size=data_buffer_size)
ood_identity_dataset_builder = ds.CivilCommentsIdentitiesDataset(
split='test',
data_dir=FLAGS.identity_dataset_dir,
drop_remainder=True,
shuffle_buffer_size=data_buffer_size)
train_dataset_builders = {
'wikipedia_toxicity_subtypes': train_dataset_builder
}
test_dataset_builders = {
'ind': ind_dataset_builder,
'ood': ood_dataset_builder,
'ood_identity': ood_identity_dataset_builder,
}
if FLAGS.prediction_mode and FLAGS.identity_prediction:
for dataset_name in utils.IDENTITY_LABELS:
if utils.NUM_EXAMPLES[dataset_name]['test'] > 100:
test_dataset_builders[dataset_name] = ds.CivilCommentsIdentitiesDataset(
split='test',
data_dir=os.path.join(
FLAGS.identity_specific_dataset_dir, dataset_name),
drop_remainder=True,
shuffle_buffer_size=data_buffer_size)
for dataset_name in utils.IDENTITY_TYPES:
if utils.NUM_EXAMPLES[dataset_name]['test'] > 100:
test_dataset_builders[dataset_name] = ds.CivilCommentsIdentitiesDataset(
split='test',
data_dir=os.path.join(
FLAGS.identity_type_dataset_dir, dataset_name),
drop_remainder=True,
shuffle_buffer_size=data_buffer_size)
class_weight = utils.create_class_weight(
train_dataset_builders, test_dataset_builders)
logging.info('class_weight: %s', str(class_weight))
ds_info = train_dataset_builder.tfds_info
# Positive and negative classes.
num_classes = ds_info.metadata['num_classes']
train_datasets = {}
dataset_steps_per_epoch = {}
total_steps_per_epoch = 0
# TODO(jereliu): Apply strategy.experimental_distribute_dataset to the
# dataset_builders.
for dataset_name, dataset_builder in train_dataset_builders.items():
train_datasets[dataset_name] = dataset_builder.load(
batch_size=FLAGS.per_core_batch_size)
dataset_steps_per_epoch[dataset_name] = (
dataset_builder.num_examples // batch_size)
total_steps_per_epoch += dataset_steps_per_epoch[dataset_name]
test_datasets = {}
steps_per_eval = {}
for dataset_name, dataset_builder in test_dataset_builders.items():
test_datasets[dataset_name] = dataset_builder.load(
batch_size=test_batch_size)
if dataset_name in ['ind', 'ood', 'ood_identity']:
steps_per_eval[dataset_name] = (
dataset_builder.num_examples // test_batch_size)
else:
steps_per_eval[dataset_name] = (
utils.NUM_EXAMPLES[dataset_name]['test'] // test_batch_size)
if FLAGS.use_bfloat16:
tf.keras.mixed_precision.set_global_policy('mixed_bfloat16')
summary_writer = tf.summary.create_file_writer(
os.path.join(FLAGS.output_dir, 'summaries'))
with strategy.scope():
logging.info('Building BERT %s model', FLAGS.bert_model_type)
logging.info('use_gp_layer=%s', FLAGS.use_gp_layer)
logging.info('use_spec_norm_att=%s', FLAGS.use_spec_norm_att)
logging.info('use_spec_norm_ffn=%s', FLAGS.use_spec_norm_ffn)
logging.info('use_layer_norm_att=%s', FLAGS.use_layer_norm_att)
logging.info('use_layer_norm_ffn=%s', FLAGS.use_layer_norm_ffn)
bert_config_dir, bert_ckpt_dir = utils.resolve_bert_ckpt_and_config_dir(
FLAGS.bert_model_type, FLAGS.bert_dir, FLAGS.bert_config_dir,
FLAGS.bert_ckpt_dir)
bert_config = utils.create_config(bert_config_dir)
gp_layer_kwargs = dict(
num_inducing=FLAGS.gp_hidden_dim,
gp_kernel_scale=FLAGS.gp_scale,
gp_output_bias=FLAGS.gp_bias,
normalize_input=FLAGS.gp_input_normalization,
gp_cov_momentum=FLAGS.gp_cov_discount_factor,
gp_cov_ridge_penalty=FLAGS.gp_cov_ridge_penalty)
spec_norm_kwargs = dict(
iteration=FLAGS.spec_norm_iteration,
norm_multiplier=FLAGS.spec_norm_bound)
model, bert_encoder = ub.models.bert_sngp_model(
num_classes=num_classes,
bert_config=bert_config,
gp_layer_kwargs=gp_layer_kwargs,
spec_norm_kwargs=spec_norm_kwargs,
use_gp_layer=FLAGS.use_gp_layer,
use_spec_norm_att=FLAGS.use_spec_norm_att,
use_spec_norm_ffn=FLAGS.use_spec_norm_ffn,
use_layer_norm_att=FLAGS.use_layer_norm_att,
use_layer_norm_ffn=FLAGS.use_layer_norm_ffn,
use_spec_norm_plr=FLAGS.use_spec_norm_plr)
# Create an AdamW optimizer with beta_2=0.999, epsilon=1e-6.
optimizer = utils.create_optimizer(
FLAGS.base_learning_rate,
steps_per_epoch=total_steps_per_epoch,
epochs=FLAGS.train_epochs,
warmup_proportion=FLAGS.warmup_proportion,
beta_1=1.0 - FLAGS.one_minus_momentum)
logging.info('Model input shape: %s', model.input_shape)
logging.info('Model output shape: %s', model.output_shape)
logging.info('Model number of weights: %s', model.count_params())
metrics = {
'train/negative_log_likelihood':
tf.keras.metrics.Mean(),
'train/accuracy':
tf.keras.metrics.Accuracy(),
'train/accuracy_weighted':
tf.keras.metrics.Accuracy(),
'train/auroc':
tf.keras.metrics.AUC(),
'train/loss':
tf.keras.metrics.Mean(),
'train/ece':
rm.metrics.ExpectedCalibrationError(num_bins=FLAGS.num_ece_bins),
'train/precision':
tf.keras.metrics.Precision(),
'train/recall':
tf.keras.metrics.Recall(),
'train/f1':
tfa_metrics.F1Score(
num_classes=num_classes,
average='micro',
threshold=FLAGS.ece_label_threshold),
}
checkpoint = tf.train.Checkpoint(model=model, optimizer=optimizer)
if FLAGS.prediction_mode:
latest_checkpoint = tf.train.latest_checkpoint(FLAGS.eval_checkpoint_dir)
else:
latest_checkpoint = tf.train.latest_checkpoint(FLAGS.output_dir)
initial_epoch = 0
if latest_checkpoint:
# checkpoint.restore must be within a strategy.scope() so that optimizer
# slot variables are mirrored.
checkpoint.restore(latest_checkpoint)
logging.info('Loaded checkpoint %s', latest_checkpoint)
initial_epoch = optimizer.iterations.numpy() // total_steps_per_epoch
else:
# load BERT from initial checkpoint
bert_encoder, _, _ = utils.load_bert_weight_from_ckpt(
bert_model=bert_encoder,
bert_ckpt_dir=bert_ckpt_dir,
repl_patterns=ub.models.bert_sngp.CHECKPOINT_REPL_PATTERNS)
logging.info('Loaded BERT checkpoint %s', bert_ckpt_dir)
metrics.update({
'test/negative_log_likelihood':
tf.keras.metrics.Mean(),
'test/auroc':
tf.keras.metrics.AUC(curve='ROC'),
'test/aupr':
tf.keras.metrics.AUC(curve='PR'),
'test/brier':
tf.keras.metrics.MeanSquaredError(),
'test/brier_weighted':
tf.keras.metrics.MeanSquaredError(),
'test/ece':
rm.metrics.ExpectedCalibrationError(num_bins=FLAGS.num_ece_bins),
'test/acc':
tf.keras.metrics.Accuracy(),
'test/acc_weighted':
tf.keras.metrics.Accuracy(),
'test/eval_time':
tf.keras.metrics.Mean(),
'test/stddev':
tf.keras.metrics.Mean(),
'test/precision':
tf.keras.metrics.Precision(),
'test/recall':
tf.keras.metrics.Recall(),
'test/f1':
tfa_metrics.F1Score(
num_classes=num_classes,
average='micro',
threshold=FLAGS.ece_label_threshold)
})
for policy in ('uncertainty', 'toxicity'):
metrics.update({
'test_{}/calibration_auroc'.format(policy):
tc_metrics.CalibrationAUC(curve='ROC'),
'test_{}/calibration_auprc'.format(policy):
tc_metrics.CalibrationAUC(curve='PR')
})
for fraction in FLAGS.fractions:
metrics.update({
'test_{}/collab_acc_{}'.format(policy, fraction):
rm.metrics.OracleCollaborativeAccuracy(
fraction=float(fraction), num_bins=FLAGS.num_approx_bins),
'test_{}/abstain_prec_{}'.format(policy, fraction):
tc_metrics.AbstainPrecision(
abstain_fraction=float(fraction),
num_approx_bins=FLAGS.num_approx_bins),
'test_{}/abstain_recall_{}'.format(policy, fraction):
tc_metrics.AbstainRecall(
abstain_fraction=float(fraction),
num_approx_bins=FLAGS.num_approx_bins),
'test_{}/collab_auroc_{}'.format(policy, fraction):
tc_metrics.OracleCollaborativeAUC(
oracle_fraction=float(fraction),
num_bins=FLAGS.num_approx_bins),
'test_{}/collab_auprc_{}'.format(policy, fraction):
tc_metrics.OracleCollaborativeAUC(
oracle_fraction=float(fraction),
curve='PR',
num_bins=FLAGS.num_approx_bins),
})
for dataset_name, test_dataset in test_datasets.items():
if dataset_name != 'ind':
metrics.update({
'test/nll_{}'.format(dataset_name):
tf.keras.metrics.Mean(),
'test/auroc_{}'.format(dataset_name):
tf.keras.metrics.AUC(curve='ROC'),
'test/aupr_{}'.format(dataset_name):
tf.keras.metrics.AUC(curve='PR'),
'test/brier_{}'.format(dataset_name):
tf.keras.metrics.MeanSquaredError(),
'test/brier_weighted_{}'.format(dataset_name):
tf.keras.metrics.MeanSquaredError(),
'test/ece_{}'.format(dataset_name):
rm.metrics.ExpectedCalibrationError(num_bins=FLAGS.num_ece_bins
),
'test/acc_{}'.format(dataset_name):
tf.keras.metrics.Accuracy(),
'test/acc_weighted_{}'.format(dataset_name):
tf.keras.metrics.Accuracy(),
'test/eval_time_{}'.format(dataset_name):
tf.keras.metrics.Mean(),
'test/stddev_{}'.format(dataset_name):
tf.keras.metrics.Mean(),
'test/precision_{}'.format(dataset_name):
tf.keras.metrics.Precision(),
'test/recall_{}'.format(dataset_name):
tf.keras.metrics.Recall(),
'test/f1_{}'.format(dataset_name):
tfa_metrics.F1Score(
num_classes=num_classes,
average='micro',
threshold=FLAGS.ece_label_threshold)
})
for policy in ('uncertainty', 'toxicity'):
metrics.update({
'test_{}/calibration_auroc_{}'.format(policy, dataset_name):
tc_metrics.CalibrationAUC(curve='ROC'),
'test_{}/calibration_auprc_{}'.format(policy, dataset_name):
tc_metrics.CalibrationAUC(curve='PR'),
})
for fraction in FLAGS.fractions:
metrics.update({
'test_{}/collab_acc_{}_{}'.format(policy, fraction,
dataset_name):
rm.metrics.OracleCollaborativeAccuracy(
fraction=float(fraction),
num_bins=FLAGS.num_approx_bins),
'test_{}/abstain_prec_{}_{}'.format(policy, fraction,
dataset_name):
tc_metrics.AbstainPrecision(
abstain_fraction=float(fraction),
num_approx_bins=FLAGS.num_approx_bins),
'test_{}/abstain_recall_{}_{}'.format(policy, fraction,
dataset_name):
tc_metrics.AbstainRecall(
abstain_fraction=float(fraction),
num_approx_bins=FLAGS.num_approx_bins),
'test_{}/collab_auroc_{}_{}'.format(policy, fraction,
dataset_name):
tc_metrics.OracleCollaborativeAUC(
oracle_fraction=float(fraction),
num_bins=FLAGS.num_approx_bins),
'test_{}/collab_auprc_{}_{}'.format(policy, fraction,
dataset_name):
tc_metrics.OracleCollaborativeAUC(
oracle_fraction=float(fraction),
curve='PR',
num_bins=FLAGS.num_approx_bins),
})
@tf.function
def generate_sample_weight(labels, class_weight, label_threshold=0.7):
"""Generate sample weight for weighted accuracy calculation."""
if label_threshold != 0.7:
logging.warning('The class weight was based on `label_threshold` = 0.7, '
'and weighted accuracy/brier will be meaningless if '
'`label_threshold` is not equal to this value, which is '
'recommended by Jigsaw Conversation AI team.')
labels_int = tf.cast(labels > label_threshold, tf.int32)
sample_weight = tf.gather(class_weight, labels_int)
return sample_weight
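  # --- Editor's note: hedged example. For class_weight = [w_neg, w_pos],
  # generate_sample_weight maps each continuous toxicity label to w_pos when
  # label > threshold and to w_neg otherwise, e.g.
  #   generate_sample_weight(tf.constant([0.1, 0.9]),
  #                          tf.constant([1.0, 3.0]))  # -> [1.0, 3.0]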
@tf.function
def train_step(iterator, dataset_name, num_steps):
"""Training StepFn."""
def step_fn(inputs):
"""Per-Replica StepFn."""
features, labels, _ = utils.create_feature_and_label(inputs)
with tf.GradientTape() as tape:
logits = model(features, training=True)
if isinstance(logits, (list, tuple)):
# If model returns a tuple of (logits, covmat), extract logits
logits, _ = logits
if FLAGS.use_bfloat16:
logits = tf.cast(logits, tf.float32)
loss_logits = tf.squeeze(logits, axis=1)
if FLAGS.loss_type == 'cross_entropy':
logging.info('Using cross entropy loss')
negative_log_likelihood = tf.nn.sigmoid_cross_entropy_with_logits(
labels, loss_logits)
elif FLAGS.loss_type == 'focal_cross_entropy':
logging.info('Using focal cross entropy loss')
negative_log_likelihood = tfa_losses.sigmoid_focal_crossentropy(
labels,
loss_logits,
alpha=FLAGS.focal_loss_alpha,
gamma=FLAGS.focal_loss_gamma,
from_logits=True)
elif FLAGS.loss_type == 'mse':
logging.info('Using mean squared error loss')
loss_probs = tf.nn.sigmoid(loss_logits)
negative_log_likelihood = tf.keras.losses.mean_squared_error(
labels, loss_probs)
elif FLAGS.loss_type == 'mae':
logging.info('Using mean absolute error loss')
loss_probs = tf.nn.sigmoid(loss_logits)
negative_log_likelihood = tf.keras.losses.mean_absolute_error(
labels, loss_probs)
negative_log_likelihood = tf.reduce_mean(negative_log_likelihood)
l2_loss = sum(model.losses)
loss = negative_log_likelihood + l2_loss
# Scale the loss given the TPUStrategy will reduce sum all gradients.
scaled_loss = loss / strategy.num_replicas_in_sync
grads = tape.gradient(scaled_loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
probs = tf.nn.sigmoid(logits)
# Cast labels to discrete for ECE computation.
ece_labels = tf.cast(labels > FLAGS.ece_label_threshold, tf.float32)
one_hot_labels = tf.one_hot(tf.cast(ece_labels, tf.int32),
depth=num_classes)
ece_probs = tf.concat([1. - probs, probs], axis=1)
auc_probs = tf.squeeze(probs, axis=1)
pred_labels = tf.math.argmax(ece_probs, axis=-1)
sample_weight = generate_sample_weight(
labels, class_weight['train/{}'.format(dataset_name)],
FLAGS.ece_label_threshold)
metrics['train/negative_log_likelihood'].update_state(
negative_log_likelihood)
metrics['train/accuracy'].update_state(labels, pred_labels)
metrics['train/accuracy_weighted'].update_state(
ece_labels, pred_labels, sample_weight=sample_weight)
metrics['train/auroc'].update_state(labels, auc_probs)
metrics['train/loss'].update_state(loss)
metrics['train/ece'].add_batch(ece_probs, label=ece_labels)
metrics['train/precision'].update_state(ece_labels, pred_labels)
metrics['train/recall'].update_state(ece_labels, pred_labels)
metrics['train/f1'].update_state(one_hot_labels, ece_probs)
for _ in tf.range(tf.cast(num_steps, tf.int32)):
strategy.run(step_fn, args=(next(iterator),))
@tf.function
def test_step(iterator, dataset_name):
"""Evaluation StepFn."""
def step_fn(inputs):
"""Per-Replica StepFn."""
features, labels, _ = utils.create_feature_and_label(inputs)
eval_start_time = time.time()
# Compute ensemble prediction over Monte Carlo forward-pass samples.
logits_list = []
stddev_list = []
for _ in range(FLAGS.num_mc_samples):
logits = model(features, training=False)
if isinstance(logits, (list, tuple)):
# If model returns a tuple of (logits, covmat), extract both.
logits, covmat = logits
else:
covmat = tf.eye(test_batch_size)
if FLAGS.use_bfloat16:
logits = tf.cast(logits, tf.float32)
covmat = tf.cast(covmat, tf.float32)
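        # Mean-field approximation to the posterior predictive: scales the
        # logits by roughly 1/sqrt(1 + mean_field_factor * var), where var is
        # the GP posterior variance (the diagonal of covmat).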
logits = ed.layers.utils.mean_field_logits(
logits, covmat, mean_field_factor=FLAGS.gp_mean_field_factor)
stddev = tf.sqrt(tf.linalg.diag_part(covmat))
logits_list.append(logits)
stddev_list.append(stddev)
eval_time = (time.time() - eval_start_time) / FLAGS.per_core_batch_size
# Logits dimension is (num_samples, batch_size, num_classes).
logits_list = tf.stack(logits_list, axis=0)
stddev_list = tf.stack(stddev_list, axis=0)
stddev = tf.reduce_mean(stddev_list, axis=0)
probs_list = tf.nn.sigmoid(logits_list)
probs = tf.reduce_mean(probs_list, axis=0)
# Cast labels to discrete for ECE computation.
ece_labels = tf.cast(labels > FLAGS.ece_label_threshold, tf.float32)
one_hot_labels = tf.one_hot(tf.cast(ece_labels, tf.int32),
depth=num_classes)
ece_probs = tf.concat([1. - probs, probs], axis=1)
pred_labels = tf.math.argmax(ece_probs, axis=-1)
auc_probs = tf.squeeze(probs, axis=1)
# Use normalized binary predictive variance as the confidence score.
# Since the prediction variance p*(1-p) is within range (0, 0.25),
# normalize it by maximum value so the confidence is between (0, 1).
calib_confidence = 1. - probs * (1. - probs) / .25
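      # e.g. p = 0.5 gives confidence 0 (maximally uncertain), while p close
      # to 0 or 1 gives confidence close to 1.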
ce = tf.nn.sigmoid_cross_entropy_with_logits(
labels=tf.broadcast_to(
labels, [FLAGS.num_mc_samples, labels.shape[0]]),
logits=tf.squeeze(logits_list, axis=-1)
)
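      # Ensemble NLL: average the per-sample likelihoods (not the
      # log-likelihoods), i.e. NLL = -log(mean_s(exp(-ce_s)))
      #                            = -logsumexp(-ce) + log(num_mc_samples).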
negative_log_likelihood = -tf.reduce_logsumexp(
-ce, axis=0) + tf.math.log(float(FLAGS.num_mc_samples))
negative_log_likelihood = tf.reduce_mean(negative_log_likelihood)
sample_weight = generate_sample_weight(
labels, class_weight['test/{}'.format(dataset_name)],
FLAGS.ece_label_threshold)
if dataset_name == 'ind':
metrics['test/negative_log_likelihood'].update_state(
negative_log_likelihood)
metrics['test/auroc'].update_state(labels, auc_probs)
metrics['test/aupr'].update_state(labels, auc_probs)
metrics['test/brier'].update_state(labels, auc_probs)
metrics['test/brier_weighted'].update_state(
tf.expand_dims(labels, -1), probs, sample_weight=sample_weight)
metrics['test/ece'].add_batch(ece_probs, label=ece_labels)
metrics['test/acc'].update_state(ece_labels, pred_labels)
metrics['test/acc_weighted'].update_state(
ece_labels, pred_labels, sample_weight=sample_weight)
metrics['test/eval_time'].update_state(eval_time)
metrics['test/stddev'].update_state(stddev)
metrics['test/precision'].update_state(ece_labels, pred_labels)
metrics['test/recall'].update_state(ece_labels, pred_labels)
metrics['test/f1'].update_state(one_hot_labels, ece_probs)
for policy in ('uncertainty', 'toxicity'):
# calib_confidence or decreasing toxicity score.
confidence = 1. - probs if policy == 'toxicity' else calib_confidence
binning_confidence = tf.squeeze(confidence)
metrics['test_{}/calibration_auroc'.format(policy)].update_state(
ece_labels, pred_labels, confidence)
metrics['test_{}/calibration_auprc'.format(policy)].update_state(
ece_labels, pred_labels, confidence)
for fraction in FLAGS.fractions:
metrics['test_{}/collab_acc_{}'.format(policy, fraction)].add_batch(
ece_probs,
label=ece_labels,
custom_binning_score=binning_confidence)
metrics['test_{}/abstain_prec_{}'.format(
policy, fraction)].update_state(ece_labels, pred_labels,
confidence)
metrics['test_{}/abstain_recall_{}'.format(
policy, fraction)].update_state(ece_labels, pred_labels,
confidence)
metrics['test_{}/collab_auroc_{}'.format(
policy, fraction)].update_state(
labels, auc_probs, custom_binning_score=binning_confidence)
metrics['test_{}/collab_auprc_{}'.format(
policy, fraction)].update_state(
labels, auc_probs, custom_binning_score=binning_confidence)
else:
metrics['test/nll_{}'.format(dataset_name)].update_state(
negative_log_likelihood)
metrics['test/auroc_{}'.format(dataset_name)].update_state(
labels, auc_probs)
metrics['test/aupr_{}'.format(dataset_name)].update_state(
labels, auc_probs)
metrics['test/brier_{}'.format(dataset_name)].update_state(
labels, auc_probs)
metrics['test/brier_weighted_{}'.format(dataset_name)].update_state(
tf.expand_dims(labels, -1), probs, sample_weight=sample_weight)
metrics['test/ece_{}'.format(dataset_name)].add_batch(
ece_probs, label=ece_labels)
metrics['test/acc_{}'.format(dataset_name)].update_state(
ece_labels, pred_labels)
metrics['test/acc_weighted_{}'.format(dataset_name)].update_state(
ece_labels, pred_labels, sample_weight=sample_weight)
metrics['test/eval_time_{}'.format(dataset_name)].update_state(
eval_time)
metrics['test/stddev_{}'.format(dataset_name)].update_state(stddev)
metrics['test/precision_{}'.format(dataset_name)].update_state(
ece_labels, pred_labels)
metrics['test/recall_{}'.format(dataset_name)].update_state(
ece_labels, pred_labels)
metrics['test/f1_{}'.format(dataset_name)].update_state(
one_hot_labels, ece_probs)
for policy in ('uncertainty', 'toxicity'):
# calib_confidence or decreasing toxicity score.
confidence = 1. - probs if policy == 'toxicity' else calib_confidence
binning_confidence = tf.squeeze(confidence)
metrics['test_{}/calibration_auroc_{}'.format(
policy, dataset_name)].update_state(ece_labels, pred_labels,
confidence)
metrics['test_{}/calibration_auprc_{}'.format(
policy, dataset_name)].update_state(ece_labels, pred_labels,
confidence)
for fraction in FLAGS.fractions:
metrics['test_{}/collab_acc_{}_{}'.format(
policy, fraction, dataset_name)].add_batch(
ece_probs,
label=ece_labels,
custom_binning_score=binning_confidence)
metrics['test_{}/abstain_prec_{}_{}'.format(
policy, fraction,
dataset_name)].update_state(ece_labels, pred_labels, confidence)
metrics['test_{}/abstain_recall_{}_{}'.format(
policy, fraction,
dataset_name)].update_state(ece_labels, pred_labels, confidence)
metrics['test_{}/collab_auroc_{}_{}'.format(
policy, fraction, dataset_name)].update_state(
labels, auc_probs, custom_binning_score=binning_confidence)
metrics['test_{}/collab_auprc_{}_{}'.format(
policy, fraction, dataset_name)].update_state(
labels, auc_probs, custom_binning_score=binning_confidence)
strategy.run(step_fn, args=(next(iterator),))
@tf.function
def final_eval_step(iterator):
"""Final Evaluation StepFn to save prediction to directory."""
def step_fn(inputs):
bert_features, labels, additional_labels = utils.create_feature_and_label(
inputs)
logits = model(bert_features, training=False)
if isinstance(logits, (list, tuple)):
# If model returns a tuple of (logits, covmat), extract both.
logits, covmat = logits
else:
covmat = tf.eye(test_batch_size)
if FLAGS.use_bfloat16:
logits = tf.cast(logits, tf.float32)
covmat = tf.cast(covmat, tf.float32)
logits = ed.layers.utils.mean_field_logits(
logits, covmat, mean_field_factor=FLAGS.gp_mean_field_factor)
features = inputs['input_ids']
return features, logits, labels, additional_labels
(per_replica_texts, per_replica_logits, per_replica_labels,
per_replica_additional_labels) = (
strategy.run(step_fn, args=(next(iterator),)))
if strategy.num_replicas_in_sync > 1:
texts_list = tf.concat(per_replica_texts.values, axis=0)
logits_list = tf.concat(per_replica_logits.values, axis=0)
labels_list = tf.concat(per_replica_labels.values, axis=0)
additional_labels_dict = {}
for additional_label in utils.IDENTITY_LABELS:
if additional_label in per_replica_additional_labels:
additional_labels_dict[additional_label] = tf.concat(
per_replica_additional_labels[additional_label], axis=0)
else:
texts_list = per_replica_texts
logits_list = per_replica_logits
labels_list = per_replica_labels
additional_labels_dict = {}
for additional_label in utils.IDENTITY_LABELS:
if additional_label in per_replica_additional_labels:
additional_labels_dict[
additional_label] = per_replica_additional_labels[
additional_label]
return texts_list, logits_list, labels_list, additional_labels_dict
if FLAGS.prediction_mode:
# Prediction and exit.
for dataset_name, test_dataset in test_datasets.items():
test_iterator = iter(test_dataset) # pytype: disable=wrong-arg-types
message = 'Final eval on dataset {}'.format(dataset_name)
logging.info(message)
texts_all = []
logits_all = []
labels_all = []
additional_labels_all_dict = {}
if 'identity' in dataset_name:
for identity_label_name in utils.IDENTITY_LABELS:
additional_labels_all_dict[identity_label_name] = []
try:
with tf.experimental.async_scope():
for step in range(steps_per_eval[dataset_name]):
if step % 20 == 0:
message = 'Starting to run eval step {}/{} of dataset: {}'.format(
step, steps_per_eval[dataset_name], dataset_name)
logging.info(message)
(text_step, logits_step, labels_step,
additional_labels_dict_step) = final_eval_step(test_iterator)
texts_all.append(text_step)
logits_all.append(logits_step)
labels_all.append(labels_step)
if 'identity' in dataset_name:
for identity_label_name in utils.IDENTITY_LABELS:
additional_labels_all_dict[identity_label_name].append(
additional_labels_dict_step[identity_label_name])
except (StopIteration, tf.errors.OutOfRangeError):
tf.experimental.async_clear_error()
logging.info('Done with eval on %s', dataset_name)
texts_all = tf.concat(texts_all, axis=0)
logits_all = tf.concat(logits_all, axis=0)
labels_all = tf.concat(labels_all, axis=0)
additional_labels_all = []
if additional_labels_all_dict:
for identity_label_name in utils.IDENTITY_LABELS:
additional_labels_all.append(
tf.concat(
additional_labels_all_dict[identity_label_name], axis=0))
additional_labels_all = tf.convert_to_tensor(additional_labels_all)
utils.save_prediction(
texts_all.numpy(),
path=os.path.join(FLAGS.output_dir, 'texts_{}'.format(dataset_name)))
utils.save_prediction(
labels_all.numpy(),
path=os.path.join(FLAGS.output_dir, 'labels_{}'.format(dataset_name)))
utils.save_prediction(
logits_all.numpy(),
path=os.path.join(FLAGS.output_dir, 'logits_{}'.format(dataset_name)))
if 'identity' in dataset_name:
utils.save_prediction(
additional_labels_all.numpy(),
path=os.path.join(FLAGS.output_dir,
'additional_labels_{}'.format(dataset_name)))
logging.info('Done with testing on %s', dataset_name)
else:
# Execute train / eval loop.
start_time = time.time()
train_iterators = {}
for dataset_name, train_dataset in train_datasets.items():
train_iterators[dataset_name] = iter(train_dataset)
for epoch in range(initial_epoch, FLAGS.train_epochs):
logging.info('Starting to run epoch: %s', epoch)
for dataset_name, train_iterator in train_iterators.items():
try:
with tf.experimental.async_scope():
train_step(
train_iterator,
dataset_name,
dataset_steps_per_epoch[dataset_name])
current_step = (
epoch * total_steps_per_epoch +
dataset_steps_per_epoch[dataset_name])
max_steps = total_steps_per_epoch * FLAGS.train_epochs
time_elapsed = time.time() - start_time
steps_per_sec = float(current_step) / time_elapsed
eta_seconds = (max_steps - current_step) / steps_per_sec
message = ('{:.1%} completion: epoch {:d}/{:d}. {:.1f} steps/s. '
'ETA: {:.0f} min. Time elapsed: {:.0f} min'.format(
current_step / max_steps, epoch + 1,
FLAGS.train_epochs, steps_per_sec,
eta_seconds / 60, time_elapsed / 60))
logging.info(message)
except (StopIteration, tf.errors.OutOfRangeError):
tf.experimental.async_clear_error()
        logging.info('Done with training on %s', dataset_name)
if epoch % FLAGS.evaluation_interval == 0:
for dataset_name, test_dataset in test_datasets.items():
test_iterator = iter(test_dataset)
logging.info('Testing on dataset %s', dataset_name)
try:
with tf.experimental.async_scope():
for step in range(steps_per_eval[dataset_name]):
if step % 20 == 0:
logging.info('Starting to run eval step %s/%s of epoch: %s',
step, steps_per_eval[dataset_name], epoch)
test_step(test_iterator, dataset_name)
except (StopIteration, tf.errors.OutOfRangeError):
tf.experimental.async_clear_error()
logging.info('Done with testing on %s', dataset_name)
logging.info('Train Loss: %.4f, ECE: %.2f, Accuracy: %.2f',
metrics['train/loss'].result(),
metrics['train/ece'].result()['ece'],
metrics['train/accuracy'].result())
total_results = {
name: metric.result() for name, metric in metrics.items()
}
# Metrics from Robustness Metrics (like ECE) will return a dict with a
# single key/value, instead of a scalar.
total_results = {
k: (list(v.values())[0] if isinstance(v, dict) else v)
for k, v in total_results.items()
}
with summary_writer.as_default():
for name, result in total_results.items():
tf.summary.scalar(name, result, step=epoch + 1)
for metric in metrics.values():
metric.reset_states()
checkpoint_interval = min(FLAGS.checkpoint_interval, FLAGS.train_epochs)
if checkpoint_interval > 0 and (epoch + 1) % checkpoint_interval == 0:
checkpoint_name = checkpoint.save(
os.path.join(FLAGS.output_dir, 'checkpoint'))
logging.info('Saved checkpoint to %s', checkpoint_name)
# Save model in SavedModel format on exit.
final_save_name = os.path.join(FLAGS.output_dir, 'model')
model.save(final_save_name)
logging.info('Saved model to %s', final_save_name)
with summary_writer.as_default():
hp.hparams({
'base_learning_rate': FLAGS.base_learning_rate,
'one_minus_momentum': FLAGS.one_minus_momentum,
'gp_mean_field_factor': FLAGS.gp_mean_field_factor,
})
if __name__ == '__main__':
app.run(main)
|
google/uncertainty-baselines
|
baselines/toxic_comments/sngp.py
|
Python
|
apache-2.0
| 43,600
|
[
"Gaussian"
] |
568f362b79e39307c1e5c64c4b246bb87a3636e7e0c93af2f07a480b7e525798
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# nicechart.py
#
# Copyright 2011-2016
#
# Christoph Sterz
# Florian Weber
# Maren Hachmann
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
# MA 02110-1301, USA.
#
# TODO / Ideas:
# allow negative values for bar charts
# show values for stacked bar charts
# don't create a new layer for each chart, but a normal group
# adjust position of heading
# use aliasing workaround for stacked bars (e.g. let the rectangles overlap)
# Example CSV file contents:
'''
Month;1978;1979;1980;1981
January;2;1,3;0.1;2.3
February;6.5;2.4;1.2;6.1
March;7.4;6.7;7.9;4.7
April;7.7;6.4;8.2;8.9
May;10.9;11.7;18.7;11.1
June;12.6;14.2;14.7;14.7
July;16.5;15.5;17.5;15.1
August;15.9;15.4;14.6;16.6
September;14;14.5;13.2;15.3
October;11.9;13.9;11.5;9.2
November;6.7;8.5;7;6.6
December;6.4;2.2;6.3;3.5
'''
# The extension creates one chart for a single value column in one go,
# e.g. chart all temperatures for all months of the year 1978 into one chart.
# (for this, select column 0 for labels and column 1 for values).
# "1978" etc. can be used as heading (Need not be numeric. If not used delete the heading line.)
# Month names can be used as labels
# Values can be shown, in addition to labels (doesn't work with stacked bar charts)
# Values can contain commas as decimal separator, as long as delimiter isn't comma
# Negative values are not yet supported.
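# Direct input example (illustrative): when pasting values instead of using a
# CSV file, each entry is a "key:value" pair, e.g.
#     January:2,February:6.5,March:7.4
# which the regex in effect() splits into keys and float values.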
import re
import sys
import math
import inkex
from simplestyle import *
#www.sapdesignguild.org/goodies/diagram_guidelines/color_palettes.html#mss
COLOUR_TABLE = {
"red": ["#460101", "#980101", "#d40000", "#f44800", "#fb8b00", "#eec73e", "#d9bb7a", "#fdd99b"],
"blue": ["#000442", "#0F1781", "#252FB7", "#3A45E1", "#656DDE", "#8A91EC"],
"gray": ["#222222", "#444444", "#666666", "#888888", "#aaaaaa", "#cccccc", "#eeeeee"],
"contrast": ["#0000FF", "#FF0000", "#00FF00", "#CF9100", "#FF00FF", "#00FFFF"],
"sap": ["#f8d753", "#5c9746", "#3e75a7", "#7a653e", "#e1662a", "#74796f", "#c4384f",
"#fff8a3", "#a9cc8f", "#b2c8d9", "#bea37a", "#f3aa79", "#b5b5a9", "#e6a5a5"]
}
def get_color_scheme(name="default"):
return COLOUR_TABLE.get(name.lower(), COLOUR_TABLE['red'])
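# Example (illustrative): get_color_scheme('blue')[0] returns '#000442', while
# an unknown name such as get_color_scheme('no-such-scheme') falls back to the
# 'red' palette.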
class NiceChart(inkex.Effect):
"""
Inkscape extension that can draw pie charts and bar charts
(stacked, single, horizontally or vertically)
with optional drop shadow, from a csv file or from pasted text
"""
def __init__(self):
"""
Constructor.
Defines the "--what" option of a script.
"""
# Call the base class constructor.
inkex.Effect.__init__(self)
# Define string option "--what" with "-w" shortcut and default chart values.
self.OptionParser.add_option('-w', '--what', action='store',
type='string', dest='what', default='22,11,67',
help='Chart Values')
# Define string option "--type" with "-t" shortcut.
self.OptionParser.add_option("-t", "--type", action="store",
type="string", dest="type", default='',
help="Chart Type")
# Define bool option "--blur" with "-b" shortcut.
self.OptionParser.add_option("-b", "--blur", action="store",
type="inkbool", dest="blur", default='True',
help="Blur Type")
# Define string option "--file" with "-f" shortcut.
self.OptionParser.add_option("-f", "--filename", action="store",
type="string", dest="filename", default='',
help="Name of File")
# Define string option "--input_type" with "-i" shortcut.
self.OptionParser.add_option("-i", "--input_type", action="store",
type="string", dest="input_type", default='file',
help="Chart Type")
# Define string option "--delimiter" with "-d" shortcut.
self.OptionParser.add_option("-d", "--delimiter", action="store",
type="string", dest="csv_delimiter", default=';',
help="delimiter")
# Define string option "--colors" with "-c" shortcut.
self.OptionParser.add_option("-c", "--colors", action="store",
type="string", dest="colors", default='default',
help="color-scheme")
# Define string option "--colors_override"
self.OptionParser.add_option("", "--colors_override", action="store",
type="string", dest="colors_override", default='',
help="color-scheme-override")
self.OptionParser.add_option("", "--reverse_colors", action="store",
type="inkbool", dest="reverse_colors", default='False',
help="reverse color-scheme")
self.OptionParser.add_option("-k", "--col_key", action="store",
type="int", dest="col_key", default='0',
help="column that contains the keys")
self.OptionParser.add_option("-v", "--col_val", action="store",
type="int", dest="col_val", default='1',
help="column that contains the values")
self.OptionParser.add_option("", "--encoding", action="store",
type="string", dest="encoding", default='utf-8',
help="encoding of the CSV file, e.g. utf-8")
self.OptionParser.add_option("", "--headings", action="store",
type="inkbool", dest="headings", default='False',
help="the first line of the CSV file consists of headings for the columns")
self.OptionParser.add_option("-r", "--rotate", action="store",
type="inkbool", dest="rotate", default='False',
help="Draw barchart horizontally")
self.OptionParser.add_option("-W", "--bar-width", action="store",
type="int", dest="bar_width", default='10',
help="width of bars")
self.OptionParser.add_option("-p", "--pie-radius", action="store",
type="int", dest="pie_radius", default='100',
help="radius of pie-charts")
self.OptionParser.add_option("-H", "--bar-height", action="store",
type="int", dest="bar_height", default='100',
help="height of bars")
self.OptionParser.add_option("-O", "--bar-offset", action="store",
type="int", dest="bar_offset", default='5',
help="distance between bars")
self.OptionParser.add_option("", "--stroke-width", action="store",
type="float", dest="stroke_width", default='1')
self.OptionParser.add_option("-o", "--text-offset", action="store",
type="int", dest="text_offset", default='5',
help="distance between bar and descriptions")
self.OptionParser.add_option("", "--heading-offset", action="store",
type="int", dest="heading_offset", default='50',
help="distance between chart and chart title")
self.OptionParser.add_option("", "--segment-overlap", action="store",
type="inkbool", dest="segment_overlap", default='False',
help="work around aliasing effects by letting pie chart segments overlap")
self.OptionParser.add_option("-F", "--font", action="store",
type="string", dest="font", default='sans-serif',
help="font of description")
self.OptionParser.add_option("-S", "--font-size", action="store",
type="int", dest="font_size", default='10',
help="font size of description")
self.OptionParser.add_option("-C", "--font-color", action="store",
type="string", dest="font_color", default='black',
help="font color of description")
#Dummy:
self.OptionParser.add_option("","--input_sections")
self.OptionParser.add_option("-V", "--show_values", action="store",
type="inkbool", dest="show_values", default='False',
help="Show values in chart")
def effect(self):
"""
Effect behaviour.
Overrides base class' method and inserts a nice looking chart into SVG document.
"""
# Get script's "--what" option value and process the data type --- i concess the if term is a little bit of magic
what = self.options.what
keys = []
values = []
orig_values = []
keys_present = True
pie_abs = False
cnt = 0
csv_file_name = self.options.filename
csv_delimiter = self.options.csv_delimiter
input_type = self.options.input_type
col_key = self.options.col_key
col_val = self.options.col_val
show_values = self.options.show_values
encoding = self.options.encoding.strip() or 'utf-8'
headings = self.options.headings
heading_offset = self.options.heading_offset
if input_type == "\"file\"":
csv_file = open(csv_file_name, "r")
for linenum, line in enumerate(csv_file):
value = line.decode(encoding).split(csv_delimiter)
#make sure that there is at least one value (someone may want to use it as description)
if len(value) >= 1:
# allow to parse headings as strings
if linenum == 0 and headings:
heading = value[col_val]
else:
keys.append(value[col_key])
                        # replace a comma decimal separator from the file by a
                        # decimal point, to avoid file editing for people whose
                        # programs output values with commas
values.append(float(value[col_val].replace(",",".")))
csv_file.close()
elif input_type == "\"direct_input\"":
            what = re.findall(r"([A-Za-z0-9]+:[0-9]+\.?[0-9]*)", what)
for value in what:
value = value.split(":")
keys.append(value[0])
values.append(float(value[1]))
# warn about negative values (not yet supported)
for value in values:
if value < 0:
inkex.errormsg("Negative values are currently not supported!")
return
# Get script's "--type" option value.
charttype = self.options.type
if charttype == "pie_abs":
pie_abs = True
charttype = "pie"
# Get access to main SVG document element and get its dimensions.
svg = self.document.getroot()
        # Get the page attributes:
width = self.getUnittouu(svg.get('width'))
height = self.getUnittouu(svg.attrib['height'])
# Create a new layer.
layer = inkex.etree.SubElement(svg, 'g')
layer.set(inkex.addNS('label', 'inkscape'), 'Chart-Layer: %s' % (what))
layer.set(inkex.addNS('groupmode', 'inkscape'), 'layer')
# Check if a drop shadow should be drawn:
draw_blur = self.options.blur
if draw_blur:
# Get defs of Document
defs = self.xpathSingle('/svg:svg//svg:defs')
            if defs is None:
defs = inkex.etree.SubElement(self.document.getroot(), inkex.addNS('defs', 'svg'))
# Create new Filter
filt = inkex.etree.SubElement(defs,inkex.addNS('filter', 'svg'))
filtId = self.uniqueId('filter')
self.filtId = 'filter:url(#%s);' % filtId
for k, v in [('id', filtId), ('height', "3"),
('width', "3"),
('x', '-0.5'), ('y', '-0.5')]:
filt.set(k, v)
# Append Gaussian Blur to that Filter
fe = inkex.etree.SubElement(filt, inkex.addNS('feGaussianBlur', 'svg'))
fe.set('stdDeviation', "1.1")
# Set Default Colors
        colors = self.options.colors_override.strip()
        if not colors:
            colors = self.options.colors
if colors[0].isalpha():
colors = get_color_scheme(colors)
else:
colors = re.findall("(#[0-9a-fA-F]{6})", colors)
#to be sure we create a fallback:
if len(colors) == 0:
colors = get_color_scheme()
color_count = len(colors)
if self.options.reverse_colors:
colors.reverse()
# Those values should be self-explanatory:
bar_height = self.options.bar_height
bar_width = self.options.bar_width
bar_offset = self.options.bar_offset
# offset of the description in stacked-bar-charts:
# stacked_bar_text_offset=self.options.stacked_bar_text_offset
text_offset = self.options.text_offset
# prevents ugly aliasing effects between pie chart segments by overlapping
segment_overlap = self.options.segment_overlap
# get font
font = self.options.font
font_size = self.options.font_size
font_color = self.options.font_color
# get rotation
rotate = self.options.rotate
pie_radius = self.options.pie_radius
stroke_width = self.options.stroke_width
if charttype == "bar":
#########
###BAR###
#########
# iterate all values, use offset to draw the bars in different places
offset = 0
color = 0
# Normalize the bars to the largest value
try:
value_max = max(values)
except ValueError:
value_max = 0.0
for x in range(len(values)):
orig_values.append(values[x])
values[x] = (values[x]/value_max) * bar_height
# Draw Single bars with their shadows
for value in values:
# draw drop shadow, if necessary
if draw_blur:
# Create shadow element
shadow = inkex.etree.Element(inkex.addNS("rect", "svg"))
# Set chart position to center of document. Make it horizontal or vertical
if not rotate:
shadow.set('x', str(width/2 + offset + 1))
shadow.set('y', str(height/2 - int(value) + 1))
shadow.set("width", str(bar_width))
shadow.set("height", str(int(value)))
else:
shadow.set('y', str(width/2 + offset + 1))
shadow.set('x', str(height/2 + 1))
shadow.set("height", str(bar_width))
shadow.set("width", str(int(value)))
# Set shadow blur (connect to filter object in xml path)
shadow.set("style", "filter:url(#filter)")
# Create rectangle element
rect = inkex.etree.Element(inkex.addNS('rect', 'svg'))
# Set chart position to center of document.
if not rotate:
rect.set('x', str(width/2 + offset))
rect.set('y', str(height/2 - int(value)))
rect.set("width", str(bar_width))
rect.set("height", str(int(value)))
else:
rect.set('y', str(width/2 + offset))
rect.set('x', str(height/2))
rect.set("height", str(bar_width))
rect.set("width", str(int(value)))
rect.set("style", "fill:" + colors[color % color_count])
# If keys are given, create text elements
if keys_present:
text = inkex.etree.Element(inkex.addNS('text', 'svg'))
if not rotate: #=vertical
text.set("transform", "matrix(0,-1,1,0,0,0)")
#y after rotation:
text.set("x", "-" + str(height/2 + text_offset))
#x after rotation:
text.set("y", str(width/2 + offset + bar_width/2 + font_size/3))
else: #=horizontal
text.set("y", str(width/2 + offset + bar_width/2 + font_size/3))
text.set("x", str(height/2 - text_offset))
text.set("style", "font-size:" + str(font_size)\
+ "px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:"\
+ font + ";-inkscape-font-specification:Bitstream Charter;text-align:end;text-anchor:end;fill:"\
+ font_color)
text.text = keys[cnt]
# Increase Offset and Color
#offset=offset+bar_width+bar_offset
color = (color + 1) % 8
# Connect elements together.
if draw_blur:
layer.append(shadow)
layer.append(rect)
if keys_present:
layer.append(text)
if show_values:
vtext = inkex.etree.Element(inkex.addNS('text', 'svg'))
if not rotate: #=vertical
vtext.set("transform", "matrix(0,-1,1,0,0,0)")
#y after rotation:
vtext.set("x", "-"+str(height/2+text_offset-value-text_offset-text_offset))
#x after rotation:
vtext.set("y", str(width/2+offset+bar_width/2+font_size/3))
else: #=horizontal
vtext.set("y", str(width/2+offset+bar_width/2+font_size/3))
vtext.set("x", str(height/2-text_offset+value+text_offset+text_offset))
vtext.set("style", "font-size:"+str(font_size)\
+ "px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:"\
+ font + ";-inkscape-font-specification:Bitstream Charter;text-align:start;text-anchor:start;fill:"\
+ font_color)
vtext.text = str(int(orig_values[cnt]))
layer.append(vtext)
cnt = cnt+1
offset = offset + bar_width + bar_offset
# set x position for heading line
if not rotate:
heading_x = width/2 # TODO: adjust
else:
heading_x = width/2 # TODO: adjust
elif charttype == "pie":
#########
###PIE###
#########
# Iterate all values to draw the different slices
color = 0
# Create the shadow first (if it should be created):
if draw_blur:
shadow = inkex.etree.Element(inkex.addNS("circle", "svg"))
shadow.set('cx', str(width/2))
shadow.set('cy', str(height/2))
shadow.set('r', str(pie_radius))
shadow.set("style", "filter:url(#filter);fill:#000000")
layer.append(shadow)
# Add a grey background circle with a light stroke
background = inkex.etree.Element(inkex.addNS("circle", "svg"))
background.set("cx", str(width/2))
background.set("cy", str(height/2))
background.set("r", str(pie_radius))
background.set("style", "stroke:#ececec;fill:#f9f9f9")
layer.append(background)
#create value sum in order to divide the slices
try:
valuesum = sum(values)
except ValueError:
valuesum = 0
if pie_abs:
valuesum = 100
num_values = len(values)
# Set an offsetangle
offset = 0
# Draw single slices
for i in range(num_values):
value = values[i]
# Calculate the PI-angles for start and end
                angle = (2 * math.pi) / valuesum * float(value)
start = offset
end = offset + angle
# proper overlapping
if segment_overlap:
if i != num_values-1:
end += 0.09 # add a 5° overlap
if i == 0:
start -= 0.09 # let the first element overlap into the other direction
#then add the slice
pieslice = inkex.etree.Element(inkex.addNS("path", "svg"))
pieslice.set(inkex.addNS('type', 'sodipodi'), 'arc')
pieslice.set(inkex.addNS('cx', 'sodipodi'), str(width/2))
pieslice.set(inkex.addNS('cy', 'sodipodi'), str(height/2))
pieslice.set(inkex.addNS('rx', 'sodipodi'), str(pie_radius))
pieslice.set(inkex.addNS('ry', 'sodipodi'), str(pie_radius))
pieslice.set(inkex.addNS('start', 'sodipodi'), str(start))
pieslice.set(inkex.addNS('end', 'sodipodi'), str(end))
pieslice.set("style", "fill:"+ colors[color % color_count] + ";stroke:none;fill-opacity:1")
#If text is given, draw short paths and add the text
if keys_present:
path = inkex.etree.Element(inkex.addNS("path", "svg"))
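                    # Draw a short connector from the slice's outer edge
                    # outwards along the bisector of the slice angle
                    # (angle/2 + offset), pointing at the label position.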
path.set("d", "m "
+ str((width/2) + pie_radius * math.cos(angle/2 + offset)) + ","
+ str((height/2) + pie_radius * math.sin(angle/2 + offset)) + " "
+ str((text_offset - 2) * math.cos(angle/2 + offset)) + ","
+ str((text_offset - 2) * math.sin(angle/2 + offset)))
path.set("style", "fill:none;stroke:"
+ font_color + ";stroke-width:" + str(stroke_width)
+ "px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1")
layer.append(path)
text = inkex.etree.Element(inkex.addNS('text', 'svg'))
text.set("x", str((width/2) + (pie_radius + text_offset) * math.cos(angle/2 + offset)))
text.set("y", str((height/2) + (pie_radius + text_offset) * math.sin(angle/2 + offset) + font_size/3))
textstyle = "font-size:" + str(font_size) \
+ "px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:" \
+ font + ";-inkscape-font-specification:Bitstream Charter;fill:" + font_color
# check if it is right or left of the Pie
if math.cos(angle/2 + offset) > 0:
text.set("style", textstyle)
else:
text.set("style", textstyle + ";text-align:end;text-anchor:end")
text.text = keys[cnt]
if show_values:
text.text = text.text + "(" + str(values[cnt])
if pie_abs:
text.text = text.text + " %"
text.text = text.text + ")"
cnt = cnt + 1
layer.append(text)
# increase the rotation-offset and the colorcycle-position
offset = offset + angle
color = (color + 1) % 8
# append the objects to the extension-layer
layer.append(pieslice)
# set x position for heading line
heading_x = width/2 - pie_radius # TODO: adjust
elif charttype == "stbar":
#################
###STACKED BAR###
#################
# Iterate over all values to draw the different slices
color = 0
            # create the value sum in order to divide the bars; each value
            # must be counted exactly once, otherwise the stacked bar ends up
            # at only half its intended height
            valuesum = sum(float(value) for value in values)
# Init offset
offset = 0
if draw_blur:
# Create rectangle element
shadow = inkex.etree.Element(inkex.addNS("rect", "svg"))
# Set chart position to center of document.
if not rotate:
shadow.set('x', str(width/2))
shadow.set('y', str(height/2 - bar_height/2))
else:
shadow.set('x', str(width/2))
shadow.set('y', str(height/2))
# Set rectangle properties
if not rotate:
shadow.set("width", str(bar_width))
shadow.set("height", str(bar_height/2))
else:
shadow.set("width",str(bar_height/2))
shadow.set("height", str(bar_width))
# Set shadow blur (connect to filter object in xml path)
shadow.set("style", "filter:url(#filter)")
layer.append(shadow)
i = 0
# Draw Single bars
for value in values:
# Calculate the individual heights normalized on 100units
normedvalue = (bar_height / valuesum) * float(value)
# Create rectangle element
rect = inkex.etree.Element(inkex.addNS('rect', 'svg'))
# Set chart position to center of document.
if not rotate:
rect.set('x', str(width / 2 ))
rect.set('y', str(height / 2 - offset - normedvalue))
else:
rect.set('x', str(width / 2 + offset ))
rect.set('y', str(height / 2 ))
# Set rectangle properties
if not rotate:
rect.set("width", str(bar_width))
rect.set("height", str(normedvalue))
else:
rect.set("height", str(bar_width))
rect.set("width", str(normedvalue))
rect.set("style", "fill:" + colors[color % color_count])
#If text is given, draw short paths and add the text
# TODO: apply overlap workaround for visible gaps in between
if keys_present:
if not rotate:
path = inkex.etree.Element(inkex.addNS("path", "svg"))
path.set("d","m " + str((width + bar_width)/2) + ","
+ str(height/2 - offset - (normedvalue / 2)) + " "
+ str(bar_width/2 + text_offset) + ",0")
path.set("style", "fill:none;stroke:" + font_color
+ ";stroke-width:" + str(stroke_width)
+ "px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1")
layer.append(path)
text = inkex.etree.Element(inkex.addNS('text', 'svg'))
text.set("x", str(width/2 + bar_width + text_offset + 1))
text.set("y", str(height/ 2 - offset + font_size/3 - (normedvalue/2)))
text.set("style", "font-size:" + str(font_size)
+ "px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:"
+ font + ";-inkscape-font-specification:Bitstream Charter;fill:" + font_color)
text.text = keys[cnt]
cnt = cnt + 1
layer.append(text)
else:
path = inkex.etree.Element(inkex.addNS("path", "svg"))
path.set("d","m " + str((width)/2 + offset + normedvalue/2) + ","
+ str(height / 2 + bar_width/2) + " 0,"
+ str(bar_width/2 + (font_size * i) + text_offset)) #line
path.set("style", "fill:none;stroke:" + font_color
+ ";stroke-width:" + str(stroke_width)
+ "px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1")
layer.append(path)
text = inkex.etree.Element(inkex.addNS('text', 'svg'))
text.set("x", str((width)/2 + offset + normedvalue/2 - font_size/3))
text.set("y", str((height/2) + bar_width + (font_size * (i + 1)) + text_offset))
text.set("style", "font-size:" + str(font_size)
+ "px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:"
+ font + ";-inkscape-font-specification:Bitstream Charter;fill:" + font_color)
text.text = keys[color]
layer.append(text)
# Increase Offset and Color
offset = offset + normedvalue
color = (color + 1) % 8
# Draw rectangle
layer.append(rect)
i += 1
# set x position for heading line
if not rotate:
heading_x = width/2 + offset + normedvalue # TODO: adjust
else:
heading_x = width/2 + offset + normedvalue # TODO: adjust
if headings and input_type == "\"file\"":
headingtext = inkex.etree.Element(inkex.addNS('text', 'svg'))
headingtext.set("y", str(height/2 + heading_offset))
headingtext.set("x", str(heading_x))
headingtext.set("style", "font-size:" + str(font_size + 4)\
+ "px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-family:"\
+ font + ";-inkscape-font-specification:Bitstream Charter;text-align:end;text-anchor:end;fill:"\
+ font_color)
headingtext.text = heading
layer.append(headingtext)
def getUnittouu(self, param):
try:
return inkex.unittouu(param)
except AttributeError:
return self.unittouu(param)
if __name__ == '__main__':
# Create effect instance and apply it.
effect = NiceChart()
effect.affect()
|
danieljabailey/inkscape_experiments
|
share/extensions/nicechart.py
|
Python
|
gpl-2.0
| 32,218
|
[
"Gaussian"
] |
84aaa9ca6a1c48324e55bba4e72c89ff24dcedfef897a043b45257abea692edc
|
import demistomock as demisto
from CommonServerPython import *
from CommonServerUserPython import *
''' IMPORTS '''
import requests
import json
import collections
# Disable insecure warnings
requests.packages.urllib3.disable_warnings()
''' GLOBALS/PARAMS '''
USERNAME = demisto.params().get('credentials').get('identifier')
PASSWORD = demisto.params().get('credentials').get('password')
API_KEY = demisto.params().get('api-key')
FETCH_TIME = int(demisto.params().get('fetch_time', '7'))
SERVER = demisto.params()['url'][:-1] if (demisto.params()['url'] and demisto.params()['url'].endswith('/')) else \
demisto.params()['url']
USE_SSL = not demisto.params().get('insecure', False)
BASE_URL = SERVER + '/v1'
# Remove proxy if not set to true in params
handle_proxy()
STATUSES = {
'Not Reviewed': '0',
'Investigating': '1',
'On hold': '2',
'False Positive': '3',
'Escalated': '4'
}
TLP_MAP = {
'WHITE': 0,
'GREEN': 1,
'AMBER': 2,
'RED': 3
}
CONFIDENCE_MAP = {
'LOW': 0,
'MEDIUM': 1,
'HIGH': 2
}
OBSERVABLE_TYPES_MAP = {
'IP': 0,
'Domain': 1,
'URL': 2,
'REGEX': 3,
'File Hash': 4
}
''' HELPER FUNCTIONS '''
# Allows nested keys to be accessible
def makehash():
return collections.defaultdict(makehash)
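# Example: h = makehash(); h['SrcGeo']['Latitude'] = 12.3 works without
# creating the intermediate dict first.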
def http_request(method, url_suffix, params=None, data=None, headers=None):
try:
res = requests.request(
method,
BASE_URL + url_suffix,
verify=USE_SSL,
params=params,
data=data,
headers=headers
)
if res.status_code == 403:
return_error('Connection forbidden. Please verify your API key is valid.')
elif res.status_code not in {200, 201}:
return_error(f'Error in API call to Perch Integration [{res.status_code}] - {res.reason}')
except requests.exceptions.ConnectionError as error:
return_error(f"Failed to establish a new connection: {type(error)}")
try:
response = res.json()
except Exception as e:
return_error(f'Failed to parse JSON response: {str(e)}')
return response
def find_key_by_value(val, dic_map):
for key, value in dic_map.items():
if value == val:
return key
def format_alerts(alert):
hr = makehash() # type: dict
ec = makehash() # type: dict
if alert.get('id'):
hr['ID'] = alert.get('id')
ec['ID'] = alert.get('id')
if alert.get('sensor_id'):
hr['Sensor ID'] = alert.get('sensor_id')
ec['SensorID'] = alert.get('sensor_id')
if alert.get('observable_id'):
hr['Observable ID'] = alert.get('observable_id')
ec['ObservableID'] = alert.get('observable_id')
if alert.get('indicator_id'):
hr['Indicator ID'] = alert.get('indicator_id')
ec['IndicatorID'] = alert.get('indicator_id')
if alert.get('status'):
hr['Status'] = alert.get('status')
ec['Status'] = alert.get('status')
if alert.get('ts'):
hr['Timestamp'] = alert.get('ts')
ec['TS'] = alert.get('ts')
if alert.get('title'):
hr['Title'] = alert.get('title')
ec['Title'] = alert.get('title')
if alert.get('protocol'):
hr['Protocol'] = alert.get('protocol')
ec['Protocol'] = alert.get('protocol')
if alert.get('src_ip'):
hr['Source IP'] = alert.get('src_ip')
ec['SrcIP'] = alert.get('src_ip')
if alert.get('src_port'):
hr['Source Port'] = alert.get('src_port')
ec['SrcPort'] = alert.get('src_port')
if alert.get('src_geo_ip'):
src_geo = alert['src_geo_ip']
if src_geo.get('latitude'):
hr['Source Geo']['Latitude'] = src_geo.get('latitude')
ec['SrcGeo']['Latitude'] = src_geo.get('latitude')
if src_geo.get('longitude'):
hr['Source Geo']['Longitude'] = src_geo.get('longitude')
ec['SrcGeo']['Longitude'] = src_geo.get('longitude')
if src_geo.get('country_name'):
hr['Source Geo']['Country Name'] = src_geo.get('country_name')
ec['SrcGeo']['Country'] = src_geo.get('country_name')
if alert.get('dest_ip'):
hr['Destination IP'] = alert.get('dest_ip')
ec['DestIP'] = alert.get('dest_ip')
if alert.get('dest_port'):
hr['Destination Port'] = alert.get('dest_port')
ec['DestPort'] = alert.get('dest_port')
if alert.get('dest_geo_ip'):
dest_geo = alert['dest_geo_ip']
if dest_geo.get('latitude'):
hr['Destination Geo']['Latitude'] = dest_geo.get('latitude')
ec['DestGeo']['Latitude'] = dest_geo.get('latitude')
if dest_geo.get('longitude'):
hr['Destination Geo']['Longitude'] = dest_geo.get('longitude')
ec['DestGeo']['Longitude'] = dest_geo.get('longitude')
if dest_geo.get('country_name'):
hr['Destination Geo']['Country Name'] = dest_geo.get('country_name')
ec['DestGeo']['Country'] = dest_geo.get('country_name')
return hr, ec
def alerts_params(args):
    # Forward only the recognized, non-empty query parameters.
    allowed_params = (
        'page', 'page_size', 'closed', 'closed_at', 'community_id',
        'created_at', 'dest_ip', 'dest_port', 'full_url', 'id',
        'indicator_id', 'indicator_loaded', 'observable_id', 'protocol',
        'sensor_id', 'sensor_name', 'soc_status', 'src_ip', 'src_port',
        'status', 'status_updated_at', 'team_id', 'title', 'ts',
        'closed_at__gte', 'closed_at__lte', 'created_at__gte',
        'created_at__lte', 'status_updated_at__gte', 'status_updated_at__lte',
        'status_updated_at__gt', 'status_updated_at__lt', 'ordering',
    )
    return {key: args.get(key) for key in allowed_params if args.get(key)}
def indicator_params(args):
params = []
param = {}
observables = []
communities = []
if args.get('communities'):
community = {
'id': args.get('communities')
}
communities.append(community)
param['communities'] = communities
if args.get('type'):
observable = {
'type': OBSERVABLE_TYPES_MAP[args.get('type')],
'details': {
'value': args.get('value')
}
}
observables.append(observable)
param['observables'] = observables
if args.get('title'):
param['title'] = args.get('title')
if args.get('description'):
param['description'] = args.get('description')
if args.get('tlp'):
param['tlp'] = TLP_MAP[args.get('tlp')] # type: ignore
if args.get('confidence'):
param['confidence'] = CONFIDENCE_MAP[args.get('confidence')] # type: ignore
if args.get('operator'):
param['operator'] = args.get('operator')
if args.get('first_sighting'):
param['first_sighting'] = args.get('first_sighting')
if args.get('email_summary'):
param['email_summary'] = args.get('email_summary')
params.append(param)
return params
def authenticate():
headers = {'Content-Type': 'application/json', 'x-api-key': API_KEY}
req_body = json.dumps({'username': USERNAME, 'password': PASSWORD})
url = '/auth/access_token'
res_body = http_request('POST', url, data=req_body, headers=headers)
headers['Authorization'] = 'Bearer ' + res_body['access_token']
return headers
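# Example usage (as in the command functions below):
#   headers = authenticate()
#   res = http_request('GET', '/alerts', headers=headers)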
def format_indicator(indicator):
hr = makehash() # type: dict
ec = makehash() # type: dict
if indicator.get('id'):
hr['ID'] = indicator.get('id')
ec['ID'] = indicator.get('id')
if indicator.get('confidence'):
hr['Confidence'] = find_key_by_value(indicator.get('confidence'), CONFIDENCE_MAP)
ec['Confidence'] = find_key_by_value(indicator.get('confidence'), CONFIDENCE_MAP)
if indicator.get('created_at'):
hr['Created At'] = indicator.get('created_at')
ec['CreatedAt'] = indicator.get('created_at')
if indicator.get('created_by'):
hr['Created By'] = indicator.get('created_by')
ec['CreatedBy'] = indicator.get('created_by')
if indicator.get('description'):
hr['Description'] = indicator.get('description')
ec['Description'] = indicator.get('description')
if indicator.get('email_summary'):
hr['Email Summary'] = indicator.get('email_summary')
ec['EmailSummary'] = indicator.get('email_summary')
if indicator.get('title'):
hr['Title'] = indicator.get('title')
ec['Title'] = indicator.get('title')
if indicator.get('first_sighting'):
hr['First Sighting'] = indicator.get('first_sighting')
ec['FirstSighting'] = indicator.get('first_sighting')
if indicator.get('perch_id'):
hr['Perch ID'] = indicator.get('perch_id')
ec['PerchID'] = indicator.get('perch_id')
if indicator.get('team'):
hr['Team'] = indicator.get('team')
ec['Team'] = indicator.get('team')
if indicator.get('tlp'):
hr['TLP'] = find_key_by_value(indicator.get('tlp'), TLP_MAP)
ec['TLP'] = find_key_by_value(indicator.get('tlp'), TLP_MAP)
if indicator.get('updated_at'):
hr['Updated At'] = indicator.get('updated_at')
ec['UpdatedAt'] = indicator.get('updated_at')
if indicator.get('operator'):
hr['Operator'] = indicator.get('operator')
ec['Operator'] = indicator.get('operator')
return hr, ec
def item_to_incident(item):
incident = {'name': 'Perch Incident: ' + item.get('title'),
'occurred': item.get('created_at'),
'rawJSON': json.dumps(item)}
return incident
'''COMMAND FUNCTIONS'''
def search_alerts_command():
headers = authenticate()
args = demisto.args()
params = alerts_params(args)
url = '/alerts'
res = http_request('GET', url, headers=headers, params=params)
res_results = res.get('results')
hr = ''
ec = {
"Perch": {
"Alert": []
}
} # type: dict
for alert in res_results:
alert_hr, alert_ec = format_alerts(alert)
ec['Perch']['Alert'].append(alert_ec)
hr += tableToMarkdown(f'{alert_ec.get("Title")}', alert_hr)
if len(res_results) == 0:
demisto.results('No results were found')
else:
demisto.results({
'Type': entryTypes['note'],
'ContentsFormat': formats['markdown'],
'Contents': res_results,
'HumanReadable': hr,
'EntryContext': ec
})
def list_communities_command():
headers = authenticate()
args = demisto.args()
params = alerts_params(args)
url = '/communities'
res = http_request('GET', url, headers=headers, params=params)
res_results = res.get('results')
hr = tableToMarkdown('Communities Found', res_results, headerTransform=string_to_table_header, removeNull=True)
ec = {
"Perch": {
"Community": []
}
} # type: dict
for alert in res_results:
ec['Perch']['Community'].append(createContext(alert, keyTransform=string_to_context_key, removeNull=True))
if len(res_results) == 0:
demisto.results('No communities were found')
else:
demisto.results({
'Type': entryTypes['note'],
'ContentsFormat': formats['markdown'],
'Contents': res_results,
'HumanReadable': hr,
'EntryContext': ec
})
def get_community_command():
headers = authenticate()
args = demisto.args()
params = alerts_params(args)
community_id = args.get('id')
url = f'/communities/{community_id}'
res = http_request('GET', url, headers=headers, params=params)
if len(res) > 0:
hr = tableToMarkdown('Communities Found', res, headerTransform=string_to_table_header, removeNull=True)
ec = {
"Perch": {
"Community": createContext(res, keyTransform=string_to_context_key, removeNull=True)
}
} # type: dict
demisto.results({
'Type': entryTypes['note'],
'ContentsFormat': formats['markdown'],
'Contents': res,
'HumanReadable': hr,
'EntryContext': ec
})
else:
demisto.results('No communities were found')
def create_indicator_command():
headers = authenticate()
args = demisto.args()
raw_data = indicator_params(args)
data = json.dumps(raw_data)
url = '/indicators'
res = http_request('POST', url, headers=headers, data=data)
indicator_hr, indicator_ec = format_indicator(res[0])
hr = ''
ec = {
"Perch": {
"Indicator": []
}
} # type: dict
ec['Perch']['Indicator'].append(indicator_ec)
hr += tableToMarkdown(f'{indicator_hr.get("Title")}', indicator_hr)
demisto.results({
'Type': entryTypes['note'],
'ContentsFormat': formats['markdown'],
'Contents': res,
'HumanReadable': hr,
'EntryContext': ec
})
def fetch_alerts(last_run, headers):
last_fetch = last_run.get('time')
url = '/alerts'
statuses_to_fetch = demisto.params().get('soc_status', [])
if statuses_to_fetch:
items = []
for status in statuses_to_fetch:
res = http_request('GET', url, headers=headers, params=alerts_params({'soc_status': STATUSES[status]}))
items += res.get('results')
else:
res = http_request('GET', url, headers=headers)
items = res.get('results')
items.sort(key=lambda r: r['created_at'])
if last_fetch is None:
last_fetch_raw = datetime.now() - timedelta(days=FETCH_TIME)
last_fetch = date_to_timestamp(last_fetch_raw, '%Y-%m-%dT%H:%M:%S.%fZ')
incidents = []
for item in items:
incident = item_to_incident(item)
incident_date = date_to_timestamp(incident['occurred'], '%Y-%m-%dT%H:%M:%S.%fZ')
if incident_date > last_fetch:
incidents.append(incident)
last_fetch = incident_date
return last_fetch, incidents
def fetch_alerts_command():
last_run = demisto.getLastRun()
headers = authenticate()
last_fetch, incidents = fetch_alerts(last_run, headers)
demisto.setLastRun({'time': last_fetch})
demisto.incidents(incidents)
def test_module():
try:
headers = authenticate()
if demisto.params().get('isFetch'):
last_run = {'time': 1561017202}
fetch_alerts(last_run, headers)
demisto.results('ok')
except Exception as err:
return_error(str(err))
''' COMMANDS MANAGER / SWITCH PANEL '''
demisto.info(f'Command being called is {demisto.command()}')
try:
if demisto.command() == 'perch-search-alerts':
search_alerts_command()
elif demisto.command() == 'perch-get-community':
get_community_command()
elif demisto.command() == 'perch-list-communities':
list_communities_command()
elif demisto.command() == 'perch-create-indicator':
create_indicator_command()
elif demisto.command() == 'fetch-incidents':
fetch_alerts_command()
elif demisto.command() == 'test-module':
test_module()
# Log exceptions
except Exception as e:
LOG(str(e))
LOG.print_log()
raise
|
demisto/content
|
Packs/Perch/Integrations/Perch/Perch.py
|
Python
|
mit
| 17,486
|
[
"Amber"
] |
03bd8df2e7b09e38b0d8c15315c6b41e2a58993b74275918908942cd19f622d5
|
#!/usr/bin/env python
'''CREATED:2014-05-22 16:43:44 by Brian McFee <brm2132@columbia.edu>
Pitch-shift a recording to be in A440 tuning.
Usage: ./adjust_tuning.py [-h] input_file output_file
'''
from __future__ import print_function
import argparse
import sys
import librosa
import soundfile as sf
def adjust_tuning(input_file, output_file):
'''Load audio, estimate tuning, apply pitch correction, and save.'''
print('Loading ', input_file)
y, sr = librosa.load(input_file)
print('Separating harmonic component ... ')
y_harm = librosa.effects.harmonic(y)
print('Estimating tuning ... ')
# Just track the pitches associated with high magnitude
tuning = librosa.estimate_tuning(y=y_harm, sr=sr)
print('{:+0.2f} cents'.format(100 * tuning))
print('Applying pitch-correction of {:+0.2f} cents'.format(-100 * tuning))
y_tuned = librosa.effects.pitch_shift(y, sr, -tuning)
print('Saving tuned audio to: ', output_file)
sf.write(output_file, y_tuned, sr)
def process_arguments(args):
'''Argparse function to get the program parameters'''
parser = argparse.ArgumentParser(description='Tuning adjustment example')
parser.add_argument('input_file',
action='store',
help='path to the input file (wav, mp3, etc)')
parser.add_argument('output_file',
action='store',
help='path to store the output signal')
return vars(parser.parse_args(args))
if __name__ == '__main__':
# Get the parameters
params = process_arguments(sys.argv[1:])
# Run the beat tracker
adjust_tuning(params['input_file'], params['output_file'])
|
carlthome/librosa
|
examples/adjust_tuning.py
|
Python
|
isc
| 1,703
|
[
"Brian"
] |
7837439649bc5b25e63ee43010332f472d2acadf3f9cbc32a4adfe8ef0126cea
|
from __future__ import absolute_import, division, print_function
import functools
import warnings
from distutils.version import LooseVersion
from io import BytesIO
import numpy as np
from .. import Variable
from ..core.indexing import NumpyIndexingAdapter
from ..core.pycompat import OrderedDict, basestring, iteritems
from ..core.utils import Frozen, FrozenOrderedDict
from .common import BackendArray, DataStorePickleMixin, WritableCFDataStore
from .netcdf3 import (
encode_nc3_attr_value, encode_nc3_variable, is_valid_nc3_name)
def _decode_string(s):
if isinstance(s, bytes):
return s.decode('utf-8', 'replace')
return s
def _decode_attrs(d):
# don't decode _FillValue from bytes -> unicode, because we want to ensure
# that its type matches the data exactly
return OrderedDict((k, v if k == '_FillValue' else _decode_string(v))
for (k, v) in iteritems(d))
class ScipyArrayWrapper(BackendArray):
def __init__(self, variable_name, datastore):
self.datastore = datastore
self.variable_name = variable_name
array = self.get_array()
self.shape = array.shape
self.dtype = np.dtype(array.dtype.kind +
str(array.dtype.itemsize))
def get_array(self):
self.datastore.assert_open()
return self.datastore.ds.variables[self.variable_name].data
def __getitem__(self, key):
with self.datastore.ensure_open(autoclose=True):
data = NumpyIndexingAdapter(self.get_array())[key]
# Copy data if the source file is mmapped.
# This makes things consistent
# with the netCDF4 library by ensuring
# we can safely read arrays even
# after closing associated files.
copy = self.datastore.ds.use_mmap
return np.array(data, dtype=self.dtype, copy=copy)
def __setitem__(self, key, value):
with self.datastore.ensure_open(autoclose=True):
data = self.datastore.ds.variables[self.variable_name]
try:
data[key] = value
except TypeError:
if key is Ellipsis:
# workaround for GH: scipy/scipy#6880
data[:] = value
else:
raise
def _open_scipy_netcdf(filename, mode, mmap, version):
import scipy.io
import gzip
# if the string ends with .gz, then gunzip and open as netcdf file
if isinstance(filename, basestring) and filename.endswith('.gz'):
try:
return scipy.io.netcdf_file(gzip.open(filename), mode=mode,
mmap=mmap, version=version)
except TypeError as e:
# TODO: gzipped loading only works with NetCDF3 files.
            # str(e) works on both Python 2 and 3 (e.message is Python 2 only)
            if 'is not a valid NetCDF 3 file' in str(e):
raise ValueError('gzipped file loading only supports '
'NetCDF 3 files.')
else:
raise
if isinstance(filename, bytes) and filename.startswith(b'CDF'):
# it's a NetCDF3 bytestring
filename = BytesIO(filename)
try:
return scipy.io.netcdf_file(filename, mode=mode, mmap=mmap,
version=version)
except TypeError as e: # netcdf3 message is obscure in this case
errmsg = e.args[0]
if 'is not a valid NetCDF 3 file' in errmsg:
msg = """
If this is a NetCDF4 file, you may need to install the
netcdf4 library, e.g.,
$ pip install netcdf4
"""
errmsg += msg
raise TypeError(errmsg)
else:
raise
class ScipyDataStore(WritableCFDataStore, DataStorePickleMixin):
"""Store for reading and writing data via scipy.io.netcdf.
This store has the advantage of being able to be initialized with a
    StringIO object, allowing serialization without writing to disk.
It only supports the NetCDF3 file-format.
"""
def __init__(self, filename_or_obj, mode='r', format=None, group=None,
writer=None, mmap=None, autoclose=False, lock=None):
import scipy
import scipy.io
if (mode != 'r' and
scipy.__version__ < LooseVersion('0.13')): # pragma: no cover
warnings.warn('scipy %s detected; '
'the minimal recommended version is 0.13. '
                          'Older versions of this library do not reliably '
'read and write files.'
% scipy.__version__, ImportWarning)
if group is not None:
raise ValueError('cannot save to a group with the '
'scipy.io.netcdf backend')
if format is None or format == 'NETCDF3_64BIT':
version = 2
elif format == 'NETCDF3_CLASSIC':
version = 1
else:
raise ValueError('invalid format for scipy.io.netcdf backend: %r'
% format)
opener = functools.partial(_open_scipy_netcdf,
filename=filename_or_obj,
mode=mode, mmap=mmap, version=version)
self._ds = opener()
self._autoclose = autoclose
self._isopen = True
self._opener = opener
self._mode = mode
super(ScipyDataStore, self).__init__(writer, lock=lock)
def open_store_variable(self, name, var):
with self.ensure_open(autoclose=False):
return Variable(var.dimensions, ScipyArrayWrapper(name, self),
_decode_attrs(var._attributes))
def get_variables(self):
with self.ensure_open(autoclose=False):
return FrozenOrderedDict((k, self.open_store_variable(k, v))
for k, v in iteritems(self.ds.variables))
def get_attrs(self):
with self.ensure_open(autoclose=True):
return Frozen(_decode_attrs(self.ds._attributes))
def get_dimensions(self):
with self.ensure_open(autoclose=True):
return Frozen(self.ds.dimensions)
def get_encoding(self):
encoding = {}
encoding['unlimited_dims'] = {
k for k, v in self.ds.dimensions.items() if v is None}
return encoding
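    # Illustrative: scipy.io.netcdf represents the unlimited (record)
    # dimension with a length of None, so dimensions {'time': None, 'lat': 73}
    # yield {'unlimited_dims': {'time'}} here.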
def set_dimension(self, name, length, is_unlimited=False):
with self.ensure_open(autoclose=False):
if name in self.ds.dimensions:
raise ValueError('%s does not support modifying dimensions'
% type(self).__name__)
dim_length = length if not is_unlimited else None
self.ds.createDimension(name, dim_length)
def _validate_attr_key(self, key):
if not is_valid_nc3_name(key):
raise ValueError("Not a valid attribute name")
def set_attribute(self, key, value):
with self.ensure_open(autoclose=False):
self._validate_attr_key(key)
value = encode_nc3_attr_value(value)
setattr(self.ds, key, value)
def encode_variable(self, variable):
variable = encode_nc3_variable(variable)
return variable
def prepare_variable(self, name, variable, check_encoding=False,
unlimited_dims=None):
if check_encoding and variable.encoding:
if variable.encoding != {'_FillValue': None}:
raise ValueError('unexpected encoding for scipy backend: %r'
% list(variable.encoding))
data = variable.data
# nb. this still creates a numpy array in all memory, even though we
        # don't write the data yet; scipy.io.netcdf does not support
# incremental writes.
if name not in self.ds.variables:
self.ds.createVariable(name, data.dtype, variable.dims)
scipy_var = self.ds.variables[name]
for k, v in iteritems(variable.attrs):
self._validate_attr_key(k)
setattr(scipy_var, k, v)
target = ScipyArrayWrapper(name, self)
return target, data
def sync(self, compute=True):
if not compute:
raise NotImplementedError(
'compute=False is not supported for the scipy backend yet')
with self.ensure_open(autoclose=True):
super(ScipyDataStore, self).sync(compute=compute)
self.ds.flush()
def close(self):
self.ds.close()
self._isopen = False
def __exit__(self, type, value, tb):
self.close()
def __setstate__(self, state):
filename = state['_opener'].keywords['filename']
if hasattr(filename, 'seek'):
# it's a file-like object
# seek to the start of the file so scipy can read it
filename.seek(0)
super(ScipyDataStore, self).__setstate__(state)
self._ds = None
self._isopen = False
|
jcmgray/xarray
|
xarray/backends/scipy_.py
|
Python
|
apache-2.0
| 9,010
|
[
"NetCDF"
] |
c1a84a87aa0fe9429bd90fdd8c3760723f22f048f8a608524d9db2ad2de4be57
|
# (C) British Crown Copyright 2010 - 2015, Met Office
#
# This file is part of Iris.
#
# Iris is free software: you can redistribute it and/or modify it under
# the terms of the GNU Lesser General Public License as published by the
# Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Iris is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Iris. If not, see <http://www.gnu.org/licenses/>.
"""
Provides the capability to load netCDF files and interpret them
according to the 'NetCDF Climate and Forecast (CF) Metadata Conventions'.
References:
[CF] NetCDF Climate and Forecast (CF) Metadata conventions, Version 1.5, October, 2010.
[NUG] NetCDF User's Guide, http://www.unidata.ucar.edu/software/netcdf/docs/netcdf.html
"""
from __future__ import (absolute_import, division, print_function)
from six.moves import (filter, input, map, range, zip) # noqa
import six
from abc import ABCMeta, abstractmethod
from collections import Iterable, MutableMapping
import os
import re
import warnings
import netCDF4
import numpy as np
import numpy.ma as ma
import iris.util
#
# CF parse pattern common to both formula terms and measure CF variables.
#
_CF_PARSE = re.compile(r'''
\s*
(?P<lhs>[\w_]+)
\s*:\s*
(?P<rhs>[\w_]+)
\s*
''', re.VERBOSE)
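# Illustrative: applied to a CF formula_terms attribute such as
# "a: ap b: b ps: surface_pressure", the pattern yields lhs/rhs pairs:
#
#     >>> [m.group('lhs', 'rhs') for m in _CF_PARSE.finditer('a: ap b: b')]
#     [('a', 'ap'), ('b', 'b')]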
# NetCDF variable attributes handled by the netCDF4 module and
# therefore automatically classed as "used" attributes.
_CF_ATTRS_IGNORE = set(['_FillValue', 'add_offset', 'missing_value', 'scale_factor', ])
#: Supported dimensionless vertical coordinate reference surface/phenomenon
#: formula terms. Ref: [CF] Appendix D.
reference_terms = dict(atmosphere_sigma_coordinate=['ps'],
atmosphere_hybrid_sigma_pressure_coordinate=['ps'],
atmosphere_hybrid_height_coordinate=['orog'],
atmosphere_sleve_coordinate=['zsurf1', 'zsurf2'],
ocean_sigma_coordinate=['eta', 'depth'],
ocean_s_coordinate=['eta', 'depth'],
ocean_sigma_z_coordinate=['eta', 'depth'],
ocean_s_coordinate_g1=['eta', 'depth'],
ocean_s_coordinate_g2=['eta', 'depth'])
# NetCDF returns a different type for strings depending on Python version.
def _is_str_dtype(var):
return ((six.PY2 and np.issubdtype(var.dtype, np.str)) or
(six.PY3 and np.issubdtype(var.dtype, np.bytes_)))
################################################################################
class CFVariable(six.with_metaclass(ABCMeta, object)):
"""Abstract base class wrapper for a CF-netCDF variable."""
#: Name of the netCDF variable attribute that identifies this
#: CF-netCDF variable.
cf_identity = None
def __init__(self, name, data):
# Accessing the list of netCDF attributes is surprisingly slow.
# Since it's used repeatedly, caching the list makes things
# quite a bit faster.
self._nc_attrs = data.ncattrs()
#: NetCDF variable name.
self.cf_name = name
#: NetCDF4 Variable data instance.
self.cf_data = data
#: Collection of CF-netCDF variables associated with this variable.
self.cf_group = None
        #: CF-netCDF formula terms that this variable participates in.
self.cf_terms_by_root = {}
self.cf_attrs_reset()
@staticmethod
def _identify_common(variables, ignore, target):
if ignore is None:
ignore = []
if target is None:
target = variables
elif isinstance(target, six.string_types):
if target not in variables:
raise ValueError('Cannot identify unknown target CF-netCDF variable %r' % target)
target = {target: variables[target]}
else:
raise TypeError('Expect a target CF-netCDF variable name')
return (ignore, target)
@abstractmethod
def identify(self, variables, ignore=None, target=None, warn=True):
"""
Identify all variables that match the criterion for this CF-netCDF variable class.
Args:
* variables:
Dictionary of netCDF4.Variable instance by variable name.
Kwargs:
* ignore:
List of variable names to ignore.
* target:
Name of a single variable to check.
* warn:
Issue a warning if a missing variable is referenced.
Returns:
Dictionary of CFVariable instance by variable name.
"""
pass
def spans(self, cf_variable):
"""
Determine whether the dimensionality of this variable
is a subset of the specified target variable.
Note that, by default scalar variables always span the
dimensionality of the target variable.
Args:
* cf_variable:
Compare dimensionality with the :class:`CFVariable`.
Returns:
Boolean.
"""
result = set(self.dimensions).issubset(cf_variable.dimensions)
return result
def __eq__(self, other):
# CF variable names are unique.
return self.cf_name == other.cf_name
def __ne__(self, other):
# CF variable names are unique.
return self.cf_name != other.cf_name
def __hash__(self):
# CF variable names are unique.
return hash(self.cf_name)
def __getattr__(self, name):
# Accessing netCDF attributes is surprisingly slow. Since
# they're often read repeatedly, caching the values makes things
# quite a bit faster.
if name in self._nc_attrs:
self._cf_attrs.add(name)
value = getattr(self.cf_data, name)
setattr(self, name, value)
return value
def __getitem__(self, key):
return self.cf_data.__getitem__(key)
def __len__(self):
return self.cf_data.__len__()
def __repr__(self):
return '%s(%r, %r)' % (self.__class__.__name__, self.cf_name, self.cf_data)
def cf_attrs(self):
"""Return a list of all attribute name and value pairs of the CF-netCDF variable."""
return tuple((attr, self.getncattr(attr))
for attr in sorted(self._nc_attrs))
def cf_attrs_ignored(self):
"""Return a list of all ignored attribute name and value pairs of the CF-netCDF variable."""
return tuple((attr, self.getncattr(attr)) for attr in
sorted(set(self._nc_attrs) & _CF_ATTRS_IGNORE))
def cf_attrs_used(self):
"""Return a list of all accessed attribute name and value pairs of the CF-netCDF variable."""
return tuple((attr, self.getncattr(attr)) for attr in
sorted(self._cf_attrs))
def cf_attrs_unused(self):
"""Return a list of all non-accessed attribute name and value pairs of the CF-netCDF variable."""
return tuple((attr, self.getncattr(attr)) for attr in
sorted(set(self._nc_attrs) - self._cf_attrs))
def cf_attrs_reset(self):
"""Reset the history of accessed attribute names of the CF-netCDF variable."""
self._cf_attrs = set([item[0] for item in self.cf_attrs_ignored()])
def add_formula_term(self, root, term):
"""
Register the participation of this CF-netCDF variable in a CF-netCDF formula term.
Args:
* root (string):
The name of CF-netCDF variable that defines the CF-netCDF formula_terms attribute.
* term (string):
The associated term name of this variable in the formula_terms definition.
Returns:
None.
"""
self.cf_terms_by_root[root] = term
def has_formula_terms(self):
"""
Determine whether this CF-netCDF variable participates in a CF-netcdf formula term.
Returns:
Boolean.
"""
return bool(self.cf_terms_by_root)
class CFAncillaryDataVariable(CFVariable):
"""
A CF-netCDF ancillary data variable is a variable that provides metadata
about the individual values of another data variable.
Identified by the CF-netCDF variable attribute 'ancillary_variables'.
Ref: [CF] Section 3.4. Ancillary Data.
"""
cf_identity = 'ancillary_variables'
@classmethod
def identify(cls, variables, ignore=None, target=None, warn=True):
result = {}
ignore, target = cls._identify_common(variables, ignore, target)
# Identify all CF ancillary data variables.
for nc_var_name, nc_var in six.iteritems(target):
# Check for ancillary data variable references.
nc_var_att = getattr(nc_var, cls.cf_identity, None)
if nc_var_att is not None:
for name in nc_var_att.split():
if name not in ignore:
if name not in variables:
if warn:
message = 'Missing CF-netCDF ancillary data variable %r, referenced by netCDF variable %r'
warnings.warn(message % (name, nc_var_name))
else:
result[name] = CFAncillaryDataVariable(name, variables[name])
return result
class CFAuxiliaryCoordinateVariable(CFVariable):
"""
A CF-netCDF auxiliary coordinate variable is any netCDF variable that contains
coordinate data, but is not a CF-netCDF coordinate variable by definition.
There is no relationship between the name of a CF-netCDF auxiliary coordinate
variable and the name(s) of its dimension(s).
Identified by the CF-netCDF variable attribute 'coordinates'.
Also see :class:`iris.fileformats.cf.CFLabelVariable`.
Ref: [CF] Chapter 5. Coordinate Systems.
[CF] Section 6.2. Alternative Coordinates.
"""
cf_identity = 'coordinates'
@classmethod
def identify(cls, variables, ignore=None, target=None, warn=True):
result = {}
ignore, target = cls._identify_common(variables, ignore, target)
# Identify all CF auxiliary coordinate variables.
for nc_var_name, nc_var in six.iteritems(target):
# Check for auxiliary coordinate variable references.
nc_var_att = getattr(nc_var, cls.cf_identity, None)
if nc_var_att is not None:
for name in nc_var_att.split():
if name not in ignore:
if name not in variables:
if warn:
message = 'Missing CF-netCDF auxiliary coordinate variable %r, referenced by netCDF variable %r'
warnings.warn(message % (name, nc_var_name))
else:
# Restrict to non-string type i.e. not a CFLabelVariable.
if not _is_str_dtype(variables[name]):
result[name] = CFAuxiliaryCoordinateVariable(name, variables[name])
return result
class CFBoundaryVariable(CFVariable):
"""
A CF-netCDF boundary variable is associated with a CF-netCDF variable that contains
coordinate data. When a data value provides information about conditions in a cell
occupying a region of space/time or some other dimension, the boundary variable
provides a description of cell extent.
A CF-netCDF boundary variable will have one more dimension than its associated
CF-netCDF coordinate variable or CF-netCDF auxiliary coordinate variable.
Identified by the CF-netCDF variable attribute 'bounds'.
Ref: [CF] Section 7.1. Cell Boundaries.
"""
cf_identity = 'bounds'
@classmethod
def identify(cls, variables, ignore=None, target=None, warn=True):
result = {}
ignore, target = cls._identify_common(variables, ignore, target)
# Identify all CF boundary variables.
for nc_var_name, nc_var in six.iteritems(target):
# Check for a boundary variable reference.
nc_var_att = getattr(nc_var, cls.cf_identity, None)
if nc_var_att is not None:
name = nc_var_att.strip()
if name not in ignore:
if name not in variables:
if warn:
message = 'Missing CF-netCDF boundary variable %r, referenced by netCDF variable %r'
warnings.warn(message % (name, nc_var_name))
else:
result[name] = CFBoundaryVariable(name, variables[name])
return result
def spans(self, cf_variable):
"""
Determine whether the dimensionality of this variable
is a subset of the specified target variable.
Note that, by default scalar variables always span the
dimensionality of the target variable.
Args:
* cf_variable:
Compare dimensionality with the :class:`CFVariable`.
Returns:
Boolean.
"""
# Scalar variables always span the target variable.
result = True
if self.dimensions:
source = self.dimensions
target = cf_variable.dimensions
# Ignore the bounds extent dimension.
result = set(source[:-1]).issubset(target) or \
set(source[1:]).issubset(target)
return result
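# Illustrative: a bounds variable with dimensions ('time', 'nv') spans a
# coordinate with dimensions ('time',), because once the trailing extent
# dimension 'nv' is ignored, {'time'} is a subset of the target's dimensions.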
class CFClimatologyVariable(CFVariable):
"""
A CF-netCDF climatology variable is associated with a CF-netCDF variable that contains
coordinate data. When a data value provides information about conditions in a cell
occupying a region of space/time or some other dimension, the climatology variable
provides a climatological description of cell extent.
A CF-netCDF climatology variable will have one more dimension than its associated
CF-netCDF coordinate variable.
Identified by the CF-netCDF variable attribute 'climatology'.
Ref: [CF] Section 7.4. Climatological Statistics
"""
cf_identity = 'climatology'
@classmethod
def identify(cls, variables, ignore=None, target=None, warn=True):
result = {}
ignore, target = cls._identify_common(variables, ignore, target)
# Identify all CF climatology variables.
for nc_var_name, nc_var in six.iteritems(target):
# Check for a climatology variable reference.
nc_var_att = getattr(nc_var, cls.cf_identity, None)
if nc_var_att is not None:
name = nc_var_att.strip()
if name not in ignore:
if name not in variables:
if warn:
message = 'Missing CF-netCDF climatology variable %r, referenced by netCDF variable %r'
warnings.warn(message % (name, nc_var_name))
else:
result[name] = CFClimatologyVariable(name, variables[name])
return result
def spans(self, cf_variable):
"""
Determine whether the dimensionality of this variable
is a subset of the specified target variable.
Note that, by default scalar variables always span the
dimensionality of the target variable.
Args:
* cf_variable:
Compare dimensionality with the :class:`CFVariable`.
Returns:
Boolean.
"""
# Scalar variables always span the target variable.
result = True
if self.dimensions:
source = self.dimensions
target = cf_variable.dimensions
# Ignore the climatology extent dimension.
result = set(source[:-1]).issubset(target) or \
set(source[1:]).issubset(target)
return result
class CFCoordinateVariable(CFVariable):
"""
A CF-netCDF coordinate variable is a one-dimensional variable with the same name
as its dimension, and it is defined as a numeric data type with values that are
ordered monotonically. Missing values are not allowed in CF-netCDF coordinate
variables. Also see [NUG] Section 2.3.1.
Identified by the above criterion, there is no associated CF-netCDF variable
attribute.
Ref: [CF] 1.2. Terminology.
"""
@classmethod
def identify(cls, variables, ignore=None, target=None, warn=True, monotonic=False):
result = {}
ignore, target = cls._identify_common(variables, ignore, target)
# Identify all CF coordinate variables.
for nc_var_name, nc_var in six.iteritems(target):
if nc_var_name in ignore:
continue
# String variables can't be coordinates
if _is_str_dtype(nc_var):
continue
# Restrict to one-dimensional with name as dimension OR zero-dimensional scalar
if not ((nc_var.ndim == 1 and nc_var_name in nc_var.dimensions) or (nc_var.ndim == 0)):
continue
# Restrict to monotonic?
if monotonic:
data = nc_var[:]
# Gracefully fill a masked coordinate.
if ma.isMaskedArray(data):
data = ma.filled(data)
if nc_var.shape == () or nc_var.shape == (1,) or iris.util.monotonic(data):
result[nc_var_name] = CFCoordinateVariable(nc_var_name, nc_var)
else:
result[nc_var_name] = CFCoordinateVariable(nc_var_name, nc_var)
return result
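# Illustrative: identify() accepts a numeric one-dimensional variable whose
# name matches its own dimension (e.g. a float64 'time' variable with
# dimensions ('time',)); with monotonic=True, values such as [0, 6, 12]
# qualify while [0, 12, 6] are rejected.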
class CFDataVariable(CFVariable):
"""
A CF-netCDF variable containing data pay-load that maps to an Iris :class:`iris.cube.Cube`.
"""
@classmethod
def identify(cls, variables, ignore=None, target=None, warn=True):
raise NotImplementedError
class _CFFormulaTermsVariable(CFVariable):
"""
A CF-netCDF formula terms variable corresponds to a term in a formula that
allows dimensional vertical coordinate values to be computed from dimensionless
vertical coordinate values and associated variables at specific grid points.
Identified by the CF-netCDF variable attribute 'formula_terms'.
Ref: [CF] Section 4.3.2. Dimensional Vertical Coordinate.
[CF] Appendix D. Dimensionless Vertical Coordinates.
"""
cf_identity = 'formula_terms'
def __init__(self, name, data, formula_root, formula_term):
CFVariable.__init__(self, name, data)
# Register the formula root and term relationship.
self.add_formula_term(formula_root, formula_term)
@classmethod
def identify(cls, variables, ignore=None, target=None, warn=True):
result = {}
ignore, target = cls._identify_common(variables, ignore, target)
# Identify all CF formula terms variables.
for nc_var_name, nc_var in six.iteritems(target):
# Check for formula terms variable references.
nc_var_att = getattr(nc_var, cls.cf_identity, None)
if nc_var_att is not None:
for match_item in _CF_PARSE.finditer(nc_var_att):
match_group = match_item.groupdict()
# Ensure that term name is lower case, as expected.
term_name = match_group['lhs'].lower()
variable_name = match_group['rhs']
if variable_name not in ignore:
if variable_name not in variables:
if warn:
message = 'Missing CF-netCDF formula term variable %r, referenced by netCDF variable %r'
warnings.warn(message % (variable_name, nc_var_name))
else:
if variable_name not in result:
result[variable_name] = _CFFormulaTermsVariable(variable_name,
variables[variable_name],
nc_var_name, term_name)
else:
result[variable_name].add_formula_term(nc_var_name, term_name)
return result
def __repr__(self):
return '%s(%r, %r, %r)' % (self.__class__.__name__,
self.cf_name, self.cf_data,
self.cf_terms_by_root)
class CFGridMappingVariable(CFVariable):
"""
A CF-netCDF grid mapping variable contains a list of specific attributes that
define a particular grid mapping. A CF-netCDF grid mapping variable must contain
the attribute 'grid_mapping_name'.
Based on the value of the 'grid_mapping_name' attribute, there are associated
standard names of CF-netCDF coordinate variables that contain the mapping's
independent variables.
Identified by the CF-netCDF variable attribute 'grid_mapping'.
Ref: [CF] Section 5.6. Horizontal Coordinate Reference Systems, Grid Mappings, and Projections.
[CF] Appendix F. Grid Mappings.
"""
cf_identity = 'grid_mapping'
@classmethod
def identify(cls, variables, ignore=None, target=None, warn=True):
result = {}
ignore, target = cls._identify_common(variables, ignore, target)
# Identify all grid mapping variables.
for nc_var_name, nc_var in six.iteritems(target):
# Check for a grid mapping variable reference.
nc_var_att = getattr(nc_var, cls.cf_identity, None)
if nc_var_att is not None:
name = nc_var_att.strip()
if name not in ignore:
if name not in variables:
if warn:
message = 'Missing CF-netCDF grid mapping variable %r, referenced by netCDF variable %r'
warnings.warn(message % (name, nc_var_name))
else:
result[name] = CFGridMappingVariable(name, variables[name])
return result
class CFLabelVariable(CFVariable):
"""
    A CF-netCDF label variable is any netCDF variable that contains string
textual information, or labels.
Identified by the CF-netCDF variable attribute 'coordinates'.
Also see :class:`iris.fileformats.cf.CFAuxiliaryCoordinateVariable`.
Ref: [CF] Section 6.1. Labels.
"""
cf_identity = 'coordinates'
@classmethod
def identify(cls, variables, ignore=None, target=None, warn=True):
result = {}
ignore, target = cls._identify_common(variables, ignore, target)
# Identify all CF label variables.
for nc_var_name, nc_var in six.iteritems(target):
# Check for label variable references.
nc_var_att = getattr(nc_var, cls.cf_identity, None)
if nc_var_att is not None:
for name in nc_var_att.split():
if name not in ignore:
if name not in variables:
if warn:
message = 'Missing CF-netCDF label variable %r, referenced by netCDF variable %r'
warnings.warn(message % (name, nc_var_name))
else:
# Register variable, but only allow string type.
var = variables[name]
if _is_str_dtype(var):
result[name] = CFLabelVariable(name, var)
return result
def cf_label_data(self, cf_data_var):
"""
Return the associated CF-netCDF label variable strings.
Args:
* cf_data_var (:class:`iris.fileformats.cf.CFDataVariable`):
The CF-netCDF data variable which the CF-netCDF label variable describes.
Returns:
String labels.
"""
if not isinstance(cf_data_var, CFDataVariable):
raise TypeError('cf_data_var argument should be of type CFDataVariable. Got %r.' % type(cf_data_var))
# Determine the name of the label string (or length) dimension by
# finding the dimension name that doesn't exist within the data dimensions.
str_dim_name = list(set(self.dimensions) - set(cf_data_var.dimensions))
if len(str_dim_name) != 1:
raise ValueError('Invalid string dimensions for CF-netCDF label variable %r' % self.cf_name)
str_dim_name = str_dim_name[0]
label_data = self[:]
if isinstance(label_data, ma.MaskedArray):
label_data = label_data.filled()
# Determine whether we have a string-valued scalar label
# i.e. a character variable that only has one dimension (the length of the string).
if self.ndim == 1:
data = np.array([''.join(label_data).strip()])
else:
# Determine the index of the string dimension.
str_dim = self.dimensions.index(str_dim_name)
# Calculate new label data shape (without string dimension) and create payload array.
new_shape = tuple(dim_len for i, dim_len in enumerate(self.shape) if i != str_dim)
string_basetype = '|S%d' if six.PY2 else '|U%d'
string_dtype = string_basetype % self.shape[str_dim]
data = np.empty(new_shape, dtype=string_dtype)
for index in np.ndindex(new_shape):
# Create the slice for the label data.
if str_dim == 0:
label_index = (slice(None, None),) + index
else:
label_index = index + (slice(None, None),)
label_string = b''.join(label_data[label_index]).strip()
if six.PY3:
label_string = label_string.decode('utf8')
data[index] = label_string
return data
def cf_label_dimensions(self, cf_data_var):
"""
Return the name of the associated CF-netCDF label variable data dimensions.
Args:
* cf_data_var (:class:`iris.fileformats.cf.CFDataVariable`):
The CF-netCDF data variable which the CF-netCDF label variable describes.
Returns:
Tuple of label data dimension names.
"""
if not isinstance(cf_data_var, CFDataVariable):
raise TypeError('cf_data_var argument should be of type CFDataVariable. Got %r.' % type(cf_data_var))
return tuple([dim_name for dim_name in self.dimensions if dim_name in cf_data_var.dimensions])
def spans(self, cf_variable):
"""
Determine whether the dimensionality of this variable
is a subset of the specified target variable.
Note that, by default scalar variables always span the
dimensionality of the target variable.
Args:
* cf_variable:
Compare dimensionality with the :class:`CFVariable`.
Returns:
Boolean.
"""
# Scalar variables always span the target variable.
result = True
if self.dimensions:
source = self.dimensions
target = cf_variable.dimensions
# Ignore label string length dimension.
result = set(source[:-1]).issubset(target) or \
set(source[1:]).issubset(target)
return result
class CFMeasureVariable(CFVariable):
"""
A CF-netCDF measure variable is a variable that contains cell areas or volumes.
Identified by the CF-netCDF variable attribute 'cell_measures'.
Ref: [CF] Section 7.2. Cell Measures.
"""
cf_identity = 'cell_measures'
def __init__(self, name, data, measure):
CFVariable.__init__(self, name, data)
#: Associated cell measure of the cell variable
self.cf_measure = measure
@classmethod
def identify(cls, variables, ignore=None, target=None, warn=True):
result = {}
ignore, target = cls._identify_common(variables, ignore, target)
# Identify all CF measure variables.
for nc_var_name, nc_var in six.iteritems(target):
# Check for measure variable references.
nc_var_att = getattr(nc_var, cls.cf_identity, None)
if nc_var_att is not None:
for match_item in _CF_PARSE.finditer(nc_var_att):
match_group = match_item.groupdict()
measure = match_group['lhs']
variable_name = match_group['rhs']
if variable_name not in ignore:
if variable_name not in variables:
if warn:
message = 'Missing CF-netCDF measure variable %r, referenced by netCDF variable %r'
warnings.warn(message % (variable_name, nc_var_name))
else:
result[variable_name] = CFMeasureVariable(variable_name, variables[variable_name], measure)
return result
################################################################################
class CFGroup(MutableMapping, object):
"""
Represents a collection of 'NetCDF Climate and Forecast (CF) Metadata
Conventions' variables and netCDF global attributes.
"""
def __init__(self):
#: Collection of CF-netCDF variables
self._cf_variables = {}
#: Collection of netCDF global attributes
self.global_attributes = {}
#: Collection of CF-netCDF variables promoted to a CFDataVariable.
self.promoted = {}
def _cf_getter(self, cls):
# Generate dictionary with dictionary comprehension.
return {cf_name: cf_var
for cf_name, cf_var in six.iteritems(self._cf_variables)
if isinstance(cf_var, cls)}
@property
def ancillary_variables(self):
"""Collection of CF-netCDF ancillary variables."""
return self._cf_getter(CFAncillaryDataVariable)
@property
def auxiliary_coordinates(self):
"""Collection of CF-netCDF auxiliary coordinate variables."""
return self._cf_getter(CFAuxiliaryCoordinateVariable)
@property
def bounds(self):
"""Collection of CF-netCDF boundary variables."""
return self._cf_getter(CFBoundaryVariable)
@property
def climatology(self):
"""Collection of CF-netCDF climatology variables."""
return self._cf_getter(CFClimatologyVariable)
@property
def coordinates(self):
"""Collection of CF-netCDF coordinate variables."""
return self._cf_getter(CFCoordinateVariable)
@property
def data_variables(self):
"""Collection of CF-netCDF data pay-load variables."""
return self._cf_getter(CFDataVariable)
@property
def formula_terms(self):
"""Collection of CF-netCDF variables that participate in a CF-netCDF formula term."""
return {cf_name: cf_var
for cf_name, cf_var in six.iteritems(self._cf_variables)
if cf_var.has_formula_terms()}
@property
def grid_mappings(self):
"""Collection of CF-netCDF grid mapping variables."""
return self._cf_getter(CFGridMappingVariable)
@property
def labels(self):
"""Collection of CF-netCDF label variables."""
return self._cf_getter(CFLabelVariable)
@property
def cell_measures(self):
"""Collection of CF-netCDF measure variables."""
return self._cf_getter(CFMeasureVariable)
def keys(self):
"""Return the names of all the CF-netCDF variables in the group."""
return self._cf_variables.keys()
def __len__(self):
return len(self._cf_variables)
def __iter__(self):
for item in self._cf_variables:
yield item
def __setitem__(self, name, variable):
if not isinstance(variable, CFVariable):
raise TypeError('Attempted to add an invalid CF-netCDF variable to the %s' % self.__class__.__name__)
if name != variable.cf_name:
raise ValueError('Mismatch between key name %r and CF-netCDF variable name %r' % (str(name), variable.cf_name))
self._cf_variables[name] = variable
def __getitem__(self, name):
if name not in self._cf_variables:
raise KeyError('Cannot get unknown CF-netCDF variable name %r' % str(name))
return self._cf_variables[name]
def __delitem__(self, name):
if name not in self._cf_variables:
raise KeyError('Cannot delete unknown CF-netcdf variable name %r' % str(name))
del self._cf_variables[name]
def __repr__(self):
result = []
result.append('variables:%d' % len(self._cf_variables))
result.append('global_attributes:%d' % len(self.global_attributes))
result.append('promoted:%d' % len(self.promoted))
return '<%s of %s>' % (self.__class__.__name__, ', '.join(result))
################################################################################
class CFReader(object):
"""
This class allows the contents of a netCDF file to be interpreted according
to the 'NetCDF Climate and Forecast (CF) Metadata Conventions'.
"""
def __init__(self, filename, warn=False, monotonic=False):
self._filename = os.path.expanduser(filename)
# All CF variable types EXCEPT for the "special cases" of
# CFDataVariable, CFCoordinateVariable and _CFFormulaTermsVariable.
self._variable_types = (CFAncillaryDataVariable, CFAuxiliaryCoordinateVariable,
CFBoundaryVariable, CFClimatologyVariable,
CFGridMappingVariable, CFLabelVariable, CFMeasureVariable)
#: Collection of CF-netCDF variables associated with this netCDF file
self.cf_group = CFGroup()
self._dataset = netCDF4.Dataset(self._filename, mode='r')
# Issue load optimisation warning.
if warn and self._dataset.file_format in ['NETCDF3_CLASSIC', 'NETCDF3_64BIT']:
warnings.warn('Optimise CF-netCDF loading by converting data from NetCDF3 ' \
'to NetCDF4 file format using the "nccopy" command.')
self._check_monotonic = monotonic
self._translate()
self._build_cf_groups()
self._reset()
def __repr__(self):
return '%s(%r)' % (self.__class__.__name__, self._filename)
def _translate(self):
"""Classify the netCDF variables into CF-netCDF variables."""
netcdf_variable_names = list(self._dataset.variables.keys())
# Identify all CF coordinate variables first. This must be done
# first as, by CF convention, the definition of a CF auxiliary
# coordinate variable may include a scalar CF coordinate variable,
# whereas we want these two types of variables to be mutually exclusive.
coords = CFCoordinateVariable.identify(self._dataset.variables,
monotonic=self._check_monotonic)
self.cf_group.update(coords)
coordinate_names = list(self.cf_group.coordinates.keys())
# Identify all CF variables EXCEPT for the "special cases".
for variable_type in self._variable_types:
# Prevent grid mapping variables being mis-identified as CF coordinate variables.
ignore = None if issubclass(variable_type, CFGridMappingVariable) else coordinate_names
self.cf_group.update(variable_type.identify(self._dataset.variables, ignore=ignore))
# Identify global netCDF attributes.
attr_dict = {attr_name: _getncattr(self._dataset, attr_name, '') for
attr_name in self._dataset.ncattrs()}
self.cf_group.global_attributes.update(attr_dict)
# Identify and register all CF formula terms.
formula_terms = _CFFormulaTermsVariable.identify(self._dataset.variables)
for cf_var in six.itervalues(formula_terms):
for cf_root, cf_term in six.iteritems(cf_var.cf_terms_by_root):
# Ignore formula terms owned by a bounds variable.
if cf_root not in self.cf_group.bounds:
cf_name = cf_var.cf_name
if cf_var.cf_name not in self.cf_group:
self.cf_group[cf_name] = CFAuxiliaryCoordinateVariable(cf_name, cf_var.cf_data)
self.cf_group[cf_name].add_formula_term(cf_root, cf_term)
# Determine the CF data variables.
data_variable_names = set(netcdf_variable_names) - set(self.cf_group.ancillary_variables) - \
set(self.cf_group.auxiliary_coordinates) - set(self.cf_group.bounds) - \
set(self.cf_group.climatology) - set(self.cf_group.coordinates) - \
set(self.cf_group.grid_mappings) - set(self.cf_group.labels) - \
set(self.cf_group.cell_measures)
for name in data_variable_names:
self.cf_group[name] = CFDataVariable(name, self._dataset.variables[name])
def _build_cf_groups(self):
"""Build the first order relationships between CF-netCDF variables."""
def _build(cf_variable):
coordinate_names = list(self.cf_group.coordinates.keys())
cf_group = CFGroup()
# Build CF variable relationships.
for variable_type in self._variable_types:
# Prevent grid mapping variables being mis-identified as
# CF coordinate variables.
ignore = None if issubclass(variable_type, CFGridMappingVariable) else coordinate_names
match = variable_type.identify(self._dataset.variables, ignore=ignore,
target=cf_variable.cf_name, warn=False)
# Sanity check dimensionality coverage.
for cf_name, cf_var in six.iteritems(match):
if cf_var.spans(cf_variable):
cf_group[cf_name] = self.cf_group[cf_name]
else:
# Register the ignored variable.
# N.B. 'ignored' variable from enclosing scope.
ignored.add(cf_name)
msg = 'Ignoring variable {!r} referenced ' \
'by variable {!r}: Dimensions {!r} do not ' \
'span {!r}'.format(cf_name,
cf_variable.cf_name,
cf_var.dimensions,
cf_variable.dimensions)
warnings.warn(msg)
# Build CF data variable relationships.
if isinstance(cf_variable, CFDataVariable):
# Add global netCDF attributes.
cf_group.global_attributes.update(self.cf_group.global_attributes)
# Add appropriate "dimensioned" CF coordinate variables.
cf_group.update({cf_name: self.cf_group[cf_name] for cf_name
in cf_variable.dimensions if cf_name in
self.cf_group.coordinates})
# Add appropriate "dimensionless" CF coordinate variables.
coordinates_attr = getattr(cf_variable, 'coordinates', '')
cf_group.update({cf_name: self.cf_group[cf_name] for cf_name
in coordinates_attr.split() if cf_name in
self.cf_group.coordinates})
# Add appropriate formula terms.
for cf_var in six.itervalues(self.cf_group.formula_terms):
for cf_root in cf_var.cf_terms_by_root:
if cf_root in cf_group and cf_var.cf_name not in cf_group:
# Sanity check dimensionality.
if cf_var.spans(cf_variable):
cf_group[cf_var.cf_name] = cf_var
else:
# Register the ignored variable.
# N.B. 'ignored' variable from enclosing scope.
ignored.add(cf_var.cf_name)
msg = 'Ignoring formula terms variable {!r} ' \
'referenced by data variable {!r} via ' \
'variable {!r}: Dimensions {!r} do not ' \
'span {!r}'.format(cf_var.cf_name,
cf_variable.cf_name,
cf_root,
cf_var.dimensions,
cf_variable.dimensions)
warnings.warn(msg)
# Add the CF group to the variable.
cf_variable.cf_group = cf_group
# Ignored variables are those that cannot be attached to a
# data variable as the dimensionality of that variable is not
# a subset of the dimensionality of the data variable.
ignored = set()
for cf_variable in six.itervalues(self.cf_group):
_build(cf_variable)
# Determine whether there are any formula terms that
# may be promoted to a CFDataVariable.
if iris.FUTURE.netcdf_promote:
# Restrict promotion to only those formula terms
# that are reference surface/phenomenon.
for cf_var in six.itervalues(self.cf_group.formula_terms):
for cf_root, cf_term in six.iteritems(cf_var.cf_terms_by_root):
cf_root_var = self.cf_group[cf_root]
name = cf_root_var.standard_name or cf_root_var.long_name
terms = reference_terms.get(name, [])
if isinstance(terms, six.string_types) or \
not isinstance(terms, Iterable):
terms = [terms]
cf_var_name = cf_var.cf_name
if cf_term in terms and \
cf_var_name not in self.cf_group.promoted:
data_var = CFDataVariable(cf_var_name, cf_var.cf_data)
self.cf_group.promoted[cf_var_name] = data_var
_build(data_var)
break
# Promote any ignored variables.
promoted = set()
not_promoted = ignored.difference(promoted)
while not_promoted:
cf_name = not_promoted.pop()
if cf_name not in self.cf_group.data_variables and \
cf_name not in self.cf_group.promoted:
data_var = CFDataVariable(cf_name,
self.cf_group[cf_name].cf_data)
self.cf_group.promoted[cf_name] = data_var
_build(data_var)
# Determine whether there are still any ignored variables
# yet to be promoted.
promoted.add(cf_name)
not_promoted = ignored.difference(promoted)
else:
_netcdf_promote_warning()
def _reset(self):
"""Reset the attribute touch history of each variable."""
for nc_var_name in six.iterkeys(self._dataset.variables):
self.cf_group[nc_var_name].cf_attrs_reset()
def __del__(self):
# Explicitly close dataset to prevent file remaining open.
self._dataset.close()
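# Illustrative usage sketch ('air_temp.nc' is hypothetical): after
# construction, each data variable carries its own CF group of related
# variables.
#
#     cf = CFReader('air_temp.nc')
#     for name, var in cf.cf_group.data_variables.items():
#         print(name, list(var.cf_group.coordinates))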
def _getncattr(dataset, attr, default=None):
"""
Simple wrapper round `netCDF4.Dataset.getncattr` to make it behave
more like `getattr`.
"""
try:
value = dataset.getncattr(attr)
except AttributeError:
value = default
return value
def _netcdf_promote_warning():
msg = ('NetCDF default loading behaviour currently does not expose '
'variables which define reference surfaces for dimensionless '
'vertical coordinates as independent Cubes. This behaviour is '
'deprecated in favour of automatic promotion to Cubes. To switch '
'to the new behaviour, set iris.FUTURE.netcdf_promote to True.')
warnings.warn(msg)
|
andrewcbennett/iris
|
lib/iris/fileformats/cf.py
|
Python
|
gpl-3.0
| 44,920
|
[
"NetCDF"
] |
595068e20d7b21d3cddbf91cc6030f5b5e0b1857516e97abcd3263c6fbdf79bf
|
# The MIT License
#
# Copyright (c) 2008
# Shibzoukhov Zaur Moukhadinovich
# szport@gmail.com
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
try:
from copyreg import dispatch_table
except ImportError:
from copy_reg import dispatch_table
from pyon import dumpcache
import sys
import os
__all__ = ['dumps', 'currentScope']
null = object()
NoneType = type(None)
from types import BuiltinFunctionType, BuiltinMethodType, FunctionType, MethodType
from sys import version_info
py_version = version_info[0]*10 + version_info[1]
if py_version >= 30:
simpleTypes = (NoneType, int, str, bool, float, bytes)
else:
simpleTypes = (NoneType, int, long, str, bool, float, bytes)
def currentScope(given=None, level=1):
frame = sys._getframe(level)
scope = dict(frame.f_globals)
scope.update(frame.f_locals)
if given:
scope.update(given)
return scope
def _sortkey1(item):
key, value = item
return type(key).__name__, key, value
def _sortkey2(item):
return type(item[0]).__name__
def _safe_sorted(items):
try:
return sorted(items)
except TypeError:
try:
return sorted(items, key=_sortkey1)
except TypeError:
return sorted(items, key=_sortkey2)
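# Illustrative: with keys of mixed types, plain sorted() raises TypeError on
# Python 3, so _safe_sorted falls back to ordering by type name first:
#
#     >>> _safe_sorted({1: 'a', 'b': 2}.items())
#     [(1, 'a'), ('b', 2)]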
# cache for visit functions
method_cache = {}
def cache_method(tp, repr_func=None):
"""Decorator for caching methods"""
if repr_func is None:
def func(m, type=tp):
if isinstance(tp, list):
for x in tp:
method_cache[x] = m
else:
method_cache[tp] = m
return m
return func
else:
method_cache[tp] = repr_func
for tp in simpleTypes:
cache_method(tp, repr)
MODULE_NAMES = ('__cache__', '__main__', '__builtin__', 'builtins')
class DumpContext(object):
def __init__(self, fast=False, classdef=False, given=None, prefix='_p__', sorted=False, pretty=False):
self.fast = fast
self.classdef = classdef
self.assigns = []
self.pretty = pretty
#self.nl = pretty and os.linesep or ''
self.nl = pretty and '\n' or ''
self.reprs = {}
self.typeNames = {}
self.prefix = prefix
self.sorted = sorted
if given:
self.given = dict((id(o),name) for name, o in given.items())
else:
self.given = {}
self.n = 0
def dump_it(self, text, offset, start=None):
if start is None:
return self.nl + offset + text
else:
return start + text
def visit(self, o, offset, start=None):
if isinstance(o, type):
return visit_type(self, o, offset, start)
method = method_cache.get(o.__class__, visit_object)
if start is None:
real_offset = self.nl + offset
else:
real_offset = start
if method is repr:
return real_offset + repr(o)
if self.fast:
return method(self, o, offset, start)
else:
oId = id(o)
if oId in self.objects_cache:
varName = self.reprs.get(oId, None)
if varName is None:
varName = self.given.get(oId, self.prefix + str(self.n))
self.n += 1
self.reprs[oId] = varName
oRepr = method(self, o, '', start)
self.assigns.append(varName + "=" + oRepr)
return real_offset + varName
else:
return real_offset + method(self, o, offset, start)
#
@cache_method(property)
def visit_property(self, o, offset, start=None):
items = [f for f in (o.fget, o.fset, o.fdel, o.__doc__) if f is not None]
return 'property(' + dump_items(self, items, offset, '') + ')'
#
@cache_method(list)
def visit_list(self, o, offset, start=None):
offset1 = self.pretty and offset + ' ' or ''
n = len(o)
if n == 0:
return '[]'
elif n == 1:
return '[' + visit(self, o[0], offset1).lstrip() + ']'
else:
return '[' + dump_items(self, o, offset1) + dump_it(self, ']', offset)
#
@cache_method(tuple)
def visit_tuple(self, o, offset, start=None):
offset1 = self.pretty and offset + ' ' or ''
n = len(o)
if n == 0:
return '()'
elif n == 1:
return '(' + visit(self, o[0], offset1).lstrip() + ',)'
else:
return '(' + dump_items(self, o, offset1) + dump_it(self, ')', offset)
#
@cache_method(dict)
def visit_dict(self, o, offset, start=None):
offset1 = self.pretty and offset + ' ' or ''
n = len(o)
if n == 0:
return '{}'
elif n == 1:
        # peek at the single item without mutating the caller's dict
        key, value = next(iter(o.items()))
        return '{' + visit(self, key, offset1) + ':' + visit(self, value, offset1, '') + '}'
else:
return '{' + dump_mapping(self, o, offset1) + dump_it(self, '}', offset)
#
@cache_method(set)
def visit_set(self, o, offset, start=None):
offset1 = self.pretty and offset + ' ' or ''
n = len(o)
if n == 0:
return 'set()'
elif n == 1:
        return '{' + visit(self, next(iter(o)), offset1).lstrip() + '}'  # don't mutate the caller's set
else:
return '{' + dump_items(self, o, offset1) + dump_it(self, '}', offset)
#
@cache_method(frozenset)
def visit_frozenset(self, o, offset, start=None):
offset1 = self.pretty and offset + ' ' or ''
n = len(o)
if n == 0:
return 'frozenset()'
elif n == 1:
        return 'frozenset([' + visit(self, next(iter(o)), offset1).lstrip() + '])'  # frozenset has no pop()
else:
return 'frozenset([' + dump_items(self, o, offset1) + dump_it(self, '])', offset)
#
def dump_items(self, items, offset, start=None):
return ','.join(visit(self, item, offset, start) for item in items)
#
def dump_mapping(self, mapping, offset, start=None):
    items = mapping.items()
    if self.sorted:
        items = _safe_sorted(items)
    return ','.join(visit(self, k, offset) + ':' + visit(self, v, offset, '') for k, v in items)
#
def dump_kwitems(self, kwitems, offset, start=None):
if start is None:
real_offset = self.nl + offset
else:
real_offset = start
    items = kwitems.items()
    if self.sorted:
        items = _safe_sorted(items)
    return ','.join(real_offset + k + '=' + visit(self, v, offset, '') for k, v in items)
#
@cache_method(FunctionType)
def visit_function(self, o, offset, start=None):
if o.__module__ in MODULE_NAMES:
name = o.__name__
else:
name = o.__module__ + '.' + o.__name__
return name
#
@cache_method(MethodType)
def visit_method(self, o, offset, start=None):
return visit(self, o.__self__, offset, start) + "." + o.__func__.__name__
#
@cache_method([BuiltinFunctionType, BuiltinMethodType])
def visit_builtin_function_or_method(self, o, offset, start=None):
if o.__module__ in MODULE_NAMES:
name = o.__name__
else:
name = o.__module__ + '.' + o.__name__
return name
#
@cache_method(type)
def visit_type(self, o, offset, start=None):
if not self.classdef:
if o.__module__ in MODULE_NAMES:
name = o.__name__
else:
name = o.__module__ + '.' + o.__name__
return name
offset1 = self.pretty and offset + ' ' or ''
try:
metatype = o.__metaclass__
    except AttributeError:
metatype = o.__class__
if metatype == type:
return o.__name__
else:
factory = metatype
args = (o.__name__, o.__bases__)
if factory.__module__ in MODULE_NAMES:
name = factory.__name__
else:
if isinstance(factory, type):
name = factory.__module__ + '.' + factory.__name__
else:
name = factory.__class__.__module__ + '.' + factory.__name__
ret = name + '('
if args:
ret += dump_items(self, args, offset1) + ','
kwargs = dict(o.__dict__)
        kwargs.pop('__dict__', None)
        kwargs.pop('__weakref__', None)
if '__metaclass__' in kwargs:
kwargs.pop('__metaclass__')
if kwargs:
if '__module__' in kwargs:
kwargs.pop('__module__')
if kwargs['__doc__'] is None:
kwargs.pop('__doc__')
ret += dump_kwitems(self, kwargs, offset1)
if ret[-1] == ',':
ret = ret[:-1]
ret += ')'
return ret
#
@cache_method(object)
def visit_object(self, o, offset, start=None):
    reduce = dispatch_table.get(type(o))
    if reduce:
        # a copyreg dispatch entry returns a reduce-style state tuple
        state = reduce(o)
        return with_reduce(self, o, state, offset, start)
reduce = getattr(o, '__reduce_ex__', None)
if reduce:
state = reduce(3)
return with_reduce(self, o, state, offset, start)
else:
reduce = getattr(o, '__reduce__', None)
if reduce:
state = reduce()
return with_reduce(self, o, state, offset, start)
else:
return without_reduce(self, o, offset, start)
#
def with_reduce(self, o, state, offset, start=None):
name = visit(self, state[0], offset).lstrip()
offset1 = self.pretty and offset + ' ' or ''
if start is None:
real_offset = self.nl + offset1
else:
real_offset = offset + start
n = len(state)
ret = ''
if n > 0:
ret += name + '('
if n > 1:
if state[1]:
ret += dump_items(self, state[1], offset1) + ','
if n > 2:
if state[2]:
ret += dump_kwitems(self, state[2], offset1) +','
if n > 3:
if state[3]:
offset2 = self.pretty and offset1 + ' ' or ''
ret += real_offset + '*[' + dump_items(self, state[3], offset2) + dump_it(self, '],', offset1)
if n > 4:
if state[4]:
offset3 = self.pretty and offset1 + ' ' or ''
ret += real_offset + '**{' + dump_mapping(self, state[4], offset3) + dump_it(self, '}', offset1)
if ret[-1] == ',':
ret = ret[:-1]
ret += dump_it(self, ')', offset)
return ret
#
def without_reduce(self, o, offset, start=None):
clsname = o.__class__.__name__
newargs = None
getnewargs = getattr(o, '__getnewargs__', None)
if getnewargs:
newargs = getnewargs()
state = None
if hasattr(o, '__getstate__'):
state = o.__getstate__()
else:
state = getattr(o, '__dict__', None)
if state is None:
state = {}
for name in o.__slots__:
value = getattr(o, name, null)
if value is not null:
state[name] = value
offset1 = self.pretty and offset + ' ' or ''
    # build a constructor-style representation from __getnewargs__ (positional
    # arguments) and the attribute state mapping (keyword arguments)
    ret = clsname + '('
    if newargs:
        ret += dump_items(self, newargs, offset1) + ','
    if state:
        ret += dump_kwitems(self, state, offset1)
    if ret[-1] == ',':
        ret = ret[:-1]
    ret += ')'
return ret
def dumps(o, fast=False, classdef=False, pretty=False, sorted=True, given=None):
if not fast:
cacher = dumpcache.Cacher()
dumpcache.visit(cacher, o)
objects_info = dict((oId,n) for oId,n in cacher.objects_info.items() if n > 0)
objects_cache = dict((oId,o) for oId,o in cacher.objects_cache.items() if oId in objects_info)
_given = currentScope(given=given, level=2)
    context = DumpContext(fast=fast, classdef=classdef, given=_given, sorted=sorted, pretty=pretty)
if not fast:
context.objects_cache = objects_cache
text = visit(context, o, '').lstrip()
if context.assigns:
assigns = "\n".join(context.assigns)
else:
assigns = ""
return "\n".join(s for s in [assigns,text] if s)
|
intellimath/pyon
|
dump.py
|
Python
|
mit
| 13,023
|
[
"VisIt"
] |
211c18a86c43df01d4801f1fd94f6fdb211ee7bc0805ff4b27ec59f6c298af5e
|
"""Utilities for configuring ansible runtime environment."""
import json
import logging
import os
import pathlib
import re
import subprocess
import sys
from functools import lru_cache
from typing import Any, Dict, List, Optional, Tuple, Type, Union
import packaging
import tenacity
from packaging import version
from ansiblelint.config import (
ansible_collections_path,
collection_list,
options,
parse_ansible_version,
)
from ansiblelint.constants import (
ANSIBLE_DEFAULT_ROLES_PATH,
ANSIBLE_MIN_VERSION,
ANSIBLE_MISSING_RC,
ANSIBLE_MOCKED_MODULE,
INVALID_CONFIG_RC,
INVALID_PREREQUISITES_RC,
)
from ansiblelint.loaders import yaml_from_file
_logger = logging.getLogger(__name__)
def check_ansible_presence(exit_on_error: bool = False) -> Tuple[str, str]:
"""Assures we stop execution if Ansible is missing or outdated.
    Returns the found version and an optional error message if something
    wrong was detected.
"""
@lru_cache()
def _get_ver_err() -> Tuple[str, str]:
err = ""
failed = False
ver = ""
result = subprocess.run(
args=["ansible", "--version"],
stdout=subprocess.PIPE,
universal_newlines=True,
check=False,
)
if result.returncode != 0:
return (
ver,
"FATAL: Unable to retrieve ansible cli version: %s" % result.stdout,
)
ver, error = parse_ansible_version(result.stdout)
if error is not None:
return "", error
try:
# pylint: disable=import-outside-toplevel
from ansible.release import __version__ as ansible_module_version
if version.parse(ansible_module_version) < version.parse(
ANSIBLE_MIN_VERSION
):
failed = True
except (ImportError, ModuleNotFoundError) as e:
failed = True
ansible_module_version = "none"
err += f"{e}\n"
if failed:
err += (
"FATAL: ansible-lint requires a version of Ansible package"
" >= %s, but %s was found. "
"Please install a compatible version using the same python interpreter. See "
"https://docs.ansible.com/ansible/latest/installation_guide"
"/intro_installation.html#installing-ansible-with-pip"
% (ANSIBLE_MIN_VERSION, ansible_module_version)
)
elif ver != ansible_module_version:
err = (
f"FATAL: Ansible CLI ({ver}) and python module"
f" ({ansible_module_version}) versions do not match. This "
"indicates a broken execution environment."
)
return ver, err
ver, err = _get_ver_err()
if exit_on_error and err:
_logger.error(err)
sys.exit(ANSIBLE_MISSING_RC)
return ver, err
def install_collection(collection: str, destination: Optional[str] = None) -> None:
"""Install an Ansible collection.
Can accept version constraints like 'foo.bar:>=1.2.3'
"""
cmd = [
"ansible-galaxy",
"collection",
"install",
"--force", # required for ansible 2.9
"-v",
]
if destination:
cmd.extend(["-p", destination])
cmd.append(f"{collection}")
_logger.info("Running %s", " ".join(cmd))
run = subprocess.run(
cmd,
universal_newlines=True,
check=False,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
)
if run.returncode != 0:
_logger.error("Command returned %s code:\n%s", run.returncode, run.stdout)
sys.exit(INVALID_PREREQUISITES_RC)
@tenacity.retry( # Retry up to 3 times as galaxy server can return errors
reraise=True,
wait=tenacity.wait_fixed(30), # type: ignore
stop=tenacity.stop_after_attempt(3), # type: ignore
before_sleep=tenacity.after_log(_logger, logging.WARNING), # type: ignore
)
def install_requirements(requirement: str) -> None:
"""Install dependencies from a requirements.yml."""
if not os.path.exists(requirement):
return
cmd = [
"ansible-galaxy",
"role",
"install",
"--force", # required for ansible 2.9
"--roles-path",
f"{options.cache_dir}/roles",
"-vr",
f"{requirement}",
]
_logger.info("Running %s", " ".join(cmd))
run = subprocess.run(
cmd,
universal_newlines=True,
check=False,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
)
if run.returncode != 0:
_logger.error(run.stdout)
raise RuntimeError(run.returncode)
# Run galaxy collection install works on v2 requirements.yml
if "collections" in yaml_from_file(requirement):
cmd = [
"ansible-galaxy",
"collection",
"install",
"--force", # required for ansible 2.9
"-p",
f"{options.cache_dir}/collections",
"-vr",
f"{requirement}",
]
_logger.info("Running %s", " ".join(cmd))
run = subprocess.run(
cmd,
universal_newlines=True,
check=False,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
)
if run.returncode != 0:
_logger.error(run.stdout)
raise RuntimeError(run.returncode)
def prepare_environment(required_collections: Optional[Dict[str, str]] = None) -> None:
"""Make dependencies available if needed."""
if not options.configured:
# Allow method to be used without calling the command line, so we can
# reuse it in other tools, like molecule.
# pylint: disable=import-outside-toplevel,cyclic-import
from ansiblelint.__main__ import initialize_options
initialize_options()
if not options.offline:
install_requirements("requirements.yml")
for req in pathlib.Path(".").glob("molecule/*/requirements.yml"):
install_requirements(str(req))
if required_collections:
for name, min_version in required_collections.items():
install_collection(
f"{name}:>={min_version}",
destination=f"{options.cache_dir}/collections"
if options.cache_dir
else None,
)
_install_galaxy_role()
_perform_mockings()
_prepare_ansible_paths()
def _get_galaxy_role_ns(galaxy_infos: Dict[str, Any]) -> str:
"""Compute role namespace from meta/main.yml, including trailing dot."""
role_namespace = galaxy_infos.get('namespace', "")
if len(role_namespace) == 0:
role_namespace = galaxy_infos.get('author', "")
    # if there's a space in the namespace, it's likely an author name
# and not the galaxy login, so act as if there was no namespace
if re.match(r"^\w+ \w+", role_namespace):
role_namespace = ""
else:
role_namespace = f"{role_namespace}."
if not isinstance(role_namespace, str):
raise RuntimeError("Role namespace must be string, not %s" % role_namespace)
return role_namespace
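# Illustrative (assumed behaviour, derived from the logic above):
#     _get_galaxy_role_ns({'namespace': 'geerlingguy'})  # -> 'geerlingguy.'
#     _get_galaxy_role_ns({'author': 'johndoe'})         # -> 'johndoe.'
#     _get_galaxy_role_ns({'author': 'John Doe'})        # -> '' (space: author name, not a galaxy login)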
def _get_galaxy_role_name(galaxy_infos: Dict[str, Any]) -> str:
"""Compute role name from meta/main.yml."""
return galaxy_infos.get('role_name', "")
def _get_role_fqrn(galaxy_infos: Dict[str, Any]) -> str:
"""Compute role fqrn."""
role_namespace = _get_galaxy_role_ns(galaxy_infos)
role_name = _get_galaxy_role_name(galaxy_infos)
if len(role_name) == 0:
role_name = pathlib.Path(".").absolute().name
role_name = re.sub(r'(ansible-|ansible-role-)', '', role_name)
return f"{role_namespace}{role_name}"
def _install_galaxy_role() -> None:
"""Detect standalone galaxy role and installs it."""
if not os.path.exists("meta/main.yml"):
return
yaml = yaml_from_file("meta/main.yml")
if 'galaxy_info' not in yaml:
return
fqrn = _get_role_fqrn(yaml['galaxy_info'])
if 'role-name' not in options.skip_list:
if not re.match(r"[a-z0-9][a-z0-9_]+\.[a-z][a-z0-9_]+$", fqrn):
msg = (
"""\
Computed fully qualified role name of %s does not follow current galaxy requirements.
Please edit meta/main.yml and assure we can correctly determine full role name:
galaxy_info:
role_name: my_name # if absent directory name hosting role is used instead
namespace: my_galaxy_namespace # if absent, author is used instead
Namespace: https://galaxy.ansible.com/docs/contributing/namespaces.html#galaxy-namespace-limitations
Role: https://galaxy.ansible.com/docs/contributing/creating_role.html#role-names
As an alternative, you can add 'role-name' to either skip_list or warn_list.
"""
% fqrn
)
if 'role-name' in options.warn_list:
_logger.warning(msg)
else:
_logger.error(msg)
sys.exit(INVALID_PREREQUISITES_RC)
else:
# when 'role-name' is in skip_list, we stick to plain role names
if 'role_name' in yaml['galaxy_info']:
role_namespace = _get_galaxy_role_ns(yaml['galaxy_info'])
role_name = _get_galaxy_role_name(yaml['galaxy_info'])
fqrn = f"{role_namespace}{role_name}"
else:
fqrn = pathlib.Path(".").absolute().name
p = pathlib.Path(f"{options.cache_dir}/roles")
p.mkdir(parents=True, exist_ok=True)
link_path = p / fqrn
# despite documentation stating that is_file() reports true for symlinks,
    # it appears that is_dir() reports true instead, so we rely on exists().
target = pathlib.Path(options.project_dir).absolute()
if not link_path.exists() or os.readlink(link_path) != str(target):
if link_path.exists():
link_path.unlink()
link_path.symlink_to(target, target_is_directory=True)
_logger.info(
"Using %s symlink to current repository in order to enable Ansible to find the role using its expected full name.",
link_path,
)
def _prepare_ansible_paths() -> None:
"""Configure Ansible environment variables."""
library_paths: List[str] = []
roles_path: List[str] = []
for path_list, path in (
(library_paths, "plugins/modules"),
(library_paths, f"{options.cache_dir}/modules"),
(collection_list, f"{options.cache_dir}/collections"),
(roles_path, "roles"),
(roles_path, f"{options.cache_dir}/roles"),
):
if path not in path_list and os.path.exists(path):
path_list.append(path)
_update_env('ANSIBLE_LIBRARY', library_paths)
_update_env(ansible_collections_path(), collection_list)
_update_env('ANSIBLE_ROLES_PATH', roles_path, default=ANSIBLE_DEFAULT_ROLES_PATH)
# If we are asking to run without warnings, then also silence certain
# Ansible warnings which could slip through, namely the devel branch
# warning.
if options.verbosity < 0:
_update_env("ANSIBLE_DEVEL_WARNING", ["False"])
def _make_module_stub(module_name: str) -> None:
    # a.b.c is treated as a collection
if re.match(r"^(\w+|\w+\.\w+\.[\.\w]+)$", module_name):
parts = module_name.split(".")
if len(parts) < 3:
path = f"{options.cache_dir}/modules"
module_file = f"{options.cache_dir}/modules/{module_name}.py"
namespace = None
collection = None
else:
namespace = parts[0]
collection = parts[1]
path = f"{ options.cache_dir }/collections/ansible_collections/{ namespace }/{ collection }/plugins/modules/{ '/'.join(parts[2:-1]) }"
module_file = f"{path}/{parts[-1]}.py"
os.makedirs(path, exist_ok=True)
_write_module_stub(
filename=module_file,
name=module_file,
namespace=namespace,
collection=collection,
)
else:
_logger.error("Config error: %s is not a valid module name.", module_name)
sys.exit(INVALID_CONFIG_RC)
def _write_module_stub(
filename: str,
name: str,
namespace: Optional[str] = None,
collection: Optional[str] = None,
) -> None:
"""Write module stub to disk."""
body = ANSIBLE_MOCKED_MODULE.format(
name=name, collection=collection, namespace=namespace
)
with open(filename, "w") as f:
f.write(body)
def _update_env(varname: str, value: List[str], default: str = "") -> None:
"""Update colon based environment variable if needed. by appending."""
if value:
orig_value = os.environ.get(varname, default=default)
if orig_value:
# Prepend original or default variable content to custom content.
value = [*orig_value.split(':'), *value]
value_str = ":".join(value)
if value_str != os.environ.get(varname, ""):
os.environ[varname] = value_str
_logger.info("Added %s=%s", varname, value_str)
def _perform_mockings() -> None:
"""Mock modules and roles."""
for role_name in options.mock_roles:
if re.match(r"\w+\.\w+\.\w+$", role_name):
namespace, collection, role_dir = role_name.split(".")
path = f"{options.cache_dir}/collections/ansible_collections/{ namespace }/{ collection }/roles/{ role_dir }/"
else:
path = f"{options.cache_dir}/roles/{role_name}"
os.makedirs(path, exist_ok=True)
if options.mock_modules:
for module_name in options.mock_modules:
_make_module_stub(module_name)
# if inside a collection repo, symlink it to simulate its installed state
if not os.path.exists("galaxy.yml"):
return
yaml = yaml_from_file("galaxy.yml")
if not yaml:
# ignore empty galaxy.yml file
return
namespace = yaml.get('namespace', None)
collection = yaml.get('name', None)
if not namespace or not collection:
return
p = pathlib.Path(
f"{options.cache_dir}/collections/ansible_collections/{ namespace }"
)
p.mkdir(parents=True, exist_ok=True)
link_path = p / collection
target = pathlib.Path(options.project_dir).absolute()
    if not link_path.exists() or os.readlink(link_path) != str(target):
if link_path.exists():
link_path.unlink()
link_path.symlink_to(target, target_is_directory=True)
def ansible_config_get(key: str, kind: Type[Any] = str) -> Union[str, List[str], None]:
"""Return configuration item from ansible config."""
env = os.environ.copy()
# Avoid possible ANSI garbage
env["ANSIBLE_FORCE_COLOR"] = "0"
# Avoid our own override as this prevents returning system paths.
colpathvar = ansible_collections_path()
if colpathvar in env:
env.pop(colpathvar)
config = subprocess.check_output(
["ansible-config", "dump"], universal_newlines=True, env=env
)
if kind == str:
result = re.search(rf"^{key}.* = (.*)$", config, re.MULTILINE)
if result:
return result.groups()[0]
elif kind == list:
result = re.search(rf"^{key}.* = (\[.*\])$", config, re.MULTILINE)
if result:
val = eval(result.groups()[0]) # pylint: disable=eval-used
if not isinstance(val, list):
raise RuntimeError(f"Unexpected data read for {key}: {val}")
return val
else:
raise RuntimeError("Unknown data type.")
return None
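# Hedged usage sketch (outputs are illustrative): a string lookup returns the
# raw value, while the list variant eval()s the Python literal printed by
# `ansible-config dump`, hence the isinstance() guard above.
#   ansible_config_get('DEFAULT_ROLES_PATH')       # -> '/etc/ansible/roles'
#   ansible_config_get('COLLECTIONS_PATHS', list)  # -> ['...', '...']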
def require_collection( # noqa: C901
name: str, version: Optional[str] = None, install: bool = True
) -> None:
"""Check if a minimal collection version is present or exits.
In the future this method may attempt to install a missing or outdated
collection before failing.
"""
try:
ns, coll = name.split('.', 1)
except ValueError:
sys.exit("Invalid collection name supplied: %s" % name)
paths = ansible_config_get('COLLECTIONS_PATHS', list)
if not paths or not isinstance(paths, list):
sys.exit(f"Unable to determine ansible collection paths. ({paths})")
if options.cache_dir:
        # if we have a cache dir, we want it to be the preferred
        # destination when installing a missing collection
paths.insert(0, f"{options.cache_dir}/collections")
for path in paths:
collpath = os.path.join(path, 'ansible_collections', ns, coll)
if os.path.exists(collpath):
mpath = os.path.join(collpath, 'MANIFEST.json')
if not os.path.exists(mpath):
_logger.fatal(
"Found collection at '%s' but missing MANIFEST.json, cannot get info.",
collpath,
)
sys.exit(INVALID_PREREQUISITES_RC)
with open(mpath, 'r') as f:
manifest = json.loads(f.read())
found_version = packaging.version.parse(
manifest['collection_info']['version']
)
if version and found_version < packaging.version.parse(version):
if install:
install_collection(f"{name}:>={version}")
require_collection(name, version, install=False)
else:
_logger.fatal(
"Found %s collection %s but %s or newer is required.",
name,
found_version,
version,
)
sys.exit(INVALID_PREREQUISITES_RC)
break
else:
if install:
install_collection(f"{name}:>={version}")
require_collection(name, version, install=False)
else:
_logger.fatal("Collection '%s' not found in '%s'", name, paths)
sys.exit(INVALID_PREREQUISITES_RC)
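# Hedged usage sketch (collection name and version are illustrative): demand
# at least version 0.1.0 of a collection, installing it into the cache dir on
# a first failed lookup, then re-checking without a second install attempt:
#   require_collection("community.molecule", "0.1.0", install=True)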
|
ansible/ansible-lint
|
src/ansiblelint/prerun.py
|
Python
|
mit
| 17,947
|
[
"Galaxy"
] |
27752ea5348cd6a119b9979f7beec1cb9a9b4c3972b2b6780ba281cab3017175
|
#
# Copyright (C) 2013,2014,2015,2016 The ESPResSo project
#
# This file is part of ESPResSo.
#
# ESPResSo is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ESPResSo is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# Tests particle property setters/getters
import unittest as ut
import espressomd
import numpy as np
from espressomd.electrostatics import *
from espressomd.integrate import integrate
import espressomd.scafacos as scafacos
class CoulombCloudWall(ut.TestCase):
"""This compares p3m, p3m_gpu, scafacos_p3m and scafacos_p2nfft
electrostatic forces and energy against stored data."""
S = espressomd.System()
forces = {}
tolerance = 1E-3
# Reference energy from p3m in the tcl test case
reference_energy = 148.94229549
def setUp(self):
self.S.box_l = (10, 10, 10)
self.S.time_step = 0.01
self.S.skin = 0.4
        # Clear actors that might be left over from previous tests
if len(self.S.actors):
del self.S.actors[0]
self.S.part.clear()
data = np.genfromtxt("data/coulomb_cloud_wall_system.data")
# Add particles to system and store reference forces in hash
# Input format: id pos q f
for particle in data:
id = particle[0]
pos = particle[1:4]
q = particle[4]
f = particle[5:]
self.S.part.add(id=int(id), pos=pos, q=q)
self.forces[id] = f
def compare(self, method_name, energy=True):
# Compare forces and energy now in the system to stored ones
# Force
force_abs_diff = 0.
for p in self.S.part:
force_abs_diff += abs(np.sqrt(sum((p.f - self.forces[p.id])**2)))
force_abs_diff /= len(self.S.part)
print method_name, "force difference", force_abs_diff
# Energy
if energy:
energy_abs_diff = abs(self.S.analysis.energy(
self.S)["total"] - self.reference_energy)
print method_name, "energy difference", energy_abs_diff
self.assertTrue(energy_abs_diff <= self.tolerance, "Absolte energy difference " +
str(energy_abs_diff) + " too large for " + method_name)
self.assertTrue(force_abs_diff <= self.tolerance, "Asbolute force difference " +
str(force_abs_diff) + " too large for method " + method_name)
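    # Illustration (comment added for clarity, not part of the original test):
    # the force metric above is the mean Euclidean norm of the per-particle
    # deviation, force_abs_diff = (1/N) * sum_i |f_i - f_ref_i|, and both the
    # force and energy deviations must stay below `tolerance` (1e-3).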
# Tests for individual methods
if "P3M" in espressomd.features():
def test_p3m(self):
self.S.actors.add(P3M(bjerrum_length=1, r_cut=1.001,
mesh=64, cao=7, alpha=2.70746, tune=False))
integrate(0)
self.compare("p3m", energy=True)
if "ELECTROSTATICS" in espressomd.features() and "CUDA" in espressomd.features():
def test_p3m_gpu(self):
self.S.actors.add(P3M_GPU(
bjerrum_length=1, r_cut=1.001, mesh=64, cao=7, alpha=2.70746, tune=False))
integrate(0)
self.compare("p3m_gpu", energy=False)
if "SCAFACOS" in espressomd.features():
if "p3m" in scafacos.available_methods():
def test_scafacos_p3m(self):
self.S.actors.add(Scafacos(bjerrum_length=1, method_name="p3m", method_params={
"p3m_r_cut": 1.001, "p3m_grid": 64, "p3m_cao": 7, "p3m_alpha": 2.70746}))
integrate(0)
self.compare("scafacos_p3m", energy=True)
if "p2nfft" in scafacos.available_methods():
def test_scafacos_p2nfft(self):
self.S.actors.add(Scafacos(bjerrum_length=1, method_name="p2nfft", method_params={
"p2nfft_r_cut": 1.001, "tolerance_field": 1E-4}))
integrate(0)
self.compare("scafacos_p2nfft", energy=True)
def test_zz_deactivation(self):
        # Is the energy 0 if no methods are active?
self.assertTrue(self.S.analysis.energy(self.S)["total"] == 0.0)
if __name__ == "__main__":
ut.main()
|
tbereau/espresso
|
testsuite/python/coulomb_cloud_wall.py
|
Python
|
gpl-3.0
| 4,540
|
[
"ESPResSo"
] |
f1cbc6ab397724bf9b13d53632a510c8732dbf10adc992c135289d556e3aa1d2
|
# sql/elements.py
# Copyright (C) 2005-2018 the SQLAlchemy authors and contributors
# <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
"""Core SQL expression elements, including :class:`.ClauseElement`,
:class:`.ColumnElement`, and derived classes.
"""
from __future__ import unicode_literals
from .. import util, exc, inspection
from . import type_api
from . import operators
from .visitors import Visitable, cloned_traverse, traverse
from .annotation import Annotated
import itertools
from .base import Executable, PARSE_AUTOCOMMIT, Immutable, NO_ARG
from .base import _generative
import numbers
import re
import operator
def _clone(element, **kw):
return element._clone()
def collate(expression, collation):
"""Return the clause ``expression COLLATE collation``.
e.g.::
collate(mycolumn, 'utf8_bin')
produces::
mycolumn COLLATE utf8_bin
    The collation expression is also quoted if it is a case-sensitive
identifier, e.g. contains uppercase characters.
.. versionchanged:: 1.2 quoting is automatically applied to COLLATE
expressions if they are case sensitive.
"""
expr = _literal_as_binds(expression)
return BinaryExpression(
expr,
CollationClause(collation),
operators.collate, type_=expr.type)
def between(expr, lower_bound, upper_bound, symmetric=False):
"""Produce a ``BETWEEN`` predicate clause.
E.g.::
from sqlalchemy import between
stmt = select([users_table]).where(between(users_table.c.id, 5, 7))
Would produce SQL resembling::
SELECT id, name FROM user WHERE id BETWEEN :id_1 AND :id_2
The :func:`.between` function is a standalone version of the
:meth:`.ColumnElement.between` method available on all
SQL expressions, as in::
stmt = select([users_table]).where(users_table.c.id.between(5, 7))
All arguments passed to :func:`.between`, including the left side
    column expression, are coerced from Python scalar values if the
    value is not a :class:`.ColumnElement` subclass. For example,
three fixed values can be compared as in::
print(between(5, 3, 7))
Which would produce::
:param_1 BETWEEN :param_2 AND :param_3
:param expr: a column expression, typically a :class:`.ColumnElement`
instance or alternatively a Python scalar expression to be coerced
into a column expression, serving as the left side of the ``BETWEEN``
expression.
:param lower_bound: a column or Python scalar expression serving as the
lower bound of the right side of the ``BETWEEN`` expression.
:param upper_bound: a column or Python scalar expression serving as the
upper bound of the right side of the ``BETWEEN`` expression.
:param symmetric: if True, will render " BETWEEN SYMMETRIC ". Note
that not all databases support this syntax.
.. versionadded:: 0.9.5
.. seealso::
:meth:`.ColumnElement.between`
"""
expr = _literal_as_binds(expr)
return expr.between(lower_bound, upper_bound, symmetric=symmetric)
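# Hedged usage sketch (table and column names are hypothetical):
#   from sqlalchemy.sql import table, column
#   from sqlalchemy import select, between
#   users = table('users', column('id'), column('name'))
#   stmt = select([users]).where(between(users.c.id, 5, 7))
#   # renders: SELECT users.id, users.name FROM users
#   #          WHERE users.id BETWEEN :id_1 AND :id_2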
def literal(value, type_=None):
r"""Return a literal clause, bound to a bind parameter.
Literal clauses are created automatically when non-
:class:`.ClauseElement` objects (such as strings, ints, dates, etc.) are
used in a comparison operation with a :class:`.ColumnElement` subclass,
such as a :class:`~sqlalchemy.schema.Column` object. Use this function
to force the generation of a literal clause, which will be created as a
:class:`BindParameter` with a bound value.
:param value: the value to be bound. Can be any Python object supported by
the underlying DB-API, or is translatable via the given type argument.
:param type\_: an optional :class:`~sqlalchemy.types.TypeEngine` which
will provide bind-parameter translation for this literal.
"""
return BindParameter(None, value, type_=type_, unique=True)
def outparam(key, type_=None):
"""Create an 'OUT' parameter for usage in functions (stored procedures),
for databases which support them.
The ``outparam`` can be used like a regular function parameter.
The "output" value will be available from the
:class:`~sqlalchemy.engine.ResultProxy` object via its ``out_parameters``
attribute, which returns a dictionary containing the values.
"""
return BindParameter(
key, None, type_=type_, unique=False, isoutparam=True)
def not_(clause):
"""Return a negation of the given clause, i.e. ``NOT(clause)``.
The ``~`` operator is also overloaded on all
:class:`.ColumnElement` subclasses to produce the
same result.
"""
return operators.inv(_literal_as_binds(clause))
@inspection._self_inspects
class ClauseElement(Visitable):
"""Base class for elements of a programmatically constructed SQL
expression.
"""
__visit_name__ = 'clause'
_annotations = {}
supports_execution = False
_from_objects = []
bind = None
_is_clone_of = None
is_selectable = False
is_clause_element = True
description = None
_order_by_label_element = None
_is_from_container = False
def _clone(self):
"""Create a shallow copy of this ClauseElement.
        This method may be used by a generative API. It's also used as
part of the "deep" copy afforded by a traversal that combines
the _copy_internals() method.
"""
c = self.__class__.__new__(self.__class__)
c.__dict__ = self.__dict__.copy()
ClauseElement._cloned_set._reset(c)
ColumnElement.comparator._reset(c)
# this is a marker that helps to "equate" clauses to each other
# when a Select returns its list of FROM clauses. the cloning
# process leaves around a lot of remnants of the previous clause
# typically in the form of column expressions still attached to the
# old table.
c._is_clone_of = self
return c
@property
def _constructor(self):
"""return the 'constructor' for this ClauseElement.
This is for the purposes for creating a new object of
        this type. Usually, it's just the element's __class__.
However, the "Annotated" version of the object overrides
to return the class of its proxied element.
"""
return self.__class__
@util.memoized_property
def _cloned_set(self):
"""Return the set consisting all cloned ancestors of this
ClauseElement.
Includes this ClauseElement. This accessor tends to be used for
FromClause objects to identify 'equivalent' FROM clauses, regardless
of transformative operations.
"""
s = util.column_set()
f = self
while f is not None:
s.add(f)
f = f._is_clone_of
return s
def __getstate__(self):
d = self.__dict__.copy()
d.pop('_is_clone_of', None)
return d
def _annotate(self, values):
"""return a copy of this ClauseElement with annotations
updated by the given dictionary.
"""
return Annotated(self, values)
def _with_annotations(self, values):
"""return a copy of this ClauseElement with annotations
replaced by the given dictionary.
"""
return Annotated(self, values)
def _deannotate(self, values=None, clone=False):
"""return a copy of this :class:`.ClauseElement` with annotations
removed.
:param values: optional tuple of individual values
to remove.
"""
if clone:
# clone is used when we are also copying
# the expression for a deep deannotation
return self._clone()
else:
# if no clone, since we have no annotations we return
# self
return self
def _execute_on_connection(self, connection, multiparams, params):
if self.supports_execution:
return connection._execute_clauseelement(self, multiparams, params)
else:
raise exc.ObjectNotExecutableError(self)
def unique_params(self, *optionaldict, **kwargs):
"""Return a copy with :func:`bindparam()` elements replaced.
Same functionality as ``params()``, except adds `unique=True`
to affected bind parameters so that multiple statements can be
used.
"""
return self._params(True, optionaldict, kwargs)
def params(self, *optionaldict, **kwargs):
"""Return a copy with :func:`bindparam()` elements replaced.
Returns a copy of this ClauseElement with :func:`bindparam()`
elements replaced with values taken from the given dictionary::
>>> clause = column('x') + bindparam('foo')
            >>> print(clause.compile().params)
            {'foo':None}
            >>> print(clause.params({'foo':7}).compile().params)
            {'foo':7}
"""
return self._params(False, optionaldict, kwargs)
def _params(self, unique, optionaldict, kwargs):
if len(optionaldict) == 1:
kwargs.update(optionaldict[0])
elif len(optionaldict) > 1:
raise exc.ArgumentError(
"params() takes zero or one positional dictionary argument")
def visit_bindparam(bind):
if bind.key in kwargs:
bind.value = kwargs[bind.key]
bind.required = False
if unique:
bind._convert_to_unique()
return cloned_traverse(self, {}, {'bindparam': visit_bindparam})
def compare(self, other, **kw):
r"""Compare this ClauseElement to the given ClauseElement.
Subclasses should override the default behavior, which is a
straight identity comparison.
\**kw are arguments consumed by subclass compare() methods and
may be used to modify the criteria for comparison.
(see :class:`.ColumnElement`)
"""
return self is other
def _copy_internals(self, clone=_clone, **kw):
"""Reassign internal elements to be clones of themselves.
Called during a copy-and-traverse operation on newly
shallow-copied elements to create a deep copy.
The given clone function should be used, which may be applying
additional transformations to the element (i.e. replacement
traversal, cloned traversal, annotations).
"""
pass
def get_children(self, **kwargs):
r"""Return immediate child elements of this :class:`.ClauseElement`.
This is used for visit traversal.
\**kwargs may contain flags that change the collection that is
returned, for example to return a subset of items in order to
cut down on larger traversals, or to return child items from a
different context (such as schema-level collections instead of
clause-level).
"""
return []
def self_group(self, against=None):
"""Apply a 'grouping' to this :class:`.ClauseElement`.
This method is overridden by subclasses to return a
"grouping" construct, i.e. parenthesis. In particular
it's used by "binary" expressions to provide a grouping
around themselves when placed into a larger expression,
as well as by :func:`.select` constructs when placed into
the FROM clause of another :func:`.select`. (Note that
subqueries should be normally created using the
:meth:`.Select.alias` method, as many platforms require
nested SELECT statements to be named).
As expressions are composed together, the application of
:meth:`self_group` is automatic - end-user code should never
need to use this method directly. Note that SQLAlchemy's
clause constructs take operator precedence into account -
so parenthesis might not be needed, for example, in
an expression like ``x OR (y AND z)`` - AND takes precedence
over OR.
The base :meth:`self_group` method of :class:`.ClauseElement`
just returns self.
"""
return self
@util.dependencies("sqlalchemy.engine.default")
def compile(self, default, bind=None, dialect=None, **kw):
"""Compile this SQL expression.
The return value is a :class:`~.Compiled` object.
Calling ``str()`` or ``unicode()`` on the returned value will yield a
string representation of the result. The
:class:`~.Compiled` object also can return a
dictionary of bind parameter names and values
using the ``params`` accessor.
:param bind: An ``Engine`` or ``Connection`` from which a
``Compiled`` will be acquired. This argument takes precedence over
this :class:`.ClauseElement`'s bound engine, if any.
:param column_keys: Used for INSERT and UPDATE statements, a list of
column names which should be present in the VALUES clause of the
compiled statement. If ``None``, all columns from the target table
object are rendered.
:param dialect: A ``Dialect`` instance from which a ``Compiled``
will be acquired. This argument takes precedence over the `bind`
argument as well as this :class:`.ClauseElement`'s bound engine,
if any.
:param inline: Used for INSERT statements, for a dialect which does
not support inline retrieval of newly generated primary key
columns, will force the expression used to create the new primary
key value to be rendered inline within the INSERT statement's
VALUES clause. This typically refers to Sequence execution but may
also refer to any server-side default generation function
associated with a primary key `Column`.
:param compile_kwargs: optional dictionary of additional parameters
that will be passed through to the compiler within all "visit"
methods. This allows any custom flag to be passed through to
a custom compilation construct, for example. It is also used
for the case of passing the ``literal_binds`` flag through::
from sqlalchemy.sql import table, column, select
t = table('t', column('x'))
s = select([t]).where(t.c.x == 5)
            print(s.compile(compile_kwargs={"literal_binds": True}))
.. versionadded:: 0.9.0
.. seealso::
:ref:`faq_sql_expression_string`
"""
if not dialect:
if bind:
dialect = bind.dialect
elif self.bind:
dialect = self.bind.dialect
bind = self.bind
else:
dialect = default.StrCompileDialect()
return self._compiler(dialect, bind=bind, **kw)
def _compiler(self, dialect, **kw):
"""Return a compiler appropriate for this ClauseElement, given a
Dialect."""
return dialect.statement_compiler(dialect, self, **kw)
def __str__(self):
if util.py3k:
return str(self.compile())
else:
return unicode(self.compile()).encode('ascii', 'backslashreplace')
def __and__(self, other):
"""'and' at the ClauseElement level.
.. deprecated:: 0.9.5 - conjunctions are intended to be
           at the :class:`.ColumnElement` level.
"""
return and_(self, other)
def __or__(self, other):
"""'or' at the ClauseElement level.
.. deprecated:: 0.9.5 - conjunctions are intended to be
           at the :class:`.ColumnElement` level.
"""
return or_(self, other)
def __invert__(self):
if hasattr(self, 'negation_clause'):
return self.negation_clause
else:
return self._negate()
def _negate(self):
return UnaryExpression(
self.self_group(against=operators.inv),
operator=operators.inv,
negate=None)
def __bool__(self):
raise TypeError("Boolean value of this clause is not defined")
__nonzero__ = __bool__
def __repr__(self):
friendly = self.description
if friendly is None:
return object.__repr__(self)
else:
return '<%s.%s at 0x%x; %s>' % (
self.__module__, self.__class__.__name__, id(self), friendly)
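# Hedged sketch (illustrative only): _clone() chains copies through
# _is_clone_of, and _cloned_set walks that chain, so a clone of a clone is
# still recognized as "equivalent" to the original element:
#   c1 = some_clause._clone()
#   c2 = c1._clone()
#   assert some_clause in c2._cloned_set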
class ColumnElement(operators.ColumnOperators, ClauseElement):
"""Represent a column-oriented SQL expression suitable for usage in the
"columns" clause, WHERE clause etc. of a statement.
While the most familiar kind of :class:`.ColumnElement` is the
:class:`.Column` object, :class:`.ColumnElement` serves as the basis
for any unit that may be present in a SQL expression, including
the expressions themselves, SQL functions, bound parameters,
literal expressions, keywords such as ``NULL``, etc.
:class:`.ColumnElement` is the ultimate base class for all such elements.
A wide variety of SQLAlchemy Core functions work at the SQL expression
level, and are intended to accept instances of :class:`.ColumnElement` as
arguments. These functions will typically document that they accept a
"SQL expression" as an argument. What this means in terms of SQLAlchemy
usually refers to an input which is either already in the form of a
:class:`.ColumnElement` object, or a value which can be **coerced** into
one. The coercion rules followed by most, but not all, SQLAlchemy Core
functions with regards to SQL expressions are as follows:
* a literal Python value, such as a string, integer or floating
point value, boolean, datetime, ``Decimal`` object, or virtually
any other Python object, will be coerced into a "literal bound
value". This generally means that a :func:`.bindparam` will be
produced featuring the given value embedded into the construct; the
resulting :class:`.BindParameter` object is an instance of
:class:`.ColumnElement`. The Python value will ultimately be sent
to the DBAPI at execution time as a parameterized argument to the
``execute()`` or ``executemany()`` methods, after SQLAlchemy
type-specific converters (e.g. those provided by any associated
:class:`.TypeEngine` objects) are applied to the value.
* any special object value, typically ORM-level constructs, which
feature a method called ``__clause_element__()``. The Core
expression system looks for this method when an object of otherwise
unknown type is passed to a function that is looking to coerce the
argument into a :class:`.ColumnElement` expression. The
``__clause_element__()`` method, if present, should return a
:class:`.ColumnElement` instance. The primary use of
``__clause_element__()`` within SQLAlchemy is that of class-bound
attributes on ORM-mapped classes; a ``User`` class which contains a
mapped attribute named ``.name`` will have a method
``User.name.__clause_element__()`` which when invoked returns the
:class:`.Column` called ``name`` associated with the mapped table.
* The Python ``None`` value is typically interpreted as ``NULL``,
which in SQLAlchemy Core produces an instance of :func:`.null`.
A :class:`.ColumnElement` provides the ability to generate new
:class:`.ColumnElement`
objects using Python expressions. This means that Python operators
such as ``==``, ``!=`` and ``<`` are overloaded to mimic SQL operations,
and allow the instantiation of further :class:`.ColumnElement` instances
which are composed from other, more fundamental :class:`.ColumnElement`
objects. For example, two :class:`.ColumnClause` objects can be added
together with the addition operator ``+`` to produce
a :class:`.BinaryExpression`.
Both :class:`.ColumnClause` and :class:`.BinaryExpression` are subclasses
of :class:`.ColumnElement`::
>>> from sqlalchemy.sql import column
>>> column('a') + column('b')
<sqlalchemy.sql.expression.BinaryExpression object at 0x101029dd0>
        >>> print(column('a') + column('b'))
a + b
.. seealso::
:class:`.Column`
:func:`.expression.column`
"""
__visit_name__ = 'column_element'
primary_key = False
foreign_keys = []
_label = None
"""The named label that can be used to target
this column in a result set.
This label is almost always the label used when
rendering <expr> AS <label> in a SELECT statement. It also
refers to a name that this column expression can be located from
in a result set.
For a regular Column bound to a Table, this is typically the label
<tablename>_<columnname>. For other constructs, different rules
may apply, such as anonymized labels and others.
"""
key = None
"""the 'key' that in some circumstances refers to this object in a
Python namespace.
This typically refers to the "key" of the column as present in the
``.c`` collection of a selectable, e.g. sometable.c["somekey"] would
return a Column with a .key of "somekey".
"""
_key_label = None
"""A label-based version of 'key' that in some circumstances refers
to this object in a Python namespace.
_key_label comes into play when a select() statement is constructed with
apply_labels(); in this case, all Column objects in the ``.c`` collection
are rendered as <tablename>_<columnname> in SQL; this is essentially the
value of ._label. But to locate those columns in the ``.c`` collection,
    the name is along the lines of <tablename>_<key>; that's the typical
    value of ._key_label.
"""
_render_label_in_columns_clause = True
"""A flag used by select._columns_plus_names that helps to determine
we are actually going to render in terms of "SELECT <col> AS <label>".
This flag can be returned as False for some Column objects that want
to be rendered as simple "SELECT <col>"; typically columns that don't have
any parent table and are named the same as what the label would be
in any case.
"""
_resolve_label = None
"""The name that should be used to identify this ColumnElement in a
select() object when "label resolution" logic is used; this refers
to using a string name in an expression like order_by() or group_by()
that wishes to target a labeled expression in the columns clause.
The name is distinct from that of .name or ._label to account for the case
where anonymizing logic may be used to change the name that's actually
rendered at compile time; this attribute should hold onto the original
name that was user-assigned when producing a .label() construct.
"""
_allow_label_resolve = True
"""A flag that can be flipped to prevent a column from being resolvable
by string label name."""
_alt_names = ()
def self_group(self, against=None):
if (against in (operators.and_, operators.or_, operators._asbool) and
self.type._type_affinity
is type_api.BOOLEANTYPE._type_affinity):
return AsBoolean(self, operators.istrue, operators.isfalse)
elif (against in (operators.any_op, operators.all_op)):
return Grouping(self)
else:
return self
def _negate(self):
if self.type._type_affinity is type_api.BOOLEANTYPE._type_affinity:
# TODO: see the note in AsBoolean that it seems to assume
# the element is the True_() / False_() constant, so this
# is too broad
return AsBoolean(self, operators.isfalse, operators.istrue)
else:
return super(ColumnElement, self)._negate()
@util.memoized_property
def type(self):
return type_api.NULLTYPE
@util.memoized_property
def comparator(self):
try:
comparator_factory = self.type.comparator_factory
except AttributeError:
raise TypeError(
"Object %r associated with '.type' attribute "
"is not a TypeEngine class or object" % self.type)
else:
return comparator_factory(self)
def __getattr__(self, key):
try:
return getattr(self.comparator, key)
except AttributeError:
raise AttributeError(
'Neither %r object nor %r object has an attribute %r' % (
type(self).__name__,
type(self.comparator).__name__,
key)
)
def operate(self, op, *other, **kwargs):
return op(self.comparator, *other, **kwargs)
def reverse_operate(self, op, other, **kwargs):
return op(other, self.comparator, **kwargs)
def _bind_param(self, operator, obj, type_=None):
return BindParameter(None, obj,
_compared_to_operator=operator,
type_=type_,
_compared_to_type=self.type, unique=True)
@property
def expression(self):
"""Return a column expression.
Part of the inspection interface; returns self.
"""
return self
@property
def _select_iterable(self):
return (self, )
@util.memoized_property
def base_columns(self):
return util.column_set(c for c in self.proxy_set
if not hasattr(c, '_proxies'))
@util.memoized_property
def proxy_set(self):
s = util.column_set([self])
if hasattr(self, '_proxies'):
for c in self._proxies:
s.update(c.proxy_set)
return s
def shares_lineage(self, othercolumn):
"""Return True if the given :class:`.ColumnElement`
has a common ancestor to this :class:`.ColumnElement`."""
return bool(self.proxy_set.intersection(othercolumn.proxy_set))
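    # Hedged sketch (names are hypothetical): proxy_set is the transitive
    # closure over _proxies, so a column proxied through an alias or subquery
    # still shares lineage with its origin:
    #   sub = select([users.c.id]).alias()
    #   sub.c.id.shares_lineage(users.c.id)   # -> True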
def _compare_name_for_result(self, other):
"""Return True if the given column element compares to this one
when targeting within a result row."""
return hasattr(other, 'name') and hasattr(self, 'name') and \
other.name == self.name
def _make_proxy(
self, selectable, name=None, name_is_truncatable=False, **kw):
"""Create a new :class:`.ColumnElement` representing this
:class:`.ColumnElement` as it appears in the select list of a
descending selectable.
"""
if name is None:
name = self.anon_label
if self.key:
key = self.key
else:
try:
key = str(self)
except exc.UnsupportedCompilationError:
key = self.anon_label
else:
key = name
co = ColumnClause(
_as_truncated(name) if name_is_truncatable else name,
type_=getattr(self, 'type', None),
_selectable=selectable
)
co._proxies = [self]
if selectable._is_clone_of is not None:
co._is_clone_of = \
selectable._is_clone_of.columns.get(key)
selectable._columns[key] = co
return co
def compare(self, other, use_proxies=False, equivalents=None, **kw):
"""Compare this ColumnElement to another.
Special arguments understood:
:param use_proxies: when True, consider two columns that
share a common base column as equivalent (i.e. shares_lineage())
:param equivalents: a dictionary of columns as keys mapped to sets
of columns. If the given "other" column is present in this
dictionary, if any of the columns in the corresponding set() pass
the comparison test, the result is True. This is used to expand the
comparison to other columns that may be known to be equivalent to
this one via foreign key or other criterion.
"""
to_compare = (other, )
if equivalents and other in equivalents:
to_compare = equivalents[other].union(to_compare)
for oth in to_compare:
if use_proxies and self.shares_lineage(oth):
return True
elif hash(oth) == hash(self):
return True
else:
return False
def cast(self, type_):
"""Produce a type cast, i.e. ``CAST(<expression> AS <type>)``.
This is a shortcut to the :func:`~.expression.cast` function.
.. versionadded:: 1.0.7
"""
return Cast(self, type_)
def label(self, name):
"""Produce a column label, i.e. ``<columnname> AS <name>``.
This is a shortcut to the :func:`~.expression.label` function.
        If 'name' is None, an anonymous label name will be generated.
"""
return Label(name, self, self.type)
@util.memoized_property
def anon_label(self):
"""provides a constant 'anonymous label' for this ColumnElement.
This is a label() expression which will be named at compile time.
The same label() is returned each time anon_label is called so
that expressions can reference anon_label multiple times, producing
the same label name at compile time.
the compiler uses this function automatically at compile time
for expressions that are known to be 'unnamed' like binary
expressions and function calls.
"""
while self._is_clone_of is not None:
self = self._is_clone_of
return _anonymous_label(
'%%(%d %s)s' % (id(self), getattr(self, 'name', 'anon'))
)
class BindParameter(ColumnElement):
r"""Represent a "bound expression".
:class:`.BindParameter` is invoked explicitly using the
:func:`.bindparam` function, as in::
from sqlalchemy import bindparam
stmt = select([users_table]).\
where(users_table.c.name == bindparam('username'))
Detailed discussion of how :class:`.BindParameter` is used is
at :func:`.bindparam`.
.. seealso::
:func:`.bindparam`
"""
__visit_name__ = 'bindparam'
_is_crud = False
def __init__(self, key, value=NO_ARG, type_=None,
unique=False, required=NO_ARG,
quote=None, callable_=None,
expanding=False,
isoutparam=False,
_compared_to_operator=None,
_compared_to_type=None):
r"""Produce a "bound expression".
The return value is an instance of :class:`.BindParameter`; this
is a :class:`.ColumnElement` subclass which represents a so-called
"placeholder" value in a SQL expression, the value of which is
        supplied at the point at which the statement is executed against a
database connection.
In SQLAlchemy, the :func:`.bindparam` construct has
the ability to carry along the actual value that will be ultimately
used at expression time. In this way, it serves not just as
a "placeholder" for eventual population, but also as a means of
representing so-called "unsafe" values which should not be rendered
directly in a SQL statement, but rather should be passed along
to the :term:`DBAPI` as values which need to be correctly escaped
and potentially handled for type-safety.
When using :func:`.bindparam` explicitly, the use case is typically
one of traditional deferment of parameters; the :func:`.bindparam`
construct accepts a name which can then be referred to at execution
time::
from sqlalchemy import bindparam
stmt = select([users_table]).\
where(users_table.c.name == bindparam('username'))
The above statement, when rendered, will produce SQL similar to::
SELECT id, name FROM user WHERE name = :username
In order to populate the value of ``:username`` above, the value
would typically be applied at execution time to a method
like :meth:`.Connection.execute`::
result = connection.execute(stmt, username='wendy')
Explicit use of :func:`.bindparam` is also common when producing
UPDATE or DELETE statements that are to be invoked multiple times,
where the WHERE criterion of the statement is to change on each
invocation, such as::
stmt = (users_table.update().
where(user_table.c.name == bindparam('username')).
values(fullname=bindparam('fullname'))
)
connection.execute(
stmt, [{"username": "wendy", "fullname": "Wendy Smith"},
{"username": "jack", "fullname": "Jack Jones"},
]
)
SQLAlchemy's Core expression system makes wide use of
:func:`.bindparam` in an implicit sense. It is typical that Python
literal values passed to virtually all SQL expression functions are
coerced into fixed :func:`.bindparam` constructs. For example, given
a comparison operation such as::
expr = users_table.c.name == 'Wendy'
The above expression will produce a :class:`.BinaryExpression`
construct, where the left side is the :class:`.Column` object
representing the ``name`` column, and the right side is a
:class:`.BindParameter` representing the literal value::
print(repr(expr.right))
BindParameter('%(4327771088 name)s', 'Wendy', type_=String())
The expression above will render SQL such as::
user.name = :name_1
Where the ``:name_1`` parameter name is an anonymous name. The
actual string ``Wendy`` is not in the rendered string, but is carried
along where it is later used within statement execution. If we
invoke a statement like the following::
stmt = select([users_table]).where(users_table.c.name == 'Wendy')
result = connection.execute(stmt)
We would see SQL logging output as::
SELECT "user".id, "user".name
FROM "user"
WHERE "user".name = %(name_1)s
{'name_1': 'Wendy'}
Above, we see that ``Wendy`` is passed as a parameter to the database,
while the placeholder ``:name_1`` is rendered in the appropriate form
for the target database, in this case the PostgreSQL database.
Similarly, :func:`.bindparam` is invoked automatically
when working with :term:`CRUD` statements as far as the "VALUES"
portion is concerned. The :func:`.insert` construct produces an
``INSERT`` expression which will, at statement execution time,
generate bound placeholders based on the arguments passed, as in::
stmt = users_table.insert()
result = connection.execute(stmt, name='Wendy')
The above will produce SQL output as::
INSERT INTO "user" (name) VALUES (%(name)s)
{'name': 'Wendy'}
The :class:`.Insert` construct, at compilation/execution time,
rendered a single :func:`.bindparam` mirroring the column
name ``name`` as a result of the single ``name`` parameter
we passed to the :meth:`.Connection.execute` method.
:param key:
the key (e.g. the name) for this bind param.
Will be used in the generated
SQL statement for dialects that use named parameters. This
value may be modified when part of a compilation operation,
if other :class:`BindParameter` objects exist with the same
key, or if its length is too long and truncation is
required.
:param value:
Initial value for this bind param. Will be used at statement
execution time as the value for this parameter passed to the
DBAPI, if no other value is indicated to the statement execution
method for this particular parameter name. Defaults to ``None``.
:param callable\_:
A callable function that takes the place of "value". The function
will be called at statement execution time to determine the
ultimate value. Used for scenarios where the actual bind
value cannot be determined at the point at which the clause
construct is created, but embedded bind values are still desirable.
:param type\_:
A :class:`.TypeEngine` class or instance representing an optional
datatype for this :func:`.bindparam`. If not passed, a type
may be determined automatically for the bind, based on the given
value; for example, trivial Python types such as ``str``,
``int``, ``bool``
may result in the :class:`.String`, :class:`.Integer` or
:class:`.Boolean` types being automatically selected.
The type of a :func:`.bindparam` is significant especially in that
the type will apply pre-processing to the value before it is
passed to the database. For example, a :func:`.bindparam` which
refers to a datetime value, and is specified as holding the
:class:`.DateTime` type, may apply conversion needed to the
value (such as stringification on SQLite) before passing the value
to the database.
:param unique:
if True, the key name of this :class:`.BindParameter` will be
modified if another :class:`.BindParameter` of the same name
already has been located within the containing
expression. This flag is used generally by the internals
          when producing so-called "anonymous" bound expressions; it
isn't generally applicable to explicitly-named :func:`.bindparam`
constructs.
:param required:
If ``True``, a value is required at execution time. If not passed,
it defaults to ``True`` if neither :paramref:`.bindparam.value`
          nor :paramref:`.bindparam.callable` were passed. If either of these
parameters are present, then :paramref:`.bindparam.required`
defaults to ``False``.
.. versionchanged:: 0.8 If the ``required`` flag is not specified,
it will be set automatically to ``True`` or ``False`` depending
on whether or not the ``value`` or ``callable`` parameters
were specified.
:param quote:
True if this parameter name requires quoting and is not
currently known as a SQLAlchemy reserved word; this currently
only applies to the Oracle backend, where bound names must
sometimes be quoted.
:param isoutparam:
if True, the parameter should be treated like a stored procedure
"OUT" parameter. This applies to backends such as Oracle which
support OUT parameters.
:param expanding:
if True, this parameter will be treated as an "expanding" parameter
at execution time; the parameter value is expected to be a sequence,
rather than a scalar value, and the string SQL statement will
          be transformed on a per-execution basis to accommodate the sequence
with a variable number of parameter slots passed to the DBAPI.
This is to allow statement caching to be used in conjunction with
an IN clause.
.. note:: The "expanding" feature does not support "executemany"-
style parameter sets, nor does it support empty IN expressions.
.. note:: The "expanding" feature should be considered as
**experimental** within the 1.2 series.
.. versionadded:: 1.2
.. seealso::
:ref:`coretutorial_bind_param`
:ref:`coretutorial_insert_expressions`
:func:`.outparam`
"""
if isinstance(key, ColumnClause):
type_ = key.type
key = key.key
if required is NO_ARG:
required = (value is NO_ARG and callable_ is None)
if value is NO_ARG:
value = None
if quote is not None:
key = quoted_name(key, quote)
if unique:
self.key = _anonymous_label('%%(%d %s)s' % (id(self), key
or 'param'))
else:
self.key = key or _anonymous_label('%%(%d param)s'
% id(self))
# identifying key that won't change across
# clones, used to identify the bind's logical
# identity
self._identifying_key = self.key
# key that was passed in the first place, used to
# generate new keys
self._orig_key = key or 'param'
self.unique = unique
self.value = value
self.callable = callable_
self.isoutparam = isoutparam
self.required = required
self.expanding = expanding
if type_ is None:
if _compared_to_type is not None:
self.type = \
_compared_to_type.coerce_compared_value(
_compared_to_operator, value)
else:
self.type = type_api._resolve_value_to_type(value)
elif isinstance(type_, type):
self.type = type_()
else:
self.type = type_
def _with_value(self, value):
"""Return a copy of this :class:`.BindParameter` with the given value
set.
"""
cloned = self._clone()
cloned.value = value
cloned.callable = None
cloned.required = False
if cloned.type is type_api.NULLTYPE:
cloned.type = type_api._resolve_value_to_type(value)
return cloned
@property
def effective_value(self):
"""Return the value of this bound parameter,
taking into account if the ``callable`` parameter
was set.
The ``callable`` value will be evaluated
and returned if present, else ``value``.
"""
if self.callable:
return self.callable()
else:
return self.value
def _clone(self):
c = ClauseElement._clone(self)
if self.unique:
c.key = _anonymous_label('%%(%d %s)s' % (id(c), c._orig_key
or 'param'))
return c
def _convert_to_unique(self):
if not self.unique:
self.unique = True
self.key = _anonymous_label(
'%%(%d %s)s' % (id(self), self._orig_key or 'param'))
def compare(self, other, **kw):
"""Compare this :class:`BindParameter` to the given
clause."""
return isinstance(other, BindParameter) \
and self.type._compare_type_affinity(other.type) \
and self.value == other.value \
and self.callable == other.callable
def __getstate__(self):
"""execute a deferred value for serialization purposes."""
d = self.__dict__.copy()
v = self.value
if self.callable:
v = self.callable()
d['callable'] = None
d['value'] = v
return d
def __repr__(self):
return 'BindParameter(%r, %r, type_=%r)' % (self.key,
self.value, self.type)
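# Hedged usage sketch (key and values are illustrative): unique=True rewrites
# the key into an anonymous label so repeated binds of the same name do not
# collide when compiled:
#   p1 = BindParameter('x', 5, unique=True)
#   p2 = BindParameter('x', 6, unique=True)
#   # p1.key != p2.key; they render as distinct parameters such as :x_1, :x_2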
class TypeClause(ClauseElement):
"""Handle a type keyword in a SQL statement.
Used by the ``Case`` statement.
"""
__visit_name__ = 'typeclause'
def __init__(self, type):
self.type = type
class TextClause(Executable, ClauseElement):
"""Represent a literal SQL text fragment.
E.g.::
from sqlalchemy import text
t = text("SELECT * FROM users")
result = connection.execute(t)
The :class:`.Text` construct is produced using the :func:`.text`
function; see that function for full documentation.
.. seealso::
:func:`.text`
"""
__visit_name__ = 'textclause'
_bind_params_regex = re.compile(r'(?<![:\w\x5c]):(\w+)(?!:)', re.UNICODE)
_execution_options = \
Executable._execution_options.union(
{'autocommit': PARSE_AUTOCOMMIT})
@property
def _select_iterable(self):
return (self,)
@property
def selectable(self):
# allows text() to be considered by
# _interpret_as_from
return self
_hide_froms = []
# help in those cases where text() is
# interpreted in a column expression situation
key = _label = _resolve_label = None
_allow_label_resolve = False
def __init__(
self,
text,
bind=None):
self._bind = bind
self._bindparams = {}
def repl(m):
self._bindparams[m.group(1)] = BindParameter(m.group(1))
return ':%s' % m.group(1)
# scan the string and search for bind parameter names, add them
# to the list of bindparams
self.text = self._bind_params_regex.sub(repl, text)
@classmethod
def _create_text(self, text, bind=None, bindparams=None,
typemap=None, autocommit=None):
r"""Construct a new :class:`.TextClause` clause, representing
a textual SQL string directly.
E.g.::
from sqlalchemy import text
t = text("SELECT * FROM users")
result = connection.execute(t)
The advantages :func:`.text` provides over a plain string are
backend-neutral support for bind parameters, per-statement
execution options, as well as
bind parameter and result-column typing behavior, allowing
SQLAlchemy type constructs to play a role when executing
a statement that is specified literally. The construct can also
be provided with a ``.c`` collection of column elements, allowing
it to be embedded in other SQL expression constructs as a subquery.
Bind parameters are specified by name, using the format ``:name``.
E.g.::
t = text("SELECT * FROM users WHERE id=:user_id")
result = connection.execute(t, user_id=12)
For SQL statements where a colon is required verbatim, as within
an inline string, use a backslash to escape::
t = text("SELECT * FROM users WHERE name='\:username'")
The :class:`.TextClause` construct includes methods which can
provide information about the bound parameters as well as the column
values which would be returned from the textual statement, assuming
it's an executable SELECT type of statement. The
:meth:`.TextClause.bindparams` method is used to provide bound
parameter detail, and :meth:`.TextClause.columns` method allows
specification of return columns including names and types::
t = text("SELECT * FROM users WHERE id=:user_id").\
bindparams(user_id=7).\
columns(id=Integer, name=String)
for id, name in connection.execute(t):
print(id, name)
The :func:`.text` construct is used in cases when
a literal string SQL fragment is specified as part of a larger query,
such as for the WHERE clause of a SELECT statement::
s = select([users.c.id, users.c.name]).where(text("id=:user_id"))
result = connection.execute(s, user_id=12)
:func:`.text` is also used for the construction
of a full, standalone statement using plain text.
As such, SQLAlchemy refers
to it as an :class:`.Executable` object, and it supports
the :meth:`Executable.execution_options` method. For example,
a :func:`.text` construct that should be subject to "autocommit"
can be set explicitly so using the
:paramref:`.Connection.execution_options.autocommit` option::
t = text("EXEC my_procedural_thing()").\
execution_options(autocommit=True)
Note that SQLAlchemy's usual "autocommit" behavior applies to
:func:`.text` constructs implicitly - that is, statements which begin
with a phrase such as ``INSERT``, ``UPDATE``, ``DELETE``,
or a variety of other phrases specific to certain backends, will
be eligible for autocommit if no transaction is in progress.
:param text:
          the text of the SQL statement to be created. Use ``:<param>``
to specify bind parameters; they will be compiled to their
engine-specific format.
:param autocommit:
Deprecated. Use .execution_options(autocommit=<True|False>)
to set the autocommit option.
:param bind:
an optional connection or engine to be used for this text query.
:param bindparams:
Deprecated. A list of :func:`.bindparam` instances used to
provide information about parameters embedded in the statement.
This argument now invokes the :meth:`.TextClause.bindparams`
method on the construct before returning it. E.g.::
stmt = text("SELECT * FROM table WHERE id=:id",
bindparams=[bindparam('id', value=5, type_=Integer)])
Is equivalent to::
stmt = text("SELECT * FROM table WHERE id=:id").\
bindparams(bindparam('id', value=5, type_=Integer))
.. deprecated:: 0.9.0 the :meth:`.TextClause.bindparams` method
supersedes the ``bindparams`` argument to :func:`.text`.
:param typemap:
Deprecated. A dictionary mapping the names of columns
represented in the columns clause of a ``SELECT`` statement
to type objects,
which will be used to perform post-processing on columns within
the result set. This parameter now invokes the
:meth:`.TextClause.columns` method, which returns a
:class:`.TextAsFrom` construct that gains a ``.c`` collection and
can be embedded in other expressions. E.g.::
stmt = text("SELECT * FROM table",
typemap={'id': Integer, 'name': String},
)
Is equivalent to::
stmt = text("SELECT * FROM table").columns(id=Integer,
name=String)
Or alternatively::
from sqlalchemy.sql import column
stmt = text("SELECT * FROM table").columns(
column('id', Integer),
column('name', String)
)
.. deprecated:: 0.9.0 the :meth:`.TextClause.columns` method
supersedes the ``typemap`` argument to :func:`.text`.
.. seealso::
:ref:`sqlexpression_text` - in the Core tutorial
:ref:`orm_tutorial_literal_sql` - in the ORM tutorial
"""
stmt = TextClause(text, bind=bind)
if bindparams:
stmt = stmt.bindparams(*bindparams)
if typemap:
stmt = stmt.columns(**typemap)
if autocommit is not None:
util.warn_deprecated('autocommit on text() is deprecated. '
'Use .execution_options(autocommit=True)')
stmt = stmt.execution_options(autocommit=autocommit)
return stmt
@_generative
def bindparams(self, *binds, **names_to_values):
"""Establish the values and/or types of bound parameters within
this :class:`.TextClause` construct.
Given a text construct such as::
from sqlalchemy import text
stmt = text("SELECT id, name FROM user WHERE name=:name "
"AND timestamp=:timestamp")
the :meth:`.TextClause.bindparams` method can be used to establish
the initial value of ``:name`` and ``:timestamp``,
using simple keyword arguments::
stmt = stmt.bindparams(name='jack',
timestamp=datetime.datetime(2012, 10, 8, 15, 12, 5))
Where above, new :class:`.BindParameter` objects
will be generated with the names ``name`` and ``timestamp``, and
values of ``jack`` and ``datetime.datetime(2012, 10, 8, 15, 12, 5)``,
respectively. The types will be
inferred from the values given, in this case :class:`.String` and
:class:`.DateTime`.
When specific typing behavior is needed, the positional ``*binds``
argument can be used in which to specify :func:`.bindparam` constructs
directly. These constructs must include at least the ``key``
argument, then an optional value and type::
from sqlalchemy import bindparam
stmt = stmt.bindparams(
bindparam('name', value='jack', type_=String),
bindparam('timestamp', type_=DateTime)
)
Above, we specified the type of :class:`.DateTime` for the
``timestamp`` bind, and the type of :class:`.String` for the ``name``
bind. In the case of ``name`` we also set the default value of
``"jack"``.
Additional bound parameters can be supplied at statement execution
time, e.g.::
result = connection.execute(stmt,
timestamp=datetime.datetime(2012, 10, 8, 15, 12, 5))
The :meth:`.TextClause.bindparams` method can be called repeatedly,
where it will re-use existing :class:`.BindParameter` objects to add
new information. For example, we can call
:meth:`.TextClause.bindparams` first with typing information, and a
second time with value information, and it will be combined::
stmt = text("SELECT id, name FROM user WHERE name=:name "
"AND timestamp=:timestamp")
stmt = stmt.bindparams(
bindparam('name', type_=String),
bindparam('timestamp', type_=DateTime)
)
stmt = stmt.bindparams(
name='jack',
timestamp=datetime.datetime(2012, 10, 8, 15, 12, 5)
)
.. versionadded:: 0.9.0 The :meth:`.TextClause.bindparams` method
supersedes the argument ``bindparams`` passed to
:func:`~.expression.text`.
"""
self._bindparams = new_params = self._bindparams.copy()
for bind in binds:
try:
existing = new_params[bind.key]
except KeyError:
raise exc.ArgumentError(
"This text() construct doesn't define a "
"bound parameter named %r" % bind.key)
else:
new_params[existing.key] = bind
for key, value in names_to_values.items():
try:
existing = new_params[key]
except KeyError:
raise exc.ArgumentError(
"This text() construct doesn't define a "
"bound parameter named %r" % key)
else:
new_params[key] = existing._with_value(value)
@util.dependencies('sqlalchemy.sql.selectable')
def columns(self, selectable, *cols, **types):
"""Turn this :class:`.TextClause` object into a :class:`.TextAsFrom`
object that can be embedded into another statement.
This function essentially bridges the gap between an entirely
textual SELECT statement and the SQL expression language concept
of a "selectable"::
from sqlalchemy.sql import column, text
stmt = text("SELECT id, name FROM some_table")
stmt = stmt.columns(column('id'), column('name')).alias('st')
stmt = select([mytable]).\
select_from(
mytable.join(stmt, mytable.c.name == stmt.c.name)
).where(stmt.c.id > 5)
Above, we pass a series of :func:`.column` elements to the
:meth:`.TextClause.columns` method positionally. These :func:`.column`
elements now become first class elements upon the :attr:`.TextAsFrom.c`
column collection, just like any other selectable.
The column expressions we pass to :meth:`.TextClause.columns` may
also be typed; when we do so, these :class:`.TypeEngine` objects become
the effective return type of the column, so that SQLAlchemy's
result-set-processing systems may be used on the return values.
This is often needed for types such as date or boolean types, as well
as for unicode processing on some dialect configurations::
stmt = text("SELECT id, name, timestamp FROM some_table")
stmt = stmt.columns(
column('id', Integer),
column('name', Unicode),
column('timestamp', DateTime)
)
for id, name, timestamp in connection.execute(stmt):
print(id, name, timestamp)
As a shortcut to the above syntax, keyword arguments referring to
types alone may be used, if only type conversion is needed::
stmt = text("SELECT id, name, timestamp FROM some_table")
stmt = stmt.columns(
id=Integer,
name=Unicode,
timestamp=DateTime
)
for id, name, timestamp in connection.execute(stmt):
print(id, name, timestamp)
The positional form of :meth:`.TextClause.columns` also provides
the unique feature of **positional column targeting**, which is
particularly useful when using the ORM with complex textual queries.
If we specify the columns from our model to :meth:`.TextClause.columns`,
the result set will match to those columns positionally, meaning the
name or origin of the column in the textual SQL doesn't matter::
stmt = text("SELECT users.id, addresses.id, users.id, "
"users.name, addresses.email_address AS email "
"FROM users JOIN addresses ON users.id=addresses.user_id "
"WHERE users.id = 1").columns(
User.id,
Address.id,
Address.user_id,
User.name,
Address.email_address
)
query = session.query(User).from_statement(stmt).options(
contains_eager(User.addresses))
.. versionadded:: 1.1 the :meth:`.TextClause.columns` method now
offers positional column targeting in the result set when
the column expressions are passed purely positionally.
The :meth:`.TextClause.columns` method provides a direct
route to calling :meth:`.FromClause.alias` as well as
:meth:`.SelectBase.cte` against a textual SELECT statement::
stmt = stmt.columns(id=Integer, name=String).cte('st')
stmt = select([sometable]).where(sometable.c.id == stmt.c.id)
.. versionadded:: 0.9.0 :func:`.text` can now be converted into a
fully featured "selectable" construct using the
:meth:`.TextClause.columns` method. This method supersedes the
``typemap`` argument to :func:`.text`.
"""
positional_input_cols = [
ColumnClause(col.key, types.pop(col.key))
if col.key in types
else col
for col in cols
]
keyed_input_cols = [
ColumnClause(key, type_) for key, type_ in types.items()]
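# Positional result-set matching applies only when every column was
# passed positionally; any keyword (typemap-style) entries force
# name-based matching instead.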
return selectable.TextAsFrom(
self,
positional_input_cols + keyed_input_cols,
positional=bool(positional_input_cols) and not keyed_input_cols)
@property
def type(self):
return type_api.NULLTYPE
@property
def comparator(self):
return self.type.comparator_factory(self)
def self_group(self, against=None):
if against is operators.in_op:
return Grouping(self)
else:
return self
def _copy_internals(self, clone=_clone, **kw):
self._bindparams = dict((b.key, clone(b, **kw))
for b in self._bindparams.values())
def get_children(self, **kwargs):
return list(self._bindparams.values())
def compare(self, other):
return isinstance(other, TextClause) and other.text == self.text
class Null(ColumnElement):
"""Represent the NULL keyword in a SQL statement.
:class:`.Null` is accessed as a constant via the
:func:`.null` function.
"""
__visit_name__ = 'null'
@util.memoized_property
def type(self):
return type_api.NULLTYPE
@classmethod
def _instance(cls):
"""Return a constant :class:`.Null` construct."""
return Null()
def compare(self, other):
return isinstance(other, Null)
class False_(ColumnElement):
"""Represent the ``false`` keyword, or equivalent, in a SQL statement.
:class:`.False_` is accessed as a constant via the
:func:`.false` function.
"""
__visit_name__ = 'false'
@util.memoized_property
def type(self):
return type_api.BOOLEANTYPE
def _negate(self):
return True_()
@classmethod
def _instance(cls):
"""Return a :class:`.False_` construct.
E.g.::
>>> from sqlalchemy import false
>>> print(select([t.c.x]).where(false()))
SELECT x FROM t WHERE false
A backend which does not support true/false constants will render as
an expression against 1 or 0::
>>> print(select([t.c.x]).where(false()))
SELECT x FROM t WHERE 0 = 1
The :func:`.true` and :func:`.false` constants also feature
"short circuit" operation within an :func:`.and_` or :func:`.or_`
conjunction::
>>> print(select([t.c.x]).where(or_(t.c.x > 5, true())))
SELECT x FROM t WHERE true
>>> print(select([t.c.x]).where(and_(t.c.x > 5, false())))
SELECT x FROM t WHERE false
.. versionchanged:: 0.9 :func:`.true` and :func:`.false` feature
better integrated behavior within conjunctions and on dialects
that don't support true/false constants.
.. seealso::
:func:`.true`
"""
return False_()
def compare(self, other):
return isinstance(other, False_)
class True_(ColumnElement):
"""Represent the ``true`` keyword, or equivalent, in a SQL statement.
:class:`.True_` is accessed as a constant via the
:func:`.true` function.
"""
__visit_name__ = 'true'
@util.memoized_property
def type(self):
return type_api.BOOLEANTYPE
def _negate(self):
return False_()
@classmethod
def _ifnone(cls, other):
if other is None:
return cls._instance()
else:
return other
@classmethod
def _instance(cls):
"""Return a constant :class:`.True_` construct.
E.g.::
>>> from sqlalchemy import true
>>> print(select([t.c.x]).where(true()))
SELECT x FROM t WHERE true
A backend which does not support true/false constants will render as
an expression against 1 or 0::
>>> print(select([t.c.x]).where(true()))
SELECT x FROM t WHERE 1 = 1
The :func:`.true` and :func:`.false` constants also feature
"short circuit" operation within an :func:`.and_` or :func:`.or_`
conjunction::
>>> print(select([t.c.x]).where(or_(t.c.x > 5, true())))
SELECT x FROM t WHERE true
>>> print(select([t.c.x]).where(and_(t.c.x > 5, false())))
SELECT x FROM t WHERE false
.. versionchanged:: 0.9 :func:`.true` and :func:`.false` feature
better integrated behavior within conjunctions and on dialects
that don't support true/false constants.
.. seealso::
:func:`.false`
"""
return True_()
def compare(self, other):
return isinstance(other, True_)
class ClauseList(ClauseElement):
"""Describe a list of clauses, separated by an operator.
By default, the list is comma-separated, as in a column listing.
"""
__visit_name__ = 'clauselist'
def __init__(self, *clauses, **kwargs):
self.operator = kwargs.pop('operator', operators.comma_op)
self.group = kwargs.pop('group', True)
self.group_contents = kwargs.pop('group_contents', True)
text_converter = kwargs.pop(
'_literal_as_text',
_expression_literal_as_text)
if self.group_contents:
self.clauses = [
text_converter(clause).self_group(against=self.operator)
for clause in clauses]
else:
self.clauses = [
text_converter(clause)
for clause in clauses]
def __iter__(self):
return iter(self.clauses)
def __len__(self):
return len(self.clauses)
@property
def _select_iterable(self):
return iter(self)
def append(self, clause):
if self.group_contents:
self.clauses.append(_literal_as_text(clause).
self_group(against=self.operator))
else:
self.clauses.append(_literal_as_text(clause))
def _copy_internals(self, clone=_clone, **kw):
self.clauses = [clone(clause, **kw) for clause in self.clauses]
def get_children(self, **kwargs):
return self.clauses
@property
def _from_objects(self):
return list(itertools.chain(*[c._from_objects for c in self.clauses]))
def self_group(self, against=None):
if self.group and operators.is_precedent(self.operator, against):
return Grouping(self)
else:
return self
def compare(self, other, **kw):
"""Compare this :class:`.ClauseList` to the given :class:`.ClauseList`,
including a comparison of all the clause items.
"""
if not isinstance(other, ClauseList) and len(self.clauses) == 1:
return self.clauses[0].compare(other, **kw)
elif isinstance(other, ClauseList) and \
len(self.clauses) == len(other.clauses) and \
self.operator is other.operator:
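# AND/OR conjunctions are order-insignificant, so their clauses are
# compared as an unordered pairing; all other operators require
# positional equality.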
if self.operator in (operators.and_, operators.or_):
completed = set()
for clause in self.clauses:
for other_clause in set(other.clauses).difference(completed):
if clause.compare(other_clause, **kw):
completed.add(other_clause)
break
return len(completed) == len(other.clauses)
else:
for i in range(0, len(self.clauses)):
if not self.clauses[i].compare(other.clauses[i], **kw):
return False
else:
return True
else:
return False
class BooleanClauseList(ClauseList, ColumnElement):
__visit_name__ = 'clauselist'
def __init__(self, *arg, **kw):
raise NotImplementedError(
"BooleanClauseList has a private constructor")
@classmethod
def _construct(cls, operator, continue_on, skip_on, *clauses, **kw):
convert_clauses = []
clauses = [
_expression_literal_as_text(clause)
for clause in
util.coerce_generator_arg(clauses)
]
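# Short-circuit handling: for and_(), true() elements are dropped
# (continue_on=True_) while a false() element collapses the whole
# conjunction (skip_on=False_); or_() uses the mirrored pairing.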
for clause in clauses:
if isinstance(clause, continue_on):
continue
elif isinstance(clause, skip_on):
return clause.self_group(against=operators._asbool)
convert_clauses.append(clause)
if len(convert_clauses) == 1:
return convert_clauses[0].self_group(against=operators._asbool)
elif not convert_clauses and clauses:
return clauses[0].self_group(against=operators._asbool)
convert_clauses = [c.self_group(against=operator)
for c in convert_clauses]
self = cls.__new__(cls)
self.clauses = convert_clauses
self.group = True
self.operator = operator
self.group_contents = True
self.type = type_api.BOOLEANTYPE
return self
@classmethod
def and_(cls, *clauses):
"""Produce a conjunction of expressions joined by ``AND``.
E.g.::
from sqlalchemy import and_
stmt = select([users_table]).where(
and_(
users_table.c.name == 'wendy',
users_table.c.enrolled == True
)
)
The :func:`.and_` conjunction is also available using the
Python ``&`` operator (though note that compound expressions
need to be parenthesized in order to function with Python
operator precedence behavior)::
stmt = select([users_table]).where(
(users_table.c.name == 'wendy') &
(users_table.c.enrolled == True)
)
The :func:`.and_` operation is also implicit in some cases;
the :meth:`.Select.where` method for example can be invoked multiple
times against a statement, which will have the effect of each
clause being combined using :func:`.and_`::
stmt = select([users_table]).\
where(users_table.c.name == 'wendy').\
where(users_table.c.enrolled == True)
.. seealso::
:func:`.or_`
"""
return cls._construct(operators.and_, True_, False_, *clauses)
@classmethod
def or_(cls, *clauses):
"""Produce a conjunction of expressions joined by ``OR``.
E.g.::
from sqlalchemy import or_
stmt = select([users_table]).where(
or_(
users_table.c.name == 'wendy',
users_table.c.name == 'jack'
)
)
The :func:`.or_` conjunction is also available using the
Python ``|`` operator (though note that compound expressions
need to be parenthesized in order to function with Python
operator precedence behavior)::
stmt = select([users_table]).where(
(users_table.c.name == 'wendy') |
(users_table.c.name == 'jack')
)
.. seealso::
:func:`.and_`
"""
return cls._construct(operators.or_, False_, True_, *clauses)
@property
def _select_iterable(self):
return (self, )
def self_group(self, against=None):
if not self.clauses:
return self
else:
return super(BooleanClauseList, self).self_group(against=against)
def _negate(self):
return ClauseList._negate(self)
and_ = BooleanClauseList.and_
or_ = BooleanClauseList.or_
class Tuple(ClauseList, ColumnElement):
"""Represent a SQL tuple."""
def __init__(self, *clauses, **kw):
"""Return a :class:`.Tuple`.
Main usage is to produce a composite IN construct::
from sqlalchemy import tuple_
tuple_(table.c.col1, table.c.col2).in_(
[(1, 2), (5, 12), (10, 19)]
)
.. warning::
The composite IN construct is not supported by all backends,
and is currently known to work on PostgreSQL and MySQL,
but not SQLite. Unsupported backends will raise
a subclass of :class:`~sqlalchemy.exc.DBAPIError` when such
an expression is invoked.
"""
clauses = [_literal_as_binds(c) for c in clauses]
self._type_tuple = [arg.type for arg in clauses]
self.type = kw.pop('type_', self._type_tuple[0]
if self._type_tuple else type_api.NULLTYPE)
super(Tuple, self).__init__(*clauses, **kw)
@property
def _select_iterable(self):
return (self, )
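# When a Tuple is compared (e.g. via IN), each element of the candidate
# value is bound individually, typed against the corresponding entry
# captured in _type_tuple above.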
def _bind_param(self, operator, obj, type_=None):
return Tuple(*[
BindParameter(None, o, _compared_to_operator=operator,
_compared_to_type=compared_to_type, unique=True,
type_=type_)
for o, compared_to_type in zip(obj, self._type_tuple)
]).self_group()
class Case(ColumnElement):
"""Represent a ``CASE`` expression.
:class:`.Case` is produced using the :func:`.case` factory function,
as in::
from sqlalchemy import case
stmt = select([users_table]).\
where(
case(
[
(users_table.c.name == 'wendy', 'W'),
(users_table.c.name == 'jack', 'J')
],
else_='E'
)
)
Details on :class:`.Case` usage are at :func:`.case`.
.. seealso::
:func:`.case`
"""
__visit_name__ = 'case'
def __init__(self, whens, value=None, else_=None):
r"""Produce a ``CASE`` expression.
The ``CASE`` construct in SQL is a conditional object that
acts somewhat analogously to an "if/then" construct in other
languages. It returns an instance of :class:`.Case`.
:func:`.case` in its usual form is passed a list of "when"
constructs, that is, a list of conditions and results as tuples::
from sqlalchemy import case
stmt = select([users_table]).\
where(
case(
[
(users_table.c.name == 'wendy', 'W'),
(users_table.c.name == 'jack', 'J')
],
else_='E'
)
)
The above statement will produce SQL resembling::
SELECT id, name FROM user
WHERE CASE
WHEN (name = :name_1) THEN :param_1
WHEN (name = :name_2) THEN :param_2
ELSE :param_3
END
When simple equality expressions of several values against a single
parent column are needed, :func:`.case` also has a "shorthand" format
used via the
:paramref:`.case.value` parameter, which is passed a column
expression to be compared. In this form, the :paramref:`.case.whens`
parameter is passed as a dictionary containing expressions to be
compared against keyed to result expressions. The statement below is
equivalent to the preceding statement::
stmt = select([users_table]).\
where(
case(
{"wendy": "W", "jack": "J"},
value=users_table.c.name,
else_='E'
)
)
The values which are accepted as result values in
:paramref:`.case.whens` as well as with :paramref:`.case.else_` are
coerced from Python literals into :func:`.bindparam` constructs.
SQL expressions, e.g. :class:`.ColumnElement` constructs, are accepted
as well. To coerce a literal string expression into a constant
expression rendered inline, use the :func:`.literal_column` construct,
as in::
from sqlalchemy import case, literal_column
case(
[
(
orderline.c.qty > 100,
literal_column("'greaterthan100'")
),
(
orderline.c.qty > 10,
literal_column("'greaterthan10'")
)
],
else_=literal_column("'lessthan10'")
)
The above will render the given constants without using bound
parameters for the result values (but still for the comparison
values), as in::
CASE
WHEN (orderline.qty > :qty_1) THEN 'greaterthan100'
WHEN (orderline.qty > :qty_2) THEN 'greaterthan10'
ELSE 'lessthan10'
END
:param whens: The criteria to be compared against;
:paramref:`.case.whens` accepts two different forms, based on
whether or not :paramref:`.case.value` is used.
In the first form, it accepts a list of 2-tuples; each 2-tuple
consists of ``(<sql expression>, <value>)``, where the SQL
expression is a boolean expression and "value" is a resulting value,
e.g.::
case([
(users_table.c.name == 'wendy', 'W'),
(users_table.c.name == 'jack', 'J')
])
In the second form, it accepts a Python dictionary of comparison
values mapped to a resulting value; this form requires
:paramref:`.case.value` to be present, and values will be compared
using the ``==`` operator, e.g.::
case(
{"wendy": "W", "jack": "J"},
value=users_table.c.name
)
:param value: An optional SQL expression which will be used as a
fixed "comparison point" for candidate values within a dictionary
passed to :paramref:`.case.whens`.
:param else\_: An optional SQL expression which will be the evaluated
result of the ``CASE`` construct if all expressions within
:paramref:`.case.whens` evaluate to false. When omitted, most
databases will produce a result of NULL if none of the "when"
expressions evaluate to true.
"""
try:
whens = util.dictlike_iteritems(whens)
except TypeError:
pass
if value is not None:
whenlist = [
(_literal_as_binds(c).self_group(),
_literal_as_binds(r)) for (c, r) in whens
]
else:
whenlist = [
(_no_literals(c).self_group(),
_literal_as_binds(r)) for (c, r) in whens
]
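# With a "value" present, the when-conditions are plain comparison
# values and may be coerced to bound parameters; without it they must
# already be SQL expressions, hence _no_literals above.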
if whenlist:
type_ = list(whenlist[-1])[-1].type
else:
type_ = None
if value is None:
self.value = None
else:
self.value = _literal_as_binds(value)
self.type = type_
self.whens = whenlist
if else_ is not None:
self.else_ = _literal_as_binds(else_)
else:
self.else_ = None
def _copy_internals(self, clone=_clone, **kw):
if self.value is not None:
self.value = clone(self.value, **kw)
self.whens = [(clone(x, **kw), clone(y, **kw))
for x, y in self.whens]
if self.else_ is not None:
self.else_ = clone(self.else_, **kw)
def get_children(self, **kwargs):
if self.value is not None:
yield self.value
for x, y in self.whens:
yield x
yield y
if self.else_ is not None:
yield self.else_
@property
def _from_objects(self):
return list(itertools.chain(*[x._from_objects for x in
self.get_children()]))
def literal_column(text, type_=None):
r"""Produce a :class:`.ColumnClause` object that has the
:paramref:`.column.is_literal` flag set to True.
:func:`.literal_column` is similar to :func:`.column`, except that
it is more often used as a "standalone" column expression that renders
exactly as stated; while :func:`.column` stores a string name that
will be assumed to be part of a table and may be quoted as such,
:func:`.literal_column` can be that, or any other arbitrary column-oriented
expression.
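For example, a minimal sketch rendering a raw SQL fragment as a column
expression (the names here are illustrative)::

    from sqlalchemy import literal_column, select

    stmt = select([literal_column("x + 1").label("x_plus_one")])

The above renders SQL resembling ``SELECT x + 1 AS x_plus_one``.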
:param text: the text of the expression; can be any SQL expression.
Quoting rules will not be applied. To specify a column-name expression
which should be subject to quoting rules, use the :func:`column`
function.
:param type\_: an optional :class:`~sqlalchemy.types.TypeEngine`
object which will
provide result-set translation and additional expression semantics for
this column. If left as None the type will be NullType.
.. seealso::
:func:`.column`
:func:`.text`
:ref:`sqlexpression_literal_column`
"""
return ColumnClause(text, type_=type_, is_literal=True)
class Cast(ColumnElement):
"""Represent a ``CAST`` expression.
:class:`.Cast` is produced using the :func:`.cast` factory function,
as in::
from sqlalchemy import cast, Numeric
stmt = select([
cast(product_table.c.unit_price, Numeric(10, 4))
])
Details on :class:`.Cast` usage are at :func:`.cast`.
.. seealso::
:func:`.cast`
"""
__visit_name__ = 'cast'
def __init__(self, expression, type_):
"""Produce a ``CAST`` expression.
:func:`.cast` returns an instance of :class:`.Cast`.
E.g.::
from sqlalchemy import cast, Numeric
stmt = select([
cast(product_table.c.unit_price, Numeric(10, 4))
])
The above statement will produce SQL resembling::
SELECT CAST(unit_price AS NUMERIC(10, 4)) FROM product
The :func:`.cast` function performs two distinct functions when
used. The first is that it renders the ``CAST`` expression within
the resulting SQL string. The second is that it associates the given
type (e.g. :class:`.TypeEngine` class or instance) with the column
expression on the Python side, which means the expression will take
on the expression operator behavior associated with that type,
as well as the bound-value handling and result-row-handling behavior
of the type.
.. versionchanged:: 0.9.0 :func:`.cast` now applies the given type
to the expression such that it takes effect on the bound-value,
e.g. the Python-to-database direction, in addition to the
result handling, e.g. database-to-Python, direction.
An alternative to :func:`.cast` is the :func:`.type_coerce` function.
This function performs the second task of associating an expression
with a specific type, but does not render the ``CAST`` expression
in SQL.
:param expression: A SQL expression, such as a :class:`.ColumnElement`
expression or a Python string which will be coerced into a bound
literal value.
:param type_: A :class:`.TypeEngine` class or instance indicating
the type to which the ``CAST`` should apply.
.. seealso::
:func:`.type_coerce` - Python-side type coercion without emitting
CAST.
"""
self.type = type_api.to_instance(type_)
self.clause = _literal_as_binds(expression, type_=self.type)
self.typeclause = TypeClause(self.type)
def _copy_internals(self, clone=_clone, **kw):
self.clause = clone(self.clause, **kw)
self.typeclause = clone(self.typeclause, **kw)
def get_children(self, **kwargs):
return self.clause, self.typeclause
@property
def _from_objects(self):
return self.clause._from_objects
class TypeCoerce(ColumnElement):
"""Represent a Python-side type-coercion wrapper.
:class:`.TypeCoerce` supplies the :func:`.expression.type_coerce`
function; see that function for usage details.
.. versionchanged:: 1.1 The :func:`.type_coerce` function now produces
a persistent :class:`.TypeCoerce` wrapper object rather than
translating the given object in place.
.. seealso::
:func:`.expression.type_coerce`
"""
__visit_name__ = 'type_coerce'
def __init__(self, expression, type_):
"""Associate a SQL expression with a particular type, without rendering
``CAST``.
E.g.::
from sqlalchemy import type_coerce
stmt = select([
type_coerce(log_table.c.date_string, StringDateTime())
])
The above construct will produce a :class:`.TypeCoerce` object, which
renders SQL that labels the expression, but otherwise does not
modify its value on the SQL side::
SELECT date_string AS anon_1 FROM log
When result rows are fetched, the ``StringDateTime`` type
will be applied to result rows on behalf of the ``date_string`` column.
The rationale for the "anon_1" label is so that the type-coerced
column remains separate in the list of result columns vs. other
type-coerced or direct values of the target column. In order to
provide a named label for the expression, use
:meth:`.ColumnElement.label`::
stmt = select([
type_coerce(
log_table.c.date_string, StringDateTime()).label('date')
])
A type that features bound-value handling will also have that behavior
take effect when literal values or :func:`.bindparam` constructs are
passed to :func:`.type_coerce` as targets.
For example, if a type implements the
:meth:`.TypeEngine.bind_expression`
method or :meth:`.TypeEngine.bind_processor` method or equivalent,
these functions will take effect at statement compilation/execution
time when a literal value is passed, as in::
# bound-value handling of MyStringType will be applied to the
# literal value "some string"
stmt = select([type_coerce("some string", MyStringType)])
:func:`.type_coerce` is similar to the :func:`.cast` function,
except that it does not render the ``CAST`` expression in the resulting
statement.
:param expression: A SQL expression, such as a :class:`.ColumnElement`
expression or a Python string which will be coerced into a bound
literal value.
:param type_: A :class:`.TypeEngine` class or instance indicating
the type to which the expression is coerced.
.. seealso::
:func:`.cast`
"""
self.type = type_api.to_instance(type_)
self.clause = _literal_as_binds(expression, type_=self.type)
def _copy_internals(self, clone=_clone, **kw):
self.clause = clone(self.clause, **kw)
self.__dict__.pop('typed_expression', None)
def get_children(self, **kwargs):
return self.clause,
@property
def _from_objects(self):
return self.clause._from_objects
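# For a plain bound parameter, hand back a re-typed clone so that the
# bind processing of self.type takes effect; other expressions pass
# through unchanged.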
@util.memoized_property
def typed_expression(self):
if isinstance(self.clause, BindParameter):
bp = self.clause._clone()
bp.type = self.type
return bp
else:
return self.clause
class Extract(ColumnElement):
"""Represent a SQL EXTRACT clause, ``extract(field FROM expr)``."""
__visit_name__ = 'extract'
def __init__(self, field, expr, **kwargs):
"""Return a :class:`.Extract` construct.
This is typically available as :func:`.extract`
as well as ``func.extract`` from the
:data:`.func` namespace.
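E.g., a minimal sketch (``logs`` is a hypothetical table)::

    from sqlalchemy import extract, select

    stmt = select([extract('year', logs.c.created_at)])

The above renders SQL resembling
``SELECT EXTRACT(year FROM logs.created_at)``.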
"""
self.type = type_api.INTEGERTYPE
self.field = field
self.expr = _literal_as_binds(expr, None)
def _copy_internals(self, clone=_clone, **kw):
self.expr = clone(self.expr, **kw)
def get_children(self, **kwargs):
return self.expr,
@property
def _from_objects(self):
return self.expr._from_objects
class _label_reference(ColumnElement):
"""Wrap a column expression as it appears in a 'reference' context.
Such an expression is one that includes an _order_by_label_element,
which is a Label, or a DESC / ASC construct wrapping a Label.
The production of _label_reference() should occur when an expression
is added to this context; this includes the ORDER BY or GROUP BY of a
SELECT statement, as well as a few other places, such as the ORDER BY
within an OVER clause.
"""
__visit_name__ = 'label_reference'
def __init__(self, element):
self.element = element
def _copy_internals(self, clone=_clone, **kw):
self.element = clone(self.element, **kw)
@property
def _from_objects(self):
return ()
class _textual_label_reference(ColumnElement):
__visit_name__ = 'textual_label_reference'
def __init__(self, element):
self.element = element
@util.memoized_property
def _text_clause(self):
return TextClause._create_text(self.element)
class UnaryExpression(ColumnElement):
"""Define a 'unary' expression.
A unary expression has a single column expression
and an operator. The operator can be placed on the left
(where it is called the 'operator') or right (where it is called the
'modifier') of the column expression.
:class:`.UnaryExpression` is the basis for several unary operators
including those used by :func:`.desc`, :func:`.asc`, :func:`.distinct`,
:func:`.nullsfirst` and :func:`.nullslast`.
"""
__visit_name__ = 'unary'
def __init__(self, element, operator=None, modifier=None,
type_=None, negate=None, wraps_column_expression=False):
self.operator = operator
self.modifier = modifier
self.element = element.self_group(
against=self.operator or self.modifier)
self.type = type_api.to_instance(type_)
self.negate = negate
self.wraps_column_expression = wraps_column_expression
@classmethod
def _create_nullsfirst(cls, column):
"""Produce the ``NULLS FIRST`` modifier for an ``ORDER BY`` expression.
:func:`.nullsfirst` is intended to modify the expression produced
by :func:`.asc` or :func:`.desc`, and indicates how NULL values
should be handled when they are encountered during ordering::
from sqlalchemy import desc, nullsfirst
stmt = select([users_table]).\
order_by(nullsfirst(desc(users_table.c.name)))
The SQL expression from the above would resemble::
SELECT id, name FROM user ORDER BY name DESC NULLS FIRST
Like :func:`.asc` and :func:`.desc`, :func:`.nullsfirst` is typically
invoked from the column expression itself using
:meth:`.ColumnElement.nullsfirst`, rather than as its standalone
function version, as in::
stmt = (select([users_table]).
order_by(users_table.c.name.desc().nullsfirst())
)
.. seealso::
:func:`.asc`
:func:`.desc`
:func:`.nullslast`
:meth:`.Select.order_by`
"""
return UnaryExpression(
_literal_as_label_reference(column),
modifier=operators.nullsfirst_op,
wraps_column_expression=False)
@classmethod
def _create_nullslast(cls, column):
"""Produce the ``NULLS LAST`` modifier for an ``ORDER BY`` expression.
:func:`.nullslast` is intended to modify the expression produced
by :func:`.asc` or :func:`.desc`, and indicates how NULL values
should be handled when they are encountered during ordering::
from sqlalchemy import desc, nullslast
stmt = select([users_table]).\
order_by(nullslast(desc(users_table.c.name)))
The SQL expression from the above would resemble::
SELECT id, name FROM user ORDER BY name DESC NULLS LAST
Like :func:`.asc` and :func:`.desc`, :func:`.nullslast` is typically
invoked from the column expression itself using
:meth:`.ColumnElement.nullslast`, rather than as its standalone
function version, as in::
stmt = select([users_table]).\
order_by(users_table.c.name.desc().nullslast())
.. seealso::
:func:`.asc`
:func:`.desc`
:func:`.nullsfirst`
:meth:`.Select.order_by`
"""
return UnaryExpression(
_literal_as_label_reference(column),
modifier=operators.nullslast_op,
wraps_column_expression=False)
@classmethod
def _create_desc(cls, column):
"""Produce a descending ``ORDER BY`` clause element.
e.g.::
from sqlalchemy import desc
stmt = select([users_table]).order_by(desc(users_table.c.name))
will produce SQL as::
SELECT id, name FROM user ORDER BY name DESC
The :func:`.desc` function is a standalone version of the
:meth:`.ColumnElement.desc` method available on all SQL expressions,
e.g.::
stmt = select([users_table]).order_by(users_table.c.name.desc())
:param column: A :class:`.ColumnElement` (e.g. scalar SQL expression)
with which to apply the :func:`.desc` operation.
.. seealso::
:func:`.asc`
:func:`.nullsfirst`
:func:`.nullslast`
:meth:`.Select.order_by`
"""
return UnaryExpression(
_literal_as_label_reference(column),
modifier=operators.desc_op,
wraps_column_expression=False)
@classmethod
def _create_asc(cls, column):
"""Produce an ascending ``ORDER BY`` clause element.
e.g.::
from sqlalchemy import asc
stmt = select([users_table]).order_by(asc(users_table.c.name))
will produce SQL as::
SELECT id, name FROM user ORDER BY name ASC
The :func:`.asc` function is a standalone version of the
:meth:`.ColumnElement.asc` method available on all SQL expressions,
e.g.::
stmt = select([users_table]).order_by(users_table.c.name.asc())
:param column: A :class:`.ColumnElement` (e.g. scalar SQL expression)
with which to apply the :func:`.asc` operation.
.. seealso::
:func:`.desc`
:func:`.nullsfirst`
:func:`.nullslast`
:meth:`.Select.order_by`
"""
return UnaryExpression(
_literal_as_label_reference(column),
modifier=operators.asc_op,
wraps_column_expression=False)
@classmethod
def _create_distinct(cls, expr):
"""Produce an column-expression-level unary ``DISTINCT`` clause.
This applies the ``DISTINCT`` keyword to an individual column
expression, and is typically contained within an aggregate function,
as in::
from sqlalchemy import distinct, func
stmt = select([func.count(distinct(users_table.c.name))])
The above would produce an expression resembling::
SELECT COUNT(DISTINCT name) FROM user
The :func:`.distinct` function is also available as a column-level
method, e.g. :meth:`.ColumnElement.distinct`, as in::
stmt = select([func.count(users_table.c.name.distinct())])
The :func:`.distinct` operator is different from the
:meth:`.Select.distinct` method of :class:`.Select`,
which produces a ``SELECT`` statement
with ``DISTINCT`` applied to the result set as a whole,
e.g. a ``SELECT DISTINCT`` expression. See that method for further
information.
.. seealso::
:meth:`.ColumnElement.distinct`
:meth:`.Select.distinct`
:data:`.func`
"""
expr = _literal_as_binds(expr)
return UnaryExpression(
expr, operator=operators.distinct_op,
type_=expr.type, wraps_column_expression=False)
@property
def _order_by_label_element(self):
if self.modifier in (operators.desc_op, operators.asc_op):
return self.element._order_by_label_element
else:
return None
@property
def _from_objects(self):
return self.element._from_objects
def _copy_internals(self, clone=_clone, **kw):
self.element = clone(self.element, **kw)
def get_children(self, **kwargs):
return self.element,
def compare(self, other, **kw):
"""Compare this :class:`UnaryExpression` against the given
:class:`.ClauseElement`."""
return (
isinstance(other, UnaryExpression) and
self.operator == other.operator and
self.modifier == other.modifier and
self.element.compare(other.element, **kw)
)
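# Negation strategy: use the explicit inverse operator when one was
# supplied; otherwise a boolean-typed expression is wrapped in NOT, and
# anything else falls back to the generic ClauseElement behavior.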
def _negate(self):
if self.negate is not None:
return UnaryExpression(
self.element,
operator=self.negate,
negate=self.operator,
modifier=self.modifier,
type_=self.type,
wraps_column_expression=self.wraps_column_expression)
elif self.type._type_affinity is type_api.BOOLEANTYPE._type_affinity:
return UnaryExpression(
self.self_group(against=operators.inv),
operator=operators.inv,
type_=type_api.BOOLEANTYPE,
wraps_column_expression=self.wraps_column_expression,
negate=None)
else:
return ClauseElement._negate(self)
def self_group(self, against=None):
if self.operator and operators.is_precedent(self.operator, against):
return Grouping(self)
else:
return self
class CollectionAggregate(UnaryExpression):
"""Forms the basis for right-hand collection operator modifiers
ANY and ALL.
The ANY and ALL keywords are available in different ways on different
backends. On PostgreSQL, they only work for an ARRAY type. On
MySQL, they only work for subqueries.
"""
@classmethod
def _create_any(cls, expr):
"""Produce an ANY expression.
This may apply to an array type for some dialects (e.g. PostgreSQL),
or to a subquery for others (e.g. MySQL). E.g.::
# postgresql '5 = ANY (somearray)'
expr = 5 == any_(mytable.c.somearray)
# mysql '5 = ANY (SELECT value FROM table)'
expr = 5 == any_(select([table.c.value]))
.. versionadded:: 1.1
.. seealso::
:func:`.expression.all_`
"""
expr = _literal_as_binds(expr)
if expr.is_selectable and hasattr(expr, 'as_scalar'):
expr = expr.as_scalar()
expr = expr.self_group()
return CollectionAggregate(
expr, operator=operators.any_op,
type_=type_api.NULLTYPE, wraps_column_expression=False)
@classmethod
def _create_all(cls, expr):
"""Produce an ALL expression.
This may apply to an array type for some dialects (e.g. PostgreSQL),
or to a subquery for others (e.g. MySQL). E.g.::
# postgresql '5 = ALL (somearray)'
expr = 5 == all_(mytable.c.somearray)
# mysql '5 = ALL (SELECT value FROM table)'
expr = 5 == all_(select([table.c.value]))
.. versionadded:: 1.1
.. seealso::
:func:`.expression.any_`
"""
expr = _literal_as_binds(expr)
if expr.is_selectable and hasattr(expr, 'as_scalar'):
expr = expr.as_scalar()
expr = expr.self_group()
return CollectionAggregate(
expr, operator=operators.all_op,
type_=type_api.NULLTYPE, wraps_column_expression=False)
# operate and reverse_operate are hardwired to
# dispatch onto the type comparator directly, so that we can
# ensure "reversed" behavior.
def operate(self, op, *other, **kwargs):
if not operators.is_comparison(op):
raise exc.ArgumentError(
"Only comparison operators may be used with ANY/ALL")
kwargs['reverse'] = True
return self.comparator.operate(operators.mirror(op), *other, **kwargs)
def reverse_operate(self, op, other, **kwargs):
# comparison operators should never call reverse_operate
assert not operators.is_comparison(op)
raise exc.ArgumentError(
"Only comparison operators may be used with ANY/ALL")
class AsBoolean(UnaryExpression):
def __init__(self, element, operator, negate):
self.element = element
self.type = type_api.BOOLEANTYPE
self.operator = operator
self.negate = negate
self.modifier = None
self.wraps_column_expression = True
def self_group(self, against=None):
return self
def _negate(self):
# TODO: this assumes the element is the True_() or False_()
# object, but this assumption isn't enforced and
# ColumnElement._negate() can send any number of expressions here
return self.element._negate()
class BinaryExpression(ColumnElement):
"""Represent an expression that is ``LEFT <operator> RIGHT``.
A :class:`.BinaryExpression` is generated automatically
whenever two column expressions are used in a Python binary expression::
>>> from sqlalchemy.sql import column
>>> column('a') + column('b')
<sqlalchemy.sql.expression.BinaryExpression object at 0x101029dd0>
>>> print(column('a') + column('b'))
a + b
"""
__visit_name__ = 'binary'
def __init__(self, left, right, operator, type_=None,
negate=None, modifiers=None):
# allow compatibility with libraries that
# refer to BinaryExpression directly and pass strings
if isinstance(operator, util.string_types):
operator = operators.custom_op(operator)
self._orig = (left, right)
self.left = left.self_group(against=operator)
self.right = right.self_group(against=operator)
self.operator = operator
self.type = type_api.to_instance(type_)
self.negate = negate
if modifiers is None:
self.modifiers = {}
else:
self.modifiers = modifiers
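# __bool__ gives "column == column" a Python truth value by comparing
# the hashes of the two original operands; any other operator has no
# defined boolean value.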
def __bool__(self):
if self.operator in (operator.eq, operator.ne):
return self.operator(hash(self._orig[0]), hash(self._orig[1]))
else:
raise TypeError("Boolean value of this clause is not defined")
__nonzero__ = __bool__
@property
def is_comparison(self):
return operators.is_comparison(self.operator)
@property
def _from_objects(self):
return self.left._from_objects + self.right._from_objects
def _copy_internals(self, clone=_clone, **kw):
self.left = clone(self.left, **kw)
self.right = clone(self.right, **kw)
def get_children(self, **kwargs):
return self.left, self.right
def compare(self, other, **kw):
"""Compare this :class:`BinaryExpression` against the
given :class:`BinaryExpression`."""
return (
isinstance(other, BinaryExpression) and
self.operator == other.operator and
(
self.left.compare(other.left, **kw) and
self.right.compare(other.right, **kw) or
(
operators.is_commutative(self.operator) and
self.left.compare(other.right, **kw) and
self.right.compare(other.left, **kw)
)
)
)
def self_group(self, against=None):
if operators.is_precedent(self.operator, against):
return Grouping(self)
else:
return self
def _negate(self):
if self.negate is not None:
return BinaryExpression(
self.left,
self.right,
self.negate,
negate=self.operator,
type_=self.type,
modifiers=self.modifiers)
else:
return super(BinaryExpression, self)._negate()
class Slice(ColumnElement):
"""Represent SQL for a Python array-slice object.
This is not a specific SQL construct at this level, but
may be interpreted by specific dialects, e.g. PostgreSQL.
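E.g., a sketch assuming a PostgreSQL ``ARRAY`` column ``t.c.data``::

    expr = t.c.data[2:5]   # renders as data[2:5] on PostgreSQL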
"""
__visit_name__ = 'slice'
def __init__(self, start, stop, step):
self.start = start
self.stop = stop
self.step = step
self.type = type_api.NULLTYPE
def self_group(self, against=None):
assert against is operator.getitem
return self
class IndexExpression(BinaryExpression):
"""Represent the class of expressions that are like an "index" operation.
"""
pass
class Grouping(ColumnElement):
"""Represent a grouping within a column expression"""
__visit_name__ = 'grouping'
def __init__(self, element):
self.element = element
self.type = getattr(element, 'type', type_api.NULLTYPE)
def self_group(self, against=None):
return self
@property
def _key_label(self):
return self._label
@property
def _label(self):
return getattr(self.element, '_label', None) or self.anon_label
def _copy_internals(self, clone=_clone, **kw):
self.element = clone(self.element, **kw)
def get_children(self, **kwargs):
return self.element,
@property
def _from_objects(self):
return self.element._from_objects
def __getattr__(self, attr):
return getattr(self.element, attr)
def __getstate__(self):
return {'element': self.element, 'type': self.type}
def __setstate__(self, state):
self.element = state['element']
self.type = state['type']
def compare(self, other, **kw):
return isinstance(other, Grouping) and \
self.element.compare(other.element)
RANGE_UNBOUNDED = util.symbol("RANGE_UNBOUNDED")
RANGE_CURRENT = util.symbol("RANGE_CURRENT")
class Over(ColumnElement):
"""Represent an OVER clause.
This is a special operator against a so-called
"window" function, as well as any aggregate function,
which produces results relative to the result set
itself. It's supported only by certain database
backends.
"""
__visit_name__ = 'over'
order_by = None
partition_by = None
def __init__(
self, element, partition_by=None,
order_by=None, range_=None, rows=None):
"""Produce an :class:`.Over` object against a function.
Used against aggregate or so-called "window" functions,
for database backends that support window functions.
:func:`~.expression.over` is usually called using
the :meth:`.FunctionElement.over` method, e.g.::
func.row_number().over(order_by=mytable.c.some_column)
Would produce::
ROW_NUMBER() OVER(ORDER BY some_column)
Ranges are also possible using the :paramref:`.expression.over.range_`
and :paramref:`.expression.over.rows` parameters. These
mutually-exclusive parameters each accept a 2-tuple, which contains
a combination of integers and None::
func.row_number().over(order_by=my_table.c.some_column, range_=(None, 0))
The above would produce::
ROW_NUMBER() OVER(ORDER BY some_column RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
A value of None indicates "unbounded", a
value of zero indicates "current row", and negative / positive
integers indicate "preceding" and "following":
* RANGE BETWEEN 5 PRECEDING AND 10 FOLLOWING::
func.row_number().over(order_by='x', range_=(-5, 10))
* ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW::
func.row_number().over(order_by='x', rows=(None, 0))
* RANGE BETWEEN 2 PRECEDING AND UNBOUNDED FOLLOWING::
func.row_number().over(order_by='x', range_=(-2, None))
* RANGE BETWEEN 1 FOLLOWING AND 3 FOLLOWING::
func.row_number().over(order_by='x', range_=(1, 3))
.. versionadded:: 1.1 support for RANGE / ROWS within a window
:param element: a :class:`.FunctionElement`, :class:`.WithinGroup`,
or other compatible construct.
:param partition_by: a column element or string, or a list
of such, that will be used as the PARTITION BY clause
of the OVER construct.
:param order_by: a column element or string, or a list
of such, that will be used as the ORDER BY clause
of the OVER construct.
:param range_: optional range clause for the window. This is a
tuple value which can contain integer values or None, and will
render a RANGE BETWEEN PRECEDING / FOLLOWING clause
.. versionadded:: 1.1
:param rows: optional rows clause for the window. This is a tuple
value which can contain integer values or None, and will render
a ROWS BETWEEN PRECEDING / FOLLOWING clause.
.. versionadded:: 1.1
This function is also available from the :data:`~.expression.func`
construct itself via the :meth:`.FunctionElement.over` method.
.. seealso::
:data:`.expression.func`
:func:`.expression.within_group`
"""
self.element = element
if order_by is not None:
self.order_by = ClauseList(
*util.to_list(order_by),
_literal_as_text=_literal_as_label_reference)
if partition_by is not None:
self.partition_by = ClauseList(
*util.to_list(partition_by),
_literal_as_text=_literal_as_label_reference)
if range_:
self.range_ = self._interpret_range(range_)
if rows:
raise exc.ArgumentError(
"'range_' and 'rows' are mutually exclusive")
else:
self.rows = None
elif rows:
self.rows = self._interpret_range(rows)
self.range_ = None
else:
self.rows = self.range_ = None
def _interpret_range(self, range_):
if not isinstance(range_, tuple) or len(range_) != 2:
raise exc.ArgumentError("2-tuple expected for range/rows")
if range_[0] is None:
lower = RANGE_UNBOUNDED
else:
try:
lower = int(range_[0])
except ValueError:
raise exc.ArgumentError(
"Integer or None expected for range value")
else:
if lower == 0:
lower = RANGE_CURRENT
if range_[1] is None:
upper = RANGE_UNBOUNDED
else:
try:
upper = int(range_[1])
except ValueError:
raise exc.ArgumentError(
"Integer or None expected for range value")
else:
if upper == 0:
upper = RANGE_CURRENT
return lower, upper
@property
def func(self):
"""the element referred to by this :class:`.Over`
clause.
.. deprecated:: 1.1 the ``func`` element has been renamed to
``.element``. The two attributes are synonymous though
``.func`` is read-only.
"""
return self.element
@util.memoized_property
def type(self):
return self.element.type
def get_children(self, **kwargs):
return [c for c in
(self.element, self.partition_by, self.order_by)
if c is not None]
def _copy_internals(self, clone=_clone, **kw):
self.element = clone(self.element, **kw)
if self.partition_by is not None:
self.partition_by = clone(self.partition_by, **kw)
if self.order_by is not None:
self.order_by = clone(self.order_by, **kw)
@property
def _from_objects(self):
return list(itertools.chain(
*[c._from_objects for c in
(self.element, self.partition_by, self.order_by)
if c is not None]
))
class WithinGroup(ColumnElement):
"""Represent a WITHIN GROUP (ORDER BY) clause.
This is a special operator against so-called
"ordered set aggregate" and "hypothetical
set aggregate" functions, including ``percentile_cont()``,
``rank()``, ``dense_rank()``, etc.
It's supported only by certain database backends, such as PostgreSQL,
Oracle and MS SQL Server.
The :class:`.WithinGroup` construct extracts its type from the
method :meth:`.FunctionElement.within_group_type`. If this returns
``None``, the function's ``.type`` is used.
"""
__visit_name__ = 'withingroup'
order_by = None
def __init__(self, element, *order_by):
r"""Produce a :class:`.WithinGroup` object against a function.
Used against so-called "ordered set aggregate" and "hypothetical
set aggregate" functions, including :class:`.percentile_cont`,
:class:`.rank`, :class:`.dense_rank`, etc.
:func:`~.expression.within_group` is usually called using
the :meth:`.FunctionElement.within_group` method, e.g.::
from sqlalchemy import within_group
stmt = select([
department.c.id,
func.percentile_cont(0.5).within_group(
department.c.salary.desc()
)
])
The above statement would produce SQL similar to
``SELECT department.id, percentile_cont(0.5)
WITHIN GROUP (ORDER BY department.salary DESC)``.
:param element: a :class:`.FunctionElement` construct, typically
generated by :data:`~.expression.func`.
:param \*order_by: one or more column elements that will be used
as the ORDER BY clause of the WITHIN GROUP construct.
.. versionadded:: 1.1
.. seealso::
:data:`.expression.func`
:func:`.expression.over`
"""
self.element = element
if order_by is not None:
self.order_by = ClauseList(
*util.to_list(order_by),
_literal_as_text=_literal_as_label_reference)
def over(self, partition_by=None, order_by=None):
"""Produce an OVER clause against this :class:`.WithinGroup`
construct.
This function has the same signature as that of
:meth:`.FunctionElement.over`.
"""
return Over(self, partition_by=partition_by, order_by=order_by)
@util.memoized_property
def type(self):
wgt = self.element.within_group_type(self)
if wgt is not None:
return wgt
else:
return self.element.type
def get_children(self, **kwargs):
return [c for c in
(self.element, self.order_by)
if c is not None]
def _copy_internals(self, clone=_clone, **kw):
self.element = clone(self.element, **kw)
if self.order_by is not None:
self.order_by = clone(self.order_by, **kw)
@property
def _from_objects(self):
return list(itertools.chain(
*[c._from_objects for c in
(self.element, self.order_by)
if c is not None]
))
class FunctionFilter(ColumnElement):
"""Represent a function FILTER clause.
This is a special operator against aggregate and window functions,
which controls which rows are passed to it.
It's supported only by certain database backends.
Invocation of :class:`.FunctionFilter` is via
:meth:`.FunctionElement.filter`::
func.count(1).filter(True)
.. versionadded:: 1.0.0
.. seealso::
:meth:`.FunctionElement.filter`
"""
__visit_name__ = 'funcfilter'
criterion = None
def __init__(self, func, *criterion):
"""Produce a :class:`.FunctionFilter` object against a function.
Used against aggregate and window functions,
for database backends that support the "FILTER" clause.
E.g.::
from sqlalchemy import funcfilter
funcfilter(func.count(1), MyClass.name == 'some name')
Would produce "COUNT(1) FILTER (WHERE myclass.name = 'some name')".
This function is also available from the :data:`~.expression.func`
construct itself via the :meth:`.FunctionElement.filter` method.
.. versionadded:: 1.0.0
.. seealso::
:meth:`.FunctionElement.filter`
"""
self.func = func
self.filter(*criterion)
def filter(self, *criterion):
"""Produce an additional FILTER against the function.
This method adds additional criteria to the initial criteria
set up by :meth:`.FunctionElement.filter`.
Multiple criteria are joined together at SQL render time
via ``AND``.
"""
for criterion in list(criterion):
criterion = _expression_literal_as_text(criterion)
if self.criterion is not None:
self.criterion = self.criterion & criterion
else:
self.criterion = criterion
return self
def over(self, partition_by=None, order_by=None):
"""Produce an OVER clause against this filtered function.
Used against aggregate or so-called "window" functions,
for database backends that support window functions.
The expression::
func.rank().filter(MyClass.y > 5).over(order_by='x')
is shorthand for::
from sqlalchemy import over, funcfilter
over(funcfilter(func.rank(), MyClass.y > 5), order_by='x')
See :func:`~.expression.over` for a full description.
"""
return Over(self, partition_by=partition_by, order_by=order_by)
@util.memoized_property
def type(self):
return self.func.type
def get_children(self, **kwargs):
return [c for c in
(self.func, self.criterion)
if c is not None]
def _copy_internals(self, clone=_clone, **kw):
self.func = clone(self.func, **kw)
if self.criterion is not None:
self.criterion = clone(self.criterion, **kw)
@property
def _from_objects(self):
return list(itertools.chain(
*[c._from_objects for c in (self.func, self.criterion)
if c is not None]
))
class Label(ColumnElement):
"""Represents a column label (AS).
Represent a label, as typically applied to any column-level
element using the ``AS`` SQL keyword.
"""
__visit_name__ = 'label'
def __init__(self, name, element, type_=None):
"""Return a :class:`Label` object for the
given :class:`.ColumnElement`.
A label changes the name of an element in the columns clause of a
``SELECT`` statement, typically via the ``AS`` SQL keyword.
This functionality is more conveniently available via the
:meth:`.ColumnElement.label` method on :class:`.ColumnElement`.
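E.g., a minimal sketch (``users_table`` is assumed from earlier
examples)::

    stmt = select([users_table.c.name.label('user_name')])

rendering the column with an ``AS user_name`` label.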
:param name: label name
:param element: a :class:`.ColumnElement`.
"""
if isinstance(element, Label):
self._resolve_label = element._label
while isinstance(element, Label):
element = element.element
if name:
self.name = name
self._resolve_label = self.name
else:
self.name = _anonymous_label(
'%%(%d %s)s' % (id(self), getattr(element, 'name', 'anon'))
)
self.key = self._label = self._key_label = self.name
self._element = element
self._type = type_
self._proxies = [element]
def __reduce__(self):
return self.__class__, (self.name, self._element, self._type)
@util.memoized_property
def _allow_label_resolve(self):
return self.element._allow_label_resolve
@property
def _order_by_label_element(self):
return self
@util.memoized_property
def type(self):
return type_api.to_instance(
self._type or getattr(self._element, 'type', None)
)
@util.memoized_property
def element(self):
return self._element.self_group(against=operators.as_)
def self_group(self, against=None):
return self._apply_to_inner(self._element.self_group, against=against)
def _negate(self):
return self._apply_to_inner(self._element._negate)
def _apply_to_inner(self, fn, *arg, **kw):
sub_element = fn(*arg, **kw)
if sub_element is not self._element:
return Label(self.name,
sub_element,
type_=self._type)
else:
return self
@property
def primary_key(self):
return self.element.primary_key
@property
def foreign_keys(self):
return self.element.foreign_keys
def get_children(self, **kwargs):
return self.element,
def _copy_internals(self, clone=_clone, anonymize_labels=False, **kw):
self._element = clone(self._element, **kw)
self.__dict__.pop('element', None)
self.__dict__.pop('_allow_label_resolve', None)
if anonymize_labels:
self.name = self._resolve_label = _anonymous_label(
'%%(%d %s)s' % (
id(self), getattr(self.element, 'name', 'anon'))
)
self.key = self._label = self._key_label = self.name
@property
def _from_objects(self):
return self.element._from_objects
def _make_proxy(self, selectable, name=None, **kw):
e = self.element._make_proxy(selectable,
name=name if name else self.name)
e._proxies.append(self)
if self._type is not None:
e.type = self._type
return e
class ColumnClause(Immutable, ColumnElement):
"""Represents a column expression from any textual string.
The :class:`.ColumnClause`, a lightweight analogue to the
:class:`.Column` class, is typically invoked using the
:func:`.column` function, as in::
from sqlalchemy import column
id, name = column("id"), column("name")
stmt = select([id, name]).select_from("user")
The above statement would produce SQL like::
SELECT id, name FROM user
:class:`.ColumnClause` is the immediate superclass of the schema-specific
:class:`.Column` object. While the :class:`.Column` class has all the
same capabilities as :class:`.ColumnClause`, the :class:`.ColumnClause`
class is usable by itself in those cases where behavioral requirements
are limited to simple SQL expression generation. The object has none of
the associations with schema-level metadata or with execution-time
behavior that :class:`.Column` does, so in that sense is a "lightweight"
version of :class:`.Column`.
Full details on :class:`.ColumnClause` usage are at :func:`.column`.
.. seealso::
:func:`.column`
:class:`.Column`
"""
__visit_name__ = 'column'
onupdate = default = server_default = server_onupdate = None
_is_multiparam_column = False
_memoized_property = util.group_expirable_memoized_property()
def __init__(self, text, type_=None, is_literal=False, _selectable=None):
"""Produce a :class:`.ColumnClause` object.
The :class:`.ColumnClause` is a lightweight analogue to the
:class:`.Column` class. The :func:`.column` function can
be invoked with just a name alone, as in::
from sqlalchemy import column
id, name = column("id"), column("name")
stmt = select([id, name]).select_from("user")
The above statement would produce SQL like::
SELECT id, name FROM user
Once constructed, :func:`.column` may be used like any other SQL
expression element such as within :func:`.select` constructs::
from sqlalchemy.sql import column
id, name = column("id"), column("name")
stmt = select([id, name]).select_from("user")
The text handled by :func:`.column` is assumed to be handled
like the name of a database column; if the string contains mixed case,
special characters, or matches a known reserved word on the target
backend, the column expression will render using the quoting
behavior determined by the backend. To produce a textual SQL
expression that is rendered exactly without any quoting,
use :func:`.literal_column` instead, or pass ``True`` as the
value of :paramref:`.column.is_literal`. Additionally, full SQL
statements are best handled using the :func:`.text` construct.
:func:`.column` can be used in a table-like
fashion by combining it with the :func:`.table` function
(which is the lightweight analogue to :class:`.Table`) to produce
a working table construct with minimal boilerplate::
from sqlalchemy import table, column, select
user = table("user",
column("id"),
column("name"),
column("description"),
)
stmt = select([user.c.description]).where(user.c.name == 'wendy')
A :func:`.column` / :func:`.table` construct like that illustrated
above can be created in an
ad-hoc fashion and is not associated with any
:class:`.schema.MetaData`, DDL, or events, unlike its
:class:`.Table` counterpart.
.. versionchanged:: 1.0.0 :func:`.expression.column` can now
be imported from the plain ``sqlalchemy`` namespace like any
other SQL element.
:param text: the text of the element.
:param type_: :class:`.types.TypeEngine` object which can associate
this :class:`.ColumnClause` with a type.
:param is_literal: if True, the :class:`.ColumnClause` is assumed to
be an exact expression that will be delivered to the output with no
quoting rules applied regardless of case-sensitivity settings. The
:func:`.literal_column()` function essentially invokes
:func:`.column` while passing ``is_literal=True``.
.. seealso::
:class:`.Column`
:func:`.literal_column`
:func:`.table`
:func:`.text`
:ref:`sqlexpression_literal_column`
"""
self.key = self.name = text
self.table = _selectable
self.type = type_api.to_instance(type_)
self.is_literal = is_literal
def _compare_name_for_result(self, other):
if self.is_literal or \
self.table is None or self.table._textual or \
not hasattr(other, 'proxy_set') or (
isinstance(other, ColumnClause) and
(other.is_literal or
other.table is None or
other.table._textual)
):
return (hasattr(other, 'name') and self.name == other.name) or \
(hasattr(other, '_label') and self._label == other._label)
else:
return other.proxy_set.intersection(self.proxy_set)
def _get_table(self):
return self.__dict__['table']
def _set_table(self, table):
self._memoized_property.expire_instance(self)
self.__dict__['table'] = table
table = property(_get_table, _set_table)
@_memoized_property
def _from_objects(self):
t = self.table
if t is not None:
return [t]
else:
return []
@util.memoized_property
def description(self):
if util.py3k:
return self.name
else:
return self.name.encode('ascii', 'backslashreplace')
@_memoized_property
def _key_label(self):
if self.key != self.name:
return self._gen_label(self.key)
else:
return self._label
@_memoized_property
def _label(self):
return self._gen_label(self.name)
@_memoized_property
def _render_label_in_columns_clause(self):
return self.table is not None
def _gen_label(self, name):
t = self.table
if self.is_literal:
return None
elif t is not None and t.named_with_column:
if getattr(t, 'schema', None):
label = t.schema.replace('.', '_') + "_" + \
t.name + "_" + name
else:
label = t.name + "_" + name
# propagate name quoting rules for labels.
if getattr(name, "quote", None) is not None:
if isinstance(label, quoted_name):
label.quote = name.quote
else:
label = quoted_name(label, name.quote)
elif getattr(t.name, "quote", None) is not None:
# can't get this situation to occur, so let's
# assert false on it for now
assert not isinstance(label, quoted_name)
label = quoted_name(label, t.name.quote)
# ensure the label name doesn't conflict with that
# of an existing column
if label in t.c:
_label = label
counter = 1
while _label in t.c:
_label = label + "_" + str(counter)
counter += 1
label = _label
return _as_truncated(label)
else:
return name
def _bind_param(self, operator, obj, type_=None):
return BindParameter(self.key, obj,
_compared_to_operator=operator,
_compared_to_type=self.type,
type_=type_,
unique=True)
def _make_proxy(self, selectable, name=None, attach=True,
name_is_truncatable=False, **kw):
# propagate the "is_literal" flag only if we are keeping our name,
# otherwise it's considered to be a label
is_literal = self.is_literal and (name is None or name == self.name)
c = self._constructor(
_as_truncated(name or self.name) if
name_is_truncatable else
(name or self.name),
type_=self.type,
_selectable=selectable,
is_literal=is_literal
)
if name is None:
c.key = self.key
c._proxies = [self]
if selectable._is_clone_of is not None:
c._is_clone_of = \
selectable._is_clone_of.columns.get(c.key)
if attach:
selectable._columns[c.key] = c
return c
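# Illustrative sketch (editor's addition, not part of the original SQLAlchemy
# source; the helper name below is hypothetical). Assuming SQLAlchemy 1.x
# behavior, _gen_label above produces "<table>_<column>" for a table-bound
# column:
def _demo_column_label():
    from sqlalchemy import table, column
    user = table("user", column("id"))
    assert user.c.id._label == "user_id"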
class CollationClause(ColumnElement):
__visit_name__ = "collation"
def __init__(self, collation):
self.collation = collation
class _IdentifiedClause(Executable, ClauseElement):
__visit_name__ = 'identified'
_execution_options = \
Executable._execution_options.union({'autocommit': False})
def __init__(self, ident):
self.ident = ident
class SavepointClause(_IdentifiedClause):
__visit_name__ = 'savepoint'
class RollbackToSavepointClause(_IdentifiedClause):
__visit_name__ = 'rollback_to_savepoint'
class ReleaseSavepointClause(_IdentifiedClause):
__visit_name__ = 'release_savepoint'
class quoted_name(util.MemoizedSlots, util.text_type):
"""Represent a SQL identifier combined with quoting preferences.
:class:`.quoted_name` is a Python unicode/str subclass which
represents a particular identifier name along with a
``quote`` flag. This ``quote`` flag, when set to
``True`` or ``False``, overrides automatic quoting behavior
for this identifier in order to either unconditionally quote
or to not quote the name. If left at its default of ``None``,
quoting behavior is applied to the identifier on a per-backend basis
based on an examination of the token itself.
A :class:`.quoted_name` object with ``quote=True`` is also
prevented from being modified in the case of a so-called
"name normalize" option. Certain database backends, such as
Oracle, Firebird, and DB2 "normalize" case-insensitive names
as uppercase. The SQLAlchemy dialects for these backends
convert from SQLAlchemy's lower-case-means-insensitive convention
to the upper-case-means-insensitive conventions of those backends.
The ``quote=True`` flag here will prevent this conversion from occurring
to support an identifier that's quoted as all lower case against
such a backend.
The :class:`.quoted_name` object is normally created automatically
when specifying the name for key schema constructs such as
:class:`.Table`, :class:`.Column`, and others. The class can also be
passed explicitly as the name to any function that receives a name which
can be quoted, such as when using the :meth:`.Engine.has_table` method with
an unconditionally quoted name::
from sqlalchemy import create_engine
from sqlalchemy.sql import quoted_name
engine = create_engine("oracle+cx_oracle://some_dsn")
engine.has_table(quoted_name("some_table", True))
The above will run the "has table" check against the Oracle backend,
passing the name exactly as ``"some_table"`` without converting to
upper case.
.. versionadded:: 0.9.0
.. versionchanged:: 1.2 The :class:`.quoted_name` construct is now
importable from ``sqlalchemy.sql``, in addition to the previous
location of ``sqlalchemy.sql.elements``.
"""
__slots__ = 'quote', 'lower', 'upper'
def __new__(cls, value, quote):
if value is None:
return None
# experimental - don't bother with quoted_name
# if quote flag is None. doesn't seem to make any dent
# in performance however
# elif not sprcls and quote is None:
# return value
elif isinstance(value, cls) and (
quote is None or value.quote == quote
):
return value
self = super(quoted_name, cls).__new__(cls, value)
self.quote = quote
return self
def __reduce__(self):
return quoted_name, (util.text_type(self), self.quote)
def _memoized_method_lower(self):
if self.quote:
return self
else:
return util.text_type(self).lower()
def _memoized_method_upper(self):
if self.quote:
return self
else:
return util.text_type(self).upper()
def __repr__(self):
backslashed = self.encode('ascii', 'backslashreplace')
if not util.py2k:
backslashed = backslashed.decode('ascii')
return "'%s'" % backslashed
class _truncated_label(quoted_name):
"""A unicode subclass used to identify symbolic "
"names that may require truncation."""
__slots__ = ()
def __new__(cls, value, quote=None):
quote = getattr(value, "quote", quote)
# return super(_truncated_label, cls).__new__(cls, value, quote, True)
return super(_truncated_label, cls).__new__(cls, value, quote)
def __reduce__(self):
return self.__class__, (util.text_type(self), self.quote)
def apply_map(self, map_):
return self
class conv(_truncated_label):
"""Mark a string indicating that a name has already been converted
by a naming convention.
This is a string subclass that indicates a name that should not be
subject to any further naming conventions.
E.g. when we create a :class:`.Constraint` using a naming convention
as follows::
m = MetaData(naming_convention={
"ck": "ck_%(table_name)s_%(constraint_name)s"
})
t = Table('t', m, Column('x', Integer),
CheckConstraint('x > 5', name='x5'))
The name of the above constraint will be rendered as ``"ck_t_x5"``.
That is, the existing name ``x5`` is used in the naming convention as the
``constraint_name`` token.
In some situations, such as in migration scripts, we may be rendering
the above :class:`.CheckConstraint` with a name that's already been
converted. In order to make sure the name isn't double-modified, the
new name is applied using the :func:`.schema.conv` marker. We can
use this explicitly as follows::
m = MetaData(naming_convention={
"ck": "ck_%(table_name)s_%(constraint_name)s"
})
t = Table('t', m, Column('x', Integer),
CheckConstraint('x > 5', name=conv('ck_t_x5')))
Where above, the :func:`.schema.conv` marker indicates that the constraint
name here is final, and the name will render as ``"ck_t_x5"`` and not
``"ck_t_ck_t_x5"``
.. versionadded:: 0.9.4
.. seealso::
:ref:`constraint_naming_conventions`
"""
__slots__ = ()
class _defer_name(_truncated_label):
"""mark a name as 'deferred' for the purposes of automated name
generation.
"""
__slots__ = ()
def __new__(cls, value):
if value is None:
return _NONE_NAME
elif isinstance(value, conv):
return value
else:
return super(_defer_name, cls).__new__(cls, value)
def __reduce__(self):
return self.__class__, (util.text_type(self), )
class _defer_none_name(_defer_name):
"""indicate a 'deferred' name that was ultimately the value None."""
__slots__ = ()
_NONE_NAME = _defer_none_name("_unnamed_")
# for backwards compatibility in case
# someone is re-implementing the
# _truncated_identifier() sequence in a custom
# compiler
_generated_label = _truncated_label
class _anonymous_label(_truncated_label):
"""A unicode subclass used to identify anonymously
generated names."""
__slots__ = ()
def __add__(self, other):
return _anonymous_label(
quoted_name(
util.text_type.__add__(self, util.text_type(other)),
self.quote)
)
def __radd__(self, other):
return _anonymous_label(
quoted_name(
util.text_type.__add__(util.text_type(other), self),
self.quote)
)
def apply_map(self, map_):
if self.quote is not None:
# preserve quoting only if necessary
return quoted_name(self % map_, self.quote)
else:
# else skip the constructor call
return self % map_
def _as_truncated(value):
"""coerce the given value to :class:`._truncated_label`.
Existing :class:`._truncated_label` and
:class:`._anonymous_label` objects are passed
unchanged.
"""
if isinstance(value, _truncated_label):
return value
else:
return _truncated_label(value)
def _string_or_unprintable(element):
if isinstance(element, util.string_types):
return element
else:
try:
return str(element)
except Exception:
return "unprintable element %r" % element
def _expand_cloned(elements):
"""expand the given set of ClauseElements to be the set of all 'cloned'
predecessors.
"""
return itertools.chain(*[x._cloned_set for x in elements])
def _select_iterables(elements):
"""expand tables into individual columns in the
given list of column expressions.
"""
return itertools.chain(*[c._select_iterable for c in elements])
def _cloned_intersection(a, b):
"""return the intersection of sets a and b, counting
any overlap between 'cloned' predecessors.
The returned set is in terms of the entities present within 'a'.
"""
all_overlap = set(_expand_cloned(a)).intersection(_expand_cloned(b))
return set(elem for elem in a
if all_overlap.intersection(elem._cloned_set))
def _cloned_difference(a, b):
all_overlap = set(_expand_cloned(a)).intersection(_expand_cloned(b))
return set(elem for elem in a
if not all_overlap.intersection(elem._cloned_set))
@util.dependencies("sqlalchemy.sql.functions")
def _labeled(functions, element):
if not hasattr(element, 'name') or \
isinstance(element, functions.FunctionElement):
return element.label(None)
else:
return element
def _is_column(col):
"""True if ``col`` is an instance of :class:`.ColumnElement`."""
return isinstance(col, ColumnElement)
def _find_columns(clause):
"""locate Column objects within the given expression."""
cols = util.column_set()
traverse(clause, {}, {'column': cols.add})
return cols
# there is some inconsistency here between the usage of
# inspect() vs. checking for Visitable and __clause_element__.
# Ideally all functions here would derive from inspect(),
# however the inspect() versions add significant callcount
# overhead for critical functions like _interpret_as_column_or_from().
# Generally, the column-based functions are more performance critical
# and are fine just checking for __clause_element__(). It is only
# _interpret_as_from() where we'd like to be able to receive ORM entities
# that have no defined namespace, hence inspect() is needed there.
def _column_as_key(element):
if isinstance(element, util.string_types):
return element
if hasattr(element, '__clause_element__'):
element = element.__clause_element__()
try:
return element.key
except AttributeError:
return None
def _clause_element_as_expr(element):
if hasattr(element, '__clause_element__'):
return element.__clause_element__()
else:
return element
def _literal_as_label_reference(element):
if isinstance(element, util.string_types):
return _textual_label_reference(element)
elif hasattr(element, '__clause_element__'):
element = element.__clause_element__()
return _literal_as_text(element)
def _literal_and_labels_as_label_reference(element):
if isinstance(element, util.string_types):
return _textual_label_reference(element)
elif hasattr(element, '__clause_element__'):
element = element.__clause_element__()
if isinstance(element, ColumnElement) and \
element._order_by_label_element is not None:
return _label_reference(element)
else:
return _literal_as_text(element)
def _expression_literal_as_text(element):
return _literal_as_text(element, warn=True)
def _literal_as_text(element, warn=False):
if isinstance(element, Visitable):
return element
elif hasattr(element, '__clause_element__'):
return element.__clause_element__()
elif isinstance(element, util.string_types):
if warn:
util.warn_limited(
"Textual SQL expression %(expr)r should be "
"explicitly declared as text(%(expr)r)",
{"expr": util.ellipses_string(element)})
return TextClause(util.text_type(element))
elif isinstance(element, (util.NoneType, bool)):
return _const_expr(element)
else:
raise exc.ArgumentError(
"SQL expression object or string expected, got object of type %r "
"instead" % type(element)
)
def _no_literals(element):
if hasattr(element, '__clause_element__'):
return element.__clause_element__()
elif not isinstance(element, Visitable):
raise exc.ArgumentError("Ambiguous literal: %r. Use the 'text()' "
"function to indicate a SQL expression "
"literal, or 'literal()' to indicate a "
"bound value." % element)
else:
return element
def _is_literal(element):
return not isinstance(element, Visitable) and \
not hasattr(element, '__clause_element__')
def _only_column_elements_or_none(element, name):
if element is None:
return None
else:
return _only_column_elements(element, name)
def _only_column_elements(element, name):
if hasattr(element, '__clause_element__'):
element = element.__clause_element__()
if not isinstance(element, ColumnElement):
raise exc.ArgumentError(
"Column-based expression object expected for argument "
"'%s'; got: '%s', type %s" % (name, element, type(element)))
return element
def _literal_as_binds(element, name=None, type_=None):
if hasattr(element, '__clause_element__'):
return element.__clause_element__()
elif not isinstance(element, Visitable):
if element is None:
return Null()
else:
return BindParameter(name, element, type_=type_, unique=True)
else:
return element
_guess_straight_column = re.compile(r'^\w\S*$', re.I)
def _interpret_as_column_or_from(element):
if isinstance(element, Visitable):
return element
elif hasattr(element, '__clause_element__'):
return element.__clause_element__()
insp = inspection.inspect(element, raiseerr=False)
if insp is None:
if isinstance(element, (util.NoneType, bool)):
return _const_expr(element)
elif hasattr(insp, "selectable"):
return insp.selectable
# be forgiving as this is an extremely common
# and known expression
if element == "*":
guess_is_literal = True
elif isinstance(element, (numbers.Number)):
return ColumnClause(str(element), is_literal=True)
else:
element = str(element)
# give in to temptation, as this fact we are guessing about
# is not one we've ever previously needed our users to tell us;
# but let them know we are not happy about it
guess_is_literal = not _guess_straight_column.match(element)
util.warn_limited(
"Textual column expression %(column)r should be "
"explicitly declared with text(%(column)r), "
"or use %(literal_column)s(%(column)r) "
"for more specificity",
{
"column": util.ellipses_string(element),
"literal_column": "literal_column"
if guess_is_literal else "column"
})
return ColumnClause(
element,
is_literal=guess_is_literal)
def _const_expr(element):
if isinstance(element, (Null, False_, True_)):
return element
elif element is None:
return Null()
elif element is False:
return False_()
elif element is True:
return True_()
else:
raise exc.ArgumentError(
"Expected None, False, or True"
)
def _type_from_args(args):
for a in args:
if not a.type._isnull:
return a.type
else:
return type_api.NULLTYPE
def _corresponding_column_or_error(fromclause, column,
require_embedded=False):
c = fromclause.corresponding_column(column,
require_embedded=require_embedded)
if c is None:
raise exc.InvalidRequestError(
"Given column '%s', attached to table '%s', "
"failed to locate a corresponding column from table '%s'"
%
(column,
getattr(column, 'table', None),
fromclause.description)
)
return c
class AnnotatedColumnElement(Annotated):
def __init__(self, element, values):
Annotated.__init__(self, element, values)
ColumnElement.comparator._reset(self)
for attr in ('name', 'key', 'table'):
if self.__dict__.get(attr, False) is None:
self.__dict__.pop(attr)
def _with_annotations(self, values):
clone = super(AnnotatedColumnElement, self)._with_annotations(values)
ColumnElement.comparator._reset(clone)
return clone
@util.memoized_property
def name(self):
"""pull 'name' from parent, if not present"""
return self._Annotated__element.name
@util.memoized_property
def table(self):
"""pull 'table' from parent, if not present"""
return self._Annotated__element.table
@util.memoized_property
def key(self):
"""pull 'key' from parent, if not present"""
return self._Annotated__element.key
@util.memoized_property
def info(self):
return self._Annotated__element.info
@util.memoized_property
def anon_label(self):
return self._Annotated__element.anon_label
| fernandog/Medusa | ext/sqlalchemy/sql/elements.py | Python | gpl-3.0 | 150,021 | ["VisIt"] | ac917603eb8039a1fba78e9f2dca9b8459d77013bcd8c0a68627b4b418f32121 |
# ==================================================================================================
# Copyright 2011 Twitter, Inc.
# --------------------------------------------------------------------------------------------------
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this work except in compliance with the License.
# You may obtain a copy of the License in the LICENSE file, or at:
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==================================================================================================
import pytest
from twitter.common.quantity import Amount, Time, Data
from twitter.common.metrics import Label, MutatorGauge
from twitter.common.metrics import (
CompoundMetrics,
Observable,
RootMetrics)
from twitter.common.metrics.metrics import Metrics
def test_root_metrics_singleton():
rm = RootMetrics()
rm2 = RootMetrics()
assert id(rm) == id(rm2)
def test_basic_registration_and_clear():
lb = Label('ping', 'pong')
rm = RootMetrics()
rm.register(lb)
assert rm.sample() == {'ping': 'pong'}
rm.clear()
assert rm.sample() == {}
def test_nontrivial_gauges():
for label_value in ['a', 0, 2.5, [1,2,"3"], {'a': 'b'}, {'c': None}, False]:
lb = Label('ping', label_value)
rm = RootMetrics()
rm.register(lb)
assert rm.sample() == {'ping': label_value}
rm.clear()
assert rm.sample() == {}
def test_basic_scoping():
lb = Label('ping', 'pong')
rm = RootMetrics()
rm.register(lb)
rm.scope('bing').register(lb)
assert rm.sample() == { 'ping': 'pong', 'bing.ping': 'pong' }
rm.clear()
def test_scoped_registration_uses_references():
mg = MutatorGauge('name', 'brian')
rm = RootMetrics()
rm.scope('earth').register(mg)
rm.scope('pluto').register(mg)
assert rm.sample() == { 'earth.name': 'brian', 'pluto.name': 'brian' }
mg.write('zargon')
assert rm.sample() == { 'earth.name': 'zargon', 'pluto.name': 'zargon' }
rm.clear()
def test_register_string():
rm = RootMetrics()
hello_gauge = rm.register('hello')
assert rm.sample() == { 'hello': None }
hello_gauge.write('poop')
assert rm.sample() == { 'hello': 'poop' }
rm.clear()
def test_nested_scopes():
rm = RootMetrics()
mg = rm.scope('a').scope('b').scope('c').register('123')
mg.write(Amount(1, Time.MILLISECONDS))
assert rm.sample() == {'a.b.c.123': '1 ms'}
rm.clear()
def test_bad_scope_names():
rm = RootMetrics()
my_scope = rm.scope('my_scope')
with pytest.raises(TypeError):
my_scope.scope(None)
with pytest.raises(TypeError):
my_scope.scope({})
with pytest.raises(TypeError):
my_scope.scope(123)
with pytest.raises(TypeError):
my_scope.scope(RootMetrics)
def test_compound_metrics():
metrics1 = Metrics()
metrics2 = Metrics()
metrics1.register(Label('value', 'first'))
metrics2.register(Label('value', 'second'))
assert CompoundMetrics(metrics1, metrics2).sample() == {'value': 'second'}
metrics1.register(Label('other', 'third'))
assert CompoundMetrics(metrics1, metrics2).sample() == {
'value': 'second', 'other': 'third'}
def test_observable():
class Derp(Observable):
def __init__(self):
self.metrics.register(Label('value', 'derp value'))
metrics = Metrics()
metrics.register_observable('derpspace', Derp())
assert metrics.sample() == {'derpspace.value': 'derp value'}
| abel-von/commons | tests/python/twitter/common/metrics/test_metrics.py | Python | apache-2.0 | 3,717 | ["Brian"] | 3f753a6af642ea9e0f81a60244beafb33b048e99ac1b078c8f7669a94bb4c431 |
"""
Node classes (`Apply`, `Variable`) and expression graph algorithms.
To read about what theano graphs are from a user perspective, have a look at
`graph.html <../doc/graph.html>`__.
"""
from __future__ import print_function
from collections import deque
from copy import copy
from itertools import count
import theano
from theano.gof import utils
from six import string_types, integer_types, iteritems
from theano.misc.ordered_set import OrderedSet
__docformat__ = "restructuredtext en"
# Lazy imports to avoid circular dependencies.
is_same_graph_with_merge = None
equal_computations = None
NoParams = object()
class Node(utils.object2):
"""
A Node in a theano graph.
Graphs contain two kinds of Nodes -- Variable and Apply.
Edges in the graph are not explicitly represented.
Instead each Node keeps track of its parents via
Variable.owner / Apply.inputs and its children
via Variable.clients / Apply.outputs.
"""
def get_parents(self):
"""
Return a list of the parents of this node.
Should return a copy--i.e., modifying the return
value should not modify the graph structure.
"""
raise NotImplementedError()
class Apply(Node):
"""
An :term:`Apply` instance is a node in an expression graph which represents
the application of an `Op` to some input `Variable` nodes, producing some
output `Variable` nodes.
This class is typically instantiated by an Op's make_node() function, which
is usually called by that Op's __call__() function.
An Apply instance serves as a simple structure with three important
attributes:
- :literal:`inputs` : a list of `Variable` nodes that represent the
arguments of the expression,
- :literal:`outputs` : a list of `Variable` nodes that represent the
outputs of the expression, and
- :literal:`op` : an `Op` instance that determines the nature of the
expression being applied.
The driver `compile.function` uses Apply's inputs attribute together with
Variable's owner attribute to search the expression graph and determine
which inputs are necessary to compute the function's outputs.
A `Linker` uses the Apply instance's `op` field to compute the variables.
Comparing with the Python language, an `Apply` instance is theano's version
of a function call (or expression instance) whereas `Op` is theano's version
of a function definition.
Parameters
----------
op : `Op` instance
inputs : list of Variable instances
outputs : list of Variable instances
Notes
-----
The owner field of each output in the outputs list will be set to self.
If an output element has an owner that is neither None nor self, then a
ValueError exception will be raised.
"""
def __init__(self, op, inputs, outputs):
self.op = op
self.inputs = []
self.tag = utils.scratchpad()
if not isinstance(inputs, (list, tuple)):
raise TypeError("The inputs of an Apply must be a list or tuple")
if not isinstance(outputs, (list, tuple)):
raise TypeError("The output of an Apply must be a list or tuple")
# filter inputs to make sure each element is a Variable
for input in inputs:
if isinstance(input, Variable):
self.inputs.append(input)
else:
raise TypeError("The 'inputs' argument to Apply must contain Variable instances, not %s" % input)
self.outputs = []
# filter outputs to make sure each element is a Variable
for i, output in enumerate(outputs):
if isinstance(output, Variable):
if output.owner is None:
output.owner = self
output.index = i
elif output.owner is not self or output.index != i:
raise ValueError("All output variables passed to Apply must belong to it.")
self.outputs.append(output)
else:
raise TypeError("The 'outputs' argument to Apply must contain Variable instances with no owner, not %s" % output)
def run_params(self):
"""
Returns the params for the node, or NoParams if no params is set.
"""
if hasattr(self.op, 'get_params'):
return self.op.get_params(self)
return NoParams
def __getstate__(self):
d = self.__dict__
# ufuncs don't pickle/unpickle well
if hasattr(self.tag, 'ufunc'):
d = copy(self.__dict__)
t = d["tag"]
del t.ufunc
d["tag"] = t
return d
def default_output(self):
"""
Returns the default output for this node.
Returns
-------
Variable instance
An element of self.outputs, typically self.outputs[0].
Notes
-----
May raise AttributeError if self.op.default_output is out of range, or if
there are multiple outputs and self.op.default_output does not exist.
"""
do = getattr(self.op, 'default_output', None)
if do is None:
if len(self.outputs) == 1:
return self.outputs[0]
else:
raise AttributeError(
"%s.default_output should be an output index." % self.op)
elif not isinstance(do, integer_types):
raise AttributeError("%s.default_output should be an int or long" %
self.op)
elif do < 0 or do >= len(self.outputs):
raise AttributeError("%s.default_output is out of range." %
self.op)
return self.outputs[do]
out = property(default_output,
doc="alias for self.default_output()")
"""
Alias for self.default_output().
"""
def __str__(self):
return op_as_string(self.inputs, self)
def __repr__(self):
return str(self)
def __asapply__(self):
return self
def clone(self):
"""
Duplicate this Apply instance with inputs = self.inputs.
Returns
-------
object
A new Apply instance (or subclass instance) with new outputs.
Notes
-----
Tags are copied from self to the returned instance.
"""
cp = self.__class__(self.op, self.inputs,
[output.clone() for output in self.outputs])
cp.tag = copy(self.tag)
return cp
def clone_with_new_inputs(self, inputs, strict=True):
"""
Duplicate this Apply instance in a new graph.
Parameters
----------
inputs
List of Variable instances to use as inputs.
strict : bool
If True, the type fields of all the inputs must be equal
to the current ones (or compatible, for instance Tensor /
CudaNdarray of the same dtype and broadcastable patterns,
in which case they will be converted into current Type), and
returned outputs are guaranteed to have the same types as
self.outputs. If False, then there's no guarantee that the
clone's outputs will have the same types as self.outputs,
and cloning may not even be possible (it depends on the Op).
Returns
-------
object
An Apply instance with the same op but different outputs.
"""
assert isinstance(inputs, (list, tuple))
remake_node = False
new_inputs = inputs[:]
for i, (curr, new) in enumerate(zip(self.inputs, new_inputs)):
if not curr.type == new.type:
if strict:
# If compatible, casts new into curr.type
new_inputs[i] = curr.type.filter_variable(new)
else:
remake_node = True
if remake_node:
new_node = self.op.make_node(*new_inputs)
new_node.tag = copy(self.tag).__update__(new_node.tag)
else:
new_node = self.clone()
new_node.inputs = new_inputs
return new_node
def get_parents(self):
return list(self.inputs)
# convenience properties
nin = property(lambda self: len(self.inputs), doc='same as len(self.inputs)')
"""
Property: Number of inputs.
"""
nout = property(lambda self: len(self.outputs), doc='same as len(self.outputs)')
"""
Property: Number of outputs.
"""
params_type = property(lambda self: self.op.params_type, doc='type to use for the params')
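# Illustrative sketch (editor's addition, not part of the original theano
# source; the helper name below is hypothetical). Apply nodes are normally
# created by calling an Op on Variables; the node can then be inspected
# through the convenience properties above:
def _demo_apply_node():
    import theano.tensor as T
    x = T.dscalar('x')
    y = T.dscalar('y')
    z = x + y                 # Op.__call__ -> make_node -> Apply
    node = z.owner            # the Apply instance that produced z
    assert node.nin == 2 and node.nout == 1
    assert node.outputs[0] is z and node.get_parents() == [x, y]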
class Variable(Node):
"""
A :term:`Variable` is a node in an expression graph that represents a
variable.
The inputs and outputs of every `Apply` (theano.gof.Apply) are `Variable`
instances. The input and output arguments to create a `function` are also
`Variable` instances. A `Variable` is like a strongly-typed variable in
some other languages; each `Variable` contains a reference to a `Type`
instance that defines the kind of value the `Variable` can take in a
computation.
A `Variable` is a container for four important attributes:
- :literal:`type` a `Type` instance defining the kind of value this
`Variable` can have,
- :literal:`owner` either None (for graph roots) or the `Apply` instance
of which `self` is an output,
- :literal:`index` the integer such that :literal:`owner.outputs[index] is
this_variable` (ignored if `owner` is None),
- :literal:`name` a string to use in pretty-printing and debugging.
There are a few kinds of Variables to be aware of: A Variable which is the
output of a symbolic computation has a reference to the Apply instance to
which it belongs (property: owner) and the position of itself in the owner's
output list (property: index).
- `Variable` (this base type) is typically the output of a symbolic
computation.
- `Constant` (a subclass) which adds a default and un-replaceable
:literal:`value`, and requires that owner is None.
- `TensorVariable` subclass of Variable that represents a numpy.ndarray
object.
- `TensorSharedVariable` Shared version of TensorVariable.
- `SparseVariable` subclass of Variable that represents
a scipy.sparse.{csc,csr}_matrix object.
- `CudaNdarrayVariable` subclass of Variable that represents our object on
the GPU that is a subset of numpy.ndarray.
- `RandomVariable`.
A Variable which is the output of a symbolic computation will have an owner
not equal to None.
Using the Variables' owner field and the Apply nodes' inputs fields, one can
navigate a graph from an output all the way to the inputs. The opposite
direction is not possible until a FunctionGraph has annotated the Variables
with the clients field, i.e., before the compilation process has begun, a
Variable does not know which Apply nodes take it as input.
Parameters
----------
type : a Type instance
The type governs the kind of data that can be associated with this
variable.
owner : None or Apply instance
The Apply instance which computes the value for this variable.
index : None or int
The position of this Variable in owner.outputs.
name : None or str
A string for pretty-printing and debugging.
Examples
--------
.. code-block:: python
import theano
from theano import tensor
a = tensor.constant(1.5) # declare a symbolic constant
b = tensor.fscalar() # declare a symbolic floating-point scalar
c = a + b # create a simple expression
f = theano.function([b], [c]) # this works because a has a value associated with it already
assert 4.0 == f(2.5) # bind 2.5 to an internal copy of b and evaluate an internal c
theano.function([a], [c]) # compilation error because b (required by c) is undefined
theano.function([a,b], [c]) # compilation error because a is constant, it can't be an input
d = tensor.value(1.5) # create a value similar to the constant 'a'
e = d + b
theano.function([d,b], [e]) # this works. d's default value of 1.5 is ignored.
The python variables :literal:`a,b,c` all refer to instances of type
`Variable`. The `Variable` referred to by `a` is also an instance of
`Constant`.
`compile.function` uses each `Apply` instance's `inputs` attribute together
with each Variable's `owner` field to determine which inputs are necessary
to compute the function's outputs.
"""
# __slots__ = ['type', 'owner', 'index', 'name']
__count__ = count(0)
def __init__(self, type, owner=None, index=None, name=None):
super(Variable, self).__init__()
self.tag = utils.scratchpad()
self.type = type
if owner is not None and not isinstance(owner, Apply):
raise TypeError("owner must be an Apply instance", owner)
self.owner = owner
if index is not None and not isinstance(index, int):
raise TypeError("index must be an int", index)
self.index = index
if name is not None and not isinstance(name, string_types):
raise TypeError("name must be a string", name)
self.name = name
self.auto_name = 'auto_' + str(next(self.__count__))
def __str__(self):
"""
WRITEME
"""
if self.name is not None:
return self.name
if self.owner is not None:
op = self.owner.op
if self.index == op.default_output:
return str(self.owner.op) + ".out"
else:
return str(self.owner.op) + "." + str(self.index)
else:
return "<%s>" % str(self.type)
def __repr__(self):
return str(self)
def clone(self):
"""
Return a new Variable like self.
Returns
-------
Variable instance
A new Variable instance (or subclass instance) with no owner or
index.
Notes
-----
Tags are copied to the returned instance.
Name is copied to the returned instance.
"""
# return copy(self)
cp = self.__class__(self.type, None, None, self.name)
cp.tag = copy(self.tag)
return cp
def __lt__(self, other):
raise NotImplementedError('Subclasses of Variable must provide __lt__',
self.__class__.__name__)
def __le__(self, other):
raise NotImplementedError('Subclasses of Variable must provide __le__',
self.__class__.__name__)
def __gt__(self, other):
raise NotImplementedError('Subclasses of Variable must provide __gt__',
self.__class__.__name__)
def __ge__(self, other):
raise NotImplementedError('Subclasses of Variable must provide __ge__',
self.__class__.__name__)
def get_parents(self):
if self.owner is not None:
return [self.owner]
return []
def eval(self, inputs_to_values=None):
"""
Evaluates this variable.
Parameters
----------
inputs_to_values
A dictionary mapping theano Variables to values.
Examples
--------
>>> import theano.tensor as T
>>> x = T.dscalar('x')
>>> y = T.dscalar('y')
>>> z = x + y
>>> z.eval({x : 16.3, y : 12.1})
array(28.4)
We passed :func:`eval` a dictionary mapping symbolic theano
variables to the values to substitute for them, and it returned
the numerical value of the expression.
Notes
-----
`eval` will be slow the first time you call it on a variable --
it needs to call :func:`function` to compile the expression behind
the scenes. Subsequent calls to :func:`eval` on that same variable
will be fast, because the variable caches the compiled function.
This way of computing has more overhead than a normal Theano
function, so don't use it too much in real scripts.
"""
if inputs_to_values is None:
inputs_to_values = {}
if not hasattr(self, '_fn_cache'):
self._fn_cache = dict()
inputs = tuple(sorted(inputs_to_values.keys(), key=id))
if inputs not in self._fn_cache:
self._fn_cache[inputs] = theano.function(inputs, self)
args = [inputs_to_values[param] for param in inputs]
rval = self._fn_cache[inputs](*args)
return rval
def __getstate__(self):
d = self.__dict__.copy()
d.pop("_fn_cache", None)
return d
class Constant(Variable):
"""
A :term:`Constant` is a `Variable` with a `value` field that cannot be
changed at runtime.
Constant nodes make numerous optimizations possible: constant inlining in
C code, constant folding, etc.
Notes
-----
The data field is filtered by what is provided in the constructor for the
Constant's type field.
WRITEME
"""
# __slots__ = ['data']
def __init__(self, type, data, name=None):
Variable.__init__(self, type, None, None, name)
self.data = type.filter(data)
def equals(self, other):
# this does what __eq__ should do, but Variable and Apply should always be hashable by id
return isinstance(other, Constant) and self.signature() == other.signature()
def signature(self):
return (self.type, self.data)
def merge_signature(self):
return self.signature()
def __str__(self):
if self.name is not None:
return self.name
else:
name = str(self.data)
if len(name) > 20:
name = name[:10] + '...' + name[-10:]
return 'Constant{%s}' % name
def clone(self):
"""
We clone this object, but we don't clone the data to lower memory
requirement. We suppose that the data will never change.
"""
cp = self.__class__(self.type, self.data, self.name)
cp.tag = copy(self.tag)
return cp
def __set_owner(self, value):
"""
WRITEME
Raises
------
ValueError
If `value` is not `None`.
"""
if value is not None:
raise ValueError("Constant instances cannot have an owner.")
owner = property(lambda self: None, __set_owner)
value = property(lambda self: self.data, doc='read-only data access method')
# index is not defined, because the `owner` attribute must necessarily be None
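# Illustrative sketch (editor's addition; the helper name is hypothetical).
# A Constant carries a fixed value, never has an owner, and compares via its
# (type, data) signature:
def _demo_constant():
    import theano.tensor as T
    c = T.constant(1.5)
    assert c.owner is None
    assert c.value == 1.5
    assert c.equals(T.constant(1.5))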
def stack_search(start, expand, mode='bfs', build_inv=False):
"""
Search through a graph, either breadth- or depth-first.
Parameters
----------
start : deque
Search from these nodes.
expand : callable
When we get to a node, add expand(node) to the list of nodes to visit.
This function should return a list, or None.
Returns
-------
list of `Variable` or `Apply` instances (depends on `expand`)
The list of nodes in order of traversal.
Notes
-----
A node will appear at most once in the return value, even if it
appears multiple times in the start parameter.
:postcondition: every element of start is transferred to the returned list.
:postcondition: start is empty.
"""
if mode not in ('bfs', 'dfs'):
raise ValueError('mode should be bfs or dfs', mode)
rval_set = set()
rval_list = list()
if mode == 'bfs':
start_pop = start.popleft
else:
start_pop = start.pop
expand_inv = {}
while start:
l = start_pop()
if id(l) not in rval_set:
rval_list.append(l)
rval_set.add(id(l))
expand_l = expand(l)
if expand_l:
if build_inv:
for r in expand_l:
expand_inv.setdefault(r, []).append(l)
start.extend(expand_l)
assert len(rval_list) == len(rval_set)
if build_inv:
return rval_list, expand_inv
return rval_list
def ancestors(variable_list, blockers=None):
"""
Return the variables that contribute to those in variable_list (inclusive).
Parameters
----------
variable_list : list of `Variable` instances
Output `Variable` instances from which to search backward through
owners.
Returns
-------
list of `Variable` instances
All input nodes, in the order found by a left-recursive depth-first
search started at the nodes in `variable_list`.
"""
def expand(r):
if r.owner and (not blockers or r not in blockers):
return reversed(r.owner.inputs)
dfs_variables = stack_search(deque(variable_list), expand, 'dfs')
return dfs_variables
def inputs(variable_list, blockers=None):
"""
Return the inputs required to compute the given Variables.
Parameters
----------
variable_list : list of `Variable` instances
Output `Variable` instances from which to search backward through
owners.
Returns
-------
list of `Variable` instances
Input nodes with no owner, in the order found by a left-recursive
depth-first search started at the nodes in `variable_list`.
"""
vlist = ancestors(variable_list, blockers)
rval = [r for r in vlist if r.owner is None]
return rval
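# Illustrative sketch (editor's addition; the helper name is hypothetical).
# `inputs` walks back from the outputs to the ownerless roots of the graph:
def _demo_inputs():
    import theano.tensor as T
    x = T.dscalar('x')
    y = T.dscalar('y')
    z = (x + y) * x
    assert set(inputs([z])) == {x, y}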
def variables_and_orphans(i, o):
"""
WRITEME
"""
def expand(r):
if r.owner and r not in i:
l = list(r.owner.inputs) + list(r.owner.outputs)
l.reverse()
return l
variables = stack_search(deque(o), expand, 'dfs')
orphans = [r for r in variables if r.owner is None and r not in i]
return variables, orphans
def ops(i, o):
"""
WRITEME
Parameters
----------
i : list
Input L{Variable}s.
o : list
Output L{Variable}s.
Returns
-------
object
The set of ops that are contained within the subgraph that lies
between i and o, including the owners of the L{Variable}s in o and
intermediary ops between i and o, but not the owners of the L{Variable}s
in i.
"""
ops = set()
variables, orphans = variables_and_orphans(i, o)
for r in variables:
if r not in i and r not in orphans:
if r.owner is not None:
ops.add(r.owner)
return ops
def variables(i, o):
"""
WRITEME
Parameters
----------
i : list
Input L{Variable}s.
o : list
Output L{Variable}s.
Returns
-------
object
The set of Variables that are involved in the subgraph that lies
between i and o. This includes i, o, orphans(i, o) and all values of
all intermediary steps from i to o.
"""
return variables_and_orphans(i, o)[0]
def orphans(i, o):
"""
WRITEME
Parameters
----------
i : list
Input L{Variable}s.
o : list
Output L{Variable}s.
Returns
-------
object
The set of Variables which one or more Variables in o depend on but are
neither in i nor in the subgraph that lies between i and o.
Examples
--------
orphans([x], [(x+y).out]) => [y]
"""
return variables_and_orphans(i, o)[1]
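# Illustrative sketch (editor's addition; the helper name is hypothetical),
# matching the example in the docstring above: y is an orphan of the
# subgraph from [x] to [x + y]:
def _demo_orphans():
    import theano.tensor as T
    x = T.dscalar('x')
    y = T.dscalar('y')
    assert orphans([x], [x + y]) == [y]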
def clone(i, o, copy_inputs=True):
"""
Copies the subgraph contained between i and o.
Parameters
----------
i : list
Input L{Variable}s.
o : list
Output L{Variable}s.
copy_inputs : bool
If True, the inputs will be copied (defaults to True).
Returns
-------
object
The inputs and outputs of that copy.
"""
equiv = clone_get_equiv(i, o, copy_inputs)
return [equiv[input] for input in i], [equiv[output] for output in o]
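# Illustrative sketch (editor's addition; the helper name is hypothetical).
# Cloning copies the subgraph between i and o, leaving the originals intact:
def _demo_clone():
    import theano.tensor as T
    x = T.dscalar('x')
    z = x * 2
    (x2,), (z2,) = clone([x], [z])
    assert x2 is not x and z2 is not z
    assert z2.owner.inputs[0] is x2   # the clone is rooted at the new input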
def clone_get_equiv(inputs, outputs, copy_inputs_and_orphans=True, memo=None):
"""
Return a dictionary that maps from Variable and Apply nodes in the
original graph to a new node (a clone) in a new graph.
This function works by recursively cloning inputs... rebuilding a directed
graph from the bottom (inputs) up to eventually building new outputs.
Parameters
----------
inputs : a list of Variables
outputs : a list of Variables
copy_inputs_and_orphans : bool
True means to create the cloned graph from new input and constant
nodes (the bottom of a feed-upward graph).
False means to clone a graph that is rooted at the original input
nodes.
memo : None or dict
Optionally start with a partly-filled dictionary for the return value.
If a dictionary is passed, this function will work in-place on that
dictionary and return it.
"""
if memo is None:
memo = {}
# clone the inputs if necessary
for input in inputs:
if copy_inputs_and_orphans:
cpy = input.clone()
cpy.owner = None
cpy.index = None
memo.setdefault(input, cpy)
else:
memo.setdefault(input, input)
# go through the inputs -> outputs graph cloning as we go
for apply in io_toposort(inputs, outputs):
for input in apply.inputs:
if input not in memo:
if copy_inputs_and_orphans:
cpy = input.clone()
memo[input] = cpy
else:
memo[input] = input
new_apply = apply.clone_with_new_inputs([memo[i] for i in apply.inputs])
memo.setdefault(apply, new_apply)
for output, new_output in zip(apply.outputs, new_apply.outputs):
memo.setdefault(output, new_output)
# finish up by cloning any remaining outputs (it can happen)
for output in outputs:
if output not in memo:
memo[output] = output.clone()
return memo
def general_toposort(r_out, deps, debug_print=False,
compute_deps_cache=None, deps_cache=None):
"""
WRITEME
Parameters
----------
deps
A Python function that takes a node as input and returns its dependencies.
compute_deps_cache : optional
If provided, deps_cache should also be provided. This is a function like
deps, but one that also caches its results in the dict passed as deps_cache.
deps_cache : dict
Must be used with compute_deps_cache.
Notes
-----
deps(i) should behave like a pure function (no funny business with
internal state).
deps(i) will be cached by this function (to be fast).
The order of the return value list is determined by the order of nodes
returned by the deps() function.
Either deps should be provided, or it can be None while the caller
provides compute_deps_cache and deps_cache. The second option removes a
Python function call and allows for more specialized code, so it can be
faster.
"""
if compute_deps_cache is None:
deps_cache = {}
def compute_deps_cache(io):
if io not in deps_cache:
d = deps(io)
if d:
if not isinstance(d, (list, OrderedSet)):
raise TypeError(
"Non-deterministic collections here make"
" toposort non-deterministic.")
deps_cache[io] = list(d)
else:
deps_cache[io] = d
return d
else:
return deps_cache[io]
assert deps_cache is not None
assert isinstance(r_out, (tuple, list, deque))
reachable, clients = stack_search(deque(r_out), compute_deps_cache,
'dfs', True)
sources = deque([r for r in reachable if not deps_cache.get(r, None)])
rset = set()
rlist = []
while sources:
node = sources.popleft()
if node not in rset:
rlist.append(node)
rset.add(node)
for client in clients.get(node, []):
deps_cache[client] = [a for a in deps_cache[client]
if a is not node]
if not deps_cache[client]:
sources.append(client)
if len(rlist) != len(reachable):
if debug_print:
print('')
print(reachable)
print(rlist)
raise ValueError('graph contains cycles')
return rlist
def io_toposort(inputs, outputs, orderings=None):
"""
WRITEME
Parameters
----------
inputs : list or tuple of Variable instances
outputs : list or tuple of Variable instances
orderings: dict
Key: Apply instance. Value: list of Apply instance.
It is important that the value be a container with a deterministic
iteration order. No sets allowed!
"""
# the inputs are used only here in the function that decides what 'predecessors' to explore
iset = set(inputs)
# We build 2 functions as a speed up
deps_cache = {}
compute_deps = None
compute_deps_cache = None
if not orderings: # can be None or empty dict
# Specialized function that is faster when no ordering.
# Also include the cache in the function itself for speed up.
def compute_deps_cache(obj):
if obj in deps_cache:
return deps_cache[obj]
rval = []
if obj not in iset:
if isinstance(obj, Variable):
if obj.owner:
rval = [obj.owner]
elif isinstance(obj, Apply):
rval = list(obj.inputs)
if rval:
if not isinstance(rval, (list, OrderedSet)):
raise TypeError(
"Non-deterministic collections here make"
" toposort non-deterministic.")
deps_cache[obj] = list(rval)
else:
deps_cache[obj] = rval
else:
deps_cache[obj] = rval
return rval
else:
def compute_deps(obj):
rval = []
if obj not in iset:
if isinstance(obj, Variable):
if obj.owner:
rval = [obj.owner]
elif isinstance(obj, Apply):
rval = list(obj.inputs)
rval.extend(orderings.get(obj, []))
else:
assert not orderings.get(obj, [])
return rval
topo = general_toposort(outputs, deps=compute_deps,
compute_deps_cache=compute_deps_cache,
deps_cache=deps_cache)
return [o for o in topo if isinstance(o, Apply)]
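# Illustrative sketch (editor's addition; the helper name is hypothetical).
# io_toposort returns the Apply nodes between inputs and outputs in
# dependency order:
def _demo_io_toposort():
    import theano.tensor as T
    x = T.dscalar('x')
    y = T.dscalar('y')
    add_node, mul_node = io_toposort([x, y], [(x + y) * y])
    assert add_node.outputs[0] in mul_node.inputs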
default_leaf_formatter = str
def default_node_formatter(op, argstrings):
return "%s(%s)" % (op.op, ", ".join(argstrings))
def io_connection_pattern(inputs, outputs):
"""
Returns the connection pattern of a subgraph defined by given
inputs and outputs.
"""
inner_nodes = io_toposort(inputs, outputs)
# Initialize 'connect_pattern_by_var' by establishing each input as
# connected only to itself
connect_pattern_by_var = {}
nb_inputs = len(inputs)
for i in range(nb_inputs):
input = inputs[i]
inp_connection_pattern = [i == j for j in range(nb_inputs)]
connect_pattern_by_var[input] = inp_connection_pattern
# Iterate through the nodes used to produce the outputs from the
# inputs and, for every node, infer their connection pattern to
# every input from the connection patterns of their parents.
for n in inner_nodes:
# Get the connection pattern of the inner node's op. If the op
# does not define a connection_pattern method, assume that
# every node output is connected to every node input
try:
op_connection_pattern = n.op.connection_pattern(n)
except AttributeError:
op_connection_pattern = ([[True] * len(n.outputs)] *
len(n.inputs))
# For every output of the inner node, figure out which inputs it
# is connected to by combining the connection pattern of the inner
# node and the connection patterns of the inner node's inputs.
for out_idx in range(len(n.outputs)):
out = n.outputs[out_idx]
out_connection_pattern = [False] * nb_inputs
for inp_idx in range(len(n.inputs)):
inp = n.inputs[inp_idx]
if inp in connect_pattern_by_var:
inp_connection_pattern = connect_pattern_by_var[inp]
# If the node output is connected to the node input, it
# means it is connected to every inner input that the
# node inputs is connected to
if op_connection_pattern[inp_idx][out_idx]:
out_connection_pattern = [out_connection_pattern[i] or
inp_connection_pattern[i]
for i in range(nb_inputs)]
# Store the connection pattern of the node output
connect_pattern_by_var[out] = out_connection_pattern
# Obtain the global connection pattern by combining the
# connection patterns of the individual outputs
global_connection_pattern = [[] for o in range(len(inputs))]
for out in outputs:
out_connection_pattern = connect_pattern_by_var[out]
for i in range(len(inputs)):
global_connection_pattern[i].append(out_connection_pattern[i])
return global_connection_pattern
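# Illustrative sketch (editor's addition; the helper name is hypothetical).
# Each row of the result states which outputs the corresponding input can
# influence (here x feeds both outputs, y only the first):
def _demo_connection_pattern():
    import theano.tensor as T
    x = T.dscalar('x')
    y = T.dscalar('y')
    pattern = io_connection_pattern([x, y], [x + y, x * 2])
    assert pattern == [[True, True], [True, False]]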
def is_same_graph(var1, var2, givens=None, debug=False):
"""
Return True iff Variables `var1` and `var2` perform the same computation.
By 'performing the same computation', we mean that they must share the same
graph, so that for instance this function will return False when comparing
(x * (y * z)) with ((x * y) * z).
The current implementation is not efficient since, when possible, it
verifies equality by calling two different functions that are expected to
return the same output. The goal is to verify this assumption, to
eventually get rid of one of them in the future.
Parameters
----------
var1
The first Variable to compare.
var2
The second Variable to compare.
givens
Similar to the `givens` argument of `theano.function`, it can be used
to perform substitutions in the computational graph of `var1` and
`var2`. This argument is associated to neither `var1` nor `var2`:
substitutions may affect both graphs if the substituted variable
is present in both.
debug : bool
If True, then an exception is raised when we are in a situation where
the `equal_computations` implementation cannot be called.
This parameter is intended to be used in tests only, to make sure we
properly test both implementations.
Examples
--------
====== ====== ====== ======
var1 var2 givens output
====== ====== ====== ======
x + 1 x + 1 {} True
x + 1 y + 1 {} False
x + 1 y + 1 {x: y} True
====== ====== ====== ======
"""
# Lazy import.
if givens is None:
givens = {}
global equal_computations, is_same_graph_with_merge
if equal_computations is None:
from theano.gof.opt import is_same_graph_with_merge
from theano.scan_module.scan_utils import equal_computations
# Convert `givens` to dictionary.
if not isinstance(givens, dict):
givens = dict(givens)
# Get result from the merge-based function.
rval1 = is_same_graph_with_merge(var1=var1, var2=var2, givens=givens)
# Get result from the function `equal_computations` from scan_utils.
use_equal_computations = True
if givens:
# We need to build the `in_xs` and `in_ys` lists. To do this, we need
# to be able to tell whether a variable belongs to the computational
# graph of `var1` or `var2`.
# The typical case we want to handle is when `to_replace` belongs to
# one of these graphs, and `replace_by` belongs to the other one. In
# other situations, the current implementation of `equal_computations`
# is probably not appropriate, so we do not call it.
ok = True
in_xs = []
in_ys = []
# Compute the sets of all variables found in each computational graph.
inputs_var = list(map(inputs, ([var1], [var2])))
all_vars = [set(variables(v_i, v_o))
for v_i, v_o in ((inputs_var[0], [var1]),
(inputs_var[1], [var2]))]
def in_var(x, k):
# Return True iff `x` is in computation graph of variable `vark`.
return x in all_vars[k - 1]
for to_replace, replace_by in iteritems(givens):
# Map a substitution variable to the computational graphs it
# belongs to.
inside = dict((v, [in_var(v, k) for k in (1, 2)])
for v in (to_replace, replace_by))
if (inside[to_replace][0] and not inside[to_replace][1] and
inside[replace_by][1] and not inside[replace_by][0]):
# Substitute variable in `var1` by one from `var2`.
in_xs.append(to_replace)
in_ys.append(replace_by)
elif (inside[to_replace][1] and not inside[to_replace][0] and
inside[replace_by][0] and not inside[replace_by][1]):
# Substitute variable in `var2` by one from `var1`.
in_xs.append(replace_by)
in_ys.append(to_replace)
else:
ok = False
break
if not ok:
# We cannot directly use `equal_computations`.
if debug:
raise AssertionError(
'When `debug` is True we want to make sure we are also '
'using the `equal_computations` implementation')
use_equal_computations = False
else:
in_xs = None
in_ys = None
if use_equal_computations:
rval2 = equal_computations(xs=[var1], ys=[var2],
in_xs=in_xs, in_ys=in_ys)
assert rval2 == rval1
return rval1
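# Illustrative sketch (editor's addition; the helper name is hypothetical),
# mirroring the table in the docstring above:
def _demo_is_same_graph():
    import theano.tensor as T
    x = T.dscalar('x')
    y = T.dscalar('y')
    assert is_same_graph(x + 1, x + 1)
    assert not is_same_graph(x + 1, y + 1)
    assert is_same_graph(x + 1, y + 1, givens={x: y})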
def op_as_string(i, op,
leaf_formatter=default_leaf_formatter,
node_formatter=default_node_formatter):
"""
WRITEME
"""
strs = as_string(i, op.inputs, leaf_formatter, node_formatter)
return node_formatter(op, strs)
def as_string(i, o,
leaf_formatter=default_leaf_formatter,
node_formatter=default_node_formatter):
"""
WRITEME
Parameters
----------
i : list
Input `Variable` s.
o : list
Output `Variable` s.
leaf_formatter : function
Takes a `Variable` and returns a string to describe it.
node_formatter : function
Takes an `Op` and the list of strings corresponding to its arguments
and returns a string to describe it.
Returns
-------
str
Returns a string representation of the subgraph between i and o. If the
same op is used by several other ops, the first occurrence will be
marked as :literal:`*n -> description` and all subsequent occurrences
will be marked as :literal:`*n`, where n is an id number (ids are
attributed in an unspecified order and only exist for viewing
convenience).
"""
i = set(i)
orph = orphans(i, o)
multi = set()
seen = set()
for output in o:
op = output.owner
if op in seen:
multi.add(op)
else:
seen.add(op)
for op in ops(i, o):
for input in op.inputs:
op2 = input.owner
if input in i or input in orph or op2 is None:
continue
if op2 in seen:
multi.add(op2)
else:
seen.add(input.owner)
multi = [x for x in multi]
done = set()
def multi_index(x):
return multi.index(x) + 1
def describe(r):
if r.owner is not None and r not in i and r not in orph:
op = r.owner
idx = op.outputs.index(r)
if len(op.outputs) == 1:
idxs = ""
else:
idxs = "::%i" % idx
if op in done:
return "*%i%s" % (multi_index(op), idxs)
else:
done.add(op)
s = node_formatter(op, [describe(input) for input in op.inputs])
if op in multi:
return "*%i -> %s" % (multi_index(op), s)
else:
return s
else:
return leaf_formatter(r)
return [describe(output) for output in o]
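# Illustrative sketch (editor's addition; the helper name is hypothetical and
# the exact Op rendering may differ across theano versions):
def _demo_as_string():
    import theano.tensor as T
    x = T.dscalar('x')
    y = T.dscalar('y')
    # prints something like ['Elemwise{add,no_inplace}(x, y)']
    print(as_string([x, y], [x + y]))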
def view_roots(r):
"""
Utility function that returns the leaves of a search through
consecutive view_map()s.
WRITEME
"""
owner = r.owner
if owner is not None:
try:
view_map = owner.op.view_map
view_map = dict((owner.outputs[o], i)
for o, i in iteritems(view_map))
except AttributeError:
return [r]
if r in view_map:
answer = []
for i in view_map[r]:
answer += view_roots(owner.inputs[i])
return answer
else:
return [r]
else:
return [r]
def list_of_nodes(inputs, outputs):
"""
Return the apply nodes of the graph between inputs and outputs.
"""
return stack_search(
deque([o.owner for o in outputs]),
lambda o: [inp.owner for inp in o.inputs
if inp.owner and
not any(i in inp.owner.outputs for i in inputs)])
| cmdunkers/DeeperMind | PythonEnv/lib/python2.7/site-packages/theano/gof/graph.py | Python | bsd-3-clause | 42,523 | ["VisIt"] | 0afa9beef3bdb339d17fea272c16136d91998a0f8d691eff4fc2bbb018425526 |
midi_instruments = {
"Piano": [
"1.Acoustic Piano",
"2.BrtAcou Piano",
"3.ElecGrand Piano",
"4.Honky Tonk Piano",
"5.Elec.Piano 1",
"6.Elec.Piano 2",
"7.Harsichord",
"8.Clavichord",
],
"Chromatic Percussion": [
"9.Celesta",
"10.Glockenspiel",
"11.Music Box",
"12.Vibraphone",
"13.Marimba",
"14.Xylophone",
"15.Tubular Bells",
"16.Dulcimer"
],
"Organ": [
"17.Drawbar Organ",
"18.Perc. Organ",
"19.Rock Organ",
"20.Church Organ",
"21.Reed Organ",
"22.Accordian",
"23.Harmonica",
"24.Tango Accordian",
],
"Guitar": [
"25.Acoustic Guitar",
"26.SteelAcous. Guitar",
"27.El.Jazz Guitar",
"28.Electric Guitar",
"29.El. Muted Guitar",
"30.Overdriven Guitar",
"31.Distortion Guitar",
"32.Guitar Harmonic",
],
"Bass": [
"33.Acoustic Bass",
"34.El.Bass Finger",
"35.El.Bass Pick",
"36.Fretless Bass",
"37.Slap Bass 1",
"38.Slap Bass 2",
"39.Synth Bass 1",
"40.Synth Bass 2",
],
"Strings": [
"41.Violin",
"42. Viola",
"43.Cello",
"44.Contra Bass",
"45.Tremelo Strings",
"46.Pizz. Strings",
"47.Orch. Strings",
"48.Timpani",
],
"Ensemble": [
"49.String Ens.1",
"50.String Ens.2",
"51.Synth.Strings 1",
"52.Synth.Strings 2",
"53.Choir Aahs",
"54. Voice Oohs",
"55. Synth Voice",
"56.Orchestra Hit",
],
"Brass": [
"57.Trumpet",
"58.Trombone",
"59.Tuba",
"60.Muted Trumpet",
"61.French Horn",
"62.Brass Section",
"63.Synth Brass 1",
"64.Synth Brass 2",
],
"Reed": [
"65.Soprano Sax",
"66.Alto Sax",
"67.Tenor Sax",
"68.Baritone Sax",
"69. Oboe",
"70.English Horn",
"71.Bassoon",
"72.Clarinet",
],
"Pipe": [
"73.Piccolo",
"74.Flute",
"75.Recorder",
"76.Pan Flute",
"77.Blown Bottle",
"78.Shakuhachi",
"79.Whistle",
"80.Ocarina",
],
"Synth Lead": [
"81.Lead1 Square",
"82.Lead2 Sawtooth",
"83.Lead3 Calliope",
"84.Lead4 Chiff",
"85.Lead5 Charang",
"86.Lead6 Voice",
"87.Lead7 Fifths",
"88.Lead8 Bass Ld",
],
"Synth Pad": [
"89.Pad1 New Age",
"90.Pad2 Warm",
"91.Pad3 Polysynth",
"92.Pad4 Choir",
"93.Pad5 Bowed",
"94.Pad6 Metallic",
"95.Pad7 Halo",
"96.Pad8 Sweep",
],
"Synth F/X": [
"97.FX1 Rain",
"98.FX2 Soundtrack",
"99.FX3 Crystal",
"100.FX4 Atmosphere",
"101.FX5 Brightness",
"102.FX6 Goblins",
"103.FX7 Echoes",
"104.FX8 Sci-Fi",
],
"Ethnic": [
"105.Sitar",
"106.Banjo",
"107.Shamisen",
"108.Koto",
"109.Kalimba",
"110. Bagpipe",
"111. Fiddle",
"112. Shanai",
],
"Percussive": [
"113.TinkerBell",
"114.Agogo",
"115.SteelDrums",
"116.Woodblock",
"117.TaikoDrum",
"118.Melodic Tom",
"119.SynthDrum",
"120.Reverse Cymbal",
],
"Sound F/X": [
"121.Guitar Fret Noise",
"122. Breath Noise",
"123.Seashore",
"124.BirdTweet",
"125.Telephone",
"126.Helicopter",
"127.Applause",
"128.Gunshot",
],
"Drum kit": [
'35 Acoustic Bass Drum',
'36 Bass Drum 1',
'37 Side Stick',
'38 Acoustic Snare',
'39 Hand Clap',
'40 Electric Snare',
'41 Low Floor Tom',
'42 Closed Hi-Hat',
'43 High Floor Tom',
'44 Pedal Hi-Hat',
'45 Low Tom',
'46 Open Hi-Hat',
'47 Low-Mid Tom',
'48 Hi-Mid Tom',
'49 Crash Cymbal 1',
'50 High Tom',
'51 Ride Cymbal 1',
'52 Chinese Cymbal',
'53 Ride Bell',
'54 Tambourine',
'55 Splash Cymbal',
'56 Cowbell',
'57 Crash Cymbal 2',
'58 Vibraslap',
'59 Ride Cymbal 2',
'60 Hi Bongo',
'61 Low Bongo',
'62 Mute Hi Conga',
'63 Open Hi Conga',
'64 Low Conga',
'65 High Timbale',
'66 Low Timbale',
'67 High Agogo',
'68 Low Agogo',
'69 Cabasa',
'70 Maracas',
'71 Short Whistle',
'72 Long Whistle',
'73 Short Guiro',
'74 Long Guiro',
'75 Claves',
'76 Hi Wood Block',
'77 Low Wood Block',
'78 Mute Cuica',
'79 Open Cuica',
'80 Mute Triangle',
'81 Open Triangle'
]
}
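# Usage sketch -- the helper below is an assumption added for illustration,
# not part of the original file: resolve a 1-based General MIDI program
# number to its family and display name.
def instrument_for_program(program):
    """Return (family, name) for a 1-based GM program number, or None."""
    for family, names in midi_instruments.items():
        if family == "Drum kit":  # drum-kit entries are note numbers, not program numbers
            continue
        for name in names:
            if int(name.split(".", 1)[0]) == program:
                return family, name
    return None
# Example: instrument_for_program(41) returns ("Strings", "41.Violin")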
|
pepitogithub/PythonScripts
|
musica/midi_instruments.py
|
Python
|
gpl-2.0
| 5,031
|
[
"CRYSTAL"
] |
d6da945bfad97982ff90b8e15f2fc93d774806bea226b0c70dbfc30ee2dc7d17
|
"""
GENERAL ARTIFICIAL INTELLIGENCE FOR GAME PLAYING
JAN KLUJ, 2017.
For more information on the models, or for documentation on how to use them, please visit:
https://github.com/Honkl/general-ai
"""
from __future__ import division
from __future__ import print_function
import random
import numpy as np
from evolution.differential_evolution import DifferentialEvolution
from evolution.evolution_parameters import EvolutionaryAlgorithmParameters, EvolutionStrategyParameters, \
DifferentialEvolutionParameters
from evolution.evolution_strategy import EvolutionStrategy
from evolution.evolutionary_algorithm import EvolutionaryAlgorithm
from models.echo_state_network import EchoState
from models.mlp import MLP
from reinforcement.ddpg.ddpg_reinforcement import DDPGReinforcement
from reinforcement.reinforcement_parameters import DDPGParameters, DQNParameters
from reinforcement.dqn.dqn import DQN
# MASTER_SEED = 42
# random.seed(MASTER_SEED)
# np.random.seed(MASTER_SEED)
def run_eva(game):
"""
EVOLUTIONARY ALGORITHM.
"""
eva_parameters = EvolutionaryAlgorithmParameters(
pop_size=25,
cxpb=0.75,
mut=("uniform", 0.1, 0.1),
ngen=1000,
game_batch_size=10,
cxindpb=0.25,
hof_size=0,
elite=5,
selection=("tournament", 3))
# mlp = MLP(hidden_layers=[100, 100, 100, 100], activation="relu")
esn = EchoState(n_readout=200, n_components=1000, output_layers=[], activation="relu")
evolution = EvolutionaryAlgorithm(game=game, evolution_params=eva_parameters, model=esn, logs_every=100,
max_workers=4)
evolution.run()
def run_ddpg(game):
"""
DEEP DETERMINISTIC POLICY GRADIENT (Reinforcement learning for games with continuous action spaces).
"""
ddpg_parameters = DDPGParameters(
batch_size=100,
replay_buffer_size=100000,
discount_factor=0.99,
episodes=10000,
test_size=25)
print("DDPG algorithm started for game {}".format(game))
print("Basic parameters: {}".format(ddpg_parameters.to_string()))
# Network parameters are specified inside the DDPG model; default parameters are used most of the time.
# For example, use path like this:
# ckpt = "D:/general-ai-cache/logs/torcs/ddpg/logs_2017-07-01_21-40-57/"
RL = DDPGReinforcement(game=game, parameters=ddpg_parameters, logs_every=50, checkpoint=None)
RL.run()
def run_es(game):
"""
EVOLUTIONARY STRATEGY (CMA-ES)
"""
strategy_parameters = EvolutionStrategyParameters(
pop_size=10,
ngen=1000,
game_batch_size=10,
hof_size=0,
elite=3,
sigma=1.0)
# mlp = MLP(hidden_layers=[50, 50, 50], activation="relu")
esn = EchoState(n_readout=200, n_components=1000, output_layers=[], activation="relu")
strategy = EvolutionStrategy(game, strategy_parameters, esn, logs_every=5, max_workers=4)
strategy.run()
def run_de(game):
"""
DIFFERENTIAL EVOLUTION
"""
diff_evolution_parameters = DifferentialEvolutionParameters(
pop_size=10,
ngen=350,
game_batch_size=1,
hof_size=5,
cr=0.25,
f=1)
# mlp = MLP(hidden_layers=[200, 200], activation="relu")
esn = EchoState(n_readout=200, n_components=1000, output_layers=[], activation="relu")
diff = DifferentialEvolution(game, diff_evolution_parameters, esn, max_workers=4, logs_every=5)
diff.run()
def run_dqn(game):
"""
DEEP REINFORCEMENT LEARNING (Q-network), epsilon-greedy for exploration.
"""
parameters = DQNParameters(batch_size=100,
init_exp=0.5,
final_exp=0.01,
anneal_steps=100000,
replay_buffer_size=100000,
store_replay_every=1,
discount_factor=0.99,
target_update_frequency=1000,
reg_param=0.01,
test_size=25)
optimizer_params = {}
optimizer_params["name"] = "adam"
optimizer_params["learning_rate"] = 0.01
q_network_parameters = {}
q_network_parameters["hidden_layers"] = [500, 500]
q_network_parameters["activation"] = "relu"
q_network_parameters["dropout"] = 0.9
RL = DQN(game, parameters, q_network_parameters, optimizer_params, test_every=50)
RL.run()
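# Illustrative sketch only (an assumption -- the real schedule lives inside
# the DQN class): a linear epsilon anneal matching the parameters above.
def annealed_epsilon(step, init_exp=0.5, final_exp=0.01, anneal_steps=100000):
    """Linearly anneal the exploration rate from init_exp to final_exp."""
    fraction = min(float(step) / anneal_steps, 1.0)
    return init_exp + fraction * (final_exp - init_exp)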
if __name__ == '__main__':
# Select the game: 2048, mario, torcs, alhambra
game = "2048"
# Select learning method
run_eva(game)
# run_es(game)
# run_de(game)
# run_dqn(game)
# run_ddpg(game)
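# Sketch (an assumption, not part of the original repo): the same selection
# could be driven from the command line instead of editing this block:
# import argparse
# parser = argparse.ArgumentParser()
# parser.add_argument("--game", default="2048", choices=["2048", "mario", "torcs", "alhambra"])
# parser.add_argument("--method", default="eva", choices=["eva", "es", "de", "dqn", "ddpg"])
# args = parser.parse_args()
# {"eva": run_eva, "es": run_es, "de": run_de,
#  "dqn": run_dqn, "ddpg": run_ddpg}[args.method](args.game)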
|
Honkl/general-ai
|
Controller/controller.py
|
Python
|
mit
| 4,720
|
[
"VisIt"
] |
1c613c60770952361c41eb90314efea263c8da991fb94dad93d83512bcd96eee
|
# -*- coding: UTF-8 -*-
## Copyright 2012-2013 Luc Saffre
## This file is part of the Lino project.
## Lino is free software; you can redistribute it and/or modify
## it under the terms of the GNU General Public License as published by
## the Free Software Foundation; either version 3 of the License, or
## (at your option) any later version.
## Lino is distributed in the hope that it will be useful,
## but WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
## GNU General Public License for more details.
## You should have received a copy of the GNU General Public License
## along with Lino; if not, see <http://www.gnu.org/licenses/>.
u"""
This module is used by
the :mod:`garble <lino_welfare.modlib.pcsw.management.commands.garble>`
command that comes with :mod:`lino_welfare`.
Example usage:
The first five are Belgians:
>>> for i in range(5):
... print LAST_NAMES_BELGIUM.pop()
Adam
Adami
Adriaen
Adriaensen
Aelter
Next comes a group of five Russians:
>>> for i in range(5):
... print LAST_NAMES_RUSSIA.pop()
Abezgauz
Aleksandrov
Altukhov
Alvang
Ankundinov
Or here is a mixture of nationalities where every Belgian is followed by one foreigner:
>>> LAST_NAMES = Cycler(LAST_NAMES_BELGIUM,LAST_NAMES_RUSSIA,LAST_NAMES_BELGIUM,LAST_NAMES_MUSLIM)
>>> for i in range(10):
... print LAST_NAMES.pop()
Aelters
Arent
Aelterman
Abad
Aerens
Arnold
Aerts
Abbas
Aertsens
Arshan
Some fictitious Estonians (each couple consists of one male and one female):
>>> for i in range(5):
... print MALE_FIRST_NAMES_ESTONIA.pop(), LAST_NAMES_ESTONIA.pop(), '&',
... print FEMALE_FIRST_NAMES_ESTONIA.pop(), LAST_NAMES_ESTONIA.pop()
Aadu Ivanov & Adeele Tamm
Aare Saar & Age Sepp
Aarne Mägi & Age-Kaie Smirnov
Aaro Vasiliev & Aili Petrov
Aaron Kask & Aili Kukk
Sources:
The raw data was originally copied from:
- Belgian last names from http://www.lavoute.org/debuter/Belgique.htm
- French last names from http://www.nom-famille.com/noms-les-plus-portes-en-france.html
- Russian last names from http://www.meetmylastname.com/prd/articles/24
- French first names from
http://meilleursprenoms.com/site/LesClassiques/LesClassiques.htm
- African, Muslim and Russian names from
http://www.babynames.org.uk
and http://genealogy.familyeducation.com
- Streets of Liège from
http://fr.wikipedia.org/wiki/Liste_des_rues_de_Li%C3%A8ge
- Estonian last names:
`www.ekspress.ee <http://www.ekspress.ee/news/paevauudised/eestiuudised/top-500-eesti-koige-levinumad-perekonnanimed.d?id=27677149>`_
(I manually added some less frequent names)
- Estonian first names are extracted from my personal database.
"""
import re
from lino.utils import Cycler
# Matches wiki list items of the form "* [[Street Name]]" (used to parse
# the STREETS_OF_LIEGE text below).
STREET_RE = re.compile(r"\*\s*\[\[(.+)\]\]\s*$")
def splitter1(s):
    """Yield every non-empty, non-comment line of a multi-line string.
    Lines of length 1 (the single-letter alphabetical headers in the
    name lists below) are skipped as well.
    """
    for ln in s.splitlines():
        ln = ln.strip()
        if len(ln) > 1 and ln[0] != '#':
            yield ln
def splitter2(s):
    """Split a comma-separated string into a list of stripped names."""
    return [name.strip() for name in s.split(',')]
def splitter3(s):
    """Yield the first word of each non-empty, non-comment line."""
    for ln in s.splitlines():
        ln = ln.strip()
        if len(ln) > 1 and ln[0] != '#':
            a = ln.split()
            name = a[0]
            yield name
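# Note (an assumption based on the doctests in the module docstring): the raw
# name blocks below are presumably wrapped into Cyclers further on, e.g.
#   LAST_NAMES_BELGIUM = Cycler(splitter1(LAST_NAMES_BELGIUM))
# which is what makes calls like LAST_NAMES_BELGIUM.pop() work.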
LAST_NAMES_BELGIUM = u"""
A
Adam
Adami
#Adriaenssens
Adriaen
Adriaensen
#Adriaenssen
#Adriencense
#Adriensence
#Adrienssens
Aelter
Aelters
Aelterman
Aerens
Aerts
Aertsens
Albumazard
Alloo
Alsteen
Andersson
André
Andries
Andriessen
Anthon
Antoine
Appelbaum
Applaer
Arimont
Arquin
Arteman
B
Baert
Bartholomeeus
Bastien
Bastin
Baugnet
Baugniet
Baugniez
Bauwens
Beauve
Beck
Beckers
Bernard
Bertrand
Bietmé
Blaas
Blankaert
Blanquaert
Blondeel
Blondeeuw
Blondoo
Bodart
Bodson
Boeck
Boesmans
Bogaert
Bogaerts
Bogemans
Booghmans
Borremans
Borsu
Borsus
Borsut
Bosmans
Bouch
Bouchhout
Bouillère
Bouillet
Boulanger
Bourton
Bouxin
Brasseur
Brouck
Broucke
Broucq
Broucque
Brouhier
Brug
Bruggesman
Bruynseel
Bruynseels
Burger
Burghgraeve
Burgmeester
Burton
Burtont
Buyle
C
Calbert
Callebaut
Callebert
Callebout
Camby
Cappelaere
Cappelaire
Cappelier
Cappeliez
Cappellier
Carbonez
Carbonnez
Carlier
Casteau
Castel
Castiaux
Cauderlier
Caudron
Cauvel
Cauvet
Cauvin
Cavard
Ceulemans
Chantry
Charlier
Chêneboit
Chestay
Chestia
Chrispeels
Christiaens
Christoffel
Claes
Claessens
Claeys
Claus
Cléban
Clébant
Clerx
Colinus
Collard
Colleye
Collignon
Collin
Colson
Cool
Cools
Coppens
Corain
Corijn
Corin
Cornelis
Cornet
Corrin
Corring
Corringer
Coryn
Coudyser
Couhysder
Coutijser
Coutiser
Crab
Crabbe
Crama
Crépez
Crespel
Crevisse
Crevits
Crispeel
Crispeels
Crispel
Crispiels
Cuvelier
Cuypers
D
Daan
Daels
Daems
Dalmans
Damard
Damart
Danis
Dany
Danys
Dapvril
Daufresne
Dawance
De Backer
De Bisschop
De Bloedt
De Blonde
De Boeck
De Bosscher
De Bosschere
De Bruyn
De Busschere
De Buyle
De Clercq
De Cock
De Coninck
De Conninck
De Coster
De Cruyenaere
De Cuyper
De Decker
De Doncker
De Draier
De Flandre
De Frankrijker
De Greef
De Griek
De Groot
De Groote
De Guchteneere
De Haese
De Hert
De Hertog
De Hoorne
De Kimpe
De Markgraef
De Meester
De Meulenaer
De Meyer
De Molder
De Munck
De Muynck
De Muyncke
De Muynek
De Muynke
De Naeyer
De Nayer
De Pannemacker
De Pannemaecker
De Pauss
De Pauw
De Pelsemaeker
De Pester
De Potter
De Praeter
De Prester
De Ridder
De Ridere
De Rovéréaz
De Rudere
De Sachte
De Saedeleer
De Saert
De Schepper
De Schoone
De Smedt
De Smet
De Smeytere
De Smidt
De Smit
De Smyter
De Stracke
De Sueter
De Vette
De Voghels
De Vos
De Vrient
De Wilde
De Winter
Debacker
Debaere
Debakker
Debaut
Debecker
Debekker
Debled
Deboschere
Deboscker
Deboskre
Debosscher
Debosschere
Debusschere
Debuyst
Declerck
Declercq
Decock
Decocq
Decrucq
Decruyenaere
Defaux
Defawe
Degroote
Dehoorne
Dehorne
Dehornes
Deilgat
Dejong
Dejonghe
Dekale
Dekimpe
Dekoch
Dekuiper
Dekyndt
Delacuvellerie
Delafosse
Delahaye
Delahayes
Delbouille
Delboulle
Delcorps
Delflache
Delfosse
Delgat
Delhaye
Delhoste
Delhotte
Delmare
Delmer
Delobbe
Delobe
Delobes
Delplace
Delvaux
Demain
Demeiere
Demeyer
Demoor
Demoore
Demunck
Demuynck
Den Ouste
Denaeyer
Denayer
Deneyer
Denis
Denoor
Depannemaecker
Depelsemacker
Depelsemaeker
Depelsenaire
Depelseneer
Depercenaire
Depester
Depiéreux
Depierreux
Depireux
Depoorter
Depoortere
Depooter
Depootere
Deporter
Deportere
Depoterre
Deprez
Deramaix
Deroosse
Desandrouins
Descamps
Deschepper
Desmedt
Desmet
Desmets
Desmeytere
Desmidt
Desmidts
Desmit
Desmyter
Desmytter
Desmyttere
Despineto
Després
Despret
Desprets
Despretz
Desprey
Desprez
Destoute
Deswart
Deswarte
Dethier
Deur
Deurwaerder
Devis
Devloo
Devos
Devriend
Dewever
Dewit
Dewitte
Dewyse
D'Haeyer
Dhaeyer
D'Hoeraen
Dhoeraen
D'hoolaege
Dierckx
Dierik
Doeraene
Dolhaeghe
Domiens
Dominicus
Dondaine
Dondeine
Dondenne
Dondeyne
Doolaeg(h)e
Doolaegue
Doolage
Doorn
Doorne
Doorneman
Draier
Dresselaers
Dubled
Dubois
Dumont
Dupont
Duquesnay
Duquesne
Duquesnoy
E
Ebrard
Eeckeman
Eerkens
Erckens
Erk
Erken
Erkens
Etienne
Euvrard
Evert
Evrard
Evras
Evrat
Eyck
Eysermans
F
Fawat
Faweux
Fee
Felix
Flamenck
Floche
Floquet
Fontaine
Fonteyne
Fraigany
Fraigneux
Francoeur
François
Francon
Frankel
Franken
Frankeur
Frans
Fransman
Fransolet
Franzman
Frijer
G
Gabriels
Gadisseur
Gadisseux
Gasthuys
Gaudisseu
Geerts
Gehucht
Geiregat
Geeregat
Gendebien
Genot
Georges
Gérard
Gerlache
Gerlaxhe
Germay
Germéa
Germeau
Ghiste
Gilles
Gillet
Gilson
Gits
Giets
Gidts
Geets
Geerts
Glaze
Glazeman
Goethals
Goffin
Gomaert
Gomardt
Goor
Goossens
Goud
Goudman
Goudsmith
Gourdet
Gousson
Graas
Greggs
Gregh
Grégoire
Gregoor
Grewis
Groot
Groote
Grotaers
Guillaume
Guyaux
H
Haesen
Haesevoets
Halasi
Halazy
Hamers
Hanssens
Hardas
Hardat
Hardy
Heerbrant
Hendrick
Hendrickx
Hendriks
Henry
Herbrand
Herbrandt
Herbrant
Herman
Hermann
Hermans
Herten
Hertogs
Hertogue
Heylen
Heymans
Heynemans
Heyrman
Hinck
Hinckel
Hincker
Hinkel
Hinkels
Hinkens
Hinker
Hinkle
Hoefnagel
Hoefnagels
Holemans
Honnay
Horlin
Houvenaghel
Hoyois
Hubert
Huig
I
Ickx
Istace
Istasse
J
Jaak
Jaap
Jacob
Jacobs
Jacques
Jacquet
Jan
Janhes
Jansen
Janssen
Janssens
Jef
Jenot
Jeuniaux
Joire
Jone
Joneau
Jonet
Jongers
Jonné
Jonet
Jonnet
Jordaens
Jorez
Joris
Jorissen
Jozef
Julianus
Julius
Jurgen
K
Kaalman
Kaisin
Keetels
Kenens
Kenes
Kenis
Kennens
Kennes
Kennis
Kesteloot
Ketel
Ketelsmit
Kiecken
Kimpe
Kinnen
Klein
Kleineman
Kleiner
Kleinerman
Kleinman
Klerk
Kleynen
Klingeleers
Kobus
Koeck
Konninckx
Koolman
Korring
Kramers
Kreemers
Kuipers
L
Labbez
Lacroix
Laenen
Laenens
Lafontaine
Lambert
Lambrechts
Lanen
Lanens
Langlez
Lapayre
Laseur
Laseure
Lauffer
Laurent
Lauwers
Le Mayeur
Le Provost
Leboutte
Lebrun
Leclerc
Leclercq
Lecocq
Lecomte
Ledecq
Leenhard
Leenhart
Lefebvre
Lefèvre
Legrand
Lejeune
Lemaire
Lemmens
Lemonnier
Lemounie
Lenaerts
Lénel
Lénelle
Lennel
Léonard
Lepoutre
Leprette
Lepropre
Leroy
Lescohy
Lesoil
Lesoile
Lesoille
Levecq
Lewek
Libert
Liens
Liephoudt
Liepot
Liepout
Lieseborghs
Liesenborghs
Lietaer
Lietaert
Lietar
Liétar
Liétard
Liétart
Lievens
Lievesoons
Lievevrouw
Lievrouw
Liévrouw
Lievrow
Linglay
Linglet
Liphout
Lisenborgh
Lisenborgs
Locreille
Locrel
Locrelle
Lode
Loo
Lorfèvre
Lorphêvre
Losseau
Losset
Louis
Louzeau
Lowie
Ludovicus
Lugen
Lugens
Lust
Lustig
Luyer
Luyrik
Luyten
Lyphoudt
Lyphout
M
Maca
Maertens
Maes
Maessen
Mahieu
Maka
Malchamp
Malchamps
Malmedier
Malmedy
Malmendier
Mangon
Maqua
Marchal
Marckx
Marcus
Mardaga
Maréchal
Maria
Mark
Markgraff
Martens
Martin
Martins
Massart
Masson
Mathieu
Mathissen
Mathy
Matthys
Mauchamp
Mauchamps
Maurichon
Maurissen
Maurits
Mayeur
Mayeux
Mechelaere
Meert
Meertens
Meester
Meeus
Melaerts
Mellaerts
Merchié
Merchier
Mergeai
Mergeay
Merjai
Merjay
Mertens
Mertes
Merts
Mertz
Meulemans
Meulemeesters
Meunier
Meurice
Mewis
Mewissen
Michaël
Michaux
Michel
Michiels
Mixhel
Mochamps
Moens
Moeyaert
Moiling
Moinil
Molemans
Molenaers
Monceau
Moncia
Monciaux
Monsay
Monteyne
Moreau
Mouyart
Moyaert
Mullenders
Munck
Muynck
N
Nachtegael
Nagelmaekers
Nagels
Natus
Neel
Neels
Neuray
Neureau
Neuret
Neurot
Neuts
Neuven
Neven
Nguyen
Nicolas
Nicolaus
Nicolus
Nijs
Niklaas
Noël
Nuts
Nuttin
O
Ochin
Olivier
Olyff
P
Paindavaine
Pannaye
Parmentier
Pas
Pauss
Pauwels
Peeters
Pelser
Pelsmaeker
Peschon
Peschoniez
Pester
Petersen
Petit
Pierre
Piet
Pieters
Pietersen
Piette
Pirard
Piron
Pirotte
Plaats
Poels
Poelsmans
Poncelet
Pools
Posson
Potstainer
Potter
Pottiaux
Pottié
Potty
Poyon
Praat
Premereur
Premmereur
Prevostel
Priesse
Prisse
Proost
Prost
Proust
Putmans
Puttemans
Putman
Q
Quaisin
Quesnay
Quesne
Quesneau
Quesnel
Quesney
Quesnoy
Queval
R
Raes
Ramael
Raucent
Rauscent
Rausin
Raussain
Raussent
Raussin
Raveydts
Ravignat
Remy
Renard
Retelet
Ricaart
Ricaert
Ricard
Robaert
Robbert
Robert
Roels
Roland
Rooseels
Roosengardt
Rosseel
Rousseau
S
Saintmaux
Saint-Maux
Sanctorum
Santilman
Schmitz
Schnock
Schoenmakers
Schoenman
Schoone
Scorier
Scuvie
Segers
Seghers
Seppen
Servais
Shoen
Sijmen
Simoens
Simon
Simons
Sinnesaël
Sinnesal
Slagmolder
Slagmulder
Slamulder
Smal
Smeets
Smet
Smets
Smit
Smolders
Smulders
Somers
Sottiaux
Spinette
Sprecher
Stas
Stass
Stassaert
Stassar
Stassard
Stassart
Stasse
Stassiaux
Stassin
Stassinet
Statius
Steculorum
Stefaans
Stercken
Sterckmans
Sterckx
Stevens
Stier
Stiers
Stievens
Stine
Stoffel
Stordair
Stordeur
Stoutmans
Swart
Swarte
T
Tack
Taverner
Teissant
Terreur
Thijs
Thiry
Thissen
Thomas
Thonnisen
Thuiliau
Thuiliaux
Thuiliet
Thys
Tibaut
Timmerman
Timmermans
T'Jampens
Tjampens
Toussaint
Trausch
Tuiliau
Tuiliaux
Tuilliet
Tuin
Tumson
Tweelinckx
U
Urbain
Urting
V
Van Acker
Van Aelter
Van Belle
Van Berckel
Van Bergh
Van Caenegem
Van Caeneghem
Van Daele
Van Damme
Van de Loo
Van de Pas
Van de Poel
Van de Slijke
Van de Slycke
Van de Veld
Van de Velde
Van den Bergh
Van den Bogaerde
Van den Borne
Van den Bossche
Van den Broeck
Van den Broecke
Van den Camp
Van den Castele
Van den Dael
Van den Dorpe
Van den Tuin
Van Den
Van der Brug
Van der Gucht
Van der Pas
Van der Slijke
Van der Slikke
Van der Slycke
Van der Vleuten
Van Doren
Van Dorp
Van Dorpe
Van Dovlaeghe
Van Dyck
Van Engeland
Van Esch
Van Escht
Van Eyck
Van Hecke
Van Hoof
Van Hoorebeke
Van Hoorenbeeck
Van Horenbeck
Van Horenbeeck
Van Lierde
Van Noye
Van Noÿe
Van Pé
Van Pede
Van Pée
Van Roy
Van Sinaey
Van Slijke
Van Slycke
Van Steerteghem
Van Steerteghen
Van Steirteghem
Van Vleuten
Vanbattel
Vanbergh
Vandamme
Vandenberghe
Vandenbossche
Vandenbussche
Vandendorpe
Vandeputte
Vanderhorst
Vanderlinden
Vanderplaetsen
Vandevelde
Vandoolaeghe
Vandorpe
Vanlierde
Vanpé
Vanpede
Vanpée
Vansteertegem
Vecq
Veld
Veldmann
Vellemans
Veraghe
Veraghen
Verbeeck
Verbeke
Verbruggen
Vercammen
Vercheval
Verdoolaeg(h)e
Verhaege
Verhaegen
Verhaeghe
Verhaeghen
Verhaegue
Verhage
Verhagen
Verhaghe
Verhelst
Verheyen
Verhoeven
Verlinden
Vermeer
Vermeersch
Vermeiren
Vermeren
Vermeulen
Vermotte
Verplaetse
Verplancke
Verplancken
Verschueren
Verslijke
Verslycke
Verstraete
Verstraeten
Vervoort
Vet
Vette
Viatour
Vieutemps
Vieutems
Vieuxtemps
Vilain
Vincent
Vinchent
Visje
Vlaamsche
Vlaeminck
Vlaemynck
Vlaminck
Vlamynck
Vlemincks
Vleminckx
Vleminx
Vlemynckx
Vogels
Volckaert
Volkaert
Volkaerts
Volkart
Volkert
Voller
Vos
Vossen
Vrank
Vrindt
Vrolijt
Vrolyck
Vullers
W
Wagemans
Wagenmann
Waghon
Wagon
Walle
Wastiaux
Watrigant
Watriquant
Watteau
Watteaux
Wattecamp
Wattecamps
Wattecant
Watteel
Wattel
Wattelle
Wattiau
Wattiaux
Wattieaux
Wauters
Weers
Weerts
Wek
Wevers
Weynen
Wilbaert
Wilfart
Willems
Willock
Willocq
Wilock
Wintgens
Wouter
Wouters
Wuyts
Wylock
Wylocke
Y
Yildirim
Yilmaz
Z
Zadelaar
Zegers
Zeggers
Zègres
"""
LAST_NAMES_FRANCE = u"""
Martin 236 172
Bernard 131 901
Thomas 119 078
Dubois 114 001
Durand 111 510
Robert 106 161
Moreau 103 056
Petit 95 876
Simon 95 733
Michel 93 581
Leroy 88 722
Laurent 85 243
Lefebvre 82 670
Bertrand 75 030
Roux 74 955
David 73 150
Garnier 67 829
Legrand 67 475
Garcia 67 162
Bonnet 66 124
Lambert 65 724
Girard 65 228
Morel 64 537
Andre 64 301
Dupont 63 520
Guerin 62 971
Fournier 61 770
Lefevre 61 662
Rousseau 58 884
Francois 58 409
Fontaine 57 783
Mercier 56 702
Roussel 56 300
Boyer 56 024
Blanc 54 714
Henry 54 212
Chevalier 53 741
Masson 52 966
Clement 51 177
Perrin 50 834
Lemaire 50 038
Dumont 49 834
Meyer 48 796
Marchand 47 763
Joly 47 337
Gauthier 47 218
Mathieu 47 178
Nicolas 46 761
Nguyen 46 605
Robin 46 329
Barbier 45 635
Lucas 44 369
Schmitt 44 128
Duval 44 075
Gerard 43 762
Noel 43 263
Gautier 42 411
Dufour 42 209
Meunier 41 833
Brunet 41 807
Blanchard 41 477
Leroux 41 162
Caron 40 845
Lopez 40 431
Giraud 39 896
Fabre 39 592
Pierre 39 469
Gaillard 39 260
Sanchez 39 133
Riviere 39 018
Renard 37 607
Perez 37 371
Renaud 37 274
Lemoine 37 222
Arnaud 37 173
Jean 36 901
Colin 36 289
Brun 36 159
Philippe 35 922
Picard 35 912
Rolland 35 870
Olivier 35 384
Vidal 34 737
Leclercq 34 630
Aubert 34 477
Hubert 34 429
Bourgeois 34 380
Roy 33 798
Guillaume 33 518
Adam 32 624
Dupuy 31 895
Louis 31 785
Maillard 31 752
Aubry 31 184
Charpentier 30 139
Benoit 30 055
Berger 29 640
Royer 29 425
Poirier 29 345
Dupuis 29 339
Rodriguez 29 330
Jacquet 29 274
Moulin 29 065
Charles 29 041
Lecomte 28 980
Deschamps 28 823
Fernandez 28 547
Guillot 28 526
Collet 28 333
Prevost 28 129
Germain 27 664
Bailly 27 588
Guyot 27 419
Perrot 27 293
Le gall 27 140
Renault 27 138
Le roux 26 551
Vasseur 26 431
Herve 26 272
Gonzalez 26 182
Barre 26 084
Breton 26 057
Huet 25 961
Bertin 25 960
Carpentier 25 809
Lebrun 25 749
Carre 25 435
Boucher 25 365
Menard 25 135
Rey 24 943
Klein 24 750
Weber 24 727
Collin 24 553
Cousin 24 314
Millet 24 310
Tessier 23 978
Leveque 23 737
Le goff 23 704
Lesage 23 599
Marchal 23 525
Leblanc 23 492
Bouchet 23 442
Etienne 23 413
Jacob 23 328
Humbert 23 315
Bouvier 23 290
Leger 23 273
Perrier 23 182
Pelletier 22 952
Remy 22 824
"""
FEMALE_FIRST_NAMES_FRANCE = u"""
Adélaïde, Adèle, Agnès, Alix, Béatrice, Beatrix, Elizabeth, Hélène, Héloïse, Isabeau, Iseult, Irène, Mahaut, Margot, Mathilde, Mélissende, Pétronille, Yolande,
Adèle, Aimée, Alice, Appoline, Augustine, Céleste, Célie, Emma, Élise, Églantine, Eugénie, Irène, Jeanne, Joséphine, Léopoldine, Léontine, Lucie, Louise, Madeleine, Mathilde, Ophélie, Pauline, Rose, Zoé,
Albanie, Alexine, Aglaé, Alina, Alma, Angèle, Appoline, Armance, Arthémise, Augustine, Blanche, Célestine, Colombe, Dina, Elia, Émerence, Eulalie, Eugénie, Félicie, Fleurine, Gracianne, Honorine, Jeanne, Léona, Léonie, Léontine, Lilly, Louise, Matilde, Noémi, Pétronille, Philomène, Rose, Salomée, Sidonie, Victoire, Victorine Zélie
"""
MALE_FIRST_NAMES_FRANCE = u"""
Ambroise, Amédée, Anastase, Arthur, Augustin, Aymeric, Béranger, Geoffroy, Grégoire, Guillaume, Léon, Louis, Théodore, Thibaut, Tristan,
Alfred, Alphonse, Amédée, Aristide, Augustin, Barthélémy, Cyprien, Eugène, Ferdinand, Félix, Gustave, Jules, Justin, Léon, Théophile, Victor, Virgile,
Abel, Achille, Aimé, Anatole, Anthime, Auguste, Augustin, Célestin, Edgar, Emile, Ernest, Faustin, Félix, Gaston, Gustave, Jules, Léon, Léopold, Louis, Marceau, Marius, Max, Melchior, Oscar, Philémon, Rubens, Sully, Théodore, Théophile, Victor, Victorin, Wilhem
"""
# copied from http://fr.wikipedia.org/w/index.php?title=Liste_des_rues_de_Li%C3%A8ge&action=edit
STREETS_OF_LIEGE = u"""
{{ébauche|Liège}}
Cet article dresse une liste (incomplète) des voies ([[voirie]]s et [[Place (voie)|places]]) de la [[Ville de Belgique|ville]] de [[Liège]] en [[Belgique]].
{{SommaireCompact}}
==2==
<div style="-moz-column-count:3; column-count:3; -webkit-column-count:3;">
*[[Place du 20-Août]]
</div>
==A==
<div style="-moz-column-count:3; column-count:3; -webkit-column-count:3;">
* [[Rue de l'Abattoir]]
* [[Rue des Abeilles (Liège)|Rue des Abeilles]]
* [[Rue des Acacias (Liège)|Rue des Acacias]]
* [[Rue de l'Académie]]
* [[Avenue Albert Mahiels]]
* [[Rue Ambiorix]]
* [[Rue d'Amercoeur]]
* [[rue des Anglais (Liège)|Rue des Anglais]]
* [[Rue d'Ans]]
* [[Quai des Ardennes]]
* [[Rue Armand Stouls]]
* [[Rue Auguste Hock]]
* [[Rue des Augustins (Liège)|Rue des Augustins]]
* [[Impasse de l'Avenir]]
* [[Boulevard d'Avroy]]
* [[Rue d'Awans]]
</div>
==B==
<div style="-moz-column-count:3; column-count:3; -webkit-column-count:3;">
* [[La Batte]]<ref>Batte signifiant ''quai'' en [[wallon]], on ne doit donc pas dire quai de la Batte</ref>
* [[Rue Basse-Wez]]
* [[Rue Beauregard (Liège)|Rue Beauregard]]
* [[Place des Béguinages]]
* [[Rue Bernimolin]]
* [[Rue Bidaut]]
* [[Avenue Blonden]]
* [[Rue Bois Gotha]]
* [[Quai Bonaparte]]
* [[Rue Bonne-Fortune]]
* [[Rue Bonne-Nouvelle]]
* [[Rue des Bons Enfants (Liège)|Rue des Bons Enfants]]
* [[Rue du Bosquet (Liège)|Rue du Bosquet]]
* [[Rue de la Boucherie (Liège)]]
* [[Quai de la Boverie]]
* [[Rue de Bruxelles (Liège)|Rue de Bruxelles]]
* [[Montagne de Bueren]]
</div>
==C==
<div style="-moz-column-count:3; column-count:3; -webkit-column-count:3;">
* [[Rue de Campine]]
* [[Rue des Carmes (Liège)|Rue des Carmes]]
* [[Place des Carmes]]
* [[Rue de la Casquette]]
* [[Place de la Cathédrale]]
* [[Rue de la Cathédrale]]
* [[Boulevard César Thomson]]
* [[Rue des Champs]]
* [[Rue Charles Bartholomez]]
* [[Rue Charles Magnette]]
* [[Avenue Rogier (Liège)|Avenue Charles Rogier]]
* [[Thier de la Chartreuse]]
* [[Rue de Chaudfontaine]]
* [[Rue Chauve-Souris (Liège)|Rue Chauve-Souris]]
* [[Rue de la Cité (Liège)|Rue de la Cité]]
* [[Rue des Clarisses]]
* [[Boulevard de la Constitution]]
* [[Rue du Coq]]
* [[Rue Counotte]]
* [[Rue Cour Petit]]
* [[Place Crèvecœur]]
* [[Rue des Croisiers]]
* [[Rue des Croix-de-Guerre]]
</div>
==D==
<div style="-moz-column-count:3; column-count:3; -webkit-column-count:3;">
* [[Rue Darchis]]
* [[Rue Dartois]]
* [[Rue Dehin]]
* [[Rue Denis Sotiau]]
* [[Rue Dony]]
* [[Rue Douffet]]
</div>
==E==
<div style="-moz-column-count:3; column-count:3; -webkit-column-count:3;">
* [[Boulevard Émile de Laveleye]]
* [[Avenue Émile Digneffe]]
* [[Rue Émile Gérard]]
* [[Rue Émile Vandervelde (Liège)|Rue Émile Vandervelde]]
* [[Rue En Bois]]
* [[Rue Ernest de Bavière]]
* [[Rue Ernest Solvay (Liège)|Rue Ernest Solvay]]
* [[Rue Éracle]]
* [[Rue Eugène Houdret]]
* [[Rue de l'Étuve (Liège)|Rue de l'Étuve]]
</div>
==F==
<div style="-moz-column-count:3; column-count:3; -webkit-column-count:3;">
* [[Féronstrée]]
* [[Rue de Fétinne]]
* [[Rue Fond Saint-Servais]]
* [[Rue fond des Tawes]]
* [[Rue des Fontaines-Roland]]
* [[Rue des Fossés]]
* [[Rue de Fragnée]]
* [[Place des Franchises]]
* [[Boulevard Frankignoul]]
* [[Ernest-Frédéric Nyst|Rue Frédéric Nyst]]
* [[Rue aux Frênes]]
* [[Boulevard Frère-Orban]]
</div>
==G==
<div style="-moz-column-count:3; column-count:3; -webkit-column-count:3;">
* [[Rue Gaston Laboulle]]
* [[Rue Gaucet]]
* [[Quai de Gaulle]]
* [[Rue du Général de Gaulle]]
* [[Rue du Général Bertrand]]
* [[Place du Général Leman]]
* [[Rue Georges Simenon]]
* [[Quai Godefroid Kurth]]
* [[Quai de la Goffe]]
* [[Rue de la Goffe]]
* [[Impasse Graindor]]
* [[Rue Gramme (Liège)|Rue Gramme]]
* [[Rue Grande Bêche]]
* [[Rue des Gravillons]]
* [[Rue Grétry (Liège)|Rue Grétry]]
* [[Rue du Gros Gland]]
* [[Place des Guillemins]]
* [[Rue des Guillemins]]
</div>
==H==
<div style="-moz-column-count:3; column-count:3; -webkit-column-count:3;">
* [[Rue de la Halle]]
* [[Rue de Harlez]]
* [[Rue d'Harscamp]]
* [[Rue du Haut-Pré]]
* [[Place du Haut-Pré]]
* [[Rue Hazinelle]]
* [[Rue Henri Baron]]
* [[Rue Henri Koch (Liège)|Rue Henri Koch]]
* [[Rue Henri Maus (Liège)|Rue Henri Maus]]
* [[Rue Herman Reuleaux]]
* [[Rue de Hesbaye]]
* [[Rue Hocheporte]]
* [[Rue Hors-Château]]
* [[Rue des Houblonnières]]
* [[Rue Hullos]]
</div>
==I==
<div style="-moz-column-count:3; column-count:3; -webkit-column-count:3;">
* [[Place d'Italie (Liège)|Place d'Italie]]
* [[Rue des Ixellois]]
</div>
==J==
<div style="-moz-column-count:3; column-count:3; -webkit-column-count:3;">
* [[Rue Jambe de Bois]]
* [[Rue du Jardin Botanique]]
* [[Rue Jean Bury]]
* [[Rue Jean d'Outremeuse]]
* [[Rue Jean Haust]]
* [[Rue Joffre]]
* [[Rue de Joie]]
* [[Rue Jonckeu]]
* [[Rue Jondry]]
* [[Rue des Jonquilles (Liège)|Rue des Jonquilles]]
* [[Place Joseph de Bronckart]]
* [[Rue Joseph Demoulin]]
* [[Rue Joseph Henrion]]
* [[Rue Joseph Lacroix]]
* [[Rue Joseph Wauters (Liège) |Rue Joseph Wauters]]
</div>
==L==
<div style="-moz-column-count:3; column-count:3; -webkit-column-count:3;">
* [[Rue Lairesse]]
* [[Rue de Lantin]]
* [[Rue du Laveu (Liège)|Rue du Laveu]]
* [[Rue de la Légia]]
* [[Rue Lemille]]
* [[Passage Lemonnier]]
* [[Rue Léon Mignon (Liège)|Rue Léon Mignon]]
* [[Rue Léopold]]
* [[Rue Libotte]]
* [[Rue de Londres (Liège)|Rue de Londres]]
* [[Quai de Longdoz]]
* [[Rue Louis Abry]]
* [[Rue Louis Fraigneux]]
* [[Rue Louvrex]]
* [[Avenue du Luxembourg]]
</div>
==M==
<div style="-moz-column-count:3; column-count:3; -webkit-column-count:3;">
* [[Quai de Maestricht]]
* [[Rue des Maraîchers (Liège)|Rue des Maraîchers]]
* [[Place du Marché (Liège)|Place du Marché]]
* [[Quai Marcellis]]
* [[Quai Mativa]]
* [[Avenue Maurice Destenay]]
* [[Rue Méan]]
* [[Quai sur Meuse]]
* [[Rue Mississippi]]
* [[Rue du Mont Saint-Martin]]
* [[Rue Montagne Sainte-Walburge]]
</div>
==N==
<div style="-moz-column-count:3; column-count:3; -webkit-column-count:3;">
* [[Rue de Namur]]
* [[Rue Naniot]]
* [[Rue Natalis]]
* [[Place des Nations-Unies (Liège)|Place des Nations-Unies]]
* [[En Neuvice]]
</div>
==O==
<div style="-moz-column-count:3; column-count:3; -webkit-column-count:3;">
* [[Place de l'Opéra (Liège)|Place de l'Opéra]]
* [[Quai Orban]]
* [[Rue Oscar Rémy]]
* [[Quai de l'Ourthe]]
</div>
==P==
<div style="-moz-column-count:3; column-count:3; -webkit-column-count:3;">
* [[Rue Paradis (Liège)|Rue Paradis]]
* [[Rue du Parc]]
* [[Rue du Palais (Liège)|Rue du Palais]]
* [[Rue de Paris (Liège)|Rue de Paris]]
* [[Au Péri]]
* [[Boulevard Piercot]]
* [[Rue Pierreuse]]
* [[Rue du Plan Incliné]]
* [[Rue Plumier]]
* [[Rue Pont-d'Avroy]]
* [[Rue Pont-d'Ile]]
* [[Rue du Pot d'Or]]
* [[Potiérue]]
* [[Rue des Prébendiers]]
* [[Rue Publémont]]
* [[Rue Puits-en-Sock]]
* [[Rue du Puits]]
</div>
==R==
<div style="-moz-column-count:3; column-count:3; -webkit-column-count:3;">
* [[Rue des Récollets (Liège)|Rue des Récollets]]
* [[Rue de la Régence (Liège)|Rue de la Régence]]
* [[Rue Regnier-Poncelet (Liège)|Rue Regnier-Poncelet]]
* [[Avenue Reine Elisabeth]]
* [[Rue des Remparts]]
* [[Place de la République française]]
* [[Rue de la Résistance]]
* [[Quai de la Ribuée]]
* [[Rue des Rivageois]]
* [[Rue Robertson]]
* [[Quai de Rome]]
* [[Quai Roosevelt]]
* [[Rue Roture]]
</div>
==S==
<div style="-moz-column-count:3; column-count:3; -webkit-column-count:3;">
* [[Place Saint-Barthélemy]]
* [[Place Saint-Denis]]
* [[Rue Saint-Gilles (Liège)|Rue Saint-Gilles]]
* [[Place Saint-Jacques (Liège)|Place Saint-Jacques]]
* [[Place Saint-Lambert (Liège)|Place Saint-Lambert]]
* [[Rue Saint-Laurent (Liège)|Rue Saint-Laurent]]
* [[Esplanade Saint-Léonard (Liège)|Esplanade Saint-Léonard]]
* [[Rue Saint-Léonard]]
* [[Rue Sainte-Marie (Liège)|Rue Sainte-Marie]]
* [[Rue Saint-Martin-en-Île]]
* [[Place Saint-Michel (Liège)|Place Saint-Michel]]
* [[Rue Saint-Michel (Liège)|Rue Saint-Michel]]
* [[Place Saint-Paul (Liège)|Place Saint-Paul]]
* [[Rue Saint-Paul (Liège)|Rue Saint-Paul]]
* [[Rue Saint-Pierre (Liège)|Rue Saint-Pierre]]
* [[Rue Saint-Remacle]]
* [[Rue Saint-Remy]]
* [[Rue Saint-Séverin (Liège)|Rue Saint-Séverin]]
* [[Rue Sainte-Croix]]
* [[Rue Sainte-Marguerite (Liège)|Rue Sainte-Marguerite]]
* [[Place Sainte-Véronique]]
* [[Rue Sainte-Véronique]]
* [[Rue Sainte-Walburge]]
* [[Boulevard Saucy]]
* [[Boulevard de la Sauvenière]]
* [[Rue de Sclessin]]
* [[Rue de Seraing]]
* [[Rue de la Sirène]]
* [[Rue Soubre]]
* [[Rue Sous l'Eau]]
* [[Rue de Spa (Liège)|Rue de Spa]]
* [[Rue Stappers]]
* [[Rue de Stavelot]]
* [[Rue Suavius]]
</div>
==T==
<div style="-moz-column-count:3; column-count:3; -webkit-column-count:3;">
* [[Quai des Tanneurs]]
* [[Rue des Tanneurs (Liège)|Rue des Tanneurs]]
* [[Rue des Tawes]]
* [[Rue du Terris (Liège)|Rue du Terris]]
* [[Place du Tertre (Liège)|Place du Tertre]]
* [[Rue du Thier-à-Liège]]
* [[Chaussée de Tongres]]
* [[Rue Tournant Saint-Paul]]
* [[Rue Toussaint Beaujean]]
</div>
* [[Rue de l'Université (Liège)|Rue de l'Université]]
* [[Rue des Urbanistes]]
* [[Impasse des Ursulines (Liège)|Impasse des Ursulines]]
* [[Rue Valdor]]
* [[Quai Édouard van Beneden]]
* [[Rue Varin]]
* [[Rue des Vennes]]
* [[Rue du Vertbois (Liège)|Rue du Vertbois]]
* [[Rue du Vieux Mayeur]]
* [[Impasse du Vieux Pont des Arches]]
* [[Rue Villette]]
* [[Vinâve d'Île]]<ref>Vinâve signifiant ''artère principale'' en [[wallon]], on ne doit donc pas dire rue du Vinâve d'Île</ref>
* [[Rue Volière (Liège)|Rue Volière]]
* [[Rue des Wallons (Liège)|Rue des Wallons]]
* [[Rue de Waroux]]
* [[Rue de Wazon]]
* [[Rue de Wetzlar]]
* [[Rue Wiertz (Liège)|Rue Wiertz]]
* [[Place Xavier Neujean]]
"""
LAST_NAMES_RUSSIA = u"""
A
Abezgauz
Aleksandrov
Altukhov
Alvang
Ankundinov
Arent
Arnold
Arshan
Arshun
Artemieva
Astafurov
B
Bardzecki
Bartoszewicz
Bashmakov
Baskov
Bek-Murzin
Belskaia
Berendt
Berndt
Bernt
Berthner
Bilinskii
Bleiwas
Bobrov
Bogaevskaia
Bogdanjwa
Bogdanovich
Bolokhovskis
Bondar
Borenstein
Borodinskii
Borovsky
Borowski
Botkina
Budberg
Budian
Budkovskiy
Budliavski
Burdzecki
Burundukov
Buryshkin
Burzeckaia
C
Chepelskii
Cheremisinova
Cherevin
Cherkesov
Cherlin
Cherlina
Chernikova
Cherstvennikov
Chirkoff
Chopiak
Chubinskii
Chuchin
Chuzhoi
D
Dauksza
Dikau
Dmitriev
Domashevich
Dombrovski
Dotsenko
Dvorkin
Dvorzhetskii
Dzhigit
E
Elout
Entin
F
Feldberg
Fialkovskii
Fiialkov
Flits
Frinovskii
G
Garder
Gaunshtein
Gavlik
Gavrikov
Gavronskii
Gelb
Gepfner
Gerasimova
Gerburt
Gershan
Gikalov
Gise
Giunter
Glazov
Glinka
Glubonin
Golender
Golovkin
Gontmakher
Gorchakov
Gorenshtein
Grabianko
Gundobin
Gunter
Gusarov
Gutelmakher
Gutemovskii
Gutenmakher
Guttenmakher
H
Holender
I
Ialovskaia
Ialovskii
Iavlenskaia
Ioksimovich
Ioselovich
Iskander
Istomin
Iunter
Iushkevich
Ivanov
K
Kalandarishvili
Kamarauskas
Kasianenko
Kenin
Khanina
Khavin
Kheifets
Khitrovo
Khmelnitskii
Khodkevich
Khripunov
Khripunova
Kintsel
Kiselow
Kitaev
Klopov
Koliabskaia
Kologrivov
Kologrivova
Komarnitskaia
Komarov
Komarovski
Komarovskii
Komerovskaia
Konchin
Konfer
Konkin
Konn
Konstantinov
Konstantinova
Korchagina
Korchinskaia
Kosmachevskaia
Kosovskaia
Kotko
Kovalevski
Kozerskaia
Kozerski
Kozlow
Kozyrskii
Kriukov
Kulikovskaia
Kunitskaia
Kupchenko
Kuzmin
L
Lebel
Lempitskaia
Lenevski
Leontiev
Lerche
Levanda
Levinson
Levitan
Levkov
Levkova
Likharev
Likhareva
Lipinskii
Lishin
Lisitskaia
Lisovskii
Lukowskaia
M
Magnovska
Mahkno
Maier
Makarova
Maklakov
Maksimov
Malakhovskii
Maletski
Maletskii
Malinovskii
Maliszewski
Maliszkewicz
Malitzka
Malitzkii
Malygin
Markevich
Masalsky
Maslov
Massalsky
Matsevich
Matsevichus
Matskevich
Mattel
Matulevich
Mayer
Medvedev
Medvedeva
Meier
Melnikov
Menshutkin
Menzhinskii
Mezentsov
Mezentsova
Mikhailov
Milaszewicz
Milaszewska
Milaszewski
Milkovich
Milodanovich
Milosz
Miloszinski
Mionchinskaia
Mirskii
Misostov
Miziukov
Molchanov
Molotkoff
Morozov
Mosalsky
Moskvin
Moszynski
Mozheika
N
Nazilevskii
Nebogatov
Negnevitskii
Nevelskoi
Nikonechnaia
Nisselovich
O
Oborskaia
Oborski
Okecka
Okkerman
Olendzskaia
P
Pankov
Panushkis
Parnes
Parolow
Paulson
Pavlovskii
Pechatnoff
Petrov
Petrovskaia
Petrovskii
Petrowa
Piotrovskii
Pletner
Plotnitskaia
Pogoretskaia
Pogorzelski
Polivanov
Polovinkin
Ponomarev
Popova
Popovtsev
Posiet
Potemkin
Pravdin
Priselkov
Prokofiev
Prokopchenko
Prokopovich
Pruszynski
Pumpianskii
Putilina
R
Rakhmelevich
Reikhman
Reznikov
Reznikova
Rodwalski
Rogusskii
Rokitskaia
Roschin
Rosenthal
Rowan
Rusakov
Rusakova
S
Saburova
Seidin
Semenov
Shalberov
Shchepetov
Shcherbatov
Shereshevski
Sheridan
Shikov
Shiritlokov
Shkarov
Shponarskaia
Shtadler
Shtein
Shubovich
Shuliakovskii
Shulkovskii
Shumakher
Shvarts
Siegel
Simonovich
Simson
Siniakov
Sipiagin
Sivortsova
Skipetroff
Skipetrova
Slavin
Slavina
Smolenskaia
Smolenskii
Sobeskaia
Sobetskaia
Sokolovskii
Solomon
Soloviev
Somov
Somova
Sotravits
Spektor
Speshiloff
Stanevich
Steinhauer
Steinheil
Stenghel
Stepunin
Sukhodolskaia
Sverzhenskii
T
Talkovskaia
Tamashevska
Tereshchenko
Tetiukov
Tokmakoff
Tomilin
Topczewski
Topezuvjw
Topezuvjwa
Trambetskaia
Treshchev
Trombetskaia
Trubachev
Trzemin
Trzheminska
Tsarev
Tselikova
Tsert
Tsitov
Turets
Turetskii
U
Ukhtomskii
Umanskii
V
Vans
Vargunin
Vasiliev
Veis
Veksler
Verbukh
Verden
Veretennikov
Vershvovski
Vikentieva
Vladimirskii
Volynski
Vorobiev
W
Weinstein
Werner
Wittenburg
Wolkowicz
Wolowitz
Worden
Y
Yakunun
Yunter
Z
Zalesskii
Zalicker
Zeif
Zelecker
Zelichonok
Zhalobovskaia
Zheldak
Zhelobovskaia
Zilberman
Zubkin
Zukov
Zukowskaia
Zukowski
"""
FEMALE_FIRST_NAMES_RUSSIA = u"""
Adla
Adleida
Adlesha
Adleta
Adviga
Afanasiia
Afanasiya
Afimia
Afonaseva
Agafia
Agafiia
Agafiya
Agafokliia
Agafonika
Agafya
Agapiia
Agasha
Agashka
Aglaia
Aglaida
Aglaya
Agna
Agnessa
Agnia
Agniia
Agrafena
Agrafina
Agramakova
Agripena
Agripina
Agrippa
Agrippina
Aitugan
Aizdiakova
Akillina
Akiulina
Aksana
Aksinya
Alasa
Albena
Albina
Aleksandra
Alena
Alenka
Alexandra
Alexcia
Alexia
Alexis
Alina
Alma
Alona
Alyssa
Alzbeta
Amelfa
Ampliia
Ana
Anastasia
Anastasiia
Anastasija
Anatassia
Andreea
Andreeva
Andreiana
Andrievicha
Anechka
Aneska
Anfiia
Anfoma
Anfusa
Angelika
Angelina
Angusta
Ania
Animaida
Animaisa
Anina
Anisia
Anisiia
Anisiya
Anisya
Anitchka
Anitsa
Anizka
Anja
Anje
Anjelica
Anjelika
Anka
Ann
Anna
Annastasija
Antonidka
Antonina
Anusia
Anya
Anzhela
Apfiia
Apolinaria
Apolinariia
Apoloniada
Apolosakifa
Ariadna
Arina
Arkhipa
Arkhippa
Artemeva
Artemiia
Asenka
Askitreia
Askitriia
Asya
Augusta
Avdeeva
Avdiushka
Avdotia
Avgusta
Avramova
Baialyn
Baibichia
Bakhteiarova
Balbara
Barbara
Bazhena
Bedche
Bela
Beleka
Belgukovna
Belka
Bella
Belukha
Benka
Bezruchka
Bezubaia
Bezui
Biana
Biata
Bibishkina
Biiata
Biriuta
Blanka
Blausa
Bogdana
Bogukhvala
Bogumezt
Bogumila
Boguslava
Bohdana
Bohumile
Boika
Bolce
Boldina
Bolemila
Boleslava
Bolgarina
Bolgarynia
Bona
Borisova
Boriuta
Bozena
Bozhana
Bozhitsa
Bragina
Branislava
Branizlawa
Bratomila
Bratromila
Bratrumila
Bruna
Budisla
Budizla
Budshka
Budska
Bukhval
Calina
Catarina
Caterina
Catherine
Catina
Catreen
Catrin
Catrina
Catrinia
Catriona
Catryn
Cecislava
Charlotta
Chebotova
Chekhina
Chekhyna
Cheliadina
Chemislava
Chenka
Chernavka
Chernislava
Chernka
Chesislava
Chimislava
Chiona
Chiudka
Chobotova
Chynica
Ciernislava
Clavdia
Cyzarine
Czarina
Czeimislawa
Dalida
Daliunda
Dama
Danilova
Daria
Darina
Daritsa
Darja
Daromila
Darya
Dasha
Datja
Davyd
Davyzha
Davyzheia
Debora
Deda
Dedenia
Dekava
Dekhova
Demidova
Denicha
Deretka
Derska
Derzhena
Derzhka
Desa
Desha
Despa
Dessa
Desta
Detana
Detava
Deva
Devka
Devochka
Devochkina
Devora
Dikana
Dima
Dimitra
Dimut
Dina
Dinah
Dinara
Dmitreeva
Dmitrieva
Dmitrovna
Dobegneva
Dobislava
Dobka
Dobra
Dobrava
Dobreva
Dobromila
Dobroslava
Dobrowest
Dobryna
Doda
Domaslava
Dominika
Domka
Domna
Domnika
Domnikiia
Domnina
Domona
Dorofeia
Doroteya
Dosya
Dounia
Dozene
Dozhene
Draginia
Dragomira
Dragoslawa
Dragushla
Draia
Drga
Drosida
Druzhinina
Dubrava
Dubravka
Duklida
Dunya
Dunyasha
Duscha
Dusha
Dusya
Dvora
Ecatarina
Ecatrinna
Eda
Edviga
Edviva
Efdokia
Effimia
Efimia
Efiopskaia
Efrasiia
Efrosenia
Efrossina
Ekatarina
Ekaterina
Ekatrinna
Ekzuperiia
Elacha
Eleena
Elen
Eleni
Elenya
Elga
Elgiva
Eliaksha
Elikonida
Elina
Elisava
Elisaveta
Elissa
Elizabeth
Elizarova
Elizaveta
Ella
Ellena
Ellina
Elonka
Elzbeta
Elzhbeta
Ennafa
Epestemiia
Epikhariia
Epistima
Eretiia
Ermolina
Erotiida
Ertugana
Esineeva
Euafina
Eufemia
Eugenia
Euprakseia
Eupraksiia
Eva
Evanova
Evdokeia
Evdokia
Evdokiia
Evdokiya
Evdokseia
Evdoksiia
Evelina
Evfaliia
Evfrasiia
Evfroseniia
Evfrosinya
Evgenia
Evgeniia
Evgeniya
Evgenya
Evginia
Evguenia
Evpraksi
Evpraksiia
Evrosena
Evseevskaia
Evsegniia
Evseveia
Evseviia
Evstoliia
Evtropiia
Faina
Fanaila
Fanya
Fatianova
Fausta
Favsta
Fayina
Fedia
Fedka
Fedkina
Fedora
Fedoritsa
Fedorka
Fedorova
Fedosia
Fedosiia
Fedosya
Fedotia
Fedotiia
Fedya
Feia
Feiniia
Fekla
Feklitsa
Fenia
Feodora
Feodosia
Feodosiia
Feoduliia
Feofana
Feoklita
Feoktista
Feona
Feonilla
Feopimta
Feopista
Feopistiia
Feozva
Ferfufiia
Ferufa
Fesalonikiia
Fetenia
Fetinia
Fetiniia
Fevronia
Filikitata
Filippiia
Filitsata
Filofei
Filofinaia
Filonilla
Fimochka
Fiva
Fiveia
Foimina
Fokina
Fomina
Fotina
Fotiniia
Fovro
Fovroneia
Frolova
Frosiniia
Gadina
Gaianiia
Gala
Galenka
Gali
Galina
Galina
Galochka
Galya
Galyna
Gamana
Gana
Gananiia
Gandaza
Ganna
Gasha
Gema
Genka
Georgieva
Gertruda
Ginechka
Giurgevaia
Gizheurann
Gizla
Glafira
Glasha
Glebovicha
Glikeriia
Glikeriya
Glukeriia
Glukheria
Godava
Golindukha
Goltiaeva
Golubitsa
Gordislava
Gorislava
Gorshedna
Gostena
Gostenia
Gostiata
Gostimira
Goulislava
Govdela
Gravriia
Grekina
Grekinia
Grekyna
Grifina
Grigoreva
Grigorevna
Grigorieva
Groza
Gruba
Grunya
Grusha
Halyna
Helen
Helena
Helenka
Helga
Hema
Henka
Hinezka
Hinica
Hodawa
Hora
Horina
Hosche
Hostena
Hruoza
Iadviga
Iakova
Iakovleva
Iakovlevskaia
Iakun
Iakunova
Iakunovaia
Ianevaia
Ianisha
Ianishe
Ianka
Iarche
Iarena
Iarina
Iarogned
Iaroia
Iarokhna
Iaroslava
Iarshek
Iasynia
Ieliaia
Iev
Ievlia
Ifrosenia
Ignateva
Ignatevskaia
Igoshkova
Iia
Ilariia
Ilia
Ilina
Ilya
Inessa
Inkena
Inna
Ioanna
Iona
Iosifova
Iovilla
Ira
Iraida
Irena
Irene
Irina
Irinia
Irinka
Irisa
Irodia
Irodiia
Isakova
Isidora
Ismagrad
Itka
Iudita
Iuliana
Iuliania
Iulianiia
Iuliia
Iulita
Iulitta
Iuniia
Iurevna
Iustina
Ivana
Ivanova
Ivanovskaia
Iveska
Ivonne
Iziaslava
Izmaragd
Janna
Jarena
Jarene
Jarohna
Jekaterina
Jelena
Jelizaveta
Jenica
Jeremia
Jevdokija
Jitka
Julia
Kace
Kacha
Kache
Kachka
Kala
Kaleria
Kaleriia
Kalia
Kalisa
Kalisfena
Kalista
Kalitina
Kallisfeniia
Kallista
Kamenka
Kamle
Kandaza
Kapetolina
Kaptelina
Karen
Karina
Karine
Karinna
Karolina
Karpova
Karpovskaia
Karrine
Karyna
Kasha
Kashka
Kata
Katalena
Katareena
Katarina
Kateena
Katerina
Katerinka
Katherina
Katherine
Katia
Katina
Katinka
Katiya
Katja
Katlina
Katreen
Katreena
Katrene
Katria
Katrien
Katrina
Katrine
Katrusha
Katrya
Katryn
Katryna
Kattrina
Kattryna
Katunia
Katuscha
Katya
Katyenka
Katyushka
Katyuska
Kazdoia
Kerkira
Kharesa
Khariessa
Kharitaniia
Kharitina
Kharitona
Kharitonova
Kheoniia
Khioniia
Khlopyreva
Khovra
Khrana
Khrisiia
Khristeen
Khristen
Khristianova
Khristin
Khristina
Khristine
Khristyana
Khristyna
Khrstina
Khrystina
Khrystyn
Khrystyne
Khvalibud
Khynika
Kikiliia
Kilikeia
Kilikiia
Kiprilla
Kira
Kiraanna
Kiriakiia
Kiriena
Kirilla
Kirilovskaia
Kisa
Kiska
Kitsa
Kittiana
Kiuprila
Kiuriakiia
Kiza
Klasha
Klavdiia
Kleopatra
Klychikha
Knikki
Kogorshed
Koia
Koika
Kolomianka
Konchaka
Konchasha
Konkordiia
Konstantiia
Konstiantina
Konstiantinova
Kora
Koretskaia
Korina
Korotkaia
Korotkova
Korotsek
Korotskovaia
Kosa
Kosenila
Kostenka
Kostya
Kostyusha
Kotik
Kovan
Kovana
Kowan
Kozma
Kozmina
Krabava
Krasa
Krestiia
Kristina
Krivulinaia
Krunevichovna
Krushka
Ksafipa
Ksana
Ksanfippa
Ksanochka
Ksenia
Kseniia
Kseniya
Ksenya
Kshtovtovna
Ksnia
Ksniatintsa
Kudra
Kuna
Kunei
Kunka
Kunko
Kunku
Kuntse
Kuriana
Kuznetsova
Kvasena
Kvetava
Kzhna
Lacey
Lada
Laikina
Lala
Lanassa
Lanka
Lara
Lari
Larina
Larisa
Larissa
Larochka
Larra
Laryssa
Latskaia
Leia
Leka
Lelik
Lena
Lenina
Lenochka
Lenora
Lenusy
Lenusya
Leonilla
Leonteva
Lepa
Lera
Lerka
Leva
Liba
Libania
Libusa
Lida
Lidena
Lidia
Lidiia
Lidija
Lidiy
Lidiya
Lidka
Lidmila
Lidocha
Lidochka
Lieba
Lila
Lilac
Lilia
Liolya
Lipa
Lisa
Lisanka
Lisaveta
Liseetsa
Lishka
Lisil
Liska
Lisotianka
Liuba
Liubchanina
Liubka
Liubokhna
Liubone
Liubusha
Liudena
Liudmila
Liunharda
Liutarda
Liutsilla
Liza
Lizabeta
Lizanka
Lizette
Ljudmila
Ljudmilla
Lolya
Lotta
Luba
Lubachitsa
Lubmila
Lubmilla
Lubohna
Lubov
Lubusha
Luda
Ludiia
Ludmia
Ludmila
Ludmilla
Ludomia
Luka
Lukeria
Lukerina
Lukerya
Lukiia
Lukina
Lukiria
Lukoianova
Lvovicha
Lyalechka
Lyalya
Lybed
Lydia
Lyeta
Lyuba
Lyubochka
Lyubonka
Lyubov
Lyudmila
Lyudmilla
Lyuha
Lyutsiana
Machko
Machna
Magdalina
Magmeteva
Maiya
Makhna
Makrina
Maksimina
Maksimova
Malana
Malania
Maliusha
Maliuta
Malka
Malona
Malonia
Maluchka
Malusha
Mamelfa
Mamika
Mana
Manechka
Manka
Manya
Mara
Marana
Maremiana
Marfa
Marfutka
Margarita
Margo
Maria
Marian
Marianna
Marianne
Marianskaia
Maricha
Marichinich
Mariia
Marimiana
Marina
Marinka
Marinochka
Marinskaia
Marionilla
Marisha
Maritanna
Maritsa
Marjka
Marka
Markiana
Marnie
Marous
Marta
Martemianova
Marufa
Marulia
Marusya
Marya
Mascha
Masha
Mashenka
Matfeitsa
Matrena
Matrona
Matruna
Matryoshka
Mavra
Maya
Mazcho
Melania
Melaniia
Meletina
Melita
Melitina
Menshikova
Mergivana
Merkureva
Miesha
Mika
Mikhaila
Mikhailova
Mikitina
Mikula
Mikulina
Mila
Milakhna
Milana
Milata
Milava
Milehva
Milekha
Milena
Milenia
Milesa
Mileva
Miliia
Milika
Militsa
Milka
Milleise
Milohna
Milokhna
Miloslava
Miloushka
Miluska
Minodora
Mira
Mirena
Mironova
Miropiia
Miroslava
Mirozlava
Mirra
Misha
Mitrodora
Mizinovskaia
Mlada
Moiko
Morava
Morawa
Mounya
Mousia
Mozyr
Mstislava
Mstislavliaia
Mudri
Muniia
Mura
Muroniia
Muza
Myrra
Myshka
Myslna
Nadeek
Nadeekovaia
Nadejda
Nadenka
Nadia
Nadie
Nadine
Nadiya
Nadja
Nadjenka
Nadya
Nadyenka
Nadysha
Nadyuiska
Naglaya
Na'Kesha
Nakita
Narkissa
Nastasia
Nastasich
Nastasiia
Nastasja
Nastassia
Nastenka
Nastia
Nastiona
Nastionka
Nastiusha
Nastka
Natachia
Natacia
Natalia
Nataliia
Natalja
Natalka
Natalya
Natascha
Natasha
Natashenka
Natashia
Natasia
Natassia
Nathasha
Nazarova
Nebracha
Nebraga
Neda
Nedana
Nedelia
Nekrasa
Nekrasia
Neliuba
Nemilka
Nemka
Neonila
Nesdits
Nesha
Nessa
Nesy
Neta
Netka
Neva
Neza
Nezhatok
Nezhdakha
Nezhka
Nifantova
Nika
Niki
Nikiforova
Nikita
Nikitina
Nikkylia
Nikolena
Niksha
Nimfodora
Nina
Ninel
Ninockha
Ninotchka
Nitasha
Nitca
Nona
Nonna
Nostasia
Nunekhiia
Nyura
Nyusha
Obrezkova
Odigitriia
Odintsova
Ofce
Ofimia
Ogafia
Ogafitsa
Ogashka
Ografena
Ogrifina
Ogrofena
Ogrufena
Ogrufina
Okinfieva
Oksana
Oksanochka
Okseniia
Oksinia
Oksiutka
Oktyabrina
Okulina
Olechka
Oleksandra
Olena
Olenitsa
Olenka
Olfereva
Olga
Olginitsa
Olgirdovna
Olgov
Olimpiada
Olisava
Olivera
Olkha
Olya
Olzhbeta
Omelfa
Ondreiana
Onoslava
Ontonia
Ontsiforova
Ontsyforova
Oprosiniia
Orenka
Oria
Orina
Orlenda
Orlitza
Orsha
Orshinaia
Ortemeva
Orya
Osipova
Osliabia
Ostafia
Ostankova
Ostashkova
Osyenya
Ovdeeva
Ovdiukha
Ovdokea
Ovdotia
Ovdotitsa
Ovtsa
Oxana
Paladia
Palasha
Panfilova
Pansemna
Pantislava
Pantyslawa
Panya
Paraaha
Paramona
Parasha
Parasia
Paraskova
Paraskovga
Paraskovgiia
Paraskovia
Paraskoviia
Paroskova
Pasha
Patrova
Paula
Paulina
Pauline
Pavel
Pavla
Pavlova
Pavloveia
Pavlusha
Pchuneia
Pechta
Pelaga
Pelageia
Pelageya
Pelagiia
Perchta
Peredeslava
Perkhta
Perkhte
Perpetuia
Petronila
Petrova
Petrovna
Petsa
Peza
Pheodora
Piama
Piina
Piminova
Pirueva
Plakida
Platonida
Pokinaria
Poladia
Polazhitsa
Polia
Polikseniia
Polinaria
Poliuzhaia
Poloneika
Polotsk
Polotska
Poloudnitsa
Polovinova
Pomnislavka
Pompliia
Ponaria
Popliia
Popova
Poroskova
Poved
Praskovja
Praskovya
Prebrana
Predslava
Predyslava
Preia
Preksedys
Premislava
Prepedigna
Presthlava
Priba
Pribyslava
Priia
Prikseda
Priskilla
Priskula
Proksha
Proniakina
Prosdoka
Proskudiia
Przhibislava
Przybyslawa
Pukhleriia
Pulkheriia
Puna
Puteshineia
Putok
Putokoveia
Rada
Radia
Radivilovna
Radka
Rado
Radok
Radokhna
Radokovaia
Radonia
Radosha
Radoslava
Radosta
Radoste
Radozte
Radslava
Ragneda
Ragosna
Rahil
Raina
Raisa
Raiza
Rajna
Rakhiel
Ratka
Ratslava
Raya
Rechkina
Reicza
Reshunda
Richca
Richica
Richika
Richikha
Richtca
Richza
Riksa
Rima
Ripsimia
Rislava
Rita
Rogned
Roksana
Romanovna
Roscislawa
Roslava
Rossitza
Rostislava
Roza
Rozalia
Rozgneda
Rozhneva
Rufina
Rulza
Rusa
Rusna
Ryska
Sabina
Sacha
Sahsha
Samarina
Sanya
Sapozhnika
Sascha
Sashah
Sashana
Sashenka
Sashia
Sashka
Sausha
Savastian
Savastianova
Sbyslava
Selianka
Selivankov
Selivankova
Semenova
Semenovskaia
Semislava
Senia
Senny
Serafima
Sevastianiia
Sevastiiana
Severina
Sfandra
Shasha
Shcastna
Shchastna
Shedra
Shelovlevaya
Shiriaeva
Shkonka
Shura
Shushanika
Shvakova
Sidorova
Sima
Sina
Sinklitikiia
Siny
Sira
Siuiunbek
Siuiunbeka
Siuiunbuka
Siunbek
Siunbeka
Skameikina
Skonka
Slava
Slavna
Smils
Smina
Smirenka
Snanduliia
Snigurka
Sobina
Sofeia
Sofia
Sofiia
Sofiya
Sonaya
Sonechka
Sonia
Sonja
Sonya
Sonyuru
Sonyusha
Sonyushka
Sophi
Sophia
Soroka
Sosanna
Sosfena
Sosipatra
Spasenieva
Spera
Spitoslava
Spitsislava
Stana
Stanislava
Stanka
Starsha
Stasy
Stasya
Stefanida
Stefanidka
Stefanova
Stefanya
Stepanida
Stepanova
Stephania
Stepka
Stesha
Stolma
Stolpolcha
Stopolcha
Stranizlava
Stratka
Strezhena
Strezhislava
Strezislava
Sudehna
Sudekhna
Sudila
Sulislava
Sumorokova
Sunklitikiia
Susana
Svakhna
Svatata
Svatava
Svatochna
Svatohna
Sveisla
Sveta
Svetlana
Svetocha
Svetokhna
Sviatata
Sviatokhna
Sviatoslava
Svoda
Swachnina
Swatawa
Symislava
Syp
Sypovaia
Tacha
Tachia
Tachiana
Tachianna
Tahn
Tahna
Tahnia
Tahniya
Tahnya
Tahsha
Taidula
Taina
Taisha
Taishineia
Taisiia
Tamara
Tamary
Tamera
Tamra
Tamryn
Tana
Tanalia
Tanasha
Tanaya
Tandula
Tanea
Tanechka
Taneya
Tania
Tanija
Tanita
Taniya
Tanja
Tanka
Tanna
Tannia
Tannis
Tanniya
Tannya
Tanya
Tasenka
Tasha
Tashana
Tashia
Tashiana
Tashianna
Tashina
Tashira
Tashiya
Tassa
Tasya
Tata
Tatiana
Tatianka
Tatianna
Tatiiana
Tatjana
Tatsa
Tatyana
Taunia
Taunya
Tavlunbeka
Tawnia
Tayna
Tazia
Teha
Tekh
Tekha
Tekusa
Tesheia
Teshka
Tetka
Tevkel
Tferianka
Thais
Thasha
Tiaga
Tina
Tishka
Tishkina
Titania
Titka
Tiutcheva
Tomila
Tomislava
Tonasha
Tonaya
Tonechka
Tonia
Tonja
Tonniya
Tonnya
Tonya
Torokanova
Toshiana
Tretiakovskaia
Troika
Trpena
Trufena
Tsaritsa
Tsvetkova
Tulna
Tutana
Tvoislava
Tvoyzlava
Ualentina
Uirko
Ulana
Uleia
Ulen'ka
Ulia
Uliaanitsa
Uliana
Ulianiia
Ulianka
Ulianushka
Uliasha
Uliiana
Ulita
Ulyana
Unefiia
Unka
Upritsa
Urshila
Ursula
Ustenia
Ustiniia
Vakhneva
Vakhtina
Valenta
Valentina
Valya
Vania
Vanmra
Vanya
Varenka
Varka
Varsonofia
Vartsislava
Varushka
Varvara
Varya
Varyusha
Vasileva
Vasilevna
Vasilevskaia
Vasilida
Vasilievaia
Vasilii
Vasilina
Vasilisa
Vasilissa
Vasilista
Vasisa
Vassa
Vassillissa
Vasya
Vaviia
Velika
Velislava
Ventseslava
Vera
Verochka
Veronika
Veronikeia
Vershina
Veruschka
Vetenega
Veveia
Viachenega
Victoria
Vida
Vika
Vikashenka
Viktoria
Viktoriya
Vila
Vilena
Vilenina
Vilma
Vilna
Virineia
Vironikiia
Vishemila
Vitalya
Vitasa
Vitko
Vitla
Vitoslava
Vivka
Vlada
Vladaia
Vladilena
Vladilenaova
Vladimira
Vladisava
Vladka
Vladlena
Vlaikha
Vlastika
Vlcena
Vlschet
Vogna
Voina
Voislava
Volodimerna
Volotka
Volotkoveia
Volotok
Vonda
Voyzlava
Vrata
Vratislava
Vrkhuslava
Vrotsislava
Vrsanka
Vseslava
Vukosava
Vukoslava
Vyesna
Vysheslava
Vyshia
Wannon
Warvara
Wava
Welislawa
Wierga
Wissa
Witoslava
Wiwka
Wladyka
Woina
Wrata
Wratislava
Wrocislawa
Xenia
Yalena
Yalenchka
Yalens
Yekaterina
Yelena
Yeva
Yevdokiya
Yevfrosinya
Yevgenya
Yogenya
Yovanka
Yulenka
Yulia
Yulianiya
Yulika
Yuliy
Yuliya
Yulya
Yusmara
Zabela
Zakharia
Zakharieva
Zakharina
Zamiatina
Zaneta
Zaritsa
Zasha
Zavidovicha
Zavorokhina
Zbina
Zbinka
Zbiska
Zbynek
Zbynko
Zbyshka
Zdena
Zdeslava
Zdislava
Zdzislaba
Zena
Zenaida
Zenaide
Zenechka
Zenochka
Zeny
Zenya
Zhanna
Zhdana
Zhena
Zhenya
Zhirava
Zhivana
Zhona
Zhonka
Zima
Zina
Zinaida
Zinerva
Zinoviia
Znata
Zofeia
Zoia
Zoika
Zoya
Zoyenka
Ztrezena
Zvatata
Zvenislava
"""
MALE_FIRST_NAMES_RUSSIA = u"""
Adrik
Akim
Alek
Aleksandr
Aleksi
Aleksis
Alexei
Alik
Aloyoshenka
Aloysha
Anatolii
Andrei
Andrusha
Andrya
Anstice
Antinko
Anton
Antosha
Arman
Avel
Bogdashha
Bohdan
Bolodenka
Boris
Borya
Boryenka
Brends
Brody
Burian
Cheslav
Czar
Danya
Demyan
Dima
Dimitri
Edik
Eduard
Egor
Evgenii
Fabi
Faddei
Fadey
Fadeyka
Fedor
Fedya
Fedyenka
Feliks
Filip
Fjodor
Foma
Fredek
Fyodor
Ganya
Gav
Gavrel
Gavrie
Gavril
Gavrilovich
Gennadi
Gregori
Grigor
Grigori
Grigorii
Grisha
Hedeon
Helge
Igor
Igoryok
Ilya
Ioakim
Iov
Ivan
Ivano
Jascha
Jasha
Jeirgif
Jermija
Jov
Jurg
Karolek
Kiril
Kirill
Kliment
Konstantin
Konstantine
Kostya
Laurente
Leonide
Lev
Levka
Luka
Lukyan
Maks
Maksim
Maksimillian
Marko
Markov
Matvey
Matysh
Maxim
Michail
Mikhail
Misha
Mishe
Moriz
Motka
Naum
Nicolai
Nikolai
Oleg
Olezka
Ony
Oral
Orel
Orell
Oriel
Orrel
Osip
Pabiyan
Pavel
PavIpv
Pavlik
Pavlo
Pavlusha
Pavlushka
Pavlya
Petenka
Petrov
Petya
Pyotr
Roman
Romochka
Rurik
Sacha
Sanya
Sasha
Semyon
Serge
Sergei
Serguei
Seriozha
Seriozhenka
Sevastian
Shashenka
Shura
Shurik
Shurochka
Slavik
Stanislov
Stefan
Stephan
Stepka
Tamryn
Tasha
Tolenka
Tolya
Tosya
Tusya
Uri
Uriah
Urie
Ustin
Vadim
Valerii
Valerik
Vanechka
Vanya
Vanyusha
Vas
Vasilii
Vasily
Vassi
Vassily
Vasya
Viktor
Vitaliy
Vitenka
Vladik
Vladilen
Vladislav
Vladmir
Vladmiri
Vladya
Volody
Vyacheslav
Yakov
Yaremka
Yasha
Yefrem
Yerik
Yevgeni
Yura
Yuri
Yurii
Yurik
Yurochka
Zhenechka
Zhenya
Zhorah
Ziven
Zivon
Zory
"""
MALE_FIRST_NAMES_MUSLIM = u"""
Aabdeen
Aabid
Aadam
Aadil
Aaish
Aakif
Aamir
Aaqil
Aarif
Aasim
Aatif
Aayid
Abbaad
Abbaas
Abdul Azeez
Abdul Baari
Abdul Baasid
Abdul Fattaah
Abdul Ghafoor
Abdul Ghani
Abdul Haadi
Abdul Hai
Abdul Hakeem
Abdul Haleem
Abdul Hameed
Abdul Jabbaar
Abdul Jaleel
Abdul Kader
Abdul Kareem
Abdul Khaliq
Abdul Lateef
Abdul Maalik
Abdul Majeed
Abdul Noor
Abdul Qayyoom
Abdul Quddoos
Abdul Rauf
Abdul Waahid
Abdul Wadood
Abdul Wahaab
Abdullah
Abdur Raheem
Abdur Rahmaan
Abdur Raqeeb
Abdur Rasheed
Abdur Razzaaq
Abdus Salam
Abdus Samad
Abdut Tawwab
Abood
Abyad
Adeeb
Adham
Adnaan
Afeef
Ahmed
Aiman
Akram
Alawi
Ali
Amaan
Amaanullah
Ameen
Ameer
Amjad
Ammaar
Amru
Anas
Annnees
Anwar
Aqeel
Arafaat
Arhab
Arkaan
Arshad
Asad
Aseel
Asghar
Ashqar
Ashraf
Aslam
Asmar
Awad
Awf
Awn
Awni
Ayyoob
Azhaar
Azmi
Azzaam
Baahir
Baaqir
Baasim
Badr
Badraan
Badri
Badruddeen
Baheej
Bakar
Bandar
Basheer
Bassaam
Bassil
Bilaal
Bishr
Burhaan
Daamir
Daawood
Daif
Daifallah
Daleel
Dhaafir
Dhaahir
Dhaakir
Dhaki
Dhareef
Faadi
Faadil
Faaiz
Faaid
Faaiq
Faalih
Faaris
Faarooq
Faatih
Faatin
Fahd
Faheem
Fahmi
Faisal
Faraj
Farajallah
Fareed
Farhaan
Fateen
Fat'hi
Fawwaaz
Fawz
Fawzi
Fayyaad
Fikri
Fuaad
Furqaan
Ghaali
Ghaalib
Ghaamid
Ghaazi
Ghassaan
Haafil
Haajid
Haamid
Haani
Haarith
Haaroon
Haashid
Haashim
Haatim
Haazim
Haitham
Hakam
Hamad
Hamdaan
Hamdi
Hamood
Hamza
Haneef
Hanlala
Hasan
Hazm
Hibbaan
Hilaal
Hilmi
Hishaam
Hudhaifa
Humaid
Humaidaan
Huraira
Husaam
Husain
Husni
Ibrahim
Idrees
Ihaab
Ikram
Ilyaas
Imaad
Imraan
Irfaan
Isaam
Ishaaq
Ismad
Ismaeel
Iyaad
Izzaddeen
Izzat
Jaabir
Jaad
Jaadallah
Jaarallah
Jaasim
Jaasir
Jafar
Jalaal
Jam'aan
Jamaal
Jameel
Jareer
Jasoor
Jawaad
Jawhar
Jihaad
Jiyaad
Jubair
Jumail
Junaid
Kaalim
Kaamil
Kaarim
Kabeer
Kaleem
Kamaal
Kamaaluddeen
Kameel
Kanaan
Katheer
Khaalid
Khairi
Khaleefa
Khaleel
Labeeb
Luqmaan
Lutfi
Luwai
Ma'roof
Maahir
Maaiz
Maa'iz
Maajid
Maazin
Mahboob
Mahdi
Mahfooz
Mahmood
Mahuroos
Maisara
Maisoon
Majdi
Mamdooh
Mamoon
Mansoor
Marwaan
Marzooq
Mashal
Masood
Mastoor
Mawdood
Mazeed
Miqdaad
Miqdaam
Misfar
Mishaari
Moosha
Mu'aawiya
Muaaid
Muammar
Mubarak
Mubashshir
Mudrik
Mufeed
Muhaajir
Muhammad
Muhsin
Muhyddeen
Mujahid
Mukarram
Mukhtaar
Mundhir
Muneeb
Muneef
Muneer
Munjid
Munsif
Muntasir
Murshid
Musaaid
Mus'ab
Musaddiq
Musheer
Mushtaaq
Muslih
Muslim
Mustaba
Mutammam
Mutasim
Mu'taz
Muthanna
Mutlaq
Muzammil
Naadir
Naaif
Naaji
Naasif
Naasiruddeen
Naazil
Naazim
Nabeeh
Nabeel
Nadeem
Nadheer
Najeeb
Najeem
Naseem
Naseer
Nashat
Nassaar
Nawaar
Nawf
Nawfal
Nazmi
Neeshaan
Nizaam
Nizaar
Noori
Nu'maan
Numair
Qaaid
Qaasim
Qais
Quraish
Qutb
Raadi
Raafi
Raaid
Raaji
Raakaan
Raamiz
Raashid
Rabi
Rafeeq
Raihaan
Rajaa
Rajab
Ramalaan
Ramzi
Rashaad
Rasheeq
Rayyaan
Razeen
Rida
Ridwaan
Rifaah
Rifat
Riyaal
Rushdi
Ruwaid
Saabiq
Saabir
Saadiq
Saahir
Saajid
Saalih
Saalim
Saami
Saamir
Sabaah
Sabri
Sad
Sadi
Sadoon
Saeed
Safar
Safwaan
Sahl
Saif
Sakeen
Salaah
Saleel
Saleem
Saleet
Salmaan
Samir
Saood
Saqr
Shaafi
Shaaheen
Shaahir
Shaakir
Shaamikh
Shaamil
Shabaan
Shaddaad
Shafeeq
Shaheed
Shaheer
Shakeel
Shameem
Shaqeeq
Sharaf
Shawqi
Shihaab
Shuaib
Shujaa
Shukri
Shuraih
Siddeeqi
Sidqi
Silmi
Siraaj
Sirajuddeen
Subhi
Sufyaan
Suhaib
Suhail
Sulaimaan
Sultan
Suwailim
Taaha
Taahir
Taaj
Taajuddeen
Taalib
Taamir
Taariq
Taiseer
Talaal
Talha
Tameem
Tammaam
Taqi
Tareef
Tawfeeq
Tawheed
Tayyib
Thaamir
Thaaqib
Tufail
Turki
Ubaida
Umair
Umar
Unais
Uqbah
Usaama
Uthmaan
Uwais
Waail
Waatiq
Waddaah
Wajdi
Wajeeb
Wajeeh
Waleed
Waseef
Waseem
Wisaam
Yaasir
Ya'eesh
Yahya
Ya'qoob
Yoonus
Yoosuf
Yusri
Zaahid
Zaahir
Zaaid
Zaamil
Zaghlool
Zaid
Zaidaan
Zain
Zainuddeen
Zakariyya
Zaki
Zameel
Zayyaan
Ziyaad
Zubair
Zufar
Zuhair
Zuraara
"""
FEMALE_FIRST_NAMES_MUSLIM = u"""
Aadila
Aaida
Aaisha
Aamina
Aanisa
Aarifa
Aasima
Aasiya
Aatifa
Aatika
Aayaat
Abeer
Adeeba
Adhraaa
Afaaf
Afeefa
Afnaan
Afraah
Ahlaam
Aliyya
Almaasa
Amaani
Amal
Amatullah
Ameena
Ameera
Amniyya
Anbara
Aneesa
Aqeela
Ariyya
Arwa
Aseela
Asmaa
Atheer
Atiyya
Awaatif
Awda
Azeema
Azeeza
Azza
Fakeeha
Faraah
Fareeda
Farha
Farhaana
Farhat
Faseeha
Fateena
Fat'hiyaa
Fawqiyya
Fawzaana
Fawzia
Fidda
Fikra
Fikriyya
Firdaus
Fuaada
Gaitha
Ghaada
Ghaaliba
Ghaaliya
Ghaaziya
Ghaidaa
Ghazaala
Ghuzaila
Haafiza
Haajara
Haakima
Haala
Haamida
Haaniya
Haaritha
Haazima
Habeeba
Hadbaaa
Hadeel
Hadiyya
Hafsa
Haibaa
Haifaaa
Hakeema
Haleema
Hamaama
Hamda
Hamdoona
Hameeda
Hamna
Hamsa
Hanaaa
Hanaan
Haniyya
Hanoona
Hasana
Haseena
Hasnaa
Hawraa
Hazeela
Hiba
Hikma
Hilmiyya
Himma
Hishma
Hissa
Hiwaaya
Huda
Hujja
Humaina
Humaira
Husniyya
Huwaida
Ibtisaama
Iffat
Ilhaam
Imtinaan
Inaaya
Insaaf
Intisaar
Israa
Izza
Jadeeda
Jaleela
Jameela
Jannat
Jasra
Jawhara
Jeelaan
Juhaina
Jumaana
Jumaima
Juwairiya
Kaatima
Kaazima
Kabeera
Kameela
Kareema
Kawkab
Kawthar
Khaalida
Khadeeja
Khaira
Khairiya
Khaleela
Khawla
Khulood
Kifaaya
Kinaana
Kulthum
Laaiqa
Labeeba
Laila
Lateefa
Layaali
Lubaaba
Lubna
Lutfiyya
Maajida
Maariya
Maazina
Madeeha
Mahaa
Mahbooba
Mahdeeya
Mahdhoodha
Mahfoodha
Mahmooda
Maimoona
Maisara
Majdiyya
Majeeda
Maleeha
Maleeka
Manaahil
Manaal
Manaara
Mardiyya
Marjaana
Marwa
Marzooqa
Mas'ooda
Masroora
Mastoora
Mawhiba
Mawzoona
Mayyaada
Mazeeda
Minnah
Misbaah
Miska
Mubaaraka
Mubeena
Mudrika
Mufeeda
Mufliha
Muhjar
Mu'hsina
Mujaahida
Mumina
Mu'mina
Mumtaaza
Muna
Muneefa
Muneera
Munisa
Muntaha
Musfira
Musheera
Mushtaaqa
Mutee'a
Muzaina
Muzna
Naadiya
Naafoora
Naaifa
Naaila
Nabeeha
Nabeela
Nada
Nadeera
Nadheera
Nadiyya
Nafeesa
Nahla
Najaat
Najeeba
Najeema
Najiyya
Najlaa
Najma
Najwa
Nakheel
Nameera
Naqaa
Naqiyya
Naseeba
Naseefa
Naseema
Naseera
Nasreen
Nawaal
Nawaar
Nawfa
Nawwaara
Nazeeha
Nazeema
Nazmiyya
Nisma
Noora
Nooriyya
Nuha
Nu'ma
Nusaiba
Nuzha
Qaaida
Qamraaa
Qisma
Raabia
Raabiya
Raadiya
Raafida
Raaida
Raaniya
Rabdaa
Radiyya
Radwa
Rafeeda
Rafeeqa
Raheema
Rahma
Raihaana
Raita
Ramla
Ramza
Ramziyya
Randa
Rashaa
Rasheeda
Rasheeqa
Rawda
Rayyana
Razeena
Reema
Rif'a
Rifqa
Rihaab
Rumaana
Ruqayya
Rutaiba
Ruwaida
Saabiqa
Saabira
Saafiyya
Saahira
Saajida
Saaliha
Saalima
Saamiqa
Saamyya
Saara
Sabaaha
Sabeeha
Sabeeka
Sabiyya
Sabreen
Sabriyya
Sadeeda
Sadeeqa
Safaaa
Safiyya
Safwa
Sahar
Sahheeda
Sahla
Sajaa
Sajiyya
Sakeena
Saleema
Salma
Salwa
Sameeha
Sameera
Samraa
Sanaaa
Sanad
Sawada
Shaafia
Shaahida
Shaahira
Shaakira
Shaamila
Shabeeba
Shadhaa
Shafaaa
Shafee'a
Shafeeqa
Shahaada
Shahaama
Shaheera
Shahla
Shaimaaa
Shajee'a
Shakeela
Shakoora
Sham'a
Shamaail
Shameema
Shaqeeqa
Shareefa
Shukriyya
Siddeeqa
Sireen
Sitaara
Suhaa
Suhaad
Suhaila
Sukaina
Sulama
Sultana
Sumaita
Sumayya
Sumbula
Sundus
Taaliba
Taamira
Tahaani
Tahiyya
Tahleela
Tamanna
Tameema
Taqiyya
Tareefa
Tasneem
Tawfeeqa
Tawheeda
Tayyiba
Thaabita
Thaamira
Thamra
Thanaa
Tharwa
Tuhfa
Tulaiha
Turfa
Ulyaa
Umaima
Umaira
Ummu Kulthoom
Urwa
Waajida
Wadee'a
Wadha
Wafaaa
Waheeba
Waheeda
Wajdiyya
Wajeeha
Waleeda
Waliyya
Waneesa
Warda
Wardiyya
Waseema
Wasmaaa
Widdad
Yaasmeen
Yaasmeena
Zaahira
Zaaida
Zahra
Zahraaa
Zainab
Zaitoona
Zakiyya
Zarqaa
Zeena
Zubaida
Zuhaira
Zuhra
Zuhriyaa
Zulfa
Zumruda
"""
LAST_NAMES_AFRICAN = u"""
Ba
Bah
Ballo
Chahine
Cisse
Congo
Contee
Conteh
Dia
Diallo
Diop
Fall
Fofana
Gueye
Jalloh
Keita
Kone
Maalouf
Mensah
Ndiaye
Nwosu
Okafor
Okeke
Okoro
Osei
Owusu
Sall
Sane
Sarr
Sesay
Sow
Sy
Sylla
Toure
Traore
Turay
Yeboah
"""
LAST_NAMES_MUSLIM = u"""
Abad
Abbas
Abbasi
Abdalla
Abdallah
Abdella
Abdelnour
Abdelrahman
Abdi
Abdo
Abdoo
Abdou
Abdul
Abdulla
Abdullah
Abed
Abid
Abood
Aboud
Abraham
Abu
Adel
Afzal
Agha
Ahmad
Ahmadi
Ahmed
Ahsan
Akbar
Akbari
Akel
Akhtar
Akhter
Akram
Alam
Ali
Allam
Allee
Alli
Ally
Aly
Aman
Amara
Amber
Ameen
Amen
Amer
Amin
Amini
Amir
Amiri
Ammar
Ansari
Anwar
Arafat
Arif
Arshad
Asad
Ashraf
Aslam
Asmar
Assad
Assaf
Atallah
Attar
Awan
Aydin
Ayoob
Ayoub
Ayub
Azad
Azam
Azer
Azimi
Aziz
Azizi
Azzam
Azzi
Bacchus
Baccus
Bacho
Baddour
Badie
Badour
Bagheri
Bahri
Baig
Baksh
Baluch
Bangura
Barakat
Bari
Basa
Basha
Bashara
Basher
Bashir
Baten
Begum
Ben
Beshara
Bey
Beydoun
Bilal
Bina
Burki
Can
Chahine
Dada
Dajani
Dallal
Daoud
Dar
Darwish
Dawood
Demian
Dia
Diab
Dib
Din
Doud
Ebrahim
Ebrahimi
Edris
Eid
Elamin
Elbaz
El-Sayed
Emami
Fadel
Fahmy
Fahs
Farag
Farah
Faraj
Fares
Farha
Farhat
Farid
Faris
Farman
Farooq
Farooqui
Farra
Farrah
Farran
Fawaz
Fayad
Firman
Gaber
Gad
Galla
Ghaffari
Ghanem
Ghani
Ghattas
Ghazal
Ghazi
Greiss
Guler
Habeeb
Habib
Habibi
Hadi
Hafeez
Hai
Haidar
Haider
Hakeem
Hakim
Halaby
Halim
Hallal
Hamad
Hamady
Hamdan
Hamed
Hameed
Hamid
Hamidi
Hammad
Hammoud
Hana
Hanif
Hannan
Haq
Haque
Hares
Hariri
Harron
Harroun
Hasan
Hasen
Hashem
Hashemi
Hashim
Hashmi
Hassan
Hassen
Hatem
Hoda
Hoque
Hosein
Hossain
Hosseini
Huda
Huq
Husain
Hussain
Hussein
Ibrahim
Idris
Imam
Iman
Iqbal
Irani
Ishak
Ishmael
Islam
Ismael
Ismail
Jabara
Jabbar
Jabbour
Jaber
Jabour
Jafari
Jaffer
Jafri
Jalali
Jalil
Jama
Jamail
Jamal
Jamil
Jan
Javed
Javid
Kaba
Kaber
Kabir
Kader
Kaiser
Kaleel
Kalil
Kamal
Kamali
Kamara
Kamel
Kanan
Karam
Karim
Karimi
Kassem
Kazemi
Kazi
Kazmi
Khalaf
Khalid
Khalifa
Khalil
Khalili
Khan
Khatib
Khawaja
Koroma
Laham
Latif
Lodi
Lone
Madani
Mady
Mahdavi
Mahdi
Mahfouz
Mahmood
Mahmoud
Mahmud
Majeed
Majid
Malak
Malek
Malik
Mannan
Mansoor
Mansour
Mansouri
Mansur
Maroun
Masih
Masood
Masri
Massoud
Matar
Matin
Mattar
Meer
Meskin
Miah
Mian
Mina
Minhas
Mir
Mirza
Mitri
Moghaddam
Mohamad
Mohamed
Mohammad
Mohammadi
Mohammed
Mohiuddin
Molla
Momin
Mona
Morad
Moradi
Mostafa
Mourad
Mousa
Moussa
Moustafa
Mowad
Muhammad
Muhammed
Munir
Murad
Musa
Mussa
Mustafa
Naderi
Nagi
Naim
Naqvi
Nasir
Nasr
Nasrallah
Nasser
Nassif
Nawaz
Nazar
Nazir
Neman
Niazi
Noor
Noorani
Noori
Nour
Nouri
Obeid
Odeh
Omar
Omer
Othman
Ozer
Parsa
Pasha
Pashia
Pirani
Popal
Pour
Qadir
Qasim
Qazi
Quadri
Raad
Rabbani
Rad
Radi
Radwan
Rafiq
Rahaim
Rahaman
Rahim
Rahimi
Rahman
Rahmani
Rais
Ramadan
Ramin
Rashed
Rasheed
Rashid
Rassi
Rasul
Rauf
Rayes
Rehman
Rehmann
Reza
Riaz
Rizk
Saab
Saad
Saade
Saadeh
Saah
Saba
Saber
Sabet
Sabir
Sadek
Sader
Sadiq
Sadri
Saeed
Safar
Safi
Sahli
Saidi
Sala
Salaam
Saladin
Salah
Salahuddin
Salam
Salama
Salame
Salameh
Saleem
Saleh
Salehi
Salek
Salem
Salih
Salik
Salim
Salloum
Salman
Samaan
Samad
Samara
Sami
Samra
Sani
Sarah
Sarwar
Sattar
Satter
Sawaya
Sayed
Selim
Semaan
Sesay
Shaban
Shabazz
Shad
Shaer
Shafi
Shah
Shahan
Shaheed
Shaheen
Shahid
Shahidi
Shahin
Shaikh
Shaker
Shakir
Shakoor
Sham
Shams
Sharaf
Shareef
Sharif
Shariff
Sharifi
Shehadeh
Shehata
Sheikh
Siddiqi
Siddique
Siddiqui
Sinai
Soliman
Soltani
Srour
Sulaiman
Suleiman
Sultan
Sultana
Syed
Sylla
Tabatabai
Tabet
Taha
Taheri
Tahir
Tamer
Tariq
Tawil
Toure
Turay
Uddin
Ullah
Usman
Vaziri
Vohra
Wahab
Wahba
Waheed
Wakim
Wali
Yacoub
Yamin
Yasin
Yassin
Younan
Younes
Younis
Yousef
Yousif
Youssef
Yousuf
Yusuf
Zadeh
Zafar
Zaher
Zahra
Zaidi
Zakaria
Zaki
Zaman
Zamani
Zia
"""
FEMALE_FIRST_NAMES_AFRICAN = u"""
Aba
Abeni
Abiba
Abmaba
Aissa
Ajua
Akosua
Armani
Arziki
Asha
Ashanti
Ayana
Baako
Beyonce
Bisa
Cacey
Cassietta
Catava
Chipo
Cleotha
Deiondre
Deka
Delu
Dericia
Diara
Doli
Dumi
Ebere
Ekua
Faizah
Fola
Gaynelle
Habika
Hawa
Isoke
Jendayi
Jira
Kabibe
Kabira
Kacela
Kacondra
Kadija
Kainda
Kambo
Kande
Kanene
Kanesha
Kanoni
Kapera
Kapuki
Karasi
Karimah
Karna
Kasinda
Keeya
Keilantra
Keisha
Keishla
Kendis
Kenyatta
Keshia
Keshon
Kesia
Keyah
Kia
Kianga
Kiden
Kiho
Kijana
Kinfe
Kione
Kirabo
Kiros
Kumani
Kuron
Kwashi
Kya
Lachelle
Lakin
Lanelle
Laquanna
Laqueta
Laquinta
Laquita
Lashawn
Latanya
Lateefah
Latifah
Latonya
Latoya
Layla
Lehana
Lewa
Lilovarti
Limber
Lisimba
Loba
Lolovivi
Lulu
Maha
Mahari
Mahdi
Maisha
Maizah
Malaika
Malkia
Mandisa
Manyara
Marjani
Mekell
Messina
Moesha
Muncel
Nafuna
Nailah
Naja
Najwa
Nakeisha
Nala
Narkaesha
Nasha
Nashaly
Nichelle
Niesha
Nimeesha
Nyeki
Okal
Okapi
Onaedo
Ontibile
Paka
Panya
Pasua
Pedzi
Pemba
Penda
Pita
Quanella
Quanesha
Quisha
Raimy
Ranielle
Rashida
Raziya
Ronnell
Safara
Safiya
Saidah
Salihah
Sekai
Semira
Serwa
Sesen
Shakila
Shakina
Shandra
Shaquana
Shasa
Shasmecka
Shateque
Sibongile
Sidone
Sika
Sima
Sitembile
Siyanda
Sukutai
Taifa
Taja
Takala
Takiyah
Talaitha
Tale
Talisa
Talisha
Tamasha
Tamika
Tamira
Tamyra
Tanasha
Tandice
Tanesha
Tanginika
Taniel
Tanisha
Tapanga
Tarana
Tariana
Tarisai
Tazara
Temima
Tendai
Terehasa
Thandiwe
Thema
Tiaret
Timberly
Tineka-Jawana
Tiombe
Tyesha
Tyrell
Tyrina
Tyronica
Uchenna
Ulu
Urbi
Uwimana
Velinda
Wangari
Waseme
Wyetta
Yaa
Yetty
Zabia
Zaci
Zahwa
Zaila
Zaire
Zakiya
Zalika
Zanta
Zarina
Zasu
Zawadi
Zilli
Zina
Zoila
"""
MALE_FIRST_NAMES_AFRICAN = u"""
Afram
Arali
Armani
Banji
Chata
Chiamaka
Chike
Dakarai
Deion
Deiondre
Dele
Dembe
Denzel
Dewayne
Diallo
Dikembe
Duante
Dume
Ebi
Essien
Faraji
Ibeamaka
Jamar
Jayvyn
Jevonte
Kabonero
Kabonesa
Kadeem
Kaleb
Kasi
Kendis
Kentay
Keshawn
Khalon
Kofi
Kwamin
Kwau
Kyan
Kyrone
Lado
Laken
Lakista
Lamech
Lavaughn
La Vonn
LeBron
Lisimba
Ludacris
Lugono
Luister
Lukman
Mablevi
Mahdi
Makalo
Manu
Marques
Mashawn
Montraie
Mykelti
Nabulung
Naeem
Naftali
Napoleon
Nuru
Nwa
Obiajulu
Oja
Okal
Okapi
Okoth
Onaedo
Ontibile
Oringo
Orma
Otieno
Paulo
Peabo
Penda
Phornello
Polo
Quaashie
Quaddus
Quadrees
Quannell
Quarren
Quashawn
Quintavius
Quoitrel
Raimy
Rashon
Razi
Roshaun
Runako
Salim
Shaquille
Shevon
Shontae
Simba
Sulaiman
Tabansi
Tabari
Tamarius
Tavarius
Tavon
Tevaughn
Tevin
Trory
Tyrell
Uba
Ubanwa
Udenwa
Ulan
Uland
Umi
Useni
Usi
Uzoma
Uzondu
Vandwon
Vashon
Veltry
Verlyn
Voshon
Vul
Wasaki
Xayvion
Xhosas
Xyshaun
Yobachi
Zaid
Zareb
Zashawn
"""
LAST_NAMES_ESTONIA = u"""
Ivanov 6789
Tamm 5241
Saar 4352
Sepp 3624
Mägi 3613
Smirnov 3402
Vasiliev 3153
Petrov 2937
Kask 2847
Kukk 2728
Kuznetsov 2339
Rebane 2265
Ilves 2165
Mihhailov 1968
Pärn 1 933
Pavlov 1 927
Semenov 1 909
Koppel 1 882
Andreev 1 862
Alekseev 1 845
Luik 1 826
Kaasik 1 817
Lepik 1 814
Oja 1 809
Raudsepp 1 775
Kuusk 1 747
Karu 1 704
Fjodorov 1 685
Nikolaev 1 675
Kütt 1 646
Põder 1 628
Vaher 1 614
Popov 1 611
Stepanov 1 592
Volkov 1 590
Moroz 1 573
Lepp 1 564
Koval 1 559
Kivi 1 531
Kallas 1 525
Kozlov 1 463
Mets 1 455
Sokolov 1 446
Liiv 1 426
Grigorieva 1 424
Jakovlev 1 422
Kuusik 1 384
Teder 1 381
Lõhmus 1 368
Laur 1 360
Jõgi 1 359
Kangur 1 337
Peterson 1 285
Lebedev 1 275
Kõiv 1 271
Kull 1 269
Ots 1 242
Leppik 1 226
Dmitriev 1 225
Nikitin 1 222
Mölder 1 214
Jegorov 1 210
Toom 1 201
Puusepp 1 181
Orlov 1 149
Raud 1 130
Kuzmin 1 122
Aleksandrov 1 089
Orav 1 086
Sild 1 084
Novikov 1 070
Bogdanov 1 062
Rand 1 053
Jakobson 1 039
Makarov 1 015
Nõmm 1 010
Põld 1 010
Sarapuu 1 004
Uibo 1 000
Paju 998
Mitt 997
Männik 961
Zaitsev 960
Antonov 956
Laas 951
Jürgenson 944
Saks 944
Järv 942
Vinogradov 940
Filippov 930
Johanson 929
Pukk 920
Tomson 919
Kalda 917
Belov 915
Romanov 911
Melnik 907
Allik 905
Solovjov 905
Sergejev 891
Tamme 877
Kruus 873
Mark 870
Aas 867
Rätsep 867
Gusev 866
Maksimov 866
Paas 860
Mänd 853
Hein 852
Roos 849
Parts 847
Kase 826
Väli 826
Järve 825
Lind 823
Mõttus 821
Palm 819
Rohtla 812
Timofejev 804
Valk 797
Hunt 794
Unt 781
Adamson 775
Pihlak 766
Iljin 759
Nurk 755
Baranov 742
Lember 736
Frolov 734
Gavrilov 732
Sikk 731
Kuus 730
Kala 722
Õunapuu 720
Pärna 716
Soosaar 712
Zahharov 706
Vares 703
Tsvetkov 696
Arro 695
Vorobjov 695
Aavik 690
Kurg 690
Sorokin 688
Tali 688
Vahtra 686
Jefimov 684
Vahter 682
Varik 678
Kalinin 673
Kolesnik 672
Mikk 668
Aru 663
Matvejev 661
Trofimov 657
Kikas 652
Õun 652
Luts 650
Roots 644
Tõnisson 641
Kolk 634
Lill 634
Must 631
Piir 631
Kallaste 626
Kurvits 625
Maripuu 622
Poljakov 622
Jänes 621
Golubev 618
Sidorov 618
Mäe 615
Nikiforov 614
Kirs 609
Kangro 605
Korol 605
Maasik 601
Kokk 597
Borissov 593
Kaur 590
Tomingas 590
Koort 581
Tammik 580
Fedorov 573
Müür 573
Danilov 566
Toomsalu 566
Martin 564
Susi 563
Ploom 560
Liiva 555
Hallik 554
Tarassov 553
Fomin 550
Tilk 550
Uustalu 550
Michelson 549
Valge 548
Tihhomirov 545
Miller 543
Kulikov 541
Toots 541
Vaino 541
Nõmmik 540
Talts 540
Jürgens 538
Kikkas 538
Kesküla 537
Anton 536
Post 535
Beljajev 532
Kärner 530
Martinson 530
Hansen 529
Rüütel 527
Veski 527
Rumjantsev 526
Mironov 525
Müürsepp 525
Meier 524
Ossipov 524
Sarv 518
Palu 517
Žukov 516
Aasa 513
Laanemets 512
Nazarov 511
Krõlov 509
Žuravljov 507
Titov 507
Juhkam 506
Luht 506
Jalakas 505
Kivistik 505
Karro 503
Annus 502
Rosenberg 501
Fedotov 499
Lääne 499
Viira 499
Jõesaar 497
Tooming 497
Komarov 492
Soo 491
Ott 488
Simson 485
Kotkas 483
Malõšev 482
Kink 478
Anderson 477
Toome 477
Kirillov 476
Aus 475
Ruus 474
Saare 473
Erm 471
Lang 471
Olesk 471
Afanasjev 468
Pettai 468
Reimann 467
Tuisk 467
Kriisa 465
Ojala 465
Kroon 463
Raag 462
Raid 462
Bõstrov 461
Org 461
Lauri 460
Laan 456
Pärtel 456
Taal 456
Kadak 455
Sander 455
Kattai 454
Truu 454
Konovalov 453
Sirel 453
Liivak 451
Raja 449
Abel 448
Siim 448
Männiste 445
Lipp 443
Kisseljov 440
Medvedev 440
Meister 440
Abramov 439
Kazakov 439
Sutt 439
Saveljev 438
Filatov 437
Soots 436
Schmidt 434
Gerassimov 432
Kotov 432
Allas 431
Ivask 429
Täht 429
Loginov 428
Juhanson 426
Kiik 425
Leht 425
Saul 425
Kasemets 421
Ševtšenko 421
Sobolev 420
Lass 419
Härm 418
Kont 415
Jeršov 414
Vlassov 414
Maslov 413
Konstantinov 411
Pruul 411
Teras 411
Visnapuu 411
Aun 409
Pajula 407
Gromov 406
Kool 406
Silm 406
Tamberg 406
Lumiste 402
Kirsipuu 401
Kirss 401
Kudrjavtsev 401
Sööt 401
Kalmus 400
Sokk 399
Kalm 398
Koit 398
Oras 398
Suits 398
Laine 396
Sulg 396
Põldma 395
Vaht 394
Klimov 391
Lukk 391
Randmaa 391
Gontšarov 389
Kiis 388
Paal 388
Võsu 388
Uus 387
Jaakson 386
Lillemets 385
Mürk 383
Tiits 382
Jaanus 381
Link 381
Erik 379
Lokk 379
Randoja 379
Bondarenko 378
Drozdov 377
Lehtmets 377
Voronin 377
Kuningas 376
Laane 376
Lumi 375
Salu 375
Lomp 372
Pent 372
Laks 370
Jermakov 369
Salumäe 369
Kutsar 368
Madisson 366
Koger 365
Muru 365
Niit 365
Põllu 365
Vähi 365
Kaljula 364
Viks 364
Nõmme 363
Urb 362
Nuut 361
Kaljuvee 360
Piho 359
Piirsalu 359
Sillaste 359
Arula 358
Kondratjev 357
Tuulik 357
Alas 356
Eller 356
Kostin 356
Käsper 356
Pikk 356
Salumets 356
Jürisson 355
Kruglov 355
Liivamägi 355
Hanson 354
Õispuu 354
Ignatjev 353
Kaljuste 352
Kiisk 351
Lehtla 351
Suvi 351
Gross 350
Poom 349
Egorov 348
Mäesalu 348
Davõdov 347
Lääts 347
Panov 347
Suvorov 347
Maidla 346
Mäeots 345
Põdra 345
Raidma 345
Teesalu 345
Holm 344
Loorits 344
Raamat 344
Liblik 343
Mändla 343
Štšerbakov 343
Lukin 342
Säde 342
Trei 342
Kaljurand 341
Kuuse 341
Kelder 339
Markus 339
Ader 338
Pärnpuu 338
Oks 337
Tuul 337
Gorbunov 336
Laht 336
Leis 336
Štšerbakov 335
Jaanson 334
Kasak 332
Zujev 332
Rosin 331
Heinsalu 330
Kivimäe 330
Naumov 330
Kapp 329
Kohv 329
Moor 327
Remmel 327
Treial 327
Klein 326
Pulk 326
Põldmaa 325
Kilk 324
Ojaste 323
Soosalu 323
Käärik 321
Paap 321
Sibul 321
Klaas 320
Kurm 320
Raadik 320
Safronov 320
Sarap 320
Treier 320
Reinsalu 319
Sillaots 318
Sisask 317
Soon 317
Tiik 317
Denissov 316
Kalamees 316
Jõe 315
Lätt 315
Karpova 313
Mandel 313
Kiil 312
Ernits 311
Kasemaa 311
Vain 311
Villemson 311
Suur 310
Heinsoo 309
Pihelgas 309
Roosileht 308
Aasmäe 306
Koitla 306
Lehiste 306
Merila 306
Vill 306
Nurm 304
Viik 304
Kass 303
Käär 303
Teearu 302
Anissimov 301
Karpov 300
Kivilo 300
Püvi 300
Lehismets 1
"""
FEMALE_FIRST_NAMES_ESTONIA = u"""
Adeele
Age
Age-Kaie
Aili
Aili
Aino
Aino
Aive
Aleksandra
Alla
Allar
Angeelika
Angela
Ann
Anna-Merike
Anne
Anne
Anne
Anne
Anne
Anne(+)
Anneli
Anneli
Anneli
Anneli
Annely
Anni
Annika
Annika
Annika
Anu
Anu
Asta
Astra
Astrid
Astrid
Ave
Brigitte
Cathy
Clara
Claudine
Cris
Ebe
Eda
Edda
Eevi
Egle
Eha
Eha
Eike
Elis
Elisa
Eloliis
Emily Melissa
Ene
Ene
Eneli
Epp
Eva-Liisa
Eve
Eve
Eve
Eveli
Evely
Evi
Fatima
Florinda
Gabrielle
Grete
Halliki
Hedi
Hedi
Heidi
Helbe
Helen
Helena
Helena
Helgi
Heli
Heli
Heli
Helja
Heljo
Helju (Heljo)
Helve
Helyn
Iiris
Ija
Ilme
Ilona
Ilona
Imbi
Inge
Jaanika
Jacqueline
Jana
Jana
Jana
Janika
Jenifer
Judith
Julia
Julia
Juta
Kaari
Kadi
Kadri
Kadri
Kadri
Kai
Kaia
Kaidi
Kaija
Kaili
Kaily
Kaja
Karin
Karolina
Katarina
Katerina
Kati
Kati
Katri
Katri
Katrin
Katrin
Katrin
Katrin
Kelli
Kerle
Kersti
Kersti
Kersti
Kersti
Kersti
Kerstin
Kertu
Kirsti
Kitti
Kjersti
Krista
Krista
Krista
Krista (+10.09.10)
Kristel
Kristel
Kristel
Kristel
Kristel
Kristel
Kristi
Kristiina
Kristiina
Kristiina
Kristin
Kristina
Kuma
Kärolin
Kärt
Kätlin
Kätlin
Kätlin
Kätlin
Küllike
Külliki
Külliki
Kyllikki
Laine
Laura
Lea
Lehte
Leili
Lia
Liesel
Liia
Liina
Liina
Liina
Liis
Liisa
Liisa
Liisi (Eke)
Liivi
Lili
Linda
Linda
Loone
Lorraine
Luule (+)
Ly
Lya
Maarika
Maarja
Maarja
Madli
Madli
Mai
Maie
Maire
Malle
Mare
Mare
Maret
Margareta
Margi
Margit
Margus
Mari
Mari
Mari
Mari-Ann
Mari-Liis
Mari-Liis
Mari-Liis
Mari-Ly
Maria Joanna
Mariann
Marianne
Mariel
Marik
Mariliis
Marina
Marita
Marite
Marliese
Martti
Meeli
Meeli
Merike
Merike
Merilin
Merilin
Merlin
Mery
Michelle
Milvi
Milvi
Mirjam
Mirjam
Nadia
Natalja
Nele
Nele
Paula
Petra
Pia
Pia
Piia
Pille-Riin
Piret
Piret
Piret
Piret
Ragne
Ragne
Raili
Reet
Riia
Riina
Riina
Rita
Rita
Rita
Ruth
Rutt
Rutt
Sadu
Saija
Sanna
Sass
Saule
Signe
Sigrid
Siina
Siiri
Siiri
Silja
Silja
Silja
Silvi
Sirle
Sophie
Stella
Teele
Teresa
Tiia
Tiina
Tiina
Tiina
Tiina
Tiina
Tiina
Tiiu
Tiiu
Titta
Triin
Triin
Triin
Triin
Triin
Triin
Triinu
Triinu
Triinu
Triinuly
Ulvi
Ursula
Urve
Valia
Veera
Veera
Veronika
Veronika
Viire
Viivi
Vilma
Virge
Virge
Virve
Õie
Ülle
Ülle
Ülle
"""
MALE_FIRST_NAMES_ESTONIA = u"""
Aadu
Aare
Aarne
Aaro
Aaron
Aaron
Ado
Ago
Ago
Ago
Ahti
Ain
Ainars
Aivar
Aivar
Aivar
Aivar
Alar
Alari
Albert
Allan
Ando
Andreas
Andreas
Andreas Junior
Andres
Andres
Andres
Andres
Andres
Andres
Andres
Andres
Andres (Bit)
Andri
Andrus
Andrus
Annar
Anti
Anti
Ants
Ants
Ants
Ants
Ants
Ants
Ardi
Argo
Argo
Arko
Armo
Arne
Arno
Artur
Arvo
Arvo
Arvo
Brd
Carsten
Christian
Clemens
Daniel
Didier
Diego
Dmitri
Eerik
Einar
Elmar
Emmanuel "Manu"
Enn
Enn
Enn
Enno
Enno
Erich
Erik
Fabio
Falko
Filip
Fred
Frédéric
Frederik
Gabriel
Gordon
Gunnar
Guy
Hannes
Hannes
Hannes
Hannes
Hannes & Andres
Harmo
Harri
Harri
Harri
Heino
Heinz
Helger
Henn
Henry
Hillar
Ilmar
Imre
Imre
Imre
Indrek
Indrek
Indrek
Indrek
Ivar
Ivo
Ivo "Aadam"
Jaak
Jaak
Jaak
Jaan
Jaan
Jaan
Jaan
Jaan
Jaan
Jaan
Jaan
Jaanus
Jaanus
Janari
Janek
Jasper
Jens
Johannes
Joonas
Joosep
Juhan
Jüri
Jüri
Jürmo
Kahro
Kaido
Kaimo
Kalev
Kalev
Kalmer
Kardo
Karl Villem
Karla
Karlis
Kaur
Klaus
Klaus-Dieter
Kristjan
Kristjan
Kristo
Kristo
Kristofer
Kristofer
Laurenz
Lehari
Lembit
Leo
Leo
Luc
Maarjo
Madis
Madis
Mads
Mads Michael Hastrup
Mairold
Manfred
Marek
Marek
Margo
Margus
Margus
Marko
Mart
Mart
Mart
Martel
Martin
Martti
Martti
Mati
Mati
Mati
Mati
Mati
Matthias
Meeli
Meelis
Meelis
Meelis
Michael
Michael
Michael JJ
Mihkel
Mihkel
Mihkel
Mikk
Mikk
Ole Michael
Olev
Oliver
Oliver
Oliver
Ott
Otto
Ove
Patrick
Patrik
Pawan
Peer
Peeter
Peeter
Peter
Philippe
Philippe
Pierre Clément
Piet
Priit
Priit
Priit
Priit
Ragnar
Raigo
Raivu
Raivu
Rannes
Ranno
Raphael
Rasmus
Raul
Raul
Rauno
Rauno
Reemet
Reet
Rein
Rein
Rein
Ricardo
Riho
Risto
Roland
Ruudi
Sander
Sander
Sander
Sandor
Siim
Silver
Simon
Sonnich Jessen
Sten Erik
Stéphane
Suigu
Sulev
Sulev
Sulo
Sune
Taavi
Taavi
Taivu
Tanel
Tarmo
Tarmo
Tarvo
Tarvo
Tauno
Tero
Thierry
Tiit
Tiit
Tiit
Timo
Toivo
Toivo
Toivo
Toivo
Toomas
Toomas
Tõnis
Tõnis
Tõnis
Tõnu
Udo
Urmas
Urmas
Urmas
Urmas
Urmo
Urmo
Vahur-Paul
Vaiko
Valdo
Veiko
Veiko
Velio
Vello
Vesal
Vika
Villu
Virgo
Vladimir
Volker
William
William
Ülo
"""
def streets_of_liege():
    def fn():
        #~ streets = []
        for ln in STREETS_OF_LIEGE.splitlines():
            if ln and ln[0] == '*':
                m = re.match(STREET_RE, ln)
                if m:
                    s = m.group(1).strip()
                    if '|' in s:
                        s = s.split('|')[1]
                    yield s
                    #~ streets.append(s)
    return Cycler(fn())
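# Hedged usage sketch (not in the original module): streets_of_liege()
# wraps the parsed street names in a Cycler, so successive pop() calls walk
# through the list and wrap around at the end. STREETS_OF_LIEGE, STREET_RE,
# re and Cycler are all defined earlier in this module.
#
#   streets = streets_of_liege()
#   print(streets.pop())   # first parsed street name
#   print(streets.pop())   # next one, cycling forever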
LAST_NAMES_ESTONIA = Cycler(splitter3(LAST_NAMES_ESTONIA))
MALE_FIRST_NAMES_ESTONIA = Cycler(splitter1(MALE_FIRST_NAMES_ESTONIA))
FEMALE_FIRST_NAMES_ESTONIA = Cycler(splitter1(FEMALE_FIRST_NAMES_ESTONIA))
LAST_NAMES_RUSSIA = Cycler(splitter1(LAST_NAMES_RUSSIA))
MALE_FIRST_NAMES_RUSSIA = Cycler(splitter1(MALE_FIRST_NAMES_RUSSIA))
FEMALE_FIRST_NAMES_RUSSIA = Cycler(splitter1(FEMALE_FIRST_NAMES_RUSSIA))
#~ def last_names_russia():
#~ return Cycler(splitter1(LAST_NAMES_RUSSIA))
#~ def male_first_names_russia():
#~ return Cycler(splitter1(MALE_FIRST_NAMES_RUSSIA))
#~ def female_first_names_russia():
#~ return Cycler(splitter1(FEMALE_FIRST_NAMES_RUSSIA))
LAST_NAMES_AFRICAN = Cycler(splitter1(LAST_NAMES_AFRICAN))
MALE_FIRST_NAMES_AFRICAN = Cycler(splitter1(MALE_FIRST_NAMES_AFRICAN))
FEMALE_FIRST_NAMES_AFRICAN = Cycler(splitter1(FEMALE_FIRST_NAMES_AFRICAN))
LAST_NAMES_MUSLIM = Cycler(splitter1(LAST_NAMES_MUSLIM))
MALE_FIRST_NAMES_MUSLIM = Cycler(splitter1(MALE_FIRST_NAMES_MUSLIM))
FEMALE_FIRST_NAMES_MUSLIM = Cycler(splitter1(FEMALE_FIRST_NAMES_MUSLIM))
#~ def last_names_muslim():
#~ return Cycler(splitter1(LAST_NAMES_MUSLIM))
#~ def male_first_names_muslim():
#~ return Cycler(splitter1(MALE_FIRST_NAMES_MUSLIM))
#~ def female_first_names_muslim():
#~ return Cycler(splitter1(FEMALE_FIRST_NAMES_MUSLIM))
LAST_NAMES_BELGIUM = Cycler(splitter1(LAST_NAMES_BELGIUM))
MALE_FIRST_NAMES_FRANCE = Cycler(splitter2(MALE_FIRST_NAMES_FRANCE))
FEMALE_FIRST_NAMES_FRANCE = Cycler(splitter2(FEMALE_FIRST_NAMES_FRANCE))
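# Hedged usage sketch (not in the original module): judging from the data
# blocks above, splitter1 appears to handle one name per line, splitter2
# comma-separated names, and splitter3 "Name count" lines such as the
# Estonian statistics (keeping only the name); each Cycler then serves the
# entries round-robin. Generating demo person records is then just a matter
# of pairing pop() calls:
#
#   for _ in range(3):
#       print(MALE_FIRST_NAMES_RUSSIA.pop(), LAST_NAMES_RUSSIA.pop())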
#~ def last_names_belgium():
#~ return Cycler(splitter1(LAST_NAMES_BELGIUM))
#~ def male_first_names_france():
#~ return Cycler([name.strip() for name in MALE_FIRST_NAMES_FRANCE.split(',')])
#~ def female_first_names_france():
#~ return Cycler([name.strip() for name in FEMALE_FIRST_NAMES_FRANCE.split(',')])
#~ def belgians():
#~ yield [
#~ LAST_NAMES_BELGIUM.pop(),
#~ MALE_FIRST_NAMES_FRANCE.pop(),
#~ FEMALE_FIRST_NAMES_FRANCE.pop()]
#~ def muslims():
#~ yield [
#~ LAST_NAMES_MUSLIM.pop(),
#~ MALE_FIRST_NAMES_MUSLIM.pop(),
#~ FEMALE_FIRST_NAMES_MUSLIM.pop()]
#~ def russians():
#~ yield [
#~ LAST_NAMES_RUSSIA.pop(),
#~ MALE_FIRST_NAMES_RUSSIA.pop(),
#~ FEMALE_FIRST_NAMES_RUSSIA.pop()]
if False:
    last_names = []
    for ln in demo.LAST_NAMES_FRANCE.splitlines():
        if ln:
            a = ln.split()
            if len(a) == 3:
                last_names.append(a[0].strip())
            elif len(a) == 4:
                last_names.append(a[0].strip() + ' ' + a[1].strip())


def _test():
    import doctest
    doctest.testmod()


if __name__ == "__main__":
    _test()
|
MaxTyutyunnikov/lino
|
lino/utils/demonames.py
|
Python
|
gpl-3.0
| 101,531
|
[
"Amber"
] |
949989b78423afc29f29f28f8581cbf6a2c604826257c7158c163705eed1e841
|
"""
Unit tests for the `blowout` module of ``TAMOC``
Provides testing of the class definition, methods, and functions in the
`blowout` module of ``TAMOC``. These tests check the behavior of the class
object, the results of simulations, and the related object methods.
The ambient data used here are from the `ctd_BM54.cnv` dataset, stored as::
./test/output/test_BM54.nc
This netCDF file is written by the `test_ambient.test_from_ctd` function,
which is run in the following as needed to ensure the dataset is available.
Notes
-----
All of the tests defined herein check the general behavior of each of the
programmed function--this is not a comparison against measured data. The
results of the hand calculations entered below as sample solutions have been
ground-truthed for their reasonableness. However, passing these tests only
means the programs and their interfaces are working as expected, not that they
have been validated against measurements.
"""
# S. Socolofsky, March 2020, Texas A&M University <socolofs@tamu.edu>.
from __future__ import (absolute_import, division, print_function)
from tamoc import ambient, blowout
from tamoc.test import test_sbm
import numpy as np
from numpy.testing import assert_approx_equal
from numpy.testing import assert_array_almost_equal
# ----------------------------------------------------------------------------
# Helper Functions
# ----------------------------------------------------------------------------
def get_ctd():
    """
    Provide the ambient CTD data

    Load the required CTD data from the ./test/output/test_BM54.nc dataset
    and include water currents.

    Returns
    -------
    profile : `ambient.Profile` object
        An `ambient.Profile` object containing the required CTD data and
        currents for a `bent_plume_model` simulation.

    """
    # Get the CTD data from the requested file
    nc = test_sbm.make_ctd_file()
    profile = ambient.Profile(nc, chem_names='all')

    # Add the ambient currents
    z = profile.nc.variables['z'][:]
    ua = np.zeros(z.shape) + 0.09
    data = np.vstack((z, ua)).transpose()
    symbols = ['z', 'ua']
    units = ['m', 'm/s']
    comments = ['measured', 'arbitrary crossflow velocity']
    profile.append(data, symbols, units, comments, 0)
    profile.close_nc()

    # Return the profile object
    return profile

def get_blowout():
    """
    Create the `blowout.Blowout` object for a basic case

    Define the parameters for a basic, synthetic accidental subsea oil well
    blowout.

    Returns
    -------
    z0 : float, default=100
        Depth of the release point (m)
    d0 : float, default=0.1
        Equivalent circular diameter of the release (m)
    substance : str or list of str, default=['methane']
        The chemical composition of the released petroleum fluid. If using
        the chemical property data distributed with TAMOC, this should be a
        list of TAMOC chemical property names. If using an oil from the
        NOAA OilLibrary, this should be a string containing the Adios oil
        ID number (e.g., 'AD01554' for Louisiana Light Sweet).
    q_oil : float, default=20000.
        Release rate of the dead oil composition at the release point in
        stock barrels of oil per day.
    gor : float, default=0.
        Gas to oil ratio at standard surface conditions in standard cubic
        feet per stock barrel of oil.
    x0 : float, default=0
        x-coordinate of the release (m)
    y0 : float, default=0
        y-coordinate of the release (m)
    u0 : float, default=None
        Exit velocity of continuous-phase fluid at the release. This is
        only used when produced water exits. For a pure oil and gas
        release, this should be zero or None.
    phi_0 : float, default=-np.pi / 2. (vertical release)
        Vertical angle of the release relative to the horizontal plane; z
        is positive down, so -pi/2 represents a vertically upward flowing
        release (rad).
    theta_0 : float, default=0.
        Horizontal angle of the release relative to the x-direction (rad)
    num_gas_elements : int, default=10
        Number of gas bubble sizes to include in the gas bubble size
        distribution
    num_oil_elements : int, default=25
        Number of oil droplet sizes to include in the oil droplet size
        distribution
    water : various
        Data describing the ambient water temperature and salinity
        profile. See Notes below for details.
    current : various
        Data describing the ambient current velocity profile. See Notes
        below for details.

    """
    # Define some typical blowout values
    z0 = 100.
    d0 = 0.2
    substance = {'composition': ['methane', 'ethane', 'propane',
                                 'toluene', 'benzene'],
                 'masses': np.array([0.2, 0.03, 0.02, 0.25, 0.5])}
    q_oil = 20000.
    gor = 1000.
    x0 = 0.
    y0 = 0.
    u0 = 0.
    phi_0 = -np.pi / 2.
    theta_0 = 0.
    num_gas_elements = 10
    num_oil_elements = 10
    water = None
    current = np.array([0.09, 0.0, 0.0])

    return (z0, d0, substance, q_oil, gor, x0, y0, u0, phi_0, theta_0,
            num_gas_elements, num_oil_elements, water, current)

def check_attributes(z0, d0, substance, q_oil, gor, x0, y0, u0, phi_0,
                     theta_0, num_gas_elements, num_oil_elements,
                     water, current, spill):
    """
    Check that the attributes in the `blowout.Blowout` object contain the
    correct values

    Parameters
    ----------
    z0 : float, default=100
        Depth of the release point (m)
    d0 : float, default=0.1
        Equivalent circular diameter of the release (m)
    substance : str or list of str, default=['methane']
        The chemical composition of the released petroleum fluid. If using
        the chemical property data distributed with TAMOC, this should be a
        list of TAMOC chemical property names. If using an oil from the
        NOAA OilLibrary, this should be a string containing the Adios oil
        ID number (e.g., 'AD01554' for Louisiana Light Sweet).
    q_oil : float, default=20000.
        Release rate of the dead oil composition at the release point in
        stock barrels of oil per day.
    gor : float, default=0.
        Gas to oil ratio at standard surface conditions in standard cubic
        feet per stock barrel of oil.
    x0 : float, default=0
        x-coordinate of the release (m)
    y0 : float, default=0
        y-coordinate of the release (m)
    u0 : float, default=None
        Exit velocity of continuous-phase fluid at the release. This is
        only used when produced water exits. For a pure oil and gas
        release, this should be zero or None.
    phi_0 : float, default=-np.pi / 2. (vertical release)
        Vertical angle of the release relative to the horizontal plane; z
        is positive down, so -pi/2 represents a vertically upward flowing
        release (rad).
    theta_0 : float, default=0.
        Horizontal angle of the release relative to the x-direction (rad)
    num_gas_elements : int, default=10
        Number of gas bubble sizes to include in the gas bubble size
        distribution
    num_oil_elements : int, default=25
        Number of oil droplet sizes to include in the oil droplet size
        distribution
    water : various
        Data describing the ambient water temperature and salinity
        profile. See Notes below for details.
    current : various
        Data describing the ambient current velocity profile. See Notes
        below for details.
    spill : `blowout.Blowout` object
        A `blowout.Blowout` object that contains the specified input data

    """
    # Check each of the object attributes in the __init__() method
    assert spill.z0 == z0
    assert spill.d0 == d0
    for i in range(len(substance['composition'])):
        assert spill.substance['composition'][i] == \
            substance['composition'][i]
        assert spill.substance['masses'][i] == substance['masses'][i]
    assert spill.q_oil == q_oil
    assert spill.gor == gor
    assert spill.x0 == x0
    assert spill.y0 == y0
    assert spill.u0 == u0
    assert spill.phi_0 == phi_0
    assert spill.theta_0 == theta_0
    assert spill.num_gas_elements == num_gas_elements
    assert spill.num_oil_elements == num_oil_elements
    if water is None:
        assert spill.water is None
    else:
        assert isinstance(spill.water, ambient.Profile)
    if isinstance(current, float):
        assert spill.current == current
    else:
        assert_array_almost_equal(spill.current, current, decimal=6)

def check_simulation(spill):
    """
    Compare the simulation solution stored in spill to the expected
    solution

    Parameters
    ----------
    spill : `blowout.Blowout` object
        A `blowout.Blowout` object that contains a simulation run already
        completed

    """
    # Check the model parameters
    assert spill.update == False
    assert spill.bpm.sim_stored == True

    # Check that the object attributes are set properly
    assert_array_almost_equal(spill.bpm.X,
        np.array([spill.x0, spill.y0, spill.z0]), decimal=6)
    assert spill.bpm.D == spill.d0
    assert spill.bpm.Vj == spill.u0
    assert spill.bpm.phi_0 == spill.phi_0
    assert spill.bpm.theta_0 == spill.theta_0
    assert spill.bpm.Sj == spill.Sj
    assert spill.bpm.Tj == spill.Tj
    assert spill.bpm.cj == spill.cj
    for i in range(len(spill.tracers)):
        assert spill.bpm.tracers[i] == spill.tracers[i]
    assert len(spill.bpm.particles) == len(spill.disp_phases)
    assert spill.bpm.track == spill.track
    assert spill.bpm.dt_max == spill.dt_max
    assert spill.bpm.sd_max == spill.sd_max
    assert_array_almost_equal(spill.bpm.K_T0,
        np.array([spill.disp_phases[i].K_T
                  for i in range(len(spill.disp_phases))]),
        decimal=6)

    # Check the model simulation results
    q0 = np.array([
1.29037789e+00, 4.52019374e+01, 1.47755395e+06, 3.69108722e-15,
0.00000000e+00, -6.02800289e+01, 8.56255653e-04, 0.00000000e+00,
0.00000000e+00, 1.00000000e+02, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
1.09302629e-04, 1.11480306e-05, 3.64591502e-06, 2.07244046e-07,
1.42962222e-06, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 7.19858668e+01, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 3.06225097e-04,
3.12326136e-05, 1.02144907e-05, 5.80620330e-07, 4.00526692e-06,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
2.01677484e+02, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 6.61440772e-04, 6.74618908e-05,
2.20631185e-05, 1.25412960e-06, 8.65130542e-06, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 4.35619783e+02,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 1.10149149e-03, 1.12343693e-04, 3.67415169e-05,
2.08849098e-06, 1.44069427e-05, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 7.25433789e+02, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
1.41420165e-03, 1.44237732e-04, 4.71723241e-05, 2.68140735e-06,
1.84970309e-05, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 9.31382280e+02, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 1.39985011e-03,
1.42773985e-04, 4.66936120e-05, 2.65419601e-06, 1.83093201e-05,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
9.21930467e+02, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 1.06829633e-03, 1.08958040e-04,
3.56342540e-05, 2.02555105e-06, 1.39727671e-05, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 7.03571710e+02,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 6.28553057e-04, 6.41075959e-05, 2.09661109e-05,
1.19177261e-06, 8.22115101e-06, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 4.13960188e+02, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
2.85122937e-04, 2.90803549e-05, 9.51060382e-06, 5.40609424e-07,
3.72926150e-06, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 1.87779763e+02, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 9.97154314e-05,
1.01702100e-05, 3.32612302e-06, 1.89066171e-07, 1.30422661e-06,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
6.56718128e+01, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 4.98777791e-06, 3.72494741e-06,
4.63112076e-06, 8.36251140e-05, 1.66674352e-04, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 1.50943077e+02,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 9.21816811e-06, 6.88426631e-06, 8.55901174e-06,
1.54551861e-04, 3.08039416e-04, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 2.78965640e+02, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
1.68511998e-05, 1.25847289e-05, 1.56462341e-05, 2.82527315e-04,
5.63109034e-04, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 5.09960945e+02, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 3.01813740e-05,
2.25399032e-05, 2.80232180e-05, 5.06021096e-04, 1.00855753e-03,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
9.13366541e+02, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 5.20333042e-05, 3.88592526e-05,
4.83125992e-05, 8.72390687e-04, 1.73877374e-03, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 1.57466254e+03,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 8.35592123e-05, 6.24032740e-05, 7.75842087e-05,
1.40095425e-03, 2.79226096e-03, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 2.52871816e+03, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
1.17656634e-04, 8.78677406e-05, 1.09243453e-04, 1.97263183e-03,
3.93167932e-03, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 3.56059446e+03, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 1.30166694e-04,
9.72104406e-05, 1.20858965e-04, 2.18237556e-03, 4.34972240e-03,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
3.93918125e+03, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 9.32233035e-05, 6.96205624e-05,
8.65572573e-05, 1.56298246e-03, 3.11520159e-03, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 2.82117859e+03,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 3.11693104e-05, 2.32777089e-05, 2.89405108e-05,
5.22584843e-04, 1.04157096e-03, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 9.43264054e+02, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 1.29037789e+00])
    qn = np.array([
5.13446051e+02, 1.79514291e+04, 5.92952938e+08, 4.60940105e+01,
0.00000000e+00, -1.19996563e+03, 8.56255653e-04, 4.30447790e+00,
0.00000000e+00, -5.02658383e+00, 1.05121062e+02, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
1.03192528e-04, 1.04079855e-05, 3.48092795e-06, 4.22943393e-09,
1.60501483e-08, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 6.76174348e+01, 4.35204659e+01, 0.00000000e+00,
2.91032647e-01, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 2.92374138e-04,
2.95532013e-05, 9.84089672e-06, 2.29194433e-08, 8.29546826e-08,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
1.91632333e+02, 4.35823334e+01, 0.00000000e+00, 2.87233761e-01,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 6.28551396e-04, 6.34122009e-05,
2.11461661e-05, 3.36314665e-08, 1.40721865e-07, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 4.11867919e+02,
4.35662510e+01, 0.00000000e+00, 2.90550330e-01, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 1.04422375e-03, 1.05230143e-04, 3.51161538e-05,
8.17294213e-09, -1.01586808e-08, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 6.84004809e+02, 4.35284591e+01,
0.00000000e+00, 2.95489341e-01, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
1.33984495e-03, 1.34952898e-04, 4.50395667e-05, -1.29117803e-09,
-2.17284742e-07, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 8.77591772e+02, 4.34476786e+01, 0.00000000e+00,
3.03639648e-01, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 1.30136266e-03,
1.30427131e-04, 4.38315686e-05, -5.18177726e-08, -6.64680532e-07,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
8.52060331e+02, 4.32124931e+01, 0.00000000e+00, 3.22069284e-01,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 9.83687405e-04, 9.83546818e-05,
3.31705979e-05, -4.01571080e-08, -3.49550436e-07, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 6.43951432e+02,
4.29034986e+01, 0.00000000e+00, 3.43599977e-01, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 5.64027119e-04, 5.60691947e-05, 1.90963943e-05,
7.72354973e-09, 4.32600594e-08, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 3.69115144e+02, 4.24253646e+01,
0.00000000e+00, 3.70370233e-01, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
2.60364477e-04, 2.59903152e-05, 8.79429763e-06, 4.03972899e-09,
2.41652607e-08, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 1.70442683e+02, 4.20396249e+01, 0.00000000e+00,
3.95022755e-01, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 9.24622810e-05,
9.26338972e-06, 3.11651509e-06, 1.52770676e-09, 9.91826459e-09,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
6.05450845e+01, 4.16356124e+01, 0.00000000e+00, 4.21051189e-01,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 1.58972982e-06, 3.10260211e-06,
4.48708496e-06, 8.29233234e-05, 1.60590418e-04, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 1.45911289e+02,
4.82599816e+01, 0.00000000e+00, 1.50120408e-02, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 4.24615716e-06, 6.08045116e-06, 8.37719910e-06,
1.53664242e-04, 3.00304893e-04, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 2.72933066e+02, 4.80937056e+01,
0.00000000e+00, 2.43853561e-02, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
9.98991456e-06, 1.15722255e-05, 1.54208391e-05, 2.81424711e-04,
5.53465675e-04, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 5.03441280e+02, 4.78444242e+01, 0.00000000e+00,
3.85096090e-02, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 2.13979431e-05,
2.13285164e-05, 2.77566167e-05, 5.04714611e-04, 9.97099761e-04,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
9.07883497e+02, 4.75290806e+01, 0.00000000e+00, 5.64824124e-02,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 4.13725581e-05, 3.74530194e-05,
4.80053768e-05, 8.70883053e-04, 1.72552849e-03, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 1.57246768e+03,
4.70288999e+01, 0.00000000e+00, 8.51401019e-02, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 7.17407787e-05, 6.08909490e-05, 7.72554275e-05,
1.39933913e-03, 2.77805437e-03, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 2.53332454e+03, 4.65320117e+01,
0.00000000e+00, 1.13853536e-01, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
1.06544228e-04, 8.64757771e-05, 1.08941873e-04, 1.97114921e-03,
3.91862716e-03, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 3.57526305e+03, 4.60119426e+01, 0.00000000e+00,
1.44168731e-01, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 1.22130007e-04,
9.62180545e-05, 1.20644446e-04, 2.18132038e-03, 4.34042793e-03,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
3.96156171e+03, 4.54471865e+01, 0.00000000e+00, 1.77390568e-01,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 8.97627886e-05, 6.91977300e-05,
8.64660103e-05, 1.56253347e-03, 3.11124501e-03, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 2.84047084e+03,
4.53757193e+01, 0.00000000e+00, 1.81376690e-01, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 3.04926253e-05, 2.31955598e-05, 2.89228013e-05,
5.22497681e-04, 1.04080266e-03, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 9.50389977e+02, 4.54682223e+01,
0.00000000e+00, 1.75923738e-01, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
5.33889044e-04, 6.73109464e-05, 1.54138879e-05, 2.36193111e-05,
1.82894425e-04, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 1.29037789e+00])
    assert spill.bpm.t[0] == 0.
    for i in range(len(q0)):
        assert_approx_equal(spill.bpm.q[0, i], q0[i], significant=6)
    assert_approx_equal(spill.bpm.t[-1], 48.52956775079609, significant=6)
    for i in range(len(qn)):
        assert_approx_equal(spill.bpm.q[-1, i], qn[i], significant=3)

    # Check tracking data for a particle outside the plume
    assert spill.bpm.particles[0].farfield == False
# ----------------------------------------------------------------------------
# Unit Tests
# ----------------------------------------------------------------------------
def test_Blowout_inst():
    """
    Test instantiation of a `blowout.Blowout` object

    Test that the initializer of the `blowout.Blowout` class creates the
    correct object.
    """
    # Get the input parameters for a typical blowout
    z0, d0, substance, q_oil, gor, x0, y0, u0, phi_0, theta_0, \
        num_gas_elements, num_oil_elements, water, current = \
        get_blowout()

    # Create the blowout.Blowout object
    spill = blowout.Blowout(z0, d0, substance, q_oil, gor, x0, y0, u0,
                            phi_0, theta_0, num_gas_elements,
                            num_oil_elements, water, current)

    # Check the object attributes set by the __init__() method
    check_attributes(z0, d0, substance, q_oil, gor, x0, y0, u0, phi_0,
                     theta_0, num_gas_elements, num_oil_elements,
                     water, current, spill)

    # Check the attributes not accessible to the user
    assert spill.new_oil == False
    assert spill.Sj == 0.
    assert spill.cj == 1.
    assert spill.tracers[0] == 'tracer'
    assert len(spill.disp_phases) == num_gas_elements + num_oil_elements
    assert spill.track == True
    assert spill.dt_max == 5. * 3600.
    assert spill.sd_max == 300. * z0 / d0
    assert spill.update == True

    # Check the CTD data
    T, S, P, ua, va = spill.profile.get_values(z0, ['temperature',
        'salinity', 'pressure', 'ua', 'va'])
    assert_approx_equal(spill.T0, T, significant=6)
    assert_approx_equal(spill.S0, S, significant=6)
    assert_approx_equal(spill.P0, P, significant=6)
    assert_approx_equal(T, 286.45, significant=6)
    assert_approx_equal(S, 35.03, significant=6)
    assert_approx_equal(P, 1107655.378995259, significant=6)
    assert_approx_equal(ua, 0.09, significant=6)
    assert_approx_equal(va, 0., significant=6)

    # Check the bubble and droplet sizes
    de_gas = np.array([0.00367319, 0.00421546, 0.0048378, 0.00555201,
        0.00637166, 0.00731231, 0.00839184, 0.00963073, 0.01105253,
        0.01268423])
    vf_gas = np.array([0.01545088, 0.0432876, 0.09350044, 0.15570546,
        0.19990978, 0.19788106, 0.15101303, 0.08885147, 0.04030462,
        0.01409565])
    de_oil = np.array([0.00044767, 0.00063414, 0.00089826, 0.00127239,
        0.00180236, 0.00255306, 0.00361644, 0.00512273, 0.0072564,
        0.01027876])
    vf_oil = np.array([0.00876514, 0.01619931, 0.02961302, 0.05303846,
        0.09143938, 0.14684062, 0.20676085, 0.22874507, 0.16382356,
        0.05477458])
    assert_array_almost_equal(spill.d_gas, de_gas, decimal=6)
    assert_array_almost_equal(spill.vf_gas, vf_gas, decimal=6)
    assert_array_almost_equal(spill.d_liq, de_oil, decimal=6)
    assert_array_almost_equal(spill.vf_liq, vf_oil, decimal=6)

    # Check the mass fluxes of each of the particles in the disp_phases
    # particle list
    m0 = np.array([5.289671808056377e-07, 7.995318667820805e-07,
        1.2084893528298558e-06, 1.8266270258633492e-06,
        2.760939749945933e-06, 4.173149852104387e-06,
        6.307700009918694e-06, 9.534064393845064e-06,
        1.4410701796700701e-05, 2.1781720543811073e-05,
        4.085231065881823e-08, 1.1611175556114503e-07,
        3.300165783053678e-07, 9.379837677075682e-07,
        2.665967731077995e-06, 7.5772995096915136e-06,
        2.1536445167832175e-05, 6.121157938574416e-05,
        0.00017397752608186881, 0.0004944845384698018])
    for i in range(len(m0)):
        assert_approx_equal(np.sum(spill.disp_phases[i].m), m0[i],
                            significant=6)

def test_update_release_depth():
    """
    Check that the Blowout.update_release_depth() method works as
    anticipated and with no side effects
    """
    # Get the input parameters for a typical blowout
    z0, d0, substance, q_oil, gor, x0, y0, u0, phi_0, theta_0, \
        num_gas_elements, num_oil_elements, water, current = \
        get_blowout()

    # Create the blowout.Blowout object
    spill = blowout.Blowout(z0, d0, substance, q_oil, gor, x0, y0, u0,
                            phi_0, theta_0, num_gas_elements,
                            num_oil_elements, water, current)

    # Update the release depth
    z0 = 200.
    spill.update_release_depth(z0)

    # Check that things were done correctly
    assert spill.update == False
    assert spill.bpm.sim_stored == False
    check_attributes(z0, d0, substance, q_oil, gor, x0, y0, u0, phi_0,
                     theta_0, num_gas_elements, num_oil_elements,
                     water, current, spill)


def test_update_orifice_diameter():
    """
    Check that the Blowout.update_orifice_diameter() method works as
    anticipated and with no side effects
    """
    # Get the input parameters for a typical blowout
    z0, d0, substance, q_oil, gor, x0, y0, u0, phi_0, theta_0, \
        num_gas_elements, num_oil_elements, water, current = \
        get_blowout()

    # Create the blowout.Blowout object
    spill = blowout.Blowout(z0, d0, substance, q_oil, gor, x0, y0, u0,
                            phi_0, theta_0, num_gas_elements,
                            num_oil_elements, water, current)

    # Update the orifice diameter
    d0 = 0.1
    spill.update_orifice_diameter(d0)

    # Check that things were done correctly
    assert spill.update == False
    assert spill.bpm.sim_stored == False
    check_attributes(z0, d0, substance, q_oil, gor, x0, y0, u0, phi_0,
                     theta_0, num_gas_elements, num_oil_elements,
                     water, current, spill)


def test_update_substance():
    """
    Check that the Blowout.update_substance() method works as anticipated
    and with no side effects
    """
    # Get the input parameters for a typical blowout
    z0, d0, substance, q_oil, gor, x0, y0, u0, phi_0, theta_0, \
        num_gas_elements, num_oil_elements, water, current = \
        get_blowout()

    # Create the blowout.Blowout object
    spill = blowout.Blowout(z0, d0, substance, q_oil, gor, x0, y0, u0,
                            phi_0, theta_0, num_gas_elements,
                            num_oil_elements, water, current)

    # Update the released substance
    substance = {'composition': ['methane'],
                 'masses': np.array([1.0])}
    spill.update_substance(substance)

    # Check that things were done correctly
    assert spill.update == False
    assert spill.bpm.sim_stored == False
    check_attributes(z0, d0, substance, q_oil, gor, x0, y0, u0, phi_0,
                     theta_0, num_gas_elements, num_oil_elements,
                     water, current, spill)

def test_update_q_oil():
    """
    Check that the Blowout.update_q_oil() method works as anticipated and
    with no side effects
    """
    # Get the input parameters for a typical blowout
    z0, d0, substance, q_oil, gor, x0, y0, u0, phi_0, theta_0, \
        num_gas_elements, num_oil_elements, water, current = \
        get_blowout()

    # Create the blowout.Blowout object
    spill = blowout.Blowout(z0, d0, substance, q_oil, gor, x0, y0, u0,
                            phi_0, theta_0, num_gas_elements,
                            num_oil_elements, water, current)

    # Update the oil flow rate
    q_oil = 30000.
    spill.update_q_oil(q_oil)

    # Check that things were done correctly
    assert spill.update == False
    assert spill.bpm.sim_stored == False
    check_attributes(z0, d0, substance, q_oil, gor, x0, y0, u0, phi_0,
                     theta_0, num_gas_elements, num_oil_elements,
                     water, current, spill)


def test_update_gor():
    """
    Check that the Blowout.update_gor() method works as anticipated and
    with no side effects
    """
    # Get the input parameters for a typical blowout
    z0, d0, substance, q_oil, gor, x0, y0, u0, phi_0, theta_0, \
        num_gas_elements, num_oil_elements, water, current = \
        get_blowout()

    # Create the blowout.Blowout object
    spill = blowout.Blowout(z0, d0, substance, q_oil, gor, x0, y0, u0,
                            phi_0, theta_0, num_gas_elements,
                            num_oil_elements, water, current)

    # Update the gas-to-oil ratio
    gor = 500.
    spill.update_gor(gor)

    # Check that things were done correctly
    assert spill.update == False
    assert spill.bpm.sim_stored == False
    check_attributes(z0, d0, substance, q_oil, gor, x0, y0, u0, phi_0,
                     theta_0, num_gas_elements, num_oil_elements,
                     water, current, spill)


def test_update_produced_water():
    """
    Check that the Blowout.update_produced_water() method works as
    anticipated and with no side effects
    """
    # Get the input parameters for a typical blowout
    z0, d0, substance, q_oil, gor, x0, y0, u0, phi_0, theta_0, \
        num_gas_elements, num_oil_elements, water, current = \
        get_blowout()

    # Create the blowout.Blowout object
    spill = blowout.Blowout(z0, d0, substance, q_oil, gor, x0, y0, u0,
                            phi_0, theta_0, num_gas_elements,
                            num_oil_elements, water, current)

    # Update the produced water exit velocity
    u0 = 1.3
    spill.update_produced_water(u0)

    # Check that things were done correctly
    assert spill.update == False
    assert spill.bpm.sim_stored == False
    check_attributes(z0, d0, substance, q_oil, gor, x0, y0, u0, phi_0,
                     theta_0, num_gas_elements, num_oil_elements,
                     water, current, spill)

def test_update_vertical_orientation():
    """
    Check that the Blowout.update_vertical_orientation() method works as
    anticipated and with no side effects
    """
    # Get the input parameters for a typical blowout
    z0, d0, substance, q_oil, gor, x0, y0, u0, phi_0, theta_0, \
        num_gas_elements, num_oil_elements, water, current = \
        get_blowout()

    # Create the blowout.Blowout object
    spill = blowout.Blowout(z0, d0, substance, q_oil, gor, x0, y0, u0,
                            phi_0, theta_0, num_gas_elements,
                            num_oil_elements, water, current)

    # Update the vertical orientation of the release
    phi_0 = -np.pi / 4.
    spill.update_vertical_orientation(phi_0)

    # Check that things were done correctly
    assert spill.update == False
    assert spill.bpm.sim_stored == False
    check_attributes(z0, d0, substance, q_oil, gor, x0, y0, u0, phi_0,
                     theta_0, num_gas_elements, num_oil_elements,
                     water, current, spill)


def test_update_horizontal_orientation():
    """
    Check that the Blowout.update_horizontal_orientation() method works as
    anticipated and with no side effects
    """
    # Get the input parameters for a typical blowout
    z0, d0, substance, q_oil, gor, x0, y0, u0, phi_0, theta_0, \
        num_gas_elements, num_oil_elements, water, current = \
        get_blowout()

    # Create the blowout.Blowout object
    spill = blowout.Blowout(z0, d0, substance, q_oil, gor, x0, y0, u0,
                            phi_0, theta_0, num_gas_elements,
                            num_oil_elements, water, current)

    # Update the horizontal orientation of the release
    theta_0 = np.pi
    spill.update_horizontal_orientation(theta_0)

    # Check that things were done correctly
    assert spill.update == False
    assert spill.bpm.sim_stored == False
    check_attributes(z0, d0, substance, q_oil, gor, x0, y0, u0, phi_0,
                     theta_0, num_gas_elements, num_oil_elements,
                     water, current, spill)


def test_update_num_gas_elements():
    """
    Check that the Blowout.update_num_gas_elements() method works as
    anticipated and with no side effects
    """
    # Get the input parameters for a typical blowout
    z0, d0, substance, q_oil, gor, x0, y0, u0, phi_0, theta_0, \
        num_gas_elements, num_oil_elements, water, current = \
        get_blowout()

    # Create the blowout.Blowout object
    spill = blowout.Blowout(z0, d0, substance, q_oil, gor, x0, y0, u0,
                            phi_0, theta_0, num_gas_elements,
                            num_oil_elements, water, current)

    # Update the number of gas bubble sizes
    num_gas_elements = 5
    spill.update_num_gas_elements(num_gas_elements)

    # Check that things were done correctly
    assert spill.update == False
    assert spill.bpm.sim_stored == False
    check_attributes(z0, d0, substance, q_oil, gor, x0, y0, u0, phi_0,
                     theta_0, num_gas_elements, num_oil_elements,
                     water, current, spill)

def test_update_num_oil_elements():
    """
    Check that the Blowout.update_num_oil_elements() method works as
    anticipated and with no side effects
    """
    # Get the input parameters for a typical blowout
    z0, d0, substance, q_oil, gor, x0, y0, u0, phi_0, theta_0, \
        num_gas_elements, num_oil_elements, water, current = \
        get_blowout()

    # Create the blowout.Blowout object
    spill = blowout.Blowout(z0, d0, substance, q_oil, gor, x0, y0, u0,
                            phi_0, theta_0, num_gas_elements,
                            num_oil_elements, water, current)

    # Update the number of oil droplet sizes
    num_oil_elements = 20
    spill.update_num_oil_elements(num_oil_elements)

    # Check that things were done correctly
    assert spill.update == False
    assert spill.bpm.sim_stored == False
    check_attributes(z0, d0, substance, q_oil, gor, x0, y0, u0, phi_0,
                     theta_0, num_gas_elements, num_oil_elements,
                     water, current, spill)


def test_update_water_data():
    """
    Check that the Blowout.update_water_data() method works as anticipated
    and with no side effects
    """
    # Get the input parameters for a typical blowout
    z0, d0, substance, q_oil, gor, x0, y0, u0, phi_0, theta_0, \
        num_gas_elements, num_oil_elements, water, current = \
        get_blowout()

    # Create the blowout.Blowout object
    spill = blowout.Blowout(z0, d0, substance, q_oil, gor, x0, y0, u0,
                            phi_0, theta_0, num_gas_elements,
                            num_oil_elements, water, current)

    # Update the ambient water column data
    water = get_ctd()
    spill.update_water_data(water)

    # Check that things were done correctly
    assert spill.update == False
    assert spill.bpm.sim_stored == False
    check_attributes(z0, d0, substance, q_oil, gor, x0, y0, u0, phi_0,
                     theta_0, num_gas_elements, num_oil_elements,
                     water, current, spill)


def test_update_current_data():
    """
    Check that the Blowout.update_current_data() method works as
    anticipated and with no side effects
    """
    # Get the input parameters for a typical blowout
    z0, d0, substance, q_oil, gor, x0, y0, u0, phi_0, theta_0, \
        num_gas_elements, num_oil_elements, water, current = \
        get_blowout()

    # Create the blowout.Blowout object
    spill = blowout.Blowout(z0, d0, substance, q_oil, gor, x0, y0, u0,
                            phi_0, theta_0, num_gas_elements,
                            num_oil_elements, water, current)

    # Update the ambient current data
    current = 0.1
    spill.update_current_data(current)

    # Check that things were done correctly
    assert spill.update == False
    assert spill.bpm.sim_stored == False
    check_attributes(z0, d0, substance, q_oil, gor, x0, y0, u0, phi_0,
                     theta_0, num_gas_elements, num_oil_elements,
                     water, current, spill)

def test_simulate():
    """
    Check that the Blowout.simulate() method works and produces the
    expected output.
    """
    # Get the input parameters for a typical blowout
    z0, d0, substance, q_oil, gor, x0, y0, u0, phi_0, theta_0, \
        num_gas_elements, num_oil_elements, water, current = \
        get_blowout()

    # Create the blowout.Blowout object
    spill = blowout.Blowout(z0, d0, substance, q_oil, gor, x0, y0, u0,
                            phi_0, theta_0, num_gas_elements,
                            num_oil_elements, water, current)

    # Run the simulation
    spill.simulate()

    # Check the simulation
    check_simulation(spill)

    # Re-run the simulation and make sure the results are unchanged
    spill.simulate()
    check_simulation(spill)
|
socolofs/tamoc
|
tamoc/test/test_blowout.py
|
Python
|
mit
| 42,041
|
[
"NetCDF"
] |
78c4f3526fe738b36dc2417b7adaef49f7cd8c4f11aefe034b687279a151ff56
|
"""Tool for calculating RDFs
"""
from __future__ import print_function
import numpy as np
from MDAnalysis.lib.distances import distance_array
from analysisbase import AnalysisBase, blocks_of
class InterRDF(AnalysisBase):
"""Analysis object for calculating intermolecular RDF.
See the init method for arguments and keywords.
Run the analysis with method *run*
Results are stored in the following attributes:
rdf
The pair distribution function, normalised.
edges
The boundaries of each rdf bin.
bins
The center of each rdf bin.
"""
def __init__(self, *args, **kwargs):
"""InterRDF(g1, g2, nbins=75, range=(0.0, 15.0))
:Arguments:
*g1*
First AtomGroup
*g2*
Second AtomGroup
:Keywords:
*nbins*
Number of bins in the histogram [75]
*range*
The size of the RDF [0.0, 15.0]
*exclusion_block*
A tuple representing the tile to exclude from the distance
array. [None]
*start*
The frame to start at [0]
*stop*
The frame to end analysis at. [-1]
*step*
The step size through the trajectory in frames [0]
Keyword *exclusion_block* allows same molecule contributions to
be excluded from the rdf calculation.
"""
self.g1 = args[0]
self.g2 = args[1]
self.u = self.g1.universe
kwargs.update({'traj': self.u.trajectory})
self._setup_frames(**kwargs)
nbins = kwargs.pop('nbins', 75)
hrange = kwargs.pop('range', (0.0, 15.0))
self.rdf_settings = {'bins':nbins,
'range':hrange}
# Empty histogram to store the RDF
count, edges = np.histogram([-1], **self.rdf_settings)
count *= 0.0
self.count = count
self.edges = edges
self.bins = 0.5 * (edges[:-1] + edges[1:])
# Need to know average volume
self.volume = 0.0
# Allocate a results array which we will reuse
self._result = np.zeros((len(self.g1), len(self.g2)), dtype=np.float64)
        # If exclusions were provided (popped from kwargs above), create a
        # mask of _result which lets us take these out
        if exclusion_block is not None:
self._exclusion_block = exclusion_block
self._exclusion_mask = blocks_of(self._result, *exclusion_block)
self._maxrange = hrange[1] + 1.0
else:
self._exclusion_block = None
self._exclusion_mask = None
def _singleframe(self):
distance_array(self.g1.positions, self.g2.positions,
box=self.u.dimensions, result=self._result)
# Maybe exclude same molecule distances
        if self._exclusion_mask is not None:
self._exclusion_mask[:] = self._maxrange
count = np.histogram(self._result, **self.rdf_settings)[0]
self.count += count
self.volume += self._ts.volume
def _normalise(self):
# Number of each selection
nA = len(self.g1)
nB = len(self.g2)
N = nA * nB
# If we had exclusions, take these into account
if self._exclusion_block:
xA, xB = self._exclusion_block
            nblocks = nA // xA  # integer division: number of whole exclusion tiles
            N -= xA * xB * nblocks
# Volume in each radial shell
vol = np.power(self.edges[1:], 3) - np.power(self.edges[:-1], 3)
vol *= 4/3.0 * np.pi
# Number of frames
nframes = len(self.frames)
# Average number density
box_vol = self.volume / nframes
density = N / box_vol
rdf = self.count / (density * vol * nframes)
self.rdf = rdf
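# Hedged usage sketch (not part of the original module; the topology and
# trajectory names and the water-oxygen selection are hypothetical):
#
#   import MDAnalysis as mda
#   u = mda.Universe('topol.tpr', 'traj.xtc')
#   oxygens = u.select_atoms('name OW')
#   rdf = InterRDF(oxygens, oxygens, nbins=75, range=(0.0, 15.0))
#   rdf.run()   # run() comes from AnalysisBase, per the class docstring
#   # rdf.bins and rdf.rdf then hold the radial axis and normalised g(r)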
| richardjgowers/MDA-RDFTool | rdftool.py | Python | gpl-2.0 | 3,827 | ["MDAnalysis"] | 54db7262ec2e459139e54586aa29903605d10ce30dc6f380fc17757e9fe8904d |
#!/usr/bin/env python
"""Title scene for Galaxy Crawl: the title banner plus the main menu."""
import cocos
import pyglet
from ui.utils.factories import GCLabelFactory
from ui.menu import GCMenu, GCMenuItem
from ui.scenes.main import GCMainScene
from ui.scenes.options import GCOptionsScene
__all__ = ['GCTitleScene']
label_factory = GCLabelFactory()
class _TitleLayer(cocos.layer.Layer):
"""
"""
def __init__(self):
super(_TitleLayer, self).__init__()
title = label_factory.centered('Galaxy Crawl', 32)
title.position = 320, 320
self.add(title)
class _MenuLayer(GCMenu):
"""
"""
def __init__(self):
super(_MenuLayer, self).__init__()
menu_items = [
GCMenuItem('Start', self.on_new_game),
GCMenuItem('Options', self.on_options),
GCMenuItem('Quit', self.on_quit)
]
self.create_menu(menu_items)
def on_new_game(self):
cocos.director.director.replace(GCMainScene)
def on_options(self):
cocos.director.director.replace(GCOptionsScene)
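    def on_quit(self):
        # Added handler for the 'Quit' item above; assumes the GCMenu base
        # class does not already provide one. pyglet drives the cocos event
        # loop, so exiting it shuts the game down.
        pyglet.app.exit()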
GCTitleScene = cocos.scene.Scene(
_TitleLayer(),
_MenuLayer()
)
| GalaxyCrawl/GalaxyCrawl | galaxycrawl/ui/scenes/title/title_scene.py | Python | mit | 1,093 | ["Galaxy"] | fef4f17c2d341f981a667889edc61484aaebd46bb6427d82ab03c2e9e53c964d |
# copyright 2003-2013 LOGILAB S.A. (Paris, FRANCE), all rights reserved.
# contact http://www.logilab.fr/ -- mailto:contact@logilab.fr
#
# This file is part of astroid.
#
# astroid is free software: you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by the
# Free Software Foundation, either version 2.1 of the License, or (at your
# option) any later version.
#
# astroid is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License
# for more details.
#
# You should have received a copy of the GNU Lesser General Public License along
# with astroid. If not, see <http://www.gnu.org/licenses/>.
"""this module contains utilities for rebuilding a _ast tree in
order to get a single Astroid representation
"""
import sys
from _ast import (
Expr as Discard, Str,
# binary operators
Add, BinOp, Div, FloorDiv, Mod, Mult, Pow, Sub, BitAnd, BitOr, BitXor,
LShift, RShift,
# logical operators
And, Or,
# unary operators
UAdd, USub, Not, Invert,
# comparison operators
Eq, Gt, GtE, In, Is, IsNot, Lt, LtE, NotEq, NotIn,
)
from astroid import nodes as new
from astroid import astpeephole
_BIN_OP_CLASSES = {Add: '+',
BitAnd: '&',
BitOr: '|',
BitXor: '^',
Div: '/',
FloorDiv: '//',
Mod: '%',
Mult: '*',
Pow: '**',
Sub: '-',
LShift: '<<',
RShift: '>>',
}
_BOOL_OP_CLASSES = {And: 'and',
Or: 'or',
}
_UNARY_OP_CLASSES = {UAdd: '+',
USub: '-',
Not: 'not',
Invert: '~',
}
_CMP_OP_CLASSES = {Eq: '==',
Gt: '>',
GtE: '>=',
In: 'in',
Is: 'is',
IsNot: 'is not',
Lt: '<',
LtE: '<=',
NotEq: '!=',
NotIn: 'not in',
}
CONST_NAME_TRANSFORMS = {'None': None,
'True': True,
'False': False,
}
REDIRECT = {'arguments': 'Arguments',
'Attribute': 'Getattr',
'comprehension': 'Comprehension',
'Call': 'CallFunc',
'ClassDef': 'Class',
"ListCompFor": 'Comprehension',
"GenExprFor": 'Comprehension',
'excepthandler': 'ExceptHandler',
'Expr': 'Discard',
'FunctionDef': 'Function',
'GeneratorExp': 'GenExpr',
'ImportFrom': 'From',
'keyword': 'Keyword',
'Repr': 'Backquote',
}
PY3K = sys.version_info >= (3, 0)
PY34 = sys.version_info >= (3, 4)
def _init_set_doc(node, newnode):
newnode.doc = None
try:
if isinstance(node.body[0], Discard) and isinstance(node.body[0].value, Str):
newnode.doc = node.body[0].value.s
node.body = node.body[1:]
except IndexError:
pass # ast built from scratch
def _lineno_parent(oldnode, newnode, parent):
newnode.parent = parent
newnode.lineno = oldnode.lineno
newnode.col_offset = oldnode.col_offset
def _set_infos(oldnode, newnode, parent):
newnode.parent = parent
if hasattr(oldnode, 'lineno'):
newnode.lineno = oldnode.lineno
if hasattr(oldnode, 'col_offset'):
newnode.col_offset = oldnode.col_offset
def _create_yield_node(node, parent, rebuilder, factory):
newnode = factory()
_lineno_parent(node, newnode, parent)
if node.value is not None:
newnode.value = rebuilder.visit(node.value, newnode)
return newnode
class TreeRebuilder(object):
"""Rebuilds the _ast tree to become an Astroid tree"""
def __init__(self, manager):
self._manager = manager
self.asscontext = None
self._global_names = []
self._from_nodes = []
self._delayed_assattr = []
self._visit_meths = {}
self._transform = manager.transform
self._peepholer = astpeephole.ASTPeepholeOptimizer()
def visit_module(self, node, modname, modpath, package):
"""visit a Module node by returning a fresh instance of it"""
newnode = new.Module(modname, None)
newnode.package = package
newnode.parent = None
_init_set_doc(node, newnode)
newnode.body = [self.visit(child, newnode) for child in node.body]
newnode.file = newnode.path = modpath
return self._transform(newnode)
def visit(self, node, parent):
cls = node.__class__
if cls in self._visit_meths:
visit_method = self._visit_meths[cls]
else:
cls_name = cls.__name__
visit_name = 'visit_' + REDIRECT.get(cls_name, cls_name).lower()
visit_method = getattr(self, visit_name)
self._visit_meths[cls] = visit_method
return self._transform(visit_method(node, parent))
def _save_assignment(self, node, name=None):
"""save assignement situation since node.parent is not available yet"""
if self._global_names and node.name in self._global_names[-1]:
node.root().set_local(node.name, node)
else:
node.parent.set_local(node.name, node)
def visit_arguments(self, node, parent):
"""visit a Arguments node by returning a fresh instance of it"""
newnode = new.Arguments()
newnode.parent = parent
self.asscontext = "Ass"
newnode.args = [self.visit(child, newnode) for child in node.args]
self.asscontext = None
newnode.defaults = [self.visit(child, newnode) for child in node.defaults]
newnode.kwonlyargs = []
newnode.kw_defaults = []
vararg, kwarg = node.vararg, node.kwarg
# change added in 82732 (7c5c678e4164), vararg and kwarg
# are instances of `_ast.arg`, not strings
if vararg:
if PY34:
if vararg.annotation:
newnode.varargannotation = self.visit(vararg.annotation,
newnode)
vararg = vararg.arg
elif PY3K and node.varargannotation:
newnode.varargannotation = self.visit(node.varargannotation,
newnode)
if kwarg:
if PY34:
if kwarg.annotation:
newnode.kwargannotation = self.visit(kwarg.annotation,
newnode)
kwarg = kwarg.arg
elif PY3K:
if node.kwargannotation:
newnode.kwargannotation = self.visit(node.kwargannotation,
newnode)
newnode.vararg = vararg
newnode.kwarg = kwarg
# save argument names in locals:
if vararg:
newnode.parent.set_local(vararg, newnode)
if kwarg:
newnode.parent.set_local(kwarg, newnode)
return newnode
def visit_assattr(self, node, parent):
"""visit a AssAttr node by returning a fresh instance of it"""
assc, self.asscontext = self.asscontext, None
newnode = new.AssAttr()
_lineno_parent(node, newnode, parent)
newnode.expr = self.visit(node.expr, newnode)
self.asscontext = assc
self._delayed_assattr.append(newnode)
return newnode
def visit_assert(self, node, parent):
"""visit a Assert node by returning a fresh instance of it"""
newnode = new.Assert()
_lineno_parent(node, newnode, parent)
newnode.test = self.visit(node.test, newnode)
if node.msg is not None:
newnode.fail = self.visit(node.msg, newnode)
return newnode
def visit_assign(self, node, parent):
"""visit a Assign node by returning a fresh instance of it"""
newnode = new.Assign()
_lineno_parent(node, newnode, parent)
self.asscontext = "Ass"
newnode.targets = [self.visit(child, newnode) for child in node.targets]
self.asscontext = None
newnode.value = self.visit(node.value, newnode)
# set some function or metaclass infos XXX explain ?
klass = newnode.parent.frame()
if (isinstance(klass, new.Class)
and isinstance(newnode.value, new.CallFunc)
and isinstance(newnode.value.func, new.Name)):
func_name = newnode.value.func.name
for ass_node in newnode.targets:
try:
meth = klass[ass_node.name]
if isinstance(meth, new.Function):
if func_name in ('classmethod', 'staticmethod'):
meth.type = func_name
elif func_name == 'classproperty': # see lgc.decorators
meth.type = 'classmethod'
meth.extra_decorators.append(newnode.value)
except (AttributeError, KeyError):
continue
return newnode
def visit_assname(self, node, parent, node_name=None):
        '''visit a node and return an AssName node'''
newnode = new.AssName()
_set_infos(node, newnode, parent)
newnode.name = node_name
self._save_assignment(newnode)
return newnode
def visit_augassign(self, node, parent):
"""visit a AugAssign node by returning a fresh instance of it"""
newnode = new.AugAssign()
_lineno_parent(node, newnode, parent)
newnode.op = _BIN_OP_CLASSES[node.op.__class__] + "="
self.asscontext = "Ass"
newnode.target = self.visit(node.target, newnode)
self.asscontext = None
newnode.value = self.visit(node.value, newnode)
return newnode
def visit_backquote(self, node, parent):
"""visit a Backquote node by returning a fresh instance of it"""
newnode = new.Backquote()
_lineno_parent(node, newnode, parent)
newnode.value = self.visit(node.value, newnode)
return newnode
def visit_binop(self, node, parent):
"""visit a BinOp node by returning a fresh instance of it"""
if isinstance(node.left, BinOp) and self._manager.optimize_ast:
# Optimize BinOp operations in order to remove
# redundant recursion. For instance, if the
# following code is parsed in order to obtain
# its ast, then the rebuilder will fail with an
# infinite recursion, the same will happen with the
# inference engine as well. There's no need to hold
# so many objects for the BinOp if they can be reduced
# to something else (also, the optimization
# might handle only Const binops, which isn't a big
# problem for the correctness of the program).
#
# ("a" + "b" + # one thousand more + "c")
newnode = self._peepholer.optimize_binop(node)
if newnode:
_lineno_parent(node, newnode, parent)
return newnode
newnode = new.BinOp()
_lineno_parent(node, newnode, parent)
newnode.left = self.visit(node.left, newnode)
newnode.right = self.visit(node.right, newnode)
newnode.op = _BIN_OP_CLASSES[node.op.__class__]
return newnode
def visit_boolop(self, node, parent):
"""visit a BoolOp node by returning a fresh instance of it"""
newnode = new.BoolOp()
_lineno_parent(node, newnode, parent)
newnode.values = [self.visit(child, newnode) for child in node.values]
newnode.op = _BOOL_OP_CLASSES[node.op.__class__]
return newnode
def visit_break(self, node, parent):
"""visit a Break node by returning a fresh instance of it"""
newnode = new.Break()
_set_infos(node, newnode, parent)
return newnode
def visit_callfunc(self, node, parent):
"""visit a CallFunc node by returning a fresh instance of it"""
newnode = new.CallFunc()
_lineno_parent(node, newnode, parent)
newnode.func = self.visit(node.func, newnode)
newnode.args = [self.visit(child, newnode) for child in node.args]
if node.starargs is not None:
newnode.starargs = self.visit(node.starargs, newnode)
if node.kwargs is not None:
newnode.kwargs = self.visit(node.kwargs, newnode)
for child in node.keywords:
newnode.args.append(self.visit(child, newnode))
return newnode
def visit_class(self, node, parent):
"""visit a Class node to become astroid"""
newnode = new.Class(node.name, None)
_lineno_parent(node, newnode, parent)
_init_set_doc(node, newnode)
newnode.bases = [self.visit(child, newnode) for child in node.bases]
newnode.body = [self.visit(child, newnode) for child in node.body]
        if 'decorator_list' in node._fields and node.decorator_list:  # py >= 2.6
newnode.decorators = self.visit_decorators(node, newnode)
newnode.parent.frame().set_local(newnode.name, newnode)
return newnode
def visit_const(self, node, parent):
"""visit a Const node by returning a fresh instance of it"""
newnode = new.Const(node.value)
_set_infos(node, newnode, parent)
return newnode
def visit_continue(self, node, parent):
"""visit a Continue node by returning a fresh instance of it"""
newnode = new.Continue()
_set_infos(node, newnode, parent)
return newnode
def visit_compare(self, node, parent):
"""visit a Compare node by returning a fresh instance of it"""
newnode = new.Compare()
_lineno_parent(node, newnode, parent)
newnode.left = self.visit(node.left, newnode)
newnode.ops = [(_CMP_OP_CLASSES[op.__class__], self.visit(expr, newnode))
for (op, expr) in zip(node.ops, node.comparators)]
return newnode
def visit_comprehension(self, node, parent):
"""visit a Comprehension node by returning a fresh instance of it"""
newnode = new.Comprehension()
newnode.parent = parent
self.asscontext = "Ass"
newnode.target = self.visit(node.target, newnode)
self.asscontext = None
newnode.iter = self.visit(node.iter, newnode)
newnode.ifs = [self.visit(child, newnode) for child in node.ifs]
return newnode
def visit_decorators(self, node, parent):
"""visit a Decorators node by returning a fresh instance of it"""
# /!\ node is actually a _ast.Function node while
# parent is a astroid.nodes.Function node
newnode = new.Decorators()
_lineno_parent(node, newnode, parent)
if 'decorators' in node._fields: # py < 2.6, i.e. 2.5
decorators = node.decorators
else:
decorators = node.decorator_list
newnode.nodes = [self.visit(child, newnode) for child in decorators]
return newnode
def visit_delete(self, node, parent):
"""visit a Delete node by returning a fresh instance of it"""
newnode = new.Delete()
_lineno_parent(node, newnode, parent)
self.asscontext = "Del"
newnode.targets = [self.visit(child, newnode) for child in node.targets]
self.asscontext = None
return newnode
def visit_dict(self, node, parent):
"""visit a Dict node by returning a fresh instance of it"""
newnode = new.Dict()
_lineno_parent(node, newnode, parent)
newnode.items = [(self.visit(key, newnode), self.visit(value, newnode))
for key, value in zip(node.keys, node.values)]
return newnode
def visit_dictcomp(self, node, parent):
"""visit a DictComp node by returning a fresh instance of it"""
newnode = new.DictComp()
_lineno_parent(node, newnode, parent)
newnode.key = self.visit(node.key, newnode)
newnode.value = self.visit(node.value, newnode)
newnode.generators = [self.visit(child, newnode)
for child in node.generators]
return newnode
def visit_discard(self, node, parent):
"""visit a Discard node by returning a fresh instance of it"""
newnode = new.Discard()
_lineno_parent(node, newnode, parent)
newnode.value = self.visit(node.value, newnode)
return newnode
def visit_ellipsis(self, node, parent):
"""visit an Ellipsis node by returning a fresh instance of it"""
newnode = new.Ellipsis()
_set_infos(node, newnode, parent)
return newnode
def visit_emptynode(self, node, parent):
"""visit an EmptyNode node by returning a fresh instance of it"""
newnode = new.EmptyNode()
_set_infos(node, newnode, parent)
return newnode
def visit_excepthandler(self, node, parent):
"""visit an ExceptHandler node by returning a fresh instance of it"""
newnode = new.ExceptHandler()
_lineno_parent(node, newnode, parent)
if node.type is not None:
newnode.type = self.visit(node.type, newnode)
if node.name is not None:
# /!\ node.name can be a tuple
self.asscontext = "Ass"
newnode.name = self.visit(node.name, newnode)
self.asscontext = None
newnode.body = [self.visit(child, newnode) for child in node.body]
return newnode
def visit_exec(self, node, parent):
"""visit an Exec node by returning a fresh instance of it"""
newnode = new.Exec()
_lineno_parent(node, newnode, parent)
newnode.expr = self.visit(node.body, newnode)
if node.globals is not None:
newnode.globals = self.visit(node.globals, newnode)
if node.locals is not None:
newnode.locals = self.visit(node.locals, newnode)
return newnode
def visit_extslice(self, node, parent):
"""visit an ExtSlice node by returning a fresh instance of it"""
newnode = new.ExtSlice()
newnode.parent = parent
newnode.dims = [self.visit(dim, newnode) for dim in node.dims]
return newnode
def visit_for(self, node, parent):
"""visit a For node by returning a fresh instance of it"""
newnode = new.For()
_lineno_parent(node, newnode, parent)
self.asscontext = "Ass"
newnode.target = self.visit(node.target, newnode)
self.asscontext = None
newnode.iter = self.visit(node.iter, newnode)
newnode.body = [self.visit(child, newnode) for child in node.body]
newnode.orelse = [self.visit(child, newnode) for child in node.orelse]
return newnode
def visit_from(self, node, parent):
"""visit a From node by returning a fresh instance of it"""
names = [(alias.name, alias.asname) for alias in node.names]
newnode = new.From(node.module or '', names, node.level or None)
_set_infos(node, newnode, parent)
# store From names to add them to locals after building
self._from_nodes.append(newnode)
return newnode
def visit_function(self, node, parent):
"""visit an Function node to become astroid"""
self._global_names.append({})
newnode = new.Function(node.name, None)
_lineno_parent(node, newnode, parent)
_init_set_doc(node, newnode)
newnode.args = self.visit(node.args, newnode)
newnode.body = [self.visit(child, newnode) for child in node.body]
if 'decorators' in node._fields: # py < 2.6
attr = 'decorators'
else:
attr = 'decorator_list'
decorators = getattr(node, attr)
if decorators:
newnode.decorators = self.visit_decorators(node, newnode)
if PY3K and node.returns:
newnode.returns = self.visit(node.returns, newnode)
self._global_names.pop()
frame = newnode.parent.frame()
if isinstance(frame, new.Class):
if newnode.name == '__new__':
newnode._type = 'classmethod'
else:
newnode._type = 'method'
if newnode.decorators is not None:
for decorator_expr in newnode.decorators.nodes:
if isinstance(decorator_expr, new.Name):
if decorator_expr.name in ('classmethod', 'staticmethod'):
newnode._type = decorator_expr.name
elif decorator_expr.name == 'classproperty':
newnode._type = 'classmethod'
frame.set_local(newnode.name, newnode)
return newnode
def visit_genexpr(self, node, parent):
"""visit a GenExpr node by returning a fresh instance of it"""
newnode = new.GenExpr()
_lineno_parent(node, newnode, parent)
newnode.elt = self.visit(node.elt, newnode)
newnode.generators = [self.visit(child, newnode) for child in node.generators]
return newnode
def visit_getattr(self, node, parent):
"""visit a Getattr node by returning a fresh instance of it"""
if self.asscontext == "Del":
            # FIXME: maybe we should reintroduce a visit_delattr,
            # for instance by deactivating asscontext?
newnode = new.DelAttr()
elif self.asscontext == "Ass":
# FIXME : maybe we should call visit_assattr ?
newnode = new.AssAttr()
self._delayed_assattr.append(newnode)
else:
newnode = new.Getattr()
_lineno_parent(node, newnode, parent)
asscontext, self.asscontext = self.asscontext, None
newnode.expr = self.visit(node.value, newnode)
self.asscontext = asscontext
newnode.attrname = node.attr
return newnode
def visit_global(self, node, parent):
"""visit an Global node to become astroid"""
newnode = new.Global(node.names)
_set_infos(node, newnode, parent)
if self._global_names: # global at the module level, no effect
for name in node.names:
self._global_names[-1].setdefault(name, []).append(newnode)
return newnode
def visit_if(self, node, parent):
"""visit a If node by returning a fresh instance of it"""
newnode = new.If()
_lineno_parent(node, newnode, parent)
newnode.test = self.visit(node.test, newnode)
newnode.body = [self.visit(child, newnode) for child in node.body]
newnode.orelse = [self.visit(child, newnode) for child in node.orelse]
return newnode
def visit_ifexp(self, node, parent):
"""visit a IfExp node by returning a fresh instance of it"""
newnode = new.IfExp()
_lineno_parent(node, newnode, parent)
newnode.test = self.visit(node.test, newnode)
newnode.body = self.visit(node.body, newnode)
newnode.orelse = self.visit(node.orelse, newnode)
return newnode
def visit_import(self, node, parent):
"""visit a Import node by returning a fresh instance of it"""
newnode = new.Import()
_set_infos(node, newnode, parent)
newnode.names = [(alias.name, alias.asname) for alias in node.names]
# save import names in parent's locals:
for (name, asname) in newnode.names:
name = asname or name
newnode.parent.set_local(name.split('.')[0], newnode)
return newnode
def visit_index(self, node, parent):
"""visit a Index node by returning a fresh instance of it"""
newnode = new.Index()
newnode.parent = parent
newnode.value = self.visit(node.value, newnode)
return newnode
def visit_keyword(self, node, parent):
"""visit a Keyword node by returning a fresh instance of it"""
newnode = new.Keyword()
newnode.parent = parent
newnode.arg = node.arg
newnode.value = self.visit(node.value, newnode)
return newnode
def visit_lambda(self, node, parent):
"""visit a Lambda node by returning a fresh instance of it"""
newnode = new.Lambda()
_lineno_parent(node, newnode, parent)
newnode.args = self.visit(node.args, newnode)
newnode.body = self.visit(node.body, newnode)
return newnode
def visit_list(self, node, parent):
"""visit a List node by returning a fresh instance of it"""
newnode = new.List()
_lineno_parent(node, newnode, parent)
newnode.elts = [self.visit(child, newnode) for child in node.elts]
return newnode
def visit_listcomp(self, node, parent):
"""visit a ListComp node by returning a fresh instance of it"""
newnode = new.ListComp()
_lineno_parent(node, newnode, parent)
newnode.elt = self.visit(node.elt, newnode)
newnode.generators = [self.visit(child, newnode)
for child in node.generators]
return newnode
def visit_name(self, node, parent):
"""visit a Name node by returning a fresh instance of it"""
# True and False can be assigned to something in py2x, so we have to
# check first the asscontext
if self.asscontext == "Del":
newnode = new.DelName()
elif self.asscontext is not None: # Ass
assert self.asscontext == "Ass"
newnode = new.AssName()
elif node.id in CONST_NAME_TRANSFORMS:
newnode = new.Const(CONST_NAME_TRANSFORMS[node.id])
_set_infos(node, newnode, parent)
return newnode
else:
newnode = new.Name()
_lineno_parent(node, newnode, parent)
newnode.name = node.id
# XXX REMOVE me :
if self.asscontext in ('Del', 'Ass'): # 'Aug' ??
self._save_assignment(newnode)
return newnode
def visit_bytes(self, node, parent):
"""visit a Bytes node by returning a fresh instance of Const"""
newnode = new.Const(node.s)
_set_infos(node, newnode, parent)
return newnode
def visit_num(self, node, parent):
"""visit a Num node by returning a fresh instance of Const"""
newnode = new.Const(node.n)
_set_infos(node, newnode, parent)
return newnode
def visit_pass(self, node, parent):
"""visit a Pass node by returning a fresh instance of it"""
newnode = new.Pass()
_set_infos(node, newnode, parent)
return newnode
def visit_str(self, node, parent):
"""visit a Str node by returning a fresh instance of Const"""
newnode = new.Const(node.s)
_set_infos(node, newnode, parent)
return newnode
def visit_print(self, node, parent):
"""visit a Print node by returning a fresh instance of it"""
newnode = new.Print()
_lineno_parent(node, newnode, parent)
newnode.nl = node.nl
if node.dest is not None:
newnode.dest = self.visit(node.dest, newnode)
newnode.values = [self.visit(child, newnode) for child in node.values]
return newnode
def visit_raise(self, node, parent):
"""visit a Raise node by returning a fresh instance of it"""
newnode = new.Raise()
_lineno_parent(node, newnode, parent)
if node.type is not None:
newnode.exc = self.visit(node.type, newnode)
if node.inst is not None:
newnode.inst = self.visit(node.inst, newnode)
if node.tback is not None:
newnode.tback = self.visit(node.tback, newnode)
return newnode
def visit_return(self, node, parent):
"""visit a Return node by returning a fresh instance of it"""
newnode = new.Return()
_lineno_parent(node, newnode, parent)
if node.value is not None:
newnode.value = self.visit(node.value, newnode)
return newnode
def visit_set(self, node, parent):
"""visit a Set node by returning a fresh instance of it"""
newnode = new.Set()
_lineno_parent(node, newnode, parent)
newnode.elts = [self.visit(child, newnode) for child in node.elts]
return newnode
def visit_setcomp(self, node, parent):
"""visit a SetComp node by returning a fresh instance of it"""
newnode = new.SetComp()
_lineno_parent(node, newnode, parent)
newnode.elt = self.visit(node.elt, newnode)
newnode.generators = [self.visit(child, newnode)
for child in node.generators]
return newnode
def visit_slice(self, node, parent):
"""visit a Slice node by returning a fresh instance of it"""
newnode = new.Slice()
newnode.parent = parent
if node.lower is not None:
newnode.lower = self.visit(node.lower, newnode)
if node.upper is not None:
newnode.upper = self.visit(node.upper, newnode)
if node.step is not None:
newnode.step = self.visit(node.step, newnode)
return newnode
def visit_subscript(self, node, parent):
"""visit a Subscript node by returning a fresh instance of it"""
newnode = new.Subscript()
_lineno_parent(node, newnode, parent)
subcontext, self.asscontext = self.asscontext, None
newnode.value = self.visit(node.value, newnode)
newnode.slice = self.visit(node.slice, newnode)
self.asscontext = subcontext
return newnode
def visit_tryexcept(self, node, parent):
"""visit a TryExcept node by returning a fresh instance of it"""
newnode = new.TryExcept()
_lineno_parent(node, newnode, parent)
newnode.body = [self.visit(child, newnode) for child in node.body]
newnode.handlers = [self.visit(child, newnode) for child in node.handlers]
newnode.orelse = [self.visit(child, newnode) for child in node.orelse]
return newnode
def visit_tryfinally(self, node, parent):
"""visit a TryFinally node by returning a fresh instance of it"""
newnode = new.TryFinally()
_lineno_parent(node, newnode, parent)
newnode.body = [self.visit(child, newnode) for child in node.body]
newnode.finalbody = [self.visit(n, newnode) for n in node.finalbody]
return newnode
def visit_tuple(self, node, parent):
"""visit a Tuple node by returning a fresh instance of it"""
newnode = new.Tuple()
_lineno_parent(node, newnode, parent)
newnode.elts = [self.visit(child, newnode) for child in node.elts]
return newnode
def visit_unaryop(self, node, parent):
"""visit a UnaryOp node by returning a fresh instance of it"""
newnode = new.UnaryOp()
_lineno_parent(node, newnode, parent)
newnode.operand = self.visit(node.operand, newnode)
newnode.op = _UNARY_OP_CLASSES[node.op.__class__]
return newnode
def visit_while(self, node, parent):
"""visit a While node by returning a fresh instance of it"""
newnode = new.While()
_lineno_parent(node, newnode, parent)
newnode.test = self.visit(node.test, newnode)
newnode.body = [self.visit(child, newnode) for child in node.body]
newnode.orelse = [self.visit(child, newnode) for child in node.orelse]
return newnode
def visit_with(self, node, parent):
newnode = new.With()
_lineno_parent(node, newnode, parent)
expr = self.visit(node.context_expr, newnode)
self.asscontext = "Ass"
if node.optional_vars is not None:
vars = self.visit(node.optional_vars, newnode)
else:
vars = None
self.asscontext = None
newnode.items = [(expr, vars)]
newnode.body = [self.visit(child, newnode) for child in node.body]
return newnode
def visit_yield(self, node, parent):
"""visit a Yield node by returning a fresh instance of it"""
return _create_yield_node(node, parent, self, new.Yield)
class TreeRebuilder3k(TreeRebuilder):
"""extend and overwrite TreeRebuilder for python3k"""
def visit_arg(self, node, parent):
"""visit a arg node by returning a fresh AssName instance"""
# the <arg> node is coming from py>=3.0, but we use AssName in py2.x
# XXX or we should instead introduce a Arg node in astroid ?
return self.visit_assname(node, parent, node.arg)
def visit_nameconstant(self, node, parent):
# in Python 3.4 we have NameConstant for True / False / None
newnode = new.Const(node.value)
_set_infos(node, newnode, parent)
return newnode
def visit_arguments(self, node, parent):
newnode = super(TreeRebuilder3k, self).visit_arguments(node, parent)
self.asscontext = "Ass"
newnode.kwonlyargs = [self.visit(child, newnode) for child in node.kwonlyargs]
self.asscontext = None
newnode.kw_defaults = [self.visit(child, newnode) if child else None for child in node.kw_defaults]
newnode.annotations = [
self.visit(arg.annotation, newnode) if arg.annotation else None
for arg in node.args]
return newnode
def visit_excepthandler(self, node, parent):
"""visit an ExceptHandler node by returning a fresh instance of it"""
newnode = new.ExceptHandler()
_lineno_parent(node, newnode, parent)
if node.type is not None:
newnode.type = self.visit(node.type, newnode)
if node.name is not None:
newnode.name = self.visit_assname(node, newnode, node.name)
newnode.body = [self.visit(child, newnode) for child in node.body]
return newnode
def visit_nonlocal(self, node, parent):
"""visit a Nonlocal node and return a new instance of it"""
newnode = new.Nonlocal(node.names)
_set_infos(node, newnode, parent)
return newnode
def visit_raise(self, node, parent):
"""visit a Raise node by returning a fresh instance of it"""
newnode = new.Raise()
_lineno_parent(node, newnode, parent)
# no traceback; anyway it is not used in Pylint
if node.exc is not None:
newnode.exc = self.visit(node.exc, newnode)
if node.cause is not None:
newnode.cause = self.visit(node.cause, newnode)
return newnode
def visit_starred(self, node, parent):
"""visit a Starred node and return a new instance of it"""
newnode = new.Starred()
_lineno_parent(node, newnode, parent)
newnode.value = self.visit(node.value, newnode)
return newnode
def visit_try(self, node, parent):
        # python 3.3 introduced a new Try node replacing the TryFinally/TryExcept nodes
if node.finalbody:
newnode = new.TryFinally()
_lineno_parent(node, newnode, parent)
newnode.finalbody = [self.visit(n, newnode) for n in node.finalbody]
if node.handlers:
excnode = new.TryExcept()
_lineno_parent(node, excnode, newnode)
excnode.body = [self.visit(child, excnode) for child in node.body]
excnode.handlers = [self.visit(child, excnode) for child in node.handlers]
excnode.orelse = [self.visit(child, excnode) for child in node.orelse]
newnode.body = [excnode]
else:
newnode.body = [self.visit(child, newnode) for child in node.body]
elif node.handlers:
newnode = new.TryExcept()
_lineno_parent(node, newnode, parent)
newnode.body = [self.visit(child, newnode) for child in node.body]
newnode.handlers = [self.visit(child, newnode) for child in node.handlers]
newnode.orelse = [self.visit(child, newnode) for child in node.orelse]
return newnode
def visit_with(self, node, parent):
if 'items' not in node._fields:
# python < 3.3
return super(TreeRebuilder3k, self).visit_with(node, parent)
newnode = new.With()
_lineno_parent(node, newnode, parent)
def visit_child(child):
expr = self.visit(child.context_expr, newnode)
self.asscontext = 'Ass'
if child.optional_vars:
var = self.visit(child.optional_vars, newnode)
else:
var = None
self.asscontext = None
return expr, var
newnode.items = [visit_child(child)
for child in node.items]
newnode.body = [self.visit(child, newnode) for child in node.body]
return newnode
def visit_yieldfrom(self, node, parent):
return _create_yield_node(node, parent, self, new.YieldFrom)
def visit_class(self, node, parent):
newnode = super(TreeRebuilder3k, self).visit_class(node, parent)
newnode._newstyle = True
for keyword in node.keywords:
if keyword.arg == 'metaclass':
newnode._metaclass = self.visit(keyword, newnode).value
break
return newnode
if sys.version_info >= (3, 0):
TreeRebuilder = TreeRebuilder3k
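# Hedged usage sketch (illustrative only, not part of the original module;
# AstroidManager and its .transform attribute are assumptions about this
# era's astroid API, which normally drives the rebuilder itself):
#
#   import _ast
#   from astroid.manager import AstroidManager
#   tree = compile('x = 1', '<string>', 'exec', _ast.PyCF_ONLY_AST)
#   rebuilder = TreeRebuilder(AstroidManager())
#   module = rebuilder.visit_module(tree, modname='mod',
#                                   modpath='<string>', package=False)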
| JetChars/vim | vim/bundle/python-mode/pymode/libs/astroid/rebuilder.py | Python | apache-2.0 | 37,747 | ["VisIt"] | ef2264458e0685db04fc53e148babbc324523863430398022a4256fb06be5cac |
"""
Dict with the emojis of osm tyles
"""
typeemoji = {
'aerialway:cable_car': '\xF0\x9F\x9A\xA1',
'aerialway:station': '\xF0\x9F\x9A\xA1',
'aeroway:aerodrome': '\xE2\x9C\x88',
'aeroway:terminal': '\xE2\x9C\x88',
'amenity:ambulance_station': '\xF0\x9F\x9A\x91',
'amenity:atm': '\xF0\x9F\x92\xB3',
'amenity:bank': '\xF0\x9F\x92\xB0',
'amenity:bar': '\xF0\x9F\x8D\xB8',
'amenity:biergarten': '\xF0\x9F\x8D\xBA',
'amenity:brothel': '\xF0\x9F\x91\xAF',
'amenity:cafe': '\xE2\x98\x95',
'amenity:casino': '\xE2\x99\xA0',
'amenity:cinema': '\xF0\x9F\x8E\xAC',
'amenity:college': '\xF0\x9F\x8E\x93',
'amenity:crematorium': '\xE2\x9A\xB1',
'amenity:drinking_water': '\xF0\x9F\x9A\xB0',
'amenity:fast_food': '\xF0\x9F\x8D\x94',
'amenity:fire_station': '\xF0\x9F\x9A\x92',
'amenity:fountain': '\xE2\x9B\xB2',
'amenity:fuel': '\xE2\x9B\xBD',
'amenity:hospital': '\xF0\x9F\x8F\xA5',
'amenity:hotel': '\xF0\x9F\x8F\xA8',
'amenity:ice_cream': '\xF0\x9F\x8D\xA6',
'amenity:kindergarten': '\xF0\x9F\x91\xB6',
'amenity:karaoke_box': '\xF0\x9F\x8E\xA4',
'amenity:library': '\xF0\x9F\x93\x96',
'amenity:love_hotel': '\xF0\x9F\x8F\xA9',
'amenity:place_of_worship': '\xF0\x9F\x9B\x90',
'amenity:pharmacy': '\xF0\x9F\x92\x8A',
'amenity:police': '\xF0\x9F\x9A\x93',
'amenity:pub': '\xF0\x9F\x8D\xBA',
'amenity:recycling': '\xE2\x99\xBB',
'amenity:restaurant': '\xF0\x9F\x8D\xB4',
'amenity:sauna': '\xE2\x99\xA8',
'amenity:school': '\xF0\x9F\x8E\x92',
'amenity:stripclub': '\xF0\x9F\x91\xAF',
'amenity:studio': '\xF0\x9F\x8E\x99',
'amenity:swimming_pool': '\xF0\x9F\x8F\x8A',
'amenity:taxi': '\xF0\x9F\x9A\x95',
'amenity:telephone': '\xF0\x9F\x93\x9E',
'amenity:theatre': '\xF0\x9F\x8E\xAD',
'amenity:toilets': '\xF0\x9F\x9A\xBB',
'amenity:university': '\xF0\x9F\x8E\x93',
'building:church': '\xE2\x9B\xAA',
'building:mosque': '\xF0\x9F\x95\x8C',
'building:synagogue': '\xF0\x9F\x95\x8D',
'building:stadium': '\xF0\x9F\x8F\x9F',
'building:temple': '\xF0\x9F\x8F\x9B',
'building:train_station': '\xF0\x9F\x9A\x89',
'craft:beekeeper': '\xF0\x9F\x90\x9D',
'cuisine:pasta': '\xF0\x9F\x8D\x9D',
'cuisine:pizza': '\xF0\x9F\x8D\x95',
'cuisine:sushi': '\xF0\x9F\x8D\xA3',
'emergency:ambulance_station': '\xF0\x9F\x9A\x91',
'emergency:defibrillator': '\xF0\x9F\x92\x94',
'emergency:phone': '\xF0\x9F\x86\x98',
    'emergency:assembly_point': '\xF0\x9F\x8E\xAF',
'highway:bridleway': '\xE3\x80\xB0 \xF0\x9F\x90\x8E',
'highway:bus_stop': '\xF0\x9F\x9A\x8C',
'highway:construction': '\xE3\x80\xB0 \xF0\x9F\x9A\xA7',
'highway:cycleway': '\xE3\x80\xB0 \xF0\x9F\x9A\xB4',
'highway:footway': '\xE3\x80\xB0 \xF0\x9F\x9A\xB6',
'highway:living_street': '\xE3\x80\xB0 \xF0\x9F\x8F\xA0',
'highway:motorway': '\xE3\x80\xB0 \xF0\x9F\x9A\x97',
'highway:path': '\xE3\x80\xB0 \xF0\x9F\x9A\xB6',
'highway:pedestrian': '\xE3\x80\xB0 \xF0\x9F\x8F\xA0',
'highway:primary': '\xE3\x80\xB0 \xF0\x9F\x9A\x9B',
'highway:raceway': '\xE3\x80\xB0 \xF0\x9F\x8F\x81',
'highway:residential': '\xE3\x80\xB0 \xF0\x9F\x8F\xA0',
'highway:road': '\xE3\x80\xB0 \xE2\x9D\x93',
'highway:secondary': '\xE3\x80\xB0 \xF0\x9F\x9A\x9B',
'highway:tertiary': '\xE3\x80\xB0 \xF0\x9F\x9A\x9B',
'highway:track': '\xE3\x80\xB0 \xF0\x9F\x9A\x9C',
'highway:trunk': '\xE3\x80\xB0 \xF0\x9F\x9A\x97',
'highway:unclassified': '\xE3\x80\xB0 \xE2\x9D\x93',
'historic:castle': '\xF0\x9F\x8F\xB0',
'historic:monument': '\xF0\x9F\x97\xBD',
'landuse:cemetery': '\xE2\x9A\xB0',
'landuse:plant_nursery': '\xF0\x9F\x8C\xB1',
'leisure:bowling_alley': '\xF0\x9F\x8E\xB3',
'leisure:golf_course': '\xE2\x9B\xB3',
'leisure:swimming_pool': '\xF0\x9F\x8F\x8A',
'man_made:works': '\xF0\x9F\x8F\xAD',
'natural:peak': '\xF0\x9F\x97\xBB',
'natural:volcano': '\xF0\x9F\x8C\x8B',
'place:city': '\xF0\x9F\x8C\x86',
'place:ocean': '\xF0\x9F\x8C\x8A',
'place:sea': '\xF0\x9F\x8C\x8A',
'place:town': '\xF0\x9F\x8F\x98',
'place:village': '\xF0\x9F\x8F\x98',
'railway:station': '\xF0\x9F\x9A\x89',
'railway:subway': '\xF0\x9F\x9A\x87',
'railway:subway_entrance': '\xF0\x9F\x9A\x87',
'railway:tram': '\xF0\x9F\x9A\x83',
'route:piste': '\xF0\x9F\x8E\xBF',
'route:subway': '\xF0\x9F\x9A\x87',
'shop:art': '\xF0\x9F\x8E\xA8',
'shop:bag': '\xF0\x9F\x91\x9C',
'shop:bakery': '\xF0\x9F\x8D\x9E',
'shop:baby_goods': '\xF0\x9F\x8D\xBC',
'shop:books': '\xF0\x9F\x93\x9A',
'shop:butcher': '\xF0\x9F\x8D\x97',
'shop:cheese': '\xF0\x9F\xA7\x80',
'shop:chocolate': '\xF0\x9F\x8D\xAB',
'shop:clothes': '\xF0\x9F\x91\x97',
'shop:coffee': '\xE2\x98\x95',
'shop:computer': '\xF0\x9F\x92\xBB',
'shop:confectionary': '\xF0\x9F\x8D\xB0',
'shop:cosmetics': '\xF0\x9F\x92\x85',
'shop:doityourself': '\xF0\x9F\x94\xA7',
'shop:electronics': '\xF0\x9F\x93\xBA',
'shop:erotic': '\xF0\x9F\x92\x8B',
'shop:garden_centre': '\xF0\x9F\x8C\xB1',
'shop:gift': '\xF0\x9F\x8E\x81',
'shop:fishing': '\xF0\x9F\x8E\xA3',
'shop:florist': '\xF0\x9F\x92\x90',
'shop:greengrocer': '\xF0\x9F\x8D\x89',
'shop:hairdresser': '\xF0\x9F\x92\x87',
'shop:hifi': '\xF0\x9F\x94\x8A',
'shop:ice_cream': '\xF0\x9F\x8D\xA6',
'shop:jewelry': '\xF0\x9F\x92\x8D',
'shop:locksmith': '\xF0\x9F\x94\x91',
'shop:mobile_phone': '\xF0\x9F\x93\xB1',
'shop:music': '\xF0\x9F\x92\xBF',
'shop:musical_instrument': '\xF0\x9F\x8E\xB8',
'shop:newsagent': '\xF0\x9F\x93\xB0',
'shop:optician': '\xF0\x9F\x91\x93',
'shop:pastry': '\xF0\x9F\x8D\xAA',
'shop:photo': '\xF0\x9F\x93\xB7',
'shop:seafood': '\xF0\x9F\x90\x9F',
'shop:shoes': '\xF0\x9F\x91\x9E',
'shop:sports': '\xE2\x9A\xBD',
'shop:swimming_pool': '\xF0\x9F\x8F\x8A',
'shop:ticket': '\xF0\x9F\x8E\xAB',
'shop:tobacco': '\xF0\x9F\x9A\xAC',
'shop:video': '\xF0\x9F\x93\xBC',
'shop:video_games': '\xF0\x9F\x8E\xAE',
'shop:watches': '\xE2\x8C\x9A',
'shop:wine': '\xF0\x9F\x8D\xB7',
'sport:american_football': '\xF0\x9F\x8F\x88',
'sport:9pin': '\xF0\x9F\x8E\xB3',
'sport:10pin': '\xF0\x9F\x8E\xB3',
'sport:archery': '\xF0\x9F\x8F\xB9',
'sport:badminton': '\xF0\x9F\x8F\xB8',
'sport:baseball': '\xE2\x9A\xBE',
'sport:basketball': '\xF0\x9F\x8F\x80',
'sport:billiards': '\xF0\x9F\x8E\xB1',
'sport:cricket': '\xF0\x9F\x8F\x8F',
'sport:cycling': '\xF0\x9F\x9A\xB4',
'sport:darts': '\xF0\x9F\x8E\xAF',
'sport:equestrian': '\xF0\x9F\x8F\x87',
'sport:field_hockey': '\xF0\x9F\x8F\x91',
'sport:golf': '\xF0\x9F\x8F\x8C',
'sport:gymnastics': '\xF0\x9F\x8F\x8B',
'sport:horse_racing': '\xF0\x9F\x8F\x87',
'sport:ice_hockey': '\xF0\x9F\x8F\x92',
'sport:ice_skating': '\xE2\x9B\xB8',
'sport:rugby_league': '\xF0\x9F\x8F\x89',
'sport:rugby_union': '\xF0\x9F\x8F\x89',
'sport:sailing': '\xE2\x9B\xB5',
'sport:soccer': '\xE2\x9A\xBD',
'sport:surfing': '\xF0\x9F\x8F\x84',
'sport:table_tennis': '\xF0\x9F\x8F\x93',
'sport:tennis': '\xF0\x9F\x8E\xBE',
'sport:volleyball': '\xF0\x9F\x8F\x90',
'studio:audio': '\xF0\x9F\x8E\xB9',
'studio:radio': '\xF0\x9F\x93\xBB',
'studio:television': '\xF0\x9F\x93\xBA',
'studio:video': '\xF0\x9F\x8E\xA5',
'tourism:aquarium': '\xF0\x9F\x90\xA0',
'tourism:camp_site': '\xE2\x9B\xBA',
'tourism:hotel': '\xF0\x9F\x8F\xA8',
'tourism:information': '\xE2\x84\xB9',
'tourism:zoo': '\xF0\x9F\x90\x8A',
'vending:cigarettes': '\xF0\x9F\x9A\xAC'
}
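# Hedged usage sketch (not in the original module): keys follow the
# '<osm key>:<osm value>' convention, so a lookup for a tagged element
# might be:
#
#   emoji = typeemoji.get('{}:{}'.format('amenity', 'cafe'), '')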
| Xevib/osmbot | bot/typeemoji.py | Python | gpl-3.0 | 7,711 | ["CASINO"] | 87858ae92b5da940cdb448c7954ba47e436af2cbff9ab91b249d6be3dd4d3454 |
#!/Library/Frameworks/Python.framework/Versions/2.7/bin/python
##!/mnt/lustre_fs/users/mjmcc/apps/python2.7/bin/python
# ----------------------------------------
# USAGE: python test_matrix_calc.py <pdb_file> <traj_loc> <start> <end>
# ----------------------------------------
# PREAMBLE:
import sys
import numpy as np
from numpy.linalg import *
import MDAnalysis
from distance_functions import *
# ----------------------------------------
# VARIABLE DECLARATION
pdb_file = sys.argv[1]
traj_loc = sys.argv[2]
start = int(sys.argv[3])
end = int(sys.argv[4])
zeros = np.zeros
square = np.square
sqrt = np.sqrt
flush = sys.stdout.flush
important = 'all'
# ----------------------------------------
# SUBROUTINES:
def ffprint(string):
print '%s' %(string)
flush()
# ----------------------------------------
# MAIN PROGRAM:
u = MDAnalysis.Universe(pdb_file)
u_important = u.select_atoms(important)
nRes = len(u_important.residues)
ffprint(nRes)
avg_matrix = zeros((nRes,nRes))
std_matrix = zeros((nRes,nRes))
nSteps = 0
while start < end+1:
ffprint('Loading trajectory %s' %(start))
u.load_new('test.mdcrd')
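    # NOTE: the trajectory file name is hard-coded for this test case;
    # traj_loc (sys.argv[2]) is parsed above but never used here.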
for ts in u.trajectory:
if ts.frame%10 == 0:
ffprint('Working on timestep %d of trajectory %d' %(ts.frame, start))
for i in range(nRes-1):
res0 = u_important.residues[i]
com0 = res0.center_of_mass()
for j in range(i+1,nRes):
res1 = u_important.residues[j]
com1 = res1.center_of_mass()
dist, dist2 = Euclid_distance(com0,com1,dist2=True)
avg_matrix[i,j] += dist
std_matrix[i,j] += dist2
nSteps += 1
start +=1
ffprint(nSteps)
avg_matrix /= nSteps
std_matrix /= nSteps
std_matrix = sqrt(std_matrix - square(avg_matrix))
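# NOTE: 'start' was incremented in the loop above, so the first trajectory
# number is re-read from sys.argv[3] when naming the output files.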
out1 = open('%03d.%03d.avg_distance_matrix.dat' %(int(sys.argv[3]),end),'w')
out2 = open('%03d.%03d.std_distance_matrix.dat' %(int(sys.argv[3]),end),'w')
for i in range(nRes):
for j in range(nRes):
out1.write('%10f ' %(avg_matrix[i,j]))
out2.write('%10f ' %(std_matrix[i,j]))
out1.write('\n')
out2.write('\n')
out1.close()
out2.close()
| rbdavid/Distance_matrix | Test_Case1/test_matrix_calc.py | Python | gpl-3.0 | 1,966 | ["MDAnalysis"] | 8158a74e7eb836347be632d862320a3334d4990f544f88b6f8744329cbcbd5dc |
from __future__ import print_function, division, absolute_import
import os
import re
from collections import defaultdict
from operator import itemgetter
import logging
import sys
from scipy.interpolate import InterpolatedUnivariateSpline as spline
from scipy.integrate import quad
from scipy.optimize import minimize_scalar, fmin
import matplotlib.pyplot as plt
import numpy as np
import emcee
import h5py
import pandas as pd
from george import kernels
import george
from kglib import fitters
from kglib.utils import StarData
from kglib.utils.HelperFunctions import mad, integral
from kglib.spectral_type import SpectralTypeRelations
def classify_filename(fname, type='bright'):
"""
Given a CCF filename, classify the star combination, temperature, metallicity, and vsini
:param fname:
:return:
"""
# First, remove any leading directories
fname = fname.split('/')[-1]
# Star combination
m1 = re.search('\.[0-9]+kps', fname)
stars = fname[:m1.start()]
star1 = stars.split('+')[0].replace('_', ' ')
star2 = stars.split('+')[1].split('_{}'.format(type))[0].replace('_', ' ')
# secondary star vsini
vsini = float(fname[m1.start() + 1:].split('kps')[0])
# Temperature
m2 = re.search('[0-9]+\.0K', fname)
temp = float(m2.group()[:-1])
# logg
m3 = re.search('K\+[0-9]\.[0-9]', fname)
logg = float(m3.group()[1:])
# metallicity
metal = float(fname.split(str(logg))[-1])
return star1, star2, vsini, temp, logg, metal
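# Hedged example (hypothetical filename, consistent with the regexes above):
#   classify_filename('HIP_1234+GJ_567_bright.10kps_5500.0K+4.5-0.5')
# returns ('HIP 1234', 'GJ 567', 10.0, 5500.0, 4.5, -0.5).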
def get_ccf_data(basedir, primary_name=None, secondary_name=None, vel_arr=np.arange(-900.0, 900.0, 0.1), type='bright'):
"""
Searches the given directory for CCF files, and classifies
by star, temperature, metallicity, and vsini
:param basedir: The directory to search for CCF files
:keyword primary_name: Optional keyword. If given, it will only get the requested primary star data
:keyword secondary_name: Same as primary_name, but only reads ccfs for the given secondary
:keyword vel_arr: The velocities to interpolate each ccf at
:return: pandas DataFrame
"""
if not basedir.endswith('/'):
basedir += '/'
all_files = ['{}{}'.format(basedir, f) for f in os.listdir(basedir) if type in f.lower()]
primary = []
secondary = []
vsini_values = []
temperature = []
gravity = []
metallicity = []
ccf = []
for fname in all_files:
star1, star2, vsini, temp, logg, metal = classify_filename(fname, type=type)
if primary_name is not None and star1.lower() != primary_name.lower():
continue
if secondary_name is not None and star2.lower() != secondary_name.lower():
continue
vel, corr = np.loadtxt(fname, unpack=True)
fcn = spline(vel, corr)
ccf.append(fcn(vel_arr))
primary.append(star1)
secondary.append(star2)
vsini_values.append(vsini)
temperature.append(temp)
gravity.append(logg)
metallicity.append(metal)
# Make a pandas dataframe with all this data
df = pd.DataFrame(data={'Primary': primary, 'Secondary': secondary, 'Temperature': temperature,
'vsini': vsini_values, 'logg': gravity, '[Fe/H]': metallicity, 'CCF': ccf})
return df
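# Hedged usage sketch (directory and star name hypothetical):
#   df = get_ccf_data('CCF_outputs/', primary_name='HIP 1234', type='bright')
# returns one row per matching CCF file, with the 'CCF' column holding the
# cross-correlation function interpolated onto vel_arr.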
def get_ccf_summary(hdf5_filename, vel_arr=np.arange(-900.0, 900.0, 0.1), excel_filename=None,
velocity='highest', addmode='simple', Tmin=3000, Tmax=7000, N_best=1, debug=False):
"""
Goes through the given HDF5 file, and finds the best set of parameters for each combination of primary/secondary star
:param hdf5_filename: The HDF5 file containing the CCF data
:keyword excel_filename: The filename of an MS excel file giving the velocity for each secondary star.
                             The data must be in the first sheet, and there must be columns labeled
'Star' and 'CCF RV'. Only used if velocity='excel'
:keyword velocity: The velocity to measure the CCF at. Options are:
- 'highest' (default): uses the maximum of the ccf
                         - value: A numeric type giving the velocity to use.
- 'excel': Search the filename excel_filename for the velocity of each secondary star
:keyword vel_arr: The velocities to interpolate each ccf at
:keyword addmode: The way the CCF orders were added while generating the ccfs
    :keyword debug: If True, print progress as the datasets are processed. Otherwise, work silently (this can take a while).
:keyword Tmin, Tmax: The minimum and maximum temperatures to include in the output.
:keyword N_best: Passed to find_best_pars()
:return: pandas DataFrame summarizing the best parameters.
This is the type of dataframe to give to the other function here
"""
if velocity.lower() == 'excel':
table = pd.read_excel(excel_filename, 0)
summary_dfs = []
with h5py.File(hdf5_filename, 'r') as f:
primaries = f.keys()
for p in primaries:
secondaries = f[p].keys()
for s in secondaries:
if addmode not in f[p][s].keys():
continue
logging.info('Primary: {}\tSecondary: {}'.format(p, s))
if velocity.lower() == 'excel':
try:
vel_max = table.loc[table.Star.str.lower().str.contains(s.strip().lower())]['CCF RV'].item()
except ValueError:
logging.warning('No entry found for star "{}" in table {}'.format(s, excel_filename))
continue
else:
vel_max = velocity
datasets = f[p][s][addmode].keys()
vsini_values = []
temperature = []
gravity = []
metallicity = []
ccf = []
for i, d in enumerate(datasets):
if debug:
sys.stdout.write('\r\t\tDataset {}/{}'.format(i+1, len(datasets)))
sys.stdout.flush()
ds = f[p][s][addmode][d]
if Tmin <= ds.attrs['T'] <= Tmax:
if ds.value.shape[0] == 2:
vel, corr = ds.value
elif 'velocity' in ds.attrs:
vel, corr = ds.attrs['velocity'], ds.value
else:
raise KeyError('Cannot find velocity information for dataset {}'.format(ds.name))
fcn = spline(vel, corr)
vsini_values.append(ds.attrs['vsini'])
temperature.append(ds.attrs['T'])
gravity.append(ds.attrs['logg'])
metallicity.append(ds.attrs['[Fe/H]'])
ccf.append(fcn(vel_arr))
data = pd.DataFrame(data={'Primary': [p]*len(ccf), 'Secondary': [s]*len(ccf),
'Temperature': temperature, 'vsini': vsini_values,
'logg': gravity, '[Fe/H]': metallicity, 'CCF': ccf})
data.drop_duplicates(subset=('Temperature', 'vsini', 'logg', '[Fe/H]', 'Primary', 'Secondary'),
inplace=True)
summary_dfs.append(find_best_pars(data, velocity=vel_max, vel_arr=vel_arr, N=N_best))
del data
return pd.concat(summary_dfs, ignore_index=True)
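# Hedged usage sketch (HDF5 filename hypothetical):
#   summary = get_ccf_summary('ccf_data.hdf5', velocity='highest',
#                             addmode='simple', N_best=1)
# yields one row per primary/secondary/temperature with the best vsini,
# logg, [Fe/H], rv, and detection significance, via find_best_pars() below.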
def find_best_pars(df, velocity='highest', vel_arr=np.arange(-900.0, 900.0, 0.1), N=1):
"""
Find the 'best-fit' parameters for each combination of primary and secondary star
:param df: the dataframe to search in
:keyword velocity: The velocity to measure the CCF at. The default is 'highest', and uses the maximum of the ccf
:keyword vel_arr: The velocities to interpolate each ccf at
:keyword N: The number of parameters to return
:return: a dataframe with keys of primary, secondary, and the parameters
"""
# Make sure N is odd
if N % 2 == 0:
logging.warn('N must be an odd number. Changing N from {} --> {}'.format(N, N + 1))
N += 1
# Get the names of the primary and secondary stars
primary_names = pd.unique(df.Primary)
secondary_names = pd.unique(df.Secondary)
# Find the ccf value at the given velocity
def val_fcn(ccf, idx=None, search_indices=None):
if idx is None:
if search_indices is None:
idx = np.argmax(ccf)
else:
idx = np.argmax(ccf[search_indices])
idx = search_indices[idx]
rv = vel_arr[idx]
sigma = np.std(ccf[np.abs(vel_arr - rv) > 200])
return ccf[idx], ccf[idx] / sigma, rv
if velocity == 'highest':
vals = df['CCF'].map(val_fcn)
df['ccf_max'] = vals.map(lambda l: l[0])
df['significance'] = vals.map(lambda l: l[1])
df['rv'] = vals.map(lambda l: l[2])
else:
# idx = np.argmin(np.abs(vel_arr - velocity))
idx = np.where(np.abs(vel_arr - velocity) <= 5)[0]
vals = df['CCF'].map(lambda c: val_fcn(c, search_indices=idx))
df['ccf_max'] = vals.map(lambda l: l[0])
df['significance'] = vals.map(lambda l: l[1])
df['rv'] = vals.map(lambda l: l[2])
#print(df[['Secondary', 'rv']])
# Find the best parameter for each combination
d = defaultdict(list)
groups = df.groupby(('Primary', 'Secondary'))
for group in groups.groups.keys():
primary = group[0]
secondary = group[1]
g = groups.get_group(group)
best = g.loc[g.ccf_max == g.ccf_max.max()]
T = best['Temperature'].item()
vsini = best['vsini'].item()
logg = best['logg'].item()
metal = best['[Fe/H]'].item()
rv = best['rv'].item()
Tmin = T - (N - 1) * 50
Tmax = T + (N - 1) * 50
for Ti in range(Tmin, Tmax + 1, 100):
good = g.loc[
(g['Temperature'] == Ti) & (g['vsini'] == vsini) & (g['logg'] == logg) & (g['[Fe/H]'] == metal)]
if len(good) == 0:
logging.warn('No matches for T = {} with primary/secondary = {}/{}!'.format(Ti, primary, secondary))
d['Primary'].append(primary)
d['Secondary'].append(secondary)
d['Temperature'].append(Ti)
d['vsini'].append(vsini)
d['logg'].append(logg)
d['[Fe/H]'].append(metal)
d['rv'].append(rv)
d['CCF'].append(np.nan)
d['significance'].append(np.nan)
continue
# print len(good)
best = good.loc[good.ccf_max == good.ccf_max.max()]
#best = good
if len(best) != 1 or any(np.isnan(best['CCF'].item())):
print(best)
print(good)
print(good.ccf_max)
print(good.ccf_max.max())
continue
# Save the best parameters for this temperature
d['Primary'].append(primary)
d['Secondary'].append(secondary)
d['Temperature'].append(best['Temperature'].item())
d['vsini'].append(best['vsini'].item())
d['logg'].append(best['logg'].item())
d['[Fe/H]'].append(best['[Fe/H]'].item())
idx = np.argmin(np.abs(vel_arr - rv))
d['rv'].append(rv)
d['CCF'].append(best['CCF'].item()[idx])
# d['rv'].append(best['rv'].item())
#d['CCF'].append(best.ccf_max.item())
# Measure the detection significance
std = mad(best.CCF.item())
mean = np.median(best.CCF.item())
d['significance'].append((d['CCF'][-1] - mean) / std)
return pd.DataFrame(data=d)
def get_detected_objects(df, tol=1.0, debug=False):
"""
Takes a summary dataframe with RV information. Finds the median rv for each star,
and removes objects that are more than 'tol' km/s from the median value
:param df: A summary dataframe, such as created by get_ccf_summary or find_best_pars
:param tol: The tolerance, in km/s, to accept an observation as detected
:return: a dataframe containing only detected companions
"""
secondary_names = pd.unique(df.Secondary)
secondary_to_rv = defaultdict(float)
for secondary in secondary_names:
rv = df.loc[df.Secondary == secondary]['rv'].median()
secondary_to_rv[secondary] = rv
if debug:
for secondary in sorted(secondary_to_rv.keys()):
print ('RV for {}: {:.2f} km/s'.format(secondary, secondary_to_rv[secondary]))
keys = df.Secondary.values
good = df.loc[abs(df.rv.values - np.array(itemgetter(*keys)(secondary_to_rv))) < tol]
return good
def get_detected_objects_new(df, siglim=5, Terr_lim=3, Toffset=2000):
"""
Get a dataframe with only the detected objects.
:param df: A DataFrame such as one output by get_ccf_summary with N > 1
:param siglim: The minimum significance to count as detected
:param Terr_lim: The maximum number of standard deviations of (Measured - Actual) to allow for detected objects
:param Toffset: The absolute difference to allow between the true and measured temperature.
:return: A dataframe similar to df, but with fewer rows
"""
S = get_initial_uncertainty(df)
S['Tdiff'] = S.Tmeas - S.Tactual
mean, std = S.Tdiff.mean(), S.Tdiff.std()
detected = S.loc[(S.significance > siglim) & (S.Tdiff - mean < Terr_lim * std) & (abs(S.Tdiff) < Toffset)]
return pd.merge(detected[['Primary', 'Secondary']], df, on=['Primary', 'Secondary'], how='left')
def add_actual_temperature(df, method='excel', filename='SecondaryStar_Temperatures.xls'):
"""
Add the actual temperature to a given summary dataframe
:param df: The dataframe to which we will add the actual secondary star temperature
:keyword method: How to get the actual temperature. Options are:
- 'spt': Use main-sequence relationships to go from spectral type --> temperature
- 'excel': Use tabulated data, available in the file 'SecondaryStar_Temperatures.xls'
:keyword filename: The filename of the excel spreadsheet containing the literature temperatures.
Needs to have the right format! Ignored if method='spt'
    :return: None. The dataframe is modified in place, gaining 'Tactual' and 'Tact_err' columns.
"""
# First, get a list of the secondary stars in the data
secondary_names = pd.unique(df.Secondary)
secondary_to_temperature = defaultdict(float)
secondary_to_error = defaultdict(float)
if method.lower() == 'spt':
MS = SpectralTypeRelations.MainSequence()
for secondary in secondary_names:
star_data = StarData.GetData(secondary)
spt = star_data.spectype[0] + re.search('[0-9]\.*[0-9]*', star_data.spectype).group()
T_sec = MS.Interpolate(MS.Temperature, spt)
secondary_to_temperature[secondary] = T_sec
elif method.lower() == 'excel':
table = pd.read_excel(filename, 0)
for secondary in secondary_names:
T_sec = table.loc[table.Star.str.lower().str.contains(secondary.strip().lower())]['Literature_Temp'].item()
T_error = table.loc[table.Star.str.lower().str.contains(secondary.strip().lower())][
'Literature_error'].item()
secondary_to_temperature[secondary] = T_sec
secondary_to_error[secondary] = T_error
df['Tactual'] = df['Secondary'].map(lambda s: secondary_to_temperature[s])
df['Tact_err'] = df['Secondary'].map(lambda s: secondary_to_error[s])
return
def make_gaussian_process_samples(df):
"""
Make a gaussian process fitting the Tactual-Tmeasured relationship
:param df: pandas DataFrame with columns 'Temperature' (with the measured temperature)
and 'Tactual' (for the actual temperature)
:return: emcee sampler instance
"""
Tmeasured, Tactual, error, lit_err = get_values(df)
for i, e in enumerate(error):
if e < 1:
e = fit_sigma(df, i)
error[i] = np.sqrt(e**2 + lit_err[i]**2)
for Tm, Ta, e in zip(Tmeasured, Tactual, error):
print(Tm, Ta, e)
plt.figure(1)
limits = [3000, 7000]
plt.errorbar(Tmeasured, Tactual, yerr=error, fmt='.k', capsize=0)
plt.plot(limits, limits, 'r--')
plt.xlabel('Measured Temperature')
plt.ylabel('Actual Temperature')
plt.xlim(limits)
plt.ylim(limits)
# Define some functions to use in the GP fit
def model(pars, T):
#polypars = pars[2:]
#return np.poly1d(polypars)(T)
return T
def lnlike(pars, Tact, Tmeas, Terr):
a, tau = np.exp(pars[:2])
gp = george.GP(a * kernels.ExpSquaredKernel(tau))
gp.compute(Tmeas, Terr)
return gp.lnlikelihood(Tact - model(pars, Tmeas))
def lnprior(pars):
lna, lntau = pars[:2]
polypars = pars[2:]
if -20 < lna < 20 and 12 < lntau < 20:
return 0.0
return -np.inf
def lnprob(pars, x, y, yerr):
lp = lnprior(pars)
return lp + lnlike(pars, x, y, yerr) if np.isfinite(lp) else -np.inf
# Set up the emcee fitter
initial = np.array([0, 14])#, 1.0, 0.0])
ndim = len(initial)
nwalkers = 100
p0 = [np.array(initial) + 1e-8 * np.random.randn(ndim) for i in xrange(nwalkers)]
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=(Tactual, Tmeasured, error))
print('Running first burn-in')
p1, lnp, _ = sampler.run_mcmc(p0, 500)
sampler.reset()
print("Running second burn-in...")
p_best = p1[np.argmax(lnp)]
p2 = [p_best + 1e-8 * np.random.randn(ndim) for i in xrange(nwalkers)]
p3, _, _ = sampler.run_mcmc(p2, 250)
sampler.reset()
print("Running production...")
sampler.run_mcmc(p3, 1000)
    # We now need to increase the spread of the posterior distribution so that it
    # encompasses the right number of data points. This is because the way we have
    # been treating error bars here is kind of funky...
# First, generate a posterior distribution of Tactual for every possible Tmeasured
print('Generating posterior samples at all temperatures...')
N = 10000 # This is 1/10th of the total number of samples!
idx = np.argsort(-sampler.lnprobability.flatten())[:N] # Get N 'best' curves
par_vals = sampler.flatchain[idx]
Tvalues = np.arange(3000, 6900, 100)
gp_posterior = []
for pars in par_vals:
a, tau = np.exp(pars[:2])
gp = george.GP(a * kernels.ExpSquaredKernel(tau))
gp.compute(Tmeasured, error)
s = gp.sample_conditional(Tactual - model(pars, Tmeasured), Tvalues) + model(pars, Tvalues)
gp_posterior.append(s)
# Get the median and spread in the pdf
gp_posterior = np.array(gp_posterior)
medians = np.median(gp_posterior, axis=0)
sigma_pdf = np.std(gp_posterior, axis=0)
# Correct the data and get the residual spread
df['Corrected_Temperature'] = df['Temperature'].map(lambda T: medians[np.argmin(abs(T - Tvalues))])
sigma_spread = np.std(df.Tactual - df.Corrected_Temperature)
# Increase the spread in the pdf to reflect the residual spread
ratio = np.maximum(np.ones(sigma_pdf.size), sigma_spread / sigma_pdf)
gp_corrected = (gp_posterior - medians) * ratio + medians
# Make confidence intervals
l, m, h = np.percentile(gp_corrected, [16.0, 50.0, 84.0], axis=0)
    conf = pd.DataFrame(data={'Measured Temperature': Tvalues, 'Actual Temperature': m,
                              'Lower Bound': l, 'Upper Bound': h})
conf.to_csv('Confidence_Intervals.csv', index=False)
# Finally, plot a bunch of the fits
print("Plotting...")
N = 300
Tvalues = np.arange(3000, 7000, 20)
idx = np.argsort(-sampler.lnprobability.flatten())[:N] # Get N 'best' curves
par_vals = sampler.flatchain[idx]
plot_posterior = []
for i, pars in enumerate(par_vals):
a, tau = np.exp(pars[:2])
gp = george.GP(a * kernels.ExpSquaredKernel(tau))
gp.compute(Tmeasured, error)
s = gp.sample_conditional(Tactual - model(pars, Tmeasured), Tvalues) + model(pars, Tvalues)
plot_posterior.append(s)
plot_posterior = np.array(plot_posterior)
medians = np.median(plot_posterior, axis=0)
sigma_pdf = np.std(plot_posterior, axis=0)
# Increase the spread in the pdf to reflect the residual spread
ratio = np.maximum(np.ones(sigma_pdf.size), sigma_spread / sigma_pdf)
plot_posterior = (plot_posterior - medians) * ratio + medians
plt.plot(Tvalues, plot_posterior.T, 'b-', alpha=0.05)
plt.draw()
plt.savefig('Temperature_Correspondence.pdf')
return sampler, gp_corrected
def check_posterior(df, posterior, Tvalues=np.arange(3000, 6900, 100)):
"""
    Checks the posterior samples: Are at least 95% of the measurements within the
    90% (5th-95th percentile) prediction interval?
:param df: The summary dataframe
:param posterior: The MCMC predicted values
:param Tvalues: The measured temperatures the posterior was made with
:return: boolean, as well as some warning messages if applicable
"""
    # First, make the 90% confidence interval (5th-95th percentiles)
l, m, h = np.percentile(posterior, [5.0, 50.0, 95.0], axis=0)
Ntot = [] # The total number of observations with the given measured temperature
Nacc = [] # The number that have actual temperatures within the confidence interval
g = df.groupby('Temperature')
for i, T in enumerate(Tvalues):
if T in g.groups.keys():
Ta = g.get_group(T)['Tactual']
low, high = l[i], h[i]
Ntot.append(len(Ta))
Nacc.append(len(Ta.loc[(Ta >= low) & (Ta <= high)]))
p = float(Nacc[-1]) / float(Ntot[-1])
if p < 0.95:
                logging.warn('Only {}/{} of the samples ({:.2f}%) were accepted '
                             'for T = {} K'.format(Nacc[-1], Ntot[-1], p * 100, T))
print(low, high)
print(sorted(Ta))
else:
Ntot.append(0)
Nacc.append(0)
p = float(sum(Nacc)) / float(sum(Ntot))
if p < 0.95:
logging.warn('Only {:.2f}% of the total samples were accepted!'.format(p * 100))
return False
return True
def get_Tmeas(d, include_actual=True):
d = d.dropna(subset=['CCF'])
corr = d.CCF.values
corr += 1.0 - corr.max()
T = d.Temperature.values
w = corr / corr.sum()
Tmeas = np.average(T, weights=w)
var_T = np.average((T - Tmeas) ** 2, weights=w)
# Get factor to go from biased --> unbiased variance
V1 = np.sum(w)
V2 = np.sum(w ** 2)
f = V1 / (V1 - V2 / V1)
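    # (This is the standard correction from biased to unbiased weighted variance
    # for "reliability" weights: f = V1 / (V1 - V2/V1).)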
# Finally, get the peak significance
sig = d['significance'].max()
if include_actual:
return pd.DataFrame(data={'[Fe/H]': d['[Fe/H]'].values[0], 'logg': d.logg.values[0],
'rv': d.rv.values[0], 'vsini': d.vsini.values[0],
'Tactual': d.Tactual.values[0], 'Tact_err': d.Tact_err.values[0],
'Tmeas': Tmeas, 'Tmeas_err': np.sqrt(f * var_T), 'significance': sig}, index=[0])
else:
return pd.DataFrame(data={'[Fe/H]': d['[Fe/H]'].values[0], 'logg': d.logg.values[0],
'rv': d.rv.values[0], 'vsini': d.vsini.values[0],
'Tmeas': Tmeas, 'Tmeas_err': np.sqrt(f * var_T), 'significance': sig}, index=[0])
def get_initial_uncertainty(df):
"""
Take a dataframe such as one output from get_ccf_summary with N > 1, and get the temperature and initial
estimate for the temperature uncertainty.
    :param df: dataframe such as one output by get_ccf_summary with N > 1
    :return: summary dataframe with one row per (Primary, Secondary) pair, giving the
             CCF-weighted mean temperature and an initial uncertainty estimate
"""
# Get the measured temperature as the weighted average of the temperatures (weight by normalized CCF value)
    summary = df.groupby(['Primary', 'Secondary']).apply(get_Tmeas).reset_index()
return summary
class GPFitter(fitters.Bayesian_LS):
def _lnlike(self, pars):
"""
likelihood function. This uses the class variables for x,y,xerr, and yerr, as well as the 'model' instance.
"""
y_pred = self.x
a, tau = np.exp(pars[:2])
gp = george.GP(a * kernels.ExpSquaredKernel(tau))
gp.compute(self.x, self.yerr)
return gp.lnlikelihood(self.y - y_pred)
def lnprior(self, pars):
lna, lntau = pars[:2]
polypars = pars[2:]
if -20 < lna < 30 and 0 < lntau < 30:
return 0.0
return -np.inf
def guess_fit_parameters(self):
return [0, 10]
def predict(self, x, N=100, highest=False):
"""
Predict the y value for the given x values.
"""
if self.sampler is None:
logging.warn('Need to run the fit method before predict!')
return
# Find the N best walkers
if N == 'all':
N = self.sampler.flatchain.shape[0]
else:
N = min(N, self.sampler.flatchain.shape[0])
if highest:
indices = np.argsort(self.sampler.flatlnprobability)[:N]
pars = self.sampler.flatchain[indices]
else:
pars = self.sampler.flatchain[:N]
yvals = []
for i, p in enumerate(pars):
logging.info('Generating GP samples for iteration {}/{}'.format(i+1, len(pars)))
a, tau = np.exp(p[:2])
gp = george.GP(a * kernels.ExpSquaredKernel(tau))
gp.compute(self.x, self.yerr)
s = gp.sample_conditional(self.y - self.x, x) + x
yvals.append(s)
return np.array(yvals)
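# Minimal usage sketch for GPFitter (assumes the fitters.Bayesian_LS interface
# used elsewhere in this file: an (x, y, yerr) constructor and a fit() method):
#     gp = GPFitter(final.Tactual, final.Tmeas, final.Tmeas_err)
#     gp.fit(nwalkers=500, n_burn=200, n_prod=500)
#     samples = gp.predict(np.linspace(3000, 7000, 50), N=300)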
class ModifiedPolynomial(fitters.Bayesian_LS):
def model(self, p, x):
s, m = 10**p[:2]
polypars = p[2:]
return np.poly1d(polypars)(x) * np.exp(-s*(x-m)**2) + x
def guess_fit_parameters(self, fitorder=1):
polypars = np.zeros(fitorder + 1)
polypars[-2] = 1.0
pars = [-7, 3.5]
pars.extend(polypars)
min_func = lambda p, xi, yi, yerri: np.sum((yi - self.model(p, xi)) ** 2 / yerri ** 2)
best_pars = fmin(min_func, x0=pars, args=(self.x, self.y, self.yerr))
self.guess_pars = best_pars
return best_pars
def fit_act2tmeas(df, nwalkers=500, n_burn=200, n_prod=500, fitorder=1, fitter_class=None):
"""
Fit a function to go from actual to measured temperature. Use Bayes' Theorem to get the reverse!
:param df: A pandas DataFrame such as one output by get_ccf_summary with N > 1
    :param fitorder: The order of the fit
    :param nwalkers, n_burn, n_prod: emcee sampler settings (number of walkers,
        burn-in steps, and production steps)
    :param fitter_class: optional fitters.Bayesian_LS subclass; defaults to ModifiedPolynomial
    :return: the fitter instance, with posterior samples in fitter.sampler
"""
# Get the measured temperature for each primary/secondary combination
summary = get_initial_uncertainty(df)
# Get the average and standard deviation of the measured temperature, for a given actual temperature.
def get_avg_T(df):
Tm = df.Tmeas.values
Tm_err = df.Tmeas_err.values
w = 1.0 / Tm_err ** 2
w /= w.sum()
T_avg = np.average(Tm, weights=w)
var_T = np.average((Tm - T_avg) ** 2, weights=w)
# Get factor to go from biased --> unbiased variance
V1 = np.sum(w)
V2 = np.sum(w ** 2)
f = V1 / (V1 - V2 / V1)
# return Tmeas, np.sqrt(f*var_T)
return pd.DataFrame(data={'Tactual': df.Tactual.values[0], 'Tact_err': df.Tact_err.values[0],
'Tmeas': T_avg, 'Tmeas_err': np.sqrt(f * var_T)}, index=[0])
final = summary.groupby('Secondary').apply(get_avg_T).reset_index()
    # Don't let the error bars get smaller than 100 K
    final['Tmeas_err'] = np.maximum(final.Tmeas_err, 100.0)
# Save the measurement data
final[['Tactual', 'Tmeas', 'Tact_err', 'Tmeas_err']].to_csv('Systematics_Data.csv')
# Fit to a polynomial with a gaussian process noise model.
if fitter_class is None:
#fitter = GPFitter(final.Tactual, final.Tmeas, final.Tmeas_err)
fitter = ModifiedPolynomial(final.Tactual, final.Tmeas, final.Tmeas_err)
else:
fitter = fitter_class(final.Tactual, final.Tmeas, final.Tmeas_err)
fitter.fit(nwalkers=nwalkers, n_burn=n_burn, n_prod=n_prod, fitorder=fitorder)
par_samples = fitter.sampler.flatchain
# Plot
fig, ax = plt.subplots(1, 1)
fig.subplots_adjust(left=0.15, bottom=0.18)
ax.errorbar(final.Tactual, final.Tmeas, xerr=final.Tact_err, yerr=final.Tmeas_err, fmt='ko')
lim = [3000, 7000]
xplot = np.linspace(lim[0], lim[1], 100)
ypred = fitter.predict(xplot, N=300)
for i in range(ypred.shape[0]):
yplot = ypred[i]
ax.plot(xplot, yplot, 'b-', alpha=0.03)
ax.set_xlabel('Literature Temperature (K)')
ax.set_ylabel('Measured Temperature (K)')
ax.set_xlim(lim)
ax.set_ylim(lim)
ax.plot(lim, lim, 'r--')
plt.savefig('Tact2Tmeas.pdf')
# Now, plot the fractional difference vs Tactual
fig2, ax2 = plt.subplots(1, 1)
delta = (final.Tmeas - final.Tactual)/final.Tactual
delta_var = (final.Tmeas_err/final.Tactual)**2 + (final.Tmeas/final.Tactual**2 * final.Tact_err)**2
ax2.errorbar(final.Tactual, delta, xerr=final.Tact_err, yerr=np.sqrt(delta_var), fmt='ko')
for i in range(ypred.shape[0]):
#ypred = np.poly1d(par_samples[i])(xplot)
del_plot = (ypred[i] - xplot)/xplot
ax2.plot(xplot, del_plot, 'b-', alpha=0.03)
ax2.set_xlabel('Literature Temperature')
ax2.set_ylabel('Fractional Error')
lim = ax2.get_xlim()
ax2.plot(lim, [0, 0], 'r--')
plt.savefig('Tact2Tmeas_Residual.pdf')
plt.show()
return fitter
def get_actual_temperature(fitter, Tmeas, Tmeas_err, cache=None, ret_cache=None, summarize=True):
"""
Get the actual temperature from the measured temperature
:param fitter: a Bayesian_TLS instance which has already been fit
:param Tmeas: the measured temperature. Either a float or a numpy array with independent temperatures
:param Tmeas_err: uncertainty on the measured temperature. Same shape as Tmeas.
    :param cache: optional DataFrame of MCMC-predicted measured temperatures (one column
                  per actual temperature). Generated here if not given.
    :param ret_cache: if True, also return the cache. Defaults to True when the cache
                      is generated in this call.
    :param summarize: if True, return the median and 1-sigma bounds; otherwise return
                      the full temperature grid and posterior probabilities.
    :return: posterior samples for the actual temperature
"""
# First, build up a cache of the MCMC predicted measured temperatures for lots of actual temperatures
if cache is None:
logging.info('Generating cache...')
Ta_arr = np.arange(2000, 10000, 1.0)
Tmeas_pred = fitter.predict(Ta_arr, N=10000)
cache = pd.DataFrame(Tmeas_pred, columns=Ta_arr)
        ret_cache = True if ret_cache is None else ret_cache
del Tmeas_pred
# Get the probability of each value in the cache
    Tmeas = np.atleast_1d(Tmeas).astype(float)
    Tmeas_err = np.atleast_1d(Tmeas_err).astype(float)
def get_prob(Tm_pred, Tm, Tm_err):
return np.exp(-((Tm_pred - Tm) / Tm_err)**2)
probs = np.array([get_prob(cache.values, Tm, Tme) for Tm, Tme in zip(Tmeas, Tmeas_err)])
# Get the posterior probability distribution
tmp = np.mean(probs, axis=1)
tmp /= np.sum(tmp, axis=1)[:, np.newaxis]
P = np.prod(tmp, axis=0)
# Find the maximum and FWHM of the probabilities
#best_T = cache.columns.values[np.argmax(P)]
#roots = fwhm(cache.columns.values, P, k=0, ret_roots=True)
#h, l = max(roots), min(roots)
l, best_T, h = integral(cache.columns.values, P, [0.16, 0.5, 0.84], k=0)
print('$T = {}^{{+{}}}_{{-{}}}$'.format(best_T, h-best_T, best_T-l))
# Return the requested things.
if ret_cache:
if summarize:
return best_T, h - best_T, best_T - l, cache
return cache.columns.values, P, cache
if summarize:
return best_T, h - best_T, best_T - l
return cache.columns.values, P
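# Example (a sketch): correct a single measurement with an already-fit fitter.
# With the defaults, the cache is built internally and returned as well:
#     T, up_err, low_err, cache = get_actual_temperature(fitter, 5200.0, 150.0)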
def correct_measured_temperature(df, fitter, cache=None):
"""
Given a dataframe such as output by get_ccf_data (with N > 1), correct the temperatures to
account for the measurement bias.
:param df: A dataframe with the CCF data
:param fitter: A fitters.Bayesian_TLS instance that contains fitted parameters for the measurement bias
:param cache: A pandas dataframe that gives MCMC samples of the temperature measurement
for various actual temperatures.
    :return: summary dataframe with bias-corrected temperatures ('Corrected Temperature')
             and asymmetric uncertainties ('T_uperr', 'T_lowerr')
"""
# First, get the measurement values and estimated uncertainty
data = get_initial_uncertainty(df)
# Make a cache for get_actual_temperature if it is not given.
if cache is None:
logging.info('Generating cache...')
Ta_arr = np.arange(2000, 10000, 1.0)
Tmeas_pred = fitter.predict(Ta_arr, N=10000)
cache = pd.DataFrame(Tmeas_pred, columns=Ta_arr)
# Correct the measured temperatures to account for the bias.
out = data.apply(lambda r: get_actual_temperature(fitter, r['Tmeas'], r['Tmeas_err'], cache=cache), axis=1)
data['Corrected Temperature'] = out.map(lambda l: l[0])
data['T_uperr'] = out.map(lambda l: l[1])
data['T_lowerr'] = out.map(lambda l: l[2])
return data
def get_uncertainty_scalefactor(df):
"""
Find the factor by which to multiply the 1-sigma measurement uncertainties
so that they agree with the literature values 68% of the time.
:param df: A pandas DataFrame with corrected temperatures, such as output by correct_measured_temperature
    :return: The scaling factor. Note that get_zscore applies it to the measurement
             variance, so scale df['T_uperr'] and df['T_lowerr'] by its square root
             to get more realistic uncertainties.
"""
def get_zscore(x, y, xerr, yerr, f=1.0):
delta = x - y
sigma = np.sqrt(f * xerr ** 2 + yerr ** 2)
return delta / sigma
def min_func(f, x, y, xerr, yerr):
zscores = get_zscore(x, y, xerr, yerr, f)
return (len(zscores[zscores ** 2 > 1]) / float(len(zscores)) - 0.32) ** 2
df['T_err'] = np.minimum(df['T_uperr'], df['T_lowerr']) # Be conservative and use the smaller error.
fitresult = minimize_scalar(min_func, bounds=[0, 10], method='bounded', args=(df['Corrected Temperature'],
df['Tactual'],
df['T_err'],
df['Tact_err']))
logging.info('Uncertainty scale factor = {:.2g}'.format(fitresult.x))
return fitresult.x
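# Typical follow-up (a sketch, not from the original source): rescale the
# reported 1-sigma errors. Since get_zscore applies the factor to the
# measurement variance, the per-error scaling is the square root of the
# returned value.
#     f = get_uncertainty_scalefactor(data)
#     data['T_uperr'] *= np.sqrt(f)
#     data['T_lowerr'] *= np.sqrt(f)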
def get_values(df):
temp = df.groupby('Temperature')
    Tmeasured = sorted(temp.groups.keys())
Tactual_values = [temp.get_group(Tm)['Tactual'].values for Tm in Tmeasured]
Tactual = np.array([np.mean(Ta) for Ta in Tactual_values])
spread = np.nan_to_num([np.std(Ta, ddof=1) for Ta in Tactual_values])
literr_values = [temp.get_group(Tm)['Tact_err'].values for Tm in Tmeasured]
lit_err = np.array([np.sqrt(np.sum(literr**2)) for literr in literr_values])
return Tmeasured, Tactual, spread, lit_err
def integrate_gauss(x1, x2, amp, mean, sigma):
"""
Integrate a gaussian between the points x1 and x2
"""
gauss = lambda x, A, mu, sig: A*np.exp(-(x-mu)**2 / (2.0*sig**2))
if x1 < -1e6:
x1 = -np.inf
if x2 > 1e6:
x2 = np.inf
result = quad(gauss, x1, x2, args=(amp, mean, sigma))
return result[0]
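# e.g. integrate_gauss(-np.inf, np.inf, 1.0, 0.0, 1.0) == sqrt(2*pi) ~ 2.5066,
# since the Gaussian here is not normalized by 1/(sigma*sqrt(2*pi)).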
def get_probability(x1, x2, x3, x4, N, mean, sigma, debug=False):
"""
    Decide whether the given value of sigma is plausible; returns 1 if so, 0 if not.
    x1-x4 are the four limits, which are the bin edges of the possible values Tactual can take.
    N is the number of entries in the single bin, and mean is what it sounds like.
    (A sigma is rejected when a Gaussian normalized to put N entries in the central
    bin would predict more than one entry in a neighboring bin.)
"""
if x2 < 100:
x2 = x3 - (x4-x3)
if x4 > 1e6:
x4 = x3 + (x3-x2)
int1 = integrate_gauss(x2, x3, 1.0, mean, sigma)
A = float(N) / int1
int2 = 0 if x1 < 100 else integrate_gauss(x1, x2, A, mean, sigma)
int3 = 0 if x4 > 1e6 else integrate_gauss(x3, x4, A, mean, sigma)
if debug:
print('\n')
print(x1, x2, x3, x4, N, mean, sigma)
print(int1)
print(A)
print(int2)
print(int3)
if int2 > 1 or int3 > 1:
return 0
return 1
def fit_sigma(df, i):
"""
Find the largest allowable standard deviation, given the possible values Tactual can take.
"""
Tmeasured, Tactual, _, _ = get_values(df)
Tm = Tmeasured[i]
# Get the possible values, and bin those with this measured value
possible_values = sorted(pd.unique(df.Tactual))
    edges = [(possible_values[j] + possible_values[j+1])/2 for j in range(len(possible_values)-1)]
bins = [0] + edges + [9e9]
good = df.loc[df.Temperature == Tm]
    values, _ = np.histogram(good.Tactual.values, bins=bins)
mean = np.mean(good.Tactual.values)
std = np.std(good.Tactual.values, ddof=1)
if std > 0:
return std
    sigma_test = np.arange(500, 10, -10)  # Just test a bunch of values
    idx = np.argmin(abs(np.array(bins) - mean))
x1 = bins[idx-2] if idx > 2 else -1
x2 = bins[idx-1]
x3 = bins[idx]
x4 = bins[idx+1] if idx < len(bins)-2 else np.inf
N = len(good)
probs = [get_probability(x1, x2, x3, x4, N, mean, s) for s in sigma_test]
for s, p in zip(sigma_test, probs):
if p > 0.5:
return s
# If we get here, just return a guess value
return 200.0
#raise ValueError('No probability > 0!')
if __name__ == '__main__':
pass
|
kgullikson88/gullikson-scripts
|
kglib/cross_correlation/CCF_Systematics.py
|
Python
|
mit
| 37,521
|
[
"Gaussian"
] |
3a68bb74f604ed59cdb696c25a2991a462f840ffde342b7b11ce9f90b4f10773
|
import topo
import param
import imagen
from topo import projection, responsefn, learningfn, transferfn, sheet, optimized
import topo.transferfn.misc
from topo.submodel import Model, ArraySpec, order_projections # pyflakes:ignore (API import)
from topo.submodel.earlyvision import EarlyVisionModel
@Model.definition
class ModelGCAL(EarlyVisionModel):
cortex_density=param.Number(default=47.0,bounds=(0,None),
inclusive_bounds=(False,True),doc="""
The nominal_density to use for V1.""")
homeostasis = param.Boolean(default=True, doc="""
Whether or not the homeostatic adaption should be applied in V1""")
t_init = param.Number(default=0.15, doc="""
The initial V1 threshold value. This value is static in the L and GCL models
and adaptive in the AL and GCAL models.""")
t_settle = param.Integer(default=16, doc="""
Number of settling steps before applying a reset in the V1 sheet.""")
target_activity = param.Number(default=0.024,doc="""
The target average activity for the homeostatic threshold mechanism.""")
latexc_radius=param.Number(default=0.104,bounds=(0,None),doc="""
Radius of the lateral excitatory bounds within V1.""")
latinh_radius=param.Number(default=0.22917,bounds=(0,None),doc="""
Radius of the lateral inhibitory bounds within V1.""")
latexc_size=param.Number(default=0.05,bounds=(0,None),doc="""
Size of the lateral excitatory connections within V1.""")
latinh_size=param.Number(default=0.15,bounds=(0,None),doc="""
Size of the lateral inhibitory connections within V1.""")
aff_lr=param.Number(default=0.1,bounds=(0.0,None),doc="""
Learning rate for the afferent projection(s) to V1.""")
exc_lr=param.Number(default=0.0,bounds=(0.0,None),doc="""
Learning rate for the lateral excitatory projection to V1.""")
inh_lr=param.Number(default=0.3,bounds=(0.0,None),doc="""
Learning rate for the lateral inhibitory projection to V1.""")
aff_strength=param.Number(default=1.0,bounds=(0.0,None),doc="""
Overall strength of the afferent projection to V1.""")
exc_strength=param.Number(default=1.7,bounds=(0.0,None),doc="""
Overall strength of the lateral excitatory projection to V1.""")
inh_strength=param.Number(default=1.4,bounds=(0.0,None),doc="""
Overall strength of the lateral inhibitory projection to V1.""")
expand_sf_test_range=param.Boolean(default=False,doc="""
By default, measure_sine_pref() measures SF at the sizes of RF
used, for speed, but if expand_sf_test_range is True, it will
test over a larger range, including half the size of the
smallest and twice the size of the largest.""")
    def property_setup(self, properties):
        "Specify weight initialization, response function, and learning function"
        properties = super(ModelGCAL, self).property_setup(properties)
projection.CFProjection.cf_shape=imagen.Disk(smoothing=0.0)
projection.CFProjection.response_fn=optimized.CFPRF_DotProduct_cython()
projection.CFProjection.learning_fn=optimized.CFPLF_Hebbian_cython()
projection.CFProjection.weights_output_fns=[optimized.CFPOF_DivisiveNormalize_L1_cython()]
projection.SharedWeightCFProjection.response_fn=optimized.CFPRF_DotProduct_cython()
return properties
def sheet_setup(self):
sheets = super(ModelGCAL,self).sheet_setup()
sheets['V1'] = [{}]
return sheets
@Model.SettlingCFSheet
def V1(self, properties):
return Model.SettlingCFSheet.params(
tsettle=self.t_settle,
plastic=True,
joint_norm_fn=optimized.compute_joint_norm_totals_cython,
output_fns=[transferfn.misc.HomeostaticResponse(t_init=self.t_init,
target_activity = self.target_activity,
learning_rate=0.01 if self.homeostasis else 0.0)],
nominal_density=self.cortex_density,
nominal_bounds=sheet.BoundingBox(radius=self.area/2.0))
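    # Note: with homeostasis=False the HomeostaticResponse learning rate is 0,
    # so the V1 threshold stays fixed at t_init (the L/GCL model variants);
    # otherwise it adapts towards target_activity (the AL/GCAL variants).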
@Model.matchconditions('V1', 'V1_afferent')
def V1_afferent_conditions(self, properties):
return {'level': 'LGN'}
@Model.CFProjection
def V1_afferent(self, src_properties, dest_properties):
sf_channel = src_properties['SF'] if 'SF' in src_properties else 1
# Adjust delays so same measurement protocol can be used with and without gain control.
LGN_V1_delay = 0.05 if self.gain_control else 0.10
name=''
if 'eye' in src_properties: name+=src_properties['eye']
if 'opponent' in src_properties:
name+=src_properties['opponent']+src_properties['surround']
name+=('LGN'+src_properties['polarity']+'Afferent')
if sf_channel>1: name+=('SF'+str(src_properties['SF']))
        gaussian_size = 2.0 * self.v1aff_radius * self.sf_spacing**(sf_channel-1)
weights_generator = imagen.random.GaussianCloud(gaussian_size=gaussian_size)
return [Model.CFProjection.params(
delay=LGN_V1_delay+lag,
dest_port=('Activity','JointNormalize','Afferent'),
            name=name if lag == 0 else name + 'Lag' + str(lag),
learning_rate=self.aff_lr,
strength=self.aff_strength*(1.5 if self.gain_control else 1.0),
weights_generator=weights_generator,
nominal_bounds_template=sheet.BoundingBox(radius=
self.v1aff_radius*self.sf_spacing**(sf_channel-1)))
for lag in self['lags']]
@Model.matchconditions('V1', 'lateral_excitatory')
def lateral_excitatory_conditions(self, properties):
return {'level': 'V1'}
@Model.CFProjection
def lateral_excitatory(self, src_properties, dest_properties):
return Model.CFProjection.params(
delay=0.05,
name='LateralExcitatory',
weights_generator=imagen.Gaussian(aspect_ratio=1.0, size=self.latexc_size),
strength=self.exc_strength,
learning_rate=self.exc_lr,
nominal_bounds_template=sheet.BoundingBox(radius=self.latexc_radius))
@Model.matchconditions('V1', 'lateral_inhibitory')
def lateral_inhibitory_conditions(self, properties):
return {'level': 'V1'}
@Model.CFProjection
def lateral_inhibitory(self, src_properties, dest_properties):
return Model.CFProjection.params(
delay=0.05,
name='LateralInhibitory',
weights_generator=imagen.random.GaussianCloud(gaussian_size=self.latinh_size),
strength=-1.0*self.inh_strength,
learning_rate=self.inh_lr,
nominal_bounds_template=sheet.BoundingBox(radius=self.latinh_radius))
def analysis_setup(self):
# TODO: This is different in gcal.ty, stevens/gcal.ty and gcal_od.ty
# And depends whether gain control is used or not
import topo.analysis.featureresponses
topo.analysis.featureresponses.FeatureMaps.selectivity_multiplier=2.0
topo.analysis.featureresponses.FeatureCurveCommand.contrasts=[1, 10, 30, 50, 100]
if 'dr' in self.dims:
topo.analysis.featureresponses.MeasureResponseCommand.durations=[(max(self['lags'])+1)*1.0]
if 'sf' in self.dims:
from topo.analysis.command import measure_sine_pref
sf_relative_sizes = [self.sf_spacing**(sf_channel-1) for sf_channel in self['SF']]
wide_relative_sizes=[0.5*sf_relative_sizes[0]] + sf_relative_sizes + [2.0*sf_relative_sizes[-1]]
relative_sizes=(wide_relative_sizes if self.expand_sf_test_range else sf_relative_sizes)
#The default 2.4 spatial frequency value here is
#chosen because it results in a sine grating with bars whose
#width approximately matches the width of the Gaussian training
#patterns, and thus the typical width of an ON stripe in one of the
#receptive fields
measure_sine_pref.frequencies = [2.4*s for s in relative_sizes]
@Model.definition
class ExamplesGCAL(ModelGCAL):
"""
Reproduces the results of the legacy examples/gcal.ty file.
"""
def __init__(self, **params):
super(ExamplesGCAL, self).__init__(time_dependent=False, **params)
        if set(self.dims) != set(['xy', 'or']):
raise Exception("ExamplesGCAL only reproducible for dims = ['xy', 'or']")
def setup(self,setup_options=True):
model = super(ExamplesGCAL, self).setup(setup_options)
if setup_options is True or 'sheets' in setup_options:
model.sheets.Retina.update(nominal_bounds=sheet.BoundingBox(radius=self.area/2.0+1.125))
model.sheets.LGNOn.update(nominal_bounds=sheet.BoundingBox(radius=self.area/2.0+0.75))
model.sheets.LGNOff.update(nominal_bounds=sheet.BoundingBox(radius=self.area/2.0+0.75))
model.sheets.V1.update(nominal_density=48)
if setup_options is True or 'projections' in setup_options:
order_projections(model, ['afferent',
'lateral_gain_control',
('V1_afferent', {'polarity':'On'}),
('V1_afferent', {'polarity':'Off'}),
'lateral_excitatory',
'lateral_inhibitory'])
return model
|
ioam/topographica
|
topo/submodel/gcal.py
|
Python
|
bsd-3-clause
| 9,576
|
[
"Gaussian"
] |
6b57c5671eb675c35598e233e491d820f6e4d2cb00e3516380ba3159260951bc
|
from distutils.core import setup
setup(
name='spfem',
packages=['spfem'],
version='2016.1',
install_requires=[
"numpy",
"scipy",
"matplotlib",
"sympy",
"mayavi",
],
description='Finite elements in pure SciPy',
author='Tom Gustafsson',
author_email='tom dot gustafsson at aalto dot fi',
url='https://github.com/kinnala/sp.fem',
download_url='https://github.com/kinnala/sp.fem/tarball/2016.1',
keywords=['testing','logging','example'],
classifiers=[],
)
|
kinnala/sp.fem
|
setup.py
|
Python
|
agpl-3.0
| 620
|
[
"Mayavi"
] |
2f97266cfda86ad86e197e0b2bdf453f063fee4d95afcb6e758e2caeeece291d
|
from copy import deepcopy
from argparse import ArgumentParser
import time
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import numpy as np
import galsim
import emcee
import triangle
from astropy.utils.console import ProgressBar
def walker_ball(alpha, spread, nwalkers):
return [alpha+np.random.randn(len(alpha))*spread for i in xrange(nwalkers)]
def lnprior(p):
x0, y0, n, flux, HLR, e1, e2 = p
if n < 0.3 or n > 6.2: return -np.inf
if flux < 0: return -np.inf
if HLR < 0.1: return -np.inf
if e1**2 + e2**2 > 1.0: return -np.inf
return 0.0
def lnprob(p, target_img, noise_var, psftype):
lp = lnprior(p)
if not np.isfinite(lp):
return -np.inf
x0, y0, n, flux, HLR, e1, e2 = p
model_img = galsim.ImageD(25, 25, scale=0.2)
gal = galsim.Sersic(n=n, half_light_radius=HLR, flux=flux)
gal = gal.shear(e1=e1, e2=e2)
gal = gal.shift(x0, y0)
if psftype == 'atmopt':
lam_over_diam = 700e-9 / 8.4 * 3600 * 180.0 / np.pi
aberrations = [0.0]*4 + [0.1]*8
obscuration = 0.5
atm_FWHM = 0.6
atm_e1 = 0.01
atm_e2 = 0.02
psf = galsim.Convolve(galsim.Kolmogorov(fwhm=atm_FWHM).shear(e1=atm_e1, e2=atm_e2),
galsim.OpticalPSF(lam_over_diam=lam_over_diam,
aberrations=aberrations,
obscuration=obscuration))
    elif psftype == 'moffat':
        psf = galsim.Moffat(fwhm=0.7, beta=3.0).shear(e1=0.01, e2=0.02)
    else:
        raise ValueError('Unknown psf type: ' + psftype)
final = galsim.Convolve(gal, psf)
try:
final.drawImage(image=model_img)
    except Exception:  # galsim can raise for unphysical parameter combinations
return -np.inf
lnlike = -0.5 * np.sum((target_img.array - model_img.array)**2/noise_var)
return lp + lnlike
def autocorrtime(chain):
N, K = chain.shape
if N < 100: return
M = 50
tau_int = emcee.autocorr.integrated_time(chain, window=M)
while (M < 4*tau_int.max()) & (M < N/4) :
M = 4.1*tau_int.max()
tau_int = emcee.autocorr.integrated_time(chain, window=M)
return tau_int
def mcgalsim(args):
bd = galsim.BaseDeviate(args.seed)
gn = galsim.GaussianNoise(bd)
target_img = galsim.ImageD(25, 25, scale=0.2)
gal = galsim.Sersic(n=args.n,
half_light_radius=args.HLR,
flux=args.flux)
gal = gal.shear(e1=args.e1, e2=args.e2)
gal = gal.shift(args.x0, args.y0)
if args.psf == 'atmopt':
lam_over_diam = 700e-9 / 8.4 * 3600 * 180.0 / np.pi
aberrations = [0.0]*4 + [0.1]*8
obscuration = 0.5
atm_FWHM = 0.6
atm_e1 = 0.01
atm_e2 = 0.02
psf = galsim.Convolve(galsim.Kolmogorov(fwhm=atm_FWHM).shear(e1=atm_e1, e2=atm_e2),
galsim.OpticalPSF(lam_over_diam=lam_over_diam,
aberrations=aberrations,
obscuration=obscuration))
    elif args.psf == 'moffat':
        psf = galsim.Moffat(fwhm=0.7, beta=3.0).shear(e1=0.01, e2=0.02)
    else:
        raise ValueError('Unknown psf type: ' + args.psf)
final = galsim.Convolve(gal, psf)
final.drawImage(image=target_img)
noise_var = target_img.addNoiseSNR(gn, args.snr, preserve_flux=True)
p1 = [args.x0, args.y0, args.n, args.flux, args.HLR, args.e1, args.e2]
dp1 = 0.001
ndim = len(p1)
p0 = walker_ball(p1, dp1, args.nwalkers)
# print np.array(p0).mean(axis=0)
sampler = emcee.EnsembleSampler(args.nwalkers, ndim, lnprob,
args=(target_img, noise_var, args.psf),
threads=args.nthreads)
pp, lnp, rstate = sampler.run_mcmc(p0, 1)
sampler.reset()
lnps = []
dts = []
print "sampling"
with ProgressBar(args.nburn+args.nsamples) as bar:
for i in range(args.nburn+args.nsamples):
bar.update()
t1 = time.time()
pp, lnp, rstate = sampler.run_mcmc(pp, 1, lnprob0=lnp, rstate0=rstate)
dt = (time.time() - t1) / args.nwalkers * args.nthreads
lnps.append(deepcopy(lnp))
dts.append(dt)
samples = sampler.chain
lnps = np.array(lnps)
dts = np.array(dts)
flat_samples = samples[:, args.nburn:, :].reshape((-1, ndim)) # flat_samples excludes burn-in
if flat_samples.shape[0] > 2000:
tau_int = autocorrtime(flat_samples)
print "making triangle plot"
fig = triangle.corner(flat_samples, labels=["x0", "y0", "n", "flux", "HLR", "e1", "e2"],
truths=[args.x0, args.y0, args.n, args.flux, args.HLR, args.e1, args.e2])
fig.savefig("triangle.png", dpi=220)
print "making walker plot"
# Try to make plot aspect ratio near golden
nparam = ndim+2 # add 2 for lnp and dt
ncols = int(np.ceil(np.sqrt(nparam*1.6)))
nrows = int(np.ceil(1.0*nparam/ncols))
fig = plt.figure(figsize = (3.0*ncols,3.0*nrows))
for i, p in enumerate(["x0", "y0", "n", "flux", "HLR", "e1", "e2"]):
ax = fig.add_subplot(nrows, ncols, i+1)
ax.plot(samples[..., i].T)
ax.set_ylabel(p)
ax.set_yticklabels(ax.get_yticks(), rotation=45)
ax.set_xticklabels(ax.get_xticks(), rotation=45)
ylim = ax.get_ylim()
ylim = np.r_[ylim[0], (ylim[1]-ylim[0])*1.1+ylim[0]]
ax.set_ylim(ylim)
xlim = ax.get_xlim()
ax.set_xlim(xlim)
ax.fill_between([args.nburn,xlim[1]], [ylim[0]]*2, [ylim[1]]*2, color="#CCCCCC")
if "tau_int" in locals():
ax.text(0.6, 0.90, r"$\tau_\mathrm{{int}} = {:g}$".format(int(tau_int[i])),
transform=ax.transAxes)
# lnp chains
ax = fig.add_subplot(nrows, ncols, i+2)
ax.plot(lnps)
ax.set_ylabel(r"ln(prob)")
ax.set_yticklabels(ax.get_yticks(), rotation=45)
ax.set_xticklabels(ax.get_xticks(), rotation=45)
xlim = ax.get_xlim()
ax.set_xlim(xlim)
ylim = ax.get_ylim()
ax.set_ylim(ylim)
ax.fill_between([args.nburn,xlim[1]], [ylim[0]]*2, [ylim[1]]*2, color="#CCCCCC")
# delta t distribution
ax = fig.add_subplot(nrows, ncols, i+3)
ax.hist(dts)
ax.set_xlabel(r"$\Delta t (s)$")
ax.set_ylabel(r"#")
ax.set_yticklabels(ax.get_yticks(), rotation=45)
ax.set_xticklabels(ax.get_xticks(), rotation=45)
fig.tight_layout()
fig.savefig("walkers.png", dpi=220)
if __name__ == '__main__':
parser = ArgumentParser()
parser.add_argument('--x0', type=float, default=0.0,
help="Galaxy centroid (default: 0.0)")
parser.add_argument('--y0', type=float, default=0.0,
help="Galaxy centroid (default: 0.0)")
parser.add_argument('-n', type=float, default=1.0,
help="Galaxy Sersic index (default: 1.0)")
parser.add_argument('--flux', type=float, default=10.0,
help="Galaxy flux (default: 10.0)")
parser.add_argument('--HLR', type=float, default=0.5,
help="Galaxy half light radius (default: 0.5)")
parser.add_argument('--e1', type=float, default=0.0,
help="Galaxy ellipticity (default: 0.0)")
parser.add_argument('--e2', type=float, default=0.0,
help="Galaxy ellipticity (default: 0.0)")
parser.add_argument('--snr', type=float, default=80.0,
help="Signal-to-noise ratio (default: 80.0)")
parser.add_argument('--psf', default='moffat',
help="psf type: (moffat | atmopt) (default: moffat)")
parser.add_argument('--seed', type=int, default=0,
help="Random number seed (default: 0)")
parser.add_argument('--nwalkers', type=int, default=32,
help="Number of walkers (default: 32)")
parser.add_argument('--nburn', type=int, default=30,
help="Numbers of burn-in samples (default: 30)")
parser.add_argument('--nsamples', type=int, default=30,
help="Numbers of samples per walker (default: 30)")
parser.add_argument('--nthreads', type=int, default=4,
help="Numbers of threads (default: 4)")
args = parser.parse_args()
mcgalsim(args)
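    # Example invocation (a sketch using the defaults defined above):
    #     python mcgalsim.py --psf moffat --snr 80.0 --nwalkers 32 --nburn 30 --nsamples 30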
|
jmeyers314/mcgalsim
|
mcgalsim.py
|
Python
|
bsd-3-clause
| 8,223
|
[
"Galaxy"
] |
f057b311a25df61ca85a616b58f1a99095a6fed1caca0c4c6d9c8af358ca2662
|
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from authlib.oauth2.base import OAuth2Error
from authlib.oauth2.rfc6749.grants import RefreshTokenGrant as _RefreshTokenGrant
from DIRAC.ConfigurationSystem.Client.Helpers.Registry import getUsernameForDN, wrapIDAsDN
class RefreshTokenGrant(_RefreshTokenGrant):
"""See :class:`authlib.oauth2.rfc6749.grants.RefreshTokenGrant`"""
DEFAULT_EXPIRES_AT = 12 * 3600
TOKEN_ENDPOINT_AUTH_METHODS = ["client_secret_basic", "client_secret_post", "none"]
def authenticate_refresh_token(self, refresh_token):
"""Get credential for token
:param str refresh_token: refresh token
:return: dict or None
"""
result = self.server.readToken(refresh_token)
if not result["OK"]:
raise OAuth2Error(result["Message"])
rtDict = result["Value"]
result = self.server.db.getCredentialByRefreshToken(rtDict["jti"])
if not result["OK"]:
raise OAuth2Error(result["Message"])
credential = result["Value"]
if int(rtDict["iat"]) != int(credential["issued_at"]):
# An attempt to reuse the refresh token was detected
prov = self.server.idps.getIdProvider(rtDict["provider"])
if prov["OK"]:
prov["Value"].revokeToken(credential["refresh_token"])
prov["Value"].revokeToken(credential["access_token"], "access_token")
return None
credential.update(rtDict)
return credential
def authenticate_user(self, credential):
"""Authorize user
:param dict credential: credential (token payload)
:return: str or bool
"""
result = getUsernameForDN(wrapIDAsDN(credential["sub"]))
if not result["OK"]:
self.server.log.error(result["Message"])
return result.get("Value")
def issue_token(self, user, credential):
"""Refresh tokens
        :param user: unused
:param dict credential: token credential
:return: dict
"""
if credential["refresh_token"]:
result = self.server.idps.getIdProvider(credential["provider"])
if result["OK"]:
result = result["Value"].refreshToken(credential["refresh_token"])
else:
result = self.server.tokenCli.getToken(user, self.server._getScope(credential["scope"], "g"))
if result["OK"]:
token = result["Value"]
result = self.server.registerRefreshToken(credential, token)
if not result["OK"]:
raise OAuth2Error(result["Message"])
return result["Value"]
def revoke_old_credential(self, credential):
"""Remove old credential"""
pass
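# Note: authenticate_refresh_token() above implements rotating (single-use)
# refresh tokens: if the presented token's 'iat' does not match the stored
# 'issued_at', the stored refresh and access tokens are revoked as a
# suspected replay, and authentication fails.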
|
ic-hep/DIRAC
|
src/DIRAC/FrameworkSystem/private/authorization/grants/RefreshToken.py
|
Python
|
gpl-3.0
| 2,809
|
[
"DIRAC"
] |
6c7e575dc919f3d9f85cc049aaf04eca0e887edef4eebbdf27109a6189d1b6bc
|
#==================================================================================================
# Copyright (C) 2016 Olivier Mallet - All Rights Reserved
#==================================================================================================
import numpy as np
import pandas as pd
import CyURT as urt
if __name__ == "__main__":
y = pd.read_csv('../data/y.csv', sep=',', header=None)
x = pd.read_csv('../data/x.csv', sep=',', header=None)
yd = np.asarray(y).reshape(y.size)
yf = yd.astype(np.float32)
xd = np.asarray(x, order='F')
xf = xd.astype(np.float32)
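    # (order='F' lays the regressor matrix out column-major; the underlying
    # C++ linear-algebra backends are typically column-major, so this is
    # presumably what CyURT expects -- an assumption, not from the docs.)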
# running OLS regression as in ./examples/example1.cpp using double precision type
fit = urt.OLS_d(yd, xd, True)
fit.show()
# running OLS regression as in ./examples/example1.cpp using single precision type
fit = urt.OLS_f(yf, xf, True)
fit.show()
# running first ADF test as in ./examples/example2.cpp using double precision type
test = urt.ADF_d(yd, lags=10, trend='ct')
test.show()
# running second ADF test as in ./examples/example2.cpp using double precision type
test.method = 'AIC'
test.bootstrap = True
test.niter = 10000
test.show()
|
olmallet81/URT
|
Python/example.py
|
Python
|
mit
| 1,247
|
[
"ADF"
] |
68399c4e3c4bc6c1b5f91d80b9efa61cd3f7a25c2c668b13eebee8a813206ac7
|
import json
import pandas as pd
import requests
from py2cytoscape.data.network_view import CyNetworkView
from ..util import util_networkx as nx_util
from ..util import util_dataframe as df_util
from .util_http import check_response
from . import BASE_URL, HEADERS
import warnings
warnings.warn('\n\n\n**** data.cynetwork will be deprecated in the next py2cytoscape release. ****\n\n\n')
BASE_URL_NETWORK = BASE_URL + 'networks'
class CyNetwork(object):
def __init__(self, suid=None, session=None, url=None):
if pd.isnull(url):
raise ValueError("URL is missing.")
# Validate required argument
if pd.isnull(suid):
raise ValueError("SUID is missing.")
else:
self.__id = suid
self.__url = url + '/' + str(self.__id) + '/'
self.session = session if session is not None else requests.Session()
def get_id(self):
"""
Get session-unique ID of this network
:return: SUID as integer
"""
return self.__id
def to_json(self):
"""
Return this network in Cytoscape.js format.
:return: Cytoscape.js Style JSON as dictionary.
"""
return self.session.get(self.__url).json()
def to_networkx(self):
"""
Return this network in NetworkX graph object.
:return: Network as NetworkX graph object
"""
return nx_util.to_networkx(self.session.get(self.__url).json())
def to_dataframe(self, extra_edges_columns=[]):
"""
Return this network in pandas DataFrame.
:return: Network as DataFrame. This is equivalent to SIF.
"""
return df_util.to_dataframe(
self.session.get(self.__url).json(),
edges_attr_cols=extra_edges_columns
)
def get_nodes(self):
"""
Get all nodes as a list of SUIDs
        :return: list of node SUIDs
"""
return self.session.get(self.__url + 'nodes').json()
def get_edges(self, fmt='suid'):
if fmt == 'suid':
return self.session.get(self.__url + 'edges').json()
elif fmt == 'edgelist':
# TODO: implement this
pass
else:
raise ValueError(fmt + ' is not supported for edge format.')
def add_node(self, node_name, dataframe=False):
""" Add a single node to the network. """
if node_name is None:
return None
return self.add_nodes([node_name], dataframe=dataframe)
def add_nodes(self, node_name_list, dataframe=False):
"""
Add new nodes to the network
:param node_name_list: list of node names, e.g. ['a', 'b', 'c']
:param dataframe: If True, return a pandas dataframe instead of a dict.
:return: A dict mapping names to SUIDs for the newly-created nodes.
"""
res = self.session.post(self.__url + 'nodes', data=json.dumps(node_name_list), headers=HEADERS)
check_response(res)
nodes = res.json()
if dataframe:
return pd.DataFrame(nodes).set_index(['SUID'])
else:
return {node['name']: node['SUID'] for node in nodes}
def add_edge(self, source, target, interaction='-', directed=True, dataframe=True):
""" Add a single edge from source to target. """
new_edge = {
'source': source,
'target': target,
'interaction': interaction,
'directed': directed
}
return self.add_edges([new_edge], dataframe=dataframe)
def add_edges(self, edge_list, dataframe=True):
"""
        Add all edges in edge_list.
        :param edge_list: List of (source, target, interaction) tuples *or*
            list of dicts with 'source', 'target', 'interaction', 'directed' keys.
        :param dataframe: If dataframe is True (default), return a Pandas DataFrame.
            If dataframe is False, return a list of dicts with keys 'SUID', 'source' and 'target'.
        :return: A data structure with Cytoscape SUIDs for the newly-created edges.
"""
        # Edges may be given as dicts or as (source, target, interaction) tuples;
        # convert tuples to dicts here.
if not isinstance(edge_list[0], dict):
edge_list = [{'source': edge_tuple[0],
'target': edge_tuple[1],
'interaction': edge_tuple[2]}
for edge_tuple in edge_list]
res = self.session.post(self.__url + 'edges', data=json.dumps(edge_list), headers=HEADERS)
check_response(res)
edges = res.json()
if dataframe:
return pd.DataFrame(edges).set_index(['SUID'])
else:
return edges
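    # Example (a sketch; assumes `net` is a CyNetwork instance):
    #     suids = net.add_nodes(['a', 'b', 'c'])
    #     net.add_edges([(suids['a'], suids['b'], 'interacts'),
    #                    (suids['b'], suids['c'], 'interacts')])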
def delete_node(self, id):
url = self.__url + 'nodes/' + str(id)
self.session.delete(url)
def delete_edge(self, id):
url = self.__url + 'edges/' + str(id)
self.session.delete(url)
def __get_table(self, type, fmt=None):
url = self.__url + 'tables/default' + type
if fmt is None or fmt == 'dataframe':
return pd.DataFrame(self.session.get(url).json()['rows'])
elif fmt == 'csv' or fmt == 'tsv':
return self.session.get(url + '.' + fmt).content
elif fmt == 'cytoscapejs':
return self.session.get(url).json()['rows']
else:
raise ValueError('Unsupported format: ' + fmt)
def get_node_table(self, fmt=None):
return self.__get_table('node', fmt)
def get_edge_table(self, fmt=None):
return self.__get_table('edge', fmt)
def get_network_table(self, fmt=None):
return self.__get_table('network', fmt)
def __get_columns(self, type=None):
url = self.__url + 'tables/default' + type + '/columns'
df = pd.DataFrame(self.session.get(url).json())
return df.set_index(['name'])
def get_node_columns(self):
"""
Get node table column information as DataFrame
        :return: Node column information as DataFrame
"""
return self.__get_columns('node')
def get_edge_columns(self):
"""
Get edge table column information as DataFrame
        :return: Edge column information as DataFrame
"""
return self.__get_columns('edge')
def get_network_columns(self):
"""
Get network table column information as DataFrame
        :return: Network column information as DataFrame
"""
return self.__get_columns('networks')
def __get_column(self, type=None, column=None):
url = self.__url + 'tables/default' + type + '/columns/' + column
result = self.session.get(url).json()
return pd.Series(result['values'])
def get_node_column(self, column):
return self.__get_column('node', column=column)
def get_edge_column(self, column):
return self.__get_column('edge', column=column)
def __get_value(self, type=None, id=None, column=None):
if column is None and id is not None:
# Extract a row in table
url = self.__url + 'tables/default' + type + '/rows/' + str(id)
return pd.Series(self.session.get(url).json())
elif column is not None and id is not None:
url = self.__url + 'tables/default' + type + '/rows/' + str(id) + '/' + column
return self.session.get(url).content
else:
raise ValueError('ID is required.')
def get_node_value(self, id, column=None):
return self.__get_value(type='node', id=id, column=column)
def get_edge_value(self, id, column=None):
return self.__get_value(type='edge', id=id, column=column)
def get_network_value(self, column):
return self.__get_value(type='network', id=self.__id, column=column)
def update_node_table(self, df=None, network_key_col='name',
data_key_col=None):
return self.__update_table('node', df=df, network_key_col=network_key_col, data_key_col=data_key_col)
def __update_table(self, type, df, network_key_col='name',
data_key_col=None):
is_index_col = False
if data_key_col is None:
# Use index
data_key = network_key_col
is_index_col = True
else:
data_key = data_key_col
table = {
'key': network_key_col,
'dataKey': data_key
}
if is_index_col:
# Use DataFrame's index as the mapping key
df2 = pd.DataFrame(df)
df2[network_key_col] = df.index
data = df2.to_json(orient='records')
del df2
else:
data = df.to_json(orient='records')
table['data'] = json.loads(data)
url = self.__url + 'tables/default' + type
self.session.put(url, json=table, headers=HEADERS)
def __delete_column(self, type, column):
url = self.__url + 'tables/default' + type + '/columns/' + column
self.session.delete(url)
def delete_node_table_column(self, column):
self.__delete_column('node', column=column)
def delete_edge_table_column(self, column):
self.__delete_column('edge', column=column)
def delete_network_table_column(self, column):
self.__delete_column('network', column=column)
def __create_column(self, type, name, data_type, immutable, list):
url = self.__url + 'tables/default' + type + '/columns'
new_column = {
'name': name,
'type': data_type,
'immutable': immutable,
'list': list
}
self.session.post(url, data=json.dumps(new_column), headers=HEADERS)
def create_node_column(self, name, data_type='String', is_immutable=False, is_list=False):
self.__create_column('node', name=name, data_type=data_type, immutable=is_immutable, list=is_list)
def create_edge_column(self, name, data_type='String', is_immutable=False, is_list=False):
self.__create_column('edge', name=name, data_type=data_type, immutable=is_immutable, list=is_list)
def create_network_column(self, name, data_type='String', is_immutable=False, is_list=False):
self.__create_column('network', name=name, data_type=data_type, immutable=is_immutable, list=is_list)
# Utility functions
def get_neighbours(self, node_id):
url = self.__url + 'nodes/' + str(node_id) + '/neighbors'
return self.session.get(url).json()
def get_adjacent_edges(self, node_id):
url = self.__url + 'nodes/' + str(node_id) + '/adjEdges'
return self.session.get(url).json()
# Views
def get_views(self):
"""
Get views as a list of SUIDs
        :return: list of view SUIDs
"""
url = self.__url + 'views'
return self.session.get(url).json()
def get_png(self, height=1200):
url = self.__url + 'views/first.png?h=' + str(height)
return self.session.get(url).content
def get_svg(self, height=1200):
url = self.__url + 'views/first.svg?h=' + str(height)
return self.session.get(url).content
def get_pdf(self):
url = self.__url + 'views/first.pdf'
return self.session.get(url).content
def get_first_view(self, fmt='json'):
"""
Get a first view model as dict
        :return: the first view model as a dict
"""
url = self.__url + 'views/first'
return self.session.get(url).json()
def get_view(self, view_id, fmt='json'):
if fmt == 'json':
url = self.__url + 'views/' + str(view_id)
return self.session.get(url).json()
elif fmt == 'view':
return self.__get_view_object(view_id)
else:
return None
def __get_view_object(self, view_id):
"""
Create a new CyNetworkView object for the given ID.
:param view_id:
:return:
"""
view = CyNetworkView(self, view_id)
return view
def __eq__(self, other):
if isinstance(other, self.__class__):
return self.__id == other.__id
else:
return False
def __ne__(self, other):
return not self.__eq__(other)
|
idekerlab/py2cytoscape
|
py2cytoscape/data/cynetwork.py
|
Python
|
mit
| 12,242
|
[
"Cytoscape"
] |
ec02c729b116cd858360cfd935482e47a932565f6daa3d1f8e100da6b0d9c74a
|
# $Id: nodes.py 7054 2011-06-07 15:05:58Z milde $
# Author: David Goodger <goodger@python.org>
# Copyright: This module has been placed in the public domain.
"""
Docutils document tree element class library.
Classes in CamelCase are abstract base classes or auxiliary classes. The one
exception is `Text`, for a text (PCDATA) node; uppercase is used to
differentiate from element classes. Classes in lower_case_with_underscores
are element classes, matching the XML element generic identifiers in the DTD_.
The position of each node (the level at which it can occur) is significant and
is represented by abstract base classes (`Root`, `Structural`, `Body`,
`Inline`, etc.). Certain transformations will be easier because we can use
``isinstance(node, base_class)`` to determine the position of the node in the
hierarchy.
.. _DTD: http://docutils.sourceforge.net/docs/ref/docutils.dtd
"""
__docformat__ = 'reStructuredText'
import sys
import os
import re
import warnings
import types
import unicodedata
# ==============================
# Functional Node Base Classes
# ==============================
class Node(object):
"""Abstract base class of nodes in a document tree."""
parent = None
"""Back-reference to the Node immediately containing this Node."""
document = None
"""The `document` node at the root of the tree containing this Node."""
source = None
"""Path or description of the input source which generated this Node."""
line = None
"""The line number (1-based) of the beginning of this Node in `source`."""
def __nonzero__(self):
"""
Node instances are always true, even if they're empty. A node is more
than a simple container. Its boolean "truth" does not depend on
having one or more subnodes in the doctree.
Use `len()` to check node length. Use `None` to represent a boolean
false value.
"""
return True
if sys.version_info < (3,):
# on 2.x, str(node) will be a byte string with Unicode
# characters > 255 escaped; on 3.x this is no longer necessary
def __str__(self):
return unicode(self).encode('raw_unicode_escape')
def asdom(self, dom=None):
"""Return a DOM **fragment** representation of this Node."""
if dom is None:
import xml.dom.minidom as dom
domroot = dom.Document()
return self._dom_node(domroot)
def pformat(self, indent=' ', level=0):
"""
Return an indented pseudo-XML representation, for test purposes.
Override in subclasses.
"""
raise NotImplementedError
def copy(self):
"""Return a copy of self."""
raise NotImplementedError
def deepcopy(self):
"""Return a deep copy of self (also copying children)."""
raise NotImplementedError
def setup_child(self, child):
child.parent = self
if self.document:
child.document = self.document
if child.source is None:
child.source = self.document.current_source
if child.line is None:
child.line = self.document.current_line
def walk(self, visitor):
"""
Traverse a tree of `Node` objects, calling the
`dispatch_visit()` method of `visitor` when entering each
node. (The `walkabout()` method is similar, except it also
calls the `dispatch_departure()` method before exiting each
node.)
This tree traversal supports limited in-place tree
modifications. Replacing one node with one or more nodes is
OK, as is removing an element. However, if the node removed
or replaced occurs after the current node, the old node will
still be traversed, and any new nodes will not.
Within ``visit`` methods (and ``depart`` methods for
`walkabout()`), `TreePruningException` subclasses may be raised
(`SkipChildren`, `SkipSiblings`, `SkipNode`, `SkipDeparture`).
Parameter `visitor`: A `NodeVisitor` object, containing a
``visit`` implementation for each `Node` subclass encountered.
Return true if we should stop the traversal.
"""
stop = 0
visitor.document.reporter.debug(
'docutils.nodes.Node.walk calling dispatch_visit for %s'
% self.__class__.__name__)
try:
try:
visitor.dispatch_visit(self)
except (SkipChildren, SkipNode):
return stop
except SkipDeparture: # not applicable; ignore
pass
children = self.children
try:
for child in children[:]:
if child.walk(visitor):
stop = 1
break
except SkipSiblings:
pass
except StopTraversal:
stop = 1
return stop
def walkabout(self, visitor):
"""
Perform a tree traversal similarly to `Node.walk()` (which
see), except also call the `dispatch_departure()` method
before exiting each node.
Parameter `visitor`: A `NodeVisitor` object, containing a
``visit`` and ``depart`` implementation for each `Node`
subclass encountered.
Return true if we should stop the traversal.
"""
call_depart = 1
stop = 0
visitor.document.reporter.debug(
'docutils.nodes.Node.walkabout calling dispatch_visit for %s'
% self.__class__.__name__)
try:
try:
visitor.dispatch_visit(self)
except SkipNode:
return stop
except SkipDeparture:
call_depart = 0
children = self.children
try:
for child in children[:]:
if child.walkabout(visitor):
stop = 1
break
except SkipSiblings:
pass
except SkipChildren:
pass
except StopTraversal:
stop = 1
if call_depart:
visitor.document.reporter.debug(
'docutils.nodes.Node.walkabout calling dispatch_departure '
'for %s' % self.__class__.__name__)
visitor.dispatch_departure(self)
return stop
def _fast_traverse(self, cls):
"""Specialized traverse() that only supports instance checks."""
result = []
if isinstance(self, cls):
result.append(self)
for child in self.children:
result.extend(child._fast_traverse(cls))
return result
def _all_traverse(self):
"""Specialized traverse() that doesn't check for a condition."""
result = []
result.append(self)
for child in self.children:
result.extend(child._all_traverse())
return result
def traverse(self, condition=None,
include_self=1, descend=1, siblings=0, ascend=0):
"""
Return an iterable containing
* self (if include_self is true)
* all descendants in tree traversal order (if descend is true)
* all siblings (if siblings is true) and their descendants (if
also descend is true)
* the siblings of the parent (if ascend is true) and their
descendants (if also descend is true), and so on
If `condition` is not None, the iterable contains only nodes
for which ``condition(node)`` is true. If `condition` is a
node class ``cls``, it is equivalent to a function consisting
of ``return isinstance(node, cls)``.
If ascend is true, assume siblings to be true as well.
For example, given the following tree::
<paragraph>
<emphasis> <--- emphasis.traverse() and
<strong> <--- strong.traverse() are called.
Foo
Bar
<reference name="Baz" refid="baz">
Baz
Then list(emphasis.traverse()) equals ::
[<emphasis>, <strong>, <#text: Foo>, <#text: Bar>]
and list(strong.traverse(ascend=1)) equals ::
[<strong>, <#text: Foo>, <#text: Bar>, <reference>, <#text: Baz>]
"""
if ascend:
siblings=1
# Check for special argument combinations that allow using an
# optimized version of traverse()
if include_self and descend and not siblings:
if condition is None:
return self._all_traverse()
elif isinstance(condition, (types.ClassType, type)):
return self._fast_traverse(condition)
# Check if `condition` is a class (check for TypeType for Python
# implementations that use only new-style classes, like PyPy).
if isinstance(condition, (types.ClassType, type)):
node_class = condition
def condition(node, node_class=node_class):
return isinstance(node, node_class)
r = []
if include_self and (condition is None or condition(self)):
r.append(self)
if descend and len(self.children):
for child in self:
r.extend(child.traverse(
include_self=1, descend=1, siblings=0, ascend=0,
condition=condition))
if siblings or ascend:
node = self
while node.parent:
index = node.parent.index(node)
for sibling in node.parent[index+1:]:
r.extend(sibling.traverse(include_self=1, descend=descend,
siblings=0, ascend=0,
condition=condition))
if not ascend:
break
else:
node = node.parent
return r
def next_node(self, condition=None,
include_self=0, descend=1, siblings=0, ascend=0):
"""
Return the first node in the iterable returned by traverse(),
or None if the iterable is empty.
Parameter list is the same as of traverse. Note that
include_self defaults to 0, though.
"""
iterable = self.traverse(condition=condition,
include_self=include_self, descend=descend,
siblings=siblings, ascend=ascend)
try:
return iterable[0]
except IndexError:
return None
if sys.version_info < (3,):
class reprunicode(unicode):
"""
A class that removes the initial u from unicode's repr.
"""
def __repr__(self):
return unicode.__repr__(self)[1:]
else:
reprunicode = unicode
class Text(Node, reprunicode):
"""
Instances are terminal nodes (leaves) containing text only; no child
nodes or attributes. Initialize by passing a string to the constructor.
Access the text itself with the `astext` method.
"""
tagname = '#text'
children = ()
"""Text nodes have no children, and cannot have children."""
if sys.version_info > (3,):
def __new__(cls, data, rawsource=None):
"""Prevent the rawsource argument from propagating to str."""
if isinstance(data, bytes):
raise TypeError('expecting str data, not bytes')
return reprunicode.__new__(cls, data)
else:
def __new__(cls, data, rawsource=None):
"""Prevent the rawsource argument from propagating to str."""
return reprunicode.__new__(cls, data)
def __init__(self, data, rawsource=''):
self.rawsource = rawsource
"""The raw text from which this element was constructed."""
def shortrepr(self, maxlen=18):
data = self
if len(data) > maxlen:
data = data[:maxlen-4] + ' ...'
return '<%s: %s>' % (self.tagname, repr(reprunicode(data)))
def __repr__(self):
return self.shortrepr(maxlen=68)
def _dom_node(self, domroot):
return domroot.createTextNode(unicode(self))
def astext(self):
return reprunicode(self)
# Note about __unicode__: The implementation of __unicode__ here,
# and the one raising NotImplemented in the superclass Node had
# to be removed when changing Text to a subclass of unicode instead
# of UserString, since there is no way to delegate the __unicode__
# call to the superclass unicode:
# unicode itself does not have __unicode__ method to delegate to
# and calling unicode(self) or unicode.__new__ directly creates
# an infinite loop
def copy(self):
return self.__class__(reprunicode(self), rawsource=self.rawsource)
def deepcopy(self):
return self.copy()
def pformat(self, indent=' ', level=0):
result = []
indent = indent * level
for line in self.splitlines():
result.append(indent + line + '\n')
return ''.join(result)
# rstrip and lstrip are used by substitution definitions where
# they are expected to return a Text instance, this was formerly
# taken care of by UserString. Note that then and now the
# rawsource member is lost.
def rstrip(self, chars=None):
return self.__class__(reprunicode.rstrip(self, chars))
def lstrip(self, chars=None):
return self.__class__(reprunicode.lstrip(self, chars))
class Element(Node):
"""
`Element` is the superclass to all specific elements.
Elements contain attributes and child nodes. Elements emulate
dictionaries for attributes, indexing by attribute name (a string). To
set the attribute 'att' to 'value', do::
element['att'] = 'value'
There are two special attributes: 'ids' and 'names'. Both are
lists of unique identifiers, and names serve as human interfaces
to IDs. Names are case- and whitespace-normalized (see the
fully_normalize_name() function), and IDs conform to the regular
expression ``[a-z](-?[a-z0-9]+)*`` (see the make_id() function).
Elements also emulate lists for child nodes (element nodes and/or text
nodes), indexing by integer. To get the first child node, use::
element[0]
Elements may be constructed using the ``+=`` operator. To add one new
child node to element, do::
element += node
This is equivalent to ``element.append(node)``.
To add a list of multiple child nodes at once, use the same ``+=``
operator::
element += [node1, node2]
This is equivalent to ``element.extend([node1, node2])``.
"""
list_attributes = ('ids', 'classes', 'names', 'dupnames', 'backrefs')
"""List attributes, automatically initialized to empty lists for
all nodes."""
tagname = None
"""The element generic identifier. If None, it is set as an instance
attribute to the name of the class."""
child_text_separator = '\n\n'
"""Separator for child nodes, used by `astext()` method."""
def __init__(self, rawsource='', *children, **attributes):
self.rawsource = rawsource
"""The raw text from which this element was constructed."""
self.children = []
"""List of child nodes (elements and/or `Text`)."""
self.extend(children) # maintain parent info
self.attributes = {}
"""Dictionary of attribute {name: value}."""
# Initialize list attributes.
for att in self.list_attributes:
self.attributes[att] = []
for att, value in attributes.items():
att = att.lower()
if att in self.list_attributes:
# mutable list; make a copy for this node
self.attributes[att] = value[:]
else:
self.attributes[att] = value
if self.tagname is None:
self.tagname = self.__class__.__name__
def _dom_node(self, domroot):
element = domroot.createElement(self.tagname)
for attribute, value in self.attlist():
if isinstance(value, list):
value = ' '.join([serial_escape('%s' % (v,)) for v in value])
element.setAttribute(attribute, '%s' % value)
for child in self.children:
element.appendChild(child._dom_node(domroot))
return element
def __repr__(self):
data = ''
for c in self.children:
data += c.shortrepr()
if len(data) > 60:
data = data[:56] + ' ...'
break
if self['names']:
return '<%s "%s": %s>' % (self.__class__.__name__,
'; '.join(self['names']), data)
else:
return '<%s: %s>' % (self.__class__.__name__, data)
def shortrepr(self):
if self['names']:
return '<%s "%s"...>' % (self.__class__.__name__,
'; '.join(self['names']))
else:
return '<%s...>' % self.tagname
def __unicode__(self):
if self.children:
return u'%s%s%s' % (self.starttag(),
''.join([unicode(c) for c in self.children]),
self.endtag())
else:
return self.emptytag()
if sys.version_info > (3,):
# 2to3 doesn't convert __unicode__ to __str__
__str__ = __unicode__
def starttag(self):
parts = [self.tagname]
for name, value in self.attlist():
if value is None: # boolean attribute
parts.append(name)
elif isinstance(value, list):
values = [serial_escape('%s' % (v,)) for v in value]
parts.append('%s="%s"' % (name, ' '.join(values)))
else:
parts.append('%s="%s"' % (name, value))
return '<%s>' % ' '.join(parts)
def endtag(self):
return '</%s>' % self.tagname
def emptytag(self):
return u'<%s/>' % ' '.join([self.tagname] +
['%s="%s"' % (n, v)
for n, v in self.attlist()])
def __len__(self):
return len(self.children)
def __contains__(self, key):
# support both membership test for children and attributes
# (has_key is translated to "in" by 2to3)
if isinstance(key, basestring):
return key in self.attributes
return key in self.children
def __getitem__(self, key):
if isinstance(key, basestring):
return self.attributes[key]
elif isinstance(key, int):
return self.children[key]
elif isinstance(key, types.SliceType):
assert key.step in (None, 1), 'cannot handle slice with stride'
return self.children[key.start:key.stop]
else:
raise TypeError, ('element index must be an integer, a slice, or '
'an attribute name string')
def __setitem__(self, key, item):
if isinstance(key, basestring):
self.attributes[str(key)] = item
elif isinstance(key, int):
self.setup_child(item)
self.children[key] = item
elif isinstance(key, types.SliceType):
assert key.step in (None, 1), 'cannot handle slice with stride'
for node in item:
self.setup_child(node)
self.children[key.start:key.stop] = item
else:
raise TypeError, ('element index must be an integer, a slice, or '
'an attribute name string')
def __delitem__(self, key):
if isinstance(key, basestring):
del self.attributes[key]
elif isinstance(key, int):
del self.children[key]
elif isinstance(key, types.SliceType):
assert key.step in (None, 1), 'cannot handle slice with stride'
del self.children[key.start:key.stop]
else:
raise TypeError, ('element index must be an integer, a simple '
'slice, or an attribute name string')
def __add__(self, other):
return self.children + other
def __radd__(self, other):
return other + self.children
def __iadd__(self, other):
"""Append a node or a list of nodes to `self.children`."""
if isinstance(other, Node):
self.append(other)
elif other is not None:
self.extend(other)
return self
def astext(self):
return self.child_text_separator.join(
[child.astext() for child in self.children])
def non_default_attributes(self):
atts = {}
for key, value in self.attributes.items():
if self.is_not_default(key):
atts[key] = value
return atts
def attlist(self):
attlist = self.non_default_attributes().items()
attlist.sort()
return attlist
def get(self, key, failobj=None):
return self.attributes.get(key, failobj)
def hasattr(self, attr):
return attr in self.attributes
def delattr(self, attr):
if attr in self.attributes:
del self.attributes[attr]
def setdefault(self, key, failobj=None):
return self.attributes.setdefault(key, failobj)
has_key = hasattr
def append(self, item):
self.setup_child(item)
self.children.append(item)
def extend(self, item):
for node in item:
self.append(node)
def insert(self, index, item):
if isinstance(item, Node):
self.setup_child(item)
self.children.insert(index, item)
elif item is not None:
self[index:index] = item
def pop(self, i=-1):
return self.children.pop(i)
def remove(self, item):
self.children.remove(item)
def index(self, item):
return self.children.index(item)
def is_not_default(self, key):
if self[key] == [] and key in self.list_attributes:
return 0
else:
return 1
def update_basic_atts(self, dict):
"""
Update basic attributes ('ids', 'names', 'classes',
'dupnames', but not 'source') from node or dictionary `dict`.
"""
if isinstance(dict, Node):
dict = dict.attributes
for att in ('ids', 'classes', 'names', 'dupnames'):
for value in dict.get(att, []):
if not value in self[att]:
self[att].append(value)
def clear(self):
self.children = []
def replace(self, old, new):
"""Replace one child `Node` with another child or children."""
index = self.index(old)
if isinstance(new, Node):
self.setup_child(new)
self[index] = new
elif new is not None:
self[index:index+1] = new
def replace_self(self, new):
"""
Replace `self` node with `new`, where `new` is a node or a
list of nodes.
"""
update = new
if not isinstance(new, Node):
# `new` is a list; update first child.
try:
update = new[0]
except IndexError:
update = None
if isinstance(update, Element):
update.update_basic_atts(self)
else:
# `update` is a Text node or `new` is an empty list.
# Assert that we aren't losing any attributes.
for att in ('ids', 'names', 'classes', 'dupnames'):
assert not self[att], \
'Losing "%s" attribute: %s' % (att, self[att])
self.parent.replace(self, new)
def first_child_matching_class(self, childclass, start=0, end=sys.maxint):
"""
Return the index of the first child whose class exactly matches.
Parameters:
- `childclass`: A `Node` subclass to search for, or a tuple of `Node`
classes. If a tuple, any of the classes may match.
- `start`: Initial index to check.
        - `end`: Initial index to *not* check (exclusive upper bound).
"""
if not isinstance(childclass, tuple):
childclass = (childclass,)
for index in range(start, min(len(self), end)):
for c in childclass:
if isinstance(self[index], c):
return index
return None
def first_child_not_matching_class(self, childclass, start=0,
end=sys.maxint):
"""
Return the index of the first child whose class does *not* match.
Parameters:
- `childclass`: A `Node` subclass to skip, or a tuple of `Node`
classes. If a tuple, none of the classes may match.
- `start`: Initial index to check.
        - `end`: Initial index to *not* check (exclusive upper bound).
"""
if not isinstance(childclass, tuple):
childclass = (childclass,)
for index in range(start, min(len(self), end)):
for c in childclass:
if isinstance(self.children[index], c):
break
else:
return index
return None
def pformat(self, indent=' ', level=0):
return ''.join(['%s%s\n' % (indent * level, self.starttag())] +
[child.pformat(indent, level+1)
for child in self.children])
def copy(self):
return self.__class__(rawsource=self.rawsource, **self.attributes)
def deepcopy(self):
copy = self.copy()
copy.extend([child.deepcopy() for child in self.children])
return copy
def set_class(self, name):
"""Add a new class to the "classes" attribute."""
warnings.warn('docutils.nodes.Element.set_class deprecated; '
"append to Element['classes'] list attribute directly",
DeprecationWarning, stacklevel=2)
assert ' ' not in name
self['classes'].append(name.lower())
def note_referenced_by(self, name=None, id=None):
"""Note that this Element has been referenced by its name
`name` or id `id`."""
self.referenced = 1
# Element.expect_referenced_by_* dictionaries map names or ids
# to nodes whose ``referenced`` attribute is set to true as
# soon as this node is referenced by the given name or id.
# Needed for target propagation.
by_name = getattr(self, 'expect_referenced_by_name', {}).get(name)
by_id = getattr(self, 'expect_referenced_by_id', {}).get(id)
if by_name:
assert name is not None
by_name.referenced = 1
if by_id:
assert id is not None
by_id.referenced = 1
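# Hedged usage sketch (not part of docutils): exercises the dict/list
# emulation and the ``+=`` operator described in the Element docstring.
def _element_usage_example():
    el = Element('raw text', names=['example'])
    el += Text(u'first child')                 # same as el.append(...)
    el += [Text(u'second'), Text(u'third')]    # same as el.extend(...)
    el['classes'].append('demo')               # attribute access is dict-like
    return el[0], len(el), el.astext()         # child access is list-like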
class TextElement(Element):
"""
An element which directly contains text.
Its children are all `Text` or `Inline` subclass nodes. You can
check whether an element's context is inline simply by checking whether
its immediate parent is a `TextElement` instance (including subclasses).
This is handy for nodes like `image` that can appear both inline and as
standalone body elements.
If passing children to `__init__()`, make sure to set `text` to
``''`` or some other suitable value.
"""
child_text_separator = ''
"""Separator for child nodes, used by `astext()` method."""
def __init__(self, rawsource='', text='', *children, **attributes):
if text != '':
textnode = Text(text)
Element.__init__(self, rawsource, textnode, *children,
**attributes)
else:
Element.__init__(self, rawsource, *children, **attributes)
class FixedTextElement(TextElement):
"""An element which directly contains preformatted text."""
def __init__(self, rawsource='', text='', *children, **attributes):
TextElement.__init__(self, rawsource, text, *children, **attributes)
self.attributes['xml:space'] = 'preserve'
# ========
# Mixins
# ========
class Resolvable:
resolved = 0
class BackLinkable:
def add_backref(self, refid):
self['backrefs'].append(refid)
# ====================
# Element Categories
# ====================
class Root: pass
class Titular: pass
class PreBibliographic:
"""Category of Node which may occur before Bibliographic Nodes."""
class Bibliographic: pass
class Decorative(PreBibliographic): pass
class Structural: pass
class Body: pass
class General(Body): pass
class Sequential(Body):
"""List-like elements."""
class Admonition(Body): pass
class Special(Body):
"""Special internal body elements."""
class Invisible(PreBibliographic):
"""Internal elements that don't appear in output."""
class Part: pass
class Inline: pass
class Referential(Resolvable): pass
class Targetable(Resolvable):
referenced = 0
indirect_reference_name = None
"""Holds the whitespace_normalized_name (contains mixed case) of a target.
Required for MoinMoin/reST compatibility."""
class Labeled:
"""Contains a `label` as its first element."""
# ==============
# Root Element
# ==============
class document(Root, Structural, Element):
"""
The document root element.
Do not instantiate this class directly; use
`docutils.utils.new_document()` instead.
"""
def __init__(self, settings, reporter, *args, **kwargs):
Element.__init__(self, *args, **kwargs)
self.current_source = None
"""Path to or description of the input source being processed."""
self.current_line = None
"""Line number (1-based) of `current_source`."""
self.settings = settings
"""Runtime settings data record."""
self.reporter = reporter
"""System message generator."""
self.indirect_targets = []
"""List of indirect target nodes."""
self.substitution_defs = {}
"""Mapping of substitution names to substitution_definition nodes."""
self.substitution_names = {}
"""Mapping of case-normalized substitution names to case-sensitive
names."""
self.refnames = {}
"""Mapping of names to lists of referencing nodes."""
self.refids = {}
"""Mapping of ids to lists of referencing nodes."""
self.nameids = {}
"""Mapping of names to unique id's."""
self.nametypes = {}
"""Mapping of names to hyperlink type (boolean: True => explicit,
False => implicit."""
self.ids = {}
"""Mapping of ids to nodes."""
self.footnote_refs = {}
"""Mapping of footnote labels to lists of footnote_reference nodes."""
self.citation_refs = {}
"""Mapping of citation labels to lists of citation_reference nodes."""
self.autofootnotes = []
"""List of auto-numbered footnote nodes."""
self.autofootnote_refs = []
"""List of auto-numbered footnote_reference nodes."""
self.symbol_footnotes = []
"""List of symbol footnote nodes."""
self.symbol_footnote_refs = []
"""List of symbol footnote_reference nodes."""
self.footnotes = []
"""List of manually-numbered footnote nodes."""
self.citations = []
"""List of citation nodes."""
self.autofootnote_start = 1
"""Initial auto-numbered footnote number."""
self.symbol_footnote_start = 0
"""Initial symbol footnote symbol index."""
self.id_start = 1
"""Initial ID number."""
self.parse_messages = []
"""System messages generated while parsing."""
self.transform_messages = []
"""System messages generated while applying transforms."""
import docutils.transforms
self.transformer = docutils.transforms.Transformer(self)
"""Storage for transforms to be applied to this document."""
self.decoration = None
"""Document's `decoration` node."""
self.document = self
def __getstate__(self):
"""
Return dict with unpicklable references removed.
"""
state = self.__dict__.copy()
state['reporter'] = None
state['transformer'] = None
return state
def asdom(self, dom=None):
"""Return a DOM representation of this document."""
if dom is None:
import xml.dom.minidom as dom
domroot = dom.Document()
domroot.appendChild(self._dom_node(domroot))
return domroot
def set_id(self, node, msgnode=None):
for id in node['ids']:
if id in self.ids and self.ids[id] is not node:
msg = self.reporter.severe('Duplicate ID: "%s".' % id)
if msgnode != None:
msgnode += msg
if not node['ids']:
for name in node['names']:
id = self.settings.id_prefix + make_id(name)
if id and id not in self.ids:
break
else:
id = ''
while not id or id in self.ids:
id = (self.settings.id_prefix +
self.settings.auto_id_prefix + str(self.id_start))
self.id_start += 1
node['ids'].append(id)
self.ids[id] = node
return id
def set_name_id_map(self, node, id, msgnode=None, explicit=None):
"""
`self.nameids` maps names to IDs, while `self.nametypes` maps names to
booleans representing hyperlink type (True==explicit,
False==implicit). This method updates the mappings.
The following state transition table shows how `self.nameids` ("ids")
and `self.nametypes` ("types") change with new input (a call to this
method), and what actions are performed ("implicit"-type system
messages are INFO/1, and "explicit"-type system messages are ERROR/3):
==== ===== ======== ======== ======= ==== ===== =====
Old State Input Action New State Notes
----------- -------- ----------------- ----------- -----
ids types new type sys.msg. dupname ids types
==== ===== ======== ======== ======= ==== ===== =====
- - explicit - - new True
- - implicit - - new False
None False explicit - - new True
old False explicit implicit old new True
None True explicit explicit new None True
old True explicit explicit new,old None True [#]_
None False implicit implicit new None False
old False implicit implicit new,old None False
None True implicit implicit new None True
old True implicit implicit new old True
==== ===== ======== ======== ======= ==== ===== =====
.. [#] Do not clear the name-to-id map or invalidate the old target if
both old and new targets are external and refer to identical URIs.
The new target is invalidated regardless.
"""
for name in node['names']:
if name in self.nameids:
self.set_duplicate_name_id(node, id, name, msgnode, explicit)
else:
self.nameids[name] = id
self.nametypes[name] = explicit
def set_duplicate_name_id(self, node, id, name, msgnode, explicit):
old_id = self.nameids[name]
old_explicit = self.nametypes[name]
self.nametypes[name] = old_explicit or explicit
if explicit:
if old_explicit:
level = 2
if old_id is not None:
old_node = self.ids[old_id]
if 'refuri' in node:
refuri = node['refuri']
if old_node['names'] \
and 'refuri' in old_node \
and old_node['refuri'] == refuri:
level = 1 # just inform if refuri's identical
if level > 1:
dupname(old_node, name)
self.nameids[name] = None
msg = self.reporter.system_message(
level, 'Duplicate explicit target name: "%s".' % name,
backrefs=[id], base_node=node)
if msgnode != None:
msgnode += msg
dupname(node, name)
else:
self.nameids[name] = id
if old_id is not None:
old_node = self.ids[old_id]
dupname(old_node, name)
else:
if old_id is not None and not old_explicit:
self.nameids[name] = None
old_node = self.ids[old_id]
dupname(old_node, name)
dupname(node, name)
if not explicit or (not old_explicit and old_id is not None):
msg = self.reporter.info(
'Duplicate implicit target name: "%s".' % name,
backrefs=[id], base_node=node)
if msgnode != None:
msgnode += msg
def has_name(self, name):
return name in self.nameids
# "note" here is an imperative verb: "take note of".
def note_implicit_target(self, target, msgnode=None):
id = self.set_id(target, msgnode)
self.set_name_id_map(target, id, msgnode, explicit=None)
def note_explicit_target(self, target, msgnode=None):
id = self.set_id(target, msgnode)
self.set_name_id_map(target, id, msgnode, explicit=1)
def note_refname(self, node):
self.refnames.setdefault(node['refname'], []).append(node)
def note_refid(self, node):
self.refids.setdefault(node['refid'], []).append(node)
def note_indirect_target(self, target):
self.indirect_targets.append(target)
if target['names']:
self.note_refname(target)
def note_anonymous_target(self, target):
self.set_id(target)
def note_autofootnote(self, footnote):
self.set_id(footnote)
self.autofootnotes.append(footnote)
def note_autofootnote_ref(self, ref):
self.set_id(ref)
self.autofootnote_refs.append(ref)
def note_symbol_footnote(self, footnote):
self.set_id(footnote)
self.symbol_footnotes.append(footnote)
def note_symbol_footnote_ref(self, ref):
self.set_id(ref)
self.symbol_footnote_refs.append(ref)
def note_footnote(self, footnote):
self.set_id(footnote)
self.footnotes.append(footnote)
def note_footnote_ref(self, ref):
self.set_id(ref)
self.footnote_refs.setdefault(ref['refname'], []).append(ref)
self.note_refname(ref)
def note_citation(self, citation):
self.citations.append(citation)
def note_citation_ref(self, ref):
self.set_id(ref)
self.citation_refs.setdefault(ref['refname'], []).append(ref)
self.note_refname(ref)
def note_substitution_def(self, subdef, def_name, msgnode=None):
name = whitespace_normalize_name(def_name)
if name in self.substitution_defs:
msg = self.reporter.error(
'Duplicate substitution definition name: "%s".' % name,
base_node=subdef)
if msgnode != None:
msgnode += msg
oldnode = self.substitution_defs[name]
dupname(oldnode, name)
# keep only the last definition:
self.substitution_defs[name] = subdef
# case-insensitive mapping:
self.substitution_names[fully_normalize_name(name)] = name
def note_substitution_ref(self, subref, refname):
subref['refname'] = whitespace_normalize_name(refname)
def note_pending(self, pending, priority=None):
self.transformer.add_pending(pending, priority)
def note_parse_message(self, message):
self.parse_messages.append(message)
def note_transform_message(self, message):
self.transform_messages.append(message)
def note_source(self, source, offset):
self.current_source = source
if offset is None:
self.current_line = offset
else:
self.current_line = offset + 1
def copy(self):
return self.__class__(self.settings, self.reporter,
**self.attributes)
def get_decoration(self):
if not self.decoration:
self.decoration = decoration()
index = self.first_child_not_matching_class(Titular)
if index is None:
self.append(self.decoration)
else:
self.insert(index, self.decoration)
return self.decoration
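# Hedged sketch (assumes the public docutils.utils/docutils.frontend APIs
# referenced in the document class docstring above): build a minimal
# document and append a paragraph; `paragraph` is defined later in this
# module, which is fine at call time.
def _document_usage_example():
    from docutils.utils import new_document
    from docutils.frontend import OptionParser
    settings = OptionParser(components=()).get_default_values()
    doc = new_document('<example>', settings)
    doc += paragraph('', u'hello world')
    return doc.astext()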
# ================
# Title Elements
# ================
class title(Titular, PreBibliographic, TextElement): pass
class subtitle(Titular, PreBibliographic, TextElement): pass
class rubric(Titular, TextElement): pass
# ========================
# Bibliographic Elements
# ========================
class docinfo(Bibliographic, Element): pass
class author(Bibliographic, TextElement): pass
class authors(Bibliographic, Element): pass
class organization(Bibliographic, TextElement): pass
class address(Bibliographic, FixedTextElement): pass
class contact(Bibliographic, TextElement): pass
class version(Bibliographic, TextElement): pass
class revision(Bibliographic, TextElement): pass
class status(Bibliographic, TextElement): pass
class date(Bibliographic, TextElement): pass
class copyright(Bibliographic, TextElement): pass
# =====================
# Decorative Elements
# =====================
class decoration(Decorative, Element):
def get_header(self):
if not len(self.children) or not isinstance(self.children[0], header):
self.insert(0, header())
return self.children[0]
def get_footer(self):
if not len(self.children) or not isinstance(self.children[-1], footer):
self.append(footer())
return self.children[-1]
class header(Decorative, Element): pass
class footer(Decorative, Element): pass
# =====================
# Structural Elements
# =====================
class section(Structural, Element): pass
class topic(Structural, Element):
"""
Topics are terminal, "leaf" mini-sections, like block quotes with titles,
or textual figures. A topic is just like a section, except that it has no
subsections, and it doesn't have to conform to section placement rules.
Topics are allowed wherever body elements (list, table, etc.) are allowed,
but only at the top level of a section or document. Topics cannot nest
inside topics, sidebars, or body elements; you can't have a topic inside a
table, list, block quote, etc.
"""
class sidebar(Structural, Element):
"""
Sidebars are like miniature, parallel documents that occur inside other
documents, providing related or reference material. A sidebar is
typically offset by a border and "floats" to the side of the page; the
document's main text may flow around it. Sidebars can also be likened to
super-footnotes; their content is outside of the flow of the document's
main text.
Sidebars are allowed wherever body elements (list, table, etc.) are
allowed, but only at the top level of a section or document. Sidebars
cannot nest inside sidebars, topics, or body elements; you can't have a
sidebar inside a table, list, block quote, etc.
"""
class transition(Structural, Element): pass
# ===============
# Body Elements
# ===============
class paragraph(General, TextElement): pass
class compound(General, Element): pass
class container(General, Element): pass
class bullet_list(Sequential, Element): pass
class enumerated_list(Sequential, Element): pass
class list_item(Part, Element): pass
class definition_list(Sequential, Element): pass
class definition_list_item(Part, Element): pass
class term(Part, TextElement): pass
class classifier(Part, TextElement): pass
class definition(Part, Element): pass
class field_list(Sequential, Element): pass
class field(Part, Element): pass
class field_name(Part, TextElement): pass
class field_body(Part, Element): pass
class option(Part, Element):
child_text_separator = ''
class option_argument(Part, TextElement):
def astext(self):
return self.get('delimiter', ' ') + TextElement.astext(self)
class option_group(Part, Element):
child_text_separator = ', '
class option_list(Sequential, Element): pass
class option_list_item(Part, Element):
child_text_separator = ' '
class option_string(Part, TextElement): pass
class description(Part, Element): pass
class literal_block(General, FixedTextElement): pass
class doctest_block(General, FixedTextElement): pass
class math_block(General, FixedTextElement): pass
class line_block(General, Element): pass
class line(Part, TextElement):
indent = None
class block_quote(General, Element): pass
class attribution(Part, TextElement): pass
class attention(Admonition, Element): pass
class caution(Admonition, Element): pass
class danger(Admonition, Element): pass
class error(Admonition, Element): pass
class important(Admonition, Element): pass
class note(Admonition, Element): pass
class tip(Admonition, Element): pass
class hint(Admonition, Element): pass
class warning(Admonition, Element): pass
class admonition(Admonition, Element): pass
class comment(Special, Invisible, FixedTextElement): pass
class substitution_definition(Special, Invisible, TextElement): pass
class target(Special, Invisible, Inline, TextElement, Targetable): pass
class footnote(General, BackLinkable, Element, Labeled, Targetable): pass
class citation(General, BackLinkable, Element, Labeled, Targetable): pass
class label(Part, TextElement): pass
class figure(General, Element): pass
class caption(Part, TextElement): pass
class legend(Part, Element): pass
class table(General, Element): pass
class tgroup(Part, Element): pass
class colspec(Part, Element): pass
class thead(Part, Element): pass
class tbody(Part, Element): pass
class row(Part, Element): pass
class entry(Part, Element): pass
class system_message(Special, BackLinkable, PreBibliographic, Element):
"""
System message element.
Do not instantiate this class directly; use
``document.reporter.info/warning/error/severe()`` instead.
"""
def __init__(self, message=None, *children, **attributes):
if message:
p = paragraph('', message)
children = (p,) + children
try:
Element.__init__(self, '', *children, **attributes)
except:
print 'system_message: children=%r' % (children,)
raise
def astext(self):
line = self.get('line', '')
return u'%s:%s: (%s/%s) %s' % (self['source'], line, self['type'],
self['level'], Element.astext(self))
class pending(Special, Invisible, Element):
"""
The "pending" element is used to encapsulate a pending operation: the
operation (transform), the point at which to apply it, and any data it
requires. Only the pending operation's location within the document is
stored in the public document tree (by the "pending" object itself); the
operation and its data are stored in the "pending" object's internal
instance attributes.
For example, say you want a table of contents in your reStructuredText
document. The easiest way to specify where to put it is from within the
document, with a directive::
.. contents::
But the "contents" directive can't do its work until the entire document
has been parsed and possibly transformed to some extent. So the directive
code leaves a placeholder behind that will trigger the second phase of its
processing, something like this::
<pending ...public attributes...> + internal attributes
Use `document.note_pending()` so that the
`docutils.transforms.Transformer` stage of processing can run all pending
transforms.
"""
def __init__(self, transform, details=None,
rawsource='', *children, **attributes):
Element.__init__(self, rawsource, *children, **attributes)
self.transform = transform
"""The `docutils.transforms.Transform` class implementing the pending
operation."""
self.details = details or {}
"""Detail data (dictionary) required by the pending operation."""
def pformat(self, indent=' ', level=0):
internals = [
'.. internal attributes:',
' .transform: %s.%s' % (self.transform.__module__,
self.transform.__name__),
' .details:']
details = self.details.items()
details.sort()
for key, value in details:
if isinstance(value, Node):
internals.append('%7s%s:' % ('', key))
internals.extend(['%9s%s' % ('', line)
for line in value.pformat().splitlines()])
elif value and isinstance(value, list) \
and isinstance(value[0], Node):
internals.append('%7s%s:' % ('', key))
for v in value:
internals.extend(['%9s%s' % ('', line)
for line in v.pformat().splitlines()])
else:
internals.append('%7s%s: %r' % ('', key, value))
return (Element.pformat(self, indent, level)
+ ''.join([(' %s%s\n' % (indent * level, line))
for line in internals]))
def copy(self):
return self.__class__(self.transform, self.details, self.rawsource,
**self.attributes)
class raw(Special, Inline, PreBibliographic, FixedTextElement):
"""
Raw data that is to be passed untouched to the Writer.
"""
pass
# =================
# Inline Elements
# =================
class emphasis(Inline, TextElement): pass
class strong(Inline, TextElement): pass
class literal(Inline, TextElement): pass
class reference(General, Inline, Referential, TextElement): pass
class footnote_reference(Inline, Referential, TextElement): pass
class citation_reference(Inline, Referential, TextElement): pass
class substitution_reference(Inline, TextElement): pass
class title_reference(Inline, TextElement): pass
class abbreviation(Inline, TextElement): pass
class acronym(Inline, TextElement): pass
class superscript(Inline, TextElement): pass
class subscript(Inline, TextElement): pass
class math(Inline, TextElement): pass
class image(General, Inline, Element):
def astext(self):
return self.get('alt', '')
class inline(Inline, TextElement): pass
class problematic(Inline, TextElement): pass
class generated(Inline, TextElement): pass
# ========================================
# Auxiliary Classes, Functions, and Data
# ========================================
node_class_names = """
Text
abbreviation acronym address admonition attention attribution author
authors
block_quote bullet_list
caption caution citation citation_reference classifier colspec comment
compound contact container copyright
danger date decoration definition definition_list definition_list_item
description docinfo doctest_block document
emphasis entry enumerated_list error
field field_body field_list field_name figure footer
footnote footnote_reference
generated
header hint
image important inline
label legend line line_block list_item literal literal_block
math math_block
note
option option_argument option_group option_list option_list_item
option_string organization
paragraph pending problematic
raw reference revision row rubric
section sidebar status strong subscript substitution_definition
substitution_reference subtitle superscript system_message
table target tbody term tgroup thead tip title title_reference topic
transition
version
warning""".split()
"""A list of names of all concrete Node subclasses."""
class NodeVisitor:
"""
"Visitor" pattern [GoF95]_ abstract superclass implementation for
document tree traversals.
Each node class has corresponding methods, doing nothing by
default; override individual methods for specific and useful
behaviour. The `dispatch_visit()` method is called by
`Node.walk()` upon entering a node. `Node.walkabout()` also calls
the `dispatch_departure()` method before exiting a node.
The dispatch methods call "``visit_`` + node class name" or
"``depart_`` + node class name", resp.
This is a base class for visitors whose ``visit_...`` & ``depart_...``
methods should be implemented for *all* node types encountered (such as
for `docutils.writers.Writer` subclasses). Unimplemented methods will
raise exceptions.
For sparse traversals, where only certain node types are of interest,
subclass `SparseNodeVisitor` instead. When (mostly or entirely) uniform
processing is desired, subclass `GenericNodeVisitor`.
.. [GoF95] Gamma, Helm, Johnson, Vlissides. *Design Patterns: Elements of
Reusable Object-Oriented Software*. Addison-Wesley, Reading, MA, USA,
1995.
"""
optional = ()
"""
Tuple containing node class names (as strings).
No exception will be raised if writers do not implement visit
or departure functions for these node classes.
Used to ensure transitional compatibility with existing 3rd-party writers.
"""
def __init__(self, document):
self.document = document
def dispatch_visit(self, node):
"""
Call self."``visit_`` + node class name" with `node` as
parameter. If the ``visit_...`` method does not exist, call
self.unknown_visit.
"""
node_name = node.__class__.__name__
method = getattr(self, 'visit_' + node_name, self.unknown_visit)
self.document.reporter.debug(
'docutils.nodes.NodeVisitor.dispatch_visit calling %s for %s'
% (method.__name__, node_name))
return method(node)
def dispatch_departure(self, node):
"""
Call self."``depart_`` + node class name" with `node` as
parameter. If the ``depart_...`` method does not exist, call
self.unknown_departure.
"""
node_name = node.__class__.__name__
method = getattr(self, 'depart_' + node_name, self.unknown_departure)
self.document.reporter.debug(
'docutils.nodes.NodeVisitor.dispatch_departure calling %s for %s'
% (method.__name__, node_name))
return method(node)
def unknown_visit(self, node):
"""
Called when entering unknown `Node` types.
Raise an exception unless overridden.
"""
if (self.document.settings.strict_visitor
or node.__class__.__name__ not in self.optional):
raise NotImplementedError(
'%s visiting unknown node type: %s'
% (self.__class__, node.__class__.__name__))
def unknown_departure(self, node):
"""
Called before exiting unknown `Node` types.
Raise exception unless overridden.
"""
if (self.document.settings.strict_visitor
or node.__class__.__name__ not in self.optional):
raise NotImplementedError(
'%s departing unknown node type: %s'
% (self.__class__, node.__class__.__name__))
class SparseNodeVisitor(NodeVisitor):
"""
Base class for sparse traversals, where only certain node types are of
interest. When ``visit_...`` & ``depart_...`` methods should be
implemented for *all* node types (such as for `docutils.writers.Writer`
subclasses), subclass `NodeVisitor` instead.
"""
class GenericNodeVisitor(NodeVisitor):
"""
Generic "Visitor" abstract superclass, for simple traversals.
Unless overridden, each ``visit_...`` method calls `default_visit()`, and
each ``depart_...`` method (when using `Node.walkabout()`) calls
`default_departure()`. `default_visit()` (and `default_departure()`) must
be overridden in subclasses.
Define fully generic visitors by overriding `default_visit()` (and
`default_departure()`) only. Define semi-generic visitors by overriding
individual ``visit_...()`` (and ``depart_...()``) methods also.
`NodeVisitor.unknown_visit()` (`NodeVisitor.unknown_departure()`) should
be overridden for default behavior.
"""
def default_visit(self, node):
"""Override for generic, uniform traversals."""
raise NotImplementedError
def default_departure(self, node):
"""Override for generic, uniform traversals."""
raise NotImplementedError
def _call_default_visit(self, node):
self.default_visit(node)
def _call_default_departure(self, node):
self.default_departure(node)
def _nop(self, node):
pass
def _add_node_class_names(names):
"""Save typing with dynamic assignments:"""
for _name in names:
setattr(GenericNodeVisitor, "visit_" + _name, _call_default_visit)
setattr(GenericNodeVisitor, "depart_" + _name, _call_default_departure)
setattr(SparseNodeVisitor, 'visit_' + _name, _nop)
setattr(SparseNodeVisitor, 'depart_' + _name, _nop)
_add_node_class_names(node_class_names)
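# Hedged usage sketch (not part of docutils): a sparse visitor that counts
# paragraph nodes, driven by Node.walk() as described in the NodeVisitor
# docstring. `document` is assumed to be a fully built document node.
def _visitor_usage_example(document):
    class ParagraphCounter(SparseNodeVisitor):
        count = 0
        def visit_paragraph(self, node):
            self.count += 1
    visitor = ParagraphCounter(document)
    document.walk(visitor)   # walk() calls dispatch_visit for each node
    return visitor.count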
class TreeCopyVisitor(GenericNodeVisitor):
"""
Make a complete copy of a tree or branch, including element attributes.
"""
def __init__(self, document):
GenericNodeVisitor.__init__(self, document)
self.parent_stack = []
self.parent = []
def get_tree_copy(self):
return self.parent[0]
def default_visit(self, node):
"""Copy the current node, and make it the new acting parent."""
newnode = node.copy()
self.parent.append(newnode)
self.parent_stack.append(self.parent)
self.parent = newnode
def default_departure(self, node):
"""Restore the previous acting parent."""
self.parent = self.parent_stack.pop()
class TreePruningException(Exception):
"""
Base class for `NodeVisitor`-related tree pruning exceptions.
Raise subclasses from within ``visit_...`` or ``depart_...`` methods
called from `Node.walk()` and `Node.walkabout()` tree traversals to prune
the tree traversed.
"""
pass
class SkipChildren(TreePruningException):
"""
Do not visit any children of the current node. The current node's
siblings and ``depart_...`` method are not affected.
"""
pass
class SkipSiblings(TreePruningException):
"""
Do not visit any more siblings (to the right) of the current node. The
current node's children and its ``depart_...`` method are not affected.
"""
pass
class SkipNode(TreePruningException):
"""
Do not visit the current node's children, and do not call the current
node's ``depart_...`` method.
"""
pass
class SkipDeparture(TreePruningException):
"""
Do not call the current node's ``depart_...`` method. The current node's
children and siblings are not affected.
"""
pass
class NodeFound(TreePruningException):
"""
Raise to indicate that the target of a search has been found. This
exception must be caught by the client; it is not caught by the traversal
code.
"""
pass
class StopTraversal(TreePruningException):
"""
    Stop the traversal altogether. The current node's ``depart_...`` method
    is not affected. The parent nodes' ``depart_...`` methods are also called
    as usual. No other nodes are visited. This is an alternative to
NodeFound that does not cause exception handling to trickle up to the
caller.
"""
pass
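# Hedged sketch (illustrative only): tree pruning from inside a visitor.
# Raising SkipNode, as documented above, prevents both the descent into the
# node's children and the matching ``depart_...`` call.
def _pruning_usage_example(document):
    class CommentSkipper(SparseNodeVisitor):
        def visit_comment(self, node):
            raise SkipNode   # comments and their children are ignored
    document.walk(CommentSkipper(document))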
def make_id(string):
"""
Convert `string` into an identifier and return it.
Docutils identifiers will conform to the regular expression
``[a-z](-?[a-z0-9]+)*``. For CSS compatibility, identifiers (the "class"
and "id" attributes) should have no underscores, colons, or periods.
Hyphens may be used.
- The `HTML 4.01 spec`_ defines identifiers based on SGML tokens:
ID and NAME tokens must begin with a letter ([A-Za-z]) and may be
followed by any number of letters, digits ([0-9]), hyphens ("-"),
underscores ("_"), colons (":"), and periods (".").
- However the `CSS1 spec`_ defines identifiers based on the "name" token,
a tighter interpretation ("flex" tokenizer notation; "latin1" and
"escape" 8-bit characters have been replaced with entities)::
unicode \\[0-9a-f]{1,4}
latin1 [¡-ÿ]
escape {unicode}|\\[ -~¡-ÿ]
nmchar [-a-z0-9]|{latin1}|{escape}
name {nmchar}+
The CSS1 "nmchar" rule does not include underscores ("_"), colons (":"),
or periods ("."), therefore "class" and "id" attributes should not contain
these characters. They should be replaced with hyphens ("-"). Combined
with HTML's requirements (the first character must be a letter; no
"unicode", "latin1", or "escape" characters), this results in the
``[a-z](-?[a-z0-9]+)*`` pattern.
.. _HTML 4.01 spec: http://www.w3.org/TR/html401
.. _CSS1 spec: http://www.w3.org/TR/REC-CSS1
"""
id = string.lower()
if not isinstance(id, unicode):
id = id.decode()
id = id.translate(_non_id_translate_digraphs)
id = id.translate(_non_id_translate)
# get rid of non-ascii characters.
# 'ascii' lowercase to prevent problems with turkish locale.
id = unicodedata.normalize('NFKD', id).\
encode('ascii', 'ignore').decode('ascii')
# shrink runs of whitespace and replace by hyphen
id = _non_id_chars.sub('-', ' '.join(id.split()))
id = _non_id_at_ends.sub('', id)
return str(id)
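# Hedged doctest-style sketch: make_id() applied to a mixed string. Runs of
# characters outside [a-z0-9] collapse to single hyphens and leading digits
# and hyphens are stripped, so u'2. Sample_Title' becomes 'sample-title'.
def _make_id_example():
    return make_id(u'2. Sample_Title')   # expected: 'sample-title'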
_non_id_chars = re.compile('[^a-z0-9]+')
_non_id_at_ends = re.compile('^[-0-9]+|-+$')
_non_id_translate = {
0x00f8: u'o', # o with stroke
0x0111: u'd', # d with stroke
0x0127: u'h', # h with stroke
0x0131: u'i', # dotless i
0x0142: u'l', # l with stroke
0x0167: u't', # t with stroke
0x0180: u'b', # b with stroke
0x0183: u'b', # b with topbar
0x0188: u'c', # c with hook
0x018c: u'd', # d with topbar
0x0192: u'f', # f with hook
0x0199: u'k', # k with hook
0x019a: u'l', # l with bar
0x019e: u'n', # n with long right leg
0x01a5: u'p', # p with hook
0x01ab: u't', # t with palatal hook
0x01ad: u't', # t with hook
0x01b4: u'y', # y with hook
0x01b6: u'z', # z with stroke
0x01e5: u'g', # g with stroke
0x0225: u'z', # z with hook
0x0234: u'l', # l with curl
0x0235: u'n', # n with curl
0x0236: u't', # t with curl
0x0237: u'j', # dotless j
0x023c: u'c', # c with stroke
0x023f: u's', # s with swash tail
0x0240: u'z', # z with swash tail
0x0247: u'e', # e with stroke
0x0249: u'j', # j with stroke
0x024b: u'q', # q with hook tail
0x024d: u'r', # r with stroke
0x024f: u'y', # y with stroke
}
_non_id_translate_digraphs = {
0x00df: u'sz', # ligature sz
0x00e6: u'ae', # ae
0x0153: u'oe', # ligature oe
0x0238: u'db', # db digraph
0x0239: u'qp', # qp digraph
}
def dupname(node, name):
node['dupnames'].append(name)
node['names'].remove(name)
    # Assume that this node is referenced, even though it isn't; we
# don't want to throw unnecessary system_messages.
node.referenced = 1
def fully_normalize_name(name):
"""Return a case- and whitespace-normalized name."""
return ' '.join(name.lower().split())
def whitespace_normalize_name(name):
"""Return a whitespace-normalized name."""
return ' '.join(name.split())
def serial_escape(value):
"""Escape string values that are elements of a list, for serialization."""
return value.replace('\\', r'\\').replace(' ', r'\ ')
#
#
# Local Variables:
# indent-tabs-mode: nil
# sentence-end-double-space: t
# fill-column: 78
# End:
|
chirilo/remo
|
vendor-local/lib/python/docutils/nodes.py
|
Python
|
bsd-3-clause
| 64,968
|
[
"VisIt"
] |
74fd6db8a00a8055d6862f8f23e7ec979f37e746e3ba6a9fe8e577986f6b6102
|
import numpy as np
from numpy import fft
from .plotter_utils_consts import n_pts_smooth, default_fourier_n_harm
def gauss(x, a, x0, sigma):
return a * np.exp(-(x - x0) ** 2 / (2 * sigma ** 2))
def gaussian_fit(x, y, x_smooth=None, n_pts=n_pts_smooth):
"""
Fits a Gaussian to some data - x and y. Returns predicted interpolation values.
Parameters
----------
x: list-like
The x values of the data to fit to. Must have range [0,1].
y: list-like
The y values of the data to fit to.
x_smooth: list-like
        The exact x values to interpolate for. Supersedes `n_pts`.
n_pts: int
The number of evenly spaced points spanning the range of `x` to interpolate for.
Returns
-------
x_smooth, y_smooth: numpy.ndarray
The smoothed x and y values of the curve fit.
"""
from scipy.optimize import curve_fit
from .scale import np_scale
if x_smooth is None:
        x_smooth_inds = np.linspace(0, len(x) - 1, n_pts)  # last valid index, matching the other fits
x_smooth = np.interp(x_smooth_inds, np.arange(len(x)), x)
mean, sigma = np.nanmean(y), np.nanstd(y)
popt, pcov = curve_fit(gauss, np_scale(x), y, p0=[1, mean, sigma],
maxfev=np.iinfo(np.int32).max)
y_smooth = gauss(np_scale(x_smooth), *popt)
return x_smooth, y_smooth
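# Hedged usage sketch (not part of the library): fit the module's `gauss`
# shape to noisy synthetic samples via gaussian_fit(). Data and parameter
# values below are illustrative only.
def _gaussian_fit_example():
    x = np.linspace(0, 1, 50)
    y = gauss(x, a=1.0, x0=0.5, sigma=0.1) + np.random.normal(0, 0.02, x.size)
    x_s, y_s = gaussian_fit(x, y, n_pts=200)
    return x_s, y_s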
def gaussian_filter_fit(x, y, x_smooth=None, n_pts=n_pts_smooth, sigma=0.75):
"""
Fits a Gaussian filter to some data - x and y. Returns predicted interpolation values.
    Currently, smoothing is achieved by fitting a cubic spline to the Gaussian filter fit
of `x` and `y`.
Parameters
----------
x: list-like
The x values of the data to fit to.
y: list-like
The y values of the data to fit to.
x_smooth: list-like, optional
        The exact x values to interpolate for. Supersedes `n_pts`.
n_pts: int, optional
The number of evenly spaced points spanning the range of `x` to interpolate for.
sigma: numeric, optional
The standard deviation of the Gaussian kernel. A larger value yields a smoother curve,
        but also reduces the closeness of the fit. By default, it is `0.75`.
Returns
-------
x_smooth, y_smooth: numpy.ndarray
The smoothed x and y values of the curve fit.
"""
from scipy.interpolate import CubicSpline
from scipy.ndimage.filters import gaussian_filter1d
if x_smooth is None:
x_smooth_inds = np.linspace(0, len(x)-1, n_pts)
x_smooth = np.interp(x_smooth_inds, np.arange(len(x)), x)
gauss_filter_y = gaussian_filter1d(y, sigma)
cs = CubicSpline(x, gauss_filter_y)
y_smooth = cs(x_smooth)
return x_smooth, y_smooth
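# Hedged usage sketch: gaussian_filter_fit() on a noisy sine. A larger
# `sigma` smooths more aggressively at the cost of fit closeness, as noted
# in the docstring above. Values here are illustrative.
def _gaussian_filter_fit_example():
    x = np.linspace(0, 1, 50)
    y = np.sin(4 * np.pi * x) + np.random.normal(0, 0.1, x.size)
    x_s, y_s = gaussian_filter_fit(x, y, n_pts=200, sigma=2.0)
    return x_s, y_s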
def poly_fit(x, y, degree, x_smooth=None, n_pts=n_pts_smooth):
"""
Fits a polynomial of any positive integer degree to some data - x and y. Returns predicted interpolation values.
Parameters
----------
x: list-like
The x values of the data to fit to.
y: list-like
The y values of the data to fit to.
    degree: int
        The degree of the polynomial to fit.
    x_smooth: list-like
        The exact x values to interpolate for. Supersedes `n_pts`.
    n_pts: int
        The number of evenly spaced points spanning the range of `x` to interpolate for.
Returns
-------
x_smooth, y_smooth: numpy.ndarray
The smoothed x and y values of the curve fit.
"""
if x_smooth is None:
x_smooth_inds = np.linspace(0, len(x)-1, n_pts)
x_smooth = np.interp(x_smooth_inds, np.arange(len(x)), x)
    # np.polyfit returns coefficients ordered from highest degree to lowest;
    # np.polyval evaluates that polynomial at each point of `x_smooth`.
    y_smooth = np.polyval(np.polyfit(x, y, degree), x_smooth)
return x_smooth, y_smooth
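# Hedged usage sketch: quadratic data recovered by poly_fit(). The fitted
# curve is evaluated on an evenly spaced grid spanning `x`.
def _poly_fit_example():
    x = np.linspace(0, 1, 30)
    y = 2 * x ** 2 - x + 0.5
    x_s, y_s = poly_fit(x, y, degree=2)
    return x_s, y_s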
def fourier_fit(x, y, n_predict=0, x_smooth=None, n_pts=n_pts_smooth,
n_harm=default_fourier_n_harm):
"""
Creates a Fourier fit of a NumPy array. Also supports extrapolation.
Credit goes to https://gist.github.com/tartakynov/83f3cd8f44208a1856ce.
Parameters
----------
x, y: numpy.ndarray
1D NumPy arrays of the x and y values to fit to.
Must not contain NaNs.
n_predict: int
The number of points to extrapolate.
The points will be spaced evenly by the mean spacing of values in `x`.
x_smooth: list-like, optional
        The exact x values to interpolate for. Supersedes `n_pts`.
n_pts: int, optional
The number of evenly spaced points spanning the range of `x` to interpolate for.
n_harm: int
The number of harmonics to use. A higher value yields a closer fit.
Returns
-------
x_smooth, y_smooth: numpy.ndarray
The smoothed x and y values of the curve fit.
"""
if x_smooth is None:
x_smooth_inds = np.linspace(0, len(x)-1, n_pts)
x_smooth = np.interp(x_smooth_inds, np.arange(len(x)), x)
n_predict_smooth = int((len(x_smooth) / len(x)) * n_predict)
    # These points are evenly spaced for the Fourier fit implementation we use.
# More points are selected than are in `x_smooth` so we can interpolate accurately.
fourier_mult_pts = 2
x_smooth_fourier = np.linspace(x_smooth.min(), x_smooth.max(),
fourier_mult_pts * len(x_smooth))
y_smooth_fourier = np.interp(x_smooth_fourier, x, y)
n_predict_smooth_fourier = int((len(x_smooth_fourier) / len(x)) * n_predict)
# Perform the Fourier fit and extrapolation.
n = y_smooth_fourier.size
t = np.arange(0, n)
p = np.polyfit(t, y_smooth_fourier, 1) # find linear trend in arr
x_notrend = y_smooth_fourier - p[0] * t # detrended arr
x_freqdom = fft.fft(x_notrend) # detrended arr in frequency domain
f = fft.fftfreq(n) # frequencies
# sort indexes by frequency, lower -> higher
indexes = list(range(n))
indexes.sort(key=lambda i: np.absolute(x_freqdom[i]))
indexes.reverse()
t = np.arange(0, n + n_predict_smooth_fourier)
restored_sig = np.zeros(t.size)
for i in indexes[:1 + n_harm * 2]:
ampli = np.absolute(x_freqdom[i]) / n # amplitude
phase = np.angle(x_freqdom[i]) # phase
restored_sig += ampli * np.cos(2 * np.pi * f[i] * t + phase)
y_smooth_fourier = restored_sig + p[0] * t
# Find the points in `x_smooth_fourier` that are near to points in `x_smooth`
# and then interpolate the y values to match the new x values.
x_smooth = x_smooth_fourier[np.searchsorted(x_smooth_fourier, x_smooth)]
# Ensure `x_smooth` includes the extrapolations.
mean_x_smooth_space = np.diff(x_smooth).mean()
x_predict_smooth = np.linspace(x_smooth[-1] + mean_x_smooth_space,
x_smooth[-1] + mean_x_smooth_space * n_predict_smooth,
n_predict_smooth)
x_smooth = np.concatenate((x_smooth, x_predict_smooth))
# Ensure `x_smooth_fourier` includes the extrapolations.
    # Use the spacing of the Fourier grid itself for its extrapolation points.
    mean_x_smooth_fourier_space = np.diff(x_smooth_fourier).mean()
x_predict_smooth_fourier = \
np.linspace(
x_smooth_fourier[-1] + mean_x_smooth_fourier_space,
x_smooth_fourier[-1] + mean_x_smooth_fourier_space * n_predict_smooth_fourier,
n_predict_smooth_fourier)
x_smooth_fourier = np.concatenate((x_smooth_fourier, x_predict_smooth_fourier))
y_smooth = np.interp(x_smooth, x_smooth_fourier, y_smooth_fourier)
return x_smooth, y_smooth
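# Hedged usage sketch: fourier_fit() on a trending sinusoid, extrapolating
# 20 points past the data. `n_harm` trades fit closeness against smoothness;
# all values here are illustrative.
def _fourier_fit_example():
    x = np.linspace(0, 10, 100)
    y = np.sin(x) + 0.1 * x
    x_s, y_s = fourier_fit(x, y, n_predict=20, n_pts=300, n_harm=8)
    return x_s, y_s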
|
ceos-seo/data_cube_utilities
|
data_cube_utilities/curve_fitting.py
|
Python
|
apache-2.0
| 7,564
|
[
"Gaussian"
] |
0007202f0c82bbc38522ca2b46217826e962cfa8d851c5765d775bd8ba12e8af
|
#
# Gramps - a GTK+/GNOME based genealogy program
#
# Copyright (C) 2007-2008 Brian G. Matherly
# Copyright (C) 2008 Stephane Charette <stephanecharette@gmail.com>
# Contribution 2009 by Bob Ham <rah@bash.sh>
# Copyright (C) 2010 Jakim Friant
# Copyright (C) 2013-2014 Paul Franklin
# Copyright (C) 2015 Detlef Wolz <detlef.wolz@t-online.de>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
"""
Generate an hourglass graph using the Graphviz generator.
"""
#------------------------------------------------------------------------
#
# python modules
#
#------------------------------------------------------------------------
#------------------------------------------------------------------------
#
# Gramps modules
#
#------------------------------------------------------------------------
from gramps.gen.const import GRAMPS_LOCALE as glocale
_ = glocale.translation.gettext
from gramps.gen.errors import ReportError
from gramps.gen.plug.menu import (PersonOption, BooleanOption, NumberOption,
EnumeratedListOption, ColorOption)
from gramps.gen.plug.report import Report
from gramps.gen.plug.report import utils
from gramps.gen.plug.report import MenuReportOptions
from gramps.gen.plug.report import stdoptions
from gramps.gen.utils.db import get_birth_or_fallback, get_death_or_fallback
from gramps.gen.proxy import CacheProxyDb
#------------------------------------------------------------------------
#
# Constant options items
#
#------------------------------------------------------------------------
_COLORS = [{'name' : _("B&W outline"), 'value' : "outline"},
{'name' : _("Colored outline"), 'value' : "colored"},
{'name' : _("Color fill"), 'value' : "filled"}]
_ARROWS = [ { 'name' : _("Center -> Others"), 'value' : 'o' },
{ 'name' : _("Center <- Others"), 'value' : 'c' },
{ 'name' : _("Center <-> Other"), 'value' : 'co' },
{ 'name' : _("Center - Other"), 'value' : '' }]
#------------------------------------------------------------------------
#
# HourGlassReport
#
#------------------------------------------------------------------------
class HourGlassReport(Report):
"""
An hourglass report displays ancestors and descendants of a center person.
"""
def __init__(self, database, options, user):
"""
Create HourGlass object that produces the report.
name_format - Preferred format to display names
incl_private - Whether to include private data
incid - Whether to include IDs.
living_people - How to handle living people
years_past_death - Consider as living this many years after death
"""
Report.__init__(self, database, options, user)
menu = options.menu
lang = menu.get_option_by_name('trans').get_value()
locale = self.set_locale(lang)
stdoptions.run_private_data_option(self, menu)
stdoptions.run_living_people_option(self, menu, locale)
self.database = CacheProxyDb(self.database)
self.__db = self.database
self.__used_people = []
self.__family_father = [] # links allocated from family to father
self.__family_mother = [] # links allocated from family to mother
self.max_descend = menu.get_option_by_name('maxdescend').get_value()
self.max_ascend = menu.get_option_by_name('maxascend').get_value()
pid = menu.get_option_by_name('pid').get_value()
self.center_person = self.__db.get_person_from_gramps_id(pid)
if self.center_person is None:
raise ReportError(_("Person %s is not in the Database") % pid)
self.colorize = menu.get_option_by_name('color').get_value()
self.colors = {'male': menu.get_option_by_name('colormales').get_value(),
'female': menu.get_option_by_name('colorfemales').get_value(),
'unknown': menu.get_option_by_name('colorunknown').get_value(),
'family': menu.get_option_by_name('colorfamilies').get_value()
}
self.roundcorners = menu.get_option_by_name('roundcorners').get_value()
self.includeid = menu.get_option_by_name('incid').get_value()
arrow_str = menu.get_option_by_name('arrow').get_value()
if 'o' in arrow_str:
self.arrowheadstyle = 'normal'
else:
self.arrowheadstyle = 'none'
if 'c' in arrow_str:
self.arrowtailstyle = 'normal'
else:
self.arrowtailstyle = 'none'
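        # Per the _ARROWS option values above: 'o' draws arrowheads pointing
        # from the center person toward others, 'c' draws tails pointing back
        # toward the center; '' (plain lines) and 'co' combine accordingly.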
stdoptions.run_name_format_option(self, menu)
def write_report(self):
"""
Generate the report.
"""
self.add_person(self.center_person)
self.traverse_up(self.center_person, 1)
self.traverse_down(self.center_person, 1)
def traverse_down(self, person, gen):
"""
Recursively find the descendants of the given person.
"""
if gen > self.max_descend:
return
for family_handle in person.get_family_handle_list():
family = self.__db.get_family_from_handle(family_handle)
self.add_family(family)
self.doc.add_link(person.get_gramps_id(), family.get_gramps_id(),
head=self.arrowheadstyle,
tail=self.arrowtailstyle)
for child_ref in family.get_child_ref_list():
child_handle = child_ref.get_reference_handle()
if child_handle not in self.__used_people:
# Avoid going down paths twice when descendant cousins marry
self.__used_people.append(child_handle)
child = self.__db.get_person_from_handle(child_handle)
self.add_person(child)
self.doc.add_link(family.get_gramps_id(),
child.get_gramps_id(),
head=self.arrowheadstyle,
tail=self.arrowtailstyle)
self.traverse_down(child, gen+1)
def traverse_up(self, person, gen):
"""
Recursively find the ancestors of the given person.
"""
if gen > self.max_ascend:
return
family_handle = person.get_main_parents_family_handle()
if family_handle:
family = self.__db.get_family_from_handle(family_handle)
family_id = family.get_gramps_id()
self.add_family(family)
self.doc.add_link(family_id, person.get_gramps_id(),
head=self.arrowtailstyle,
tail=self.arrowheadstyle )
# create link from family to father
father_handle = family.get_father_handle()
if father_handle and family_handle not in self.__family_father:
# allocate only one father per family
self.__family_father.append(family_handle)
father = self.__db.get_person_from_handle(father_handle)
self.add_person(father)
self.doc.add_link(father.get_gramps_id(), family_id,
head=self.arrowtailstyle,
tail=self.arrowheadstyle )
# no need to go up if he is a father in another family
if father_handle not in self.__used_people:
self.__used_people.append(father_handle)
self.traverse_up(father, gen+1)
# create link from family to mother
mother_handle = family.get_mother_handle()
if mother_handle and family_handle not in self.__family_mother:
# allocate only one mother per family
self.__family_mother.append(family_handle)
mother = self.__db.get_person_from_handle(mother_handle)
self.add_person(mother)
self.doc.add_link(mother.get_gramps_id(), family_id,
head=self.arrowtailstyle,
tail=self.arrowheadstyle)
# no need to go up if she is a mother in another family
if mother_handle not in self.__used_people:
self.__used_people.append(mother_handle)
self.traverse_up(mother, gen+1)
def add_person(self, person):
"""
Add a person to the Graph. The node id will be the person's gramps id.
"""
p_id = person.get_gramps_id()
name = self._name_display.display(person)
birth_evt = get_birth_or_fallback(self.__db, person)
if birth_evt:
birth = self._get_date(birth_evt.get_date_object())
else:
birth = ""
death_evt = get_death_or_fallback(self.__db, person)
if death_evt:
death = self._get_date(death_evt.get_date_object())
else:
death = ""
if self.includeid == 0: # no ID
label = "%s \\n(%s - %s)" % (name, birth, death)
elif self.includeid == 1: # same line
label = "%s (%s)\\n(%s - %s)" % (name, p_id, birth, death)
elif self.includeid == 2: # own line
label = "%s \\n(%s - %s)\\n(%s)" % (name, birth, death, p_id)
label = label.replace('"', '\\\"')
(shape, style, color, fill) = self.get_gender_style(person)
self.doc.add_node(p_id, label, shape, color, style, fill)
def add_family(self, family):
"""
Add a family to the Graph. The node id will be the family's gramps id.
"""
family_id = family.get_gramps_id()
label = ""
marriage = utils.find_marriage(self.__db, family)
if marriage:
label = self._get_date(marriage.get_date_object())
if self.includeid == 1 and label: # same line
label = "%s (%s)" % (label, family_id)
elif self.includeid == 1 and not label:
label = "(%s)" % family_id
elif self.includeid == 2 and label: # own line
label = "%s\\n(%s)" % (label, family_id)
elif self.includeid == 2 and not label:
label = "(%s)" % family_id
color = ""
fill = ""
style = "solid"
if self.colorize == 'colored':
color = self.colors['family']
elif self.colorize == 'filled':
fill = self.colors['family']
style = "filled"
self.doc.add_node(family_id, label, "ellipse", color, style, fill)
def get_gender_style(self, person):
"return gender specific person style"
gender = person.get_gender()
shape = "box"
style = "solid"
color = ""
fill = ""
if gender == person.FEMALE and self.roundcorners:
style = "rounded"
elif gender == person.UNKNOWN:
shape = "hexagon"
if self.colorize == 'colored':
if gender == person.MALE:
color = self.colors['male']
elif gender == person.FEMALE:
color = self.colors['female']
else:
color = self.colors['unknown']
elif self.colorize == 'filled':
style += ",filled"
if gender == person.MALE:
fill = self.colors['male']
elif gender == person.FEMALE:
fill = self.colors['female']
else:
fill = self.colors['unknown']
return (shape, style, color, fill)
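# Illustrative sketch (not part of the original plugin): the tuple returned by
# get_gender_style() feeds straight into the doc.add_node() call used in
# add_person() above. For example, a female person with rounded corners
# enabled and 'filled' coloring would be rendered roughly as:
#
#     shape, style, color, fill = "box", "rounded,filled", "", "#ffe0e0"
#     self.doc.add_node("I0001", "Jane Doe\\n(1900 - 1980)",
#                       shape, color, style, fill)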
#------------------------------------------------------------------------
#
# HourGlassOptions
#
#------------------------------------------------------------------------
class HourGlassOptions(MenuReportOptions):
"""
Defines options for the HourGlass report.
"""
def __init__(self, name, dbase):
MenuReportOptions.__init__(self, name, dbase)
def add_menu_options(self, menu):
"""
Create all the menu options for this report.
"""
category_name = _("Report Options")
pid = PersonOption(_("Center Person"))
pid.set_help(_("The Center person for the graph"))
menu.add_option(category_name, "pid", pid)
stdoptions.add_name_format_option(menu, category_name)
stdoptions.add_private_data_option(menu, category_name)
stdoptions.add_living_people_option(menu, category_name)
max_gen = NumberOption(_('Max Descendant Generations'), 10, 1, 15)
max_gen.set_help(_("The number of generations of descendants to "
"include in the graph"))
menu.add_option(category_name, "maxdescend", max_gen)
max_gen = NumberOption(_('Max Ancestor Generations'), 10, 1, 15)
max_gen.set_help(_("The number of generations of ancestors to "
"include in the graph"))
menu.add_option(category_name, "maxascend", max_gen)
include_id = EnumeratedListOption(_('Include Gramps ID'), 0)
include_id.add_item(0, _('Do not include'))
include_id.add_item(1, _('Share an existing line'))
include_id.add_item(2, _('On a line of its own'))
include_id.set_help(_("Whether (and where) to include Gramps IDs"))
menu.add_option(category_name, "incid", include_id)
stdoptions.add_localization_option(menu, category_name)
################################
category_name = _("Graph Style")
################################
color = EnumeratedListOption(_("Graph coloring"), "filled")
for i in range(0, len(_COLORS)):
color.add_item(_COLORS[i]["value"], _COLORS[i]["name"])
color.set_help(_("Males will be shown with blue, females "
"with red. If the sex of an individual "
"is unknown it will be shown with gray."))
menu.add_option(category_name, "color", color)
color_males = ColorOption(_('Males'), '#e0e0ff')
color_males.set_help(_('The color to use to display men.'))
menu.add_option(category_name, 'colormales', color_males)
color_females = ColorOption(_('Females'), '#ffe0e0')
color_females.set_help(_('The color to use to display women.'))
menu.add_option(category_name, 'colorfemales', color_females)
color_unknown = ColorOption(_('Unknown'), '#e0e0e0')
color_unknown.set_help(_('The color to use '
'when the gender is unknown.'))
menu.add_option(category_name, 'colorunknown', color_unknown)
color_family = ColorOption(_('Families'), '#ffffe0')
color_family.set_help(_('The color to use to display families.'))
menu.add_option(category_name, 'colorfamilies', color_family)
arrow = EnumeratedListOption(_("Arrowhead direction"), 'o')
for i in range(0, len(_ARROWS)):
arrow.add_item(_ARROWS[i]["value"], _ARROWS[i]["name"])
arrow.set_help(_("Choose the direction that the arrows point."))
menu.add_option(category_name, "arrow", arrow)
roundedcorners = BooleanOption(_("Use rounded corners"), False) # 2180
roundedcorners.set_help(
_("Use rounded corners to differentiate between women and men."))
menu.add_option(category_name, "roundcorners", roundedcorners)
| beernarrd/gramps | gramps/plugins/graph/gvhourglass.py | Python | gpl-2.0 | 16,031 | ["Brian"] | ba07f4ac59519190b289aed08a3cf52878071e96158c84b5e4d2185dd3eb9396 |
from __future__ import division
import pickle
from io import BytesIO
import numpy as np
import scipy.sparse
from sklearn.datasets import load_digits, load_iris
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.externals.six.moves import zip
from sklearn.utils.testing import assert_almost_equal
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import assert_raise_message
from sklearn.utils.testing import assert_greater
from sklearn.utils.testing import assert_warns
from sklearn.naive_bayes import GaussianNB, BernoulliNB
from sklearn.naive_bayes import MultinomialNB, ComplementNB
# Data is just 6 separable points in the plane
X = np.array([[-2, -1], [-1, -1], [-1, -2], [1, 1], [1, 2], [2, 1]])
y = np.array([1, 1, 1, 2, 2, 2])
# A bit more random tests
rng = np.random.RandomState(0)
X1 = rng.normal(size=(10, 3))
y1 = (rng.normal(size=(10)) > 0).astype(np.int)
# Data is 6 random integer points in a 100 dimensional space classified to
# three classes.
X2 = rng.randint(5, size=(6, 100))
y2 = np.array([1, 1, 2, 2, 3, 3])
def test_gnb():
# Gaussian Naive Bayes classification.
# This checks that GaussianNB implements fit and predict and returns
# correct values for a simple toy dataset.
clf = GaussianNB()
y_pred = clf.fit(X, y).predict(X)
assert_array_equal(y_pred, y)
y_pred_proba = clf.predict_proba(X)
y_pred_log_proba = clf.predict_log_proba(X)
assert_array_almost_equal(np.log(y_pred_proba), y_pred_log_proba, 8)
# Test whether label mismatch between target y and classes raises
# an Error
# FIXME Remove this test once the more general partial_fit tests are merged
assert_raises(ValueError, GaussianNB().partial_fit, X, y, classes=[0, 1])
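# Illustrative sketch (not part of the original test suite): the posterior
# GaussianNB computes for the toy X, y above can be reproduced by hand from
# per-class feature means/variances and Bayes' rule. The _sketch_* name is
# introduced here for illustration only; it matches clf.predict_proba up to
# GaussianNB's internal variance smoothing.
def _sketch_manual_gnb_posterior(x):
    from scipy.stats import norm
    posteriors = []
    for c in (1, 2):
        Xc = X[y == c]
        # product of independent per-feature Gaussian likelihoods, times the
        # empirical class prior
        likelihood = np.prod(norm.pdf(x, Xc.mean(axis=0), Xc.std(axis=0)))
        posteriors.append(likelihood * Xc.shape[0] / float(len(y)))
    posteriors = np.asarray(posteriors)
    return posteriors / posteriors.sum()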
def test_gnb_prior():
# Test whether class priors are properly set.
clf = GaussianNB().fit(X, y)
assert_array_almost_equal(np.array([3, 3]) / 6.0,
clf.class_prior_, 8)
clf.fit(X1, y1)
# Check that the class priors sum to 1
assert_array_almost_equal(clf.class_prior_.sum(), 1)
def test_gnb_sample_weight():
"""Test whether sample weights are properly used in GNB. """
# Sample weights all being 1 should not change results
sw = np.ones(6)
clf = GaussianNB().fit(X, y)
clf_sw = GaussianNB().fit(X, y, sw)
assert_array_almost_equal(clf.theta_, clf_sw.theta_)
assert_array_almost_equal(clf.sigma_, clf_sw.sigma_)
# Fitting twice with half sample-weights should result
# in same result as fitting once with full weights
sw = rng.rand(y.shape[0])
clf1 = GaussianNB().fit(X, y, sample_weight=sw)
clf2 = GaussianNB().partial_fit(X, y, classes=[1, 2], sample_weight=sw / 2)
clf2.partial_fit(X, y, sample_weight=sw / 2)
assert_array_almost_equal(clf1.theta_, clf2.theta_)
assert_array_almost_equal(clf1.sigma_, clf2.sigma_)
# Check that duplicate entries and correspondingly increased sample
# weights yield the same result
ind = rng.randint(0, X.shape[0], 20)
sample_weight = np.bincount(ind, minlength=X.shape[0])
clf_dupl = GaussianNB().fit(X[ind], y[ind])
clf_sw = GaussianNB().fit(X, y, sample_weight)
assert_array_almost_equal(clf_dupl.theta_, clf_sw.theta_)
assert_array_almost_equal(clf_dupl.sigma_, clf_sw.sigma_)
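# Illustrative sketch (assumption, not from the original file): the
# sample-weight equivalences checked above reduce to theta_ being the
# weighted per-class feature mean, e.g. for the samples Xc of one class
# with weights wc:
def _sketch_weighted_theta(Xc, wc):
    # weighted mean over the samples of a single class
    return np.average(Xc, axis=0, weights=wc)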
def test_gnb_neg_priors():
"""Test whether an error is raised in case of negative priors"""
clf = GaussianNB(priors=np.array([-1., 2.]))
assert_raises(ValueError, clf.fit, X, y)
def test_gnb_priors():
"""Test whether the class prior override is properly used"""
clf = GaussianNB(priors=np.array([0.3, 0.7])).fit(X, y)
assert_array_almost_equal(clf.predict_proba([[-0.1, -0.1]]),
np.array([[0.825303662161683,
0.174696337838317]]), 8)
assert_array_equal(clf.class_prior_, np.array([0.3, 0.7]))
def test_gnb_wrong_nb_priors():
""" Test whether an error is raised if the number of prior is different
from the number of class"""
clf = GaussianNB(priors=np.array([.25, .25, .25, .25]))
assert_raises(ValueError, clf.fit, X, y)
def test_gnb_prior_greater_one():
"""Test if an error is raised if the sum of prior greater than one"""
clf = GaussianNB(priors=np.array([2., 1.]))
assert_raises(ValueError, clf.fit, X, y)
def test_gnb_prior_large_bias():
"""Test if good prediction when class prior favor largely one class"""
clf = GaussianNB(priors=np.array([0.01, 0.99]))
clf.fit(X, y)
assert_equal(clf.predict([[-0.1, -0.1]]), np.array([2]))
def test_check_update_with_no_data():
""" Test when the partial fit is called without any data"""
# Create an empty array
prev_points = 100
mean = 0.
var = 1.
x_empty = np.empty((0, X.shape[1]))
tmean, tvar = GaussianNB._update_mean_variance(prev_points, mean,
var, x_empty)
assert_equal(tmean, mean)
assert_equal(tvar, var)
def test_gnb_pfit_wrong_nb_features():
"""Test whether an error is raised when the number of feature changes
between two partial fit"""
clf = GaussianNB()
# Fit for the first time the GNB
clf.fit(X, y)
# Partial fit a second time with an incoherent X
assert_raises(ValueError, clf.partial_fit, np.hstack((X, X)), y)
def test_discrete_prior():
# Test whether class priors are properly set.
for cls in [BernoulliNB, MultinomialNB]:
clf = cls().fit(X2, y2)
assert_array_almost_equal(np.log(np.array([2, 2, 2]) / 6.0),
clf.class_log_prior_, 8)
def test_mnnb():
# Test Multinomial Naive Bayes classification.
# This checks that MultinomialNB implements fit and predict and returns
# correct values for a simple toy dataset.
for X in [X2, scipy.sparse.csr_matrix(X2)]:
# Check the ability to predict the learning set.
clf = MultinomialNB()
assert_raises(ValueError, clf.fit, -X, y2)
y_pred = clf.fit(X, y2).predict(X)
assert_array_equal(y_pred, y2)
# Verify that np.log(clf.predict_proba(X)) gives the same results as
# clf.predict_log_proba(X)
y_pred_proba = clf.predict_proba(X)
y_pred_log_proba = clf.predict_log_proba(X)
assert_array_almost_equal(np.log(y_pred_proba), y_pred_log_proba, 8)
# Check that incremental fitting yields the same results
clf2 = MultinomialNB()
clf2.partial_fit(X[:2], y2[:2], classes=np.unique(y2))
clf2.partial_fit(X[2:5], y2[2:5])
clf2.partial_fit(X[5:], y2[5:])
y_pred2 = clf2.predict(X)
assert_array_equal(y_pred2, y2)
y_pred_proba2 = clf2.predict_proba(X)
y_pred_log_proba2 = clf2.predict_log_proba(X)
assert_array_almost_equal(np.log(y_pred_proba2), y_pred_log_proba2, 8)
assert_array_almost_equal(y_pred_proba2, y_pred_proba)
assert_array_almost_equal(y_pred_log_proba2, y_pred_log_proba)
# Partial fit on the whole data at once should be the same as fit too
clf3 = MultinomialNB()
clf3.partial_fit(X, y2, classes=np.unique(y2))
y_pred3 = clf3.predict(X)
assert_array_equal(y_pred3, y2)
y_pred_proba3 = clf3.predict_proba(X)
y_pred_log_proba3 = clf3.predict_log_proba(X)
assert_array_almost_equal(np.log(y_pred_proba3), y_pred_log_proba3, 8)
assert_array_almost_equal(y_pred_proba3, y_pred_proba)
assert_array_almost_equal(y_pred_log_proba3, y_pred_log_proba)
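# Illustrative sketch (assumption, not part of the original tests): for
# MultinomialNB the fitted feature_log_prob_ is just the alpha-smoothed,
# log-normalized count matrix, recomputable from the public attributes:
def _sketch_mnb_feature_log_prob(clf, alpha=1.0):
    smoothed = clf.feature_count_ + alpha
    return np.log(smoothed) - np.log(smoothed.sum(axis=1, keepdims=True))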
def check_partial_fit(cls):
clf1 = cls()
clf1.fit([[0, 1], [1, 0]], [0, 1])
clf2 = cls()
clf2.partial_fit([[0, 1], [1, 0]], [0, 1], classes=[0, 1])
assert_array_equal(clf1.class_count_, clf2.class_count_)
assert_array_equal(clf1.feature_count_, clf2.feature_count_)
clf3 = cls()
clf3.partial_fit([[0, 1]], [0], classes=[0, 1])
clf3.partial_fit([[1, 0]], [1])
assert_array_equal(clf1.class_count_, clf3.class_count_)
assert_array_equal(clf1.feature_count_, clf3.feature_count_)
def test_discretenb_partial_fit():
for cls in [MultinomialNB, BernoulliNB]:
yield check_partial_fit, cls
def test_gnb_partial_fit():
clf = GaussianNB().fit(X, y)
clf_pf = GaussianNB().partial_fit(X, y, np.unique(y))
assert_array_almost_equal(clf.theta_, clf_pf.theta_)
assert_array_almost_equal(clf.sigma_, clf_pf.sigma_)
assert_array_almost_equal(clf.class_prior_, clf_pf.class_prior_)
clf_pf2 = GaussianNB().partial_fit(X[0::2, :], y[0::2], np.unique(y))
clf_pf2.partial_fit(X[1::2], y[1::2])
assert_array_almost_equal(clf.theta_, clf_pf2.theta_)
assert_array_almost_equal(clf.sigma_, clf_pf2.sigma_)
assert_array_almost_equal(clf.class_prior_, clf_pf2.class_prior_)
def test_discretenb_pickle():
# Test picklability of discrete naive Bayes classifiers
for cls in [BernoulliNB, MultinomialNB, GaussianNB]:
clf = cls().fit(X2, y2)
y_pred = clf.predict(X2)
store = BytesIO()
pickle.dump(clf, store)
clf = pickle.load(BytesIO(store.getvalue()))
assert_array_equal(y_pred, clf.predict(X2))
if cls is not GaussianNB:
# TODO re-enable me when partial_fit is implemented for GaussianNB
# Test pickling of estimator trained with partial_fit
clf2 = cls().partial_fit(X2[:3], y2[:3], classes=np.unique(y2))
clf2.partial_fit(X2[3:], y2[3:])
store = BytesIO()
pickle.dump(clf2, store)
clf2 = pickle.load(BytesIO(store.getvalue()))
assert_array_equal(y_pred, clf2.predict(X2))
def test_input_check_fit():
# Test input checks for the fit method
for cls in [BernoulliNB, MultinomialNB, GaussianNB]:
# check shape consistency for number of samples at fit time
assert_raises(ValueError, cls().fit, X2, y2[:-1])
# check shape consistency for number of input features at predict time
clf = cls().fit(X2, y2)
assert_raises(ValueError, clf.predict, X2[:, :-1])
def test_input_check_partial_fit():
for cls in [BernoulliNB, MultinomialNB]:
# check shape consistency
assert_raises(ValueError, cls().partial_fit, X2, y2[:-1],
classes=np.unique(y2))
# classes is required for first call to partial fit
assert_raises(ValueError, cls().partial_fit, X2, y2)
# check consistency of consecutive classes values
clf = cls()
clf.partial_fit(X2, y2, classes=np.unique(y2))
assert_raises(ValueError, clf.partial_fit, X2, y2,
classes=np.arange(42))
# check consistency of input shape for partial_fit
assert_raises(ValueError, clf.partial_fit, X2[:, :-1], y2)
# check consistency of input shape for predict
assert_raises(ValueError, clf.predict, X2[:, :-1])
def test_discretenb_predict_proba():
# Test discrete NB classes' probability scores
# The 100s below distinguish Bernoulli from multinomial.
# FIXME: write a test to show this.
X_bernoulli = [[1, 100, 0], [0, 1, 0], [0, 100, 1]]
X_multinomial = [[0, 1], [1, 3], [4, 0]]
# test binary case (1-d output)
y = [0, 0, 2] # 2 is regression test for binary case, 02e673
for cls, X in zip([BernoulliNB, MultinomialNB],
[X_bernoulli, X_multinomial]):
clf = cls().fit(X, y)
assert_equal(clf.predict(X[-1:]), 2)
assert_equal(clf.predict_proba([X[0]]).shape, (1, 2))
assert_array_almost_equal(clf.predict_proba(X[:2]).sum(axis=1),
np.array([1., 1.]), 6)
# test multiclass case (2-d output, must sum to one)
y = [0, 1, 2]
for cls, X in zip([BernoulliNB, MultinomialNB],
[X_bernoulli, X_multinomial]):
clf = cls().fit(X, y)
assert_equal(clf.predict_proba(X[0:1]).shape, (1, 3))
assert_equal(clf.predict_proba(X[:2]).shape, (2, 3))
assert_almost_equal(np.sum(clf.predict_proba([X[1]])), 1)
assert_almost_equal(np.sum(clf.predict_proba([X[-1]])), 1)
assert_almost_equal(np.sum(np.exp(clf.class_log_prior_)), 1)
assert_almost_equal(np.sum(np.exp(clf.intercept_)), 1)
def test_discretenb_uniform_prior():
# Test whether discrete NB classes fit a uniform prior
# when fit_prior=False and class_prior=None
for cls in [BernoulliNB, MultinomialNB]:
clf = cls()
clf.set_params(fit_prior=False)
clf.fit([[0], [0], [1]], [0, 0, 1])
prior = np.exp(clf.class_log_prior_)
assert_array_equal(prior, np.array([.5, .5]))
def test_discretenb_provide_prior():
# Test whether discrete NB classes use provided prior
for cls in [BernoulliNB, MultinomialNB]:
clf = cls(class_prior=[0.5, 0.5])
clf.fit([[0], [0], [1]], [0, 0, 1])
prior = np.exp(clf.class_log_prior_)
assert_array_equal(prior, np.array([.5, .5]))
# Inconsistent number of classes with prior
assert_raises(ValueError, clf.fit, [[0], [1], [2]], [0, 1, 2])
assert_raises(ValueError, clf.partial_fit, [[0], [1]], [0, 1],
classes=[0, 1, 1])
def test_discretenb_provide_prior_with_partial_fit():
# Test whether discrete NB classes use provided prior
# when using partial_fit
iris = load_iris()
iris_data1, iris_data2, iris_target1, iris_target2 = train_test_split(
iris.data, iris.target, test_size=0.4, random_state=415)
for cls in [BernoulliNB, MultinomialNB]:
for prior in [None, [0.3, 0.3, 0.4]]:
clf_full = cls(class_prior=prior)
clf_full.fit(iris.data, iris.target)
clf_partial = cls(class_prior=prior)
clf_partial.partial_fit(iris_data1, iris_target1,
classes=[0, 1, 2])
clf_partial.partial_fit(iris_data2, iris_target2)
assert_array_almost_equal(clf_full.class_log_prior_,
clf_partial.class_log_prior_)
def test_sample_weight_multiclass():
for cls in [BernoulliNB, MultinomialNB]:
# check shape consistency for number of samples at fit time
yield check_sample_weight_multiclass, cls
def check_sample_weight_multiclass(cls):
X = [
[0, 0, 1],
[0, 1, 1],
[0, 1, 1],
[1, 0, 0],
]
y = [0, 0, 1, 2]
sample_weight = np.array([1, 1, 2, 2], dtype=np.float64)
sample_weight /= sample_weight.sum()
clf = cls().fit(X, y, sample_weight=sample_weight)
assert_array_equal(clf.predict(X), [0, 1, 1, 2])
# Check sample weight using the partial_fit method
clf = cls()
clf.partial_fit(X[:2], y[:2], classes=[0, 1, 2],
sample_weight=sample_weight[:2])
clf.partial_fit(X[2:3], y[2:3], sample_weight=sample_weight[2:3])
clf.partial_fit(X[3:], y[3:], sample_weight=sample_weight[3:])
assert_array_equal(clf.predict(X), [0, 1, 1, 2])
def test_sample_weight_mnb():
clf = MultinomialNB()
clf.fit([[1, 2], [1, 2], [1, 0]],
[0, 0, 1],
sample_weight=[1, 1, 4])
assert_array_equal(clf.predict([[1, 0]]), [1])
positive_prior = np.exp(clf.intercept_[0])
assert_array_almost_equal([1 - positive_prior, positive_prior],
[1 / 3., 2 / 3.])
def test_coef_intercept_shape():
# coef_ and intercept_ should have shapes as in other linear models.
# Non-regression test for issue #2127.
X = [[1, 0, 0], [1, 1, 1]]
y = [1, 2] # binary classification
for clf in [MultinomialNB(), BernoulliNB()]:
clf.fit(X, y)
assert_equal(clf.coef_.shape, (1, 3))
assert_equal(clf.intercept_.shape, (1,))
def test_check_accuracy_on_digits():
# Non-regression test to make sure that any further refactoring or
# optimization of the NB models does not harm performance on a slightly
# non-linearly separable dataset
digits = load_digits()
X, y = digits.data, digits.target
binary_3v8 = np.logical_or(digits.target == 3, digits.target == 8)
X_3v8, y_3v8 = X[binary_3v8], y[binary_3v8]
# Multinomial NB
scores = cross_val_score(MultinomialNB(alpha=10), X, y, cv=10)
assert_greater(scores.mean(), 0.86)
scores = cross_val_score(MultinomialNB(alpha=10), X_3v8, y_3v8, cv=10)
assert_greater(scores.mean(), 0.94)
# Bernoulli NB
scores = cross_val_score(BernoulliNB(alpha=10), X > 4, y, cv=10)
assert_greater(scores.mean(), 0.83)
scores = cross_val_score(BernoulliNB(alpha=10), X_3v8 > 4, y_3v8, cv=10)
assert_greater(scores.mean(), 0.92)
# Gaussian NB
scores = cross_val_score(GaussianNB(), X, y, cv=10)
assert_greater(scores.mean(), 0.77)
scores = cross_val_score(GaussianNB(), X_3v8, y_3v8, cv=10)
assert_greater(scores.mean(), 0.86)
def test_feature_log_prob_bnb():
# Test for issue #4268.
# Tests that the feature log prob value computed by BernoulliNB when
# alpha=1.0 is equal to the expression given in Manning, Raghavan,
# and Schuetze's "Introduction to Information Retrieval" book:
# http://nlp.stanford.edu/IR-book/html/htmledition/the-bernoulli-model-1.html
X = np.array([[0, 0, 0], [1, 1, 0], [0, 1, 0], [1, 0, 1], [0, 1, 0]])
Y = np.array([0, 0, 1, 2, 2])
# Fit Bernoulli NB w/ alpha = 1.0
clf = BernoulliNB(alpha=1.0)
clf.fit(X, Y)
# Manually form the (log) numerator and denominator that
# constitute P(feature presence | class)
num = np.log(clf.feature_count_ + 1.0)
denom = np.tile(np.log(clf.class_count_ + 2.0), (X.shape[1], 1)).T
# Check manual estimate matches
assert_array_almost_equal(clf.feature_log_prob_, (num - denom))
def test_bnb():
# Tests that BernoulliNB when alpha=1.0 gives the same values as
# those given for the toy example in Manning, Raghavan, and
# Schuetze's "Introduction to Information Retrieval" book:
# http://nlp.stanford.edu/IR-book/html/htmledition/the-bernoulli-model-1.html
# Training data points are:
# Chinese Beijing Chinese (class: China)
# Chinese Chinese Shanghai (class: China)
# Chinese Macao (class: China)
# Tokyo Japan Chinese (class: Japan)
# Features are Beijing, Chinese, Japan, Macao, Shanghai, and Tokyo
X = np.array([[1, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 1, 0],
[0, 1, 0, 1, 0, 0],
[0, 1, 1, 0, 0, 1]])
# Classes are China (0), Japan (1)
Y = np.array([0, 0, 0, 1])
# Fit BernoulliNB w/ alpha = 1.0
clf = BernoulliNB(alpha=1.0)
clf.fit(X, Y)
# Check the class prior is correct
class_prior = np.array([0.75, 0.25])
assert_array_almost_equal(np.exp(clf.class_log_prior_), class_prior)
# Check the feature probabilities are correct
feature_prob = np.array([[0.4, 0.8, 0.2, 0.4, 0.4, 0.2],
[1/3.0, 2/3.0, 2/3.0, 1/3.0, 1/3.0, 2/3.0]])
assert_array_almost_equal(np.exp(clf.feature_log_prob_), feature_prob)
# Testing data point is:
# Chinese Chinese Chinese Tokyo Japan
X_test = np.array([[0, 1, 1, 0, 0, 1]])
# Check the predictive probabilities are correct
unnorm_predict_proba = np.array([[0.005183999999999999,
0.02194787379972565]])
predict_proba = unnorm_predict_proba / np.sum(unnorm_predict_proba)
assert_array_almost_equal(clf.predict_proba(X_test), predict_proba)
def test_cnb():
# Tests ComplementNB when alpha=1.0 for the toy example in Manning,
# Raghavan, and Schuetze's "Introduction to Information Retrieval" book:
# http://nlp.stanford.edu/IR-book/html/htmledition/the-bernoulli-model-1.html
# Training data points are:
# Chinese Beijing Chinese (class: China)
# Chinese Chinese Shanghai (class: China)
# Chinese Macao (class: China)
# Tokyo Japan Chinese (class: Japan)
# Features are Beijing, Chinese, Japan, Macao, Shanghai, and Tokyo.
X = np.array([[1, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 1, 0],
[0, 1, 0, 1, 0, 0],
[0, 1, 1, 0, 0, 1]])
# Classes are China (0), Japan (1).
Y = np.array([0, 0, 0, 1])
# Verify that negative inputs are rejected.
clf = ComplementNB(alpha=1.0)
assert_raises(ValueError, clf.fit, -X, Y)
clf.fit(X, Y)
# Check that counts are correct.
feature_count = np.array([[1, 3, 0, 1, 1, 0], [0, 1, 1, 0, 0, 1]])
assert_array_equal(clf.feature_count_, feature_count)
class_count = np.array([3, 1])
assert_array_equal(clf.class_count_, class_count)
feature_all = np.array([1, 4, 1, 1, 1, 1])
assert_array_equal(clf.feature_all_, feature_all)
# Check that weights are correct. See steps 4-6 in Table 4 of
# Rennie et al. (2003).
theta = np.array([
[
(0 + 1) / (3 + 6),
(1 + 1) / (3 + 6),
(1 + 1) / (3 + 6),
(0 + 1) / (3 + 6),
(0 + 1) / (3 + 6),
(1 + 1) / (3 + 6)
],
[
(1 + 1) / (6 + 6),
(3 + 1) / (6 + 6),
(0 + 1) / (6 + 6),
(1 + 1) / (6 + 6),
(1 + 1) / (6 + 6),
(0 + 1) / (6 + 6)
]])
weights = np.zeros(theta.shape)
for i in range(2):
weights[i] = np.log(theta[i])
weights[i] /= weights[i].sum()
assert_array_equal(clf.feature_log_prob_, weights)
def test_naive_bayes_scale_invariance():
# Scaling the data should not change the prediction results
iris = load_iris()
X, y = iris.data, iris.target
labels = [GaussianNB().fit(f * X, y).predict(f * X)
for f in [1E-10, 1, 1E10]]
assert_array_equal(labels[0], labels[1])
assert_array_equal(labels[1], labels[2])
def test_alpha():
# Setting alpha=0 should not output nan results when p(x_i|y_j)=0 occurs
X = np.array([[1, 0], [1, 1]])
y = np.array([0, 1])
nb = BernoulliNB(alpha=0.)
assert_warns(UserWarning, nb.partial_fit, X, y, classes=[0, 1])
assert_warns(UserWarning, nb.fit, X, y)
prob = np.array([[1, 0], [0, 1]])
assert_array_almost_equal(nb.predict_proba(X), prob)
nb = MultinomialNB(alpha=0.)
assert_warns(UserWarning, nb.partial_fit, X, y, classes=[0, 1])
assert_warns(UserWarning, nb.fit, X, y)
prob = np.array([[2./3, 1./3], [0, 1]])
assert_array_almost_equal(nb.predict_proba(X), prob)
# Test sparse X
X = scipy.sparse.csr_matrix(X)
nb = BernoulliNB(alpha=0.)
assert_warns(UserWarning, nb.fit, X, y)
prob = np.array([[1, 0], [0, 1]])
assert_array_almost_equal(nb.predict_proba(X), prob)
nb = MultinomialNB(alpha=0.)
assert_warns(UserWarning, nb.fit, X, y)
prob = np.array([[2./3, 1./3], [0, 1]])
assert_array_almost_equal(nb.predict_proba(X), prob)
# Test for alpha < 0
X = np.array([[1, 0], [1, 1]])
y = np.array([0, 1])
expected_msg = ('Smoothing parameter alpha = -1.0e-01. '
'alpha should be > 0.')
b_nb = BernoulliNB(alpha=-0.1)
m_nb = MultinomialNB(alpha=-0.1)
assert_raise_message(ValueError, expected_msg, b_nb.fit, X, y)
assert_raise_message(ValueError, expected_msg, m_nb.fit, X, y)
b_nb = BernoulliNB(alpha=-0.1)
m_nb = MultinomialNB(alpha=-0.1)
assert_raise_message(ValueError, expected_msg, b_nb.partial_fit,
X, y, classes=[0, 1])
assert_raise_message(ValueError, expected_msg, m_nb.partial_fit,
X, y, classes=[0, 1])
| aflaxman/scikit-learn | sklearn/tests/test_naive_bayes.py | Python | bsd-3-clause | 23,848 | ["Gaussian"] | 995593d0f76c4f2cc26b314fcfc7c51c5dfbea54aa360bbebe2126d6c4693991 |
#
# @BEGIN LICENSE
#
# Psi4: an open-source quantum chemistry software package
#
# Copyright (c) 2007-2019 The Psi4 Developers.
#
# The copyrights for code used from other parties are included in
# the corresponding files.
#
# This file is part of Psi4.
#
# Psi4 is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, version 3.
#
# Psi4 is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License along
# with Psi4; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# @END LICENSE
#
"""Module with non-generic exceptions classes."""
from psi4 import core
from psi4 import extras
class PsiException(Exception):
"""Error class for Psi."""
extras._success_flag_ = False
pass
class ValidationError(PsiException):
"""Error called for problems with the input file. Prints
error message *msg* to standard output stream and output file.
"""
def __init__(self, msg):
PsiException.__init__(self, msg)
# print("%s" % repr(msg))
self.message = '\nPsiException: %s\n\n' % repr(msg)
class ParsingError(PsiException):
"""Error called for problems parsing a text file. Prints error message
*msg* to standard output stream and output file.
"""
def __init__(self, msg):
PsiException.__init__(self, msg)
self.message = '\nPsiException: %s\n\n' % msg
class PsiImportError(PsiException):
"""Error called for problems import python dependencies. Prints error message
*msg* to standard output stream and output file.
"""
def __init__(self, msg):
PsiException.__init__(self, msg)
self.message = '\nPsiException: %s\n\n' % msg
class TestComparisonError(PsiException):
"""Error called when a test case fails due to a failed
compare_values() call. Prints error message *msg* to standard
output stream and output file.
"""
def __init__(self, msg):
PsiException.__init__(self, msg)
self.message = '\nPsiException: %s\n\n' % msg
class UpgradeHelper(PsiException):
"""Error called on previously valid syntax that now isn't and a
simple syntax transition is possible.
It is much preferred to leave the old syntax valid for a release
cycle and have the old syntax raise a deprecation FutureWarning. For
cases where the syntax just has to jump, this can be used to trap
the old syntax at first error and suggest the new.
"""
def __init__(self, old, new, version, elaboration):
msg = "Using `{}` instead of `{}` is obsolete as of {}.{}".format(old, new, version, elaboration)
PsiException.__init__(self, msg)
core.print_out('\nPsiException: %s\n\n' % (msg))
class ConvergenceError(PsiException):
"""Error called for problems with converging an iterative method.
Parameters
----------
eqn_description : str
Type of QC routine that has failed (e.g., SCF)
iteration : int
What iteration we failed on
"""
def __init__(self, eqn_description, iteration, additional_info=None):
msg = f"Could not converge {eqn_description:s} in {iteration:d} iterations."
if additional_info is not None:
msg += f"\n\n{additional_info}"
PsiException.__init__(self, msg)
self.iteration = iteration
self.message = msg
core.print_out(f'\nPsiException: {msg:s}\n\n')
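# Illustrative sketch (assumption, not from the original module): a typical
# iterative routine would raise this once its iteration cap is exceeded, e.g.
#
#     if iteration >= maxiter:
#         raise ConvergenceError("SCF", iteration,
#                                additional_info="last dE = 3.2e-05")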
class OptimizationConvergenceError(ConvergenceError):
"""Error called for problems with geometry optimizer."""
def __init__(self, eqn_description, iteration, wfn):
ConvergenceError.__init__(self, eqn_description, iteration)
self.wfn = wfn
class SCFConvergenceError(ConvergenceError):
"""Error called for problems with SCF iterations.
Parameters
----------
wfn : psi4.core.Wavefunction
Wavefunction at time of exception
e_conv : float
Change in energy for last iteration
d_conv : float
RMS change in density for last iteration
"""
def __init__(self, eqn_description, iteration, wfn, e_conv, d_conv):
ConvergenceError.__init__(self, eqn_description, iteration)
self.e_conv = e_conv
self.d_conv = d_conv
self.wfn = wfn
class TDSCFConvergenceError(ConvergenceError):
"""Error called for problems with TDSCF iterations.
Parameters
----------
wfn : psi4.core.Wavefunction
Wavefunction at time of exception
what : str
What we were trying to solve for (singlets/triplets, irrep) when we failed to converge
stats : Dict
Dictionary of convergence statistics of last iteration.
Keys are:
count : int, iteration number
res_norm : np.ndarray (nroots, ), the norm of the residual vector for each root
val : np.ndarray (nroots, ), the eigenvalue corresponding to each root
delta_val : np.ndarray (nroots, ), the change in eigenvalue from the last iteration to this one
collapse : bool, if a subspace collapse was performed
product_count : int, the running total of product evaluations that was performed
done : bool, if all roots were converged
"""
def __init__(self, iteration, wfn, what, stats):
# prepare message, including excitation energies and residual norm
conv_info = "==> Convergence statistics from last iteration <==\n\n"
conv_info += "Excitation Energy".center(21) + f" {'D[value]':^15}" + "|R|".center(11) + "\n"
conv_info += f"{'-':->20} {'-':->15} {'-':->15}\n"
for e, diff, r_norm in zip(stats["val"], stats["delta_val"], stats["res_norm"]):
conv_info += f" {e:.6f} {diff:-11.5e} {r_norm:12.5e}\n"
ConvergenceError.__init__(self,
eqn_description=f"""TDSCF solver ({what})""",
iteration=iteration,
additional_info=conv_info)
self.wfn = wfn
self.what = what
self.stats = stats
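# Illustrative sketch of the `stats` payload documented above (the values
# here are invented for demonstration only):
#
#     stats = {"count": 40, "res_norm": np.array([1.0e-3, 7.2e-2]),
#              "val": np.array([0.28, 0.31]),
#              "delta_val": np.array([1.0e-7, 4.0e-4]),
#              "collapse": False, "product_count": 320, "done": False}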
class CSXError(PsiException):
"""Error called when CSX generation fails.
"""
def __init__(self, msg):
PsiException.__init__(self, msg)
self.message = '\nCSXException: %s\n\n' % msg
class MissingMethodError(ValidationError):
"""Error called when method not available.
"""
def __init__(self, msg):
ValidationError.__init__(self, msg)
self.message = '\nMissingMethodError: %s\n\n' % msg
class ManagedMethodError(PsiException):
def __init__(self, circs):
if circs[5] == '':
modulemsg = "not available"
else:
modulemsg = f"not directable to QC_MODULE '{circs[5]}'"
if len(circs) == 7:
msg = f"""{circs[0]}: Method '{circs[1]}' with {circs[2]} '{circs[3]}', FREEZE_CORE '{not circs[6]}', and REFERENCE '{circs[4]}' {modulemsg}"""
else:
msg = f"""{circs[0]}: Method '{circs[1]}' with {circs[2]} '{circs[3]}' and REFERENCE '{circs[4]}' {modulemsg}"""
PsiException.__init__(self, msg)
self.message = '\nPsiException: %s\n\n' % msg
class Dftd3Error(PsiException):
"""
"""
def __init__(self, msg):
PsiException.__init__(self, msg)
self.message = '\nDftd3Error: %s\n\n' % msg
class PastureRequiredError(PsiException):
"""Error called when the specified value of *option* requires some
module(s) from Psi4Pasture, but could not be imported.
"""
msg_tmpl = """Psi4Pasture module(s) [{modlist}] are required to change the default value of {opt}
"""
install_instructions = """
Note: Psi4Pasture is currently in an experimental state with no reliable install
procedure yet, but this is what it would look like.
To Build Psi4Pasture and install the required modules within your current
Psi4 installation
>>> # clone the pasture repo
>>> git clone https://github.com/psi4/psi4pasture.git
>>> cmake -H. -Bobjdir -Dpsi4_DIR=$PSI4_INSTALL_PREFIX/share/cmake/psi4 {module_args}
>>> # $PSI4_INSTALL_PREFIX is the $CMAKE_INSTALL_PREFIX for the psi4
>>> # install you want to install pasture to
>>> # build + install install location is detected automatically
>>> cd objdir
>>> make && make install
See https://github.com/psi4/psi4pasture for more details
Or to install using psi4's own build system add
{module_args}
to cmake command line when building psi4.
"""
pasture_required_modules = {"RUN_CCTRANSORT": ["ccsort", "transqt2"]}
def __init__(self, option):
mods_str = ", ".join([m for m in PastureRequiredError.pasture_required_modules[option]])
msg = PastureRequiredError.msg_tmpl.format(opt=option, modlist=mods_str)
PsiException.__init__(self, msg)
module_cmake_args = " ".join(
["-DENABLE_{}=ON".format(module) for module in PastureRequiredError.pasture_required_modules[option]])
msg += PastureRequiredError.install_instructions.format(module_args=module_cmake_args)
self.message = '\nPsiException: {}\n\n'.format(msg)
core.print_out(self.message)
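# Illustrative sketch (assumption, not part of the original module): for the
# one option currently mapped in pasture_required_modules, raising
#
#     raise PastureRequiredError("RUN_CCTRANSORT")
#
# would embed "-DENABLE_ccsort=ON -DENABLE_transqt2=ON" in the rendered
# install instructions.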
| dgasmith/psi4 | psi4/driver/p4util/exceptions.py | Python | lgpl-3.0 | 9,523 | ["Psi4"] | 4eaf1961aeb36483bf1fd600d251ee21de91b7c2d72e5bb5f827426ead87091c |
# -*- coding: utf-8 -*-
##################################################################################################
import logging
import sqlite3
import threading
from datetime import datetime, timedelta, time
import xbmc
import xbmcgui
import xbmcvfs
import api
import utils
import clientinfo
import database
import downloadutils
import itemtypes
import embydb_functions as embydb
import read_embyserver as embyserver
import userclient
import views
from objects import Movies, MusicVideos, TVShows, Music
from utils import window, settings, language as lang, should_stop
from ga_client import GoogleAnalytics
##################################################################################################
log = logging.getLogger("EMBY."+__name__)
##################################################################################################
class LibrarySync(threading.Thread):
_shared_state = {}
isFastSync = False
stop_thread = False
suspend_thread = False
# Track websocketclient updates
addedItems = []
updateItems = []
userdataItems = []
removeItems = []
forceLibraryUpdate = False
incremental_count = 0
refresh_views = False
def __init__(self):
self.__dict__ = self._shared_state
self.monitor = xbmc.Monitor()
self.clientInfo = clientinfo.ClientInfo()
self.doUtils = downloadutils.DownloadUtils().downloadUrl
self.user = userclient.UserClient()
self.emby = embyserver.Read_EmbyServer()
self.kodi_version = int(xbmc.getInfoLabel('System.BuildVersion')[:2])
threading.Thread.__init__(self)
def progressDialog(self, title):
dialog = None
dialog = xbmcgui.DialogProgressBG()
dialog.create("Emby for Kodi", title)
log.debug("Show progress dialog: %s" % title)
return dialog
def startSync(self):
ga = GoogleAnalytics()
# Run at start up - optional to use the server plugin
if settings('SyncInstallRunDone') == "true":
# Validate views
self.refreshViews()
completed = False
# Verify if server plugin is installed.
if settings('serverSync') == "true":
# Try to use fast start up
url = "{server}/emby/Plugins?format=json"
try:
result = self.doUtils(url)
except Exception as error:
log.info("Error getting plugin list form server: " + str(error))
result = []
for plugin in result:
if plugin['Name'] == "Emby.Kodi Sync Queue":
log.debug("Found server plugin.")
self.isFastSync = True
ga.sendEventData("SyncAction", "FastSync")
completed = self.fastSync()
break
if not completed:
# Fast sync failed or server plugin is not found
ga.sendEventData("SyncAction", "Sync")
completed = ManualSync().sync()
else:
# Install sync is not completed
ga.sendEventData("SyncAction", "FullSync")
completed = self.fullSync()
return completed
def fastSync(self):
lastSync = settings('LastIncrementalSync')
if not lastSync:
lastSync = "2010-01-01T00:00:00Z"
lastSyncTime = utils.convertDate(lastSync)
log.info("Last sync run: %s" % lastSyncTime)
# get server RetentionDateTime
try:
result = self.doUtils("{server}/emby/Emby.Kodi.SyncQueue/GetServerDateTime?format=json")
retention_time = result['RetentionDateTime']
except Exception as error:
log.error(error)
retention_time = "2010-01-01T00:00:00Z"
retention_time = utils.convertDate(retention_time)
log.info("RetentionDateTime: %s" % retention_time)
# if last sync before retention time do a full sync
if retention_time > lastSyncTime:
log.info("Fast sync server retention insufficient, fall back to full sync")
return False
params = {'LastUpdateDT': lastSync}
if settings('enableMusic') != "true":
params['filter'] = "music"
url = "{server}/emby/Emby.Kodi.SyncQueue/{UserId}/GetItems?format=json"
try:
result = self.doUtils(url, parameters=params)
processlist = {
'added': result['ItemsAdded'],
'update': result['ItemsUpdated'],
'userdata': result['UserDataChanged'],
'remove': result['ItemsRemoved']
}
except Exception as error: # To be reviewed to only catch specific errors.
log.error(error)
log.error("Failed to retrieve latest updates using fast sync.")
xbmcgui.Dialog().ok(lang(29999), lang(33095))
return False
else:
log.info("Fast sync changes: %s" % result)
for action in processlist:
self.triage_items(action, processlist[action])
return True
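# Illustrative sketch (assumption, based only on the keys consumed above) of
# the Emby.Kodi Sync Queue GetItems payload:
#
#     result = {"ItemsAdded": [...], "ItemsUpdated": [...],
#               "UserDataChanged": [...], "ItemsRemoved": [...]}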
def saveLastSync(self):
# Save last sync time
overlap = 2
try: # datetime fails when used more than once, TypeError
if self.isFastSync:
result = self.doUtils("{server}/emby/Emby.Kodi.SyncQueue/GetServerDateTime?format=json")
server_time = result['ServerDateTime']
server_time = utils.convertDate(server_time)
else:
raise Exception("Fast sync server plugin is not enabled.")
except Exception as e:
# If the server plugin is not installed or an error happened.
log.debug("An exception occurred: %s" % e)
time_now = datetime.utcnow()-timedelta(minutes=overlap)
lastSync = time_now.strftime('%Y-%m-%dT%H:%M:%SZ')
log.info("New sync time: client time -%s min: %s" % (overlap, lastSync))
else:
lastSync = (server_time - timedelta(minutes=overlap)).strftime('%Y-%m-%dT%H:%M:%SZ')
log.info("New sync time: server time -%s min: %s" % (overlap, lastSync))
finally:
settings('LastIncrementalSync', value=lastSync)
def dbCommit(self, connection):
# Central commit, verifies if Kodi database update is running
kodidb_scan = window('emby_kodiScan') == "true"
count = 0
while kodidb_scan:
log.info("Kodi scan is running. Waiting...")
kodidb_scan = window('emby_kodiScan') == "true"
if count == 10:
log.info("Flag still active, but will try to commit")
window('emby_kodiScan', clear=True)
if should_stop():
log.info("Commit unsuccessful. Sync terminated.")
break
if self.monitor.waitForAbort(1):
# Abort was requested while waiting. We should exit
log.info("Commit unsuccessful.")
break
count += 1
try:
connection.commit()
log.info("Commit successful.")
except sqlite3.OperationalError as error:
log.error(error)
if "database is locked" in error:
log.info("retrying...")
window('emby_kodiScan', value="true")
self.dbCommit(connection)
def fullSync(self, manualrun=False, repair=False):
# Only run once when first setting up. Can be run manually.
music_enabled = settings('enableMusic') == "true"
xbmc.executebuiltin('InhibitIdleShutdown(true)')
screensaver = utils.getScreensaver()
utils.setScreensaver(value="")
window('emby_dbScan', value="true")
# Add sources
utils.sourcesXML()
# use emby and video DBs
with database.DatabaseConn('emby') as cursor_emby:
with database.DatabaseConn('video') as cursor_video:
# content sync: movies, tvshows, musicvideos, music
if manualrun:
message = "Manual sync"
elif repair:
message = "Repair sync"
repair_list = []
choices = ['all', 'movies', 'musicvideos', 'tvshows']
if music_enabled:
choices.append('music')
if self.kodi_version > 15:
# Jarvis or higher
types = xbmcgui.Dialog().multiselect(lang(33094), choices)
if types is None:
pass
elif 0 in types: # all
choices.pop(0)
repair_list.extend(choices)
else:
for index in types:
repair_list.append(choices[index])
else:
resp = xbmcgui.Dialog().select(lang(33094), choices)
if resp == 0: # all
choices.pop(resp)
repair_list.extend(choices)
else:
repair_list.append(choices[resp])
log.info("Repair queued for: %s", repair_list)
else:
message = "Initial sync"
window('emby_initialScan', value="true")
pDialog = self.progressDialog("%s" % message)
starttotal = datetime.now()
# Set views
views.Views(cursor_emby, cursor_video).maintain()
cursor_emby.connection.commit()
#self.maintainViews(cursor_emby, cursor_video)
# Sync video library
process = {
'movies': self.movies,
'boxsets': self.boxsets,
'musicvideos': self.musicvideos,
'tvshows': self.tvshows
}
for itemtype in process:
if repair and itemtype not in repair_list:
continue
startTime = datetime.now()
completed = process[itemtype](cursor_emby, cursor_video, pDialog)
if not completed:
xbmc.executebuiltin('InhibitIdleShutdown(false)')
utils.setScreensaver(value=screensaver)
window('emby_dbScan', clear=True)
if pDialog:
pDialog.close()
return False
else:
elapsedTime = datetime.now() - startTime
log.info("SyncDatabase (finished %s in: %s)"
% (itemtype, str(elapsedTime).split('.')[0]))
# sync music
# use emby and music
if music_enabled:
if repair and 'music' not in repair_list:
pass
else:
with database.DatabaseConn('emby') as cursor_emby:
with database.DatabaseConn('music') as cursor_music:
startTime = datetime.now()
completed = self.music(cursor_emby, cursor_music, pDialog)
if not completed:
xbmc.executebuiltin('InhibitIdleShutdown(false)')
utils.setScreensaver(value=screensaver)
window('emby_dbScan', clear=True)
if pDialog:
pDialog.close()
return False
else:
elapsedTime = datetime.now() - startTime
log.info("SyncDatabase (finished music in: %s)"
% (str(elapsedTime).split('.')[0]))
if pDialog:
pDialog.close()
with database.DatabaseConn('emby') as cursor_emby:
emby_db = embydb.Embydb_Functions(cursor_emby)
current_version = emby_db.get_version(self.clientInfo.get_version())
window('emby_version', current_version)
settings('SyncInstallRunDone', value="true")
self.saveLastSync()
xbmc.executebuiltin('UpdateLibrary(video)')
elapsedtotal = datetime.now() - starttotal
xbmc.executebuiltin('InhibitIdleShutdown(false)')
utils.setScreensaver(value=screensaver)
window('emby_dbScan', clear=True)
window('emby_initialScan', clear=True)
xbmcgui.Dialog().notification(
heading=lang(29999),
message="%s %s %s" %
(message, lang(33025), str(elapsedtotal).split('.')[0]),
icon="special://home/addons/plugin.video.emby/icon.png",
sound=False)
return True
def refreshViews(self):
with database.DatabaseConn('emby') as cursor_emby:
with database.DatabaseConn() as cursor_video:
# Compare views, assign correct tags to items
views.Views(cursor_emby, cursor_video).maintain()
def offline_mode_views(self):
with database.DatabaseConn('emby') as cursor_emby:
with database.DatabaseConn() as cursor_video:
views.Views(cursor_emby, cursor_video).offline_mode()
def movies(self, embycursor, kodicursor, pdialog):
# Get movies from emby
emby_db = embydb.Embydb_Functions(embycursor)
movies = Movies(embycursor, kodicursor, pdialog)
views = emby_db.getView_byType('movies')
views += emby_db.getView_byType('mixed')
log.info("Media folders: %s" % views)
##### PROCESS MOVIES #####
for view in views:
log.info("Processing: %s", view)
view_name = view['name']
# Get items per view
if pdialog:
pdialog.update(
heading=lang(29999),
message="%s %s..." % (lang(33017), view_name))
all_movies = self.emby.getMovies(view['id'], dialog=pdialog)
movies.add_all("Movie", all_movies, view)
log.debug("Movies finished.")
return True
def boxsets(self, embycursor, kodicursor, pdialog):
movies = Movies(embycursor, kodicursor, pdialog)
if pdialog:
pdialog.update(heading=lang(29999), message=lang(33018))
boxsets = self.emby.getBoxset(dialog=pdialog)
movies.add_all("BoxSet", boxsets)
log.debug("Boxsets finished.")
return True
def musicvideos(self, embycursor, kodicursor, pdialog):
# Get musicvideos from emby
emby_db = embydb.Embydb_Functions(embycursor)
mvideos = MusicVideos(embycursor, kodicursor, pdialog)
views = emby_db.getView_byType('musicvideos')
log.info("Media folders: %s" % views)
for view in views:
log.info("Processing: %s", view)
# Get items per view
viewId = view['id']
viewName = view['name']
if pdialog:
pdialog.update(
heading=lang(29999),
message="%s %s..." % (lang(33019), viewName))
# Initial or repair sync
all_mvideos = self.emby.getMusicVideos(viewId, dialog=pdialog)
mvideos.add_all("MusicVideo", all_mvideos, view)
else:
log.debug("MusicVideos finished.")
return True
def tvshows(self, embycursor, kodicursor, pdialog):
# Get shows from emby
emby_db = embydb.Embydb_Functions(embycursor)
tvshows = TVShows(embycursor, kodicursor, pdialog)
views = emby_db.getView_byType('tvshows')
views += emby_db.getView_byType('mixed')
log.info("Media folders: %s" % views)
for view in views:
# Get items per view
if pdialog:
pdialog.update(
heading=lang(29999),
message="%s %s..." % (lang(33020), view['name']))
all_tvshows = self.emby.getShows(view['id'], dialog=pdialog)
#log.info([item['Id'] for item in all_tvshows['Items']])
#for all_tvshows in self.emby.get_parent_child(view['id'], "Series"):
tvshows.add_all("Series", all_tvshows, view)
else:
log.debug("TVShows finished.")
return True
def music(self, embycursor, kodicursor, pdialog):
# Get music from emby
emby_db = embydb.Embydb_Functions(embycursor)
music = Music(embycursor, kodicursor, pdialog)
views = emby_db.getView_byType('music')
log.info("Media folders: %s", views)
# Add music artists and everything will fall into place
if pdialog:
pdialog.update(heading=lang(29999),
message="%s Music..." % lang(33021))
for view in views:
all_artists = self.emby.getArtists(view['id'], dialog=pdialog)
music.add_all("MusicArtist", all_artists)
log.debug("Finished syncing music")
return True
# Reserved for websocket_client.py and fast start
def triage_items(self, process, items):
processlist = {
'added': self.addedItems,
'update': self.updateItems,
'userdata': self.userdataItems,
'remove': self.removeItems
}
if items:
if process == "userdata":
itemids = []
for item in items:
itemids.append(item['ItemId'])
items = itemids
log.info("Queue %s: %s" % (process, items))
processlist[process].extend(items)
def incrementalSync(self):
self.incremental_count += 1
update_embydb = False
pDialog = None
# do a view update if needed
if self.refresh_views:
self.refreshViews()
self.refresh_views = False
self.forceLibraryUpdate = True
# do a lib update if any items in list
totalUpdates = len(self.addedItems) + len(self.updateItems) + len(self.userdataItems) + len(self.removeItems)
if totalUpdates > 0 and window('emby_kodiScan') != "true":
with database.DatabaseConn('emby') as cursor_emby:
with database.DatabaseConn('video') as cursor_video:
xbmc.executebuiltin('InhibitIdleShutdown(true)')
screensaver = utils.getScreensaver()
utils.setScreensaver(value="")
emby_db = embydb.Embydb_Functions(cursor_emby)
incSyncIndicator = int(settings('incSyncIndicator') or 10)
if incSyncIndicator != -1 and totalUpdates > incSyncIndicator:
# Only present dialog if we are going to process items
pDialog = self.progressDialog('Incremental sync')
log.info("incSyncIndicator=" + str(incSyncIndicator) + " totalUpdates=" + str(totalUpdates))
process = {
'added': self.addedItems,
'update': self.updateItems,
'userdata': self.userdataItems,
'remove': self.removeItems
}
for process_type in ['added', 'update', 'userdata', 'remove']:
if process[process_type]:
listItems = list(process[process_type])
del process[process_type][:] # Reset class list
items_process = itemtypes.Items(cursor_emby, cursor_video)
update = False
# Prepare items according to process process_type
if process_type == "added":
items = self.emby.sortby_mediatype(listItems)
elif process_type in ("userdata", "remove"):
items = emby_db.sortby_mediaType(listItems, unsorted=False)
else:
items = emby_db.sortby_mediaType(listItems)
if items.get('Unsorted'):
sorted_items = self.emby.sortby_mediatype(items['Unsorted'])
doupdate = items_process.itemsbyId(sorted_items, "added", pDialog)
if doupdate:
embyupdate, kodiupdate_video = doupdate
if embyupdate:
update_embydb = True
if kodiupdate_video:
self.forceLibraryUpdate = True
del items['Unsorted']
doupdate = items_process.itemsbyId(items, process_type, pDialog)
if doupdate:
embyupdate, kodiupdate_video = doupdate
if embyupdate:
update_embydb = True
if kodiupdate_video:
self.forceLibraryUpdate = True
xbmc.executebuiltin('InhibitIdleShutdown(false)')
utils.setScreensaver(value=screensaver)
# if stuff happened then do some stuff
if update_embydb:
update_embydb = False
log.info("Updating emby database.")
self.saveLastSync()
if self.forceLibraryUpdate:
# Force update the Kodi library
self.forceLibraryUpdate = False
log.info("Updating video library.")
self.incremental_count = 0
window('emby_kodiScan', value="true")
xbmc.executebuiltin('UpdateLibrary(video)')
if pDialog:
pDialog.close()
def compareDBVersion(self, current, minimum):
# Returns True if the database is up to date, False otherwise.
log.info("current: %s minimum: %s" % (current, minimum))
try:
# Compare components numerically; comparing the raw strings would
# break once any component reaches two digits ("10" < "9" lexically).
currMajor, currMinor, currPatch = (int(x) for x in current.split("."))
minMajor, minMinor, minPatch = (int(x) for x in minimum.split("."))
except ValueError:
raise ValueError("Unable to compare versions: %s, %s" % (current, minimum))
if currMajor > minMajor:
return True
elif currMajor == minMajor and (currMinor > minMinor or
(currMinor == minMinor and currPatch >= minPatch)):
return True
else:
# Database out of date.
return False
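# Illustrative check (assumption: plain "major.minor.patch" strings) of the
# numeric comparison above:
#
#     compareDBVersion("3.0.11", "3.0.9")  -> True
#     compareDBVersion("2.9.9", "3.0.0")   -> False
#
# A raw string comparison would get the first case wrong, since
# "11" < "9" lexicographically.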
def run(self):
try:
self.run_internal()
except Warning as e:
# Membership tests on the exception object itself are fragile;
# match against its string representation instead.
if "restricted" in str(e):
pass
elif "401" in str(e):
pass
except Exception as e:
ga = GoogleAnalytics()
errStrings = ga.formatException()
if not (hasattr(e, 'quiet') and e.quiet):
ga.sendEventData("Exception", errStrings[0], errStrings[1])
window('emby_dbScan', clear=True)
log.exception(e)
xbmcgui.Dialog().ok(
heading=lang(29999),
line1=(
"Library sync thread has exited! "
"You should restart Kodi now. "
"Please report this on the forum."),
line2=(errStrings[0] + " (" + errStrings[1] + ")"))
def run_internal(self):
dialog = xbmcgui.Dialog()
startupComplete = False
log.warn("---===### Starting LibrarySync ###===---")
if utils.verify_advancedsettings():
# Advancedsettings was modified, Kodi needs to restart
log.warn("###===--- LibrarySync Aborted ---===###")
return
while not self.monitor.abortRequested():
# In the event the server goes offline
while self.suspend_thread:
# Set in service.py
if self.monitor.waitForAbort(5):
# Abort was requested while waiting. We should exit
break
if (window('emby_dbCheck') != "true" and settings('SyncInstallRunDone') == "true"):
# Verify the validity of the database
log.info("Doing DB Version Check")
with database.DatabaseConn('emby') as cursor:
emby_db = embydb.Embydb_Functions(cursor)
currentVersion = emby_db.get_version()
###$ Begin migration $###
if not currentVersion:
currentVersion = emby_db.get_version(settings('dbCreatedWithVersion') or self.clientInfo.get_version())
log.info("Migration of database version completed")
###$ End migration $###
window('emby_version', value=currentVersion)
minVersion = window('emby_minDBVersion')
uptoDate = self.compareDBVersion(currentVersion, minVersion)
if not uptoDate:
log.warn("Database version out of date: %s minimum version required: %s"
% (currentVersion, minVersion))
resp = dialog.yesno(lang(29999), lang(33022))
if not resp:
log.warn("Database version is out of date! USER IGNORED!")
dialog.ok(lang(29999), lang(33023))
else:
database.db_reset()
break
window('emby_dbCheck', value="true")
if not startupComplete:
# Verify the video database can be found
videoDb = database.video_database()
if not xbmcvfs.exists(videoDb):
# Database does not exists
log.error(
"The current Kodi version is incompatible "
"with the Emby for Kodi add-on. Please visit "
"https://github.com/MediaBrowser/Emby.Kodi/wiki "
"to know which Kodi versions are supported.")
dialog.ok(
heading=lang(29999),
line1=lang(33024))
break
# Run start up sync
log.warn("Database version: %s", window('emby_version'))
log.info("SyncDatabase (started)")
startTime = datetime.now()
librarySync = self.startSync()
elapsedTime = datetime.now() - startTime
log.info("SyncDatabase (finished in: %s) %s"
% (str(elapsedTime).split('.')[0], librarySync))
# Add other servers at this point
# TODO: re-add once plugin listing is created
# self.user.load_connect_servers()
# Only try the initial sync once per kodi session regardless
# This will prevent an infinite loop in case something goes wrong.
startupComplete = True
# Process updates
if self.incremental_count > 5:
self.incremental_count = 0
window('emby_kodiScan', clear=True)
if ((not xbmc.Player().isPlaying() or xbmc.getCondVisibility('VideoPlayer.Content(livetv)')) and
window('emby_dbScan') != "true" and window('emby_shouldStop') != "true"):
self.incrementalSync()
if window('emby_onWake') == "true" and window('emby_online') == "true":
# Kodi is waking up
# Set in kodimonitor.py
window('emby_onWake', clear=True)
if window('emby_syncRunning') != "true":
log.info("SyncDatabase onWake (started)")
librarySync = self.startSync()
log.info("SyncDatabase onWake (finished) %s" % librarySync)
if self.stop_thread:
# Set in service.py
log.debug("Service terminated thread.")
break
if self.monitor.waitForAbort(1):
# Abort was requested while waiting. We should exit
break
log.warn("###===--- LibrarySync Stopped ---===###")
def stopThread(self):
self.stop_thread = True
log.debug("Ending thread...")
def suspendThread(self):
self.suspend_thread = True
log.debug("Pausing thread...")
def resumeThread(self):
self.suspend_thread = False
log.debug("Resuming thread...")
class ManualSync(LibrarySync):
def __init__(self):
LibrarySync.__init__(self)
def sync(self, mediatype=None):
if mediatype in ('movies', 'boxsets', 'musicvideos', 'tvshows'):
with database.DatabaseConn('emby') as cursor_emby:
with database.DatabaseConn('video') as cursor_video:
pDialog = self.progressDialog("Manual Sync: %s" % mediatype)
if mediatype == 'movies':
self.movies(cursor_emby, cursor_video, pDialog)
elif mediatype == "boxsets":
self.boxsets(cursor_emby, cursor_video, pDialog)
elif mediatype =='musicvideos':
self.musicvideos(cursor_emby, cursor_video, pDialog)
elif mediatype == 'tvshows':
self.tvshows(cursor_emby, cursor_video, pDialog)
pDialog.close()
return
elif mediatype == 'music':
with database.DatabaseConn('emby') as cursor_emby:
with database.DatabaseConn('music') as cursor_music:
pDialog = self.progressDialog("Manual Sync: %s" % mediatype)
self.music(cursor_emby, cursor_music, pDialog)
pDialog.close()
return
else:
return self.fullSync(manualrun=True)
def movies(self, embycursor, kodicursor, pdialog):
return Movies(embycursor, kodicursor, pdialog).compare_all()
def boxsets(self, embycursor, kodicursor, pdialog):
return Movies(embycursor, kodicursor, pdialog).force_refresh_boxsets()
def musicvideos(self, embycursor, kodicursor, pdialog):
return MusicVideos(embycursor, kodicursor, pdialog).compare_all()
def tvshows(self, embycursor, kodicursor, pdialog):
return TVShows(embycursor, kodicursor, pdialog).compare_all()
def music(self, embycursor, kodicursor, pdialog):
return Music(embycursor, kodicursor).compare_all()
| agentxan/plugin.video.emby | resources/lib/librarysync.py | Python | gpl-2.0 | 32,260 | ["VisIt"] | b8dc97ef9f27a6aaccc5fd97f42249f371cca5334fed67a9fd7ea9d6d806b39b |
# creates: ag.png water_divide_surf.png
from urllib import urlretrieve
def setup(app):
pass
urlretrieve('http://wiki.fysik.dtu.dk/ase-files/ag.png', 'ase/ag.png')
urlretrieve('http://wiki.fysik.dtu.dk/ase-files/water_divide_surf.png',
'ase/dft/water_divide_surf.png')
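# Note (assumption, not from the original file): this module is Python 2
# code; under Python 3 the equivalent import would be
#
#     from urllib.request import urlretrieve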
| slabanja/ase | doc/images.py | Python | gpl-2.0 | 284 | ["ASE"] | 418a02d3f383c354671c6dce42f8c21773da46cf4621c4be9c556cd5fac4cd9a |
# Test executable #5 to exercise the Gauss-Hermite class
# Here, we fit two Gauss-Hermite expansions to a donut-shaped profile
# (each one forces one of the zeroth-order coefficients to be zero)
# The SciPy least squares method is used.
#
# Copyright (c) 2013 RadiaBeam Technologies. All rights reserved
# python imports
import math
# SciPy imports
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import leastsq
# RadiaBeam imports
from radtrack.fields import RbGaussHermiteMN
# ---------------------------------------------------------
# Make sure the residuals() method has access to the necessary
# 'global' data:
global mMax, numFuncCalls, hS1, hS2, zGrid, tGrid, nCells
# Specify the central laser wavelength
lambda0 = 10.e-06
# Need a place holder for the waist size
w0 = 10.*lambda0
# Define the maximum order(s) of the Hermite expansion
mMax = 24 # horizontal and vertical
# Create two instances of the Hermite expansion class
hS1 = RbGaussHermiteMN.RbGaussHermiteMN(lambda0,w0,w0,0.)
hS2 = RbGaussHermiteMN.RbGaussHermiteMN(lambda0,w0,w0,0.)
# Specify the desired grid size
numPts = 50
nCells = numPts**2
# load up the x,y locations of the mesh
xMin = -4.*w0
xMax = 4.*w0
yMin = xMin
yMax = xMax
xArr = np.zeros(numPts)
for iLoop in range(numPts):
xArr[iLoop] = xMin + iLoop * (xMax-xMin) / (numPts-1)
yArr = np.zeros(numPts)
for jLoop in range(numPts):
yArr[jLoop] = yMin + jLoop * (yMax-yMin) / (numPts-1)
xGrid = np.zeros((numPts, numPts))
yGrid = np.zeros((numPts, numPts))
for iLoop in range(numPts):
for jLoop in range(numPts):
xGrid[iLoop,jLoop] = xMin + iLoop * (xMax-xMin) / (numPts-1)
yGrid[iLoop,jLoop] = yMin + jLoop * (yMax-yMin) / (numPts-1)
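# Note: the loops above are the explicit form of an equivalent vectorized
# construction (not in the original source):
#   xArr = np.linspace(xMin, xMax, numPts)
#   yArr = np.linspace(yMin, yMax, numPts)
#   xGrid, yGrid = np.meshgrid(xArr, yArr, indexing='ij')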
# Create transverse field profile (#3 elliptical Gaussian donut)
ExGrid = np.zeros((numPts, numPts))
wx3 = 2.0 * w0
rad1 = 1.0 * w0
rad2 = 2.0 * w0
mVal = 0.0
for iLoop in range(numPts):
for jLoop in range(numPts):
xArg = xArr[iLoop]
yArg = yArr[jLoop]
rArg = math.sqrt(xArg**2 + yArg**2)
rFactor = 1.0
if rArg <= rad2:
rFactor = 0.5 + 0.5*math.cos(math.pi*((rArg-rad1)/(rad2-rad1) - 1.))
if rArg <= rad1:
rFactor = 0.0
ExGrid[iLoop, jLoop] = rFactor*math.exp(-(xArg/wx3)**2)*math.exp(-(yArg/wx3)**2)
mVal = max(ExGrid[iLoop, jLoop], mVal)
# Divide out the maximum value
ExGrid /= mVal
# Calculate residuals for the least squares analysis
# params - array of fitting parameters
numFuncCalls = 0
def residuals(params, e, x, y):
global mMax, numFuncCalls, hS1, hS2, zGrid, tGrid, nCells
hS1.setWaistX(params[0])
hS1.setWaistY(params[0])
hS2.setWaistX(params[0])
hS2.setWaistY(params[0])
hCoefs = np.zeros(mMax+1)
for ii in range(mMax):
hCoefs[ii+1] = params[1+ii]
hS1.setMCoef(hCoefs)
hS2.setNCoef(hCoefs)
vCoefs = np.zeros(mMax+1)
for ii in range(mMax+1):
vCoefs[ii] = params[mMax+1+ii]
hS1.setNCoef(vCoefs)
hS2.setMCoef(vCoefs)
# let the user know what's going on if many function calls are required
if numFuncCalls == 0:
print ' '
        print 'Number of calls to method residuals():'
numFuncCalls += 1
    if numFuncCalls % 100 == 0:
print ' ', numFuncCalls
return e-hS1.evaluateEx(x,y,0.,0.) \
-hS2.evaluateEx(x,y,0.,0.)
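# Layout of the fitting vector `params` as unpacked above (from the code):
#   params[0]                  shared x and y waist size
#   params[1 : mMax+1]         first coefficient set (its C0 is pinned to zero)
#   params[mMax+1 : 2*mMax+2]  second coefficient set (all mMax+1 values free)
# hS1 and hS2 receive the same two sets with their M/N roles swapped.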
# plot the transverse field profile
ncLevels = 12
vLevels = [0.001, 0.05, 0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.85, 0.95, 1.05]
plt.figure(1)
cs1 = plt.contourf(xGrid, yGrid, ExGrid, vLevels)
plt.colorbar(cs1)
plt.axis([xMin, xMax, yMin, yMax])
plt.xlabel('x [m]')
plt.ylabel('y [m]')
plt.title('x-section #2: Gaussian donut profile (sharp cut-off)')
# choose initial guesses for all fitting parameters
# also, specify the scale of variations for each
paramGuess = np.zeros(2*mMax+2)
paramGuess[0] = w0 # horizontal waist
paramGuess[1] = 1.0
for ii in range(mMax-1):
paramGuess[ii+2] = 1.e-5 # horiz. coeff's
paramGuess[mMax+1] = 1.0
for ii in range(mMax):
    paramGuess[mMax+2+ii] = 1.e-5 # vertical coeff's (skip the 1.0 guess at mMax+1)
# invoke the least squares algorithm
result = leastsq(residuals, paramGuess, \
args=(np.reshape(ExGrid,nCells), \
np.reshape(xGrid,nCells), \
np.reshape(yGrid,nCells)), \
full_output=True, ftol=1e-4, \
maxfev=1200)
parFit = result[0]
nEvals = result[2]['nfev']
resVals = result[2]['fvec']
message = result[3]
iError = result[4]
print ' '
print ' iError = ', iError
print ' message = ', message
print ' nEvals = ', nEvals
print ' resVals = ', resVals
# load the results into named variables (for clarity)
wxFit = parFit[0]
mCFit = np.zeros(mMax+1)
for ii in range(mMax):
mCFit[ii+1] = parFit[1+ii]
nCFit = np.zeros(mMax+1)
for ii in range(mMax+1):
nCFit[ii] = parFit[mMax+1+ii]
# check the results
print ' '
print 'The least squares minimization has completed:'
print ' wx = ', wx3, '; ', wxFit
print ' C0x = 0.; ', mCFit[0]
print ' C0y = NA; ', nCFit[0]
# print the remaining fitted coefficients (indices 1..mMax)
for ii in range(1, mMax+1):
    pad = ' ' if ii < 10 else ''   # pad single-digit indices to align columns
    print ' C%dx%s= NA; ' % (ii, pad), mCFit[ii]
    print ' C%dy%s= NA; ' % (ii, pad), nCFit[ii]
# load up the fitted electric field at all grid points
hS1.setWaistX(wxFit)
hS1.setWaistY(wxFit)
hS1.setMCoef(mCFit)
hS1.setNCoef(nCFit)
Ex1 = np.reshape(hS1.evaluateEx(
np.reshape(xGrid,nCells),
np.reshape(yGrid,nCells), 0., 0.),
(numPts, numPts))
hS2.setWaistX(wxFit)
hS2.setWaistY(wxFit)
hS2.setMCoef(nCFit)
hS2.setNCoef(mCFit)
Ex2 = np.reshape(hS2.evaluateEx(
np.reshape(xGrid,nCells),
np.reshape(yGrid,nCells), 0., 0.),
(numPts, numPts))
ExFit = Ex1 + Ex2
# plot the fitted transverse field profile
plt.figure(2)
cs2 = plt.contourf(xGrid, yGrid, ExFit, vLevels)
plt.colorbar(cs2)
plt.axis([xMin, xMax, yMin, yMax])
plt.xlabel('x [m]')
plt.ylabel('y [m]')
plt.title('x-section #2: Result of the least squares fit')
# plot the transverse profile of the difference
plt.figure(3)
cs3 = plt.contourf(xGrid, yGrid, ExFit-ExGrid, ncLevels)
plt.colorbar(cs3)
plt.axis([xMin, xMax, yMin, yMax])
plt.xlabel('x [m]')
plt.ylabel('y [m]')
plt.title('x-section #2: Absolute differences in Ex')
plt.show()
|
radiasoft/radtrack
|
experimental/hermite/testHermite05.py
|
Python
|
apache-2.0
| 9,777
|
[
"Gaussian"
] |
7880f46a0cfebc589846c2c17a131046f1b92fbd71df50dffea0f50a7da41e05
|
# Copyright (C) 2010-2019 The ESPResSo project
#
# This file is part of ESPResSo.
#
# ESPResSo is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ESPResSo is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from .script_interface import ScriptObjectRegistry, ScriptInterfaceHelper, script_interface_register
from .__init__ import has_features
if any(has_features(i) for i in ["LB_BOUNDARIES", "LB_BOUNDARIES_GPU"]):
@script_interface_register
class LBBoundaries(ScriptObjectRegistry):
"""
Creates a set of lattice-Boltzmann boundaries.
"""
_so_name = "LBBoundaries::LBBoundaries"
def add(self, *args, **kwargs):
"""
            Add a boundary to the set.

            Pass either an existing :obj:`LBBoundary` instance or keyword
            arguments from which a new :obj:`LBBoundary` is constructed.
"""
if len(args) == 1:
if isinstance(args[0], LBBoundary):
lbboundary = args[0]
else:
raise TypeError(
"Either a LBBoundary object or key-value pairs for the parameters of a LBBoundary object need to be passed.")
else:
lbboundary = LBBoundary(**kwargs)
self.call_method("add", object=lbboundary)
return lbboundary
def remove(self, lbboundary):
"""
Removes a boundary from the set.
Parameters
----------
lbboundary : :obj:`LBBoundary`
The boundary to be removed from the set.
"""
self.call_method("remove", object=lbboundary)
def clear(self):
"""
Removes all boundaries.
"""
self.call_method("clear")
def size(self):
return self.call_method("size")
def empty(self):
return self.call_method("empty")
@script_interface_register
class LBBoundary(ScriptInterfaceHelper):
"""
Creates a LB boundary.
"""
_so_name = "LBBoundaries::LBBoundary"
_so_bind_methods = ("get_force",)
|
KaiSzuttor/espresso
|
src/python/espressomd/lbboundaries.py
|
Python
|
gpl-3.0
| 2,658
|
[
"ESPResSo"
] |
88b3a541eb1fedb318975553278cd15ca35e06ed4291c14dcd8065565f83b52a
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
#Author: Read AUTHORS file.
#License: Read COPYING file.
import gettext
from PyQt4 import QtCore, QtGui
from constants import const
from ui_main import Ui_Dialog
from lib import stripFilename, stripPath, getPardusRelease, ratioCalc
from about import aboutDialog
from options import optionsDialog
from update import update
from logs import logsDialog
import os
t = gettext.translation(const.APP_NAME, const.APP_I18NDIR, fallback = True)
_ = t.ugettext
class mainDialog(QtGui.QDialog, Ui_Dialog):
def __init__(self):
#
QtGui.QDialog.__init__(self)
self.setupUi(self)
#self.__about = aboutDialog()
self.__options = optionsDialog()
self.__aboutDialog = aboutDialog()
self.__logsDialog = logsDialog()
self.__update = update()
#Aliases for GUI objects
self.__status = self.label_3
self.__ip = self.label_4
self.__ipStatus = self.label_12
self.__lastupdateIp = self.label_7
self.__lastupdateTime = self.label_9
        #Get user dialogs and descriptions
self.i18n()
#Refresh
self.on_pushButton_9_clicked()
self.__options.load()
@QtCore.pyqtSignature("void")
def on_pushButton_3_clicked(self):
self.close()
#Options button
@QtCore.pyqtSignature("void")
def on_pushButton_8_clicked(self):
#
self.__options.action()
#About button
@QtCore.pyqtSignature("void")
def on_pushButton_7_clicked(self):
#
self.__aboutDialog.exec_()
#Manage button
@QtCore.pyqtSignature("void")
def on_pushButton_6_clicked(self):
#
self.openURL(const.MANAGEACCOUNT_URL)
#NM button
@QtCore.pyqtSignature("void")
def on_pushButton_10_clicked(self):
#
os.system("dbus-launch network-manager")
#Refresh button
@QtCore.pyqtSignature("void")
def on_pushButton_9_clicked(self):
#
if self.__update.readIp():
self.__ip.setText(self.__update.ip)
self.__logsDialog.load()
self.__lastupdateIp.setText(self.__logsDialog.getLastIp())
self.__lastupdateTime.setText(self.__logsDialog.getLastTime())
nameserver = self.__options.getFirstNameserver()
if nameserver and (nameserver == const.ODNS_NAMESERVER_01 or \
nameserver == const.ODNS_NAMESERVER_02):
self.__status.setText(self.__status_yes)
else:
self.__status.setText(self.__status_no)
if self.__update.ip == self.__logsDialog.getLastIp():
self.__ipStatus.setText(self.__status_yes)
else:
self.__ipStatus.setText(self.__status_no)
#Log button
@QtCore.pyqtSignature("void")
def on_pushButton_5_clicked(self):
#
self.__logsDialog.action()
#Update button
@QtCore.pyqtSignature("void")
def on_pushButton_4_clicked(self):
#
self.__options.load()
self.__update.action(self.__options.getOpendns(const.OPT_OPENDNS_UNAME), \
self.__options.getOpendns(const.OPT_OPENDNS_PASS), \
self.__options.getOpendns(const.OPT_OPENDNS_NETWORK) )
self.on_pushButton_9_clicked()
def openURL(self, URL):
QtGui.QDesktopServices().openUrl(QtCore.QUrl(URL))
################################################################################
#
# i18n
#
################################################################################
def i18n(self):
#
self.label.setText( const.NAME )
self.label_8.setText( _("OpenDNS update client for Pardus Linux") )
self.setWindowTitle(const.NAME+" "+const.VERSION)
self.pushButton_7.setText( _("About") )
self.groupBox_2.setTitle( _("Status") )
self.label_2.setText( _("OpenDNS service in use") )
self.label_5.setText( _("Current WAN ip adress") )
self.label_6.setText( _("Last updated ip adress") )
self.label_11.setText( _("OpenDNS service is updated") )
self.label_10.setText( _("Last update time") )
self.pushButton_10.setText( _("Network Manager") )
self.pushButton_9.setText( _("Refresh") )
self.pushButton_6.setText( _("Manage") )
self.pushButton_5.setText( _("Log") )
self.pushButton_4.setText( _("Update") )
self.pushButton_3.setText( _("Close") )
self.pushButton_8.setText( _("Options") )
#Log Dialog
self.__logsDialog.setWindowTitle( self.pushButton_5.text() )
self.__logsDialog.label.setText( _("Log records") )
self.__logsDialog.pushButton.setText( self.pushButton_3.text() )
#About Dialog
description = _("<p>Pog is openDNS service update client for Pardus Linux. For moore information and new versions visit to;<br>")
developers = _("<p>Developers;<br>")
translators = _("<p>Translated by;<br>")
copying = _("<p>This program released under the terms of the GNU General Public License. Please read COPYING file</p>")
abouttext = description+"<a href="+const.WEBPAGE+">"+const.WEBPAGE+"</a></p>"
abouttext += copying
abouttext += developers + const.DEVELOPERS + "</p>"
abouttext += translators + const.TRANSLATORS + "</p>"
self.__aboutDialog.setWindowTitle( self.pushButton_7.text() )
self.__aboutDialog.textBrowser.setText(abouttext)
self.__aboutDialog.label.setText( const.NAME )
self.__aboutDialog.label_2.setText( const.VERSION )
self.__aboutDialog.pushButton.setText( self.pushButton_3.text() )
#Options Dialog
self.__options.setWindowTitle( self.pushButton_8.text() )
self.__options.tabWidget.setTabText(0, _("Account") )
self.__options.tabWidget.setTabText(1, self.pushButton_4.text() )
self.__options.label.setText( _("User name") )
self.__options.label_2.setText( _("Password") )
self.__options.label_3.setText( _("Network name") )
self.__options.label_6.setText( "<a href='opendns'>"+ _("Create account") +"</a>")
self.__options.checkBox.setText( _("Update periodically") )
self.__options.label_4.setText( _("Delay") )
self.__options.label_5.setText( _("minutes") )
self.__options.pushButton_2.setText( _("Save") )
self.__options.pushButton.setText( self.pushButton_3.text() )
#messages
        self.__options.messages[const.MSG_OPT_01] = _("Running for the first time, you should save your OpenDNS account information.")
self.__options.messages[const.MSG_OPT_02] = _("Configuration not loaded, you may not be root user.")
self.__options.messages[const.MSG_OPT_03] = _("Configuration saved.")
self.__options.messages[const.MSG_OPT_04] = _("Configuration not saved.")
self.__options.messages[const.MSG_OPT_05] = _("Crontab configuration not saved.")
self.__status_yes = const.STATUS_YES %( _("YES") )
self.__status_no = const.STATUS_NO %( _("NO") )
|
alierkanimrek/pog
|
src/main.py
|
Python
|
gpl-3.0
| 7,150
|
[
"VisIt"
] |
20cc795b6126984344dc5c01fd98beae6a7aa0ba1bc839c5187c3fd7ace91367
|
# Copyright (C) 2019 The ESPResSo project
#
# This file is part of ESPResSo.
#
# ESPResSo is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ESPResSo is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import unittest as ut
import importlib_wrapper
def disable_visualizer_GUI(code):
breakpoint = "while True:"
assert breakpoint in code
code = code.replace(breakpoint, "for _ in range(5):", 1)
breakpoint = "t = Thread(target=main)"
assert breakpoint in code
code = code.split(breakpoint, 1)[0] + "main()"
return code
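# Effect of the substitutions above, for reference:
#   "while True:"  ->  "for _ in range(5):"   (bounds the interactive loop)
#   everything from "t = Thread(target=main)" on is replaced by "main()",
#   so the sample runs to completion without spawning the GUI thread.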
sample, skipIfMissingFeatures = importlib_wrapper.configure_and_import(
"@SAMPLES_DIR@/billiard.py", substitutions=disable_visualizer_GUI)
@skipIfMissingFeatures
class Sample(ut.TestCase):
system = sample.system
if __name__ == "__main__":
ut.main()
|
espressomd/espresso
|
testsuite/scripts/samples/test_billiard.py
|
Python
|
gpl-3.0
| 1,318
|
[
"ESPResSo"
] |
4f7277112f50cf4da150089eb726312633614a8a4d5b8fa6c9bd0a328ff6d881
|
"""
Tests for IPython.config.application.Application
Authors:
* Brian Granger
"""
#-----------------------------------------------------------------------------
# Copyright (C) 2008-2011 The IPython Development Team
#
# Distributed under the terms of the BSD License. The full license is in
# the file COPYING, distributed as part of this software.
#-----------------------------------------------------------------------------
#-----------------------------------------------------------------------------
# Imports
#-----------------------------------------------------------------------------
import logging
from io import StringIO
from unittest import TestCase
import nose.tools as nt
from IPython.config.configurable import Configurable
from IPython.config.loader import Config
from IPython.config.application import (
Application
)
from IPython.utils.traitlets import (
Bool, Unicode, Integer, List, Dict
)
#-----------------------------------------------------------------------------
# Code
#-----------------------------------------------------------------------------
class Foo(Configurable):
i = Integer(0, config=True, help="The integer i.")
j = Integer(1, config=True, help="The integer j.")
name = Unicode(u'Brian', config=True, help="First name.")
class Bar(Configurable):
b = Integer(0, config=True, help="The integer b.")
enabled = Bool(True, config=True, help="Enable bar.")
class MyApp(Application):
name = Unicode(u'myapp')
running = Bool(False, config=True,
help="Is the app running?")
classes = List([Bar, Foo])
config_file = Unicode(u'', config=True,
help="Load this config file")
aliases = Dict({
'i' : 'Foo.i',
'j' : 'Foo.j',
'name' : 'Foo.name',
'enabled' : 'Bar.enabled',
'log-level' : 'Application.log_level',
})
flags = Dict(dict(enable=({'Bar': {'enabled' : True}}, "Set Bar.enabled to True"),
disable=({'Bar': {'enabled' : False}}, "Set Bar.enabled to False"),
crit=({'Application' : {'log_level' : logging.CRITICAL}},
"set level=CRITICAL"),
))
def init_foo(self):
self.foo = Foo(parent=self)
def init_bar(self):
self.bar = Bar(parent=self)
class TestApplication(TestCase):
def test_log(self):
stream = StringIO()
app = MyApp(log_level=logging.INFO)
handler = logging.StreamHandler(stream)
# trigger reconstruction of the log formatter
app.log.handlers = [handler]
app.log_format = "%(message)s"
app.log.info("hello")
nt.assert_in("hello", stream.getvalue())
def test_basic(self):
app = MyApp()
self.assertEqual(app.name, u'myapp')
self.assertEqual(app.running, False)
self.assertEqual(app.classes, [MyApp,Bar,Foo])
self.assertEqual(app.config_file, u'')
def test_config(self):
app = MyApp()
app.parse_command_line(["--i=10","--Foo.j=10","--enabled=False","--log-level=50"])
config = app.config
self.assertEqual(config.Foo.i, 10)
self.assertEqual(config.Foo.j, 10)
self.assertEqual(config.Bar.enabled, False)
self.assertEqual(config.MyApp.log_level,50)
def test_config_propagation(self):
app = MyApp()
app.parse_command_line(["--i=10","--Foo.j=10","--enabled=False","--log-level=50"])
app.init_foo()
app.init_bar()
self.assertEqual(app.foo.i, 10)
self.assertEqual(app.foo.j, 10)
self.assertEqual(app.bar.enabled, False)
def test_flags(self):
app = MyApp()
app.parse_command_line(["--disable"])
app.init_bar()
self.assertEqual(app.bar.enabled, False)
app.parse_command_line(["--enable"])
app.init_bar()
self.assertEqual(app.bar.enabled, True)
def test_aliases(self):
app = MyApp()
app.parse_command_line(["--i=5", "--j=10"])
app.init_foo()
self.assertEqual(app.foo.i, 5)
app.init_foo()
self.assertEqual(app.foo.j, 10)
def test_flag_clobber(self):
"""test that setting flags doesn't clobber existing settings"""
app = MyApp()
app.parse_command_line(["--Bar.b=5", "--disable"])
app.init_bar()
self.assertEqual(app.bar.enabled, False)
self.assertEqual(app.bar.b, 5)
app.parse_command_line(["--enable", "--Bar.b=10"])
app.init_bar()
self.assertEqual(app.bar.enabled, True)
self.assertEqual(app.bar.b, 10)
def test_flatten_flags(self):
cfg = Config()
cfg.MyApp.log_level = logging.WARN
app = MyApp()
app.update_config(cfg)
self.assertEqual(app.log_level, logging.WARN)
self.assertEqual(app.config.MyApp.log_level, logging.WARN)
app.initialize(["--crit"])
self.assertEqual(app.log_level, logging.CRITICAL)
# this would be app.config.Application.log_level if it failed:
self.assertEqual(app.config.MyApp.log_level, logging.CRITICAL)
def test_flatten_aliases(self):
cfg = Config()
cfg.MyApp.log_level = logging.WARN
app = MyApp()
app.update_config(cfg)
self.assertEqual(app.log_level, logging.WARN)
self.assertEqual(app.config.MyApp.log_level, logging.WARN)
app.initialize(["--log-level", "CRITICAL"])
self.assertEqual(app.log_level, logging.CRITICAL)
# this would be app.config.Application.log_level if it failed:
self.assertEqual(app.config.MyApp.log_level, "CRITICAL")
def test_extra_args(self):
app = MyApp()
app.parse_command_line(["--Bar.b=5", 'extra', "--disable", 'args'])
app.init_bar()
self.assertEqual(app.bar.enabled, False)
self.assertEqual(app.bar.b, 5)
self.assertEqual(app.extra_args, ['extra', 'args'])
app = MyApp()
app.parse_command_line(["--Bar.b=5", '--', 'extra', "--disable", 'args'])
app.init_bar()
self.assertEqual(app.bar.enabled, True)
self.assertEqual(app.bar.b, 5)
self.assertEqual(app.extra_args, ['extra', '--disable', 'args'])
|
marcoantoniooliveira/labweb
|
oscar/lib/python2.7/site-packages/IPython/config/tests/test_application.py
|
Python
|
bsd-3-clause
| 6,352
|
[
"Brian"
] |
660dfaa412754427964b02b82064af123b532a2da50c7accb220e2a141354be0
|
import argparse
import sys, traceback
import os
from os import path
sys.path.append(path.abspath(path.join(path.dirname(__file__), "..")))
import csv, math
#import rasterio
import ogr, gdal, gdalconst, osr
#import fiona
import numpy
from lib.sitkaAPI import downloadUnzipTopo
from lib.loghelper import Logger
from lib.exception import DataException, MissingException, NetworkException
from lib.riverscapes import Project
from lib.topoproject import TopoProject
__version__="0.1.1"
def hydro_gis_export(hydro_project_xml, topo_project_xml, outfolder):
"""
    :param hydro_project_xml: path to the hydro model project.rs.xml
    :param topo_project_xml: path to the topo project.rs.xml
    :param outfolder: destination folder for the exported GIS layers
    :return: 0 on success
"""
#gdal.UseExceptions()
log = Logger("Hydro GIS Export")
# 1 todo Read project.rs.xml
rs_hydro = Project(hydro_project_xml)
rs_topo = TopoProject(topo_project_xml)
hydro_results_folder = os.path.dirname(hydro_project_xml)
if not rs_hydro.ProjectMetadata.has_key("Visit"):
raise MissingException("Cannot Find Visit ID")
visit_id = rs_hydro.ProjectMetadata['Visit']
dem = gdal.Open(rs_topo.getpath("DEM"))
dem_srs = dem.GetProjection()
dem_x_size = dem.RasterXSize
dem_y_size = dem.RasterYSize
dem_band = dem.GetRasterBand(1)
dem_ndv = dem_band.GetNoDataValue()
dem_geotransfrom = dem.GetGeoTransform()
# 3 Get data columns in csv file
csvfile = os.path.join(hydro_results_folder, "dem_grid_results.csv")
csvfile_clean = os.path.join(hydro_results_folder, "dem_grid_results_clean_header.csv")
if not os.path.isfile(csvfile):
raise MissingException("Required file {} does not exist.".format(csvfile))
with open(csvfile, "rb") as f_in, open(csvfile_clean, "wb") as f_out:
reader = csv.reader(f_in)
# writer = csv.writer(f_out)
        cols = [col for col in reader.next() if col not in ["Y", "X"]]
        # alt (sanitized headers): [col.replace(".", "_") for col in reader.next() if col not in ["Y", "X"]]
log.info("Loaded fields from csv file.")
# writer.writerow(['X', 'Y'] + cols)
# for row in reader:
# writer.writerow(row)
# log.info("Saved csv file with sanitized headers.")
# Write VRT file
vrt = os.path.join(hydro_results_folder, '{}.vrt'.format("dem_grid_results"))
with open(vrt, 'wt') as f:
f.write('<OGRVRTDataSource>\n')
f.write('\t<OGRVRTLayer name="{}">\n'.format("dem_grid_results"))
f.write('\t\t<SrcDataSource>{}</SrcDataSource>\n'.format(csvfile))
f.write('\t\t<SrcLayer>{}</SrcLayer>\n'.format("dem_grid_results"))
f.write('\t\t<GeometryType>wkbPoint25D</GeometryType>\n')
f.write('\t\t<LayerSRS>{}</LayerSRS>\n'.format(dem_srs))
f.write('\t\t<GeometryField encoding="PointFromColumns" x="X" y="Y" />\n')
for field in cols:
f.write('\t\t<Field name="{}" type="Real" subtype="Float32" />\n'.format(field))
f.write('\t</OGRVRTLayer>\n')
f.write('</OGRVRTDataSource>\n')
log.info("Generated vrt file {}".format(vrt))
# Open csv as OGR
ogr_vrt = ogr.Open(vrt, 1)
if ogr_vrt is None:
raise DataException("unable to open {}".format(vrt))
layer = ogr_vrt.GetLayer()
# 4 Generate geotiff for each column in the CSV file
driver = gdal.GetDriverByName("GTiff")
for col in cols:
out_tif = os.path.join(outfolder, '{}.tif'.format(col))
out_raster = driver.Create(out_tif, dem_x_size, dem_y_size, 1, gdalconst.GDT_Float32)
out_raster.SetGeoTransform(dem_geotransfrom)
out_raster.SetProjection(dem_srs)
band = out_raster.GetRasterBand(1)
band.SetNoDataValue(dem_ndv)
band.FlushCache()
gdal.RasterizeLayer(out_raster, [1], layer, options=["ATTRIBUTE={}".format(col)])
band.GetStatistics(0, 1)
band.FlushCache()
out_raster.FlushCache()
log.info("Generated {} for attribute {}".format(out_tif, col))
if col == "Depth":
raw = numpy.array(band.ReadAsArray())
masked = numpy.ma.masked_array(raw, raw == dem_ndv)
bool_raster = numpy.array(masked, "bool")
numpy.greater(masked, 0, bool_raster)
raster_mem = gdal.GetDriverByName("GTIFF").Create(os.path.join(outfolder, "Temp.tif"), dem_x_size, dem_y_size, 1, gdalconst.GDT_Int16)
raster_mem.SetGeoTransform(dem_geotransfrom)
raster_mem.SetProjection(dem_srs)
band_mem = raster_mem.GetRasterBand(1)
band_mem.WriteArray(bool_raster, 0, 0)
band_mem.SetNoDataValue(dem_ndv)
band_mem.FlushCache()
temp = ogr.GetDriverByName("ESRI Shapefile").CreateDataSource(os.path.join(outfolder, "TempExtent.shp"))
temp_layer = temp.CreateLayer("RawExtent", osr.SpatialReference(wkt=dem_srs), ogr.wkbPolygon)
temp_layer.CreateField(ogr.FieldDefn("Value", ogr.OFTInteger))
temp_layer.CreateField(ogr.FieldDefn("Area", ogr.OFTReal))
gdal.Polygonize(band_mem, None, temp_layer, 0)
del raster_mem
#
# for feature in temp_layer:
# feature.SetField("Area", feature.GetGeometryRef().GetArea())
# temp_layer.SetFeature(feature)
# Stage Extent
# temp_layer.SetAttributeFilter("Value=1")
# shp_extent = os.path.join(outfolder, "StageExtent.shp")
# driver_extent = ogr.GetDriverByName("ESRI Shapefile").CreateDataSource(shp_extent)
# driver_extent.CopyLayer(temp_layer, "StageExtent")
# driver_extent = None
# ogr_extent = ogr.Open(shp_extent, 1)
# layer_extent = ogr_extent.GetLayer("StageExtent")
# field_extent = ogr.FieldDefn("ExtentType", ogr.OFTString)
# layer_extent.CreateField(field_extent)
# area_current = 0.0
# fid_current = None
# for feature in layer_extent:
# area_feat = feature.GetGeometryRef().GetArea()
# if area_feat > area_current:
# area_current = area_feat
# fid_current = feature.GetFID()
#
# edit_feat = layer_extent.GetFeature(fid_current)
# edit_feat.SetField("ExtentType", "Channel")
# layer_extent.SetFeature(edit_feat)
#
# layer_extent.DeleteField(layer_extent.FindFieldIndex("Value", True))
# #ogr_extent.Destroy()
# log.info("Generated Stage Extent Shapefile {}".format(shp_extent))
#
# # Stage Islands
# import time
# time.sleep(5)
# temp_layer.ResetReading()
# temp_layer.SetAttributeFilter("Value=0")
# shp_islands = os.path.join(outfolder, "StageIslands.shp")
# driver_islands = ogr.GetDriverByName("ESRI Shapefile").CreateDataSource(shp_islands)
# driver_islands.CopyLayer(temp_layer, "StageIslands")
# driver_islands = None
# ogr_islands = ogr.Open(shp_islands, 1)
# layer_islands = ogr_islands.GetLayer("StageIslands")
#
# field_qual = ogr.FieldDefn("Qualifying", ogr.OFTInteger)
# field_qual.SetDefault("0")
# field_valid = ogr.FieldDefn("IsValid", ogr.OFTInteger)
# field_valid.SetDefault("0")
# layer_islands.CreateField(field_qual)
# layer_islands.CreateField(field_valid)
# layer_islands.SyncToDisk()
#
# area_current = 0.0
# fid_current = None
# for feature in layer_islands:
# if feature is not None:
# g = feature.GetGeometryRef()
# area_feat = g.GetArea()
# # todo identify qualifying islands here?
# if area_feat > area_current:
# area_current = area_feat
# fid_current = feature.GetFID()
#
# #feat_del = layer_islands.GetFeature(fid_current)
# layer_islands.DeleteFeature(fid_current)
#
# layer_islands.DeleteField(layer_islands.FindFieldIndex("Value", True))
# ogr_islands = None
# ogr_extent = None
# log.info("Generated Stage Islands Shapefile {}".format(shp_islands))
temp = None
del out_raster
shp_hydroresults = os.path.join(outfolder, "HydroResults.shp")
ogr.GetDriverByName("ESRI Shapefile").CopyDataSource(ogr_vrt, shp_hydroresults)
#out_shp = ogr.GetDriverByName("ESRI Shapefile").CreateDataSource()
# ogr_shp = ogr.Open(shp_hydroresults, 1)
# lyr = ogr_shp.GetLayer()
# lyr_defn = lyr.GetLayerDefn()
# for i in range(lyr_defn.GetFieldCount()):
# fielddefn = lyr_defn.GetFieldDefn(i)
# fielddefn.SetName(fielddefn.GetName().replace(".","_"))
# lyr.AlterFieldDefn(i, fielddefn, ogr.ALTER_NAME_FLAG)
#
# new_field = ogr.FieldDefn('V_Bearing', ogr.OFTReal)
# lyr.CreateField(new_field)
# # Calculate Velocity Bearing
# for feat in lyr:
# vel_x = feat.GetField("X_Velocity")
# vel_y = feat.GetField("Y_Velocity")
# dir = 90 - math.degrees(math.atan2(float(vel_y), float(vel_x)))
# bearing = 360 + dir if dir < 0 else dir
# feat.SetField('V_Bearing', float(bearing))
# lyr.SetFeature(feat)
log.info("Generated Hydro Results Shapefile {}".format(shp_hydroresults))
ogr_vrt = None
ogr_shp = None
return 0
def main():
# parse command line options
parser = argparse.ArgumentParser()
parser.add_argument('visitID', help='Visit ID', type=int)
parser.add_argument('outputfolder', help='Path to output folder', type=str)
parser.add_argument('--hydroprojectxml', '-p', help='(optional) hydro project xml file', type=str)
parser.add_argument('--topoprojectxml', '-t', help='(optional) topo project xml file', type=str)
parser.add_argument('--datafolder', help='(optional) Top level folder containing Hydro Model Riverscapes projects', type=str)
parser.add_argument('--verbose', help='Get more information in your logs.', action='store_true', default=False )
args = parser.parse_args()
# Make sure the output folder exists
resultsFolder = os.path.join(args.outputfolder, "outputs")
    # Initiate the log file
    log = Logger("Program")
    logfile = os.path.join(resultsFolder, "hydro_gis.log")
    xmlfile = os.path.join(resultsFolder, "hydro_gis.xml")
    log.setup(logPath=logfile, verbose=args.verbose)
try:
# Make some folders if we need to:
if not os.path.isdir(args.outputfolder):
os.makedirs(args.outputfolder)
if not os.path.isdir(resultsFolder):
os.makedirs(resultsFolder)
# If we need to go get our own topodata.zip file and unzip it we do this
# if args.datafolder is None:
# hydroDataFolder = os.path.join(args.outputfolder, "inputs")
# folderJSON, list_projectFolders = downloadUnzipTopo(args.visitID, hydroDataFolder)
# # otherwise just pass in a path to existing data
# else:
# list_projectFolders = args.datafolder
# runResult = []
# for fileJSON, projectFolder in list_projectFolders:
result = hydro_gis_export(args.hydroprojectxml, args.topoprojectxml, resultsFolder)
sys.exit(result)
except (DataException, MissingException, NetworkException) as e:
# Exception class prints the relevant information
traceback.print_exc(file=sys.stdout)
sys.exit(e.returncode)
except AssertionError as e:
log.error(e.message)
traceback.print_exc(file=sys.stdout)
sys.exit(1)
except Exception as e:
log.error(e.message)
traceback.print_exc(file=sys.stdout)
sys.exit(1)
#sys.exit(0)
if __name__ == "__main__":
main()
|
SouthForkResearch/CHaMP_Metrics
|
tools/hydro_gis_export_gdal.py
|
Python
|
gpl-3.0
| 12,099
|
[
"VisIt"
] |
cd0d0f02aa39e3aab5068f92de09a734af26d8034cb96d7f45aa7e5686af3264
|
"""
python-can requires the setuptools package to be installed.
"""
from setuptools import setup, find_packages
__version__ = "1.5.2"
import logging
logging.basicConfig(level=logging.WARNING)
setup(
name="python-can",
url="https://bitbucket.org/hardbyte/python-can",
version=__version__,
packages=find_packages(),
author="Brian Thorne",
author_email="hardbyte@gmail.com",
description="Controller Area Network interface module for Python",
long_description=open('README').read(),
license="LGPL v3",
package_data={
"": ["CONTRIBUTORS.txt", "LICENSE.txt"],
"doc": ["*.*"]
},
scripts=["./bin/can_logger.py", './bin/j1939_logger.py'],
# Tests can be run using `python setup.py test`
test_suite="nose.collector",
tests_require=['mock', 'nose']
)
|
luminize/python-can
|
setup.py
|
Python
|
lgpl-3.0
| 822
|
[
"Brian"
] |
fa99022286a1f5367435aafc57e6ede85379aec8b53e9e47c0aa108d8f9e66f2
|
import os
import unittest
import numpy as np
from deepchem.utils import rdkit_utils
from deepchem.utils.fragment_utils import get_contact_atom_indices
from deepchem.utils.fragment_utils import merge_molecular_fragments
from deepchem.utils.fragment_utils import get_partial_charge
from deepchem.utils.fragment_utils import strip_hydrogens
from deepchem.utils.fragment_utils import MolecularFragment
from deepchem.utils.fragment_utils import AtomShim
class TestFragmentUtil(unittest.TestCase):
def setUp(self):
# TODO test more formats for ligand
current_dir = os.path.dirname(os.path.realpath(__file__))
self.protein_file = os.path.join(
current_dir, '../../feat/tests/data/3ws9_protein_fixer_rdkit.pdb')
self.ligand_file = os.path.join(current_dir,
'../../feat/tests/data/3ws9_ligand.sdf')
def test_get_contact_atom_indices(self):
complexes = rdkit_utils.load_complex([self.protein_file, self.ligand_file])
contact_indices = get_contact_atom_indices(complexes)
assert len(contact_indices) == 2
def test_create_molecular_fragment(self):
mol_xyz, mol_rdk = rdkit_utils.load_molecule(self.ligand_file)
fragment = MolecularFragment(mol_rdk.GetAtoms(), mol_xyz)
assert len(mol_rdk.GetAtoms()) == len(fragment.GetAtoms())
assert (fragment.GetCoords() == mol_xyz).all()
def test_strip_hydrogens(self):
mol_xyz, mol_rdk = rdkit_utils.load_molecule(self.ligand_file)
_ = MolecularFragment(mol_rdk.GetAtoms(), mol_xyz)
# Test on RDKit
_ = strip_hydrogens(mol_xyz, mol_rdk)
def test_merge_molecular_fragments(self):
mol_xyz, mol_rdk = rdkit_utils.load_molecule(self.ligand_file)
fragment1 = MolecularFragment(mol_rdk.GetAtoms(), mol_xyz)
fragment2 = MolecularFragment(mol_rdk.GetAtoms(), mol_xyz)
joint = merge_molecular_fragments([fragment1, fragment2])
assert len(mol_rdk.GetAtoms()) * 2 == len(joint.GetAtoms())
def test_get_partial_charge(self):
from rdkit import Chem
mol = Chem.MolFromSmiles("CC")
atom = mol.GetAtoms()[0]
partial_charge = get_partial_charge(atom)
assert partial_charge == 0
def test_atom_shim(self):
atomic_num = 5
partial_charge = 1
atom_coords = np.array([0., 1., 2.])
shim = AtomShim(atomic_num, partial_charge, atom_coords)
assert shim.GetAtomicNum() == atomic_num
assert shim.GetPartialCharge() == partial_charge
assert (shim.GetCoords() == atom_coords).all()
|
deepchem/deepchem
|
deepchem/utils/test/test_fragment_utils.py
|
Python
|
mit
| 2,474
|
[
"RDKit"
] |
b4356910a4160a4d8a19fb9ed3b79d73a66d617c10e7a357006b79632d971c03
|
#!/usr/bin/env python3
# -*- coding: UTF-8 -*-
import sys
if not (sys.version_info.major == 3 and sys.version_info.minor > 5):
print("Python version %s.%s not supported version 3.6 or above required - exiting" % (sys.version_info.major,sys.version_info.minor))
sys.exit(1)
import os
for path in [os.getcwd(),"software/Util","software/SchemaTerms","software/SchemaExamples"]:
sys.path.insert( 1, path ) #Pickup libs from local directories
import unittest
import os
from os import getenv
from os.path import expanduser
import logging # https://docs.python.org/2/library/logging.html#logging-levels
import glob
import sys
warnings = []
andstr = "\n AND\n "
TYPECOUNT_UPPERBOUND = 1000
TYPECOUNT_LOWERBOUND = 500
logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)
from sdotermsource import SdoTermSource
VOCABURI = SdoTermSource.vocabUri()
# Tests to probe the health of both schemas and code using graph libraries in rdflib
# Note that known failings can be annotated with @unittest.expectedFailure or @skip("reason...")
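# To run just this module from the repository root (see the sys.path setup
# above; the path matches this file's location in the repo):
#   python software/tests/test_graphs.py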
class SDOGraphSetupTestCase(unittest.TestCase):
@classmethod
def loadGraphs(self):
SdoTermSource.loadSourceGraph("default")
self.rdflib_data = SdoTermSource.sourceGraph()
@classmethod
def setUpClass(self):
log.info("Graph tests require rdflib.")
try:
log.info("Trying to import rdflib...")
import rdflib
from rdflib import Graph
except Exception as e:
raise unittest.SkipTest("Need rdflib installed to do graph tests: %s" % e)
SDOGraphSetupTestCase.loadGraphs()
def test_graphsLoaded(self):
self.assertTrue(len(self.rdflib_data) > 0,
"Graph rdflib_data should have some triples in it.")
# SPARQLResult http://rdflib.readthedocs.org/en/latest/apidocs/rdflib.plugins.sparql.html
# "A list of dicts (solution mappings) is returned"
def test_found_sixplus_inverseOf(self):
inverseOf_results = self.rdflib_data.query("select ?x ?y where { ?x <https://schema.org/inverseOf> ?y }")
log.info("inverseOf result count: %s" % len(inverseOf_results ) )
self.assertTrue(len(inverseOf_results) >= 6,
"Six or more inverseOf expected. Found: %s " % len(inverseOf_results ) )
def test_even_number_inverseOf(self):
inverseOf_results = self.rdflib_data.query("select ?x ?y where { ?x <https://schema.org/inverseOf> ?y }")
self.assertTrue(len(inverseOf_results ) % 2 == 0,
"Even number of inverseOf triples expected. Found: %s " % len(inverseOf_results ) )
def test_non_equal_inverseOf(self):
results = self.rdflib_data.query("select ?x ?y where { ?x <https://schema.org/inverseOf> ?y }")
for result in results :
self.assertTrue(result[0] != result[1],
"%s should not be equal to %s" % (result[0], result[1]) )
def test_non_equal_supercededBy(self):
results = self.rdflib_data.query("select ?x ?y where { ?x <https://schema.org/supercededBy> ?y }")
for result in results :
self.assertTrue(result[0] != result[1],
"%s should not be equal to %s" % (result[0], result[1]) )
@unittest.expectedFailure # autos
def test_needlessDomainIncludes(self):
global warnings
# check immediate subtypes don't declare same domainIncludes
# TODO: could we use property paths here to be more thorough?
# rdfs:subClassOf+ should work but seems not to.
ndi1 = ('''SELECT ?prop ?c1 ?c2
WHERE {
?prop <https://schema.org/domainIncludes> ?c1 .
?prop <https://schema.org/domainIncludes> ?c2 .
?c1 rdfs:subClassOf ?c2 .
FILTER (?c1 != ?c2) .
FILTER NOT EXISTS { ?prop <https://schema.org/isPartOf> <http://attic.schema.org> .}
FILTER NOT EXISTS { ?c1 <https://schema.org/isPartOf> <http://attic.schema.org> .}
FILTER NOT EXISTS { ?c2 <https://schema.org/isPartOf> <http://attic.schema.org> .}
}
ORDER BY ?prop ''')
ndi1_results = self.rdflib_data.query(ndi1)
if (len(ndi1_results) > 0):
for row in ndi1_results:
warn = "WARNING property %s defining domain, %s, [which is subclassOf] %s unnecessarily" % (row["prop"],row["c1"],row["c2"])
#warnings.append(warn)
log.info(warn + "\n")
self.assertEqual(len(ndi1_results), 0,
"No subtype need redeclare a domainIncludes of its parents. Found: %s " % len(ndi1_results ) )
@unittest.expectedFailure
def test_needlessRangeIncludes(self):
global warnings
# as above, but for range. We excuse URL as it is special, not best seen as a Text subtype.
# check immediate subtypes don't declare same domainIncludes
# TODO: could we use property paths here to be more thorough?
nri1= ('''SELECT ?prop ?c1 ?c2
WHERE {
?prop <https://schema.org/rangeIncludes> ?c1 .
?prop <https://schema.org/rangeIncludes> ?c2 .
?c1 rdfs:subClassOf ?c2 .
FILTER (?c1 != ?c2) .
FILTER (?c1 != <https://schema.org/URL>) .
FILTER NOT EXISTS { ?prop <https://schema.org/isPartOf> <http://attic.schema.org> .}
FILTER NOT EXISTS { ?c1 <https://schema.org/isPartOf> <http://attic.schema.org> .}
FILTER NOT EXISTS { ?c2 <https://schema.org/isPartOf> <http://attic.schema.org> .}
}
ORDER BY ?prop ''')
nri1_results = self.rdflib_data.query(nri1)
if (len(nri1_results)>0):
for row in nri1_results:
warn = "WARNING property %s defining range, %s, [which is subclassOf] %s unnecessarily" % (row["prop"],row["c1"],row["c2"])
#warnings.append(warn)
log.info(warn + "\n")
self.assertEqual(len(nri1_results), 0, "No subtype need redeclare a rangeIncludes of its parents. Found: %s" % len(nri1_results) )
# def test_supersededByAreLabelled(self):
# supersededByAreLabelled_results = self.rdflib_data.query("select ?x ?y ?z where { ?x <https://schema.org/supersededBy> ?y . ?y <https://schema.org/name> ?z }")
# self.assertEqual(len(inverseOf_results ) % 2 == 0, True, "Even number of inverseOf triples expected. Found: %s " % len(inverseOf_results ) )
def test_validRangeIncludes(self):
nri1= ('''SELECT ?prop ?c1
WHERE {
?prop <https://schema.org/rangeIncludes> ?c1 .
OPTIONAL{
?c1 rdf:type ?c2 .
?c1 rdf:type rdfs:Class .
}.
FILTER (!BOUND(?c2))
FILTER NOT EXISTS { ?prop <https://schema.org/isPartOf> <http://attic.schema.org> .}
}
ORDER BY ?prop ''')
nri1_results = self.rdflib_data.query(nri1)
for row in nri1_results:
log.info("Property %s invalid rangeIncludes value: %s\n" % (row["prop"],row["c1"]))
self.assertEqual(len(nri1_results), 0, "RangeIncludes should define valid type. Found: %s" % len(nri1_results))
def test_validDomainIncludes(self):
nri1= ('''SELECT ?prop ?c1
WHERE {
?prop <https://schema.org/domainIncludes> ?c1 .
OPTIONAL{
?c1 rdf:type ?c2 .
?c1 rdf:type rdfs:Class .
}.
FILTER (!BOUND(?c2))
FILTER NOT EXISTS { ?prop <https://schema.org/isPartOf> <http://attic.schema.org> .}
}
ORDER BY ?prop ''')
nri1_results = self.rdflib_data.query(nri1)
for row in nri1_results:
log.info("Property %s invalid domainIncludes value: %s\n" % (row["prop"],row["c1"]))
self.assertEqual(len(nri1_results), 0, "DomainIncludes should define valid type. Found: %s" % len(nri1_results))
# These are place-holders for more sophisticated SPARQL-expressed checks.
@unittest.expectedFailure
def test_readSchemaFromRDFa(self):
        self.assertEqual(True, False, "We should know how to locally get /docs/schema_org_rdfa.html but this requires fixes to api.py.")
#@unittest.expectedFailure
def test_simpleLabels(self):
s = ""
complexLabels = self.rdflib_data.query(
"select distinct ?term ?label where { ?term rdfs:label ?label FILTER regex(?label,'[^a-zA-Z0-9_ ]','i'). } " )
for row in complexLabels:
s += (" term %s has complex label: %s\n" % (row["term"],row["label"]))
self.assertTrue(len(complexLabels ) == 0,
"No complex term labels expected; alphanumeric only please. Found: %s Details: %s\n"% (len(complexLabels), s) )
# Whitespace is tolerated, for now.
# we don't deal well with non definitional uses of rdfs:label yet - non terms are flagged up.
# https://github.com/schemaorg/schemaorg/issues/1136
#
# TODO: https://github.com/schemaorg/schemaorg/issues/662
#
# self.assertEqual(len(ndi1_results), 0, "No domainIncludes or rangeIncludes value should lack a type. Found: %s " % len(ndi1_results ) )
def test_labelMatchesTermId(self):
nri1= ('''select ?term ?label where {
?term rdfs:label ?label.
BIND(STR(?term) AS ?strVal)
FILTER(STRLEN(?strVal) >= 19 && SUBSTR(?strVal, 1, 19) = "https://schema.org/")
FILTER(SUBSTR(?strVal, 20) != STR(?label))
}
ORDER BY ?term ''')
nri1_results = self.rdflib_data.query(nri1)
if len(nri1_results):
log.info("Label matching errors:")
for row in nri1_results:
log.info("Term '%s' has none-matching label: '%s'" % (row["term"],row["label"]))
self.assertEqual(len(nri1_results), 0, "Term should have matching rdfs:label. Found: %s" % len(nri1_results))
def test_superTypesExist(self):
nri1= ('''select ?term ?super where {
?term rdfs:subClassOf ?super.
?term rdf:type rdfs:Class.
FILTER NOT EXISTS { ?super rdf:type rdfs:Class }
BIND(STR(?term) AS ?strVal)
FILTER(STRLEN(?strVal) >= 19 && SUBSTR(?strVal, 1, 19) = "https://schema.org/")
BIND(STR(?super) AS ?superStrVal)
FILTER(STRLEN(?superStrVal) >= 19 && SUBSTR(?superStrVal, 1, 19) = "https://schema.org/")
FILTER NOT EXISTS { ?term <https://schema.org/isPartOf> <http://attic.schema.org> .}
}
ORDER BY ?term ''')
nri1_results = self.rdflib_data.query(nri1)
if len(nri1_results):
log.info("Invalid SuperType errors!!!\n")
for row in nri1_results:
log.info("Term '%s' has nonexistent supertype: '%s'" % (row["term"],row["super"]))
self.assertEqual(len(nri1_results), 0, "Types with nonexistent SuperTypes. Found: %s" % len(nri1_results))
    def test_propswithoutdomain(self):
nri1= (''' select ?term where {
?term a rdf:Property.
FILTER NOT EXISTS { ?term <https://schema.org/domainIncludes> ?o .}
}
''')
nri1_results = self.rdflib_data.query(nri1)
if len(nri1_results):
log.info("Property without domain errors!!!\n")
for row in nri1_results:
log.info("Term '%s' has no domainIncludes value(s)" % (row["term"]))
self.assertEqual(len(nri1_results), 0, "Property without domain extensions Found: %s" % len(nri1_results))
    def test_propswithoutrange(self):
nri1= (''' select ?term where {
?term a rdf:Property.
FILTER NOT EXISTS { ?term <https://schema.org/rangeIncludes> ?o .}
}
''')
nri1_results = self.rdflib_data.query(nri1)
if len(nri1_results):
log.info("Property without domain errors!!!\n")
for row in nri1_results:
log.info("Term '%s' has no rangeIncludes value(s)" % (row["term"]))
self.assertEqual(len(nri1_results), 0, "Property without range extensions Found: %s" % len(nri1_results))
def test_superPropertiesExist(self):
nri1= ('''select ?term ?super where {
?term rdf:type rdf:Property.
?term rdfs:subPropertyOf ?super.
FILTER NOT EXISTS { ?super rdf:type rdf:Property }
BIND(STR(?term) AS ?strVal)
FILTER(STRLEN(?strVal) >= 19 && SUBSTR(?strVal, 1, 19) = "https://schema.org/")
BIND(STR(?super) AS ?superStrVal)
FILTER(STRLEN(?superStrVal) >= 19 && SUBSTR(?superStrVal, 1, 19) = "https://schema.org/")
FILTER NOT EXISTS { ?term <https://schema.org/isPartOf> <http://attic.schema.org> .}
}
ORDER BY ?term ''')
nri1_results = self.rdflib_data.query(nri1)
if len(nri1_results):
log.info("Invalid Super-Property errors!!!\n")
for row in nri1_results:
log.info("Term '%s' has nonexistent super-property: '%s'" % (row["term"],row["super"]))
self.assertEqual(len(nri1_results), 0, "Properties with nonexistent SuperProperties. Found: %s" % len(nri1_results))
def test_selfReferencingInverse(self):
nri1= ('''select ?term ?inverse where {
?term rdf:type rdf:Property.
?term <https://schema.org/inverseOf> ?inverse.
BIND(STR(?term) AS ?strVal)
FILTER(STRLEN(?strVal) >= 19 && SUBSTR(?strVal, 1, 19) = "https://schema.org/")
FILTER(str(?term) = str(?inverse))
FILTER NOT EXISTS { ?term <https://schema.org/isPartOf> <http://attic.schema.org> .}
}
ORDER BY ?term ''')
nri1_results = self.rdflib_data.query(nri1)
if len(nri1_results):
log.info("Self referencing inverseOf errors!!!\n")
for row in nri1_results:
log.info("Term '%s' is defined as inverseOf self" % (row["term"]))
self.assertEqual(len(nri1_results), 0, "Types with self referencing inverseOf Found: %s" % len(nri1_results))
def test_sameInverseAndSupercededByTarget(self):
nri1= ('''select ?term ?inverse ?super where {
?term rdf:type rdf:Property.
?term <https://schema.org/inverseOf> ?inverse.
?term <https://schema.org/supercededBy> ?super.
BIND(STR(?term) AS ?strVal)
            FILTER(STRLEN(?strVal) >= 19 && SUBSTR(?strVal, 1, 19) = "https://schema.org/")
FILTER(str(?inverse) = str(?super))
FILTER NOT EXISTS { ?term <https://schema.org/isPartOf> <http://attic.schema.org> .}
}
ORDER BY ?term ''')
nri1_results = self.rdflib_data.query(nri1)
if len(nri1_results):
log.info("InverseOf supercededBy shared target errors!!!\n")
for row in nri1_results:
log.info("Term '%s' defined ase inverseOf AND supercededBy %s" % (row["term"], row["inverse"]))
self.assertEqual(len(nri1_results), 0, "Types with inverseOf supercededBy shared target Found: %s" % len(nri1_results))
@unittest.expectedFailure
def test_commentEndWithPeriod(self):
nri1= ('''select ?term ?com where {
?term rdfs:comment ?com.
BIND(STR(?term) AS ?strVal)
FILTER(STRLEN(?strVal) >= 19 && SUBSTR(?strVal, 1, 19) = "https://schema.org/")
FILTER (!(regex(str(?com), '\\\\.\\\\s*$') || regex(str(?com), 'n\\\\* .*')))
}
ORDER BY ?term ''')
nri1_results = self.rdflib_data.query(nri1)
if len(nri1_results):
log.info("Comment without ending '.' errors!!!\n")
for row in nri1_results:
log.info("Term '%s' has a comment without an ending '.'" % (row["term"]))
self.assertEqual(len(nri1_results), 0, "Comment without ending '.' Found: %s" % len(nri1_results))
def test_typeLabelCase(self):
nri1= ('''select ?term ?label where {
?term rdf:type rdfs:Class.
?term rdfs:label ?label.
BIND(STR(?term) AS ?strVal)
FILTER(STRLEN(?strVal) >= 19 && SUBSTR(?strVal, 1, 19) = "https://schema.org/")
FILTER (!regex(str(?label), '^[0-9]*[A-Z].*'))
}
ORDER BY ?term ''')
nri1_results = self.rdflib_data.query(nri1)
if len(nri1_results):
log.info("Type label [A-Z] errors!!!\n")
for row in nri1_results:
log.info("Type '%s' has a label without upper case 1st character" % (row["term"]))
self.assertEqual(len(nri1_results), 0, "Type label not [A-Z] 1st non-numeric char Found: %s" % len(nri1_results))
def test_propertyLabelCase(self):
nri1= ('''select ?term ?label where {
?term rdf:type rdf:Property.
?term rdfs:label ?label.
BIND(STR(?term) AS ?strVal)
FILTER(STRLEN(?strVal) >= 19 && SUBSTR(?strVal, 1, 19) = "https://schema.org/")
FILTER (!regex(str(?label), '^[0-9]*[a-z].*'))
}
ORDER BY ?term ''')
nri1_results = self.rdflib_data.query(nri1)
if len(nri1_results):
log.info("Property label [a-z] errors!!!\n")
for row in nri1_results:
log.info("Property '%s' has a label without lower case 1st non-numeric character" % (row["term"]))
self.assertEqual(len(nri1_results), 0, "Property label not [a-z] 1st char Found: %s" % len(nri1_results))
def test_superTypeInAttic(self):
nri1= ('''select ?term ?super where {
{
?term rdfs:subClassOf ?super.
}
UNION
{
?term rdfs:subPropertyOf ?super.
}
?super <https://schema.org/isPartOf> <http://attic.schema.org> .
FILTER NOT EXISTS { ?term <https://schema.org/isPartOf> <http://attic.schema.org> .}
}
ORDER BY ?term ''')
nri1_results = self.rdflib_data.query(nri1)
if len(nri1_results):
log.info("Super-term in attic errors!!!\n")
for row in nri1_results:
log.info("Term '%s' is sub-term of %s a term in attic" % (row["term"],row["super"]))
self.assertEqual(len(nri1_results), 0, "Super-term in attic Found: %s" % len(nri1_results))
def test_referenceTermInAttic(self):
nri1= ('''select ?term ?rel ?ref where {
{
?term <https://schema.org/domainIncludes> ?ref.
?term ?rel ?ref.
}
UNION
{
?term <https://schema.org/rangeIncludes> ?ref.
?term ?rel ?ref.
}
UNION
{
?term <https://schema.org/inverseOf> ?ref.
?term ?rel ?ref.
}
UNION
{
?term <https://schema.org/supercededBy> ?ref.
?term ?rel ?ref.
}
?ref <https://schema.org/isPartOf> <http://attic.schema.org> .
FILTER NOT EXISTS { ?term <https://schema.org/isPartOf> <http://attic.schema.org> .}
}
ORDER BY ?term ''')
nri1_results = self.rdflib_data.query(nri1)
if len(nri1_results):
log.info("Reference to attic term errors!!!\n")
for row in nri1_results:
log.info("Term '%s' makes a %s reference to %s a term in attic" % (row["term"],row["rel"],row["ref"]))
self.assertEqual(len(nri1_results), 0, "Reference to attic term Found: %s" % len(nri1_results))
def test_termIn2PlusExtensions(self):
nri1= ('''select ?term (count(?part) as ?count) where {
?term <https://schema.org/isPartOf> ?part.
}
GROUP BY ?term
HAVING (count(?part) > 1)
ORDER BY ?term
''')
nri1_results = self.rdflib_data.query(nri1)
if len(nri1_results):
log.info("Term in +1 extensions errors!!!\n")
for row in nri1_results:
log.info("Term '%s' isPartOf %s extensions" % (row["term"],row["count"]))
self.assertEqual(len(nri1_results), 0, "Term in +1 extensions Found: %s" % len(nri1_results))
def test_termNothttps(self):
nri1= ('''select distinct ?term where {
?term ?p ?o.
FILTER strstarts(str(?term),"http://schema.org")
}
ORDER BY ?term
''')
nri1_results = self.rdflib_data.query(nri1)
if len(nri1_results):
log.info("Term defined as http errors!!!\n")
for row in nri1_results:
log.info("Term '%s' is defined as http " % (row["term"]))
self.assertEqual(len(nri1_results), 0, "Term defined as http Found: %s" % len(nri1_results))
def test_targetNothttps(self):
nri1= ('''prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
prefix schema: <https://schema.org/>
select ?term ?target where {
?term schema:domainIncludes |
schema:rangeIncludes |
rdfs:subClassOf |
rdfs:subPropertyOf |
schema:supercededBy |
schema:inverseOf ?target.
filter strstarts(str(?target),"http://schema.org")
}
ORDER BY ?term
''')
nri1_results = self.rdflib_data.query(nri1)
if len(nri1_results):
log.info("Target defined as https errors!!!\n")
for row in nri1_results:
log.info("Term '%s' references term %s as https " % (row["term"],row["target"]))
self.assertEqual(len(nri1_results), 0, "Term defined as https Found: %s" % len(nri1_results))
def test_isPartOf(self):
nri1= ('''prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
prefix schema: <https://schema.org/>
select ?term ?partof where {
?term schema:isPartOf ?partof .
MINUS{
?term schema:isPartOf <https://attic.schema.org>
}
MINUS{
?term schema:isPartOf <https://auto.schema.org>
}
MINUS{
?term schema:isPartOf <https://bib.schema.org>
}
MINUS{
?term schema:isPartOf <https://health-lifesci.schema.org>
}
MINUS{
?term schema:isPartOf <https://meta.schema.org>
}
MINUS{
?term schema:isPartOf <https://pending.schema.org>
}
}
ORDER BY ?term
''')
nri1_results = self.rdflib_data.query(nri1)
if len(nri1_results):
log.info("Invalid isPartOf value errors!!!\n")
for row in nri1_results:
log.info("Term '%s' has invalid isPartOf value '%s'" % (row["term"],row["partof"]))
self.assertEqual(len(nri1_results), 0, "Invalid isPartOf value errors Found: %s" % len(nri1_results))
@unittest.expectedFailure
def test_EnumerationWithoutEnums(self):
nri1= ('''select ?term where {
?term a rdfs:subClassOf+ <https://schema.org/Enumeration> .
FILTER NOT EXISTS { ?enum a ?term. }
FILTER NOT EXISTS { ?term <https://schema.org/isPartOf> <http://attic.schema.org> .}
}
ORDER BY ?term ''')
nri1_results = self.rdflib_data.query(nri1)
if len(nri1_results):
log.info("Enumeration Type without Enumeration value(s) errors!!!\n")
for row in nri1_results:
log.info("Enumeration Type '%s' has no matching enum values" % (row["term"]))
self.assertEqual(len(nri1_results), 0, "Enumeration Type without Enumeration value(s) Found: %s" % len(nri1_results))
def tearDownModule():
global warnings
if len(warnings) > 0:
log.info("\nWarnings (%s):\n" % len(warnings))
for warn in warnings:
log.info("%s" % warn)
# TODO: Unwritten tests (from basics; easier here?)
#
# * different terms should not have identical comments
# * rdflib and internal parsers should have same number of triples
# * if x and y are inverseOf each other, the rangeIncludes types on x should be domainIncludes on y, and vice-versa.
# * need a few supporting functions e.g. all terms, all types, all properties, all enum values; candidates for api later but just use here first.
if __name__ == "__main__":
unittest.main()
|
schemaorg/schemaorg
|
software/tests/test_graphs.py
|
Python
|
apache-2.0
| 22,992
|
[
"ASE"
] |
a7d659a9a5fa17f0625adc7ded425b971588c0e125f66e17e0293c68f29d793b
|
#!/usr/bin/env python
#
# File Name : ptbtokenizer.py
#
# Description : Do the PTB Tokenization and remove punctuations.
#
# Usage :
#
# Creation Date : 29-12-2014
# Last Modified : Feb 25 2015
# Author : Hao Fang and Tsung-Yi Lin
# Modified by: Desmond Elliott
import os
import sys
import subprocess
# import tempfile
# import itertools
STANFORD_CORENLP_3_4_1_JAR = 'stanford-corenlp-3.4.1.jar'
PUNCTUATIONS = ["''", "'", "``", "`", "-LRB-", "-RRB-", "-LCB-", "-RCB-",
".", "?", "!", ",", ":", "-", "--", "...", ";",
"-lrb-", "-rrb-", "-lcb-", "-rcb-"]
class PTBTokenizer:
"""Python wrapper of Stanford PTBTokenizer"""
def tokenize(self, textFile, toDisk=False):
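# NOTE: the toDisk flag is currently ignored; a "<textFile>-tokenized"
# file is always written below.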
cmd = ['java', '-cp', STANFORD_CORENLP_3_4_1_JAR,
'edu.stanford.nlp.process.PTBTokenizer',
'-preserveLines', '-lowerCase']
# ======================================================
# tokenize sentence
# ======================================================
cmd.append(textFile)
# NOTE: path_to_jar_dirname is unused
path_to_jar_dirname = os.path.dirname(os.path.abspath(__file__))
with open('intermediate', 'w') as f:
subprocess.call(cmd, stdout=f)
lines = open("intermediate").readlines()
lines = [x.replace("\n", "") for x in lines]
os.remove("intermediate")
# ======================================================
# create dictionary for tokenized captions
# ======================================================
tokenized = []
handle = open("%s-tokenized" % (textFile), "w")
for line in lines:
tokenized_text = ' '.join([w for w in line.rstrip().split(' ')
if w not in PUNCTUATIONS])
tokenized.append(tokenized_text)
handle.write(tokenized_text+"\n")
handle.close()
return tokenized
if __name__ == "__main__":
t = PTBTokenizer()
t.tokenize(sys.argv[1], toDisk=True)
|
elliottd/GroundedTranslation
|
util/ptbtokenizer.py
|
Python
|
bsd-3-clause
| 2,065
|
[
"Desmond"
] |
2e6bdb044feb32b595a6390452e0e3e9930130f738167cb3449d3c3570af5464
|
"""
==================================
Advanced interactive visualization
==================================
In DIPY_ we created a thin interface to access many of the capabilities
available in the Visualization Toolkit framework (VTK) but tailored to the
needs of structural and diffusion imaging. Initially the 3D visualization
module was named ``fvtk``, meaning functions using VTK. That module is no
longer available.
"""
import numpy as np
from dipy.viz import actor, window, ui
"""
In ``window`` we have all the objects that connect what needs to be rendered
to the display or the disk e.g., for saving screenshots. So, there you will
find key objects and functions like the ``Renderer`` class which holds and
provides access to all the actors and the ``show`` function which displays what
is in the renderer on a window. Also, this module provides access to functions
for opening/saving dialogs and printing screenshots (see ``snapshot``).
In the ``actor`` module we can find all the different primitives e.g.,
streamtubes, lines, image slices, etc.
In the ``ui`` module we have some other objects which allow us to add buttons
and sliders; these interact with both windows and actors. Because of this
they need input from the operating system so they can process events.
Let's get started. In this tutorial, we will visualize some bundles
together with FA or T1. We will be able to change the slices using
a ``LineSlider2D`` widget.
First we need to fetch and load some datasets.
"""
from dipy.tracking.streamline import Streamlines
from dipy.data.fetcher import fetch_bundles_2_subjects, read_bundles_2_subjects
fetch_bundles_2_subjects()
"""
The following function outputs a dictionary with the required bundles, e.g.
``af.left`` (left arcuate fasciculus), and maps, e.g. FA, for a specific subject.
"""
res = read_bundles_2_subjects('subj_1', ['t1', 'fa'],
['af.left', 'cst.right', 'cc_1'])
"""
We will use 3 bundles, FA and the affine transformation that brings the voxel
coordinates to world coordinates (RAS 1mm).
"""
streamlines = Streamlines(res['af.left'])
streamlines.extend(res['cst.right'])
streamlines.extend(res['cc_1'])
data = res['fa']
shape = data.shape
affine = res['affine']
"""
With our current design it is easy to decide in which space you want the
streamlines and slices to appear. The default we have here is to appear in
world coordinates (RAS 1mm).
"""
world_coords = True
"""
If we want to see the objects in native space we need to make sure that all
objects which are currently in world coordinates are transformed back to
native space using the inverse of the affine.
"""
if not world_coords:
from dipy.tracking.streamline import transform_streamlines
streamlines = transform_streamlines(streamlines, np.linalg.inv(affine))
"""
Now we create a ``Renderer`` object and add the streamlines using the ``line``
function and an image plane using the ``slicer`` function.
"""
ren = window.Renderer()
stream_actor = actor.line(streamlines)
if not world_coords:
image_actor_z = actor.slicer(data, affine=np.eye(4))
else:
image_actor_z = actor.slicer(data, affine)
"""
We can also change the opacity of the slicer.
"""
slicer_opacity = 0.6
image_actor_z.opacity(slicer_opacity)
"""
We can add additional slicers by copying the original and adjusting the
``display_extent``.
"""
image_actor_x = image_actor_z.copy()
x_midpoint = int(np.round(shape[0] / 2))
image_actor_x.display_extent(x_midpoint,
x_midpoint, 0,
shape[1] - 1,
0,
shape[2] - 1)
image_actor_y = image_actor_z.copy()
y_midpoint = int(np.round(shape[1] / 2))
image_actor_y.display_extent(0,
shape[0] - 1,
y_midpoint,
y_midpoint,
0,
shape[2] - 1)
"""
Connect the actors with the Renderer.
"""
ren.add(stream_actor)
ren.add(image_actor_z)
ren.add(image_actor_x)
ren.add(image_actor_y)
"""
Now we would like to change the position of each ``image_actor`` using a
slider. The sliders are widgets which require access to different areas of the
visualization pipeline and therefore we don't recommend using them with
``show``. The more appropriate way is to use them with the ``ShowManager``
object which allows accessing the pipeline in different areas. Here is how:
"""
show_m = window.ShowManager(ren, size=(1200, 900))
show_m.initialize()
"""
After we have initialized the ``ShowManager`` we can go ahead and create
sliders to move the slices and change their opacity.
"""
line_slider_z = ui.LineSlider2D(min_value=0,
max_value=shape[2] - 1,
initial_value=shape[2] / 2,
text_template="{value:.0f}",
length=140)
line_slider_x = ui.LineSlider2D(min_value=0,
max_value=shape[0] - 1,
initial_value=shape[0] / 2,
text_template="{value:.0f}",
length=140)
line_slider_y = ui.LineSlider2D(min_value=0,
max_value=shape[1] - 1,
initial_value=shape[1] / 2,
text_template="{value:.0f}",
length=140)
opacity_slider = ui.LineSlider2D(min_value=0.0,
max_value=1.0,
initial_value=slicer_opacity,
length=140)
"""
Now we will write callbacks for the sliders and register them.
"""
def change_slice_z(slider):
z = int(np.round(slider.value))
image_actor_z.display_extent(0, shape[0] - 1, 0, shape[1] - 1, z, z)
def change_slice_x(slider):
x = int(np.round(slider.value))
image_actor_x.display_extent(x, x, 0, shape[1] - 1, 0, shape[2] - 1)
def change_slice_y(slider):
y = int(np.round(slider.value))
image_actor_y.display_extent(0, shape[0] - 1, y, y, 0, shape[2] - 1)
def change_opacity(slider):
slicer_opacity = slider.value
image_actor_z.opacity(slicer_opacity)
image_actor_x.opacity(slicer_opacity)
image_actor_y.opacity(slicer_opacity)
line_slider_z.on_change = change_slice_z
line_slider_x.on_change = change_slice_x
line_slider_y.on_change = change_slice_y
opacity_slider.on_change = change_opacity
"""
We'll also create text labels to identify the sliders.
"""
def build_label(text):
label = ui.TextBlock2D()
label.message = text
label.font_size = 18
label.font_family = 'Arial'
label.justification = 'left'
label.bold = False
label.italic = False
label.shadow = False
label.background = (0, 0, 0)
label.color = (1, 1, 1)
return label
line_slider_label_z = build_label(text="Z Slice")
line_slider_label_x = build_label(text="X Slice")
line_slider_label_y = build_label(text="Y Slice")
opacity_slider_label = build_label(text="Opacity")
"""
Now we will create a ``panel`` to contain the sliders and labels.
"""
panel = ui.Panel2D(size=(300, 200),
color=(1, 1, 1),
opacity=0.1,
align="right")
panel.center = (1030, 120)
panel.add_element(line_slider_label_x, (0.1, 0.75))
panel.add_element(line_slider_x, (0.38, 0.75))
panel.add_element(line_slider_label_y, (0.1, 0.55))
panel.add_element(line_slider_y, (0.38, 0.55))
panel.add_element(line_slider_label_z, (0.1, 0.35))
panel.add_element(line_slider_z, (0.38, 0.35))
panel.add_element(opacity_slider_label, (0.1, 0.15))
panel.add_element(opacity_slider, (0.38, 0.15))
ren.add(panel)
"""
Then, we can render all the widgets and everything else on the screen and
start the interaction using ``show_m.start()``.
However, if you change the window size, the panel will not update its position
properly. The solution to this issue is to update the position of the panel
using its ``re_align`` method every time the window size changes.
"""
global size
size = ren.GetSize()
def win_callback(obj, event):
global size
if size != obj.GetSize():
size_old = size
size = obj.GetSize()
size_change = [size[0] - size_old[0], 0]
panel.re_align(size_change)
show_m.initialize()
"""
Finally, please set the following variable to ``True`` to interact with the
datasets in 3D.
"""
interactive = False
ren.zoom(1.5)
ren.reset_clipping_range()
if interactive:
show_m.add_window_callback(win_callback)
show_m.render()
show_m.start()
else:
window.record(ren, out_path='bundles_and_3_slices.png', size=(1200, 900),
reset_camera=False)
"""
.. figure:: bundles_and_3_slices.png
:align: center
A few bundles with interactive slicing.
"""
del show_m
"""
.. include:: ../links_names.inc
"""
|
FrancoisRheaultUS/dipy
|
doc/examples/viz_advanced.py
|
Python
|
bsd-3-clause
| 8,973
|
[
"VTK"
] |
ad86df2a36d8abfafa7bd29e07af2fccc367333acdb03b8ed913b09cf8a1e494
|
import cStringIO
import logging
import operator as op
from antlr3.tree import CommonTree as AST
from lib.typecheck import *
import lib.const as C
import lib.visit as v
from .. import util
from . import method_nonce, register_method, class_lookup
import statement as st
class Method(v.BaseNode):
def __init__(self, **kwargs):
self._id = method_nonce()
self._clazz = kwargs.get("clazz", None) # for Java-to-C translation
self._annos = kwargs.get("annos", [])
self._mods = kwargs.get("mods", [])
self._typ = kwargs.get("typ", C.J.v)
self._name = kwargs.get("name", None)
# to keep the order of parameters, can't be {}
self._params = kwargs.get("params", [])
self._throws = kwargs.get("throws", [])
# updated while parsing the body
self._locals = { \
C.J.N: C.J.OBJ, \
C.J.THIS: self._clazz.name, \
C.J.SUP: self._clazz.sup \
}
self._body = []
register_method(self)
@property
def id(self):
return self._id
@property
def clazz(self):
return self._clazz
@clazz.setter
def clazz(self, v):
self._clazz = v
@property
def annos(self):
return self._annos
@property
def mods(self):
return self._mods
@property
def is_private(self):
return C.mod.PR in self._mods
@property
def is_static(self):
return C.mod.ST in self._mods
@property
def is_abstract(self):
return C.mod.AB in self._mods
@property
def is_generator(self):
return C.mod.GN in self._mods
@property
def typ(self):
return self._typ
@typ.setter
def typ(self, v):
self._typ = v
@property
def name(self):
return self._name
@name.setter
def name(self, v):
self._name = v
@property
def is_init(self):
return util.is_class_name(self._name) and self._name == self._typ
@property
def is_clinit(self):
return self._name == C.J.CLINIT
@property
def params(self):
return self._params
@property
def param_typs(self):
typs, _ = util.split(self._params)
return typs
@property
def param_vars(self):
_, args = util.split(self._params)
return args
@property
def signature(self):
params = ", ".join(self.param_typs)
return "{} {}.{}({})".format(self._typ, self._clazz.name, self._name, params)
@property
def locals(self):
return self._locals
@locals.setter
def locals(self, v):
self._locals = v
@property
def vars(self):
d_flds = { fld.name: fld.typ for fld in self._clazz.flds }
d_params = { nm: ty for (ty, nm) in self._params }
# the order of merging is important
# in this way, variables declared later overwrite fields and parameters
d_merged = dict(d_flds, **d_params)
return dict(d_merged, **self._locals)
@property
def body(self):
return self._body
@body.setter
def body(self, v):
self._body = v
@property
def has_return(self):
return util.exists(op.attrgetter("has_return"), self.body)
def __repr__(self):
mname, cname = self._name, repr(self._clazz)
params = map(util.sanitize_ty, self.param_typs)
return u'_'.join([mname, cname] + params)
def __str__(self, s_printer=str):
buf = cStringIO.StringIO()
if self._mods:
buf.write(' '.join(list(set(self._mods) - set(C.sk_mod))) + ' ')
# <clinit> won't have type signature
if self._name != C.J.CLINIT:
# not to print constructor (i.e., class name) twice
if self._typ != self._name: buf.write(self._typ + ' ')
buf.write(self._name + " (")
def str_param( (ty, nm) ): return ' '.join([ty, nm])
buf.write(", ".join(map(str_param, self._params)))
buf.write(')')
if self._throws:
buf.write(" {} {}".format(C.T.THROWS, ", ".join(self._throws)))
# interfaces and abstract methods won't have method body
if self._clazz.is_itf or self.is_abstract: buf.write(';')
else:
buf.write(" {\n")
buf.write('\n'.join(map(s_printer, self._body)))
buf.write("\n}\n")
return buf.getvalue()
def __eq__(self, other):
return repr(self) == repr(other)
def is_supercall(self, other):
if self._name != other.name: return False
if len(self._params) != len(other.params): return False
if not (self._clazz <= other.clazz): return False
args = sig_match(other.params, self._params)
for (_, nm), arg in zip(self._params, args):
if nm != arg: return False
return True
def accept(self, visitor):
visitor.visit(self)
f = op.methodcaller("accept", visitor)
self._body = util.flatten(map(f, self._body))
def jsonify(self):
m = {}
if self._mods: m["mods"] = self._mods
m["type"] = self._typ
m["name"] = self._name
if self._params:
m["params"] = [ {"type": ty, "name": nm} for (ty, nm) in self._params ]
return m
# merge a method definition from another template
def merge(self, other):
# double-check it refers to the same method
assert self._name == other.name
assert self._typ == other.typ
for (t1, n1), (t2, n2) in zip(self._params, other.params):
assert t1 == t2 and n1 == n2
# adopt method body if exists
if not self._body and other.body:
logging.debug("merging: {} -> {}".format(other.body, repr(self)))
self._body = other.body
@takes(list_of(tuple_of(unicode)), list_of(tuple_of(unicode)))
@returns(list_of(unicode))
def param_match(params1, params2):
r = []
cp_params = params2[:]
for (ty_formal, nm_formal) in params1:
# setting default argument
if util.is_class_name(ty_formal):
if ty_formal == C.J.STR: arg = u"\"\""
else: arg = C.J.N
elif ty_formal == C.J.z: arg = u"false"
else: arg = u'0' # NOTE: perhaps buggy
cls_formal = class_lookup(ty_formal)
def candidate_by_ty( (ty_actual, _) ):
cls_actual = class_lookup(ty_actual)
return cls_actual <= cls_formal
def candidate_by_nm( (_, nm_actual) ):
return nm_actual in nm_formal
candidates_ty = filter(candidate_by_ty, cp_params)
if candidates_ty:
if util.exists(candidate_by_nm, candidates_ty):
ty_actual, nm_actual = util.find(candidate_by_nm, candidates_ty)
else: ty_actual, nm_actual = candidates_ty[0]
cp_params.remove( (ty_actual, nm_actual) )
arg = nm_actual
r.append(arg)
return r
# pick type-matched arguments (if possible)
# e.g., sig_match([(C,x),(D,y),(C,z)], [(int,a),(C,b),(C,c)]) = [b,"null",c]
# if there exist multiple instances of a certain type,
# name-matched or left-most one will be chosen.
@takes(list_of(tuple_of(unicode)), list_of(tuple_of(unicode)))
@returns(list_of(unicode))
def sig_match(sig, params):
return param_match(sig, params)
# find type-matched formal parameters (if possible)
# e.g., find_formals([(C,x),(D,y),(C,z)], [C, D]) = [x, y]
# similarly, the left-most one will be chosen, if there are multiple choices
@takes(list_of(tuple_of(unicode)), list_of(unicode))
@returns(list_of(unicode))
def find_formals(sig, typs):
# bogus_params = [(C, '0'), (D, '1')]
bogus_params = [ (ty, unicode(i)) for i, ty in enumerate(typs) ]
# args_idx = ['0', '1']
args_idx = param_match(sig, bogus_params)
# matched = [(C, x), (D, y)]
matched = [ sig[int(i)] for i in args_idx if i != C.J.N ]
# return [x, y]
return [ nm for _, nm in matched ]
# an expression to call the given static method
@takes(Method, list_of(tuple_of(unicode)))
@returns(unicode)
def call_stt(mtd, params):
args = sig_match(mtd.params, params)
return u"{}.{}({})".format(mtd.clazz.name, mtd.name, ", ".join(args))
# (DECL (ANNOTATION ...)* modifier* ((FIELD|METHOD) ...))
# (METHOD (TYPE Id) (NAME Id) (PARAMS (Id Id)+)? (THROWS Id+)? Body)
@takes("Clazz", AST, list_of("Anno"), list_of(unicode))
@returns(nothing)
def parse(cls, node, annos, mods):
_node = node.getChildren()[-1]
typ = util.implode_id(_node.getChild(0))
name = _node.getChild(1).getChild(0).getText()
b_idx = 2
params = []
if _node.getChild(b_idx).getText() == C.T.PARA:
__params = _node.getChild(b_idx).getChildren()
s_params = map(op.methodcaller("getText"), __params)
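# Glue qualified/array/generic type tokens back together before pairing
# them with parameter names, e.g. [u'java', u'.', u'util', u'.', u'List',
# u'x'] yields the (type, name) pair (u'java.util.List', u'x').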
ty = u''
for p in s_params:
if p in ['.', '[', ']', '<', '>', '?']: ty += p
elif ty.endswith('.'): ty += p
elif not ty: ty = p
else:
params.append( (ty, p) )
ty = u''
b_idx = b_idx + 1
throws = []
if _node.getChild(b_idx).getText() == C.T.THROWS:
_throws = util.mk_v_node_w_children(_node.getChild(b_idx).getChildren())
throws = util.implode_id(_throws).split(',')
b_idx = b_idx + 1
mtd = Method(clazz=cls, annos=annos, mods=mods, typ=typ, name=name, params=params, throws=throws)
mtd.body = st.parse(mtd, _node.getChildren()[b_idx:])
cls.mtds.append(mtd)
|
plum-umd/pasket
|
pasket/meta/method.py
|
Python
|
mit
| 8,739
|
[
"VisIt"
] |
11d20fda75d66e5f6cd11a6f6832e828ce78f16ccce80b78fab3727f74bc9eda
|
##############################################################################
# MDTraj: A Python Library for Loading, Saving, and Manipulating
# Molecular Dynamics Trajectories.
# Copyright 2012-2013 Stanford University and the Authors
#
# Authors: Robert McGibbon
# Contributors:
#
# MDTraj is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 2.1
# of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with MDTraj. If not, see <http://www.gnu.org/licenses/>.
##############################################################################
##############################################################################
# Imports
##############################################################################
import numpy as np
from numpy.testing import *
import mdtraj as md
from mdtraj import element
from mdtraj.testing import get_fn, skipif, eq
from mdtraj.geometry.sasa import _ATOMIC_RADII
##############################################################################
# Globals
##############################################################################
# set up a mock topology with 1 atom
topology1 = md.Topology()
topology1.add_atom('H', element.hydrogen, topology1.add_residue('res', topology1.add_chain()))
# set up a mock topology with two atoms
topology2 = md.Topology()
_res2 = topology2.add_residue('res', topology2.add_chain())
topology2.add_atom('H', element.hydrogen, _res2)
topology2.add_atom('H', element.hydrogen, _res2)
##############################################################################
# Tests
##############################################################################
def test_sasa_0():
# make one atom at the origin
traj = md.Trajectory(xyz=np.zeros((1,1,3)), topology=topology1)
probe_radius = 0.14
calc_area = np.sum(md.geometry.shrake_rupley(traj, probe_radius=probe_radius))
true_area = 4 * np.pi * (_ATOMIC_RADII['H'] + probe_radius)**2
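# an isolated atom exposes its full sphere, so SASA = 4*pi*(r_atom + r_probe)**2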
assert_approx_equal(calc_area, true_area)
def test_sasa_1():
# two atoms
traj = md.Trajectory(xyz=np.zeros((1,2,3)), topology=topology2)
probe_radius = 0.14
true = 4 * np.pi * (_ATOMIC_RADII['H'] + probe_radius)**2
# when atoms are closer than 2e-5, there seems to be a bug.
# note that you should never actually have a case where atoms are this close
# but nonetheless I'm adding a check for this in the implementation -- to make
# it crash if the atoms are too close, as opposed to giving you wrong results
separations = np.linspace(2.0e-5, probe_radius*2 + _ATOMIC_RADII['H']*2, 10)
areas = np.zeros_like(separations)
# check the sasa as we vary the separation
for i, sep in enumerate(separations):
traj.xyz[0, 0, 1] = sep
areas[i] = np.sum(md.geometry.shrake_rupley(traj, probe_radius=probe_radius))
assert_approx_equal(areas[0], true, significant=3)
assert_approx_equal(areas[-1], 2*true)
# make sure that areas is increasing
assert_array_less(areas[0:8], areas[1:9])
def test_sasa_2():
t = md.load(get_fn('frame0.h5'))
val1 = np.sum(md.geometry.shrake_rupley(t[0])) # calculate only frame 0
val2 = np.sum(md.geometry.shrake_rupley(t)[0]) # calculate on all frames
true_frame_0_sasa = 2.859646797180176
assert_approx_equal(true_frame_0_sasa, val1)
assert_approx_equal(true_frame_0_sasa, val2)
def test_sasa_3():
traj_ref = np.loadtxt(get_fn('g_sas_ref.dat'))
traj = md.load(get_fn('frame0.h5'))
traj_sasa = md.geometry.shrake_rupley(traj, probe_radius=0.14, n_sphere_points = 960)
# the algorithm used by gromacs' g_sas is slightly different than the one
# used here, so the results are not exactly the same
assert_array_almost_equal(traj_sasa, traj_ref, decimal=2)
def test_sasa_4():
def _test_atom_group(t, value):
sasa = md.shrake_rupley(t, mode='atom')
rids = np.array([a.residue.index for a in t.top.atoms])
for i, rid in enumerate(np.unique(rids)):
mask = (rids == rid)
eq(value[:, i], np.sum(sasa[:, mask], axis=1))
t = md.load(get_fn('frame0.h5'))
value = md.shrake_rupley(t, mode='residue')
assert value.shape == (t.n_frames, t.n_residues)
yield lambda: _test_atom_group(t, value)
# scramble the order of the atoms, and which residue each is a
# member of
df, bonds = t.top.to_dataframe()
df['resSeq'] = np.random.permutation(df['resSeq'])
df['resName'] = df['resSeq']
t.top = md.Topology.from_dataframe(df, bonds)
value = md.shrake_rupley(t, mode='residue')
yield lambda: _test_atom_group(t, value)
|
ctk3b/mdtraj
|
mdtraj/geometry/tests/test_sasa.py
|
Python
|
lgpl-2.1
| 5,058
|
[
"Gromacs",
"MDTraj"
] |
55acdb9008ddcdd7e1383cd4dfb6a00ed5e6f029abbfc93c43282bfe6f556d07
|
#!/usr/bin/env python
import sys
try:
import netCDF4 as netCDF
except ImportError:
print "netCDF4 is not installed!"
sys.exit(1)
from matplotlib import pyplot as pp
from matplotlib import colors as mc
from optparse import OptionParser
from siple.reporting import endpause
import numpy as np
usage = """Usage: %prog [options]
Example: %prog -N 100 -n 0.1"""
parser = OptionParser(usage=usage)
parser.add_option("-i","--input_file",type='string',
help='input file')
parser.add_option("-c","--tauc_cap",type='float',default=200000,
help='maximum tauc value to display')
parser.add_option("-e","--tauc_error_cap",type='float',default=0.2,
help='maximum relative error to display')
(options, args) = parser.parse_args()
try:
ds = netCDF.Dataset(options.input_file)
except:
print('ERROR: could not open input file (option -i is required)')
parser.print_help()
sys.exit(1)
secpera = 3.15569259747e7
tauc = ds.variables['tauc'][...].squeeze()
tauc_true = ds.variables['tauc_true'][...].squeeze()
tauc_diff = tauc-tauc_true
not_ice = abs(ds.variables['mask'][...].squeeze() - 2) > 0.01
tauc[not_ice] = 0
tauc_true[not_ice] = 0
tauc_diff[not_ice] = 0.
u_computed = ds.variables['u_computed'][...].squeeze()*secpera
v_computed = ds.variables['v_computed'][...].squeeze()*secpera
cbase_computed = np.sqrt(u_computed * u_computed + v_computed * v_computed)
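# sliding speed magnitude in m/a (u,v were converted from m/s via secpera above)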
not_sliding = np.logical_and( (abs(u_computed) < 10.) , (abs(v_computed) < 10.) )
tauc[not_sliding] = 0
tauc_true[not_sliding] = 0
tauc_diff[not_sliding] = 0.
# difference figure
pp.clf()
pp.imshow(tauc_diff.transpose()/tauc_true.transpose(),origin='lower',vmin=-options.tauc_error_cap,vmax=options.tauc_error_cap)
pp.title(r'$(\tau_c$ - true) / true')
pp.colorbar()
# side-by-side comparison
pp.figure()
pp.subplot(1,2,1)
pp.imshow(tauc.transpose(),origin='lower',vmin=0.0,vmax=options.tauc_cap)
pp.title(r'$\tau_c$ [from inversion]')
pp.colorbar()
pp.subplot(1,2,2)
pp.imshow(tauc_true.transpose(),origin='lower',vmin=0.0,vmax=options.tauc_cap)
pp.title(r'true $\tau_c$ [prior]')
pp.colorbar()
# show computed sliding speed
pp.figure()
im = pp.imshow(cbase_computed.transpose(),origin='lower',
norm=mc.LogNorm(vmin=0.1, vmax=1000.0))
pp.title('computed sliding speed')
t = [0.1, 1.0, 10.0, 100.0, 1000.0]
pp.colorbar(im, ticks=t, format='$%.1f$')
# pp.ion()
pp.show()
# endpause()
|
talbrecht/pism_pik06
|
examples/inverse/tauc_compare.py
|
Python
|
gpl-3.0
| 2,393
|
[
"NetCDF"
] |
dc7306c12b5f28d562d756d1097313f4e08081dde4087e01f83eb420577b99b6
|
##############################################################################
# Copyright (c) 2013-2017, Lawrence Livermore National Security, LLC.
# Produced at the Lawrence Livermore National Laboratory.
#
# This file is part of Spack.
# Created by Todd Gamblin, tgamblin@llnl.gov, All rights reserved.
# LLNL-CODE-647188
#
# For details, see https://github.com/spack/spack
# Please also see the NOTICE and LICENSE files for our notice and the LGPL.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License (as
# published by the Free Software Foundation) version 2.1, February 1999.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
# conditions of the GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
##############################################################################
from spack import *
class RRbgl(RPackage):
"""A fairly extensive and comprehensive interface to the graph
algorithms contained in the BOOST library."""
homepage = "https://www.bioconductor.org/packages/RBGL/"
url = "https://git.bioconductor.org/packages/RBGL"
version('1.52.0', git='https://git.bioconductor.org/packages/RBGL', commit='93e8fcfafec8f1cd5638fe30dc0f9506d15b49c0')
depends_on('r@3.4.0:3.4.9', when='@1.52.0')
depends_on('r-graph', type=('build', 'run'))
|
skosukhin/spack
|
var/spack/repos/builtin/packages/r-rbgl/package.py
|
Python
|
lgpl-2.1
| 1,741
|
[
"Bioconductor"
] |
525b9200a194b131e56c0491bf15ec136cc29aa9d61290e3d0463de5c382f136
|
#!/usr/bin/python
#
# Created on Aug 25, 2016
# @author: Gaurav Rastogi (grastogi@avinetworks.com)
# Eric Anderson (eanderson@avinetworks.com)
# module_check: supported
#
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: avi_snmptrapprofile
author: Gaurav Rastogi (grastogi@avinetworks.com)
short_description: Module for setup of SnmpTrapProfile Avi RESTful Object
description:
- This module is used to configure SnmpTrapProfile object
- more examples at U(https://github.com/avinetworks/devops)
requirements: [ avisdk ]
version_added: "2.4"
options:
state:
description:
- The state that should be applied on the entity.
default: present
choices: ["absent","present"]
name:
description:
- A user-friendly name of the snmp trap configuration.
required: true
tenant_ref:
description:
- It is a reference to an object of type tenant.
trap_servers:
description:
- The ip address or hostname of the snmp trap destination server.
url:
description:
- Avi controller URL of the object.
uuid:
description:
- Uuid of the snmp trap profile object.
extends_documentation_fragment:
- avi
'''
EXAMPLES = """
- name: Example to create SnmpTrapProfile object
avi_snmptrapprofile:
controller: 10.10.25.42
username: admin
password: something
state: present
name: sample_snmptrapprofile
"""
RETURN = '''
obj:
description: SnmpTrapProfile (api/snmptrapprofile) object
returned: success, changed
type: dict
'''
from ansible.module_utils.basic import AnsibleModule
try:
from ansible.module_utils.avi import (
avi_common_argument_spec, HAS_AVI, avi_ansible_api)
except ImportError:
HAS_AVI = False
def main():
argument_specs = dict(
state=dict(default='present',
choices=['absent', 'present']),
name=dict(type='str', required=True),
tenant_ref=dict(type='str',),
trap_servers=dict(type='list',),
url=dict(type='str',),
uuid=dict(type='str',),
)
argument_specs.update(avi_common_argument_spec())
module = AnsibleModule(
argument_spec=argument_specs, supports_check_mode=True)
if not HAS_AVI:
return module.fail_json(msg=(
'Avi python API SDK (avisdk>=17.1) is not installed. '
'For more details visit https://github.com/avinetworks/sdk.'))
return avi_ansible_api(module, 'snmptrapprofile',
set([]))
if __name__ == '__main__':
main()
|
e-gob/plataforma-kioscos-autoatencion
|
scripts/ansible-play/.venv/lib/python2.7/site-packages/ansible/modules/network/avi/avi_snmptrapprofile.py
|
Python
|
bsd-3-clause
| 3,396
|
[
"VisIt"
] |
f9c5c18221027d7ec8972891dd35ddd34e08b515ae451ce897cab35aecbd45ca
|
"""These classes are here to alter simulation products or other sequences"""
import math
from seqtools.sequence import rc, Sequence
from seqtools.simulation.randomsource import RandomSource
from seqtools.format.fastq import FASTQ
class ErrorMakerGeneric(object):
"""Add errors to sequences"""
def __init__(self,rand=None,seed=None):
if rand:
self.random = rand
else:
if seed: self.random = RandomSource(seed)
else: self.random = RandomSource()
def permute(self,seq):
return seq
class ErrorMakerFlatRate(ErrorMakerGeneric):
"""Class to define how to make errors, and to introduce those errors"""
def __init__(self,rate=0,rand=None,seed=None):
super(ErrorMakerFlatRate,self).__init__(rand=rand,seed=seed)
self._rate = rate
def set_rate(self,rate): self._rate = rate
def permute(self,fastq):
sequence = fastq.sequence
seq = ''
qual = ''
qbase = rate_to_phred33(self._rate)
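# Reading of the branch logic below: when rnum falls under the error rate,
# the interval [0, rate) is split into five equal steps; the first three
# steps give a substitution, the fourth an insertion (before or after the
# current base), and the last an implicit deletion (the base is simply
# not emitted).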
for i in range(len(sequence)):
# check context
rnum = self.random.random()
if rnum < self._rate:
step1 = self._rate/float(5)
if rnum < 3*step1:
seq += self.random.different_random_nt(sequence[i])
qual += qbase
elif rnum < 4*step1:
if (rnum-3*step1)/step1 < 0.5: #either before or after
seq = seq+sequence[i]+self.random.random_nt()
qual += qbase+qbase
else:
seq = seq+self.random.random_nt()+sequence[i]
qual += qbase+qbase
# we don't need to explicitly do the deletion.
else:
seq += sequence[i]
qual += qbase
return FASTQ('@'+fastq.header+"\n"+seq+"\n+\n"+qual+"\n")
class ErrorMaker(ErrorMakerGeneric):
"""Class to define how to make errors, and to introduce those errors"""
def __init__(self,rand=None,seed=None):
super(ErrorMaker,self).__init__(rand=rand,seed=seed)
#### context information ####
self._before_base = None
self._after_base = None
#### set the reference base to change for del,mismatch ###
self._observed_base = None
#### set what to change the base to for ins or mismatch
self._modified_base = None
self._substitution_rate = 0
self._deletion_rate = 0
self._insertion_rate = 0
def set_substitution_rate(self,rate):
self._substitution_rate = rate
def set_deletion_rate(self,rate):
self._deletion_rate = rate
def set_insertion_rate(self,rate):
self._insertion_rate = rate
def permute(self,fastq):
s = fastq
if self._substitution_rate > 0:
s = self.random_substitution(s, self._substitution_rate)
if self._insertion_rate > 0:
s = self.random_insertion(s, self._insertion_rate)
if self._deletion_rate > 0:
s = self.random_deletion(s, self._deletion_rate)
return s
def set_before_context(self,base):
"""Limit errors to a specific preceeding base context"""
self._before_base = base
def set_after_context(self,base):
"""Limit errors to a specific following base context"""
self._after_base = base
def set_observed_base(self,base):
"""Limit errors to a specific reference base"""
self._observed_base = base
def set_modified_base(self,base):
"""Limit errors to a specific type of sequenced base"""
self._modified_base = base
def random_substitution(self,fastq,rate):
"""Perform the permutation on the sequence
:param fastq: FASTQ sequence to permute
:type fastq: format.fastq.FASTQ
:param rate: how frequently to permute
:type rate: float
:return: Permuted FASTQ
:rtype: format.fastq.FASTQ
"""
sequence = fastq.sequence
seq = ''
for i in range(len(sequence)):
# check context
prev = None
if i >= 1: prev = sequence[i-1]
next = None
if i < len(sequence)-1: next = sequence[i+1]
if self._before_base and (not prev or prev != self._before_base):
seq+=sequence[i]
continue
if self._after_base and (not next or next != self._after_base):
seq+=sequence[i]
continue
if self._observed_base and (sequence[i] != self._observed_base):
seq+=sequence[i]
continue
rnum = self.random.random()
if rnum < rate:
if not self._modified_base:
seq += self.random.different_random_nt(sequence[i])
else:
seq += self._modified_base
else:
seq += sequence[i]
return FASTQ('@'+fastq.header+"\n"+seq+"\n+\n"+fastq.qual+"\n")
def random_deletion(self,fastq,rate):
"""Perform the permutation on the sequence
:param fastq: FASTQ sequence to permute
:type fastq: format.fastq.FASTQ
:param rate: how frequently to permute
:type rate: float
:return: Permuted FASTQ
:rtype: format.fastq.FASTQ
"""
sequence = fastq.sequence
quality = fastq.qual
seq = ''
qual = None
if quality: qual = ''
for i in range(len(sequence)):
# check context
prev = None
if i >= 1: prev = sequence[i-1]
next = None
if i < len(sequence)-1: next = sequence[i+1]
if self._before_base and (not prev or prev != self._before_base):
seq+=sequence[i]
if quality: qual+=quality[i]
continue
if self._after_base and (not next or next != self._after_base):
seq+=sequence[i]
if quality: qual+=quality[i]
continue
if self._observed_base and (sequence[i] != self._observed_base):
seq+=sequence[i]
if quality: qual+=quality[i]
continue
rnum = self.random.random()
if rnum >= rate:
seq += sequence[i]
if quality: qual+=quality[i]
return FASTQ('@'+fastq.header+"\n"+seq+"\n+\n"+qual+"\n")
def random_insertion(self,fastq,rate,max_inserts=1):
"""Perform the permutation on the sequence. If authorized to do multiple bases they are done at hte rate defined here.
:param fastq: FASTQ sequence to permute
:type fastq: format.fastq.FASTQ
:param rate: how frequently to permute
:type rate: float
:param max_inserts: the maximum number of bases to insert (default 1)
:type rate: int
:return: Permuted FASTQ
:rtype: format.fastq.FASTQ
"""
sequence = fastq.sequence
quality = fastq.qual
seq = ''
qual = None
ibase = rate_to_phred33(rate)
if quality: qual = ''
z = 0
while self.random.random() < rate and z < max_inserts:
if self._before_base: break # can't do this one
if self._after_base:
if self._after_base != sequence[1]: break
z += 1
if self._modified_base:
seq += self._modified_base
if quality: qual += ibase
else:
seq += self.random.random_nt()
if quality: qual += ibase
z = 0
for i in range(len(sequence)):
# check context
prev = sequence[i]
next = None
if i < len(sequence)-1: next = sequence[i+1]
if self._before_base and (not prev or prev != self._before_base):
seq+=sequence[i]
if quality: qual+=quality[i]
continue
if self._after_base and (not next or next != self._after_base):
seq+=sequence[i]
if quality: qual+= quality[i]
continue
seq += sequence[i]
if quality: qual += quality[i]
while self.random.random() < rate and z < max_inserts:
z+=1
if self._modified_base:
seq += self._modified_base
if quality: qual += ibase
else:
seq += self.random.random_nt()
if quality: qual += ibase
z = 0
return FASTQ('@'+fastq.header+"\n"+seq+"\n+\n"+qual+"\n")
def random_flip(self,sequence):
"""Change the direction of the sequence with 0.5 probability"""
if self.random.random() < 0.5:
return rc(sequence)
return sequence
class CutMaker:
"""Class to cut the sequence to different sizes
:param rand: pass a random source, otherwise it gets a new RandomSource
:param seed: if you want to set a seed here
:type rand: RandomSource
:type seed: int
"""
def __init__(self,rand=None,seed=None):
#print rand
if rand:
self.random = rand
else:
self.random = RandomSource()
if seed: self.random = RandomSource(seed)
self._gauss_min = None
self._gauss_mu = None
self._gauss_sigma = None
#self.set_lr_cuts()
def cut(self,seq):
#cut a sequence or a mapping
if not self._gauss_min: return seq # if not set up return the sequence
rgauss = self.random.gauss(self._gauss_mu,self._gauss_sigma)
l = min(seq.length,max(self._gauss_min,int(rgauss)))
#print self._gauss_min
#print self._gauss_mu
#print rgauss
leeway = seq.length-l
start = self.random.randint(0,leeway)
return seq.slice_sequence(start,start+l)
def set_custom(self,gmin,gmu,gsigma):
"""Set a minimum lengtha, and then the gaussian distribution parameters for cutting
For any sequence longer than the minimum the guassian parameters will be used"""
self._gauss_min = gmin
self._gauss_mu = gmu
self._gauss_sigma = gsigma
def set_lr_cuts(self):
self._gauss_min = 1000
self._gauss_mu = 4000
self._gauss_sigma = 500
def set_sr_cuts(self):
self._gauss_min = 150
self._gauss_mu = 290
self._gauss_sigma = 290
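# A minimal CutMaker usage sketch (hypothetical `seq` object providing
# .length and .slice_sequence(start, end), as assumed by cut() above):
#   cutter = CutMaker(seed=42)
#   cutter.set_lr_cuts()  # >= 1 kb fragments drawn around mu=4000, sigma=500
#   fragment = cutter.cut(seq)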
def random_flip(sequence,rnum=None):
"""Flip a sequence direction with 0.5 probability"""
randin = rnum
if not randin: randin = RandomSource()
if randin.random() < 0.5:
return rc(sequence)
return sequence
def rate_to_phred33(rate):
"""Convert an error rate to a phred 33 character"""
if rate < 0.0001: return 'I'
return chr(int(-10*math.log10(rate))+33)
def phred33_to_rate(q):
"""Convert a phred33 character to an error rate"""
return math.pow(10,float(ord(q)-33)/-10)
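# Worked example for the two conversions above:
#   rate_to_phred33(0.01) -> chr(int(-10*log10(0.01)) + 33) = chr(53) = '5'
#   phred33_to_rate('5') -> 10 ** ((53 - 33) / -10.0) = 0.01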
|
jason-weirather/py-seq-tools
|
seqtools/simulation/permute.py
|
Python
|
apache-2.0
| 9,854
|
[
"Gaussian"
] |
d5bf3907ea1d93669f780ee79d5f60a867535abbc7273f7d283d04b5e0c32a7e
|
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
Created on Fri Jul 19 16:06:40 2019
@author: raf
"""
# IMPORT STUFF
from pdb import set_trace as stop
from collections import OrderedDict
import os
import copy
import numpy as np
from astropy import table
import glob
import gc
import string as st
import pandas as pd
from vison import __version__
from vison.support import files
from vison.support import vjson
from vison.datamodel import core as vcore
from vison.datamodel import cdp as cdpmod
from vison.support import vcal
from vison.fpa import fpa as fpamod
from vison.plot import plots_fpa as plfpa
from vison.plot import baseplotclasses as basepl
from vison.support.report import Report
from matplotlib import pyplot as plt
plt.switch_backend('TkAgg')
from matplotlib.colors import Normalize
# END IMPORT
class MetaCal(object):
def __init__(self, **kwargs):
""" """
if 'design' in kwargs:
design = kwargs['design']
else:
design = 'final'
self.fpa = fpamod.FPA(design)
self.blocks = copy.deepcopy(self.fpa.all_blocks)
self.flight_blocks = copy.deepcopy(self.fpa.flight_blocks)
# self.blocks = ['BORN','CURIE','DIRAC'] # TESTS
# self.flight_blocks = ['BORN','CURIE','DIRAC','FOWLER','GUYE','KRAMERS'] # TESTS
self.CCDs = [1, 2, 3]
self.Quads = ['E', 'F', 'G', 'H']
self.NSLICES_FPA = self.fpa.NSLICES
self.NCOLS_FPA = self.fpa.NCOLS
if 'vcalfile' in kwargs:
self.vcalfile = kwargs['vcalfile']
else:
self.vcalfile = ''
if 'respathroot' in kwargs:
self.respathroot = kwargs['respathroot']
else:
self.respathroot = ''
if 'jsonf' in kwargs:
self.jsonf = kwargs['jsonf']
else:
self.jsonf = ''
if 'testkey' in kwargs:
self.testkey = kwargs['testkey']
else:
self.testkey = ''
if 'outparent' in kwargs:
outparent = kwargs['outparent']
else:
outparent = ''
self.outpathroot = os.path.join(outparent, '%s_FPA' % self.testkey.upper())
self.inventory = OrderedDict()
self.results = OrderedDict()
self.products = OrderedDict()
self.ParsedTable = None
self.roeVCals = dict()
self.init_roeVCals()
self.cdps = OrderedDict()
self.outcdps = OrderedDict()
self.figs = dict()
self.report = None
self.figspath = os.path.join(self.outpathroot, 'figs')
self.cdpspath = os.path.join(self.outpathroot, 'cdps')
self.CDP_header = OrderedDict()
self.CDP_header['vison']=__version__
self.CDP_header['fpa_design'] = design
self.CDP_header['FPA'] = self.fpa.FPA_MAP.copy()
self.CDP_header['vcalfile'] = os.path.split(self.vcalfile)[-1]
def get_time_tag(self):
from vison.support.vistime import get_time_tag
return get_time_tag()
def init_fignames(self):
raise NotImplementedError("Subclass must implement abstract method")
def init_outcdpnames(self):
raise NotImplementedError("Subclass must implement abstract method")
def run(self, doLoad=True, doParse=True, doDump=True, doReport=True):
""" """
if not os.path.exists(self.outpathroot):
os.system('mkdir -p %s' % self.outpathroot)
dictiopick = os.path.join(self.outpathroot,
'%s_dictionary.pick' % self.testkey.lower())
if doLoad:
print('Loading block(s) results...')
self.load_block_results()
files.cPickleDumpDictionary(self.inventory, dictiopick)
else:
print('Re-loading block(s) results')
self.inventory = files.cPickleRead(dictiopick)
parsedpick = os.path.join(self.outpathroot,
'%s_parsed.pick' % self.testkey.lower())
if doParse:
print('Parsing results')
for testname in self.testnames:
self.parse_test_results(testname)
parsedbundle = dict(PT=self.ParsedTable,
products=self.products)
files.cPickleDumpDictionary(parsedbundle, parsedpick)
else:
print('Re-loading parsed results')
parsedbundle = files.cPickleRead(parsedpick)
self.ParsedTable = parsedbundle['PT'].copy()
self.products = parsedbundle['products'].copy()
if doReport:
self.report = Report(TestName=self.testkey.upper(),
Model='FM',
Reference='7-XXX',
doDraft = False)
else:
self.report = None
if doDump:
self.dump_aggregated_results()
if doReport:
cleantexafter = False # hard-wired for now
reportroot = '%s_TR' % self.testkey.upper()
#self.report()
outfiles = self.report.doreport(
reportroot,
cleanafter=cleantexafter,
silent=True) # commented on TESTS
for outfile in outfiles:
os.system('mv %s %s/' % (outfile, self.outpathroot))
def init_roeVCals(self):
for block in self.blocks:
ROE_SN = fpamod.ROE_SNs[block]
roevcal = vcal.RoeVCal(self.vcalfile)
try:
roevcal.load_VCal(ROE_SN)
except IOError:
continue
self.roeVCals[block] = copy.deepcopy(roevcal)
def _update_inventory(self, block, testline):
test = testline[0]
session = testline[1]
repeat = testline[2]
if test not in self.testnames:
return None
sesspath = os.path.join(self.respathroot, block, 'ANALYSIS', 'results_atCALDATA',
session)
#alltestpaths = glob.glob(os.path.join(sesspath,'%s*' % test))
tester = glob.glob(os.path.join(sesspath, '%s.[0-9]' % test))
if len(tester) > 0:
testpath = '%s.%i' % (test, repeat)
else:
testpath = test
respath = os.path.join(sesspath, testpath)
DDfile = os.path.join(respath, '%s_DataDict.pick' % test)
try:
assert os.path.exists(respath)
except BaseException:
stop()
try:
assert os.path.exists(DDfile)
except BaseException:
stop()
if block not in self.inventory:
self.inventory[block] = OrderedDict()
if test not in self.inventory[block]:
self.inventory[block][test] = []
self.inventory[block][test].append(OrderedDict())
ii = self.inventory[block][test][-1]
ii['DD'] = DDfile
try:
dd = files.cPickleRead(DDfile)
except BaseException:
print('Could not load %s\n\n' % DDfile)
raise RuntimeError
ii['dd'] = copy.deepcopy(dd)
ii['resroot'] = respath
ii['session'] = session
ii['repeat'] = repeat
return None
def load_block_results(self, inventoryfile=None):
if inventoryfile is None:
inventoryfile = self.jsonf
inventraw = vjson.load_jsonfile(inventoryfile, useyaml=True)
for block in self.blocks:
if block in list(inventraw['inventory'].keys()):
rawcargo = inventraw['inventory'][block]
for testline in rawcargo:
self._update_inventory(block, testline)
return None
def dump_aggregated_results(self):
raise NotImplementedError("Subclass must implement abstract method")
def stack_dd(self, dd, cols, index2stack, indices2keep, stacker='median'):
""" """
#indices = copy.deepcopy(dd.indices)
stkdd = vcore.DataDict()
stkdd.compliances = dd.compliances.copy()
stkdd.flags = copy.deepcopy(dd.flags)
stkdd.meta = dd.meta.copy()
stkdd.products = dd.products.copy()
stacker_functions = dict(median=np.median,
mean=np.mean)
fstacker = stacker_functions[stacker]
for ixc, icol in enumerate(cols):
idtype = dd.mx[icol][:].dtype
iindices = dd.mx[icol].indices
niindices = []
niindices = [index for index in iindices
if index.name in indices2keep]
# STACKING
_ix2stack = iindices.names.index(index2stack)
if idtype.char not in ['S', 'O', 'U']:
imx = fstacker(dd.mx[icol][:], axis=_ix2stack, keepdims=True)
else:
slicer = []
for _ix, _i in enumerate(iindices):
if _i.name == index2stack:
slicer.append([0])
else:
slicer.append(slice(_i.len))
imx = dd.mx[icol][tuple(slicer)]
# TRIMMING UNNECESSARY AXES
if idtype.char not in ['S', 'O', 'U']:
_ix2ax = [_ix for _ix, ix in enumerate(iindices) if
ix.name not in indices2keep]
imx = fstacker(imx, axis=_ix2ax, keepdims=False)
else:
slicer = []
for _ix, _i in enumerate(iindices):
if _i.name not in indices2keep and _i.name != index2stack:
slicer.append(0)
imx = imx[tuple(slicer)]
# INDEXING
niindices = []
for ix in iindices:
if ix.name not in indices2keep:
pass
else:
if ix.name == index2stack:
_ix = copy.deepcopy(ix)
_ix.len = 1
_ix.vals = [ix.vals[0]]
niindices.append(_ix)
else:
niindices.append(copy.deepcopy(ix))
stkdd.addColumn(imx, name=icol, indices=niindices)
return stkdd
def stackTables(self, t1, t2):
""" """
return table.vstack([t1, t2])
def parse_single_test_gen(self, jrep, block, testname, inventoryitem):
""" """
IndexS = vcore.vMultiIndex([vcore.vIndex('ix', vals=[0])])
idd = copy.deepcopy(inventoryitem['dd'])
sidd = self.stack_dd(idd, self.incols,
indices2keep=['ix', 'CCD', 'Quad'],
index2stack='ix',
stacker='median')
sidd.dropColumn('test')
# rename the CCD index values in sidd, for convenience
sidd.indices[sidd.indices.names.index('CCD')].vals = self.CCDs
for col in sidd.colnames:
if 'CCD' in sidd.mx[col].indices.get_names():
_i = sidd.mx[col].indices.names.index('CCD')
sidd.mx[col].indices[_i].vals = self.CCDs
# CALIBRATED HK
try:
roeVCal = self.roeVCals[block]
except KeyError:
print(('Voltage calibrations for block %s not found!' % block))
roeVCal = None
if roeVCal is not None:
for commcal_key in ['IDL', 'IDH']:
for iCCD, CCD in enumerate(self.CCDs):
for Q in self.Quads:
cEkey = '%s_CCD%s_Quad%s_CAL' % (commcal_key, CCD, Q)
EV = sidd.mx[commcal_key][0][iCCD]
EVcal = roeVCal.fcal_ELVIS_script(EV, commcal_key, CCD, Q)
sidd.addColumn(np.zeros(1, dtype=float) + EVcal,
cEkey, IndexS)
for hkcal_key in ['OD', 'RD', 'IG1', 'IG2']:
for CCD in self.CCDs:
for Q in self.Quads:
rHKkey = roeVCal.get_HKkey(hkcal_key, CCD, Q)
HKkey = 'HK_%s' % rHKkey.upper()
cHKkey = '%s_CAL' % HKkey
if HKkey in sidd.mx:
HKV = sidd.mx[HKkey][0]
HKVcal = roeVCal.fcal_HK(HKV, hkcal_key, CCD, Q)
sidd.addColumn(np.zeros(1, dtype=float) + HKVcal,
cHKkey, IndexS)
else:
dummy_roeVCal = vcal.RoeVCal()
cHKkeys = []
for cal_key in ['OD', 'RD', 'IG1', 'IG2']:
for CCD in self.CCDs:
for Q in self.Quads:
rHKkey = dummy_roeVCal.get_HKkey(cal_key, CCD, Q)
HKkey = 'HK_%s' % rHKkey.upper()
cHKkey = '%s_CAL' % HKkey
cHKkeys.append(cHKkey)
for cHKkey in cHKkeys:
sidd.addColumn(np.zeros(1, dtype=float) + np.nan,
cHKkey, IndexS)
return sidd
def parse_test_results(self, testname):
""" """
for iblock, block in enumerate(self.blocks):
try:
Nreps = len(self.inventory[block][testname])
except KeyError:
print(('block %s not found!' % block))
continue
for jrep in range(Nreps):
print(('Parsing %s:%s' % (block, jrep + 1)))
inventoryitem = self.inventory[block][testname][jrep]
sit = self.parse_single_test(jrep, block, testname, inventoryitem)
# MERGING WITH PREVIOUS DDs
try:
pt = self.stackTables(pt, sit)
except NameError:
pt = copy.deepcopy(sit)
self.ParsedTable[testname] = pt
def add_DataAlbaran2Report(self):
"""Adds a data delivery note to the report."""
albaran_dict = OrderedDict()
cols = ['BLOCK','TEST','SESSION','DAY-FOLDER','OBSIDS']
for col in cols:
albaran_dict[col] = []
for iblock, block in enumerate(self.blocks):
for testname in self.testnames:
try:
Nreps = len(self.inventory[block][testname])
except KeyError:
print(('block %s not found!' % block))
continue
for jrep in range(Nreps):
iitem = self.inventory[block][testname][jrep]
dayfolder = os.path.split(iitem['dd'].meta['inputs']['datapath'])[-1]
sdayfolder = dayfolder.replace('_','\\_')
OBSID_min = iitem['dd'].meta['data_inventory']['ObsID'].min()
OBSID_max = iitem['dd'].meta['data_inventory']['ObsID'].max()
OBSID_lims = '%i-%i' % (OBSID_min, OBSID_max)
albaran_dict['BLOCK'].append(block)
albaran_dict['TEST'].append(testname)
albaran_dict['SESSION'].append(iitem['session'])
albaran_dict['DAY-FOLDER'].append(sdayfolder)
albaran_dict['OBSIDS'].append(OBSID_lims)
albaran_df = pd.DataFrame(albaran_dict)
kwargs = dict(multicolumn=True, multirow=True, longtable=True, index=False)
tex = albaran_df.to_latex(**kwargs)
ncols = len(cols)
caption = 'Data Inventory.'
wtex = cdpmod.wraptextable(tex, ncols, caption, fitwidth=True, tiny=True,
longtable=True)
self.report.add_Text(wtex)
def get_ixblock(self, PT, block):
return np.where(PT['BLOCK'].data==block)
def get_stat_from_FPAMAP(self, M, numpystat):
""" """
vals = []
for jY in range(self.NSLICES_FPA):
for iX in range(self.NCOLS_FPA):
Ckey = 'C_%i%i' % (jY + 1, iX + 1)
for Q in self.Quads:
vals.append(M[Ckey][Q])
return numpystat(vals)
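# Usage sketch: M = self.get_FPAMAP_from_PT(PT, extractor), then
# self.get_stat_from_FPAMAP(M, np.nanmedian) reduces the per-quadrant
# map to a single robust statistic.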
def get_FPAMAP_from_PT(self, PT, extractor):
""" """
M = OrderedDict()
for jY in range(self.NSLICES_FPA):
for iX in range(self.NCOLS_FPA):
Ckey = 'C_%i%i' % (jY + 1, iX + 1)
M[Ckey] = OrderedDict()
locator = self.fpa.FPA_MAP[Ckey]
block = locator[0]
CCDk = locator[1]
for Q in self.Quads:
M[Ckey][Q] = extractor(PT, block, CCDk, Q)
return M
def plot_SimpleMAP(self, MAPdict, **kwargs):
""" """
VALs = []
for ckey in list(MAPdict.keys()):
for Q in self.Quads:
VALs.append(MAPdict[ckey][Q])
normfunction = Normalize(vmin=np.nanmin(VALs), vmax=np.nanmax(VALs), clip=False)
_kwargs = dict(doColorbar=True,
corekwargs=dict(
norm=normfunction
))
_kwargs.update(kwargs)
if 'figname' in _kwargs:
figname = _kwargs['figname']
else:
figname = ''
with plfpa.FpaHeatMap(MAPdict, **_kwargs) as heatmap:
heatmap.render(figname=figname)
#heatmap = None
gc.collect()
def _get_plot_gen(self, plotclass):
""" """
def plot_gen(self, datadict, **kwargs):
_kwargs = dict()
_kwargs.update(kwargs)
if 'figname' in _kwargs:
figname = _kwargs['figname']
else:
figname = ''
with plotclass(datadict, **_kwargs) as plot:
plot.render(figname=figname)
gc.collect()  # reclaim resources after rendering (mirrors plot_SimpleMAP)
return plot_gen
def plot_XY(self, XYdict, **kwargs):
""" """
plotobj = self._get_plot_gen(basepl.XYPlot)
plotobj(self, XYdict, **kwargs)
def plot_HISTO(self, Hdict, **kwargs):
""" """
plotobj = self._get_plot_gen(basepl.HistoPlot)
plotobj(self, Hdict, **kwargs)
def plot_XYMAP(self, XYMAP, **kwargs):
plotobj = self._get_plot_gen(plfpa.FpaPlotYvsX)
plotobj(self, XYMAP, **kwargs)
def plot_XYCCD(self, XYCCD, **kwargs):
plotobj = self._get_plot_gen(basepl.CCD2DPlotYvsX)
plotobj(self, XYCCD, **kwargs)
def plot_ImgFPA(self, BCdict, **kwargs):
plotobj = self._get_plot_gen(plfpa.FpaImgShow)
plotobj(self, BCdict, **kwargs)
def add_StdQuadsTable2Report(self, extractor=None, Matrix=None, cdp=None, cdpdict=None):
""" """
assert (extractor is None) != (Matrix is None)
if extractor is not None:
def _get_val(Ckey,Q):
return extractor(Ckey,Q)
elif Matrix is not None:
def _get_val(Ckey,Q):
return Matrix[Ckey][Q]
defcdpdict = dict(TBkey = 'TB',
meta = dict(),
CDP_header = dict(),
header_title = 'generic title',
CDP_KEY = 'UNK',
caption = 'CAPTION PENDING',
valformat = '%.2e')
if cdpdict is not None:
assert isinstance(cdpdict, dict)
defcdpdict.update(cdpdict)
TBkey = defcdpdict['TBkey']
meta = defcdpdict['meta'].copy()
CDP_header = defcdpdict['CDP_header'].copy()
header_title = defcdpdict['header_title']
CDP_KEY = defcdpdict['CDP_KEY']
valformat = defcdpdict['valformat']
caption = defcdpdict['caption']
NCCDs = len(self.CCDs)
NBlocks = len(self.fpa.flight_blocks)
NP = NBlocks * NCCDs
TB = OrderedDict()
TB['CCDID'] = np.zeros(NP, dtype='U4')
TB['BLOCK'] = np.zeros(NP, dtype='U4')
TB['CCD'] = np.zeros(NP, dtype='U4')
_dtype = np.array(_get_val('C_11','E')).dtype
if 'S' in _dtype.str:
Qdtype = 'U30'
else:
Qdtype = 'float32'
for Q in self.Quads:
TB[Q] = np.zeros(NP, dtype=Qdtype)
for jY in range(self.NSLICES_FPA):
for iX in range(self.NCOLS_FPA):
Ckey = 'C_%i%i' % (jY + 1, iX + 1)
locator = self.fpa.FPA_MAP[Ckey]
block = locator[0]
CCDk = locator[1]
indx = jY * self.NCOLS_FPA + iX
TB['CCDID'][indx] = Ckey
TB['BLOCK'][indx] = block[0:4]
TB['CCD'][indx] = CCDk
for iQ, Q in enumerate(self.Quads):
TB[Q][indx] = _get_val(Ckey, Q)
TB_ddf = OrderedDict()
TB_ddf[TBkey] = pd.DataFrame.from_dict(TB)
if cdp is not None:
cdp.path = self.inputs['subpaths']['products']
cdp.ingest_inputs(
data=TB_ddf.copy(),
meta=meta.copy(),
header=CDP_header.copy()
)
cdp.init_wb_and_fillAll(header_title=header_title)
self.save_CDP(cdp)
self.pack_CDP_to_dd(cdp, CDP_KEY)
else:
cdp = cdpmod.Tables_CDP()
cdp.ingest_inputs(
data=TB_ddf.copy(),
meta=meta.copy(),
header=CDP_header.copy()
)
def fstr(x): return '%s' % x
def ff(x): return valformat % x
her_formatters = [fstr, fstr, fstr]
for iQ, Q in enumerate(self.Quads):
her_formatters.append(ff)
nicecaption = caption.replace('_', '\_')
Ttex = cdp.get_textable(sheet=TBkey, caption=nicecaption,
fitwidth=True,
tiny=True,
formatters=her_formatters)
self.report.add_Text(Ttex)
return TB, cdp
def addFigure2Report(self, png, figkey, caption='', texfraction=0.7):
""" """
if self.report is None:
return
assert os.path.exists(png)
eps = '%s.eps' % os.path.splitext(png)[0]
os.system('convert %s %s' % (png, eps))
self.report.add_Figure(eps, texfraction=texfraction,
caption=caption,
label=figkey)
def FITSify_CDP_header(self, CDP_header):
""" """
outCDP_header = CDP_header.copy()
if 'fpa_design' in outCDP_header:
fpa_design = outCDP_header.pop('fpa_design')
outCDP_header['fpadesi'] = fpa_design
if 'FPA' in outCDP_header:
rawFPA = outCDP_header.pop('FPA')
for jY in range(self.NSLICES_FPA):
for iX in range(self.NCOLS_FPA):
Ckey = 'C_%i%i' % (jY + 1, iX + 1)
outCDP_header[Ckey] = rawFPA[Ckey][-1]
return outCDP_header
|
ruymanengithub/vison
|
vison/metatests/metacal.py
|
Python
|
gpl-3.0
| 22,778
|
[
"DIRAC"
] |
ed3e65036a04f5c91500c4b08e7a140f05543d5760b2664eedd6a596c56acda7
|
"""User preferences for KlustaViewa."""
# -----------------------------------------------------------------------------
# Imports
# -----------------------------------------------------------------------------
import logging
import numpy as np
# -----------------------------------------------------------------------------
# Logging
# -----------------------------------------------------------------------------
# Console logging level, can be DEBUG, INFO or WARNING.
loglevel = logging.INFO
# Level of the logging file. DEBUG, INFO or WARNING, or just None to disable.
loglevel_file = logging.INFO
# -----------------------------------------------------------------------------
# Main window
# -----------------------------------------------------------------------------
# Should the software ask the user to save upon closing?
prompt_save_on_exit = True
# Delays (in seconds) used by the main-window refresh timer and buffer.
delay_timer = .05
delay_buffer = .1
# -----------------------------------------------------------------------------
# Similarity matrix
# -----------------------------------------------------------------------------
similarity_measure = 'gaussian' # or 'kl' for KL divergence
# -----------------------------------------------------------------------------
# Waveform view
# -----------------------------------------------------------------------------
# Approximate maximum number of spikes per cluster to show. Should be
# about 100 for low-end graphics cards, 1000 for high-end ones.
waveforms_nspikes_max_expected = 100
# The minimum number of spikes per cluster to display.
waveforms_nspikes_per_cluster_min = 10
# -----------------------------------------------------------------------------
# Feature view
# -----------------------------------------------------------------------------
# Opacity value of the background spikes.
feature_background_alpha = .25
# Opacity value of the spikes in the selected clusters.
feature_selected_alpha = .75
# Number of spikes to show in the background.
features_nspikes_background_max = 10000
# Maximum number of spikes per cluster to show.
features_nspikes_per_cluster_max = 1000
# Unit of the spike time in the feature view. Can be 'samples' or 'second'.
features_info_time_unit = 'second'
# -----------------------------------------------------------------------------
# Correlograms view
# -----------------------------------------------------------------------------
# Maximum number of clusters to show in the correlograms view.
correlograms_max_nclusters = 20
# Number and size of the data excerpts used to compute the correlograms.
correlograms_nexcerpts = 50
correlograms_excerpt_size = 10000
# -----------------------------------------------------------------------------
# IPython import path
# -----------------------------------------------------------------------------
# Paths where all .py files are loaded in IPython view.
# "~" corresponds to the user home path, C:\Users\Username\ on Windows,
# /home/username/ on Linux, etc.
ipython_import_paths = ['~/.kwiklib/code']
# -----------------------------------------------------------------------------
# Unit tests
# -----------------------------------------------------------------------------
# Delay between two successive automatic operations in unit tests for views.
test_operator_delay = .1
# Whether to automatically close the views during unit testing.
test_auto_close = True
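# Minimal override sketch: since this defaults module is plain Python, a user
# preferences file (its location depends on the install and is assumed here,
# not verified) can simply rebind any of the names above, e.g.:
#
#     loglevel = logging.DEBUG
#     waveforms_nspikes_max_expected = 1000
#     similarity_measure = 'kl'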
|
klusta-team/kwiklib
|
kwiklib/utils/preferences_default.py
|
Python
|
bsd-3-clause
| 3,308
|
[
"Gaussian"
] |
dd26b697daa0167c4040fb1e4cd768ecd788b9808c1121a82333bc78c87f5977
|
"""
functions to access the data dictionary in a clearer way
"""
import os
import toolz as tz
from bcbio.utils import file_exists
from bcbio.log import logger
import sys
LOOKUPS = {
"config": {"keys": ['config']},
"num_cores": {"keys": ['config', 'algorithm', 'num_cores'],
"default": 1},
"priority_regions": {"keys": ['config', 'algorithm', 'priority_regions']},
"problem_region_dir": {"keys": ["config", "algorithm", "problem_region_dir"]},
"gtf_file": {"keys": ['genome_resources', 'rnaseq', 'transcripts'],
"checker": file_exists},
"srna_gtf_file": {"keys": ['genome_resources', 'srnaseq', 'srna-transcripts'],
"checker": file_exists},
"mirbase_ref": {"keys": ['genome_resources', 'srnaseq', 'mirbase'],
"checker": file_exists},
"gene_bed": {"keys": ['genome_resources', 'rnaseq', 'gene_bed'],
"checker": file_exists},
"work_dir": {"keys": ['dirs', 'work']},
"sam_ref": {"keys": ["sam_ref"]},
"disambiguate": {"keys": ["config", "algorithm", "disambiguate"],
"default": False},
"lane": {"keys": ["rgnames", "lane"]},
"cores": {"keys": ["config", "algorithm", "num_cores"], "default": 1},
"sample_name": {"keys": ['rgnames', 'sample']},
"strandedness": {"keys": ['config', 'algorithm', 'strandedness'],
"default": "unstranded"},
"square_vcf": {"keys": ['square_vcf']},
"ploidy": {"keys": ['config', 'algorithm', 'ploidy'], "default": 2},
"gender": {"keys": ["metadata", "sex"], "default": ""},
"batch": {"keys": ["metadata", "batch"]},
"phenotype": {"keys": ["metadata", "phenotype"], "default": ""},
"hetcaller": {"keys": ["config", "algorithm", "hetcaller"]},
"variantcaller": {"keys": ['config', 'algorithm', 'variantcaller']},
"work_bam": {"keys": ["work_bam"]},
"count_file": {"keys": ["count_file"]},
"combined_counts": {"keys": ["combined_counts"]},
"annotated_combined_counts": {"keys": ["annotated_combined_counts"]},
"ref_file": {"keys": ["reference", "fasta", "base"]},
"dexseq_gff": {"keys": ['genome_resources', 'rnaseq', 'dexseq']},
"combined_fpkm": {"keys": ['combined_fpkm']},
"combined_fpkm_isoform": {"keys": ['combined_fpkm_isoform']},
"express_fpkm": {"keys": ['express_fpkm']},
"express_tpm": {"keys": ['express_tpm']},
"express_counts": {"keys": ['express_counts']},
"isoform_to_gene": {"keys": ['isoform_to_gene']},
"fusion_mode": {"keys": ['config', 'algorithm', 'fusion_mode']},
"dexseq_counts": {"keys": ['dexseq_counts']},
"description": {"keys": ['description']},
"aligner": {"keys": ['config', 'algorithm', 'aligner']},
"platform": {"keys": ['config', 'algorithm', 'platform'],
"default": "illumina"},
"quality_format": {"keys": ['config', 'algorithm', 'quality_format'],
"default": "standard"},
"adapters": {"keys": ['config', 'algorithm', 'adapters'],
"default": []},
"species": {"keys": ['config', 'algorithm', 'species'],
"default": None},
"variation_resources": {"keys": ["genome_resources", "variation"], "default": {}},
"qsig_file": {"keys": ['genome_resources', 'variation', 'qsignature'],
"checker": file_exists},
"mixup_check": {"keys": ["config", "algorithm", "mixup_check"],
"default": False},
"cufflinks_dir": {"keys": ['cufflinks_dir']},
"rsem": {"keys": ["config", "algorithm", "rsem"], "default": False},
"transcriptome_align": {"keys": ["config", "algorithm", "transcriptome_align"],
"default": False},
"expression_caller": {"keys": ["config", "algorithm", "expression_caller"],
"default": []},
"transcriptome_bam": {"keys": ["transcriptome_bam"]},
"fpkm_isoform": {"keys": ["fpkm_isoform"]},
"fpkm": {"keys": ["fpkm"]},
"galaxy_dir": {"keys": ["dirs", "galaxy"]},
"assembled_gtf": {"keys": ["assembled_gtf"]},
"assemble_transcripts": {"keys": ["config", "algorithm", "assemble_transcripts"],
"default": False},
"oncofuse_file": {"keys": ["oncofuse_file"]},
"split_bam": {"keys": ["split_bam"]},
"vrn_file": {"keys": ["vrn_file"]},
"variant_regions": {"keys": ["config", "algorithm", "variant_regions"]},
"callable_regions": {"keys": ["regions", "callable"]},
"offtarget_stats": {"keys": ["regions", "offtarget_stats"]},
"sample_callable": {"keys": ["regions", "sample_callable"]},
"coverage_interval": {"keys": ["config", "algorithm", "coverage_interval"]},
"coverage_experimental": {"keys": ["config", "algorithm", "coverage_experimental"]},
"report": {"keys": ["config", "algorithm", "report"]},
"coverage_depth_min": {"keys": ["config", "algorithm", "coverage_depth_min"],
"default": 4},
"coverage_depth_max": {"keys": ["config", "algorithm", "coverage_depth_max"],
"default": 10000},
"joint_group_size": {"keys": ["config", "algorithm", "joint_group_size"],
"default": 200},
"coverage_regions": {"keys": ["config", "algorithm", "coverage"]},
"deduped_bam": {"keys": ["deduped_bam"]},
"align_bam": {"keys": ["align_bam"]},
"tools_off": {"keys": ["config", "algorithm", "tools_off"], "default": []},
"tools_on": {"keys": ["config", "algorithm", "tools_on"], "default": []},
}
def get_batches(data):
batches = get_batch(data)
if batches:
if not isinstance(batches, (list, tuple)):
batches = [batches]
return batches
def get_input_sequence_files(data, default=None):
"""
returns the input sequencing files, these can be single or paired FASTQ
files or BAM files
"""
if "files" not in data:
file1, file2 = None, None
elif len(data["files"]) == 2:
file1, file2 = data["files"]
else:
assert len(data["files"]) == 1, data["files"]
file1, file2 = data["files"][0], None
return file1, file2
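# Example of the dispatch above (the sample dicts are hypothetical):
#
#     get_input_sequence_files({"files": ["R1.fq", "R2.fq"]})  # ("R1.fq", "R2.fq")
#     get_input_sequence_files({"files": ["reads.bam"]})       # ("reads.bam", None)
#     get_input_sequence_files({})                             # (None, None)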
def get_dexseq_gff(config, default=None):
"""
some older versions of the genomes have the DEXseq gff file as
gff instead of gff3, so this handles that by looking for either one
"""
dexseq_gff = tz.get_in(tz.get_in(['dexseq_gff', 'keys'], LOOKUPS, {}),
config, None)
if not dexseq_gff:
return None
gtf_file = get_gtf_file(config)
if gtf_file:
base_dir = os.path.dirname(gtf_file)
else:
base_dir = os.path.dirname(dexseq_gff)
base, _ = os.path.splitext(dexseq_gff)
gff_file = os.path.join(base_dir, base + ".gff")
if file_exists(gff_file):
return gff_file
gff3_file = os.path.join(base_dir, base + ".gff3")
if file_exists(gff3_file):
return gff3_file
else:
return None
def getter(keys, global_default=None):
def lookup(config, default=None):
default = global_default if default is None else default
return tz.get_in(keys, config, default)
return lookup
def setter(keys, checker):
def update(config, value):
if checker and not checker(value):
logger.error("%s fails check %s." % (value, checker))
sys.exit(1)
return tz.update_in(config, keys, lambda x: value, default=value)
return update
def is_setter(keys):
def present(config):
try:
value = tz.get_in(keys, config, no_default=True)
except (KeyError, IndexError, TypeError):
value = False
return bool(value)
return present
"""
generate the getter and setter functions but don't override any explicitly
defined
"""
_g = globals()
for k, v in LOOKUPS.items():
keys = v['keys']
getter_fn = 'get_' + k
if getter_fn not in _g:
_g["get_" + k] = getter(keys, v.get('default', None))
setter_fn = 'set_' + k
if setter_fn not in _g:
_g["set_" + k] = setter(keys, v.get('checker', None))
is_setter_fn = "is_set_" + k
if is_setter_fn not in _g:
_g["is_set_" + k] = is_setter(keys)
def sample_data_iterator(samples):
"""
for a list of samples, return the data dictionary of each sample
"""
for sample in samples:
yield sample[0]
|
elkingtonmcb/bcbio-nextgen
|
bcbio/pipeline/datadict.py
|
Python
|
mit
| 8,324
|
[
"Galaxy"
] |
1b5465b0a594cc9bfbe193db4ea17f4fc92119f19db9ce93f4d4d2bbad0a153b
|
#
# Gramps - a GTK+/GNOME based genealogy program
#
# Copyright (C) 2000-2007 Donald N. Allingham
# Copyright (C) 2007-2008 Brian G. Matherly
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
#
#
"""
Display filtered data
"""
from gramps.gen.simple import SimpleAccess, SimpleDoc
from gramps.gui.plug.quick import QuickTable
from gramps.gen.utils.file import media_path_full
from gramps.gui.plug.quick import run_quick_report_by_name_direct
from gramps.gen.lib import Person
from gramps.gen.datehandler import get_date
import os
from collections import defaultdict
from gramps.gen.const import GRAMPS_LOCALE as glocale
_ = glocale.translation.sgettext
ngettext = glocale.translation.ngettext # else "nearby" comments are ignored
fname_map = {'all': _('all', 'Filtering_on'),
'Inverse Person': _('Inverse Person', 'Filtering_on'),
'Inverse Family': _('Inverse Family', 'Filtering_on'),
'Inverse Event': _('Inverse Event', 'Filtering_on'),
'Inverse Place': _('Inverse Place', 'Filtering_on'),
'Inverse Source': _('Inverse Source', 'Filtering_on'),
'Inverse Repository': _('Inverse Repository', 'Filtering_on'),
'Inverse Media': _('Inverse Media', 'Filtering_on'),
'Inverse Note': _('Inverse Note', 'Filtering_on'),
'all people': _('all people', 'Filtering_on'),
'all families': _('all families', 'Filtering_on'),
'all events': _('all events', 'Filtering_on'),
'all places': _('all places', 'Filtering_on'),
'all sources': _('all sources', 'Filtering_on'),
'all repositories': _('all repositories', 'Filtering_on'),
'all media': _('all media', 'Filtering_on'),
'all notes': _('all notes', 'Filtering_on'),
'males': _('males', 'Filtering_on'),
'females': _('females', 'Filtering_on'),
'people with unknown gender':
_('people with unknown gender', 'Filtering_on'),
'incomplete names':
_('incomplete names', 'Filtering_on'),
'people with missing birth dates':
_('people with missing birth dates', 'Filtering_on'),
'disconnected people': _('disconnected people', 'Filtering_on'),
'unique surnames': _('unique surnames', 'Filtering_on'),
'people with media': _('people with media', 'Filtering_on'),
'media references': _('media references', 'Filtering_on'),
'unique media': _('unique media', 'Filtering_on'),
'missing media': _('missing media', 'Filtering_on'),
'media by size': _('media by size', 'Filtering_on'),
'list of people': _('list of people', 'Filtering_on')}
def run(database, document, filter_name, *args, **kwargs):
"""
Loops through the families that the person is a child in, and display
the information about the other children.
"""
# setup the simple access functions
sdb = SimpleAccess(database)
sdoc = SimpleDoc(document)
stab = QuickTable(sdb)
if (filter_name == 'all'):
sdoc.title(_("Summary counts of current selection"))
sdoc.paragraph("")
sdoc.paragraph(_("Right-click row (or press ENTER) to see selected items."))
sdoc.paragraph("")
stab.columns(_("Object"), _("Count/Total"))
# Both proxied ("db") and based ("basedb") databases expose the same
# handle accessors, so the count rows can be built in one loop.
basedb = database.db if hasattr(database, "db") else database.basedb
for title, objclass, handles, base_handles in [
(_("People"), "Person",
database.get_person_handles(), basedb.get_person_handles()),
(_("Families"), "Family",
database.get_family_handles(), basedb.get_family_handles()),
(_("Events"), "Event",
database.get_event_handles(), basedb.get_event_handles()),
(_("Places"), "Place",
database.get_place_handles(), basedb.get_place_handles()),
(_("Sources"), "Source",
database.get_source_handles(), basedb.get_source_handles()),
(_("Repositories"), "Repository",
database.get_repository_handles(), basedb.get_repository_handles()),
(_("Media"), "Media",
database.get_media_handles(), basedb.get_media_handles()),
(_("Notes"), "Note",
database.get_note_handles(), basedb.get_note_handles())]:
stab.row([title, "Filter", objclass],
"%d/%d" % (len(handles), len(base_handles)))
sdoc.paragraph("")
stab.write(sdoc)
return
# display the title
if filter_name in fname_map:
sdoc.title(_("Filtering on %s") % fname_map[filter_name]) # listed above
else:
sdoc.title(_("Filtering on %s") % _(filter_name))
sdoc.paragraph("")
matches = 0
if (filter_name == 'Inverse Person'):
sdb.dbase = database.db
stab.columns(_("Person"), _("Gramps ID"), _("Birth Date"))
proxy_handles = set(database.iter_person_handles())
for person in database.db.iter_people():
if person.handle not in proxy_handles:
stab.row(person, person.gramps_id,
sdb.birth_or_fallback(person))
matches += 1
elif (filter_name == 'Inverse Family'):
sdb.dbase = database.db
stab.columns(_("Family"), _("Gramps ID"))
proxy_handles = set(database.iter_family_handles())
for family in database.db.iter_families():
if family.handle not in proxy_handles:
stab.row(family, family.gramps_id)
matches += 1
elif (filter_name == 'Inverse Event'):
sdb.dbase = database.db
stab.columns(_("Event"), _("Gramps ID"))
proxy_handles = set(database.iter_event_handles())
for event in database.db.iter_events():
if event.handle not in proxy_handles:
stab.row(event, event.gramps_id)
matches += 1
elif (filter_name == 'Inverse Place'):
sdb.dbase = database.db
stab.columns(_("Place"), _("Gramps ID"))
proxy_handles = set(database.iter_place_handles())
for place in database.db.iter_places():
if place.handle not in proxy_handles:
stab.row(place, place.gramps_id)
matches += 1
elif (filter_name == 'Inverse Source'):
sdb.dbase = database.db
stab.columns(_("Source"), _("Gramps ID"))
proxy_handles = set(database.iter_source_handles())
for source in database.db.iter_sources():
if source.handle not in proxy_handles:
stab.row(source, source.gramps_id)
matches += 1
elif (filter_name == 'Inverse Repository'):
sdb.dbase = database.db
stab.columns(_("Repository"), _("Gramps ID"))
proxy_handles = set(database.iter_repository_handles())
for repository in database.db.iter_repositories():
if repository.handle not in proxy_handles:
stab.row(repository, repository.gramps_id)
matches += 1
elif (filter_name == 'Inverse Media'):
sdb.dbase = database.db
stab.columns(_("Media"), _("Gramps ID"))
proxy_handles = set(database.iter_media_handles())
for media in database.db.iter_media():
if media.handle not in proxy_handles:
stab.row(media, media.gramps_id)
matches += 1
elif (filter_name == 'Inverse Note'):
sdb.dbase = database.db
stab.columns(_("Note"), _("Gramps ID"))
proxy_handles = set(database.iter_note_handles())
for note in database.db.iter_notes():
if note.handle not in proxy_handles:
stab.row(note, note.gramps_id)
matches += 1
elif (filter_name in ['all people', 'Person']):
stab.columns(_("Person"), _("Gramps ID"), _("Birth Date"))
for person in database.iter_people():
stab.row(person, person.gramps_id, sdb.birth_or_fallback(person))
matches += 1
elif (filter_name in ['all families', 'Family']):
stab.columns(_("Family"), _("Gramps ID"))
for family in database.iter_families():
stab.row(family, family.gramps_id)
matches += 1
elif (filter_name in ['all events', 'Event']):
stab.columns(_("Event"), _("Gramps ID"))
for obj in database.iter_events():
stab.row(obj, obj.gramps_id)
matches += 1
elif (filter_name in ['all places', 'Place']):
stab.columns(_("Place"), _("Gramps ID"))
for obj in database.iter_places():
stab.row(obj, obj.gramps_id)
matches += 1
elif (filter_name in ['all sources', 'Source']):
stab.columns(_("Source"), _("Gramps ID"))
for obj in database.iter_sources():
stab.row(obj, obj.gramps_id)
matches += 1
elif (filter_name in ['all repositories', 'Repository']):
stab.columns(_("Repository"), _("Gramps ID"))
for obj in database.iter_repositories():
stab.row(obj, obj.gramps_id)
matches += 1
elif (filter_name in ['all media', 'Media']):
stab.columns(_("Media"), _("Gramps ID"))
for obj in database.iter_media():
stab.row(obj, obj.gramps_id)
matches += 1
elif (filter_name in ['all notes', 'Note']):
stab.columns(_("Note"), _("Gramps ID"))
for obj in database.iter_notes():
stab.row(obj, obj.gramps_id)
matches += 1
elif (filter_name == 'males'):
stab.columns(_("Person"), _("Birth Date"), _("Name type"))
for person in database.iter_people():
if person.gender == Person.MALE:
stab.row(person, sdb.birth_or_fallback(person),
str(person.get_primary_name().get_type()))
matches += 1
elif (filter_name == 'females'):
stab.columns(_("Person"), _("Birth Date"), _("Name type"))
for person in database.iter_people():
if person.gender == Person.FEMALE:
stab.row(person, sdb.birth_or_fallback(person),
str(person.get_primary_name().get_type()))
matches += 1
elif (filter_name == 'people with unknown gender'):
stab.columns(_("Person"), _("Birth Date"), _("Name type"))
for person in database.iter_people():
if person.gender not in [Person.FEMALE, Person.MALE]:
stab.row(person, sdb.birth_or_fallback(person),
str(person.get_primary_name().get_type()))
matches += 1
elif (filter_name == 'incomplete names'):
stab.columns(_("Name"), _("Birth Date"), _("Name type"))
for person in database.iter_people():
for name in [person.get_primary_name()] + person.get_alternate_names():
if name.get_first_name().strip() == "":
stab.row([name.get_name(), "Person", person.handle], sdb.birth_or_fallback(person),
str(name.get_type()))
matches += 1
else:
if name.get_surname_list():
for surname in name.get_surname_list():
if surname.get_surname().strip() == "":
stab.row([name.get_first_name(), "Person", person.handle], sdb.birth_or_fallback(person),
str(name.get_type()))
matches += 1
else:
stab.row([name.get_first_name(), "Person", person.handle], sdb.birth_or_fallback(person),
str(name.get_type()))
matches += 1
elif (filter_name == 'people with missing birth dates'):
stab.columns(_("Person"), _("Type"))
for person in database.iter_people():
birth_ref = person.get_birth_ref()
if birth_ref:
birth = database.get_event_from_handle(birth_ref.ref)
if not get_date(birth):
stab.row(person, _("birth event but no date"))
matches += 1
else:
stab.row(person, _("missing birth event"))
matches += 1
elif (filter_name == 'disconnected people'):
stab.columns(_("Person"), _("Birth Date"), _("Name type"))
for person in database.iter_people():
if ((not person.get_main_parents_family_handle()) and
(not len(person.get_family_handle_list()))):
stab.row(person, sdb.birth_or_fallback(person),
str(person.get_primary_name().get_type()))
matches += 1
elif (filter_name == 'unique surnames'):
namelist = defaultdict(int)
for person in database.iter_people():
names = [person.get_primary_name()] + person.get_alternate_names()
surnames = list(set([name.get_group_name() for name in names]))
for surname in surnames:
namelist[surname] += 1
stab.columns(_("Surname"), _("Count"))
for name in sorted(namelist):
stab.row(name, namelist[name])
matches += 1
stab.set_callback("leftdouble",
lambda name: run_quick_report_by_name_direct("samesurnames",
database,
document,
name))
elif (filter_name == 'people with media'):
stab.columns(_("Person"), _("Media count"))
for person in database.iter_people():
length = len(person.get_media_list())
if length > 0:
stab.row(person, str(length))
matches += 1
elif (filter_name == 'media references'):
stab.columns(_("Person"), _("Reference"))
for person in database.iter_people():
medialist = person.get_media_list()
for item in medialist:
stab.row(person, _("media"))
matches += 1
elif (filter_name == 'unique media'):
stab.columns(_("Unique Media"))
for photo in database.iter_media():
fullname = media_path_full(database, photo.get_path())
stab.row(fullname)
matches += 1
elif (filter_name == 'missing media'):
stab.columns(_("Missing Media"))
for photo in database.iter_media():
fullname = media_path_full(database, photo.get_path())
try:
os.path.getsize(fullname)
except OSError:
stab.row(fullname)
matches += 1
elif (filter_name == 'media by size'):
stab.columns(_("Media"), _("Size in bytes"))
for photo in database.iter_media():
fullname = media_path_full(database, photo.get_path())
try:
nbytes = os.path.getsize(fullname)
stab.row(fullname, str(nbytes))
matches += 1
except OSError:
pass
elif (filter_name == 'list of people'):
stab.columns(_("Person"), _("Birth Date"), _("Name type"))
handles = kwargs["handles"]
for person_handle in handles:
person = database.get_person_from_handle(person_handle)
stab.row(person, sdb.birth_or_fallback(person),
str(person.get_primary_name().get_type()))
matches += 1
else:
raise AttributeError("invalid filter name: '%s'" % filter_name)
# translators: leave all/any {...} untranslated
sdoc.paragraph(ngettext("Filter matched {number_of} record.",
"Filter matched {number_of} records.", matches
).format(number_of=matches) )
sdoc.paragraph("")
document.has_data = matches > 0
if matches > 0:
stab.write(sdoc)
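# Illustrative invocations (the database/document objects come from the
# Gramps quick-report framework; the handles are placeholders):
#
#     run(database, document, 'males')            # table of male persons
#     run(database, document, 'list of people', handles=[h1, h2])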
|
Fedik/gramps
|
gramps/plugins/quickview/filterbyname.py
|
Python
|
gpl-2.0
| 18,863
|
[
"Brian"
] |
b82fb92a1d767c5ce3afc03f4779ec38e0e37289fb31698203545f5ed4fceb87
|
# Copyright 2016 Brian Innes
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from os.path import expanduser, isfile, exists, dirname
from os import makedirs
from .coordinates import Coordinate, PolarCoordinate
from math import sqrt, floor
import errno
try:
import configparser
except ImportError:
import ConfigParser as configparser
class PolarConfig:
def __init__(self):
self.resetCommand = -128
self.penUpCommand = -127
self.penDownCommand = 127
self.stepsMaxValue = 126.0
self.configFile = None
self.configured = False
self.penSize = 0
self.machineWidth = 0
self.machineHeight = 0
self.mmPerRev = 0.0
self.stepsPerRev = 0
self.stepMultiplier = 0
self.serialPort = ''
self.timeSliceUS = 0
self.baud = 0
self.motorAccel = 0.0
self.motorMaxSpeed = 0.0
self.penUp = 0
self.penDown = 0
self.homeX = 0
self.homeY = 0
self.polarDraw = False
self.size = ''
self.width = 0
self.height = 0
self.posX = 0
self.posY = 0
self.margin = 0
self.pixels = 0
self.rotate = False
self.screenX = 0
self.showImage = False
self.saveImage = False
self.stepsSizeMM = 0.0
self.stepsPerValue = 0.0
self.MaxSpeedMMs = 0.0
self.AccelerationMMs2 = 0.0
self.pixelsPerMM = 0.0
self.heightPixels = 0.0
self.heightScreen = 0
self.loadConfig()
def __str__(self):
return '\n'.join(
("***** System configuration *****",
"configured = {}".format(self.configured),
"penSize = {}".format(self.penSize),
"machineWidth = {}".format(self.machineWidth),
"machineHeight = {}".format(self.machineHeight),
"mmPerRev = {}".format(self.mmPerRev),
"stepsPerRev = {}".format(self.stepsPerRev),
"stepMultiplier = {}".format(self.stepMultiplier),
"serialPort = {}".format(self.serialPort),
"timeSliceUS = {}".format(self.timeSliceUS),
"baud = {}".format(self.baud),
"motorAccel = {}".format(self.motorAccel),
"motorMaxSpeed = {}".format(self.motorMaxSpeed),
"penUp = {}".format(self.penUp),
"penDown = {}".format(self.penDown),
"homeX = {}".format(self.homeX),
"homeY = {}".format(self.homeY),
"polarDraw = {}".format(self.polarDraw),
"size = {}".format(self.size),
"width = {}".format(self.width),
"height = {}".format(self.height),
"posX = {}".format(self.posX),
"posY = {}".format(self.posY),
"margin = {}".format(self.margin),
"pixelsX = {}".format(self.pixels),
"pixelsY = {}".format(self.heightPixels),
"rotate = {}".format(self.rotate),
"stepsSizeMM = {}".format(self.stepsSizeMM),
"stepsPerValue = {}".format(self.stepsPerValue),
"MaxSpeedMMs = {}".format(self.MaxSpeedMMs),
"AccelerationMMs2 = {}".format(self.AccelerationMMs2),
"screenX = {}".format(self.screenX),
"screenY = {}".format(self.heightScreen),
"showImage = {}".format(self.showImage),
"saveImage = {}".format(self.saveImage)))
def loadConfig(self):
self.configFile = expanduser("~/.vpip/config.cfg")
print("Config is being read from %s\n" % self.configFile)
if isfile(self.configFile):
config = configparser.ConfigParser()
config.read(self.configFile)
self.configured = True
self.penSize = config.getfloat('vpip', 'penSize')
self.machineWidth = config.getint('vpip', 'machineWidth')
self.machineHeight = config.getint('vpip', 'machineHeight')
self.mmPerRev = config.getfloat('vpip', 'mmPerRev')
self.stepsPerRev = config.getint('vpip', 'stepsPerRev')
self.stepMultiplier = config.getint('vpip', 'stepMultiplier')
self.serialPort = config.get('vpip', 'serialPort')
self.timeSliceUS = config.getfloat('vpip', 'timeSliceUS')
self.baud = config.getint('vpip', 'baud')
self.motorAccel = config.getfloat('vpip', 'motorAccel')
self.motorMaxSpeed = config.getfloat('vpip', 'motorMaxSpeed')
self.penUp = config.getint('vpip', 'penUp')
self.penDown = config.getint('vpip', 'penDown')
self.homeX = config.getint('vpip', 'homeX')
self.homeY = config.getint('vpip', 'homeY')
self.polarDraw = config.getboolean('vpip', 'polarDraw')
self.size = config.get('Paper', 'size')
self.width = config.getint('Paper', 'width')
self.height = config.getint('Paper', 'height')
self.posX = config.getint('Paper', 'posX')
self.posY = config.getint('Paper', 'posY')
self.margin = config.getint('Paper', 'margin')
self.pixels = config.getint('Paper', 'pixels')
self.rotate = config.getboolean('Paper', 'rotate')
self.screenX = config.getint('Screen', 'screenX')
self.showImage = config.getboolean('Screen', 'showImage')
self.saveImage = config.getboolean('Screen', 'saveImage')
else:
self.configured = False
self.createDefaultConfig()
self.loadConfig()
if self.rotate:
self.width, self.height = self.height, self.width
self.stepsSizeMM = (1.0 / self.stepsPerRev / self.stepMultiplier) * self.mmPerRev
stepsPerRevolution = self.stepsPerRev * self.stepMultiplier
self.stepsPerValue = self.stepsMaxValue / self.stepMultiplier
self.MaxSpeedMMs = min(self.motorMaxSpeed, (
(self.stepsPerValue / (self.timeSliceUS / 1000000.0)) / stepsPerRevolution) * self.mmPerRev)
self.AccelerationMMs2 = self.MaxSpeedMMs / self.motorAccel
self.pixelsPerMM = float(self.pixels) / (self.width - 2 * self.margin)
self.heightPixels = int(floor(float(self.height - 2 * self.margin) * self.pixelsPerMM))
self.heightScreen = int(floor(float(self.heightPixels) * self.screenX / self.pixels))
def createDefaultConfig(self):
config = configparser.ConfigParser()
config.add_section('vpip')
config.set('vpip', 'penSize', '1.0')
config.set('vpip', 'machineWidth', '1')
config.set('vpip', 'machineHeight', '1')
config.set('vpip', 'mmPerRev', '1.0')
config.set('vpip', 'stepsPerRev', '1')
config.set('vpip', 'stepMultiplier', '1')
config.set('vpip', 'serialPort', 'none')
config.set('vpip', 'timeSliceUS', '1.0')
config.set('vpip', 'baud', '57600')
config.set('vpip', 'motorAccel', '1.0')
config.set('vpip', 'motorMaxSpeed', '1.0')
config.set('vpip', 'penUp', '0')
config.set('vpip', 'penDown', '0')
config.set('vpip', 'homeX', '0')
config.set('vpip', 'homeY', '0')
config.set('vpip', 'polarDraw', 'True')
config.add_section('Paper')
config.set('Paper', 'size', 'custom')
config.set('Paper', 'width', '1')
config.set('Paper', 'height', '1')
config.set('Paper', 'posX', '1')
config.set('Paper', 'posY', '1')
config.set('Paper', 'margin', '1')
config.set('Paper', 'pixels', '1')
config.set('Paper', 'rotate', 'False')
config.add_section('Screen')
config.set('Screen', 'screenX', '1')
config.set('Screen', 'showImage', 'False')
config.set('Screen', 'saveImage', 'False')
if not exists(dirname(self.configFile)):
try:
makedirs(dirname(self.configFile))
except OSError as exc: # Guard against race condition
if exc.errno != errno.EEXIST:
raise
with open(self.configFile, 'w') as configfile:
config.write(configfile)
def writeConfig(self):
if self.configured:
config = configparser.ConfigParser()
config.add_section('vpip')
# configparser requires string values in Python 3, so cast non-strings.
config.set('vpip', 'penSize', str(self.penSize))
config.set('vpip', 'machineWidth', str(self.machineWidth))
config.set('vpip', 'machineHeight', str(self.machineHeight))
config.set('vpip', 'mmPerRev', str(self.mmPerRev))
config.set('vpip', 'stepsPerRev', str(self.stepsPerRev))
config.set('vpip', 'stepMultiplier', str(self.stepMultiplier))
config.set('vpip', 'serialPort', self.serialPort)
config.set('vpip', 'timeSliceUS', str(self.timeSliceUS))
config.set('vpip', 'baud', str(self.baud))
config.set('vpip', 'motorAccel', str(self.motorAccel))
config.set('vpip', 'motorMaxSpeed', str(self.motorMaxSpeed))
config.set('vpip', 'penUp', str(self.penUp))
config.set('vpip', 'penDown', str(self.penDown))
config.set('vpip', 'homeX', str(self.homeX))
config.set('vpip', 'homeY', str(self.homeY))
config.set('vpip', 'polarDraw', str(self.polarDraw))
config.add_section('Paper')
config.set('Paper', 'size', self.size)
config.set('Paper', 'width', str(self.width))
config.set('Paper', 'height', str(self.height))
config.set('Paper', 'posX', str(self.posX))
config.set('Paper', 'posY', str(self.posY))
config.set('Paper', 'margin', str(self.margin))
config.set('Paper', 'pixels', str(self.pixels))  # key must match loadConfig
config.set('Paper', 'rotate', str(self.rotate))
config.add_section('Screen')
config.set('Screen', 'screenX', str(self.screenX))
config.set('Screen', 'showImage', str(self.showImage))
config.set('Screen', 'saveImage', str(self.saveImage))
if not exists(dirname(self.configFile)):
try:
makedirs(dirname(self.configFile))
except OSError as exc: # Guard against race condition
if exc.errno != errno.EEXIST:
raise
with open(self.configFile, 'w') as configfile:
config.write(configfile)
else:
print('ERROR: trying to write configuration when unconfigured!')
def system2drawingCoords(self, coord):
"""
:param coord:
:rtype: Coordinate
"""
mmCoord = coord.translate(self.posX - self.margin, self.posY - self.margin)
return mmCoord * self.pixelsPerMM
def drawing2systemCoords(self, coord):
"""
:param coord:
:rtype: Coordinate
"""
mmCoord = coord.divide(self.pixelsPerMM)
return mmCoord.translate(self.posX + self.margin, self.posY + self.margin)
def drawing2screenCoords(self, coord):
"""
:type coord: Coordinate
:rtype: Coordinate
"""
return coord * self.screenX / self.pixels
def system2polarCoords(self, coord):
"""
:param coord:
:rtype: PolarCoordinate
"""
xdiff = float(self.machineWidth) - coord.x
return PolarCoordinate.fromCoords(sqrt(coord.x * coord.x + coord.y * coord.y),
sqrt(xdiff * xdiff + coord.y * coord.y), coord.penup)
def polar2systemCoords(self, coord):
"""
:param coord:
:rtype: Coordinate
"""
if coord.leftDist + coord.rightDist > self.machineWidth:
# print "polar2systemCoords-{}".format(coord)
x = ((float(coord.leftDist * coord.leftDist) - (coord.rightDist * coord.rightDist) +
float(self.machineWidth * self.machineWidth)) / (2.0 * self.machineWidth))
y = sqrt((coord.leftDist * coord.leftDist) - (x * x))
return Coordinate.fromCoords(x, y, coord.penup)
else:
print("WARN: polar2systemCoords received invalid coordinate {}").format(coord)
return Coordinate.fromCoords(0, 0, coord.penup)
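# Illustrative round trip between the coordinate systems above (machine
# dimensions come from the user's config; the numbers are placeholders).
# For any point strictly below the top beam, polar2systemCoords should
# invert system2polarCoords up to floating-point error:
#
#     cfg = PolarConfig()
#     pc = cfg.system2polarCoords(Coordinate.fromCoords(100.0, 200.0, False))
#     xy = cfg.polar2systemCoords(pc)   # ~ (100.0, 200.0)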
|
brianinnes/vPiP
|
python/vPiP/config.py
|
Python
|
apache-2.0
| 12,670
|
[
"Brian"
] |
694aace60f5cdae85a4247b73451a86b3588176fc41a1ea036e688b0b34879b1
|
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""Pipeline, the top-level Beam object.
A pipeline holds a DAG of data transforms. Conceptually the nodes of the DAG
are transforms (:class:`~apache_beam.transforms.ptransform.PTransform` objects)
and the edges are values (mostly :class:`~apache_beam.pvalue.PCollection`
objects). The transforms take as inputs one or more PValues and output one or
more :class:`~apache_beam.pvalue.PValue` s.
The pipeline offers functionality to traverse the graph. The actual operation
to be executed for each node visited is specified through a runner object.
Typical usage::
# Create a pipeline object using a local runner for execution.
with beam.Pipeline('DirectRunner') as p:
# Add to the pipeline a "Create" transform. When executed this
# transform will produce a PCollection object with the specified values.
pcoll = p | 'Create' >> beam.Create([1, 2, 3])
# Another transform could be applied to pcoll, e.g., writing to a text file.
# For other transforms, refer to transforms/ directory.
pcoll | 'Write' >> beam.io.WriteToText('./output')
# run() will execute the DAG stored in the pipeline. The execution of the
# nodes visited is done using the specified local runner.
"""
# pytype: skip-file
# mypy: disallow-untyped-defs
import abc
import logging
import os
import re
import shutil
import tempfile
import unicodedata
from collections import defaultdict
from typing import TYPE_CHECKING
from typing import Any
from typing import Dict
from typing import FrozenSet
from typing import Iterable
from typing import List
from typing import Optional
from typing import Sequence
from typing import Set
from typing import Tuple
from typing import Type
from typing import Union
from google.protobuf import message
from apache_beam import pvalue
from apache_beam.internal import pickler
from apache_beam.io.filesystems import FileSystems
from apache_beam.options.pipeline_options import CrossLanguageOptions
from apache_beam.options.pipeline_options import DebugOptions
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
from apache_beam.options.pipeline_options import StandardOptions
from apache_beam.options.pipeline_options import TypeOptions
from apache_beam.options.pipeline_options_validator import PipelineOptionsValidator
from apache_beam.portability import common_urns
from apache_beam.runners import PipelineRunner
from apache_beam.runners import create_runner
from apache_beam.transforms import ParDo
from apache_beam.transforms import ptransform
from apache_beam.transforms.display import DisplayData
from apache_beam.transforms.resources import merge_resource_hints
from apache_beam.transforms.resources import resource_hints_from_options
from apache_beam.transforms.sideinputs import get_sideinput_index
from apache_beam.typehints import TypeCheckError
from apache_beam.typehints import typehints
from apache_beam.utils import proto_utils
from apache_beam.utils import subprocess_server
from apache_beam.utils.annotations import deprecated
from apache_beam.utils.interactive_utils import alter_label_if_ipython
if TYPE_CHECKING:
from types import TracebackType
from apache_beam.portability.api import beam_runner_api_pb2
from apache_beam.runners.pipeline_context import PipelineContext
from apache_beam.runners.runner import PipelineResult
from apache_beam.transforms import environments
__all__ = ['Pipeline', 'PTransformOverride']
class Pipeline(object):
"""A pipeline object that manages a DAG of
:class:`~apache_beam.pvalue.PValue` s and their
:class:`~apache_beam.transforms.ptransform.PTransform` s.
Conceptually the :class:`~apache_beam.pvalue.PValue` s are the DAG's nodes and
the :class:`~apache_beam.transforms.ptransform.PTransform` s computing
the :class:`~apache_beam.pvalue.PValue` s are the edges.
All the transforms applied to the pipeline must have distinct full labels.
If the same transform instance needs to be applied again, then the right
shift operator should be used to designate new names
(e.g. ``input | "label" >> my_transform``).
"""
@classmethod
def runner_implemented_transforms(cls):
# type: () -> FrozenSet[str]
# This set should only contain transforms which are required to be
# implemented by a runner.
return frozenset([
common_urns.primitives.GROUP_BY_KEY.urn,
common_urns.primitives.IMPULSE.urn,
])
def __init__(self, runner=None, options=None, argv=None):
# type: (Optional[Union[str, PipelineRunner]], Optional[PipelineOptions], Optional[List[str]]) -> None
"""Initialize a pipeline object.
Args:
runner (~apache_beam.runners.runner.PipelineRunner): An object of
type :class:`~apache_beam.runners.runner.PipelineRunner` that will be
used to execute the pipeline. For registered runners, the runner name
can be specified, otherwise a runner object must be supplied.
options (~apache_beam.options.pipeline_options.PipelineOptions):
A configured
:class:`~apache_beam.options.pipeline_options.PipelineOptions` object
containing arguments that should be used for running the Beam job.
argv (List[str]): a list of arguments (such as :data:`sys.argv`)
to be used for building a
:class:`~apache_beam.options.pipeline_options.PipelineOptions` object.
This will only be used if argument **options** is :data:`None`.
Raises:
ValueError: if either the runner or options argument is not
of the expected type.
"""
# Initializing logging configuration in case the user did not set it up.
logging.basicConfig()
if options is not None:
if isinstance(options, PipelineOptions):
self._options = options
else:
raise ValueError(
'Parameter options, if specified, must be of type PipelineOptions. '
'Received : %r' % options)
elif argv is not None:
if isinstance(argv, list):
self._options = PipelineOptions(argv)
else:
raise ValueError(
'Parameter argv, if specified, must be a list. Received : %r' %
argv)
else:
self._options = PipelineOptions([])
FileSystems.set_options(self._options)
if runner is None:
runner = self._options.view_as(StandardOptions).runner
if runner is None:
runner = StandardOptions.DEFAULT_RUNNER
logging.info((
'Missing pipeline option (runner). Executing pipeline '
'using the default runner: %s.'),
runner)
if isinstance(runner, str):
runner = create_runner(runner)
elif not isinstance(runner, PipelineRunner):
raise TypeError(
'Runner %s is not a PipelineRunner object or the '
'name of a registered runner.' % runner)
# Validate pipeline options
errors = PipelineOptionsValidator(self._options, runner).validate()
if errors:
raise ValueError(
'Pipeline has validation errors: \n' + '\n'.join(errors))
# set default experiments for portable runners
# (needs to occur prior to pipeline construction)
if runner.is_fnapi_compatible():
experiments = (self._options.view_as(DebugOptions).experiments or [])
if not 'beam_fn_api' in experiments:
experiments.append('beam_fn_api')
self._options.view_as(DebugOptions).experiments = experiments
self.local_tempdir = tempfile.mkdtemp(prefix='beam-pipeline-temp')
# Default runner to be used.
self.runner = runner
# Stack of transforms generated by nested apply() calls. The stack will
# contain a root node as an enclosing (parent) node for top transforms.
self.transforms_stack = [AppliedPTransform(None, None, '', None)]
# Set of transform labels (full labels) applied to the pipeline.
# If a transform is applied and the full label is already in the set
# then the transform will have to be cloned with a new label.
self.applied_labels = set() # type: Set[str]
# Hints supplied via pipeline options are considered the outermost hints.
self._root_transform().resource_hints = resource_hints_from_options(options)
# Create a ComponentIdMap for assigning IDs to components. Ensures that any
# components that receive an ID during pipeline construction (for example in
# ExternalTransform), will receive the same component ID when generating the
# full pipeline proto.
self.component_id_map = ComponentIdMap()
# Records whether this pipeline contains any external transforms.
self.contains_external_transforms = False
@property # type: ignore[misc] # decorated property not supported
@deprecated(
since='First stable release',
extra_message='References to <pipeline>.options'
' will not be supported')
def options(self):
# type: () -> PipelineOptions
return self._options
def _current_transform(self):
# type: () -> AppliedPTransform
"""Returns the transform currently on the top of the stack."""
return self.transforms_stack[-1]
def _root_transform(self):
# type: () -> AppliedPTransform
"""Returns the root transform of the transform stack."""
return self.transforms_stack[0]
def _remove_labels_recursively(self, applied_transform):
# type: (AppliedPTransform) -> None
for part in applied_transform.parts:
if part.full_label in self.applied_labels:
self.applied_labels.remove(part.full_label)
self._remove_labels_recursively(part)
def _replace(self, override):
# type: (PTransformOverride) -> None
assert isinstance(override, PTransformOverride)
# From original transform output --> replacement transform output
output_map = {} # type: Dict[pvalue.PValue, pvalue.PValue]
output_replacements = {
} # type: Dict[AppliedPTransform, List[Tuple[pvalue.PValue, Optional[str]]]]
input_replacements = {
} # type: Dict[AppliedPTransform, Sequence[Union[pvalue.PBegin, pvalue.PCollection]]]
side_input_replacements = {
} # type: Dict[AppliedPTransform, List[pvalue.AsSideInput]]
class TransformUpdater(PipelineVisitor): # pylint: disable=used-before-assignment
""""A visitor that replaces the matching PTransforms."""
def __init__(self, pipeline):
# type: (Pipeline) -> None
self.pipeline = pipeline
def _replace_if_needed(self, original_transform_node):
# type: (AppliedPTransform) -> None
if override.matches(original_transform_node):
assert isinstance(original_transform_node, AppliedPTransform)
replacement_transform = (
override.get_replacement_transform_for_applied_ptransform(
original_transform_node))
if replacement_transform is original_transform_node.transform:
return
replacement_transform.side_inputs = tuple(
original_transform_node.transform.side_inputs)
replacement_transform_node = AppliedPTransform(
original_transform_node.parent,
replacement_transform,
original_transform_node.full_label,
original_transform_node.inputs)
replacement_transform_node.resource_hints = (
original_transform_node.resource_hints)
# Transform execution could depend on order in which nodes are
# considered. Hence we insert the replacement transform node to same
# index as the original transform node. Note that this operation
# removes the original transform node.
if original_transform_node.parent:
assert isinstance(original_transform_node.parent, AppliedPTransform)
parent_parts = original_transform_node.parent.parts
parent_parts[parent_parts.index(original_transform_node)] = (
replacement_transform_node)
else:
# Original transform has to be a root.
roots = self.pipeline.transforms_stack[0].parts
assert original_transform_node in roots
roots[roots.index(original_transform_node)] = (
replacement_transform_node)
inputs = override.get_replacement_inputs(original_transform_node)
if len(inputs) > 1:
transform_input = inputs
elif len(inputs) == 1:
transform_input = inputs[0]
elif len(inputs) == 0:
transform_input = pvalue.PBegin(self.pipeline)
try:
# We have to add the new AppliedTransform to the stack before
# expand() and pop it out later to make sure that parts get added
# correctly.
self.pipeline.transforms_stack.append(replacement_transform_node)
# Keeping the same label for the replaced node but recursively
# removing labels of child transforms of original transform since
# they will be replaced during the expand below. This is needed in
# case the replacement contains children that have labels that
# conflicts with labels of the children of the original.
self.pipeline._remove_labels_recursively(original_transform_node)
new_output = replacement_transform.expand(transform_input)
assert isinstance(
new_output, (dict, pvalue.PValue, pvalue.DoOutputsTuple))
if isinstance(new_output, pvalue.PValue):
new_output.element_type = None
self.pipeline._infer_result_type(
replacement_transform, inputs, new_output)
if isinstance(new_output, dict):
for new_tag, new_pcoll in new_output.items():
replacement_transform_node.add_output(new_pcoll, new_tag)
elif isinstance(new_output, pvalue.DoOutputsTuple):
replacement_transform_node.add_output(
new_output, new_output._main_tag)
else:
replacement_transform_node.add_output(new_output, new_output.tag)
# Recording updated outputs. This cannot be done in the same
# visitor since if we dynamically update output type here, we'll
# run into errors when visiting child nodes.
#
# NOTE: When replacing multiple outputs, the replacement
# PCollection tags must have a matching tag in the original
# transform.
if isinstance(new_output, pvalue.PValue):
if not new_output.producer:
new_output.producer = replacement_transform_node
output_map[original_transform_node.outputs[new_output.tag]] = \
new_output
elif isinstance(new_output, (pvalue.DoOutputsTuple, tuple)):
for pcoll in new_output:
if not pcoll.producer:
pcoll.producer = replacement_transform_node
output_map[original_transform_node.outputs[pcoll.tag]] = pcoll
elif isinstance(new_output, dict):
for tag, pcoll in new_output.items():
if not pcoll.producer:
pcoll.producer = replacement_transform_node
output_map[original_transform_node.outputs[tag]] = pcoll
finally:
self.pipeline.transforms_stack.pop()
def enter_composite_transform(self, transform_node):
# type: (AppliedPTransform) -> None
self._replace_if_needed(transform_node)
def visit_transform(self, transform_node):
# type: (AppliedPTransform) -> None
self._replace_if_needed(transform_node)
self.visit(TransformUpdater(self))
# Adjusting inputs and outputs
class InputOutputUpdater(PipelineVisitor): # pylint: disable=used-before-assignment
""""A visitor that records input and output values to be replaced.
Input and output values that should be updated are recorded in maps
input_replacements and output_replacements respectively.
We cannot update input and output values while visiting since that results
in validation errors.
"""
def __init__(self, pipeline):
# type: (Pipeline) -> None
self.pipeline = pipeline
def enter_composite_transform(self, transform_node):
# type: (AppliedPTransform) -> None
self.visit_transform(transform_node)
def visit_transform(self, transform_node):
# type: (AppliedPTransform) -> None
replace_output = False
for tag in transform_node.outputs:
if transform_node.outputs[tag] in output_map:
replace_output = True
break
replace_input = False
for input in transform_node.inputs:
if input in output_map:
replace_input = True
break
replace_side_inputs = False
for side_input in transform_node.side_inputs:
if side_input.pvalue in output_map:
replace_side_inputs = True
break
if replace_output:
output_replacements[transform_node] = []
for original, replacement in output_map.items():
for tag, output in transform_node.outputs.items():
if output == original:
output_replacements[transform_node].append((tag, replacement))
if replace_input:
new_input = [output_map.get(input, input)
for input in transform_node.inputs]
input_replacements[transform_node] = new_input
if replace_side_inputs:
new_side_inputs = []
for side_input in transform_node.side_inputs:
if side_input.pvalue in output_map:
side_input.pvalue = output_map[side_input.pvalue]
new_side_inputs.append(side_input)
side_input_replacements[transform_node] = new_side_inputs
self.visit(InputOutputUpdater(self))
for transform in output_replacements:
for tag, output in output_replacements[transform]:
transform.replace_output(output, tag=tag)
for transform in input_replacements:
transform.replace_inputs(input_replacements[transform])
for transform in side_input_replacements:
transform.replace_side_inputs(side_input_replacements[transform])
def _check_replacement(self, override):
# type: (PTransformOverride) -> None
class ReplacementValidator(PipelineVisitor):
def visit_transform(self, transform_node):
# type: (AppliedPTransform) -> None
if override.matches(transform_node):
raise RuntimeError(
'Transform node %r was not replaced as expected.' %
transform_node)
self.visit(ReplacementValidator())
def replace_all(self, replacements):
# type: (Iterable[PTransformOverride]) -> None
""" Dynamically replaces PTransforms in the currently populated hierarchy.
Currently this only works for replacements where input and output types
are exactly the same.
TODO: Update this to also work for transform overrides where input and
output types are different.
Args:
replacements (List[~apache_beam.pipeline.PTransformOverride]): a list of
:class:`~apache_beam.pipeline.PTransformOverride` objects.
"""
for override in replacements:
assert isinstance(override, PTransformOverride)
self._replace(override)
# Checking if the PTransforms have been successfully replaced. This will
# result in a failure if a PTransform that was replaced in a given override
# gets re-added in a subsequent override. This is not allowed and ordering
# of PTransformOverride objects in 'replacements' is important.
for override in replacements:
self._check_replacement(override)
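# Sketch of a replacement (the override class is hypothetical): subclass
# PTransformOverride, implement matches() and
# get_replacement_transform_for_applied_ptransform(), then:
#
#     p.replace_all([MyWriteOverride()])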
def run(self, test_runner_api='AUTO'):
# type: (Union[bool, str]) -> PipelineResult
"""Runs the pipeline. Returns whatever our runner returns after running."""
# Records whether this pipeline contains any cross-language transforms.
self.contains_external_transforms = (
ExternalTransformFinder.contains_external_transforms(self))
try:
if test_runner_api == 'AUTO':
# Don't pay the cost of a round-trip if we're going to be going through
# the FnApi anyway...
is_fnapi_compatible = self.runner.is_fnapi_compatible() or (
# DirectRunner uses the Fn API for batch only
self.runner.__class__.__name__ == 'SwitchingDirectRunner' and
not self._options.view_as(StandardOptions).streaming)
# Multi-language pipelines that contain external pipeline segments may
# not be able to create a Python pipeline object graph. Hence following
# runner API check should be skipped for such pipelines.
# The InteractiveRunner relies on a constant pipeline reference, skip
# it.
test_runner_api = (
not is_fnapi_compatible and
not self.contains_external_transforms and
self.runner.__class__.__name__ != 'InteractiveRunner')
# When possible, invoke a round trip through the runner API.
if test_runner_api and self._verify_runner_api_compatible():
return Pipeline.from_runner_api(
self.to_runner_api(use_fake_coders=True),
self.runner,
self._options).run(False)
if (self._options.view_as(TypeOptions).runtime_type_check and
self._options.view_as(TypeOptions).performance_runtime_type_check):
raise RuntimeError(
'You cannot turn on runtime_type_check '
'and performance_runtime_type_check simultaneously. '
'Pick one or the other.')
if self._options.view_as(TypeOptions).runtime_type_check:
from apache_beam.typehints import typecheck
self.visit(typecheck.TypeCheckVisitor())
if self._options.view_as(TypeOptions).performance_runtime_type_check:
from apache_beam.typehints import typecheck
self.visit(typecheck.PerformanceTypeCheckVisitor())
if self._options.view_as(SetupOptions).save_main_session:
# If this option is chosen, verify we can pickle the main session early.
tmpdir = tempfile.mkdtemp()
try:
pickler.dump_session(os.path.join(tmpdir, 'main_session.pickle'))
finally:
shutil.rmtree(tmpdir)
return self.runner.run_pipeline(self, self._options)
finally:
shutil.rmtree(self.local_tempdir, ignore_errors=True)
def __enter__(self):
# type: () -> Pipeline
self._extra_context = subprocess_server.JavaJarServer.beam_services(
self._options.view_as(CrossLanguageOptions).beam_services)
self._extra_context.__enter__()
return self
def __exit__(
self,
exc_type, # type: Optional[Type[BaseException]]
exc_val, # type: Optional[BaseException]
exc_tb # type: Optional[TracebackType]
):
# type: (...) -> None
try:
if not exc_type:
self.result = self.run()
self.result.wait_until_finish()
finally:
self._extra_context.__exit__(exc_type, exc_val, exc_tb)
def visit(self, visitor):
# type: (PipelineVisitor) -> None
"""Visits depth-first every node of a pipeline's DAG.
Runner-internal implementation detail; no backwards-compatibility guarantees.
Args:
visitor (~apache_beam.pipeline.PipelineVisitor):
:class:`~apache_beam.pipeline.PipelineVisitor` object whose callbacks
will be called for each node visited. See
:class:`~apache_beam.pipeline.PipelineVisitor` comments.
Raises:
TypeError: if node is specified and is not a
:class:`~apache_beam.pvalue.PValue`.
~apache_beam.error.PipelineError: if node is specified and does not
belong to this pipeline instance.
"""
visited = set() # type: Set[pvalue.PValue]
self._root_transform().visit(visitor, self, visited)
def apply(
self,
transform, # type: ptransform.PTransform
pvalueish=None, # type: Optional[pvalue.PValue]
label=None # type: Optional[str]
):
# type: (...) -> pvalue.PValue
"""Applies a custom transform using the pvalueish specified.
Args:
transform (~apache_beam.transforms.ptransform.PTransform): the
:class:`~apache_beam.transforms.ptransform.PTransform` to apply.
pvalueish (~apache_beam.pvalue.PCollection): the input for the
:class:`~apache_beam.transforms.ptransform.PTransform` (typically a
:class:`~apache_beam.pvalue.PCollection`).
label (str): label of the
:class:`~apache_beam.transforms.ptransform.PTransform`.
Raises:
TypeError: if the transform object extracted from the
argument list is not a
:class:`~apache_beam.transforms.ptransform.PTransform`.
RuntimeError: if the transform object was already applied to
this pipeline and needs to be cloned in order to apply again.
"""
if isinstance(transform, ptransform._NamedPTransform):
return self.apply(
transform.transform, pvalueish, label or transform.label)
if not isinstance(transform, ptransform.PTransform):
raise TypeError("Expected a PTransform object, got %s" % transform)
if label:
# Fix self.label as it is inspected by some PTransform operations
# (e.g. to produce error messages for type hint violations).
try:
old_label, transform.label = transform.label, label
return self.apply(transform, pvalueish)
finally:
transform.label = old_label
# Attempts to alter the label of the transform to be applied only when it's
# a top-level transform so that the cell number will not be prepended to
# every child transform in a composite.
if self._current_transform() is self._root_transform():
alter_label_if_ipython(transform, pvalueish)
full_label = '/'.join(
[self._current_transform().full_label, label or
transform.label]).lstrip('/')
if full_label in self.applied_labels:
raise RuntimeError(
'A transform with label "%s" already exists in the pipeline. '
'To apply a transform with a specified label write '
'pvalue | "label" >> transform' % full_label)
self.applied_labels.add(full_label)
pvalueish, inputs = transform._extract_input_pvalues(pvalueish)
try:
inputs = tuple(inputs)
for leaf_input in inputs:
if not isinstance(leaf_input, pvalue.PValue):
raise TypeError
except TypeError:
raise NotImplementedError(
'Unable to extract PValue inputs from %s; either %s does not accept '
'inputs of this format, or it does not properly override '
'_extract_input_pvalues' % (pvalueish, transform))
current = AppliedPTransform(
self._current_transform(), transform, full_label, inputs)
self._current_transform().add_part(current)
try:
self.transforms_stack.append(current)
type_options = self._options.view_as(TypeOptions)
if type_options.pipeline_type_check:
transform.type_check_inputs(pvalueish)
pvalueish_result = self.runner.apply(transform, pvalueish, self._options)
if type_options is not None and type_options.pipeline_type_check:
transform.type_check_outputs(pvalueish_result)
for tag, result in ptransform.get_named_nested_pvalues(pvalueish_result):
assert isinstance(result, (pvalue.PValue, pvalue.DoOutputsTuple))
# Make sure we set the producer only for a leaf node in the transform
# DAG. This way we preserve the last transform of a composite transform
# as being the real producer of the result.
if result.producer is None:
result.producer = current
self._infer_result_type(transform, inputs, result)
assert isinstance(result.producer.inputs, tuple)
# The DoOutputsTuple adds the PCollection to the outputs when accessed
# except for the main tag. Add the main tag here.
if isinstance(result, pvalue.DoOutputsTuple):
current.add_output(result, result._main_tag)
continue
# If there is already a tag with the same name, increase a counter for
# the name. This can happen, for example, when a composite outputs a
# list of PCollections where all the tags are None.
base = tag
counter = 0
while tag in current.outputs:
counter += 1
tag = '%s_%d' % (base, counter)
current.add_output(result, tag)
if (type_options is not None and
type_options.type_check_strictness == 'ALL_REQUIRED' and
transform.get_type_hints().output_types is None):
ptransform_name = '%s(%s)' % (transform.__class__.__name__, full_label)
raise TypeCheckError(
'Pipeline type checking is enabled, however no '
'output type-hint was found for the '
'PTransform %s' % ptransform_name)
finally:
self.transforms_stack.pop()
return pvalueish_result
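# Usage note (illustrative; 'lines' is a hypothetical PCollection): the
# duplicate-label RuntimeError above is avoided by giving each application
# its own label, as the error message suggests:
#
#   lines | 'CountA' >> beam.combiners.Count.Globally()
#   lines | 'CountB' >> beam.combiners.Count.Globally()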
def _infer_result_type(
self,
transform, # type: ptransform.PTransform
inputs, # type: Sequence[Union[pvalue.PBegin, pvalue.PCollection]]
result_pcollection # type: Union[pvalue.PValue, pvalue.DoOutputsTuple]
):
# type: (...) -> None
# TODO(robertwb): Multi-input inference.
type_options = self._options.view_as(TypeOptions)
if type_options is None or not type_options.pipeline_type_check:
return
if (isinstance(result_pcollection, pvalue.PCollection) and
(not result_pcollection.element_type
# TODO(robertwb): Ideally we'd do intersection here.
or result_pcollection.element_type == typehints.Any)):
# {Single, multi}-input, single-output inference.
input_element_types_tuple = tuple(i.element_type for i in inputs)
input_element_type = (
input_element_types_tuple[0] if len(input_element_types_tuple) == 1
else typehints.Union[input_element_types_tuple])
type_hints = transform.get_type_hints()
declared_output_type = type_hints.simple_output_type(transform.label)
if declared_output_type:
input_types = type_hints.input_types
if input_types and input_types[0]:
declared_input_type = input_types[0][0]
result_pcollection.element_type = typehints.bind_type_variables(
declared_output_type,
typehints.match_type_variables(
declared_input_type, input_element_type))
else:
result_pcollection.element_type = declared_output_type
else:
result_pcollection.element_type = transform.infer_output_type(
input_element_type)
elif isinstance(result_pcollection, pvalue.DoOutputsTuple):
# {Single, multi}-input, multi-output inference.
# TODO(BEAM-4132): Add support for tagged type hints.
# https://github.com/apache/beam/pull/9810#discussion_r338765251
for pcoll in result_pcollection:
if pcoll.element_type is None:
pcoll.element_type = typehints.Any
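# Illustrative sketch of the type-variable binding performed above
# (hypothetical types; the helpers come from apache_beam.typehints):
#
#   T = typehints.TypeVariable('T')
#   bindings = typehints.match_type_variables(
#       typehints.List[T], typehints.List[int])  # {T: int}
#   typehints.bind_type_variables(typehints.Iterable[T], bindings)
#   # -> Iterable[int]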
def __reduce__(self):
# type: () -> Tuple[Type, Tuple[str, ...]]
# Some transforms contain a reference to their enclosing pipeline,
# which in turn reference all other transforms (resulting in quadratic
# time/space to pickle each transform individually). As we don't
# require pickled pipelines to be executable, break the chain here.
return str, ('Pickled pipeline stub.', )
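# Consequence of __reduce__ (illustrative): pickling a Pipeline yields only
# the stub string, so the object graph is intentionally not preserved.
#
#   import pickle
#   assert pickle.loads(pickle.dumps(pipeline)) == 'Pickled pipeline stub.'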
def _verify_runner_api_compatible(self):
# type: () -> bool
if self._options.view_as(TypeOptions).runtime_type_check:
# This option is incompatible with the runner API as it requires
# the runner to inspect non-serialized hints on the transform
# itself.
return False
class Visitor(PipelineVisitor): # pylint: disable=used-before-assignment
ok = True # Really a nonlocal.
def enter_composite_transform(self, transform_node):
# type: (AppliedPTransform) -> None
pass
def visit_transform(self, transform_node):
# type: (AppliedPTransform) -> None
try:
# Transforms must be picklable.
pickler.loads(
pickler.dumps(transform_node.transform, enable_trace=False),
enable_trace=False)
except Exception:
Visitor.ok = False
def visit_value(self, value, _):
# type: (pvalue.PValue, AppliedPTransform) -> None
if isinstance(value, pvalue.PDone):
Visitor.ok = False
self.visit(Visitor())
return Visitor.ok
def to_runner_api(
self,
return_context=False, # type: bool
context=None, # type: Optional[PipelineContext]
use_fake_coders=False, # type: bool
default_environment=None # type: Optional[environments.Environment]
):
# type: (...) -> beam_runner_api_pb2.Pipeline
"""For internal use only; no backwards-compatibility guarantees."""
from apache_beam.runners import pipeline_context
from apache_beam.portability.api import beam_runner_api_pb2
if context is None:
context = pipeline_context.PipelineContext(
use_fake_coders=use_fake_coders,
component_id_map=self.component_id_map,
default_environment=default_environment)
elif default_environment is not None:
raise ValueError(
'Only one of context or default_environment may be specified.')
# The RunnerAPI spec requires certain transforms and side-inputs to have KV
# inputs (and corresponding outputs).
# Currently we only upgrade to KV pairs. If there is a need for more
# general shapes, potential conflicts will have to be resolved.
# We also only handle single-input, and (for fixing the output) single
# output, which is sufficient.
# Also marks such values as requiring deterministic key coders.
deterministic_key_coders = not self._options.view_as(
TypeOptions).allow_non_deterministic_key_coders
class ForceKvInputTypes(PipelineVisitor):
def enter_composite_transform(self, transform_node):
# type: (AppliedPTransform) -> None
self.visit_transform(transform_node)
def visit_transform(self, transform_node):
# type: (AppliedPTransform) -> None
if not transform_node.transform:
return
if transform_node.transform.runner_api_requires_keyed_input():
pcoll = transform_node.inputs[0]
pcoll.element_type = typehints.coerce_to_kv_type(
pcoll.element_type, transform_node.full_label)
pcoll.requires_deterministic_key_coder = (
deterministic_key_coders and transform_node.full_label)
if len(transform_node.outputs) == 1:
# The runner often has expectations about the output types as well.
output, = transform_node.outputs.values()
if not output.element_type:
output.element_type = transform_node.transform.infer_output_type(
pcoll.element_type)
if (isinstance(output.element_type,
typehints.TupleHint.TupleConstraint) and
len(output.element_type.tuple_types) == 2):
output.requires_deterministic_key_coder = (
deterministic_key_coders and transform_node.full_label)
for side_input in transform_node.transform.side_inputs:
if side_input.requires_keyed_input():
side_input.pvalue.element_type = typehints.coerce_to_kv_type(
side_input.pvalue.element_type,
transform_node.full_label,
side_input_producer=side_input.pvalue.producer.full_label)
side_input.pvalue.requires_deterministic_key_coder = (
deterministic_key_coders and transform_node.full_label)
self.visit(ForceKvInputTypes())
# Mutates context; placing inline would force dependence on
# argument evaluation order.
root_transform_id = context.transforms.get_id(self._root_transform())
proto = beam_runner_api_pb2.Pipeline(
root_transform_ids=[root_transform_id],
components=context.to_runner_api(),
requirements=context.requirements())
proto.components.transforms[root_transform_id].unique_name = (
root_transform_id)
if return_context:
return proto, context # type: ignore # too complicated for now
else:
return proto
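# Illustrative round trip through the Runner API, mirroring the check in
# run() above ('runner' and 'options' are hypothetical stand-ins):
#
#   proto = pipeline.to_runner_api()
#   restored = Pipeline.from_runner_api(proto, runner, options)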
@staticmethod
def from_runner_api(
proto, # type: beam_runner_api_pb2.Pipeline
runner, # type: PipelineRunner
options, # type: PipelineOptions
return_context=False, # type: bool
):
# type: (...) -> Pipeline
"""For internal use only; no backwards-compatibility guarantees."""
p = Pipeline(runner=runner, options=options)
from apache_beam.runners import pipeline_context
context = pipeline_context.PipelineContext(
proto.components, requirements=proto.requirements)
if proto.root_transform_ids:
root_transform_id, = proto.root_transform_ids
p.transforms_stack = [context.transforms.get_by_id(root_transform_id)]
else:
p.transforms_stack = [AppliedPTransform(None, None, '', None)]
# TODO(robertwb): These are only needed to continue construction. Omit?
p.applied_labels = {
t.unique_name
for t in proto.components.transforms.values()
}
for id in proto.components.pcollections:
pcollection = context.pcollections.get_by_id(id)
pcollection.pipeline = p
if not pcollection.producer:
raise ValueError('No producer for %s' % id)
# Inject PBegin input where necessary.
from apache_beam.io.iobase import Read
from apache_beam.transforms.core import Create
has_pbegin = [Read, Create]
for id in proto.components.transforms:
transform = context.transforms.get_by_id(id)
if not transform.inputs and transform.transform.__class__ in has_pbegin:
transform.inputs = (pvalue.PBegin(p), )
if return_context:
return p, context # type: ignore # too complicated for now
else:
return p
class PipelineVisitor(object):
"""For internal use only; no backwards-compatibility guarantees.
Visitor pattern class used to traverse a DAG of transforms
(used internally by Pipeline for bookkeeping purposes).
"""
def visit_value(self, value, producer_node):
# type: (pvalue.PValue, AppliedPTransform) -> None
"""Callback for visiting a PValue in the pipeline DAG.
Args:
value: PValue visited (typically a PCollection instance).
producer_node: AppliedPTransform object whose transform produced the
pvalue.
"""
pass
def visit_transform(self, transform_node):
# type: (AppliedPTransform) -> None
"""Callback for visiting a transform leaf node in the pipeline DAG."""
pass
def enter_composite_transform(self, transform_node):
# type: (AppliedPTransform) -> None
"""Callback for entering traversal of a composite transform node."""
pass
def leave_composite_transform(self, transform_node):
# type: (AppliedPTransform) -> None
"""Callback for leaving traversal of a composite transform node."""
pass
class ExternalTransformFinder(PipelineVisitor):
"""Looks for any external transforms in the pipeline and if found records
it.
"""
def __init__(self):
self._contains_external_transforms = False
@staticmethod
def contains_external_transforms(pipeline):
visitor = ExternalTransformFinder()
pipeline.visit(visitor)
return visitor._contains_external_transforms
def _perform_external_transform_test(self, transform):
if not transform:
return
from apache_beam.transforms import ExternalTransform
if isinstance(transform, ExternalTransform):
self._contains_external_transforms = True
def visit_transform(self, transform_node):
# type: (AppliedPTransform) -> None
self._perform_external_transform_test(transform_node.transform)
def enter_composite_transform(self, transform_node):
# type: (AppliedPTransform) -> None
# The Python SDK object graph may represent an external transform that is
# a leaf of the pipeline graph as a composite without sub-transforms.
# Note that this visitor is only used to identify pipelines with external
# transforms; a Runner API pipeline proto generated from the Pipeline
# object will include the external sub-transforms.
self._perform_external_transform_test(transform_node.transform)
class AppliedPTransform(object):
"""For internal use only; no backwards-compatibility guarantees.
A transform node representing an instance of applying a PTransform
(used internally by Pipeline for bookkeeping purposes).
"""
def __init__(
self,
parent, # type: Optional[AppliedPTransform]
transform, # type: Optional[ptransform.PTransform]
full_label, # type: str
inputs, # type: Optional[Sequence[Union[pvalue.PBegin, pvalue.PCollection]]]
environment_id=None, # type: Optional[str]
annotations=None, # type: Optional[Dict[str, bytes]]
):
# type: (...) -> None
self.parent = parent
self.transform = transform
# Note that we want the PipelineVisitor classes to use the full_label,
# inputs, side_inputs, and outputs fields from this instance instead of the
# ones of the PTransform instance associated with it. Doing this permits
# reusing PTransform instances in different contexts (apply() calls) without
# any interference. This is particularly useful for composite transforms.
self.full_label = full_label
self.inputs = inputs or ()
self.side_inputs = tuple() if transform is None else transform.side_inputs
self.outputs = {} # type: Dict[Union[str, int, None], pvalue.PValue]
self.parts = [] # type: List[AppliedPTransform]
self.environment_id = environment_id if environment_id else None # type: Optional[str]
# We may need to merge the hints with environment-provided hints here
# once environment is a first-class citizen in Beam graph and we have
# access to actual environment, not just an id.
self.resource_hints = dict(
transform.get_resource_hints()) if transform else {
} # type: Dict[str, bytes]
if annotations is None and transform:
def annotation_to_bytes(key, a: Any) -> bytes:
if isinstance(a, bytes):
return a
elif isinstance(a, str):
return a.encode('ascii')
elif isinstance(a, message.Message):
return a.SerializeToString()
else:
raise TypeError(
'Unknown annotation type %r (type %s) for %s' % (a, type(a), key))
annotations = {
key: annotation_to_bytes(key, a)
for key,
a in transform.annotations().items()
}
self.annotations = annotations
def __repr__(self):
# type: () -> str
return "%s(%s, %s)" % (
self.__class__.__name__, self.full_label, type(self.transform).__name__)
def replace_output(
self,
output, # type: Union[pvalue.PValue, pvalue.DoOutputsTuple]
tag=None # type: Union[str, int, None]
):
# type: (...) -> None
"""Replaces the output defined by the given tag with the given output.
Args:
output: replacement output
tag: tag of the output to be replaced.
"""
if isinstance(output, pvalue.DoOutputsTuple):
self.replace_output(output[output._main_tag])
elif isinstance(output, pvalue.PValue):
self.outputs[tag] = output
elif isinstance(output, dict):
for output_tag, out in output.items():
self.outputs[output_tag] = out
else:
raise TypeError("Unexpected output type: %s" % output)
# Importing locally to prevent circular dependency issues.
from apache_beam.transforms import external
if isinstance(self.transform, external.ExternalTransform):
self.transform.replace_named_outputs(self.named_outputs())
def replace_inputs(self, inputs):
self.inputs = inputs
# Importing locally to prevent circular dependency issues.
from apache_beam.transforms import external
if isinstance(self.transform, external.ExternalTransform):
self.transform.replace_named_inputs(self.named_inputs())
def replace_side_inputs(self, side_inputs):
self.side_inputs = side_inputs
# Importing locally to prevent circular dependency issues.
from apache_beam.transforms import external
if isinstance(self.transform, external.ExternalTransform):
self.transform.replace_named_inputs(self.named_inputs())
def add_output(
self,
output, # type: Union[pvalue.DoOutputsTuple, pvalue.PValue]
tag # type: Union[str, int, None]
):
# type: (...) -> None
if isinstance(output, pvalue.DoOutputsTuple):
self.add_output(output[tag], tag)
elif isinstance(output, pvalue.PValue):
assert tag not in self.outputs
self.outputs[tag] = output
else:
raise TypeError("Unexpected output type: %s" % output)
def add_part(self, part):
# type: (AppliedPTransform) -> None
assert isinstance(part, AppliedPTransform)
part._merge_outer_resource_hints()
self.parts.append(part)
def is_composite(self):
# type: () -> bool
"""Returns whether this is a composite transform.
A composite transform has parts (inner transforms) or isn't the
producer for any of its outputs. (An example of a transform that
is not a producer is one that returns its inputs instead.)
"""
return bool(self.parts) or all(
pval.producer is not self for pval in self.outputs.values())
def visit(
self,
visitor, # type: PipelineVisitor
pipeline, # type: Pipeline
visited # type: Set[pvalue.PValue]
):
# type: (...) -> None
"""Visits all nodes reachable from the current node."""
for in_pval in self.inputs:
if in_pval not in visited and not isinstance(in_pval, pvalue.PBegin):
if in_pval.producer is not None:
in_pval.producer.visit(visitor, pipeline, visited)
# The value should be visited now since we visit outputs too.
assert in_pval in visited, in_pval
# Visit side inputs.
for side_input in self.side_inputs:
if isinstance(side_input, pvalue.AsSideInput) \
and side_input.pvalue not in visited:
pval = side_input.pvalue # Unpack marker-object-wrapped pvalue.
if pval.producer is not None:
pval.producer.visit(visitor, pipeline, visited)
# The value should be visited now since we visit outputs too.
assert pval in visited
# TODO(silviuc): Is there a way to signal that we are visiting a side
# value? The issue is that the same PValue can be reachable through
# multiple paths and therefore it is not guaranteed that the value
# will be visited as a side value.
# Visit a composite or primitive transform.
if self.is_composite():
visitor.enter_composite_transform(self)
for part in self.parts:
part.visit(visitor, pipeline, visited)
visitor.leave_composite_transform(self)
else:
visitor.visit_transform(self)
# Visit the outputs (one or more). It is essential to mark as visited the
# tagged PCollections of the DoOutputsTuple object. A tagged PCollection is
# connected directly with its producer (a multi-output ParDo), but the
# output of such a transform is the containing DoOutputsTuple, not the
# PCollection inside it. Without the code below a tagged PCollection will
# not be marked as visited while visiting its producer.
for out_pval in self.outputs.values():
if isinstance(out_pval, pvalue.DoOutputsTuple):
pvals = (v for v in out_pval)
else:
pvals = (out_pval, )
for v in pvals:
if v not in visited:
visited.add(v)
visitor.visit_value(v, self)
def named_inputs(self):
# type: () -> Dict[str, pvalue.PValue]
# TODO(BEAM-1833): Push names up into the sdk construction.
if self.transform is None:
assert not self.inputs and not self.side_inputs
return {}
else:
return self.transform._named_inputs(self.inputs, self.side_inputs)
def named_outputs(self):
# type: () -> Dict[str, pvalue.PCollection]
if self.transform is None:
assert not self.outputs
return {}
else:
return self.transform._named_outputs(self.outputs)
def to_runner_api(self, context):
# type: (PipelineContext) -> beam_runner_api_pb2.PTransform
# External transforms require more splicing than just setting the spec.
from apache_beam.transforms import external
if isinstance(self.transform, external.ExternalTransform):
# TODO(BEAM-12082): Support resource hints in XLang transforms.
# In particular, make sure hints on composites are properly propagated.
return self.transform.to_runner_api_transform(context, self.full_label)
from apache_beam.portability.api import beam_runner_api_pb2
def transform_to_runner_api(
transform, # type: Optional[ptransform.PTransform]
context # type: PipelineContext
):
# type: (...) -> Optional[beam_runner_api_pb2.FunctionSpec]
if transform is None:
return None
else:
# We only populate inputs information to ParDo in order to expose
# key_coder and window_coder to stateful DoFn.
if isinstance(transform, ParDo):
return transform.to_runner_api(
context,
has_parts=bool(self.parts),
named_inputs=self.named_inputs())
return transform.to_runner_api(context, has_parts=bool(self.parts))
# Iterate over inputs and outputs by sorted key order, so that ids are
# consistently generated for multiple runs of the same pipeline.
transform_spec = transform_to_runner_api(self.transform, context)
environment_id = self.environment_id
transform_urn = transform_spec.urn if transform_spec else None
if (not environment_id and
(transform_urn not in Pipeline.runner_implemented_transforms())):
environment_id = context.get_environment_id_for_resource_hints(
self.resource_hints)
return beam_runner_api_pb2.PTransform(
unique_name=self.full_label,
spec=transform_spec,
subtransforms=[
context.transforms.get_id(part, label=part.full_label)
for part in self.parts
],
inputs={
tag: context.pcollections.get_id(pc)
for tag,
pc in sorted(self.named_inputs().items())
},
outputs={
tag: context.pcollections.get_id(out)
for tag,
out in sorted(self.named_outputs().items())
},
environment_id=environment_id,
annotations=self.annotations,
# TODO(BEAM-366): Add display_data.
display_data=DisplayData.create_from(self.transform).to_proto()
if self.transform else None)
@staticmethod
def from_runner_api(
proto, # type: beam_runner_api_pb2.PTransform
context # type: PipelineContext
):
# type: (...) -> AppliedPTransform
if common_urns.primitives.PAR_DO.urn == proto.spec.urn:
# Preserving side input tags.
from apache_beam.portability.api import beam_runner_api_pb2
pardo_payload = (
proto_utils.parse_Bytes(
proto.spec.payload, beam_runner_api_pb2.ParDoPayload))
side_input_tags = list(pardo_payload.side_inputs.keys())
else:
pardo_payload = None
side_input_tags = []
main_inputs = [
context.pcollections.get_by_id(id) for tag,
id in proto.inputs.items() if tag not in side_input_tags
]
transform = ptransform.PTransform.from_runner_api(proto, context)
if transform and proto.environment_id:
resource_hints = context.environments.get_by_id(
proto.environment_id).resource_hints()
if resource_hints:
transform._resource_hints = dict(resource_hints)
# Ordering is important here.
# TODO(BEAM-9635): use key, value pairs instead of depending on tags with
# index as a suffix.
indexed_side_inputs = [
(get_sideinput_index(tag), context.pcollections.get_by_id(id)) for tag,
id in proto.inputs.items() if tag in side_input_tags
]
side_inputs = [si for _, si in sorted(indexed_side_inputs)]
result = AppliedPTransform(
parent=None,
transform=transform,
full_label=proto.unique_name,
inputs=main_inputs,
environment_id=None,
annotations=proto.annotations)
if result.transform and result.transform.side_inputs:
for si, pcoll in zip(result.transform.side_inputs, side_inputs):
si.pvalue = pcoll
result.side_inputs = tuple(result.transform.side_inputs)
result.parts = []
for transform_id in proto.subtransforms:
part = context.transforms.get_by_id(transform_id)
part.parent = result
result.add_part(part)
result.outputs = {
None if tag == 'None' else tag: context.pcollections.get_by_id(id)
for tag,
id in proto.outputs.items()
}
# This annotation is expected by some runners.
if proto.spec.urn == common_urns.primitives.PAR_DO.urn:
result.transform.output_tags = set(proto.outputs.keys()).difference(
{'None'})
if not result.parts:
for tag, pcoll_id in proto.outputs.items():
if pcoll_id not in proto.inputs.values():
pc = context.pcollections.get_by_id(pcoll_id)
pc.producer = result
pc.tag = None if tag == 'None' else tag
return result
def _merge_outer_resource_hints(self):
if (self.parent is not None and self.parent.resource_hints):
self.resource_hints = merge_resource_hints(
outer_hints=self.parent.resource_hints,
inner_hints=self.resource_hints)
if self.resource_hints:
for part in self.parts:
part._merge_outer_resource_hints()
class PTransformOverride(metaclass=abc.ABCMeta):
"""For internal use only; no backwards-compatibility guarantees.
Gives a matcher and replacements for matching PTransforms.
TODO: Update this to support cases where input and/or output types are
different.
"""
@abc.abstractmethod
def matches(self, applied_ptransform):
# type: (AppliedPTransform) -> bool
"""Determines whether the given AppliedPTransform matches.
Note that the matching will happen *after* Runner API proto translation.
If matching is done via type checks, to/from_runner_api[_parameter] methods
must be implemented to preserve the type (and other data) through proto
serialization.
Consider URN-based translation instead.
Args:
applied_ptransform: AppliedPTransform to be matched.
Returns:
a bool indicating whether the given AppliedPTransform is a match.
"""
raise NotImplementedError
def get_replacement_transform_for_applied_ptransform(
self, applied_ptransform):
# type: (AppliedPTransform) -> ptransform.PTransform
"""Provides a runner specific override for a given `AppliedPTransform`.
Args:
applied_ptransform: `AppliedPTransform` containing the `PTransform` to be
replaced.
Returns:
A `PTransform` that will be the replacement for the `PTransform` inside
the `AppliedPTransform` given as an argument.
"""
# Returns a PTransformReplacement
return self.get_replacement_transform(applied_ptransform.transform)
@deprecated(
since='2.24', current='get_replacement_transform_for_applied_ptransform')
def get_replacement_transform(self, ptransform):
# type: (Optional[ptransform.PTransform]) -> ptransform.PTransform
"""Provides a runner specific override for a given PTransform.
Args:
ptransform: PTransform to be replaced.
Returns:
A PTransform that will be the replacement for the PTransform given as an
argument.
"""
# Returns a PTransformReplacement
raise NotImplementedError
def get_replacement_inputs(self, applied_ptransform):
# type: (AppliedPTransform) -> Iterable[pvalue.PValue]
"""Provides inputs that will be passed to the replacement PTransform.
Args:
applied_ptransform: Original AppliedPTransform containing the PTransform
to be replaced.
Returns:
An iterable of PValues that will be passed to the expand() method of the
replacement PTransform.
"""
return tuple(applied_ptransform.inputs) + tuple(
side_input.pvalue for side_input in applied_ptransform.side_inputs)
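# Illustrative sketch (hypothetical transform types): a PTransformOverride
# that matches by type check and swaps in a replacement, as described in
# the docstrings above.
#
#   class _MyOverride(PTransformOverride):
#     def matches(self, applied_ptransform):
#       return isinstance(applied_ptransform.transform, _SlowTransform)
#
#     def get_replacement_transform_for_applied_ptransform(
#         self, applied_ptransform):
#       return _FastTransform()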
class ComponentIdMap(object):
"""A utility for assigning unique component ids to Beam components.
Component ID assignments are only guaranteed to be unique and consistent
within the scope of a ComponentIdMap instance.
"""
def __init__(self, namespace="ref"):
self.namespace = namespace
self._counters = defaultdict(lambda: 0) # type: Dict[type, int]
self._obj_to_id = {} # type: Dict[Any, str]
def get_or_assign(self, obj=None, obj_type=None, label=None):
if obj not in self._obj_to_id:
self._obj_to_id[obj] = self._unique_ref(obj, obj_type, label)
return self._obj_to_id[obj]
def _normalize(self, str_value):
str_value = unicodedata.normalize('NFC', str_value)
return re.sub(r'[^a-zA-Z0-9-_]+', '-', str_value)
def _unique_ref(self, obj=None, obj_type=None, label=None):
# Normalize, trim, and uniqify.
prefix = self._normalize(
'%s_%s_%s' %
(self.namespace, obj_type.__name__, label or type(obj).__name__))[0:100]
self._counters[obj_type] += 1
return '%s_%d' % (prefix, self._counters[obj_type])
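# Illustrative use of ComponentIdMap (hypothetical object): repeated calls
# with the same object return the same id; new objects get fresh suffixes.
#
#   id_map = ComponentIdMap()
#   obj = object()
#   ref = id_map.get_or_assign(obj, obj_type=type(obj), label='my_label')
#   assert ref == id_map.get_or_assign(obj, obj_type=type(obj),
#                                      label='my_label')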
|
axbaretto/beam
|
sdks/python/apache_beam/pipeline.py
|
Python
|
apache-2.0
| 58,770
|
[
"VisIt"
] |
a8d02f32367ffd999d8b6e5b4646e1e307f030786b4b84a1a9d0624e631eb9e5
|
#!/usr/bin/env python
#===============================================================================
# Copyright 2017 Geoscience Australia
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#===============================================================================
"""
Unit tests for the geophys_utils._crs_utils module
Created on 15/11/2016
@author: Alex Ip
"""
import unittest
import numpy as np
import re
from osgeo.osr import CoordinateTransformation
from geophys_utils._crs_utils import get_coordinate_transformation, get_utm_wkt, transform_coords
class TestCRSUtils(unittest.TestCase):
"""Unit tests for geophys_utils._crs_utils module."""
EPSG4326_WKT = "GEOGCS[\"WGS 84\",DATUM[\"WGS_1984\",SPHEROID[\"WGS 84\",6378137,298.257223563,AUTHORITY[\"EPSG\",\"7030\"]],AUTHORITY[\"EPSG\",\"6326\"]],PRIMEM[\"Greenwich\",0],UNIT[\"degree\",0.0174532925199433],AUTHORITY[\"EPSG\",\"4326\"]]"
EPSG4326_EPSG = 'EPSG:4326'
EPSG3577_WKT = "PROJCS[\"GDA94 / Australian Albers\",GEOGCS[\"GDA94\",DATUM[\"Geocentric_Datum_of_Australia_1994\",SPHEROID[\"GRS 1980\",6378137,298.257222101,AUTHORITY[\"EPSG\",\"7019\"]],TOWGS84[0,0,0,0,0,0,0],AUTHORITY[\"EPSG\",\"6283\"]],PRIMEM[\"Greenwich\",0,AUTHORITY[\"EPSG\",\"8901\"]],UNIT[\"degree\",0.01745329251994328,AUTHORITY[\"EPSG\",\"9122\"]],AUTHORITY[\"EPSG\",\"4283\"]],UNIT[\"metre\",1,AUTHORITY[\"EPSG\",\"9001\"]],PROJECTION[\"Albers_Conic_Equal_Area\"],PARAMETER[\"standard_parallel_1\",-18],PARAMETER[\"standard_parallel_2\",-36],PARAMETER[\"latitude_of_center\",0],PARAMETER[\"longitude_of_center\",132],PARAMETER[\"false_easting\",0],PARAMETER[\"false_northing\",0],AUTHORITY[\"EPSG\",\"3577\"],AXIS[\"Easting\",EAST],AXIS[\"Northing\",NORTH]]"
EPSG3577_EPSG = 'EPSG:3577'
UTM_WKT = 'PROJCS["UTM Zone 55,Southern Hemisphere",GEOGCS["WGS 84",DATUM["WGS_1984",SPHEROID["WGS 84",6378137,298.257223563,AUTHORITY["EPSG","7030"]],AUTHORITY["EPSG","6326"]],PRIMEM["Greenwich",0,AUTHORITY["EPSG","8901"]],UNIT["degree",0.0174532925199433,AUTHORITY["EPSG","9122"]],AUTHORITY["EPSG","4326"]],PROJECTION["Transverse_Mercator"],PARAMETER["latitude_of_origin",0],PARAMETER["central_meridian",147],PARAMETER["scale_factor",0.9996],PARAMETER["false_easting",500000],PARAMETER["false_northing",10000000],UNIT["Meter",1]]'
EPSG4326_COORDS = [149.160, -35.306]
#EPSG4326_COORD_ARRAY = np.array([[149.160, -35.306], [150, -35]])
EPSG4326_COORD_ARRAY = [[149.160, -35.306], [150, -35]]
UTM_COORDS = (696382.5632171195, 6090881.858493287) #(696382.5632178178, 6090881.858493158)
UTM_COORD_ARRAY = [(696382.5632171195, 6090881.858493287), (773798.0963396085, 6122843.308355326)]
def test_get_coordinate_transformation(self):
print('Testing get_coordinate_transformation function')
coordinate_transformation = get_coordinate_transformation(TestCRSUtils.EPSG4326_WKT,
TestCRSUtils.EPSG3577_WKT)
assert coordinate_transformation is not None
assert type(coordinate_transformation) == CoordinateTransformation
coordinate_transformation = get_coordinate_transformation(TestCRSUtils.EPSG4326_EPSG,
TestCRSUtils.EPSG3577_EPSG)
assert coordinate_transformation is not None
assert type(coordinate_transformation) == CoordinateTransformation
coordinate_transformation = get_coordinate_transformation(TestCRSUtils.EPSG4326_WKT,
TestCRSUtils.EPSG4326_WKT)
assert coordinate_transformation is None, 'Null transformation should return None'
def test_get_utm_wkt(self):
print('Testing get_utm_wkt function')
utm_wkt = get_utm_wkt(TestCRSUtils.EPSG4326_COORDS,
TestCRSUtils.EPSG4326_EPSG)
utm_wkt = re.sub(r',\s+', ',', re.sub(r'\s+', ' ', utm_wkt))
expected_wkt = re.sub(r',\s+', ',', re.sub(r'\s+', ' ', TestCRSUtils.UTM_WKT))
assert utm_wkt == expected_wkt, 'Incorrect UTM CRS: {} instead of {}'.format(utm_wkt, expected_wkt)
def test_transform_coords(self):
print('Testing transform_coords function with single coordinate {}'.format(TestCRSUtils.EPSG4326_COORDS))
utm_coords = transform_coords(TestCRSUtils.EPSG4326_COORDS, TestCRSUtils.EPSG4326_WKT, TestCRSUtils.UTM_WKT)
assert (utm_coords == np.array(TestCRSUtils.UTM_COORDS)).all(), 'Incorrect UTM coordinates: {} instead of {}'.format(utm_coords, TestCRSUtils.UTM_COORDS)
print('Testing transform_coords function with multi coordinate {}'.format(TestCRSUtils.EPSG4326_COORD_ARRAY))
utm_coord_array = transform_coords(TestCRSUtils.EPSG4326_COORD_ARRAY, TestCRSUtils.EPSG4326_WKT, TestCRSUtils.UTM_WKT)
assert (utm_coord_array == np.array(TestCRSUtils.UTM_COORD_ARRAY)).all(), 'Incorrect UTM coordinates: {} instead of {}'.format(utm_coord_array, TestCRSUtils.UTM_COORD_ARRAY)
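# Illustrative direct use of transform_coords outside the test methods
# (same constants as above; np.allclose is a safer comparison than the
# exact equality the asserts rely on):
#
#   coords = transform_coords(TestCRSUtils.EPSG4326_COORDS,
#                             TestCRSUtils.EPSG4326_WKT,
#                             TestCRSUtils.UTM_WKT)
#   assert np.allclose(coords, TestCRSUtils.UTM_COORDS)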
# Define test suites
def test_suite():
"""Returns a test suite of all the tests in this module."""
test_classes = [TestCRSUtils]
suite_list = map(unittest.defaultTestLoader.loadTestsFromTestCase,
test_classes)
suite = unittest.TestSuite(suite_list)
return suite
# Define main function
def main():
unittest.TextTestRunner(verbosity=2).run(test_suite())
if __name__ == '__main__':
main()
|
alex-ip/geophys_utils
|
geophys_utils/test/test_crs_utils.py
|
Python
|
apache-2.0
| 6,019
|
[
"NetCDF"
] |
b2f15e7726cd21b605ff29ea11799a3349d9e325091210f8d933add369813194
|
# This file is part of Androguard.
#
# Copyright (c) 2012 Geoffroy Gueguen <geoffroy.gueguen@gmail.com>
# All Rights Reserved.
#
# Androguard is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Androguard is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Androguard. If not, see <http://www.gnu.org/licenses/>.
import logging
from androguard.decompiler.dad.util import get_type, ACCESS_FLAGS_METHODS
from androguard.decompiler.dad.opcode_ins import Op
from androguard.decompiler.dad.instruction import (Constant, ThisParam,
BinaryExpression,
BinaryCompExpression)
logger = logging.getLogger('dad.writer')
class Writer(object):
def __init__(self, graph, method):
self.graph = graph
self.method = method
self.visited_nodes = set()
self.ind = 4
self.buffer = []
self.loop_follow = [None]
self.latch_node = [None]
self.if_follow = [None]
self.switch_follow = [None]
self.next_case = None
self.skip = False
self.need_break = True
def __str__(self):
return ''.join(self.buffer)
def inc_ind(self, i=1):
self.ind += (4 * i)
def dec_ind(self, i=1):
self.ind -= (4 * i)
def space(self):
if self.skip:
self.skip = False
return ''
return ' ' * self.ind
def write_ind(self):
if self.skip:
self.skip = False
else:
self.write(self.space())
def write(self, s):
self.buffer.append(s)
def end_ins(self):
self.write(';\n')
def write_ind_visit_end(self, lhs, s, rhs=None):
self.write_ind()
lhs.visit(self)
self.write(s)
if rhs is not None:
rhs.visit(self)
self.end_ins()
def write_inplace_if_possible(self, lhs, rhs):
if isinstance(rhs, BinaryExpression) and lhs == rhs.var_map[rhs.arg1]:
exp_rhs = rhs.var_map[rhs.arg2]
if rhs.op in '+-' and isinstance(exp_rhs, Constant) and\
exp_rhs.get_int_value() == 1:
return self.write_ind_visit_end(lhs, rhs.op * 2)
return self.write_ind_visit_end(lhs, ' %s= ' % rhs.op, exp_rhs)
return self.write_ind_visit_end(lhs, ' = ', rhs)
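# Illustrative effect of write_inplace_if_possible (hypothetical emitted
# source): 'v0 = v0 + 1' is written as 'v0++', 'v0 = v0 + v1' as
# 'v0 += v1', and anything else falls back to the plain 'lhs = rhs' form.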
def visit_ins(self, ins):
ins.visit(self)
def write_method(self):
acc = []
access = self.method.access
self.constructor = 'constructor' in access
for modifier in self.method.access:
if modifier == 'constructor':
continue
acc.append(modifier)
if self.constructor:
name = get_type(self.method.cls_name).split('.')[-1]
proto = '%s %s(' % (' '.join(acc), name)
else:
name = self.method.name
proto = '%s %s %s(' % (' '.join(acc), self.method.type, name)
self.write('%s%s' % (self.space(), proto))
params = self.method.lparams
if 'static' not in self.method.access:
params = params[1:]
proto = ''
if self.method.params_type:
proto = ', '.join(['%s p%s' % (get_type(p_type), param) for
p_type, param in zip(self.method.params_type, params)])
self.write('%s)' % proto)
if self.graph is None:
return self.write(';')
self.write('\n%s{\n' % self.space())
self.inc_ind()
# for v, var in self.method.var_to_name.iteritems():
# var.visit_decl(self)
self.visit_node(self.graph.get_entry())
self.dec_ind()
self.write('%s}\n' % self.space())
def visit_node(self, node):
if node in (self.if_follow[-1], self.switch_follow[-1],
self.loop_follow[-1], self.latch_node[-1]):
return
if node in self.visited_nodes:
return
self.visited_nodes.add(node)
node.visit(self)
def visit_loop_node(self, loop):
follow = loop.get_loop_follow()
if follow is None and not loop.looptype.endless():
logger.error('Loop has no follow!')
if loop.looptype.pretest():
if loop.true is follow:
loop.neg()
loop.true, loop.false = loop.false, loop.true
self.write('%swhile (' % self.space())
loop.visit_cond(self)
self.write(') {\n')
elif loop.looptype.posttest():
self.write('%sdo {\n' % self.space())
self.latch_node.append(loop.latch)
elif loop.looptype.endless():
self.write('%swhile(true) {\n' % self.space())
self.inc_ind()
self.loop_follow.append(follow)
if loop.looptype.pretest():
self.visit_node(loop.true)
else:
self.visit_node(loop.cond)
self.loop_follow.pop()
self.dec_ind()
if loop.looptype.pretest():
self.write('%s}\n' % self.space())
elif loop.looptype.posttest():
self.latch_node.pop()
self.write('%s} while(' % self.space())
loop.latch.visit_cond(self)
self.write(');\n')
else:
self.inc_ind()
self.visit_node(loop.latch)
self.dec_ind()
self.write('%s}\n' % self.space())
if follow is not None:
self.visit_node(follow)
def visit_cond_node(self, cond):
follow = cond.get_if_follow()
if cond.false is self.loop_follow[-1]:
cond.neg()
cond.true, cond.false = cond.false, cond.true
if self.loop_follow[-1] in (cond.true, cond.false):
self.write('%sif (' % self.space())
cond.visit_cond(self)
self.write(') {\n')
self.inc_ind()
self.write('%sbreak;\n' % self.space())
self.dec_ind()
self.write('%s}\n' % self.space())
self.visit_node(cond.false)
elif follow is not None:
is_else = follow not in (cond.true, cond.false)
if cond.true in (follow, self.next_case) or\
cond.num > cond.true.num:
cond.neg()
cond.true, cond.false = cond.false, cond.true
self.if_follow.append(follow)
if cond.true not in self.visited_nodes:
self.write('%sif (' % self.space())
cond.visit_cond(self)
self.write(') {\n')
self.inc_ind()
self.visit_node(cond.true)
self.dec_ind()
if is_else and cond.false not in self.visited_nodes:
self.write('%s} else {\n' % self.space())
self.inc_ind()
self.visit_node(cond.false)
self.dec_ind()
self.if_follow.pop()
self.write('%s}\n' % self.space())
self.visit_node(follow)
else:
self.write('%sif (' % self.space())
cond.visit_cond(self)
self.write(') {\n')
self.inc_ind()
self.visit_node(cond.true)
self.dec_ind()
self.write('%s} else {\n' % self.space())
self.inc_ind()
self.visit_node(cond.false)
self.dec_ind()
self.write('%s}\n' % self.space())
def visit_short_circuit_condition(self, nnot, aand, cond1, cond2):
if nnot:
cond1.neg()
self.write('(')
cond1.visit_cond(self)
self.write(') %s (' % ['||', '&&'][aand])
cond2.visit_cond(self)
self.write(')')
def visit_switch_node(self, switch):
lins = switch.get_ins()
for ins in lins[:-1]:
self.visit_ins(ins)
switch_ins = switch.get_ins()[-1]
self.write('%sswitch (' % self.space())
self.visit_ins(switch_ins)
self.write(') {\n')
follow = switch.get_switch_follow()
cases = switch.cases
self.switch_follow.append(follow)
default = switch.default
for i, node in enumerate(cases):
if node in self.visited_nodes:
continue
self.inc_ind()
for case in switch.node_to_case[node]:
self.write('%scase %d:\n' % (self.space(), case))
if i + 1 < len(cases):
self.next_case = cases[i + 1]
else:
self.next_case = None
if node is default:
self.write('%sdefault:\n' % self.space())
default = None
self.inc_ind()
self.visit_node(node)
if self.need_break:
self.write('%sbreak;\n' % self.space())
else:
self.need_break = True
self.dec_ind(2)
if default not in (None, follow):
self.inc_ind()
self.write('%sdefault:\n' % self.space())
self.inc_ind()
self.visit_node(default)
self.dec_ind(2)
self.write('%s}\n' % self.space())
self.switch_follow.pop()
self.visit_node(follow)
def visit_statement_node(self, stmt):
sucs = self.graph.sucs(stmt)
for ins in stmt.get_ins():
self.visit_ins(ins)
if len(sucs) == 1:
if sucs[0] is self.loop_follow[-1]:
self.write('%sbreak;\n' % self.space())
elif sucs[0] is self.next_case:
self.need_break = False
else:
self.visit_node(sucs[0])
def visit_return_node(self, ret):
self.need_break = False
for ins in ret.get_ins():
self.visit_ins(ins)
def visit_throw_node(self, throw):
for ins in throw.get_ins():
self.visit_ins(ins)
# def visit_decl(self, var):
# self.write('%sdecl v%s' % (SPACE * self.ind, var))
# self.end_ins()
def visit_constant(self, cst):
if isinstance(cst, str):
return self.write(string('%s' % cst))
self.write('%s' % cst)
def visit_base_class(self, cls):
self.write(cls)
def visit_variable(self, var):
if isinstance(var, str):
return self.write(var)
self.write('v%d' % var)
def visit_param(self, param):
self.write('p%s' % param)
def visit_this(self):
self.write('this')
def visit_assign(self, lhs, rhs):
if lhs is not None:
return self.write_inplace_if_possible(lhs, rhs)
self.write_ind()
rhs.visit(self)
if not self.skip:
self.end_ins()
def visit_move_result(self, lhs, rhs):
self.write_ind_visit_end(lhs, ' = ', rhs)
def visit_move(self, lhs, rhs):
if lhs is not rhs:
self.write_inplace_if_possible(lhs, rhs)
def visit_astore(self, array, index, rhs):
self.write_ind()
array.visit(self)
self.write('[')
if isinstance(index, Constant):
index.visit(self, 'I')
else:
index.visit(self)
self.write('] = ')
rhs.visit(self)
self.end_ins()
def visit_put_static(self, cls, name, rhs):
self.write_ind()
self.write('%s.%s = ' % (cls, name))
rhs.visit(self)
self.end_ins()
def visit_put_instance(self, lhs, name, rhs):
self.write_ind_visit_end(lhs, '.%s = ' % name, rhs)
def visit_new(self, atype):
self.write('new %s' % get_type(atype))
def visit_invoke(self, name, base, ptype, rtype, args):
if isinstance(base, ThisParam):
if name == '<init>' and self.constructor and len(args) == 0:
self.skip = True
return
base.visit(self)
if name != '<init>':
self.write('.%s' % name)
self.write('(')
comma = False
for arg in args:
if comma:
self.write(', ')
comma = True
arg.visit(self)
self.write(')')
def visit_return_void(self):
self.write_ind()
self.write('return')
self.end_ins()
def visit_return(self, arg):
self.write_ind()
self.write('return ')
arg.visit(self)
self.end_ins()
def visit_nop(self):
pass
def visit_switch(self, arg):
arg.visit(self)
def visit_check_cast(self, arg, atype):
self.write('(checkcast)(')
arg.visit(self)
self.write(', %s)' % atype)
def visit_aload(self, array, index):
array.visit(self)
self.write('[')
index.visit(self)
self.write(']')
def visit_alength(self, array):
array.visit(self)
self.write('.length')
def visit_new_array(self, atype, size):
self.write('new %s[' % get_type(atype[1:]))
size.visit(self)
self.write(']')
def visit_filled_new_array(self, atype, size, args):
self.write('filled-new-array(type=')
atype.visit(self)
self.write(', size=')
size.visit(self)
for arg in args:
self.write(', arg=')
arg.visit(self)
self.write(')')
def visit_fill_array(self, array, value):
self.write_ind()
array.visit(self)
self.write(' = {')
data = value.get_data()
self.write(', '.join(['%d' % ord(c) for c in data[:-1]]))
self.write('}')
self.end_ins()
def visit_monitor_enter(self, ref):
self.write_ind()
self.write('synchronized(')
ref.visit(self)
self.write(') {\n')
self.inc_ind()
def visit_monitor_exit(self, ref):
self.dec_ind()
self.write_ind()
self.write('}\n')
def visit_throw(self, ref):
self.write_ind()
self.write('throw ')
ref.visit(self)
self.end_ins()
def visit_binary_expression(self, op, arg1, arg2):
self.write('(')
arg1.visit(self)
self.write(' %s ' % op)
arg2.visit(self)
self.write(')')
def visit_unary_expression(self, op, arg):
self.write('(%s ' % op)
arg.visit(self)
self.write(')')
def visit_cast(self, op, arg):
self.write('(%s ' % op)
arg.visit(self)
self.write(')')
def visit_cond_expression(self, op, arg1, arg2):
arg1.visit(self)
self.write(' %s ' % op)
arg2.visit(self)
def visit_condz_expression(self, op, arg):
if isinstance(arg, BinaryCompExpression):
arg.op = op
return arg.visit(self)
atype = arg.get_type()
if atype == 'Z':
if op is Op.EQUAL:
self.write('!')
arg.visit(self)
else:
arg.visit(self)
self.write(' %s 0' % op)
def visit_get_instance(self, arg, name):
arg.visit(self)
self.write('.%s' % name)
def visit_get_static(self, cls, name):
self.write('%s.%s' % (cls, name))
def string(s):
# Based on http://stackoverflow.com/a/1676407
ret = ['"']
for c in s:
if ord(c) < 32 or 0x80 <= ord(c) <= 0xff:
to_add = '\\x%02x' % ord(c)
elif c in '\\"':
to_add = '%c' % c
else:
to_add = c
ret.append(to_add)
ret.append('"')
return ''.join(ret)
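# Illustrative behaviour of string() (with the escaping fix above):
# string('say "hi"') emits the characters "say \"hi\"", and a tab
# (ord 9 < 32) is emitted as the escape \x09.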
|
JulianSchuette/android-instrumentation
|
injector/androguard/decompiler/dad/writer.py
|
Python
|
apache-2.0
| 15,928
|
[
"VisIt"
] |
7e4ba378e7b3af5d088fa090efb898ff1bee44523bdcbc984a0660a6d9e412f1
|
# pylint: disable=C0111
# pylint: disable=W0621
from lettuce import world, step
from nose.tools import assert_true, assert_in, assert_false # pylint: disable=E0611
from auth.authz import get_user_by_email, get_course_groupname_for_role
from django.conf import settings
from selenium.webdriver.common.keys import Keys
import time
import os
from django.contrib.auth.models import Group
from logging import getLogger
logger = getLogger(__name__)
from terrain.browser import reset_data
TEST_ROOT = settings.COMMON_TEST_DATA_ROOT
@step('I (?:visit|access|open) the Studio homepage$')
def i_visit_the_studio_homepage(_step):
# To make this go to port 8001, put
# LETTUCE_SERVER_PORT = 8001
# in your settings.py file.
world.visit('/')
signin_css = 'a.action-signin'
assert world.is_css_present(signin_css)
@step('I am logged into Studio$')
def i_am_logged_into_studio(_step):
log_into_studio()
@step('I confirm the alert$')
def i_confirm_with_ok(_step):
world.browser.get_alert().accept()
@step(u'I press the "([^"]*)" delete icon$')
def i_press_the_category_delete_icon(_step, category):
if category == 'section':
css = 'a.delete-button.delete-section-button span.delete-icon'
elif category == 'subsection':
css = 'a.delete-button.delete-subsection-button span.delete-icon'
else:
assert False, 'Invalid category: %s' % category
world.css_click(css)
@step('I have opened a new course in Studio$')
def i_have_opened_a_new_course(_step):
open_new_course()
@step('(I select|s?he selects) the new course')
def select_new_course(_step, whom):
course_link_css = 'a.course-link'
world.css_click(course_link_css)
@step(u'I press the "([^"]*)" notification button$')
def press_the_notification_button(_step, name):
# TODO: fix up this code. Selenium is not dealing well with css transforms,
# as it thinks that the notification and the buttons are always visible
# First wait for the notification to pop up
notification_css = 'div#page-notification div.wrapper-notification'
world.wait_for_visible(notification_css)
# You would think that the above would have worked, but it doesn't.
# Brute force wait for now.
world.wait(.5)
# Now make sure the button is there
btn_css = 'div#page-notification a.action-%s' % name.lower()
world.wait_for_visible(btn_css)
# You would think that the above would have worked, but it doesn't.
# Brute force wait for now.
world.wait(.5)
if world.is_firefox():
# This is done to explicitly make the changes save on firefox.
# It will remove focus from the previously focused element
world.trigger_event(btn_css, event='focus')
world.browser.execute_script("$('{}').click()".format(btn_css))
else:
world.css_click(btn_css)
@step('I change the "(.*)" field to "(.*)"$')
def i_change_field_to_value(_step, field, value):
field_css = '#%s' % '-'.join([s.lower() for s in field.split()])
ele = world.css_find(field_css).first
ele.fill(value)
ele._element.send_keys(Keys.ENTER)
@step('I reset the database')
def reset_the_db(_step):
"""
When running Lettuce tests using examples (i.e. "Confirmation is
shown on save" in course-settings.feature), the normal hooks
aren't called between examples. reset_data should run before each
scenario to flush the test database. When this doesn't happen we
get errors due to trying to insert a non-unique entry. So instead,
we delete the database manually. This has the effect of removing
any users and courses that have been created during the test run.
"""
reset_data(None)
@step('I see a confirmation that my changes have been saved')
def i_see_a_confirmation(step):
confirmation_css = '#alert-confirmation'
assert world.is_css_present(confirmation_css)
def open_new_course():
world.clear_courses()
create_studio_user()
log_into_studio()
create_a_course()
def create_studio_user(
uname='robot',
email='robot+studio@edx.org',
password='test',
is_staff=False):
studio_user = world.UserFactory(
username=uname,
email=email,
password=password,
is_staff=is_staff)
registration = world.RegistrationFactory(user=studio_user)
registration.register(studio_user)
registration.activate()
return studio_user
def fill_in_course_info(
name='Robot Super Course',
org='MITx',
num='101',
run='2013_Spring'):
world.css_fill('.new-course-name', name)
world.css_fill('.new-course-org', org)
world.css_fill('.new-course-number', num)
world.css_fill('.new-course-run', run)
def log_into_studio(
uname='robot',
email='robot+studio@edx.org',
password='test',
name='Robot Studio'):
world.log_in(username=uname, password=password, email=email, name=name)
# Navigate to the studio dashboard
world.visit('/')
assert_in(uname, world.css_text('h2.title', timeout=10))
def create_a_course():
course = world.CourseFactory.create(org='MITx', course='999', display_name='Robot Super Course')
world.scenario_dict['COURSE'] = course
user = world.scenario_dict.get("USER")
if not user:
user = get_user_by_email('robot+studio@edx.org')
# Add the user to the instructor group of the course
# so they will have the permissions to see it in studio
for role in ("staff", "instructor"):
groupname = get_course_groupname_for_role(course.location, role)
group, __ = Group.objects.get_or_create(name=groupname)
user.groups.add(group)
user.save()
# Navigate to the studio dashboard
world.visit('/')
course_link_css = 'a.course-link'
world.css_click(course_link_css)
course_title_css = 'span.course-title'
assert_true(world.is_css_present(course_title_css))
def add_section(name='My Section'):
link_css = 'a.new-courseware-section-button'
world.css_click(link_css)
name_css = 'input.new-section-name'
save_css = 'input.new-section-name-save'
world.css_fill(name_css, name)
world.css_click(save_css)
span_css = 'span.section-name-span'
assert_true(world.is_css_present(span_css))
def add_subsection(name='Subsection One'):
css = 'a.new-subsection-item'
world.css_click(css)
name_css = 'input.new-subsection-name-input'
save_css = 'input.new-subsection-name-save'
world.css_fill(name_css, name)
world.css_click(save_css)
def set_date_and_time(date_css, desired_date, time_css, desired_time):
world.css_fill(date_css, desired_date)
# hit TAB to get to the time field
e = world.css_find(date_css).first
# pylint: disable=W0212
e._element.send_keys(Keys.TAB)
world.css_fill(time_css, desired_time)
e = world.css_find(time_css).first
e._element.send_keys(Keys.TAB)
time.sleep(1)
@step('I have enabled the (.*) advanced module$')
def i_enabled_the_advanced_module(step, module):
step.given('I have opened a new course section in Studio')
world.css_click('.nav-course-settings')
world.css_click('.nav-course-settings-advanced a')
type_in_codemirror(0, '["%s"]' % module)
press_the_notification_button(step, 'Save')
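# Illustrative feature-file usage of the step above (hypothetical module
# name):
#
#   Given I have enabled the word_cloud advanced module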
@step('I have clicked the new unit button')
def open_new_unit(step):
step.given('I have opened a new course section in Studio')
step.given('I have added a new subsection')
step.given('I expand the first section')
world.css_click('a.new-unit-item')
@step('the save notification button is disabled')
def save_button_disabled(step):
button_css = '.action-save'
disabled = 'is-disabled'
assert world.css_has_class(button_css, disabled)
@step('the "([^"]*)" button is disabled')
def button_disabled(step, value):
button_css = 'input[value="%s"]' % value
assert world.css_has_class(button_css, 'is-disabled')
@step('I confirm the prompt')
def confirm_the_prompt(step):
def click_button(btn_css):
world.css_click(btn_css)
return not world.css_find(btn_css).visible
prompt_css = 'div.prompt.has-actions'
world.wait_for_visible(prompt_css)
btn_css = 'a.button.action-primary'
world.wait_for_visible(btn_css)
# Sometimes you can do a click before the prompt is up.
# Thus we need some retry logic here.
world.wait_for(lambda _driver: click_button(btn_css))
assert_false(world.css_find(btn_css).visible)
@step(u'I am shown a (.*)$')
def i_am_shown_a_notification(step, notification_type):
assert world.is_css_present('.wrapper-%s' % notification_type)
def type_in_codemirror(index, text):
world.wait(1) # For now, slow this down so that it works. TODO: fix it.
world.css_click("div.CodeMirror-lines", index=index)
world.browser.execute_script("$('div.CodeMirror.CodeMirror-focused > div').css('overflow', '')")
g = world.css_find("div.CodeMirror.CodeMirror-focused > div > textarea")
if world.is_mac():
g._element.send_keys(Keys.COMMAND + 'a')
else:
g._element.send_keys(Keys.CONTROL + 'a')
g._element.send_keys(Keys.DELETE)
g._element.send_keys(text)
if world.is_firefox():
world.trigger_event('div.CodeMirror', index=index, event='blur')
def upload_file(filename):
path = os.path.join(TEST_ROOT, filename)
world.browser.execute_script("$('input.file-input').css('display', 'block')")
world.browser.attach_file('file', os.path.abspath(path))
button_css = '.upload-dialog .action-upload'
world.css_click(button_css)
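# Illustrative step built on upload_file (hypothetical step text; filenames
# resolve relative to TEST_ROOT as above):
#
#   @step(u'I upload the file "([^"]*)"$')
#   def i_upload_the_file(_step, filename):
#       upload_file(filename)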
|
morpheby/levelup-by
|
cms/djangoapps/contentstore/features/common.py
|
Python
|
agpl-3.0
| 9,621
|
[
"VisIt"
] |
9752d187a2d9b4f5916bb6d229b9a31c36199079302cbe94bc80ce34564c39bf
|
#!/usr/bin/env python
# ----------------------------------------------------------------------------
# Copyright 2015 Nervana Systems Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ----------------------------------------------------------------------------
"""
Example that trains a small multi-layer perceptron with fully connected layers on MNIST.
This example has some command line arguments that enable different neon features.
Examples:
python mnist_mlp.py -b gpu -e 10
Run the example for 10 epochs of mnist data using the nervana gpu backend
python mnist_mlp.py --eval_freq 1
After each training epoch the validation/test data set will be processed through the model
and the cost will be displayed.
python mnist_mlp.py --serialize 1 -s checkpoint.pkl
After every iteration of training the model will be dumped to a pickle file named
"checkpoint.pkl". Changing the serialize parameter changes the frequency at which the
model is saved.
python mnist_mlp.py --model_file checkpoint.pkl
Before starting to train the model, the model state is set to the values stored in the
checkpoint file named checkpoint.pkl.
"""
import logging
from neon.callbacks.callbacks import Callbacks
from neon.data import ArrayIterator, load_mnist
from neon.initializers import Gaussian
from neon.layers import GeneralizedCost, Affine
from neon.models import Model
from neon.optimizers import GradientDescentMomentum
from neon.transforms import Rectlin, Logistic, CrossEntropyBinary, Misclassification
from neon.util.argparser import NeonArgparser
# parse the command line arguments
parser = NeonArgparser(__doc__)
args = parser.parse_args()
logger = logging.getLogger()
logger.setLevel(args.log_thresh)
# load up the mnist data set
# split into train and test sets
(X_train, y_train), (X_test, y_test), nclass = load_mnist(path=args.data_dir)
# setup a training set iterator
train_set = ArrayIterator(X_train, y_train, nclass=nclass, lshape=(1, 28, 28))
# setup a validation data set iterator
valid_set = ArrayIterator(X_test, y_test, nclass=nclass, lshape=(1, 28, 28))
# setup weight initialization function
init_norm = Gaussian(loc=0.0, scale=0.01)
# setup model layers
layers = [Affine(nout=100, init=init_norm, activation=Rectlin()),
Affine(nout=10, init=init_norm, activation=Logistic(shortcut=True))]
# setup cost function as CrossEntropy
cost = GeneralizedCost(costfunc=CrossEntropyBinary())
# setup optimizer
optimizer = GradientDescentMomentum(0.1, momentum_coef=0.9, stochastic_round=args.rounding)
# initialize model object
mlp = Model(layers=layers)
# configure callbacks
callbacks = Callbacks(mlp, eval_set=valid_set, **args.callback_args)
# run fit
mlp.fit(train_set, optimizer=optimizer, num_epochs=args.epochs, cost=cost, callbacks=callbacks)
print('Misclassification error = %.1f%%' % (mlp.eval(valid_set, metric=Misclassification())*100))
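# A small follow-up sketch using the same Model.eval API as the line above:
# report the training-set error as well, to gauge over- or under-fitting.
print('Train misclassification error = %.1f%%'
      % (mlp.eval(train_set, metric=Misclassification())*100))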
|
dongjoon-hyun/neon
|
examples/mnist_mlp.py
|
Python
|
apache-2.0
| 3,458
|
[
"Gaussian"
] |
b680cb687860f8dbd249351c8034ed76c9007b1748977f2959321bcca108bd59
|
# coding: utf-8
# Copyright (c) Pymatgen Development Team.
# Distributed under the terms of the MIT License.
"""
This module is used for analysis of materials with potential application as
intercalation batteries.
"""
from __future__ import division, unicode_literals
__author__ = "Anubhav Jain, Shyue Ping Ong"
__copyright__ = "Copyright 2012, The Materials Project"
__version__ = "0.1"
__maintainer__ = "Anubhav Jain"
__email__ = "ajain@lbl.gov"
__date__ = "Jan 13, 2012"
__status__ = "Beta"
import itertools
from pymatgen.core.composition import Composition
from pymatgen.core.units import Charge, Time
from pymatgen.core.physical_constants import AVOGADROS_CONST
from pymatgen.phasediagram.pdmaker import PhaseDiagram
from pymatgen.phasediagram.entries import PDEntry
from pymatgen.apps.battery.battery_abc import AbstractElectrode, \
AbstractVoltagePair
from pymatgen.core.periodic_table import Element
class InsertionElectrode(AbstractElectrode):
"""
A set of topotactically related compounds, with different amounts of a
single element, e.g. TiO2 and LiTiO2, that can be used to define an
insertion battery electrode.
"""
def __init__(self, entries, working_ion_entry):
"""
Create a new InsertionElectrode.
Args:
entries: A list of ComputedStructureEntries (or subclasses)
representing the different topotactic states of the battery,
e.g. TiO2 and LiTiO2.
working_ion_entry: A single ComputedEntry or PDEntry
representing the element that carries charge across the
battery, e.g. Li.
"""
self._entries = entries
self._working_ion = working_ion_entry.composition.elements[0]
self._working_ion_entry = working_ion_entry
#Prepare to make phase diagram: determine elements and set their energy
#to be very high
elements = set()
for entry in entries:
elements.update(entry.composition.elements)
#Set an artificial energy for each element for convex hull generation
element_energy = max([entry.energy_per_atom for entry in entries]) + 10
pdentries = []
pdentries.extend(entries)
pdentries.extend([PDEntry(Composition({el:1}), element_energy)
for el in elements])
#Make phase diagram to determine which entries are stable vs. unstable
pd = PhaseDiagram(pdentries)
lifrac = lambda e: e.composition.get_atomic_fraction(self._working_ion)
#stable entries ordered by amount of Li asc
self._stable_entries = tuple(sorted([e for e in pd.stable_entries
if e in entries], key=lifrac))
#unstable entries ordered by amount of Li asc
self._unstable_entries = tuple(sorted([e for e in pd.unstable_entries
if e in entries], key=lifrac))
#create voltage pairs
self._vpairs = tuple([InsertionVoltagePair(self._stable_entries[i],
self._stable_entries[i + 1],
working_ion_entry)
for i in range(len(self._stable_entries) - 1)])
@property
def working_ion(self):
"""
The working ion as an Element object
"""
return self._working_ion
@property
def working_ion_entry(self):
return self._working_ion_entry
@property
def voltage_pairs(self):
return self._vpairs
def get_stable_entries(self, charge_to_discharge=True):
"""
Get the stable entries.
Args:
charge_to_discharge: order from most charge to most discharged
state? Default to True.
Returns:
A list of stable entries in the electrode, ordered by amount of the
working ion.
"""
list_copy = list(self._stable_entries)
        if not charge_to_discharge:
            list_copy.reverse()
        return list_copy
def get_unstable_entries(self, charge_to_discharge=True):
"""
Returns the unstable entries for the electrode.
Args:
charge_to_discharge: Order from most charge to most discharged
state? Defaults to True.
Returns:
A list of unstable entries in the electrode, ordered by amount of
the working ion.
"""
list_copy = list(self._unstable_entries)
        if not charge_to_discharge:
            list_copy.reverse()
        return list_copy
def get_all_entries(self, charge_to_discharge=True):
"""
Return all entries input for the electrode.
Args:
charge_to_discharge:
order from most charge to most discharged state? Defaults to
True.
Returns:
A list of all entries in the electrode (both stable and unstable),
ordered by amount of the working ion.
"""
all_entries = list(self.get_stable_entries())
all_entries.extend(self.get_unstable_entries())
#sort all entries by amount of working ion ASC
fsrt = lambda e: e.composition.get_atomic_fraction(self.working_ion)
        all_entries = sorted(all_entries, key=fsrt)
        if not charge_to_discharge:
            all_entries.reverse()
        return all_entries
@property
def fully_charged_entry(self):
"""
The most charged entry along the topotactic path.
"""
return self._stable_entries[0]
@property
def fully_discharged_entry(self):
"""
The most discharged entry along the topotactic path.
"""
return self._stable_entries[-1]
def get_max_instability(self, min_voltage=None, max_voltage=None):
"""
The maximum instability along a path for a specific voltage range.
Args:
min_voltage: The minimum allowable voltage.
max_voltage: The maximum allowable voltage.
Returns:
Maximum decomposition energy of all compounds along the insertion
path (a subset of the path can be chosen by the optional arguments)
"""
data = []
for pair in self._select_in_voltage_range(min_voltage, max_voltage):
if pair.decomp_e_charge is not None:
data.append(pair.decomp_e_charge)
if pair.decomp_e_discharge is not None:
data.append(pair.decomp_e_discharge)
return max(data) if len(data) > 0 else None
def get_min_instability(self, min_voltage=None, max_voltage=None):
"""
The minimum instability along a path for a specific voltage range.
Args:
min_voltage: The minimum allowable voltage.
max_voltage: The maximum allowable voltage.
Returns:
Minimum decomposition energy of all compounds along the insertion
path (a subset of the path can be chosen by the optional arguments)
"""
data = []
for pair in self._select_in_voltage_range(min_voltage, max_voltage):
if pair.decomp_e_charge is not None:
data.append(pair.decomp_e_charge)
if pair.decomp_e_discharge is not None:
data.append(pair.decomp_e_discharge)
return min(data) if len(data) > 0 else None
def get_max_muO2(self, min_voltage=None, max_voltage=None):
"""
Maximum critical oxygen chemical potential along path.
Args:
min_voltage: The minimum allowable voltage.
max_voltage: The maximum allowable voltage.
Returns:
Maximum critical oxygen chemical of all compounds along the
insertion path (a subset of the path can be chosen by the optional
arguments).
"""
data = []
for pair in self._select_in_voltage_range(min_voltage, max_voltage):
            if pair.muO2_discharge is not None:
                data.append(pair.muO2_discharge)
if pair.muO2_charge is not None:
data.append(pair.muO2_charge)
return max(data) if len(data) > 0 else None
def get_min_muO2(self, min_voltage=None, max_voltage=None):
"""
Minimum critical oxygen chemical potential along path.
Args:
min_voltage: The minimum allowable voltage for a given step
max_voltage: The maximum allowable voltage allowable for a given
step
Returns:
Minimum critical oxygen chemical of all compounds along the
insertion path (a subset of the path can be chosen by the optional
arguments).
"""
data = []
for pair in self._select_in_voltage_range(min_voltage, max_voltage):
            if pair.muO2_discharge is not None:
                data.append(pair.muO2_discharge)
if pair.muO2_charge is not None:
data.append(pair.muO2_charge)
return min(data) if len(data) > 0 else None
def get_sub_electrodes(self, adjacent_only=True, include_myself=True):
"""
If this electrode contains multiple voltage steps, then it is possible
to use only a subset of the voltage steps to define other electrodes.
For example, an LiTiO2 electrode might contain three subelectrodes:
[LiTiO2 --> TiO2, LiTiO2 --> Li0.5TiO2, Li0.5TiO2 --> TiO2]
        This method can be used to return all the subelectrodes, controlled
        by the options below.
Args:
adjacent_only: Only return electrodes from compounds that are
adjacent on the convex hull, i.e. no electrodes returned
will have multiple voltage steps if this is set True.
include_myself: Include this identical electrode in the list of
results.
Returns:
A list of InsertionElectrode objects
"""
battery_list = []
pair_it = self._vpairs if adjacent_only \
else itertools.combinations_with_replacement(self._vpairs, 2)
ion = self._working_ion
for pair in pair_it:
entry_charge = pair.entry_charge if adjacent_only \
else pair[0].entry_charge
entry_discharge = pair.entry_discharge if adjacent_only \
else pair[1].entry_discharge
chg_frac = entry_charge.composition.get_atomic_fraction(ion)
dischg_frac = entry_discharge.composition.get_atomic_fraction(ion)
def in_range(entry):
frac = entry.composition.get_atomic_fraction(ion)
return chg_frac <= frac <= dischg_frac
if include_myself or entry_charge != self.fully_charged_entry \
or entry_discharge != self.fully_discharged_entry:
unstable_entries = filter(in_range,
self.get_unstable_entries())
stable_entries = filter(in_range, self.get_stable_entries())
all_entries = list(stable_entries)
all_entries.extend(unstable_entries)
battery_list.append(self.__class__(all_entries,
self.working_ion_entry))
return battery_list
def as_dict_summary(self, print_subelectrodes=True):
"""
Generate a summary dict.
Args:
print_subelectrodes: Also print data on all the possible
subelectrodes.
Returns:
            A summary of this electrode's properties in dict format.
"""
chg_comp = self.fully_charged_entry.composition
dischg_comp = self.fully_discharged_entry.composition
ion = self.working_ion
d = {"average_voltage": self.get_average_voltage(),
"max_voltage": self.max_voltage,
"min_voltage": self.min_voltage,
"max_delta_volume": self.max_delta_volume,
"max_voltage_step": self.max_voltage_step,
"capacity_grav": self.get_capacity_grav(),
"capacity_vol": self.get_capacity_vol(),
"energy_grav": self.get_specific_energy(),
"energy_vol": self.get_energy_density(),
"working_ion": self._working_ion.symbol,
"nsteps": self.num_steps,
"framework": self._vpairs[0].framework.to_data_dict,
"formula_charge": chg_comp.reduced_formula,
"formula_discharge": dischg_comp.reduced_formula,
"fracA_charge": chg_comp.get_atomic_fraction(ion),
"fracA_discharge": dischg_comp.get_atomic_fraction(ion),
"max_instability": self.get_max_instability(),
"min_instability": self.get_min_instability()}
if print_subelectrodes:
f_dict = lambda c: c.as_dict_summary(print_subelectrodes=False)
d["adj_pairs"] = map(f_dict,
self.get_sub_electrodes(adjacent_only=True))
d["all_pairs"] = map(f_dict,
self.get_sub_electrodes(adjacent_only=False))
return d
def __str__(self):
return self.__repr__()
def __repr__(self):
output = []
chg_form = self.fully_charged_entry.composition.reduced_formula
dischg_form = self.fully_discharged_entry.composition.reduced_formula
output.append("InsertionElectrode with endpoints at {} and {}".format(
chg_form, dischg_form))
output.append("Avg. volt. = {} V".format(self.get_average_voltage()))
output.append("Grav. cap. = {} mAh/g".format(self.get_capacity_grav()))
output.append("Vol. cap. = {}".format(self.get_capacity_vol()))
return "\n".join(output)
@classmethod
def from_dict(cls, d):
from monty.json import MontyDecoder
dec = MontyDecoder()
return cls(dec.process_decoded(d["entries"]),
dec.process_decoded(d["working_ion_entry"]))
def as_dict(self):
return {"@module": self.__class__.__module__,
"@class": self.__class__.__name__,
"entries": [entry.as_dict() for entry in self._entries],
"working_ion_entry": self.working_ion_entry.as_dict()}
class InsertionVoltagePair(AbstractVoltagePair):
"""
Defines an Insertion Voltage Pair.
Args:
entry1: Entry corresponding to one of the entries in the voltage step.
entry2: Entry corresponding to the other entry in the voltage step.
working_ion_entry: A single ComputedEntry or PDEntry representing
the element that carries charge across the battery, e.g. Li.
"""
def __init__(self, entry1, entry2, working_ion_entry):
#initialize some internal variables
working_element = working_ion_entry.composition.elements[0]
entry_charge = entry1
entry_discharge = entry2
if entry_charge.composition.get_atomic_fraction(working_element) \
> entry2.composition.get_atomic_fraction(working_element):
(entry_charge, entry_discharge) = (entry_discharge, entry_charge)
comp_charge = entry_charge.composition
comp_discharge = entry_discharge.composition
ion_sym = working_element.symbol
frame_charge_comp = Composition({el: comp_charge[el]
for el in comp_charge
if el.symbol != ion_sym})
frame_discharge_comp = Composition({el: comp_discharge[el]
for el in comp_discharge
if el.symbol != ion_sym})
#Data validation
#check that the ion is just a single element
if not working_ion_entry.composition.is_element:
raise ValueError("VoltagePair: The working ion specified must be "
"an element")
#check that at least one of the entries contains the working element
if not comp_charge.get_atomic_fraction(working_element) > 0 and \
not comp_discharge.get_atomic_fraction(working_element) > 0:
raise ValueError("VoltagePair: The working ion must be present in "
"one of the entries")
        #check that the entries do not contain the same amount of the working
        #element
if comp_charge.get_atomic_fraction(working_element) == \
comp_discharge.get_atomic_fraction(working_element):
raise ValueError("VoltagePair: The working ion atomic percentage "
"cannot be the same in both the entries")
#check that the frameworks of the entries are equivalent
if not frame_charge_comp.reduced_formula == \
frame_discharge_comp.reduced_formula:
raise ValueError("VoltagePair: the specified entries must have the"
" same compositional framework")
#Initialize normalization factors, charged and discharged entries
valence_list = Element(ion_sym).oxidation_states
working_ion_valence = max(valence_list)
(self.framework,
norm_charge) = frame_charge_comp.get_reduced_composition_and_factor()
norm_discharge = \
frame_discharge_comp.get_reduced_composition_and_factor()[1]
self._working_ion_entry = working_ion_entry
#Initialize normalized properties
self._vol_charge = entry_charge.structure.volume / norm_charge
self._vol_discharge = entry_discharge.structure.volume / norm_discharge
comp_charge = entry_charge.composition
comp_discharge = entry_discharge.composition
self._mass_charge = comp_charge.weight / norm_charge
self._mass_discharge = comp_discharge.weight / norm_discharge
self._num_ions_transferred = \
(comp_discharge[working_element] / norm_discharge) \
- (comp_charge[working_element] / norm_charge)
self._voltage = \
(((entry_charge.energy / norm_charge) -
(entry_discharge.energy / norm_discharge)) / \
self._num_ions_transferred + working_ion_entry.energy_per_atom) / working_ion_valence
self._mAh = self._num_ions_transferred * Charge(1, "e").to("C") * \
Time(1, "s").to("h") * AVOGADROS_CONST * 1000 * working_ion_valence
#Step 4: add (optional) hull and muO2 data
self.decomp_e_charge = \
entry_charge.data.get("decomposition_energy", None)
self.decomp_e_discharge = \
entry_discharge.data.get("decomposition_energy", None)
self.muO2_charge = entry_charge.data.get("muO2", None)
self.muO2_discharge = entry_discharge.data.get("muO2", None)
self.entry_charge = entry_charge
self.entry_discharge = entry_discharge
self.normalization_charge = norm_charge
self.normalization_discharge = norm_discharge
self._frac_charge = comp_charge.get_atomic_fraction(working_element)
self._frac_discharge = \
comp_discharge.get_atomic_fraction(working_element)
@property
def frac_charge(self):
return self._frac_charge
@property
def frac_discharge(self):
return self._frac_discharge
@property
def voltage(self):
return self._voltage
@property
def mAh(self):
return self._mAh
@property
def mass_charge(self):
return self._mass_charge
@property
def mass_discharge(self):
return self._mass_discharge
@property
def vol_charge(self):
return self._vol_charge
@property
def vol_discharge(self):
return self._vol_discharge
@property
def working_ion_entry(self):
return self._working_ion_entry
def __repr__(self):
output = ["Insertion voltage pair with working ion {}"
.format(self._working_ion_entry.composition.reduced_formula),
"V = {}, mAh = {}".format(self.voltage, self.mAh),
"mass_charge = {}, mass_discharge = {}"
.format(self.mass_charge, self.mass_discharge),
"vol_charge = {}, vol_discharge = {}"
.format(self.vol_charge, self.vol_discharge),
"frac_charge = {}, frac_discharge = {}"
.format(self.frac_charge, self.frac_discharge)]
return "\n".join(output)
def __str__(self):
return self.__repr__()
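# A minimal, self-contained sketch of the voltage formula used in
# InsertionVoltagePair above (hypothetical energies in eV; real usage goes
# through InsertionElectrode with ComputedStructureEntry objects):
def _sketch_voltage(e_charge, e_discharge, n_ions, e_ion_per_atom, valence=1):
    # Normalized total energies of the charged/discharged framework, the
    # number of working ions transferred, and the working-ion reference
    # energy per atom, mirroring InsertionVoltagePair.__init__.
    return ((e_charge - e_discharge) / n_ions + e_ion_per_atom) / valence
# e.g. TiO2 (charged) vs LiTiO2 (discharged), one Li transferred:
# _sketch_voltage(-26.9, -30.8, 1, -1.9) -> 2.0 (V)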
|
migueldiascosta/pymatgen
|
pymatgen/apps/battery/insertion_battery.py
|
Python
|
mit
| 20,676
|
[
"pymatgen"
] |
f43c3716cc4d7f65e9560869d8a261a9fd2f935aafe74f5d300101af6365ee85
|
# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class NetcdfCxx(AutotoolsPackage):
"""Deprecated C++ compatibility bindings for NetCDF.
These do NOT read or write NetCDF-4 files, and are no longer
maintained by Unidata. Developers should migrate to current
NetCDF C++ bindings, in Spack package netcdf-cxx4."""
homepage = "https://www.unidata.ucar.edu/software/netcdf"
url = "https://www.unidata.ucar.edu/downloads/netcdf/ftp/netcdf-cxx-4.2.tar.gz"
version('4.2', sha256='95ed6ab49a0ee001255eac4e44aacb5ca4ea96ba850c08337a3e4c9a0872ccd1')
depends_on('netcdf-c')
variant(
'netcdf4', default=True, description='Compile with netCDF4 support')
@property
def libs(self):
shared = True
return find_libraries(
'libnetcdf_c++', root=self.prefix, shared=shared, recursive=True
)
def configure_args(self):
args = []
if '+netcdf4' in self.spec:
# There is no clear way to set this via configure, so set the flag
# explicitly
args.append('CPPFLAGS=-DUSE_NETCDF4')
# Add these to LDFLAGS explicitly, so the linker doesn't accidentally
# use system versions
ldflags = [
self.spec['netcdf-c'].libs.search_flags,
self.spec['hdf5'].libs.search_flags,
]
args.append('LDFLAGS=' + ' '.join(ldflags))
return args
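# Usage sketch (hypothetical specs): `spack install netcdf-cxx+netcdf4`
# builds with -DUSE_NETCDF4, while `spack install netcdf-cxx~netcdf4`
# omits it; both link netcdf-c and hdf5 through the explicit LDFLAGS above.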
|
LLNL/spack
|
var/spack/repos/builtin/packages/netcdf-cxx/package.py
|
Python
|
lgpl-2.1
| 1,592
|
[
"NetCDF"
] |
f8a6cb965b47cf0406acfcbcf05cb4acae1f66fa26ce959619b2b2e9b2e581e8
|
# Author: Robert McGibbon <rmcgibbo@gmail.com>
# Contributors:
# Copyright (c) 2014, Stanford University and the Authors
# All rights reserved.
# -----------------------------------------------------------------------------
# Imports
# -----------------------------------------------------------------------------
from __future__ import print_function, absolute_import, division
from glob import glob
from io import BytesIO
from os import makedirs
from os.path import exists
from os.path import join
from zipfile import ZipFile
from six.moves.urllib.request import urlopen
import mdtraj as md
from .base import Bunch, Dataset
from .base import get_data_home, retry
DATA_URL = "http://downloads.figshare.com/article/public/1026131"
TARGET_DIRECTORY = "alanine_dipeptide"
class AlanineDipeptide(Dataset):
"""Alanine dipeptide dataset
Parameters
----------
data_home : optional, default: None
Specify another download and cache folder for the datasets. By default
all MSMBuilder data is stored in '~/msmbuilder_data' subfolders.
Notes
-----
    The dataset consists of ten 10 ns trajectories of alanine dipeptide,
simulated using OpenMM 6.0.1 (CUDA platform, NVIDIA GTX660) with the
AMBER99SB-ILDN force field at 300K (langevin dynamics, friction coefficient
of 91/ps, timestep of 2fs) with GBSA implicit solvent. The coordinates are
saved every 1ps. Each trajectory contains 9,999 snapshots.
    The dataset, including the script used to generate it,
    is available on figshare at
http://dx.doi.org/10.6084/m9.figshare.1026131
"""
def __init__(self, data_home=None):
self.data_home = get_data_home(data_home)
self.data_dir = join(self.data_home, TARGET_DIRECTORY)
self.cached = False
@retry(3)
def cache(self):
if not exists(self.data_home):
makedirs(self.data_home)
if not exists(self.data_dir):
print('downloading alanine dipeptide from %s to %s' %
(DATA_URL, self.data_home))
fhandle = urlopen(DATA_URL)
buf = BytesIO(fhandle.read())
zip_file = ZipFile(buf)
makedirs(self.data_dir)
for name in zip_file.namelist():
zip_file.extract(name, path=self.data_dir)
self.cached = True
def get(self):
if not self.cached:
self.cache()
top = md.load(join(self.data_dir, 'ala2.pdb'))
trajectories = []
for fn in glob(join(self.data_dir, 'trajectory*.dcd')):
trajectories.append(md.load(fn, top=top))
return Bunch(trajectories=trajectories, DESCR=self.description())
def fetch_alanine_dipeptide(data_home=None):
return AlanineDipeptide(data_home).get()
fetch_alanine_dipeptide.__doc__ = AlanineDipeptide.__doc__
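# A minimal usage sketch (commented out because get() lazily downloads the
# dataset from figshare on first use; assumes the installed package context
# for the relative imports above):
# from msmbuilder.example_datasets import fetch_alanine_dipeptide
# bunch = fetch_alanine_dipeptide()
# print(len(bunch.trajectories), bunch.trajectories[0].n_frames)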
|
stephenliu1989/msmbuilder
|
msmbuilder/example_datasets/alanine_dipeptide.py
|
Python
|
lgpl-2.1
| 2,853
|
[
"MDTraj",
"OpenMM"
] |
aa6e69950f89e8904e78982e445eea46a40c8a87238177e6a4a10ccb2709bffe
|
from setuptools import setup
setup(
name="public-drive-urls",
version='1.0.0',
author="Brian Peterson",
author_email="bepetersn@gmail.com",
description="Find Google Drive download URLs from a file's sharing URL",
license="MIT",
url='https://github.com/bepetersn/public-drive-urls/',
py_modules=['public_drive_urls'],
classifiers=[
],
install_requires=['requests'],
extras_require={
'test': ['nose', 'mock'],
'dev': ['pip-tools']
}
)
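# Usage sketch (hypothetical commands): the extras declared above install as
# pip install "public-drive-urls[test]"   # adds nose and mock
# pip install "public-drive-urls[dev]"    # adds pip-tools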
|
bepetersn/public-drive-urls
|
setup.py
|
Python
|
mit
| 504
|
[
"Brian"
] |
d3290a388cbaaa73b9c51cf22b7135eb326a0db35b21c897215ea836f686ecc1
|
from skimage.filters import gabor_kernel
from scipy.ndimage.filters import convolve as convolveim
import numpy as np
import time
class GaborFilter:
real_time = 0
real_cont = 0
imag_time = 0
imag_cont = 0
magnitude_time = 0
magnitude_cont = 0
def __init__(self, frequency, theta, sigma_x, sigma_y):
""" Instantiate a Gabor kernel
Parameters
----------
frequency: float
Spatial frequency of the harmonic function. Specified in pixels.
theta: float
Orientation in radians.
sigma_x, sigma_y: float
Standard deviation in x- and y-directions.
"""
self.frequency = frequency
self.theta = theta
self.sigma_x = sigma_x
self.sigma_y = sigma_y
self.kernel = gabor_kernel(frequency=frequency, theta=theta, sigma_x=sigma_x, sigma_y=sigma_y)
def convolve_real(self, image):
"""
        Returns an image convolved with the real component of the kernel
"""
start_time = time.time()
conv = convolveim(image, np.real(self.kernel), mode='wrap')
self.real_time += time.time() - start_time
self.real_cont += 1
return conv
def convolve_imag(self, image):
"""
        Returns an image convolved with the imaginary component of the kernel
"""
start_time = time.time()
conv = convolveim(image, np.imag(self.kernel), mode='wrap')
self.imag_time += time.time() - start_time
self.imag_cont += 1
return conv
def magnitude(self, image):
"""
Returns the magnitude value
"""
start_time = time.time()
conv = np.sqrt(self.convolve_real(image) ** 2 + self.convolve_imag(image) ** 2)
self.magnitude_time += time.time() - start_time
self.magnitude_cont += 1
return conv
def mean_magnitude_time(self):
return self.magnitude_time/self.magnitude_cont
def mean_real_time(self):
return self.real_time/self.real_cont
def mean_imag_time(self):
return self.imag_time/self.imag_cont
def gabor_bank(fmax, ns, nd, v=2, b=1.177):
""" Devuelve un array 2D (Ns x Nd) de filtros de Gabor.
Cada posición dentro del array corresponde a un determinado filtro de Gabor que ha sido creado
con unos parámetros especificos.
Parameters
----------
fmax: float
Max Spatial frequency of the harmonic function. Specified in pixels.
ns: scalar
Number of scales in the bank.
nd: scalar
Number of orientations in the bank.
v: float, optional
Scaling factor to create gabor filters (2 octaves by default)
b: float, optional
Value to truncate the Gaussian envelope bandwidth (1.177 by default to truncate at the half of amplitude)
Returns
-------
    bank: list
        Flat list of ns * nd GaborFilter objects, ordered scale-major
References
----------
https://www.researchgate.net/publication/4214734_Gabor_feature_extraction_for_character_recognition_Comparison_with_gradient_feature
"""
# Bank of Gabor Filters
bank = []
"""
Initial parameters
"""
# Interval of orientation
O = np.pi / nd
# Aspect ratio
alpha = np.tan(O / 2) * (v + 1) / (v - 1)
# Orientations
thetas = [(i * np.pi / nd) for i in range(0, nd)]
"""
    First nd kernels with the same frequency but different orientations
"""
# Frequency
f = fmax * (v + 1) / (2 * v)
sigma_u = f * (v - 1) / (b * (v + 1))
sigma_v = sigma_u / alpha
sigma_x = 1 / (2 * np.pi * sigma_u)
sigma_y = 1 / (2 * np.pi * sigma_v)
for theta in thetas:
bank.append(GaborFilter(f, theta, sigma_x, sigma_y))
"""
    Rest of the kernels, with different frequencies and orientations
"""
for i in range(1, ns):
f /= v
sigma_u = f * (v - 1) / (b * (v + 1))
sigma_v = sigma_u / alpha
sigma_x = 1 / (2 * np.pi * sigma_u)
sigma_y = 1 / (2 * np.pi * sigma_v)
for theta in thetas:
bank.append(GaborFilter(f, theta, sigma_x, sigma_y))
return bank
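# A minimal smoke test of the bank (parameters and the random image are
# illustrative assumptions, not part of the original module):
if __name__ == '__main__':
    bank = gabor_bank(fmax=0.25, ns=2, nd=4)
    image = np.random.rand(32, 32)
    magnitude = bank[0].magnitude(image)
    print(len(bank), magnitude.shape, bank[0].mean_magnitude_time())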
|
benitesf/Skin-Lesion-Analysis-Towards-Melanoma-Detection
|
features_extraction/methods/gabor_filter_banks.py
|
Python
|
mit
| 4,150
|
[
"Gaussian"
] |
fe05eab3e1b2618d51b10f105166f97f4785da746d22030da55f464befa5beff
|
"""
==================================================
Automatic Relevance Determination Regression (ARD)
==================================================
Fit a regression model with Automatic Relevance Determination (ARD).
See :ref:`bayesian_ridge_regression` for more information on the regressor.
Compared to the OLS (ordinary least squares) estimator, the coefficient
weights are slightly shifted toward zeros, which stabilises them.
The histogram of the estimated weights is very peaked, as a sparsity-inducing
prior is imposed on the weights.
The estimation of the model is done by iteratively maximizing the
marginal log-likelihood of the observations.
We also plot predictions and uncertainties for ARD
for one dimensional regression using polynomial feature expansion.
Note the uncertainty starts going up on the right side of the plot.
This is because these test samples are outside of the range of the training
samples.
"""
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from sklearn.linear_model import ARDRegression, LinearRegression
# %%
# Generating simulated data with Gaussian weights
# Parameters of the example
np.random.seed(0)
n_samples, n_features = 100, 100
# Create Gaussian data
X = np.random.randn(n_samples, n_features)
# Create weights with a precision lambda_ of 4.
lambda_ = 4.0
w = np.zeros(n_features)
# Only keep 10 weights of interest
relevant_features = np.random.randint(0, n_features, 10)
for i in relevant_features:
w[i] = stats.norm.rvs(loc=0, scale=1.0 / np.sqrt(lambda_))
# Create noise with a precision alpha of 50.
alpha_ = 50.0
noise = stats.norm.rvs(loc=0, scale=1.0 / np.sqrt(alpha_), size=n_samples)
# Create the target
y = np.dot(X, w) + noise
# %%
# Fit the ARD Regression
clf = ARDRegression(compute_score=True)
clf.fit(X, y)
ols = LinearRegression()
ols.fit(X, y)
# %%
# Plot the true weights, the estimated weights, the histogram of the
# weights, and predictions with standard deviations
plt.figure(figsize=(6, 5))
plt.title("Weights of the model")
plt.plot(clf.coef_, color="darkblue", linestyle="-", linewidth=2, label="ARD estimate")
plt.plot(
ols.coef_, color="yellowgreen", linestyle=":", linewidth=2, label="OLS estimate"
)
plt.plot(w, color="orange", linestyle="-", linewidth=2, label="Ground truth")
plt.xlabel("Features")
plt.ylabel("Values of the weights")
plt.legend(loc=1)
plt.figure(figsize=(6, 5))
plt.title("Histogram of the weights")
plt.hist(clf.coef_, bins=n_features, color="navy", log=True)
plt.scatter(
clf.coef_[relevant_features],
np.full(len(relevant_features), 5.0),
color="gold",
marker="o",
label="Relevant features",
)
plt.ylabel("Features")
plt.xlabel("Values of the weights")
plt.legend(loc=1)
plt.figure(figsize=(6, 5))
plt.title("Marginal log-likelihood")
plt.plot(clf.scores_, color="navy", linewidth=2)
plt.ylabel("Score")
plt.xlabel("Iterations")
# Plotting some predictions for polynomial regression
def f(x, noise_amount):
y = np.sqrt(x) * np.sin(x)
noise = np.random.normal(0, 1, len(x))
return y + noise_amount * noise
degree = 10
X = np.linspace(0, 10, 100)
y = f(X, noise_amount=1)
clf_poly = ARDRegression(threshold_lambda=1e5)
clf_poly.fit(np.vander(X, degree), y)
X_plot = np.linspace(0, 11, 25)
y_plot = f(X_plot, noise_amount=0)
y_mean, y_std = clf_poly.predict(np.vander(X_plot, degree), return_std=True)
plt.figure(figsize=(6, 5))
plt.errorbar(X_plot, y_mean, y_std, color="navy", label="Polynomial ARD", linewidth=2)
plt.plot(X_plot, y_plot, color="gold", linewidth=2, label="Ground Truth")
plt.ylabel("Output y")
plt.xlabel("Feature X")
plt.legend(loc="lower left")
plt.show()
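# A small follow-up check (the 1e-3 threshold is an assumption): ARD should
# drive most of the 100 weights toward zero, keeping roughly the 10 relevant
# ones found above.
print("ARD coefficients above 1e-3:", int(np.sum(np.abs(clf.coef_) > 1e-3)))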
|
manhhomienbienthuy/scikit-learn
|
examples/linear_model/plot_ard.py
|
Python
|
bsd-3-clause
| 3,655
|
[
"Gaussian"
] |
71f30d2d3e2e1f0993568b62fc685e12e205f7ce4741051b77f5b9811fd84c13
|
from lan.lan_ast import *
import copy
class AddToId(NodeVisitor):
""" Finds the Id and replaces it with a binop that
adds another variable to the id.
"""
def __init__(self, id, variable):
self.id = id
self.variable = variable
def changeIdNode(self, node):
if isinstance(node, Id):
if node.name == self.id:
return BinOp(node, '+', Id(self.variable))
else:
return node
else:
return node
def visit_BinOp(self, node):
self.visit(node.lval)
node.lval = self.changeIdNode(node.lval)
self.visit(node.rval)
node.rval = self.changeIdNode(node.rval)
def visit_Assignment(self, node):
self.visit(node.lval)
node.lval = self.changeIdNode(node.lval)
self.visit(node.rval)
node.rval = self.changeIdNode(node.rval)
def visit_ArrayRef(self, node):
for i, n in enumerate(node.subscript):
addToId = Ids()
addToId.visit(n)
if self.id in addToId.ids:
sub = node.subscript[i]
try:
if node.extra['localMemory']:
return
except KeyError:
pass
node.subscript[i] = BinOp(sub, '+', Id(self.variable))
def updateDict(sink, src):
for n in sink:
l = sink[n]
for m in src[n]:
l.add(m)
class FindFunction(NodeVisitor):
""" Finds the typeid of the kernel function """
def __init__(self):
self.typeid = None
def visit_FuncDecl(self, node):
self.visit_TypeId(node.typeid)
def visit_TypeId(self, node):
self.typeid = node
class FindDeviceArgs(NodeVisitor):
""" Finds the argument that we transfer from the C code
to the device.
"""
def __init__(self, argIds):
self.argIds = argIds
self.arglist = list()
def visit_ArgList(self, node):
for typeid in node.arglist:
if isinstance(typeid, TypeId):
if typeid.name.name in self.argIds:
self.argIds.remove(typeid.name.name)
if len(typeid.type) == 2:
if typeid.type[1] == '*':
typeid.type.insert(0, '__global')
self.arglist.append(typeid)
class PerfectForLoop(NodeVisitor):
""" Performs simple checks to decide if we have 1D or 2D
parallelism, i.e. if we have a perfect loops nest of size one
or two.
"""
def __init__(self):
self.depth = 0
self.ast = None
self.inner = None
self.outer = None
def visit_FuncDecl(self, node):
funcstats = node.compound.statements
if len(funcstats) == 1:
if isinstance(funcstats[0], ForLoop):
self.ast = funcstats[0]
self.inner = funcstats[0]
self.depth += 1
loopstats = funcstats[0].compound.statements
if len(loopstats) == 1:
if isinstance(loopstats[0], ForLoop):
self.outer = self.inner
self.depth += 1
self.inner = loopstats[0]
class FindDim(NodeVisitor):
""" Finds the size of the dimNum dimension.
"""
def __init__(self, arrayIds):
self.arrayIds = arrayIds
self.dimNames = dict()
def visit_ArgList(self, node):
for arrayname in self.arrayIds:
findSpecificArrayId = FindSpecificArrayId(arrayname)
count = 0
for typeid in node.arglist:
findSpecificArrayId.reset(arrayname)
findSpecificArrayId.visit(typeid)
if findSpecificArrayId.Found:
self.dimNames[arrayname] = list()
for n in xrange(self.arrayIds[arrayname]):
self.dimNames[arrayname].append(
node.arglist[count + 1 + n].name.name)
count += 1
class FindSpecificArrayId(NodeVisitor):
""" Finds a specific arrayId
"""
def __init__(self, arrayId):
self.arrayId = arrayId
self.Found = False
def visit_TypeId(self, node):
if node.name.name == self.arrayId:
self.Found = True
def reset(self, arrayId):
self.Found = False
self.arrayId = arrayId
class FindIncludes(NodeVisitor):
""" Return a list of include statements
"""
def __init__(self):
self.includes = list()
def visit_Include(self, node):
self.includes.append(node)
class InitIds(NodeVisitor):
""" Finds Id's in a for loop initialization.
More generally: Finds all Ids and adds them to a list.
"""
def __init__(self):
self.index = list()
def visit_Id(self, node):
self.index.append(node.name)
class FindUpperLimit(NodeVisitor):
""" Finds Id's in an for loop initialization.
More generally: Finds all Ids and adds them to a list.
"""
def __init__(self):
self.index = list()
def visit_Id(self, node):
self.index.append(node.name)
class Ids(NodeVisitor):
""" Finds all unique IDs, excluding function IDs"""
def __init__(self):
self.ids = set()
def visit_FuncDecl(self, node):
        if not node.compound.statements:
self.visit(node.arglist)
def visit_Id(self, node):
self.ids.add(node.name)
class LoopIds(NodeVisitor):
""" Finds all unique LoopIndices
-- Used in localMemory2
"""
def __init__(self, LoopIds):
self.LoopIds = LoopIds
self.ids = set()
def reset(self):
self.ids = set()
def visit_Id(self, node):
name = node.name
if name in self.LoopIds:
self.ids.add(name)
class LoopIndices(NodeVisitor):
""" Finds loop indices, the start and end values of the
        indices, and creates a mapping from a loop index to
        the ForLoop AST node that it indexes.
"""
def __init__(self):
self.index = list()
self.end = dict()
self.start = dict()
self.Loops = dict()
def visit_ForLoop(self, node):
self.Loops[node.init.lval.name.name] = node
IdVis = Ids()
IdVis.visit(node.init)
ids = list(IdVis.ids)
self.index.extend(ids)
self.visit(node.compound)
try:
self.end[ids[0]] = (node.cond.rval.name)
self.start[ids[0]] = (node.init.rval.value)
except AttributeError:
self.end[ids[0]] = 'Unknown'
self.start[ids[0]] = 'Unknown'
class ForLoops(NodeVisitor):
""" Returns first loop it encounters
"""
def __init__(self):
self.isFirst = True
self.ast = None
def reset(self):
self.isFirst = True
def visit_ForLoop(self, node):
if self.isFirst:
self.ast = node
self.isFirst = False
return node
class NumIndices(NodeVisitor):
""" Finds if there is two distinct loop indices
in an 1D array reference
"""
def __init__(self, numIndices, indices):
self.numIndices = numIndices
self.num = 0
self.indices = indices
self.found = set()
self.subIdx = set()
self.yes = False
def reset(self):
self.firstFound = False
self.subIdx = set()
def visit_Id(self, node):
if node.name in self.indices \
and node.name not in self.found \
and self.num < self.numIndices:
self.found.add(node.name)
self.subIdx.add(node.name)
self.num += 1
if self.num >= self.numIndices:
self.yes = True
class SwapUnrollID(NodeVisitor):
""" Swap a loop index that is being unrolled with
' " << <loop index> << "
"""
def __init__(self, UnrollLoops):
self.UnrollLoops = UnrollLoops
self.outsideHeader = True
def visit_ForLoop(self, node):
self.outsideHeader = False
self.visit(node.init)
self.visit(node.cond)
self.visit(node.inc)
self.outsideHeader = True
self.visit(node.compound)
def visit_Id(self, node):
if self.outsideHeader:
if node.name in self.UnrollLoops:
node.name = '\" << ' + node.name + ' << \"'
class RefToLoop(NodeVisitor):
""" Create a dict from array name to list of
arrayref list of loop indices that the arrayrefs are inside.
"""
def __init__(self, GridIndices):
self.stack = list()
self.RefToLoop = dict()
self.GridIndices = GridIndices
def visit_ForLoop(self, node):
name = node.init.lval.name.name
if name not in self.GridIndices:
self.stack.append(name)
self.outsideHeader = False
self.visit(node.init)
self.visit(node.cond)
self.visit(node.inc)
self.outsideHeader = True
self.visit(node.compound)
if name not in self.GridIndices:
self.stack.pop()
def visit_ArrayRef(self, node):
name = node.name.name
try:
self.RefToLoop[name].append(copy.deepcopy(self.stack))
except KeyError:
self.RefToLoop[name] = [copy.deepcopy(self.stack)]
class Arrays(NodeVisitor):
""" Finds array Ids """
def __init__(self, loopindices):
self.ids = set()
self.numIndices = dict()
self.indexIds = dict()
self.loopindices = loopindices
self.numSubscripts = dict()
self.Subscript = dict()
self.LoopArrays = dict()
self.SubIdx = dict()
def visit_ArrayRef(self, node):
name = node.name.name
self.ids.add(name)
numIndcs = NumIndices(99, self.loopindices)
if name in self.Subscript:
self.Subscript[name].append(node.subscript)
self.LoopArrays[name].append(node)
else:
self.Subscript[name] = [node.subscript]
self.LoopArrays[name] = [node]
listidx = []
for s in node.subscript:
numIndcs.visit(s)
if numIndcs.subIdx:
listidx.extend(list(numIndcs.subIdx))
else:
listidx.append(None)
numIndcs.reset()
if name in self.SubIdx:
self.SubIdx[name].append(listidx)
else:
self.SubIdx[name] = [listidx]
if name not in self.numIndices:
self.numIndices[name] = numIndcs.num
self.numSubscripts[name] = numIndcs.num
self.indexIds[name] = (numIndcs.found)
else:
self.indexIds[name].update((numIndcs.found))
## self.numSubscripts[name] = max(len(node.subscript),self.numIndices[name])
self.numSubscripts[name] = len(node.subscript)
for n in node.subscript:
self.visit(n)
class TypeIds(NodeVisitor):
""" Finds type Ids """
def __init__(self):
self.ids = set()
self.dictIds = dict()
def visit_TypeId(self, node):
name = node.name.name
self.ids.add(name)
self.dictIds[name] = node.type
def visit_ArrayTypeId(self, node):
name = node.name.name
self.ids.add(name)
self.dictIds[name] = copy.deepcopy(node.type)
if len(node.type) != 2:
# "ArrayTypeId: Need to check, type of array is ", node.type
## self.dictIds[name].append('*')
pass
class TypeIds2(NodeVisitor):
""" Return a set of TypeId nodes. Remove the type from the
TypeIds that we encounter.
"""
def __init__(self):
self.ids = set()
def visit_ForLoop(self, node):
self.visit(node.compound)
def visit_TypeId(self, node):
self.ids.add(copy.deepcopy(node))
node.type = []
def visit_ArrayTypeId(self, node):
self.ids.add(copy.deepcopy(node))
node.type = []
class NumBinOps(NodeVisitor):
""" Finds the number of BinOp in an 1D array subscript
"""
def __init__(self):
self.ops = list()
def visit_BinOp(self, node):
self.ops.append(node.op)
self.visit(node.lval)
self.visit(node.rval)
class Norm(NodeVisitor):
""" Normalizes subscripts to the form i * (width of j) + j
"""
def __init__(self, indices):
self.subscript = dict()
self.count = 0
self.indices = indices
def visit_ArrayRef(self, node):
if len(node.subscript) == 1:
numBinOps = NumBinOps()
binop = node.subscript[0]
numBinOps.visit(binop)
if len(numBinOps.ops) == 2:
if '+' in numBinOps.ops and '*' in numBinOps.ops:
if not isinstance(binop.lval, BinOp):
(binop.lval, binop.rval) = (binop.rval, binop.lval)
twoIndices = NumIndices(2, self.indices)
## twoIndices.visit(binop.lval)
## twoIndices.reset()
## twoIndices.visit(binop.rval)
twoIndices.visit(binop)
if twoIndices.yes:
if binop.lval.lval.name not in self.indices:
(binop.lval.lval.name, binop.lval.rval.name) = \
(binop.lval.rval.name, binop.lval.lval.name)
# convert to 2D
node.subscript = [Id(binop.lval.lval.name, node.coord), \
binop.rval]
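# Worked example (a sketch; the AST classes come from lan.lan_ast): with loop
# indices {'i', 'j'}, Norm rewrites the 1-D reference A[i*N + j] into the
# 2-D form A[i, j], i.e. node.subscript becomes [Id('i'), Id('j')].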
|
dikujepsen/OpenTran
|
v2.0/framework/unused/transf_visitor_unused.py
|
Python
|
mit
| 13,670
|
[
"VisIt"
] |
e912b3111e0efd15975712c3fc27cf52798866170ca9b21e971ee5032d9c8e69
|
# DIALS_ENABLE_COMMAND_LINE_COMPLETION
from __future__ import annotations
import copy
import os
import sys
import iotbx.phil
from cctbx import sgtbx
from rstbx.symmetry.constraints import parameter_reduction
import dials.util
from dials.algorithms.indexing.assign_indices import AssignIndicesGlobal
from dials.algorithms.scaling.scaling_library import determine_best_unit_cell
from dials.array_family import flex
from dials.util.filter_reflections import filtered_arrays_from_experiments_reflections
from dials.util.options import ArgumentParser, reflections_and_experiments_from_files
help_message = """
This program can be used to re-index an indexed.expt and/or indexed.refl
file from one setting to another. The change of basis operator can be
provided in h,k,l, or a,b,c or x,y,z conventions. By default the change of
basis operator will also be applied to the space group in the indexed.expt
file, however, optionally, a space group (including setting) to be applied
AFTER applying the change of basis operator can be provided.
Alternatively, to reindex an integrated dataset in the case of indexing ambiguity,
a reference dataset (models.expt and reflection.refl) in the same space
group can be specified. In this case, any potential twin operators are tested,
and the dataset is reindexed to the setting that gives the highest correlation
with the reference dataset.
Examples::
dials.reindex indexed.expt change_of_basis_op=b+c,a+c,a+b
dials.reindex indexed.refl change_of_basis_op=-b,a+b+2*c,-a
dials.reindex indexed.expt indexed.refl change_of_basis_op=l,h,k
dials.reindex indexed.expt indexed.refl reference.experiments=reference.expt
reference.reflections=reference.refl
"""
phil_scope = iotbx.phil.parse(
"""
change_of_basis_op = a,b,c
.type = str
hkl_offset = None
.type = ints(size=3)
space_group = None
.type = space_group
.help = "The space group to be applied AFTER applying the change of basis "
"operator."
reference {
experiments = None
.type = path
.help = "Reference experiment for determination of change of basis operator."
reflections = None
.type = path
.help = "Reference reflections to allow reindexing to consistent index between datasets."
}
output {
experiments = reindexed.expt
.type = str
.help = "The filename for reindexed experimental models"
reflections = reindexed.refl
.type = str
.help = "The filename for reindexed reflections"
}
""",
process_includes=True,
)
def derive_change_of_basis_op(from_hkl, to_hkl):
# exclude those reflections that we couldn't index
sel = (to_hkl != (0, 0, 0)) & (from_hkl != (0, 0, 0))
assert sel.count(True) >= 3 # need minimum of 3 equations ?
to_hkl = to_hkl.select(sel)
from_hkl = from_hkl.select(sel)
# for each miller index, solve a system of linear equations to find the
# change of basis operator
h, k, l = to_hkl.as_vec3_double().parts()
r = []
from scitbx.lstbx import normal_eqns
for i in range(3):
eqns = normal_eqns.linear_ls(3)
for index, hkl in zip((h, k, l)[i], from_hkl):
eqns.add_equation(
right_hand_side=index, design_matrix_row=flex.double(hkl), weight=1
)
eqns.solve()
r.extend(eqns.solution())
from scitbx import matrix
from scitbx.math import continued_fraction
denom = 12
r = [
int(denom * continued_fraction.from_real(r_, eps=1e-2).as_rational())
for r_ in r
]
r = matrix.sqr(r).transpose()
# print (1/denom)*r
# now convert into a cctbx change_of_basis_op object
change_of_basis_op = sgtbx.change_of_basis_op(
sgtbx.rt_mx(sgtbx.rot_mx(r, denominator=denom))
).inverse()
print(f"discovered change_of_basis_op={change_of_basis_op}")
# sanity check that this is the right cb_op
assert (change_of_basis_op.apply(from_hkl) == to_hkl).count(False) == 0
return change_of_basis_op
def reindex_experiments(experiments, cb_op, space_group=None):
reindexed_experiments = copy.deepcopy(experiments)
for crystal in reindexed_experiments.crystals():
cryst_reindexed = copy.deepcopy(crystal)
if space_group is not None:
# See also https://github.com/cctbx/cctbx_project/issues/424
cryst_reindexed.set_space_group(sgtbx.space_group("P 1"))
cryst_reindexed = cryst_reindexed.change_basis(cb_op)
cryst_reindexed.set_space_group(space_group)
S = parameter_reduction.symmetrize_reduce_enlarge(
cryst_reindexed.get_space_group()
)
S.set_orientation(cryst_reindexed.get_B())
S.symmetrize()
# Cache the scan-varying A matrices if applicable as these get lost
# when we call crystal.set_B()
A_varying = [
cryst_reindexed.get_A_at_scan_point(i)
for i in range(cryst_reindexed.num_scan_points)
]
# Update the symmetrized B matrix
cryst_reindexed.set_B(S.orientation.reciprocal_matrix())
# Reapply the scan-varying A matrices
cryst_reindexed.set_A_at_scan_points(A_varying)
else:
cryst_reindexed = cryst_reindexed.change_basis(cb_op)
crystal.update(cryst_reindexed)
return reindexed_experiments
@dials.util.show_mail_handle_errors()
def run(args=None):
import libtbx.load_env
from dials.util import Sorry
usage = "dials.reindex [options] indexed.expt indexed.refl"
parser = ArgumentParser(
usage=usage,
phil=phil_scope,
read_reflections=True,
read_experiments=True,
check_format=False,
epilog=help_message,
)
params, options = parser.parse_args(args, show_diff_phil=True)
reflections, experiments = reflections_and_experiments_from_files(
params.input.reflections, params.input.experiments
)
if len(experiments) == 0 and len(reflections) == 0:
parser.print_help()
return
if params.change_of_basis_op is None:
raise Sorry("Please provide a change_of_basis_op.")
reference_crystal = None
if params.reference.experiments is not None:
from dxtbx.serialize import load
reference_experiments = load.experiment_list(
params.reference.experiments, check_format=False
)
if len(reference_experiments.crystals()) == 1:
reference_crystal = reference_experiments.crystals()[0]
else:
# first check sg all same
sgs = [
expt.crystal.get_space_group().type().number() for expt in experiments
]
if len(set(sgs)) > 1:
raise Sorry(
"""The reference experiments have different space groups:
space group numbers found: %s
Please reanalyse the data so that space groups are consistent,
(consider using dials.reindex, dials.symmetry or dials.cosym)"""
% ", ".join(map(str, set(sgs)))
)
reference_crystal = reference_experiments.crystals()[0]
reference_crystal.unit_cell = determine_best_unit_cell(
reference_experiments
)
if params.reference.reflections is not None:
# First check that we have everything as expected for the reference reindexing
if params.reference.experiments is None:
raise Sorry(
"""For reindexing against a reference dataset, a reference
experiments file must also be specified with the option: reference.experiments= """
)
if not os.path.exists(params.reference.reflections):
raise Sorry("Could not locate reference dataset reflection file")
reference_reflections = flex.reflection_table().from_file(
params.reference.reflections
)
test_reflections = reflections[0]
if (
reference_crystal.get_space_group().type().number()
!= experiments.crystals()[0].get_space_group().type().number()
):
raise Sorry("Space group of input does not match reference")
# Set some flags to allow filtering, if wanting to reindex against
# reference with data that has not yet been through integration
if (
test_reflections.get_flags(test_reflections.flags.integrated_sum).count(
True
)
== 0
):
assert (
"intensity.sum.value" in test_reflections
), "No 'intensity.sum.value' in reflections"
test_reflections.set_flags(
flex.bool(test_reflections.size(), True),
test_reflections.flags.integrated_sum,
)
if (
reference_reflections.get_flags(
reference_reflections.flags.integrated_sum
).count(True)
== 0
):
assert (
"intensity.sum.value" in test_reflections
), "No 'intensity.sum.value in reference reflections"
reference_reflections.set_flags(
flex.bool(reference_reflections.size(), True),
reference_reflections.flags.integrated_sum,
)
# Make miller array of the two datasets
try:
test_miller_set = filtered_arrays_from_experiments_reflections(
experiments, [test_reflections]
)[0]
except ValueError:
raise Sorry("No reflections remain after filtering the test dataset")
try:
reference_miller_set = filtered_arrays_from_experiments_reflections(
reference_experiments, [reference_reflections]
)[0]
except ValueError:
raise Sorry("No reflections remain after filtering the reference dataset")
from dials.algorithms.symmetry.reindex_to_reference import (
determine_reindex_operator_against_reference,
)
change_of_basis_op = determine_reindex_operator_against_reference(
test_miller_set, reference_miller_set
)
elif len(experiments) and params.change_of_basis_op is libtbx.Auto:
if reference_crystal is not None:
if len(experiments.crystals()) > 1:
raise Sorry("Only one crystal can be processed at a time")
from dials.algorithms.indexing.compare_orientation_matrices import (
difference_rotation_matrix_axis_angle,
)
cryst = experiments.crystals()[0]
R, axis, angle, change_of_basis_op = difference_rotation_matrix_axis_angle(
cryst, reference_crystal
)
print(f"Change of basis op: {change_of_basis_op}")
print("Rotation matrix to transform input crystal to reference::")
print(R.mathematica_form(format="%.3f", one_row_per_line=True))
print(
f"Rotation of {angle:.3f} degrees",
"about axis (%.3f, %.3f, %.3f)" % axis,
)
elif len(reflections):
assert len(reflections) == 1
# always re-map reflections to reciprocal space
refl = reflections.deep_copy()
refl.centroid_px_to_mm(experiments)
refl.map_centroids_to_reciprocal_space(experiments)
# index the reflection list using the input experiments list
refl["id"] = flex.int(len(refl), -1)
index = AssignIndicesGlobal(tolerance=0.2)
index(refl, experiments)
hkl_expt = refl["miller_index"]
hkl_input = reflections[0]["miller_index"]
change_of_basis_op = derive_change_of_basis_op(hkl_input, hkl_expt)
# reset experiments list since we don't want to reindex this
experiments = []
else:
change_of_basis_op = sgtbx.change_of_basis_op(params.change_of_basis_op)
if len(experiments):
space_group = params.space_group
if space_group is not None:
space_group = space_group.group()
try:
experiments = reindex_experiments(
experiments, change_of_basis_op, space_group=space_group
)
except RuntimeError as e:
# Only catch specific errors here
if "Unsuitable value for rational rotation matrix." in str(e):
original_message = str(e).split(":")[-1].strip()
sys.exit(f"Error: {original_message} Is your change_of_basis_op valid?")
raise
print(f"Saving reindexed experimental models to {params.output.experiments}")
experiments.as_file(params.output.experiments)
if len(reflections):
assert len(reflections) == 1
reflections = reflections[0]
miller_indices = reflections["miller_index"]
if params.hkl_offset is not None:
h, k, l = miller_indices.as_vec3_double().parts()
h += params.hkl_offset[0]
k += params.hkl_offset[1]
l += params.hkl_offset[2]
miller_indices = flex.miller_index(h.iround(), k.iround(), l.iround())
non_integral_indices = change_of_basis_op.apply_results_in_non_integral_indices(
miller_indices
)
if non_integral_indices.size() > 0:
print(
"Removing %i/%i reflections (change of basis results in non-integral indices)"
% (non_integral_indices.size(), miller_indices.size())
)
sel = flex.bool(miller_indices.size(), True)
sel.set_selected(non_integral_indices, False)
miller_indices_reindexed = change_of_basis_op.apply(miller_indices.select(sel))
reflections["miller_index"].set_selected(sel, miller_indices_reindexed)
reflections["miller_index"].set_selected(~sel, (0, 0, 0))
print(f"Saving reindexed reflections to {params.output.reflections}")
reflections.as_file(params.output.reflections)
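def _reindex_sketch():
    """A library-free sketch of the idea behind derive_change_of_basis_op
    (hypothetical indices; numpy least squares stands in for the scitbx
    normal equations): every reindexed hkl is one integer 3x3 matrix M
    applied to the input hkl, so M is recoverable from a linear system.
    """
    import numpy as np
    M_true = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]])
    from_hkl = np.array([[1, 2, 3], [2, 0, 1], [1, 1, 4], [3, 2, 5]])
    to_hkl = from_hkl @ M_true.T
    # Solve from_hkl @ X = to_hkl, then round to the nearest integer matrix.
    X = np.linalg.lstsq(from_hkl, to_hkl, rcond=None)[0]
    M = np.rint(X.T).astype(int)
    assert (from_hkl @ M.T == to_hkl).all()
    return M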
if __name__ == "__main__":
run()
|
dials/dials
|
command_line/reindex.py
|
Python
|
bsd-3-clause
| 14,144
|
[
"CRYSTAL"
] |
24fe7a2c8053fc2af61445fc545a907c90633885a861fe433e2008eec5aec958
|
# ######################################################################
# Original code: #
# @author: Robert B. Von Dreele and Brian Toby #
# General Structure Analysis System - II (GSAS-II) #
# https://subversion.xor.aps.anl.gov/trac/pyGSAS #
# Copyright 2010, UChicago Argonne, LLC, Operator of #
# Argonne National Laboratory All rights reserved. #
# #
# Copyright (c) 2014, Brookhaven Science Associates, Brookhaven #
# National Laboratory. All rights reserved. #
# #
# Redistribution and use in source and binary forms, with or without #
# modification, are permitted provided that the following conditions #
# are met: #
# #
# * Redistributions of source code must retain the above copyright #
# notice, this list of conditions and the following disclaimer. #
# #
# * Redistributions in binary form must reproduce the above copyright #
# notice this list of conditions and the following disclaimer in #
# the documentation and/or other materials provided with the #
# distribution. #
# #
# * Neither the name of the Brookhaven Science Associates, Brookhaven #
# National Laboratory nor the names of its contributors may be used #
# to endorse or promote products derived from this software without #
# specific prior written permission. #
# #
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS #
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT #
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS #
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE #
# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, #
# INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES #
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR #
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) #
# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, #
# STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OTHERWISE) ARISING #
# IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE #
# POSSIBILITY OF SUCH DAMAGE. #
########################################################################
"""
This is the module for reading files created in GSAS file formats
https://subversion.xor.aps.anl.gov/trac/pyGSAS
"""
from __future__ import absolute_import, division, print_function
import os
import numpy as np
def gsas_reader(file):
"""
Parameters
----------
file: str
GSAS powder data file
Returns
--------
tth : ndarray
twotheta values (degrees) shape (N, ) array
intensity : ndarray
intensity values shape (N, ) array
err : ndarray
error value of intensity shape(N, ) array
"""
if os.path.splitext(file)[1] != ".gsas":
raise IOError("Provide a file with diffraction data saved in GSAS,"
" file extension has to be .gsas ")
    # find the file mode; it can be 'STD', 'ESD' or 'FXYE'
with open(file, 'r') as fi:
S = fi.readlines()[1]
mode = S.split()[9]
try:
tth, intensity, err = _func_look_up[mode](file)
except KeyError:
raise ValueError("Provide a correct mode of the GSAS file, "
"file modes could be in 'STD', 'ESD', 'FXYE' ")
return tth, intensity, err
def _get_fxye_data(file):
"""
Parameters
----------
file: str
GSAS powder data file
Return
------
tth : ndarray
twotheta values (degrees) shape (N, ) array
intensity : ndarray
intensity values shape (N, ) array
err : ndarray
error value of intensity shape(N, ) array
"""
tth = []
intensity = []
err = []
with open(file, 'r') as fi:
S = fi.readlines()[2:]
for line in S:
vals = line.split()
tth.append(float(vals[0]))
f = float(vals[1])
s = float(vals[2])
if f <= 0.0:
intensity.append(0.0)
else:
intensity.append(float(vals[1]))
if s > 0.0:
err.append(1.0/float(vals[2])**2)
else:
err.append(0.0)
return [np.array(tth), np.array(intensity), np.array(err)]
def _get_esd_data(file):
"""
Parameters
----------
file: str
GSAS powder data file
Return
------
tth : ndarray
twotheta values (degrees) shape (N, ) array
intensity : ndarray
intensity values shape (N, ) array
err : ndarray
error value of intensity shape(N, ) array
"""
tth = []
intensity = []
err = []
with open(file, 'r') as fi:
S = fi.readlines()[1:]
# convert from centidegrees to degrees
start = float(S[0].split()[5])/100.0
step = float(S[0].split()[6])/100.0
j = 0
for line in S[1:]:
for i in range(0, 80, 16):
xi = start + step*j
yi = _sfloat(line[i: i + 8])
ei = _sfloat(line[i + 8: i + 16])
tth.append(xi)
if yi > 0.0:
intensity.append(yi)
else:
intensity.append(0.0)
if ei > 0.0:
err.append(1.0/ei**2)
else:
err.append(0.0)
j += 1
return [np.array(tth), np.array(intensity), np.array(err)]
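def _esd_layout_demo():
    """Hedged illustration, not part of the reader: an ESD record packs
    five (intensity, esd) pairs into one 80-column line, which
    _get_esd_data reads in 16-character chunks of two 8-character fields.
    The line below is hypothetical data, not from a real GSAS file."""
    line = ("   100.0     5.0   110.0     6.0   120.0     7.0"
            "   130.0     8.0   140.0     9.0")
    for i in range(0, 80, 16):
        # first 8 characters are the intensity, next 8 the esd
        print(float(line[i:i + 8]), float(line[i + 8:i + 16]))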
def _get_std_data(file):
"""
Parameters
----------
file: str
GSAS powder data file
Returns
-------
tth : ndarray
two-theta values (degrees), shape (N,) array
intensity : ndarray
intensity values, shape (N,) array
err : ndarray
error values of the intensity, shape (N,) array
"""
tth = []
intensity = []
err = []
with open(file, 'r') as fi:
S = fi.readlines()[1:]
# convert from centidegrees to degrees
start = float(S[0].split()[5])/100.0
step = float(S[0].split()[6])/100.0
# number of data values (two-theta or intensity)
nch = float(S[0].split()[2])
j = 0
for line in S[1:]:
for i in range(0, 80, 8):
xi = start + step*j
ni = max(_sint(line[i: i + 2]), 1)
yi = max(_sfloat(line[i + 2: i + 8]), 0.0)
if yi:
vi = yi/ni
else:
yi = 0.0
vi = 0.0
if j < nch:
tth.append(xi)
if vi <= 0.:
intensity.append(0.)
err.append(0.)
else:
intensity.append(yi)
err.append(1.0/vi)
j += 1
return [np.array(tth), np.array(intensity), np.array(err)]
# look up which parser function to use according to the mode of the
# GSAS file; the mode can be "STD", "ESD" or "FXYE"
_func_look_up = {'STD': _get_std_data, 'ESD': _get_esd_data,
'FXYE': _get_fxye_data}
def _sfloat(S):
"""
Convert a string to a float, treating an all-blank string as zero.
Parameters
----------
S : str
string to convert; an all-blank string is treated as zero
Returns
-------
float
the converted value, or 0.0 for an all-blank string
"""
if S.strip():
return float(S)
else:
return 0.0
def _sint(S):
"""
Convert a string to an integer, treating an all-blank string as zero.
Parameters
----------
S : str
string to convert; an all-blank string is treated as zero
Returns
-------
int
the converted value, or 0 for an all-blank string
"""
if S.strip():
return int(S)
else:
return 0
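if __name__ == "__main__":
    # Hedged usage sketch, not part of the library itself; "sample.gsas"
    # is a hypothetical file name -- any STD/ESD/FXYE GSAS file works.
    tth, intensity, err = gsas_reader("sample.gsas")
    print(tth.shape, intensity.shape, err.shape)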
| yugangzhang/scikit-beam | skbeam/io/gsas_file_reader.py | Python | bsd-3-clause | 8,616 | ["Brian"] | 5c6b0570bc54aaaa0eb531019bae610968a7cb5c0e9f7af56bb6dda0dcc92ff5 |
# Hidden Markov Model Implementation
import pylab as pyl
import numpy as np
import matplotlib.pyplot as pp
#from enthought.mayavi import mlab
import scipy as scp
import scipy.ndimage as ni
import roslib; roslib.load_manifest('sandbox_tapo_darpa_m3')
import rospy
#import hrl_lib.mayavi2_util as mu
import hrl_lib.viz as hv
import hrl_lib.util as ut
import hrl_lib.matplotlib_util as mpu
import pickle
import ghmm
import sys
sys.path.insert(0, '/home/tapo/svn/robot1_data/usr/tapo/data_code/Classification/Data/Single_Contact_HMM/384')
from data_384 import Fmat_original
# Returns mu, sigma for 10 hidden states from feature vectors (123, 35) for the RF, SF, RM, SM models
def feature_to_mu_sigma(fvec):
index = 0
m,n = np.shape(fvec)
#print m,n
mu = np.matrix(np.zeros((10,1)))
sigma = np.matrix(np.zeros((10,1)))
DIVS = m/10
while (index < 10):
m_init = index*DIVS
temp_fvec = fvec[(m_init):(m_init+DIVS),0:]
#if index == 1:
#print temp_fvec
mu[index] = scp.mean(temp_fvec)
sigma[index] = scp.std(temp_fvec)
index = index+1
return mu,sigma
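# Hedged sanity check (not in the original script): for a synthetic feature
# matrix, the helper should return one (mu, sigma) pair per hidden state.
def _check_mu_sigma():
    demo = np.random.rand(120, 35)  # hypothetical (samples x trials) matrix
    mu, sigma = feature_to_mu_sigma(demo)
    assert np.shape(mu) == (10, 1) and np.shape(sigma) == (10, 1)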
# Returns sequence given raw data
def create_seq(fvec):
m,n = np.shape(fvec)
#print m,n
seq = np.matrix(np.zeros((10,n)))
DIVS = m/10
for i in range(n):
index = 0
while (index < 10):
m_init = index*DIVS
temp_fvec = fvec[(m_init):(m_init+DIVS),i]
#if index == 1:
#print temp_fvec
seq[index,i] = scp.mean(temp_fvec)
index = index+1
return seq
if __name__ == '__main__':
Fmat = Fmat_original
# Checking the Data-Matrix
m_tot, n_tot = np.shape(Fmat)
#print " "
#print 'Total_Matrix_Shape:',m_tot,n_tot
mu_rf,sigma_rf = feature_to_mu_sigma(Fmat[242:363,0:35])
mu_rm,sigma_rm = feature_to_mu_sigma(Fmat[242:363,35:70])
mu_sf,sigma_sf = feature_to_mu_sigma(Fmat[242:363,70:105])
mu_sm,sigma_sm = feature_to_mu_sigma(Fmat[242:363,105:140])
mu_obj1,sigma_obj1 = feature_to_mu_sigma(Fmat[242:363,140:141])
mu_obj2,sigma_obj2 = feature_to_mu_sigma(Fmat[242:363,141:142])
#print [mu_rf, sigma_rf]
# HMM - Implementation:
# 10 Hidden States
# Max. Force (for now), Contact Area (not now), and Contact Motion (not now) as continuous Gaussian observations from each hidden state
# Four HMM models: Rigid-Fixed, Soft-Fixed, Rigid-Movable, Soft-Movable
# Transition probabilities form an upper-triangular matrix (to be trained using Baum-Welch)
# A new object is classified according to the model it matches most closely.
F = ghmm.Float() # emission domain of this model
# A - Transition Matrix
A = [[0.1, 0.25, 0.15, 0.15, 0.1, 0.05, 0.05, 0.05, 0.05, 0.05],
[0.0, 0.1, 0.25, 0.25, 0.2, 0.1, 0.05, 0.05, 0.05, 0.05],
[0.0, 0.0, 0.1, 0.25, 0.25, 0.2, 0.05, 0.05, 0.05, 0.05],
[0.0, 0.0, 0.0, 0.1, 0.3, 0.30, 0.20, 0.1, 0.05, 0.05],
[0.0, 0.0, 0.0, 0.0, 0.1, 0.30, 0.30, 0.20, 0.05, 0.05],
[0.0, 0.0, 0.0, 0.0, 0.00, 0.1, 0.35, 0.30, 0.20, 0.05],
[0.0, 0.0, 0.0, 0.0, 0.00, 0.00, 0.2, 0.30, 0.30, 0.20],
[0.0, 0.0, 0.0, 0.0, 0.00, 0.00, 0.00, 0.2, 0.50, 0.30],
[0.0, 0.0, 0.0, 0.0, 0.00, 0.00, 0.00, 0.00, 0.4, 0.60],
[0.0, 0.0, 0.0, 0.0, 0.00, 0.00, 0.00, 0.00, 0.00, 1.00]]
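# Hedged sanity check (not in the original script): every row of a
# transition matrix should sum to 1, and hand-typed rows are easy to get
# wrong; ghmm is not guaranteed to validate this for you.
_row_sums = np.array(A).sum(axis=1)
if not np.allclose(_row_sums, 1.0):
    print("Warning: transition rows do not all sum to 1: %s" % _row_sums)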
# B - Emission Matrix, parameters of emission distributions in pairs of (mu, sigma)
B_rf = np.zeros((10,2))
B_rm = np.zeros((10,2))
B_sf = np.zeros((10,2))
B_sm = np.zeros((10,2))
for num_states in range(10):
B_rf[num_states,0] = mu_rf[num_states]
B_rf[num_states,1] = sigma_rf[num_states]
B_rm[num_states,0] = mu_rm[num_states]
B_rm[num_states,1] = sigma_rm[num_states]
B_sf[num_states,0] = mu_sf[num_states]
B_sf[num_states,1] = sigma_sf[num_states]
B_sm[num_states,0] = mu_sm[num_states]
B_sm[num_states,1] = sigma_sm[num_states]
B_rf = B_rf.tolist()
B_rm = B_rm.tolist()
B_sf = B_sf.tolist()
B_sm = B_sm.tolist()
# pi - initial probabilities per state
pi = [0.1] * 10
# generate RF, RM, SF, SM models from parameters
model_rf = ghmm.HMMFromMatrices(F,ghmm.GaussianDistribution(F), A, B_rf, pi) # Will be Trained
model_rm = ghmm.HMMFromMatrices(F,ghmm.GaussianDistribution(F), A, B_rm, pi) # Will be Trained
model_sf = ghmm.HMMFromMatrices(F,ghmm.GaussianDistribution(F), A, B_sf, pi) # Will be Trained
model_sm = ghmm.HMMFromMatrices(F,ghmm.GaussianDistribution(F), A, B_sm, pi) # Will be Trained
trial_number = 1
rf_final = np.matrix(np.zeros((28,1)))
rm_final = np.matrix(np.zeros((28,1)))
sf_final = np.matrix(np.zeros((28,1)))
sm_final = np.matrix(np.zeros((28,1)))
while (trial_number < 6):
# For Training
total_seq = Fmat[242:363,:]
m_total, n_total = np.shape(total_seq)
#print 'Total_Sequence_Shape:', m_total, n_total
if (trial_number == 1):
j = 5
total_seq_rf = total_seq[0:121,1:5]
total_seq_rm = total_seq[0:121,36:40]
total_seq_sf = total_seq[0:121,71:75]
total_seq_sm = total_seq[0:121,106:110]
while (j < 35):
total_seq_rf = np.column_stack((total_seq_rf,total_seq[0:121,j+1:j+5]))
total_seq_rm = np.column_stack((total_seq_rm,total_seq[0:121,j+36:j+40]))
total_seq_sf = np.column_stack((total_seq_sf,total_seq[0:121,j+71:j+75]))
total_seq_sm = np.column_stack((total_seq_sm,total_seq[0:121,j+106:j+110]))
j = j+5
if (trial_number == 2):
j = 5
total_seq_rf = np.column_stack((total_seq[0:121,0],total_seq[0:121,2:5]))
total_seq_rm = np.column_stack((total_seq[0:121,35],total_seq[0:121,37:40]))
total_seq_sf = np.column_stack((total_seq[0:121,70],total_seq[0:121,72:75]))
total_seq_sm = np.column_stack((total_seq[0:121,105],total_seq[0:121,107:110]))
while (j < 35):
total_seq_rf = np.column_stack((total_seq_rf,total_seq[0:121,j+0],total_seq[0:121,j+2:j+5]))
total_seq_rm = np.column_stack((total_seq_rm,total_seq[0:121,j+35],total_seq[0:121,j+37:j+40]))
total_seq_sf = np.column_stack((total_seq_sf,total_seq[0:121,j+70],total_seq[0:121,j+72:j+75]))
total_seq_sm = np.column_stack((total_seq_sm,total_seq[0:121,j+105],total_seq[0:121,j+107:j+110]))
j = j+5
if (trial_number == 3):
j = 5
total_seq_rf = np.column_stack((total_seq[0:121,0:2],total_seq[0:121,3:5]))
total_seq_rm = np.column_stack((total_seq[0:121,35:37],total_seq[0:121,38:40]))
total_seq_sf = np.column_stack((total_seq[0:121,70:72],total_seq[0:121,73:75]))
total_seq_sm = np.column_stack((total_seq[0:121,105:107],total_seq[0:121,108:110]))
while (j < 35):
total_seq_rf = np.column_stack((total_seq_rf,total_seq[0:121,j+0:j+2],total_seq[0:121,j+3:j+5]))
total_seq_rm = np.column_stack((total_seq_rm,total_seq[0:121,j+35:j+37],total_seq[0:121,j+38:j+40]))
total_seq_sf = np.column_stack((total_seq_sf,total_seq[0:121,j+70:j+72],total_seq[0:121,j+73:j+75]))
total_seq_sm = np.column_stack((total_seq_sm,total_seq[0:121,j+105:j+107],total_seq[0:121,j+108:j+110]))
j = j+5
if (trial_number == 4):
j = 5
total_seq_rf = np.column_stack((total_seq[0:121,0:3],total_seq[0:121,4:5]))
total_seq_rm = np.column_stack((total_seq[0:121,35:38],total_seq[0:121,39:40]))
total_seq_sf = np.column_stack((total_seq[0:121,70:73],total_seq[0:121,74:75]))
total_seq_sm = np.column_stack((total_seq[0:121,105:108],total_seq[0:121,109:110]))
while (j < 35):
total_seq_rf = np.column_stack((total_seq_rf,total_seq[0:121,j+0:j+3],total_seq[0:121,j+4:j+5]))
total_seq_rm = np.column_stack((total_seq_rm,total_seq[0:121,j+35:j+38],total_seq[0:121,j+39:j+40]))
total_seq_sf = np.column_stack((total_seq_sf,total_seq[0:121,j+70:j+73],total_seq[0:121,j+74:j+75]))
total_seq_sm = np.column_stack((total_seq_sm,total_seq[0:121,j+105:j+108],total_seq[0:121,j+109:j+110]))
j = j+5
if (trial_number == 5):
j = 5
total_seq_rf = total_seq[0:121,0:4]
total_seq_rm = total_seq[0:121,35:39]
total_seq_sf = total_seq[0:121,70:74]
total_seq_sm = total_seq[0:121,105:109]
while (j < 35):
total_seq_rf = np.column_stack((total_seq_rf,total_seq[0:121,j+0:j+4]))
total_seq_rm = np.column_stack((total_seq_rm,total_seq[0:121,j+35:j+39]))
total_seq_sf = np.column_stack((total_seq_sf,total_seq[0:121,j+70:j+74]))
total_seq_sm = np.column_stack((total_seq_sm,total_seq[0:121,j+105:j+109]))
j = j+5
train_seq_rf = (np.array(total_seq_rf).T).tolist()
train_seq_rm = (np.array(total_seq_rm).T).tolist()
train_seq_sf = (np.array(total_seq_sf).T).tolist()
train_seq_sm = (np.array(total_seq_sm).T).tolist()
#print train_seq_rf[:][27]
final_ts_rf = ghmm.SequenceSet(F,train_seq_rf)
final_ts_rm = ghmm.SequenceSet(F,train_seq_rm)
final_ts_sf = ghmm.SequenceSet(F,train_seq_sf)
final_ts_sm = ghmm.SequenceSet(F,train_seq_sm)
model_rf.baumWelch(final_ts_rf)
model_rm.baumWelch(final_ts_rm)
model_sf.baumWelch(final_ts_sf)
model_sm.baumWelch(final_ts_sm)
# For Testing
if (trial_number == 1):
j = 5
total_seq_rf = total_seq[0:121,0]
total_seq_rm = total_seq[0:121,35]
total_seq_sf = total_seq[0:121,70]
total_seq_sm = total_seq[0:121,105]
while (j < 35):
total_seq_rf = np.column_stack((total_seq_rf,total_seq[0:121,j]))
total_seq_rm = np.column_stack((total_seq_rm,total_seq[0:121,j+35]))
total_seq_sf = np.column_stack((total_seq_sf,total_seq[0:121,j+70]))
total_seq_sm = np.column_stack((total_seq_sm,total_seq[0:121,j+105]))
j = j+5
if (trial_number == 2):
j = 5
total_seq_rf = total_seq[0:121,1]
total_seq_rm = total_seq[0:121,36]
total_seq_sf = total_seq[0:121,71]
total_seq_sm = total_seq[0:121,106]
while (j < 35):
total_seq_rf = np.column_stack((total_seq_rf,total_seq[0:121,j+1]))
total_seq_rm = np.column_stack((total_seq_rm,total_seq[0:121,j+36]))
total_seq_sf = np.column_stack((total_seq_sf,total_seq[0:121,j+71]))
total_seq_sm = np.column_stack((total_seq_sm,total_seq[0:121,j+106]))
j = j+5
if (trial_number == 3):
j = 5
total_seq_rf = total_seq[0:121,2]
total_seq_rm = total_seq[0:121,37]
total_seq_sf = total_seq[0:121,72]
total_seq_sm = total_seq[0:121,107]
while (j < 35):
total_seq_rf = np.column_stack((total_seq_rf,total_seq[0:121,j+2]))
total_seq_rm = np.column_stack((total_seq_rm,total_seq[0:121,j+37]))
total_seq_sf = np.column_stack((total_seq_sf,total_seq[0:121,j+72]))
total_seq_sm = np.column_stack((total_seq_sm,total_seq[0:121,j+107]))
j = j+5
if (trial_number == 4):
j = 5
total_seq_rf = total_seq[0:121,3]
total_seq_rm = total_seq[0:121,38]
total_seq_sf = total_seq[0:121,73]
total_seq_sm = total_seq[0:121,108]
while (j < 35):
total_seq_rf = np.column_stack((total_seq_rf,total_seq[0:121,j+3]))
total_seq_rm = np.column_stack((total_seq_rm,total_seq[0:121,j+38]))
total_seq_sf = np.column_stack((total_seq_sf,total_seq[0:121,j+73]))
total_seq_sm = np.column_stack((total_seq_sm,total_seq[0:121,j+108]))
j = j+5
if (trial_number == 5):
j = 5
total_seq_rf = total_seq[0:121,4]
total_seq_rm = total_seq[0:121,39]
total_seq_sf = total_seq[0:121,74]
total_seq_sm = total_seq[0:121,109]
while (j < 35):
total_seq_rf = np.column_stack((total_seq_rf,total_seq[0:121,j+4]))
total_seq_rm = np.column_stack((total_seq_rm,total_seq[0:121,j+39]))
total_seq_sf = np.column_stack((total_seq_sf,total_seq[0:121,j+74]))
total_seq_sm = np.column_stack((total_seq_sm,total_seq[0:121,j+109]))
j = j+5
total_seq_obj = np.matrix(np.column_stack((total_seq_rf,total_seq_rm,total_seq_sf,total_seq_sm)))
#print np.shape(total_seq_obj)
#print np.shape(total_seq_rf)
rf = np.matrix(np.zeros(np.size(total_seq_obj,1)))
#print rf
#print np.size(total_seq_obj,1)
rm = np.matrix(np.zeros(np.size(total_seq_obj,1)))
sf = np.matrix(np.zeros(np.size(total_seq_obj,1)))
sm = np.matrix(np.zeros(np.size(total_seq_obj,1)))
k = 0
while (k < np.size(total_seq_obj,1)):
test_seq_obj = (np.array(total_seq_obj[0:121,k]).T).tolist()
new_test_seq_obj = np.array(sum(test_seq_obj,[]))
print new_test_seq_obj
ts_obj = new_test_seq_obj
#print np.shape(ts_obj)
final_ts_obj = ghmm.EmissionSequence(F,ts_obj.tolist())
# Find Viterbi Path
path_rf_obj = model_rf.viterbi(final_ts_obj)
path_rm_obj = model_rm.viterbi(final_ts_obj)
path_sf_obj = model_sf.viterbi(final_ts_obj)
path_sm_obj = model_sm.viterbi(final_ts_obj)
obj = max(path_rf_obj[1],path_rm_obj[1],path_sf_obj[1],path_sm_obj[1])
if obj == path_rf_obj[1]:
rf[0,k] = 1
elif obj == path_rm_obj[1]:
rm[0,k] = 1
elif obj == path_sf_obj[1]:
sf[0,k] = 1
else:
sm[0,k] = 1
k = k+1
#print rf.T
rf_final = rf_final + rf.T
rm_final = rm_final + rm.T
sf_final = sf_final + sf.T
sm_final = sm_final + sm.T
trial_number = trial_number + 1
#print rf_final
#print rm_final
#print sf_final
#print sm_final
# Confusion Matrix
cmat = np.zeros((4,4))
arrsum_rf = np.zeros((4,1))
arrsum_rm = np.zeros((4,1))
arrsum_sf = np.zeros((4,1))
arrsum_sm = np.zeros((4,1))
k = 7
i = 0
while (k < 29):
arrsum_rf[i] = np.sum(rf_final[k-7:k,0])
arrsum_rm[i] = np.sum(rm_final[k-7:k,0])
arrsum_sf[i] = np.sum(sf_final[k-7:k,0])
arrsum_sm[i] = np.sum(sm_final[k-7:k,0])
i = i+1
k = k+7
i=0
while (i < 4):
j=0
while (j < 4):
if (i == 0):
cmat[i][j] = arrsum_rf[j]
elif (i == 1):
cmat[i][j] = arrsum_rm[j]
elif (i == 2):
cmat[i][j] = arrsum_sf[j]
else:
cmat[i][j] = arrsum_sm[j]
j = j+1
i = i+1
#print cmat
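# Hedged equivalence check (not in the original): the nested loops above
# amount to stacking the per-class sums as rows of the confusion matrix.
assert np.allclose(cmat, np.hstack([arrsum_rf, arrsum_rm, arrsum_sf, arrsum_sm]).T)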
# Plot Confusion Matrix
Nlabels = 4
fig = pp.figure()
ax = fig.add_subplot(111)
figplot = ax.matshow(cmat, interpolation = 'nearest', origin = 'upper', extent=[0, Nlabels, 0, Nlabels])
ax.set_title('Performance of HMM Models')
pp.xlabel("Targets")
pp.ylabel("Predictions")
ax.set_xticks([0.5,1.5,2.5,3.5])
ax.set_xticklabels(['Rigid-Fixed', 'Rigid-Movable', 'Soft-Fixed', 'Soft-Movable'])
ax.set_yticks([3.5,2.5,1.5,0.5])
ax.set_yticklabels(['Rigid-Fixed', 'Rigid-Movable', 'Soft-Fixed', 'Soft-Movable'])
figbar = fig.colorbar(figplot)
i = 0
while (i < 4):
j = 0
while (j < 4):
pp.text(j+0.5,3.5-i,cmat[i][j])
j = j+1
i = i+1
pp.show()
| tapomayukh/projects_in_python | classification/Classification_with_HMM/Single_Contact_Classification/motion_only/hmm_crossvalidation_motion_10_states.py | Python | mit | 16,433 | ["Gaussian", "Mayavi"] | 37bf8eae31722a34d568974b0e5a2420bd788b9765fc68b301e377592f5793b4 |
"""Test the Elk-M1 Control config flow."""
from unittest.mock import MagicMock, patch
from homeassistant import config_entries, setup
from homeassistant.components.elkm1.const import DOMAIN
def mock_elk(invalid_auth=None, sync_complete=None):
"""Mock m1lib Elk."""
def handler_callbacks(type_, callback):
nonlocal invalid_auth, sync_complete
if type_ == "login":
if invalid_auth is not None:
callback(not invalid_auth)
elif type_ == "sync_complete" and sync_complete:
callback()
mocked_elk = MagicMock()
mocked_elk.add_handler.side_effect = handler_callbacks
return mocked_elk
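# Hedged illustration (not part of the test suite): the mock records the
# callbacks the config flow registers and fires them synchronously, so a
# "login" handler sees success and a "sync_complete" handler runs at once.
def _demo_mock_elk():
    elk = mock_elk(invalid_auth=False, sync_complete=True)
    elk.add_handler("login", lambda succeeded: print("login ok:", succeeded))
    elk.add_handler("sync_complete", lambda: print("synced"))
    # calling _demo_mock_elk() would print "login ok: True" then "synced"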
async def test_form_user_with_secure_elk(hass):
"""Test we can setup a secure elk."""
await setup.async_setup_component(hass, "persistent_notification", {})
result = await hass.config_entries.flow.async_init(
DOMAIN, context={"source": config_entries.SOURCE_USER}
)
assert result["type"] == "form"
assert result["errors"] == {}
mocked_elk = mock_elk(invalid_auth=False, sync_complete=True)
with patch(
"homeassistant.components.elkm1.config_flow.elkm1.Elk",
return_value=mocked_elk,
), patch(
"homeassistant.components.elkm1.async_setup", return_value=True
) as mock_setup, patch(
"homeassistant.components.elkm1.async_setup_entry",
return_value=True,
) as mock_setup_entry:
result2 = await hass.config_entries.flow.async_configure(
result["flow_id"],
{
"protocol": "secure",
"address": "1.2.3.4",
"username": "test-username",
"password": "test-password",
"temperature_unit": "°F",
"prefix": "",
},
)
await hass.async_block_till_done()
assert result2["type"] == "create_entry"
assert result2["title"] == "ElkM1"
assert result2["data"] == {
"auto_configure": True,
"host": "elks://1.2.3.4",
"password": "test-password",
"prefix": "",
"temperature_unit": "°F",
"username": "test-username",
}
assert len(mock_setup.mock_calls) == 1
assert len(mock_setup_entry.mock_calls) == 1
async def test_form_user_with_non_secure_elk(hass):
"""Test we can setup a non-secure elk."""
await setup.async_setup_component(hass, "persistent_notification", {})
result = await hass.config_entries.flow.async_init(
DOMAIN, context={"source": config_entries.SOURCE_USER}
)
assert result["type"] == "form"
assert result["errors"] == {}
mocked_elk = mock_elk(invalid_auth=None, sync_complete=True)
with patch(
"homeassistant.components.elkm1.config_flow.elkm1.Elk",
return_value=mocked_elk,
), patch(
"homeassistant.components.elkm1.async_setup", return_value=True
) as mock_setup, patch(
"homeassistant.components.elkm1.async_setup_entry",
return_value=True,
) as mock_setup_entry:
result2 = await hass.config_entries.flow.async_configure(
result["flow_id"],
{
"protocol": "non-secure",
"address": "1.2.3.4",
"temperature_unit": "°F",
"prefix": "guest_house",
},
)
await hass.async_block_till_done()
assert result2["type"] == "create_entry"
assert result2["title"] == "guest_house"
assert result2["data"] == {
"auto_configure": True,
"host": "elk://1.2.3.4",
"prefix": "guest_house",
"username": "",
"password": "",
"temperature_unit": "°F",
}
assert len(mock_setup.mock_calls) == 1
assert len(mock_setup_entry.mock_calls) == 1
async def test_form_user_with_serial_elk(hass):
"""Test we can setup a serial elk."""
await setup.async_setup_component(hass, "persistent_notification", {})
result = await hass.config_entries.flow.async_init(
DOMAIN, context={"source": config_entries.SOURCE_USER}
)
assert result["type"] == "form"
assert result["errors"] == {}
mocked_elk = mock_elk(invalid_auth=None, sync_complete=True)
with patch(
"homeassistant.components.elkm1.config_flow.elkm1.Elk",
return_value=mocked_elk,
), patch(
"homeassistant.components.elkm1.async_setup", return_value=True
) as mock_setup, patch(
"homeassistant.components.elkm1.async_setup_entry",
return_value=True,
) as mock_setup_entry:
result2 = await hass.config_entries.flow.async_configure(
result["flow_id"],
{
"protocol": "serial",
"address": "/dev/ttyS0:115200",
"temperature_unit": "°C",
"prefix": "",
},
)
await hass.async_block_till_done()
assert result2["type"] == "create_entry"
assert result2["title"] == "ElkM1"
assert result2["data"] == {
"auto_configure": True,
"host": "serial:///dev/ttyS0:115200",
"prefix": "",
"username": "",
"password": "",
"temperature_unit": "°C",
}
assert len(mock_setup.mock_calls) == 1
assert len(mock_setup_entry.mock_calls) == 1
async def test_form_cannot_connect(hass):
"""Test we handle cannot connect error."""
result = await hass.config_entries.flow.async_init(
DOMAIN, context={"source": config_entries.SOURCE_USER}
)
mocked_elk = mock_elk(invalid_auth=None, sync_complete=None)
with patch(
"homeassistant.components.elkm1.config_flow.elkm1.Elk",
return_value=mocked_elk,
), patch(
"homeassistant.components.elkm1.config_flow.VALIDATE_TIMEOUT",
0,
):
result2 = await hass.config_entries.flow.async_configure(
result["flow_id"],
{
"protocol": "secure",
"address": "1.2.3.4",
"username": "test-username",
"password": "test-password",
"temperature_unit": "°F",
"prefix": "",
},
)
assert result2["type"] == "form"
assert result2["errors"] == {"base": "cannot_connect"}
async def test_form_invalid_auth(hass):
"""Test we handle invalid auth error."""
result = await hass.config_entries.flow.async_init(
DOMAIN, context={"source": config_entries.SOURCE_USER}
)
mocked_elk = mock_elk(invalid_auth=True, sync_complete=True)
with patch(
"homeassistant.components.elkm1.config_flow.elkm1.Elk",
return_value=mocked_elk,
):
result2 = await hass.config_entries.flow.async_configure(
result["flow_id"],
{
"protocol": "secure",
"address": "1.2.3.4",
"username": "test-username",
"password": "test-password",
"temperature_unit": "°F",
"prefix": "",
},
)
assert result2["type"] == "form"
assert result2["errors"] == {"base": "invalid_auth"}
async def test_form_import(hass):
"""Test we get the form with import source."""
await setup.async_setup_component(hass, "persistent_notification", {})
mocked_elk = mock_elk(invalid_auth=False, sync_complete=True)
with patch(
"homeassistant.components.elkm1.config_flow.elkm1.Elk",
return_value=mocked_elk,
), patch(
"homeassistant.components.elkm1.async_setup", return_value=True
) as mock_setup, patch(
"homeassistant.components.elkm1.async_setup_entry",
return_value=True,
) as mock_setup_entry:
result = await hass.config_entries.flow.async_init(
DOMAIN,
context={"source": config_entries.SOURCE_IMPORT},
data={
"host": "elks://1.2.3.4",
"username": "friend",
"password": "love",
"temperature_unit": "C",
"auto_configure": False,
"keypad": {
"enabled": True,
"exclude": [],
"include": [[1, 1], [2, 2], [3, 3]],
},
"output": {"enabled": False, "exclude": [], "include": []},
"counter": {"enabled": False, "exclude": [], "include": []},
"plc": {"enabled": False, "exclude": [], "include": []},
"prefix": "ohana",
"setting": {"enabled": False, "exclude": [], "include": []},
"area": {"enabled": False, "exclude": [], "include": []},
"task": {"enabled": False, "exclude": [], "include": []},
"thermostat": {"enabled": False, "exclude": [], "include": []},
"zone": {
"enabled": True,
"exclude": [[15, 15], [28, 208]],
"include": [],
},
},
)
await hass.async_block_till_done()
assert result["type"] == "create_entry"
assert result["title"] == "ohana"
assert result["data"] == {
"auto_configure": False,
"host": "elks://1.2.3.4",
"keypad": {"enabled": True, "exclude": [], "include": [[1, 1], [2, 2], [3, 3]]},
"output": {"enabled": False, "exclude": [], "include": []},
"password": "love",
"plc": {"enabled": False, "exclude": [], "include": []},
"prefix": "ohana",
"setting": {"enabled": False, "exclude": [], "include": []},
"area": {"enabled": False, "exclude": [], "include": []},
"counter": {"enabled": False, "exclude": [], "include": []},
"task": {"enabled": False, "exclude": [], "include": []},
"temperature_unit": "C",
"thermostat": {"enabled": False, "exclude": [], "include": []},
"username": "friend",
"zone": {"enabled": True, "exclude": [[15, 15], [28, 208]], "include": []},
}
assert len(mock_setup.mock_calls) == 1
assert len(mock_setup_entry.mock_calls) == 1
| sander76/home-assistant | tests/components/elkm1/test_config_flow.py | Python | apache-2.0 | 10,094 | ["Elk"] | cc5ce977376b861f396bf9880e98253464e1646b86b74717ab902882eb6ad77c |
"""SCons.Util
Various utility functions go here.
"""
#
# Copyright (c) 2001 - 2016 The SCons Foundation
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
__revision__ = "src/engine/SCons/Util.py rel_2.5.1:3735:9dc6cee5c168 2016/11/03 14:02:02 bdbaddog"
import os
import sys
import copy
import re
import types
from collections import UserDict, UserList, UserString
# Don't "from types import ..." these because we need to get at the
# types module later to look for UnicodeType.
InstanceType = types.InstanceType
MethodType = types.MethodType
FunctionType = types.FunctionType
try: unicode
except NameError: UnicodeType = None
else: UnicodeType = unicode
def dictify(keys, values, result={}):
    # note: the mutable default 'result' is shared across calls that do not
    # pass their own dictionary, so repeated calls accumulate into it
for k, v in zip(keys, values):
result[k] = v
return result
_altsep = os.altsep
if _altsep is None and sys.platform == 'win32':
# My ActivePython 2.0.1 doesn't set os.altsep! What gives?
_altsep = '/'
if _altsep:
def rightmost_separator(path, sep):
return max(path.rfind(sep), path.rfind(_altsep))
else:
def rightmost_separator(path, sep):
return path.rfind(sep)
# First two from the Python Cookbook, just for completeness.
# (Yeah, yeah, YAGNI...)
def containsAny(str, set):
"""Check whether sequence str contains ANY of the items in set."""
for c in set:
if c in str: return 1
return 0
def containsAll(str, set):
"""Check whether sequence str contains ALL of the items in set."""
for c in set:
if c not in str: return 0
return 1
def containsOnly(str, set):
"""Check whether sequence str contains ONLY items in set."""
for c in str:
if c not in set: return 0
return 1
def splitext(path):
"Same as os.path.splitext() but faster."
sep = rightmost_separator(path, os.sep)
dot = path.rfind('.')
# An ext is only real if it has at least one non-digit char
if dot > sep and not containsOnly(path[dot:], "0123456789."):
return path[:dot],path[dot:]
else:
return path,""
def updrive(path):
"""
Make the drive letter (if any) upper case.
This is useful because Windows is inconsistent on the case
of the drive letter, which can cause inconsistencies when
calculating command signatures.
"""
drive, rest = os.path.splitdrive(path)
if drive:
path = drive.upper() + rest
return path
class NodeList(UserList):
"""This class is almost exactly like a regular list of Nodes
(actually it can hold any object), with one important difference.
If you try to get an attribute from this list, it will return that
attribute from every item in the list. For example:
>>> someList = NodeList([ ' foo ', ' bar ' ])
>>> someList.strip()
[ 'foo', 'bar' ]
"""
def __nonzero__(self):
return len(self.data) != 0
def __str__(self):
return ' '.join(map(str, self.data))
def __iter__(self):
return iter(self.data)
def __call__(self, *args, **kwargs):
result = [x(*args, **kwargs) for x in self.data]
return self.__class__(result)
def __getattr__(self, name):
result = [getattr(x, name) for x in self.data]
return self.__class__(result)
_get_env_var = re.compile(r'^\$([_a-zA-Z]\w*|{[_a-zA-Z]\w*})$')
def get_environment_var(varstr):
"""Given a string, first determine if it looks like a reference
to a single environment variable, like "$FOO" or "${FOO}".
If so, return that variable with no decorations ("FOO").
If not, return None."""
mo=_get_env_var.match(to_String(varstr))
if mo:
var = mo.group(1)
if var[0] == '{':
return var[1:-1]
else:
return var
else:
return None
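def _env_var_examples():
    # Hedged examples (not in the original source); safe to call once the
    # module is fully loaded, since get_environment_var uses to_String,
    # which is defined further down in this file.
    assert get_environment_var("$FOO") == "FOO"
    assert get_environment_var("${BAR}") == "BAR"
    assert get_environment_var("$FOO/bar") is None  # not a lone variable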
class DisplayEngine(object):
print_it = True
def __call__(self, text, append_newline=1):
if not self.print_it:
return
if append_newline: text = text + '\n'
try:
sys.stdout.write(unicode(text))
except IOError:
# Stdout might be connected to a pipe that has been closed
# by now. The most likely reason for the pipe being closed
# is that the user has pressed ctrl-c. If this is the case,
# then SCons is currently shutting down. We therefore ignore
# IOError here so that SCons can continue and shut down
# properly so that the .sconsign is correctly written
# before SCons exits.
pass
def set_mode(self, mode):
self.print_it = mode
def render_tree(root, child_func, prune=0, margin=[0], visited=None):
"""
Render a tree of nodes into an ASCII tree view.
root - the root node of the tree
child_func - the function called to get the children of a node
prune - don't visit the same node twice
margin - the format of the left margin to use for children of root.
1 results in a pipe, and 0 results in no pipe.
visited - a dictionary of visited nodes in the current branch if not prune,
or in the whole tree if prune.
"""
rname = str(root)
# Initialize 'visited' dict, if required
if visited is None:
visited = {}
children = child_func(root)
retval = ""
for pipe in margin[:-1]:
if pipe:
retval = retval + "| "
else:
retval = retval + " "
if rname in visited:
return retval + "+-[" + rname + "]\n"
retval = retval + "+-" + rname + "\n"
if not prune:
visited = copy.copy(visited)
visited[rname] = 1
for i in range(len(children)):
margin.append(i<len(children)-1)
retval = retval + render_tree(children[i], child_func, prune, margin, visited)
margin.pop()
return retval
IDX = lambda N: N and 1 or 0
def print_tree(root, child_func, prune=0, showtags=0, margin=[0], visited=None):
"""
Print a tree of nodes. This is like render_tree, except it prints
lines directly instead of creating a string representation in memory,
so that huge trees can be printed.
root - the root node of the tree
child_func - the function called to get the children of a node
prune - don't visit the same node twice
showtags - print status information to the left of each node line
margin - the format of the left margin to use for children of root.
1 results in a pipe, and 0 results in no pipe.
visited - a dictionary of visited nodes in the current branch if not prune,
or in the whole tree if prune.
"""
rname = str(root)
# Initialize 'visited' dict, if required
if visited is None:
visited = {}
if showtags:
if showtags == 2:
legend = (' E = exists\n' +
' R = exists in repository only\n' +
' b = implicit builder\n' +
' B = explicit builder\n' +
' S = side effect\n' +
' P = precious\n' +
' A = always build\n' +
' C = current\n' +
' N = no clean\n' +
' H = no cache\n' +
'\n')
sys.stdout.write(unicode(legend))
tags = ['[']
tags.append(' E'[IDX(root.exists())])
tags.append(' R'[IDX(root.rexists() and not root.exists())])
tags.append(' BbB'[[0,1][IDX(root.has_explicit_builder())] +
[0,2][IDX(root.has_builder())]])
tags.append(' S'[IDX(root.side_effect)])
tags.append(' P'[IDX(root.precious)])
tags.append(' A'[IDX(root.always_build)])
tags.append(' C'[IDX(root.is_up_to_date())])
tags.append(' N'[IDX(root.noclean)])
tags.append(' H'[IDX(root.nocache)])
tags.append(']')
else:
tags = []
def MMM(m):
return [" ","| "][m]
margins = list(map(MMM, margin[:-1]))
children = child_func(root)
if prune and rname in visited and children:
sys.stdout.write(''.join(tags + margins + ['+-[', rname, ']']) + u'\n')
return
sys.stdout.write(''.join(tags + margins + ['+-', rname]) + u'\n')
visited[rname] = 1
if children:
margin.append(1)
idx = IDX(showtags)
for C in children[:-1]:
print_tree(C, child_func, prune, idx, margin, visited)
margin[-1] = 0
print_tree(children[-1], child_func, prune, idx, margin, visited)
margin.pop()
# Functions for deciding if things are like various types, mainly to
# handle UserDict, UserList and UserString like their underlying types.
#
# Yes, all of this manual testing breaks polymorphism, and the real
# Pythonic way to do all of this would be to just try it and handle the
# exception, but handling the exception when it's not the right type is
# often too slow.
# We are using the following trick to speed up these
# functions. Default arguments are used to take a snapshot of
# the global functions and constants used by these functions. This
# transforms accesses to global variables into local-variable
# accesses (i.e. LOAD_FAST instead of LOAD_GLOBAL).
DictTypes = (dict, UserDict)
ListTypes = (list, UserList)
SequenceTypes = (list, tuple, UserList)
# Note that profiling data shows a speed-up when comparing
# explicitly with str and unicode instead of simply comparing
# with basestring. (at least on Python 2.5.1)
StringTypes = (str, unicode, UserString)
# Empirically, it is faster to check explicitly for str and
# unicode than for basestring.
BaseStringTypes = (str, unicode)
def is_Dict(obj, isinstance=isinstance, DictTypes=DictTypes):
return isinstance(obj, DictTypes)
def is_List(obj, isinstance=isinstance, ListTypes=ListTypes):
return isinstance(obj, ListTypes)
def is_Sequence(obj, isinstance=isinstance, SequenceTypes=SequenceTypes):
return isinstance(obj, SequenceTypes)
def is_Tuple(obj, isinstance=isinstance, tuple=tuple):
return isinstance(obj, tuple)
def is_String(obj, isinstance=isinstance, StringTypes=StringTypes):
return isinstance(obj, StringTypes)
def is_Scalar(obj, isinstance=isinstance, StringTypes=StringTypes, SequenceTypes=SequenceTypes):
# Profiling shows that there is an impressive speed-up of 2x
# when explicitly checking for strings instead of just not
# sequence when the argument (i.e. obj) is already a string.
# But, if obj is a not string then it is twice as fast to
# check only for 'not sequence'. The following code therefore
# assumes that the obj argument is a string most of the time.
return isinstance(obj, StringTypes) or not isinstance(obj, SequenceTypes)
def do_flatten(sequence, result, isinstance=isinstance,
StringTypes=StringTypes, SequenceTypes=SequenceTypes):
for item in sequence:
if isinstance(item, StringTypes) or not isinstance(item, SequenceTypes):
result.append(item)
else:
do_flatten(item, result)
def flatten(obj, isinstance=isinstance, StringTypes=StringTypes,
SequenceTypes=SequenceTypes, do_flatten=do_flatten):
"""Flatten a sequence to a non-nested list.
Flatten() converts either a single scalar or a nested sequence
to a non-nested list. Note that flatten() considers strings
to be scalars instead of sequences like Python would.
"""
if isinstance(obj, StringTypes) or not isinstance(obj, SequenceTypes):
return [obj]
result = []
for item in obj:
if isinstance(item, StringTypes) or not isinstance(item, SequenceTypes):
result.append(item)
else:
do_flatten(item, result)
return result
def flatten_sequence(sequence, isinstance=isinstance, StringTypes=StringTypes,
SequenceTypes=SequenceTypes, do_flatten=do_flatten):
"""Flatten a sequence to a non-nested list.
Same as flatten(), but it does not handle the single scalar
case. This is slightly more efficient when one knows that
the sequence to flatten can not be a scalar.
"""
result = []
for item in sequence:
if isinstance(item, StringTypes) or not isinstance(item, SequenceTypes):
result.append(item)
else:
do_flatten(item, result)
return result
# Generic convert-to-string functions that abstract away whether or
# not the Python we're executing has Unicode support. The wrapper
# to_String_for_signature() will use a for_signature() method if the
# specified object has one.
#
def to_String(s,
isinstance=isinstance, str=str,
UserString=UserString, BaseStringTypes=BaseStringTypes):
if isinstance(s,BaseStringTypes):
# Early out when already a string!
return s
elif isinstance(s, UserString):
# s.data can only be either a unicode or a regular
# string. Please see the UserString initializer.
return s.data
else:
return str(s)
def to_String_for_subst(s,
isinstance=isinstance, str=str, to_String=to_String,
BaseStringTypes=BaseStringTypes, SequenceTypes=SequenceTypes,
UserString=UserString):
# Note that the test cases are sorted by order of probability.
if isinstance(s, BaseStringTypes):
return s
elif isinstance(s, SequenceTypes):
l = []
for e in s:
l.append(to_String_for_subst(e))
return ' '.join(l)
elif isinstance(s, UserString):
# s.data can only be either a unicode or a regular
# string. Please see the UserString initializer.
return s.data
else:
return str(s)
def to_String_for_signature(obj, to_String_for_subst=to_String_for_subst,
AttributeError=AttributeError):
try:
f = obj.for_signature
except AttributeError:
return to_String_for_subst(obj)
else:
return f()
# The SCons "semi-deep" copy.
#
# This makes separate copies of lists (including UserList objects)
# dictionaries (including UserDict objects) and tuples, but just copies
# references to anything else it finds.
#
# A special case is any object that has a __semi_deepcopy__() method,
# which we invoke to create the copy. Currently only used by
# BuilderDict to actually prevent the copy operation (as invalid on that object).
#
# The dispatch table approach used here is a direct rip-off from the
# normal Python copy module.
_semi_deepcopy_dispatch = d = {}
def semi_deepcopy_dict(x, exclude = [] ):
copy = {}
for key, val in x.items():
# The regular Python copy.deepcopy() also deepcopies the key,
# as follows:
#
# copy[semi_deepcopy(key)] = semi_deepcopy(val)
#
# Doesn't seem like we need to, but we'll comment it just in case.
if key not in exclude:
copy[key] = semi_deepcopy(val)
return copy
d[dict] = semi_deepcopy_dict
def _semi_deepcopy_list(x):
return list(map(semi_deepcopy, x))
d[list] = _semi_deepcopy_list
def _semi_deepcopy_tuple(x):
return tuple(map(semi_deepcopy, x))
d[tuple] = _semi_deepcopy_tuple
def semi_deepcopy(x):
copier = _semi_deepcopy_dispatch.get(type(x))
if copier:
return copier(x)
else:
if hasattr(x, '__semi_deepcopy__') and callable(x.__semi_deepcopy__):
return x.__semi_deepcopy__()
elif isinstance(x, UserDict):
return x.__class__(semi_deepcopy_dict(x))
elif isinstance(x, UserList):
return x.__class__(_semi_deepcopy_list(x))
return x
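# Hedged demonstration (not in the original source): semi_deepcopy copies
# containers but keeps shared references to everything else.
def _semi_deepcopy_demo():
    orig = {'flags': ['-O2'], 'env': object()}
    dup = semi_deepcopy(orig)
    assert dup['flags'] is not orig['flags']  # the list was copied
    assert dup['env'] is orig['env']          # other objects stay shared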
class Proxy(object):
"""A simple generic Proxy class, forwarding all calls to
subject. So, for the benefit of the python newbie, what does
this really mean? Well, it means that you can take an object, let's
call it 'objA', and wrap it in this Proxy class, with a statement
like this
proxyObj = Proxy(objA),
Then, if in the future, you do something like this
x = proxyObj.var1,
since Proxy does not have a 'var1' attribute (but presumably objA does),
the request actually is equivalent to saying
x = objA.var1
Inherit from this class to create a Proxy.
Note that, with new-style classes, this does *not* work transparently
for Proxy subclasses that use special .__*__() method names, because
those names are now bound to the class, not the individual instances.
You now need to know in advance which .__*__() method names you want
to pass on to the underlying Proxy object, and specifically delegate
their calls like this:
class Foo(Proxy):
__str__ = Delegate('__str__')
"""
def __init__(self, subject):
"""Wrap an object as a Proxy object"""
self._subject = subject
def __getattr__(self, name):
"""Retrieve an attribute from the wrapped object. If the named
attribute doesn't exist, AttributeError is raised"""
return getattr(self._subject, name)
def get(self):
"""Retrieve the entire wrapped object"""
return self._subject
def __cmp__(self, other):
if issubclass(other.__class__, self._subject.__class__):
return cmp(self._subject, other)
return cmp(self.__dict__, other.__dict__)
class Delegate(object):
"""A Python Descriptor class that delegates attribute fetches
to an underlying wrapped subject of a Proxy. Typical use:
class Foo(Proxy):
__str__ = Delegate('__str__')
"""
def __init__(self, attribute):
self.attribute = attribute
def __get__(self, obj, cls):
if isinstance(obj, cls):
return getattr(obj._subject, self.attribute)
else:
return self
# attempt to load the windows registry module:
can_read_reg = 0
try:
import winreg
can_read_reg = 1
hkey_mod = winreg
RegOpenKeyEx = winreg.OpenKeyEx
RegEnumKey = winreg.EnumKey
RegEnumValue = winreg.EnumValue
RegQueryValueEx = winreg.QueryValueEx
RegError = winreg.error
except ImportError:
try:
import win32api
import win32con
can_read_reg = 1
hkey_mod = win32con
RegOpenKeyEx = win32api.RegOpenKeyEx
RegEnumKey = win32api.RegEnumKey
RegEnumValue = win32api.RegEnumValue
RegQueryValueEx = win32api.RegQueryValueEx
RegError = win32api.error
except ImportError:
class _NoError(Exception):
pass
RegError = _NoError
WinError = None
# Make sure we have a definition of WindowsError so we can
# run platform-independent tests of Windows functionality on
# platforms other than Windows. (WindowsError is, in fact, an
# OSError subclass on Windows.)
class PlainWindowsError(OSError):
pass
try:
WinError = WindowsError
except NameError:
WinError = PlainWindowsError
if can_read_reg:
HKEY_CLASSES_ROOT = hkey_mod.HKEY_CLASSES_ROOT
HKEY_LOCAL_MACHINE = hkey_mod.HKEY_LOCAL_MACHINE
HKEY_CURRENT_USER = hkey_mod.HKEY_CURRENT_USER
HKEY_USERS = hkey_mod.HKEY_USERS
def RegGetValue(root, key):
"""This utility function returns a value in the registry
without having to open the key first. Only available on
Windows platforms with a version of Python that can read the
registry. Returns the same thing as
SCons.Util.RegQueryValueEx, except you just specify the entire
path to the value, and don't have to bother opening the key
first. So:
Instead of:
k = SCons.Util.RegOpenKeyEx(SCons.Util.HKEY_LOCAL_MACHINE,
r'SOFTWARE\Microsoft\Windows\CurrentVersion')
out = SCons.Util.RegQueryValueEx(k,
'ProgramFilesDir')
You can write:
out = SCons.Util.RegGetValue(SCons.Util.HKEY_LOCAL_MACHINE,
r'SOFTWARE\Microsoft\Windows\CurrentVersion\ProgramFilesDir')
"""
# I would use os.path.split here, but it's not a filesystem
# path...
p = key.rfind('\\') + 1
keyp = key[:p-1] # -1 to omit trailing slash
val = key[p:]
k = RegOpenKeyEx(root, keyp)
return RegQueryValueEx(k,val)
else:
HKEY_CLASSES_ROOT = None
HKEY_LOCAL_MACHINE = None
HKEY_CURRENT_USER = None
HKEY_USERS = None
def RegGetValue(root, key):
raise WinError
def RegOpenKeyEx(root, key):
raise WinError
if sys.platform == 'win32':
def WhereIs(file, path=None, pathext=None, reject=[]):
if path is None:
try:
path = os.environ['PATH']
except KeyError:
return None
if is_String(path):
path = path.split(os.pathsep)
if pathext is None:
try:
pathext = os.environ['PATHEXT']
except KeyError:
pathext = '.COM;.EXE;.BAT;.CMD'
if is_String(pathext):
pathext = pathext.split(os.pathsep)
for ext in pathext:
if ext.lower() == file[-len(ext):].lower():
pathext = ['']
break
if not is_List(reject) and not is_Tuple(reject):
reject = [reject]
for dir in path:
f = os.path.join(dir, file)
for ext in pathext:
fext = f + ext
if os.path.isfile(fext):
try:
reject.index(fext)
except ValueError:
return os.path.normpath(fext)
continue
return None
elif os.name == 'os2':
def WhereIs(file, path=None, pathext=None, reject=[]):
if path is None:
try:
path = os.environ['PATH']
except KeyError:
return None
if is_String(path):
path = path.split(os.pathsep)
if pathext is None:
pathext = ['.exe', '.cmd']
for ext in pathext:
if ext.lower() == file[-len(ext):].lower():
pathext = ['']
break
if not is_List(reject) and not is_Tuple(reject):
reject = [reject]
for dir in path:
f = os.path.join(dir, file)
for ext in pathext:
fext = f + ext
if os.path.isfile(fext):
try:
reject.index(fext)
except ValueError:
return os.path.normpath(fext)
continue
return None
else:
def WhereIs(file, path=None, pathext=None, reject=[]):
import stat
if path is None:
try:
path = os.environ['PATH']
except KeyError:
return None
if is_String(path):
path = path.split(os.pathsep)
if not is_List(reject) and not is_Tuple(reject):
reject = [reject]
for d in path:
f = os.path.join(d, file)
if os.path.isfile(f):
try:
st = os.stat(f)
except OSError:
# os.stat() raises OSError, not IOError if the file
# doesn't exist, so in this case we let IOError get
# raised so as to not mask possibly serious disk or
# network issues.
continue
if stat.S_IMODE(st[stat.ST_MODE]) & 0111:
try:
reject.index(f)
except ValueError:
return os.path.normpath(f)
continue
return None
def PrependPath(oldpath, newpath, sep = os.pathsep,
delete_existing=1, canonicalize=None):
"""This prepends newpath elements to the given oldpath. Will only
add any particular path once (leaving the first one it encounters
and ignoring the rest, to preserve path order), and will
os.path.normpath and os.path.normcase all paths to help assure
this. This can also handle the case where the given old path
variable is a list instead of a string, in which case a list will
be returned instead of a string.
Example:
Old Path: "/foo/bar:/foo"
New Path: "/biz/boom:/foo"
Result: "/biz/boom:/foo:/foo/bar"
If delete_existing is 0, then adding a path that exists will
not move it to the beginning; it will stay where it is in the
list.
If canonicalize is not None, it is applied to each element of
newpath before use.
"""
orig = oldpath
is_list = 1
paths = orig
if not is_List(orig) and not is_Tuple(orig):
paths = paths.split(sep)
is_list = 0
if is_String(newpath):
newpaths = newpath.split(sep)
elif not is_List(newpath) and not is_Tuple(newpath):
newpaths = [ newpath ] # might be a Dir
else:
newpaths = newpath
if canonicalize:
newpaths=list(map(canonicalize, newpaths))
if not delete_existing:
# First uniquify the old paths, making sure to
# preserve the first instance (in Unix/Linux,
# the first one wins), and remembering them in normpaths.
# Then insert the new paths at the head of the list
# if they're not already in the normpaths list.
result = []
normpaths = []
for path in paths:
if not path:
continue
normpath = os.path.normpath(os.path.normcase(path))
if normpath not in normpaths:
result.append(path)
normpaths.append(normpath)
newpaths.reverse() # since we're inserting at the head
for path in newpaths:
if not path:
continue
normpath = os.path.normpath(os.path.normcase(path))
if normpath not in normpaths:
result.insert(0, path)
normpaths.append(normpath)
paths = result
else:
newpaths = newpaths + paths # prepend new paths
normpaths = []
paths = []
# now we add them only if they are unique
for path in newpaths:
normpath = os.path.normpath(os.path.normcase(path))
if path and normpath not in normpaths:
paths.append(path)
normpaths.append(normpath)
if is_list:
return paths
else:
return sep.join(paths)
def AppendPath(oldpath, newpath, sep = os.pathsep,
delete_existing=1, canonicalize=None):
"""This appends new path elements to the given old path. Will
only add any particular path once (leaving the last one it
encounters and ignoring the rest, to preserve path order), and
will os.path.normpath and os.path.normcase all paths to help
assure this. This can also handle the case where the given old
path variable is a list instead of a string, in which case a list
will be returned instead of a string.
Example:
Old Path: "/foo/bar:/foo"
New Path: "/biz/boom:/foo"
Result: "/foo/bar:/biz/boom:/foo"
If delete_existing is 0, then adding a path that exists
will not move it to the end; it will stay where it is in the list.
If canonicalize is not None, it is applied to each element of
newpath before use.
"""
orig = oldpath
is_list = 1
paths = orig
if not is_List(orig) and not is_Tuple(orig):
paths = paths.split(sep)
is_list = 0
if is_String(newpath):
newpaths = newpath.split(sep)
elif not is_List(newpath) and not is_Tuple(newpath):
newpaths = [ newpath ] # might be a Dir
else:
newpaths = newpath
if canonicalize:
newpaths=list(map(canonicalize, newpaths))
if not delete_existing:
# add old paths to result, then
# add new paths if not already present
# (I thought about using a dict for normpaths for speed,
# but it's not clear hashing the strings would be faster
# than linear searching these typically short lists.)
result = []
normpaths = []
for path in paths:
if not path:
continue
result.append(path)
normpaths.append(os.path.normpath(os.path.normcase(path)))
for path in newpaths:
if not path:
continue
normpath = os.path.normpath(os.path.normcase(path))
if normpath not in normpaths:
result.append(path)
normpaths.append(normpath)
paths = result
else:
# start w/ new paths, add old ones if not present,
# then reverse.
newpaths = paths + newpaths # append new paths
newpaths.reverse()
normpaths = []
paths = []
# now we add them only if they are unique
for path in newpaths:
normpath = os.path.normpath(os.path.normcase(path))
if path and normpath not in normpaths:
paths.append(path)
normpaths.append(normpath)
paths.reverse()
if is_list:
return paths
else:
return sep.join(paths)
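# Hedged examples mirroring the docstrings above (':' passed explicitly so
# the results do not depend on os.pathsep):
assert PrependPath("/foo/bar:/foo", "/biz/boom:/foo", sep=":") == \
    "/biz/boom:/foo:/foo/bar"
assert AppendPath("/foo/bar:/foo", "/biz/boom:/foo", sep=":") == \
    "/foo/bar:/biz/boom:/foo"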
def AddPathIfNotExists(env_dict, key, path, sep=os.pathsep):
"""This function will take 'key' out of the dictionary
'env_dict', then add the path 'path' to that key if it is not
already there. This treats the value of env_dict[key] as if it
has a similar format to the PATH variable...a list of paths
separated by tokens. The 'path' will get added to the list if it
is not already there."""
try:
is_list = 1
paths = env_dict[key]
if not is_List(env_dict[key]):
paths = paths.split(sep)
is_list = 0
if os.path.normcase(path) not in list(map(os.path.normcase, paths)):
paths = [ path ] + paths
if is_list:
env_dict[key] = paths
else:
env_dict[key] = sep.join(paths)
except KeyError:
env_dict[key] = path
if sys.platform == 'cygwin':
def get_native_path(path):
"""Transforms an absolute path into a native path for the system. In
Cygwin, this converts from a Cygwin path to a Windows one."""
return os.popen('cygpath -w ' + path).read().replace('\n', '')
else:
def get_native_path(path):
"""Transforms an absolute path into a native path for the system.
Non-Cygwin version, just leave the path alone."""
return path
display = DisplayEngine()
def Split(arg):
if is_List(arg) or is_Tuple(arg):
return arg
elif is_String(arg):
return arg.split()
else:
return [arg]
class CLVar(UserList):
"""A class for command-line construction variables.
This is a list that uses Split() to split an initial string along
white-space arguments, and similarly to split any strings that get
added. This allows us to Do the Right Thing with Append() and
Prepend() (as well as straight Python foo = env['VAR'] + 'arg1
arg2') regardless of whether a user adds a list or a string to a
command-line construction variable.
"""
def __init__(self, seq = []):
UserList.__init__(self, Split(seq))
def __add__(self, other):
return UserList.__add__(self, CLVar(other))
def __radd__(self, other):
return UserList.__radd__(self, CLVar(other))
def __coerce__(self, other):
return (self, CLVar(other))
def __str__(self):
return ' '.join(self.data)
# A dictionary that preserves the order in which items are added.
# Submitted by David Benjamin to ActiveState's Python Cookbook web site:
# http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/107747
# Including fixes/enhancements from the follow-on discussions.
class OrderedDict(UserDict):
def __init__(self, dict = None):
self._keys = []
UserDict.__init__(self, dict)
def __delitem__(self, key):
UserDict.__delitem__(self, key)
self._keys.remove(key)
def __setitem__(self, key, item):
UserDict.__setitem__(self, key, item)
if key not in self._keys: self._keys.append(key)
def clear(self):
UserDict.clear(self)
self._keys = []
def copy(self):
dict = OrderedDict()
dict.update(self)
return dict
def items(self):
return list(zip(self._keys, list(self.values())))
def keys(self):
return self._keys[:]
def popitem(self):
try:
key = self._keys[-1]
except IndexError:
raise KeyError('dictionary is empty')
val = self[key]
del self[key]
return (key, val)
def setdefault(self, key, failobj = None):
UserDict.setdefault(self, key, failobj)
if key not in self._keys: self._keys.append(key)
def update(self, dict):
for (key, val) in dict.items():
self.__setitem__(key, val)
def values(self):
return list(map(self.get, self._keys))
class Selector(OrderedDict):
"""A callable ordered dictionary that maps file suffixes to
dictionary values. We preserve the order in which items are added
so that get_suffix() calls always return the first suffix added."""
def __call__(self, env, source, ext=None):
if ext is None:
try:
ext = source[0].get_suffix()
except IndexError:
ext = ""
try:
return self[ext]
except KeyError:
# Try to perform Environment substitution on the keys of
# the dictionary before giving up.
s_dict = {}
for (k,v) in self.items():
if k is not None:
s_k = env.subst(k)
if s_k in s_dict:
# We only raise an error when variables point
# to the same suffix. If one suffix is literal
# and a variable suffix contains this literal,
# the literal wins and we don't raise an error.
raise KeyError(s_dict[s_k][0], k, s_k)
s_dict[s_k] = (k,v)
try:
return s_dict[ext][1]
except KeyError:
try:
return self[None]
except KeyError:
return None
if sys.platform == 'cygwin':
# On Cygwin, os.path.normcase() lies, so just report back the
# fact that the underlying Windows OS is case-insensitive.
def case_sensitive_suffixes(s1, s2):
return 0
else:
def case_sensitive_suffixes(s1, s2):
return (os.path.normcase(s1) != os.path.normcase(s2))
def adjustixes(fname, pre, suf, ensure_suffix=False):
if pre:
path, fn = os.path.split(os.path.normpath(fname))
if fn[:len(pre)] != pre:
fname = os.path.join(path, pre + fn)
# Only append a suffix if the suffix we're going to add isn't already
# there, and if either we've been asked to ensure the specific suffix
# is present or there's no suffix on it at all.
if suf and fname[-len(suf):] != suf and \
(ensure_suffix or not splitext(fname)[1]):
fname = fname + suf
return fname
# From Tim Peters,
# http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52560
# ASPN: Python Cookbook: Remove duplicates from a sequence
# (Also in the printed Python Cookbook.)
def unique(s):
"""Return a list of the elements in s, but without duplicates.
For example, unique([1,2,3,1,2,3]) is some permutation of [1,2,3],
unique("abcabc") some permutation of ["a", "b", "c"], and
unique(([1, 2], [2, 3], [1, 2])) some permutation of
[[2, 3], [1, 2]].
For best speed, all sequence elements should be hashable. Then
unique() will usually work in linear time.
If not possible, the sequence elements should enjoy a total
ordering, and if list(s).sort() doesn't raise TypeError it's
assumed that they do enjoy a total ordering. Then unique() will
usually work in O(N*log2(N)) time.
If that's not possible either, the sequence elements must support
equality-testing. Then unique() will usually work in quadratic
time.
"""
n = len(s)
if n == 0:
return []
# Try using a dict first, as that's the fastest and will usually
# work. If it doesn't work, it will usually fail quickly, so it
# usually doesn't cost much to *try* it. It requires that all the
# sequence elements be hashable, and support equality comparison.
u = {}
try:
for x in s:
u[x] = 1
except TypeError:
pass # move on to the next method
else:
return list(u.keys())
del u
# We can't hash all the elements. Second fastest is to sort,
# which brings the equal elements together; then duplicates are
# easy to weed out in a single pass.
# NOTE: Python's list.sort() was designed to be efficient in the
# presence of many duplicate elements. This isn't true of all
# sort functions in all languages or libraries, so this approach
# is more effective in Python than it may be elsewhere.
try:
t = sorted(s)
except TypeError:
pass # move on to the next method
else:
assert n > 0
last = t[0]
lasti = i = 1
while i < n:
if t[i] != last:
t[lasti] = last = t[i]
lasti = lasti + 1
i = i + 1
return t[:lasti]
del t
# Brute force is all that's left.
u = []
for x in s:
if x not in u:
u.append(x)
return u
# From Alex Martelli,
# http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52560
# ASPN: Python Cookbook: Remove duplicates from a sequence
# First comment, dated 2001/10/13.
# (Also in the printed Python Cookbook.)
def uniquer(seq, idfun=None):
if idfun is None:
def idfun(x): return x
seen = {}
result = []
for item in seq:
marker = idfun(item)
# in old Python versions:
# if seen.has_key(marker)
# but in new ones:
if marker in seen: continue
seen[marker] = 1
result.append(item)
return result
# A more efficient implementation of Alex's uniquer(), this avoids the
# idfun() argument and function-call overhead by assuming that all
# items in the sequence are hashable.
def uniquer_hashables(seq):
seen = {}
result = []
for item in seq:
#if not item in seen:
if item not in seen:
seen[item] = 1
result.append(item)
return result
# Recipe 19.11 "Reading Lines with Continuation Characters",
# by Alex Martelli, straight from the Python CookBook (2nd edition).
def logical_lines(physical_lines, joiner=''.join):
logical_line = []
for line in physical_lines:
stripped = line.rstrip()
if stripped.endswith('\\'):
# a line which continues w/the next physical line
logical_line.append(stripped[:-1])
else:
# a line which does not continue, end of logical line
logical_line.append(line)
yield joiner(logical_line)
logical_line = []
if logical_line:
# end of sequence implies end of last logical line
yield joiner(logical_line)
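# Minimal usage sketch (illustrative, not part of the original source):
# joining backslash-continued lines from an in-memory list.
#
#   >>> list(logical_lines(['a \\\n', 'b\n', 'c\n']))
#   ['a b\n', 'c\n']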
class LogicalLines(object):
""" Wrapper class for the logical_lines method.
Allows us to read all "logical" lines at once from a
given file object.
"""
def __init__(self, fileobj):
self.fileobj = fileobj
def readlines(self):
result = [l for l in logical_lines(self.fileobj)]
return result
class UniqueList(UserList):
def __init__(self, seq = []):
UserList.__init__(self, seq)
self.unique = True
def __make_unique(self):
if not self.unique:
self.data = uniquer_hashables(self.data)
self.unique = True
def __lt__(self, other):
self.__make_unique()
return UserList.__lt__(self, other)
def __le__(self, other):
self.__make_unique()
return UserList.__le__(self, other)
def __eq__(self, other):
self.__make_unique()
return UserList.__eq__(self, other)
def __ne__(self, other):
self.__make_unique()
return UserList.__ne__(self, other)
def __gt__(self, other):
self.__make_unique()
return UserList.__gt__(self, other)
def __ge__(self, other):
self.__make_unique()
return UserList.__ge__(self, other)
def __cmp__(self, other):
self.__make_unique()
return UserList.__cmp__(self, other)
def __len__(self):
self.__make_unique()
return UserList.__len__(self)
def __getitem__(self, i):
self.__make_unique()
return UserList.__getitem__(self, i)
def __setitem__(self, i, item):
UserList.__setitem__(self, i, item)
self.unique = False
def __getslice__(self, i, j):
self.__make_unique()
return UserList.__getslice__(self, i, j)
def __setslice__(self, i, j, other):
UserList.__setslice__(self, i, j, other)
self.unique = False
def __add__(self, other):
result = UserList.__add__(self, other)
result.unique = False
return result
def __radd__(self, other):
result = UserList.__radd__(self, other)
result.unique = False
return result
def __iadd__(self, other):
result = UserList.__iadd__(self, other)
result.unique = False
return result
def __mul__(self, other):
result = UserList.__mul__(self, other)
result.unique = False
return result
def __rmul__(self, other):
result = UserList.__rmul__(self, other)
result.unique = False
return result
def __imul__(self, other):
result = UserList.__imul__(self, other)
result.unique = False
return result
def append(self, item):
UserList.append(self, item)
self.unique = False
    def insert(self, i, item):
        # fixed signature: UserList.insert() takes both an index and an item
        UserList.insert(self, i, item)
        self.unique = False
def count(self, item):
self.__make_unique()
return UserList.count(self, item)
def index(self, item):
self.__make_unique()
return UserList.index(self, item)
def reverse(self):
self.__make_unique()
UserList.reverse(self)
def sort(self, *args, **kwds):
self.__make_unique()
return UserList.sort(self, *args, **kwds)
def extend(self, other):
UserList.extend(self, other)
self.unique = False
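# Behaviour sketch (illustrative, not part of the original source):
# de-duplication is deferred until the list is read, so writes stay cheap.
#
#   ul = UniqueList([1, 2])
#   ul.append(1)   # just marks the list dirty; no dedup happens yet
#   len(ul)        # -> 2; __len__ triggers __make_unique()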
class Unbuffered(object):
"""
A proxy class that wraps a file object, flushing after every write,
and delegating everything else to the wrapped object.
"""
def __init__(self, file):
self.file = file
self.softspace = 0 ## backward compatibility; not supported in Py3k
def write(self, arg):
try:
self.file.write(arg)
self.file.flush()
        except IOError:
            # Stdout might be connected to a pipe that has been closed
            # by now. The most likely reason for the pipe being closed
            # is that the user has pressed ctrl-c. If this is the case,
            # then SCons is currently shutting down. We therefore ignore
            # IOErrors here so that SCons can continue and shut down
            # properly so that the .sconsign is correctly written
            # before SCons exits.
            pass
def __getattr__(self, attr):
return getattr(self.file, attr)
def make_path_relative(path):
""" makes an absolute path name to a relative pathname.
"""
if os.path.isabs(path):
drive_s,path = os.path.splitdrive(path)
import re
if not drive_s:
path=re.compile("/*(.*)").findall(path)[0]
else:
path=path[1:]
assert( not os.path.isabs( path ) ), path
return path
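# Examples (illustrative, not part of the original source; POSIX paths):
#
#   make_path_relative('/usr/lib/python')   -> 'usr/lib/python'
#   make_path_relative('already/relative')  -> 'already/relative'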
# The original idea for AddMethod() and RenameFunction() come from the
# following post to the ActiveState Python Cookbook:
#
# ASPN: Python Cookbook : Install bound methods in an instance
# http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/223613
#
# That code was a little fragile, though, so the following changes
# have been wrung on it:
#
# * Switched the installmethod() "object" and "function" arguments,
# so the order reflects that the left-hand side is the thing being
# "assigned to" and the right-hand side is the value being assigned.
#
# * Changed explicit type-checking to the "try: klass = object.__class__"
# block in installmethod() below so that it still works with the
# old-style classes that SCons uses.
#
# * Replaced the by-hand creation of methods and functions with use of
# the "new" module, as alluded to in Alex Martelli's response to the
# following Cookbook post:
#
# ASPN: Python Cookbook : Dynamically added methods to a class
# http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/81732
def AddMethod(obj, function, name=None):
"""
Adds either a bound method to an instance or an unbound method to
a class. If name is ommited the name of the specified function
is used by default.
Example:
a = A()
def f(self, x, y):
self.z = x + y
AddMethod(f, A, "add")
a.add(2, 4)
print a.z
AddMethod(lambda self, i: self.l[i], a, "listIndex")
print a.listIndex(5)
"""
if name is None:
name = function.func_name
else:
function = RenameFunction(function, name)
if hasattr(obj, '__class__') and obj.__class__ is not type:
# "obj" is an instance, so it gets a bound method.
setattr(obj, name, MethodType(function, obj, obj.__class__))
else:
# "obj" is a class, so it gets an unbound method.
setattr(obj, name, MethodType(function, None, obj))
def RenameFunction(function, name):
"""
Returns a function identical to the specified function, but with
the specified name.
"""
return FunctionType(function.func_code,
function.func_globals,
name,
function.func_defaults)
md5 = False
def MD5signature(s):
return str(s)
def MD5filesignature(fname, chunksize=65536):
f = open(fname, "rb")
result = f.read()
f.close()
return result
try:
import hashlib
except ImportError:
pass
else:
if hasattr(hashlib, 'md5'):
md5 = True
def MD5signature(s):
m = hashlib.md5()
m.update(str(s))
return m.hexdigest()
def MD5filesignature(fname, chunksize=65536):
m = hashlib.md5()
f = open(fname, "rb")
while True:
blck = f.read(chunksize)
if not blck:
break
m.update(str(blck))
f.close()
return m.hexdigest()
def MD5collect(signatures):
"""
Collects a list of signatures into an aggregate signature.
signatures - a list of signatures
returns - the aggregate signature
"""
if len(signatures) == 1:
return signatures[0]
else:
return MD5signature(', '.join(signatures))
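# Usage sketch (illustrative, not part of the original source); assumes
# the hashlib-backed MD5signature defined further below is in effect:
#
#   MD5collect(['only-one'])     # a single signature is returned unchanged
#   MD5collect(['one', 'two'])   # == MD5signature('one, two')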
def silent_intern(x):
"""
Perform sys.intern() on the passed argument and return the result.
If the input is ineligible (e.g. a unicode string) the original argument is
returned and no exception is thrown.
"""
try:
return sys.intern(x)
except TypeError:
return x
# From Dinu C. Gherman,
# Python Cookbook, second edition, recipe 6.17, p. 277.
# Also:
# http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/68205
# ASPN: Python Cookbook: Null Object Design Pattern
class Null(object):
""" Null objects always and reliably "do nothing." """
def __new__(cls, *args, **kwargs):
        if '_instance' not in vars(cls):
cls._instance = super(Null, cls).__new__(cls, *args, **kwargs)
return cls._instance
def __init__(self, *args, **kwargs):
pass
def __call__(self, *args, **kwargs):
return self
def __repr__(self):
return "Null(0x%08X)" % id(self)
def __nonzero__(self):
return False
def __getattr__(self, name):
return self
def __setattr__(self, name, value):
return self
def __delattr__(self, name):
return self
class NullSeq(Null):
def __len__(self):
return 0
def __iter__(self):
return iter(())
def __getitem__(self, i):
return self
def __delitem__(self, i):
return self
def __setitem__(self, i, v):
return self
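# Usage sketch (illustrative, not part of the original source): a Null
# object absorbs any attribute access, call or assignment, which makes it
# a safe stand-in for "no logger"/"no handler" slots.
#
#   log = Null()
#   log.warn('ignored').error('also ignored')   # every step returns the Null
#   bool(log)                                   # -> False (__nonzero__, Py2)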
del __revision__
# Local Variables:
# tab-width:4
# indent-tabs-mode:nil
# End:
# vim: set expandtab tabstop=4 shiftwidth=4:
|
EmanueleCannizzaro/scons
|
src/engine/SCons/Util.py
|
Python
|
mit
| 50,268
|
[
"VisIt"
] |
c62b7eb380bed5b58a0cb8f9444babd1175f56db45639e9785f705746837f88c
|
# Copyright (C) 2010-2018 The ESPResSo project
#
# This file is part of ESPResSo.
#
# ESPResSo is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ESPResSo is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from __future__ import print_function
import sys
import unittest as ut
import numpy as np
import espressomd
class AnalyzeGyration(ut.TestCase):
system = espressomd.System(box_l=[1.0, 1.0, 1.0])
np.random.seed(1234)
cube_len = 4
type_cube = 0
type_stick = 1
    @classmethod
    def setUpClass(cls):
        box_l = 20.0
        cube_centre = 0.5 * (cls.cube_len - 1)
        cls.system.box_l = np.array([box_l, box_l, box_l])
        cls.system.cell_system.set_n_square(use_verlet_lists=False)
        # 4x4x4 cube
        for x, y, z in np.ndindex((cls.cube_len, cls.cube_len, cls.cube_len)):
            cls.system.part.add(pos=[x, y, z], type=cls.type_cube)
        # long stick in z, force z as principal axis
        for x, y, z in np.ndindex((1, 1, 10)):
            cls.system.part.add(
                pos=[x + cube_centre, y + cube_centre, z + cls.cube_len], type=cls.type_stick)
        # two small nubs in y, force y as secondary axis
        cls.system.part.add(
            pos=[cube_centre, cls.cube_len, cube_centre], type=cls.type_stick)
        cls.system.part.add(
            pos=[cube_centre, -1, cube_centre], type=cls.type_stick)
def test_gyration_tensor_cube(self):
# get results
res = self.system.analysis.gyration_tensor(p_type=self.type_cube)
rg = self.system.analysis.calc_rg(
chain_start=0, number_of_chains=1, chain_length=self.cube_len**3)[0]
        # make sure all eigenvalues (for the cube) are identical; compare
        # pairwise, since np.allclose() only accepts two arrays (the third
        # positional argument is rtol, not another array)
        self.assertTrue(
            np.allclose(np.abs(res['eva0'][0]), np.abs(res['eva1'][0]), atol=1e-6))
        self.assertTrue(
            np.allclose(np.abs(res['eva1'][0]), np.abs(res['eva2'][0]), atol=1e-6))
        self.assertTrue(np.allclose(rg**2, res['Rg^2'], atol=1e-6))
def test_gyration_tensor(self):
# get results
res = self.system.analysis.gyration_tensor(
p_type=[self.type_stick, self.type_cube])
rg = self.system.analysis.calc_rg(
chain_start=0, number_of_chains=1, chain_length=len(self.system.part[:]))[0]
#test if principal and secondary axis is [0,0,1] and [0,1,0]
self.assertTrue(
np.allclose(np.abs(res['eva0'][1]), [0., 0., 1.], atol=1e-6))
self.assertTrue(
np.allclose(np.abs(res['eva1'][1]), [0., 1., 0.], atol=1e-6))
self.assertTrue(
np.allclose(np.abs(res['eva2'][1]), [1., 0., 0.], atol=1e-6))
self.assertTrue(np.allclose(rg**2, res['Rg^2'], atol=1e-6))
    def test_mom_inertia(self):
sqr_dist = np.sum(
(self.system.analysis.center_of_mass(p_type=0) - self.system.part.select(type=0).pos)**2, axis=0)
mom_I = self.system.analysis.moment_of_inertia_matrix(p_type=0)
        # the cube case should have zero off-diagonal components
        self.assertTrue(
            np.allclose([mom_I[0, 1], mom_I[0, 2], mom_I[1, 2], mom_I[1, 0], mom_I[2, 0], mom_I[2, 1]], np.zeros(6), atol=1e-6))
        # diagonal: I_xx = sum(y^2 + z^2), I_yy = sum(x^2 + z^2), I_zz = sum(x^2 + y^2)
        self.assertTrue(np.allclose([mom_I[0, 0], mom_I[1, 1], mom_I[2, 2]], [
            sqr_dist[1] + sqr_dist[2], sqr_dist[0] + sqr_dist[2], sqr_dist[0] + sqr_dist[1]], atol=1e-6))
if __name__ == "__main__":
print("Features: ", espressomd.features())
ut.main()
|
hmenke/espresso
|
testsuite/python/analyze_gyration_tensor.py
|
Python
|
gpl-3.0
| 3,913
|
[
"ESPResSo"
] |
fe02939123a801c64215b7945a0ade0e093c8a0f15206cf82bc6720188db33fe
|
# Demo program using ManGrating.
#
# Copyright (C) 2010-2011 Huang Xin
#
# See LICENSE.TXT that came with this file.
"""
USAGE:
Move the mouse cursor to change the position of the grating.
Scroll the mouse wheel to change the orientation.
Press right arrow to increase the spatial frequency.
Press left arrow to decrease the spatial frequency.
Press up arrow to increase the temporal frequency.
...
"""
from __future__ import division
from StimControl.LightStim.Core import DefaultScreen
from StimControl.LightStim.LightData import dictattr
from StimControl.LightStim.FrameControl import FrameSweep
from StimControl.LightStim.ManGrating import ManGrating
# Manual Grating experiment parameters, all must be scalars
DefaultScreen(['control','left','right'])
p = dictattr()
# mask, one of: None, 'gaussian', or 'circle'
p.mask = 'circle'
p.maskSizeStepDeg = 0.5
# initial grating phase
p.phase0 = 0
# grating mean luminance (0-1)
p.ml = 0.5
# grating contrast (0-1)
p.contrast = 1
# background brightness (0-1)
p.bgbrightness = 0.5
# antialiase the bar?
p.antialiase = True
# flash the grating?
p.flash = False
# duration of each on period (sec)
p.flashduration = 0.5
# duration of each off period (sec)
p.flashinterval = 0.3
# factor to change bar width and height by left/right/up/down key
p.sizemultiplier = 1.02
# factor to change temporal freq by on up/down
p.tfreqmultiplier = 1.01
# factor to change spatial freq by on left/right
p.sfreqmultiplier = 1.01
# factor to change contrast by on +/-
p.contrastmultiplier = 1.005
# orientation step size to snap to when scrolling mouse wheel (deg)
p.snapDeg = 12
stimulus_control = ManGrating(disp_info=True, params=p, viewport='control')
stimulus_left = ManGrating(disp_info=False, params=p, viewport='left')
stimulus_right = ManGrating(disp_info=False, params=p, viewport='right')
sweep = FrameSweep()
sweep.add_stimulus(stimulus_control)
sweep.add_stimulus(stimulus_left)
sweep.add_stimulus(stimulus_right)
sweep.go()
|
chrox/RealTimeElectrophy
|
StimControl/LightStim/demo/mangrating.py
|
Python
|
bsd-2-clause
| 2,020
|
[
"Gaussian"
] |
fbe9c75bd6dd0b893317f15ecfd10361ac2307e9324b97d2950335d71e8eb3b6
|
"""
XML format classes
"""
import data
import logging
from galaxy.datatypes.sniff import *
log = logging.getLogger(__name__)
class BlastXml( data.Text ):
"""NCBI Blast XML Output data"""
file_ext = "blastxml"
def set_peek( self, dataset, is_multi_byte=False ):
"""Set the peek and blurb text"""
if not dataset.dataset.purged:
dataset.peek = data.get_file_peek( dataset.file_name, is_multi_byte=is_multi_byte )
dataset.blurb = 'NCBI Blast XML data'
else:
dataset.peek = 'file does not exist'
dataset.blurb = 'file purged from disk'
def sniff( self, filename ):
"""
Determines whether the file is blastxml
>>> fname = get_test_fname( 'megablast_xml_parser_test1.blastxml' )
>>> BlastXml().sniff( fname )
True
>>> fname = get_test_fname( 'interval.interval' )
>>> BlastXml().sniff( fname )
False
"""
blastxml_header = [ '<?xml version="1.0"?>',
'<!DOCTYPE BlastOutput PUBLIC "-//NCBI//NCBI BlastOutput/EN" "http://www.ncbi.nlm.nih.gov/dtd/NCBI_BlastOutput.dtd">',
'<BlastOutput>' ]
        for i, line in enumerate( file( filename ) ):
            if i >= len( blastxml_header ):
                return True
            line = line.rstrip( '\n\r' )
            if line != blastxml_header[ i ]:
                return False
        # explicitly return False when the file ends at or before the header
        # (previously this fell off the end and implicitly returned None)
        return False
|
volpino/Yeps-EURAC
|
lib/galaxy/datatypes/xml.py
|
Python
|
mit
| 1,456
|
[
"BLAST",
"Galaxy"
] |
9137ab4761e7d5e7ba50ae77f68de6ca1fe65b3eccf230b637132547d2feacf0
|
# -*- coding: utf-8 -*-
import unittest
import logging
import datetime
# use the real-valued math module (cmath would make every dB value complex)
import math
import random  # used below for seeds, shuffles and measurement noise
from sea_object import *
from util.util import feet_to_meters, knots_to_meters
from sound import sound
logger = logging.getLogger("subsim")
"""
A novice asked the Master: “Here is a programmer that never designs,
documents or tests his programs. Yet all who know him consider him
one of the best programmers in the world. Why is this?”
The Master replies: “That programmer has mastered the Tao. He has gone
beyond the need for design; he does not become angry when the system
crashes, but accepts the universe without concern. He has gone beyond
the need for documentation; he no longer cares if anyone else sees his
code. He has gone beyond the need for testing; each of his programs
are perfect within themselves, serene and elegant, their purpose self-evident.
Truly, he has entered the mystery of Tao.”
"""
class ScanResult:
def __init__(self, object_idx):
self.object_idx = object_idx
self.signal_to_noise = 0
self.bearing = 0
self.range = 0
self.deep = 0
self.blades = 0
self.bands = None
    def __str__(self):
        # note: only fields that actually exist on the instance are shown
        return "ScanResult idx={id} snt={snt} bearing={b} range={r} deep={deep}". \
            format(id=self.object_idx, snt=self.signal_to_noise, b=self.bearing,
                   r=self.range, deep=self.deep)
class Sea:
def __init__(self):
        self.time = datetime.datetime(2010, 5, 5, random.randint(0, 23), random.randint(0, 59), 0)
self.counter = 0
self.objects = {}
self.ids_collection = range(1000, 9999)
random.shuffle(self.ids_collection)
        # the ranges below follow the validity limits of the sound absorption formula
self.temperature = random.randint(-60, 150) / 10.0 # Celsius, -6.0 < T < 15.0
self.salinity = float(random.randint(30, 35)) # 5 < S < 50 ppt
self.ph = 1.0 * random.randint(77, 83) / 10 # 7.7 < pH < 8.3
self.sea_state = random.randrange(0, 7) # 0 to 6, based in Beaufort Force table
self.shipping_state_noise = random.randrange(65, 90) # reference value in DB for shipping noise
self.raining = random.random() > 0.8
def initialize(self):
pass
def sea_state_description(self):
# http://www.usna.edu/Users/physics/ejtuchol/documents/SP411/Chapter11.pdf
description = {
0: 'Calm',
1: 'Light Air',
2: 'Light Breeze',
3: 'Gentle Breeze',
4: 'Moderate Breeze',
5: 'Fresh Breeze',
6: 'Strong Breeze'
}
return description[self.sea_state]
def sea_state_noise_level_1k(self):
# http://www.usna.edu/Users/physics/ejtuchol/documents/SP411/Chapter11.pdf
db_1k = {
0: 44.5,
1: 50.0,
2: 55.0,
3: 61.5,
4: 64.5,
5: 66.5,
6: 68.5
}
return db_1k[self.sea_state]
def get_unique_id(self):
return self.ids_collection.pop()
def add_object(self, obj):
assert isinstance(obj, SeaObject)
new_id = self.get_unique_id()
self.objects[new_id] = obj
obj.id = new_id
def turn(self, time_elapsed): # time_elapsed in hours
self.time = self.time + datetime.timedelta(seconds=time_elapsed * 3600)
# for obj in self.objects.values():
# obj.turn(time_elapsed)
# def background_noise_for_freq_min_max(self, freq):
# # using Wenz (1962)
# # http://www.dosits.org/science/soundsinthesea/commonsounds
    # # Min and Max values obtained by linear approximation
# logfreq = math.log10(freq)
#
# # for minimum value:
# # 1 -> 10 Hz, 5 -> 10KHz
# # > x = c(1,5)
# # > y = c(85,20)
# # > l = lm(y ~ x)
# # Coefficients:
# # (Intercept) x
# # 101.25 -16.25
# min_value = 101.25 + (-16.25 * logfreq)
#
# # > y_max = c(140,60)
# # > l = lm(y_max ~ x)
# # Coefficients:
# # (Intercept) x
# # 160 -20
# max_value = 160.0 + (-20.0 * logfreq)
# return min_value, max_value
SEA_NOISE_CACHE = None
    def get_sea_noise(self, deep):
        # note: the cached curve is computed for the first depth requested
        # and reused afterwards, so later changes in depth are ignored
        if self.SEA_NOISE_CACHE is None:
            self.SEA_NOISE_CACHE = self.calc_sea_noise(deep)
        return self.SEA_NOISE_CACHE.add_noise(0.5)
def calc_sea_noise(self, deep):
# using Wenz (1962)
# http://www.dosits.org/science/soundsinthesea/commonsounds
s = Sound()
'''
        All curves adjusted in the ipython notebook sound_sea.ipynb
        logfreq = 0  ->       1 Hz
        logfreq = 1  ->      10 Hz
        logfreq = 2  ->     100 Hz
        logfreq = 3  ->   1,000 Hz = 1 kHz
        logfreq = 4  ->  10,000 Hz = 10 kHz
        logfreq = 5  -> 100,000 Hz = 100 kHz
############################################################################
< 10 Hz:
The starting frequency of 10 Hz is motivated more by
simplicity and need to limit the scope of this discussion. The
infrasonic band of < 10 Hz is also more strongly influenced
by shallow water waveguide effects that establish a cutoff fre-
quency for effective sound propagation. 1 However, it is
worth noting here that in pelagic, open waters, the general
trend for frequency dependence and spectral level within the
nominal 1–10-Hz band is reasonably described by the Holu
Spectrum (observed to apply between 0.4 Hz and 6 Hz), from
the Hawaiian word for deep ocean, 13 and which is shown for
reference in Fig. 2. Ambient noise in this spectral band is
associated with the dynamics of ocean surface waves. Shorter
wavelength ocean waves exhibit a saturation beyond which
they no longer increase in waveheight, and this is mirrored in
the Holu Spectrum insofar as the spectral density remains
roughly constant for a given frequency.
1 Hz -> 120 - 60 * 0 = 120 db
10 hz -> 120 - 60 * 1 = 60 db
100 hz -> 120 - 60 * 2 = 0 db
############################################################################
'''
        s.add_logdecay(120, 1, 0, 100)  # 120 dB @ 1 Hz decaying to 0 dB @ 100 Hz
'''
100-1000 Hz – Noise in this band is dominated by shipping (decreasing intensity with frequency
increases). A significant contribution is also from sea surface agitation. Urick (1986) developed
a model for predicting this shipping noise:
'''
central_freq = 100
central_db = 80
s.add_cosine(60,10, central_db ,central_freq)
s.add_cosine(central_db ,central_freq, 30, 1000)
'''
        1-100 kHz – Sea surface agitation is now the dominant factor, unless marine mammals or rain are
present. Knudsen (1948) presented a model to predict this contribution:
Rain - TO DO
Rain drops impacting sea surface and implosion of air bubbles caused by rain, f =
1-100 kHz, max SL @ 20 kHz, SL can be up to 30 dB above sea surface noise
'''
noise_1k = self.sea_state_noise_level_1k()
s.add_cosine(noise_1k-40,10, noise_1k ,1000)
return s
    def background_noise_for_freq(self, freq):
        base = []  # collect the contributing noise sources in dB
        logfreq = math.log10(freq)
        a = 50.0
        h = 2.7  # centre of the parabola, and max value of the curve
        if 50 < freq <= 1000:  # shipping / surface-agitation band
            v = self.sea_state_noise_level_1k() - (a * ((logfreq - h) ** 2))
            base.append(v)
        if 1000 < freq < 1000000:  # 1 kHz - 1 MHz
            noise_1khz = self.sea_state_noise_level_1k() - (a * ((3 - h) ** 2))  # extends the parabola with its 1 kHz value
            base.append(noise_1khz - (17.0 * (logfreq - 3)))
'''
> 100 kHz:
The ending frequency of 100,000 Hz (100 kHz) is large-
ly set by thermal noise generated by the random motion of
water molecules. Thermal noise ultimately establishes the
lower limit of measurability of pressure fluctuations associat-
ed with truly propagating sound waves, and is also shown for
reference in Fig. 2.
'''
if freq > 1000:
base.append(-75 + 20 * logfreq)
'''
        Shallow vs. Deep Water - TO DO
'''
value = sound.sum_of_decibels(base) + random.gauss(0, 1)
return value
# #################################################################################################################
#
# http://www.usna.edu/Users/physics/ejtuchol/documents/SP411/Chapter11.pdf
# #################################################################################################################
def sound_absortion_by_sea(self, freq, deep, temperature, salinity, pH):
"""
freq in Hertz
deep in feet
temp in degC
salinity in ppt
http://resource.npl.co.uk/acoustics/techguides/seaabsorption/
calculation of absorption according to:
Ainslie & McColm, J. Acoust. Soc. Am., Vol. 103, No. 3, March 1998
// f frequency (kHz)
// T Temperature (degC)
// S Salinity (ppt)
// D Depth (km)
// pH Acidity
The Ainslie and McColm formula retains accuracy to within 10% of the
Francois and Garrison model between 100 Hz and 1 MHz for the following range of oceanographic conditions:
-6 < T < 35 °C (S = 35 ppt, pH=8, D = 0 km)
7.7 < pH < 8.3 (T = 10 °C, S = 35 ppt, D = 0 km)
5 < S < 50 ppt (T = 10 °C, pH = 8, D = 0 km)
0 < D < 7 km (T = 10 °C, S = 35 ppt, pH = 8)
:return Total absorption (dB/km)
"""
        freq = freq / 1000.0  # converts from Hz to kHz
deep = feet_to_meters(deep) / 1000.0 # convert feet to km
# kelvin = 273.1 # for converting to Kelvin (273.15) # Measured ambient temp
# t_kel = kelvin + temperature
        # Boric acid contribution
        a1 = 0.106 * math.exp((pH - 8.0) / 0.56)
        p1 = 1.0
        f1 = 0.78 * math.sqrt(salinity / 35.0) * math.exp(temperature / 26.0)
        boric = 1.0 * (a1 * p1 * f1 * freq * freq) / (freq * freq + f1 * f1)
        # MgSO4 contribution
        a2 = 0.52 * (salinity / 35.0) * (1 + temperature / 43.0)
        p2 = math.exp(-deep / 6)
        f2 = 42.0 * math.exp(temperature / 17.0)
        mgso4 = 1.0 * (a2 * p2 * f2 * freq * freq) / (freq * freq + f2 * f2)
        # Pure water contribution
        a3 = 0.00049 * math.exp(-(temperature / 27.0 + deep / 17.0))
        p3 = 1.0
        h2o = 1.0 * a3 * p3 * freq * freq
        # Total absorption (dB/km)
        alpha = boric + mgso4 + h2o
        return alpha  # in dB/km
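    # Worked example (illustrative, not part of the original source): for
    # freq=1000 Hz, deep=0 ft, temperature=10 C, salinity=35 ppt, pH=8 the
    # three terms evaluate to roughly 0.053 (boric) + 0.008 (MgSO4) +
    # 0.0003 (water), a total of about 0.06 dB/km, in line with published
    # values for the Ainslie & McColm model at 1 kHz.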
def spherical_spreading_loss(self, dist):
# dist in meters
# http://www.dosits.org/science/advancedtopics/spreading/
# Spherical spreading describes the decrease in level when a sound wave
# propagates away from a source uniformly in all directions.
return 20 * math.log10(dist) # decibels
def cylindrical_spreading_loss(self, dist):
# dist in meters
# http://www.dosits.org/science/advancedtopics/spreading/
# Sound cannot propagate uniformly
# in all directions from a source in the ocean forever.
# Beyond some range the sound will hit the sea surface or sea floor.
# A simple approximation for spreading loss in a medium with upper and
# lower boundaries can be obtained by assuming that the sound is distributed
# uniformly over the surface of a cylinder having a radius equal to the range r
# and a height H equal to the depth of the ocean
return 10 * math.log10(dist) # decibels
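    # Worked example (illustrative, not part of the original source):
    # at 1 km, spherical_spreading_loss(1000.0) = 20 * log10(1000) = 60 dB,
    # while cylindrical_spreading_loss(1000.0) = 10 * log10(1000) = 30 dB;
    # once the field is trapped between surface and bottom the weaker
    # cylindrical law applies.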
# def passive_sonar_scan(self, sub, sonar):
# logger.debug("--- Passive sonar scan ---")
# sub_pos = sub.get_pos()
# assert isinstance(sub_pos, Point)
# result = [] # list of ScanResult
    def passive_scan(self, sub, sonar):
        # NOTE: the early return below deliberately disables the scan;
        # everything after it is unfinished work-in-progress code
        return []
logger.debug("--- Passive scan ---")
logger.debug("Sub: {0}".format(sub))
logger.debug("Sonar: {0}".format(sonar))
sub_pos = sub.get_pos()
assert isinstance(sub_pos, Point)
result = {}
# background_noise = self.get_background_noise() + sub.self_noise()
#logger.debug("background_noise {0}".format(background_noise))
sub_self_noise = sub.get_self_noise()
for idx, obj in self.objects.items():
obj_pos = obj.get_pos()
obj_id = obj.get_id()
# skips the sub itself
if obj_id == sub.get_id():
continue
range_in_knots = obj_pos.distance_to(sub_pos) # in knots/miles
# if range > 15: # hard limit for object detection.
# continue
assert isinstance(obj.get_pos(), Point)
s = obj.get_sound() # Source Level
assert isinstance(s, Sound)
logger.debug("** Examining: {i}: dist:{dist:5.2f} obj:{obj} type:{ty}".format(i=idx, dist=range_in_knots,\
obj=obj,\
ty=type(obj)))
range_in_meters = knots_to_meters(range_in_knots)
deep = sub.actual_deep
ocean_deep = 2000.0 # in meters
half_ocean_deep = ocean_deep / 2.0
if range_in_meters < half_ocean_deep:
spreading_loss = self.spherical_spreading_loss(range_in_meters)
else:
# mix of spherical and cylindrical losses
# spherical losses up to half_ocean_deep + cylindrical
# http://www.dosits.org/science/advancedtopics/spreading/
# http://www.fas.org/man/dod-101/navy/docs/es310/SNR_PROP/snr_prop.htm
spreading_loss = (10 * math.log10(range_in_meters)) + (10 * math.log10(half_ocean_deep))
### transmission_loss ###
def absorption_filter(freq, value):
return value - self.sound_absortion_by_sea(freq, deep,
temperature=self.temperature,
salinity=self.salinity,
pH=self.ph) * range_in_meters / 1000
s.filter(absorption_filter)
            def spreading_filter(freq, value):
                # renamed from "spreading_loss": reusing that name would
                # shadow the value computed above and break the subtraction
                return value - spreading_loss
            s.filter(spreading_filter)
sea_noise_level = self.background_noise_for_freq(freq)
total_noise = sound.sum_of_decibels([sub_noise_level, sea_noise_level])
receiving_array_gain = sonar.array_gain(freq) # AG
            additional_losses = 0
            received_sound = source_level - transmission_loss \
                + receiving_array_gain - additional_losses
stn = received_sound - total_noise
# received_sound = self.calculate_dectect_frequences(sub, sonar, obj, obj_sound, range_in_knots)
logger.debug("received sound: {0}".format(received_sound))
# total_received_sound = sum([i[1] for i in received_bands])
# total_received_noise = sum([i[2] for i in received_bands])
# signal_to_noise = total_received_sound - total_received_noise
# logger.debug("total_received_sound : {0}".format(total_received_sound))
# logger.debug("total_received_noise : {0}".format(total_received_noise))
# logger.debug("signal_to_noise : {0}".format(signal_to_noise))
# logger.debug(
# "* freq:{f} source level:{sl} deep:{deep}".format(
# f=freq, deep=deep,
# sl=source_level))
# logger.debug(
# "spreading_loss:{spl} absorption:{at} sea_noise:{sean} sub_noise:{subn}".format(
# at=absorption,
# sean=sea_noise_level,
# subn=sub_noise_level,
# spl=spreading_loss))
#
# logger.debug(
# "receiving_array_gain:{gain} transmission_loss:{tl}".format(
# gain=receiving_array_gain,
# tl=transmission_loss))
#
# logger.debug(
# "total_noise:{tn} received_sound:{rs} stn:{stn}".format(
# tn=total_noise,
# rs=received_sound,
# stn=stn))
# consolidate bands in a total detected signal-to-noise value
total_detected_signal_to_noise = 0
detected_bands = {}
for band in received_bands:
freq = band[0]
signal = band[1]
noise = band[2]
stn = signal - noise - sonar.detection_threshold
# if stn > 0:
# total_detected_signal_to_noise += stn
# detected_bands[freq] = stn
# logger.debug("total_detected_signal_to_noise : {0}".format(total_detected_signal_to_noise))
#if not isinstance(object_sound, Decibel):
# received_sound = db(received_sound)
# logger.debug("{i}: signal_to_noise:{stn}".format(i=i, stn=signal_to_noise))
if total_detected_signal_to_noise > 0:
# error: greater the signal_to_noise, less the error
                if total_detected_signal_to_noise > 10 + sonar.detection_threshold:
                    error = 0.0001  # i.e. a 0.01% measurement error
                else:
                    # the error decays exponentially from 0.05% down to 0.01%
                    error = 0.0001 + 0.0004 * math.exp(-0.5 * total_detected_signal_to_noise)
                # divided by 3 because for a Gaussian ~99% of measurements fall
                # within 3 sigmas, so the error above bounds 99% of the measures
error /= 3
deep = obj.get_deep()
# .add_noise(0.1*dist)
bearing = sub_pos.bearing_to(obj_pos)
#bearing = obj_pos.bearing_to(sub_pos)
# Scan Result
r = ScanResult(idx)
r.signal_to_noise = total_detected_signal_to_noise
# r.blades = 0
r.distance = range_in_knots * random.gauss(1, error)
r.bearing = bearing * random.gauss(1, error)
r.deep = deep * random.gauss(1, error)
# r.bands = detected_bands
logger.debug("scan_result: {0}".format(r))
# result.append(r)
result[idx] = r
logger.debug("--- END of passive scan ---")
return result # def passive_scan(self, sub, sonar):
def pulse(self, ship):
pass
def explosion(self, pos):
# weapon type
pass
def __str__(self):
return "Time: {0}".format(self.time.strftime("%d/%m/%Y %H:%M:%S"))
def debug(self):
print('------ SEA DEBUG ------')
print(self)
for obj in self.objects.values():
print (obj)
print ('')
print('------ END OF SEA DEBUG ------')
class TestUtil(unittest.TestCase):
class FakeShip():
def get_pos(self):
return Point(0, 0)
def setUp(self):
self.universe = Sea()
self.universe.initialize()
def test_turn(self):
u = self.universe
u.turn(1)
u.turn(0.1)
    def test_scan_passive(self):
        print("test_scan_passive")
        u = self.universe
        # u.create_asteroid(Point(2,1))
        # u.create_asteroid(Point(1,2))
        scan = u.passive_scan(self.FakeShip(), 0.1)
        # passive_scan() is currently stubbed out (see the early return in
        # Sea.passive_scan), so the result is always empty
        self.assertEqual(len(scan), 0)
        print([str(sr) for sr in scan])
if __name__ == '__main__':
unittest.main()
"""
Source:
Can Russian Strategic
Submarines Survive at Sea?
The Fundamental Limits of
Passive Acoustics
http://scienceandglobalsecurity.org/archive/sgs04miasnikov.pdf
http://fas.org/man/dod-101/sys/ship/deep.htm
The most obvious contribution to the ambient noise is the action occurring on the surface of
the ocean. The greater the size of the waves, the greater the ambient noise contribution. The
waves are driven by the winds, so there is a direct correspondence between the steady wind speed
and the sea state. The greater the wind speed or sea state, obviously the greater the ambient
noise contribution. The frequency of the noise from sea state tends to be greater than 300 Hz.
The second main contribution to ambient noise comes from shipping in general. In regions where
there are many transiting ships, the ambient noise will be increased substantially. This noise,
in contrast to the noise from sea state, will be at low frequency (< 300 Hz).
The third possible ambient noise source is biologics, meaning sea-life. These are as widely
varied as they are unpredictable. One common source is snapping shrimp. Others include whales
and dolphins.
"""
|
ftfarias/PySubsim
|
old/sea.py
|
Python
|
gpl-3.0
| 21,237
|
[
"Gaussian"
] |
0b4ecd84e0f9c8e64ed97cfd3b570e6fead05c13406018a772e4c8784c19c0a6
|
"""
The following tools have been eliminated from the distribution:
1: BAM-to-SAM converts BAM format to SAM format
2: Categorize Elements satisfying criteria
3: Compute Motif Frequencies For All Motifs motif by motif
4: Compute Motif Frequencies in indel flanking regions
5: CTD analysis of chemicals, diseases, or genes
6: Cuffcompare
7: Cuffdiff
8: Cufflinks
9: Cuffmerge
10: Delete Overlapping Indels from a chromosome indels file
11: Separate pgSnp alleles into columns
12: Draw Stacked Bar Plots for different categories and different criteria
13: Length Distribution chart
14: FASTA Width formatter
15: RNA/DNA converter
16: Draw quality score boxplot
17: Quality format converter (ASCII-Numeric)
18: Filter by quality
19: FASTQ to FASTA converter
20: Remove sequencing artifacts
21: Barcode Splitter
22: Clip adapter sequences
23: Collapse sequences
24: Draw nucleotides distribution chart
25: Compute quality statistics
26: Rename sequences
27: Reverse- Complement
28: Trim sequences
29: FunDO human genes associated with disease terms
30: HVIS visualization of genomic data with the Hilbert curve
31: Fetch Indels from 3-way alignments
32: Identify microsatellite births and deaths
33: Extract orthologous microsatellites for multiple (>2) species alignments
34: Mutate Codons with SNPs
35: Pileup-to-Interval condenses pileup format into ranges of bases
36: Filter pileup on coverage and SNPs
37: Filter SAM on bitwise flag values
38: Merge BAM Files merges BAM files together
39: Generate pileup from BAM dataset
40: SAM-to-BAM converts SAM format to BAM format
41: Convert SAM to interval
42: flagstat provides simple stats on BAM files
43: MPileup SNP and indel caller
44: rmdup remove PCR duplicates
45: Slice BAM by provided regions
46: Split paired end reads
47: T Test for Two Samples
48: Plotting tool for multiple series and graph types.
The tools are now available in the repositories respectively:
1: bam_to_sam
2: categorize_elements_satisfying_criteria
3: compute_motif_frequencies_for_all_motifs
4: compute_motifs_frequency
5: ctd_batch
6: cuffcompare
7: cuffdiff
8: cufflinks
9: cuffmerge
10: delete_overlapping_indels
11: divide_pg_snp
12: draw_stacked_barplots
13: fasta_clipping_histogram
14: fasta_formatter
15: fasta_nucleotide_changer
16: fastq_quality_boxplot
17: fastq_quality_converter
18: fastq_quality_filter
19: fastq_to_fasta
20: fastx_artifacts_filter
21: fastx_barcode_splitter
22: fastx_clipper
23: fastx_collapser
24: fastx_nucleotides_distribution
25: fastx_quality_statistics
26: fastx_renamer
27: fastx_reverse_complement
28: fastx_trimmer
29: hgv_fundo
30: hgv_hilbertvis
31: indels_3way
32: microsatellite_birthdeath
33: multispecies_orthologous_microsats
34: mutate_snp_codon
35: pileup_interval
36: pileup_parser
37: sam_bitwise_flag_filter
38: sam_merge
39: sam_pileup
40: sam_to_bam
41: sam2interval
42: samtools_flagstat
43: samtools_mpileup
44: samtools_rmdup
45: samtools_slice_bam
46: split_paired_reads
47: t_test_two_samples
48: xy_plot
from the main Galaxy tool shed at http://toolshed.g2.bx.psu.edu
and will be installed into your local Galaxy instance at the
location discussed above by running the following command.
"""
def upgrade( migrate_engine ):
print __doc__
def downgrade( migrate_engine ):
pass
|
mikel-egana-aranguren/SADI-Galaxy-Docker
|
galaxy-dist/lib/tool_shed/galaxy_install/migrate/versions/0008_tools.py
|
Python
|
gpl-3.0
| 3,304
|
[
"Galaxy"
] |
fae4a74d1211df9d8868c1e41ab8ef8ffb927492bc318a5d6125d4466c10a5f7
|
# Authors: Lukas Breuer <l.breuer@fz-juelich.de>
# Juergen Dammers <j.dammers@fz-juelich.de>
# Denis A. Engeman <denis.engemann@gemail.com>
#
# License: BSD-3-Clause
import math
import numpy as np
from ..utils import logger, verbose, check_random_state, random_permutation
@verbose
def infomax(data, weights=None, l_rate=None, block=None, w_change=1e-12,
anneal_deg=60., anneal_step=0.9, extended=True, n_subgauss=1,
kurt_size=6000, ext_blocks=1, max_iter=200, random_state=None,
blowup=1e4, blowup_fac=0.5, n_small_angle=20, use_bias=True,
verbose=None, return_n_iter=False):
"""Run (extended) Infomax ICA decomposition on raw data.
Parameters
----------
data : np.ndarray, shape (n_samples, n_features)
The whitened data to unmix.
weights : np.ndarray, shape (n_features, n_features)
The initialized unmixing matrix.
Defaults to None, which means the identity matrix is used.
l_rate : float
This quantity indicates the relative size of the change in weights.
Defaults to ``0.01 / log(n_features ** 2)``.
.. note:: Smaller learning rates will slow down the ICA procedure.
block : int
The block size of randomly chosen data segments.
Defaults to floor(sqrt(n_times / 3.)).
w_change : float
The change at which to stop iteration. Defaults to 1e-12.
anneal_deg : float
The angle (in degrees) at which the learning rate will be reduced.
Defaults to 60.0.
anneal_step : float
The factor by which the learning rate will be reduced once
        ``anneal_deg`` is exceeded: ``l_rate *= anneal_step``.
Defaults to 0.9.
extended : bool
Whether to use the extended Infomax algorithm or not.
Defaults to True.
n_subgauss : int
The number of subgaussian components. Only considered for extended
Infomax. Defaults to 1.
kurt_size : int
The window size for kurtosis estimation. Only considered for extended
Infomax. Defaults to 6000.
ext_blocks : int
Only considered for extended Infomax. If positive, denotes the number
of blocks after which to recompute the kurtosis, which is used to
estimate the signs of the sources. In this case, the number of
sub-gaussian sources is automatically determined.
If negative, the number of sub-gaussian sources to be used is fixed
and equal to n_subgauss. In this case, the kurtosis is not estimated.
Defaults to 1.
max_iter : int
The maximum number of iterations. Defaults to 200.
%(random_state)s
blowup : float
The maximum difference allowed between two successive estimations of
the unmixing matrix. Defaults to 10000.
blowup_fac : float
The factor by which the learning rate will be reduced if the difference
        between two successive estimations of the unmixing matrix exceeds
``blowup``: ``l_rate *= blowup_fac``. Defaults to 0.5.
n_small_angle : int | None
The maximum number of allowed steps in which the angle between two
successive estimations of the unmixing matrix is less than
``anneal_deg``. If None, this parameter is not taken into account to
stop the iterations. Defaults to 20.
use_bias : bool
This quantity indicates if the bias should be computed.
Defaults to True.
%(verbose)s
return_n_iter : bool
Whether to return the number of iterations performed. Defaults to
False.
Returns
-------
unmixing_matrix : np.ndarray, shape (n_features, n_features)
The linear unmixing operator.
n_iter : int
        The number of iterations. Only returned if ``return_n_iter=True``.
References
----------
.. [1] A. J. Bell, T. J. Sejnowski. An information-maximization approach to
blind separation and blind deconvolution. Neural Computation, 7(6),
1129-1159, 1995.
.. [2] T. W. Lee, M. Girolami, T. J. Sejnowski. Independent component
analysis using an extended infomax algorithm for mixed subgaussian
and supergaussian sources. Neural Computation, 11(2), 417-441, 1999.
"""
from scipy.stats import kurtosis
rng = check_random_state(random_state)
# define some default parameters
max_weight = 1e8
restart_fac = 0.9
min_l_rate = 1e-10
degconst = 180.0 / np.pi
# for extended Infomax
extmomentum = 0.5
signsbias = 0.02
signcount_threshold = 25
signcount_step = 2
# check data shape
n_samples, n_features = data.shape
n_features_square = n_features ** 2
# check input parameters
# heuristic default - may need adjustment for large or tiny data sets
if l_rate is None:
l_rate = 0.01 / math.log(n_features ** 2.0)
if block is None:
block = int(math.floor(math.sqrt(n_samples / 3.0)))
    logger.info('Computing%sInfomax ICA' % (' Extended ' if extended else ' '))
# collect parameters
nblock = n_samples // block
lastt = (nblock - 1) * block + 1
# initialize training
if weights is None:
weights = np.identity(n_features, dtype=np.float64)
else:
weights = weights.T
BI = block * np.identity(n_features, dtype=np.float64)
bias = np.zeros((n_features, 1), dtype=np.float64)
onesrow = np.ones((1, block), dtype=np.float64)
startweights = weights.copy()
oldweights = startweights.copy()
step = 0
count_small_angle = 0
wts_blowup = False
blockno = 0
signcount = 0
initial_ext_blocks = ext_blocks # save the initial value in case of reset
# for extended Infomax
if extended:
signs = np.ones(n_features)
for k in range(n_subgauss):
signs[k] = -1
kurt_size = min(kurt_size, n_samples)
old_kurt = np.zeros(n_features, dtype=np.float64)
oldsigns = np.zeros(n_features)
    # training loop
olddelta, oldchange = 1., 0.
while step < max_iter:
# shuffle data at each step
permute = random_permutation(n_samples, rng)
# ICA training block
# loop across block samples
for t in range(0, lastt, block):
u = np.dot(data[permute[t:t + block], :], weights)
u += np.dot(bias, onesrow).T
if extended:
# extended ICA update
y = np.tanh(u)
weights += l_rate * np.dot(weights,
BI -
signs[None, :] * np.dot(u.T, y) -
np.dot(u.T, u))
if use_bias:
bias += l_rate * np.reshape(np.sum(y, axis=0,
dtype=np.float64) * -2.0,
(n_features, 1))
else:
# logistic ICA weights update
y = 1.0 / (1.0 + np.exp(-u))
weights += l_rate * np.dot(weights,
BI + np.dot(u.T, (1.0 - 2.0 * y)))
if use_bias:
bias += l_rate * np.reshape(np.sum((1.0 - 2.0 * y), axis=0,
dtype=np.float64),
(n_features, 1))
# check change limit
max_weight_val = np.max(np.abs(weights))
if max_weight_val > max_weight:
wts_blowup = True
blockno += 1
if wts_blowup:
break
# ICA kurtosis estimation
if extended:
if ext_blocks > 0 and blockno % ext_blocks == 0:
if kurt_size < n_samples:
rp = np.floor(rng.uniform(0, 1, kurt_size) *
(n_samples - 1))
tpartact = np.dot(data[rp.astype(int), :], weights).T
else:
tpartact = np.dot(data, weights).T
# estimate kurtosis
kurt = kurtosis(tpartact, axis=1, fisher=True)
if extmomentum != 0:
kurt = (extmomentum * old_kurt +
(1.0 - extmomentum) * kurt)
old_kurt = kurt
# estimate weighted signs
signs = np.sign(kurt + signsbias)
ndiff = (signs - oldsigns != 0).sum()
if ndiff == 0:
signcount += 1
else:
signcount = 0
oldsigns = signs
if signcount >= signcount_threshold:
ext_blocks = np.fix(ext_blocks * signcount_step)
signcount = 0
# here we continue after the for loop over the ICA training blocks
# if weights in bounds:
if not wts_blowup:
oldwtchange = weights - oldweights
step += 1
angledelta = 0.0
delta = oldwtchange.reshape(1, n_features_square)
change = np.sum(delta * delta, dtype=np.float64)
if step > 2:
angledelta = math.acos(np.sum(delta * olddelta) /
math.sqrt(change * oldchange))
angledelta *= degconst
if verbose:
logger.info(
'step %d - lrate %5f, wchange %8.8f, angledelta %4.1f deg'
% (step, l_rate, change, angledelta))
# anneal learning rate
oldweights = weights.copy()
if angledelta > anneal_deg:
l_rate *= anneal_step # anneal learning rate
# accumulate angledelta until anneal_deg reaches l_rate
olddelta = delta
oldchange = change
count_small_angle = 0 # reset count when angledelta is large
else:
if step == 1: # on first step only
olddelta = delta # initialize
oldchange = change
if n_small_angle is not None:
count_small_angle += 1
if count_small_angle > n_small_angle:
max_iter = step
# apply stopping rule
if step > 2 and change < w_change:
step = max_iter
elif change > blowup:
l_rate *= blowup_fac
# restart if weights blow up (for lowering l_rate)
else:
step = 0 # start again
wts_blowup = 0 # re-initialize variables
blockno = 1
l_rate *= restart_fac # with lower learning rate
weights = startweights.copy()
oldweights = startweights.copy()
olddelta = np.zeros((1, n_features_square), dtype=np.float64)
bias = np.zeros((n_features, 1), dtype=np.float64)
ext_blocks = initial_ext_blocks
# for extended Infomax
if extended:
signs = np.ones(n_features)
for k in range(n_subgauss):
signs[k] = -1
oldsigns = np.zeros(n_features)
if l_rate > min_l_rate:
if verbose:
logger.info('... lowering learning rate to %g'
'\n... re-starting...' % l_rate)
else:
raise ValueError('Error in Infomax ICA: unmixing_matrix matrix'
'might not be invertible!')
# prepare return values
if return_n_iter:
return weights.T, step
else:
return weights.T
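# Usage sketch (illustrative, not part of this module): unmixing a random
# mixture of two super-gaussian (Laplacian) sources. The whitening step
# normally performed before calling infomax() is omitted here, so treat
# this purely as a shape/API illustration.
#
#   >>> rng = np.random.RandomState(0)
#   >>> sources = rng.laplace(size=(10000, 2))
#   >>> mixing = rng.randn(2, 2)
#   >>> data = np.dot(sources, mixing.T)          # (n_samples, n_features)
#   >>> unmixing = infomax(data, random_state=0)  # (n_features, n_features)
#   >>> recovered = np.dot(data, unmixing.T)      # estimated sources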
|
wmvanvliet/mne-python
|
mne/preprocessing/infomax_.py
|
Python
|
bsd-3-clause
| 11,848
|
[
"Gaussian"
] |
36cd3fc75d6ec1b044851b0a7fb6f9752b210ae6ce339873f0d9d8b650c7718c
|
import unittest
from ladygeek.graph.client import get_graph_url
from ladygeek.graph.entities import User
from py2neo import Graph
class TestUser(unittest.TestCase):
USER_DATA = {'contributors_enabled': False,
'created_at': 'Fri Jun 12 11:13:21 +0000 2009',
'default_profile': False,
'default_profile_image': False,
'description': 'We are the UK’s leading energy supplier and committed to '
'looking after your world. For Emergency numbers visit '
'http://t.co/GVkMDCUzW3',
'entities': {'description': {'urls': [{'display_url': 'britishgas.co.uk/emergency',
'expanded_url': 'http://www.britishgas.co.uk/emergency',
'indices': [111, 133],
'url': 'http://t.co/GVkMDCUzW3'}]},
'url': {'urls': [{'display_url': 'britishgas.co.uk/the-source',
'expanded_url': 'http://www.britishgas.co.uk/the-source',
'indices': [0, 22],
'url': 'http://t.co/rlasQ9hHeu'}]}},
'favourites_count': 431,
'follow_request_sent': False,
'followers_count': 36081,
'following': False,
'friends_count': 4774,
'geo_enabled': True,
'id': 46630225,
'id_str': '46630225',
'is_translation_enabled': False,
'is_translator': False,
'lang': 'en',
'listed_count': 400,
'location': 'Staines, Middlesex',
'name': 'British Gas ',
'notifications': False,
'profile_background_color': '00AEDE',
'profile_background_image_url': 'http://pbs.twimg.com/profile_background_images/831694128/7187a2d2a890b67c21ae04c18861f5b9.jpeg',
'profile_background_image_url_https': 'https://pbs.twimg.com/profile_background_images/831694128/7187a2d2a890b67c21ae04c18861f5b9.jpeg',
'profile_background_tile': False,
'profile_banner_url': 'https://pbs.twimg.com/profile_banners/46630225/1400584801',
'profile_image_url': 'http://pbs.twimg.com/profile_images/552048129055289344/6oPZvR3T_normal.jpeg',
'profile_image_url_https': 'https://pbs.twimg.com/profile_images/552048129055289344/6oPZvR3T_normal.jpeg',
'profile_link_color': '1890C4',
'profile_location': None,
'profile_sidebar_border_color': 'FFFFFF',
'profile_sidebar_fill_color': 'D9EDF9',
'profile_text_color': '333333',
'profile_use_background_image': True,
'protected': False,
'screen_name': 'BritishGas',
'status': {'contributors': None,
'coordinates': None,
'created_at': 'Mon Mar 02 18:45:18 +0000 2015',
'entities': {'hashtags': [],
'media': [{'display_url': 'pic.twitter.com/ec4iusBe4Q',
'expanded_url': 'http://twitter.com/BritishGas/status/572467734367191041/photo/1',
'id': 572425479120007168,
'id_str': '572425479120007168',
'indices': [108, 130],
'media_url': 'http://pbs.twimg.com/media/B_Gp8L9UsAAe8ap.png',
'media_url_https': 'https://pbs.twimg.com/media/B_Gp8L9UsAAe8ap.png',
'sizes': {'large': {'h': 500,
'resize': 'fit',
'w': 1000},
'medium': {'h': 300,
'resize': 'fit',
'w': 600},
'small': {'h': 170,
'resize': 'fit',
'w': 340},
'thumb': {'h': 150,
'resize': 'crop',
'w': 150}},
'type': 'photo',
'url': 'http://t.co/ec4iusBe4Q'}],
'symbols': [],
'urls': [],
'user_mentions': []},
'favorite_count': 4,
'favorited': False,
'geo': None,
'id': 572467734367191041,
'id_str': '572467734367191041',
'in_reply_to_screen_name': None,
'in_reply_to_status_id': None,
'in_reply_to_status_id_str': None,
'in_reply_to_user_id': None,
'in_reply_to_user_id_str': None,
'lang': 'en',
'place': None,
'possibly_sensitive': False,
'retweet_count': 3,
'retweeted': False,
'source': '<a href="https://ads.twitter.com" '
'rel="nofollow">Twitter Ads</a>',
'text': 'Afraid of the dust bunny lurking behind your fridge? '
'Check out our guide to cleaning up those fridge coils: '
'http://t.co/ec4iusBe4Q',
'truncated': False},
'statuses_count': 13664,
'time_zone': 'London',
'url': 'http://t.co/rlasQ9hHeu',
'utc_offset': 0,
'verified': True}
def setUp(self):
self.g = Graph(get_graph_url("dev"))
def tearDown(self):
self.g.delete_all()
def testAddNewUser(self):
u = User.new(self.g, properties=self.USER_DATA)
u.get_followers()
if __name__ == '__main__':
unittest.main()
|
salimfadhley/neowrapper
|
test/test_neowrapper/test_user.py
|
Python
|
mit
| 7,113
|
[
"VisIt"
] |
ed782c0a39ce5b4508d265671c435b12378ab1f1572faa4964a8777aae9023ee
|
#!/usr/bin/env python3
# Version 1.1
# Author Alexis Blanchet-Cohen
# Date: 09/06/2014
import argparse
import glob
import os
import os.path
import pandas
import subprocess
import util
# Read the command line arguments.
parser = argparse.ArgumentParser(description="Generates bedtools coverage scripts.")
parser.add_argument("-s", "--scriptsDirectory", help="Scripts directory. DEFAULT=bedtools_coverage", default="bedtools_coverage")
parser.add_argument("-i", "--inputDirectory", help="Input directory with FASTQ files. DEFAULT=../results/bwa", default="../results/bwa")
parser.add_argument("-o", "--outputDirectory", help="Output directory with bedtools coverage results. DEFAULT=../results/bedtools_coverage", default="../results/bedtools_coverage")
parser.add_argument("-a", help="One or more BAM/BED/GFF/VCF file(s) “B”. Use “stdin” if passing B with a UNIX pipe. NEW!!!: -b may be followed with multiple databases and/or wildcard (*) character(s). DEFAULT=/gs/project/feb-684-aa/BIF/VIN/VIN-BIF-P1/data/SeqCapEZ_Exome_v3.0_Design_Annotation_files/120430_HG19_ExomeV3_UTR_EZ_HX1_ensembl_sorted_filtered.bed",
default="/gs/project/feb-684-aa/BIF/VIN/VIN-BIF-P1/data/SeqCapEZ_Exome_v3.0_Design_Annotation_files/120430_HG19_ExomeV3_UTR_EZ_HX1_ensembl_sorted_filtered.bed")
parser.add_argument("-c", "--chromSizes", help="Tab delimited file with chromosome sizes and lengths. DEFAULT=/gs/project/feb-684-aa/BIF/genomes/Homo_sapiens/Broad/human_g1k_v37_chrom.sizes", default="/gs/project/feb-684-aa/BIF/genomes/Homo_sapiens/Broad/human_g1k_v37_chrom.sizes")
parser.add_argument("-q", "--submitJobsToQueue", help="Submit jobs to queue immediately.", choices=["yes", "no", "y", "n"], default="no")
args = parser.parse_args()
# If not in the main scripts directory, cd to the main scripts directory, if it exists.
util.cdMainScriptsDirectory()
# Process the command line arguments.
scriptsDirectory = os.path.abspath(args.scriptsDirectory)
inputDirectory = os.path.abspath(args.inputDirectory)
outputDirectory = os.path.abspath(args.outputDirectory)
a = os.path.abspath(args.a)
chromSizes = os.path.abspath(args.chromSizes)
# Check if the inputDirectory exists, and is a directory.
util.checkInputDirectory(args.inputDirectory)
# Read configuration files
config = util.readConfigurationFiles()
header = config.getboolean("server", "PBS_header")
# Read samples file.
samplesFile = util.readsamplesFile()
samples = samplesFile["sample"].tolist()
# Create scripts directory, if it does not exist yet, and cd to it.
if not os.path.exists(scriptsDirectory):
os.mkdir(scriptsDirectory)
os.chdir(scriptsDirectory)
# Create output directory, if it does not exist yet.
if not os.path.exists(outputDirectory):
os.makedirs(outputDirectory)
# Cycle through all the samples and write the bedtools_coverage scripts.
for sample in samples:
# Create script file.
scriptName = "bedtools_coverage_" + sample + ".sh"
script = open(scriptName, 'w')
if header:
util.writeHeader(script, config, "bedtools_coverage")
script.write("bedtools coverage" + " \\\n")
script.write("-a " + os.path.relpath(a) + " \\\n")
script.write("-b " + os.path.relpath(os.path.join(inputDirectory, sample, sample + ".bam")) + " \\\n")
script.write("-g " + os.path.relpath(chromSizes) + " \\\n")
script.write("-hist" + " \\\n")
script.write("-sorted" + " \\\n")
script.write("1> " + os.path.relpath(os.path.join(outputDirectory, sample + ".txt")) + " \\\n")
script.write("2> " + scriptName + ".log")
if (args.submitJobsToQueue.lower() == "yes") | (args.submitJobsToQueue.lower() == "y"):
subprocess.call("submitJobs.py", shell=True)
|
blancha/abcngspipelines
|
utils/bedtools_coverage.py
|
Python
|
gpl-3.0
| 3,688
|
[
"BWA"
] |
cffb68fe4140724ec3014747526704b7cca668174fe09f33b34ba5973404a1ee
|
from django.conf.urls import patterns, url
urlpatterns = patterns('',
(r'^(?P<project_id>\d+)/multiple-presynaptic-terminals$', 'vncbrowser.views.multiple_presynaptic_terminals'),
(r'^(?P<project_id>\d+)/go-to/connector/(?P<connector_id>\d+)/stack/(?P<stack_id>\d+)$', 'vncbrowser.views.goto_connector'),
(r'^(?P<project_id>\d+)$', 'vncbrowser.views.index'),
(r'^(?P<project_id>\d+)/sorted/(?P<order_by>[^/]+)$', 'vncbrowser.views.index'),
(r'^(?P<project_id>\d+)/view/(?P<neuron_id>\d+)$', 'vncbrowser.views.view'),
(r'^(?P<project_id>\d+)/view/(?P<neuron_name>.*)$', 'vncbrowser.views.view'),
(r'^neuron/set_cell_body$', 'vncbrowser.views.set_cell_body'),
(r'^(?P<project_id>\d+)/lines/add$', 'vncbrowser.views.lines_add'),
(r'^(?P<project_id>\d+)/line/(?P<line_id>\d+)$', 'vncbrowser.views.line'),
(r'^(?P<project_id>\d+)/lines/delete$', 'vncbrowser.views.lines_delete'),
(r'^(?P<project_id>\d+)/visual_index$', 'vncbrowser.views.visual_index'),
(r'^(?P<project_id>\d+)/visual_index(/find/(?P<search>[^/]*))?(/sorted/(?P<order_by>[^/]*))?(/cell_body_location/(?P<cell_body_location>[^/]*))?(/page/(?P<page>[0-9]*))?$', 'vncbrowser.views.visual_index'),
)
|
htem/CATMAID
|
django/applications/vncbrowser/urls.py
|
Python
|
agpl-3.0
| 1,208
|
[
"NEURON"
] |
03f7840c51746f455094ea7e3713508aa794c6fb32f371359c34f011d26b9984
|
"""
CP decomposition by classic alternating least squares (ALS).
Author: N. Benjamin Erichson <erichson@uw.edu> and Alex H. Williams
"""
import numpy as np
from scipy import linalg
from tensortools.operations import unfold, khatri_rao
from tensortools.tensors import KTensor
from tensortools.optimize import FitResult, optim_utils
def mcp_als(X, rank, mask, random_state=None, init='randn', skip_modes=[], **options):
"""Fits CP Decomposition with missing data using Alternating Least Squares (ALS).
Parameters
----------
X : (I_1, ..., I_N) array_like
A tensor with ``X.ndim >= 3``.
rank : integer
The `rank` sets the number of components to be computed.
mask : (I_1, ..., I_N) array_like
A binary tensor with the same shape as ``X``. All entries equal to zero
correspond to held out or missing data in ``X``. All entries equal to
one correspond to observed entries in ``X`` and the decomposition is
fit to these datapoints.
random_state : integer, ``RandomState``, or ``None``, optional (default ``None``)
If integer, sets the seed of the random number generator;
If RandomState instance, random_state is the random number generator;
If None, use the RandomState instance used by ``numpy.random``.
init : str, or KTensor, optional (default ``'randn'``).
Specifies initial guess for KTensor factor matrices.
If ``'randn'``, Gaussian random numbers are used to initialize.
If ``'rand'``, uniform random numbers are used to initialize.
If KTensor instance, a copy is made to initialize the optimization.
skip_modes : iterable, optional (default ``[]``).
Specifies modes of the tensor that are not fit. This can be
used to fix certain factor matrices that have been previously
fit.
options : dict, specifying fitting options.
tol : float, optional (default ``tol=1E-5``)
Stopping tolerance for reconstruction error.
max_iter : integer, optional (default ``max_iter = 500``)
Maximum number of iterations to perform before exiting.
min_iter : integer, optional (default ``min_iter = 1``)
Minimum number of iterations to perform before exiting.
max_time : integer, optional (default ``max_time = np.inf``)
Maximum computational time before exiting.
verbose : bool ``{'True', 'False'}``, optional (default ``verbose=True``)
Display progress.
Returns
-------
result : FitResult instance
Object which holds the fitted results. It provides the factor matrices
in form of a KTensor, ``result.factors``.
Notes
-----
Fitting CP decompositions with missing data can be exploited to perform
cross-validation.
References
----------
Williams, A. H.
"Solving Least-Squares Regression with Missing Data."
http://alexhwilliams.info/itsneuronalblog/2018/02/26/censored-lstsq/
"""
# Check inputs.
optim_utils._check_cpd_inputs(X, rank)
# Initialize problem.
U, _ = optim_utils._get_initial_ktensor(init, X, rank, random_state, scale_norm=False)
result = FitResult(U, 'MCP_ALS', **options)
normX = np.linalg.norm((X * mask))
# Main optimization loop.
while result.still_optimizing:
# Iterate over each tensor mode.
for n in range(X.ndim):
# Skip modes that are specified as fixed.
if n in skip_modes:
continue
# i) Normalize factors to prevent singularities.
U.rebalance()
# ii) Unfold data and mask along the nth mode.
unf = unfold(X, n) # i_n x N
m = unfold(mask, n) # i_n x N
# iii) Form Khatri-Rao product of factor matrices.
components = [U[j] for j in range(X.ndim) if j != n]
krt = khatri_rao(components).T # r x N
# iv) Broadcasted solve of linear systems.
# Left hand side of equations, X.shape[n] x R x R
# Right hand side of equations, X.shape[n] x R x 1
lhs_stack = np.matmul(m[:, None, :] * krt[None, :, :], krt.T[None, :, :])
rhs_stack = np.dot(unf * m, krt.T)[:, :, None]
# v) Update factor.
U[n] = np.linalg.solve(lhs_stack, rhs_stack).reshape(X.shape[n], rank)
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Update the optimization result, checks for convergence.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
obj = linalg.norm(mask * (U.full() - X)) / normX
# Update result
result.update(obj)
# Finalize and return the optimization result.
return result.finalize()
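# Illustrative usage sketch (not part of the original module; the data
# below is hypothetical). A random binary mask holds out entries, which,
# as the Notes above mention, can be exploited for cross-validation:
# >>> import numpy as np
# >>> X = np.random.randn(20, 20, 20)
# >>> mask = np.random.rand(20, 20, 20) > 0.1  # observe ~90% of entries
# >>> result = mcp_als(X, rank=3, mask=mask)
# >>> Xhat = result.factors.full()  # low-rank reconstruction of X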
|
ahwillia/tensortools
|
tensortools/optimize/mcp_als.py
|
Python
|
mit
| 4,810
|
[
"Gaussian"
] |
81ea3d9465957a1a4617219a814cdf02e14caf004533bbd142ba65cae1db6a8c
|
# -*- Mode: python; tab-width: 4; indent-tabs-mode:nil; coding:utf-8 -*-
# vim: tabstop=4 expandtab shiftwidth=4 softtabstop=4
#
# MDAnalysis --- http://www.mdanalysis.org
# Copyright (c) 2006-2016 The MDAnalysis Development Team and contributors
# (see the file AUTHORS for the full list of names)
#
# Released under the GNU Public Licence, v2 or any higher version
#
# Please cite your use of MDAnalysis in published work:
#
# R. J. Gowers, M. Linke, J. Barnoud, T. J. E. Reddy, M. N. Melo, S. L. Seyler,
# D. L. Dotson, J. Domanski, S. Buchoux, I. M. Kenney, and O. Beckstein.
# MDAnalysis: A Python package for the rapid analysis of molecular dynamics
# simulations. In S. Benthall and S. Rostrup editors, Proceedings of the 15th
# Python in Science Conference, pages 102-109, Austin, TX, 2016. SciPy.
#
# N. Michaud-Agrawal, E. J. Denning, T. B. Woolf, and O. Beckstein.
# MDAnalysis: A Toolkit for the Analysis of Molecular Dynamics Simulations.
# J. Comput. Chem. 32 (2011), 2319--2327, doi:10.1002/jcc.21787
#
# TPR parser and tpr support module
# Copyright (c) 2011 Zhuyi Xue
# Released under the GNU Public Licence, v2
"""Gromacs portable run input TPR format parser
============================================
The :mod:`~MDAnalysis.topology.TPRParser` module allows reading of a
Gromacs_ portable run input file (a `TPR file`_). Because
the file format of the TPR file is changing rapidly, not all versions
are currently supported. The known working versions and the
approximate Gromacs release numbers are listed in the table
:ref:`TPR format versions <TPR-format-table>`.
.. _`TPR-format-table`:
.. table:: TPR format versions and generations read by :func:`MDAnalysis.topology.TPRParser.parse`.
========== ============== ==================== =====
TPX format TPX generation Gromacs release      read
========== ============== ==================== =====
??         ??             3.3, 3.3.1           no
58         17             4.0, 4.0.2, 4.0.3,   yes
                          4.0.4, 4.0.5, 4.0.6,
                          4.0.7
73         23             4.5.0, 4.5.1, 4.5.2, yes
                          4.5.3, 4.5.4, 4.5.5
83         24             4.6, 4.6.1           yes
100        26             5.0, 5.0.1, 5.0.2,   yes
                          5.0.3, 5.0.4, 5.0.5
103        26             5.1                  yes
110        26             2016                 yes
========== ============== ==================== =====
For further discussion and notes see `Issue 2`_. Please *open a new issue* in
the `Issue Tracker`_ when a new or different TPR file format version should be
supported.
Bonded interactions available in Gromacs are described in table 5.5 of the
`Gromacs manual`_. The following ones are used to build the topology (see
`Issue 463`_):
* bonds: regular bonds (type 1), G96 bonds (type 2), Morse (type 3),
cubic bonds (type 4), connections (type 5), harmonic potentials (type 6),
FENE bonds (type 7), restraint potentials (type 10),
tabulated potential with exclusion/connection (type 8),
tabulated potential without exclusion/connection (type 9), constraints with
exclusion/connection (type 1), constraints without exclusion/connection (type
2)
* angles: regular angles (type 1), G96 angles (type 2), cross bond-bond
(type 3), cross bond-angle (type 4), Urey-Bradley (type 5), quartic angles
(type 6), restricted bending potential (type 10), tabulated angles (type 8)
* dihedrals: proper dihedrals (type 1 and type 9), Ryckaert-Bellemans dihedrals
(type 3), Fourier dihedrals (type 5), restricted dihedrals (type 10),
combined bending-torsion potentials (type 11), tabulated dihedral (type 8)
* impropers: improper dihedrals (type 2), periodic improper dihedrals (type 4)
Classes
-------
.. autoclass:: TPRParser
:members:
:inherited-members:
.. SeeAlso:: :mod:`MDAnalysis.topology.tpr`
Development notes
-----------------
The TPR reader is a pure-python implementation of a basic TPR
parser. Currently the following sections of the topology are parsed:
* Atoms: number, name, type, resname, resid, segid, mass, charge,
[residue, segment, radius, bfactor, resnum]
* Bonds
* Angles
* Dihedrals
* Impropers
This tpr parser is written according to the following files:
- :file:`{gromacs_dir}/src/kernel/gmxdump.c`
- :file:`{gromacs_dir}/src/gmxlib/tpxio.c` (the most important one)
- :file:`{gromacs_dir}/src/gmxlib/gmxfio_rw.c`
- :file:`{gromacs_dir}/src/gmxlib/gmxfio_xdr.c`
- :file:`{gromacs_dir}/include/gmxfiofio.h`
or their equivalent in more recent versions of Gromacs.
The function :func:`read_tpxheader` is based on the
`TPRReaderDevelopment`_ notes. Functions with names starting with
``read_`` or ``do_`` are trying to be similar to those in
:file:`gmxdump.c` or :file:`tpxio.c`, those with ``extract_`` are new.
Wherever ``fver_err(fver)`` is used, it means the tpx version problem
has not been solved. Versions prior to Gromacs 4.0.x are not supported.
.. Links
.. _Gromacs: http://www.gromacs.org
.. _`Gromacs manual`: http://manual.gromacs.org/documentation/5.1/manual-5.1.pdf
.. _TPR file: http://manual.gromacs.org/current/online/tpr.html
.. _`Issue Tracker`: https://github.com/MDAnalysis/mdanalysis/issues
.. _`Issue 2`: https://github.com/MDAnalysis/mdanalysis/issues/2
.. _`Issue 463`: https://github.com/MDAnalysis/mdanalysis/pull/463
.. _TPRReaderDevelopment: https://github.com/MDAnalysis/mdanalysis/wiki/TPRReaderDevelopment
"""
from __future__ import absolute_import
__author__ = "Zhuyi Xue"
__copyright__ = "GNU Public Licence, v2"
import xdrlib
from . import guessers
from ..lib.util import anyopen
from .tpr import utils as tpr_utils
from .base import TopologyReaderBase
from ..core.topologyattrs import Resnums
import logging
logger = logging.getLogger("MDAnalysis.topology.TPRparser")
class TPRParser(TopologyReaderBase):
"""Read topology information from a Gromacs_ TPR_ file.
.. _Gromacs: http://www.gromacs.org
.. _TPR file: http://manual.gromacs.org/current/online/tpr.html
"""
format = 'TPR'
def parse(self):
"""Parse a Gromacs TPR file into a MDAnalysis internal topology structure.
Returns
-------
structure : dict
"""
tprf = anyopen(self.filename, mode='rb').read()
data = xdrlib.Unpacker(tprf)
try:
th = tpr_utils.read_tpxheader(data) # tpxheader
except EOFError:
msg = "{0}: Invalid tpr file or cannot be recognized".format(self.filename)
logger.critical(msg)
raise IOError(msg)
self._log_header(th)
V = th.fver # since it's used very often
state_ngtc = th.ngtc # done init_state() in src/gmxlib/tpxio.c
if th.bBox:
tpr_utils.extract_box_info(data, V)
if state_ngtc > 0 and V >= 28:
if V < 69: # redundancy due to different versions
tpr_utils.ndo_real(data, state_ngtc)
tpr_utils.ndo_real(data, state_ngtc) # relevant to Berendsen tcoupl_lambda
if V < 26:
tpr_utils.fver_err(V)
if th.bTop:
tpr_top = tpr_utils.do_mtop(data, V)
else:
msg = "{0}: No topology found in tpr file".format(self.filename)
logger.critical(msg)
raise IOError(msg)
tpr_top.add_TopologyAttr(Resnums(tpr_top.resids.values.copy()))
return tpr_top
# THE FOLLOWING CODE WORKS FOR TPX VERSION 58, BUT SINCE THIS INFORMATION
# IS NOT OF INTEREST, IT IS NOT COVERED FOR ALL VERSIONS. PARSING STOPS HERE.
# if th.bX:
# ndo_rvec(data, th.natoms)
# if th.bV:
# ndo_rvec(data, th.natoms)
# if th.bF:
# ndo_rvec(data, th.natoms)
# not useful at the moment
# ePBC = -1;
# bPeriodicMols = False
# if th.bIr:
# # update
# data.unpack_int() # ePBC
# data.unpack_bool() # bPeriodicMols
# # 17 < 23. and ir (ir is from the c code, seems not apply here
# if th.fgen < setting.tpx_generation:
# # a crazily long (670 lines) function in c, slightly better here
# # (240 lines), so put it in setting.py
# utils.do_inputrec(data)
def _log_header(self, th):
logger.info("Gromacs version : {0}".format(th.ver_str))
logger.info("tpx version : {0}".format(th.fver))
logger.info("tpx generation : {0}".format(th.fgen))
logger.info("tpx precision : {0}".format(th.precision))
logger.info("tpx file_tag : {0}".format(th.file_tag))
logger.info("tpx natoms : {0}".format(th.natoms))
logger.info("tpx ngtc : {0}".format(th.ngtc))
logger.info("tpx fep_state : {0}".format(th.fep_state))
logger.info("tpx lambda : {0}".format(th.lamb))
logger.debug("tpx bIr (input record): {0}".format(th.bIr))
logger.debug("tpx bTop : {0}".format(th.bTop))
logger.debug("tpx bX : {0}".format(th.bX))
logger.debug("tpx bV : {0}".format(th.bV))
logger.debug("tpx bF : {0}".format(th.bF))
logger.debug("tpx bBox : {0}".format(th.bBox))
|
kain88-de/mdanalysis
|
package/MDAnalysis/topology/TPRParser.py
|
Python
|
gpl-2.0
| 9,416
|
[
"Gromacs",
"MDAnalysis"
] |
d4ed30e196c2b4db1f189333be3fb601f67f7f3ed6b3b52cbf196be988966149
|
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Adds guards against function calls with side effects.
Only standalone calls are guarded.
WARNING: This mechanism is incomplete. Particularly, it only guards the
arguments passed to functions, and does not account for indirectly modified
state.
Example:
y = tf.layers.dense(x) # Creates TF variable 'foo'
loss = loss(y)
opt.minimize(loss) # indirectly affects 'foo'
z = tf.get_variable('foo') # Indirectly affects `loss` and 'foo'
# Here, `loss` can be guarded. But `z` cannot.
# TODO(mdan): We should probably define a safe mode where we guard everything.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import gast
from tensorflow.python.autograph.core import converter
from tensorflow.python.autograph.pyct import anno
from tensorflow.python.autograph.pyct import ast_util
from tensorflow.python.autograph.pyct import qual_names
from tensorflow.python.autograph.pyct import templates
from tensorflow.python.autograph.pyct.static_analysis.annos import NodeAnno
class SymbolNamer(object):
"""Describes the interface for SideEffectGuardTransformer's namer."""
def new_symbol(self, name_root, reserved_locals):
"""Generate a new unique function_name.
Args:
name_root: String, used as stem in the new name.
reserved_locals: Set(string), additional local symbols that are reserved.
Returns:
String.
"""
raise NotImplementedError()
class SideEffectGuardTransformer(converter.Base):
"""Adds control dependencies to functions with side effects."""
def _visit_and_reindent(self, nodes):
new_nodes = []
current_dest = new_nodes
alias_map = {}
reindent_requested = False
for n in nodes:
n = self.visit(n)
# NOTE: the order in which these statements execute is important; in
# particular, watch out for ending up with cycles in the AST.
if alias_map:
n = ast_util.rename_symbols(n, alias_map)
if isinstance(n, (list, tuple)):
current_dest.extend(n)
else:
current_dest.append(n)
if anno.hasanno(n, anno.Basic.INDENT_BLOCK_REMAINDER):
reindent_requested = True
new_dest, new_alias_map = anno.getanno(
n, anno.Basic.INDENT_BLOCK_REMAINDER)
anno.delanno(n, anno.Basic.INDENT_BLOCK_REMAINDER)
new_alias_map.update(alias_map)
alias_map = new_alias_map
current_dest = new_dest
if reindent_requested and not current_dest:
# TODO(mdan): There may still be something that could be done.
raise ValueError('Unable to insert statement into the computation flow: '
'it is not followed by any computation which '
'the statement could gate.')
return new_nodes
def visit_FunctionDef(self, node):
node.body = self._visit_and_reindent(node.body)
return node
def visit_With(self, node):
node.body = self._visit_and_reindent(node.body)
return node
def visit_If(self, node):
node.body = self._visit_and_reindent(node.body)
node.orelse = self._visit_and_reindent(node.orelse)
return node
def visit_While(self, node):
node.body = self._visit_and_reindent(node.body)
node.orelse = self._visit_and_reindent(node.orelse)
return node
def visit_Expr(self, node):
self.generic_visit(node)
if isinstance(node.value, gast.Call):
# Patterns of single function calls, like:
# opt.minimize(loss)
# or:
# tf.py_func(...)
# First, attempt to gate future evaluation of args. If that's not
# possible, gate all remaining statements (and that may fail too, see
# _visit_and_reindent).
args_scope = anno.getanno(node.value, NodeAnno.ARGS_SCOPE)
# NOTE: We can't guard object attributes because they may not be writable.
# In addition, avoid renaming well-known names.
# TODO(mdan): Move these names into config.
unguarded_names = (qual_names.QN('self'), qual_names.QN('tf'))
guarded_args = tuple(s for s in args_scope.read
if not s.is_composite() and s not in unguarded_names)
# TODO(mdan): Include all arguments which depended on guarded_args too.
# For example, the following will still cause a race:
# tf.assign(a, a + 1)
# b = a + 1
# tf.assign(a, a + 1) # Control deps here should include `b`
# c = b + 1
# Or maybe we should just raise an "unsafe assign" error?
if guarded_args:
# The aliases may need new names to avoid incorrectly making them local.
# TODO(mdan): This is brutal. It will even rename modules - any fix?
need_alias = tuple(
s for s in guarded_args if s not in args_scope.parent.modified)
aliased_new_names = tuple(
qual_names.QN(
self.ctx.namer.new_symbol(
s.ssf(), args_scope.parent.referenced)) for s in need_alias)
alias_map = dict(zip(need_alias, aliased_new_names))
if len(guarded_args) == 1:
s, = guarded_args
aliased_guarded_args = alias_map.get(s, s)
else:
aliased_guarded_args = gast.Tuple(
[alias_map.get(s, s).ast() for s in guarded_args], None)
template = """
with ag__.utils.control_dependency_on_returns(call):
aliased_guarded_args = ag__.utils.alias_tensors(guarded_args)
"""
control_deps_guard = templates.replace(
template,
call=node.value,
aliased_guarded_args=aliased_guarded_args,
guarded_args=guarded_args)[-1]
else:
alias_map = {}
template = """
with ag__.utils.control_dependency_on_returns(call):
pass
"""
control_deps_guard = templates.replace(template, call=node.value)[-1]
control_deps_guard.body = []
node = control_deps_guard
anno.setanno(node, anno.Basic.INDENT_BLOCK_REMAINDER,
(node.body, alias_map))
return node
def transform(node, ctx):
return SideEffectGuardTransformer(ctx).visit(node)
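# Sketch of the rewrite this transformer performs, based on the templates
# above. The aliased name `loss_1` is illustrative; actual names come from
# the namer. A standalone call such as
#
#     opt.minimize(loss)
#
# becomes, roughly,
#
#     with ag__.utils.control_dependency_on_returns(opt.minimize(loss)):
#         loss_1 = ag__.utils.alias_tensors(loss)
#
# with the remaining statements of the block reindented under the `with`
# so that they are gated by the control dependency.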
|
seanli9jan/tensorflow
|
tensorflow/python/autograph/converters/side_effect_guards.py
|
Python
|
apache-2.0
| 6,845
|
[
"VisIt"
] |
3067c9d00c2fefa3c49755f2ea78d2f38862c60197c5e0a26042680959806aae
|
'''
THIS MODULE HAS BEEN REPLACED BY `gproc.py` AND IT REMAINS HERE FOR LEGACY
PURPOSES
This module defines a class, `GaussianProcess`, which is an
abstraction that allows one to easily work with Gaussian processes.
One main use for the `GaussianProcess` class is Gaussian process
regression (GPR). GPR is also known as Kriging or Least Squares
Collocation. It is a technique for constructing a continuous function
from discrete observations by incorporating a stochastic prior model
for the underlying function. GPR is performed with the `condition`
method of a `GaussianProcess` instance. In addition to GPR, the
`GaussianProcess` class can be used for basic arithmetic with Gaussian
processes and for generating random samples of a Gaussian process.
There are several existing python packages for Gaussian processes (See
www.gaussianprocess.org for an updated list of packages). This module
was written because existing software lacked support for 1) Gaussian
processes with added basis functions, 2) analytical differentiation of
Gaussian processes, and 3) conditioning a Gaussian process with
derivative constraints. Other software packages have a strong focus on
optimizing hyperparameters based on data likelihood. This module does
not include any optimization routines and hyperparameters are always
explicitly specified by the user. However, the `GaussianProcess` class
contains the `likelihood` method which can be used with functions from
`scipy.optimize` to construct a hyperparameter optimization routine.
Gaussian processes
==================
To understand what a Gaussian process is, let's first consider a
random vector :math:`\mathbf{u}` which has a multivariate normal
distribution with mean :math:`\\bar{\mathbf{u}}` and covariance matrix
:math:`\mathbf{C}`. That is to say, each element :math:`u_i` of
:math:`\mathbf{u}` is a normally distributed random variable
with mean :math:`\\bar{u}_i` and covariance :math:`C_{ij}` with
element :math:`u_j`. Each element also has context (e.g., time or
position) denoted as :math:`x_i`. A Gaussian process is the continuous
analogue to the multivariate normal vector, where the context for a
Gaussian process is a continuous variable :math:`x`, rather than the
discrete variable :math:`x_i`. A Gaussian process :math:`u_o` is
defined in terms of a mean *function* :math:`\\bar{u}`, and a
covariance *function* :math:`C_u`. We write this definition of
:math:`u_o` more concisely as
.. math::
u_o \\sim \\mathcal{GP}\\left(\\bar{u},C_u\\right).
Analogous to each element of the random vector :math:`\mathbf{u}`, the
Gaussian process at :math:`x`, denoted as :math:`u_o(x)`, is a
normally distributed random variable with mean :math:`\\bar{u}(x)` and
covariance :math:`C_u(x, x')` with :math:`u_o(x')`.
In this module, we adopt a more general definition of a Gaussian
process by incorporating basis functions. These basis functions are
added to Gaussian processes to account for arbitrary shifts or trends
in the data that we are trying to model. To be more precise, we
consider a Gaussian process :math:`u(x)` to be the combination of
:math:`u_o(x)`, a *proper* Gaussian process, and a set of :math:`m`
basis functions, :math:`\mathbf{p}_u(x) = \{p_i(x)\}_{i=1}^m`, whose
coefficients, :math:`\{c_i\}_{i=1}^m`, have infinite variance. We then
express :math:`u(x)` as
.. math::
u(x) = u_o(x) + \sum_{i=1}^m c_i p_i(x).
When we include these basis functions, the Gaussian process
:math:`u(x)` becomes improper because it has infinite variance. So
when we refer to the covariance function for a Gaussian process
:math:`u(x)`, we are actually referring to the covariance function for
its proper component :math:`u_o(x)`.
Throughout this module we will define a Gaussian process `u(x)` in
terms of its mean function :math:`\\bar{u}(x)`, its covariance
function :math:`C_u(x, x')`, as well as its basis functions
:math:`\mathbf{p}_u(x)`.
We consider five operations on Gaussian processes: addition,
subtraction, scaling, differentiation, and conditioning. Each
operation produces another Gaussian process which possesses the same
five operations. These operations are described below.
Operations on Gaussian processes
================================
Addition
--------
Two uncorrelated Gaussian processes, :math:`u` and :math:`v`, can be
added as
.. math::
u(x) + v(x) = z(x)
where the mean, covariance, and basis functions for :math:`z` are
.. math::
\\bar{z}(x) = \\bar{u}(x) + \\bar{v}(x),
.. math::
C_z(x,x') = C_u(x,x') + C_v(x,x'),
and
.. math::
\mathbf{p}_z(x) = \mathbf{p}_u(x) \cup \mathbf{p}_v(x).
Two `GaussianProcess` instances can be added with the `add` method or
the `+` operator.
Subtraction
-----------
A Gaussian process can be subtracted from another Gaussian processes
as
.. math::
u(x) - v(x) = z(x)
where
.. math::
\\bar{z}(x) = \\bar{u}(x) - \\bar{v}(x),
.. math::
C_z(x,x') = C_u(x,x') + C_v(x,x'),
and
.. math::
\mathbf{p}_z(x) = \mathbf{p}_u(x) \cup \mathbf{p}_v(x).
Two `GaussianProcess` instances can be subtracted with the `subtract`
method or the `-` operator.
Scaling
-------
A Gaussian process can be scaled by a constant as
.. math::
cu(x) = z(x)
where
.. math::
\\bar{z}(x) = c\\bar{u}(x),
.. math::
C_z(x,x') = c^2C_u(x,x'),
and
.. math::
\mathbf{p}_z(x) = \mathbf{p}_u(x).
A `GaussianProcess` instance can be scaled with the `scale` method or
the `*` operator.
Differentiation
---------------
A Gaussian process can be differentiated with respect to :math:`x_i`
as
.. math::
\\frac{\partial}{\partial x_i} u(x) = z(x),
where
.. math::
\\bar{z}(x) = \\frac{\partial}{\partial x_i}\\bar{u}(x),
.. math::
C_z(x,x') = \\frac{\partial^2}{\partial x_i \partial x_i'}
C_u(x,x'),
and
.. math::
\mathbf{p}_z(x) = \\left\{\\frac{\partial}{\partial x_i} p_k(x)
\mid p_k(x) \in \mathbf{p}_u(x)\\right\}
A `GaussianProcess` instance can be differentiated with the
`differentiate` method.
Conditioning
------------
A Gaussian process can be conditioned with :math:`q` noisy
observations of :math:`u(x)`, :math:`\mathbf{d}=\{d_i\}_{i=1}^q`,
which have been made at locations :math:`\mathbf{y}=\{y_i\}_{i=1}^q`.
These observations have noise with zero mean and covariance described
by :math:`\mathbf{C_d}`. The conditioned Gaussian process is
.. math::
u(x) | \mathbf{d} = z(x)
where
.. math::
\\bar{z}(x) = \\bar{u}(x) +
\mathbf{k}(x,\mathbf{y})
\mathbf{K}(\mathbf{y})^{-1}
\mathbf{r}^*,
.. math::
C_{z}(x,x') = C_u(x,x') -
\mathbf{k}(x,\mathbf{y})
\mathbf{K}(\mathbf{y})^{-1}
\mathbf{k}(x',\mathbf{y})^T,
and
.. math::
\mathbf{p}_z(x) = \emptyset.
In the above equations we use the augmented covariance matrices,
:math:`\mathbf{k}` and :math:`\mathbf{K}`, whose entries are
.. math::
\mathbf{k}(x,\mathbf{y}) =
\\left[
\\begin{array}{cc}
\\left[C_u(x,y_i)\\right]_{y_i \in \mathbf{y}}
& \mathbf{p}_u(x) \\\\
\\end{array}
\\right]
and
.. math::
\mathbf{K}(\mathbf{y}) =
\\left[
\\begin{array}{cc}
\mathbf{C_d} + \\left[C_u(y_i,y_j)\\right]_
{y_i,y_j \in \mathbf{y}\\times\mathbf{y}}
& [\mathbf{p}_u(y_i)]_{y_i \in \mathbf{y}} \\\\
[\mathbf{p}_u(y_i)]^T_{y_i \in \mathbf{y}}
& \mathbf{0} \\\\
\\end{array}
\\right].
We define the residual vector as
.. math::
\mathbf{r} = \\left([d_i - \\bar{u}(y_i)]_{i=1}^q\\right)^T
and :math:`\mathbf{r}^*` is the residual vector which has been
suitably padded with zeros. Note that there are no basis functions in
:math:`z` because it is assumed that there is enough data in
:math:`\mathbf{d}` to constrain the basis functions in :math:`u`. If
:math:`\mathbf{d}` is not sufficiently informative then
:math:`\mathbf{K}(\mathbf{y})` will not be invertible. A necessary but
not sufficient condition for :math:`\mathbf{K}(\mathbf{y})` to be
invertible is that :math:`q \geq m`.
A `GaussianProcess` instance can be conditioned with the `condition`
method.
Some commonly used Gaussian processes
=====================================
The `GaussianProcess` class is quite general as it can be instantiated
with any user-specified mean function, covariance function, or set of
basis functions. However, supplying these requisite functions can be
laborious. This module contains several constructors to simplify
instantiating some commonly used types of Gaussian processes. The
types of Gaussian processes which have constructors are listed below.
Isotropic Gaussian Processes
----------------------------
An isotropic Gaussian process has a constant mean and a covariance
function which can be written as a function of
:math:`r = ||x - x'||_2`. More explicitly, an isotropic
Gaussian process has the mean function
.. math::
\\bar{u}(x) = \mu,
and the covariance function
.. math::
C_u(x,x') = \sigma^2 \phi(r\ ; \epsilon),
where :math:`\phi(r\ ; \epsilon)` is a positive definite radial basis
function with shape parameter :math:`\epsilon`. One common choice for
:math:`\phi` is the squared exponential function,
.. math::
\phi(r\ ;\epsilon) = \exp\\left(\\frac{-r^2}{\epsilon^2}\\right),
which has the useful property of being infinitely differentiable. An
instance of an isotropic `GaussianProcess` can be created with the
function `gpiso`. A `GaussianProcess` with a squared exponential
covariance function can be created with the function `gpse`.
Gaussian Process with a Gibbs covariance function
-------------------------------------------------
A Gaussian process with a Gibbs covariance function is useful because,
unlike for isotropic Gaussian processes, it can have a spatially
variable lengthscale. Given some user-specified lengthscale function
:math:`\ell_d(x)`, which gives the lengthscale at :math:`x \in
\mathbb{R}^D` along dimension :math:`d`, the Gibbs covariance function
is
.. math::
C_u(x, x') =
\sigma^2
\prod_{d=1}^D \\left(
\\frac{2 \ell_d(x) \ell_d(x')}{\ell_d(x)^2 + \ell_d(x')^2}
\\right)^{1/2}
\exp\\left(-\sum_{d=1}^D
\\frac{(x_d - x_d')^2}{\ell_d(x)^2 + \ell_d(x')^2}
\\right).
An instance of a `GaussianProcess` with a Gibbs covariance function
can be created with the function `gpgibbs`.
Gaussian Process with monomial basis functions
----------------------------------------------
Polynomials are often added to Gaussian processes to improve their
ability to describe offsets and trends in data. The function `gppoly`
is used to create a `GaussianProcess` with zero mean, zero covariance,
and a set of monomial basis functions that span the space of all
polynomials with some degree, :math:`d`. For example, if :math:`x \in
\mathbb{R}^2` and :math:`d=1`, then the monomial basis functions would
be
.. math::
\mathbf{p}_u(x) = \{1,x_1,x_2\}.
The function `gpbasis` can be used to create a `GaussianProcess` with
any other type of basis functions.
Examples
========
Here we provide a basic example that demonstrates creating a
`GaussianProcess` and performing GPR. Suppose we have five scalar-valued
observations `d` that were made at locations `x`, and we want to
interpolate these observations with GPR
>>> x = [[0.0], [1.0], [2.0], [3.0], [4.0]]
>>> d = [2.3, 2.2, 1.7, 1.8, 2.4]
First we define our prior for the underlying function that we want to
interpolate. We assume an isotropic `GaussianProcess` with a squared
exponential covariance function and the parameter :math:`\mu=0.0`,
:math:`\sigma^2=1.0` and :math:`\epsilon=0.5`.
>>> from rbf.basis import se
>>> from rbf.gauss import gpiso
>>> gp_prior = gpiso(se, (0.0, 1.0, 0.5))
We also want to include an unknown constant offset to our prior model,
which is done with the command
>>> from rbf.gauss import gppoly
>>> gp_prior += gppoly(0)
Now we condition the prior with the observations to form the posterior
>>> gp_post = gp_prior.condition(x, d)
We can now evaluate the mean and covariance of the posterior anywhere
using the `mean` or `covariance` method. We can also evaluate just the
mean and standard deviation with the `meansd` method.
>>> m, s = gp_post.meansd([[0.5], [1.5], [2.5], [3.5]])
References
==========
[1] Rasmussen, C., and Williams, C., Gaussian Processes for Machine
Learning. The MIT Press, 2006.
'''
import logging
import warnings
import numpy as np
import scipy.sparse as sp
import rbf.poly
import rbf.basis
import rbf.linalg
from rbf.utils import assert_shape, get_arg_count, MemoizeArrayInput
from rbf.linalg import (as_array, as_sparse_or_array,
is_positive_definite, PosDefSolver,
PartitionedPosDefSolver)
LOGGER = logging.getLogger(__name__)
def differentiator(delta):
'''
Decorator that makes a function differentiable. The derivatives of the
function are approximated by finite differences. The function must take a
single (N, D) array of positions as input. The returned function takes a
single (N, D) array of positions and a (D,) array derivative specification.
Parameters
----------
delta : float
step size to use for finite differences
'''
def _differentiator(fin):
'''The actual decorator'''
def fout(x, diff):
'''The returned differentiable mean function'''
if not any(diff):
# If no derivatives are specified then return the undifferentiated
# mean. Make sure the output is a numpy array.
out = as_array(fin(x))
return out
else:
# get the axis we are differentiating with respect to
diff_axis = np.argmax(diff)
# make the perturbations
x_plus_dx = np.copy(x)
x_plus_dx[:, diff_axis] += delta
# make a copy of `diff` and lower the derivative along `diff_axis` by
# one.
diff_minus_one = np.copy(diff)
diff_minus_one[diff_axis] -= 1
# compute a first order forward finite difference
out = ( fout(x_plus_dx, diff_minus_one) -
fout(x, diff_minus_one) ) / delta
return out
return fout
return _differentiator
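# Illustrative usage sketch (the function `f` is hypothetical, not part
# of the module). Forward differences approximate the first derivative:
# >>> @differentiator(delta=1e-6)
# ... def f(x):
# ...     return np.sin(x[:, 0])
# >>> x = np.array([[0.0], [0.5]])
# >>> f(x, [1])  # approximately cos(x[:, 0])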
def covariance_differentiator(delta):
'''
Decorator that makes a covariance function differentiable. The derivatives of
the covariance function are approximated by finite differences. The
covariance function must take an (N, D) array and an (M, D) array of
positions as input. The returned function takes an (N, D) array and an (M, D)
array of positions and two (D,) array derivative specifications.
Parameters
----------
delta : float
step size to use for finite differences
'''
def _covariance_differentiator(fin):
'''The actual decorator'''
def fout(x1, x2, diff1, diff2):
'''The returned differentiable mean function'''
if (not any(diff1)) & (not any(diff2)):
# If no derivatives are specified then return the undifferentiated
# covariance.
return as_sparse_or_array(fin(x1, x2))
elif any(diff1):
# get the axis we are differentiating with respect to
diff1_axis = np.argmax(diff1)
# make the perturbations
x1_plus_dx = np.copy(x1)
x1_plus_dx[:, diff1_axis] += delta
# make a copy of `diff1` and lower the derivative along `diff1_axis` by
# one.
diff1_minus_one = np.copy(diff1)
diff1_minus_one[diff1_axis] -= 1
# compute a first order forward finite difference
out = ( fout(x1_plus_dx, x2, diff1_minus_one, diff2) -
fout(x1, x2, diff1_minus_one, diff2) ) / delta
return out
else:
# any(diff2) == True
# get the axis we are differentiating with respect to
diff2_axis = np.argmax(diff2)
# make the perturbations
x2_plus_dx = np.copy(x2)
x2_plus_dx[:, diff2_axis] += delta
# make a copy of `diff2` and lower the derivative along `diff2_axis` by
# one.
diff2_minus_one = np.copy(diff2)
diff2_minus_one[diff2_axis] -= 1
# compute a first order forward finite difference
out = ( fout(x1, x2_plus_dx, diff1, diff2_minus_one) -
fout(x1, x2, diff1, diff2_minus_one) ) / delta
return out
return fout
return _covariance_differentiator
def _combined_dim(dim1, dim2):
'''
Returns the dimensionality of a Gaussian process formed by combining two
Gaussian processes with dimensions `dim1` and `dim2`. The dimensionality can
be an `int` or `None` indicating that it is unspecified
'''
# If both dimensions are unspecified, return None
if (dim1 is None) & (dim2 is None):
return None
# At least one dimension is specified. If only one dimension is specified
# return that
elif dim1 is None:
return dim2
elif dim2 is None:
return dim1
# both dim1 and dim2 are specified. If they are not equal raise an error
elif dim1 == dim2:
return dim1
else:
raise ValueError(
'The `GaussianProcess` instances have inconsistent spatial dimensions')
def _as_covariance(sigma):
'''
Return `sigma` as a covariance matrix. If `sigma` is a 1-D array then square
it and make it a scipy sparse diagonal matrix. Otherwise run `sigma` through
`as_sparse_or_array`
'''
if np.ndim(sigma) == 1:
sigma = np.array(sigma, dtype=float, copy=False)
sigma = sp.diags(sigma**2).tocsc()
sigma = as_sparse_or_array(sigma, dtype=float)
return sigma
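# Example (illustrative): a 1-D array of standard deviations becomes a
# sparse diagonal matrix of variances:
# >>> _as_covariance(np.array([0.1, 0.2])).toarray()
# array([[0.01, 0.  ],
#        [0.  , 0.04]])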
def _all_is_finite(A):
'''
returns True if all values in `A` are finite. `A` can be a numpy array or a
scipy sparse matrix.
'''
if sp.issparse(A):
# get all the nonzero entries
return np.all(np.isfinite(A.data))
else:
return np.all(np.isfinite(A))
def _sample(mean, cov, use_cholesky=False, count=None):
'''
Draws a random sample from the Gaussian process with the specified mean and
covariance.
'''
if use_cholesky:
# draw a sample using a cholesky decomposition. This assumes that `cov` is
# numerically positive definite (i.e. no small negative eigenvalues from
# rounding error).
L = PosDefSolver(cov).L()
if count is None:
w = np.random.normal(0.0, 1.0, mean.shape[0])
u = mean + L.dot(w)
else:
w = np.random.normal(0.0, 1.0, (mean.shape[0], count))
u = (mean[:, None] + L.dot(w)).T
else:
# otherwise use an eigenvalue decomposition, ignoring negative eigenvalues.
# If `cov` is sparse then begrudgingly make it dense.
cov = as_array(cov)
vals, vecs = np.linalg.eigh(cov)
keep = (vals > 0.0)
vals = vals[keep]
vecs = vecs[:, keep]
if count is None:
w = np.random.normal(0.0, np.sqrt(vals))
u = mean + vecs.dot(w)
else:
w = np.random.normal(0.0, np.sqrt(vals[:, None].repeat(count, axis=1)))
u = (mean[:, None] + vecs.dot(w)).T
return u
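# Illustrative usage sketch (hypothetical inputs):
# >>> mean, cov = np.zeros(5), np.eye(5)
# >>> u = _sample(mean, cov, use_cholesky=True)  # one draw, shape (5,)
# >>> U = _sample(mean, cov, count=3)            # three draws, shape (3, 5)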
def likelihood(d, mu, sigma, p=None):
'''
Returns the log likelihood. If `p` is not specified, then the likelihood is
the probability of observing `d` from a normally distributed random vector
with mean `mu` and covariance `sigma`. If `d` is expected to contain some
unknown linear combination of basis vectors (e.g. a constant offset or linear
trend), then `p` should be specified with those basis vectors as its columns.
When `p` is specified, the restricted likelihood is returned. The restricted
likelihood is the probability of observing `R.dot(d)` from a normally
distributed random vector with mean `R.dot(mu)` and covariance
`R.dot(sigma).dot(R.T)`, where `R` is a matrix with rows that are orthogonal
to the columns of `p`. In other words, if `p` is specified then the component
of `d` which lies along the columns of `p` will be ignored.
The restricted likelihood was first described by [1] and it is covered in
more general reference books such as [2]. Both [1] and [2] are good sources
for additional information.
Parameters
----------
d : (N,) array
observations
mu : (N,) array
mean of the random vector
sigma : (N,) array, (N, N) array, or (N, N) scipy sparse matrix
If this is an (N,) array then it describes one standard deviation of the
random vector. If this is an (N, N) array then it describes the
covariances.
p : (N, P) array, optional
Basis vectors. If specified, then `d` is assumed to contain some unknown
linear combination of the columns of `p`.
Notes
-----
Unlike other functions in this module, if the covariance matrix is not
numerically positive definite then this function will fail with an error
rather than trying to coerce it into a positive definite matrix.
References
----------
[1] Harville D. (1974). Bayesian Inference of Variance Components Using Only
Error Contrasts. Biometrica.
[2] Cressie N. (1993). Statistics for Spatial Data. John Wiley & Sons.
'''
d = as_array(d, dtype=float)
assert_shape(d, (None,), 'd')
# number of observations
n = d.shape[0]
mu = as_array(mu, dtype=float)
assert_shape(mu, (n,), 'mu')
sigma = _as_covariance(sigma)
assert_shape(sigma, (n, n), 'sigma')
if p is None:
p = np.zeros((n, 0), dtype=float)
else:
p = as_array(p, dtype=float)
assert_shape(p, (n, None), 'p')
# number of basis vectors
m = p.shape[1]
A = PosDefSolver(sigma)
B = A.solve_L(p)
C = PosDefSolver(B.T.dot(B))
D = PosDefSolver(p.T.dot(p))
a = A.solve_L(d - mu)
b = C.solve_L(B.T.dot(a))
out = 0.5*(D.log_det() -
A.log_det() -
C.log_det() -
a.T.dot(a) +
b.T.dot(b) -
(n-m)*np.log(2*np.pi))
return out
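# Illustrative usage sketch (hypothetical data): the log likelihood of
# observing `d` from a standard normal random vector:
# >>> d = np.array([1.2, -0.9, 0.1])
# >>> likelihood(d, np.zeros(3), np.eye(3))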
def outliers(d, s, mu=None, sigma=None, p=None, tol=4.0, maxitr=50):
'''
Uses a data editing algorithm to identify outliers in `d`. Outliers are
considered to be the data that are abnormally inconsistent with the Gaussian
process described by `mu` (mean), `sigma` (covariance), and `p` (basis
vectors). This function can only be used for data with nonzero, uncorrelated
noise.
The data editing algorithm first conditions the Gaussian process with the
observations, then it compares each residual (`d` minus the expected value
of the posterior, divided by `s`) to the RMS of residuals. Data with
residuals greater than `tol` times the RMS are identified as outliers. This
process is then repeated using the subset of `d` which were not flagged as
outliers. If no new outliers are detected in an iteration then the algorithm
stops.
Parameters
----------
d : (N,) float array
Observations
s : (N,) float array
One standard deviation uncertainty on the observations.
mu : (N,) float array, optional
Mean of the Gaussian process at the observation points. Defaults to zeros.
sigma : (N,) array, (N, N) array, or (N, N) scipy sparse matrix, optional
Covariance of the Gaussian process at the observation points. Defaults to
zeros.
p : (N, P) float array, optional
Basis vectors for the Gaussian process evaluated at the observation points.
Defaults to an (N, 0) array.
tol : float, optional
Outlier tolerance. Smaller values make the algorithm more likely to
identify outliers. A good value is 4.0 and this should not be set any lower
than 2.0.
maxitr : int, optional
Maximum number of iterations.
Returns
-------
out : (N,) bool array
Array indicating which data are outliers
'''
d = as_array(d, dtype=float)
assert_shape(d, (None,), 'd')
# number of observations
n = d.shape[0]
s = as_array(s, dtype=float)
assert_shape(s, (n,), 's')
if mu is None:
mu = np.zeros((n,), dtype=float)
else:
mu = as_array(mu, dtype=float)
assert_shape(mu, (n,), 'mu')
if sigma is None:
sigma = sp.csc_matrix((n, n), dtype=float)
else:
sigma = _as_covariance(sigma)
assert_shape(sigma, (n, n), 'sigma')
if p is None:
p = np.zeros((n, 0), dtype=float)
else:
p = as_array(p, dtype=float)
assert_shape(p, (n, None), 'p')
# number of basis functions
m = p.shape[1]
# total number of outlier detection iterations completed thus far
itr = 0
# boolean array indicating outliers
out = np.zeros(n, dtype=bool)
while True:
LOGGER.debug(
'Starting iteration %s of outlier detection routine' % (itr+1))
# remove rows and cols where `out` is True
sigma_i = sigma[:, ~out][~out, :]
p_i = p[~out]
mu_i = mu[~out]
d_i = d[~out]
s_i = s[~out]
# add data covariance to GP covariance. If an array is added to a sparse
# matrix then the output is a matrix. as_sparse_or_array coerces it back to
# an array
sigma_i = as_sparse_or_array(sigma_i + _as_covariance(s_i))
Ksolver = PartitionedPosDefSolver(sigma_i, p_i)
vec1, vec2 = Ksolver.solve(d_i - mu_i, np.zeros(m))
# dereference everything that we no longer need
del sigma_i, mu_i, p_i, d_i, s_i, Ksolver
fit = mu + sigma[:, ~out].dot(vec1) + p.dot(vec2)
# find new outliers
res = np.abs(fit - d)/s
rms = np.sqrt(np.mean(res[~out]**2))
if np.all(out == (res > tol*rms)):
break
else:
out = res > tol*rms
itr += 1
if itr == maxitr:
warnings.warn('Reached the maximum number of iterations')
break
LOGGER.debug(
'Detected %s outliers out of %s observations' % (sum(out), len(out)))
return out
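# Illustrative usage sketch (hypothetical data; the last value is
# suspect). The return value flags residuals greater than `tol` RMS:
# >>> d = np.array([0.1, -0.2, 0.0, 5.0])
# >>> s = np.ones(4)
# >>> flagged = outliers(d, s, tol=3.0)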
def _io_is_checked(fin):
'''
Decorator that indicates the function has the appropriate input and output
and does not need to be wrapped with the io check functions.
'''
fin._io_is_checked = None
return fin
def _is_null(fin):
'''
Decorator that indicates the mean function returns zeros, covariance function
returns zeros, or the basis functions are an empty set. This is used to avoid
unnecessarily adding arrays of zeros or appending empty arrays.
'''
fin._is_null = None
return fin
@_is_null
@_io_is_checked
def zero_mean(x, diff):
'''mean function that returns zeros'''
return np.zeros((x.shape[0],), dtype=float)
@_is_null
@_io_is_checked
def zero_variance(x, diff):
'''variance function that returns zeros'''
return np.zeros((x.shape[0],), dtype=float)
@_is_null
@_io_is_checked
def zero_sparse_covariance(x1, x2, diff1, diff2):
'''covariance function that returns sparse zeros'''
return sp.csc_matrix((x1.shape[0], x2.shape[0]), dtype=float)
@_is_null
@_io_is_checked
def zero_dense_covariance(x1, x2, diff1, diff2):
'''covariance function that returns dense zeros'''
return np.zeros((x1.shape[0], x2.shape[0]), dtype=float)
@_is_null
@_io_is_checked
def empty_basis(x, diff):
'''empty set of basis functions'''
return np.zeros((x.shape[0], 0), dtype=float)
def _default_variance(covariance):
'''Converts a covariance function to a variance function'''
@_io_is_checked
def variance(x, diff):
cov = covariance(x, x, diff, diff)
# cov may be a CSC sparse matrix or an array. Either way, it has a
# diagonal method
out = cov.diagonal()
return out
return variance
def _mean_io_check(fin):
'''
Decorator that ensures the mean function takes two positional arguments and
returns an array with the appropriate shape.
'''
if hasattr(fin, '_io_is_checked'):
return fin
arg_count = get_arg_count(fin)
@_io_is_checked
def mean_checked(x, diff):
if arg_count == 1:
# `fin` only takes one argument and is assumed to not be differentiable
if any(diff):
raise ValueError(
'The mean of the `GaussianProcess` is not differentiable')
out = fin(x)
else:
# otherwise it is assumed that `fin` takes two arguments
out = fin(x, diff)
out = as_array(out)
assert_shape(out, (x.shape[0],), 'mean_output')
return out
return mean_checked
def _variance_io_check(fin):
'''
Decorator that ensures the variance function takes two positional arguments
and returns an array with the appropriate shape.
'''
if hasattr(fin, '_io_is_checked'):
return fin
arg_count = get_arg_count(fin)
@_io_is_checked
def variance_checked(x, diff):
if arg_count == 1:
# `fin` only takes one argument and is assumed to not be differentiable
if any(diff):
raise ValueError(
'The variance of the `GaussianProcess` is not differentiable')
out = fin(x)
else:
# otherwise it is assumed that `fin` takes two arguments
out = fin(x, diff)
out = as_array(out)
assert_shape(out, (x.shape[0],), 'variance_output')
return out
return variance_checked
def _covariance_io_check(fin):
'''
Decorator that ensures the covariance function takes four positional
arguments and returns either an array or csc sparse matrix with the
appropriate shape.
'''
if hasattr(fin, '_io_is_checked'):
return fin
arg_count = get_arg_count(fin)
@_io_is_checked
def covariance_checked(x1, x2, diff1, diff2):
if arg_count == 2:
# `fin` only takes two argument and is assumed to not be differentiable
if any(diff1) | any(diff2):
raise ValueError(
'The covariance of the `GaussianProcess` is not differentiable')
out = fin(x1, x2)
else:
# otherwise it is assumed that `fin` takes four arguments
out = fin(x1, x2, diff1, diff2)
out = as_sparse_or_array(out)
assert_shape(out, (x1.shape[0], x2.shape[0]), 'covariance_output')
return out
return covariance_checked
def _basis_io_check(fin):
'''
Decorator that ensures the basis function takes two positional arguments and
returns an array with the appropriate shape
'''
if hasattr(fin, '_io_is_checked'):
return fin
arg_count = get_arg_count(fin)
@_io_is_checked
def basis_checked(x, diff):
if arg_count == 1:
# `fin` only takes one argument and is assumed to not be differentiable
if any(diff):
raise ValueError(
'The basis functions for the `GaussianProcess` are not '
'differentiable')
out = fin(x)
else:
# otherwise it is assumed that `fin` takes two arguments
out = fin(x, diff)
out = as_array(out)
assert_shape(out, (x.shape[0], None), 'basis_output')
return out
return basis_checked
def _add(gp1, gp2):
'''
Returns a `GaussianProcess` which is the sum of two `GaussianProcess`
instances.
'''
if hasattr(gp2._mean, '_is_null'):
mean = gp1._mean
elif hasattr(gp1._mean, '_is_null'):
mean = gp2._mean
else:
@_io_is_checked
def mean(x, diff):
out = gp1._mean(x, diff) + gp2._mean(x, diff)
return out
if hasattr(gp2._variance, '_is_null'):
variance = gp1._variance
elif hasattr(gp1._variance, '_is_null'):
variance = gp2._variance
else:
@_io_is_checked
def variance(x, diff):
out = gp1._variance(x, diff) + gp2._variance(x, diff)
return out
if hasattr(gp2._covariance, '_is_null'):
covariance = gp1._covariance
elif hasattr(gp1._covariance, '_is_null'):
covariance = gp2._covariance
else:
@_io_is_checked
def covariance(x1, x2, diff1, diff2):
out = as_sparse_or_array(gp1._covariance(x1, x2, diff1, diff2) +
gp2._covariance(x1, x2, diff1, diff2))
return out
if hasattr(gp2._basis, '_is_null'):
basis = gp1._basis
elif hasattr(gp1._basis, '_is_null'):
basis = gp2._basis
else:
@_io_is_checked
def basis(x, diff):
out = np.hstack((gp1._basis(x, diff),
gp2._basis(x, diff)))
return out
dim = _combined_dim(gp1.dim, gp2.dim)
out = GaussianProcess(mean, covariance, basis=basis, variance=variance,
dim=dim)
return out
def _subtract(gp1, gp2):
'''
Returns a `GaussianProcess` which is the difference of two `GaussianProcess`
instances.
'''
if hasattr(gp2._mean, '_is_null'):
mean = gp1._mean
elif hasattr(gp1._mean, '_is_null'):
@_io_is_checked
def mean(x, diff):
out = -gp2._mean(x, diff)
return out
else:
@_io_is_checked
def mean(x, diff):
out = gp1._mean(x, diff) - gp2._mean(x, diff)
return out
if hasattr(gp2._variance, '_is_null'):
variance = gp1._variance
elif hasattr(gp1._variance, '_is_null'):
variance = gp2._variance
else:
@_io_is_checked
def variance(x, diff):
out = gp1._variance(x, diff) + gp2._variance(x, diff)
return out
if hasattr(gp2._covariance, '_is_null'):
covariance = gp1._covariance
elif hasattr(gp1._covariance, '_is_null'):
covariance = gp2._covariance
else:
@_io_is_checked
def covariance(x1, x2, diff1, diff2):
out = as_sparse_or_array(gp1._covariance(x1, x2, diff1, diff2) +
gp2._covariance(x1, x2, diff1, diff2))
return out
if hasattr(gp2._basis, '_is_null'):
basis = gp1._basis
elif hasattr(gp1._basis, '_is_null'):
basis = gp2._basis
else:
@_io_is_checked
def basis(x, diff):
out = np.hstack((gp1._basis(x, diff),
gp2._basis(x, diff)))
return out
dim = _combined_dim(gp1.dim, gp2.dim)
out = GaussianProcess(mean, covariance, basis=basis, variance=variance,
dim=dim)
return out
def _scale(gp, c):
'''
Returns a scaled `GaussianProcess`.
'''
if hasattr(gp._mean, '_is_null'):
mean = gp._mean
else:
@_io_is_checked
def mean(x, diff):
out = c*gp._mean(x, diff)
return out
if hasattr(gp._variance, '_is_null'):
variance = gp._variance
else:
@_io_is_checked
def variance(x, diff):
out = c**2*gp._variance(x, diff)
return out
if hasattr(gp._covariance, '_is_null'):
covariance = gp._covariance
else:
@_io_is_checked
def covariance(x1, x2, diff1, diff2):
out = c**2*gp._covariance(x1, x2, diff1, diff2)
return out
out = GaussianProcess(mean, covariance, basis=gp._basis, variance=variance,
dim=gp.dim)
return out
def _differentiate(gp, d):
'''
Differentiates a `GaussianProcess`.
'''
if hasattr(gp._mean, '_is_null'):
mean = gp._mean
else:
@_io_is_checked
def mean(x, diff):
out = gp._mean(x, diff + d)
return out
if hasattr(gp._variance, '_is_null'):
variance = gp._variance
else:
@_io_is_checked
def variance(x, diff):
out = gp._variance(x, diff + d)
return out
if hasattr(gp._covariance, '_is_null'):
covariance = gp._covariance
else:
@_io_is_checked
def covariance(x1, x2, diff1, diff2):
out = gp._covariance(x1, x2, diff1 + d, diff2 + d)
return out
if hasattr(gp._basis, '_is_null'):
basis = gp._basis
else:
@_io_is_checked
def basis(x, diff):
out = gp._basis(x, diff + d)
return out
dim = d.shape[0]
out = GaussianProcess(mean, covariance, basis=basis, variance=variance,
dim=dim)
return out
def _condition(gp, y, d, sigma, p, obs_diff, build_inverse):
'''
Returns a conditioned `GaussianProcess`.
'''
@MemoizeArrayInput
def precompute():
# do as many calculations as possible without yet knowing where the
# interpolation points will be. This function is memoized so that I can
# easily dereference the kernel inverse matrix with "clear_caches".
# GP mean at the observation points
mu_y = gp._mean(y, obs_diff)
# GP covariance at the observation points
C_y = gp._covariance(y, y, obs_diff, obs_diff)
# GP basis functions at the observation points
p_y = gp._basis(y, obs_diff)
# Only if noise basis vectors exist, append them to the GP basis vectors
if p.shape[1] != 0:
p_y = np.hstack((p_y, p))
# add data noise to the covariance matrix
C_y = as_sparse_or_array(C_y + sigma)
# Create a factorization for the kernel, for rapid solving
K_y_solver = PartitionedPosDefSolver(C_y, p_y, build_inverse=build_inverse)
# evaluate the right-most operations for computing the mean since they do
# not require knowledge of the interpolation points, store the intermediate
# results as vec1 and vec2
r = d - mu_y
z = np.zeros((p_y.shape[1],), dtype=float)
vec1, vec2 = K_y_solver.solve(r, z)
return K_y_solver, vec1, vec2
@_io_is_checked
def mean(x, diff):
_, vec1, vec2 = precompute()
mu_x = gp._mean(x, diff)
C_xy = gp._covariance(x, y, diff, obs_diff)
p_x = gp._basis(x, diff)
# Only if noise basis vectors exist, pad the GP basis vectors with that
# many zeros
if p.shape[1] != 0:
p_x_pad = np.zeros((p_x.shape[0], p.shape[1]), dtype=float)
p_x = np.hstack((p_x, p_x_pad))
out = mu_x + C_xy.dot(vec1) + p_x.dot(vec2)
return out
@_io_is_checked
def covariance(x1, x2, diff1, diff2):
K_y_solver, _, _ = precompute()
C_x1x2 = gp._covariance(x1, x2, diff1, diff2)
C_x1y = gp._covariance(x1, y, diff1, obs_diff)
C_x2y = gp._covariance(x2, y, diff2, obs_diff)
p_x1 = gp._basis(x1, diff1)
p_x2 = gp._basis(x2, diff2)
# Only if noise basis vectors exist, pad the GP basis vectors with that
# many zeros
if p.shape[1] != 0:
p_x1_pad = np.zeros((p_x1.shape[0], p.shape[1]), dtype=float)
p_x2_pad = np.zeros((p_x2.shape[0], p.shape[1]), dtype=float)
p_x1 = np.hstack((p_x1, p_x1_pad))
p_x2 = np.hstack((p_x2, p_x2_pad))
mat1, mat2 = K_y_solver.solve(C_x2y.T, p_x2.T)
out = C_x1x2 - C_x1y.dot(mat1) - p_x1.dot(mat2)
return out
@_io_is_checked
def variance(x, diff):
K_y_solver, _, _ = precompute()
var_x = gp._variance(x, diff)
C_xy = gp._covariance(x, y, diff, obs_diff)
p_x = gp._basis(x, diff)
# Only if noise basis vectors exist, pad the GP basis vectors with that
# many zeros
if p.shape[1] != 0:
p_x_pad = np.zeros((p_x.shape[0], p.shape[1]), dtype=float)
p_x = np.hstack((p_x, p_x_pad))
mat1, mat2 = K_y_solver.solve(C_xy.T, p_x.T)
# Efficiently get the diagonals of C_xy.dot(mat1) and p_x.dot(mat2)
if sp.issparse(C_xy):
diag1 = C_xy.multiply(mat1.T).sum(axis=1).A[:, 0]
else:
diag1 = np.einsum('ij, ji->i', C_xy, mat1)
diag2 = np.einsum('ij, ji->i', p_x, mat2)
out = var_x - diag1 - diag2
return out
dim = y.shape[1]
out = GaussianProcess(mean, covariance, variance=variance, dim=dim)
return out
class GaussianProcess(object):
'''
A `GaussianProcess` instance represents a stochastic process which is defined
in terms of a mean function, a covariance function, and (optionally) a set of
basis functions. This class is used to perform basic operations on Gaussian
processes which include addition, subtraction, scaling, differentiation,
sampling, and conditioning.
Parameters
----------
mean : function
Function which returns either the mean of the Gaussian process at `x` or a
specified derivative of the mean at `x`. This has the call signature
`out = mean(x)`
or
`out = mean(x, diff)`
`x` is an (N, D) array of positions. `diff` is a (D,) int array derivative
specification (e.g. [0, 1] indicates to return the derivative with respect
to the second spatial dimension). `out` must be an (N,) array. If this
function only takes one argument then it is assumed to not be
differentiable and the `differentiate` method for the `GaussianProcess`
instance will return an error.
covariance : function
Function which returns either the covariance of the Gaussian process
between points `x1` and `x2` or the covariance of the specified derivatives
of the Gaussian process between points `x1` and `x2`. This has the call
signature
`out = covariance(x1, x2)`
or
`out = covariance(x1, x2, diff1, diff2)`
`x1` and `x2` are (N, D) and (M, D) arrays of positions, respectively.
`diff1` and `diff2` are (D,) int array derivative specifications. `out` can
be an (N, M) array or scipy sparse matrix (csc format would be most
efficient). If this function only takes two arguments, then it is assumed
to not be differentiable and the `differentiate` method for the
`GaussianProcess` instance will return an error.
basis : function, optional
Function which returns either the basis functions evaluated at `x` or the
specified derivative of the basis functions evaluated at `x`. This has the
call signature
`out = basis(x)`
or
`out = basis(x, diff)`
`x` is an (N, D) array of positions. `diff` is a (D,) int array derivative
specification. `out` is an (N, P) array where each column corresponds to a
basis function. By default, a `GaussianProcess` instance contains no basis
functions. If this function only takes one argument, then it is assumed to
not be differentiable and the `differentiate` method for the
`GaussianProcess` instance will return an error.
variance : function, optional
A function that returns the variance of the Gaussian process or its
derivative at `x`. This has the call signature
`out = variance(x)`
or
`out = variance(x, diff)`
If this function is provided, it should be a more efficient alternative to
evaluating the covariance matrix at `(x, x)` and then taking the diagonals.
dim : int, optional
Fixes the spatial dimensions of the `GaussianProcess` instance. An error
will be raised if method arguments have a conflicting number of spatial
dimensions.
Notes
-----
1. This class does not check whether the specified covariance function is
positive definite, making it easy to construct an invalid `GaussianProcess`
instance. For this reason, one may prefer to create a `GaussianProcess` with
one of the constructor functions (e.g., `gpse` or `gppoly`).
2. A `GaussianProcess` returned by `add`, `subtract`, `scale`,
`differentiate`, and `condition` has `mean`, `covariance`, and `basis`
function which calls the `mean`, `covariance`, and `basis` functions of its
parents. Due to this recursive implementation, the number of generations of
children is limited by the maximum recursion depth.
Examples
--------
Create a `GaussianProcess` describing Brownian motion
>>> import numpy as np
>>> from rbf.gauss import GaussianProcess
>>> def mean(x): return np.zeros(x.shape[0])
>>> def cov(x1, x2): return np.minimum(x1[:, None, 0], x2[None, :, 0])
>>> gp = GaussianProcess(mean, cov, dim=1) # Brownian motion is 1D
'''
def __init__(self, mean, covariance, basis=None, variance=None, dim=None):
self._mean = _mean_io_check(mean)
self._covariance = _covariance_io_check(covariance)
if basis is None:
basis = empty_basis
self._basis = _basis_io_check(basis)
if variance is None:
variance = _default_variance(self._covariance)
self._variance = _variance_io_check(variance)
self.dim = dim
def __call__(self, *args, **kwargs):
'''
equivalent to calling `meansd`
'''
return self.meansd(*args, **kwargs)
def __add__(self, other):
'''
equivalent to calling `add`
'''
return self.add(other)
def __sub__(self, other):
'''
equivalent to calling `subtract`
'''
return self.subtract(other)
def __mul__(self, c):
'''
equivalent to calling `scale`
'''
return self.scale(c)
def __rmul__(self, c):
'''
equivalent to calling `scale`
'''
return self.__mul__(c)
def __or__(self, args):
'''
equivalent to calling `condition` with positional arguments `args`.
'''
return self.condition(*args)
def add(self, other):
'''
Adds two `GaussianProcess` instances.
Parameters
----------
other : GaussianProcess
Returns
-------
out : GaussianProcess
'''
out = _add(self, other)
return out
def subtract(self, other):
'''
Subtracts two `GaussianProcess` instances.
Parameters
----------
other : GaussianProcess
Returns
-------
out : GaussianProcess
'''
out = _subtract(self, other)
return out
def scale(self, c):
'''
Scales a `GaussianProcess`.
Parameters
----------
c : float
Returns
-------
out : GaussianProcess
'''
c = np.float64(c)
out = _scale(self, c)
return out
def differentiate(self, d):
'''
Returns the derivative of a `GaussianProcess`.
Parameters
----------
d : (D,) int array
Derivative specification
Returns
-------
out : GaussianProcess
'''
d = as_array(d, dtype=int)
assert_shape(d, (self.dim,), 'd')
out = _differentiate(self, d)
return out
def condition(self, y, d, sigma=None, p=None, obs_diff=None,
build_inverse=False):
'''
Returns a conditional `GaussianProcess` which incorporates the observed
data, `d`.
Parameters
----------
y : (N, D) float array
Observation points
d : (N,) float array
Observed values at `y`
sigma : (N,) array, (N, N) array, or (N, N) scipy sparse matrix, optional
Data uncertainty. If this is an (N,) array then it describes one standard
deviation of the data error. If this is an (N, N) array then it describes
the covariances of the data error. If nothing is provided then the error
is assumed to be zero. Note that having zero uncertainty can result in
numerically unstable calculations for large N.
p : (N, P) array, optional
Basis vectors for the noise. The data noise is assumed to contain some
unknown linear combination of the columns of `p`.
obs_diff : (D,) int array, optional
Derivative of the observations. For example, use (1,) if the observations
constrain the slope of a 1-D Gaussian process.
build_inverse : bool, optional
Whether to construct the inverse matrices rather than just the factors
Returns
-------
out : GaussianProcess
'''
## Check the input for errors
y = as_array(y, dtype=float)
assert_shape(y, (None, self.dim), 'y')
# number of observations and spatial dimensions
n, dim = y.shape
d = as_array(d, dtype=float)
assert_shape(d, (n,), 'd')
if sigma is None:
sigma = sp.csc_matrix((n, n), dtype=float)
else:
sigma = _as_covariance(sigma)
assert_shape(sigma, (n, n), 'sigma')
if p is None:
p = np.zeros((n, 0), dtype=float)
else:
p = as_array(p, dtype=float)
assert_shape(p, (n, None), 'p')
if obs_diff is None:
obs_diff = np.zeros(dim, dtype=int)
else:
obs_diff = as_array(obs_diff, dtype=int)
assert_shape(obs_diff, (dim,), 'obs_diff')
out = _condition(self, y, d, sigma, p, obs_diff,
build_inverse=build_inverse)
return out
def likelihood(self, y, d, sigma=None, p=None):
'''
Returns the log likelihood of drawing the observations `d` from this
`GaussianProcess`. The observations could potentially have noise which is
described by `sigma` and `p`. If the Gaussian process contains any basis
functions or if `p` is specified, then the restricted likelihood is
returned. For more information, see the documentation for
`rbf.gauss.likelihood` and references therein.
Parameters
----------
y : (N, D) array
Observation points.
d : (N,) array
Observed values at `y`.
sigma : (N,) array, (N, N) array, or (N, N) sparse matrix, optional
Data uncertainty. If this is an (N,) array then it describes one standard
deviation of the data error. If this is an (N, N) array then it describes
the covariances of the data error. If nothing is provided then the error
is assumed to be zero. Note that having zero uncertainty can result in
numerically unstable calculations for large N.
p : (N, P) float array, optional
Basis vectors for the noise. The data noise is assumed to contain some
unknown linear combination of the columns of `p`.
Returns
-------
out : float
log likelihood.
'''
y = as_array(y, dtype=float)
assert_shape(y, (None, self.dim), 'y')
n, dim = y.shape # number of observations and dimensions
d = as_array(d, dtype=float)
assert_shape(d, (n,), 'd')
if sigma is None:
sigma = sp.csc_matrix((n, n), dtype=float)
else:
sigma = _as_covariance(sigma)
assert_shape(sigma, (n, n), 'sigma')
if p is None:
p = np.zeros((n, 0), dtype=float)
else:
p = as_array(p, dtype=float)
assert_shape(p, (n, None), 'p')
obs_diff = np.zeros(dim, dtype=int)
# find the mean, covariance, and basis for the combination of the Gaussian
# process and the noise.
mu = self._mean(y, obs_diff)
gp_sigma = self._covariance(y, y, obs_diff, obs_diff)
sigma = as_sparse_or_array(gp_sigma + sigma)
gp_p = self._basis(y, obs_diff)
p = np.hstack((gp_p, p))
out = likelihood(d, mu, sigma, p=p)
return out
def outliers(self, y, d, sigma, tol=4.0, maxitr=50):
'''
Uses a data editing algorithm to identify outliers in `d`. Outliers are
considered to be the data that are abnormally inconsistent with the
`GaussianProcess`. This method can only be used for data that has nonzero,
uncorrelated noise.
The data editing algorithm first conditions the `GaussianProcess` with the
observations, then computes each normalized residual (the difference between
`d` and the expected value of the posterior, divided by `sigma`) and compares
it to the RMS of the residuals. Data with residuals greater than `tol` times
the RMS are flagged as outliers. This process is then repeated using the
subset of `d` that was not flagged as outliers. The algorithm stops when an
iteration detects no new outliers.
Parameters
----------
y : (N, D) float array
Observation points.
d : (N,) float array
Observed values at `y`
sigma : (N,) float array
One standard deviation uncertainty on `d`
tol : float, optional
Outlier tolerance. Smaller values make the algorithm more likely to
identify outliers. A good value is 4.0, and it should not be set any
lower than 2.0.
maxitr : int, optional
Maximum number of iterations of the data editing algorithm.
Returns
-------
out : (N,) bool array
Boolean array indicating which data are outliers
'''
y = as_array(y, dtype=float)
assert_shape(y, (None, self.dim), 'y')
n, dim = y.shape # number of observations and dimensions
d = as_array(d, dtype=float)
assert_shape(d, (n,), 'd')
# sigma is kept as a 1-D array
sigma = as_array(sigma, dtype=float)
assert_shape(sigma, (n,), 'sigma')
obs_diff = np.zeros(dim, dtype=int)
# find the mean, covariance, and basis for the combination of the Gaussian
# process and the noise.
gp_mu = self._mean(y, obs_diff)
gp_sigma = self._covariance(y, y, obs_diff, obs_diff)
gp_p = self._basis(y, obs_diff)
out = outliers(d, sigma,
mu=gp_mu, sigma=gp_sigma,
p=gp_p, tol=tol, maxitr=maxitr)
return out
def basis(self, x, diff=None):
'''
Returns the basis functions evaluated at `x`.
Parameters
----------
x : (N, D) array
Evaluation points
diff : (D,) int array
Derivative specification
Returns
-------
out : (N, P) array
'''
x = as_array(x, dtype=float)
assert_shape(x, (None, self.dim), 'x')
if diff is None:
diff = np.zeros(x.shape[1], dtype=int)
else:
diff = as_array(diff, dtype=int)
assert_shape(diff, (x.shape[1],), 'diff')
out = self._basis(x, diff)
# return a dense copy of out
out = as_array(out, copy=True)
return out
def mean(self, x, diff=None):
'''
Returns the mean of the proper component of the `GaussianProcess`.
Parameters
----------
x : (N, D) array
Evaluation points
diff : (D,) int array
Derivative specification
Returns
-------
out : (N,) array
'''
x = as_array(x, dtype=float)
assert_shape(x, (None, self.dim), 'x')
if diff is None:
diff = np.zeros(x.shape[1], dtype=int)
else:
diff = as_array(diff, dtype=int)
assert_shape(diff, (x.shape[1],), 'diff')
out = self._mean(x, diff)
# return a dense copy of out
out = as_array(out, copy=True)
return out
def variance(self, x, diff=None):
'''
Returns the variance of the proper component of the `GaussianProcess`.
Parameters
----------
x : (N, D) array
Evaluation points
diff : (D,) int array
Derivative specification
Returns
-------
out : (N,) array
'''
x = as_array(x, dtype=float)
assert_shape(x, (None, self.dim), 'x')
if diff is None:
diff = np.zeros(x.shape[1], dtype=int)
else:
diff = as_array(diff, dtype=int)
assert_shape(diff, (x.shape[1],), 'diff')
out = self._variance(x, diff)
# return a dense copy of out
out = as_array(out, copy=True)
return out
def covariance(self, x1, x2, diff1=None, diff2=None):
'''
Returns the covariance of the proper component of the `GaussianProcess`.
Parameters
----------
x1, x2 : (N, D) array
Evaluation points
diff1, diff2 : (D,) int array
Derivative specification. For example, if `diff1` is (0,) and `diff2` is
(1,), then the returned covariance matrix will indicate how the Gaussian
process at `x1` covaries with the derivative of the Gaussian process at
`x2`.
Returns
-------
out : (N, N) array
'''
x1 = as_array(x1, dtype=float)
assert_shape(x1, (None, self.dim), 'x1')
x2 = as_array(x2, dtype=float)
assert_shape(x2, (None, self.dim), 'x2')
if diff1 is None:
diff1 = np.zeros(x1.shape[1], dtype=int)
else:
diff1 = as_array(diff1, dtype=int)
assert_shape(diff1, (x1.shape[1],), 'diff1')
if diff2 is None:
diff2 = np.zeros(x2.shape[1], dtype=int)
else:
diff2 = as_array(diff2, dtype=int)
assert_shape(diff2, (x2.shape[1],), 'diff2')
out = self._covariance(x1, x2, diff1, diff2)
# return a dense copy of out
out = as_array(out, copy=True)
return out
def meansd(self, x, chunk_size=100):
'''
Returns the mean and standard deviation of the proper component of the
`GaussianProcess`. This does not return the full covariance matrix, making
it appropriate for evaluating the `GaussianProcess` at many points.
Parameters
----------
x : (N, D) array
Evaluation points
chunk_size : int, optional
Break `x` into chunks with this size and evaluate the `GaussianProcess`
for each chunk. This argument affects the speed and memory usage of this
method, but it does not affect the output. Setting this to a larger value
will reduce the number of Python function calls at the expense of
increased memory usage.
Returns
-------
out_mean : (N,) array
Mean at `x`
out_sd : (N,) array
One standard deviation at `x`
'''
x = as_array(x, dtype=float)
assert_shape(x, (None, self.dim), 'x')
# derivative of output will be zero
diff = np.zeros(x.shape[1], dtype=int)
# count is the total number of points evaluated thus far
count = 0
xlen = x.shape[0]
out_mean = np.zeros(xlen, dtype=float)
out_sd = np.zeros(xlen, dtype=float)
# This block should run at least once to catch any potential errors
while True:
# only log the progress if the mean and sd are being built in multiple
# chunks
if xlen > chunk_size:
LOGGER.debug(
'Computing the mean and std. dev. (chunk size = %s) : '
'%5.1f%% complete' % (chunk_size, (100.0*count)/xlen))
start, stop = count, min(count+chunk_size, xlen)
out_mean[start:stop] = self._mean(x[start:stop], diff)
out_sd[start:stop] = np.sqrt(self._variance(x[start:stop], diff))
count = stop
if count == xlen:
# break out of loop if all the points have been evaluated
break
if xlen > chunk_size:
LOGGER.debug(
'Computing the mean and std. dev. (chunk size = %s) : '
'100.0%% complete' % chunk_size)
return out_mean, out_sd
def sample(self, x, c=None, use_cholesky=False, count=None):
'''
Draws a random sample from the `GaussianProcess`.
Parameters
----------
x : (N, D) array
Evaluation points.
c : (P,) array, optional
Coefficients for the basis functions. If this is not specified then they
are set to zero.
use_cholesky : bool, optional
Indicates whether to use the Cholesky decomposition to create the sample.
The Cholesky decomposition is faster but it assumes that the covariance
matrix is numerically positive definite (i.e. there are no slightly
negative eigenvalues due to rounding error).
count : int, optional
If given, `count` samples will be drawn
Returns
-------
out : (N,) array
'''
x = as_array(x, dtype=float)
assert_shape(x, (None, self.dim), 'x')
# derivative of the sample will be zero
diff = np.zeros(x.shape[1], dtype=int)
mu = self._mean(x, diff)
sigma = self._covariance(x, x, diff, diff)
p = self._basis(x, diff)
if c is not None:
c = as_array(c, dtype=float)
else:
c = np.zeros(p.shape[1])
assert_shape(c, (p.shape[1],), 'c')
out = _sample(mu, sigma, use_cholesky=use_cholesky, count=count) + p.dot(c)
return out
def is_positive_definite(self, x):
'''
Tests if the covariance matrix, which is the covariance function evaluated
at `x`, is positive definite. This is done by testing if the Cholesky
decomposition of the covariance matrix finishes successfully.
Parameters
----------
x : (N, D) array
Evaluation points
Returns
-------
out : bool
Notes
-----
1. This function may return `False` even if the covariance function is
positive definite. This is because some of the eigenvalues for the matrix
are so small that they become slightly negative due to numerical rounding
error. This is most notably the case for the squared exponential covariance
function.
'''
x = as_array(x, dtype=float)
assert_shape(x, (None, self.dim), 'x')
diff = np.zeros(x.shape[1], dtype=int)
cov = self._covariance(x, x, diff, diff)
out = is_positive_definite(cov)
return out
def memoize(self):
'''
Memoizes the `_mean`, `_covariance`, `_variance`, and `_basis` methods for this
`GaussianProcess`. This can improve performance by cutting out redundant
computations, but it may also increase memory consumption.
'''
self._mean = MemoizeArrayInput(self._mean)
self._covariance = MemoizeArrayInput(self._covariance)
self._variance = MemoizeArrayInput(self._variance)
self._basis = MemoizeArrayInput(self._basis)
def gpiso(phi, params, dim=None, check_finite=True):
'''
Creates an isotropic `GaussianProcess` instance which has a constant mean and
a covariance function that is described by a radial basis function.
Parameters
----------
phi : str or RBF instance
Radial basis function describing the covariance function. For example, use
`rbf.basis.se` for a squared exponential covariance function. This must be
positive definite.
params : 3-tuple
Tuple containing the mean, the variance, and the shape parameter for the
Gaussian process, respectively.
dim : int, optional
Fixes the spatial dimensions of the `GaussianProcess` domain. An error will
be raised if method arguments have a conflicting number of spatial
dimensions.
check_finite : bool, optional
Indicates whether to check if the output for `phi` is finite. NaNs or Infs
may be encountered if the `RBF` instance is not sufficiently
differentiable.
Returns
-------
out : GaussianProcess
Notes
-----
Not all radial basis functions are positive definite, which means that it is
possible to instantiate an invalid `GaussianProcess`. The method
`is_positive_definite` provides a necessary but not sufficient test for
positive definiteness. Examples of predefined `RBF` instances which are
positive definite include: `rbf.basis.se`, `rbf.basis.ga`, `rbf.basis.exp`,
`rbf.basis.iq`, `rbf.basis.imq`.
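Examples
--------
An illustrative sketch; the parameter values (mean 0.0, variance 1.0,
shape parameter 2.0) are arbitrary:
>>> gp = gpiso(rbf.basis.se, (0.0, 1.0, 2.0))  # doctest: +SKIP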
'''
phi = rbf.basis.get_rbf(phi)
params = as_array(params, dtype=float)
@_io_is_checked
def mean(x, diff):
a, b, c = params
if not any(diff):
out = np.full(x.shape[0], a, dtype=float)
else:
out = np.zeros(x.shape[0], dtype=float)
return out
@_io_is_checked
def covariance(x1, x2, diff1, diff2):
a, b, c = params
diff = diff1 + diff2
coeff = b*(-1)**sum(diff2)
out = coeff*phi(x1, x2, eps=c, diff=diff)
if check_finite:
if not _all_is_finite(out):
raise ValueError(
'Encountered a non-finite RBF covariance. This may be because the '
'basis function is not sufficiently differentiable.')
return out
@_io_is_checked
def variance(x, diff):
a, b, c = params
coeff = b*(-1)**sum(diff)
value = coeff*phi.center_value(eps=c, diff=2*diff)
if check_finite:
if not _all_is_finite(value):
raise ValueError(
'Encountered a non-finite RBF variance. This may be because the '
'basis function is not sufficiently differentiable.')
out = np.full(x.shape[0], value)
return out
out = GaussianProcess(mean, covariance, variance=variance, dim=dim)
return out
def gpse(params, dim=None):
'''
Creates an isotropic `GaussianProcess` with a squared exponential covariance
function.
Parameters
----------
params : 3-tuple
Tuple containing the mean, the variance, and the shape parameter for the
Gaussian process, respectively.
dim : int, optional
Fixes the spatial dimensions of the `GaussianProcess` domain. An error will
be raised if method arguments have a conflicting number of spatial
dimensions.
Returns
-------
out : GaussianProcess
Notes
-----
1. Some of the eigenvalues for squared exponential covariance matrices are
very small and may be slightly negative due to numerical rounding error.
Consequently, the Cholesky decomposition for a squared exponential covariance
matrix will often fail. This becomes a problem when conditioning a squared
exponential `GaussianProcess` with noise-free data.
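Examples
--------
An illustrative sketch with arbitrary parameter values:
>>> gp = gpse((0.0, 1.0, 2.0))  # doctest: +SKIP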
'''
out = gpiso(rbf.basis.se, params, dim=dim, check_finite=False)
return out
def gpexp(params, dim=None, check_finite=True):
'''
Creates an isotropic `GaussianProcess` with an exponential covariance
function.
Parameters
----------
params : 3-tuple
Tuple containing the mean, the variance, and the shape parameter for the
Gaussian process, respectively.
dim : int, optional
Fixes the spatial dimensions of the `GaussianProcess` domain. An error will
be raised if method arguments have a conflicting number of spatial
dimensions.
Returns
-------
out : GaussianProcess
'''
out = gpiso(rbf.basis.exp, params, dim=dim, check_finite=check_finite)
return out
def gpbasis(basis, dim=None, dense=False):
'''
Creates a `GaussianProcess` consisting only of basis functions.
Parameters
----------
basis : function
Function that takes either one argument, `x`, or two arguments, `x` and
`diff`. `x` is an (N, D) array of positions and `diff` is a (D,) array
specifying the derivative. This function returns an (N, P) array, where
each column is a basis function evaluated at `x`.
dim : int, optional
Fixes the spatial dimensions of the `GaussianProcess` domain. An error will
be raised if method arguments have a conflicting number of spatial
dimensions.
dense : bool, optional
If True, then the covariance function returns a dense, rather than sparse,
array of zeros. This is useful when the covariance matrices are relatively
small and we do not want to incur the overhead of sparse matrices.
Returns
-------
out : GaussianProcess
'''
if dense:
out = GaussianProcess(zero_mean, zero_dense_covariance, basis=basis,
variance=zero_variance, dim=dim)
else:
out = GaussianProcess(zero_mean, zero_sparse_covariance, basis=basis,
variance=zero_variance, dim=dim)
return out
def gppoly(order, dim=None, dense=False):
'''
Returns a `GaussianProcess` consisting of monomial basis functions. The
monomials span the space of all polynomials up to a user-specified order. If
`order` = 0, the basis consists of a constant term; if `order` = 1, it
consists of constant and linear terms; and so on.
Parameters
----------
order : int
Order of the basis functions.
dim : int, optional
Fixes the spatial dimensions of the `GaussianProcess` domain. An error will
be raised if method arguments have a conflicting number of spatial
dimensions.
dense : bool, optional
If True, then the covariance function returns a dense, rather than sparse,
array of zeros. This is useful when the covariance matrices are relatively
small and we do not want to incur the overhead of sparse matrices.
Returns
-------
out : GaussianProcess
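Examples
--------
An illustrative sketch; `gppoly(1)` spans constant and linear monomials:
>>> gp = gppoly(1)  # doctest: +SKIP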
'''
@_io_is_checked
def basis(x, diff):
powers = rbf.poly.monomial_powers(order, x.shape[1])
out = rbf.poly.mvmonos(x, powers, diff)
return out
out = gpbasis(basis, dim=dim, dense=dense)
return out
def gpgibbs(ls, sigma, delta=1e-4):
'''
Returns a `GaussianProcess` with zero mean and a Gibbs covariance function.
The Gibbs kernel has a spatially varying lengthscale.
Parameters
----------
ls: function
Function that takes an (N, D) array of positions and returns an (N, D)
array indicating the lengthscale along each dimension at those positions.
sigma: float
Standard deviation of the Gaussian process.
delta: float, optional
Finite difference spacing to use when calculating the derivative of the
`GaussianProcess`. An analytical solution for the derivative is not
available because the derivative of the `ls` function is unknown.
'''
@_io_is_checked
@covariance_differentiator(delta)
def covariance(x1, x2):
'''
covariance function for the Gibbs Gaussian process.
'''
dim = x1.shape[1]
lsx1 = ls(x1)
lsx2 = ls(x2)
# sanitize the output for `ls`
lsx1 = as_array(lsx1, dtype=float)
lsx2 = as_array(lsx2, dtype=float)
assert_shape(lsx1, x1.shape, 'ls(x1)')
assert_shape(lsx2, x2.shape, 'ls(x2)')
coeff = np.ones((x1.shape[0], x2.shape[0]))
exponent = np.zeros((x1.shape[0], x2.shape[0]))
for i in range(dim):
a = 2 * lsx1[:, None, i] * lsx2[None, :, i]
b = lsx1[:, None, i]**2 + lsx2[None, :, i]**2
coeff *= np.sqrt( a / b )
for i in range(dim):
a = ( x1[:, None, i] - x2[None, :, i] )**2
b = lsx1[:, None, i]**2 + lsx2[None, :, i]**2
exponent -= ( a / b )
out = sigma**2*coeff*np.exp(exponent)
return out
#TODO define the variance function
out = GaussianProcess(zero_mean, covariance)
return out
|
treverhines/RBF
|
rbf/gauss.py
|
Python
|
mit
| 67,258
|
[
"Gaussian"
] |
20507e75fa47c2b1a2ebdef8366d9620e388fa6eafa8534be830559ebb40ab85
|
"""@package gen_tex_tables
Generate tables in LaTeX format for insertion into writeup
"""
import os
import numpy as np
class filterdat():
def __init__(self,name,notes = None):
## identifying name
self.name = name
## 4 x 3 x 4 array of data, indexed by (namebit, sample rate, metric); the metrics are MSE1, MSE2, percent conservative, percent overconfident
self.data = np.zeros((4,3,4))
## notes: if nonempty, a string that's printed next to the filter name in the table's label
self.notes = notes
def set_data(self,Ts,namebit,FID):
line = FID.readline()
dataList = line.split(',')
# remove endline
dataList[3] = dataList[3][0:len(dataList[3])-1]
if Ts == 0.01:
uind = 0
elif Ts == 0.1:
uind = 1
elif Ts == 1.0:
uind = 2
nind = namebit
self.data[nind,uind,0] = float(dataList[0])
self.data[nind,uind,1] = float(dataList[1])
self.data[nind,uind,2] = float(dataList[2])
self.data[nind,uind,3] = float(dataList[3])
def print_out(self,FID=None):
TS = [0.01,0.1,1.0]
if FID is None:
for nb in range(4):
# print to terminal
print('\\begin{table}[h!]')
print('\\begin{tabular}{|c|c|c|c|c|}')
for k in range(3):
print('\hline')
print('%8.4g & %8.4g & %8.4g & %8.4g & %8.4g \\\\' % (TS[k],self.data[nb,k,0],self.data[nb,k,1],self.data[nb,k,2],self.data[nb,k,3]))
print('\end{tabular}')
print('\caption{Performance metrics for ')
print(' %s ' % (self.name.upper()))
if self.notes is not None:
print('(%s) ' % self.notes)
if nb == 0:
print('with no forcing}')
if nb == 1:
print('with Gaussian forcing term}')
if nb == 2:
print('with cosine forcing term}')
if nb == 3:
print('with Gaussian and cosine forcing terms}')
print('\end{table}')
else:
for nb in range(4):
FID.write('\n\n%%************************************************\n%% Performance table for filter %s with namebit %d\n' % (self.name,nb))
# write the table header to the file
FID.write('\\begin{table}[h!]\n')
FID.write('\\centering\n')
FID.write('\\begin{tabular}{|c|c|c|c|c|}\n')
# write column labels
FID.write('\hline\n')
FID.write('$T_s$ & $\mathrm{MSE}_1$ & $\mathrm{MSE}_2$ & Conservative fraction & Optimistic fraction \\\\\n')
for k in range(3):
FID.write('\hline\n')
FID.write('%8.4g & %8.4g & %8.4g & %8.4g & %8.4g \\\\\n' % (TS[k],self.data[nb,k,0],self.data[nb,k,1],self.data[nb,k,2],self.data[nb,k,3]))
FID.write('\\hline\n')
FID.write('\end{tabular}\n')
FID.write('\caption{Performance metrics for ')
FID.write('%s ' % (self.name.upper()))
if self.notes is not None:
FID.write('(%s) ' % self.notes)
if nb == 0:
FID.write('with no forcing}\n')
if nb == 1:
FID.write('with Gaussian forcing term}\n')
if nb == 2:
FID.write('with cosine forcing term}\n')
if nb == 3:
FID.write('with Gaussian and cosine forcing terms}\n')
## create table label
nam = 'tab:' + self.name + '_case_%d' % nb
FID.write('\label{%s}\n' % nam)
FID.write('\end{table}\n')
## print a single table line with the metrics for comparing all filters at a constant sample rate
def print_line_comparison(self,Ts_target,namebit):
nind = namebit
if Ts_target == 0.01:
uind = 0
elif Ts_target == 0.1:
uind = 1
elif Ts_target == 1.0:
uind = 2
lineOut = '%s & %8.4g & %8.4g & %8.4g & %8.4g \\\\\n' % (self.name.upper(),self.data[nind,uind,0],self.data[nind,uind,1],self.data[nind,uind,2],self.data[nind,uind,3])
return lineOut
def main(argin='../trials'):
# file format is "metrics_<filter>_sims_<namebit>_<samplerate>.txt"
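# e.g. "metrics_ekf_sims_0011_fast.txt", or with a particle count for SIR
# filters, "metrics_sir_1000_sims_0011_slow.txt" (illustrative names)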
filters = []
for fn in os.listdir(argin):
# check if name matches format
lf = len(fn)
if lf > 6:
metricsCheck = fn[0:7]
extensionCheck = fn[(lf-3):(lf+1)]
if metricsCheck == 'metrics' and extensionCheck == 'txt':
fnlist = fn.split('_')
filtername = fnlist[1]
# if filtername == "sir", append the number of particles to the name
if filtername == "sir":
filtername = filtername + '(' + fnlist[2] + ')'
juse = -1
if len(filters) > 0:
for j in range(len(filters)):
if filtername == filters[j].name:
juse = j
break
if juse < 0:
# append
filters.append(filterdat(filtername))
juse = len(filters)-1
print(filtername,filters[juse].name,juse)
if filtername[0:3] == 'sir':
Ns = int(fnlist[2])
filters[juse].notes = '%d particles' % (Ns)
namebit = int(fnlist[4],2)
samplerate = (fnlist[5].split('.'))[0]
else:
namebit = int(fnlist[3],2)
samplerate = (fnlist[4].split('.'))[0]
print(samplerate)
if samplerate == 'fast':
Ts = 0.01
elif samplerate == 'medium':
Ts = 0.1
elif samplerate == 'slow':
Ts = 1.0
print('Filter %s, namebit %d, Ts = %f' % (filtername,namebit,Ts))
# set_data
FID = open(argin+'/'+fn,'r')
gar = FID.readline()
print(gar)
filters[juse].set_data(Ts,namebit,FID)
FID.close()
if len(filters)>0:
FID = open('tex_tables.txt','w')
for k in range(len(filters)):
filters[k].print_out(FID)
FID.close()
# now, compare all filters at each target sample rate
Tstar = [0.1,0.01,1.0]
# and print a comparison table for each case
for i in range(len(Tstar)):
FID = open('../../../estimationProjectWriteup/doc/tex/tex_comparison_tables_%d.tex' % (i),'w')
for ko in range(1,4):
FID.write('\n\n%%************************************************\n%% Performance table at Ts = %f with namebit %d\n' % (Tstar[i],ko))
# write the table header to the file
FID.write('\\begin{table}[h!]\n')
FID.write('\\centering\n')
FID.write('\\begin{tabular}{|c|c|c|c|c|}\n')
FID.write('\\hline\nFilter & $\mathrm{MSE}_1$ & $\mathrm{MSE}_2$ & Conservative fraction & Optimistic fraction \\\\\n')
for k in range(len(filters)):
FID.write('\hline\n')
printline = filters[k].print_line_comparison(Tstar[i],ko)
FID.write(printline)
FID.write('\\hline\n')
FID.write('\end{tabular}\n')
FID.write('\caption{Performance metrics comparison of all filters with $T_s$ = %.2f sec and ' % (Tstar[i]))
if ko == 0:
FID.write('no forcing}\n')
if ko == 1:
FID.write('Gaussian forcing term}\n')
if ko == 2:
FID.write('cosine forcing term}\n')
if ko == 3:
FID.write('Gaussian and cosine forcing terms}\n')
## create table label
nam = 'table:compare_case_%d_sample_%d' % (ko,i)
FID.write('\label{%s}\n' % nam)
FID.write('\end{table}\n')
FID.close()
print("Wrote to file tex_comparison_tables_%d.tex" % (i))
if __name__ == "__main__":
main()
|
fatadama/estimation
|
challenge_problem/processing/gen_tex_tables.py
|
Python
|
gpl-2.0
| 6,577
|
[
"Gaussian"
] |
fcf5ce041e2452e27ac1cb12cacc287d421260d8216b2421ccca8cb41ac61db9
|
# Copyright (C) 2015 Hydriz Scholz
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program. If not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA, or visit
# <http://www.gnu.org/copyleft/gpl.html>
import DBRCore
class DBRmostinvokes:
def __init__( self, db='' ):
self.dbquery = DBRCore.DBQuery( db )
self.Wiki = DBRCore.Wiki( db )
def execute( self ):
title = "Modules invoked on the most pages"
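# tl_namespace 828 is the Scribunto "Module:" namespace on MediaWiki wikis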
query = "SELECT tl_title, COUNT(*) FROM templatelinks WHERE tl_namespace = 828 GROUP BY tl_title having COUNT(*) > 10 ORDER BY COUNT(*) DESC LIMIT 500;"
template = '''Lua modules used on the most number of pages (limited to 500 results); data as of <onlyinclude>%s</onlyinclude>.
{| class="wikitable sortable plainlinks" style="width:100%%; margin:auto;"
|- style="white-space:nowrap;"
! No.
! Module
! Uses
|-
%s
|}
[[Category:{{subst:SITENAME}} database reports|{{SUBPAGENAME}}]]
'''
rows = self.dbquery.execute( query )
i = 1
output = []
for row in rows:
pagetitle = 'Module:%s' % ( row[0] )
uses = row[1]
tablerow = '| %d\n| [[%s|%s]]\n| %s\n|-' % ( i, pagetitle, row[0], uses )
output.append( tablerow )
i += 1
contents = template % ( self.Wiki.getDataAsOf(), '\n'.join( output ) )
self.Wiki.outputToWiki( title, contents )
if __name__ == "__main__":
print "This module should not be called directly! Please use dbr.py to run the database reports."
|
Hydriz/DBReports
|
reports/mostinvokes.py
|
Python
|
gpl-3.0
| 1,989
|
[
"VisIt"
] |
cffd61e0634065bf71da779b622b8341e077c2f0c4b82519926fb53d47da4d1f
|
import vtk
import random
class ParametricObjects():
def ParametricObjects(self):
parametricObjects = list()
parametricObjects.append(vtk.vtkParametricBoy())
parametricObjects.append(vtk.vtkParametricConicSpiral())
parametricObjects.append(vtk.vtkParametricCrossCap())
parametricObjects.append(vtk.vtkParametricDini())
parametricObjects.append(vtk.vtkParametricEllipsoid())
parametricObjects[-1].SetXRadius(0.5)
parametricObjects[-1].SetYRadius(2.0)
parametricObjects.append(vtk.vtkParametricEnneper())
parametricObjects.append(vtk.vtkParametricFigure8Klein())
parametricObjects.append(vtk.vtkParametricKlein())
parametricObjects.append(vtk.vtkParametricMobius())
parametricObjects[-1].SetRadius(2)
parametricObjects[-1].SetMinimumV(-0.5)
parametricObjects[-1].SetMaximumV(0.5)
parametricObjects.append(vtk.vtkParametricRandomHills())
parametricObjects[-1].AllowRandomGenerationOff()
parametricObjects.append(vtk.vtkParametricRoman())
parametricObjects.append(vtk.vtkParametricSuperEllipsoid())
parametricObjects[-1].SetN1(0.5)
parametricObjects[-1].SetN2(0.1)
parametricObjects.append(vtk.vtkParametricSuperToroid())
parametricObjects[-1].SetN1(0.2)
parametricObjects[-1].SetN2(3.0)
parametricObjects.append(vtk.vtkParametricTorus())
parametricObjects.append(vtk.vtkParametricSpline())
# Add some points to the parametric spline.
inputPoints = vtk.vtkPoints()
vtk.vtkMath.RandomSeed(8775070)
for i in range(10):
x = vtk.vtkMath.Random(0.0,1.0)
y = vtk.vtkMath.Random(0.0,1.0)
z = vtk.vtkMath.Random(0.0,1.0)
inputPoints.InsertNextPoint(x, y, z)
parametricObjects[-1].SetPoints(inputPoints)
# There are only 15 objects.
parametricFunctionSources = list()
renderers = list()
mappers = list()
actors = list()
textmappers = list()
textactors = list()
# Create a common text property.
textProperty = vtk.vtkTextProperty()
textProperty.SetFontSize(10)
textProperty.SetJustificationToCentered()
# Create a parametric function source, renderer, mapper
# and actor for each object.
for idx, item in enumerate(parametricObjects):
parametricFunctionSources.append(vtk.vtkParametricFunctionSource())
parametricFunctionSources[idx].SetParametricFunction(item)
parametricFunctionSources[idx].Update()
mappers.append(vtk.vtkPolyDataMapper())
mappers[idx].SetInputConnection(parametricFunctionSources[idx].GetOutputPort())
actors.append(vtk.vtkActor())
actors[idx].SetMapper(mappers[idx])
textmappers.append(vtk.vtkTextMapper())
textmappers[idx].SetInput(item.GetClassName())
textmappers[idx].SetTextProperty(textProperty)
textactors.append(vtk.vtkActor2D())
textactors[idx].SetMapper(textmappers[idx])
textactors[idx].SetPosition(100, 16)
renderers.append(vtk.vtkRenderer())
gridDimensions = 4
rendererSize = 200
# Create the RenderWindow
#
renderWindow = vtk.vtkRenderWindow()
renderWindow.SetSize(rendererSize * gridDimensions, rendererSize * gridDimensions)
# Add and position the renders to the render window.
viewport = list()
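# Each viewport is (xmin, ymin, xmax, ymax) in normalized display
# coordinates; the factors below reduce to col/gridDimensions and
# row/gridDimensions.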
for row in range(gridDimensions):
for col in range(gridDimensions):
idx = row * gridDimensions + col
viewport[:] = []
viewport.append(float(col) * rendererSize / (gridDimensions * rendererSize))
viewport.append(float(gridDimensions - (row+1)) * rendererSize / (gridDimensions * rendererSize))
viewport.append(float(col+1)*rendererSize / (gridDimensions * rendererSize))
viewport.append(float(gridDimensions - row) * rendererSize / (gridDimensions * rendererSize))
if idx > (len(parametricObjects) - 1):
continue
renderers[idx].SetViewport(viewport)
renderWindow.AddRenderer(renderers[idx])
renderers[idx].AddActor(actors[idx])
renderers[idx].AddActor(textactors[idx])
renderers[idx].SetBackground(0.2,0.3,0.4)
renderers[idx].ResetCamera()
renderers[idx].GetActiveCamera().Azimuth(30)
renderers[idx].GetActiveCamera().Elevation(-30)
renderers[idx].GetActiveCamera().Zoom(0.9)
renderers[idx].ResetCameraClippingRange()
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(renderWindow)
renderWindow.Render()
interactor.Start()
if __name__ == "__main__":
po = ParametricObjects()
po.ParametricObjects()
|
bond-anton/Space_visualization
|
demo/10_pure_vtk.py
|
Python
|
apache-2.0
| 5,212
|
[
"VTK"
] |
a9e4b88741487d43894f89f47f7789ed5793de5058993383bbce0b9d0c305455
|
# -*- coding: utf-8 -*-
__author__ = "HarshaRani"
__credits__ = ["Upi Lab"]
__license__ = "GPL3"
__version__ = "1.0.0"
__maintainer__ = "HarshaRani"
__email__ = "hrani@ncbs.res.in"
__status__ = "Development"
__updated__ = "Sep 03 2018"
'''
mooseAddChemSolver and mooseDeleteChemSolver add and delete only the
chemical solvers.
'''
import moose
from moose.fixXreacs import fixXreacs
def positionCompt( compt ):
i = 0
while (i != len(compt)-1):
compt[i+1].x1 += compt[i].x1
compt[i+1].x0 += compt[i].x1
i += 1
def mooseDeleteChemSolver(modelRoot):
"""Delete solvers from Chemical Compartment """
compts = moose.wildcardFind(modelRoot + '/##[ISA=ChemCompt]')
#if all(isinstance(x, (moose.CubeMesh,moose.CylMesh)) for x in compts):
if all( ( x.isA["CubeMesh"] or x.isA["CylMesh"]) for x in compts):
for compt in compts:
if moose.exists(compt.path + '/stoich'):
st = moose.element(compt.path + '/stoich')
st_ksolve = st.ksolve
st_dsolve = st.dsolve
moose.delete(st)
if moose.exists((st_ksolve).path):
print("KSolver is deleted for modelpath %s " % st_ksolve)
moose.delete(st_ksolve)
if moose.exists((st_dsolve).path) and st_dsolve.path != '/':
print("DSolver is deleted for modelpath %s " % st_dsolve)
moose.delete(st_dsolve)
else:
return ("mooseDeleteChemSolver is only for deleting Chemical Model solver which has to be `CubeMesh` or `CylMesh` found ",list(set([x.className for x in compts]) - set(['CubeMesh',"CylMesh"])))
def stdSolvertype(solverName):
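# Map assorted user-facing solver names onto canonical ones, e.g.
# "Runge Kutta" -> "gsl"; unrecognized names fall back to "ee".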
if solverName.lower() in ["gssa","gillespie","stochastic","gsolve"]:
return "gssa"
elif solverName.lower() in ["gsl","runge kutta","deterministic","ksolve","rungekutta","rk5","rkf","rk"]:
return "gsl"
elif solverName.lower() in ["ee","exponential euler","exponentialeuler","neutral"]:
return "ee"
return "ee"
def mooseAddChemSolver(modelRoot, solver):
"""
Add the solvers only if all are Chemical compartment
"""
compts = moose.wildcardFind(modelRoot + '/##[ISA=ChemCompt]')
#if all(isinstance(x, (moose.CubeMesh,moose.CylMesh)) for x in compts):
if all( ( x.isA["CubeMesh"] or x.isA["CylMesh"]) for x in compts):
if not compts:
return ("Atleast one compartment is required ")
elif ( len(compts) > 3 ):
return ("Warning: setSolverOnCompt Cannot handle " , len(compts) , " chemical compartments\n")
else:
comptinfo = moose.Annotator(moose.element(compts[0]).path + '/info')
previousSolver = stdSolvertype(comptinfo.solver)
currentSolver = stdSolvertype(solver)
if previousSolver != currentSolver:
comptinfo.solver = currentSolver
if (moose.exists(compts[0].path + '/stoich')):
# "A: and stoich exists then delete the stoich add solver"
mooseDeleteChemSolver(modelRoot)
setCompartmentSolver(modelRoot, currentSolver)
return True
else:
if not moose.exists(compts[0].path + '/stoich'):
# " stoich exist, doing nothing"
setCompartmentSolver(modelRoot, currentSolver)
return True
else:
return ("mooseAddChemSolver is only for adding Chemical Model which has to be `CubeMesh` or `CylMesh` found ",list(set([x.className for x in compts]) - set(['CubeMesh',"CylMesh"])))
def setCompartmentSolver(modelRoot, solver):
"""
Add a solver if the solver type is 'gsl' or 'gssa'; do nothing for 'ee'.
"""
if solver != 'ee':
comptlist = dict((c.volume, c) for c in moose.wildcardFind(modelRoot + '/##[ISA=ChemCompt]'))
vollist = sorted(comptlist.keys())
compts = [comptlist[key] for key in vollist]
#compts = [key for key, value in sorted(comptlist.items(), key=lambda (k,v): (v,k))]
if (len(compts) >1 ):
positionCompt(compts)
fixXreacs( modelRoot )
vollist = sorted(comptlist.keys())
compts = [comptlist[key] for key in vollist]
#compts = [key for key, value in sorted(comptlist.items(), key=lambda (k,v): (v,k))]
for compt in compts:
if solver != 'ee':
if solver.lower() in [ 'gsl', 'runge kutta', 'lsoda' ]:
ksolve = moose.Ksolve(compt.path + '/ksolve')
elif solver.lower() in ['gssa', 'gillespie']:
ksolve = moose.Gsolve(compt.path + '/gsolve')
if (len(compts) > 1):
dsolve = moose.Dsolve(compt.path+'/dsolve')
stoich = moose.Stoich(compt.path + '/stoich')
stoich.ksolve = ksolve
if (len(compts) > 1):
stoich.dsolve = dsolve
stoich.compartment = compt
stoich.reacSystemPath = compt.path + "/##"
dsolveList = moose.wildcardFind(modelRoot+'/##[ISA=Dsolve]')
i = 0
while(i < len(dsolveList)-1):
dsolveList[i+1].buildMeshJunctions(dsolveList[i])
i += 1
if not modelRoot[:1].startswith('/'):
modelRoot ='/'+modelRoot
print( " Solver is added to model path `%s` with `%s` solver" % (modelRoot,solver) )
|
dilawar/moose-core
|
python/moose/chemUtil/add_Delete_ChemicalSolver.py
|
Python
|
gpl-3.0
| 5,578
|
[
"MOOSE"
] |
0560c1d18193d73d0c396d9beff99798359e34ef30932ee19fdd768b7b00cf94
|
"""\
This program inputs a GAMESS basis, e.g. from the EMSL basis set order
form, and formats it into PyQuante format
Copyright (c) 2004, Richard P. Muller. All Rights Reserved.
PyQuante version 1.2 and later is covered by the modified BSD
license. Please see the file LICENSE that is part of this
distribution.
"""
import re,pprint
from PyQuante.Element import name2no
basis_map = {
'6-31g':'p631',
'6-31g**':'p631ss',
'6-31g(d,p)':'p631ss',
'6-31g**++':'p631ppss',
'6-31g++**':'p631ppss',
'6-311g**':'p6311ss',
'6-311g++(2d,2p)':'p6311pp_2d_2p',
'6-311g++(3d,3p)':'p6311pp_3d_3p',
'6-311g++(3df,3pd)':'p6311pp_3df_3pd',
'3-21g':'p321',
'sto3g':'sto3g',
'sto-3g':'sto3g',
'sto-6g':'sto6g',
'lacvp':'lacvp',
'ccpvdz':'ccpvdz',
'cc-pvdz':'ccpvdz',
'ccpvtz':'ccpvtz',
'cc-pvtz':'ccpvtz',
'ccpvqz':'ccpvqz',
'cc-pvqz':'ccpvqz',
'ccpv5z':'ccpv5z',
'cc-pv5z':'ccpv5z',
'ccpv6z':'ccpv6z',
'cc-pv6z':'ccpv6z',
'augccpvdz':'augccpvdz',
'aug-cc-pvdz':'augccpvdz',
'augccpvtz':'augccpvtz',
'aug-cc-pvtz':'augccpvtz',
'augccpvqz':'augccpvqz',
'aug-cc-pvqz':'augccpvqz',
'augccpv5z':'augccpv5z',
'aug-cc-pv5z':'augccpv5z',
'augccpv6z':'augccpv6z',
'aug-cc-pv6z':'augccpv6z',
'dzvp':'dzvp',
}
def importname(modulename, name):
"""Import from a module whose name is determined at runtime.
(Python Cookbook 2nd ed.)
"""
module = __import__(modulename, globals(), locals(), [name])
if not module:
raise ImportError
return getattr(module, name)
def get_basis_data(name):
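# Look up the PyQuante module for `name` in basis_map and import its
# `basis_data`, e.g. get_basis_data("6-31G**") imports from p631ss.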
dc_name = name.lower()
if dc_name not in basis_map:
raise Exception("Can't import basis set %s %s" % (name,dc_name))
return importname(basis_map[dc_name],"basis_data")
def split_comment(line):
"""Split a line into line,comment, where a comment is
started by the ! character"""
res = line.strip().split('!')
if len(res) == 0: return "",""
elif len(res) == 1: return res[0],""
elif len(res) == 2: return res
return res[0],''.join(res[1:])
def parse_gamess_basis(file,**kwargs):
if type(file) == type(""): return parse_gamess_basis(open(file),**kwargs)
maxatno = kwargs.get('maxatno',54)
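# Expected input, an illustrative GAMESS-style fragment (STO-3G hydrogen):
#   HYDROGEN
#   S   3
#     1      3.42525091      0.15432897
#     2      0.62391373      0.53532814
#     3      0.16885540      0.44463454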
basis = {} # RPM changed basis sets from list to dictionary 2/22/2007
atom_line = re.compile('[A-Za-z]{3,}')
while 1:
try:
line = file.next()
except:
break
#line,comment = split_comment(line)
if line.startswith('!'):continue
words = line.split()
if not words: continue
if atom_line.search(line):
#el_string = line.strip()
el_string = line.split()[0]
atno = name2no[el_string]
bfs = basis.setdefault(atno,[])
#basis[atno] = bfs
else:
words = line.split()
sym = words[0]
nprim = int(words[1])
if sym == "L":
sprims = []
pprims = []
for i in xrange(nprim):
line = file.next()
words = line.split()
sprims.append((float(words[1]),float(words[2])))
pprims.append((float(words[1]),float(words[3])))
bfs.append(("S",sprims))
bfs.append(("P",pprims))
else:
prims = []
for i in xrange(nprim):
line = file.next()
words = line.split()
prims.append((float(words[1]),float(words[2])))
bfs.append((sym,prims))
return basis
def main(**kwargs):
fname = kwargs.get('fname','/home/rmuller/dzvp_basis.txt')
oname = kwargs.get('oname','basis_dzvp.py')
file = open(fname)
basis = parse_gamess_basis(file,**kwargs)
string = pprint.pformat(basis)
open(oname,'w').write('basis_data = %s' % string)
return
if __name__ == '__main__': main()
|
berquist/PyQuante
|
PyQuante/Basis/Tools.py
|
Python
|
bsd-3-clause
| 4,058
|
[
"GAMESS"
] |
ee5ccd720a7bed3897d6a04daedf482e6fa3b8e26bbd252d8203712e5290e382
|
# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-
# vi: set ft=python sts=4 ts=4 sw=4 et:
"""The fsl module provides classes for interfacing with the `FSL
<http://www.fmrib.ox.ac.uk/fsl/index.html>`_ command line tools. This
was written to work with FSL version 4.1.4.
Change directory to provide relative paths for doctests
>>> import os
>>> filepath = os.path.dirname( os.path.realpath( __file__ ) )
>>> datadir = os.path.realpath(os.path.join(filepath, '../../testing/data'))
>>> os.chdir(datadir)
"""
import os
from glob import glob
import warnings
from shutil import rmtree
import numpy as np
from nibabel import load
from ... import LooseVersion
from .base import (FSLCommand, FSLCommandInputSpec, Info)
from ..base import (load_template, File, traits, isdefined,
TraitedSpec, BaseInterface, Directory,
InputMultiPath, OutputMultiPath,
BaseInterfaceInputSpec)
from ...utils.filemanip import (list_to_filename, filename_to_list)
from ...utils.misc import human_order_sorted
warn = warnings.warn
warnings.filterwarnings('always', category=UserWarning)
class Level1DesignInputSpec(BaseInterfaceInputSpec):
interscan_interval = traits.Float(mandatory=True,
desc='Interscan interval (in secs)')
session_info = traits.Any(mandatory=True,
desc='Session specific information generated by ``modelgen.SpecifyModel``')
bases = traits.Either(
traits.Dict(traits.Enum(
'dgamma'), traits.Dict(traits.Enum('derivs'), traits.Bool)),
traits.Dict(traits.Enum('gamma'), traits.Dict(
traits.Enum('derivs'), traits.Bool)),
traits.Dict(traits.Enum('none'), traits.Enum(None)),
mandatory=True,
desc="name of basis function and options e.g., {'dgamma': {'derivs': True}}")
model_serial_correlations = traits.Bool(
desc="Option to model serial correlations using an \
autoregressive estimator (order 1). Setting this option is only \
useful in the context of the fsf file. If you set this to False, you need to repeat \
this option for FILMGLS by setting autocorr_noestimate to True", mandatory=True)
contrasts = traits.List(
traits.Either(traits.Tuple(traits.Str,
traits.Enum('T'),
traits.List(traits.Str),
traits.List(traits.Float)),
traits.Tuple(traits.Str,
traits.Enum('T'),
traits.List(traits.Str),
traits.List(traits.Float),
traits.List(traits.Float)),
traits.Tuple(traits.Str,
traits.Enum('F'),
traits.List(
traits.Either(traits.Tuple(traits.Str,
traits.Enum(
'T'),
traits.List(
traits.Str),
traits.List(
traits.Float)),
traits.Tuple(
traits.Str,
traits.Enum(
'T'),
traits.List(
traits.Str),
traits.List(
traits.Float),
traits.List(
traits.Float)))))),
desc="List of contrasts with each contrast being a list of the form - \
[('name', 'stat', [condition list], [weight list], [session list])]. if \
session list is None or not provided, all sessions are used. For F \
contrasts, the condition list should contain previously defined \
T-contrasts.")
class Level1DesignOutputSpec(TraitedSpec):
fsf_files = OutputMultiPath(File(exists=True),
desc='FSL feat specification files')
ev_files = OutputMultiPath(traits.List(File(exists=True)),
desc='condition information files')
class Level1Design(BaseInterface):
"""Generate FEAT specific files
Examples
--------
>>> level1design = Level1Design()
>>> level1design.inputs.interscan_interval = 2.5
>>> level1design.inputs.bases = {'dgamma':{'derivs': False}}
>>> level1design.inputs.session_info = 'session_info.npz'
>>> level1design.run() # doctest: +SKIP
"""
input_spec = Level1DesignInputSpec
output_spec = Level1DesignOutputSpec
def _create_ev_file(self, evfname, evinfo):
f = open(evfname, 'wt')
for i in evinfo:
if len(i) == 3:
f.write('%f %f %f\n' % (i[0], i[1], i[2]))
else:
f.write('%f\n' % i[0])
f.close()
def _create_ev_files(
self, cwd, runinfo, runidx, usetd, contrasts, no_bases,
do_tempfilter):
"""Creates EV files from condition and regressor information.
Parameters:
-----------
runinfo : dict
Generated by `SpecifyModel` and contains information
about events and other regressors.
runidx : int
Index to run number
usetd : int
Whether or not to use temporal derivatives for
conditions
contrasts : list of lists
Information on contrasts to be evaluated
no_bases : bool
Whether the design uses no basis functions (EVs are written without
HRF convolution)
do_tempfilter : int
Whether temporal filtering is applied to the EVs
"""
conds = {}
evname = []
ev_hrf = load_template('feat_ev_hrf.tcl')
ev_none = load_template('feat_ev_none.tcl')
ev_ortho = load_template('feat_ev_ortho.tcl')
ev_txt = ''
# generate sections for conditions and other nuisance
# regressors
num_evs = [0, 0]
for field in ['cond', 'regress']:
for i, cond in enumerate(runinfo[field]):
name = cond['name']
evname.append(name)
evfname = os.path.join(cwd, 'ev_%s_%d_%d.txt' % (name, runidx,
len(evname)))
evinfo = []
num_evs[0] += 1
num_evs[1] += 1
if field == 'cond':
for j, onset in enumerate(cond['onset']):
try:
amplitudes = cond['amplitudes']
if len(amplitudes) > 1:
amp = amplitudes[j]
else:
amp = amplitudes[0]
except KeyError:
amp = 1
if len(cond['duration']) > 1:
evinfo.insert(j, [onset, cond['duration'][j], amp])
else:
evinfo.insert(j, [onset, cond['duration'][0], amp])
if no_bases:
ev_txt += ev_none.substitute(ev_num=num_evs[0],
ev_name=name,
tempfilt_yn=do_tempfilter,
cond_file=evfname)
else:
ev_txt += ev_hrf.substitute(ev_num=num_evs[0],
ev_name=name,
tempfilt_yn=do_tempfilter,
temporalderiv=usetd,
cond_file=evfname)
if usetd:
evname.append(name + 'TD')
num_evs[1] += 1
elif field == 'regress':
evinfo = [[j] for j in cond['val']]
ev_txt += ev_none.substitute(ev_num=num_evs[0],
ev_name=name,
tempfilt_yn=do_tempfilter,
cond_file=evfname)
ev_txt += "\n"
conds[name] = evfname
self._create_ev_file(evfname, evinfo)
# add ev orthogonalization
for i in range(1, num_evs[0] + 1):
for j in range(0, num_evs[0] + 1):
ev_txt += ev_ortho.substitute(c0=i, c1=j)
ev_txt += "\n"
# add contrast info to fsf file
if isdefined(contrasts):
contrast_header = load_template('feat_contrast_header.tcl')
contrast_prolog = load_template('feat_contrast_prolog.tcl')
contrast_element = load_template('feat_contrast_element.tcl')
contrast_ftest_element = load_template(
'feat_contrast_ftest_element.tcl')
contrastmask_header = load_template('feat_contrastmask_header.tcl')
contrastmask_footer = load_template('feat_contrastmask_footer.tcl')
contrastmask_element = load_template(
'feat_contrastmask_element.tcl')
# add t/f contrast info
ev_txt += contrast_header.substitute()
con_names = []
for j, con in enumerate(contrasts):
con_names.append(con[0])
con_map = {}
ftest_idx = []
ttest_idx = []
for j, con in enumerate(contrasts):
if con[1] == 'F':
ftest_idx.append(j)
for c in con[2]:
if c[0] not in con_map.keys():
con_map[c[0]] = []
con_map[c[0]].append(j)
else:
ttest_idx.append(j)
for ctype in ['real', 'orig']:
for j, con in enumerate(contrasts):
if con[1] == 'F':
continue
tidx = ttest_idx.index(j) + 1
ev_txt += contrast_prolog.substitute(cnum=tidx,
ctype=ctype,
cname=con[0])
count = 0
for c in range(1, len(evname) + 1):
if evname[c - 1].endswith('TD') and ctype == 'orig':
continue
count = count + 1
if evname[c - 1] in con[2]:
val = con[3][con[2].index(evname[c - 1])]
else:
val = 0.0
ev_txt += contrast_element.substitute(cnum=tidx,
element=count,
ctype=ctype, val=val)
ev_txt += "\n"
if con[0] in con_map.keys():
for fconidx in con_map[con[0]]:
ev_txt += contrast_ftest_element.substitute(
cnum=ftest_idx.index(fconidx) + 1,
element=tidx,
ctype=ctype,
val=1)
ev_txt += "\n"
# add contrast mask info
ev_txt += contrastmask_header.substitute()
for j, _ in enumerate(contrasts):
for k, _ in enumerate(contrasts):
if j != k:
ev_txt += contrastmask_element.substitute(c1=j + 1,
c2=k + 1)
ev_txt += contrastmask_footer.substitute()
return num_evs, ev_txt
def _format_session_info(self, session_info):
if isinstance(session_info, dict):
session_info = [session_info]
return session_info
def _get_func_files(self, session_info):
"""Returns functional files in the order of runs
"""
func_files = []
for i, info in enumerate(session_info):
func_files.insert(i, info['scans'])
return func_files
def _run_interface(self, runtime):
cwd = os.getcwd()
fsf_header = load_template('feat_header_l1.tcl')
fsf_postscript = load_template('feat_nongui.tcl')
prewhiten = 0
if isdefined(self.inputs.model_serial_correlations):
prewhiten = int(self.inputs.model_serial_correlations)
usetd = 0
no_bases = False
basis_key = self.inputs.bases.keys()[0]
if basis_key in ['dgamma', 'gamma']:
usetd = int(self.inputs.bases[basis_key]['derivs'])
if basis_key == 'none':
no_bases = True
session_info = self._format_session_info(self.inputs.session_info)
func_files = self._get_func_files(session_info)
n_tcon = 0
n_fcon = 0
if isdefined(self.inputs.contrasts):
for i, c in enumerate(self.inputs.contrasts):
if c[1] == 'T':
n_tcon += 1
elif c[1] == 'F':
n_fcon += 1
for i, info in enumerate(session_info):
do_tempfilter = 1
if info['hpf'] == np.inf:
do_tempfilter = 0
num_evs, cond_txt = self._create_ev_files(cwd, info, i, usetd,
self.inputs.contrasts,
no_bases, do_tempfilter)
nim = load(func_files[i])
(_, _, _, timepoints) = nim.get_shape()
fsf_txt = fsf_header.substitute(run_num=i,
interscan_interval=self.inputs.interscan_interval,
num_vols=timepoints,
prewhiten=prewhiten,
num_evs=num_evs[0],
num_evs_real=num_evs[1],
num_tcon=n_tcon,
num_fcon=n_fcon,
high_pass_filter_cutoff=info[
'hpf'],
temphp_yn=do_tempfilter,
func_file=func_files[i])
fsf_txt += cond_txt
fsf_txt += fsf_postscript.substitute(overwrite=1)
f = open(os.path.join(cwd, 'run%d.fsf' % i), 'w')
f.write(fsf_txt)
f.close()
return runtime
def _list_outputs(self):
outputs = self.output_spec().get()
cwd = os.getcwd()
outputs['fsf_files'] = []
outputs['ev_files'] = []
usetd = 0
basis_key = self.inputs.bases.keys()[0]
if basis_key in ['dgamma', 'gamma']:
usetd = int(self.inputs.bases[basis_key]['derivs'])
for runno, runinfo in enumerate(self._format_session_info(self.inputs.session_info)):
outputs['fsf_files'].append(os.path.join(cwd, 'run%d.fsf' % runno))
outputs['ev_files'].insert(runno, [])
evname = []
for field in ['cond', 'regress']:
for i, cond in enumerate(runinfo[field]):
name = cond['name']
evname.append(name)
evfname = os.path.join(
cwd, 'ev_%s_%d_%d.txt' % (name, runno,
len(evname)))
if field == 'cond':
if usetd:
evname.append(name + 'TD')
outputs['ev_files'][runno].append(
os.path.join(cwd, evfname))
return outputs
class FEATInputSpec(FSLCommandInputSpec):
fsf_file = File(exists=True, mandatory=True, argstr="%s", position=0,
desc="File specifying the feat design spec file")
class FEATOutputSpec(TraitedSpec):
feat_dir = Directory(exists=True)
class FEAT(FSLCommand):
"""Uses FSL feat to calculate first level stats
"""
_cmd = 'feat'
input_spec = FEATInputSpec
output_spec = FEATOutputSpec
def _list_outputs(self):
outputs = self._outputs().get()
is_ica = False
outputs['feat_dir']=None
with open(self.inputs.fsf_file, 'rt') as fp:
text = fp.read()
if "set fmri(inmelodic) 1" in text:
is_ica = True
for line in text.split('\n'):
if line.find("set fmri(outputdir)")>-1:
try:
outputdir_spec=line.split('"')[-2]
if os.path.exists(outputdir_spec):
outputs['feat_dir']=outputdir_spec
except:
pass
if not outputs['feat_dir']:
if is_ica:
outputs['feat_dir'] = glob(os.path.join(os.getcwd(), '*ica'))[0]
else:
outputs['feat_dir'] = glob(os.path.join(os.getcwd(), '*feat'))[0]
print 'Outputs from FEAT:',outputs
return outputs
class FEATModelInputSpec(FSLCommandInputSpec):
fsf_file = File(exists=True, mandatory=True, argstr="%s", position=0,
desc="File specifying the feat design spec file",
copyfile=False)
ev_files = traits.List(File(exists=True),
mandatory=True, argstr="%s",
desc="Event spec files generated by level1design",
position=1, copyfile=False)
class FEATModelOutputSpec(TraitedSpec):
design_file = File(
exists=True, desc='Mat file containing ascii matrix for design')
design_image = File(
exists=True, desc='Graphical representation of design matrix')
design_cov = File(
exists=True, desc='Graphical representation of design covariance')
con_file = File(
exists=True, desc='Contrast file containing contrast vectors')
fcon_file = File(desc='Contrast file containing contrast vectors')
class FEATModel(FSLCommand):
"""Uses FSL feat_model to generate design.mat files
"""
_cmd = 'feat_model'
input_spec = FEATModelInputSpec
output_spec = FEATModelOutputSpec
def _format_arg(self, name, trait_spec, value):
if name == 'fsf_file':
return super(FEATModel, self)._format_arg(name, trait_spec, self._get_design_root(value))
elif name == 'ev_files':
return ''
else:
return super(FEATModel, self)._format_arg(name, trait_spec, value)
def _get_design_root(self, infile):
_, fname = os.path.split(infile)
return fname.split('.')[0]
def _list_outputs(self):
# TODO: figure out file names and get rid off the globs
outputs = self._outputs().get()
root = self._get_design_root(list_to_filename(self.inputs.fsf_file))
design_file = glob(os.path.join(os.getcwd(), '%s*.mat' % root))
assert len(design_file) == 1, 'No mat file generated by FEAT Model'
outputs['design_file'] = design_file[0]
design_image = glob(os.path.join(os.getcwd(), '%s.png' % root))
assert len(
design_image) == 1, 'No design image generated by FEAT Model'
outputs['design_image'] = design_image[0]
design_cov = glob(os.path.join(os.getcwd(), '%s_cov.png' % root))
assert len(
design_cov) == 1, 'No covariance image generated by FEAT Model'
outputs['design_cov'] = design_cov[0]
con_file = glob(os.path.join(os.getcwd(), '%s*.con' % root))
assert len(con_file) == 1, 'No con file generated by FEAT Model'
outputs['con_file'] = con_file[0]
fcon_file = glob(os.path.join(os.getcwd(), '%s*.fts' % root))
if fcon_file:
assert len(fcon_file) == 1, 'No fts file generated by FEAT Model'
outputs['fcon_file'] = fcon_file[0]
return outputs
class FILMGLSInputSpec(FSLCommandInputSpec):
in_file = File(exists=True, mandatory=True, position=-3,
argstr='%s',
desc='input data file')
design_file = File(exists=True, position=-2,
argstr='%s',
desc='design matrix file')
threshold = traits.Range(default=1000., low=0.0, argstr='%f',
position=-1, usedefault=True,
desc='threshold')
smooth_autocorr = traits.Bool(argstr='-sa',
desc='Smooth auto corr estimates')
mask_size = traits.Int(argstr='-ms %d',
desc="susan mask size")
brightness_threshold = traits.Range(low=0, argstr='-epith %d',
desc='susan brightness threshold, otherwise it is estimated')
full_data = traits.Bool(argstr='-v', desc='output full data')
_estimate_xor = ['autocorr_estimate_only', 'fit_armodel', 'tukey_window',
'multitaper_product', 'use_pava', 'autocorr_noestimate']
autocorr_estimate_only = traits.Bool(argstr='-ac',
xor=_estimate_xor,
desc='perform autocorrelation estimation only')
fit_armodel = traits.Bool(argstr='-ar', xor=_estimate_xor,
desc='fits autoregressive model - default is to use tukey with M=sqrt(numvols)')
tukey_window = traits.Int(argstr='-tukey %d', xor=_estimate_xor,
desc='tukey window size to estimate autocorr')
multitaper_product = traits.Int(argstr='-mt %d', xor=_estimate_xor,
desc='multitapering with slepian tapers and num is the time-bandwidth product')
use_pava = traits.Bool(
argstr='-pava', desc='estimates autocorr using PAVA')
autocorr_noestimate = traits.Bool(argstr='-noest', xor=_estimate_xor,
desc='do not estimate autocorrs')
output_pwdata = traits.Bool(argstr='-output_pwdata',
desc='output prewhitened data and average design matrix')
results_dir = Directory('results', argstr='-rn %s', usedefault=True,
desc='directory to store results in')
class FILMGLSInputSpec505(FSLCommandInputSpec):
in_file = File(exists=True, mandatory=True, position=-3,
argstr='--in=%s', desc='input data file')
design_file = File(exists=True, position=-2,
argstr='--pd=%s', desc='design matrix file')
threshold = traits.Range(default=1000., low=0.0, argstr='--thr=%f',
position=-1, usedefault=True, desc='threshold')
smooth_autocorr = traits.Bool(argstr='--sa',
desc='Smooth auto corr estimates')
mask_size = traits.Int(argstr='--ms=%d', desc="susan mask size")
brightness_threshold = traits.Range(low=0, argstr='--epith=%d',
desc=('susan brightness threshold, '
'otherwise it is estimated'))
full_data = traits.Bool(argstr='-v', desc='output full data')
_estimate_xor = ['autocorr_estimate_only', 'fit_armodel', 'tukey_window',
'multitaper_product', 'use_pava', 'autocorr_noestimate']
autocorr_estimate_only = traits.Bool(argstr='--ac', xor=_estimate_xor,
desc=('perform autocorrelation '
'estimation only'))
fit_armodel = traits.Bool(argstr='--ar', xor=_estimate_xor,
desc=('fits autoregressive model - default is to '
'use tukey with M=sqrt(numvols)'))
tukey_window = traits.Int(argstr='--tukey=%d', xor=_estimate_xor,
desc='tukey window size to estimate autocorr')
multitaper_product = traits.Int(argstr='--mt=%d', xor=_estimate_xor,
desc=('multitapering with slepian tapers '
'and num is the time-bandwidth '
'product'))
use_pava = traits.Bool(argstr='--pava', desc='estimates autocorr using PAVA')
autocorr_noestimate = traits.Bool(argstr='--noest', xor=_estimate_xor,
desc='do not estimate autocorrs')
output_pwdata = traits.Bool(argstr='--outputPWdata',
desc=('output prewhitened data and average '
'design matrix'))
results_dir = Directory('results', argstr='--rn=%s', usedefault=True,
desc='directory to store results in')
class FILMGLSOutputSpec(TraitedSpec):
param_estimates = OutputMultiPath(File(exists=True),
desc='Parameter estimates for each column of the design matrix')
residual4d = File(exists=True,
desc='Model fit residual mean-squared error for each time point')
dof_file = File(exists=True, desc='degrees of freedom')
sigmasquareds = File(
        exists=True, desc='summary of residuals; see Woolrich et al., 2001')
results_dir = Directory(exists=True,
desc='directory storing model estimation output')
corrections = File(exists=True,
desc='statistical corrections used within FILM modelling')
logfile = File(exists=True,
desc='FILM run logfile')
class FILMGLS(FSLCommand):
"""Use FSL film_gls command to fit a design matrix to voxel timeseries
Examples
--------
Initialize with no options, assigning them when calling run:
>>> from nipype.interfaces import fsl
>>> fgls = fsl.FILMGLS()
>>> res = fgls.run('in_file', 'design_file', 'thresh', rn='stats') #doctest: +SKIP
Assign options through the ``inputs`` attribute:
>>> fgls = fsl.FILMGLS()
>>> fgls.inputs.in_file = 'functional.nii'
>>> fgls.inputs.design_file = 'design.mat'
>>> fgls.inputs.threshold = 10
>>> fgls.inputs.results_dir = 'stats'
>>> res = fgls.run() #doctest: +SKIP
Specify options when creating an instance:
>>> fgls = fsl.FILMGLS(in_file='functional.nii', \
design_file='design.mat', \
threshold=10, results_dir='stats')
>>> res = fgls.run() #doctest: +SKIP
"""
_cmd = 'film_gls'
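    # film_gls in FSL > 5.0.4 takes long-form (--) options, so the matching
    # input specification is selected from the installed FSL version.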
if Info.version() and LooseVersion(Info.version()) > LooseVersion('5.0.4'):
input_spec = FILMGLSInputSpec505
else:
input_spec = FILMGLSInputSpec
output_spec = FILMGLSOutputSpec
def _get_pe_files(self, cwd):
files = None
if isdefined(self.inputs.design_file):
fp = open(self.inputs.design_file, 'rt')
for line in fp.readlines():
if line.startswith('/NumWaves'):
numpes = int(line.split()[-1])
files = []
for i in range(numpes):
files.append(self._gen_fname('pe%d.nii' % (i + 1),
cwd=cwd))
break
fp.close()
return files
def _list_outputs(self):
outputs = self._outputs().get()
cwd = os.getcwd()
results_dir = os.path.join(cwd, self.inputs.results_dir)
outputs['results_dir'] = results_dir
pe_files = self._get_pe_files(results_dir)
if pe_files:
outputs['param_estimates'] = pe_files
outputs['residual4d'] = self._gen_fname('res4d.nii', cwd=results_dir)
outputs['dof_file'] = os.path.join(results_dir, 'dof')
outputs['sigmasquareds'] = self._gen_fname('sigmasquareds.nii',
cwd=results_dir)
outputs['corrections'] = self._gen_fname('corrections.nii',
cwd=results_dir)
outputs['logfile'] = self._gen_fname('logfile',
change_ext=False,
cwd=results_dir)
return outputs
class FEATRegisterInputSpec(BaseInterfaceInputSpec):
feat_dirs = InputMultiPath(
Directory(exists=True), desc="Lower level feat dirs",
mandatory=True)
reg_image = File(
exists=True, desc="image to register to (will be treated as standard)",
mandatory=True)
reg_dof = traits.Int(
12, desc="registration degrees of freedom", usedefault=True)
class FEATRegisterOutputSpec(TraitedSpec):
fsf_file = File(exists=True,
desc="FSL feat specification file")
class FEATRegister(BaseInterface):
"""Register feat directories to a specific standard
"""
input_spec = FEATRegisterInputSpec
output_spec = FEATRegisterOutputSpec
def _run_interface(self, runtime):
fsf_header = load_template('featreg_header.tcl')
fsf_footer = load_template('feat_nongui.tcl')
fsf_dirs = load_template('feat_fe_featdirs.tcl')
num_runs = len(self.inputs.feat_dirs)
fsf_txt = fsf_header.substitute(num_runs=num_runs,
regimage=self.inputs.reg_image,
regdof=self.inputs.reg_dof)
for i, rundir in enumerate(filename_to_list(self.inputs.feat_dirs)):
fsf_txt += fsf_dirs.substitute(runno=i + 1,
rundir=os.path.abspath(rundir))
fsf_txt += fsf_footer.substitute()
f = open(os.path.join(os.getcwd(), 'register.fsf'), 'wt')
f.write(fsf_txt)
f.close()
return runtime
def _list_outputs(self):
outputs = self._outputs().get()
outputs['fsf_file'] = os.path.abspath(
os.path.join(os.getcwd(), 'register.fsf'))
return outputs
class FLAMEOInputSpec(FSLCommandInputSpec):
cope_file = File(exists=True, argstr='--copefile=%s', mandatory=True,
desc='cope regressor data file')
var_cope_file = File(exists=True, argstr='--varcopefile=%s',
desc='varcope weightings data file')
dof_var_cope_file = File(exists=True, argstr='--dofvarcopefile=%s',
desc='dof data file for varcope data')
mask_file = File(exists=True, argstr='--maskfile=%s', mandatory=True,
desc='mask file')
design_file = File(exists=True, argstr='--designfile=%s', mandatory=True,
desc='design matrix file')
t_con_file = File(
exists=True, argstr='--tcontrastsfile=%s', mandatory=True,
desc='ascii matrix specifying t-contrasts')
f_con_file = File(exists=True, argstr='--fcontrastsfile=%s',
desc='ascii matrix specifying f-contrasts')
cov_split_file = File(
exists=True, argstr='--covsplitfile=%s', mandatory=True,
desc='ascii matrix specifying the groups the covariance is split into')
run_mode = traits.Enum(
'fe', 'ols', 'flame1', 'flame12', argstr='--runmode=%s',
mandatory=True, desc='inference to perform')
n_jumps = traits.Int(
argstr='--njumps=%d', desc='number of jumps made by mcmc')
burnin = traits.Int(argstr='--burnin=%d',
desc='number of jumps at start of mcmc to be discarded')
sample_every = traits.Int(argstr='--sampleevery=%d',
desc='number of jumps for each sample')
fix_mean = traits.Bool(argstr='--fixmean', desc='fix mean for tfit')
infer_outliers = traits.Bool(argstr='--inferoutliers',
desc='infer outliers - not for fe')
no_pe_outputs = traits.Bool(argstr='--nopeoutput',
desc='do not output pe files')
sigma_dofs = traits.Int(argstr='--sigma_dofs=%d',
desc='sigma (in mm) to use for Gaussian smoothing the DOFs in FLAME 2. Default is 1mm, -1 indicates no smoothing')
outlier_iter = traits.Int(argstr='--ioni=%d',
desc='Number of max iterations to use when inferring outliers. Default is 12.')
log_dir = Directory("stats", argstr='--ld=%s', usedefault=True) # ohinds
# no support for ven, vef
class FLAMEOOutputSpec(TraitedSpec):
pes = OutputMultiPath(File(exists=True),
desc=("Parameter estimates for each column of the "
"design matrix for each voxel"))
res4d = OutputMultiPath(File(exists=True),
desc=("Model fit residual mean-squared error for "
"each time point"))
copes = OutputMultiPath(File(exists=True),
desc="Contrast estimates for each contrast")
var_copes = OutputMultiPath(File(exists=True),
desc="Variance estimates for each contrast")
zstats = OutputMultiPath(File(exists=True),
desc="z-stat file for each contrast")
tstats = OutputMultiPath(File(exists=True),
desc="t-stat file for each contrast")
zfstats = OutputMultiPath(File(exists=True),
desc="z stat file for each f contrast")
fstats = OutputMultiPath(File(exists=True),
desc="f-stat file for each contrast")
mrefvars = OutputMultiPath(File(exists=True),
desc=("mean random effect variances for each "
"contrast"))
tdof = OutputMultiPath(File(exists=True),
desc="temporal dof file for each contrast")
weights = OutputMultiPath(File(exists=True),
desc="weights file for each contrast")
stats_dir = Directory(File(exists=True),
desc="directory storing model estimation output")
class FLAMEO(FSLCommand):
"""Use FSL flameo command to perform higher level model fits
Examples
--------
Initialize FLAMEO with no options, assigning them when calling run:
>>> from nipype.interfaces import fsl
>>> import os
>>> flameo = fsl.FLAMEO(cope_file='cope.nii.gz', \
var_cope_file='varcope.nii.gz', \
cov_split_file='cov_split.mat', \
design_file='design.mat', \
t_con_file='design.con', \
mask_file='mask.nii', \
run_mode='fe')
>>> flameo.cmdline
'flameo --copefile=cope.nii.gz --covsplitfile=cov_split.mat --designfile=design.mat --ld=stats --maskfile=mask.nii --runmode=fe --tcontrastsfile=design.con --varcopefile=varcope.nii.gz'
"""
_cmd = 'flameo'
input_spec = FLAMEOInputSpec
output_spec = FLAMEOOutputSpec
# ohinds: 2010-04-06
def _run_interface(self, runtime):
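        # Remove any pre-existing log directory so that flameo writes its
        # outputs to a fresh, predictable location for _list_outputs.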
log_dir = self.inputs.log_dir
cwd = os.getcwd()
if os.access(os.path.join(cwd, log_dir), os.F_OK):
rmtree(os.path.join(cwd, log_dir))
return super(FLAMEO, self)._run_interface(runtime)
# ohinds: 2010-04-06
# made these compatible with flameo
def _list_outputs(self):
outputs = self._outputs().get()
pth = os.path.join(os.getcwd(), self.inputs.log_dir)
pes = human_order_sorted(glob(os.path.join(pth, 'pe[0-9]*.*')))
assert len(pes) >= 1, 'No pe volumes generated by FSL Estimate'
outputs['pes'] = pes
res4d = human_order_sorted(glob(os.path.join(pth, 'res4d.*')))
assert len(res4d) == 1, 'No residual volume generated by FSL Estimate'
outputs['res4d'] = res4d[0]
copes = human_order_sorted(glob(os.path.join(pth, 'cope[0-9]*.*')))
assert len(copes) >= 1, 'No cope volumes generated by FSL CEstimate'
outputs['copes'] = copes
var_copes = human_order_sorted(
glob(os.path.join(pth, 'varcope[0-9]*.*')))
        assert len(var_copes) >= 1, \
            'No varcope volumes generated by FSL CEstimate'
outputs['var_copes'] = var_copes
zstats = human_order_sorted(glob(os.path.join(pth, 'zstat[0-9]*.*')))
assert len(zstats) >= 1, 'No zstat volumes generated by FSL CEstimate'
outputs['zstats'] = zstats
if isdefined(self.inputs.f_con_file):
zfstats = human_order_sorted(
glob(os.path.join(pth, 'zfstat[0-9]*.*')))
            assert len(zfstats) >= 1, \
                'No zfstat volumes generated by FSL CEstimate'
outputs['zfstats'] = zfstats
fstats = human_order_sorted(
glob(os.path.join(pth, 'fstat[0-9]*.*')))
            assert len(fstats) >= 1, \
                'No fstat volumes generated by FSL CEstimate'
outputs['fstats'] = fstats
tstats = human_order_sorted(glob(os.path.join(pth, 'tstat[0-9]*.*')))
assert len(tstats) >= 1, 'No tstat volumes generated by FSL CEstimate'
outputs['tstats'] = tstats
mrefs = human_order_sorted(
glob(os.path.join(pth, 'mean_random_effects_var[0-9]*.*')))
        assert len(mrefs) >= 1, \
            'No mean random effects volumes generated by FLAMEO'
outputs['mrefvars'] = mrefs
tdof = human_order_sorted(glob(os.path.join(pth, 'tdof_t[0-9]*.*')))
assert len(tdof) >= 1, 'No T dof volumes generated by FLAMEO'
outputs['tdof'] = tdof
weights = human_order_sorted(
glob(os.path.join(pth, 'weights[0-9]*.*')))
assert len(weights) >= 1, 'No weight volumes generated by FLAMEO'
outputs['weights'] = weights
outputs['stats_dir'] = pth
return outputs
class ContrastMgrInputSpec(FSLCommandInputSpec):
tcon_file = File(exists=True, mandatory=True,
argstr='%s', position=-1,
desc='contrast file containing T-contrasts')
fcon_file = File(exists=True, argstr='-f %s',
desc='contrast file containing F-contrasts')
param_estimates = InputMultiPath(File(exists=True),
argstr='', copyfile=False,
mandatory=True,
desc='Parameter estimates for each column of the design matrix')
corrections = File(exists=True, copyfile=False, mandatory=True,
desc='statistical corrections used within FILM modelling')
dof_file = File(exists=True, argstr='', copyfile=False, mandatory=True,
desc='degrees of freedom')
sigmasquareds = File(exists=True, argstr='', position=-2,
copyfile=False, mandatory=True,
                         desc='summary of residuals; see Woolrich et al., 2001')
contrast_num = traits.Range(low=1, argstr='-cope',
desc='contrast number to start labeling copes from')
suffix = traits.Str(argstr='-suffix %s',
desc='suffix to put on the end of the cope filename before the contrast number, default is nothing')
class ContrastMgrOutputSpec(TraitedSpec):
copes = OutputMultiPath(File(exists=True),
desc='Contrast estimates for each contrast')
varcopes = OutputMultiPath(File(exists=True),
desc='Variance estimates for each contrast')
zstats = OutputMultiPath(File(exists=True),
desc='z-stat file for each contrast')
tstats = OutputMultiPath(File(exists=True),
desc='t-stat file for each contrast')
fstats = OutputMultiPath(File(exists=True),
desc='f-stat file for each contrast')
zfstats = OutputMultiPath(File(exists=True),
desc='z-stat file for each F contrast')
neffs = OutputMultiPath(File(exists=True),
desc='neff file ?? for each contrast')
class ContrastMgr(FSLCommand):
"""Use FSL contrast_mgr command to evaluate contrasts
    In interface mode this interface assumes that all the required inputs are in the
same location.
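
    Examples
    --------
    A minimal sketch (file names are hypothetical; these inputs are normally
    produced by a prior FILMGLS run and live in the same stats directory):

    >>> from nipype.interfaces import fsl
    >>> conmgr = fsl.ContrastMgr()
    >>> conmgr.inputs.tcon_file = 'design.con'
    >>> conmgr.inputs.param_estimates = ['stats/pe1.nii', 'stats/pe2.nii']
    >>> conmgr.inputs.sigmasquareds = 'stats/sigmasquareds.nii'
    >>> conmgr.inputs.corrections = 'stats/corrections.nii'
    >>> conmgr.inputs.dof_file = 'stats/dof'
    >>> res = conmgr.run()  # doctest: +SKIP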
"""
_cmd = 'contrast_mgr'
input_spec = ContrastMgrInputSpec
output_spec = ContrastMgrOutputSpec
def _run_interface(self, runtime):
# The returncode is meaningless in ContrastMgr. So check the output
# in stderr and if it's set, then update the returncode
# accordingly.
runtime = super(ContrastMgr, self)._run_interface(runtime)
if runtime.stderr:
self.raise_exception(runtime)
return runtime
def _format_arg(self, name, trait_spec, value):
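        # param_estimates, corrections and dof_file contribute no command-line
        # argument of their own; contrast_mgr locates them via the stats
        # directory, which is passed as the dirname of ``sigmasquareds``.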
if name in ['param_estimates', 'corrections', 'dof_file']:
return ''
elif name in ['sigmasquareds']:
path, _ = os.path.split(value)
return path
else:
return super(ContrastMgr, self)._format_arg(name, trait_spec, value)
def _get_design_root(self, infile):
_, fname = os.path.split(infile)
return fname.split('.')[0]
def _get_numcons(self):
numtcons = 0
numfcons = 0
if isdefined(self.inputs.tcon_file):
fp = open(self.inputs.tcon_file, 'rt')
for line in fp.readlines():
if line.startswith('/NumContrasts'):
numtcons = int(line.split()[-1])
break
fp.close()
if isdefined(self.inputs.fcon_file):
fp = open(self.inputs.fcon_file, 'rt')
for line in fp.readlines():
if line.startswith('/NumContrasts'):
numfcons = int(line.split()[-1])
break
fp.close()
return numtcons, numfcons
def _list_outputs(self):
outputs = self._outputs().get()
pth, _ = os.path.split(self.inputs.sigmasquareds)
numtcons, numfcons = self._get_numcons()
base_contrast = 1
if isdefined(self.inputs.contrast_num):
base_contrast = self.inputs.contrast_num
copes = []
varcopes = []
zstats = []
tstats = []
neffs = []
for i in range(numtcons):
copes.append(self._gen_fname('cope%d.nii' % (base_contrast + i),
cwd=pth))
varcopes.append(
self._gen_fname('varcope%d.nii' % (base_contrast + i),
cwd=pth))
zstats.append(self._gen_fname('zstat%d.nii' % (base_contrast + i),
cwd=pth))
tstats.append(self._gen_fname('tstat%d.nii' % (base_contrast + i),
cwd=pth))
neffs.append(self._gen_fname('neff%d.nii' % (base_contrast + i),
cwd=pth))
if copes:
outputs['copes'] = copes
outputs['varcopes'] = varcopes
outputs['zstats'] = zstats
outputs['tstats'] = tstats
outputs['neffs'] = neffs
fstats = []
zfstats = []
for i in range(numfcons):
fstats.append(self._gen_fname('fstat%d.nii' % (base_contrast + i),
cwd=pth))
zfstats.append(
self._gen_fname('zfstat%d.nii' % (base_contrast + i),
cwd=pth))
if fstats:
outputs['fstats'] = fstats
outputs['zfstats'] = zfstats
return outputs
class L2ModelInputSpec(BaseInterfaceInputSpec):
num_copes = traits.Range(low=1, mandatory=True,
desc='number of copes to be combined')
class L2ModelOutputSpec(TraitedSpec):
design_mat = File(exists=True, desc='design matrix file')
design_con = File(exists=True, desc='design contrast file')
design_grp = File(exists=True, desc='design group file')
class L2Model(BaseInterface):
"""Generate subject specific second level model
Examples
--------
>>> from nipype.interfaces.fsl import L2Model
>>> model = L2Model(num_copes=3) # 3 sessions
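    >>> res = model.run()  # doctest: +SKIP

    Running writes ``design.mat``, ``design.con`` and ``design.grp`` (a
    single group-mean contrast across the copes) to the working directory.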
"""
input_spec = L2ModelInputSpec
output_spec = L2ModelOutputSpec
def _run_interface(self, runtime):
cwd = os.getcwd()
mat_txt = ['/NumWaves 1',
'/NumPoints %d' % self.inputs.num_copes,
'/PPheights %e' % 1,
'',
'/Matrix']
for i in range(self.inputs.num_copes):
mat_txt += ['%e' % 1]
mat_txt = '\n'.join(mat_txt)
con_txt = ['/ContrastName1 group mean',
'/NumWaves 1',
'/NumContrasts 1',
'/PPheights %e' % 1,
'/RequiredEffect 100.0', # XX where does this
# number come from
'',
'/Matrix',
'%e' % 1]
con_txt = '\n'.join(con_txt)
grp_txt = ['/NumWaves 1',
'/NumPoints %d' % self.inputs.num_copes,
'',
'/Matrix']
for i in range(self.inputs.num_copes):
grp_txt += ['1']
grp_txt = '\n'.join(grp_txt)
txt = {'design.mat': mat_txt,
'design.con': con_txt,
'design.grp': grp_txt}
# write design files
for i, name in enumerate(['design.mat', 'design.con', 'design.grp']):
f = open(os.path.join(cwd, name), 'wt')
f.write(txt[name])
f.close()
return runtime
def _list_outputs(self):
outputs = self._outputs().get()
for field in outputs.keys():
outputs[field] = os.path.join(os.getcwd(),
field.replace('_', '.'))
return outputs
class MultipleRegressDesignInputSpec(BaseInterfaceInputSpec):
contrasts = traits.List(
traits.Either(traits.Tuple(traits.Str,
traits.Enum('T'),
traits.List(traits.Str),
traits.List(traits.Float)),
traits.Tuple(traits.Str,
traits.Enum('F'),
traits.List(traits.Tuple(traits.Str,
traits.Enum('T'),
traits.List(
traits.Str),
traits.List(
traits.Float)),
))),
mandatory=True,
desc="List of contrasts with each contrast being a list of the form - \
[('name', 'stat', [condition list], [weight list])]. For F \
contrasts, the condition list should contain previously defined \
T-contrasts without any weight list.")
regressors = traits.Dict(traits.Str, traits.List(traits.Float),
mandatory=True,
desc='dictionary containing named lists of regressors')
groups = traits.List(traits.Int,
desc='list of group identifiers (defaults to single group)')
class MultipleRegressDesignOutputSpec(TraitedSpec):
design_mat = File(exists=True, desc='design matrix file')
design_con = File(exists=True, desc='design t-contrast file')
design_fts = File(exists=True, desc='design f-contrast file')
design_grp = File(exists=True, desc='design group file')
class MultipleRegressDesign(BaseInterface):
"""Generate multiple regression design
.. note::
FSL does not demean columns for higher level analysis.
Please see `FSL documentation <http://www.fmrib.ox.ac.uk/fsl/feat5/detail.html#higher>`_
for more details on model specification for higher level analysis.
Examples
--------
>>> from nipype.interfaces.fsl import MultipleRegressDesign
>>> model = MultipleRegressDesign()
>>> model.inputs.contrasts = [['group mean', 'T',['reg1'],[1]]]
>>> model.inputs.regressors = dict(reg1=[1, 1, 1], reg2=[2.,-4, 3])
>>> model.run() # doctest: +SKIP
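
    Running writes ``design.mat``, ``design.con``, ``design.grp`` and, when
    F-contrasts are defined, ``design.fts`` to the working directory.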
"""
input_spec = MultipleRegressDesignInputSpec
output_spec = MultipleRegressDesignOutputSpec
def _run_interface(self, runtime):
cwd = os.getcwd()
regs = sorted(self.inputs.regressors.keys())
nwaves = len(regs)
npoints = len(self.inputs.regressors[regs[0]])
ntcons = sum([1 for con in self.inputs.contrasts if con[1] == 'T'])
nfcons = sum([1 for con in self.inputs.contrasts if con[1] == 'F'])
# write mat file
mat_txt = ['/NumWaves %d' % nwaves,
'/NumPoints %d' % npoints]
ppheights = []
for reg in regs:
maxreg = np.max(self.inputs.regressors[reg])
minreg = np.min(self.inputs.regressors[reg])
if np.sign(maxreg) == np.sign(minreg):
regheight = max([abs(minreg), abs(maxreg)])
else:
regheight = abs(maxreg - minreg)
ppheights.append('%e' % regheight)
mat_txt += ['/PPheights ' + ' '.join(ppheights)]
mat_txt += ['',
'/Matrix']
for cidx in range(npoints):
mat_txt.append(' '.join(
['%e' % self.inputs.regressors[key][cidx] for key in regs]))
mat_txt = '\n'.join(mat_txt) + '\n'
# write t-con file
con_txt = []
counter = 0
tconmap = {}
for conidx, con in enumerate(self.inputs.contrasts):
if con[1] == 'T':
tconmap[conidx] = counter
counter += 1
con_txt += ['/ContrastName%d %s' % (counter, con[0])]
con_txt += ['/NumWaves %d' % nwaves,
'/NumContrasts %d' % ntcons,
'/PPheights %s' % ' '.join(
['%e' % 1 for i in range(counter)]),
'/RequiredEffect %s' % ' '.join(
['%.3f' % 100 for i in range(counter)]),
'',
'/Matrix']
for idx in sorted(tconmap.keys()):
convals = np.zeros((nwaves, 1))
for regidx, reg in enumerate(self.inputs.contrasts[idx][2]):
                convals[regs.index(reg)] = \
                    self.inputs.contrasts[idx][3][regidx]
con_txt.append(' '.join(['%e' % val for val in convals]))
con_txt = '\n'.join(con_txt) + '\n'
# write f-con file
fcon_txt = ''
if nfcons:
fcon_txt = ['/NumWaves %d' % ntcons,
'/NumContrasts %d' % nfcons,
'',
'/Matrix']
for conidx, con in enumerate(self.inputs.contrasts):
if con[1] == 'F':
convals = np.zeros((ntcons, 1))
for tcon in con[2]:
convals[tconmap[self.inputs.contrasts.index(tcon)]] = 1
fcon_txt.append(' '.join(['%d' % val for val in convals]))
fcon_txt = '\n'.join(fcon_txt)
fcon_txt += '\n'
# write group file
grp_txt = ['/NumWaves 1',
'/NumPoints %d' % npoints,
'',
'/Matrix']
for i in range(npoints):
if isdefined(self.inputs.groups):
grp_txt += ['%d' % self.inputs.groups[i]]
else:
grp_txt += ['1']
grp_txt = '\n'.join(grp_txt) + '\n'
txt = {'design.mat': mat_txt,
'design.con': con_txt,
'design.fts': fcon_txt,
'design.grp': grp_txt}
# write design files
for key, val in txt.items():
if ('fts' in key) and (nfcons == 0):
continue
filename = key.replace('_', '.')
f = open(os.path.join(cwd, filename), 'wt')
f.write(val)
f.close()
return runtime
def _list_outputs(self):
outputs = self._outputs().get()
nfcons = sum([1 for con in self.inputs.contrasts if con[1] == 'F'])
for field in outputs.keys():
if ('fts' in field) and (nfcons == 0):
continue
outputs[field] = os.path.join(os.getcwd(),
field.replace('_', '.'))
return outputs
class SMMInputSpec(FSLCommandInputSpec):
spatial_data_file = File(
exists=True, position=0, argstr='--sdf="%s"', mandatory=True,
desc="statistics spatial map", copyfile=False)
mask = File(exists=True, position=1, argstr='--mask="%s"', mandatory=True,
desc="mask file", copyfile=False)
no_deactivation_class = traits.Bool(position=2, argstr="--zfstatmode",
desc="enforces no deactivation class")
class SMMOutputSpec(TraitedSpec):
null_p_map = File(exists=True)
activation_p_map = File(exists=True)
deactivation_p_map = File(exists=True)
class SMM(FSLCommand):
'''
Spatial Mixture Modelling. For more detail on the spatial mixture modelling see
Mixture Models with Adaptive Spatial Regularisation for Segmentation with an Application to FMRI Data;
Woolrich, M., Behrens, T., Beckmann, C., and Smith, S.; IEEE Trans. Medical Imaging, 24(1):1-11, 2005.
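
    Example
    -------
    A minimal sketch (file names are hypothetical):

    >>> from nipype.interfaces.fsl import SMM
    >>> smm = SMM(spatial_data_file='zstat1.nii.gz', mask='mask.nii')
    >>> res = smm.run()  # doctest: +SKIP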
'''
_cmd = 'mm --ld=logdir'
input_spec = SMMInputSpec
output_spec = SMMOutputSpec
def _list_outputs(self):
outputs = self._outputs().get()
# TODO get the true logdir from the stdout
outputs['null_p_map'] = self._gen_fname(basename="w1_mean",
cwd="logdir")
outputs['activation_p_map'] = self._gen_fname(
basename="w2_mean", cwd="logdir")
if not isdefined(self.inputs.no_deactivation_class) or not self.inputs.no_deactivation_class:
outputs['deactivation_p_map'] = self._gen_fname(
basename="w3_mean", cwd="logdir")
return outputs
class MELODICInputSpec(FSLCommandInputSpec):
in_files = InputMultiPath(
File(exists=True), argstr="-i %s", mandatory=True, position=0,
desc="input file names (either single file name or a list)",
sep=",")
out_dir = Directory(
argstr="-o %s", desc="output directory name", genfile=True)
mask = File(exists=True, argstr="-m %s",
desc="file name of mask for thresholding")
no_mask = traits.Bool(argstr="--nomask", desc="switch off masking")
update_mask = traits.Bool(
argstr="--update_mask", desc="switch off mask updating")
no_bet = traits.Bool(argstr="--nobet", desc="switch off BET")
bg_threshold = traits.Float(
argstr="--bgthreshold=%f", desc="brain/non-brain threshold used to mask non-brain voxels, as a percentage (only if --nobet selected)")
dim = traits.Int(argstr="-d %d", desc="dimensionality reduction into #num dimensions"
"(default: automatic estimation)")
dim_est = traits.Str(argstr="--dimest=%s", desc="use specific dim. estimation technique:"
" lap, bic, mdl, aic, mean (default: lap)")
sep_whiten = traits.Bool(
argstr="--sep_whiten", desc="switch on separate whitening")
sep_vn = traits.Bool(
argstr="--sep_vn", desc="switch off joined variance normalization")
num_ICs = traits.Int(
argstr="-n %d", desc="number of IC's to extract (for deflation approach)")
approach = traits.Str(argstr="-a %s", desc="approach for decomposition, 2D: defl, symm (default), "
" 3D: tica (default), concat")
non_linearity = traits.Str(
argstr="--nl=%s", desc="nonlinearity: gauss, tanh, pow3, pow4")
var_norm = traits.Bool(
argstr="--vn", desc="switch off variance normalization")
pbsc = traits.Bool(
argstr="--pbsc", desc="switch off conversion to percent BOLD signal change")
cov_weight = traits.Float(argstr="--covarweight=%f", desc="voxel-wise weights for the covariance "
"matrix (e.g. segmentation information)")
epsilon = traits.Float(argstr="--eps=%f", desc="minimum error change")
epsilonS = traits.Float(
argstr="--epsS=%f", desc="minimum error change for rank-1 approximation in TICA")
maxit = traits.Int(argstr="--maxit=%d",
desc="maximum number of iterations before restart")
max_restart = traits.Int(
argstr="--maxrestart=%d", desc="maximum number of restarts")
mm_thresh = traits.Float(
argstr="--mmthresh=%f", desc="threshold for Mixture Model based inference")
no_mm = traits.Bool(
argstr="--no_mm", desc="switch off mixture modelling on IC maps")
ICs = File(exists=True, argstr="--ICs=%s",
desc="filename of the IC components file for mixture modelling")
mix = File(exists=True, argstr="--mix=%s",
desc="mixing matrix for mixture modelling / filtering")
smode = File(exists=True, argstr="--smode=%s",
desc="matrix of session modes for report generation")
rem_cmp = traits.List(
traits.Int, argstr="-f %d", desc="component numbers to remove")
report = traits.Bool(argstr="--report", desc="generate Melodic web report")
bg_image = File(exists=True, argstr="--bgimage=%s", desc="specify background image for report"
" (default: mean image)")
tr_sec = traits.Float(argstr="--tr=%f", desc="TR in seconds")
log_power = traits.Bool(
argstr="--logPower", desc="calculate log of power for frequency spectrum")
t_des = File(exists=True, argstr="--Tdes=%s",
desc="design matrix across time-domain")
t_con = File(exists=True, argstr="--Tcon=%s",
desc="t-contrast matrix across time-domain")
s_des = File(exists=True, argstr="--Sdes=%s",
desc="design matrix across subject-domain")
s_con = File(exists=True, argstr="--Scon=%s",
desc="t-contrast matrix across subject-domain")
out_all = traits.Bool(argstr="--Oall", desc="output everything")
out_unmix = traits.Bool(argstr="--Ounmix", desc="output unmixing matrix")
out_stats = traits.Bool(
argstr="--Ostats", desc="output thresholded maps and probability maps")
out_pca = traits.Bool(argstr="--Opca", desc="output PCA results")
out_white = traits.Bool(
argstr="--Owhite", desc="output whitening/dewhitening matrices")
out_orig = traits.Bool(argstr="--Oorig", desc="output the original ICs")
out_mean = traits.Bool(argstr="--Omean", desc="output mean volume")
report_maps = traits.Str(argstr="--report_maps=%s",
desc="control string for spatial map images (see slicer)")
remove_deriv = traits.Bool(argstr="--remove_deriv", desc="removes every second entry in paradigm"
" file (EV derivatives)")
class MELODICOutputSpec(TraitedSpec):
out_dir = Directory(exists=True)
report_dir = Directory(exists=True)
class MELODIC(FSLCommand):
"""Multivariate Exploratory Linear Optimised Decomposition into Independent Components
Examples
--------
    >>> from nipype.interfaces.fsl import MELODIC
    >>> melodic_setup = MELODIC()
>>> melodic_setup.inputs.approach = 'tica'
>>> melodic_setup.inputs.in_files = ['functional.nii', 'functional2.nii', 'functional3.nii']
>>> melodic_setup.inputs.no_bet = True
>>> melodic_setup.inputs.bg_threshold = 10
>>> melodic_setup.inputs.tr_sec = 1.5
>>> melodic_setup.inputs.mm_thresh = 0.5
>>> melodic_setup.inputs.out_stats = True
>>> melodic_setup.inputs.t_des = 'timeDesign.mat'
>>> melodic_setup.inputs.t_con = 'timeDesign.con'
>>> melodic_setup.inputs.s_des = 'subjectDesign.mat'
>>> melodic_setup.inputs.s_con = 'subjectDesign.con'
>>> melodic_setup.inputs.out_dir = 'groupICA.out'
>>> melodic_setup.run() # doctest: +SKIP
"""
input_spec = MELODICInputSpec
output_spec = MELODICOutputSpec
_cmd = 'melodic'
def _list_outputs(self):
outputs = self.output_spec().get()
outputs['out_dir'] = self.inputs.out_dir
if not isdefined(outputs['out_dir']):
outputs['out_dir'] = self._gen_filename("out_dir")
if isdefined(self.inputs.report) and self.inputs.report:
outputs['report_dir'] = os.path.join(
self._gen_filename("out_dir"), "report")
return outputs
def _gen_filename(self, name):
if name == "out_dir":
return os.getcwd()
class SmoothEstimateInputSpec(FSLCommandInputSpec):
dof = traits.Int(argstr='--dof=%d', mandatory=True,
xor=['zstat_file'],
desc='number of degrees of freedom')
mask_file = File(argstr='--mask=%s',
exists=True, mandatory=True,
desc='brain mask volume')
residual_fit_file = File(argstr='--res=%s',
exists=True, requires=['dof'],
desc='residual-fit image file')
zstat_file = File(argstr='--zstat=%s',
exists=True, xor=['dof'],
desc='zstat image file')
class SmoothEstimateOutputSpec(TraitedSpec):
dlh = traits.Float(desc='smoothness estimate sqrt(det(Lambda))')
volume = traits.Int(desc='number of voxels in mask')
resels = traits.Float(desc='number of resels')
class SmoothEstimate(FSLCommand):
""" Estimates the smoothness of an image
Examples
--------
    >>> from nipype.interfaces.fsl import SmoothEstimate
    >>> est = SmoothEstimate()
>>> est.inputs.zstat_file = 'zstat1.nii.gz'
>>> est.inputs.mask_file = 'mask.nii'
>>> est.cmdline
'smoothest --mask=mask.nii --zstat=zstat1.nii.gz'
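
    The ``dlh``, ``volume`` and ``resels`` outputs are parsed from the
    command's standard output after running:

    >>> res = est.run()  # doctest: +SKIP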
"""
input_spec = SmoothEstimateInputSpec
output_spec = SmoothEstimateOutputSpec
_cmd = 'smoothest'
def aggregate_outputs(self, runtime=None, needed_outputs=None):
outputs = self._outputs()
stdout = runtime.stdout.split('\n')
outputs.dlh = float(stdout[0].split()[1])
outputs.volume = int(stdout[1].split()[1])
outputs.resels = float(stdout[2].split()[1])
return outputs
class ClusterInputSpec(FSLCommandInputSpec):
in_file = File(argstr='--in=%s', mandatory=True,
exists=True, desc='input volume')
threshold = traits.Float(argstr='--thresh=%.10f',
mandatory=True,
desc='threshold for input volume')
out_index_file = traits.Either(traits.Bool, File,
argstr='--oindex=%s',
desc='output of cluster index (in size order)', hash_files=False)
out_threshold_file = traits.Either(traits.Bool, File,
argstr='--othresh=%s',
desc='thresholded image', hash_files=False)
out_localmax_txt_file = traits.Either(traits.Bool, File,
argstr='--olmax=%s',
desc='local maxima text file', hash_files=False)
out_localmax_vol_file = traits.Either(traits.Bool, File,
argstr='--olmaxim=%s',
desc='output of local maxima volume', hash_files=False)
out_size_file = traits.Either(traits.Bool, File,
argstr='--osize=%s',
desc='filename for output of size image', hash_files=False)
out_max_file = traits.Either(traits.Bool, File,
argstr='--omax=%s',
desc='filename for output of max image', hash_files=False)
out_mean_file = traits.Either(traits.Bool, File,
argstr='--omean=%s',
desc='filename for output of mean image', hash_files=False)
out_pval_file = traits.Either(traits.Bool, File,
argstr='--opvals=%s',
desc='filename for image output of log pvals', hash_files=False)
pthreshold = traits.Float(argstr='--pthresh=%.10f',
requires=['dlh', 'volume'],
desc='p-threshold for clusters')
peak_distance = traits.Float(argstr='--peakdist=%.10f',
desc='minimum distance between local maxima/minima, in mm (default 0)')
cope_file = traits.File(argstr='--cope=%s',
desc='cope volume')
volume = traits.Int(argstr='--volume=%d',
desc='number of voxels in the mask')
dlh = traits.Float(argstr='--dlh=%.10f',
desc='smoothness estimate = sqrt(det(Lambda))')
    fractional = traits.Bool(argstr='--fractional',
desc='interprets the threshold as a fraction of the robust range')
connectivity = traits.Int(argstr='--connectivity=%d',
desc='the connectivity of voxels (default 26)')
    use_mm = traits.Bool(argstr='--mm', desc='use mm, not voxel, coordinates')
    find_min = traits.Bool(argstr='--min', desc='find minima instead of maxima')
no_table = traits.Bool(
        argstr='--no_table', desc='suppresses printing of the table info')
minclustersize = traits.Bool(argstr='--minclustersize',
desc='prints out minimum significant cluster size')
xfm_file = File(argstr='--xfm=%s',
desc='filename for Linear: input->standard-space transform. Non-linear: input->highres transform')
std_space_file = File(argstr='--stdvol=%s',
desc='filename for standard-space volume')
num_maxima = traits.Int(argstr='--num=%d',
desc='no of local maxima to report')
warpfield_file = File(argstr='--warpvol=%s',
                          desc='file containing warpfield')
class ClusterOutputSpec(TraitedSpec):
index_file = File(desc='output of cluster index (in size order)')
threshold_file = File(desc='thresholded image')
localmax_txt_file = File(desc='local maxima text file')
localmax_vol_file = File(desc='output of local maxima volume')
size_file = File(desc='filename for output of size image')
max_file = File(desc='filename for output of max image')
mean_file = File(desc='filename for output of mean image')
pval_file = File(desc='filename for image output of log pvals')
class Cluster(FSLCommand):
""" Uses FSL cluster to perform clustering on statistical output
Examples
--------
    >>> from nipype.interfaces.fsl import Cluster
    >>> cl = Cluster()
>>> cl.inputs.threshold = 2.3
>>> cl.inputs.in_file = 'zstat1.nii.gz'
>>> cl.inputs.out_localmax_txt_file = 'stats.txt'
>>> cl.cmdline
'cluster --in=zstat1.nii.gz --olmax=stats.txt --thresh=2.3000000000'
"""
input_spec = ClusterInputSpec
output_spec = ClusterOutputSpec
_cmd = 'cluster'
filemap = {'out_index_file': 'index', 'out_threshold_file': 'threshold',
'out_localmax_txt_file': 'localmax.txt',
'out_localmax_vol_file': 'localmax',
'out_size_file': 'size', 'out_max_file': 'max',
'out_mean_file': 'mean', 'out_pval_file': 'pval'}
def _list_outputs(self):
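        # Each requested output may be given either as a bool (a filename is
        # generated from ``in_file`` plus the mapped suffix) or as an explicit
        # path, which is returned as an absolute path.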
outputs = self.output_spec().get()
for key, suffix in self.filemap.items():
outkey = key[4:]
inval = getattr(self.inputs, key)
if isdefined(inval):
if isinstance(inval, bool):
if inval:
change_ext = True
if suffix.endswith('.txt'):
change_ext = False
outputs[outkey] = self._gen_fname(self.inputs.in_file,
suffix='_' + suffix,
change_ext=change_ext)
else:
outputs[outkey] = os.path.abspath(inval)
return outputs
def _format_arg(self, name, spec, value):
if name in self.filemap.keys():
if isinstance(value, bool):
fname = self._list_outputs()[name[4:]]
else:
fname = value
return spec.argstr % fname
return super(Cluster, self)._format_arg(name, spec, value)
class RandomiseInputSpec(FSLCommandInputSpec):
in_file = File(exists=True, desc='4D input file', argstr='-i %s',
position=0, mandatory=True)
base_name = traits.Str(
'tbss_', desc='the rootname that all generated files will have',
argstr='-o "%s"', position=1, usedefault=True)
design_mat = File(
exists=True, desc='design matrix file', argstr='-d %s', position=2)
tcon = File(
exists=True, desc='t contrasts file', argstr='-t %s', position=3)
fcon = File(exists=True, desc='f contrasts file', argstr='-f %s')
mask = File(exists=True, desc='mask image', argstr='-m %s')
x_block_labels = File(
exists=True, desc='exchangeability block labels file', argstr='-e %s')
demean = traits.Bool(
desc='demean data temporally before model fitting', argstr='-D')
one_sample_group_mean = traits.Bool(
desc='perform 1-sample group-mean test instead of generic permutation test',
argstr='-1')
show_total_perms = traits.Bool(
desc='print out how many unique permutations would be generated and exit',
argstr='-q')
show_info_parallel_mode = traits.Bool(
desc='print out information required for parallel mode and exit',
argstr='-Q')
vox_p_values = traits.Bool(
desc='output voxelwise (corrected and uncorrected) p-value images',
argstr='-x')
tfce = traits.Bool(
desc='carry out Threshold-Free Cluster Enhancement', argstr='-T')
tfce2D = traits.Bool(
desc='carry out Threshold-Free Cluster Enhancement with 2D optimisation',
argstr='--T2')
f_only = traits.Bool(desc='calculate f-statistics only', argstr='--f_only')
raw_stats_imgs = traits.Bool(
desc='output raw ( unpermuted ) statistic images', argstr='-R')
p_vec_n_dist_files = traits.Bool(
desc='output permutation vector and null distribution text files',
argstr='-P')
num_perm = traits.Int(
argstr='-n %d', desc='number of permutations (default 5000, set to 0 for exhaustive)')
seed = traits.Int(
argstr='--seed=%d', desc='specific integer seed for random number generator')
var_smooth = traits.Int(
argstr='-v %d', desc='use variance smoothing (std is in mm)')
c_thresh = traits.Float(
argstr='-c %.2f', desc='carry out cluster-based thresholding')
cm_thresh = traits.Float(
argstr='-C %.2f', desc='carry out cluster-mass-based thresholding')
f_c_thresh = traits.Float(
argstr='-F %.2f', desc='carry out f cluster thresholding')
f_cm_thresh = traits.Float(
argstr='-S %.2f', desc='carry out f cluster-mass thresholding')
tfce_H = traits.Float(
argstr='--tfce_H=%.2f', desc='TFCE height parameter (default=2)')
tfce_E = traits.Float(
argstr='--tfce_E=%.2f', desc='TFCE extent parameter (default=0.5)')
tfce_C = traits.Float(
argstr='--tfce_C=%.2f', desc='TFCE connectivity (6 or 26; default=6)')
class RandomiseOutputSpec(TraitedSpec):
tstat_files = traits.List(
File(exists=True),
desc='t contrast raw statistic')
fstat_files = traits.List(
File(exists=True),
desc='f contrast raw statistic')
t_p_files = traits.List(
File(exists=True),
        desc='t contrast uncorrected p values files')
f_p_files = traits.List(
File(exists=True),
desc='f contrast uncorrected p values files')
t_corrected_p_files = traits.List(
File(exists=True),
desc='t contrast FWE (Family-wise error) corrected p values files')
f_corrected_p_files = traits.List(
File(exists=True),
desc='f contrast FWE (Family-wise error) corrected p values files')
class Randomise(FSLCommand):
"""XXX UNSTABLE DO NOT USE
    FSL Randomise: feeds the 4D projected FA data into GLM modelling and
    thresholding in order to find voxels which correlate with your model.
Example
-------
>>> import nipype.interfaces.fsl as fsl
    >>> rand = fsl.Randomise(in_file='allFA.nii', mask='mask.nii', tcon='design.con', design_mat='design.mat')
>>> rand.cmdline
'randomise -i allFA.nii -o "tbss_" -d design.mat -t design.con -m mask.nii'
"""
_cmd = 'randomise'
input_spec = RandomiseInputSpec
output_spec = RandomiseOutputSpec
def _list_outputs(self):
outputs = self.output_spec().get()
outputs['tstat_files'] = glob(self._gen_fname(
'%s_tstat*.nii' % self.inputs.base_name))
outputs['fstat_files'] = glob(self._gen_fname(
'%s_fstat*.nii' % self.inputs.base_name))
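        # Corrected and uncorrected p-value images carry a prefix reflecting
        # the requested inference type: TFCE, voxelwise, cluster-extent
        # ("clustere") or cluster-mass ("clusterm").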
prefix = False
if self.inputs.tfce or self.inputs.tfce2D:
prefix = 'tfce'
elif self.inputs.vox_p_values:
prefix = 'vox'
elif self.inputs.c_thresh or self.inputs.f_c_thresh:
prefix = 'clustere'
elif self.inputs.cm_thresh or self.inputs.f_cm_thresh:
prefix = 'clusterm'
if prefix:
outputs['t_p_files'] = glob(self._gen_fname(
'%s_%s_p_tstat*' % (self.inputs.base_name, prefix)))
outputs['t_corrected_p_files'] = glob(self._gen_fname(
'%s_%s_corrp_tstat*.nii' % (self.inputs.base_name, prefix)))
outputs['f_p_files'] = glob(self._gen_fname(
'%s_%s_p_fstat*.nii' % (self.inputs.base_name, prefix)))
outputs['f_corrected_p_files'] = glob(self._gen_fname(
'%s_%s_corrp_fstat*.nii' % (self.inputs.base_name, prefix)))
return outputs
class GLMInputSpec(FSLCommandInputSpec):
in_file = File(exists=True, argstr='-i %s', mandatory=True, position=1,
desc='input file name (text matrix or 3D/4D image file)')
out_file = File(name_template="%s_glm", argstr='-o %s', position=3,
desc=('filename for GLM parameter estimates'
+ ' (GLM betas)'),
name_source="in_file", keep_extension=True)
design = File(exists=True, argstr='-d %s', mandatory=True, position=2,
desc=('file name of the GLM design matrix (text time'
+ ' courses for temporal regression or an image'
+ ' file for spatial regression)'))
    contrasts = File(exists=True, argstr='-c %s', desc=('matrix of t-statistics'
                                                        + ' contrasts'))
mask = File(exists=True, argstr='-m %s', desc=('mask image file name if'
+ ' input is image'))
dof = traits.Int(argstr='--dof=%d', desc=('set degrees of freedom'
+ ' explicitly'))
des_norm = traits.Bool(argstr='--des_norm', desc=('switch on normalization'
+ ' of the design matrix'
+ ' columns to unit std'
+ ' deviation'))
dat_norm = traits.Bool(argstr='--dat_norm', desc=('switch on normalization'
+ ' of the data time'
+ ' series to unit std'
+ ' deviation'))
var_norm = traits.Bool(argstr='--vn', desc=('perform MELODIC variance-'
+ 'normalisation on data'))
    demean = traits.Bool(argstr='--demean', desc=('switch on demeaning of'
                                                  + ' design and data'))
out_cope = File(argstr='--out_cope=%s',
                    desc='output file name for COPE (either as text file or image)')
out_z_name = File(argstr='--out_z=%s',
                      desc='output file name for Z-stats (either as text file or image)')
out_t_name = File(argstr='--out_t=%s',
                      desc='output file name for t-stats (either as text file or image)')
out_p_name = File(argstr='--out_p=%s',
desc=('output file name for p-values of Z-stats (either as'
+ ' text file or image)'))
out_f_name = File(argstr='--out_f=%s',
desc='output file name for F-value of full model fit')
out_pf_name = File(argstr='--out_pf=%s',
desc='output file name for p-value for full model fit')
out_res_name = File(argstr='--out_res=%s',
desc='output file name for residuals')
out_varcb_name = File(argstr='--out_varcb=%s',
desc='output file name for variance of COPEs')
out_sigsq_name = File(argstr='--out_sigsq=%s',
desc=('output file name for residual noise variance'
+ ' sigma-square'))
out_data_name = File(argstr='--out_data=%s',
desc='output file name for pre-processed data')
out_vnscales_name = File(argstr='--out_vnscales=%s',
desc=('output file name for scaling factors for variance'
+ ' normalisation'))
class GLMOutputSpec(TraitedSpec):
out_file = File(exists=True, desc=('file name of GLM parameters'
' (if generated)'))
out_cope = OutputMultiPath(File(exists=True),
desc=('output file name for COPEs (either as '
'text file or image)'))
out_z = OutputMultiPath(File(exists=True),
                            desc=('output file name for Z-stats (either as text '
'file or image)'))
out_t = OutputMultiPath(File(exists=True),
desc=('output file name for t-stats (either as '
'text file or image)'))
out_p = OutputMultiPath(File(exists=True),
desc=('output file name for p-values of Z-stats '
'(either as text file or image)'))
out_f = OutputMultiPath(File(exists=True),
desc=('output file name for F-value of full model '
'fit'))
out_pf = OutputMultiPath(File(exists=True),
desc=('output file name for p-value for full '
'model fit'))
out_res = OutputMultiPath(File(exists=True),
desc='output file name for residuals')
out_varcb = OutputMultiPath(File(exists=True),
desc='output file name for variance of COPEs')
out_sigsq = OutputMultiPath(File(exists=True),
desc=('output file name for residual noise '
'variance sigma-square'))
out_data = OutputMultiPath(File(exists=True),
desc='output file for preprocessed data')
out_vnscales = OutputMultiPath(File(exists=True),
desc=('output file name for scaling factors '
'for variance normalisation'))
class GLM(FSLCommand):
"""
    FSL GLM: fit a general linear model using ``fsl_glm``; the design is
    given either as text time courses (temporal regression) or as an image
    file (spatial regression).
Example
-------
>>> import nipype.interfaces.fsl as fsl
>>> glm = fsl.GLM(in_file='functional.nii', design='maps.nii', output_type='NIFTI')
>>> glm.cmdline
'fsl_glm -i functional.nii -d maps.nii -o functional_glm.nii'
"""
_cmd = 'fsl_glm'
input_spec = GLMInputSpec
output_spec = GLMOutputSpec
def _list_outputs(self):
outputs = super(GLM, self)._list_outputs()
if isdefined(self.inputs.out_cope):
outputs['out_cope'] = os.path.abspath(self.inputs.out_cope)
if isdefined(self.inputs.out_z_name):
outputs['out_z'] = os.path.abspath(self.inputs.out_z_name)
if isdefined(self.inputs.out_t_name):
outputs['out_t'] = os.path.abspath(self.inputs.out_t_name)
if isdefined(self.inputs.out_p_name):
outputs['out_p'] = os.path.abspath(self.inputs.out_p_name)
if isdefined(self.inputs.out_f_name):
outputs['out_f'] = os.path.abspath(self.inputs.out_f_name)
if isdefined(self.inputs.out_pf_name):
outputs['out_pf'] = os.path.abspath(self.inputs.out_pf_name)
if isdefined(self.inputs.out_res_name):
outputs['out_res'] = os.path.abspath(self.inputs.out_res_name)
if isdefined(self.inputs.out_varcb_name):
outputs['out_varcb'] = os.path.abspath(self.inputs.out_varcb_name)
if isdefined(self.inputs.out_sigsq_name):
outputs['out_sigsq'] = os.path.abspath(self.inputs.out_sigsq_name)
if isdefined(self.inputs.out_data_name):
outputs['out_data'] = os.path.abspath(self.inputs.out_data_name)
if isdefined(self.inputs.out_vnscales_name):
outputs['out_vnscales'] = os.path.abspath(
self.inputs.out_vnscales_name)
return outputs
| rameshvs/nipype | nipype/interfaces/fsl/model.py | Python | bsd-3-clause | 82,958 | ["Gaussian"] | 7dd1194c8fec5f51bb10074e567dc9f1d8036067ad7522fb3ae05bc1ca894701 |