| text (string, lengths 12-1.05M) | repo_name (string, lengths 5-86) | path (string, lengths 4-191) | language (string, 1 class) | license (string, 15 classes) | size (int32, 12-1.05M) | keyword (list, lengths 1-23) | text_hash (string, length 64) |
|---|---|---|---|---|---|---|---|
"""
=======================================
Signal processing (:mod:`scipy.signal`)
=======================================
Convolution
===========
.. autosummary::
:toctree: generated/
convolve -- N-dimensional convolution.
correlate -- N-dimensional correlation.
fftconvolve -- N-dimensional convolution using the FFT.
convolve2d -- 2-dimensional convolution (more options).
correlate2d -- 2-dimensional correlation (more options).
sepfir2d -- Convolve with a 2-D separable FIR filter.
choose_conv_method -- Chooses the faster of FFT and direct convolution methods.
B-splines
=========
.. autosummary::
:toctree: generated/
bspline -- B-spline basis function of order n.
cubic -- B-spline basis function of order 3.
quadratic -- B-spline basis function of order 2.
gauss_spline -- Gaussian approximation to the B-spline basis function.
cspline1d -- Coefficients for 1-D cubic (3rd order) B-spline.
qspline1d -- Coefficients for 1-D quadratic (2nd order) B-spline.
cspline2d -- Coefficients for 2-D cubic (3rd order) B-spline.
qspline2d -- Coefficients for 2-D quadratic (2nd order) B-spline.
cspline1d_eval -- Evaluate a cubic spline at the given points.
qspline1d_eval -- Evaluate a quadratic spline at the given points.
spline_filter -- Smoothing spline (cubic) filtering of a rank-2 array.
Filtering
=========
.. autosummary::
:toctree: generated/
order_filter -- N-dimensional order filter.
medfilt -- N-dimensional median filter.
medfilt2d -- 2-dimensional median filter (faster).
wiener -- N-dimensional Wiener filter.
symiirorder1 -- 2nd-order IIR filter (cascade of first-order systems).
symiirorder2 -- 4th-order IIR filter (cascade of second-order systems).
lfilter -- 1-dimensional FIR and IIR digital linear filtering.
lfiltic -- Construct initial conditions for `lfilter`.
lfilter_zi -- Compute an initial state zi for the lfilter function that
-- corresponds to the steady state of the step response.
filtfilt -- A forward-backward filter.
savgol_filter -- Filter a signal using the Savitzky-Golay filter.
deconvolve -- 1-d deconvolution using lfilter.
sosfilt -- 1-dimensional IIR digital linear filtering using
-- a second-order sections filter representation.
sosfilt_zi -- Compute an initial state zi for the sosfilt function that
-- corresponds to the steady state of the step response.
sosfiltfilt -- A forward-backward filter for second-order sections.
hilbert -- Compute 1-D analytic signal, using the Hilbert transform.
hilbert2 -- Compute 2-D analytic signal, using the Hilbert transform.
decimate -- Downsample a signal.
detrend -- Remove linear and/or constant trends from data.
resample -- Resample using Fourier method.
resample_poly -- Resample using polyphase filtering method.
upfirdn -- Upsample, apply FIR filter, downsample.
Filter design
=============
.. autosummary::
:toctree: generated/
bilinear -- Digital filter from an analog filter using
-- the bilinear transform.
findfreqs -- Find array of frequencies for computing filter response.
firls -- FIR filter design using least-squares error minimization.
firwin -- Windowed FIR filter design, with frequency response
-- defined as pass and stop bands.
firwin2 -- Windowed FIR filter design, with arbitrary frequency
-- response.
freqs -- Analog filter frequency response.
freqz -- Digital filter frequency response.
sosfreqz -- Digital filter frequency response for SOS format filter.
group_delay -- Digital filter group delay.
iirdesign -- IIR filter design given bands and gains.
iirfilter -- IIR filter design given order and critical frequencies.
kaiser_atten -- Compute the attenuation of a Kaiser FIR filter, given
-- the number of taps and the transition width at
-- discontinuities in the frequency response.
kaiser_beta -- Compute the Kaiser parameter beta, given the desired
-- FIR filter attenuation.
kaiserord -- Design a Kaiser window to limit ripple and width of
-- transition region.
minimum_phase -- Convert a linear phase FIR filter to minimum phase.
savgol_coeffs -- Compute the FIR filter coefficients for a Savitzky-Golay
-- filter.
remez -- Optimal FIR filter design.
unique_roots -- Unique roots and their multiplicities.
residue -- Partial fraction expansion of b(s) / a(s).
residuez -- Partial fraction expansion of b(z) / a(z).
invres -- Inverse partial fraction expansion for analog filter.
invresz -- Inverse partial fraction expansion for digital filter.
BadCoefficients -- Warning on badly conditioned filter coefficients
Lower-level filter design functions:
.. autosummary::
:toctree: generated/
abcd_normalize -- Check state-space matrices and ensure they are rank-2.
band_stop_obj -- Band Stop Objective Function for order minimization.
besselap -- Return (z,p,k) for analog prototype of Bessel filter.
buttap -- Return (z,p,k) for analog prototype of Butterworth filter.
cheb1ap -- Return (z,p,k) for type I Chebyshev filter.
cheb2ap -- Return (z,p,k) for type II Chebyshev filter.
cmplx_sort -- Sort roots based on magnitude.
ellipap -- Return (z,p,k) for analog prototype of elliptic filter.
lp2bp -- Transform a lowpass filter prototype to a bandpass filter.
lp2bs -- Transform a lowpass filter prototype to a bandstop filter.
lp2hp -- Transform a lowpass filter prototype to a highpass filter.
lp2lp -- Transform a lowpass filter prototype to a lowpass filter.
normalize -- Normalize polynomial representation of a transfer function.
Matlab-style IIR filter design
==============================
.. autosummary::
:toctree: generated/
butter -- Butterworth
buttord
cheby1 -- Chebyshev Type I
cheb1ord
cheby2 -- Chebyshev Type II
cheb2ord
ellip -- Elliptic (Cauer)
ellipord
bessel -- Bessel (no order selection available -- try buttord)
iirnotch -- Design second-order IIR notch digital filter.
iirpeak -- Design second-order IIR peak (resonant) digital filter.
Continuous-Time Linear Systems
==============================
.. autosummary::
:toctree: generated/
lti -- Continuous-time linear time invariant system base class.
StateSpace -- Linear time invariant system in state space form.
TransferFunction -- Linear time invariant system in transfer function form.
ZerosPolesGain -- Linear time invariant system in zeros, poles, gain form.
lsim -- continuous-time simulation of output to linear system.
lsim2 -- like lsim, but `scipy.integrate.odeint` is used.
impulse -- impulse response of linear, time-invariant (LTI) system.
impulse2 -- like impulse, but `scipy.integrate.odeint` is used.
step -- step response of continuous-time LTI system.
step2 -- like step, but `scipy.integrate.odeint` is used.
freqresp -- frequency response of a continuous-time LTI system.
bode -- Bode magnitude and phase data (continuous-time LTI).
Discrete-Time Linear Systems
============================
.. autosummary::
:toctree: generated/
dlti -- Discrete-time linear time invariant system base class.
StateSpace -- Linear time invariant system in state space form.
TransferFunction -- Linear time invariant system in transfer function form.
ZerosPolesGain -- Linear time invariant system in zeros, poles, gain form.
dlsim -- simulation of output to a discrete-time linear system.
dimpulse -- impulse response of a discrete-time LTI system.
dstep -- step response of a discrete-time LTI system.
dfreqresp -- frequency response of a discrete-time LTI system.
dbode -- Bode magnitude and phase data (discrete-time LTI).
LTI Representations
===================
.. autosummary::
:toctree: generated/
tf2zpk -- transfer function to zero-pole-gain.
tf2sos -- transfer function to second-order sections.
tf2ss -- transfer function to state-space.
zpk2tf -- zero-pole-gain to transfer function.
zpk2sos -- zero-pole-gain to second-order sections.
zpk2ss -- zero-pole-gain to state-space.
ss2tf -- state-space to transfer function.
ss2zpk -- state-space to zero-pole-gain.
sos2zpk -- second-order sections to zero-pole-gain.
sos2tf -- second-order sections to transfer function.
cont2discrete -- continuous-time to discrete-time LTI conversion.
place_poles -- pole placement.
Waveforms
=========
.. autosummary::
:toctree: generated/
chirp -- Frequency swept cosine signal, with several freq functions.
gausspulse -- Gaussian modulated sinusoid
max_len_seq -- Maximum length sequence
sawtooth -- Periodic sawtooth
square -- Square wave
sweep_poly -- Frequency swept cosine signal; freq is arbitrary polynomial
unit_impulse -- Discrete unit impulse
Window functions
================
.. autosummary::
:toctree: generated/
get_window -- Return a window of a given length and type.
barthann -- Bartlett-Hann window
bartlett -- Bartlett window
blackman -- Blackman window
blackmanharris -- Minimum 4-term Blackman-Harris window
bohman -- Bohman window
boxcar -- Boxcar window
chebwin -- Dolph-Chebyshev window
cosine -- Cosine window
exponential -- Exponential window
flattop -- Flat top window
gaussian -- Gaussian window
general_gaussian -- Generalized Gaussian window
hamming -- Hamming window
hann -- Hann window
hanning -- Hann window
kaiser -- Kaiser window
nuttall -- Nuttall's minimum 4-term Blackman-Harris window
parzen -- Parzen window
slepian -- Slepian window
triang -- Triangular window
tukey -- Tukey window
Wavelets
========
.. autosummary::
:toctree: generated/
cascade -- compute scaling function and wavelet from coefficients
daub -- return low-pass filter for Daubechies wavelets
morlet -- Complex Morlet wavelet.
qmf -- return quadrature mirror filter from low-pass
ricker -- return ricker wavelet
cwt -- perform continuous wavelet transform
Peak finding
============
.. autosummary::
:toctree: generated/
find_peaks_cwt -- Attempt to find the peaks in the given 1-D array
argrelmin -- Calculate the relative minima of data
argrelmax -- Calculate the relative maxima of data
argrelextrema -- Calculate the relative extrema of data
Spectral Analysis
=================
.. autosummary::
:toctree: generated/
periodogram -- Compute a (modified) periodogram
welch -- Compute a periodogram using Welch's method
csd -- Compute the cross spectral density, using Welch's method
coherence -- Compute the magnitude squared coherence, using Welch's method
spectrogram -- Compute the spectrogram
lombscargle -- Computes the Lomb-Scargle periodogram
vectorstrength -- Computes the vector strength
"""
from __future__ import division, print_function, absolute_import
# The spline module (a C extension) provides:
# cspline2d, qspline2d, sepfir2d, symiirord1, symiirord2
from .spline import *
__all__ = [s for s in dir() if not s.startswith('_')]
from numpy.testing import Tester
test = Tester().test
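A minimal usage sketch (an editor's illustration, not part of the original module): the typical workflow pairs a design function such as `butter` with a filtering function such as `filtfilt`; the test signal, filter order, and cutoff below are assumed values chosen only for demonstration.
import numpy as np
from scipy import signal

t = np.linspace(0.0, 1.0, 500)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(t.size)  # 5 Hz tone plus noise
b, a = signal.butter(4, 0.2)            # 4th-order low-pass, cutoff at 0.2 of Nyquist
y = signal.filtfilt(b, a, x)            # zero-phase forward-backward filtering
f, Pxx = signal.welch(x, nperseg=256)   # Welch spectral estimate of the noisy signal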
| DailyActie/Surrogate-Model | 01-codes/scipy-master/scipy/signal/__init__.py | Python | mit | 12,187 | ["Gaussian"] | ce68325603a675be795fb22285998c612add6330694510ff2ccfedbf17ba0343 |
# encoding: utf-8
"""
Module to set up run time parameters for Clawpack.
The values set in the function setrun are then written out to data files
that will be read in by the Fortran code.
"""
from __future__ import print_function
import os
import datetime
import numpy as np
# Need to adjust the date a bit due to weirdness with leap year (I think)
ike_landfall = datetime.datetime(2008,9,13 - 1,7) - datetime.datetime(2008,1,1,0)
# days s/hour hours/day
days2seconds = lambda days: days * 60.0**2 * 24.0
seconds2days = lambda seconds: seconds / (60.0**2 * 24.0)
#------------------------------
def setrun(claw_pkg='geoclaw'):
#------------------------------
"""
Define the parameters used for running Clawpack.
INPUT:
claw_pkg expected to be "geoclaw" for this setrun.
OUTPUT:
rundata - object of class ClawRunData
"""
from clawpack.clawutil import data
assert claw_pkg.lower() == 'geoclaw', "Expected claw_pkg = 'geoclaw'"
num_dim = 2
rundata = data.ClawRunData(claw_pkg, num_dim)
#------------------------------------------------------------------
# Problem-specific parameters to be written to setprob.data:
#------------------------------------------------------------------
#probdata = rundata.new_UserData(name='probdata',fname='setprob.data')
#------------------------------------------------------------------
# Standard Clawpack parameters to be written to claw.data:
# (or to amr2ez.data for AMR)
#------------------------------------------------------------------
clawdata = rundata.clawdata # initialized when rundata instantiated
# Set single grid parameters first.
# See below for AMR parameters.
# ---------------
# Spatial domain:
# ---------------
# Number of space dimensions:
clawdata.num_dim = num_dim
# Lower and upper edge of computational domain:
clawdata.lower[0] = -99.0 # west longitude
clawdata.upper[0] = -70.0 # east longitude
clawdata.lower[1] = 8.0 # south latitude
clawdata.upper[1] = 32.0 # north latitude
# Number of grid cells:
degree_factor = 4 # (0.25º,0.25º) ~ (25237.5 m, 27693.2 m) resolution
clawdata.num_cells[0] = int(clawdata.upper[0] - clawdata.lower[0]) * degree_factor
clawdata.num_cells[1] = int(clawdata.upper[1] - clawdata.lower[1]) * degree_factor
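# With this domain that is int(29 degrees) * 4 = 116 cells in longitude and
# int(24 degrees) * 4 = 96 cells in latitude on the coarsest level.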
# ---------------
# Size of system:
# ---------------
# Number of equations in the system:
clawdata.num_eqn = 3
# Number of auxiliary variables in the aux array (initialized in setaux)
# First three are from shallow GeoClaw, fourth is friction and last 3 are
# storm fields
clawdata.num_aux = 3 + 1 + 3
# Index of aux array corresponding to capacity function, if there is one:
clawdata.capa_index = 2
# -------------
# Initial time:
# -------------
clawdata.t0 = days2seconds(ike_landfall.days - 3) + ike_landfall.seconds
# clawdata.t0 = days2seconds(ike_landfall.days - 1) + ike_landfall.seconds
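# i.e. the run starts three days before Ike's landfall time (the commented
# alternative above would start one day before landfall).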
# Restart from checkpoint file of a previous run?
# Note: If restarting, you must also change the Makefile to set:
# RESTART = True
# If restarting, t0 above should be from original run, and the
# restart_file 'fort.chkNNNNN' specified below should be in
# the OUTDIR indicated in Makefile.
clawdata.restart = False # True to restart from prior results
clawdata.restart_file = 'fort.chk00006' # File to use for restart data
# -------------
# Output times:
#--------------
# Specify at what times the results should be written to fort.q files.
# Note that the time integration stops after the final output time.
# The solution at initial time t0 is always written in addition.
clawdata.output_style = 1
if clawdata.output_style==1:
# Output nout frames at equally spaced times up to tfinal:
# clawdata.tfinal = days2seconds(date2days('2008091400'))
clawdata.tfinal = days2seconds(ike_landfall.days + 0.75) + ike_landfall.seconds
recurrence = 24
clawdata.num_output_times = int((clawdata.tfinal - clawdata.t0)
* recurrence / (60**2 * 24))
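# With t0 three days before landfall and tfinal 0.75 days after it (as set above),
# this evaluates to int(3.75 days * 24 frames/day) = 90 output frames.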
clawdata.output_t0 = True # output at initial (or restart) time?
elif clawdata.output_style == 2:
# Specify a list of output times.
clawdata.output_times = [0.5, 1.0]
elif clawdata.output_style == 3:
# Output every iout timesteps with a total of ntot time steps:
clawdata.output_step_interval = 1
clawdata.total_steps = 1
clawdata.output_t0 = True
clawdata.output_format = 'binary' # 'ascii' or 'netcdf'
clawdata.output_q_components = 'all' # could be list such as [True,True]
clawdata.output_aux_components = 'all'
clawdata.output_aux_onlyonce = False # output aux arrays only at t0
# ---------------------------------------------------
# Verbosity of messages to screen during integration:
# ---------------------------------------------------
# The current t, dt, and cfl will be printed every time step
# at AMR levels <= verbosity. Set verbosity = 0 for no printing.
# (E.g. verbosity == 2 means print only on levels 1 and 2.)
clawdata.verbosity = 1
# --------------
# Time stepping:
# --------------
# if dt_variable==1: variable time steps used based on cfl_desired,
# if dt_variable==0: fixed time steps dt = dt_initial will always be used.
clawdata.dt_variable = True
# Initial time step for variable dt.
# If dt_variable==0 then dt=dt_initial for all steps:
clawdata.dt_initial = 0.016
# Max time step to be allowed if variable dt used:
clawdata.dt_max = 1e+99
# Desired Courant number if variable dt used, and max to allow without
# retaking step with a smaller dt:
clawdata.cfl_desired = 0.75
clawdata.cfl_max = 1.0
# clawdata.cfl_desired = 0.25
# clawdata.cfl_max = 0.5
# Maximum number of time steps to allow between output times:
clawdata.steps_max = 5000
# ------------------
# Method to be used:
# ------------------
# Order of accuracy: 1 => Godunov, 2 => Lax-Wendroff plus limiters
clawdata.order = 1
# Use dimensional splitting? (not yet available for AMR)
clawdata.dimensional_split = 'unsplit'
# For unsplit method, transverse_waves can be
# 0 or 'none' ==> donor cell (only normal solver used)
# 1 or 'increment' ==> corner transport of waves
# 2 or 'all' ==> corner transport of 2nd order corrections too
clawdata.transverse_waves = 1
# Number of waves in the Riemann solution:
clawdata.num_waves = 3
# List of limiters to use for each wave family:
# Required: len(limiter) == num_waves
# Some options:
# 0 or 'none' ==> no limiter (Lax-Wendroff)
# 1 or 'minmod' ==> minmod
# 2 or 'superbee' ==> superbee
# 3 or 'mc' ==> MC limiter
# 4 or 'vanleer' ==> van Leer
clawdata.limiter = ['mc', 'mc', 'mc']
clawdata.use_fwaves = True # True ==> use f-wave version of algorithms
# Source terms splitting:
# src_split == 0 or 'none' ==> no source term (src routine never called)
# src_split == 1 or 'godunov' ==> Godunov (1st order) splitting used,
# src_split == 2 or 'strang' ==> Strang (2nd order) splitting used, not recommended.
clawdata.source_split = 'godunov'
# clawdata.source_split = 'strang'
# --------------------
# Boundary conditions:
# --------------------
# Number of ghost cells (usually 2)
clawdata.num_ghost = 2
# Choice of BCs at xlower and xupper:
# 0 => user specified (must modify bcN.f to use this option)
# 1 => extrapolation (non-reflecting outflow)
# 2 => periodic (must specify this at both boundaries)
# 3 => solid wall for systems where q(2) is normal velocity
clawdata.bc_lower[0] = 'extrap'
clawdata.bc_upper[0] = 'extrap'
clawdata.bc_lower[1] = 'extrap'
clawdata.bc_upper[1] = 'extrap'
# Specify when checkpoint files should be created that can be
# used to restart a computation.
clawdata.checkpt_style = 0
if clawdata.checkpt_style == 0:
# Do not checkpoint at all
pass
elif clawdata.checkpt_style == 1:
# Checkpoint only at tfinal.
pass
elif clawdata.checkpt_style == 2:
# Specify a list of checkpoint times.
clawdata.checkpt_times = [0.1,0.15]
elif clawdata.checkpt_style == 3:
# Checkpoint every checkpt_interval timesteps (on Level 1)
# and at the final time.
clawdata.checkpt_interval = 5
# ---------------
# AMR parameters:
# ---------------
amrdata = rundata.amrdata
# max number of refinement levels:
amrdata.amr_levels_max = 7
# amrdata.amr_levels_max = 6
# List of refinement ratios at each level (length at least mxnest-1)
# amrdata.refinement_ratios_x = [2,2,3,4,16]
# amrdata.refinement_ratios_y = [2,2,3,4,16]
# amrdata.refinement_ratios_t = [2,2,3,4,16]
# amrdata.refinement_ratios_x = [2,2,2,6,16]
# amrdata.refinement_ratios_y = [2,2,2,6,16]
# amrdata.refinement_ratios_t = [2,2,2,6,16]
amrdata.refinement_ratios_x = [2,2,2,6,4,4]
amrdata.refinement_ratios_y = [2,2,2,6,4,4]
amrdata.refinement_ratios_t = [2,2,2,6,4,4]
# Specify type of each aux variable in amrdata.auxtype.
# This must be a list of length maux, each element of which is one of:
# 'center', 'capacity', 'xleft', or 'yleft' (see documentation).
amrdata.aux_type = ['center', 'capacity', 'yleft', 'center', 'center', 'center',
'center']  # one entry per aux variable (num_aux = 7 above)
# Flag using refinement routine flag2refine rather than richardson error
amrdata.flag_richardson = False # use Richardson?
amrdata.flag2refine = True
# steps to take on each level L between regriddings of level L+1:
amrdata.regrid_interval = 3
# width of buffer zone around flagged points:
# (typically the same as regrid_interval so waves don't escape):
amrdata.regrid_buffer_width = 2
# clustering alg. cutoff for (# flagged pts) / (total # of cells refined)
# (closer to 1.0 => more small grids may be needed to cover flagged cells)
amrdata.clustering_cutoff = 0.700000
# print info about each regridding up to this level:
amrdata.verbosity_regrid = 0
# ----- For developers -----
# Toggle debugging print statements:
amrdata.dprint = False # print domain flags
amrdata.eprint = False # print err est flags
amrdata.edebug = False # even more err est flags
amrdata.gprint = False # grid bisection/clustering
amrdata.nprint = False # proper nesting output
amrdata.pprint = False # proj. of tagged points
amrdata.rprint = False # print regridding summary
amrdata.sprint = False # space/memory output
amrdata.tprint = False # time step reporting each level
amrdata.uprint = False # update/upbnd reporting
# More AMR parameters can be set -- see the defaults in pyclaw/data.py
# == setregions.data values ==
regions = rundata.regiondata.regions
# to specify regions of refinement append lines of the form
# [minlevel,maxlevel,t1,t2,x1,x2,y1,y2]
# La-Tex Shelf (Louisiana-Texas shelf)
# regions.append([1, 5, rundata.clawdata.t0, rundata.clawdata.tfinal,
# -97.5, -88.5, 27.5, 30.5])
# Galveston region
# Galveston Sub-Domains
regions.append([1, 4, rundata.clawdata.t0, rundata.clawdata.tfinal,
-95.8666, -93.4, 28.63333, 30.2])
regions.append([1, 5, rundata.clawdata.t0, rundata.clawdata.tfinal,
-95.3723, -94.5939, 29.2467, 29.9837])
regions.append([1, 7, rundata.clawdata.t0, rundata.clawdata.tfinal,
-95.25, -94.3, 28.85, 29.8])
# Galveston Channel Entrance (galveston_channel)
# regions.append([1, 7, rundata.clawdata.t0, rundata.clawdata.tfinal,
# -94.84, -94.70, 29.30, 29.40])
# # Galveston area (galveston)
# regions.append([1, 7, rundata.clawdata.t0, rundata.clawdata.tfinal,
# -94.922600000000003, -94.825786176806162,
# 29.352, 29.394523768822882])
# # Lower Galveston Bay channel (lower_galveston_bay)
# regions.append([1, 7, rundata.clawdata.t0, rundata.clawdata.tfinal,
# -94.903199999999998, -94.775835119593594,
# 29.383199999999999, 29.530588208444357])
# # Middle Galveston Bay Channel (upper_galveston_bay)
# regions.append([1, 7, rundata.clawdata.t0, rundata.clawdata.tfinal,
# -94.959199999999996, -94.859496211934697,
# 29.517700000000001, 29.617610214127549])
# # Upper Galveston bay channel (houston_channel_2)
# regions.append([1, 7, rundata.clawdata.t0, rundata.clawdata.tfinal,
# -95.048400000000001, -94.903076052178108,
# 29.602699999999999, 29.688573241894751])
# # Lower Houston channel (houston_channel_3)
# regions.append([1, 7, rundata.clawdata.t0, rundata.clawdata.tfinal,
# -95.094899999999996, -94.892808885060177,
# 29.6769, 29.832958103058733])
# # Upper Houston channel (houston_harbor)
# regions.append([1, 7, rundata.clawdata.t0, rundata.clawdata.tfinal,
# -95.320999999999998, -95.074527281677078,
# 29.699999999999999, 29.830461271340102])
# == setgauges.data values ==
# for gauges append lines of the form [gaugeno, x, y, t1, t2]
# rundata.gaugedata.gauges.append([121, -94.70895, 29.2812, rundata.clawdata.t0, rundata.clawdata.tfinal])
# rundata.gaugedata.gauges.append([122, -94.38840, 29.4964, rundata.clawdata.t0, rundata.clawdata.tfinal])
# rundata.gaugedata.gauges.append([123, -94.12530, 29.5846, rundata.clawdata.t0, rundata.clawdata.tfinal])
# Gauges from Ike AWR paper (2011 Dawson et al)
rundata.gaugedata.gauges.append([1, -95.04, 29.07, rundata.clawdata.t0, rundata.clawdata.tfinal])
rundata.gaugedata.gauges.append([2, -94.71, 29.28, rundata.clawdata.t0, rundata.clawdata.tfinal])
rundata.gaugedata.gauges.append([3, -94.39, 29.49, rundata.clawdata.t0, rundata.clawdata.tfinal])
rundata.gaugedata.gauges.append([4, -94.13, 29.58, rundata.clawdata.t0, rundata.clawdata.tfinal])
# rundata.gaugedata.gauges.append([5, -95.00, 29.70, rundata.clawdata.t0, rundata.clawdata.tfinal])
# rundata.gaugedata.gauges.append([6, -95.14, 29.74, rundata.clawdata.t0, rundata.clawdata.tfinal])
# rundata.gaugedata.gauges.append([7, -95.08, 29.55, rundata.clawdata.t0, rundata.clawdata.tfinal])
# rundata.gaugedata.gauges.append([8, -94.75, 29.76, rundata.clawdata.t0, rundata.clawdata.tfinal])
# rundata.gaugedata.gauges.append([9, -95.27, 29.72, rundata.clawdata.t0, rundata.clawdata.tfinal])
# rundata.gaugedata.gauges.append([10, -94.51, 29.52, rundata.clawdata.t0, rundata.clawdata.tfinal])
# Stations from Andrew Kennedy
# Station R - 82
rundata.gaugedata.gauges.append([ord('R'),-97.1176, 27.6289, rundata.clawdata.t0, rundata.clawdata.tfinal])
# Station S - 83
rundata.gaugedata.gauges.append([ord('S'),-96.55036666666666, 28.207733333333334, rundata.clawdata.t0, rundata.clawdata.tfinal])
# Station U - 85
rundata.gaugedata.gauges.append([ord('U'),-95.75235, 28.62505, rundata.clawdata.t0, rundata.clawdata.tfinal])
# Station V - 86
rundata.gaugedata.gauges.append([ord('V'),-95.31511666666667, 28.8704, rundata.clawdata.t0, rundata.clawdata.tfinal])
# Station W: Same as gauge 1
# rundata.gaugedata.gauges.append([ord('W'),-95.03958333333334, 29.0714, rundata.clawdata.t0, rundata.clawdata.tfinal])
# Station X: Same as gauge 2 above
# rundata.gaugedata.gauges.append([ord('X'),-94.70895, 29.281266666666667, rundata.clawdata.t0, rundata.clawdata.tfinal])
# Station Y: Same as gauge 3 above
# rundata.gaugedata.gauges.append([ord('Y'),-94.3884, 29.496433333333332, rundata.clawdata.t0, rundata.clawdata.tfinal])
# Station Z: Same as gauge 4 above
# rundata.gaugedata.gauges.append([ord('Z'),-94.12533333333333, 29.584683333333334, rundata.clawdata.t0, rundata.clawdata.tfinal])
#------------------------------------------------------------------
# GeoClaw specific parameters:
#------------------------------------------------------------------
rundata = setgeo(rundata)
return rundata
# end of function setrun
# ----------------------
#-------------------
def setgeo(rundata):
#-------------------
"""
Set GeoClaw specific runtime parameters.
For documentation see ....
"""
try:
geo_data = rundata.geo_data
except AttributeError:
print("*** Error, this rundata has no geo_data attribute")
raise AttributeError("Missing geo_data attribute")
# == Physics ==
geo_data.gravity = 9.81
geo_data.coordinate_system = 2
geo_data.earth_radius = 6367.5e3
# == Forcing Options
geo_data.coriolis_forcing = True
geo_data.friction_forcing = True
geo_data.manning_coefficient = 0.025 # Overridden below
geo_data.friction_depth = 1e10
# == Algorithm and Initial Conditions ==
geo_data.sea_level = 0.28 # Due to seasonal swelling of gulf
geo_data.dry_tolerance = 1.e-2
# Refinement Criteria
refine_data = rundata.refinement_data
refine_data.wave_tolerance = 1.0
# refine_data.wave_tolerance = 0.5
# refine_data.speed_tolerance = [0.25,0.5,1.0,2.0,3.0,4.0]
# refine_data.speed_tolerance = [0.5,1.0,1.5,2.0,2.5,3.0]
refine_data.speed_tolerance = [1.0,2.0,3.0,4.0]
refine_data.deep_depth = 300.0
refine_data.max_level_deep = 4
refine_data.variable_dt_refinement_ratios = True
# == settopo.data values ==
topo_data = rundata.topo_data
topo_data.topofiles = []
# for topography, append lines of the form
# [topotype, minlevel, maxlevel, t1, t2, fname]
# See regions for control over these regions, need better bathy data for the
# smaller domains
if "DATA_PATH" in os.environ.keys():
topo_path = os.path.join(os.environ["DATA_PATH"], "topography", "gulf")
else:
topo_path = os.path.join("..", "bathy")
topo_data.topofiles.append([3, 1, 5, rundata.clawdata.t0,
rundata.clawdata.tfinal,
os.path.join(topo_path,
'gulf_caribbean.tt3')])
topo_data.topofiles.append([3, 1, 5, rundata.clawdata.t0,
rundata.clawdata.tfinal,
os.path.join(topo_path,
'NOAA_Galveston_Houston.tt3')])
topo_data.topofiles.append([3, 1, 6, rundata.clawdata.t0,
rundata.clawdata.tfinal,
os.path.join(topo_path,
'galveston_tx.asc')])
# == setdtopo.data values ==
dtopo_data = rundata.dtopo_data
dtopo_data.dtopofiles = []
# for moving topography, append lines of the form : (<= 1 allowed for now!)
# [topotype, minlevel,maxlevel,fname]
# == setqinit.data values ==
rundata.qinit_data.qinit_type = 0
rundata.qinit_data.qinitfiles = []
# for qinit perturbations, append lines of the form: (<= 1 allowed for now!)
# [minlev, maxlev, fname]
# == setfixedgrids.data values ==
rundata.fixed_grid_data.fixedgrids = []
# for fixed grids append lines of the form
# [t1,t2,noutput,x1,x2,y1,y2,xpoints,ypoints,\
# ioutarrivaltimes,ioutsurfacemax]
return rundata
# end of function setgeo
# ----------------------
def set_storm(rundata):
data = rundata.surge_data
# Physics parameters
data.rho_air = 1.15
data.ambient_pressure = 101.3e3 # Nominal atmos pressure
# Source term controls - These are currently not respected
data.wind_forcing = True
data.drag_law = 1
data.pressure_forcing = True
# Source term algorithm parameters
# data.wind_tolerance = 1e-4
# data.pressure_tolerance = 1e-4 # Pressure source term tolerance
# AMR parameters
data.wind_refine = [20.0,40.0,60.0] # m/s
data.R_refine = [60.0e3,40e3,20e3] # m
# Storm parameters
data.storm_type = 1 # Type of storm
data.landfall = days2seconds(ike_landfall.days) + ike_landfall.seconds
data.display_landfall_time = True
# Storm type 1 - Idealized storm track
data.storm_file = os.path.expandvars(os.path.join(os.getcwd(),'ike.storm'))
return rundata
def set_friction(rundata):
data = rundata.friction_data
# Variable friction
data.variable_friction = True
# Region based friction
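# Editor's gloss (not in the original source): each entry appears to follow
# GeoClaw's variable-friction convention of [lower (long, lat) corner,
# upper (long, lat) corner, depth break points in descending order,
# Manning n value for each resulting depth band].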
# Entire domain
data.friction_regions.append([rundata.clawdata.lower,
rundata.clawdata.upper,
[np.infty,0.0,-np.infty],
[0.030, 0.022]])
# La-Tex Shelf
data.friction_regions.append([(-98, 25.25), (-90, 30),
[np.infty,-10.0,-200.0,-np.infty],
[0.030, 0.012, 0.022]])
return rundata
def get_topo(plot=False):
"""Retrieve the topo file from the GeoClaw repository."""
# Fetch topography
base_url = "https://dl.dropboxusercontent.com/u/8449354/bathy/"
urls = [os.path.join(base_url, "gulf_caribbean.tt3.tar.bz2"),
os.path.join(base_url, "NOAA_Galveston_Houston.tt3.tar.bz2"),
os.path.join(base_url, "galveston_tx.asc.tar.bz2")]
for url in urls:
data.get_remote_file(url, verbose=True)
# Plot if requested
if plot:
import matplotlib.pyplot as plt
scratch_dir = os.path.join(os.environ.get("CLAW", os.getcwd()),
'geoclaw', 'scratch')
for topo_name in ['gulf_caribbean.tt3', 'NOAA_Galveston_Houston.tt3',
'galveston_tx.asc']:
topo_path = os.path.join(scratch_dir, topo_name)
topo = topotools.Topography(topo_path, topo_type=3)
topo.plot()
fname = os.path.splitext(topo_name)[0] + '.png'
plt.savefig(fname)
if __name__ == '__main__':
# Set up run-time parameters and write all data files.
import sys
if len(sys.argv) == 2:
rundata = setrun(sys.argv[1])
else:
rundata = setrun()
rundata = set_storm(rundata)
rundata = set_friction(rundata)
rundata.write()
| mandli/surge-examples | ike/setrun.py | Python | mit | 23,410 | ["NetCDF"] | 74f099627b5d837080fd85114785d2534a0308d3388c0b004d7322d189dcc787 |
# pylint: disable=C0111
# pylint: disable=W0621
import os
from lettuce import world, step
from nose.tools import assert_true, assert_in # pylint: disable=no-name-in-module
from django.conf import settings
from student.roles import CourseStaffRole, CourseInstructorRole, GlobalStaff
from student.models import get_user
from selenium.webdriver.common.keys import Keys
from logging import getLogger
from student.tests.factories import AdminFactory
from student import auth
logger = getLogger(__name__)
from terrain.browser import reset_data
TEST_ROOT = settings.COMMON_TEST_DATA_ROOT
@step('I (?:visit|access|open) the Studio homepage$')
def i_visit_the_studio_homepage(_step):
# To make this go to port 8001, put
# LETTUCE_SERVER_PORT = 8001
# in your settings.py file.
world.visit('/')
signin_css = 'a.action-signin'
assert world.is_css_present(signin_css)
@step('I am logged into Studio$')
def i_am_logged_into_studio(_step):
log_into_studio()
@step('I confirm the alert$')
def i_confirm_with_ok(_step):
world.browser.get_alert().accept()
@step(u'I press the "([^"]*)" delete icon$')
def i_press_the_category_delete_icon(_step, category):
if category == 'section':
css = 'a.action.delete-section-button'
elif category == 'subsection':
css = 'a.action.delete-subsection-button'
else:
assert False, 'Invalid category: %s' % category
world.css_click(css)
@step('I have opened a new course in Studio$')
def i_have_opened_a_new_course(_step):
open_new_course()
@step('(I select|s?he selects) the new course')
def select_new_course(_step, whom):
course_link_css = 'a.course-link'
world.css_click(course_link_css)
@step(u'I press the "([^"]*)" notification button$')
def press_the_notification_button(_step, name):
# Because the notification uses a CSS transition,
# Selenium will always report it as being visible.
# This makes it very difficult to successfully click
# the "Save" button at the UI level.
# Instead, we use JavaScript to reliably click
# the button.
btn_css = 'div#page-notification a.action-%s' % name.lower()
world.trigger_event(btn_css, event='focus')
world.browser.execute_script("$('{}').click()".format(btn_css))
world.wait_for_ajax_complete()
@step('I change the "(.*)" field to "(.*)"$')
def i_change_field_to_value(_step, field, value):
field_css = '#%s' % '-'.join([s.lower() for s in field.split()])
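# e.g. the field name "Course Start Date" maps to the id selector '#course-start-date'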
ele = world.css_find(field_css).first
ele.fill(value)
ele._element.send_keys(Keys.ENTER)
@step('I reset the database')
def reset_the_db(_step):
"""
When running Lettuce tests using examples (i.e. "Confirmation is
shown on save" in course-settings.feature), the normal hooks
aren't called between examples. reset_data should run before each
scenario to flush the test database. When this doesn't happen we
get errors due to trying to insert a non-unique entry. So instead,
we delete the database manually. This has the effect of removing
any users and courses that have been created during the test run.
"""
reset_data(None)
@step('I see a confirmation that my changes have been saved')
def i_see_a_confirmation(step):
confirmation_css = '#alert-confirmation'
assert world.is_css_present(confirmation_css)
def open_new_course():
world.clear_courses()
create_studio_user()
log_into_studio()
create_a_course()
def create_studio_user(
uname='robot',
email='robot+studio@edx.org',
password='test',
is_staff=False):
studio_user = world.UserFactory(
username=uname,
email=email,
password=password,
is_staff=is_staff)
registration = world.RegistrationFactory(user=studio_user)
registration.register(studio_user)
registration.activate()
return studio_user
def fill_in_course_info(
name='Robot Super Course',
org='MITx',
num='101',
run='2013_Spring'):
world.css_fill('.new-course-name', name)
world.css_fill('.new-course-org', org)
world.css_fill('.new-course-number', num)
world.css_fill('.new-course-run', run)
def log_into_studio(
uname='robot',
email='robot+studio@edx.org',
password='test',
name='Robot Studio'):
world.log_in(username=uname, password=password, email=email, name=name)
# Navigate to the studio dashboard
world.visit('/')
assert_in(uname, world.css_text('h2.title', timeout=10))
def add_course_author(user, course):
"""
Add the user to the instructor group of the course
so they will have the permissions to see it in studio
"""
global_admin = AdminFactory()
for role in (CourseStaffRole, CourseInstructorRole):
auth.add_users(global_admin, role(course.id), user)
def create_a_course():
course = world.CourseFactory.create(org='MITx', course='999', display_name='Robot Super Course')
world.scenario_dict['COURSE'] = course
user = world.scenario_dict.get("USER")
if not user:
user = get_user('robot+studio@edx.org')
add_course_author(user, course)
# Navigate to the studio dashboard
world.visit('/')
course_link_css = 'a.course-link'
world.css_click(course_link_css)
course_title_css = 'span.course-title'
assert_true(world.is_css_present(course_title_css))
def add_section(name='My Section'):
link_css = 'a.new-courseware-section-button'
world.css_click(link_css)
name_css = 'input.new-section-name'
save_css = 'input.new-section-name-save'
world.css_fill(name_css, name)
world.css_click(save_css)
span_css = 'span.section-name-span'
assert_true(world.is_css_present(span_css))
def add_subsection(name='Subsection One'):
css = 'a.new-subsection-item'
world.css_click(css)
name_css = 'input.new-subsection-name-input'
save_css = 'input.new-subsection-name-save'
world.css_fill(name_css, name)
world.css_click(save_css)
def set_date_and_time(date_css, desired_date, time_css, desired_time, key=None):
set_element_value(date_css, desired_date, key)
world.wait_for_ajax_complete()
set_element_value(time_css, desired_time, key)
world.wait_for_ajax_complete()
def set_element_value(element_css, element_value, key=None):
element = world.css_find(element_css).first
element.fill(element_value)
# hit TAB or provided key to trigger save content
if key is not None:
element._element.send_keys(getattr(Keys, key)) # pylint: disable=protected-access
else:
element._element.send_keys(Keys.TAB) # pylint: disable=protected-access
@step('I have enabled the (.*) advanced module$')
def i_enabled_the_advanced_module(step, module):
step.given('I have opened a new course section in Studio')
world.css_click('.nav-course-settings')
world.css_click('.nav-course-settings-advanced a')
type_in_codemirror(0, '["%s"]' % module)
press_the_notification_button(step, 'Save')
@world.absorb
def create_course_with_unit():
"""
Prepare for tests by creating a course with a section, subsection, and unit.
Performs the following:
Clear out all courseware
Create a course with a section, subsection, and unit
Create a user and make that user a course author
Log the user into studio
Open the course from the dashboard
Expand the section and click on the New Unit link
The end result is the page where the user is editing the new unit
"""
world.clear_courses()
course = world.CourseFactory.create()
world.scenario_dict['COURSE'] = course
section = world.ItemFactory.create(parent_location=course.location)
world.ItemFactory.create(
parent_location=section.location,
category='sequential',
display_name='Subsection One',
)
user = create_studio_user(is_staff=False)
add_course_author(user, course)
log_into_studio()
world.css_click('a.course-link')
world.wait_for_js_to_load()
css_selectors = [
'div.section-item a.expand-collapse', 'a.new-unit-item'
]
for selector in css_selectors:
world.css_click(selector)
world.wait_for_mathjax()
world.wait_for_xmodule()
assert world.is_css_present('ul.new-component-type')
@step('I have clicked the new unit button$')
@step(u'I am in Studio editing a new unit$')
def edit_new_unit(step):
create_course_with_unit()
@step('the save notification button is disabled')
def save_button_disabled(step):
button_css = '.action-save'
disabled = 'is-disabled'
assert world.css_has_class(button_css, disabled)
@step('the "([^"]*)" button is disabled')
def button_disabled(step, value):
button_css = 'input[value="%s"]' % value
assert world.css_has_class(button_css, 'is-disabled')
def _do_studio_prompt_action(intent, action):
"""
Wait for a studio prompt to appear and press the specified action button
See cms/static/js/views/feedback_prompt.js for implementation
"""
assert intent in ['warning', 'error', 'confirmation', 'announcement',
'step-required', 'help', 'mini']
assert action in ['primary', 'secondary']
world.wait_for_present('div.wrapper-prompt.is-shown#prompt-{}'.format(intent))
action_css = 'li.nav-item > a.action-{}'.format(action)
world.trigger_event(action_css, event='focus')
world.browser.execute_script("$('{}').click()".format(action_css))
world.wait_for_ajax_complete()
world.wait_for_present('div.wrapper-prompt.is-hiding#prompt-{}'.format(intent))
@world.absorb
def confirm_studio_prompt():
_do_studio_prompt_action('warning', 'primary')
@step('I confirm the prompt')
def confirm_the_prompt(step):
confirm_studio_prompt()
@step(u'I am shown a prompt$')
def i_am_shown_a_notification(step):
assert world.is_css_present('.wrapper-prompt')
def type_in_codemirror(index, text, find_prefix="$"):
script = """
var cm = {find_prefix}('div.CodeMirror:eq({index})').get(0).CodeMirror;
cm.getInputField().focus();
cm.setValue(arguments[0]);
cm.getInputField().blur();""".format(index=index, find_prefix=find_prefix)
world.browser.driver.execute_script(script, str(text))
world.wait_for_ajax_complete()
def get_codemirror_value(index=0, find_prefix="$"):
return world.browser.driver.execute_script(
"""
return {find_prefix}('div.CodeMirror:eq({index})').get(0).CodeMirror.getValue();
""".format(index=index, find_prefix=find_prefix)
)
def attach_file(filename, sub_path):
path = os.path.join(TEST_ROOT, sub_path, filename)
world.browser.execute_script("$('input.file-input').css('display', 'block')")
world.browser.attach_file('file', os.path.abspath(path))
def upload_file(filename, sub_path=''):
attach_file(filename, sub_path)
button_css = '.wrapper-modal-window-assetupload .action-upload'
world.css_click(button_css)
@step(u'"([^"]*)" logs in$')
def other_user_login(step, name):
step.given('I log out')
world.visit('/')
signin_css = 'a.action-signin'
world.is_css_present(signin_css)
world.css_click(signin_css)
def fill_login_form():
login_form = world.browser.find_by_css('form#login_form')
login_form.find_by_name('email').fill(name + '@edx.org')
login_form.find_by_name('password').fill("test")
login_form.find_by_name('submit').click()
world.retry_on_exception(fill_login_form)
assert_true(world.is_css_present('.new-course-button'))
world.scenario_dict['USER'] = get_user(name + '@edx.org')
@step(u'the user "([^"]*)" exists( as a course (admin|staff member|is_staff))?$')
def create_other_user(_step, name, has_extra_perms, role_name):
email = name + '@edx.org'
user = create_studio_user(uname=name, password="test", email=email)
if has_extra_perms:
if role_name == "is_staff":
GlobalStaff().add_users(user)
else:
if role_name == "admin":
# admins get staff privileges, as well
roles = (CourseStaffRole, CourseInstructorRole)
else:
roles = (CourseStaffRole,)
course_key = world.scenario_dict["COURSE"].id
global_admin = AdminFactory()
for role in roles:
auth.add_users(global_admin, role(course_key), user)
@step('I log out')
def log_out(_step):
world.visit('logout')
@step(u'I click on "edit a draft"$')
def i_edit_a_draft(_step):
world.css_click("a.create-draft")
@step(u'I click on "replace with draft"$')
def i_replace_w_draft(_step):
world.css_click("a.publish-draft")
@step(u'I click on "delete draft"$')
def i_delete_draft(_step):
world.css_click("a.delete-draft")
@step(u'I publish the unit$')
def publish_unit(_step):
world.select_option('visibility-select', 'public')
@step(u'I unpublish the unit$')
def unpublish_unit(_step):
world.select_option('visibility-select', 'private')
| nanolearning/edx-platform | cms/djangoapps/contentstore/features/common.py | Python | agpl-3.0 | 13,058 | ["VisIt"] | 54b566321917d0530fa6927a0cf32c768fd80c45d1f36a27dac75d52692a231b |
import copy
import logging
import os.path
import re
from spitfire.compiler.ast import *
from spitfire.compiler.analyzer import *
from spitfire.compiler.visitor import print_tree
from spitfire.compiler.walker import flatten_tree
import __builtin__
builtin_names = vars(__builtin__)
class _BaseAnalyzer(object):
def __init__(self, ast_root, options, compiler):
self.ast_root = ast_root
self.options = options
self.compiler = compiler
self.unoptimized_node_types = set()
def optimize_ast(self):
self.visit_ast(self.ast_root)
if self.options.debug:
print "unoptimized_node_types", self.unoptimized_node_types
return self.ast_root
# build an AST node list from a single parse node
# need the parent in case we are going to delete a node
def visit_ast(self, node, parent=None):
node.parent = parent
method_name = 'analyze%s' % node.__class__.__name__
method = getattr(self, method_name, self.default_optimize_node)
if method_name in self.compiler.debug_flags:
print method_name, node
return method(node)
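# For example, visiting a ForNode looks up 'analyzeForNode'; node types with no
# matching analyze* handler (and no explicit skip) fall through to
# default_optimize_node and get recorded in unoptimized_node_types.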
def skip_analyze_node(self, node):
return
analyzeLiteralNode = skip_analyze_node
analyzeIdentifierNode = skip_analyze_node
analyzeTargetNode = skip_analyze_node
def default_optimize_node(self, node):
# print "default_optimize_node", type(node)
self.unoptimized_node_types.add(type(node))
return
def get_parent_loop(self, node):
return self._get_parent_node_by_type(node, ForNode)
def get_parent_function(self, node):
return self._get_parent_node_by_type(node, FunctionNode)
def get_parent_block(self, node):
return self._get_parent_node_by_type(node,
(FunctionNode, ForNode, IfNode, ElseNode))
def _get_parent_node_by_type(self, node, node_type):
node = node.parent
while node is not None:
if isinstance(node, node_type):
return node
node = node.parent
return None
# this function has some rules that are a bit unclean - you aren't actually
# looking for the 'parent' scope, but one you might insert nodes into.
# for instance, you skip over a ForNode so that optimizations are inserted
# in a loop-invariant fashion.
def get_parent_scope(self, node):
node_stack = [node]
node = node.parent
while node is not None:
if type(node) == FunctionNode:
return node.scope
elif type(node) == IfNode:
# elements of the test clause need to reference the next scope
# "up" - usually the function, but could be another conditional block
# fixme: if we ever implement "elif" this will have to get fixed up
if node_stack[-1] != node.test_expression:
return node.scope
elif type(node) == ElseNode:
return node.scope
elif type(node) == ForNode:
if node_stack[-1] != node.expression_list:
return node.scope
node_stack.append(node)
node = node.parent
raise SemanticAnalyzerError("expected a parent function")
def get_insert_block_and_point(self, node):
original_node = node
insert_marker = node
node = node.parent
while node is not None:
if isinstance(node, (FunctionNode, ForNode, IfNode, ElseNode)):
if insert_marker in node.child_nodes:
return node, insert_marker
insert_marker = node
node = node.parent
raise SemanticAnalyzerError("expected a parent block")
def replace_in_parent_block(self, node, new_node):
insert_block, insert_marker = self.get_insert_block_and_point(node)
insert_block.replace(insert_marker, new_node)
def reanalyzeConditionalNode(self, conditional_node):
if (not self.options.hoist_conditional_aliases and
not self.options.cache_filtered_placeholders):
return
parent_node = conditional_node
parent_block, insertion_point = self.get_insert_block_and_point(
conditional_node)
if self.options.hoist_conditional_aliases:
#print "reanalyzeConditionalNode", conditional_node
#print " parent_block", parent_block
#print " parent_scope", parent_block.scope
# NOTE: need to iterate over items, in case we modify something
for alias_node, alias in conditional_node.scope.aliased_expression_map.items():
#print " check alias:", alias
#print " alias_node:", alias_node
assign_alias_node = AssignNode(alias, alias_node)
if alias_node in parent_block.scope.aliased_expression_map:
if self.is_condition_invariant(alias_node, conditional_node):
#print " hoist:", assign_alias_node
self.hoist(
conditional_node, parent_block, insertion_point, alias_node,
assign_alias_node)
def reanalyzeLoopNode(self, loop_node):
if not self.options.hoist_loop_invariant_aliases:
return
parent_block, insertion_point = self.get_insert_block_and_point(loop_node)
# NOTE: need to iterate over items, in case we modify something
for alias_node, alias in loop_node.scope.aliased_expression_map.items():
assign_alias = AssignNode(alias, alias_node)
if alias_node in parent_block.scope.aliased_expression_map:
if self.is_loop_invariant(alias_node, loop_node):
self.hoist(loop_node, parent_block, insertion_point, alias_node,
assign_alias)
else:
# if this alias is not already used in the parent scope, that's
# ok, hoist it if it's loop invariant
if self.is_loop_invariant(alias_node, loop_node):
loop_node.remove(assign_alias)
parent_block.insert_before(loop_node, assign_alias)
parent_block.scope.hoisted_aliases.append(alias_node)
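# Illustrative note (editor's addition, assuming typical spitfire template syntax):
# for a loop like  #for $item in $items: ... ${foo.bar} ...  the alias computed for
# foo.bar does not depend on the loop targets, so its AssignNode can be hoisted in
# front of the ForNode and evaluated once instead of on every iteration.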
def is_condition_invariant(self, node, conditional_node):
node_dependency_set = self.get_node_dependencies(node)
condition_invariant = not node_dependency_set.intersection(
conditional_node.scope.local_identifiers)
#print "is_condition_invariant:", condition_invariant
#print " locals:", conditional_node.scope.local_identifiers
#print " deps:", node_dependency_set
return condition_invariant
def is_loop_invariant(self, node, loop_node):
node_dependency_set = self.get_node_dependencies(node)
# print "is loop invariant node:", node
# for x in node_dependency_set:
# print " dep:", x
# find dependencies within the loop node but outside the node we're checking
node_dependency_set_except_node_tree = node_dependency_set - set(flatten_tree(node))
dependencies_within_loop = set(flatten_tree(loop_node)).intersection(
node_dependency_set_except_node_tree)
depends_on_loop_variants = bool(loop_node.loop_variant_set.intersection(
node_dependency_set))
# TODO: Disabling warnings for now. They are useless without
# filenames. Also need to make sure all these cases are valid.
# if not depends_on_loop_variants and dependencies_within_loop:
# # we can't assume this is invariant because it depends on other
# # nodes inside the loop. eventually we should hoist out both the
# # node and its dependencies.
# dependency_nodes = '\n'.join(' %s' % node.parent for node in dependencies_within_loop)
# logging.warning("Cannot hoist possible loop invariant: %s.", node)
# logging.warning("Please move following dependencies out of the loop:\n%s",
# dependency_nodes)
return not depends_on_loop_variants and not dependencies_within_loop
def get_node_dependencies(self, node):
node_dependency_set = set(flatten_tree(node))
parent_block = self.get_parent_block(node)
for n in list(node_dependency_set):
# when this is an identifier, you need to check all of the potential
# the dependencies for that symbol, which means doing some crawling
if isinstance(n, IdentifierNode):
identifier = n
parent_block_to_check = parent_block
while parent_block_to_check:
for block_node in parent_block_to_check.child_nodes:
if isinstance(block_node, AssignNode):
if block_node.left == identifier:
node_dependency_set.update(
self.get_node_dependencies(block_node.right))
parent_block_to_check = None
break
elif isinstance(block_node, IfNode):
# if you encounter a conditional in your chain, you depend on any
# dependencies of the condition itself
# FIXME: calling get_node_dependencies(block_node.test_expression)
# causes an infinite loop, but that is probably the correct way
# forward to address the dependency chain
node_dependency_set.update(
flatten_tree(block_node.test_expression))
else:
parent_block_to_check = self.get_parent_block(
parent_block_to_check)
#elif isinstance(n, (GetUDNNode, FilterNode)):
# node_dependency_set.update(
# self.get_node_dependencies(node.expression))
#print "get_node_dependencies", node
#print " deps:", node_dependency_set
return node_dependency_set
class OptimizationAnalyzer(_BaseAnalyzer):
def analyzeParameterNode(self, parameter):
self.visit_ast(parameter.default, parameter)
return
def analyzeTemplateNode(self, template):
# at this point, if we have a function registry, add in the nodes before we
# begin optimizing
for alias, (fq_name, method) in self.compiler.function_name_registry.iteritems():
fq_name_parts = fq_name.split('.')
self.ast_root.from_nodes.append(FromNode(
[IdentifierNode(x) for x in fq_name_parts[:-1]],
IdentifierNode(fq_name_parts[-1]),
IdentifierNode(alias)))
for n in template.from_nodes:
if n.alias:
template.global_identifiers.add(n.alias)
else:
template.global_identifiers.add(n.identifier)
# scan extends for dependencies
# this allows faster calling of template functions - we could also
# tune BufferWrite calls for these nodes
if self.options.use_dependency_analysis:
for n in template.extends_nodes:
path = os.path.join(
*[ident_node.name
for ident_node in n.source_module_name_list])
template_function_names = get_template_functions(self.compiler.include_path, path)
template.template_methods.update(template_function_names)
self.visit_ast(template.main_function, template)
for n in template.child_nodes:
self.visit_ast(n, template)
def analyzeFunctionNode(self, function):
function.scope.local_identifiers.extend([IdentifierNode(n.name)
for n in function.parameter_list])
for n in function.child_nodes:
self.visit_ast(n, function)
def analyzeForNode(self, for_node):
self.visit_ast(for_node.target_list, for_node)
for_node.loop_variant_set = set(for_node.target_list.flat_list)
self.visit_ast(for_node.expression_list, for_node)
for n in for_node.child_nodes:
self.visit_ast(n, for_node)
def analyzeAssignNode(self, node):
_identifier = IdentifierNode(node.left.name)
scope = self.get_parent_scope(node)
scope.local_identifiers.append(_identifier)
# note: this hack is here so you can partially analyze alias nodes
# without double-processing
if node.right:
self.visit_ast(node.right, node)
def analyzeExpressionListNode(self, expression_list_node):
for n in expression_list_node:
self.visit_ast(n, expression_list_node)
def analyzeTargetListNode(self, target_list_node):
flat_list = []
for n in target_list_node:
self.visit_ast(n, target_list_node)
if type(n) == TargetListNode:
flat_list.extend(n.flat_list)
else:
flat_list.append(n)
target_list_node.flat_list = flat_list
# def analyzeParameterListNode(self, parameter_list_node):
# flat_list = []
# for n in parameter_list_node:
# flat_list.append(n)
# target_list_node.flat_list = flat_list
def analyzeArgListNode(self, arg_list_node):
for n in arg_list_node:
self.visit_ast(n, arg_list_node)
def analyzeTupleLiteralNode(self, tuple_literal_node):
for n in tuple_literal_node.child_nodes:
self.visit_ast(n, tuple_literal_node)
def analyzeDictLiteralNode(self, dict_literal_node):
for key_node, value_node in dict_literal_node.child_nodes:
self.visit_ast(key_node, dict_literal_node)
self.visit_ast(value_node, dict_literal_node)
def analyzeCallFunctionNode(self, function_call):
self.visit_ast(function_call.expression, function_call)
self.visit_ast(function_call.arg_list, function_call)
# NOTE: these optimizations are disabled because the optimizer has a tendency
# to "over-hoist" code inside a CacheNode and you end up doing *more* work
def analyzeCacheNode(self, cache_node):
cache_placeholders = self.options.cache_resolved_placeholders
cache_udn_expressions = self.options.cache_resolved_udn_expressions
cache_filtered_placeholders = self.options.cache_filtered_placeholders
self.options.cache_resolved_placeholders = False
self.options.cache_resolved_udn_expressions = False
self.options.cache_filtered_placeholders = False
self.visit_ast(cache_node.expression, cache_node)
self.options.cache_resolved_placeholders = cache_placeholders
self.options.cache_resolved_udn_expressions = cache_udn_expressions
self.options.cache_filtered_placeholders = cache_filtered_placeholders
def analyzeBufferWrite(self, buffer_write):
self.visit_ast(buffer_write.expression, buffer_write)
# template functions output text - don't format them as strings
if (isinstance(buffer_write.expression, BinOpNode) and
buffer_write.expression.operator == '%' and
isinstance(buffer_write.expression.right, CallFunctionNode) and
isinstance(buffer_write.expression.right.expression,
TemplateMethodIdentifierNode)):
buffer_write.replace(
buffer_write.expression, buffer_write.expression.right)
def analyzeEchoNode(self, node):
for n in (node.test_expression, node.true_expression, node.false_expression):
if n:
self.visit_ast(n, node)
def analyzeFilterNode(self, filter_node):
self.visit_ast(filter_node.expression, filter_node)
if (isinstance(filter_node.expression, CallFunctionNode) and
isinstance(filter_node.expression.expression, TemplateMethodIdentifierNode)):
filter_node.parent.replace(filter_node, filter_node.expression)
return
if self.options.cache_filtered_placeholders:
# NOTE: you *must* analyze the node before putting it in a dict
# otherwise the definition of hash and equivalence will change and the
# node will not be found due to the sketchy custom hash function
scope = self.get_parent_scope(filter_node)
alias = scope.aliased_expression_map.get(filter_node)
if not alias:
alias_name = '_fph%08X' % unsigned_hash(filter_node.expression)
if alias_name in scope.alias_name_set:
print "duplicate alias_name", alias_name
print "scope", scope
print "scope.alias_name_set", scope.alias_name_set
print "scope.aliased_expression_map", scope.aliased_expression_map
return
alias = IdentifierNode(alias_name)
scope.alias_name_set.add(alias_name)
scope.aliased_expression_map[filter_node] = alias
assign_alias = AssignNode(alias, filter_node)
insert_block, insert_marker = self.get_insert_block_and_point(
filter_node)
insert_block.insert_before(insert_marker, assign_alias)
filter_node.parent.replace(filter_node, alias)
def analyzePlaceholderNode(self, placeholder):
if self.options.directly_access_defined_variables:
# when the analyzer finds a PlaceholderNode and generates a function
# call out of it, i annotate an IdentifierNode with the original
# placeholder name
local_var = IdentifierNode(placeholder.name)
cached_placeholder = IdentifierNode('_rph_%s' % local_var.name)
local_identifiers = self.get_local_identifiers(placeholder)
#print "local_identifiers", local_identifiers
if local_var in local_identifiers:
placeholder.parent.replace(placeholder, local_var)
elif placeholder.name in self.ast_root.template_methods:
placeholder.parent.replace(
placeholder, TemplateMethodIdentifierNode(
placeholder.name))
elif local_var in self.ast_root.global_identifiers:
placeholder.parent.replace(placeholder, local_var)
elif cached_placeholder in local_identifiers:
placeholder.parent.replace(placeholder, cached_placeholder)
elif local_var.name in builtin_names:
placeholder.parent.replace(placeholder,
IdentifierNode(local_var.name))
elif self.options.cache_resolved_placeholders:
scope = self.get_parent_scope(placeholder)
scope.alias_name_set.add(cached_placeholder.name)
scope.aliased_expression_map[placeholder] = cached_placeholder
insert_block, insert_marker = self.get_insert_block_and_point(
placeholder)
# note: this is sketchy enough that it requires some explanation
# basically, you need to visit the node for the parent function to
# get the memo that this value is aliased. unfortunately, the naive
# case of just calling visit_ast blows up since it tries to double
# analyze a certain set of nodes. you only really need to analyze
# that the assignment took place, then you can safely alias the
# actual function call. definitely sketchy, but it does seem to work
assign_rph = AssignNode(cached_placeholder, None)
cached_placeholder.parent = assign_rph
#print "optimize scope:", insert_block
#print "optimize marker:", insert_marker
insert_block.insert_before(
insert_marker, assign_rph)
self.visit_ast(assign_rph, insert_block)
assign_rph.right = placeholder
placeholder.parent.replace(placeholder, cached_placeholder)
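# quick illustration of the resolution order implemented above ('foo' is a
# made-up placeholder name, not from the original source):
#   1. a local identifier 'foo'          -> use it directly
#   2. a template method named 'foo'     -> TemplateMethodIdentifierNode
#   3. a module-level global 'foo'       -> use the identifier
#   4. an already cached '_rph_foo'      -> reuse the cached resolution
#   5. a Python builtin named 'foo'      -> use the builtin
#   6. otherwise, optionally alias the resolved value as '_rph_foo'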
def analyzePlaceholderSubstitutionNode(self, placeholder_substitution):
self.visit_ast(placeholder_substitution.expression,
placeholder_substitution)
# def alias_expression_in_function(self, function, expression):
# alias = function.aliased_expression_map.get(expression)
# if not alias:
# alias_name = '_%s' % (expression.name)
# if alias_name in function.alias_name_set:
# print "duplicate alias_name", alias_name
# return
# alias = IdentifierNode(alias_name)
# function.aliased_expression_map[expression] = alias
# assign_alias = AssignNode(alias, expression)
# parent_loop = self.get_parent_loop(node)
# # fixme: check to see if this expression is loop-invariant
# # must add a test case for this
# child_node_set = set(node.getChildNodes())
# #print "child_node_set", child_node_set
# #print "parent_loop", parent_loop, "parent", node.parent
# if (parent_loop is not None and
# not parent_loop.loop_variant_set.intersection(child_node_set)):
# #print "pull up loop invariant", assign_alias
# parent_loop.parent.insert_before(parent_loop, assign_alias)
# else:
# insert_block, insert_marker = self.get_insert_block_and_point(node)
# insert_block.insert_before(insert_marker, assign_alias)
# node.parent.replace(node, alias)
def analyzeGetAttrNode(self, node):
if not self.options.alias_invariants:
return
# fixme: only handle the trivial case for now
# simplifies the protocol for making up alias names
if type(node.expression) != IdentifierNode:
return
scope = self.get_parent_scope(node)
alias = scope.aliased_expression_map.get(node)
if not alias:
if node.expression.name[0] != '_':
alias_format = '_%s_%s'
else:
alias_format = '%s_%s'
alias_name = alias_format % (node.expression.name, node.name)
if alias_name in scope.alias_name_set:
print "duplicate alias_name", alias_name
print "scope", scope
print "scope.alias_name_set", scope.alias_name_set
print "scope.aliased_expression_map", scope.aliased_expression_map
return
alias = IdentifierNode(alias_name)
scope.alias_name_set.add(alias_name)
scope.aliased_expression_map[node] = alias
assign_alias = AssignNode(alias, node)
parent_loop = self.get_parent_loop(node)
# fixme: check to see if this expression is loop-invariant
# must add a test case for this
child_node_set = set(node.getChildNodes())
#print "child_node_set", child_node_set
#print "parent_loop", parent_loop, "parent", node.parent
if (self.options.inline_hoist_loop_invariant_aliases and
parent_loop is not None and
not parent_loop.loop_variant_set.intersection(child_node_set)):
# print "pull up loop invariant", assign_alias
parent_loop.parent.insert_before(parent_loop, assign_alias)
else:
insert_block, insert_marker = self.get_insert_block_and_point(node)
insert_block.insert_before(insert_marker, assign_alias)
node.parent.replace(node, alias)
def analyzeIfNode(self, if_node):
self.visit_ast(if_node.test_expression, if_node)
for n in if_node.child_nodes:
self.visit_ast(n, if_node)
for n in if_node.else_.child_nodes:
self.visit_ast(n, if_node.else_)
parent_scope = self.get_parent_scope(if_node)
# once both branches are optimized, walk the scopes for any variables that
# are defined in both places. those will be promoted to function scope
# since it is safe to assume that those will be defined
# fixme: this feels like a bit of a hack - but not sure how to do this
# correctly without reverting to slower performance for almost all calls to
# resolve_placeholder.
#
# it seems like certain optimizations need
# to be hoisted up to the parent scope. this is particularly the case when
# you are aliasing common functions that are likely to occur in the parent
# scope after the conditional block. you *need* to hoist those, or you will
# have errors when the branch fails. essentially you have to detect and
# hoist 'branch invariant' optimizations.
if if_node.else_.child_nodes:
if_scope_vars = set(if_node.scope.local_identifiers)
common_local_identifiers = list(if_scope_vars.intersection(
if_node.else_.scope.local_identifiers))
common_alias_name_set = if_node.scope.alias_name_set.intersection(
if_node.else_.scope.alias_name_set)
common_keys = (
set(if_node.scope.aliased_expression_map.iterkeys()) &
set(if_node.else_.scope.aliased_expression_map.iterkeys()))
common_aliased_expression_map = {}
for key in common_keys:
common_aliased_expression_map[key] = if_node.scope.aliased_expression_map[key]
parent_scope.local_identifiers.extend(common_local_identifiers)
parent_scope.alias_name_set.update(common_alias_name_set)
parent_scope.aliased_expression_map.update(common_aliased_expression_map)
else:
# we can try to hoist up invariants if they don't depend on the
# condition. this is somewhat hard to know, so the best way to do so
# without multiple passes of the optimizer is to hoist only things that
# were already defined in the parent scope - like _buffer, or things on
# self.
pass
def analyzeBinOpNode(self, n):
# if you are trying to use short-circuit behavior, these two optimizations
# can sabotage correct execution since the rhs may be hoisted above the
# IfNode and cause it to get executed prior to passing the lhs check.
if n.operator == 'and' or n.operator == 'or':
cache_placeholders = self.options.cache_resolved_placeholders
cache_udn_expressions = self.options.cache_resolved_udn_expressions
self.options.cache_resolved_placeholders = False
self.options.cache_resolved_udn_expressions = False
self.visit_ast(n.left, n)
self.visit_ast(n.right, n)
if n.operator == 'and' or n.operator == 'or':
self.options.cache_resolved_placeholders = cache_placeholders
self.options.cache_resolved_udn_expressions = cache_udn_expressions
analyzeBinOpExpressionNode = analyzeBinOpNode
def analyzeUnaryOpNode(self, op_node):
self.visit_ast(op_node.expression, op_node)
def get_local_identifiers(self, node):
local_identifiers = []
# search the parent scopes
# fixme: should this be recursive?
node = node.parent
while node is not None:
if isinstance(node, ForNode):
local_identifiers.extend(node.loop_variant_set)
local_identifiers.extend(node.scope.local_identifiers)
elif isinstance(node, IfNode):
local_identifiers.extend(node.scope.local_identifiers)
elif isinstance(node, ElseNode):
# in this case, we don't want to go to the parent node, which is the
# IfNode - we want to go to the parent 'scope'
local_identifiers.extend(node.scope.local_identifiers)
node = node.parent.parent
continue
elif isinstance(node, FunctionNode):
local_identifiers.extend(node.scope.local_identifiers)
break
node = node.parent
return frozenset(local_identifiers)
def analyzeGetUDNNode(self, node):
if not self.options.prefer_whole_udn_expressions:
self.visit_ast(node.expression, node)
if self.options.cache_resolved_udn_expressions:
cached_udn = IdentifierNode('_rudn_%s' % unsigned_hash(node))
local_identifiers = self.get_local_identifiers(node)
if cached_udn in local_identifiers:
node.parent.replace(node, cached_udn)
else:
insert_block, insert_marker = self.get_insert_block_and_point(
node)
# if there is a reassignment in the parent block, don't cache this
# in case it needs to be re-resolved.
# #set $text = $text.replace('\r\n', '\n')
# #set $text = $text.replace('\t', ' ')
# in this example, if you cache the udn expression text.replace,
# you have a problem - you won't ever use the new string created by
# the first call to replace
for child_node in insert_block.child_nodes:
if (isinstance(child_node, AssignNode) and
child_node.left == node.expression):
return
scope = self.get_parent_scope(node)
scope.alias_name_set.add(cached_udn.name)
scope.aliased_expression_map[node] = cached_udn
# note: this is sketchy enough that it requires some explanation
# basically, you need to visit the node for the parent function to
# get the memo that this value is aliased. unfortunately, the naive
# case of just calling visit_ast blows up since it tries to double
# analyze a certain set of nodes. you only really need to analyze
# that the assignment took place, then you can safely alias the
# actual function call. definitely sketchy, but it does seem to work
assign_rph = AssignNode(cached_udn, None)
cached_udn.parent = assign_rph
insert_block.insert_before(
insert_marker, assign_rph)
self.visit_ast(assign_rph, insert_block)
assign_rph.right = node
node.parent.replace(node, cached_udn)
elif self.options.prefer_whole_udn_expressions:
self.visit_ast(node.expression, node)
def analyzeSliceNode(self, pnode):
self.visit_ast(pnode.expression, pnode)
self.visit_ast(pnode.slice_expression, pnode)
# a second pass over the optimized tree to hoist invariant aliases to their
# parent blocks
class FinalPassAnalyzer(_BaseAnalyzer):
def analyzeTemplateNode(self, template):
self.visit_ast(template.main_function, template)
for n in template.child_nodes:
self.visit_ast(n, template)
def analyzeFunctionNode(self, function):
for n in function.child_nodes:
self.visit_ast(n, function)
def analyzeForNode(self, for_node):
for n in for_node.child_nodes:
self.visit_ast(n, for_node)
self.reanalyzeLoopNode(for_node)
def analyzeIfNode(self, if_node):
# depth-first
for n in if_node.child_nodes:
self.visit_ast(n, if_node)
for n in if_node.else_.child_nodes:
self.visit_ast(n, if_node.else_)
self.reanalyzeConditionalNode(if_node)
self.reanalyzeConditionalNode(if_node.else_)
def hoist(self, parent_node, parent_block, insertion_point, alias_node,
assign_alias_node):
# prune the implementation in the nested block
# print "prune", alias_node
# print "parent_block aliases", parent_block.scope.aliased_expression_map
parent_node.remove(assign_alias_node)
# if we've already hoisted an assignment, don't do it again
if alias_node not in parent_block.scope.hoisted_aliases:
# prune the original implementation in the current block and
# reinsert the alias before its first potential usage if it
# is needed earlier in the execution path.
# when a variable aliased in both the if and else blocks is promoted
# to the parent scope, the implementation isn't actually hoisted
# (should it be?) inline with the IfNode optimization, so we need to
# check whether the node is already here
if assign_alias_node in parent_block.child_nodes:
current_pos = parent_block.child_nodes.index(assign_alias_node)
# an else node's parent is the IfNode, which is the relevant
# node when searching for the insertion point
needed_pos = parent_block.child_nodes.index(insertion_point)
if needed_pos < current_pos:
parent_block.child_nodes.remove(assign_alias_node)
if isinstance(parent_node, ElseNode):
parent_block.insert_before(parent_node.parent, assign_alias_node)
else:
parent_block.insert_before(parent_node, assign_alias_node)
# print "insert_before", alias_node
else:
# still need to insert the alias
parent_block.insert_before(insertion_point, assign_alias_node)
parent_block.scope.hoisted_aliases.append(alias_node)
# NOTE: once we hoist an expression, we need to make sure that we no
# longer use this for dependencies in the current scope
del parent_node.scope.aliased_expression_map[alias_node]
parent_node.scope.alias_name_set.remove(assign_alias_node.left.name)
# FIXME: this is probably an indication of a bug or unnecessary
# difference between the caching of placeholders and filter expressions
if not isinstance(alias_node, FilterNode):
parent_node.scope.local_identifiers.remove(assign_alias_node.left)
template_function_re = re.compile('^[^#]*#(def|block)\s+(\w+)')
extends_re = re.compile('^#extends\s+([\.\w]+)')
template_extensions = ('.spt', '.tmpl')
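# rough illustration of what the two regexes above match (the template lines
# are made-up examples):
#   '#def render_header(title)'  -> template_function_re, group(2) == 'render_header'
#   '#block footer'              -> template_function_re, group(2) == 'footer'
#   '#extends base.layout'       -> extends_re, group(1) == 'base.layout'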
def _extend_to_real_path(base_dir, ex_path):
for ext in template_extensions:
rpath = os.path.join(base_dir, ex_path) + ext
if os.path.exists(rpath):
return rpath
raise Exception('could not find .spt or .tmpl file for %s during dependency check' % ex_path)
# scan an spt file for template functions it will output
def get_template_functions(base_dir, path):
template_function_names = set()
path = _extend_to_real_path(base_dir, path)
if not path:
return template_function_names
f = open(path)
for line in f:
match = template_function_re.match(line)
if match:
template_function_names.add(match.group(2))
continue
match = extends_re.match(line)
if match:
extend_name = match.group(1)
extend_path = extend_name.replace('.', '/')
template_function_names.update(
get_template_functions(base_dir, extend_path))
f.close()
return template_function_names
|
infin8/spitfire
|
spitfire/compiler/optimizer.py
|
Python
|
bsd-3-clause
| 32,284
|
[
"VisIt"
] |
cb822d14d09643510356d3ca1505a22dc2487b1ceb3c936e7387a461096cac51
|
"""Test markdown and text widgets."""
# pylint: disable=redefined-outer-name,unused-argument,invalid-name
import time
import pytest
from bowtie import App
from bowtie.html import Markdown
from bowtie.control import Textbox
from bowtie.tests.utils import reset_uuid, server_check
reset_uuid()
mark = Markdown(
'''
# top
## middle
[link]('hello.html')
'''
)
side = Markdown(
'''
# sideheader
'''
)
text = Textbox(area=True)
def write(txt):
"""Update markdown text."""
mark.do_text(txt)
@pytest.fixture
def markdown(build_reset, monkeypatch):
"""Create markdown and text widgets."""
app = App(__name__, sidebar=True)
app.add(mark)
app.add_sidebar(side)
app.add_sidebar(text)
app.subscribe(text.on_change)(write)
# pylint: disable=protected-access
app._build()
with server_check(app) as server:
yield server
def test_markdown(markdown, chrome_driver):
"""Test markdown and text widgets."""
chrome_driver.get('http://localhost:9991')
chrome_driver.implicitly_wait(5)
assert chrome_driver.title == 'Bowtie App'
txtctrl = chrome_driver.find_element_by_class_name('ant-input')
output = chrome_driver.find_element_by_xpath(
"//div[@style='grid-area: 1 / 2 / 2 / 3; position: relative;']"
)
assert 'top' in output.text
assert 'middle' in output.text
assert 'link' in output.text
txtctrl.send_keys('apple')
time.sleep(1)
assert 'apple' in output.text
txtctrl.send_keys('banana')
time.sleep(1)
assert 'apple' in output.text
assert 'banana' in output.text
|
jwkvam/bowtie
|
bowtie/tests/test_editor.py
|
Python
|
mit
| 1,599
|
[
"Bowtie"
] |
74426e8be1a8acf36051e0a24f472054f0a0def1c8de65ec418a6a433a76d00b
|
#
# Gramps - a GTK+/GNOME based genealogy program
#
# Copyright (C) 2000-2007 Donald N. Allingham
# Copyright (C) 2008 Brian G. Matherly
# Copyright (C) 2010 Nick Hall
# Copyright (C) 2011 Tim G L Lyons
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
#-------------------------------------------------------------------------
#
# Standard python modules
#
#-------------------------------------------------------------------------
import os
from io import StringIO
#-------------------------------------------------------------------------
#
# set up logging
#
#-------------------------------------------------------------------------
import logging
_LOG = logging.getLogger(".DisplayState")
#-------------------------------------------------------------------------
#
# GNOME python modules
#
#-------------------------------------------------------------------------
from gi.repository import Gdk
from gi.repository import Gtk
from gi.repository import GObject
from gi.repository import GLib
#-------------------------------------------------------------------------
#
# Gramps modules
#
#-------------------------------------------------------------------------
from gramps.gen.const import GRAMPS_LOCALE as glocale
_ = glocale.translation.gettext
from gramps.gen.utils.callback import Callback
from .utils import process_pending_events
from .views.navigationview import NavigationView
from gramps.gen.config import config
from gramps.gen.display.name import displayer as name_displayer
from .managedwindow import GrampsWindowManager
from gramps.gen.relationship import get_relationship_calculator
from .glade import Glade
from gramps.gen.utils.db import navigation_label
from .widgets.progressdialog import ProgressMonitor, GtkProgressDialog
DISABLED = -1
#-------------------------------------------------------------------------
#
# History manager
#
#-------------------------------------------------------------------------
class History(Callback):
""" History manages the objects of a certain type that have been viewed,
with the ability to go back or forward.
When an object is accessed, it should be pushed onto the History.
"""
__signals__ = {
'active-changed' : (str, ),
'mru-changed' : (list, )
}
def __init__(self, dbstate, nav_type):
Callback.__init__(self)
self.dbstate = dbstate
self.nav_type = nav_type
self.clear()
dbstate.connect('database-changed', self.connect_signals)
self.signal_map = {}
self.signal_map[nav_type.lower() + '-delete'] = self.handles_removed
self.signal_map[nav_type.lower() + '-rebuild'] = self.history_changed
def connect_signals(self, dbstate):
"""
Connects database signals when the database has changed.
"""
for sig in self.signal_map:
dbstate.connect(sig, self.signal_map[sig])
def clear(self):
"""
Clears the history, resetting the values back to their defaults
"""
self.history = []
self.mru = []
self.index = -1
self.lock = False
if self.dbstate.is_open() and self.nav_type == 'Person':
initial_person = self.dbstate.db.find_initial_person()
if initial_person:
self.push(initial_person.get_handle())
def remove(self, handle, old_id=None):
"""
Remove a handle from the history list
"""
if old_id:
del_id = old_id
else:
del_id = handle
history_count = self.history.count(del_id)
for c in range(history_count):
self.history.remove(del_id)
self.index -= 1
mhc = self.mru.count(del_id)
for c in range(mhc):
self.mru.remove(del_id)
self.emit('mru-changed', (self.mru, ))
if self.history:
newact = self.history[self.index]
if not isinstance(newact, str):
newact = str(newact)
self.emit('active-changed', (newact,))
def push(self, handle):
"""
Pushes the handle on the history stack
"""
self.prune()
if len(self.history) == 0 or handle != self.history[-1]:
self.history.append(handle)
if handle in self.mru:
self.mru.remove(handle)
self.mru.append(handle)
self.emit('mru-changed', (self.mru, ))
self.index += 1
if self.history:
newact = self.history[self.index]
if not isinstance(newact, str):
newact = str(newact)
self.emit('active-changed', (newact,))
def forward(self, step=1):
"""
Moves forward in the history list
"""
self.index += step
handle = self.history[self.index]
if handle in self.mru:
self.mru.remove(handle)
self.mru.append(handle)
self.emit('mru-changed', (self.mru, ))
newact = self.history[self.index]
if not isinstance(newact, str):
newact = str(newact)
self.emit('active-changed', (newact,))
return newact
def back(self, step=1):
"""
Moves backward in the history list
"""
self.index -= step
try:
handle = self.history[self.index]
if handle in self.mru:
self.mru.remove(handle)
self.mru.append(handle)
self.emit('mru-changed', (self.mru, ))
newact = self.history[self.index]
if not isinstance(newact, str):
newact = str(newact)
self.emit('active-changed', (newact,))
return newact
except IndexError:
return ""
def present(self):
"""
return the person handle that is now active in the history
"""
try :
if self.history :
return self.history[self.index]
else:
return ""
except IndexError:
return ""
def at_end(self):
"""
returns True if we are at the end of the history list
"""
return self.index+1 == len(self.history)
def at_front(self):
"""
returns True if we are at the front of the history list
"""
return self.index <= 0
def prune(self):
"""
Truncates the history list at the current object.
"""
if not self.at_end():
self.history = self.history[0:self.index+1]
def handles_removed(self, handle_list):
"""
Called in response to an object-delete signal.
Removes a list of handles from the history.
"""
for handle in handle_list:
self.remove(handle)
def history_changed(self):
"""
Called in response to an object-rebuild signal.
Objects in the history list may have been deleted.
"""
self.clear()
self.emit('mru-changed', (self.mru, ))
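# A rough usage sketch (the handles below are made-up strings and dbstate
# stands in for a real DbState instance):
#   history = History(dbstate, 'Person')
#   history.push('handle-1')
#   history.push('handle-2')
#   history.back()       # -> 'handle-1', emits 'active-changed'
#   history.forward()    # -> 'handle-2'
#   history.present()    # -> 'handle-2'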
#-------------------------------------------------------------------------
#
# Recent Docs Menu
#
#-------------------------------------------------------------------------
_RCT_TOP = '<ui><menubar name="MenuBar"><menu action="FileMenu"><menu action="OpenRecent">'
_RCT_BTM = '</menu></menu></menubar></ui>'
from gramps.gen.recentfiles import RecentFiles
class RecentDocsMenu:
def __init__(self, uistate, state, fileopen):
self.action_group = Gtk.ActionGroup(name='RecentFiles')
self.active = DISABLED
self.uistate = uistate
self.uimanager = uistate.uimanager
self.fileopen = fileopen
self.state = state
def load(self, item):
filename = item.get_path()
self.fileopen(filename)
def build(self):
buf = StringIO()
buf.write(_RCT_TOP)
gramps_rf = RecentFiles()
count = 0
if self.active != DISABLED:
self.uimanager.remove_ui(self.active)
self.uimanager.remove_action_group(self.action_group)
self.action_group = Gtk.ActionGroup(name='RecentFiles')
self.active = DISABLED
actions = []
rfiles = gramps_rf.gramps_recent_files
rfiles.sort(key=lambda x: x.get_time(), reverse=True)
new_menu = Gtk.Menu()
for item in rfiles:
try:
title = item.get_name()
filename = os.path.basename(item.get_path())
action_id = "RecentMenu%d" % count
buf.write('<menuitem action="%s"/>' % action_id)
actions.append((action_id, None, title, None, None,
make_callback(item, self.load)))
mitem = Gtk.MenuItem(label=title, use_underline=False)
mitem.connect('activate', make_callback(item, self.load))
mitem.show()
new_menu.append(mitem)
except RuntimeError:
# ignore no longer existing files
_LOG.info("Ignoring the RecentItem %s (%s)" % (title, filename))
count += 1
buf.write(_RCT_BTM)
self.action_group.add_actions(actions)
self.uimanager.insert_action_group(self.action_group, 1)
self.active = self.uimanager.add_ui_from_string(buf.getvalue())
self.uimanager.ensure_update()
buf.close()
if len(rfiles) > 0:
new_menu.show()
self.uistate.set_open_recent_menu(new_menu)
def make_callback(val, func):
return lambda x: func(val)
from .logger import RotateHandler
class WarnHandler(RotateHandler):
def __init__(self, capacity, button, parent=None):
RotateHandler.__init__(self, capacity)
self.setLevel(logging.WARN)
self.button = button
button.on_clicked(self.display)
self.timer = None
self.last_line = '-1'
self.parent = parent
def emit(self, record):
if self.timer is None:
#check every 3 minutes if warn button can disappear
self.timer = GLib.timeout_add(3*60*1000, self._check_clear)
RotateHandler.emit(self, record)
self.button.show()
def _check_clear(self):
buffer = self.get_buffer()
if len(buffer) > 0:
new_last_line = self.get_buffer()[-1]
if self.last_line == new_last_line:
#buffer has not changed for 3 minutes, let's clear it:
self._clear()
return False
else:
self.last_line = new_last_line
return True
else:
return False
def _clear(self):
self.button.hide()
self.set_capacity(self._capacity)
self.last_line = '-1'
self.timer = None
def display(self, obj):
obj.hide()
self.glade = Glade(toplevel='displaystate')
top = self.glade.toplevel
msg = self.glade.get_object('msg')
buf = msg.get_buffer()
for i in self.get_formatted_log():
buf.insert_at_cursor(i + '\n')
if self.parent:
top.set_transient_for(self.parent)
top.run()
top.destroy()
class DisplayState(Callback):
__signals__ = {
'filters-changed' : (str, ),
'filter-name-changed' : (str, str, str),
'nameformat-changed' : None,
'grampletbar-close-changed' : None,
'update-available' : (list, ),
'autobackup' : None,
}
#nav_type to message
NAV2MES = {
'Person': _("No active person"),
'Family': _("No active family"),
'Event': _("No active event"),
'Place': _("No active place"),
'Source': _("No active source"),
'Citation': _("No active citation"),
'Repository': _("No active repository"),
'Media': _("No active media"),
'Note': _("No active note"),
}
BUSY_CURSOR = Gdk.Cursor.new_for_display(Gdk.Display.get_default(),
Gdk.CursorType.WATCH)
def __init__(self, window, status, uimanager, viewmanager=None):
self.busy = False
self.cursor = None
self.viewmanager = viewmanager
self.uimanager = uimanager
self.progress_monitor = ProgressMonitor(GtkProgressDialog, ("", window))
self.window = window
Callback.__init__(self)
self.status = status
self.status_id = status.get_context_id('GRAMPS')
self.progress = status.get_progress_bar()
self.history_lookup = {}
self.gwm = GrampsWindowManager(uimanager)
self.widget = None
self.disprel_old = ''
self.disprel_defpers = None
self.disprel_active = None
self.set_relationship_class()
self.export = False
self.backup_timer = None
formatter = logging.Formatter('%(levelname)s %(name)s: %(message)s')
warnbtn = status.get_warning_button()
self.rhandler = WarnHandler(capacity=400, button=warnbtn, parent=window)
self.rhandler.setFormatter(formatter)
self.rhandler.setLevel(logging.WARNING)
self.log = logging.getLogger()
self.log.addHandler(self.rhandler)
# This call has been moved one level up,
# but this connection is still made!
# self.dbstate.connect('database-changed', self.db_changed)
def set_backup_timer(self):
"""
Set the backup timer.
"""
interval = config.get('database.autobackup')
if self.backup_timer is not None:
GLib.source_remove(self.backup_timer)
self.backup_timer = None
if interval == 1:
minutes = 15
elif interval == 2:
minutes = 30
elif interval == 3:
minutes = 60
if interval > 0:
self.backup_timer = GLib.timeout_add_seconds(
minutes*60, self.__emit_autobackup)
def __emit_autobackup(self):
"""
Emit an 'autobackup' signal.
"""
self.emit('autobackup')
return True
def screen_width(self):
"""
Return the width of the current screen.
"""
return self.window.get_screen().get_width()
def screen_height(self):
"""
Return the height of the current screen.
"""
return self.window.get_screen().get_height()
def clear_history(self):
"""
Clear all history objects.
"""
for history in list(self.history_lookup.values()):
history.clear()
def get_history(self, nav_type, nav_group=0):
"""
Return the history object for the given navigation type and group.
"""
return self.history_lookup.get((nav_type, nav_group))
def register(self, dbstate, nav_type, nav_group):
"""
Create a history and navigation object for the specified
navigation type and group, if they don't exist.
"""
if (nav_type, nav_group) not in self.history_lookup:
history = History(dbstate, nav_type)
self.history_lookup[(nav_type, nav_group)] = history
def get_active(self, nav_type, nav_group=0):
"""
Return the handle for the active object specified by the given
navigation type and group.
"""
history = self.get_history(nav_type, nav_group)
return history.present() if history else None
def set_active(self, handle, nav_type, nav_group=0):
"""
Set the active object for the specified navigation type and group to
the given handle.
"""
history = self.get_history(nav_type, nav_group)
if history:
history.push(handle)
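# Typical flow (sketch; 'person_handle' is a hypothetical handle string):
#   uistate.register(dbstate, 'Person', 0)
#   uistate.set_active(person_handle, 'Person')
#   uistate.get_active('Person')   # -> person_handle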
def set_sensitive(self, state):
self.window.set_sensitive(state)
def db_changed(self, db):
db.connect('long-op-start', self.progress_monitor.add_op)
self.clear_history()
def set_relationship_class(self):
"""method that rebinds the relationship to the current rel calc
Should be called after load or reload of plugins
"""
self.relationship = get_relationship_calculator(reinit=True)
def set_gendepth(self, value):
""" Set the generations we search back for showing relationships
on GRAMPS interface. Value must be integer > 0
This method will be used by the preference editor when user changes
the generations.
"""
self.relationship.set_depth(value)
def display_relationship(self, dbstate, active_handle):
""" Construct the relationship in order to show it in the statusbar
This can be a time intensive calculation, so we only want to do
it if persons are different than before.
Eg: select a person, then double click, will result in calling
three times to construct build the statusbar. We only want
to obtain relationship once!
This means the relationship part of statusbar only changes on
change of row.
"""
self.relationship.connect_db_signals(dbstate)
default_person = dbstate.db.get_default_person()
if default_person is None or active_handle is None:
return ''
if default_person.handle == self.disprel_defpers and \
active_handle == self.disprel_active :
return self.disprel_old
active = dbstate.db.get_person_from_handle(active_handle)
if active is None:
# During merger this method can be called at a time when treemodel
# and database are not in sync, resulting in active_handle != None,
# but active == None; see bug 5290 for the details.
return ''
name = self.relationship.get_one_relationship(
dbstate.db, default_person, active)
#store present call data
self.disprel_old = name
self.disprel_defpers = default_person.handle
self.disprel_active = active_handle
if name:
return name
else:
return ""
def set_export_mode(self, value):
self.set_busy_cursor(value)
if value == self.export:
return
else:
self.export = value
def get_export_mode(self):
return self.export
def set_busy_cursor(self, value):
if value == self.busy:
return
else:
self.busy = value
if self.window.get_window():
if value:
self.cursor = self.window.get_window().get_cursor()
self.window.get_window().set_cursor(self.BUSY_CURSOR)
else:
self.window.get_window().set_cursor(self.cursor)
if self.window.get_window().is_visible():
#avoid critical gdk error:
#Gdk-CRITICAL **: gdk_error_trap_pop_internal: assertion `trap != NULL' failed
#only process events if window is actually visible
process_pending_events()
def set_open_widget(self, widget):
self.widget = widget
def set_open_recent_menu(self, menu):
self.widget.set_menu(menu)
def push_message(self, dbstate, text):
self.status_text(text)
GLib.timeout_add(5000, self.modify_statusbar, dbstate)
def show_filter_results(self, dbstate, matched, total):
#nav_type = self.viewmanager.active_page.navigation_type()
#text = ((_("%(nav_type)s View") % {"nav_type": _(nav_type)}) +
text = (self.viewmanager.active_page.get_title() +
(": %d/%d" % (matched, total)))
self.status.set_filter(text)
def clear_filter_results(self):
self.status.clear_filter()
def modify_statusbar(self, dbstate, active=None):
view = self.viewmanager.active_page
if not isinstance(view, NavigationView) or dbstate is None:
return
nav_type = view.navigation_type()
active_handle = self.get_active(nav_type, view.navigation_group())
self.status.pop(self.status_id)
if active_handle:
name, obj = navigation_label(dbstate.db, nav_type, active_handle)
else:
name = _('No active object')
# Append relationship to default person if functionality is enabled.
if nav_type == 'Person' and active_handle \
and config.get('interface.statusbar') > 1:
if active_handle != dbstate.db.get_default_handle():
msg = self.display_relationship(dbstate, active_handle)
if msg:
name = '%s (%s)' % (name, msg.strip())
if not name:
name = self.NAV2MES[nav_type]
self.status.push(self.status_id, name)
process_pending_events()
def pulse_progressbar(self, value, text=None):
self.progress.set_fraction(min(value/100.0, 1.0))
if text:
self.progress.set_text("%s: %d%%" % (text, value))
else:
self.progress.set_text("%d%%" % value)
process_pending_events()
def status_text(self, text):
self.status.pop(self.status_id)
self.status.push(self.status_id, text)
process_pending_events()
|
beernarrd/gramps
|
gramps/gui/displaystate.py
|
Python
|
gpl-2.0
| 22,045
|
[
"Brian"
] |
31bf44aedd8c2f09bb232095e0279fb3d35158edbb64fe1f1a76e26513d3c391
|
# Copyright 2008 Brian Boyer, Ryan Mark, Angela Nitzke, Joshua Pollock,
# Stuart Tiffen, Kayla Webley and the Medill School of Journalism, Northwestern
# University.
#
# This file is part of Crunchberry Pie.
#
# Crunchberry Pie is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Crunchberry Pie is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
#You should have received a copy of the GNU General Public License
#along with Crunchberry Pie. If not, see <http://www.gnu.org/licenses/>.
from django.contrib.auth.models import User
from django import template
from pie.quips.models import Quip, QuipForm
from datetime import datetime, timedelta
register = template.Library()
@register.inclusion_tag('quips/quip.html', takes_context=True)
def show_quip(context, quip):
context.update({'quips': quip})
return context
@register.inclusion_tag('quips/quip_form.html', takes_context=True)
def show_quip_form(context, article=None, quip=None, hidden=False):
if not article:
article = context['article']
f = QuipForm(instance=Quip(user=context['user'],article=article))
style = ''
if hidden:
style = 'display:none;'
context.update({'quip_form':f,'style':style})
return context
@register.inclusion_tag('quips/quips.html', takes_context=True)
def show_article_quips(context, article=None):
"""Show quips in reverse chrono for this article"""
if not article:
article = context['article']
quips = Quip.objects.filter(article=article).order_by('-created')
context.update({'quips':quips})
return context
@register.inclusion_tag('quips/quips.html', takes_context=True)
def show_quips(context):
#limit to top ten
quips = Quip.objects.all().order_by('-created')[0:10]
context.update({'quips':quips,'show_headline':True})
return context
@register.inclusion_tag('quips/quip_script.html')
def show_quips_script(limit=0,show_headline='false',article=None):
if article != None:
artid=article.id
else:
artid=0
return {'limit':limit,'show_headline':show_headline,'article':artid}
from django.utils.safestring import mark_safe
@register.filter
def link_user(value):
usernames = [word for word in value.split(' ') if word.startswith('@')]
for username in usernames:
value = value.replace(username, "<a href=\"/quips/user/%s\">%s</a>" % (username.lstrip('@').lower(), username))
return mark_safe(value)
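# Example behaviour (illustrative input): link_user('thanks @Alice') returns
# 'thanks <a href="/quips/user/alice">@Alice</a>'. Note that words are split
# on single spaces, so trailing punctuation such as '@Alice!' stays inside
# both the link target and the link text.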
|
brianboyer/newsmixer
|
pie/quips/templatetags/quips.py
|
Python
|
gpl-3.0
| 2,797
|
[
"Brian"
] |
da862740545ef303bca9714fcac0afacbe6e4c32e3a9da703e97c104d6917c2e
|
from collections import namedtuple
from pysb import Monomer
from indra.sources.indra_db_rest.api import get_statements_by_hash
from indra.statements import *
from indra.assemblers.english.assembler import _assemble_agent_str, \
SentenceBuilder
from indra.assemblers.pysb.assembler import parse_identifiers_url
from indra.assemblers.pysb.common import _n
class RefEdge(object):
"""Refinement edge representing ontological relationship between nodes.
Parameters
----------
source : indra.statements.Agent
Source agent of the edge.
target : indra.statements.Agent
Target agent of the edge.
relation : str
'is_ref' or 'has_ref' depending on the direction.
"""
def __init__(self, source, relation, target):
self.source = source
self.relation = relation
self.target = target
@classmethod
def _from_json(cls, json_tuple):
source = Agent._from_json(json_tuple[0])
relation = json_tuple[1]
target = Agent._from_json(json_tuple[2])
return RefEdge(source, relation, target)
def to_english(self):
source_str = _assemble_agent_str(self.source)
target_str = _assemble_agent_str(self.target)
sb = SentenceBuilder()
if self.relation == 'is_ref':
rel_str = ' is a refinement of '
elif self.relation == 'has_ref':
rel_str = ' has a refinement '
sb.append_as_sentence([source_str, rel_str, target_str])
sb.make_sentence()
return sb.sentence
def __repr__(self):
return str(self)
def __str__(self):
return 'RefEdge(%s %s %s)' % (self.source, self.relation, self.target)
def __eq__(self, other):
return (self.source.matches(other.source) and
self.target.matches(other.target) and
self.relation == other.relation)
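# Rough example (agent names are made up): for
#   RefEdge(Agent('BRAF'), 'is_ref', Agent('RAF'))
# to_english() produces a sentence along the lines of
# 'BRAF is a refinement of RAF.'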
def stmts_from_pysb_path(path, model, stmts):
"""Return source Statements corresponding to a path in a model.
Parameters
----------
path : list[tuple[str, int]]
A list of tuples where the first element of the tuple is the
name of a rule, and the second is the associated polarity along
a path.
model : pysb.core.Model
A PySB model which contains the rules along the path.
stmts : list[indra.statements.Statement]
A list of INDRA Statements from which the model was assembled.
Returns
-------
path_stmts : list[indra.statements.Statement]
The Statements from which the rules along the path were obtained.
"""
path_stmts = []
for step in path:
# Refinement edge
if len(step) == 3:
edge = RefEdge._from_json(step)
path_stmts.append(edge)
# Regular rule
elif len(step) == 2:
path_rule, sign = step
for rule in model.rules:
if rule.name == path_rule:
stmt = stmt_from_rule(path_rule, model, stmts)
assert stmt is not None
path_stmts.append(stmt)
return path_stmts
def stmts_from_indranet_path(path, model, signed, from_db=True, stmts=None):
"""Return source Statements corresponding to a path in an IndraNet model
(found by SignedGraphModelChecker or UnsignedGraphModelChecker).
Parameters
----------
path : list[tuple[str, int]]
A list of tuples where the first element of the tuple is the
name of an agent, and the second is the associated polarity along
a path.
model : nx.Digraph or nx.MultiDiGraph
An IndraNet model flattened into an unsigned DiGraph or signed
MultiDiGraph.
signed : bool
Whether the model and path are signed.
from_db : bool
If True, uses statement hashes to query the database. Otherwise, looks
for path statements in provided stmts.
stmts : Optional[list[indra.statements.Statement]]
A list of INDRA Statements from which the model was assembled. Required
if from_db is set to False.
Returns
-------
path_stmts : list[[indra.statements.Statement]]
A list of lists of INDRA statements explaining the path (each inner
list corresponds to one step in the path because the flattened model can
have multiple statements per edge).
"""
steps = []
for i in range(len(path[:-1])):
source = path[i]
target = path[i+1]
if len(source) == 3:
edge = RefEdge._from_json(source)
steps.append([edge])
continue
elif len(target) == 3:
edge = RefEdge._from_json(target)
steps.append([edge])
continue
if signed:
if source[1] == target[1]:
sign = 0
else:
sign = 1
stmt_data = model[source[0]][target[0]][sign]['statements']
else:
stmt_data = model[source[0]][target[0]]['statements']
hashes = [stmt['stmt_hash'] for stmt in stmt_data]
if from_db:
p = get_statements_by_hash(hashes)
statements = p.statements
else:
statements = [
stmt for stmt in stmts if stmt.get_hash() in hashes]
steps.append(statements)
return steps
PybelEdge = namedtuple(
'PybelEdge', ['source', 'target', 'relation', 'reverse'])
def pybel_edge_to_english(pybel_edge):
source_str = _assemble_agent_str(pybel_edge.source)
target_str = _assemble_agent_str(pybel_edge.target)
sb = SentenceBuilder()
if pybel_edge.relation == 'partOf':
if pybel_edge.reverse:
rel_str = ' has a component '
else:
rel_str = ' is a part of '
elif pybel_edge.relation == 'hasVariant':
if pybel_edge.reverse:
rel_str = ' is a variant of '
else:
rel_str = ' has a variant '
sb.append_as_sentence([source_str, rel_str, target_str])
sb.make_sentence()
return sb.sentence
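# Rough example (made-up agents): for
#   PybelEdge(Agent('X'), Agent('Y'), 'partOf', reverse=False)
# pybel_edge_to_english() yields something like 'X is a part of Y.'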
def stmts_from_pybel_path(path, model, from_db=True, stmts=None):
"""Return source Statements corresponding to a path in a PyBEL model.
Parameters
----------
path : list[tuple[str, int]]
A list of tuples where the first element of the tuple is the
name of an agent, and the second is the associated polarity along
a path.
model : pybel.BELGraph
A PyBEL BELGraph model.
from_db : bool
If True, uses statement hashes to query the database. Otherwise, looks
for path statements in provided stmts.
stmts : Optional[list[indra.statements.Statement]]
A list of INDRA Statements from which the model was assembled. Required
if from_db is set to False.
Returns
-------
path_stmts : list[[indra.statements.Statement]]
A list of lists of INDRA statements explaining the path (each inner
list corresponds to one step in the path because a PyBEL model can have
multiple edges representing multiple statements and evidences between
two nodes).
"""
import pybel.constants as pc
from indra.sources.bel.processor import get_agent
steps = []
for i in range(len(path[:-1])):
source = path[i]
target = path[i+1]
if len(source) == 3:
edge = RefEdge._from_json(source)
steps.append([edge])
continue
elif len(target) == 3:
edge = RefEdge._from_json(target)
steps.append([edge])
continue
# Check if the signs of source and target nodes are the same
positive = (source[1] == target[1])
reverse = False
try:
all_edges = model[source[0]][target[0]]
except KeyError:
# May be a symmetric edge
all_edges = model[target[0]][source[0]]
reverse = True
# Only keep the edges with correct sign or non-causal
edges = {}
key = 0
for edge_data in all_edges.values():
if edge_data['relation'] not in pc.CAUSAL_RELATIONS:
edges[key] = edge_data
key += 1
if positive and \
edge_data['relation'] in pc.CAUSAL_INCREASE_RELATIONS:
edges[key] = edge_data
key += 1
elif not positive and \
edge_data['relation'] in pc.CAUSAL_DECREASE_RELATIONS:
edges[key] = edge_data
key += 1
else:
continue
hashes = set()
for j in range(len(edges)):
try:
hashes.add(list(edges[j]['annotations']['stmt_hash'])[0])
# partOf and hasVariant edges don't have hashes
except KeyError:
continue
# If we didn't get any hashes, we can get a PybelEdge object from
# partOf and hasVariant edges
if not hashes:
statements = []
# Can't get statements without hash from db
for edge_v in edges.values():
rel = edge_v['relation']
edge = PybelEdge(get_agent(source[0]),
get_agent(target[0]), rel, reverse)
statements.append(edge)
# Stop if we have an edge to avoid duplicates
if len(statements) > 0:
break
# If we have hashes, retrieve statements from them
else:
if from_db:
p = get_statements_by_hash(list(hashes))
statements = p.statements
else:
statements = [
stmt for stmt in stmts if stmt.get_hash() in hashes]
steps.append(statements)
return steps
def stmt_from_rule(rule_name, model, stmts):
"""Return the source INDRA Statement corresponding to a rule in a model.
Parameters
----------
rule_name : str
The name of a rule in the given PySB model.
model : pysb.core.Model
A PySB model which contains the given rule.
stmts : list[indra.statements.Statement]
A list of INDRA Statements from which the model was assembled.
Returns
-------
stmt : indra.statements.Statement
The Statement from which the given rule in the model was obtained.
"""
stmt_uuid = None
for ann in model.annotations:
if ann.subject == rule_name:
if ann.predicate == 'from_indra_statement':
stmt_uuid = ann.object
break
if stmt_uuid:
for stmt in stmts:
if stmt.uuid == stmt_uuid:
return stmt
def agent_from_obs(obs_name, model):
db_refs = {}
ag_name = None
for ann in model.annotations:
if ann.subject == obs_name:
if ann.predicate == 'from_indra_agent':
ag_name = ann.object
break
if ag_name:
mon_name = _n(ag_name)
for ann in model.annotations:
if isinstance(ann.subject, Monomer) and \
ann.subject.name == mon_name and ann.predicate == 'is':
db_name, db_ref = parse_identifiers_url(ann.object)
db_refs[db_name] = db_ref
ag = Agent(ag_name, db_refs=db_refs)
return ag
|
johnbachman/indra
|
indra/explanation/reporting.py
|
Python
|
bsd-2-clause
| 11,309
|
[
"Pybel"
] |
0cd2c18add3d44a6f30b4158450d7c502a73fb9a62a66b850471500783499abd
|
# -*- coding:utf-8 mode:python; tab-width:4; indent-tabs-mode:nil; py-indent-offset:4 -*-
import sys
import argparse
import os
import glob
import stat
import pprint
try:
from ..sharedutilities import Utility
except ValueError:
if __package__ is None:
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from sharedutilities import Utility
class Verifier(Utility):
def __init__(self):
self.processed = []
def generate_test_scripts(self, tests, directory,
top, target,
serial="runserial",
parallel="runmpi"):
"""Generate test battery scripts for serial and MPI execution from
the given list of tests to be included. Each file is a C shell
script similar to those bundled with NWChem QA but including only
a subset of tests, ordered by increasing test cost.
:param tests: tests to include in output, each entry (cost, name)
:type tests : list
:param directory: location of QA directory, where scripts will go
:type directory : str
:param top: NWCHEM_TOP environment variable
:type top : str
:param target: NWCHEM_TARGET environment variable
:type target : str
:param serial: name of serial-execution test battery script to generate
:type serial : str
:param parallel: name of MPI-execution test battery script to generate
:type parallel : str
"""
head = """#!/bin/csh -f
#
setenv NWCHEM_TOP {0}
setenv NWCHEM_TARGET {1}
setenv NWCHEM_EXECUTABLE `which nwchem`
set np = 1
if ($1 !="") then
set np = $1
endif
""".format(top, target)
sname = "{0}/{1}".format(directory, serial)
pname = "{0}/{1}".format(directory, parallel)
s = open(sname, "w")
p = open(pname, "w")
s.write(head)
p.write(head)
count = 0
for cost, test in tests:
count += 1
tname = test.replace(".out", "")
sline = "./runtests.unix procs $np {0} # estimated cost {1} number {2}\n".format(tname, cost, count)
pline = sline.replace("runtests.unix", "runtests.mpi.unix")
s.write(sline)
p.write(pline)
s.close()
p.close()
for name in (sname, pname):
st = os.stat(name)
os.chmod(name, st.st_mode | stat.S_IEXEC)
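# Usage sketch mirroring main() below (the paths are hypothetical):
#   v = Verifier()
#   fast = v.find_fast_tests('/opt/science/nwchem/QA', core_seconds=100)
#   fast.sort()
#   v.generate_test_scripts(fast, '/opt/science/nwchem/QA',
#                           top='/opt/science/nwchem/current', target='LINUX64')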
def find_ok_tests(self, qa_root, glob_pattern="doqmtest*"):
"""Find test cases that already appear in QA test scripts
included with NWChem. There are more test cases in
the tests/ directory than those appearing in the bundled QA
scripts, and those extra tests are often broken.
N.B.: even many of the tests that do appear in bundled scripts
are unreliable. Now only accepting tests that are in the doNightly
script and that are not commented out.
:param qa_root: path to the QA root directory
:type qa_root : str
:param glob_pattern: pattern to match test battery file names
:type glob_pattern : str
:return: names of test cases included in standard test batteries
:rtype : set
"""
cases = set()
full_pattern = "{0}/{1}".format(os.path.abspath(qa_root), glob_pattern)
for testfile in glob.glob(full_pattern):
with open(testfile) as f:
for line in f:
line = line.strip()
if line.startswith("#"):
continue
if "runtests" in line:
tail = line.split("runtest", 1)[-1]
#This will get test case names plus some garbage
#like "s.unix". The garbage doesn't matter because
#the names are only used to exclude bad tests.
for test in tail.split():
cases.add(test)
return cases
def find_fast_tests(self, qa_root, core_seconds):
"""Find test cases that, according to the reference output files,
executed in less than (n_cores * seconds) core_seconds. Some care
must be taken to exclude broken tests. Tests are assumed not-broken
only if they already appear in one of the QA do* scripts.
:param qa_root: path to the QA root directory
:type qa_root : str
:param core_seconds: maximum number of core-seconds to include test
:type core_seconds : int
:return: tests that appear to run sufficiently fast
:rtype : list
"""
ok_tests = self.find_ok_tests(qa_root)
tests = {}
for root, dirs, files in os.walk(os.path.abspath(qa_root)):
path = root.split(os.sep)
#process only test reference outputs
for f in files:
if "tests" in path and f.endswith(".out"):
fullfile = os.path.sep.join(path + [f])
nproc = 0
walltime = 0
with open(fullfile, "r") as infile:
for line in infile:
line = line.strip()
if line.startswith("nproc"):
nproc = int(line.split()[-1])
elif line.startswith("Total times"):
s = line.split()[-1]
walltime = float(s[:-1])
if nproc and walltime:
cost = nproc * walltime
entry = (cost, f)
#last part of path is test directory,
#we"re summing up costs of all tests
#in one directory
try:
tests[path[-1]].add(entry)
except KeyError:
tests[path[-1]] = {entry}
approved = []
for k, v in tests.items():
#skip tests that are too unreliable for standard QA scripts
if k not in ok_tests:
continue
#test runner expects at least one file matching directory name
files = set([x[1] for x in v])
trial = "{0}.out".format(k)
if trial not in files:
continue
cost = sum([x[0] for x in v])
if cost < core_seconds:
approved.append((cost, k))
return approved
def parse_qa_log(self, logfile):
"""Extract tests run from a NWChem QA log file by fetching all
"Running"/"verifying" pairs. Mark whether each test was OK or
failed according to the log file. Failed tests can be examined
more closely later.
:param logfile: name of QA log file to open
:type logfile : str
"""
refs_path = os.path.dirname(os.path.abspath(logfile)) + "/testoutputs"
with open(logfile, "r") as f:
for line in f.readlines():
line = line.strip()
if line.startswith("Running"):
test = line.split()[-1]
elif line.startswith("verifying"):
status = line.split()[-1]
test_name = test.split("/")[-1]
ref_file = "{0}/{1}.ok.out.nwparse".format(refs_path, test_name)
trial_file = "{0}/{1}.out.nwparse".format(refs_path, test_name)
entry = {"name" : test_name,
"basic_status" : status,
"reference" : ref_file,
"trial" : trial_file}
self.processed.append(entry)
def score_mismatch(self, reference, trial):
"""Score a trial output against a reference output. Reference and
trial are both lines of text extracted from .nwparse files produced
by a failing QA test.
The final score is a tuple consisting of
(gross_mismatch, numeric_absolute_difference)
Higher scores indicate worse deviations. Gross mismatches are worse
deviations than numerical differences.
If lines match but the numeric values in them do not, the absolute
difference of values is added to the numeric_absolute_difference.
If lines do not even contain numeric values in corresponding positions,
or one group has more lines than the other, the mismatches add to
the gross_mismatch count.
:param reference: mismatching lines from reference .nwparse file
:type reference : list
:param trial: mismatching lines from current QA trial .nwparse file
:type trial : list
:return: mismatch score, higher scores indicate worse mismatches
:rtype : tuple
"""
gross_mismatches = 0
abs_numeric_diff = 0.0
nlines = max(len(reference), len(trial))
for k in range(nlines):
try:
r = self.numericize(reference[k])
t = self.numericize(trial[k])
ncolumns = max(len(r), len(t))
for j in range(ncolumns):
try:
#if at least one line has a number in current column,
#try to get the numeric difference
if float in [type(r[j]), type(t[j])]:
diff = abs(r[j] - t[j])
abs_numeric_diff += diff
#lines don't match in length or don't have numbers in
#the same position - gross mismatch
except (IndexError, TypeError):
gross_mismatches += 1
except IndexError:
gross_mismatches += 1
score = (gross_mismatches, abs_numeric_diff)
return score
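# Illustrative scoring (assuming numericize() tokenizes a line and converts
# numeric fields to floats, as the loop above expects):
#   score_mismatch(['energy -1.50'], ['energy -1.70'])  -> roughly (0, 0.2), numeric drift only
#   score_mismatch(['energy -1.50'], [])                -> (1, 0.0), gross mismatch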
def compare_outputs(self, reference_file, trial_file, attributes = None):
"""Parse reference and trial files and compare their contents.
:param reference_file: a known-good NWChem log file for a calculation
:type reference_file : str
:param trial_file: an NWChem log file to compare against the reference
:type trial_file : str
"""
cmd = "diff {0} {1}".format(reference_file, trial_file)
output, returncode = self.execute_local(cmd)
diff = output.split("\n")
# lines starting with < are differences in reference file,
# lines starting with > are differences in trial file,
# group differing lines and make them easy to compare
r = [x[1:].strip() for x in diff if x.startswith("<")]
t = [x[1:].strip() for x in diff if x.startswith(">")]
if len(r) != len(t):
#trial just has extra info not in ref, e.g. "Total SCS-MP2 energy"
if not r:
mismatch_score = (0, 0.0)
#ref just has extra info not in trial, perhaps from extra iterations
elif not t:
base = os.path.basename(reference_file)
#import ipdb; ipdb.set_trace()
mismatch_score = (0, 0.0)
else:
mismatch_score = self.score_mismatch(r, t)
else:
mismatch_score = self.score_mismatch(r, t)
return mismatch_score
def compare_all_failures(self):
"""Perform a more detailed analysis of cases that did not pass the
NWChem QA procedures.
"""
failures = []
for entry in self.processed:
if entry["basic_status"] == "failed":
score = self.compare_outputs(entry["reference"],
entry["trial"])
if score != (0, 0.0):
failed = entry.copy()
failed["score"] = score
failures.append(failed)
#sort from minor to major failures
decorated = [(f["score"], f) for f in failures]
decorated.sort()
failures = [f[1] for f in decorated]
pprint.pprint(failures)
passed = len(self.processed) - len(failures)
print ("Total {0} passed {1} failed {2}".format(len(self.processed), passed, len(failures)))
def main(args):
v = Verifier()
if args.test_root:
approved = v.find_fast_tests(args.test_root, args.cost)
approved.sort()
v.generate_test_scripts(approved, os.path.abspath(args.test_root),
args.top, args.target)
else:
v.parse_qa_log(args.logfile)
v.compare_all_failures()
if __name__ == "__main__":
parser = argparse.ArgumentParser(formatter_class=argparse.RawDescriptionHelpFormatter, description="Generate a NWChem test battery with serial and MPI execution scripts, or check the test results from a completed test battery.", epilog="Example: generate a fast test battery where each test has cost no greater than 100:\n qacheck.py -c 100 -t /opt/science/nwchem/Nwchem-6.3.revision25564-src.2014-05-03/QA\nExample: run tests and then check them: \n cd /opt/science/nwchem/Nwchem-6.3.revision25564-src.2014-05-03/QA\n ./runserial | tee quick.log\n qacheck.py -l quick.log")
parser.add_argument("-c", "--cost", help="Maximum cost (wall clock time multiplied by number of processors) of tests to include in test battery.", type=int,default=1000)
parser.add_argument("--top", help="NWCHEM_TOP location of tree where NWChem was built/installed.", default="/opt/science/nwchem/current")
parser.add_argument("--target", help="NWCHEM_TARGET machine target that NWChem was built for.", default="LINUX64")
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument("-t", "--test-root", help="Root directory location of NWChem QA, required to generate test battery, expressed either absolutely or relatively, e.g. \".\" or \"/opt/science/nwchem/Nwchem-6.3.revision25564-src.2014-05-03\".")
group.add_argument("-l", "--logfile", help="NWChem QA log file from a test battery run, required to check QA output.")
args = parser.parse_args()
main(args)
|
mattbernst/polyhartree
|
chemnw/qacheck.py
|
Python
|
gpl-3.0
| 14,526
|
[
"NWChem"
] |
220d670313732e610263c2ed0320de2420e509a8957f650c5fca067fe54b42cb
|
#!/usr/bin/env python3
#pylint: disable=missing-docstring
#* This file is part of the MOOSE framework
#* https://www.mooseframework.org
#*
#* All rights reserved, see COPYRIGHT for full restrictions
#* https://github.com/idaholab/moose/blob/master/COPYRIGHT
#*
#* Licensed under LGPL 2.1, please see LICENSE for details
#* https://www.gnu.org/licenses/lgpl-2.1.html
import chigger
reader = chigger.exodus.ExodusReader('../input/mug_blocks_out.e')
group = chigger.base.ResultGroup()
group.add(chigger.exodus.ExodusResult, reader, variable='diffused', cmap='viridis', block=['76'])
group.add(chigger.exodus.ExodusResult, reader, variable='convected', cmap='jet', block=['1'])
window = chigger.RenderWindow(group, size=[300,300], test=True)
window.write('result_group.png')
window.start()
|
nuclear-wizard/moose
|
python/chigger/tests/result_group/result_group.py
|
Python
|
lgpl-2.1
| 788
|
[
"MOOSE"
] |
d9894139555db18feb7958f1e560ea7d79e7f6fcce65fee461a0f6e0d4ea79f0
|
#!/usr/bin/env python3
# Project : From geodynamic to Seismic observations in the Earth's inner core
# Author : Marine Lasbleis
# Seismic properties of material as function of ka (adimensional frequency): P and S wave velocity and attenuation.
# Please refer to Calvet and Margerin 2008 (figures 3 and 4)
from __future__ import division
from __future__ import absolute_import
import numpy as np
import matplotlib.pyplot as plt # for figures
import warnings
import scipy.io as sio
import pkg_resources # to use data files of the package
# IMPORTANT : all the functions here use dimensional quantities in unit SI !
DATA_PATH = pkg_resources.resource_filename(
'GrowYourIC', 'data/CM2008_data.mat')
print(DATA_PATH)
def export_matlab_data(name_data, file_name=DATA_PATH):
""" Values for polynomial fit of the velocity and attenuation are included in data.mat. Scipy export them as a dictionary."""
return sio.loadmat(file_name)[name_data]
def domain_size(age):
""" domain size of a crystal in the inner core.
age is dimensionnal (in seconds)
Value of M \cdot \gamma = 8.10^{10} m2/s from Bergman 2010 (see Geballe 2013, paragraph 12)
output is the domain size, given in meters.
"""
Mgamma = 8.e-10 # m^2/s grain boundary mobility * surface energy
# exponent: 0.5 for pure materials, 1/3 to 1/4 for alloys
return np.sqrt(Mgamma * age)
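# Added illustration (not part of the original module): for an assumed
# coarsening time of ~1 Gyr (about 3.15e16 s), domain_size gives
# sqrt(8e-10 * 3.15e16) ~= 5.0e3 m, i.e. kilometre-scale domains. The age value
# is an arbitrary example, not a result from the cited papers.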
def adimensional_frequency(size, v=11030., freq=1.):
""" k_0 a = domain_size * 2 pi freq / v """
return size * 2. * np.pi * freq / v
def convert_CM2008_velocity(kR, poly):
""" Data from Fig 3 and 4 of Calvet and Margerin 2008 have been stored as polynomial values. This function transform them to a function:
for example, with poly = Vp_poly,
give Vp as function of kR (adimensional frequency)"""
# if any(min(abs(np.log10(kR)+2),abs(np.log10(kR-1))) < 1e-4):
# warnings.warn('heaviside fn may be computing 0.5')
velocity = heaviside(kR - 10.**(-2.)) * heaviside(10. - kR) * np.polyval(poly, np.log10(kR)) +\
heaviside(10.**(-2) - kR) * np.polyval(poly, -2.) +\
heaviside(kR - 10.) * np.polyval(poly, 1.)
return velocity
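# Reading of convert_CM2008_velocity (added note): the polynomial fit is only
# evaluated for 1e-2 <= kR <= 10; outside that range the value is clamped to the
# fit at the nearest bound, e.g. convert_CM2008_velocity(1e-5, poly) equals
# np.polyval(poly, -2.).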
def convert_CM2008_attenuation(kR, poly):
""" Data from Fig 3 and 4 of Calvet and Margerin 2008 have been stored as polynomial values. This function transform them to a function:
for example, with poly = Qp_poly,
give attenuation as function of kR (adimensional frequency)"""
# if min(abs(np.log10(kR)+2),abs(np.log10(kR-1))) < 1e-4:
# warnings.warn('heaviside fn may be computing 0.5')
    attenuation = heaviside(kR - 10**-1) * heaviside(10**1 - kR) * np.polyval(poly, np.log10(kR)) +\
        heaviside(10**-1 - kR) * np.polyval(poly, -1) +\
        heaviside(kR - 10**1) * np.polyval(poly, 1)
    attenuation = 10**attenuation
    return attenuation
def heaviside(x):
""" numpy does not define an heaviside function.Let's define it here."""
return 0.5 * (np.sign(x) + 1)
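# Added note: NumPy >= 1.13 ships np.heaviside; with a midpoint value of 0.5 it
# matches the helper above, e.g. np.allclose(heaviside(x), np.heaviside(x, 0.5))
# is expected to be True for any float array x.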
if __name__ == "__main__":
x = 10**(np.linspace(-4, 4, 200))
plt.plot(np.log10(x), convert_CM2008_velocity(
x, export_matlab_data("Belanoshko_Vp_poly")), 'o-')
plt.show()
|
MarineLasbleis/GrowYourIC
|
GrowYourIC/mineral_phys.py
|
Python
|
mit
| 3,176
|
[
"CRYSTAL"
] |
52ecd728e952684e89014ee1faedd32eee65f25244c52bf3aabed8dac5cd5adf
|
# Copyright (c) 2009-2021 The Regents of the University of Michigan
# This file is part of the HOOMD-blue project, released under the BSD 3-Clause
# License.
"""Molecular Dynamics.
Perform Molecular Dynamics simulations with HOOMD-blue.
"""
from hoomd.md import angle
from hoomd.md import bond
from hoomd.md import compute
from hoomd.md import constrain
from hoomd.md import dihedral
from hoomd.md import external
from hoomd.md import force
from hoomd.md import improper
from hoomd.md.integrate import Integrator
from hoomd.md import long_range
from hoomd.md import manifold
from hoomd.md import nlist
from hoomd.md import pair
from hoomd.md import update
from hoomd.md import wall
from hoomd.md import special_pair
from hoomd.md import methods
from hoomd.md import many_body
|
joaander/hoomd-blue
|
hoomd/md/__init__.py
|
Python
|
bsd-3-clause
| 779
|
[
"HOOMD-blue"
] |
74b2732d778f22767723d2f6c4f4c296571be4d62e06e6edf533dfae2b128968
|
# -*- coding: utf-8 -*-
#--------------------------------------------------------------------------
# Software: InVesalius - 3D Medical Image Reconstruction Software
# Copyright: (C) 2001 Centro de Pesquisas Renato Archer
# Homepage: http://www.softwarepublico.gov.br
# Contact: invesalius@cti.gov.br
# License: GNU - GPL 2 (LICENSE.txt/LICENCA.txt)
#--------------------------------------------------------------------------
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; in accordance with
# version 2 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#--------------------------------------------------------------------------
import collections
import itertools
import os
import tempfile
import numpy as np
import vtk
from vtk.wx.wxVTKRenderWindowInteractor import wxVTKRenderWindowInteractor
import invesalius.data.styles as styles
import wx
import sys
from invesalius.pubsub import pub as Publisher
try:
from agw import floatspin as FS
except ImportError: # if it's not there locally, try the wxPython lib.
import wx.lib.agw.floatspin as FS
import invesalius.constants as const
import invesalius.data.cursor_actors as ca
import invesalius.data.slice_ as sl
import invesalius.data.vtk_utils as vtku
import invesalius.project as project
import invesalius.data.slice_data as sd
import invesalius.utils as utils
import invesalius.session as ses
import invesalius.data.converters as converters
import invesalius.data.measures as measures
from invesalius.gui.widgets.inv_spinctrl import InvSpinCtrl, InvFloatSpinCtrl
from invesalius.gui.widgets.canvas_renderer import CanvasRendererCTX
if sys.platform == 'win32':
try:
import win32api
_has_win32api = True
except ImportError:
_has_win32api = False
else:
_has_win32api = False
ID_TO_TOOL_ITEM = {}
STR_WL = "WL: %d WW: %d"
ORIENTATIONS = {
"AXIAL": const.AXIAL,
"CORONAL": const.CORONAL,
"SAGITAL": const.SAGITAL,
}
class ContourMIPConfig(wx.Panel):
def __init__(self, prnt, orientation):
wx.Panel.__init__(self, prnt)
self.mip_size_spin = InvSpinCtrl(self, -1, value=const.PROJECTION_MIP_SIZE, min_value=1, max_value=240)
self.mip_size_spin.SetToolTip(wx.ToolTip(_("Number of slices used to compound the visualization.")))
self.mip_size_spin.CalcSizeFromTextSize('MMM')
self.border_spin = InvFloatSpinCtrl(self, -1, min_value=0, max_value=10,
increment=0.1,
value=const.PROJECTION_BORDER_SIZE,
digits=1)
self.border_spin.SetToolTip(wx.ToolTip(_("Controls the sharpness of the"
" contour. The greater the"
" value, the sharper the"
" contour.")))
self.border_spin.CalcSizeFromTextSize()
# w, h = self.border_spin.GetTextExtent('M')
# self.border_spin.SetMinSize((5 * w + 10, -1))
# self.border_spin.SetMaxSize((5 * w + 10, -1))
self.inverted = wx.CheckBox(self, -1, _("Inverted order"))
self.inverted.SetToolTip(wx.ToolTip(_("If checked, the slices are"
" traversed in descending"
" order to compound the"
" visualization instead of"
" ascending order.")))
txt_mip_size = wx.StaticText(self, -1, _("Number of slices"), style=wx.ALIGN_CENTER_HORIZONTAL)
self.txt_mip_border = wx.StaticText(self, -1, _("Sharpness"))
sizer = wx.BoxSizer(wx.HORIZONTAL)
sizer.Add(txt_mip_size, 0, wx.ALIGN_CENTER_VERTICAL | wx.ALL, 2)
sizer.Add(self.mip_size_spin, 0)
try:
sizer.Add(10, 0)
except TypeError:
sizer.Add((10, 0))
sizer.Add(self.txt_mip_border, 0, wx.ALIGN_CENTER_VERTICAL | wx.ALL, 2)
sizer.Add(self.border_spin, 0, wx.EXPAND)
try:
sizer.Add(10, 0)
except TypeError:
sizer.Add((10, 0))
sizer.Add(self.inverted, 0, wx.EXPAND)
self.SetSizer(sizer)
sizer.Fit(self)
self.Layout()
self.Update()
self.SetAutoLayout(1)
self.orientation = orientation
self.canvas = None
self.mip_size_spin.Bind(wx.EVT_SPINCTRL, self.OnSetMIPSize)
self.border_spin.Bind(wx.EVT_SPINCTRL, self.OnSetMIPBorder)
self.inverted.Bind(wx.EVT_CHECKBOX, self.OnCheckInverted)
Publisher.subscribe(self._set_projection_type, 'Set projection type')
    def OnSetMIPSize(self, evt):
val = self.mip_size_spin.GetValue()
Publisher.sendMessage('Set MIP size %s' % self.orientation, number_slices=val)
def OnSetMIPBorder(self, evt):
val = self.border_spin.GetValue()
Publisher.sendMessage('Set MIP border %s' % self.orientation, border_size=val)
def OnCheckInverted(self, evt):
val = self.inverted.GetValue()
Publisher.sendMessage('Set MIP Invert %s' % self.orientation, invert=val)
def _set_projection_type(self, projection_id):
if projection_id in (const.PROJECTION_MIDA,
const.PROJECTION_CONTOUR_MIDA):
self.inverted.Enable()
else:
self.inverted.Disable()
if projection_id in (const.PROJECTION_CONTOUR_MIP,
const.PROJECTION_CONTOUR_MIDA):
self.border_spin.Enable()
self.txt_mip_border.Enable()
else:
self.border_spin.Disable()
self.txt_mip_border.Disable()
class Viewer(wx.Panel):
def __init__(self, prnt, orientation='AXIAL'):
wx.Panel.__init__(self, prnt, size=wx.Size(320, 300))
#colour = [255*c for c in const.ORIENTATION_COLOUR[orientation]]
#self.SetBackgroundColour(colour)
# Interactor additional style
self._number_slices = const.PROJECTION_MIP_SIZE
self._mip_inverted = False
self.style = None
self.last_position_mouse_move = ()
self.state = const.STATE_DEFAULT
self.overwrite_mask = False
# All renderers and image actors in this viewer
self.slice_data_list = []
self.slice_data = None
self.slice_actor = None
self.interpolation_slice_status = True
self.canvas = None
self.draw_by_slice_number = collections.defaultdict(list)
# The layout from slice_data, the first is number of cols, the second
# is the number of rows
self.layout = (1, 1)
self.orientation_texts = []
self.measures = measures.MeasureData()
self.actors_by_slice_number = collections.defaultdict(list)
self.renderers_by_slice_number = {}
self.orientation = orientation
self.slice_number = 0
self.scroll_enabled = True
self.__init_gui()
self._brush_cursor_op = const.DEFAULT_BRUSH_OP
self._brush_cursor_size = const.BRUSH_SIZE
self._brush_cursor_colour = const.BRUSH_COLOUR
self._brush_cursor_type = const.DEFAULT_BRUSH_OP
self.cursor = None
self.wl_text = None
self.on_wl = False
self.on_text = False
# VTK pipeline and actors
self.__config_interactor()
self.cross_actor = vtk.vtkActor()
self.__bind_events()
self.__bind_events_wx()
self._flush_buffer = False
def __init_gui(self):
self.interactor = wxVTKRenderWindowInteractor(self, -1, size=self.GetSize())
self.interactor.SetRenderWhenDisabled(True)
scroll = wx.ScrollBar(self, -1, style=wx.SB_VERTICAL)
self.scroll = scroll
self.mip_ctrls = ContourMIPConfig(self, self.orientation)
self.mip_ctrls.Hide()
sizer = wx.BoxSizer(wx.HORIZONTAL)
sizer.Add(self.interactor, 1, wx.EXPAND)
sizer.Add(scroll, 0, wx.EXPAND|wx.GROW)
background_sizer = wx.BoxSizer(wx.VERTICAL)
background_sizer.Add(sizer, 1, wx.EXPAND)
#background_sizer.Add(self.mip_ctrls, 0, wx.EXPAND|wx.GROW|wx.ALL, 2)
self.SetSizer(background_sizer)
background_sizer.Fit(self)
self.Layout()
self.Update()
self.SetAutoLayout(1)
self.pick = vtk.vtkWorldPointPicker()
self.interactor.SetPicker(self.pick)
def OnContextMenu(self, evt):
if (self.last_position_mouse_move ==\
self.interactor.GetLastEventPosition()):
self.menu.caller = self
self.PopupMenu(self.menu)
evt.Skip()
def SetPopupMenu(self, menu):
self.menu = menu
def SetLayout(self, layout):
self.layout = layout
if (layout == (1,1)) and self.on_text:
self.ShowTextActors()
else:
self.HideTextActors(change_status=False)
slice_ = sl.Slice()
self.LoadRenderers(slice_.GetOutput())
self.__configure_renderers()
self.__configure_scroll()
def HideTextActors(self, change_status=True):
try:
self.canvas.draw_list.remove(self.wl_text)
except (ValueError, AttributeError):
pass
        for t in self.orientation_texts:
            self.canvas.draw_list.remove(t)
self.UpdateCanvas()
if change_status:
self.on_text = False
def ShowTextActors(self):
if self.on_wl and self.wl_text:
self.canvas.draw_list.append(self.wl_text)
        for t in self.orientation_texts:
            self.canvas.draw_list.append(t)
self.UpdateCanvas()
self.on_text = True
def __set_layout(self, layout):
self.SetLayout(layout)
def __config_interactor(self):
style = vtk.vtkInteractorStyleImage()
interactor = self.interactor
interactor.SetInteractorStyle(style)
def SetInteractorStyle(self, state):
cleanup = getattr(self.style, 'CleanUp', None)
if cleanup:
self.style.CleanUp()
del self.style
style = styles.Styles.get_style(state)(self)
setup = getattr(style, 'SetUp', None)
if setup:
style.SetUp()
self.style = style
self.interactor.SetInteractorStyle(style)
self.interactor.Render()
self.state = state
def UpdateWindowLevelValue(self, window, level):
self.acum_achange_window, self.acum_achange_level = (window, level)
self.SetWLText(window, level)
slc = sl.Slice()
slc._update_wwwl_widget_nodes(window, level)
Publisher.sendMessage('Update all slice')
Publisher.sendMessage('Update clut imagedata widget')
def UpdateWindowLevelText(self, window, level):
self.acum_achange_window, self.acum_achange_level = window, level
self.SetWLText(window, level)
self.interactor.Render()
def OnClutChange(self, evt):
Publisher.sendMessage('Change colour table from background image from widget',
nodes=evt.GetNodes())
slc = sl.Slice()
Publisher.sendMessage('Update window level value',
window=slc.window_width,
level=slc.window_level)
def SetWLText(self, window_width, window_level):
value = STR_WL%(window_level, window_width)
if (self.wl_text):
self.wl_text.SetValue(value)
self.canvas.modified = True
#self.interactor.Render()
def EnableText(self):
if not (self.wl_text):
proj = project.Project()
colour = const.ORIENTATION_COLOUR[self.orientation]
# Window & Level text
self.wl_text = vtku.TextZero()
self.wl_text.SetPosition(const.TEXT_POS_LEFT_UP)
self.wl_text.SetSymbolicSize(wx.FONTSIZE_LARGE)
self.SetWLText(proj.level, proj.window)
# Orientation text
if self.orientation == 'AXIAL':
values = [_('R'), _('L'), _('A'), _('P')]
elif self.orientation == 'SAGITAL':
values = [_('P'), _('A'), _('T'), _('B')]
else:
values = [_('R'), _('L'), _('T'), _('B')]
left_text = self.left_text = vtku.TextZero()
left_text.ShadowOff()
left_text.SetColour(colour)
left_text.SetPosition(const.TEXT_POS_VCENTRE_LEFT)
left_text.SetVerticalJustificationToCentered()
left_text.SetValue(values[0])
left_text.SetSymbolicSize(wx.FONTSIZE_LARGE)
right_text = self.right_text = vtku.TextZero()
right_text.ShadowOff()
right_text.SetColour(colour)
right_text.SetPosition(const.TEXT_POS_VCENTRE_RIGHT_ZERO)
right_text.SetVerticalJustificationToCentered()
right_text.SetJustificationToRight()
right_text.SetValue(values[1])
right_text.SetSymbolicSize(wx.FONTSIZE_LARGE)
up_text = self.up_text = vtku.TextZero()
up_text.ShadowOff()
up_text.SetColour(colour)
up_text.SetPosition(const.TEXT_POS_HCENTRE_UP)
up_text.SetJustificationToCentered()
up_text.SetValue(values[2])
up_text.SetSymbolicSize(wx.FONTSIZE_LARGE)
down_text = self.down_text = vtku.TextZero()
down_text.ShadowOff()
down_text.SetColour(colour)
down_text.SetPosition(const.TEXT_POS_HCENTRE_DOWN_ZERO)
down_text.SetJustificationToCentered()
down_text.SetVerticalJustificationToBottom()
down_text.SetValue(values[3])
down_text.SetSymbolicSize(wx.FONTSIZE_LARGE)
self.orientation_texts = [left_text, right_text, up_text,
down_text]
def RenderTextDirection(self, directions):
# Values are on ccw order, starting from the top:
self.up_text.SetValue(directions[0])
self.left_text.SetValue(directions[1])
self.down_text.SetValue(directions[2])
self.right_text.SetValue(directions[3])
self.interactor.Render()
def ResetTextDirection(self, cam):
# Values are on ccw order, starting from the top:
if self.orientation == 'AXIAL':
values = [_("A"), _("R"), _("P"), _("L")]
elif self.orientation == 'CORONAL':
values = [_("T"), _("R"), _("B"), _("L")]
else: # 'SAGITAL':
values = [_("T"), _("P"), _("B"), _("A")]
self.RenderTextDirection(values)
self.interactor.Render()
def UpdateTextDirection(self, cam):
croll = cam.GetRoll()
if (self.orientation == 'AXIAL'):
if (croll >= -2 and croll <= 1):
self.RenderTextDirection([_("A"), _("R"), _("P"), _("L")])
elif(croll > 1 and croll <= 44):
self.RenderTextDirection([_("AL"), _("RA"), _("PR"), _("LP")])
elif(croll > 44 and croll <= 88):
self.RenderTextDirection([_("LA"), _("AR"), _("RP"), _("PL")])
elif(croll > 89 and croll <= 91):
self.RenderTextDirection([_("L"), _("A"), _("R"), _("P")])
elif(croll > 91 and croll <= 135):
self.RenderTextDirection([_("LP"), _("AL"), _("RA"), _("PR")])
elif(croll > 135 and croll <= 177):
self.RenderTextDirection([_("PL"), _("LA"), _("AR"), _("RP")])
elif(croll >= -180 and croll <= -178) or (croll < 180 and croll > 177):
self.RenderTextDirection([_("P"), _("L"), _("A"), _("R")])
elif(croll >= -177 and croll <= -133):
self.RenderTextDirection([_("PR"), _("LP"), _("AL"), _("RA")])
elif(croll >= -132 and croll <= -101):
self.RenderTextDirection([_("RP"), _("PL"), _("LA"), _("AR")])
elif(croll >= -101 and croll <= -87):
self.RenderTextDirection([_("R"), _("P"), _("L"), _("A")])
elif(croll >= -86 and croll <= -42):
self.RenderTextDirection([_("RA"), _("PR"), _("LP"), _("AL")])
elif(croll >= -41 and croll <= -2):
self.RenderTextDirection([_("AR"), _("RP"), _("PL"), _("LA")])
elif(self.orientation == "CORONAL"):
if (croll >= -2 and croll <= 1):
self.RenderTextDirection([_("T"), _("R"), _("B"), _("L")])
elif(croll > 1 and croll <= 44):
self.RenderTextDirection([_("TL"), _("RT"), _("BR"), _("LB")])
elif(croll > 44 and croll <= 88):
self.RenderTextDirection([_("LT"), _("TR"), _("RB"), _("BL")])
elif(croll > 89 and croll <= 91):
self.RenderTextDirection([_("L"), _("T"), _("R"), _("B")])
elif(croll > 91 and croll <= 135):
self.RenderTextDirection([_("LB"), _("TL"), _("RT"), _("BR")])
elif(croll > 135 and croll <= 177):
self.RenderTextDirection([_("BL"), _("LT"), _("TR"), _("RB")])
elif(croll >= -180 and croll <= -178) or (croll < 180 and croll > 177):
self.RenderTextDirection([_("B"), _("L"), _("T"), _("R")])
elif(croll >= -177 and croll <= -133):
self.RenderTextDirection([_("BR"), _("LB"), _("TL"), _("RT")])
elif(croll >= -132 and croll <= -101):
self.RenderTextDirection([_("RB"), _("BL"), _("LT"), _("TR")])
elif(croll >= -101 and croll <= -87):
self.RenderTextDirection([_("R"), _("B"), _("L"), _("T")])
elif(croll >= -86 and croll <= -42):
self.RenderTextDirection([_("RT"), _("BR"), _("LB"), _("TL")])
elif(croll >= -41 and croll <= -2):
self.RenderTextDirection([_("TR"), _("RB"), _("BL"), _("LT")])
elif(self.orientation == "SAGITAL"):
if(croll >= -101 and croll <= -87):
self.RenderTextDirection([_("T"), _("P"), _("B"), _("A")])
elif(croll >= -86 and croll <= -42):
self.RenderTextDirection([_("TA"), _("PT"), _("BP"), _("AB")])
elif(croll >= -41 and croll <= -2):
self.RenderTextDirection([_("AT"), _("TP"), _("PB"), _("BA")])
elif (croll >= -2 and croll <= 1):
self.RenderTextDirection([_("A"), _("T"), _("P"), _("B")])
elif(croll > 1 and croll <= 44):
self.RenderTextDirection([_("AB"), _("TA"), _("PT"), _("BP")])
elif(croll > 44 and croll <= 88):
self.RenderTextDirection([_("BA"), _("AT"), _("TP"), _("PB")])
elif(croll > 89 and croll <= 91):
self.RenderTextDirection([_("B"), _("A"), _("T"), _("P")])
elif(croll > 91 and croll <= 135):
self.RenderTextDirection([_("BP"), _("AB"), _("TA"), _("PT")])
elif(croll > 135 and croll <= 177):
self.RenderTextDirection([_("PB"), _("BA"), _("AT"), _("TP")])
elif(croll >= -180 and croll <= -178) or (croll < 180 and croll > 177):
self.RenderTextDirection([_("P"), _("B"), _("A"), _("T")])
elif(croll >= -177 and croll <= -133):
self.RenderTextDirection([_("PT"), _("BP"), _("AB"), _("TA")])
elif(croll >= -132 and croll <= -101):
self.RenderTextDirection([_("TP"), _("PB"), _("BA"), _("AT")])
def Reposition(self, slice_data):
"""
        Based on the code of the Zoom method in
        vtkInteractorStyleRubberBandZoom, from VTK 5.4.3.
"""
ren = slice_data.renderer
size = ren.GetSize()
ren.ResetCamera()
ren.GetActiveCamera().Zoom(1.0)
self.interactor.Render()
def ChangeBrushColour(self, colour):
vtk_colour = colour
self._brush_cursor_colour = vtk_colour
if (self.cursor):
for slice_data in self.slice_data_list:
slice_data.cursor.SetColour(vtk_colour)
def SetBrushColour(self, colour):
        colour_vtk = [c / 255.0 for c in colour]
self._brush_cursor_colour = colour_vtk
if self.slice_data.cursor:
self.slice_data.cursor.SetColour(colour_vtk)
def UpdateSlicesPosition(self, position):
# Get point from base change
px, py = self.get_slice_pixel_coord_by_world_pos(*position)
coord = self.calcultate_scroll_position(px, py)
# Debugging coordinates. For a 1.0 spacing axis the coord and position is the same,
# but for a spacing dimension =! 1, the coord and position are different
# print("\nPosition: {}".format(position))
# print("Scroll position: {}".format(coord))
# print("Slice actor bounds: {}".format(self.slice_data.actor.GetBounds()))
# print("Scroll from int of position: {}\n".format([round(s) for s in position]))
# this call did not affect the working code
# self.cross.SetFocalPoint(coord)
# update the image slices in all three orientations
self.ScrollSlice(coord)
def SetCrossFocalPoint(self, position):
"""
Sets the cross focal point for all slice panels (axial, coronal, sagittal). This function is also called via
pubsub messaging and may receive a list of 6 coordinates. Thus, limiting the number of list elements in the
SetFocalPoint call is required.
:param position: list of 6 coordinates in vtk world coordinate system wx, wy, wz
"""
self.cross.SetFocalPoint(position[:3])
def ScrollSlice(self, coord):
if self.orientation == "AXIAL":
Publisher.sendMessage(('Set scroll position', 'SAGITAL'),
index=coord[0])
Publisher.sendMessage(('Set scroll position', 'CORONAL'),
index=coord[1])
elif self.orientation == "SAGITAL":
Publisher.sendMessage(('Set scroll position', 'AXIAL'),
index=coord[2])
Publisher.sendMessage(('Set scroll position', 'CORONAL'),
index=coord[1])
elif self.orientation == "CORONAL":
Publisher.sendMessage(('Set scroll position', 'AXIAL'),
index=coord[2])
Publisher.sendMessage(('Set scroll position', 'SAGITAL'),
index=coord[0])
def get_slice_data(self, render):
#for slice_data in self.slice_data_list:
#if slice_data.renderer is render:
#return slice_data
# WARN: Return the only slice_data used in this slice_viewer.
return self.slice_data
def calcultate_scroll_position(self, x, y):
        # Based on the given (x, y) coordinate, returns the scroll positions for
        # each orientation: the first is sagital, the second coronal and the
        # last axial.
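        # Example (added): in an AXIAL viewer currently showing slice 42, a pixel
        # coordinate of (x=10, y=20) maps to (sagital=10, coronal=20, axial=42).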
if self.orientation == 'AXIAL':
axial = self.slice_data.number
coronal = y
sagital = x
elif self.orientation == 'CORONAL':
axial = y
coronal = self.slice_data.number
sagital = x
elif self.orientation == 'SAGITAL':
axial = y
coronal = x
sagital = self.slice_data.number
return sagital, coronal, axial
def calculate_matrix_position(self, coord):
x, y, z = coord
xi, xf, yi, yf, zi, zf = self.slice_data.actor.GetBounds()
if self.orientation == 'AXIAL':
mx = round((x - xi)/self.slice_.spacing[0], 0)
my = round((y - yi)/self.slice_.spacing[1], 0)
elif self.orientation == 'CORONAL':
mx = round((x - xi)/self.slice_.spacing[0], 0)
my = round((z - zi)/self.slice_.spacing[2], 0)
elif self.orientation == 'SAGITAL':
mx = round((y - yi)/self.slice_.spacing[1], 0)
my = round((z - zi)/self.slice_.spacing[2], 0)
return int(mx), int(my)
def get_vtk_mouse_position(self):
"""
        Get the mouse position inside a wxVTKRenderWindowInteractor. Returns a
        tuple with the X and Y position.
        Please use this instead of iren.GetEventPosition because the latter does
        not return correct values on Mac with a HighDPI display; the same may be
        happening on Windows and Linux, which still needs testing.
"""
mposx, mposy = wx.GetMousePosition()
cposx, cposy = self.interactor.ScreenToClient((mposx, mposy))
mx, my = cposx, self.interactor.GetSize()[1] - cposy
if sys.platform == 'darwin':
            # It is necessary to multiply by the scale factor on HighDPI displays because of
# https://docs.wxpython.org/wx.glcanvas.GLCanvas.html
# For now we are doing this only on Mac but it may be needed on
# Windows and Linux too.
scale = self.interactor.GetContentScaleFactor()
mx *= scale
my *= scale
return int(mx), int(my)
def get_coordinate_cursor(self, mx, my, picker=None):
"""
Given the mx, my screen position returns the x, y, z position in world
coordinates.
        Parameters:
        mx (int): x position.
        my (int): y position.
        picker: the picker used to calculate the world coordinate.
Returns:
world coordinate (x, y, z)
"""
if picker is None:
picker = self.pick
slice_data = self.slice_data
renderer = slice_data.renderer
picker.Pick(mx, my, 0, renderer)
x, y, z = picker.GetPickPosition()
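        # (Added comment) The slice actor is flat along one axis, so its bounds
        # collapse to a single value there; snap the picked point onto that plane.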
bounds = self.slice_data.actor.GetBounds()
if bounds[0] == bounds[1]:
x = bounds[0]
elif bounds[2] == bounds[3]:
y = bounds[2]
elif bounds[4] == bounds[5]:
z = bounds[4]
return x, y, z
def get_coordinate_cursor_edition(self, slice_data=None, picker=None):
# Find position
if slice_data is None:
slice_data = self.slice_data
actor = slice_data.actor
slice_number = slice_data.number
if picker is None:
picker = self.pick
x, y, z = picker.GetPickPosition()
# First we fix the position origin, based on vtkActor bounds
bounds = actor.GetBounds()
bound_xi, bound_xf, bound_yi, bound_yf, bound_zi, bound_zf = bounds
x = float(x - bound_xi)
y = float(y - bound_yi)
z = float(z - bound_zi)
dx = bound_xf - bound_xi
dy = bound_yf - bound_yi
dz = bound_zf - bound_zi
dimensions = self.slice_.matrix.shape
try:
x = (x * dimensions[2]) / dx
except ZeroDivisionError:
x = slice_number
try:
y = (y * dimensions[1]) / dy
except ZeroDivisionError:
y = slice_number
try:
z = (z * dimensions[0]) / dz
except ZeroDivisionError:
z = slice_number
return x, y, z
def get_voxel_coord_by_screen_pos(self, mx, my, picker=None):
"""
        Given the (mx, my) screen position, returns the voxel coordinate
        of the volume at that position.
        Parameters:
        mx (int): x position.
        my (int): y position.
        picker: the picker used to calculate the voxel coordinate.
Returns:
voxel_coordinate (x, y, z): voxel coordinate inside the matrix. Can
be used to access the voxel value inside the matrix.
"""
if picker is None:
picker = self.pick
wx, wy, wz = self.get_coordinate_cursor(mx, my, picker)
x, y, z = self.get_voxel_coord_by_world_pos(wx, wy, wz)
return (x, y, z)
def get_voxel_coord_by_world_pos(self, wx, wy, wz):
"""
        Given the (wx, wy, wz) world position, returns the voxel coordinate
        of the volume at that position.
        Parameters:
        wx (float): x position.
        wy (float): y position.
        wz (float): z position.
Returns:
voxel_coordinate (x, y, z): voxel coordinate inside the matrix. Can
be used to access the voxel value inside the matrix.
"""
px, py = self.get_slice_pixel_coord_by_world_pos(wx, wy, wz)
x, y, z = self.calcultate_scroll_position(px, py)
return (int(x), int(y), int(z))
def get_slice_pixel_coord_by_screen_pos(self, mx, my, picker=None):
"""
        Given the (mx, my) screen position, returns the pixel coordinate
        of the slice at that position.
        Parameters:
        mx (int): x position.
        my (int): y position.
        picker: the picker used to calculate the pixel coordinate.
        Returns:
        pixel_coordinate (x, y): pixel coordinate inside the slice. Can
        be used to access the pixel value inside the slice matrix.
"""
if picker is None:
picker = self.pick
wx, wy, wz = self.get_coordinate_cursor(mx, my, picker)
x, y = self.get_slice_pixel_coord_by_world_pos(wx, wy, wz)
return int(x), int(y)
def get_slice_pixel_coord_by_world_pos(self, wx, wy, wz):
"""
        Given the (wx, wy, wz) world position, returns the pixel coordinate
        of the slice at that position.
        Parameters:
        wx (float): x position.
        wy (float): y position.
        wz (float): z position.
        Returns:
        pixel_coordinate (x, y): pixel coordinate inside the slice. Can
        be used to access the pixel value inside the slice matrix.
"""
coord = wx, wy, wz
px, py = self.calculate_matrix_position(coord)
return px, py
def get_coord_inside_volume(self, mx, my, picker=None):
if picker is None:
picker = self.pick
slice_data = self.slice_data
renderer = slice_data.renderer
        coord = self.get_coordinate_cursor(mx, my, picker)
position = slice_data.actor.GetInput().FindPoint(coord)
if position != -1:
coord = slice_data.actor.GetInput().GetPoint(position)
return coord
def __bind_events(self):
Publisher.subscribe(self.LoadImagedata,
'Load slice to viewer')
Publisher.subscribe(self.SetBrushColour,
'Change mask colour')
Publisher.subscribe(self.UpdateRender,
'Update slice viewer')
Publisher.subscribe(self.UpdateRender,
'Update slice viewer %s' % self.orientation)
Publisher.subscribe(self.UpdateCanvas,
'Redraw canvas')
Publisher.subscribe(self.UpdateCanvas,
'Redraw canvas %s' % self.orientation)
Publisher.subscribe(self.ChangeSliceNumber,
('Set scroll position',
self.orientation))
# Publisher.subscribe(self.__update_cross_position,
# 'Update cross position')
# Publisher.subscribe(self.__update_cross_position,
# 'Update cross position %s' % self.orientation)
Publisher.subscribe(self.SetCrossFocalPoint, 'Set cross focal point')
Publisher.subscribe(self.UpdateSlicesPosition, 'Update slices position')
###
# Publisher.subscribe(self.ChangeBrushColour,
# 'Add mask')
Publisher.subscribe(self.UpdateWindowLevelValue,
'Update window level value')
Publisher.subscribe(self.UpdateWindowLevelText,
'Update window level text')
Publisher.subscribe(self._set_cross_visibility,
'Set cross visibility')
Publisher.subscribe(self.__set_layout,
'Set slice viewer layout')
Publisher.subscribe(self.OnSetInteractorStyle,
'Set slice interaction style')
Publisher.subscribe(self.OnCloseProject, 'Close project data')
#####
Publisher.subscribe(self.OnShowText,
'Show text actors on viewers')
Publisher.subscribe(self.OnHideText,
'Hide text actors on viewers')
Publisher.subscribe(self.OnExportPicture,'Export picture to file')
Publisher.subscribe(self.SetDefaultCursor, 'Set interactor default cursor')
Publisher.subscribe(self.SetSizeNSCursor, 'Set interactor resize NS cursor')
Publisher.subscribe(self.SetSizeWECursor, 'Set interactor resize WE cursor')
Publisher.subscribe(self.SetSizeNWSECursor, 'Set interactor resize NSWE cursor')
Publisher.subscribe(self.AddActors, 'Add actors ' + str(ORIENTATIONS[self.orientation]))
Publisher.subscribe(self.RemoveActors, 'Remove actors ' + str(ORIENTATIONS[self.orientation]))
Publisher.subscribe(self.OnSwapVolumeAxes, 'Swap volume axes')
Publisher.subscribe(self.ReloadActualSlice, 'Reload actual slice')
Publisher.subscribe(self.ReloadActualSlice, 'Reload actual slice %s' % self.orientation)
Publisher.subscribe(self.OnUpdateScroll, 'Update scroll')
# MIP
Publisher.subscribe(self.OnSetMIPSize, 'Set MIP size %s' % self.orientation)
Publisher.subscribe(self.OnSetMIPBorder, 'Set MIP border %s' % self.orientation)
Publisher.subscribe(self.OnSetMIPInvert, 'Set MIP Invert %s' % self.orientation)
Publisher.subscribe(self.OnShowMIPInterface, 'Show MIP interface')
Publisher.subscribe(self.OnSetOverwriteMask, "Set overwrite mask")
Publisher.subscribe(self.RefreshViewer, "Refresh viewer")
Publisher.subscribe(self.SetInterpolatedSlices, "Set interpolated slices")
Publisher.subscribe(self.UpdateInterpolatedSlice, "Update Slice Interpolation")
Publisher.subscribe(self.GetCrossPos, "Set Update cross pos")
Publisher.subscribe(self.UpdateCross, "Update cross pos")
def RefreshViewer(self):
self.Refresh()
def SetDefaultCursor(self):
self.interactor.SetCursor(wx.Cursor(wx.CURSOR_DEFAULT))
def SetSizeNSCursor(self):
self.interactor.SetCursor(wx.Cursor(wx.CURSOR_SIZENS))
def SetSizeWECursor(self):
self.interactor.SetCursor(wx.Cursor(wx.CURSOR_SIZEWE))
def SetSizeNWSECursor(self):
if sys.platform.startswith('linux'):
self.interactor.SetCursor(wx.Cursor(wx.CURSOR_SIZENWSE))
else:
self.interactor.SetCursor(wx.Cursor(wx.CURSOR_SIZING))
def SetFocus(self):
Publisher.sendMessage('Set viewer orientation focus',
orientation=self.orientation)
super().SetFocus()
def OnExportPicture(self, orientation, filename, filetype):
dict = {"AXIAL": const.AXIAL,
"CORONAL": const.CORONAL,
"SAGITAL": const.SAGITAL}
if orientation == dict[self.orientation]:
Publisher.sendMessage('Begin busy cursor')
if _has_win32api:
utils.touch(filename)
win_filename = win32api.GetShortPathName(filename)
self._export_picture(orientation, win_filename, filetype)
else:
self._export_picture(orientation, filename, filetype)
Publisher.sendMessage('End busy cursor')
def _export_picture(self, id, filename, filetype):
view_prop_list = []
dict = {"AXIAL": const.AXIAL,
"CORONAL": const.CORONAL,
"SAGITAL": const.SAGITAL}
if id == dict[self.orientation]:
if filetype == const.FILETYPE_POV:
renwin = self.interactor.GetRenderWindow()
image = vtk.vtkWindowToImageFilter()
image.SetInput(renwin)
writer = vtk.vtkPOVExporter()
                writer.SetFilePrefix(os.path.splitext(filename)[0])
writer.SetRenderWindow(renwin)
writer.Write()
else:
ren = self.slice_data.renderer
#Use tiling to generate a large rendering.
image = vtk.vtkRenderLargeImage()
image.SetInput(ren)
image.SetMagnification(1)
image.Update()
image = image.GetOutput()
# write image file
if (filetype == const.FILETYPE_BMP):
writer = vtk.vtkBMPWriter()
elif (filetype == const.FILETYPE_JPG):
writer = vtk.vtkJPEGWriter()
elif (filetype == const.FILETYPE_PNG):
writer = vtk.vtkPNGWriter()
elif (filetype == const.FILETYPE_PS):
writer = vtk.vtkPostScriptWriter()
elif (filetype == const.FILETYPE_TIF):
writer = vtk.vtkTIFFWriter()
filename = "%s.tif"%filename.strip(".tif")
writer.SetInputData(image)
writer.SetFileName(filename.encode(const.FS_ENCODE))
writer.Write()
if not os.path.exists(filename):
wx.MessageBox(_("InVesalius was not able to export this picture"), _("Export picture error"))
for actor in view_prop_list:
self.slice_data.renderer.AddViewProp(actor)
Publisher.sendMessage('End busy cursor')
def OnShowText(self):
self.ShowTextActors()
def OnHideText(self):
self.HideTextActors()
def OnCloseProject(self):
self.CloseProject()
def CloseProject(self):
for slice_data in self.slice_data_list:
del slice_data
self.slice_data_list = []
self.layout = (1, 1)
del self.slice_data
self.slice_data = None
if self.canvas:
self.canvas.draw_list = []
self.canvas.remove_from_renderer()
self.canvas = None
self.orientation_texts = []
self.slice_number = 0
self.cursor = None
self.wl_text = None
self.pick = vtk.vtkWorldPointPicker()
def OnSetInteractorStyle(self, style):
self.SetInteractorStyle(style)
if (style not in [const.SLICE_STATE_EDITOR, const.SLICE_STATE_WATERSHED]):
Publisher.sendMessage('Set interactor default cursor')
def __bind_events_wx(self):
self.scroll.Bind(wx.EVT_SCROLL, self.OnScrollBar)
self.scroll.Bind(wx.EVT_SCROLL_THUMBTRACK, self.OnScrollBarRelease)
#self.scroll.Bind(wx.EVT_SCROLL_ENDSCROLL, self.OnScrollBarRelease)
self.interactor.Bind(wx.EVT_KEY_DOWN, self.OnKeyDown)
self.interactor.Bind(wx.EVT_RIGHT_UP, self.OnContextMenu)
self.interactor.Bind(wx.EVT_SIZE, self.OnSize)
def LoadImagedata(self, mask_dict):
self.SetInput(mask_dict)
def LoadRenderers(self, imagedata):
number_renderers = self.layout[0] * self.layout[1]
diff = number_renderers - len(self.slice_data_list)
if diff > 0:
for i in range(diff):
slice_data = self.create_slice_window(imagedata)
self.slice_data_list.append(slice_data)
elif diff < 0:
to_remove = self.slice_data_list[number_renderers::]
for slice_data in to_remove:
self.interactor.GetRenderWindow().RemoveRenderer(slice_data.renderer)
self.slice_data_list = self.slice_data_list[:number_renderers]
def __configure_renderers(self):
proportion_x = 1.0 / self.layout[0]
proportion_y = 1.0 / self.layout[1]
        # The (0, 0) origin in VTK is at the bottom left, so the renderers must
        # be created in inverted order, from the top left to the bottom right.
w, h = self.interactor.GetRenderWindow().GetSize()
w *= proportion_x
h *= proportion_y
n = 0
for j in range(self.layout[1]-1, -1, -1):
for i in range(self.layout[0]):
slice_xi = i*proportion_x
slice_xf = (i+1)*proportion_x
slice_yi = j*proportion_y
slice_yf = (j+1)*proportion_y
position = (slice_xi, slice_yi, slice_xf, slice_yf)
slice_data = self.slice_data_list[n]
slice_data.renderer.SetViewport(position)
# Text actor position
x, y = const.TEXT_POS_LEFT_DOWN
slice_data.text.SetPosition((x+slice_xi,y+slice_yi))
slice_data.SetCursor(self.__create_cursor())
# slice_data.SetSize((w, h))
self.__update_camera(slice_data)
style = 0
if j == 0:
style = style | sd.BORDER_DOWN
if j == self.layout[1] - 1:
style = style | sd.BORDER_UP
if i == 0:
style = style | sd.BORDER_LEFT
if i == self.layout[0] - 1:
style = style | sd.BORDER_RIGHT
# slice_data.SetBorderStyle(style)
n += 1
def __create_cursor(self):
cursor = ca.CursorCircle()
cursor.SetOrientation(self.orientation)
#self.__update_cursor_position([i for i in actor_bound[1::2]])
cursor.SetColour(self._brush_cursor_colour)
cursor.SetSpacing(self.slice_.spacing)
cursor.Show(0)
self.cursor_ = cursor
return cursor
def SetInput(self, mask_dict):
self.slice_ = sl.Slice()
max_slice_number = sl.Slice().GetNumberOfSlices(self.orientation)
self.scroll.SetScrollbar(wx.SB_VERTICAL, 1, max_slice_number,
max_slice_number)
self.slice_data = self.create_slice_window()
self.slice_data.SetCursor(self.__create_cursor())
self.cam = self.slice_data.renderer.GetActiveCamera()
self.__build_cross_lines()
self.canvas = CanvasRendererCTX(self, self.slice_data.renderer, self.slice_data.canvas_renderer, self.orientation)
self.canvas.draw_list.append(self.slice_data)
        # Set the slice number to the last slice to ensure the camera is far
        # enough to show all slices.
self.set_slice_number(max_slice_number - 1)
self.__update_camera()
self.slice_data.renderer.ResetCamera()
self.interactor.GetRenderWindow().AddRenderer(self.slice_data.renderer)
self.interactor.Render()
self.EnableText()
self.wl_text.Hide()
## Insert cursor
self.SetInteractorStyle(const.STATE_DEFAULT)
def __build_cross_lines(self):
renderer = self.slice_data.overlay_renderer
cross = vtk.vtkCursor3D()
cross.AllOff()
cross.AxesOn()
self.cross = cross
c = vtk.vtkCoordinate()
c.SetCoordinateSystemToWorld()
cross_mapper = vtk.vtkPolyDataMapper()
cross_mapper.SetInputConnection(cross.GetOutputPort())
#cross_mapper.SetTransformCoordinate(c)
p = vtk.vtkProperty()
p.SetColor(1, 0, 0)
cross_actor = vtk.vtkActor()
cross_actor.SetMapper(cross_mapper)
cross_actor.SetProperty(p)
cross_actor.VisibilityOff()
# Only the slices are pickable
cross_actor.PickableOff()
self.cross_actor = cross_actor
renderer.AddActor(cross_actor)
# def __update_cross_position(self, arg, position):
# # self.cross.SetFocalPoint(position[:3])
# self.UpdateSlicesPosition(None, position)
def _set_cross_visibility(self, visibility):
self.cross_actor.SetVisibility(visibility)
def _set_editor_cursor_visibility(self, visibility):
for slice_data in self.slice_data_list:
slice_data.cursor.actor.SetVisibility(visibility)
def SetOrientation(self, orientation):
self.orientation = orientation
for slice_data in self.slice_data_list:
self.__update_camera(slice_data)
def create_slice_window(self):
renderer = vtk.vtkRenderer()
renderer.SetLayer(0)
cam = renderer.GetActiveCamera()
canvas_renderer = vtk.vtkRenderer()
canvas_renderer.SetLayer(1)
canvas_renderer.SetActiveCamera(cam)
canvas_renderer.SetInteractive(0)
canvas_renderer.PreserveDepthBufferOn()
overlay_renderer = vtk.vtkRenderer()
overlay_renderer.SetLayer(2)
overlay_renderer.SetActiveCamera(cam)
overlay_renderer.SetInteractive(0)
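        # (Added comment) Three stacked layers share the same camera: layer 0
        # holds the image slice, layer 1 the canvas/measurement drawings and
        # layer 2 the cross cursor overlay; only layer 0 receives interaction.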
self.interactor.GetRenderWindow().SetNumberOfLayers(3)
self.interactor.GetRenderWindow().AddRenderer(overlay_renderer)
self.interactor.GetRenderWindow().AddRenderer(canvas_renderer)
self.interactor.GetRenderWindow().AddRenderer(renderer)
actor = vtk.vtkImageActor()
self.slice_actor = actor
        # TODO: Create an option to let the user choose whether to interpolate
        # the slice images.
if int(ses.Session().slice_interpolation) == 1:
actor.InterpolateOff()
else:
actor.InterpolateOn()
slice_data = sd.SliceData()
slice_data.SetOrientation(self.orientation)
slice_data.renderer = renderer
slice_data.canvas_renderer = canvas_renderer
slice_data.overlay_renderer = overlay_renderer
slice_data.actor = actor
# slice_data.SetBorderStyle(sd.BORDER_ALL)
renderer.AddActor(actor)
# renderer.AddActor(slice_data.text.actor)
# renderer.AddViewProp(slice_data.box_actor)
return slice_data
def UpdateInterpolatedSlice(self):
        if self.slice_actor is not None:
if ses.Session().slice_interpolation:
self.slice_actor.InterpolateOff()
else:
self.slice_actor.InterpolateOn()
self.interactor.Render()
def SetInterpolatedSlices(self, flag):
self.interpolation_slice_status = flag
        if self.slice_actor is not None:
            if self.interpolation_slice_status:
self.slice_actor.InterpolateOn()
else:
self.slice_actor.InterpolateOff()
self.interactor.Render()
def __update_camera(self):
orientation = self.orientation
proj = project.Project()
orig_orien = proj.original_orientation
self.cam.SetFocalPoint(0, 0, 0)
self.cam.SetViewUp(const.SLICE_POSITION[orig_orien][0][self.orientation])
self.cam.SetPosition(const.SLICE_POSITION[orig_orien][1][self.orientation])
#self.cam.ComputeViewPlaneNormal()
#self.cam.OrthogonalizeViewUp()
self.cam.ParallelProjectionOn()
def __update_display_extent(self, image):
self.slice_data.actor.SetDisplayExtent(image.GetExtent())
self.slice_data.renderer.ResetCameraClippingRange()
def UpdateRender(self):
self.interactor.Render()
def UpdateCanvas(self, evt=None):
if self.canvas is not None:
self._update_draw_list()
self.canvas.modified = True
self.interactor.Render()
def _update_draw_list(self):
cp_draw_list = self.canvas.draw_list[:]
self.canvas.draw_list = []
# Removing all measures
for i in cp_draw_list:
if not isinstance(i, (measures.AngularMeasure, measures.LinearMeasure, measures.CircleDensityMeasure, measures.PolygonDensityMeasure)):
self.canvas.draw_list.append(i)
# Then add all needed measures
for (m, mr) in self.measures.get(self.orientation, self.slice_data.number):
if m.visible:
self.canvas.draw_list.append(mr)
n = self.slice_data.number
self.canvas.draw_list.extend(self.draw_by_slice_number[n])
def __configure_scroll(self):
actor = self.slice_data_list[0].actor
number_of_slices = self.layout[0] * self.layout[1]
        max_slice_number = actor.GetSliceNumberMax() // number_of_slices
        if actor.GetSliceNumberMax() % number_of_slices:
            max_slice_number += 1
self.scroll.SetScrollbar(wx.SB_VERTICAL, 1, max_slice_number,
max_slice_number)
self.set_scroll_position(0)
@property
def number_slices(self):
return self._number_slices
@number_slices.setter
def number_slices(self, val):
if val != self._number_slices:
self._number_slices = val
buffer_ = self.slice_.buffer_slices[self.orientation]
buffer_.discard_buffer()
def set_scroll_position(self, position):
self.scroll.SetThumbPosition(position)
self.OnScrollBar()
def UpdateSlice3D(self, pos):
original_orientation = project.Project().original_orientation
pos = self.scroll.GetThumbPosition()
Publisher.sendMessage('Change slice from slice plane',
orientation=self.orientation, index=pos)
def OnScrollBar(self, evt=None, update3D=True):
pos = self.scroll.GetThumbPosition()
self.set_slice_number(pos)
if update3D:
self.UpdateSlice3D(pos)
# This Render needs to come before the self.style.OnScrollBar, otherwise the GetFocalPoint will sometimes
# provide the non-updated coordinate and the cross focal point will lag one pixel behind the actual
# scroll position
self.interactor.Render()
try:
self.style.OnScrollBar()
except AttributeError:
pass
if evt:
if self._flush_buffer:
self.slice_.apply_slice_buffer_to_mask(self.orientation)
evt.Skip()
def OnScrollBarRelease(self, evt):
pos = self.scroll.GetThumbPosition()
evt.Skip()
def OnKeyDown(self, evt=None, obj=None):
pos = self.scroll.GetThumbPosition()
skip = True
min = 0
max = self.slice_.GetMaxSliceNumber(self.orientation)
projections = {wx.WXK_NUMPAD0 : const.PROJECTION_NORMAL,
wx.WXK_NUMPAD1 : const.PROJECTION_MaxIP,
wx.WXK_NUMPAD2 : const.PROJECTION_MinIP,
wx.WXK_NUMPAD3 : const.PROJECTION_MeanIP,
wx.WXK_NUMPAD4 : const.PROJECTION_MIDA,
wx.WXK_NUMPAD5 : const.PROJECTION_CONTOUR_MIP,
wx.WXK_NUMPAD6 : const.PROJECTION_CONTOUR_MIDA,}
if self._flush_buffer:
self.slice_.apply_slice_buffer_to_mask(self.orientation)
if (evt.GetKeyCode() == wx.WXK_UP and pos > min):
self.OnScrollForward()
self.OnScrollBar()
skip = False
elif (evt.GetKeyCode() == wx.WXK_DOWN and pos < max):
self.OnScrollBackward()
self.OnScrollBar()
skip = False
elif (evt.GetKeyCode() == wx.WXK_NUMPAD_ADD):
actual_value = self.mip_ctrls.mip_size_spin.GetValue()
self.mip_ctrls.mip_size_spin.SetValue(actual_value + 1)
if self.mip_ctrls.mip_size_spin.GetValue() != actual_value:
self.number_slices = self.mip_ctrls.mip_size_spin.GetValue()
self.ReloadActualSlice()
skip = False
elif (evt.GetKeyCode() == wx.WXK_NUMPAD_SUBTRACT):
actual_value = self.mip_ctrls.mip_size_spin.GetValue()
self.mip_ctrls.mip_size_spin.SetValue(actual_value - 1)
if self.mip_ctrls.mip_size_spin.GetValue() != actual_value:
self.number_slices = self.mip_ctrls.mip_size_spin.GetValue()
self.ReloadActualSlice()
skip = False
elif evt.GetKeyCode() in projections:
self.slice_.SetTypeProjection(projections[evt.GetKeyCode()])
Publisher.sendMessage('Set projection type', projection_id=projections[evt.GetKeyCode()])
Publisher.sendMessage('Reload actual slice')
skip = False
self.UpdateSlice3D(pos)
self.interactor.Render()
if evt and skip:
evt.Skip()
def OnScrollForward(self, evt=None, obj=None):
if not self.scroll_enabled:
return
pos = self.scroll.GetThumbPosition()
min = 0
if(pos > min):
if self._flush_buffer:
self.slice_.apply_slice_buffer_to_mask(self.orientation)
pos = pos - 1
self.scroll.SetThumbPosition(pos)
self.OnScrollBar()
def OnScrollBackward(self, evt=None, obj=None):
if not self.scroll_enabled:
return
pos = self.scroll.GetThumbPosition()
max = self.slice_.GetMaxSliceNumber(self.orientation)
if(pos < max):
if self._flush_buffer:
self.slice_.apply_slice_buffer_to_mask(self.orientation)
pos = pos + 1
self.scroll.SetThumbPosition(pos)
self.OnScrollBar()
def OnSize(self, evt):
print("OnSize")
w, h = self.GetSize()
rwin = self.interactor.GetRenderWindow()
rwin.SetSize(w, h)
# if self.slice_data:
# self.slice_data.SetSize((w, h))
# evt.Skip()
def OnSetMIPSize(self, number_slices):
self.number_slices = number_slices
self.ReloadActualSlice()
def OnSetMIPBorder(self, border_size):
self.slice_.n_border = border_size
buffer_ = self.slice_.buffer_slices[self.orientation]
buffer_.discard_buffer()
self.ReloadActualSlice()
def OnSetMIPInvert(self, invert):
self._mip_inverted = invert
buffer_ = self.slice_.buffer_slices[self.orientation]
buffer_.discard_buffer()
self.ReloadActualSlice()
def OnShowMIPInterface(self, flag):
if flag:
if not self.mip_ctrls.Shown:
self.mip_ctrls.Show()
self.GetSizer().Add(self.mip_ctrls, 0, wx.EXPAND|wx.GROW|wx.ALL, 2)
self.Layout()
else:
self.mip_ctrls.Hide()
self.GetSizer().Detach(self.mip_ctrls)
self.Layout()
def OnSetOverwriteMask(self, flag):
self.overwrite_mask = flag
def set_slice_number(self, index):
max_slice_number = sl.Slice().GetNumberOfSlices(self.orientation)
if index < 0:
index = 0
if index >= max_slice_number:
index = max_slice_number - 1
inverted = self.mip_ctrls.inverted.GetValue()
border_size = self.mip_ctrls.border_spin.GetValue()
try:
image = self.slice_.GetSlices(self.orientation, index,
self.number_slices, inverted,
border_size)
except IndexError:
return
self.slice_data.actor.SetInputData(image)
for actor in self.actors_by_slice_number[self.slice_data.number]:
self.slice_data.renderer.RemoveActor(actor)
for actor in self.actors_by_slice_number[index]:
self.slice_data.renderer.AddActor(actor)
# for (m, mr) in self.measures.get(self.orientation, self.slice_data.number):
# try:
# self.canvas.draw_list.remove(mr)
# except ValueError:
# pass
# for (m, mr) in self.measures.get(self.orientation, index):
# if m.visible:
# self.canvas.draw_list.append(mr)
if self.slice_._type_projection == const.PROJECTION_NORMAL:
self.slice_data.SetNumber(index)
else:
max_slices = self.slice_.GetMaxSliceNumber(self.orientation)
end = min(max_slices, index + self.number_slices - 1)
self.slice_data.SetNumber(index, end)
self.__update_display_extent(image)
self.cross.SetModelBounds(self.slice_data.actor.GetBounds())
self._update_draw_list()
def ChangeSliceNumber(self, index):
#self.set_slice_number(index)
self.scroll.SetThumbPosition(index)
pos = self.scroll.GetThumbPosition()
self.set_slice_number(pos)
self.interactor.Render()
def ReloadActualSlice(self):
pos = self.scroll.GetThumbPosition()
self.set_slice_number(pos)
self.interactor.Render()
def OnUpdateScroll(self):
max_slice_number = sl.Slice().GetNumberOfSlices(self.orientation)
self.scroll.SetScrollbar(wx.SB_VERTICAL, 1, max_slice_number,
max_slice_number)
def OnSwapVolumeAxes(self, axes):
# Adjusting cursor spacing to match the spacing from the actual slice
# orientation
axis0, axis1 = axes
cursor = self.slice_data.cursor
spacing = cursor.spacing
if (axis0, axis1) == (2, 1):
cursor.SetSpacing((spacing[1], spacing[0], spacing[2]))
elif (axis0, axis1) == (2, 0):
cursor.SetSpacing((spacing[2], spacing[1], spacing[0]))
elif (axis0, axis1) == (1, 0):
cursor.SetSpacing((spacing[0], spacing[2], spacing[1]))
self.slice_data.renderer.ResetCamera()
def GetCrossPos(self):
spacing = self.slice_data.actor.GetInput().GetSpacing()
Publisher.sendMessage("Cross focal point", coord = self.cross.GetFocalPoint(), spacing = spacing)
def UpdateCross(self, coord):
self.cross.SetFocalPoint(coord)
Publisher.sendMessage('Co-registered points', arg=None, position=(coord[0], coord[1], coord[2], 0., 0., 0.))
self.OnScrollBar()
self.interactor.Render()
def AddActors(self, actors, slice_number):
"Inserting actors"
pos = self.scroll.GetThumbPosition()
#try:
#renderer = self.renderers_by_slice_number[slice_number]
#for actor in actors:
#renderer.AddActor(actor)
#except KeyError:
#pass
if pos == slice_number:
for actor in actors:
self.slice_data.renderer.AddActor(actor)
self.actors_by_slice_number[slice_number].extend(actors)
def RemoveActors(self, actors, slice_number):
"Remove a list of actors"
try:
renderer = self.renderers_by_slice_number[slice_number]
except KeyError:
for actor in actors:
self.actors_by_slice_number[slice_number].remove(actor)
self.slice_data.renderer.RemoveActor(actor)
else:
for actor in actors:
# Remove the actor from the renderer
renderer.RemoveActor(actor)
# and remove the actor from the actor's list
self.actors_by_slice_number[slice_number].remove(actor)
def get_actual_mask(self):
        # Returns the current mask, or None if there is no mask or no mask is
        # visible.
mask = self.slice_.current_mask
return mask
def get_slice(self):
return self.slice_
def discard_slice_cache(self, all_orientations=False, vtk_cache=True):
if all_orientations:
for orientation in self.slice_.buffer_slices:
buffer_ = self.slice_.buffer_slices[orientation]
buffer_.discard_image()
if vtk_cache:
buffer_.discard_vtk_image()
else:
buffer_ = self.slice_.buffer_slices[self.orientation]
buffer_.discard_image()
if vtk_cache:
buffer_.discard_vtk_image()
def discard_mask_cache(self, all_orientations=False, vtk_cache=True):
if all_orientations:
for orientation in self.slice_.buffer_slices:
buffer_ = self.slice_.buffer_slices[orientation]
buffer_.discard_mask()
if vtk_cache:
buffer_.discard_vtk_mask()
else:
buffer_ = self.slice_.buffer_slices[self.orientation]
buffer_.discard_mask()
if vtk_cache:
buffer_.discard_vtk_mask()
|
paulojamorim/invesalius3
|
invesalius/data/viewer_slice.py
|
Python
|
gpl-2.0
| 61,062
|
[
"VTK"
] |
95b7307ad1dfeae26039b60a029be0f70f55f2176d156b6aa90cb65d0f18ee1e
|
"Helper to quickly build instruction's semantic side effects"
import inspect
import ast
import re
import miasm2.expression.expression as m2_expr
from miasm2.ir.ir import irbloc
class MiasmTransformer(ast.NodeTransformer):
"""AST visitor translating DSL to Miasm expression
memX[Y] -> ExprMem(Y, X)
iX(Y) -> ExprIntX(Y)
X if Y else Z -> ExprCond(Y, X, Z)
'X'(Y) -> ExprOp('X', Y)
('X' % Y)(Z) -> ExprOp('X' % Y, Z)
"""
# Parsers
parse_integer = re.compile("^i([0-9]+)$")
parse_mem = re.compile("^mem([0-9]+)$")
# Visitors
def visit_Call(self, node):
"""iX(Y) -> ExprIntX(Y),
'X'(Y) -> ExprOp('X', Y), ('X' % Y)(Z) -> ExprOp('X' % Y, Z)"""
if isinstance(node.func, ast.Name):
# iX(Y) -> ExprIntX(Y)
fc_name = node.func.id
# Match the function name
new_name = fc_name
integer = self.parse_integer.search(fc_name)
# Do replacement
if integer is not None:
new_name = "ExprInt%s" % integer.groups()[0]
# Replace in the node
node.func.id = new_name
elif (isinstance(node.func, ast.Str) or
(isinstance(node.func, ast.BinOp) and
isinstance(node.func.op, ast.Mod) and
isinstance(node.func.left, ast.Str))):
# 'op'(args...) -> ExprOp('op', args...)
# ('op' % (fmt))(args...) -> ExprOp('op' % (fmt), args...)
op_name = node.func
# Do replacement
node.func = ast.Name(id="ExprOp", ctx=ast.Load())
node.args[0:0] = [op_name]
node.args = map(self.visit, node.args)
else:
# TODO: launch visitor on node
pass
return node
def visit_Subscript(self, node):
"""memX[Y] -> ExprMem(Y, X)"""
# Detect the syntax
if not isinstance(node.value, ast.Name):
return node
name = node.value.id
mem = self.parse_mem.search(name)
if mem is None:
# TODO: launch visitor on node
return node
# Do replacement
addr = self.visit(node.slice.value)
call = ast.Call(func=ast.Name(id='ExprMem', ctx=ast.Load()),
args=[addr, ast.Num(n=int(mem.groups()[0]))],
keywords=[], starargs=None, kwargs=None)
return call
def visit_IfExp(self, node):
"""X if Y else Z -> ExprCond(Y, X, Z)"""
call = ast.Call(func=ast.Name(id='ExprCond', ctx=ast.Load()),
args=[self.visit(node.test),
self.visit(node.body),
self.visit(node.orelse)],
keywords=[], starargs=None, kwargs=None)
return call
class SemBuilder(object):
"""Helper for building instruction's semantic side effects method
This class provides a decorator @parse to use on them.
The context in which the function will be parsed must be supplied on
    instantiation.
"""
def __init__(self, ctx):
"""Create a SemBuilder
        @ctx: context dictionary used during parsing
"""
# Init
self.transformer = MiasmTransformer()
self._ctx = dict(m2_expr.__dict__)
self._ctx["irbloc"] = irbloc
self._functions = {}
# Update context
self._ctx.update(ctx)
@property
def functions(self):
"""Return a dictionnary name -> func of parsed functions"""
return self._functions.copy()
@staticmethod
def _create_labels():
"""Return the AST standing for label creations"""
out = ast.parse("lbl_end = ExprId(ir.get_next_instr(instr))").body
out += ast.parse("lbl_if = ExprId(ir.gen_label())").body
return out
def _parse_body(self, body, argument_names):
"""Recursive function transforming a @body to a block expression
Return:
- AST to append to body (real python statements)
        - a list of blocks, i.e. a list of affblocks, i.e. a list of ExprAff (AST)"""
# Init
## Real instructions
real_body = []
## Final blocks
blocks = [[[]]]
for statement in body:
if isinstance(statement, ast.Assign):
src = self.transformer.visit(statement.value)
dst = self.transformer.visit(statement.targets[0])
if (isinstance(dst, ast.Name) and
dst.id not in argument_names and
dst.id not in self._ctx):
# Real variable declaration
statement.value = src
real_body.append(statement)
continue
dst.ctx = ast.Load()
res = ast.Call(func=ast.Name(id='ExprAff',
ctx=ast.Load()),
args=[dst, src],
keywords=[],
starargs=None,
kwargs=None)
blocks[-1][-1].append(res)
elif (isinstance(statement, ast.Expr) and
isinstance(statement.value, ast.Str)):
# String (docstring, comment, ...) -> keep it
real_body.append(statement)
elif (isinstance(statement, ast.If) and
not statement.orelse):
# Create jumps : ir.IRDst = lbl_if if cond else lbl_end
cond = statement.test
real_body += self._create_labels()
lbl_end = ast.Name(id='lbl_end', ctx=ast.Load())
lbl_if = ast.Name(id='lbl_if', ctx=ast.Load())
dst = ast.Call(func=ast.Name(id='ExprCond',
ctx=ast.Load()),
args=[cond,
lbl_if,
lbl_end],
keywords=[],
starargs=None,
kwargs=None)
if (isinstance(cond, ast.UnaryOp) and
isinstance(cond.op, ast.Not)):
## if not cond -> switch exprCond
dst.args[1:] = dst.args[1:][::-1]
dst.args[0] = cond.operand
IRDst = ast.Attribute(value=ast.Name(id='ir',
ctx=ast.Load()),
attr='IRDst', ctx=ast.Load())
blocks[-1][-1].append(ast.Call(func=ast.Name(id='ExprAff',
ctx=ast.Load()),
args=[IRDst, dst],
keywords=[],
starargs=None,
kwargs=None))
# Create the new blocks
sub_blocks, sub_body = self._parse_body(statement.body,
argument_names)
if len(sub_blocks) > 1:
raise RuntimeError("Imbricated if unimplemented")
## Close the last block
jmp_end = ast.Call(func=ast.Name(id='ExprAff',
ctx=ast.Load()),
args=[IRDst, lbl_end],
keywords=[],
starargs=None,
kwargs=None)
sub_blocks[-1][-1].append(jmp_end)
sub_blocks[-1][-1] = ast.List(elts=sub_blocks[-1][-1],
ctx=ast.Load())
sub_blocks[-1] = ast.List(elts=sub_blocks[-1],
ctx=ast.Load())
## Replace the block with a call to 'irbloc'
lbl_if_name = ast.Attribute(value=ast.Name(id='lbl_if',
ctx=ast.Load()),
attr='name', ctx=ast.Load())
sub_blocks[-1] = ast.Call(func=ast.Name(id='irbloc',
ctx=ast.Load()),
args=[lbl_if_name,
sub_blocks[-1]],
keywords=[],
starargs=None,
kwargs=None)
blocks += sub_blocks
real_body += sub_body
# Prepare a new block for following statement
blocks.append([[]])
else:
# TODO: real var, +=, /=, -=, <<=, >>=, if/else, ...
raise RuntimeError("Unimplemented %s" % statement)
return blocks, real_body
def parse(self, func):
"""Function decorator, returning a correct method from a pseudo-Python
one"""
# Get the function AST
parsed = ast.parse(inspect.getsource(func))
fc_ast = parsed.body[0]
argument_names = [name.id for name in fc_ast.args.args]
# Translate (blocks[0][0] is the current instr)
blocks, body = self._parse_body(fc_ast.body, argument_names)
# Build the new function
fc_ast.args.args[0:0] = [ast.Name(id='ir', ctx=ast.Param()),
ast.Name(id='instr', ctx=ast.Param())]
cur_instr = blocks[0][0]
if len(blocks[-1][0]) == 0:
## Last block can be empty
blocks.pop()
other_blocks = blocks[1:]
body.append(ast.Return(value=ast.Tuple(elts=[ast.List(elts=cur_instr,
ctx=ast.Load()),
ast.List(elts=other_blocks,
ctx=ast.Load())],
ctx=ast.Load())))
ret = ast.Module([ast.FunctionDef(name=fc_ast.name,
args=fc_ast.args,
body=body,
decorator_list=[])])
# To display the generated function, use codegen.to_source
# codegen: https://github.com/andreif/codegen
# Compile according to the context
fixed = ast.fix_missing_locations(ret)
codeobj = compile(fixed, '<string>', 'exec')
ctx = self._ctx.copy()
eval(codeobj, ctx)
# Get the function back
self._functions[fc_ast.name] = ctx[fc_ast.name]
return ctx[fc_ast.name]
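# Usage sketch (illustrative only; the context keys and register widths below
# are assumptions, not taken from the module itself):
#
#     ctx = {"EAX": m2_expr.ExprId("EAX", 32), "EBX": m2_expr.ExprId("EBX", 32)}
#     sb = SemBuilder(ctx)
#
#     @sb.parse
#     def mov(arg1, arg2):
#         arg1 = arg2              # rewritten to ExprAff(arg1, arg2)
#
#     # sb.functions["mov"] is then a method mov(ir, instr, arg1, arg2)
#     # returning (ExprAff list of the current instruction, extra irbloc list).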
|
kod3r/miasm
|
miasm2/core/sembuilder.py
|
Python
|
gpl-2.0
| 10,921
|
[
"VisIt"
] |
3ccc66533b574ea6a08cda90fce3eaaad29c20b9447127c883a1e02631e0cdcd
|
import tornado.ioloop
import tornado.web
import string
import random
import sys  # the handlers below read the SMB host from sys.argv[1]
"""
This script redirects all requests to a SMB server (Redirect to SMB)
Developed by Brian Wallace @botnet_hutner
"""
class RedirectAll(tornado.web.RequestHandler):
def get(self):
self.set_status(302, "Found")
self.redirect("file://{0}/redirected-{1}".format(sys.argv[1], ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(8))))
def post(self):
self.set_status(302, "Found")
self.redirect("file://{0}/redirected-{1}".format(sys.argv[1], ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(8))))
def head(self):
self.set_status(302, "Found")
self.redirect("file://{0}/redirected-{1}".format(sys.argv[1], ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(8))))
def options(self):
self.set_status(302, "Found")
self.redirect("file://{0}/redirected-{1}".format(sys.argv[1], ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(8))))
def put(self):
self.set_status(302, "Found")
self.redirect("file://{0}/redirected-{1}".format(sys.argv[1], ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(8))))
application = tornado.web.Application([
(r".*", RedirectAll),
])
if __name__ == "__main__":
import sys
port = 8080
if len(sys.argv) > 2:
port = int(sys.argv[2])
application.listen(port)
tornado.ioloop.IOLoop.instance().start()
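# Usage sketch, derived from the argument handling above (no upstream docs assumed):
#     python redirect_server.py <smb-host> [listen-port]
# Every HTTP verb is answered with a 302 to file://<smb-host>/redirected-<8 random chars>,
# which makes Windows clients open an outbound SMB connection to <smb-host>.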
|
CylanceSPEAR/SMBTrap
|
smbtrap/redirect_server.py
|
Python
|
mit
| 1,559
|
[
"Brian"
] |
a54a0adf925c459558510a56aa146357a1af2d94fd03eeb6dccd1ef1fbe1a5e1
|
"""
visualization of netCDF data
"""
from matplotlib import pyplot as plt
from matplotlib import colors
# from matplotlib.patches import Polygon
import matplotlib.patches as mpatches
import cartopy.feature as cfeature
import cartopy.crs as ccrs
from cartopy.util import add_cyclic_point
# from flyingpigeon.nc_statistic import fieldmean
from flyingpigeon.nc_utils import get_variable, get_coordinates
from flyingpigeon.nc_utils import get_time, sort_by_filename, get_values
from flyingpigeon.plt_utils import fig2plot
from numpy import meshgrid
from netCDF4 import Dataset
import numpy as np
import pandas as pd
from datetime import datetime as dt
from tempfile import mkstemp
# from matplotlib import use
# use('Agg') # use this if no xserver is available
import logging
LOGGER = logging.getLogger("PYWPS")
class MidpointNormalize(colors.Normalize):
def __init__(self, vmin=None, vmax=None, vcenter=None, clip=False):
self.vcenter = vcenter
colors.Normalize.__init__(self, vmin, vmax, clip)
def __call__(self, value, clip=None):
# I'm ignoring masked values and all kinds of edge cases to make a
# simple example...
x, y = [self.vmin, self.vcenter, self.vmax], [0, 0.5, 1]
return np.ma.masked_array(np.interp(value, x, y))
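# Hedged example of MidpointNormalize (variable names are illustrative only):
#     norm = MidpointNormalize(vmin=-2.0, vcenter=0.0, vmax=6.0)
#     plt.pcolormesh(lons, lats, anomaly, cmap='seismic', norm=norm)
# This keeps 0.0 pinned to the middle of a diverging colormap even when the
# value range is asymmetric around zero.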
def add_colorbar(im, aspect=20, pad_fraction=0.5,):
"""Add a vertical color bar to an image plot."""
from mpl_toolkits import axes_grid1
divider = axes_grid1.make_axes_locatable(im.axes)
width = axes_grid1.axes_size.AxesY(im.axes, aspect=1./aspect)
pad = axes_grid1.axes_size.Fraction(pad_fraction, width)
current_ax = plt.gca()
cax = divider.append_axes("right", size=width, pad=pad)
plt.sca(current_ax)
return im.axes.figure.colorbar(im, cax=cax)
def plot_extend(resource, file_extension='png'):
"""
plots the extent (domain) of the values stored in a netCDF file:
:param resource: path to netCDF file
:param file_extension: file format of the graphic. If file_extension=None a matplotlib figure will be returned
:return graphic: graphic in specified format
"""
lats, lons = get_coordinates(resource, unrotate=True)
# box_top = 45
# x, y = [-20, -20, 45, 45, -44], [-45, box_top, box_top, -45, -45]
xy = np.array([[np.min(lons), np.min(lats)],
[np.max(lons), np.min(lats)],
[np.max(lons), np.max(lats)],
[np.min(lons), np.max(lats)]])
fig = plt.figure(figsize=(20, 10), dpi=600, facecolor='w', edgecolor='k')
projection = ccrs.Robinson()
# ccrs.Orthographic(central_longitude=np.mean(xy[:, 0]),
# central_latitude=np.mean(xy[:, 1]),
# globe=None) # Robinson()
ax = plt.axes(projection=projection)
ax.stock_img()
ax.coastlines()
ax.add_patch(mpatches.Polygon(xy, closed=True, transform=ccrs.PlateCarree(), color='coral', alpha=0.6))
# ccrs.Geodetic()
ax.gridlines()
plt.show()
if file_extension is None:
map_graphic = fig
else:
map_graphic = fig2plot(fig=fig, file_extension=file_extension)
plt.close()
return map_graphic
def plot_ts_spaghetti(resource, variable=None, ylim=None, title=None,
file_extension='png', delta=0, dir_output='.',
figsize=(10, 10)):
"""
creates a png file containing a spaghetti plot of the
field mean of the values.
:param resource: list of files containing the same variable
:param variable: variable to be visualised. If None (default), variable will be detected
:param title: string to be used as title
:param ylim: Y-axis limitations: tuple(min,max)
:param figsize: figure size default=(10,10)
:returns str: path to png file
"""
try:
fig = plt.figure(figsize=figsize, dpi=600, facecolor='w', edgecolor='k')
LOGGER.debug('Start visualisation spaghetti plot')
# === prepare environment
if type(resource) != list:
resource = [resource]
if variable is None:
variable = get_variable(resource[0])
LOGGER.info('plot values preparation done')
except Exception as ex:
msg = "plot values preparation failed {}".format(ex)
LOGGER.exception(msg)
raise Exception(msg)
try:
for c, nc in enumerate(resource):
try:
# dt = get_time(nc)
# ts = fieldmean(nc)
if 'historical' in nc:
col = 'grey'
elif 'evaluation' in nc:
col = 'black'
elif 'rcp26' in nc:
col = 'blue'
elif 'rcp85' in nc:
col = 'red'
else:
col = 'green'
dt = get_time(nc)
# [datetime.strptime(elem, '%Y-%m-%d') for elem in strDate[0]]
# ts = fieldmean(nc)
ds = Dataset(nc)
var = get_variable(nc)
tg_val = np.squeeze(ds.variables[var][:])
d2 = np.nanmean(tg_val, axis=1)
ts = np.nanmean(d2, axis=1)
plt.plot(dt, ts, col)
plt.grid()
plt.title(title)
#
# plt.plot(dt, ts)
# fig.line( dt,ts )
except Exception as e:
msg = "spaghetti plot failed for {} : {}".format(nc, e)
LOGGER.exception(msg)
plt.title(title, fontsize=20)
plt.ylim(ylim)
plt.xticks(fontsize=16, rotation=45)
plt.yticks(fontsize=16)
plt.grid()
output_png = fig2plot(fig=fig, file_extension=file_extension, dir_output=dir_output)
plt.close()
LOGGER.info('timeseries spaghetti plot done for %s with %s lines.' % (variable, c))
except Exception as ex:
msg = 'matplotlib spaghetti plot failed: {}'.format(ex)
LOGGER.exception(msg)
return output_png
def ts_data(datasets, delta=0):
"""
Creates a pandas DataFrame out of the netCDF datasets
:param datasets: a sort_by_filename dictionary of datasets
:param delta: set a delta for the values e.g. -273.15 to convert Kelvin to Celsius
:returns DataFrame: dates
"""
# Create index out of existing timestamps
for i, key in enumerate(datasets.keys()):
for nc in datasets[key]:
ds = Dataset(nc)
ts = get_time(nc)
if i == 0:
dates = pd.DatetimeIndex(ts)
else:
dates = dates.union(ts)
# create empty DataFrame according to existing timestamps
df = pd.DataFrame(columns=list(datasets.keys()), index=dates)
for key in datasets.keys():
try:
for nc in datasets[key]:
ds = Dataset(nc)
var = get_variable(nc)
ts = get_time(nc)
tg_val = np.squeeze(ds.variables[var][:])
d2 = np.nanmean(tg_val, axis=1)
data = np.nanmean(d2, axis=1) + delta
df[key].loc[ts] = data
# data = fieldmean(dic[key]) # get_values(f)
# ts = get_time(dic[key])
# ds = pd.Series(data=data, index=ts, name=key)
# # ds_yr = ds.resample('12M', ).mean() # yearly mean loffset='6M'
# df[key] = ds
LOGGER.debug('read in pandas series timeseries for: {}'.format(key))
except Exception:
LOGGER.exception('failed to read data timeseries for %s ' % (key))
return df
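# Hedged usage sketch for ts_data (file names below are hypothetical):
#     dic = sort_by_filename(['tas_rcp26_r1.nc', 'tas_rcp85_r1.nc'],
#                            historical_concatination=True)
#     df = ts_data(dic, delta=-273.15)   # Kelvin -> Celsius
# The result has one column per dataset key, indexed by the union of all
# timestamps found in the input files.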
def plot_ts_uncertainty(resource, variable=None, ylim=None, title=None,
file_extension='png', delta=0, window=None, dir_output=None,
figsize=(10, 10)):
"""
creates a png file containing the appropriate uncertainty plot.
:param resource: list of files containing the same variable
:param delta: set a delta for the values e.g. -273.15 to convert Kelvin to Celsius
:param variable: variable to be visualised. If None (default), variable will be detected
:param ylim: Y-axis limitations: tuple(min,max)
:param title: string to be used as title
:param figsize: figure size default=(10,10)
:param window: window size of the rolling mean
:returns str: path/to/file.png
"""
LOGGER.debug('Start visualisation uncertainty plot')
#
# from flyingpigeon.utils import get_time, sort_by_filename
# from flyingpigeon.nc_statistic import fieldmean
# from flyingpigeon.metadata import get_frequency
# === prepare environment
if type(resource) == str:
resource = list([resource])
if variable is None:
variable = get_variable(resource[0])
if title is None:
title = "Field mean of %s " % variable
LOGGER.info('variable %s found in resource.' % variable)
try:
fig = plt.figure(figsize=figsize, facecolor='w', edgecolor='k')
dic = sort_by_filename(resource, historical_concatination=True)
df = ts_data(dic, delta=delta)
if window is None:
window = 10 # TODO: include detection of frq = get_frequency(resource[0])
if len(df.index.values) >= window * 2:
# TODO: calculate window size according to timestamps (day, mon, yr ... with get_frequency)
df_smooth = df.rolling(window=window, center=True).mean()
LOGGER.info('rolling mean calculated for all input data')
else:
df_smooth = df.copy()
LOGGER.debug('timeseries too short for moving mean')
fig.text(0.95, 0.05, '!!! timeseries too short for moving mean over 30years !!!',
fontsize=20, color='red',
ha='right', va='bottom', alpha=0.5)
try:
rmean = np.squeeze(df_smooth.quantile([0.5], axis=1,).values)
# skipna=False quantile([0.5], axis=1, numeric_only=False )
q05 = np.squeeze(df_smooth.quantile([0.10], axis=1,).values) # numeric_only=False)
q33 = np.squeeze(df_smooth.quantile([0.33], axis=1,).values) # numeric_only=False)
q66 = np.squeeze(df_smooth.quantile([0.66], axis=1,).values) # numeric_only=False)
q95 = np.squeeze(df_smooth.quantile([0.90], axis=1,).values) # numeric_only=False)
LOGGER.info('quantile calculated for all input data')
except Exception as e:
LOGGER.exception('failed to calculate quantiles: {}'.format(e))
try:
x = pd.to_datetime(df.index.values)
x1 = x[x <= dt.strptime('2005-12-31', "%Y-%m-%d")]
x2 = x[len(x1)-1:] # -1 to catch up with the last historical value
plt.fill_between(x, q05, q95, alpha=0.5, color='grey')
plt.fill_between(x, q33, q66, alpha=0.5, color='grey')
plt.plot(x1, rmean[:len(x1)], c='blue', lw=3)
plt.plot(x2, rmean[len(x1)-1:], c='r', lw=3)
# plt.xlim(min(df.index.values), max(df.index.values))
plt.ylim(ylim)
plt.xticks(fontsize=16, rotation=45)
plt.yticks(fontsize=16,)
plt.title(title, fontsize=20)
plt.grid() # .grid_line_alpha=0.3
output_png = fig2plot(fig=fig, file_extension=file_extension, dir_output=dir_output)
plt.close()
LOGGER.debug('timeseries uncertainty plot done for %s' % variable)
except Exception as e:
raise Exception('failed to calculate quantiles. {}'.format(e))
except Exception as e:
LOGGER.exception('uncertainty plot failed for {}: {}'.format(variable, e))
_, output_png = mkstemp(dir=dir_output, suffix='.png')
return output_png
def plot_ts_uncertaintyrcp(resource, variable=None, ylim=None, title=None,
file_extension='png', delta=0, window=None, dir_output=None,
figsize=(10, 10)):
"""
creates a png file containing the appropriate uncertainty plot.
:param resource: list of files containing the same variable
:param delta: set a delta for the values e.g. -273.15 to convert Kelvin to Celsius
:param variable: variable to be visualised. If None (default), variable will be detected
:param ylim: Y-axis limitations: tuple(min,max)
:param title: string to be used as title
:param figsize: figure size default=(10,10)
:param window: window size of the rolling mean
:returns str: path/to/file.png
"""
LOGGER.debug('Start visualisation uncertainty plot')
#
# from flyingpigeon.utils import get_time, sort_by_filename
# from flyingpigeon.nc_statistic import fieldmean
# from flyingpigeon.metadata import get_frequency
# === prepare environment
if type(resource) == str:
resource = list([resource])
if variable is None:
variable = get_variable(resource[0])
if title is None:
title = "Field mean of %s " % variable
LOGGER.info('variable %s found in resource.' % variable)
try:
fig = plt.figure(figsize=figsize, facecolor='w', edgecolor='k')
dic = sort_by_filename(resource, historical_concatination=True)
df = ts_data(dic, delta=delta)
if window is None:
# if frq == 'day':
# window = 1095 # 1
# elif frq == 'man':
# window = 35 # 9
# elif frq == 'sem':
# window = 11 # 9
# elif frq == 'yr':
# window = 3 # 0
# else:
# LOGGER.debug('frequency %s is not included' % frq)
window = 10
# TODO: include detection of frq = get_frequency(resource[0])
if len(df.index.values) >= window * 2:
# TODO: calculate window size according to timestamps (day, mon, yr ... with get_frequency)
df_smooth = df.rolling(window=window, center=True).mean()
LOGGER.info('rolling mean calculated for all input data')
else:
df_smooth = df.copy()
LOGGER.debug('timeseries too short for moving mean')
fig.text(0.95, 0.05, '!!! timeseries too short for moving mean over 30years !!!',
fontsize=20, color='red',
ha='right', va='bottom', alpha=0.5)
# split into different RCPs:
# TODO: include rcp45 and 65
rcp26 = [ds for ds in df_smooth.columns if 'rcp26' in ds]
rcp85 = [ds for ds in df_smooth.columns if 'rcp85' in ds]
df_rcp26 = df_smooth[rcp26]
df_rcp85 = df_smooth[rcp85]
# for rcp26:
try:
rcp26_rmean = np.squeeze(df_rcp26.quantile([0.5], axis=1,).values)
# skipna=False quantile([0.5], axis=1, numeric_only=False )
rcp26_q05 = np.squeeze(df_rcp26.quantile([0.10], axis=1,).values)
rcp26_q33 = np.squeeze(df_rcp26.quantile([0.33], axis=1,).values)
rcp26_q66 = np.squeeze(df_rcp26.quantile([0.66], axis=1,).values)
rcp26_q95 = np.squeeze(df_rcp26.quantile([0.90], axis=1,).values)
LOGGER.info('quantile calculated for all input data')
except Exception as e:
LOGGER.exception('failed to calculate quantiles: {}'.format(e))
try:
rcp85_rmean = np.squeeze(df_rcp85.quantile([0.5], axis=1,).values)
# skipna=False quantile([0.5], axis=1, numeric_only=False )
rcp85_q05 = np.squeeze(df_rcp85.quantile([0.10], axis=1,).values)
rcp85_q33 = np.squeeze(df_rcp85.quantile([0.33], axis=1,).values)
rcp85_q66 = np.squeeze(df_rcp85.quantile([0.66], axis=1,).values)
rcp85_q95 = np.squeeze(df_rcp85.quantile([0.90], axis=1,).values)
LOGGER.info('quantile calculated for all input data')
except Exception as e:
LOGGER.exception('failed to calculate quantiles: {}'.format(e))
# plot for rcp26:
try:
x = pd.to_datetime(df.index.values)
x1 = x[x <= dt.strptime('2005-12-31', "%Y-%m-%d")]
x2 = x[len(x1)-1:] # -1 to catch up with the last historical value
plt.fill_between(x, rcp26_q05, rcp26_q95, alpha=0.5, color='grey')
plt.fill_between(x, rcp26_q33, rcp26_q66, alpha=0.5, color='grey')
plt.fill_between(x2, rcp85_q05[len(x1)-1:], rcp85_q95[len(x1)-1:],
alpha=0.5, color='grey')
plt.fill_between(x2, rcp85_q33[len(x1)-1:], rcp85_q66[len(x1)-1:],
alpha=0.5, color='grey')
plt.plot(x1, rcp26_rmean[:len(x1)], c='blue', lw=3)
plt.plot(x2, rcp26_rmean[len(x1)-1:], c='green', lw=3)
plt.plot(x1, rcp85_rmean[:len(x1)], c='blue', lw=3)
plt.plot(x2, rcp85_rmean[len(x1)-1:], c='red', lw=3)
# plt.xlim(min(df.index.values), max(df.index.values))
plt.ylim(ylim)
plt.xticks(fontsize=16, rotation=45)
plt.yticks(fontsize=16,)
plt.title(title, fontsize=20)
plt.grid() # .grid_line_alpha=0.3
output_png = fig2plot(fig=fig, file_extension=file_extension, dir_output=dir_output)
plt.close()
LOGGER.debug('timeseries uncertainty plot done for %s' % variable)
except Exception as e:
raise Exception('failed to calculate quantiles. {}'.format(e))
except Exception as e:
LOGGER.exception('uncertainty plot failed for {}: {}'.format(variable, e))
_, output_png = mkstemp(dir=dir_output, suffix='.png')
return output_png
def plot_map_timemean(resource, variable=None, time_range=None,
title=None, delta=0, cmap=None, vmin=None, vmax=None, figsize=(15, 15),
file_extension='png', dir_output='.'):
"""
creates a spatial map of the mean over the timesteps.
If multiple files are provided, the mean over all files is considered.
:param resource: netCDF file(s) containng spatial values to be plotted.
:param variable: variable to be visualised. If None (default), variable will be detected
:param title: string to be used as title
:param delta: set a delta for the values e.g. -273.15 to convert Kelvin to Celsius
:param figsize: figure size default=(15,15)
:param vmin: colorbar minimum
:param vmax: colorbar maximum
:param file_extension: file extension for the graphic
:param dir_output: output directory to store the output graphic
:returns str: path/to/file.png
"""
# from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
try:
LOGGER.debug('plot_map function read in values for {}'.format(resource))
# get values of netcdf file
if type(resource) == str:
ds = Dataset(resource)
if variable is None:
variable = get_variable(resource)
var = ds.variables[variable]
dims = var.dimensions
lat = ds.variables[dims[-2]]
lon = ds.variables[dims[-1]]
lons, lats = meshgrid(lon, lat)
var = get_values(resource, time_range=time_range, variable=variable).data
var_mean = np.nanmean(var, axis=0) + delta
# mean over the whole period (30 years, 1981-2010) and transform to Celsius
else:
for i, f in enumerate(resource):
if i == 0:
if variable is None:
variable = get_variable(f)
ds = Dataset(f)
var = ds.variables[variable]
dims = var.dimensions
lat = ds.variables[dims[-2]]
lon = ds.variables[dims[-1]]
lons, lats = meshgrid(lon, lat)
vals = get_values(f, time_range=time_range, variable=variable).data
else:
vals = np.append(vals, get_values(f, time_range=time_range, variable=variable).data, axis=0)
var_mean = np.nanmean(vals, axis=0) + delta
# prepare plot
LOGGER.info('preparing matplotlib figure')
fig = plt.figure(figsize=figsize, facecolor='w', edgecolor='k')
ax = plt.axes(projection=ccrs.PlateCarree())
# choose a default colormap before plotting so pcolormesh picks it up
if cmap is None:
    if variable in ['pr', 'prAdjust',
                    'prcptot', 'rx1day', 'wetdays',
                    'cdd', 'cwd', 'sdii']:
        cmap = 'Blues'
    elif variable in ['tas', 'tasAdjust', 'tg', 'tg_mean']:
        cmap = 'seismic'
cs = plt.pcolormesh(lons, lats, var_mean,
transform=ccrs.PlateCarree(), cmap=cmap,
vmin=vmin, vmax=vmax,
)
# extent=(-0,17,10.5,24)
# ax.set_extent(extent)
ax.add_feature(cfeature.BORDERS, linewidth=2, linestyle='--')
ax.add_feature(cfeature.COASTLINE, linewidth=2,)
# ax.add_feature(cfeature.RIVERS)
# ax.stock_img()
# ax.gridlines(draw_labels=False)
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True,
linewidth=2, color='gray', alpha=0.5, linestyle='--')
gl.xlabels_top = False
gl.ylabels_right = False
gl.ylabels_left = False
# gl.xlines = False
# gl.xlocator = mticker.FixedLocator([0, 2,4,6,8,10,12,14,16] )
# gl.xformatter = LONGITUDE_FORMATTER
# gl.yformatter = LATITUDE_FORMATTER
gl.xlabel_style = {'size': 15, 'color': 'black', 'weight': 'bold'}
gl.ylabel_style = {'size': 15, 'color': 'black', 'weight': 'bold'}
plt.title(title, fontsize=25)
cax = fig.add_axes([ax.get_position().x1 + 0.1, ax.get_position().y0,
0.02, ax.get_position().height])
cbar = plt.colorbar(cs, cax=cax)
cbar.ax.tick_params(labelsize=20)
# ticklabs = cbar.ax.get_yticklabels()
# cbar.ax.set_yticklabels(ticklabs, fontsize=15)
# cb = add_colorbar(cs)
LOGGER.info('Matplotlib pcolormesh plot done')
output_png = fig2plot(fig=fig, file_extension='png',
dir_output=dir_output)
plt.close()
LOGGER.debug('Plot done for %s' % variable)
except Exception as e:
raise Exception('failed to plot netCDF file: {}'.format(e))
return output_png
def plot_map_ccsignal(signal, robustness=None,
variable=None, cmap=None, title=None,
file_extension='png', vmin=None, vmax=None, dir_output=None): # 'seismic'
"""
generates a graphic for the output of the ensembleRobustness process for a lat/long file.
:param signal: netCDF file containing the signal difference over time
:param robustness: netCDF file containing 1 and 0 corresponding to signal robustness
:param variable: variable containing the netCDF files
:param cmap: default='seismic',
:param title: default='Model agreement of signal'
:returns str: path/to/file.png
"""
if variable is None:
variable = get_variable(signal)
print('found variable in file {}'.format(variable))
try:
ds = Dataset(signal)
var_signal = ds.variables[variable]
val_signal = np.squeeze(ds.variables[variable])
lon_att = var_signal.dimensions[-1]
lat_att = var_signal.dimensions[-2]
lon = ds.variables[lon_att][:]
lat = ds.variables[lat_att][:]
lons, lats = meshgrid(lon, lat)
ds.close()
if robustness is not None:
ds = Dataset(robustness)
var_rob = get_variable(robustness)
val_rob = np.squeeze(ds.variables[var_rob][:])
ds.close()
# mask = val_signal[:] # [val_signal[:]<val_std[:]]
# mask_h = np.empty(list(val_signal[:].shape)) # [[val_signal[:] > val_std[:]]] = 1
# mask_h[(val_signal >= (val_std / 4.))] = 1 #[:]
#
# mask_l = np.empty(list(val_signal[:].shape)) # [[val_signal[:] > val_std[:]]] = 1
# mask_l[mask_h != 1] = 1
# cyclic_var, cyclic_lons = add_cyclic_point(var_signal, coord=lons)
# mask, cyclic_lons = add_cyclic_point(mask, coord=lons)
#
# lons = cyclic_lons
# var_signal = cyclic_var
LOGGER.info('prepared data for plotting')
except Exception as e:
msg = 'failed to get data for plotting: {}'.format(e)
LOGGER.exception(msg)
raise Exception(msg)
try:
fig = plt.figure(figsize=(20, 10), facecolor='w', edgecolor='k')
ax = plt.axes(projection=ccrs.PlateCarree())
# ax = plt.axes(projection=ccrs.Robinson(central_longitude=int(mean(lons))))
# minval = round(np.nanmin(var_signal))
if cmap is None:
if variable in ['pr', 'prAdjust',
'prcptot', 'rx1day',
'wetdays', # 'cdd',
'cwd', 'sdii',
'rx5day']:
cmap = 'BrBG'
elif variable in ['tas', 'tasAdjust', 'tg', 'tg_mean']:
cmap = 'seismic'
else:
cmap = 'viridis'
LOGGER.debug('variable {} not found to set the colormap'.format(variable))
maxval = round(np.nanmax(val_signal)+.5)
minval = round(np.nanmin(val_signal))
norm = MidpointNormalize(vmin=minval, vcenter=0, vmax=maxval)
# )vcenter=0,,
cs = plt.pcolormesh(lons, lats, val_signal,
transform=ccrs.PlateCarree(),
cmap=cmap, norm=norm, vmin=vmin, vmax=vmax)
plt.colorbar(cs)
if robustness is not None:
plt.contourf(lons, lats, val_rob, transform=ccrs.PlateCarree(),
hatches=[None, '/', '.'], alpha=0, colors='none', cmap=None) # colors='white'
# cl = plt.contourf(lons, lats, mask_l, 1,
# transform=ccrs.PlateCarree(), hatches=[None, '/'], alpha=0, colors='none', cmap=None) # ,
plt.annotate('// = low model ensemble agreement', (0, 0), (0, -10),
xycoords='axes fraction', textcoords='offset points', va='top')
plt.annotate('.. = high model ensemble agreement', (0, 0), (0, -20),
xycoords='axes fraction', textcoords='offset points', va='top')
ax.add_feature(cfeature.BORDERS, linewidth=2, linestyle='--')
ax.add_feature(cfeature.COASTLINE, linewidth=2,) # coastlines()
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True,
linewidth=2, color='gray', alpha=0.5, linestyle='--')
gl.xlabels_top = False
gl.ylabels_right = False
gl.xlabel_style = {'size': 15, 'color': 'black'}
gl.ylabel_style = {'size': 15, 'color': 'black'}
if title is not None:
plt.title(title, fontsize=20)
plt.xticks(fontsize=16, rotation=45)
plt.yticks(fontsize=16)
graphic = fig2plot(fig=fig, file_extension=file_extension, dir_output=dir_output)
plt.close()
LOGGER.info('Plot created and figure saved')
except Exception as e:
msg = 'failed to plot graphic: {}'.format(e)
LOGGER.exception(msg)
return graphic
def plot_map_spatialanalog(ncfile, variable='dissimilarity',
cmap='viridis', title='Spatial analog'):
"""
Return a matplotlib Figure instance showing a map of the dissimilarity measure.
"""
import netCDF4 as nc
from flyingpigeon import nc_utils
from mpl_toolkits.axes_grid1 import make_axes_locatable
import matplotlib.axes as maxes
try:
var = nc_utils.get_values(ncfile, variable)
LOGGER.info('Data loaded')
lats, lons = nc_utils.get_coordinates(ncfile, variable=variable, unrotate=False)
if len(lats.shape) == 1:
cyclic_var, cyclic_lons = add_cyclic_point(var, coord=lons)
lons = cyclic_lons.data
var = cyclic_var
with nc.Dataset(ncfile) as D:
V = D.variables[variable]
lon, lat = map(float, V.target_location.split(','))
LOGGER.info('Lat and lon loaded')
except Exception as e:
msg = 'Failed to get data for plotting: {0}\n{1}'.format(ncfile, e)
LOGGER.exception(msg)
raise Exception(msg)
try:
fig = plt.figure(facecolor='w', edgecolor='k')
fig.subplots_adjust(top=.95, bottom=.05, left=.03, right=.95)
ax = plt.axes(
projection=ccrs.Robinson(central_longitude=int(np.mean(lons))))
divider = make_axes_locatable(ax)
cax = divider.new_horizontal("4%", pad=0.15, axes_class=maxes.Axes)
fig.add_axes(cax)
ax.plot(lon, lat, marker='o', mfc='#292421', ms=13, transform=ccrs.PlateCarree())
ax.plot(lon, lat, marker='o', mfc='#ffffff', ms=7, transform=ccrs.PlateCarree())
cs = ax.contourf(lons, lats, var, 60,
transform=ccrs.PlateCarree(),
cmap=cmap, interpolation='nearest')
ax.coastlines(color='k', linewidth=.8)
ax.set_title(title)
cb = plt.colorbar(cs, cax=cax, orientation='vertical')
cb.set_label(u"– Dissimilarity +") # ha='left', va='center')
cb.set_ticks([])
except Exception as ex:
msg = 'failed to plot graphic {}'.format(ex)
LOGGER.exception(msg)
LOGGER.info('Plot created and figure saved')
return fig
|
bird-house/flyingpigeon
|
flyingpigeon/plt_ncdata.py
|
Python
|
apache-2.0
| 29,740
|
[
"NetCDF"
] |
8dcaf53a5cb9b66336184eebb1f959edf4eb6b942fef9a3dce06b1aa3c9d5992
|
import random
import getopt
import sys
import errno
import base64
optlist, args = getopt.getopt(sys.argv[1:], '?c:w:dhob6ks:X:N:', ["help"])
if len(args) > 0:
sys.stderr.write("%s: Error: Unexpected command line argument '%s'.\n" % (sys.argv[0], args[0]))
sys.exit(errno.EINVAL)
FMT_DEC = 0
FMT_HEX = 1
FMT_OCT = 2
FMT_BIN = 3
FMT_B64 = 4
def fmtbin(n, w):
s = bin(n)[2:]
while len(s) < w:
s = "0" + s
return s
fmtdec = lambda n, w : ("%0" + str(w) + "d") % n
fmt = FMT_DEC
formatter = fmtdec
count = 1
width = 0
mn = 0
mx = 255
sep = " "
for k, v in optlist:
if k == '-c':
try:
count = int(v, 0)
except ValueError, e:
sys.stderr.write("%s: Error: Invalid parameter for option '%s'.\n" % (sys.argv[0], k))
sys.exit(errno.EINVAL)
elif k == '-w':
try:
width = int(v, 0)
except ValueError, e:
sys.stderr.write("%s: Error: Invalid parameter for option '%s'.\n" % (sys.argv[0], k))
sys.exit(errno.EINVAL)
elif k == '-s':
sep = v
elif k == '-k':
sep = "\n"
elif k == '-d':
fmt = FMT_DEC
formatter = fmtdec
elif k == '-h':
fmt = FMT_HEX
formatter = lambda n, w : ("%0" + str(w) + "x") % n
elif k == '-o':
fmt = FMT_OCT
formatter = lambda n, w : ("%0" + str(w) + "o") % n
elif k == '-b':
fmt = FMT_BIN
formatter = fmtbin
elif k == '-6':
fmt = FMT_B64
elif k == '-X':
try:
mx = int(v, 0)
except ValueError, e:
sys.stderr.write("%s: Error: Invalid parameter for option '%s'.\n" % (sys.argv[0], k))
sys.exit(errno.EINVAL)
elif k == '-N':
try:
mn = int(v, 0)
except ValueError, e:
sys.stderr.write("%s: Error: Invalid parameter for option '%s'.\n" % (sys.argv[0], k))
sys.exit(errno.EINVAL)
elif k in ("-?", "--help"):
print "rand - 1.0.0 - 2012 April 09"
print "Brian Mearns <bmearns@ieee.org>"
print ""
print "Usage: %s [options]" % sys.argv[0]
print ""
print "Options:"
print " -c COUNT Specify the number of random values to output. (Default is 1)."
print " -w WIDTH Specify the minimum number of characters to output for each"
print " element. (Elements with fewer characters will be 0 padded on"
print " the left)."
print " -d Specify decimal for the output encoding. (Default)."
print " -h Specify hexidecimal for the output encoding."
print " -o Specify octal for the output encoding."
print " -b Specify binary for the output encoding."
print " -6 Specify base-64 for the output encoding."
print " -X MAX Secify the maximum value for each element. (Default is 255)."
print " -N MAX Secify the minimum value for each element. (Default is 0)."
print " -s SEP Secify the separator string to use between elements. (Default"
print " is single space)."
print " -k Use a linebreak as the separator between elements."
print ""
print "Misc:"
print " -?, --help Show this help message and exit."
print ""
print "Base-64 encoding (specified with the -6 option) is a special case: the MIN,"
print "MAX, and SEP are all ignored. In this case, the specified COUNT of random bytes"
print "is generated as a string and encoded together as base-64."
print ""
sys.exit(0)
rand = random.SystemRandom()
if fmt == FMT_B64:
print base64.b64encode("".join(chr(rand.randint(0, 255)) for i in xrange(count)))
else:
print sep.join(formatter(rand.randint(mn, mx), width) for i in xrange(count))
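# Example invocations, derived from the option handling above:
#     python rand.py -c 16 -h -w 2      # 16 random bytes as zero-padded hex
#     python rand.py -c 4 -N 1 -X 6 -k  # four dice rolls, one per line
#     python rand.py -c 32 -6           # 32 random bytes, base-64 encoded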
|
mearns/winbin
|
rand.py
|
Python
|
gpl-3.0
| 4,077
|
[
"Brian"
] |
4adc36339942b5f833f717a98fe943a12a8f22e2c03c91ac1aba30884aa885e6
|
from openpiv.lib import replace_nans
import numpy as np
from scipy.signal import convolve
"""The openpiv.filters module contains some filtering/smoothing routines."""
__licence_ = """
Copyright (C) 2011 www.openpiv.net
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
def _gaussian_kernel(half_width=1):
"""A normalized 2D Gaussian kernel array
Parameters
----------
half_width : int
the half width of the kernel. Kernel
has shape 2*half_width + 1 (default half_width = 1, i.e.
a Gaussian of 3 x 3 kernel)
Examples
--------
>>> from openpiv.filters import _gaussian_kernel
>>> _gaussian_kernel(1)
array([[ 0.04491922, 0.12210311, 0.04491922],
[ 0.12210311, 0.33191066, 0.12210311],
[ 0.04491922, 0.12210311, 0.04491922]])
"""
# size = int(half_width)
x, y = np.mgrid[-half_width:half_width + 1, -half_width:half_width + 1]
g = np.exp(-(x ** 2 / float(half_width) + y ** 2 / float(half_width)))
return g / g.sum()
def gaussian_kernel(sigma, truncate=4.0):
"""
Return Gaussian that truncates at the given number of standard deviations.
"""
sigma = float(sigma)
radius = int(truncate * sigma + 0.5)
x, y = np.mgrid[-radius:radius + 1, -radius:radius + 1]
sigma = sigma ** 2
k = 2 * np.exp(-0.5 * (x ** 2 + y ** 2) / sigma)
k = k / np.sum(k)
return k
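# Hedged example: gaussian_kernel(1.0) uses the default truncate=4.0, so the
# radius is int(4.0 * 1.0 + 0.5) = 4 and the returned kernel is 9 x 9 and
# normalized to sum to 1, suitable for convolve(field, kernel, mode="same").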
def gaussian(u, v, half_width=1):
"""Smooths the velocity field with a Gaussian kernel.
Parameters
----------
u : 2d np.ndarray
the u velocity component field
v : 2d np.ndarray
the v velocity component field
half_width : int
the half width of the kernel. Kernel
has shape 2*half_width+1, default = 1
Returns
-------
uf : 2d np.ndarray
the smoothed u velocity component field
vf : 2d np.ndarray
the smoothed v velocity component field
"""
g = _gaussian_kernel(half_width=half_width)
uf = convolve(u, g, mode="same")
vf = convolve(v, g, mode="same")
return uf, vf
def replace_outliers(u, v, w=None, method="localmean",
max_iter=5, tol=1e-3, kernel_size=1):
"""Replace invalid vectors in an velocity field using an iterative image
inpainting algorithm.
The algorithm is the following:
1) For each element in the arrays of the ``u`` and ``v`` components,
replace it by a weighted average
of the neighbouring elements which are not invalid themselves. The
weights depend on the method type. If ``method=localmean`` the weights
are equal to 1/( (2*kernel_size+1)**2 -1 )
2) Several iterations are needed if there are adjacent invalid elements.
If this is the case, information is "spread" from the edges of the
missing regions iteratively, until the variation is below a certain
threshold.
Parameters
----------
u : 2d or 3d np.ndarray
the u velocity component field
v : 2d or 3d np.ndarray
the v velocity component field
w : 2d or 3d np.ndarray
the w velocity component field
max_iter : int
the number of iterations
kernel_size : int
the size of the kernel, default is 1
method : str
the type of kernel used for repairing missing vectors
Returns
-------
uf : 2d or 3d np.ndarray
the smoothed u velocity component field, where invalid vectors have
been replaced
vf : 2d or 3d np.ndarray
the smoothed v velocity component field, where invalid vectors have
been replaced
wf : 2d or 3d np.ndarray
the smoothed w velocity component field, where invalid vectors have
been replaced
"""
uf = replace_nans(
u, method=method, max_iter=max_iter, tol=tol,
kernel_size=kernel_size
)
vf = replace_nans(
v, method=method, max_iter=max_iter, tol=tol,
kernel_size=kernel_size
)
if isinstance(w, np.ndarray):
wf = replace_nans(
w, method=method, max_iter=max_iter, tol=tol,
kernel_size=kernel_size
)
return uf, vf, wf
return uf, vf
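# Hedged usage sketch (mask and array contents are illustrative only):
#     u[invalid] = np.nan
#     v[invalid] = np.nan
#     u2, v2 = replace_outliers(u, v, method='localmean', max_iter=10, kernel_size=2)
# When a 3-D w component is passed as an ndarray, unpack three arrays instead.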
|
OpenPIV/openpiv-python
|
openpiv/filters.py
|
Python
|
gpl-3.0
| 4,760
|
[
"Gaussian"
] |
c36044bbfc4a6946f6607dbb117e9df8229d26a37ee51e1d2413fe451f0e3de2
|
# Author: Travis Oliphant
# 1999 -- 2002
from __future__ import division, print_function, absolute_import
import warnings
import threading
import sys
from . import sigtools
from ._upfirdn import _UpFIRDn, _output_len
from scipy._lib.six import callable
from scipy._lib._version import NumpyVersion
from scipy import fftpack, linalg
from numpy import (allclose, angle, arange, argsort, array, asarray,
atleast_1d, atleast_2d, cast, dot, exp, expand_dims,
iscomplexobj, mean, ndarray, newaxis, ones, pi,
poly, polyadd, polyder, polydiv, polymul, polysub, polyval,
prod, product, r_, ravel, real_if_close, reshape,
roots, sort, sum, take, transpose, unique, where, zeros,
zeros_like)
import numpy as np
from scipy.special import factorial
from .windows import get_window
from ._arraytools import axis_slice, axis_reverse, odd_ext, even_ext, const_ext
from scipy.signal.filter_design import cheby1
from scipy.signal.fir_filter_design import firwin
if sys.version_info >= (3, 5):
from math import gcd
else:
from fractions import gcd
__all__ = ['correlate', 'fftconvolve', 'convolve', 'convolve2d', 'correlate2d',
'order_filter', 'medfilt', 'medfilt2d', 'wiener', 'lfilter',
'lfiltic', 'sosfilt', 'deconvolve', 'hilbert', 'hilbert2',
'cmplx_sort', 'unique_roots', 'invres', 'invresz', 'residue',
'residuez', 'resample', 'resample_poly', 'detrend',
'lfilter_zi', 'sosfilt_zi',
'filtfilt', 'decimate', 'vectorstrength']
_modedict = {'valid': 0, 'same': 1, 'full': 2}
_boundarydict = {'fill': 0, 'pad': 0, 'wrap': 2, 'circular': 2, 'symm': 1,
'symmetric': 1, 'reflect': 4}
_rfft_mt_safe = (NumpyVersion(np.__version__) >= '1.9.0.dev-e24486e')
_rfft_lock = threading.Lock()
def _valfrommode(mode):
try:
val = _modedict[mode]
except KeyError:
if mode not in [0, 1, 2]:
raise ValueError("Acceptable mode flags are 'valid' (0),"
" 'same' (1), or 'full' (2).")
val = mode
return val
def _bvalfromboundary(boundary):
try:
val = _boundarydict[boundary] << 2
except KeyError:
if boundary not in [0, 1, 2]:
raise ValueError("Acceptable boundary flags are 'fill', 'wrap'"
" (or 'circular'), \n and 'symm'"
" (or 'symmetric').")
val = boundary << 2
return val
def _inputs_swap_needed(mode, shape1, shape2):
"""
If in 'valid' mode, checks whether or not one of the array shapes
is at least as large as the other in every dimension. Returns whether
or not the input arrays need to be swapped depending on whether shape2
is larger than shape1. This is important for some of the correlation and
convolution implementations in this module, where the larger array input
needs to come before the smaller array input when operating in this mode.
Note that if the mode provided is not 'valid', False is immediately
returned.
"""
if mode == 'valid':
ok1, ok2 = True, True
for d1, d2 in zip(shape1, shape2):
if not d1 >= d2:
ok1 = False
if not d2 >= d1:
ok2 = False
if not (ok1 or ok2):
raise ValueError("For 'valid' mode, one must be at least "
"as large as the other in every dimension")
return not ok1
return False
def correlate(in1, in2, mode='full'):
"""
Cross-correlate two N-dimensional arrays.
Cross-correlate `in1` and `in2`, with the output size determined by the
`mode` argument.
Parameters
----------
in1 : array_like
First input.
in2 : array_like
Second input. Should have the same number of dimensions as `in1`.
If operating in 'valid' mode, either `in1` or `in2` must be
at least as large as the other in every dimension.
mode : str {'full', 'valid', 'same'}, optional
A string indicating the size of the output:
``full``
The output is the full discrete linear cross-correlation
of the inputs. (Default)
``valid``
The output consists only of those elements that do not
rely on the zero-padding.
``same``
The output is the same size as `in1`, centered
with respect to the 'full' output.
Returns
-------
correlate : array
An N-dimensional array containing a subset of the discrete linear
cross-correlation of `in1` with `in2`.
Notes
-----
The correlation z of two d-dimensional arrays x and y is defined as:
z[...,k,...] = sum[..., i_l, ...]
x[..., i_l,...] * conj(y[..., i_l + k,...])
Examples
--------
Implement a matched filter using cross-correlation, to recover a signal
that has passed through a noisy channel.
>>> from scipy import signal
>>> sig = np.repeat([0., 1., 1., 0., 1., 0., 0., 1.], 128)
>>> sig_noise = sig + np.random.randn(len(sig))
>>> corr = signal.correlate(sig_noise, np.ones(128), mode='same') / 128
>>> import matplotlib.pyplot as plt
>>> clock = np.arange(64, len(sig), 128)
>>> fig, (ax_orig, ax_noise, ax_corr) = plt.subplots(3, 1, sharex=True)
>>> ax_orig.plot(sig)
>>> ax_orig.plot(clock, sig[clock], 'ro')
>>> ax_orig.set_title('Original signal')
>>> ax_noise.plot(sig_noise)
>>> ax_noise.set_title('Signal with noise')
>>> ax_corr.plot(corr)
>>> ax_corr.plot(clock, corr[clock], 'ro')
>>> ax_corr.axhline(0.5, ls=':')
>>> ax_corr.set_title('Cross-correlated with rectangular pulse')
>>> ax_orig.margins(0, 0.1)
>>> fig.tight_layout()
>>> fig.show()
"""
in1 = asarray(in1)
in2 = asarray(in2)
# Don't use _valfrommode, since correlate should not accept numeric modes
try:
val = _modedict[mode]
except KeyError:
raise ValueError("Acceptable mode flags are 'valid',"
" 'same', or 'full'.")
if in1.ndim == in2.ndim == 0:
return in1 * in2
elif not in1.ndim == in2.ndim:
raise ValueError("in1 and in2 should have the same dimensionality")
# numpy is significantly faster for 1d (but numpy's 'same' mode uses
# the size of the larger input, not the first.)
if in1.ndim == in2.ndim == 1 and (in1.size >= in2.size or mode != 'same'):
return np.correlate(in1, in2, mode)
# _correlateND is far slower when in2.size > in1.size, so swap them
# and then undo the effect afterward if mode == 'full'. Also, it fails
# with 'valid' mode if in2 is larger than in1, so swap those, too.
# Don't swap inputs for 'same' mode, since shape of in1 matters.
swapped_inputs = ((mode == 'full') and (in2.size > in1.size) or
_inputs_swap_needed(mode, in1.shape, in2.shape))
if swapped_inputs:
in1, in2 = in2, in1
if mode == 'valid':
ps = [i - j + 1 for i, j in zip(in1.shape, in2.shape)]
out = np.empty(ps, in1.dtype)
z = sigtools._correlateND(in1, in2, out, val)
else:
ps = [i + j - 1 for i, j in zip(in1.shape, in2.shape)]
# zero pad input
in1zpadded = np.zeros(ps, in1.dtype)
sc = [slice(0, i) for i in in1.shape]
in1zpadded[sc] = in1.copy()
if mode == 'full':
out = np.empty(ps, in1.dtype)
elif mode == 'same':
out = np.empty(in1.shape, in1.dtype)
z = sigtools._correlateND(in1zpadded, in2, out, val)
if swapped_inputs:
# Reverse in all dimensions and conjugate to undo the effect of
# swapping inputs
reverse = [slice(None, None, -1)] * z.ndim
z = z[reverse].conj()
return z
def _centered(arr, newsize):
# Return the center newsize portion of the array.
newsize = asarray(newsize)
currsize = array(arr.shape)
startind = (currsize - newsize) // 2
endind = startind + newsize
myslice = [slice(startind[k], endind[k]) for k in range(len(endind))]
return arr[tuple(myslice)]
def _next_regular(target):
"""
Find the next regular number greater than or equal to target.
Regular numbers are composites of the prime factors 2, 3, and 5.
Also known as 5-smooth numbers or Hamming numbers, these are the optimal
size for inputs to FFTPACK.
Target must be a positive integer.
"""
if target <= 6:
return target
# Quickly check if it's already a power of 2
if not (target & (target-1)):
return target
match = float('inf') # Anything found will be smaller
p5 = 1
while p5 < target:
p35 = p5
while p35 < target:
# Ceiling integer division, avoiding conversion to float
# (quotient = ceil(target / p35))
quotient = -(-target // p35)
# Quickly find next power of 2 >= quotient
try:
p2 = 2**((quotient - 1).bit_length())
except AttributeError:
# Fallback for Python <2.7
p2 = 2**(len(bin(quotient - 1)) - 2)
N = p2 * p35
if N == target:
return N
elif N < match:
match = N
p35 *= 3
if p35 == target:
return p35
if p35 < match:
match = p35
p5 *= 5
if p5 == target:
return p5
if p5 < match:
match = p5
return match
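# Hedged examples: _next_regular(97) -> 100 (2**2 * 5**2) and
# _next_regular(511) -> 512, so FFT lengths get padded up to 5-smooth sizes.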
def fftconvolve(in1, in2, mode="full"):
"""Convolve two N-dimensional arrays using FFT.
Convolve `in1` and `in2` using the fast Fourier transform method, with
the output size determined by the `mode` argument.
This is generally much faster than `convolve` for large arrays (n > ~500),
but can be slower when only a few output values are needed, and can only
output float arrays (int or object array inputs will be cast to float).
Parameters
----------
in1 : array_like
First input.
in2 : array_like
Second input. Should have the same number of dimensions as `in1`.
If operating in 'valid' mode, either `in1` or `in2` must be
at least as large as the other in every dimension.
mode : str {'full', 'valid', 'same'}, optional
A string indicating the size of the output:
``full``
The output is the full discrete linear convolution
of the inputs. (Default)
``valid``
The output consists only of those elements that do not
rely on the zero-padding.
``same``
The output is the same size as `in1`, centered
with respect to the 'full' output.
Returns
-------
out : array
An N-dimensional array containing a subset of the discrete linear
convolution of `in1` with `in2`.
Examples
--------
Autocorrelation of white noise is an impulse. (This is at least 100 times
as fast as `convolve`.)
>>> from scipy import signal
>>> sig = np.random.randn(1000)
>>> autocorr = signal.fftconvolve(sig, sig[::-1], mode='full')
>>> import matplotlib.pyplot as plt
>>> fig, (ax_orig, ax_mag) = plt.subplots(2, 1)
>>> ax_orig.plot(sig)
>>> ax_orig.set_title('White noise')
>>> ax_mag.plot(np.arange(-len(sig)+1,len(sig)), autocorr)
>>> ax_mag.set_title('Autocorrelation')
>>> fig.tight_layout()
>>> fig.show()
Gaussian blur implemented using FFT convolution. Notice the dark borders
around the image, due to the zero-padding beyond its boundaries.
The `convolve2d` function allows for other types of image boundaries,
but is far slower.
>>> from scipy import misc
>>> face = misc.face(gray=True)
>>> kernel = np.outer(signal.gaussian(70, 8), signal.gaussian(70, 8))
>>> blurred = signal.fftconvolve(face, kernel, mode='same')
>>> fig, (ax_orig, ax_kernel, ax_blurred) = plt.subplots(3, 1,
... figsize=(6, 15))
>>> ax_orig.imshow(face, cmap='gray')
>>> ax_orig.set_title('Original')
>>> ax_orig.set_axis_off()
>>> ax_kernel.imshow(kernel, cmap='gray')
>>> ax_kernel.set_title('Gaussian kernel')
>>> ax_kernel.set_axis_off()
>>> ax_blurred.imshow(blurred, cmap='gray')
>>> ax_blurred.set_title('Blurred')
>>> ax_blurred.set_axis_off()
>>> fig.show()
"""
in1 = asarray(in1)
in2 = asarray(in2)
if in1.ndim == in2.ndim == 0: # scalar inputs
return in1 * in2
elif not in1.ndim == in2.ndim:
raise ValueError("in1 and in2 should have the same dimensionality")
elif in1.size == 0 or in2.size == 0: # empty arrays
return array([])
s1 = array(in1.shape)
s2 = array(in2.shape)
complex_result = (np.issubdtype(in1.dtype, complex) or
np.issubdtype(in2.dtype, complex))
shape = s1 + s2 - 1
# Check that input sizes are compatible with 'valid' mode
if _inputs_swap_needed(mode, s1, s2):
# Convolution is commutative; order doesn't have any effect on output
in1, s1, in2, s2 = in2, s2, in1, s1
# Speed up FFT by padding to optimal size for FFTPACK
fshape = [_next_regular(int(d)) for d in shape]
fslice = tuple([slice(0, int(sz)) for sz in shape])
# Pre-1.9 NumPy FFT routines are not threadsafe. For older NumPys, make
# sure we only call rfftn/irfftn from one thread at a time.
if not complex_result and (_rfft_mt_safe or _rfft_lock.acquire(False)):
try:
sp1 = np.fft.rfftn(in1, fshape)
sp2 = np.fft.rfftn(in2, fshape)
ret = (np.fft.irfftn(sp1 * sp2, fshape)[fslice].copy())
finally:
if not _rfft_mt_safe:
_rfft_lock.release()
else:
# If we're here, it's either because we need a complex result, or we
# failed to acquire _rfft_lock (meaning rfftn isn't threadsafe and
# is already in use by another thread). In either case, use the
# (threadsafe but slower) SciPy complex-FFT routines instead.
sp1 = fftpack.fftn(in1, fshape)
sp2 = fftpack.fftn(in2, fshape)
ret = fftpack.ifftn(sp1 * sp2)[fslice].copy()
if not complex_result:
ret = ret.real
if mode == "full":
return ret
elif mode == "same":
return _centered(ret, s1)
elif mode == "valid":
return _centered(ret, s1 - s2 + 1)
else:
raise ValueError("Acceptable mode flags are 'valid',"
" 'same', or 'full'.")
def convolve(in1, in2, mode='full'):
"""
Convolve two N-dimensional arrays.
Convolve `in1` and `in2`, with the output size determined by the
`mode` argument.
Parameters
----------
in1 : array_like
First input.
in2 : array_like
Second input. Should have the same number of dimensions as `in1`.
If operating in 'valid' mode, either `in1` or `in2` must be
at least as large as the other in every dimension.
mode : str {'full', 'valid', 'same'}, optional
A string indicating the size of the output:
``full``
The output is the full discrete linear convolution
of the inputs. (Default)
``valid``
The output consists only of those elements that do not
rely on the zero-padding.
``same``
The output is the same size as `in1`, centered
with respect to the 'full' output.
Returns
-------
convolve : array
An N-dimensional array containing a subset of the discrete linear
convolution of `in1` with `in2`.
See also
--------
numpy.polymul : performs polynomial multiplication (same operation, but
also accepts poly1d objects)
Examples
--------
Smooth a square pulse using a Hann window:
>>> from scipy import signal
>>> sig = np.repeat([0., 1., 0.], 100)
>>> win = signal.hann(50)
>>> filtered = signal.convolve(sig, win, mode='same') / sum(win)
>>> import matplotlib.pyplot as plt
>>> fig, (ax_orig, ax_win, ax_filt) = plt.subplots(3, 1, sharex=True)
>>> ax_orig.plot(sig)
>>> ax_orig.set_title('Original pulse')
>>> ax_orig.margins(0, 0.1)
>>> ax_win.plot(win)
>>> ax_win.set_title('Filter impulse response')
>>> ax_win.margins(0, 0.1)
>>> ax_filt.plot(filtered)
>>> ax_filt.set_title('Filtered signal')
>>> ax_filt.margins(0, 0.1)
>>> fig.tight_layout()
>>> fig.show()
"""
volume = asarray(in1)
kernel = asarray(in2)
if volume.ndim == kernel.ndim == 0:
return volume * kernel
if _inputs_swap_needed(mode, volume.shape, kernel.shape):
# Convolution is commutative; order doesn't have any effect on output
volume, kernel = kernel, volume
# fastpath to faster numpy 1d convolve (but numpy's 'same' mode uses the
# size of the larger input, not the first.)
if volume.ndim == kernel.ndim == 1 and (volume.size >= kernel.size or
mode != 'same'):
return np.convolve(volume, kernel, mode)
# Reverse in all dimensions
reverse = [slice(None, None, -1)] * kernel.ndim
# .conj() does nothing to real arrays and is faster than iscomplexobj()
return correlate(volume, kernel[reverse].conj(), mode)
def order_filter(a, domain, rank):
"""
Perform an order filter on an N-dimensional array.
Perform an order filter on the array in. The domain argument acts as a
mask centered over each pixel. The non-zero elements of domain are
used to select elements surrounding each input pixel which are placed
in a list. The list is sorted, and the output for that pixel is the
element corresponding to rank in the sorted list.
Parameters
----------
a : ndarray
The N-dimensional input array.
domain : array_like
A mask array with the same number of dimensions as `a`.
Each dimension should have an odd number of elements.
rank : int
A non-negative integer which selects the element from the
sorted list (0 corresponds to the smallest element, 1 is the
next smallest element, etc.).
Returns
-------
out : ndarray
The results of the order filter in an array with the same
shape as `a`.
Examples
--------
>>> from scipy import signal
>>> x = np.arange(25).reshape(5, 5)
>>> domain = np.identity(3)
>>> x
array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24]])
>>> signal.order_filter(x, domain, 0)
array([[ 0., 0., 0., 0., 0.],
[ 0., 0., 1., 2., 0.],
[ 0., 5., 6., 7., 0.],
[ 0., 10., 11., 12., 0.],
[ 0., 0., 0., 0., 0.]])
>>> signal.order_filter(x, domain, 2)
array([[ 6., 7., 8., 9., 4.],
[ 11., 12., 13., 14., 9.],
[ 16., 17., 18., 19., 14.],
[ 21., 22., 23., 24., 19.],
[ 20., 21., 22., 23., 24.]])
"""
domain = asarray(domain)
size = domain.shape
for k in range(len(size)):
if (size[k] % 2) != 1:
raise ValueError("Each dimension of domain argument "
" should have an odd number of elements.")
return sigtools._order_filterND(a, domain, rank)
def medfilt(volume, kernel_size=None):
"""
Perform a median filter on an N-dimensional array.
Apply a median filter to the input array using a local window-size
given by `kernel_size`.
Parameters
----------
volume : array_like
An N-dimensional input array.
kernel_size : array_like, optional
A scalar or an N-length list giving the size of the median filter
window in each dimension. Elements of `kernel_size` should be odd.
If `kernel_size` is a scalar, then this scalar is used as the size in
each dimension. Default size is 3 for each dimension.
Returns
-------
out : ndarray
An array the same size as input containing the median filtered
result.
"""
volume = atleast_1d(volume)
if kernel_size is None:
kernel_size = [3] * volume.ndim
kernel_size = asarray(kernel_size)
if kernel_size.shape == ():
kernel_size = np.repeat(kernel_size.item(), volume.ndim)
for k in range(volume.ndim):
if (kernel_size[k] % 2) != 1:
raise ValueError("Each element of kernel_size should be odd.")
domain = ones(kernel_size)
numels = product(kernel_size, axis=0)
order = numels // 2
return sigtools._order_filterND(volume, domain, order)
def wiener(im, mysize=None, noise=None):
"""
Perform a Wiener filter on an N-dimensional array.
Apply a Wiener filter to the N-dimensional array `im`.
Parameters
----------
im : ndarray
An N-dimensional array.
mysize : int or array_like, optional
A scalar or an N-length list giving the size of the Wiener filter
window in each dimension. Elements of mysize should be odd.
If mysize is a scalar, then this scalar is used as the size
in each dimension.
noise : float, optional
The noise-power to use. If None, then noise is estimated as the
average of the local variance of the input.
Returns
-------
out : ndarray
Wiener filtered result with the same shape as `im`.
"""
im = asarray(im)
if mysize is None:
mysize = [3] * im.ndim
mysize = asarray(mysize)
if mysize.shape == ():
mysize = np.repeat(mysize.item(), im.ndim)
# Estimate the local mean
lMean = correlate(im, ones(mysize), 'same') / product(mysize, axis=0)
# Estimate the local variance
lVar = (correlate(im ** 2, ones(mysize), 'same') / product(mysize, axis=0)
- lMean ** 2)
# Estimate the noise power if needed.
if noise is None:
noise = mean(ravel(lVar), axis=0)
res = (im - lMean)
res *= (1 - noise / lVar)
res += lMean
out = where(lVar < noise, lMean, res)
return out
def convolve2d(in1, in2, mode='full', boundary='fill', fillvalue=0):
"""
Convolve two 2-dimensional arrays.
Convolve `in1` and `in2` with output size determined by `mode`, and
boundary conditions determined by `boundary` and `fillvalue`.
Parameters
----------
in1 : array_like
First input.
in2 : array_like
Second input. Should have the same number of dimensions as `in1`.
If operating in 'valid' mode, either `in1` or `in2` must be
at least as large as the other in every dimension.
mode : str {'full', 'valid', 'same'}, optional
A string indicating the size of the output:
``full``
The output is the full discrete linear convolution
of the inputs. (Default)
``valid``
The output consists only of those elements that do not
rely on the zero-padding.
``same``
The output is the same size as `in1`, centered
with respect to the 'full' output.
boundary : str {'fill', 'wrap', 'symm'}, optional
A flag indicating how to handle boundaries:
``fill``
pad input arrays with fillvalue. (default)
``wrap``
circular boundary conditions.
``symm``
symmetrical boundary conditions.
fillvalue : scalar, optional
Value to fill pad input arrays with. Default is 0.
Returns
-------
out : ndarray
A 2-dimensional array containing a subset of the discrete linear
convolution of `in1` with `in2`.
Examples
--------
Compute the gradient of an image by 2D convolution with a complex Scharr
operator. (Horizontal operator is real, vertical is imaginary.) Use
symmetric boundary condition to avoid creating edges at the image
boundaries.
>>> from scipy import signal
>>> from scipy import misc
>>> ascent = misc.ascent()
>>> scharr = np.array([[ -3-3j, 0-10j, +3 -3j],
... [-10+0j, 0+ 0j, +10 +0j],
... [ -3+3j, 0+10j, +3 +3j]]) # Gx + j*Gy
>>> grad = signal.convolve2d(ascent, scharr, boundary='symm', mode='same')
>>> import matplotlib.pyplot as plt
>>> fig, (ax_orig, ax_mag, ax_ang) = plt.subplots(3, 1, figsize=(6, 15))
>>> ax_orig.imshow(ascent, cmap='gray')
>>> ax_orig.set_title('Original')
>>> ax_orig.set_axis_off()
>>> ax_mag.imshow(np.absolute(grad), cmap='gray')
>>> ax_mag.set_title('Gradient magnitude')
>>> ax_mag.set_axis_off()
>>> ax_ang.imshow(np.angle(grad), cmap='hsv') # hsv is cyclic, like angles
>>> ax_ang.set_title('Gradient orientation')
>>> ax_ang.set_axis_off()
>>> fig.show()
"""
in1 = asarray(in1)
in2 = asarray(in2)
if not in1.ndim == in2.ndim == 2:
raise ValueError('convolve2d inputs must both be 2D arrays')
if _inputs_swap_needed(mode, in1.shape, in2.shape):
in1, in2 = in2, in1
val = _valfrommode(mode)
bval = _bvalfromboundary(boundary)
with warnings.catch_warnings():
warnings.simplefilter('ignore', np.ComplexWarning)
# FIXME: some cast generates a warning here
out = sigtools._convolve2d(in1, in2, 1, val, bval, fillvalue)
return out
def correlate2d(in1, in2, mode='full', boundary='fill', fillvalue=0):
"""
Cross-correlate two 2-dimensional arrays.
Cross correlate `in1` and `in2` with output size determined by `mode`, and
boundary conditions determined by `boundary` and `fillvalue`.
Parameters
----------
in1 : array_like
First input.
in2 : array_like
Second input. Should have the same number of dimensions as `in1`.
If operating in 'valid' mode, either `in1` or `in2` must be
at least as large as the other in every dimension.
mode : str {'full', 'valid', 'same'}, optional
A string indicating the size of the output:
``full``
The output is the full discrete linear cross-correlation
of the inputs. (Default)
``valid``
The output consists only of those elements that do not
rely on the zero-padding.
``same``
The output is the same size as `in1`, centered
with respect to the 'full' output.
boundary : str {'fill', 'wrap', 'symm'}, optional
A flag indicating how to handle boundaries:
``fill``
pad input arrays with fillvalue. (default)
``wrap``
circular boundary conditions.
``symm``
symmetrical boundary conditions.
fillvalue : scalar, optional
Value to fill pad input arrays with. Default is 0.
Returns
-------
correlate2d : ndarray
A 2-dimensional array containing a subset of the discrete linear
cross-correlation of `in1` with `in2`.
Examples
--------
Use 2D cross-correlation to find the location of a template in a noisy
image:
>>> from scipy import signal
>>> from scipy import misc
>>> face = misc.face(gray=True) - misc.face(gray=True).mean()
>>> template = np.copy(face[300:365, 670:750]) # right eye
>>> template -= template.mean()
>>> face = face + np.random.randn(*face.shape) * 50 # add noise
>>> corr = signal.correlate2d(face, template, boundary='symm', mode='same')
>>> y, x = np.unravel_index(np.argmax(corr), corr.shape) # find the match
>>> import matplotlib.pyplot as plt
>>> fig, (ax_orig, ax_template, ax_corr) = plt.subplots(3, 1,
... figsize=(6, 15))
>>> ax_orig.imshow(face, cmap='gray')
>>> ax_orig.set_title('Original')
>>> ax_orig.set_axis_off()
>>> ax_template.imshow(template, cmap='gray')
>>> ax_template.set_title('Template')
>>> ax_template.set_axis_off()
>>> ax_corr.imshow(corr, cmap='gray')
>>> ax_corr.set_title('Cross-correlation')
>>> ax_corr.set_axis_off()
>>> ax_orig.plot(x, y, 'ro')
>>> fig.show()
"""
in1 = asarray(in1)
in2 = asarray(in2)
if not in1.ndim == in2.ndim == 2:
raise ValueError('correlate2d inputs must both be 2D arrays')
swapped_inputs = _inputs_swap_needed(mode, in1.shape, in2.shape)
if swapped_inputs:
in1, in2 = in2, in1
val = _valfrommode(mode)
bval = _bvalfromboundary(boundary)
with warnings.catch_warnings():
warnings.simplefilter('ignore', np.ComplexWarning)
# FIXME: some cast generates a warning here
out = sigtools._convolve2d(in1, in2, 0, val, bval, fillvalue)
if swapped_inputs:
out = out[::-1, ::-1]
return out
def medfilt2d(input, kernel_size=3):
"""
Median filter a 2-dimensional array.
Apply a median filter to the `input` array using a local window-size
given by `kernel_size` (must be odd).
Parameters
----------
input : array_like
A 2-dimensional input array.
kernel_size : array_like, optional
A scalar or a list of length 2, giving the size of the
median filter window in each dimension. Elements of
`kernel_size` should be odd. If `kernel_size` is a scalar,
then this scalar is used as the size in each dimension.
Default is a kernel of size (3, 3).
Returns
-------
out : ndarray
An array the same size as input containing the median filtered
result.
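Examples
--------
As a minimal sketch (test values chosen arbitrarily), remove a single
impulse from a small array using the default 3-by-3 window:
>>> from scipy import signal
>>> x = np.zeros((5, 5))
>>> x[2, 2] = 100.0  # an isolated noisy pixel
>>> signal.medfilt2d(x)[2, 2]  # the impulse is replaced by the local median
0.0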
"""
image = asarray(input)
if kernel_size is None:
kernel_size = [3] * 2
kernel_size = asarray(kernel_size)
if kernel_size.shape == ():
kernel_size = np.repeat(kernel_size.item(), 2)
for size in kernel_size:
if (size % 2) != 1:
raise ValueError("Each element of kernel_size should be odd.")
return sigtools._medfilt2d(image, kernel_size)
def lfilter(b, a, x, axis=-1, zi=None):
"""
Filter data along one-dimension with an IIR or FIR filter.
Filter a data sequence, `x`, using a digital filter. This works for many
fundamental data types (including Object type). The filter is a direct
form II transposed implementation of the standard difference equation
(see Notes).
Parameters
----------
b : array_like
The numerator coefficient vector in a 1-D sequence.
a : array_like
The denominator coefficient vector in a 1-D sequence. If ``a[0]``
is not 1, then both `a` and `b` are normalized by ``a[0]``.
x : array_like
An N-dimensional input array.
axis : int, optional
The axis of the input data array along which to apply the
linear filter. The filter is applied to each subarray along
this axis. Default is -1.
zi : array_like, optional
Initial conditions for the filter delays. It is a vector
(or array of vectors for an N-dimensional input) of length
``max(len(a),len(b))-1``. If `zi` is None or is not given then
initial rest is assumed. See `lfiltic` for more information.
Returns
-------
y : array
The output of the digital filter.
zf : array, optional
If `zi` is None, this is not returned, otherwise, `zf` holds the
final filter delay values.
See Also
--------
lfiltic : Construct initial conditions for `lfilter`.
lfilter_zi : Compute initial state (steady state of step response) for
`lfilter`.
filtfilt : A forward-backward filter, to obtain a filter with linear phase.
savgol_filter : A Savitzky-Golay filter.
sosfilt: Filter data using cascaded second-order sections.
Notes
-----
The filter function is implemented as a direct form II transposed structure.
This means that the filter implements::
a[0]*y[n] = b[0]*x[n] + b[1]*x[n-1] + ... + b[nb]*x[n-nb]
- a[1]*y[n-1] - ... - a[na]*y[n-na]
using the following difference equations::
y[m] = b[0]*x[m] + z[0,m-1]
z[0,m] = b[1]*x[m] + z[1,m-1] - a[1]*y[m]
...
z[n-3,m] = b[n-2]*x[m] + z[n-2,m-1] - a[n-2]*y[m]
z[n-2,m] = b[n-1]*x[m] - a[n-1]*y[m]
where m is the output sample number and n=max(len(a),len(b)) is the
model order.
The rational transfer function describing this filter in the
z-transform domain is::
-1 -nb
b[0] + b[1]z + ... + b[nb] z
Y(z) = ---------------------------------- X(z)
-1 -na
a[0] + a[1]z + ... + a[na] z
Examples
--------
Generate a noisy signal to be filtered:
>>> from scipy import signal
>>> import matplotlib.pyplot as plt
>>> t = np.linspace(-1, 1, 201)
>>> x = (np.sin(2*np.pi*0.75*t*(1-t) + 2.1) + 0.1*np.sin(2*np.pi*1.25*t + 1)
... + 0.18*np.cos(2*np.pi*3.85*t))
>>> xn = x + np.random.randn(len(t)) * 0.08
Create an order 3 lowpass Butterworth filter:
>>> b, a = signal.butter(3, 0.05)
Apply the filter to xn. Use lfilter_zi to choose the initial condition of
the filter:
>>> zi = signal.lfilter_zi(b, a)
>>> z, _ = signal.lfilter(b, a, xn, zi=zi*xn[0])
Apply the filter again, to have a result filtered at an order the same as
filtfilt:
>>> z2, _ = signal.lfilter(b, a, z, zi=zi*z[0])
Use filtfilt to apply the filter:
>>> y = signal.filtfilt(b, a, xn)
Plot the original signal and the various filtered versions:
>>> plt.figure()
>>> plt.plot(t, xn, 'b', alpha=0.75)
>>> plt.plot(t, z, 'r--', t, z2, 'r', t, y, 'k')
>>> plt.legend(('noisy signal', 'lfilter, once', 'lfilter, twice',
... 'filtfilt'), loc='best')
>>> plt.grid(True)
>>> plt.show()
"""
a = np.atleast_1d(a)
if len(a) == 1:
# This path only supports types fdgFDGO to mirror _linear_filter below.
# Any of b, a, x, or zi can set the dtype, but there is no default
# casting of other types; instead a NotImplementedError is raised.
b = np.asarray(b)
a = np.asarray(a)
if b.ndim != 1 and a.ndim != 1:
raise ValueError('object of too small depth for desired array')
x = np.asarray(x)
inputs = [b, a, x]
if zi is not None:
# _linear_filter does not broadcast zi, but does do expansion of singleton dims.
zi = np.asarray(zi)
if zi.ndim != x.ndim:
raise ValueError('object of too small depth for desired array')
expected_shape = list(x.shape)
expected_shape[axis] = b.shape[0] - 1
expected_shape = tuple(expected_shape)
# check the trivial case where zi is the right shape first
if zi.shape != expected_shape:
strides = zi.ndim * [None]
if axis < 0:
axis += zi.ndim
for k in range(zi.ndim):
if k == axis and zi.shape[k] == expected_shape[k]:
strides[k] = zi.strides[k]
elif k != axis and zi.shape[k] == expected_shape[k]:
strides[k] = zi.strides[k]
elif k != axis and zi.shape[k] == 1:
strides[k] = 0
else:
raise ValueError('Unexpected shape for zi: expected '
'%s, found %s.' %
(expected_shape, zi.shape))
zi = np.lib.stride_tricks.as_strided(zi, expected_shape, strides)
inputs.append(zi)
dtype = np.result_type(*inputs)
if dtype.char not in 'fdgFDGO':
raise NotImplementedError("input type '%s' not supported" % dtype)
b = np.array(b, dtype=dtype)
a = np.array(a, dtype=dtype, copy=False)
b /= a[0]
x = np.array(x, dtype=dtype, copy=False)
out_full = np.apply_along_axis(lambda y: np.convolve(b, y), axis, x)
ind = out_full.ndim * [slice(None)]
if zi is not None:
ind[axis] = slice(zi.shape[axis])
out_full[ind] += zi
ind[axis] = slice(out_full.shape[axis] - len(b) + 1)
out = out_full[ind]
if zi is None:
return out
else:
ind[axis] = slice(out_full.shape[axis] - len(b) + 1, None)
zf = out_full[ind]
return out, zf
else:
if zi is None:
return sigtools._linear_filter(b, a, x, axis)
else:
return sigtools._linear_filter(b, a, x, axis, zi)
def lfiltic(b, a, y, x=None):
"""
Construct initial conditions for lfilter.
Given a linear filter (b, a) and initial conditions on the output `y`
and the input `x`, return the initial conditions on the state vector zi
which is used by `lfilter` to generate the output given the input.
Parameters
----------
b : array_like
Linear filter term.
a : array_like
Linear filter term.
y : array_like
Initial conditions.
If ``N=len(a) - 1``, then ``y = {y[-1], y[-2], ..., y[-N]}``.
If `y` is too short, it is padded with zeros.
x : array_like, optional
Initial conditions.
If ``M=len(b) - 1``, then ``x = {x[-1], x[-2], ..., x[-M]}``.
If `x` is not given, its initial conditions are assumed zero.
If `x` is too short, it is padded with zeros.
Returns
-------
zi : ndarray
The state vector ``zi``.
``zi = {z_0[-1], z_1[-1], ..., z_K-1[-1]}``, where ``K = max(M,N)``.
See Also
--------
lfilter, lfilter_zi
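Examples
--------
A brief sketch (coefficients and past samples chosen arbitrarily) of
resuming a filtering run from known past output and input values:
>>> from scipy import signal
>>> b, a = [0.5, 0.5], [1.0, -0.2]
>>> past_y = [0.1]  # y[-1]
>>> past_x = [0.2]  # x[-1]
>>> zi = signal.lfiltic(b, a, past_y, past_x)
>>> y, zf = signal.lfilter(b, a, np.ones(5), zi=zi)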
"""
N = np.size(a) - 1
M = np.size(b) - 1
K = max(M, N)
y = asarray(y)
if y.dtype.kind in 'bui':
# ensure calculations are floating point
y = y.astype(np.float64)
zi = zeros(K, y.dtype)
if x is None:
x = zeros(M, y.dtype)
else:
x = asarray(x)
L = np.size(x)
if L < M:
x = r_[x, zeros(M - L)]
L = np.size(y)
if L < N:
y = r_[y, zeros(N - L)]
for m in range(M):
zi[m] = sum(b[m + 1:] * x[:M - m], axis=0)
for m in range(N):
zi[m] -= sum(a[m + 1:] * y[:N - m], axis=0)
return zi
def deconvolve(signal, divisor):
"""Deconvolves ``divisor`` out of ``signal``.
Returns the quotient and remainder such that
``signal = convolve(divisor, quotient) + remainder``
Parameters
----------
signal : array_like
Signal data, typically a recorded signal
divisor : array_like
Divisor data, typically an impulse response or filter that was
applied to the original signal
Returns
-------
quotient : ndarray
Quotient, typically the recovered original signal
remainder : ndarray
Remainder
Examples
--------
Deconvolve a signal that's been filtered:
>>> from scipy import signal
>>> original = [0, 1, 0, 0, 1, 1, 0, 0]
>>> impulse_response = [2, 1]
>>> recorded = signal.convolve(impulse_response, original)
>>> recorded
array([0, 2, 1, 0, 2, 3, 1, 0, 0])
>>> recovered, remainder = signal.deconvolve(recorded, impulse_response)
>>> recovered
array([ 0., 1., 0., 0., 1., 1., 0., 0.])
See also
--------
numpy.polydiv : performs polynomial division (same operation, but
also accepts poly1d objects)
"""
num = atleast_1d(signal)
den = atleast_1d(divisor)
N = len(num)
D = len(den)
if D > N:
quot = []
rem = num
else:
input = ones(N - D + 1, float)
input[1:] = 0
quot = lfilter(num, den, input)
rem = num - convolve(den, quot, mode='full')
return quot, rem
def hilbert(x, N=None, axis=-1):
"""
Compute the analytic signal, using the Hilbert transform.
The transformation is done along the last axis by default.
Parameters
----------
x : array_like
Signal data. Must be real.
N : int, optional
Number of Fourier components. Default: ``x.shape[axis]``
axis : int, optional
Axis along which to do the transformation. Default: -1.
Returns
-------
xa : ndarray
Analytic signal of `x`, of each 1-D array along `axis`
Notes
-----
The analytic signal ``x_a(t)`` of signal ``x(t)`` is:
.. math:: x_a = F^{-1}(F(x) 2U) = x + i y
where `F` is the Fourier transform, `U` the unit step function,
and `y` the Hilbert transform of `x`. [1]_
In other words, the negative half of the frequency spectrum is zeroed
out, turning the real-valued signal into a complex signal. The Hilbert
transformed signal can be obtained from ``np.imag(hilbert(x))``, and the
original signal from ``np.real(hilbert(x))``.
Examples
--------
In this example we use the Hilbert transform to determine the amplitude
envelope and instantaneous frequency of an amplitude-modulated signal.
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from scipy.signal import hilbert, chirp
>>> duration = 1.0
>>> fs = 400.0
>>> samples = int(fs*duration)
>>> t = np.arange(samples) / fs
We create a chirp of which the frequency increases from 20 Hz to 100 Hz and
apply an amplitude modulation.
>>> signal = chirp(t, 20.0, t[-1], 100.0)
>>> signal *= (1.0 + 0.5 * np.sin(2.0*np.pi*3.0*t) )
The amplitude envelope is given by magnitude of the analytic signal. The
instantaneous frequency can be obtained by differentiating the instantaneous
phase with respect to time. The instantaneous phase corresponds to the phase
angle of the analytic signal.
>>> analytic_signal = hilbert(signal)
>>> amplitude_envelope = np.abs(analytic_signal)
>>> instantaneous_phase = np.unwrap(np.angle(analytic_signal))
>>> instantaneous_frequency = np.diff(instantaneous_phase) / (2.0*np.pi) * fs
>>> fig = plt.figure()
>>> ax0 = fig.add_subplot(211)
>>> ax0.plot(t, signal, label='signal')
>>> ax0.plot(t, amplitude_envelope, label='envelope')
>>> ax0.set_xlabel("time in seconds")
>>> ax0.legend()
>>> ax1 = fig.add_subplot(212)
>>> ax1.plot(t[1:], instantaneous_frequency)
>>> ax1.set_xlabel("time in seconds")
>>> ax1.set_ylim(0.0, 120.0)
References
----------
.. [1] Wikipedia, "Analytic signal".
http://en.wikipedia.org/wiki/Analytic_signal
.. [2] Leon Cohen, "Time-Frequency Analysis", 1995. Chapter 2.
.. [3] Alan V. Oppenheim, Ronald W. Schafer. Discrete-Time Signal Processing,
Third Edition, 2009. Chapter 12. ISBN 13: 978-1292-02572-8
"""
x = asarray(x)
if iscomplexobj(x):
raise ValueError("x must be real.")
if N is None:
N = x.shape[axis]
if N <= 0:
raise ValueError("N must be positive.")
Xf = fftpack.fft(x, N, axis=axis)
h = zeros(N)
if N % 2 == 0:
h[0] = h[N // 2] = 1
h[1:N // 2] = 2
else:
h[0] = 1
h[1:(N + 1) // 2] = 2
if x.ndim > 1:
ind = [newaxis] * x.ndim
ind[axis] = slice(None)
h = h[ind]
x = fftpack.ifft(Xf * h, axis=axis)
return x
def hilbert2(x, N=None):
"""
Compute the '2-D' analytic signal of `x`
Parameters
----------
x : array_like
2-D signal data.
N : int or tuple of two ints, optional
Number of Fourier components. Default is ``x.shape``
Returns
-------
xa : ndarray
Analytic signal of `x` taken along axes (0,1).
References
----------
.. [1] Wikipedia, "Analytic signal",
http://en.wikipedia.org/wiki/Analytic_signal
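Examples
--------
A short sketch (using an arbitrary random test image); the analytic
signal has the same shape as the input:
>>> from scipy.signal import hilbert2
>>> x = np.random.randn(32, 32)
>>> xa = hilbert2(x)
>>> xa.shape
(32, 32)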
"""
x = atleast_2d(x)
if x.ndim > 2:
raise ValueError("x must be 2-D.")
if iscomplexobj(x):
raise ValueError("x must be real.")
if N is None:
N = x.shape
elif isinstance(N, int):
if N <= 0:
raise ValueError("N must be positive.")
N = (N, N)
elif len(N) != 2 or np.any(np.asarray(N) <= 0):
raise ValueError("When given as a tuple, N must hold exactly "
"two positive integers")
Xf = fftpack.fft2(x, N, axes=(0, 1))
h1 = zeros(N[0], 'd')
h2 = zeros(N[1], 'd')
# Build the frequency-domain mask along each axis in place; h1 and h2
# are modified directly, so no eval/exec rebinding is needed.
for h, N1 in zip((h1, h2), N):
    if N1 % 2 == 0:
        h[0] = h[N1 // 2] = 1
        h[1:N1 // 2] = 2
    else:
        h[0] = 1
        h[1:(N1 + 1) // 2] = 2
h = h1[:, newaxis] * h2[newaxis, :]
k = x.ndim
while k > 2:
h = h[:, newaxis]
k -= 1
x = fftpack.ifft2(Xf * h, axes=(0, 1))
return x
def cmplx_sort(p):
"""Sort roots based on magnitude.
Parameters
----------
p : array_like
The roots to sort, as a 1-D array.
Returns
-------
p_sorted : ndarray
Sorted roots.
indx : ndarray
Array of indices needed to sort the input `p`.
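Examples
--------
A small sketch with arbitrary roots; complex inputs are ordered by
magnitude (output not shown):
>>> from scipy import signal
>>> p_sorted, indx = signal.cmplx_sort([3, 1 + 2j, -2, 0.5])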
"""
p = asarray(p)
if iscomplexobj(p):
indx = argsort(abs(p))
else:
indx = argsort(p)
return take(p, indx, 0), indx
def unique_roots(p, tol=1e-3, rtype='min'):
"""
Determine unique roots and their multiplicities from a list of roots.
Parameters
----------
p : array_like
The list of roots.
tol : float, optional
The tolerance for two roots to be considered equal. Default is 1e-3.
rtype : {'max', 'min', 'avg'}, optional
How to determine the returned root if multiple roots are within
`tol` of each other.
- 'max': pick the maximum of those roots.
- 'min': pick the minimum of those roots.
- 'avg': take the average of those roots.
Returns
-------
pout : ndarray
The list of unique roots, sorted from low to high.
mult : ndarray
The multiplicity of each root.
Notes
-----
This utility function is not specific to roots but can be used for any
sequence of values for which uniqueness and multiplicity has to be
determined. For a more general routine, see `numpy.unique`.
Examples
--------
>>> from scipy import signal
>>> vals = [0, 1.3, 1.31, 2.8, 1.25, 2.2, 10.3]
>>> uniq, mult = signal.unique_roots(vals, tol=2e-2, rtype='avg')
Check which roots have multiplicity larger than 1:
>>> uniq[mult > 1]
array([ 1.305])
"""
if rtype in ['max', 'maximum']:
comproot = np.max
elif rtype in ['min', 'minimum']:
comproot = np.min
elif rtype in ['avg', 'mean']:
comproot = np.mean
else:
raise ValueError("`rtype` must be one of "
"{'max', 'maximum', 'min', 'minimum', 'avg', 'mean'}")
p = asarray(p) * 1.0
tol = abs(tol)
p, indx = cmplx_sort(p)
pout = []
mult = []
indx = -1
curp = p[0] + 5 * tol
sameroots = []
for k in range(len(p)):
tr = p[k]
if abs(tr - curp) < tol:
sameroots.append(tr)
curp = comproot(sameroots)
pout[indx] = curp
mult[indx] += 1
else:
pout.append(tr)
curp = tr
sameroots = [tr]
indx += 1
mult.append(1)
return array(pout), array(mult)
def invres(r, p, k, tol=1e-3, rtype='avg'):
"""
Compute b(s) and a(s) from partial fraction expansion.
If ``M = len(b)`` and ``N = len(a)``::
b(s) b[0] x**(M-1) + b[1] x**(M-2) + ... + b[M-1]
H(s) = ------ = ----------------------------------------------
a(s) a[0] x**(N-1) + a[1] x**(N-2) + ... + a[N-1]
r[0] r[1] r[-1]
= -------- + -------- + ... + --------- + k(s)
(s-p[0]) (s-p[1]) (s-p[-1])
If there are any repeated roots (closer than tol), then the partial
fraction expansion has terms like::
r[i] r[i+1] r[i+n-1]
-------- + ----------- + ... + -----------
(s-p[i]) (s-p[i])**2 (s-p[i])**n
Parameters
----------
r : ndarray
Residues.
p : ndarray
Poles.
k : ndarray
Coefficients of the direct polynomial term.
tol : float, optional
The tolerance for two roots to be considered equal. Default is 1e-3.
rtype : {'max', 'min', 'avg'}, optional
How to determine the returned root if multiple roots are within
`tol` of each other.
'max': pick the maximum of those roots.
'min': pick the minimum of those roots.
'avg': take the average of those roots.
See Also
--------
residue, unique_roots
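Examples
--------
A minimal sketch (residues and poles chosen arbitrarily, with no direct
polynomial term) reconstructing numerator and denominator coefficients:
>>> from scipy import signal
>>> r, p, k = [1.0, 1.0], [-1.0, -2.0], []
>>> b, a = signal.invres(r, p, k)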
"""
extra = k
p, indx = cmplx_sort(p)
r = take(r, indx, 0)
pout, mult = unique_roots(p, tol=tol, rtype=rtype)
p = []
for k in range(len(pout)):
p.extend([pout[k]] * mult[k])
a = atleast_1d(poly(p))
if len(extra) > 0:
b = polymul(extra, a)
else:
b = [0]
indx = 0
for k in range(len(pout)):
temp = []
for l in range(len(pout)):
if l != k:
temp.extend([pout[l]] * mult[l])
for m in range(mult[k]):
t2 = temp[:]
t2.extend([pout[k]] * (mult[k] - m - 1))
b = polyadd(b, r[indx] * atleast_1d(poly(t2)))
indx += 1
b = real_if_close(b)
while allclose(b[0], 0, rtol=1e-14) and (b.shape[-1] > 1):
b = b[1:]
return b, a
def residue(b, a, tol=1e-3, rtype='avg'):
"""
Compute partial-fraction expansion of b(s) / a(s).
If ``M = len(b)`` and ``N = len(a)``, then the partial-fraction
expansion H(s) is defined as::
b(s) b[0] s**(M-1) + b[1] s**(M-2) + ... + b[M-1]
H(s) = ------ = ----------------------------------------------
a(s) a[0] s**(N-1) + a[1] s**(N-2) + ... + a[N-1]
r[0] r[1] r[-1]
= -------- + -------- + ... + --------- + k(s)
(s-p[0]) (s-p[1]) (s-p[-1])
If there are any repeated roots (closer together than `tol`), then H(s)
has terms like::
r[i] r[i+1] r[i+n-1]
-------- + ----------- + ... + -----------
(s-p[i]) (s-p[i])**2 (s-p[i])**n
Returns
-------
r : ndarray
Residues.
p : ndarray
Poles.
k : ndarray
Coefficients of the direct polynomial term.
See Also
--------
invres, numpy.poly, unique_roots
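Examples
--------
A brief sketch (coefficients chosen arbitrarily, outputs omitted)
expanding a simple rational transfer function into residues and poles:
>>> from scipy import signal
>>> r, p, k = signal.residue([2.0, 3.0], [1.0, 3.0, 2.0])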
"""
b, a = map(asarray, (b, a))
rscale = a[0]
k, b = polydiv(b, a)
p = roots(a)
r = p * 0.0
pout, mult = unique_roots(p, tol=tol, rtype=rtype)
p = []
for n in range(len(pout)):
p.extend([pout[n]] * mult[n])
p = asarray(p)
# Compute the residue from the general formula
indx = 0
for n in range(len(pout)):
bn = b.copy()
pn = []
for l in range(len(pout)):
if l != n:
pn.extend([pout[l]] * mult[l])
an = atleast_1d(poly(pn))
# bn(s) / an(s) is (s-po[n])**Nn * b(s) / a(s) where Nn is
# multiplicity of pole at po[n]
sig = mult[n]
for m in range(sig, 0, -1):
if sig > m:
# compute next derivative of bn(s) / an(s)
term1 = polymul(polyder(bn, 1), an)
term2 = polymul(bn, polyder(an, 1))
bn = polysub(term1, term2)
an = polymul(an, an)
r[indx + m - 1] = (polyval(bn, pout[n]) / polyval(an, pout[n])
/ factorial(sig - m))
indx += sig
return r / rscale, p, k
def residuez(b, a, tol=1e-3, rtype='avg'):
"""
Compute partial-fraction expansion of b(z) / a(z).
If ``M = len(b)`` and ``N = len(a)``::
b(z) b[0] + b[1] z**(-1) + ... + b[M-1] z**(-M+1)
H(z) = ------ = ----------------------------------------------
a(z) a[0] + a[1] z**(-1) + ... + a[N-1] z**(-N+1)
r[0] r[-1]
= --------------- + ... + ---------------- + k[0] + k[1]z**(-1) ...
(1-p[0]z**(-1)) (1-p[-1]z**(-1))
If there are any repeated roots (closer than tol), then the partial
fraction expansion has terms like::
r[i] r[i+1] r[i+n-1]
-------------- + ------------------ + ... + ------------------
(1-p[i]z**(-1)) (1-p[i]z**(-1))**2 (1-p[i]z**(-1))**n
See also
--------
invresz, unique_roots
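Examples
--------
A minimal sketch (coefficients chosen arbitrarily, outputs omitted) of a
z-domain expansion followed by reconstruction with `invresz`:
>>> from scipy import signal
>>> r, p, k = signal.residuez([1.0, 0.5], [1.0, -0.8, 0.15])
>>> b, a = signal.invresz(r, p, k)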
"""
b, a = map(asarray, (b, a))
gain = a[0]
brev, arev = b[::-1], a[::-1]
krev, brev = polydiv(brev, arev)
if krev == []:
k = []
else:
k = krev[::-1]
b = brev[::-1]
p = roots(a)
r = p * 0.0
pout, mult = unique_roots(p, tol=tol, rtype=rtype)
p = []
for n in range(len(pout)):
p.extend([pout[n]] * mult[n])
p = asarray(p)
# Compute the residue from the general formula (for discrete-time)
# the polynomial is in z**(-1) and the multiplication is by terms
# like this (1-p[i] z**(-1))**mult[i]. After differentiation,
# we must divide by (-p[i])**(m-k) as well as (m-k)!
indx = 0
for n in range(len(pout)):
bn = brev.copy()
pn = []
for l in range(len(pout)):
if l != n:
pn.extend([pout[l]] * mult[l])
an = atleast_1d(poly(pn))[::-1]
# bn(z) / an(z) is (1-po[n] z**(-1))**Nn * b(z) / a(z) where Nn is
# multiplicity of pole at po[n] and b(z) and a(z) are polynomials.
sig = mult[n]
for m in range(sig, 0, -1):
if sig > m:
# compute next derivative of bn(s) / an(s)
term1 = polymul(polyder(bn, 1), an)
term2 = polymul(bn, polyder(an, 1))
bn = polysub(term1, term2)
an = polymul(an, an)
r[indx + m - 1] = (polyval(bn, 1.0 / pout[n]) /
polyval(an, 1.0 / pout[n]) /
factorial(sig - m) / (-pout[n]) ** (sig - m))
indx += sig
return r / gain, p, k
def invresz(r, p, k, tol=1e-3, rtype='avg'):
"""
Compute b(z) and a(z) from partial fraction expansion.
If ``M = len(b)`` and ``N = len(a)``::
b(z) b[0] + b[1] z**(-1) + ... + b[M-1] z**(-M+1)
H(z) = ------ = ----------------------------------------------
a(z) a[0] + a[1] z**(-1) + ... + a[N-1] z**(-N+1)
r[0] r[-1]
= --------------- + ... + ---------------- + k[0] + k[1]z**(-1)...
(1-p[0]z**(-1)) (1-p[-1]z**(-1))
If there are any repeated roots (closer than tol), then the partial
fraction expansion has terms like::
r[i] r[i+1] r[i+n-1]
-------------- + ------------------ + ... + ------------------
(1-p[i]z**(-1)) (1-p[i]z**(-1))**2 (1-p[i]z**(-1))**n
See Also
--------
residuez, unique_roots, invres
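Examples
--------
A short sketch (arbitrary residues with a repeated pole at 0.5 and no
direct polynomial term):
>>> from scipy import signal
>>> r, p, k = [1.0, 2.0], [0.5, 0.5], []
>>> b, a = signal.invresz(r, p, k)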
"""
extra = asarray(k)
p, indx = cmplx_sort(p)
r = take(r, indx, 0)
pout, mult = unique_roots(p, tol=tol, rtype=rtype)
p = []
for k in range(len(pout)):
p.extend([pout[k]] * mult[k])
a = atleast_1d(poly(p))
if len(extra) > 0:
b = polymul(extra, a)
else:
b = [0]
indx = 0
brev = asarray(b)[::-1]
for k in range(len(pout)):
temp = []
# Construct polynomial which does not include any of this root
for l in range(len(pout)):
if l != k:
temp.extend([pout[l]] * mult[l])
for m in range(mult[k]):
t2 = temp[:]
t2.extend([pout[k]] * (mult[k] - m - 1))
brev = polyadd(brev, (r[indx] * atleast_1d(poly(t2)))[::-1])
indx += 1
b = real_if_close(brev[::-1])
return b, a
def resample(x, num, t=None, axis=0, window=None):
"""
Resample `x` to `num` samples using Fourier method along the given axis.
The resampled signal starts at the same value as `x` but is sampled
with a spacing of ``len(x) / num * (spacing of x)``. Because a
Fourier method is used, the signal is assumed to be periodic.
Parameters
----------
x : array_like
The data to be resampled.
num : int
The number of samples in the resampled signal.
t : array_like, optional
If `t` is given, it is assumed to be the sample positions
associated with the signal data in `x`.
axis : int, optional
The axis of `x` that is resampled. Default is 0.
window : array_like, callable, string, float, or tuple, optional
Specifies the window applied to the signal in the Fourier
domain. See below for details.
Returns
-------
resampled_x or (resampled_x, resampled_t)
Either the resampled array, or, if `t` was given, a tuple
containing the resampled array and the corresponding resampled
positions.
See also
--------
decimate
resample_poly
Notes
-----
The argument `window` controls a Fourier-domain window that tapers
the Fourier spectrum before zero-padding to alleviate ringing in
the resampled values for sampled signals you didn't intend to be
interpreted as band-limited.
If `window` is a function, then it is called with a vector of inputs
indicating the frequency bins (i.e. fftfreq(x.shape[axis]) ).
If `window` is an array of the same length as `x.shape[axis]` it is
assumed to be the window to be applied directly in the Fourier
domain (with dc and low-frequency first).
For any other type of `window`, the function `scipy.signal.get_window`
is called to generate the window.
The first sample of the returned vector is the same as the first
sample of the input vector. The spacing between samples is changed
from ``dx`` to ``dx * len(x) / num``.
If `t` is not None, then it represents the old sample positions,
and the new sample positions will be returned as well as the new
samples.
As noted, `resample` uses FFT transformations, which can be very
slow if the number of input or output samples is large and prime;
see `scipy.fftpack.fft`.
Examples
--------
Note that the end of the resampled data rises to meet the first
sample of the next cycle:
>>> from scipy import signal
>>> x = np.linspace(0, 10, 20, endpoint=False)
>>> y = np.cos(-x**2/6.0)
>>> f = signal.resample(y, 100)
>>> xnew = np.linspace(0, 10, 100, endpoint=False)
>>> import matplotlib.pyplot as plt
>>> plt.plot(x, y, 'go-', xnew, f, '.-', 10, y[0], 'ro')
>>> plt.legend(['data', 'resampled'], loc='best')
>>> plt.show()
"""
x = asarray(x)
X = fftpack.fft(x, axis=axis)
Nx = x.shape[axis]
if window is not None:
if callable(window):
W = window(fftpack.fftfreq(Nx))
elif isinstance(window, ndarray):
if window.shape != (Nx,):
raise ValueError('window must have the same length as data')
W = window
else:
W = fftpack.ifftshift(get_window(window, Nx))
newshape = [1] * x.ndim
newshape[axis] = len(W)
W.shape = newshape
X = X * W
sl = [slice(None)] * x.ndim
newshape = list(x.shape)
newshape[axis] = num
N = int(np.minimum(num, Nx))
Y = zeros(newshape, 'D')
sl[axis] = slice(0, (N + 1) // 2)
Y[sl] = X[sl]
sl[axis] = slice(-(N - 1) // 2, None)
Y[sl] = X[sl]
y = fftpack.ifft(Y, axis=axis) * (float(num) / float(Nx))
if x.dtype.char not in ['F', 'D']:
y = y.real
if t is None:
return y
else:
new_t = arange(0, num) * (t[1] - t[0]) * Nx / float(num) + t[0]
return y, new_t
def resample_poly(x, up, down, axis=0, window=('kaiser', 5.0)):
"""
Resample `x` along the given axis using polyphase filtering.
The signal `x` is upsampled by the factor `up`, a zero-phase low-pass
FIR filter is applied, and then it is downsampled by the factor `down`.
The resulting sample rate is ``up / down`` times the original sample
rate. Values beyond the boundary of the signal are assumed to be zero
during the filtering step.
Parameters
----------
x : array_like
The data to be resampled.
up : int
The upsampling factor.
down : int
The downsampling factor.
axis : int, optional
The axis of `x` that is resampled. Default is 0.
window : string or tuple of string and parameter values
Desired window to use to design the low-pass filter. See
`scipy.signal.get_window` for a list of windows and required
parameters.
Returns
-------
resampled_x : array
The resampled array.
See also
--------
decimate
resample
Notes
-----
This polyphase method will likely be faster than the Fourier method
in `scipy.signal.resample` when the number of samples is large and
prime, or when the number of samples is large and `up` and `down`
share a large greatest common divisor. The length of the FIR
filter used will depend on ``max(up, down) // gcd(up, down)``, and
the number of operations during polyphase filtering will depend on
the filter length and `down` (see `scipy.signal.upfirdn` for details).
The `window` argument is passed directly to `scipy.signal.firwin`
to design a low-pass filter.
The first sample of the returned vector is the same as the first
sample of the input vector. The spacing between samples is changed
from ``dx`` to ``dx * up / float(down)``.
Examples
--------
Note that the end of the resampled data rises to meet the first
sample of the next cycle for the FFT method, and gets closer to zero
for the polyphase method:
>>> from scipy import signal
>>> x = np.linspace(0, 10, 20, endpoint=False)
>>> y = np.cos(-x**2/6.0)
>>> f_fft = signal.resample(y, 100)
>>> f_poly = signal.resample_poly(y, 100, 20)
>>> xnew = np.linspace(0, 10, 100, endpoint=False)
>>> import matplotlib.pyplot as plt
>>> plt.plot(xnew, f_fft, 'b.-', xnew, f_poly, 'r.-')
>>> plt.plot(x, y, 'ko-')
>>> plt.plot(10, y[0], 'bo', 10, 0., 'ro') # boundaries
>>> plt.legend(['resample', 'resamp_poly', 'data'], loc='best')
>>> plt.show()
"""
x = asarray(x)
up = int(up)
down = int(down)
if up < 1 or down < 1:
raise ValueError('up and down must be >= 1')
# Determine our up and down factors
# Use a rational approximation to save computation time on really long
# signals
g_ = gcd(up, down)
up //= g_
down //= g_
if up == down == 1:
return x.copy()
n_out = (x.shape[axis] * up) // down
# Design a linear-phase low-pass FIR filter
max_rate = max(up, down)
f_c = 1. / max_rate # cutoff of FIR filter (rel. to Nyquist)
half_len = 10 * max_rate # reasonable cutoff for our sinc-like function
h = firwin(2 * half_len + 1, f_c, window=window)
h *= up
# Zero-pad our filter to put the output samples at the center
n_pre_pad = (down - half_len % down)
n_post_pad = 0
n_pre_remove = (half_len + n_pre_pad) // down
# We should rarely need to do this given our filter lengths...
while _output_len(len(h) + n_pre_pad + n_post_pad, x.shape[axis],
up, down) < n_out + n_pre_remove:
n_post_pad += 1
h = np.concatenate((np.zeros(n_pre_pad), h, np.zeros(n_post_pad)))
ufd = _UpFIRDn(h, x.dtype, up, down)
n_pre_remove_end = n_pre_remove + n_out
def apply_remove(x):
"""Apply the upfirdn filter and remove excess"""
return ufd.apply_filter(x)[n_pre_remove:n_pre_remove_end]
y = np.apply_along_axis(apply_remove, axis, x)
return y
def vectorstrength(events, period):
'''
Determine the vector strength of the events corresponding to the given
period.
The vector strength is a measure of phase synchrony, how well the
timing of the events is synchronized to a single period of a periodic
signal.
If multiple periods are used, calculate the vector strength of each.
This is called the "resonating vector strength".
Parameters
----------
events : 1D array_like
An array of time points containing the timing of the events.
period : float or array_like
The period of the signal that the events should synchronize to.
The period is in the same units as `events`. It can also be an array
of periods, in which case the outputs are arrays of the same length.
Returns
-------
strength : float or 1D array
The strength of the synchronization. 1.0 is perfect synchronization
and 0.0 is no synchronization. If `period` is an array, this is also
an array with each element containing the vector strength at the
corresponding period.
phase : float or array
The phase that the events are most strongly synchronized to in radians.
If `period` is an array, this is also an array with each element
containing the phase for the corresponding period.
References
----------
van Hemmen, JL, Longtin, A, and Vollmayr, AN. Testing resonating vector
strength: Auditory system, electric fish, and noise.
Chaos 21, 047508 (2011);
doi: 10.1063/1.3670512
van Hemmen, JL. Vector strength after Goldberg, Brown, and von Mises:
biological and mathematical perspectives. Biol Cybern.
2013 Aug;107(4):385-96. doi: 10.1007/s00422-013-0561-7.
van Hemmen, JL and Vollmayr, AN. Resonating vector strength: what happens
when we vary the "probing" frequency while keeping the spike times
fixed. Biol Cybern. 2013 Aug;107(4):491-94.
doi: 10.1007/s00422-013-0560-8
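Examples
--------
A brief sketch (event times chosen arbitrarily): events occurring at the
same phase of every period give a strength near 1, while events spread
uniformly over the period give a strength near 0.
>>> from scipy import signal
>>> events = np.arange(0.0, 100.0)  # one event per time unit
>>> strength, phase = signal.vectorstrength(events, period=1.0)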
'''
events = asarray(events)
period = asarray(period)
if events.ndim > 1:
raise ValueError('events cannot have dimensions more than 1')
if period.ndim > 1:
raise ValueError('period cannot have dimensions more than 1')
# we need to know later if period was originally a scalar
scalarperiod = not period.ndim
events = atleast_2d(events)
period = atleast_2d(period)
if (period <= 0).any():
raise ValueError('periods must be positive')
# this converts the times to vectors
vectors = exp(dot(2j*pi/period.T, events))
# the vector strength is just the magnitude of the mean of the vectors
# the vector phase is the angle of the mean of the vectors
vectormean = mean(vectors, axis=1)
strength = abs(vectormean)
phase = angle(vectormean)
# if the original period was a scalar, return scalars
if scalarperiod:
strength = strength[0]
phase = phase[0]
return strength, phase
def detrend(data, axis=-1, type='linear', bp=0):
"""
Remove linear trend along axis from data.
Parameters
----------
data : array_like
The input data.
axis : int, optional
The axis along which to detrend the data. By default this is the
last axis (-1).
type : {'linear', 'constant'}, optional
The type of detrending. If ``type == 'linear'`` (default),
the result of a linear least-squares fit to `data` is subtracted
from `data`.
If ``type == 'constant'``, only the mean of `data` is subtracted.
bp : array_like of ints, optional
A sequence of break points. If given, an individual linear fit is
performed for each part of `data` between two break points.
Break points are specified as indices into `data`.
Returns
-------
ret : ndarray
The detrended input data.
Examples
--------
>>> from scipy import signal
>>> randgen = np.random.RandomState(9)
>>> npoints = 1000
>>> noise = randgen.randn(npoints)
>>> x = 3 + 2*np.linspace(0, 1, npoints) + noise
>>> (signal.detrend(x) - noise).max() < 0.01
True
"""
if type not in ['linear', 'l', 'constant', 'c']:
raise ValueError("Trend type must be 'linear' or 'constant'.")
data = asarray(data)
dtype = data.dtype.char
if dtype not in 'dfDF':
dtype = 'd'
if type in ['constant', 'c']:
ret = data - expand_dims(mean(data, axis), axis)
return ret
else:
dshape = data.shape
N = dshape[axis]
bp = sort(unique(r_[0, bp, N]))
if np.any(bp > N):
raise ValueError("Breakpoints must be less than length "
"of data along given axis.")
Nreg = len(bp) - 1
# Restructure data so that axis is along first dimension and
# all other dimensions are collapsed into second dimension
rnk = len(dshape)
if axis < 0:
axis = axis + rnk
newdims = r_[axis, 0:axis, axis + 1:rnk]
newdata = reshape(transpose(data, tuple(newdims)),
(N, prod(dshape, axis=0) // N))
newdata = newdata.copy() # make sure we have a copy
if newdata.dtype.char not in 'dfDF':
newdata = newdata.astype(dtype)
# Find leastsq fit and remove it for each piece
for m in range(Nreg):
Npts = bp[m + 1] - bp[m]
A = ones((Npts, 2), dtype)
A[:, 0] = cast[dtype](arange(1, Npts + 1) * 1.0 / Npts)
sl = slice(bp[m], bp[m + 1])
coef, resids, rank, s = linalg.lstsq(A, newdata[sl])
newdata[sl] = newdata[sl] - dot(A, coef)
# Put data back in original shape.
tdshape = take(dshape, newdims, 0)
ret = reshape(newdata, tuple(tdshape))
vals = list(range(1, rnk))
olddims = vals[:axis] + [0] + vals[axis:]
ret = transpose(ret, tuple(olddims))
return ret
def lfilter_zi(b, a):
"""
Compute an initial state `zi` for the lfilter function that corresponds
to the steady state of the step response.
A typical use of this function is to set the initial state so that the
output of the filter starts at the same value as the first element of
the signal to be filtered.
Parameters
----------
b, a : array_like (1-D)
The IIR filter coefficients. See `lfilter` for more
information.
Returns
-------
zi : 1-D ndarray
The initial state for the filter.
See Also
--------
lfilter, lfiltic, filtfilt
Notes
-----
A linear filter with order m has a state space representation (A, B, C, D),
for which the output y of the filter can be expressed as::
z(n+1) = A*z(n) + B*x(n)
y(n) = C*z(n) + D*x(n)
where z(n) is a vector of length m, A has shape (m, m), B has shape
(m, 1), C has shape (1, m) and D has shape (1, 1) (assuming x(n) is
a scalar). lfilter_zi solves::
zi = A*zi + B
In other words, it finds the initial condition for which the response
to an input of all ones is a constant.
Given the filter coefficients `a` and `b`, the state space matrices
for the transposed direct form II implementation of the linear filter,
which is the implementation used by scipy.signal.lfilter, are::
A = scipy.linalg.companion(a).T
B = b[1:] - a[1:]*b[0]
assuming `a[0]` is 1.0; if `a[0]` is not 1, `a` and `b` are first
divided by a[0].
Examples
--------
The following code creates a lowpass Butterworth filter. Then it
applies that filter to an array whose values are all 1.0; the
output is also all 1.0, as expected for a lowpass filter. If the
`zi` argument of `lfilter` had not been given, the output would have
shown the transient signal.
>>> from numpy import array, ones
>>> from scipy.signal import lfilter, lfilter_zi, butter
>>> b, a = butter(5, 0.25)
>>> zi = lfilter_zi(b, a)
>>> y, zo = lfilter(b, a, ones(10), zi=zi)
>>> y
array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
Another example:
>>> x = array([0.5, 0.5, 0.5, 0.0, 0.0, 0.0, 0.0])
>>> y, zf = lfilter(b, a, x, zi=zi*x[0])
>>> y
array([ 0.5 , 0.5 , 0.5 , 0.49836039, 0.48610528,
0.44399389, 0.35505241])
Note that the `zi` argument to `lfilter` was computed using
`lfilter_zi` and scaled by `x[0]`. Then the output `y` has no
transient until the input drops from 0.5 to 0.0.
"""
# FIXME: Can this function be replaced with an appropriate
# use of lfiltic? For example, when b,a = butter(N,Wn),
# lfiltic(b, a, y=numpy.ones_like(a), x=numpy.ones_like(b)).
#
# We could use scipy.signal.normalize, but it uses warnings in
# cases where a ValueError is more appropriate, and it allows
# b to be 2D.
b = np.atleast_1d(b)
if b.ndim != 1:
raise ValueError("Numerator b must be 1-D.")
a = np.atleast_1d(a)
if a.ndim != 1:
raise ValueError("Denominator a must be 1-D.")
while len(a) > 1 and a[0] == 0.0:
a = a[1:]
if a.size < 1:
raise ValueError("There must be at least one nonzero `a` coefficient.")
if a[0] != 1.0:
# Normalize the coefficients so a[0] == 1.
b = b / a[0]
a = a / a[0]
n = max(len(a), len(b))
# Pad a or b with zeros so they are the same length.
if len(a) < n:
a = np.r_[a, np.zeros(n - len(a))]
elif len(b) < n:
b = np.r_[b, np.zeros(n - len(b))]
IminusA = np.eye(n - 1) - linalg.companion(a).T
B = b[1:] - a[1:] * b[0]
# Solve zi = A*zi + B
zi = np.linalg.solve(IminusA, B)
# For future reference: we could also use the following
# explicit formulas to solve the linear system:
#
# zi = np.zeros(n - 1)
# zi[0] = B.sum() / IminusA[:,0].sum()
# asum = 1.0
# csum = 0.0
# for k in range(1,n-1):
# asum += a[k]
# csum += b[k] - a[k]*b[0]
# zi[k] = asum*zi[0] - csum
return zi
def sosfilt_zi(sos):
"""
Compute an initial state `zi` for the sosfilt function that corresponds
to the steady state of the step response.
A typical use of this function is to set the initial state so that the
output of the filter starts at the same value as the first element of
the signal to be filtered.
Parameters
----------
sos : array_like
Array of second-order filter coefficients, must have shape
``(n_sections, 6)``. See `sosfilt` for the SOS filter format
specification.
Returns
-------
zi : ndarray
Initial conditions suitable for use with ``sosfilt``, shape
``(n_sections, 2)``.
See Also
--------
sosfilt, zpk2sos
Notes
-----
.. versionadded:: 0.16.0
Examples
--------
Filter a rectangular pulse that begins at time 0, with and without
the use of the `zi` argument of `scipy.signal.sosfilt`.
>>> from scipy import signal
>>> import matplotlib.pyplot as plt
>>> sos = signal.butter(9, 0.125, output='sos')
>>> zi = signal.sosfilt_zi(sos)
>>> x = (np.arange(250) < 100).astype(int)
>>> f1 = signal.sosfilt(sos, x)
>>> f2, zo = signal.sosfilt(sos, x, zi=zi)
>>> plt.plot(x, 'k--', label='x')
>>> plt.plot(f1, 'b', alpha=0.5, linewidth=2, label='filtered')
>>> plt.plot(f2, 'g', alpha=0.25, linewidth=4, label='filtered with zi')
>>> plt.legend(loc='best')
>>> plt.show()
"""
sos = np.asarray(sos)
if sos.ndim != 2 or sos.shape[1] != 6:
raise ValueError('sos must be shape (n_sections, 6)')
n_sections = sos.shape[0]
zi = np.empty((n_sections, 2))
scale = 1.0
for section in range(n_sections):
b = sos[section, :3]
a = sos[section, 3:]
zi[section] = scale * lfilter_zi(b, a)
# If H(z) = B(z)/A(z) is this section's transfer function, then
# b.sum()/a.sum() is H(1), the gain at omega=0. That's the steady
# state value of this section's step response.
scale *= b.sum() / a.sum()
return zi
def _filtfilt_gust(b, a, x, axis=-1, irlen=None):
"""Forward-backward IIR filter that uses Gustafsson's method.
Apply the IIR filter defined by `(b,a)` to `x` twice, first forward
then backward, using Gustafsson's initial conditions [1]_.
Let ``y_fb`` be the result of filtering first forward and then backward,
and let ``y_bf`` be the result of filtering first backward then forward.
Gustafsson's method is to compute initial conditions for the forward
pass and the backward pass such that ``y_fb == y_bf``.
Parameters
----------
b : scalar or 1-D ndarray
Numerator coefficients of the filter.
a : scalar or 1-D ndarray
Denominator coefficients of the filter.
x : ndarray
Data to be filtered.
axis : int, optional
Axis of `x` to be filtered. Default is -1.
irlen : int or None, optional
The length of the nonnegligible part of the impulse response.
If `irlen` is None, or if the length of the signal is less than
``2 * irlen``, then no part of the impulse response is ignored.
Returns
-------
y : ndarray
The filtered data.
x0 : ndarray
Initial condition for the forward filter.
x1 : ndarray
Initial condition for the backward filter.
Notes
-----
Typically the return values `x0` and `x1` are not needed by the
caller. The intended use of these return values is in unit tests.
References
----------
.. [1] F. Gustafsson. Determining the initial states in forward-backward
filtering. Transactions on Signal Processing, 46(4):988-992, 1996.
"""
# In the comments, "Gustafsson's paper" and [1] refer to the
# paper referenced in the docstring.
b = np.atleast_1d(b)
a = np.atleast_1d(a)
order = max(len(b), len(a)) - 1
if order == 0:
# The filter is just scalar multiplication, with no state.
scale = (b[0] / a[0])**2
y = scale * x
return y, np.array([]), np.array([])
if axis != -1 and axis != x.ndim - 1:
# Move the axis containing the data to the end.
x = np.swapaxes(x, axis, x.ndim - 1)
# n is the number of samples in the data to be filtered.
n = x.shape[-1]
if irlen is None or n <= 2*irlen:
m = n
else:
m = irlen
# Create Obs, the observability matrix (called O in the paper).
# This matrix can be interpreted as the operator that propagates
# an arbitrary initial state to the output, assuming the input is
# zero.
# In Gustafsson's paper, the forward and backward filters are not
# necessarily the same, so he has both O_f and O_b. We use the same
# filter in both directions, so we only need O. The same comment
# applies to S below.
Obs = np.zeros((m, order))
zi = np.zeros(order)
zi[0] = 1
Obs[:, 0] = lfilter(b, a, np.zeros(m), zi=zi)[0]
for k in range(1, order):
Obs[k:, k] = Obs[:-k, 0]
# Obsr is O^R (Gustafsson's notation for row-reversed O)
Obsr = Obs[::-1]
# Create S. S is the matrix that applies the filter to the reversed
# propagated initial conditions. That is,
# out = S.dot(zi)
# is the same as
# tmp, _ = lfilter(b, a, zeros(), zi=zi) # Propagate ICs.
# out = lfilter(b, a, tmp[::-1]) # Reverse and filter.
# Equations (5) & (6) of [1]
S = lfilter(b, a, Obs[::-1], axis=0)
# Sr is S^R (row-reversed S)
Sr = S[::-1]
# M is [(S^R - O), (O^R - S)]
if m == n:
M = np.hstack((Sr - Obs, Obsr - S))
else:
# Matrix described in section IV of [1].
M = np.zeros((2*m, 2*order))
M[:m, :order] = Sr - Obs
M[m:, order:] = Obsr - S
# Naive forward-backward and backward-forward filters.
# These have large transients because the filters use zero initial
# conditions.
y_f = lfilter(b, a, x)
y_fb = lfilter(b, a, y_f[..., ::-1])[..., ::-1]
y_b = lfilter(b, a, x[..., ::-1])[..., ::-1]
y_bf = lfilter(b, a, y_b)
delta_y_bf_fb = y_bf - y_fb
if m == n:
delta = delta_y_bf_fb
else:
start_m = delta_y_bf_fb[..., :m]
end_m = delta_y_bf_fb[..., -m:]
delta = np.concatenate((start_m, end_m), axis=-1)
# ic_opt holds the "optimal" initial conditions.
# The following code computes the result shown in the formula
# of the paper between equations (6) and (7).
if delta.ndim == 1:
ic_opt = linalg.lstsq(M, delta)[0]
else:
# Reshape delta so it can be used as an array of multiple
# right-hand-sides in linalg.lstsq.
delta2d = delta.reshape(-1, delta.shape[-1]).T
ic_opt0 = linalg.lstsq(M, delta2d)[0].T
ic_opt = ic_opt0.reshape(delta.shape[:-1] + (M.shape[-1],))
# Now compute the filtered signal using equation (7) of [1].
# First, form [S^R, O^R] and call it W.
if m == n:
W = np.hstack((Sr, Obsr))
else:
W = np.zeros((2*m, 2*order))
W[:m, :order] = Sr
W[m:, order:] = Obsr
# Equation (7) of [1] says
# Y_fb^opt = Y_fb^0 + W * [x_0^opt; x_{N-1}^opt]
# `wic` is (almost) the product on the right.
# W has shape (m, 2*order), and ic_opt has shape (..., 2*order),
# so we can't use W.dot(ic_opt). Instead, we dot ic_opt with W.T,
# so wic has shape (..., m).
wic = ic_opt.dot(W.T)
# `wic` is "almost" the product of W and the optimal ICs in equation
# (7)--if we're using a truncated impulse response (m < n), `wic`
# contains only the adjustments required for the ends of the signal.
# Here we form y_opt, taking this into account if necessary.
y_opt = y_fb
if m == n:
y_opt += wic
else:
y_opt[..., :m] += wic[..., :m]
y_opt[..., -m:] += wic[..., -m:]
x0 = ic_opt[..., :order]
x1 = ic_opt[..., -order:]
if axis != -1 and axis != x.ndim - 1:
# Restore the data axis to its original position.
x0 = np.swapaxes(x0, axis, x.ndim - 1)
x1 = np.swapaxes(x1, axis, x.ndim - 1)
y_opt = np.swapaxes(y_opt, axis, x.ndim - 1)
return y_opt, x0, x1
def filtfilt(b, a, x, axis=-1, padtype='odd', padlen=None, method='pad',
irlen=None):
"""
A forward-backward filter.
This function applies a linear filter twice, once forward and once
backwards. The combined filter has linear phase.
The function provides options for handling the edges of the signal.
When `method` is "pad", the function pads the data along the given axis
in one of three ways: odd, even or constant. The odd and even extensions
have the corresponding symmetry about the end point of the data. The
constant extension extends the data with the values at the end points. On
both the forward and backward passes, the initial condition of the
filter is found by using `lfilter_zi` and scaling it by the end point of
the extended data.
When `method` is "gust", Gustafsson's method [1]_ is used. Initial
conditions are chosen for the forward and backward passes so that the
forward-backward filter gives the same result as the backward-forward
filter.
Parameters
----------
b : (N,) array_like
The numerator coefficient vector of the filter.
a : (N,) array_like
The denominator coefficient vector of the filter. If ``a[0]``
is not 1, then both `a` and `b` are normalized by ``a[0]``.
x : array_like
The array of data to be filtered.
axis : int, optional
The axis of `x` to which the filter is applied.
Default is -1.
padtype : str or None, optional
Must be 'odd', 'even', 'constant', or None. This determines the
type of extension to use for the padded signal to which the filter
is applied. If `padtype` is None, no padding is used. The default
is 'odd'.
padlen : int or None, optional
The number of elements by which to extend `x` at both ends of
`axis` before applying the filter. This value must be less than
``x.shape[axis] - 1``. ``padlen=0`` implies no padding.
The default value is ``3 * max(len(a), len(b))``.
method : str, optional
Determines the method for handling the edges of the signal, either
"pad" or "gust". When `method` is "pad", the signal is padded; the
type of padding is determined by `padtype` and `padlen`, and `irlen`
is ignored. When `method` is "gust", Gustafsson's method is used,
and `padtype` and `padlen` are ignored.
irlen : int or None, optional
When `method` is "gust", `irlen` specifies the length of the
impulse response of the filter. If `irlen` is None, no part
of the impulse response is ignored. For a long signal, specifying
`irlen` can significantly improve the performance of the filter.
Returns
-------
y : ndarray
The filtered output, an array of type numpy.float64 with the same
shape as `x`.
See Also
--------
lfilter_zi, lfilter, lfiltic, savgol_filter, sosfilt
Notes
-----
The option to use Gustafsson's method was added in scipy version 0.16.0.
References
----------
.. [1] F. Gustafsson, "Determining the initial states in forward-backward
filtering", Transactions on Signal Processing, Vol. 46, pp. 988-992,
1996.
Examples
--------
The examples will use several functions from `scipy.signal`.
>>> from scipy import signal
>>> import matplotlib.pyplot as plt
First we create a one second signal that is the sum of two pure sine
waves, with frequencies 5 Hz and 250 Hz, sampled at 2000 Hz.
>>> t = np.linspace(0, 1.0, 2001)
>>> xlow = np.sin(2 * np.pi * 5 * t)
>>> xhigh = np.sin(2 * np.pi * 250 * t)
>>> x = xlow + xhigh
Now create a lowpass Butterworth filter with a cutoff of 0.125 times
the Nyquist rate, or 125 Hz, and apply it to ``x`` with `filtfilt`.
The result should be approximately ``xlow``, with no phase shift.
>>> b, a = signal.butter(8, 0.125)
>>> y = signal.filtfilt(b, a, x, padlen=150)
>>> np.abs(y - xlow).max()
9.1086182074789912e-06
We get a fairly clean result for this artificial example because
the odd extension is exact, and with the moderately long padding,
the filter's transients have dissipated by the time the actual data
is reached. In general, transient effects at the edges are
unavoidable.
The following example demonstrates the option ``method="gust"``.
First, create a filter.
>>> b, a = signal.ellip(4, 0.01, 120, 0.125) # Filter to be applied.
>>> np.random.seed(123456)
`sig` is a random input signal to be filtered.
>>> n = 60
>>> sig = np.random.randn(n)**3 + 3*np.random.randn(n).cumsum()
Apply `filtfilt` to `sig`, once using the Gustafsson method, and
once using padding, and plot the results for comparison.
>>> fgust = signal.filtfilt(b, a, sig, method="gust")
>>> fpad = signal.filtfilt(b, a, sig, padlen=50)
>>> plt.plot(sig, 'k-', label='input')
>>> plt.plot(fgust, 'b-', linewidth=4, label='gust')
>>> plt.plot(fpad, 'c-', linewidth=1.5, label='pad')
>>> plt.legend(loc='best')
>>> plt.show()
The `irlen` argument can be used to improve the performance
of Gustafsson's method.
Estimate the impulse response length of the filter.
>>> z, p, k = signal.tf2zpk(b, a)
>>> eps = 1e-9
>>> r = np.max(np.abs(p))
>>> approx_impulse_len = int(np.ceil(np.log(eps) / np.log(r)))
>>> approx_impulse_len
137
Apply the filter to a longer signal, with and without the `irlen`
argument. The difference between `y1` and `y2` is small. For long
signals, using `irlen` gives a significant performance improvement.
>>> x = np.random.randn(5000)
>>> y1 = signal.filtfilt(b, a, x, method='gust')
>>> y2 = signal.filtfilt(b, a, x, method='gust', irlen=approx_impulse_len)
>>> print(np.max(np.abs(y1 - y2)))
1.80056858312e-10
"""
b = np.atleast_1d(b)
a = np.atleast_1d(a)
x = np.asarray(x)
if method not in ["pad", "gust"]:
raise ValueError("method must be 'pad' or 'gust'.")
if method == "gust":
y, z1, z2 = _filtfilt_gust(b, a, x, axis=axis, irlen=irlen)
return y
# `method` is "pad"...
ntaps = max(len(a), len(b))
if padtype not in ['even', 'odd', 'constant', None]:
raise ValueError(("Unknown value '%s' given to padtype. padtype "
"must be 'even', 'odd', 'constant', or None.") %
padtype)
if padtype is None:
padlen = 0
if padlen is None:
# Original padding; preserved for backwards compatibility.
edge = ntaps * 3
else:
edge = padlen
# x's 'axis' dimension must be bigger than edge.
if x.shape[axis] <= edge:
raise ValueError("The length of the input vector x must be at least "
"padlen, which is %d." % edge)
if padtype is not None and edge > 0:
# Make an extension of length `edge` at each
# end of the input array.
if padtype == 'even':
ext = even_ext(x, edge, axis=axis)
elif padtype == 'odd':
ext = odd_ext(x, edge, axis=axis)
else:
ext = const_ext(x, edge, axis=axis)
else:
ext = x
# Get the steady state of the filter's step response.
zi = lfilter_zi(b, a)
# Reshape zi and create x0 so that zi*x0 broadcasts
# to the correct value for the 'zi' keyword argument
# to lfilter.
zi_shape = [1] * x.ndim
zi_shape[axis] = zi.size
zi = np.reshape(zi, zi_shape)
x0 = axis_slice(ext, stop=1, axis=axis)
# Forward filter.
(y, zf) = lfilter(b, a, ext, axis=axis, zi=zi * x0)
# Backward filter.
# Create y0 so zi*y0 broadcasts appropriately.
y0 = axis_slice(y, start=-1, axis=axis)
(y, zf) = lfilter(b, a, axis_reverse(y, axis=axis), axis=axis, zi=zi * y0)
# Reverse y.
y = axis_reverse(y, axis=axis)
if edge > 0:
# Slice the actual signal from the extended signal.
y = axis_slice(y, start=edge, stop=-edge, axis=axis)
return y
def sosfilt(sos, x, axis=-1, zi=None):
"""
Filter data along one dimension using cascaded second-order sections
Filter a data sequence, `x`, using a digital IIR filter defined by
`sos`. This is implemented by performing `lfilter` for each
second-order section. See `lfilter` for details.
Parameters
----------
sos : array_like
Array of second-order filter coefficients, must have shape
``(n_sections, 6)``. Each row corresponds to a second-order
section, with the first three columns providing the numerator
coefficients and the last three providing the denominator
coefficients.
x : array_like
An N-dimensional input array.
axis : int, optional
The axis of the input data array along which to apply the
linear filter. The filter is applied to each subarray along
this axis. Default is -1.
zi : array_like, optional
Initial conditions for the cascaded filter delays. It is a (at
least 2D) vector of shape ``(n_sections, ..., 2, ...)``, where
``..., 2, ...`` denotes the shape of `x`, but with ``x.shape[axis]``
replaced by 2. If `zi` is None or is not given then initial rest
(i.e. all zeros) is assumed.
Note that these initial conditions are *not* the same as the initial
conditions given by `lfiltic` or `lfilter_zi`.
Returns
-------
y : ndarray
The output of the digital filter.
zf : ndarray, optional
If `zi` is None, this is not returned, otherwise, `zf` holds the
final filter delay values.
See Also
--------
zpk2sos, sos2zpk, sosfilt_zi
Notes
-----
The filter function is implemented as a series of second-order filters
with direct-form II transposed structure. It is designed to minimize
numerical precision errors for high-order filters.
.. versionadded:: 0.16.0
Examples
--------
Plot a 13th-order filter's impulse response using both `lfilter` and
`sosfilt`, showing the instability that results from trying to do a
13th-order filter in a single stage (the numerical error pushes some poles
outside of the unit circle):
>>> import matplotlib.pyplot as plt
>>> from scipy import signal
>>> b, a = signal.ellip(13, 0.009, 80, 0.05, output='ba')
>>> sos = signal.ellip(13, 0.009, 80, 0.05, output='sos')
>>> x = np.zeros(700)
>>> x[0] = 1.
>>> y_tf = signal.lfilter(b, a, x)
>>> y_sos = signal.sosfilt(sos, x)
>>> plt.plot(y_tf, 'r', label='TF')
>>> plt.plot(y_sos, 'k', label='SOS')
>>> plt.legend(loc='best')
>>> plt.show()
"""
x = np.asarray(x)
sos = atleast_2d(sos)
if sos.ndim != 2:
raise ValueError('sos array must be 2D')
n_sections, m = sos.shape
if m != 6:
raise ValueError('sos array must be shape (n_sections, 6)')
use_zi = zi is not None
if use_zi:
zi = np.asarray(zi)
x_zi_shape = list(x.shape)
x_zi_shape[axis] = 2
x_zi_shape = tuple([n_sections] + x_zi_shape)
if zi.shape != x_zi_shape:
raise ValueError('Invalid zi shape. With axis=%r, an input with '
'shape %r, and an sos array with %d sections, zi '
'must have shape %r.' %
(axis, x.shape, n_sections, x_zi_shape))
zf = zeros_like(zi)
for section in range(n_sections):
if use_zi:
x, zf[section] = lfilter(sos[section, :3], sos[section, 3:],
x, axis, zi=zi[section])
else:
x = lfilter(sos[section, :3], sos[section, 3:], x, axis)
out = (x, zf) if use_zi else x
return out
def decimate(x, q, n=None, ftype='iir', axis=-1):
"""
Downsample the signal by using a filter.
    By default, an order 8 Chebyshev type I filter is used. An order 30 FIR
    filter with a Hamming window is used if `ftype` is 'fir'.
Parameters
----------
x : ndarray
The signal to be downsampled, as an N-dimensional array.
q : int
The downsampling factor.
n : int, optional
The order of the filter (1 less than the length for 'fir').
ftype : str {'iir', 'fir'}, optional
The type of the lowpass filter.
axis : int, optional
The axis along which to decimate.
Returns
-------
y : ndarray
The down-sampled signal.
See Also
--------
resample
resample_poly
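    Examples
    --------
    A minimal usage sketch (the signal below is purely illustrative):
    >>> import numpy as np
    >>> from scipy import signal
    >>> t = np.linspace(0, 1, 1000, endpoint=False)
    >>> x = np.sin(2 * np.pi * 5 * t)
    >>> y = signal.decimate(x, 4)
    >>> y.shape
    (250,)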
"""
if not isinstance(q, int):
raise TypeError("q must be an integer")
if n is None:
if ftype == 'fir':
n = 30
else:
n = 8
if ftype == 'fir':
b = firwin(n + 1, 1. / q, window='hamming')
a = 1.
else:
b, a = cheby1(n, 0.05, 0.8 / q)
y = lfilter(b, a, x, axis=axis)
sl = [slice(None)] * y.ndim
sl[axis] = slice(None, None, q)
    # Index with a tuple of slices (indexing with a list is deprecated in NumPy).
    return y[tuple(sl)]
|
Gillu13/scipy
|
scipy/signal/signaltools.py
|
Python
|
bsd-3-clause
| 96,375
|
[
"Gaussian"
] |
425f07aafe9db6b9f713961ff78c124dde2fb0bbc83c761f7bb1b9677ff32ea9
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# This file is part of the SPORCO package. Details of the copyright
# and user license can be found in the 'LICENSE.txt' file distributed
# with the package.
"""
Colour ℓ2-TV Denoising
======================
This example demonstrates the use of class :class:`.tvl2.TVL2Denoise` for removing Gaussian white noise from a colour image using Total Variation regularization with an ℓ2 data fidelity term (ℓ2-TV denoising).
"""
from __future__ import print_function
from builtins import input
from builtins import range
import numpy as np
from sporco.admm import tvl2
from sporco import util
from sporco import metric
from sporco import plot
"""
Load reference image.
"""
img = util.ExampleImages().image('monarch.png', scaled=True,
idxexp=np.s_[:,160:672])
"""
Construct test image corrupted by Gaussian white noise with a 0.05 standard deviation.
"""
np.random.seed(12345)
imgn = img + np.random.normal(0.0, 0.05, img.shape)
"""
Set regularization parameter and options for ℓ2-TV denoising solver. The regularization parameter used here has been manually selected for good performance.
"""
lmbda = 0.04
opt = tvl2.TVL2Denoise.Options({'Verbose': True, 'MaxMainIter': 200,
'gEvalY': False, 'AutoRho': {'Enabled': True}})
"""
Create solver object and solve, returning the denoised image ``imgr``.
"""
b = tvl2.TVL2Denoise(imgn, lmbda, opt)
imgr = b.solve()
"""
Display solve time and denoising performance.
"""
print("TVL2Denoise solve time: %5.2f s" % b.timer.elapsed('solve'))
print("Noisy image PSNR: %5.2f dB" % metric.psnr(img, imgn))
print("Denoised image PSNR: %5.2f dB" % metric.psnr(img, imgr))
"""
Display reference, corrupted, and denoised images.
"""
fig = plot.figure(figsize=(20, 5))
plot.subplot(1, 3, 1)
plot.imview(img, fig=fig, title='Reference')
plot.subplot(1, 3, 2)
plot.imview(imgn, fig=fig, title='Corrupted')
plot.subplot(1, 3, 3)
plot.imview(imgr, fig=fig, title=r'Restored ($\ell_2$-TV)')
fig.show()
"""
Get iteration statistics from the solver object and plot the functional value, ADMM primal and dual residuals, and the automatically adjusted ADMM penalty parameter against the iteration number.
"""
its = b.getitstat()
fig = plot.figure(figsize=(20, 5))
plot.subplot(1, 3, 1)
plot.plot(its.ObjFun, fig=fig, xlbl='Iterations', ylbl='Functional')
plot.subplot(1, 3, 2)
plot.plot(np.vstack((its.PrimalRsdl, its.DualRsdl)).T, fig=fig,
ptyp='semilogy', xlbl='Iterations', ylbl='Residual',
lgnd=['Primal', 'Dual'])
plot.subplot(1, 3, 3)
plot.plot(its.Rho, fig=fig, xlbl='Iterations', ylbl='Penalty Parameter')
fig.show()
# Wait for enter on keyboard
input()
|
alphacsc/alphacsc
|
alphacsc/other/sporco/examples/scripts/tv/tvl2den_clr.py
|
Python
|
bsd-3-clause
| 2,725
|
[
"Gaussian"
] |
4beb2d5a68f1b52a164704d05878433c3c1ca205f6f32fa3bbf3b794b709bf69
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.conf import settings
from django.conf.urls import include, url
from django.conf.urls.static import static
from django.contrib import admin
from django.views import defaults as default_views
from django.views.generic import TemplateView
urlpatterns = [
url(r'^$', TemplateView.as_view(template_name='pages/home.html'), name='home'),
url(r'^about/$', TemplateView.as_view(template_name='pages/about.html'), name='about'),
url(r'^calendar/$', TemplateView.as_view(template_name='pages/calendar.html'), name='calendar'),
# Django Admin, use {% url 'admin:index' %}
url(settings.ADMIN_URL, admin.site.urls),
# User management
url(r'^users/', include('ninexd.users.urls', namespace='users')),
url(r'^accounts/', include('allauth.urls')),
# Your stuff: custom urls includes go here
url(r'^posts/', include('posts.urls', namespace='posts')),
url(r'^notice/', include('notice.urls', namespace='notice')),
url(r'^common/', include('common.urls', namespace='common')),
url(r'^api-auth/', include('rest_framework.urls', namespace='rest_framework')),
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
if settings.DEBUG:
    # This allows the error pages to be debugged during development; just visit
    # these URLs in a browser to see what these error pages look like.
urlpatterns += [
url(r'^400/$', default_views.bad_request, kwargs={'exception': Exception('Bad Request!')}),
url(r'^403/$', default_views.permission_denied, kwargs={'exception': Exception('Permission Denied')}),
url(r'^404/$', default_views.page_not_found, kwargs={'exception': Exception('Page not Found')}),
url(r'^500/$', default_views.server_error),
]
if 'debug_toolbar' in settings.INSTALLED_APPS:
import debug_toolbar
urlpatterns += [
url(r'^__debug__/', include(debug_toolbar.urls)),
]
|
9XD/9XD
|
config/urls.py
|
Python
|
mit
| 1,977
|
[
"VisIt"
] |
c0e46e262f3b277da33c476fb7222bd0a2d2aadf52c0fb9513aa83bcb0e28c85
|
#!/usr/bin/env python3
from accessoryFunctions.accessoryFunctions import MetadataObject
from geneseekr.geneseekr import GeneSeekr
from geneseekr.blast import BLAST
import multiprocessing
from glob import glob
from time import time
import pytest
import os
test_path = os.path.abspath(os.path.dirname(__file__))
__author__ = 'adamkoziol'
@pytest.fixture()
def variables():
v = MetadataObject()
datapath = os.path.join(test_path, 'testdata')
v.sequencepath = os.path.join(datapath, 'aa_sequences')
v.targetpath = os.path.join(datapath, 'databases', 'resfinder')
v.reportpath = os.path.join(datapath, 'reports')
v.cutoff = 70
v.evalue = '1E-05'
v.align = False
v.unique = False
v.resfinder = False
v.virulencefinder = False
v.numthreads = multiprocessing.cpu_count()
v.start = time()
return v
def variable_update():
global method
method = method_init(variables())
@pytest.fixture()
def method_init(variables, analysistype, program, align, unique):
global method
variables.analysistype = analysistype
variables.program = program
variables.align = align
variables.unique = unique
method = BLAST(variables)
return method
tblastn_method = method_init(variables(), 'resfinder', 'tblastn', True, True)
def test_parser():
assert os.path.basename(tblastn_method.targets[0]) == 'beta-lactam.tfa'
def test_combined_files():
assert os.path.isfile(tblastn_method.combinedtargets)
def test_strains():
assert os.path.isfile(tblastn_method.strains[0])
def test_strain():
assert os.path.basename(tblastn_method.strains[0]) == 'amr_test.fasta'
def test_makeblastdb(variables):
global geneseekr
geneseekr = GeneSeekr()
geneseekr.makeblastdb(fasta=tblastn_method.combinedtargets,
program=tblastn_method.program)
assert os.path.isfile(os.path.join(variables.targetpath, 'combinedtargets.nsq'))
def test_variable_populate():
global targetfolders
global targetfiles
global records
targetfolders, targetfiles, records = \
geneseekr.target_folders(metadata=tblastn_method.metadata,
analysistype=tblastn_method.analysistype)
def test_targetfolders():
assert os.path.basename(list(targetfolders)[0]) == 'resfinder'
def test_targetfiles():
assert targetfiles[0] == tblastn_method.combinedtargets
def test_records():
assert records[targetfiles[0]]['ampH_2_HQ586946']
def test_tblastn(variables):
global tblastn_report
tblastn_method.metadata = geneseekr.run_blast(metadata=tblastn_method.metadata,
analysistype=tblastn_method.analysistype,
program=tblastn_method.program,
outfmt=tblastn_method.outfmt,
evalue=tblastn_method.evalue,
num_threads=tblastn_method.cpus)
tblastn_report = os.path.join(variables.reportpath, 'amr_test_tblastn_resfinder.tsv')
assert os.path.isfile(tblastn_report)
def test_enhance_report_parsing():
geneseekr.parseable_blast_outputs(metadata=tblastn_method.metadata,
analysistype=tblastn_method.analysistype,
fieldnames=tblastn_method.fieldnames,
program=tblastn_method.program)
header = open(tblastn_report).readline()
assert header.split('\t')[0] == 'query_id'
def test_tblastn_results():
with open(tblastn_report) as blast_results:
next(blast_results)
data = blast_results.readline()
results = data.split('\t')
assert int(results[2]) >= 50
def test_blast_parse():
tblastn_method.metadata = geneseekr.unique_parse_blast(metadata=tblastn_method.metadata,
analysistype=tblastn_method.analysistype,
fieldnames=tblastn_method.fieldnames,
cutoff=tblastn_method.cutoff,
program=tblastn_method.program)
for sample in tblastn_method.metadata:
assert sample.resfinder.queryranges['contig2'] == [[1, 264]]
def test_filter():
tblastn_method.metadata = geneseekr.filter_unique(metadata=tblastn_method.metadata,
analysistype=tblastn_method.analysistype)
for sample in tblastn_method.metadata:
assert sample.resfinder.blastlist[0]['percentidentity'] >= 70
def test_dict_create():
tblastn_method.metadata = geneseekr.dict_initialise(metadata=tblastn_method.metadata,
analysistype=tblastn_method.analysistype)
for sample in tblastn_method.metadata:
assert type(sample.resfinder.protseq) is dict
def test_report_creation():
tblastn_method.metadata = geneseekr.resfinder_reporter(metadata=tblastn_method.metadata,
analysistype=tblastn_method.analysistype,
reportpath=tblastn_method.reportpath,
align=tblastn_method.align,
program=tblastn_method.program,
targetpath=tblastn_method.targetpath,
cutoff=tblastn_method.cutoff)
def test_report_existence():
global geneseekr_report
geneseekr_report = os.path.join(tblastn_method.reportpath, 'resfinder_tblastn.xlsx')
assert os.path.isfile(geneseekr_report)
def test_report_row():
for sample in tblastn_method.metadata:
assert sorted(sample.resfinder.sampledata)[0][0] == 'blaOXA'
def test_parse_results():
for sample in tblastn_method.metadata:
assert sample.resfinder.blastresults['blaOXA_427_1_KX827604'] == 94.34
def test_aaseq():
for sample in tblastn_method.metadata:
assert sample.resfinder.blastlist[0]['query_sequence'][:5] == 'MSRIL'
def test_fasta_create(variables):
global fasta_file
geneseekr.export_fasta(metadata=tblastn_method.metadata,
analysistype=tblastn_method.analysistype,
reportpath=tblastn_method.reportpath,
cutoff=tblastn_method.cutoff,
program=tblastn_method.program)
fasta_file = os.path.join(variables.reportpath, 'amr_test_resfinder.fasta')
assert os.path.isfile(fasta_file)
header = open(fasta_file, 'r').readline().rstrip()
assert header == '>amr_test_blaOXA_427_1_KX827604'
def test_combined_targets_clean():
os.remove(tblastn_method.combinedtargets)
def test_makeblastdb_clean(variables):
databasefiles = glob(os.path.join(variables.targetpath, 'combinedtargets.n*'))
for dbfile in databasefiles:
os.remove(dbfile)
def test_remove_tblastn_report():
os.remove(tblastn_report)
def test_remove_fasta_file():
os.remove(fasta_file)
def test_remove_geneseekr_report():
os.remove(geneseekr_report)
def test_remove_report_path():
os.rmdir(tblastn_method.reportpath)
|
OLC-Bioinformatics/pythonGeneSeekr
|
tests/test_tblastn.py
|
Python
|
mit
| 7,491
|
[
"BLAST"
] |
5536105adb271ab712502cf8c20f1520c494346943f2616e5c994bc6ffe9aca6
|
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Methods to read data in the graph."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow.contrib.input_pipeline.python.ops import input_pipeline_ops
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import errors
from tensorflow.python.framework import ops
from tensorflow.python.framework import sparse_tensor
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import data_flow_ops
from tensorflow.python.ops import io_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import parsing_ops
from tensorflow.python.ops import variables as var_ops
from tensorflow.python.platform import gfile
from tensorflow.python.summary import summary
from tensorflow.python.training import input as input_ops
from tensorflow.python.training import queue_runner
# Default name for key in the feature dict.
KEY_FEATURE_NAME = '__key__'
def read_batch_examples(file_pattern,
batch_size,
reader,
randomize_input=True,
num_epochs=None,
queue_capacity=10000,
num_threads=1,
read_batch_size=1,
parse_fn=None,
name=None,
seed=None):
"""Adds operations to read, queue, batch `Example` protos.
Given file pattern (or list of files), will setup a queue for file names,
read `Example` proto using provided `reader`, use batch queue to create
batches of examples of size `batch_size`.
All queue runners are added to the queue runners collection, and may be
started via `start_queue_runners`.
All ops are added to the default graph.
Use `parse_fn` if you need to do parsing / processing on single examples.
Args:
file_pattern: List of files or pattern of file paths containing
`Example` records. See `tf.gfile.Glob` for pattern rules.
batch_size: An int or scalar `Tensor` specifying the batch size to use.
reader: A function or class that returns an object with
`read` method, (filename tensor) -> (example tensor).
randomize_input: Whether the input should be randomized.
num_epochs: Integer specifying the number of times to read through the
dataset. If `None`, cycles through the dataset forever.
NOTE - If specified, creates a variable that must be initialized, so call
      `tf.local_variables_initializer()` and run the op in a session.
queue_capacity: Capacity for input queue.
num_threads: The number of threads enqueuing examples.
read_batch_size: An int or scalar `Tensor` specifying the number of
records to read at once
parse_fn: Parsing function, takes `Example` Tensor returns parsed
representation. If `None`, no parsing is done.
name: Name of resulting op.
seed: An integer (optional). Seed used if randomize_input == True.
Returns:
String `Tensor` of batched `Example` proto.
Raises:
ValueError: for invalid inputs.
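  Example:
    A minimal, illustrative sketch (the file pattern and values below are
    placeholders, not part of this module):
      examples = tf.contrib.learn.read_batch_examples(
          file_pattern='data/train-*.tfrecord',
          batch_size=32,
          reader=tf.TFRecordReader,
          num_epochs=5)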
"""
_, examples = read_keyed_batch_examples(
file_pattern=file_pattern,
batch_size=batch_size,
reader=reader,
randomize_input=randomize_input,
num_epochs=num_epochs,
queue_capacity=queue_capacity,
num_threads=num_threads,
read_batch_size=read_batch_size,
parse_fn=parse_fn,
name=name,
seed=seed)
return examples
def read_keyed_batch_examples(file_pattern,
batch_size,
reader,
randomize_input=True,
num_epochs=None,
queue_capacity=10000,
num_threads=1,
read_batch_size=1,
parse_fn=None,
name=None,
seed=None):
"""Adds operations to read, queue, batch `Example` protos.
Given file pattern (or list of files), will setup a queue for file names,
read `Example` proto using provided `reader`, use batch queue to create
batches of examples of size `batch_size`.
All queue runners are added to the queue runners collection, and may be
started via `start_queue_runners`.
All ops are added to the default graph.
Use `parse_fn` if you need to do parsing / processing on single examples.
Args:
file_pattern: List of files or pattern of file paths containing
`Example` records. See `tf.gfile.Glob` for pattern rules.
batch_size: An int or scalar `Tensor` specifying the batch size to use.
reader: A function or class that returns an object with
`read` method, (filename tensor) -> (example tensor).
randomize_input: Whether the input should be randomized.
num_epochs: Integer specifying the number of times to read through the
dataset. If `None`, cycles through the dataset forever.
NOTE - If specified, creates a variable that must be initialized, so call
      `tf.local_variables_initializer()` and run the op in a session.
queue_capacity: Capacity for input queue.
num_threads: The number of threads enqueuing examples.
read_batch_size: An int or scalar `Tensor` specifying the number of
records to read at once
parse_fn: Parsing function, takes `Example` Tensor returns parsed
representation. If `None`, no parsing is done.
name: Name of resulting op.
seed: An integer (optional). Seed used if randomize_input == True.
Returns:
Returns tuple of:
- `Tensor` of string keys.
- String `Tensor` of batched `Example` proto.
Raises:
ValueError: for invalid inputs.
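  Example:
    A minimal, illustrative sketch (placeholders only) showing the returned
    key/example pair:
      keys, examples = tf.contrib.learn.read_keyed_batch_examples(
          file_pattern='data/train-*.tfrecord',
          batch_size=32,
          reader=tf.TFRecordReader)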
"""
return _read_keyed_batch_examples_helper(
file_pattern,
batch_size,
reader,
randomize_input=randomize_input,
num_epochs=num_epochs,
queue_capacity=queue_capacity,
num_threads=num_threads,
read_batch_size=read_batch_size,
parse_fn=parse_fn,
setup_shared_queue=False,
name=name,
seed=seed)
def _read_keyed_batch_examples_shared_queue(file_pattern,
batch_size,
reader,
randomize_input=True,
num_epochs=None,
queue_capacity=10000,
num_threads=1,
read_batch_size=1,
parse_fn=None,
name=None,
seed=None):
"""Adds operations to read, queue, batch `Example` protos.
Given file pattern (or list of files), will setup a shared queue for file
names, setup a worker queue that pulls from the shared queue, read `Example`
protos using provided `reader`, use batch queue to create batches of examples
of size `batch_size`. This provides at most once visit guarantees. Note that
this only works if the parameter servers are not pre-empted or restarted or
the session is not restored from a checkpoint since the state of a queue
is not checkpointed and we will end up restarting from the entire list of
files.
All queue runners are added to the queue runners collection, and may be
started via `start_queue_runners`.
All ops are added to the default graph.
Use `parse_fn` if you need to do parsing / processing on single examples.
Args:
file_pattern: List of files or pattern of file paths containing
`Example` records. See `tf.gfile.Glob` for pattern rules.
batch_size: An int or scalar `Tensor` specifying the batch size to use.
reader: A function or class that returns an object with
`read` method, (filename tensor) -> (example tensor).
randomize_input: Whether the input should be randomized.
num_epochs: Integer specifying the number of times to read through the
dataset. If `None`, cycles through the dataset forever.
NOTE - If specified, creates a variable that must be initialized, so call
      `tf.local_variables_initializer()` and run the op in a session.
queue_capacity: Capacity for input queue.
num_threads: The number of threads enqueuing examples.
read_batch_size: An int or scalar `Tensor` specifying the number of
records to read at once
parse_fn: Parsing function, takes `Example` Tensor returns parsed
representation. If `None`, no parsing is done.
name: Name of resulting op.
seed: An integer (optional). Seed used if randomize_input == True.
Returns:
Returns tuple of:
- `Tensor` of string keys.
- String `Tensor` of batched `Example` proto.
Raises:
ValueError: for invalid inputs.
"""
return _read_keyed_batch_examples_helper(
file_pattern,
batch_size,
reader,
randomize_input=randomize_input,
num_epochs=num_epochs,
queue_capacity=queue_capacity,
num_threads=num_threads,
read_batch_size=read_batch_size,
parse_fn=parse_fn,
setup_shared_queue=True,
name=name,
seed=seed)
def _get_file_names(file_pattern, randomize_input):
"""Parse list of file names from pattern, optionally shuffled.
Args:
file_pattern: File glob pattern, or list of strings.
randomize_input: Whether to shuffle the order of file names.
Returns:
List of file names matching `file_pattern`.
Raises:
ValueError: If `file_pattern` is empty, or pattern matches no files.
"""
if isinstance(file_pattern, list):
file_names = file_pattern
if not file_names:
raise ValueError('No files given to dequeue_examples.')
else:
file_names = list(gfile.Glob(file_pattern))
if not file_names:
raise ValueError('No files match %s.' % file_pattern)
# Sort files so it will be deterministic for unit tests. They'll be shuffled
# in `string_input_producer` if `randomize_input` is enabled.
if not randomize_input:
file_names = sorted(file_names)
return file_names
def _get_examples(file_name_queue, reader, num_threads, read_batch_size,
filter_fn, parse_fn):
with ops.name_scope('read'):
example_list = []
for _ in range(num_threads):
if read_batch_size > 1:
keys, examples_proto = reader().read_up_to(file_name_queue,
read_batch_size)
else:
keys, examples_proto = reader().read(file_name_queue)
if filter_fn:
mask = filter_fn(keys, examples_proto)
keys = array_ops.boolean_mask(keys, mask)
examples_proto = array_ops.boolean_mask(examples_proto, mask)
if parse_fn:
parsed_examples = parse_fn(examples_proto)
# Map keys into example map because batch_join doesn't support
# tuple of Tensor + dict.
if isinstance(parsed_examples, dict):
parsed_examples[KEY_FEATURE_NAME] = keys
example_list.append(parsed_examples)
else:
example_list.append((keys, parsed_examples))
else:
example_list.append((keys, examples_proto))
return example_list
def _read_keyed_batch_examples_helper(file_pattern,
batch_size,
reader,
randomize_input=True,
num_epochs=None,
queue_capacity=10000,
num_threads=1,
read_batch_size=1,
filter_fn=None,
parse_fn=None,
setup_shared_queue=False,
name=None,
seed=None):
"""Adds operations to read, queue, batch `Example` protos.
Args:
file_pattern: List of files or pattern of file paths containing
`Example` records. See `tf.gfile.Glob` for pattern rules.
batch_size: An int or scalar `Tensor` specifying the batch size to use.
reader: A function or class that returns an object with
`read` method, (filename tensor) -> (example tensor).
randomize_input: Whether the input should be randomized.
num_epochs: Integer specifying the number of times to read through the
dataset. If `None`, cycles through the dataset forever.
NOTE - If specified, creates a variable that must be initialized, so call
      `tf.local_variables_initializer()` and run the op in a session.
queue_capacity: Capacity for input queue.
num_threads: The number of threads enqueuing examples.
read_batch_size: An int or scalar `Tensor` specifying the number of
records to read at once
filter_fn: Filtering function, takes both keys as well `Example` Tensors
and returns a boolean mask of the same shape as the input Tensors to
be applied for filtering. If `None`, no filtering is done.
parse_fn: Parsing function, takes `Example` Tensor returns parsed
representation. If `None`, no parsing is done.
setup_shared_queue: Whether to set up a shared queue for file names.
name: Name of resulting op.
seed: An integer (optional). Seed used if randomize_input == True.
Returns:
Returns tuple of:
- `Tensor` of string keys.
- String `Tensor` of batched `Example` proto.
Raises:
ValueError: for invalid inputs.
"""
# Retrieve files to read.
file_names = _get_file_names(file_pattern, randomize_input)
# Check input parameters are given and reasonable.
if (not queue_capacity) or (queue_capacity <= 0):
raise ValueError('Invalid queue_capacity %s.' % queue_capacity)
if (batch_size is None) or ((not isinstance(batch_size, ops.Tensor)) and
(batch_size <= 0 or batch_size > queue_capacity)):
raise ValueError('Invalid batch_size %s, with queue_capacity %s.' %
(batch_size, queue_capacity))
if (read_batch_size is None) or (
(not isinstance(read_batch_size, ops.Tensor)) and (read_batch_size <= 0)):
raise ValueError('Invalid read_batch_size %s.' % read_batch_size)
if (not num_threads) or (num_threads <= 0):
raise ValueError('Invalid num_threads %s.' % num_threads)
if (num_epochs is not None) and (num_epochs <= 0):
raise ValueError('Invalid num_epochs %s.' % num_epochs)
with ops.name_scope(name, 'read_batch_examples', [file_pattern]) as scope:
with ops.name_scope('file_name_queue') as file_name_queue_scope:
if setup_shared_queue:
file_name_queue = data_flow_ops.FIFOQueue(
capacity=1, dtypes=[dtypes.string], shapes=[[]])
enqueue_op = file_name_queue.enqueue(
input_pipeline_ops.seek_next(
file_names, shuffle=randomize_input, num_epochs=num_epochs,
seed=seed))
queue_runner.add_queue_runner(
queue_runner.QueueRunner(file_name_queue, [enqueue_op]))
else:
file_name_queue = input_ops.string_input_producer(
constant_op.constant(
file_names, name='input'),
shuffle=randomize_input,
num_epochs=num_epochs,
name=file_name_queue_scope,
seed=seed)
example_list = _get_examples(file_name_queue, reader, num_threads,
read_batch_size, filter_fn, parse_fn)
enqueue_many = read_batch_size > 1
if num_epochs is None:
allow_smaller_final_batch = False
else:
allow_smaller_final_batch = True
# Setup batching queue given list of read example tensors.
if randomize_input:
if isinstance(batch_size, ops.Tensor):
min_after_dequeue = int(queue_capacity * 0.4)
else:
min_after_dequeue = max(queue_capacity - (3 * batch_size), batch_size)
queued_examples_with_keys = input_ops.shuffle_batch_join(
example_list,
batch_size,
capacity=queue_capacity,
min_after_dequeue=min_after_dequeue,
enqueue_many=enqueue_many,
name=scope,
allow_smaller_final_batch=allow_smaller_final_batch,
seed=seed)
else:
queued_examples_with_keys = input_ops.batch_join(
example_list,
batch_size,
capacity=queue_capacity,
enqueue_many=enqueue_many,
name=scope,
allow_smaller_final_batch=allow_smaller_final_batch)
if parse_fn and isinstance(queued_examples_with_keys, dict):
queued_keys = queued_examples_with_keys.pop(KEY_FEATURE_NAME)
return queued_keys, queued_examples_with_keys
return queued_examples_with_keys
def read_keyed_batch_features(file_pattern,
batch_size,
features,
reader,
randomize_input=True,
num_epochs=None,
queue_capacity=10000,
reader_num_threads=1,
feature_queue_capacity=100,
num_enqueue_threads=2,
parse_fn=None,
name=None):
"""Adds operations to read, queue, batch and parse `Example` protos.
Given file pattern (or list of files), will setup a queue for file names,
read `Example` proto using provided `reader`, use batch queue to create
batches of examples of size `batch_size` and parse example given `features`
specification.
All queue runners are added to the queue runners collection, and may be
started via `start_queue_runners`.
All ops are added to the default graph.
Args:
file_pattern: List of files or pattern of file paths containing
`Example` records. See `tf.gfile.Glob` for pattern rules.
batch_size: An int or scalar `Tensor` specifying the batch size to use.
features: A `dict` mapping feature keys to `FixedLenFeature` or
`VarLenFeature` values.
reader: A function or class that returns an object with
`read` method, (filename tensor) -> (example tensor).
randomize_input: Whether the input should be randomized.
num_epochs: Integer specifying the number of times to read through the
dataset. If None, cycles through the dataset forever. NOTE - If specified,
creates a variable that must be initialized, so call
tf.local_variables_initializer() and run the op in a session.
queue_capacity: Capacity for input queue.
reader_num_threads: The number of threads to read examples.
feature_queue_capacity: Capacity of the parsed features queue.
num_enqueue_threads: Number of threads to enqueue the parsed example queue.
Using multiple threads to enqueue the parsed example queue helps maintain
a full queue when the subsequent computations overall are cheaper than
parsing.
parse_fn: Parsing function, takes `Example` Tensor returns parsed
representation. If `None`, no parsing is done.
name: Name of resulting op.
Returns:
Returns tuple of:
- `Tensor` of string keys.
- A dict of `Tensor` or `SparseTensor` objects for each in `features`.
Raises:
ValueError: for invalid inputs.
"""
with ops.name_scope(name, 'read_batch_features', [file_pattern]) as scope:
keys, examples = read_keyed_batch_examples(
file_pattern,
batch_size,
reader,
randomize_input=randomize_input,
num_epochs=num_epochs,
queue_capacity=queue_capacity,
num_threads=reader_num_threads,
read_batch_size=batch_size,
parse_fn=parse_fn,
name=scope)
# Parse the example.
feature_map = parsing_ops.parse_example(examples, features)
return queue_parsed_features(
feature_map,
keys=keys,
feature_queue_capacity=feature_queue_capacity,
num_enqueue_threads=num_enqueue_threads,
name=scope)
def _read_keyed_batch_features_shared_queue(file_pattern,
batch_size,
features,
reader,
randomize_input=True,
num_epochs=None,
queue_capacity=10000,
reader_num_threads=1,
feature_queue_capacity=100,
num_queue_runners=2,
parse_fn=None,
name=None):
"""Adds operations to read, queue, batch and parse `Example` protos.
Given file pattern (or list of files), will setup a shared queue for file
names, setup a worker queue that gets filenames from the shared queue,
read `Example` proto using provided `reader`, use batch queue to create
batches of examples of size `batch_size` and parse example given `features`
specification.
All queue runners are added to the queue runners collection, and may be
started via `start_queue_runners`.
All ops are added to the default graph.
Args:
file_pattern: List of files or pattern of file paths containing
`Example` records. See `tf.gfile.Glob` for pattern rules.
batch_size: An int or scalar `Tensor` specifying the batch size to use.
features: A `dict` mapping feature keys to `FixedLenFeature` or
`VarLenFeature` values.
reader: A function or class that returns an object with
`read` method, (filename tensor) -> (example tensor).
randomize_input: Whether the input should be randomized.
num_epochs: Integer specifying the number of times to read through the
dataset. If None, cycles through the dataset forever. NOTE - If specified,
creates a variable that must be initialized, so call
tf.local_variables_initializer() and run the op in a session.
queue_capacity: Capacity for input queue.
reader_num_threads: The number of threads to read examples.
feature_queue_capacity: Capacity of the parsed features queue.
num_queue_runners: Number of threads to enqueue the parsed example queue.
Using multiple threads to enqueue the parsed example queue helps maintain
a full queue when the subsequent computations overall are cheaper than
parsing.
parse_fn: Parsing function, takes `Example` Tensor returns parsed
representation. If `None`, no parsing is done.
name: Name of resulting op.
Returns:
Returns tuple of:
- `Tensor` of string keys.
- A dict of `Tensor` or `SparseTensor` objects for each in `features`.
Raises:
ValueError: for invalid inputs.
"""
with ops.name_scope(name, 'read_batch_features', [file_pattern]) as scope:
keys, examples = _read_keyed_batch_examples_shared_queue(
file_pattern,
batch_size,
reader,
randomize_input=randomize_input,
num_epochs=num_epochs,
queue_capacity=queue_capacity,
num_threads=reader_num_threads,
read_batch_size=batch_size,
parse_fn=parse_fn,
name=scope)
# Parse the example.
feature_map = parsing_ops.parse_example(examples, features)
return queue_parsed_features(
feature_map,
keys=keys,
feature_queue_capacity=feature_queue_capacity,
num_enqueue_threads=num_queue_runners,
name=scope)
def queue_parsed_features(parsed_features,
keys=None,
feature_queue_capacity=100,
num_enqueue_threads=2,
name=None):
"""Speeds up parsing by using queues to do it asynchronously.
This function adds the tensors in `parsed_features` to a queue, which allows
  the parsing (or any other expensive op before this) to be asynchronous with respect to the
rest of the training graph. This greatly improves read latency and speeds up
training since the data will already be parsed and ready when each step of
training needs it.
All queue runners are added to the queue runners collection, and may be
started via `start_queue_runners`.
All ops are added to the default graph.
Args:
parsed_features: A dict of string key to `Tensor` or `SparseTensor` objects.
keys: `Tensor` of string keys.
feature_queue_capacity: Capacity of the parsed features queue.
num_enqueue_threads: Number of threads to enqueue the parsed example queue.
Using multiple threads to enqueue the parsed example queue helps maintain
a full queue when the subsequent computations overall are cheaper than
parsing.
name: Name of resulting op.
Returns:
Returns tuple of:
- `Tensor` corresponding to `keys` if provided, otherwise `None`.
- A dict of string key to `Tensor` or `SparseTensor` objects corresponding
to `parsed_features`.
Raises:
ValueError: for invalid inputs.
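  Example:
    An illustrative sketch; `serialized` and `spec` are assumed to exist and
    are placeholders, not part of this module:
      parsed = tf.parse_example(serialized, spec)
      _, features = tf.contrib.learn.queue_parsed_features(
          parsed, feature_queue_capacity=64)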
"""
args = list(parsed_features.values())
if keys is not None:
args += [keys]
with ops.name_scope(name, 'queue_parsed_features', args):
# Lets also add preprocessed tensors into the queue types for each item of
# the queue.
tensors_to_enqueue = []
# Each entry contains the key, and a boolean which indicates whether the
# tensor was a sparse tensor.
tensors_mapping = []
# TODO(sibyl-Aix6ihai): Most of the functionality here is about pushing sparse
# tensors into a queue. This could be taken care in somewhere else so others
# can reuse it. Also, QueueBase maybe extended to handle sparse tensors
# directly.
for key in sorted(parsed_features.keys()):
tensor = parsed_features[key]
if isinstance(tensor, sparse_tensor.SparseTensor):
tensors_mapping.append((key, True))
tensors_to_enqueue.extend(
[tensor.indices, tensor.values, tensor.dense_shape])
else:
tensors_mapping.append((key, False))
tensors_to_enqueue.append(tensor)
if keys is not None:
tensors_to_enqueue.append(keys)
queue_dtypes = [x.dtype for x in tensors_to_enqueue]
input_queue = data_flow_ops.FIFOQueue(feature_queue_capacity, queue_dtypes)
# Add a summary op to debug if our feature queue is full or not.
summary.scalar('queue/parsed_features/%s/fraction_of_%d_full' %
(input_queue.name, feature_queue_capacity),
math_ops.cast(input_queue.size(), dtypes.float32) *
(1. / feature_queue_capacity))
# Use a single QueueRunner with multiple threads to enqueue so the queue is
# always full. The threads are coordinated so the last batch will not be
# lost.
enqueue_ops = [
input_queue.enqueue(tensors_to_enqueue)
for _ in range(num_enqueue_threads)
]
queue_runner.add_queue_runner(
queue_runner.QueueRunner(
input_queue,
enqueue_ops,
queue_closed_exception_types=(errors.OutOfRangeError,
errors.CancelledError)))
dequeued_tensors = input_queue.dequeue()
if not isinstance(dequeued_tensors, list):
# input_queue.dequeue() returns a single tensor instead of a list of
# tensors if there is only one tensor to dequeue, which breaks the
# assumption of a list below.
dequeued_tensors = [dequeued_tensors]
# Reset shapes on dequeued tensors.
for i in range(len(tensors_to_enqueue)):
dequeued_tensors[i].set_shape(tensors_to_enqueue[i].get_shape())
# Recreate feature mapping according to the original dictionary.
dequeued_parsed_features = {}
index = 0
for key, is_sparse_tensor in tensors_mapping:
if is_sparse_tensor:
# Three tensors are (indices, values, shape).
dequeued_parsed_features[key] = sparse_tensor.SparseTensor(
dequeued_tensors[index], dequeued_tensors[index + 1],
dequeued_tensors[index + 2])
index += 3
else:
dequeued_parsed_features[key] = dequeued_tensors[index]
index += 1
dequeued_keys = None
if keys is not None:
dequeued_keys = dequeued_tensors[-1]
return dequeued_keys, dequeued_parsed_features
def read_batch_features(file_pattern,
batch_size,
features,
reader,
randomize_input=True,
num_epochs=None,
queue_capacity=10000,
feature_queue_capacity=100,
reader_num_threads=1,
parse_fn=None,
name=None):
"""Adds operations to read, queue, batch and parse `Example` protos.
Given file pattern (or list of files), will setup a queue for file names,
read `Example` proto using provided `reader`, use batch queue to create
batches of examples of size `batch_size` and parse example given `features`
specification.
All queue runners are added to the queue runners collection, and may be
started via `start_queue_runners`.
All ops are added to the default graph.
Args:
file_pattern: List of files or pattern of file paths containing
`Example` records. See `tf.gfile.Glob` for pattern rules.
batch_size: An int or scalar `Tensor` specifying the batch size to use.
features: A `dict` mapping feature keys to `FixedLenFeature` or
`VarLenFeature` values.
reader: A function or class that returns an object with
`read` method, (filename tensor) -> (example tensor).
randomize_input: Whether the input should be randomized.
num_epochs: Integer specifying the number of times to read through the
dataset. If None, cycles through the dataset forever. NOTE - If specified,
creates a variable that must be initialized, so call
tf.local_variables_initializer() and run the op in a session.
queue_capacity: Capacity for input queue.
feature_queue_capacity: Capacity of the parsed features queue. Set this
value to a small number, for example 5 if the parsed features are large.
reader_num_threads: The number of threads to read examples.
parse_fn: Parsing function, takes `Example` Tensor returns parsed
representation. If `None`, no parsing is done.
name: Name of resulting op.
Returns:
A dict of `Tensor` or `SparseTensor` objects for each in `features`.
Raises:
ValueError: for invalid inputs.
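  Example:
    A minimal, illustrative sketch (file pattern and feature spec are
    placeholders, not part of this module):
      features = tf.contrib.learn.read_batch_features(
          file_pattern='data/train-*.tfrecord',
          batch_size=32,
          features={'label': tf.FixedLenFeature([], tf.int64)},
          reader=tf.TFRecordReader)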
"""
_, features = read_keyed_batch_features(
file_pattern,
batch_size,
features,
reader,
randomize_input=randomize_input,
num_epochs=num_epochs,
queue_capacity=queue_capacity,
feature_queue_capacity=feature_queue_capacity,
reader_num_threads=reader_num_threads,
parse_fn=parse_fn,
name=name)
return features
def read_batch_record_features(file_pattern,
batch_size,
features,
randomize_input=True,
num_epochs=None,
queue_capacity=10000,
reader_num_threads=1,
name='dequeue_record_examples'):
"""Reads TFRecord, queues, batches and parses `Example` proto.
See more detailed description in `read_examples`.
Args:
file_pattern: List of files or pattern of file paths containing
`Example` records. See `tf.gfile.Glob` for pattern rules.
batch_size: An int or scalar `Tensor` specifying the batch size to use.
features: A `dict` mapping feature keys to `FixedLenFeature` or
`VarLenFeature` values.
randomize_input: Whether the input should be randomized.
num_epochs: Integer specifying the number of times to read through the
dataset. If None, cycles through the dataset forever. NOTE - If specified,
creates a variable that must be initialized, so call
tf.local_variables_initializer() and run the op in a session.
queue_capacity: Capacity for input queue.
reader_num_threads: The number of threads to read examples.
name: Name of resulting op.
Returns:
A dict of `Tensor` or `SparseTensor` objects for each in `features`.
Raises:
ValueError: for invalid inputs.
"""
return read_batch_features(
file_pattern=file_pattern,
batch_size=batch_size,
features=features,
reader=io_ops.TFRecordReader,
randomize_input=randomize_input,
num_epochs=num_epochs,
queue_capacity=queue_capacity,
reader_num_threads=reader_num_threads,
name=name)
|
pcm17/tensorflow
|
tensorflow/contrib/learn/python/learn/learn_io/graph_io.py
|
Python
|
apache-2.0
| 33,733
|
[
"VisIt"
] |
48e0f72941dba7155cf2dfc28e5e50575f0f29108402bb58e6a89657d4a29f9a
|
"""
This is the boilerplate default configuration file.
Changes and additions to settings should be done in the config module
located in the application root rather than this config.
"""
config = {
# webapp2 sessions
'webapp2_extras.sessions': {'secret_key': '_PUT_KEY_HERE_YOUR_SECRET_KEY_'},
# webapp2 authentication
'webapp2_extras.auth': {'user_model': 'boilerplate.models.User',
'cookie_name': 'session_name'},
# jinja template name
'app_template_name' : 'default',
# jinja2 templates
'webapp2_extras.jinja2': {'template_path': ['templates', 'bp_content/themes/default/templates', 'bp_admin/templates'],
'environment_args': {'extensions': ['jinja2.ext.i18n']}},
# application name
'app_name': "Google App Engine Boilerplate",
# the default language code for the application.
# should match whatever language the site uses when i18n is disabled
'app_lang': 'en',
# Locale code = <language>_<territory> (ie 'en_US')
# to pick locale codes see http://cldr.unicode.org/index/cldr-spec/picking-the-right-language-code
# also see http://www.sil.org/iso639-3/codes.asp
# Language codes defined under iso 639-1 http://en.wikipedia.org/wiki/List_of_ISO_639-1_codes
# Territory codes defined under iso 3166-1 alpha-2 http://en.wikipedia.org/wiki/ISO_3166-1
# disable i18n if locales array is empty or None
'locales': ['en_US', 'es_ES', 'it_IT', 'zh_CN', 'id_ID', 'fr_FR', 'de_DE', 'ru_RU', 'pt_BR', 'cs_CZ','vi_VN','nl_NL'],
# contact page email settings
'contact_sender': "PUT_SENDER_EMAIL_HERE",
'contact_recipient': "PUT_RECIPIENT_EMAIL_HERE",
# Password AES Encryption Parameters
# aes_key must be only 16 (*AES-128*), 24 (*AES-192*), or 32 (*AES-256*) bytes (characters) long.
'aes_key': "12_24_32_BYTES_KEY_FOR_PASSWORDS",
'salt': "_PUT_SALT_HERE_TO_SHA512_PASSWORDS_",
# get your own consumer key and consumer secret by registering at https://dev.twitter.com/apps
# callback url must be: http://[YOUR DOMAIN]/login/twitter/complete
'twitter_consumer_key': 'PUT_YOUR_TWITTER_CONSUMER_KEY_HERE',
'twitter_consumer_secret': 'PUT_YOUR_TWITTER_CONSUMER_SECRET_HERE',
    # Facebook Login
    # get your own consumer key and consumer secret by registering at https://developers.facebook.com/apps
    # Very important: set site_url to your domain in the application settings on the Facebook app settings page
# callback url must be: http://[YOUR DOMAIN]/login/facebook/complete
'fb_api_key': 'PUT_YOUR_FACEBOOK_PUBLIC_KEY_HERE',
    'fb_secret': 'PUT_YOUR_FACEBOOK_SECRET_KEY_HERE',
#Linkedin Login
#Get you own api key and secret from https://www.linkedin.com/secure/developer
'linkedin_api': 'PUT_YOUR_LINKEDIN_PUBLIC_KEY_HERE',
    'linkedin_secret': 'PUT_YOUR_LINKEDIN_SECRET_KEY_HERE',
# Github login
# Register apps here: https://github.com/settings/applications/new
'github_server': 'github.com',
'github_redirect_uri': 'http://www.example.com/social_login/github/complete',
'github_client_id': 'PUT_YOUR_GITHUB_CLIENT_ID_HERE',
'github_client_secret': 'PUT_YOUR_GITHUB_CLIENT_SECRET_HERE',
# get your own recaptcha keys by registering at http://www.google.com/recaptcha/
'captcha_public_key': "PUT_YOUR_RECAPCHA_PUBLIC_KEY_HERE",
'captcha_private_key': "PUT_YOUR_RECAPCHA_PRIVATE_KEY_HERE",
# Use a complete Google Analytics code, no just the Tracking ID
# In config/boilerplate.py there is an example to fill out this value
'google_analytics_code': "",
# add status codes and templates used to catch and display errors
# if a status code is not listed here it will use the default app engine
# stacktrace error page or browser error page
'error_templates': {
403: 'errors/default_error.html',
404: 'errors/default_error.html',
500: 'errors/default_error.html',
},
# Enable Federated login (OpenID and OAuth)
# Google App Engine Settings must be set to Authentication Options: Federated Login
'enable_federated_login': True,
# jinja2 base layout template
'base_layout': 'base.html',
# send error emails to developers
'send_mail_developer': False,
# fellas' list
'developers': (
('Santa Klauss', 'snowypal@northpole.com'),
),
# If true, it will write in datastore a log of every email sent
'log_email': True,
# If true, it will write in datastore a log of every visit
'log_visit': True,
# ----> ADD MORE CONFIGURATION OPTIONS HERE <----
} # end config
|
shupelneker/gae_new_structure
|
boilerplate/config.py
|
Python
|
lgpl-3.0
| 4,639
|
[
"VisIt"
] |
053de85e9564a0fabcc0406dcd83fe15297bbcdd8f6751b1e316ed587d598022
|
""" DRMAA2 Python language binding.
This is the public interface to be used by applications.
For further information, please visit drmaa.org.
"""
from enum import Enum
from collections import namedtuple
from abc import ABCMeta, abstractmethod
# Implementation-independent constants
HOME_DIR = "$DRMAA2_HOME_DIR$"
WORKING_DIR = "$DRMAA2_WORKING_DIR$"
PARAMETRIC_INDEX = "$DRMAA2_INDEX$"
INFINITE_TIME = -1
ZERO_TIME = 0
# Implementation-independent enumerations
class JobState(Enum):
UNDETERMINED = 0
QUEUED = 1
QUEUED_HELD = 2
RUNNING = 3
SUSPENDED = 4
REQUEUED = 5
REQUEUED_HELD = 6
DONE = 7
FAILED = 8
class OperatingSystem(Enum):
OTHER_OS = 0
AIX = 1
BSD = 2
LINUX = 3
HPUX = 4
IRIX = 5
MACOS = 6
SUNOS = 7
TRU64 = 8
UNIXWARE = 9
WIN = 10
WINNT = 11
class CpuArchitecture(Enum):
OTHER_CPU = 0
ALPHA = 1
ARM = 2
ARM64 = 3
CELL = 4
PARISC = 5
PARISC64 = 6
X86 = 7
X64 = 8
IA64 = 9
MIPS = 10
MIPS64 = 11
PPC = 12
PPC64 = 13
PPC64LE = 16
SPARC = 14
SPARC64 = 15
class Event(Enum):
NEW_STATE = 0
MIGRATED = 1
ATTRIBUTE_CHANGE = 2
class Capability(Enum):
ADVANCE_RESERVATION = 0
RESERVE_SLOTS = 1
CALLBACK = 2
BULK_JOBS_MAXPARALLEL = 3
JT_EMAIL = 4
JT_STAGING = 5
JT_DEADLINE = 6
JT_MAXSLOTS = 7
JT_ACCOUNTINGID = 8
RT_STARTNOW = 9
RT_DURATION = 10
RT_MACHINEOS = 11
RT_MACHINEARCH = 12
# Abstract classes, to be realized by implementation
class ReservationSession:
""" Every ReservationSession instance acts as container for advance reservations in the DRM system. """
__metaclass__ = ABCMeta
contact = None
session_name = None
@abstractmethod
def get_reservation(self, reservation_id):
""" get_reservation(self, str) -> Reservation
This method returns the Reservation instance that has the given reservationId.
"""
pass
@abstractmethod
def request_reservation(self, reservation_template):
""" request_reservation(self, ReservationTemplate) -> Reservation
The method requests an advance reservation in the DRM system as described
by the ReservationTemplate instance. On success,
the method returns an object that represents the advance reservation
in the underlying DRM system.
"""
pass
@abstractmethod
def get_reservations(self):
""" get_reservations(self) -> list
This method returns a list of Reservation objects for the reservations in this session,
regardless of their start and end time.
"""
pass
@abstractmethod
def close(self):
""" close(self) -> None
The method performs the necessary actions to disengage from the DRM system.
"""
pass
class Reservation:
""" The Reservation class represents attributes and methods available for an
advance reservation successfully created in the DRM system.
"""
__metaclass__ = ABCMeta
reservation_id = None
session_name = None
reservation_template = None
@abstractmethod
def get_info(self):
""" get_info(self) -> ReservationInfo
This method returns informations about this advanced reservation.
"""
pass
@abstractmethod
def terminate(self):
""" terminate(self) -> None
This method terminates the advance reservation represented by this instance.
"""
pass
class JobArray:
""" An instance of the JobArray interface represents a set of jobs created by one operation.
The job control functions allow modifying the status of the job array in the DRM system,
with the same semantic as in the Job object.
"""
__metaclass__ = ABCMeta
job_array_id = None
jobs = None
session_name = None
job_template = None
@abstractmethod
def suspend(self):
""" suspend(self) -> None """
pass
@abstractmethod
def resume(self):
""" resume(self) -> None """
pass
@abstractmethod
def hold(self):
""" hold(self) -> None """
pass
@abstractmethod
def release(self):
""" release(self) -> None """
pass
@abstractmethod
def terminate(self):
""" terminate(self) -> None """
pass
@abstractmethod
def reap(self):
""" reap(self) -> None
This function performs a Job.reap() operation for each of the jobs in the array.
"""
pass
class JobSession:
""" A job session acts as container for job instances controlled through the DRMAA API.
The session methods support the submission of new jobs and the monitoring of existing jobs.
The relationship between jobs and their session is persisted.
"""
__metaclass__ = ABCMeta
contact = None
session_name = None
job_categories = None
@abstractmethod
def get_jobs(self, filter=None):
""" get_jobs(self, JobInfo) -> list
This method returns a list of job objects that belong to the job session.
        The filter parameter allows choosing a subset of the session jobs as the return value.
If filter is None, all session jobs are returned.
"""
pass
@abstractmethod
def get_job_array(self, job_array_id):
""" get_job_array(self, str) -> JobArray
This method returns the JobArray instance with the given ID.
"""
pass
@abstractmethod
def run_job(self, job_template):
""" run_job(self, JobTemplate) -> Job
The run_job method submits a job with the attributes defined in the given job template instance.
The method returns a Job object that represents the job in the underlying DRM system.
"""
pass
@abstractmethod
def run_bulk_jobs(self, job_template, begin_index, end_index, step, max_parallel=None):
""" run_bulk_jobs(self, JobTemplate, long, long, long, long) -> JobArray
The runBulkJobs method creates a set of parametric jobs, each with attributes as defined
in the given job template instance. Each job in the set has the same attributes,
except for the job template attributes that include the PARAMETRIC_INDEX macro.
The method returns a JobArray instance that represents the set of Job objects created
by the method call under a common array identity.
The first job in the set has an index equal to the beginIndex parameter of the method call.
The smallest valid value for beginIndex is 1.
The next job has an index equal to beginIndex + step, and so on.
The last job has an index equal to beginIndex + n * step, where n is equal
to (endIndex - beginIndex) / step.
The index of the last job may not be equal to endIndex if the difference between beginIndex and
endIndex is not evenly divisible by step. The beginIndex value must be less than or equal to endIndex,
        and only positive index numbers are allowed.
        The maxParallel parameter specifies how many of the bulk job instances are allowed to run
in parallel on the utilized resources. If the parameter is None, no limit is applied.
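        For example, with begin_index=1, end_index=10 and step=3, the jobs in
        the returned array have the indices 1, 4, 7 and 10.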
"""
pass
@abstractmethod
def wait_any_started(self, jobs, timeout):
""" wait_any_started(self, list, long) -> Job
The method blocks until any of the jobs in the list entered one of the 'Started' states.
The timeout argument specifies the desired maximum waiting time for the state change in seconds.
The constant value INFINITE_TIME declares an indefinite waiting time.
The constant value ZERO_TIME declares that the method call must return immediately.
"""
pass
@abstractmethod
def wait_any_terminated(self, jobs, timeout):
""" wait_any_terminated(self, list, time_amount) -> Job
The method blocks until any of the jobs in the list entered one of the 'Terminated' states.
The timeout argument specifies the desired maximum waiting time for the state change in seconds.
The constant value INFINITE_TIME declares an indefinite waiting time.
The constant value ZERO_TIME declares that the method call must return immediately.
"""
pass
@abstractmethod
def close(self):
""" close(self) -> None
The method performs the necessary actions to disengage from the DRM system.
"""
pass
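# Illustrative usage sketch (not part of the DRMAA2 specification). The factory
# used to obtain a JobSession is implementation specific, so `create_job_session`
# below is a hypothetical name:
#
#   session = create_job_session()
#   template = JobTemplate(remote_command='/bin/sleep', args=['60'])
#   job = session.run_job(template)
#   job.wait_terminated(INFINITE_TIME)
#   print(job.get_info().exit_status)
#   session.close()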
class Job:
""" Every job in the JobSession is represented by its own instance of the Job class.
It allows to instruct the DRM system of a job status change, and to query the properties
of the job in the DRM system.
"""
__metaclass__ = ABCMeta
job_id = None
session_name = None
job_template = None
@abstractmethod
def suspend(self):
""" suspend(self) -> None """
pass
@abstractmethod
def resume(self):
""" resume(self) -> None """
pass
@abstractmethod
def hold(self):
""" hold(self) -> None """
pass
@abstractmethod
def release(self):
""" release(self) -> None """
pass
@abstractmethod
def terminate(self):
""" terminate(self) -> None """
pass
@abstractmethod
def reap(self):
""" reap(self) -> None
This function is intended to let the DRMAA implementation clean up any data about this job.
The motivating factor are long-running applications maintaining large amounts of jobs as part of
a monitoring session.
Using a reaped job in any subsequent activity generates an InvalidArgumentException.
This function only works for terminated jobs.
"""
pass
@abstractmethod
def get_state(self):
""" get_state(self) -> JobState, str
This method allows the application to get the current status of the job.
It returns the status according to the DRMAA state model as JobState enumeration value.
In addition, an implementation-specific sub state is returned as string.
"""
pass
@abstractmethod
def get_info(self):
""" get_info(self) -> JobInfo
This method returns a JobInfo instance for the job.
"""
pass
@abstractmethod
def wait_started(self, timeout):
""" wait_started(self, long) -> None
The method blocks until the job entered one of the 'Started' states.
The timeout argument specifies the desired maximum waiting time for the state change in seconds.
The constant value INFINITE_TIME declares an indefinite waiting time.
The constant value ZERO_TIME declares that the method call must return immediately.
"""
pass
@abstractmethod
def wait_terminated(self, timeout):
""" wait_terminated(self, long) -> None
The method blocks until the job entered one of the 'Terminated' states.
The timeout argument specifies the desired maximum waiting time for the state change in seconds.
The constant value INFINITE_TIME declares an indefinite waiting time.
The constant value ZERO_TIME declares that the method call must return immediately.
"""
pass
class MonitoringSession:
""" The MonitoringSession class provides a set of stateless methods for fetching information
about the DRM system and the DRMAA implementation itself.
"""
__metaclass__ = ABCMeta
@abstractmethod
def get_all_reservations(self):
""" get_all_reservations(self) -> list
This method returns a list of Reservation objects, which represent all advance reservations visible
for the user running the DRMAA-based application.
In contrast to a ReservationSession.get_reservations() call,
this method may also return reservations that were created outside of DRMAA,
e.g., through command-line tools by this user.
The DRM system or the DRMAA implementation is at liberty to restrict the set of returned
reservations based on site or system policies, such as security settings or
scheduler load restrictions. The returned list may contain reservations that were created
by other users. It may also contain reservations that are not usable for the user.
"""
pass
@abstractmethod
def get_all_jobs(self, filter=None):
""" get_all_jobs(self, JobInfo) -> list
This method returns a list of Job objects, representing all DRMS jobs visible to the user running
the DRMAA-based application.
        The filter argument, if given, allows fetching only a subset of the available job information.
In contrast to a JobSession.get_jobs() call, this method may also return
jobs that were submitted outside of DRMAA (e.g., through command-line tools) by this user.
The returned list may also contain jobs that were submitted by other users if the security policies
of the DRM system allow such global visibility. The DRM system or the DRMAA implementation is at liberty,
however, to restrict the set of returned jobs based on site or system policies, such as security settings
or scheduler load restrictions.
"""
pass
@abstractmethod
def get_all_queues(self, names=None):
""" get_all_queues(self, list) -> list
This method returns a list of QueueInfo objects, representing the queues available for job submission in the
DRM system. The names from all instances in this list can be used in the JobTemplate.queueName attribute.
The names parameter is a list of strings. If given, then it will restrict the result to QueueInfo instances
that have one of the names given in the list.
The result can be an empty list or might be incomplete, based on queue, host, or system policies.
It might also contain queues that are not accessible for the user at job submission time because of
queue configuration limits.
"""
pass
@abstractmethod
def get_all_machines(self, names=None):
""" get_all_machines(self, list) -> list
This method returns a list of MachineInfo objects, each representing a machine available in the DRM system
as execution host.
The names parameter is a list of strings. If given, it restricts the result to MachineInfo instances
whose name appears in the list.
The returned list might be empty or incomplete based on machine or system policies. It might also contain
machines that are not accessible for the user, e.g., because of host configuration limits.
"""
pass
@abstractmethod
def close(self):
""" close(self) -> None
The method performs the necessary actions to disengage from the DRM system.
"""
pass
# Import implementation
from drmaa2.backend import impl
# Implementation-dependent constants
CORE_FILE_SIZE = impl.CORE_FILE_SIZE
CPU_TIME = impl.CPU_TIME
DATA_SIZE = impl.DATA_SIZE
FILE_SIZE = impl.FILE_SIZE
OPEN_FILES = impl.OPEN_FILES
STACK_SIZE = impl.STACK_SIZE
VIRTUAL_MEMORY = impl.VIRTUAL_MEMORY
WALLCLOCK_TIME = impl.WALLCLOCK_TIME
# Implementation-dependent constants determined at module loading
drms_name = impl.drms_name
drms_version = impl.drms_version
drmaa_name = impl.drmaa_name
drmaa_version = impl.drmaa_version
job_template_impl_spec = impl.job_template_impl_spec
job_info_impl_spec = impl.job_info_impl_spec
reservation_template_impl_spec = impl.reservation_template_impl_spec
reservation_info_impl_spec = impl.reservation_info_impl_spec
queue_info_impl_spec = impl.queue_info_impl_spec
machine_info_impl_spec = impl.machine_info_impl_spec
notification_impl_spec = impl.notification_impl_spec
# Extensible data structures
# TODO: Distinguish mandatory and optional ones, fetch optional from implementation
Notification = namedtuple('Notification', ['event', 'job_id', 'session_name', 'job_state']
+ impl.notification_impl_spec)
Notification.__new__.__defaults__ = tuple([None]*len(Notification._fields))
JobTemplate = namedtuple('JobTemplate', ['remote_command', 'args', 'submit_as_hold', 'rerunnable',
'job_environment', 'working_directory', 'job_category',
'email', 'email_on_started', 'email_on_terminated', 'job_name',
'input_path', 'output_path', 'error_path', 'join_files',
'reservation_id', 'queue_name', 'min_slots', 'max_slots',
'priority', 'candidate_machines', 'min_phys_memory', 'machine_os',
'machine_arch', 'start_time', 'deadline_time', 'stage_in_files',
'stage_out_files', 'resource_limits', 'accounting_id']
+ impl.job_template_impl_spec)
JobTemplate.__new__.__defaults__ = tuple([None]*len(JobTemplate._fields))
QueueInfo = namedtuple('QueueInfo', ['name'] + impl.queue_info_impl_spec)
QueueInfo.__new__.__defaults__ = tuple([None]*len(QueueInfo._fields))
JobInfo = namedtuple('JobInfo', ['job_id', 'job_name', 'exit_status', 'terminating_signal', 'annotation', 'job_state',
'job_sub_state', 'allocated_machines', 'submission_machine', 'job_owner', 'slots',
'queue_name', 'wallclock_time', 'cpu_time', 'submission_time', 'dispatch_time',
'finish_time'] + impl.job_info_impl_spec)
JobInfo.__new__.__defaults__ = tuple([None]*len(JobInfo._fields))
MachineInfo = namedtuple('MachineInfo', ['name', 'available', 'sockets', 'cores_per_socket', 'threads_per_core',
'load', 'phys_memory', 'virt_memory', 'machine_os', 'machine_os_version',
'machine_arch'] + impl.machine_info_impl_spec)
MachineInfo.__new__.__defaults__ = tuple([None]*len(MachineInfo._fields))
ReservationInfo = namedtuple('ReservationInfo', ['reservation_id', 'reservation_name', 'reserved_start_time',
'reserved_end_time', 'users_acl', 'reserved_slots',
'reserved_machines'] + impl.reservation_info_impl_spec)
ReservationInfo.__new__.__defaults__ = tuple([None]*len(ReservationInfo._fields))
ReservationTemplate = namedtuple('ReservationTemplate', ['reservation_name', 'start_time', 'end_time', 'duration',
'min_slots', 'max_slots', 'job_category', 'users_acl',
'candidate_machines', 'min_phys_memory', 'machine_os',
'machine_arch'] + impl.reservation_template_impl_spec)
ReservationTemplate.__new__.__defaults__ = tuple([None]*len(ReservationTemplate._fields))
SlotInfo = namedtuple('SlotInfo', ['machine_name', 'slots'])
SlotInfo.__new__.__defaults__ = (None, None)
Version = namedtuple('Version', ['major', 'minor'])
Version.__new__.__defaults__ = (None, None)
class DeniedByDrmsException(Exception):
""" The DRM system rejected the operation due to security issues. """
pass
class DrmCommunicationException(Exception):
""" The DRMAA implementation could not contact the DRM system.
The problem source is unknown to the implementation,
so it is unknown if the problem is transient or not.
"""
pass
class TryLaterException(Exception):
""" The DRMAA implementation detected a transient problem while
performing the operation, for example due to excessive load.
The application is recommended to retry the operation.
"""
pass
class TimeoutException(Exception):
""" The timeout given in one the waiting functions was reached
without successfully finishing the waiting attempt.
"""
pass
class InternalException(Exception):
""" An unexpected or internal error occurred in the DRMAA library,
for example a system call failure.
It is unknown if the problem is transient or not.
"""
pass
class InvalidArgumentException(Exception):
""" From the viewpoint of the DRMAA library, an input parameter for
the particular method call is invalid or inappropriate.
"""
pass
class InvalidSessionException(Exception):
""" The session used for the method call is not valid,
for example since the session was previously closed.
"""
pass
class InvalidStateException(Exception):
""" The operation is not allowed in the current state of the job. """
pass
class OutOfResourceException(Exception):
""" The implementation has run out of operating system resources,
such as buffers, main memory, or disk space.
"""
pass
class UnsupportedAttributeException(Exception):
""" The optional attribute is not supported by this DRMAA implementation. """
pass
class UnsupportedOperationException(Exception):
""" The method is not supported by this DRMAA implementation."""
pass
class ImplementationSpecificException(Exception):
""" The implementation needs to report a special error condition that
cannot be mapped to one of the other exceptions.
"""
pass
# Module-level functions
def supports(capability):
""" supports(Capability entry) -> bool
This method tests whether the DRMAA implementation supports a feature that the specification marks as optional.
The allowed input values are specified in the Capability enumeration.
"""
return impl.supports(capability)
def create_job_session(session_name=None, contact=None):
""" create_job_session(str, str) -> JobSession object
The method creates and opens a new job session.
"""
return impl.create_job_session(session_name, contact)
def create_reservation_session(session_name=None, contact=None):
""" create_reservation_session(str, str) -> ReservationSession object
The method creates and opens a new reservation session.
"""
return impl.create_reservation_session(session_name, contact)
def open_job_session(session_name):
""" open_job_session(str) -> JobSession object
The method opens an existing job session.
"""
return impl.open_job_session(session_name)
def open_reservation_session(session_name):
""" open_reservation_session(str) -> ReservationSession object
The method opens an existing reservation session.
"""
return impl.open_reservation_session(session_name)
def open_monitoring_session(contact=None):
""" open_monitoring_session(str) -> MonitoringSession object
The method opens a monitoring session.
"""
return impl.open_monitoring_session(contact)
def destroy_session(session):
""" destroy_session(str) -> None
The method reaps all persistent or cached state information for the given session name.
"""
impl.destroy_session(session)
def get_job_session_names():
""" get_job_session_names() -> list
This method returns a string list of job session names that are valid input for the open_job_session method.
"""
return impl.get_job_session_names()
def get_reservation_session_names():
""" get_reservation_session_names() -> list
This method returns a string list of reservation session names that are valid input for
the open_reservation_session method.
"""
return impl.get_reservation_session_names()
def register_event_notification(callback):
""" register_event_notification(function) -> None
This method is used to register a callback function for events from the DRM system.
The function should accept one parameter that is filled with a Notification object.
"""
impl.register_event_notification(callback)
def describe_attribute(instance, name):
""" describe_attribute(namedtuple, str) -> str
Returns a human-readable description of an attribute's purpose in the given instance.
"""
return impl.describe_attribute(instance, name)
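# ---------------------------------------------------------------------------
# Usage sketch (illustrative only). The flow below follows the DRMAA2
# specification; the session name, command, arguments, and the Capability
# member are made-up examples, and JobSession.run_job() is assumed to be
# provided by the JobSession class defined earlier in this module.
#
# def on_event(notification):
#     pass  # notification is a Notification namedtuple (see above)
#
# if supports(Capability.CALLBACK):          # Capability member name assumed
#     register_event_notification(on_event)
# session = create_job_session(session_name='demo_session')
# template = JobTemplate(remote_command='/bin/sleep', args=['60'])
# job = session.run_job(template)
# job.wait_terminated(INFINITE_TIME)
# print job.get_info().exit_status
# session.close()
# destroy_session('demo_session')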
|
troeger/drmaa2-python
|
drmaa2/__init__.py
|
Python
|
apache-2.0
| 24,787
|
[
"VisIt"
] |
f2ba5fdeb045c8fc9cfec8eee423aef0d23ff7c43538aabca4597127358d02ee
|
# Jython Database Specification API 2.0
#
# $Id: sptest.py,v 1.3 2001/12/29 07:16:55 bzimmer Exp $
#
# Copyright (c) 2001 brian zimmer <bzimmer@ziclix.com>
from zxtest import zxCoreTestCase
class OracleSPTest(zxCoreTestCase):
def setUp(self):
zxCoreTestCase.setUp(self)
c = self.cursor()
try:
try:
c.execute("drop table sptest")
except:
self.db.rollback()
try:
c.execute("create table sptest (x varchar2(20))")
c.execute("create or replace procedure procnone is begin insert into sptest values ('testing'); end;")
c.execute("create or replace procedure procin (y in varchar2) is begin insert into sptest values (y); end;")
c.execute("create or replace procedure procout (y out varchar2) is begin y := 'tested'; end;")
c.execute("create or replace procedure procinout (y out varchar2, z in varchar2) is begin insert into sptest values (z); y := 'tested'; end;")
c.execute("create or replace function funcnone return varchar2 is begin return 'tested'; end;")
c.execute("create or replace function funcin (y varchar2) return varchar2 is begin return y || y; end;")
c.execute("create or replace function funcout (y out varchar2) return varchar2 is begin y := 'tested'; return 'returned'; end;")
self.db.commit()
except:
self.db.rollback()
self.fail("procedure creation failed")
self.proc_errors("PROC")
self.proc_errors("FUNC")
finally:
c.close()
def tearDown(self):
zxCoreTestCase.tearDown(self)
def proc_errors(self, name):
c = self.cursor()
try:
c.execute("select * from user_errors where name like '%s%%'" % (name.upper()))
errors = c.fetchall()
try:
assert errors is None, "found errors"
except AssertionError, e:
for a in errors:
print a
raise e
finally:
c.close()
def testCursor(self):
c = self.cursor()
try:
c.execute("insert into sptest values ('a')")
c.execute("insert into sptest values ('b')")
c.execute("insert into sptest values ('c')")
c.execute("insert into sptest values ('d')")
c.execute("insert into sptest values ('e')")
c.execute("""
CREATE OR REPLACE PACKAGE types
AS
TYPE ref_cursor IS REF CURSOR;
END;
""")
c.execute("""
CREATE OR REPLACE FUNCTION funccur(v_x IN VARCHAR)
RETURN types.ref_cursor
AS
funccur_cursor types.ref_cursor;
BEGIN
OPEN funccur_cursor FOR
SELECT x FROM sptest WHERE x < v_x;
RETURN funccur_cursor;
END;
""")
self.proc_errors("funccur")
c.callproc("funccur", ("z",))
data = c.fetchall()
self.assertEquals(5, len(data))
c.callproc("funccur", ("c",))
data = c.fetchall()
self.assertEquals(2, len(data))
finally:
c.close()
def testProcin(self):
c = self.cursor()
try:
params = ["testProcin"]
c.callproc("procin", params)
self.assertEquals(None, c.fetchall())
c.execute("select * from sptest")
self.assertEquals(1, len(c.fetchall()))
finally:
c.close()
def testProcinout(self):
c = self.cursor()
try:
params = [None, "testing"]
c.callproc("procinout", params)
data = c.fetchone()
assert data is None, "data was not None"
c.execute("select * from sptest")
data = c.fetchone()
self.assertEquals("testing", data[0])
self.assertEquals("tested", params[0])
finally:
c.close()
def testFuncnone(self):
c = self.cursor()
try:
c.callproc("funcnone")
data = c.fetchone()
assert data is not None, "data was None"
self.assertEquals(1, len(data))
self.assertEquals("tested", data[0])
finally:
c.close()
def testFuncin(self):
c = self.cursor()
try:
params = ["testing"]
c.callproc("funcin", params)
self.assertEquals(1, c.rowcount)
data = c.fetchone()
assert data is not None, "data was None"
self.assertEquals(1, len(data))
self.assertEquals("testingtesting", data[0])
finally:
c.close()
def testCallingWithKws(self):
c = self.cursor()
try:
params = ["testing"]
c.callproc("funcin", params=params)
self.assertEquals(1, c.rowcount)
data = c.fetchone()
assert data is not None, "data was None"
self.assertEquals(1, len(data))
self.assertEquals("testingtesting", data[0])
finally:
c.close()
def testFuncout(self):
c = self.cursor()
try:
params = [None]
c.callproc("funcout", params)
data = c.fetchone()
assert data is not None, "data was None"
self.assertEquals(1, len(data))
self.assertEquals("returned", data[0])
self.assertEquals("tested", params[0].strip())
finally:
c.close()
def testMultipleFetch(self):
"""testing the second fetch call to a callproc() is None"""
c = self.cursor()
try:
c.callproc("funcnone")
data = c.fetchone()
assert data is not None, "data was None"
data = c.fetchone()
assert data is None, "data was not None"
finally:
c.close()
class SQLServerSPTest(zxCoreTestCase):
def testProcWithResultSet(self):
c = self.cursor()
try:
c.execute("use ziclix")
self.assertEquals("ziclix", c.connection.__connection__.getCatalog())
try:
c.execute("drop table sptest")
except:
pass
c.execute("create table sptest (a int, b varchar(32))")
c.execute("insert into sptest values (1, 'hello')")
c.execute("insert into sptest values (2, 'there')")
c.execute("insert into sptest values (3, 'goodbye')")
try:
c.execute("drop procedure sp_proctest")
except:
pass
c.execute("""
create procedure sp_proctest (@A int)
as
select a, b from sptest where a <= @A
""")
c.callproc(("ziclix", "jython", "sp_proctest"), (2,))
data = c.fetchall()
self.assertEquals(2, len(data))
self.assertEquals(2, len(c.description))
assert c.nextset() is not None, "expected an additional result set"
data = c.fetchall()
self.assertEquals(1, len(data))
self.assertEquals(1, len(c.description))
finally:
c.close()
def testSalesByCategory(self):
c = self.cursor()
try:
c.execute("use northwind")
c.callproc(("northwind", "dbo", "SalesByCategory"), ["Seafood", "1998"])
data = c.fetchall()
assert data is not None, "no results from SalesByCategory"
assert len(data) > 0, "expected numerous results"
finally:
c.close()
|
ai-ku/langvis
|
jython-2.1/Lib/test/zxjdbc/sptest.py
|
Python
|
mit
| 6,218
|
[
"Brian"
] |
0a2e1a15adad49097d53534c48a08b87e6363fc40301ca79c2eae963d727cfd9
|
from bs_utils.utils import *
import re
BAM_MATCH = 0
BAM_INS = 1
BAM_DEL = 2
BAM_SOFTCLIP = 4
CIGAR_OPS = {'M' : BAM_MATCH, 'I' : BAM_INS, 'D' : BAM_DEL, 'S' : BAM_SOFTCLIP}
def N_MIS(r,g):
mismatches = 0
if len(r)==len(g):
for i in xrange(len(r)):
if r[i] != g[i] and r[i] != "N" and g[i] != "N" and not(r[i] == 'T' and g[i] == 'C'):
mismatches += 1
#
#
#
return mismatches
#
#----------------------------------------------------------------
"""
Example:
========
Read : ACCGCGTTGATCGAGTACGTACGTGGGTC
Adapter : ....................ACGTGGGTCCCG
========
no_mismatch : the maximum number allowed for mismatches
Algorithm: (allowing 1 mismatch)
========
-Step 1:
ACCGCGTTGATCGAGTACGTACGTGGGTC
||XX
ACGTGGGTCCCG
-Step 2:
ACCGCGTTGATCGAGTACGTACGTGGGTC
X||X
.ACGTGGGTCCCG
-Step 3:
ACCGCGTTGATCGAGTACGTACGTGGGTC
XX
..ACGTGGGTCCCG
-Step ...
-Step N:
ACCGCGTTGATCGAGTACGTACGTGGGTC
|||||||||
....................ACGTGGGTCCCG
Success & return!
========
"""
# Remove the adapter from 3' end
def RemoveAdapter ( read, adapter, no_mismatch, rm_back=0) :
lr = len(read)
la = len(adapter)
if la == 0 :
return read
# Check the empty adapter, namely, the reads start with the 2nd base of adapter,
# not including the 'A' base in front of the adapter.
if adapter[2:] == read[0:(la-1)] :
return ""
#
for i in xrange( lr - no_mismatch ) :
read_pos = i
adapter_pos = 0
count_no_mis = 0
while (adapter_pos < la) and (read_pos < lr) :
if (read[read_pos] == adapter[adapter_pos]) :
read_pos = read_pos + 1
adapter_pos = adapter_pos + 1
else :
count_no_mis = count_no_mis + 1
if count_no_mis > no_mismatch :
break
else :
read_pos = read_pos + 1
adapter_pos = adapter_pos + 1
#
#
# while_end
# Cut the extra bases before the adapter
# --C|CG G-- => --CNN+A+<adapter>
# --G GC|C-- --GGC
if adapter_pos == la or read_pos == lr :
if i <= rm_back :
return ''
else :
return read[:(i-rm_back)]
#
#
# for_end
return read
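# Illustrative check (values taken from the docstring example above):
# RemoveAdapter("ACCGCGTTGATCGAGTACGTACGTGGGTC", "ACGTGGGTCCCG", 1)
# is expected to trim the adapter match starting at read position 20 and
# return "ACCGCGTTGATCGAGTACGT".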
def Remove_5end_Adapter ( read, adapter, no_mismatch) :
lr = len(read)
la = len(adapter)
if la == 0 :
return read
#
for i in xrange (la - no_mismatch) :
read_pos = 0
adapter_pos = i
count_no_mis = 0
while (adapter_pos < la) and (read_pos < lr) :
if (read[read_pos] == adapter[adapter_pos]) :
adapter_pos = adapter_pos + 1
read_pos = read_pos + 1
else :
count_no_mis = count_no_mis + 1
if count_no_mis > no_mismatch :
break
else :
read_pos = read_pos + 1
adapter_pos = adapter_pos + 1
#
#
# while_end
if adapter_pos == la :
return read[(la-i):]
#
return read
#
def next_nuc(seq, pos, n):
""" Returns the nucleotide that is n places from pos in seq. Skips gap symbols.
"""
i = pos + 1
while i < len(seq):
if seq[i] != '-':
n -= 1
if n == 0: break
i += 1
if i < len(seq) :
return seq[i]
else :
return 'N'
#
#
def methy_seq(read, genome):
H = ['A', 'C', 'T']
m_seq = []
xx = "-"
for i in xrange(len(read)):
if genome[i] == '-':
continue
elif read[i] != 'C' and read[i] != 'T':
xx = "-"
elif read[i] == "T" and genome[i] == "C": #(unmethylated):
nn1 = next_nuc(genome, i, 1)
if nn1 == "G":
xx = "x"
elif nn1 in H :
nn2 = next_nuc(genome, i, 2)
if nn2 == "G":
xx = "y"
elif nn2 in H :
xx = "z"
#
#
elif read[i] == "C" and genome[i] == "C": #(methylated):
nn1 = next_nuc(genome, i, 1)
if nn1 == "G":
xx = "X"
elif nn1 in H :
nn2 = next_nuc(genome, i, 2)
#
if nn2 == "G":
xx = "Y"
elif nn2 in H:
xx = "Z"
#
#
else:
xx = "-"
#
m_seq.append(xx)
#
return ''.join(m_seq)
#
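# Illustrative call: with read = "TTGT" aligned to genome = "CCGT",
# methy_seq returns "yx--": the first genomic C sits in a CHG context and is
# read as T (unmethylated, 'y'), the second C sits in a CpG context and is
# also read as T ('x'); the remaining positions are uninformative ('-').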
def mcounts(mseq, mlst, ulst):
out_mlst=[mlst[0]+mseq.count("X"), mlst[1]+mseq.count("Y"), mlst[2]+mseq.count("Z")]
out_ulst=[ulst[0]+mseq.count("x"), ulst[1]+mseq.count("y"), ulst[2]+mseq.count("z")]
return out_mlst, out_ulst
#
def process_aligner_output(filename, pair_end = False):
#m = re.search(r'-('+'|'.join(supported_aligners) +')-TMP', filename)
m = re.search(r'-('+'|'.join(supported_aligners) +')-.*TMP', filename)
if m is None:
error('The temporary folder path should contain the name of one of the supported aligners: ' + filename)
#
format = m.group(1)
try :
input = open(filename)
except IOError:
print "[Error] Cannot open file %s" % filename
exit(-1)
#
QNAME, FLAG, RNAME, POS, MAPQ, CIGAR, RNEXT, PNEXT, TLEN, SEQ, QUAL = range(11)
def parse_SAM(line):
# fix error when reading file with lots of \x00 # date on 2016-12-09
line = line.replace('\x00', '').strip()
buf = line.split("\t")
if len(buf) < 11 :
sys.stderr.write("[warning] SAM input without enough columns\n")
return None, None, None, None, None, None
#
flag = int(buf[FLAG])
# skip reads that are not mapped
# skip reads that have probability of being non-unique higher than 1/10
if flag & 0x4 : # or int(buf[MAPQ]) < 10:
return None, None, None, None, None, None
# print "format = ", format
if format == BOWTIE:
mismatches = int([buf[i][5:] for i in xrange(11, len(buf)) if buf[i][:5] == 'NM:i:'][0]) # get the edit distance
# --- bug fixed ------
elif format == BOWTIE2:
if re.search(r'(.)*-e2e-TMP(.*)', filename) is None : # local model
mismatches = 1-int([buf[i][5:] for i in xrange(11, len(buf)) if buf[i][:5] == 'AS:i:'][0])
# print "====local=====\n"
## bowtie2 use AS tag (score) to evaluate the mapping. The higher, the better.
else : # end-to-end model
# print "end-to-end\n"
mismatches = int([buf[i][5:] for i in xrange(11, len(buf)) if buf[i][:5] == 'XM:i:'][0])
# --- Weilong ---------
elif format == SOAP:
mismatches = 1 - int(buf[MAPQ])
# mismatches = 1/float(buf[MAPQ])
## downstream might round (0,1) to 0, so use integer instead
## fixed by Weilong
elif format == RMAP:
# chr16 75728107 75728147 read45 9 -
# chr16 67934919 67934959 read45 9 -
mismatches = buf[4]
#
return (buf[QNAME], # read ID
buf[RNAME], # reference ID
int(buf[POS]) - 1, # position, 0 based (SAM is 1 based)
mismatches, # number of mismatches
parse_cigar(buf[CIGAR]), # the parsed cigar string
flag & 0x40 # true if it is the first mate in a pair, false if it is the second mate
)
#
SOAP_QNAME, SOAP_SEQ, SOAP_QUAL, SOAP_NHITS, SOAP_AB, SOAP_LEN, SOAP_STRAND, SOAP_CHR, SOAP_LOCATION, SOAP_MISMATCHES = range(10)
def parse_SOAP(line):
buf = line.split()
return (buf[SOAP_QNAME],
buf[SOAP_CHR],
int(buf[SOAP_LOCATION]) - 1,
int(buf[SOAP_MISMATCHES]),
buf[SOAP_AB],
buf[SOAP_STRAND],
parse_cigar(buf[SOAP_LEN]+'M')
)
#
# chr16 75728107 75728147 read45 9 -
RMAP_CHR, RMAP_START, RMAP_END, RMAP_QNAME, RMAP_MISMATCH, RMAP_STRAND = range(6)
def parse_RMAP(line):
buf = line.split()
return ( buf[RMAP_QNAME],
buf[RMAP_CHR],
int(buf[RMAP_START]), # to check -1 or not
int(buf[RMAP_END]) - int(buf[RMAP_START]) + 1,
int(buf[RMAP_MISMATCH]),
buf[RMAP_STRAND]
)
#
if format == BOWTIE or format == BOWTIE2:
if pair_end:
for line in input:
header1, chr1, location1, no_mismatch1, cigar1, _ = parse_SAM(line)
header2, _, location2, no_mismatch2, cigar2, mate_no2 = parse_SAM(input.next())
#
if header1 and header2:
# flip the location info if the second mate comes first in the alignment file
if mate_no2:
location1, location2 = location2, location1
cigar1, cigar2 = cigar2, cigar1
#
yield header1, chr1, no_mismatch1 + no_mismatch2, location1, cigar1, location2, cigar2
#
#
else:
for line in input:
header, chr, location, no_mismatch, cigar, _ = parse_SAM(line)
if header is not None:
yield header, chr, location, no_mismatch, cigar
#
#
#
elif format == SOAP:
if pair_end:
for line in input:
header1, chr1, location1, no_mismatch1, mate1, strand1, cigar1 = parse_SOAP(line)
header2, _ , location2, no_mismatch2, _, strand2, cigar2 = parse_SOAP(input.next())
#
if mate1 == 'b':
location1, location2 = location2, location1
strand1, strand2 = strand2, strand1
cigar1, cigar2 = cigar2, cigar1
#
if header1 and header2 and strand1 == '+' and strand2 == '-':
yield header1, chr1, no_mismatch1 + no_mismatch2, location1, cigar1, location2, cigar2
#
#
#
else:
for line in input:
header, chr, location, no_mismatch, _, strand, cigar = parse_SOAP(line)
if header and strand == '+':
yield header, chr, location, no_mismatch, cigar
#
#
#
elif format == RMAP :
if pair_end :
todo = 0
# to do
else :
for line in input:
header, chr, location, read_len, no_mismatch, strand = parse_RMAP(line)
cigar = str(read_len) + "M"
yield header, chr, location, no_mismatch, cigar
#
#
#
input.close()
#
def parse_cigar(cigar_string):
i = 0
prev_i = 0
cigar = []
while i < len(cigar_string):
if cigar_string[i] in CIGAR_OPS:
cigar.append((CIGAR_OPS[cigar_string[i]], int(cigar_string[prev_i:i])))
prev_i = i + 1
i += 1
return cigar
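# Illustrative call: parse_cigar("5S20M2I3D10M") yields
# [(BAM_SOFTCLIP, 5), (BAM_MATCH, 20), (BAM_INS, 2), (BAM_DEL, 3), (BAM_MATCH, 10)],
# i.e. [(4, 5), (0, 20), (1, 2), (2, 3), (0, 10)].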
def get_read_start_end_and_genome_length(cigar):
r_start = cigar[0][1] if cigar[0][0] == BAM_SOFTCLIP else 0
r_end = r_start
g_len = 0
for edit_op, count in cigar:
if edit_op == BAM_MATCH:
r_end += count
g_len += count
elif edit_op == BAM_INS:
r_end += count
elif edit_op == BAM_DEL:
g_len += count
#
#
return r_start, r_end, g_len # return the start and end in the read and the length of the genomic sequence
# r_start : start position on the read
# r_end : end position on the read
# g_len : length of the mapped region on genome
#
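# Illustrative call: for parse_cigar("5S20M2I3D10M") this function returns
# (5, 37, 33) -- the alignment starts at read offset 5 (after the soft clip),
# ends at read offset 37, and spans 33 bases on the genome (20 + 3 + 10).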
def cigar_to_alignment(cigar, read_seq, genome_seq):
""" Reconstruct the pairwise alignment based on the CIGAR string and the two sequences
"""
# reconstruct the alignment
r_pos = cigar[0][1] if cigar[0][0] == BAM_SOFTCLIP else 0
g_pos = 0
r_aln = ''
g_aln = ''
for edit_op, count in cigar:
if edit_op == BAM_MATCH:
r_aln += read_seq[r_pos : r_pos + count]
g_aln += genome_seq[g_pos : g_pos + count]
r_pos += count
g_pos += count
elif edit_op == BAM_DEL:
r_aln += '-'*count
g_aln += genome_seq[g_pos : g_pos + count]
g_pos += count
elif edit_op == BAM_INS:
r_aln += read_seq[r_pos : r_pos + count]
g_aln += '-'*count
r_pos += count
#
#
return r_aln, g_aln
#
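# Illustrative call: with cigar = [(BAM_MATCH, 3), (BAM_INS, 2), (BAM_MATCH, 2),
# (BAM_DEL, 1), (BAM_MATCH, 2)], read_seq = "ACGTTACGA" and genome_seq = "ACGACTGA",
# cigar_to_alignment returns ("ACGTTAC-GA", "ACG--ACTGA").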
# return sequence is [start, end), not include 'end'
def get_genomic_sequence(genome, start, end, strand = '+'):
if strand != '+' and strand != '-' :
print "[Bug] get_genomic_sequence input should be \'+\' or \'-\'."
exit(-1)
if start > 1:
prev = genome[start-2:start]
elif start == 1:
prev = 'N'+genome[0]
else:
prev = 'NN'
#
if end < len(genome) - 1:
next = genome[end: end + 2]
elif end == len(genome) - 1:
next = genome[end] + 'N'
else:
next = 'NN'
#
origin_genome = genome[start:end]
#
if strand == '-':
# reverse complement everything if strand is '-'
revc = reverse_compl_seq('%s%s%s' % (prev, origin_genome, next))
prev, origin_genome, next = revc[:2], revc[2:-2], revc[-2:]
#
return origin_genome, next, '%s_%s_%s' % (prev, origin_genome, next)
# next : next two nucleotides
#
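# Illustrative call: get_genomic_sequence("AACGTT", 2, 4) returns
# ("CG", "TT", "AA_CG_TT") -- the [start, end) slice, the next two bases, and
# the slice padded with its two-base flanks. With strand '-' the same region is
# reverse-complemented first (via reverse_compl_seq from bs_utils).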
|
BioInfoTools/BSVF
|
bin/BSseeker2/bs_align/bs_align_utils.py
|
Python
|
lgpl-3.0
| 13,956
|
[
"Bowtie"
] |
f5b96b370c54f5bc1219ca21ee164ab609ff6898c4eb40424e0199220337a744
|
from __future__ import division, print_function
import logging
import theano
import numpy
import cPickle
from theano import tensor
from collections import OrderedDict
from blocks.graph import ComputationGraph
from blocks.filter import VariableFilter
from blocks.bricks.base import application, _Brick, Brick, lazy
from blocks.bricks.recurrent import BaseRecurrent, recurrent
from blocks.initialization import Constant, IsotropicGaussian, Orthogonal
from blocks.bricks import Random, MLP, Linear, Tanh, Softmax, Initializable
from blocks.bricks import Tanh, Identity, Activation, Feedforward
from blocks.bricks.cost import BinaryCrossEntropy
from blocks.utils import shared_floatx_nans
from blocks.roles import add_role, WEIGHT, BIAS, PARAMETER, AUXILIARY
from BlocksAttention import ZoomableAttention2d
from DKCode import get_adam_updates, get_adam_updates_X
from HelperFuncs import constFX, to_fX, tanh_clip
from LogPDFs import log_prob_bernoulli, gaussian_kld, log_prob_gaussian2
################################
# Softplus activation function #
################################
class Softplus(Activation):
@application(inputs=['input_'], outputs=['output'])
def apply(self, input_):
return tensor.nnet.softplus(input_)
class BiasedLSTM(BaseRecurrent, Initializable):
@lazy(allocation=['dim'])
def __init__(self, dim, ig_bias=0.0, fg_bias=0.0, og_bias=0.0,
activation=None, **kwargs):
super(BiasedLSTM, self).__init__(**kwargs)
self.dim = dim
self.ig_bias = constFX(ig_bias) # input gate bias
self.fg_bias = constFX(fg_bias) # forget gate bias
self.og_bias = constFX(og_bias) # output gate bias
if not activation:
activation = Tanh()
self.children = [activation]
return
def get_dim(self, name):
if name == 'inputs':
return self.dim * 4
if name in ['states', 'cells']:
return self.dim
if name == 'mask':
return 0
return super(BiasedLSTM, self).get_dim(name)
def _allocate(self):
self.W_state = shared_floatx_nans((self.dim, 4*self.dim),
name='W_state')
self.W_cell_to_in = shared_floatx_nans((self.dim,),
name='W_cell_to_in')
self.W_cell_to_forget = shared_floatx_nans((self.dim,),
name='W_cell_to_forget')
self.W_cell_to_out = shared_floatx_nans((self.dim,),
name='W_cell_to_out')
add_role(self.W_state, WEIGHT)
add_role(self.W_cell_to_in, WEIGHT)
add_role(self.W_cell_to_forget, WEIGHT)
add_role(self.W_cell_to_out, WEIGHT)
self.params = [self.W_state, self.W_cell_to_in, self.W_cell_to_forget,
self.W_cell_to_out]
return
def _initialize(self):
for w in self.params:
self.weights_init.initialize(w, self.rng)
return
@recurrent(sequences=['inputs', 'mask'], states=['states', 'cells'],
contexts=[], outputs=['states', 'cells'])
def apply(self, inputs, states, cells, mask=None):
"""Apply the Long Short Term Memory transition.
Parameters
----------
states : :class:`~tensor.TensorVariable`
The 2 dimensional matrix of current states in the shape
(batch_size, features). Required for `one_step` usage.
cells : :class:`~tensor.TensorVariable`
The 2 dimensional matrix of current cells in the shape
(batch_size, features). Required for `one_step` usage.
inputs : :class:`~tensor.TensorVariable`
The 2 dimensional matrix of inputs in the shape (batch_size,
features * 4).
mask : :class:`~tensor.TensorVariable`
A 1D binary array in the shape (batch,) which is 1 if there is
data available, 0 if not. Assumed to be 1-s only if not given.
Returns
-------
states : :class:`~tensor.TensorVariable`
Next states of the network.
cells : :class:`~tensor.TensorVariable`
Next cell activations of the network.
"""
def slice_last(x, no):
return x.T[no*self.dim: (no+1)*self.dim].T
nonlinearity = self.children[0].apply
activation = tensor.dot(states, self.W_state) + inputs
in_gate = tensor.nnet.sigmoid(slice_last(activation, 0) +
(cells * self.W_cell_to_in) +
self.ig_bias)
forget_gate = tensor.nnet.sigmoid(slice_last(activation, 1) +
(cells * self.W_cell_to_forget) +
self.fg_bias)
next_cells = (forget_gate * cells +
in_gate * nonlinearity(slice_last(activation, 2)))
out_gate = tensor.nnet.sigmoid(slice_last(activation, 3) +
(next_cells * self.W_cell_to_out) +
self.og_bias)
next_states = out_gate * nonlinearity(next_cells)
if mask:
next_states = (mask[:, None] * next_states +
(1 - mask[:, None]) * states)
next_cells = (mask[:, None] * next_cells +
(1 - mask[:, None]) * cells)
return next_states, next_cells
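# Usage sketch (illustrative only, dimensions made up): BiasedLSTM follows the
# stock Blocks LSTM interface, so `inputs` must already be of size 4 * dim
# (e.g., produced by a Linear transform), and a single step is taken with
# iterate=False, exactly as done by the DRAW models further below.
#
# rnn = BiasedLSTM(dim=256, ig_bias=1.0, fg_bias=1.0, og_bias=1.0,
#                  weights_init=IsotropicGaussian(0.01))
# rnn.initialize()
# h_t, c_t = rnn.apply(states=h_tm1, cells=c_tm1, inputs=i_t, iterate=False)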
###################################################
# Diagonal Gaussian conditional density estimator #
###################################################
class CondNet(Initializable, Feedforward):
"""A simple multi-layer perceptron for diagonal Gaussian conditionals.
Note -- For now, we require both activations and dims to be specified.
Parameters
----------
activations : list of :class:`.Brick`, :class:`.BoundApplication`,
or ``None``
A list of activations to apply after each linear transformation.
Give ``None`` to not apply any activation. It is assumed that the
application method to use is ``apply``. Required for
:meth:`__init__`. The length of this list should be two less than
the length of dims, as first dim is the input dim and the last dim
is the dim of the output Gaussian.
dims : list of ints
A list of input dimensions, as well as the output dimension of the
last layer. Required for :meth:`~.Brick.allocate`.
"""
def __init__(self, activations=None, dims=None, **kwargs):
if activations is None:
raise ValueError("activations must be specified.")
if dims is None:
raise ValueError("dims must be specified.")
if not (len(dims) == (len(activations) + 2)):
raise ValueError("len(dims) != len(activations) + 2.")
super(CondNet, self).__init__(**kwargs)
self.dims = dims
self.shared_acts = activations
# construct the shared linear transforms for feedforward
self.shared_linears = []
for i in range(len(dims)-2):
self.shared_linears.append( \
Linear(dims[i], dims[i+1], name='shared_linear_{}'.format(i)))
self.mean_linear = Linear(dims[-2], dims[-1], name='mean_linear')
self.logvar_linear = Linear(dims[-2], dims[-1], name='logvar_linear',
weights_init=Constant(0.))
self.children = self.shared_linears + self.shared_acts
self.children.append(self.mean_linear)
self.children.append(self.logvar_linear)
return
def get_dim(self, name):
if name == 'input':
return self.dims[0]
elif name == 'output':
return self.dims[-1]
else:
raise ValueError("Invalid dim name: {}".format(name))
return
@property
def input_dim(self):
return self.dims[0]
@property
def output_dim(self):
return self.dims[-1]
@application(inputs=['x', 'u'], outputs=['z_mean', 'z_logvar', 'z'])
def apply(self, x, u):
f = [ x ]
for linear, activation in zip(self.shared_linears, self.shared_acts):
f.append( activation.apply(linear.apply(f[-1])) )
z_mean = self.mean_linear.apply(f[-1])
z_logvar = self.logvar_linear.apply(f[-1])
z = z_mean + (u * tensor.exp(0.5 * z_logvar))
return z_mean, z_logvar, z
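# Usage sketch (illustrative only, dimensions made up): a CondNet maps an input
# x and ZMUV noise u to a reparameterized sample from a diagonal Gaussian,
# which is how the *_mlp_out bricks are used by the DRAW models below.
#
# q_z_given_x = CondNet(activations=[Tanh()], dims=[784, 256, 64],
#                       weights_init=IsotropicGaussian(0.01),
#                       biases_init=Constant(0.))
# q_z_given_x.initialize()
# z_mean, z_logvar, z = q_z_given_x.apply(x, u)   # u ~ N(0, I), shape (batch, 64)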
#-----------------------------------------------------------------------------
class Reader(Initializable):
def __init__(self, x_dim, dec_dim, **kwargs):
super(Reader, self).__init__(name="reader", **kwargs)
self.x_dim = x_dim
self.dec_dim = dec_dim
self.output_dim = 2*x_dim
def get_dim(self, name):
if name == 'input':
return self.dec_dim
elif name == 'x_dim':
return self.x_dim
elif name == 'output':
return self.output_dim
else:
raise ValueError
@application(inputs=['x', 'x_hat', 'h_dec'], outputs=['r'])
def apply(self, x, x_hat, h_dec):
return tensor.concatenate([x, x_hat], axis=1)
class AttentionReader2d(Initializable):
def __init__(self, x_dim, dec_dim, height, width, N, **kwargs):
super(AttentionReader2d, self).__init__(name="reader", **kwargs)
self.img_height = height
self.img_width = width
self.N = N
self.x_dim = x_dim
self.dec_dim = dec_dim
self.output_dim = 2*N*N
self.pre_trafo = Linear(
name=self.name+'_pretrafo',
input_dim=dec_dim, output_dim=dec_dim,
weights_init=self.weights_init, biases_init=self.biases_init,
use_bias=True)
self.zoomer = ZoomableAttention2d(height, width, N)
self.readout = MLP(activations=[Identity()], dims=[dec_dim, 5], **kwargs)
self.children = [self.pre_trafo, self.readout]
return
def get_dim(self, name):
if name == 'input':
return self.dec_dim
elif name == 'x_dim':
return self.x_dim
elif name == 'output':
return self.output_dim
else:
raise ValueError
@application(inputs=['x', 'x_hat', 'h_dec'], outputs=['r'])
def apply(self, x, x_hat, h_dec):
p = self.pre_trafo.apply(h_dec)
l = self.readout.apply(p)
center_y, center_x, delta, sigma, gamma = self.zoomer.nn2att(l)
w = gamma * self.zoomer.read(x , center_y, center_x, delta, sigma)
w_hat = gamma * self.zoomer.read(x_hat, center_y, center_x, delta, sigma)
return tensor.concatenate([w, w_hat], axis=1)
#-----------------------------------------------------------------------------
class Writer(Initializable):
def __init__(self, input_dim, output_dim, **kwargs):
super(Writer, self).__init__(name="writer", **kwargs)
self.input_dim = input_dim
self.output_dim = output_dim
self.transform = Linear(
name=self.name+'_transform',
input_dim=input_dim, output_dim=output_dim,
weights_init=self.weights_init, biases_init=self.biases_init,
use_bias=True)
self.children = [self.transform]
@application(inputs=['h'], outputs=['c_update'])
def apply(self, h):
return self.transform.apply(h)
class AttentionWriter(Initializable):
def __init__(self, input_dim, output_dim, width, height, N, **kwargs):
super(AttentionWriter, self).__init__(name="writer", **kwargs)
self.img_width = width
self.img_height = height
self.N = N
self.input_dim = input_dim
self.output_dim = output_dim
assert output_dim == width*height
self.zoomer = ZoomableAttention2d(height, width, N)
self.z_trafo = Linear(
name=self.name+'_ztrafo',
input_dim=input_dim, output_dim=5,
weights_init=self.weights_init, biases_init=self.biases_init,
use_bias=True)
self.w_trafo = Linear(
name=self.name+'_wtrafo',
input_dim=input_dim, output_dim=N*N,
weights_init=self.weights_init, biases_init=self.biases_init,
use_bias=True)
self.children = [self.z_trafo, self.w_trafo]
return
@application(inputs=['h'], outputs=['c_update'])
def apply(self, h):
w = self.w_trafo.apply(h)
l = self.z_trafo.apply(h)
center_y, center_x, delta, sigma, gamma = self.zoomer.nn2att(l)
c_update = 1./gamma * self.zoomer.write(w, center_y, center_x, delta, sigma)
return c_update
@application(inputs=['h'], outputs=['c_update', 'center_y', 'center_x', 'delta'])
def apply_detailed(self, h):
w = self.w_trafo.apply(h)
l = self.z_trafo.apply(h)
center_y, center_x, delta, sigma, gamma = self.zoomer.nn2att(l)
c_update = 1./gamma * self.zoomer.write(w, center_y, center_x, delta, sigma)
return c_update, center_y, center_x, delta
class AttentionWriter2(Initializable):
def __init__(self, input_dim, output_dim, width, height, N, **kwargs):
super(AttentionWriter2, self).__init__(name="writer", **kwargs)
self.img_width = width
self.img_height = height
self.N = N
self.input_dim = input_dim
self.output_dim = output_dim
assert output_dim == width*height
self.zoomer = ZoomableAttention2d(height, width, N)
self.pre_trafo = Linear(
name=self.name+'_pretrafo',
input_dim=input_dim, output_dim=input_dim,
weights_init=self.weights_init, biases_init=self.biases_init,
use_bias=True)
self.z_trafo = Linear(
name=self.name+'_ztrafo',
input_dim=input_dim, output_dim=5,
weights_init=self.weights_init, biases_init=self.biases_init,
use_bias=True)
self.w_trafo = Linear(
name=self.name+'_wtrafo',
input_dim=input_dim, output_dim=N*N,
weights_init=self.weights_init, biases_init=self.biases_init,
use_bias=True)
self.children = [self.pre_trafo, self.z_trafo, self.w_trafo]
@application(inputs=['h'], outputs=['c_update'])
def apply(self, h):
p = self.pre_trafo.apply(h)
w = self.w_trafo.apply(p)
l = self.z_trafo.apply(p)
center_y, center_x, delta, sigma, gamma = self.zoomer.nn2att(l)
c_update = 1./gamma * self.zoomer.write(w, center_y, center_x, delta, sigma)
return c_update
@application(inputs=['h'], outputs=['c_update', 'center_y', 'center_x', 'delta'])
def apply_detailed(self, h):
p = self.pre_trafo.apply(h)
w = self.w_trafo.apply(p)
l = self.z_trafo.apply(p)
center_y, center_x, delta, sigma, gamma = self.zoomer.nn2att(l)
c_update = 1./gamma * self.zoomer.write(w, center_y, center_x, delta, sigma)
return c_update, center_y, center_x, delta
##########################################################
# Generalized DRAW model, with infinite mixtures and RL. #
# -- this only works open-loopishly #
##########################################################
class IMoOLDrawModels(BaseRecurrent, Initializable, Random):
def __init__(self, n_iter, step_type, mix_enc_mlp, mix_dec_mlp,
reader_mlp, enc_mlp_in, enc_rnn, enc_mlp_out,
dec_mlp_in, dec_rnn, dec_mlp_out, writer_mlp,
**kwargs):
super(IMoOLDrawModels, self).__init__(**kwargs)
if not ((step_type == 'add') or (step_type == 'jump')):
raise ValueError('step_type must be jump or add')
# record the desired step count
self.n_iter = n_iter
self.step_type = step_type
# grab handles for mixture stuff
self.mix_enc_mlp = mix_enc_mlp
self.mix_dec_mlp = mix_dec_mlp
# grab handles for IMoOLDRAW model stuff
self.reader_mlp = reader_mlp
self.enc_mlp_in = enc_mlp_in
self.enc_rnn = enc_rnn
self.enc_mlp_out = enc_mlp_out
self.dec_mlp_in = dec_mlp_in
self.dec_rnn = dec_rnn
self.dec_mlp_out = dec_mlp_out
self.writer_mlp = writer_mlp
# regularization noise on RNN states
zero_ary = to_fX(numpy.zeros((1,)))
self.rnn_noise = theano.shared(value=zero_ary, name='rnn_noise')
# setup a "null pointer" for the computation graph; built by build_model_funcs()
self.cg = None
self.params = []
# record the sub-models that underlie this model
self.children = [self.mix_enc_mlp, self.mix_dec_mlp, self.reader_mlp,
self.enc_mlp_in, self.enc_rnn, self.enc_mlp_out,
self.dec_mlp_in, self.dec_rnn, self.dec_mlp_out,
self.writer_mlp]
return
def _allocate(self):
c_dim = self.get_dim('c')
zm_dim = self.get_dim('z_mix')
# self.c_0 provides the initial state of the canvas
self.c_0 = shared_floatx_nans((c_dim,), name='c_0')
# self.zm_mean provides the mean of z_mix
self.zm_mean = shared_floatx_nans((zm_dim,), name='zm_mean')
# self.zm_logvar provides the logvar of z_mix
self.zm_logvar = shared_floatx_nans((zm_dim,), name='zm_logvar')
add_role(self.c_0, PARAMETER)
add_role(self.zm_mean, PARAMETER)
add_role(self.zm_logvar, PARAMETER)
# add the theano shared variables to our parameter lists
self.params.extend([ self.c_0, self.zm_mean, self.zm_logvar ])
return
def _initialize(self):
# initialize to all parameters zeros...
for p in self.params:
p_nan = p.get_value(borrow=False)
p_zeros = numpy.zeros(p_nan.shape)
p.set_value(p_zeros.astype(theano.config.floatX))
return
def get_dim(self, name):
if name == 'c':
return self.reader_mlp.get_dim('x_dim')
elif name == 'z_mix':
return self.mix_enc_mlp.get_dim('output')
elif name in ['h_enc','u_enc']:
return self.enc_rnn.get_dim('states')
elif name == 'c_enc':
return self.enc_rnn.get_dim('cells')
elif name == 'z_gen':
return self.enc_mlp_out.get_dim('output')
elif name in ['h_dec','u_dec']:
return self.dec_rnn.get_dim('states')
elif name == 'c_dec':
return self.dec_rnn.get_dim('cells')
elif name in ['nll', 'kl_q2p', 'kl_p2q']:
return 0
elif name == 'center_y':
return 0
elif name == 'center_x':
return 0
elif name == 'delta':
return 0
else:
return super(IMoOLDrawModels, self).get_dim(name)
return
def set_rnn_noise(self, rnn_noise=0.0):
"""
Set the standard deviation of "regularizing noise".
"""
zero_ary = numpy.zeros((1,))
new_val = zero_ary + rnn_noise
self.rnn_noise.set_value(to_fX(new_val))
return
#------------------------------------------------------------------------
@recurrent(sequences=['u', 'u_enc', 'u_dec'], contexts=['x'],
states=['c', 'h_enc', 'c_enc', 'h_dec', 'c_dec', 'nll', 'kl_q2p', 'kl_p2q'],
outputs=['c', 'h_enc', 'c_enc', 'h_dec', 'c_dec', 'nll', 'kl_q2p', 'kl_p2q'])
def apply(self, u, u_enc, u_dec, c, h_enc, c_enc, h_dec, c_dec, nll, kl_q2p, kl_p2q, x):
# get current prediction
if self.step_type == 'add':
# additive steps use c as a "direct workspace", which means it's
# already directly comparable to x.
c = c
else:
# non-additive steps use c_dec as a "latent workspace", which means
# it needs to be transformed before being comparable to x.
c = self.writer_mlp.apply(c_dec)
c_as_x = tensor.nnet.sigmoid(tanh_clip(c, clip_val=15.0))
# get the current "reconstruction error"
x_hat = x - c_as_x
r_enc = self.reader_mlp.apply(x, x_hat, h_dec)
# update the encoder RNN state
i_enc = self.enc_mlp_in.apply(tensor.concatenate([r_enc, h_dec], axis=1))
h_enc, c_enc = self.enc_rnn.apply(states=h_enc, cells=c_enc,
inputs=i_enc, iterate=False)
# add noise to the encoder state
h_enc = h_enc + u_enc
# estimate encoder conditional over z given h_enc
q_gen_mean, q_gen_logvar, q_z_gen = \
self.enc_mlp_out.apply(h_enc, u)
# estimate decoder conditional over z given h_dec
p_gen_mean, p_gen_logvar, p_z_gen = \
self.dec_mlp_out.apply(h_dec, u)
# update the decoder RNN state
z_gen = q_z_gen # use samples from q while training
i_dec = self.dec_mlp_in.apply(tensor.concatenate([z_gen], axis=1))
h_dec, c_dec = self.dec_rnn.apply(states=h_dec, cells=c_dec, \
inputs=i_dec, iterate=False)
# add noise to the decoder state
h_dec = h_dec + u_dec
# additive steps use c as the "workspace"
if self.step_type == 'add':
c = c + self.writer_mlp.apply(h_dec)
else:
c = self.writer_mlp.apply(c_dec)
# compute the NLL of the reconstruction as of this step
c_as_x = tensor.nnet.sigmoid(tanh_clip(c, clip_val=15.0))
nll = -1.0 * tensor.flatten(log_prob_bernoulli(x, c_as_x))
# compute KL(q || p) and KL(p || q) for this step
kl_q2p = tensor.sum(gaussian_kld(q_gen_mean, q_gen_logvar, \
p_gen_mean, p_gen_logvar), axis=1)
kl_p2q = tensor.sum(gaussian_kld(p_gen_mean, p_gen_logvar, \
q_gen_mean, q_gen_logvar), axis=1)
return c, h_enc, c_enc, h_dec, c_dec, nll, kl_q2p, kl_p2q
@recurrent(sequences=['u', 'u_dec'], contexts=[],
states=['c', 'h_dec', 'c_dec'],
outputs=['c', 'h_dec', 'c_dec'])
def decode(self, u, u_dec, c, h_dec, c_dec):
# sample z from p(z | h_dec) -- we used q(z | h_enc) during training
p_gen_mean, p_gen_logvar, p_z_gen = \
self.dec_mlp_out.apply(h_dec, u)
z_gen = p_z_gen
# update the decoder RNN state
i_dec = self.dec_mlp_in.apply(tensor.concatenate([z_gen], axis=1))
h_dec, c_dec = self.dec_rnn.apply(
states=h_dec, cells=c_dec,
inputs=i_dec, iterate=False)
# add noise to decoder state
h_dec = h_dec + u_dec
# additive steps use c as the "workspace"
if self.step_type == 'add':
c = c + self.writer_mlp.apply(h_dec)
else:
c = self.writer_mlp.apply(c_dec)
return c, h_dec, c_dec
#------------------------------------------------------------------------
@application(inputs=['x_in', 'x_out'],
outputs=['recons', 'nll', 'kl_q2p', 'kl_p2q'])
def reconstruct(self, x_in, x_out):
# get important size and shape information
batch_size = x_in.shape[0]
z_mix_dim = self.get_dim('z_mix')
z_gen_dim = self.get_dim('z_gen')
ce_dim = self.get_dim('c_enc')
cd_dim = self.get_dim('c_dec')
he_dim = self.get_dim('h_enc')
hd_dim = self.get_dim('h_dec')
# sample zero-mean, unit std. Gaussian noise for mixture init
u_mix = self.theano_rng.normal(
size=(batch_size, z_mix_dim),
avg=0., std=1.)
# transform ZMUV noise based on q(z_mix | x_in)
z_mix_mean, z_mix_logvar, z_mix = \
self.mix_enc_mlp.apply(x_in, u_mix)
# transform samples from q(z_mix | x_in) into initial generator state
mix_init = self.mix_dec_mlp.apply(z_mix)
cd0 = mix_init[:, :cd_dim]
hd0 = mix_init[:, cd_dim:(cd_dim+hd_dim)]
ce0 = mix_init[:, (cd_dim+hd_dim):(cd_dim+hd_dim+ce_dim)]
he0 = mix_init[:, (cd_dim+hd_dim+ce_dim):]
c0 = tensor.zeros_like(x_out) + self.c_0
# add noise to initial decoder state
hd0 = hd0 + (self.rnn_noise[0] * self.theano_rng.normal(
size=(hd0.shape[0], hd0.shape[1]),
avg=0., std=1.))
# add noise to initial encoder state
he0 = he0 + (self.rnn_noise[0] * self.theano_rng.normal(
size=(he0.shape[0], he0.shape[1]),
avg=0., std=1.))
# compute KL-divergence information for the mixture init step
kl_q2p_mix = tensor.sum(gaussian_kld(z_mix_mean, z_mix_logvar, \
self.zm_mean, self.zm_logvar), axis=1)
kl_p2q_mix = tensor.sum(gaussian_kld(self.zm_mean, self.zm_logvar, \
z_mix_mean, z_mix_logvar), axis=1)
kl_q2p_mix = kl_q2p_mix.reshape((1, batch_size))
kl_p2q_mix = kl_p2q_mix.reshape((1, batch_size))
# get zero-mean, unit-std. Gaussian noise for use in scan op
u_gen = self.theano_rng.normal(
size=(self.n_iter, batch_size, z_gen_dim),
avg=0., std=1.)
u_enc = self.rnn_noise[0] * self.theano_rng.normal(
size=(self.n_iter, batch_size, he_dim),
avg=0., std=1.)
u_dec = self.rnn_noise[0] * self.theano_rng.normal(
size=(self.n_iter, batch_size, hd_dim),
avg=0., std=1.)
# run the multi-stage guided generative process
c, _, _, _, _, step_nlls, kl_q2p_gen, kl_p2q_gen = \
self.apply(u=u_gen, u_enc=u_enc, u_dec=u_dec, \
c=c0, h_enc=he0, c_enc=ce0, \
h_dec=hd0, c_dec=cd0, x=x_out)
# grab the observations generated by the multi-stage process
recons = tensor.nnet.sigmoid(tanh_clip(c[-1,:,:], clip_val=15.0))
recons.name = "recons"
# get the NLL after the final update for each example
nll = step_nlls[-1]
nll.name = "nll"
# group up the klds from mixture init and multi-stage generation
kl_q2p = tensor.vertical_stack(kl_q2p_mix, kl_q2p_gen)
kl_q2p.name = "kl_q2p"
kl_p2q = tensor.vertical_stack(kl_p2q_mix, kl_p2q_gen)
kl_p2q.name = "kl_p2q"
return recons, nll, kl_q2p, kl_p2q
@application(inputs=['n_samples'], outputs=['x_samples','c_samples'])
def sample(self, n_samples):
"""Sample from model.
Returns
-------
samples : tensor3 (n_samples, n_iter, x_dim)
"""
z_mix_dim = self.get_dim('z_mix')
z_gen_dim = self.get_dim('z_gen')
cd_dim = self.get_dim('c_dec')
hd_dim = self.get_dim('h_dec')
ce_dim = self.get_dim('c_enc')
he_dim = self.get_dim('h_enc')
c_dim = self.get_dim('c')
# sample zero-mean, unit-std. Gaussian noise for the mixture init
u_mix = self.theano_rng.normal(
size=(n_samples, z_mix_dim),
avg=0., std=1.)
# transform noise based on learned mean and logvar
z_mix = self.zm_mean + (u_mix * tensor.exp(0.5 * self.zm_logvar))
# transform the sample from p(z_mix) into an initial generator state
mix_init = self.mix_dec_mlp.apply(z_mix)
cd0 = mix_init[:, :cd_dim]
hd0 = mix_init[:, cd_dim:(cd_dim+hd_dim)]
c0 = tensor.alloc(0.0, n_samples, c_dim) + self.c_0
# add noise to initial decoder state
hd0 = hd0 + (self.rnn_noise[0] * self.theano_rng.normal(
size=(hd0.shape[0], hd0.shape[1]),
avg=0., std=1.))
# sample from zero-mean unit-std. Gaussian for use in scan op
u_gen = self.theano_rng.normal(
size=(self.n_iter, n_samples, z_gen_dim),
avg=0., std=1.)
u_dec = self.rnn_noise[0] * self.theano_rng.normal(
size=(self.n_iter, n_samples, hd_dim),
avg=0., std=1.)
c_samples, _, _, = self.decode(u=u_gen, u_dec=u_dec, \
c=c0, h_dec=hd0, c_dec=cd0)
x_samples = tensor.nnet.sigmoid(tanh_clip(c_samples, clip_val=15.0))
return [x_samples, c_samples]
def build_model_funcs(self):
"""
Build the symbolic costs and theano functions relevant to this model.
"""
# some symbolic vars to represent various inputs/outputs
self.x_in_sym = tensor.matrix('x_in_sym')
self.x_out_sym = tensor.matrix('x_out_sym')
# collect reconstructions of x produced by the IMoOLDRAW model
_, nll, kl_q2p, kl_p2q = self.reconstruct(self.x_in_sym, self.x_out_sym)
# get the expected NLL part of the VFE bound
self.nll_term = nll.mean()
self.nll_term.name = "nll_term"
# get KL(q || p) and KL(p || q)
self.kld_q2p_term = kl_q2p.sum(axis=0).mean()
self.kld_q2p_term.name = "kld_q2p_term"
self.kld_p2q_term = kl_p2q.sum(axis=0).mean()
self.kld_p2q_term.name = "kld_p2q_term"
self.kld_q2p_step = kl_q2p.mean(axis=1)
# get the proper VFE bound on NLL
self.nll_bound = self.nll_term + self.kld_q2p_term
self.nll_bound.name = "nll_bound"
# grab handles for all the optimizable parameters in our cost
self.cg = ComputationGraph([self.nll_bound])
self.joint_params = VariableFilter(roles=[PARAMETER])(self.cg.variables)
# apply some l2 regularization to the model parameters
self.reg_term = (1e-5 * sum([tensor.sum(p**2.0) for p in self.joint_params]))
self.reg_term.name = "reg_term"
# compute the full cost w.r.t. which we will optimize
self.joint_cost = self.nll_term + (0.95 * self.kld_q2p_term) + \
(0.05 * self.kld_p2q_term) + self.reg_term
self.joint_cost.name = "joint_cost"
# Get the gradient of the joint cost for all optimizable parameters
print("Computing gradients of joint_cost...")
self.joint_grads = OrderedDict()
grad_list = tensor.grad(self.joint_cost, self.joint_params)
for i, p in enumerate(self.joint_params):
self.joint_grads[p] = grad_list[i]
# shared var learning rate for generator and inferencer
zero_ary = to_fX( numpy.zeros((1,)) )
self.lr = theano.shared(value=zero_ary, name='tbm_lr')
# shared var momentum parameters for generator and inferencer
self.mom_1 = theano.shared(value=zero_ary, name='tbm_mom_1')
self.mom_2 = theano.shared(value=zero_ary, name='tbm_mom_2')
# construct the updates for the generator and inferencer networks
self.joint_updates = get_adam_updates(params=self.joint_params, \
grads=self.joint_grads, alpha=self.lr, \
beta1=self.mom_1, beta2=self.mom_2, \
mom2_init=1e-4, smoothing=1e-6, max_grad_norm=10.0)
# collect the outputs to return from this function
outputs = [self.joint_cost, self.nll_bound, self.nll_term, \
self.kld_q2p_term, self.kld_p2q_term, self.reg_term, \
self.kld_q2p_step]
# compile the theano function
print("Compiling model training/update function...")
self.train_joint = theano.function(inputs=[self.x_in_sym, self.x_out_sym], \
outputs=outputs, updates=self.joint_updates)
print("Compiling NLL bound estimator function...")
self.compute_nll_bound = theano.function(inputs=[self.x_in_sym, self.x_out_sym], \
outputs=outputs)
print("Compiling model sampler...")
n_samples = tensor.iscalar("n_samples")
x_samples, c_samples = self.sample(n_samples)
self.do_sample = theano.function([n_samples], \
outputs=[x_samples, c_samples], \
allow_input_downcast=True)
return
def build_extra_funcs(self):
"""
Build functions for computing performance and other stuff.
"""
# get a list of "bricks" for the variational distributions
var_bricks = [self.mix_enc_mlp, self.enc_mlp_in, self.enc_rnn,
self.enc_mlp_out]
# grab handles for all the variational parameters in our cost
cg_vars = self.cg.variables # self.cg should already be built...
self.var_params = VariableFilter(roles=[PARAMETER], bricks=var_bricks)(cg_vars)
# get the gradient of the joint cost for all optimizable parameters
print("Computing gradients of joint_cost (for var params)...")
self.var_grads = OrderedDict()
grad_list = tensor.grad(self.joint_cost, self.var_params)
for i, p in enumerate(self.var_params):
self.var_grads[p] = grad_list[i]
# construct a function for training only the variational parameters
self.var_updates = get_adam_updates(params=self.var_params, \
grads=self.var_grads, alpha=self.lr, \
beta1=self.mom_1, beta2=self.mom_2, \
mom2_init=1e-4, smoothing=1e-6, max_grad_norm=10.0)
inputs = [self.x_in_sym, self.x_out_sym]
# collect the outputs to return from this function
outputs = [self.joint_cost, self.nll_bound, self.nll_term, \
self.kld_q2p_term, self.kld_p2q_term, self.reg_term, \
self.kld_q2p_step]
# compile the theano function
print("Compiling model training/update function (for var params)...")
self.train_var = theano.function(inputs=inputs, \
outputs=outputs, \
updates=self.var_updates)
return
def get_model_params(self, ary_type='numpy'):
"""
Get the optimizable parameters in this model. This returns a list
and, to reload this model's parameters, the list must stay in order.
This can provide shared variables or numpy arrays.
"""
if self.cg is None:
self.build_model_funcs()
joint_params = VariableFilter(roles=[PARAMETER])(self.cg.variables)
if ary_type == 'numpy':
for i, p in enumerate(joint_params):
joint_params[i] = p.get_value(borrow=False)
return joint_params
def set_model_params(self, numpy_param_list):
"""
Set the optimizable parameters in this model. This requires a list
and, to reload this model's parameters, the list must be in order.
"""
if self.cg is None:
self.build_model_funcs()
# grab handles for all the optimizable parameters in our cost
joint_params = VariableFilter(roles=[PARAMETER])(self.cg.variables)
for i, p in enumerate(joint_params):
joint_params[i].set_value(to_fX(numpy_param_list[i]))
return joint_params
def save_model_params(self, f_name=None):
"""
Save model parameters to a pickle file, in numpy form.
"""
numpy_params = self.get_model_params(ary_type='numpy')
f_handle = file(f_name, 'wb')
# dump the dict self.params, which just holds "simple" python values
cPickle.dump(numpy_params, f_handle, protocol=-1)
f_handle.close()
return
def load_model_params(self, f_name=None):
"""
Load model parameters from a pickle file, in numpy form.
"""
pickle_file = open(f_name)
numpy_params = cPickle.load(pickle_file)
self.set_model_params(numpy_params)
pickle_file.close()
return
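# Usage sketch (illustrative only): once an IMoOLDrawModels instance `model`
# has been assembled from its sub-bricks and initialized, a training step and
# sampling look roughly like the lines below; `x_batch` is a hypothetical
# (batch_size, x_dim) array of binarized inputs.
#
# model.build_model_funcs()
# model.lr.set_value(to_fX(numpy.asarray([0.0002])))
# model.mom_1.set_value(to_fX(numpy.asarray([0.9])))
# model.mom_2.set_value(to_fX(numpy.asarray([0.99])))
# costs = model.train_joint(x_batch, x_batch)     # joint_cost, nll_bound, ...
# x_samples, c_samples = model.do_sample(16)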
##########################################################
# Generalized DRAW model, with infinite mixtures and RL. #
# -- also modified to operate closed-loopishly #
##########################################################
class IMoCLDrawModels(BaseRecurrent, Initializable, Random):
def __init__(self, n_iter, step_type,
mix_enc_mlp, mix_dec_mlp, mix_var_mlp,
reader_mlp, writer_mlp,
enc_mlp_in, enc_rnn, enc_mlp_out,
dec_mlp_in, dec_rnn,
var_mlp_in, var_rnn, var_mlp_out,
**kwargs):
super(IMoCLDrawModels, self).__init__(**kwargs)
if not ((step_type == 'add') or (step_type == 'jump')):
raise ValueError('step_type must be jump or add')
# record the desired step count
self.n_iter = n_iter
self.step_type = step_type
# grab handles for mixture stuff
self.mix_enc_mlp = mix_enc_mlp
self.mix_dec_mlp = mix_dec_mlp
self.mix_var_mlp = mix_var_mlp
# grab handles for shared read/write models
self.reader_mlp = reader_mlp
self.writer_mlp = writer_mlp
# grab handles for sequential read/write models
self.enc_mlp_in = enc_mlp_in
self.enc_rnn = enc_rnn
self.enc_mlp_out = enc_mlp_out
self.dec_mlp_in = dec_mlp_in
self.dec_rnn = dec_rnn
self.var_mlp_in = var_mlp_in
self.var_rnn = var_rnn
self.var_mlp_out = var_mlp_out
# setup a "null pointer" that will point to the computation graph
# for this model, which can be built by self.build_model_funcs()...
self.cg = None
self.params = []
# record the sub-models that underlie this model
self.children = [self.mix_enc_mlp, self.mix_dec_mlp, self.mix_var_mlp,
self.reader_mlp, self.writer_mlp,
self.enc_mlp_in, self.enc_rnn, self.enc_mlp_out,
self.dec_mlp_in, self.dec_rnn,
self.var_mlp_in, self.var_rnn, self.var_mlp_out]
return
def _allocate(self):
# allocate shared arrays to hold parameters owned by this model
c_dim = self.get_dim('c')
# self.c_0 provides the initial state of the canvas
self.c_0 = shared_floatx_nans((c_dim,), name='c_0')
add_role(self.c_0, PARAMETER)
# add the theano shared variables to our parameter lists
self.params.extend([ self.c_0 ])
return
def _initialize(self):
# initialize all parameters to zeros...
for p in self.params:
p_nan = p.get_value(borrow=False)
p_zeros = numpy.zeros(p_nan.shape)
p.set_value(p_zeros.astype(theano.config.floatX))
return
def get_dim(self, name):
if name == 'c':
return self.reader_mlp.get_dim('x_dim')
elif name == 'z_mix':
return self.mix_enc_mlp.get_dim('output')
elif name == 'z_gen':
return self.enc_mlp_out.get_dim('output')
elif name == 'h_enc':
return self.enc_rnn.get_dim('states')
elif name == 'c_enc':
return self.enc_rnn.get_dim('cells')
elif name == 'h_dec':
return self.dec_rnn.get_dim('states')
elif name == 'c_dec':
return self.dec_rnn.get_dim('cells')
elif name == 'h_var':
return self.var_rnn.get_dim('states')
elif name == 'c_var':
return self.var_rnn.get_dim('cells')
elif name in ['nll', 'kl_q2p', 'kl_p2q']:
return 0
elif name == 'center_y':
return 0
elif name == 'center_x':
return 0
elif name == 'delta':
return 0
else:
return super(IMoCLDrawModels, self).get_dim(name)
return
#------------------------------------------------------------------------
@recurrent(sequences=['u'], contexts=['x', 'm'],
states=['c', 'h_enc', 'c_enc', 'h_dec', 'c_dec', 'h_var', 'c_var', 'nll', 'kl_q2p', 'kl_p2q'],
outputs=['c', 'h_enc', 'c_enc', 'h_dec', 'c_dec', 'h_var', 'c_var', 'nll', 'kl_q2p', 'kl_p2q'])
def apply(self, u, c, h_enc, c_enc, h_dec, c_dec, h_var, c_var, nll, kl_q2p, kl_p2q, x, m):
if self.step_type == 'add':
# additive steps use c as a "direct workspace", which means it's
# already directly comparable to x.
c_as_x = tensor.nnet.sigmoid(c)
else:
# non-additive steps use c_dec as a "latent workspace", which means
# it needs to be transformed before being comparable to x.
c_as_x = tensor.nnet.sigmoid(self.writer_mlp.apply(h_dec))
# apply a mask for mixing observed and imputed parts of x. c_as_x
# gives the current reconstruction of x, for all dimensions. m will
# use 1 to indicate known values, and 0 to indicate values to impute.
x_m = (m * x) + ((1.0 - m) * c_as_x) # when m==0 everywhere, this will
# contain no information about x.
# get the feedback available for use by the guide and primary policy
x_hat_var = x - c_as_x # provides LL grad w.r.t. c_as_x everywhere
x_hat_enc = x_m - c_as_x # provides LL grad w.r.t. c_as_x where m==1
# update the guide RNN state
r_var = self.reader_mlp.apply(x, x_hat_var, h_dec)
i_var = self.var_mlp_in.apply(tensor.concatenate([r_var, h_dec], axis=1))
h_var, c_var = self.var_rnn.apply(states=h_var, cells=c_var,
inputs=i_var, iterate=False)
# update the encoder RNN state
r_enc = self.reader_mlp.apply(x_m, x_hat_enc, h_dec)
i_enc = self.enc_mlp_in.apply(tensor.concatenate([r_enc, h_dec], axis=1))
h_enc, c_enc = self.enc_rnn.apply(states=h_enc, cells=c_enc,
inputs=i_enc, iterate=False)
# estimate guide conditional over z given h_var
q_zg_mean, q_zg_logvar, q_zg = \
self.var_mlp_out.apply(h_var, u)
# estimate primary conditional over z given h_enc
p_zg_mean, p_zg_logvar, p_zg = \
self.enc_mlp_out.apply(h_enc, u)
# update the decoder RNN state, using guidance from the guide
i_dec = self.dec_mlp_in.apply(tensor.concatenate([q_zg], axis=1))
#i_dec = self.dec_mlp_in.apply(tensor.concatenate([q_zg, h_enc], axis=1))
h_dec, c_dec = self.dec_rnn.apply(states=h_dec, cells=c_dec, \
inputs=i_dec, iterate=False)
# update the "workspace" (stored in c)
if self.step_type == 'add':
c = c + self.writer_mlp.apply(h_dec)
else:
c = self.writer_mlp.apply(h_dec)
# compute the NLL of the reconstruction as of this step
c_as_x = tensor.nnet.sigmoid(c)
m_inv = 1.0 - m
nll = -1.0 * tensor.flatten(log_prob_bernoulli(x, c_as_x, mask=m_inv))
# compute KL(q || p) and KL(p || q) for this step
kl_q2p = tensor.sum(gaussian_kld(q_zg_mean, q_zg_logvar, \
p_zg_mean, p_zg_logvar), axis=1)
kl_p2q = tensor.sum(gaussian_kld(p_zg_mean, p_zg_logvar, \
q_zg_mean, q_zg_logvar), axis=1)
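# (gaussian_kld presumably evaluates the closed-form KL between diagonal
# Gaussians given as (mean, log-variance) pairs, i.e. per dimension
# 0.5*(logvar2 - logvar1 + (exp(logvar1) + (mu1 - mu2)**2)/exp(logvar2) - 1),
# which the outer tensor.sum then sums over the latent dimensions.)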
return c, h_enc, c_enc, h_dec, c_dec, h_var, c_var, nll, kl_q2p, kl_p2q
@recurrent(sequences=['u'], contexts=['x', 'm'],
states=['c', 'h_enc', 'c_enc', 'h_dec', 'c_dec'],
outputs=['c', 'h_enc', 'c_enc', 'h_dec', 'c_dec'])
def decode(self, u, c, h_enc, c_enc, h_dec, c_dec, x, m):
# get current state of the reconstruction/imputation
if self.step_type == 'add':
c_as_x = tensor.nnet.sigmoid(c)
else:
c_as_x = tensor.nnet.sigmoid(self.writer_mlp.apply(c_dec))
x_m = (m * x) + ((1.0 - m) * c_as_x) # mask the known/imputed vals
x_hat_enc = x_m - c_as_x # get feedback used by encoder
# update the encoder RNN state
r_enc = self.reader_mlp.apply(x_m, x_hat_enc, h_dec)
i_enc = self.enc_mlp_in.apply(tensor.concatenate([r_enc, h_dec], axis=1))
h_enc, c_enc = self.enc_rnn.apply(states=h_enc, cells=c_enc,
inputs=i_enc, iterate=False)
# estimate primary conditional over z given h_enc
p_zg_mean, p_zg_logvar, p_zg = \
self.enc_mlp_out.apply(h_enc, u)
# update the decoder RNN state, using guidance from the guide
i_dec = self.dec_mlp_in.apply(tensor.concatenate([p_zg], axis=1))
#i_dec = self.dec_mlp_in.apply(tensor.concatenate([p_zg, h_enc], axis=1))
h_dec, c_dec = self.dec_rnn.apply(states=h_dec, cells=c_dec, \
inputs=i_dec, iterate=False)
# update the "workspace" (stored in c)
if self.step_type == 'add':
c = c + self.writer_mlp.apply(h_dec)
else:
c = self.writer_mlp.apply(c_dec)
return c, h_enc, c_enc, h_dec, c_dec
#------------------------------------------------------------------------
@application(inputs=['x', 'm'],
outputs=['recons', 'nll', 'kl_q2p', 'kl_p2q'])
def reconstruct(self, x, m):
# get important size and shape information
batch_size = x.shape[0]
z_mix_dim = self.get_dim('z_mix')
z_gen_dim = self.get_dim('z_gen')
ce_dim = self.get_dim('c_enc')
cd_dim = self.get_dim('c_dec')
cv_dim = self.get_dim('c_var')
he_dim = self.get_dim('h_enc')
hd_dim = self.get_dim('h_dec')
hv_dim = self.get_dim('h_var')
# get initial state of the reconstruction/imputation
c0 = tensor.zeros_like(x) + self.c_0
c_as_x = tensor.nnet.sigmoid(c0)
x_m = (m * x) + ((1.0 - m) * c_as_x)
# sample zero-mean, unit std. Gaussian noise for mixture init
u_mix = self.theano_rng.normal(
size=(batch_size, z_mix_dim),
avg=0., std=1.)
# transform ZMUV noise based on q(z_mix | x)
q_zm_mean, q_zm_logvar, q_zm = \
self.mix_var_mlp.apply(x, u_mix) # use full x info
p_zm_mean, p_zm_logvar, p_zm = \
self.mix_enc_mlp.apply(x_m, u_mix) # use masked x info
# transform samples from q(z_mix | x) into initial generator state
mix_init = self.mix_dec_mlp.apply(q_zm)
cd0 = mix_init[:, :cd_dim]
hd0 = mix_init[:, cd_dim:(cd_dim+hd_dim)]
ce0 = mix_init[:, (cd_dim+hd_dim):(cd_dim+hd_dim+ce_dim)]
he0 = mix_init[:, (cd_dim+hd_dim+ce_dim):(cd_dim+hd_dim+ce_dim+he_dim)]
cv0 = mix_init[:, (cd_dim+hd_dim+ce_dim+he_dim):(cd_dim+hd_dim+ce_dim+he_dim+cv_dim)]
hv0 = mix_init[:, (cd_dim+hd_dim+ce_dim+he_dim+cv_dim):]
# compute KL-divergence information for the mixture init step
kl_q2p_mix = tensor.sum(gaussian_kld(q_zm_mean, q_zm_logvar, \
p_zm_mean, p_zm_logvar), axis=1)
kl_p2q_mix = tensor.sum(gaussian_kld(p_zm_mean, p_zm_logvar, \
q_zm_mean, q_zm_logvar), axis=1)
kl_q2p_mix = kl_q2p_mix.reshape((1, batch_size))
kl_p2q_mix = kl_p2q_mix.reshape((1, batch_size))
# get zero-mean, unit-std. Gaussian noise for use in scan op
u_gen = self.theano_rng.normal(
size=(self.n_iter, batch_size, z_gen_dim),
avg=0., std=1.)
# run the multi-stage guided generative process
c, _, _, _, _, _, _, step_nlls, kl_q2p_gen, kl_p2q_gen = \
self.apply(u=u_gen, c=c0, \
h_enc=he0, c_enc=ce0, \
h_dec=hd0, c_dec=cd0, \
h_var=hv0, c_var=cv0, \
x=x, m=m)
# grab the observations generated by the multi-stage process
c_as_x = tensor.nnet.sigmoid(c[-1,:,:])
recons = (m * x) + ((1.0 - m) * c_as_x)
recons.name = "recons"
# get the NLL after the final update for each example
nll = step_nlls[-1]
nll.name = "nll"
# group up the klds from mixture init and multi-stage generation
kl_q2p = tensor.vertical_stack(kl_q2p_mix, kl_q2p_gen)
kl_q2p.name = "kl_q2p"
kl_p2q = tensor.vertical_stack(kl_p2q_mix, kl_p2q_gen)
kl_p2q.name = "kl_p2q"
return recons, nll, kl_q2p, kl_p2q
@application(inputs=['x', 'm'], outputs=['recons','c_samples'])
def sample(self, x, m):
"""
Sample from model. Sampling can be performed either with or
without partial control (i.e. conditioning for imputation).
Returns
-------
recons : tensor3 (n_iter, batch_size, x_dim)
imputations with known values clamped to x
c_samples : tensor3 (n_iter, batch_size, x_dim)
raw canvas states from the sequential generative process
"""
# get important size and shape information
batch_size = x.shape[0]
z_mix_dim = self.get_dim('z_mix')
z_gen_dim = self.get_dim('z_gen')
ce_dim = self.get_dim('c_enc')
cd_dim = self.get_dim('c_dec')
cv_dim = self.get_dim('c_var')
he_dim = self.get_dim('h_enc')
hd_dim = self.get_dim('h_dec')
hv_dim = self.get_dim('h_var')
# get initial state of the reconstruction/imputation
c0 = tensor.zeros_like(x) + self.c_0
c_as_x = tensor.nnet.sigmoid(c0)
x_m = (m * x) + ((1.0 - m) * c_as_x)
# sample zero-mean, unit std. Gaussian noise for mixture init
u_mix = self.theano_rng.normal(
size=(batch_size, z_mix_dim),
avg=0., std=1.)
# transform ZMUV noise based on q(z_mix | x)
p_zm_mean, p_zm_logvar, p_zm = \
self.mix_enc_mlp.apply(x_m, u_mix) # use masked x info
# transform samples from q(z_mix | x) into initial generator state
mix_init = self.mix_dec_mlp.apply(p_zm)
cd0 = mix_init[:, :cd_dim]
hd0 = mix_init[:, cd_dim:(cd_dim+hd_dim)]
ce0 = mix_init[:, (cd_dim+hd_dim):(cd_dim+hd_dim+ce_dim)]
he0 = mix_init[:, (cd_dim+hd_dim+ce_dim):(cd_dim+hd_dim+ce_dim+he_dim)]
cv0 = mix_init[:, (cd_dim+hd_dim+ce_dim+he_dim):(cd_dim+hd_dim+ce_dim+he_dim+cv_dim)]
hv0 = mix_init[:, (cd_dim+hd_dim+ce_dim+he_dim+cv_dim):]
# get zero-mean, unit-std. Gaussian noise for use in scan op
u_gen = self.theano_rng.normal(
size=(self.n_iter, batch_size, z_gen_dim),
avg=0., std=1.)
# run the sequential generative policy from given initial states
c_samples, _, _, _, _ = self.decode(u=u_gen, c=c0, h_enc=he0, c_enc=ce0, \
h_dec=hd0, c_dec=cd0, x=x, m=m)
# convert output into the desired form, and apply masking
c_as_x = tensor.nnet.sigmoid(c_samples)
recons = (m * x) + ((1.0 - m) * c_as_x)
recons.name = "recons"
return [recons, c_samples]
def build_model_funcs(self):
"""
Build the symbolic costs and theano functions relevant to this model.
"""
# some symbolic vars to represent various inputs/outputs
self.x_sym = tensor.matrix('x_sym')
self.m_sym = tensor.matrix('m_sym')
# collect reconstructions of x produced by the IMoCLDRAW model
_, nll, kl_q2p, kl_p2q = self.reconstruct(self.x_sym, self.m_sym)
# get the expected NLL part of the VFE bound
self.nll_term = nll.mean()
self.nll_term.name = "nll_term"
# get KL(q || p) and KL(p || q)
self.kld_q2p_term = kl_q2p.sum(axis=0).mean()
self.kld_q2p_term.name = "kld_q2p_term"
self.kld_p2q_term = kl_p2q.sum(axis=0).mean()
self.kld_p2q_term.name = "kld_p2q_term"
# get the proper VFE bound on NLL
self.nll_bound = self.nll_term + self.kld_q2p_term
self.nll_bound.name = "nll_bound"
# grab handles for all the optimizable parameters in our cost
self.cg = ComputationGraph([self.nll_bound])
self.joint_params = VariableFilter(roles=[PARAMETER])(self.cg.variables)
# apply some l2 regularization to the model parameters
self.reg_term = (1e-5 * sum([tensor.sum(p**2.0) for p in self.joint_params]))
self.reg_term.name = "reg_term"
# compute the full cost w.r.t. which we will optimize
self.joint_cost = self.nll_term + (0.9 * self.kld_q2p_term) + \
(0.1 * self.kld_p2q_term) + self.reg_term
self.joint_cost.name = "joint_cost"
# Get the gradient of the joint cost for all optimizable parameters
print("Computing gradients of joint_cost...")
self.joint_grads = OrderedDict()
grad_list = tensor.grad(self.joint_cost, self.joint_params)
for i, p in enumerate(self.joint_params):
self.joint_grads[p] = grad_list[i]
# shared var learning rate for generator and inferencer
zero_ary = to_fX( numpy.zeros((1,)) )
self.lr = theano.shared(value=zero_ary, name='tbm_lr')
# shared var momentum parameters for generator and inferencer
self.mom_1 = theano.shared(value=zero_ary, name='tbm_mom_1')
self.mom_2 = theano.shared(value=zero_ary, name='tbm_mom_2')
# construct the updates for the generator and inferencer networks
self.joint_updates = get_adam_updates(params=self.joint_params, \
grads=self.joint_grads, alpha=self.lr, \
beta1=self.mom_1, beta2=self.mom_2, \
mom2_init=1e-4, smoothing=1e-6, max_grad_norm=10.0)
# collect the outputs to return from this function
outputs = [self.joint_cost, self.nll_bound, self.nll_term, \
self.kld_q2p_term, self.kld_p2q_term, self.reg_term]
# compile the theano function
print("Compiling model training/update function...")
self.train_joint = theano.function(inputs=[self.x_sym, self.m_sym], \
outputs=outputs, updates=self.joint_updates)
print("Compiling NLL bound estimator function...")
self.compute_nll_bound = theano.function(inputs=[self.x_sym, self.m_sym], \
outputs=outputs)
print("Compiling model sampler...")
x_samples, c_samples = self.sample(self.x_sym, self.m_sym)
self.do_sample = theano.function([self.x_sym, self.m_sym], \
outputs=[x_samples, c_samples], \
allow_input_downcast=True)
return
def build_extra_funcs(self):
"""
Build functions for computing performance and other stuff.
"""
# get a list of "bricks" for the variational distributions
var_bricks = [self.mix_var_mlp, self.var_mlp_in, self.var_rnn,
self.var_mlp_out]
# grab handles for all the variational parameters in our cost
cg_vars = self.cg.variables # self.cg should already be built...
self.var_params = VariableFilter(roles=[PARAMETER], bricks=var_bricks)(cg_vars)
# get the gradient of the joint cost for variational parameters
print("Computing gradients of joint_cost (for var params)...")
self.var_grads = OrderedDict()
grad_list = tensor.grad(self.joint_cost, self.var_params)
for i, p in enumerate(self.var_params):
self.var_grads[p] = grad_list[i]
# construct a function for training only the variational parameters
self.var_updates = get_adam_updates(params=self.var_params, \
grads=self.var_grads, alpha=self.lr, \
beta1=self.mom_1, beta2=self.mom_2, \
mom2_init=1e-4, smoothing=1e-6, max_grad_norm=10.0)
inputs = [self.x_sym, self.m_sym]
# collect the outputs to return from this function
outputs = [self.joint_cost, self.nll_bound, self.nll_term, \
self.kld_q2p_term, self.kld_p2q_term, self.reg_term]
# compile the theano function
print("Compiling model training/update function (for var params)...")
self.train_var = theano.function(inputs=inputs, outputs=outputs, \
updates=self.var_updates)
return
def get_model_params(self, ary_type='numpy'):
"""
Get the optimizable parameters in this model. This returns a list
and, to reload this model's parameters, the list must stay in order.
This can provide shared variables or numpy arrays.
"""
if self.cg is None:
self.build_model_funcs()
joint_params = VariableFilter(roles=[PARAMETER])(self.cg.variables)
if ary_type == 'numpy':
for i, p in enumerate(joint_params):
joint_params[i] = p.get_value(borrow=False)
return joint_params
def set_model_params(self, numpy_param_list):
"""
Set the optimizable parameters in this model. This requires a list
and, to reload this model's parameters, the list must be in order.
"""
if self.cg is None:
self.build_model_funcs()
# grab handles for all the optimizable parameters in our cost
joint_params = VariableFilter(roles=[PARAMETER])(self.cg.variables)
for i, p in enumerate(joint_params):
joint_params[i].set_value(to_fX(numpy_param_list[i]))
return joint_params
def save_model_params(self, f_name=None):
"""
Save model parameters to a pickle file, in numpy form.
"""
numpy_params = self.get_model_params(ary_type='numpy')
f_handle = open(f_name, 'wb')
# dump the list of numpy parameter arrays collected above
cPickle.dump(numpy_params, f_handle, protocol=-1)
f_handle.close()
return
def load_model_params(self, f_name=None):
"""
Load model parameters from a pickle file, in numpy form.
"""
pickle_file = open(f_name, 'rb')
numpy_params = cPickle.load(pickle_file)
self.set_model_params(numpy_params)
pickle_file.close()
return
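#------------------------------------------------------------------------
# Hedged usage sketch, assuming `model` is an already-constructed
# IMoCLDrawModels instance; `x_batch`, `m_batch` and `num_updates` are
# hypothetical names for an input matrix, its known-value mask (both of
# shape (batch_size, x_dim)), and an update count. A minimal training and
# checkpointing loop based on the methods above might look like:
#
#   model.initialize()                  # allocate and zero the brick params
#   model.build_model_funcs()           # compile train_joint / compute_nll_bound
#   model.lr.set_value(to_fX(numpy.zeros((1,)) + 2e-4))
#   model.mom_1.set_value(to_fX(numpy.zeros((1,)) + 0.9))
#   model.mom_2.set_value(to_fX(numpy.zeros((1,)) + 0.99))
#   for i in range(num_updates):
#       joint_cost, nll_bound, nll, kl_q2p, kl_p2q, reg = \
#           model.train_joint(x_batch, m_batch)
#   model.save_model_params("imocl_params.pkl")
#   # later, model.load_model_params("imocl_params.pkl") restores the same
#   # parameters via set_model_params(), which relies on a fixed ordering.
#------------------------------------------------------------------------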
VISUAL_BREAK_STR = """
======================================================================
======================================================================
======================================================================
======================================================================
======================================================================
======================================================================
======================================================================
======================================================================
"""
#######################################################
# Deep DRAW model, adds controller like an RL policy. #
# -- this only works almost closed-loopishly #
#######################################################
class RLDrawModel(BaseRecurrent, Initializable, Random):
def __init__(self, n_iter, step_type, use_pol,
reader_mlp, writer_mlp,
pol_mlp_in, pol_rnn, pol_mlp_out,
var_mlp_in, var_rnn, var_mlp_out,
dec_mlp_in, dec_rnn, dec_mlp_out,
ent_mlp_in, ent_rnn, ent_mlp_out,
**kwargs):
super(RLDrawModel, self).__init__(**kwargs)
if not ((step_type == 'add') or (step_type == 'jump')):
raise ValueError('step_type must be jump or add')
# record the basic model format params
self.n_iter = n_iter
self.step_type = step_type
self.use_pol = use_pol
# grab handles for submodels
self.reader_mlp = reader_mlp
self.writer_mlp = writer_mlp
self.pol_mlp_in = pol_mlp_in
self.pol_rnn = pol_rnn
self.pol_mlp_out = pol_mlp_out
self.var_mlp_in = var_mlp_in
self.var_rnn = var_rnn
self.var_mlp_out = var_mlp_out
self.dec_mlp_in = dec_mlp_in
self.dec_rnn = dec_rnn
self.dec_mlp_out = dec_mlp_out
self.ent_mlp_in = ent_mlp_in
self.ent_rnn = ent_rnn
self.ent_mlp_out = ent_mlp_out
# create a shared variable switch for controlling sampling
ones_ary = numpy.ones((1,)).astype(theano.config.floatX)
self.train_switch = theano.shared(value=ones_ary, name='train_switch')
# shared var learning rate for generator and inferencer
zero_ary = to_fX( numpy.zeros((1,)) )
self.lr = theano.shared(value=zero_ary, name='rld_lr')
# shared var momentum parameters for generator and inferencer
self.mom_1 = theano.shared(value=zero_ary, name='rld_mom_1')
self.mom_2 = theano.shared(value=zero_ary, name='rld_mom_2')
# standard deviation for noise on gradients
zero_ary = to_fX(numpy.zeros((1,)))
self.grad_noise = theano.shared(value=zero_ary, name='grad_noise')
# create shared variables for controlling KL and entropy cost terms
self.lam_kld_q2p = theano.shared(value=ones_ary, name='lam_kld_q2p')
self.lam_kld_p2q = theano.shared(value=ones_ary, name='lam_kld_p2q')
self.lam_neg_ent = theano.shared(value=ones_ary, name='lam_neg_ent')
# set weights for KL and entropy terms in the joint cost
self.set_lam_kld(lam_kld_q2p=1.0, lam_kld_p2q=0.0, lam_neg_ent=0.1)
# silly switches for "split" runs
self.use_q = True
self.use_p = False
# list for holding references to model params
self.params = []
# record the sub-models that underlie this model
self.children = [self.reader_mlp, self.writer_mlp,
self.pol_mlp_in, self.pol_rnn, self.pol_mlp_out,
self.var_mlp_in, self.var_rnn, self.var_mlp_out,
self.dec_mlp_in, self.dec_rnn, self.dec_mlp_out,
self.ent_mlp_in, self.ent_rnn, self.ent_mlp_out]
return
def _allocate(self):
"""
Allocate shared parameters used by this model.
"""
# get size information for the desired parameters
c_dim = self.get_dim('c')
cp_dim = self.get_dim('c_pol')
hp_dim = self.get_dim('h_pol')
cv_dim = self.get_dim('c_var')
hv_dim = self.get_dim('h_var')
cd_dim = self.get_dim('c_dec')
hd_dim = self.get_dim('h_dec')
ce_dim = self.get_dim('c_ent')
he_dim = self.get_dim('h_ent')
# self.c_0 provides initial state of the next column prediction
self.c_0 = shared_floatx_nans((1,c_dim), name='c_0')
add_role(self.c_0, PARAMETER)
# self.cp_0/self.hp_0 provides initial state of the primary policy
self.cp_0 = shared_floatx_nans((1,cp_dim), name='cp_0')
add_role(self.cp_0, PARAMETER)
self.hp_0 = shared_floatx_nans((1,hp_dim), name='hp_0')
add_role(self.hp_0, PARAMETER)
# self.cv_0/self.hv_0 provides initial state of the guide policy
self.cv_0 = shared_floatx_nans((1,cv_dim), name='cv_0')
add_role(self.cv_0, PARAMETER)
self.hv_0 = shared_floatx_nans((1,hv_dim), name='hv_0')
add_role(self.hv_0, PARAMETER)
# self.cd_0/self.hd_0 provides initial state of the shared dynamics
self.cd_0 = shared_floatx_nans((1,cd_dim), name='cd_0')
add_role(self.cd_0, PARAMETER)
self.hd_0 = shared_floatx_nans((1,hd_dim), name='hd_0')
add_role(self.hd_0, PARAMETER)
# self.ce_0/self.he_0 provides initial state of the entropy helper
self.ce_0 = shared_floatx_nans((1,ce_dim), name='ce_0')
add_role(self.ce_0, PARAMETER)
self.he_0 = shared_floatx_nans((1,he_dim), name='he_0')
add_role(self.he_0, PARAMETER)
# add the theano shared variables to our parameter lists
self.params.extend([ self.c_0,
self.cp_0, self.cv_0, self.cd_0, self.ce_0,
self.hp_0, self.hv_0, self.hd_0, self.he_0 ])
return
def _initialize(self):
# initialize all parameters to zeros...
for p in self.params:
p_nan = p.get_value(borrow=False)
p_zeros = numpy.zeros(p_nan.shape)
p.set_value(p_zeros.astype(theano.config.floatX))
return
def get_dim(self, name):
if name == 'c':
return self.reader_mlp.get_dim('x_dim')
elif name == 'h_pol':
return self.pol_rnn.get_dim('states')
elif name == 'c_pol':
return self.pol_rnn.get_dim('cells')
elif name == 'h_var':
return self.var_rnn.get_dim('states')
elif name == 'c_var':
return self.var_rnn.get_dim('cells')
elif name == 'h_dec':
return self.dec_rnn.get_dim('states')
elif name == 'c_dec':
return self.dec_rnn.get_dim('cells')
elif name == 'h_ent':
return self.ent_rnn.get_dim('states')
elif name == 'c_ent':
return self.ent_rnn.get_dim('cells')
elif name == 'z':
return self.var_mlp_out.get_dim('output')
elif name in ['nll', 'kl_q2p', 'kl_p2q']:
return 0
else:
return super(RLDrawModel, self).get_dim(name)
def set_sgd_params(self, lr=0.01, mom_1=0.9, mom_2=0.999):
"""
Set learning rate and momentum parameter for all updates.
"""
zero_ary = numpy.zeros((1,))
# set learning rate
new_lr = zero_ary + lr
self.lr.set_value(to_fX(new_lr))
# set momentums (use first and second order "momentum")
new_mom_1 = zero_ary + mom_1
self.mom_1.set_value(to_fX(new_mom_1))
new_mom_2 = zero_ary + mom_2
self.mom_2.set_value(to_fX(new_mom_2))
return
def set_lam_kld(self, lam_kld_q2p=1.0, lam_kld_p2q=0.0, lam_neg_ent=0.1):
"""
Set the relative weight of various terms in the joint cost.
"""
zero_ary = numpy.zeros((1,))
new_lam = zero_ary + lam_kld_q2p
self.lam_kld_q2p.set_value(to_fX(new_lam))
new_lam = zero_ary + lam_kld_p2q
self.lam_kld_p2q.set_value(to_fX(new_lam))
new_lam = zero_ary + lam_neg_ent
self.lam_neg_ent.set_value(to_fX(new_lam))
return
def set_grad_noise(self, grad_noise=0.0):
"""
Set the standard deviation of "gradient noise".
"""
zero_ary = numpy.zeros((1,))
new_val = zero_ary + grad_noise
self.grad_noise.set_value(to_fX(new_val))
return
#------------------------------------------------------------------------
@recurrent(sequences=['u'], contexts=['x'],
states=['c', 'h_pol', 'c_pol', 'h_var', 'c_var', 'h_dec', 'c_dec', 'nll', 'kl_q2p', 'kl_p2q'],
outputs=['c', 'h_pol', 'c_pol', 'h_var', 'c_var', 'h_dec', 'c_dec', 'nll', 'kl_q2p', 'kl_p2q'])
def apply(self, u, c, h_pol, c_pol, h_var, c_var, h_dec, c_dec, nll, kl_q2p, kl_p2q, x):
# get current state of the x under construction
if self.step_type == 'add':
c = c
else:
c = self.writer_mlp.apply(h_dec)
c_as_x = tensor.nnet.sigmoid(tanh_clip(c, clip_val=15.0))
# update the primary policy state
pol_inp = tensor.concatenate([h_dec], axis=1)
i_pol = self.pol_mlp_in.apply(pol_inp)
h_pol, c_pol = self.pol_rnn.apply(states=h_pol, cells=c_pol,
inputs=i_pol, iterate=False)
# update the guide policy state
var_inp = tensor.concatenate([x, h_dec], axis=1)
i_var = self.var_mlp_in.apply(var_inp)
h_var, c_var = self.var_rnn.apply(states=h_var, cells=c_var,
inputs=i_var, iterate=False)
# estimate primary policy's conditional over z
p_z_mean, p_z_logvar, p_z = self.pol_mlp_out.apply(h_pol, u)
if not self.use_pol:
# use a "null" policy, i.e. ZMUV Gaussian
p_z_mean = 0.0 * p_z_mean
p_z_logvar = 0.0 * p_z_logvar
p_z = u
# estimate guide policy's conditional over z
q_z_mean, q_z_logvar, q_z = self.var_mlp_out.apply(h_var, u)
# mix samples from p/q based on value of self.train_switch
z = (self.train_switch[0] * q_z) + \
((1.0 - self.train_switch[0]) * p_z)
# update the shared dynamics' state
dec_inp = tensor.concatenate([z], axis=1)
i_dec = self.dec_mlp_in.apply(dec_inp)
h_dec, c_dec = self.dec_rnn.apply(states=h_dec, cells=c_dec, \
inputs=i_dec, iterate=False)
# get current state of the x under construction
if self.step_type == 'add':
c = c + self.writer_mlp.apply(h_dec)
else:
c = self.writer_mlp.apply(h_dec)
# compute the NLL of the reconstruction as of this step
c_as_x = tensor.nnet.sigmoid(tanh_clip(c, clip_val=15.0))
nll = -1.0 * tensor.flatten(log_prob_bernoulli(x, c_as_x))
# compute KL(q || p) and KL(p || q) for this step
kl_q2p = tensor.sum(gaussian_kld(q_z_mean, q_z_logvar, \
p_z_mean, p_z_logvar), axis=1)
kl_p2q = tensor.sum(gaussian_kld(p_z_mean, p_z_logvar, \
q_z_mean, q_z_logvar), axis=1)
return c, h_pol, c_pol, h_var, c_var, h_dec, c_dec, nll, kl_q2p, kl_p2q
############################################
# FUNCS FOR SAMPLING SPLIT RUNS OF P AND Q #
############################################
@recurrent(sequences=['u'], contexts=['x'],
states=['c', 'h_pol', 'c_pol', 'h_var', 'c_var', 'h_dec', 'c_dec'],
outputs=['c', 'h_pol', 'c_pol', 'h_var', 'c_var', 'h_dec', 'c_dec', 'z', 'log_prob_z'])
def run_before(self, u, c, h_pol, c_pol, h_var, c_var, h_dec, c_dec, x):
if self.use_q:
# update the guide policy state
var_inp = tensor.concatenate([x, h_dec], axis=1)
i_var = self.var_mlp_in.apply(var_inp)
h_var, c_var = self.var_rnn.apply(states=h_var, cells=c_var,
inputs=i_var, iterate=False)
# estimate guide policy's conditional over z
z_mean, z_logvar, z = self.var_mlp_out.apply(h_var, u)
elif self.use_p:
# update the primary policy state
pol_inp = tensor.concatenate([h_dec], axis=1)
i_pol = self.pol_mlp_in.apply(pol_inp)
h_pol, c_pol = self.pol_rnn.apply(states=h_pol, cells=c_pol,
inputs=i_pol, iterate=False)
# estimate primary policy's conditional over z
z_mean, z_logvar, z = self.pol_mlp_out.apply(h_pol, u)
if not self.use_pol:
# use a "null" policy, i.e. ZMUV Gaussian
z_mean = 0.0 * z_mean
z_logvar = 0.0 * z_logvar
z = u
else:
assert False, "Run flags not set properly!"
# compute log(prob(z | h))
log_prob_z = tensor.flatten(log_prob_gaussian2(z, z_mean, z_logvar))
# update the shared dynamics' state
dec_inp = tensor.concatenate([z], axis=1)
i_dec = self.dec_mlp_in.apply(dec_inp)
h_dec, c_dec = self.dec_rnn.apply(states=h_dec, cells=c_dec, \
inputs=i_dec, iterate=False)
# get current state of the x under construction
if self.step_type == 'add':
c = c + self.writer_mlp.apply(h_dec)
else:
c = self.writer_mlp.apply(h_dec)
return c, h_pol, c_pol, h_var, c_var, h_dec, c_dec, z, log_prob_z
@recurrent(sequences=['z', 'h_dec'], contexts=['x'],
states=['h_pol', 'c_pol', 'h_ent', 'c_ent'],
outputs=['h_pol', 'c_pol', 'h_ent', 'c_ent', 'log_prob_z'])
def run_after(self, z, h_dec, h_pol, c_pol, h_ent, c_ent, x):
if self.use_q:
# update the guide policy state
ent_inp = tensor.concatenate([x, h_dec], axis=1)
i_ent = self.ent_mlp_in.apply(ent_inp)
h_ent, c_ent = self.ent_rnn.apply(states=h_ent, cells=c_ent,
inputs=i_ent, iterate=False)
# estimate guide policy's conditional over z
z_mean, z_logvar, _ = self.ent_mlp_out.apply(h_ent, 0.0*z)
elif self.use_p:
# update the primary policy state
pol_inp = tensor.concatenate([h_dec], axis=1)
i_pol = self.pol_mlp_in.apply(pol_inp)
h_pol, c_pol = self.pol_rnn.apply(states=h_pol, cells=c_pol,
inputs=i_pol, iterate=False)
# estimate guide policy's conditional over z
z_mean, z_logvar, _ = self.pol_mlp_out.apply(h_pol, 0.0*z)
if not self.use_pol:
# use a "null" policy, i.e. ZMUV Gaussian
z_mean = 0.0 * z_mean
z_logvar = 0.0 * z_logvar
else:
assert False, "Run flags not set properly!"
# compute log(prob(z | h))
log_prob_z = tensor.flatten(log_prob_gaussian2(z, z_mean, z_logvar))
return h_pol, c_pol, h_ent, c_ent, log_prob_z
#------------------------------------------------------------------------
@application(inputs=['x_in'],
outputs=['cs', 'nll', 'kl_q2ps', 'kl_p2qs', 'neg_ent'])
def run_model_split(self, x_in):
# get important size and shape information
batch_size = x_in.shape[0]
z_dim = self.get_dim('z')
cp_dim = self.get_dim('c_pol')
cv_dim = self.get_dim('c_var')
cd_dim = self.get_dim('c_dec')
ce_dim = self.get_dim('c_ent')
hp_dim = self.get_dim('h_pol')
hv_dim = self.get_dim('h_var')
hd_dim = self.get_dim('h_dec')
he_dim = self.get_dim('h_ent')
# get initial states for all model components
c0 = self.c_0.repeat(batch_size, axis=0)
cp0 = self.cp_0.repeat(batch_size, axis=0)
hp0 = self.hp_0.repeat(batch_size, axis=0)
cv0 = self.cv_0.repeat(batch_size, axis=0)
hv0 = self.hv_0.repeat(batch_size, axis=0)
cd0 = self.cd_0.repeat(batch_size, axis=0)
hd0 = self.hd_0.repeat(batch_size, axis=0)
ce0 = self.ce_0.repeat(batch_size, axis=0)
he0 = self.he_0.repeat(batch_size, axis=0)
################
# RUN Q THEN P # -- for variational bound on log likelihood
################
# get zero-mean, unit-std. Gaussian noise for use in scan op
u_for_q = self.theano_rng.normal(
size=(self.n_iter, batch_size, z_dim),
avg=0., std=1.)
# run the generative process starting from q
self.use_q = True
self.use_p = False
cs_from_q, _, _, _, _, hds_from_q, _, zs_from_q, log_prob_q_zs_from_q = \
self.run_before(u=u_for_q, c=c0,
h_pol=hp0, c_pol=cp0,
h_var=hv0, c_var=cv0,
h_dec=hd0, c_dec=cd0, x=x_in)
# try to follow the q trajectory using p
self.use_p = True
self.use_q = False
all_hds_from_q = tensor.concatenate([hd0.dimshuffle('x',0,1), hds_from_q], axis=0)
hds_from_q_for_p = all_hds_from_q[:-1,:,:]
_, _, _, _, log_prob_p_zs_from_q = \
self.run_after(z=zs_from_q,
h_dec=hds_from_q_for_p,
h_pol=hp0, c_pol=cp0,
h_ent=he0, c_ent=ce0, x=x_in)
################
# RUN P THEN Q # -- for variational bound on model entropy
################
# get zero-mean, unit-std. Gaussian noise for use in scan op
u_for_p = self.theano_rng.normal(
size=(self.n_iter, batch_size, z_dim),
avg=0., std=1.)
# run the generative process starting from p
self.use_q = False
self.use_p = True
cs_from_p, _, _, _, _, hds_from_p, _, zs_from_p, log_prob_p_zs_from_p = \
self.run_before(u=u_for_p, c=c0,
h_pol=hp0, c_pol=cp0,
h_var=hv0, c_var=cv0,
h_dec=hd0, c_dec=cd0, x=x_in)
# get the continuous-valued observations generated by p
cxs_from_p = tensor.nnet.sigmoid(tanh_clip(cs_from_p[-1,:,:], clip_val=15.0))
# convert continuous-valued xs to binary xs
# -- for use with the (biased) "pass-through" gradient estimator
s = self.theano_rng.uniform(size=cxs_from_p.shape, low=0.0, high=1.0, \
dtype=theano.config.floatX)
e = (s < cxs_from_p) - cxs_from_p
bxs_from_p = cxs_from_p + theano.gradient.disconnected_grad(e)
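# note: in the forward pass bxs_from_p equals the sampled binary values
# (since e = binary_sample - cxs_from_p), while disconnected_grad(e)
# blocks the gradient through e, so gradients w.r.t. bxs_from_p flow as
# if it were the continuous cxs_from_p -- the "pass-through" trick above.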
# process binary-valued xs with q
self.use_q = True
self.use_p = False
all_hds_from_p = tensor.concatenate([hd0.dimshuffle('x',0,1), hds_from_p], axis=0)
hds_from_p_for_q = all_hds_from_p[:-1,:,:]
_, _, _, _, log_prob_q_zs_from_p = \
self.run_after(z=zs_from_p,
h_dec=hds_from_p_for_q,
h_pol=hp0, c_pol=cp0,
h_ent=he0, c_ent=ce0, x=bxs_from_p)
######################################################################
# To maximize a variational lower bound on the entropy of p(x):
#
#   ent(p(x)) >= -E_{x,z ~ p}[ log( p(x|z) p(z) / q(z|x) ) ]
#  -ent(p(x)) <=  E_{x,z ~ p}[ log( p(x|z) p(z) / q(z|x) ) ]
#
######################################################################
log_p_xgz = log_prob_bernoulli(bxs_from_p, cxs_from_p) # shape: (batch_size, 1)
log_rat_ent = log_prob_p_zs_from_p - log_prob_q_zs_from_p # shape: (n_iter, batch_size)
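# together, log_p_xgz + sum(log_rat_ent) is a per-example Monte Carlo
# estimate of E_{x,z ~ p}[ log(p(x|z) p(z) / q(z|x)) ], i.e. the upper
# bound on -ent(p(x)) described above; it is assembled into neg_ent below.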
# compute the NLL for the reconstruction pass (i.e. Wake mode)
cs = cs_from_q
c_as_x = tensor.nnet.sigmoid(tanh_clip(cs[-1,:,:], clip_val=15.0))
nll = -1.0 * tensor.flatten(log_prob_bernoulli(x_in, c_as_x))
# compute KLds for the reconstruction pass (i.e. Wake mode)
kl_q2ps = log_prob_q_zs_from_q - log_prob_p_zs_from_q
kl_p2qs = log_prob_q_zs_from_q - log_prob_p_zs_from_q
# compute the neg-ent bound for the generative pass (i.e. Sleep mode)
neg_ent = tensor.flatten(log_p_xgz) + tensor.sum(log_rat_ent, axis=0) # shape: (batch_size,)
# add name tags to the constructed symbolic variables
cs.name = "cs"
nll.name = "nll"
kl_q2ps.name = "kl_q2ps"
kl_p2qs.name = "kl_p2qs"
neg_ent.name = "neg_ent"
return cs, nll, kl_q2ps, kl_p2qs, neg_ent
def build_model_funcs(self):
"""
Build the symbolic costs and theano functions relevant to this model.
"""
# symbolic variable for providing inputs
x_in_sym = tensor.matrix('x_in_sym')
# collect symbolic vars for model samples and costs (given x_in_sym)
cs, nll, kl_q2ps, kl_p2qs, neg_ent = self.run_model_split(x_in_sym)
# get the expected NLL part of the VFE bound
self.nll_term = tensor.mean(nll)
self.nll_term.name = "nll_term"
# get KL(q || p) and KL(p || q)
self.kld_q2p_term = kl_q2ps.sum(axis=0).mean()
self.kld_q2p_term.name = "kld_q2p_term"
self.kld_p2q_term = kl_p2qs.sum(axis=0).mean()
self.kld_p2q_term.name = "kld_p2q_term"
# get upper bound on negative entropy of p(x)
self.neg_ent_term = tensor.mean(neg_ent)
self.neg_ent_term.name = "neg_ent_term"
# construct the proper VFE bound on NLL
self.nll_bound = self.nll_term + self.kld_q2p_term
# grab handles for all the optimizable parameters in our cost
self.cg = ComputationGraph([self.nll_bound])
self.joint_params = self.get_model_params(ary_type='theano')
# apply some l2 regularization to the model parameters
self.reg_term = (1e-5 * sum([tensor.sum(p**2.0) for p in self.joint_params]))
self.reg_term.name = "reg_term"
# compute the full cost w.r.t. which we will optimize params
self.joint_cost = self.nll_term + \
(self.lam_kld_q2p[0] * self.kld_q2p_term) + \
(self.lam_kld_p2q[0] * self.kld_p2q_term) + \
(self.lam_neg_ent[0] * self.neg_ent_term) + \
self.reg_term
self.joint_cost.name = "joint_cost"
# get the gradient of the joint cost for all optimizable parameters
print("Computing gradients of joint_cost...")
self.joint_grads = OrderedDict()
grad_list = tensor.grad(self.joint_cost, self.joint_params)
for i, p in enumerate(self.joint_params):
self.joint_grads[p] = grad_list[i]
# construct the updates for all trainable parameters
self.joint_updates, applied_updates = get_adam_updates_X(
params=self.joint_params,
grads=self.joint_grads, alpha=self.lr,
beta1=self.mom_1, beta2=self.mom_2,
mom2_init=1e-3, smoothing=1e-4, max_grad_norm=10.0,
theano_rng=self.theano_rng, noise_std=self.grad_noise)
# get the total squared grad norm and (post ADAM scaling) squared update norm.
self.grad_norm = sum([tensor.sum(g**2.0) for g in grad_list])
self.update_norm = sum([tensor.sum(u**2.0) for u in applied_updates])
# collect the outputs to return from this function
train_outputs = [self.joint_cost, self.nll_bound,
self.nll_term, self.kld_q2p_term, self.kld_p2q_term,
self.neg_ent_term, self.reg_term,
self.grad_norm, self.update_norm]
bound_outputs = [self.joint_cost, self.nll_bound,
self.nll_term, self.kld_q2p_term, self.kld_p2q_term,
self.neg_ent_term, self.reg_term]
# collect the required inputs
inputs = [x_in_sym]
# compile the theano functions for computing stuff, like for real
print("Compiling model training/update function...")
self.train_joint = theano.function(inputs=inputs, \
outputs=train_outputs, \
updates=self.joint_updates)
print("Compiling model cost estimator function...")
self.compute_nll_bound = theano.function(inputs=inputs, \
outputs=bound_outputs)
return
def build_sampling_funcs(self):
"""
Build functions for visualizing the behavior of this model.
"""
# symbolic variable for providing inputs
x_in_sym = tensor.matrix('x_in_sym')
# collect symbolic vars for model samples and costs (given x_in_sym)
cs, nll, kl_q2ps, kl_p2qs, neg_ent = self.run_model_split(x_in_sym)
cs_as_xs = tensor.nnet.sigmoid(tanh_clip(cs, clip_val=15.0))
# get important parts of the VFE bound
nll_term = nll.mean()
kl_term = kl_q2ps.mean()
neg_ent_term = neg_ent.mean()
# grab handle for the computation graph for this model's cost
dummy_cost = nll_term + kl_term + neg_ent_term
self.cg = ComputationGraph([dummy_cost])
# build the function for computing the attention trajectories
print("Compiling model sampler...")
sample_func = theano.function(inputs=[x_in_sym], outputs=cs_as_xs)
def switchy_sampler(x=None, sample_source='q'):
assert (not (x is None)), "input x is required, sorry"
# store value of sample source switch, to restore later
old_switch = self.train_switch.get_value()
if sample_source == 'p':
# take samples from the primary policy
zeros_ary = numpy.zeros((1,)).astype(theano.config.floatX)
self.train_switch.set_value(zeros_ary)
else:
# take samples from the guide policy
ones_ary = numpy.ones((1,)).astype(theano.config.floatX)
self.train_switch.set_value(ones_ary)
# sample prediction and attention trajectories
samps = sample_func(x)
# set sample source switch back to previous value
self.train_switch.set_value(old_switch)
return samps
self.sample_model = switchy_sampler
return
def get_model_params(self, ary_type='numpy'):
"""
Get the optimizable parameters in this model. This returns a list
and, to reload this model's parameters, the list must stay in order.
This can provide shared variables or numpy arrays.
"""
if self.cg is None:
self.build_model_funcs()
joint_params = VariableFilter(roles=[PARAMETER])(self.cg.variables)
if ary_type == 'numpy':
for i, p in enumerate(joint_params):
joint_params[i] = p.get_value(borrow=False)
return joint_params
def set_model_params(self, numpy_param_list):
"""
Set the optimizable parameters in this model. This requires a list
and, to reload this model's parameters, the list must be in order.
"""
if self.cg is None:
self.build_model_funcs()
# grab handles for all the optimizable parameters in our cost
joint_params = VariableFilter(roles=[PARAMETER])(self.cg.variables)
for i, p in enumerate(joint_params):
joint_params[i].set_value(to_fX(numpy_param_list[i]))
return joint_params
def save_model_params(self, f_name=None):
"""
Save model parameters to a pickle file, in numpy form.
"""
numpy_params = self.get_model_params(ary_type='numpy')
f_handle = open(f_name, 'wb')
# dump the list of numpy parameter arrays collected above
cPickle.dump(numpy_params, f_handle, protocol=-1)
f_handle.close()
return
def load_model_params(self, f_name=None):
"""
Load model parameters from a pickle file, in numpy form.
"""
pickle_file = open(f_name, 'rb')
numpy_params = cPickle.load(pickle_file)
self.set_model_params(numpy_params)
pickle_file.close()
return
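#------------------------------------------------------------------------
# Hedged usage sketch, assuming `model` is an already-constructed
# RLDrawModel instance; `x_batch` (a float matrix of shape
# (batch_size, x_dim)) and `num_updates` are hypothetical names. A
# minimal train/sample cycle based on the methods above might look like:
#
#   model.initialize()
#   model.build_model_funcs()             # compiles model.train_joint(x)
#   model.build_sampling_funcs()          # installs model.sample_model(...)
#   model.set_sgd_params(lr=2e-4, mom_1=0.9, mom_2=0.99)
#   model.set_lam_kld(lam_kld_q2p=1.0, lam_kld_p2q=0.0, lam_neg_ent=0.05)
#   for i in range(num_updates):
#       result = model.train_joint(x_batch)   # [joint_cost, nll_bound, ...]
#   samples = model.sample_model(x=x_batch, sample_source='p')
#------------------------------------------------------------------------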
####################################################
####################################################
## Structured prediction via iterative refinement ##
####################################################
####################################################
class IRStructPredModel(BaseRecurrent, Initializable, Random):
def __init__(self, n_iter, step_type, use_pol,
reader_mlp, writer_mlp,
pol_mlp_in, pol_rnn, pol_mlp_out,
var_mlp_in, var_rnn, var_mlp_out,
dec_mlp_in, dec_rnn, dec_mlp_out,
**kwargs):
super(IRStructPredModel, self).__init__(**kwargs)
if not ((step_type == 'add') or (step_type == 'jump')):
raise ValueError('step_type must be jump or add')
# record the basic model format params
self.n_iter = n_iter
self.step_type = step_type
self.use_pol = use_pol
# grab handles for submodels
self.reader_mlp = reader_mlp
self.writer_mlp = writer_mlp
self.pol_mlp_in = pol_mlp_in
self.pol_rnn = pol_rnn
self.pol_mlp_out = pol_mlp_out
self.var_mlp_in = var_mlp_in
self.var_rnn = var_rnn
self.var_mlp_out = var_mlp_out
self.dec_mlp_in = dec_mlp_in
self.dec_rnn = dec_rnn
self.dec_mlp_out = dec_mlp_out
# create a shared variable switch for controlling sampling
ones_ary = numpy.ones((1,)).astype(theano.config.floatX)
self.train_switch = theano.shared(value=ones_ary, name='train_switch')
# shared var learning rate for generator and inferencer
zero_ary = to_fX( numpy.zeros((1,)) )
self.lr = theano.shared(value=zero_ary, name='rld_lr')
# shared var momentum parameters for generator and inferencer
self.mom_1 = theano.shared(value=zero_ary, name='rld_mom_1')
self.mom_2 = theano.shared(value=zero_ary, name='rld_mom_2')
# regularization noise on RNN states
zero_ary = to_fX(numpy.zeros((1,)))
self.rnn_noise = theano.shared(value=zero_ary, name='rnn_noise')
# regularization noise on gradient estimates
zero_ary = to_fX(numpy.zeros((1,)))
self.grad_noise = theano.shared(value=zero_ary, name='grad_noise')
# create shared variables for controlling KL and entropy cost terms
self.lam_kld_q2p = theano.shared(value=ones_ary, name='lam_kld_q2p')
self.lam_kld_p2q = theano.shared(value=ones_ary, name='lam_kld_p2q')
# set weights for KL and entropy terms in the joint cost
self.set_lam_kld(lam_kld_q2p=1.0, lam_kld_p2q=0.0)
# list for holding references to model params
self.params = []
# record the sub-models that underlie this model
self.children = [self.reader_mlp, self.writer_mlp,
self.pol_mlp_in, self.pol_rnn, self.pol_mlp_out,
self.var_mlp_in, self.var_rnn, self.var_mlp_out,
self.dec_mlp_in, self.dec_rnn, self.dec_mlp_out]
return
def _allocate(self):
"""
Allocate shared parameters used by this model.
"""
# get size information for the desired parameters
c_dim = self.get_dim('c')
cp_dim = self.get_dim('c_pol')
hp_dim = self.get_dim('h_pol')
cv_dim = self.get_dim('c_var')
hv_dim = self.get_dim('h_var')
cd_dim = self.get_dim('c_dec')
hd_dim = self.get_dim('h_dec')
# self.c_0 provides initial state of the next column prediction
self.c_0 = shared_floatx_nans((1,c_dim), name='c_0')
add_role(self.c_0, PARAMETER)
# self.cp_0/self.hp_0 provides initial state of the primary policy
self.cp_0 = shared_floatx_nans((1,cp_dim), name='cp_0')
add_role(self.cp_0, PARAMETER)
self.hp_0 = shared_floatx_nans((1,hp_dim), name='hp_0')
add_role(self.hp_0, PARAMETER)
# self.cv_0/self.hv_0 provides initial state of the guide policy
self.cv_0 = shared_floatx_nans((1,cv_dim), name='cv_0')
add_role(self.cv_0, PARAMETER)
self.hv_0 = shared_floatx_nans((1,hv_dim), name='hv_0')
add_role(self.hv_0, PARAMETER)
# self.cd_0/self.hd_0 provides initial state of the shared dynamics
self.cd_0 = shared_floatx_nans((1,cd_dim), name='cd_0')
add_role(self.cd_0, PARAMETER)
self.hd_0 = shared_floatx_nans((1,hd_dim), name='hd_0')
add_role(self.hd_0, PARAMETER)
# add the theano shared variables to our parameter lists
self.params.extend([ self.c_0,
self.cp_0, self.cv_0, self.cd_0,
self.hp_0, self.hv_0, self.hd_0 ])
return
def _initialize(self):
# initialize all parameters to zeros...
for p in self.params:
p_nan = p.get_value(borrow=False)
p_zeros = numpy.zeros(p_nan.shape)
p.set_value(p_zeros.astype(theano.config.floatX))
return
def get_dim(self, name):
if name == 'x':
return self.reader_mlp.input_dim
elif name in ['c', 'y']:
return self.writer_mlp.output_dim
elif name == 'h_pol':
return self.pol_rnn.get_dim('states')
elif name == 'c_pol':
return self.pol_rnn.get_dim('cells')
elif name == 'h_var':
return self.var_rnn.get_dim('states')
elif name == 'c_var':
return self.var_rnn.get_dim('cells')
elif name == 'h_dec':
return self.dec_rnn.get_dim('states')
elif name == 'c_dec':
return self.dec_rnn.get_dim('cells')
elif name == 'z':
return self.var_mlp_out.get_dim('output')
elif name in ['nll', 'kl_q2p', 'kl_p2q']:
return 0
else:
return super(IRStructPredModel, self).get_dim(name)
def set_sgd_params(self, lr=0.01, mom_1=0.9, mom_2=0.999):
"""
Set learning rate and momentum parameter for all updates.
"""
zero_ary = numpy.zeros((1,))
# set learning rate
new_lr = zero_ary + lr
self.lr.set_value(to_fX(new_lr))
# set momentums (use first and second order "momentum")
new_mom_1 = zero_ary + mom_1
self.mom_1.set_value(to_fX(new_mom_1))
new_mom_2 = zero_ary + mom_2
self.mom_2.set_value(to_fX(new_mom_2))
return
def set_lam_kld(self, lam_kld_q2p=1.0, lam_kld_p2q=0.0):
"""
Set the relative weight of various terms in the joint cost.
"""
zero_ary = numpy.zeros((1,))
new_lam = zero_ary + lam_kld_q2p
self.lam_kld_q2p.set_value(to_fX(new_lam))
new_lam = zero_ary + lam_kld_p2q
self.lam_kld_p2q.set_value(to_fX(new_lam))
return
def set_rnn_noise(self, rnn_noise=0.0):
"""
Set the standard deviation of "RNN dynamics noise".
"""
zero_ary = numpy.zeros((1,))
new_val = zero_ary + rnn_noise
self.rnn_noise.set_value(to_fX(new_val))
return
def set_grad_noise(self, grad_noise=0.0):
"""
Set the standard deviation of "gradient noise".
"""
zero_ary = numpy.zeros((1,))
new_val = zero_ary + grad_noise
self.grad_noise.set_value(to_fX(new_val))
return
#------------------------------------------------------------------------
@recurrent(sequences=['u'], contexts=['f_x', 'y'],
states=['c', 'h_pol', 'c_pol', 'h_var', 'c_var', 'h_dec', 'c_dec', 'nll', 'kl_q2p', 'kl_p2q'],
outputs=['c', 'h_pol', 'c_pol', 'h_var', 'c_var', 'h_dec', 'c_dec', 'nll', 'kl_q2p', 'kl_p2q'])
def apply(self, u, c, h_pol, c_pol, h_var, c_var, h_dec, c_dec, nll, kl_q2p, kl_p2q, f_x, y):
# get current state of the x under construction
if self.step_type == 'add':
c = c
else:
c = self.writer_mlp.apply(h_dec)
c_as_y = tensor.nnet.sigmoid(tanh_clip(c, clip_val=15.0))
# update the primary policy state
pol_inp = tensor.concatenate([c_as_y, f_x, h_dec], axis=1)
i_pol = self.pol_mlp_in.apply(pol_inp)
h_pol, c_pol = self.pol_rnn.apply(states=h_pol, cells=c_pol,
inputs=i_pol, iterate=False)
# update the guide policy state
var_inp = tensor.concatenate([y, c_as_y, f_x, h_dec], axis=1)
i_var = self.var_mlp_in.apply(var_inp)
h_var, c_var = self.var_rnn.apply(states=h_var, cells=c_var,
inputs=i_var, iterate=False)
# estimate primary policy's conditional over z
p_z_mean, p_z_logvar, p_z = self.pol_mlp_out.apply(h_pol, u)
# estimate guide policy's conditional over z
q_z_mean, q_z_logvar, q_z = self.var_mlp_out.apply(h_var, u)
if not self.use_pol:
# use a deterministic model
q_z_mean, q_z_logvar = p_z_mean, p_z_logvar
p_z, q_z = p_z_mean, p_z_mean
# mix samples from p/q based on value of self.train_switch
z = (self.train_switch[0] * q_z) + \
((1.0 - self.train_switch[0]) * p_z)
# update the shared dynamics' state
dec_inp = tensor.concatenate([z], axis=1)
i_dec = self.dec_mlp_in.apply(dec_inp)
h_dec, c_dec = self.dec_rnn.apply(states=h_dec, cells=c_dec, \
inputs=i_dec, iterate=False)
# get current state of the x under construction
if self.step_type == 'add':
c = c + self.writer_mlp.apply(h_dec)
else:
c = self.writer_mlp.apply(h_dec)
# compute the NLL of the reconstruction as of this step
c_as_y = tensor.nnet.sigmoid(tanh_clip(c, clip_val=15.0))
nll = -1.0 * tensor.flatten(log_prob_bernoulli(y, c_as_y))
# compute KL(q || p) and KL(p || q) for this step
kl_q2p = tensor.sum(gaussian_kld(q_z_mean, q_z_logvar, \
p_z_mean, p_z_logvar), axis=1)
kl_p2q = tensor.sum(gaussian_kld(p_z_mean, p_z_logvar, \
q_z_mean, q_z_logvar), axis=1)
return c, h_pol, c_pol, h_var, c_var, h_dec, c_dec, nll, kl_q2p, kl_p2q
#------------------------------------------------------------------------
@application(inputs=['x', 'y'],
outputs=['cs', 'nll', 'kl_q2ps', 'kl_p2qs'])
def run_model_simul(self, x, y):
# get important size and shape information
batch_size = x.shape[0]
z_dim = self.get_dim('z')
cp_dim = self.get_dim('c_pol')
cv_dim = self.get_dim('c_var')
cd_dim = self.get_dim('c_dec')
hp_dim = self.get_dim('h_pol')
hv_dim = self.get_dim('h_var')
hd_dim = self.get_dim('h_dec')
# get initial states for all model components
c0 = self.c_0.repeat(batch_size, axis=0)
cp0 = self.cp_0.repeat(batch_size, axis=0)
hp0 = self.hp_0.repeat(batch_size, axis=0)
cv0 = self.cv_0.repeat(batch_size, axis=0)
hv0 = self.hv_0.repeat(batch_size, axis=0)
cd0 = self.cd_0.repeat(batch_size, axis=0)
hd0 = self.hd_0.repeat(batch_size, axis=0)
# get zero-mean, unit-std. Gaussian noise for use in scan op
u = self.theano_rng.normal(
size=(self.n_iter, batch_size, z_dim),
avg=0., std=1.)
# extract features from x, for conditioning our predictions
f_x = self.reader_mlp.apply(x)
# run the multi-stage guided generative process
cs, _, _, _, _, _, _, nlls, kl_q2ps, kl_p2qs = \
self.apply(u=u, c=c0,
h_pol=hp0, c_pol=cp0,
h_var=hv0, c_var=cv0,
h_dec=hd0, c_dec=cd0, f_x=f_x, y=y)
nll = tensor.flatten(nlls[-1,:]) # NLL following last refinement step
# add name tags to the constructed symbolic variables
cs.name = "cs"
nll.name = "nll"
kl_q2ps.name = "kl_q2ps"
kl_p2qs.name = "kl_p2qs"
return cs, nll, kl_q2ps, kl_p2qs
def build_model_funcs(self):
"""
Build the symbolic costs and theano functions relevant to this model.
"""
# symbolic variable for providing inputs
x_sym = tensor.matrix('x_sym')
y_sym = tensor.matrix('y_sym')
# collect symbolic vars for model samples and costs (given x/y)
cs, nll, kl_q2ps, kl_p2qs = self.run_model_simul(x_sym, y_sym)
# get the expected NLL part of the VFE bound
self.nll_term = tensor.mean(nll)
self.nll_term.name = "nll_term"
# get KL(q || p) and KL(p || q)
self.kld_q2p_term = kl_q2ps.sum(axis=0).mean()
self.kld_q2p_term.name = "kld_q2p_term"
self.kld_p2q_term = kl_p2qs.sum(axis=0).mean()
self.kld_p2q_term.name = "kld_p2q_term"
# construct the proper VFE bound on NLL
self.nll_bound = self.nll_term + self.kld_q2p_term
# grab handles for all the optimizable parameters in our cost
self.cg = ComputationGraph([self.nll_bound])
self.joint_params = self.get_model_params(ary_type='theano')
# apply some l2 regularization to the model parameters
self.reg_term = (1e-5 * sum([tensor.sum(p**2.0) for p in self.joint_params]))
self.reg_term.name = "reg_term"
# compute the full cost w.r.t. which we will optimize params
self.joint_cost = self.nll_term + \
(self.lam_kld_q2p[0] * self.kld_q2p_term) + \
(self.lam_kld_p2q[0] * self.kld_p2q_term) + \
self.reg_term
self.joint_cost.name = "joint_cost"
# get the gradient of the joint cost for all optimizable parameters
print("Computing gradients of joint_cost...")
self.joint_grads = OrderedDict()
grad_list = tensor.grad(self.joint_cost, self.joint_params)
for i, p in enumerate(self.joint_params):
self.joint_grads[p] = grad_list[i]
# construct the updates for all trainable parameters
self.joint_updates, applied_updates = get_adam_updates_X(
params=self.joint_params,
grads=self.joint_grads, alpha=self.lr,
beta1=self.mom_1, beta2=self.mom_2,
mom2_init=1e-3, smoothing=1e-4, max_grad_norm=10.0,
theano_rng=self.theano_rng, noise_std=self.grad_noise)
# get the total squared grad norm and (post ADAM scaling) squared update norm.
self.grad_norm = sum([tensor.sum(g**2.0) for g in grad_list])
self.update_norm = sum([tensor.sum(u**2.0) for u in applied_updates])
# collect the outputs to return from this function
train_outputs = [self.joint_cost, self.nll_bound,
self.nll_term, self.kld_q2p_term, self.kld_p2q_term,
self.reg_term, self.grad_norm, self.update_norm]
bound_outputs = [self.joint_cost, self.nll_bound,
self.nll_term, self.kld_q2p_term, self.kld_p2q_term,
self.reg_term]
# collect the required inputs
inputs = [x_sym, y_sym]
# compile the theano functions for computing stuff, like for real
print("Compiling model training/update function...")
self.train_joint = theano.function(inputs=inputs,
outputs=train_outputs,
updates=self.joint_updates)
print("Compiling model cost estimator function...")
self.compute_nll_bound = theano.function(inputs=inputs,
outputs=bound_outputs)
return
def build_sampling_funcs(self):
"""
Build functions for visualizing the behavior of this model.
"""
# symbolic variable for providing inputs
x_sym = tensor.matrix('x_sym')
y_sym = tensor.matrix('y_sym')
# collect symbolic vars for model samples and costs (given x/y)
cs, nll, kl_q2ps, kl_p2qs = self.run_model_simul(x_sym, y_sym)
cs_as_ys = tensor.nnet.sigmoid(tanh_clip(cs, clip_val=15.0))
# get important parts of the VFE bound
nll_term = nll.mean()
kl_term = kl_q2ps.mean()
# grab handle for the computation graph for this model's cost
dummy_cost = nll_term + kl_term
self.cg = ComputationGraph([dummy_cost])
# build the function for computing the prediction trajectories
print("Compiling model sampler...")
inputs = [x_sym, y_sym]
sample_func = theano.function(inputs=inputs, outputs=cs_as_ys)
def switchy_sampler(x=None, y=None, sample_source='q'):
assert (not (x is None)), "input x is required, sorry"
assert (not (y is None)), "input y is required, sorry"
# store value of sample source switch, to restore later
old_switch = self.train_switch.get_value()
if sample_source == 'p':
# take samples from the primary policy
zeros_ary = numpy.zeros((1,)).astype(theano.config.floatX)
self.train_switch.set_value(zeros_ary)
else:
# take samples from the guide policy
ones_ary = numpy.ones((1,)).astype(theano.config.floatX)
self.train_switch.set_value(ones_ary)
# sample prediction and attention trajectories
y_samps = sample_func(x, y)
x_samps = x[numpy.newaxis,:,:].repeat(y_samps.shape[0], axis=0)
# set sample source switch back to previous value
self.train_switch.set_value(old_switch)
return x_samps, y_samps
self.sample_model = switchy_sampler
return
def get_model_params(self, ary_type='numpy'):
"""
Get the optimizable parameters in this model. This returns a list
and, to reload this model's parameters, the list must stay in order.
This can provide shared variables or numpy arrays.
"""
if self.cg is None:
self.build_model_funcs()
joint_params = VariableFilter(roles=[PARAMETER])(self.cg.variables)
if ary_type == 'numpy':
for i, p in enumerate(joint_params):
joint_params[i] = p.get_value(borrow=False)
return joint_params
def set_model_params(self, numpy_param_list):
"""
Set the optimizable parameters in this model. This requires a list
and, to reload this model's parameters, the list must be in order.
"""
if self.cg is None:
self.build_model_funcs()
# grab handles for all the optimizable parameters in our cost
joint_params = VariableFilter(roles=[PARAMETER])(self.cg.variables)
for i, p in enumerate(joint_params):
joint_params[i].set_value(to_fX(numpy_param_list[i]))
return joint_params
def save_model_params(self, f_name=None):
"""
Save model parameters to a pickle file, in numpy form.
"""
numpy_params = self.get_model_params(ary_type='numpy')
f_handle = open(f_name, 'wb')
# dump the list of numpy parameter arrays collected above
cPickle.dump(numpy_params, f_handle, protocol=-1)
f_handle.close()
return
def load_model_params(self, f_name=None):
"""
Load model parameters from a pickle file, in numpy form.
"""
pickle_file = open(f_name, 'rb')
numpy_params = cPickle.load(pickle_file)
self.set_model_params(numpy_params)
pickle_file.close()
return
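#------------------------------------------------------------------------
# Hedged usage sketch, assuming `model` is an already-constructed
# IRStructPredModel instance; `x_batch` and `y_batch` are hypothetical
# names for an input matrix and its prediction targets. Based on the
# methods above, iterative-refinement training and sampling might look like:
#
#   model.initialize()
#   model.build_model_funcs()              # compiles model.train_joint(x, y)
#   model.build_sampling_funcs()           # installs model.sample_model(...)
#   model.set_sgd_params(lr=2e-4, mom_1=0.9, mom_2=0.99)
#   model.set_lam_kld(lam_kld_q2p=1.0, lam_kld_p2q=0.0)
#   result = model.train_joint(x_batch, y_batch)
#   x_samps, y_samps = model.sample_model(x=x_batch, y=y_batch,
#                                         sample_source='p')
#------------------------------------------------------------------------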
|
Philip-Bachman/Sequential-Generation
|
BlocksModels.py
|
Python
|
mit
| 107,039
|
[
"Gaussian"
] |
3d4be2a0b3bcce7ff60ad7fca947a40abe9cc6d8bc135e6a270383c94a8244f0
|
#!/usr/bin/env python
# Copyright 2015 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Verifies that all source files contain the necessary copyright boilerplate
# snippet.
from __future__ import print_function
import argparse
import datetime
import glob
import os
import re
import sys
def get_args():
parser = argparse.ArgumentParser()
parser.add_argument(
"filenames", help="list of files to check, all files if unspecified", nargs='*')
rootdir = os.path.dirname(__file__) + "/../"
rootdir = os.path.abspath(rootdir)
parser.add_argument("--rootdir", default=rootdir,
help="root directory to examine")
default_boilerplate_dir = os.path.join(rootdir, "hack/boilerplate")
parser.add_argument("--boilerplate-dir", default=default_boilerplate_dir)
return parser.parse_args()
def get_refs():
refs = {}
for path in glob.glob(os.path.join(ARGS.boilerplate_dir, "boilerplate.*.txt")):
extension = os.path.basename(path).split(".")[1]
ref_file = open(path, 'r')
ref = ref_file.read().splitlines()
ref_file.close()
refs[extension] = ref
return refs
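# e.g. a reference file named "hack/boilerplate/boilerplate.py.txt" becomes
# refs["py"], a list of the expected header lines for Python sources.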
GENERATED_GO_MARKERS = [
"// Code generated by client-gen. DO NOT EDIT.",
"// Code generated by deepcopy-gen. DO NOT EDIT.",
"// Code generated by informer-gen. DO NOT EDIT.",
"// Code generated by lister-gen. DO NOT EDIT.",
]
# given the file contents, return true if the file appears to be generated
def is_generated(data):
for marker in GENERATED_GO_MARKERS:
if marker in data:
return True
return False
def file_passes(filename, refs, regexs): # pylint: disable=too-many-locals
try:
with open(filename, 'r') as fp:
data = fp.read()
except IOError:
return False
basename = os.path.basename(filename)
extension = file_extension(filename)
if extension != "":
ref = refs[extension]
else:
ref = refs[basename]
# check for and skip generated files
if is_generated(data):
return True
# remove build tags from the top of Go files
if extension == "go":
con = regexs["go_build_constraints"]
(data, found) = con.subn("", data, 1)
# remove shebang from the top of shell files
if extension == "sh" or extension == "py":
she = regexs["shebang"]
(data, found) = she.subn("", data, 1)
data = data.splitlines()
# if our test file is smaller than the reference it surely fails!
if len(ref) > len(data):
return False
# trim our file to the same number of lines as the reference file
data = data[:len(ref)]
year = regexs["year"]
for datum in data:
if year.search(datum):
return False
# Replace any copyright year (2014 through the current year) with "YEAR"
# on the first line that contains one, so it can match the reference
when = regexs["date"]
for idx, datum in enumerate(data):
(data[idx], found) = when.subn('YEAR', datum)
if found != 0:
break
# if we don't match the reference at this point, fail
if ref != data:
return False
return True
def file_extension(filename):
return os.path.splitext(filename)[1].split(".")[-1].lower()
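# e.g. file_extension("pkg/foo.tar.GZ") -> "gz", file_extension("Makefile") -> ""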
SKIPPED_DIRS = [
'Godeps', 'third_party', '_gopath', '_output',
'.git', 'vendor', '__init__.py', 'node_modules'
]
# even when generated by bazel we will complain about some generated files
# not having the headers. since they're just generated, ignore them
IGNORE_HEADERS = [
'// Code generated by go-bindata.'
]
def has_ignored_header(pathname):
with open(pathname, 'r') as myfile:
data = myfile.read()
for header in IGNORE_HEADERS:
if data.startswith(header):
return True
return False
def normalize_files(files):
newfiles = []
for pathname in files:
if any(x in pathname for x in SKIPPED_DIRS):
continue
newfiles.append(pathname)
for idx, pathname in enumerate(newfiles):
if not os.path.isabs(pathname):
newfiles[idx] = os.path.join(ARGS.rootdir, pathname)
return newfiles
def get_files(extensions):
files = []
if ARGS.filenames:
files = ARGS.filenames
else:
for root, dirs, walkfiles in os.walk(ARGS.rootdir):
# don't visit certain dirs. This is just a performance improvement
# as we would prune these later in normalize_files(). But doing it
# cuts down the amount of filesystem walking we do and cuts down
# the size of the file list
for dpath in SKIPPED_DIRS:
if dpath in dirs:
dirs.remove(dpath)
for name in walkfiles:
pathname = os.path.join(root, name)
files.append(pathname)
files = normalize_files(files)
outfiles = []
for pathname in files:
basename = os.path.basename(pathname)
extension = file_extension(pathname)
if extension in extensions or basename in extensions:
if not has_ignored_header(pathname):
outfiles.append(pathname)
return outfiles
def get_dates():
years = datetime.datetime.now().year
return '(%s)' % '|'.join((str(year) for year in range(2014, years + 1)))
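# For example, when run in 2017 get_dates() returns '(2014|2015|2016|2017)'.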
def get_regexs():
regexs = {}
# Search for "YEAR" which exists in the boilerplate, but shouldn't in the real thing
regexs["year"] = re.compile('YEAR')
# dates can be 2014, 2015, 2016 or 2017, company holder names can be anything
regexs["date"] = re.compile(get_dates())
# strip // +build \n\n build constraints
regexs["go_build_constraints"] = re.compile(
r"^(// \+build.*\n)+\n", re.MULTILINE)
# strip #!.* from shell/python scripts
regexs["shebang"] = re.compile(r"^(#!.*\n)\n*", re.MULTILINE)
return regexs
def main():
regexs = get_regexs()
refs = get_refs()
filenames = get_files(refs.keys())
nonconforming_files = []
for filename in filenames:
if not file_passes(filename, refs, regexs):
nonconforming_files.append(filename)
if nonconforming_files:
print('%d files have incorrect boilerplate headers:' %
len(nonconforming_files))
for filename in sorted(nonconforming_files):
print(os.path.relpath(filename, ARGS.rootdir))
sys.exit(1)
if __name__ == "__main__":
ARGS = get_args()
main()
|
lavalamp/test-infra
|
hack/verify_boilerplate.py
|
Python
|
apache-2.0
| 6,981
|
[
"VisIt"
] |
c7d747e099db5519842d530dbfad5e8272650f3f82b97547ca938ae3179ff9cb
|
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
"""
Perform DNA-DNA alignment using BLAST, NUCMER and BLAT. The interface is kept
the same across tools, and parallelization is supported both in-core and on a grid.
"""
import os.path as op
import sys
import shutil
import logging
from maize.utils.cbook import depends
from maize.apps.base import sh, get_abs_path, which
@depends
def run_formatdb(infile=None, outfile=None, dbtype="nucl"):
cmd = "makeblastdb"
cmd += " -dbtype {0} -in {1}".format(dbtype, infile)
sh(cmd)
@depends
def run_blat(infile=None, outfile=None, db="UniVec_Core",
pctid=95, hitlen=50, cpus=16, overwrite=True):
cmd = "pblat -threads={0}".format(cpus) if which("pblat") else "blat"
cmd += ' {0} {1} -out=blast8 {2}'.format(db, infile, outfile)
sh(cmd)
blatfile = outfile
filtered_blatfile = outfile + ".P{0}L{1}".format(pctid, hitlen)
run_blast_filter(infile=blatfile, outfile=filtered_blatfile,
pctid=pctid, hitlen=hitlen)
if overwrite:
shutil.move(filtered_blatfile, blatfile)
@depends
def run_vecscreen(infile=None, outfile=None, db="UniVec_Core",
pctid=None, hitlen=None):
"""
BLASTN parameters reference:
http://www.ncbi.nlm.nih.gov/VecScreen/VecScreen_docs.html
"""
db = get_abs_path(db)
nin = db + ".nin"
run_formatdb(infile=db, outfile=nin)
cmd = "blastn"
cmd += " -task blastn"
cmd += " -query {0} -db {1} -out {2}".format(infile, db, outfile)
cmd += " -penalty -5 -gapopen 4 -gapextend 4 -dust yes -soft_masking true"
cmd += " -searchsp 1750000000000 -evalue 0.01 -outfmt 6 -num_threads 8"
sh(cmd)
@depends
def run_megablast(infile=None, outfile=None, db=None, wordsize=None, \
pctid=98, hitlen=100, best=None, evalue=0.01, task="megablast", cpus=16):
assert db, "Need to specify database fasta file."
db = get_abs_path(db)
nin = db + ".nin"
nin00 = db + ".00.nin"
nin = nin00 if op.exists(nin00) else (db + ".nin")
run_formatdb(infile=db, outfile=nin)
cmd = "blastn"
cmd += " -query {0} -db {1} -out {2}".format(infile, db, outfile)
cmd += " -evalue {0} -outfmt 6 -num_threads {1}".format(evalue, cpus)
cmd += " -task {0}".format(task)
if wordsize:
cmd += " -word_size {0}".format(wordsize)
if pctid:
cmd += " -perc_identity {0}".format(pctid)
if best:
cmd += " -max_target_seqs {0}".format(best)
sh(cmd)
if pctid and hitlen:
blastfile = outfile
filtered_blastfile = outfile + ".P{0}L{1}".format(pctid, hitlen)
run_blast_filter(infile=blastfile, outfile=filtered_blastfile,
pctid=pctid, hitlen=hitlen)
shutil.move(filtered_blastfile, blastfile)
def run_blast_filter(infile=None, outfile=None, pctid=95, hitlen=50):
from maize.formats.blast import filter
logging.debug("Filter BLAST result (pctid={0}, hitlen={1})".\
format(pctid, hitlen))
pctidopt = "--pctid={0}".format(pctid)
hitlenopt = "--hitlen={0}".format(hitlen)
filter([infile, pctidopt, hitlenopt])
def main():
import argparse
parser = argparse.ArgumentParser(
formatter_class = argparse.ArgumentDefaultsHelpFormatter,
description = 'alignment utils'
)
sp = parser.add_subparsers(title = 'available commands', dest = 'command')
sp1 = sp.add_parser("blast",
help = 'run blastn using query against reference',
formatter_class = argparse.ArgumentDefaultsHelpFormatter)
sp1.add_argument('ref_fasta', help = 'reference fasta')
sp1.add_argument('qry_fasta', help = 'query fasta')
task_choices = ("blastn", "blastn-short", "dc-megablast", \
"megablast", "vecscreen")
sp1.add_argument("--wordsize", type=int, help="Word size")
sp1.add_argument("--best", default=1, type=int,
help="Only look for best N hits")
sp1.add_argument("--task", default="megablast", choices=task_choices,
help="Task of the blastn")
sp1.add_argument("--pctid", type=float, default=0, help='percent identity')
sp1.add_argument("--evalue", type=float, default=.01, help='evalue')
sp1.add_argument("--cpus", type=int, default=1, help='cpus')
sp1.set_defaults(func = blast)
sp1 = sp.add_parser("blat",
help = 'run blat using query against reference',
formatter_class = argparse.ArgumentDefaultsHelpFormatter)
sp1.add_argument('ref_fasta', help = 'reference fasta')
sp1.add_argument('qry_fasta', help = 'query fasta')
sp1.add_argument("--pctid", type = float, default = 95)
sp1.add_argument("--hitlen", type = int, default = 30)
sp1.add_argument("--cpus", type=int, default=1, help='cpus')
    sp1.set_defaults(func = blat)
sp1 = sp.add_parser("blasr",
help = 'run blasr on a set of pacbio reads',
formatter_class = argparse.ArgumentDefaultsHelpFormatter)
sp1.add_argument('ref_fasta', help = 'reference fasta')
sp1.add_argument('fofn', help = 'fofn')
sp1.add_argument("--cpus", type=int, default=8, help='cpus')
sp1.set_defaults(func = blasr)
sp1 = sp.add_parser("nucmer",
help = 'run nucmer using query against reference',
formatter_class = argparse.ArgumentDefaultsHelpFormatter)
sp1.add_argument('ref_fasta', help = 'reference fasta')
sp1.add_argument('qry_fasta', help = 'query fasta')
sp1.add_argument("--chunks", type=int,
help="Split both query and subject into chunks")
sp1.add_argument("--cpus", type=int, default=1, help='cpus')
sp1.add_argument("--params", default='-l 100 -c 500', help='run prarameters')
sp1.set_defaults(func = nucmer)
sp1 = sp.add_parser("last",
help = 'run last using query against reference',
formatter_class = argparse.ArgumentDefaultsHelpFormatter)
sp1.add_argument('query', help = 'query fasta')
sp1.add_argument('db', help = 'subject database')
sp1.add_argument("--dbtype", default="nucl",
choices=("nucl", "prot"),
help="Molecule type of subject database")
sp1.add_argument("--path", help="Specify LAST path")
sp1.add_argument("--mask", action="store_true", help="Invoke -c in lastdb")
sp1.add_argument("--format", default="BlastTab",
choices=("TAB", "MAF", "BlastTab", "BlastTab+"),
help="Output format")
sp1.add_argument("--minlen", default=0, type=int,
help="Filter alignments by how many bases match")
sp1.add_argument("--minid", default=0, type=int, help="Minimum sequence identity")
sp1.add_argument('-p', "--thread", type=int, default=1, help='number of threads')
sp1.add_argument("--params", default='', help='run prarameters')
sp1.set_defaults(func = last)
args = parser.parse_args()
if args.command:
args.func(args)
else:
print('Error: need to specify a sub command\n')
parser.print_help()
def nucmer(args):
"""
%prog nucmer ref.fasta query.fasta
Run NUCMER using query against reference. Parallel implementation derived
from: <https://github.com/fritzsedlazeck/sge_mummer>
"""
from itertools import product
from maize.apps.grid import MakeManager
from maize.formats.base import split
ref, query = args.ref_fasta, args.qry_fasta
cpus = args.cpus
nrefs = nqueries = args.chunks or int(cpus ** .5)
refdir = ref.split(".")[0] + "-outdir"
querydir = query.split(".")[0] + "-outdir"
reflist = split([ref, refdir, str(nrefs)]).names
querylist = split([query, querydir, str(nqueries)]).names
mm = MakeManager()
for i, (r, q) in enumerate(product(reflist, querylist)):
pf = "{0:04d}".format(i)
cmd = "nucmer -maxmatch"
cmd += " {0}".format(args.extra)
cmd += " {0} {1} -p {2}".format(r, q, pf)
deltafile = pf + ".delta"
mm.add((r, q), deltafile, cmd)
print(cmd)
mm.write()
def blasr(args):
"""
%prog blasr ref.fasta fofn
Run blasr on a set of PacBio reads. This is based on a divide-and-conquer
strategy described below.
"""
from maize.apps.grid import MakeManager
from maize.utils.iter import grouper
reffasta, fofn = args.ref_fasta, args.fofn
flist = sorted([x.strip() for x in open(fofn)])
h5list = []
mm = MakeManager()
for i, fl in enumerate(grouper(flist, 3)):
chunkname = "chunk{0:03d}".format(i)
fn = chunkname + ".fofn"
h5 = chunkname + ".cmp.h5"
fw = open(fn, "w")
        fw.write("\n".join(fl) + "\n")
fw.close()
cmd = "pbalign {0} {1} {2}".format(fn, reffasta, h5)
cmd += " --nproc {0} --forQuiver --tmpDir .".format(args.cpus)
mm.add((fn, reffasta), h5, cmd)
h5list.append(h5)
# Merge h5, sort and repack
allh5 = "all.cmp.h5"
tmph5 = "tmp.cmp.h5"
cmd_merge = "cmph5tools.py merge --outFile {0}".format(allh5)
cmd_merge += " " + " ".join(h5list)
cmd_sort = "cmph5tools.py sort --deep {0} --tmpDir .".format(allh5)
cmd_repack = "h5repack -f GZIP=1 {0} {1}".format(allh5, tmph5)
cmd_repack += " && mv {0} {1}".format(tmph5, allh5)
mm.add(h5list, allh5, [cmd_merge, cmd_sort, cmd_repack])
# Quiver
pf = reffasta.rsplit(".", 1)[0]
variantsgff = pf + ".variants.gff"
consensusfasta = pf + ".consensus.fasta"
cmd_faidx = "samtools faidx {0}".format(reffasta)
cmd = "quiver -j 32 {0}".format(allh5)
cmd += " -r {0} -o {1} -o {2}".format(reffasta, variantsgff, consensusfasta)
mm.add(allh5, consensusfasta, [cmd_faidx, cmd])
mm.write()
def get_outfile(reffasta, queryfasta, suffix="blast"):
q = op.basename(queryfasta).split(".")[0]
r = op.basename(reffasta).split(".")[0]
return ".".join((q, r, suffix))
def blat(args):
"""
%prog blat ref.fasta query.fasta
Calls blat and filters BLAST hits.
"""
reffasta, queryfasta = args.ref_fasta, args.qry_fasta
blastfile = get_outfile(reffasta, queryfasta, suffix="blat")
run_blat(infile=queryfasta, outfile=blastfile, db=reffasta,
pctid=args.pctid, hitlen=args.hitlen, cpus=args.cpus,
overwrite=False)
return blastfile
def blast(args):
"""
%prog blast ref.fasta query.fasta
Calls blast and then filter the BLAST hits. Default is megablast.
"""
reffasta, queryfasta = args.ref_fasta, args.qry_fasta
blastfile = get_outfile(reffasta, queryfasta)
run_megablast(infile=queryfasta, outfile=blastfile, db=reffasta,
wordsize=args.wordsize, pctid=args.pctid, evalue=args.evalue,
hitlen=None, best=args.best, task=args.task, cpus=args.cpus)
return blastfile
@depends
def run_lastdb(infile=None, outfile=None, mask=False, lastdb_bin="lastdb", dbtype="nucl"):
outfilebase = outfile.rsplit(".", 1)[0]
db = "-p " if dbtype == "prot" else ""
mask = "-c " if mask else ""
cmd = "{0} {1}{2}{3} {4}".format(lastdb_bin, db, mask, outfilebase, infile)
sh(cmd)
def last(args, dbtype=None):
"""
%prog database.fasta query.fasta
Run LAST by calling LASTDB and LASTAL. LAST program available:
<http://last.cbrc.jp>
Works with LAST-719.
"""
query, db = args.query, args.db
path = args.path
nthread = args.thread
if not dbtype:
dbtype = args.dbtype
getpath = lambda x: op.join(path, x) if path else x
lastdb_bin = getpath("lastdb")
lastal_bin = getpath("lastal")
u = 2 if args.mask else 0
cmd = "{0} -u {1}".format(lastal_bin, u)
cmd += " -P {0} -i3G".format(nthread)
cmd += " -f {0}".format(args.format)
cmd += " {0} {1}".format(db, query)
minlen = args.minlen
minid = args.minid
extra = args.params
assert minid != 100, "Perfect match not yet supported"
mm = minid / (100 - minid)
if minlen:
extra += " -e{0}".format(minlen)
if minid:
extra += " -r1 -q{0} -a{0} -b{0}".format(mm)
if extra:
cmd += " " + extra.strip()
sh(cmd)
if __name__ == '__main__':
main()
|
orionzhou/robin
|
apps/align.py
|
Python
|
gpl-2.0
| 12,128
|
[
"BLAST"
] |
ab216df9b2df022f0eaff7024dd3b845103f3dcaa45aeebe8ebcba4b280dcc63
|
# -*- coding: utf-8 -*-
# ***********************************************************************
# Copyright (C) 2016 - 2017 Oscar Gerardo Lazo Arjona *
# 2017 Benjamin Brecht *
# <oscar.lazoarjona@physics.ox.ac.uk> *
# ***********************************************************************
"""This is a library for simulations of the ORCA memory [1].
References:
[1] https://arxiv.org/abs/1704.00013
"""
from math import pi, sqrt, log
from scipy.constants import physical_constants, c, hbar, epsilon_0
import numpy as np
from matplotlib import pyplot as plt
from colorsys import hls_to_rgb
from settings_ladder import omega21, omega32
from scipy.constants import k as k_B
from scipy.special import hermite
from scipy.misc import factorial
from scipy.interpolate import interp1d
from sympy import Matrix, Integer
from sympy import zeros as symb_zeros
from sympy import factorial as symb_factorial
from scipy.special import ai_zeros
def optimal_mesh(n, tau, T, D):
r"""Get the optimal mesh for a Hermite-Gauss function."""
bandwidth = hg_bandwidth(n, tau)
dt = 1/bandwidth/10.0
dz = c*dt
Nt = int(T/dt)
Nz = int(D/dz)
return Nt, Nz
def last_root(n):
r"""Get the last zero of the corresponding Hermite polynomial."""
# i1 = ai_zeros(1)[0][0]/-3**(-1/3.0)
if n < 0:
raise ValueError
if n == 0:
return None
a1 = ai_zeros(1)[0][0]/-2**(1/3.0)
Lam = np.sqrt(2*n+1)
root = Lam
root += (-a1)*Lam**(-1/3.0)
root += (-1/10.0 * a1**2) * Lam**(-5/3.0)
root += (9/280.0 - 11/350.0 * a1**3) * Lam**(-3.0)
root += (277/12600.0 * a1 - 823/63000.0 * a1**4) * Lam**(-13.0/3.0)
return root/np.sqrt(2*np.pi)
def hg_bandwidth(n, tau):
r"""Get the bandwidth of a Hermite-Gauss function."""
if n == 0:
bandwidth = 1.0
else:
bandwidth = last_root(n)+1.0
bandwidth = bandwidth/tau
return bandwidth
def hg_duration(n, tau):
r"""Get the pulse duration of a Hermite-Gauss function."""
return hg_bandwidth(n, tau)*tau**2*np.sqrt(2*np.pi)
def optimal_signal_bandwidth(L, tau2):
r"""For given tau2 (control field duration) and L we pick the \
maximum bandwidth.
"""
#
bandwidth_c = 1/tau2
bandwidth_L = c/L
bandwidth_probe = max([bandwidth_c, bandwidth_L])
# print "bandwidths L, control, probe:",
# print bandwidth_L*1e-9, bandwidth_c*1e-9, bandwidth_probe*1e-9
return bandwidth_probe
def get_coeffs(order, accur, direction="backward"):
"""The coefficients of a discrete derivative.
INPUT:
- ``order`` - an integer, the order of the derivative.
    - ``accur`` - an integer, the accuracy of the derivative.
- ``direction`` - a string indicating the direction of the derivative, \
either `backward`, `centered`, or `forward`. By default, `backward`.
OUTPUT:
- A list of symbolic rational numbers.
The most common approximation.
>>> get_coeffs(1, 1)
[-1, 1]
A centered, second order derivative.
>>> get_coeffs(2, 2, "centered")
[1, -2, 1, 0]
A high accuracy first derivative.
>>> get_coeffs(1, 8)
[1/8, -8/7, 14/3, -56/5, 35/2, -56/3, 14, -8, 761/280]
A high accuracy fifth derivative.
>>> get_coeffs(5, 4)
[35/6, -305/6, 195, -2581/6, 1790/3, -1065/2, 895/3, -575/6, 27/2]
"""
points = order+accur
if direction == "backward":
s = Matrix([Integer(i) for i in range(-points+1, 1)]).transpose()
elif direction == "forward":
s = Matrix([Integer(i) for i in range(points)]).transpose()
elif direction == "centered":
if accur % 2 != 0:
s = "accurracy has to be even for centered derivatives."
raise ValueError(s)
s = Matrix([Integer(i)
for i in range(-points/2+1, points/2+1)]).transpose()
else:
        s = 'direction options are "backward", "centered", "forward"'
raise ValueError(s)
S = symb_zeros(points, points)
for i in range(points):
S[i, :] = Matrix([s[0, j]**i for j in range(points)]).transpose()
d = symb_zeros(points, 1)
d[order] = symb_factorial(order)
sol = S.inv()*d
return [i for i in sol]
def calculate_coeff_table(accur_max, direction="backward", numeric=True):
u"""Calculate a table of coefficients for higher accurracy first order \
backward derivatives.
INPUT:
- ``order`` - an integer, the order of the derivatives.
- ``accur_max`` - an integer, the maximum accurracy of the derivatives \
in the table.
- ``numeric`` - a boolean indicating whether to return symbolic numbers, \
floats. By default, `True`.
OUTPUT:
- Either a list of lists with symbolic numbers or a numpy array.
>>> from sympy import pprint
>>> table = calculate_coeff_table(5, numeric=False)
>>> pprint(table)
⎡ 0 0 0 0 -1 1 ⎤
⎢ ⎥
⎢ 0 0 0 1/2 -2 3/2 ⎥
⎢ ⎥
⎢ 0 0 -1/3 3/2 -3 11/6⎥
⎢ ⎥
⎢ 25 ⎥
⎢ 0 1/4 -4/3 3 -4 ── ⎥
⎢ 12 ⎥
⎢ ⎥
⎢ 137 ⎥
⎢-1/5 5/4 -10/3 5 -5 ─── ⎥
⎣ 60 ⎦
The equivalent forward table:
>>> table = calculate_coeff_table(5, direction="forward", numeric=False)
>>> pprint(table)
⎡ -1 1 0 0 0 0 ⎤
⎢ ⎥
⎢-3/2 2 -1/2 0 0 0 ⎥
⎢ ⎥
⎢-11/6 3 -3/2 1/3 0 0 ⎥
⎢ ⎥
⎢-25 ⎥
⎢──── 4 -3 4/3 -1/4 0 ⎥
⎢ 12 ⎥
⎢ ⎥
⎢-137 ⎥
⎢───── 5 -5 10/3 -5/4 1/5⎥
⎣ 60 ⎦
>>> print calculate_coeff_table(5)
[[ 0. 0. 0. 0. -1. 1. ]
[ 0. 0. 0. 0.5 -2. 1.5 ]
[ 0. 0. -0.33333333 1.5 -3. 1.83333333]
[ 0. 0.25 -1.33333333 3. -4. 2.08333333]
[-0.2 1.25 -3.33333333 5. -5. 2.28333333]]
"""
order = 1
coef_table = [[0 for ii in range(accur_max+1)] for jj in range(accur_max)]
for i in range(1, accur_max+1):
tab = get_coeffs(order, i, direction)
if direction == "backward":
coef_table[i-1][accur_max+1-(order+i):] = tab
elif direction == "forward":
coef_table[i-1][:order+i] = tab
elif direction == "centered":
s = "I hate centered tables, go make your own."
raise NotImplementedError(s)
else:
raise ValueError
coef_table = [coef_table[jj][:accur_max+1] for jj in range(accur_max)]
if numeric:
coef_table = np.array([[float(coef_table[ii][jj])
for jj in range(accur_max+1)]
for ii in range(accur_max)])
else:
coef_table = Matrix(coef_table)
return coef_table
def Dt_order_backward(f, t, coef_table, accur=1):
"""A backward derivative with accurracy `accur`.
INPUT:
- ``f`` - a numpy array representing a function.
- ``t`` - a numpy array representing the funtion's independent variable.
- ``coef_table`` - a coefficient table calculated with \
`calculate_coef_table`.
- ``accur`` - an int indicating the desired accurracy.
OUTPUT:
A number representing the derivative of f at the last point `t[-1]`.
"""
dt = t[1]-t[0]
max_accur = len(coef_table)
coefs = coef_table[accur-1][max_accur-accur:]
return sum([coefs[j]*f[j] for j in range(accur + 1)])/dt
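# A minimal usage sketch: with a table of maximum accuracy 4 and accur=2, the
# derivative at t[-1] uses the last three samples, e.g.
#   table = calculate_coeff_table(4)
#   Dt_order_backward(f[-3:], t[-3:], table, accur=2)
# evaluates (0.5*f[-3] - 2.0*f[-2] + 1.5*f[-1])/dt.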
def derivative_bounds(ii, accur_max, Nt, direction="forward"):
"""An auxiliary function to get the bounds necessary to calculate \
derivatives of the boundary conditions..."""
if direction == "forward":
if ii == Nt-1:
return (Nt-2, Nt)
elif ii <= Nt-accur_max:
return (ii, ii+accur_max+1)
else:
return (ii, Nt)
elif direction == "backward":
if ii <= accur_max:
return (0, ii+1)
else:
return (ii-accur_max, ii+1)
def sketch_cell(params, folder="", name="sketch"):
r"""Plot a sketch of the cell in space-time showing control and signal \
fields.
"""
def Omega2(ti, Z, Omega2_peak, tau2, t0w, t0r, alpha_rw):
Om2 = Omega2_peak*np.exp(
-4*log(2.0)*(ti-t0w+Z/c)**2/tau2**2)
Om2 += Omega2_peak*np.exp(
-4*log(2.0)*(ti-t0r+Z/c)**2/tau2**2)*alpha_rw
return Om2
if True:
USE_SQUARE_CTRL = params["USE_SQUARE_CTRL"]
USE_HG_CTRL = params["USE_HG_CTRL"]
sigma_power2 = params["sigma_power2"]
t0s = params["t0s"]
t0w = params["t0w"]
t0r = params["t0r"]
alpha_rw = params["alpha_rw"]
nw = params["nw"]
nr = params["nr"]
T = params["T"]
L = params["L"]
Nt = params["Nt"]/params["sampling_rate"]
Nz = params["Nz"]
tau2 = params["tau2"]
tau1 = params["tau1"]
t_cutoff = params["t_cutoff"]
Omega2_peak = 1.0
t_ini = np.linspace(0, T, Nt)
Z = build_Z_mesh(L, Nz)
if USE_HG_CTRL:
Om2_mesh = [Omega2_HG(Z, ti, sigma_power2, sigma_power2,
Omega2_peak, t0w, t0r,
alpha_rw, nw=nw, nr=nr) for ti in t_ini]
elif USE_SQUARE_CTRL:
Om2_mesh = [Omega2_square(Omega2_peak, Z, ti, tau2, t0w, t0r,
alpha_rw) for ti in t_ini]
slice_ = Omega2_square(Omega2_peak, Z, t0w, tau2, t0w, t0r,
alpha_rw)
else:
Om2_mesh = [Omega2(ti, Z, Omega2_peak, tau2, t0w, t0r, alpha_rw)
for ti in t_ini]
slice_ = Omega2(t0w, Z, Omega2_peak, tau2, t0w, t0r, alpha_rw)
Om2_mesh = np.array(Om2_mesh)
Om2_mesh = Om2_mesh**2
plt.close("all")
cp = plt.pcolormesh(Z*100, t_ini*1e9, Om2_mesh)
plt.colorbar(cp)
input_signal1 = t0s-tau1/2+Z/c
input_signal2 = t0s+tau1/2+Z/c
output_signal1 = t0r-tau1/2+Z/c
output_signal2 = t0r+tau1/2+Z/c
plt.plot(Z*100, input_signal1*1e9, "b-")
plt.plot(Z*100, input_signal2*1e9, "b-")
plt.plot(Z*100, output_signal1*1e9, "b-")
plt.plot(Z*100, output_signal2*1e9, "b-")
plt.plot(Z*100, np.ones(len(Z))*t_cutoff*1e9, "g-")
plt.plot([-L/2*100, -L/2*100], [0, T*1e9], "r-", linewidth=1)
plt.plot([L/2*100, L/2*100], [0, T*1e9], "r-", linewidth=1)
plt.xlabel(r"$ Z \ (\mathrm{cm})$", fontsize=20)
plt.ylabel(r"$ t \ (\mathrm{ns})$", fontsize=20)
plt.xlim([Z[0]*100, Z[-1]*100])
plt.ylim(0, T*1e9)
plt.savefig(folder+"params_Om2_"+name+".png", bbox_inches="tight")
plt.close("all")
plt.plot(Z*100, slice_, "b+-")
plt.savefig(folder+"params_control_"+name+".png", bbox_inches="tight")
plt.close("all")
def build_Z_mesh(L, Nz):
r"""Return a Z mesh for a given cell length and number of points."""
D = L*1.05
zL = -0.5 * D
cheb_diff_mat, cheb_mesh = cheb(Nz-1)
cheb_diff_mat = cheb_diff_mat.T / zL
Z = zL * cheb_mesh.T
return Z
def heaviside(x):
r"""The Heaviside function."""
return np.where(x <= 0, 0.0, 1.0) + np.where(x == 0, 0.5, 0.0)
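# For example, heaviside(np.array([-1.0, 0.0, 1.0])) returns
# array([0. , 0.5, 1. ]).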
def upper_hyperfine_density(element, isotope, Temperature):
r"""We calculate the atomic density of the upper hyperfine level."""
if element == "Rb":
if isotope == 85:
fground = [2, 3]
elif isotope == 87:
fground = [1, 2]
elif element == "Cs":
if isotope == 133:
fground = [3, 4]
n_atomic0 = vapour_number_density(Temperature, element)
upper_fraction = (2*fground[1]+1)/(2*fground[0]+1.0 + 2*fground[1]+1.0)
return upper_fraction*n_atomic0
def cell_atomic_density(element, isotope, Temperature, L, Nz,
upper_hyperfine=False):
r"""Return the atomic density as a function of Z."""
Z = build_Z_mesh(L, Nz)
if upper_hyperfine:
n_atomic0 = upper_hyperfine_density(element, isotope, Temperature)
else:
n_atomic0 = vapour_number_density(Temperature, element)
return n_atomic0*(-heaviside(Z - 0.5 * L) + heaviside(0.5 * L + Z))
def empty_points(n_atomic):
r"""Return the number of points that are empty in a cell mesh."""
empty = 0
for i in range(len(n_atomic)):
if n_atomic[i] == 0.0:
empty += 1
else:
break
return empty
def interpolator(xp, fp, kind="linear"):
r"""Return an interpolating function that extrapolates to zero."""
F = interp1d(xp, fp, kind)
def f(x):
if isinstance(x, np.ndarray):
return np.array([f(xi) for xi in x])
if xp[0] <= x <= xp[-1]:
return F(x)
else:
return 0.0
return f
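# A minimal usage sketch: f = interpolator([0.0, 1.0, 2.0], [0.0, 1.0, 4.0])
# gives f(1.5) == 2.5 (linear interpolation) and f(5.0) == 0.0 (outside the
# data range the function extrapolates to zero).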
def num_integral(t, f):
"""We integrate using the trapezium rule."""
dt = t[1]-t[0]
F = sum(f[1:-1])
    F += (f[0] + f[-1])*0.5
return np.real(F*dt)
def hg(n, x, x0, sigma):
"""Generate normalized Hermite-Gauss mode.
That is,
.. math::
int |HG(x)|^2 dx = 1.
Note that for the purpose of this code, the mode is re-normalised
such that the 0th order mode (the fundamental Gaussian) has a
peak height of one. This renormalisation is necessary to conform
to the definitions in the quantum memories code.
"""
X = (x - x0) / sigma
result = hermite(n)(X) * np.exp(-X**2 / 2) /\
sqrt(factorial(n) * sqrt(pi) * 2**n * sigma)
# In the next line, the renormalisation happens.
result *= sqrt(sqrt(pi) * sigma)
return result
def Omega2_HG(Z, ti, sigma2w, sigma2r, Omega2, t0w, t0r,
alpha_rw, nw=0, nr=0):
r"""Calculate the control field distribution.
This function allows you to choose different energies, widths,
and temporal modes for write and read pulses, respectively.
Arguments:
Z -- position axis (numpy.ndarray)
ti -- current instant in time
sigma2w -- spectral intensity FWHM of the write pulse
sigma2r -- spectral intensity FWHM of the read pulse
Omega2 -- peak Rabi frequency of the write pulse
t0w -- temporal offset of the write pulse
t0r -- temporal offset of the read pulse
alpha_rw -- scaling between write and read pulse
Keyword Arguments:
nw -- temporal mode order of the write pulse (default: 0)
nr -- temporal mode order of the read pulse (default: 0)
c -- speed of light (default: 299792458 m/s)
Return:
ctrl -- numpy.ndarray containing the complex control field
"""
c = 299792458.0
tauw = sqrt(log(2)) / (pi * sigma2w) # width of write pulse
taur = sqrt(log(2)) / (pi * sigma2r) # width of read pulse
# Calculate the write pulse
ctrl_w = Omega2 * hg(nw, t0w - Z / c, ti, tauw)
# Calculate the read pulse
ctrl_r = Omega2 * hg(nr, t0r - Z / c, ti, taur)
ctrl = ctrl_w + alpha_rw * ctrl_r
return ctrl
def Omega1_boundary_HG(t, sigma1, Omega1, t0s, D, ns=0):
r"""Calculate the boundary conditions for the signal field.
Arguments:
t -- time axis (numpy.ndarray).
sigma1 -- spectral intensity FWHM of the signal pulse.
Omega1 -- peak Rabi frequency of the signal pulse.
t0s -- temporal offset of the signal pulse.
D -- spatial extent of the calculation.
Keyword Arguments:
ns -- temporal mode order of the signal pulse (default: 0)
c -- speed of light (default: 299792458 m/s)
Return:
sig_bound -- numpy.ndarray containing the complex signal
"""
tau = sqrt(log(2)) / (pi * sigma1)
sig_bound = Omega1 * hg(ns, t, t0s - D / 2 / c, tau)
return sig_bound
def Omega1_initial_HG(Z, sigma1, Omega1, t0s, ns=0):
r"""Calculate the initial signal field.
Arguments:
Z -- space axis (numpy.ndarray)
sigma1 -- spectral intensity FWHM of the signal pulse
Omega1 -- peak Rabi frequency of the signal pulse
t0s -- temporal offset of the signal pulse
Keyword Arguments:
ns -- temporal mode order of the signal pulse (default: 0)
c -- speed of light (default: 299792458 m/s)
Return:
sig_init -- numpy.ndarray containing the complex initial signal
"""
c = 299792458.0
tau = sqrt(log(2)) / (pi * sigma1)
sig_init = Omega1 * hg(ns, -t0s, Z / c, tau)
return sig_init
def square(t, tau):
r"""This is a template."""
f = np.where(t/tau >= -0.5, 1.0, 0.0)*np.where(t/tau <= 0.5, 1.0, 0.0)
f = f*sqrt(tau)
return f
def Omega2_square(Omega2, Z, ti, tau2, t0w, t0r, alpha_rw):
r"""Calculate the control field Rabi frequency for a specific ti."""
c = 299792458.0
pulse = Omega2/np.sqrt(tau2)*square((ti-t0w) + Z/c, tau2)
pulse += Omega2/np.sqrt(tau2)*square((ti-t0r) + Z/c, tau2)*alpha_rw
return pulse
def simple_complex_plot(x, y, f, name, amount="", modsquare=False):
"""Plot the real, imaginary and mod square of a function f."""
plt.figure(figsize=(18, 6))
fs = 15
plt.subplot(1, 3, 1)
plt.title(r"$ \mathfrak{Re}"+amount+"$", fontsize=fs)
cs = plt.pcolormesh(x, y, np.real(f))
plt.xlabel(r"$Z \ \mathrm{(cm)}$", fontsize=fs)
plt.ylabel(r"$t \ \mathrm{(ns)}$", fontsize=fs)
plt.colorbar(cs)
plt.subplot(1, 3, 2)
plt.title(r"$ \mathfrak{Im}"+amount+"$", fontsize=fs)
cs = plt.pcolormesh(x, y, np.imag(f))
plt.xlabel(r"$Z \ \mathrm{(cm)}$", fontsize=fs)
plt.colorbar(cs)
plt.subplot(1, 3, 3)
# plt.axes().set_aspect("equal")
plt.title(r"$|"+amount+"|^2$", fontsize=fs)
if modsquare:
plt.title(r"$|"+amount+"|^2$", fontsize=fs)
cs = plt.pcolormesh(x, y, np.real(f*f.conjugate()))
else:
plt.title(r"$|"+amount+"|$", fontsize=fs)
cs = plt.pcolormesh(x, y, np.abs(f))
# Nz = len(x)
# for i in range(len(y)):
# # print y[0], y[i], y[-1]
# plt.plot(x, np.ones(Nz)*y[i], "r+", ms=5)
# plt.ylim([0.85, 0.89])
plt.xlabel(r"$Z \ \mathrm{(cm)}$", fontsize=fs)
plt.colorbar(cs)
plt.savefig(name, bbox_inches="tight")
plt.close("all")
def colorize(z):
r"""Return an array of rgb tuples to visualize a complex matrix.
    The lightness of each pixel represents the magnitude of the corresponding
    number, and its hue represents the argument.
"""
r = np.abs(z)
arg = np.angle(z)
h = (arg + pi) / (2 * pi) + 0.5
l = 1.0 - 1.0/(1.0 + r**0.3)
s = 0.8
c = np.vectorize(hls_to_rgb)(h, l, s) # --> tuple
c = np.array(c) # --> array of (3,n,m) shape, but need (n,m,3)
c = c.swapaxes(0, 2)
return c
def cheb(N):
r"""Generate Chebyshev matrix."""
if N == 0:
D = 0.
x = 1.
else:
n = np.arange(0, N + 1)
x = np.cos(np.pi * n / N).reshape(N + 1, 1)
# x is a column vector
c = (np.hstack(([2.], np.ones(N-1), [2.])) * (-1)**n).reshape(N+1, 1)
# c is a column vector with a 2 as first and last element, not the
# speed of light! and ones in the middle. The signs are alternating.
X = np.tile(x, (1, N + 1))
# X combines N+1 colunm vectors x to a matrix
dX = X - X.T
D = np.dot(c, 1. / c.T) / (dX + np.eye(N + 1))
D -= np.diag(np.sum(D.T, axis=0))
return D, x.reshape(N+1)
# return the D matrix and x as a row vector
def cDz(fz, c, cheb_diff_mat):
r"""Calculate the Z derivative times c."""
return c*np.dot(fz, cheb_diff_mat)
def set_parameters_ladder(custom_parameters=None, fitted_couplings=True):
r"""Set the parameters for a ladder memory.
Only completely independent parameters are taken from settings.py.
The rest are derived from them.
"""
#########################################################################
# We set the default values of independent parameters
if True:
# rewrite = True; rewrite = False
calculate_atom = False # ; calculate_atom = True
# calculate_bloch = False # ; calculate_bloch=True
# make_smoother = True # ; make_smoother=False
# change_rep_rate = True # ; change_rep_rate=False
# change_read_power = True # ; change_read_power=False
ignore_lower_f = False; ignore_lower_f = True
# run_long = False; run_long = True
# optimize = True; optimize = False
verbose = 1
# We choose the units we want.
units = "SI" # ; units="fancy"
if verbose >= 2: print "We are using "+units+" units!"
a0 = physical_constants["Bohr radius"][0]
e_charge = physical_constants["elementary charge"][0]
kB = physical_constants["Boltzmann constant"][0]
# The extent of the simulation given by the number of dynamic variables
# Nrho, the number of time steps Nt, and the number of z points Nz.
Nrho = 2
Nt = 25500
Nz = 50
# The number of velocity groups to consider (better an odd number)
Nv = 9
# The number of standard deviations to consider on either side
# of the velocity distribution.
Nsigma = 4
# The data for the time discretization.
# The total time of the simulation (in s).
T = 8e-9
# T = 16e-9
# The time step.
# dt = T/(Nt-1)
# The data for the spacial discretization.
# Cell length (in m).
L = 0.072
# optical_depth = 0.05e5
# The simulation will be done spanning -D/2 <= z <= D/2
# zL = -0.5 * D # left boundary of the simulation
# zR = +0.5 * D # right boundary of the simulation
######################
# The temperature of the cell.
Temperature = 90.0 + 273.15
# We should be able to choose whether to keep all of data, to
# just keep a sample at a certain rate, or to keep only the
# current-time data.
keep_data = "all"
keep_data = "sample"
# The sampling rate for the output. If sampling_rate=2 every
# second time step will be saved in memory and returned. If
# Nt is a multiple of sampling_rate then the length of the
# output should be Nt/sampling_rate.
sampling_rate = 50
################################################
# The characteristics of the beams:
# The waists of the beams (in m):
w1 = 280e-6
w2 = 320e-6
# The full widths at half maximum of the gaussian envelope of
# the powers spectra (in Hz).
sigma_power1 = 1.0e9
sigma_power2 = 1.0e9
sigma_power1 = 0.807222536902e9
# sigma_power1 = 1.0e9
sigma_power2 = 0.883494520871e9
# We calculate the duration of the pulses from the standard deviations
# tau1 = 2/pi * sqrt(log(2.0))/sigma_power1
# tau2 = 2/pi * sqrt(log(2.0))/sigma_power2
# tau1 = 2*sqrt(2)*log(2)/pi / sigma_power1
# tau2 = 2*sqrt(2)*log(2)/pi / sigma_power2
# The time of arrival of the beams
t0s = 1.1801245283489222e-09
t0w = t0s
t0r = t0w + 3.5e-9
alpha_rw = 1.0
# t_cutoff = t0r+D/2/c+tau1
t_cutoff = 3.0e-9
######################
# The detuning of the signal field (in Hz):
delta1 = -2*pi*6e9
# The detuning of the control field (in Hz):
delta2 = -delta1
# This is the two-photon transition condition.
##################################################################
# We choose an atom:
element = "Cs"; isotope = 133; n_atom = 6
# Control pulse energy.
energy_pulse2 = 50e-12 # Joules.
################################################
Omega = 1.0 # We choose the frequencies to be in radians/s.
distance_unit = 1.0
# The fancy units should be picked so that the factors multiplied in
# each of the terms of the equations are of similar magnitude.
# Ideally, the various terms should also be of similar magnitude, but
# changing the units will not change the relative importance of terms.
# Otherwise physics would change depending on the units!
# However, it should be possible to choose units such that the largest
# terms should be close to 1.
# We set the default values of the independent parameters.
pms = {"e_charge": e_charge,
"hbar": hbar,
"c": c,
"epsilon_0": epsilon_0,
"kB": kB,
"Omega": Omega,
"distance_unit": distance_unit,
"element": element,
"isotope": isotope,
"Nt": Nt,
"Nz": Nz,
"Nv": Nv,
"Nrho": Nrho,
"T": T,
"L": L,
"sampling_rate": sampling_rate,
"keep_data": keep_data,
"Temperature": Temperature,
"Nsigma": Nsigma,
"delta1": delta1,
"sigma_power1": sigma_power1,
"sigma_power2": sigma_power2,
"w1": w1,
"w2": w2,
"t0s": t0s,
"t0w": t0w,
"t0r": t0r,
"energy_pulse2": energy_pulse2,
"alpha_rw": alpha_rw,
"t_cutoff": t_cutoff,
"element": element,
"isotope": isotope,
"verbose": verbose}
#########################################################################
# We replace independent parameters by custom ones if given.
if True:
if custom_parameters is None:
custom_parameters = {}
pm_names_ind = ["e_charge", "hbar", "c", "epsilon_0", "kB",
"Omega", "distance_unit", "element", "isotope", "Nt",
"Nz", "Nv", "Nrho", "T", "L", "sampling_rate",
"keep_data", "Temperature", "Nsigma", "delta1",
"sigma_power1", "sigma_power2", "w1", "w2",
"t0s", "t0w", "t0r",
"energy_pulse2", "alpha_rw", "t_cutoff",
"element", "isotope", "verbose",
"USE_HG_CTRL", "USE_HG_SIG", "USE_SB_SIG",
"USE_SQUARE_CTRL", "USE_SQUARE_SIG"]
pm_names_dep = ["mass", "gamma21", "gamma32", "omega21", "omega32",
"omega_laser1", "omega_laser2", "delta2", "r1", "r2",
"ns", "nw", "nr", "tau1", "tau2", "energy_pulse1"]
for i in custom_parameters:
if (i not in pm_names_ind) and (i not in pm_names_dep):
raise ValueError(str(i)+" is not a valid parameter name.")
for name in pm_names_ind:
if name in custom_parameters:
pms.update({name: custom_parameters[name]})
if type(custom_parameters[name]) is str:
s = name+"= '"+str(custom_parameters[name])+"'"
else:
s = name+"= "+str(custom_parameters[name])
exec(s)
#########################################################################
# We calculate dependent parameters
if calculate_atom:
from fast import State, Transition, make_list_of_states
from fast import calculate_boundaries, Integer
from fast import calculate_matrices
from fast import fancy_r_plot, fancy_matrix_plot
from fast import vapour_number_density
from matplotlib import pyplot
# atom = Atom(element, isotope)
n_atomic0 = vapour_number_density(Temperature, element)
g = State(element, isotope, n_atom, 0, 1/Integer(2))
e = State(element, isotope, n_atom, 1, 3/Integer(2))
l = State(element, isotope, n_atom, 2, 5/Integer(2))
fine_states = [g, e, l]
magnetic_states = make_list_of_states(fine_states,
"magnetic", verbose=0)
bounds = calculate_boundaries(fine_states, magnetic_states)
g_index = bounds[0][0][1]-1
e_index = bounds[0][1][1]-1
l_index = bounds[1][6][1]-1
g = magnetic_states[g_index]
e = magnetic_states[e_index]
l = magnetic_states[l_index]
if verbose >= 1:
print
print "Calculating atomic properties ..."
print "We are choosing the couplings of"
print magnetic_states[g_index], magnetic_states[e_index],
print magnetic_states[l_index]
print "as a basis to estimate the values of gamma_ij, r^l."
# We calculate the matrices for the given states.
Omega = 1.0 # We choose the frequencies to be in radians/s.
distance_unit = 1.0
omega, gamma, r = calculate_matrices(magnetic_states, Omega)
# We plot these matrices.
path = ''; name = element+str(isotope)
fig = pyplot.figure(); ax = fig.add_subplot(111)
fancy_matrix_plot(ax, omega, magnetic_states, path,
name+'_omega.png',
take_abs=True, colorbar=True)
fig = pyplot.figure(); ax = fig.add_subplot(111)
fancy_matrix_plot(ax, gamma, magnetic_states, path,
name+'_gamma.png',
take_abs=True, colorbar=True)
fig = pyplot.figure(); ax = fig.add_subplot(111)
fancy_r_plot(r, magnetic_states, path, name+'_r.png',
complex_matrix=True)
pyplot.close("all")
# We get the parameters for the simplified scheme.
# The couplings.
r1 = r[2][e_index][g_index]
r2 = r[2][l_index][e_index]
# The FAST function calculate_matrices always returns r in
# Bohr radii, so we convert. By contrast, it returns omega
# and gamma in units scaled by Omega. If Omega=1e6 this means
# 10^6 rad/s. So we do not have to rescale omega or gamma.
r1 = r1*a0
r2 = r2*a0
# The decay frequencies.
gamma21 = gamma[e_index][g_index]
gamma32 = gamma[l_index][e_index]
# print gamma21, gamma32
# We determine which fraction of the population is in the lower
# and upper ground states. The populations will be approximately
# those of a thermal state. At room temperature the populations
# of all Zeeman states will be approximately equal.
fs = State(element, isotope, n_atom, 0, 1/Integer(2)).fperm
# lower_fraction = (2*fs[0]+1)/(2*fs[0]+1.0 + 2*fs[1]+1.0)
upper_fraction = (2*fs[1]+1)/(2*fs[0]+1.0 + 2*fs[1]+1.0)
if ignore_lower_f:
g_index = bounds[0][0][1]-1
e_index = bounds[1][3][1]-1
g = magnetic_states[g_index]
e = magnetic_states[e_index]
n_atomic0 = upper_fraction*n_atomic0
else:
g_index = bounds[0][0][1]-1
e_index = bounds[0][1][1]-1
l_index = bounds[1][6][1]-1
g = magnetic_states[g_index]
e = magnetic_states[e_index]
l = magnetic_states[l_index]
omega21 = Transition(e, g).omega
omega32 = Transition(l, e).omega
# print omega21, omega32
# print r1, r2
# print n_atomic0
# print atom.mass
else:
if (element, isotope) == ("Rb", 85):
gamma21, gamma32 = (38107518.888, 3102649.47106)
if ignore_lower_f:
omega21, omega32 = (2.4141820325e+15, 2.42745336743e+15)
else:
omega21, omega32 = (2.41418319096e+15, 2.42745220897e+15)
r1, r2 = (2.23682340192e-10, 5.48219440757e-11)
mass = 1.40999341816e-25
if ignore_lower_f:
n_atomic0 = 1.8145590576e+18
else:
n_atomic0 = 3.11067267018e+18
elif (element, isotope) == ("Rb", 87):
gamma21, gamma32 = (38107518.888, 3102649.47106)
if ignore_lower_f:
omega21, omega32 = (2.41417295963e+15, 2.42745419204e+15)
else:
omega21, omega32 = (2.41417562114e+15, 2.42745153053e+15)
r1, r2 = (2.23682340192e-10, 5.48219440757e-11)
mass = 1.44316087206e-25
if ignore_lower_f:
n_atomic0 = 1.94417041886e+18
else:
n_atomic0 = 3.11067267018e+18
elif (element, isotope) == ("Cs", 133):
gamma21, gamma32 = (32886191.8978, 14878582.8074)
if ignore_lower_f:
omega21, omega32 = (2.20993141261e+15, 2.05306420003e+15)
else:
omega21, omega32 = (2.20993425498e+15, 2.05306135765e+15)
r1, r2 = (2.37254506627e-10, 1.54344650829e-10)
mass = 2.2069469161e-25
if ignore_lower_f:
n_atomic0 = 4.72335166533e+18
else:
n_atomic0 = 8.39706962725e+18
# The frequencies of the optical fields.
omega_laser1 = delta1 + omega21
omega_laser2 = delta2 + omega32
######################
# The energies of the photons.
energy_phot1 = hbar*omega_laser1
# The energies of the pulses.
energy_pulse1 = 1*energy_phot1 # Joules.
delta1 = pms["delta1"]
delta2 = -delta1
omega_laser1 = delta1 + omega21
omega_laser2 = delta2 + omega32
tau1 = 2*sqrt(2)*log(2)/pi / sigma_power1
tau2 = 2*sqrt(2)*log(2)/pi / sigma_power2
pms.update({"omega_laser1": omega_laser1, "omega_laser2": omega_laser2})
# We make a few checks
if pms["Nv"] == 2:
raise ValueError("Nv = 2 is a very bad choice.")
if pms["Nt"] % pms["sampling_rate"] != 0:
raise ValueError("Nt must be a multiple of the sampling_rate.")
pms.update({"mass": mass,
"gamma21": gamma21,
"gamma32": gamma32,
"omega21": omega21,
"omega32": omega32,
"omega_laser1": omega_laser1,
"omega_laser2": omega_laser2,
"delta2": delta2,
"r1": r1,
"r2": r2,
"energy_pulse1": energy_pulse1,
"energy_pulse2": energy_pulse2,
"tau1": tau1,
"tau2": tau2,
"ns": 1,
"nw": 1,
"nr": 1,
"USE_HG_CTRL": False,
"USE_HG_SIG": False,
"USE_SB_SIG": False})
if "USE_SQUARE_SIG" not in custom_parameters:
pms.update({"USE_SQUARE_SIG": False})
if "USE_SQUARE_CTRL" not in custom_parameters:
pms.update({"USE_SQUARE_CTRL": False})
cond1 = "r1" not in custom_parameters
cond2 = "r2" not in custom_parameters
if fitted_couplings and cond1 and cond2:
pms.update({"r1": pms["r1"]*0.2556521})
pms.update({"r2": pms["r2"]*0.72474758})
# We force any custom dependent parameters.
for name in pm_names_dep:
if name in custom_parameters:
if pms["verbose"] >= 1:
print "WARNING: parameter", name,
print "may be inconsistent with independent parameters."
pms.update({name: custom_parameters[name]})
return pms
def set_parameters_lambda(custom_parameters=None, fitted_couplings=True):
r"""Set the parameters for a lambda memory.
Only completely independent parameters are taken from settings.py.
The rest are derived from them.
"""
if custom_parameters is None:
custom_parameters = {}
pm_names = ["magic", "red_detuned",
"e_charge", "hbar", "c", "epsilon_0", "kB",
"Omega", "distance_unit", "element", "isotope",
"Nt", "Nz", "Nv", "Nrho", "T", "L", "sampling_rate",
"keep_data", "mass", "Temperature", "Nsigma",
"gamma31", "gamma32", "omega31", "omega31", "omega31",
"r31", "r32",
"delta1", "sigma_power1", "sigma_power2",
"w1", "w2", "energy_pulse31", "energy_pulse32",
"t0s", "t0w", "t0r", "alpha_rw", "t_cutoff", "verbose"]
pms = {}
for i in custom_parameters:
if i not in pm_names:
raise ValueError(str(i)+" is not a valid parameter name.")
for name in pm_names:
if name in custom_parameters:
pms.update({name: custom_parameters[name]})
else:
s = "from settings_lambda import "+name
exec(s)
s = "pms.update({'"+name+"':"+name+"})"
exec(s)
delta1 = pms["delta1"]
delta2 = -delta1
omega_laser1 = delta1 + omega21
omega_laser2 = delta2 + omega32
pms.update({"omega_laser1": omega_laser1, "omega_laser2": omega_laser2})
# We make a few checks
if pms["Nv"] == 2:
raise ValueError("Nv = 2 is a very bad choice.")
if pms["Nt"] % pms["sampling_rate"] != 0:
raise ValueError("Nt must be a multiple of the sampling_rate.")
return pms
def efficiencies(t, Om1, params, plots=False, name="",
explicit_decoherence=1.0, rabi=True):
r"""Calculate the efficiencies for a given solution of the signal."""
e_charge = params["e_charge"]
hbar = params["hbar"]
c = params["c"]
epsilon_0 = params["epsilon_0"]
Omega = params["Omega"]
Nt = len(t)
r1 = params["r1"]
omega_laser1 = params["omega_laser1"]
w1 = params["w1"]
t_cutoff = params["t_cutoff"]
# We calculate the number of photons.
if rabi:
const1 = np.pi*c*epsilon_0*hbar*(w1/e_charge/r1)**2/16.0/omega_laser1
else:
const1 = np.pi*c*epsilon_0*(w1)**2/16.0/hbar/omega_laser1
dphotons_ini_dt = const1 * np.real(Om1[:, +0]*Om1[:, +0].conjugate())
dphotons_out_dt = const1 * np.real(Om1[:, -1]*Om1[:, -1].conjugate())
dphase_ini = np.unwrap(np.angle(Om1[:, +0]))
# dphase_out = np.angle(Om1[:, -1])
# dphase_tra = np.array([dphase_out[i] for i in range(Nt)
# if t[i] < t_cutoff])
# dphase_ret = np.array([dphase_out[i] for i in range(Nt)
# if t[i] > t_cutoff])
dt = t[1]-t[0]
# We separate the output at the cutoff time.
dphotons_out_dt_tr = [dphotons_out_dt[i] for i in range(Nt)
if t[i] < t_cutoff]
dphotons_out_dt_re = [dphotons_out_dt[i] for i in range(Nt)
if t[i] > t_cutoff]
t_tr = np.array([t[i] for i in range(Nt) if t[i] < t_cutoff])
t_re = np.array([t[i] for i in range(Nt) if t[i] > t_cutoff])
dphotons_out_dt_tr = np.array(dphotons_out_dt_tr)
dphotons_out_dt_re = np.array(dphotons_out_dt_re)*explicit_decoherence
if plots:
fig, ax1 = plt.subplots()
ax1.plot(t*Omega*1e9, dphotons_ini_dt/Omega*1e-9, "g-",
label=r"$\mathrm{Signal} \ @ \ z=-D/2$")
ax1.plot(t_tr*Omega*1e9, dphotons_out_dt_tr/Omega*1e-9, "r-",
label=r"$\mathrm{Signal} \ @ \ z=+D/2$")
ax1.plot(t_re*Omega*1e9, dphotons_out_dt_re/Omega*1e-9, "b-",
label=r"$\mathrm{Signal} \ @ \ z=+D/2$")
ax1.set_xlabel(r"$ t \ (\mathrm{ns})$", fontsize=20)
ax1.set_ylabel(r"$ \mathrm{photons/ns}$", fontsize=20)
plt.legend(fontsize=15)
ax2 = ax1.twinx()
ax2.plot(t*Omega*1e9, dphase_ini*180/np.pi, "g:")
# ax2.plot(t_tr*Omega*1e9, dphase_tra*180/np.pi, "r:")
# ax2.plot(t_re*Omega*1e9, dphase_ret*180/np.pi, "b:")
ax2.set_ylabel(r"$ \mathrm{Phase \ (degrees)}$", fontsize=20)
plt.savefig(name+"_inout.png", bbox_inches="tight")
plt.close("all")
# We integrate using the trapezium rule.
Nin = sum(dphotons_ini_dt[1:-1])
Nin += (dphotons_ini_dt[0] + dphotons_ini_dt[-1])*0.5
Nin = Nin*dt
Ntr = sum(dphotons_out_dt_tr[1:-1])
Ntr += (dphotons_out_dt_tr[0] + dphotons_out_dt_tr[-1])*0.5
Ntr = Ntr*dt
Nre = sum(dphotons_out_dt_re[1:-1])
Nre += (dphotons_out_dt_re[0] + dphotons_out_dt_re[-1])*0.5
Nre = Nre*dt
eff_in = (Nin-Ntr)/Nin
eff_out = Nre/(Nin-Ntr)
eff = eff_in*eff_out
return eff_in, eff_out, eff
def vapour_pressure(Temperature, element):
r"""Return the vapour pressure of rubidium or cesium in Pascals.
This function receives as input the temperature in Kelvins and the
name of the element.
>>> print vapour_pressure(25.0 + 273.15,"Rb")
5.31769896107e-05
>>> print vapour_pressure(39.3 + 273.15,"Rb")
0.000244249795696
>>> print vapour_pressure(90.0 + 273.15,"Rb")
0.0155963687128
>>> print vapour_pressure(25.0 + 273.15,"Cs")
0.000201461144963
>>> print vapour_pressure(28.5 + 273.15,"Cs")
0.000297898928349
>>> print vapour_pressure(90.0 + 273.15,"Cs")
0.0421014384667
The element must be in the database.
>>> print vapour_pressure(90.0 + 273.15,"Ca")
Traceback (most recent call last):
...
ValueError: Ca is not an element in the database for this function.
References:
[1] Daniel A. Steck, "Cesium D Line Data," available online at
http://steck.us/alkalidata (revision 2.1.4, 23 December 2010).
[2] Daniel A. Steck, "Rubidium 85 D Line Data," available online at
http://steck.us/alkalidata (revision 2.1.5, 19 September 2012).
[3] Daniel A. Steck, "Rubidium 87 D Line Data," available online at
http://steck.us/alkalidata (revision 2.1.5, 19 September 2012).
"""
if element == "Rb":
Tmelt = 39.30+273.15 # K.
if Temperature < Tmelt:
P = 10**(2.881+4.857-4215.0/Temperature) # Torr.
else:
P = 10**(2.881+4.312-4040.0/Temperature) # Torr.
elif element == "Cs":
Tmelt = 28.5 + 273.15 # K.
if Temperature < Tmelt:
P = 10**(2.881+4.711-3999.0/Temperature) # Torr.
else:
P = 10**(2.881+4.165-3830.0/Temperature) # Torr.
else:
s = str(element)
s += " is not an element in the database for this function."
raise ValueError(s)
P = P * 101325.0/760.0 # Pascals.
return P
def vapour_number_density(Temperature, element):
r"""Return the number of atoms in a rubidium or cesium vapour in m^-3.
It receives as input the temperature in Kelvins and the
name of the element.
>>> print vapour_number_density(90.0 + 273.15,"Cs")
8.39706962725e+18
"""
return vapour_pressure(Temperature, element)/k_B/Temperature
class Measurement(object):
r"""A class for error propagation arithmetic."""
def __init__(self, value, sigma):
r"""A class for error propagation arithmetic."""
self.value = float(value)
self.sigma = sigma
def __str__(self):
r"""The string method for Measurement."""
return '('+str(self.value)+', '+str(self.sigma)+')'
def __mul__(self, other, cov=0.0):
r"""Multiplication."""
# Scalar multiplication
if isinstance(other, float) or isinstance(other, int):
return Measurement(other*self.value, abs(other)*self.sigma)
# Measurement multiplication
elif isinstance(other, Measurement):
sigmaf = self.value**2 * other.sigma**2
sigmaf += other.value**2 * self.sigma**2
sigmaf += 2*self.value*other.value*cov
sigmaf = sqrt(sigmaf)
return Measurement(self.value*other.value, sigmaf)
def __rmul__(self, other):
r"""Reverse multiplication."""
return self.__mul__(other)
def __add__(self, other, cov=0.0):
r"""Addition."""
# Scalar addition
if isinstance(other, float) or isinstance(other, int):
return Measurement(other+self.value, self.sigma)
# Measurement addition
elif isinstance(other, Measurement):
sigmaf = self.sigma**2 + other.sigma**2 + 2*cov
sigmaf = sqrt(sigmaf)
return Measurement(self.value + other.value, sigmaf)
def __radd__(self, other):
r"""Reverse addition."""
return self.__add__(other)
def __sub__(self, other, cov=0.0):
r"""Substraction."""
# Scalar substraction
if isinstance(other, float) or isinstance(other, int):
return Measurement(-other+self.value, self.sigma)
        # Measurement subtraction
elif isinstance(other, Measurement):
sigmaf = self.sigma**2 + other.sigma**2 - 2*cov
sigmaf = sqrt(sigmaf)
return Measurement(self.value - other.value, sigmaf)
def __rsub__(self, other):
r"""Reverse substraction."""
if isinstance(other, float) or isinstance(other, int):
other = Measurement(other, 0.0)
return other.__sub__(self)
def __div__(self, other, cov=0.0):
r"""Division."""
# Scalar division.
if isinstance(other, float) or isinstance(other, int):
other = Measurement(other, 0.0)
# Measurement division.
sigmaf = (self.sigma/self.value)**2
sigmaf += (other.sigma/other.value)**2 - 2*cov/(self.value*other.value)
sigmaf = sqrt(sigmaf)
sigmaf = sqrt((self.value/other.value)**2)*sigmaf
return Measurement(self.value / other.value, sigmaf)
def __rdiv__(self, other):
r"""Reverse division."""
if isinstance(other, float) or isinstance(other, int):
other = Measurement(other, 0.0)
return other.__div__(self)
def __neg__(self):
r"""Negative."""
return Measurement(-self.value, self.sigma)
def __pow__(self, other, cov=0.0):
r"""Power."""
# Scalar power.
if isinstance(other, float) or isinstance(other, int):
other = Measurement(other, 0.0)
# Measurement power.
sigmaf = (other.value*self.sigma/self.value)**2
sigmaf += (log(self.value)*other.sigma)**2
sigmaf += 2*other.value*log(self.value)*cov/self.value
sigmaf = sqrt(sigmaf)
return Measurement(self.value ** other.value, sigmaf)
def __rpow__(self, other):
r"""Reverse power."""
if isinstance(other, float) or isinstance(other, int):
other = Measurement(other, 0.0)
return other.__pow__(self)
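# For example, multiplying two independent measurements propagates the
# uncertainties in quadrature:
#   Measurement(2.0, 0.1) * Measurement(3.0, 0.2)
# gives value 6.0 and sigma sqrt((2.0*0.2)**2 + (3.0*0.1)**2) = 0.5.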
|
oscarlazoarjona/quantum_memories
|
quantum_memories/misc.py
|
Python
|
gpl-3.0
| 47,012
|
[
"Gaussian",
"ORCA"
] |
985a3280e559f5365932f89e127fe62794ea78972155034e35602fd7fedab7b3
|
########################################################################
# File: FileCatalogProxyClient.py
########################################################################
""" File catalog client for the File Catalog proxy service """
from DIRAC.Core.Base.Client import Client
class FileCatalogProxyClient(object):
"""File catalog client for the File Catalog proxy service"""
def __init__(self, fcName, **kwargs):
"""Constructor of the LCGFileCatalogProxy client class"""
self.method = None
self.fcName = fcName
self.rpc = Client(url="DataManagement/FileCatalogProxy", timeout=120)
self.valid = False
self.valid = self.rpc.ping()["OK"]
self.interfaceMethods = None
def isOK(self):
"""Is the Catalog available?"""
return self.valid
def getName(self):
"""Get the file catalog name"""
return self.fcName
def setInterfaceMethods(self, methodTuple):
self.interfaceMethods = methodTuple
def getInterfaceMethods(self):
return self.interfaceMethods
def __getattr__(self, name):
self.method = name
return self.execute
def execute(self, *parms, **kws):
"""Magic method dispatcher"""
return self.rpc.callProxyMethod(self.fcName, self.method, parms, kws)
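# A minimal usage sketch (the catalog name and method below are hypothetical):
#   fc = FileCatalogProxyClient("FileCatalog")
#   if fc.isOK():
#       result = fc.listDirectory("/some/dir")  # dispatched via __getattr__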
|
DIRACGrid/DIRAC
|
src/DIRAC/Resources/Catalog/FileCatalogProxyClient.py
|
Python
|
gpl-3.0
| 1,328
|
[
"DIRAC"
] |
8db9c4c8f35ce22896bb9c15b059405eefe6fac871671cbe67ec6a6128bb1ebb
|
#!@@PYTHON@@
# Copyright 1999-2014. Parallels IP Holdings GmbH. All Rights Reserved.
#------------------------------------------------------------------------
# Copyright (c) 1998 by Total Control Software
# All Rights Reserved
#------------------------------------------------------------------------
#
# Module Name: fcgi.py
#
# Description: Handles communication with the FastCGI module of the
# web server without using the FastCGI developers kit, but
# will also work in a non-FastCGI environment, (straight CGI.)
# This module was originally fetched from someplace on the
# Net (I don't remember where and I can't find it now...) and
# has been significantly modified to fix several bugs, be more
# readable, more robust at handling large CGI data and return
# document sizes, and also to fit the model that we had previously
# used for FastCGI.
#
# WARNING: If you don't know what you are doing, don't tinker with this
# module!
#
# Creation Date: 1/30/98 2:59:04PM
#
# License: This is free software. You may use this software for any
# purpose including modification/redistribution, so long as
# this header remains intact and that you do not claim any
# rights of ownership or authorship of this software. This
# software has been tested, but no warranty is expressed or
# implied.
#
#------------------------------------------------------------------------
import os, sys, string, socket, errno
from cStringIO import StringIO
import cgi
#---------------------------------------------------------------------------
# Set various FastCGI constants
# Maximum number of requests that can be handled
FCGI_MAX_REQS=1
FCGI_MAX_CONNS = 1
# Supported version of the FastCGI protocol
FCGI_VERSION_1 = 1
# Boolean: can this application multiplex connections?
FCGI_MPXS_CONNS=0
# Record types
FCGI_BEGIN_REQUEST = 1 ; FCGI_ABORT_REQUEST = 2 ; FCGI_END_REQUEST = 3
FCGI_PARAMS = 4 ; FCGI_STDIN = 5 ; FCGI_STDOUT = 6
FCGI_STDERR = 7 ; FCGI_DATA = 8 ; FCGI_GET_VALUES = 9
FCGI_GET_VALUES_RESULT = 10
FCGI_UNKNOWN_TYPE = 11
FCGI_MAXTYPE = FCGI_UNKNOWN_TYPE
# Types of management records
ManagementTypes = [FCGI_GET_VALUES]
FCGI_NULL_REQUEST_ID=0
# Masks for flags component of FCGI_BEGIN_REQUEST
FCGI_KEEP_CONN = 1
# Values for role component of FCGI_BEGIN_REQUEST
FCGI_RESPONDER = 1 ; FCGI_AUTHORIZER = 2 ; FCGI_FILTER = 3
# Values for protocolStatus component of FCGI_END_REQUEST
FCGI_REQUEST_COMPLETE = 0 # Request completed nicely
FCGI_CANT_MPX_CONN = 1 # This app can't multiplex
FCGI_OVERLOADED = 2 # New request rejected; too busy
FCGI_UNKNOWN_ROLE = 3 # Role value not known
error = 'fcgi.error'
#---------------------------------------------------------------------------
# The following function is used during debugging; it isn't called
# anywhere at the moment. It is named debug_log rather than error so
# that it does not shadow the `error` exception string defined above.
def debug_log(msg):
"Append a string to /tmp/err"
errf=open('/tmp/err', 'a+')
errf.write(msg+'\n')
errf.close()
#---------------------------------------------------------------------------
class record:
"Class representing FastCGI records"
def __init__(self):
self.version = FCGI_VERSION_1
self.recType = FCGI_UNKNOWN_TYPE
self.reqId = FCGI_NULL_REQUEST_ID
self.content = ""
#----------------------------------------
def readRecord(self, sock):
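        # The fixed 8-byte FastCGI header is: version, record type,
        # request id (2 bytes, big-endian), content length (2 bytes,
        # big-endian), padding length, and one reserved byte.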
s = map(ord, sock.recv(8))
self.version, self.recType, paddingLength = s[0], s[1], s[6]
self.reqId, contentLength = (s[2]<<8)+s[3], (s[4]<<8)+s[5]
self.content = ""
while len(self.content) < contentLength:
data = sock.recv(contentLength - len(self.content))
self.content = self.content + data
if paddingLength != 0:
padding = sock.recv(paddingLength)
# Parse the content information
c = self.content
if self.recType == FCGI_BEGIN_REQUEST:
self.role = (ord(c[0])<<8) + ord(c[1])
self.flags = ord(c[2])
elif self.recType == FCGI_UNKNOWN_TYPE:
self.unknownType = ord(c[0])
elif self.recType == FCGI_GET_VALUES or self.recType == FCGI_PARAMS:
self.values={}
pos=0
while pos < len(c):
name, value, pos = readPair(c, pos)
self.values[name] = value
elif self.recType == FCGI_END_REQUEST:
b = map(ord, c[0:4])
self.appStatus = (b[0]<<24) + (b[1]<<16) + (b[2]<<8) + b[3]
self.protocolStatus = ord(c[4])
#----------------------------------------
def writeRecord(self, sock):
content = self.content
if self.recType == FCGI_BEGIN_REQUEST:
content = chr(self.role>>8) + chr(self.role & 255) + chr(self.flags) + 5*'\000'
elif self.recType == FCGI_UNKNOWN_TYPE:
content = chr(self.unknownType) + 7*'\000'
elif self.recType==FCGI_GET_VALUES or self.recType==FCGI_PARAMS:
content = ""
for i in self.values.keys():
content = content + writePair(i, self.values[i])
elif self.recType==FCGI_END_REQUEST:
v = self.appStatus
content = chr((v>>24)&255) + chr((v>>16)&255) + chr((v>>8)&255) + chr(v&255)
content = content + chr(self.protocolStatus) + 3*'\000'
cLen = len(content)
eLen = (cLen + 7) & (0xFFFF - 7) # align to an 8-byte boundary
padLen = eLen - cLen
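        # e.g. an 11-byte payload is padded out to 16 bytes (padLen = 5);
        # payloads that are already a multiple of 8 bytes get no padding.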
hdr = [ self.version,
self.recType,
self.reqId >> 8,
self.reqId & 255,
cLen >> 8,
cLen & 255,
padLen,
0]
hdr = string.joinfields(map(chr, hdr), '')
sock.send(hdr + content + padLen*'\000')
#---------------------------------------------------------------------------
def readPair(s, pos):
nameLen=ord(s[pos]) ; pos=pos+1
if nameLen & 128:
b=map(ord, s[pos:pos+3]) ; pos=pos+3
nameLen=((nameLen&127)<<24) + (b[0]<<16) + (b[1]<<8) + b[2]
valueLen=ord(s[pos]) ; pos=pos+1
if valueLen & 128:
b=map(ord, s[pos:pos+3]) ; pos=pos+3
valueLen=((valueLen&127)<<24) + (b[0]<<16) + (b[1]<<8) + b[2]
return ( s[pos:pos+nameLen], s[pos+nameLen:pos+nameLen+valueLen],
pos+nameLen+valueLen )
#---------------------------------------------------------------------------
def writePair(name, value):
l=len(name)
if l<128: s=chr(l)
else:
s=chr(128|(l>>24)&255) + chr((l>>16)&255) + chr((l>>8)&255) + chr(l&255)
l=len(value)
if l<128: s=s+chr(l)
else:
s=s+chr(128|(l>>24)&255) + chr((l>>16)&255) + chr((l>>8)&255) + chr(l&255)
return s + name + value
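# Illustrative check of the name-value pair encoding above (not part of the
# protocol logic): writePair('FOO', 'bar') yields '\x03\x03FOObar', since both
# lengths fit in a single byte; lengths of 128 or more are written as four
# bytes with the high bit of the first byte set, which readPair() undoes.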
#---------------------------------------------------------------------------
def HandleManTypes(r, conn):
if r.recType == FCGI_GET_VALUES:
r.recType = FCGI_GET_VALUES_RESULT
v={}
vars={'FCGI_MAX_CONNS' : FCGI_MAX_CONNS,
'FCGI_MAX_REQS' : FCGI_MAX_REQS,
'FCGI_MPXS_CONNS': FCGI_MPXS_CONNS}
for i in r.values.keys():
if vars.has_key(i): v[i]=vars[i]
        r.values = v
r.writeRecord(conn)
#---------------------------------------------------------------------------
#---------------------------------------------------------------------------
_isFCGI = 1 # assume it is until we find out for sure
def isFCGI():
global _isFCGI
return _isFCGI
#---------------------------------------------------------------------------
_init = None
_sock = None
class FCGI:
def __init__(self):
self.haveFinished = 0
if _init == None:
_startup()
if not isFCGI():
self.haveFinished = 1
self.inp, self.out, self.err, self.env = \
sys.stdin, sys.stdout, sys.stderr, os.environ
return
if os.environ.has_key('FCGI_WEB_SERVER_ADDRS'):
good_addrs=string.split(os.environ['FCGI_WEB_SERVER_ADDRS'], ',')
            good_addrs = map(string.strip, good_addrs) # Remove whitespace
else:
good_addrs=None
self.conn, addr=_sock.accept()
stdin, data="", ""
self.env = {}
self.requestId=0
remaining=1
# Check if the connection is from a legal address
        if good_addrs != None and addr[0] not in good_addrs:
raise error, 'Connection from invalid server!'
while remaining:
r=record(); r.readRecord(self.conn)
if r.recType in ManagementTypes:
HandleManTypes(r, self.conn)
elif r.reqId==0:
# Oh, poopy. It's a management record of an unknown
# type. Signal the error.
r2=record()
r2.recType=FCGI_UNKNOWN_TYPE ; r2.unknownType=r.recType
r2.writeRecord(self.conn)
continue # Charge onwards
# Ignore requests that aren't active
elif r.reqId != self.requestId and r.recType != FCGI_BEGIN_REQUEST:
continue
# If we're already doing a request, ignore further BEGIN_REQUESTs
elif r.recType == FCGI_BEGIN_REQUEST and self.requestId != 0:
continue
# Begin a new request
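            # `remaining` counts how many input streams must still be closed by an
            # empty record: PARAMS only for an authorizer, PARAMS and STDIN for a
            # responder, and PARAMS, STDIN and DATA for a filter.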
if r.recType == FCGI_BEGIN_REQUEST:
self.requestId = r.reqId
if r.role == FCGI_AUTHORIZER: remaining=1
elif r.role == FCGI_RESPONDER: remaining=2
elif r.role == FCGI_FILTER: remaining=3
elif r.recType == FCGI_PARAMS:
if r.content == "":
remaining=remaining-1
else:
for i in r.values.keys():
self.env[i] = r.values[i]
elif r.recType == FCGI_STDIN:
if r.content == "":
remaining=remaining-1
else:
stdin=stdin+r.content
elif r.recType==FCGI_DATA:
if r.content == "":
remaining=remaining-1
else:
data=data+r.content
# end of while remaining:
self.inp = sys.stdin = StringIO(stdin)
self.err = sys.stderr = StringIO()
self.out = sys.stdout = StringIO()
self.data = StringIO(data)
def __del__(self):
self.Finish()
def Finish(self, status=0):
if not self.haveFinished:
self.haveFinished = 1
self.err.seek(0,0)
self.out.seek(0,0)
r=record()
r.recType = FCGI_STDERR
r.reqId = self.requestId
data = self.err.read()
if data:
while data:
chunk, data = self.getNextChunk(data)
r.content = chunk
r.writeRecord(self.conn)
r.content="" ; r.writeRecord(self.conn) # Terminate stream
r.recType = FCGI_STDOUT
data = self.out.read()
while data:
chunk, data = self.getNextChunk(data)
r.content = chunk
r.writeRecord(self.conn)
r.content="" ; r.writeRecord(self.conn) # Terminate stream
r=record()
r.recType=FCGI_END_REQUEST
r.reqId=self.requestId
r.appStatus=status
r.protocolStatus=FCGI_REQUEST_COMPLETE
r.writeRecord(self.conn)
self.conn.close()
def getFieldStorage(self):
method = 'GET'
if self.env.has_key('REQUEST_METHOD'):
method = string.upper(self.env['REQUEST_METHOD'])
if method == 'GET':
return cgi.FieldStorage(environ=self.env, keep_blank_values=1)
else:
return cgi.FieldStorage(fp=self.inp, environ=self.env, keep_blank_values=1)
def getNextChunk(self, data):
chunk = data[:8192]
data = data[8192:]
return chunk, data
Accept = FCGI # alias for backwards compatibility
#---------------------------------------------------------------------------
def _startup():
global _init
_init = 1
try:
s=socket.fromfd(sys.stdin.fileno(), socket.AF_INET,
socket.SOCK_STREAM)
s.getpeername()
except socket.error, (err, errmsg):
if err!=errno.ENOTCONN: # must be a non-fastCGI environment
global _isFCGI
_isFCGI = 0
return
global _sock
_sock = s
#---------------------------------------------------------------------------
def _test():
counter=0
try:
while isFCGI():
req = Accept()
counter=counter+1
try:
fs = req.getFieldStorage()
size = string.atoi(fs['size'].value)
doc = ['*' * size]
except:
doc = ['<HTML><HEAD><TITLE>FCGI TestApp</TITLE></HEAD>\n<BODY>\n']
doc.append('<H2>FCGI TestApp</H2><P>')
doc.append('<b>request count</b> = %d<br>' % counter)
# doc.append('<b>pid</b> = %s<br>' % os.getpid())
# if req.env.has_key('CONTENT_LENGTH'):
# cl = string.atoi(req.env['CONTENT_LENGTH'])
# doc.append('<br><b>POST data (%s):</b><br><pre>' % cl)
# keys = fs.keys()
# keys.sort()
# for k in keys:
# val = fs[k]
# if type(val) == type([]):
# doc.append(' <b>%-15s :</b> %s\n' % (k, val))
# else:
# doc.append(' <b>%-15s :</b> %s\n' % (k, val.value))
# doc.append('</pre>')
#
#
# doc.append('<P><HR><P><pre>')
# keys = req.env.keys()
# keys.sort()
# for k in keys:
# doc.append('<b>%-20s :</b> %s\n' % (k, req.env[k]))
# doc.append('\n</pre><P><HR>\n')
doc.append('</BODY></HTML>\n')
doc = string.join(doc, '')
req.out.write('Content-length: %s\r\n'
'Content-type: text/html\r\n'
'Cache-Control: no-cache\r\n'
'\r\n'
% len(doc))
req.out.write(doc)
req.Finish()
except:
import traceback
f = open('traceback', 'w')
traceback.print_exc( file = f )
# f.write('%s' % doc)
if __name__=='__main__':
#import pdb
#pdb.run('_test()')
_test()
|
shotgunmm/cloudband-dev
|
test/fcgi/fcgi.py
|
Python
|
gpl-2.0
| 15,059
|
[
"TINKER"
] |
3537dc85d10e8ce7b5a2b5441d24f7aee4dfba8a65b750167fbec43e3d12b04f
|
# -*- coding: utf-8 -*-
"""
Created on Fri May 06 14:54:11 2016
@author: Alexander Weaver
"""
"""
Performs an affine (fully connected) operation on its input
An affine layer with out_dim neurons takes a data array of size Nx(in_dim), x
and returns a linearly transformed Nx(out_dim) data array
The transformation result, z, is determined by a (in_dim)x(out_dim) weight matrix, W, and
a (out_dim) bias vector, b. The transformation of any one data point (one row in x) is given by:
z = Wx + b
Constructing this object initializes the parameters following a gaussian random distribution with
standard deviation given by weight_scale.
Forward propagating this object performs the affine transformation on the given array, X.
Backpropagating this object returns the derivatives of x, W, and b with respect to the final output of
the network.
"""
import numpy as np
class AffineLayer(object):
def __init__(self, in_dim, out_dim, weight_scale, data_type=np.float32):
self.in_dim = in_dim
self.out_dim = out_dim
self.weight_scale = weight_scale
self.data_type = data_type
self.W = np.random.randn(in_dim, out_dim) * weight_scale
self.W = self.W.astype(self.data_type)
self.b = np.zeros(out_dim)
self.b = self.b.astype(self.data_type)
def forward(self, x, W=None, b=None):
if W is None:
W = self.W
if b is None:
b = self.b
N = x.shape[0]
reshaped_x = x.reshape(N, np.prod(x.shape[1:]))
out = reshaped_x.dot(W) + b
self.cache_x = x
return out
def backward(self, dout):
x = self.cache_x
N = x.shape[0]
reshaped_x = x.reshape(N, np.prod(x.shape[1:]))
dx = dout.dot(np.transpose(self.W)).reshape(x.shape)
self.dW = np.transpose(reshaped_x).dot(dout)
self.db = np.sum(dout, axis=0)
return dx
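# A minimal usage sketch (illustrative only; the batch size, dimensions, and the
# upstream gradient below are made-up values, not part of this module):
if __name__ == '__main__':
    layer = AffineLayer(in_dim=4, out_dim=3, weight_scale=1e-2)
    x = np.random.randn(5, 4).astype(np.float32)  # batch of 5 inputs
    out = layer.forward(x)                        # shape (5, 3)
    dout = np.ones_like(out)                      # stand-in for the upstream gradient
    dx = layer.backward(dout)                     # shape (5, 4); also sets layer.dW, layer.db
    print(out.shape, dx.shape, layer.dW.shape, layer.db.shape)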
|
alexweav/Learny-McLearnface
|
LearnyMcLearnface/Layers/AffineLayer.py
|
Python
|
mit
| 1,928
|
[
"Gaussian"
] |
cf5250a552474eb2e7e0e0e467cfc2214f3e41c94acb535c26229452f76fca57
|
# pylint: disable=missing-docstring
from lettuce import world, step
from lettuce.django import django_url
import time
@step('I register for the course "([^"]*)"$')
def i_register_for_the_course(_step, course):
url = django_url('courses/%s/about' % world.scenario_dict['COURSE'].id.to_deprecated_string())
world.browser.visit(url)
world.css_click('section.intro a.register')
assert world.is_css_present('section.container.dashboard')
@step('I register to audit the course$')
def i_register_to_audit_the_course(_step):
url = django_url('courses/%s/about' % world.scenario_dict['COURSE'].id.to_deprecated_string())
world.browser.visit(url)
world.css_click('section.intro a.register')
# When the page first loads some animation needs to
# complete before this button is in a stable location
world.retry_on_exception(
lambda: world.browser.find_by_name("honor_mode").click(),
max_attempts=10,
ignored_exceptions=AttributeError
)
time.sleep(1)
assert world.is_css_present('section.container.dashboard')
@step(u'I should see an empty dashboard message')
def i_should_see_empty_dashboard(_step):
empty_dash_css = 'section.empty-dashboard-message'
assert world.is_css_present(empty_dash_css)
@step(u'I should( NOT)? see the course numbered "([^"]*)" in my dashboard$')
def i_should_see_that_course_in_my_dashboard(_step, doesnt_appear, course):
course_link_css = 'section.my-courses a[href*="%s"]' % course
if doesnt_appear:
assert world.is_css_not_present(course_link_css)
else:
assert world.is_css_present(course_link_css)
@step(u'I unenroll from the course numbered "([^"]*)"')
def i_unenroll_from_that_course(_step, course):
more_actions_dropdown_link_selector = '[id*=actions-dropdown-link-0]'
assert world.is_css_present(more_actions_dropdown_link_selector)
world.css_click(more_actions_dropdown_link_selector)
unregister_css = 'li.actions-item a.action-unenroll[data-course-number*="{course_number}"][href*=unenroll-modal]'.format(course_number=course)
assert world.is_css_present(unregister_css)
world.css_click(unregister_css)
button_css = 'section#unenroll-modal input[value="Unenroll"]'
assert world.is_css_present(button_css)
world.css_click(button_css)
|
solashirai/edx-platform
|
lms/djangoapps/courseware/features/registration.py
|
Python
|
agpl-3.0
| 2,318
|
[
"VisIt"
] |
ec03fdf281d12a0cb6776c10c8e5045a63d718dabb851d6e8f795ced3b4b25eb
|
"""Creates various plots for Balrog validation testing.
To run: $python ms_plotter.py base_path_to_catalogs output_directory realizations tiles
Example: $python ms_plotter.py /data/des71.a/data/kuropat/des2247-4414_sof/y3v02/ /Users/mspletts/BalValPlots/ all DES2247-4414
Relies on ms_matcher. User may need to replace `/data/des71.a/data/mspletts/balrog_validation_tests/scripts/BalVal/ms_matcher` with the correct path to ms_matcher.
FOF analysis relies on fof_matcher. User may need to replace `...` with the correct path to fof_matcher.
Plot attributes are specified with constants (many of them booleans) at the top of this script.
Constants that the user may wish to change are indicated by: # !!!!! {description and/or warnings} #. For example, user may wish to set `PRINTOUTS = False` or comment out `notice`.
# Comments are ABOVE the code they correspond to (with the exception of FIXMEs and TODOs). #
Megan Splettstoesser mspletts@fnal.gov"""
# astropy is needed only if analyzing a coadd catalog #
'''
from astropy.io import fits
from astropy.table import Column
from astropy.table import Table
'''
import matplotlib.pyplot as plt
import matplotlib
import numpy as np
import os
import pandas as pd
import subprocess
import sys
### Command line args ###
# Catch error from inadequate number of command line args #
if len(sys.argv) != 5:
	sys.exit("Args: basepath (location of catalogs), output directory, realizations (can be 'all'), tiles (can be 'all') \n")
BASEPATH, OUTDIR, realizations, tiles = sys.argv[1], sys.argv[2], sys.argv[3].split(','), sys.argv[4].split(',')
BALROG_RUN = BASEPATH[BASEPATH.replace('/', ';', 4).find('/')+1:BASEPATH.replace('/', ';', 5).find('/')]
ALL_FILTERS = [ 'g', 'r', 'i', 'z' ]
# !!!!! Number of realizations depends on the tile #
ALL_REALIZATIONS = [ '0', '1', '2', '3', '4', '5', '6', '7', '8', '9' ]
#ALL_REALIZATIONS = [ '0', '1', '2' ]
# !!!!! Available tiles depends on run #
ALL_TILES = [ 'DES0347-5540', 'DES2329-5622', 'DES2357-6456' ]
if 'all' not in tiles:
	ALL_TILES = tiles
if 'all' not in realizations:
	ALL_REALIZATIONS = realizations
################################################################### Specify plot and catalog attributes ###################################################################
### Colorbar ###
# !!!!! Add colorbar according to one of the following (only one can be True at a time). If all are False a scatter-plot is made. Colorbars cannot be used if NORMALIZE is True. #
HEXBIN = False
CM_T_S2N_COLORBAR = False
CM_T_ERR_COLORBAR = False
CM_T_COLORBAR = False
BIN_CM_T_S2N = False
# Normalizes plot to 1-sigma magnitude error. If NORMALIZE is True, PLOT_1SIG must be True else errors will not be computed and normalization cannot be performed #
NORMALIZE = False
# Use quality cuts introduced by Eric Huff? Link: https://github.com/sweverett/Balrog-GalSim/blob/master/plots/balrog_catalog_tests.py. Can only be performed if catalog has all the necessary headers: cm_s2n_r, cm_T, cm_T_err, and psfrec_T. #
EH_CUTS = False
# !!!!! Plot 1-sigma magnitude error curve? Must be True if NORMALIZE is True. If NORMALIZE is True will also plot the 68th percentile of the data in each error bin. #
PLOT_1SIG = True
# !!!!! What to do with the plot? #
SAVE_PLOT = False
SHOW_PLOT = True
# !!!!! Limits for the vertical axis. 'None' is an allowed value and will result in default scaling #
YLOW, YHIGH = None, None
# Swap horizontal axis? Default is magnitude1. Matching script ms_matcher determines which catalog is 1 and which is 2. Generally SWAP_HAX does not need to be changed unless the truth catalog values are not on the horizontal axis. #
SWAP_HAX = False
### Catalog attributes ###
# !!!!! Allowed values: sof, mof, star_truth, gal_truth, coadd. Both can be 'sof' and both can be 'mof' if INJ1 and INJ2 are different. Note that truth catalogs always have INJ=True. #
MATCH_CAT1, MATCH_CAT2 = 'gal_truth', 'sof'
# !!!!! Booleans. Examine injected catalogs? #
INJ1, INJ2 = True, True
# !!!!! Must be used with realization=all at command line #
STACK_REALIZATIONS = False
# !!!!! Make 2x2 subplots of each griz filter? Or make individual plots? #
SUBPLOT = True
# !!!!! If directories do no exist, make them or force sys.exit() to edit dirs within script? NO_DIR_EXIT is invoked first, so sys.exit() will exit if NO_DIR_EXIT = True and NO_DIR_MAKE = True #
NO_DIR_EXIT = False
NO_DIR_MAKE = True
### For FOF analysis. To ignore FOF analysis set `RUN_TYPE=None` ###
# !!!!! Allowed values: 'ok' 'rerun' None. 'ok' refers to FOF groups unchanged after Balrog-injection. 'rerun' refers to groups changed after Balrog-injection. #
RUN_TYPE = None
# !!!!! Only used if RUN_TYPE is not None #
MOF = False
SOF = True
if RUN_TYPE is not None:
print 'Doing FOF analysis ... \n '
# Overwrite MATCH_CATs #
if MOF:
MATCH_CAT1, MATCH_CAT2 = 'mof', 'mof'
if SOF:
MATCH_CAT1, MATCH_CAT2 = 'sof', 'sof'
INJ1, INJ2 = True, False
### !!!!! Make region files? #
MAKE_REG = False
### Miscellaneous ###
# Print progress? #
PRINTOUTS = True
# Only refers to printouts within get_floats_from_string(), get_matrix_diagonal_element(), and bin_and_cut_measured_magnitude_error() #
PRINTOUTS_MINOR = False
# Not currently in use or under construction #
LOG_FLAGS = False
PLOT_FLAGGED_OBJS = True
SHOW_FLAG_TYPE = False
# Catch errors from plot attributes #
if ((CM_T_S2N_COLORBAR and CM_T_ERR_COLORBAR) or (CM_T_S2N_COLORBAR and HEXBIN) or (CM_T_ERR_COLORBAR and HEXBIN) or (CM_T_COLORBAR and CM_T_ERR_COLORBAR) or (CM_T_COLORBAR and CM_T_S2N_COLORBAR) or (CM_T_COLORBAR and HEXBIN)) or (NORMALIZE and PLOT_1SIG is False) or (YLOW is not None and YHIGH is not None and YHIGH == YLOW) or (NORMALIZE and (CM_T_S2N_COLORBAR or CM_T_ERR_COLORBAR or CM_T_COLORBAR)) or (STACK_REALIZATIONS and 'all' not in realizations) or (MOF and SOF):
	sys.exit('ERROR: Only one of the following may be True at a time: CM_T_COLORBAR, CM_T_S2N_COLORBAR, CM_T_ERR_COLORBAR, HEXBIN. Otherwise, colorbar will be overwritten. \nERROR: If NORMALIZE is True so must be PLOT_1SIG. \nERROR: YHIGH cannot be equal to YLOW. \nERROR: NORMALIZE must be False if any of CM_T_COLORBAR, CM_T_ERR_COLORBAR, CM_T_S2N_COLORBAR are True. \nERROR: STACK_REALIZATIONS = True must be used with realizations = all. \nERROR: MOF and SOF cannot be simultaneously True. \n')
# !!!!! Check that plots will not be overwritten, etc #
NOTICE = raw_input(' \n !! CHECK BEFORE RUNNING !! \n Save plot(s) -- ' + str(SAVE_PLOT) + '\n Showing plot(s) -- ' + str(SHOW_PLOT) + '\n Normalize plot(s) -- ' + str(NORMALIZE) + '\n Hexbin -- ' + str(HEXBIN) + '\n cm_T colorbar -- ' + str(CM_T_COLORBAR) + '\n cm_T_err colorbar -- ' + str(CM_T_ERR_COLORBAR) + '\n cm_T_s2n -- ' + str(CM_T_S2N_COLORBAR) + '\n Plot limits -- ' + str(YLOW) + ', ' + str(YHIGH) + '\n Plotting 1-sigma curve -- ' + str(PLOT_1SIG) +'\n Plotting flagged objects -- ' + str(PLOT_FLAGGED_OBJS) + '\n Print flags and flag types -- ' + str(SHOW_FLAG_TYPE) + '\n Logging flags -- ' + str(LOG_FLAGS) + '\n --> Press enter to proceed, control+c to stop...\n')
################################################################### Store catalog information ###################################################################
class CoaddCat():
"""Declare headers for coadd catalogs .../coadd/{tile}_{filter}_cat.fits. There is a separate catalog for each filter."""
# Once matched, headers will have form 'hdr_1' or 'hdr_2' with a suffix (suf) #
def __init__(self, inj, suf):
"""Declare headers.
Args:
inj (bool) -- Balrog injected catalog?
suf (int) -- Refers to order in which catalog was matched in ms_matcher. Allowed values: '_1' '_2'
"""
# For plot title #
if inj:
self.title_piece = 'Inj Coadd Cat'
if inj is False:
self.title_piece = 'Coadd Cat'
self.axlabel = 'meas'
# Magnitude, is one number #
self.mag_hdr = 'MAG_AUTO' + str(suf)
self.mag_axlabel = 'MAG_AUTO_meas'
# For error calculation #
self.mag_err_hdr = 'MAGERR_AUTO' + str(suf)
self.cm_flux_hdr = None
self.cm_flux_cov_hdr = None
# Size #
self.cm_t_hdr = None
self.cm_t_err_hdr = None
self.cm_t_s2n_axlabel = None
# Flags #
self.flags_hdr = 'FLAGS' + str(suf)
self.obj_flags_hdr = None
self.psf_flags_hdr = None
self.cm_flags_hdr = None
self.cm_max_flags_hdr = None
self.cm_mof_flags_hdr = None
self.cm_flags_r_hdr = None
# For region file #
self.ra_hdr = 'ALPHAWIN_J2000' + str(suf)
self.dec_hdr = 'DELTAWIN_J2000' + str(suf)
# Units: pixels #
self.a_hdr = 'A_IMAGE' + str(suf)
self.b_hdr = 'B_IMAGE' + str(suf)
# For Eric Huff (EH) quality cuts #
self.cm_s2n_r_hdr = None
self.psfrec_t_hdr = None
class SOFGalTruthCat():
"""Declare headers and axes labels for galaxy truth catalogs in the sof directory /data/des71.a/data/kuropat/des2247-4414_sof/."""
# Once matched, headers will have form 'hdr_1' or 'hdr_2' with a suffix (suf) #
def __init__(self, inj, suf):
"""Declare constants.
Args:
inj (bool) -- Balrog injected catalog?
suf (int) -- Refers to order in which catalog was matched in ms_matcher. Allowed values: '_1' '_2'
"""
if inj:
self.title_piece = 'Inj Gal Truth Cat'
if inj is False:
self.title_piece = 'Gal Truth Cat'
self.axlabel = 'true'
# Headers are the same as MOFCat class. Reproduced below for clarity in ms_plotter.py #
# Magnitude, is string of form '(mag_g, mag_r, mag_i, mag_z)' #
self.mag_hdr = 'cm_mag' + str(suf)
self.mag_axlabel = 'cm_mag_true'
# For error calculation #
self.mag_err_hdr = None
self.cm_flux_hdr = 'cm_flux' + str(suf)
self.cm_flux_cov_hdr = 'cm_flux_cov' + str(suf)
# Size. cm_T units: arcseconds squared. #
self.cm_t_hdr = 'cm_T' + str(suf)
self.cm_t_err_hdr = 'cm_T_err' + str(suf)
self.cm_t_s2n_axlabel = 'cm_T_s2n_true'
# Flags #
self.flags_hdr = 'flags' + str(suf)
self.obj_flags_hdr = 'obj_flags' + str(suf)
self.psf_flags_hdr = 'psf_flags' + str(suf)
self.cm_flags_hdr = 'cm_flags' + str(suf)
self.cm_max_flags_hdr = 'cm_max_flags' + str(suf)
self.cm_flags_r_hdr = 'cm_flags_r' + str(suf)
self.cm_mof_flags_hdr = 'cm_mof_flags' + str(suf)
# For region file #
self.ra_hdr = 'ra' + str(suf)
self.dec_hdr = 'dec' + str(suf)
self.a_hdr, self.b_hdr = None, None
# For Eric Huff (EH) quality cuts #
self.cm_s2n_r_hdr = 'cm_s2n_r' + str(suf)
self.psfrec_t_hdr = 'psfrec_T' + str(suf)
class SOFCat():
"""Declare headers and axis labels for sof catalog in /data/des71.a/data/kuropat/des2247-4414_sof/y3v02/balrog_images/{realization}/{tile}/sof/{tile}_sof.fits and /data/des71.a/data/kuropat/sof_stars/y3v02/balrog_images/{realization}/{tile}/mof/{tile}_mof.fits"""
# Once matched, headers will have form 'hdr_1' or 'hdr_2' with a suffix (suf) #
def __init__(self, inj, suf):
"""Declare constants.
Args:
inj (bool) -- Balrog injected catalog?
suf (int) -- Refers to order in which catalog was matched in ms_matcher. Allowed values: '_1' '_2'
"""
if inj:
self.title_piece = 'Inj SOF Cat'
if inj is False:
self.title_piece = 'SOF Cat'
self.axlabel = 'meas'
# Headers are the same as MOFCat class with the exception of cm_mof_flags_hdr. Reproduced below for clarity #
# Magnitude, is string of form '(mag_g, mag_r, mag_i, mag_z)' #
self.mag_hdr = 'cm_mag' + str(suf)
self.mag_axlabel = 'cm_mag_meas'
# For error calculation #
self.mag_err_hdr = None
self.cm_flux_hdr = 'cm_flux' + str(suf)
self.cm_flux_cov_hdr = 'cm_flux_cov' + str(suf)
# Size #
self.cm_t_hdr = 'cm_T' + str(suf)
self.cm_t_err_hdr = 'cm_T_err' + str(suf)
self.cm_t_s2n_axlabel = 'cm_T_s2n_meas'
# Flags #
self.flags_hdr = 'flags' + str(suf)
self.obj_flags_hdr = 'obj_flags' + str(suf)
self.psf_flags_hdr = 'psf_flags' + str(suf)
self.cm_flags_hdr = 'cm_flags' + str(suf)
self.cm_max_flags_hdr = 'cm_max_flags' + str(suf)
self.cm_flags_r_hdr = 'cm_flags_r' + str(suf)
self.cm_mof_flags_hdr = None
# For region file #
self.ra_hdr = 'ra' + str(suf)
self.dec_hdr = 'dec' + str(suf)
self.a_hdr, self.b_hdr = None, None
# For Eric Huff (EH) quality cuts #
self.cm_s2n_r_hdr = 'cm_s2n_r' + str(suf)
self.psfrec_t_hdr = 'psfrec_T' + str(suf)
class MOFCat():
"""Declare headers and axes labels for MOF catalog. Currently, the galaxy truth catalogs are created using MOF and have the same headers. Works (mostly) with /data/des71.a/data/kuropat/sof_stars/y3v02/balrog_images/{realization}/{tile}/{tile}_{realization}_balrog_truth_cat_gals.fits, /data/des71.a/data/kuropat/sof_stars/y3v02/balrog_images/{realization}/{tile}/mof/{tile}_mof.fits, ..."""
# Once matched, headers will have form 'hdr_1' or 'hdr_2' with a suffix (suf) #
def __init__(self, inj, suf):
"""Declare constants.
Args:
inj (bool) -- Balrog injected catalog?
suf (int) -- Refers to order in which catalog was matched in ms_matcher. Allowed values: '_1' '_2'
"""
# For plot title #
if inj:
self.title_piece = 'Inj MOF Cat'
if inj is False:
self.title_piece = 'MOF Cat'
self.axlabel = 'meas'
# Magnitude, is string of form (mag_g, mag_r, mag_i, mag_z) #
self.mag_hdr = 'cm_mag' + str(suf)
self.mag_axlabel = 'cm_mag_meas'
# For error calculation #
self.mag_err_hdr = None
self.cm_flux_hdr = 'cm_flux' + str(suf)
self.cm_flux_cov_hdr = 'cm_flux_cov' + str(suf)
# Size #
self.cm_t_hdr = 'cm_T' + str(suf)
self.cm_t_err_hdr = 'cm_T_err' + str(suf)
self.cm_t_s2n_axlabel = 'cm_T_s2n_meas'
# Flags #
self.flags_hdr = 'flags' + str(suf)
self.obj_flags_hdr = 'obj_flags' + str(suf)
self.psf_flags_hdr = 'psf_flags' + str(suf)
self.cm_flags_hdr = 'cm_flags' + str(suf)
self.cm_max_flags_hdr = 'cm_max_flags' + str(suf)
self.cm_flags_r_hdr = 'cm_flags_r' + str(suf)
self.cm_mof_flags_hdr = 'cm_mof_flags' + str(suf)
# For region file #
self.ra_hdr = 'ra' + str(suf)
self.dec_hdr = 'dec' + str(suf)
self.a_hdr, self.b_hdr = None, None
# For Eric Huff (EH) quality cuts #
self.cm_s2n_r_hdr = 'cm_s2n_r' + str(suf)
self.psfrec_t_hdr = 'psfrec_T' + str(suf)
class StarTruthCat(): #are there sep headers for MOFStarTruthCat and SOFStarTruthCat?
"""Declare headers and axes labels for star truth catalogs in /data/des71.a/data/kuropat/sof_stars/."""
# Once matched, headers will have form 'hdr_1' or 'hdr_2' with a suffix (suf) #
def __init__(self, inj, suf):
"""Declare constants.
Args:
inj (bool) -- Balrog injected catalog?
suf (int) -- Refers to order in which catalog was matched in ms_matcher. Allowed values: '_1' '_2'
"""
if inj:
self.title_piece = 'Inj Star Truth Cat'
if inj is False:
self.title_piece = 'Star Truth Cat'
self.axlabel = 'true'
# Magnitude #
self.mag_hdr = 'g_Corr' + str(suf)
self.mag_axlabel = 'mag_true'
# For error calculation #
# Is of form 'PSF_MAG_ERR_{filter}' + str(suf) #
self.mag_err_hdr = 'PSF_MAG_ERR' + str(suf)
self.cm_flux_hdr = None
self.cm_flux_cov_hdr = None
# Size #
self.cm_t_hdr = None
self.cm_t_err_hdr = None
self.cm_t_s2n_axlabel = None
# Flags #
self.flags_hdr = None
self.obj_flags_hdr = None
self.psf_flags_hdr = None
self.cm_flags_hdr = None
self.cm_max_flags_hdr = None
self.cm_mof_flags_hdr = None
self.cm_flags_r_hdr = None
# For region file #
self.ra_hdr = 'RA_new' + str(suf)
self.dec_hdr = 'DEC_new' + str(suf)
self.a_hdr, self.b_hdr = None, None # Use cm_T
# For Eric Huff (EH) quality cuts #
self.cm_s2n_r_hdr = None
self.psfrec_t_hdr = None
def get_class(cat_type, inj, suf):
"""Get the appropriate class for the catalog type.
Args:
cat_type -- Catalog type. Allowed values: 'gal_truth', 'mof', 'star_truth', 'sof', 'coadd'.
inj (bool) -- Balrog injected catalog?
suf (int) -- Refers to order in which catalog was matched in ms_matcher. Allowed values: '_1' '_2'
Returns:
cat_type_class -- Points to the appropriate class which contains constants.
"""
if cat_type == 'gal_truth':
cat_type_class = SOFGalTruthCat(inj=inj, suf=suf)
if cat_type == 'mof':
cat_type_class = MOFCat(inj=inj, suf=suf)
if cat_type == 'star_truth':
cat_type_class = StarTruthCat(inj=inj, suf=suf)
if cat_type == 'sof':
cat_type_class = SOFCat(inj=inj, suf=suf)
if cat_type == 'coadd':
cat_type_class = CoaddCat(inj=inj, suf=suf)
return cat_type_class
def get_match_type(title_piece1, title_piece2):
"""Transform plot title of form 'Inj MOF Cat & Truth Cat' to 'inj_mof_cat_truth_cat'.
Args:
title_piece1, title_piece2 (str) -- Ex: Injected MOF
Return:
match_type (str) -- Ex: injected_mof_truth_catalog
"""
title_piece1, title_piece2 = title_piece1.lower(), title_piece2.lower()
title_piece1, title_piece2 = title_piece1.replace(' ', '_'), title_piece2.replace(' ', '_')
match_type = str(title_piece1)+'_'+str(title_piece2)
return match_type
def get_fd_names():
"""Generate names for the following log files: flags, magnitude bins, number of objects plotted, number of objects within one sigma.
Relies on directory structure: outdir/log_files/`BALROG_RUN`/`MATCH_TYPE`/
Args:
outdir (str) -- Output directory. Files will be saved here.
Returns:
fn1, fn2, fn3, fn4 (str) -- Filenames for flag log file, magnitude log, number of objects plotted log, number of objects within 1-sigma, respectively.
"""
# !!!!! User may wish to edit directory structure #
### Check for directory existence ###
if RUN_TYPE is None:
log_dir = os.path.join(OUTDIR, 'log_files', BALROG_RUN, MATCH_TYPE)
if RUN_TYPE is not None:
log_dir = os.path.join(OUTDIR, 'log_files', BALROG_RUN, MATCH_TYPE, 'fof_analysis')
if os.path.isdir(log_dir) is False:
if NO_DIR_EXIT:
sys.exit('Directory ' + str(log_dir) + ' does not exist. \n Change directory structure in ms_plotter.get_fd_names() or set `NO_DIR_MAKE=True`')
if NO_DIR_MAKE:
print 'Making directory ', log_dir, '...\n'
os.makedirs(log_dir)
fn1 = os.path.join(log_dir, 'flag_log_'+str(MATCH_TYPE)+'.csv')
fn2 = os.path.join(log_dir, 'magnitude_bins_'+str(MATCH_TYPE)+'.txt')
fn3 = os.path.join(log_dir, 'num_objs_plotted_'+str(MATCH_TYPE)+'.txt')
fn4 = os.path.join(log_dir, 'one_sigma_objects_'+str(MATCH_TYPE)+'.txt')
if RUN_TYPE is not None:
fn1 = fn1[:-4] + '_' + str(RUN_TYPE) + fn1[-4:]; fn2 = fn2[:-4] + '_' + str(RUN_TYPE) + fn2[-4:]
fn3 = fn3[:-4] + '_' + str(RUN_TYPE) + fn3[-4:]; fn4 = fn4[:-4] + '_' + str(RUN_TYPE) + fn4[-4:]
print '-----> Saving log file for flags as: ', fn1, '\n'
print '-----> Saving log file for magnitude and error bins as: ', fn2, '\n'
print '-----> Saving log file for number of objects plotted as: ', fn3, '\n'
print '-----> Saving log file for number of objects within 1-sigma as: ', fn4, '\n'
return fn1, fn2, fn3, fn4
def get_reg_names(tile_name, realization_number):
"""Generate names for region files of different join types in STILTS script ms_matcher or fof_matcher.
Args:
outdir (str) -- Output directory. Files will be saved here.
Returns:
fn1, fn2, fn3, (str) -- Filenames for join=1and2, join=1not2, join=2not1, respectively.
"""
# !!!!! User may wish to edit directory structure #
### Check for directory existence ###
if RUN_TYPE is None:
reg_dir = os.path.join(OUTDIR, 'region_files', BALROG_RUN, MATCH_TYPE, tile_name, realization_number)
if RUN_TYPE is not None:
reg_dir = os.path.join(OUTDIR, 'region_files', BALROG_RUN, MATCH_TYPE, tile_name, realization_number, 'fof_analysis')
if os.path.isdir(reg_dir) is False:
if NO_DIR_EXIT:
			sys.exit('Directory ' + str(reg_dir) + ' does not exist. \n Change directory structure in ms_plotter.get_reg_names() or set `NO_DIR_MAKE=True`')
if NO_DIR_MAKE:
print 'Making directory ', reg_dir, '...\n'
os.makedirs(reg_dir)
fn1 = os.path.join(reg_dir, str(tile_name) + '_' + str(realization_number) + '_' + str(MATCH_TYPE)+'_match1and2.reg')
fn2 = os.path.join(reg_dir, str(tile_name) + '_' + str(realization_number) + '_' + str(MATCH_TYPE)+'_match1not2.reg')
fn3 = os.path.join(reg_dir, str(tile_name) + '_' + str(realization_number) + '_' + str(MATCH_TYPE)+'_match2not1.reg')
if RUN_TYPE is not None:
fn1 = fn1[:-15] + '_' + str(RUN_TYPE) + fn1[-15:]; fn2 = fn2[:-15] + '_' + str(RUN_TYPE) + fn2[-15:]; fn3 = fn3[:-15] + '_' + str(RUN_TYPE) + fn3[-15:]
return fn1, fn2, fn3
################################################################### Declare necessary constants ###################################################################
### For data analysis ###
# CLASS1 refers to in1 in ms_matcher. in1 appends _1 to all the headers, hence suf=1. fof_matcher is done such that injected catalogs have no suffix #
if RUN_TYPE is not None:
CLASS1 = get_class(cat_type=MATCH_CAT1, inj=INJ1, suf='')
if RUN_TYPE is None:
CLASS1 = get_class(cat_type=MATCH_CAT1, inj=INJ1, suf='_1')
CLASS2 = get_class(cat_type=MATCH_CAT2, inj=INJ2, suf='_2')
# Get arguments to pass to ms_matcher. Need to transform header of form 'ra_1' to 'ra', hence [:-2] #
RA_HDR1, RA_HDR2 = CLASS1.ra_hdr[:-2], CLASS2.ra_hdr[:-2]
DEC_HDR1, DEC_HDR2 = CLASS1.dec_hdr[:-2], CLASS2.dec_hdr[:-2]
# For plot labels #
AXLABEL1, AXLABEL2 = CLASS1.axlabel, CLASS2.axlabel
# Magnitudes #
M_HDR1, M_HDR2 = CLASS1.mag_hdr, CLASS2.mag_hdr
M_AXLABEL1, M_AXLABEL2 = CLASS1.mag_axlabel, CLASS2.mag_axlabel
# For magnitude error calculation #
M_ERR_HDR1, M_ERR_HDR2 = CLASS1.mag_err_hdr, CLASS2.mag_err_hdr
CM_FLUX_HDR1, CM_FLUX_HDR2 = CLASS1.cm_flux_hdr, CLASS2.cm_flux_hdr
CM_FLUX_COV_HDR1, CM_FLUX_COV_HDR2 = CLASS1.cm_flux_cov_hdr, CLASS2.cm_flux_cov_hdr
# For signal to noise calculation #
CM_T_HDR1, CM_T_HDR2 = CLASS1.cm_t_hdr, CLASS2.cm_t_hdr
CM_T_ERR_HDR1, CM_T_ERR_HDR2 = CLASS1.cm_t_err_hdr, CLASS2.cm_t_err_hdr
CM_T_S2N_AXLABEL1, CM_T_S2N_AXLABEL2 = CLASS1.cm_t_s2n_axlabel, CLASS2.cm_t_s2n_axlabel
# Flags #
FLAGS_HDR1, FLAGS_HDR2 = CLASS1.flags_hdr, CLASS2.flags_hdr
OBJ_FLAGS_HDR1, OBJ_FLAGS_HDR2 = CLASS1.obj_flags_hdr, CLASS2.obj_flags_hdr
# psf_flags is a string of form '(0,0,0,0)'; must pass through get_floats_from_string() if used. #
PSF_FLAGS_HDR1, PSF_FLAGS_HDR2 = CLASS1.psf_flags_hdr, CLASS2.psf_flags_hdr
CM_FLAGS_HDR1, CM_FLAGS_HDR2 = CLASS1.cm_flags_hdr, CLASS2.cm_flags_hdr
CM_MAX_FLAGS_HDR1, CM_MAX_FLAGS_HDR2 = CLASS1.cm_max_flags_hdr, CLASS2.cm_max_flags_hdr
CM_FLAGS_R_HDR1, CM_FLAGS_R_HDR2 = CLASS1.cm_flags_r_hdr, CLASS2.cm_flags_r_hdr
CM_MOF_FLAGS_HDR1, CM_MOF_FLAGS_HDR2 = CLASS1.cm_mof_flags_hdr, CLASS2.cm_mof_flags_hdr
# For quality cuts introduced by Eric Huff #
CM_S2N_R_HDR1, CM_S2N_R_HDR2 = CLASS1.cm_s2n_r_hdr, CLASS2.cm_s2n_r_hdr
PSFREC_T_HDR1, PSFREC_T_HDR2 = CLASS1.psfrec_t_hdr, CLASS2.psfrec_t_hdr
# For region file #
MAJOR_AX_HDR1, MAJOR_AX_HDR2 = CLASS1.a_hdr, CLASS2.a_hdr
MINOR_AX_HDR1, MINOR_AX_HDR2 = CLASS1.b_hdr, CLASS2.b_hdr
FLAG_HDR_LIST = [ FLAGS_HDR1, FLAGS_HDR2, CM_FLAGS_HDR1, CM_FLAGS_HDR2, CM_MOF_FLAGS_HDR1, CM_MOF_FLAGS_HDR2, OBJ_FLAGS_HDR1, OBJ_FLAGS_HDR2, PSF_FLAGS_HDR1, PSF_FLAGS_HDR2, CM_MAX_FLAGS_HDR1, CM_MAX_FLAGS_HDR2, CM_FLAGS_R_HDR1, CM_FLAGS_R_HDR2 ]
# Used if LOG_FLAGS is True #
flag_idx = []
### For plot names, plot titles, log file names ###
TITLE_PIECE1, TITLE_PIECE2 = CLASS1.title_piece, CLASS2.title_piece
MATCH_TYPE = get_match_type(title_piece1=TITLE_PIECE1, title_piece2=TITLE_PIECE2)
### Names for file directors (fd) ###
FD_FLAG_NAME, FD_MAG_BINS_NAME, FD_NOP_NAME, FD_1SIG_NAME = get_fd_names()
# Create log file for number of objects plotted (nop) #
FD_NOP = open(FD_NOP_NAME, 'w')
# Create log file for number of objects within 1-sigma #
FD_1SIG = open(FD_1SIG_NAME, 'w')
# Create log file for magnitude bins #
FD_MAG_BINS = open(FD_MAG_BINS_NAME, 'w')
FD_MAG_BINS.write('NUM_OBJS_IN_BIN, BIN_LHS, BIN_RHS, MEDIAN_HAXIS_MAG, MEDIAN_ERROR \n')
# Create log file for flags #
FD_FLAG = open(FD_FLAG_NAME, 'w')
FD_FLAG.write('TILE, FILTER, TYPE, REALIZATION, FLAG1_HEADER, FLAG2_HEADER, FLAG1_VALUE, FLAG2_VALUE, MAG1, MAG2 \n')
if LOG_FLAGS is False:
FD_FLAG.write('Flags not logged because LOG_FLAGS is False.')
################################################################### Analysis ###################################################################
def get_floats_from_string(df, filter_name, hdr):
"""Transform a list of strings of form '[ (1, 2, 3, 4), (1, 2, 3, 4), ... ]' to a list of floats of form '[1,1,...]' (if filter_name="g"), '[2,2,...]' ("r"), '[3,3,...]' ("i"), or '[4,4,...]' ("z").
Args:
df (pandas DataFrame)
filter_name (str) -- Allowed values: 'g' 'r' 'i' 'z'.
hdr (str) -- Header refers to a column name in the matched catalog. Must refer to a list of strings where each element is of form '(1,2,3,4)'.
Returns:
list_a (list of floats) -- Collection of the numbers corresponding to a particular index in a list of form '[ (1, 2, 3, 4), (1, 2, 3, 4), ... ].
"""
strings = df[hdr]; list_a = []
# Each element (elmt) is of form '(1, 2, 3, 4)' #
for elmt in strings:
if filter_name == 'g':
i = 1
idx1 = elmt.find('(') + i
idx2 = elmt.find(',')
if filter_name == 'r':
i = 2
idx1 = elmt.replace(',', ';', 0).find(',') + i
idx2 = elmt.replace(',', ';', 1).find(',')
if filter_name == 'i':
i = 2
			idx1 = elmt.replace(',', ';', 1).find(',') + i
idx2 = elmt.replace(',', ';', 2).find(',')
if filter_name == 'z':
i = 2
idx1 = elmt.replace(',', ';', 2).find(',') + i
idx2 = elmt.find(')')
list_a.append(float(elmt[idx1:idx2]))
if PRINTOUTS_MINOR:
print 'Got ', hdr, ' for filter ', filter_name, '...'
print ' Check: ', strings[0], ' & ', list_a[0], '\n'
return list_a
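# Illustrative behavior of get_floats_from_string() (assuming the usual
# '(g, r, i, z)' formatting of the matched catalog): for entries like
# '(22.1, 21.9, 21.7, 21.6)', filter_name='r' extracts 21.9 from each row.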
def get_matrix_diagonal_element(df, filter_name, hdr):
"""Transforms a list of 4x4 matrices where each element is a string of form '((11,12,13,14), (21,22,23,24), (31,32,33,34), (41,42,43,44))' into a list of either the 11 (if filter_name is "g"), 22 ("r"), 33 ("i"), or 44 ("z") matrix elements.
Args:
df (pandas DataFrame)
filter_name (str) -- Allowed values: 'g' 'r' 'i' 'z'.
hdr (str) -- Header refers to a column name in the matched catalog. Must refer to a list of strings where each element is of form '((11,12,13,14), (21,22,23,24), (31,32,33,34), (41,42,43,44))'.
Returns:
list_aa (list of floats) -- Collection of the numbers corresponding to a particular diagonal element in a list of 4-by-4 matrices.
"""
matrices = df[hdr]; list_aa = []
# Each element in `matrices` is a matrix of form '((11,12,13,14), (21,22,23,24), (31,32,33,34), (41,42,43,44))' #
for matrix in matrices:
if filter_name == 'g':
i, j = 2, 0
idx1 = 0
idx2 = matrix.find(',')
if filter_name == 'r':
i, j = 2, 0
idx1 = matrix.replace(',', ';', 4).find(',')
idx2 = matrix.replace(',', ';', 5).find(',')
if filter_name == 'i':
i, j = 2, 0
idx1 = matrix.replace(',', ';', 9).find(',')
idx2 = matrix.replace(',', ';', 10).find(',')
if filter_name == 'z':
i, j = 2, -1
idx1 = matrix.replace(',', ';', 14).find(',')
idx2 = matrix.replace(',', ';', 15).find(',')
list_aa.append(float(matrix[idx1+i:idx2+j]))
if PRINTOUTS_MINOR:
print 'Got ', hdr, ' for filter ', filter_name
print ' Check: ', matrices[0], ' & ', list_aa[0], '\n'
return list_aa
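# Illustrative behavior of get_matrix_diagonal_element() (assuming the usual
# formatting with a space after each comma): for entries like
# '((1.1, 1.2, 1.3, 1.4), (2.1, 2.2, 2.3, 2.4), (3.1, 3.2, 3.3, 3.4), (4.1, 4.2, 4.3, 4.4))',
# filter_name='g' extracts 1.1, 'r' extracts 2.2, 'i' extracts 3.3, and 'z' extracts 4.4.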
def get_good_index_using_primary_flags(df, full_magnitude1, full_magnitude2, cm_flag_hdr1, cm_flag_hdr2, flag_hdr1, flag_hdr2):
"""Get indices of objects without flags as indicated by the headers 'flags' and 'cm_flags'. Also get indices of objects with magnitudes not equal to +/- 99, +/- 9999, and 37.5. Store the bad indices as well (if PLOT_FLAGGED_OBJS is True).
Args:
df (pandas DataFrame)
full_magnitude1, full_magnitude2 (list of floats) -- Uncleaned lists containing magnitudes.
Returns:
idx_good (list of ints)
idx_bad (list of ints) -- Is empty if PLOT_FLAGGED_OBJS is False.
"""
if cm_flag_hdr2 is None and cm_flag_hdr1 is None and flag_hdr1 is None and flag_hdr2 is None:
sys.exit('No headers to clean flags with...')
# If one catalog does not have the appropriate header, check it twice in the catalog that does have it so code still runs #
if cm_flag_hdr2 is None:
cm_flag_hdr2 = cm_flag_hdr1
if cm_flag_hdr1 is None:
cm_flag_hdr1 = cm_flag_hdr2
if flag_hdr1 is None:
flag_hdr1 = flag_hdr2
if flag_hdr2 is None:
flag_hdr2 = flag_hdr1
### Get flags ###
flag1, flag2 = df[flag_hdr1], df[flag_hdr2]
cm_flag1, cm_flag2 = df[cm_flag_hdr1], df[cm_flag_hdr2]
# Make arrays to take absolute value in next step #
full_magnitude1, full_magnitude2 = np.array(full_magnitude1), np.array(full_magnitude2)
	# Get rid of these objects; magnitude 37.5 corresponds to a negative flux #
	idx_good = np.where( (abs(full_magnitude1) != 9999.0) & (abs(full_magnitude1) != 99.0) & (abs(full_magnitude1) != 37.5) & (abs(full_magnitude2) != 9999.0) & (abs(full_magnitude2) != 99.0) & (abs(full_magnitude2) != 37.5) & (flag1 == 0) & (flag2 == 0) & (cm_flag1 == 0) & (cm_flag2 == 0) )[0]
	if PLOT_FLAGGED_OBJS:
		idx_bad = np.where( (abs(full_magnitude1) != 9999.0) & (abs(full_magnitude1) != 99.0) & (abs(full_magnitude1) != 37.5) & (abs(full_magnitude2) != 9999.0) & (abs(full_magnitude2) != 99.0) & (abs(full_magnitude2) != 37.5) & ((flag2 != 0) | (flag1 != 0) | (cm_flag1 != 0) | (cm_flag2 != 0)) )[0]
if PLOT_FLAGGED_OBJS is False:
idx_bad = None
if PRINTOUTS:
print 'Eliminated ', len(full_magnitude1) - len(idx_good), ' objects with magnitudes equal to +/- 9999, +/- 99, and 37.5 and objects with nonzero flags for: ', flag_hdr1, ', ', flag_hdr2, ', ', cm_flag_hdr1, ', ', cm_flag_hdr2, ' ... \n'
return idx_good, idx_bad
def get_good_index_using_quality_cuts(df, full_magnitude1, full_magnitude2):
"""Get indices of objects that satisfy quality cuts introduced by Eric Huff. Also get indices of objects without flags as indicated by the headers 'flags' and 'cm_flags'. Also get indices of objects with magnitudes not equal to +/- 99, +/- 9999, and 37.5. Store the bad indices as well (if PLOT_FLAGGED_OBJS is True).
Args:
df (pandas DataFrame)
	(Flag, size, signal-to-noise, and PSF headers are taken from the module-level constants.)
full_magnitude1, full_magnitude2 (list of floats) -- Values read directly from pandas DataFrame or passed through `get_floats_from_string()`; no flags removed.
Returns:
idx_good (list of ints) -- Indices of objects without flags and objects which met criteria for quality cuts.
idx_bad (list of ints) -- Is empty if PLOT_FLAGGED_OBJS is False.
"""
if 'true' in AXLABEL1 and 'true' in AXLABEL2:
sys.exit('ERROR. Cuts should be performed on measured catalog, not truth catalog.')
	# Ignore truth catalog if one is present. Preserve ability to check both catalogs. #
	cm_t_hdr1, cm_t_err_hdr1, cm_s2n_r_hdr1, psfrec_t_hdr1 = CM_T_HDR1, CM_T_ERR_HDR1, CM_S2N_R_HDR1, PSFREC_T_HDR1
	cm_t_hdr2, cm_t_err_hdr2, cm_s2n_r_hdr2, psfrec_t_hdr2 = CM_T_HDR2, CM_T_ERR_HDR2, CM_S2N_R_HDR2, PSFREC_T_HDR2
	if 'true' in AXLABEL1 and 'meas' in AXLABEL2:
		cm_t_hdr1, cm_t_err_hdr1, cm_s2n_r_hdr1, psfrec_t_hdr1 = cm_t_hdr2, cm_t_err_hdr2, cm_s2n_r_hdr2, psfrec_t_hdr2
	if 'true' in AXLABEL2 and 'meas' in AXLABEL1:
		cm_t_hdr2, cm_t_err_hdr2, cm_s2n_r_hdr2, psfrec_t_hdr2 = cm_t_hdr1, cm_t_err_hdr1, cm_s2n_r_hdr1, psfrec_t_hdr1
	idx_good, idx_bad = [], []
	# Primary flag headers. If one catalog does not have the appropriate header, check the other catalog's header twice so code still runs #
	flag_hdr1, flag_hdr2 = FLAGS_HDR1, FLAGS_HDR2
	cm_flag_hdr1, cm_flag_hdr2 = CM_FLAGS_HDR1, CM_FLAGS_HDR2
	if cm_flag_hdr2 is None:
		cm_flag_hdr2 = cm_flag_hdr1
	if cm_flag_hdr1 is None:
		cm_flag_hdr1 = cm_flag_hdr2
	### Define flags ###
	flag1, flag2 = df[flag_hdr1], df[flag_hdr2]
	cm_flag1, cm_flag2 = df[cm_flag_hdr1], df[cm_flag_hdr2]
	### Define parameters needed for quality cuts ###
	# Size squared of object #
	cm_t1, cm_t2 = df[cm_t_hdr1], df[cm_t_hdr2]
	# Size error #
	cm_t_err1, cm_t_err2 = df[cm_t_err_hdr1], df[cm_t_err_hdr2]
	# Signal to noise #
	cm_s2n_r1, cm_s2n_r2 = df[cm_s2n_r_hdr1], df[cm_s2n_r_hdr2]
	# PSF size #
	psfrec_t1, psfrec_t2 = df[psfrec_t_hdr1], df[psfrec_t_hdr2]
	# Cast into array to take absolute value in next step #
	full_magnitude1 = np.array(full_magnitude1)
	full_magnitude2 = np.array(full_magnitude2)
	idx_good = np.where( (abs(full_magnitude1) != 9999.0) & (abs(full_magnitude1) != 99.0) & (abs(full_magnitude1) != 37.5) & (abs(full_magnitude2) != 9999.0) & (abs(full_magnitude2) != 99.0) & (abs(full_magnitude2) != 37.5) & (flag1 == 0) & (flag2 == 0) & (cm_flag1 == 0) & (cm_flag2 == 0) & (cm_s2n_r1 > 10) & (cm_s2n_r2 > 10) & (cm_t1/cm_t_err1 > 0.5) & (cm_t2/cm_t_err2 > 0.5) & (cm_t1/psfrec_t1 > 0.5) & (cm_t2/psfrec_t2 > 0.5) )[0]
idx_bad = []
if PLOT_FLAGGED_OBJS:
counter_bad_mag = 0
for i in np.arange(0, len(full_magnitude1)):
# Get rid of these objects #
if abs(full_magnitude1[i]) != 9999.0 and abs(full_magnitude2[i]) != 9999.0 and full_magnitude1[i] != 37.5 and full_magnitude2[i] != 37.5 and full_magnitude1[i] != 99.0 and full_magnitude2[i] != 99:
counter_bad_mag += 1
if flag1[i] != 0 or flag2[i] != 0 or cm_flag1[i] != 0 or cm_flag2[i] != 0 or cm_s2n_r1[i] < 10 or cm_s2n_r2[i] < 10 or cm_t1[i]/cm_t_err1[i] < 0.5 or cm_t2[i]/cm_t_err2[i] < 0.5 or cm_t1[i]/psfrec_t1[i] < 0.5 or cm_t2[i]/psfrec_t2[i] < 0.5:
idx_bad.append(i)
if PRINTOUTS:
		print 'Eliminated objects with magnitudes equal to +/- 9999, +/- 99, and 37.5 and objects with nonzero flags for: ', flag_hdr1, ', ', flag_hdr2, ', ', cm_flag_hdr1, ', ', cm_flag_hdr2, ' ...'
print ' Eliminated objects with signal-to-noise ratio < 10 ...'
print ' Eliminated objects with cm_T/cm_T_err < 0.5 ...'
print ' Eliminated objects with cm_T/psfrec_T < 0.5 ...'
if PLOT_FLAGGED_OBJS is False:
idx_bad = None
return idx_good, idx_bad
def handle_flags(df, flag_hdr1, flag_hdr2, filter_name, full_magnitude1, full_magnitude2, realization_number, tile_name):
"""Examine a particular flag and write to log file. Can also be used to check all flags in a list of flags.
Args:
df (pandas DataFrame)
filter_name (str) -- Allowed values: 'g' 'r' 'i' 'z'.
full_magnitude1, full_magnitude2 (numpy.ndarray if directly from `df[hdr]` OR list of floats if from `get_floats_from_string()`) -- Values read directly from pandas DataFrame via `df[hdr]`; no objects removed using nonzero flag values and no quality cuts performed.
realization_number (int) -- Allowed values: 0 1 2 None. Refers to Balrog injection and None refers to a one-realization run.
tile_name (str) --
Returns:
idx_good (list of ints) -- Indices of objects with flags values of zero.
idx_bad (list of ints) -- Indices of objects with nonzero flag values.
"""
idx_good, idx_bad = [], []; counter_idx_bad = 0
# If one catalog does not have the appropriate header, check it twice in the catalog that does have it so code still runs #
if flag_hdr1 is None:
flag_hdr1 = flag_hdr2
if flag_hdr2 is None:
flag_hdr2 = flag_hdr1
### psf_flags are strings of form '(0,0,0,0)' ###
if flag_hdr1 is not None and flag_hdr2 is not None and 'psf' not in flag_hdr1 and 'psf' not in flag_hdr2:
flag1 = df[flag_hdr1]
flag2 = df[flag_hdr2]
if 'psf' in flag_hdr1 and 'psf' in flag_hdr2:
flag1 = get_floats_from_string(df=df, hdr=flag_hdr1, filter_name=filter_name)
flag2 = get_floats_from_string(df=df, hdr=flag_hdr2, filter_name=filter_name)
### Check for flags ###
for i in np.arange(0, len(full_magnitude1)):
if abs(full_magnitude1[i]) != 9999.0 and abs(full_magnitude2[i]) != 9999.0 and full_magnitude1[i] != 37.5 and full_magnitude2[i] != 37.5 and full_magnitude1[i] != 99.0 and full_magnitude2[i] != 99:
if flag1[i] == 0 and flag2[i] == 0:
idx_good.append(i)
if flag1[i] != 0 or flag2[i] != 0:
idx_bad.append(i)
counter_idx_bad += 1
### Write flags to file with headers TILE, FILTER, TYPE, REALIZATION, FLAG1_HEADER, FLAG2_HEADER, FLAG1_VALUE, FLAG2_VALUE, MAG1, MAG2 ###
FD_FLAG.write(str(tile_name) + '\t' + str(filter_name) + '\t' + str(RUN_TYPE) + '\t' + str(realization_number) + '\t' + str(flag_hdr1) + '\t' + str(flag_hdr2) + '\t' + str(flag1[i]) + '\t' + str(flag2[i]) + '\t' + str(full_magnitude1[i]) + '\t' + str(full_magnitude2[i]) +'\n')
if PRINTOUTS:
print 'For tile: ', str(tile_name), ' and filter: ', str(filter_name), ', checked flags: ', flag_hdr1, ' & ', flag_hdr2, '...'
### Check if flags were found ###
if counter_idx_bad > 0 and PRINTOUTS:
print ' Number of flags for magnitudes values 9999, 99, 37.5 and flags ', str(flag_hdr1), ' and ', str(flag_hdr2), ' : ', counter_idx_bad, '\n'
return idx_good, idx_bad
def calculate_total_fractional_magnitude_error(cov_hdr, df, filter_name, flux_hdr, idx_good):
"""Calculate the magnitude error via 1.08 * (flux_cov[i][i])^0.5 / flux[i] and ignore flagged objects in error calculation.
Args:
cov_hdr (str) -- Header for flux covariance matrix in the matched catalog.
df (pandas DataFrame)
filter_name (str) -- Allowed values: 'g' 'r' 'i' 'z'.
flux_hdr (str) -- Headers refer to column names in the matched catalog.
idx_good (list of ints) -- Indices with flag values equal to zero.
Returns:
error (list of floats) -- The magnitude error corresponding to each object.
"""
# Uncleaned lists for flux and flux covariance #
full_flux = get_floats_from_string(df=df, hdr=flux_hdr, filter_name=filter_name)
full_flux_cov = get_matrix_diagonal_element(df=df, hdr=cov_hdr, filter_name=filter_name)
error, flux, fluxcov = [], [], []; counter_neg = 0
# 'Safe' indices #
for i in idx_good:
flux.append(full_flux[i])
fluxcov.append(full_flux_cov[i])
# Calculations #
for i in np.arange(0, len(flux)):
# Throw out negative fluxcov (error calculation involves taking the square root of a negative) #
if fluxcov[i] < 0:
error.append(0)
counter_neg += 1
if fluxcov[i] == 0:
print 'cm_flux_cov is 0'
if fluxcov[i] > 0:
			err = 1.08 * fluxcov[i]**0.5 / flux[i] # Pogson's constant: 2.5/ln(10) ~ 1.08
error.append(err)
if PRINTOUTS:
print 'Calculated the magnitude error for filter: ', filter_name
print ' Number of negative cm_flux_cov: ', counter_neg, ' / ', len(flux), '\n'
return error
def calculate_and_bin_cm_T_signal_to_noise(cm_t_hdr, cm_t_err_hdr, df, idx_good, clean_magnitude1, clean_magnitude2):
"""Calculate measured signal-to-noise ratio via cm_T/cm_T_err (cuts performed on truth catalogs).
Args:
cm_t_hdr (str) -- Header for the size squared of object. Headers refer to column names in the matched catalog.
cm_t_err_hdr (str) -- Header for error on the size squared of object. Headers refer to column names in the matched catalog.
df (pandas DataFrame)
idx_good (list of ints) -- Indices where no flags exist.
clean_magnitude1, clean_magnitude2 (list of floats) -- Magnitudes with flags removed.
Returns:
s2n (list of floats) -- cm_T signal-to-noise at each 'safe' index.
"""
cm_t = get_good_data(df=df, hdr=cm_t_hdr, idx_good=idx_good, magnitude=False, filter_name=None)
cm_t_err = get_good_data(df=df, hdr=cm_t_err_hdr, idx_good=idx_good, magnitude=False, filter_name=None)
# cm_T signal to noise (s2n) #
cm_t_s2n = []
for i in np.arange(0, len(cm_t)):
cm_t_s2n.append(abs(cm_t[i]/cm_t_err[i]))
# Bin signal to noise #
# Bins suggested by Spencer Everett #
bins = [0, 1, 9, 20, max(cm_t_s2n)]
if PRINTOUTS:
		print 'Binning cm_T_s2n with bins: ', bins, ' and headers/axlabels:', cm_t_hdr, ', ', cm_t_err_hdr, '...'
print ' Min and max absolute value of cm_T signal-to-noise: ', min(cm_t_s2n), ' and ', max(cm_t_s2n), '...'
# idx_list is a list of lists to preserve bin structure #
binned_s2n, binned_hax_mag, binned_vax_mag, idx_list = [], [], [], []
for j in np.arange(0, len(bins)-1):
idx_temp = []
for i in np.arange(0, len(cm_t_s2n)):
if cm_t_s2n[i] > bins[j] and cm_t_s2n[i] < bins[j+1]:
idx_temp.append(i)
if PRINTOUTS:
print ' For cm_T_s2n, number of objects in bin ', bins[j], '-', bins[j+1], ': ', len(idx_temp)
idx_list.append(idx_temp)
#idx_temp = np.where(s2n > bins[j] & (s2n < bins[j+1]))
if PRINTOUTS:
print ' '
return idx_list, bins, cm_t_s2n
def get_68percentile_from_normalized_data(norm_dm_list, bins, hax_mag_list):
"""Calculate the point on the normalized vertical axis corresponding to the 68th percentile of the data for each bin used in the error calculation.
Args:
norm_dm_list (list of list of floats) -- Normalized delta magnitudes. Bin structure preserved.
bins (list of floats) -- Bins used in error calculation.
hax_mag_list (list of list of floats) -- Magnitudes on the horizontal axis. Bin structure preserved.
Returns:
vax_68percentile (list of floats) -- Point on the vertical axis (vax) corresponding to 68 percentile. Each element in the list corresponds to a different bin.
bins (list of floats) -- Bins used in error calculation.
"""
vax_68percentile, neg_vax_34percentile, pos_vax_34percentile = [], [], []
PLOT_HIST = False
# Loop through bins (b) #
for b in np.arange(0, len(norm_dm_list)):
if norm_dm_list[b] is None:
vax_68percentile.append(None)
neg_vax_34percentile.append(None)
pos_vax_34percentile.append(None)
if norm_dm_list[b] is not None:
### Find 68th percentile about zero ###
# Values in current bin (icb) #
vax_mag_list_icb = norm_dm_list[b]
# Take absolute value of each point in bin #
abs_vax_mag_list_icb = [abs(elmt) for elmt in vax_mag_list_icb]
# Percentile sorts the data #
vax_68percentile.append(np.percentile(abs_vax_mag_list_icb, 68, interpolation='lower'))
if PRINTOUTS_MINOR:
# Check the percentile because interpolation='lower' was used #
num = 0
for j in np.arange(0, len(norm_dm_list[b])):
if abs(norm_dm_list[b][j]) <= np.percentile(abs_vax_mag_list_icb, 68, interpolation='lower'):
num += 1
print 'Number of objects within 68 percentile via np.percentile(interpolation=lower): ', float(num)/len(norm_dm_list[b]), '...\n'
### Find 34th percentile of positive and negative values separately ###
neg_vax, pos_vax = [], []
counter_neg, counter_pos = 0, 0
for j in np.arange(0, len(vax_mag_list_icb)):
if vax_mag_list_icb[j] < 0:
neg_vax.append(vax_mag_list_icb[j])
counter_neg += 1
if vax_mag_list_icb[j] > 0:
pos_vax.append(vax_mag_list_icb[j])
counter_pos += 1
# Check if lists are populated #
if counter_neg > 0:
neg_vax_34percentile.append(np.percentile(neg_vax, 34, interpolation='lower'))
if counter_pos > 0:
pos_vax_34percentile.append(np.percentile(pos_vax, 34, interpolation='lower'))
if counter_neg == 0:
neg_vax_34percentile.append(None)
if counter_pos == 0:
pos_vax_34percentile.append(None)
			# Plot histogram to see distribution of data (data is not normally distributed) #
if PLOT_HIST:
plt.figure()
norm_dm = [abs(elmt) for elmt in norm_dm_list[b]]
plt.hist(norm_dm)
plt.title('Bin LHS: ' + str(bins[b]))
plt.xlabel(r'$\Delta M$')
plt.axvline(x=0, color='black', linestyle=':', linewidth=0.5)
plt.show()
return vax_68percentile, bins, neg_vax_34percentile, pos_vax_34percentile
def bin_and_cut_measured_magnitude_error(clean_magnitude1, clean_magnitude2, error1, error2, filter_name):
"""Remove error values corresponding to objects where |Delta-M| > 3. Do not consider error corresponding to empty bins nor bins with a small number of objects.
Args:
clean_magnitude1, clean_magnitude2 (list of floats) -- Objects with flag values of zero and/or quality cuts performed.
error1, error2 (list of floats) -- 1 and 2 refer to the matched catalogs.
Returns:
binned_hax_mag_median (list of floats) -- List of medians of the horizontal axis magnitude in each bin.
binned_vax_mag_median (list of floats) -- List of medians of the vertical axis magnitude in each bin. Vertical axis is computed via clean_magnitude1 - clean_magnitude2.
binned_err_median (list of floats) -- Median of the error in each bin.
bins (list of floats) -- Bins used. Binned according to horizontal axis.
binned_hax_mag_list, binned_vax_mag_list, binned_err_list (list of lists of floats) -- Stores values in each bin (horizontal axis magnitude, vertical axis magnitude, error, respectively).
"""
### !!!!! Comment this block out if errors are to be computed using both catalogs regardless of origin (measured catalog or truth catalog) ###
if 'meas' in AXLABEL1 and 'meas' not in AXLABEL2:
error2 = np.zeros(len(error1))
if PRINTOUTS:
print 'Using measured catalog (catalog1) for error calculation ... '
if 'meas' in AXLABEL2 and 'meas' not in AXLABEL1:
error1 = np.zeros(len(error2))
if PRINTOUTS:
print 'Using measured catalog (catalog2) for error calculation ... '
if 'meas' in AXLABEL1 and 'meas' in AXLABEL2:
if PRINTOUTS:
print 'Using measured catalog (catalog1 AND catalog2) for error calculation ... '
if 'true' in AXLABEL1 and 'true' in AXLABEL2:
sys.exit('Errors are to be computed using the measured catalog(s), not the truth catalog(s).')
### Define bins ###
step = 0.5
# Find the absolute min and max of the magnitudes in the matched catalog #
limlow1, limlow2 = min(clean_magnitude1), min(clean_magnitude2)
limhigh1, limhigh2 = max(clean_magnitude1), max(clean_magnitude2)
limlow, limhigh = min([limlow1, limlow2]), max([limhigh1, limhigh2])
# Define bins limits by ints #
limlow, limhigh = int(limlow), int(limhigh)
# Introduce magnitude cutoff to tame errors #
if 'gal' in MATCH_CAT1 or 'gal' in MATCH_CAT2:
limhigh = 26
if 'star' in MATCH_CAT1 or 'star' in MATCH_CAT2:
limhigh = 24
if PRINTOUTS:
print 'Forcing magnitudes to be binned with max ', limhigh, '...'
bins = np.arange(limlow, limhigh, step)
# Stores median of values in each bin #
binned_hax_mag_median, binned_vax_mag_median, binned_err_median = [], [], []
# List of lists. Stores all values in each bin #
binned_hax_mag_list, binned_vax_mag_list, binned_err_list = [], [], []
counter_empty_bin = 0
# Bin magnitude errors according to the magnitude on the horizontal axis #
if SWAP_HAX:
hax_mag = clean_magnitude2
if SWAP_HAX is False:
hax_mag = clean_magnitude1
# Magnitude on the vertical axis (vax) #
vax_mag = np.array(clean_magnitude1) - np.array(clean_magnitude2)
### Write filter header to log file ###
FD_MAG_BINS.write('Filter: ' + str(filter_name) + '\n')
### Populate each bin ###
for j in np.arange(limlow, limhigh, step):
binned_hax_mag_temp, binned_vax_mag_temp, binned_err_temp, counter_err = [], [], [], 0
for i in np.arange(0, len(clean_magnitude1)):
# Do not calculate errors using outlier magnitudes (chosen to be |Delta-M| > 3). Bin magnitude errors according to the magnitude on the horizontal axis of the plot #
if hax_mag[i] >= j and hax_mag[i] < j+step and abs(vax_mag[i]) < 3:
binned_err_temp.append((error1[i]**2 + error2[i]**2)**0.5)
binned_hax_mag_temp.append(hax_mag[i])
binned_vax_mag_temp.append(vax_mag[i])
counter_err += 1
# Written in log file, hence 'minor' #
if PRINTOUTS_MINOR:
print ' For magnitude, number of objects in bin ', round(j, 2), '-', round(j+step, 2), ': ', counter_err, '...'
### Write to log file ###
if counter_err == 0:
write_median, write_err = None, None
if counter_err > 0:
write_median, write_err = np.median(binned_hax_mag_temp), np.median(binned_err_temp)
FD_MAG_BINS.write(str(counter_err) + '\t' + str(round(j, 2)) + '\t' + str(round(j+step, 2)) + '\t' + str(write_median)+ '\t' + str(write_err) + '\n')
### Tame error calculation and normalization by adding zeros to empty bins and bins with a small number of points ###
# Define 'small' #
if STACK_REALIZATIONS:
CONST = 30
if STACK_REALIZATIONS is False:
CONST = 10
if counter_err <= CONST:
counter_empty_bin += 1
binned_err_median.append(None)
binned_hax_mag_median.append(None)
binned_vax_mag_median.append(None)
# Add to list of lists to keep bin structure #
binned_err_list.append(None)
binned_hax_mag_list.append(None)
binned_vax_mag_list.append(None)
if counter_err > CONST:
binned_err_median.append(np.median(binned_err_temp))
binned_hax_mag_median.append(np.median(binned_hax_mag_temp))
binned_vax_mag_median.append(np.median(binned_vax_mag_temp))
# Add to list of lists to keep bin structure #
binned_err_list.append(binned_err_temp)
binned_hax_mag_list.append(binned_hax_mag_temp)
binned_vax_mag_list.append(binned_vax_mag_temp)
if PRINTOUTS:
if SWAP_HAX:
print 'Binned clean_magnitude2 with step size: ', step, ', and minimum: ', limlow, ', and maximum: ', limhigh, '...'
if SWAP_HAX is False:
print 'Binned clean_magnitude1 with step size: ', step, ', and minimum: ', limlow, ', and maximum: ', limhigh, '...'
print ' Calculated errors using objects where |DeltaM| < 3 ... '
print ' Excluded bins with less than ', CONST, ' objects ... \n'
return binned_hax_mag_median, binned_vax_mag_median, binned_err_median, bins, binned_hax_mag_list, binned_vax_mag_list, binned_err_list
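# Illustrative sketch (hypothetical error values, not taken from any catalog): within each
# horizontal-axis magnitude bin the per-object error is the quadrature sum of the two catalog
# errors, and the bin is summarized by its median, mirroring binned_err_temp above, e.g.
#   np.median([(e1**2 + e2**2)**0.5 for e1, e2 in zip([0.01, 0.02], [0.02, 0.03])])  # ~0.029
# This per-bin median is the quantity from which the 1-sigma curve in plotter() is drawn.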
def normalize_plot_maintain_bin_structure(clean_magnitude1, clean_magnitude2, error1, error2, filter_name):
"""Normalize the vertical axis using error and uphold the bin structure.
Args:
clean_magnitude1, clean_magnitude2 (list of floats) -- Magnitudes with flagged objects removed and/or quality cuts performed.
error1, error2 (list of floats) -- Magnitude errors; 1 and 2 refer to the matched catalogs.
Returns:
norm_dm_list (list of list of floats) -- Delta-Magnitudes normalized by the median error in each bin. Bin structure preserved.
bins (list of floats) -- Bins used in error calculation.
hax_mag_list (list of list of floats) -- Magnitudes on the horizontal axis. Bin structure preserved.
"""
# List of lists. Stores all values in each bin #
norm_dm_list, hax_mag_list = [], []
# binned_err_median: stores median of values in each bin. *_list: stores all values in each bin #
binned_err_median, bins, binned_hax_mag_list, binned_vax_mag_list = bin_and_cut_measured_magnitude_error(clean_magnitude1=clean_magnitude1, clean_magnitude2=clean_magnitude2, error1=error1, error2=error2, filter_name=filter_name)[2:-1]
# Loop through bins (b) #
for b in np.arange(0, len(binned_vax_mag_list)):
# Normalized Delta-Magnitudes (dm) in current bin (icb) #
norm_dm_icb, hax_mag_icb = [], []
# None is a placeholder for empty bins and bins with few objects #
if binned_err_median[b] is None:
norm_dm_list.append(None)
hax_mag_list.append(None)
#if vax_mag_icb != 0:
if binned_err_median[b] is not None:
vax_mag_icb = binned_vax_mag_list[b]
for i in np.arange(0, len(vax_mag_icb)):
norm_dm_icb.append(vax_mag_icb[i]/binned_err_median[b])
hax_mag_icb.append(binned_hax_mag_list[b][i])
# List of lists to keep bin structure #
hax_mag_list.append(hax_mag_icb)
norm_dm_list.append(norm_dm_icb)
return norm_dm_list, bins, hax_mag_list
def normalize_plot(norm_delta_mag_list, bins, hax_mag_list):
"""Normalize plot to 1-sigma curve using tame magnitude errors only (use bin_and_cut_measured_magnitude_error()).
Args:
norm_dm_list (list of list of floats) -- Normalized delta magnitudes in each bin.
bins (list of floats) -- Bins used in error calculation.
hax_mag_list (list of list of floats) -- Magnitudes on the horizontal axis. Bin structure preserved.
Returns:
norm_dm (list of floats) -- Delta-Magnitude normalized by error. Delta-Magnitude computed via magnitude1 - magnitude2.
hax_mag (list of floats) -- Magnitude to be plotted on the horizontal axis.
"""
### Remove None entries so that lists can be flattened. None was the placeholder for missing lists due to an empty or small bin. ###
norm_delta_mag_list[:] = [temp for temp in norm_delta_mag_list if temp is not None]
hax_mag_list[:] = [temp for temp in hax_mag_list if temp is not None]
### Flatten lists ###
hax_mag = [item for sublist in hax_mag_list for item in sublist]
norm_dm = [item for sublist in norm_delta_mag_list for item in sublist]
### Check ###
idx = []
for b in np.arange(0, len(bins)-1):
for j in np.arange(0, len(hax_mag)):
if hax_mag[j] >= bins[b] and hax_mag[j] <= bins[b+1]:
idx.append(j)
norm_dm, hax_mag = np.array(norm_dm), np.array(hax_mag)
norm_dm, hax_mag = norm_dm[idx], hax_mag[idx]
return norm_dm, hax_mag, bins
def one_sigma_counter(norm_delta_mag, clean_magnitude1, bins, hax_mag):
"""Find the number of objects within 1-sigma, where 1-sigma is calculated according to the error. This function is only called if NORMALIZE is True.
Args:
norm_delta_mag (list of floats) -- Normalized Delta-Magnitude.
Returns:
counter_1sig (int) -- Number of objects within 1-sigma curve.
"""
counter_1sig = 0
# Cutoffs were introduced in error calculation. Consider only points not cutoff #
maglow, maghigh = min(bins), max(bins)
hax_mag, norm_delta_mag = np.array(hax_mag), np.array(norm_delta_mag)
norm_delta_mag = norm_delta_mag[(hax_mag >= maglow) & (hax_mag <= maghigh)]
for k in norm_delta_mag:
if abs(k) < 1.0:
counter_1sig += 1
if PRINTOUTS:
print 'Fraction of objects within 1-sigma: ', counter_1sig, ' / ', len(norm_delta_mag), ' = ', str(float(counter_1sig) / len(norm_delta_mag))
print ' Fraction of objects considered (objects plotted on normalized plot / objects plotted on scatter plot): ', str(float(len(norm_delta_mag)) / len(clean_magnitude1)), '\n'
return counter_1sig
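# Illustrative sketch (hypothetical numbers): the 1-sigma count above is simply the number of
# normalized Delta-Magnitudes with |Delta-M / sigma| < 1, restricted to the magnitude range
# covered by the error bins. For example,
#   norm_delta_mag = [0.2, -0.8, 1.5, 0.4]  -->  counter_1sig = 3 (3 of 4 objects within 1-sigma).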
def get_flag_type(df, k):
"""Print the flag type() once.
Args:
df (pandas DataFrame)
k (int) -- Counter implemented so printout is not repeated.
Returns:
0
"""
if k == 0:
for flag_hdr in FLAG_HDR_LIST:
print 'HEADER:', str(flag_hdr), ' -- EXAMPLE:', df[flag_hdr][0], ' -- TYPE:', type(df[flag_hdr][0])
k += 1
return 0
def get_color(filter_name):
"""Color code plot such that each griz band is a different color.
Args:
filter_name (str) -- Allowed values: 'g' 'r' 'i' 'z'
Returns:
color (str)
cmap (str) -- Colormap used for Delta-Magnitude colorbars.
"""
if filter_name == 'g':
color, cmap = 'green', 'Greens'
if filter_name == 'r':
color, cmap = 'orange', 'Oranges'
if filter_name == 'i':
#color, cmap = 'purple', 'Purples'
color, cmap = 'darkgrey', 'Greys'
if filter_name == 'z':
#color, cmap = 'blue', 'Blues'
color, cmap = 'navy', 'Blues'
return color, cmap
def logger(delta_mag, tile_name, filter_name, realization_number, clean_magnitude1, full_magnitude1, bins, hax_mag):
"""Write to log files to record number of objects plotted and number of objects within 1sigma.
Args:
filter_name (str) -- Allowed values: 'g' 'r' 'i' 'z'.
clean_magnitude1 (list of floats) -- Objects with nonzero flags and/or quality cuts removed.
full_magnitude1 (list of floats) -- Values read directly from the pandas DataFrame via `df[hdr]`; no objects removed using nonzero flag values and no quality cuts performed.
realization_number (int) -- Allowed values: 0 1 2 None. Refers to the Balrog injection realization; None refers to a single-realization run.
Returns:
0
"""
if NORMALIZE:
num_1sig = one_sigma_counter(norm_delta_mag=delta_mag, clean_magnitude1=clean_magnitude1, bins=bins, hax_mag=hax_mag)
# Record number of objects plotted within 1sigma #
FD_1SIG.write('Number of objects within 1sigma for tile ' + str(tile_name) + ', filter ' + str(filter_name) + ', type ' + str(RUN_TYPE) + ', realization ' + str(realization_number) + ' : ' + str(num_1sig) + ' / ' + str(len(clean_magnitude1)) + ' = ' + str(float(num_1sig) / len(clean_magnitude1)) + '\n')
# Record number of objects plotted (nop) #
FD_NOP.write('Number of objects plotted after flag cuts for tile ' + str(tile_name) + ', filter ' + str(filter_name) + ', type ' + str(RUN_TYPE) + ', realization ' + str(realization_number) + ' : ' + str(len(clean_magnitude1)) + ' / ' + str(len(full_magnitude1)) + ' = ' + str(float(len(clean_magnitude1)) / len(full_magnitude1)) + '\n')
return 0
def get_colorbar_value(df, cm_t_hdr, cm_t_err_hdr, idx_good, clean_magnitude1, clean_magnitude2, axlabel, inj):
"""Get data that will be used for the colorbar of plot.
Args:
df (pandas DataFrame)
*_hdr (str) -- Headers refer to columns in the matched catalog.
inj (bool)
Returns:
cbar_val -- Values used to make colorbar.
cbar_idx_list -- Can be None
cbar_bins -- Can be None
cbar_axlabel (str) -- Label for the colorbar.
"""
if 'true' in axlabel:
sys.exit('ERROR. Colorbars should describe measured catalog values, not truth catalog values.')
if CM_T_S2N_COLORBAR:
cbar_idx_list, cbar_bins, cbar_val = calculate_and_bin_cm_T_signal_to_noise(cm_t_hdr=cm_t_hdr, cm_t_err_hdr=cm_t_err_hdr, df=df, idx_good=idx_good, clean_magnitude1=clean_magnitude1, clean_magnitude2=clean_magnitude2)
cbar_axlabel = 'cm_T_s2n_'+str(AXLABEL)
if CM_T_ERR_COLORBAR:
# For measured catalog, cuts performed on truth catalogs #
cbar_val = get_good_data(df=df, hdr=cm_t_err_hdr, idx_good=idx_good, magnitude=False, filter_name=None)
cbar_axlabel = str(cm_t_err_hdr[:-2]) + '_' + str(AXLABEL)
cbar_idx_list, cbar_bins = None, None
if CM_T_COLORBAR:
cbar_val = get_good_data(df=df, hdr=cm_t_hdr, idx_good=idx_good, magnitude=False, filter_name=None)
cbar_axlabel = str(cm_t_hdr[:-2]) + '_' + str(AXLABEL)
cbar_idx_list, cbar_bins = None, None
if CM_T_S2N_COLORBAR is False and CM_T_ERR_COLORBAR is False and CM_T_COLORBAR is False:
cbar_val, cbar_idx_list, cbar_bins, cbar_axlabel = None, None, None, None
if inj and cbar_axlabel is not None:
cbar_axlabel = 'inj_' + cbar_axlabel
return cbar_val, cbar_idx_list, cbar_bins, cbar_axlabel
def get_errors(mag_err_hdr1, mag_err_hdr2, df, filter_name, idx_good):
"""Get errors for plot.
Args:
*_hdr (str ) -- Headers refer to columns in the matched catalog. Can be None.
df (pandas DataFrame)
Returns:
err1, err2 (list of floats) -- Will be None if PLOT_1SIG is False.
"""
if PLOT_1SIG:
if mag_err_hdr1 is None:
err1 = calculate_total_fractional_magnitude_error(df=df, flux_hdr=CM_FLUX_HDR1, cov_hdr=CM_FLUX_COV_HDR1, filter_name=filter_name, idx_good=idx_good)
if mag_err_hdr1 is not None:
if MATCH_CAT1 == 'coadd':
err1 = get_floats_from_string(df=df, hdr=mag_err_hdr1, filter_name=filter_name)
if MATCH_CAT1 == 'star_truth':
err1 = df[str(mag_err_hdr1[:-2]) + '_' + filter_name.upper() + str(mag_err_hdr1[-2:])]
if mag_err_hdr2 is None:
err2 = calculate_total_fractional_magnitude_error(df=df, flux_hdr=CM_FLUX_HDR2, cov_hdr=CM_FLUX_COV_HDR2, filter_name=filter_name, idx_good=idx_good)
if mag_err_hdr2 is not None:
if MATCH_CAT2 == 'coadd':
err2 = get_floats_from_string(df=df, hdr=mag_err_hdr2, filter_name=filter_name)
if MATCH_CAT2 == 'star_truth':
err2 = df[str(mag_err_hdr2[:-2]) + '_' + filter_name.upper() + str(mag_err_hdr2[-2:])]
if PLOT_1SIG is False:
print 'WARNING: Not plotting 1-sigma curve so log file will FALSELY report that ZERO objects are within 1sigma ...\n'
err1, err2 = None, None
return err1, err2
def get_good_data(df, hdr, idx_good, magnitude, filter_name):
"""Get the data corresponding to good indices (no flags or post quality cuts).
Args:
df (pandas DataFrame)
hdr (str) -- Header for the DataFrame.
idx_good (list of floats) -- Safe indices
magnitude (bool) -- Get data for magnitudes?
filter_name (str) -- Only used if magnitude is True.
"""
if magnitude:
full_data = get_floats_from_string(df=df, hdr=hdr, filter_name=filter_name)
if magnitude is False:
full_data = df[hdr]
clean_data = np.array(full_data)[idx_good]
return clean_data
def get_plot_variables(filter_name, df, mag_hdr1, mag_hdr2, mag_err_hdr1, mag_err_hdr2, realization_number, tile_name, mag_axlabel1, mag_axlabel2):
"""Get quantities needed for plotter() and subplotter().
Args:
df (pandas DataFrame)
*_hdr (str) -- Can be None.
Returns:
"""
# Rewrite mag_axlabels. Transform, for example, cm_mag_true to cm_mag_{filter}_true or psf_mag_meas to psf_mag_{filter}_meas #
if RUN_TYPE is None:
mag_axlabel1 = str(mag_hdr1[:-2]) + '_' + str(AXLABEL1)
if RUN_TYPE is not None:
mag_axlabel1 = str(mag_hdr1) + '_' + str(AXLABEL1)
mag_axlabel2 = str(mag_hdr2[:-2]) + '_' + str(AXLABEL2)
# Transform, for example, cm_mag_true to cm_mag_{filter}_true, or psf_mag_meas to psf_mag_{filter}_meas #
mag_axlabel1 = mag_axlabel1[:-4] + str(filter_name) + '_' + mag_axlabel1[-4:]
mag_axlabel2 = mag_axlabel2[:-4] + str(filter_name) + '_' + mag_axlabel2[-4:]
if INJ1:
mag_axlabel1 = 'inj_' + mag_axlabel1
if INJ2:
mag_axlabel2 = 'inj_' + mag_axlabel2
# Coadd catalogs. Combined to get '(m_g, m_r, m_i, m_z)' then matched. #
if MATCH_CAT1 == 'coadd':
mag_axlabel1 = 'MAG_AUTO_'+ str(AXLABEL1)
if MATCH_CAT2 == 'coadd':
mag_axlabel2 = 'MAG_AUTO_'+ str(AXLABEL2)
if CM_T_HDR1 is not None:
cm_t_axlabel1 = str(CM_T_HDR1[:-2]) + '_' + str(AXLABEL1)
if CM_T_HDR2 is not None:
cm_t_axlabel2 = str(CM_T_HDR2[:-2]) + '_' + str(AXLABEL2)
if CM_T_ERR_HDR1 is not None:
cm_t_err_axlabel1 = str(CM_T_ERR_HDR1[:-2]) + '_' + str(AXLABEL1)
if CM_T_ERR_HDR2 is not None:
cm_t_err_axlabel2 = str(CM_T_ERR_HDR2[:-2]) + '_' + str(AXLABEL2)
### Define variables ###
# Get magnitude1 #
fullmag1 = get_floats_from_string(df=df, hdr=mag_hdr1, filter_name=filter_name)
# Get magnitude2 #
fullmag2 = get_floats_from_string(df=df, hdr=mag_hdr2, filter_name=filter_name)
### Clean the data: remove flagged objects and/or perform quality cuts ###
if EH_CUTS:
idx_good = get_good_index_using_quality_cuts(df, full_magnitude1=fullmag1, full_magnitude2=fullmag2, cm_flag_hdr1=CM_FLAGS_HDR1, cm_flag_hdr2=CM_FLAGS_HDR2, flag_hdr1=FLAGS_HDR1, flag_hdr2=FLAGS_HDR2)[0]
if EH_CUTS is False:
idx_good = get_good_index_using_primary_flags(df=df, full_magnitude1=fullmag1, full_magnitude2=fullmag2, cm_flag_hdr1=CM_FLAGS_HDR1, cm_flag_hdr2=CM_FLAGS_HDR2, flag_hdr1=FLAGS_HDR1, flag_hdr2=FLAGS_HDR2)[0]
cleanmag1 = get_good_data(df=df, hdr=mag_hdr1, idx_good=idx_good, magnitude=True, filter_name=filter_name)
cleanmag2 = get_good_data(df=df, hdr=mag_hdr2, idx_good=idx_good, magnitude=True, filter_name=filter_name)
# Some variables are set to None because they must be passed to plotter() #
cbar_val, cbar_idx_list, cbar_bins, cbar_axlabel = get_colorbar_value(df=df, cm_t_hdr=CM_T_HDR2, cm_t_err_hdr=CM_T_ERR_HDR2, idx_good=idx_good, clean_magnitude1=cleanmag1, clean_magnitude2=cleanmag2, axlabel=AXLABEL2, inj=INJ2)
### Define errors ###
err1, err2 = get_errors(mag_err_hdr1=mag_err_hdr1, mag_err_hdr2=mag_err_hdr2, df=df, filter_name=filter_name, idx_good=idx_good)
### Write flags to file ###
if LOG_FLAGS:
for i in np.arange(0, len(FLAG_HDR_LIST), 2):
# Bad index #
temp_idx = handle_flags(df=df, filter_name=filter_name, realization_number=realization_number, flag_hdr1=FLAG_HDR_LIST[i], flag_hdr2=FLAG_HDR_LIST[i+1], full_magnitude1=fullmag1, full_magnitude2=fullmag2, tile_name=tile_name)[1]
#flag_idx.append(temp_idx)
FLAG_HDR_LIST.extend(temp_idx)
if SHOW_PLOT is False and SAVE_PLOT is False:
# Reset to avoid errors #
counter_subplot = 1
### Print out the type() for each flag ###
if SHOW_FLAG_TYPE:
get_flag_type(df=df, k=counter_flag_type_printout)
counter_flag_type_printout += 1
return cbar_val, cbar_idx_list, cbar_bins, err1, err2, cleanmag1, cleanmag2, idx_good, cbar_axlabel, fullmag1, mag_axlabel1, mag_axlabel2
def plotter(mag_hdr1, mag_hdr2, cbar_val, error1, error2, filter_name, clean_magnitude1, full_magnitude1, mag_axlabel1, clean_magnitude2, mag_axlabel2, plot_title, realization_number, tile_name, idx_list, bins, cbar_axlabel, plot_name):
"""Plot a single magnitude versus delta-magnitude plot.
Args:
full_magnitude1, full_magnitude2 (numpy.ndarray if directly from `df` OR list of floats if from `get_floats_from_string()`) -- Values read directly from pandas DataFrame via `df[hdr]`; no objects removed using nonzero flag values and no quality cuts performed.
realization_number (str) -- Allowed values: 0 1 2 None. Refers to Balrog injection and None refers to a one-realization run.
Returns:
0
"""
### Get labels for vertical and horizontal axes. ###
'''
# Rewrite mag_axlabels. Transform, for example, cm_mag_true to cm_mag_{filter}_true or psf_mag_meas to psf_mag_{filter}_meas #
if RUN_TYPE is None:
mag_axlabel1 = str(mag_hdr1[:-2]) + '_' + str(AXLABEL1)
if RUN_TYPE is not None:
mag_axlabel1 = str(mag_hdr1) + '_' + str(AXLABEL1)
mag_axlabel2 = str(mag_hdr2[:-2]) + '_' + str(AXLABEL2)
# Transform, for example, cm_mag_true to cm_mag_{filter}_true, or psf_mag_meas to psf_mag_{filter}_meas #
mag_axlabel1 = mag_axlabel1[:-4] + str(filter_name) + '_' + mag_axlabel1[-4:]
mag_axlabel2 = mag_axlabel2[:-4] + str(filter_name) + '_' + mag_axlabel2[-4:]
if INJ1:
mag_axlabel1 = 'inj_' + mag_axlabel1
if INJ2:
mag_axlabel2 = 'inj_' + mag_axlabel2
'''
### Values to plot for normalized plot ###
if NORMALIZE:
# Args needed to call normalize_plot() #
norm_dm_list, bins, hax_mag_list = normalize_plot_maintain_bin_structure(clean_magnitude1=clean_magnitude1, clean_magnitude2=clean_magnitude2, error1=error1, error2=error2, filter_name=filter_name)
PLOT_68P, PLOT_34P_SPLIT = True, True
if PLOT_1SIG:
### Plot 1-sigma curve according to error calculation ###
plt.axhline(y=1.0, color='red', linestyle='--', linewidth=0.7, label='$1 \sigma_{mag\_meas}$')
plt.axhline(y=-1.0, color='red', linestyle='--', linewidth=0.7)
# Line width for top and sides of 68th percentile bins #
lwt = 1.1; lws = 0.7
if PLOT_34P_SPLIT:
### Plot the 68th percentile calculated from np.percentile() ###
vax_68percentile_list, bins, neg_vax_34percentile, pos_vax_34percentile = get_68percentile_from_normalized_data(norm_dm_list=norm_dm_list, bins=bins, hax_mag_list=hax_mag_list)
counter_legend1 = 0; color1 = 'cyan'
for b in np.arange(0, len(neg_vax_34percentile)-1):
# Horizontal bar bounds #
x_hbound = np.array([bins[b], bins[b+1]])
x_vbound1 = np.array([bins[b], bins[b]])
x_vbound2 = np.array([bins[b+1], bins[b+1]])
if neg_vax_34percentile[b] is not None:
# Horizontal bar bounds #
neg_y_hbound = np.array([neg_vax_34percentile[b], neg_vax_34percentile[b]])
# Vertical bar bounds #
y_vbound = np.array([neg_vax_34percentile[b], 0])
# Plot #
# Plot legend once #
if counter_legend1 == 0:
plt.plot(x_hbound, neg_y_hbound, color=color1, linewidth=lwt, label='$\pm P_{34}$')
counter_legend1 = 1
if counter_legend1 == 1:
plt.plot(x_hbound, neg_y_hbound, color=color1)
plt.plot(x_vbound1, y_vbound, color=color1, linewidth=lws, linestyle=':')
plt.plot(x_vbound2, y_vbound, color=color1, linewidth=lws, linestyle=':')
if pos_vax_34percentile[b] is not None:
# Horizontal bar bounds #
pos_y_hbound = np.array([pos_vax_34percentile[b], pos_vax_34percentile[b]])
# Vertical bar bounds #
y_vbound = np.array([0, pos_vax_34percentile[b]])
# Plot #
plt.plot(x_hbound, pos_y_hbound, color=color1, linewidth=lwt)
plt.plot(x_vbound1, y_vbound, color=color1, linewidth=lws, linestyle=':')
plt.plot(x_vbound2, y_vbound, color=color1, linewidth=lws, linestyle=':')
if PLOT_68P:
counter_legend2 = 0; color2 = 'fuchsia'
for b in np.arange(0, len(vax_68percentile_list)-1):
if vax_68percentile_list[b] is not None:
# Horizontal bar bounds #
x_hbound = np.array([bins[b], bins[b+1]])
y_hbound = np.array([vax_68percentile_list[b], vax_68percentile_list[b]])
# Vertical bar bounds #
x_vbound1, x_vbound2 = np.array([bins[b], bins[b]]), np.array([bins[b+1], bins[b+1]])
y_vbound = np.array([-1*vax_68percentile_list[b], vax_68percentile_list[b]])
# Plot legend once #
if counter_legend2 == 0:
plt.plot(x_hbound, y_hbound, color=color2, label='$P_{68}$', linewidth=lwt)
counter_legend2 += 1
if counter_legend2 == 1:
# Horizontal bar #
plt.plot(x_hbound, y_hbound, color=color2, linewidth=lwt)
plt.plot(x_hbound, -1.0*y_hbound, color=color2, linewidth=lwt)
# Vertical bar #
plt.plot(x_vbound1, y_vbound, color=color2, linewidth=lws, linestyle=':')
plt.plot(x_vbound2, y_vbound, color=color2, linewidth=lws, linestyle=':')
### Values to plot ###
deltam, hax_mag, bins = normalize_plot(norm_delta_mag_list=norm_dm_list, bins=bins, hax_mag_list=hax_mag_list)
# Labels and appearance #
plt.ylabel('('+str(mag_axlabel1) + ' - ' + str(mag_axlabel2)+') / '+ '$\sigma$', fontsize=8)
### For scatter plot ###
if NORMALIZE is False:
# Values to plot #
deltam = np.array(clean_magnitude1) - np.array(clean_magnitude2)
if SWAP_HAX:
hax_mag = clean_magnitude2
if SWAP_HAX is False:
hax_mag = clean_magnitude1
# Labels and appearance #
plt.ylabel(str(mag_axlabel1) + ' - ' + str(mag_axlabel2), fontsize=9)
### 1-sigma curve ###
if PLOT_1SIG:
hax, vax, err, bins = bin_and_cut_measured_magnitude_error(error1=error1, error2=error2, clean_magnitude1=clean_magnitude1, clean_magnitude2=clean_magnitude2, filter_name=filter_name)[:4]
### Remove None entries from x, y, and err (None was the placeholder for magnitude bins containing no objects) ###
err[:] = [temp for temp in err if temp is not None]
hax[:] = [temp for temp in hax if temp is not None]
vax[:] = [temp for temp in vax if temp is not None]
### Plot 1-sigma curve ###
plt.plot(hax, np.array(vax) + np.array(err), color='red', linestyle='-', linewidth=0.7, label='$1 \sigma_{mag\_meas}$')
plt.plot(hax, np.array(vax) - np.array(err), color='red', linestyle='-', linewidth=0.7)
### Write to log files to record the number of objects plotted and the number of objects within 1sigma ###
logger(delta_mag=deltam, filter_name=filter_name, clean_magnitude1=clean_magnitude1, full_magnitude1=full_magnitude1, realization_number=realization_number, tile_name=tile_name, bins=bins, hax_mag=hax_mag)
if PRINTOUTS:
print 'Plotting ', len(clean_magnitude1), ' objects ... \n'
### Plot ###
# One colorbar at a time. This error is caught at beginning of script #
if CM_T_S2N_COLORBAR is False and BIN_CM_T_S2N is False and CM_T_COLORBAR is False and CM_T_ERR_COLORBAR is False:
plt.scatter(hax_mag, deltam, color=get_color(filter_name=filter_name)[0], alpha=0.25, s=0.25)
if CM_T_S2N_COLORBAR or CM_T_ERR_COLORBAR or CM_T_COLORBAR:
'''To plot only the worst (smallest) s2n ratio:
plt.scatter(np.array(hax_mag)[idx_list[0]], np.array(deltam)[idx_list[0]], color='purple', alpha=1, s=1, label='%1.f'%bins[0]+'<cm_T_s2n<%1.f'%bins[1])
'''
plt.scatter(hax_mag, deltam, c=cbar_val, alpha=0.25, s=0.25, norm=matplotlib.colors.LogNorm(), cmap='gist_rainbow')
plt.colorbar(label=cbar_axlabel)
if BIN_CM_T_S2N:
colors = ['green', 'purple', 'cyan', 'orange', 'pink', 'yellow', 'black', 'blue']
for i in np.arange(0, len(idx_list)):
plt.scatter(np.array(hax_mag)[idx_list[i]], np.array(deltam)[idx_list[i]], color=colors[i], alpha=0.25, s=0.25, label='%1.f'%bins[i]+'<cm_T_s2n<%1.f'%bins[i+1])
if HEXBIN:
if NORMALIZE:
grid = (100, 1000)
if PRINTOUTS:
print ' Normalized hexbin has a large number of grid cells. Will take a moment to plot ... \n'
if NORMALIZE is False:
grid = 100
plt.hexbin(hax_mag, deltam, gridsize=grid, cmap=get_color(filter_name=filter_name)[1], bins='log')
plt.colorbar(label='log(counts)')
# Labels and appearance #
if SWAP_HAX:
plt.xlabel(str(mag_axlabel2), fontsize=9)
if SWAP_HAX is False:
plt.xlabel(str(mag_axlabel1), fontsize=9)
plt.axhline(y=0.0, color='k', linestyle=':', linewidth=0.5)
if YLOW is not None and YHIGH is not None:
plt.ylim([YLOW, YHIGH])
### Plot legend ###
if PLOT_1SIG and BIN_CM_T_S2N is False:
plt.legend(fontsize=8).draggable()
if BIN_CM_T_S2N:
# Increase marker size and opacity in legend #
lgnd = plt.legend(markerscale=4, fontsize=8)
for l in lgnd.legendHandles:
l.set_alpha(1)
if SUBPLOT is False:
plot_name = plot_name.replace('griz', filter_name)
### Save plot ###
if SAVE_PLOT:
print '-----> Saving plot as: ', plot_name
plt.savefig(plot_name)
if SHOW_PLOT:
plt.title(plot_title)
plt.show()
return 0
def subplotter(df, flag_idx, mag_hdr1, mag_hdr2, mag_err_hdr1, mag_err_hdr2, plot_name, plot_title, realization_number, tile_name):
"""Combine four subplots into a single plot with four panels (2-by-2). Declare variables needed for plotting.
Args:
*_hdr (str) -- Headers refer to columns in the matched catalog.
df (pandas DataFrame)
plot_name (str) -- Path and name for the plot. Used when save_plot is True and normalize is False.
realization_number (int) -- Allowed values: 0 1 2 None. Refers to Balrog injection and None refers to a one-realization run.
Returns:
flag_idx (list of ints) -- If log_flags is True, will check for all nonzero flag values in `FLAG_HDR_LIST` and `flag_idx` will contain indices that have nonzero flag values. Will be empty if LOG_FLAGS is False.
"""
# Counter for flag type() printout #
counter_flag_type_printout = 0
### Create 4-by-4 subplot ###
counter_subplot = 1
# Figure size units: inches #
plt.figure(figsize=(10, 8))
### Create one subplot for each griz filter ###
for f in ALL_FILTERS:
### Define variables ###
cbar_val, cbar_idx_list, cbar_bins, err1, err2, cleanmag1, cleanmag2, index_good, cbar_axlabel, fullmag1, mag_axlabel1, mag_axlabel2 = get_plot_variables(filter_name=f, df=df, mag_hdr1=mag_hdr1, mag_hdr2=mag_hdr2, mag_err_hdr1=mag_err_hdr1, mag_err_hdr2=mag_err_hdr2, realization_number=realization_number, tile_name=tile_name, mag_axlabel1=M_AXLABEL1, mag_axlabel2=M_AXLABEL2)
### Subplot ###
if SUBPLOT:
plt.subplot(2, 2, counter_subplot)
plotter(mag_hdr1=mag_hdr1, mag_hdr2=mag_hdr2, cbar_val=cbar_val, plot_title=plot_title, error1=err1, error2=err2, filter_name=f, full_magnitude1=fullmag1, clean_magnitude1=cleanmag1, clean_magnitude2=cleanmag2, mag_axlabel1=mag_axlabel1, mag_axlabel2=mag_axlabel2, realization_number=realization_number, tile_name=tile_name, idx_list=cbar_idx_list, bins=cbar_bins, cbar_axlabel=cbar_axlabel, plot_name=plot_name)
counter_subplot += 1
if SUBPLOT:
### Show or save the plot once all four subplots have been filled ###
plt.subplots_adjust(hspace=0.4)
plt.subplots_adjust(wspace=0.3)
plt.tight_layout(pad=3, h_pad=2.5)
### Title ###
plt.suptitle(plot_title)
### Save plot ###
if SAVE_PLOT:
print '-----> Saving plot as: ', plot_name
plt.savefig(plot_name)
### Show plot ###
if SHOW_PLOT:
plt.show()
return flag_idx
def get_plot_suptitle(realization_number, tile_name):
"""Generate plot title.
Args:
match_type (str) -- Ex: inj_mof_vs_truth_cat
realization_number (str) -- Allowed values: '0' '1' '2' ... 'stacked'.
tile_name (str)
Returns:
title (str) -- Ex: 'Inj MOF Cat & Truth Cat'
"""
title = str(TITLE_PIECE1) + ' & ' + str(TITLE_PIECE2) +'. Tile: ' + str(tile_name) + '. Realization: ' + str(realization_number) + '.'
if RUN_TYPE == 'ok':
title = title + ' Unchanged FOF groups.'
if RUN_TYPE == 'rerun':
title = title + ' Changed FOF groups.'
if NORMALIZE:
title = 'Normalized. ' + title
return title
def get_plot_save_name(realization_number, tile_name):
"""Generate name of the plot that will be used in plt.savefig().
Relies on directory structure: outdir/plots/`BALROG_RUN`/`MATCH_TYPE`/{tile}/{plot_type}/{realization}/ where allowed values for plot_type are: 'normalized' 'scatter'.
Args:
outdir (str) -- Output directory
realization_number (str) -- Allowed values: '0' '1' '2' 'stacked'
tile_name (str)
Returns:
fn (str) -- The complete filename which includes path.
"""
### Get filename ###
if YLOW is None and YHIGH is None:
# Default scale for the vertical axis (vax) is used #
ylim = 'defaultvax'
if YLOW is not None and YHIGH is not None:
ylim = str(YLOW)+'y'+str(YHIGH)
if RUN_TYPE is None:
endname = str(tile_name) + '_' + str(realization_number) + '_griz_' + str(MATCH_TYPE) + '_' + str(ylim) + '.png'
if RUN_TYPE is not None:
endname = str(tile_name) + '_' + str(realization_number) + '_griz_' + str(MATCH_TYPE) + '_' + str(RUN_TYPE) + '_' + str(ylim) + '.png'
# dm = delta magnitude #
if CM_T_S2N_COLORBAR:
outname = 'm_vs_dm_cm_t_s2n_' + endname
if CM_T_COLORBAR:
outname = 'm_vs_dm_cm_t_' + endname
if CM_T_ERR_COLORBAR:
outname = 'm_vs_dm_cm_t_err_' + endname
if HEXBIN:
outname = 'm_vs_dm_hexbin_' + endname
if CM_T_S2N_COLORBAR is False and CM_T_COLORBAR is False and CM_T_ERR_COLORBAR is False and HEXBIN is False:
outname = 'm_vs_dm_' + endname
# !!!!! User may wish to edit directory structure #
### Check for directory existence ###
if RUN_TYPE is not None:
plot_dir_pref = os.path.join(OUTDIR, 'plots', BALROG_RUN, MATCH_TYPE, tile_name, realization_number, 'fof_analysis')
if RUN_TYPE is None:
plot_dir_pref = os.path.join(OUTDIR, 'plots', BALROG_RUN, MATCH_TYPE, tile_name, realization_number)
if NORMALIZE:
plot_dir = os.path.join(plot_dir_pref, 'normalized')
if NORMALIZE is False:
plot_dir = os.path.join(plot_dir_pref, 'scatter')
if os.path.isdir(plot_dir) is False:
if NO_DIR_EXIT:
sys.exit('Directory ' + str(plot_dir) + ' does not exist. \n Change directory structure in ms_plotter.get_plot_save_name() or set `NO_DIR_MAKE=True`')
if NO_DIR_MAKE:
print 'Making directory ', plot_dir, '...\n'
os.makedirs(plot_dir)
### Get filename and path ###
if NORMALIZE:
fn = os.path.join(plot_dir, 'norm_' + str(outname))
if NORMALIZE is False:
fn = os.path.join(plot_dir, outname)
return fn
def get_coadd_mag_and_mag_err(fn_g, fn_r, fn_i, fn_z, mag_hdr, err_hdr):
"""Solely for use with coadd catalogs. Creates a list of magnitudes of form '(mag_g, mag_r, mag_i, mag_z)' from four catalogs.
Args:
fn -- Filenames. Must be FITS files.
hdr (str) -- Header for the magnitude. Headers refer to columns in the matched catalog.
Returns:
m_griz (list of str) -- Stores the magnitude of each filter in form '(mag_g, mag_r, mag_i, mag_z)'
m_err_griz (list of str) -- Stores the magnitude error of each filter in form '(mag_err_g, mag_err_r, mag_err_i, mag_err_z)'
"""
# Files have not yet been matched, and do not have hdr_1 #
mag_hdr = mag_hdr[:-2]
err_hdr = err_hdr[:-2]
# Open FITS files #
hdu_g = fits.open(fn_g); hdu_r = fits.open(fn_r); hdu_i = fits.open(fn_i); hdu_z = fits.open(fn_z)
# Read data #
data_g = hdu_g[1].data; data_r = hdu_r[1].data; data_i = hdu_i[1].data; data_z = hdu_z[1].data
# Get magnitudes #
m_g = data_g[mag_hdr]; m_r = data_r[mag_hdr]; m_i = data_i[mag_hdr]; m_z = data_z[mag_hdr]
# Get magnitude errors #
err_g = data_g[err_hdr]; err_r = data_r[err_hdr]; err_i = data_i[err_hdr]; err_z = data_z[err_hdr]
m_griz, m_err_griz = [], []
for i in np.arange(0, len(m_g)):
m_griz.append("'("+ str(m_g[i]) + ', ' + str(m_r[i]) + ', ' + str(m_i[i]) + ', ' + str(m_z[i]) + ")'")
m_err_griz.append("'("+ str(err_g[i])+ ', ' + str(err_r[i])+ ', ' + str(err_i[i]) + ', ' + str(err_z[i]) + ")'")
return m_griz, m_err_griz
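# Illustrative sketch (hypothetical values): each entry of m_griz / m_err_griz is a quoted string,
# e.g. "'(22.1, 21.8, 21.6, 21.5)'", mirroring the '(m_g, m_r, m_i, m_z)' string format used in the
# other catalogs, so the same string-parsing path (get_floats_from_string) can presumably be
# reused after matching.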
def get_star_mag(df):
"""Solely for use with star truth catalogs. Computes and creates a list of magnitudes of form '(mag_g, mag_r, mag_i, mag_z)'.
Args:
df (pandas DataFrame)
Returns:
m_griz (list of str) -- Stores magnitudes of each filter in form '(mag_g, mag_r, mag_i, mag_z)'.
"""
m_g = df['g_Corr_1']
m_r = df['g_Corr_1'] - df['gr_Corr_1']
m_i = df['g_Corr_1'] - df['gr_Corr_1'] - df['ri_Corr_1']
m_z = df['g_Corr_1'] - df['gr_Corr_1'] - df['ri_Corr_1'] - df['iz_Corr_1']
m_griz = []
for i in np.arange(0, len(m_g)):
m_griz.append("'("+ str(m_g[i]) + ', ' + str(m_r[i]) + ', ' + str(m_i[i]) + ', ' + str(m_z[i]) + ")'")
return m_griz
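# Worked example (hypothetical numbers): get_star_mag() recovers griz magnitudes from the g-band
# magnitude and cumulative color terms,
#   m_g = g_Corr,  m_r = m_g - gr_Corr,  m_i = m_r - ri_Corr,  m_z = m_i - iz_Corr.
# E.g. g_Corr=20.0, gr_Corr=0.5, ri_Corr=0.2, iz_Corr=0.1  -->  '(20.0, 19.5, 19.3, 19.2)'.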
def get_catalog(cat_type, inj, realization_number, tile_name, filter_name):
"""Get catalog to analyze.
Args:
cat_type -- Catalog type. Allowed values: 'gal_truth', 'mof', 'star_truth', 'sof', 'coadd'.
inj (bool)
realization_number (str) -- Allowed values: '0' '1' '2' ...
tile_name -- Different allowed values depending on catalog.
filter_name (str) -- Only used with coadd catalogs.
Returns:
fn -- Filename
"""
if cat_type == 'gal_truth' and inj:
fn = os.path.join(BASEPATH, 'balrog_images', realization_number, tile_name, tile_name+'_'+realization_number+'_balrog_truth_cat_gals.fits')
if cat_type == 'gal_truth' and inj is False:
sys.exit('No non-injected truth catalog exists.')
if cat_type == 'star_truth' and inj:
fn = os.path.join(BASEPATH, 'balrog_images', realization_number, tile_name, tile_name+'_'+realization_number+'_balrog_truth_cat_stars.fits')
if cat_type == 'star_truth' and inj is False:
sys.exit('No non-injected truth catalog exists.')
if cat_type == 'sof' and inj:
fn = os.path.join(BASEPATH, 'balrog_images', realization_number, tile_name, 'sof', tile_name+'_sof.fits')
if cat_type == 'sof' and inj is False:
fn = os.path.join(BASEPATH, tile_name, 'sof', tile_name+'_sof.fits')
if cat_type == 'mof' and inj:
fn = os.path.join(BASEPATH, 'balrog_images', realization_number, tile_name, 'mof', tile_name+'_mof.fits')
if cat_type == 'mof' and inj is False:
fn = os.path.join(BASEPATH, tile_name, 'mof', tile_name+'_mof.fits')
if cat_type == 'coadd' and inj:
fn = os.path.join(BASEPATH, 'balrog_images', realization_number, tile_name, 'coadd', tile_name+'_'+filter_name+'_cat.fits')
if cat_type == 'coadd' and inj is False:
fn = os.path.join(BASEPATH, tile_name, 'coadd', tile_name+'_'+filter_name+'_cat.fits')
return fn
def matcher(realization_number, tile_name, filter_name):
"""Match two catalogs on RA and DEC with a tolerance of 1 arcsecond via STILTS.
Args:
outdir (str) -- Path to where matched catalogs are saved.
realization_number (str or int) -- Currently allowed values: 0 1 2 3 4 5 6 7 8 9 depending on the basepath.
tile_name (str) -- Currently allowed values: DES0347-5540 DES2329-5622 DES2357-6456 DES2247-4414 depending on the basepath.
Returns:
outname (str) -- Name of matched catalog. Headers will have _1 appended for truth catalog and _2 appended for mof catalog.
"""
### Get arguments to pass to ms_matcher ###
# Input catalogs for STILTS #
if MATCH_CAT1 != 'coadd':
in1 = get_catalog(cat_type=MATCH_CAT1, inj=INJ1, realization_number=realization_number, tile_name=tile_name, filter_name=filter_name)
if MATCH_CAT1 == 'coadd':
in1 = get_coadd_matcher_catalog(cat_type=MATCH_CAT1, inj=INJ1, realization_number=realization_number, tile_name=tile_name, mag_hdr=M_HDR1, err_hdr=M_ERR_HDR1)
if MATCH_CAT2 != 'coadd':
in2 = get_catalog(cat_type=MATCH_CAT2, inj=INJ2, realization_number=realization_number, tile_name=tile_name, filter_name=filter_name)
if MATCH_CAT2 == 'coadd':
in2 = get_coadd_matcher_catalog(cat_type=MATCH_CAT2, inj=INJ2, realization_number=realization_number, tile_name=tile_name, mag_hdr=M_HDR2, err_hdr=M_ERR_HDR2)
# !!!!! User may wish to edit directory structure. Output catalog name for STILTS #
### Check for directory existence ###
match_dir = os.path.join(OUTDIR, 'catalog_compare', BALROG_RUN, MATCH_TYPE)
if os.path.isdir(match_dir) is False:
if NO_DIR_EXIT:
sys.exit('Directory ' + str(match_dir) + ' does not exist. \n Change directory structure in ms_plotter.matcher() or set `NO_DIR_MAKE=True`')
if NO_DIR_MAKE:
print 'Making directory ', match_dir, '...\n'
os.makedirs(match_dir)
outname_match = os.path.join(match_dir, tile_name+'_'+realization_number+'_'+str(MATCH_TYPE)+'_match1and2.csv')
outname_1not2 = os.path.join(match_dir, tile_name+'_'+realization_number+'_'+str(MATCH_TYPE)+'_match1not2.csv')
outname_2not1 = os.path.join(match_dir, tile_name+'_'+realization_number+'_'+str(MATCH_TYPE)+'_match2not1.csv')
# Overwrite matched catalogs if one already exists? #
overwrite = False
# Check `outname` existence #
if os.path.isfile(outname_2not1) is False or (os.path.isfile(outname_2not1) and overwrite):
print '\nMatching ', in1, in2, '...\n'
### Matching done in ms_matcher. Args: in1, in2, out, RA_HDR1, DEC_HDR1, RA_HDR2, DEC_HDR2, overwrite ###
# !!!!! Ensure that path to ms_matcher is correct #
subprocess.call(['/data/des71.a/data/mspletts/balrog_validation_tests/scripts/BalVal/ms_matcher', in1, in2, outname_match, outname_1not2, outname_2not1, RA_HDR1, DEC_HDR1, RA_HDR2, DEC_HDR2])
return outname_match, outname_1not2, outname_2not1
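# Illustrative sketch: matcher() delegates the positional match to the external ms_matcher (STILTS)
# script with a 1-arcsecond tolerance; the three CSVs it produces correspond to join=1and2,
# join=1not2, and join=2not1, and are read back with pandas, e.g.
#   df1and2 = pd.read_csv(outname_match)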
def fof_matcher(realization_number, tile_name):
"""Get catalogs to analyze. Return FOF-analysed catalogs.
Args:
outdir (str)
realization_number (str) -- Allowed values: '0' '1' '2' ...
tile_name -- Different allowed values depending on catalog.
Returns:
fn_* -- Filenames
"""
### Filenames for input catalogs used in fof_matcher ###
# FOF #
fof = os.path.join(BASEPATH, tile_name, 'mof', tile_name+'_fofslist.fits')
inj_fof = os.path.join(BASEPATH, 'balrog_images', realization_number, tile_name, 'mof', tile_name+'_fofslist.fits')
# MOF or SOF #
if MOF:
mof = os.path.join(BASEPATH, tile_name, 'mof', tile_name+'_mof.fits')
inj_mof = os.path.join(BASEPATH, 'balrog_images', realization_number, tile_name, 'mof', tile_name+'_mof.fits')
fof = os.path.join(BASEPATH, tile_name, 'mof', tile_name+'_fofslist.fits')
inj_fof = os.path.join(BASEPATH, 'balrog_images', realization_number, tile_name, 'mof', tile_name+'_fofslist.fits')
if SOF:
mof = os.path.join(BASEPATH, tile_name, 'sof', tile_name+'_sof.fits')
inj_mof = os.path.join(BASEPATH, 'balrog_images', realization_number, tile_name, 'sof', tile_name+'_sof.fits')
fof = os.path.join(BASEPATH, tile_name, 'sof', tile_name+'_fofslist.fits')
inj_fof = os.path.join(BASEPATH, 'balrog_images', realization_number, tile_name, 'sof', tile_name+'_fofslist.fits')
# Coadds. Using i-band #
coadd = os.path.join(BASEPATH, tile_name, 'coadd', tile_name+'_i_cat.fits')
inj_coadd = os.path.join(BASEPATH, 'balrog_images', realization_number, tile_name, 'coadd', tile_name+'_i_cat.fits')
### Filenames for outputs of fof_matcher ###
#TODO relies on certain directory structure. make note of this in README.md #
outdir = os.path.join(OUTDIR, 'fof_analysis_catalog_compare', BALROG_RUN)
# Repeated #
inj_outdir = os.path.join(outdir, tile_name, realization_number)
inj_outname = tile_name + '_' + realization_number
### Check directory existence. Make directories if not present. ###
if os.path.isdir(inj_outdir) is False:
os.makedirs(inj_outdir)
if os.path.isdir(os.path.join(outdir, tile_name)) is False:
os.makedirs(os.path.join(outdir, tile_name))
fofcoadd = os.path.join(outdir, tile_name, tile_name+ '_num-match_fof_coadd.csv')
fofgroups = os.path.join(outdir, tile_name, tile_name+ 'fofgroups.csv')
inj_fofcoadd = os.path.join(inj_outdir, inj_outname + '_num-match_inj_fof_inj_coadd.csv')
inj_fofgroups = os.path.join(inj_outdir, inj_outname + '_inj_fofgroups.csv')
origfof_injfof = os.path.join(inj_outdir, inj_outname + '_inj_fofgroup_fofgroup_match1and2.csv')
ok = os.path.join(inj_outdir, inj_outname + '.ok')
rerun = os.path.join(inj_outdir, inj_outname + '.rerun')
ok_inj_mof = os.path.join(inj_outdir, inj_outname + '_ok_inj_mof.csv')
rerun_inj_mof = os.path.join(inj_outdir, inj_outname + '_rerun_inj_mof.csv')
ok_mof = os.path.join(inj_outdir, inj_outname + '_ok_mof.csv')
rerun_mof = os.path.join(inj_outdir, inj_outname + '_rerun_mof.csv')
ok_match = os.path.join(inj_outdir, inj_outname + '_ok_inj_mof_ok_mof_match1and2.csv')
ok_1not2 = os.path.join(inj_outdir, inj_outname + '_ok_inj_mof_ok_mof_match1not2.csv')
ok_2not1 = os.path.join(inj_outdir, inj_outname + '_ok_inj_mof_ok_mof_match2not1.csv')
rerun_match = os.path.join(inj_outdir, inj_outname + '_rerun_inj_mof_rerun_mof_match1and2.csv')
rerun_1not2 = os.path.join(inj_outdir, inj_outname + '_rerun_inj_mof_rerun_mof_match1not2.csv')
rerun_2not1 = os.path.join(inj_outdir, inj_outname + '_rerun_inj_mof_rerun_mof_match2not1.csv')
# Output directory for files made in par.py #
parpy_outdir = os.path.join(inj_outdir, inj_outname)
# May need to overwrite if matching was interrupted #
overwrite = False
### Check file existence of last file made in fof_matcher ###
if os.path.isfile(rerun_match) is False or (os.path.isfile(rerun_match) and overwrite):
### Run fof_matcher ###
subprocess.call(['/data/des71.a/data/mspletts/balrog_validation_tests/scripts/BalVal/fof_matcher', fof, inj_fof, mof, inj_mof, coadd, inj_coadd, parpy_outdir, fofcoadd, fofgroups, inj_fofcoadd, inj_fofgroups, origfof_injfof, ok, rerun, ok_inj_mof, rerun_inj_mof, ok_mof, rerun_mof, ok_match, rerun_match, ok_1not2, rerun_1not2, ok_2not1, rerun_2not1])
if RUN_TYPE == 'ok':
return ok_match, ok_1not2, ok_2not1
if RUN_TYPE == 'rerun':
return rerun_match, rerun_1not2, rerun_2not1
def make_plots(mag_hdr1, mag_hdr2, mag_err_hdr1, mag_err_hdr2):
"""Makes plots.
Args:
mag_hdr1, mag_hdr2 (str) -- Headers for magnitude. May be altered, hence passed as parameters.
mag_err_hdr1, mag_err_hdr2 (str) -- Headers for magnitude error. May be altered, hence passed as parameters.
Returns:
0
"""
for t in ALL_TILES:
### For plotting all realizations at once, stacked ###
if STACK_REALIZATIONS:
# Filename #
fn_stack = os.path.join(OUTDIR, t+'_stacked_'+str(MATCH_TYPE)+'_match1and2.csv')
# Check if stacked realization file already exists #
if os.path.isfile(fn_stack):
print 'Stacked realization catalog exists. Not overwriting ... \n'
df1and2 = pd.read_csv(fn_stack)
# Combine all realizations for one tile into a single catalog. Catalogs combined AFTER matching. #
if os.path.isfile(fn_stack) is False:
all_fn = []
for r in ALL_REALIZATIONS:
if RUN_TYPE is None:
fn = matcher(realization_number=r, tile_name=t, filter_name=None)[0]
if RUN_TYPE is not None:
fn = fof_matcher(realization_number=r, tile_name=t)[0]
all_fn.append(fn)
print 'Stacking realizations. ', len(all_fn), 'files ...'
df1and2 = pd.concat((pd.read_csv(fn) for fn in all_fn))
print 'Stacking complete ... \n'
# Save stacked catalog as DataFrame #
df1and2.to_csv(fn_stack, sep=',')
print '-----> Saving stacked realization catalog as ', fn_stack
# Name for plt.savefig() #
fn = get_plot_save_name(realization_number='stacked_realizations', tile_name=t)
# Title for plot #
title = get_plot_suptitle(realization_number='stacked '+str(len(ALL_REALIZATIONS)), tile_name=t)
### Handle star truth catalogs ###
if MATCH_CAT1 == 'star_truth' or MATCH_CAT2 == 'star_truth':
print 'Adding new column to matched csv ...'
star_mag = get_star_mag(df=df1and2)
# 'mag_a' short for mag_all. New header must be of the form {base}_x where x is a single character because of the way m_axlabel is created from m_hdr #
df1and2.insert(len(df1and2.columns), 'mag_a', star_mag)
if MATCH_CAT1 == 'star_truth':
mag_hdr1 = 'mag_a'
if MATCH_CAT2 == 'star_truth':
mag_hdr2 = 'mag_a'
### Handle coadd catalogs. New column has been added with name 'mag_c'. Catalog combined then matched so has suffix (unlike star) #
if MATCH_CAT1 == 'coadd':
mag_hdr1 = 'mag_c_1'
mag_err_hdr1 = 'mag_err_c_1'
if MATCH_CAT2 == 'coadd':
mag_hdr2 = 'mag_c_2'
mag_err_hdr2 = 'mag_err_c_2'
subplotter(df=df1and2, flag_idx=flag_idx, mag_hdr1=mag_hdr1, mag_hdr2=mag_hdr2, mag_err_hdr1=mag_err_hdr1, mag_err_hdr2=mag_err_hdr2, plot_name=fn, plot_title=title, realization_number='stacked', tile_name=t)
### For plotting one realization at a time ###
if STACK_REALIZATIONS is False:
print 'Not stacking realizations...'
for r in ALL_REALIZATIONS:
# Filename #
if RUN_TYPE is None:
fn_match, fn_1not2, fn_2not1 = matcher(realization_number=r, tile_name=t, filter_name=None)
if RUN_TYPE is not None:
fn_match, fn_1not2, fn_2not1 = fof_matcher(realization_number=r, tile_name=t)
# DataFrame #
df1and2 = pd.read_csv(fn_match)
df1not2 = pd.read_csv(fn_1not2)
df2not1 = pd.read_csv(fn_2not1)
### ####
if MAKE_REG:
make_region_file(df_match=df1and2, df_1not2=df1not2, df_2not1=df2not1, realization_number=r, tile_name=t)
# Name for plt.savefig() #
fn = get_plot_save_name(realization_number=r, tile_name=t)
# Title for plot #
title = get_plot_suptitle(realization_number=r, tile_name=t)
### Handle star truth catalogs ###
if MATCH_CAT1 == 'star_truth' or MATCH_CAT2 == 'star_truth':
print 'Adding new column to matched csv ... \n'
star_mag = get_star_mag(df=df1and2)
df1and2.insert(len(df1and2.columns), 'mag_a', star_mag)
# Star truth catalogs matched then combined # #FIXME rename to mag_s for mag_star
if MATCH_CAT1 == 'star_truth':
mag_hdr1 = 'mag_a'
if MATCH_CAT2 == 'star_truth':
mag_hdr2 = 'mag_a'
### Handle coadd catalogs. New column has been added with name 'mag_c'. Catalog combined then matched so has suffix (unlike star) #
if MATCH_CAT1 == 'coadd':
mag_hdr1 = 'mag_c_1'
mag_err_hdr1 = 'mag_err_c_1'
if MATCH_CAT2 == 'coadd':
mag_hdr2 = 'mag_c_2'
mag_err_hdr2 = 'mag_err_c_2'
subplotter(df=df1and2, flag_idx=flag_idx, mag_hdr1=mag_hdr1, mag_hdr2=mag_hdr2, mag_err_hdr1=mag_err_hdr1, mag_err_hdr2=mag_err_hdr2, plot_name=fn, plot_title=title, realization_number=r, tile_name=t)
return 0
def get_coadd_matcher_catalog(cat_type, inj, realization_number, mag_hdr, err_hdr, tile_name):
"""Make FITS file that includes a column of form '(m_g, m_r, m_i, m_z)' where m is magnitude. Column will be added to '..._i_cat.fits'. This will be used in matcher().
Args:
cat_type (str) -- Catalog type. Allowed values: mof, sof, coadd, gal_truth, star_truth.
inj (bool) -- Is the catalog (`cat_type`) injected?
realization_number (str)
mag_hdr (str) -- Header for magnitude. Headers refer to columns in the matched catalog.
err_hdr (str) -- Header for error. Headers refer to columns in the matched catalog.
tile_name (str)
Returns:
fn (str) -- Filename. Is a FITS file.
"""
fn_new = os.path.join(OUTDIR, str(tile_name) + '_i_cat.fits')
# Check if new coadd catalog has already been created #
if os.path.isfile(fn_new):
print 'New coadd catalog already exists ...\n'
if os.path.isfile(fn_new) is False:
print 'Adding a column to i-band coadd catalog. Will take a moment ...\n'
# Get list of filenames #
fn_griz = []
for f in ALL_FILTERS:
fn_griz.append(get_catalog(cat_type=cat_type, inj=inj, realization_number=realization_number, tile_name=tile_name, filter_name=f))
fn_g, fn_r, fn_i, fn_z = fn_griz
# Get coadd magnitude (mag_c) and magnitude error to be of form '(m_g, m_r, m_i, m_z)'. Recall that this is a string #
mag_c, mag_err_c = get_coadd_mag_and_mag_err(fn_g=fn_g, fn_r=fn_r, fn_i=fn_i, fn_z=fn_z, mag_hdr=mag_hdr, err_hdr=err_hdr)
# Create new table #
mag_c = Column(mag_c, name='mag_c')
mag_err_c = Column(mag_err_c, name='mag_err_c')
# Add new table to i-band coadd catalog #
table = Table.read(fn_i)
table.add_column(mag_c, index=0)
table.add_column(mag_err_c, index=1)
# Save new table as FITS #
table.write(fn_new)
return fn_new
#TODO accept idx_good as input param? Will only be used for df_match.
def make_region_file(df_match, df_1not2, df_2not1, realization_number, tile_name):
"""Make DS9 region files for catalogs matched via join=1and2, join=1not2, and 2not1.
Args:
df_* (pandas DataFrame)
Returns:
fn_* -- Filenames
"""
### Get filenames and open files ###
fn_match, fn_1not2, fn_2not1 = get_reg_names(tile_name=tile_name, realization_number=realization_number)
overwrite = False
if os.path.isfile(fn_2not1) and overwrite is False:
print 'Region files already exist. Not overwriting ...'
if os.path.isfile(fn_2not1) is False or (os.path.isfile(fn_2not1) and overwrite):
fd_match = open(fn_match, 'w'); fd_1not2 = open(fn_1not2, 'w'); fd_2not1 = open(fn_2not1, 'w')
# Write coordinate system #
fd_match.write('J2000 \n'); fd_1not2.write('J2000 \n'); fd_2not1.write('J2000 \n')
# Handle matched catalog #
if RUN_TYPE is None:
ra1 = RA_HDR1 + str('_1'); dec1 = DEC_HDR1 + str('_1')
ra2 = RA_HDR2 + str('_2'); dec2 = DEC_HDR2 + str('_2')
if RUN_TYPE is not None:
# MOF or SOF catalogs #
ra1 = 'ra'; dec1 = 'dec'
ra2 = 'ra_2'; dec2 = 'dec_2'
### Get position. Arbitrarily using MATCH_CAT1 for RA and DEC ###
ra_match, dec_match = df_match[ra2], df_match[dec2]
ra_1not2, dec_1not2 = df_1not2[ra1], df_1not2[dec1]
ra_2not1, dec_2not1 = df_2not1[ra2], df_2not1[dec2]
### Write to region file for matched catalog. Units are arcseconds. ###
# Coadds allow for elliptical regions #
if MATCH_CAT1 == 'coadd' or MATCH_CAT2 == 'coadd':
### Get semimajor and semiminor axes (a and b, respectively). Coadds have these values. ###
a_match, b_match = df_match[MAJOR_AX_HDR1], df_match[MINOR_AX_HDR1]
a_1not2, b_1not2 = df_1not2[MAJOR_AX_HDR1], df_1not2[MINOR_AX_HDR1]
a_2not1, b_2not1 = df_2not1[MAJOR_AX_HDR2], df_2not1[MINOR_AX_HDR2]
for i in np.arange(0, len(ra_match)):
fd_match.write('ellipse ' + str(ra_match[i]) + ' ' + str(dec_match[i]) + ' ' + str(a_match[i]) + '" ' + str(b_match[i]) + '" #color=green width=3 \n')
for i in np.arange(0, len(ra_1not2)):
fd_1not2.write('ellipse ' + str(ra_1not2[i]) + ' ' + str(dec_1not2[i]) + ' ' + str(a_1not2[i]) + '" ' + str(b_1not2[i]) + '" #color=yellow width=3 \n')
for i in np.arange(0, len(ra_2not1)):
fd_2not1.write('ellipse ' + str(ra_2not1[i]) + ' ' + str(dec_2not1[i]) + ' ' + str(a_2not1[i]) + '" ' + str(b_2not1[i]) + '" #color=blue width=3 \n')
# Non-coadd catalogs allow for circular regions #
if MATCH_CAT1 != 'coadd' and MATCH_CAT2 != 'coadd':
size_sq_match = df_match[CM_T_HDR1]
size_sq_1not2 = df_1not2[CM_T_HDR1]
size_sq_2not1 = df_2not1[CM_T_HDR2]
# Use a typical radius of 2 arcsec? #
for i in np.arange(0, len(ra_match)):
if size_sq_match[i] > 0:# and np.isnan(size_sq_match[i]) is False:
fd_match.write('circle ' + str(ra_match[i]) + ' ' + str(dec_match[i]) + ' ' + str(size_sq_match[i]**0.5) + '" #color=green width=3 \n')
for i in np.arange(0, len(ra_1not2)):
if size_sq_1not2[i] > 0: # and np.isnan(size_sq_1not2[i]) is False:
fd_1not2.write('circle ' + str(ra_1not2[i]) + ' ' + str(dec_1not2[i]) + ' ' + str(size_sq_1not2[i]**0.5) + '" #color=yellow width=3 \n')
for i in np.arange(0, len(ra_2not1)):
if size_sq_2not1[i] > 0: # and np.isnan(size_sq_2not1[i]) is False:
fd_2not1.write('circle ' + str(ra_2not1[i]) + ' ' + str(dec_2not1[i]) + ' ' + str(size_sq_2not1[i]**0.5) + '" #color=blue width=3 \n')
# Close files #
fd_match.close(); fd_1not2.close(); fd_2not1.close()
print '-----> Saving region files as: ', fn_match
print ' -----> ', fn_1not2
print ' ----->', fn_2not1
return 0
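# Illustrative example of the region-file output (coordinates below are hypothetical):
# make_region_file() writes DS9-style region lines such as
#   circle 35.10000 -55.40000 2.0" #color=green width=3        (non-coadd: radius = sqrt(cm_T) arcsec)
#   ellipse 35.10000 -55.40000 1.2" 0.8" #color=green width=3  (coadd: semi-major/minor axes in arcsec)
# where green marks objects matched in both catalogs, yellow marks 1not2, and blue marks 2not1.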
################################################################### Run script. 0 returned when complete. ###################################################################
### !!!!! Run once. Log files are closed once 0 is returned. ###
print make_plots(mag_hdr1=M_HDR1, mag_hdr2=M_HDR2, mag_err_hdr1=M_ERR_HDR1, mag_err_hdr2=M_ERR_HDR2)
### Loop over vertical axis limits. Suggestions: for normalized plot with star truth catalog use y=[3, 10], for normalized plot with galaxy truth catalog use y=[3, 20]. For non-normalized plot with star truth catalog or galaxy truth catalog use y=[0.5, None]. ###
# !!!!! Loop over vertical axis limits? #
YLOOP = False
if YLOOP:
for y in [0.5, None]:
if y is None:
YLOW, YHIGH = None, None
if y is not None:
YLOW, YHIGH = -1.0*y, y
# Must pass constants as parameters here because used for plot name #
print make_plots(mag_hdr1=M_HDR1, mag_hdr2=M_HDR2, mag_err_hdr1=M_ERR_HDR1, mag_err_hdr2=M_ERR_HDR2)
### !!!!! Loop over possible colorbars? ###
CBAR_LOOP = False
if CBAR_LOOP:
# Possible combinations for colorbars. Only one True allowed at a time and NORMALIZE and HEXBIN must both be False. #
cbar_bools_list =[[True, False, False, False, False], [False, True, False, False, False], [False, False, True, False, False]]
for cbar_bools in cbar_bools_list:
# Reset constants #
CM_T_COLORBAR, CM_T_S2N_COLORBAR, CM_T_ERR_COLORBAR, NORMALIZE, HEXBIN = cbar_bools
make_plots(mag_hdr1=M_HDR1, mag_hdr2=M_HDR2, mag_err_hdr1=M_ERR_HDR1, mag_err_hdr2=M_ERR_HDR2)
RUN_TYPE_LOOP = False
if RUN_TYPE_LOOP:
run_type_list = [None, 'ok', 'rerun']
for run_type in run_type_list:
RUN_TYPE = run_type
make_plots(mag_hdr1=M_HDR1, mag_hdr2=M_HDR2, mag_err_hdr1=M_ERR_HDR1, mag_err_hdr2=M_ERR_HDR2)
### Close log files ###
FD_1SIG.close()
FD_FLAG.close()
FD_MAG_BINS.close()
FD_NOP.close()
|
sweverett/Balrog-GalSim
|
plots/ms_plotter.py
|
Python
|
mit
| 106,291
|
[
"Galaxy"
] |
49e6079b5cd19d5be226fcb61acf7ddfca17d6404f26d1b827359102ff2c0d94
|
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class RZlibbioc(RPackage):
"""An R packaged zlib-1.2.5.
This package uses the source code of zlib-1.2.5 to create libraries for
systems that do not have these available via other means (most Linux and
Mac users should have system-level access to zlib, and no direct need
for this package). See the vignette for instructions on use."""
homepage = "https://bioconductor.org/packages/zlibbioc"
git = "https://git.bioconductor.org/packages/zlibbioc.git"
version('1.30.0', commit='99eae5b05968bf6abc9b54b9031afd93517381e0')
version('1.28.0', commit='b825b042742ba45455fc284b988ff4cd2a33222c')
version('1.26.0', commit='2e3ab097caa09a5e3ddaa3469b13e19a7224da0d')
version('1.24.0', commit='2990059338d1b987d098c009b0bfa806bd24afec')
version('1.22.0', commit='30377f830af2bc1ff17bbf3fdd2cb6442015fea5')
|
rspavel/spack
|
var/spack/repos/builtin/packages/r-zlibbioc/package.py
|
Python
|
lgpl-2.1
| 1,083
|
[
"Bioconductor"
] |
bef5fb86a2485915e090173a8608cec1f4985d4347fefe9ab1ad0d079c5a67b6
|
currencies=['CHF', 'EUR', 'USD', 'GBP']
descriptions=[
("Cafe", 2.0),
("Pizza Margherita", 15.0),
("Pizza Napoli", 17.0),
("Hamburger", 10.0),
("Flight to Toronto", 1500.0),
("Flight to New York", 1300.0),
("Flight to Pistoia", 500.0),
("Flight to Lugano", 400.0),
("Flight to Tokyo", 2000.0),
("Flight to Osaka", 2500.0),
("Flight to Singapore", 2000.0),
("iPhone 6", 700.0),
("iPhone 7", 800.0),
("iPhone 8", 900.0),
("Nokia phone 71130", 200.0),
("Book", 12.0),
("Ikea table", 200.0),
("Ikea chair", 70.0),
# ("Renault Scenic", 25000.0),
# ("Ferrari Testarossa", 300000.0),
# ("Suzuki Swift", 29000.0),
("Swatch watch model B80", 200.0),
("ErgoDox Keyboard", 200.0),
("T-Shirt", 29.0),
("Dinner chez Nunzio", 25.0),
("Flowers", 30.0),
("Casino", 60.0),
("Taxi", 30.0),
("Grocery Shopping", 10.0),
("Purchase at Aldi Supermarket", 40.0),
("Purchase at Migros", 50.0),
("Ray Ban Glasses", 100.0),
("ASAP Studio licenses", 2000.0),
("Michele Cafaggi Show", 30.0),
("FoxTrail game", 30.0),
("Samsung TV", 2000.0),
("Mac Book Pro", 2600.0),
("Train ticket", 20.0),
("Swisscom Phone bill", 150.0),
("Parfume", 90.0),
("Thinkpad Laptop", 1290.0),
("Swimming pool entrance ticket", 10.0),
("Seeds for birds", 2.0),
("Cat food", 10.0),
("Dog food", 10.0),
("Vet bill", 90.0),
("Hospital bill", 90.0),
("Grand Hotel Firmani''s", 290.0),
("Caferra Hotel", 190.0),
("Holiday Inn", 160.0),
("Lollypop", 2.0),
("Purchase on Google Play", 10.0),
("Purchase on iTunes", 20.0),
]
import uuid
import random
from decimal import *
def get_one():
# randrange excludes the stop value, so pass len(...) directly; otherwise the last entry can never be chosen #
item = descriptions[random.randrange(len(descriptions))]
currency = currencies[random.randrange(len(currencies))]
description = item[0]
price = item[1]
osc = 0.3
variation = random.uniform(0, osc)
final_price = price * (1 + variation)
formatted_price="{0:.2f}".format(final_price)
id = uuid.uuid4()
return id, description, formatted_price, currency
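# Minimal usage sketch (illustrative addition): get_one() returns a tuple of
# (uuid, description, formatted price, currency); the guard below prints a few random samples
# when the module is run directly.
if __name__ == '__main__':
    for _ in range(3):
        print(get_one())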
|
adunicorn/prototypes
|
apps/Loader/app/generate_random_transactions.py
|
Python
|
gpl-2.0
| 2,148
|
[
"CASINO"
] |
d74c4d9462c87bfd7dfd1b83bd3a5402f02f2eb53e5930f3f3c329e62c9a0838
|
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ========================================================================
"""A utility to trace tensor values on TPU."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import os.path
import re
import sys
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import graph_io
from tensorflow.python.framework import ops
from tensorflow.python.framework import tensor_util
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import control_flow_util
from tensorflow.python.ops import gen_math_ops
from tensorflow.python.ops import init_ops
from tensorflow.python.ops import linalg_ops
from tensorflow.python.ops import logging_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import state_ops
from tensorflow.python.ops import variable_scope
from tensorflow.python.platform import gfile
from tensorflow.python.platform import tf_logging as logging
from tensorflow.python.tpu import tpu
from tensorflow.python.tpu.ops import tpu_ops
_TRACER_LOG_PREFIX = ' [>>>TT>>>]'
_DEVICE_TYPE_TPU = 'tpu'
_DEVICE_TYPE_CPU = 'cpu'
_TRACE_MODE_NAN_INF = 'nan-inf'
_TRACE_MODE_PART_TENSOR = 'part-tensor'
_TRACE_MODE_PART_TENSOR_SIZE = 3
_TRACE_MODE_FULL_TENSOR = 'full-tensor'
_TRACE_MODE_NORM = 'norm'
_TRACE_MODE_MAX_ABS = 'max-abs'
_SUBMODE_BRIEF = 'brief'
_SUBMODE_DETAILED = 'detailed'
_REASON_OUTSIDE_OP_RANGE = 'not-traced-outside-op-range'
_REASON_UNSAFE_OP = 'not-traced-unsafe-op'
_REASON_WHILELOOP_OP = 'not-traced-special-whileloop-op'
_REASON_UNSAFE_SCALAR = 'not-traced-unsafe-scalar'
_REASON_LESS_INTERESTING_OP = 'not-traced-less-interesting-op'
_REASON_DEVICE_MISMATCH = 'not-traced-device-mismatch'
_REASON_DYNAMIC_SHAPE = 'not-traced-dynamic-shape'
_REASON_SCALAR_GET_TRACED = 'traced-scalar'
_REASON_TENSOR_GET_TRACED = 'traced-tensor'
_REASON_USER_INCLUDED = 'traced-user-included'
_REASON_USER_EXCLUDED = 'not-traced-user-excluded'
_REASON_NOT_EXECUTED = 'not-traced-not-in-exec-path'
_REASON_NON_NUMERIC_TENSOR = 'not-traced-non-numeric-tensor'
_REASON_FEEDS_WHILELOOP_OP = 'not-traced-feeds-special-whileloop-op'
_MARKER_SECTION_BEGIN = '!!!!!!! section-begin:'
_MARKER_SECTION_END = '!!!!!!! section-end:'
_SECTION_NAME_CONFIG = 'configuration'
_SECTION_NAME_REASON = 'reason'
_SECTION_NAME_OP_LIST = 'op-list'
_SECTION_NAME_TENSOR_LIST = 'tensor-list'
_SECTION_NAME_CACHE_INDEX_MAP = 'cache-index-map'
_SECTION_NAME_GRAPH = 'graph'
_FIELD_NAME_VERSION = 'version:'
_FIELD_NAME_DEVICE = 'device:'
_FIELD_NAME_TRACE_MODE = 'trace-mode:'
_FIELD_NAME_SUBMODE = 'submode:'
_FIELD_NAME_NUM_REPLICAS = 'num-replicas:'
_FIELD_NAME_NUM_REPLICAS_PER_HOST = 'num-replicas-per-host:'
_FIELD_NAME_NUM_HOSTS = 'num-hosts:'
_FIELD_NAME_NUM_OPS = 'number-of-ops:'
_FIELD_NAME_NUM_TENSORS = 'number-of-tensors:'
_FIELD_NAME_NUM_CACHE_INDICES = 'number-of-indices:'
_FIELD_NAME_TOPOLOGICAL_SORT_SUCCEED = 'topological-sort-succeed:'
_FLAGS_ENV_VAR = 'TENSOR_TRACER_FLAGS'
_FLAG_SINGLE_QUOTE_PAT = re.compile(r"\s*--([^=]+)='([^']*)'")
_FLAG_DOUBLE_QUOTE_PAT = re.compile(r'\s*--([^=]+)="([^"]*)"')
_FLAG_NO_QUOTE_PAT = re.compile(r'\s*--([^=]+)=(\S*)')
_FLAG_NO_EQUAL_PAT = re.compile(r'\s*--([^=]+)\s*')
_FLAG_NAME_ENABLE = 'enable'
_FLAG_NAME_TRACE_MODE = 'trace_mode'
_FLAG_NAME_USE_COMPACT_TRACE = 'compact_trace'
_FLAG_NAME_SUBMODE = 'submode'
_FLAG_NAME_INCLUDE_LESS_INTERESTING_OPS = 'include_less_interesting_ops'
_FLAG_NAME_EXCLUDED_OPNAMES = 'excluded_opnames'
_FLAG_NAME_EXCLUDED_OPTYPES = 'excluded_optypes'
_FLAG_NAME_INCLUDED_OPNAMES = 'included_opnames'
_FLAG_NAME_INCLUDED_OPTYPES = 'included_optypes'
_FLAG_NAME_TRACE_DIR = 'trace_dir'
_FLAG_NAME_REPORT_FILE = 'report_file'
_FLAG_NAME_USE_TEST_UNDECLARED_OUTPUTS_DIR = 'use_test_undeclared_outputs_dir'
_FLAG_NAME_OP_RANGE = 'op_range'
# Folder to dump the pre (before tensor tracer updates) and post graphs (after
# tensor tracer updates).
_FLAG_DUMP_BEFORE_AFTER_GRAPHS = 'dump_graphs'
_OP_RANGE_PAT = re.compile(r'(\d+):(\d+)')
_OUTPUT_STREAM_ESCAPE = 'file://'
_TEST_UNDECLARED_OUTPUTS_DIR_ENV_VAR = 'TEST_UNDECLARED_OUTPUTS_DIR'
_TENSOR_TRACER_COLLECTION = 'tensor_tracer_variables'
_TENSOR_TRACER_CHECKPOINT = 'tensor_tracer_checkpoint'
_TRACE_FILE_NAME = 'trace.all'
_COMPACT_TRACE_FILE_PREFIX = 'compact_trace.'
_COMPACT_TRACE_ENTRY_INIT_VALUE = -1.0
_TENSOR_TRACER_STORAGE = 'tensor_tracer_storage'
_TENSOR_VALUES_CACHE = 'tensor_values_cache'
_REPLICA_ID_TAG = '#replica-id: '
def tensor_tracepoint(tensor, checkpoint_name):
"""Adds a checkpoint with the given checkpoint name for the given tensor.
The tensor will be added to the list of tensors that will be traced by the
tensor tracer.
Args:
tensor: the tensor object for which the tracing is requested.
checkpoint_name: a string name for the checkpoint. This name has to be a
unique name if used within model comparison. The tensors that have the same
checkpoint identifier are compared in model comparison.
Returns:
The provided tensor.
"""
tensor.graph.get_collection(_TENSOR_TRACER_COLLECTION)
tensor.graph.add_to_collection(_TENSOR_TRACER_COLLECTION,
(tensor, checkpoint_name))
return tensor
def keras_layer_tracepoint(layer, checkpoint_name):
"""An interface for adding the tensor outputs of a keras layer.
Encapsulates tensor_tracepoint.
Args:
layer: A keras layer.
checkpoint_name: a string name for the checkpoint. This name has to be a
unique name if used within model comparison. The tensors that have the same
checkpoint identifier are compared in model comparison.
Returns:
The provided layer.
"""
try:
outputs = layer.output
if tensor_util.is_tensor(outputs):
tensor_tracepoint(outputs, '%s' % (checkpoint_name))
else:
idx = 0
for output_tensor in outputs:
if tensor_util.is_tensor(output_tensor):
tensor_tracepoint(output_tensor, '%s_%d' % (checkpoint_name, idx))
idx += 1
except AttributeError:
pass
except RuntimeError:
pass
return layer
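# A hedged, illustrative sketch (not part of the original module) of how the
# tracepoint helpers above might be used while building a model. The argument
# names dense_layer and logits are hypothetical placeholders.
def _example_tracepoint_usage(dense_layer, logits):
  """Registers example checkpoints for a Keras layer and a single tensor."""
  # Trace every output tensor of the layer under names derived from 'dense_out'.
  keras_layer_tracepoint(dense_layer, 'dense_out')
  # Trace one specific tensor under the checkpoint name 'logits'.
  return tensor_tracepoint(logits, 'logits')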
def _trace_files_need_precreated(output_dir):
"""Return True if trace files must be pre-created by users."""
# Equivalent to the original character-by-character checks: pre-creation is
# only required when the output directory is under the '/cns/' path prefix.
return output_dir.startswith('/cns/')
def _get_tensor_values_cache(graph=None):
"""Returns the variable that implements tensor-value caching."""
graph = graph or ops.get_default_graph()
collection = graph.get_collection(_TENSOR_TRACER_STORAGE)
if len(collection) == 1:
return collection[0]
elif not collection:
raise RuntimeError('%s has not been created'%_TENSOR_VALUES_CACHE)
else:
raise RuntimeError('Multiple %s created'%_TENSOR_VALUES_CACHE)
def _create_tensor_values_cache(graph, num_tensors):
"""Creates a variable as the cache to store intermediate tensor values."""
graph = graph or ops.get_default_graph()
# Create in proper graph and base name_scope.
with graph.as_default() as g, g.name_scope(None):
return variable_scope.get_variable(
_TENSOR_VALUES_CACHE,
shape=[num_tensors],
dtype=dtypes.float32,
initializer=init_ops.constant_initializer(
_COMPACT_TRACE_ENTRY_INIT_VALUE),
trainable=False,
use_resource=True,
collections=[_TENSOR_TRACER_STORAGE, ops.GraphKeys.GLOBAL_VARIABLES])
class TensorTracer(object):
"""A software construct for tracing tensor values in a TF graph on TPU.
This utility is disabled by default. It can be enabled by setting
the TENSOR_TRACER_FLAGS env variable as:
export TENSOR_TRACER_FLAGS="--enable=1"
If it is enabled, it will trace the output tensor values of
selected Ops in the graph. It has two outputs: (1) the traces and (2)
a report. The traces are dumped to a specified local file on the TPU
host. The report is printed to the log.info of the TPU job.
By passing options via the env variable, users can change:
(1) the trace mode (e.g., detecting NaN/Inf, printing partial or
full tensor values)
(2) which Ops to be traced (via op.name or op.type)
(3) output trace file path.
"""
# The set of graphs that are rewritten by tensor tracer.
_traced_graphs = set()
@staticmethod
def _match_next_flag(flags, pos):
"""Returns the match for the next TensorTracer flag.
Args:
flags: a string that contains the flags.
pos: where in flags to start the search.
Returns:
A pair where the first element is the regular-expression
match found and the second element indicates if the match
has a value.
"""
match = _FLAG_DOUBLE_QUOTE_PAT.match(flags, pos)
if match:
return match, True
match = _FLAG_SINGLE_QUOTE_PAT.match(flags, pos)
if match:
return match, True
match = _FLAG_NO_QUOTE_PAT.match(flags, pos)
if match:
return match, True
match = _FLAG_NO_EQUAL_PAT.match(flags, pos)
if match:
# The flag is found but is not given a value.
return match, False
# The flag is not found.
return None, False
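  # Example (hypothetical input) of how the patterns above tokenize a flag
  # string: "--enable=1 --included_opnames='conv.*' --compact_trace" yields
  # ('enable', '1') and ('included_opnames', 'conv.*'); the bare trailing flag
  # matches _FLAG_NO_EQUAL_PAT and is returned with has_value=False.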
@staticmethod
def validate_flag_names():
"""Validates if the TensorTrace flags passed are valid."""
valid_flag_names = [_FLAG_NAME_ENABLE, _FLAG_NAME_TRACE_MODE,
_FLAG_NAME_USE_COMPACT_TRACE,
_FLAG_NAME_SUBMODE,
_FLAG_NAME_EXCLUDED_OPNAMES,
_FLAG_NAME_EXCLUDED_OPTYPES,
_FLAG_NAME_INCLUDED_OPNAMES,
_FLAG_NAME_INCLUDED_OPTYPES,
_FLAG_NAME_TRACE_DIR,
_FLAG_NAME_REPORT_FILE,
_FLAG_NAME_USE_TEST_UNDECLARED_OUTPUTS_DIR,
_FLAG_NAME_INCLUDE_LESS_INTERESTING_OPS,
_FLAG_NAME_OP_RANGE,
_FLAG_DUMP_BEFORE_AFTER_GRAPHS]
tensor_tracer_flags = os.environ.get(_FLAGS_ENV_VAR)
if not tensor_tracer_flags:
return
pos = 0
while True:
match, _ = TensorTracer._match_next_flag(tensor_tracer_flags, pos)
if not match:
break
flag_name = match.group(1)
if flag_name not in valid_flag_names:
raise ValueError(
'The flag name "%s" passed via the environment variable "%s" '
'is invalid. Valid flag names are:'
'\n%s'%(flag_name, _FLAGS_ENV_VAR, valid_flag_names))
pos = match.end()
@staticmethod
def print_flag_values():
"""Prints all TensorTracer flags passed via environment variables."""
tensor_tracer_flags = os.environ.get(_FLAGS_ENV_VAR)
if not tensor_tracer_flags:
return 'Env variable "%s" is not set'%_FLAGS_ENV_VAR
result = 'Env variable "%s" is set to "%s"\n'%(_FLAGS_ENV_VAR,
tensor_tracer_flags)
result += 'Individual flag value:\n'
pos = 0
while True:
match, has_value = TensorTracer._match_next_flag(
tensor_tracer_flags, pos)
if not match:
break
flag_name = match.group(1)
if has_value:
flag_value = match.group(2)
else:
flag_value = None
result += ' %s: %s\n'%(flag_name, flag_value)
pos = match.end()
result += '\n'
return result
@staticmethod
def get_flag_value(wanted_flag_name):
"""Returns the value of a TensorTracer flags.
Args:
wanted_flag_name: the name the the flag we are looking for.
Returns:
A pair where the first element indicates if the flag is
found and the second element is the value of the flag.
Raises:
RuntimeError: If supposedly dead code is reached.
"""
tensor_tracer_flags = os.getenv(_FLAGS_ENV_VAR)
if not tensor_tracer_flags:
return False, None
pos = 0
while True:
match, has_value = TensorTracer._match_next_flag(
tensor_tracer_flags, pos)
if not match:
return False, None
flag_name = match.group(1)
if has_value:
flag_value = match.group(2)
else:
flag_value = None
if flag_name == wanted_flag_name:
return True, flag_value
pos = match.end()
raise RuntimeError('Should not reach here.')
@staticmethod
def flag_value_to_re_list(flag_name):
"""Converts list of strings to compiled RE."""
re_list = []
found, flag_value = TensorTracer.get_flag_value(flag_name)
if not found or not flag_value:
return re_list
list_of_values = flag_value.split()
for v in list_of_values:
r = re.compile(v)
re_list.append(r)
return re_list
@staticmethod
def _is_flag_on(flag_name):
"""Returns True if the given flag is on."""
found, flag_value = TensorTracer.get_flag_value(flag_name)
if not found:
return False
if flag_value is None:
return True
# Depends on the flag value.
flag_value = flag_value.lower()
enabled = flag_value in ['1', 't', 'true', 'y', 'yes']
return enabled
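  # Example (hypothetical): a bare "--enable" counts as on, while "--enable=0"
  # or "--enable=false" turns the flag off; only the values '1', 't', 'true',
  # 'y' and 'yes' (case-insensitive) are treated as enabled.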
@staticmethod
def is_enabled():
"""Returns True if TensorTracer is enabled."""
return TensorTracer._is_flag_on(_FLAG_NAME_ENABLE)
@staticmethod
def use_test_undeclared_outputs_dir():
"""Decides the output directory of the report and trace files.
Args:
None.
Returns:
True if the output files should be written to the
test-undeclared-outputs-directory defined via an
env variable.
"""
return TensorTracer._is_flag_on(
_FLAG_NAME_USE_TEST_UNDECLARED_OUTPUTS_DIR)
@staticmethod
def use_compact_trace():
return TensorTracer._is_flag_on(
_FLAG_NAME_USE_COMPACT_TRACE)
@staticmethod
def check_device_type(device_type):
"""Checks if the given device type is valid."""
if device_type not in [_DEVICE_TYPE_TPU, _DEVICE_TYPE_CPU]:
raise ValueError('Invalid device_type "%s"'%device_type)
@staticmethod
def check_trace_mode(trace_mode):
"""Checks if the given trace mode is valid."""
valid_trace_modes = [_TRACE_MODE_NAN_INF, _TRACE_MODE_PART_TENSOR,
_TRACE_MODE_FULL_TENSOR, _TRACE_MODE_NORM,
_TRACE_MODE_MAX_ABS]
if trace_mode not in valid_trace_modes:
raise ValueError('Invalid trace mode "%s" given to the TensorTracer. '
'Valid trace modes are: %s'%(trace_mode,
valid_trace_modes))
@staticmethod
def check_submode(submode):
"""Checks if the given submode is valid."""
if not submode:
return
valid_submodes = [_SUBMODE_DETAILED, _SUBMODE_BRIEF]
if submode not in valid_submodes:
raise ValueError('Invalid submode "%s" given to the TensorTracer. '
'Valid submodes are: %s'%(submode,
valid_submodes))
@staticmethod
def loop_cond_op(op):
return op.type in ('LoopCond', 'RefLoopCond')
@staticmethod
def while_loop_op(op):
"""Returns true if op is one of the special ops of in a while loop.
Args:
op: A tf.Operation.
Returns:
True if the given op is one of [Switch, Merge, Enter, Exit,
NextIteration, LoopCond], which are all building blocks for TF while
loops.
"""
return (control_flow_util.IsLoopSwitch(op) or
control_flow_util.IsLoopMerge(op) or
control_flow_util.IsLoopEnter(op) or
control_flow_util.IsLoopExit(op) or
TensorTracer.loop_cond_op(op) or
op.type in ('RefNextIteration', 'NextIteration'))
@staticmethod
def unsafe_op(op):
"""Returns True if this op is not safe to be traced."""
if control_flow_util.IsInCond(op):
return True
# Reasons for not including following op types:
# Assign: cause incorrect result with CPU tracing.
if op.type in ['Assign']:
return True
return False
@staticmethod
def device_mismatch(device_type, op):
if device_type == _DEVICE_TYPE_TPU:
# pylint: disable=protected-access
return tpu._TPU_REPLICATE_ATTR not in op.node_def.attr
# pylint: enable=protected-access
return False
@staticmethod
def unsafe_scalar_trace(op):
"""Return true if scalar output tensor from Op is not safe to be traced."""
# Tracing the following causes cycle in the graph on TPU.
if op.type in ['LoopCond', 'Enter', 'Merge', 'Const',
'Switch', 'Less', 'ReadVariableOp']:
return True
# Tracing the following will cause casting-issue
# with the norm tracing mode or other compilation issues on CPU.
if op.type in ['VarHandleOp', 'IteratorToStringHandle',
'IteratorGetNext', 'OneShotIterator',
'IteratorV2', 'MakeIterator',
'BatchDatasetV2', 'MapDataset',
'FixedLengthRecordDataset', 'TakeDataset', 'ZipDataset',
'Placeholder', 'PlaceholderWithDefault', 'StridedSlice']:
return True
return False
@staticmethod
def less_interesting_op(op):
"""Returns True if the given Op is not an interesting one to be traced."""
found, _ = TensorTracer.get_flag_value(
_FLAG_NAME_INCLUDE_LESS_INTERESTING_OPS)
if found:
# users force to include all ops.
return False
# The following ops are highly unlikely to cause bugs.
return op.type in ['Const', 'Identity', 'Cast', 'Shape']
@staticmethod
def reason(op_idx, details):
"""Returns reason why the Op at op_idx is traced or not."""
return '%d %s'%(op_idx, details)
@staticmethod
def topological_sort(g):
"""Performs topological sort on the given graph.
Args:
g: the graph.
Returns:
A pair where the first element indicates if the topological
sort succeeded (True if there is no cycle found; False if a
cycle is found) and the second element is either the sorted
list of nodes or the cycle of nodes found.
"""
def visit(op, cycle, permanently_marked_ops,
temporarily_marked_ops, sorted_ops):
"""Recursively visits all Ops in a graph.
Args:
op: the current Op being visited.
cycle: a cycle of Ops found.
permanently_marked_ops: the set of Ops that were already visited.
temporarily_marked_ops: the set of Ops that we have visited during
the current descent.
sorted_ops: the list of Ops sorted in topological order.
"""
if cycle:
return
if op in permanently_marked_ops:
return
if op in temporarily_marked_ops:
# Populate the caller's set in place; rebinding the local name would not
# propagate the detected cycle back to the caller.
cycle.update(temporarily_marked_ops)
return
temporarily_marked_ops.add(op)
for i in range(len(op.outputs)):
out_tensor = op.outputs[i]
for consumer_op in out_tensor.consumers():
visit(consumer_op, cycle, permanently_marked_ops,
temporarily_marked_ops, sorted_ops)
# pylint: disable=protected-access
for ctrl_output_op in op._control_outputs:
# pylint: enable=protected-access
visit(ctrl_output_op, cycle, permanently_marked_ops,
temporarily_marked_ops, sorted_ops)
temporarily_marked_ops.remove(op)
permanently_marked_ops.add(op)
sorted_ops.insert(0, op)
graph_cycle = set([])
sorted_ops = []
permanently_marked_ops = set([])
temporarily_marked_ops = set([])
unsorted_ops = g.get_operations()
for op in unsorted_ops:
visit(op, graph_cycle, permanently_marked_ops,
temporarily_marked_ops, sorted_ops)
if graph_cycle:
return (False, graph_cycle)
else:
assert len(unsorted_ops) == len(sorted_ops)
return (True, sorted_ops)
@staticmethod
def _make_op_and_tensor_maps(op_list):
"""Creates various maps and lists from op_list.
Args:
op_list: a list of Ops
Returns:
opname_idx_map: a map from Op's name to its index in op_list.
tensor_list: a list of output tensors of the Ops in op_list.
tensorname_idx_map: a map from output tensor name to its index
in tensor_list.
"""
opname_idx_map = {}
tensor_list = []
tensorname_idx_map = {}
for op_id, op in enumerate(op_list):
if op.name in opname_idx_map:
raise ValueError('Duplicated Op name: %s'%op.name)
opname_idx_map[op.name] = op_id
for output_tensor in op.outputs:
if output_tensor.name not in tensorname_idx_map:
tensor_list.append(output_tensor)
tensorname_idx_map[output_tensor.name] = len(tensor_list)-1
return (opname_idx_map, tensor_list, tensorname_idx_map)
def __init__(self):
"""Initializes a TensorTracer.
Sets the various member fields from the flags (if given) or the defaults.
"""
self._version = 'use-outside-compilation'
self._device_type = None
TensorTracer.validate_flag_names()
found, self._trace_mode = TensorTracer.get_flag_value(_FLAG_NAME_TRACE_MODE)
if not found or not self._trace_mode:
self._trace_mode = _TRACE_MODE_NAN_INF
TensorTracer.check_trace_mode(self._trace_mode)
found, self._submode = TensorTracer.get_flag_value(_FLAG_NAME_SUBMODE)
if not found or not self._submode:
self._submode = _SUBMODE_DETAILED
TensorTracer.check_submode(self._submode)
self._part_tensor_size = _TRACE_MODE_PART_TENSOR_SIZE
self._instrument_records = {}
self._set_trace_dir()
self._set_report_file()
self._set_op_range()
self._set_excluded_opnames()
self._set_excluded_optypes()
self._set_included_opnames()
self._set_included_optypes()
self._num_replicas = None
self._num_replicas_per_host = None
self._num_hosts = None
self._replica_id = None
_, self._graph_dump_path = TensorTracer.get_flag_value(
_FLAG_DUMP_BEFORE_AFTER_GRAPHS)
def _add_replica_id_to_graph(self):
"""Adds nodes for computing the replica ID to the graph."""
if self._num_replicas:
with ops.control_dependencies(None):
# Uses None as dependency to run outside of TPU graph rewrites.
self._replica_id = tpu_ops.tpu_replicated_input(
list(range(self._num_replicas)),
name='tt_replica_id')
else:
self._replica_id = 'unknown'
def _set_trace_dir(self):
found, self._trace_dir = TensorTracer.get_flag_value(_FLAG_NAME_TRACE_DIR)
if found and self._trace_dir \
and TensorTracer.use_test_undeclared_outputs_dir():
raise ValueError('Cannot use --%s and --%s at the same time'
%(_FLAG_NAME_TRACE_DIR,
_FLAG_NAME_USE_TEST_UNDECLARED_OUTPUTS_DIR))
if TensorTracer.use_test_undeclared_outputs_dir():
self._trace_dir = os.environ.get(_TEST_UNDECLARED_OUTPUTS_DIR_ENV_VAR)
def _set_report_file(self):
"""Sets the path of the output report file."""
found, self._report_file_path = TensorTracer.get_flag_value(
_FLAG_NAME_REPORT_FILE)
if found and self._report_file_path \
and TensorTracer.use_test_undeclared_outputs_dir():
if os.path.isabs(self._report_file_path):
raise ValueError('If use_test_undeclared_outputs_dir is set, '
'report_file_path cannot be an absolute path (%s)'
%self._report_file_path)
outputs_dir = os.environ.get(_TEST_UNDECLARED_OUTPUTS_DIR_ENV_VAR)
self._report_file_path = os.path.join(outputs_dir,
self._report_file_path)
if not self._report_file_path:
self._report_file = None
return
try:
self._report_file = gfile.Open(self._report_file_path, 'w')
except IOError as e:
raise e
def _close_report_file(self):
if self._report_file:
self._report_file.close()
def _set_op_range(self):
"""Sets the index range of the Ops that we will consider tracing."""
found, op_range = TensorTracer.get_flag_value(_FLAG_NAME_OP_RANGE)
if not found or not op_range:
self._op_range = (-1, -1) # this means including all ops.
return
match = _OP_RANGE_PAT.match(op_range)
if not match:
self._op_range = (-1, -1) # this means including all ops.
return
self._op_range = (int(match.group(1)), int(match.group(2)))
def _inside_op_range(self, idx):
"""Return True if the given index is inside the selected range."""
if idx < self._op_range[0]:
return False
return self._op_range[1] < 0 or idx <= self._op_range[1]
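  # Example (hypothetical): with --op_range=10:200 only ops whose index falls
  # in [10, 200] are considered for tracing; the default (-1, -1) places no
  # bound on either end.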
def _set_excluded_opnames(self):
self._excluded_opname_re_list = TensorTracer.flag_value_to_re_list(
_FLAG_NAME_EXCLUDED_OPNAMES)
def _set_excluded_optypes(self):
self._excluded_optype_re_list = TensorTracer.flag_value_to_re_list(
_FLAG_NAME_EXCLUDED_OPTYPES)
def _set_included_opnames(self):
self._included_opname_re_list = TensorTracer.flag_value_to_re_list(
_FLAG_NAME_INCLUDED_OPNAMES)
def _set_included_optypes(self):
self._included_optype_re_list = TensorTracer.flag_value_to_re_list(
_FLAG_NAME_INCLUDED_OPTYPES)
def _is_user_included_op(self, op):
for opname_re in self._included_opname_re_list:
if opname_re.match(op.name):
return True
for optype_re in self._included_optype_re_list:
if optype_re.match(op.type):
return True
return False
def _is_user_excluded_op(self, op):
for opname_re in self._excluded_opname_re_list:
if opname_re.match(op.name):
return True
for optype_re in self._excluded_optype_re_list:
if optype_re.match(op.type):
return True
return False
def _use_tensor_values_cache(self):
"""Returns True if immediate tensors should be first saved to a cache."""
if self._trace_mode not in set([_TRACE_MODE_NAN_INF,
_TRACE_MODE_NORM, _TRACE_MODE_MAX_ABS]):
return False
if self._trace_dir and _trace_files_need_precreated(self._trace_dir):
return True
if TensorTracer.use_compact_trace():
return True
return False
def _save_tensor_value_to_cache_op(self, graph, cache_idx, updates):
"""Returns an Op that will save the given updates to an entry in the cache."""
cache = _get_tensor_values_cache(graph)
indices = constant_op.constant([cache_idx])
return state_ops.scatter_update(cache, indices, updates).op
def _write_report(self, content):
"""Writes the given content to the report."""
line = '%s %s'%(_TRACER_LOG_PREFIX, content)
if self._report_file:
self._report_file.write(line)
else:
logging.info(line)
def _write_config_section(self):
"""Writes the config section of the report."""
self._write_report('%s %s\n'%(_MARKER_SECTION_BEGIN, _SECTION_NAME_CONFIG))
self._write_report('%s %s\n'%(_FIELD_NAME_VERSION, self._version))
self._write_report('%s %s\n'%(_FIELD_NAME_DEVICE, self._device_type))
self._write_report('%s %s\n'%(_FIELD_NAME_TRACE_MODE, self._trace_mode))
self._write_report('%s %s\n'%(_FIELD_NAME_SUBMODE, self._submode))
self._write_report('%s %s\n'%(_FIELD_NAME_NUM_REPLICAS, self._num_replicas))
self._write_report('%s %s\n'%(_FIELD_NAME_NUM_REPLICAS_PER_HOST,
self._num_replicas_per_host))
self._write_report('%s %s\n'%(_FIELD_NAME_NUM_HOSTS, self._num_hosts))
self._write_report('%s %s\n'%(_MARKER_SECTION_END, _SECTION_NAME_CONFIG))
def _write_reason_section(self):
"""Writes the reason section of the report."""
self._write_report('%s %s\n'%(_MARKER_SECTION_BEGIN, _SECTION_NAME_REASON))
for key in sorted(self._instrument_records):
self._write_report('"%s" %s\n'%(key, self._instrument_records[key]))
self._write_report('%s %s\n'%(_MARKER_SECTION_END, _SECTION_NAME_REASON))
def _write_op_list_section(self, op_list):
"""Writes the Op-list section of the report."""
self._write_report('%s %s\n'%(_MARKER_SECTION_BEGIN, _SECTION_NAME_OP_LIST))
self._write_report('%s %d\n'%(_FIELD_NAME_NUM_OPS, len(op_list)))
for i in range(0, len(op_list)):
op = op_list[i]
line = '%d "%s" %s'%(i, op.name, op.type)
for out_tensor in op.outputs:
if out_tensor.name not in self._tensorname_idx_map:
raise ValueError(
'out_tensor %s is not in tensorname_idx_map'%out_tensor.name)
line += ' %d'%self._tensorname_idx_map[out_tensor.name]
line += '\n'
self._write_report(line)
self._write_report('%s %s\n'%(_MARKER_SECTION_END, _SECTION_NAME_OP_LIST))
def _write_tensor_list_section(self, tensor_list, opname_idx_map):
"""Writes the tensor-list section of the report."""
self._write_report('%s %s\n'%(_MARKER_SECTION_BEGIN,
_SECTION_NAME_TENSOR_LIST))
self._write_report('%s %d\n'%(_FIELD_NAME_NUM_TENSORS, len(tensor_list)))
for i in range(0, len(tensor_list)):
tensor = tensor_list[i]
line = '%d "%s"'%(i, tensor.name)
for consumer_op in tensor.consumers():
if consumer_op.name not in opname_idx_map:
raise ValueError(
'consumer_op %s is not in opname_idx_map'%consumer_op.name)
line += ' %d'%opname_idx_map[consumer_op.name]
line += '\n'
self._write_report(line)
self._write_report('%s %s\n'%(_MARKER_SECTION_END,
_SECTION_NAME_TENSOR_LIST))
def _write_cache_index_map_section(self):
"""Writes the mapping from cache index to tensor index to the report."""
self._write_report('%s %s\n'%(_MARKER_SECTION_BEGIN,
_SECTION_NAME_CACHE_INDEX_MAP))
self._write_report('%s %d\n'%(_FIELD_NAME_NUM_CACHE_INDICES,
len(self._cache_idx_to_tensor_idx)))
for cache_idx in range(0, len(self._cache_idx_to_tensor_idx)):
tensor_idx = self._cache_idx_to_tensor_idx[cache_idx]
line = '%d %d\n'%(cache_idx, tensor_idx)
self._write_report(line)
self._write_report('%s %s\n'%(_MARKER_SECTION_END,
_SECTION_NAME_CACHE_INDEX_MAP))
def _write_graph_section(self, succeed, sorted_or_cycle):
"""Writes the graph section of the report."""
self._write_report('%s %s\n'%(_MARKER_SECTION_BEGIN, _SECTION_NAME_GRAPH))
self._write_report('%s %s\n'%(_FIELD_NAME_TOPOLOGICAL_SORT_SUCCEED,
succeed))
l = list(sorted_or_cycle)
for i in range(0, len(l)):
self._write_report('%d "%s"\n'%(i, l[i].name))
self._write_report('%s %s\n'%(_MARKER_SECTION_END, _SECTION_NAME_GRAPH))
def _preprocess_traced_tensor(self, tensor):
"""Computes NAN/Norm/Max on TPUs before sending to CPU.
Args:
tensor: The tensor to be traced.
Returns:
A tensor that should be input to the trace_function.
Raises:
RuntimeError: If the trace mode is invalid.
"""
def _detect_nan_inf(tensor):
"""Trace function for detecting any NaN/Inf in the tensor."""
if tensor.dtype.is_floating:
mask = math_ops.reduce_any(
gen_math_ops.logical_or(
gen_math_ops.is_nan(tensor), gen_math_ops.is_inf(tensor)))
output_tensor = control_flow_ops.cond(mask,
lambda: constant_op.constant(1.0),
lambda: constant_op.constant(0.0))
else:
output_tensor = constant_op.constant(0.0)
# The shape has to be 1. Set it if it does not have the information.
output_tensor = array_ops.reshape(output_tensor, [1])
return output_tensor
def _show_norm(tensor):
tensor = math_ops.cast(tensor, dtypes.float32)
output_tensor = linalg_ops.norm(tensor)
# The shape has to be 1. Set it if it does not have the information.
output_tensor = array_ops.reshape(output_tensor, [1])
return output_tensor
def _show_max_abs(tensor):
tensor = math_ops.cast(tensor, dtypes.float32)
output_tensor = math_ops.reduce_max(math_ops.abs(tensor))
zero = constant_op.constant(0, dtypes.float32)
output_tensor = gen_math_ops.maximum(zero, output_tensor)
# The shape has to be 1. Set it if it does not have the information.
output_tensor = array_ops.reshape(output_tensor, [1])
return output_tensor
if self._trace_mode == _TRACE_MODE_NAN_INF:
return _detect_nan_inf(tensor)
if self._trace_mode == _TRACE_MODE_PART_TENSOR:
return tensor
if self._trace_mode == _TRACE_MODE_FULL_TENSOR:
return tensor
if self._trace_mode == _TRACE_MODE_NORM:
return _show_norm(tensor)
if self._trace_mode == _TRACE_MODE_MAX_ABS:
return _show_max_abs(tensor)
raise RuntimeError(
'Tensor trace fun for %s is not yet implemented' % self._trace_mode)
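  # For intuition (hypothetical values): in 'norm' mode a traced tensor
  # [[3.0, 4.0]] is reduced on the device to the single value [5.0]; in
  # 'nan-inf' mode a tensor containing any NaN or Inf becomes [1.0] and [0.0]
  # otherwise, so only one scalar per traced tensor is shipped to the host.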
def _make_tensor_trace_fun(self, tensor_name):
"""Makes the tensor tracing function called by outside compilation.
Args:
tensor_name: name of the tensor being traced.
Returns:
A function to be passed as the first argument to outside compilation.
Raises:
RuntimeError: If the trace mode is invalid.
"""
def _print_tensor(tensor_name, num_elements, tensor, output_tensor):
"""Prints a tensor value to a file.
Args:
tensor_name: name of the tensor being traced.
num_elements: number of elements to print (-1 means print all).
tensor: the tensor needs to be returned.
output_tensor: the tensor needs to be printed.
Returns:
The same tensor passed via the "tensor" argument.
Raises:
ValueError: If tensor_name is not already in
self._tensorname_idx_map.
"""
if self._submode == _SUBMODE_BRIEF:
if tensor_name not in self._tensorname_idx_map:
raise ValueError(
'Tensor name %s is not in the tensorname_idx_map'%tensor_name)
msg = '%d'%self._tensorname_idx_map[tensor_name]
else:
msg = '"%s"'%tensor_name
if self._trace_dir:
output_path = os.path.join(self._trace_dir, _TRACE_FILE_NAME)
output_stream = _OUTPUT_STREAM_ESCAPE + output_path
else:
output_stream = sys.stderr
return logging_ops.print_v2(msg, array_ops.shape(output_tensor),
'@', self._replica_id,
'\n', output_tensor, '\n',
summarize=num_elements,
output_stream=output_stream)
def _show_part_tensor(tensor):
"""Trace function for printing part of the tensor."""
return _print_tensor(tensor_name, self._part_tensor_size,
tensor, tensor)
def _show_full_tensor(tensor):
"""Trace function for printing the entire tensor."""
return _print_tensor(tensor_name, -1, tensor, tensor)
if self._trace_mode == _TRACE_MODE_PART_TENSOR:
return _show_part_tensor
# The input tensor has a shape of "[1]" for _TRACE_MODE_NAN_INF,
# _TRACE_MODE_NORM, and _TRACE_MODE_MAX_ABS, as related computations are
# performed within TPUs and only their results are transferred to CPU.
# Simply, print the full tensor for these trace modes.
if self._trace_mode in [
_TRACE_MODE_NAN_INF, _TRACE_MODE_NORM, _TRACE_MODE_FULL_TENSOR,
_TRACE_MODE_MAX_ABS
]:
return _show_full_tensor
raise RuntimeError('Tensor trace fun for %s is not yet implemented'
%self._trace_mode)
def _skip_op(self, op_id, op, user_included, user_excluded,
in_exec_path=True):
"""Returns True if we should not trace Op."""
if TensorTracer.while_loop_op(op):
self._instrument_records[op.name] = TensorTracer.reason(
op_id, _REASON_WHILELOOP_OP)
return True
if TensorTracer.unsafe_op(op):
self._instrument_records[op.name] = TensorTracer.reason(
op_id, _REASON_UNSAFE_OP)
return True
if TensorTracer.device_mismatch(self._device_type, op):
self._instrument_records[op.name] = TensorTracer.reason(
op_id, _REASON_DEVICE_MISMATCH)
return True
if not in_exec_path:
self._instrument_records[op.name] = TensorTracer.reason(
op_id, _REASON_NOT_EXECUTED)
return True
if not self._inside_op_range(op_id):
self._instrument_records[op.name] = TensorTracer.reason(
op_id, _REASON_OUTSIDE_OP_RANGE)
return True
if TensorTracer.less_interesting_op(op):
self._instrument_records[op.name] = TensorTracer.reason(
op_id, _REASON_LESS_INTERESTING_OP)
return True
if user_included:
self._instrument_records[op.name] = TensorTracer.reason(
op_id, _REASON_USER_INCLUDED)
return False
if user_excluded:
self._instrument_records[op.name] = TensorTracer.reason(
op_id, _REASON_USER_EXCLUDED)
return True
return False
def _skip_tensor(self, op_id, out_tensor, user_included,
user_excluded):
"""Returns True if we should not trace out_tensor."""
# Skips a tensor if the tensor has a non-numeric type.
# Note: we cannot use check_ops.is_numeric_tensor(out_tensor)
# because it also excludes tensors with dtypes, bool, and
# float32_ref, which we actually want to trace.
non_numeric_tensor_types = set([dtypes.variant, dtypes.resource,
dtypes.string])
if out_tensor.dtype in non_numeric_tensor_types:
self._instrument_records[out_tensor.name] = TensorTracer.reason(
op_id, _REASON_NON_NUMERIC_TENSOR)
return True
# Skip a tensor if it feeds a special while loop op.
if [consumer for consumer in out_tensor.consumers() if
TensorTracer.while_loop_op(consumer)]:
self._instrument_records[out_tensor.name] = TensorTracer.reason(
op_id, _REASON_FEEDS_WHILELOOP_OP)
return True
if user_included:
self._instrument_records[out_tensor.name] = TensorTracer.reason(
op_id, _REASON_USER_INCLUDED)
return False
if user_excluded:
self._instrument_records[out_tensor.name] = TensorTracer.reason(
op_id, _REASON_USER_EXCLUDED)
return True
if not out_tensor.get_shape().is_fully_defined():
# If trace mode is nan-inf, norm or max, then the tensor will be reduced
# to a scalar before the outside compilation call.
if self._trace_mode in [
_TRACE_MODE_NAN_INF, _TRACE_MODE_NORM, _TRACE_MODE_MAX_ABS
]:
self._instrument_records[out_tensor.name] = TensorTracer.reason(
op_id, _REASON_TENSOR_GET_TRACED)
return False
else:
self._instrument_records[out_tensor.name] = TensorTracer.reason(
op_id, _REASON_DYNAMIC_SHAPE)
return True
rank = len(out_tensor.shape)
if rank < 1:
# scalar
if TensorTracer.unsafe_scalar_trace(out_tensor.op):
self._instrument_records[out_tensor.name] = TensorTracer.reason(
op_id, _REASON_UNSAFE_SCALAR)
return True
else:
self._instrument_records[out_tensor.name] = TensorTracer.reason(
op_id, _REASON_SCALAR_GET_TRACED)
return False
else:
# tensor
self._instrument_records[out_tensor.name] = TensorTracer.reason(
op_id, _REASON_TENSOR_GET_TRACED)
return False
def _filter_execution_path_operations(self, operations, fetches):
"""Returns the set of ops in the execution path to compute given fetches."""
# If no fetch provided, then return all operations.
if fetches is None:
return set(operations)
# Convert to list, if a single element is provided.
if not isinstance(fetches, (list, tuple)):
fetches = [fetches]
# If a tensor is given as fetch, convert it to op.
op_fetches = []
for fetch in fetches:
if isinstance(fetch, ops.Operation):
op_fetches.append(fetch)
elif isinstance(fetch, ops.Tensor):
op_fetches.append(fetch.op)
else:
raise RuntimeError('Given fetch:%s is neither a tensor nor an op.'
%fetch)
execution_path_operations = set(op_fetches)
traverse_stack = list(op_fetches)
while True:
if not traverse_stack:
break
head_op = traverse_stack.pop()
input_ops = [tensor_input.op for tensor_input in head_op.inputs]
input_ops.extend(head_op.control_inputs)
for input_op in input_ops:
if input_op not in execution_path_operations:
# Filter out loop condition operations, tracing them causes a cycle.
# Trace only the loop-body.
if TensorTracer.loop_cond_op(input_op):
continue
execution_path_operations.add(input_op)
traverse_stack.append(input_op)
return execution_path_operations
def _determine_traced_tensors(self, graph, ops_in_exec_path):
"""Determines the tensors that will be traced."""
self._traced_tensorname_to_cache_idx_map = {}
self._cache_idx_to_tensor_idx = []
operations = graph.get_operations()
checkpoint_operations = self._get_checkpoints(graph)
for op_id, op in enumerate(operations):
if checkpoint_operations and op.name not in checkpoint_operations:
continue
user_included = self._is_user_included_op(op)
user_excluded = self._is_user_excluded_op(op)
in_exec_path = op in ops_in_exec_path
if self._skip_op(op_id, op, user_included, user_excluded, in_exec_path):
continue
for i in range(len(op.outputs)):
out_tensor = op.outputs[i]
if self._skip_tensor(op_id, out_tensor, user_included,
user_excluded):
continue
tensor_name = out_tensor.name
if tensor_name in self._traced_tensorname_to_cache_idx_map:
raise ValueError(
'Tensor name %s should not be already in '
'traced_tensorname_to_cache_idx_map'%tensor_name)
if tensor_name not in self._tensorname_idx_map:
raise ValueError(
'Tensor name %s is not in the tensorname_idx_map'%tensor_name)
tensor_idx = self._tensorname_idx_map[tensor_name]
cache_idx = len(self._traced_tensorname_to_cache_idx_map)
self._traced_tensorname_to_cache_idx_map[tensor_name] = cache_idx
self._cache_idx_to_tensor_idx.append(tensor_idx)
if len(self._traced_tensorname_to_cache_idx_map) != len(
self._cache_idx_to_tensor_idx):
raise RuntimeError('len(self._traced_tensorname_to_cache_idx_map) != '
'len(self._cache_idx_to_tensor_idx)')
def _check_trace_files(self):
"""Checks if any requirements for trace files are satisfied."""
if not self._trace_dir:
# traces will be written to stderr. No need to check trace files.
return
if _trace_files_need_precreated(self._trace_dir):
for replica_id in range(0, self._num_replicas):
trace_file_path = os.path.join(
self._trace_dir,
_COMPACT_TRACE_FILE_PREFIX) + '%d'%replica_id
if not gfile.Exists(trace_file_path):
raise RuntimeError(
'%s must be pre-created with the '
'appropriate properties.'%trace_file_path)
else:
if not gfile.Exists(self._trace_dir):
gfile.MkDir(self._trace_dir)
if not gfile.Exists(self._trace_dir):
raise RuntimeError('Failed to create %s'%self._trace_dir)
def _pre_tracing(self, graph, fetches):
"""Work needs to be done prior to TPU or CPU tracing."""
self._check_trace_files()
operations = graph.get_operations()
(opname_idx_map, tensor_list, self._tensorname_idx_map) = (
TensorTracer._make_op_and_tensor_maps(operations))
self._write_config_section()
self._write_op_list_section(operations)
self._write_tensor_list_section(tensor_list, opname_idx_map)
# Filter out the operations that won't be executed.
# if fetches=None, then ops_in_exec_path = set(operations)
ops_in_exec_path = self._filter_execution_path_operations(operations,
fetches)
self._determine_traced_tensors(graph, ops_in_exec_path)
self._write_cache_index_map_section()
# Does the topological sort before adding any nodes to the graph.
(succeed, sorted_or_cycle) = TensorTracer.topological_sort(graph)
if self._use_tensor_values_cache():
_create_tensor_values_cache(graph,
len(self._cache_idx_to_tensor_idx))
return (ops_in_exec_path, succeed, sorted_or_cycle)
def _post_tracing(self, succeed, sorted_or_cycle):
"""Work needs to be done after TPU or CPU tracing."""
self._write_reason_section()
self._write_graph_section(succeed, sorted_or_cycle)
self._close_report_file()
def _get_checkpoints(self, graph):
"""Returns the list of Ops that produce the tensors traced with API.
Args:
graph: the graph of Ops.
Returns:
A set of operation names which should be traced.
"""
self._write_report('%s %s\n'%(_MARKER_SECTION_BEGIN,
_TENSOR_TRACER_CHECKPOINT))
checkpoint_operations = set()
tensor_tracer_variables = graph.get_collection(_TENSOR_TRACER_COLLECTION)
for (tensor, checkpoint_name) in tensor_tracer_variables:
self._write_report('%s %s\n'%(tensor.name, checkpoint_name))
checkpoint_operations.add(tensor.op.name)
self._write_report('%s %s\n'%(_MARKER_SECTION_END,
_TENSOR_TRACER_CHECKPOINT))
return checkpoint_operations
def _generate_flush_cache_op(self, graph, start_replica, on_tpu):
"""Generates an Op that will flush the cache to file.
Args:
graph: the graph of Ops
start_replica: the ID of the first replica being flushed by this Op.
on_tpu: if the graph is executed on TPU.
Returns:
The Op to flush the cache to file.
"""
def _make_flush_fun(replica_id):
"""Makes a function for flushing the cache for the given replica."""
def _fun():
"""A function that flushes the cache to a file."""
def _flush_fun(cache):
"""Flushes the cache to a file."""
if isinstance(replica_id, str):
replica_id_str = replica_id
else:
replica_id_str = '%d'%replica_id
if self._trace_dir:
output_path = os.path.join(self._trace_dir,
_COMPACT_TRACE_FILE_PREFIX) \
+ replica_id_str
output_stream = _OUTPUT_STREAM_ESCAPE + output_path
else:
output_stream = sys.stderr
new_step_line = _REPLICA_ID_TAG + replica_id_str
print_op = logging_ops.print_v2(
new_step_line, '\n',
cache, '\n',
summarize=-1,
output_stream=output_stream)
with ops.control_dependencies([print_op]):
return constant_op.constant(0).op
cache = _get_tensor_values_cache(graph)
if on_tpu:
flush_op = tpu.outside_compilation(_flush_fun, cache.value())
else:
flush_op = _flush_fun(cache.value())
with ops.control_dependencies([flush_op]):
reset_value = constant_op.constant(_COMPACT_TRACE_ENTRY_INIT_VALUE,
dtype=cache.dtype,
shape=cache.shape)
assign_op = state_ops.assign(cache, reset_value).op
with ops.control_dependencies([assign_op]):
return flush_op.outputs[0]
return _fun
def _f(replica_id):
return _make_flush_fun(replica_id)
def _eq(x):
return math_ops.equal(x, self._replica_id)
def _do_nothing():
return constant_op.constant(0)
return control_flow_ops.case({\
_eq(start_replica): _f(start_replica), \
_eq(start_replica+1): _f(start_replica+1), \
_eq(start_replica+2): _f(start_replica+2), \
_eq(start_replica+3): _f(start_replica+3), \
_eq(start_replica+4): _f(start_replica+4), \
_eq(start_replica+5): _f(start_replica+5), \
_eq(start_replica+6): _f(start_replica+6), \
_eq(start_replica+7): _f(start_replica+7), \
},
default=_do_nothing,
exclusive=True).op
def _flush_tensor_values_cache(self, graph, tensor_fetches, op_fetches,
on_tpu):
"""Flushes the intermediate tensor values in the graph to the cache.
Args:
graph: the graph of Ops
tensor_fetches: list of tensor results returned by the model_fn.
op_fetches: list of ops that are returned by the model_fn, e.g., train_op.
on_tpu: if the graph is executed on TPU.
Returns:
An identical copy of tensor_fetches.
"""
# Add a dependency to op and tensor fetches to make sure that all tracing
# ops are executed before flushing trace results.
with ops.control_dependencies(op_fetches +
[tensor.op for tensor in tensor_fetches]):
flush_cache_op_list = []
for host in range(self._num_hosts):
start_replica = host * 8
flush_op = self._generate_flush_cache_op(graph, start_replica, on_tpu)
flush_cache_op_list.append(flush_op)
return control_flow_ops.tuple(tensor_fetches,
control_inputs=flush_cache_op_list)
def _process_tensor_fetches(self, tensor_fetches):
"""Check that tensor_fetches is not empty and have valid tensors."""
# If none or empty list.
if tensor_fetches is None:
raise RuntimeError('tensor_fetches provided to tensor_tracer cannot be '
'None.')
if not isinstance(tensor_fetches, (list, tuple)):
tensor_fetches = [tensor_fetches]
elif not tensor_fetches:
raise RuntimeError('tensor_fetches provided to tensor_tracer cannot be '
'empty list.')
fetches = []
for fetch in tensor_fetches:
if isinstance(fetch, ops.Tensor):
fetches.append(fetch)
else:
raise RuntimeError('Given tensor_fetch:%s is not a tensor.' % fetch)
return fetches
def _process_op_fetches(self, op_fetches):
"""Check that op_fetches have valid ops."""
if op_fetches is None:
return []
if not isinstance(op_fetches, (list, tuple)):
op_fetches = [op_fetches]
fetches = []
for fetch in op_fetches:
if isinstance(fetch, ops.Operation):
fetches.append(fetch)
else:
logging.warning('Ignoring the given op_fetch:%s, which is not an op.' %
fetch)
return fetches
def _convert_fetches_to_input_format(self, input_fetches, current_fetches):
"""Changes current_fetches' format, so that it matches input_fetches."""
if isinstance(input_fetches, ops.Tensor):
if len(current_fetches) != 1:
raise RuntimeError('Tensor tracer input/output fetches do not match.')
return current_fetches[0]
else:
if len(current_fetches) != len(input_fetches):
raise RuntimeError('Tensor tracer input/output fetches do not match.')
elif isinstance(input_fetches, tuple):
return tuple(current_fetches)
else:
return current_fetches
def _get_op_control_flow_context(self, op):
"""Returns the control flow of the given op.
Args:
op: tf.Operation for which the control flow context is requested.
Returns:
op_control_flow_context: the control flow context of the given
op. If the operation type is LoopExit, returns the outer control flow
context.
"""
# pylint: disable=protected-access
op_control_flow_context = op._control_flow_context
# pylint: enable=protected-access
if control_flow_util.IsLoopExit(op):
op_control_flow_context = op_control_flow_context.outer_context
return op_control_flow_context
def _trace_execution(self, graph,
tensor_fetches,
op_fetches=None,
on_tpu=True):
"""Commong tracing function for both CPU and TPUs.
The caller function should set _device_type, _num_replicas,
_num_replicas_per_host, _num_hosts and _replica_id before calling
_trace_execution.
Args:
graph: the graph of Ops executed on the TPU.
tensor_fetches: a (list, tuple, or single object) of tensor fetches
returned by model_fn given to session.run. Function must be provided
with at least one tensor to fetch.
op_fetches: A list of op fetches returned by model_fn given to
session.run. op_fetches and tensor_fetches are used to determine the
nodes that will be executed. Can be None.
on_tpu: True if executing on TPU.
Returns:
tensor_fetches: an exact copy of tensor_fetches that has additional
dependencies.
Raises:
RuntimeError: If tensor_fetches is None or empty.
"""
def _cast_unsupported_dtypes(tensor):
"""Casts tensor to a supported type."""
if tensor.dtype.__eq__(dtypes.int64):
# outside-compilation doesn't support int64 input yet.
return math_ops.cast(tensor, dtypes.int32)
if tensor.dtype.__eq__(dtypes.bfloat16) or tensor.dtype.__eq__(
dtypes.float16):
# Since host can't handle bf16, convert tensor to f32.
return math_ops.cast(tensor, dtypes.float32)
return tensor
TensorTracer.check_device_type(self._device_type)
# Check in_tensor_fetches, and op_fetches and convert them to lists.
processed_t_fetches = self._process_tensor_fetches(tensor_fetches)
op_fetches = self._process_op_fetches(op_fetches)
all_fetches = op_fetches + [tensor.op for tensor in processed_t_fetches]
# Filter the set of ops that will be executed, and topological sort.
(exec_op_set, succeed, sorted_or_cycle) = self._pre_tracing(graph,
all_fetches)
tensor_fetch_set = set(processed_t_fetches)
tracing_ops = []
# pylint: disable=protected-access
current_control_flow_context = graph._get_control_flow_context()
# pylint: enable=protected-access
# Trace ops only if they are in the execution path.
for op in exec_op_set:
for i in range(len(op.outputs)):
out_tensor = op.outputs[i]
tensor_name = out_tensor.name
if tensor_name not in self._traced_tensorname_to_cache_idx_map:
continue
# Create the list of consumers before calling _preprocess_traced_tensor.
# Otherwise, adding control input below, will introduce a cycle in the
# graph.
consumers = out_tensor.consumers()
# Not all consumers may be in the exec path. Filter out the consumers
# to keep the graph simpler.
consumers = [cop for cop in consumers if cop in exec_op_set]
# If there is no consumer of the tensor, there is no need to trace it;
# unless the tensor itself is one of the fetches.
is_a_fetched_tensor = out_tensor in tensor_fetch_set
if (not consumers) and (not is_a_fetched_tensor):
continue
op_control_flow_context = self._get_op_control_flow_context(op)
# pylint: disable=protected-access
graph._set_control_flow_context(op_control_flow_context)
# pylint: enable=protected-access
processed_out_tensor = self._preprocess_traced_tensor(out_tensor)
if on_tpu:
processed_out_tensor = _cast_unsupported_dtypes(processed_out_tensor)
if self._use_tensor_values_cache():
cache_idx = self._traced_tensorname_to_cache_idx_map[tensor_name]
trace_op = self._save_tensor_value_to_cache_op(graph,
cache_idx,
processed_out_tensor)
elif on_tpu:
trace_op = tpu.outside_compilation(
self._make_tensor_trace_fun(tensor_name), processed_out_tensor)
else:
trace_fun = self._make_tensor_trace_fun(tensor_name)
trace_op = trace_fun(processed_out_tensor)
if is_a_fetched_tensor:
tracing_ops.append(trace_op)
continue
# Add it to all consumers, as some consumers may not be executed if they
# are in a control flow.
for consumer_op in consumers:
# pylint: disable=protected-access
consumer_op._add_control_input(trace_op)
# pylint: enable=protected-access
# pylint: disable=protected-access
graph._set_control_flow_context(current_control_flow_context)
# pylint: enable=protected-access
if tracing_ops:
# If we are tracing a fetched tensor, their dependency is stored in
# tracing_ops.
processed_t_fetches = control_flow_ops.tuple(processed_t_fetches,
control_inputs=tracing_ops)
if self._use_tensor_values_cache():
processed_t_fetches = self._flush_tensor_values_cache(graph,
processed_t_fetches,
op_fetches,
on_tpu=on_tpu)
self._post_tracing(succeed, sorted_or_cycle)
# processed_t_fetches is a list at this point. Convert it to the same
# format as given in tensor_fetches.
return self._convert_fetches_to_input_format(tensor_fetches,
processed_t_fetches)
def trace_tpu(self, graph,
tensor_fetches,
op_fetches=None,
num_replicas=None,
num_replicas_per_host=None,
num_hosts=None):
"""Traces the tensors generated by TPU Ops in a TF graph.
Args:
graph: the graph of Ops executed on the TPU.
tensor_fetches: a (list, tuple, or single object) of tensor fetches
returned by model_fn given to session.run. Function must be provided
with at least one tensor to fetch.
op_fetches: A list of op fetches returned by model_fn given to
session.run. op_fetches and tensor_fetches are used to determine the
nodes that will be executed. Can be None.
num_replicas: number of replicas used on the TPU.
num_replicas_per_host: number of replicas per TPU host.
num_hosts: total number of TPU hosts.
Returns:
tensor_fetches: an exact copy of tensor_fetches that has additional
dependencies.
Raises:
RuntimeError: If num_replicas_per_host > 8.
RuntimeError: If tensor_fetches is None or empty.
"""
if graph in TensorTracer._traced_graphs:
logging.warning('Graph is already rewritten with tensor tracer, ignoring '
'multiple calls.')
return tensor_fetches
else:
TensorTracer._traced_graphs.add(graph)
self._device_type = _DEVICE_TYPE_TPU
self._num_replicas = num_replicas
self._num_replicas_per_host = num_replicas_per_host
self._num_hosts = num_hosts
if self._num_replicas is not None:
if self._num_replicas_per_host is None:
self._num_replicas_per_host = 8
if self._num_hosts is None:
self._num_hosts = num_replicas // self._num_replicas_per_host + \
(num_replicas % self._num_replicas_per_host > 0)
if self._num_replicas_per_host > 8:
# Checks for the assumption in _generate_flush_cache_op().
raise RuntimeError('num_replicas_per_host (%d) is '
'greater than 8'%self._num_replicas_per_host)
if self._graph_dump_path:
graph_io.write_graph(graph, self._graph_dump_path,
'graph_before_tt.pbtxt')
with graph.as_default():
self._add_replica_id_to_graph()
tensor_fetches = self._trace_execution(graph, tensor_fetches, op_fetches,
on_tpu=True)
if self._graph_dump_path:
graph_io.write_graph(graph, self._graph_dump_path,
'graph_after_tt.pbtxt')
return tensor_fetches
def trace_cpu(self, graph, tensor_fetches, op_fetches=None):
"""Traces the tensors generated by CPU Ops in a TF graph.
Args:
graph: the graph of Ops executed on the CPU.
tensor_fetches: a (list, tuple, or single object) of tensor fetches
returned by model_fn given to session.run. Function must be provided
with at least one tensor to fetch.
op_fetches: A list of op fetches returned by model_fn given to
session.run. op_fetches and tensor_fetches are used to determine the
nodes that will be executed. Can be None.
Returns:
tensor_fetches: an exact copy of tensor_fetches that has additional
dependencies.
Raises:
RuntimeError: If tensor_fetches is None or empty.
"""
if graph in TensorTracer._traced_graphs:
logging.warning('Graph is already rewritten with tensor tracer, ignoring '
'multiple calls.')
return tensor_fetches
else:
TensorTracer._traced_graphs.add(graph)
self._device_type = _DEVICE_TYPE_CPU
self._num_replicas = 1
self._num_replicas_per_host = 1
self._num_hosts = 1
self._replica_id = 0
if self._graph_dump_path:
graph_io.write_graph(graph, self._graph_dump_path,
'graph_before_tt.pbtxt')
with graph.as_default():
tensor_fetches = self._trace_execution(graph, tensor_fetches, op_fetches,
on_tpu=False)
if self._graph_dump_path:
graph_io.write_graph(graph, self._graph_dump_path,
'graph_after_tt.pbtxt')
return tensor_fetches
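# A hedged usage sketch, not part of the original module. The graph, loss and
# train_op arguments are hypothetical stand-ins for objects produced by a
# model_fn; only the TensorTracer methods defined above are assumed.
def _example_trace_tpu_usage(graph, loss, train_op):
  """Wraps the given fetches with tracing when tensor tracer is enabled."""
  if not TensorTracer.is_enabled():
    return loss
  tracer = TensorTracer()
  # Returns a copy of the tensor fetches that carries the extra tracing
  # dependencies; the replica count here is illustrative.
  return tracer.trace_tpu(graph, tensor_fetches=loss, op_fetches=[train_op],
                          num_replicas=8)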
|
jbedorf/tensorflow
|
tensorflow/python/tpu/tensor_tracer.py
|
Python
|
apache-2.0
| 63,384
|
[
"VisIt"
] |
79cfa1fd4349da9f0be1ed7bdac25163a3285f8eff6d7b60f6190f4c79523702
|
"""This module contains the "Viz" objects
These objects represent the backend of all the visualizations that
Superset can render.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import copy
import hashlib
import logging
import traceback
import uuid
import zlib
from collections import OrderedDict, defaultdict
from itertools import product
from datetime import datetime, timedelta
import pandas as pd
import numpy as np
from flask import request
from flask_babel import lazy_gettext as _
from markdown import markdown
import simplejson as json
from six import string_types, PY3
from dateutil import relativedelta as rdelta
from superset import app, utils, cache, get_manifest_file
from superset.utils import DTTM_ALIAS
config = app.config
stats_logger = config.get('STATS_LOGGER')
class BaseViz(object):
"""All visualizations derive this base class"""
viz_type = None
verbose_name = "Base Viz"
credits = ""
is_timeseries = False
def __init__(self, datasource, form_data):
if not datasource:
raise Exception(_("Viz is missing a datasource"))
self.datasource = datasource
self.request = request
self.viz_type = form_data.get("viz_type")
self.form_data = form_data
self.query = ""
self.token = self.form_data.get(
'token', 'token_' + uuid.uuid4().hex[:8])
self.metrics = self.form_data.get('metrics') or []
self.groupby = self.form_data.get('groupby') or []
self.status = None
self.error_message = None
def get_df(self, query_obj=None):
"""Returns a pandas dataframe based on the query object"""
if not query_obj:
query_obj = self.query_obj()
self.error_msg = ""
self.results = None
timestamp_format = None
if self.datasource.type == 'table':
dttm_col = self.datasource.get_col(query_obj['granularity'])
if dttm_col:
timestamp_format = dttm_col.python_date_format
# The datasource here can be a different backend, but the interface is common
self.results = self.datasource.query(query_obj)
self.query = self.results.query
self.status = self.results.status
self.error_message = self.results.error_message
df = self.results.df
# Transform the timestamp we received from the database into a pandas
# supported datetime format. If no python_date_format is specified, the
# pattern is assumed to be the default ISO date format. If the datetime
# format is unix (epoch), the corresponding parsing logic is used.
if df is None or df.empty:
self.status = utils.QueryStatus.FAILED
if not self.error_message:
self.error_message = "No data."
return pd.DataFrame()
else:
if DTTM_ALIAS in df.columns:
if timestamp_format in ("epoch_s", "epoch_ms"):
df[DTTM_ALIAS] = pd.to_datetime(df[DTTM_ALIAS], utc=False)
else:
df[DTTM_ALIAS] = pd.to_datetime(
df[DTTM_ALIAS], utc=False, format=timestamp_format)
if self.datasource.offset:
df[DTTM_ALIAS] += timedelta(hours=self.datasource.offset)
df = df.replace([np.inf, -np.inf], np.nan)
df = df.fillna(0)
return df
def get_extra_filters(self):
extra_filters = self.form_data.get('extra_filters', [])
return {f['col']: f['val'] for f in extra_filters}
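    # Example (hypothetical): extra_filters=[{'col': 'country', 'val': ['US']}]
    # collapses into the mapping {'country': ['US']}.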
def query_obj(self):
"""Building a query object"""
form_data = self.form_data
gb = form_data.get("groupby") or []
metrics = form_data.get("metrics") or []
columns = form_data.get("columns") or []
groupby = []
for o in gb + columns:
if o not in groupby:
groupby.append(o)
is_timeseries = self.is_timeseries
if DTTM_ALIAS in groupby:
groupby.remove(DTTM_ALIAS)
is_timeseries = True
# extra_filters are temporary/contextual filters that are external
# to the slice definition. We use those for dynamic interactive
# filters like the ones emitted by the "Filter Box" visualization
extra_filters = self.get_extra_filters()
granularity = (
form_data.get("granularity") or form_data.get("granularity_sqla")
)
limit = int(form_data.get("limit") or 0)
timeseries_limit_metric = form_data.get("timeseries_limit_metric")
row_limit = int(
form_data.get("row_limit") or config.get("ROW_LIMIT"))
# __from and __to are special extra_filters that target time
# boundaries. The rest of extra_filters are simple
# [column_name in list_of_values]. `__` prefix is there to avoid
# potential conflicts with column that would be named `from` or `to`
since = (
extra_filters.get('__from') or
form_data.get("since") or ''
)
# Backward compatibility hack
since_words = since.split(' ')
if (
len(since_words) == 2 and
since_words[1] in ['days', 'years', 'hours', 'day', 'year']):
since += ' ago'
from_dttm = utils.parse_human_datetime(since)
until = extra_filters.get('__to') or form_data.get("until", "now")
to_dttm = utils.parse_human_datetime(until)
if from_dttm and to_dttm and from_dttm > to_dttm:
raise Exception(_("From date cannot be larger than to date"))
# extras are used to query elements specific to a datasource type
# for instance the extra where clause that applies only to Tables
extras = {
'where': form_data.get("where", ''),
'having': form_data.get("having", ''),
'having_druid': form_data.get('having_filters') \
if 'having_filters' in form_data else [],
'time_grain_sqla': form_data.get("time_grain_sqla", ''),
'druid_time_origin': form_data.get("druid_time_origin", ''),
}
filters = form_data['filters'] if 'filters' in form_data \
else []
for col, vals in self.get_extra_filters().items():
if not (col and vals) or col.startswith('__'):
continue
elif col in self.datasource.filterable_column_names:
# Quote values with comma to avoid conflict
filters += [{
'col': col,
'op': 'in',
'val': vals,
}]
d = {
'granularity': granularity,
'from_dttm': from_dttm,
'to_dttm': to_dttm,
'is_timeseries': is_timeseries,
'groupby': groupby,
'metrics': metrics,
'row_limit': row_limit,
'filter': filters,
'timeseries_limit': limit,
'extras': extras,
'timeseries_limit_metric': timeseries_limit_metric,
'form_data': form_data,
}
return d
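    # Illustrative (hypothetical) input: form_data={'groupby': ['country'],
    # 'metrics': ['count'], 'since': '7 days ago', 'until': 'now'} yields a
    # query object with groupby=['country'], metrics=['count'] and from/to
    # datetimes produced by utils.parse_human_datetime.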
@property
def cache_timeout(self):
if self.form_data.get('cache_timeout'):
return int(self.form_data.get('cache_timeout'))
if self.datasource.cache_timeout:
return self.datasource.cache_timeout
if (
hasattr(self.datasource, 'database') and
self.datasource.database.cache_timeout):
return self.datasource.database.cache_timeout
return config.get("CACHE_DEFAULT_TIMEOUT")
def get_json(self, force=False):
return json.dumps(
self.get_payload(force),
default=utils.json_int_dttm_ser, ignore_nan=True)
@property
def cache_key(self):
s = str([(k, self.form_data[k]) for k in sorted(self.form_data.keys())])
return hashlib.md5(s.encode('utf-8')).hexdigest()
def get_payload(self, force=False):
"""Handles caching around the json payload retrieval"""
cache_key = self.cache_key
payload = None
force = force if force else self.form_data.get('force') == 'true'
if not force and cache:
payload = cache.get(cache_key)
if payload:
stats_logger.incr('loaded_from_cache')
is_cached = True
try:
cached_data = zlib.decompress(payload)
if PY3:
cached_data = cached_data.decode('utf-8')
payload = json.loads(cached_data)
except Exception as e:
logging.error("Error reading cache: " +
utils.error_msg_from_exception(e))
payload = None
logging.info("Serving from cache")
if not payload:
stats_logger.incr('loaded_from_source')
data = None
is_cached = False
cache_timeout = self.cache_timeout
stacktrace = None
try:
df = self.get_df()
if not self.error_message:
data = self.get_data(df)
except Exception as e:
logging.exception(e)
if not self.error_message:
self.error_message = str(e)
self.status = utils.QueryStatus.FAILED
data = None
stacktrace = traceback.format_exc()
payload = {
'cache_key': cache_key,
'cache_timeout': cache_timeout,
'data': data,
'error': self.error_message,
'form_data': self.form_data,
'query': self.query,
'status': self.status,
'stacktrace': stacktrace,
}
payload['cached_dttm'] = datetime.utcnow().isoformat().split('.')[0]
logging.info("Caching for the next {} seconds".format(
cache_timeout))
data = self.json_dumps(payload)
if PY3:
data = bytes(data, 'utf-8')
if cache and self.status != utils.QueryStatus.FAILED:
try:
cache.set(
cache_key,
zlib.compress(data),
timeout=cache_timeout)
except Exception as e:
# cache.set call can fail if the backend is down or if
# the key is too large or whatever other reasons
logging.warning("Could not cache key {}".format(cache_key))
logging.exception(e)
cache.delete(cache_key)
payload['is_cached'] = is_cached
return payload
def json_dumps(self, obj):
return json.dumps(obj, default=utils.json_int_dttm_ser, ignore_nan=True)
@property
def data(self):
"""This is the data object serialized to the js layer"""
content = {
'form_data': self.form_data,
'token': self.token,
'viz_name': self.viz_type,
'filter_select_enabled': self.datasource.filter_select_enabled,
}
return content
def get_csv(self):
df = self.get_df()
include_index = not isinstance(df.index, pd.RangeIndex)
return df.to_csv(index=include_index, encoding="utf-8")
def get_data(self, df):
return []
@property
def json_data(self):
return json.dumps(self.data)
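# --- Illustrative sketch (not part of Superset): how BaseViz.cache_key above is
# derived. The key is the md5 hex digest of the form_data items sorted by key, so two
# slices with identical form_data hit the same cache entry. The helper and its example
# form_data are hypothetical and never called by this module; it relies on the
# module-level hashlib import.
def _demo_cache_key(form_data):
    """Mirror of the cache_key property for a plain dict."""
    s = str([(k, form_data[k]) for k in sorted(form_data.keys())])
    return hashlib.md5(s.encode('utf-8')).hexdigest()
# _demo_cache_key({'viz_type': 'table', 'metrics': ['count']}) always yields the same
# 32-character hex string for equal dicts, which is what makes caching deterministic.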
class TableViz(BaseViz):
"""A basic html table that is sortable and searchable"""
viz_type = "table"
verbose_name = _("Table View")
credits = 'a <a href="https://github.com/airbnb/superset">Superset</a> original'
is_timeseries = False
def should_be_timeseries(self):
fd = self.form_data
# TODO handle datasource-type-specific code in datasource
conditions_met = (
(fd.get('granularity') and fd.get('granularity') != 'all') or
(fd.get('granularity_sqla') and fd.get('time_grain_sqla'))
)
if fd.get('include_time') and not conditions_met:
raise Exception(_(
"Pick a granularity in the Time section or "
"uncheck 'Include Time'"))
return fd.get('include_time')
def query_obj(self):
d = super(TableViz, self).query_obj()
fd = self.form_data
if fd.get('all_columns') and (fd.get('groupby') or fd.get('metrics')):
raise Exception(_(
"Choose either fields to [Group By] and [Metrics] or "
"[Columns], not both"))
if fd.get('all_columns'):
d['columns'] = fd.get('all_columns')
d['groupby'] = []
order_by_cols = fd.get('order_by_cols') or []
d['orderby'] = [json.loads(t) for t in order_by_cols]
d['is_timeseries'] = self.should_be_timeseries()
return d
def get_data(self, df):
if not self.should_be_timeseries() and DTTM_ALIAS in df:
del df[DTTM_ALIAS]
return dict(
records=df.to_dict(orient="records"),
columns=list(df.columns),
)
def json_dumps(self, obj):
if self.form_data.get('all_columns'):
return json.dumps(obj, default=utils.json_iso_dttm_ser)
else:
return super(TableViz, self).json_dumps(obj)
class PivotTableViz(BaseViz):
"""A pivot table view, define your rows, columns and metrics"""
viz_type = "pivot_table"
verbose_name = _("Pivot Table")
credits = 'a <a href="https://github.com/airbnb/superset">Superset</a> original'
is_timeseries = False
def query_obj(self):
d = super(PivotTableViz, self).query_obj()
groupby = self.form_data.get('groupby')
columns = self.form_data.get('columns')
metrics = self.form_data.get('metrics')
if not columns:
columns = []
if not groupby:
groupby = []
if not groupby:
raise Exception(_("Please choose at least one \"Group by\" field "))
if not metrics:
raise Exception(_("Please choose at least one metric"))
if (
any(v in groupby for v in columns) or
any(v in columns for v in groupby)):
raise Exception(_("'Group By' and 'Columns' can't overlap"))
return d
def get_data(self, df):
if (
self.form_data.get("granularity") == "all" and
DTTM_ALIAS in df):
del df[DTTM_ALIAS]
df = df.pivot_table(
index=self.form_data.get('groupby'),
columns=self.form_data.get('columns'),
values=self.form_data.get('metrics'),
aggfunc=self.form_data.get('pandas_aggfunc'),
margins=self.form_data.get('pivot_margins'),
)
# Display metrics side by side with each column
if self.form_data.get('combine_metric'):
df = df.stack(0).unstack()
return dict(
columns=list(df.columns),
html=df.to_html(
na_rep='',
classes=(
"dataframe table table-striped table-bordered "
"table-condensed table-hover").split(" ")),
)
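# --- Illustrative sketch (not part of Superset): the `combine_metric` branch in
# PivotTableViz.get_data above. pivot_table produces a (metric, column) MultiIndex on
# the columns; stack(0) followed by unstack() swaps the two levels so each column
# value lists its metrics side by side. The tiny frame is hypothetical, relies on the
# module-level pandas import (`pd`), and the helper is never called by the module.
def _demo_combine_metric():
    columns = pd.MultiIndex.from_tuples([
        ('sum__num', 'boy'), ('sum__num', 'girl'),
        ('avg__num', 'boy'), ('avg__num', 'girl')])
    df = pd.DataFrame([[1, 3, 5, 7], [2, 4, 6, 8]],
                      index=['CA', 'NY'], columns=columns)
    # Before: columns grouped by metric; after: columns grouped by 'boy'/'girl',
    # each holding both 'sum__num' and 'avg__num'.
    return df.stack(0).unstack()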
class MarkupViz(BaseViz):
"""Use html or markdown to create a free form widget"""
viz_type = "markup"
verbose_name = _("Markup")
is_timeseries = False
def get_df(self):
return True
def get_data(self, df):
markup_type = self.form_data.get("markup_type")
code = self.form_data.get("code", '')
if markup_type == "markdown":
code = markdown(code)
return dict(html=code, theme_css=get_manifest_file('theme.css'))
class SeparatorViz(MarkupViz):
"""Use to create section headers in a dashboard, similar to `Markup`"""
viz_type = "separator"
verbose_name = _("Separator")
class WordCloudViz(BaseViz):
"""Build a colorful word cloud
Uses the nice library at:
https://github.com/jasondavies/d3-cloud
"""
viz_type = "word_cloud"
verbose_name = _("Word Cloud")
is_timeseries = False
def query_obj(self):
d = super(WordCloudViz, self).query_obj()
d['metrics'] = [self.form_data.get('metric')]
d['groupby'] = [self.form_data.get('series')]
return d
def get_data(self, df):
# Ordering the columns
df = df[[self.form_data.get('series'), self.form_data.get('metric')]]
# Labeling the columns for uniform json schema
df.columns = ['text', 'size']
return df.to_dict(orient="records")
class TreemapViz(BaseViz):
"""Tree map visualisation for hierarchical data."""
viz_type = "treemap"
verbose_name = _("Treemap")
credits = '<a href="https://d3js.org">d3.js</a>'
is_timeseries = False
def _nest(self, metric, df):
nlevels = df.index.nlevels
if nlevels == 1:
result = [{"name": n, "value": v}
for n, v in zip(df.index, df[metric])]
else:
result = [{"name": l, "children": self._nest(metric, df.loc[l])}
for l in df.index.levels[0]]
return result
def get_data(self, df):
df = df.set_index(self.form_data.get("groupby"))
chart_data = [{"name": metric, "children": self._nest(metric, df)}
for metric in df.columns]
return chart_data
class CalHeatmapViz(BaseViz):
"""Calendar heatmap."""
viz_type = "cal_heatmap"
verbose_name = _("Calendar Heatmap")
credits = (
'<a href=https://github.com/wa0x6e/cal-heatmap>cal-heatmap</a>')
is_timeseries = True
def get_data(self, df):
form_data = self.form_data
df.columns = ["timestamp", "metric"]
timestamps = {str(obj["timestamp"].value / 10**9):
obj.get("metric") for obj in df.to_dict("records")}
start = utils.parse_human_datetime(form_data.get("since"))
end = utils.parse_human_datetime(form_data.get("until"))
domain = form_data.get("domain_granularity")
diff_delta = rdelta.relativedelta(end, start)
diff_secs = (end - start).total_seconds()
if domain == "year":
range_ = diff_delta.years + 1
elif domain == "month":
range_ = diff_delta.years * 12 + diff_delta.months + 1
elif domain == "week":
range_ = diff_delta.years * 53 + diff_delta.weeks + 1
elif domain == "day":
range_ = diff_secs // (24*60*60) + 1
else:
range_ = diff_secs // (60*60) + 1
return {
"timestamps": timestamps,
"start": start,
"domain": domain,
"subdomain": form_data.get("subdomain_granularity"),
"range": range_,
}
def query_obj(self):
qry = super(CalHeatmapViz, self).query_obj()
qry["metrics"] = [self.form_data["metric"]]
return qry
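# --- Illustrative sketch (not part of Superset): how CalHeatmapViz.get_data above
# sizes the calendar. "range" is the number of domain units spanned by [since, until]
# plus one, e.g. three calendar months apart renders four month panels. Relies on the
# module-level `rdelta` and `datetime` imports; the helper is never called by the
# module.
def _demo_cal_heatmap_month_range(start, end):
    diff = rdelta.relativedelta(end, start)
    return diff.years * 12 + diff.months + 1
# _demo_cal_heatmap_month_range(datetime(2017, 1, 1), datetime(2017, 4, 1)) == 4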
class NVD3Viz(BaseViz):
"""Base class for all nvd3 vizs"""
credits = '<a href="http://nvd3.org/">NVD3.org</a>'
viz_type = None
verbose_name = "Base NVD3 Viz"
is_timeseries = False
class BoxPlotViz(NVD3Viz):
"""Box plot viz from ND3"""
viz_type = "box_plot"
verbose_name = _("Box Plot")
sort_series = False
is_timeseries = True
def to_series(self, df, classed='', title_suffix=''):
label_sep = " - "
chart_data = []
for index_value, row in zip(df.index, df.to_dict(orient="records")):
if isinstance(index_value, tuple):
index_value = label_sep.join(index_value)
boxes = defaultdict(dict)
for (label, key), value in row.items():
if key == "median":
key = "Q2"
boxes[label][key] = value
for label, box in boxes.items():
if len(self.form_data.get("metrics")) > 1:
# need to render data labels with metrics
chart_label = label_sep.join([index_value, label])
else:
chart_label = index_value
chart_data.append({
"label": chart_label,
"values": box,
})
return chart_data
def get_data(self, df):
form_data = self.form_data
df = df.fillna(0)
# conform to NVD3 names
def Q1(series): # need to be named functions - can't use lambdas
return np.percentile(series, 25)
def Q3(series):
return np.percentile(series, 75)
whisker_type = form_data.get('whisker_options')
if whisker_type == "Tukey":
def whisker_high(series):
upper_outer_lim = Q3(series) + 1.5 * (Q3(series) - Q1(series))
series = series[series <= upper_outer_lim]
return series[np.abs(series - upper_outer_lim).argmin()]
def whisker_low(series):
lower_outer_lim = Q1(series) - 1.5 * (Q3(series) - Q1(series))
# find the closest value above the lower outer limit
series = series[series >= lower_outer_lim]
return series[np.abs(series - lower_outer_lim).argmin()]
elif whisker_type == "Min/max (no outliers)":
def whisker_high(series):
return series.max()
def whisker_low(series):
return series.min()
elif " percentiles" in whisker_type:
low, high = whisker_type.replace(" percentiles", "").split("/")
def whisker_high(series):
return np.percentile(series, int(high))
def whisker_low(series):
return np.percentile(series, int(low))
else:
raise ValueError("Unknown whisker type: {}".format(whisker_type))
def outliers(series):
above = series[series > whisker_high(series)]
below = series[series < whisker_low(series)]
# pandas sometimes doesn't like getting lists back here
return set(above.tolist() + below.tolist())
aggregate = [Q1, np.median, Q3, whisker_high, whisker_low, outliers]
df = df.groupby(form_data.get('groupby')).agg(aggregate)
chart_data = self.to_series(df)
return chart_data
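# --- Illustrative sketch (not part of Superset): the "Tukey" whisker option handled
# in BoxPlotViz.get_data above. The whiskers are the data points closest to
# Q3 + 1.5*IQR and Q1 - 1.5*IQR from inside those limits, so an outlier such as 100
# below is excluded. Relies on the module-level numpy import (`np`); the helper is
# never called by the module.
def _demo_tukey_whiskers(values):
    q1, q3 = np.percentile(values, 25), np.percentile(values, 75)
    iqr = q3 - q1
    arr = np.array(values, dtype=float)
    high = arr[arr <= q3 + 1.5 * iqr].max()
    low = arr[arr >= q1 - 1.5 * iqr].min()
    return low, high
# _demo_tukey_whiskers([1, 2, 3, 4, 100]) == (1.0, 4.0)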
class BubbleViz(NVD3Viz):
"""Based on the NVD3 bubble chart"""
viz_type = "bubble"
verbose_name = _("Bubble Chart")
is_timeseries = False
def query_obj(self):
form_data = self.form_data
d = super(BubbleViz, self).query_obj()
d['groupby'] = [
form_data.get('entity')
]
if form_data.get('series'):
d['groupby'].append(form_data.get('series'))
self.x_metric = form_data.get('x')
self.y_metric = form_data.get('y')
self.z_metric = form_data.get('size')
self.entity = form_data.get('entity')
self.series = form_data.get('series') or self.entity
d['row_limit'] = form_data.get('limit')
d['metrics'] = [
self.z_metric,
self.x_metric,
self.y_metric,
]
if not all(d['metrics'] + [self.entity]):
raise Exception(_("Pick a metric for x, y and size"))
return d
def get_data(self, df):
df['x'] = df[[self.x_metric]]
df['y'] = df[[self.y_metric]]
df['size'] = df[[self.z_metric]]
df['shape'] = 'circle'
df['group'] = df[[self.series]]
series = defaultdict(list)
for row in df.to_dict(orient='records'):
series[row['group']].append(row)
chart_data = []
for k, v in series.items():
chart_data.append({
'key': k,
'values': v})
return chart_data
class BulletViz(NVD3Viz):
"""Based on the NVD3 bullet chart"""
viz_type = "bullet"
verbose_name = _("Bullet Chart")
is_timeseries = False
def query_obj(self):
form_data = self.form_data
d = super(BulletViz, self).query_obj()
self.metric = form_data.get('metric')
def as_strings(field):
value = form_data.get(field)
return value.split(',') if value else []
def as_floats(field):
return [float(x) for x in as_strings(field)]
self.ranges = as_floats('ranges')
self.range_labels = as_strings('range_labels')
self.markers = as_floats('markers')
self.marker_labels = as_strings('marker_labels')
self.marker_lines = as_floats('marker_lines')
self.marker_line_labels = as_strings('marker_line_labels')
d['metrics'] = [
self.metric,
]
if not self.metric:
raise Exception(_("Pick a metric to display"))
return d
def get_data(self, df):
df = df.fillna(0)
df['metric'] = df[[self.metric]]
values = df['metric'].values
return {
'measures': values.tolist(),
'ranges': self.ranges or [0, values.max() * 1.1],
'rangeLabels': self.range_labels or None,
'markers': self.markers or None,
'markerLabels': self.marker_labels or None,
'markerLines': self.marker_lines or None,
'markerLineLabels': self.marker_line_labels or None,
}
class BigNumberViz(BaseViz):
"""Put emphasis on a single metric with this big number viz"""
viz_type = "big_number"
verbose_name = _("Big Number with Trendline")
credits = 'a <a href="https://github.com/airbnb/superset">Superset</a> original'
is_timeseries = True
def query_obj(self):
d = super(BigNumberViz, self).query_obj()
metric = self.form_data.get('metric')
if not metric:
raise Exception(_("Pick a metric!"))
d['metrics'] = [self.form_data.get('metric')]
self.form_data['metric'] = metric
return d
def get_data(self, df):
form_data = self.form_data
df.sort_values(by=df.columns[0], inplace=True)
compare_lag = form_data.get("compare_lag")
return {
'data': df.values.tolist(),
'compare_lag': compare_lag,
'compare_suffix': form_data.get('compare_suffix', ''),
}
class BigNumberTotalViz(BaseViz):
"""Put emphasis on a single metric with this big number viz"""
viz_type = "big_number_total"
verbose_name = _("Big Number")
credits = 'a <a href="https://github.com/airbnb/superset">Superset</a> original'
is_timeseries = False
def query_obj(self):
d = super(BigNumberTotalViz, self).query_obj()
metric = self.form_data.get('metric')
if not metric:
raise Exception(_("Pick a metric!"))
d['metrics'] = [self.form_data.get('metric')]
self.form_data['metric'] = metric
return d
def get_data(self, df):
form_data = self.form_data
df.sort_values(by=df.columns[0], inplace=True)
return {
'data': df.values.tolist(),
'subheader': form_data.get('subheader', ''),
}
class NVD3TimeSeriesViz(NVD3Viz):
"""A rich line chart component with tons of options"""
viz_type = "line"
verbose_name = _("Time Series - Line Chart")
sort_series = False
is_timeseries = True
def to_series(self, df, classed='', title_suffix=''):
cols = []
for col in df.columns:
if col == '':
cols.append('N/A')
elif col is None:
cols.append('NULL')
else:
cols.append(col)
df.columns = cols
series = df.to_dict('series')
chart_data = []
for name in df.T.index.tolist():
ys = series[name]
if df[name].dtype.kind not in "biufc":
continue
if isinstance(name, string_types):
series_title = name
else:
name = ["{}".format(s) for s in name]
if len(self.form_data.get('metrics')) > 1:
series_title = ", ".join(name)
else:
series_title = ", ".join(name[1:])
if title_suffix:
series_title += title_suffix
d = {
"key": series_title,
"classed": classed,
"values": [
{'x': ds, 'y': ys[ds] if ds in ys else None}
for ds in df.index
],
}
chart_data.append(d)
return chart_data
def process_data(self, df):
fd = self.form_data
df = df.fillna(0)
if fd.get("granularity") == "all":
raise Exception(_("Pick a time granularity for your time series"))
df = df.pivot_table(
index=DTTM_ALIAS,
columns=fd.get('groupby'),
values=fd.get('metrics'))
fm = fd.get("resample_fillmethod")
if not fm:
fm = None
how = fd.get("resample_how")
rule = fd.get("resample_rule")
if how and rule:
df = df.resample(rule, how=how, fill_method=fm)
if not fm:
df = df.fillna(0)
if self.sort_series:
dfs = df.sum()
dfs.sort_values(ascending=False, inplace=True)
df = df[dfs.index]
if fd.get("contribution"):
dft = df.T
df = (dft / dft.sum()).T
rolling_type = fd.get("rolling_type")
rolling_periods = int(fd.get("rolling_periods") or 0)
min_periods = int(fd.get("min_periods") or 0)
if rolling_type in ('mean', 'std', 'sum') and rolling_periods:
kwargs = dict(
arg=df,
window=rolling_periods,
min_periods=min_periods)
if rolling_type == 'mean':
df = pd.rolling_mean(**kwargs)
elif rolling_type == 'std':
df = pd.rolling_std(**kwargs)
elif rolling_type == 'sum':
df = pd.rolling_sum(**kwargs)
elif rolling_type == 'cumsum':
df = df.cumsum()
if min_periods:
df = df[min_periods:]
num_period_compare = fd.get("num_period_compare")
if num_period_compare:
num_period_compare = int(num_period_compare)
prt = fd.get('period_ratio_type')
if prt and prt == 'growth':
df = (df / df.shift(num_period_compare)) - 1
elif prt and prt == 'value':
df = df - df.shift(num_period_compare)
else:
df = df / df.shift(num_period_compare)
df = df[num_period_compare:]
return df
def get_data(self, df):
fd = self.form_data
df = self.process_data(df)
chart_data = self.to_series(df)
time_compare = fd.get('time_compare')
if time_compare:
query_object = self.query_obj()
delta = utils.parse_human_timedelta(time_compare)
query_object['inner_from_dttm'] = query_object['from_dttm']
query_object['inner_to_dttm'] = query_object['to_dttm']
query_object['from_dttm'] -= delta
query_object['to_dttm'] -= delta
df2 = self.get_df(query_object)
df2[DTTM_ALIAS] += delta
df2 = self.process_data(df2)
chart_data += self.to_series(
df2, classed='superset', title_suffix="---")
chart_data = sorted(chart_data, key=lambda x: x['key'])
return chart_data
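# --- Illustrative sketch (not part of Superset): the `num_period_compare` handling in
# NVD3TimeSeriesViz.process_data above. With period_ratio_type "growth" every point
# becomes its relative change versus the value n periods earlier, and the first n rows
# (which have no baseline) are dropped. The frame below is hypothetical, relies on the
# module-level pandas import (`pd`), and the helper is never called by the module.
def _demo_period_growth(n=1):
    df = pd.DataFrame({'metric': [100.0, 110.0, 121.0]})
    growth = (df / df.shift(n)) - 1
    return growth[n:]
# Yields 0.10 for both remaining rows: each value is 10% above the previous period.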
class NVD3DualLineViz(NVD3Viz):
"""A rich line chart with dual axis"""
viz_type = "dual_line"
verbose_name = _("Time Series - Dual Axis Line Chart")
sort_series = False
is_timeseries = True
def query_obj(self):
d = super(NVD3DualLineViz, self).query_obj()
m1 = self.form_data.get('metric')
m2 = self.form_data.get('metric_2')
d['metrics'] = [m1, m2]
if not m1:
raise Exception(_("Pick a metric for left axis!"))
if not m2:
raise Exception(_("Pick a metric for right axis!"))
if m1 == m2:
raise Exception(_("Please choose different metrics"
" on left and right axis"))
return d
def to_series(self, df, classed=''):
cols = []
for col in df.columns:
if col == '':
cols.append('N/A')
elif col is None:
cols.append('NULL')
else:
cols.append(col)
df.columns = cols
series = df.to_dict('series')
chart_data = []
metrics = [
self.form_data.get('metric'),
self.form_data.get('metric_2')
]
for i, m in enumerate(metrics):
ys = series[m]
if df[m].dtype.kind not in "biufc":
continue
series_title = m
d = {
"key": series_title,
"classed": classed,
"values": [
{'x': ds, 'y': ys[ds] if ds in ys else None}
for ds in df.index
],
"yAxis": i+1,
"type": "line"
}
chart_data.append(d)
return chart_data
def get_data(self, df):
fd = self.form_data
df = df.fillna(0)
if self.form_data.get("granularity") == "all":
raise Exception(_("Pick a time granularity for your time series"))
metric = fd.get('metric')
metric_2 = fd.get('metric_2')
df = df.pivot_table(
index=DTTM_ALIAS,
values=[metric, metric_2])
chart_data = self.to_series(df)
return chart_data
class NVD3TimeSeriesBarViz(NVD3TimeSeriesViz):
"""A bar chart where the x axis is time"""
viz_type = "bar"
sort_series = True
verbose_name = _("Time Series - Bar Chart")
class NVD3CompareTimeSeriesViz(NVD3TimeSeriesViz):
"""A line chart component where you can compare the % change over time"""
viz_type = 'compare'
verbose_name = _("Time Series - Percent Change")
class NVD3TimeSeriesStackedViz(NVD3TimeSeriesViz):
"""A rich stack area chart"""
viz_type = "area"
verbose_name = _("Time Series - Stacked")
sort_series = True
class DistributionPieViz(NVD3Viz):
"""Annoy visualization snobs with this controversial pie chart"""
viz_type = "pie"
verbose_name = _("Distribution - NVD3 - Pie Chart")
is_timeseries = False
def get_data(self, df):
df = df.pivot_table(
index=self.groupby,
values=[self.metrics[0]])
df.sort_values(by=self.metrics[0], ascending=False, inplace=True)
df = df.reset_index()
df.columns = ['x', 'y']
return df.to_dict(orient="records")
class HistogramViz(BaseViz):
"""Histogram"""
viz_type = "histogram"
verbose_name = _("Histogram")
is_timeseries = False
def query_obj(self):
"""Returns the query object for this visualization"""
d = super(HistogramViz, self).query_obj()
d['row_limit'] = self.form_data.get(
'row_limit', int(config.get('VIZ_ROW_LIMIT')))
numeric_column = self.form_data.get('all_columns_x')
if numeric_column is None:
raise Exception(_("Must have one numeric column specified"))
d['columns'] = [numeric_column]
return d
def get_data(self, df):
"""Returns the chart data"""
chart_data = df[df.columns[0]].values.tolist()
return chart_data
class DistributionBarViz(DistributionPieViz):
"""A good old bar chart"""
viz_type = "dist_bar"
verbose_name = _("Distribution - Bar Chart")
is_timeseries = False
def query_obj(self):
d = super(DistributionBarViz, self).query_obj() # noqa
fd = self.form_data
if (
len(d['groupby']) <
len(fd.get('groupby') or []) + len(fd.get('columns') or [])
):
raise Exception(
_("Can't have overlap between Series and Breakdowns"))
if not fd.get('metrics'):
raise Exception(_("Pick at least one metric"))
if not fd.get('groupby'):
raise Exception(_("Pick at least one field for [Series]"))
return d
def get_data(self, df):
fd = self.form_data
row = df.groupby(self.groupby).sum()[self.metrics[0]].copy()
row.sort_values(ascending=False, inplace=True)
columns = fd.get('columns') or []
pt = df.pivot_table(
index=self.groupby,
columns=columns,
values=self.metrics)
if fd.get("contribution"):
pt = pt.fillna(0)
pt = pt.T
pt = (pt / pt.sum()).T
pt = pt.reindex(row.index)
chart_data = []
for name, ys in pt.iteritems():
if pt[name].dtype.kind not in "biufc" or name in self.groupby:
continue
if isinstance(name, string_types):
series_title = name
elif len(self.metrics) > 1:
series_title = ", ".join(name)
else:
l = [str(s) for s in name[1:]]
series_title = ", ".join(l)
values = []
for i, v in ys.iteritems():
x = i
if isinstance(x, (tuple, list)):
x = ', '.join([str(s) for s in x])
else:
x = str(x)
values.append({
'x': x,
'y': v,
})
d = {
"key": series_title,
"values": values,
}
chart_data.append(d)
return chart_data
class SunburstViz(BaseViz):
"""A multi level sunburst chart"""
viz_type = "sunburst"
verbose_name = _("Sunburst")
is_timeseries = False
credits = (
'Kerry Rodden '
'@<a href="https://bl.ocks.org/kerryrodden/7090426">bl.ocks.org</a>')
def get_data(self, df):
# if m1 == m2 duplicate the metric column
cols = self.form_data.get('groupby')
metric = self.form_data.get('metric')
secondary_metric = self.form_data.get('secondary_metric')
if metric == secondary_metric:
ndf = df
ndf.columns = [cols + ['m1', 'm2']]
else:
cols += [
self.form_data['metric'], self.form_data['secondary_metric']]
ndf = df[cols]
return json.loads(ndf.to_json(orient="values")) # TODO fix this nonsense
def query_obj(self):
qry = super(SunburstViz, self).query_obj()
qry['metrics'] = [
self.form_data['metric'], self.form_data['secondary_metric']]
return qry
class SankeyViz(BaseViz):
"""A Sankey diagram that requires a parent-child dataset"""
viz_type = "sankey"
verbose_name = _("Sankey")
is_timeseries = False
credits = '<a href="https://www.npmjs.com/package/d3-sankey">d3-sankey on npm</a>'
def query_obj(self):
qry = super(SankeyViz, self).query_obj()
if len(qry['groupby']) != 2:
raise Exception(_("Pick exactly 2 columns as [Source / Target]"))
qry['metrics'] = [
self.form_data['metric']]
return qry
def get_data(self, df):
df.columns = ['source', 'target', 'value']
recs = df.to_dict(orient='records')
hierarchy = defaultdict(set)
for row in recs:
hierarchy[row['source']].add(row['target'])
def find_cycle(g):
"""Whether there's a cycle in a directed graph"""
path = set()
def visit(vertex):
path.add(vertex)
for neighbour in g.get(vertex, ()):
if neighbour in path or visit(neighbour):
return (vertex, neighbour)
path.remove(vertex)
for v in g:
cycle = visit(v)
if cycle:
return cycle
cycle = find_cycle(hierarchy)
if cycle:
raise Exception(_(
"There's a loop in your Sankey, please provide a tree. "
"Here's a faulty link: {}").format(cycle))
return recs
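# --- Illustrative sketch (not part of Superset): the cycle check used by
# SankeyViz.get_data above. The source/target pairs are turned into an adjacency
# mapping and a depth-first walk reports the first edge that closes a loop. The edges
# below are hypothetical; the helper relies on the module-level defaultdict import and
# is never called by the module.
def _demo_find_sankey_cycle():
    hierarchy = defaultdict(set)
    for source, target in [('a', 'b'), ('b', 'c'), ('c', 'a')]:
        hierarchy[source].add(target)
    path = set()
    def visit(vertex):
        path.add(vertex)
        for neighbour in hierarchy.get(vertex, ()):
            if neighbour in path or visit(neighbour):
                return (vertex, neighbour)
        path.remove(vertex)
    for v in list(hierarchy):
        cycle = visit(v)
        if cycle:
            return cycle  # here ('a', 'b'): an edge that lies on the loop
    return None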
class DirectedForceViz(BaseViz):
"""An animated directed force layout graph visualization"""
viz_type = "directed_force"
verbose_name = _("Directed Force Layout")
credits = 'd3noob @<a href="http://bl.ocks.org/d3noob/5141278">bl.ocks.org</a>'
is_timeseries = False
def query_obj(self):
qry = super(DirectedForceViz, self).query_obj()
if len(self.form_data['groupby']) != 2:
raise Exception(_("Pick exactly 2 columns to 'Group By'"))
qry['metrics'] = [self.form_data['metric']]
return qry
def get_data(self, df):
df.columns = ['source', 'target', 'value']
return df.to_dict(orient='records')
class ChordViz(BaseViz):
"""A Chord diagram"""
viz_type = "chord"
verbose_name = _("Directed Force Layout")
credits = '<a href="https://github.com/d3/d3-chord">Bostock</a>'
is_timeseries = False
def query_obj(self):
qry = super(ChordViz, self).query_obj()
fd = self.form_data
qry['groupby'] = [fd.get('groupby'), fd.get('columns')]
qry['metrics'] = [fd.get('metric')]
return qry
def get_data(self, df):
df.columns = ['source', 'target', 'value']
# Preparing a symmetrical matrix like d3.chord calls for
nodes = list(set(df['source']) | set(df['target']))
matrix = {}
for source, target in product(nodes, nodes):
matrix[(source, target)] = 0
for source, target, value in df.to_records(index=False):
matrix[(source, target)] = value
m = [[matrix[(n1, n2)] for n1 in nodes] for n2 in nodes]
return {
'nodes': list(nodes),
'matrix': m,
}
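# --- Illustrative sketch (not part of Superset): the symmetrical matrix built by
# ChordViz.get_data above. Every (source, target) pair gets a cell, missing pairs stay
# at zero, and the nested list is ordered row-by-row over the same node list d3.chord
# expects. The records below are hypothetical; the helper relies on the module-level
# `product` import and is never called by the module.
def _demo_chord_matrix():
    records = [('a', 'b', 5), ('b', 'a', 2)]
    nodes = sorted({s for s, _, _ in records} | {t for _, t, _ in records})
    matrix = {pair: 0 for pair in product(nodes, nodes)}
    for source, target, value in records:
        matrix[(source, target)] = value
    return [[matrix[(n1, n2)] for n1 in nodes] for n2 in nodes]
# Returns [[0, 2], [5, 0]] for nodes ['a', 'b']: each row is a target node and holds
# the flows into it, because the inner comprehension iterates sources (n1) for a
# fixed target row (n2).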
class CountryMapViz(BaseViz):
"""A country centric"""
viz_type = "country_map"
verbose_name = _("Country Map")
is_timeseries = False
credits = 'From bl.ocks.org By john-guerra'
def query_obj(self):
qry = super(CountryMapViz, self).query_obj()
qry['metrics'] = [
self.form_data['metric']]
qry['groupby'] = [self.form_data['entity']]
return qry
def get_data(self, df):
from superset.data import countries
fd = self.form_data
cols = [fd.get('entity')]
metric = fd.get('metric')
cols += [metric]
ndf = df[cols]
df = ndf
df.columns = ['country_id', 'metric']
d = df.to_dict(orient='records')
return d
class WorldMapViz(BaseViz):
"""A country centric world map"""
viz_type = "world_map"
verbose_name = _("World Map")
is_timeseries = False
credits = 'datamaps on <a href="https://www.npmjs.com/package/datamaps">npm</a>'
def query_obj(self):
qry = super(WorldMapViz, self).query_obj()
qry['metrics'] = [
self.form_data['metric'], self.form_data['secondary_metric']]
qry['groupby'] = [self.form_data['entity']]
return qry
def get_data(self, df):
from superset.data import countries
fd = self.form_data
cols = [fd.get('entity')]
metric = fd.get('metric')
secondary_metric = fd.get('secondary_metric')
if metric == secondary_metric:
ndf = df[cols]
# df[metric] will be a DataFrame
# because there are duplicate column names
ndf['m1'] = df[metric].iloc[:, 0]
ndf['m2'] = ndf['m1']
else:
cols += [metric, secondary_metric]
ndf = df[cols]
df = ndf
df.columns = ['country', 'm1', 'm2']
d = df.to_dict(orient='records')
for row in d:
country = None
if isinstance(row['country'], string_types):
country = countries.get(
fd.get('country_fieldtype'), row['country'])
if country:
row['country'] = country['cca3']
row['latitude'] = country['lat']
row['longitude'] = country['lng']
row['name'] = country['name']
else:
row['country'] = "XXX"
return d
class FilterBoxViz(BaseViz):
"""A multi filter, multi-choice filter box to make dashboards interactive"""
viz_type = "filter_box"
verbose_name = _("Filters")
is_timeseries = False
credits = 'a <a href="https://github.com/airbnb/superset">Superset</a> original'
def query_obj(self):
qry = super(FilterBoxViz, self).query_obj()
groupby = self.form_data.get('groupby')
if len(groupby) < 1 and not self.form_data.get('date_filter'):
raise Exception(_("Pick at least one filter field"))
qry['metrics'] = [
self.form_data['metric']]
return qry
def get_data(self, df):
qry = self.query_obj()
filters = [g for g in self.form_data['groupby']]
d = {}
for flt in filters:
qry['groupby'] = [flt]
df = super(FilterBoxViz, self).get_df(qry)
d[flt] = [{
'id': row[0],
'text': row[0],
'filter': flt,
'metric': row[1]}
for row in df.itertuples(index=False)
]
return d
class IFrameViz(BaseViz):
"""You can squeeze just about anything in this iFrame component"""
viz_type = "iframe"
verbose_name = _("iFrame")
credits = 'a <a href="https://github.com/airbnb/superset">Superset</a> original'
is_timeseries = False
def get_df(self):
return None
class ParallelCoordinatesViz(BaseViz):
"""Interactive parallel coordinate implementation
Uses this amazing javascript library
https://github.com/syntagmatic/parallel-coordinates
"""
viz_type = "para"
verbose_name = _("Parallel Coordinates")
credits = (
'<a href="https://syntagmatic.github.io/parallel-coordinates/">'
'Syntagmatic\'s library</a>')
is_timeseries = False
def query_obj(self):
d = super(ParallelCoordinatesViz, self).query_obj()
fd = self.form_data
d['metrics'] = copy.copy(fd.get('metrics'))
second = fd.get('secondary_metric')
if second not in d['metrics']:
d['metrics'] += [second]
d['groupby'] = [fd.get('series')]
return d
def get_data(self, df):
return df.to_dict(orient="records")
class HeatmapViz(BaseViz):
"""A nice heatmap visualization that support high density through canvas"""
viz_type = "heatmap"
verbose_name = _("Heatmap")
is_timeseries = False
credits = (
'inspired from mbostock @<a href="http://bl.ocks.org/mbostock/3074470">'
'bl.ocks.org</a>')
def query_obj(self):
d = super(HeatmapViz, self).query_obj()
fd = self.form_data
d['metrics'] = [fd.get('metric')]
d['groupby'] = [fd.get('all_columns_x'), fd.get('all_columns_y')]
return d
def get_data(self, df):
fd = self.form_data
x = fd.get('all_columns_x')
y = fd.get('all_columns_y')
v = fd.get('metric')
if x == y:
df.columns = ['x', 'y', 'v']
else:
df = df[[x, y, v]]
df.columns = ['x', 'y', 'v']
norm = fd.get('normalize_across')
overall = False
if norm == 'heatmap':
overall = True
else:
gb = df.groupby(norm, group_keys=False)
if len(gb) <= 1:
overall = True
else:
df['perc'] = (
gb.apply(
lambda x: (x.v - x.v.min()) / (x.v.max() - x.v.min()))
)
if overall:
v = df.v
min_ = v.min()
df['perc'] = (v - min_) / (v.max() - min_)
return df.to_dict(orient="records")
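# --- Illustrative sketch (not part of Superset): the min-max scaling applied by
# HeatmapViz.get_data above. Whether computed per group ("normalize_across" one axis)
# or over the whole heatmap, each value is rescaled to [0, 1] as
# (v - min) / (max - min). The series below is hypothetical, relies on the
# module-level pandas import (`pd`), and the helper is never called by the module.
def _demo_minmax_perc():
    v = pd.Series([2.0, 4.0, 10.0])
    return (v - v.min()) / (v.max() - v.min())
# Yields 0.0, 0.25 and 1.0: the smallest cell maps to 0, the largest to 1.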
class HorizonViz(NVD3TimeSeriesViz):
"""Horizon chart
https://www.npmjs.com/package/d3-horizon-chart
"""
viz_type = "horizon"
verbose_name = _("Horizon Charts")
credits = (
'<a href="https://www.npmjs.com/package/d3-horizon-chart">'
'd3-horizon-chart</a>')
class MapboxViz(BaseViz):
"""Rich maps made with Mapbox"""
viz_type = "mapbox"
verbose_name = _("Mapbox")
is_timeseries = False
credits = (
'<a href=https://www.mapbox.com/mapbox-gl-js/api/>Mapbox GL JS</a>')
def query_obj(self):
d = super(MapboxViz, self).query_obj()
fd = self.form_data
label_col = fd.get('mapbox_label')
if not fd.get('groupby'):
d['columns'] = [fd.get('all_columns_x'), fd.get('all_columns_y')]
if label_col and len(label_col) >= 1:
if label_col[0] == "count":
raise Exception(_(
"Must have a [Group By] column to have 'count' as the [Label]"))
d['columns'].append(label_col[0])
if fd.get('point_radius') != 'Auto':
d['columns'].append(fd.get('point_radius'))
d['columns'] = list(set(d['columns']))
else:
# Ensuring columns chosen are all in group by
if (label_col and len(label_col) >= 1 and
label_col[0] != "count" and
label_col[0] not in fd.get('groupby')):
raise Exception(_(
"Choice of [Label] must be present in [Group By]"))
if (fd.get("point_radius") != "Auto" and
fd.get("point_radius") not in fd.get('groupby')):
raise Exception(_(
"Choice of [Point Radius] must be present in [Group By]"))
if (fd.get('all_columns_x') not in fd.get('groupby') or
fd.get('all_columns_y') not in fd.get('groupby')):
raise Exception(_(
"[Longitude] and [Latitude] columns must be present in [Group By]"))
return d
def get_data(self, df):
fd = self.form_data
label_col = fd.get('mapbox_label')
custom_metric = label_col and len(label_col) >= 1
metric_col = [None] * len(df.index)
if custom_metric:
if label_col[0] == fd.get('all_columns_x'):
metric_col = df[fd.get('all_columns_x')]
elif label_col[0] == fd.get('all_columns_y'):
metric_col = df[fd.get('all_columns_y')]
else:
metric_col = df[label_col[0]]
point_radius_col = (
[None] * len(df.index)
if fd.get("point_radius") == "Auto"
else df[fd.get("point_radius")])
# using geoJSON formatting
geo_json = {
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {
"metric": metric,
"radius": point_radius,
},
"geometry": {
"type": "Point",
"coordinates": [lon, lat],
}
}
for lon, lat, metric, point_radius
in zip(
df[fd.get('all_columns_x')],
df[fd.get('all_columns_y')],
metric_col, point_radius_col)
]
}
return {
"geoJSON": geo_json,
"customMetric": custom_metric,
"mapboxApiKey": config.get('MAPBOX_API_KEY'),
"mapStyle": fd.get("mapbox_style"),
"aggregatorName": fd.get("pandas_aggfunc"),
"clusteringRadius": fd.get("clustering_radius"),
"pointRadiusUnit": fd.get("point_radius_unit"),
"globalOpacity": fd.get("global_opacity"),
"viewportLongitude": fd.get("viewport_longitude"),
"viewportLatitude": fd.get("viewport_latitude"),
"viewportZoom": fd.get("viewport_zoom"),
"renderWhileDragging": fd.get("render_while_dragging"),
"tooltip": fd.get("rich_tooltip"),
"color": fd.get("mapbox_color"),
}
class EventFlowViz(BaseViz):
"""A visualization to explore patterns in event sequences"""
viz_type = "event_flow"
verbose_name = _("Event flow")
credits = 'from <a href="https://github.com/williaster/data-ui">@data-ui</a>'
is_timeseries = True
def query_obj(self):
query = super(EventFlowViz, self).query_obj()
form_data = self.form_data
event_key = form_data.get('all_columns_x')
entity_key = form_data.get('entity')
meta_keys = [
col for col in form_data.get('all_columns') if col != event_key and col != entity_key
]
query['columns'] = [event_key, entity_key] + meta_keys
if form_data['order_by_entity']:
query['orderby'] = [(entity_key, True)]
return query
def get_data(self, df):
return df.to_dict(orient="records")
viz_types_list = [
TableViz,
PivotTableViz,
NVD3TimeSeriesViz,
NVD3DualLineViz,
NVD3CompareTimeSeriesViz,
NVD3TimeSeriesStackedViz,
NVD3TimeSeriesBarViz,
DistributionBarViz,
DistributionPieViz,
BubbleViz,
BulletViz,
MarkupViz,
WordCloudViz,
BigNumberViz,
BigNumberTotalViz,
SunburstViz,
DirectedForceViz,
SankeyViz,
CountryMapViz,
ChordViz,
WorldMapViz,
FilterBoxViz,
IFrameViz,
ParallelCoordinatesViz,
HeatmapViz,
BoxPlotViz,
TreemapViz,
CalHeatmapViz,
HorizonViz,
MapboxViz,
HistogramViz,
SeparatorViz,
EventFlowViz,
]
viz_types = OrderedDict([(v.viz_type, v) for v in viz_types_list
if v.viz_type not in config.get('VIZ_TYPE_BLACKLIST')])
| massgov/incubator-superset | superset/viz.py | Python | apache-2.0 | 54,564 | ["VisIt"] | 6743072fcccf82afba3024be947a8569e9467ec1cab15a5d48ffb7f86d4aff5b |
#!/usr/bin/env python -W ignore
# -*- coding: utf-8 -*-
"""
Gaussian Mixture Model for clustering types of Twitter accts by
using python scikit-learn (sklearn) GaussianMixture class.
(gmm)
refr: https://brilliant.org/wiki/gaussian-mixture-model/
libr: http://scikit-learn.org/stable/modules/generated/sklearn.mixture.GaussianMixture.html
code: https://github.com/scikit-learn/scikit-learn/blob/master/examples/mixture
(preprocessing)
libr: http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.normalize.html
libr: http://scikit-learn.org/stable/modules/preprocessing.html#normalization
(dependencies)
numpy sklearn
(execute)
python gmm.py K /input/to/userengagements.csv /output/to/
"""
import sys
import csv
import numpy as np
from sklearn import preprocessing
from sklearn import mixture
###############################################
'''
Apply Gaussian Mixture Models to the tweet feature-set
for clustering entities
'''
def process_gmm():
K = int(sys.argv[1])
# load all cols except screen_names (0)
#X = np.loadtxt(sys.argv[2], delimiter=',', skiprows=1,
X = np.loadtxt(sys.argv[2], delimiter=',', skiprows=0,
#usecols=range(1,23)) # range(start,stop) - stop not inclusive, 1-16 or 1-23, 0 is screen_name
usecols=range(1,16)) # range(start,stop) - stop not inclusive, 1-16 or 1-23, 0 is screen_name
#X = np.genfromtxt(sys.argv[2], delimiter=',', skip_header=1,
# #usecols=range(1,23)) # range(start,stop) - stop not inclusive, 1-16 or 1-23, 0 is screen_name
# usecols=range(1,16)) # range(start,stop) - stop not inclusive, 1-16 or 1-23, 0 is screen_name
## Transpose (to normalise per col), Normalise, Transpose (back to correct matrix arrangement)
#X_tran = X.transpose()
#X_norm = preprocessing.normalize(X_tran, norm='l1') # L1 for least absolute deviations
#X = X_norm.transpose()
#print("K: {}, data shape: [{}][{}]".format(K, len(X), len(X[0])))
# Fit a Gaussian mixture with EM using K components
gmm = mixture.GaussianMixture(n_components=K, covariance_type='full',
tol=1e-4, max_iter=500, n_init=3, init_params='kmeans',
warm_start=True, verbose=1).fit(X)
## generate random samples from the fitted Gaussian distribution
#sample = gmm.sample(1000)
# load screen_names
with open(sys.argv[2]) as csvfile:
read_csv = csv.reader(csvfile, delimiter=',')
screen_names = []
for row in read_csv:
screen_names.append(row[0])
# label clusters
labels = gmm.predict(X)
clusters = {}
i = 0
for label in labels:
if label in clusters:
clusters[label].append(screen_names[i])
else:
clusters[label] = [screen_names[i]]
i += 1
# outputs
for cluster in clusters:
f = open(sys.argv[3]+"/gmm."+sys.argv[2].split("/")[-1:].pop()+".K"+sys.argv[1]+".cluster"+str(cluster)+".out", "w+")
for c in clusters[cluster]:
f.write("{}\n".format(c))
f.close()
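###############################################
# Illustrative sketch (not part of this script): choosing K with BIC instead of
# passing it on the command line. GaussianMixture exposes .bic(X); fitting a few
# candidate component counts and keeping the lowest score is a common heuristic.
# The helper below is hypothetical and is never called by process_gmm().
def select_k_by_bic(X, k_max=10):
    best_k, best_bic = None, None
    for k in range(1, k_max + 1):
        gmm = mixture.GaussianMixture(n_components=k, covariance_type='full',
                                      max_iter=500, n_init=3).fit(X)
        bic = gmm.bic(X)
        if best_bic is None or bic < best_bic:
            best_k, best_bic = k, bic
    return best_k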
###############################################
if __name__ == "__main__":
try:
process_gmm()
except:
raise
| zafargilani/stcs | lib/clustering/gmm.py | Python | gpl-3.0 | 2,963 | ["Gaussian"] | 0256e6c4f5e2827b7fbc46e79d9fb4febc759922eaf70ba2ccb1b3786024363b |
#-------------------------------------------------------------------------------
# Cloud-COPASI
# Copyright (c) 2013 Edward Kent.
# All rights reserved. This program and the accompanying materials
# are made available under the terms of the GNU Public License v3.0
# which accompanies this distribution, and is available at
# http://www.gnu.org/licenses/gpl.html
#-------------------------------------------------------------------------------
# Create your views here.
from django.http import HttpResponse
from django.template import RequestContext
from django.views.generic import TemplateView, RedirectView, View, FormView
from django.views.generic.edit import FormMixin, ProcessFormView
from django.views.generic.base import ContextMixin
from django.utils.decorators import method_decorator
from django.contrib.auth.decorators import login_required, permission_required
from django.contrib.auth import authenticate, login, logout
from django.contrib.auth.forms import AuthenticationForm
from django.http import HttpResponseRedirect
from django.core.urlresolvers import reverse_lazy
from django.contrib.auth import logout
from django import forms
import sys
from boto.exception import BotoServerError
from cloud_copasi.web_interface.models import AWSAccessKey, CondorPool, Task,\
EC2Instance, ElasticIP
from cloud_copasi.web_interface.aws import resource_management_tools
import logging
from cloud_copasi import settings
#Remember - class based views are not thread safe! Don't pass lists, dicts etc as args
log = logging.getLogger(__name__)
class DefaultView(TemplateView):
page_title=''
def get(self, request, *args, **kwargs):
#log.debug('GET request [\"%s\"]' % request.path)
return super(DefaultView, self).get(request, *args, **kwargs)
def dispatch(self, request, *args, **kwargs):
#Override the template name if it is requested from the url
if kwargs.get('template_name', None):
self.template_name = kwargs['template_name']
if self.page_title:
kwargs['page_title'] = self.page_title
#Check for errors in request.session
kwargs['debug'] = settings.DEBUG
errors = request.session.pop('errors', None)
if errors:
kwargs['errors'] = errors
if request.user.is_authenticated():
if hasattr(self, 'template_name') and self.template_name != 'home.html':
#Don't show on the home screen, regardless of logged in or not
kwargs['show_status_bar']=True
return super(DefaultView, self).dispatch(request, *args, **kwargs)
class RestrictedView(DefaultView):
@method_decorator(login_required)
def dispatch(self, request, *args, **kwargs):
#Populate the context with information about the access keys
user = request.user
access_keys = AWSAccessKey.objects.filter(user=user)
kwargs['access_keys'] = access_keys
kwargs['show_status_bar'] = True
#resource_overview=resource_management_tools.get_unrecognized_resources(request.user)
#Generate warnings
#if not resource_overview.is_empty():
# log.debug('Unrecognized resources for user %s'%request.user)
#kwargs['show_warning_bar']= not resource_overview.is_empty()
#kwargs['resource_overview']=resource_overview
kwargs['compute_nodes'] = EC2Instance.objects.filter(ec2_pool__vpc__access_key__user=user)
kwargs['elastic_ips'] = ElasticIP.objects.filter(vpc__access_key__user=user)
kwargs['access_keys'] = AWSAccessKey.objects.filter(user=user)
kwargs['owned_keys'] = AWSAccessKey.objects.filter(user=user, copy_of__isnull=True)
kwargs['shared_keys'] = AWSAccessKey.objects.filter(user=user, copy_of__isnull=False)
kwargs['compute_pools'] = CondorPool.objects.filter(user=user)
tasks = Task.objects.filter(user = user)
kwargs['running_tasks'] = tasks.filter(status='new')|tasks.filter(status='running')|tasks.filter(status='transfer')
kwargs['finished_tasks'] = tasks.filter(status='finished')
kwargs['task_errors'] = tasks.filter(status='error')
return super(RestrictedView, self).dispatch(request, *args, **kwargs)
class RestrictedFormView(RestrictedView, FormMixin, ProcessFormView):
@method_decorator(login_required)
def dispatch(self, request, *args, **kwargs):
user=request.user
kwargs['form'] = self.get_form(self.get_form_class())
kwargs['compute_pools'] = CondorPool.objects.filter(user=user)
kwargs['compute_nodes'] = EC2Instance.objects.filter(ec2_pool__vpc__access_key__user=user)
kwargs['elastic_ips'] = ElasticIP.objects.filter(vpc__access_key__user=user)
kwargs['access_keys'] = AWSAccessKey.objects.filter(user=user)
kwargs['owned_keys'] = AWSAccessKey.objects.filter(user=user, copy_of__isnull=True)
kwargs['shared_keys'] = AWSAccessKey.objects.filter(user=user, copy_of__isnull=False)
kwargs['compute_pools'] = CondorPool.objects.filter(user=user)
tasks = Task.objects.filter(user = user)
kwargs['running_tasks'] = tasks.filter(status='new')|tasks.filter(status='running')|tasks.filter(status='transfer')
kwargs['finished_tasks'] = tasks.filter(status='finished')
kwargs['task_errors'] = tasks.filter(status='error')
return super(RestrictedFormView, self).dispatch(request, *args,**kwargs)
def form_valid(self, *args, **kwargs):
"""
If the form is valid, redirect to the supplied URL.
"""
return HttpResponseRedirect(self.get_success_url())
def form_invalid(self, *args, **kwargs):
"""
If the form is invalid, re-render the context data with the
data-filled form and errors.
"""
return self.render_to_response(self.get_context_data(**kwargs))
def get(self, request, *args, **kwargs):
"""
Handles GET requests and instantiates a blank version of the form.
"""
return self.render_to_response(self.get_context_data(**kwargs))
def post(self, request, *args, **kwargs):
"""
Handles POST requests, instantiating a form instance with the passed
POST variables and then checked for validity.
"""
form=kwargs['form']
if form.is_valid():
return self.form_valid(**kwargs)
else:
return self.form_invalid(**kwargs)
class LandingView(RedirectView):
def get_redirect_url(self, *args, **kwargs):
if self.request.user.is_authenticated():
return reverse_lazy('my_account')
else:
return reverse_lazy('home')
class HomeView(DefaultView):
template_name='home.html'
page_title = 'Home'
class LogoutView(RedirectView):
url = reverse_lazy('home')
def dispatch(self, request, *args, **kwargs):
logout(request)
return super(LogoutView, self).dispatch(request, *args, **kwargs)
class LoginView(FormView):
page_title = 'Sign in'
success_url = reverse_lazy('landing_view')
template_name = 'account/sign_in.html'
form_class = AuthenticationForm
initial={}
def get_success_url(self):
next_page = self.request.REQUEST.get('next', '')
if next_page:
return next_page
else:
return FormView.get_success_url(self)
def get_context_data(self, **kwargs):
context = FormView.get_context_data(self, **kwargs)
context['page_title'] = self.page_title
return context
def form_valid(self, form):
login(self.request, form.get_user())
return super(FormView,self).form_valid(form)
| edkent/cloud-copasi | cloud_copasi/web_interface/views.py | Python | gpl-3.0 | 7,944 | ["COPASI"] | 1aef8505cba003752ef57d6180536e79d0e15b014809109a8677b691a032ac7c |
#!/usr/bin/env python
########################################################################
# $HeadURL$
########################################################################
""" Enable using one or more Storage Elements
"""
__RCSID__ = "$Id$"
import DIRAC
from DIRAC.Core.Base import Script
read = True
write = True
check = True
site = ''
mute = False
Script.setUsageMessage( """
Enable using one or more Storage Elements
Usage:
%s SE1 [SE2 ...]
""" % Script.scriptName )
Script.registerSwitch( "r" , "AllowRead" , " Allow only reading from the storage element" )
Script.registerSwitch( "w" , "AllowWrite", " Allow only writing to the storage element" )
Script.registerSwitch( "k" , "AllowCheck", " Allow only check access to the storage element" )
Script.registerSwitch( "m" , "Mute" , " Do not send email" )
Script.registerSwitch( "S:", "Site=" , " Allow all SEs associated to site" )
Script.parseCommandLine( ignoreErrors = True )
ses = Script.getPositionalArgs()
for switch in Script.getUnprocessedSwitches():
if switch[0].lower() == "r" or switch[0].lower() == "allowread":
write = False
check = False
if switch[0].lower() == "w" or switch[0].lower() == "allowwrite":
read = False
check = False
if switch[0].lower() == "k" or switch[0].lower() == "allowcheck":
read = False
write = False
if switch[0].lower() == "m" or switch[0].lower() == "mute":
mute = True
if switch[0] == "S" or switch[0].lower() == "site":
site = switch[1]
#from DIRAC.ConfigurationSystem.Client.CSAPI import CSAPI
from DIRAC.Interfaces.API.DiracAdmin import DiracAdmin
from DIRAC.ConfigurationSystem.Client.Helpers.Operations import Operations
from DIRAC import gConfig, gLogger
from DIRAC.ResourceStatusSystem.Client.ResourceStatus import ResourceStatus
from DIRAC.Core.Security.ProxyInfo import getProxyInfo
#csAPI = CSAPI()
diracAdmin = DiracAdmin()
exitCode = 0
errorList = []
setup = gConfig.getValue( '/DIRAC/Setup', '' )
if not setup:
print 'ERROR: Could not contact Configuration Service'
exitCode = 2
DIRAC.exit( exitCode )
res = getProxyInfo()
if not res[ 'OK' ]:
gLogger.error( 'Failed to get proxy information', res[ 'Message' ] )
DIRAC.exit( 2 )
userName = res['Value'].get( 'username' )
if not userName:
gLogger.error( 'Failed to get username for proxy' )
DIRAC.exit( 2 )
if site:
res = gConfig.getOptionsDict( '/Resources/Sites/LCG/%s' % site )
if not res[ 'OK' ]:
gLogger.error( 'The provided site (%s) is not known.' % site )
DIRAC.exit( -1 )
ses.extend( res[ 'Value' ][ 'SE' ].replace( ' ', '' ).split( ',' ) )
if not ses:
gLogger.error( 'There were no SEs provided' )
DIRAC.exit()
readAllowed = []
writeAllowed = []
checkAllowed = []
resourceStatus = ResourceStatus()
res = resourceStatus.getStorageElementStatus( ses )
if not res[ 'OK' ]:
gLogger.error( 'Storage Element %s does not exist' % ses )
DIRAC.exit( -1 )
reason = 'Forced with dirac-admin-allow-se by %s' % userName
for se, seOptions in res[ 'Value' ].items():
resW = resC = resR = { 'OK' : False }
# InActive is used on the CS model, Banned is the equivalent in RSS
if read and seOptions.has_key( 'ReadAccess' ):
if not seOptions[ 'ReadAccess' ] in [ "InActive", "Banned", "Probing", "Degraded" ]:
gLogger.notice( 'Read option for %s is %s, instead of %s' %
( se, seOptions[ 'ReadAccess' ], [ "InActive", "Banned", "Probing", "Degraded" ] ) )
gLogger.notice( 'Try specifying the command switches' )
continue
if 'ARCHIVE' in se:
gLogger.notice( '%s is not supposed to change Read status to Active' % se )
resR[ 'OK' ] = True
else:
resR = resourceStatus.setStorageElementStatus( se, 'ReadAccess', 'Active', reason, userName )
if not resR['OK']:
gLogger.error( "Failed to update %s read access to Active" % se )
else:
gLogger.notice( "Successfully updated %s read access to Active" % se )
readAllowed.append( se )
# InActive is used on the CS model, Banned is the equivalent in RSS
if write and seOptions.has_key( 'WriteAccess' ):
if not seOptions[ 'WriteAccess' ] in [ "InActive", "Banned", "Probing", "Degraded" ]:
gLogger.notice( 'Write option for %s is %s, instead of %s' %
( se, seOptions[ 'WriteAccess' ], [ "InActive", "Banned", "Probing", "Degraded" ] ) )
gLogger.notice( 'Try specifying the command switches' )
continue
resW = resourceStatus.setStorageElementStatus( se, 'WriteAccess', 'Active', reason, userName )
if not resW['OK']:
gLogger.error( "Failed to update %s write access to Active" % se )
else:
gLogger.notice( "Successfully updated %s write access to Active" % se )
writeAllowed.append( se )
# InActive is used on the CS model, Banned is the equivalent in RSS
if check and seOptions.has_key( 'CheckAccess' ):
if not seOptions[ 'CheckAccess' ] in [ "InActive", "Banned", "Probing", "Degraded" ]:
gLogger.notice( 'Check option for %s is %s, instead of %s' %
( se, seOptions[ 'CheckAccess' ], [ "InActive", "Banned", "Probing", "Degraded" ] ) )
gLogger.notice( 'Try specifying the command switches' )
continue
resC = resourceStatus.setStorageElementStatus( se, 'CheckAccess', 'Active', reason, userName )
if not resC['OK']:
gLogger.error( "Failed to update %s check access to Active" % se )
else:
gLogger.notice( "Successfully updated %s check access to Active" % se )
checkAllowed.append( se )
if not( resR['OK'] or resW['OK'] or resC['OK'] ):
DIRAC.exit( -1 )
if not ( writeAllowed or readAllowed or checkAllowed ):
gLogger.info( "No storage elements were allowed" )
DIRAC.exit( -1 )
if mute:
gLogger.notice( 'Email is muted by script switch' )
DIRAC.exit( 0 )
subject = '%s storage elements allowed for use' % len( writeAllowed + readAllowed + checkAllowed )
addressPath = 'EMail/Production'
address = Operations().getValue( addressPath, '' )
body = ''
if read:
body = "%s\n\nThe following storage elements were allowed for reading:" % body
for se in readAllowed:
body = "%s\n%s" % ( body, se )
if write:
body = "%s\n\nThe following storage elements were allowed for writing:" % body
for se in writeAllowed:
body = "%s\n%s" % ( body, se )
if check:
body = "%s\n\nThe following storage elements were allowed for checking:" % body
for se in checkAllowed:
body = "%s\n%s" % ( body, se )
if not address:
gLogger.notice( "'%s' not defined in Operations, can not send Mail\n" % addressPath, body )
DIRAC.exit( 0 )
res = diracAdmin.sendMail( address, subject, body )
gLogger.notice( 'Notifying %s' % address )
if res[ 'OK' ]:
gLogger.notice( res[ 'Value' ] )
else:
gLogger.notice( res[ 'Message' ] )
DIRAC.exit( 0 )
################################################################################
#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF
| calancha/DIRAC | DataManagementSystem/scripts/dirac-admin-allow-se.py | Python | gpl-3.0 | 7,121 | ["DIRAC"] | ed4c1aa5080a8ac39d292e1a63dc12838d3025a796abb3af0bcb7e3998ed3b94 |
#!/usr/bin/env python
# File created on 21 Mar 2012
from __future__ import division
__author__ = "Greg Caporaso"
__copyright__ = "Copyright 2011, The QIIME project"
__credits__ = ["Greg Caporaso", "Jai Ram Rideout"]
__license__ = "GPL"
__version__ = "1.8.0-dev"
__maintainer__ = "Greg Caporaso"
__email__ = "gregcaporaso@gmail.com"
from os.path import split, splitext, getsize, exists, abspath, join
from shutil import copy, rmtree
from numpy import inf
from copy import deepcopy
from skbio.util import create_dir, remove_files
from skbio.parse.sequences import parse_fasta
from biom import Table
from biom.util import biom_open
from qiime.util import (subsample_fasta, count_seqs_from_file)
from qiime.filter import (filter_otus_from_otu_table,
get_seq_ids_from_fasta_file,
filter_otus_from_otu_map)
from qiime.workflow.util import (print_to_stdout,
WorkflowLogger,
generate_log_fp,
log_input_md5s,
get_params_str,
WorkflowError)
from qiime.util import write_biom_table
from qiime.workflow.core_diversity_analyses import (format_index_link,
generate_index_page,
_index_headers)
def final_repset_from_iteration_repsets(repset_fasta_fs):
"""
The first observation of each otu is chosen as the representative -
this ensures that the representative sequence is the centroid of
the cluster.
"""
observed = {}
for repset_fasta_f in repset_fasta_fs:
for otu_id, seq in parse_fasta(repset_fasta_f):
o = otu_id.split()[0]
if not o in observed:
yield (otu_id, seq)
observed[o] = None
else:
# we already have a representative for this otu id
pass
def final_repset_from_iteration_repsets_fps(repset_fasta_fps, final_repset_fp):
final_repset_f = open(final_repset_fp, 'w')
repset_fasta_fs = map(open, repset_fasta_fps)
for record in final_repset_from_iteration_repsets(repset_fasta_fs):
final_repset_f.write('>%s\n%s\n' % record)
final_repset_f.close()
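# Illustrative sketch (not part of QIIME): the "first observation wins" rule that
# final_repset_from_iteration_repsets above applies across iteration rep sets. The
# records below are hypothetical (otu_id, seq) pairs rather than parsed fasta files;
# the helper is never called by this module.
def _demo_first_observation_wins():
    iteration_records = [
        [('otu1 iter0', 'ACGT'), ('otu2 iter0', 'GGGG')],
        [('otu1 iter1', 'TTTT'), ('otu3 iter1', 'CCCC')],
    ]
    observed = {}
    final = []
    for records in iteration_records:
        for otu_id, seq in records:
            o = otu_id.split()[0]
            if o not in observed:
                final.append((otu_id, seq))
                observed[o] = None
    return final
# otu1 keeps its iteration-0 sequence ('ACGT'); the later 'TTTT' copy is ignored.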
#####################
# Start functions to port to new Qiime/qiime/workflow/util.py
#####################
# The following functions are currently all tested via
# the wrapper functions in PickSubsampledReferenceOtusThroughOtuTableTests.
# In an up-coming workflow-refactoring, I want to use these in other workflow
# scripts and test directly to simplify WorkflowTests. I split these out when
# writing this code as it became obvious that they're reusable.
def pick_reference_otus(input_fp,
output_dir,
otu_picking_method,
refseqs_fp,
parallel,
params,
logger,
similarity_override=None):
params_copy = deepcopy(params)
if 'pick_otus' in params_copy and 'refseqs_fp' in params_copy['pick_otus']:
raise WorkflowError("Cannot pass pick_otus:refseqs_fp in parameters file. This can only be"
" passed on the command line or through the API.")
if similarity_override is not None:
logger.write(
'Similiarity of %1.3f being used for pre-filtering.\n' %
similarity_override)
if 'pick_otus' in params_copy:
params_copy['pick_otus']['similarity'] = str(similarity_override)
else:
params_copy['pick_otus'] = {'similarity': str(similarity_override)}
if parallel and (otu_picking_method == 'uclust_ref' or otu_picking_method == "sortmerna"):
# Grab the parallel-specific parameters
try:
params_str = get_params_str(params_copy['parallel'])
except KeyError:
params_str = ''
# Grab the OTU picker parameters
try:
# Want to find a cleaner strategy for this: the parallel script
# is method-specific, so doesn't take a --otu_picking_method
# option. This works for now though.
if 'otu_picking_method' in params_copy['pick_otus']:
del params_copy['pick_otus']['otu_picking_method']
except KeyError:
pass
params_str += ' %s' % get_params_str(params_copy['pick_otus'])
otu_picking_script = 'parallel_pick_otus_%s.py' % otu_picking_method
# Build the OTU picking command
pick_otus_cmd = '%s -i %s -o %s -r %s -T %s' %\
(otu_picking_script,
input_fp,
output_dir,
refseqs_fp,
params_str)
else:
try:
params_str = get_params_str(params_copy['pick_otus'])
except KeyError:
params_str = ''
# Since this is reference-based OTU picking we always want to
# suppress new clusters -- force it here.
params_str += ' --suppress_new_clusters'
logger.write(
"Forcing --suppress_new_clusters as this is reference-based OTU picking.\n\n")
# Build the OTU picking command
pick_otus_cmd = 'pick_otus.py -i %s -o %s -r %s -m %s %s' %\
(input_fp,
output_dir,
refseqs_fp,
otu_picking_method,
params_str)
return pick_otus_cmd
def pick_denovo_otus(input_fp,
output_dir,
new_ref_set_id,
otu_picking_method,
params,
logger):
try:
d = params['pick_otus'].copy()
del d['otu_picking_method']
except KeyError:
pass
d['denovo_otu_id_prefix'] = '%s.ReferenceOTU' % new_ref_set_id
params_str = ' %s' % get_params_str(d)
# Build the OTU picking command
result = 'pick_otus.py -i %s -o %s -m %s %s' %\
(input_fp, output_dir, otu_picking_method, params_str)
return result
def assign_tax(repset_fasta_fp,
output_dir,
command_handler,
params,
qiime_config,
parallel=False,
logger=None,
status_update_callback=print_to_stdout):
input_dir, input_filename = split(repset_fasta_fp)
input_basename, input_ext = splitext(input_filename)
commands = []
if logger is None:
log_fp = generate_log_fp(output_dir)
logger = WorkflowLogger(log_fp,
params=params,
qiime_config=qiime_config)
close_logger_on_success = True
else:
close_logger_on_success = False
# Prep the taxonomy assignment command
try:
assignment_method = params['assign_taxonomy']['assignment_method']
except KeyError:
assignment_method = 'uclust'
assign_taxonomy_dir = '%s/%s_assigned_taxonomy' %\
(output_dir, assignment_method)
taxonomy_fp = '%s/%s_tax_assignments.txt' % \
(assign_taxonomy_dir, input_basename)
if parallel and (assignment_method == 'rdp' or
assignment_method == 'blast' or
assignment_method == 'uclust'):
# Grab the parallel-specific parameters
try:
params_str = get_params_str(params['parallel'])
except KeyError:
params_str = ''
try:
# Want to find a cleaner strategy for this: the parallel script
# is method-specific, so doesn't take a --assignment_method
# option. This works for now though.
d = params['assign_taxonomy'].copy()
if 'assignment_method' in d:
del d['assignment_method']
params_str += ' %s' % get_params_str(d)
except KeyError:
pass
# Build the parallel taxonomy assignment command
assign_taxonomy_cmd = \
'parallel_assign_taxonomy_%s.py -i %s -o %s -T %s' %\
(assignment_method, repset_fasta_fp,
assign_taxonomy_dir, params_str)
else:
try:
params_str = get_params_str(params['assign_taxonomy'])
except KeyError:
params_str = ''
# Build the taxonomy assignment command
assign_taxonomy_cmd = 'assign_taxonomy.py -o %s -i %s %s' %\
(assign_taxonomy_dir, repset_fasta_fp, params_str)
if exists(assign_taxonomy_dir):
rmtree(assign_taxonomy_dir)
commands.append([('Assign taxonomy', assign_taxonomy_cmd)])
# Call the command handler on the list of commands
command_handler(commands,
status_update_callback,
logger=logger,
close_logger_on_success=close_logger_on_success)
return taxonomy_fp
def align_and_tree(repset_fasta_fp,
output_dir,
command_handler,
params,
qiime_config,
parallel=False,
logger=None,
status_update_callback=print_to_stdout):
input_dir, input_filename = split(repset_fasta_fp)
input_basename, input_ext = splitext(input_filename)
commands = []
if logger is None:
log_fp = generate_log_fp(output_dir)
logger = WorkflowLogger(log_fp,
params=params,
qiime_config=qiime_config)
close_logger_on_success = True
else:
close_logger_on_success = False
# Prep the pynast alignment command
alignment_method = 'pynast'
pynast_dir = '%s/%s_aligned_seqs' % (output_dir, alignment_method)
aln_fp = '%s/%s_aligned.fasta' % (pynast_dir, input_basename)
failures_fp = '%s/%s_failures.fasta' % (pynast_dir, input_basename)
if exists(pynast_dir):
rmtree(pynast_dir)
if parallel:
# Grab the parallel-specific parameters
try:
params_str = get_params_str(params['parallel'])
except KeyError:
params_str = ''
# Grab the OTU picker parameters
try:
# Want to find a cleaner strategy for this: the parallel script
# is method-specific, so doesn't take a --alignment_method
# option. This works for now though.
d = params['align_seqs'].copy()
if 'alignment_method' in d:
del d['alignment_method']
params_str += ' %s' % get_params_str(d)
except KeyError:
pass
# Build the parallel pynast alignment command
align_seqs_cmd = 'parallel_align_seqs_pynast.py -i %s -o %s -T %s' %\
(repset_fasta_fp, pynast_dir, params_str)
else:
try:
params_str = get_params_str(params['align_seqs'])
except KeyError:
params_str = ''
# Build the pynast alignment command
align_seqs_cmd = 'align_seqs.py -i %s -o %s %s' %\
(repset_fasta_fp, pynast_dir, params_str)
commands.append([('Align sequences', align_seqs_cmd)])
# Prep the alignment filtering command
filtered_aln_fp = '%s/%s_aligned_pfiltered.fasta' %\
(pynast_dir, input_basename)
try:
params_str = get_params_str(params['filter_alignment'])
except KeyError:
params_str = ''
# Build the alignment filtering command
filter_alignment_cmd = 'filter_alignment.py -o %s -i %s %s' %\
(pynast_dir, aln_fp, params_str)
commands.append([('Filter alignment', filter_alignment_cmd)])
# Prep the tree building command
tree_fp = '%s/rep_set.tre' % output_dir
try:
params_str = get_params_str(params['make_phylogeny'])
except KeyError:
params_str = ''
# Build the tree building command
make_phylogeny_cmd = 'make_phylogeny.py -i %s -o %s %s' %\
(filtered_aln_fp, tree_fp, params_str)
commands.append([('Build phylogenetic tree', make_phylogeny_cmd)])
if exists(tree_fp):
remove_files([tree_fp])
# Call the command handler on the list of commands
command_handler(commands,
status_update_callback,
logger=logger,
close_logger_on_success=close_logger_on_success)
return failures_fp
#####################
# End functions to port to new Qiime/qiime/workflow/util.py
#####################
def iteration_output_exists(
iteration_output_dir, min_otu_size, remove_partial_output=True):
""" """
if not exists(iteration_output_dir):
return False
expected_fps = ['%s/new_refseqs.fna' % iteration_output_dir,
'%s/rep_set.fna' % iteration_output_dir,
'%s/otu_table_mc%d.biom' % (iteration_output_dir, min_otu_size)]
for fp in expected_fps:
if not (exists(fp) and getsize(fp) > 0):
if remove_partial_output:
# if any of the expected filepaths don't exist or have
# size == 0, remove the iteration output directory
rmtree(iteration_output_dir)
return False
return True
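# Illustrative check (hypothetical directory): an iteration is only re-run
# when one of its three expected outputs is missing or empty; in that case
# the partial output directory has already been removed by the call.
#
#     if not iteration_output_exists('out/0/', min_otu_size=2):
#         ...  # re-run the iteration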
def iterative_pick_subsampled_open_reference_otus(
input_fps,
refseqs_fp,
output_dir,
percent_subsample,
new_ref_set_id,
command_handler,
params,
qiime_config,
prefilter_refseqs_fp=None,
prefilter_percent_id=None,
min_otu_size=2,
run_assign_tax=True,
run_align_and_tree=True,
step1_otu_map_fp=None,
step1_failures_fasta_fp=None,
parallel=False,
suppress_step4=False,
logger=None,
suppress_md5=False,
denovo_otu_picking_method='uclust',
reference_otu_picking_method='uclust_ref',
status_update_callback=print_to_stdout,
minimum_failure_threshold=100000):
""" Call the pick_subsampled_open_reference_otus workflow on multiple inputs
and handle processing of the results.
"""
create_dir(output_dir)
commands = []
if logger is None:
logger = WorkflowLogger(generate_log_fp(output_dir),
params=params,
qiime_config=qiime_config)
close_logger_on_success = True
else:
close_logger_on_success = False
# if the user has not passed a different reference collection for the pre-filter,
# use the input refseqs_fp for all iterations. we want to pre-filter all data against
# the input data, as lower percent identity searches with uclust can be slow, so we
# want the reference collection to stay at a reasonable size.
if prefilter_refseqs_fp is None:
prefilter_refseqs_fp = refseqs_fp
otu_table_fps = []
repset_fasta_fps = []
for i, input_fp in enumerate(input_fps):
iteration_output_dir = '%s/%d/' % (output_dir, i)
if iteration_output_exists(iteration_output_dir, min_otu_size):
# if the output from an iteration already exists, skip that
# iteration (useful for continuing failed runs)
log_input_md5s(logger, [input_fp, refseqs_fp])
logger.write('Iteration %d (input file: %s) output data already exists. '
'Skipping and moving to next.\n\n' % (i, input_fp))
else:
pick_subsampled_open_reference_otus(input_fp=input_fp,
refseqs_fp=refseqs_fp,
output_dir=iteration_output_dir,
percent_subsample=percent_subsample,
new_ref_set_id='.'.join(
[new_ref_set_id, str(i)]),
command_handler=command_handler,
params=params,
qiime_config=qiime_config,
run_assign_tax=False,
run_align_and_tree=False,
prefilter_refseqs_fp=prefilter_refseqs_fp,
prefilter_percent_id=prefilter_percent_id,
min_otu_size=min_otu_size,
step1_otu_map_fp=step1_otu_map_fp,
step1_failures_fasta_fp=step1_failures_fasta_fp,
parallel=parallel,
suppress_step4=suppress_step4,
logger=logger,
suppress_md5=suppress_md5,
suppress_index_page=True,
denovo_otu_picking_method=denovo_otu_picking_method,
reference_otu_picking_method=reference_otu_picking_method,
status_update_callback=status_update_callback,
minimum_failure_threshold=minimum_failure_threshold)
# perform post-iteration file shuffling whether the previous iteration's
# data previously existed or was just computed.
# step1 otu map and failures can only be used for the first iteration
# as subsequent iterations need to use updated refseqs files
step1_otu_map_fp = step1_failures_fasta_fp = None
new_refseqs_fp = '%s/new_refseqs.fna' % iteration_output_dir
refseqs_fp = new_refseqs_fp
otu_table_fps.append(
'%s/otu_table_mc%d.biom' %
(iteration_output_dir, min_otu_size))
repset_fasta_fps.append('%s/rep_set.fna' % iteration_output_dir)
# Merge OTU tables - check for existence first as this step has historically
# been a frequent failure, so is sometimes run manually in failed runs.
otu_table_fp = '%s/otu_table_mc%d.biom' % (output_dir, min_otu_size)
if not (exists(otu_table_fp) and getsize(otu_table_fp) > 0):
merge_cmd = 'merge_otu_tables.py -i %s -o %s' %\
(','.join(otu_table_fps), otu_table_fp)
commands.append([("Merge OTU tables", merge_cmd)])
# Build master rep set
final_repset_fp = '%s/rep_set.fna' % output_dir
final_repset_from_iteration_repsets_fps(repset_fasta_fps, final_repset_fp)
command_handler(commands,
status_update_callback,
logger=logger,
close_logger_on_success=False)
commands = []
# initialize output file names - these differ based on what combination of
# taxonomy assignment and alignment/tree building is happening.
if run_assign_tax and run_align_and_tree:
tax_input_otu_table_fp = otu_table_fp
otu_table_w_tax_fp = \
'%s/otu_table_mc%d_w_tax.biom' % (output_dir, min_otu_size)
align_and_tree_input_otu_table = otu_table_w_tax_fp
pynast_failure_filtered_otu_table_fp = \
'%s/otu_table_mc%d_w_tax_no_pynast_failures.biom' % (output_dir,
min_otu_size)
elif run_assign_tax:
tax_input_otu_table_fp = otu_table_fp
otu_table_w_tax_fp = \
'%s/otu_table_mc%d_w_tax.biom' % (output_dir, min_otu_size)
elif run_align_and_tree:
align_and_tree_input_otu_table = otu_table_fp
pynast_failure_filtered_otu_table_fp = \
'%s/otu_table_mc%d_no_pynast_failures.biom' % (output_dir,
min_otu_size)
if run_assign_tax:
if exists(otu_table_w_tax_fp) and getsize(otu_table_w_tax_fp) > 0:
logger.write(
"Final output file exists (%s). Will not rebuild." %
otu_table_w_tax_fp)
else:
# remove files from partially completed runs
remove_files([otu_table_w_tax_fp], error_on_missing=False)
taxonomy_fp = assign_tax(
repset_fasta_fp=final_repset_fp,
output_dir=output_dir,
command_handler=command_handler,
params=params,
qiime_config=qiime_config,
parallel=parallel,
logger=logger,
status_update_callback=status_update_callback)
# Add taxa to otu table
add_metadata_cmd = 'biom add-metadata -i %s --observation-metadata-fp %s -o %s --sc-separated taxonomy --observation-header OTUID,taxonomy' %\
(tax_input_otu_table_fp, taxonomy_fp, otu_table_w_tax_fp)
commands.append([("Add taxa to OTU table", add_metadata_cmd)])
command_handler(commands,
status_update_callback,
logger=logger,
close_logger_on_success=False)
commands = []
if run_align_and_tree:
if exists(pynast_failure_filtered_otu_table_fp) and\
getsize(pynast_failure_filtered_otu_table_fp) > 0:
logger.write("Final output file exists (%s). Will not rebuild." %
pynast_failure_filtered_otu_table_fp)
else:
# remove files from partially completed runs
remove_files([pynast_failure_filtered_otu_table_fp],
error_on_missing=False)
pynast_failures_fp = align_and_tree(
repset_fasta_fp=final_repset_fp,
output_dir=output_dir,
command_handler=command_handler,
params=params,
qiime_config=qiime_config,
parallel=parallel,
logger=logger,
status_update_callback=status_update_callback)
# Build OTU table without PyNAST failures
with biom_open(align_and_tree_input_otu_table) as biom_file:
table = Table.from_hdf5(biom_file)
filtered_otu_table = filter_otus_from_otu_table(table,
get_seq_ids_from_fasta_file(open(pynast_failures_fp, 'U')),
0, inf, 0, inf, negate_ids_to_keep=True)
write_biom_table(filtered_otu_table,
pynast_failure_filtered_otu_table_fp)
command_handler(commands,
status_update_callback,
logger=logger,
close_logger_on_success=False)
commands = []
logger.close()
def pick_subsampled_open_reference_otus(input_fp,
refseqs_fp,
output_dir,
percent_subsample,
new_ref_set_id,
command_handler,
params,
qiime_config,
prefilter_refseqs_fp=None,
run_assign_tax=True,
run_align_and_tree=True,
prefilter_percent_id=None,
min_otu_size=2,
step1_otu_map_fp=None,
step1_failures_fasta_fp=None,
parallel=False,
suppress_step4=False,
logger=None,
suppress_md5=False,
suppress_index_page=False,
denovo_otu_picking_method='uclust',
reference_otu_picking_method='uclust_ref',
status_update_callback=print_to_stdout,
minimum_failure_threshold=100000):
""" Run the data preparation steps of Qiime
The steps performed by this function are:
- Pick reference OTUs against refseqs_fp
- Subsample the failures to n sequences.
- Pick OTUs de novo on the n failures.
- Pick representative sequences for the resulting OTUs.
- Pick reference OTUs on all failures using the
representative set from step 4 as the reference set.
"""
# for now only allowing uclust/usearch/sortmerna+sumaclust for otu picking
allowed_denovo_otu_picking_methods = ['uclust', 'usearch61', 'sumaclust']
allowed_reference_otu_picking_methods = ['uclust_ref', 'usearch61_ref',
'sortmerna']
assert denovo_otu_picking_method in allowed_denovo_otu_picking_methods,\
"Unknown de novo OTU picking method: %s. Known methods are: %s"\
% (denovo_otu_picking_method,
','.join(allowed_denovo_otu_picking_methods))
assert reference_otu_picking_method in allowed_reference_otu_picking_methods,\
"Unknown reference OTU picking method: %s. Known methods are: %s"\
% (reference_otu_picking_method,
','.join(allowed_reference_otu_picking_methods))
# Prepare some variables for the later steps
index_links = []
input_dir, input_filename = split(input_fp)
input_basename, input_ext = splitext(input_filename)
create_dir(output_dir)
commands = []
if logger is None:
log_fp = generate_log_fp(output_dir)
logger = WorkflowLogger(log_fp,
params=params,
qiime_config=qiime_config)
close_logger_on_success = True
index_links.append(
('Run summary data',
log_fp,
_index_headers['run_summary']))
else:
close_logger_on_success = False
if not suppress_md5:
log_input_md5s(logger, [input_fp,
refseqs_fp,
step1_otu_map_fp,
step1_failures_fasta_fp])
# if the user has not passed a different reference collection for the pre-filter,
# use the main refseqs_fp. this is useful if the user wants to provide a smaller
# reference collection, or to use the input reference collection when running in
# iterative mode (rather than an iteration's new refseqs)
if prefilter_refseqs_fp is None:
prefilter_refseqs_fp = refseqs_fp
# Step 1: Closed-reference OTU picking on the input file (if not already
# complete)
if step1_otu_map_fp and step1_failures_fasta_fp:
step1_dir = '%s/step1_otus' % output_dir
create_dir(step1_dir)
logger.write("Using pre-existing reference otu map and failures.\n\n")
else:
if prefilter_percent_id is not None:
prefilter_dir = '%s/prefilter_otus/' % output_dir
prefilter_failures_list_fp = '%s/%s_failures.txt' % \
(prefilter_dir, input_basename)
prefilter_pick_otu_cmd = pick_reference_otus(
input_fp, prefilter_dir, reference_otu_picking_method,
prefilter_refseqs_fp, parallel, params, logger, prefilter_percent_id)
commands.append(
[('Pick Reference OTUs (prefilter)', prefilter_pick_otu_cmd)])
prefiltered_input_fp = '%s/prefiltered_%s%s' %\
(prefilter_dir, input_basename, input_ext)
filter_fasta_cmd = 'filter_fasta.py -f %s -o %s -s %s -n' %\
(input_fp, prefiltered_input_fp, prefilter_failures_list_fp)
commands.append(
[('Filter prefilter failures from input', filter_fasta_cmd)])
index_links.append(
('Pre-filtered sequence identifiers '
'(failed to hit reference at %1.1f%% identity)' % (float(prefilter_percent_id)*100),
prefilter_failures_list_fp,
_index_headers['sequences']))
# Call the command handler on the list of commands
command_handler(commands,
status_update_callback,
logger=logger,
close_logger_on_success=False)
commands = []
input_fp = prefiltered_input_fp
input_dir, input_filename = split(input_fp)
input_basename, input_ext = splitext(input_filename)
if getsize(prefiltered_input_fp) == 0:
raise ValueError(
"All sequences were discarded by the prefilter. "
"Are the input sequences in the same orientation "
"in your input file and reference file (you can "
"add 'pick_otus:enable_rev_strand_match True' to "
"your parameters file if not)? Are you using the "
"correct reference file?")
# Build the OTU picking command
step1_dir = \
'%s/step1_otus' % output_dir
step1_otu_map_fp = \
'%s/%s_otus.txt' % (step1_dir, input_basename)
step1_pick_otu_cmd = pick_reference_otus(
input_fp, step1_dir, reference_otu_picking_method,
refseqs_fp, parallel, params, logger)
commands.append([('Pick Reference OTUs', step1_pick_otu_cmd)])
# Build the failures fasta file
step1_failures_list_fp = '%s/%s_failures.txt' % \
(step1_dir, input_basename)
step1_failures_fasta_fp = \
'%s/failures.fasta' % step1_dir
step1_filter_fasta_cmd = 'filter_fasta.py -f %s -s %s -o %s' %\
(input_fp, step1_failures_list_fp, step1_failures_fasta_fp)
commands.append([('Generate full failures fasta file',
step1_filter_fasta_cmd)])
# Call the command handler on the list of commands
command_handler(commands,
status_update_callback,
logger=logger,
close_logger_on_success=False)
commands = []
step1_repset_fasta_fp = \
'%s/step1_rep_set.fna' % step1_dir
step1_pick_rep_set_cmd = 'pick_rep_set.py -i %s -o %s -f %s' %\
(step1_otu_map_fp, step1_repset_fasta_fp, input_fp)
commands.append([('Pick rep set', step1_pick_rep_set_cmd)])
# Call the command handler on the list of commands
command_handler(commands,
status_update_callback,
logger=logger,
close_logger_on_success=False)
commands = []
# Subsample the failures fasta file to retain (roughly) percent_subsample
# of the sequences
step2_input_fasta_fp = \
'%s/subsampled_failures.fasta' % step1_dir
subsample_fasta(step1_failures_fasta_fp,
step2_input_fasta_fp,
percent_subsample)
logger.write('# Subsample the failures fasta file using API \n' +
'python -c "import qiime; qiime.util.subsample_fasta' +
'(\'%s\', \'%s\', %f)"\n\n' % (abspath(step1_failures_fasta_fp),
abspath(step2_input_fasta_fp),
percent_subsample))
# count number of sequences in subsampled failures fasta file
with open(abspath(step2_input_fasta_fp), 'U') as step2_input_fasta_f:
num_subsampled_seqs, mean, std = count_seqs_from_file(step2_input_fasta_f)
# name the final otu map
merged_otu_map_fp = '%s/final_otu_map.txt' % output_dir
# if the number of subsampled failure sequences is greater than the
# threshold, continue to steps 2, 3 and 4
run_step_2_and_3 = num_subsampled_seqs > minimum_failure_threshold
if run_step_2_and_3:
# Prep the OTU picking command for the subsampled failures
step2_dir = '%s/step2_otus/' % output_dir
step2_cmd = pick_denovo_otus(step2_input_fasta_fp,
step2_dir,
new_ref_set_id,
denovo_otu_picking_method,
params,
logger)
step2_otu_map_fp = '%s/subsampled_failures_otus.txt' % step2_dir
commands.append([('Pick de novo OTUs for new clusters', step2_cmd)])
# Prep the rep set picking command for the subsampled failures
step2_repset_fasta_fp = '%s/step2_rep_set.fna' % step2_dir
step2_rep_set_cmd = 'pick_rep_set.py -i %s -o %s -f %s' %\
(step2_otu_map_fp, step2_repset_fasta_fp, step2_input_fasta_fp)
commands.append(
[('Pick representative set for subsampled failures', step2_rep_set_cmd)])
step3_dir = '%s/step3_otus/' % output_dir
step3_otu_map_fp = '%s/failures_otus.txt' % step3_dir
step3_failures_list_fp = '%s/failures_failures.txt' % step3_dir
# remove the indexed reference database from the dictionary of
# parameters as it must be forced to build a new database
# using the step2_repset_fasta_fp
if reference_otu_picking_method == 'sortmerna':
if 'sortmerna_db' in params['pick_otus']:
del params['pick_otus']['sortmerna_db']
step3_cmd = pick_reference_otus(
step1_failures_fasta_fp,
step3_dir,
reference_otu_picking_method,
step2_repset_fasta_fp,
parallel,
params,
logger)
commands.append([
('Pick reference OTUs using de novo rep set', step3_cmd)])
index_links.append(
('Final map of OTU identifier to sequence identifiers (i.e., "OTU map")',
merged_otu_map_fp,
_index_headers['otu_maps']))
if not suppress_step4:
step4_dir = '%s/step4_otus/' % output_dir
if run_step_2_and_3:
step3_failures_fasta_fp = '%s/failures_failures.fasta' % step3_dir
step3_filter_fasta_cmd = 'filter_fasta.py -f %s -s %s -o %s' %\
(step1_failures_fasta_fp,
step3_failures_list_fp, step3_failures_fasta_fp)
commands.append([('Create fasta file of step3 failures',
step3_filter_fasta_cmd)])
failures_fp = step3_failures_fasta_fp
failures_otus_fp = 'failures_failures_otus.txt'
failures_step = 'step3'
else:
failures_fp = step1_failures_fasta_fp
failures_otus_fp = 'failures_otus.txt'
failures_step = 'step1'
step3_otu_map_fp = ""
step4_cmd = pick_denovo_otus(failures_fp,
step4_dir,
'.'.join([new_ref_set_id, 'CleanUp']),
denovo_otu_picking_method,
params,
logger)
step4_otu_map_fp = '%s/%s' % (step4_dir, failures_otus_fp)
commands.append([('Pick de novo OTUs on %s failures' % failures_step, step4_cmd)])
# Merge the otu maps, note that we are explicitly using the '>' operator
# otherwise passing the --force flag on the script interface would
# append the newly created maps to the map that was previously created
cat_otu_tables_cmd = 'cat %s %s %s > %s' %\
(step1_otu_map_fp, step3_otu_map_fp,
step4_otu_map_fp, merged_otu_map_fp)
commands.append([('Merge OTU maps', cat_otu_tables_cmd)])
step4_repset_fasta_fp = '%s/step4_rep_set.fna' % step4_dir
step4_rep_set_cmd = 'pick_rep_set.py -i %s -o %s -f %s' %\
(step4_otu_map_fp, step4_repset_fasta_fp, failures_fp)
commands.append(
[('Pick representative set for subsampled failures', step4_rep_set_cmd)])
else:
# Merge the otu maps, note that we are explicitly using the '>' operator
# otherwise passing the --force flag on the script interface would
# append the newly created maps to the map that was previously created
if run_step_2_and_3:
failures_fp = step3_failures_list_fp
else:
failures_fp = step1_failures_list_fp
step3_otu_map_fp = ""
cat_otu_tables_cmd = 'cat %s %s > %s' %\
(step1_otu_map_fp, step3_otu_map_fp, merged_otu_map_fp)
commands.append([('Merge OTU maps', cat_otu_tables_cmd)])
# Move the step 3 failures file to the top-level directory
commands.append([('Move final failures file to top-level directory',
'mv %s %s/final_failures.txt' % (failures_fp, output_dir))])
command_handler(commands,
status_update_callback,
logger=logger,
close_logger_on_success=False)
commands = []
otu_fp = merged_otu_map_fp
# Filter singletons from the otu map
otu_no_singletons_fp = '%s/final_otu_map_mc%d.txt' % (output_dir,
min_otu_size)
otus_to_keep = filter_otus_from_otu_map(
otu_fp,
otu_no_singletons_fp,
min_otu_size)
index_links.append(('Final map of OTU identifier to sequence identifiers excluding '
'OTUs with fewer than %d sequences' % min_otu_size,
otu_no_singletons_fp,
_index_headers['otu_maps']))
logger.write('# Filter singletons from the otu map using API \n' +
'python -c "import qiime; qiime.filter.filter_otus_from_otu_map' +
'(\'%s\', \'%s\', \'%d\')"\n\n' % (abspath(otu_fp),
abspath(
otu_no_singletons_fp),
min_otu_size))
# make the final representative seqs file and a new refseqs file that
# could be used in subsequent otu picking runs.
# this is clunky. first, we need to do this without singletons to match
# the otu map without singletons. second, the reference set and the rep
# set have different requirements: the new reference set must be a
# superset of the input reference set, while the rep set should contain
# only the sequences that were observed in this data set. we also want
# the reps for the step1 reference otus to be reads from this run, so we
# don't hit issues building a tree from sequences of very different
# lengths. so...
final_repset_fp = '%s/rep_set.fna' % output_dir
index_links.append(
('OTU representative sequences',
final_repset_fp,
_index_headers['sequences']))
final_repset_f = open(final_repset_fp, 'w')
new_refseqs_fp = '%s/new_refseqs.fna' % output_dir
index_links.append(
('New reference sequences (i.e., OTU representative sequences plus input '
'reference sequences)',
new_refseqs_fp,
_index_headers['sequences']))
# write non-singleton otus representative sequences from step1 to the
# final rep set file
for otu_id, seq in parse_fasta(open(step1_repset_fasta_fp, 'U')):
if otu_id.split()[0] in otus_to_keep:
final_repset_f.write('>%s\n%s\n' % (otu_id, seq))
logger.write('# Write non-singleton otus representative sequences ' +
'from step1 to the final rep set file: %s\n\n' % final_repset_fp)
# copy the full input refseqs file to the new refseqs_fp
copy(refseqs_fp, new_refseqs_fp)
new_refseqs_f = open(new_refseqs_fp, 'a')
new_refseqs_f.write('\n')
logger.write('# Copy the full input refseqs file to the new refseq file\n' +
'cp %s %s\n\n' % (refseqs_fp, new_refseqs_fp))
# iterate over all representative sequences from step2 and step4 and write
# those corresponding to non-singleton otus to the final representative set
# file and the new reference sequences file.
if run_step_2_and_3:
for otu_id, seq in parse_fasta(open(step2_repset_fasta_fp, 'U')):
if otu_id.split()[0] in otus_to_keep:
new_refseqs_f.write('>%s\n%s\n' % (otu_id, seq))
final_repset_f.write('>%s\n%s\n' % (otu_id, seq))
if not suppress_step4:
for otu_id, seq in parse_fasta(open(step4_repset_fasta_fp, 'U')):
if otu_id.split()[0] in otus_to_keep:
new_refseqs_f.write('>%s\n%s\n' % (otu_id, seq))
final_repset_f.write('>%s\n%s\n' % (otu_id, seq))
new_refseqs_f.close()
final_repset_f.close()
# steps 1-4 executed
if run_step_2_and_3:
logger.write('# Write non-singleton otus representative sequences from ' +
'step 2 and step 4 to the final representative set and the new reference' +
' set (%s and %s respectively)\n\n' % (final_repset_fp, new_refseqs_fp))
# only steps 1 and 4 executed
else:
logger.write('# Write non-singleton otus representative sequences from ' +
'step 4 to the final representative set and the new reference' +
' set (%s and %s respectively)\n\n' % (final_repset_fp, new_refseqs_fp))
# Prep the make_otu_table.py command
otu_table_fp = '%s/otu_table_mc%d.biom' % (output_dir, min_otu_size)
make_otu_table_cmd = 'make_otu_table.py -i %s -o %s' %\
(otu_no_singletons_fp, otu_table_fp)
commands.append([("Make the otu table", make_otu_table_cmd)])
index_links.append(
('OTU table excluding OTUs with fewer than %d sequences' % min_otu_size,
otu_table_fp,
_index_headers['otu_tables']))
command_handler(commands,
status_update_callback,
logger=logger,
close_logger_on_success=False)
commands = []
# initialize output file names - these differ based on what combination of
# taxonomy assignment and alignment/tree building is happening.
if run_assign_tax and run_align_and_tree:
tax_input_otu_table_fp = otu_table_fp
otu_table_w_tax_fp = \
'%s/otu_table_mc%d_w_tax.biom' % (output_dir, min_otu_size)
align_and_tree_input_otu_table = otu_table_w_tax_fp
index_links.append(
('OTU table excluding OTUs with fewer than %d sequences and including OTU '
'taxonomy assignments' % min_otu_size,
otu_table_w_tax_fp,
_index_headers['otu_tables']))
pynast_failure_filtered_otu_table_fp = \
'%s/otu_table_mc%d_w_tax_no_pynast_failures.biom' % (output_dir, min_otu_size)
index_links.append(
('OTU table excluding OTUs with fewer than %d sequences and sequences that '
'fail to align with PyNAST and including OTU taxonomy assignments' % min_otu_size,
pynast_failure_filtered_otu_table_fp,
_index_headers['otu_tables']))
elif run_assign_tax:
tax_input_otu_table_fp = otu_table_fp
otu_table_w_tax_fp = \
'%s/otu_table_mc%d_w_tax.biom' % (output_dir, min_otu_size)
index_links.append(
('OTU table excluding OTUs with fewer than %d sequences and including OTU '
'taxonomy assignments' % min_otu_size,
otu_table_w_tax_fp,
_index_headers['otu_tables']))
elif run_align_and_tree:
align_and_tree_input_otu_table = otu_table_fp
pynast_failure_filtered_otu_table_fp = \
'%s/otu_table_mc%d_no_pynast_failures.biom' % (output_dir,
min_otu_size)
index_links.append(
('OTU table excluding OTUs with fewer than %d sequences and sequences that '
'fail to align with PyNAST' % min_otu_size,
pynast_failure_filtered_otu_table_fp,
_index_headers['otu_tables']))
if run_assign_tax:
if exists(otu_table_w_tax_fp) and getsize(otu_table_w_tax_fp) > 0:
logger.write(
"Final output file exists (%s). Will not rebuild." %
otu_table_w_tax_fp)
else:
# remove files from partially completed runs
remove_files([otu_table_w_tax_fp], error_on_missing=False)
taxonomy_fp = assign_tax(
repset_fasta_fp=final_repset_fp,
output_dir=output_dir,
command_handler=command_handler,
params=params,
qiime_config=qiime_config,
parallel=parallel,
logger=logger,
status_update_callback=status_update_callback)
index_links.append(
('OTU taxonomic assignments',
taxonomy_fp,
_index_headers['taxa_assignments']))
# Add taxa to otu table
add_metadata_cmd = 'biom add-metadata -i %s --observation-metadata-fp %s -o %s --sc-separated taxonomy --observation-header OTUID,taxonomy' %\
(tax_input_otu_table_fp, taxonomy_fp, otu_table_w_tax_fp)
commands.append([("Add taxa to OTU table", add_metadata_cmd)])
command_handler(commands,
status_update_callback,
logger=logger,
close_logger_on_success=False)
commands = []
if run_align_and_tree:
rep_set_tree_fp = join(output_dir, 'rep_set.tre')
index_links.append(
('OTU phylogenetic tree',
rep_set_tree_fp,
_index_headers['trees']))
if exists(pynast_failure_filtered_otu_table_fp) and\
getsize(pynast_failure_filtered_otu_table_fp) > 0:
logger.write("Final output file exists (%s). Will not rebuild." %
pynast_failure_filtered_otu_table_fp)
else:
# remove files from partially completed runs
remove_files([pynast_failure_filtered_otu_table_fp],
error_on_missing=False)
pynast_failures_fp = align_and_tree(
repset_fasta_fp=final_repset_fp,
output_dir=output_dir,
command_handler=command_handler,
params=params,
qiime_config=qiime_config,
parallel=parallel,
logger=logger,
status_update_callback=status_update_callback)
# Build OTU table without PyNAST failures
with biom_open(align_and_tree_input_otu_table) as biom_file:
table = Table.from_hdf5(biom_file)
filtered_otu_table = filter_otus_from_otu_table(table,
get_seq_ids_from_fasta_file(open(pynast_failures_fp, 'U')),
0, inf, 0, inf, negate_ids_to_keep=True)
write_biom_table(filtered_otu_table,
pynast_failure_filtered_otu_table_fp)
command_handler(commands,
status_update_callback,
logger=logger,
close_logger_on_success=False)
commands = []
if close_logger_on_success:
logger.close()
if not suppress_index_page:
index_fp = '%s/index.html' % output_dir
generate_index_page(index_links, index_fp)
|
wasade/qiime
|
qiime/workflow/pick_open_reference_otus.py
|
Python
|
gpl-2.0
| 47,980
|
[
"BLAST"
] |
1a0db617ff31eca6ec1bed23bb6c9e13585bb7ed774c78654a3d6a70625979a6
|
# Copyright (C) 2010-2018 The ESPResSo project
#
# This file is part of ESPResSo.
#
# ESPResSo is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ESPResSo is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
Visualization sample of a simple plate capacitor with applied potential
difference and charged particles.
"""
import numpy as np
import math
from threading import Thread
import espressomd
import espressomd.shapes
from espressomd import electrostatics
from espressomd import visualization
required_features = ["ELECTROSTATICS", "WCA"]
espressomd.assert_features(required_features)
box_l = 20
system = espressomd.System(box_l=[box_l] * 3)
system.set_random_state_PRNG()
np.random.seed(seed=system.seed)
visualizer = visualization.openGLLive(
system,
constraint_type_colors=[[1, 1, 1]],
camera_position=[50, 15, 15],
camera_right=[0, 0, -1])
system.time_step = 0.02
system.cell_system.skin = 0.4
system.cell_system.set_layered(n_layers=5, use_verlet_lists=False)
system.periodicity = [1, 1, 0]
qion = 1
for i in range(300):
rpos = np.random.random(3) * box_l
system.part.add(pos=rpos, type=0, q=qion)
qion *= -1
system.constraints.add(shape=espressomd.shapes.Wall(
dist=0, normal=[0, 0, 1]), particle_type=1)
system.constraints.add(shape=espressomd.shapes.Wall(
dist=-box_l, normal=[0, 0, -1]), particle_type=1)
WCA_cut = 2.**(1. / 6.)
system.non_bonded_inter[0, 1].wca.set_params(
epsilon=1.0, sigma=1.0)
system.non_bonded_inter[0, 0].wca.set_params(
epsilon=1.0, sigma=1.0)
energy = system.analysis.energy()
print("Before Minimization: E_total=", energy['total'])
system.minimize_energy.init(
f_max=10, gamma=50.0, max_steps=1000, max_displacement=0.2)
system.minimize_energy.minimize()
energy = system.analysis.energy()
print("After Minimization: E_total=", energy['total'])
system.thermostat.set_langevin(kT=0.1, gamma=1.0, seed=42)
mmm2d = electrostatics.MMM2D(
prefactor=10.0, maxPWerror=1e-3, const_pot=True, pot_diff=50.0)
system.actors.add(mmm2d)
visualizer.run(1)
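# Note: Thread is imported above but unused here because visualizer.run(1)
# drives the integration loop itself (one integration step per frame). A
# sketch of the alternative threaded pattern, assuming the standard
# openGLLive API (system.integrator.run / visualizer.update / visualizer.start):
#
#     def main_loop():
#         while True:
#             system.integrator.run(1)
#             visualizer.update()
#
#     t = Thread(target=main_loop)
#     t.daemon = True
#     t.start()
#     visualizer.start()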
|
mkuron/espresso
|
samples/visualization_mmm2d.py
|
Python
|
gpl-3.0
| 2,549
|
[
"ESPResSo"
] |
9b283c82fa320329781c7a9eb108610745fbe301088752f8d7aeec8f4bcad9fc
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Mesh Preview Generator.
Examples
--------
$ ./script/gen_mesh_prev.py meshes/2d/
"""
from __future__ import print_function
from __future__ import absolute_import
from argparse import ArgumentParser, RawDescriptionHelpFormatter
import sys
sys.path.append('.')
import os
import vtk
from sfepy.discrete.fem import Mesh
def gen_shot(vtk_filename, png_filename):
"""
Generate PNG image of the FE mesh.
Parameters
----------
vtk_filename : str
The input mesh filename (file in VTK format).
png_filename : str
The name of the output PNG file.
"""
reader = vtk.vtkUnstructuredGridReader()
reader.SetFileName(vtk_filename)
reader.Update()
bnd = reader.GetOutput().GetPoints().GetBounds()
surface0 = vtk.vtkDataSetSurfaceFilter()
surface0.SetInput(reader.GetOutput())
surface0.Update()
if abs(bnd[5] - bnd[4]) > 1.0e-12:
tr = vtk.vtkTransform()
tr.RotateWXYZ(45,1,1,1)
trFilter = vtk.vtkTransformPolyDataFilter()
trFilter.SetTransform(tr)
trFilter.SetInputConnection(surface0.GetOutputPort())
trFilter.Update()
surface = trFilter
else:
surface = surface0
ca,cb = surface.GetOutput().GetCellData().GetScalars().GetRange()
lut = vtk.vtkLookupTable()
lut.SetHueRange(0.667, 0.667)
lut.SetSaturationRange(0.0, 1.0)
lut.SetValueRange(0.8, 1.0)
lut.SetAlphaRange(1.0, 1.0)
lut.SetTableRange(ca,cb)
gf = vtk.vtkGraphicsFactory()
gf.SetOffScreenOnlyMode(1)
gf.SetUseMesaClasses(1)
ifa = vtk.vtkImagingFactory()
ifa.SetUseMesaClasses(1)
mapper = vtk.vtkPolyDataMapper()
mapper.SetLookupTable(lut)
mapper.SetScalarRange(ca, cb)
mapper.SetInput(surface.GetOutput())
mapper.SetScalarModeToUseCellData()
actor = vtk.vtkActor()
actor.SetMapper(mapper)
mapper2 = vtk.vtkPolyDataMapper()
mapper2.SetInput(surface.GetOutput())
actor2 = vtk.vtkActor()
actor2.SetMapper(mapper2)
actor2.GetProperty().SetRepresentationToWireframe()
ren = vtk.vtkRenderer()
renWin = vtk.vtkRenderWindow()
renWin.SetOffScreenRendering(1)
renWin.AddRenderer(ren)
ren.AddActor(actor)
ren.AddActor(actor2)
renWin.Render()
image = vtk.vtkWindowToImageFilter()
image.SetInput(renWin)
image.Update()
base, _ = os.path.splitext(vtk_filename)
writer = vtk.vtkPNGWriter()
writer.SetFileName(png_filename)
writer.SetInput(image.GetOutput())
writer.Write()
def main():
parser = ArgumentParser(description=__doc__,
formatter_class=RawDescriptionHelpFormatter)
parser.add_argument('mesh_dir')
options = parser.parse_args()
mesh_dir = options.mesh_dir
mesh_files = []
for (dirpath, dirnames, filenames) in os.walk(mesh_dir):
for ii in filenames:
_, ext = os.path.splitext(ii)
if ext.lower() in ['.mesh', '.vtk']:
mesh_files.append(dirpath + os.path.sep + ii)
for ii in mesh_files:
base, ext = os.path.splitext(ii)
fname_out = base + '.png'
if ext == '.mesh':
fname_in = 'aux.vtk'
mesh = Mesh.from_file(ii)
mesh.write(fname_in, io='auto')
else:
fname_in = ii
print(('writing %s...' % fname_out))
gen_shot(fname_in, fname_out)
if __name__ == "__main__":
main()
|
vlukes/sfepy
|
script/gen_mesh_prev.py
|
Python
|
bsd-3-clause
| 3,477
|
[
"VTK"
] |
47840c549655b692d05d911872c0fa88d173ebb20ad6fdcb3b5ee9f2b9fc7ac7
|
# -*- coding: utf-8 -*-
# Copyright 2007-2022 The HyperSpy developers
#
# This file is part of HyperSpy.
#
# HyperSpy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# HyperSpy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with HyperSpy. If not, see <http://www.gnu.org/licenses/>.
import os
import logging
import numpy as np
_logger = logging.getLogger(__name__)
no_netcdf = False
try:
from netCDF4 import Dataset
which_netcdf = 'netCDF4'
except BaseException:
try:
from netCDF3 import Dataset
which_netcdf = 'netCDF3'
except BaseException:
try:
from Scientific.IO.NetCDF import NetCDFFile as Dataset
which_netcdf = 'Scientific Python'
except BaseException:
no_netcdf = True
# Plugin characteristics
# ----------------------
format_name = 'netCDF'
description = ''
full_support = True
file_extensions = ('nc', 'NC')
default_extension = 0
# Writing capabilities
writes = False
non_uniform_axis = False
# ----------------------
attrib2netcdf = \
{
'energyorigin': 'energy_origin',
'energyscale': 'energy_scale',
'energyunits': 'energy_units',
'xorigin': 'x_origin',
'xscale': 'x_scale',
'xunits': 'x_units',
'yorigin': 'y_origin',
'yscale': 'y_scale',
'yunits': 'y_units',
'zorigin': 'z_origin',
'zscale': 'z_scale',
'zunits': 'z_units',
'exposure': 'exposure',
'title': 'title',
'binning': 'binning',
'readout_frequency': 'readout_frequency',
'ccd_height': 'ccd_height',
'blanking': 'blanking'
}
acquisition2netcdf = \
{
'exposure': 'exposure',
'binning': 'binning',
'readout_frequency': 'readout_frequency',
'ccd_height': 'ccd_height',
'blanking': 'blanking',
'gain': 'gain',
'pppc': 'pppc',
}
treatments2netcdf = \
{
'dark_current': 'dark_current',
'readout': 'readout',
}
def file_reader(filename, *args, **kwds):
if no_netcdf is True:
raise ImportError("No netCDF library installed. "
"To read EELSLab netcdf files install "
"one of the following packages:"
"netCDF4, netCDF3, netcdf, scientific")
ncfile = Dataset(filename, 'r')
if hasattr(ncfile, 'file_format_version'):
if ncfile.file_format_version == 'EELSLab 0.1':
dictionary = nc_hyperspy_reader_0dot1(
ncfile,
filename,
*args,
**kwds)
else:
ncfile.close()
raise IOError('Unsupported netCDF file')
return dictionary,
def nc_hyperspy_reader_0dot1(ncfile, filename, *args, **kwds):
calibration_dict, acquisition_dict, treatments_dict = {}, {}, {}
dc = ncfile.variables['data_cube']
data = dc[:]
# store the processing history if the file defines one
if hasattr(ncfile, 'history'):
calibration_dict['history'] = eval(ncfile.history)
for attrib in attrib2netcdf.items():
if hasattr(dc, attrib[1]):
value = getattr(dc, attrib[1])
if isinstance(value, np.ndarray):
calibration_dict[attrib[0]] = value[0]
else:
calibration_dict[attrib[0]] = value
else:
_logger.warning("Warning: the attribute '%s' is not defined in "
"the file '%s'", attrib[0], filename)
for attrib in acquisition2netcdf.items():
if hasattr(dc, attrib[1]):
value = getattr(dc, attrib[1])
if isinstance(value, np.ndarray):
acquisition_dict[attrib[0]] = value[0]
else:
acquisition_dict[attrib[0]] = value
else:
_logger.warning("Warning: the attribute '%s' is not defined in "
"the file '%s'", attrib[0], filename)
for attrib in treatments2netcdf.items():
if hasattr(dc, attrib[1]):
treatments_dict[attrib[0]] = getattr(dc, attrib[1])
else:
_logger.warning("Warning: the attribute '%s' is not defined in "
"the file '%s'", attrib[0], filename)
original_metadata = {'record_by': ncfile.type,
'calibration': calibration_dict,
'acquisition': acquisition_dict,
'treatments': treatments_dict}
ncfile.close()
# Now we'll map some parameters
record_by = 'image' if original_metadata[
'record_by'] == 'image' else 'spectrum'
if record_by == 'image':
dim = len(data.shape)
names = ['Z', 'Y', 'X'][3 - dim:]
scaleskeys = ['zscale', 'yscale', 'xscale']
originskeys = ['zorigin', 'yorigin', 'xorigin']
unitskeys = ['zunits', 'yunits', 'xunits']
elif record_by == 'spectrum':
dim = len(data.shape)
names = ['Y', 'X', 'Energy'][3 - dim:]
scaleskeys = ['yscale', 'xscale', 'energyscale']
originskeys = ['yorigin', 'xorigin', 'energyorigin']
unitskeys = ['yunits', 'xunits', 'energyunits']
# The images are recorded in the Fortran order
data = data.T.copy()
try:
scales = [calibration_dict[key] for key in scaleskeys[3 - dim:]]
except KeyError:
scales = [1, 1, 1][3 - dim:]
try:
origins = [calibration_dict[key] for key in originskeys[3 - dim:]]
except KeyError:
origins = [0, 0, 0][3 - dim:]
try:
units = [calibration_dict[key] for key in unitskeys[3 - dim:]]
except KeyError:
units = ['', '', '']
axes = [
{
'size': int(data.shape[i]),
'index_in_array': i,
'name': names[i],
'scale': scales[i],
'offset': origins[i],
'units': units[i], }
for i in range(dim)]
metadata = {'General': {}, 'Signal': {}}
metadata['General']['original_filename'] = os.path.split(filename)[1]
metadata["Signal"]['record_by'] = record_by
metadata["General"]['signal_type'] = ""
dictionary = {
'data': data,
'axes': axes,
'metadata': metadata,
'original_metadata': original_metadata,
}
return dictionary
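# Minimal usage sketch (hypothetical filename): the reader returns a
# one-element tuple of signal dictionaries in the form expected by
# HyperSpy's io layer.
#
#     signal_dicts = file_reader('spectrum_image.nc')
#     data = signal_dicts[0]['data']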
|
jat255/hyperspy
|
hyperspy/io_plugins/netcdf.py
|
Python
|
gpl-3.0
| 6,700
|
[
"NetCDF"
] |
063882373fba44701cf29d35cac0b74b4551a69f7616787fd52508105dcc9f53
|
#
# gPrime - A web-based genealogy program
#
# Copyright (C) 2000-2008 Donald N. Allingham
# Copyright (C) 2008 Brian G. Matherly
# Copyright (C) 2008 Gary Burton
# Copyright (C) 2008 Robert Cheramy <robert@cheramy.net>
# Copyright (C) 2010 Jakim Friant
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
"Export to GRAMPS package"
#-------------------------------------------------------------------------
#
# standard python modules
#
#-------------------------------------------------------------------------
import time
import shutil
import os
import tarfile
from io import StringIO, BytesIO
from gprime.const import LOCALE as glocale
_ = glocale.translation.gettext
#------------------------------------------------------------------------
#
# Set up logging
#
#------------------------------------------------------------------------
import logging
log = logging.getLogger(".WritePkg")
#-------------------------------------------------------------------------
#
# Gprime modules
#
#-------------------------------------------------------------------------
from gprime.plugins.export.exportxml import XmlWriter
from gprime.utils.file import media_path_full
from gprime.constfunc import win
#-------------------------------------------------------------------------
#
# writeData
#
#-------------------------------------------------------------------------
def writeData(database, filename, user, option_box=None):
# Back up the file, if it already exists, to <filename>.bak,
# as is done for the normal XML export.
if os.path.isfile(filename):
try:
shutil.copyfile(filename, filename + ".bak")
shutil.copystat(filename, filename + ".bak")
except:
pass
if option_box:
option_box.parse_options()
database = option_box.get_filtered_database(database)
writer = PackageWriter(database, filename, user)
return writer.export()
#-------------------------------------------------------------------------
#
# PackageWriter
#
#-------------------------------------------------------------------------
class PackageWriter:
def __init__(self, database, filename, user):
self.db = database
self.user = user
self.filename = filename
def export(self):
# missmedia_action = 0
#--------------------------------------------------------------
# def remove_clicked():
# # File is lost => remove all references and the object itself
# for p_id in self.db.iter_family_handles():
# p = self.db.get_family_from_handle(p_id)
# nl = p.get_media_list()
# for o in nl:
# if o.get_reference_handle() == m_id:
# nl.remove(o)
# p.set_media_list(nl)
# self.db.commit_family(p,None)
# for key in self.db.iter_person_handles():
# p = self.db.get_person_from_handle(key)
# nl = p.get_media_list()
# for o in nl:
# if o.get_reference_handle() == m_id:
# nl.remove(o)
# p.set_media_list(nl)
# self.db.commit_person(p,None)
# for key in self.db.get_source_handles():
# p = self.db.get_source_from_handle(key)
# nl = p.get_media_list()
# for o in nl:
# if o.get_reference_handle() == m_id:
# nl.remove(o)
# p.set_media_list(nl)
# self.db.commit_source(p,None)
# for key in self.db.get_place_handles():
# p = self.db.get_place_from_handle(key)
# nl = p.get_media_list()
# for o in nl:
# if o.get_reference_handle() == m_id:
# nl.remove(o)
# p.set_media_list(nl)
# self.db.commit_place(p,None)
# for key in self.db.get_event_handles():
# p = self.db.get_event_from_handle(key)
# nl = p.get_media_list()
# for o in nl:
# if o.get_reference_handle() == m_id:
# nl.remove(o)
# p.set_media_list(nl)
# self.db.commit_event(p,None)
# self.db.remove_media(m_id,None)
# def leave_clicked():
# # File is lost => do nothing, leave as is
# pass
# def select_clicked():
# # File is lost => select a file to replace the lost one
# def fs_close_window(obj):
# pass
# def fs_ok_clicked(obj):
# name = fs_top.get_filename()
# if os.path.isfile(name):
# archive.add(name)
# fs_top = gtk.FileChooserDialog("%s - GRAMPS" % _("Select file"),
# buttons=(gtk.STOCK_CANCEL, gtk.RESPONSE_CANCEL,
# gtk.STOCK_OK, Gtk.ResponseType.OK)
# )
# response = fs_top.run()
# if response == Gtk.ResponseType.OK:
# fs_ok_clicked(fs_top)
# elif response == gtk.RESPONSE_CANCEL:
# fs_close_window(fs_top)
# fs_top.destroy()
#---------------------------------------------------------------
try:
archive = tarfile.open(self.filename,'w:gz')
except EnvironmentError as msg:
log.warning(str(msg))
self.user.notify_error(_('Failure writing %s') % self.filename, str(msg))
return 0
# Write media files first, since the database may be modified
# during the process (i.e. when removing object)
for m_id in self.db.get_media_handles(sort_handles=True):
mobject = self.db.get_media_from_handle(m_id)
filename = media_path_full(self.db, mobject.get_path())
archname = str(mobject.get_path())
if os.path.isfile(filename) and os.access(filename, os.R_OK):
archive.add(filename, archname)
# Write XML now
g = BytesIO()
gfile = XmlWriter(self.db, self.user, 2)
gfile.write_handle(g)
tarinfo = tarfile.TarInfo('data.gramps')
tarinfo.size = len(g.getvalue())
tarinfo.mtime = time.time()
if not win():
tarinfo.uid = os.getuid()
tarinfo.gid = os.getgid()
g.seek(0)
archive.addfile(tarinfo, g)
archive.close()
g.close()
return True
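# Illustrative call (hypothetical database and user objects): export a
# gzipped tar package containing the media files plus a 'data.gramps'
# XML payload.
#
#     ok = writeData(database, '/tmp/family.gpkg', user)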
|
sam-m888/gprime
|
gprime/plugins/export/exportpkg.py
|
Python
|
gpl-2.0
| 7,280
|
[
"Brian"
] |
cbef2eda0d1e45a1872926f7ddb0b51031b93687fc8e8e632f471cb3dcaa062c
|
"""A setuptools based setup module.
See:
https://packaging.python.org/en/latest/distributing.html
https://github.com/pypa/sampleproject
"""
# Always prefer setuptools over distutils
from setuptools import setup, find_packages
# To use a consistent encoding
from codecs import open
from os import path
here = path.abspath(path.dirname(__file__))
from company_registration import __name__, __version__, __author__
base_url = 'https://github.com/pivotal-energy-solutions/django-company-registration'
name = 'django-' + __name__
# Get the long description from the README file
with open(path.join(here, 'README.md'), encoding='utf-8') as f:
long_description = f.read()
# Arguments marked as "Required" below must be included for upload to PyPI.
# Fields marked as "Optional" may be commented out.
setup(
# This is the name of your project. The first time you publish this
# package, this name will be registered for you. It will determine how
# users can install this project, e.g.:
#
# $ pip install sampleproject
#
# And where it will live on PyPI: https://pypi.org/project/sampleproject/
#
# There are some restrictions on what makes a valid project name
# specification here:
# https://packaging.python.org/specifications/core-metadata/#name
name=name, # Required
# Versions should comply with PEP 440:
# https://www.python.org/dev/peps/pep-0440/
#
# For a discussion on single-sourcing the version across setup.py and the
# project code, see
# https://packaging.python.org/en/latest/single_source_version.html
version=__version__, # Required
# This is a one-line description or tagline of what your project does. This
# corresponds to the "Summary" metadata field:
# https://packaging.python.org/specifications/core-metadata/#summary
description='This package is used in conjunction with django-registration to allow a '
'company to register new users rather than a self-subscribe model.', # Required
# This is an optional longer description of your project that represents
# the body of text which users will see when they visit PyPI.
#
# Often, this is the same as your README, so you can just read it in from
# that file directly (as we have already done above)
#
# This field corresponds to the "Description" metadata field:
# https://packaging.python.org/specifications/core-metadata/#description-optional
long_description=long_description, # Optional
# Denotes that our long_description is in Markdown; valid values are
# text/plain, text/x-rst, and text/markdown
#
# Optional if long_description is written in reStructuredText (rst) but
# required for plain-text or Markdown; if unspecified, "applications should
# attempt to render [the long_description] as text/x-rst; charset=UTF-8 and
# fall back to text/plain if it is not valid rst" (see link below)
#
# This field corresponds to the "Description-Content-Type" metadata field:
# https://packaging.python.org/specifications/core-metadata/#description-content-type-optional
long_description_content_type='text/markdown', # Optional (see note above)
# This should be a valid link to your project's main homepage.
#
# This field corresponds to the "Home-Page" metadata field:
# https://packaging.python.org/specifications/core-metadata/#home-page-optional
url=base_url, # Optional
download_url='{0}/archive/{1}.tar.gz'.format(base_url, __version__),
# This should be your name or the name of the organization which owns the
# project.
author=__author__, # Optional
# This should be a valid email address corresponding to the author listed
# above.
author_email='sklass@pivotalenergysolutions.com', # Optional
# Classifiers help users find your project by categorizing it.
#
# For a list of valid classifiers, see
# https://pypi.python.org/pypi?%3Aaction=list_classifiers
classifiers=[ # Optional
# How mature is this project? Common values are
# 3 - Alpha
# 4 - Beta
# 5 - Production/Stable
'Development Status :: 5 - Production/Stable',
# Indicate who your project is intended for
'Environment :: Web Environment',
'Framework :: Django',
# Pick your license as you wish
'License :: OSI Approved :: Apache Software License',
# Specify the Python versions you support here. In particular, ensure
# that you indicate whether you support Python 2, Python 3 or both.
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Topic :: Internet :: WWW/HTTP',
'Topic :: Internet :: WWW/HTTP :: Dynamic Content',
'Topic :: Software Development :: Libraries :: Python Modules',
],
# This field adds keywords for your project which will appear on the
# project page. What does your project relate to?
#
# Note that this is a string of words separated by whitespace, not a list.
keywords='django registration company', # Optional
# You can just specify package directories manually here if your project is
# simple. Or you can use find_packages().
#
# Alternatively, if you just want to distribute a single Python file, use
# the `py_modules` argument instead as follows, which will expect a file
# called `my_module.py` to exist:
#
# py_modules=["my_module"],
#
packages=find_packages(exclude=['contrib', 'docs', 'tests']), # Required
# This field lists other packages that your project depends on to run.
# Any package you put here will be installed by pip when your project is
# installed, so they must be valid existing projects.
#
# For an analysis of "install_requires" vs pip's requirements files see:
# https://packaging.python.org/en/latest/requirements.html
install_requires=['django>=1.11,<2'], # Optional
# List additional groups of dependencies here (e.g. development
# dependencies). Users will be able to install these using the "extras"
# syntax, for example:
#
# $ pip install sampleproject[dev]
#
# Similar to `install_requires` above, these must be valid existing
# projects.
# extras_require={ # Optional
# 'dev': ['check-manifest'],
# 'test': ['coverage'],
# },
# If there are data files included in your packages that need to be
# installed, specify them here.
#
# If using Python 2.6 or earlier, then these have to be included in
# MANIFEST.in as well.
package_data={
'company_registration': ['static/js/*.js', 'templates/registration/*.html', 'templates/registration/*.txt']},
# Although 'package_data' is the preferred approach, in some case you may
# need to place data files outside of your packages. See:
# http://docs.python.org/3.4/distutils/setupscript.html#installing-additional-files
#
# In this case, 'data_file' will be installed into '<sys.prefix>/my_data'
# data_files=[('my_data', ['data/data_file'])], # Optional
# To provide executable scripts, use entry points in preference to the
# "scripts" keyword. Entry points provide cross-platform support and allow
# `pip` to create the appropriate form of executable for the target
# platform.
#
# For example, the following would provide a command called `sample` which
# executes the function `main` from this package when invoked:
# entry_points={ # Optional
# 'console_scripts': [
# 'sample=sample:main',
# ],
# },
# List additional URLs that are relevant to your project as a dict.
#
# This field corresponds to the "Project-URL" metadata fields:
# https://packaging.python.org/specifications/core-metadata/#project-url-multiple-use
#
# Examples listed include a pattern for specifying where the package tracks
# issues, where the source is hosted, where to say thanks to the package
# maintainers, and where to support the project financially. The key is
# what's used to render the link text on PyPI.
project_urls={ # Optional
'Bug Reports': '{}/issues'.format(base_url),
'Say Thanks!': 'http://saythanks.io/to/rh0dium',
'Source': base_url,
},
)
|
pivotal-energy-solutions/django-company-registration
|
setup.py
|
Python
|
apache-2.0
| 8,396
|
[
"VisIt"
] |
77e9a8dae680e43a7305fcbcbdcabba1c12ec5b5eeb4ea9f71bcd1405fad885e
|
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for initializers."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import math
import tensorflow as tf
distributions = tf.contrib.distributions
class GaussianTest(tf.test.TestCase):
def testGaussianConjugateKnownSigmaPosterior(self):
with tf.Session():
mu0 = tf.constant([3.0])
sigma0 = tf.constant([math.sqrt(10.0)])
sigma = tf.constant([math.sqrt(2.0)])
x = tf.constant([-2.5, 2.5, 4.0, 0.0, -1.0, 2.0])
s = tf.reduce_sum(x)
n = tf.size(x)
prior = distributions.Gaussian(mu=mu0, sigma=sigma0)
posterior = distributions.gaussian_conjugates_known_sigma_posterior(
prior=prior, sigma=sigma, s=s, n=n)
# Smoke test
self.assertTrue(isinstance(posterior, distributions.Gaussian))
posterior_log_pdf = posterior.log_pdf(x).eval()
self.assertEqual(posterior_log_pdf.shape, (6,))
def testGaussianConjugateKnownSigmaPosteriorND(self):
with tf.Session():
batch_size = 6
mu0 = tf.constant([[3.0, -3.0]] * batch_size)
sigma0 = tf.constant([[math.sqrt(10.0), math.sqrt(15.0)]] * batch_size)
sigma = tf.constant([[math.sqrt(2.0)]] * batch_size)
x = tf.transpose(
tf.constant([[-2.5, 2.5, 4.0, 0.0, -1.0, 2.0]], dtype=tf.float32))
s = tf.reduce_sum(x)
n = tf.size(x)
prior = distributions.Gaussian(mu=mu0, sigma=sigma0)
posterior = distributions.gaussian_conjugates_known_sigma_posterior(
prior=prior, sigma=sigma, s=s, n=n)
# Smoke test
self.assertTrue(isinstance(posterior, distributions.Gaussian))
posterior_log_pdf = posterior.log_pdf(x).eval()
self.assertEqual(posterior_log_pdf.shape, (6, 2))
def testGaussianConjugateKnownSigmaNDPosteriorND(self):
with tf.Session():
batch_size = 6
mu0 = tf.constant([[3.0, -3.0]] * batch_size)
sigma0 = tf.constant([[math.sqrt(10.0), math.sqrt(15.0)]] * batch_size)
sigma = tf.constant([[math.sqrt(2.0), math.sqrt(4.0)]] * batch_size)
x = tf.constant([
[-2.5, 2.5, 4.0, 0.0, -1.0, 2.0],
[2.5, -2.5, -4.0, 0.0, 1.0, -2.0]], dtype=tf.float32)
s = tf.reduce_sum(x, reduction_indices=[1])
x = tf.transpose(x) # Reshape to shape (6, 2)
n = tf.constant([6] * 2)
prior = distributions.Gaussian(mu=mu0, sigma=sigma0)
posterior = distributions.gaussian_conjugates_known_sigma_posterior(
prior=prior, sigma=sigma, s=s, n=n)
# Smoke test
self.assertTrue(isinstance(posterior, distributions.Gaussian))
# Calculate log_pdf under the 2 models
posterior_log_pdf = posterior.log_pdf(x)
self.assertEqual(posterior_log_pdf.get_shape(), (6, 2))
self.assertEqual(posterior_log_pdf.eval().shape, (6, 2))
def testGaussianConjugateKnownSigmaPredictive(self):
with tf.Session():
batch_size = 6
mu0 = tf.constant([3.0] * batch_size)
sigma0 = tf.constant([math.sqrt(10.0)] * batch_size)
sigma = tf.constant([math.sqrt(2.0)] * batch_size)
x = tf.constant([-2.5, 2.5, 4.0, 0.0, -1.0, 2.0])
s = tf.reduce_sum(x)
n = tf.size(x)
prior = distributions.Gaussian(mu=mu0, sigma=sigma0)
predictive = distributions.gaussian_congugates_known_sigma_predictive(
prior=prior, sigma=sigma, s=s, n=n)
# Smoke test
self.assertTrue(isinstance(predictive, distributions.Gaussian))
predictive_log_pdf = predictive.log_pdf(x).eval()
self.assertEqual(predictive_log_pdf.shape, (6,))
if __name__ == '__main__':
tf.test.main()
|
shakamunyi/tensorflow
|
tensorflow/contrib/distributions/python/kernel_tests/gaussian_conjugate_posteriors_test.py
|
Python
|
apache-2.0
| 4,289
|
[
"Gaussian"
] |
54caf013d3618fd39089dfa190dff950fd7e5fa3b9c9cd597e6c7023db64e1da
|
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright (c) 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Tests for compute resource tracking."""
import re
import uuid
from oslo.config import cfg
from nova.compute import flavors
from nova.compute import resource_tracker
from nova.compute import task_states
from nova.compute import vm_states
from nova import context
from nova import db
from nova.objects import base as obj_base
from nova.objects import migration as migration_obj
from nova.openstack.common import jsonutils
from nova.openstack.common import timeutils
from nova import test
from nova.tests.objects import test_migration
from nova.virt import driver
FAKE_VIRT_MEMORY_MB = 5
FAKE_VIRT_MEMORY_OVERHEAD = 1
FAKE_VIRT_LOCAL_GB = 6
FAKE_VIRT_VCPUS = 1
CONF = cfg.CONF
class UnsupportedVirtDriver(driver.ComputeDriver):
"""Pretend version of a lame virt driver."""
def __init__(self):
super(UnsupportedVirtDriver, self).__init__(None)
def get_host_ip_addr(self):
return '127.0.0.1'
def get_available_resource(self, nodename):
# no support for getting resource usage info
return {}
class FakeVirtDriver(driver.ComputeDriver):
def __init__(self, pci_support=False):
super(FakeVirtDriver, self).__init__(None)
self.memory_mb = FAKE_VIRT_MEMORY_MB
self.local_gb = FAKE_VIRT_LOCAL_GB
self.vcpus = FAKE_VIRT_VCPUS
self.memory_mb_used = 0
self.local_gb_used = 0
self.pci_support = pci_support
def get_host_ip_addr(self):
return '127.0.0.1'
def get_available_resource(self, nodename):
d = {
'vcpus': self.vcpus,
'memory_mb': self.memory_mb,
'local_gb': self.local_gb,
'vcpus_used': 0,
'memory_mb_used': self.memory_mb_used,
'local_gb_used': self.local_gb_used,
'hypervisor_type': 'fake',
'hypervisor_version': 0,
'hypervisor_hostname': 'fakehost',
'cpu_info': '',
}
if self.pci_support:
d['pci_passthrough_devices'] = jsonutils.dumps([{
'label': 'forza-napoli',
'dev_type': 'foo',
'compute_node_id': 1,
'address': '0000:00:00.1',
'product_id': 'p1',
'vendor_id': 'v1',
'status': 'available',
'extra_k1': 'v1'}])
return d
def estimate_instance_overhead(self, instance_info):
mem = instance_info['memory_mb'] # make sure memory value is present
overhead = {
'memory_mb': FAKE_VIRT_MEMORY_OVERHEAD
}
return overhead # just return a constant value for testing
class BaseTestCase(test.TestCase):
def setUp(self):
super(BaseTestCase, self).setUp()
self.flags(reserved_host_disk_mb=0,
reserved_host_memory_mb=0)
self.context = context.get_admin_context()
self.flags(use_local=True, group='conductor')
self.conductor = self.start_service('conductor',
manager=CONF.conductor.manager)
self._instances = {}
self._instance_types = {}
self.stubs.Set(self.conductor.db,
'instance_get_all_by_host_and_node',
self._fake_instance_get_all_by_host_and_node)
self.stubs.Set(self.conductor.db,
'instance_update_and_get_original',
self._fake_instance_update_and_get_original)
self.stubs.Set(self.conductor.db,
'flavor_get', self._fake_flavor_get)
self.host = 'fakehost'
def _create_compute_node(self, values=None):
compute = {
"id": 1,
"service_id": 1,
"vcpus": 1,
"memory_mb": 1,
"local_gb": 1,
"vcpus_used": 1,
"memory_mb_used": 1,
"local_gb_used": 1,
"free_ram_mb": 1,
"free_disk_gb": 1,
"current_workload": 1,
"running_vms": 0,
"cpu_info": None,
"stats": [{"key": "num_instances", "value": "1"}],
"hypervisor_hostname": "fakenode",
}
if values:
compute.update(values)
return compute
def _create_service(self, host="fakehost", compute=None):
if compute:
compute = [compute]
service = {
"id": 1,
"host": host,
"binary": "nova-compute",
"topic": "compute",
"compute_node": compute,
}
return service
def _fake_instance_system_metadata(self, instance_type, prefix=''):
sys_meta = []
for key in flavors.system_metadata_flavor_props.keys():
sys_meta.append({'key': '%sinstance_type_%s' % (prefix, key),
'value': instance_type[key]})
return sys_meta
def _fake_instance(self, stash=True, **kwargs):
# Default to an instance ready to resize to or from the same
# instance_type
itype = self._fake_flavor_create()
sys_meta = self._fake_instance_system_metadata(itype)
if stash:
# stash instance types in system metadata.
sys_meta = (sys_meta +
self._fake_instance_system_metadata(itype, 'new_') +
self._fake_instance_system_metadata(itype, 'old_'))
instance_uuid = str(uuid.uuid1())
instance = {
'uuid': instance_uuid,
'vm_state': vm_states.RESIZED,
'task_state': None,
'memory_mb': 2,
'root_gb': 3,
'ephemeral_gb': 1,
'os_type': 'Linux',
'project_id': '123456',
'vcpus': 1,
'host': None,
'node': None,
'instance_type_id': 1,
'launched_on': None,
'system_metadata': sys_meta,
'availability_zone': None,
'vm_mode': None,
'reservation_id': None,
'display_name': None,
'default_swap_device': None,
'power_state': None,
'scheduled_at': None,
'access_ip_v6': None,
'access_ip_v4': None,
'key_name': None,
'updated_at': None,
'cell_name': None,
'locked': None,
'locked_by': None,
'launch_index': None,
'architecture': None,
'auto_disk_config': None,
'terminated_at': None,
'ramdisk_id': None,
'user_data': None,
'cleaned': None,
'deleted_at': None,
'id': 333,
'disable_terminate': None,
'hostname': None,
'display_description': None,
'key_data': None,
'deleted': None,
'default_ephemeral_device': None,
'progress': None,
'launched_at': None,
'config_drive': None,
'kernel_id': None,
'user_id': None,
'shutdown_terminate': None,
'created_at': None,
'image_ref': None,
'root_device_name': None,
}
instance.update(kwargs)
self._instances[instance_uuid] = instance
return instance
def _fake_flavor_create(self, **kwargs):
instance_type = {
'id': 1,
'name': 'fakeitype',
'memory_mb': FAKE_VIRT_MEMORY_MB,
'vcpus': FAKE_VIRT_VCPUS,
'root_gb': FAKE_VIRT_LOCAL_GB / 2,
'ephemeral_gb': FAKE_VIRT_LOCAL_GB / 2,
'swap': 0,
'rxtx_factor': 1.0,
'vcpu_weight': 1,
'flavorid': 'fakeflavor'
}
instance_type.update(**kwargs)
id_ = instance_type['id']
self._instance_types[id_] = instance_type
return instance_type
def _fake_instance_get_all_by_host_and_node(self, context, host, nodename):
return [i for i in self._instances.values() if i['host'] == host]
def _fake_flavor_get(self, ctxt, id_):
return self._instance_types[id_]
def _fake_instance_update_and_get_original(self, context, instance_uuid,
values):
instance = self._instances[instance_uuid]
instance.update(values)
# the test doesn't care what the original instance values are; they're
# only used in the subsequent notification:
return (instance, instance)
def _driver(self):
return FakeVirtDriver()
def _tracker(self, host=None):
if host is None:
host = self.host
node = "fakenode"
driver = self._driver()
tracker = resource_tracker.ResourceTracker(host, driver, node)
return tracker
class UnsupportedDriverTestCase(BaseTestCase):
"""Resource tracking should be disabled when the virt driver doesn't
support it.
"""
def setUp(self):
super(UnsupportedDriverTestCase, self).setUp()
self.tracker = self._tracker()
# seed tracker with data:
self.tracker.update_available_resource(self.context)
def _driver(self):
return UnsupportedVirtDriver()
def test_disabled(self):
# disabled = no compute node stats
self.assertTrue(self.tracker.disabled)
self.assertEqual(None, self.tracker.compute_node)
def test_disabled_claim(self):
# basic claim:
instance = self._fake_instance()
claim = self.tracker.instance_claim(self.context, instance)
self.assertEqual(0, claim.memory_mb)
def test_disabled_instance_claim(self):
# instance variation:
instance = self._fake_instance()
claim = self.tracker.instance_claim(self.context, instance)
self.assertEqual(0, claim.memory_mb)
def test_disabled_instance_context_claim(self):
# instance context manager variation:
instance = self._fake_instance()
claim = self.tracker.instance_claim(self.context, instance)
with self.tracker.instance_claim(self.context, instance) as claim:
self.assertEqual(0, claim.memory_mb)
def test_disabled_updated_usage(self):
instance = self._fake_instance(host='fakehost', memory_mb=5,
root_gb=10)
self.tracker.update_usage(self.context, instance)
def test_disabled_resize_claim(self):
instance = self._fake_instance()
instance_type = self._fake_flavor_create()
claim = self.tracker.resize_claim(self.context, instance,
instance_type)
self.assertEqual(0, claim.memory_mb)
self.assertEqual(instance['uuid'], claim.migration['instance_uuid'])
self.assertEqual(instance_type['id'],
claim.migration['new_instance_type_id'])
def test_disabled_resize_context_claim(self):
instance = self._fake_instance()
instance_type = self._fake_flavor_create()
with self.tracker.resize_claim(self.context, instance, instance_type) \
as claim:
self.assertEqual(0, claim.memory_mb)
class MissingServiceTestCase(BaseTestCase):
def setUp(self):
super(MissingServiceTestCase, self).setUp()
self.context = context.get_admin_context()
self.tracker = self._tracker()
def test_missing_service(self):
self.tracker.update_available_resource(self.context)
self.assertTrue(self.tracker.disabled)
class MissingComputeNodeTestCase(BaseTestCase):
def setUp(self):
super(MissingComputeNodeTestCase, self).setUp()
self.tracker = self._tracker()
self.stubs.Set(db, 'service_get_by_compute_host',
self._fake_service_get_by_compute_host)
self.stubs.Set(db, 'compute_node_create',
self._fake_create_compute_node)
def _fake_create_compute_node(self, context, values):
self.created = True
return self._create_compute_node()
def _fake_service_get_by_compute_host(self, ctx, host):
# return a service with no joined compute
service = self._create_service()
return service
def test_create_compute_node(self):
self.tracker.update_available_resource(self.context)
self.assertTrue(self.created)
def test_enabled(self):
self.tracker.update_available_resource(self.context)
self.assertFalse(self.tracker.disabled)
class BaseTrackerTestCase(BaseTestCase):
def setUp(self):
# setup plumbing for a working resource tracker with required
# database models and a compatible compute driver:
super(BaseTrackerTestCase, self).setUp()
self.updated = False
self.deleted = False
self.tracker = self._tracker()
self._migrations = {}
self.stubs.Set(db, 'service_get_by_compute_host',
self._fake_service_get_by_compute_host)
self.stubs.Set(db, 'compute_node_update',
self._fake_compute_node_update)
self.stubs.Set(db, 'compute_node_delete',
self._fake_compute_node_delete)
self.stubs.Set(db, 'migration_update',
self._fake_migration_update)
self.stubs.Set(db, 'migration_get_in_progress_by_host_and_node',
self._fake_migration_get_in_progress_by_host_and_node)
self.tracker.update_available_resource(self.context)
self.limits = self._limits()
def _fake_service_get_by_compute_host(self, ctx, host):
self.compute = self._create_compute_node()
self.service = self._create_service(host, compute=self.compute)
return self.service
def _fake_compute_node_update(self, ctx, compute_node_id, values,
prune_stats=False):
self.updated = True
values['stats'] = [{"key": "num_instances", "value": "1"}]
self.compute.update(values)
return self.compute
def _fake_compute_node_delete(self, ctx, compute_node_id):
self.deleted = True
self.compute.update({'deleted': 1})
return self.compute
def _fake_migration_get_in_progress_by_host_and_node(self, ctxt, host,
node):
status = ['confirmed', 'reverted']
migrations = []
for migration in self._migrations.values():
migration = obj_base.obj_to_primitive(migration)
if migration['status'] in status:
continue
uuid = migration['instance_uuid']
migration['instance'] = self._instances[uuid]
migrations.append(migration)
return migrations
def _fake_migration_update(self, ctxt, migration_id, values):
# cheat and assume there's only 1 migration present
migration = self._migrations.values()[0]
migration.update(values)
return migration
def _limits(self, memory_mb=FAKE_VIRT_MEMORY_MB +
FAKE_VIRT_MEMORY_OVERHEAD, disk_gb=FAKE_VIRT_LOCAL_GB,
vcpus=FAKE_VIRT_VCPUS):
"""Create limits dictionary used for oversubscribing resources."""
return {
'memory_mb': memory_mb,
'disk_gb': disk_gb,
'vcpu': vcpus
}
def _assert(self, value, field, tracker=None):
if tracker is None:
tracker = self.tracker
if field not in tracker.compute_node:
raise test.TestingException(
"'%(field)s' not in compute node." % {'field': field})
x = tracker.compute_node[field]
self.assertEqual(value, x)
class TrackerTestCase(BaseTrackerTestCase):
def test_free_ram_resource_value(self):
driver = FakeVirtDriver()
mem_free = driver.memory_mb - driver.memory_mb_used
self.assertEqual(mem_free, self.tracker.compute_node['free_ram_mb'])
def test_free_disk_resource_value(self):
driver = FakeVirtDriver()
mem_free = driver.local_gb - driver.local_gb_used
self.assertEqual(mem_free, self.tracker.compute_node['free_disk_gb'])
def test_update_compute_node(self):
self.assertFalse(self.tracker.disabled)
self.assertTrue(self.updated)
def test_init(self):
self._assert(FAKE_VIRT_MEMORY_MB, 'memory_mb')
self._assert(FAKE_VIRT_LOCAL_GB, 'local_gb')
self._assert(FAKE_VIRT_VCPUS, 'vcpus')
self._assert(0, 'memory_mb_used')
self._assert(0, 'local_gb_used')
self._assert(0, 'vcpus_used')
self._assert(0, 'running_vms')
self._assert(FAKE_VIRT_MEMORY_MB, 'free_ram_mb')
self._assert(FAKE_VIRT_LOCAL_GB, 'free_disk_gb')
self.assertFalse(self.tracker.disabled)
self.assertEqual(0, self.tracker.compute_node['current_workload'])
self._assert('{}', 'pci_stats')
class TrackerPciStatsTestCase(BaseTrackerTestCase):
def test_update_compute_node(self):
self.assertFalse(self.tracker.disabled)
self.assertTrue(self.updated)
def test_init(self):
self._assert(FAKE_VIRT_MEMORY_MB, 'memory_mb')
self._assert(FAKE_VIRT_LOCAL_GB, 'local_gb')
self._assert(FAKE_VIRT_VCPUS, 'vcpus')
self._assert(0, 'memory_mb_used')
self._assert(0, 'local_gb_used')
self._assert(0, 'vcpus_used')
self._assert(0, 'running_vms')
self._assert(FAKE_VIRT_MEMORY_MB, 'free_ram_mb')
self._assert(FAKE_VIRT_LOCAL_GB, 'free_disk_gb')
self.assertFalse(self.tracker.disabled)
self.assertEqual(0, self.tracker.compute_node['current_workload'])
expected = """[{"count": 1,
"vendor_id": "v1",
"product_id": "p1",
"extra_info": {"extra_k1": "v1"}}]"""
expected = re.sub(r'\s+', '', expected)
pci = re.sub(r'\s+', '', self.tracker.compute_node['pci_stats'])
self.assertEqual(expected, pci)
def _driver(self):
return FakeVirtDriver(pci_support=True)
class InstanceClaimTestCase(BaseTrackerTestCase):
def test_update_usage_only_for_tracked(self):
instance = self._fake_instance(memory_mb=3, root_gb=1, ephemeral_gb=1,
task_state=None)
self.tracker.update_usage(self.context, instance)
self._assert(0, 'memory_mb_used')
self._assert(0, 'local_gb_used')
self._assert(0, 'current_workload')
claim = self.tracker.instance_claim(self.context, instance,
self.limits)
self.assertNotEqual(0, claim.memory_mb)
self._assert(3 + FAKE_VIRT_MEMORY_OVERHEAD, 'memory_mb_used')
self._assert(2, 'local_gb_used')
# now update should actually take effect
instance['task_state'] = task_states.SCHEDULING
self.tracker.update_usage(self.context, instance)
self._assert(3 + FAKE_VIRT_MEMORY_OVERHEAD, 'memory_mb_used')
self._assert(2, 'local_gb_used')
self._assert(1, 'current_workload')
def test_claim_and_audit(self):
claim_mem = 3
claim_disk = 2
instance = self._fake_instance(memory_mb=claim_mem, root_gb=claim_disk,
ephemeral_gb=0)
claim = self.tracker.instance_claim(self.context, instance,
self.limits)
self.assertEqual(5, self.compute["memory_mb"])
self.assertEqual(claim_mem + FAKE_VIRT_MEMORY_OVERHEAD,
self.compute["memory_mb_used"])
self.assertEqual(5 - claim_mem - FAKE_VIRT_MEMORY_OVERHEAD,
self.compute["free_ram_mb"])
self.assertEqual(6, self.compute["local_gb"])
self.assertEqual(claim_disk, self.compute["local_gb_used"])
self.assertEqual(6 - claim_disk, self.compute["free_disk_gb"])
# First, pretend that the compute operation finished and claimed the
# desired resources from the virt layer
driver = self.tracker.driver
driver.memory_mb_used = claim_mem
driver.local_gb_used = claim_disk
self.tracker.update_available_resource(self.context)
# confirm tracker is adding in host_ip
self.assertTrue(self.compute.get('host_ip') is not None)
# confirm that resource usage is derived from instance usages,
# not virt layer:
self.assertEqual(claim_mem + FAKE_VIRT_MEMORY_OVERHEAD,
self.compute['memory_mb_used'])
self.assertEqual(5 - claim_mem - FAKE_VIRT_MEMORY_OVERHEAD,
self.compute['free_ram_mb'])
self.assertEqual(claim_disk, self.compute['local_gb_used'])
self.assertEqual(6 - claim_disk, self.compute['free_disk_gb'])
def test_claim_and_abort(self):
claim_mem = 3
claim_disk = 2
instance = self._fake_instance(memory_mb=claim_mem,
root_gb=claim_disk, ephemeral_gb=0)
claim = self.tracker.instance_claim(self.context, instance,
self.limits)
self.assertNotEqual(None, claim)
self.assertEqual(claim_mem + FAKE_VIRT_MEMORY_OVERHEAD,
self.compute["memory_mb_used"])
self.assertEqual(5 - claim_mem - FAKE_VIRT_MEMORY_OVERHEAD,
self.compute["free_ram_mb"])
self.assertEqual(claim_disk, self.compute["local_gb_used"])
self.assertEqual(6 - claim_disk, self.compute["free_disk_gb"])
claim.abort()
self.assertEqual(0, self.compute["memory_mb_used"])
self.assertEqual(5, self.compute["free_ram_mb"])
self.assertEqual(0, self.compute["local_gb_used"])
self.assertEqual(6, self.compute["free_disk_gb"])
def test_instance_claim_with_oversubscription(self):
memory_mb = FAKE_VIRT_MEMORY_MB * 2
root_gb = ephemeral_gb = FAKE_VIRT_LOCAL_GB
vcpus = FAKE_VIRT_VCPUS * 2
limits = {'memory_mb': memory_mb + FAKE_VIRT_MEMORY_OVERHEAD,
'disk_gb': root_gb * 2,
'vcpu': vcpus}
instance = self._fake_instance(memory_mb=memory_mb,
root_gb=root_gb, ephemeral_gb=ephemeral_gb)
self.tracker.instance_claim(self.context, instance, limits)
self.assertEqual(memory_mb + FAKE_VIRT_MEMORY_OVERHEAD,
self.tracker.compute_node['memory_mb_used'])
self.assertEqual(root_gb * 2,
self.tracker.compute_node['local_gb_used'])
def test_additive_claims(self):
self.limits['vcpu'] = 2
instance = self._fake_instance(memory_mb=1, root_gb=1, ephemeral_gb=1,
vcpus=1)
with self.tracker.instance_claim(self.context, instance, self.limits):
pass
instance = self._fake_instance(memory_mb=1, root_gb=1, ephemeral_gb=1,
vcpus=1)
with self.tracker.instance_claim(self.context, instance, self.limits):
pass
self.assertEqual(2 + 2 * FAKE_VIRT_MEMORY_OVERHEAD,
self.tracker.compute_node['memory_mb_used'])
self.assertEqual(4, self.tracker.compute_node['local_gb_used'])
self.assertEqual(2, self.tracker.compute_node['vcpus_used'])
def test_context_claim_with_exception(self):
instance = self._fake_instance(memory_mb=1, root_gb=1, ephemeral_gb=1)
try:
with self.tracker.instance_claim(self.context, instance):
# <insert exciting things that utilize resources>
raise test.TestingException()
except test.TestingException:
pass
self.assertEqual(0, self.tracker.compute_node['memory_mb_used'])
self.assertEqual(0, self.tracker.compute_node['local_gb_used'])
self.assertEqual(0, self.compute['memory_mb_used'])
self.assertEqual(0, self.compute['local_gb_used'])
def test_instance_context_claim(self):
instance = self._fake_instance(memory_mb=1, root_gb=1, ephemeral_gb=1)
with self.tracker.instance_claim(self.context, instance):
# <insert exciting things that utilize resources>
self.assertEqual(1 + FAKE_VIRT_MEMORY_OVERHEAD,
self.tracker.compute_node['memory_mb_used'])
self.assertEqual(2, self.tracker.compute_node['local_gb_used'])
self.assertEqual(1 + FAKE_VIRT_MEMORY_OVERHEAD,
self.compute['memory_mb_used'])
self.assertEqual(2, self.compute['local_gb_used'])
# after exiting claim context, build is marked as finished. usage
# totals should be same:
self.tracker.update_available_resource(self.context)
self.assertEqual(1 + FAKE_VIRT_MEMORY_OVERHEAD,
self.tracker.compute_node['memory_mb_used'])
self.assertEqual(2, self.tracker.compute_node['local_gb_used'])
self.assertEqual(1 + FAKE_VIRT_MEMORY_OVERHEAD,
self.compute['memory_mb_used'])
self.assertEqual(2, self.compute['local_gb_used'])
def test_update_load_stats_for_instance(self):
instance = self._fake_instance(task_state=task_states.SCHEDULING)
with self.tracker.instance_claim(self.context, instance):
pass
self.assertEqual(1, self.tracker.compute_node['current_workload'])
instance['vm_state'] = vm_states.ACTIVE
instance['task_state'] = None
instance['host'] = 'fakehost'
self.tracker.update_usage(self.context, instance)
self.assertEqual(0, self.tracker.compute_node['current_workload'])
def test_cpu_stats(self):
limits = {'disk_gb': 100, 'memory_mb': 100}
self.assertEqual(0, self.tracker.compute_node['vcpus_used'])
instance = self._fake_instance(vcpus=1)
# should not do anything until a claim is made:
self.tracker.update_usage(self.context, instance)
self.assertEqual(0, self.tracker.compute_node['vcpus_used'])
with self.tracker.instance_claim(self.context, instance, limits):
pass
self.assertEqual(1, self.tracker.compute_node['vcpus_used'])
# instance state can change without modifying vcpus in use:
instance['task_state'] = task_states.SCHEDULING
self.tracker.update_usage(self.context, instance)
self.assertEqual(1, self.tracker.compute_node['vcpus_used'])
instance = self._fake_instance(vcpus=10)
with self.tracker.instance_claim(self.context, instance, limits):
pass
self.assertEqual(11, self.tracker.compute_node['vcpus_used'])
instance['vm_state'] = vm_states.DELETED
self.tracker.update_usage(self.context, instance)
self.assertEqual(1, self.tracker.compute_node['vcpus_used'])
def test_skip_deleted_instances(self):
# ensure that the audit process skips instances that have vm_state
# DELETED, but the DB record is not yet deleted.
self._fake_instance(vm_state=vm_states.DELETED, host=self.host)
self.tracker.update_available_resource(self.context)
self.assertEqual(0, self.tracker.compute_node['memory_mb_used'])
self.assertEqual(0, self.tracker.compute_node['local_gb_used'])
class ResizeClaimTestCase(BaseTrackerTestCase):
def setUp(self):
super(ResizeClaimTestCase, self).setUp()
def _fake_migration_create(mig_self, ctxt):
self._migrations[mig_self.instance_uuid] = mig_self
mig_self.obj_reset_changes()
self.stubs.Set(migration_obj.Migration, 'create',
_fake_migration_create)
self.instance = self._fake_instance()
self.instance_type = self._fake_flavor_create()
def _fake_migration_create(self, context, values=None):
instance_uuid = str(uuid.uuid1())
mig_dict = test_migration.fake_db_migration()
mig_dict.update({
'id': 1,
'source_compute': 'host1',
'source_node': 'fakenode',
'dest_compute': 'host2',
'dest_node': 'fakenode',
'dest_host': '127.0.0.1',
'old_instance_type_id': 1,
'new_instance_type_id': 2,
'instance_uuid': instance_uuid,
'status': 'pre-migrating',
'updated_at': timeutils.utcnow()
})
if values:
mig_dict.update(values)
migration = migration_obj.Migration()
migration.update(mig_dict)
# This hits the stub in setUp()
migration.create('fake')
def test_claim(self):
self.tracker.resize_claim(self.context, self.instance,
self.instance_type, self.limits)
self._assert(FAKE_VIRT_MEMORY_MB + FAKE_VIRT_MEMORY_OVERHEAD,
'memory_mb_used')
self._assert(FAKE_VIRT_LOCAL_GB, 'local_gb_used')
self._assert(FAKE_VIRT_VCPUS, 'vcpus_used')
self.assertEqual(1, len(self.tracker.tracked_migrations))
def test_abort(self):
try:
with self.tracker.resize_claim(self.context, self.instance,
self.instance_type, self.limits):
raise test.TestingException("abort")
except test.TestingException:
pass
self._assert(0, 'memory_mb_used')
self._assert(0, 'local_gb_used')
self._assert(0, 'vcpus_used')
self.assertEqual(0, len(self.tracker.tracked_migrations))
def test_additive_claims(self):
limits = self._limits(FAKE_VIRT_MEMORY_MB * 2 +
FAKE_VIRT_MEMORY_OVERHEAD * 2,
FAKE_VIRT_LOCAL_GB * 2,
FAKE_VIRT_VCPUS * 2)
self.tracker.resize_claim(self.context, self.instance,
self.instance_type, limits)
instance2 = self._fake_instance()
self.tracker.resize_claim(self.context, instance2, self.instance_type,
limits)
self._assert(2 * FAKE_VIRT_MEMORY_MB + 2 * FAKE_VIRT_MEMORY_OVERHEAD,
'memory_mb_used')
self._assert(2 * FAKE_VIRT_LOCAL_GB, 'local_gb_used')
self._assert(2 * FAKE_VIRT_VCPUS, 'vcpus_used')
def test_claim_and_audit(self):
self.tracker.resize_claim(self.context, self.instance,
self.instance_type, self.limits)
self.tracker.update_available_resource(self.context)
self._assert(FAKE_VIRT_MEMORY_MB + FAKE_VIRT_MEMORY_OVERHEAD,
'memory_mb_used')
self._assert(FAKE_VIRT_LOCAL_GB, 'local_gb_used')
self._assert(FAKE_VIRT_VCPUS, 'vcpus_used')
def test_same_host(self):
self.limits['vcpu'] = 3
src_type = self._fake_flavor_create(id=2, memory_mb=1,
root_gb=1, ephemeral_gb=0, vcpus=1)
dest_type = self._fake_flavor_create(id=2, memory_mb=2,
root_gb=2, ephemeral_gb=1, vcpus=2)
# make an instance of src_type:
instance = self._fake_instance(memory_mb=1, root_gb=1, ephemeral_gb=0,
vcpus=1, instance_type_id=2)
instance['system_metadata'] = self._fake_instance_system_metadata(
dest_type)
self.tracker.instance_claim(self.context, instance, self.limits)
# resize to dest_type:
claim = self.tracker.resize_claim(self.context, instance,
dest_type, self.limits)
self._assert(3 + FAKE_VIRT_MEMORY_OVERHEAD * 2, 'memory_mb_used')
self._assert(4, 'local_gb_used')
self._assert(3, 'vcpus_used')
self.tracker.update_available_resource(self.context)
claim.abort()
# only the original instance should remain, not the migration:
self._assert(1 + FAKE_VIRT_MEMORY_OVERHEAD, 'memory_mb_used')
self._assert(1, 'local_gb_used')
self._assert(1, 'vcpus_used')
self.assertEqual(1, len(self.tracker.tracked_instances))
self.assertEqual(0, len(self.tracker.tracked_migrations))
def test_revert(self):
self.tracker.resize_claim(self.context, self.instance,
self.instance_type, self.limits)
self.tracker.drop_resize_claim(self.instance)
self.assertEqual(0, len(self.tracker.tracked_instances))
self.assertEqual(0, len(self.tracker.tracked_migrations))
self._assert(0, 'memory_mb_used')
self._assert(0, 'local_gb_used')
self._assert(0, 'vcpus_used')
def test_revert_reserve_source(self):
# if a revert has started at the API and audit runs on
# the source compute before the instance flips back to source,
# resources should still be held at the source based on the
# migration:
dest = "desthost"
dest_tracker = self._tracker(host=dest)
dest_tracker.update_available_resource(self.context)
self.instance = self._fake_instance(memory_mb=FAKE_VIRT_MEMORY_MB,
root_gb=FAKE_VIRT_LOCAL_GB, ephemeral_gb=0,
vcpus=FAKE_VIRT_VCPUS, instance_type_id=1)
values = {'source_compute': self.host, 'dest_compute': dest,
'old_instance_type_id': 1, 'new_instance_type_id': 1,
'status': 'post-migrating',
'instance_uuid': self.instance['uuid']}
migration = self._fake_migration_create(self.context, values)
# attach an instance to the destination host tracker:
dest_tracker.instance_claim(self.context, self.instance)
self._assert(FAKE_VIRT_MEMORY_MB + FAKE_VIRT_MEMORY_OVERHEAD,
'memory_mb_used', tracker=dest_tracker)
self._assert(FAKE_VIRT_LOCAL_GB, 'local_gb_used',
tracker=dest_tracker)
self._assert(FAKE_VIRT_VCPUS, 'vcpus_used',
tracker=dest_tracker)
# audit and recheck to confirm migration doesn't get double counted
# on dest:
dest_tracker.update_available_resource(self.context)
self._assert(FAKE_VIRT_MEMORY_MB + FAKE_VIRT_MEMORY_OVERHEAD,
'memory_mb_used', tracker=dest_tracker)
self._assert(FAKE_VIRT_LOCAL_GB, 'local_gb_used',
tracker=dest_tracker)
self._assert(FAKE_VIRT_VCPUS, 'vcpus_used',
tracker=dest_tracker)
# apply the migration to the source host tracker:
self.tracker.update_available_resource(self.context)
self._assert(FAKE_VIRT_MEMORY_MB + FAKE_VIRT_MEMORY_OVERHEAD,
'memory_mb_used')
self._assert(FAKE_VIRT_LOCAL_GB, 'local_gb_used')
self._assert(FAKE_VIRT_VCPUS, 'vcpus_used')
# flag the instance and migration as reverting and re-audit:
self.instance['vm_state'] = vm_states.RESIZED
self.instance['task_state'] = task_states.RESIZE_REVERTING
self.tracker.update_available_resource(self.context)
self._assert(FAKE_VIRT_MEMORY_MB + 1, 'memory_mb_used')
self._assert(FAKE_VIRT_LOCAL_GB, 'local_gb_used')
self._assert(FAKE_VIRT_VCPUS, 'vcpus_used')
def test_resize_filter(self):
instance = self._fake_instance(vm_state=vm_states.ACTIVE,
task_state=task_states.SUSPENDING)
self.assertFalse(self.tracker._instance_in_resize_state(instance))
instance = self._fake_instance(vm_state=vm_states.RESIZED,
task_state=task_states.SUSPENDING)
self.assertTrue(self.tracker._instance_in_resize_state(instance))
instance = self._fake_instance(vm_state=vm_states.ACTIVE,
task_state=task_states.RESIZE_MIGRATING)
self.assertTrue(self.tracker._instance_in_resize_state(instance))
def test_dupe_filter(self):
self._fake_flavor_create(id=2, memory_mb=1, root_gb=1,
ephemeral_gb=1, vcpus=1)
instance = self._fake_instance(host=self.host)
values = {'source_compute': self.host, 'dest_compute': self.host,
'instance_uuid': instance['uuid'], 'new_instance_type_id': 2}
self._fake_migration_create(self.context, values)
self._fake_migration_create(self.context, values)
self.tracker.update_available_resource(self.context)
self.assertEqual(1, len(self.tracker.tracked_migrations))
def test_set_instance_host_and_node(self):
instance = self._fake_instance()
self.assertEqual(None, instance['host'])
self.assertEqual(None, instance['launched_on'])
self.assertEqual(None, instance['node'])
claim = self.tracker.instance_claim(self.context, instance)
self.assertNotEqual(0, claim.memory_mb)
self.assertEqual('fakehost', instance['host'])
self.assertEqual('fakehost', instance['launched_on'])
self.assertEqual('fakenode', instance['node'])
class NoInstanceTypesInSysMetadata(ResizeClaimTestCase):
"""Make sure we handle the case where the following are true:
1) Compute node C gets upgraded to code that looks for instance types in
system metadata. AND
2) C already has instances in the process of migrating that do not have
stashed instance types.
bug 1164110
"""
def setUp(self):
super(NoInstanceTypesInSysMetadata, self).setUp()
self.instance = self._fake_instance(stash=False)
class OrphanTestCase(BaseTrackerTestCase):
def _driver(self):
class OrphanVirtDriver(FakeVirtDriver):
def get_per_instance_usage(self):
return {
'1-2-3-4-5': {'memory_mb': 4, 'uuid': '1-2-3-4-5'},
'2-3-4-5-6': {'memory_mb': 4, 'uuid': '2-3-4-5-6'},
}
return OrphanVirtDriver()
def test_usage(self):
# 2 instances, 4 mb each, plus overhead
self.assertEqual(8 + 2 * FAKE_VIRT_MEMORY_OVERHEAD,
self.tracker.compute_node['memory_mb_used'])
def test_find(self):
# create one legit instance and verify the 2 orphans remain
self._fake_instance()
orphans = self.tracker._find_orphaned_instances()
self.assertEqual(2, len(orphans))
|
ntt-sic/nova
|
nova/tests/compute/test_resource_tracker.py
|
Python
|
apache-2.0
| 38,724
|
[
"exciting"
] |
570c2f3449a5739e84331366a51d27f43edbbc3f9a8de13d99191a889dd58803
|
from ase import Atom, Atoms
from ase.calculators.lj import LennardJones
from ase.constraints import FixBondLength
dimer = Atoms([Atom('X', (0, 0, 0)),
Atom('X', (0, 0, 1))],
calculator=LennardJones(),
constraint=FixBondLength(0, 1))
print(dimer.get_forces())
print(dimer.positions)
dimer.positions[:] += 0.1
print(dimer.positions)
dimer.positions[:, 2] += 5.1
print(dimer.positions)
dimer.positions[:] = [(1,2,3),(4,5,6)]
print(dimer.positions)
dimer.set_positions([(1,2,3),(4,5,6.2)])
print(dimer.positions)
|
suttond/MODOI
|
ase/test/dimer.py
|
Python
|
lgpl-3.0
| 554
|
[
"ASE"
] |
17a5b9295f404e2c508da23732d87d92e8fa6a5414afdfe80b97fe20432cac1a
|
# Docstrings for generated ufuncs
#
# The syntax is designed to look like the function add_newdoc is being
# called from numpy.lib, but in this file add_newdoc puts the
# docstrings in a dictionary. This dictionary is used in
# _generate_pyx.py to generate the docstrings for the ufuncs in
# scipy.special at the C level when the ufuncs are created at compile
# time.
from typing import Dict
docdict: Dict[str, str] = {}
def get(name):
return docdict.get(name)
def add_newdoc(name, doc):
docdict[name] = doc
add_newdoc("_sf_error_test_function",
"""
Private function; do not use.
""")
add_newdoc("_cosine_cdf",
"""
_cosine_cdf(x)
Cumulative distribution function (CDF) of the cosine distribution::
{ 0, x < -pi
cdf(x) = { (pi + x + sin(x))/(2*pi), -pi <= x <= pi
{ 1, x > pi
Parameters
----------
x : array_like
`x` must contain real numbers.
Returns
-------
float
The cosine distribution CDF evaluated at `x`.
""")
add_newdoc("_cosine_invcdf",
"""
_cosine_invcdf(p)
Inverse of the cumulative distribution function (CDF) of the cosine
distribution.
The CDF of the cosine distribution is::
cdf(x) = (pi + x + sin(x))/(2*pi)
This function computes the inverse of cdf(x).
Parameters
----------
p : array_like
`p` must contain real numbers in the interval ``0 <= p <= 1``.
`nan` is returned for values of `p` outside the interval [0, 1].
Returns
-------
float
The inverse of the cosine distribution CDF evaluated at `p`.
""")
add_newdoc("sph_harm",
r"""
sph_harm(m, n, theta, phi)
Compute spherical harmonics.
The spherical harmonics are defined as
.. math::
Y^m_n(\theta,\phi) = \sqrt{\frac{2n+1}{4\pi} \frac{(n-m)!}{(n+m)!}}
e^{i m \theta} P^m_n(\cos(\phi))
where :math:`P_n^m` are the associated Legendre functions; see `lpmv`.
Parameters
----------
m : array_like
Order of the harmonic (int); must have ``|m| <= n``.
n : array_like
Degree of the harmonic (int); must have ``n >= 0``. This is
often denoted by ``l`` (lower case L) in descriptions of
spherical harmonics.
theta : array_like
Azimuthal (longitudinal) coordinate; must be in ``[0, 2*pi]``.
phi : array_like
Polar (colatitudinal) coordinate; must be in ``[0, pi]``.
Returns
-------
y_mn : complex float
The harmonic :math:`Y^m_n` sampled at ``theta`` and ``phi``.
Notes
-----
There are different conventions for the meanings of the input
arguments ``theta`` and ``phi``. In SciPy ``theta`` is the
azimuthal angle and ``phi`` is the polar angle. It is common to
see the opposite convention, that is, ``theta`` as the polar angle
and ``phi`` as the azimuthal angle.
Note that SciPy's spherical harmonics include the Condon-Shortley
phase [2]_ because it is part of `lpmv`.
With SciPy's conventions, the first several spherical harmonics
are
.. math::
Y_0^0(\theta, \phi) &= \frac{1}{2} \sqrt{\frac{1}{\pi}} \\
Y_1^{-1}(\theta, \phi) &= \frac{1}{2} \sqrt{\frac{3}{2\pi}}
e^{-i\theta} \sin(\phi) \\
Y_1^0(\theta, \phi) &= \frac{1}{2} \sqrt{\frac{3}{\pi}}
\cos(\phi) \\
Y_1^1(\theta, \phi) &= -\frac{1}{2} \sqrt{\frac{3}{2\pi}}
e^{i\theta} \sin(\phi).
References
----------
.. [1] Digital Library of Mathematical Functions, 14.30.
https://dlmf.nist.gov/14.30
.. [2] https://en.wikipedia.org/wiki/Spherical_harmonics#Condon.E2.80.93Shortley_phase
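Examples
--------
A short, illustrative check against the explicit expression for
:math:`Y_0^0` listed above (the value is independent of the angles;
compared with a tolerance rather than quoted to full precision):
>>> import numpy as np
>>> from scipy.special import sph_harm
>>> np.allclose(sph_harm(0, 0, 0.5, 0.5), 0.5 * np.sqrt(1 / np.pi))
True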
""")
add_newdoc("_ellip_harm",
"""
Internal function, use `ellip_harm` instead.
""")
add_newdoc("_ellip_norm",
"""
Internal function, use `ellip_norm` instead.
""")
add_newdoc("_lambertw",
"""
Internal function, use `lambertw` instead.
""")
add_newdoc("voigt_profile",
r"""
voigt_profile(x, sigma, gamma, out=None)
Voigt profile.
The Voigt profile is a convolution of a 1-D Normal distribution with
standard deviation ``sigma`` and a 1-D Cauchy distribution with half-width at
half-maximum ``gamma``.
If ``sigma = 0``, the PDF of the Cauchy distribution is returned.
Similarly, if ``gamma = 0``, the PDF of the Normal distribution is returned.
If ``sigma = gamma = 0``, the return value is ``Inf`` for ``x = 0``, and ``0`` for all other ``x``.
Parameters
----------
x : array_like
Real argument
sigma : array_like
The standard deviation of the Normal distribution part
gamma : array_like
The half-width at half-maximum of the Cauchy distribution part
out : ndarray, optional
Optional output array for the function values
Returns
-------
scalar or ndarray
The Voigt profile at the given arguments
Notes
-----
It can be expressed in terms of the Faddeeva function
.. math:: V(x; \sigma, \gamma) = \frac{Re[w(z)]}{\sigma\sqrt{2\pi}},
.. math:: z = \frac{x + i\gamma}{\sqrt{2}\sigma}
where :math:`w(z)` is the Faddeeva function.
See Also
--------
wofz : Faddeeva function
References
----------
.. [1] https://en.wikipedia.org/wiki/Voigt_profile
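Examples
--------
A sketch of the limiting cases described above: with ``gamma = 0`` the
profile matches the normal PDF, and with ``sigma = 0`` it matches the
Cauchy PDF (compared numerically with a tolerance):
>>> import numpy as np
>>> from scipy.special import voigt_profile
>>> x = np.linspace(-3, 3, 7)
>>> np.allclose(voigt_profile(x, 1.0, 0.0),
...             np.exp(-x**2 / 2) / np.sqrt(2 * np.pi))
True
>>> np.allclose(voigt_profile(x, 0.0, 1.0),
...             1 / (np.pi * (1 + x**2)))
True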
""")
add_newdoc("wrightomega",
r"""
wrightomega(z, out=None)
Wright Omega function.
Defined as the solution to
.. math::
\omega + \log(\omega) = z
where :math:`\log` is the principal branch of the complex logarithm.
Parameters
----------
z : array_like
Points at which to evaluate the Wright Omega function
Returns
-------
omega : ndarray
Values of the Wright Omega function
Notes
-----
.. versionadded:: 0.19.0
The function can also be defined as
.. math::
\omega(z) = W_{K(z)}(e^z)
where :math:`K(z) = \lceil (\Im(z) - \pi)/(2\pi) \rceil` is the
unwinding number and :math:`W` is the Lambert W function.
The implementation here is taken from [1]_.
See Also
--------
lambertw : The Lambert W function
References
----------
.. [1] Lawrence, Corless, and Jeffrey, "Algorithm 917: Complex
Double-Precision Evaluation of the Wright :math:`\omega`
Function." ACM Transactions on Mathematical Software,
2012. :doi:`10.1145/2168773.2168779`.
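Examples
--------
A minimal check of the defining relation :math:`\omega + \log(\omega) = z`
for a few real points (verified with a tolerance rather than quoted to
full precision):
>>> import numpy as np
>>> from scipy.special import wrightomega
>>> z = np.array([0.0, 1.0, 2.5])
>>> w = wrightomega(z)
>>> np.allclose(w + np.log(w), z)
True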
""")
add_newdoc("agm",
"""
agm(a, b)
Compute the arithmetic-geometric mean of `a` and `b`.
Start with a_0 = a and b_0 = b and iteratively compute::
a_{n+1} = (a_n + b_n)/2
b_{n+1} = sqrt(a_n*b_n)
a_n and b_n converge to the same limit as n increases; their common
limit is agm(a, b).
Parameters
----------
a, b : array_like
Real values only. If the values are both negative, the result
is negative. If one value is negative and the other is positive,
`nan` is returned.
Returns
-------
float
The arithmetic-geometric mean of `a` and `b`.
Examples
--------
>>> from scipy.special import agm
>>> a, b = 24.0, 6.0
>>> agm(a, b)
13.458171481725614
Compare that result to the iteration:
>>> while a != b:
... a, b = (a + b)/2, np.sqrt(a*b)
... print("a = %19.16f b=%19.16f" % (a, b))
...
a = 15.0000000000000000 b=12.0000000000000000
a = 13.5000000000000000 b=13.4164078649987388
a = 13.4582039324993694 b=13.4581390309909850
a = 13.4581714817451772 b=13.4581714817060547
a = 13.4581714817256159 b=13.4581714817256159
When array-like arguments are given, broadcasting applies:
>>> a = np.array([[1.5], [3], [6]]) # a has shape (3, 1).
>>> b = np.array([6, 12, 24, 48]) # b has shape (4,).
>>> agm(a, b)
array([[ 3.36454287, 5.42363427, 9.05798751, 15.53650756],
[ 4.37037309, 6.72908574, 10.84726853, 18.11597502],
[ 6. , 8.74074619, 13.45817148, 21.69453707]])
""")
add_newdoc("airy",
r"""
airy(z)
Airy functions and their derivatives.
Parameters
----------
z : array_like
Real or complex argument.
Returns
-------
Ai, Aip, Bi, Bip : ndarrays
Airy functions Ai and Bi, and their derivatives Aip and Bip.
Notes
-----
The Airy functions Ai and Bi are two independent solutions of
.. math:: y''(x) = x y(x).
For real `z` in [-10, 10], the computation is carried out by calling
the Cephes [1]_ `airy` routine, which uses power series summation
for small `z` and rational minimax approximations for large `z`.
Outside this range, the AMOS [2]_ `zairy` and `zbiry` routines are
employed. They are computed using power series for :math:`|z| < 1` and
the following relations to modified Bessel functions for larger `z`
(where :math:`t \equiv 2 z^{3/2}/3`):
.. math::
Ai(z) = \frac{1}{\pi \sqrt{3}} K_{1/3}(t)
Ai'(z) = -\frac{z}{\pi \sqrt{3}} K_{2/3}(t)
Bi(z) = \sqrt{\frac{z}{3}} \left(I_{-1/3}(t) + I_{1/3}(t) \right)
Bi'(z) = \frac{z}{\sqrt{3}} \left(I_{-2/3}(t) + I_{2/3}(t)\right)
See also
--------
airye : exponentially scaled Airy functions.
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
.. [2] Donald E. Amos, "AMOS, A Portable Package for Bessel Functions
of a Complex Argument and Nonnegative Order",
http://netlib.org/amos/
Examples
--------
Compute the Airy functions on the interval [-15, 5].
>>> from scipy import special
>>> x = np.linspace(-15, 5, 201)
>>> ai, aip, bi, bip = special.airy(x)
Plot Ai(x) and Bi(x).
>>> import matplotlib.pyplot as plt
>>> plt.plot(x, ai, 'r', label='Ai(x)')
>>> plt.plot(x, bi, 'b--', label='Bi(x)')
>>> plt.ylim(-0.5, 1.0)
>>> plt.grid()
>>> plt.legend(loc='upper left')
>>> plt.show()
""")
add_newdoc("airye",
"""
airye(z)
Exponentially scaled Airy functions and their derivatives.
Scaling::
eAi = Ai * exp(2.0/3.0*z*sqrt(z))
eAip = Aip * exp(2.0/3.0*z*sqrt(z))
eBi = Bi * exp(-abs(2.0/3.0*(z*sqrt(z)).real))
eBip = Bip * exp(-abs(2.0/3.0*(z*sqrt(z)).real))
Parameters
----------
z : array_like
Real or complex argument.
Returns
-------
eAi, eAip, eBi, eBip : array_like
Exponentially scaled Airy functions eAi and eBi, and their derivatives
eAip and eBip
Notes
-----
Wrapper for the AMOS [1]_ routines `zairy` and `zbiry`.
See also
--------
airy
References
----------
.. [1] Donald E. Amos, "AMOS, A Portable Package for Bessel Functions
of a Complex Argument and Nonnegative Order",
http://netlib.org/amos/
Examples
--------
We can compute exponentially scaled Airy functions and their derivatives:
>>> from scipy.special import airye
>>> import matplotlib.pyplot as plt
>>> z = np.linspace(0, 50, 500)
>>> eAi, eAip, eBi, eBip = airye(z)
>>> f, ax = plt.subplots(2, 1, sharex=True)
>>> for ind, data in enumerate([[eAi, eAip, ["eAi", "eAip"]],
... [eBi, eBip, ["eBi", "eBip"]]]):
... ax[ind].plot(z, data[0], "-r", z, data[1], "-b")
... ax[ind].legend(data[2])
... ax[ind].grid(True)
>>> plt.show()
We can compute the same values using the usual, non-scaled Airy functions:
>>> from scipy.special import airy
>>> Ai, Aip, Bi, Bip = airy(z)
>>> np.allclose(eAi, Ai * np.exp(2.0 / 3.0 * z * np.sqrt(z)))
True
>>> np.allclose(eAip, Aip * np.exp(2.0 / 3.0 * z * np.sqrt(z)))
True
>>> np.allclose(eBi, Bi * np.exp(-abs(np.real(2.0 / 3.0 * z * np.sqrt(z)))))
True
>>> np.allclose(eBip, Bip * np.exp(-abs(np.real(2.0 / 3.0 * z * np.sqrt(z)))))
True
Comparing the non-scaled and exponentially scaled variants, the non-scaled
functions quickly underflow for large arguments, whereas the exponentially
scaled ones do not.
>>> airy(200)
(0.0, 0.0, nan, nan)
>>> airye(200)
(0.07501041684381093, -1.0609012305109042, 0.15003188417418148, 2.1215836725571093)
""")
add_newdoc("bdtr",
r"""
bdtr(k, n, p)
Binomial distribution cumulative distribution function.
Sum of the terms 0 through `floor(k)` of the Binomial probability density.
.. math::
\mathrm{bdtr}(k, n, p) = \sum_{j=0}^{\lfloor k \rfloor} {{n}\choose{j}} p^j (1-p)^{n-j}
Parameters
----------
k : array_like
Number of successes (double), rounded down to the nearest integer.
n : array_like
Number of events (int).
p : array_like
Probability of success in a single event (float).
Returns
-------
y : ndarray
Probability of `floor(k)` or fewer successes in `n` independent events with
success probabilities of `p`.
Notes
-----
The terms are not summed directly; instead the regularized incomplete beta
function is employed, according to the formula,
.. math::
\mathrm{bdtr}(k, n, p) = I_{1 - p}(n - \lfloor k \rfloor, \lfloor k \rfloor + 1).
Wrapper for the Cephes [1]_ routine `bdtr`.
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
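Examples
--------
An illustrative cross-check against the defining sum (using `comb` from
`scipy.special`; values compared with a tolerance):
>>> import numpy as np
>>> from scipy.special import bdtr, comb
>>> k, n, p = 2, 5, 0.3
>>> direct = sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k + 1))
>>> np.allclose(bdtr(k, n, p), direct)
True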
""")
add_newdoc("bdtrc",
r"""
bdtrc(k, n, p)
Binomial distribution survival function.
Sum of the terms `floor(k) + 1` through `n` of the binomial probability
density,
.. math::
\mathrm{bdtrc}(k, n, p) = \sum_{j=\lfloor k \rfloor +1}^n {{n}\choose{j}} p^j (1-p)^{n-j}
Parameters
----------
k : array_like
Number of successes (double), rounded down to nearest integer.
n : array_like
Number of events (int)
p : array_like
Probability of success in a single event.
Returns
-------
y : ndarray
Probability of `floor(k) + 1` or more successes in `n` independent
events with success probabilities of `p`.
See also
--------
bdtr
betainc
Notes
-----
The terms are not summed directly; instead the regularized incomplete beta
function is employed, according to the formula,
.. math::
\mathrm{bdtrc}(k, n, p) = I_{p}(\lfloor k \rfloor + 1, n - \lfloor k \rfloor).
Wrapper for the Cephes [1]_ routine `bdtrc`.
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
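Examples
--------
A quick illustration that `bdtrc` complements `bdtr` (together they sum
to one, up to floating-point tolerance):
>>> import numpy as np
>>> from scipy.special import bdtr, bdtrc
>>> k, n, p = 2, 5, 0.3
>>> np.allclose(bdtr(k, n, p) + bdtrc(k, n, p), 1.0)
True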
""")
add_newdoc("bdtri",
r"""
bdtri(k, n, y)
Inverse function to `bdtr` with respect to `p`.
Finds the event probability `p` such that the sum of the terms 0 through
`k` of the binomial probability density is equal to the given cumulative
probability `y`.
Parameters
----------
k : array_like
Number of successes (float), rounded down to the nearest integer.
n : array_like
Number of events (float)
y : array_like
Cumulative probability (probability of `k` or fewer successes in `n`
events).
Returns
-------
p : ndarray
The event probability such that `bdtr(\lfloor k \rfloor, n, p) = y`.
See also
--------
bdtr
betaincinv
Notes
-----
The computation is carried out using the inverse beta integral function
and the relation::
1 - p = betaincinv(n - k, k + 1, y).
Wrapper for the Cephes [1]_ routine `bdtri`.
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
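Examples
--------
A round-trip sketch with `bdtr`: the success probability used to produce
a cumulative probability is recovered up to numerical tolerance:
>>> import numpy as np
>>> from scipy.special import bdtr, bdtri
>>> k, n, p = 2, 5, 0.3
>>> y = bdtr(k, n, p)
>>> np.allclose(bdtri(k, n, y), p)
True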
""")
add_newdoc("bdtrik",
"""
bdtrik(y, n, p)
Inverse function to `bdtr` with respect to `k`.
Finds the number of successes `k` such that the sum of the terms 0 through
`k` of the Binomial probability density for `n` events with probability
`p` is equal to the given cumulative probability `y`.
Parameters
----------
y : array_like
Cumulative probability (probability of `k` or fewer successes in `n`
events).
n : array_like
Number of events (float).
p : array_like
Success probability (float).
Returns
-------
k : ndarray
The number of successes `k` such that `bdtr(k, n, p) = y`.
See also
--------
bdtr
Notes
-----
Formula 26.5.24 of [1]_ is used to reduce the binomial distribution to the
cumulative incomplete beta distribution.
Computation of `k` involves a search for a value that produces the desired
value of `y`. The search relies on the monotonicity of `y` with `k`.
Wrapper for the CDFLIB [2]_ Fortran routine `cdfbin`.
References
----------
.. [1] Milton Abramowitz and Irene A. Stegun, eds.
Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables. New York: Dover, 1972.
.. [2] Barry Brown, James Lovato, and Kathy Russell,
CDFLIB: Library of Fortran Routines for Cumulative Distribution
Functions, Inverses, and Other Parameters.
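Examples
--------
A round-trip sketch with `bdtr`: the number of successes used to produce
a cumulative probability is recovered up to numerical tolerance:
>>> import numpy as np
>>> from scipy.special import bdtr, bdtrik
>>> k, n, p = 2, 5, 0.3
>>> y = bdtr(k, n, p)
>>> np.allclose(bdtrik(y, n, p), k)
True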
""")
add_newdoc("bdtrin",
"""
bdtrin(k, y, p)
Inverse function to `bdtr` with respect to `n`.
Finds the number of events `n` such that the sum of the terms 0 through
`k` of the Binomial probability density for events with probability `p` is
equal to the given cumulative probability `y`.
Parameters
----------
k : array_like
Number of successes (float).
y : array_like
Cumulative probability (probability of `k` or fewer successes in `n`
events).
p : array_like
Success probability (float).
Returns
-------
n : ndarray
The number of events `n` such that `bdtr(k, n, p) = y`.
See also
--------
bdtr
Notes
-----
Formula 26.5.24 of [1]_ is used to reduce the binomial distribution to the
cumulative incomplete beta distribution.
Computation of `n` involves a search for a value that produces the desired
value of `y`. The search relies on the monotonicity of `y` with `n`.
Wrapper for the CDFLIB [2]_ Fortran routine `cdfbin`.
References
----------
.. [1] Milton Abramowitz and Irene A. Stegun, eds.
Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables. New York: Dover, 1972.
.. [2] Barry Brown, James Lovato, and Kathy Russell,
CDFLIB: Library of Fortran Routines for Cumulative Distribution
Functions, Inverses, and Other Parameters.
""")
add_newdoc("binom",
"""
binom(n, k)
Binomial coefficient
See Also
--------
comb : The number of combinations of N things taken k at a time.
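Examples
--------
For non-negative integer arguments this is the familiar "n choose k":
>>> from scipy.special import binom
>>> binom(4, 2)
6.0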
""")
add_newdoc("btdtria",
r"""
btdtria(p, b, x)
Inverse of `btdtr` with respect to `a`.
This is the inverse of the beta cumulative distribution function, `btdtr`,
considered as a function of `a`, returning the value of `a` for which
`btdtr(a, b, x) = p`, or
.. math::
p = \int_0^x \frac{\Gamma(a + b)}{\Gamma(a)\Gamma(b)} t^{a-1} (1-t)^{b-1}\,dt
Parameters
----------
p : array_like
Cumulative probability, in [0, 1].
b : array_like
Shape parameter (`b` > 0).
x : array_like
The quantile, in [0, 1].
Returns
-------
a : ndarray
The value of the shape parameter `a` such that `btdtr(a, b, x) = p`.
See Also
--------
btdtr : Cumulative distribution function of the beta distribution.
btdtri : Inverse with respect to `x`.
btdtrib : Inverse with respect to `b`.
Notes
-----
Wrapper for the CDFLIB [1]_ Fortran routine `cdfbet`.
The cumulative distribution function `p` is computed using a routine by
DiDinato and Morris [2]_. Computation of `a` involves a search for a value
that produces the desired value of `p`. The search relies on the
monotonicity of `p` with `a`.
References
----------
.. [1] Barry Brown, James Lovato, and Kathy Russell,
CDFLIB: Library of Fortran Routines for Cumulative Distribution
Functions, Inverses, and Other Parameters.
.. [2] DiDinato, A. R. and Morris, A. H.,
Algorithm 708: Significant Digit Computation of the Incomplete Beta
Function Ratios. ACM Trans. Math. Softw. 18 (1993), 360-373.
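Examples
--------
A round-trip sketch with `btdtr`: the shape parameter `a` used to produce
a cumulative probability is recovered up to numerical tolerance:
>>> import numpy as np
>>> from scipy.special import btdtr, btdtria
>>> a, b, x = 2.0, 3.0, 0.4
>>> p = btdtr(a, b, x)
>>> np.allclose(btdtria(p, b, x), a)
True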
""")
add_newdoc("btdtrib",
r"""
btdtrib(a, p, x)
Inverse of `btdtr` with respect to `b`.
This is the inverse of the beta cumulative distribution function, `btdtr`,
considered as a function of `b`, returning the value of `b` for which
`btdtr(a, b, x) = p`, or
.. math::
p = \int_0^x \frac{\Gamma(a + b)}{\Gamma(a)\Gamma(b)} t^{a-1} (1-t)^{b-1}\,dt
Parameters
----------
a : array_like
Shape parameter (`a` > 0).
p : array_like
Cumulative probability, in [0, 1].
x : array_like
The quantile, in [0, 1].
Returns
-------
b : ndarray
The value of the shape parameter `b` such that `btdtr(a, b, x) = p`.
See Also
--------
btdtr : Cumulative distribution function of the beta distribution.
btdtri : Inverse with respect to `x`.
btdtria : Inverse with respect to `a`.
Notes
-----
Wrapper for the CDFLIB [1]_ Fortran routine `cdfbet`.
The cumulative distribution function `p` is computed using a routine by
DiDinato and Morris [2]_. Computation of `b` involves a search for a value
that produces the desired value of `p`. The search relies on the
monotonicity of `p` with `b`.
References
----------
.. [1] Barry Brown, James Lovato, and Kathy Russell,
CDFLIB: Library of Fortran Routines for Cumulative Distribution
Functions, Inverses, and Other Parameters.
.. [2] DiDinato, A. R. and Morris, A. H.,
Algorithm 708: Significant Digit Computation of the Incomplete Beta
Function Ratios. ACM Trans. Math. Softw. 18 (1993), 360-373.
""")
add_newdoc("bei",
r"""
bei(x, out=None)
Kelvin function bei.
Defined as
.. math::
\mathrm{bei}(x) = \Im[J_0(x e^{3 \pi i / 4})]
where :math:`J_0` is the Bessel function of the first kind of
order zero (see `jv`). See [dlmf]_ for more details.
Parameters
----------
x : array_like
Real argument.
out : ndarray, optional
Optional output array for the function results.
Returns
-------
scalar or ndarray
Values of the Kelvin function.
See Also
--------
ber : the corresponding real part
beip : the derivative of bei
jv : Bessel function of the first kind
References
----------
.. [dlmf] NIST, Digital Library of Mathematical Functions,
https://dlmf.nist.gov/10.61
Examples
--------
It can be expressed using Bessel functions.
>>> import scipy.special as sc
>>> x = np.array([1.0, 2.0, 3.0, 4.0])
>>> sc.jv(0, x * np.exp(3 * np.pi * 1j / 4)).imag
array([0.24956604, 0.97229163, 1.93758679, 2.29269032])
>>> sc.bei(x)
array([0.24956604, 0.97229163, 1.93758679, 2.29269032])
""")
add_newdoc("beip",
r"""
beip(x, out=None)
Derivative of the Kelvin function bei.
Parameters
----------
x : array_like
Real argument.
out : ndarray, optional
Optional output array for the function results.
Returns
-------
scalar or ndarray
The values of the derivative of bei.
See Also
--------
bei
References
----------
.. [dlmf] NIST, Digital Library of Mathematical Functions,
https://dlmf.nist.gov/10#PT5
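Examples
--------
As an illustrative check, `beip` agrees with a central finite-difference
approximation of the derivative of `bei` (the tolerance easily covers the
finite-difference error):
>>> import numpy as np
>>> from scipy.special import bei, beip
>>> x, h = 1.5, 1e-6
>>> np.allclose(beip(x), (bei(x + h) - bei(x - h)) / (2 * h))
True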
""")
add_newdoc("ber",
r"""
ber(x, out=None)
Kelvin function ber.
Defined as
.. math::
\mathrm{ber}(x) = \Re[J_0(x e^{3 \pi i / 4})]
where :math:`J_0` is the Bessel function of the first kind of
order zero (see `jv`). See [dlmf]_ for more details.
Parameters
----------
x : array_like
Real argument.
out : ndarray, optional
Optional output array for the function results.
Returns
-------
scalar or ndarray
Values of the Kelvin function.
See Also
--------
bei : the corresponding imaginary part
berp : the derivative of ber
jv : Bessel function of the first kind
References
----------
.. [dlmf] NIST, Digital Library of Mathematical Functions,
https://dlmf.nist.gov/10.61
Examples
--------
It can be expressed using Bessel functions.
>>> import scipy.special as sc
>>> x = np.array([1.0, 2.0, 3.0, 4.0])
>>> sc.jv(0, x * np.exp(3 * np.pi * 1j / 4)).real
array([ 0.98438178, 0.75173418, -0.22138025, -2.56341656])
>>> sc.ber(x)
array([ 0.98438178, 0.75173418, -0.22138025, -2.56341656])
""")
add_newdoc("berp",
r"""
berp(x, out=None)
Derivative of the Kelvin function ber.
Parameters
----------
x : array_like
Real argument.
out : ndarray, optional
Optional output array for the function results.
Returns
-------
scalar or ndarray
The values of the derivative of ber.
See Also
--------
ber
References
----------
.. [dlmf] NIST, Digital Library of Mathematical Functions,
https://dlmf.nist.gov/10#PT5
""")
add_newdoc("besselpoly",
r"""
besselpoly(a, lmb, nu, out=None)
Weighted integral of the Bessel function of the first kind.
Computes
.. math::
\int_0^1 x^\lambda J_\nu(2 a x) \, dx
where :math:`J_\nu` is a Bessel function and :math:`\lambda=lmb`,
:math:`\nu=nu`.
Parameters
----------
a : array_like
Scale factor inside the Bessel function.
lmb : array_like
Power of `x`
nu : array_like
Order of the Bessel function.
out : ndarray, optional
Optional output array for the function results.
Returns
-------
scalar or ndarray
Value of the integral.
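Examples
--------
A small consistency check of the integral above against numerical
quadrature (the parameter values here are arbitrary, illustrative choices):
>>> from scipy.special import besselpoly, jv
>>> from scipy.integrate import quad
>>> a, lmb, nu = 1.5, 2.0, 0.0
>>> integral, _ = quad(lambda x: x**lmb * jv(nu, 2 * a * x), 0, 1)
>>> np.allclose(besselpoly(a, lmb, nu), integral)
True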
""")
add_newdoc("beta",
r"""
beta(a, b, out=None)
Beta function.
This function is defined in [1]_ as
.. math::
B(a, b) = \int_0^1 t^{a-1}(1-t)^{b-1}dt
= \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)},
where :math:`\Gamma` is the gamma function.
Parameters
----------
a, b : array-like
Real-valued arguments
out : ndarray, optional
Optional output array for the function result
Returns
-------
scalar or ndarray
Value of the beta function
See Also
--------
gamma : the gamma function
betainc : the incomplete beta function
betaln : the natural logarithm of the absolute
value of the beta function
References
----------
.. [1] NIST Digital Library of Mathematical Functions,
Eq. 5.12.1. https://dlmf.nist.gov/5.12
Examples
--------
>>> import scipy.special as sc
The beta function relates to the gamma function by the
definition given above:
>>> sc.beta(2, 3)
0.08333333333333333
>>> sc.gamma(2)*sc.gamma(3)/sc.gamma(2 + 3)
0.08333333333333333
As this relationship demonstrates, the beta function
is symmetric:
>>> sc.beta(1.7, 2.4)
0.16567527689031739
>>> sc.beta(2.4, 1.7)
0.16567527689031739
This function satisfies :math:`B(1, b) = 1/b`:
>>> sc.beta(1, 4)
0.25
""")
add_newdoc("betainc",
r"""
betainc(a, b, x, out=None)
Incomplete beta function.
Computes the incomplete beta function, defined as [1]_:
.. math::
I_x(a, b) = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)} \int_0^x
t^{a-1}(1-t)^{b-1}dt,
for :math:`0 \leq x \leq 1`.
Parameters
----------
a, b : array-like
Positive, real-valued parameters
x : array-like
Real-valued such that :math:`0 \leq x \leq 1`,
the upper limit of integration
out : ndarray, optional
Optional output array for the function values
Returns
-------
array-like
Value of the incomplete beta function
See Also
--------
beta : beta function
betaincinv : inverse of the incomplete beta function
Notes
-----
The incomplete beta function is also sometimes defined
without the `gamma` terms, in which case the above
definition is the so-called regularized incomplete beta
function. Under this definition, you can get the incomplete
beta function by multiplying the result of the SciPy
function by `beta`.
References
----------
.. [1] NIST Digital Library of Mathematical Functions
https://dlmf.nist.gov/8.17
Examples
--------
Let :math:`B(a, b)` be the `beta` function.
>>> import scipy.special as sc
The coefficient in terms of `gamma` is equal to
:math:`1/B(a, b)`. Also, when :math:`x=1`
the integral is equal to :math:`B(a, b)`.
Therefore, :math:`I_{x=1}(a, b) = 1` for any :math:`a, b`.
>>> sc.betainc(0.2, 3.5, 1.0)
1.0
It satisfies
:math:`I_x(a, b) = x^a F(a, 1-b, a+1, x)/ (aB(a, b))`,
where :math:`F` is the hypergeometric function `hyp2f1`:
>>> a, b, x = 1.4, 3.1, 0.5
>>> x**a * sc.hyp2f1(a, 1 - b, a + 1, x)/(a * sc.beta(a, b))
0.8148904036225295
>>> sc.betainc(a, b, x)
0.8148904036225296
This function satisfies the relationship
:math:`I_x(a, b) = 1 - I_{1-x}(b, a)`:
>>> sc.betainc(2.2, 3.1, 0.4)
0.49339638807619446
>>> 1 - sc.betainc(3.1, 2.2, 1 - 0.4)
0.49339638807619446
""")
add_newdoc("betaincinv",
r"""
betaincinv(a, b, y, out=None)
Inverse of the incomplete beta function.
Computes :math:`x` such that:
.. math::
y = I_x(a, b) = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}
\int_0^x t^{a-1}(1-t)^{b-1}dt,
where :math:`I_x` is the normalized incomplete beta
function `betainc` and
:math:`\Gamma` is the `gamma` function [1]_.
Parameters
----------
a, b : array-like
Positive, real-valued parameters
y : array-like
Real-valued input
out : ndarray, optional
Optional output array for function values
Returns
-------
array-like
Value of the inverse of the incomplete beta function
See Also
--------
betainc : incomplete beta function
gamma : gamma function
References
----------
.. [1] NIST Digital Library of Mathematical Functions
https://dlmf.nist.gov/8.17
Examples
--------
>>> import scipy.special as sc
This function is the inverse of `betainc` for fixed
values of :math:`a` and :math:`b`.
>>> a, b = 1.2, 3.1
>>> y = sc.betainc(a, b, 0.2)
>>> sc.betaincinv(a, b, y)
0.2
>>>
>>> a, b = 7.5, 0.4
>>> x = sc.betaincinv(a, b, 0.5)
>>> sc.betainc(a, b, x)
0.5
""")
add_newdoc("betaln",
"""
betaln(a, b)
Natural logarithm of absolute value of beta function.
Computes ``ln(abs(beta(a, b)))``.
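Examples
--------
A small check against the direct computation; `betaln` is useful when
``beta`` itself would overflow or underflow (arbitrary arguments):
>>> from scipy.special import betaln, beta
>>> np.allclose(betaln(2.5, 3.5), np.log(abs(beta(2.5, 3.5))))
True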
""")
add_newdoc("boxcox",
"""
boxcox(x, lmbda)
Compute the Box-Cox transformation.
The Box-Cox transformation is::
y = (x**lmbda - 1) / lmbda if lmbda != 0
log(x) if lmbda == 0
Returns `nan` if ``x < 0``.
Returns `-inf` if ``x == 0`` and ``lmbda < 0``.
Parameters
----------
x : array_like
Data to be transformed.
lmbda : array_like
Power parameter of the Box-Cox transform.
Returns
-------
y : array
Transformed data.
Notes
-----
.. versionadded:: 0.14.0
Examples
--------
>>> from scipy.special import boxcox
>>> boxcox([1, 4, 10], 2.5)
array([ 0. , 12.4 , 126.09110641])
>>> boxcox(2, [0, 1, 2])
array([ 0.69314718, 1. , 1.5 ])
""")
add_newdoc("boxcox1p",
"""
boxcox1p(x, lmbda)
Compute the Box-Cox transformation of 1 + `x`.
The Box-Cox transformation computed by `boxcox1p` is::
y = ((1+x)**lmbda - 1) / lmbda if lmbda != 0
log(1+x) if lmbda == 0
Returns `nan` if ``x < -1``.
Returns `-inf` if ``x == -1`` and ``lmbda < 0``.
Parameters
----------
x : array_like
Data to be transformed.
lmbda : array_like
Power parameter of the Box-Cox transform.
Returns
-------
y : array
Transformed data.
Notes
-----
.. versionadded:: 0.14.0
Examples
--------
>>> from scipy.special import boxcox1p
>>> boxcox1p(1e-4, [0, 0.5, 1])
array([ 9.99950003e-05, 9.99975001e-05, 1.00000000e-04])
>>> boxcox1p([0.01, 0.1], 0.25)
array([ 0.00996272, 0.09645476])
""")
add_newdoc("inv_boxcox",
"""
inv_boxcox(y, lmbda)
Compute the inverse of the Box-Cox transformation.
Find ``x`` such that::
y = (x**lmbda - 1) / lmbda if lmbda != 0
log(x) if lmbda == 0
Parameters
----------
y : array_like
Data to be transformed.
lmbda : array_like
Power parameter of the Box-Cox transform.
Returns
-------
x : array
Transformed data.
Notes
-----
.. versionadded:: 0.16.0
Examples
--------
>>> from scipy.special import boxcox, inv_boxcox
>>> y = boxcox([1, 4, 10], 2.5)
>>> inv_boxcox(y, 2.5)
array([1., 4., 10.])
""")
add_newdoc("inv_boxcox1p",
"""
inv_boxcox1p(y, lmbda)
Compute the inverse of the Box-Cox transformation.
Find ``x`` such that::
y = ((1+x)**lmbda - 1) / lmbda if lmbda != 0
log(1+x) if lmbda == 0
Parameters
----------
y : array_like
Data to be transformed.
lmbda : array_like
Power parameter of the Box-Cox transform.
Returns
-------
x : array
Transformed data.
Notes
-----
.. versionadded:: 0.16.0
Examples
--------
>>> from scipy.special import boxcox1p, inv_boxcox1p
>>> y = boxcox1p([1, 4, 10], 2.5)
>>> inv_boxcox1p(y, 2.5)
array([1., 4., 10.])
""")
add_newdoc("btdtr",
r"""
btdtr(a, b, x)
Cumulative distribution function of the beta distribution.
Returns the integral from zero to `x` of the beta probability density
function,
.. math::
I = \int_0^x \frac{\Gamma(a + b)}{\Gamma(a)\Gamma(b)} t^{a-1} (1-t)^{b-1}\,dt
where :math:`\Gamma` is the gamma function.
Parameters
----------
a : array_like
Shape parameter (a > 0).
b : array_like
Shape parameter (b > 0).
x : array_like
Upper limit of integration, in [0, 1].
Returns
-------
I : ndarray
Cumulative distribution function of the beta distribution with
parameters `a` and `b` at `x`.
See Also
--------
betainc
Notes
-----
This function is identical to the incomplete beta integral function
`betainc`.
Wrapper for the Cephes [1]_ routine `btdtr`.
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
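Examples
--------
As noted above, this is the same as the regularized incomplete beta
function `betainc` (arbitrary parameter values):
>>> import scipy.special as sc
>>> a, b, x = 2.5, 3.0, 0.7
>>> np.allclose(sc.btdtr(a, b, x), sc.betainc(a, b, x))
True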
""")
add_newdoc("btdtri",
r"""
btdtri(a, b, p)
The `p`-th quantile of the beta distribution.
This function is the inverse of the beta cumulative distribution function,
`btdtr`, returning the value of `x` for which `btdtr(a, b, x) = p`, or
.. math::
p = \int_0^x \frac{\Gamma(a + b)}{\Gamma(a)\Gamma(b)} t^{a-1} (1-t)^{b-1}\,dt
Parameters
----------
a : array_like
Shape parameter (`a` > 0).
b : array_like
Shape parameter (`b` > 0).
p : array_like
Cumulative probability, in [0, 1].
Returns
-------
x : ndarray
The quantile corresponding to `p`.
See Also
--------
betaincinv
btdtr
Notes
-----
The value of `x` is found by interval halving or Newton iterations.
Wrapper for the Cephes [1]_ routine `incbi`, which solves the equivalent
problem of finding the inverse of the incomplete beta integral.
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
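Examples
--------
A round trip through `btdtr` recovers the original quantile
(illustrative values):
>>> import scipy.special as sc
>>> a, b, x = 2.0, 3.5, 0.3
>>> p = sc.btdtr(a, b, x)
>>> np.allclose(sc.btdtri(a, b, p), x)
True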
""")
add_newdoc("cbrt",
"""
cbrt(x)
Element-wise cube root of `x`.
Parameters
----------
x : array_like
`x` must contain real numbers.
Returns
-------
float
The cube root of each value in `x`.
Examples
--------
>>> from scipy.special import cbrt
>>> cbrt(8)
2.0
>>> cbrt([-8, -3, 0.125, 1.331])
array([-2. , -1.44224957, 0.5 , 1.1 ])
""")
add_newdoc("chdtr",
r"""
chdtr(v, x, out=None)
Chi square cumulative distribution function.
Returns the area under the left tail (from 0 to `x`) of the Chi
square probability density function with `v` degrees of freedom:
.. math::
\frac{1}{2^{v/2} \Gamma(v/2)} \int_0^x t^{v/2 - 1} e^{-t/2} dt
Here :math:`\Gamma` is the Gamma function; see `gamma`. This
integral can be expressed in terms of the regularized lower
incomplete gamma function `gammainc` as
``gammainc(v / 2, x / 2)``. [1]_
Parameters
----------
v : array_like
Degrees of freedom.
x : array_like
Upper bound of the integral.
out : ndarray, optional
Optional output array for the function results.
Returns
-------
scalar or ndarray
Values of the cumulative distribution function.
See Also
--------
chdtrc, chdtri, chdtriv, gammainc
References
----------
.. [1] Chi-Square distribution,
https://www.itl.nist.gov/div898/handbook/eda/section3/eda3666.htm
Examples
--------
>>> import scipy.special as sc
It can be expressed in terms of the regularized lower incomplete
gamma function.
>>> v = 1
>>> x = np.arange(4)
>>> sc.chdtr(v, x)
array([0. , 0.68268949, 0.84270079, 0.91673548])
>>> sc.gammainc(v / 2, x / 2)
array([0. , 0.68268949, 0.84270079, 0.91673548])
""")
add_newdoc("chdtrc",
r"""
chdtrc(v, x, out=None)
Chi square survival function.
Returns the area under the right hand tail (from `x` to infinity)
of the Chi square probability density function with `v` degrees of
freedom:
.. math::
\frac{1}{2^{v/2} \Gamma(v/2)} \int_x^\infty t^{v/2 - 1} e^{-t/2} dt
Here :math:`\Gamma` is the Gamma function; see `gamma`. This
integral can be expressed in terms of the regularized upper
incomplete gamma function `gammaincc` as
``gammaincc(v / 2, x / 2)``. [1]_
Parameters
----------
v : array_like
Degrees of freedom.
x : array_like
Lower bound of the integral.
out : ndarray, optional
Optional output array for the function results.
Returns
-------
scalar or ndarray
Values of the survival function.
See Also
--------
chdtr, chdtri, chdtriv, gammaincc
References
----------
.. [1] Chi-Square distribution,
https://www.itl.nist.gov/div898/handbook/eda/section3/eda3666.htm
Examples
--------
>>> import scipy.special as sc
It can be expressed in terms of the regularized upper incomplete
gamma function.
>>> v = 1
>>> x = np.arange(4)
>>> sc.chdtrc(v, x)
array([1. , 0.31731051, 0.15729921, 0.08326452])
>>> sc.gammaincc(v / 2, x / 2)
array([1. , 0.31731051, 0.15729921, 0.08326452])
""")
add_newdoc("chdtri",
"""
chdtri(v, p, out=None)
Inverse to `chdtrc` with respect to `x`.
Returns `x` such that ``chdtrc(v, x) == p``.
Parameters
----------
v : array_like
Degrees of freedom.
p : array_like
Probability.
out : ndarray, optional
Optional output array for the function results.
Returns
-------
x : scalar or ndarray
Value so that the probability a Chi square random variable
with `v` degrees of freedom is greater than `x` equals `p`.
See Also
--------
chdtrc, chdtr, chdtriv
References
----------
.. [1] Chi-Square distribution,
https://www.itl.nist.gov/div898/handbook/eda/section3/eda3666.htm
Examples
--------
>>> import scipy.special as sc
It inverts `chdtrc`.
>>> v, p = 1, 0.3
>>> sc.chdtrc(v, sc.chdtri(v, p))
0.3
>>> x = 1
>>> sc.chdtri(v, sc.chdtrc(v, x))
1.0
""")
add_newdoc("chdtriv",
"""
chdtriv(p, x, out=None)
Inverse to `chdtr` with respect to `v`.
Returns `v` such that ``chdtr(v, x) == p``.
Parameters
----------
p : array_like
Probability that the Chi square random variable is less than
or equal to `x`.
x : array_like
Nonnegative input.
out : ndarray, optional
Optional output array for the function results.
Returns
-------
scalar or ndarray
Degrees of freedom.
See Also
--------
chdtr, chdtrc, chdtri
References
----------
.. [1] Chi-Square distribution,
https://www.itl.nist.gov/div898/handbook/eda/section3/eda3666.htm
Examples
--------
>>> import scipy.special as sc
It inverts `chdtr`.
>>> p, x = 0.5, 1
>>> sc.chdtr(sc.chdtriv(p, x), x)
0.5000000000202172
>>> v = 1
>>> sc.chdtriv(sc.chdtr(v, x), x)
1.0000000000000013
""")
add_newdoc("chndtr",
"""
chndtr(x, df, nc)
Non-central chi square cumulative distribution function
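Examples
--------
A round trip with its inverse with respect to `x`, `chndtrix`
(illustrative values):
>>> import scipy.special as sc
>>> x, df, nc = 4.0, 3.0, 1.5
>>> p = sc.chndtr(x, df, nc)
>>> np.allclose(sc.chndtrix(p, df, nc), x)
True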
""")
add_newdoc("chndtrix",
"""
chndtrix(p, df, nc)
Inverse to `chndtr` vs `x`
""")
add_newdoc("chndtridf",
"""
chndtridf(x, p, nc)
Inverse to `chndtr` vs `df`
""")
add_newdoc("chndtrinc",
"""
chndtrinc(x, df, p)
Inverse to `chndtr` vs `nc`
""")
add_newdoc("cosdg",
"""
cosdg(x, out=None)
Cosine of the angle `x` given in degrees.
Parameters
----------
x : array_like
Angle, given in degrees.
out : ndarray, optional
Optional output array for the function results.
Returns
-------
scalar or ndarray
Cosine of the input.
See Also
--------
sindg, tandg, cotdg
Examples
--------
>>> import scipy.special as sc
It is more accurate than using cosine directly.
>>> x = 90 + 180 * np.arange(3)
>>> sc.cosdg(x)
array([-0., 0., -0.])
>>> np.cos(x * np.pi / 180)
array([ 6.1232340e-17, -1.8369702e-16, 3.0616170e-16])
""")
add_newdoc("cosm1",
"""
cosm1(x, out=None)
cos(x) - 1 for use when `x` is near zero.
Parameters
----------
x : array_like
Real valued argument.
out : ndarray, optional
Optional output array for the function results.
Returns
-------
scalar or ndarray
Values of ``cos(x) - 1``.
See Also
--------
expm1, log1p
Examples
--------
>>> import scipy.special as sc
It is more accurate than computing ``cos(x) - 1`` directly for
``x`` around 0.
>>> x = 1e-30
>>> np.cos(x) - 1
0.0
>>> sc.cosm1(x)
-5.0000000000000005e-61
""")
add_newdoc("cotdg",
"""
cotdg(x, out=None)
Cotangent of the angle `x` given in degrees.
Parameters
----------
x : array_like
Angle, given in degrees.
out : ndarray, optional
Optional output array for the function results.
Returns
-------
scalar or ndarray
Cotangent at the input.
See Also
--------
sindg, cosdg, tandg
Examples
--------
>>> import scipy.special as sc
It is more accurate than using cotangent directly.
>>> x = 90 + 180 * np.arange(3)
>>> sc.cotdg(x)
array([0., 0., 0.])
>>> 1 / np.tan(x * np.pi / 180)
array([6.1232340e-17, 1.8369702e-16, 3.0616170e-16])
""")
add_newdoc("dawsn",
"""
dawsn(x)
Dawson's integral.
Computes::
exp(-x**2) * integral(exp(t**2), t=0..x).
See Also
--------
wofz, erf, erfc, erfcx, erfi
References
----------
.. [1] Steven G. Johnson, Faddeeva W function implementation.
http://ab-initio.mit.edu/Faddeeva
Examples
--------
>>> from scipy import special
>>> import matplotlib.pyplot as plt
>>> x = np.linspace(-15, 15, num=1000)
>>> plt.plot(x, special.dawsn(x))
>>> plt.xlabel('$x$')
>>> plt.ylabel('$dawsn(x)$')
>>> plt.show()
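It can also be expressed in terms of `erfi`; for instance:
>>> np.allclose(special.dawsn(1.0),
...             0.5 * np.sqrt(np.pi) * np.exp(-1.0) * special.erfi(1.0))
True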
""")
add_newdoc("ellipe",
r"""
ellipe(m)
Complete elliptic integral of the second kind
This function is defined as
.. math:: E(m) = \int_0^{\pi/2} [1 - m \sin(t)^2]^{1/2} dt
Parameters
----------
m : array_like
Defines the parameter of the elliptic integral.
Returns
-------
E : ndarray
Value of the elliptic integral.
Notes
-----
Wrapper for the Cephes [1]_ routine `ellpe`.
For `m > 0` the computation uses the approximation,
.. math:: E(m) \approx P(1-m) - (1-m) \log(1-m) Q(1-m),
where :math:`P` and :math:`Q` are tenth-order polynomials. For
`m < 0`, the relation
.. math:: E(m) = E(m/(m - 1)) \sqrt{1-m}
is used.
The parameterization in terms of :math:`m` follows that of section
17.2 in [2]_. Other parameterizations in terms of the
complementary parameter :math:`1 - m`, modular angle
:math:`\sin^2(\alpha) = m`, or modulus :math:`k^2 = m` are also
used, so be careful that you choose the correct parameter.
See Also
--------
ellipkm1 : Complete elliptic integral of the first kind, near `m` = 1
ellipk : Complete elliptic integral of the first kind
ellipkinc : Incomplete elliptic integral of the first kind
ellipeinc : Incomplete elliptic integral of the second kind
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
.. [2] Milton Abramowitz and Irene A. Stegun, eds.
Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables. New York: Dover, 1972.
Examples
--------
This function is used in finding the circumference of an
ellipse with semi-major axis `a` and semi-minor axis `b`.
>>> from scipy import special
>>> a = 3.5
>>> b = 2.1
>>> e_sq = 1.0 - b**2/a**2 # eccentricity squared
Then the circumference is found using the following:
>>> C = 4*a*special.ellipe(e_sq) # circumference formula
>>> C
17.868899204378693
When `a` and `b` are the same (meaning eccentricity is 0),
this reduces to the circumference of a circle.
>>> 4*a*special.ellipe(0.0) # formula for ellipse with a = b
21.991148575128552
>>> 2*np.pi*a # formula for circle of radius a
21.991148575128552
""")
add_newdoc("ellipeinc",
r"""
ellipeinc(phi, m)
Incomplete elliptic integral of the second kind
This function is defined as
.. math:: E(\phi, m) = \int_0^{\phi} [1 - m \sin(t)^2]^{1/2} dt
Parameters
----------
phi : array_like
amplitude of the elliptic integral.
m : array_like
parameter of the elliptic integral.
Returns
-------
E : ndarray
Value of the elliptic integral.
Notes
-----
Wrapper for the Cephes [1]_ routine `ellie`.
Computation uses arithmetic-geometric means algorithm.
The parameterization in terms of :math:`m` follows that of section
17.2 in [2]_. Other parameterizations in terms of the
complementary parameter :math:`1 - m`, modular angle
:math:`\sin^2(\alpha) = m`, or modulus :math:`k^2 = m` are also
used, so be careful that you choose the correct parameter.
See Also
--------
ellipkm1 : Complete elliptic integral of the first kind, near `m` = 1
ellipk : Complete elliptic integral of the first kind
ellipkinc : Incomplete elliptic integral of the first kind
ellipe : Complete elliptic integral of the second kind
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
.. [2] Milton Abramowitz and Irene A. Stegun, eds.
Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables. New York: Dover, 1972.
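Examples
--------
For ``phi = pi/2`` the incomplete integral reduces to the complete
integral `ellipe` (illustrative value of `m`):
>>> import scipy.special as sc
>>> m = 0.5
>>> np.allclose(sc.ellipeinc(np.pi / 2, m), sc.ellipe(m))
True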
""")
add_newdoc("ellipj",
"""
ellipj(u, m)
Jacobian elliptic functions
Calculates the Jacobian elliptic functions of parameter `m` between
0 and 1, and real argument `u`.
Parameters
----------
m : array_like
Parameter.
u : array_like
Argument.
Returns
-------
sn, cn, dn, ph : ndarrays
The returned functions::
sn(u|m), cn(u|m), dn(u|m)
The value `ph` is such that if `u = ellipkinc(ph, m)`,
then `sn(u|m) = sin(ph)` and `cn(u|m) = cos(ph)`.
Notes
-----
Wrapper for the Cephes [1]_ routine `ellpj`.
These functions are periodic, with quarter-period on the real axis
equal to the complete elliptic integral `ellipk(m)`.
Relation to incomplete elliptic integral: If `u = ellipkinc(phi,m)`, then
`sn(u|m) = sin(phi)`, and `cn(u|m) = cos(phi)`. The `phi` is called
the amplitude of `u`.
Computation is by means of the arithmetic-geometric mean algorithm,
except when `m` is within 1e-9 of 0 or 1. In the latter case with `m`
close to 1, the approximation applies only for `phi < pi/2`.
See also
--------
ellipk : Complete elliptic integral of the first kind
ellipkinc : Incomplete elliptic integral of the first kind
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
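Examples
--------
A sketch of two basic identities: ``sn**2 + cn**2 = 1`` and the relation
to `ellipkinc` described above (arbitrary argument and parameter):
>>> import scipy.special as sc
>>> u, m = 0.3, 0.7
>>> sn, cn, dn, ph = sc.ellipj(u, m)
>>> np.allclose(sn**2 + cn**2, 1.0)
True
>>> np.allclose(sc.ellipkinc(ph, m), u)
True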
""")
add_newdoc("ellipkm1",
"""
ellipkm1(p)
Complete elliptic integral of the first kind around `m` = 1
This function is defined as
.. math:: K(p) = \\int_0^{\\pi/2} [1 - m \\sin(t)^2]^{-1/2} dt
where `m = 1 - p`.
Parameters
----------
p : array_like
Defines the parameter of the elliptic integral as `m = 1 - p`.
Returns
-------
K : ndarray
Value of the elliptic integral.
Notes
-----
Wrapper for the Cephes [1]_ routine `ellpk`.
For `p <= 1`, computation uses the approximation,
.. math:: K(p) \\approx P(p) - \\log(p) Q(p),
where :math:`P` and :math:`Q` are tenth-order polynomials. The
argument `p` is used internally rather than `m` so that the logarithmic
singularity at `m = 1` will be shifted to the origin; this preserves
maximum accuracy. For `p > 1`, the identity
.. math:: K(p) = K(1/p)/\\sqrt{p}
is used.
See Also
--------
ellipk : Complete elliptic integral of the first kind
ellipkinc : Incomplete elliptic integral of the first kind
ellipe : Complete elliptic integral of the second kind
ellipeinc : Incomplete elliptic integral of the second kind
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
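Examples
--------
By definition it agrees with `ellipk` evaluated at ``m = 1 - p`` (away
from the singularity at ``m = 1``):
>>> import scipy.special as sc
>>> p = 0.1
>>> np.allclose(sc.ellipkm1(p), sc.ellipk(1 - p))
True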
""")
add_newdoc("ellipk",
r"""
ellipk(m)
Complete elliptic integral of the first kind.
This function is defined as
.. math:: K(m) = \int_0^{\pi/2} [1 - m \sin(t)^2]^{-1/2} dt
Parameters
----------
m : array_like
The parameter of the elliptic integral.
Returns
-------
K : array_like
Value of the elliptic integral.
Notes
-----
For more precision around point m = 1, use `ellipkm1`, which this
function calls.
The parameterization in terms of :math:`m` follows that of section
17.2 in [1]_. Other parameterizations in terms of the
complementary parameter :math:`1 - m`, modular angle
:math:`\sin^2(\alpha) = m`, or modulus :math:`k^2 = m` are also
used, so be careful that you choose the correct parameter.
See Also
--------
ellipkm1 : Complete elliptic integral of the first kind around m = 1
ellipkinc : Incomplete elliptic integral of the first kind
ellipe : Complete elliptic integral of the second kind
ellipeinc : Incomplete elliptic integral of the second kind
References
----------
.. [1] Milton Abramowitz and Irene A. Stegun, eds.
Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables. New York: Dover, 1972.
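Examples
--------
At ``m = 0`` the integrand is constant, so :math:`K(0) = \pi/2`:
>>> import scipy.special as sc
>>> np.allclose(sc.ellipk(0.0), np.pi / 2)
True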
""")
add_newdoc("ellipkinc",
r"""
ellipkinc(phi, m)
Incomplete elliptic integral of the first kind
This function is defined as
.. math:: K(\phi, m) = \int_0^{\phi} [1 - m \sin(t)^2]^{-1/2} dt
This function is also called `F(phi, m)`.
Parameters
----------
phi : array_like
amplitude of the elliptic integral
m : array_like
parameter of the elliptic integral
Returns
-------
K : ndarray
Value of the elliptic integral
Notes
-----
Wrapper for the Cephes [1]_ routine `ellik`. The computation is
carried out using the arithmetic-geometric mean algorithm.
The parameterization in terms of :math:`m` follows that of section
17.2 in [2]_. Other parameterizations in terms of the
complementary parameter :math:`1 - m`, modular angle
:math:`\sin^2(\alpha) = m`, or modulus :math:`k^2 = m` are also
used, so be careful that you choose the correct parameter.
See Also
--------
ellipkm1 : Complete elliptic integral of the first kind, near `m` = 1
ellipk : Complete elliptic integral of the first kind
ellipe : Complete elliptic integral of the second kind
ellipeinc : Incomplete elliptic integral of the second kind
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
.. [2] Milton Abramowitz and Irene A. Stegun, eds.
Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables. New York: Dover, 1972.
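Examples
--------
For ``phi = pi/2`` it reduces to the complete integral `ellipk`
(illustrative value of `m`):
>>> import scipy.special as sc
>>> m = 0.5
>>> np.allclose(sc.ellipkinc(np.pi / 2, m), sc.ellipk(m))
True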
""")
add_newdoc("entr",
r"""
entr(x)
Elementwise function for computing entropy.
.. math:: \text{entr}(x) = \begin{cases} - x \log(x) & x > 0 \\ 0 & x = 0 \\ -\infty & \text{otherwise} \end{cases}
Parameters
----------
x : ndarray
Input array.
Returns
-------
res : ndarray
The value of the elementwise entropy function at the given points `x`.
See Also
--------
kl_div, rel_entr
Notes
-----
This function is concave.
.. versionadded:: 0.15.0
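Examples
--------
A quick check against the definition for a positive argument:
>>> import scipy.special as sc
>>> x = 0.5
>>> np.allclose(sc.entr(x), -x * np.log(x))
True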
""")
add_newdoc("erf",
"""
erf(z)
Returns the error function of complex argument.
It is defined as ``2/sqrt(pi)*integral(exp(-t**2), t=0..z)``.
Parameters
----------
x : ndarray
Input array.
Returns
-------
res : ndarray
The values of the error function at the given points `x`.
See Also
--------
erfc, erfinv, erfcinv, wofz, erfcx, erfi
Notes
-----
The cumulative distribution function of the standard normal distribution is given by
``Phi(z) = 1/2[1 + erf(z/sqrt(2))]``.
References
----------
.. [1] https://en.wikipedia.org/wiki/Error_function
.. [2] Milton Abramowitz and Irene A. Stegun, eds.
Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables. New York: Dover,
1972. http://www.math.sfu.ca/~cbm/aands/page_297.htm
.. [3] Steven G. Johnson, Faddeeva W function implementation.
http://ab-initio.mit.edu/Faddeeva
Examples
--------
>>> from scipy import special
>>> import matplotlib.pyplot as plt
>>> x = np.linspace(-3, 3)
>>> plt.plot(x, special.erf(x))
>>> plt.xlabel('$x$')
>>> plt.ylabel('$erf(x)$')
>>> plt.show()
""")
add_newdoc("erfc",
"""
erfc(x, out=None)
Complementary error function, ``1 - erf(x)``.
Parameters
----------
x : array_like
Real or complex valued argument
out : ndarray, optional
Optional output array for the function results
Returns
-------
scalar or ndarray
Values of the complementary error function
See Also
--------
erf, erfi, erfcx, dawsn, wofz
References
----------
.. [1] Steven G. Johnson, Faddeeva W function implementation.
http://ab-initio.mit.edu/Faddeeva
Examples
--------
>>> from scipy import special
>>> import matplotlib.pyplot as plt
>>> x = np.linspace(-3, 3)
>>> plt.plot(x, special.erfc(x))
>>> plt.xlabel('$x$')
>>> plt.ylabel('$erfc(x)$')
>>> plt.show()
""")
add_newdoc("erfi",
"""
erfi(z, out=None)
Imaginary error function, ``-i erf(i z)``.
Parameters
----------
z : array_like
Real or complex valued argument
out : ndarray, optional
Optional output array for the function results
Returns
-------
scalar or ndarray
Values of the imaginary error function
See Also
--------
erf, erfc, erfcx, dawsn, wofz
Notes
-----
.. versionadded:: 0.12.0
References
----------
.. [1] Steven G. Johnson, Faddeeva W function implementation.
http://ab-initio.mit.edu/Faddeeva
Examples
--------
>>> from scipy import special
>>> import matplotlib.pyplot as plt
>>> x = np.linspace(-3, 3)
>>> plt.plot(x, special.erfi(x))
>>> plt.xlabel('$x$')
>>> plt.ylabel('$erfi(x)$')
>>> plt.show()
""")
add_newdoc("erfcx",
"""
erfcx(x, out=None)
Scaled complementary error function, ``exp(x**2) * erfc(x)``.
Parameters
----------
x : array_like
Real or complex valued argument
out : ndarray, optional
Optional output array for the function results
Returns
-------
scalar or ndarray
Values of the scaled complementary error function
See Also
--------
erf, erfc, erfi, dawsn, wofz
Notes
-----
.. versionadded:: 0.12.0
References
----------
.. [1] Steven G. Johnson, Faddeeva W function implementation.
http://ab-initio.mit.edu/Faddeeva
Examples
--------
>>> from scipy import special
>>> import matplotlib.pyplot as plt
>>> x = np.linspace(-3, 3)
>>> plt.plot(x, special.erfcx(x))
>>> plt.xlabel('$x$')
>>> plt.ylabel('$erfcx(x)$')
>>> plt.show()
""")
add_newdoc("erfinv",
"""Inverse of the error function.
Computes the inverse of the error function.
In the complex domain, there is no unique complex number w satisfying
erf(w) = z, so a true inverse function would be multivalued. When the
domain is restricted to the real line, -1 < x < 1, there is a unique real
number satisfying erf(erfinv(x)) = x.
Parameters
----------
y : ndarray
Argument at which to evaluate. Domain: [-1, 1]
Returns
-------
erfinv : ndarray
The inverse of erf of y, element-wise.
See Also
--------
erf : Error function of a complex argument
erfc : Complementary error function, ``1 - erf(x)``
erfcinv : Inverse of the complementary error function
Examples
--------
1) evaluating a float number
>>> from scipy import special
>>> special.erfinv(0.5)
0.4769362762044698
2) evaluating an ndarray
>>> from scipy import special
>>> y = np.linspace(-1.0, 1.0, num=10)
>>> special.erfinv(y)
array([ -inf, -0.86312307, -0.5407314 , -0.30457019, -0.0987901 ,
0.0987901 , 0.30457019, 0.5407314 , 0.86312307, inf])
""")
add_newdoc("erfcinv",
"""Inverse of the complementary error function.
Computes the inverse of the complementary error function.
In the complex domain, there is no unique complex number w satisfying
erfc(w) = z, so a true inverse function would be multivalued. When the
domain is restricted to the real line, 0 < x < 2, there is a unique real
number satisfying erfc(erfcinv(x)) = x.
It is related to the inverse of the error function by erfcinv(1 - x) = erfinv(x).
Parameters
----------
y : ndarray
Argument at which to evaluate. Domain: [0, 2]
Returns
-------
erfcinv : ndarray
The inverse of erfc of y, element-wise
See Also
--------
erf : Error function of a complex argument
erfc : Complementary error function, ``1 - erf(x)``
erfinv : Inverse of the error function
Examples
--------
1) evaluating a float number
>>> from scipy import special
>>> special.erfcinv(0.5)
0.4769362762044698
2) evaluating an ndarray
>>> from scipy import special
>>> y = np.linspace(0.0, 2.0, num=11)
>>> special.erfcinv(y)
array([ inf, 0.9061938 , 0.59511608, 0.37080716, 0.17914345,
-0. , -0.17914345, -0.37080716, -0.59511608, -0.9061938 ,
-inf])
""")
add_newdoc("eval_jacobi",
r"""
eval_jacobi(n, alpha, beta, x, out=None)
Evaluate Jacobi polynomial at a point.
The Jacobi polynomials can be defined via the Gauss hypergeometric
function :math:`{}_2F_1` as
.. math::
P_n^{(\alpha, \beta)}(x) = \frac{(\alpha + 1)_n}{\Gamma(n + 1)}
{}_2F_1(-n, 1 + \alpha + \beta + n; \alpha + 1; (1 - x)/2)
where :math:`(\cdot)_n` is the Pochhammer symbol; see `poch`. When
:math:`n` is an integer the result is a polynomial of degree
:math:`n`. See 22.5.42 in [AS]_ for details.
Parameters
----------
n : array_like
Degree of the polynomial. If not an integer the result is
determined via the relation to the Gauss hypergeometric
function.
alpha : array_like
Parameter
beta : array_like
Parameter
x : array_like
Points at which to evaluate the polynomial
Returns
-------
P : ndarray
Values of the Jacobi polynomial
See Also
--------
roots_jacobi : roots and quadrature weights of Jacobi polynomials
jacobi : Jacobi polynomial object
hyp2f1 : Gauss hypergeometric function
References
----------
.. [AS] Milton Abramowitz and Irene A. Stegun, eds.
Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables. New York: Dover, 1972.
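Examples
--------
For :math:`\alpha = \beta = 0` the Jacobi polynomials reduce to the
Legendre polynomials; a quick illustrative check:
>>> import scipy.special as sc
>>> x = np.linspace(-1, 1, 5)
>>> np.allclose(sc.eval_jacobi(2, 0, 0, x), sc.eval_legendre(2, x))
True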
""")
add_newdoc("eval_sh_jacobi",
r"""
eval_sh_jacobi(n, p, q, x, out=None)
Evaluate shifted Jacobi polynomial at a point.
Defined by
.. math::
G_n^{(p, q)}(x)
= \binom{2n + p - 1}{n}^{-1} P_n^{(p - q, q - 1)}(2x - 1),
where :math:`P_n^{(\cdot, \cdot)}` is the n-th Jacobi
polynomial. See 22.5.2 in [AS]_ for details.
Parameters
----------
n : int
Degree of the polynomial. If not an integer, the result is
determined via the relation to `binom` and `eval_jacobi`.
p : float
Parameter
q : float
Parameter
x : float
Points at which to evaluate the polynomial
Returns
-------
G : ndarray
Values of the shifted Jacobi polynomial.
See Also
--------
roots_sh_jacobi : roots and quadrature weights of shifted Jacobi
polynomials
sh_jacobi : shifted Jacobi polynomial object
eval_jacobi : evaluate Jacobi polynomials
References
----------
.. [AS] Milton Abramowitz and Irene A. Stegun, eds.
Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables. New York: Dover, 1972.
""")
add_newdoc("eval_gegenbauer",
r"""
eval_gegenbauer(n, alpha, x, out=None)
Evaluate Gegenbauer polynomial at a point.
The Gegenbauer polynomials can be defined via the Gauss
hypergeometric function :math:`{}_2F_1` as
.. math::
C_n^{(\alpha)}(x) = \frac{(2\alpha)_n}{\Gamma(n + 1)}
{}_2F_1(-n, 2\alpha + n; \alpha + 1/2; (1 - x)/2).
When :math:`n` is an integer the result is a polynomial of degree
:math:`n`. See 22.5.46 in [AS]_ for details.
Parameters
----------
n : array_like
Degree of the polynomial. If not an integer, the result is
determined via the relation to the Gauss hypergeometric
function.
alpha : array_like
Parameter
x : array_like
Points at which to evaluate the Gegenbauer polynomial
Returns
-------
C : ndarray
Values of the Gegenbauer polynomial
See Also
--------
roots_gegenbauer : roots and quadrature weights of Gegenbauer
polynomials
gegenbauer : Gegenbauer polynomial object
hyp2f1 : Gauss hypergeometric function
References
----------
.. [AS] Milton Abramowitz and Irene A. Stegun, eds.
Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables. New York: Dover, 1972.
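Examples
--------
For :math:`\alpha = 1` the Gegenbauer polynomials reduce to the Chebyshev
polynomials of the second kind; an illustrative check:
>>> import scipy.special as sc
>>> x = np.linspace(-1, 1, 5)
>>> np.allclose(sc.eval_gegenbauer(3, 1, x), sc.eval_chebyu(3, x))
True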
""")
add_newdoc("eval_chebyt",
r"""
eval_chebyt(n, x, out=None)
Evaluate Chebyshev polynomial of the first kind at a point.
The Chebyshev polynomials of the first kind can be defined via the
Gauss hypergeometric function :math:`{}_2F_1` as
.. math::
T_n(x) = {}_2F_1(n, -n; 1/2; (1 - x)/2).
When :math:`n` is an integer the result is a polynomial of degree
:math:`n`. See 22.5.47 in [AS]_ for details.
Parameters
----------
n : array_like
Degree of the polynomial. If not an integer, the result is
determined via the relation to the Gauss hypergeometric
function.
x : array_like
Points at which to evaluate the Chebyshev polynomial
Returns
-------
T : ndarray
Values of the Chebyshev polynomial
See Also
--------
roots_chebyt : roots and quadrature weights of Chebyshev
polynomials of the first kind
chebyt : Chebyshev polynomial object
eval_chebyu : evaluate Chebyshev polynomials of the second kind
hyp2f1 : Gauss hypergeometric function
numpy.polynomial.chebyshev.Chebyshev : Chebyshev series
Notes
-----
This routine is numerically stable for `x` in ``[-1, 1]`` at least
up to order ``10000``.
References
----------
.. [AS] Milton Abramowitz and Irene A. Stegun, eds.
Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables. New York: Dover, 1972.
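Examples
--------
A comparison with the explicit polynomial :math:`T_3(x) = 4x^3 - 3x`:
>>> import scipy.special as sc
>>> x = np.linspace(-1, 1, 5)
>>> np.allclose(sc.eval_chebyt(3, x), 4 * x**3 - 3 * x)
True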
""")
add_newdoc("eval_chebyu",
r"""
eval_chebyu(n, x, out=None)
Evaluate Chebyshev polynomial of the second kind at a point.
The Chebyshev polynomials of the second kind can be defined via
the Gauss hypergeometric function :math:`{}_2F_1` as
.. math::
U_n(x) = (n + 1) {}_2F_1(-n, n + 2; 3/2; (1 - x)/2).
When :math:`n` is an integer the result is a polynomial of degree
:math:`n`. See 22.5.48 in [AS]_ for details.
Parameters
----------
n : array_like
Degree of the polynomial. If not an integer, the result is
determined via the relation to the Gauss hypergeometric
function.
x : array_like
Points at which to evaluate the Chebyshev polynomial
Returns
-------
U : ndarray
Values of the Chebyshev polynomial
See Also
--------
roots_chebyu : roots and quadrature weights of Chebyshev
polynomials of the second kind
chebyu : Chebyshev polynomial object
eval_chebyt : evaluate Chebyshev polynomials of the first kind
hyp2f1 : Gauss hypergeometric function
References
----------
.. [AS] Milton Abramowitz and Irene A. Stegun, eds.
Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables. New York: Dover, 1972.
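Examples
--------
A comparison with the explicit polynomial :math:`U_3(x) = 8x^3 - 4x`:
>>> import scipy.special as sc
>>> x = np.linspace(-1, 1, 5)
>>> np.allclose(sc.eval_chebyu(3, x), 8 * x**3 - 4 * x)
True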
""")
add_newdoc("eval_chebys",
r"""
eval_chebys(n, x, out=None)
Evaluate Chebyshev polynomial of the second kind on [-2, 2] at a
point.
These polynomials are defined as
.. math::
S_n(x) = U_n(x/2)
where :math:`U_n` is a Chebyshev polynomial of the second
kind. See 22.5.13 in [AS]_ for details.
Parameters
----------
n : array_like
Degree of the polynomial. If not an integer, the result is
determined via the relation to `eval_chebyu`.
x : array_like
Points at which to evaluate the Chebyshev polynomial
Returns
-------
S : ndarray
Values of the Chebyshev polynomial
See Also
--------
roots_chebys : roots and quadrature weights of Chebyshev
polynomials of the second kind on [-2, 2]
chebys : Chebyshev polynomial object
eval_chebyu : evaluate Chebyshev polynomials of the second kind
References
----------
.. [AS] Milton Abramowitz and Irene A. Stegun, eds.
Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables. New York: Dover, 1972.
Examples
--------
>>> import scipy.special as sc
They are a scaled version of the Chebyshev polynomials of the
second kind.
>>> x = np.linspace(-2, 2, 6)
>>> sc.eval_chebys(3, x)
array([-4. , 0.672, 0.736, -0.736, -0.672, 4. ])
>>> sc.eval_chebyu(3, x / 2)
array([-4. , 0.672, 0.736, -0.736, -0.672, 4. ])
""")
add_newdoc("eval_chebyc",
r"""
eval_chebyc(n, x, out=None)
Evaluate Chebyshev polynomial of the first kind on [-2, 2] at a
point.
These polynomials are defined as
.. math::
C_n(x) = 2 T_n(x/2)
where :math:`T_n` is a Chebyshev polynomial of the first kind. See
22.5.11 in [AS]_ for details.
Parameters
----------
n : array_like
Degree of the polynomial. If not an integer, the result is
determined via the relation to `eval_chebyt`.
x : array_like
Points at which to evaluate the Chebyshev polynomial
Returns
-------
C : ndarray
Values of the Chebyshev polynomial
See Also
--------
roots_chebyc : roots and quadrature weights of Chebyshev
polynomials of the first kind on [-2, 2]
chebyc : Chebyshev polynomial object
numpy.polynomial.chebyshev.Chebyshev : Chebyshev series
eval_chebyt : evaluate Chebyshev polynomials of the first kind
References
----------
.. [AS] Milton Abramowitz and Irene A. Stegun, eds.
Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables. New York: Dover, 1972.
Examples
--------
>>> import scipy.special as sc
They are a scaled version of the Chebyshev polynomials of the
first kind.
>>> x = np.linspace(-2, 2, 6)
>>> sc.eval_chebyc(3, x)
array([-2. , 1.872, 1.136, -1.136, -1.872, 2. ])
>>> 2 * sc.eval_chebyt(3, x / 2)
array([-2. , 1.872, 1.136, -1.136, -1.872, 2. ])
""")
add_newdoc("eval_sh_chebyt",
r"""
eval_sh_chebyt(n, x, out=None)
Evaluate shifted Chebyshev polynomial of the first kind at a
point.
These polynomials are defined as
.. math::
T_n^*(x) = T_n(2x - 1)
where :math:`T_n` is a Chebyshev polynomial of the first kind. See
22.5.14 in [AS]_ for details.
Parameters
----------
n : array_like
Degree of the polynomial. If not an integer, the result is
determined via the relation to `eval_chebyt`.
x : array_like
Points at which to evaluate the shifted Chebyshev polynomial
Returns
-------
T : ndarray
Values of the shifted Chebyshev polynomial
See Also
--------
roots_sh_chebyt : roots and quadrature weights of shifted
Chebyshev polynomials of the first kind
sh_chebyt : shifted Chebyshev polynomial object
eval_chebyt : evaluate Chebyshev polynomials of the first kind
numpy.polynomial.chebyshev.Chebyshev : Chebyshev series
References
----------
.. [AS] Milton Abramowitz and Irene A. Stegun, eds.
Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables. New York: Dover, 1972.
""")
add_newdoc("eval_sh_chebyu",
r"""
eval_sh_chebyu(n, x, out=None)
Evaluate shifted Chebyshev polynomial of the second kind at a
point.
These polynomials are defined as
.. math::
U_n^*(x) = U_n(2x - 1)
where :math:`U_n` is a Chebyshev polynomial of the second kind. See
22.5.15 in [AS]_ for details.
Parameters
----------
n : array_like
Degree of the polynomial. If not an integer, the result is
determined via the relation to `eval_chebyu`.
x : array_like
Points at which to evaluate the shifted Chebyshev polynomial
Returns
-------
U : ndarray
Values of the shifted Chebyshev polynomial
See Also
--------
roots_sh_chebyu : roots and quadrature weights of shifted
Chebychev polynomials of the second kind
sh_chebyu : shifted Chebyshev polynomial object
eval_chebyu : evaluate Chebyshev polynomials of the second kind
References
----------
.. [AS] Milton Abramowitz and Irene A. Stegun, eds.
Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables. New York: Dover, 1972.
""")
add_newdoc("eval_legendre",
r"""
eval_legendre(n, x, out=None)
Evaluate Legendre polynomial at a point.
The Legendre polynomials can be defined via the Gauss
hypergeometric function :math:`{}_2F_1` as
.. math::
P_n(x) = {}_2F_1(-n, n + 1; 1; (1 - x)/2).
When :math:`n` is an integer the result is a polynomial of degree
:math:`n`. See 22.5.49 in [AS]_ for details.
Parameters
----------
n : array_like
Degree of the polynomial. If not an integer, the result is
determined via the relation to the Gauss hypergeometric
function.
x : array_like
Points at which to evaluate the Legendre polynomial
Returns
-------
P : ndarray
Values of the Legendre polynomial
See Also
--------
roots_legendre : roots and quadrature weights of Legendre
polynomials
legendre : Legendre polynomial object
hyp2f1 : Gauss hypergeometric function
numpy.polynomial.legendre.Legendre : Legendre series
References
----------
.. [AS] Milton Abramowitz and Irene A. Stegun, eds.
Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables. New York: Dover, 1972.
Examples
--------
>>> from scipy.special import eval_legendre
Evaluate the zero-order Legendre polynomial at x = 0
>>> eval_legendre(0, 0)
1.0
Evaluate the first-order Legendre polynomial between -1 and 1
>>> X = np.linspace(-1, 1, 5) # Domain of Legendre polynomials
>>> eval_legendre(1, X)
array([-1. , -0.5, 0. , 0.5, 1. ])
Evaluate Legendre polynomials of order 0 through 4 at x = 0
>>> N = range(0, 5)
>>> eval_legendre(N, 0)
array([ 1. , 0. , -0.5 , 0. , 0.375])
Plot Legendre polynomials of order 0 through 4
>>> X = np.linspace(-1, 1)
>>> import matplotlib.pyplot as plt
>>> for n in range(0, 5):
... y = eval_legendre(n, X)
... plt.plot(X, y, label=r'$P_{}(x)$'.format(n))
>>> plt.title("Legendre Polynomials")
>>> plt.xlabel("x")
>>> plt.ylabel(r'$P_n(x)$')
>>> plt.legend(loc='lower right')
>>> plt.show()
""")
add_newdoc("eval_sh_legendre",
r"""
eval_sh_legendre(n, x, out=None)
Evaluate shifted Legendre polynomial at a point.
These polynomials are defined as
.. math::
P_n^*(x) = P_n(2x - 1)
where :math:`P_n` is a Legendre polynomial. See 2.2.11 in [AS]_
for details.
Parameters
----------
n : array_like
Degree of the polynomial. If not an integer, the value is
determined via the relation to `eval_legendre`.
x : array_like
Points at which to evaluate the shifted Legendre polynomial
Returns
-------
P : ndarray
Values of the shifted Legendre polynomial
See Also
--------
roots_sh_legendre : roots and quadrature weights of shifted
Legendre polynomials
sh_legendre : shifted Legendre polynomial object
eval_legendre : evaluate Legendre polynomials
numpy.polynomial.legendre.Legendre : Legendre series
References
----------
.. [AS] Milton Abramowitz and Irene A. Stegun, eds.
Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables. New York: Dover, 1972.
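Examples
--------
A check of the defining relation to `eval_legendre`:
>>> import scipy.special as sc
>>> x = np.linspace(0, 1, 5)
>>> np.allclose(sc.eval_sh_legendre(2, x), sc.eval_legendre(2, 2 * x - 1))
True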
""")
add_newdoc("eval_genlaguerre",
r"""
eval_genlaguerre(n, alpha, x, out=None)
Evaluate generalized Laguerre polynomial at a point.
The generalized Laguerre polynomials can be defined via the
confluent hypergeometric function :math:`{}_1F_1` as
.. math::
L_n^{(\alpha)}(x) = \binom{n + \alpha}{n}
{}_1F_1(-n, \alpha + 1, x).
When :math:`n` is an integer the result is a polynomial of degree
:math:`n`. See 22.5.54 in [AS]_ for details. The Laguerre
polynomials are the special case where :math:`\alpha = 0`.
Parameters
----------
n : array_like
Degree of the polynomial. If not an integer, the result is
determined via the relation to the confluent hypergeometric
function.
alpha : array_like
Parameter; must have ``alpha > -1``
x : array_like
Points at which to evaluate the generalized Laguerre
polynomial
Returns
-------
L : ndarray
Values of the generalized Laguerre polynomial
See Also
--------
roots_genlaguerre : roots and quadrature weights of generalized
Laguerre polynomials
genlaguerre : generalized Laguerre polynomial object
hyp1f1 : confluent hypergeometric function
eval_laguerre : evaluate Laguerre polynomials
References
----------
.. [AS] Milton Abramowitz and Irene A. Stegun, eds.
Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables. New York: Dover, 1972.
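Examples
--------
As noted above, for :math:`\alpha = 0` it reduces to the ordinary
Laguerre polynomials:
>>> import scipy.special as sc
>>> x = np.array([0.5, 1.0, 2.0])
>>> np.allclose(sc.eval_genlaguerre(3, 0, x), sc.eval_laguerre(3, x))
True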
""")
add_newdoc("eval_laguerre",
r"""
eval_laguerre(n, x, out=None)
Evaluate Laguerre polynomial at a point.
The Laguerre polynomials can be defined via the confluent
hypergeometric function :math:`{}_1F_1` as
.. math::
L_n(x) = {}_1F_1(-n, 1, x).
See 22.5.16 and 22.5.54 in [AS]_ for details. When :math:`n` is an
integer the result is a polynomial of degree :math:`n`.
Parameters
----------
n : array_like
Degree of the polynomial. If not an integer the result is
determined via the relation to the confluent hypergeometric
function.
x : array_like
Points at which to evaluate the Laguerre polynomial
Returns
-------
L : ndarray
Values of the Laguerre polynomial
See Also
--------
roots_laguerre : roots and quadrature weights of Laguerre
polynomials
laguerre : Laguerre polynomial object
numpy.polynomial.laguerre.Laguerre : Laguerre series
eval_genlaguerre : evaluate generalized Laguerre polynomials
References
----------
.. [AS] Milton Abramowitz and Irene A. Stegun, eds.
Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables. New York: Dover, 1972.
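Examples
--------
A comparison with the explicit polynomial :math:`L_1(x) = 1 - x`:
>>> import scipy.special as sc
>>> x = np.array([0.0, 0.5, 1.0, 2.0])
>>> np.allclose(sc.eval_laguerre(1, x), 1 - x)
True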
""")
add_newdoc("eval_hermite",
r"""
eval_hermite(n, x, out=None)
Evaluate physicist's Hermite polynomial at a point.
Defined by
.. math::
H_n(x) = (-1)^n e^{x^2} \frac{d^n}{dx^n} e^{-x^2};
:math:`H_n` is a polynomial of degree :math:`n`. See 22.11.7 in
[AS]_ for details.
Parameters
----------
n : array_like
Degree of the polynomial
x : array_like
Points at which to evaluate the Hermite polynomial
Returns
-------
H : ndarray
Values of the Hermite polynomial
See Also
--------
roots_hermite : roots and quadrature weights of physicist's
Hermite polynomials
hermite : physicist's Hermite polynomial object
numpy.polynomial.hermite.Hermite : Physicist's Hermite series
eval_hermitenorm : evaluate Probabilist's Hermite polynomials
References
----------
.. [AS] Milton Abramowitz and Irene A. Stegun, eds.
Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables. New York: Dover, 1972.
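Examples
--------
A comparison with the explicit polynomial :math:`H_2(x) = 4x^2 - 2`:
>>> import scipy.special as sc
>>> x = np.array([-1.0, 0.0, 1.5])
>>> np.allclose(sc.eval_hermite(2, x), 4 * x**2 - 2)
True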
""")
add_newdoc("eval_hermitenorm",
r"""
eval_hermitenorm(n, x, out=None)
Evaluate probabilist's (normalized) Hermite polynomial at a
point.
Defined by
.. math::
He_n(x) = (-1)^n e^{x^2/2} \frac{d^n}{dx^n} e^{-x^2/2};
:math:`He_n` is a polynomial of degree :math:`n`. See 22.11.8 in
[AS]_ for details.
Parameters
----------
n : array_like
Degree of the polynomial
x : array_like
Points at which to evaluate the Hermite polynomial
Returns
-------
He : ndarray
Values of the Hermite polynomial
See Also
--------
roots_hermitenorm : roots and quadrature weights of probabilist's
Hermite polynomials
hermitenorm : probabilist's Hermite polynomial object
numpy.polynomial.hermite_e.HermiteE : Probabilist's Hermite series
eval_hermite : evaluate physicist's Hermite polynomials
References
----------
.. [AS] Milton Abramowitz and Irene A. Stegun, eds.
Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables. New York: Dover, 1972.
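Examples
--------
A comparison with the explicit polynomial :math:`He_2(x) = x^2 - 1`:
>>> import scipy.special as sc
>>> x = np.array([-1.0, 0.0, 1.5])
>>> np.allclose(sc.eval_hermitenorm(2, x), x**2 - 1)
True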
""")
add_newdoc("exp1",
r"""
exp1(z, out=None)
Exponential integral E1.
For complex :math:`z \ne 0` the exponential integral can be defined as
[1]_
.. math::
E_1(z) = \int_z^\infty \frac{e^{-t}}{t} dt,
where the path of the integral does not cross the negative real
axis or pass through the origin.
Parameters
----------
z : array_like
Real or complex argument.
out : ndarray, optional
Optional output array for the function results
Returns
-------
scalar or ndarray
Values of the exponential integral E1
See Also
--------
expi : exponential integral :math:`Ei`
expn : generalization of :math:`E_1`
Notes
-----
For :math:`x > 0` it is related to the exponential integral
:math:`Ei` (see `expi`) via the relation
.. math::
E_1(x) = -Ei(-x).
References
----------
.. [1] Digital Library of Mathematical Functions, 6.2.1
https://dlmf.nist.gov/6.2#E1
Examples
--------
>>> import scipy.special as sc
It has a pole at 0.
>>> sc.exp1(0)
inf
It has a branch cut on the negative real axis.
>>> sc.exp1(-1)
nan
>>> sc.exp1(complex(-1, 0))
(-1.8951178163559368-3.141592653589793j)
>>> sc.exp1(complex(-1, -0.0))
(-1.8951178163559368+3.141592653589793j)
It approaches 0 along the positive real axis.
>>> sc.exp1([1, 10, 100, 1000])
array([2.19383934e-01, 4.15696893e-06, 3.68359776e-46, 0.00000000e+00])
It is related to `expi`.
>>> x = np.array([1, 2, 3, 4])
>>> sc.exp1(x)
array([0.21938393, 0.04890051, 0.01304838, 0.00377935])
>>> -sc.expi(-x)
array([0.21938393, 0.04890051, 0.01304838, 0.00377935])
""")
add_newdoc("exp10",
"""
exp10(x)
Compute ``10**x`` element-wise.
Parameters
----------
x : array_like
`x` must contain real numbers.
Returns
-------
float
``10**x``, computed element-wise.
Examples
--------
>>> from scipy.special import exp10
>>> exp10(3)
1000.0
>>> x = np.array([[-1, -0.5, 0], [0.5, 1, 1.5]])
>>> exp10(x)
array([[ 0.1 , 0.31622777, 1. ],
[ 3.16227766, 10. , 31.6227766 ]])
""")
add_newdoc("exp2",
"""
exp2(x)
Compute ``2**x`` element-wise.
Parameters
----------
x : array_like
`x` must contain real numbers.
Returns
-------
float
``2**x``, computed element-wise.
Examples
--------
>>> from scipy.special import exp2
>>> exp2(3)
8.0
>>> x = np.array([[-1, -0.5, 0], [0.5, 1, 1.5]])
>>> exp2(x)
array([[ 0.5 , 0.70710678, 1. ],
[ 1.41421356, 2. , 2.82842712]])
""")
add_newdoc("expi",
r"""
expi(x, out=None)
Exponential integral Ei.
For real :math:`x`, the exponential integral is defined as [1]_
.. math::
Ei(x) = \int_{-\infty}^x \frac{e^t}{t} dt.
For :math:`x > 0` the integral is understood as a Cauchy principal
value.
It is extended to the complex plane by analytic continuation of
the function on the interval :math:`(0, \infty)`. The complex
variant has a branch cut on the negative real axis.
Parameters
----------
x : array_like
Real or complex valued argument
out : ndarray, optional
Optional output array for the function results
Returns
-------
scalar or ndarray
Values of the exponential integral
Notes
-----
The exponential integrals :math:`E_1` and :math:`Ei` satisfy the
relation
.. math::
E_1(x) = -Ei(-x)
for :math:`x > 0`.
See Also
--------
exp1 : Exponential integral :math:`E_1`
expn : Generalized exponential integral :math:`E_n`
References
----------
.. [1] Digital Library of Mathematical Functions, 6.2.5
https://dlmf.nist.gov/6.2#E5
Examples
--------
>>> import scipy.special as sc
It is related to `exp1`.
>>> x = np.array([1, 2, 3, 4])
>>> -sc.expi(-x)
array([0.21938393, 0.04890051, 0.01304838, 0.00377935])
>>> sc.exp1(x)
array([0.21938393, 0.04890051, 0.01304838, 0.00377935])
The complex variant has a branch cut on the negative real axis.
>>> import scipy.special as sc
>>> sc.expi(-1 + 1e-12j)
(-0.21938393439552062+3.1415926535894254j)
>>> sc.expi(-1 - 1e-12j)
(-0.21938393439552062-3.1415926535894254j)
As the complex variant approaches the branch cut, the real parts
approach the value of the real variant.
>>> sc.expi(-1)
-0.21938393439552062
The SciPy implementation returns the real variant for complex
values on the branch cut.
>>> sc.expi(complex(-1, 0.0))
(-0.21938393439552062-0j)
>>> sc.expi(complex(-1, -0.0))
(-0.21938393439552062-0j)
""")
add_newdoc('expit',
"""
expit(x)
Expit (a.k.a. logistic sigmoid) ufunc for ndarrays.
The expit function, also known as the logistic sigmoid function, is
defined as ``expit(x) = 1/(1+exp(-x))``. It is the inverse of the
logit function.
Parameters
----------
x : ndarray
The ndarray to apply expit to element-wise.
Returns
-------
out : ndarray
An ndarray of the same shape as x. Its entries
are `expit` of the corresponding entry of x.
See Also
--------
logit
Notes
-----
As a ufunc expit takes a number of optional
keyword arguments. For more information
see `ufuncs <https://docs.scipy.org/doc/numpy/reference/ufuncs.html>`_
.. versionadded:: 0.10.0
Examples
--------
>>> from scipy.special import expit, logit
>>> expit([-np.inf, -1.5, 0, 1.5, np.inf])
array([ 0. , 0.18242552, 0.5 , 0.81757448, 1. ])
`logit` is the inverse of `expit`:
>>> logit(expit([-2.5, 0, 3.1, 5.0]))
array([-2.5, 0. , 3.1, 5. ])
Plot expit(x) for x in [-6, 6]:
>>> import matplotlib.pyplot as plt
>>> x = np.linspace(-6, 6, 121)
>>> y = expit(x)
>>> plt.plot(x, y)
>>> plt.grid()
>>> plt.xlim(-6, 6)
>>> plt.xlabel('x')
>>> plt.title('expit(x)')
>>> plt.show()
""")
add_newdoc("expm1",
"""
expm1(x)
Compute ``exp(x) - 1``.
When `x` is near zero, ``exp(x)`` is near 1, so the numerical calculation
of ``exp(x) - 1`` can suffer from catastrophic loss of precision.
``expm1(x)`` is implemented to avoid the loss of precision that occurs when
`x` is near zero.
Parameters
----------
x : array_like
`x` must contain real numbers.
Returns
-------
float
``exp(x) - 1`` computed element-wise.
Examples
--------
>>> from scipy.special import expm1
>>> expm1(1.0)
1.7182818284590451
>>> expm1([-0.2, -0.1, 0, 0.1, 0.2])
array([-0.18126925, -0.09516258, 0. , 0.10517092, 0.22140276])
The exact value of ``exp(7.5e-13) - 1`` is::
7.5000000000028125000000007031250000001318...*10**-13.
Here is what ``expm1(7.5e-13)`` gives:
>>> expm1(7.5e-13)
7.5000000000028135e-13
Compare that to ``exp(7.5e-13) - 1``, where the subtraction results in
a "catastrophic" loss of precision:
>>> np.exp(7.5e-13) - 1
7.5006667543675576e-13
""")
add_newdoc("expn",
r"""
expn(n, x, out=None)
Generalized exponential integral En.
For integer :math:`n \geq 0` and real :math:`x \geq 0` the
generalized exponential integral is defined as [dlmf]_
.. math::
E_n(x) = x^{n - 1} \int_x^\infty \frac{e^{-t}}{t^n} dt.
Parameters
----------
n : array_like
Non-negative integers
x : array_like
Real argument
out : ndarray, optional
Optional output array for the function results
Returns
-------
scalar or ndarray
Values of the generalized exponential integral
See Also
--------
exp1 : special case of :math:`E_n` for :math:`n = 1`
expi : related to :math:`E_n` when :math:`n = 1`
References
----------
.. [dlmf] Digital Library of Mathematical Functions, 8.19.2
https://dlmf.nist.gov/8.19#E2
Examples
--------
>>> import scipy.special as sc
Its domain is nonnegative n and x.
>>> sc.expn(-1, 1.0), sc.expn(1, -1.0)
(nan, nan)
It has a pole at ``x = 0`` for ``n = 1, 2``; for larger ``n`` it
is equal to ``1 / (n - 1)``.
>>> sc.expn([0, 1, 2, 3, 4], 0)
array([ inf, inf, 1. , 0.5 , 0.33333333])
For n equal to 0 it reduces to ``exp(-x) / x``.
>>> x = np.array([1, 2, 3, 4])
>>> sc.expn(0, x)
array([0.36787944, 0.06766764, 0.01659569, 0.00457891])
>>> np.exp(-x) / x
array([0.36787944, 0.06766764, 0.01659569, 0.00457891])
For n equal to 1 it reduces to `exp1`.
>>> sc.expn(1, x)
array([0.21938393, 0.04890051, 0.01304838, 0.00377935])
>>> sc.exp1(x)
array([0.21938393, 0.04890051, 0.01304838, 0.00377935])
""")
add_newdoc("exprel",
r"""
exprel(x)
Relative error exponential, ``(exp(x) - 1)/x``.
When `x` is near zero, ``exp(x)`` is near 1, so the numerical calculation
of ``exp(x) - 1`` can suffer from catastrophic loss of precision.
``exprel(x)`` is implemented to avoid the loss of precision that occurs when
`x` is near zero.
Parameters
----------
x : ndarray
Input array. `x` must contain real numbers.
Returns
-------
float
``(exp(x) - 1)/x``, computed element-wise.
See Also
--------
expm1
Notes
-----
.. versionadded:: 0.17.0
Examples
--------
>>> from scipy.special import exprel
>>> exprel(0.01)
1.0050167084168056
>>> exprel([-0.25, -0.1, 0, 0.1, 0.25])
array([ 0.88479687, 0.95162582, 1. , 1.05170918, 1.13610167])
Compare ``exprel(5e-9)`` to the naive calculation. The exact value
is ``1.00000000250000000416...``.
>>> exprel(5e-9)
1.0000000025
>>> (np.exp(5e-9) - 1)/5e-9
0.99999999392252903
""")
add_newdoc("fdtr",
r"""
fdtr(dfn, dfd, x)
F cumulative distribution function.
Returns the value of the cumulative distribution function of the
F-distribution, also known as Snedecor's F-distribution or the
Fisher-Snedecor distribution.
The F-distribution with parameters :math:`d_n` and :math:`d_d` is the
distribution of the random variable,
.. math::
X = \frac{U_n/d_n}{U_d/d_d},
where :math:`U_n` and :math:`U_d` are random variables distributed
:math:`\chi^2`, with :math:`d_n` and :math:`d_d` degrees of freedom,
respectively.
Parameters
----------
dfn : array_like
First parameter (positive float).
dfd : array_like
Second parameter (positive float).
x : array_like
Argument (nonnegative float).
Returns
-------
y : ndarray
The CDF of the F-distribution with parameters `dfn` and `dfd` at `x`.
Notes
-----
The regularized incomplete beta function is used, according to the
formula,
.. math::
F(d_n, d_d; x) = I_{xd_n/(d_d + xd_n)}(d_n/2, d_d/2).
Wrapper for the Cephes [1]_ routine `fdtr`.
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
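Examples
--------
A check of the incomplete beta relation given in the Notes (arbitrary
parameter values):
>>> import scipy.special as sc
>>> dfn, dfd, x = 3.0, 10.0, 2.0
>>> np.allclose(sc.fdtr(dfn, dfd, x),
...             sc.betainc(dfn / 2, dfd / 2, dfn * x / (dfd + dfn * x)))
True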
""")
add_newdoc("fdtrc",
r"""
fdtrc(dfn, dfd, x)
F survival function.
Returns the complemented F-distribution function (the integral of the
density from `x` to infinity).
Parameters
----------
dfn : array_like
First parameter (positive float).
dfd : array_like
Second parameter (positive float).
x : array_like
Argument (nonnegative float).
Returns
-------
y : ndarray
The complemented F-distribution function with parameters `dfn` and
`dfd` at `x`.
See also
--------
fdtr
Notes
-----
The regularized incomplete beta function is used, according to the
formula,
.. math::
F(d_n, d_d; x) = I_{d_d/(d_d + xd_n)}(d_d/2, d_n/2).
Wrapper for the Cephes [1]_ routine `fdtrc`.
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
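Examples
--------
As a quick numerical check, the survival function and the CDF `fdtr`
sum to one:
>>> import numpy as np
>>> from scipy.special import fdtr, fdtrc
>>> dfn, dfd, x = 3.0, 7.0, np.array([0.5, 1.0, 2.0, 5.0])
>>> np.allclose(fdtr(dfn, dfd, x) + fdtrc(dfn, dfd, x), 1.0)
True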
""")
add_newdoc("fdtri",
r"""
fdtri(dfn, dfd, p)
The `p`-th quantile of the F-distribution.
This function is the inverse of the F-distribution CDF, `fdtr`, returning
the `x` such that `fdtr(dfn, dfd, x) = p`.
Parameters
----------
dfn : array_like
First parameter (positive float).
dfd : array_like
Second parameter (positive float).
p : array_like
Cumulative probability, in [0, 1].
Returns
-------
x : ndarray
The quantile corresponding to `p`.
Notes
-----
The computation is carried out using the relation to the inverse
regularized beta function, :math:`I^{-1}_x(a, b)`. Let
:math:`z = I^{-1}_p(d_d/2, d_n/2).` Then,
.. math::
x = \frac{d_d (1 - z)}{d_n z}.
If `p` is such that :math:`x < 0.5`, the following relation is used
instead for improved stability: let
:math:`z' = I^{-1}_{1 - p}(d_n/2, d_d/2).` Then,
.. math::
x = \frac{d_d z'}{d_n (1 - z')}.
Wrapper for the Cephes [1]_ routine `fdtri`.
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
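Examples
--------
A simple round-trip check against `fdtr`: applying `fdtr` to the
computed quantile recovers the probability:
>>> import numpy as np
>>> from scipy.special import fdtr, fdtri
>>> dfn, dfd, p = 3.0, 7.0, np.array([0.1, 0.5, 0.9])
>>> np.allclose(fdtr(dfn, dfd, fdtri(dfn, dfd, p)), p)
True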
""")
add_newdoc("fdtridfd",
"""
fdtridfd(dfn, p, x)
Inverse to `fdtr` vs dfd
Finds the F density argument dfd such that ``fdtr(dfn, dfd, x) == p``.
""")
add_newdoc("fdtridfn",
"""
fdtridfn(p, dfd, x)
Inverse to `fdtr` vs dfn
Finds the F density argument dfn such that ``fdtr(dfn, dfd, x) == p``.
""")
add_newdoc("fresnel",
r"""
fresnel(z, out=None)
Fresnel integrals.
The Fresnel integrals are defined as
.. math::
S(z) &= \int_0^z \sin(\pi t^2 /2) dt \\
C(z) &= \int_0^z \cos(\pi t^2 /2) dt.
See [dlmf]_ for details.
Parameters
----------
z : array_like
Real or complex valued argument
out : 2-tuple of ndarrays, optional
Optional output arrays for the function results
Returns
-------
S, C : 2-tuple of scalar or ndarray
Values of the Fresnel integrals
See Also
--------
fresnel_zeros : zeros of the Fresnel integrals
References
----------
.. [dlmf] NIST Digital Library of Mathematical Functions
https://dlmf.nist.gov/7.2#iii
Examples
--------
>>> import scipy.special as sc
As z goes to infinity along the real axis, S and C converge to 0.5.
>>> S, C = sc.fresnel([0.1, 1, 10, 100, np.inf])
>>> S
array([0.00052359, 0.43825915, 0.46816998, 0.4968169 , 0.5 ])
>>> C
array([0.09999753, 0.7798934 , 0.49989869, 0.4999999 , 0.5 ])
They are related to the error function `erf`.
>>> z = np.array([1, 2, 3, 4])
>>> zeta = 0.5 * np.sqrt(np.pi) * (1 - 1j) * z
>>> S, C = sc.fresnel(z)
>>> C + 1j*S
array([0.7798934 +0.43825915j, 0.48825341+0.34341568j,
0.60572079+0.496313j , 0.49842603+0.42051575j])
>>> 0.5 * (1 + 1j) * sc.erf(zeta)
array([0.7798934 +0.43825915j, 0.48825341+0.34341568j,
0.60572079+0.496313j , 0.49842603+0.42051575j])
""")
add_newdoc("gamma",
r"""
gamma(z)
gamma function.
The gamma function is defined as
.. math::
\Gamma(z) = \int_0^\infty t^{z-1} e^{-t} dt
for :math:`\Re(z) > 0` and is extended to the rest of the complex
plane by analytic continuation. See [dlmf]_ for more details.
Parameters
----------
z : array_like
Real or complex valued argument
Returns
-------
scalar or ndarray
Values of the gamma function
Notes
-----
The gamma function is often referred to as the generalized
factorial since :math:`\Gamma(n + 1) = n!` for natural numbers
:math:`n`. More generally it satisfies the recurrence relation
:math:`\Gamma(z + 1) = z \cdot \Gamma(z)` for complex :math:`z`,
which, combined with the fact that :math:`\Gamma(1) = 1`, implies
the above identity for :math:`z = n`.
References
----------
.. [dlmf] NIST Digital Library of Mathematical Functions
https://dlmf.nist.gov/5.2#E1
Examples
--------
>>> from scipy.special import gamma, factorial
>>> gamma([0, 0.5, 1, 5])
array([ inf, 1.77245385, 1. , 24. ])
>>> z = 2.5 + 1j
>>> gamma(z)
(0.77476210455108352+0.70763120437959293j)
>>> gamma(z+1), z*gamma(z) # Recurrence property
((1.2292740569981171+2.5438401155000685j),
(1.2292740569981158+2.5438401155000658j))
>>> gamma(0.5)**2 # gamma(0.5) = sqrt(pi)
3.1415926535897927
Plot gamma(x) for real x
>>> x = np.linspace(-3.5, 5.5, 2251)
>>> y = gamma(x)
>>> import matplotlib.pyplot as plt
>>> plt.plot(x, y, 'b', alpha=0.6, label='gamma(x)')
>>> k = np.arange(1, 7)
>>> plt.plot(k, factorial(k-1), 'k*', alpha=0.6,
... label='(x-1)!, x = 1, 2, ...')
>>> plt.xlim(-3.5, 5.5)
>>> plt.ylim(-10, 25)
>>> plt.grid()
>>> plt.xlabel('x')
>>> plt.legend(loc='lower right')
>>> plt.show()
""")
add_newdoc("gammainc",
r"""
gammainc(a, x)
Regularized lower incomplete gamma function.
It is defined as
.. math::
P(a, x) = \frac{1}{\Gamma(a)} \int_0^x t^{a - 1}e^{-t} dt
for :math:`a > 0` and :math:`x \geq 0`. See [dlmf]_ for details.
Parameters
----------
a : array_like
Positive parameter
x : array_like
Nonnegative argument
Returns
-------
scalar or ndarray
Values of the lower incomplete gamma function
Notes
-----
The function satisfies the relation ``gammainc(a, x) +
gammaincc(a, x) = 1`` where `gammaincc` is the regularized upper
incomplete gamma function.
The implementation largely follows that of [boost]_.
See also
--------
gammaincc : regularized upper incomplete gamma function
gammaincinv : inverse of the regularized lower incomplete gamma function
gammainccinv : inverse of the regularized upper incomplete gamma function
References
----------
.. [dlmf] NIST Digital Library of Mathematical functions
https://dlmf.nist.gov/8.2#E4
.. [boost] Maddock et. al., "Incomplete Gamma Functions",
https://www.boost.org/doc/libs/1_61_0/libs/math/doc/html/math_toolkit/sf_gamma/igamma.html
Examples
--------
>>> import scipy.special as sc
It is the CDF of the gamma distribution, so it starts at 0 and
monotonically increases to 1.
>>> sc.gammainc(0.5, [0, 1, 10, 100])
array([0. , 0.84270079, 0.99999226, 1. ])
It is equal to one minus the upper incomplete gamma function.
>>> a, x = 0.5, 0.4
>>> sc.gammainc(a, x)
0.6289066304773024
>>> 1 - sc.gammaincc(a, x)
0.6289066304773024
""")
add_newdoc("gammaincc",
r"""
gammaincc(a, x)
Regularized upper incomplete gamma function.
It is defined as
.. math::
Q(a, x) = \frac{1}{\Gamma(a)} \int_x^\infty t^{a - 1}e^{-t} dt
for :math:`a > 0` and :math:`x \geq 0`. See [dlmf]_ for details.
Parameters
----------
a : array_like
Positive parameter
x : array_like
Nonnegative argument
Returns
-------
scalar or ndarray
Values of the upper incomplete gamma function
Notes
-----
The function satisfies the relation ``gammainc(a, x) +
gammaincc(a, x) = 1`` where `gammainc` is the regularized lower
incomplete gamma function.
The implementation largely follows that of [boost]_.
See also
--------
gammainc : regularized lower incomplete gamma function
gammaincinv : inverse of the regularized lower incomplete gamma function
gammainccinv : inverse of the regularized upper incomplete gamma function
References
----------
.. [dlmf] NIST Digital Library of Mathematical functions
https://dlmf.nist.gov/8.2#E4
.. [boost] Maddock et. al., "Incomplete Gamma Functions",
https://www.boost.org/doc/libs/1_61_0/libs/math/doc/html/math_toolkit/sf_gamma/igamma.html
Examples
--------
>>> import scipy.special as sc
It is the survival function of the gamma distribution, so it
starts at 1 and monotonically decreases to 0.
>>> sc.gammaincc(0.5, [0, 1, 10, 100, 1000])
array([1.00000000e+00, 1.57299207e-01, 7.74421643e-06, 2.08848758e-45,
0.00000000e+00])
It is equal to one minus the lower incomplete gamma function.
>>> a, x = 0.5, 0.4
>>> sc.gammaincc(a, x)
0.37109336952269756
>>> 1 - sc.gammainc(a, x)
0.37109336952269756
""")
add_newdoc("gammainccinv",
"""
gammainccinv(a, y)
Inverse of the regularized upper incomplete gamma function.
Given an input :math:`y` between 0 and 1, returns :math:`x` such
that :math:`y = Q(a, x)`. Here :math:`Q` is the regularized upper
incomplete gamma function; see `gammaincc`. This is well-defined
because the upper incomplete gamma function is monotonic as can
be seen from its definition in [dlmf]_.
Parameters
----------
a : array_like
Positive parameter
y : array_like
Argument between 0 and 1, inclusive
Returns
-------
scalar or ndarray
Values of the inverse of the upper incomplete gamma function
See Also
--------
gammaincc : regularized upper incomplete gamma function
gammainc : regularized lower incomplete gamma function
gammaincinv : inverse of the regularized lower incomplete gamma function
References
----------
.. [dlmf] NIST Digital Library of Mathematical Functions
https://dlmf.nist.gov/8.2#E4
Examples
--------
>>> import scipy.special as sc
It starts at infinity and monotonically decreases to 0.
>>> sc.gammainccinv(0.5, [0, 0.1, 0.5, 1])
array([ inf, 1.35277173, 0.22746821, 0. ])
It inverts the upper incomplete gamma function.
>>> a, x = 0.5, [0, 0.1, 0.5, 1]
>>> sc.gammaincc(a, sc.gammainccinv(a, x))
array([0. , 0.1, 0.5, 1. ])
>>> a, x = 0.5, [0, 10, 50]
>>> sc.gammainccinv(a, sc.gammaincc(a, x))
array([ 0., 10., 50.])
""")
add_newdoc("gammaincinv",
"""
gammaincinv(a, y)
Inverse to the regularized lower incomplete gamma function.
Given an input :math:`y` between 0 and 1, returns :math:`x` such
that :math:`y = P(a, x)`. Here :math:`P` is the regularized lower
incomplete gamma function; see `gammainc`. This is well-defined
because the lower incomplete gamma function is monotonic as can be
seen from its definition in [dlmf]_.
Parameters
----------
a : array_like
Positive parameter
y : array_like
Parameter between 0 and 1, inclusive
Returns
-------
scalar or ndarray
Values of the inverse of the lower incomplete gamma function
See Also
--------
gammainc : regularized lower incomplete gamma function
gammaincc : regularized upper incomplete gamma function
gammainccinv : inverse of the regularized upper incomplete gamma function
References
----------
.. [dlmf] NIST Digital Library of Mathematical Functions
https://dlmf.nist.gov/8.2#E4
Examples
--------
>>> import scipy.special as sc
It starts at 0 and monotonically increases to infinity.
>>> sc.gammaincinv(0.5, [0, 0.1 ,0.5, 1])
array([0. , 0.00789539, 0.22746821, inf])
It inverts the lower incomplete gamma function.
>>> a, x = 0.5, [0, 0.1, 0.5, 1]
>>> sc.gammainc(a, sc.gammaincinv(a, x))
array([0. , 0.1, 0.5, 1. ])
>>> a, x = 0.5, [0, 10, 25]
>>> sc.gammaincinv(a, sc.gammainc(a, x))
array([ 0. , 10. , 25.00001465])
""")
add_newdoc("gammaln",
r"""
gammaln(x, out=None)
Logarithm of the absolute value of the gamma function.
Defined as
.. math::
\ln(\lvert\Gamma(x)\rvert)
where :math:`\Gamma` is the gamma function. For more details on
the gamma function, see [dlmf]_.
Parameters
----------
x : array_like
Real argument
out : ndarray, optional
Optional output array for the function results
Returns
-------
scalar or ndarray
Values of the log of the absolute value of gamma
See Also
--------
gammasgn : sign of the gamma function
loggamma : principal branch of the logarithm of the gamma function
Notes
-----
It is the same function as the Python standard library function
:func:`math.lgamma`.
When used in conjunction with `gammasgn`, this function is useful
for working in logspace on the real axis without having to deal
with complex numbers via the relation ``exp(gammaln(x)) =
gammasgn(x) * gamma(x)``.
For complex-valued log-gamma, use `loggamma` instead of `gammaln`.
References
----------
.. [dlmf] NIST Digital Library of Mathematical Functions
https://dlmf.nist.gov/5
Examples
--------
>>> import scipy.special as sc
It has two positive zeros.
>>> sc.gammaln([1, 2])
array([0., 0.])
It has poles at nonpositive integers.
>>> sc.gammaln([0, -1, -2, -3, -4])
array([inf, inf, inf, inf, inf])
It asymptotically approaches ``x * log(x)`` (Stirling's formula).
>>> x = np.array([1e10, 1e20, 1e40, 1e80])
>>> sc.gammaln(x)
array([2.20258509e+11, 4.50517019e+21, 9.11034037e+41, 1.83206807e+82])
>>> x * np.log(x)
array([2.30258509e+11, 4.60517019e+21, 9.21034037e+41, 1.84206807e+82])
""")
add_newdoc("gammasgn",
r"""
gammasgn(x)
Sign of the gamma function.
It is defined as
.. math::
\text{gammasgn}(x) =
\begin{cases}
+1 & \Gamma(x) > 0 \\
-1 & \Gamma(x) < 0
\end{cases}
where :math:`\Gamma` is the gamma function; see `gamma`. This
definition is complete since the gamma function is never zero;
see the discussion after [dlmf]_.
Parameters
----------
x : array_like
Real argument
Returns
-------
scalar or ndarray
Sign of the gamma function
Notes
-----
The gamma function can be computed as ``gammasgn(x) *
np.exp(gammaln(x))``.
See Also
--------
gamma : the gamma function
gammaln : log of the absolute value of the gamma function
loggamma : analytic continuation of the log of the gamma function
References
----------
.. [dlmf] NIST Digital Library of Mathematical Functions
https://dlmf.nist.gov/5.2#E1
Examples
--------
>>> import scipy.special as sc
It is 1 for `x > 0`.
>>> sc.gammasgn([1, 2, 3, 4])
array([1., 1., 1., 1.])
Its sign alternates on consecutive intervals between the poles at nonpositive integers.
>>> sc.gammasgn([-0.5, -1.5, -2.5, -3.5])
array([-1., 1., -1., 1.])
It can be used to compute the gamma function.
>>> x = [1.5, 0.5, -0.5, -1.5]
>>> sc.gammasgn(x) * np.exp(sc.gammaln(x))
array([ 0.88622693, 1.77245385, -3.5449077 , 2.3632718 ])
>>> sc.gamma(x)
array([ 0.88622693, 1.77245385, -3.5449077 , 2.3632718 ])
""")
add_newdoc("gdtr",
r"""
gdtr(a, b, x)
Gamma distribution cumulative distribution function.
Returns the integral from zero to `x` of the gamma probability density
function,
.. math::
F = \int_0^x \frac{a^b}{\Gamma(b)} t^{b-1} e^{-at}\,dt,
where :math:`\Gamma` is the gamma function.
Parameters
----------
a : array_like
The rate parameter of the gamma distribution, sometimes denoted
:math:`\beta` (float). It is also the reciprocal of the scale
parameter :math:`\theta`.
b : array_like
The shape parameter of the gamma distribution, sometimes denoted
:math:`\alpha` (float).
x : array_like
The quantile (upper limit of integration; float).
See also
--------
gdtrc : 1 - CDF of the gamma distribution.
Returns
-------
F : ndarray
The CDF of the gamma distribution with parameters `a` and `b`
evaluated at `x`.
Notes
-----
The evaluation is carried out using the relation to the incomplete gamma
integral (regularized gamma function).
Wrapper for the Cephes [1]_ routine `gdtr`.
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
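Examples
--------
The notes mention the relation to the regularized incomplete gamma
integral; assuming `gammainc` is the regularized lower incomplete gamma
function, ``gdtr(a, b, x)`` should agree with ``gammainc(b, a * x)``:
>>> import numpy as np
>>> from scipy.special import gdtr, gammainc
>>> a, b, x = 1.5, 2.0, np.array([0.5, 1.0, 2.0, 5.0])
>>> np.allclose(gdtr(a, b, x), gammainc(b, a * x))
True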
""")
add_newdoc("gdtrc",
r"""
gdtrc(a, b, x)
Gamma distribution survival function.
Integral from `x` to infinity of the gamma probability density function,
.. math::
F = \int_x^\infty \frac{a^b}{\Gamma(b)} t^{b-1} e^{-at}\,dt,
where :math:`\Gamma` is the gamma function.
Parameters
----------
a : array_like
The rate parameter of the gamma distribution, sometimes denoted
:math:`\beta` (float). It is also the reciprocal of the scale
parameter :math:`\theta`.
b : array_like
The shape parameter of the gamma distribution, sometimes denoted
:math:`\alpha` (float).
x : array_like
The quantile (lower limit of integration; float).
Returns
-------
F : ndarray
The survival function of the gamma distribution with parameters `a`
and `b` evaluated at `x`.
See Also
--------
gdtr, gdtrix
Notes
-----
The evaluation is carried out using the relation to the incomplete gamma
integral (regularized gamma function).
Wrapper for the Cephes [1]_ routine `gdtrc`.
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
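Examples
--------
As a quick numerical check, the survival function and the CDF `gdtr`
sum to one:
>>> import numpy as np
>>> from scipy.special import gdtr, gdtrc
>>> a, b, x = 1.5, 2.0, np.array([0.5, 1.0, 2.0, 5.0])
>>> np.allclose(gdtr(a, b, x) + gdtrc(a, b, x), 1.0)
True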
""")
add_newdoc("gdtria",
"""
gdtria(p, b, x, out=None)
Inverse of `gdtr` vs a.
Returns the inverse with respect to the parameter `a` of ``p =
gdtr(a, b, x)``, the cumulative distribution function of the gamma
distribution.
Parameters
----------
p : array_like
Probability values.
b : array_like
`b` parameter values of `gdtr(a, b, x)`. `b` is the "shape" parameter
of the gamma distribution.
x : array_like
Nonnegative real values, from the domain of the gamma distribution.
out : ndarray, optional
If a fourth argument is given, it must be a numpy.ndarray whose size
matches the broadcast result of `a`, `b` and `x`. `out` is then the
array returned by the function.
Returns
-------
a : ndarray
Values of the `a` parameter such that `p = gdtr(a, b, x)`. `1/a`
is the "scale" parameter of the gamma distribution.
See Also
--------
gdtr : CDF of the gamma distribution.
gdtrib : Inverse with respect to `b` of `gdtr(a, b, x)`.
gdtrix : Inverse with respect to `x` of `gdtr(a, b, x)`.
Notes
-----
Wrapper for the CDFLIB [1]_ Fortran routine `cdfgam`.
The cumulative distribution function `p` is computed using a routine by
DiDinato and Morris [2]_. Computation of `a` involves a search for a value
that produces the desired value of `p`. The search relies on the
monotonicity of `p` with `a`.
References
----------
.. [1] Barry Brown, James Lovato, and Kathy Russell,
CDFLIB: Library of Fortran Routines for Cumulative Distribution
Functions, Inverses, and Other Parameters.
.. [2] DiDinato, A. R. and Morris, A. H.,
Computation of the incomplete gamma function ratios and their
inverse. ACM Trans. Math. Softw. 12 (1986), 377-393.
Examples
--------
First evaluate `gdtr`.
>>> from scipy.special import gdtr, gdtria
>>> p = gdtr(1.2, 3.4, 5.6)
>>> print(p)
0.94378087442
Verify the inverse.
>>> gdtria(p, 3.4, 5.6)
1.2
""")
add_newdoc("gdtrib",
"""
gdtrib(a, p, x, out=None)
Inverse of `gdtr` vs b.
Returns the inverse with respect to the parameter `b` of ``p =
gdtr(a, b, x)``, the cumulative distribution function of the gamma
distribution.
Parameters
----------
a : array_like
`a` parameter values of `gdtr(a, b, x)`. `1/a` is the "scale"
parameter of the gamma distribution.
p : array_like
Probability values.
x : array_like
Nonnegative real values, from the domain of the gamma distribution.
out : ndarray, optional
If a fourth argument is given, it must be a numpy.ndarray whose size
matches the broadcast result of `a`, `b` and `x`. `out` is then the
array returned by the function.
Returns
-------
b : ndarray
Values of the `b` parameter such that `p = gdtr(a, b, x)`. `b` is
the "shape" parameter of the gamma distribution.
See Also
--------
gdtr : CDF of the gamma distribution.
gdtria : Inverse with respect to `a` of `gdtr(a, b, x)`.
gdtrix : Inverse with respect to `x` of `gdtr(a, b, x)`.
Notes
-----
Wrapper for the CDFLIB [1]_ Fortran routine `cdfgam`.
The cumulative distribution function `p` is computed using a routine by
DiDinato and Morris [2]_. Computation of `b` involves a search for a value
that produces the desired value of `p`. The search relies on the
monotonicity of `p` with `b`.
References
----------
.. [1] Barry Brown, James Lovato, and Kathy Russell,
CDFLIB: Library of Fortran Routines for Cumulative Distribution
Functions, Inverses, and Other Parameters.
.. [2] DiDinato, A. R. and Morris, A. H.,
Computation of the incomplete gamma function ratios and their
inverse. ACM Trans. Math. Softw. 12 (1986), 377-393.
Examples
--------
First evaluate `gdtr`.
>>> from scipy.special import gdtr, gdtrib
>>> p = gdtr(1.2, 3.4, 5.6)
>>> print(p)
0.94378087442
Verify the inverse.
>>> gdtrib(1.2, p, 5.6)
3.3999999999723882
""")
add_newdoc("gdtrix",
"""
gdtrix(a, b, p, out=None)
Inverse of `gdtr` vs x.
Returns the inverse with respect to the parameter `x` of ``p =
gdtr(a, b, x)``, the cumulative distribution function of the gamma
distribution. This is also known as the pth quantile of the
distribution.
Parameters
----------
a : array_like
`a` parameter values of `gdtr(a, b, x)`. `1/a` is the "scale"
parameter of the gamma distribution.
b : array_like
`b` parameter values of `gdtr(a, b, x)`. `b` is the "shape" parameter
of the gamma distribution.
p : array_like
Probability values.
out : ndarray, optional
If a fourth argument is given, it must be a numpy.ndarray whose size
matches the broadcast result of `a`, `b` and `x`. `out` is then the
array returned by the function.
Returns
-------
x : ndarray
Values of the `x` parameter such that `p = gdtr(a, b, x)`.
See Also
--------
gdtr : CDF of the gamma distribution.
gdtria : Inverse with respect to `a` of `gdtr(a, b, x)`.
gdtrib : Inverse with respect to `b` of `gdtr(a, b, x)`.
Notes
-----
Wrapper for the CDFLIB [1]_ Fortran routine `cdfgam`.
The cumulative distribution function `p` is computed using a routine by
DiDinato and Morris [2]_. Computation of `x` involves a search for a value
that produces the desired value of `p`. The search relies on the
monotonicity of `p` with `x`.
References
----------
.. [1] Barry Brown, James Lovato, and Kathy Russell,
CDFLIB: Library of Fortran Routines for Cumulative Distribution
Functions, Inverses, and Other Parameters.
.. [2] DiDinato, A. R. and Morris, A. H.,
Computation of the incomplete gamma function ratios and their
inverse. ACM Trans. Math. Softw. 12 (1986), 377-393.
Examples
--------
First evaluate `gdtr`.
>>> from scipy.special import gdtr, gdtrix
>>> p = gdtr(1.2, 3.4, 5.6)
>>> print(p)
0.94378087442
Verify the inverse.
>>> gdtrix(1.2, 3.4, p)
5.5999999999999996
""")
add_newdoc("hankel1",
r"""
hankel1(v, z)
Hankel function of the first kind
Parameters
----------
v : array_like
Order (float).
z : array_like
Argument (float or complex).
Returns
-------
out : ndarray
Values of the Hankel function of the first kind.
Notes
-----
A wrapper for the AMOS [1]_ routine `zbesh`, which carries out the
computation using the relation,
.. math:: H^{(1)}_v(z) = \frac{2}{\imath\pi} \exp(-\imath \pi v/2) K_v(z \exp(-\imath\pi/2))
where :math:`K_v` is the modified Bessel function of the second kind.
For negative orders, the relation
.. math:: H^{(1)}_{-v}(z) = H^{(1)}_v(z) \exp(\imath\pi v)
is used.
See also
--------
hankel1e : this function with leading exponential behavior stripped off.
References
----------
.. [1] Donald E. Amos, "AMOS, A Portable Package for Bessel Functions
of a Complex Argument and Nonnegative Order",
http://netlib.org/amos/
""")
add_newdoc("hankel1e",
r"""
hankel1e(v, z)
Exponentially scaled Hankel function of the first kind
Defined as::
hankel1e(v, z) = hankel1(v, z) * exp(-1j * z)
Parameters
----------
v : array_like
Order (float).
z : array_like
Argument (float or complex).
Returns
-------
out : ndarray
Values of the exponentially scaled Hankel function.
Notes
-----
A wrapper for the AMOS [1]_ routine `zbesh`, which carries out the
computation using the relation,
.. math:: H^{(1)}_v(z) = \frac{2}{\imath\pi} \exp(-\imath \pi v/2) K_v(z \exp(-\imath\pi/2))
where :math:`K_v` is the modified Bessel function of the second kind.
For negative orders, the relation
.. math:: H^{(1)}_{-v}(z) = H^{(1)}_v(z) \exp(\imath\pi v)
is used.
References
----------
.. [1] Donald E. Amos, "AMOS, A Portable Package for Bessel Functions
of a Complex Argument and Nonnegative Order",
http://netlib.org/amos/
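Examples
--------
A quick numerical check of the scaling definition given above:
>>> import numpy as np
>>> from scipy.special import hankel1, hankel1e
>>> v, z = 1, np.array([1.0, 2.0, 5.0])
>>> np.allclose(hankel1e(v, z), hankel1(v, z) * np.exp(-1j * z))
True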
""")
add_newdoc("hankel2",
r"""
hankel2(v, z)
Hankel function of the second kind
Parameters
----------
v : array_like
Order (float).
z : array_like
Argument (float or complex).
Returns
-------
out : ndarray
Values of the Hankel function of the second kind.
Notes
-----
A wrapper for the AMOS [1]_ routine `zbesh`, which carries out the
computation using the relation,
.. math:: H^{(2)}_v(z) = -\frac{2}{\imath\pi} \exp(\imath \pi v/2) K_v(z \exp(\imath\pi/2))
where :math:`K_v` is the modified Bessel function of the second kind.
For negative orders, the relation
.. math:: H^{(2)}_{-v}(z) = H^{(2)}_v(z) \exp(-\imath\pi v)
is used.
See also
--------
hankel2e : this function with leading exponential behavior stripped off.
References
----------
.. [1] Donald E. Amos, "AMOS, A Portable Package for Bessel Functions
of a Complex Argument and Nonnegative Order",
http://netlib.org/amos/
""")
add_newdoc("hankel2e",
r"""
hankel2e(v, z)
Exponentially scaled Hankel function of the second kind
Defined as::
hankel2e(v, z) = hankel2(v, z) * exp(1j * z)
Parameters
----------
v : array_like
Order (float).
z : array_like
Argument (float or complex).
Returns
-------
out : ndarray
Values of the exponentially scaled Hankel function of the second kind.
Notes
-----
A wrapper for the AMOS [1]_ routine `zbesh`, which carries out the
computation using the relation,
.. math:: H^{(2)}_v(z) = -\frac{2}{\imath\pi} \exp(\frac{\imath \pi v}{2}) K_v(z \exp(\frac{\imath\pi}{2}))
where :math:`K_v` is the modified Bessel function of the second kind.
For negative orders, the relation
.. math:: H^{(2)}_{-v}(z) = H^{(2)}_v(z) \exp(-\imath\pi v)
is used.
References
----------
.. [1] Donald E. Amos, "AMOS, A Portable Package for Bessel Functions
of a Complex Argument and Nonnegative Order",
http://netlib.org/amos/
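Examples
--------
A quick numerical check of the scaling definition given above:
>>> import numpy as np
>>> from scipy.special import hankel2, hankel2e
>>> v, z = 1, np.array([1.0, 2.0, 5.0])
>>> np.allclose(hankel2e(v, z), hankel2(v, z) * np.exp(1j * z))
True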
""")
add_newdoc("huber",
r"""
huber(delta, r)
Huber loss function.
.. math:: \text{huber}(\delta, r) = \begin{cases} \infty & \delta < 0 \\ \frac{1}{2}r^2 & 0 \le \delta, | r | \le \delta \\ \delta ( |r| - \frac{1}{2}\delta ) & \text{otherwise} \end{cases}
Parameters
----------
delta : ndarray
Input array, indicating the quadratic vs. linear loss changepoint.
r : ndarray
Input array, possibly representing residuals.
Returns
-------
res : ndarray
The computed Huber loss function values.
Notes
-----
This function is convex in r.
.. versionadded:: 0.15.0
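Examples
--------
A small check of the piecewise definition above: the loss is quadratic
for ``|r| <= delta`` and linear in ``|r|`` otherwise:
>>> import numpy as np
>>> from scipy.special import huber
>>> delta, r = 1.0, np.array([0.5, 2.0])
>>> np.allclose(huber(delta, r), [0.5 * 0.5**2, delta * (2.0 - 0.5 * delta)])
True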
""")
add_newdoc("hyp0f1",
r"""
hyp0f1(v, z, out=None)
Confluent hypergeometric limit function 0F1.
Parameters
----------
v : array_like
Real-valued parameter
z : array_like
Real- or complex-valued argument
out : ndarray, optional
Optional output array for the function results
Returns
-------
scalar or ndarray
The confluent hypergeometric limit function
Notes
-----
This function is defined as:
.. math:: _0F_1(v, z) = \sum_{k=0}^{\infty}\frac{z^k}{(v)_k k!}.
It's also the limit as :math:`q \to \infty` of :math:`_1F_1(q; v; z/q)`,
and satisfies the differential equation :math:`zf''(z) + vf'(z) =
f(z)`. See [1]_ for more information.
References
----------
.. [1] Wolfram MathWorld, "Confluent Hypergeometric Limit Function",
http://mathworld.wolfram.com/ConfluentHypergeometricLimitFunction.html
Examples
--------
>>> import scipy.special as sc
It is one when `z` is zero.
>>> sc.hyp0f1(1, 0)
1.0
It is the limit of the confluent hypergeometric function as `q`
goes to infinity.
>>> q = np.array([1, 10, 100, 1000])
>>> v = 1
>>> z = 1
>>> sc.hyp1f1(q, v, z / q)
array([2.71828183, 2.31481985, 2.28303778, 2.27992985])
>>> sc.hyp0f1(v, z)
2.2795853023360673
It is related to Bessel functions.
>>> n = 1
>>> x = np.linspace(0, 1, 5)
>>> sc.jv(n, x)
array([0. , 0.12402598, 0.24226846, 0.3492436 , 0.44005059])
>>> (0.5 * x)**n / sc.factorial(n) * sc.hyp0f1(n + 1, -0.25 * x**2)
array([0. , 0.12402598, 0.24226846, 0.3492436 , 0.44005059])
""")
add_newdoc("hyp1f1",
r"""
hyp1f1(a, b, x, out=None)
Confluent hypergeometric function 1F1.
The confluent hypergeometric function is defined by the series
.. math::
{}_1F_1(a; b; x) = \sum_{k = 0}^\infty \frac{(a)_k}{(b)_k k!} x^k.
See [dlmf]_ for more details. Here :math:`(\cdot)_k` is the
Pochhammer symbol; see `poch`.
Parameters
----------
a, b : array_like
Real parameters
x : array_like
Real or complex argument
out : ndarray, optional
Optional output array for the function results
Returns
-------
scalar or ndarray
Values of the confluent hypergeometric function
See also
--------
hyperu : another confluent hypergeometric function
hyp0f1 : confluent hypergeometric limit function
hyp2f1 : Gaussian hypergeometric function
References
----------
.. [dlmf] NIST Digital Library of Mathematical Functions
https://dlmf.nist.gov/13.2#E2
Examples
--------
>>> import scipy.special as sc
It is one when `x` is zero:
>>> sc.hyp1f1(0.5, 0.5, 0)
1.0
It is singular when `b` is a nonpositive integer.
>>> sc.hyp1f1(0.5, -1, 0)
inf
It is a polynomial when `a` is a nonpositive integer.
>>> a, b, x = -1, 0.5, np.array([1.0, 2.0, 3.0, 4.0])
>>> sc.hyp1f1(a, b, x)
array([-1., -3., -5., -7.])
>>> 1 + (a / b) * x
array([-1., -3., -5., -7.])
It reduces to the exponential function when `a = b`.
>>> sc.hyp1f1(2, 2, [1, 2, 3, 4])
array([ 2.71828183, 7.3890561 , 20.08553692, 54.59815003])
>>> np.exp([1, 2, 3, 4])
array([ 2.71828183, 7.3890561 , 20.08553692, 54.59815003])
""")
add_newdoc("hyp2f1",
r"""
hyp2f1(a, b, c, z)
Gauss hypergeometric function 2F1(a, b; c; z)
Parameters
----------
a, b, c : array_like
Arguments, should be real-valued.
z : array_like
Argument, real or complex.
Returns
-------
hyp2f1 : scalar or ndarray
The values of the Gaussian hypergeometric function.
See also
--------
hyp0f1 : confluent hypergeometric limit function.
hyp1f1 : Kummer's (confluent hypergeometric) function.
Notes
-----
This function is defined for :math:`|z| < 1` as
.. math::
\mathrm{hyp2f1}(a, b, c, z) = \sum_{n=0}^\infty
\frac{(a)_n (b)_n}{(c)_n}\frac{z^n}{n!},
and defined on the rest of the complex z-plane by analytic
continuation [1]_.
Here :math:`(\cdot)_n` is the Pochhammer symbol; see `poch`. When
:math:`n` is an integer the result is a polynomial of degree :math:`n`.
The implementation for complex values of ``z`` is described in [2]_.
References
----------
.. [1] NIST Digital Library of Mathematical Functions
https://dlmf.nist.gov/15.2
.. [2] S. Zhang and J.M. Jin, "Computation of Special Functions", Wiley 1996
.. [3] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
Examples
--------
>>> import scipy.special as sc
It has poles when `c` is a negative integer.
>>> sc.hyp2f1(1, 1, -2, 1)
inf
It is a polynomial when `a` or `b` is a negative integer.
>>> a, b, c = -1, 1, 1.5
>>> z = np.linspace(0, 1, 5)
>>> sc.hyp2f1(a, b, c, z)
array([1. , 0.83333333, 0.66666667, 0.5 , 0.33333333])
>>> 1 + a * b * z / c
array([1. , 0.83333333, 0.66666667, 0.5 , 0.33333333])
It is symmetric in `a` and `b`.
>>> a = np.linspace(0, 1, 5)
>>> b = np.linspace(0, 1, 5)
>>> sc.hyp2f1(a, b, 1, 0.5)
array([1. , 1.03997334, 1.1803406 , 1.47074441, 2. ])
>>> sc.hyp2f1(b, a, 1, 0.5)
array([1. , 1.03997334, 1.1803406 , 1.47074441, 2. ])
It contains many other functions as special cases.
>>> z = 0.5
>>> sc.hyp2f1(1, 1, 2, z)
1.3862943611198901
>>> -np.log(1 - z) / z
1.3862943611198906
>>> sc.hyp2f1(0.5, 1, 1.5, z**2)
1.098612288668109
>>> np.log((1 + z) / (1 - z)) / (2 * z)
1.0986122886681098
>>> sc.hyp2f1(0.5, 1, 1.5, -z**2)
0.9272952180016117
>>> np.arctan(z) / z
0.9272952180016123
""")
add_newdoc("hyperu",
r"""
hyperu(a, b, x, out=None)
Confluent hypergeometric function U
It is defined as the solution to the equation
.. math::
x \frac{d^2w}{dx^2} + (b - x) \frac{dw}{dx} - aw = 0
which satisfies the property
.. math::
U(a, b, x) \sim x^{-a}
as :math:`x \to \infty`. See [dlmf]_ for more details.
Parameters
----------
a, b : array_like
Real-valued parameters
x : array_like
Real-valued argument
out : ndarray
Optional output array for the function values
Returns
-------
scalar or ndarray
Values of `U`
References
----------
.. [dlmf] NIST Digital Library of Mathematical Functions
https://dlmf.nist.gov/13.2#E6
Examples
--------
>>> import scipy.special as sc
It has a branch cut along the negative `x` axis.
>>> x = np.linspace(-0.1, -10, 5)
>>> sc.hyperu(1, 1, x)
array([nan, nan, nan, nan, nan])
It approaches zero as `x` goes to infinity.
>>> x = np.array([1, 10, 100])
>>> sc.hyperu(1, 1, x)
array([0.59634736, 0.09156333, 0.00990194])
It satisfies Kummer's transformation.
>>> a, b, x = 2, 1, 1
>>> sc.hyperu(a, b, x)
0.1926947246463881
>>> x**(1 - b) * sc.hyperu(a - b + 1, 2 - b, x)
0.1926947246463881
""")
add_newdoc("i0",
r"""
i0(x)
Modified Bessel function of order 0.
Defined as,
.. math::
I_0(x) = \sum_{k=0}^\infty \frac{(x^2/4)^k}{(k!)^2} = J_0(\imath x),
where :math:`J_0` is the Bessel function of the first kind of order 0.
Parameters
----------
x : array_like
Argument (float)
Returns
-------
I : ndarray
Value of the modified Bessel function of order 0 at `x`.
Notes
-----
The range is partitioned into the two intervals [0, 8] and (8, infinity).
Chebyshev polynomial expansions are employed in each interval.
This function is a wrapper for the Cephes [1]_ routine `i0`.
See also
--------
iv
i0e
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
""")
add_newdoc("i0e",
"""
i0e(x)
Exponentially scaled modified Bessel function of order 0.
Defined as::
i0e(x) = exp(-abs(x)) * i0(x).
Parameters
----------
x : array_like
Argument (float)
Returns
-------
I : ndarray
Value of the exponentially scaled modified Bessel function of order 0
at `x`.
Notes
-----
The range is partitioned into the two intervals [0, 8] and (8, infinity).
Chebyshev polynomial expansions are employed in each interval. The
polynomial expansions used are the same as those in `i0`, but
they are not multiplied by the dominant exponential factor.
This function is a wrapper for the Cephes [1]_ routine `i0e`.
See also
--------
iv
i0
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
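Examples
--------
A quick numerical check of the definition in terms of `i0`:
>>> import numpy as np
>>> from scipy.special import i0, i0e
>>> x = np.array([1.0, 10.0, 100.0])
>>> np.allclose(i0e(x), np.exp(-np.abs(x)) * i0(x))
True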
""")
add_newdoc("i1",
r"""
i1(x)
Modified Bessel function of order 1.
Defined as,
.. math::
I_1(x) = \frac{1}{2}x \sum_{k=0}^\infty \frac{(x^2/4)^k}{k! (k + 1)!}
= -\imath J_1(\imath x),
where :math:`J_1` is the Bessel function of the first kind of order 1.
Parameters
----------
x : array_like
Argument (float)
Returns
-------
I : ndarray
Value of the modified Bessel function of order 1 at `x`.
Notes
-----
The range is partitioned into the two intervals [0, 8] and (8, infinity).
Chebyshev polynomial expansions are employed in each interval.
This function is a wrapper for the Cephes [1]_ routine `i1`.
See also
--------
iv
i1e
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
""")
add_newdoc("i1e",
"""
i1e(x)
Exponentially scaled modified Bessel function of order 1.
Defined as::
i1e(x) = exp(-abs(x)) * i1(x)
Parameters
----------
x : array_like
Argument (float)
Returns
-------
I : ndarray
Value of the exponentially scaled modified Bessel function of order 1
at `x`.
Notes
-----
The range is partitioned into the two intervals [0, 8] and (8, infinity).
Chebyshev polynomial expansions are employed in each interval. The
polynomial expansions used are the same as those in `i1`, but
they are not multiplied by the dominant exponential factor.
This function is a wrapper for the Cephes [1]_ routine `i1e`.
See also
--------
iv
i1
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
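Examples
--------
A quick numerical check of the definition in terms of `i1`:
>>> import numpy as np
>>> from scipy.special import i1, i1e
>>> x = np.array([1.0, 10.0, 100.0])
>>> np.allclose(i1e(x), np.exp(-np.abs(x)) * i1(x))
True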
""")
add_newdoc("_igam_fac",
"""
Internal function, do not use.
""")
add_newdoc("it2i0k0",
r"""
it2i0k0(x, out=None)
Integrals related to modified Bessel functions of order 0.
Computes the integrals
.. math::
\int_0^x \frac{I_0(t) - 1}{t} dt \\
\int_x^\infty \frac{K_0(t)}{t} dt.
Parameters
----------
x : array_like
Values at which to evaluate the integrals.
out : tuple of ndarrays, optional
Optional output arrays for the function results.
Returns
-------
ii0 : scalar or ndarray
The integral for `i0`
ik0 : scalar or ndarray
The integral for `k0`
""")
add_newdoc("it2j0y0",
r"""
it2j0y0(x, out=None)
Integrals related to Bessel functions of the first kind of order 0.
Computes the integrals
.. math::
\int_0^x \frac{1 - J_0(t)}{t} dt \\
\int_x^\infty \frac{Y_0(t)}{t} dt.
For more on :math:`J_0` and :math:`Y_0` see `j0` and `y0`.
Parameters
----------
x : array_like
Values at which to evaluate the integrals.
out : tuple of ndarrays, optional
Optional output arrays for the function results.
Returns
-------
ij0 : scalar or ndarray
The integral for `j0`
iy0 : scalar or ndarray
The integral for `y0`
""")
add_newdoc("it2struve0",
r"""
it2struve0(x)
Integral related to the Struve function of order 0.
Returns the integral,
.. math::
\int_x^\infty \frac{H_0(t)}{t}\,dt
where :math:`H_0` is the Struve function of order 0.
Parameters
----------
x : array_like
Lower limit of integration.
Returns
-------
I : ndarray
The value of the integral.
See also
--------
struve
Notes
-----
Wrapper for a Fortran routine created by Shanjie Zhang and Jianming
Jin [1]_.
References
----------
.. [1] Zhang, Shanjie and Jin, Jianming. "Computation of Special
Functions", John Wiley and Sons, 1996.
https://people.sc.fsu.edu/~jburkardt/f_src/special_functions/special_functions.html
""")
add_newdoc("itairy",
"""
itairy(x)
Integrals of Airy functions
Calculates the integrals of Airy functions from 0 to `x`.
Parameters
----------
x : array_like
Upper limit of integration (float).
Returns
-------
Apt
Integral of Ai(t) from 0 to x.
Bpt
Integral of Bi(t) from 0 to x.
Ant
Integral of Ai(-t) from 0 to x.
Bnt
Integral of Bi(-t) from 0 to x.
Notes
-----
Wrapper for a Fortran routine created by Shanjie Zhang and Jianming
Jin [1]_.
References
----------
.. [1] Zhang, Shanjie and Jin, Jianming. "Computation of Special
Functions", John Wiley and Sons, 1996.
https://people.sc.fsu.edu/~jburkardt/f_src/special_functions/special_functions.html
""")
add_newdoc("iti0k0",
r"""
iti0k0(x, out=None)
Integrals of modified Bessel functions of order 0.
Computes the integrals
.. math::
\int_0^x I_0(t) dt \\
\int_0^x K_0(t) dt.
For more on :math:`I_0` and :math:`K_0` see `i0` and `k0`.
Parameters
----------
x : array_like
Values at which to evaluate the integrals.
out : tuple of ndarrays, optional
Optional output arrays for the function results.
Returns
-------
ii0 : scalar or ndarray
The integral for `i0`
ik0 : scalar or ndarray
The integral for `k0`
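Examples
--------
A rough check of the first integral using numerical quadrature with
`scipy.integrate.quad`:
>>> import numpy as np
>>> from scipy.special import iti0k0, i0
>>> from scipy.integrate import quad
>>> ii0, ik0 = iti0k0(2.0)
>>> np.allclose(ii0, quad(i0, 0, 2.0)[0])
True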
""")
add_newdoc("itj0y0",
r"""
itj0y0(x, out=None)
Integrals of Bessel functions of the first kind of order 0.
Computes the integrals
.. math::
\int_0^x J_0(t) dt \\
\int_0^x Y_0(t) dt.
For more on :math:`J_0` and :math:`Y_0` see `j0` and `y0`.
Parameters
----------
x : array_like
Values at which to evaluate the integrals.
out : tuple of ndarrays, optional
Optional output arrays for the function results.
Returns
-------
ij0 : scalar or ndarray
The integral of `j0`
iy0 : scalar or ndarray
The integral of `y0`
""")
add_newdoc("itmodstruve0",
r"""
itmodstruve0(x)
Integral of the modified Struve function of order 0.
.. math::
I = \int_0^x L_0(t)\,dt
Parameters
----------
x : array_like
Upper limit of integration (float).
Returns
-------
I : ndarray
The integral of :math:`L_0` from 0 to `x`.
Notes
-----
Wrapper for a Fortran routine created by Shanjie Zhang and Jianming
Jin [1]_.
References
----------
.. [1] Zhang, Shanjie and Jin, Jianming. "Computation of Special
Functions", John Wiley and Sons, 1996.
https://people.sc.fsu.edu/~jburkardt/f_src/special_functions/special_functions.html
""")
add_newdoc("itstruve0",
r"""
itstruve0(x)
Integral of the Struve function of order 0.
.. math::
I = \int_0^x H_0(t)\,dt
Parameters
----------
x : array_like
Upper limit of integration (float).
Returns
-------
I : ndarray
The integral of :math:`H_0` from 0 to `x`.
See also
--------
struve
Notes
-----
Wrapper for a Fortran routine created by Shanjie Zhang and Jianming
Jin [1]_.
References
----------
.. [1] Zhang, Shanjie and Jin, Jianming. "Computation of Special
Functions", John Wiley and Sons, 1996.
https://people.sc.fsu.edu/~jburkardt/f_src/special_functions/special_functions.html
""")
add_newdoc("iv",
r"""
iv(v, z)
Modified Bessel function of the first kind of real order.
Parameters
----------
v : array_like
Order. If `z` is of real type and negative, `v` must be integer
valued.
z : array_like of float or complex
Argument.
Returns
-------
out : ndarray
Values of the modified Bessel function.
Notes
-----
For real `z` and :math:`v \in [-50, 50]`, the evaluation is carried out
using Temme's method [1]_. For larger orders, uniform asymptotic
expansions are applied.
For complex `z` and positive `v`, the AMOS [2]_ `zbesi` routine is
called. It uses a power series for small `z`, the asymptotic expansion
for large `abs(z)`, the Miller algorithm normalized by the Wronskian
and a Neumann series for intermediate magnitudes, and the uniform
asymptotic expansions for :math:`I_v(z)` and :math:`J_v(z)` for large
orders. Backward recurrence is used to generate sequences or reduce
orders when necessary.
The calculations above are done in the right half plane and continued
into the left half plane by the formula,
.. math:: I_v(z \exp(\pm\imath\pi)) = \exp(\pm\pi v) I_v(z)
(valid when the real part of `z` is positive). For negative `v`, the
formula
.. math:: I_{-v}(z) = I_v(z) + \frac{2}{\pi} \sin(\pi v) K_v(z)
is used, where :math:`K_v(z)` is the modified Bessel function of the
second kind, evaluated using the AMOS routine `zbesk`.
See also
--------
kve : This function with leading exponential behavior stripped off.
References
----------
.. [1] Temme, Journal of Computational Physics, vol 21, 343 (1976)
.. [2] Donald E. Amos, "AMOS, A Portable Package for Bessel Functions
of a Complex Argument and Nonnegative Order",
http://netlib.org/amos/
""")
add_newdoc("ive",
r"""
ive(v, z)
Exponentially scaled modified Bessel function of the first kind
Defined as::
ive(v, z) = iv(v, z) * exp(-abs(z.real))
Parameters
----------
v : array_like of float
Order.
z : array_like of float or complex
Argument.
Returns
-------
out : ndarray
Values of the exponentially scaled modified Bessel function.
Notes
-----
For positive `v`, the AMOS [1]_ `zbesi` routine is called. It uses a
power series for small `z`, the asymptotic expansion for large
`abs(z)`, the Miller algorithm normalized by the Wronskian and a
Neumann series for intermediate magnitudes, and the uniform asymptotic
expansions for :math:`I_v(z)` and :math:`J_v(z)` for large orders.
Backward recurrence is used to generate sequences or reduce orders when
necessary.
The calculations above are done in the right half plane and continued
into the left half plane by the formula,
.. math:: I_v(z \exp(\pm\imath\pi)) = \exp(\pm\pi v) I_v(z)
(valid when the real part of `z` is positive). For negative `v`, the
formula
.. math:: I_{-v}(z) = I_v(z) + \frac{2}{\pi} \sin(\pi v) K_v(z)
is used, where :math:`K_v(z)` is the modified Bessel function of the
second kind, evaluated using the AMOS routine `zbesk`.
References
----------
.. [1] Donald E. Amos, "AMOS, A Portable Package for Bessel Functions
of a Complex Argument and Nonnegative Order",
http://netlib.org/amos/
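Examples
--------
A quick numerical check of the scaling definition given above:
>>> import numpy as np
>>> from scipy.special import iv, ive
>>> v, z = 1, 2 + 3j
>>> np.allclose(ive(v, z), iv(v, z) * np.exp(-np.abs(z.real)))
True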
""")
add_newdoc("j0",
r"""
j0(x)
Bessel function of the first kind of order 0.
Parameters
----------
x : array_like
Argument (float).
Returns
-------
J : ndarray
Value of the Bessel function of the first kind of order 0 at `x`.
Notes
-----
The domain is divided into the intervals [0, 5] and (5, infinity). In the
first interval the following rational approximation is used:
.. math::
J_0(x) \approx (w - r_1^2)(w - r_2^2) \frac{P_3(w)}{Q_8(w)},
where :math:`w = x^2` and :math:`r_1`, :math:`r_2` are the zeros of
:math:`J_0`, and :math:`P_3` and :math:`Q_8` are polynomials of degrees 3
and 8, respectively.
In the second interval, the Hankel asymptotic expansion is employed with
two rational functions of degree 6/6 and 7/7.
This function is a wrapper for the Cephes [1]_ routine `j0`.
It should not be confused with the spherical Bessel functions (see
`spherical_jn`).
See also
--------
jv : Bessel function of real order and complex argument.
spherical_jn : spherical Bessel functions.
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
""")
add_newdoc("j1",
"""
j1(x)
Bessel function of the first kind of order 1.
Parameters
----------
x : array_like
Argument (float).
Returns
-------
J : ndarray
Value of the Bessel function of the first kind of order 1 at `x`.
Notes
-----
The domain is divided into the intervals [0, 8] and (8, infinity). In the
first interval a 24 term Chebyshev expansion is used. In the second, the
asymptotic trigonometric representation is employed using two rational
functions of degree 5/5.
This function is a wrapper for the Cephes [1]_ routine `j1`.
It should not be confused with the spherical Bessel functions (see
`spherical_jn`).
See also
--------
jv
spherical_jn : spherical Bessel functions.
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
""")
add_newdoc("jn",
"""
jn(n, x)
Bessel function of the first kind of integer order and real argument.
Notes
-----
`jn` is an alias of `jv`.
Not to be confused with the spherical Bessel functions (see `spherical_jn`).
See also
--------
jv
spherical_jn : spherical Bessel functions.
""")
add_newdoc("jv",
r"""
jv(v, z)
Bessel function of the first kind of real order and complex argument.
Parameters
----------
v : array_like
Order (float).
z : array_like
Argument (float or complex).
Returns
-------
J : ndarray
Value of the Bessel function, :math:`J_v(z)`.
Notes
-----
For positive `v` values, the computation is carried out using the AMOS
[1]_ `zbesj` routine, which exploits the connection to the modified
Bessel function :math:`I_v`,
.. math::
J_v(z) = \exp(v\pi\imath/2) I_v(-\imath z)\qquad (\Im z > 0)
J_v(z) = \exp(-v\pi\imath/2) I_v(\imath z)\qquad (\Im z < 0)
For negative `v` values the formula,
.. math:: J_{-v}(z) = J_v(z) \cos(\pi v) - Y_v(z) \sin(\pi v)
is used, where :math:`Y_v(z)` is the Bessel function of the second
kind, computed using the AMOS routine `zbesy`. Note that the second
term is exactly zero for integer `v`; to improve accuracy the second
term is explicitly omitted for `v` values such that `v = floor(v)`.
Not to be confused with the spherical Bessel functions (see `spherical_jn`).
See also
--------
jve : :math:`J_v` with leading exponential behavior stripped off.
spherical_jn : spherical Bessel functions.
References
----------
.. [1] Donald E. Amos, "AMOS, A Portable Package for Bessel Functions
of a Complex Argument and Nonnegative Order",
http://netlib.org/amos/
""")
add_newdoc("jve",
r"""
jve(v, z)
Exponentially scaled Bessel function of order `v`.
Defined as::
jve(v, z) = jv(v, z) * exp(-abs(z.imag))
Parameters
----------
v : array_like
Order (float).
z : array_like
Argument (float or complex).
Returns
-------
J : ndarray
Value of the exponentially scaled Bessel function.
Notes
-----
For positive `v` values, the computation is carried out using the AMOS
[1]_ `zbesj` routine, which exploits the connection to the modified
Bessel function :math:`I_v`,
.. math::
J_v(z) = \exp(v\pi\imath/2) I_v(-\imath z)\qquad (\Im z > 0)
J_v(z) = \exp(-v\pi\imath/2) I_v(\imath z)\qquad (\Im z < 0)
For negative `v` values the formula,
.. math:: J_{-v}(z) = J_v(z) \cos(\pi v) - Y_v(z) \sin(\pi v)
is used, where :math:`Y_v(z)` is the Bessel function of the second
kind, computed using the AMOS routine `zbesy`. Note that the second
term is exactly zero for integer `v`; to improve accuracy the second
term is explicitly omitted for `v` values such that `v = floor(v)`.
References
----------
.. [1] Donald E. Amos, "AMOS, A Portable Package for Bessel Functions
of a Complex Argument and Nonnegative Order",
http://netlib.org/amos/
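Examples
--------
A quick numerical check of the scaling definition given above:
>>> import numpy as np
>>> from scipy.special import jv, jve
>>> v, z = 1, 2 + 3j
>>> np.allclose(jve(v, z), jv(v, z) * np.exp(-np.abs(z.imag)))
True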
""")
add_newdoc("k0",
r"""
k0(x)
Modified Bessel function of the second kind of order 0, :math:`K_0`.
This function is also sometimes referred to as the modified Bessel
function of the third kind of order 0.
Parameters
----------
x : array_like
Argument (float).
Returns
-------
K : ndarray
Value of the modified Bessel function :math:`K_0` at `x`.
Notes
-----
The range is partitioned into the two intervals [0, 2] and (2, infinity).
Chebyshev polynomial expansions are employed in each interval.
This function is a wrapper for the Cephes [1]_ routine `k0`.
See also
--------
kv
k0e
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
""")
add_newdoc("k0e",
"""
k0e(x)
Exponentially scaled modified Bessel function K of order 0
Defined as::
k0e(x) = exp(x) * k0(x).
Parameters
----------
x : array_like
Argument (float)
Returns
-------
K : ndarray
Value of the exponentially scaled modified Bessel function K of order
0 at `x`.
Notes
-----
The range is partitioned into the two intervals [0, 2] and (2, infinity).
Chebyshev polynomial expansions are employed in each interval.
This function is a wrapper for the Cephes [1]_ routine `k0e`.
See also
--------
kv
k0
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
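Examples
--------
A quick numerical check of the definition in terms of `k0`:
>>> import numpy as np
>>> from scipy.special import k0, k0e
>>> x = np.array([1.0, 5.0, 50.0])
>>> np.allclose(k0e(x), np.exp(x) * k0(x))
True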
""")
add_newdoc("k1",
"""
k1(x)
Modified Bessel function of the second kind of order 1, :math:`K_1(x)`.
Parameters
----------
x : array_like
Argument (float)
Returns
-------
K : ndarray
Value of the modified Bessel function K of order 1 at `x`.
Notes
-----
The range is partitioned into the two intervals [0, 2] and (2, infinity).
Chebyshev polynomial expansions are employed in each interval.
This function is a wrapper for the Cephes [1]_ routine `k1`.
See also
--------
kv
k1e
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
""")
add_newdoc("k1e",
"""
k1e(x)
Exponentially scaled modified Bessel function K of order 1
Defined as::
k1e(x) = exp(x) * k1(x)
Parameters
----------
x : array_like
Argument (float)
Returns
-------
K : ndarray
Value of the exponentially scaled modified Bessel function K of order
1 at `x`.
Notes
-----
The range is partitioned into the two intervals [0, 2] and (2, infinity).
Chebyshev polynomial expansions are employed in each interval.
This function is a wrapper for the Cephes [1]_ routine `k1e`.
See also
--------
kv
k1
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
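Examples
--------
A quick numerical check of the definition in terms of `k1`:
>>> import numpy as np
>>> from scipy.special import k1, k1e
>>> x = np.array([1.0, 5.0, 50.0])
>>> np.allclose(k1e(x), np.exp(x) * k1(x))
True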
""")
add_newdoc("kei",
r"""
kei(x, out=None)
Kelvin function kei.
Defined as
.. math::
\mathrm{kei}(x) = \Im[K_0(x e^{\pi i / 4})]
where :math:`K_0` is the modified Bessel function of the second
kind (see `kv`). See [dlmf]_ for more details.
Parameters
----------
x : array_like
Real argument.
out : ndarray, optional
Optional output array for the function results.
Returns
-------
scalar or ndarray
Values of the Kelvin function.
See Also
--------
ker : the corresponding real part
keip : the derivative of kei
kv : modified Bessel function of the second kind
References
----------
.. [dlmf] NIST, Digital Library of Mathematical Functions,
https://dlmf.nist.gov/10.61
Examples
--------
It can be expressed using the modified Bessel function of the
second kind.
>>> import scipy.special as sc
>>> x = np.array([1.0, 2.0, 3.0, 4.0])
>>> sc.kv(0, x * np.exp(np.pi * 1j / 4)).imag
array([-0.49499464, -0.20240007, -0.05112188, 0.0021984 ])
>>> sc.kei(x)
array([-0.49499464, -0.20240007, -0.05112188, 0.0021984 ])
""")
add_newdoc("keip",
r"""
keip(x, out=None)
Derivative of the Kelvin function kei.
Parameters
----------
x : array_like
Real argument.
out : ndarray, optional
Optional output array for the function results.
Returns
-------
scalar or ndarray
The values of the derivative of kei.
See Also
--------
kei
References
----------
.. [dlmf] NIST, Digital Library of Mathematical Functions,
https://dlmf.nist.gov/10#PT5
""")
add_newdoc("kelvin",
"""
kelvin(x)
Kelvin functions as complex numbers
Returns
-------
Be, Ke, Bep, Kep
The tuple (Be, Ke, Bep, Kep) contains complex numbers
representing the real and imaginary Kelvin functions and their
derivatives evaluated at `x`. For example, kelvin(x)[0].real =
ber x and kelvin(x)[0].imag = bei x with similar relationships
for ker and kei.
""")
add_newdoc("ker",
r"""
ker(x, out=None)
Kelvin function ker.
Defined as
.. math::
\mathrm{ker}(x) = \Re[K_0(x e^{\pi i / 4})]
Where :math:`K_0` is the modified Bessel function of the second
kind (see `kv`). See [dlmf]_ for more details.
Parameters
----------
x : array_like
Real argument.
out : ndarray, optional
Optional output array for the function results.
See Also
--------
kei : the corresponding imaginary part
kerp : the derivative of ker
kv : modified Bessel function of the second kind
Returns
-------
scalar or ndarray
Values of the Kelvin function.
References
----------
.. [dlmf] NIST, Digital Library of Mathematical Functions,
https://dlmf.nist.gov/10.61
Examples
--------
It can be expressed using the modified Bessel function of the
second kind.
>>> import scipy.special as sc
>>> x = np.array([1.0, 2.0, 3.0, 4.0])
>>> sc.kv(0, x * np.exp(np.pi * 1j / 4)).real
array([ 0.28670621, -0.04166451, -0.06702923, -0.03617885])
>>> sc.ker(x)
array([ 0.28670621, -0.04166451, -0.06702923, -0.03617885])
""")
add_newdoc("kerp",
r"""
kerp(x, out=None)
Derivative of the Kelvin function ker.
Parameters
----------
x : array_like
Real argument.
out : ndarray, optional
Optional output array for the function results.
Returns
-------
scalar or ndarray
Values of the derivative of ker.
See Also
--------
ker
References
----------
.. [dlmf] NIST, Digital Library of Mathematical Functions,
https://dlmf.nist.gov/10#PT5
""")
add_newdoc("kl_div",
r"""
kl_div(x, y, out=None)
Elementwise function for computing Kullback-Leibler divergence.
.. math::
\mathrm{kl\_div}(x, y) =
\begin{cases}
x \log(x / y) - x + y & x > 0, y > 0 \\
y & x = 0, y \ge 0 \\
\infty & \text{otherwise}
\end{cases}
Parameters
----------
x, y : array_like
Real arguments
out : ndarray, optional
Optional output array for the function results
Returns
-------
scalar or ndarray
Values of the Kullback-Leibler divergence.
See Also
--------
entr, rel_entr
Notes
-----
.. versionadded:: 0.15.0
This function is non-negative and is jointly convex in `x` and `y`.
The origin of this function is in convex programming; see [1]_ for
details. This is why the function contains the extra :math:`-x
+ y` terms over what might be expected from the Kullback-Leibler
divergence. For a version of the function without the extra terms,
see `rel_entr`.
References
----------
.. [1] Grant, Boyd, and Ye, "CVX: Matlab Software for Disciplined Convex
Programming", http://cvxr.com/cvx/
""")
add_newdoc("kn",
r"""
kn(n, x)
Modified Bessel function of the second kind of integer order `n`
Returns the modified Bessel function of the second kind for integer order
`n` at real `x`.
These are also sometimes called functions of the third kind, Basset
functions, or Macdonald functions.
Parameters
----------
n : array_like of int
Order of Bessel functions (floats will truncate with a warning)
x : array_like of float
Argument at which to evaluate the Bessel functions
Returns
-------
out : ndarray
The results
Notes
-----
Wrapper for AMOS [1]_ routine `zbesk`. For a discussion of the
algorithm used, see [2]_ and the references therein.
See Also
--------
kv : Same function, but accepts real order and complex argument
kvp : Derivative of this function
References
----------
.. [1] Donald E. Amos, "AMOS, A Portable Package for Bessel Functions
of a Complex Argument and Nonnegative Order",
http://netlib.org/amos/
.. [2] Donald E. Amos, "Algorithm 644: A portable package for Bessel
functions of a complex argument and nonnegative order", ACM
TOMS Vol. 12 Issue 3, Sept. 1986, p. 265
Examples
--------
Plot the function of several orders for real input:
>>> from scipy.special import kn
>>> import matplotlib.pyplot as plt
>>> x = np.linspace(0, 5, 1000)
>>> for N in range(6):
... plt.plot(x, kn(N, x), label='$K_{}(x)$'.format(N))
>>> plt.ylim(0, 10)
>>> plt.legend()
>>> plt.title(r'Modified Bessel function of the second kind $K_n(x)$')
>>> plt.show()
Calculate for a single value at multiple orders:
>>> kn([4, 5, 6], 1)
array([ 44.23241585, 360.9605896 , 3653.83831186])
""")
add_newdoc("kolmogi",
"""
kolmogi(p)
Inverse Survival Function of the Kolmogorov distribution
It is the inverse function to `kolmogorov`.
Returns y such that ``kolmogorov(y) == p``.
Parameters
----------
p : float array_like
Probability
Returns
-------
float
The value(s) of kolmogi(p)
Notes
-----
`kolmogorov` is used by `stats.kstest` in the application of the
Kolmogorov-Smirnov Goodness of Fit test. For historical reasons this
function is exposed in `scipy.special`, but the recommended way to achieve
the most accurate CDF/SF/PDF/PPF/ISF computations is to use the
`stats.kstwobign` distribution.
See Also
--------
kolmogorov : The Survival Function for the distribution
scipy.stats.kstwobign : Provides the functionality as a continuous distribution
smirnov, smirnovi : Functions for the one-sided distribution
Examples
--------
>>> from scipy.special import kolmogi
>>> kolmogi([0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0])
array([ inf, 1.22384787, 1.01918472, 0.82757356, 0.67644769,
0.57117327, 0. ])
""")
add_newdoc("kolmogorov",
r"""
kolmogorov(y)
Complementary cumulative distribution function (Survival Function) of the
Kolmogorov distribution.
Returns the complementary cumulative distribution function of
Kolmogorov's limiting distribution (``D_n*\sqrt(n)`` as n goes to infinity)
of a two-sided test for equality between an empirical and a theoretical
distribution. It is equal to the (limit as n->infinity of the)
probability that ``sqrt(n) * max absolute deviation > y``.
Parameters
----------
y : float array_like
Absolute deviation between the Empirical CDF (ECDF) and the target CDF,
multiplied by sqrt(n).
Returns
-------
float
The value(s) of kolmogorov(y)
Notes
-----
`kolmogorov` is used by `stats.kstest` in the application of the
Kolmogorov-Smirnov Goodness of Fit test. For historical reasons this
function is exposed in `scipy.special`, but the recommended way to achieve
the most accurate CDF/SF/PDF/PPF/ISF computations is to use the
`stats.kstwobign` distribution.
See Also
--------
kolmogi : The Inverse Survival Function for the distribution
scipy.stats.kstwobign : Provides the functionality as a continuous distribution
smirnov, smirnovi : Functions for the one-sided distribution
Examples
--------
Show the probability of a gap at least as big as 0, 0.5 and 1.0.
>>> from scipy.special import kolmogorov
>>> from scipy.stats import kstwobign
>>> kolmogorov([0, 0.5, 1.0])
array([ 1. , 0.96394524, 0.26999967])
Compare a sample of size 1000 drawn from a Laplace(0, 1) distribution against
the target distribution, a Normal(0, 1) distribution.
>>> from scipy.stats import norm, laplace
>>> rng = np.random.default_rng()
>>> n = 1000
>>> lap01 = laplace(0, 1)
>>> x = np.sort(lap01.rvs(n, random_state=rng))
>>> np.mean(x), np.std(x)
(-0.05841730131499543, 1.3968109101997568)
Construct the Empirical CDF and the K-S statistic Dn.
>>> target = norm(0,1) # Normal mean 0, stddev 1
>>> cdfs = target.cdf(x)
>>> ecdfs = np.arange(n+1, dtype=float)/n
>>> gaps = np.column_stack([cdfs - ecdfs[:n], ecdfs[1:] - cdfs])
>>> Dn = np.max(gaps)
>>> Kn = np.sqrt(n) * Dn
>>> print('Dn=%f, sqrt(n)*Dn=%f' % (Dn, Kn))
Dn=0.043363, sqrt(n)*Dn=1.371265
>>> print(chr(10).join(['For a sample of size n drawn from a N(0, 1) distribution:',
... ' the approximate Kolmogorov probability that sqrt(n)*Dn>=%f is %f' % (Kn, kolmogorov(Kn)),
... ' the approximate Kolmogorov probability that sqrt(n)*Dn<=%f is %f' % (Kn, kstwobign.cdf(Kn))]))
For a sample of size n drawn from a N(0, 1) distribution:
the approximate Kolmogorov probability that sqrt(n)*Dn>=1.371265 is 0.046533
the approximate Kolmogorov probability that sqrt(n)*Dn<=1.371265 is 0.953467
Plot the Empirical CDF against the target N(0, 1) CDF.
>>> import matplotlib.pyplot as plt
>>> plt.step(np.concatenate([[-3], x]), ecdfs, where='post', label='Empirical CDF')
>>> x3 = np.linspace(-3, 3, 100)
>>> plt.plot(x3, target.cdf(x3), label='CDF for N(0, 1)')
>>> plt.ylim([0, 1]); plt.grid(True); plt.legend();
>>> # Add vertical lines marking Dn+ and Dn-
>>> iminus, iplus = np.argmax(gaps, axis=0)
>>> plt.vlines([x[iminus]], ecdfs[iminus], cdfs[iminus], color='r', linestyle='dashed', lw=4)
>>> plt.vlines([x[iplus]], cdfs[iplus], ecdfs[iplus+1], color='r', linestyle='dashed', lw=4)
>>> plt.show()
""")
add_newdoc("_kolmogc",
r"""
Internal function, do not use.
""")
add_newdoc("_kolmogci",
r"""
Internal function, do not use.
""")
add_newdoc("_kolmogp",
r"""
Internal function, do not use.
""")
add_newdoc("kv",
r"""
kv(v, z)
Modified Bessel function of the second kind of real order `v`
Returns the modified Bessel function of the second kind for real order
`v` at complex `z`.
These are also sometimes called functions of the third kind, Basset
functions, or Macdonald functions. They are defined as those solutions
of the modified Bessel equation for which,
.. math::
K_v(x) \sim \sqrt{\pi/(2x)} \exp(-x)
as :math:`x \to \infty` [3]_.
Parameters
----------
v : array_like of float
Order of Bessel functions
z : array_like of complex
Argument at which to evaluate the Bessel functions
Returns
-------
out : ndarray
The results. Note that input must be of complex type to get complex
output, e.g. ``kv(3, -2+0j)`` instead of ``kv(3, -2)``.
Notes
-----
Wrapper for AMOS [1]_ routine `zbesk`. For a discussion of the
algorithm used, see [2]_ and the references therein.
See Also
--------
kve : This function with leading exponential behavior stripped off.
kvp : Derivative of this function
References
----------
.. [1] Donald E. Amos, "AMOS, A Portable Package for Bessel Functions
of a Complex Argument and Nonnegative Order",
http://netlib.org/amos/
.. [2] Donald E. Amos, "Algorithm 644: A portable package for Bessel
functions of a complex argument and nonnegative order", ACM
TOMS Vol. 12 Issue 3, Sept. 1986, p. 265
.. [3] NIST Digital Library of Mathematical Functions,
Eq. 10.25.E3. https://dlmf.nist.gov/10.25.E3
Examples
--------
Plot the function of several orders for real input:
>>> from scipy.special import kv
>>> import matplotlib.pyplot as plt
>>> x = np.linspace(0, 5, 1000)
>>> for N in np.linspace(0, 6, 5):
... plt.plot(x, kv(N, x), label='$K_{{{}}}(x)$'.format(N))
>>> plt.ylim(0, 10)
>>> plt.legend()
>>> plt.title(r'Modified Bessel function of the second kind $K_\nu(x)$')
>>> plt.show()
Calculate for a single value at multiple orders:
>>> kv([4, 4.5, 5], 1+2j)
array([ 0.1992+2.3892j, 2.3493+3.6j , 7.2827+3.8104j])
""")
add_newdoc("kve",
r"""
kve(v, z)
Exponentially scaled modified Bessel function of the second kind.
Returns the exponentially scaled, modified Bessel function of the
second kind (sometimes called the third kind) for real order `v` at
complex `z`::
kve(v, z) = kv(v, z) * exp(z)
Parameters
----------
v : array_like of float
Order of Bessel functions
z : array_like of complex
Argument at which to evaluate the Bessel functions
Returns
-------
out : ndarray
The exponentially scaled modified Bessel function of the second kind.
Notes
-----
Wrapper for AMOS [1]_ routine `zbesk`. For a discussion of the
algorithm used, see [2]_ and the references therein.
References
----------
.. [1] Donald E. Amos, "AMOS, A Portable Package for Bessel Functions
of a Complex Argument and Nonnegative Order",
http://netlib.org/amos/
.. [2] Donald E. Amos, "Algorithm 644: A portable package for Bessel
functions of a complex argument and nonnegative order", ACM
TOMS Vol. 12 Issue 3, Sept. 1986, p. 265
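Examples
--------
A quick consistency check of the scaling relation above (an illustrative
sketch, not reference output; it assumes NumPy is imported as ``np``):
>>> import numpy as np
>>> from scipy.special import kv, kve
>>> v, z = 2.5, 3.0
>>> np.allclose(kve(v, z), kv(v, z) * np.exp(z))
True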
""")
add_newdoc("_lanczos_sum_expg_scaled",
"""
Internal function, do not use.
""")
add_newdoc("_lgam1p",
"""
Internal function, do not use.
""")
add_newdoc("log1p",
"""
log1p(x, out=None)
Calculates log(1 + x) for use when `x` is near zero.
Parameters
----------
x : array_like
Real or complex valued input.
out : ndarray, optional
Optional output array for the function results.
Returns
-------
scalar or ndarray
Values of ``log(1 + x)``.
See Also
--------
expm1, cosm1
Examples
--------
>>> import scipy.special as sc
It is more accurate than using ``log(1 + x)`` directly for ``x``
near 0. Note that in the below example ``1 + 1e-17 == 1`` to
double precision.
>>> sc.log1p(1e-17)
1e-17
>>> np.log(1 + 1e-17)
0.0
""")
add_newdoc("_log1pmx",
"""
Internal function, do not use.
""")
add_newdoc('logit',
"""
logit(x)
Logit ufunc for ndarrays.
The logit function is defined as logit(p) = log(p/(1-p)).
Note that logit(0) = -inf, logit(1) = inf, and logit(p)
for p<0 or p>1 yields nan.
Parameters
----------
x : ndarray
The ndarray to apply logit to element-wise.
Returns
-------
out : ndarray
An ndarray of the same shape as x. Its entries
are logit of the corresponding entry of x.
See Also
--------
expit
Notes
-----
As a ufunc logit takes a number of optional
keyword arguments. For more information
see `ufuncs <https://docs.scipy.org/doc/numpy/reference/ufuncs.html>`_
.. versionadded:: 0.10.0
Examples
--------
>>> from scipy.special import logit, expit
>>> logit([0, 0.25, 0.5, 0.75, 1])
array([ -inf, -1.09861229, 0. , 1.09861229, inf])
`expit` is the inverse of `logit`:
>>> expit(logit([0.1, 0.75, 0.999]))
array([ 0.1 , 0.75 , 0.999])
Plot logit(x) for x in [0, 1]:
>>> import matplotlib.pyplot as plt
>>> x = np.linspace(0, 1, 501)
>>> y = logit(x)
>>> plt.plot(x, y)
>>> plt.grid()
>>> plt.ylim(-6, 6)
>>> plt.xlabel('x')
>>> plt.title('logit(x)')
>>> plt.show()
""")
add_newdoc("lpmv",
r"""
lpmv(m, v, x)
Associated Legendre function of integer order and real degree.
Defined as
.. math::
P_v^m = (-1)^m (1 - x^2)^{m/2} \frac{d^m}{dx^m} P_v(x)
where
.. math::
P_v = \sum_{k = 0}^\infty \frac{(-v)_k (v + 1)_k}{(k!)^2}
\left(\frac{1 - x}{2}\right)^k
is the Legendre function of the first kind. Here :math:`(\cdot)_k`
is the Pochhammer symbol; see `poch`.
Parameters
----------
m : array_like
Order (int or float). If passed a float not equal to an
integer the function returns NaN.
v : array_like
Degree (float).
x : array_like
Argument (float). Must have ``|x| <= 1``.
Returns
-------
pmv : ndarray
Value of the associated Legendre function.
See Also
--------
lpmn : Compute the associated Legendre function for all orders
``0, ..., m`` and degrees ``0, ..., n``.
clpmn : Compute the associated Legendre function at complex
arguments.
Notes
-----
Note that this implementation includes the Condon-Shortley phase.
References
----------
.. [1] Zhang, Jin, "Computation of Special Functions", John Wiley
and Sons, Inc, 1996.
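Examples
--------
A small sanity check (illustrative, not reference output) against the
elementary closed forms :math:`P_2(x) = (3x^2 - 1)/2` and
:math:`P_1^1(x) = -\sqrt{1 - x^2}`, the latter including the
Condon-Shortley phase mentioned in the Notes:
>>> import numpy as np
>>> from scipy.special import lpmv
>>> np.allclose(lpmv(0, 2, 0.5), (3 * 0.5**2 - 1) / 2)
True
>>> np.allclose(lpmv(1, 1, 0.5), -np.sqrt(1 - 0.5**2))
True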
""")
add_newdoc("mathieu_a",
"""
mathieu_a(m, q)
Characteristic value of even Mathieu functions
Returns the characteristic value for the even solution,
``ce_m(z, q)``, of Mathieu's equation.
""")
add_newdoc("mathieu_b",
"""
mathieu_b(m, q)
Characteristic value of odd Mathieu functions
Returns the characteristic value for the odd solution,
``se_m(z, q)``, of Mathieu's equation.
""")
add_newdoc("mathieu_cem",
"""
mathieu_cem(m, q, x)
Even Mathieu function and its derivative
Returns the even Mathieu function, ``ce_m(x, q)``, of order `m` and
parameter `q` evaluated at `x` (given in degrees). Also returns the
derivative with respect to `x` of ce_m(x, q)
Parameters
----------
m
Order of the function
q
Parameter of the function
x
Argument of the function, *given in degrees, not radians*
Returns
-------
y
Value of the function
yp
Value of the derivative vs x
""")
add_newdoc("mathieu_modcem1",
"""
mathieu_modcem1(m, q, x)
Even modified Mathieu function of the first kind and its derivative
Evaluates the even modified Mathieu function of the first kind,
``Mc1m(x, q)``, and its derivative at `x` for order `m` and parameter
`q`.
Returns
-------
y
Value of the function
yp
Value of the derivative vs x
""")
add_newdoc("mathieu_modcem2",
"""
mathieu_modcem2(m, q, x)
Even modified Mathieu function of the second kind and its derivative
Evaluates the even modified Mathieu function of the second kind,
Mc2m(x, q), and its derivative at `x` (given in degrees) for order `m`
and parameter `q`.
Returns
-------
y
Value of the function
yp
Value of the derivative vs x
""")
add_newdoc("mathieu_modsem1",
"""
mathieu_modsem1(m, q, x)
Odd modified Mathieu function of the first kind and its derivative
Evaluates the odd modified Mathieu function of the first kind,
Ms1m(x, q), and its derivative at `x` (given in degrees) for order `m`
and parameter `q`.
Returns
-------
y
Value of the function
yp
Value of the derivative vs x
""")
add_newdoc("mathieu_modsem2",
"""
mathieu_modsem2(m, q, x)
Odd modified Mathieu function of the second kind and its derivative
Evaluates the odd modified Mathieu function of the second kind,
Ms2m(x, q), and its derivative at `x` (given in degrees) for order `m`
and parameter q.
Returns
-------
y
Value of the function
yp
Value of the derivative vs x
""")
add_newdoc("mathieu_sem",
"""
mathieu_sem(m, q, x)
Odd Mathieu function and its derivative
Returns the odd Mathieu function, se_m(x, q), of order `m` and
parameter `q` evaluated at `x` (given in degrees). Also returns the
derivative with respect to `x` of se_m(x, q).
Parameters
----------
m
Order of the function
q
Parameter of the function
x
Argument of the function, *given in degrees, not radians*.
Returns
-------
y
Value of the function
yp
Value of the derivative vs x
""")
add_newdoc("modfresnelm",
"""
modfresnelm(x)
Modified Fresnel negative integrals
Returns
-------
fm
Integral ``F_-(x)``: ``integral(exp(-1j*t*t), t=x..inf)``
km
Integral ``K_-(x)``: ``1/sqrt(pi)*exp(1j*(x*x+pi/4))*fm``
""")
add_newdoc("modfresnelp",
"""
modfresnelp(x)
Modified Fresnel positive integrals
Returns
-------
fp
Integral ``F_+(x)``: ``integral(exp(1j*t*t), t=x..inf)``
kp
Integral ``K_+(x)``: ``1/sqrt(pi)*exp(-1j*(x*x+pi/4))*fp``
""")
add_newdoc("modstruve",
r"""
modstruve(v, x)
Modified Struve function.
Return the value of the modified Struve function of order `v` at `x`. The
modified Struve function is defined as,
.. math::
L_v(x) = -\imath \exp(-\pi\imath v/2) H_v(\imath x),
where :math:`H_v` is the Struve function.
Parameters
----------
v : array_like
Order of the modified Struve function (float).
x : array_like
Argument of the Struve function (float; must be positive unless `v` is
an integer).
Returns
-------
L : ndarray
Value of the modified Struve function of order `v` at `x`.
Notes
-----
Three methods discussed in [1]_ are used to evaluate the function:
- power series
- expansion in Bessel functions (if :math:`|x| < |v| + 20`)
- asymptotic large-x expansion (if :math:`x \geq 0.7v + 12`)
Rounding errors are estimated based on the largest terms in the sums, and
the result associated with the smallest error is returned.
See also
--------
struve
References
----------
.. [1] NIST Digital Library of Mathematical Functions
https://dlmf.nist.gov/11
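Examples
--------
As an illustrative check (not reference output), the half-integer order
reduces to an elementary function,
:math:`L_{1/2}(x) = \sqrt{2/(\pi x)}\,(\cosh x - 1)`:
>>> import numpy as np
>>> from scipy.special import modstruve
>>> x = 3.0
>>> np.allclose(modstruve(0.5, x), np.sqrt(2 / (np.pi * x)) * (np.cosh(x) - 1))
True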
""")
add_newdoc("nbdtr",
r"""
nbdtr(k, n, p)
Negative binomial cumulative distribution function.
Returns the sum of the terms 0 through `k` of the negative binomial
distribution probability mass function,
.. math::
F = \sum_{j=0}^k {{n + j - 1}\choose{j}} p^n (1 - p)^j.
In a sequence of Bernoulli trials with individual success probabilities
`p`, this is the probability that `k` or fewer failures precede the nth
success.
Parameters
----------
k : array_like
The maximum number of allowed failures (nonnegative int).
n : array_like
The target number of successes (positive int).
p : array_like
Probability of success in a single event (float).
Returns
-------
F : ndarray
The probability of `k` or fewer failures before `n` successes in a
sequence of events with individual success probability `p`.
See also
--------
nbdtrc
Notes
-----
If floating point values are passed for `k` or `n`, they will be truncated
to integers.
The terms are not summed directly; instead the regularized incomplete beta
function is employed, according to the formula,
.. math::
\mathrm{nbdtr}(k, n, p) = I_{p}(n, k + 1).
Wrapper for the Cephes [1]_ routine `nbdtr`.
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
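Examples
--------
A quick consistency check (illustrative, not reference output) of the
incomplete-beta identity given in the Notes, using `scipy.special.betainc`:
>>> import numpy as np
>>> import scipy.special as sc
>>> k, n, p = 5, 3, 0.4
>>> np.allclose(sc.nbdtr(k, n, p), sc.betainc(n, k + 1, p))
True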
""")
add_newdoc("nbdtrc",
r"""
nbdtrc(k, n, p)
Negative binomial survival function.
Returns the sum of the terms `k + 1` to infinity of the negative binomial
distribution probability mass function,
.. math::
F = \sum_{j=k + 1}^\infty {{n + j - 1}\choose{j}} p^n (1 - p)^j.
In a sequence of Bernoulli trials with individual success probabilities
`p`, this is the probability that more than `k` failures precede the nth
success.
Parameters
----------
k : array_like
The maximum number of allowed failures (nonnegative int).
n : array_like
The target number of successes (positive int).
p : array_like
Probability of success in a single event (float).
Returns
-------
F : ndarray
The probability of `k + 1` or more failures before `n` successes in a
sequence of events with individual success probability `p`.
Notes
-----
If floating point values are passed for `k` or `n`, they will be truncated
to integers.
The terms are not summed directly; instead the regularized incomplete beta
function is employed, according to the formula,
.. math::
\mathrm{nbdtrc}(k, n, p) = I_{1 - p}(k + 1, n).
Wrapper for the Cephes [1]_ routine `nbdtrc`.
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
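Examples
--------
As an illustrative check (not reference output), `nbdtr` and `nbdtrc` are
complementary probabilities:
>>> import numpy as np
>>> import scipy.special as sc
>>> k, n, p = 5, 3, 0.4
>>> np.allclose(sc.nbdtr(k, n, p) + sc.nbdtrc(k, n, p), 1.0)
True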
""")
add_newdoc("nbdtri",
"""
nbdtri(k, n, y)
Inverse of `nbdtr` vs `p`.
Returns the inverse with respect to the parameter `p` of
`y = nbdtr(k, n, p)`, the negative binomial cumulative distribution
function.
Parameters
----------
k : array_like
The maximum number of allowed failures (nonnegative int).
n : array_like
The target number of successes (positive int).
y : array_like
The probability of `k` or fewer failures before `n` successes (float).
Returns
-------
p : ndarray
Probability of success in a single event (float) such that
`nbdtr(k, n, p) = y`.
See also
--------
nbdtr : Cumulative distribution function of the negative binomial.
nbdtrik : Inverse with respect to `k` of `nbdtr(k, n, p)`.
nbdtrin : Inverse with respect to `n` of `nbdtr(k, n, p)`.
Notes
-----
Wrapper for the Cephes [1]_ routine `nbdtri`.
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
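Examples
--------
A round-trip sketch (illustrative, not reference output): inverting the
CDF recovers the original success probability:
>>> import numpy as np
>>> import scipy.special as sc
>>> k, n, p = 5, 3, 0.4
>>> y = sc.nbdtr(k, n, p)
>>> np.allclose(sc.nbdtri(k, n, y), p)
True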
""")
add_newdoc("nbdtrik",
r"""
nbdtrik(y, n, p)
Inverse of `nbdtr` vs `k`.
Returns the inverse with respect to the parameter `k` of
`y = nbdtr(k, n, p)`, the negative binomial cumulative distribution
function.
Parameters
----------
y : array_like
The probability of `k` or fewer failures before `n` successes (float).
n : array_like
The target number of successes (positive int).
p : array_like
Probability of success in a single event (float).
Returns
-------
k : ndarray
The maximum number of allowed failures such that `nbdtr(k, n, p) = y`.
See also
--------
nbdtr : Cumulative distribution function of the negative binomial.
nbdtri : Inverse with respect to `p` of `nbdtr(k, n, p)`.
nbdtrin : Inverse with respect to `n` of `nbdtr(k, n, p)`.
Notes
-----
Wrapper for the CDFLIB [1]_ Fortran routine `cdfnbn`.
Formula 26.5.26 of [2]_,
.. math::
\sum_{j=k + 1}^\infty {{n + j - 1}\choose{j}} p^n (1 - p)^j = I_{1 - p}(k + 1, n),
is used to reduce calculation of the cumulative distribution function to
that of a regularized incomplete beta :math:`I`.
Computation of `k` involves a search for a value that produces the desired
value of `y`. The search relies on the monotonicity of `y` with `k`.
References
----------
.. [1] Barry Brown, James Lovato, and Kathy Russell,
CDFLIB: Library of Fortran Routines for Cumulative Distribution
Functions, Inverses, and Other Parameters.
.. [2] Milton Abramowitz and Irene A. Stegun, eds.
Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables. New York: Dover, 1972.
""")
add_newdoc("nbdtrin",
r"""
nbdtrin(k, y, p)
Inverse of `nbdtr` vs `n`.
Returns the inverse with respect to the parameter `n` of
`y = nbdtr(k, n, p)`, the negative binomial cumulative distribution
function.
Parameters
----------
k : array_like
The maximum number of allowed failures (nonnegative int).
y : array_like
The probability of `k` or fewer failures before `n` successes (float).
p : array_like
Probability of success in a single event (float).
Returns
-------
n : ndarray
The number of successes `n` such that `nbdtr(k, n, p) = y`.
See also
--------
nbdtr : Cumulative distribution function of the negative binomial.
nbdtri : Inverse with respect to `p` of `nbdtr(k, n, p)`.
nbdtrik : Inverse with respect to `k` of `nbdtr(k, n, p)`.
Notes
-----
Wrapper for the CDFLIB [1]_ Fortran routine `cdfnbn`.
Formula 26.5.26 of [2]_,
.. math::
\sum_{j=k + 1}^\infty {{n + j - 1}\choose{j}} p^n (1 - p)^j = I_{1 - p}(k + 1, n),
is used to reduce calculation of the cumulative distribution function to
that of a regularized incomplete beta :math:`I`.
Computation of `n` involves a search for a value that produces the desired
value of `y`. The search relies on the monotonicity of `y` with `n`.
References
----------
.. [1] Barry Brown, James Lovato, and Kathy Russell,
CDFLIB: Library of Fortran Routines for Cumulative Distribution
Functions, Inverses, and Other Parameters.
.. [2] Milton Abramowitz and Irene A. Stegun, eds.
Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables. New York: Dover, 1972.
""")
add_newdoc("ncfdtr",
r"""
ncfdtr(dfn, dfd, nc, f)
Cumulative distribution function of the non-central F distribution.
The non-central F describes the distribution of,
.. math::
Z = \frac{X/d_n}{Y/d_d}
where :math:`X` and :math:`Y` are independently distributed, with
:math:`X` distributed non-central :math:`\chi^2` with noncentrality
parameter `nc` and :math:`d_n` degrees of freedom, and :math:`Y`
distributed :math:`\chi^2` with :math:`d_d` degrees of freedom.
Parameters
----------
dfn : array_like
Degrees of freedom of the numerator sum of squares. Range (0, inf).
dfd : array_like
Degrees of freedom of the denominator sum of squares. Range (0, inf).
nc : array_like
Noncentrality parameter. Should be in range (0, 1e4).
f : array_like
Quantiles, i.e. the upper limit of integration.
Returns
-------
cdf : float or ndarray
The calculated CDF. If all inputs are scalar, the return will be a
float. Otherwise it will be an array.
See Also
--------
ncfdtri : Quantile function; inverse of `ncfdtr` with respect to `f`.
ncfdtridfd : Inverse of `ncfdtr` with respect to `dfd`.
ncfdtridfn : Inverse of `ncfdtr` with respect to `dfn`.
ncfdtrinc : Inverse of `ncfdtr` with respect to `nc`.
Notes
-----
Wrapper for the CDFLIB [1]_ Fortran routine `cdffnc`.
The cumulative distribution function is computed using Formula 26.6.20 of
[2]_:
.. math::
F(d_n, d_d, n_c, f) = \sum_{j=0}^\infty e^{-n_c/2} \frac{(n_c/2)^j}{j!} I_{x}(\frac{d_n}{2} + j, \frac{d_d}{2}),
where :math:`I` is the regularized incomplete beta function, and
:math:`x = f d_n/(f d_n + d_d)`.
The computation time required for this routine is proportional to the
noncentrality parameter `nc`. Very large values of this parameter can
consume immense computer resources. This is why the search range is
bounded by 10,000.
References
----------
.. [1] Barry Brown, James Lovato, and Kathy Russell,
CDFLIB: Library of Fortran Routines for Cumulative Distribution
Functions, Inverses, and Other Parameters.
.. [2] Milton Abramowitz and Irene A. Stegun, eds.
Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables. New York: Dover, 1972.
Examples
--------
>>> from scipy import special
>>> from scipy import stats
>>> import matplotlib.pyplot as plt
Plot the CDF of the non-central F distribution, for nc=0. Compare with the
F-distribution from scipy.stats:
>>> x = np.linspace(-1, 8, num=500)
>>> dfn = 3
>>> dfd = 2
>>> ncf_stats = stats.f.cdf(x, dfn, dfd)
>>> ncf_special = special.ncfdtr(dfn, dfd, 0, x)
>>> fig = plt.figure()
>>> ax = fig.add_subplot(111)
>>> ax.plot(x, ncf_stats, 'b-', lw=3)
>>> ax.plot(x, ncf_special, 'r-')
>>> plt.show()
""")
add_newdoc("ncfdtri",
"""
ncfdtri(dfn, dfd, nc, p)
Inverse with respect to `f` of the CDF of the non-central F distribution.
See `ncfdtr` for more details.
Parameters
----------
dfn : array_like
Degrees of freedom of the numerator sum of squares. Range (0, inf).
dfd : array_like
Degrees of freedom of the denominator sum of squares. Range (0, inf).
nc : array_like
Noncentrality parameter. Should be in range (0, 1e4).
p : array_like
Value of the cumulative distribution function. Must be in the
range [0, 1].
Returns
-------
f : float
Quantiles, i.e., the upper limit of integration.
See Also
--------
ncfdtr : CDF of the non-central F distribution.
ncfdtridfd : Inverse of `ncfdtr` with respect to `dfd`.
ncfdtridfn : Inverse of `ncfdtr` with respect to `dfn`.
ncfdtrinc : Inverse of `ncfdtr` with respect to `nc`.
Examples
--------
>>> from scipy.special import ncfdtr, ncfdtri
Compute the CDF for several values of `f`:
>>> f = [0.5, 1, 1.5]
>>> p = ncfdtr(2, 3, 1.5, f)
>>> p
array([ 0.20782291, 0.36107392, 0.47345752])
Compute the inverse. We recover the values of `f`, as expected:
>>> ncfdtri(2, 3, 1.5, p)
array([ 0.5, 1. , 1.5])
""")
add_newdoc("ncfdtridfd",
"""
ncfdtridfd(dfn, p, nc, f)
Calculate degrees of freedom (denominator) for the noncentral F-distribution.
This is the inverse with respect to `dfd` of `ncfdtr`.
See `ncfdtr` for more details.
Parameters
----------
dfn : array_like
Degrees of freedom of the numerator sum of squares. Range (0, inf).
p : array_like
Value of the cumulative distribution function. Must be in the
range [0, 1].
nc : array_like
Noncentrality parameter. Should be in range (0, 1e4).
f : array_like
Quantiles, i.e., the upper limit of integration.
Returns
-------
dfd : float
Degrees of freedom of the denominator sum of squares.
See Also
--------
ncfdtr : CDF of the non-central F distribution.
ncfdtri : Quantile function; inverse of `ncfdtr` with respect to `f`.
ncfdtridfn : Inverse of `ncfdtr` with respect to `dfn`.
ncfdtrinc : Inverse of `ncfdtr` with respect to `nc`.
Notes
-----
The value of the cumulative noncentral F distribution is not necessarily
monotone in either degrees of freedom. There thus may be two values that
provide a given CDF value. This routine assumes monotonicity and will
find an arbitrary one of the two values.
Examples
--------
>>> from scipy.special import ncfdtr, ncfdtridfd
Compute the CDF for several values of `dfd`:
>>> dfd = [1, 2, 3]
>>> p = ncfdtr(2, dfd, 0.25, 15)
>>> p
array([ 0.8097138 , 0.93020416, 0.96787852])
Compute the inverse. We recover the values of `dfd`, as expected:
>>> ncfdtridfd(2, p, 0.25, 15)
array([ 1., 2., 3.])
""")
add_newdoc("ncfdtridfn",
"""
ncfdtridfn(p, dfd, nc, f)
Calculate degrees of freedom (numerator) for the noncentral F-distribution.
This is the inverse with respect to `dfn` of `ncfdtr`.
See `ncfdtr` for more details.
Parameters
----------
p : array_like
Value of the cumulative distribution function. Must be in the
range [0, 1].
dfd : array_like
Degrees of freedom of the denominator sum of squares. Range (0, inf).
nc : array_like
Noncentrality parameter. Should be in range (0, 1e4).
f : float
Quantiles, i.e., the upper limit of integration.
Returns
-------
dfn : float
Degrees of freedom of the numerator sum of squares.
See Also
--------
ncfdtr : CDF of the non-central F distribution.
ncfdtri : Quantile function; inverse of `ncfdtr` with respect to `f`.
ncfdtridfd : Inverse of `ncfdtr` with respect to `dfd`.
ncfdtrinc : Inverse of `ncfdtr` with respect to `nc`.
Notes
-----
The value of the cumulative noncentral F distribution is not necessarily
monotone in either degrees of freedom. There thus may be two values that
provide a given CDF value. This routine assumes monotonicity and will
find an arbitrary one of the two values.
Examples
--------
>>> from scipy.special import ncfdtr, ncfdtridfn
Compute the CDF for several values of `dfn`:
>>> dfn = [1, 2, 3]
>>> p = ncfdtr(dfn, 2, 0.25, 15)
>>> p
array([ 0.92562363, 0.93020416, 0.93188394])
Compute the inverse. We recover the values of `dfn`, as expected:
>>> ncfdtridfn(p, 2, 0.25, 15)
array([ 1., 2., 3.])
""")
add_newdoc("ncfdtrinc",
"""
ncfdtrinc(dfn, dfd, p, f)
Calculate non-centrality parameter for non-central F distribution.
This is the inverse with respect to `nc` of `ncfdtr`.
See `ncfdtr` for more details.
Parameters
----------
dfn : array_like
Degrees of freedom of the numerator sum of squares. Range (0, inf).
dfd : array_like
Degrees of freedom of the denominator sum of squares. Range (0, inf).
p : array_like
Value of the cumulative distribution function. Must be in the
range [0, 1].
f : array_like
Quantiles, i.e., the upper limit of integration.
Returns
-------
nc : float
Noncentrality parameter.
See Also
--------
ncfdtr : CDF of the non-central F distribution.
ncfdtri : Quantile function; inverse of `ncfdtr` with respect to `f`.
ncfdtridfd : Inverse of `ncfdtr` with respect to `dfd`.
ncfdtridfn : Inverse of `ncfdtr` with respect to `dfn`.
Examples
--------
>>> from scipy.special import ncfdtr, ncfdtrinc
Compute the CDF for several values of `nc`:
>>> nc = [0.5, 1.5, 2.0]
>>> p = ncfdtr(2, 3, nc, 15)
>>> p
array([ 0.96309246, 0.94327955, 0.93304098])
Compute the inverse. We recover the values of `nc`, as expected:
>>> ncfdtrinc(2, 3, p, 15)
array([ 0.5, 1.5, 2. ])
""")
add_newdoc("nctdtr",
"""
nctdtr(df, nc, t)
Cumulative distribution function of the non-central `t` distribution.
Parameters
----------
df : array_like
Degrees of freedom of the distribution. Should be in range (0, inf).
nc : array_like
Noncentrality parameter. Should be in range (-1e6, 1e6).
t : array_like
Quantiles, i.e., the upper limit of integration.
Returns
-------
cdf : float or ndarray
The calculated CDF. If all inputs are scalar, the return will be a
float. Otherwise, it will be an array.
See Also
--------
nctdtrit : Inverse CDF (iCDF) of the non-central t distribution.
nctdtridf : Calculate degrees of freedom, given CDF and iCDF values.
nctdtrinc : Calculate non-centrality parameter, given CDF iCDF values.
Examples
--------
>>> from scipy import special
>>> from scipy import stats
>>> import matplotlib.pyplot as plt
Plot the CDF of the non-central t distribution, for nc=0. Compare with the
t-distribution from scipy.stats:
>>> x = np.linspace(-5, 5, num=500)
>>> df = 3
>>> nct_stats = stats.t.cdf(x, df)
>>> nct_special = special.nctdtr(df, 0, x)
>>> fig = plt.figure()
>>> ax = fig.add_subplot(111)
>>> ax.plot(x, nct_stats, 'b-', lw=3)
>>> ax.plot(x, nct_special, 'r-')
>>> plt.show()
""")
add_newdoc("nctdtridf",
"""
nctdtridf(p, nc, t)
Calculate degrees of freedom for non-central t distribution.
See `nctdtr` for more details.
Parameters
----------
p : array_like
CDF values, in range (0, 1].
nc : array_like
Noncentrality parameter. Should be in range (-1e6, 1e6).
t : array_like
Quantiles, i.e., the upper limit of integration.
""")
add_newdoc("nctdtrinc",
"""
nctdtrinc(df, p, t)
Calculate non-centrality parameter for non-central t distribution.
See `nctdtr` for more details.
Parameters
----------
df : array_like
Degrees of freedom of the distribution. Should be in range (0, inf).
p : array_like
CDF values, in range (0, 1].
t : array_like
Quantiles, i.e., the upper limit of integration.
""")
add_newdoc("nctdtrit",
"""
nctdtrit(df, nc, p)
Inverse cumulative distribution function of the non-central t distribution.
See `nctdtr` for more details.
Parameters
----------
df : array_like
Degrees of freedom of the distribution. Should be in range (0, inf).
nc : array_like
Noncentrality parameter. Should be in range (-1e6, 1e6).
p : array_like
CDF values, in range (0, 1].
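Examples
--------
A round-trip sketch (illustrative, not reference output): the inverse CDF
recovers the quantile that was fed to `nctdtr`:
>>> import numpy as np
>>> from scipy.special import nctdtr, nctdtrit
>>> df, nc, t = 10, 1.0, 1.5
>>> p = nctdtr(df, nc, t)
>>> np.allclose(nctdtrit(df, nc, p), t)
True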
""")
add_newdoc("ndtr",
r"""
ndtr(x)
Gaussian cumulative distribution function.
Returns the area under the standard Gaussian probability
density function, integrated from minus infinity to `x`
.. math::
\frac{1}{\sqrt{2\pi}} \int_{-\infty}^x \exp(-t^2/2) dt
Parameters
----------
x : array_like, real or complex
Argument
Returns
-------
ndarray
The value of the normal CDF evaluated at `x`
See Also
--------
erf
erfc
scipy.stats.norm
log_ndtr
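Examples
--------
Two quick illustrative checks (not reference output): the CDF is 0.5 at
the origin and satisfies ``ndtr(-x) = 1 - ndtr(x)``:
>>> import numpy as np
>>> from scipy.special import ndtr
>>> ndtr(0)
0.5
>>> np.allclose(ndtr(1.0) + ndtr(-1.0), 1.0)
True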
""")
add_newdoc("nrdtrimn",
"""
nrdtrimn(p, x, std)
Calculate mean of normal distribution given other params.
Parameters
----------
p : array_like
CDF values, in range (0, 1].
x : array_like
Quantiles, i.e. the upper limit of integration.
std : array_like
Standard deviation.
Returns
-------
mn : float or ndarray
The mean of the normal distribution.
See Also
--------
nrdtrisd, ndtr
""")
add_newdoc("nrdtrisd",
"""
nrdtrisd(p, x, mn)
Calculate standard deviation of normal distribution given other params.
Parameters
----------
p : array_like
CDF values, in range (0, 1].
x : array_like
Quantiles, i.e. the upper limit of integration.
mn : float or ndarray
The mean of the normal distribution.
Returns
-------
std : array_like
Standard deviation.
See Also
--------
ndtr
""")
add_newdoc("log_ndtr",
"""
log_ndtr(x)
Logarithm of Gaussian cumulative distribution function.
Returns the log of the area under the standard Gaussian probability
density function, integrated from minus infinity to `x`::
log(1/sqrt(2*pi) * integral(exp(-t**2 / 2), t=-inf..x))
Parameters
----------
x : array_like, real or complex
Argument
Returns
-------
ndarray
The value of the log of the normal CDF evaluated at `x`
See Also
--------
erf
erfc
scipy.stats.norm
ndtr
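Examples
--------
For moderate arguments this agrees with taking the logarithm of `ndtr`
directly (an illustrative check, not reference output); the advantage of
`log_ndtr` is accuracy for large negative `x`, where ``ndtr(x)`` underflows:
>>> import numpy as np
>>> from scipy.special import log_ndtr, ndtr
>>> np.allclose(log_ndtr(1.5), np.log(ndtr(1.5)))
True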
""")
add_newdoc("ndtri",
"""
ndtri(y)
Inverse of `ndtr` vs x
Returns the argument x for which the area under the Gaussian
probability density function (integrated from minus infinity to `x`)
is equal to y.
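Examples
--------
An illustrative check (not reference output): the median of the standard
normal maps back to 0, and `ndtr` inverts `ndtri`:
>>> import numpy as np
>>> from scipy.special import ndtr, ndtri
>>> ndtri(0.5)
0.0
>>> np.allclose(ndtr(ndtri(0.25)), 0.25)
True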
""")
add_newdoc("obl_ang1",
"""
obl_ang1(m, n, c, x)
Oblate spheroidal angular function of the first kind and its derivative
Computes the oblate spheroidal angular function of the first kind
and its derivative (with respect to `x`) for mode parameters m>=0
and n>=m, spheroidal parameter `c` and ``|x| < 1.0``.
Returns
-------
s
Value of the function
sp
Value of the derivative vs x
""")
add_newdoc("obl_ang1_cv",
"""
obl_ang1_cv(m, n, c, cv, x)
Oblate spheroidal angular function obl_ang1 for precomputed characteristic value
Computes the oblate spheroidal angular function of the first kind
and its derivative (with respect to `x`) for mode parameters m>=0
and n>=m, spheroidal parameter `c` and ``|x| < 1.0``. Requires
pre-computed characteristic value.
Returns
-------
s
Value of the function
sp
Value of the derivative vs x
""")
add_newdoc("obl_cv",
"""
obl_cv(m, n, c)
Characteristic value of oblate spheroidal function
Computes the characteristic value of oblate spheroidal wave
functions of order `m`, `n` (n>=m) and spheroidal parameter `c`.
""")
add_newdoc("obl_rad1",
"""
obl_rad1(m, n, c, x)
Oblate spheroidal radial function of the first kind and its derivative
Computes the oblate spheroidal radial function of the first kind
and its derivative (with respect to `x`) for mode parameters m>=0
and n>=m, spheroidal parameter `c` and ``|x| < 1.0``.
Returns
-------
s
Value of the function
sp
Value of the derivative vs x
""")
add_newdoc("obl_rad1_cv",
"""
obl_rad1_cv(m, n, c, cv, x)
Oblate spheroidal radial function obl_rad1 for precomputed characteristic value
Computes the oblate spheroidal radial function of the first kind
and its derivative (with respect to `x`) for mode parameters m>=0
and n>=m, spheroidal parameter `c` and ``|x| < 1.0``. Requires
pre-computed characteristic value.
Returns
-------
s
Value of the function
sp
Value of the derivative vs x
""")
add_newdoc("obl_rad2",
"""
obl_rad2(m, n, c, x)
Oblate spheroidal radial function of the second kind and its derivative.
Computes the oblate spheroidal radial function of the second kind
and its derivative (with respect to `x`) for mode parameters m>=0
and n>=m, spheroidal parameter `c` and ``|x| < 1.0``.
Returns
-------
s
Value of the function
sp
Value of the derivative vs x
""")
add_newdoc("obl_rad2_cv",
"""
obl_rad2_cv(m, n, c, cv, x)
Oblate spheroidal radial function obl_rad2 for precomputed characteristic value
Computes the oblate spheroidal radial function of the second kind
and its derivative (with respect to `x`) for mode parameters m>=0
and n>=m, spheroidal parameter `c` and ``|x| < 1.0``. Requires
pre-computed characteristic value.
Returns
-------
s
Value of the function
sp
Value of the derivative vs x
""")
add_newdoc("pbdv",
"""
pbdv(v, x)
Parabolic cylinder function D
Returns (d, dp) the parabolic cylinder function Dv(x) in d and the
derivative, Dv'(x) in dp.
Returns
-------
d
Value of the function
dp
Value of the derivative vs x
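Examples
--------
A small sanity check (illustrative, not reference output) against the
elementary special case :math:`D_0(x) = e^{-x^2/4}`:
>>> import numpy as np
>>> from scipy.special import pbdv
>>> d, dp = pbdv(0, 2.0)
>>> np.allclose(d, np.exp(-2.0**2 / 4))
True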
""")
add_newdoc("pbvv",
"""
pbvv(v, x)
Parabolic cylinder function V
Returns the parabolic cylinder function Vv(x) in v and the
derivative, Vv'(x) in vp.
Returns
-------
v
Value of the function
vp
Value of the derivative vs x
""")
add_newdoc("pbwa",
r"""
pbwa(a, x)
Parabolic cylinder function W.
The function is a particular solution to the differential equation
.. math::
y'' + \left(\frac{1}{4}x^2 - a\right)y = 0,
for a full definition see section 12.14 in [1]_.
Parameters
----------
a : array_like
Real parameter
x : array_like
Real argument
Returns
-------
w : scalar or ndarray
Value of the function
wp : scalar or ndarray
Value of the derivative in x
Notes
-----
The function is a wrapper for a Fortran routine by Zhang and Jin
[2]_. The implementation is accurate only for ``|a|, |x| < 5`` and
returns NaN outside that range.
References
----------
.. [1] Digital Library of Mathematical Functions, 12.14.
https://dlmf.nist.gov/12.14
.. [2] Zhang, Shanjie and Jin, Jianming. "Computation of Special
Functions", John Wiley and Sons, 1996.
https://people.sc.fsu.edu/~jburkardt/f_src/special_functions/special_functions.html
""")
add_newdoc("pdtr",
r"""
pdtr(k, m, out=None)
Poisson cumulative distribution function.
Defined as the probability that a Poisson-distributed random
variable with event rate :math:`m` is less than or equal to
:math:`k`. More concretely, this works out to be [1]_
.. math::
\exp(-m) \sum_{j = 0}^{\lfloor{k}\rfloor} \frac{m^j}{j!}.
Parameters
----------
k : array_like
Nonnegative real argument
m : array_like
Nonnegative real shape parameter
out : ndarray
Optional output array for the function results
See Also
--------
pdtrc : Poisson survival function
pdtrik : inverse of `pdtr` with respect to `k`
pdtri : inverse of `pdtr` with respect to `m`
Returns
-------
scalar or ndarray
Values of the Poisson cumulative distribution function
References
----------
.. [1] https://en.wikipedia.org/wiki/Poisson_distribution
Examples
--------
>>> import scipy.special as sc
It is a cumulative distribution function, so it converges to 1
monotonically as `k` goes to infinity.
>>> sc.pdtr([1, 10, 100, np.inf], 1)
array([0.73575888, 0.99999999, 1. , 1. ])
It is discontinuous at integers and constant between integers.
>>> sc.pdtr([1, 1.5, 1.9, 2], 1)
array([0.73575888, 0.73575888, 0.73575888, 0.9196986 ])
""")
add_newdoc("pdtrc",
"""
pdtrc(k, m)
Poisson survival function
Returns the sum of the terms from k+1 to infinity of the Poisson
distribution: sum(exp(-m) * m**j / j!, j=k+1..inf) = gammainc(
k+1, m). Arguments must both be non-negative doubles.
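Examples
--------
An illustrative check (not reference output) of the incomplete-gamma
identity stated above, and of complementarity with `pdtr`:
>>> import numpy as np
>>> import scipy.special as sc
>>> k, m = 2, 1.5
>>> np.allclose(sc.pdtrc(k, m), sc.gammainc(k + 1, m))
True
>>> np.allclose(sc.pdtr(k, m) + sc.pdtrc(k, m), 1.0)
True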
""")
add_newdoc("pdtri",
"""
pdtri(k, y)
Inverse to `pdtr` vs m
Returns the Poisson variable `m` such that the sum from 0 to `k` of
the Poisson density is equal to the given probability `y`:
calculated by gammaincinv(k+1, y). `k` must be a nonnegative
integer and `y` between 0 and 1.
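Examples
--------
A round-trip sketch (illustrative, not reference output): evaluating
`pdtr` at the returned rate recovers the probability:
>>> import numpy as np
>>> import scipy.special as sc
>>> m = sc.pdtri(2, 0.7)
>>> np.allclose(sc.pdtr(2, m), 0.7)
True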
""")
add_newdoc("pdtrik",
"""
pdtrik(p, m)
Inverse to `pdtr` vs k
Returns the quantile k such that ``pdtr(k, m) = p``
""")
add_newdoc("poch",
r"""
poch(z, m)
Pochhammer symbol.
The Pochhammer symbol (rising factorial) is defined as
.. math::
(z)_m = \frac{\Gamma(z + m)}{\Gamma(z)}
For positive integer `m` it reads
.. math::
(z)_m = z (z + 1) ... (z + m - 1)
See [dlmf]_ for more details.
Parameters
----------
z, m : array_like
Real-valued arguments.
Returns
-------
scalar or ndarray
The value of the function.
References
----------
.. [dlmf] Nist, Digital Library of Mathematical Functions
https://dlmf.nist.gov/5.2#iii
Examples
--------
>>> import scipy.special as sc
It is 1 when m is 0.
>>> sc.poch([1, 2, 3, 4], 0)
array([1., 1., 1., 1.])
For z equal to 1 it reduces to the factorial function.
>>> sc.poch(1, 5)
120.0
>>> 1 * 2 * 3 * 4 * 5
120
It can be expressed in terms of the gamma function.
>>> z, m = 3.7, 2.1
>>> sc.poch(z, m)
20.529581933776953
>>> sc.gamma(z + m) / sc.gamma(z)
20.52958193377696
""")
add_newdoc("pro_ang1",
"""
pro_ang1(m, n, c, x)
Prolate spheroidal angular function of the first kind and its derivative
Computes the prolate spheroidal angular function of the first kind
and its derivative (with respect to `x`) for mode parameters m>=0
and n>=m, spheroidal parameter `c` and ``|x| < 1.0``.
Returns
-------
s
Value of the function
sp
Value of the derivative vs x
""")
add_newdoc("pro_ang1_cv",
"""
pro_ang1_cv(m, n, c, cv, x)
Prolate spheroidal angular function pro_ang1 for precomputed characteristic value
Computes the prolate spheroidal angular function of the first kind
and its derivative (with respect to `x`) for mode parameters m>=0
and n>=m, spheroidal parameter `c` and ``|x| < 1.0``. Requires
pre-computed characteristic value.
Returns
-------
s
Value of the function
sp
Value of the derivative vs x
""")
add_newdoc("pro_cv",
"""
pro_cv(m, n, c)
Characteristic value of prolate spheroidal function
Computes the characteristic value of prolate spheroidal wave
functions of order `m`, `n` (n>=m) and spheroidal parameter `c`.
""")
add_newdoc("pro_rad1",
"""
pro_rad1(m, n, c, x)
Prolate spheroidal radial function of the first kind and its derivative
Computes the prolate spheroidal radial function of the first kind
and its derivative (with respect to `x`) for mode parameters m>=0
and n>=m, spheroidal parameter `c` and ``|x| < 1.0``.
Returns
-------
s
Value of the function
sp
Value of the derivative vs x
""")
add_newdoc("pro_rad1_cv",
"""
pro_rad1_cv(m, n, c, cv, x)
Prolate spheroidal radial function pro_rad1 for precomputed characteristic value
Computes the prolate spheroidal radial function of the first kind
and its derivative (with respect to `x`) for mode parameters m>=0
and n>=m, spheroidal parameter `c` and ``|x| < 1.0``. Requires
pre-computed characteristic value.
Returns
-------
s
Value of the function
sp
Value of the derivative vs x
""")
add_newdoc("pro_rad2",
"""
pro_rad2(m, n, c, x)
Prolate spheroidal radial function of the second kind and its derivative
Computes the prolate spheroidal radial function of the second kind
and its derivative (with respect to `x`) for mode parameters m>=0
and n>=m, spheroidal parameter `c` and ``|x| < 1.0``.
Returns
-------
s
Value of the function
sp
Value of the derivative vs x
""")
add_newdoc("pro_rad2_cv",
"""
pro_rad2_cv(m, n, c, cv, x)
Prolate spheroidal radial function pro_rad2 for precomputed characteristic value
Computes the prolate spheroidal radial function of the second kind
and its derivative (with respect to `x`) for mode parameters m>=0
and n>=m, spheroidal parameter `c` and ``|x| < 1.0``. Requires
pre-computed characteristic value.
Returns
-------
s
Value of the function
sp
Value of the derivative vs x
""")
add_newdoc("pseudo_huber",
r"""
pseudo_huber(delta, r)
Pseudo-Huber loss function.
.. math:: \mathrm{pseudo\_huber}(\delta, r) = \delta^2 \left( \sqrt{ 1 + \left( \frac{r}{\delta} \right)^2 } - 1 \right)
Parameters
----------
delta : ndarray
Input array, indicating the soft quadratic vs. linear loss changepoint.
r : ndarray
Input array, possibly representing residuals.
Returns
-------
res : ndarray
The computed Pseudo-Huber loss function values.
Notes
-----
This function is convex in :math:`r`.
.. versionadded:: 0.15.0
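Examples
--------
An illustrative check (not reference output) against a direct evaluation
of the defining formula above:
>>> import numpy as np
>>> from scipy.special import pseudo_huber
>>> delta, r = 2.0, 3.0
>>> np.allclose(pseudo_huber(delta, r), delta**2 * (np.sqrt(1 + (r / delta)**2) - 1))
True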
""")
add_newdoc("psi",
"""
psi(z, out=None)
The digamma function.
The logarithmic derivative of the gamma function evaluated at ``z``.
Parameters
----------
z : array_like
Real or complex argument.
out : ndarray, optional
Array for the computed values of ``psi``.
Returns
-------
digamma : ndarray
Computed values of ``psi``.
Notes
-----
For large values not close to the negative real axis, ``psi`` is
computed using the asymptotic series (5.11.2) from [1]_. For small
arguments not close to the negative real axis, the recurrence
relation (5.5.2) from [1]_ is used until the argument is large
enough to use the asymptotic series. For values close to the
negative real axis, the reflection formula (5.5.4) from [1]_ is
used first. Note that ``psi`` has a family of zeros on the
negative real axis which occur between the poles at nonpositive
integers. Around the zeros the reflection formula suffers from
cancellation and the implementation loses precision. The sole
positive zero and the first negative zero, however, are handled
separately by precomputing series expansions using [2]_, so the
function should maintain full accuracy around the origin.
References
----------
.. [1] NIST Digital Library of Mathematical Functions
https://dlmf.nist.gov/5
.. [2] Fredrik Johansson and others.
"mpmath: a Python library for arbitrary-precision floating-point arithmetic"
(Version 0.19) http://mpmath.org/
Examples
--------
>>> from scipy.special import psi
>>> z = 3 + 4j
>>> psi(z)
(1.55035981733341+1.0105022091860445j)
Verify psi(z) = psi(z + 1) - 1/z:
>>> psi(z + 1) - 1/z
(1.55035981733341+1.0105022091860445j)
""")
add_newdoc("radian",
"""
radian(d, m, s, out=None)
Convert from degrees to radians.
Returns the angle given in (d)egrees, (m)inutes, and (s)econds in
radians.
Parameters
----------
d : array_like
Degrees, can be real-valued.
m : array_like
Minutes, can be real-valued.
s : array_like
Seconds, can be real-valued.
out : ndarray, optional
Optional output array for the function results.
Returns
-------
scalar or ndarray
Values of the inputs in radians.
Examples
--------
>>> import scipy.special as sc
There are many ways to specify an angle.
>>> sc.radian(90, 0, 0)
1.5707963267948966
>>> sc.radian(0, 60 * 90, 0)
1.5707963267948966
>>> sc.radian(0, 0, 60**2 * 90)
1.5707963267948966
The inputs can be real-valued.
>>> sc.radian(1.5, 0, 0)
0.02617993877991494
>>> sc.radian(1, 30, 0)
0.02617993877991494
""")
add_newdoc("rel_entr",
r"""
rel_entr(x, y, out=None)
Elementwise function for computing relative entropy.
.. math::
\mathrm{rel\_entr}(x, y) =
\begin{cases}
x \log(x / y) & x > 0, y > 0 \\
0 & x = 0, y \ge 0 \\
\infty & \text{otherwise}
\end{cases}
Parameters
----------
x, y : array_like
Input arrays
out : ndarray, optional
Optional output array for the function results
Returns
-------
scalar or ndarray
Relative entropy of the inputs
See Also
--------
entr, kl_div
Notes
-----
.. versionadded:: 0.15.0
This function is jointly convex in x and y.
The origin of this function is in convex programming; see
[1]_. Given two discrete probability distributions :math:`p_1,
\ldots, p_n` and :math:`q_1, \ldots, q_n`, to get the relative
entropy of statistics compute the sum
.. math::
\sum_{i = 1}^n \mathrm{rel\_entr}(p_i, q_i).
See [2]_ for details.
References
----------
.. [1] Grant, Boyd, and Ye, "CVX: Matlab Software for Disciplined Convex
Programming", http://cvxr.com/cvx/
.. [2] Kullback-Leibler divergence,
https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence
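Examples
--------
An illustrative sketch (not reference output): summing the elementwise
values over two discrete distributions gives the Kullback-Leibler
divergence described above:
>>> import numpy as np
>>> from scipy.special import rel_entr
>>> p = np.array([0.5, 0.5])
>>> q = np.array([0.9, 0.1])
>>> kl = np.sum(rel_entr(p, q))
>>> np.allclose(kl, 0.5 * np.log(0.5 / 0.9) + 0.5 * np.log(0.5 / 0.1))
True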
""")
add_newdoc("rgamma",
r"""
rgamma(z, out=None)
Reciprocal of the gamma function.
Defined as :math:`1 / \Gamma(z)`, where :math:`\Gamma` is the
gamma function. For more on the gamma function see `gamma`.
Parameters
----------
z : array_like
Real or complex valued input
out : ndarray, optional
Optional output array for the function results
Returns
-------
scalar or ndarray
Function results
Notes
-----
The gamma function has no zeros and has simple poles at
nonpositive integers, so `rgamma` is an entire function with zeros
at the nonpositive integers. See the discussion in [dlmf]_ for
more details.
See Also
--------
gamma, gammaln, loggamma
References
----------
.. [dlmf] Nist, Digital Library of Mathematical functions,
https://dlmf.nist.gov/5.2#i
Examples
--------
>>> import scipy.special as sc
It is the reciprocal of the gamma function.
>>> sc.rgamma([1, 2, 3, 4])
array([1. , 1. , 0.5 , 0.16666667])
>>> 1 / sc.gamma([1, 2, 3, 4])
array([1. , 1. , 0.5 , 0.16666667])
It is zero at nonpositive integers.
>>> sc.rgamma([0, -1, -2, -3])
array([0., 0., 0., 0.])
It rapidly underflows to zero along the positive real axis.
>>> sc.rgamma([10, 100, 179])
array([2.75573192e-006, 1.07151029e-156, 0.00000000e+000])
""")
add_newdoc("round",
"""
round(x, out=None)
Round to the nearest integer.
Returns the nearest integer to `x`. If `x` ends in 0.5 exactly,
the nearest even integer is chosen.
Parameters
----------
x : array_like
Real valued input.
out : ndarray, optional
Optional output array for the function results.
Returns
-------
scalar or ndarray
The nearest integers to the elements of `x`. The result is of
floating type, not integer type.
Examples
--------
>>> import scipy.special as sc
It rounds to even.
>>> sc.round([0.5, 1.5])
array([0., 2.])
""")
add_newdoc("shichi",
r"""
shichi(x, out=None)
Hyperbolic sine and cosine integrals.
The hyperbolic sine integral is
.. math::
\int_0^x \frac{\sinh{t}}{t}dt
and the hyperbolic cosine integral is
.. math::
\gamma + \log(x) + \int_0^x \frac{\cosh{t} - 1}{t} dt
where :math:`\gamma` is Euler's constant and :math:`\log` is the
principal branch of the logarithm.
Parameters
----------
x : array_like
Real or complex points at which to compute the hyperbolic sine
and cosine integrals.
Returns
-------
si : ndarray
Hyperbolic sine integral at ``x``
ci : ndarray
Hyperbolic cosine integral at ``x``
Notes
-----
For real arguments with ``x < 0``, ``chi`` is the real part of the
hyperbolic cosine integral. For such points ``chi(x)`` and ``chi(x
+ 0j)`` differ by the additive constant ``1j*pi``.
For real arguments the function is computed by calling Cephes'
[1]_ *shichi* routine. For complex arguments the algorithm is based
on Mpmath's [2]_ *shi* and *chi* routines.
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
.. [2] Fredrik Johansson and others.
"mpmath: a Python library for arbitrary-precision floating-point arithmetic"
(Version 0.19) http://mpmath.org/
""")
add_newdoc("sici",
r"""
sici(x, out=None)
Sine and cosine integrals.
The sine integral is
.. math::
\int_0^x \frac{\sin{t}}{t}dt
and the cosine integral is
.. math::
\gamma + \log(x) + \int_0^x \frac{\cos{t} - 1}{t}dt
where :math:`\gamma` is Euler's constant and :math:`\log` is the
principal branch of the logarithm.
Parameters
----------
x : array_like
Real or complex points at which to compute the sine and cosine
integrals.
Returns
-------
si : ndarray
Sine integral at ``x``
ci : ndarray
Cosine integral at ``x``
Notes
-----
For real arguments with ``x < 0``, ``ci`` is the real part of the
cosine integral. For such points ``ci(x)`` and ``ci(x + 0j)``
differ by the additive constant ``1j*pi``.
For real arguments the function is computed by calling Cephes'
[1]_ *sici* routine. For complex arguments the algorithm is based
on Mpmath's [2]_ *si* and *ci* routines.
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
.. [2] Fredrik Johansson and others.
"mpmath: a Python library for arbitrary-precision floating-point arithmetic"
(Version 0.19) http://mpmath.org/
""")
add_newdoc("sindg",
"""
sindg(x, out=None)
Sine of the angle `x` given in degrees.
Parameters
----------
x : array_like
Angle, given in degrees.
out : ndarray, optional
Optional output array for the function results.
Returns
-------
scalar or ndarray
Sine at the input.
See Also
--------
cosdg, tandg, cotdg
Examples
--------
>>> import scipy.special as sc
It is more accurate than using sine directly.
>>> x = 180 * np.arange(3)
>>> sc.sindg(x)
array([ 0., -0., 0.])
>>> np.sin(x * np.pi / 180)
array([ 0.0000000e+00, 1.2246468e-16, -2.4492936e-16])
""")
add_newdoc("smirnov",
r"""
smirnov(n, d)
Kolmogorov-Smirnov complementary cumulative distribution function
Returns the exact Kolmogorov-Smirnov complementary cumulative
distribution function (aka the Survival Function) of Dn+ (or Dn-)
for a one-sided test of equality between an empirical and a
theoretical distribution. It is equal to the probability that the
maximum difference between a theoretical distribution and an empirical
one based on `n` samples is greater than d.
Parameters
----------
n : int
Number of samples
d : float array_like
Deviation between the Empirical CDF (ECDF) and the target CDF.
Returns
-------
float
The value(s) of smirnov(n, d), Prob(Dn+ >= d) (Also Prob(Dn- >= d))
Notes
-----
`smirnov` is used by `stats.kstest` in the application of the
Kolmogorov-Smirnov Goodness of Fit test. For historical reasons this
function is exposed in `scipy.special`, but the recommended way to achieve
the most accurate CDF/SF/PDF/PPF/ISF computations is to use the
`stats.ksone` distribution.
See Also
--------
smirnovi : The Inverse Survival Function for the distribution
scipy.stats.ksone : Provides the functionality as a continuous distribution
kolmogorov, kolmogi : Functions for the two-sided distribution
Examples
--------
>>> from scipy.special import smirnov
Show the probability of a gap at least as big as 0, 0.5 and 1.0 for a sample of size 5
>>> smirnov(5, [0, 0.5, 1.0])
array([ 1. , 0.056, 0. ])
Compare a sample of size 5 drawn from a source N(0.5, 1) distribution against
a target N(0, 1) CDF.
>>> from scipy.stats import norm
>>> rng = np.random.default_rng()
>>> n = 5
>>> gendist = norm(0.5, 1) # Normal distribution, mean 0.5, stddev 1
>>> x = np.sort(gendist.rvs(size=n, random_state=rng))
>>> x
array([-1.3922078 , -0.13526532, 0.1371477 , 0.18981686, 1.81948167])
>>> target = norm(0, 1)
>>> cdfs = target.cdf(x)
>>> cdfs
array([0.08192974, 0.44620105, 0.55454297, 0.57527368, 0.96558101])
>>> # Construct the Empirical CDF and the K-S statistics (Dn+, Dn-, Dn)
>>> ecdfs = np.arange(n+1, dtype=float)/n
>>> cols = np.column_stack([x, ecdfs[1:], cdfs, cdfs - ecdfs[:n], ecdfs[1:] - cdfs])
>>> np.set_printoptions(precision=3)
>>> cols
array([[-1.392, 0.2 , 0.082, 0.082, 0.118],
[-0.135, 0.4 , 0.446, 0.246, -0.046],
[ 0.137, 0.6 , 0.555, 0.155, 0.045],
[ 0.19 , 0.8 , 0.575, -0.025, 0.225],
[ 1.819, 1. , 0.966, 0.166, 0.034]])
>>> gaps = cols[:, -2:]
>>> Dnpm = np.max(gaps, axis=0)
>>> print('Dn-=%f, Dn+=%f' % (Dnpm[0], Dnpm[1]))
Dn-=0.246201, Dn+=0.224726
>>> probs = smirnov(n, Dnpm)
>>> print(chr(10).join(['For a sample of size %d drawn from a N(0, 1) distribution:' % n,
... ' Smirnov n=%d: Prob(Dn- >= %f) = %.4f' % (n, Dnpm[0], probs[0]),
... ' Smirnov n=%d: Prob(Dn+ >= %f) = %.4f' % (n, Dnpm[1], probs[1])]))
For a sample of size 5 drawn from a N(0, 1) distribution:
Smirnov n=5: Prob(Dn- >= 0.246201) = 0.4713
Smirnov n=5: Prob(Dn+ >= 0.224726) = 0.5243
Plot the Empirical CDF against the target N(0, 1) CDF
>>> import matplotlib.pyplot as plt
>>> plt.step(np.concatenate([[-3], x]), ecdfs, where='post', label='Empirical CDF')
>>> x3 = np.linspace(-3, 3, 100)
>>> plt.plot(x3, target.cdf(x3), label='CDF for N(0, 1)')
>>> plt.ylim([0, 1]); plt.grid(True); plt.legend();
>>> # Add vertical lines marking Dn+ and Dn-
>>> iminus, iplus = np.argmax(gaps, axis=0)
>>> plt.vlines([x[iminus]], ecdfs[iminus], cdfs[iminus], color='r', linestyle='dashed', lw=4)
>>> plt.vlines([x[iplus]], cdfs[iplus], ecdfs[iplus+1], color='m', linestyle='dashed', lw=4)
>>> plt.show()
""")
add_newdoc("smirnovi",
"""
smirnovi(n, p)
Inverse to `smirnov`
Returns `d` such that ``smirnov(n, d) == p``, the critical value
corresponding to `p`.
Parameters
----------
n : int
Number of samples
p : float array_like
Probability
Returns
-------
float
The value(s) of smirnovi(n, p), the critical values.
Notes
-----
`smirnov` is used by `stats.kstest` in the application of the
Kolmogorov-Smirnov Goodness of Fit test. For historical reasons this
function is exposed in `scipy.special`, but the recommended way to achieve
the most accurate CDF/SF/PDF/PPF/ISF computations is to use the
`stats.ksone` distribution.
See Also
--------
smirnov : The Survival Function (SF) for the distribution
scipy.stats.ksone : Provides the functionality as a continuous distribution
kolmogorov, kolmogi, scipy.stats.kstwobign : Functions for the two-sided distribution
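Examples
--------
A round-trip sketch (illustrative, not reference output): evaluating
`smirnov` at the returned critical value recovers the probability:
>>> import numpy as np
>>> from scipy.special import smirnov, smirnovi
>>> n, p = 5, 0.05
>>> d = smirnovi(n, p)
>>> np.allclose(smirnov(n, d), p)
True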
""")
add_newdoc("_smirnovc",
"""
_smirnovc(n, d)
Internal function, do not use.
""")
add_newdoc("_smirnovci",
"""
Internal function, do not use.
""")
add_newdoc("_smirnovp",
"""
_smirnovp(n, p)
Internal function, do not use.
""")
add_newdoc("spence",
r"""
spence(z, out=None)
Spence's function, also known as the dilogarithm.
It is defined to be
.. math::
\int_1^z \frac{\log(t)}{1 - t}dt
for complex :math:`z`, where the contour of integration is taken
to avoid the branch cut of the logarithm. Spence's function is
analytic everywhere except the negative real axis where it has a
branch cut.
Parameters
----------
z : array_like
Points at which to evaluate Spence's function
Returns
-------
s : ndarray
Computed values of Spence's function
Notes
-----
There is a different convention which defines Spence's function by
the integral
.. math::
-\int_0^z \frac{\log(1 - t)}{t}dt;
this is our ``spence(1 - z)``.
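Examples
--------
Two illustrative special values (not intended as reference output):
:math:`\mathrm{spence}(1) = 0` and :math:`\mathrm{spence}(0) = \pi^2/6`:
>>> import numpy as np
>>> from scipy.special import spence
>>> spence(1.0)
0.0
>>> np.allclose(spence(0.0), np.pi**2 / 6)
True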
""")
add_newdoc("stdtr",
"""
stdtr(df, t)
Student t distribution cumulative distribution function
Returns the integral from minus infinity to t of the Student t
distribution with df > 0 degrees of freedom::
gamma((df+1)/2)/(sqrt(df*pi)*gamma(df/2)) *
integral((1+x**2/df)**(-df/2-1/2), x=-inf..t)
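Examples
--------
Two quick illustrative checks (not reference output): the CDF is 0.5 at
``t = 0`` and is symmetric, ``stdtr(df, -t) = 1 - stdtr(df, t)``:
>>> import numpy as np
>>> from scipy.special import stdtr
>>> np.allclose(stdtr(3, 0), 0.5)
True
>>> np.allclose(stdtr(3, 1.0) + stdtr(3, -1.0), 1.0)
True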
""")
add_newdoc("stdtridf",
"""
stdtridf(p, t)
Inverse of `stdtr` vs df
Returns the argument df such that stdtr(df, t) is equal to `p`.
""")
add_newdoc("stdtrit",
"""
stdtrit(df, p)
Inverse of `stdtr` vs `t`
Returns the argument `t` such that stdtr(df, t) is equal to `p`.
""")
add_newdoc("struve",
r"""
struve(v, x)
Struve function.
Return the value of the Struve function of order `v` at `x`. The Struve
function is defined as,
.. math::
H_v(x) = (x/2)^{v + 1} \sum_{n=0}^\infty \frac{(-1)^n (x/2)^{2n}}{\Gamma(n + \frac{3}{2}) \Gamma(n + v + \frac{3}{2})},
where :math:`\Gamma` is the gamma function.
Parameters
----------
v : array_like
Order of the Struve function (float).
x : array_like
Argument of the Struve function (float; must be positive unless `v` is
an integer).
Returns
-------
H : ndarray
Value of the Struve function of order `v` at `x`.
Notes
-----
Three methods discussed in [1]_ are used to evaluate the Struve function:
- power series
- expansion in Bessel functions (if :math:`|x| < |v| + 20`)
- asymptotic large-x expansion (if :math:`x \geq 0.7v + 12`)
Rounding errors are estimated based on the largest terms in the sums, and
the result associated with the smallest error is returned.
See also
--------
modstruve
References
----------
.. [1] NIST Digital Library of Mathematical Functions
https://dlmf.nist.gov/11
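Examples
--------
As an illustrative check (not reference output), the half-integer order
reduces to an elementary function,
:math:`H_{1/2}(x) = \sqrt{2/(\pi x)}\,(1 - \cos x)`:
>>> import numpy as np
>>> from scipy.special import struve
>>> x = 3.0
>>> np.allclose(struve(0.5, x), np.sqrt(2 / (np.pi * x)) * (1 - np.cos(x)))
True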
""")
add_newdoc("tandg",
"""
tandg(x, out=None)
Tangent of angle `x` given in degrees.
Parameters
----------
x : array_like
Angle, given in degrees.
out : ndarray, optional
Optional output array for the function results.
Returns
-------
scalar or ndarray
Tangent at the input.
See Also
--------
sindg, cosdg, cotdg
Examples
--------
>>> import scipy.special as sc
It is more accurate than using tangent directly.
>>> x = 180 * np.arange(3)
>>> sc.tandg(x)
array([0., 0., 0.])
>>> np.tan(x * np.pi / 180)
array([ 0.0000000e+00, -1.2246468e-16, -2.4492936e-16])
""")
add_newdoc("tklmbda",
"""
tklmbda(x, lmbda)
Tukey-Lambda cumulative distribution function
""")
add_newdoc("wofz",
"""
wofz(z)
Faddeeva function
Returns the value of the Faddeeva function for complex argument::
exp(-z**2) * erfc(-i*z)
See Also
--------
dawsn, erf, erfc, erfcx, erfi
References
----------
.. [1] Steven G. Johnson, Faddeeva W function implementation.
http://ab-initio.mit.edu/Faddeeva
Examples
--------
>>> import numpy as np
>>> from scipy import special
>>> import matplotlib.pyplot as plt
>>> x = np.linspace(-3, 3)
>>> z = special.wofz(x)
>>> plt.plot(x, z.real, label='wofz(x).real')
>>> plt.plot(x, z.imag, label='wofz(x).imag')
>>> plt.xlabel('$x$')
>>> plt.legend(framealpha=1, shadow=True)
>>> plt.grid(alpha=0.25)
>>> plt.show()
""")
add_newdoc("xlogy",
"""
xlogy(x, y)
Compute ``x*log(y)`` so that the result is 0 if ``x = 0``.
Parameters
----------
x : array_like
Multiplier
y : array_like
Argument
Returns
-------
z : array_like
Computed x*log(y)
Notes
-----
.. versionadded:: 0.13.0
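Examples
--------
A short illustration of the special case at ``x = 0`` and of the ordinary
case (the latter checked with `numpy.allclose`):
>>> import numpy as np
>>> from scipy.special import xlogy
>>> xlogy(0, 0)
0.0
>>> np.allclose(xlogy(2, 10), 2 * np.log(10))
True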
""")
add_newdoc("xlog1py",
"""
xlog1py(x, y)
Compute ``x*log1p(y)`` so that the result is 0 if ``x = 0``.
Parameters
----------
x : array_like
Multiplier
y : array_like
Argument
Returns
-------
z : array_like
Computed x*log1p(y)
Notes
-----
.. versionadded:: 0.13.0
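Examples
--------
A short illustration: the ``x = 0`` special case, and the behaviour for a
tiny ``y`` where ``log1p`` keeps precision (values chosen arbitrarily):
>>> import numpy as np
>>> from scipy.special import xlog1py
>>> xlog1py(0, -1)
0.0
>>> np.allclose(xlog1py(3, 1e-12), 3e-12)
True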
""")
add_newdoc("y0",
r"""
y0(x)
Bessel function of the second kind of order 0.
Parameters
----------
x : array_like
Argument (float).
Returns
-------
Y : ndarray
Value of the Bessel function of the second kind of order 0 at `x`.
Notes
-----
The domain is divided into the intervals [0, 5] and (5, infinity). In the
first interval a rational approximation :math:`R(x)` is employed to
compute,
.. math::
Y_0(x) = R(x) + \frac{2 \log(x) J_0(x)}{\pi},
where :math:`J_0` is the Bessel function of the first kind of order 0.
In the second interval, the Hankel asymptotic expansion is employed with
two rational functions of degree 6/6 and 7/7.
This function is a wrapper for the Cephes [1]_ routine `y0`.
See also
--------
j0
yv
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
""")
add_newdoc("y1",
"""
y1(x)
Bessel function of the second kind of order 1.
Parameters
----------
x : array_like
Argument (float).
Returns
-------
Y : ndarray
Value of the Bessel function of the second kind of order 1 at `x`.
Notes
-----
The domain is divided into the intervals [0, 8] and (8, infinity). In the
first interval a 25 term Chebyshev expansion is used, and computing
:math:`J_1` (the Bessel function of the first kind) is required. In the
second, the asymptotic trigonometric representation is employed using two
rational functions of degree 5/5.
This function is a wrapper for the Cephes [1]_ routine `y1`.
See also
--------
j1
yn
yv
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
""")
add_newdoc("yn",
r"""
yn(n, x)
Bessel function of the second kind of integer order and real argument.
Parameters
----------
n : array_like
Order (integer).
x : array_like
Argument (float).
Returns
-------
Y : ndarray
Value of the Bessel function, :math:`Y_n(x)`.
Notes
-----
Wrapper for the Cephes [1]_ routine `yn`.
The function is evaluated by forward recurrence on `n`, starting with
values computed by the Cephes routines `y0` and `y1`. If `n = 0` or 1,
the routine for `y0` or `y1` is called directly.
See also
--------
yv : For real order and real or complex argument.
References
----------
.. [1] Cephes Mathematical Functions Library,
http://www.netlib.org/cephes/
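Examples
--------
A small consistency check of the recurrence start against `y0` and `y1`
(point chosen arbitrarily):
>>> import numpy as np
>>> from scipy.special import yn, y0, y1
>>> x = 2.5
>>> np.allclose([yn(0, x), yn(1, x)], [y0(x), y1(x)])
True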
""")
add_newdoc("yv",
r"""
yv(v, z)
Bessel function of the second kind of real order and complex argument.
Parameters
----------
v : array_like
Order (float).
z : array_like
Argument (float or complex).
Returns
-------
Y : ndarray
Value of the Bessel function of the second kind, :math:`Y_v(x)`.
Notes
-----
For positive `v` values, the computation is carried out using the
AMOS [1]_ `zbesy` routine, which exploits the connection to the Hankel
Bessel functions :math:`H_v^{(1)}` and :math:`H_v^{(2)}`,
.. math:: Y_v(z) = \frac{1}{2\imath} (H_v^{(1)} - H_v^{(2)}).
For negative `v` values the formula,
.. math:: Y_{-v}(z) = Y_v(z) \cos(\pi v) + J_v(z) \sin(\pi v)
is used, where :math:`J_v(z)` is the Bessel function of the first kind,
computed using the AMOS routine `zbesj`. Note that the second term is
exactly zero for integer `v`; to improve accuracy the second term is
explicitly omitted for `v` values such that `v = floor(v)`.
See also
--------
yve : :math:`Y_v` with leading exponential behavior stripped off.
References
----------
.. [1] Donald E. Amos, "AMOS, A Portable Package for Bessel Functions
of a Complex Argument and Nonnegative Order",
http://netlib.org/amos/
""")
add_newdoc("yve",
r"""
yve(v, z)
Exponentially scaled Bessel function of the second kind of real order.
Returns the exponentially scaled Bessel function of the second
kind of real order `v` at complex `z`::
yve(v, z) = yv(v, z) * exp(-abs(z.imag))
Parameters
----------
v : array_like
Order (float).
z : array_like
Argument (float or complex).
Returns
-------
Y : ndarray
Value of the exponentially scaled Bessel function.
Notes
-----
For positive `v` values, the computation is carried out using the
AMOS [1]_ `zbesy` routine, which exploits the connection to the Hankel
Bessel functions :math:`H_v^{(1)}` and :math:`H_v^{(2)}`,
.. math:: Y_v(z) = \frac{1}{2\imath} (H_v^{(1)} - H_v^{(2)}).
For negative `v` values the formula,
.. math:: Y_{-v}(z) = Y_v(z) \cos(\pi v) + J_v(z) \sin(\pi v)
is used, where :math:`J_v(z)` is the Bessel function of the first kind,
computed using the AMOS routine `zbesj`. Note that the second term is
exactly zero for integer `v`; to improve accuracy the second term is
explicitly omitted for `v` values such that `v = floor(v)`.
References
----------
.. [1] Donald E. Amos, "AMOS, A Portable Package for Bessel Functions
of a Complex Argument and Nonnegative Order",
http://netlib.org/amos/
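Examples
--------
An illustrative check of the scaling relation stated above, at an arbitrarily
chosen complex point:
>>> import numpy as np
>>> from scipy.special import yv, yve
>>> z = 1.5 + 2.0j
>>> np.allclose(yve(1, z), yv(1, z) * np.exp(-abs(z.imag)))
True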
""")
add_newdoc("_zeta",
"""
_zeta(x, q)
Internal function, Hurwitz zeta.
""")
add_newdoc("zetac",
"""
zetac(x)
Riemann zeta function minus 1.
This function is defined as
.. math:: \\zeta(x) - 1 = \\sum_{k=2}^{\\infty} 1 / k^x,
where ``x > 1``. For ``x < 1`` the analytic continuation is
computed. For more information on the Riemann zeta function, see
[dlmf]_.
Parameters
----------
x : array_like of float
Values at which to compute zeta(x) - 1 (must be real).
Returns
-------
out : array_like
Values of zeta(x) - 1.
See Also
--------
zeta
Examples
--------
>>> import numpy as np
>>> from scipy.special import zetac, zeta
Some special values:
>>> zetac(2), np.pi**2/6 - 1
(0.64493406684822641, 0.6449340668482264)
>>> zetac(-1), -1.0/12 - 1
(-1.0833333333333333, -1.0833333333333333)
Compare ``zetac(x)`` to ``zeta(x) - 1`` for large `x`:
>>> zetac(60), zeta(60) - 1
(8.673617380119933e-19, 0.0)
References
----------
.. [dlmf] NIST Digital Library of Mathematical Functions
https://dlmf.nist.gov/25
""")
add_newdoc("_riemann_zeta",
"""
Internal function, use `zeta` instead.
""")
add_newdoc("_struve_asymp_large_z",
"""
_struve_asymp_large_z(v, z, is_h)
Internal function for testing `struve` & `modstruve`
Evaluates using asymptotic expansion
Returns
-------
v, err
""")
add_newdoc("_struve_power_series",
"""
_struve_power_series(v, z, is_h)
Internal function for testing `struve` & `modstruve`
Evaluates using power series
Returns
-------
v, err
""")
add_newdoc("_struve_bessel_series",
"""
_struve_bessel_series(v, z, is_h)
Internal function for testing `struve` & `modstruve`
Evaluates using Bessel function series
Returns
-------
v, err
""")
add_newdoc("_spherical_jn",
"""
Internal function, use `spherical_jn` instead.
""")
add_newdoc("_spherical_jn_d",
"""
Internal function, use `spherical_jn` instead.
""")
add_newdoc("_spherical_yn",
"""
Internal function, use `spherical_yn` instead.
""")
add_newdoc("_spherical_yn_d",
"""
Internal function, use `spherical_yn` instead.
""")
add_newdoc("_spherical_in",
"""
Internal function, use `spherical_in` instead.
""")
add_newdoc("_spherical_in_d",
"""
Internal function, use `spherical_in` instead.
""")
add_newdoc("_spherical_kn",
"""
Internal function, use `spherical_kn` instead.
""")
add_newdoc("_spherical_kn_d",
"""
Internal function, use `spherical_kn` instead.
""")
add_newdoc("loggamma",
r"""
loggamma(z, out=None)
Principal branch of the logarithm of the gamma function.
Defined to be :math:`\log(\Gamma(x))` for :math:`x > 0` and
extended to the complex plane by analytic continuation. The
function has a single branch cut on the negative real axis.
.. versionadded:: 0.18.0
Parameters
----------
z : array_like
Values in the complex plane at which to compute ``loggamma``
out : ndarray, optional
Output array for computed values of ``loggamma``
Returns
-------
loggamma : ndarray
Values of ``loggamma`` at z.
Notes
-----
It is not generally true that :math:`\log\Gamma(z) =
\log(\Gamma(z))`, though the real parts of the functions do
agree. The benefit of not defining `loggamma` as
:math:`\log(\Gamma(z))` is that the latter function has a
complicated branch cut structure whereas `loggamma` is analytic
except for on the negative real axis.
The identities
.. math::
\exp(\log\Gamma(z)) &= \Gamma(z) \\
\log\Gamma(z + 1) &= \log(z) + \log\Gamma(z)
make `loggamma` useful for working in complex logspace.
On the real line `loggamma` is related to `gammaln` via
``exp(loggamma(x + 0j)) = gammasgn(x)*exp(gammaln(x))``, up to
rounding error.
The implementation here is based on [hare1997]_.
See also
--------
gammaln : logarithm of the absolute value of the gamma function
gammasgn : sign of the gamma function
References
----------
.. [hare1997] D.E.G. Hare,
*Computing the Principal Branch of log-Gamma*,
Journal of Algorithms, Volume 25, Issue 2, November 1997, pages 221-236.
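Examples
--------
Illustrative checks of the identities above (points chosen arbitrarily):
>>> import numpy as np
>>> from scipy.special import loggamma, gammaln, gammasgn
>>> z = 2.0 + 3.0j
>>> np.allclose(loggamma(z + 1), np.log(z) + loggamma(z))
True
>>> x = 3.5
>>> np.allclose(np.exp(loggamma(x + 0j)), gammasgn(x) * np.exp(gammaln(x)))
True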
""")
add_newdoc("_sinpi",
"""
Internal function, do not use.
""")
add_newdoc("_cospi",
"""
Internal function, do not use.
""")
add_newdoc("owens_t",
"""
owens_t(h, a)
Owen's T Function.
The function T(h, a) gives the probability of the event
(X > h and 0 < Y < a * X) where X and Y are independent
standard normal random variables.
Parameters
----------
h : array_like
Input value.
a : array_like
Input value.
Returns
-------
t : scalar or ndarray
Probability of the event (X > h and 0 < Y < a * X),
where X and Y are independent standard normal random variables.
Examples
--------
>>> from scipy import special
>>> a = 3.5
>>> h = 0.78
>>> special.owens_t(h, a)
0.10877216734852274
References
----------
.. [1] M. Patefield and D. Tandy, "Fast and accurate calculation of
Owen's T Function", Statistical Software vol. 5, pp. 1-25, 2000.
""")
add_newdoc("_factorial",
"""
Internal function, do not use.
""")
add_newdoc("wright_bessel",
r"""
wright_bessel(a, b, x)
Wright's generalized Bessel function.
Wright's generalized Bessel function is an entire function and is defined as
.. math:: \Phi(a, b; x) = \sum_{k=0}^\infty \frac{x^k}{k! \Gamma(a k + b)}
See also [1].
Parameters
----------
a : array_like of float
a >= 0
b : array_like of float
b >= 0
x : array_like of float
x >= 0
Notes
-----
Due to the complexity of the function with its three parameters, only
non-negative arguments are implemented.
Examples
--------
>>> from scipy.special import wright_bessel
>>> a, b, x = 1.5, 1.1, 2.5
>>> wright_bessel(a, b-1, x)
4.5314465939443025
Now, let us verify the relation
.. math:: \Phi(a, b-1; x) = a x \Phi(a, b+a; x) + (b-1) \Phi(a, b; x)
>>> a * x * wright_bessel(a, b+a, x) + (b-1) * wright_bessel(a, b, x)
4.5314465939443025
References
----------
.. [1] Digital Library of Mathematical Functions, 10.46.
https://dlmf.nist.gov/10.46.E1
""")
add_newdoc("ndtri_exp",
r"""
ndtri_exp(y)
Inverse of `log_ndtr` vs x. Allows for greater precision than
`ndtri` composed with `numpy.exp` for very small values of y and for
y close to 0.
Parameters
----------
y : array_like of float
Returns
-------
scalar or ndarray
Inverse of the log CDF of the standard normal distribution, evaluated
at y.
Examples
--------
>>> import numpy as np
>>> import scipy.special as sc
`ndtri_exp` agrees with the naive implementation when the latter does
not suffer from underflow.
>>> sc.ndtri_exp(-1)
-0.33747496376420244
>>> sc.ndtri(np.exp(-1))
-0.33747496376420244
For extreme values of y, the naive approach fails
>>> sc.ndtri(np.exp(-800))
-inf
>>> sc.ndtri(np.exp(-1e-20))
inf
whereas `ndtri_exp` is still able to compute the result to high precision.
>>> sc.ndtri_exp(-800)
-39.88469483825668
>>> sc.ndtri_exp(-1e-20)
9.262340089798409
See Also
--------
log_ndtr, ndtri, ndtr
""")
|
matthew-brett/scipy
|
scipy/special/_add_newdocs.py
|
Python
|
bsd-3-clause
| 242,325
|
[
"Gaussian"
] |
69a3fb0398a75b17d6092644e4d7c6229ead948c36b30a005c939345f475365c
|
# *******************************************
# Copyright 2010-2013, Anthony Hand
#
# File version 2013.09.25 (September 25, 2013)
# Updates:
# - Corrected a bug in detectTierIphone(). A 'self' reference had been forgotten.
#
# File version 2013.07.13 (July 13, 2013)
# Updates:
# - Added support for Tizen: variable and DetectTizen().
# - Added support for Meego: variable and DetectMeego().
# - Added support for Windows Phone 8: variable and DetectWindowsPhone8().
# - Added a generic Windows Phone method: DetectWindowsPhone().
# - Added support for BlackBerry 10 OS: variable and DetectBlackBerry10Phone().
# - Added support for PlayStation Vita handheld: variable and DetectGamingHandheld().
# - Updated DetectTierIphone(). Added Tizen; updated the Windows Phone, BB10, and PS Vita support.
# - Updated DetectWindowsMobile(). Uses generic DetectWindowsPhone() method rather than WP7.
# - Updated DetectSmartphone(). Uses the IsTierIphone variable.
# - Updated DetectSonyMylo() with more efficient code.
# - Removed DetectGarminNuvifone() from DetectTierIphone(). How many are left in market in 2013? It is detected as a RichCSS Tier device.
# - Removed the deviceXoom variable. It was unused.
# - Added detection support for the Obigo mobile browser to DetectMobileQuick().
# - Corrected a bug in the DetectNintendo() method.
#
# Port to Python: Alexey Evseev (alexevseev@gmail.com)
# Made for www.irk.fm website
# Maintained by irk.fm team. Contact Jury Gerasimov (jury@softshape.com)
#
# File version date: February 10, 2012
# Creation:
# - Cloned from http://code.google.com/p/mobileesp/source/browse/Java/UAgentInfo.java
# and http://code.google.com/p/mobileesp/source/browse/PHP/mdetect.php
#
# LICENSE INFORMATION
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
# either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
#
#
# ABOUT THIS PROJECT
# Project Owner: Anthony Hand
# Email: anthony.hand@gmail.com
# Web Site: http://www.mobileesp.com
# Source Files: http://code.google.com/p/mobileesp/
#
# Versions of this code are available for:
# PHP, JavaScript, Java, ASP.NET (C#), Ruby and Python
#
# *******************************************
class UAgentInfo(object):
"""The UAgentInfo class encapsulates information about
a browser's connection to your web site.
You can use it to find out whether the browser asking for
your site's content is probably running on a mobile device.
The methods were written so you can be as granular as you want.
For example, you can ask whether the device is something as specific as an
iPod Touch or as general as a smartphone-class device.
The object's methods return True or False.
"""
# Initialize some initial smartphone string variables.
engineWebKit = "webkit"
deviceIphone = "iphone"
deviceIpod = "ipod"
deviceIpad = "ipad"
deviceMacPpc = "macintosh" #Used for disambiguation
deviceAndroid = "android"
deviceGoogleTV = "googletv"
deviceHtcFlyer = "htc_flyer" #HTC Flyer
deviceWinPhone7 = "windows phone os 7"
deviceWinPhone8 = "windows phone 8"
deviceWinMob = "windows ce"
deviceWindows = "windows"
deviceIeMob = "iemobile"
devicePpc = "ppc" #Stands for PocketPC
enginePie = "wm5 pie" #An old Windows Mobile
deviceBB = "blackberry"
deviceBB10 = "bb10" #For the new BB 10 OS
vndRIM = "vnd.rim" #Detectable when BB devices emulate IE or Firefox
deviceBBStorm = "blackberry95" #Storm 1 and 2
deviceBBBold = "blackberry97" #Bold 97x0 (non-touch)
deviceBBBoldTouch = "blackberry 99" #Bold 99x0 (touchscreen)
deviceBBTour = "blackberry96" #Tour
deviceBBCurve = "blackberry89" #Curve 2
deviceBBCurveTouch = "blackberry 938" #Curve Touch 9380
deviceBBTorch = "blackberry 98" #Torch
deviceBBPlaybook = "playbook" #PlayBook tablet
deviceSymbian = "symbian"
deviceS60 = "series60"
deviceS70 = "series70"
deviceS80 = "series80"
deviceS90 = "series90"
devicePalm = "palm"
deviceWebOS = "webos" #For Palm's line of WebOS devices
deviceWebOShp = "hpwos" #For HP's line of WebOS devices
engineBlazer = "blazer" #Old Palm
engineXiino = "xiino" #Another old Palm
deviceNuvifone = "nuvifone" #Garmin Nuvifone
deviceBada = "bada" #Samsung's Bada OS
deviceTizen = "tizen" #Tizen OS
deviceMeego = "meego" #Meego OS
deviceKindle = "kindle" #Amazon Kindle, eInk one
engineSilk = "silk" #Amazon's accelerated Silk browser for Kindle Fire
#Initialize variables for mobile-specific content.
vndwap = "vnd.wap"
wml = "wml"
#Initialize variables for other random devices and mobile browsers.
deviceTablet = "tablet" #Generic term for slate and tablet devices
deviceBrew = "brew"
deviceDanger = "danger"
deviceHiptop = "hiptop"
devicePlaystation = "playstation"
devicePlaystationVita = "vita"
deviceNintendoDs = "nitro"
deviceNintendo = "nintendo"
deviceWii = "wii"
deviceXbox = "xbox"
deviceArchos = "archos"
engineOpera = "opera" #Popular browser
engineNetfront = "netfront" #Common embedded OS browser
engineUpBrowser = "up.browser" #common on some phones
engineOpenWeb = "openweb" #Transcoding by OpenWave server
deviceMidp = "midp" #a mobile Java technology
uplink = "up.link"
engineTelecaQ = "teleca q" #a modern feature phone browser
engineObigo = "obigo" #W 10 is a modern feature phone browser
devicePda = "pda" #some devices report themselves as PDAs
mini = "mini" #Some mobile browsers put "mini" in their names.
mobile = "mobile" #Some mobile browsers put "mobile" in their user agent strings.
mobi = "mobi" #Some mobile browsers put "mobi" in their user agent strings.
#Use Maemo, Tablet, and Linux to test for Nokia's Internet Tablets.
maemo = "maemo"
linux = "linux"
qtembedded = "qt embedded" #for Sony Mylo
mylocom2 = "com2" #for Sony Mylo also
#In some UserAgents, the only clue is the manufacturer.
manuSonyEricsson = "sonyericsson"
manuericsson = "ericsson"
manuSamsung1 = "sec-sgh"
manuSony = "sony"
manuHtc = "htc"
#In some UserAgents, the only clue is the operator.
svcDocomo = "docomo"
svcKddi = "kddi"
svcVodafone = "vodafone"
#Disambiguation strings.
disUpdate = "update" #pda vs. update
def __init__(self, userAgent, httpAccept):
"""Initialize the __userAgent and __httpAccept variables
Keyword arguments:
userAgent -- the User-Agent header
httpAccept -- the Accept header
"""
# User-Agent and Accept HTTP request headers
self.__userAgent = userAgent.lower() if userAgent else ""
self.__httpAccept = httpAccept.lower() if httpAccept else ""
# Let's store values for quickly accessing the same info multiple times.
self.__isIphone = False
self.__isAndroidPhone = False
self.__isTierTablet = False
self.__isTierIphone = False
self.__isTierRichCss = False
self.__isTierGenericMobile = False
# Initialize key stored values.
self.initDeviceScan()
def getUserAgent(self):
"""Return the lower case HTTP_USER_AGENT"""
return self.__userAgent
def getHttpAccept(self):
"""Return the lower case HTTP_ACCEPT"""
return self.__httpAccept
def getIsIphone(self):
"""Return whether the device is an Iphone or iPod Touch"""
return self.__isIphone
def getIsTierTablet(self):
"""Return whether the device is in the Tablet Tier."""
return self.__isTierTablet
def getIsTierIphone(self):
"""Return whether the device is in the Iphone Tier."""
return self.__isTierIphone
def getIsTierRichCss(self):
"""Return whether the device is in the 'Rich CSS' tier of mobile devices."""
return self.__isTierRichCss
def getIsTierGenericMobile(self):
"""Return whether the device is a generic, less-capable mobile device."""
return self.__isTierGenericMobile
def initDeviceScan(self):
"""Initialize Key Stored Values."""
self.__isIphone = self.detectIphoneOrIpod()
self.__isAndroidPhone = self.detectAndroidPhone()
self.__isTierTablet = self.detectTierTablet()
self.__isTierIphone = self.detectTierIphone()
self.__isTierRichCss = self.detectTierRichCss()
self.__isTierGenericMobile = self.detectTierOtherPhones()
def detectIphone(self):
"""Return detection of an iPhone
Detects if the current device is an iPhone.
"""
# The iPad and iPod touch say they're an iPhone! So let's disambiguate.
return UAgentInfo.deviceIphone in self.__userAgent \
and not self.detectIpad() \
and not self.detectIpod()
def detectIpod(self):
"""Return detection of an iPod Touch
Detects if the current device is an iPod Touch.
"""
return UAgentInfo.deviceIpod in self.__userAgent
def detectIpad(self):
"""Return detection of an iPad
Detects if the current device is an iPad tablet.
"""
return UAgentInfo.deviceIpad in self.__userAgent \
and self.detectWebkit()
def detectIphoneOrIpod(self):
"""Return detection of an iPhone or iPod Touch
Detects if the current device is an iPhone or iPod Touch.
"""
#We repeat the searches here because some iPods may report themselves as an iPhone, which would be okay.
return UAgentInfo.deviceIphone in self.__userAgent \
or UAgentInfo.deviceIpod in self.__userAgent
def detectIos(self):
"""Return detection of an Apple iOS device
Detects *any* iOS device: iPhone, iPod Touch, iPad.
"""
return self.detectIphoneOrIpod() \
or self.detectIpad()
def detectAndroid(self):
"""Return detection of an Android device
Detects *any* Android OS-based device: phone, tablet, and multi-media player.
Also detects Google TV.
"""
if UAgentInfo.deviceAndroid in self.__userAgent \
or self.detectGoogleTV():
return True
#Special check for the HTC Flyer 7" tablet. It should report here.
return UAgentInfo.deviceHtcFlyer in self.__userAgent
def detectAndroidPhone(self):
"""Return detection of an Android phone
Detects if the current device is a (small-ish) Android OS-based device
used for calling and/or multi-media (like a Samsung Galaxy Player).
Google says these devices will have 'Android' AND 'mobile' in user agent.
Ignores tablets (Honeycomb and later).
"""
if self.detectAndroid() \
and UAgentInfo.mobile in self.__userAgent:
return True
#Special check for Android phones with Opera Mobile. They should report here.
if self.detectOperaAndroidPhone():
return True
#Special check for the HTC Flyer 7" tablet. It should report here.
return UAgentInfo.deviceHtcFlyer in self.__userAgent
def detectAndroidTablet(self):
"""Return detection of an Android tablet
Detects if the current device is a (self-reported) Android tablet.
Google says these devices will have 'Android' and NOT 'mobile' in their user agent.
"""
#First, let's make sure we're on an Android device.
if not self.detectAndroid():
return False
#Special check for Opera Android Phones. They should NOT report here.
if self.detectOperaMobile():
return False
#Special check for the HTC Flyer 7" tablet. It should NOT report here.
if UAgentInfo.deviceHtcFlyer in self.__userAgent:
return False
#Otherwise, if it's Android and does NOT have 'mobile' in it, Google says it's a tablet.
return UAgentInfo.mobile not in self.__userAgent
def detectAndroidWebKit(self):
"""Return detection of an Android WebKit browser
Detects if the current device is an Android OS-based device and
the browser is based on WebKit.
"""
return self.detectAndroid() \
and self.detectWebkit()
def detectGoogleTV(self):
"""Return detection of GoogleTV
Detects if the current device is a GoogleTV.
"""
return UAgentInfo.deviceGoogleTV in self.__userAgent
def detectWebkit(self):
"""Return detection of a WebKit browser
Detects if the current browser is based on WebKit.
"""
return UAgentInfo.engineWebKit in self.__userAgent
def detectWindowsPhone(self):
"""Return detection of EITHER Windows Phone 7.x OR 8 device.
Detects if the current browser is EITHER a
Windows Phone 7.x or 8 device.
"""
return self.detectWindowsPhone7() \
or self.detectWindowsPhone8()
def detectWindowsPhone7(self):
"""Return detection of Windows Phone 7
Detects if the current browser is a
Windows Phone 7 device.
"""
return UAgentInfo.deviceWinPhone7 in self.__userAgent
def detectWindowsPhone8(self):
"""Return detection of Windows Phone 8
Detects if the current browser is a
Windows Phone 8 device.
"""
return UAgentInfo.deviceWinPhone8 in self.__userAgent
def detectWindowsMobile(self):
"""Return detection of Windows Mobile
Detects if the current browser is a Windows Mobile device.
Excludes Windows Phone 7 devices.
Focuses on Windows Mobile 6.xx and earlier.
"""
#Exclude new Windows Phone 7.x and 8.
if self.detectWindowsPhone():
return False
#Most devices use 'Windows CE', but some report 'iemobile'
# and some older ones report as 'PIE' for Pocket IE.
# We also look for instances of HTC and Windows for many of their WinMo devices.
if UAgentInfo.deviceWinMob in self.__userAgent \
or UAgentInfo.deviceIeMob in self.__userAgent \
or UAgentInfo.enginePie in self.__userAgent:
return True
# Test for certain Windows Mobile-based HTC devices.
if UAgentInfo.manuHtc in self.__userAgent \
and UAgentInfo.deviceWindows in self.__userAgent:
return True
if self.detectWapWml() \
and UAgentInfo.deviceWindows in self.__userAgent:
return True
#Test for Windows Mobile PPC but not old Macintosh PowerPC.
return UAgentInfo.devicePpc in self.__userAgent \
and UAgentInfo.deviceMacPpc not in self.__userAgent
def detectBlackBerry(self):
"""Return detection of Blackberry
Detects if the current browser is any BlackBerry.
Includes the PlayBook.
"""
return UAgentInfo.deviceBB in self.__userAgent \
or UAgentInfo.vndRIM in self.__httpAccept
def detectBlackBerry10Phone(self):
"""Return detection of a Blackberry 10 OS phone.
Detects if the current browser is a BlackBerry 10 OS phone.
Excludes the PlayBook.
"""
return UAgentInfo.deviceBB10 in self.__userAgent
def detectBlackBerryTablet(self):
"""Return detection of a Blackberry Tablet
Detects if the current browser is on a BlackBerry tablet device.
Example: PlayBook
"""
return UAgentInfo.deviceBBPlaybook in self.__userAgent
def detectBlackBerryWebKit(self):
"""Return detection of a Blackberry device with WebKit browser
Detects if the current browser is a BlackBerry device AND uses a
WebKit-based browser. These are signatures for the new BlackBerry OS 6.
Examples: Torch. Includes the Playbook.
"""
return self.detectBlackBerry() \
and self.detectWebkit()
def detectBlackBerryTouch(self):
"""Return detection of a Blackberry touchscreen device
Detects if the current browser is a BlackBerry Touch
device, such as the Storm, Torch, and Bold Touch. Excludes the Playbook.
"""
return UAgentInfo.deviceBBStorm in self.__userAgent \
or UAgentInfo.deviceBBTorch in self.__userAgent \
or UAgentInfo.deviceBBBoldTouch in self.__userAgent \
or UAgentInfo.deviceBBCurveTouch in self.__userAgent
def detectBlackBerryHigh(self):
"""Return detection of a Blackberry device with a better browser
Detects if the current browser is a BlackBerry device AND
has a more capable recent browser. Excludes the Playbook.
Examples, Storm, Bold, Tour, Curve2
Excludes the new BlackBerry OS 6 and 7 browser!!
"""
#Disambiguate for BlackBerry OS 6 or 7 (WebKit) browser
if self.detectBlackBerryWebKit():
return False
if not self.detectBlackBerry():
return False
return self.detectBlackBerryTouch() \
or UAgentInfo.deviceBBBold in self.__userAgent \
or UAgentInfo.deviceBBTour in self.__userAgent \
or UAgentInfo.deviceBBCurve in self.__userAgent
def detectBlackBerryLow(self):
"""Return detection of a Blackberry device with a poorer browser
Detects if the current browser is a BlackBerry device AND
has an older, less capable browser.
Examples: Pearl, 8800, Curve1
"""
if not self.detectBlackBerry():
return False
#Assume that if it's not in the High tier (or a WebKit browser), then it's Low
return not (self.detectBlackBerryHigh()
or self.detectBlackBerryWebKit())
def detectS60OssBrowser(self):
"""Return detection of Symbian S60 Browser
Detects if the current browser is the Symbian S60 Open Source Browser.
"""
#First, test for WebKit, then make sure it's either Symbian or S60.
return self.detectWebkit() \
and (UAgentInfo.deviceSymbian in self.__userAgent \
or UAgentInfo.deviceS60 in self.__userAgent)
def detectSymbianOS(self):
"""Return detection of SymbianOS
Detects if the current device is any Symbian OS-based device,
including older S60, Series 70, Series 80, Series 90, and UIQ,
or other browsers running on these devices.
"""
return UAgentInfo.deviceSymbian in self.__userAgent \
or UAgentInfo.deviceS60 in self.__userAgent \
or UAgentInfo.deviceS70 in self.__userAgent \
or UAgentInfo.deviceS80 in self.__userAgent \
or UAgentInfo.deviceS90 in self.__userAgent
def detectPalmOS(self):
"""Return detection of a PalmOS device
Detects if the current browser is on a PalmOS device.
"""
#Most devices nowadays report as 'Palm', but some older ones reported as Blazer or Xiino.
if UAgentInfo.devicePalm in self.__userAgent \
or UAgentInfo.engineBlazer in self.__userAgent \
or UAgentInfo.engineXiino in self.__userAgent:
# Make sure it's not WebOS
return not self.detectPalmWebOS()
return False
def detectPalmWebOS(self):
"""Return detection of a Palm WebOS device
Detects if the current browser is on a Palm device
running the new WebOS.
"""
return UAgentInfo.deviceWebOS in self.__userAgent
def detectWebOSTablet(self):
"""Return detection of an HP WebOS tablet
Detects if the current browser is on an HP tablet running WebOS.
"""
return UAgentInfo.deviceWebOShp in self.__userAgent \
and UAgentInfo.deviceTablet in self.__userAgent
def detectOperaMobile(self):
"""Return detection of an Opera browser for a mobile device
Detects Opera Mobile or Opera Mini.
"""
return UAgentInfo.engineOpera in self.__userAgent \
and (UAgentInfo.mini in self.__userAgent \
or UAgentInfo.mobi in self.__userAgent)
def detectOperaAndroidPhone(self):
"""Return detection of an Opera browser on an Android phone
Detects Opera Mobile on an Android phone.
"""
return UAgentInfo.engineOpera in self.__userAgent \
and UAgentInfo.deviceAndroid in self.__userAgent \
and UAgentInfo.mobi in self.__userAgent
def detectOperaAndroidTablet(self):
"""Return detection of an Opera browser on an Android tablet
Detects Opera Mobile on an Android tablet.
"""
return UAgentInfo.engineOpera in self.__userAgent \
and UAgentInfo.deviceAndroid in self.__userAgent \
and UAgentInfo.deviceTablet in self.__userAgent
def detectKindle(self):
"""Return detection of a Kindle
Detects if the current device is an Amazon Kindle (eInk devices only).
Note: For the Kindle Fire, use the normal Android methods.
"""
return UAgentInfo.deviceKindle in self.__userAgent \
and not self.detectAndroid()
def detectAmazonSilk(self):
"""Return detection of an Amazon Kindle Fire in Silk mode.
Detects if the current Amazon device is using the Silk Browser.
Note: Typically used by the Kindle Fire.
"""
return UAgentInfo.engineSilk in self.__userAgent
def detectGarminNuvifone(self):
"""Return detection of a Garmin Nuvifone
Detects if the current browser is a
Garmin Nuvifone.
"""
return UAgentInfo.deviceNuvifone in self.__userAgent
def detectBada(self):
"""Return detection of a Bada OS smartphone
Detects if the current browser is on a Bada smartphone from Samsung
"""
return UAgentInfo.deviceBada in self.__userAgent
def detectTizen(self):
"""Return detection of a Tizen OS smartphone
Detects if the current browser is on a Tizen OS smartphone
"""
return UAgentInfo.deviceTizen in self.__userAgent
def detectMeego(self):
"""Return detection of a Meego OS smartphone
Detects if the current browser is on a Meego OS smartphone
"""
return UAgentInfo.deviceMeego in self.__userAgent
def detectDangerHiptop(self):
"""Return detection of a Danger Hiptop
Detects the Danger Hiptop device.
"""
return UAgentInfo.deviceDanger in self.__userAgent \
or UAgentInfo.deviceHiptop in self.__userAgent
def detectSonyMylo(self):
"""Return detection of a Sony Mylo device
Detects if the current browser is a Sony Mylo device.
"""
return UAgentInfo.manuSony in self.__userAgent \
and (UAgentInfo.qtembedded in self.__userAgent
or UAgentInfo.mylocom2 in self.__userAgent)
def detectMaemoTablet(self):
"""Return detection of a Maemo OS tablet
Detects if the current device is on one of the Maemo-based Nokia Internet Tablets.
"""
if UAgentInfo.maemo in self.__userAgent:
return True
return UAgentInfo.linux in self.__userAgent \
and UAgentInfo.deviceTablet in self.__userAgent \
and not self.detectWebOSTablet() \
and not self.detectAndroid()
def detectArchos(self):
"""Return detection of an Archos media player
Detects if the current device is an Archos media player/Internet tablet.
"""
return UAgentInfo.deviceArchos in self.__userAgent
def detectGameConsole(self):
"""Return detection of any Game Console
Detects if the current device is an Internet-capable game console.
"""
return self.detectSonyPlaystation() \
or self.detectNintendo() \
or self.detectXbox()
def detectSonyPlaystation(self):
"""Return detection of Sony Playstation.
Detects if the current device is a Sony Playstation.
"""
return UAgentInfo.devicePlaystation in self.__userAgent
def detectGamingHandheld(self):
"""Return detection of a handheld gaming device.
Detects if the current device is a handheld gaming device with
a touchscreen and modern iPhone-class browser.
Includes the Playstation Vita.
"""
return UAgentInfo.devicePlaystation in self.__userAgent \
and UAgentInfo.devicePlaystationVita in self.__userAgent
def detectNintendo(self):
"""Return detection of Nintendo
Detects if the current device is a Nintendo game device.
"""
return UAgentInfo.deviceNintendo in self.__userAgent \
or UAgentInfo.deviceWii in self.__userAgent \
or UAgentInfo.deviceNintendoDs in self.__userAgent
def detectXbox(self):
"""Return detection of Xbox
Detects if the current device is a Microsoft Xbox.
"""
return UAgentInfo.deviceXbox in self.__userAgent
def detectBrewDevice(self):
"""Return detection of a Brew device
Detects whether the device is a Brew-powered device.
"""
return UAgentInfo.deviceBrew in self.__userAgent
def detectWapWml(self):
"""Return detection of a WAP- or WML-capable device
Detects whether the device supports WAP or WML.
"""
return UAgentInfo.vndwap in self.__httpAccept \
or UAgentInfo.wml in self.__httpAccept
def detectMidpCapable(self):
"""Return detection of a MIDP mobile Java-capable device
Detects if the current device supports MIDP, a mobile Java technology.
"""
return UAgentInfo.deviceMidp in self.__userAgent \
or UAgentInfo.deviceMidp in self.__httpAccept
#*****************************
# Device Classes
#*****************************
def detectSmartphone(self):
"""Return detection of a general smartphone device
Check to see whether the device is any device
in the 'smartphone' category.
"""
return self.detectTierIphone() \
or self.detectS60OssBrowser() \
or self.detectSymbianOS() \
or self.detectWindowsMobile() \
or self.detectBlackBerry() \
or self.detectPalmWebOS()
def detectMobileQuick(self):
"""Return detection of any mobile device using the quicker method
Detects if the current device is a mobile device.
This method catches most of the popular modern devices.
Excludes Apple iPads and other modern tablets.
"""
#Let's exclude tablets
if self.__isTierTablet:
return False
#Most mobile browsing is done on smartphones
if self.detectSmartphone():
return True
if UAgentInfo.mobile in self.__userAgent:
return True
if self.detectWapWml() \
or self.detectBrewDevice() \
or self.detectOperaMobile():
return True
if UAgentInfo.engineObigo in self.__userAgent \
or UAgentInfo.engineNetfront in self.__userAgent \
or UAgentInfo.engineUpBrowser in self.__userAgent \
or UAgentInfo.engineOpenWeb in self.__userAgent:
return True
if self.detectDangerHiptop() \
or self.detectMidpCapable() \
or self.detectMaemoTablet() \
or self.detectArchos():
return True
if UAgentInfo.devicePda in self.__userAgent \
and UAgentInfo.disUpdate not in self.__userAgent:
return True
#We also look for Kindle devices
if self.detectKindle() \
or self.detectAmazonSilk():
return True
return False
def detectMobileLong(self):
"""Return detection of any mobile device using the more thorough method
The longer and more thorough way to detect for a mobile device.
Will probably detect most feature phones,
smartphone-class devices, Internet Tablets,
Internet-enabled game consoles, etc.
This ought to catch a lot of the more obscure and older devices, also --
but no promises on thoroughness!
"""
if self.detectMobileQuick() \
or self.detectGameConsole() \
or self.detectSonyMylo():
return True
#detect older phones from certain manufacturers and operators.
return UAgentInfo.uplink in self.__userAgent \
or UAgentInfo.manuSonyEricsson in self.__userAgent \
or UAgentInfo.manuericsson in self.__userAgent \
or UAgentInfo.manuSamsung1 in self.__userAgent \
or UAgentInfo.svcDocomo in self.__userAgent \
or UAgentInfo.svcKddi in self.__userAgent \
or UAgentInfo.svcVodafone in self.__userAgent
#*****************************
# For Mobile Web Site Design
#*****************************
def detectTierTablet(self):
"""Return detection of any device in the Tablet Tier
The quick way to detect for a tier of devices.
This method detects for the new generation of
HTML 5 capable, larger screen tablets.
Includes iPad, Android (e.g., Xoom), BB Playbook, WebOS, etc.
"""
return self.detectIpad() \
or self.detectAndroidTablet() \
or self.detectBlackBerryTablet() \
or self.detectWebOSTablet()
def detectTierIphone(self):
"""Return detection of any device using any OS with an iPhone-class web browser
The quick way to detect for a tier of devices.
This method detects for devices which can
display iPhone-optimized web content.
Includes iPhone, iPod Touch, Android,
Windows Phone 7 and 8, BB10, WebOS, Playstation Vita, etc.
"""
return self.detectIphoneOrIpod() \
or self.detectAndroidPhone() \
or self.detectWindowsPhone() \
or self.detectBlackBerry10Phone() \
or self.detectBlackBerryWebKit() and self.detectBlackBerryTouch() \
or self.detectPalmWebOS() \
or self.detectBada() \
or self.detectTizen() \
or self.detectGamingHandheld()
def detectTierRichCss(self):
"""Return detection of any device in the 'Rich CSS' Tier
The quick way to detect for a tier of devices.
This method detects for devices which are likely to be capable
of viewing CSS content optimized for the iPhone,
but may not necessarily support JavaScript.
Excludes all iPhone Tier devices.
"""
if not self.detectMobileQuick():
return False
#Exclude iPhone Tier and e-Ink Kindle devices
if self.detectTierIphone() \
or self.detectKindle():
return False
#The following devices are explicitly ok.
#Note: 'High' BlackBerry devices ONLY
#Older Windows 'Mobile' isn't good enough for iPhone Tier.
return self.detectWebkit() \
or self.detectS60OssBrowser() \
or self.detectBlackBerryHigh() \
or self.detectWindowsMobile() \
or UAgentInfo.engineTelecaQ in self.__userAgent
def detectTierOtherPhones(self):
"""Return detection of a mobile device in the less capable tier
The quick way to detect for a tier of devices.
This method detects for all other types of phones,
but excludes the iPhone and RichCSS Tier devices.
"""
#Exclude devices in the other 2 categories
return self.detectMobileLong() \
and not self.detectTierIphone() \
and not self.detectTierRichCss()
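# --- Usage sketch (not part of the original library) ---
# A minimal illustration of how UAgentInfo might be driven from a web request.
# The User-Agent and Accept strings below are made-up sample values, chosen only
# to exercise the tier checks; a real application would pass the request headers.
sample_ua = ("Mozilla/5.0 (iPhone; CPU iPhone OS 6_0 like Mac OS X) "
"AppleWebKit/536.26 (KHTML, like Gecko) Version/6.0 Mobile/10A5376e")
sample_accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"
agent = UAgentInfo(sample_ua, sample_accept)
# The tier flags are precomputed in initDeviceScan(), so these are cheap lookups.
print("iPhone tier:", agent.getIsTierIphone())
print("Tablet tier:", agent.getIsTierTablet())
print("Generic mobile:", agent.getIsTierGenericMobile())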
|
alsocollective/ewings_rally
|
application/rally_for_the_cure/mobileesp/mdetect.py
|
Python
|
gpl-2.0
| 32,695
|
[
"Galaxy"
] |
989b93c91132ee769ed6f41059e4319a6c9970cf2ce7223f94b25389abba6c16
|
"""
Student Views
"""
import datetime
import logging
import uuid
import json
import warnings
from collections import defaultdict
from urlparse import urljoin
from pytz import UTC
from requests import HTTPError
from ipware.ip import get_ip
from django.conf import settings
from django.contrib.auth import logout, authenticate, login
from django.contrib.auth.models import User, AnonymousUser
from django.contrib.auth.decorators import login_required
from django.contrib.auth.views import password_reset_confirm
from django.contrib import messages
from django.core.context_processors import csrf
from django.core import mail
from django.core.urlresolvers import reverse, NoReverseMatch
from django.core.validators import validate_email, ValidationError
from django.db import IntegrityError, transaction
from django.http import (HttpResponse, HttpResponseBadRequest, HttpResponseForbidden,
HttpResponseServerError, Http404)
from django.shortcuts import redirect
from django.utils.encoding import force_bytes, force_text
from django.utils.translation import ungettext
from django.utils.http import base36_to_int, urlsafe_base64_encode
from django.utils.translation import ugettext as _, get_language
from django.views.decorators.csrf import csrf_exempt, ensure_csrf_cookie
from django.views.decorators.http import require_POST, require_GET
from django.db.models.signals import post_save
from django.dispatch import receiver
from django.template.response import TemplateResponse
from ratelimitbackend.exceptions import RateLimitException
from social.apps.django_app import utils as social_utils
from social.backends import oauth as social_oauth
from social.exceptions import AuthException, AuthAlreadyAssociated
from edxmako.shortcuts import render_to_response, render_to_string
from course_modes.models import CourseMode
from shoppingcart.api import order_history
from student.models import (
Registration, UserProfile,
PendingEmailChange, CourseEnrollment, CourseEnrollmentAttribute, unique_id_for_user,
CourseEnrollmentAllowed, UserStanding, LoginFailures, Subscriber,
create_comments_service_user, PasswordHistory, UserSignupSource,
DashboardConfiguration, LinkedInAddToProfileConfiguration, ManualEnrollmentAudit, ALLOWEDTOENROLL_TO_ENROLLED)
from student.forms import AccountCreationForm, PasswordResetFormNoActive, get_registration_extension_form
from lms.djangoapps.commerce.utils import EcommerceService # pylint: disable=import-error
from lms.djangoapps.commerce.constants import OrderStatus # pylint: disable=import-error
from lms.djangoapps.verify_student.models import SoftwareSecurePhotoVerification # pylint: disable=import-error
from certificates.models import CertificateStatuses, certificate_status_for_student
from certificates.api import ( # pylint: disable=import-error
get_certificate_url,
has_html_certificates_enabled,
)
from xmodule.modulestore.django import modulestore
from opaque_keys import InvalidKeyError
from opaque_keys.edx.keys import CourseKey
from opaque_keys.edx.locations import SlashSeparatedCourseKey
from opaque_keys.edx.locator import CourseLocator
from collections import namedtuple
from courseware.courses import get_courses, sort_by_announcement, sort_by_start_date # pylint: disable=import-error
from courseware.access import has_access
from django_comment_common.models import Role
from external_auth.models import ExternalAuthMap
import external_auth.views
from external_auth.login_and_register import (
login as external_auth_login,
register as external_auth_register
)
from bulk_email.models import Optout, CourseAuthorization
from lang_pref import LANGUAGE_KEY
import track.views
import dogstats_wrapper as dog_stats_api
from util.db import outer_atomic
from util.json_request import JsonResponse
from util.bad_request_rate_limiter import BadRequestRateLimiter
from util.milestones_helpers import (
get_pre_requisite_courses_not_completed,
)
from microsite_configuration import microsite
from util.password_policy_validators import (
validate_password_length, validate_password_complexity,
validate_password_dictionary
)
import third_party_auth
from third_party_auth import pipeline, provider
from student.helpers import (
check_verify_status_by_course,
auth_pipeline_urls, get_next_url_for_login_page,
DISABLE_UNENROLL_CERT_STATES,
)
from student.cookies import set_logged_in_cookies, delete_logged_in_cookies
from student.models import anonymous_id_for_user
from shoppingcart.models import DonationConfiguration, CourseRegistrationCode
from embargo import api as embargo_api
import analytics
from eventtracking import tracker
# Note that this lives in LMS, so this dependency should be refactored.
from notification_prefs.views import enable_notifications
# Note that this lives in openedx, so this dependency should be refactored.
from openedx.core.djangoapps.credit.email_utils import get_credit_provider_display_names, make_providers_strings
from openedx.core.djangoapps.user_api.preferences import api as preferences_api
from openedx.core.djangoapps.programs import utils as programs_utils
from openedx.core.djangoapps.programs.models import ProgramsApiConfig
log = logging.getLogger("edx.student")
AUDIT_LOG = logging.getLogger("audit")
ReverifyInfo = namedtuple('ReverifyInfo', 'course_id course_name course_number date status display') # pylint: disable=invalid-name
SETTING_CHANGE_INITIATED = 'edx.user.settings.change_initiated'
# Disable this warning because it doesn't make sense to completely refactor tests to appease Pylint
# pylint: disable=logging-format-interpolation
def csrf_token(context):
"""A csrf token that can be included in a form."""
token = context.get('csrf_token', '')
if token == 'NOTPROVIDED':
return ''
return (u'<div style="display:none"><input type="hidden"'
' name="csrfmiddlewaretoken" value="%s" /></div>' % (token))
# NOTE: This view is not linked to directly--it is called from
# branding/views.py:index(), which is cached for anonymous users.
# This means that it should always return the same thing for anon
# users. (in particular, no switching based on query params allowed)
def index(request, extra_context=None, user=AnonymousUser()):
"""
Render the edX main page.
extra_context is used to allow immediate display of certain modal windows, eg signup,
as used by external_auth.
"""
if extra_context is None:
extra_context = {}
courses = get_courses(user)
if microsite.get_value("ENABLE_COURSE_SORTING_BY_START_DATE",
settings.FEATURES["ENABLE_COURSE_SORTING_BY_START_DATE"]):
courses = sort_by_start_date(courses)
else:
courses = sort_by_announcement(courses)
context = {'courses': courses}
context['homepage_overlay_html'] = microsite.get_value('homepage_overlay_html')
# This appears to be an unused context parameter, at least for the master templates...
context['show_partners'] = microsite.get_value('show_partners', True)
# TO DISPLAY A YOUTUBE WELCOME VIDEO
# 1) Change False to True
context['show_homepage_promo_video'] = microsite.get_value('show_homepage_promo_video', False)
# 2) Add your video's YouTube ID (11 chars, eg "123456789xX"), or specify via microsite config
# Note: This value should be moved into a configuration setting and plumbed-through to the
# context via the microsite configuration workflow, versus living here
youtube_video_id = microsite.get_value('homepage_promo_video_youtube_id', "your-youtube-id")
context['homepage_promo_video_youtube_id'] = youtube_video_id
# allow for microsite override of the courses list
context['courses_list'] = microsite.get_template_path('courses_list.html')
# Insert additional context for use in the template
context.update(extra_context)
return render_to_response('index.html', context)
def process_survey_link(survey_link, user):
"""
If {UNIQUE_ID} appears in the link, replace it with a unique id for the user.
Currently, this is sha1(user.username). Otherwise, return survey_link.
"""
return survey_link.format(UNIQUE_ID=unique_id_for_user(user))
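# Illustration (hypothetical values): the survey link is an ordinary format
# string, so a link containing {UNIQUE_ID} gets the id substituted, while a
# link without the placeholder passes through unchanged:
#
#   >>> "https://example.com/survey?id={UNIQUE_ID}".format(UNIQUE_ID="abc123")
#   'https://example.com/survey?id=abc123'
#   >>> "https://example.com/survey".format(UNIQUE_ID="abc123")
#   'https://example.com/survey'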
def cert_info(user, course_overview, course_mode):
"""
Get the certificate info needed to render the dashboard section for the given
student and course.
Arguments:
user (User): A user.
course_overview (CourseOverview): A course.
course_mode (str): The enrollment mode (honor, verified, audit, etc.)
Returns:
dict: Empty dict if certificates are disabled or hidden, or a dictionary with keys:
'status': one of 'generating', 'ready', 'notpassing', 'processing', 'restricted'
'show_download_url': bool
'download_url': url, only present if show_download_url is True
'show_disabled_download_button': bool -- true if state is 'generating'
'show_survey_button': bool
'survey_url': url, only if show_survey_button is True
'grade': if status is not 'processing'
'can_unenroll': if status allows for unenrollment
"""
if not course_overview.may_certify():
return {}
return _cert_info(
user,
course_overview,
certificate_status_for_student(user, course_overview.id),
course_mode
)
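# Illustrative shape of the dict returned for a downloadable certificate
# (hypothetical values; the exact keys present depend on course configuration
# and on the branches taken in _cert_info below):
#
#   {'status': 'ready',
#    'show_download_url': True,
#    'download_url': 'https://example.com/certificates/abc.pdf',
#    'show_disabled_download_button': False,
#    'show_survey_button': False,
#    'grade': '0.92',
#    'can_unenroll': False}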
def reverification_info(statuses):
"""
Returns reverification-related information for *all* of user's enrollments whose
reverification status is in statuses.
Args:
statuses (list): a list of reverification statuses we want information for
example: ["must_reverify", "denied"]
Returns:
dictionary of lists: dictionary with one key per status, e.g.
dict["must_reverify"] = []
dict["must_reverify"] = [some information]
"""
reverifications = defaultdict(list)
# Sort the data by the reverification_end_date
for status in statuses:
if reverifications[status]:
reverifications[status].sort(key=lambda x: x.date)
return reverifications
def get_course_enrollments(user, org_to_include, orgs_to_exclude):
"""
Given a user, return a filtered set of his or her course enrollments.
Arguments:
user (User): the user in question.
org_to_include (str): for use in Microsites. If not None, ONLY courses
of this org will be returned.
orgs_to_exclude (list[str]): If org_to_include is not None, this
argument is ignored. Else, courses of this org will be excluded.
Returns:
generator[CourseEnrollment]: a sequence of enrollments to be displayed
on the user's dashboard.
"""
for enrollment in CourseEnrollment.enrollments_for_user(user):
# If the course is missing or broken, log an error and skip it.
course_overview = enrollment.course_overview
if not course_overview:
log.error(
"User %s enrolled in broken or non-existent course %s",
user.username,
enrollment.course_id
)
continue
# Skip subscription course
if unicode(course_overview.id) == unicode(settings.SUBSCRIPTION_COURSE_KEY):
continue
# If we are in a Microsite, then filter out anything that is not
# attributed (by ORG) to that Microsite.
if org_to_include and course_overview.location.org != org_to_include:
continue
# Conversely, if we are not in a Microsite, then filter out any enrollments
# with courses attributed (by ORG) to Microsites.
elif course_overview.location.org in orgs_to_exclude:
continue
# Else, include the enrollment.
else:
yield enrollment
def _cert_info(user, course_overview, cert_status, course_mode): # pylint: disable=unused-argument
"""
Implements the logic for cert_info -- split out for testing.
Arguments:
user (User): A user.
course_overview (CourseOverview): A course.
course_mode (str): The enrollment mode (honor, verified, audit, etc.)
"""
# simplify the status for the template using this lookup table
template_state = {
CertificateStatuses.generating: 'generating',
CertificateStatuses.downloadable: 'ready',
CertificateStatuses.notpassing: 'notpassing',
CertificateStatuses.restricted: 'restricted',
CertificateStatuses.auditing: 'auditing',
CertificateStatuses.audit_passing: 'auditing',
CertificateStatuses.audit_notpassing: 'auditing',
}
default_status = 'processing'
default_info = {
'status': default_status,
'show_disabled_download_button': False,
'show_download_url': False,
'show_survey_button': False,
'can_unenroll': True,
}
if cert_status is None:
return default_info
is_hidden_status = cert_status['status'] in ('unavailable', 'processing', 'generating', 'notpassing', 'auditing')
if course_overview.certificates_display_behavior == 'early_no_info' and is_hidden_status:
return {}
status = template_state.get(cert_status['status'], default_status)
status_dict = {
'status': status,
'show_download_url': status == 'ready',
'show_disabled_download_button': status == 'generating',
'mode': cert_status.get('mode', None),
'linked_in_url': None,
'can_unenroll': status not in DISABLE_UNENROLL_CERT_STATES,
}
if (status in ('generating', 'ready', 'notpassing', 'restricted', 'auditing') and
course_overview.end_of_course_survey_url is not None):
status_dict.update({
'show_survey_button': True,
'survey_url': process_survey_link(course_overview.end_of_course_survey_url, user)})
else:
status_dict['show_survey_button'] = False
if status == 'ready':
# showing the certificate web view button if certificate is ready state and feature flags are enabled.
if has_html_certificates_enabled(course_overview.id, course_overview):
if course_overview.has_any_active_web_certificate:
status_dict.update({
'show_cert_web_view': True,
'cert_web_view_url': get_certificate_url(course_id=course_overview.id, uuid=cert_status['uuid'])
})
else:
# don't show download certificate button if we don't have an active certificate for course
status_dict['show_download_url'] = False
elif 'download_url' not in cert_status:
log.warning(
u"User %s has a downloadable cert for %s, but no download url",
user.username,
course_overview.id
)
return default_info
else:
status_dict['download_url'] = cert_status['download_url']
# If enabled, show the LinkedIn "add to profile" button
# Clicking this button sends the user to LinkedIn where they
# can add the certificate information to their profile.
linkedin_config = LinkedInAddToProfileConfiguration.current()
# posting certificates to LinkedIn is not currently
# supported in microsites/White Labels
if linkedin_config.enabled and not microsite.is_request_in_microsite():
status_dict['linked_in_url'] = linkedin_config.add_to_profile_url(
course_overview.id,
course_overview.display_name,
cert_status.get('mode'),
cert_status['download_url']
)
if status in ('generating', 'ready', 'notpassing', 'restricted', 'auditing'):
if 'grade' not in cert_status:
# Note: as of 11/20/2012, we know there are students in this state-- cs169.1x,
# who need to be regraded (we weren't tracking 'notpassing' at first).
# We can add a log.warning here once we think it shouldn't happen.
return default_info
else:
status_dict['grade'] = cert_status['grade']
return status_dict
@ensure_csrf_cookie
def signin_user(request):
"""Deprecated. To be replaced by :class:`student_account.views.login_and_registration_form`."""
external_auth_response = external_auth_login(request)
if external_auth_response is not None:
return external_auth_response
# Determine the URL to redirect to following login:
redirect_to = get_next_url_for_login_page(request)
if request.user.is_authenticated():
return redirect(redirect_to)
third_party_auth_error = None
for msg in messages.get_messages(request):
if msg.extra_tags.split()[0] == "social-auth":
# msg may or may not be translated. Try translating [again] in case we are able to:
third_party_auth_error = _(unicode(msg)) # pylint: disable=translation-of-non-string
break
context = {
'login_redirect_url': redirect_to, # This gets added to the query string of the "Sign In" button in the header
# Bool injected into JS to submit form if we're inside a running third-
# party auth pipeline; distinct from the actual instance of the running
# pipeline, if any.
'pipeline_running': 'true' if pipeline.running(request) else 'false',
'pipeline_url': auth_pipeline_urls(pipeline.AUTH_ENTRY_LOGIN, redirect_url=redirect_to),
'platform_name': microsite.get_value(
'platform_name',
settings.PLATFORM_NAME
),
'third_party_auth_error': third_party_auth_error
}
return render_to_response('login.html', context)
@ensure_csrf_cookie
def register_user(request, extra_context=None):
"""Deprecated. To be replaced by :class:`student_account.views.login_and_registration_form`."""
# Determine the URL to redirect to following login:
redirect_to = get_next_url_for_login_page(request)
if request.user.is_authenticated():
return redirect(redirect_to)
external_auth_response = external_auth_register(request)
if external_auth_response is not None:
return external_auth_response
context = {
'login_redirect_url': redirect_to, # This gets added to the query string of the "Sign In" button in the header
'email': '',
'name': '',
'running_pipeline': None,
'pipeline_urls': auth_pipeline_urls(pipeline.AUTH_ENTRY_REGISTER, redirect_url=redirect_to),
'platform_name': microsite.get_value(
'platform_name',
settings.PLATFORM_NAME
),
'selected_provider': '',
'username': '',
}
if extra_context is not None:
context.update(extra_context)
if context.get("extauth_domain", '').startswith(external_auth.views.SHIBBOLETH_DOMAIN_PREFIX):
return render_to_response('register-shib.html', context)
# If third-party auth is enabled, prepopulate the form with data from the
# selected provider.
if third_party_auth.is_enabled() and pipeline.running(request):
running_pipeline = pipeline.get(request)
current_provider = provider.Registry.get_from_pipeline(running_pipeline)
if current_provider is not None:
overrides = current_provider.get_register_form_data(running_pipeline.get('kwargs'))
overrides['running_pipeline'] = running_pipeline
overrides['selected_provider'] = current_provider.name
context.update(overrides)
return render_to_response('register.html', context)
def complete_course_mode_info(course_id, enrollment, modes=None):
"""
Compute additional information from the given course modes
and the user's current enrollment.
Returns the following information:
- whether to show the course upsell information
- the number of days until the user can no longer upsell
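Example return value (illustrative only; the SKU values below are hypothetical):
{
'show_upsell': True,
'days_for_upsell': 14,
'verified_sku': 'SKU-12345',
'verified_bulk_sku': 'SKU-12345-BULK',
}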
"""
if modes is None:
modes = CourseMode.modes_for_course_dict(course_id)
mode_info = {'show_upsell': False, 'days_for_upsell': None}
# Show the upsell option only if a verified mode is available and the user's
# current enrollment mode can be upgraded to verified.
if CourseMode.VERIFIED in modes and enrollment.mode in CourseMode.UPSELL_TO_VERIFIED_MODES:
mode_info['show_upsell'] = True
mode_info['verified_sku'] = modes['verified'].sku
mode_info['verified_bulk_sku'] = modes['verified'].bulk_sku
# if there is an expiration date, find out how long from now it is
if modes['verified'].expiration_datetime:
today = datetime.datetime.now(UTC).date()
mode_info['days_for_upsell'] = (modes['verified'].expiration_datetime.date() - today).days
return mode_info
def is_course_blocked(request, redeemed_registration_codes, course_key):
"""Checking either registration is blocked or not ."""
blocked = False
for redeemed_registration in redeemed_registration_codes:
# Registration codes may be generated via the bulk purchase scenario;
# we only need to check invoice-generated registration codes
# and whether their invoice is still valid.
if redeemed_registration.invoice_item:
if not redeemed_registration.invoice_item.invoice.is_valid:
blocked = True
# disabling email notifications for unpaid registration courses
Optout.objects.get_or_create(user=request.user, course_id=course_key)
log.info(
u"User %s (%s) opted out of receiving emails from course %s",
request.user.username,
request.user.email,
course_key,
)
track.views.server_track(
request,
"change-email1-settings",
{"receive_emails": "no", "course": course_key.to_deprecated_string()},
page='dashboard',
)
break
return blocked
@login_required
@ensure_csrf_cookie
def dashboard(request):
"""
Provides the LMS dashboard view
TODO: This is lms specific and does not belong in common code.
Arguments:
request: The request object.
Returns:
The dashboard response.
"""
user = request.user
platform_name = microsite.get_value("platform_name", settings.PLATFORM_NAME)
enable_verified_certificates = microsite.get_value(
'ENABLE_VERIFIED_CERTIFICATES',
settings.FEATURES.get('ENABLE_VERIFIED_CERTIFICATES')
)
display_course_modes_on_dashboard = microsite.get_value(
'DISPLAY_COURSE_MODES_ON_DASHBOARD',
settings.FEATURES.get('DISPLAY_COURSE_MODES_ON_DASHBOARD', True)
)
# we want to filter and only show enrollments for courses within
# the 'ORG' defined in configuration.
course_org_filter = microsite.get_value('course_org_filter')
# Let's filter out any courses in an "org" that has been declared to be
# in a Microsite
org_filter_out_set = microsite.get_all_orgs()
# remove our current Microsite from the "filter out" list, if applicable
if course_org_filter:
org_filter_out_set.remove(course_org_filter)
# Build our (course, enrollment) list for the user, but ignore any courses that no
# longer exist (because the course IDs have changed). Still, we don't delete those
# enrollments, because it could have been a data push snafu.
course_enrollments = list(get_course_enrollments(user, course_org_filter, org_filter_out_set))
# sort the enrollment pairs by the enrollment date
course_enrollments.sort(key=lambda x: x.created, reverse=True)
# Retrieve the course modes for each course
enrolled_course_ids = [enrollment.course_id for enrollment in course_enrollments]
__, unexpired_course_modes = CourseMode.all_and_unexpired_modes_for_courses(enrolled_course_ids)
course_modes_by_course = {
course_id: {
mode.slug: mode
for mode in modes
}
for course_id, modes in unexpired_course_modes.iteritems()
}
# Check to see if the student has recently enrolled in a course.
# If so, display a notification message confirming the enrollment.
enrollment_message = _create_recent_enrollment_message(
course_enrollments, course_modes_by_course
)
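# Course ids for which the user has opted out of course email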
course_optouts = Optout.objects.filter(user=user).values_list('course_id', flat=True)
message = ""
if not user.is_active:
message = render_to_string(
'registration/activate_account_notice.html',
{'email': user.email, 'platform_name': platform_name}
)
# Global staff can see what courses errored on their dashboard
staff_access = False
errored_courses = {}
if has_access(user, 'staff', 'global'):
# Show any courses that errored on load
staff_access = True
errored_courses = modulestore().get_errored_courses()
show_courseware_links_for = frozenset(
enrollment.course_id for enrollment in course_enrollments
if has_access(request.user, 'load', enrollment.course_overview)
and has_access(request.user, 'view_courseware_with_prerequisites', enrollment.course_overview)
)
# Find programs associated with courses being displayed. This information
# is passed in the template context to allow rendering of program-related
# information on the dashboard.
meter = programs_utils.ProgramProgressMeter(user, enrollments=course_enrollments)
programs_by_run = meter.engaged_programs(by_run=True)
# Construct a dictionary of course mode information
# used to render the course list. We re-use the course modes dict
# we loaded earlier to avoid hitting the database.
course_mode_info = {
enrollment.course_id: complete_course_mode_info(
enrollment.course_id, enrollment,
modes=course_modes_by_course[enrollment.course_id]
)
for enrollment in course_enrollments
}
# Determine the per-course verification status
# This is a dictionary in which the keys are course locators
# and the values are one of:
#
# VERIFY_STATUS_NEED_TO_VERIFY
# VERIFY_STATUS_SUBMITTED
# VERIFY_STATUS_APPROVED
# VERIFY_STATUS_MISSED_DEADLINE
#
# Each of which correspond to a particular message to display
# next to the course on the dashboard.
#
# If a course is not included in this dictionary,
# there is no verification messaging to display.
verify_status_by_course = check_verify_status_by_course(user, course_enrollments)
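# Per-course certificate status info (via cert_info), used to render certificate messaging on the dashboard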
cert_statuses = {
enrollment.course_id: cert_info(request.user, enrollment.course_overview, enrollment.mode)
for enrollment in course_enrollments
}
# only show email settings for Mongo course and when bulk email is turned on
show_email_settings_for = frozenset(
enrollment.course_id for enrollment in course_enrollments if (
settings.FEATURES['ENABLE_INSTRUCTOR_EMAIL'] and
CourseAuthorization.instructor_email_enabled(enrollment.course_id)
)
)
# Verification Attempts
# Used to generate the "you must reverify for course x" banner
verification_status, verification_msg = SoftwareSecurePhotoVerification.user_status(user)
# Gets data for midcourse reverifications, if any are necessary or have failed
statuses = ["approved", "denied", "pending", "must_reverify"]
reverifications = reverification_info(statuses)
# show_refund_option_for = frozenset(
# enrollment.course_id for enrollment in course_enrollments
# if enrollment.refundable()
# )
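# The refund option is currently disabled; the per-enrollment check above is kept commented out for reference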
show_refund_option_for = frozenset()
block_courses = frozenset(
enrollment.course_id for enrollment in course_enrollments
if is_course_blocked(
request,
CourseRegistrationCode.objects.filter(
course_id=enrollment.course_id,
registrationcoderedemption__redeemed_by=request.user
),
enrollment.course_id
)
)
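# Courses in which the user holds a paid enrollment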
enrolled_courses_either_paid = frozenset(
enrollment.course_id for enrollment in course_enrollments
if enrollment.is_paid_course()
)
# If there are *any* denied reverifications that have not been toggled off,
# we'll display the banner
denied_banner = any(item.display for item in reverifications["denied"])
# Populate the Order History for the side-bar.
order_history_list = order_history(user, course_org_filter=course_org_filter, org_filter_out_set=org_filter_out_set)
# get list of courses having pre-requisites yet to be completed
courses_having_prerequisites = frozenset(
enrollment.course_id for enrollment in course_enrollments
if enrollment.course_overview.pre_requisite_courses
)
courses_requirements_not_met = get_pre_requisite_courses_not_completed(user, courses_having_prerequisites)
if 'notlive' in request.GET:
redirect_message = _("The course you are looking for does not start until {date}.").format(
date=request.GET['notlive']
)
elif 'course_closed' in request.GET:
redirect_message = _("The course you are looking for is closed for enrollment as of {date}.").format(
date=request.GET['course_closed']
)
else:
redirect_message = ''
context = {
'enrollment_message': enrollment_message,
'redirect_message': redirect_message,
'course_enrollments': course_enrollments,
'course_optouts': course_optouts,
'message': message,
'staff_access': staff_access,
'errored_courses': errored_courses,
'show_courseware_links_for': show_courseware_links_for,
'all_course_modes': course_mode_info,
'cert_statuses': cert_statuses,
'credit_statuses': _credit_statuses(user, course_enrollments),
'show_email_settings_for': show_email_settings_for,
'reverifications': reverifications,
'verification_status': verification_status,
'verification_status_by_course': verify_status_by_course,
'verification_msg': verification_msg,
'show_refund_option_for': show_refund_option_for,
'block_courses': block_courses,
'denied_banner': denied_banner,
'billing_email': settings.PAYMENT_SUPPORT_EMAIL,
'user': user,
'logout_url': reverse(logout_user),
'platform_name': platform_name,
'enrolled_courses_either_paid': enrolled_courses_either_paid,
'provider_states': [],
'order_history_list': order_history_list,
'courses_requirements_not_met': courses_requirements_not_met,
'nav_hidden': True,
'programs_by_run': programs_by_run,
'show_program_listing': ProgramsApiConfig.current().show_program_listing,
'disable_courseware_js': True,
'display_course_modes_on_dashboard': enable_verified_certificates and display_course_modes_on_dashboard,
}
ecommerce_service = EcommerceService()
if ecommerce_service.is_enabled(request.user):
context.update({
'use_ecommerce_payment_flow': True,
'ecommerce_payment_page': ecommerce_service.payment_page_url(),
})
return render_to_response('dashboard.html', context)
def subscription_page(request):
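"""
Render the subscription page.
If the user's subscription is no longer active but they are still enrolled
in the subscription course, the enrollment is deactivated before rendering.
"""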
subscription_course_key = CourseKey.from_string(unicode(settings.SUBSCRIPTION_COURSE_KEY))
is_active_subscription = request.user.subscriber.is_active_subscription
is_subscription_course_enrolled = CourseEnrollment.is_enrolled(request.user, subscription_course_key)
# Deactivate subscription enrollment if subscription was cancelled by user
if not is_active_subscription and is_subscription_course_enrolled:
course_enrollment = CourseEnrollment.get_enrollment(request.user, subscription_course_key)
course_enrollment.deactivate()
is_subscription_course_enrolled = False
context = {
'is_active_subscription': is_active_subscription,
'subscription_course_key': settings.SUBSCRIPTION_COURSE_KEY,
'is_subscription_course_enrolled': is_subscription_course_enrolled
}
return render_to_response('subscription.html', context)
def _create_recent_enrollment_message(course_enrollments, course_modes): # pylint: disable=invalid-name
"""
Builds a recent course enrollment message.
Constructs a new message template based on any recent course enrollments
for the student.
Args:
course_enrollments (list[CourseEnrollment]): a list of course enrollments.
course_modes (dict): Mapping of course IDs to course mode dictionaries.
Returns:
A string representing the HTML message output from the message template.
None if there are no recently enrolled courses.
"""
recently_enrolled_courses = _get_recently_enrolled_courses(course_enrollments)
if recently_enrolled_courses:
enroll_messages = [
{
"course_id": enrollment.course_overview.id,
"course_name": enrollment.course_overview.display_name,
"allow_donation": _allow_donation(course_modes, enrollment.course_overview.id, enrollment)
}
for enrollment in recently_enrolled_courses
]
platform_name = microsite.get_value('platform_name', settings.PLATFORM_NAME)
return render_to_string(
'enrollment/course_enrollment_message.html',
{'course_enrollment_messages': enroll_messages, 'platform_name': platform_name}
)
def _get_recently_enrolled_courses(course_enrollments):
"""
Given a list of enrollments, filter out all but recent enrollments.
Args:
course_enrollments (list[CourseEnrollment]): A list of course enrollments.
Returns:
list[CourseEnrollment]: A list of recent course enrollments.
"""
seconds = DashboardConfiguration.current().recent_enrollment_time_delta
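# Enrollments created after this cutoff are considered 'recent'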
time_delta = (datetime.datetime.now(UTC) - datetime.timedelta(seconds=seconds))
return [
enrollment for enrollment in course_enrollments
# If the enrollment has no created date, we are explicitly excluding the course
# from the list of recent enrollments.
if enrollment.is_active and enrollment.created > time_delta
]
def _allow_donation(course_modes, course_id, enrollment):
"""Determines if the dashboard will request donations for the given course.
Check if donations are configured for the platform, and if the current course is accepting donations.
Args:
course_modes (dict): Mapping of course IDs to course mode dictionaries.
course_id (str): The unique identifier for the course.
enrollment (CourseEnrollment): The user's enrollment in the course.
Returns:
True if the course is allowing donations.
"""
donations_enabled = DonationConfiguration.current().enabled
return (
donations_enabled and
enrollment.mode in course_modes[course_id] and
course_modes[course_id][enrollment.mode].min_price == 0
)
def _update_email_opt_in(request, org):
"""Helper function used to hit the profile API if email opt-in is enabled."""
email_opt_in = request.POST.get('email_opt_in')
if email_opt_in is not None:
email_opt_in_boolean = email_opt_in == 'true'
preferences_api.update_email_opt_in(request.user, org, email_opt_in_boolean)
def _credit_statuses(user, course_enrollments):
"""
Retrieve the status for credit courses.
A credit course is a course for which a user can purchase
college credit. The current flow is:
1. User becomes eligible for credit (submits verifications, passes the course, etc.)
2. User purchases credit from a particular credit provider.
3. User requests credit from the provider, usually creating an account on the provider's site.
4. The credit provider notifies us whether the user's request for credit has been accepted or rejected.
The dashboard is responsible for communicating the user's state in this flow.
Arguments:
user (User): The currently logged-in user.
course_enrollments (list[CourseEnrollment]): List of enrollments for the
user.
Returns: dict
The returned dictionary has keys that are `CourseKey`s and values that
are dictionaries with:
* eligible (bool): True if the user is eligible for credit in this course.
* deadline (datetime): The deadline for purchasing and requesting credit for this course.
* purchased (bool): Whether the user has purchased credit for this course.
* provider_name (string): The display name of the credit provider.
* provider_status_url (string): A URL the user can visit to check on their credit request status.
* request_status (string): Either "pending", "approved", or "rejected"
* error (bool): If true, an unexpected error occurred when retrieving the credit status,
so the user should contact the support team.
Example:
>>> _credit_statuses(user, course_enrollments)
{
CourseKey.from_string("edX/DemoX/Demo_Course"): {
"course_key": "edX/DemoX/Demo_Course",
"eligible": True,
"deadline": 2015-11-23 00:00:00 UTC,
"purchased": True,
"provider_name": "Hogwarts",
"provider_status_url": "http://example.com/status",
"request_status": "pending",
"error": False
}
}
"""
from openedx.core.djangoapps.credit import api as credit_api
# Feature flag off
if not settings.FEATURES.get("ENABLE_CREDIT_ELIGIBILITY"):
return {}
request_status_by_course = {
request["course_key"]: request["status"]
for request in credit_api.get_credit_requests_for_user(user.username)
}
credit_enrollments = {
enrollment.course_id: enrollment
for enrollment in course_enrollments
if enrollment.mode == "credit"
}
# When a user purchases credit in a course, the user's enrollment
# mode is set to "credit" and an enrollment attribute is set
# with the ID of the credit provider. We retrieve *all* such attributes
# here to minimize the number of database queries.
purchased_credit_providers = {
attribute.enrollment.course_id: attribute.value
for attribute in CourseEnrollmentAttribute.objects.filter(
namespace="credit",
name="provider_id",
enrollment__in=credit_enrollments.values()
).select_related("enrollment")
}
provider_info_by_id = {
provider["id"]: provider
for provider in credit_api.get_credit_providers()
}
statuses = {}
for eligibility in credit_api.get_eligibilities_for_user(user.username):
course_key = CourseKey.from_string(unicode(eligibility["course_key"]))
providers_names = get_credit_provider_display_names(course_key)
status = {
"course_key": unicode(course_key),
"eligible": True,
"deadline": eligibility["deadline"],
"purchased": course_key in credit_enrollments,
"provider_name": make_providers_strings(providers_names),
"provider_status_url": None,
"provider_id": None,
"request_status": request_status_by_course.get(course_key),
"error": False,
}
# If the user has purchased credit, then include information about the credit
# provider from which the user purchased credit.
# We retrieve the provider's ID from an "enrollment attribute" set on the user's
# enrollment when the user's order for credit is fulfilled by the E-Commerce service.
if status["purchased"]:
provider_id = purchased_credit_providers.get(course_key)
if provider_id is None:
status["error"] = True
log.error(
u"Could not find credit provider associated with credit enrollment "
u"for user %s in course %s. The user will not be able to see his or her "
u"credit request status on the student dashboard. This attribute should "
u"have been set when the user purchased credit in the course.",
user.id, course_key
)
else:
provider_info = provider_info_by_id.get(provider_id, {})
status["provider_name"] = provider_info.get("display_name")
status["provider_status_url"] = provider_info.get("status_url")
status["provider_id"] = provider_id
statuses[course_key] = status
return statuses
@transaction.non_atomic_requests
@require_POST
@outer_atomic(read_committed=True)
def change_enrollment(request, check_access=True):
"""
Modify the enrollment status for the logged-in user.
The request parameter must be a POST request (other methods return 405)
that specifies course_id and enrollment_action parameters. If course_id or
enrollment_action is not specified, if course_id is not valid, if
enrollment_action is something other than "enroll" or "unenroll", if
enrollment_action is "enroll" and enrollment is closed for the course, or
if enrollment_action is "unenroll" and the user is not enrolled in the
course, a 400 error will be returned. If the user is not logged in, 403
will be returned; it is important that only this case return 403 so the
front end can redirect the user to a registration or login page when this
happens. This function should only be called from an AJAX request, so
the error messages in the responses should never actually be user-visible.
Args:
request (`Request`): The Django request object
Keyword Args:
check_access (boolean): If True, we check that an accessible course actually
exists for the given course_key before we enroll the student.
The default is True; disable it only for legacy code or
code with non-standard flows (e.g. beta tester invitations), but
for any standard enrollment flow you want this left as True.
Returns:
Response
"""
# Get the user
user = request.user
# Ensure the user is authenticated
if not user.is_authenticated():
return HttpResponseForbidden()
# Ensure we received a course_id
action = request.POST.get("enrollment_action")
if 'course_id' not in request.POST:
return HttpResponseBadRequest(_("Course id not specified"))
try:
course_id = SlashSeparatedCourseKey.from_deprecated_string(request.POST.get("course_id"))
except InvalidKeyError:
log.warning(
u"User %s tried to %s with invalid course id: %s",
user.username,
action,
request.POST.get("course_id"),
)
return HttpResponseBadRequest(_("Invalid course id"))
if action == "enroll":
# Make sure the course exists
# We don't do this check on unenroll, otherwise a bad course id could never be unenrolled from
if not modulestore().has_course(course_id):
log.warning(
u"User %s tried to enroll in non-existent course %s",
user.username,
course_id
)
return HttpResponseBadRequest(_("Course id is invalid"))
# Record the user's email opt-in preference
if settings.FEATURES.get('ENABLE_MKTG_EMAIL_OPT_IN'):
_update_email_opt_in(request, course_id.org)
available_modes = CourseMode.modes_for_course_dict(course_id)
# Check whether the user is blocked from enrolling in this course
# This can occur if the user's IP is on a global blacklist
# or if the user is enrolling in a country in which the course
# is not available.
redirect_url = embargo_api.redirect_if_blocked(
course_id, user=user, ip_address=get_ip(request),
url=request.path
)
if redirect_url:
return HttpResponse(redirect_url)
if user.subscriber.is_active_subscription:
CourseEnrollment.enroll(user, course_id, check_access=False, mode='honor')
return HttpResponse()
# Check that auto enrollment is allowed for this course
# (= the course is NOT behind a paywall)
if CourseMode.can_auto_enroll(course_id):
# Enroll the user using the default mode (audit)
# We're assuming that users of the course enrollment table
# will NOT try to look up the course enrollment model
# by its slug. If they do, it's possible (based on the state of the database)
# for no such model to exist, even though we've set the enrollment type
# to "audit".
try:
enroll_mode = CourseMode.auto_enroll_mode(course_id, available_modes)
if enroll_mode:
CourseEnrollment.enroll(user, course_id, check_access=check_access, mode=enroll_mode)
except Exception: # pylint: disable=broad-except
return HttpResponseBadRequest(_("Could not enroll"))
# If we have more than one course mode or professional ed is enabled,
# then send the user to the choose your track page.
# (In the case of no-id-professional/professional ed, this will redirect to a page that
# funnels users directly into the verification / payment flow)
if CourseMode.has_verified_mode(available_modes) or CourseMode.has_professional_mode(available_modes):
return HttpResponse(
reverse("course_modes_choose", kwargs={'course_id': unicode(course_id)})
)
# Otherwise, there is only one mode available (the default)
return HttpResponse()
elif action == "unenroll":
enrollment = CourseEnrollment.get_enrollment(user, course_id)
if not enrollment:
return HttpResponseBadRequest(_("You are not enrolled in this course"))
certificate_info = cert_info(user, enrollment.course_overview, enrollment.mode)
if certificate_info.get('status') in DISABLE_UNENROLL_CERT_STATES:
return HttpResponseBadRequest(_("Your certificate prevents you from unenrolling from this course"))
CourseEnrollment.unenroll(user, course_id, skip_refund=True)
return HttpResponse()
else:
return HttpResponseBadRequest(_("Enrollment action is invalid"))
@transaction.non_atomic_requests
@require_POST
@outer_atomic(read_committed=True)
def update_subscription(request):
"""
Modify the subscription_until date.
Args:
request (`Request`): The Django request object
Returns:
Response
"""
# Get the user
user = request.user
# Ensure the user is authenticated
if not user.is_authenticated():
return HttpResponseForbidden()
order_date = request.POST.get("order_date")
order_status = request.POST.get("order_status")
if order_status == OrderStatus.COMPLETE:
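# order_date is expected in ISO-8601 UTC form, e.g. "2016-01-31T12:00:00Z" (matching "%Y-%m-%dT%H:%M:%SZ")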
order_datetime = datetime.datetime.strptime(order_date, "%Y-%m-%dT%H:%M:%SZ")
subscription_until = order_datetime.replace(tzinfo=UTC) + datetime.timedelta(days=settings.SUBSCRIPTOIN_DAYS)
try:
user.subscriber.created = order_datetime.replace(tzinfo=UTC)
user.subscriber.subscription_until = subscription_until
user.subscriber.save()
except Exception as e:
log.error(u"{!r}".format(e))
return HttpResponseBadRequest(_("Could not update subscription data"))
return HttpResponse()
# Need different levels of logging
@ensure_csrf_cookie
def login_user(request, error=""): # pylint: disable=too-many-statements,unused-argument
"""AJAX request to log in the user."""
backend_name = None
email = None
password = None
redirect_url = None
response = None
running_pipeline = None
third_party_auth_requested = third_party_auth.is_enabled() and pipeline.running(request)
third_party_auth_successful = False
trumped_by_first_party_auth = bool(request.POST.get('email')) or bool(request.POST.get('password'))
user = None
platform_name = microsite.get_value("platform_name", settings.PLATFORM_NAME)
if third_party_auth_requested and not trumped_by_first_party_auth:
# The user has already authenticated via third-party auth and has not
# asked to do first party auth by supplying a username or password. We
# now want to put them through the same logging and cookie calculation
# logic as with first-party auth.
running_pipeline = pipeline.get(request)
username = running_pipeline['kwargs'].get('username')
backend_name = running_pipeline['backend']
third_party_uid = running_pipeline['kwargs']['uid']
requested_provider = provider.Registry.get_from_pipeline(running_pipeline)
try:
user = pipeline.get_authenticated_user(requested_provider, username, third_party_uid)
third_party_auth_successful = True
except User.DoesNotExist:
AUDIT_LOG.warning(
u"Login failed - user with username {username} has no social auth "
"with backend_name {backend_name}".format(
username=username, backend_name=backend_name)
)
message = _(
"You've successfully logged into your {provider_name} account, "
"but this account isn't linked with an {platform_name} account yet."
).format(
platform_name=platform_name,
provider_name=requested_provider.name,
)
message += "<br/><br/>"
message += _(
"Use your {platform_name} username and password to log into {platform_name} below, "
"and then link your {platform_name} account with {provider_name} from your dashboard."
).format(
platform_name=platform_name,
provider_name=requested_provider.name,
)
message += "<br/><br/>"
message += _(
"If you don't have an {platform_name} account yet, "
"click <strong>Register</strong> at the top of the page."
).format(
platform_name=platform_name
)
return HttpResponse(message, content_type="text/plain", status=403)
else:
if 'email' not in request.POST or 'password' not in request.POST:
return JsonResponse({
"success": False,
# TODO: User error message
"value": _('There was an error receiving your login information. Please email us.'),
}) # TODO: this should be status code 400
email = request.POST['email']
password = request.POST['password']
try:
user = User.objects.get(email=email)
except User.DoesNotExist:
if settings.FEATURES['SQUELCH_PII_IN_LOGS']:
AUDIT_LOG.warning(u"Login failed - Unknown user email")
else:
AUDIT_LOG.warning(u"Login failed - Unknown user email: {0}".format(email))
# check if the user has a linked shibboleth account, if so, redirect the user to shib-login
# This behavior is pretty much like what gmail does for shibboleth. Try entering some @stanford.edu
# address into the Gmail login.
if settings.FEATURES.get('AUTH_USE_SHIB') and user:
try:
eamap = ExternalAuthMap.objects.get(user=user)
if eamap.external_domain.startswith(external_auth.views.SHIBBOLETH_DOMAIN_PREFIX):
return JsonResponse({
"success": False,
"redirect": reverse('shib-login'),
}) # TODO: this should be status code 301 # pylint: disable=fixme
except ExternalAuthMap.DoesNotExist:
# This is actually the common case, logging in user without external linked login
AUDIT_LOG.info(u"User %s w/o external auth attempting login", user)
# see if account has been locked out due to excessive login failures
user_found_by_email_lookup = user
if user_found_by_email_lookup and LoginFailures.is_feature_enabled():
if LoginFailures.is_user_locked_out(user_found_by_email_lookup):
lockout_message = _('This account has been temporarily locked due '
'to excessive login failures. Try again later.')
return JsonResponse({
"success": False,
"value": lockout_message,
}) # TODO: this should be status code 429 # pylint: disable=fixme
# see if the user must reset his/her password due to any policy settings
if user_found_by_email_lookup and PasswordHistory.should_user_reset_password_now(user_found_by_email_lookup):
return JsonResponse({
"success": False,
"value": _('Your password has expired due to password policy on this account. You must '
'reset your password before you can log in again. Please click the '
'"Forgot Password" link on this page to reset your password before logging in again.'),
}) # TODO: this should be status code 403 # pylint: disable=fixme
# if the user doesn't exist, we want to set the username to an invalid
# username so that authentication is guaranteed to fail and we can take
# advantage of the ratelimited backend
username = user.username if user else ""
if not third_party_auth_successful:
try:
user = authenticate(username=username, password=password, request=request)
# this occurs when there are too many attempts from the same IP address
except RateLimitException:
return JsonResponse({
"success": False,
"value": _('Too many failed login attempts. Try again later.'),
}) # TODO: this should be status code 429 # pylint: disable=fixme
if user is None:
# tick the failed login counters if the user exists in the database
if user_found_by_email_lookup and LoginFailures.is_feature_enabled():
LoginFailures.increment_lockout_counter(user_found_by_email_lookup)
# if we didn't find this username earlier, the account for this email
# doesn't exist, and doesn't have a corresponding password
if username != "":
if settings.FEATURES['SQUELCH_PII_IN_LOGS']:
loggable_id = user_found_by_email_lookup.id if user_found_by_email_lookup else "<unknown>"
AUDIT_LOG.warning(u"Login failed - password for user.id: {0} is invalid".format(loggable_id))
else:
AUDIT_LOG.warning(u"Login failed - password for {0} is invalid".format(email))
return JsonResponse({
"success": False,
"value": _('Email or password is incorrect.'),
}) # TODO: this should be status code 400 # pylint: disable=fixme
# successful login, clear failed login attempts counters, if applicable
if LoginFailures.is_feature_enabled():
LoginFailures.clear_lockout_counter(user)
# Track the user's sign in
if hasattr(settings, 'LMS_SEGMENT_KEY') and settings.LMS_SEGMENT_KEY:
tracking_context = tracker.get_tracker().resolve_context()
analytics.identify(
user.id,
{
'email': email,
'username': username
},
{
# Disable MailChimp because we don't want to update the user's email
# and username in MailChimp on every page load. We only need to capture
# this data on registration/activation.
'MailChimp': False
}
)
analytics.track(
user.id,
"edx.bi.user.account.authenticated",
{
'category': "conversion",
'label': request.POST.get('course_id'),
'provider': None
},
context={
'ip': tracking_context.get('ip'),
'Google Analytics': {
'clientId': tracking_context.get('client_id')
}
}
)
if user is not None and user.is_active:
try:
# We do not log here, because we have a handler registered
# to perform logging on successful logins.
login(request, user)
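# 'remember me' keeps the session alive for one week (604800 seconds); otherwise it expires when the browser closes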
if request.POST.get('remember') == 'true':
request.session.set_expiry(604800)
log.debug("Setting user session to never expire")
else:
request.session.set_expiry(0)
except Exception as exc: # pylint: disable=broad-except
AUDIT_LOG.critical("Login failed - Could not create session. Is memcached running?")
log.critical("Login failed - Could not create session. Is memcached running?")
log.exception(exc)
raise
redirect_url = None # The AJAX method calling should know the default destination upon success
if third_party_auth_successful:
redirect_url = pipeline.get_complete_url(backend_name)
response = JsonResponse({
"success": True,
"redirect_url": redirect_url,
})
# Ensure that the external marketing site can
# detect that the user is logged in.
return set_logged_in_cookies(request, response, user)
if settings.FEATURES['SQUELCH_PII_IN_LOGS']:
AUDIT_LOG.warning(u"Login failed - Account not active for user.id: {0}, resending activation".format(user.id))
else:
AUDIT_LOG.warning(u"Login failed - Account not active for user {0}, resending activation".format(username))
reactivation_email_for_user(user)
not_activated_msg = _("This account has not been activated. We have sent another activation "
"message. Please check your email for the activation instructions.")
return JsonResponse({
"success": False,
"value": not_activated_msg,
}) # TODO: this should be status code 400 # pylint: disable=fixme
@csrf_exempt
@require_POST
@social_utils.strategy("social:complete")
def login_oauth_token(request, backend):
"""
Authenticate the client using an OAuth access token by using the token to
retrieve information from a third party and matching that information to an
existing user.
"""
warnings.warn("Please use AccessTokenExchangeView instead.", DeprecationWarning)
backend = request.backend
if isinstance(backend, social_oauth.BaseOAuth1) or isinstance(backend, social_oauth.BaseOAuth2):
if "access_token" in request.POST:
# Tell third party auth pipeline that this is an API call
request.session[pipeline.AUTH_ENTRY_KEY] = pipeline.AUTH_ENTRY_LOGIN_API
user = None
try:
user = backend.do_auth(request.POST["access_token"])
except (HTTPError, AuthException):
pass
# do_auth can return a non-User object if it fails
if user and isinstance(user, User):
login(request, user)
return JsonResponse(status=204)
else:
# Ensure user does not re-enter the pipeline
request.social_strategy.clean_partial_pipeline()
return JsonResponse({"error": "invalid_token"}, status=401)
else:
return JsonResponse({"error": "invalid_request"}, status=400)
raise Http404
@ensure_csrf_cookie
def logout_user(request):
"""
HTTP request to log out the user. Redirects to marketing page.
Deletes both the CSRF and sessionid cookies so the marketing
site can determine the logged in state of the user
"""
# We do not log here, because we have a handler registered
# to perform logging on successful logouts.
request.is_from_logout = True
logout(request)
if settings.FEATURES.get('AUTH_USE_CAS'):
target = reverse('cas-logout')
else:
target = settings.EDEVATE_AFTER_LOGOUT_URL
response = redirect(target)
delete_logged_in_cookies(response)
return response
@require_GET
@login_required
@ensure_csrf_cookie
def manage_user_standing(request):
"""
Renders the view used to manage user standing. Also displays a table
of user accounts that have been disabled and who disabled them.
"""
if not request.user.is_staff:
raise Http404
all_disabled_accounts = UserStanding.objects.filter(
account_status=UserStanding.ACCOUNT_DISABLED
)
all_disabled_users = [standing.user for standing in all_disabled_accounts]
headers = ['username', 'account_changed_by']
rows = []
for user in all_disabled_users:
row = [user.username, user.standing.changed_by]
rows.append(row)
context = {'headers': headers, 'rows': rows}
return render_to_response("manage_user_standing.html", context)
@require_POST
@login_required
@ensure_csrf_cookie
def disable_account_ajax(request):
"""
Ajax call to change user standing. Endpoint of the form
in manage_user_standing.html
"""
if not request.user.is_staff:
raise Http404
username = request.POST.get('username')
context = {}
if username is None or username.strip() == '':
context['message'] = _('Please enter a username')
return JsonResponse(context, status=400)
account_action = request.POST.get('account_action')
if account_action is None:
context['message'] = _('Please choose an option')
return JsonResponse(context, status=400)
username = username.strip()
try:
user = User.objects.get(username=username)
except User.DoesNotExist:
context['message'] = _("User with username {} does not exist").format(username)
return JsonResponse(context, status=400)
else:
user_account, _success = UserStanding.objects.get_or_create(
user=user, defaults={'changed_by': request.user},
)
if account_action == 'disable':
user_account.account_status = UserStanding.ACCOUNT_DISABLED
context['message'] = _("Successfully disabled {}'s account").format(username)
log.info(u"%s disabled %s's account", request.user, username)
elif account_action == 'reenable':
user_account.account_status = UserStanding.ACCOUNT_ENABLED
context['message'] = _("Successfully reenabled {}'s account").format(username)
log.info(u"%s reenabled %s's account", request.user, username)
else:
context['message'] = _("Unexpected account status")
return JsonResponse(context, status=400)
user_account.changed_by = request.user
user_account.standing_last_changed_at = datetime.datetime.now(UTC)
user_account.save()
return JsonResponse(context)
@login_required
@ensure_csrf_cookie
def change_setting(request):
"""JSON call to change a profile setting: Right now, location"""
# TODO (vshnayder): location is no longer used
u_prof = UserProfile.objects.get(user=request.user) # request.user.profile_cache
if 'location' in request.POST:
u_prof.location = request.POST['location']
u_prof.save()
return JsonResponse({
"success": True,
"location": u_prof.location,
})
class AccountValidationError(Exception):
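"""Validation error raised during account creation, tied to a specific form field (e.g. username or email)."""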
def __init__(self, message, field):
super(AccountValidationError, self).__init__(message)
self.field = field
@receiver(post_save, sender=User)
def user_signup_handler(sender, **kwargs): # pylint: disable=unused-argument
"""
Post-save signal handler that records the user's signup source (the microsite site name)
when the user is created.
"""
if 'created' in kwargs and kwargs['created']:
site = microsite.get_value('SITE_NAME')
if site:
user_signup_source = UserSignupSource(user=kwargs['instance'], site=site)
user_signup_source.save()
log.info(u'user {} originated from a white labeled "Microsite"'.format(kwargs['instance'].id))
def _do_create_account(form, custom_form=None):
"""
Given cleaned post variables, create the User and UserProfile objects, as well as the
registration for this user.
Returns a tuple (User, UserProfile, Registration).
Note: this function is also used for creating test users.
"""
errors = {}
errors.update(form.errors)
if custom_form:
errors.update(custom_form.errors)
if errors:
raise ValidationError(errors)
user = User(
username=form.cleaned_data["username"],
email=form.cleaned_data["email"],
is_active=False
)
user.set_password(form.cleaned_data["password"])
registration = Registration()
# TODO: Rearrange so that if part of the process fails, the whole process fails.
# Right now, we can have e.g. no registration e-mail sent out and a zombie account
try:
with transaction.atomic():
user.save()
if custom_form:
custom_model = custom_form.save(commit=False)
custom_model.user = user
custom_model.save()
except IntegrityError:
# Figure out the cause of the integrity error
if len(User.objects.filter(username=user.username)) > 0:
raise AccountValidationError(
_("An account with the Public Username '{username}' already exists.").format(username=user.username),
field="username"
)
elif len(User.objects.filter(email=user.email)) > 0:
raise AccountValidationError(
_("An account with the Email '{email}' already exists.").format(email=user.email),
field="email"
)
else:
raise
# add this account creation to password history
# NOTE, this will be a NOP unless the feature has been turned on in configuration
password_history_entry = PasswordHistory()
password_history_entry.create(user)
registration.register(user)
profile_fields = [
"name", "level_of_education", "gender", "mailing_address", "city", "country", "goals",
"year_of_birth"
]
profile = UserProfile(
user=user,
**{key: form.cleaned_data.get(key) for key in profile_fields}
)
extended_profile = form.cleaned_extended_profile
if extended_profile:
profile.meta = json.dumps(extended_profile)
try:
profile.save()
except Exception: # pylint: disable=broad-except
log.exception("UserProfile creation failed for user {id}.".format(id=user.id))
raise
try:
Subscriber.objects.create(user=user)
except Exception: # pylint: disable=broad-except
log.exception("Subscriber creation failed for user {id}".format(id=user.id))
raise
return (user, profile, registration)
def create_account_with_params(request, params):
"""
Given a request and a dict of parameters (which may or may not have come
from the request), create an account for the requesting user, including
creating a comments service user object and sending an activation email.
This also takes external/third-party auth into account, updates that as
necessary, and authenticates the user for the request's session.
Does not return anything.
Raises AccountValidationError if an account with the username or email
specified by params already exists, or ValidationError if any of the given
parameters is invalid for any other reason.
Issues with this code:
* It is not transactional. If there is a failure part-way, an incomplete
account will be created and left in the database.
* Third-party auth passwords are not verified. There is a comment that
they are unused, but it would be helpful to have a sanity check that
they are sane.
* It is over 300 lines long (!) and includes disparate functionality, from
registration e-mails to all sorts of other things. It should be broken
up into semantically meaningful functions.
* The user-facing text is rather unfriendly (e.g. "Username must be a
minimum of two characters long" rather than "Please use a username of
at least two characters").
"""
# Copy params so we can modify it; we can't just do dict(params) because if
# params is request.POST, that results in a dict containing lists of values
params = dict(params.items())
# allow for microsites to define their own set of required/optional/hidden fields
extra_fields = microsite.get_value(
'REGISTRATION_EXTRA_FIELDS',
getattr(settings, 'REGISTRATION_EXTRA_FIELDS', {})
)
# Boolean of whether a 3rd party auth provider and credentials were provided in
# the API so the newly created account can link with the 3rd party account.
#
# Note: this is orthogonal to the 3rd party authentication pipeline that occurs
# when the account is created via the browser and redirect URLs.
should_link_with_social_auth = third_party_auth.is_enabled() and 'provider' in params
if should_link_with_social_auth or (third_party_auth.is_enabled() and pipeline.running(request)):
params["password"] = pipeline.make_random_password()
# if doing signup for an external authorization, then get email, password, name from the eamap
# don't use the ones from the form, since the user could have hacked those
# unless originally we didn't get a valid email or name from the external auth
# TODO: We do not check whether these values meet all necessary criteria, such as email length
do_external_auth = 'ExternalAuthMap' in request.session
if do_external_auth:
eamap = request.session['ExternalAuthMap']
try:
validate_email(eamap.external_email)
params["email"] = eamap.external_email
except ValidationError:
pass
if eamap.external_name.strip() != '':
params["name"] = eamap.external_name
params["password"] = eamap.internal_password
log.debug(u'In create_account with external_auth: user = %s, email=%s', params["name"], params["email"])
extended_profile_fields = microsite.get_value('extended_profile_fields', [])
enforce_password_policy = (
settings.FEATURES.get("ENFORCE_PASSWORD_POLICY", False) and
not do_external_auth
)
# Can't have terms of service for certain SHIB users, like at Stanford
registration_fields = getattr(settings, 'REGISTRATION_EXTRA_FIELDS', {})
tos_required = (
registration_fields.get('terms_of_service') != 'hidden' or
registration_fields.get('honor_code') != 'hidden'
) and (
not settings.FEATURES.get("AUTH_USE_SHIB") or
not settings.FEATURES.get("SHIB_DISABLE_TOS") or
not do_external_auth or
not eamap.external_domain.startswith(
external_auth.views.SHIBBOLETH_DOMAIN_PREFIX
)
)
form = AccountCreationForm(
data=params,
extra_fields=extra_fields,
extended_profile_fields=extended_profile_fields,
enforce_username_neq_password=True,
enforce_password_policy=enforce_password_policy,
tos_required=tos_required,
)
custom_form = get_registration_extension_form(data=params)
# Perform operations within a transaction that are critical to account creation
with transaction.atomic():
# first, create the account
(user, profile, registration) = _do_create_account(form, custom_form)
# next, link the account with social auth, if provided via the API.
# (If the user is using the normal register page, the social auth pipeline does the linking, not this code)
if should_link_with_social_auth:
backend_name = params['provider']
request.social_strategy = social_utils.load_strategy(request)
redirect_uri = reverse('social:complete', args=(backend_name, ))
request.backend = social_utils.load_backend(request.social_strategy, backend_name, redirect_uri)
social_access_token = params.get('access_token')
if not social_access_token:
raise ValidationError({
'access_token': [
_("An access_token is required when passing value ({}) for provider.").format(
params['provider']
)
]
})
request.session[pipeline.AUTH_ENTRY_KEY] = pipeline.AUTH_ENTRY_REGISTER_API
pipeline_user = None
error_message = ""
try:
pipeline_user = request.backend.do_auth(social_access_token, user=user)
except AuthAlreadyAssociated:
error_message = _("The provided access_token is already associated with another user.")
except (HTTPError, AuthException):
error_message = _("The provided access_token is not valid.")
if not pipeline_user or not isinstance(pipeline_user, User):
# Ensure user does not re-enter the pipeline
request.social_strategy.clean_partial_pipeline()
raise ValidationError({'access_token': [error_message]})
# Perform operations that are non-critical parts of account creation
preferences_api.set_user_preference(user, LANGUAGE_KEY, get_language())
if settings.FEATURES.get('ENABLE_DISCUSSION_EMAIL_DIGEST'):
try:
enable_notifications(user)
except Exception: # pylint: disable=broad-except
log.exception("Enable discussion notifications failed for user {id}.".format(id=user.id))
dog_stats_api.increment("common.student.account_created")
# If the user is registering via 3rd party auth, track which provider they use
third_party_provider = None
running_pipeline = None
if third_party_auth.is_enabled() and pipeline.running(request):
running_pipeline = pipeline.get(request)
third_party_provider = provider.Registry.get_from_pipeline(running_pipeline)
# Track the user's registration
if hasattr(settings, 'LMS_SEGMENT_KEY') and settings.LMS_SEGMENT_KEY:
tracking_context = tracker.get_tracker().resolve_context()
identity_args = [
user.id, # pylint: disable=no-member
{
'email': user.email,
'username': user.username,
'name': profile.name,
# Mailchimp requires the age & yearOfBirth to be integers, we send a sane integer default if falsey.
'age': profile.age or -1,
'yearOfBirth': profile.year_of_birth or datetime.datetime.now(UTC).year,
'education': profile.level_of_education_display,
'address': profile.mailing_address,
'gender': profile.gender_display,
'country': unicode(profile.country),
}
]
if hasattr(settings, 'MAILCHIMP_NEW_USER_LIST_ID'):
identity_args.append({
"MailChimp": {
"listId": settings.MAILCHIMP_NEW_USER_LIST_ID
}
})
analytics.identify(*identity_args)
analytics.track(
user.id,
"edx.bi.user.account.registered",
{
'category': 'conversion',
'label': params.get('course_id'),
'provider': third_party_provider.name if third_party_provider else None
},
context={
'ip': tracking_context.get('ip'),
'Google Analytics': {
'clientId': tracking_context.get('client_id')
}
}
)
create_comments_service_user(user)
# Don't send email if we are:
#
# 1. Doing load testing.
# 2. Random user generation for other forms of testing.
# 3. External auth bypassing activation.
# 4. Have the platform configured to not require e-mail activation.
# 5. Registering a new user using a trusted third party provider (with skip_email_verification=True)
#
# Note that this feature is only tested as a flag set one way or
# the other for *new* systems. We need to be careful about
# changing settings on a running system to make sure no users are
# left in an inconsistent state (or doing a migration if they are).
send_email = (
not settings.FEATURES.get('SKIP_EMAIL_VALIDATION', None) and
not settings.FEATURES.get('AUTOMATIC_AUTH_FOR_TESTING') and
not (do_external_auth and settings.FEATURES.get('BYPASS_ACTIVATION_EMAIL_FOR_EXTAUTH')) and
not (
third_party_provider and third_party_provider.skip_email_verification and
user.email == running_pipeline['kwargs'].get('details', {}).get('email')
)
)
if send_email:
context = {
'name': profile.name,
'key': registration.activation_key,
}
# composes activation email
subject = render_to_string('emails/activation_email_subject.txt', context)
# Email subject *must not* contain newlines
subject = ''.join(subject.splitlines())
message = render_to_string('emails/activation_email.txt', context)
from_address = microsite.get_value(
'email_from_address',
settings.DEFAULT_FROM_EMAIL
)
try:
if settings.FEATURES.get('REROUTE_ACTIVATION_EMAIL'):
dest_addr = settings.FEATURES['REROUTE_ACTIVATION_EMAIL']
message = ("Activation for %s (%s): %s\n" % (user, user.email, profile.name) +
'-' * 80 + '\n\n' + message)
mail.send_mail(subject, message, from_address, [dest_addr], fail_silently=False)
else:
user.email_user(subject, message, from_address)
except Exception: # pylint: disable=broad-except
log.error(u'Unable to send activation email to user from "%s"', from_address, exc_info=True)
else:
registration.activate()
_enroll_user_in_pending_courses(user) # Enroll student in any pending courses
# Immediately after a user creates an account, we log them in. They are only
# logged in until they close the browser. They can't log in again until they click
# the activation link from the email.
new_user = authenticate(username=user.username, password=params['password'])
login(request, new_user)
request.session.set_expiry(0)
# TODO: there is no error checking here to see that the user actually logged in successfully,
# and is not yet an active user.
if new_user is not None:
AUDIT_LOG.info(u"Login success on new account creation - {0}".format(new_user.username))
if do_external_auth:
eamap.user = new_user
eamap.dtsignup = datetime.datetime.now(UTC)
eamap.save()
AUDIT_LOG.info(u"User registered with external_auth %s", new_user.username)
AUDIT_LOG.info(u'Updated ExternalAuthMap for %s to be %s', new_user.username, eamap)
if settings.FEATURES.get('BYPASS_ACTIVATION_EMAIL_FOR_EXTAUTH'):
log.info('bypassing activation email')
new_user.is_active = True
new_user.save()
AUDIT_LOG.info(u"Login activated on extauth account - {0} ({1})".format(new_user.username, new_user.email))
return new_user
def _enroll_user_in_pending_courses(student):
"""
Enroll student in any pending courses he/she may have.
"""
ceas = CourseEnrollmentAllowed.objects.filter(email=student.email)
for cea in ceas:
if cea.auto_enroll:
enrollment = CourseEnrollment.enroll(student, cea.course_id)
manual_enrollment_audit = ManualEnrollmentAudit.get_manual_enrollment_by_email(student.email)
if manual_enrollment_audit is not None:
# get the enrolled by user and reason from the ManualEnrollmentAudit table.
# then create a new ManualEnrollmentAudit table entry for the same email
# different transition state.
ManualEnrollmentAudit.create_manual_enrollment_audit(
manual_enrollment_audit.enrolled_by, student.email, ALLOWEDTOENROLL_TO_ENROLLED,
manual_enrollment_audit.reason, enrollment
)
@csrf_exempt
def create_account(request, post_override=None):
"""
JSON call to create new edX account.
Used by form in signup_modal.html, which is included into navigation.html
"""
warnings.warn("Please use RegistrationView instead.", DeprecationWarning)
try:
user = create_account_with_params(request, post_override or request.POST)
except AccountValidationError as exc:
return JsonResponse({'success': False, 'value': exc.message, 'field': exc.field}, status=400)
except ValidationError as exc:
field, error_list = next(exc.message_dict.iteritems())
return JsonResponse(
{
"success": False,
"field": field,
"value": error_list[0],
},
status=400
)
redirect_url = None # The AJAX method calling should know the default destination upon success
# Resume the third-party-auth pipeline if necessary.
if third_party_auth.is_enabled() and pipeline.running(request):
running_pipeline = pipeline.get(request)
redirect_url = pipeline.get_complete_url(running_pipeline['backend'])
response = JsonResponse({
'success': True,
'redirect_url': redirect_url,
})
set_logged_in_cookies(request, response, user)
return response
def auto_auth(request):
"""
Create or configure a user account, then log in as that user.
Enabled only when
settings.FEATURES['AUTOMATIC_AUTH_FOR_TESTING'] is true.
Accepts the following querystring parameters:
* `username`, `email`, and `password` for the user account
* `full_name` for the user profile (the user's full name; defaults to the username)
* `staff`: Set to "true" to make the user global staff.
* `course_id`: Enroll the student in the course with `course_id`
* `roles`: Comma-separated list of roles to grant the student in the course with `course_id`
* `no_login`: Define this to create the user but not log in
* `redirect`: Set to "true" to redirect to the `redirect_to` value if set, or
to the course home page if course_id is defined; otherwise redirect to the dashboard
* `redirect_to`: will redirect to this URL
If username, email, or password are not provided, use
randomly generated credentials.
"""
# Generate a unique name to use if none provided
unique_name = uuid.uuid4().hex[0:30]
# Use the params from the request, otherwise use these defaults
username = request.GET.get('username', unique_name)
password = request.GET.get('password', unique_name)
email = request.GET.get('email', unique_name + "@example.com")
full_name = request.GET.get('full_name', username)
is_staff = request.GET.get('staff', None)
is_superuser = request.GET.get('superuser', None)
course_id = request.GET.get('course_id', None)
redirect_to = request.GET.get('redirect_to', None)
# mode has to be one of 'honor'/'professional'/'verified'/'audit'/'no-id-professional'/'credit'
enrollment_mode = request.GET.get('enrollment_mode', 'honor')
course_key = None
if course_id:
course_key = CourseLocator.from_string(course_id)
role_names = [v.strip() for v in request.GET.get('roles', '').split(',') if v.strip()]
redirect_when_done = request.GET.get('redirect', '').lower() == 'true' or redirect_to
login_when_done = 'no_login' not in request.GET
form = AccountCreationForm(
data={
'username': username,
'email': email,
'password': password,
'name': full_name,
},
tos_required=False
)
# Attempt to create the account.
# If successful, this returns a tuple of
# (user, profile, registration).
try:
user, profile, reg = _do_create_account(form)
except (AccountValidationError, ValidationError):
# Attempt to retrieve the existing user.
user = User.objects.get(username=username)
user.email = email
user.set_password(password)
user.save()
profile = UserProfile.objects.get(user=user)
reg = Registration.objects.get(user=user)
# Set the user's global staff bit
if is_staff is not None:
user.is_staff = (is_staff == "true")
user.save()
if is_superuser is not None:
user.is_superuser = (is_superuser == "true")
user.save()
# Activate the user
reg.activate()
reg.save()
# ensure parental consent threshold is met
year = datetime.date.today().year
age_limit = settings.PARENTAL_CONSENT_AGE_LIMIT
profile.year_of_birth = (year - age_limit) - 1
profile.save()
# Enroll the user in a course
if course_key is not None:
CourseEnrollment.enroll(user, course_key, mode=enrollment_mode)
# Apply the roles
for role_name in role_names:
role = Role.objects.get(name=role_name, course_id=course_key)
user.roles.add(role)
# Log in as the user
if login_when_done:
user = authenticate(username=username, password=password)
login(request, user)
create_comments_service_user(user)
# Provide the user with a valid CSRF token
# then return a 200 response unless redirect is true
if redirect_when_done:
# Redirect to specific page if specified
if redirect_to:
redirect_url = redirect_to
# Redirect to course info page if course_id is known
elif course_id:
try:
# redirect to course info page in LMS
redirect_url = reverse(
'info',
kwargs={'course_id': course_id}
)
except NoReverseMatch:
# redirect to course outline page in Studio
redirect_url = reverse(
'course_handler',
kwargs={'course_key_string': course_id}
)
else:
try:
# redirect to dashboard for LMS
redirect_url = reverse('dashboard')
except NoReverseMatch:
# redirect to home for Studio
redirect_url = reverse('home')
return redirect(redirect_url)
elif request.META.get('HTTP_ACCEPT') == 'application/json':
response = JsonResponse({
'created_status': u"Logged in" if login_when_done else "Created",
'username': username,
'email': email,
'password': password,
'user_id': user.id, # pylint: disable=no-member
'anonymous_id': anonymous_id_for_user(user, None),
})
else:
success_msg = u"{} user {} ({}) with password {} and user_id {}".format(
u"Logged in" if login_when_done else "Created",
username, email, password, user.id # pylint: disable=no-member
)
response = HttpResponse(success_msg)
response.set_cookie('csrftoken', csrf(request)['csrf_token'])
return response
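# Illustrative usage sketch (not part of the original view; the URL and parameter
# values below are assumptions): this test-only account creation view is driven
# entirely by GET parameters, all of which are optional, e.g.
#
#     GET /auto_auth?username=demo&password=demo&staff=true
#         &course_id=course-v1:edX+Demo+2020&enrollment_mode=audit&redirect=true
#
# Omitted credentials fall back to a randomly generated 30-character hex string.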
@ensure_csrf_cookie
def activate_account(request, key):
"""When link in activation e-mail is clicked"""
regs = Registration.objects.filter(activation_key=key)
if len(regs) == 1:
user_logged_in = request.user.is_authenticated()
already_active = True
if not regs[0].user.is_active:
regs[0].activate()
already_active = False
# Enroll student in any pending courses he/she may have if auto_enroll flag is set
_enroll_user_in_pending_courses(regs[0].user)
resp = render_to_response(
"registration/activation_complete.html",
{
'user_logged_in': user_logged_in,
'already_active': already_active
}
)
return resp
if len(regs) == 0:
return render_to_response(
"registration/activation_invalid.html",
{'csrf': csrf(request)['csrf_token']}
)
return HttpResponseServerError(_("Unknown error. Please e-mail us to let us know how it happened."))
@csrf_exempt
@require_POST
def password_reset(request):
""" Attempts to send a password reset e-mail. """
# Add some rate limiting here by re-using the RateLimitMixin as a helper class
limiter = BadRequestRateLimiter()
if limiter.is_rate_limit_exceeded(request):
AUDIT_LOG.warning("Rate limit exceeded in password_reset")
return HttpResponseForbidden()
form = PasswordResetFormNoActive(request.POST)
if form.is_valid():
form.save(use_https=request.is_secure(),
from_email=microsite.get_value('email_from_address', settings.DEFAULT_FROM_EMAIL),
request=request,
domain_override=request.get_host())
# When password change is complete, a "edx.user.settings.changed" event will be emitted.
# But because changing the password is multi-step, we also emit an event here so that we can
# track where the request was initiated.
tracker.emit(
SETTING_CHANGE_INITIATED,
{
"setting": "password",
"old": None,
"new": None,
"user_id": request.user.id,
}
)
else:
# bad user? tick the rate limiter counter
AUDIT_LOG.info("Bad password_reset user passed in.")
limiter.tick_bad_request_counter(request)
return JsonResponse({
'success': True,
'value': render_to_string('registration/password_reset_done.html', {}),
})
def password_reset_confirm_wrapper(
request,
uidb36=None,
token=None,
):
""" A wrapper around django.contrib.auth.views.password_reset_confirm.
Needed because we want to set the user as active at this step.
"""
# cribbed from django.contrib.auth.views.password_reset_confirm
try:
uid_int = base36_to_int(uidb36)
user = User.objects.get(id=uid_int)
user.is_active = True
user.save()
except (ValueError, User.DoesNotExist):
pass
# tie in password strength enforcement as an optional level of
# security protection
err_msg = None
if request.method == 'POST':
password = request.POST['new_password1']
if settings.FEATURES.get('ENFORCE_PASSWORD_POLICY', False):
try:
validate_password_length(password)
validate_password_complexity(password)
validate_password_dictionary(password)
except ValidationError as err:
err_msg = _('Password: ') + '; '.join(err.messages)
# also, check the password reuse policy
if not PasswordHistory.is_allowable_password_reuse(user, password):
if user.is_staff:
num_distinct = settings.ADVANCED_SECURITY_CONFIG['MIN_DIFFERENT_STAFF_PASSWORDS_BEFORE_REUSE']
else:
num_distinct = settings.ADVANCED_SECURITY_CONFIG['MIN_DIFFERENT_STUDENT_PASSWORDS_BEFORE_REUSE']
# Because of how ngettext works, splitting the following into shorter lines would be ugly.
# pylint: disable=line-too-long
err_msg = ungettext(
"You are re-using a password that you have used recently. You must have {num} distinct password before reusing a previous password.",
"You are re-using a password that you have used recently. You must have {num} distinct passwords before reusing a previous password.",
num_distinct
).format(num=num_distinct)
# also, check to see if passwords are getting reset too frequently
if PasswordHistory.is_password_reset_too_soon(user):
num_days = settings.ADVANCED_SECURITY_CONFIG['MIN_TIME_IN_DAYS_BETWEEN_ALLOWED_RESETS']
# Because of how ngettext works, splitting the following into shorter lines would be ugly.
# pylint: disable=line-too-long
err_msg = ungettext(
"You are resetting passwords too frequently. Due to security policies, {num} day must elapse between password resets.",
"You are resetting passwords too frequently. Due to security policies, {num} days must elapse between password resets.",
num_days
).format(num=num_days)
if err_msg:
# We have a password reset attempt that violates a security policy; use the
# existing Django template to communicate this back to the user
context = {
'validlink': True,
'form': None,
'title': _('Password reset unsuccessful'),
'err_msg': err_msg,
'platform_name': microsite.get_value('platform_name', settings.PLATFORM_NAME),
}
return TemplateResponse(request, 'registration/password_reset_confirm.html', context)
else:
# we also want to pass settings.PLATFORM_NAME in as extra_context
extra_context = {"platform_name": microsite.get_value('platform_name', settings.PLATFORM_NAME)}
# Support old password reset URLs that used base36 encoded user IDs.
# https://github.com/django/django/commit/1184d077893ff1bc947e45b00a4d565f3df81776#diff-c571286052438b2e3190f8db8331a92bR231
try:
uidb64 = force_text(urlsafe_base64_encode(force_bytes(base36_to_int(uidb36))))
except ValueError:
uidb64 = '1' # dummy invalid ID (incorrect padding for base64)
if request.method == 'POST':
# remember what the old password hash is before we call down
old_password_hash = user.password
result = password_reset_confirm(
request, uidb64=uidb64, token=token, extra_context=extra_context
)
# get the updated user
updated_user = User.objects.get(id=uid_int)
# did the password hash change, if so record it in the PasswordHistory
if updated_user.password != old_password_hash:
entry = PasswordHistory()
entry.create(updated_user)
return result
else:
return password_reset_confirm(
request, uidb64=uidb64, token=token, extra_context=extra_context
)
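# Minimal sketch of the uidb36 -> uidb64 conversion used above (illustrative only,
# assuming Django's standard encoding helpers; '2s' is base36 for user id 100):
#
#     >>> from django.utils.encoding import force_bytes, force_text
#     >>> from django.utils.http import base36_to_int, urlsafe_base64_encode
#     >>> base36_to_int('2s')
#     100
#     >>> force_text(urlsafe_base64_encode(force_bytes(base36_to_int('2s'))))
#     'MTAw'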
def reactivation_email_for_user(user):
try:
reg = Registration.objects.get(user=user)
except Registration.DoesNotExist:
return JsonResponse({
"success": False,
"error": _('No inactive user with this e-mail exists'),
}) # TODO: this should be status code 400 # pylint: disable=fixme
context = {
'name': user.profile.name,
'key': reg.activation_key,
}
subject = render_to_string('emails/activation_email_subject.txt', context)
subject = ''.join(subject.splitlines())
message = render_to_string('emails/activation_email.txt', context)
try:
user.email_user(subject, message, settings.DEFAULT_FROM_EMAIL)
except Exception: # pylint: disable=broad-except
log.error(u'Unable to send reactivation email from "%s"', settings.DEFAULT_FROM_EMAIL, exc_info=True)
return JsonResponse({
"success": False,
"error": _('Unable to send reactivation email')
}) # TODO: this should be status code 500 # pylint: disable=fixme
return JsonResponse({"success": True})
def validate_new_email(user, new_email):
"""
Given a new email for a user, does some basic verification of the new address. If any issues are encountered
during verification, a ValueError will be thrown.
"""
try:
validate_email(new_email)
except ValidationError:
raise ValueError(_('Valid e-mail address required.'))
if new_email == user.email:
raise ValueError(_('Old email is the same as the new email.'))
if User.objects.filter(email=new_email).count() != 0:
raise ValueError(_('An account with this e-mail already exists.'))
def do_email_change_request(user, new_email, activation_key=None):
"""
Given a new email for a user, does some basic verification of the new address and sends an activation message
to the new address. If any issues are encountered with verification or sending the message, a ValueError will
be thrown.
"""
pec_list = PendingEmailChange.objects.filter(user=user)
if len(pec_list) == 0:
pec = PendingEmailChange()
pec.user = user
else:
pec = pec_list[0]
# if activation_key is not passed as an argument, generate a random key
if not activation_key:
activation_key = uuid.uuid4().hex
pec.new_email = new_email
pec.activation_key = activation_key
pec.save()
context = {
'key': pec.activation_key,
'old_email': user.email,
'new_email': pec.new_email
}
subject = render_to_string('emails/email_change_subject.txt', context)
subject = ''.join(subject.splitlines())
message = render_to_string('emails/email_change.txt', context)
from_address = microsite.get_value(
'email_from_address',
settings.DEFAULT_FROM_EMAIL
)
try:
mail.send_mail(subject, message, from_address, [pec.new_email])
except Exception: # pylint: disable=broad-except
log.error(u'Unable to send email activation link to user from "%s"', from_address, exc_info=True)
raise ValueError(_('Unable to send email activation link. Please try again later.'))
# When the email address change is complete, a "edx.user.settings.changed" event will be emitted.
# But because changing the email address is multi-step, we also emit an event here so that we can
# track where the request was initiated.
tracker.emit(
SETTING_CHANGE_INITIATED,
{
"setting": "email",
"old": context['old_email'],
"new": context['new_email'],
"user_id": user.id,
}
)
@ensure_csrf_cookie
def confirm_email_change(request, key): # pylint: disable=unused-argument
"""
User requested a new e-mail. This is called when the activation
link is clicked. We confirm with the old e-mail, and update the address to the new one.
"""
with transaction.atomic():
try:
pec = PendingEmailChange.objects.get(activation_key=key)
except PendingEmailChange.DoesNotExist:
response = render_to_response("invalid_email_key.html", {})
transaction.set_rollback(True)
return response
user = pec.user
address_context = {
'old_email': user.email,
'new_email': pec.new_email
}
if len(User.objects.filter(email=pec.new_email)) != 0:
response = render_to_response("email_exists.html", {})
transaction.set_rollback(True)
return response
subject = render_to_string('emails/email_change_subject.txt', address_context)
subject = ''.join(subject.splitlines())
message = render_to_string('emails/confirm_email_change.txt', address_context)
u_prof = UserProfile.objects.get(user=user)
meta = u_prof.get_meta()
if 'old_emails' not in meta:
meta['old_emails'] = []
meta['old_emails'].append([user.email, datetime.datetime.now(UTC).isoformat()])
u_prof.set_meta(meta)
u_prof.save()
# Send it to the old email...
try:
user.email_user(subject, message, settings.DEFAULT_FROM_EMAIL)
except Exception: # pylint: disable=broad-except
log.warning('Unable to send confirmation email to old address', exc_info=True)
response = render_to_response("email_change_failed.html", {'email': user.email})
transaction.set_rollback(True)
return response
user.email = pec.new_email
user.save()
pec.delete()
# And send it to the new email...
try:
user.email_user(subject, message, settings.DEFAULT_FROM_EMAIL)
except Exception: # pylint: disable=broad-except
log.warning('Unable to send confirmation email to new address', exc_info=True)
response = render_to_response("email_change_failed.html", {'email': pec.new_email})
transaction.set_rollback(True)
return response
response = render_to_response("email_change_successful.html", address_context)
return response
@require_POST
@login_required
@ensure_csrf_cookie
def change_email_settings(request):
"""Modify logged-in user's setting for receiving emails from a course."""
user = request.user
course_id = request.POST.get("course_id")
course_key = SlashSeparatedCourseKey.from_deprecated_string(course_id)
receive_emails = request.POST.get("receive_emails")
if receive_emails:
optout_object = Optout.objects.filter(user=user, course_id=course_key)
if optout_object:
optout_object.delete()
log.info(
u"User %s (%s) opted in to receive emails from course %s",
user.username,
user.email,
course_id,
)
track.views.server_track(
request,
"change-email-settings",
{"receive_emails": "yes", "course": course_id},
page='dashboard',
)
else:
Optout.objects.get_or_create(user=user, course_id=course_key)
log.info(
u"User %s (%s) opted out of receiving emails from course %s",
user.username,
user.email,
course_id,
)
track.views.server_track(
request,
"change-email-settings",
{"receive_emails": "no", "course": course_id},
page='dashboard',
)
return JsonResponse({"success": True})
|
10clouds/edx-platform
|
common/djangoapps/student/views.py
|
Python
|
agpl-3.0
| 103,142
|
[
"VisIt"
] |
46f9aef9490581965376e449f2d7b5769ca87a64c5a99a273375c1d5f0461505
|
#!/usr/bin/env python
from __future__ import print_function
from builtins import input
import sys
import numpy
import pmagpy.pmag as pmag
def main():
"""
NAME
incfish.py
DESCRIPTION
calculates Fisher parameters from inclination-only data
INPUT FORMAT
takes inc data
SYNTAX
incfish.py [options] [< filename]
OPTIONS
-h prints help message and quits
-i for interactive filename entry
-f FILE, specify input file name
-F FILE, specify output file name
< filename for reading from standard input
OUTPUT
mean inc,Fisher inc, N, R, k, a95
NOTES
takes the absolute value of inclinations (to take into account reversals),
but returns the Gaussian mean if < 50.0, because of polarity ambiguity and
lack of bias.
"""
inc=[]
if '-h' in sys.argv: # check if help is needed
print(main.__doc__)
sys.exit() # graceful quit
if '-i' in sys.argv: # ask for filename
file=input("Enter file name with inc data: ")
inc=numpy.loadtxt(file)
elif '-f' in sys.argv:
ind=sys.argv.index('-f')
file=sys.argv[ind+1]
inc=numpy.loadtxt(file)
else:
inc = numpy.loadtxt(sys.stdin, dtype=float)
ofile=""
if '-F' in sys.argv:
ind = sys.argv.index('-F')
ofile= sys.argv[ind+1]
out = open(ofile, 'w')
#
#get doincfish to do the dirty work:
fpars= pmag.doincfish(inc)
outstring='%7.1f %7.1f %i %8.1f %7.1f %7.1f'%(fpars['ginc'],fpars['inc'],fpars['n'],fpars['r'],fpars['k'],fpars['alpha95'])
if ofile == "":
print(outstring)
else:
out.write(outstring+'\n')
if __name__ == "__main__":
main()
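# Illustrative usage (not part of the original program): inclination values can be
# piped on standard input, one value per line, e.g.
#
#     $ printf "45\n52\n61\n" | python incfish.py
#
# which prints a single line with: mean inc, Fisher inc, N, R, k, a95.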
|
Caoimhinmg/PmagPy
|
programs/incfish.py
|
Python
|
bsd-3-clause
| 1,761
|
[
"Gaussian"
] |
68ef1ef85d2ee0575781b0ee8a6cdd370c0c288f8122ef9f92ecab65abedb9c6
|
# TODO list for this module:
# - define and implement type coercion when types are not matching
# - What types do we use, Python types? special types so that we can mark out
# 8-bit, 16-bit, etc. numeric types? How can we then make sure that
# operations work the same when run in Python and in the assembled version?
# (e.g. 8-bit overflow)
# - How should we handle the pointer type?
import typing
import symtab
import ast
from errors import *
# "Enum" for loading context
NoContext = 0
ValueContext = 1
AddressContext = 2
class TypeAnnotator(ast.NodeTransformer):
def __init__(self, root_symbol_table=None):
self.top_scope = symtab.SymbolTable(root_symbol_table)
self.loading_context = NoContext
self.typing = typing.TypingSystem(None)
def push_scope(self, scope):
self.top_scope = scope
def pop_scope(self):
self.top_scope = self.top_scope.parent
def lookup_symbol(self, id):
return self.top_scope.find_symbol(id)
def add_symbol_to_top_scope(self, symbol):
self.top_scope.add_symbol(symbol)
def visit_arguments(self, node):
for arg in node.args:
sym = symtab.Symbol(arg.arg)
assert isinstance(arg.annotation, ast.Name), \
"Argument annotation must be a type identifier"
sym.typ = self.typing.get_type_for_id(arg.annotation.id)
self.add_symbol_to_top_scope(sym)
return node
def visit_Module(self, node):
scope = symtab.SymbolTable(self.top_scope)
node.scope = scope
self.push_scope(scope)
node.body = [self.visit(stmt) for stmt in node.body]
self.pop_scope()
return node
def visit_FunctionDef(self, node):
function_scope = symtab.SymbolTable(self.top_scope)
self.push_scope(function_scope)
node.scope = function_scope
node.args = self.visit(node.args)
# TODO: The return type of the function can also be determined using
# the types of Return nodes
return_type = None
if node.returns is not None:
assert isinstance(node.returns, ast.Name), \
"Return annotation must be a type identifier"
return_type = self.typing.get_type_for_id(node.returns.id)
sym = symtab.Symbol(node.name)
sym.typ = typing.Function(return_type, node.args)
# TODO: The following line is a really ugly access of above scope; but
# is needed since for the body and the arguments we need to be inside
# the function scope
self.top_scope.parent.add_symbol(sym)
node.body = [self.visit(child) for child in node.body]
node.decorator_list = [self.visit(child) for child in node.decorator_list]
self.pop_scope()
return node
def visit_Call(self, node):
if not isinstance(node.func, ast.Name):
raise CompilationError(
"Call of a function must be done through an explicit name",
"<file>", node.lineno, node.col_offset)
sym = self.lookup_symbol(node.func.id)
if not isinstance(sym.typ, typing.Function):
raise CompilationError("Can't call non-function symbol '%s'" %
sym.name, "<file>", node.lineno,
node.col_offset)
node.args = [self.visit(arg) for arg in node.args]
node.typ = sym.typ.return_type
node.scope = self.top_scope
return node
def visit_Assign(self, node):
# targets* = value (expr)
node.scope = self.top_scope
self.loading_context = ValueContext
node.value = self.visit(node.value)
if not hasattr(node.value, "typ"):
raise KeyError("Can't determine type of '%s' @%u:%u" %
(node.value.id,
node.value.lineno, node.value.col_offset))
self.loading_context = AddressContext
node.targets = [self.visit(target) for target in node.targets]
target = node.targets[0]
if isinstance(target, ast.Name):
sym = self.lookup_symbol(target.id)
if sym is None:
sym = symtab.Symbol(target.id)
sym.typ = node.value.typ
self.add_symbol_to_top_scope(sym)
target.typ = node.value.typ
target.sym = sym
elif isinstance(target, ast.Subscript):
if target.typ != node.value.typ:
raise TypeError("Cannot assign %s to %s @%u:%u" %
(node.value.typ, target.typ,
node.lineno, node.col_offset))
return node
def visit_BinOp(self, node):
previous_context = self.loading_context
self.loading_context = ValueContext
node.left = self.visit(node.left)
node.op = self.visit(node.op)
node.right = self.visit(node.right)
# TODO: handle more complicated cases such as int * string etc.
node.typ = self.typing.resolve_types(
node.left.typ,
node.right.typ,
node.op)
self.loading_context = previous_context
return node
def visit_Compare(self, node):
previous_context = self.loading_context
self.loading_context = ValueContext
node.left = self.visit(node.left)
node.ops = [self.visit(op) for op in node.ops]
node.comparators = [self.visit(comp) for comp in node.comparators]
node.typ = typing.Bool()
self.loading_context = previous_context
return node
def visit_Subscript(self, node):
# value, slice, ctx
node.value = self.visit(node.value)
node.slice = self.visit(node.slice)
if not isinstance(node.value.typ, typing.Pointer) and \
not isinstance(node.value.typ, typing.String):
raise TypeError(
"Expecting a pointer value. Can't dereference value @%u:%u" %
(node.lineno, node.col_offset))
node.typ = node.value.typ.pointee_type
return node
def visit_Num(self, node):
node.typ = self.typing.get_type_from_number(node.n)
return node
def visit_Str(self, node):
node.typ = typing.String()
return node
def visit_Name(self, node):
sym = self.lookup_symbol(node.id)
if hasattr(sym, "typ") and sym.typ is not None:
node.typ = sym.typ
node.sym = sym
node.loading_context = self.loading_context
return node
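# Illustrative usage sketch (not part of the original module; the exact behaviour of
# the typing helpers is assumed from the code above):
#
#     import ast
#     tree = ast.parse("def add(a: int, b: int) -> int:\n    return a + b")
#     annotated = TypeAnnotator().visit(tree)
#     # visited nodes now carry .typ / .scope / .sym attributes for later passes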
|
Jokymon/hpcs
|
annotators.py
|
Python
|
gpl-3.0
| 6,616
|
[
"VisIt"
] |
8bbfbb2eb9722f2ee2e7272c2739298b2a0257794c43bc609eb333e832386176
|
# Copyright (c), Michael DeHaan <michael.dehaan@gmail.com>, 2012-2013
# Copyright (c), Toshio Kuratomi <tkuratomi@ansible.com> 2016
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
SIZE_RANGES = {
'Y': 1 << 80,
'Z': 1 << 70,
'E': 1 << 60,
'P': 1 << 50,
'T': 1 << 40,
'G': 1 << 30,
'M': 1 << 20,
'K': 1 << 10,
'B': 1,
}
FILE_ATTRIBUTES = {
'A': 'noatime',
'a': 'append',
'c': 'compressed',
'C': 'nocow',
'd': 'nodump',
'D': 'dirsync',
'e': 'extents',
'E': 'encrypted',
'h': 'blocksize',
'i': 'immutable',
'I': 'indexed',
'j': 'journalled',
'N': 'inline',
's': 'zero',
'S': 'synchronous',
't': 'notail',
'T': 'blockroot',
'u': 'undelete',
'X': 'compressedraw',
'Z': 'compresseddirty',
}
PASS_VARS = {
'check_mode': 'check_mode',
'debug': '_debug',
'diff': '_diff',
'keep_remote_files': '_keep_remote_files',
'module_name': '_name',
'no_log': 'no_log',
'remote_tmp': '_remote_tmp',
'selinux_special_fs': '_selinux_special_fs',
'shell_executable': '_shell',
'socket': '_socket_path',
'syslog_facility': '_syslog_facility',
'tmpdir': '_tmpdir',
'verbosity': '_verbosity',
'version': 'ansible_version',
}
PASS_BOOLS = ('no_log', 'debug', 'diff')
# Ansible modules can be written in any language.
# The functions available here can be used to do many common tasks,
# to simplify development of Python modules.
import __main__
import atexit
import locale
import os
import re
import shlex
import signal
import subprocess
import sys
import types
import time
import select
import shutil
import stat
import tempfile
import traceback
import grp
import pwd
import platform
import errno
import datetime
from collections import deque
from itertools import chain, repeat
try:
import syslog
HAS_SYSLOG = True
except ImportError:
HAS_SYSLOG = False
try:
from systemd import journal
has_journal = True
except ImportError:
has_journal = False
HAVE_SELINUX = False
try:
import selinux
HAVE_SELINUX = True
except ImportError:
pass
# Python2 & 3 way to get NoneType
NoneType = type(None)
try:
import json
# Detect the python-json library which is incompatible
try:
if not isinstance(json.loads, types.FunctionType) or not isinstance(json.dumps, types.FunctionType):
raise ImportError
except AttributeError:
raise ImportError
except ImportError:
print('\n{"msg": "Error: ansible requires the stdlib json and was not found!", "failed": true}')
sys.exit(1)
AVAILABLE_HASH_ALGORITHMS = dict()
try:
import hashlib
# python 2.7.9+ and 2.7.0+
for attribute in ('available_algorithms', 'algorithms'):
algorithms = getattr(hashlib, attribute, None)
if algorithms:
break
if algorithms is None:
# python 2.5+
algorithms = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512')
for algorithm in algorithms:
AVAILABLE_HASH_ALGORITHMS[algorithm] = getattr(hashlib, algorithm)
# we may have been able to import md5 but it could still not be available
try:
hashlib.md5()
except ValueError:
AVAILABLE_HASH_ALGORITHMS.pop('md5', None)
except Exception:
import sha
AVAILABLE_HASH_ALGORITHMS = {'sha1': sha.sha}
try:
import md5
AVAILABLE_HASH_ALGORITHMS['md5'] = md5.md5
except Exception:
pass
from ansible.module_utils.common._collections_compat import (
KeysView,
Mapping, MutableMapping,
Sequence, MutableSequence,
Set, MutableSet,
)
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.file import is_executable
from ansible.module_utils.pycompat24 import get_exception, literal_eval
from ansible.module_utils.six import (
PY2,
PY3,
b,
binary_type,
integer_types,
iteritems,
string_types,
text_type,
)
from ansible.module_utils.six.moves import map, reduce, shlex_quote
from ansible.module_utils._text import to_native, to_bytes, to_text
from ansible.module_utils.parsing.convert_bool import BOOLEANS, BOOLEANS_FALSE, BOOLEANS_TRUE, boolean
# Note: When getting Sequence from collections, it matches with strings. If
# this matters, make sure to check for strings before checking for sequencetype
SEQUENCETYPE = frozenset, KeysView, Sequence
PASSWORD_MATCH = re.compile(r'^(?:.+[-_\s])?pass(?:[-_\s]?(?:word|phrase|wrd|wd)?)(?:[-_\s].+)?$', re.I)
_NUMBERTYPES = tuple(list(integer_types) + [float])
# Deprecated compat. Only kept in case another module used these names.
# Using ansible.module_utils.six is preferred.
NUMBERTYPES = _NUMBERTYPES
imap = map
try:
# Python 2
unicode
except NameError:
# Python 3
unicode = text_type
try:
# Python 2
basestring
except NameError:
# Python 3
basestring = string_types
_literal_eval = literal_eval
# End of deprecated names
# Internal global holding passed in params. This is consulted in case
# multiple AnsibleModules are created. Otherwise each AnsibleModule would
# attempt to read from stdin. Other code should not use this directly as it
# is an internal implementation detail
_ANSIBLE_ARGS = None
FILE_COMMON_ARGUMENTS = dict(
# These are things we want. About setting metadata (mode, ownership, permissions in general) on
# created files (these are used by set_fs_attributes_if_different and included in
# load_file_common_arguments)
mode=dict(type='raw'),
owner=dict(),
group=dict(),
seuser=dict(),
serole=dict(),
selevel=dict(),
setype=dict(),
attributes=dict(aliases=['attr']),
# The following are not about perms and should not be in a rewritten file_common_args
src=dict(), # Maybe dest or path would be appropriate but src is not
follow=dict(type='bool', default=False), # Maybe follow is appropriate because it determines whether to follow symlinks for permission purposes too
force=dict(type='bool'),
# not taken by the file module, but other action plugins call the file module so this ignores
# them for now. In the future, the caller should take care of removing these from the module
# arguments before calling the file module.
content=dict(no_log=True), # used by copy
backup=dict(), # Used by a few modules to create a remote backup before updating the file
remote_src=dict(), # used by assemble
regexp=dict(), # used by assemble
delimiter=dict(), # used by assemble
directory_mode=dict(), # used by copy
unsafe_writes=dict(type='bool'), # should be available to any module using atomic_move
)
PASSWD_ARG_RE = re.compile(r'^[-]{0,2}pass[-]?(word|wd)?')
# Used for parsing symbolic file perms
MODE_OPERATOR_RE = re.compile(r'[+=-]')
USERS_RE = re.compile(r'[^ugo]')
PERMS_RE = re.compile(r'[^rwxXstugo]')
PERM_BITS = 0o7777 # file mode permission bits
EXEC_PERM_BITS = 0o0111 # execute permission bits
DEFAULT_PERM = 0o0666 # default file permission bits
# Used for determining if the system is running a new enough python version
# and should only restrict on our documented minimum versions
_PY3_MIN = sys.version_info[:2] >= (3, 5)
_PY2_MIN = (2, 6) <= sys.version_info[:2] < (3,)
_PY_MIN = _PY3_MIN or _PY2_MIN
if not _PY_MIN:
print(
'\n{"failed": true, '
'"msg": "Ansible requires a minimum of Python2 version 2.6 or Python3 version 3.5. Current version: %s"}' % ''.join(sys.version.splitlines())
)
sys.exit(1)
def get_platform():
''' what's the platform? example: Linux is a platform. '''
return platform.system()
def get_distribution():
''' return the distribution name '''
if platform.system() == 'Linux':
try:
supported_dists = platform._supported_dists + ('arch', 'alpine', 'devuan')
distribution = platform.linux_distribution(supported_dists=supported_dists)[0].capitalize()
if not distribution and os.path.isfile('/etc/system-release'):
distribution = platform.linux_distribution(supported_dists=['system'])[0].capitalize()
if 'Amazon' in distribution:
distribution = 'Amazon'
else:
distribution = 'OtherLinux'
except:
# FIXME: MethodMissing, I assume?
distribution = platform.dist()[0].capitalize()
else:
distribution = None
return distribution
def get_distribution_version():
''' return the distribution version '''
if platform.system() == 'Linux':
try:
distribution_version = platform.linux_distribution()[1]
if not distribution_version and os.path.isfile('/etc/system-release'):
distribution_version = platform.linux_distribution(supported_dists=['system'])[1]
except:
# FIXME: MethodMissing, I assume?
distribution_version = platform.dist()[1]
else:
distribution_version = None
return distribution_version
def get_all_subclasses(cls):
'''
used by modules like Hardware or Network fact classes to retrieve all subclasses of a given class.
__subclasses__ returns only direct subclasses. This one goes down into the class tree.
'''
# Retrieve direct subclasses
subclasses = cls.__subclasses__()
to_visit = list(subclasses)
# Then visit all subclasses
while to_visit:
for sc in to_visit:
# The current class is now visited, so remove it from list
to_visit.remove(sc)
# Append all subclasses to the visit list and keep a reference to each available class
for ssc in sc.__subclasses__():
subclasses.append(ssc)
to_visit.append(ssc)
return subclasses
def load_platform_subclass(cls, *args, **kwargs):
'''
used by modules like User to have different implementations based on detected platform. See User
module for an example.
'''
this_platform = get_platform()
distribution = get_distribution()
subclass = None
# get the most specific superclass for this platform
if distribution is not None:
for sc in get_all_subclasses(cls):
if sc.distribution is not None and sc.distribution == distribution and sc.platform == this_platform:
subclass = sc
if subclass is None:
for sc in get_all_subclasses(cls):
if sc.platform == this_platform and sc.distribution is None:
subclass = sc
if subclass is None:
subclass = cls
return super(cls, subclass).__new__(subclass)
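# Illustrative sketch (not part of this file; mirrors the pattern the docstring above
# describes, class names are hypothetical):
#
#     class User(object):
#         platform = 'Generic'
#         distribution = None
#         def __new__(cls, *args, **kwargs):
#             return load_platform_subclass(User, *args, **kwargs)
#
#     class FreeBsdUser(User):
#         platform = 'FreeBSD'
#
# Instantiating User() on a FreeBSD host would then yield a FreeBsdUser instance.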
def json_dict_unicode_to_bytes(d, encoding='utf-8', errors='surrogate_or_strict'):
''' Recursively convert dict keys and values to byte str
Specialized for json return because this only handles lists, tuples,
and dict container types (the containers that the json module returns)
'''
if isinstance(d, text_type):
return to_bytes(d, encoding=encoding, errors=errors)
elif isinstance(d, dict):
return dict(map(json_dict_unicode_to_bytes, iteritems(d), repeat(encoding), repeat(errors)))
elif isinstance(d, list):
return list(map(json_dict_unicode_to_bytes, d, repeat(encoding), repeat(errors)))
elif isinstance(d, tuple):
return tuple(map(json_dict_unicode_to_bytes, d, repeat(encoding), repeat(errors)))
else:
return d
def json_dict_bytes_to_unicode(d, encoding='utf-8', errors='surrogate_or_strict'):
''' Recursively convert dict keys and values to unicode str
Specialized for json return because this only handles lists, tuples,
and dict container types (the containers that the json module returns)
'''
if isinstance(d, binary_type):
# Warning, can traceback
return to_text(d, encoding=encoding, errors=errors)
elif isinstance(d, dict):
return dict(map(json_dict_bytes_to_unicode, iteritems(d), repeat(encoding), repeat(errors)))
elif isinstance(d, list):
return list(map(json_dict_bytes_to_unicode, d, repeat(encoding), repeat(errors)))
elif isinstance(d, tuple):
return tuple(map(json_dict_bytes_to_unicode, d, repeat(encoding), repeat(errors)))
else:
return d
def return_values(obj):
""" Return native stringified values from datastructures.
For use with removing sensitive values pre-jsonification."""
if isinstance(obj, (text_type, binary_type)):
if obj:
yield to_native(obj, errors='surrogate_or_strict')
return
elif isinstance(obj, SEQUENCETYPE):
for element in obj:
for subelement in return_values(element):
yield subelement
elif isinstance(obj, Mapping):
for element in obj.items():
for subelement in return_values(element[1]):
yield subelement
elif isinstance(obj, (bool, NoneType)):
# This must come before int because bools are also ints
return
elif isinstance(obj, NUMBERTYPES):
yield to_native(obj, nonstring='simplerepr')
else:
raise TypeError('Unknown parameter type: %s, %s' % (type(obj), obj))
def _remove_values_conditions(value, no_log_strings, deferred_removals):
"""
Helper function for :meth:`remove_values`.
:arg value: The value to check for strings that need to be stripped
:arg no_log_strings: set of strings which must be stripped out of any values
:arg deferred_removals: List which holds information about nested
containers that have to be iterated for removals. It is passed into
this function so that more entries can be added to it if value is
a container type. The format of each entry is a 2-tuple where the first
element is the ``value`` parameter and the second value is a new
container to copy the elements of ``value`` into once iterated.
:returns: if ``value`` is a scalar, returns ``value`` with two exceptions:
1. :class:`~datetime.datetime` objects which are changed into a string representation.
2. objects which are in no_log_strings are replaced with a placeholder
so that no sensitive data is leaked.
If ``value`` is a container type, returns a new empty container.
``deferred_removals`` is added to as a side-effect of this function.
.. warning:: It is up to the caller to make sure the order in which value
is passed in is correct. For instance, higher level containers need
to be passed in before lower level containers. For example, given
``{'level1': {'level2': {'level3': [True]}}}`` first pass in the
dictionary for ``level1``, then the dict for ``level2``, and finally
the list for ``level3``.
"""
if isinstance(value, (text_type, binary_type)):
# Need native str type
native_str_value = value
if isinstance(value, text_type):
value_is_text = True
if PY2:
native_str_value = to_bytes(value, errors='surrogate_or_strict')
elif isinstance(value, binary_type):
value_is_text = False
if PY3:
native_str_value = to_text(value, errors='surrogate_or_strict')
if native_str_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
native_str_value = native_str_value.replace(omit_me, '*' * 8)
if value_is_text and isinstance(native_str_value, binary_type):
value = to_text(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
elif not value_is_text and isinstance(native_str_value, text_type):
value = to_bytes(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
else:
value = native_str_value
elif isinstance(value, Sequence):
if isinstance(value, MutableSequence):
new_value = type(value)()
else:
new_value = [] # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Set):
if isinstance(value, MutableSet):
new_value = type(value)()
else:
new_value = set() # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Mapping):
if isinstance(value, MutableMapping):
new_value = type(value)()
else:
new_value = {} # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, tuple(chain(NUMBERTYPES, (bool, NoneType)))):
stringy_value = to_native(value, encoding='utf-8', errors='surrogate_or_strict')
if stringy_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
if omit_me in stringy_value:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
elif isinstance(value, datetime.datetime):
value = value.isoformat()
else:
raise TypeError('Value of unknown type: %s, %s' % (type(value), value))
return value
def remove_values(value, no_log_strings):
""" Remove strings in no_log_strings from value. If value is a container
type, then remove a lot more"""
deferred_removals = deque()
no_log_strings = [to_native(s, errors='surrogate_or_strict') for s in no_log_strings]
new_value = _remove_values_conditions(value, no_log_strings, deferred_removals)
while deferred_removals:
old_data, new_data = deferred_removals.popleft()
if isinstance(new_data, Mapping):
for old_key, old_elem in old_data.items():
new_elem = _remove_values_conditions(old_elem, no_log_strings, deferred_removals)
new_data[old_key] = new_elem
else:
for elem in old_data:
new_elem = _remove_values_conditions(elem, no_log_strings, deferred_removals)
if isinstance(new_data, MutableSequence):
new_data.append(new_elem)
elif isinstance(new_data, MutableSet):
new_data.add(new_elem)
else:
raise TypeError('Unknown container type encountered when removing private values from output')
return new_value
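# Illustrative example of the function above (not part of the original module):
#
#     >>> remove_values({'password': 'hunter2', 'msg': 'auth with hunter2'}, ['hunter2'])
#     {'password': 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER', 'msg': 'auth with ********'}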
def heuristic_log_sanitize(data, no_log_values=None):
''' Remove strings that look like passwords from log messages '''
# Currently filters:
# user:pass@foo/whatever and http://username:pass@wherever/foo
# This code has false positives and consumes parts of logs that are
# not passwds
# begin: start of a passwd containing string
# end: end of a passwd containing string
# sep: char between user and passwd
# prev_begin: where in the overall string to start a search for
# a passwd
# sep_search_end: where in the string to end a search for the sep
data = to_native(data)
output = []
begin = len(data)
prev_begin = begin
sep = 1
while sep:
# Find the potential end of a passwd
try:
end = data.rindex('@', 0, begin)
except ValueError:
# No passwd in the rest of the data
output.insert(0, data[0:begin])
break
# Search for the beginning of a passwd
sep = None
sep_search_end = end
while not sep:
# URL-style username+password
try:
begin = data.rindex('://', 0, sep_search_end)
except ValueError:
# No url style in the data, check for ssh style in the
# rest of the string
begin = 0
# Search for separator
try:
sep = data.index(':', begin + 3, end)
except ValueError:
# No separator; choices:
if begin == 0:
# Searched the whole string so there's no password
# here. Return the remaining data
output.insert(0, data[0:begin])
break
# Search for a different beginning of the password field.
sep_search_end = begin
continue
if sep:
# Password was found; remove it.
output.insert(0, data[end:prev_begin])
output.insert(0, '********')
output.insert(0, data[begin:sep + 1])
prev_begin = begin
output = ''.join(output)
if no_log_values:
output = remove_values(output, no_log_values)
return output
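# Illustrative example of the function above (not part of the original module):
#
#     >>> heuristic_log_sanitize('fetched http://acme:s3cret@example.com/repo')
#     'fetched http://acme:********@example.com/repo'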
def bytes_to_human(size, isbits=False, unit=None):
base = 'Bytes'
if isbits:
base = 'bits'
suffix = ''
for suffix, limit in sorted(iteritems(SIZE_RANGES), key=lambda item: -item[1]):
if (unit is None and size >= limit) or unit is not None and unit.upper() == suffix[0]:
break
if limit != 1:
suffix += base[0]
else:
suffix = base
return '%.2f %s' % (size / limit, suffix)
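# Illustrative examples of the function above (not part of the original module):
#
#     >>> bytes_to_human(2048)
#     '2.00 KB'
#     >>> bytes_to_human(2048, isbits=True)
#     '2.00 Kb'
#     >>> bytes_to_human(1048576, unit='M')
#     '1.00 MB'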
def human_to_bytes(number, default_unit=None, isbits=False):
'''
Convert number in string format into bytes (ex: '2K' => 2048) or using unit argument
ex:
human_to_bytes('10M') <=> human_to_bytes(10, 'M')
'''
m = re.search(r'^\s*(\d*\.?\d*)\s*([A-Za-z]+)?', str(number), flags=re.IGNORECASE)
if m is None:
raise ValueError("human_to_bytes() can't interpret following string: %s" % str(number))
try:
num = float(m.group(1))
except:
raise ValueError("human_to_bytes() can't interpret following number: %s (original input string: %s)" % (m.group(1), number))
unit = m.group(2)
if unit is None:
unit = default_unit
if unit is None:
# No unit given, returning raw number
return int(round(num))
range_key = unit[0].upper()
try:
limit = SIZE_RANGES[range_key]
except:
raise ValueError("human_to_bytes() failed to convert %s (unit = %s). The suffix must be one of %s" % (number, unit, ", ".join(SIZE_RANGES.keys())))
# default value
unit_class = 'B'
unit_class_name = 'byte'
# handling bits case
if isbits:
unit_class = 'b'
unit_class_name = 'bit'
# check unit value if more than one character (KB, MB)
if len(unit) > 1:
expect_message = 'expect %s%s or %s' % (range_key, unit_class, range_key)
if range_key == 'B':
expect_message = 'expect %s or %s' % (unit_class, unit_class_name)
if unit_class_name in unit.lower():
pass
elif unit[1] != unit_class:
raise ValueError("human_to_bytes() failed to convert %s. Value is not a valid string (%s)" % (number, expect_message))
return int(round(num * limit))
def _load_params():
''' read the module's parameters and store them globally.
This function may be needed for certain very dynamic custom modules which
want to process the parameters that are being handed to the module. Since
this is so closely tied to the implementation of modules we cannot
guarantee API stability for it (it may change between versions) however we
will try not to break it gratuitously. It is certainly more future-proof
to call this function and consume its outputs than to implement the logic
inside it as a copy in your own code.
'''
global _ANSIBLE_ARGS
if _ANSIBLE_ARGS is not None:
buffer = _ANSIBLE_ARGS
else:
# debug overrides to read args from file or cmdline
# Avoid tracebacks when locale is non-utf8
# We control the args and we pass them as utf8
if len(sys.argv) > 1:
if os.path.isfile(sys.argv[1]):
fd = open(sys.argv[1], 'rb')
buffer = fd.read()
fd.close()
else:
buffer = sys.argv[1]
if PY3:
buffer = buffer.encode('utf-8', errors='surrogateescape')
# default case, read from stdin
else:
if PY2:
buffer = sys.stdin.read()
else:
buffer = sys.stdin.buffer.read()
_ANSIBLE_ARGS = buffer
try:
params = json.loads(buffer.decode('utf-8'))
except ValueError:
# This helper used too early for fail_json to work.
print('\n{"msg": "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed", "failed": true}')
sys.exit(1)
if PY2:
params = json_dict_unicode_to_bytes(params)
try:
return params['ANSIBLE_MODULE_ARGS']
except KeyError:
# This helper does not have access to fail_json so we have to print
# json output on our own.
print('\n{"msg": "Error: Module unable to locate ANSIBLE_MODULE_ARGS in json data from stdin. Unable to figure out what parameters were passed", '
'"failed": true}')
sys.exit(1)
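# Illustrative invocation (not part of the original module; "my_module.py" is a
# hypothetical file name, useful when debugging a module directly): arguments may be
# passed as a JSON string or a file path in sys.argv[1], otherwise they are read
# from stdin, e.g.
#
#     $ python my_module.py '{"ANSIBLE_MODULE_ARGS": {"state": "present"}}'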
def env_fallback(*args, **kwargs):
''' Load value from environment '''
for arg in args:
if arg in os.environ:
return os.environ[arg]
raise AnsibleFallbackNotFound
def _lenient_lowercase(lst):
"""Lowercase elements of a list.
If an element is not a string, pass it through untouched.
"""
lowered = []
for value in lst:
try:
lowered.append(value.lower())
except AttributeError:
lowered.append(value)
return lowered
def format_attributes(attributes):
attribute_list = []
for attr in attributes:
if attr in FILE_ATTRIBUTES:
attribute_list.append(FILE_ATTRIBUTES[attr])
return attribute_list
def get_flags_from_attributes(attributes):
flags = []
for key, attr in FILE_ATTRIBUTES.items():
if attr in attributes:
flags.append(key)
return ''.join(flags)
def _json_encode_fallback(obj):
if isinstance(obj, Set):
return list(obj)
elif isinstance(obj, datetime.datetime):
return obj.isoformat()
raise TypeError("Cannot json serialize %s" % to_native(obj))
def jsonify(data, **kwargs):
for encoding in ("utf-8", "latin-1"):
try:
return json.dumps(data, encoding=encoding, default=_json_encode_fallback, **kwargs)
# Old systems using an old simplejson module do not support the encoding keyword.
except TypeError:
try:
new_data = json_dict_bytes_to_unicode(data, encoding=encoding)
except UnicodeDecodeError:
continue
return json.dumps(new_data, default=_json_encode_fallback, **kwargs)
except UnicodeDecodeError:
continue
raise UnicodeError('Invalid unicode encoding encountered')
def missing_required_lib(library):
hostname = platform.node()
return "Failed to import the required Python library (%s) on %s's Python %s. Please read module documentation " \
"and install in the appropriate location." % (library, hostname, sys.executable)
class AnsibleFallbackNotFound(Exception):
pass
class AnsibleModule(object):
def __init__(self, argument_spec, bypass_checks=False, no_log=False,
check_invalid_arguments=None, mutually_exclusive=None, required_together=None,
required_one_of=None, add_file_common_args=False, supports_check_mode=False,
required_if=None):
'''
common code for quickly building an ansible module in Python
(although you can write modules in anything that can return JSON)
see library/* for examples
'''
self._name = os.path.basename(__file__) # initialize name until we can parse from options
self.argument_spec = argument_spec
self.supports_check_mode = supports_check_mode
self.check_mode = False
self.bypass_checks = bypass_checks
self.no_log = no_log
# Check whether code set this explicitly for deprecation purposes
if check_invalid_arguments is None:
check_invalid_arguments = True
module_set_check_invalid_arguments = False
else:
module_set_check_invalid_arguments = True
self.check_invalid_arguments = check_invalid_arguments
self.mutually_exclusive = mutually_exclusive
self.required_together = required_together
self.required_one_of = required_one_of
self.required_if = required_if
self.cleanup_files = []
self._debug = False
self._diff = False
self._socket_path = None
self._shell = None
self._verbosity = 0
# May be used to set modifications to the environment for any
# run_command invocation
self.run_command_environ_update = {}
self._warnings = []
self._deprecations = []
self._clean = {}
self.aliases = {}
self._legal_inputs = ['_ansible_%s' % k for k in PASS_VARS]
self._options_context = list()
self._tmpdir = None
if add_file_common_args:
for k, v in FILE_COMMON_ARGUMENTS.items():
if k not in self.argument_spec:
self.argument_spec[k] = v
self._load_params()
self._set_fallbacks()
# append to legal_inputs and then possibly check against them
try:
self.aliases = self._handle_aliases()
except Exception as e:
# Use exceptions here because it isn't safe to call fail_json until no_log is processed
print('\n{"failed": true, "msg": "Module alias error: %s"}' % to_native(e))
sys.exit(1)
# Save parameter values that should never be logged
self.no_log_values = set()
self._handle_no_log_values()
# check the locale as set by the current environment, and reset to
# a known valid (LANG=C) if it's an invalid/unavailable locale
self._check_locale()
self._check_arguments(check_invalid_arguments)
# check exclusive early
if not bypass_checks:
self._check_mutually_exclusive(mutually_exclusive)
self._set_defaults(pre=True)
self._CHECK_ARGUMENT_TYPES_DISPATCHER = {
'str': self._check_type_str,
'list': self._check_type_list,
'dict': self._check_type_dict,
'bool': self._check_type_bool,
'int': self._check_type_int,
'float': self._check_type_float,
'path': self._check_type_path,
'raw': self._check_type_raw,
'jsonarg': self._check_type_jsonarg,
'json': self._check_type_jsonarg,
'bytes': self._check_type_bytes,
'bits': self._check_type_bits,
}
if not bypass_checks:
self._check_required_arguments()
self._check_argument_types()
self._check_argument_values()
self._check_required_together(required_together)
self._check_required_one_of(required_one_of)
self._check_required_if(required_if)
self._set_defaults(pre=False)
# deal with options sub-spec
self._handle_options()
if not self.no_log:
self._log_invocation()
# finally, make sure we're in a sane working dir
self._set_cwd()
# Do this at the end so that logging parameters have been set up
# This is to warn third-party module authors that the functionality is going away.
# We exclude uri and zfs as they have their own deprecation warnings for users and we'll
# make sure to update their code to stop using check_invalid_arguments when 2.9 rolls around
if module_set_check_invalid_arguments and self._name not in ('uri', 'zfs'):
self.deprecate('Setting check_invalid_arguments is deprecated and will be removed.'
' Update the code for this module. In the future, AnsibleModule will'
' always check for invalid arguments.', version='2.9')
@property
def tmpdir(self):
# if _ansible_tmpdir was not set and we have a remote_tmp,
# the module needs to create it and clean it up once finished.
# otherwise we create our own module tmp dir from the system defaults
if self._tmpdir is None:
basedir = None
basedir = os.path.expanduser(os.path.expandvars(self._remote_tmp))
if not os.path.exists(basedir):
try:
os.makedirs(basedir, mode=0o700)
except (OSError, IOError) as e:
self.warn("Unable to use %s as temporary directory, "
"failing back to system: %s" % (basedir, to_native(e)))
basedir = None
else:
self.warn("Module remote_tmp %s did not exist and was "
"created with a mode of 0700, this may cause"
" issues when running as another user. To "
"avoid this, create the remote_tmp dir with "
"the correct permissions manually" % basedir)
basefile = "ansible-moduletmp-%s-" % time.time()
try:
tmpdir = tempfile.mkdtemp(prefix=basefile, dir=basedir)
except (OSError, IOError) as e:
self.fail_json(
msg="Failed to create remote module tmp path at dir %s "
"with prefix %s: %s" % (basedir, basefile, to_native(e))
)
if not self._keep_remote_files:
atexit.register(shutil.rmtree, tmpdir)
self._tmpdir = tmpdir
return self._tmpdir
def warn(self, warning):
if isinstance(warning, string_types):
self._warnings.append(warning)
self.log('[WARNING] %s' % warning)
else:
raise TypeError("warn requires a string not a %s" % type(warning))
def deprecate(self, msg, version=None):
if isinstance(msg, string_types):
self._deprecations.append({
'msg': msg,
'version': version
})
self.log('[DEPRECATION WARNING] %s %s' % (msg, version))
else:
raise TypeError("deprecate requires a string not a %s" % type(msg))
def load_file_common_arguments(self, params):
'''
Many modules deal with files; this encapsulates common
options that the file module accepts such that it is directly
available to all modules and they can share code.
'''
path = params.get('path', params.get('dest', None))
if path is None:
return {}
else:
path = os.path.expanduser(os.path.expandvars(path))
b_path = to_bytes(path, errors='surrogate_or_strict')
# if the path is a symlink, and we're following links, get
# the target of the link instead for testing
if params.get('follow', False) and os.path.islink(b_path):
b_path = os.path.realpath(b_path)
path = to_native(b_path)
mode = params.get('mode', None)
owner = params.get('owner', None)
group = params.get('group', None)
# selinux related options
seuser = params.get('seuser', None)
serole = params.get('serole', None)
setype = params.get('setype', None)
selevel = params.get('selevel', None)
secontext = [seuser, serole, setype]
if self.selinux_mls_enabled():
secontext.append(selevel)
default_secontext = self.selinux_default_context(path)
for i in range(len(default_secontext)):
if i is not None and secontext[i] == '_default':
secontext[i] = default_secontext[i]
attributes = params.get('attributes', None)
return dict(
path=path, mode=mode, owner=owner, group=group,
seuser=seuser, serole=serole, setype=setype,
selevel=selevel, secontext=secontext, attributes=attributes,
)
# Detect whether using selinux that is MLS-aware.
# While this means you can set the level/range with
# selinux.lsetfilecon(), it may or may not mean that you
# will get the selevel as part of the context returned
# by selinux.lgetfilecon().
def selinux_mls_enabled(self):
if not HAVE_SELINUX:
return False
if selinux.is_selinux_mls_enabled() == 1:
return True
else:
return False
def selinux_enabled(self):
if not HAVE_SELINUX:
seenabled = self.get_bin_path('selinuxenabled')
if seenabled is not None:
(rc, out, err) = self.run_command(seenabled)
if rc == 0:
self.fail_json(msg="Aborting, target uses selinux but python bindings (libselinux-python) aren't installed!")
return False
if selinux.is_selinux_enabled() == 1:
return True
else:
return False
# Determine whether we need a placeholder for selevel/mls
def selinux_initial_context(self):
context = [None, None, None]
if self.selinux_mls_enabled():
context.append(None)
return context
# If selinux fails to find a default, return an array of None
def selinux_default_context(self, path, mode=0):
context = self.selinux_initial_context()
if not HAVE_SELINUX or not self.selinux_enabled():
return context
try:
ret = selinux.matchpathcon(to_native(path, errors='surrogate_or_strict'), mode)
except OSError:
return context
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def selinux_context(self, path):
context = self.selinux_initial_context()
if not HAVE_SELINUX or not self.selinux_enabled():
return context
try:
ret = selinux.lgetfilecon_raw(to_native(path, errors='surrogate_or_strict'))
except OSError as e:
if e.errno == errno.ENOENT:
self.fail_json(path=path, msg='path %s does not exist' % path)
else:
self.fail_json(path=path, msg='failed to retrieve selinux context')
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def user_and_group(self, path, expand=True):
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
st = os.lstat(b_path)
uid = st.st_uid
gid = st.st_gid
return (uid, gid)
def find_mount_point(self, path):
path_is_bytes = False
if isinstance(path, binary_type):
path_is_bytes = True
b_path = os.path.realpath(to_bytes(os.path.expanduser(os.path.expandvars(path)), errors='surrogate_or_strict'))
while not os.path.ismount(b_path):
b_path = os.path.dirname(b_path)
if path_is_bytes:
return b_path
return to_text(b_path, errors='surrogate_or_strict')
def is_special_selinux_path(self, path):
"""
Returns a tuple containing (True, selinux_context) if the given path is on a
NFS or other 'special' fs mount point, otherwise the return will be (False, None).
"""
try:
f = open('/proc/mounts', 'r')
mount_data = f.readlines()
f.close()
except:
return (False, None)
path_mount_point = self.find_mount_point(path)
for line in mount_data:
(device, mount_point, fstype, options, rest) = line.split(' ', 4)
if path_mount_point == mount_point:
for fs in self._selinux_special_fs:
if fs in fstype:
special_context = self.selinux_context(path_mount_point)
return (True, special_context)
return (False, None)
def set_default_selinux_context(self, path, changed):
if not HAVE_SELINUX or not self.selinux_enabled():
return changed
context = self.selinux_default_context(path)
return self.set_context_if_different(path, context, False)
def set_context_if_different(self, path, context, changed, diff=None):
if not HAVE_SELINUX or not self.selinux_enabled():
return changed
if self.check_file_absent_if_check_mode(path):
return True
cur_context = self.selinux_context(path)
new_context = list(cur_context)
# Iterate over the current context instead of the
# argument context, which may have selevel.
(is_special_se, sp_context) = self.is_special_selinux_path(path)
if is_special_se:
new_context = sp_context
else:
for i in range(len(cur_context)):
if len(context) > i:
if context[i] is not None and context[i] != cur_context[i]:
new_context[i] = context[i]
elif context[i] is None:
new_context[i] = cur_context[i]
if cur_context != new_context:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['secontext'] = cur_context
if 'after' not in diff:
diff['after'] = {}
diff['after']['secontext'] = new_context
try:
if self.check_mode:
return True
rc = selinux.lsetfilecon(to_native(path), ':'.join(new_context))
except OSError as e:
self.fail_json(path=path, msg='invalid selinux context: %s' % to_native(e),
new_context=new_context, cur_context=cur_context, input_was=context)
if rc != 0:
self.fail_json(path=path, msg='set selinux context failed')
changed = True
return changed
def set_owner_if_different(self, path, owner, changed, diff=None, expand=True):
if owner is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
uid = int(owner)
except ValueError:
try:
uid = pwd.getpwnam(owner).pw_uid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: failed to look up user %s' % owner)
if orig_uid != uid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['owner'] = orig_uid
if 'after' not in diff:
diff['after'] = {}
diff['after']['owner'] = uid
if self.check_mode:
return True
try:
os.lchown(b_path, uid, -1)
except (IOError, OSError) as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: %s' % (to_text(e)))
changed = True
return changed
def set_group_if_different(self, path, group, changed, diff=None, expand=True):
if group is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
gid = int(group)
except ValueError:
try:
gid = grp.getgrnam(group).gr_gid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed: failed to look up group %s' % group)
if orig_gid != gid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['group'] = orig_gid
if 'after' not in diff:
diff['after'] = {}
diff['after']['group'] = gid
if self.check_mode:
return True
try:
os.lchown(b_path, -1, gid)
except OSError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed')
changed = True
return changed
def set_mode_if_different(self, path, mode, changed, diff=None, expand=True):
if mode is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
path_stat = os.lstat(b_path)
if self.check_file_absent_if_check_mode(b_path):
return True
if not isinstance(mode, int):
try:
mode = int(mode, 8)
except Exception:
try:
mode = self._symbolic_mode_to_octal(path_stat, mode)
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path,
msg="mode must be in octal or symbolic form",
details=to_native(e))
if mode != stat.S_IMODE(mode):
# prevent mode from having extra info or being an invalid long number
path = to_text(b_path)
self.fail_json(path=path, msg="Invalid mode supplied, only permission info is allowed", details=mode)
prev_mode = stat.S_IMODE(path_stat.st_mode)
if prev_mode != mode:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['mode'] = '0%03o' % prev_mode
if 'after' not in diff:
diff['after'] = {}
diff['after']['mode'] = '0%03o' % mode
if self.check_mode:
return True
# FIXME: comparison against string above will cause this to be executed
# every time
try:
if hasattr(os, 'lchmod'):
os.lchmod(b_path, mode)
else:
if not os.path.islink(b_path):
os.chmod(b_path, mode)
else:
# Attempt to set the perms of the symlink but be
# careful not to change the perms of the underlying
# file while trying
underlying_stat = os.stat(b_path)
os.chmod(b_path, mode)
new_underlying_stat = os.stat(b_path)
if underlying_stat.st_mode != new_underlying_stat.st_mode:
os.chmod(b_path, stat.S_IMODE(underlying_stat.st_mode))
except OSError as e:
if os.path.islink(b_path) and e.errno == errno.EPERM: # Can't set mode on symbolic links
pass
elif e.errno in (errno.ENOENT, errno.ELOOP): # Can't set mode on broken symbolic links
pass
else:
raise
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chmod failed', details=to_native(e),
exception=traceback.format_exc())
path_stat = os.lstat(b_path)
new_mode = stat.S_IMODE(path_stat.st_mode)
if new_mode != prev_mode:
changed = True
return changed
def set_attributes_if_different(self, path, attributes, changed, diff=None, expand=True):
if attributes is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
existing = self.get_file_attributes(b_path)
attr_mod = '='
if attributes.startswith(('-', '+')):
attr_mod = attributes[0]
attributes = attributes[1:]
if existing.get('attr_flags', '') != attributes or attr_mod == '-':
attrcmd = self.get_bin_path('chattr')
if attrcmd:
attrcmd = [attrcmd, '%s%s' % (attr_mod, attributes), b_path]
changed = True
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['attributes'] = existing.get('attr_flags')
if 'after' not in diff:
diff['after'] = {}
diff['after']['attributes'] = '%s%s' % (attr_mod, attributes)
if not self.check_mode:
try:
rc, out, err = self.run_command(attrcmd)
if rc != 0 or err:
raise Exception("Error while setting attributes: %s" % (out + err))
except Exception as e:
self.fail_json(path=to_text(b_path), msg='chattr failed',
details=to_native(e), exception=traceback.format_exc())
return changed
def get_file_attributes(self, path):
output = {}
attrcmd = self.get_bin_path('lsattr', False)
if attrcmd:
attrcmd = [attrcmd, '-vd', path]
try:
rc, out, err = self.run_command(attrcmd)
if rc == 0:
res = out.split()
output['attr_flags'] = res[1].replace('-', '').strip()
output['version'] = res[0].strip()
output['attributes'] = format_attributes(output['attr_flags'])
except Exception:
pass
return output
@classmethod
def _symbolic_mode_to_octal(cls, path_stat, symbolic_mode):
"""
This enables symbolic chmod string parsing as stated in the chmod man-page
This includes things like: "u=rw-x+X,g=r-x+X,o=r-x+X"
"""
new_mode = stat.S_IMODE(path_stat.st_mode)
# Now parse all symbolic modes
for mode in symbolic_mode.split(','):
# Per single mode. This always contains a '+', '-' or '='
# Split it on that
permlist = MODE_OPERATOR_RE.split(mode)
# And find all the operators
opers = MODE_OPERATOR_RE.findall(mode)
# The user(s) the clause applies to is the first element in the
# 'permlist' list. Take that and remove it from the list.
# An empty user or 'a' means 'all'.
users = permlist.pop(0)
use_umask = (users == '')
if users == 'a' or users == '':
users = 'ugo'
# Check if there are illegal characters in the user list
# They can end up in 'users' because they are not split
if USERS_RE.match(users):
raise ValueError("bad symbolic permission for mode: %s" % mode)
# Now we have two list of equal length, one contains the requested
# permissions and one with the corresponding operators.
for idx, perms in enumerate(permlist):
# Check if there are illegal characters in the permissions
if PERMS_RE.match(perms):
raise ValueError("bad symbolic permission for mode: %s" % mode)
for user in users:
mode_to_apply = cls._get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask)
new_mode = cls._apply_operation_to_mode(user, opers[idx], mode_to_apply, new_mode)
return new_mode
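# Illustrative sketch (not part of the original source): how a symbolic mode
# string resolves against an existing stat result. The file name 'demo.txt'
# and its starting permissions (0o644) are assumptions for the example.
#
#     import os
#     import stat
#     st = os.lstat('demo.txt')                        # assume S_IMODE(st.st_mode) == 0o644
#     new_mode = AnsibleModule._symbolic_mode_to_octal(st, 'u+x,g-w')
#     assert stat.S_IMODE(new_mode) == 0o744           # user gains x, group had no w to lose
#
# Clauses are applied left to right, so a later clause can override an earlier
# one, mirroring chmod(1) behaviour.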
@staticmethod
def _apply_operation_to_mode(user, operator, mode_to_apply, current_mode):
if operator == '=':
if user == 'u':
mask = stat.S_IRWXU | stat.S_ISUID
elif user == 'g':
mask = stat.S_IRWXG | stat.S_ISGID
elif user == 'o':
mask = stat.S_IRWXO | stat.S_ISVTX
# mask out u, g, or o permissions from current_mode and apply new permissions
inverse_mask = mask ^ PERM_BITS
new_mode = (current_mode & inverse_mask) | mode_to_apply
elif operator == '+':
new_mode = current_mode | mode_to_apply
elif operator == '-':
new_mode = current_mode - (current_mode & mode_to_apply)
return new_mode
@staticmethod
def _get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask):
prev_mode = stat.S_IMODE(path_stat.st_mode)
is_directory = stat.S_ISDIR(path_stat.st_mode)
has_x_permissions = (prev_mode & EXEC_PERM_BITS) > 0
apply_X_permission = is_directory or has_x_permissions
# Get the umask, if the 'user' part is empty, the effect is as if (a) were
# given, but bits that are set in the umask are not affected.
# We also need the "reversed umask" for masking
umask = os.umask(0)
os.umask(umask)
rev_umask = umask ^ PERM_BITS
# Permission bits constants documented at:
# http://docs.python.org/2/library/stat.html#stat.S_ISUID
if apply_X_permission:
X_perms = {
'u': {'X': stat.S_IXUSR},
'g': {'X': stat.S_IXGRP},
'o': {'X': stat.S_IXOTH},
}
else:
X_perms = {
'u': {'X': 0},
'g': {'X': 0},
'o': {'X': 0},
}
user_perms_to_modes = {
'u': {
'r': rev_umask & stat.S_IRUSR if use_umask else stat.S_IRUSR,
'w': rev_umask & stat.S_IWUSR if use_umask else stat.S_IWUSR,
'x': rev_umask & stat.S_IXUSR if use_umask else stat.S_IXUSR,
's': stat.S_ISUID,
't': 0,
'u': prev_mode & stat.S_IRWXU,
'g': (prev_mode & stat.S_IRWXG) << 3,
'o': (prev_mode & stat.S_IRWXO) << 6},
'g': {
'r': rev_umask & stat.S_IRGRP if use_umask else stat.S_IRGRP,
'w': rev_umask & stat.S_IWGRP if use_umask else stat.S_IWGRP,
'x': rev_umask & stat.S_IXGRP if use_umask else stat.S_IXGRP,
's': stat.S_ISGID,
't': 0,
'u': (prev_mode & stat.S_IRWXU) >> 3,
'g': prev_mode & stat.S_IRWXG,
'o': (prev_mode & stat.S_IRWXO) << 3},
'o': {
'r': rev_umask & stat.S_IROTH if use_umask else stat.S_IROTH,
'w': rev_umask & stat.S_IWOTH if use_umask else stat.S_IWOTH,
'x': rev_umask & stat.S_IXOTH if use_umask else stat.S_IXOTH,
's': 0,
't': stat.S_ISVTX,
'u': (prev_mode & stat.S_IRWXU) >> 6,
'g': (prev_mode & stat.S_IRWXG) >> 3,
'o': prev_mode & stat.S_IRWXO},
}
# Insert X_perms into user_perms_to_modes
for key, value in X_perms.items():
user_perms_to_modes[key].update(value)
def or_reduce(mode, perm):
return mode | user_perms_to_modes[user][perm]
return reduce(or_reduce, perms, 0)
def set_fs_attributes_if_different(self, file_args, changed, diff=None, expand=True):
# set modes owners and context as needed
changed = self.set_context_if_different(
file_args['path'], file_args['secontext'], changed, diff
)
changed = self.set_owner_if_different(
file_args['path'], file_args['owner'], changed, diff, expand
)
changed = self.set_group_if_different(
file_args['path'], file_args['group'], changed, diff, expand
)
changed = self.set_mode_if_different(
file_args['path'], file_args['mode'], changed, diff, expand
)
changed = self.set_attributes_if_different(
file_args['path'], file_args['attributes'], changed, diff, expand
)
return changed
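# Illustrative sketch (assumed calling pattern, relying on
# load_file_common_arguments() which is defined elsewhere in this class but
# not shown in this excerpt): modules typically build the file_args dict from
# their parameters and let the helper above reconcile owner, group, mode,
# selinux context and attributes in one call.
#
#     file_args = module.load_file_common_arguments(module.params)
#     changed = module.set_fs_attributes_if_different(file_args, changed)
#     module.exit_json(changed=changed, path=file_args['path'])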
def check_file_absent_if_check_mode(self, file_path):
return self.check_mode and not os.path.exists(file_path)
def set_directory_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def set_file_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def add_path_info(self, kwargs):
'''
for results that are files, supplement the info about the file
in the return path with stats about the file path.
'''
path = kwargs.get('path', kwargs.get('dest', None))
if path is None:
return kwargs
b_path = to_bytes(path, errors='surrogate_or_strict')
if os.path.exists(b_path):
(uid, gid) = self.user_and_group(path)
kwargs['uid'] = uid
kwargs['gid'] = gid
try:
user = pwd.getpwuid(uid)[0]
except KeyError:
user = str(uid)
try:
group = grp.getgrgid(gid)[0]
except KeyError:
group = str(gid)
kwargs['owner'] = user
kwargs['group'] = group
st = os.lstat(b_path)
kwargs['mode'] = '0%03o' % stat.S_IMODE(st[stat.ST_MODE])
# secontext not yet supported
if os.path.islink(b_path):
kwargs['state'] = 'link'
elif os.path.isdir(b_path):
kwargs['state'] = 'directory'
elif os.stat(b_path).st_nlink > 1:
kwargs['state'] = 'hard'
else:
kwargs['state'] = 'file'
if HAVE_SELINUX and self.selinux_enabled():
kwargs['secontext'] = ':'.join(self.selinux_context(path))
kwargs['size'] = st[stat.ST_SIZE]
else:
kwargs['state'] = 'absent'
return kwargs
def _check_locale(self):
'''
Uses the locale module to test the currently set locale
(per the LANG and LC_CTYPE environment settings)
'''
try:
# setting the locale to '' uses the default locale
# as it would be returned by locale.getdefaultlocale()
locale.setlocale(locale.LC_ALL, '')
except locale.Error:
# fallback to the 'C' locale, which may cause unicode
# issues but is preferable to simply failing because
# of an unknown locale
locale.setlocale(locale.LC_ALL, 'C')
os.environ['LANG'] = 'C'
os.environ['LC_ALL'] = 'C'
os.environ['LC_MESSAGES'] = 'C'
except Exception as e:
self.fail_json(msg="An unknown error was encountered while attempting to validate the locale: %s" %
to_native(e), exception=traceback.format_exc())
def _handle_aliases(self, spec=None, param=None):
# this uses exceptions as it happens before we can safely call fail_json
aliases_results = {} # alias:canon
if param is None:
param = self.params
if spec is None:
spec = self.argument_spec
for (k, v) in spec.items():
self._legal_inputs.append(k)
aliases = v.get('aliases', None)
default = v.get('default', None)
required = v.get('required', False)
if default is not None and required:
# not alias specific but this is a good place to check this
raise Exception("internal error: required and default are mutually exclusive for %s" % k)
if aliases is None:
continue
if not isinstance(aliases, SEQUENCETYPE) or isinstance(aliases, (binary_type, text_type)):
raise Exception('internal error: aliases must be a list or tuple')
for alias in aliases:
self._legal_inputs.append(alias)
aliases_results[alias] = k
if alias in param:
param[k] = param[alias]
return aliases_results
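# Illustrative sketch (hypothetical argument spec): aliases are copied onto
# their canonical key before validation, so callers may use either name.
#
#     spec = dict(path=dict(type='path', aliases=['dest', 'name']))
#     # with params == {'dest': '/tmp/x'} the loop above sets
#     # params['path'] = '/tmp/x' and returns {'dest': 'path', 'name': 'path'}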
def _handle_no_log_values(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
# Use the argspec to determine which args are no_log
for arg_name, arg_opts in spec.items():
if arg_opts.get('no_log', False):
# Find the value for the no_log'd param
no_log_object = param.get(arg_name, None)
if no_log_object:
self.no_log_values.update(return_values(no_log_object))
if arg_opts.get('removed_in_version') is not None and arg_name in param:
self._deprecations.append({
'msg': "Param '%s' is deprecated. See the module docs for more information" % arg_name,
'version': arg_opts.get('removed_in_version')
})
def _check_arguments(self, check_invalid_arguments, spec=None, param=None, legal_inputs=None):
self._syslog_facility = 'LOG_USER'
unsupported_parameters = set()
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
if legal_inputs is None:
legal_inputs = self._legal_inputs
for (k, v) in list(param.items()):
if check_invalid_arguments and k not in legal_inputs:
unsupported_parameters.add(k)
elif k.startswith('_ansible_'):
# handle setting internal properties from internal ansible vars
key = k.replace('_ansible_', '')
if key in PASS_BOOLS:
setattr(self, PASS_VARS[key], self.boolean(v))
else:
setattr(self, PASS_VARS[key], v)
# clean up internal params:
del self.params[k]
if unsupported_parameters:
msg = "Unsupported parameters for (%s) module: %s" % (self._name, ', '.join(sorted(list(unsupported_parameters))))
if self._options_context:
msg += " found in %s." % " -> ".join(self._options_context)
msg += " Supported parameters include: %s" % (', '.join(sorted(spec.keys())))
self.fail_json(msg=msg)
if self.check_mode and not self.supports_check_mode:
self.exit_json(skipped=True, msg="remote module (%s) does not support check mode" % self._name)
def _count_terms(self, check, param=None):
count = 0
if param is None:
param = self.params
for term in check:
if term in param:
count += 1
return count
def _check_mutually_exclusive(self, spec, param=None):
if spec is None:
return
for check in spec:
count = self._count_terms(check, param)
if count > 1:
msg = "parameters are mutually exclusive: %s" % ', '.join(check)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_one_of(self, spec, param=None):
if spec is None:
return
for check in spec:
count = self._count_terms(check, param)
if count == 0:
msg = "one of the following is required: %s" % ', '.join(check)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_together(self, spec, param=None):
if spec is None:
return
for check in spec:
counts = [self._count_terms([field], param) for field in check]
non_zero = [c for c in counts if c > 0]
if len(non_zero) > 0:
if 0 in counts:
msg = "parameters are required together: %s" % ', '.join(check)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_arguments(self, spec=None, param=None):
''' ensure all required arguments are present '''
missing = []
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
required = v.get('required', False)
if required and k not in param:
missing.append(k)
if len(missing) > 0:
msg = "missing required arguments: %s" % ", ".join(missing)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_if(self, spec, param=None):
''' ensure that parameters which conditionally required are present '''
if spec is None:
return
if param is None:
param = self.params
for sp in spec:
missing = []
max_missing_count = 0
is_one_of = False
if len(sp) == 4:
key, val, requirements, is_one_of = sp
else:
key, val, requirements = sp
# If is_one_of is True, at least one requirement should be
# present; otherwise all requirements should be present.
if is_one_of:
max_missing_count = len(requirements)
term = 'any'
else:
term = 'all'
if key in param and param[key] == val:
for check in requirements:
count = self._count_terms((check,), param)
if count == 0:
missing.append(check)
if len(missing) and len(missing) >= max_missing_count:
msg = "%s is %s but %s of the following are missing: %s" % (key, val, term, ', '.join(missing))
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_argument_values(self, spec=None, param=None):
''' ensure all arguments have the requested values, and there are no stray arguments '''
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
choices = v.get('choices', None)
if choices is None:
continue
if isinstance(choices, SEQUENCETYPE) and not isinstance(choices, (binary_type, text_type)):
if k in param:
# Allow one or more when type='list' param with choices
if isinstance(param[k], list):
diff_list = ", ".join([item for item in param[k] if item not in choices])
if diff_list:
choices_str = ", ".join([to_native(c) for c in choices])
msg = "value of %s must be one or more of: %s. Got no match for: %s" % (k, choices_str, diff_list)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
elif param[k] not in choices:
# PyYaml converts certain strings to bools. If we can unambiguously convert back, do so before checking
# the value. If we can't figure this out, module author is responsible.
lowered_choices = None
if param[k] == 'False':
lowered_choices = _lenient_lowercase(choices)
overlap = BOOLEANS_FALSE.intersection(choices)
if len(overlap) == 1:
# Extract from a set
(param[k],) = overlap
if param[k] == 'True':
if lowered_choices is None:
lowered_choices = _lenient_lowercase(choices)
overlap = BOOLEANS_TRUE.intersection(choices)
if len(overlap) == 1:
(param[k],) = overlap
if param[k] not in choices:
choices_str = ", ".join([to_native(c) for c in choices])
msg = "value of %s must be one of: %s, got: %s" % (k, choices_str, param[k])
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
else:
msg = "internal error: choices for argument %s are not iterable: %s" % (k, choices)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def safe_eval(self, value, locals=None, include_exceptions=False):
# do not allow method calls to modules
if not isinstance(value, string_types):
# already templated to a data structure, perhaps?
if include_exceptions:
return (value, None)
return value
if re.search(r'\w\.\w+\(', value):
if include_exceptions:
return (value, None)
return value
# do not allow imports
if re.search(r'import \w+', value):
if include_exceptions:
return (value, None)
return value
try:
result = literal_eval(value)
if include_exceptions:
return (result, None)
else:
return result
except Exception as e:
if include_exceptions:
return (value, e)
return value
def _check_type_str(self, value):
if isinstance(value, string_types):
return value
# Note: This could throw a unicode error if value's __str__() method
# returns non-ascii. Have to port utils.to_bytes() if that happens
return str(value)
def _check_type_list(self, value):
if isinstance(value, list):
return value
if isinstance(value, string_types):
return value.split(",")
elif isinstance(value, int) or isinstance(value, float):
return [str(value)]
raise TypeError('%s cannot be converted to a list' % type(value))
def _check_type_dict(self, value):
if isinstance(value, dict):
return value
if isinstance(value, string_types):
if value.startswith("{"):
try:
return json.loads(value)
except Exception:
(result, exc) = self.safe_eval(value, dict(), include_exceptions=True)
if exc is not None:
raise TypeError('unable to evaluate string as dictionary')
return result
elif '=' in value:
fields = []
field_buffer = []
in_quote = False
in_escape = False
for c in value.strip():
if in_escape:
field_buffer.append(c)
in_escape = False
elif c == '\\':
in_escape = True
elif not in_quote and c in ('\'', '"'):
in_quote = c
elif in_quote and in_quote == c:
in_quote = False
elif not in_quote and c in (',', ' '):
field = ''.join(field_buffer)
if field:
fields.append(field)
field_buffer = []
else:
field_buffer.append(c)
field = ''.join(field_buffer)
if field:
fields.append(field)
return dict(x.split("=", 1) for x in fields)
else:
raise TypeError("dictionary requested, could not parse JSON or key=value")
raise TypeError('%s cannot be converted to a dict' % type(value))
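# Illustrative sketch (assumed inputs): the three string forms handled above.
#
#     module._check_type_dict('{"a": 1}')          # JSON -> {'a': 1}
#     module._check_type_dict("a=1, b='x y'")      # key=value -> {'a': '1', 'b': 'x y'}
#     module._check_type_dict('plain text')        # raises TypeError
#
# Quotes keep spaces inside a value together, and a backslash escapes the
# next character, so "a=1\,2" yields {'a': '1,2'} rather than two fields.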
def _check_type_bool(self, value):
if isinstance(value, bool):
return value
if isinstance(value, string_types) or isinstance(value, int):
return self.boolean(value)
raise TypeError('%s cannot be converted to a bool' % type(value))
def _check_type_int(self, value):
if isinstance(value, int):
return value
if isinstance(value, string_types):
return int(value)
raise TypeError('%s cannot be converted to an int' % type(value))
def _check_type_float(self, value):
if isinstance(value, float):
return value
if isinstance(value, (binary_type, text_type, int)):
return float(value)
raise TypeError('%s cannot be converted to a float' % type(value))
def _check_type_path(self, value):
value = self._check_type_str(value)
return os.path.expanduser(os.path.expandvars(value))
def _check_type_jsonarg(self, value):
# Return a jsonified string. Sometimes the controller turns a json
# string into a dict/list so transform it back into json here
if isinstance(value, (text_type, binary_type)):
return value.strip()
else:
if isinstance(value, (list, tuple, dict)):
return self.jsonify(value)
raise TypeError('%s cannot be converted to a json string' % type(value))
def _check_type_raw(self, value):
return value
def _check_type_bytes(self, value):
try:
self.human_to_bytes(value)
except ValueError:
raise TypeError('%s cannot be converted to a Byte value' % type(value))
def _check_type_bits(self, value):
try:
self.human_to_bytes(value, isbits=True)
except ValueError:
raise TypeError('%s cannot be converted to a Bit value' % type(value))
def _handle_options(self, argument_spec=None, params=None):
''' deal with options to create sub spec '''
if argument_spec is None:
argument_spec = self.argument_spec
if params is None:
params = self.params
for (k, v) in argument_spec.items():
wanted = v.get('type', None)
if wanted == 'dict' or (wanted == 'list' and v.get('elements', '') == 'dict'):
spec = v.get('options', None)
if v.get('apply_defaults', False):
if spec is not None:
if params.get(k) is None:
params[k] = {}
else:
continue
elif spec is None or k not in params or params[k] is None:
continue
self._options_context.append(k)
if isinstance(params[k], dict):
elements = [params[k]]
else:
elements = params[k]
for param in elements:
if not isinstance(param, dict):
self.fail_json(msg="value of %s must be of type dict or list of dict" % k)
self._set_fallbacks(spec, param)
options_aliases = self._handle_aliases(spec, param)
self._handle_no_log_values(spec, param)
options_legal_inputs = list(spec.keys()) + list(options_aliases.keys())
self._check_arguments(self.check_invalid_arguments, spec, param, options_legal_inputs)
# check exclusive early
if not self.bypass_checks:
self._check_mutually_exclusive(v.get('mutually_exclusive', None), param)
self._set_defaults(pre=True, spec=spec, param=param)
if not self.bypass_checks:
self._check_required_arguments(spec, param)
self._check_argument_types(spec, param)
self._check_argument_values(spec, param)
self._check_required_together(v.get('required_together', None), param)
self._check_required_one_of(v.get('required_one_of', None), param)
self._check_required_if(v.get('required_if', None), param)
self._set_defaults(pre=False, spec=spec, param=param)
# handle multi level options (sub argspec)
self._handle_options(spec, param)
self._options_context.pop()
def _check_argument_types(self, spec=None, param=None):
''' ensure all arguments have the requested type '''
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
wanted = v.get('type', None)
if k not in param:
continue
value = param[k]
if value is None:
continue
if not callable(wanted):
if wanted is None:
# Mostly we want to default to str.
# For values set to None explicitly, return None instead as
# that allows a user to unset a parameter
if param[k] is None:
continue
wanted = 'str'
try:
type_checker = self._CHECK_ARGUMENT_TYPES_DISPATCHER[wanted]
except KeyError:
self.fail_json(msg="implementation error: unknown type %s requested for %s" % (wanted, k))
else:
# set the type_checker to the callable, and reset wanted to the callable's name (or type if it doesn't have one, ala MagicMock)
type_checker = wanted
wanted = getattr(wanted, '__name__', to_native(type(wanted)))
try:
param[k] = type_checker(value)
except (TypeError, ValueError) as e:
self.fail_json(msg="argument %s is of type %s and we were unable to convert to %s: %s" %
(k, type(value), wanted, to_native(e)))
def _set_defaults(self, pre=True, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
default = v.get('default', None)
if pre is True:
# this prevents setting defaults on required items
if default is not None and k not in param:
param[k] = default
else:
# make sure things without a default still get set None
if k not in param:
param[k] = default
def _set_fallbacks(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
fallback = v.get('fallback', (None,))
fallback_strategy = fallback[0]
fallback_args = []
fallback_kwargs = {}
if k not in param and fallback_strategy is not None:
for item in fallback[1:]:
if isinstance(item, dict):
fallback_kwargs = item
else:
fallback_args = item
try:
param[k] = fallback_strategy(*fallback_args, **fallback_kwargs)
except AnsibleFallbackNotFound:
continue
def _load_params(self):
''' read the input and set the params attribute.
This method is for backwards compatibility. The guts of the function
were moved out in 2.1 so that custom modules could read the parameters.
'''
# debug overrides to read args from file or cmdline
self.params = _load_params()
def _log_to_syslog(self, msg):
if HAS_SYSLOG:
module = 'ansible-%s' % self._name
facility = getattr(syslog, self._syslog_facility, syslog.LOG_USER)
syslog.openlog(str(module), 0, facility)
syslog.syslog(syslog.LOG_INFO, msg)
def debug(self, msg):
if self._debug:
self.log('[debug] %s' % msg)
def log(self, msg, log_args=None):
if not self.no_log:
if log_args is None:
log_args = dict()
module = 'ansible-%s' % self._name
if isinstance(module, binary_type):
module = module.decode('utf-8', 'replace')
# 6655 - allow for accented characters
if not isinstance(msg, (binary_type, text_type)):
raise TypeError("msg should be a string (got %s)" % type(msg))
# We want journal to always take text type
# syslog takes bytes on py2, text type on py3
if isinstance(msg, binary_type):
journal_msg = remove_values(msg.decode('utf-8', 'replace'), self.no_log_values)
else:
# TODO: surrogateescape is a danger here on Py3
journal_msg = remove_values(msg, self.no_log_values)
if PY3:
syslog_msg = journal_msg
else:
syslog_msg = journal_msg.encode('utf-8', 'replace')
if has_journal:
journal_args = [("MODULE", os.path.basename(__file__))]
for arg in log_args:
journal_args.append((arg.upper(), str(log_args[arg])))
try:
if HAS_SYSLOG:
# If syslog_facility specified, it needs to convert
# from the facility name to the facility code, and
# set it as SYSLOG_FACILITY argument of journal.send()
facility = getattr(syslog,
self._syslog_facility,
syslog.LOG_USER) >> 3
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
SYSLOG_FACILITY=facility,
**dict(journal_args))
else:
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
**dict(journal_args))
except IOError:
# fall back to syslog since logging to journal failed
self._log_to_syslog(syslog_msg)
else:
self._log_to_syslog(syslog_msg)
def _log_invocation(self):
''' log that ansible ran the module '''
# TODO: generalize a separate log function and make log_invocation use it
# Sanitize possible password argument when logging.
log_args = dict()
for param in self.params:
canon = self.aliases.get(param, param)
arg_opts = self.argument_spec.get(canon, {})
no_log = arg_opts.get('no_log', False)
if self.boolean(no_log):
log_args[param] = 'NOT_LOGGING_PARAMETER'
# try to capture all passwords/passphrase named fields missed by no_log
elif PASSWORD_MATCH.search(param) and arg_opts.get('type', 'str') != 'bool' and not arg_opts.get('choices', False):
# skip boolean and enums as they are about 'password' state
log_args[param] = 'NOT_LOGGING_PASSWORD'
self.warn('Module did not set no_log for %s' % param)
else:
param_val = self.params[param]
if not isinstance(param_val, (text_type, binary_type)):
param_val = str(param_val)
elif isinstance(param_val, text_type):
param_val = param_val.encode('utf-8')
log_args[param] = heuristic_log_sanitize(param_val, self.no_log_values)
msg = ['%s=%s' % (to_native(arg), to_native(val)) for arg, val in log_args.items()]
if msg:
msg = 'Invoked with %s' % ' '.join(msg)
else:
msg = 'Invoked'
self.log(msg, log_args=log_args)
def _set_cwd(self):
try:
cwd = os.getcwd()
if not os.access(cwd, os.F_OK | os.R_OK):
raise Exception()
return cwd
except Exception:
# we don't have access to the cwd, probably because of sudo.
# Try and move to a neutral location to prevent errors
for cwd in [self.tmpdir, os.path.expandvars('$HOME'), tempfile.gettempdir()]:
try:
if os.access(cwd, os.F_OK | os.R_OK):
os.chdir(cwd)
return cwd
except Exception:
pass
# we won't error here, as it may *not* be a problem,
# and we don't want to break modules unnecessarily
return None
def get_bin_path(self, arg, required=False, opt_dirs=None):
'''
find system executable in PATH.
Optional arguments:
- required: if executable is not found and required is true, fail_json
- opt_dirs: optional list of directories to search in addition to PATH
if found return full path; otherwise return None
'''
bin_path = None
try:
bin_path = get_bin_path(arg, required, opt_dirs)
except ValueError as e:
self.fail_json(msg=to_text(e))
return bin_path
def boolean(self, arg):
''' return a bool for the arg '''
if arg is None:
return arg
try:
return boolean(arg)
except TypeError as e:
self.fail_json(msg=to_native(e))
def jsonify(self, data):
try:
return jsonify(data)
except UnicodeError as e:
self.fail_json(msg=to_text(e))
def from_json(self, data):
return json.loads(data)
def add_cleanup_file(self, path):
if path not in self.cleanup_files:
self.cleanup_files.append(path)
def do_cleanup_files(self):
for path in self.cleanup_files:
self.cleanup(path)
def _return_formatted(self, kwargs):
self.add_path_info(kwargs)
if 'invocation' not in kwargs:
kwargs['invocation'] = {'module_args': self.params}
if 'warnings' in kwargs:
if isinstance(kwargs['warnings'], list):
for w in kwargs['warnings']:
self.warn(w)
else:
self.warn(kwargs['warnings'])
if self._warnings:
kwargs['warnings'] = self._warnings
if 'deprecations' in kwargs:
if isinstance(kwargs['deprecations'], list):
for d in kwargs['deprecations']:
if isinstance(d, SEQUENCETYPE) and len(d) == 2:
self.deprecate(d[0], version=d[1])
elif isinstance(d, Mapping):
self.deprecate(d['msg'], version=d.get('version', None))
else:
self.deprecate(d)
else:
self.deprecate(kwargs['deprecations'])
if self._deprecations:
kwargs['deprecations'] = self._deprecations
kwargs = remove_values(kwargs, self.no_log_values)
print('\n%s' % self.jsonify(kwargs))
def exit_json(self, **kwargs):
''' return from the module, without error '''
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(0)
def fail_json(self, **kwargs):
''' return from the module, with an error message '''
if 'msg' not in kwargs:
raise AssertionError("implementation error -- msg to explain the error is required")
kwargs['failed'] = True
# Add traceback if debug or high verbosity and it is missing
# NOTE: Badly named as exception, it really always has been a traceback
if 'exception' not in kwargs and sys.exc_info()[2] and (self._debug or self._verbosity >= 3):
if PY2:
# On Python 2 this is the last (stack frame) exception and as such may be unrelated to the failure
kwargs['exception'] = 'WARNING: The below traceback may *not* be related to the actual failure.\n' +\
''.join(traceback.format_tb(sys.exc_info()[2]))
else:
kwargs['exception'] = ''.join(traceback.format_tb(sys.exc_info()[2]))
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(1)
def fail_on_missing_params(self, required_params=None):
''' This is for checking for required params when we can not check via argspec because we
need more information than is simply given in the argspec.
'''
if not required_params:
return
missing_params = []
for required_param in required_params:
if not self.params.get(required_param):
missing_params.append(required_param)
if missing_params:
self.fail_json(msg="missing required arguments: %s" % ', '.join(missing_params))
def digest_from_file(self, filename, algorithm):
''' Return hex digest of local file for a digest_method specified by name, or None if file is not present. '''
if not os.path.exists(filename):
return None
if os.path.isdir(filename):
self.fail_json(msg="attempted to take checksum of directory: %s" % filename)
# preserve old behaviour where the third parameter was a hash algorithm object
if hasattr(algorithm, 'hexdigest'):
digest_method = algorithm
else:
try:
digest_method = AVAILABLE_HASH_ALGORITHMS[algorithm]()
except KeyError:
self.fail_json(msg="Could not hash file '%s' with algorithm '%s'. Available algorithms: %s" %
(filename, algorithm, ', '.join(AVAILABLE_HASH_ALGORITHMS)))
blocksize = 64 * 1024
infile = open(os.path.realpath(filename), 'rb')
block = infile.read(blocksize)
while block:
digest_method.update(block)
block = infile.read(blocksize)
infile.close()
return digest_method.hexdigest()
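# Illustrative sketch (hypothetical path): any name present in
# AVAILABLE_HASH_ALGORITHMS may be passed, and the md5/sha1/sha256 wrappers
# below are thin shortcuts over this helper.
#
#     checksum = module.digest_from_file('/etc/hosts', 'sha256')
#     assert checksum == module.sha256('/etc/hosts')   # both read in 64 KiB blocks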
def md5(self, filename):
''' Return MD5 hex digest of local file using digest_from_file().
Do not use this function unless you have no other choice for:
1) Optional backwards compatibility
2) Compatibility with a third party protocol
This function will not work on systems complying with FIPS-140-2.
Most uses of this function can use the module.sha1 function instead.
'''
if 'md5' not in AVAILABLE_HASH_ALGORITHMS:
raise ValueError('MD5 not available. Possibly running in FIPS mode')
return self.digest_from_file(filename, 'md5')
def sha1(self, filename):
''' Return SHA1 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha1')
def sha256(self, filename):
''' Return SHA-256 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha256')
def backup_local(self, fn):
'''make a date-marked backup of the specified file; return the backup file name, or '' if the source file does not exist'''
backupdest = ''
if os.path.exists(fn):
# backups named basename.PID.YYYY-MM-DD@HH:MM:SS~
ext = time.strftime("%Y-%m-%d@%H:%M:%S~", time.localtime(time.time()))
backupdest = '%s.%s.%s' % (fn, os.getpid(), ext)
try:
self.preserved_copy(fn, backupdest)
except (shutil.Error, IOError) as e:
self.fail_json(msg='Could not make backup of %s to %s: %s' % (fn, backupdest, to_native(e)))
return backupdest
def cleanup(self, tmpfile):
if os.path.exists(tmpfile):
try:
os.unlink(tmpfile)
except OSError as e:
sys.stderr.write("could not cleanup %s: %s" % (tmpfile, to_native(e)))
def preserved_copy(self, src, dest):
"""Copy a file with preserved ownership, permissions and context"""
# shutil.copy2(src, dst)
# Similar to shutil.copy(), but metadata is copied as well - in fact,
# this is just shutil.copy() followed by copystat(). This is similar
# to the Unix command cp -p.
#
# shutil.copystat(src, dst)
# Copy the permission bits, last access time, last modification time,
# and flags from src to dst. The file contents, owner, and group are
# unaffected. src and dst are path names given as strings.
shutil.copy2(src, dest)
# Set the context
if self.selinux_enabled():
context = self.selinux_context(src)
self.set_context_if_different(dest, context, False)
# chown it
try:
dest_stat = os.stat(src)
tmp_stat = os.stat(dest)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(dest, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
# Set the attributes
current_attribs = self.get_file_attributes(src)
current_attribs = current_attribs.get('attr_flags', '')
self.set_attributes_if_different(dest, current_attribs, True)
def atomic_move(self, src, dest, unsafe_writes=False):
'''atomically move src to dest, copying attributes from dest.
It relies on os.rename because that operation is atomic; the rest of the function
works around limitations and corner cases and preserves the selinux context when possible.'''
context = None
dest_stat = None
b_src = to_bytes(src, errors='surrogate_or_strict')
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if os.path.exists(b_dest):
try:
dest_stat = os.stat(b_dest)
# copy mode and ownership
os.chmod(b_src, dest_stat.st_mode & PERM_BITS)
os.chown(b_src, dest_stat.st_uid, dest_stat.st_gid)
# try to copy flags if possible
if hasattr(os, 'chflags') and hasattr(dest_stat, 'st_flags'):
try:
os.chflags(b_src, dest_stat.st_flags)
except OSError as e:
for err in 'EOPNOTSUPP', 'ENOTSUP':
if hasattr(errno, err) and e.errno == getattr(errno, err):
break
else:
raise
except OSError as e:
if e.errno != errno.EPERM:
raise
if self.selinux_enabled():
context = self.selinux_context(dest)
else:
if self.selinux_enabled():
context = self.selinux_default_context(dest)
creating = not os.path.exists(b_dest)
try:
# Optimistically try a rename, solves some corner cases and can avoid useless work, throws exception if not atomic.
os.rename(b_src, b_dest)
except (IOError, OSError) as e:
if e.errno not in [errno.EPERM, errno.EXDEV, errno.EACCES, errno.ETXTBSY, errno.EBUSY]:
# only try workarounds for errno 18 (cross device), 1 (not permitted), 13 (permission denied)
# and 26 (text file busy) which happens on vagrant synced folders and other 'exotic' non posix file systems
self.fail_json(msg='Could not replace file: %s to %s: %s' % (src, dest, to_native(e)),
exception=traceback.format_exc())
else:
# Use bytes here. In the shippable CI, this fails with
# a UnicodeError with surrogateescape'd strings for an unknown
# reason (doesn't happen in a local Ubuntu16.04 VM)
b_dest_dir = os.path.dirname(b_dest)
b_suffix = os.path.basename(b_dest)
error_msg = None
tmp_dest_name = None
try:
tmp_dest_fd, tmp_dest_name = tempfile.mkstemp(prefix=b'.ansible_tmp',
dir=b_dest_dir, suffix=b_suffix)
except (OSError, IOError) as e:
error_msg = 'The destination directory (%s) is not writable by the current user. Error was: %s' % (os.path.dirname(dest), to_native(e))
except TypeError:
# We expect that this is happening because python3.4.x and
# below can't handle byte strings in mkstemp(). Traceback
# would end in something like:
# file = _os.path.join(dir, pre + name + suf)
# TypeError: can't concat bytes to str
error_msg = ('Failed creating tmp file for atomic move. This usually happens when using Python3 less than Python3.5. '
'Please use Python2.x or Python3.5 or greater.')
finally:
if error_msg:
if unsafe_writes:
self._unsafe_writes(b_src, b_dest)
else:
self.fail_json(msg=error_msg, exception=traceback.format_exc())
if tmp_dest_name:
b_tmp_dest_name = to_bytes(tmp_dest_name, errors='surrogate_or_strict')
try:
try:
# close tmp file handle before file operations to prevent text file busy errors on vboxfs synced folders (windows host)
os.close(tmp_dest_fd)
# leaves tmp file behind when sudo and not root
try:
shutil.move(b_src, b_tmp_dest_name)
except OSError:
# cleanup will happen by 'rm' of tmpdir
# copy2 will preserve some metadata
shutil.copy2(b_src, b_tmp_dest_name)
if self.selinux_enabled():
self.set_context_if_different(
b_tmp_dest_name, context, False)
try:
tmp_stat = os.stat(b_tmp_dest_name)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(b_tmp_dest_name, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
try:
os.rename(b_tmp_dest_name, b_dest)
except (shutil.Error, OSError, IOError) as e:
if unsafe_writes and e.errno == errno.EBUSY:
self._unsafe_writes(b_tmp_dest_name, b_dest)
else:
self.fail_json(msg='Unable to make %s into %s, failed final rename from %s: %s' %
(src, dest, b_tmp_dest_name, to_native(e)),
exception=traceback.format_exc())
except (shutil.Error, OSError, IOError) as e:
self.fail_json(msg='Failed to replace file: %s to %s: %s' % (src, dest, to_native(e)),
exception=traceback.format_exc())
finally:
self.cleanup(b_tmp_dest_name)
if creating:
# make sure the file has the correct permissions
# based on the current value of umask
umask = os.umask(0)
os.umask(umask)
os.chmod(b_dest, DEFAULT_PERM & ~umask)
try:
os.chown(b_dest, os.geteuid(), os.getegid())
except OSError:
# We're okay with trying our best here. If the user is not
# root (or old Unices) they won't be able to chown.
pass
if self.selinux_enabled():
# rename might not preserve context
self.set_context_if_different(dest, context, False)
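# Illustrative sketch (placeholder destination and parameter names): the usual
# write pattern is to stage content in the module's temporary directory and
# then let atomic_move() swap it into place, so readers never see a
# half-written file.
#
#     import os
#     import tempfile
#     fd, tmp = tempfile.mkstemp(dir=module.tmpdir)
#     with os.fdopen(fd, 'wb') as f:
#         f.write(b'new contents\n')
#     module.atomic_move(tmp, '/etc/example.conf',
#                        unsafe_writes=module.params.get('unsafe_writes', False))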
def _unsafe_writes(self, src, dest):
# Sadly there are some situations where we cannot ensure atomicity; only if
# the user insists and we get the appropriate error do we update the file unsafely
try:
out_dest = in_src = None
try:
out_dest = open(dest, 'wb')
in_src = open(src, 'rb')
shutil.copyfileobj(in_src, out_dest)
finally: # assuring closed files in 2.4 compatible way
if out_dest:
out_dest.close()
if in_src:
in_src.close()
except (shutil.Error, OSError, IOError) as e:
self.fail_json(msg='Could not write data to file (%s) from (%s): %s' % (dest, src, to_native(e)),
exception=traceback.format_exc())
def _read_from_pipes(self, rpipes, rfds, file_descriptor):
data = b('')
if file_descriptor in rfds:
data = os.read(file_descriptor.fileno(), 9000)
if data == b(''):
rpipes.remove(file_descriptor)
return data
def _clean_args(self, args):
if not self._clean:
# create a printable version of the command for use in reporting later,
# which strips out things like passwords from the args list
to_clean_args = args
if PY2:
if isinstance(args, text_type):
to_clean_args = to_bytes(args)
else:
if isinstance(args, binary_type):
to_clean_args = to_text(args)
if isinstance(args, (text_type, binary_type)):
to_clean_args = shlex.split(to_clean_args)
clean_args = []
is_passwd = False
for arg in (to_native(a) for a in to_clean_args):
if is_passwd:
is_passwd = False
clean_args.append('********')
continue
if PASSWD_ARG_RE.match(arg):
sep_idx = arg.find('=')
if sep_idx > -1:
clean_args.append('%s=********' % arg[:sep_idx])
continue
else:
is_passwd = True
arg = heuristic_log_sanitize(arg, self.no_log_values)
clean_args.append(arg)
self._clean = ' '.join(shlex_quote(arg) for arg in clean_args)
return self._clean
def _restore_signal_handlers(self):
# Reset SIGPIPE to SIG_DFL, otherwise in Python2.7 it gets ignored in subprocesses.
if PY2 and sys.platform != 'win32':
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
def run_command(self, args, check_rc=False, close_fds=True, executable=None, data=None, binary_data=False, path_prefix=None, cwd=None,
use_unsafe_shell=False, prompt_regex=None, environ_update=None, umask=None, encoding='utf-8', errors='surrogate_or_strict',
expand_user_and_vars=True):
'''
Execute a command, returns rc, stdout, and stderr.
:arg args: is the command to run
* If args is a list, the command will be run with shell=False.
* If args is a string and use_unsafe_shell=False it will split args to a list and run with shell=False
* If args is a string and use_unsafe_shell=True it runs with shell=True.
:kw check_rc: Whether to call fail_json in case of non zero RC.
Default False
:kw close_fds: See documentation for subprocess.Popen(). Default True
:kw executable: See documentation for subprocess.Popen(). Default None
:kw data: If given, information to write to the stdin of the command
:kw binary_data: If False, append a newline to the data. Default False
:kw path_prefix: If given, additional path to find the command in.
This adds to the PATH environment variable so helper commands in
the same directory can also be found
:kw cwd: If given, working directory to run the command inside
:kw use_unsafe_shell: See `args` parameter. Default False
:kw prompt_regex: Regex string (not a compiled regex) which can be
used to detect prompts in the stdout which would otherwise cause
the execution to hang (especially if no input data is specified)
:kw environ_update: dictionary to *update* os.environ with
:kw umask: Umask to be used when running the command. Default None
:kw encoding: Since we return native strings, on python3 we need to
know the encoding to use to transform from bytes to text. If you
want to always get bytes back, use encoding=None. The default is
"utf-8". This does not affect transformation of strings given as
args.
:kw errors: Since we return native strings, on python3 we need to
transform stdout and stderr from bytes to text. If the bytes are
undecodable in the ``encoding`` specified, then use this error
handler to deal with them. The default is ``surrogate_or_strict``
which means that the bytes will be decoded using the
surrogateescape error handler if available (available on all
python3 versions we support) otherwise a UnicodeError traceback
will be raised. This does not affect transformations of strings
given as args.
:kw expand_user_and_vars: When ``use_unsafe_shell=False`` this argument
dictates whether ``~`` is expanded in paths and environment variables
are expanded before running the command. When ``True`` a string such as
``$SHELL`` will be expanded regardless of escaping. When ``False`` and
``use_unsafe_shell=False`` no path or variable expansion will be done.
:returns: A 3-tuple of return code (integer), stdout (native string),
and stderr (native string). On python2, stdout and stderr are both
byte strings. On python3, stdout and stderr are text strings converted
according to the encoding and errors parameters. If you want byte
strings on python3, use encoding=None to turn decoding to text off.
'''
# used by clean args later on
self._clean = None
if not isinstance(args, (list, binary_type, text_type)):
msg = "Argument 'args' to run_command must be list or string"
self.fail_json(rc=257, cmd=args, msg=msg)
shell = False
if use_unsafe_shell:
# stringify args for unsafe/direct shell usage
if isinstance(args, list):
args = " ".join([shlex_quote(x) for x in args])
# not set explicitly, check if set by controller
if executable:
args = [executable, '-c', args]
elif self._shell not in (None, '/bin/sh'):
args = [self._shell, '-c', args]
else:
shell = True
else:
# ensure args are a list
if isinstance(args, (binary_type, text_type)):
# On python2.6 and below, shlex has problems with text type
# On python3, shlex needs a text type.
if PY2:
args = to_bytes(args, errors='surrogate_or_strict')
elif PY3:
args = to_text(args, errors='surrogateescape')
args = shlex.split(args)
# expand ``~`` in paths, and all environment vars
if expand_user_and_vars:
args = [os.path.expanduser(os.path.expandvars(x)) for x in args if x is not None]
else:
args = [x for x in args if x is not None]
prompt_re = None
if prompt_regex:
if isinstance(prompt_regex, text_type):
if PY3:
prompt_regex = to_bytes(prompt_regex, errors='surrogateescape')
elif PY2:
prompt_regex = to_bytes(prompt_regex, errors='surrogate_or_strict')
try:
prompt_re = re.compile(prompt_regex, re.MULTILINE)
except re.error:
self.fail_json(msg="invalid prompt regular expression given to run_command")
rc = 0
msg = None
st_in = None
# Manipulate the environ we'll send to the new process
old_env_vals = {}
# We can set this from both an attribute and per call
for key, val in self.run_command_environ_update.items():
old_env_vals[key] = os.environ.get(key, None)
os.environ[key] = val
if environ_update:
for key, val in environ_update.items():
old_env_vals[key] = os.environ.get(key, None)
os.environ[key] = val
if path_prefix:
old_env_vals['PATH'] = os.environ['PATH']
os.environ['PATH'] = "%s:%s" % (path_prefix, os.environ['PATH'])
# If using test-module and explode, the remote lib path will resemble ...
# /tmp/test_module_scratch/debug_dir/ansible/module_utils/basic.py
# If using ansible or ansible-playbook with a remote system ...
# /tmp/ansible_vmweLQ/ansible_modlib.zip/ansible/module_utils/basic.py
# Clean out python paths set by ansiballz
if 'PYTHONPATH' in os.environ:
pypaths = os.environ['PYTHONPATH'].split(':')
pypaths = [x for x in pypaths
if not x.endswith('/ansible_modlib.zip') and
not x.endswith('/debug_dir')]
os.environ['PYTHONPATH'] = ':'.join(pypaths)
if not os.environ['PYTHONPATH']:
del os.environ['PYTHONPATH']
if data:
st_in = subprocess.PIPE
kwargs = dict(
executable=executable,
shell=shell,
close_fds=close_fds,
stdin=st_in,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=self._restore_signal_handlers,
)
# store the pwd
prev_dir = os.getcwd()
# make sure we're in the right working directory
if cwd and os.path.isdir(cwd):
cwd = os.path.abspath(os.path.expanduser(cwd))
kwargs['cwd'] = cwd
try:
os.chdir(cwd)
except (OSError, IOError) as e:
self.fail_json(rc=e.errno, msg="Could not open %s, %s" % (cwd, to_native(e)),
exception=traceback.format_exc())
old_umask = None
if umask:
old_umask = os.umask(umask)
try:
if self._debug:
self.log('Executing: ' + self._clean_args(args))
cmd = subprocess.Popen(args, **kwargs)
# the communication logic here is essentially taken from that
# of the _communicate() function in ssh.py
stdout = b('')
stderr = b('')
rpipes = [cmd.stdout, cmd.stderr]
if data:
if not binary_data:
data += '\n'
if isinstance(data, text_type):
data = to_bytes(data)
cmd.stdin.write(data)
cmd.stdin.close()
while True:
rfds, wfds, efds = select.select(rpipes, [], rpipes, 1)
stdout += self._read_from_pipes(rpipes, rfds, cmd.stdout)
stderr += self._read_from_pipes(rpipes, rfds, cmd.stderr)
# if we're checking for prompts, do it now
if prompt_re:
if prompt_re.search(stdout) and not data:
if encoding:
stdout = to_native(stdout, encoding=encoding, errors=errors)
else:
stdout = stdout
return (257, stdout, "A prompt was encountered while running a command, but no input data was specified")
# only break out if no pipes are left to read or
# the pipes are completely read and
# the process is terminated
if (not rpipes or not rfds) and cmd.poll() is not None:
break
# No pipes are left to read but process is not yet terminated
# Only then it is safe to wait for the process to be finished
# NOTE: Actually cmd.poll() is always None here if rpipes is empty
elif not rpipes and cmd.poll() is None:
cmd.wait()
# The process is terminated. Since no pipes to read from are
# left, there is no need to call select() again.
break
cmd.stdout.close()
cmd.stderr.close()
rc = cmd.returncode
except (OSError, IOError) as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(e)))
self.fail_json(rc=e.errno, msg=to_native(e), cmd=self._clean_args(args))
except Exception as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(traceback.format_exc())))
self.fail_json(rc=257, msg=to_native(e), exception=traceback.format_exc(), cmd=self._clean_args(args))
# Restore env settings
for key, val in old_env_vals.items():
if val is None:
del os.environ[key]
else:
os.environ[key] = val
if old_umask:
os.umask(old_umask)
if rc != 0 and check_rc:
msg = heuristic_log_sanitize(stderr.rstrip(), self.no_log_values)
self.fail_json(cmd=self._clean_args(args), rc=rc, stdout=stdout, stderr=stderr, msg=msg)
# reset the pwd
os.chdir(prev_dir)
if encoding is not None:
return (rc, to_native(stdout, encoding=encoding, errors=errors),
to_native(stderr, encoding=encoding, errors=errors))
return (rc, stdout, stderr)
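# Illustrative sketch (hypothetical commands): a list runs with shell=False,
# and check_rc=True turns a non-zero exit status into fail_json().
#
#     rc, out, err = module.run_command(['/usr/bin/env', 'true'], check_rc=True)
#     rc, out, err = module.run_command('grep -c root /etc/passwd',
#                                       use_unsafe_shell=True)
#
# A plain string without use_unsafe_shell=True is also accepted; it is split
# with shlex and still executed without a shell.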
def append_to_file(self, filename, str):
filename = os.path.expandvars(os.path.expanduser(filename))
fh = open(filename, 'a')
fh.write(str)
fh.close()
def bytes_to_human(self, size):
return bytes_to_human(size)
# for backwards compatibility
pretty_bytes = bytes_to_human
def human_to_bytes(self, number, isbits=False):
return human_to_bytes(number, isbits)
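# Illustrative sketch: round-tripping sizes through the two wrappers above.
# Exact formatting comes from the module_utils helpers they delegate to.
#
#     module.human_to_bytes('1K')                  # -> 1024
#     module.human_to_bytes('10Mb', isbits=True)   # suffix interpreted as bits
#     module.bytes_to_human(1048576)               # -> a string such as '1.00 MB'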
#
# Backwards compat
#
# In 2.0, moved from inside the module to the toplevel
is_executable = is_executable
def get_module_path():
return os.path.dirname(os.path.realpath(__file__))
|
alexlo03/ansible
|
lib/ansible/module_utils/basic.py
|
Python
|
gpl-3.0
| 118,054
|
[
"VisIt"
] |
23d3acadd2c7b4483875c3d541a480642c8e4bfe3fe4380f696836d78de65f3a
|
"""Unit test for util.py"""
import pysal
from pysal.common import *
import pysal.weights
import numpy as np
from scipy import sparse, float32
from scipy.spatial import KDTree
import os
import gc
class Testutil(unittest.TestCase):
def setUp(self):
self.w = pysal.rook_from_shapefile(
pysal.examples.get_path('10740.shp'))
def test_lat2W(self):
w9 = pysal.lat2W(3, 3)
self.assertEquals(w9.pct_nonzero, 0.29629629629629628)
self.assertEquals(w9[0], {1: 1.0, 3: 1.0})
self.assertEquals(w9[3], {0: 1.0, 4: 1.0, 6: 1.0})
def test_lat2SW(self):
w9 = pysal.weights.lat2SW(3, 3)
rows, cols = w9.shape
n = rows * cols
pct_nonzero = w9.nnz / float(n)
self.assertEquals(pct_nonzero, 0.29629629629629628)
data = w9.todense().tolist()
self.assertEquals(data[0], [0, 1, 0, 1, 0, 0, 0, 0, 0])
self.assertEquals(data[1], [1, 0, 1, 0, 1, 0, 0, 0, 0])
self.assertEquals(data[2], [0, 1, 0, 0, 0, 1, 0, 0, 0])
self.assertEquals(data[3], [1, 0, 0, 0, 1, 0, 1, 0, 0])
self.assertEquals(data[4], [0, 1, 0, 1, 0, 1, 0, 1, 0])
self.assertEquals(data[5], [0, 0, 1, 0, 1, 0, 0, 0, 1])
self.assertEquals(data[6], [0, 0, 0, 1, 0, 0, 0, 1, 0])
self.assertEquals(data[7], [0, 0, 0, 0, 1, 0, 1, 0, 1])
self.assertEquals(data[8], [0, 0, 0, 0, 0, 1, 0, 1, 0])
def test_regime_weights(self):
regimes = np.ones(25)
regimes[range(10, 20)] = 2
regimes[range(21, 25)] = 3
regimes = np.array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 1., 3., 3.,
3., 3.])
w = pysal.regime_weights(regimes)
ww0 = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
self.assertEquals(w.weights[0], ww0)
wn0 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 20]
self.assertEquals(w.neighbors[0], wn0)
regimes = ['n', 'n', 's', 's', 'e', 'e', 'w', 'w', 'e']
n = len(regimes)
w = pysal.regime_weights(regimes)
wn = {0: [1], 1: [0], 2: [3], 3: [2], 4: [5, 8], 5: [4, 8],
6: [7], 7: [6], 8: [4, 5]}
self.assertEquals(w.neighbors, wn)
def test_comb(self):
x = range(4)
l = []
for i in pysal.comb(x, 2):
l.append(i)
lo = [[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]]
self.assertEquals(l, lo)
def test_order(self):
w3 = pysal.order(self.w, kmax=3)
w3105 = [1, -1, 1, 2, 1]
self.assertEquals(w3105, w3[1][0:5])
def test_higher_order(self):
w10 = pysal.lat2W(10, 10)
w10_2 = pysal.higher_order(w10, 2)
w10_20 = {2: 1.0, 11: 1.0, 20: 1.0}
self.assertEquals(w10_20, w10_2[0])
w5 = pysal.lat2W()
w50 = {1: 1.0, 5: 1.0}
self.assertEquals(w50, w5[0])
w51 = {0: 1.0, 2: 1.0, 6: 1.0}
self.assertEquals(w51, w5[1])
w5_2 = pysal.higher_order(w5, 2)
w5_20 = {2: 1.0, 10: 1.0, 6: 1.0}
self.assertEquals(w5_20, w5_2[0])
def test_shimbel(self):
w5 = pysal.lat2W()
w5_shimbel = pysal.shimbel(w5)
w5_shimbel024 = 8
self.assertEquals(w5_shimbel024, w5_shimbel[0][24])
w5_shimbel004 = [-1, 1, 2, 3]
self.assertEquals(w5_shimbel004, w5_shimbel[0][0:4])
def test_full(self):
neighbors = {'first': ['second'], 'second': ['first',
'third'], 'third': ['second']}
weights = {'first': [1], 'second': [1, 1], 'third': [1]}
w = pysal.W(neighbors, weights)
wf, ids = pysal.full(w)
wfo = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
np.testing.assert_array_almost_equal(wfo, wf, decimal=8)
idso = ['first', 'second', 'third']
self.assertEquals(idso, ids)
def test_full2W(self):
a = np.zeros((4, 4))
for i in range(len(a)):
for j in range(len(a[i])):
if i != j:
a[i, j] = np.random.random(1)
w = pysal.weights.util.full2W(a)
np.testing.assert_array_equal(w.full()[0], a)
ids = ['myID0', 'myID1', 'myID2', 'myID3']
w = pysal.weights.util.full2W(a, ids=ids)
np.testing.assert_array_equal(w.full()[0], a)
w.full()[0] == a
def test_WSP2W(self):
sp = pysal.weights.lat2SW(2, 5)
wsp = pysal.weights.WSP(sp)
w = pysal.weights.WSP2W(wsp)
self.assertEquals(w.n, 10)
self.assertEquals(w[0], {1: 1, 5: 1})
w = pysal.open(pysal.examples.get_path('sids2.gal'), 'r').read()
wsp = pysal.weights.WSP(w.sparse, w.id_order)
w = pysal.weights.WSP2W(wsp)
self.assertEquals(w.n, 100)
self.assertEquals(w['37135'], {'37001': 1.0, '37033': 1.0,
'37037': 1.0, '37063': 1.0, '37145': 1.0})
def test_insert_diagonal(self):
w1 = pysal.weights.insert_diagonal(self.w)
r1 = {0: 1.0, 1: 1.0, 4: 1.0, 101: 1.0, 85: 1.0, 5: 1.0}
self.assertEquals(w1[0], r1)
w1 = pysal.weights.insert_diagonal(self.w, 20)
r1 = {0: 20, 1: 1.0, 4: 1.0, 101: 1.0, 85: 1.0, 5: 1.0}
self.assertEquals(w1[0], r1)
diag = np.arange(100, 100 + self.w.n)
w1 = pysal.weights.insert_diagonal(self.w, diag)
r1 = {0: 100, 1: 1.0, 4: 1.0, 101: 1.0, 85: 1.0, 5: 1.0}
self.assertEquals(w1[0], r1)
def test_remap_ids(self):
w = pysal.lat2W(3, 2)
wid_order = [0, 1, 2, 3, 4, 5]
self.assertEquals(wid_order, w.id_order)
wneighbors0 = [2, 1]
self.assertEquals(wneighbors0, w.neighbors[0])
old_to_new = {0: 'a', 1: 'b', 2: 'c', 3: 'd', 4: 'e', 5: 'f'}
w_new = pysal.remap_ids(w, old_to_new)
w_newid_order = ['a', 'b', 'c', 'd', 'e', 'f']
self.assertEquals(w_newid_order, w_new.id_order)
w_newdneighborsa = ['c', 'b']
self.assertEquals(w_newdneighborsa, w_new.neighbors['a'])
def test_get_ids(self):
polyids = pysal.weights.util.get_ids(
pysal.examples.get_path('columbus.shp'), "POLYID")
polyids5 = [1, 2, 3, 4, 5]
self.assertEquals(polyids5, polyids[:5])
def test_get_points_array_from_shapefile(self):
xy = pysal.weights.util.get_points_array_from_shapefile(
pysal.examples.get_path('juvenile.shp'))
xy3 = np.array([[94., 93.], [80., 95.], [79., 90.]])
np.testing.assert_array_almost_equal(xy3, xy[:3], decimal=8)
xy = pysal.weights.util.get_points_array_from_shapefile(
pysal.examples.get_path('columbus.shp'))
xy3 = np.array([[8.82721847, 14.36907602], [8.33265837,
14.03162401], [9.01226541, 13.81971908]])
np.testing.assert_array_almost_equal(xy3, xy[:3], decimal=8)
def test_min_threshold_distance(self):
x, y = np.indices((5, 5))
x.shape = (25, 1)
y.shape = (25, 1)
data = np.hstack([x, y])
mint = 1.0
self.assertEquals(
mint, pysal.weights.util.min_threshold_distance(data))
suite = unittest.TestLoader().loadTestsFromTestCase(Testutil)
if __name__ == '__main__':
runner = unittest.TextTestRunner()
runner.run(suite)
|
AlanZatarain/pysal
|
pysal/weights/tests/test_util.py
|
Python
|
bsd-3-clause
| 7,386
|
[
"COLUMBUS"
] |
48786cd9e41f606fb73250c7a16b2631f543a76cbd2260fb08598eb98ef99362
|
from ..log_utils import logger
try:
from ._ffi_xcursors import ffi
except ImportError:
raise ImportError("No module named libqtile.core._ffi_xcursors, be sure to run `./scripts/ffibuild`")
# Stolen from samurai-x
# (Don't know where to put it, so I'll put it here)
# XCB cursors don't want to be themed; libxcursor
# would be a better choice, I think,
# and we (indirectly) depend on it anyway...
class Cursors(dict):
def __init__(self, conn):
self.conn = conn
cursors = (
(b'X_cursor', 0),
(b'arrow', 2),
(b'based_arrow_down', 4),
(b'based_arrow_up', 6),
(b'boat', 8),
(b'bogosity', 10),
(b'bottom_left_corner', 12),
(b'bottom_right_corner', 14),
(b'bottom_side', 16),
(b'bottom_tee', 18),
(b'box_spiral', 20),
(b'center_ptr', 22),
(b'circle', 24),
(b'clock', 26),
(b'coffee_mug', 28),
(b'cross', 30),
(b'cross_reverse', 32),
(b'crosshair', 34),
(b'diamond_cross', 36),
(b'dot', 38),
(b'dotbox', 40),
(b'double_arrow', 42),
(b'draft_large', 44),
(b'draft_small', 46),
(b'draped_box', 48),
(b'exchange', 50),
(b'fleur', 52),
(b'gobbler', 54),
(b'gumby', 56),
(b'hand1', 58),
(b'hand2', 60),
(b'heart', 62),
(b'icon', 64),
(b'iron_cross', 66),
(b'left_ptr', 68),
(b'left_side', 70),
(b'left_tee', 72),
(b'leftbutton', 74),
(b'll_angle', 76),
(b'lr_angle', 78),
(b'man', 80),
(b'middlebutton', 82),
(b'mouse', 84),
(b'pencil', 86),
(b'pirate', 88),
(b'plus', 90),
(b'question_arrow', 92),
(b'right_ptr', 94),
(b'right_side', 96),
(b'right_tee', 98),
(b'rightbutton', 100),
(b'rtl_logo', 102),
(b'sailboat', 104),
(b'sb_down_arrow', 106),
(b'sb_h_double_arrow', 108),
(b'sb_left_arrow', 110),
(b'sb_right_arrow', 112),
(b'sb_up_arrow', 114),
(b'sb_v_double_arrow', 116),
(b'shuttle', 118),
(b'sizing', 120),
(b'spider', 122),
(b'spraycan', 124),
(b'star', 126),
(b'target', 128),
(b'tcross', 130),
(b'top_left_arrow', 132),
(b'top_left_corner', 134),
(b'top_right_corner', 136),
(b'top_side', 138),
(b'top_tee', 140),
(b'trek', 142),
(b'ul_angle', 144),
(b'umbrella', 146),
(b'ur_angle', 148),
(b'watch', 150),
(b'xterm', 152)
)
self.xcursor = self._setup_xcursor_binding()
for name, cursor_font in cursors:
self._new(name, cursor_font)
if self.xcursor:
self.xcursor.xcb_cursor_context_free(self._cursor_ctx[0])
def finalize(self):
self._cursor_ctx = None
def _setup_xcursor_binding(self):
try:
xcursor = ffi.dlopen('libxcb-cursor.so.0')
except OSError:
logger.warning("xcb-cursor not found, falling back to font cursors")
return False
conn = self.conn.conn
screen_pointer = conn.get_screen_pointers()[0]
self._cursor_ctx = ffi.new('xcb_cursor_context_t **')
xcursor.xcb_cursor_context_new(conn._conn, screen_pointer,
self._cursor_ctx)
return xcursor
def get_xcursor(self, name):
"""
Get the cursor using xcb-util-cursor, so we support themed cursors
"""
cursor = self.xcursor.xcb_cursor_load_cursor(self._cursor_ctx[0], name)
return cursor
def get_font_cursor(self, name, cursor_font):
"""
Get the cursor from the font, used as a fallback if xcb-util-cursor
is not installed
"""
fid = self.conn.conn.generate_id()
self.conn.conn.core.OpenFont(fid, len("cursor"), "cursor")
cursor = self.conn.conn.generate_id()
self.conn.conn.core.CreateGlyphCursor(
cursor, fid, fid,
cursor_font, cursor_font + 1,
0, 0, 0,
65535, 65535, 65535
)
return cursor
def _new(self, name, cursor_font):
if self.xcursor:
cursor = self.get_xcursor(name)
else:
cursor = self.get_font_cursor(name, cursor_font)
self[name.decode()] = cursor
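# A minimal usage sketch (illustrative only; `conn` stands in for the qtile X
# connection object this class expects, everything else follows from the code
# above). Keys are the decoded cursor names from the font table:
#
#     cursors = Cursors(conn)
#     ptr = cursors['left_ptr']    # X cursor id, themed if xcb-cursor was found
#     busy = cursors['watch']      # e.g. for a "busy" pointer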
|
flacjacket/qtile
|
libqtile/core/xcursors.py
|
Python
|
mit
| 4,799
|
[
"FLEUR"
] |
1ca781b130ddaef4ab142cbbaa020c8842788ed0f4afbb81817870040f790235
|
"""ETL pipeline for project 'eurominder'
"""
from __future__ import absolute_import
from builtins import zip
from builtins import range
from os import path
import luigi
import pandas as pd
from sqlalchemy.orm.exc import NoResultFound
import sqlalchemy as sa
import pygeoj
from . import models
from ozelot import client
from ozelot.etl import tasks
class LoadEverything(tasks.ORMWrapperTask):
"""A top-level task to wrap the whole pipeline
"""
def requires(self):
yield LoadAllEuroStatsTables()
yield LoadAllClimateData()
yield Tests()
class Tests(tasks.ORMWrapperTask):
"""A task wrapping all tests
"""
def requires(self):
yield TestNUTS2Regions()
yield TestAllEuroStatsTables()
yield TestAllClimateData()
class NUTS2InputFile(tasks.InputFileTask):
"""An input file for NUTS2 metadata
"""
@property
def input_file(self):
"""Returns the input file name, with a default relative path
"""
return path.join(path.dirname(__file__), 'data', 'NUTS_2013.csv')
def load(self):
"""Load data, from default location
Returns:
pandas.DataFrame: columns 'key' (NUTS2 code), 'name'
"""
# read file, keep all values as strings
df = pd.read_csv(self.input_file,
sep=',',
quotechar='"',
encoding='utf-8',
dtype=object)
# we are only interested in the NUTS code and description; rename them as well
df = df[['NUTS-Code', 'Description']]
df.columns = ['key', 'name']
# we only want NUTS2 regions (4-digit codes)
df = df[df['key'].str.len() == 4]
# drop 'Extra Regio' codes ending in 'ZZ'
df = df[df['key'].str[2:] != 'ZZ']
return df
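# Illustrative sketch of the frame returned by NUTS2InputFile.load() (example
# rows only): after the filtering above it holds one row per NUTS2 region, e.g.
#
#       key   name
#       DE27  Schwaben
#       FR10  Ile-de-France
#
# with 4-character codes only and the 'ZZ' extra-regio entries dropped.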
class NUTS2Regions(tasks.ORMObjectCreatorMixin, tasks.ORMTask):
"""Load base data about NUTS2 regions
"""
object_classes = [models.NUTS2Region]
def requires(self):
yield NUTS2InputFile()
def run(self):
# read the data
df = next(self.requires()).load()
# -- start documentation include: standard-object-add
# build all objects
for _, row in df.iterrows():
self.session.add(models.NUTS2Region(name=row['name'],
key=row['key']))
# -- end documentation include: standard-object-add
self.done()
class TestNUTS2Regions(tasks.ORMTestTask):
"""Test for :func:`NUTS2Regions` loading task
"""
def requires(self):
yield NUTS2Regions()
def run(self):
# load all region data
regions = self.client.df_query(self.session.query(models.NUTS2Region))
# known number of entries
assert len(regions) == 276
# uniqueness and non-null-ness
assert len(regions) == len(regions.dropna(how='any'))
assert len(regions['key'].unique()) == len(regions)
assert len(regions['name'].unique()) == len(regions)
# check one random element
sel = regions[regions['key'] == 'DE27']
assert sel.iloc[0]['name'] == u'Schwaben'
self.done()
class EuroStatsInputFile(tasks.InputFileTask):
"""An input file for EuroStats tables
"""
# indicator number, as string, including leading zeros (to be defined in derived class)
number = luigi.Parameter()
@property
def input_file(self):
"""Returns the input file name, with a default relative path
"""
return path.join(path.dirname(__file__), 'data', 'tgs{:s}.tsv'.format(self.number))
def load(self, key_filter=None, header_preproc=None):
"""Load data table from tsv file, from default location
Args:
key_filter (str): additional filter for key column - regex matching
key values to include; None for no filter
header_preproc (func): function to apply to column headers to extract year numbers (as strings)
Returns:
pd.DataFrame: data
"""
# read file, keep all values as strings
df = pd.read_csv(self.input_file,
sep='\t',
dtype=object)
if key_filter is not None:
# filter on key column (first column)
df = df[df[df.columns[0]].str.match(key_filter)]
# first column contains metadata, with NUTS2 region key as last (comma-separated) value
meta_col = df.columns[0]
df[meta_col] = df[meta_col].str.split(',').str[-1]
# convert columns to numbers, skip first column (containing metadata)
for col_name in df.columns[1:]:
# some values have lower-case characters indicating footnotes, strip them
stripped = df[col_name].str.replace(r'[a-z]', '')
# convert to numbers, convert any remaining empty values (indicated by ':' in the input table) to NaN
df[col_name] = pd.to_numeric(stripped, errors='coerce')
# preprocess headers
if header_preproc is not None:
df.columns = list(df.columns[:1]) + [header_preproc(c) for c in df.columns[1:]]
# rename columns, convert years to integers
# noinspection PyTypeChecker
df.columns = ['key'] + [int(y) for y in df.columns[1:]]
return df
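# Illustrative sketch of what EuroStatsInputFile.load() does to one row
# (values invented, header shown schematically): the raw tsv's first column
# holds comma-separated metadata ending in the NUTS2 key, and year columns may
# carry footnote letters or ':' for missing values.
#
#   before:  'PC,T,Y15-74,DE27'    '12.3 b'    ':'
#   after:    key = 'DE27'          12.3        NaN    (year headers become ints)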
class LoadAllEuroStatsTables(tasks.ORMWrapperTask):
"""Wrapper task for loading all EuroStats tables
"""
def requires(self):
yield EuroStatsGDP()
yield EuroStatsUnemployment()
yield EuroStatsPopDensity()
yield EuroStatsHRSciTech()
yield EuroStatsCancerDeaths()
yield EuroStatsHeartDiseaseDeaths()
yield EuroStatsFertilityRate()
yield EuroStatsLifeExpectancy()
class TestAllEuroStatsTables(tasks.ORMWrapperTask):
"""Wrapper task for consistency checks for all EuroStats tables
"""
def requires(self):
yield TestEuroStatsGDP()
yield TestEuroStatsUnemployment()
yield TestEuroStatsPopDensity()
yield TestEuroStatsHRSciTech()
yield TestEuroStatsCancerDeaths()
yield TestEuroStatsHeartDiseaseDeaths()
yield TestEuroStatsFertilityRate()
yield TestEuroStatsLifeExpectancy()
class LoadEuroStatsTableBase(tasks.ORMTask):
"""Base class for loading EuroStats tables
"""
# indicator number, as string, including leading zeros (to be defined in derived class)
number = None
# indicator (short) description (to be defined in derived class)
description = None
# additional filter on key column (e.g. to select from a table with values by sex);
# regular expression for columns to include, None for no filter
key_filter = None
# header preprocessing for input file (see :class:`EuroStatsInputFile`)
header_preproc = None
def requires(self):
yield EuroStatsInputFile(number=self.number)
yield NUTS2Regions()
def clear(self):
"""Clear output from one (derived) class loader
"""
# mark this task as incomplete
self.mark_incomplete()
# Delete the indicator metadata, this also deletes values by cascading.
#
# NOTE: calling 'delete()' on the query (instead of on the queried object,
# as done here), would NOT work! For a query, there is no in-Python cascading
# of delete statements in sqlalchemy, so the associated values would not
# be deleted e.g. for SQLite databases.
try:
indicator = self.session.query(models.EuroStatIndicator) \
.filter(models.EuroStatIndicator.number == self.number) \
.one()
self.session.delete(indicator)
except NoResultFound:
# Data didn't exist yet, no problem
pass
self.close_session()
def run(self):
"""Load table data to :class:`EuroStatsValue` objects
"""
# -- start documentation include: eurostats-run-1
# create a new indicator metadata object
indicator = models.EuroStatIndicator(
number=self.number,
description=self.description,
url="http://ec.europa.eu/eurostat/web/products-datasets/-/tgs" + self.number)
# add/commit to get the object ID filled
self.session.add(indicator)
self.session.commit()
# -- end documentation include: eurostats-run-1
# -- start documentation include: eurostats-run-2
# load data from input file task
df = next(self.requires()).load(key_filter=self.key_filter,
header_preproc=self.header_preproc)
# Transform data: DataFrame from loading has NUTS2 key and years as columns.
# Index by key, then stack years as second level of index. Reset the index
# to get year and key as regular columns, with one value column left.
values = df.set_index('key').stack()
values.index.levels[1].name = 'year'
values.name = 'value'
df = values.reset_index()
# -- end documentation include: eurostats-run-2
# -- start documentation include: eurostats-run-3
# get current max ID for EuroStatValue objects, for manual ID generation
max_id = models.EuroStatValue.get_max_id(self.session)
# append an ID column, starting with the current max ID of the object class plus one
df['id'] = list(range(max_id + 1, max_id + 1 + len(df)))
# -- end documentation include: eurostats-run-3
# -- start documentation include: eurostats-run-4
# append indicator ID (constant)
df['indicator_id'] = indicator.id
# append region ID column, by mapping NUTS2 region keys to DB object IDs
regions = self.client.df_query(self.session.query(models.NUTS2Region)) \
.set_index('key')['id']
df['region_id'] = df['key'].map(regions)
# drop columns that are not part of the data model
df = df.drop(['key'], axis=1) # type: pd.DataFrame
# -- end documentation include: eurostats-run-4
# -- start documentation include: eurostats-run-5
# store, done
df.to_sql(name=models.EuroStatValue.__tablename__,
con=client.get_client().engine,
if_exists='append',
index=False)
self.done()
# -- end documentation include: eurostats-run-5
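# Worked sketch of the reshape in LoadEuroStatsTableBase.run() above (numbers
# invented): the loaded frame has one row per NUTS2 key and one column per
# year, and set_index('key').stack() turns it into one (key, year, value) row
# per cell.
#
#   before:                        after stack() / reset_index():
#     key    2011   2012             key   year  value
#     DE27  31000  32000             DE27  2011  31000
#     FR10  52000  51000             DE27  2012  32000
#                                    FR10  2011  52000
#                                    FR10  2012  51000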
class TestEuroStatsTableBase(tasks.ORMTestTask):
"""Base class for testing loading of EuroStats tables
"""
# max number of missing values (to be defined in derived class)
max_missing = None
# value for DE71, 2012 (to be defined in derived class)
test_value = None
# min/max admissable values (to be defined in derived class)
min_val = None
max_val = None
# maximum fraction of regions with no values
max_missing_region_fraction = 0.05
# header preprocessing for input file (see :class:`EuroStatsInputFile`)
header_preproc = None
def run(self):
# the tested task's data ID
number = next(self.requires()).number
# get number of regions
n_regions = self.session.query(models.NUTS2Region).count()
# get number of years from input file
input_df = EuroStatsInputFile(number=number).load(header_preproc=self.header_preproc)
n_years = len(input_df.columns) - 1
# values for the indicator
# noinspection PyComparisonWithNone
query = self.session.query(models.EuroStatValue.value) \
.filter(models.EuroStatValue.indicator.has(number=number)) \
.filter(models.EuroStatValue.region_id != None)
values = self.client.df_query(query)['value']
# full data for 12 years * all 276 NUTS2 regions, with less than N missing values,
# but also not more values than full coverage (to test if key filter works);
# ignore additional data points
assert len(values) >= n_years * n_regions - self.max_missing
assert len(values) < n_years * n_regions
# most regions should have some values
# noinspection PyComparisonWithNone
cnt = self.session.query(models.EuroStatValue.region_id.distinct()) \
.filter(models.EuroStatValue.indicator.has(number=number)) \
.filter(models.EuroStatValue.region_id != None) \
.count()
assert cnt > n_regions * (1. - self.max_missing_region_fraction)
# all years should have some values
# noinspection PyComparisonWithNone
cnt = self.session.query(models.EuroStatValue.year.distinct()) \
.filter(models.EuroStatValue.indicator.has(number=number)) \
.filter(models.EuroStatValue.region_id != None) \
.count()
assert cnt == n_years
# test min/max
assert values.min() >= self.min_val
assert values.max() <= self.max_val
self.done()
class EuroStatsGDP(LoadEuroStatsTableBase):
number = "00005"
description = "Regional gross domestic product (PPS per inhabitant)"
class TestEuroStatsGDP(TestEuroStatsTableBase):
max_missing = 2
min_val = 3000
max_val = 200000
def requires(self):
yield EuroStatsGDP()
class EuroStatsUnemployment(LoadEuroStatsTableBase):
number = "00010"
description = "Unemployment rate (percent)"
# additional filter on input file key column: extract totals only, not male/female
key_filter = '.*,T,.*'
class TestEuroStatsUnemployment(TestEuroStatsTableBase):
max_missing = 100
min_val = 1
max_val = 40
def requires(self):
yield EuroStatsUnemployment()
class EuroStatsPopDensity(LoadEuroStatsTableBase):
number = "00024"
description = "Population density (people per km^2)"
class TestEuroStatsPopDensity(TestEuroStatsTableBase):
max_missing = 100
min_val = 1
max_val = 15000
def requires(self):
yield EuroStatsPopDensity()
class EuroStatsHRSciTech(LoadEuroStatsTableBase):
number = "00038"
description = "Human resources in science and technology (percent)"
class TestEuroStatsHRSciTech(TestEuroStatsTableBase):
max_missing = 100
min_val = 5
max_val = 95
def requires(self):
yield EuroStatsHRSciTech()
class EuroStatsCancerDeaths(LoadEuroStatsTableBase):
number = "00058"
description = "Deaths due to cancer (cases per 100k population)"
# additional filter on input file key column: extract totals only, not male/female
key_filter = '^T,.*'
# column headers give 3-year averaging periods like '1997_1999', extract the second year
@staticmethod
def header_preproc(s):
return s.split('_')[1]
class TestEuroStatsCancerDeaths(TestEuroStatsTableBase):
max_missing = 800
max_missing_region_fraction = 0.2
min_val = 150
max_val = 450
# column headers give 3-year averaging periods like '1997_1999', extract the second year
@staticmethod
def header_preproc(s):
return s.split('_')[1]
def requires(self):
yield EuroStatsCancerDeaths()
class EuroStatsHeartDiseaseDeaths(LoadEuroStatsTableBase):
number = "00059"
description = "Deaths due to ischemic heart disease (cases per 100k population)"
# additional filter on input file key column: extract totals only, not male/female
key_filter = '^T,.*'
# column headers give 3-year averaging periods like '1997_1999', extract the second year
@staticmethod
def header_preproc(s):
return s.split('_')[1]
class TestEuroStatsHeartDiseaseDeaths(TestEuroStatsTableBase):
max_missing = 800
max_missing_region_fraction = 0.2
min_val = 50
max_val = 800
# column headers give 3-year averaging periods like '1997_1999', extract the second year
@staticmethod
def header_preproc(s):
return s.split('_')[1]
def requires(self):
yield EuroStatsHeartDiseaseDeaths()
class EuroStatsFertilityRate(LoadEuroStatsTableBase):
number = "00100"
description = "Total fertility rate"
class TestEuroStatsFertilityRate(TestEuroStatsTableBase):
max_missing = 200
min_val = 0.8
max_val = 6.0
def requires(self):
yield EuroStatsFertilityRate()
class EuroStatsLifeExpectancy(LoadEuroStatsTableBase):
number = "00101"
description = "Life expectancy at birth (years)"
# additional filter on input file key column: extract totals only, not male/female
key_filter = '.*,T,.*'
class TestEuroStatsLifeExpectancy(TestEuroStatsTableBase):
max_missing = 300
min_val = 60
max_val = 90
def requires(self):
yield EuroStatsLifeExpectancy()
class NUTS2GeoJSONInputFile(tasks.InputFileTask):
"""An input file for NUTS2 GeoJSON polygons
"""
@property
def input_file(self):
"""Returns the input file name, with a default relative path
"""
return path.join(path.dirname(__file__), 'data', 'nuts_rg_60M_2013_lvl_2.geojson')
def load(self):
"""Load data, from default location
Returns:
pygeoj.GeojsonFile: GeoJSON data
"""
return pygeoj.load(self.input_file)
def geojson_polygon_to_mask(feature, shape, lat_idx, lon_idx):
"""Convert a GeoJSON polygon feature to a numpy array
Args:
feature (pygeoj.Feature): polygon feature to draw
shape (tuple(int, int)): shape of 2D target numpy array to draw polygon in
lat_idx (func): function converting a latitude to the (fractional) row index in the map
lon_idx (func): function converting a longitude to the (fractional) column index in the map
Returns:
np.array: mask, background is zero, foreground is one
"""
import matplotlib
# specify 'agg' renderer, Mac renderer does not support what we want to do below
matplotlib.use('agg')
import matplotlib.pyplot as plt
from matplotlib import patches
import numpy as np
# we can only do polygons right now
if feature.geometry.type not in ('Polygon', 'MultiPolygon'):
raise ValueError("Cannot handle feature of type " + feature.geometry.type)
# fictional dpi - doesn't matter in the end
dpi = 100
# -- start documentation include: poly-setup
# make a new figure with no frame, no axes, with the correct size, black background
fig = plt.figure(frameon=False, dpi=dpi, )
fig.set_size_inches(shape[1] / float(dpi), shape[0] / float(dpi))
ax = plt.Axes(fig, [0., 0., 1., 1.])
ax.set_axis_off()
# noinspection PyTypeChecker
ax.set_xlim([0, shape[1]])
# noinspection PyTypeChecker
ax.set_ylim([0, shape[0]])
fig.add_axes(ax)
# -- end documentation include: poly-setup
# for normal polygons make coordinates iterable
if feature.geometry.type == 'Polygon':
coords = [feature.geometry.coordinates]
else:
coords = feature.geometry.coordinates
for poly_coords in coords:
# the polygon may contain multiple outlines; the first is
# always the outer one, the others are 'holes'
for i, outline in enumerate(poly_coords):
# inside/outside fill value: figure background is white by
# default, draw inverted polygon and invert again later
value = 0. if i == 0 else 1.
# convert lats/lons to row/column indices in the array
outline = np.array(outline)
xs = lon_idx(outline[:, 0])
ys = lat_idx(outline[:, 1])
# draw the polygon
poly = patches.Polygon(list(zip(xs, ys)),
facecolor=(value, value, value),
edgecolor='none',
antialiased=True)
ax.add_patch(poly)
# -- start documentation include: poly-extract
# extract the figure to a numpy array,
fig.canvas.draw()
data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='')
# reshape to a proper numpy array, keep one channel only
data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))[:, :, 0]
# -- end documentation include: poly-extract
# make sure we get the right shape back
assert data.shape[0] == shape[0]
assert data.shape[1] == shape[1]
# convert from uints back to floats and invert to get black background
data = 1. - data.astype(float) / 255. # type: np.array
# the rendered image is vertically flipped w.r.t. the map, so flip the rows back
data = data[::-1, :]
# done, clean up
plt.close('all')
return data
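# Usage sketch (assumes the input-file tasks defined in this module; the file
# name is the one listed in CLIMATE_DEFS below): rasterize one NUTS2 polygon
# onto the climate grid so it can serve as an averaging weight.
#
#   nuts = NUTS2GeoJSONInputFile().load()
#   clim = ClimateDataInputFile(variable_name='air',
#                               file_name='air.mon.1981-2010.ltm.v301.nc').load()
#   feature = next(iter(nuts))
#   mask = geojson_polygon_to_mask(feature,
#                                  clim['data'].shape[1:],   # (lat, lon) grid shape
#                                  clim['lat_idx'], clim['lon_idx'])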
class ClimateDataInputFile(tasks.InputFileTask):
"""Climate data input file with loading method
"""
# -- start documentation include: climate-input-params
# climate variable name (in netcdf file)
variable_name = luigi.Parameter()
# output file name (without path)
file_name = luigi.Parameter()
# -- end documentation include: climate-input-params
@property
def input_file(self):
"""Returns the input file name, with a default relative path
"""
return path.join(path.dirname(__file__), 'data', self.file_name)
def load(self):
"""Load the climate data as a map
Returns:
dict: {data: masked 3D numpy array containing climate data per month (first axis),
lat_idx: function converting a latitude to the (fractional) row index in the map,
lon_idx: function converting a longitude to the (fractional) column index in the map}
"""
from scipy.io import netcdf_file
from scipy import interpolate
import numpy as np
# load file
f = netcdf_file(self.input_file)
# extract data, making explicit copies of it
out = dict()
lats = f.variables['lat'][:].copy()
lons = f.variables['lon'][:].copy()
# lons start at 0, which is bad for data in Europe because the map border runs right through it;
# roll array by half its width to get Europe into the map center
out['data'] = np.roll(f.variables[self.variable_name][:, :, :].copy(), shift=len(lons) // 2, axis=2)
lons = np.roll(lons, shift=len(lons) // 2)
# avoid wraparound problems around zero by setting lon range to -180...180, this is
# also the format used in the GeoJSON NUTS2 polygons
lons[lons > 180] -= 360
# the data uses very negative values (~ -9e36) as an 'invalid data' flag; convert these to a masked array
out['data'] = np.ma.array(out['data'])
out['data'][out['data'] < -1.e6] = np.ma.masked
# -- start documentation include: climate-input-interp
# build interpolators to convert lats/lons to row/column indices
out['lat_idx'] = interpolate.interp1d(x=lats, y=np.arange(len(lats)))
out['lon_idx'] = interpolate.interp1d(x=lons, y=np.arange(len(lons)))
# -- end documentation include: climate-input-interp
# clean up
f.close()
return out
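# Small worked example of the longitude handling above (a 4-point grid for
# brevity): with lons = [45, 135, 225, 315], np.roll(lons, shift=2) gives
# [225, 315, 45, 135]; subtracting 360 from entries > 180 then yields
# [-135, -45, 45, 135], i.e. a -180..180 convention with Europe (lon ~ 0)
# in the middle of the array instead of at its edge.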
# suffixes for seasonal climate data
CLIMATE_SEASON_SUFFIXES = dict(summer=' Apr-Sep',
winter=' Oct-Mar')
class LoadClimateData(tasks.ORMTask):
"""Load data for one climate variable
"""
# -- start documentation include: climate-load-params
# climate variable name (as in netcdf file)
variable_name = luigi.Parameter()
# input file name (without path)
input_file = luigi.Parameter()
# climate variable description (in indicator)
description = luigi.Parameter()
# -- end documentation include: climate-load-params
def requires(self):
yield ClimateDataInputFile(variable_name=self.variable_name,
file_name=self.input_file)
yield NUTS2GeoJSONInputFile()
yield NUTS2Regions()
def clear(self):
"""Clear output of one climate variable
"""
# mark this task as incomplete
self.mark_incomplete()
# Delete the indicator metadata, this also deletes values by cascading.
for suffix in list(CLIMATE_SEASON_SUFFIXES.values()):
try:
# noinspection PyUnresolvedReferences
indicator = self.session.query(models.ClimateIndicator) \
.filter(models.ClimateIndicator.description == self.description + suffix) \
.one()
self.session.delete(indicator)
except NoResultFound:
# Data didn't exist yet, no problem
pass
self.close_session()
def run(self):
"""Load climate data and convert to indicator objects
"""
import numpy as np
# get all NUTS region IDs, for linking values to region objects
query = self.session.query(models.NUTS2Region.key,
models.NUTS2Region.id)
region_ids = self.client.df_query(query).set_index('key')['id'].to_dict()
# load climate data and NUTS2 polygons
data = next(self.requires()).load()
nuts = NUTS2GeoJSONInputFile().load()
# generated indicator IDs, keyed by season
indicator_ids = dict()
# climate data by season
t_data = dict()
# create new indicator objects for summer and winter, create averaged climate data
for season, suffix in CLIMATE_SEASON_SUFFIXES.items():
# noinspection PyUnresolvedReferences
indicator = models.ClimateIndicator(description=self.description + suffix)
self.session.add(indicator)
# commit, to get indicator ID filled
self.session.commit()
indicator_ids[season] = indicator.id
# select winter or summer data by month index, average over time range
if season == 'summer':
t_data[season] = np.ma.average(data['data'][3:9, :, :], axis=0)
else:
# noinspection PyTypeChecker
t_data[season] = np.ma.average(0.5 * (data['data'][0:3, :, :] + data['data'][9:12, :, :]), axis=0)
# container for output objects, for bulk saving
objects = []
# start value for manual object id generation
current_value_id = models.ClimateValue.get_max_id(self.session)
# for each region, get a mask, average climate variable over the mask and store the indicator value;
# loop over features first, then over seasons, because mask generation is expensive
for feature in nuts:
# draw region mask (doesn't matter for which season we take the map shape)
mask = geojson_polygon_to_mask(feature=feature,
shape=t_data['summer'].shape,
lat_idx=data['lat_idx'],
lon_idx=data['lon_idx'])
# create indicator values for summer and winter
for season in list(CLIMATE_SEASON_SUFFIXES.keys()):
# weighted average from region mask
value = np.ma.average(t_data[season], weights=mask)
# region ID must be cast to int (DBs don't like numpy dtypes from pandas)
region_id = region_ids.get(feature.properties['NUTS_ID'], None)
if region_id is not None:
region_id = int(region_id)
# append an indicator value, manually generate object IDs for bulk saving
current_value_id += 1
objects.append(models.ClimateValue(id=current_value_id,
value=value,
region_id=region_id,
indicator_id=indicator_ids[season]))
# # print some debugging output
# print self.variable_name + ' ' + season, feature.properties['NUTS_ID'], value
# # generate some plots for debugging
# from matplotlib import pyplot as plt
# plt.subplot(211)
# plt.imshow(0.02 * t_data + mask * t_data, interpolation='none')
# plt.subplot(212)
# plt.imshow(t_data, interpolation='none')
# plt.savefig('/tmp/' + feature.properties['NUTS_ID'] + '.png')
# bulk-save all objects
self.session.bulk_save_objects(objects)
self.session.commit()
self.done()
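# Sketch of the per-region reduction in LoadClimateData.run() above (the grid
# shape is illustrative): for a (360, 720) half-degree climate grid, `mask` is
# a same-shaped array that is ~1 inside the NUTS2 polygon and ~0 outside, so
#     np.ma.average(t_data[season], weights=mask)
# is the polygon-weighted mean of the seasonal field, with masked (invalid)
# cells ignored.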
class TestClimateData(tasks.ORMTestTask):
"""Consistency checks for climate data
"""
# min/max values to test against
min_val = luigi.FloatParameter()
max_val = luigi.FloatParameter()
# max number of regions with missing value
max_missing = luigi.IntParameter()
# climate variable description (in indicator), including season
description = luigi.Parameter()
def requires(self):
# testing one output doesn't really require all climate data to be loaded, but setting the
# requirement to a specific task requires passing around lots of parameters
yield LoadAllClimateData()
def run(self):
# load values for all 'official' NUTS2 regions (join is 'inner left' by default)
query = self.session.query(models.ClimateValue) \
.join(models.NUTS2Region) \
.filter(models.ClimateValue.indicator.has(description=self.description))
df = self.client.df_query(query)
# get number of NUTS2 regions
n_regions = self.session.query(sa.func.count(models.NUTS2Region.id)).scalar()
# test values
assert df['value'].min() >= self.min_val
assert df['value'].max() <= self.max_val
assert len(df) >= n_regions - self.max_missing
self.done()
# list of climate data input file names and indicator descriptions, keyed by variable name
CLIMATE_DEFS = {
'air': {'file_name': 'air.mon.1981-2010.ltm.v301.nc',
'description': 'Mean temperature (C)'},
'precip': {'file_name': 'precip.mon.ltm.v301.nc',
'description': 'Mean precipitation (cm/month)'},
}
class LoadAllClimateData(tasks.ORMWrapperTask):
"""Wrapper task for loading all climate data
"""
def clear(self):
"""For this task we want to clear requirements
"""
self.mark_incomplete()
for req in self.requires():
req.clear()
def requires(self):
# generate one requirement for each listed climate variable
for variable_name, settings in CLIMATE_DEFS.items():
yield LoadClimateData(variable_name=variable_name,
input_file=settings['file_name'],
description=settings['description'])
class TestAllClimateData(tasks.ORMWrapperTask):
def clear(self):
"""For this task we want to clear requirements
"""
self.mark_incomplete()
for req in self.requires():
req.clear()
def requires(self):
yield TestClimateData(description=CLIMATE_DEFS['air']['description'] + CLIMATE_SEASON_SUFFIXES['winter'],
min_val=-10,
max_val=30,
max_missing=10)
yield TestClimateData(description=CLIMATE_DEFS['air']['description'] + CLIMATE_SEASON_SUFFIXES['summer'],
min_val=5,
max_val=35,
max_missing=10)
yield TestClimateData(description=CLIMATE_DEFS['precip']['description'] + CLIMATE_SEASON_SUFFIXES['winter'],
min_val=2,
max_val=50,
max_missing=10)
yield TestClimateData(description=CLIMATE_DEFS['precip']['description'] + CLIMATE_SEASON_SUFFIXES['summer'],
min_val=0,
max_val=50,
max_missing=10)
|
trycs/ozelot
|
examples/eurominder/eurominder/pipeline.py
|
Python
|
mit
| 32,131
|
[
"NetCDF"
] |
7d2ecaf88923cc70654d4973ca48caf630abe6daa7b788eb67bfa09b034d626e
|
from collections import OrderedDict
import logging
import os
import simtk.unit as units
from intermol.utils import which, run_subprocess
from intermol.gromacs.gromacs_parser import load, save
GMX_PATH = ''
logger = logging.getLogger('InterMolLog')
def binaries(gmxpath, gmxsuff):
"""Locate the paths to the best available gromacs binaries. """
def gmx_path(binary_path):
return os.path.join(gmxpath, binary_path + gmxsuff)
if which('gmx_d'):
logger.debug("Using double precision binaries for gromacs")
main_binary = gmx_path('gmx_d')
grompp_bin = [main_binary, 'grompp']
mdrun_bin = [main_binary, 'mdrun']
genergy_bin = [main_binary, 'energy']
elif which('grompp_d') and which('mdrun_d') and which('g_energy_d'):
logger.debug("Using double precision binaries")
grompp_bin = [gmx_path('grompp_d')]
mdrun_bin = [gmx_path('mdrun_d')]
genergy_bin = [gmx_path('g_energy_d')]
elif which('gmx'):
logger.debug("Using single precision binaries for gromacs")
main_binary = gmx_path('gmx')
grompp_bin = [main_binary, 'grompp']
mdrun_bin = [main_binary, 'mdrun']
genergy_bin = [main_binary, 'energy']
elif which('grompp') and which('mdrun') and which('g_energy'):
logger.debug("Using single precision binaries")
grompp_bin = [gmx_path('grompp')]
mdrun_bin = [gmx_path('mdrun')]
genergy_bin = [gmx_path('g_energy')]
else:
raise IOError('Unable to find gromacs executables.')
return grompp_bin, mdrun_bin, genergy_bin
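# Usage sketch (what you get depends on the local gromacs install): with a
# gromacs 5+ wrapper binary on the PATH this yields sub-command style
# invocations, e.g.
#
#   grompp_bin, mdrun_bin, genergy_bin = binaries('', '')
#   # grompp_bin == ['gmx_d', 'grompp'], or ['grompp_d'], or ['gmx', 'grompp'], ...
#
# and the lists are extended with file arguments below before being run.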
# energy terms we are ignoring
unwanted = ['Kinetic En.', 'Total Energy', 'Temperature', 'Pressure',
'Volume', 'Box-X', 'Box-Y', 'Box-Z', 'Box-atomic_number',
'Pres. DC', 'Vir-XY', 'Vir-XX', 'Vir-XZ', 'Vir-YY', 'Vir-YX',
'Vir-YZ', 'Vir-ZX', 'Vir-ZY', 'Vir-ZZ', 'pV', 'Density', 'Enthalpy']
def energies(top, gro, mdp, gmx_path=GMX_PATH, grosuff='', grompp_check=False):
"""Compute single-point energies using GROMACS.
Args:
top (str): path to the GROMACS topology (.top) file.
gro (str): path to the coordinate (.gro) file.
mdp (str): path to the run-parameter (.mdp) file.
gmx_path (str): directory containing the gromacs binaries (may be empty to rely on the PATH).
grosuff (str): suffix appended to the binary names.
grompp_check (bool): currently unused in this function.
Returns:
e_out: OrderedDict of grouped energy terms (values in kJ/mol).
ener_xvg: path to the energy.xvg file written by g_energy.
"""
if not os.path.isfile(mdp):
logger.error("Can't find mdp file %s to compute energies" % (mdp))
mdp = os.path.abspath(mdp)
directory, _ = os.path.split(os.path.abspath(top))
tpr = os.path.join(directory, 'topol.tpr')
ener = os.path.join(directory, 'ener.edr')
ener_xvg = os.path.join(directory, 'energy.xvg')
conf = os.path.join(directory, 'confout.gro')
mdout = os.path.join(directory, 'mdout.mdp')
state = os.path.join(directory, 'state.cpt')
traj = os.path.join(directory, 'traj.trr')
log = os.path.join(directory, 'md.log')
stdout_path = os.path.join(directory, 'gromacs_stdout.txt')
stderr_path = os.path.join(directory, 'gromacs_stderr.txt')
grompp_bin, mdrun_bin, genergy_bin = binaries(gmx_path, grosuff)
# Run grompp.
grompp_bin.extend(['-f', mdp, '-c', gro, '-p', top, '-o', tpr, '-po', mdout, '-maxwarn', '5'])
proc = run_subprocess(grompp_bin, 'gromacs', stdout_path, stderr_path)
if proc.returncode != 0:
logger.error('grompp failed. See %s' % stderr_path)
# Run single-point calculation with mdrun.
mdrun_bin.extend(['-nt', '1', '-s', tpr, '-o', traj, '-cpo', state, '-c', conf, '-e', ener, '-g', log])
proc = run_subprocess(mdrun_bin, 'gromacs', stdout_path, stderr_path)
if proc.returncode != 0:
logger.error('mdrun failed. See %s' % stderr_path)
# Extract energies using g_energy
select = " ".join(map(str, range(1, 20))) + " 0 "
genergy_bin.extend(['-f', ener, '-o', ener_xvg, '-dp'])
proc = run_subprocess(genergy_bin, 'gromacs', stdout_path, stderr_path, stdin=select)
if proc.returncode != 0:
logger.error('g_energy failed. See %s' % stderr_path)
return _group_energy_terms(ener_xvg)
def _group_energy_terms(ener_xvg):
"""Parse energy.xvg file to extract and group the energy terms in a dict. """
with open(ener_xvg) as f:
all_lines = f.readlines()
energy_types = [line.split('"')[1] for line in all_lines if line[:3] == '@ s']
energy_values = [float(x) * units.kilojoule_per_mole for x in all_lines[-1].split()[1:]]
e_out = OrderedDict(zip(energy_types, energy_values))
# Discard non-energy terms.
for group in unwanted:
if group in e_out:
del e_out[group]
# Dispersive energies.
# TODO: Do buckingham energies also get dumped here?
dispersive = ['LJ (SR)', 'LJ-14', 'Disper. corr.']
e_out['Dispersive'] = 0 * units.kilojoules_per_mole
for group in dispersive:
if group in e_out:
e_out['Dispersive'] += e_out[group]
# Electrostatic energies.
electrostatic = ['Coulomb (SR)', 'Coulomb-14', 'Coul. recip.']
e_out['Electrostatic'] = 0 * units.kilojoules_per_mole
for group in electrostatic:
if group in e_out:
e_out['Electrostatic'] += e_out[group]
e_out['Non-bonded'] = e_out['Electrostatic'] + e_out['Dispersive']
all_angles = ['Angle', 'U-B', 'G96Angle', 'Restricted Angles', 'Bond-Cross',
'BA-Cross', 'Quartic Angles']
e_out['All angles'] = 0 * units.kilojoules_per_mole
for group in all_angles:
if group in e_out:
e_out['All angles'] += e_out[group]
all_dihedrals = ['Ryckaert-Bell.', 'Proper Dih.', 'Improper Dih.']
e_out['All dihedrals'] = 0 * units.kilojoules_per_mole
for group in all_dihedrals:
if group in e_out:
e_out['All dihedrals'] += e_out[group]
return e_out, ener_xvg
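# For reference, a trimmed-down energy.xvg of the kind parsed above looks like
# this (values invented; only the '@ s<i> legend' lines and the final data row
# are used):
#
#   @ s0 legend "Bond"
#   @ s1 legend "Angle"
#   @ s2 legend "LJ (SR)"
#   0.000000  123.456789  23.456789  -456.789012
#
# which _group_energy_terms turns into an OrderedDict such as
#   {'Bond': 123.456789 kJ/mol, 'Angle': ..., 'LJ (SR)': ...}
# before summing the dispersive, electrostatic and bonded groups.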
|
ctk3b/InterMol
|
intermol/gromacs/__init__.py
|
Python
|
mit
| 5,739
|
[
"Gromacs"
] |
8a1bdcc37519d4a75e21be8fa1b7b2c5f1a7bfd9042ce1e7e3a4de049f5d6e66
|
#!/usr/bin/env python
"""
Add files to an existing transformation
Usage:
dirac-transformation-add-files TransID <LFN | fileContainingLFNs>
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
__RCSID__ = "$Id$"
import os
import DIRAC
from DIRAC.Core.Base import Script
from DIRAC.Core.Utilities.DIRACScript import DIRACScript
@DIRACScript()
def main():
Script.parseCommandLine()
from DIRAC.TransformationSystem.Client.TransformationClient import TransformationClient
args = Script.getPositionalArgs()
if len(args) < 2:
Script.showHelp(exitCode=1)
# get arguments
inputFileName = args[1]
lfns = []
if os.path.exists(inputFileName):
inputFile = open(inputFileName, 'r')
string = inputFile.read()
inputFile.close()
lfns.extend([lfn.strip() for lfn in string.splitlines()])
else:
lfns.append(inputFileName)
tc = TransformationClient()
res = tc.addFilesToTransformation(args[0], lfns) # Files added here
if not res['OK']:
DIRAC.gLogger.error(res['Message'])
DIRAC.exit(2)
successfullyAdded = 0
alreadyPresent = 0
for lfn, message in res['Value']['Successful'].items():
if message == 'Added':
successfullyAdded += 1
elif message == 'Present':
alreadyPresent += 1
if successfullyAdded > 0:
DIRAC.gLogger.notice("Successfully added %d files" % successfullyAdded)
if alreadyPresent > 0:
DIRAC.gLogger.notice("Already present %d files" % alreadyPresent)
DIRAC.exit(0)
if __name__ == "__main__":
main()
|
yujikato/DIRAC
|
src/DIRAC/TransformationSystem/scripts/dirac_transformation_add_files.py
|
Python
|
gpl-3.0
| 1,568
|
[
"DIRAC"
] |
4a1366d512dc3fd78f51474c0683e8e71fb2534054ecd64de878e3e1d7c28831
|
import os
import datetime
from zipfile import ZipFile
from billy.scrape.votes import VoteScraper, Vote
class NCVoteScraper(VoteScraper):
jurisdiction = 'nc'
def scrape(self, chamber, session):
# Unfortunately, you now have to request access to FTP.
# This method of retrieving votes needs to be be changed or
# fall back to traditional web scraping.
if session == '2009':
# 2009 files have a different delimiter and naming scheme.
vote_data_url = 'ftp://www.ncleg.net/Bill_Status/Vote Data 2009.zip'
naming_scheme = '{session}{file_label}.txt'
delimiter = ";"
else:
vote_data_url = 'ftp://www.ncleg.net/Bill_Status/Votes%s.zip' % session
naming_scheme = '{file_label}_{session}.txt'
delimiter = "\t"
fname, resp = self.urlretrieve(vote_data_url)
# fname = "/Users/brian/Downloads/Vote Data 2009.zip"
zf = ZipFile(fname)
chamber_code = 'H' if chamber == 'lower' else 'S'
# Members_YYYY.txt: tab separated
# 0: id (unique only in chamber)
# 1: H or S
# 2: member name
# 3-5: county, district, party
# 6: mmUserId
member_file = zf.open(naming_scheme.format(file_label='Members', session=session))
members = {}
for line in member_file.readlines():
data = line.split(delimiter)
if data[1] == chamber_code:
members[data[0]] = data[2]
# Votes_YYYY.txt
# 0: sequence number
# 1: chamber (S/H)
# 2: date
# 3: prefix
# 4: bill_id
# 5: yes votes
# 6: no votes
# 7: excused absences
# 8: excused votes
# 9: didn't vote
# 10: total yes+no
# 11: sponsor
# 12: reading info
# 13: info
# 20: PASSED/FAILED
# 21: legislative day
vote_file = zf.open(naming_scheme.format(file_label='Votes', session=session))
bill_chambers = {'H':'lower', 'S':'upper'}
votes = {}
for line in vote_file.readlines():
data = line.split(delimiter)
if len(data) < 24:
self.warning('line too short %s', data)
continue
if data[1] == chamber_code:
date = datetime.datetime.strptime(data[2][:16],
'%Y-%m-%d %H:%M')
if data[3][0] not in bill_chambers:
# skip votes that aren't on bills
self.log('skipping vote %s' % data[0])
continue
votes[data[0]] = Vote(chamber, date, data[13],
'PASS' in data[20],
int(data[5]),
int(data[6]),
int(data[7])+int(data[8])+int(data[9]),
bill_chamber=bill_chambers[data[3][0]],
bill_id=data[3]+data[4], session=session)
member_vote_file = zf.open(naming_scheme.format(file_label='MemberVotes', session=session))
# 0: member id
# 1: chamber (S/H)
# 2: vote id
# 3: vote chamber (always same as 1)
# 4: vote (Y,N,E,X)
# 5: pair ID (member)
# 6: pair order
# If a vote is paired then it should be counted as an 'other'
for line in member_vote_file.readlines():
data = line.split(delimiter)
if data[1] == chamber_code:
try:
member_voting = members[data[0]]
except KeyError:
self.debug('Member %s not found.' % data[0])
continue
try:
vote = votes[data[2]]
except KeyError:
self.debug('Vote %s not found.' % data[2])
continue
# -1 votes are Lt. Gov, not included in count, so we add them
if data[4] == 'Y' and not data[5]:
if data[0] == '-1':
vote['yes_count'] += 1
vote.yes(member_voting)
elif data[4] == 'N' and not data[5]:
if data[0] == '-1':
vote['no_count'] += 1
vote.no(member_voting)
else:
# for some reason other_count is high for paired votes
if data[5]:
vote['other_count'] -= 1
# is either E: excused, X: no vote, or paired (doesn't count)
vote.other(member_voting)
for vote in votes.itervalues():
#vote.validate()
vote.add_source(vote_data_url)
self.save_vote(vote)
# remove file
zf.close()
os.remove(fname)
|
showerst/openstates
|
openstates/nc/votes.py
|
Python
|
gpl-3.0
| 4,963
|
[
"Brian"
] |
40e4abd69b40aa96046bb80613d1bda44110ba7cf92de2c90ca8ff4492b48e90
|
#!/usr/bin/env python
#============================================================================
# P Y I B E X
# File : test_thickPaving.py
# Author : Benoit Desrochers
# Copyright : Benoit Desrochers
# License : See the LICENSE file
# Created : Dec 28, 2015
#============================================================================
import unittest
import pyibex
from pyibex import Interval, IntervalVector, LargestFirst, Function
from pyibex.thickset import *
import math
class TestThickPaving(unittest.TestCase):
def test_constructor_01(self):
X0 = IntervalVector(2,[-2, 3]);
A = ThickPaving(X0, UNK);
self.assertEqual(A.X0, IntervalVector(2, [-2, 3]) );
del A
A = ThickPaving(X0, IN);
self.assertEqual(A.X0, X0 );
class TestThickPavingBisection(unittest.TestCase):
def setUp(self):
X0 = IntervalVector(2,[0, 1]);
self.f = lambda x: UNK
self.A = ThickPaving(X0, UNK, LargestFirst(0, 0.5));
def test_eps_05(self):
self.A.Sivia(self.f, 0.5, opInter);
self.assertEqual(self.A.size, 4);
def test_eps_025(self):
self.A.Sivia(self.f, 0.25, opInter);
self.assertEqual(self.A.size, 16);
def test_eps_0125(self):
self.A.Sivia(self.f, 0.125, opInter);
self.assertEqual(self.A.size, 64);
def test_eps_1_n(self):
n = 8
self.A.Sivia(self.f, 1.0/math.pow(2,n), opInter);
self.assertEqual(self.A.size, math.pow(2,2*n));
# // SECTION(" test eps = 1/2**n"){
# // A.Sivia(f, 1.0/std::pow(2,10));
# // CHECK(A.root.countLeaves() == std::pow(2,10)*std::pow(2,10) );
# // }
class TestThickTest(unittest.TestCase):
def test_ThickfInLambda(self):
flb = lambda x: x + [1., 1.]
fub = lambda x: x + [2., 2.]
test = ThickfIn(flb, fub, IntervalVector(2, [0, 2]))
flb, fub = None, None
P = ThickPaving( IntervalVector(2, [-10,10])+[1,1], test, 0.1, opInter, False)
from vibes import vibes
vibes.beginDrawing()
P.visit(ToVibes("Test"))
vibes.endDrawing()
def test_ThickfIn(self):
flb = Function("x1", "x2","( x1 + 1, x2 + 1)")
fub = Function("x1", "x2","( x1 + 2, x2 + 2)")
test = ThickfIn(flb, fub, IntervalVector(2, [0, 2]))
flb, fub = None, None
P = ThickPaving( IntervalVector(2, [-10,10]) , test, 0.5, opInter, False)
from vibes import vibes
vibes.beginDrawing()
P.visit(ToVibes("Test2"))
vibes.endDrawing()
class TestThickSetSave(unittest.TestCase):
def test_ThickfInLambda(self):
# flb = lambda x: x + [1., 1.]
# fub = lambda x: x + [2., 2.]
# flb, fub = None, None
# test = ThickfIn(flb, fub, IntervalVector(2, [0, 2]))
from vibes import vibes
vibes.beginDrawing()
test = ThickDisk(Interval(0),Interval(0), Interval(0,3), Interval(0,4))
P = ThickPaving( IntervalVector(2, [-10,10])+[1,1], test, 0.1, opInter, False)
P.save("Test.paving")
P2 = ThickPaving("Test.paving")
P2.visit(ToVibes("Test3"))
vibes.endDrawing()
#
# TEST_CASE( " THICK DISk"){
# IntervalVector X0(2,Interval(-5, 5));
# Interval mx(0);
# Interval my(0);
# Interval Rmin(0, 4);
# Interval Rmax(0, 6);
# ThickDisk t(mx, my, Rmin, Rmax);
# ThickPaving A(X0, UNK);
# A.Sivia(t, 0.5);
# }
if __name__ == '__main__':
unittest.main()
|
benEnsta/pyIbex
|
pyibex/tests/test_thickPaving.py
|
Python
|
lgpl-3.0
| 3,335
|
[
"VisIt"
] |
c8f1c617d9c34ee3c6c216bd9681ca18d5a984962e53dced2a1fb60e81d7f50a
|
"""
Find Black Hole apparent horizons in an axisymmetric spacetime.
===============================================================
Black holes are usually described by their *horizon* enclosing the singularity.
Locating any horizon in a general (typically numerically generated) spacetime
can be very hard - see Thornburg's review [1]_ for details. Here we restrict to
the simpler problem of a specific type of axisymmetric spacetime, where the
equation to solve reduces to a boundary value problem.
Strictly this module constructs *trapped surfaces*, which are surfaces where
null geodesics (light rays) are ingoing. The apparent horizon is the
outermost trapped surface.
Notes
-----
The technical restrictions on the spacetime are
1. axisymmetric, so the singularities are on the z axis;
2. singularities have Brill-Lindquist type;
3. spacetime is conformally flat;
4. coordinates are chosen to obey maximal slicing with no shift;
5. data is time symmetric.
References
----------
.. [1] J. Thornburg, "Event and Apparent Horizon Finders for 3+1 Numerical
Relativity", Living Reviews in Relativity 10 (3) 2007.
http://dx.doi.org/10.12942/lrr-2007-3.
"""
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import ode
from scipy.optimize import brentq, root, newton
class Spacetime:
"""
Define an axisymmetric spacetime.
For an axisymmetric, vacuum spacetime with Brill-Lindquist singularities
the only parameters that matter is the locations of the singularities
(i.e. their z-location) and their bare masses.
Parameters
----------
z_positions : list of float
The location of the singularities on the z-axis.
masses : list of float
The bare masses of the singularities.
reflection_symmetric : bool, optional
Whether the spacetime is symmetric across the x-axis.
See also
--------
TrappedSurface : class defining the trapped surfaces on a spacetime.
Examples
--------
>>> schwarzschild = Spacetime([0.0], [1.0], True)
This defines standard Schwarzschild spacetime with unit mass.
>>> binary = Spacetime([-0.75, 0.75], [1.0, 1.1])
This defines two black holes, with the locations mirrored but different
masses.
"""
def __init__(self, z_positions, masses, reflection_symmetric=False):
"""
Initialize the spacetime given the location and masses of the
singularities.
"""
self.reflection_symmetric = reflection_symmetric
if reflection_symmetric:
# Enforce reflection symmetry, in case only the positive
# z-positions were passed in.
z_plus, z_index = np.unique(np.abs(z_positions),
return_index=True)
if (z_plus[0] < np.spacing(1)): # One singularity at origin
z_symm = np.zeros((2*len(z_plus)-1, 1))
masses_symm = np.zeros_like(z_symm)
z_symm[0] = 0.0
if (len(z_plus) > 1):
z_symm[1:len(z_plus)] = z_plus[1:]
z_symm[len(z_plus):] = -z_plus[1:]
masses_symm[0] = masses[z_index[0]]
if (len(z_plus) > 1):
masses_symm[1:len(z_plus)] = masses[z_index[1:]]
masses_symm[len(z_plus):] = masses[z_index[1:]]
else: # No singularities at origin
z_symm = np.zeros((2*len(z_plus), 1))
masses_symm = np.zeros_like(z_symm)
z_symm[:len(z_plus)] = z_plus
z_symm[len(z_plus):] = -z_plus
masses_symm[:len(z_plus)] = masses[z_index]
masses_symm[len(z_plus):] = masses[z_index]
z_positions = z_symm
masses = masses_symm
self.z_positions = np.array(z_positions)
self.masses = np.array(masses)
self.N = len(z_positions)
class TrappedSurface:
r"""
Store any trapped surface, centred on a particular point.
The trapped surface is defined in polar coordinates centred on a point
on the z-axis; the z-axis is :math:`\theta` = 0 or :math:`\theta` =
:math:`\pi`.
Parameters
----------
spacetime : Spacetime
The spacetime on which the trapped surface lives.
z_centre : float
The z-coordinate about which the polar coordinate system describing
the trapped surface is defined.
See also
--------
Spacetime : class defining the spacetime.
Notes
-----
With the restricted spacetime considered here, a trapped surface
:math:`h(\theta)` satisfies a boundary value problem with the
boundary conditions :math:`h'(\theta = 0) = 0 = h'(\theta = \pi)`.
If the spacetime is reflection symmetric about the x-axis then the
boundary condition :math:`h'(\theta = \pi / 2) = 0` can be used
and the domain restricted to :math:`0 \le \theta \le \pi / 2`.
The shooting method is used here. In the reflection symmetric case
the algorithm needs a guess for the initial horizon radius,
:math:`h(\theta = 0)`, and a single condition is enforced at
:math:`\pi / 2` to match to the boundary condition there.
In the general case we guess the horizon radius at two points,
:math:`h(\theta = 0)` and :math:`h(\theta = \pi)` and continuity
of both :math:`h` *and* :math:`h'` are enforced at the matching point
:math:`\pi / 2`. The reason for this is a weak coordinate singularity
on the axis at :math:`\theta = 0, \pi` which makes it difficult to
integrate *to* these points, but possible to integrate *away* from them.
Examples
--------
>>> schwarzschild = Spacetime([0.0], [1.0], True)
>>> ts1 = TrappedSurface(schwarzschild)
>>> ts1.find_r0([0.49, 0.51])
>>> ts1.solve_given_r0()
>>> print(round(ts1.r0[0], 9))
0.5
This example first constructs the Schwarzschild spacetime which, in this
coordinate system, has the horizon with radius 0.5. The trapped surface
is set up, the location of the trapped surface at :math:`\theta = 0` is
found, which is (to the solver accuracy) at 0.5.
"""
def __init__(self, spacetime, z_centre=0.0):
"""
Initialize a horizon centred on a particular point.
"""
self.z_centre = z_centre
self.spacetime = spacetime
def expansion(self, theta, H):
"""
Compute the expansion for the given spacetime at a fixed point.
This function gives the differential equation defining the
boundary value problem.
Parameters
----------
theta : float
The angular location at this point.
H : list of float
A vector of :math:`(h, h')`.
"""
h = H[0]
dhdtheta = H[1]
z_i = self.spacetime.z_positions
m_i = self.spacetime.masses
distance_i = np.zeros_like(z_i)
z0_minus_zi = np.zeros_like(z_i)
for i in range(len(z_i)):
z0_minus_zi[i] = self.z_centre - z_i[i]
distance_i[i] = np.sqrt(h ** 2 +
2.0 * z0_minus_zi[i] * h * np.cos(theta)
+ z0_minus_zi[i] ** 2)
C2 = 1.0 / (1.0 + (dhdtheta / h) ** 2)
if (abs(theta) < 1e-16) or (abs(theta - np.pi) < 1e-16):
cot_theta_dhdtheta_C2 = 0.0
else:
cot_theta_dhdtheta_C2 = dhdtheta / (np.tan(theta) * C2)
psi = 1.0
dpsi_dr = 0.0
dpsi_dtheta = 0.0
for i in range(len(m_i)):
psi += 0.5 * m_i[i] / distance_i[i]
dpsi_dr -= 0.5 * m_i[i] * (h + z0_minus_zi[i] * np.cos(theta)) /\
distance_i[i] ** 3
dpsi_dtheta += 0.5 * m_i[i] * h * z0_minus_zi[i] * np.sin(theta) /\
distance_i[i] ** 3
dHdtheta = np.zeros_like(H)
dHdtheta[0] = dhdtheta
dHdtheta[1] = 2.0 * h - cot_theta_dhdtheta_C2 + \
4.0 * h ** 2 / (psi * C2) * \
(dpsi_dr - dpsi_dtheta * dhdtheta / h ** 2) + \
3.0 * dhdtheta ** 2 / h
return dHdtheta
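# For reference, the ODE assembled above (reconstructed directly from the code,
# not quoted from a reference) is, with C^2 = 1 / (1 + (h'/h)^2):
#
#   h'' = 2 h - h' cot(theta) / C^2
#         + (4 h^2 / (psi C^2)) * (dpsi/dr - (h'/h^2) dpsi/dtheta)
#         + 3 h'^2 / h
#
# where psi = 1 + sum_i m_i / (2 d_i) is the Brill-Lindquist conformal factor
# and d_i is the distance from the surface point to singularity i; the boundary
# conditions h'(0) = h'(pi) = 0 close the problem.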
# Define the shooting function if using matching (0 <= theta <= pi)
def shooting_function_full(self, r0):
r"""
The function used in the shooting algorithm.
This is the full algorithm from integrating over
:math:`0 \le \theta \le \pi`. The difference between the
solution and its derivative at the matching point is the
error to be minimized.
Parameters
----------
r0 : list of float
Initial guess for the horizon radius, as outlined above.
Returns
-------
list of float
The error at the matching point.
"""
# First half of the horizon
H0 = np.array([r0[0], 0.0])
solver1 = ode(self.expansion)
solver1.set_integrator("dopri5", atol=1.e-8, rtol=1.e-6)
solver1.set_initial_value(H0, 0.0)
solver1.integrate(np.pi / 2.0)
# Second half of the horizon
H0 = np.array([r0[1], 0.0])
solver2 = ode(self.expansion)
solver2.set_integrator("dopri5", atol=1.e-8, rtol=1.e-6)
solver2.set_initial_value(H0, np.pi)
solver2.integrate(np.pi / 2.0)
return solver1.y - solver2.y
# Define the shooting function if symmetric (0 <= theta <= pi/2)
def shooting_function(self, r0):
r"""
The function used in the shooting algorithm.
This is the symmetric algorithm from integrating over
:math:`0 \le \theta \le \pi / 2`. The difference between the
derivative at the end point and the boundary condition is the
error to be minimized.
Parameters
----------
r0 : float
Initial guess for the horizon radius, as outlined above.
Returns
-------
float
The error at the end point.
"""
H0 = np.array([r0, 0.0])
solver1 = ode(self.expansion)
solver1.set_integrator("dopri5", atol=1.e-8, rtol=1.e-6)
solver1.set_initial_value(H0, 0.0)
solver1.integrate(np.pi / 2.0)
return solver1.y[1]
def find_r0(self, input_guess, full_horizon=False):
r"""
Given some initial guess, find the correct starting location
for the trapped surface using shooting.
This finds the horizon radius at :math:`\theta = 0` which,
together with the differential equation, specifies the trapped
surface location.
Parameters
----------
input_guess : list of float
Two positive reals defining the guess for the initial radius.
Note that the meaning is different depending on whether this
is a "full" horizon or not. For a full horizon the numbers
correspond to the guesses at :math:`\theta = 0, \pi`
respectively. In the symmetric case where only one guess is
needed the vector defines the interval within which a *unique*
root must lie.
full_horizon : bool, optional
If the general algorithm is needed (ie, the domain should be
:math:`0 \le \theta \le \pi` instead of
:math:`0 \le \theta \le \pi / 2`).
This parameter is independent of the symmetry of the spacetime.
If the spacetime is not symmetric this parameter will be
ignored and the general algorithm always used. If the spacetime
is symmetric it may still be necessary to use the general
algorithm: for example, for two singularities it is possible to
find a trapped surface surrounding just one singularity.
"""
# Now find the horizon given the input guess
self.r0 = []
if (full_horizon or
not self.spacetime.reflection_symmetric or
abs(self.z_centre) > 1.e-15):
sol = root(self.shooting_function_full, input_guess, tol=1.e-12)
self.r0 = sol.x
else:
# sol = brentq(self.shooting_function, input_guess[0],
# input_guess[1])
sol = newton(self.shooting_function, input_guess[1])
self.r0 = [sol]
def solve_given_r0(self, full_horizon=False):
r"""
Given the correct value for the initial radius, find the horizon.
This function does not find the correct radius for the trapped
surface, but solves (in polar coordinates) for the complete
surface location given the correct initial guess.
Parameters
----------
full_horizon : bool, optional
If the general algorithm is needed (ie, the domain should be
:math:`0 \le \theta \le \pi` instead of
:math:`0 \le \theta \le \pi / 2`).
This parameter is independent of the symmetry of the spacetime.
If the spacetime is not symmetric this parameter will be
ignored and the general algorithm always used. If the spacetime
is symmetric it may still be necessary to use the general
algorithm: for example, for two singularities it is possible to
find a trapped surface surrounding just one singularity.
See also
--------
find_r0 : finds the correct initial radius.
"""
dtheta = np.pi / 100.0
if (full_horizon or not self.spacetime.reflection_symmetric):
# The solution needs computing for 0 <= theta <= pi
# First half of the horizon
theta1 = []
H1 = []
H0 = np.array([self.r0[0], 0.0])
solver1 = ode(self.expansion)
solver1.set_integrator("dopri5", atol=1.e-8, rtol=1.e-6)
solver1.set_initial_value(H0, 0.0)
theta1.append(0.0)
H1.append(H0)
while solver1.successful() and solver1.t < np.pi / 2.0:
solver1.integrate(solver1.t + dtheta)
H1.append(solver1.y)
theta1.append(solver1.t)
# Second half of the horizon
theta2 = []
H2 = []
H0 = np.array([self.r0[1], 0.0])
solver2 = ode(self.expansion)
solver2.set_integrator("dopri5", atol=1.e-8, rtol=1.e-6)
solver2.set_initial_value(H0, np.pi)
theta2.append(np.pi)
H2.append(H0)
while solver2.successful() and solver2.t >= np.pi / 2.0 + 1e-12:
solver2.integrate(solver2.t - dtheta)
H2.append(solver2.y)
theta2.append(solver2.t)
H = np.vstack((np.array(H1), np.flipud(np.array(H2))))
theta = np.hstack((np.array(theta1),
np.flipud(np.array(theta2))))
else: # The solution needs computing for 0 <= theta <= pi / 2
theta1 = []
H1 = []
H0 = np.array([self.r0[0], 0.0])
solver1 = ode(self.expansion)
solver1.set_integrator("dopri5", atol=1.e-8, rtol=1.e-6)
solver1.set_initial_value(H0, 0.0)
theta1.append(0.0)
H1.append(H0)
while solver1.successful() and solver1.t < np.pi / 2.0:
solver1.integrate(solver1.t + dtheta)
H1.append(solver1.y)
theta1.append(solver1.t)
H = np.vstack((np.array(H1), np.flipud(H1)))
theta = np.hstack((theta1,
np.flipud(np.pi - np.array(theta1))))
# We now have the solution for 0 <= theta <= pi;
# fill the remaining angles
self.H = np.vstack((H, np.flipud(H)))
self.theta = np.hstack((theta, theta + np.pi))
return None
def convert_to_cartesian(self):
"""
When the solution is known in r, theta coordinates, compute
the locations in cartesian coordinates (2 and 3d).
This function assumes that the trapped surface has been located and
solved for.
See also
--------
solve_given_r0 : find the trapped surface location in polar
coordinates.
"""
self.x = self.H[:, 0] * np.sin(self.theta)
self.z = self.z_centre + self.H[:, 0] * np.cos(self.theta)
phi = np.linspace(0.0, 2.0 * np.pi, 20)
self.X = np.zeros((len(self.theta), len(phi)))
self.Y = np.zeros_like(self.X)
self.Z = np.zeros_like(self.X)
for t in range(len(self.theta)):
for p in range(len(phi)):
self.X[t, p] = self.H[t, 0] * np.sin(self.theta[t]) * \
np.cos(phi[p])
self.Y[t, p] = self.H[t, 0] * np.sin(self.theta[t]) * \
np.sin(phi[p])
self.Z[t, p] = self.z_centre + \
self.H[t, 0] * np.cos(self.theta[t])
self.R = np.sqrt(self.X ** 2 + self.Y ** 2 + self.Z ** 2)
return None
def plot_2d(self, ax):
"""
Given a matplotlib axis, plot the trapped surface.
Plots the surface in the x-z plane, together with the location of
        the singularities: the marker edge width is used to indicate the mass of
the singularity (will fail badly for masses significantly larger
than 1).
Parameters
----------
ax : axis object
Matplotlib axis on which to do the plot.
"""
ax.plot(self.x, self.z, 'b-')
for z, m in zip(self.spacetime.z_positions, self.spacetime.masses):
ax.plot(0.0, z,
'kx', markersize=12, markeredgewidth=1 + int(round(m)))
ax.set_xlabel("$x$")
ax.set_ylabel("$z$")
ax.axis('equal')
def find_horizon_binary_symmetric(z=0.5, mass=1.0):
r"""
Utility function to find horizons for reflection symmetric case.
This returns the horizon for a spacetime with precisely two singularities
of identical mass located at :math:`\pm z`.
Notes
-----
The initial guess for the horizon location is based on fitting a cubic
to the results constructed for :math:`0 \le z \le 0.75` for the unit
mass case. The radius should scale with the mass. For larger separations
we should not expect a common horizon.
Parameters
----------
z : float, optional
The distance from the origin of the singularities (ie the two
singularities are located at [-z, +z]).
mass : float, optional
The mass of the singularities.
Returns
-------
ts : TrappedSurface
Only returns the single surface found, expected to be the common
horizon.
"""
st = Spacetime([-z, z], [mass, mass], True)
ts = TrappedSurface(st, 0.0)
# An empirical formula for the required initial guess
# (ie the value of r0, or h, at theta = 0)
r0_empirical = mass * (1.0 - 0.0383 * z + 0.945 * z ** 2 - 0.522 * z ** 3)
# This empirical formula works for the inner horizon if
# 0.65 < z < 0.72 or so. There is an inner horizon findable
# down to about 0.47, but the initial guess is very sensitive
# r0_empirical = mass * (0.204 - 1.6422*z - 0.771*z**2 + 0.5*z**3)
initial_guess = [0.99 * r0_empirical, 1.01 * r0_empirical]
try:
ts.find_r0(initial_guess)
except ValueError:
r0 = np.linspace(0.95 * r0_empirical, 1.05 * r0_empirical)
phi = np.zeros_like(r0)
for i in range(len(r0)):
phi[i] = ts.shooting_function(r0[i])
initial_guess = [r0[np.argmin(phi)], r0[-1]]
ts.find_r0(initial_guess)
ts.solve_given_r0()
ts.convert_to_cartesian()
return ts
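# A minimal usage sketch (a new helper, not part of the original module): the
# values below are chosen inside the fitted range 0 <= z <= 0.75 where the
# empirical initial guess above is expected to work.
def _example_common_horizon():
    ts = find_horizon_binary_symmetric(z=0.4, mass=1.0)
    # After the call the surface is available in polar form (ts.theta, ts.H)
    # and in cartesian form (ts.x, ts.z in 2d; ts.X, ts.Y, ts.Z in 3d).
    return ts.r0, ts.x, ts.z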
def find_inner_outer_horizon_binary_symmetric(z=0.5, mass=1.0):
r"""
Utility function to find horizons for reflection symmetric case.
    This returns two trapped surfaces for a spacetime with precisely
two singularities of identical mass located at :math:`\pm z`. The outer
surface is the apparent horizon; the inner surface is just a trapped
surface.
Notes
-----
The initial guess for the horizon location is based on fitting a cubic
to the results constructed for :math:`0 \le z \le 0.75` for the unit
mass case. The radius should scale with the mass. For larger separations
we should not expect a common horizon. The inner horizon is based on
a similar fit but in the narrower range :math:`0.6 \le z \le 0.7` and
so it is very likely that this function will fail for :math:`z < 0.42`.
Parameters
----------
z : float, optional
The distance from the origin of the singularities (ie the two
singularities are located at [-z, +z]).
mass : float, optional
The mass of the singularities.
Returns
-------
ts1, ts2 : TrappedSurface
Returns the trapped surfaces found.
"""
st = Spacetime([-z, z], [mass, mass], True)
ts1 = TrappedSurface(st, 0.0)
ts2 = TrappedSurface(st, 0.0)
# An empirical formula for the required initial guess
# (ie the value of r0, or h, at theta = 0)
r0_empirical = mass * (1.0 - 0.0383 * z + 0.945 * z ** 2 - 0.522 * z ** 3)
initial_guess = [0.99 * r0_empirical, 1.01 * r0_empirical]
try:
ts1.find_r0(initial_guess)
except ValueError:
r0 = np.linspace(0.95 * r0_empirical, 1.05 * r0_empirical)
phi = np.zeros_like(r0)
for i in range(len(r0)):
phi[i] = ts1.shooting_function(r0[i])
initial_guess = [r0[np.argmin(phi)], r0[-1]]
ts1.find_r0(initial_guess)
ts1.solve_given_r0()
ts1.convert_to_cartesian()
# This empirical formula works for the inner horizon if
# 0.42 < z < 0.765 or so. It looks likely that the inner horizon
# persists below 0.42, but eventually it will fail.
# r0_empirical = mass * (-0.357+4.39*z-5.263*z**2+2.953*z**3)
r0_empirical = mass * \
(1.866 - 10.213 * z + 30.744 * z **
2 - 36.513 * z ** 3 + 16.21 * z ** 4)
initial_guess = [0.99 * r0_empirical, 1.01 * r0_empirical]
try:
ts2.find_r0(initial_guess)
except ValueError:
r0 = np.linspace(0.95 * r0_empirical, 1.05 * r0_empirical)
phi = np.zeros_like(r0)
for i in range(len(r0)):
phi[i] = ts2.shooting_function(r0[i])
initial_guess = [r0[np.argmin(phi)], r0[-1]]
ts2.find_r0(initial_guess)
ts2.solve_given_r0()
ts2.convert_to_cartesian()
return ts1, ts2
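# A minimal sketch (a new helper, not part of the original module): find the
# common (outer) horizon and the inner trapped surface for a separation inside
# the quoted validity range, then hand both to the 3d plotting helper defined
# further down in this module (requires mayavi).
def _example_inner_outer_3d(z=0.65, mass=1.0):
    ts_outer, ts_inner = find_inner_outer_horizon_binary_symmetric(z, mass)
    plot_horizon_3d([ts_outer, ts_inner])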
def find_individual_horizon_binary_symmetric(z=0.5, mass=1.0):
r"""
Utility function to find horizons for reflection symmetric case.
    This returns two trapped surfaces for a spacetime with precisely
two singularities of identical mass located at :math:`\pm z`. These
should be trapped surfaces about only one singularity.
Notes
-----
The initial guess for the horizon location is based on fitting a cubic
to the results constructed for :math:`0.45 \le z \le 0.75` for the unit
mass case. The radius should scale with the mass. For smaller separations
we should not expect individual horizons.
Parameters
----------
z : float, optional
The distance from the origin of the singularities (ie the two
singularities are located at [-z, +z]).
mass : float, optional
The mass of the singularities.
Returns
-------
ts1, ts2 : TrappedSurface
Returns the trapped surfaces found.
"""
st = Spacetime([-z, z], [mass, mass], True)
ts1 = TrappedSurface(st, -z)
ts2 = TrappedSurface(st, z)
# An empirical formula for the required initial guess
# (ie the value of r0, or h, at theta = 0)
r0_close = mass * \
(0.002 + 1.027 * z - 1.235 * z ** 2 + 0.816 * z ** 3 - 0.228 * z ** 4)
r0_far = mass * \
(0.215 + 0.557 * z - 0.727 * z ** 2 + 0.531 * z ** 3 - 0.163 * z ** 4)
initial_guess = [r0_close, r0_far]
ts1.find_r0(initial_guess, True)
ts1.solve_given_r0(True)
ts1.convert_to_cartesian()
initial_guess = [r0_far, r0_close]
ts2.find_r0(initial_guess, True)
ts2.solve_given_r0(True)
ts2.convert_to_cartesian()
return ts1, ts2
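# A minimal sketch (a new helper, not part of the original module): locate the
# two individual horizons and draw both on a single matplotlib axis, reusing
# TrappedSurface.plot_2d; z is chosen large enough that individual horizons
# are expected to exist.
def _example_individual_horizons(z=0.6, mass=1.0):
    ts1, ts2 = find_individual_horizon_binary_symmetric(z, mass)
    fig = plt.figure()
    ax = fig.add_subplot(111)
    ts1.plot_2d(ax)
    ts2.plot_2d(ax)
    return fig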
def find_horizon_binary(z=0.5, mass1=1.0, mass2=1.0):
r"""
Utility function to find horizons for the general case.
This returns the horizon for a spacetime with precisely two singularities
    of masses [mass1, mass2] located at :math:`\pm z`. That is, we work in the
    frame where the locations of the singularities are symmetric.
Notes
-----
The initial guess for the horizon location is based on fitting a cubic
to the results constructed for :math:`0 \le z \le 0.75` for the unit
mass case. The radius should scale with the mass. For larger separations
we should not expect a common horizon.
Parameters
----------
z : float, optional
The distance from the origin of the singularities (ie the two
singularities are located at [-z, +z]).
    mass1, mass2 : float, optional
        The masses of the two singularities.
Returns
-------
ts : TrappedSurface
Only returns the single surface found, expected to be the common
horizon.
"""
st = Spacetime([-z, z], [mass1, mass2])
ts = TrappedSurface(st, 0.0)
# An empirical formula for the required initial guess
# (ie the value of r0, or h, at theta = 0)
# This really is just a guess based on the symmetric case.
zom = 2.0 * z / (mass1 + mass2)
r0_empirical = (1.0 - 0.0383 * zom + 0.945 * zom ** 2 -
0.522 * zom ** 3) * \
(mass1 + mass2) / 2.0
r0_empirical = max(r0_empirical, z + 0.5 * max(mass1, mass2))
initial_guess = [r0_empirical, r0_empirical]
ts.find_r0(initial_guess, True)
ts.solve_given_r0()
ts.convert_to_cartesian()
return ts
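# A minimal sketch (a new helper, not part of the original module) for the
# unequal-mass case; as noted above the initial guess is cruder here, so
# convergence is not guaranteed for all separations and mass ratios.
def _example_unequal_mass_horizon(z=0.4, mass1=1.0, mass2=0.8):
    ts = find_horizon_binary(z, mass1, mass2)
    return ts.r0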
def plot_horizon_3d(tss):
"""
Plot a list of horizons.
Parameters
----------
tss : list of TrappedSurface
All the trapped surfaces to visualize.
"""
from mayavi import mlab
cmaps = ['bone', 'jet', 'hot', 'cool', 'spring', 'summer', 'winter']
assert len(cmaps) > len(tss)
extents = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
for ts, cm in zip(tss, cmaps):
mlab.mesh(ts.X, ts.Y, ts.Z, colormap=cm, opacity=0.4)
extents[0] = min(extents[0], np.min(ts.X))
extents[1] = max(extents[1], np.max(ts.X))
extents[2] = min(extents[2], np.min(ts.Y))
extents[3] = max(extents[3], np.max(ts.Y))
extents[4] = min(extents[4], np.min(ts.Z))
extents[5] = max(extents[5], np.max(ts.Z))
mlab.axes(extent=extents)
mlab.outline(extent=extents)
mlab.show()
def solve_plot_symmetric(z=0.5, mass=1.0):
r"""
Utility function to find horizons for reflection symmetric case.
    This finds and plots the horizon for a spacetime with precisely two
    singularities of identical mass located at :math:`\pm z`, returning the
    matplotlib figure.
Notes
-----
The initial guess for the horizon location is based on fitting a cubic
to the results constructed for :math:`0 \le z \le 0.75` for the unit
mass case. The radius should scale with the mass. For larger separations
we should not expect a common horizon.
Parameters
----------
z : float, optional
The distance from the origin of the singularities (ie the two
singularities are located at [-z, +z]).
mass : float, optional
The mass of the singularities.
"""
ts = find_horizon_binary_symmetric(z, mass)
fig = plt.figure()
ax = fig.add_subplot(111)
ts.plot_2d(ax)
plt.show()
return fig
def solve_plot_symmetric_3d(z=0.5, mass=1.0):
r"""
Utility function to plot horizon in 3d for reflection symmetric case.
    This finds and plots (using mayavi) the horizon for a spacetime with
    precisely two singularities of identical mass located at :math:`\pm z`.
Notes
-----
The initial guess for the horizon location is based on fitting a cubic
to the results constructed for :math:`0 \le z \le 0.75` for the unit
mass case. The radius should scale with the mass. For larger separations
we should not expect a common horizon.
Parameters
----------
z : float, optional
The distance from the origin of the singularities (ie the two
singularities are located at [-z, +z]).
mass : float, optional
The mass of the singularities.
"""
ts = find_horizon_binary_symmetric(z, mass)
plot_horizon_3d([ts])
def solve_plot_binary(z=0.5, mass1=1.0, mass2=1.0):
r"""
Utility function to find horizons for general case.
    This finds and plots the horizon for a spacetime with precisely two
    singularities of different mass located at :math:`\pm z`, returning the
    matplotlib figure.
Notes
-----
    The initial guess is not easily determined, so performance is poor and
    the algorithm is not robust.
Parameters
----------
z : float, optional
The distance from the origin of the singularities (ie the two
singularities are located at [-z, +z]).
mass1, mass2 : float, optional
The mass of the singularities.
"""
ts = find_horizon_binary(z, mass1, mass2)
fig = plt.figure()
ax = fig.add_subplot(111)
ts.plot_2d(ax)
plt.show()
return fig
def solve_plot_binary_3d(z=0.5, mass1=1.0, mass2=1.0):
r"""
Utility function to plot horizons in 3d for general case.
    This finds and plots (using mayavi) the horizon for a spacetime with
    precisely two singularities of different mass located at :math:`\pm z`.
Notes
-----
    The initial guess is not easily determined, so performance is poor and
    the algorithm is not robust.
Parameters
----------
z : float, optional
The distance from the origin of the singularities (ie the two
singularities are located at [-z, +z]).
mass1, mass2 : float, optional
The mass of the singularities.
"""
ts = find_horizon_binary(z, mass1, mass2)
plot_horizon_3d([ts])
if __name__ == "__main__":
# st = Spacetime([-0.5, 0.5], [1.0, 1.0])
# ts = TrappedSurface(st)
# ts.find_r0([1.0, 1.0])
# ts.solve_given_r0()
import doctest
doctest.testmod()
|
IanHawke/findhorizon
|
findhorizon/findhorizon.py
|
Python
|
unlicense
| 30,220
|
[
"Mayavi"
] |
3769a1031f6daa3f62c01015ec738ddd5333bce44d88e5ed03fb30762954093a
|
# Copyright (c) 2000 Autonomous Zone Industries
# This file is licensed under the
# GNU Lesser General Public License v2.1.
# See the file COPYING or visit http://www.gnu.org/ for details.
__revision__ = "$Id: UnreliableHandicapper.py,v 1.9 2003/02/09 17:52:13 zooko Exp $"
# pyutil modules
from pyutil.humanreadable import hr
# egtp modules
from egtp import idlib
# The most reliable broker is still handicapped this much.
TUNING_FACTOR=float(2**8)
# Extra boost for publication: we want to publish to more reliable servers!!
PUB_TUNING_FACTOR=float(8)
# The least reliable broker is handicapped as much as the furthest-away broker.
MIN_RELIABILITY=TUNING_FACTOR / idlib.Largest_Distance_NativeId_Int_Space
class UnreliableHandicapper:
def __init__(self, counterparties, our_id):
self.counterparties = counterparties
self.our_id = our_id
def __call__(self, counterparty_id, metainfo, message_type, message_body, TUNING_FACTOR=TUNING_FACTOR):
"""
for all msgtypes
"""
if idlib.equal(counterparty_id, self.our_id):
return 0.0 # no handicap for us, we have high self esteem
else:
cpty = self.counterparties.get_counterparty_object(counterparty_id)
reliability = cpty.get_custom_stat("reliability", 1.0)
if reliability < MIN_RELIABILITY:
reliability = MIN_RELIABILITY
cpty.set_reliability(reliability)
if message_type in ('pub block', 'put blob'): # "put block" is the way it will be spelled in the future, "put blob" is the way it was spelled in the past
return (TUNING_FACTOR * PUB_TUNING_FACTOR) / reliability
else:
return TUNING_FACTOR / reliability
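# A minimal, hypothetical usage sketch: the two classes below stand in for the
# real egtp counterparty objects and only stub the methods used above
# (get_counterparty_object, get_custom_stat, set_reliability). It also assumes
# idlib.equal behaves as an ordinary equality test on the ids passed in.
class _FakeCounterparty:
    def __init__(self, reliability):
        self.reliability = reliability
    def get_custom_stat(self, name, default):
        return self.reliability
    def set_reliability(self, reliability):
        self.reliability = reliability
class _FakeCounterparties:
    def __init__(self, reliability):
        self._cp = _FakeCounterparty(reliability)
    def get_counterparty_object(self, counterparty_id):
        return self._cp
def _example_handicaps(reliability=0.5):
    handicapper = UnreliableHandicapper(_FakeCounterparties(reliability), our_id="us")
    # Publication traffic is penalised PUB_TUNING_FACTOR times more heavily.
    pub_handicap = handicapper("them", None, 'put blob', '')
    other_handicap = handicapper("them", None, 'lookup', '')
    return pub_handicap, other_handicap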
|
zooko/egtp_new
|
egtp/UnreliableHandicapper.py
|
Python
|
lgpl-2.1
| 1,753
|
[
"VisIt"
] |
8d8765970e71335ab0be2e6c58858a7c6e462fec55cd1bae7720c23a2e1626e2
|
# A replacement for the wrapper for the CCP4 program MTZDUMP using CCTBX
# to access the file directly.
from __future__ import annotations
import copy
import os
from iotbx import mtz
class Mtzdump:
"""A class to give the same functionality as the wrapper for the CCP4
MTZDUMP program."""
def __init__(self):
self._header = {"datasets": [], "dataset_info": {}}
self._batch_header = {}
self._batches = None
self._reflections = 0
self._resolution_range = (0, 0)
def set_working_directory(self, wd):
pass
def get_working_directory(self):
return None
def set_hklin(self, hklin):
self._hklin = hklin
def dump(self):
"""Actually obtain the contents of the mtz file header."""
assert self._hklin, self._hklin
assert os.path.exists(self._hklin), self._hklin
mtz_obj = mtz.object(self._hklin)
        # work through the file accumulating the necessary information
self._header["datasets"] = []
self._header["dataset_info"] = {}
self._batches = [batch.num() for batch in mtz_obj.batches()]
self._header["column_labels"] = [column.label() for column in mtz_obj.columns()]
self._header["column_types"] = [column.type() for column in mtz_obj.columns()]
self._resolution_range = mtz_obj.max_min_resolution()
self._header["spacegroup"] = mtz_obj.space_group_name()
self._reflections = mtz_obj.n_reflections()
for crystal in mtz_obj.crystals():
if crystal.name() == "HKL_base":
continue
pname = crystal.project_name()
xname = crystal.name()
cell = crystal.unit_cell().parameters()
for dataset in crystal.datasets():
dname = dataset.name()
wavelength = dataset.wavelength()
dataset_id = f"{pname}/{xname}/{dname}"
dataset_number = dataset.i_dataset()
assert dataset_id not in self._header["datasets"]
self._header["datasets"].append(dataset_id)
self._header["dataset_info"][dataset_id] = {}
self._header["dataset_info"][dataset_id]["wavelength"] = wavelength
self._header["dataset_info"][dataset_id]["cell"] = cell
self._header["dataset_info"][dataset_id]["id"] = dataset_number
def get_columns(self):
"""Get a list of the columns and their types as tuples
(label, type) in a list."""
return [
(cl, self._header["column_types"][i])
for i, cl in enumerate(self._header["column_labels"])
]
def get_resolution_range(self):
return self._resolution_range
def get_datasets(self):
"""Return a list of available datasets."""
return self._header["datasets"]
def get_dataset_info(self, dataset):
"""Get the cell, spacegroup & wavelength associated with
a dataset. The dataset is specified by pname/xname/dname."""
result = copy.deepcopy(self._header["dataset_info"][dataset])
result["spacegroup"] = self._header["spacegroup"]
return result
def get_spacegroup(self):
"""Get the spacegroup recorded for this reflection file."""
return self._header["spacegroup"]
def get_batches(self):
"""Get a list of batches found in this reflection file."""
return self._batches
def get_reflections(self):
"""Return the number of reflections found in the reflection
file."""
return self._reflections
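# A minimal usage sketch (a new helper; the file name is a placeholder, not a
# real path): point the object at an MTZ file, read the header once with
# dump(), then query the cached values exactly as with the old MTZDUMP wrapper.
def _example_mtzdump(hklin="example.mtz"):
    md = Mtzdump()
    md.set_hklin(hklin)
    md.dump()
    print(md.get_spacegroup())
    print(md.get_resolution_range())
    for dataset in md.get_datasets():
        print(dataset, md.get_dataset_info(dataset))
    return md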
|
xia2/xia2
|
src/xia2/Modules/Mtzdump.py
|
Python
|
bsd-3-clause
| 3,626
|
[
"CRYSTAL"
] |
904cad10cb6f1ac06780fb54ac721f140f346b7026f07838fa8426b2abb5b574
|
# Copyright (c) 2013-2015 Unidata.
# Distributed under the terms of the MIT License.
# SPDX-License-Identifier: MIT
from __future__ import print_function
import logging
log = logging.getLogger("siphon.metadata")
log.setLevel(logging.WARNING)
class _SimpleTypes(object):
def __init__(self):
self._valid = {}
self._valid["dataFormat"] = self._load_valid_data_format_types()
self._valid["upOrDown"] = self._load_valid_up_or_down()
self._valid["dataType"] = self._load_valid_data_types()
def _load_valid_data_types(self):
valid = ["grid",
"image",
"point",
"radial",
"station",
"swath",
"trajectory"]
return valid
def _load_valid_data_format_types(self):
import mimetypes
valid = ["BUFR",
"ESML",
"GEMPAK",
"GINI",
"GRIB-1",
"GRIB-2",
"HDF4",
"HDF5",
"McIDAS-AREA",
"NcML",
"NetCDF",
"NetCDF-4",
"NEXRAD2",
"NIDS",
"image/gif",
"image/jpeg",
"image/tiff",
"text/csv",
"text/html",
"text/plain",
"text/tab-separated-values",
"text/xml",
"video/mpeg",
"video/quicktime",
"video/realtime"]
valid_mime_types = mimetypes.types_map.values()
valid.extend(valid_mime_types)
return valid
def _load_valid_up_or_down(self):
valid = ["up", "down"]
return valid
def handle_upOrDown(self, element): # noqa
# name="upOrDown"
# <xsd:restriction base="xsd:token">
# <xsd:enumeration value="up"/>
# <xsd:enumeration value="down"/>
# </xsd:restriction>
#
type_name = "upOrDown"
valid = self._valid[type_name]
for attrib in element.attrib:
attr = attrib
val = element.attrib[attr]
if val not in valid:
log.warning("Value %s not valid for type %s: must be %s",
val, type_name, valid)
return {attr: val}
def handle_dataFormat(self, element): # noqa
# name="dataFormatTypes"
# <xsd:union memberTypes="xsd:token mimeType">
# <xsd:simpleType>
# <xsd:restriction base="xsd:token">
# <xsd:enumeration value="BUFR"/>
# <xsd:enumeration value="ESML"/>
# <xsd:enumeration value="GEMPAK"/>
# <xsd:enumeration value="GINI"/>
# <xsd:enumeration value="GRIB-1"/>
# <xsd:enumeration value="GRIB-2"/>
# <xsd:enumeration value="HDF4"/>
# <xsd:enumeration value="HDF5"/>
# <xsd:enumeration value="McIDAS-AREA"/>
# <xsd:enumeration value="NcML"/>
# <xsd:enumeration value="NetCDF"/>
# <xsd:enumeration value="NetCDF-4"/>
# <xsd:enumeration value="NEXRAD2"/>
# <xsd:enumeration value="NIDS"/>
#
# <xsd:enumeration value="image/gif"/>
# <xsd:enumeration value="image/jpeg"/>
# <xsd:enumeration value="image/tiff"/>
# <xsd:enumeration value="text/csv"/>
# <xsd:enumeration value="text/html"/>
# <xsd:enumeration value="text/plain"/>
# <xsd:enumeration value="text/tab-separated-values"/>
# <xsd:enumeration value="text/xml"/>
# <xsd:enumeration value="video/mpeg"/>
# <xsd:enumeration value="video/quicktime"/>
# <xsd:enumeration value="video/realtime"/>
# </xsd:restriction>
# </xsd:simpleType>
# </xsd:union>
#
# name="mimeType"
# <xsd:restriction base="xsd:token">
# <xsd:annotation>
# <xsd:documentation>any valid mime type
# (see http://www.iana.org/assignments/media-types/)
# </xsd:documentation>
# </xsd:annotation>
# </xsd:restriction>
        # NOTE: to see if a mimetype is valid, check against
        # mimetypes.types_map.values()
#
type_name = "dataFormat"
valid = self._valid[type_name]
val = element.text
if val not in valid:
log.warning("Value %s not valid for type %s: must be %s",
val, type_name, valid)
return {type_name: val}
def handle_dataType(self, element): # noqa
# name="dataTypes"
# <xsd:union memberTypes="xsd:token">
# <xsd:simpleType>
# <xsd:restriction base="xsd:token">
# <xsd:enumeration value="Grid"/>
# <xsd:enumeration value="Image"/>
# <xsd:enumeration value="Point"/>
# <xsd:enumeration value="Radial"/>
# <xsd:enumeration value="Station"/>
# <xsd:enumeration value="Swath"/>
# <xsd:enumeration value="Trajectory"/>
# </xsd:restriction>
# </xsd:simpleType>
# </xsd:union>
type_name = "dataType"
valid = self._valid[type_name]
# case insensitive
val = element.text
if val.lower() not in valid:
log.warning("Value %s not valid for type %s: must be %s",
val, type_name, valid)
return {type_name: val}
class _ComplexTypes(object):
def _get_tag_name(self, element):
if "}" in element.tag:
element_name = element.tag.split('}')[-1]
else:
element_name = element.tag
return element_name
def _spatial_range_req_children(self):
req = ["start",
"size"]
return req
def _spatial_range_opt_children(self):
opt = ["resolution",
"units"]
return opt
def _date_type_formatted_valid_attrs(self):
return ["format", "type"]
def _controlled_vocatulary_opt_attrs(self):
return ["vocabulary"]
def _variable_opt_attrs(self):
return ["vocabulary_name", "units"]
def _variable_req_attrs(self):
return ["name"]
def _variables_opt_attrs(self):
return ["vocabulary"]
def _data_size_req_attrs(self):
return ["units"]
#
# complex types:
# ==============
def handle_spatialRange(self, element): # noqa
# name="spatialRange">
# <xsd:sequence>
# <xsd:element name="start" type="xsd:double" />
# <xsd:element name="size" type="xsd:double" />
# <xsd:element name="resolution" type="xsd:double" minOccurs="0" />
# <xsd:element name="units" type="xsd:string" minOccurs="0" />
# </xsd:sequence>
type_name = "spatialRange"
req_children = self._spatial_range_req_children()
opt_children = self._spatial_range_opt_children()
valid = req_children + opt_children
spatial_range = {}
for child in element:
child_name = child.tag
if child_name in valid:
if child_name != "units":
spatial_range[child.tag] = float(child.text)
else:
spatial_range[child.tag] = child.text
else:
# child not valid
log.warning("%s is not valid for type %s",
child_name, type_name)
return spatial_range
def handle_controlledVocabulary(self, element): # noqa
#
# type="controlledVocabulary"
# <xsd:simpleContent>
# <xsd:extension base="xsd:string">
# <xsd:attribute name="vocabulary" type="xsd:string" />
# </xsd:extension>
# </xsd:simpleContent>
#
type_name = "controlledVocabulary"
opt_attrs = self._controlled_vocatulary_opt_attrs()
val = {}
for attr in element.attrib:
if attr not in opt_attrs:
log.warning("%s not a valid attribute for %s", type_name,
attr)
else:
val[attr] = element.attrib[attr]
name = element.text
tmp = {"name": name}
if val:
tmp.update(val)
return tmp
def handle_dateTypeFormatted(self, element): # noqa
# name="dateTypeFormatted"
# <xsd:simpleContent>
# <xsd:extension base="dateType">
# <xsd:attribute name="format" type="xsd:string" /> // from
# java.text.SimpleDateFormat
# <xsd:attribute name="type" type="dateEnumTypes" />
# </xsd:extension>
#
type_name = "dateTypeFormatted"
valid_attrs = self._date_type_formatted_valid_attrs()
val = {}
for attr in element.attrib:
if attr not in valid_attrs:
log.warning("%s is not a valid attribute for %s", attr,
type_name)
else:
val[attr] = element.attrib[attr]
val["value"] = element.text
return val
def handle_sourceType(self, element): # noqa
# name="sourceType"
# <xsd:sequence>
# <xsd:element name="name" type="controlledVocabulary"/>
# <xsd:element name="contact">
# <xsd:complexType>
# <xsd:attribute name="email" type="xsd:string"
# use="required"/>
# <xsd:attribute name="url" type="xsd:anyURI"/>
# </xsd:complexType>
# </xsd:element>
# </xsd:sequence>
parsed = {}
for child in element:
value = {}
if child.tag == "name":
value = self.handle_controlledVocabulary(child)
elif child.tag == "contact":
if "url" in child.attrib:
value["url"] = child.attrib["url"]
if "email" in child.attrib:
value["email"] = child.attrib["email"]
else:
log.warning("'contact' must have an attribute: "
"'email'")
value["email"] = "missing"
if value:
parsed.update(value)
return parsed
def handle_timeCoverageType(self, element): # noqa
# name="timeCoverageType">
# <xsd:sequence>
# <xsd:choice minOccurs="2" maxOccurs="3" >
# <xsd:element name="start" type="dateTypeFormatted"/>
# <xsd:element name="end" type="dateTypeFormatted"/>
# <xsd:element name="duration" type="duration"/>
# </xsd:choice>
# <xsd:element name="resolution" type="duration" minOccurs="0"/>
# </xsd:sequence>
parsed = {}
tags = []
for child in element:
tags.append(child.tag)
        # Use a chained comparison: '&' binds more tightly than '>='/'<=',
        # so the original expression did not test the element count correctly.
        valid_num_elements = 2 <= len(tags) <= 3
if valid_num_elements:
for child in element:
value = {}
if child.tag in ["start", "end"]:
processed = self.handle_dateTypeFormatted(child)
value[child.tag] = processed["value"]
elif child.tag in ["duration", "resolution"]:
value[child.tag] = child.text
parsed.update(value)
else:
log.warning("Not enough elements to make a valid timeCoverage")
return parsed
def handle_variable(self, element):
# element_name="variable"
# <xsd:complexType mixed="true">
# <xsd:attribute name="name" type="xsd:string" use="required"/>
# <xsd:attribute name="vocabulary_name" type="xsd:string"
# use="optional"/>
# <xsd:attribute name="units" type="xsd:string"/>
# </xsd:complexType>
type_name = "variable"
opt_attrs = self._variable_opt_attrs()
req_attrs = self._variable_req_attrs()
valid_attrs = opt_attrs + req_attrs
valid = True
for req_attr in req_attrs:
if req_attr not in element.attrib:
valid = False
log.warning("%s must have an attribute %s", type_name,
req_attr)
variable = {}
if valid:
if element.text:
variable["description"] = element.text
for attr in element.attrib:
if attr in valid_attrs:
variable[attr] = element.attrib[attr]
return variable
def handle_variableMap(self, element): # noqa
# element_name="variableMap"
# <xsd:complexType>
# <xsd:attributeGroup ref="XLink"/>
# </xsd:complexType>
type_name = "variableMap" # noqa
var_map = {}
for attr in element.attrib:
var_map[attr] = element.attrib[attr]
return var_map
def handle_variables(self, element):
# element_name="variables"
# <xsd:complexType>
# <xsd:choice>
# <xsd:element ref="variable" minOccurs="0"
# maxOccurs="unbounded"/>
# <xsd:element ref="variableMap" minOccurs="0"/>
# </xsd:choice>
# <xsd:attribute name="vocabulary" type="variableNameVocabulary"
# use="optional"/>
# <xsd:attributeGroup ref="XLink"/>
# </xsd:complexType>
type_name = "variables" # noqa
variables = {}
variable_list = []
variable_map_list = []
for child in element:
child_type = self._get_tag_name(child)
if child_type == "variable":
var = self.handle_variable(child)
variable_list.append(var)
elif child_type == "variableMap":
                # Parse the variableMap child itself, not the parent element.
                var_map = self.handle_variableMap(child)
variable_map_list.append(var_map)
opt_attrs = self._variables_opt_attrs()
for attr in element.attrib:
if attr in opt_attrs:
variables[attr] = element.attrib[attr]
if variable_list:
variables["variables"] = variable_list
if variable_map_list:
variables["variableMaps"] = variable_map_list
return variables
def handle_dataSize(self, element): # noqa
# <xsd:complexType>
# <xsd:simpleContent>
# <xsd:extension base="xsd:string">
# <xsd:attribute name="units" type="xsd:string" use="required"/>
# </xsd:extension>
# </xsd:simpleContent>
# </xsd:complexType>
#
req_attrs = self._data_size_req_attrs()
data_size = {}
data_size["size"] = float(element.text)
for attr in element.attrib:
if attr in req_attrs:
data_size[attr] = element.attrib[attr]
return data_size
class TDSCatalogMetadata(object):
r"""
An object for holding information contained in the catalog Metadata.
Attributes
----------
    metadata : dict
        The dictionary containing the metadata entries
"""
def __init__(self, element, metadata_in=None):
r"""
Initialize a TDSCatalogMetadata object.
Parameters
----------
        element : Element
            An Element Tree Element representing a metadata node
        metadata_in : dict, optional
            Metadata from a parent element, inherited only when the new
            metadata element has ``inherited`` set to true
"""
self._ct = _ComplexTypes()
self._st = _SimpleTypes()
self._sts = _SimpleTypes.__dict__
self._cts = _ComplexTypes.__dict__
inherited = False
if 'inherited' in element.attrib:
inherited = element.attrib['inherited']
if inherited == 'true':
inherited = True
else:
inherited = False
if metadata_in and inherited:
# only inherit metadata passed in if the new metadata
# element has inherit set to True
self.metadata = metadata_in
else:
self.metadata = {}
self.metadata["inherited"] = inherited
element_name = self._get_tag_name(element)
if element_name == "metadata":
for child in element:
self._parse_element(child)
else:
self._parse_element(element)
def _get_tag_name(self, element):
if "}" in element.tag:
element_name = element.tag.split('}')[-1]
else:
element_name = element.tag
return element_name
def _get_handler(self, handler_name):
handler_name = "handle_" + handler_name
if handler_name in self._cts:
handler = getattr(self._ct, handler_name)
elif handler_name in self._sts:
handler = getattr(self._st, handler_name)
else:
msg = "cannot find handler for element {}".format(handler_name)
log.error(msg)
return handler
def _parse_element(self, element):
element_name = self._get_tag_name(element)
parser = {"documentation": self._parse_documentation,
"property": self._parse_property,
"contributor": self._parse_contributor,
"geospatialCoverage": self._parse_geospatial_coverage,
"serviceName": self._parse_service_name,
"authority": self._parse_authority,
"publisher": self._parse_publisher,
"creator": self._parse_creator,
"keyword": self._parse_keyword,
"project": self._parse_project,
"dataFormat": self._parse_data_format,
"dataType": self._parse_data_type,
"date": self._parse_date,
"timeCoverage": self._parse_timeCoverage,
"variableMap": self._parse_variableMap,
"variables": self._parse_variables}
try:
parser[element_name](element)
except KeyError:
log.error("No parser found for element %s", element_name)
raise
def _parse_documentation(self, element):
# <xsd:simpleType name="documentationEnumTypes">
# <xsd:union memberTypes="xsd:token">
# <xsd:simpleType>
# <xsd:restriction base="xsd:token">
# <xsd:enumeration value="funding"/>
# <xsd:enumeration value="history"/>
# <xsd:enumeration value="processing_level"/>
# <xsd:enumeration value="rights"/>
# <xsd:enumeration value="summary"/>
# </xsd:restriction>
# </xsd:simpleType>
# </xsd:union>
# </xsd:simpleType>
#
# <xsd:complexType name="documentationType" mixed="true">
# <xsd:sequence>
# <xsd:any namespace="http://www.w3.org/1999/xhtml" minOccurs="0"
# maxOccurs="unbounded" processContents="strict"/>
# </xsd:sequence>
# <xsd:attribute name="type" type="documentationEnumTypes"/>
# <xsd:attributeGroup ref="XLink" />
# </xsd:complexType>
xlink_href_attr = '{http://www.w3.org/1999/xlink}href'
xlink_title_attr = '{http://www.w3.org/1999/xlink}title'
# doc_enum_types = ("funding", "history", "processing_level", "rights",
# "summary")
known = 'type' in element.attrib
# document element has no attributes
plain_doc = not element.attrib
md = self.metadata
md.setdefault("documentation", {})
if known or plain_doc:
if known:
doc_type = element.attrib['type']
else:
doc_type = "generic"
md["documentation"].setdefault(doc_type, []).append(element.text)
elif xlink_href_attr in element.attrib:
title = element.attrib[xlink_title_attr]
href = element.attrib[xlink_href_attr]
xlink = {"title": title, "href": href}
md["documentation"].setdefault('xlink', []).append(xlink)
self.metadata = md
def _parse_property(self, element):
# <xsd:element name="property">
# <xsd:complexType>
# <xsd:attribute name="name" type="xsd:string"/>
# <xsd:attribute name="value" type="xsd:string"/>
# </xsd:complexType>
# </xsd:element>
name = element.attrib["name"]
value = element.attrib["value"]
self.metadata.setdefault("property", {})[name] = value
def _parse_contributor(self, element):
# <xsd:element name="contributor">
# <xsd:complexType>
# <xsd:simpleContent>
# <xsd:extension base="xsd:string">
# <xsd:attribute name="role" type="xsd:string"
# use="required"/>
# </xsd:extension>
# </xsd:simpleContent>
# </xsd:complexType>
# </xsd:element>
element_type = "contributor"
role = element.attrib["role"]
name = element.text
self.metadata.setdefault(element_type, {}).setdefault(role, []).append(name)
def _parse_geospatial_coverage(self, element):
element_type = "geospatialCoverage"
md = {}
# <xsd:element name="geospatialCoverage">
# <xsd:complexType>
# <xsd:sequence>
# <xsd:element name="northsouth" type="spatialRange"
# minOccurs="0" />
# <xsd:element name="eastwest" type="spatialRange"
# minOccurs="0" />
# <xsd:element name="updown" type="spatialRange"
# minOccurs="0" />
# <xsd:element name="name" type="controlledVocabulary"
# minOccurs="0" maxOccurs="unbounded"/>
# </xsd:sequence>
#
# <xsd:attribute name="zpositive" type="upOrDown" default="up"/>
# </xsd:complexType>
# </xsd:element>
elements = {"northsouth": "spatialRange",
"eastwest": "spatialRange",
"updown": "spatialRange",
"name": "controlledVocabulary"
}
attrs = {"zpositive": "upOrDown"}
if element.attrib:
for attr in element.attrib:
if attr in attrs:
handler_name = attrs[attr]
handler = self._get_handler(handler_name)
value = handler(element)
md.update({attr: value})
else:
log.warning("Attr on %s : %s not captured", attr,
element_type)
for child in element:
child_name = child.tag
if child_name in elements:
handler_name = elements[child_name]
handler = self._get_handler(handler_name)
value = handler(child)
md.update(value)
self.metadata.setdefault(element_type, []).append(md)
def _parse_service_name(self, element):
# can only have one serviceName
element_type = "serviceName"
self.metadata[element_type] = element.text
def _parse_authority(self, element):
element_type = "authority"
self.metadata.setdefault(element_type, []).append(element.text)
def _parse_publisher(self, element):
element_type = "publisher"
parsed = self._ct.handle_sourceType(element)
self.metadata.setdefault(element_type, []).append(parsed)
def _parse_creator(self, element):
element_type = "creator"
parsed = self._ct.handle_sourceType(element)
self.metadata.setdefault(element_type, []).append(parsed)
def _parse_keyword(self, element):
element_type = "keyword"
parsed = self._ct.handle_controlledVocabulary(element)
self.metadata.setdefault(element_type, []).append(parsed)
def _parse_project(self, element):
element_type = "project"
parsed = self._ct.handle_controlledVocabulary(element)
self.metadata.setdefault(element_type, []).append(parsed)
def _parse_data_format(self, element):
element_type = "dataFormat" # noqa
parsed = self._st.handle_dataFormat(element)
self.metadata.update(parsed)
def _parse_data_type(self, element):
element_type = "dataType" # noqa
parsed = self._st.handle_dataType(element)
self.metadata.update(parsed)
def _parse_date(self, element):
element_type = "date"
parsed = self._ct.handle_dateTypeFormatted(element)
self.metadata.setdefault(element_type, []).append(parsed)
def _parse_timeCoverage(self, element): # noqa
element_type = "timeCoverage"
parsed = self._ct.handle_timeCoverageType(element)
self.metadata.setdefault(element_type, []).append(parsed)
def _parse_variableMap(self, element): # noqa
element_type = "variableMap"
parsed = self._ct.handle_variableMap(element)
self.metadata.setdefault(element_type, []).append(parsed)
def _parse_variables(self, element):
element_type = "variables"
parsed = self._ct.handle_variables(element)
for variable in parsed["variables"]:
var_name = variable["name"]
variable.pop("name", None)
self.metadata.setdefault(element_type, {})[var_name] = variable
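# A minimal, hypothetical usage sketch (a new helper): the XML snippet below is
# hand-built for illustration and only uses element types handled by the
# parsers above.
def _example_tds_metadata():
    import xml.etree.ElementTree as ET
    snippet = ('<metadata inherited="true">'
               '<serviceName>all</serviceName>'
               '<dataFormat>NetCDF</dataFormat>'
               '</metadata>')
    element = ET.fromstring(snippet)
    # Expected keys, given the handlers above: 'inherited', 'serviceName',
    # and 'dataFormat'.
    return TDSCatalogMetadata(element).metadata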
|
MoonRaker/siphon
|
siphon/metadata.py
|
Python
|
mit
| 25,906
|
[
"NetCDF"
] |
19c9c61515b4b75a26aba4018fd5502d06973b323b7b22622745903f0a4fb830
|
#!/usr/bin/env python
import numpy
from pyscf import gto
from pyscf import scf
from pyscf import mcscf
def run(b, mo0=None, dm0=None):
mol = gto.Mole()
mol.build(
verbose = 5,
output = 'o2rhf-%3.2f.out' % b,
atom = [
['O', (0, 0, b/2)],
['O', (0, 0, -b/2)],],
basis = 'cc-pvdz',
spin = 2,
symmetry = 1,
)
mf = scf.RHF(mol)
mf.scf(dm0)
mc = mcscf.CASSCF(mf, 12, 8)
if mo0 is not None:
#from pyscf import lo
#mo0 = lo.orth.vec_lowdin(mo0, mf.get_ovlp())
mo0 = mcscf.project_init_guess(mc, mo0)
else:
mo0 = mcscf.sort_mo(mc, mf.mo_coeff, [5,6,7,8,9,11,12,13,14,15,16,17])
mc.max_orb_stepsize = .02
mc.kernel(mo0)
mc.analyze()
return mf, mc
def urun(b, mo0=None, dm0=None):
mol = gto.Mole()
mol.build(
verbose = 5,
output = 'o2uhf-%3.2f.out' % b,
atom = [
['O', (0, 0, b/2)],
['O', (0, 0, -b/2)],],
basis = 'cc-pvdz',
spin = 2,
)
mf = scf.UHF(mol)
mf.scf(dm0)
mc = mcscf.CASSCF(mf, 12, 8)
if mo0 is not None:
#from pyscf import lo
#mo0 =(lo.orth.vec_lowdin(mo0[0], mf.get_ovlp()),
# lo.orth.vec_lowdin(mo0[1], mf.get_ovlp()))
mo0 = mcscf.project_init_guess(mc, mo0)
mc.kernel(mo0)
mc.analyze()
return mf, mc
x = numpy.hstack((numpy.arange(0.9, 2.01, 0.1),
numpy.arange(2.1, 4.01, 0.1)))
dm0 = mo0 = None
eumc = []
euhf = []
s = []
for b in reversed(x):
mf, mc = urun(b, mo0, dm0)
mo0 = mc.mo_coeff
dm0 = mf.make_rdm1()
s.append(mc.spin_square()[1])
euhf.append(mf.hf_energy)
eumc.append(mc.e_tot)
euhf.reverse()
eumc.reverse()
s.reverse()
#print s
dm0 = mo0 = None
ermc = []
erhf = []
for b in x:
mf, mc = run(b, mo0, dm0)
mo0 = mc.mo_coeff
dm0 = mf.make_rdm1()
erhf.append(mf.hf_energy)
ermc.append(mc.e_tot)
with open('o2-scan.txt', 'w') as fout:
fout.write(' ROHF 0.9->4.0 RCAS(12,8) UHF 4.0->0.9 UCAS(12,8) \n')
for i, xi in enumerate(x):
fout.write('%2.1f %12.8f %12.8f %12.8f %12.8f\n'
% (xi, erhf[i], ermc[i], euhf[i], eumc[i]))
import matplotlib.pyplot as plt
plt.plot(x, erhf, label='ROHF,0.9->4.0')
plt.plot(x, euhf, label='UHF, 4.0->0.9')
plt.plot(x, ermc, label='RCAS(12,8),0.9->4.0')
plt.plot(x, eumc, label='UCAS(12,8),4.0->0.9')
plt.legend()
plt.show()
|
gkc1000/pyscf
|
examples/mcscf/61-rcas_vs_ucas/o2-scan.py
|
Python
|
apache-2.0
| 2,483
|
[
"PySCF"
] |
2928fce54765420403292c8d3dea5173063309cb292a156067e1a54e1e35233e
|
# -*- coding: utf-8 -*-
"""
End-to-end tests for the LMS Instructor Dashboard.
"""
import time
from flaky import flaky
from nose.plugins.attrib import attr
from bok_choy.promise import EmptyPromise
from ..helpers import UniqueCourseTest, get_modal_alert, EventsTestMixin
from ...pages.common.logout import LogoutPage
from ...pages.lms.auto_auth import AutoAuthPage
from ...pages.studio.overview import CourseOutlinePage
from ...pages.lms.create_mode import ModeCreationPage
from ...pages.lms.courseware import CoursewarePage
from ...pages.lms.instructor_dashboard import InstructorDashboardPage
from ...fixtures.course import CourseFixture, XBlockFixtureDesc
from ...pages.lms.dashboard import DashboardPage
from ...pages.lms.problem import ProblemPage
from ...pages.lms.track_selection import TrackSelectionPage
from ...pages.lms.pay_and_verify import PaymentAndVerificationFlow, FakePaymentPage
class BaseInstructorDashboardTest(EventsTestMixin, UniqueCourseTest):
"""
Mixin class for testing the instructor dashboard.
"""
def log_in_as_instructor(self):
"""
Logs in as an instructor and returns the id.
"""
username = "test_instructor_{uuid}".format(uuid=self.unique_id[0:6])
auto_auth_page = AutoAuthPage(self.browser, username=username, course_id=self.course_id, staff=True)
return username, auto_auth_page.visit().get_user_id()
def visit_instructor_dashboard(self):
"""
Visits the instructor dashboard.
"""
instructor_dashboard_page = InstructorDashboardPage(self.browser, self.course_id)
instructor_dashboard_page.visit()
return instructor_dashboard_page
@attr('shard_5')
class AutoEnrollmentWithCSVTest(BaseInstructorDashboardTest):
"""
End-to-end tests for Auto-Registration and enrollment functionality via CSV file.
"""
def setUp(self):
super(AutoEnrollmentWithCSVTest, self).setUp()
self.course_fixture = CourseFixture(**self.course_info).install()
self.log_in_as_instructor()
instructor_dashboard_page = self.visit_instructor_dashboard()
self.auto_enroll_section = instructor_dashboard_page.select_membership().select_auto_enroll_section()
def test_browse_and_upload_buttons_are_visible(self):
"""
Scenario: On the Membership tab of the Instructor Dashboard, Auto-Enroll Browse and Upload buttons are visible.
Given that I am on the Membership tab on the Instructor Dashboard
Then I see the 'REGISTER/ENROLL STUDENTS' section on the page with the 'Browse' and 'Upload' buttons
"""
self.assertTrue(self.auto_enroll_section.is_file_attachment_browse_button_visible())
self.assertTrue(self.auto_enroll_section.is_upload_button_visible())
def test_clicking_file_upload_button_without_file_shows_error(self):
"""
Scenario: Clicking on the upload button without specifying a CSV file results in error.
Given that I am on the Membership tab on the Instructor Dashboard
When I click the Upload Button without specifying a CSV file
Then I should be shown an Error Notification
And The Notification message should read 'File is not attached.'
"""
self.auto_enroll_section.click_upload_file_button()
self.assertTrue(self.auto_enroll_section.is_notification_displayed(section_type=self.auto_enroll_section.NOTIFICATION_ERROR))
self.assertEqual(self.auto_enroll_section.first_notification_message(section_type=self.auto_enroll_section.NOTIFICATION_ERROR), "File is not attached.")
def test_uploading_correct_csv_file_results_in_success(self):
"""
Scenario: Uploading a CSV with correct data results in Success.
Given that I am on the Membership tab on the Instructor Dashboard
When I select a csv file with correct data and click the Upload Button
Then I should be shown a Success Notification.
"""
self.auto_enroll_section.upload_correct_csv_file()
self.assertTrue(self.auto_enroll_section.is_notification_displayed(section_type=self.auto_enroll_section.NOTIFICATION_SUCCESS))
def test_uploading_csv_file_with_bad_data_results_in_errors_and_warnings(self):
"""
Scenario: Uploading a CSV with incorrect data results in error and warnings.
Given that I am on the Membership tab on the Instructor Dashboard
When I select a csv file with incorrect data and click the Upload Button
Then I should be shown an Error Notification
And a corresponding Error Message.
And I should be shown a Warning Notification
And a corresponding Warning Message.
"""
self.auto_enroll_section.upload_csv_file_with_errors_warnings()
self.assertTrue(self.auto_enroll_section.is_notification_displayed(section_type=self.auto_enroll_section.NOTIFICATION_ERROR))
self.assertEqual(self.auto_enroll_section.first_notification_message(section_type=self.auto_enroll_section.NOTIFICATION_ERROR), "Data in row #2 must have exactly four columns: email, username, full name, and country")
self.assertTrue(self.auto_enroll_section.is_notification_displayed(section_type=self.auto_enroll_section.NOTIFICATION_WARNING))
self.assertEqual(self.auto_enroll_section.first_notification_message(section_type=self.auto_enroll_section.NOTIFICATION_WARNING), "ename (d@a.com): (An account with email d@a.com exists but the provided username ename is different. Enrolling anyway with d@a.com.)")
def test_uploading_non_csv_file_results_in_error(self):
"""
Scenario: Uploading an image file for auto-enrollment results in error.
Given that I am on the Membership tab on the Instructor Dashboard
When I select an image file (a non-csv file) and click the Upload Button
Then I should be shown an Error Notification
        And The Notification message should read 'Make sure that the file you upload is in CSV format with no extraneous characters or rows.'
"""
self.auto_enroll_section.upload_non_csv_file()
self.assertTrue(self.auto_enroll_section.is_notification_displayed(section_type=self.auto_enroll_section.NOTIFICATION_ERROR))
self.assertEqual(self.auto_enroll_section.first_notification_message(section_type=self.auto_enroll_section.NOTIFICATION_ERROR), "Make sure that the file you upload is in CSV format with no extraneous characters or rows.")
class ProctoredExamsTest(BaseInstructorDashboardTest):
"""
End-to-end tests for Proctoring Sections of the Instructor Dashboard.
"""
USERNAME = "STUDENT_TESTER"
EMAIL = "student101@example.com"
def setUp(self):
super(ProctoredExamsTest, self).setUp()
self.courseware_page = CoursewarePage(self.browser, self.course_id)
self.course_outline = CourseOutlinePage(
self.browser,
self.course_info['org'],
self.course_info['number'],
self.course_info['run']
)
course_fixture = CourseFixture(**self.course_info)
course_fixture.add_advanced_settings({
"enable_proctored_exams": {"value": "true"}
})
course_fixture.add_children(
XBlockFixtureDesc('chapter', 'Test Section 1').add_children(
XBlockFixtureDesc('sequential', 'Test Subsection 1').add_children(
XBlockFixtureDesc('problem', 'Test Problem 1')
)
)
).install()
self.track_selection_page = TrackSelectionPage(self.browser, self.course_id)
self.payment_and_verification_flow = PaymentAndVerificationFlow(self.browser, self.course_id)
self.immediate_verification_page = PaymentAndVerificationFlow(
self.browser, self.course_id, entry_point='verify-now'
)
self.upgrade_page = PaymentAndVerificationFlow(self.browser, self.course_id, entry_point='upgrade')
self.fake_payment_page = FakePaymentPage(self.browser, self.course_id)
self.dashboard_page = DashboardPage(self.browser)
self.problem_page = ProblemPage(self.browser)
# Add a verified mode to the course
ModeCreationPage(
self.browser, self.course_id, mode_slug=u'verified', mode_display_name=u'Verified Certificate',
min_price=10, suggested_prices='10,20'
).visit()
# Auto-auth register for the course.
self._auto_auth(self.USERNAME, self.EMAIL, False)
def _auto_auth(self, username, email, staff, enrollment_mode="honor"):
"""
Logout and login with given credentials.
"""
AutoAuthPage(self.browser, username=username, email=email,
course_id=self.course_id, staff=staff, enrollment_mode=enrollment_mode).visit()
def _create_a_proctored_exam_and_attempt(self):
"""
Creates a proctored exam and makes the student attempt it so that
the associated allowance and attempts are visible on the Instructor Dashboard.
"""
# Visit the course outline page in studio
LogoutPage(self.browser).visit()
self._auto_auth("STAFF_TESTER", "staff101@example.com", True)
self.course_outline.visit()
#open the exam settings to make it a proctored exam.
self.course_outline.open_exam_settings_dialog()
self.course_outline.make_exam_proctored()
time.sleep(2) # Wait for 2 seconds to save the settings.
# login as a verified student and visit the courseware.
LogoutPage(self.browser).visit()
self._auto_auth(self.USERNAME, self.EMAIL, False, enrollment_mode="verified")
self.courseware_page.visit()
# Start the proctored exam.
self.courseware_page.start_proctored_exam()
def _create_a_timed_exam_and_attempt(self):
"""
Creates a timed exam and makes the student attempt it so that
the associated allowance and attempts are visible on the Instructor Dashboard.
"""
# Visit the course outline page in studio
LogoutPage(self.browser).visit()
self._auto_auth("STAFF_TESTER", "staff101@example.com", True)
self.course_outline.visit()
#open the exam settings to make it a proctored exam.
self.course_outline.open_exam_settings_dialog()
self.course_outline.make_exam_timed()
time.sleep(2) # Wait for 2 seconds to save the settings.
# login as a verified student and visit the courseware.
LogoutPage(self.browser).visit()
self._auto_auth(self.USERNAME, self.EMAIL, False, enrollment_mode="verified")
self.courseware_page.visit()
# Start the proctored exam.
self.courseware_page.start_timed_exam()
@flaky # TODO fix this, see SOL-1183
def test_can_add_remove_allowance(self):
"""
Make sure that allowances can be added and removed.
"""
# Given that an exam has been configured to be a proctored exam.
self._create_a_proctored_exam_and_attempt()
# When I log in as an instructor,
self.log_in_as_instructor()
# And visit the Allowance Section of Instructor Dashboard's Proctoring tab
instructor_dashboard_page = self.visit_instructor_dashboard()
allowance_section = instructor_dashboard_page.select_proctoring().select_allowance_section()
# Then I can add Allowance to that exam for a student
self.assertTrue(allowance_section.is_add_allowance_button_visible)
def test_can_reset_attempts(self):
"""
Make sure that Exam attempts are visible and can be reset.
"""
# Given that an exam has been configured to be a proctored exam.
self._create_a_timed_exam_and_attempt()
# When I log in as an instructor,
self.log_in_as_instructor()
# And visit the Student Proctored Exam Attempts Section of Instructor Dashboard's Proctoring tab
instructor_dashboard_page = self.visit_instructor_dashboard()
exam_attempts_section = instructor_dashboard_page.select_proctoring().select_exam_attempts_section()
# Then I can see the search text field
self.assertTrue(exam_attempts_section.is_search_text_field_visible)
# And I can see one attempt by a student.
self.assertTrue(exam_attempts_section.is_student_attempt_visible)
# And I can remove the attempt by clicking the "x" at the end of the row.
exam_attempts_section.remove_student_attempt()
self.assertFalse(exam_attempts_section.is_student_attempt_visible)
@attr('shard_5')
class EntranceExamGradeTest(BaseInstructorDashboardTest):
"""
Tests for Entrance exam specific student grading tasks.
"""
def setUp(self):
super(EntranceExamGradeTest, self).setUp()
self.course_info.update({"settings": {"entrance_exam_enabled": "true"}})
CourseFixture(**self.course_info).install()
self.student_identifier = "johndoe_saee@example.com"
# Create the user (automatically logs us in)
AutoAuthPage(
self.browser,
username="johndoe_saee",
email=self.student_identifier,
course_id=self.course_id,
staff=False
).visit()
LogoutPage(self.browser).visit()
# go to the student admin page on the instructor dashboard
self.log_in_as_instructor()
self.student_admin_section = self.visit_instructor_dashboard().select_student_admin()
def test_input_text_and_buttons_are_visible(self):
"""
Scenario: On the Student admin tab of the Instructor Dashboard, Student Email input box,
Reset Student Attempt, Rescore Student Submission, Delete Student State for entrance exam
and Show Background Task History for Student buttons are visible
Given that I am on the Student Admin tab on the Instructor Dashboard
Then I see Student Email input box, Reset Student Attempt, Rescore Student Submission,
Delete Student State for entrance exam and Show Background Task History for Student buttons
"""
self.assertTrue(self.student_admin_section.is_student_email_input_visible())
self.assertTrue(self.student_admin_section.is_reset_attempts_button_visible())
self.assertTrue(self.student_admin_section.is_rescore_submission_button_visible())
self.assertTrue(self.student_admin_section.is_delete_student_state_button_visible())
self.assertTrue(self.student_admin_section.is_background_task_history_button_visible())
def test_clicking_reset_student_attempts_button_without_email_shows_error(self):
"""
Scenario: Clicking on the Reset Student Attempts button without entering student email
address or username results in error.
Given that I am on the Student Admin tab on the Instructor Dashboard
When I click the Reset Student Attempts Button under Entrance Exam Grade
Adjustment without enter an email address
Then I should be shown an Error Notification
And The Notification message should read 'Please enter a student email address or username.'
"""
self.student_admin_section.click_reset_attempts_button()
self.assertEqual(
'Please enter a student email address or username.',
self.student_admin_section.top_notification.text[0]
)
def test_clicking_reset_student_attempts_button_with_success(self):
"""
Scenario: Clicking on the Reset Student Attempts button with valid student email
address or username should result in success prompt.
Given that I am on the Student Admin tab on the Instructor Dashboard
When I click the Reset Student Attempts Button under Entrance Exam Grade
Adjustment after entering a valid student
email address or username
Then I should be shown an alert with success message
"""
self.student_admin_section.set_student_email(self.student_identifier)
self.student_admin_section.click_reset_attempts_button()
alert = get_modal_alert(self.student_admin_section.browser)
alert.dismiss()
def test_clicking_reset_student_attempts_button_with_error(self):
"""
Scenario: Clicking on the Reset Student Attempts button with email address or username
of a non existing student should result in error message.
Given that I am on the Student Admin tab on the Instructor Dashboard
When I click the Reset Student Attempts Button under Entrance Exam Grade
Adjustment after non existing student email address or username
Then I should be shown an error message
"""
self.student_admin_section.set_student_email('non_existing@example.com')
self.student_admin_section.click_reset_attempts_button()
self.student_admin_section.wait_for_ajax()
self.assertGreater(len(self.student_admin_section.top_notification.text[0]), 0)
def test_clicking_rescore_submission_button_with_success(self):
"""
Scenario: Clicking on the Rescore Student Submission button with valid student email
address or username should result in success prompt.
Given that I am on the Student Admin tab on the Instructor Dashboard
When I click the Rescore Student Submission Button under Entrance Exam Grade
Adjustment after entering a valid student email address or username
Then I should be shown an alert with success message
"""
self.student_admin_section.set_student_email(self.student_identifier)
self.student_admin_section.click_rescore_submissions_button()
alert = get_modal_alert(self.student_admin_section.browser)
alert.dismiss()
def test_clicking_rescore_submission_button_with_error(self):
"""
Scenario: Clicking on the Rescore Student Submission button with email address or username
of a non existing student should result in error message.
Given that I am on the Student Admin tab on the Instructor Dashboard
When I click the Rescore Student Submission Button under Entrance Exam Grade
Adjustment after non existing student email address or username
Then I should be shown an error message
"""
self.student_admin_section.set_student_email('non_existing@example.com')
self.student_admin_section.click_rescore_submissions_button()
self.student_admin_section.wait_for_ajax()
self.assertGreater(len(self.student_admin_section.top_notification.text[0]), 0)
def test_clicking_skip_entrance_exam_button_with_success(self):
"""
Scenario: Clicking on the Let Student Skip Entrance Exam button with
valid student email address or username should result in success prompt.
Given that I am on the Student Admin tab on the Instructor Dashboard
When I click the Let Student Skip Entrance Exam Button under
Entrance Exam Grade Adjustment after entering a valid student
email address or username
Then I should be shown an alert with success message
"""
self.student_admin_section.set_student_email(self.student_identifier)
self.student_admin_section.click_skip_entrance_exam_button()
#first we have window.confirm
alert = get_modal_alert(self.student_admin_section.browser)
alert.accept()
# then we have alert confirming action
alert = get_modal_alert(self.student_admin_section.browser)
alert.dismiss()
def test_clicking_skip_entrance_exam_button_with_error(self):
"""
Scenario: Clicking on the Let Student Skip Entrance Exam button with
email address or username of a non existing student should result in error message.
Given that I am on the Student Admin tab on the Instructor Dashboard
When I click the Let Student Skip Entrance Exam Button under
Entrance Exam Grade Adjustment after entering non existing
student email address or username
Then I should be shown an error message
"""
self.student_admin_section.set_student_email('non_existing@example.com')
self.student_admin_section.click_skip_entrance_exam_button()
#first we have window.confirm
alert = get_modal_alert(self.student_admin_section.browser)
alert.accept()
self.student_admin_section.wait_for_ajax()
self.assertGreater(len(self.student_admin_section.top_notification.text[0]), 0)
def test_clicking_delete_student_attempts_button_with_success(self):
"""
Scenario: Clicking on the Delete Student State for entrance exam button
with valid student email address or username should result in success prompt.
Given that I am on the Student Admin tab on the Instructor Dashboard
When I click the Delete Student State for entrance exam Button
under Entrance Exam Grade Adjustment after entering a valid student
email address or username
Then I should be shown an alert with success message
"""
self.student_admin_section.set_student_email(self.student_identifier)
self.student_admin_section.click_delete_student_state_button()
alert = get_modal_alert(self.student_admin_section.browser)
alert.dismiss()
def test_clicking_delete_student_attempts_button_with_error(self):
"""
Scenario: Clicking on the Delete Student State for entrance exam button
with the email address or username of a non-existing student should result
in an error message.
Given that I am on the Student Admin tab on the Instructor Dashboard
When I click the Delete Student State for entrance exam Button
under Entrance Exam Grade Adjustment after entering a non-existing student
email address or username
Then I should be shown an error message
"""
self.student_admin_section.set_student_email('non_existing@example.com')
self.student_admin_section.click_delete_student_state_button()
self.student_admin_section.wait_for_ajax()
self.assertGreater(len(self.student_admin_section.top_notification.text[0]), 0)
def test_clicking_task_history_button_with_success(self):
"""
Scenario: Clicking on the Show Background Task History for Student
with valid student email address or username should result in table of tasks.
Given that I am on the Student Admin tab on the Instructor Dashboard
When I click the Show Background Task History for Student Button
under Entrance Exam Grade Adjustment after entering a valid student
email address or username
Then I should be shown a table listing all background tasks
"""
self.student_admin_section.set_student_email(self.student_identifier)
self.student_admin_section.click_task_history_button()
self.assertTrue(self.student_admin_section.is_background_task_history_table_visible())
class DataDownloadsTest(BaseInstructorDashboardTest):
"""
Bok Choy tests for the "Data Downloads" tab.
"""
def setUp(self):
super(DataDownloadsTest, self).setUp()
self.course_fixture = CourseFixture(**self.course_info).install()
self.instructor_username, self.instructor_id = self.log_in_as_instructor()
instructor_dashboard_page = self.visit_instructor_dashboard()
self.data_download_section = instructor_dashboard_page.select_data_download()
def verify_report_requested_event(self, report_type):
"""
Verifies that the correct event is emitted when a report is requested.
"""
self.assert_matching_events_were_emitted(
event_filter={'name': u'edx.instructor.report.requested', 'report_type': report_type}
)
def verify_report_downloaded_event(self, report_url):
"""
Verifies that the correct event is emitted when a report is downloaded.
"""
self.assert_matching_events_were_emitted(
event_filter={'name': u'edx.instructor.report.downloaded', 'report_url': report_url}
)
def verify_report_download(self, report_name):
"""
Verifies that a report can be downloaded and an event fired.
"""
download_links = self.data_download_section.report_download_links
self.assertEquals(len(download_links), 1)
download_links[0].click()
expected_url = download_links.attrs('href')[0]
self.assertIn(report_name, expected_url)
self.verify_report_downloaded_event(expected_url)
def test_student_profiles_report_download(self):
"""
Scenario: Verify that an instructor can download a student profiles report
Given that I am an instructor
And I visit the instructor dashboard's "Data Downloads" tab
And I click on the "Download profile information as a CSV" button
Then a report should be generated
And a report requested event should be emitted
When I click on the report
Then a report downloaded event should be emitted
"""
report_name = u"student_profile_info"
self.data_download_section.generate_student_report_button.click()
self.data_download_section.wait_for_available_report()
self.verify_report_requested_event(report_name)
self.verify_report_download(report_name)
def test_grade_report_download(self):
"""
Scenario: Verify that an instructor can download a grade report
Given that I am an instructor
And I visit the instructor dashboard's "Data Downloads" tab
And I click on the "Generate Grade Report" button
Then a report should be generated
And a report requested event should be emitted
When I click on the report
Then a report downloaded event should be emitted
"""
report_name = u"grade_report"
self.data_download_section.generate_grade_report_button.click()
self.data_download_section.wait_for_available_report()
self.verify_report_requested_event(report_name)
self.verify_report_download(report_name)
def test_problem_grade_report_download(self):
"""
Scenario: Verify that an instructor can download a problem grade report
Given that I am an instructor
And I visit the instructor dashboard's "Data Downloads" tab
And I click on the "Generate Problem Grade Report" button
Then a report should be generated
And a report requested event should be emitted
When I click on the report
Then a report downloaded event should be emitted
"""
report_name = u"problem_grade_report"
self.data_download_section.generate_problem_report_button.click()
self.data_download_section.wait_for_available_report()
self.verify_report_requested_event(report_name)
self.verify_report_download(report_name)
@attr('shard_5')
class CertificatesTest(BaseInstructorDashboardTest):
"""
Tests for Certificates functionality on instructor dashboard.
"""
def setUp(self):
super(CertificatesTest, self).setUp()
self.course_fixture = CourseFixture(**self.course_info).install()
self.log_in_as_instructor()
instructor_dashboard_page = self.visit_instructor_dashboard()
self.certificates_section = instructor_dashboard_page.select_certificates()
def test_generate_certificates_buttons_is_visible(self):
"""
Scenario: On the Certificates tab of the Instructor Dashboard, Generate Certificates button is visible.
Given that I am on the Certificates tab on the Instructor Dashboard
And the instructor-generation feature flag has been enabled
Then I see a 'Generate Certificates' button
And when I click on the 'Generate Certificates' button
Then I should see a status message and 'Generate Certificates' button should be disabled.
"""
self.assertTrue(self.certificates_section.generate_certificates_button.visible)
self.certificates_section.generate_certificates_button.click()
alert = get_modal_alert(self.certificates_section.browser)
alert.accept()
self.certificates_section.wait_for_ajax()
EmptyPromise(
lambda: self.certificates_section.certificate_generation_status.visible,
'Certificate generation status shown'
).fulfill()
disabled = self.certificates_section.generate_certificates_button.attrs('disabled')
self.assertEqual(disabled[0], 'true')
def test_pending_tasks_section_is_visible(self):
"""
Scenario: On the Certificates tab of the Instructor Dashboard, Pending Instructor Tasks section is visible.
Given that I am on the Certificates tab on the Instructor Dashboard
Then I see 'Pending Instructor Tasks' section
"""
self.assertTrue(self.certificates_section.pending_tasks_section.visible)
|
nanolearningllc/edx-platform-cypress
|
common/test/acceptance/tests/lms/test_lms_instructor_dashboard.py
|
Python
|
agpl-3.0
| 29,511
|
[
"VisIt"
] |
23990943ce56745041d949c339ef118472ed431a79664cb77490fa50c7037ba2
|
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Mathieu Blondel <mathieu@mblondel.org>
# Olivier Grisel <olivier.grisel@ensta.org>
# Andreas Mueller <amueller@ais.uni-bonn.de>
# Eric Martin <eric@ericmart.in>
# License: BSD 3 clause
from itertools import chain, combinations
import numbers
import warnings
import numpy as np
from scipy import sparse
from ..base import BaseEstimator, TransformerMixin
from ..externals import six
from ..utils import check_array
from ..utils.extmath import row_norms
from ..utils.fixes import combinations_with_replacement as combinations_w_r
from ..utils.sparsefuncs_fast import (inplace_csr_row_normalize_l1,
inplace_csr_row_normalize_l2)
from ..utils.sparsefuncs import (inplace_column_scale, mean_variance_axis,
min_max_axis, inplace_row_scale)
from ..utils.validation import check_is_fitted, FLOAT_DTYPES
zip = six.moves.zip
map = six.moves.map
range = six.moves.range
__all__ = [
'Binarizer',
'KernelCenterer',
'MinMaxScaler',
'MaxAbsScaler',
'Normalizer',
'OneHotEncoder',
'RobustScaler',
'StandardScaler',
'add_dummy_feature',
'binarize',
'normalize',
'scale',
'robust_scale',
'maxabs_scale',
'minmax_scale',
]
DEPRECATION_MSG_1D = ("Passing 1d arrays as data is deprecated and "
"will be removed in 0.18. Reshape your data either using "
"X.reshape(-1, 1) if your data has a single feature or "
"X.reshape(1, -1) if it contains a single sample.")
def _mean_and_std(X, axis=0, with_mean=True, with_std=True):
"""Compute mean and std deviation for centering, scaling.
Zero valued std components are reset to 1.0 to avoid NaNs when scaling.
"""
X = np.asarray(X)
Xr = np.rollaxis(X, axis)
if with_mean:
mean_ = Xr.mean(axis=0)
else:
mean_ = None
if with_std:
std_ = Xr.std(axis=0)
std_ = _handle_zeros_in_scale(std_)
else:
std_ = None
return mean_, std_
def _handle_zeros_in_scale(scale):
''' Makes sure that whenever scale is zero, we handle it correctly.
This happens in most scalers when we have constant features.'''
# if we are fitting on 1D arrays, scale might be a scalar
if np.isscalar(scale):
if scale == 0:
scale = 1.
elif isinstance(scale, np.ndarray):
scale[scale == 0.0] = 1.0
scale[~np.isfinite(scale)] = 1.0
return scale
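# Illustrative behaviour of _handle_zeros_in_scale (comment added for exposition,
# not part of the original module): a constant feature yields a scale of zero,
# which would produce NaNs on division, so it is replaced by 1.0:
#
#     >>> _handle_zeros_in_scale(np.array([0., 2., 0.]))
#     array([ 1.,  2.,  1.])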
def scale(X, axis=0, with_mean=True, with_std=True, copy=True):
"""Standardize a dataset along any axis
Center to the mean and component wise scale to unit variance.
Read more in the :ref:`User Guide <preprocessing_scaler>`.
Parameters
----------
X : array-like or CSR matrix.
The data to center and scale.
axis : int (0 by default)
axis used to compute the means and standard deviations along. If 0,
independently standardize each feature, otherwise (if 1) standardize
each sample.
with_mean : boolean, True by default
If True, center the data before scaling.
with_std : boolean, True by default
If True, scale the data to unit variance (or equivalently,
unit standard deviation).
copy : boolean, optional, default True
set to False to perform inplace scaling and avoid a
copy (if the input is already a numpy array or a scipy.sparse
CSR matrix and if axis is 1).
Notes
-----
This implementation will refuse to center scipy.sparse matrices
since it would make them non-sparse and would potentially crash the
program with memory exhaustion problems.
Instead the caller is expected to either set explicitly
`with_mean=False` (in that case, only variance scaling will be
performed on the features of the CSR matrix) or to call `X.toarray()`
if he/she expects the materialized dense array to fit in memory.
To avoid memory copy the caller should pass a CSR matrix.
See also
--------
:class:`sklearn.preprocessing.StandardScaler` to perform centering and
scaling using the ``Transformer`` API (e.g. as part of a preprocessing
:class:`sklearn.pipeline.Pipeline`)
"""
X = check_array(X, accept_sparse='csr', copy=copy, ensure_2d=False,
warn_on_dtype=True, estimator='the scale function',
dtype=FLOAT_DTYPES)
if sparse.issparse(X):
if with_mean:
raise ValueError(
"Cannot center sparse matrices: pass `with_mean=False` instead"
" See docstring for motivation and alternatives.")
if axis != 0:
raise ValueError("Can only scale sparse matrix on axis=0, "
" got axis=%d" % axis)
if not sparse.isspmatrix_csr(X):
X = X.tocsr()
copy = False
if copy:
X = X.copy()
_, var = mean_variance_axis(X, axis=0)
var = _handle_zeros_in_scale(var)
inplace_column_scale(X, 1 / np.sqrt(var))
else:
X = np.asarray(X)
mean_, std_ = _mean_and_std(
X, axis, with_mean=with_mean, with_std=with_std)
if copy:
X = X.copy()
# Xr is a view on the original array that enables easy use of
# broadcasting on the axis in which we are interested in
Xr = np.rollaxis(X, axis)
if with_mean:
Xr -= mean_
mean_1 = Xr.mean(axis=0)
# Verify that mean_1 is 'close to zero'. If X contains very
# large values, mean_1 can also be very large, due to a lack of
# precision of mean_. In this case, a pre-scaling of the
# concerned feature is efficient, for instance by its mean or
# maximum.
if not np.allclose(mean_1, 0):
warnings.warn("Numerical issues were encountered "
"when centering the data "
"and might not be solved. Dataset may "
"contain too large values. You may need "
"to prescale your features.")
Xr -= mean_1
if with_std:
Xr /= std_
if with_mean:
mean_2 = Xr.mean(axis=0)
# If mean_2 is not 'close to zero', it comes from the fact that
# std_ is very small so that mean_2 = mean_1/std_ > 0, even if
# mean_1 was close to zero. The problem is thus essentially due
# to the lack of precision of mean_. A solution is then to
# subtract the mean again:
if not np.allclose(mean_2, 0):
warnings.warn("Numerical issues were encountered "
"when scaling the data "
"and might not be solved. The standard "
"deviation of the data is probably "
"very close to 0. ")
Xr -= mean_2
return X
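# Illustrative usage of scale() (sketch added for exposition, not part of the
# original module): after scaling, every column has zero mean and unit variance.
#
#     >>> X = np.array([[1., -1., 2.], [2., 0., 0.], [0., 1., -1.]])
#     >>> X_scaled = scale(X)
#     >>> X_scaled.mean(axis=0)   # approximately array([ 0.,  0.,  0.])
#     >>> X_scaled.std(axis=0)    # approximately array([ 1.,  1.,  1.])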
class MinMaxScaler(BaseEstimator, TransformerMixin):
"""Transforms features by scaling each feature to a given range.
This estimator scales and translates each feature individually such
that it is in the given range on the training set, i.e. between
zero and one.
The transformation is given by::
X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
X_scaled = X_std * (max - min) + min
where min, max = feature_range.
This transformation is often used as an alternative to zero mean,
unit variance scaling.
Read more in the :ref:`User Guide <preprocessing_scaler>`.
Parameters
----------
feature_range: tuple (min, max), default=(0, 1)
Desired range of transformed data.
copy : boolean, optional, default True
Set to False to perform inplace scaling and avoid a
copy (if the input is already a numpy array).
Attributes
----------
min_ : ndarray, shape (n_features,)
Per feature adjustment for minimum.
scale_ : ndarray, shape (n_features,)
Per feature relative scaling of the data.
"""
def __init__(self, feature_range=(0, 1), copy=True):
self.feature_range = feature_range
self.copy = copy
def fit(self, X, y=None):
"""Compute the minimum and maximum to be used for later scaling.
Parameters
----------
X : array-like, shape [n_samples, n_features]
The data used to compute the per-feature minimum and maximum
used for later scaling along the features axis.
"""
X = check_array(X, copy=self.copy, ensure_2d=False, warn_on_dtype=True,
estimator=self, dtype=FLOAT_DTYPES)
feature_range = self.feature_range
if feature_range[0] >= feature_range[1]:
raise ValueError("Minimum of desired feature range must be smaller"
" than maximum. Got %s." % str(feature_range))
data_min = np.min(X, axis=0)
data_range = np.max(X, axis=0) - data_min
data_range = _handle_zeros_in_scale(data_range)
self.scale_ = (feature_range[1] - feature_range[0]) / data_range
self.min_ = feature_range[0] - data_min * self.scale_
self.data_range = data_range
self.data_min = data_min
return self
def transform(self, X):
"""Scaling features of X according to feature_range.
Parameters
----------
X : array-like with shape [n_samples, n_features]
Input data that will be transformed.
"""
check_is_fitted(self, 'scale_')
X = check_array(X, copy=self.copy, ensure_2d=False)
if X.ndim == 1:
warnings.warn(DEPRECATION_MSG_1D, DeprecationWarning)
X *= self.scale_
X += self.min_
return X
def inverse_transform(self, X):
"""Undo the scaling of X according to feature_range.
Parameters
----------
X : array-like with shape [n_samples, n_features]
Input data that will be transformed.
"""
check_is_fitted(self, 'scale_')
X = check_array(X, copy=self.copy, ensure_2d=False)
X -= self.min_
X /= self.scale_
return X
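# Illustrative usage of MinMaxScaler (sketch added for exposition, not part of
# the original module): each feature's training minimum maps to 0 and its
# maximum maps to 1 with the default feature_range.
#
#     >>> X = np.array([[1., 2.], [3., 4.], [5., 6.]])
#     >>> MinMaxScaler().fit_transform(X)
#     array([[ 0. ,  0. ],
#            [ 0.5,  0.5],
#            [ 1. ,  1. ]])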
def minmax_scale(X, feature_range=(0, 1), axis=0, copy=True):
"""Transforms features by scaling each feature to a given range.
This estimator scales and translates each feature individually such
that it is in the given range on the training set, i.e. between
zero and one.
The transformation is given by::
X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
X_scaled = X_std * (max - min) + min
where min, max = feature_range.
This transformation is often used as an alternative to zero mean,
unit variance scaling.
Read more in the :ref:`User Guide <preprocessing_scaler>`.
Parameters
----------
feature_range: tuple (min, max), default=(0, 1)
Desired range of transformed data.
axis : int (0 by default)
axis used to scale along. If 0, independently scale each feature,
otherwise (if 1) scale each sample.
copy : boolean, optional, default is True
Set to False to perform inplace scaling and avoid a copy (if the input
is already a numpy array).
"""
s = MinMaxScaler(feature_range=feature_range, copy=copy)
if axis == 0:
return s.fit_transform(X)
else:
return s.fit_transform(X.T).T
class StandardScaler(BaseEstimator, TransformerMixin):
"""Standardize features by removing the mean and scaling to unit variance
Centering and scaling happen independently on each feature by computing
the relevant statistics on the samples in the training set. Mean and
standard deviation are then stored to be used on later data using the
`transform` method.
Standardization of a dataset is a common requirement for many
machine learning estimators: they might behave badly if the
individual features do not more or less look like standard normally
distributed data (e.g. Gaussian with 0 mean and unit variance).
For instance many elements used in the objective function of
a learning algorithm (such as the RBF kernel of Support Vector
Machines or the L1 and L2 regularizers of linear models) assume that
all features are centered around 0 and have variance in the same
order. If a feature has a variance that is orders of magnitude larger
than others, it might dominate the objective function and make the
estimator unable to learn from other features correctly as expected.
Read more in the :ref:`User Guide <preprocessing_scaler>`.
Parameters
----------
with_mean : boolean, True by default
If True, center the data before scaling.
This does not work (and will raise an exception) when attempted on
sparse matrices, because centering them entails building a dense
matrix which in common use cases is likely to be too large to fit in
memory.
with_std : boolean, True by default
If True, scale the data to unit variance (or equivalently,
unit standard deviation).
copy : boolean, optional, default True
If False, try to avoid a copy and do inplace scaling instead.
This is not guaranteed to always work inplace; e.g. if the data is
not a NumPy array or scipy.sparse CSR matrix, a copy may still be
returned.
Attributes
----------
mean_ : array of floats with shape [n_features]
The mean value for each feature in the training set.
std_ : array of floats with shape [n_features]
The standard deviation for each feature in the training set.
Set to one if the standard deviation is zero for a given feature.
See also
--------
:func:`sklearn.preprocessing.scale` to perform centering and
scaling without using the ``Transformer`` object oriented API
:class:`sklearn.decomposition.RandomizedPCA` with `whiten=True`
to further remove the linear correlation across features.
"""
def __init__(self, copy=True, with_mean=True, with_std=True):
self.with_mean = with_mean
self.with_std = with_std
self.copy = copy
def fit(self, X, y=None):
"""Compute the mean and std to be used for later scaling.
Parameters
----------
X : array-like or CSR matrix with shape [n_samples, n_features]
The data used to compute the mean and standard deviation
used for later scaling along the features axis.
"""
X = check_array(X, accept_sparse='csr', copy=self.copy,
ensure_2d=False, warn_on_dtype=True,
estimator=self, dtype=FLOAT_DTYPES)
if sparse.issparse(X):
if self.with_mean:
raise ValueError(
"Cannot center sparse matrices: pass `with_mean=False` "
"instead. See docstring for motivation and alternatives.")
self.mean_ = None
if self.with_std:
var = mean_variance_axis(X, axis=0)[1]
self.std_ = np.sqrt(var)
self.std_ = _handle_zeros_in_scale(self.std_)
else:
self.std_ = None
return self
else:
self.mean_, self.std_ = _mean_and_std(
X, axis=0, with_mean=self.with_mean, with_std=self.with_std)
return self
def transform(self, X, y=None, copy=None):
"""Perform standardization by centering and scaling
Parameters
----------
X : array-like with shape [n_samples, n_features]
The data used to scale along the features axis.
"""
check_is_fitted(self, 'std_')
copy = copy if copy is not None else self.copy
X = check_array(X, accept_sparse='csr', copy=copy,
ensure_2d=False, warn_on_dtype=True,
estimator=self, dtype=FLOAT_DTYPES)
if X.ndim == 1:
warnings.warn(DEPRECATION_MSG_1D, DeprecationWarning)
if sparse.issparse(X):
if self.with_mean:
raise ValueError(
"Cannot center sparse matrices: pass `with_mean=False` "
"instead. See docstring for motivation and alternatives.")
if self.std_ is not None:
inplace_column_scale(X, 1 / self.std_)
else:
if self.with_mean:
X -= self.mean_
if self.with_std:
X /= self.std_
return X
def inverse_transform(self, X, copy=None):
"""Scale back the data to the original representation
Parameters
----------
X : array-like with shape [n_samples, n_features]
The data used to scale along the features axis.
"""
check_is_fitted(self, 'std_')
copy = copy if copy is not None else self.copy
if sparse.issparse(X):
if self.with_mean:
raise ValueError(
"Cannot uncenter sparse matrices: pass `with_mean=False` "
"instead See docstring for motivation and alternatives.")
if not sparse.isspmatrix_csr(X):
X = X.tocsr()
copy = False
if copy:
X = X.copy()
if self.std_ is not None:
inplace_column_scale(X, self.std_)
else:
X = np.asarray(X)
if copy:
X = X.copy()
if self.with_std:
X *= self.std_
if self.with_mean:
X += self.mean_
return X
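# Illustrative usage of StandardScaler (sketch added for exposition, not part of
# the original module): fit() stores the per-feature mean_ and std_, and
# transform() applies (X - mean_) / std_ to new data.
#
#     >>> X = np.array([[0., 0.], [0., 0.], [1., 1.], [1., 1.]])
#     >>> scaler = StandardScaler().fit(X)
#     >>> scaler.mean_                    # array([ 0.5,  0.5])
#     >>> scaler.transform([[2., 2.]])    # array([[ 3.,  3.]])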
class MaxAbsScaler(BaseEstimator, TransformerMixin):
"""Scale each feature by its maximum absolute value.
This estimator scales each feature individually such
that the maximal absolute value of each feature in the
training set will be 1.0. It does not shift/center the data, and
thus does not destroy any sparsity.
This scaler can also be applied to sparse CSR or CSC matrices.
Parameters
----------
copy : boolean, optional, default is True
Set to False to perform inplace scaling and avoid a copy (if the input
is already a numpy array).
Attributes
----------
scale_ : ndarray, shape (n_features,)
Per feature relative scaling of the data.
"""
def __init__(self, copy=True):
self.copy = copy
def fit(self, X, y=None):
"""Compute the minimum and maximum to be used for later scaling.
Parameters
----------
X : array-like, shape [n_samples, n_features]
The data used to compute the per-feature maximum absolute value
used for later scaling along the features axis.
"""
X = check_array(X, accept_sparse=('csr', 'csc'), copy=self.copy,
ensure_2d=False, estimator=self, dtype=FLOAT_DTYPES)
if sparse.issparse(X):
mins, maxs = min_max_axis(X, axis=0)
scales = np.maximum(np.abs(mins), np.abs(maxs))
else:
scales = np.abs(X).max(axis=0)
scales = np.array(scales)
scales = scales.reshape(-1)
self.scale_ = _handle_zeros_in_scale(scales)
return self
def transform(self, X, y=None):
"""Scale the data
Parameters
----------
X : array-like or CSR matrix.
The data that should be scaled.
"""
check_is_fitted(self, 'scale_')
X = check_array(X, accept_sparse=('csr', 'csc'), copy=self.copy,
ensure_2d=False, estimator=self, dtype=FLOAT_DTYPES)
if X.ndim == 1:
warnings.warn(DEPRECATION_MSG_1D, DeprecationWarning)
if sparse.issparse(X):
if X.shape[0] == 1:
inplace_row_scale(X, 1.0 / self.scale_)
else:
inplace_column_scale(X, 1.0 / self.scale_)
else:
X /= self.scale_
return X
def inverse_transform(self, X):
"""Scale back the data to the original representation
Parameters
----------
X : array-like or CSR matrix.
The data that should be transformed back.
"""
check_is_fitted(self, 'scale_')
X = check_array(X, accept_sparse=('csr', 'csc'), copy=self.copy,
ensure_2d=False, estimator=self, dtype=FLOAT_DTYPES)
if sparse.issparse(X):
if X.shape[0] == 1:
inplace_row_scale(X, self.scale_)
else:
inplace_column_scale(X, self.scale_)
else:
X *= self.scale_
return X
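# Illustrative usage of MaxAbsScaler (sketch added for exposition, not part of
# the original module): each feature is divided by its maximum absolute value,
# so zeros stay zero and sparsity is preserved.
#
#     >>> X = np.array([[1., -2.], [2., 4.], [-4., 2.]])
#     >>> MaxAbsScaler().fit_transform(X)
#     array([[ 0.25, -0.5 ],
#            [ 0.5 ,  1.  ],
#            [-1.  ,  0.5 ]])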
def maxabs_scale(X, axis=0, copy=True):
"""Scale each feature to the [-1, 1] range without breaking the sparsity.
This estimator scales each feature individually such
that the maximal absolute value of each feature in the
training set will be 1.0.
This scaler can also be applied to sparse CSR or CSC matrices.
Parameters
----------
axis : int (0 by default)
axis used to scale along. If 0, independently scale each feature,
otherwise (if 1) scale each sample.
copy : boolean, optional, default is True
Set to False to perform inplace scaling and avoid a copy (if the input
is already a numpy array).
"""
s = MaxAbsScaler(copy=copy)
if axis == 0:
return s.fit_transform(X)
else:
return s.fit_transform(X.T).T
class RobustScaler(BaseEstimator, TransformerMixin):
"""Scale features using statistics that are robust to outliers.
This Scaler removes the median and scales the data according to
the Interquartile Range (IQR). The IQR is the range between the 1st
quartile (25th quantile) and the 3rd quartile (75th quantile).
Centering and scaling happen independently on each feature (or each
sample, depending on the `axis` argument) by computing the relevant
statistics on the samples in the training set. Median and interquartile
range are then stored to be used on later data using the `transform`
method.
Standardization of a dataset is a common requirement for many
machine learning estimators. Typically this is done by removing the mean
and scaling to unit variance. However, outliers can often influence the
sample mean / variance in a negative way. In such cases, the median and
the interquartile range often give better results.
Read more in the :ref:`User Guide <preprocessing_scaler>`.
Parameters
----------
with_centering : boolean, True by default
If True, center the data before scaling.
This does not work (and will raise an exception) when attempted on
sparse matrices, because centering them entails building a dense
matrix which in common use cases is likely to be too large to fit in
memory.
with_scaling : boolean, True by default
If True, scale the data to interquartile range.
copy : boolean, optional, default is True
If False, try to avoid a copy and do inplace scaling instead.
This is not guaranteed to always work inplace; e.g. if the data is
not a NumPy array or scipy.sparse CSR matrix, a copy may still be
returned.
Attributes
----------
center_ : array of floats
The median value for each feature in the training set.
scale_ : array of floats
The (scaled) interquartile range for each feature in the training set.
See also
--------
:class:`sklearn.preprocessing.StandardScaler` to perform centering
and scaling using mean and variance.
:class:`sklearn.decomposition.RandomizedPCA` with `whiten=True`
to further remove the linear correlation across features.
Notes
-----
See examples/preprocessing/plot_robust_scaling.py for an example.
http://en.wikipedia.org/wiki/Median_(statistics)
http://en.wikipedia.org/wiki/Interquartile_range
"""
def __init__(self, with_centering=True, with_scaling=True, copy=True):
self.with_centering = with_centering
self.with_scaling = with_scaling
self.copy = copy
def _check_array(self, X, copy):
"""Makes sure centering is not enabled for sparse matrices."""
X = check_array(X, accept_sparse=('csr', 'csc'), copy=self.copy,
ensure_2d=False, estimator=self, dtype=FLOAT_DTYPES)
if X.ndim == 1:
warnings.warn(DEPRECATION_MSG_1D, DeprecationWarning)
if sparse.issparse(X):
if self.with_centering:
raise ValueError(
"Cannot center sparse matrices: use `with_centering=False`"
" instead. See docstring for motivation and alternatives.")
return X
def fit(self, X, y=None):
"""Compute the median and quantiles to be used for scaling.
Parameters
----------
X : array-like with shape [n_samples, n_features]
The data used to compute the median and quantiles
used for later scaling along the features axis.
"""
if sparse.issparse(X):
raise TypeError("RobustScaler cannot be fitted on sparse inputs")
X = self._check_array(X, self.copy)
if self.with_centering:
self.center_ = np.median(X, axis=0)
if self.with_scaling:
q = np.percentile(X, (25, 75), axis=0)
self.scale_ = (q[1] - q[0])
self.scale_ = _handle_zeros_in_scale(self.scale_)
return self
def transform(self, X, y=None):
"""Center and scale the data
Parameters
----------
X : array-like or CSR matrix.
The data used to scale along the specified axis.
"""
if self.with_centering:
check_is_fitted(self, 'center_')
if self.with_scaling:
check_is_fitted(self, 'scale_')
X = self._check_array(X, self.copy)
if sparse.issparse(X):
if self.with_scaling:
if X.shape[0] == 1:
inplace_row_scale(X, 1.0 / self.scale_)
else:
inplace_column_scale(X, 1.0 / self.scale_)
else:
if self.with_centering:
X -= self.center_
if self.with_scaling:
X /= self.scale_
return X
def inverse_transform(self, X):
"""Scale back the data to the original representation
Parameters
----------
X : array-like or CSR matrix.
The data used to scale along the specified axis.
"""
if self.with_centering:
check_is_fitted(self, 'center_')
if self.with_scaling:
check_is_fitted(self, 'scale_')
X = self._check_array(X, self.copy)
if sparse.issparse(X):
if self.with_scaling:
if X.shape[0] == 1:
inplace_row_scale(X, self.scale_)
else:
inplace_column_scale(X, self.scale_)
else:
if self.with_scaling:
X *= self.scale_
if self.with_centering:
X += self.center_
return X
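# Illustrative usage of RobustScaler (sketch added for exposition, not part of
# the original module): centering uses the median and scaling uses the IQR, so
# the outlier 100. barely influences how the other samples are transformed.
#
#     >>> X = np.array([[1.], [2.], [3.], [4.], [100.]])
#     >>> RobustScaler().fit_transform(X).ravel()
#     array([ -1. ,  -0.5,   0. ,   0.5,  48.5])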
def robust_scale(X, axis=0, with_centering=True, with_scaling=True, copy=True):
"""Standardize a dataset along any axis
Center to the median and component wise scale
according to the interquartile range.
Read more in the :ref:`User Guide <preprocessing_scaler>`.
Parameters
----------
X : array-like.
The data to center and scale.
axis : int (0 by default)
axis used to compute the medians and IQR along. If 0,
independently scale each feature, otherwise (if 1) scale
each sample.
with_centering : boolean, True by default
If True, center the data before scaling.
with_scaling : boolean, True by default
If True, scale the data to the interquartile range.
copy : boolean, optional, default is True
set to False to perform inplace scaling and avoid a
copy (if the input is already a numpy array or a scipy.sparse
CSR matrix and if axis is 1).
Notes
-----
This implementation will refuse to center scipy.sparse matrices
since it would make them non-sparse and would potentially crash the
program with memory exhaustion problems.
Instead the caller is expected to either set explicitly
`with_centering=False` (in that case, only variance scaling will be
performed on the features of the CSR matrix) or to call `X.toarray()`
if he/she expects the materialized dense array to fit in memory.
To avoid memory copy the caller should pass a CSR matrix.
See also
--------
:class:`sklearn.preprocessing.RobustScaler` to perform centering and
scaling using the ``Transformer`` API (e.g. as part of a preprocessing
:class:`sklearn.pipeline.Pipeline`)
"""
s = RobustScaler(with_centering=with_centering, with_scaling=with_scaling,
copy=copy)
if axis == 0:
return s.fit_transform(X)
else:
return s.fit_transform(X.T).T
class PolynomialFeatures(BaseEstimator, TransformerMixin):
"""Generate polynomial and interaction features.
Generate a new feature matrix consisting of all polynomial combinations
of the features with degree less than or equal to the specified degree.
For example, if an input sample is two dimensional and of the form
[a, b], the degree-2 polynomial features are [1, a, b, a^2, ab, b^2].
Parameters
----------
degree : integer
The degree of the polynomial features. Default = 2.
interaction_only : boolean, default = False
If true, only interaction features are produced: features that are
products of at most ``degree`` *distinct* input features (so not
``x[1] ** 2``, ``x[0] * x[2] ** 3``, etc.).
include_bias : boolean
If True (default), then include a bias column, the feature in which
all polynomial powers are zero (i.e. a column of ones - acts as an
intercept term in a linear model).
Examples
--------
>>> X = np.arange(6).reshape(3, 2)
>>> X
array([[0, 1],
[2, 3],
[4, 5]])
>>> poly = PolynomialFeatures(2)
>>> poly.fit_transform(X)
array([[ 1, 0, 1, 0, 0, 1],
[ 1, 2, 3, 4, 6, 9],
[ 1, 4, 5, 16, 20, 25]])
>>> poly = PolynomialFeatures(interaction_only=True)
>>> poly.fit_transform(X)
array([[ 1, 0, 1, 0],
[ 1, 2, 3, 6],
[ 1, 4, 5, 20]])
Attributes
----------
powers_ : array, shape (n_input_features, n_output_features)
powers_[i, j] is the exponent of the jth input in the ith output.
n_input_features_ : int
The total number of input features.
n_output_features_ : int
The total number of polynomial output features. The number of output
features is computed by iterating over all suitably sized combinations
of input features.
Notes
-----
Be aware that the number of features in the output array scales
polynomially in the number of features of the input array, and
exponentially in the degree. High degrees can cause overfitting.
See :ref:`examples/linear_model/plot_polynomial_interpolation.py
<example_linear_model_plot_polynomial_interpolation.py>`
"""
def __init__(self, degree=2, interaction_only=False, include_bias=True):
self.degree = degree
self.interaction_only = interaction_only
self.include_bias = include_bias
@staticmethod
def _combinations(n_features, degree, interaction_only, include_bias):
comb = (combinations if interaction_only else combinations_w_r)
start = int(not include_bias)
return chain.from_iterable(comb(range(n_features), i)
for i in range(start, degree + 1))
@property
def powers_(self):
check_is_fitted(self, 'n_input_features_')
combinations = self._combinations(self.n_input_features_, self.degree,
self.interaction_only,
self.include_bias)
return np.vstack([np.bincount(c, minlength=self.n_input_features_)
for c in combinations])
def fit(self, X, y=None):
"""
Compute number of output features.
"""
n_samples, n_features = check_array(X).shape
combinations = self._combinations(n_features, self.degree,
self.interaction_only,
self.include_bias)
self.n_input_features_ = n_features
self.n_output_features_ = sum(1 for _ in combinations)
return self
def transform(self, X, y=None):
"""Transform data to polynomial features
Parameters
----------
X : array with shape [n_samples, n_features]
The data to transform, row by row.
Returns
-------
XP : np.ndarray shape [n_samples, NP]
The matrix of features, where NP is the number of polynomial
features generated from the combination of inputs.
"""
check_is_fitted(self, ['n_input_features_', 'n_output_features_'])
X = check_array(X)
n_samples, n_features = X.shape
if n_features != self.n_input_features_:
raise ValueError("X shape does not match training shape")
# allocate output data
XP = np.empty((n_samples, self.n_output_features_), dtype=X.dtype)
combinations = self._combinations(n_features, self.degree,
self.interaction_only,
self.include_bias)
for i, c in enumerate(combinations):
XP[:, i] = X[:, c].prod(1)
return XP
def normalize(X, norm='l2', axis=1, copy=True):
"""Scale input vectors individually to unit norm (vector length).
Read more in the :ref:`User Guide <preprocessing_normalization>`.
Parameters
----------
X : array or scipy.sparse matrix with shape [n_samples, n_features]
The data to normalize, element by element.
scipy.sparse matrices should be in CSR format to avoid an
un-necessary copy.
norm : 'l1', 'l2', or 'max', optional ('l2' by default)
The norm to use to normalize each non zero sample (or each non-zero
feature if axis is 0).
axis : 0 or 1, optional (1 by default)
axis used to normalize the data along. If 1, independently normalize
each sample, otherwise (if 0) normalize each feature.
copy : boolean, optional, default True
set to False to perform inplace row normalization and avoid a
copy (if the input is already a numpy array or a scipy.sparse
CSR matrix and if axis is 1).
See also
--------
:class:`sklearn.preprocessing.Normalizer` to perform normalization
using the ``Transformer`` API (e.g. as part of a preprocessing
:class:`sklearn.pipeline.Pipeline`)
"""
if norm not in ('l1', 'l2', 'max'):
raise ValueError("'%s' is not a supported norm" % norm)
if axis == 0:
sparse_format = 'csc'
elif axis == 1:
sparse_format = 'csr'
else:
raise ValueError("'%d' is not a supported axis" % axis)
X = check_array(X, sparse_format, copy=copy, warn_on_dtype=True,
estimator='the normalize function', dtype=FLOAT_DTYPES)
if axis == 0:
X = X.T
if sparse.issparse(X):
if norm == 'l1':
inplace_csr_row_normalize_l1(X)
elif norm == 'l2':
inplace_csr_row_normalize_l2(X)
elif norm == 'max':
_, norms = min_max_axis(X, 1)
norms = norms.repeat(np.diff(X.indptr))
mask = norms != 0
X.data[mask] /= norms[mask]
else:
if norm == 'l1':
norms = np.abs(X).sum(axis=1)
elif norm == 'l2':
norms = row_norms(X)
elif norm == 'max':
norms = np.max(X, axis=1)
norms = _handle_zeros_in_scale(norms)
X /= norms[:, np.newaxis]
if axis == 0:
X = X.T
return X
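# Illustrative usage of normalize() (sketch added for exposition, not part of
# the original module): with the default l2 norm each row is scaled to unit
# Euclidean length, e.g. [3, 4] becomes [0.6, 0.8].
#
#     >>> normalize(np.array([[3., 4.], [1., 0.]]))
#     array([[ 0.6,  0.8],
#            [ 1. ,  0. ]])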
class Normalizer(BaseEstimator, TransformerMixin):
"""Normalize samples individually to unit norm.
Each sample (i.e. each row of the data matrix) with at least one
non zero component is rescaled independently of other samples so
that its norm (l1 or l2) equals one.
This transformer is able to work both with dense numpy arrays and
scipy.sparse matrix (use CSR format if you want to avoid the burden of
a copy / conversion).
Scaling inputs to unit norms is a common operation for text
classification or clustering for instance. For instance the dot
product of two l2-normalized TF-IDF vectors is the cosine similarity
of the vectors and is the base similarity metric for the Vector
Space Model commonly used by the Information Retrieval community.
Read more in the :ref:`User Guide <preprocessing_normalization>`.
Parameters
----------
norm : 'l1', 'l2', or 'max', optional ('l2' by default)
The norm to use to normalize each non zero sample.
copy : boolean, optional, default True
set to False to perform inplace row normalization and avoid a
copy (if the input is already a numpy array or a scipy.sparse
CSR matrix).
Notes
-----
This estimator is stateless (besides constructor parameters), the
fit method does nothing but is useful when used in a pipeline.
See also
--------
:func:`sklearn.preprocessing.normalize` equivalent function
without the object oriented API
"""
def __init__(self, norm='l2', copy=True):
self.norm = norm
self.copy = copy
def fit(self, X, y=None):
"""Do nothing and return the estimator unchanged
This method is just there to implement the usual API and hence
work in pipelines.
"""
X = check_array(X, accept_sparse='csr')
return self
def transform(self, X, y=None, copy=None):
"""Scale each non zero row of X to unit norm
Parameters
----------
X : array or scipy.sparse matrix with shape [n_samples, n_features]
The data to normalize, row by row. scipy.sparse matrices should be
in CSR format to avoid an un-necessary copy.
"""
copy = copy if copy is not None else self.copy
X = check_array(X, accept_sparse='csr')
return normalize(X, norm=self.norm, axis=1, copy=copy)
def binarize(X, threshold=0.0, copy=True):
"""Boolean thresholding of array-like or scipy.sparse matrix
Read more in the :ref:`User Guide <preprocessing_binarization>`.
Parameters
----------
X : array or scipy.sparse matrix with shape [n_samples, n_features]
The data to binarize, element by element.
scipy.sparse matrices should be in CSR or CSC format to avoid an
un-necessary copy.
threshold : float, optional (0.0 by default)
Feature values below or equal to this are replaced by 0, above it by 1.
Threshold may not be less than 0 for operations on sparse matrices.
copy : boolean, optional, default True
set to False to perform inplace binarization and avoid a copy
(if the input is already a numpy array or a scipy.sparse CSR / CSC
matrix).
See also
--------
:class:`sklearn.preprocessing.Binarizer` to perform binarization
using the ``Transformer`` API (e.g. as part of a preprocessing
:class:`sklearn.pipeline.Pipeline`)
"""
X = check_array(X, accept_sparse=['csr', 'csc'], copy=copy)
if sparse.issparse(X):
if threshold < 0:
raise ValueError('Cannot binarize a sparse matrix with threshold '
'< 0')
cond = X.data > threshold
not_cond = np.logical_not(cond)
X.data[cond] = 1
X.data[not_cond] = 0
X.eliminate_zeros()
else:
cond = X > threshold
not_cond = np.logical_not(cond)
X[cond] = 1
X[not_cond] = 0
return X
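# Illustrative usage of binarize() (sketch added for exposition, not part of
# the original module): values strictly above the threshold map to 1, values
# at or below it map to 0.
#
#     >>> binarize(np.array([[1., -1., 2.], [2., 0., 0.]]), threshold=0.0)
#     array([[ 1.,  0.,  1.],
#            [ 1.,  0.,  0.]])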
class Binarizer(BaseEstimator, TransformerMixin):
"""Binarize data (set feature values to 0 or 1) according to a threshold
Values greater than the threshold map to 1, while values less than
or equal to the threshold map to 0. With the default threshold of 0,
only positive values map to 1.
Binarization is a common operation on text count data where the
analyst can decide to only consider the presence or absence of a
feature rather than a quantified number of occurrences for instance.
It can also be used as a pre-processing step for estimators that
consider boolean random variables (e.g. modelled using the Bernoulli
distribution in a Bayesian setting).
Read more in the :ref:`User Guide <preprocessing_binarization>`.
Parameters
----------
threshold : float, optional (0.0 by default)
Feature values below or equal to this are replaced by 0, above it by 1.
Threshold may not be less than 0 for operations on sparse matrices.
copy : boolean, optional, default True
set to False to perform inplace binarization and avoid a copy (if
the input is already a numpy array or a scipy.sparse CSR matrix).
Notes
-----
If the input is a sparse matrix, only the non-zero values are subject
to update by the Binarizer class.
This estimator is stateless (besides constructor parameters), the
fit method does nothing but is useful when used in a pipeline.
"""
def __init__(self, threshold=0.0, copy=True):
self.threshold = threshold
self.copy = copy
def fit(self, X, y=None):
"""Do nothing and return the estimator unchanged
This method is just there to implement the usual API and hence
work in pipelines.
"""
check_array(X, accept_sparse='csr')
return self
def transform(self, X, y=None, copy=None):
"""Binarize each element of X
Parameters
----------
X : array or scipy.sparse matrix with shape [n_samples, n_features]
The data to binarize, element by element.
scipy.sparse matrices should be in CSR format to avoid an
un-necessary copy.
"""
copy = copy if copy is not None else self.copy
return binarize(X, threshold=self.threshold, copy=copy)
class KernelCenterer(BaseEstimator, TransformerMixin):
"""Center a kernel matrix
Let K(x, z) be a kernel defined by phi(x)^T phi(z), where phi is a
function mapping x to a Hilbert space. KernelCenterer centers (i.e.,
normalizes to have zero mean) the data without explicitly computing phi(x).
It is equivalent to centering phi(x) with
sklearn.preprocessing.StandardScaler(with_std=False).
Read more in the :ref:`User Guide <kernel_centering>`.
"""
def fit(self, K, y=None):
"""Fit KernelCenterer
Parameters
----------
K : numpy array of shape [n_samples, n_samples]
Kernel matrix.
Returns
-------
self : returns an instance of self.
"""
K = check_array(K)
n_samples = K.shape[0]
self.K_fit_rows_ = np.sum(K, axis=0) / n_samples
self.K_fit_all_ = self.K_fit_rows_.sum() / n_samples
return self
def transform(self, K, y=None, copy=True):
"""Center kernel matrix.
Parameters
----------
K : numpy array of shape [n_samples1, n_samples2]
Kernel matrix.
copy : boolean, optional, default True
Set to False to perform inplace computation.
Returns
-------
K_new : numpy array of shape [n_samples1, n_samples2]
"""
check_is_fitted(self, 'K_fit_all_')
K = check_array(K)
if copy:
K = K.copy()
K_pred_cols = (np.sum(K, axis=1) /
self.K_fit_rows_.shape[0])[:, np.newaxis]
K -= self.K_fit_rows_
K -= K_pred_cols
K += self.K_fit_all_
return K
def add_dummy_feature(X, value=1.0):
"""Augment dataset with an additional dummy feature.
This is useful for fitting an intercept term with implementations which
cannot otherwise fit it directly.
Parameters
----------
X : array or scipy.sparse matrix with shape [n_samples, n_features]
Data.
value : float
Value to use for the dummy feature.
Returns
-------
X : array or scipy.sparse matrix with shape [n_samples, n_features + 1]
Same data with dummy feature added as first column.
Examples
--------
>>> from sklearn.preprocessing import add_dummy_feature
>>> add_dummy_feature([[0, 1], [1, 0]])
array([[ 1., 0., 1.],
[ 1., 1., 0.]])
"""
X = check_array(X, accept_sparse=['csc', 'csr', 'coo'])
n_samples, n_features = X.shape
shape = (n_samples, n_features + 1)
if sparse.issparse(X):
if sparse.isspmatrix_coo(X):
# Shift columns to the right.
col = X.col + 1
# Column indices of dummy feature are 0 everywhere.
col = np.concatenate((np.zeros(n_samples), col))
# Row indices of dummy feature are 0, ..., n_samples-1.
row = np.concatenate((np.arange(n_samples), X.row))
# Prepend the dummy feature n_samples times.
data = np.concatenate((np.ones(n_samples) * value, X.data))
return sparse.coo_matrix((data, (row, col)), shape)
elif sparse.isspmatrix_csc(X):
# Shift index pointers since we need to add n_samples elements.
indptr = X.indptr + n_samples
# indptr[0] must be 0.
indptr = np.concatenate((np.array([0]), indptr))
# Row indices of dummy feature are 0, ..., n_samples-1.
indices = np.concatenate((np.arange(n_samples), X.indices))
# Prepend the dummy feature n_samples times.
data = np.concatenate((np.ones(n_samples) * value, X.data))
return sparse.csc_matrix((data, indices, indptr), shape)
else:
klass = X.__class__
return klass(add_dummy_feature(X.tocoo(), value))
else:
return np.hstack((np.ones((n_samples, 1)) * value, X))
def _transform_selected(X, transform, selected="all", copy=True):
"""Apply a transform function to portion of selected features
Parameters
----------
X : array-like or sparse matrix, shape=(n_samples, n_features)
Dense array or sparse matrix.
transform : callable
A callable transform(X) -> X_transformed
copy : boolean, optional
Copy X even if it could be avoided.
selected: "all" or array of indices or mask
Specify which features to apply the transform to.
Returns
-------
X : array or sparse matrix, shape=(n_samples, n_features_new)
"""
if selected == "all":
return transform(X)
X = check_array(X, accept_sparse='csc', copy=copy)
if len(selected) == 0:
return X
n_features = X.shape[1]
ind = np.arange(n_features)
sel = np.zeros(n_features, dtype=bool)
sel[np.asarray(selected)] = True
not_sel = np.logical_not(sel)
n_selected = np.sum(sel)
if n_selected == 0:
# No features selected.
return X
elif n_selected == n_features:
# All features selected.
return transform(X)
else:
X_sel = transform(X[:, ind[sel]])
X_not_sel = X[:, ind[not_sel]]
if sparse.issparse(X_sel) or sparse.issparse(X_not_sel):
return sparse.hstack((X_sel, X_not_sel))
else:
return np.hstack((X_sel, X_not_sel))
class OneHotEncoder(BaseEstimator, TransformerMixin):
"""Encode categorical integer features using a one-hot aka one-of-K scheme.
The input to this transformer should be a matrix of integers, denoting
the values taken on by categorical (discrete) features. The output will be
a sparse matrix where each column corresponds to one possible value of one
feature. It is assumed that input features take on values in the range
[0, n_values).
This encoding is needed for feeding categorical data to many scikit-learn
estimators, notably linear models and SVMs with the standard kernels.
Read more in the :ref:`User Guide <preprocessing_categorical_features>`.
Parameters
----------
n_values : 'auto', int or array of ints
Number of values per feature.
- 'auto' : determine value range from training data.
- int : maximum value for all features.
- array : maximum value per feature.
categorical_features: "all" or array of indices or mask
Specify what features are treated as categorical.
- 'all' (default): All features are treated as categorical.
- array of indices: Array of categorical feature indices.
- mask: Array of length n_features and with dtype=bool.
Non-categorical features are always stacked to the right of the matrix.
dtype : number type, default=np.float
Desired dtype of output.
sparse : boolean, default=True
Will return sparse matrix if set True else will return an array.
handle_unknown : str, 'error' or 'ignore'
Whether to raise an error or ignore if an unknown categorical feature is
present during transform.
Attributes
----------
active_features_ : array
Indices for active features, meaning values that actually occur
in the training set. Only available when n_values is ``'auto'``.
feature_indices_ : array of shape (n_features,)
Indices to feature ranges.
Feature ``i`` in the original data is mapped to features
from ``feature_indices_[i]`` to ``feature_indices_[i+1]``
(and then potentially masked by `active_features_` afterwards)
n_values_ : array of shape (n_features,)
Maximum number of values per feature.
Examples
--------
Given a dataset with three features and two samples, we let the encoder
find the maximum value per feature and transform the data to a binary
one-hot encoding.
>>> from sklearn.preprocessing import OneHotEncoder
>>> enc = OneHotEncoder()
>>> enc.fit([[0, 0, 3], [1, 1, 0], [0, 2, 1], \
[1, 0, 2]]) # doctest: +ELLIPSIS
OneHotEncoder(categorical_features='all', dtype=<... 'float'>,
handle_unknown='error', n_values='auto', sparse=True)
>>> enc.n_values_
array([2, 3, 4])
>>> enc.feature_indices_
array([0, 2, 5, 9])
>>> enc.transform([[0, 1, 1]]).toarray()
array([[ 1., 0., 0., 1., 0., 0., 1., 0., 0.]])
See also
--------
sklearn.feature_extraction.DictVectorizer : performs a one-hot encoding of
dictionary items (also handles string-valued features).
sklearn.feature_extraction.FeatureHasher : performs an approximate one-hot
encoding of dictionary items or strings.
"""
def __init__(self, n_values="auto", categorical_features="all",
dtype=np.float, sparse=True, handle_unknown='error'):
self.n_values = n_values
self.categorical_features = categorical_features
self.dtype = dtype
self.sparse = sparse
self.handle_unknown = handle_unknown
def fit(self, X, y=None):
"""Fit OneHotEncoder to X.
Parameters
----------
X : array-like, shape=(n_samples, n_feature)
Input array of type int.
Returns
-------
self
"""
self.fit_transform(X)
return self
def _fit_transform(self, X):
"""Assumes X contains only categorical features."""
X = check_array(X, dtype=np.int)
if np.any(X < 0):
raise ValueError("X needs to contain only non-negative integers.")
n_samples, n_features = X.shape
if self.n_values == 'auto':
n_values = np.max(X, axis=0) + 1
elif isinstance(self.n_values, numbers.Integral):
if (np.max(X, axis=0) >= self.n_values).any():
raise ValueError("Feature out of bounds for n_values=%d"
% self.n_values)
n_values = np.empty(n_features, dtype=np.int)
n_values.fill(self.n_values)
else:
try:
n_values = np.asarray(self.n_values, dtype=int)
except (ValueError, TypeError):
raise TypeError("Wrong type for parameter `n_values`. Expected"
" 'auto', int or array of ints, got %r"
% type(self.n_values))
if n_values.ndim < 1 or n_values.shape[0] != X.shape[1]:
raise ValueError("Shape mismatch: if n_values is an array,"
" it has to be of shape (n_features,).")
self.n_values_ = n_values
n_values = np.hstack([[0], n_values])
indices = np.cumsum(n_values)
self.feature_indices_ = indices
column_indices = (X + indices[:-1]).ravel()
row_indices = np.repeat(np.arange(n_samples, dtype=np.int32),
n_features)
data = np.ones(n_samples * n_features)
out = sparse.coo_matrix((data, (row_indices, column_indices)),
shape=(n_samples, indices[-1]),
dtype=self.dtype).tocsr()
if self.n_values == 'auto':
mask = np.array(out.sum(axis=0)).ravel() != 0
active_features = np.where(mask)[0]
out = out[:, active_features]
self.active_features_ = active_features
return out if self.sparse else out.toarray()
def fit_transform(self, X, y=None):
"""Fit OneHotEncoder to X, then transform X.
Equivalent to self.fit(X).transform(X), but more convenient and more
efficient. See fit for the parameters, transform for the return value.
"""
return _transform_selected(X, self._fit_transform,
self.categorical_features, copy=True)
def _transform(self, X):
"""Assumes X contains only categorical features."""
X = check_array(X, dtype=np.int)
if np.any(X < 0):
raise ValueError("X needs to contain only non-negative integers.")
n_samples, n_features = X.shape
indices = self.feature_indices_
if n_features != indices.shape[0] - 1:
raise ValueError("X has different shape than during fitting."
" Expected %d, got %d."
% (indices.shape[0] - 1, n_features))
# We use only those categorical features of X that are known from fit,
# i.e. less than n_values_, using the mask.
# This means, if self.handle_unknown is "ignore", the row_indices and
# col_indices corresponding to the unknown categorical feature are
# ignored.
mask = (X < self.n_values_).ravel()
if np.any(~mask):
if self.handle_unknown not in ['error', 'ignore']:
raise ValueError("handle_unknown should be either error or "
"unknown got %s" % self.handle_unknown)
if self.handle_unknown == 'error':
raise ValueError("unknown categorical feature present %s "
"during transform." % X[~mask])
column_indices = (X + indices[:-1]).ravel()[mask]
row_indices = np.repeat(np.arange(n_samples, dtype=np.int32),
n_features)[mask]
data = np.ones(np.sum(mask))
out = sparse.coo_matrix((data, (row_indices, column_indices)),
shape=(n_samples, indices[-1]),
dtype=self.dtype).tocsr()
if self.n_values == 'auto':
out = out[:, self.active_features_]
return out if self.sparse else out.toarray()
def transform(self, X):
"""Transform X using one-hot encoding.
Parameters
----------
X : array-like, shape=(n_samples, n_features)
Input array of type int.
Returns
-------
X_out : sparse matrix if sparse=True else a 2-d array, dtype=int
Transformed input.
"""
return _transform_selected(X, self._transform,
self.categorical_features, copy=True)
|
anurag313/scikit-learn
|
sklearn/preprocessing/data.py
|
Python
|
bsd-3-clause
| 57,350
|
[
"Gaussian"
] |
38723d19c89d42178dd66f0d12788b81851cc395d9e80e05e51ba2f0c457572f
|
#!/usr/bin/env python3
__author__ = 'Sushain K. Cherivirala, Kevin Brubeck Unhammer'
__copyright__ = 'Copyright 2014--2021, Sushain K. Cherivirala, Kevin Brubeck Unhammer'
__credits__ = ['Sushain K. Cherivirala', 'Kevin Brubeck Unhammer', 'Jonathan North Washington', 'Shardul Chiplunkar', 'Daniel Swanson']
__license__ = 'GPLv3+'
__status__ = 'Production'
__version__ = '3.0.1'
import argparse
import base64
import getpass
import json
import os
import re
import shlex
import stat
import subprocess
import sys
import urllib.error
import urllib.request
import zlib
if False: # for mypy
import http.client # noqa: F401
from typing import Dict, List, Optional, Tuple # noqa: F401
# DO NOT MODIFY, USE `make` which calls `./updateBoostraper.py`
any_module_files = {} # noqa: E501
lttoolbox_language_module_files = {} # noqa: E501
hfst_language_module_files = {} # noqa: E501
bilingual_module_files = {} # noqa: E501
english_lang_names = {} # noqa: E501
iso639_codes = {} # noqa: E501
# DO NOT MODIFY, USE `make` which calls `./updateBoostraper.py`
organization_name = 'apertium'
default_prefix = 'apertium'
default_email = 'apertium-stuff@lists.sourceforge.net'
def get_lang_name(code): # type: (str) -> str
code = iso639_codes[code] if len(code) > 2 and code in iso639_codes else code
if code in english_lang_names:
return english_lang_names[code]
else:
sys.stdout.write('Unable to find English language name for %s, using ISO code instead.\n' % code)
return code
def read_manifest(files, conditionals): # type: (Dict[str, bytes], List[str]) -> None
manifest_byt = zlib.decompress(base64.b85decode(files['MANIFEST.txt']))
manifest_txt = manifest_byt.decode('utf-8')
manifest_ls = make_replacements(manifest_txt, {}, conditionals).splitlines()
for f in list(files.keys()):
if f not in any_module_files and f not in manifest_ls:
del files[f]
def init_pair(args, email): # type: (argparse.Namespace, str) -> Tuple[Dict[str, bytes], Dict[str, str], List[str]]
language_code_1, language_code_2 = args.name.split('-')
replacements = {
'languageCode1': language_code_1,
'languageCode2': language_code_2,
'languageName1': get_lang_name(language_code_1),
'languageName2': get_lang_name(language_code_2),
'email': email,
}
conditionals = []
if args.analyser == 'giella' or args.analyser1 == 'giella':
conditionals.append('giella1')
elif args.analyser in ['hfst', 'lexd'] or args.analyser1 in ['hfst', 'lexd']:
conditionals.append('hfst1')
else:
conditionals.append('lttoolbox1')
if args.analyser == 'giella' or args.analyser2 == 'giella':
conditionals.append('giella2')
elif args.analyser in ['hfst', 'lexd'] or args.analyser2 in ['hfst', 'lexd']:
conditionals.append('hfst2')
else:
conditionals.append('lttoolbox2')
if conditionals != ['lttoolbox1', 'lttoolbox2']:
conditionals.append('hfst')
conditionals += args.pair_conds or []
conditionals.append(args.transfer)
if not args.no_prob1:
conditionals.append('prob1')
if not args.no_prob2:
conditionals.append('prob2')
if not args.no_rlx1:
conditionals.append('rlx1')
if not args.no_rlx2:
conditionals.append('rlx2')
if not args.no_pgen1:
conditionals.append('pgen1')
if not args.no_pgen2:
conditionals.append('pgen2')
files = dict(bilingual_module_files, **any_module_files)
read_manifest(files, conditionals)
return files, replacements, conditionals
def init_lang_module(args, email): # type: (argparse.Namespace, str) -> Tuple[Dict[str, bytes], Dict[str, str], List[str]]
replacements = {
'languageCode': args.name,
'languageName': get_lang_name(args.name),
'email': email,
}
conditionals = args.lang_conds or []
if args.analyser in ['lt', 'lttoolbox']:
files = dict(lttoolbox_language_module_files, **any_module_files)
elif args.analyser in ['hfst', 'lexd']:
if args.analyser == 'lexd':
conditionals.append('lexd')
files = dict(hfst_language_module_files, **any_module_files)
read_manifest(files, conditionals)
else:
raise Exception('Unrecognized analyser: %s' % args.analyser)
return files, replacements, conditionals
def make_replacements(s, replacements, conditionals): # type: (str, Dict[str, str], List[str]) -> str
for _ in range(2):
s = re.sub(r'{{if_(\w+)[^\n]*(.*?)\nif_\1}}', lambda x: x.group(2) if x.group(1) in conditionals else '', s, flags=re.DOTALL)
s = re.sub(r'{{ifnot_(\w+)[^\n]*(.*?)\nifnot_\1}}', lambda x: x.group(2) if x.group(1) not in conditionals else '', s, flags=re.DOTALL)
for replacement_name, replacement_value in replacements.items():
s = s.replace('{{%s}}' % replacement_name, replacement_value)
return s
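# Illustrative sketch of the template syntax (hypothetical input string, not
# one of the bundled files): a conditional block is kept only when its name is
# in `conditionals`, and {{name}} placeholders are substituted afterwards.
#   s = '{{if_hfst1}}\nhfst rules\nif_hfst1}}\n{{languageCode}}'
#   make_replacements(s, {'languageCode': 'eng'}, ['hfst1'])  # -> '\nhfst rules\neng'
#   make_replacements(s, {'languageCode': 'eng'}, [])         # -> '\neng'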
def make_all_replacements(destination, files, replacements, conditionals): # type: (str, Dict[str, bytes], Dict[str, str], List[str]) -> None
for filename, encoded_file in files.items():
replacements_filename = make_replacements(filename, replacements, conditionals)
path = os.path.join(destination, replacements_filename)
folder = os.path.dirname(path)
if not os.path.isdir(folder):
os.mkdir(folder)
if os.path.exists(path):
backup = os.path.join(folder, '.bak')
if not os.path.isdir(backup):
os.mkdir(backup)
os.rename(path, os.path.join(backup, replacements_filename))
with open(path, 'wb') as f:
decomp = zlib.decompress(base64.b85decode(encoded_file))
try:
f.write(make_replacements(str(decomp, encoding='utf-8'), replacements, conditionals).encode('utf-8'))
except UnicodeDecodeError: # binary file
f.write(decomp)
def push_to_github(args, folder, username): # type: (argparse.Namespace, str, str) -> None
remote_name = 'origin'
repository_name = '{}-{}'.format(args.prefix, args.name)
if '-' in args.name:
code1, code2 = args.name.split('-')
description = 'Apertium translation pair for {} and {}'.format(get_lang_name(code1), get_lang_name(code2))
else:
description = 'Apertium linguistic data for {}'.format(get_lang_name(args.name))
def create_github_repository(): # type: () -> http.client.HTTPResponse
password = getpass.getpass(prompt='GitHub Password ({}): '.format(username))
data = bytes(json.dumps({
'name': repository_name,
'description': description,
}), encoding='utf-8')
req = urllib.request.Request('https://api.github.com/orgs/{}/repos'.format(organization_name), data=data)
credentials = '{}:{}'.format(username, password)
encoded_credentials = base64.b64encode(credentials.encode('ascii'))
req.add_header('Authorization', 'Basic {}'.format(encoded_credentials.decode('ascii')))
try:
response = urllib.request.urlopen(req)
print('Successfully created GitHub repository {}/{}.'.format(organization_name, repository_name))
return response # type: ignore
except urllib.error.HTTPError as e:
if e.getcode() == 401:
print('Authentication failed. Retrying...')
return create_github_repository()
else:
sys.stderr.write('Failed to create GitHub repository: {}.'.format(e))
sys.exit(-1)
response = create_github_repository()
body = json.loads(response.read().decode('utf-8'))
try:
remote_url = body['ssh_url']
subprocess.check_output(shlex.split('git remote add {} {}'.format(remote_name, remote_url)), cwd=args.destination, stderr=subprocess.STDOUT)
print('Added GitHub remote {}.'.format(remote_url))
except subprocess.CalledProcessError as e:
sys.stderr.write('Adding remote {} ({}) failed: {}'.format(remote_name, remote_url, e.output))
try:
subprocess.check_output(shlex.split('git push {} master'.format(remote_name)), cwd=args.destination, stderr=subprocess.STDOUT)
print('Pushed to GitHub. Visit your new repository at {}.'.format(body['html_url']))
except subprocess.CalledProcessError as e:
sys.stderr.write('Pushing to remote {} failed: {}'.format(remote_name, e.output))
def main(cli_args=None): # type: (Optional[List[str]]) -> None
parser = argparse.ArgumentParser(description='Bootstrap an Apertium language module/pair', allow_abbrev=True)
parser.add_argument('name', help='name of new Apertium language module/pair using ISO-639-3 language code(s)')
parser.add_argument('-d', '--destination', help='destination directory for new language module/pair (default: cwd)', default=os.getcwd())
parser.add_argument('-p', '--push-new-to-github', help='push newly created repository to incubator on the Apertium organisation on GitHub (use with -u)',
action='store_true', default=False)
parser.add_argument('--push-existing-to-github', '--pe', help='push existing repository to incubator on the Apertium organisation on GitHub', default=None)
parser.add_argument('-u', '--username', help='override GitHub username (for pushing repository to GitHub); otherwise git config is used', default=None)
parser.add_argument('--prefix', help='directory prefix (default: {})'.format(default_prefix), default=default_prefix)
parser.add_argument('-r', '--rebuild', help='construct module or pair with different features using existing files',
action='store_true', default=False)
parser.add_argument('-a', '--analyser', '--analysers', help='analyser to use for all languages', choices=['lt', 'lttoolbox', 'hfst', 'lexd', 'giella'],
default='lt')
parser.add_argument('--analyser1', '--a1', help='analyser to use for first language of pair', choices=['lt', 'lttoolbox', 'hfst', 'lexd', 'giella'],
default='lt')
parser.add_argument('--analyser2', '--a2', help='analyser to use for second language of pair', choices=['lt', 'lttoolbox', 'hfst', 'lexd', 'giella'],
default='lt')
parser.add_argument('-t', '--transfer', help='structural transfer module to use', choices=['chunk', 'rtx'], default='chunk')
rlx_prob_group1 = parser.add_mutually_exclusive_group()
rlx_prob_group1.add_argument('--no-rlx1', help='no .rlx present in first language of pair (only used for bilingual pairs)',
action='store_true', default=False)
rlx_prob_group1.add_argument('--no-prob1', help='no .prob present in first language of pair (only used for bilingual pairs)',
action='store_true', default=False)
rlx_prob_group2 = parser.add_mutually_exclusive_group()
rlx_prob_group2.add_argument('--no-prob2', help='no .prob present in second language of pair (only used for bilingual pairs)',
action='store_true', default=False)
rlx_prob_group2.add_argument('--no-rlx2', help='no .rlx present in second language of pair (only used for bilingual pairs)',
action='store_true', default=False)
parser.add_argument('--no-pgen1', help='no post-dix present in first language of pair (only used for bilingual pairs)', action='store_true', default=False)
parser.add_argument('--no-pgen2', help='no post-dix present in second language of pair (only used for bilingual pairs)', action='store_true', default=False)
parser.add_argument('--with-twoc', help='include .twoc file (only used for monolingual hfst modules)', action='append_const',
const='twoc', dest='lang_conds')
parser.add_argument('--with-lsx', '--with-separable', help='include apertium-separable .lsx files (only used for bilingual pairs)',
action='append_const', const='lsx', dest='pair_conds')
parser.add_argument('--with-spellrelax', help='include spellrelax file (only used for monolingual hfst modules)', action='append_const',
const='spellrelax', dest='lang_conds')
parser.add_argument('--with-anaphora', help='include anaphora resolution file (only used for bilingual pairs)', action='append_const',
const='anaphora', dest='pair_conds')
args = parser.parse_args(cli_args)
try:
email = subprocess.check_output(shlex.split('git config user.email')).decode('utf-8').strip()
except subprocess.CalledProcessError as e:
email = default_email
sys.stderr.write('Unable to get email, defaulting to %s: %s\n' % (email, str(e).strip()))
username = args.username or email
args.name = re.sub(r'^{}-'.format(re.escape(args.prefix)), '', args.name)
repository_name = '{}-{}'.format(args.prefix, args.name)
args.destination = os.path.join(args.destination, repository_name)
if args.push_existing_to_github:
if not os.path.isdir(args.push_existing_to_github):
parser.error('--push-existing-to-github requires an existing directory')
push_to_github(args, args.destination, username)
return
if '-' in args.name and args.name.count('-') == 1:
if args.lang_conds:
parser.error('--with-%s can only be used with monolingual modules' % args.lang_conds[0])
if (args.analyser == 'giella' or args.analyser1 == 'giella') and (args.no_rlx1 or args.no_prob1):
parser.error('--analyser=giella already provides the tagger, so --no-rlx1/--no-prob1 cannot be used')
if (args.analyser == 'giella' or args.analyser2 == 'giella') and (args.no_rlx2 or args.no_prob2):
parser.error('--analyser=giella already provides the tagger, so --no-rlx2/--no-prob2 cannot be used')
files, replacements, conditionals = init_pair(args, email)
elif '-' not in args.name:
if args.pair_conds:
parser.error('--with-%s can only be used with bilingual pairs' % args.pair_conds[0])
if args.lang_conds:
if args.analyser in ['lt', 'lttoolbox']:
parser.error('--with-%s can only be used with hfst modules' % args.lang_conds[0])
# this will have to be changed if we have options for lttoolbox modules
elif args.analyser == 'lexd' and 'twoc' in args.lang_conds:
parser.error('--analyser=lexd is not compatible with --with-twoc')
if args.analyser == 'giella':
parser.error('cannot generate Giella language modules')
files, replacements, conditionals = init_lang_module(args, email)
else:
parser.error('Invalid language module name: %s' % args.name)
if args.rebuild:
if not os.path.exists(args.destination):
sys.stderr.write('Directory {} does not exist, cannot rebuild, quitting.\n'.format(args.destination))
sys.exit(-1)
files_to_delete = []
for filename in files:
if filename in ['README', 'modes.xml', 'autogen.sh', 'configure.ac', 'Makefile.am']:
continue
if filename.endswith('.pc.in'):
continue
fname = make_replacements(filename, replacements, conditionals)
if os.path.exists(os.path.join(args.destination, fname)):
files_to_delete.append(filename)
for filename in files_to_delete:
del files[filename]
elif os.path.exists(args.destination):
sys.stderr.write('Directory {} already exists, quitting.\n'.format(args.destination))
sys.exit(-1)
else:
os.makedirs(args.destination)
make_all_replacements(args.destination, files, replacements, conditionals)
autogen_path = os.path.join(args.destination, 'autogen.sh')
os.chmod(autogen_path, os.stat(autogen_path).st_mode | stat.S_IEXEC)
try:
readme_path = os.path.join(args.destination, 'README')
if args.rebuild:
readme_md_path = os.path.join(args.destination, 'README.md')
if os.path.exists(readme_md_path):
os.remove(readme_md_path)
if os.path.exists(readme_path):
os.symlink('README', os.path.join(args.destination, 'README.md'))
except OSError as err: # e.g. on Windows without running as an admin
sys.stderr.write('Unable to create symlink from README.md -> README: {}\n'.format(err))
print('Successfully created %s.' % args.destination)
try:
subprocess.check_output(shlex.split('git init .'), cwd=args.destination, universal_newlines=True, stderr=subprocess.STDOUT)
print('Initialized git repository {}.'.format(repository_name))
except subprocess.CalledProcessError as e:
sys.stderr.write('Unable to initialize git repository: {}'.format(e.output))
sys.exit(-1)
try:
add_cmd = ['git', 'add', 'README.md']
for filename in files:
add_cmd.append(make_replacements(filename, replacements, conditionals))
subprocess.check_output(add_cmd, cwd=args.destination, universal_newlines=True, stderr=subprocess.STDOUT)
if not args.rebuild:
subprocess.check_output(shlex.split('git commit -m "Initial commit"'), cwd=args.destination, universal_newlines=True, stderr=subprocess.STDOUT)
print('Successfully added and committed files to git repository {}.'.format(repository_name))
else:
print('Successfully added updated files to git repository {}.'.format(repository_name))
except subprocess.CalledProcessError as e:
sys.stderr.write('Unable to add/commit files to git repository {}: {}'.format(repository_name, e.output))
sys.exit(-1)
if '-' not in args.name and not args.rebuild:
try:
subprocess.check_output('./autogen.sh', cwd=args.destination, universal_newlines=True, stderr=subprocess.STDOUT, shell=True)
print('Successfully configured Makefile.')
except subprocess.CalledProcessError as e:
sys.stderr.write('Unable to configure Makefile, you may be missing some of the required tools: {}'.format(e.output))
# don't exit here, since failing this doesn't interfere with pushing to github
if args.push_new_to_github:
push_to_github(args, args.destination, username)
else:
print('To push your new local repository to incubator in the {} organisation on GitHub:'.format(organization_name))
print('\tapertium-init.py --pe {} {}'.format(args.destination, repository_name))
if not args.rebuild:
print("""
The directory you just created includes a GPLv3-or-later license.
If you would like to license it differently, please adjust or replace the COPYING file accordingly.
Please note that code included in the Apertium project should be GPLv2-or-later-compatible.
""")
push_hook = os.path.join(args.destination, '.git/hooks/pre-push')
if not os.path.exists(push_hook):
with open(push_hook, 'w') as f:
f.write("""#!/bin/bash
todos=`grep "TODO" README | grep -v "TODO("`
if [ ! -z "$todos" ]; then
echo ""
echo "WARNING: You have unresolved TODOs in your README."
echo "You can resolve this message by replacing the TODOs with example"
echo "sentences or by assigning them like TODO(Albert)"
echo ""
fi
""")
os.chmod(push_hook, os.stat(push_hook).st_mode | stat.S_IEXEC)
if __name__ == '__main__':
main(sys.argv[1:])
|
goavki/bootstrap
|
main.py
|
Python
|
gpl-3.0
| 19,576
|
[
"VisIt"
] |
15aaff12d9e4743df2347a3dc6dac6af7227516fd56793ece45791f1e9174a06
|
# -*- coding: latin-1 -*-
import os
from random import randrange
print "Bienvenue au Casino LOL"
reponse_ok=False
argent=0
reponse=raw_input("Voulez vous jouer une partie ? Taper O pour Oui N pour Non. ")
while reponse_ok == False :
reponse_traiter= reponse.lower()
if reponse_traiter=="o":
print "la partie va commencer."
argent=1000
continuer_parti=True
reponse_ok = True
elif reponse_traiter=="n" :
print "D'accord au revoir"
continuer_parti=False
break
else:
reponse=raw_input("Votre choix n'est pas valide, taper O pour Oui N pour Non.")
reponse_ok=False
while continuer_parti==True :
nombre_choisi = -1
while nombre_choisi < 0 or nombre_choisi > 49:
nombre_choisi= raw_input("Veuillez choisir un nombre entre 0 et 50 : ")
try :
nombre_choisi=int(nombre_choisi)
except ValueError:
print "vous n'avez pas saisie de nombre"
nombre_choisi = -1
continue
if nombre_choisi < 0 :
print "le nombre choisi ne peux pas être négatif"
if nombre_choisi >49 :
print"Le nombre choisi ne peux pas être plus grand que 49"
la_mise = -1
while la_mise < 1 or la_mise > argent :
print "Vous avez ",argent," $."
la_mise = raw_input("Combien voulez-vous miser ? : ")
try :
la_mise=int(la_mise)
except ValueError:
print "vous n'avez pas saisie de nombre"
la_mise = -1
continue
if la_mise < 1 :
print "la mise minimum est de 1 $"
if la_mise > argent :
print"Tu ne peux pas miser autant le pauvre !!!"
numero_gagnant = randrange(50)
print"La roulette tourne... ... et arrête sur le chiffre", numero_gagnant
if numero_gagnant == nombre_choisi :
argent = argent + (la_mise*3)
print "Bravo !!! Vous avez Gagnez ",(la_mise*3)," $."
print "Vous avez maintenant", argent, " $."
elif numero_gagnant % 2 == nombre_choisi % 2 :
argent = argent + (la_mise*0.5)
print "Bravo !!! Vous avez Gagnez ",(la_mise*0.5)," $."
print "Vous avez maintenant", argent, " $."
else :
argent = argent - la_mise
print "HaHa tu as perdu sale loser !!! Vous avez perdu ",la_mise," $."
print "Vous avez maintenant", argent, " $."
reponse_ok = False
while reponse_ok == False :
reponse_continuer = raw_input ("Do you want to continue? (O or N): ")
if reponse_continuer.lower() == "o" :
reponse_ok = True
continuer_parti=True
elif reponse_continuer.lower() == "n" :
print "All right, goodbye. You finished the game with ", argent, " $."
continuer_parti=False
reponse_ok = True
else:
print "Your answer is not valid. Please try again."
reponse_ok = False
|
Jaypeto/Kzino
|
casino.py
|
Python
|
gpl-2.0
| 2,970
|
[
"CASINO"
] |
7e1a9f0c8a2de93a58d19f7a23b82df034f7c7efced373cc0804e5c53f5bdf9a
|
"""
Acceptance tests for Studio related to the acid xblock.
"""
from bok_choy.web_app_test import WebAppTest
from ...pages.studio.auto_auth import AutoAuthPage
from ...pages.studio.overview import CourseOutlinePage
from ...pages.xblock.acid import AcidView
from ...fixtures.course import CourseFixture, XBlockFixtureDesc
class XBlockAcidBase(WebAppTest):
"""
Base class for tests that verify that XBlock integration is working correctly
"""
__test__ = False
def setUp(self):
"""
Create a unique identifier for the course used in this test.
"""
# Ensure that the superclass sets up
super(XBlockAcidBase, self).setUp()
# Define a unique course identifier
self.course_info = {
'org': 'test_org',
'number': 'course_' + self.unique_id[:5],
'run': 'test_' + self.unique_id,
'display_name': 'Test Course ' + self.unique_id
}
self.outline = CourseOutlinePage(
self.browser,
self.course_info['org'],
self.course_info['number'],
self.course_info['run']
)
self.course_id = '{org}.{number}.{run}'.format(**self.course_info)
self.setup_fixtures()
self.auth_page = AutoAuthPage(
self.browser,
staff=False,
username=self.user.get('username'),
email=self.user.get('email'),
password=self.user.get('password')
)
self.auth_page.visit()
def validate_acid_block_preview(self, acid_block):
"""
Validate the Acid Block's preview
"""
self.assertTrue(acid_block.init_fn_passed)
self.assertTrue(acid_block.resource_url_passed)
self.assertTrue(acid_block.scope_passed('user_state'))
self.assertTrue(acid_block.scope_passed('user_state_summary'))
self.assertTrue(acid_block.scope_passed('preferences'))
self.assertTrue(acid_block.scope_passed('user_info'))
def test_acid_block_preview(self):
"""
Verify that all expected acid block tests pass in studio preview
"""
self.outline.visit()
subsection = self.outline.section('Test Section').subsection('Test Subsection')
unit = subsection.expand_subsection().unit('Test Unit').go_to()
acid_block = AcidView(self.browser, unit.xblocks[0].preview_selector)
self.validate_acid_block_preview(acid_block)
def test_acid_block_editor(self):
"""
Verify that all expected acid block tests pass in studio editor
"""
self.outline.visit()
subsection = self.outline.section('Test Section').subsection('Test Subsection')
unit = subsection.expand_subsection().unit('Test Unit').go_to()
acid_block = AcidView(self.browser, unit.xblocks[0].edit().editor_selector)
self.assertTrue(acid_block.init_fn_passed)
self.assertTrue(acid_block.resource_url_passed)
class XBlockAcidNoChildTest(XBlockAcidBase):
"""
Tests of an AcidBlock with no children
"""
__test__ = True
def setup_fixtures(self):
course_fix = CourseFixture(
self.course_info['org'],
self.course_info['number'],
self.course_info['run'],
self.course_info['display_name']
)
course_fix.add_children(
XBlockFixtureDesc('chapter', 'Test Section').add_children(
XBlockFixtureDesc('sequential', 'Test Subsection').add_children(
XBlockFixtureDesc('vertical', 'Test Unit').add_children(
XBlockFixtureDesc('acid', 'Acid Block')
)
)
)
).install()
self.user = course_fix.user
class XBlockAcidParentBase(XBlockAcidBase):
"""
Base class for tests that verify that parent XBlock integration is working correctly
"""
__test__ = False
def validate_acid_block_preview(self, acid_block):
super(XBlockAcidParentBase, self).validate_acid_block_preview(acid_block)
self.assertTrue(acid_block.child_tests_passed)
def test_acid_block_preview(self):
"""
Verify that all expected acid block tests pass in studio preview
"""
self.outline.visit()
subsection = self.outline.section('Test Section').subsection('Test Subsection')
unit = subsection.expand_subsection().unit('Test Unit').go_to()
container = unit.xblocks[0].go_to_container()
acid_block = AcidView(self.browser, container.xblocks[0].preview_selector)
self.validate_acid_block_preview(acid_block)
class XBlockAcidEmptyParentTest(XBlockAcidParentBase):
"""
Tests of an Acid parent block with no children
"""
__test__ = True
def setup_fixtures(self):
course_fix = CourseFixture(
self.course_info['org'],
self.course_info['number'],
self.course_info['run'],
self.course_info['display_name']
)
course_fix.add_children(
XBlockFixtureDesc('chapter', 'Test Section').add_children(
XBlockFixtureDesc('sequential', 'Test Subsection').add_children(
XBlockFixtureDesc('vertical', 'Test Unit').add_children(
XBlockFixtureDesc('acid_parent', 'Acid Parent Block').add_children(
)
)
)
)
).install()
self.user = course_fix.user
class XBlockAcidChildTest(XBlockAcidParentBase):
"""
Tests of an AcidBlock with children
"""
__test__ = True
def setup_fixtures(self):
course_fix = CourseFixture(
self.course_info['org'],
self.course_info['number'],
self.course_info['run'],
self.course_info['display_name']
)
course_fix.add_children(
XBlockFixtureDesc('chapter', 'Test Section').add_children(
XBlockFixtureDesc('sequential', 'Test Subsection').add_children(
XBlockFixtureDesc('vertical', 'Test Unit').add_children(
XBlockFixtureDesc('acid_parent', 'Acid Parent Block').add_children(
XBlockFixtureDesc('acid', 'First Acid Child', metadata={'name': 'first'}),
XBlockFixtureDesc('acid', 'Second Acid Child', metadata={'name': 'second'}),
XBlockFixtureDesc('html', 'Html Child', data="<html>Contents</html>"),
)
)
)
)
).install()
self.user = course_fix.user
def test_acid_block_preview(self):
super(XBlockAcidChildTest, self).test_acid_block_preview()
def test_acid_block_editor(self):
super(XBlockAcidChildTest, self).test_acid_block_editor()
|
ahmadiga/min_edx
|
common/test/acceptance/tests/studio/test_studio_acid_xblock.py
|
Python
|
agpl-3.0
| 6,909
|
[
"VisIt"
] |
a65e72a9726da64205eb1aa9ac59df09a6fa9f684ffdca90da08ac91f4989ed3
|
__author__ = 'twisa'
from bcd.entity import ENTITIES, Entity
from bcd.calibration import GaussianCalibrator, StudentTCalibrator
from bcd.discounter import SimpleDiscounter
from bcd.bcdpricer import BCDPricer
from numpy import array, arange
from math import ceil
from datetime import datetime
def get_no_of_periods(delta, maturityDate, effectiveDate):
return int(ceil((maturityDate - effectiveDate).days / (365 * delta)))
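# e.g. delta=0.25 (quarterly) from 2020-01-01 to 2025-01-01: 1827 days,
# 1827 / (365 * 0.25) = 20.02..., so ceil gives 21 periods.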
def get_calibrator(copula_type):
calib = None
if copula_type == 'gaussian':
calib = GaussianCalibrator()
else:
# Student-t copula -- TODO(Vamshi): verify these parameters
calib = StudentTCalibrator(10.0, 0.25)
return calib
def get_bcd(entities, copula_type, seniority, delta, is_premium_accrued, \
effective_date, maturity_date, recovery_rate, no_of_simulations):
# TODO: check whether a 'basis' input is still needed here; it appears unused.
marginals = []
ent_objs = []
pca = []
for t in entities:
e = Entity(ENTITIES[t],t)
e.calibrate()
ent_objs.append(e)
marginals.append(e.survivalDistribution)
pca.append(e.getGraphData())
calib = get_calibrator(copula_type)
pd = array([e.priceData() for e in ent_objs])
cop = calib.calibrate(pd)
print marginals
no_of_names = len(entities)
no_of_periods = get_no_of_periods(delta, maturity_date, effective_date)
discounter = SimpleDiscounter([0.5, 1.0, 2.0, 3.0, 4.0, 5.0], [0.9932727, 0.9858018, 0.9627129, 0.9285788, 0.8891939, 0.8474275])
## TODO: handle multiple seniorities
pricer = BCDPricer(no_of_names, seniority, delta, no_of_periods, recovery_rate, discounter, cop, marginals)
price, sims = pricer.price(no_of_simulations, is_premium_accrued)
return (price*100.0*100.0, sims, pca)
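# Illustrative call sketch (hypothetical entity codes -- ENTITIES must contain
# whatever names are passed in):
#   price, sims, pca = get_bcd(['GE', 'IBM', 'AIG'], 'gaussian', 1, 0.25, True,
#                              datetime(2008, 9, 22), datetime(2013, 9, 22),
#                              0.40, 10000)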
|
neotrinity/cqf
|
bcd/__init__.py
|
Python
|
mit
| 1,754
|
[
"Gaussian"
] |
295e5ef164ee3bd15f5b69e3ef34cf9e7140bf6ab3c93ae81291f6ee9da86c0d
|
# Copyright (c) 2011, Alex Krizhevsky (akrizhevsky@gmail.com)
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# - Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# - Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
# NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
# EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from math import exp
import sys
import ConfigParser as cfg
import os
import numpy as n
import numpy.random as nr
from math import ceil, floor
from ordereddict import OrderedDict
from os import linesep as NL
from options import OptionsParser
import re
class LayerParsingError(Exception):
pass
# A neuron that doesn't take parameters
class NeuronParser:
def __init__(self, type, func_str, uses_acts=True, uses_inputs=True):
self.type = type
self.func_str = func_str
self.uses_acts = uses_acts
self.uses_inputs = uses_inputs
def parse(self, type):
if type == self.type:
return {'type': self.type,
'params': {},
'usesActs': self.uses_acts,
'usesInputs': self.uses_inputs}
return None
# A neuron that takes parameters
class ParamNeuronParser(NeuronParser):
neuron_regex = re.compile(r'^\s*(\w+)\s*\[\s*(\w+(\s*,\w+)*)\s*\]\s*$')
def __init__(self, type, func_str, uses_acts=True, uses_inputs=True):
NeuronParser.__init__(self, type, func_str, uses_acts, uses_inputs)
m = self.neuron_regex.match(type)
self.base_type = m.group(1)
self.param_names = m.group(2).split(',')
assert len(set(self.param_names)) == len(self.param_names)
def parse(self, type):
m = re.match(r'^%s\s*\[([\d,\.\s\-e]*)\]\s*$' % self.base_type, type)
if m:
try:
param_vals = [float(v.strip()) for v in m.group(1).split(',')]
if len(param_vals) == len(self.param_names):
return {'type': self.base_type,
'params': dict(zip(self.param_names, param_vals)),
'usesActs': self.uses_acts,
'usesInputs': self.uses_inputs}
except (TypeError, ValueError):
pass
return None
class AbsTanhNeuronParser(ParamNeuronParser):
def __init__(self):
ParamNeuronParser.__init__(self, 'abstanh[a,b]', 'f(x) = a * |tanh(b * x)|')
def parse(self, type):
dic = ParamNeuronParser.parse(self, type)
# Make b positive, since abs(tanh(bx)) = abs(tanh(-bx)) and the C++ code
# assumes b is positive.
if dic:
dic['params']['b'] = abs(dic['params']['b'])
return dic
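# Illustrative sketch: given the parser above, a spec string like
#   'abstanh[1.0,2.0]'
# parses to {'type': 'abstanh', 'params': {'a': 1.0, 'b': 2.0}, ...};
# a negative b, e.g. 'abstanh[1.0,-2.0]', is flipped to b=2.0.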
# Subclass that throws more convnet-specific exceptions than the default
class MyConfigParser(cfg.SafeConfigParser):
def safe_get(self, section, option, f=cfg.SafeConfigParser.get, typestr=None, default=None):
try:
return f(self, section, option)
except cfg.NoOptionError, e:
if default is not None:
return default
raise LayerParsingError("Layer '%s': required parameter '%s' missing" % (section, option))
except ValueError, e:
if typestr is None:
raise e
raise LayerParsingError("Layer '%s': parameter '%s' must be %s" % (section, option, typestr))
def safe_get_list(self, section, option, f=str, typestr='strings', default=None):
v = self.safe_get(section, option, default=default)
if type(v) == list:
return v
try:
return [f(x.strip()) for x in v.split(',')]
except:
raise LayerParsingError("Layer '%s': parameter '%s' must be ','-delimited list of %s" % (section, option, typestr))
def safe_get_int(self, section, option, default=None):
return self.safe_get(section, option, f=cfg.SafeConfigParser.getint, typestr='int', default=default)
def safe_get_float(self, section, option, default=None):
return self.safe_get(section, option, f=cfg.SafeConfigParser.getfloat, typestr='float', default=default)
def safe_get_bool(self, section, option, default=None):
return self.safe_get(section, option, f=cfg.SafeConfigParser.getboolean, typestr='bool', default=default)
def safe_get_float_list(self, section, option, default=None):
return self.safe_get_list(section, option, float, typestr='floats', default=default)
def safe_get_int_list(self, section, option, default=None):
return self.safe_get_list(section, option, int, typestr='ints', default=default)
def safe_get_bool_list(self, section, option, default=None):
return self.safe_get_list(section, option, lambda x: x.lower() in ('true', '1'), typestr='bools', default=default)
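# Illustrative sketch (hypothetical parameter-file section; the option names
# match those read by WeightLayerParser.add_params below):
#   [fc10]
#   epsW=0.001
#   epsB=0.002
#   momW=0.9
#   momB=0.9
#   wc=0.004
# mcp.safe_get_float_list('fc10', 'epsW')  # -> [0.001]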
# A class that implements part of the interface of MyConfigParser
class FakeConfigParser(object):
def __init__(self, dic):
self.dic = dic
def safe_get(self, section, option, default=None):
return self.dic[option]
class LayerParser:
def __init__(self):
self.dic = {}
self.set_defaults()
# Post-processing step -- this is called after all layers have been initialized
def optimize(self, layers):
self.dic['actsTarget'] = -1
self.dic['actsGradTarget'] = -1
# Add parameters from layer parameter file
def add_params(self, mcp):
dic, name = self.dic, self.dic['name']
dic['dropout'] = 0.0
if name in mcp.sections():
dic['dropout'] = mcp.safe_get_float(name, 'dropout', default=0.0)
def init(self, dic):
self.dic = dic
return self
def set_defaults(self):
self.dic['outputs'] = 0
self.dic['parser'] = self
self.dic['requiresParams'] = False
# Does this layer use its own activity matrix
# for some purpose other than computing its output?
# Usually, this will only be true for layers that require their
# own activity matrix for gradient computations. For example, layers
# with logistic units must compute the gradient y * (1 - y), where y is
# the activity matrix.
#
# Layers that do not use their own activity matrix should advertise
# this, since this will enable memory-saving matrix re-use optimizations.
#
# The default value of this property is True, for safety purposes.
# If a layer advertises that it does not use its own activity matrix when
# in fact it does, bad things will happen.
self.dic['usesActs'] = True
# Does this layer use the activity matrices of its input layers
# for some purpose other than computing its output?
#
# Again true by default for safety
self.dic['usesInputs'] = True
# Force this layer to use its own activity gradient matrix,
# instead of borrowing one from one of its inputs.
#
# This should be true for layers where the mapping from output
# gradient to input gradient is non-elementwise.
self.dic['forceOwnActs'] = True
# Does this layer need the gradient at all?
# Should only be true for layers with parameters (weights).
self.dic['gradConsumer'] = False
def parse(self, name, mcp, prev_layers, model=None):
self.prev_layers = prev_layers
self.dic['name'] = name
self.dic['type'] = mcp.safe_get(name, 'type')
return self.dic
def verify_float_range(self, v, param_name, _min, _max):
self.verify_num_range(v, param_name, _min, _max, strconv=lambda x: '%.3f' % x)
def verify_num_range(self, v, param_name, _min, _max, strconv=lambda x:'%d' % x):
if type(v) == list:
for i,vv in enumerate(v):
self._verify_num_range(vv, param_name, _min, _max, i, strconv=strconv)
else:
self._verify_num_range(v, param_name, _min, _max, strconv=strconv)
def _verify_num_range(self, v, param_name, _min, _max, input=-1, strconv=lambda x:'%d' % x):
layer_name = self.dic['name'] if input < 0 else '%s[%d]' % (self.dic['name'], input)
if _min is not None and _max is not None and (v < _min or v > _max):
raise LayerParsingError("Layer '%s': parameter '%s' must be in the range %s-%s" % (layer_name, param_name, strconv(_min), strconv(_max)))
elif _min is not None and v < _min:
raise LayerParsingError("Layer '%s': parameter '%s' must be greater than or equal to %s" % (layer_name, param_name, strconv(_min)))
elif _max is not None and v > _max:
raise LayerParsingError("Layer '%s': parameter '%s' must be smaller than or equal to %s" % (layer_name, param_name, strconv(_max)))
def verify_divisible(self, value, div, value_name, div_name=None, input_idx=0):
layer_name = self.dic['name'] if len(self.dic['inputs']) == 0 else '%s[%d]' % (self.dic['name'], input_idx)
if value % div != 0:
raise LayerParsingError("Layer '%s': parameter '%s' must be divisible by %s" % (layer_name, value_name, str(div) if div_name is None else "'%s'" % div_name))
def verify_str_in(self, value, lst):
if value not in lst:
raise LayerParsingError("Layer '%s': parameter '%s' must be one of %s" % (self.dic['name'], value, ", ".join("'%s'" % s for s in lst)))
def verify_int_in(self, value, lst):
if value not in lst:
raise LayerParsingError("Layer '%s': parameter '%s' must be one of %s" % (self.dic['name'], value, ", ".join("'%d'" % s for s in lst)))
# This looks for neuron=x arguments in various layers, and creates
# separate layer definitions for them.
@staticmethod
def detach_neuron_layers(layers):
layers_new = []
for i, l in enumerate(layers):
layers_new += [l]
if l['type'] != 'neuron' and 'neuron' in l and l['neuron']:
NeuronLayerParser().detach_neuron_layer(i, layers, layers_new)
return layers_new
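# Illustrative sketch: a layer definition such as (assuming 'relu' is a
# registered neuron type)
#   [fc10]
#   type=fc
#   neuron=relu
# is split into the 'fc10' layer plus an implicit 'fc10_neuron' layer of type
# 'neuron' inserted directly after it, with all downstream references fixed up.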
@staticmethod
def parse_layers(layer_cfg_path, param_cfg_path, model, layers=[]):
try:
if not os.path.exists(layer_cfg_path):
raise LayerParsingError("Layer definition file '%s' does not exist" % layer_cfg_path)
if not os.path.exists(param_cfg_path):
raise LayerParsingError("Layer parameter file '%s' does not exist" % param_cfg_path)
if len(layers) == 0:
mcp = MyConfigParser(dict_type=OrderedDict)
mcp.read([layer_cfg_path])
for name in mcp.sections():
if not mcp.has_option(name, 'type'):
raise LayerParsingError("Layer '%s': no type given" % name)
ltype = mcp.safe_get(name, 'type')
if ltype not in layer_parsers:
raise LayerParsingError("Layer '%s': Unknown layer type: '%s'" % (name, ltype))
layers += [layer_parsers[ltype]().parse(name, mcp, layers, model)]
layers = LayerParser.detach_neuron_layers(layers)
for l in layers:
lp = layer_parsers[l['type']]()
l['parser'].optimize(layers)
del l['parser']
for l in layers:
if not l['type'].startswith('cost.'):
found = max(l['name'] in [layers[n]['name'] for n in l2['inputs']] for l2 in layers if 'inputs' in l2)
if not found:
raise LayerParsingError("Layer '%s' of type '%s' is unused" % (l['name'], l['type']))
mcp = MyConfigParser(dict_type=OrderedDict)
mcp.read([param_cfg_path])
for l in layers:
if not mcp.has_section(l['name']) and l['requiresParams']:
raise LayerParsingError("Layer '%s' of type '%s' requires extra parameters, but none given in file '%s'." % (l['name'], l['type'], param_cfg_path))
lp = layer_parsers[l['type']]().init(l)
lp.add_params(mcp)
lp.dic['conserveMem'] = model.op.get_value('conserve_mem')
except LayerParsingError, e:
print e
sys.exit(1)
return layers
@staticmethod
def register_layer_parser(ltype, cls):
if ltype in layer_parsers:
raise LayerParsingError("Layer type '%s' already registered" % ltype)
layer_parsers[ltype] = cls
# Any layer that takes an input (i.e. non-data layer)
class LayerWithInputParser(LayerParser):
def __init__(self, num_inputs=-1):
LayerParser.__init__(self)
self.num_inputs = num_inputs
def verify_num_params(self, params):
for param in params:
if len(self.dic[param]) != len(self.dic['inputs']):
raise LayerParsingError("Layer '%s': %s list length does not match number of inputs" % (self.dic['name'], param))
def optimize(self, layers):
LayerParser.optimize(self, layers)
dic = self.dic
# Check if I have an input that no one else uses.
if not dic['forceOwnActs']:
for i, inp in enumerate(dic['inputs']):
l = layers[inp]
if l['outputs'] == dic['outputs'] and sum('inputs' in ll and inp in ll['inputs'] for ll in layers) == 1:
# I can share my activity matrix with this layer
# if it does not use its activity matrix, and I
# do not need to remember my inputs.
if not l['usesActs'] and not dic['usesInputs']:
dic['actsTarget'] = i
# print "Layer '%s' sharing activity matrix with layer '%s'" % (dic['name'], l['name'])
# I can share my gradient matrix with this layer.
dic['actsGradTarget'] = i
# print "Layer '%s' sharing activity gradient matrix with layer '%s'" % (dic['name'], l['name'])
def parse(self, name, mcp, prev_layers, model=None):
dic = LayerParser.parse(self, name, mcp, prev_layers, model)
dic['inputs'] = [inp.strip() for inp in mcp.safe_get(name, 'inputs').split(',')]
prev_names = [p['name'] for p in prev_layers]
for inp in dic['inputs']:
if inp not in prev_names:
raise LayerParsingError("Layer '%s': input layer '%s' not defined" % (name, inp))
dic['inputs'] = [prev_names.index(inp) for inp in dic['inputs']]
dic['inputLayers'] = [prev_layers[inp] for inp in dic['inputs']]
for inp in dic['inputs']:
if prev_layers[inp]['outputs'] == 0:
raise LayerParsingError("Layer '%s': input layer '%s' does not produce any output" % (name, prev_names[inp]))
dic['numInputs'] = [prev_layers[i]['outputs'] for i in dic['inputs']]
# Layers can declare a neuron activation function to apply to their output, as a shortcut
# to avoid declaring a separate neuron layer above themselves.
dic['neuron'] = mcp.safe_get(name, 'neuron', default="")
if self.num_inputs > 0 and len(dic['numInputs']) != self.num_inputs:
raise LayerParsingError("Layer '%s': number of inputs must be %d", name, self.num_inputs)
# input_layers = [prev_layers[i] for i in dic['inputs']]
# dic['gradConsumer'] = any(l['gradConsumer'] for l in dic['inputLayers'])
# dic['usesActs'] = dic['gradConsumer'] # A conservative setting by default for layers with input
return dic
def verify_img_size(self):
dic = self.dic
if dic['numInputs'][0] % dic['imgPixels'] != 0 or dic['imgSize'] * dic['imgSize'] != dic['imgPixels']:
raise LayerParsingError("Layer '%s': has %-d dimensional input, not interpretable as %d-channel images" % (dic['name'], dic['numInputs'][0], dic['channels']))
@staticmethod
def grad_consumers_below(dic):
if dic['gradConsumer']:
return True
if 'inputLayers' in dic:
return any(LayerWithInputParser.grad_consumers_below(l) for l in dic['inputLayers'])
def verify_no_grads(self):
if LayerWithInputParser.grad_consumers_below(self.dic):
raise LayerParsingError("Layer '%s': layers of type '%s' cannot propagate gradient and must not be placed over layers with parameters." % (self.dic['name'], self.dic['type']))
class NailbedLayerParser(LayerWithInputParser):
def __init__(self):
LayerWithInputParser.__init__(self, num_inputs=1)
def parse(self, name, mcp, prev_layers, model=None):
dic = LayerWithInputParser.parse(self, name, mcp, prev_layers, model)
dic['forceOwnActs'] = False
dic['usesActs'] = False
dic['usesInputs'] = False
dic['channels'] = mcp.safe_get_int(name, 'channels')
dic['stride'] = mcp.safe_get_int(name, 'stride')
self.verify_num_range(dic['channels'], 'channels', 1, None)
# Computed values
dic['imgPixels'] = dic['numInputs'][0] / dic['channels']
dic['imgSize'] = int(n.sqrt(dic['imgPixels']))
dic['outputsX'] = (dic['imgSize'] + dic['stride'] - 1) / dic['stride']
dic['start'] = (dic['imgSize'] - dic['stride'] * (dic['outputsX'] - 1)) / 2
dic['outputs'] = dic['channels'] * dic['outputsX']**2
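# e.g. imgSize=32, stride=4 (Python 2 integer division):
#   outputsX = (32 + 4 - 1) / 4 = 8 and start = (32 - 4*7) / 2 = 2,
# so the layer keeps an 8x8 grid of samples per channel.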
self.verify_num_range(dic['outputsX'], 'outputsX', 0, None)
self.verify_img_size()
print "Initialized bed-of-nails layer '%s', producing %dx%d %d-channel output" % (name, dic['outputsX'], dic['outputsX'], dic['channels'])
return dic
class GaussianBlurLayerParser(LayerWithInputParser):
def __init__(self):
LayerWithInputParser.__init__(self, num_inputs=1)
def parse(self, name, mcp, prev_layers, model=None):
dic = LayerWithInputParser.parse(self, name, mcp, prev_layers, model)
dic['forceOwnActs'] = False
dic['usesActs'] = False
dic['usesInputs'] = False
dic['outputs'] = dic['numInputs'][0]
dic['channels'] = mcp.safe_get_int(name, 'channels')
dic['filterSize'] = mcp.safe_get_int(name, 'filterSize')
dic['stdev'] = mcp.safe_get_float(name, 'stdev')
self.verify_num_range(dic['channels'], 'channels', 1, None)
self.verify_int_in(dic['filterSize'], [3, 5, 7, 9])
# Computed values
dic['imgPixels'] = dic['numInputs'][0] / dic['channels']
dic['imgSize'] = int(n.sqrt(dic['imgPixels']))
dic['filter'] = n.array([exp(-(dic['filterSize']/2 - i)**2 / float(2 * dic['stdev']**2))
for i in xrange(dic['filterSize'])], dtype=n.float32).reshape(1, dic['filterSize'])
dic['filter'] /= dic['filter'].sum()
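# The kernel above samples a 1-D Gaussian, filter[i] ~ exp(-(c - i)**2 / (2 * stdev**2))
# with c = filterSize/2 (integer division), normalized to sum to 1; the blur is
# presumably applied separably (rows then columns) on the C++ side.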
self.verify_img_size()
if dic['filterSize'] > dic['imgSize']:
raise LayerParsingError("Later '%s': filter size (%d) must be smaller than image size (%d)." % (dic['name'], dic['filterSize'], dic['imgSize']))
print "Initialized Gaussian blur layer '%s', producing %dx%d %d-channel output" % (name, dic['imgSize'], dic['imgSize'], dic['channels'])
return dic
class ResizeLayerParser(LayerWithInputParser):
def __init__(self):
LayerWithInputParser.__init__(self, num_inputs=1)
def parse(self, name, mcp, prev_layers, model=None):
dic = LayerWithInputParser.parse(self, name, mcp, prev_layers, model)
dic['forceOwnActs'] = False
dic['usesActs'] = False
dic['usesInputs'] = False
dic['channels'] = mcp.safe_get_int(name, 'channels')
dic['imgPixels'] = dic['numInputs'][0] / dic['channels']
dic['imgSize'] = int(n.sqrt(dic['imgPixels']))
dic['scale'] = mcp.safe_get_float(name, 'scale')
dic['tgtSize'] = int(floor(dic['imgSize'] / dic['scale']))
dic['tgtPixels'] = dic['tgtSize']**2
self.verify_num_range(dic['channels'], 'channels', 1, None)
# Really not recommended to use this for such severe scalings
self.verify_float_range(dic['scale'], 'scale', 0.5, 2)
dic['outputs'] = dic['channels'] * dic['tgtPixels']
self.verify_img_size()
self.verify_no_grads()
print "Initialized resize layer '%s', producing %dx%d %d-channel output" % (name, dic['tgtSize'], dic['tgtSize'], dic['channels'])
return dic
class RandomScaleLayerParser(LayerWithInputParser):
def __init__(self):
LayerWithInputParser.__init__(self, num_inputs=1)
def parse(self, name, mcp, prev_layers, model=None):
dic = LayerWithInputParser.parse(self, name, mcp, prev_layers, model)
dic['forceOwnActs'] = False
dic['usesActs'] = False
dic['usesInputs'] = False
dic['channels'] = mcp.safe_get_int(name, 'channels')
self.verify_num_range(dic['channels'], 'channels', 1, None)
# Computed values
dic['imgPixels'] = dic['numInputs'][0] / dic['channels']
dic['imgSize'] = int(n.sqrt(dic['imgPixels']))
dic['maxScale'] = mcp.safe_get_float(name, 'maxScale')
dic['tgtSize'] = int(floor(dic['imgSize'] / dic['maxScale']))
dic['tgtPixels'] = dic['tgtSize']**2
self.verify_float_range(dic['maxScale'], 'maxScale', 1, 2)
dic['outputs'] = dic['channels'] * dic['tgtPixels']
self.verify_img_size()
self.verify_no_grads()
print "Initialized random scale layer '%s', producing %dx%d %d-channel output" % (name, dic['tgtSize'], dic['tgtSize'], dic['channels'])
return dic
class ColorTransformLayerParser(LayerWithInputParser):
def __init__(self):
LayerWithInputParser.__init__(self, num_inputs=1)
def parse(self, name, mcp, prev_layers, model=None):
dic = LayerWithInputParser.parse(self, name, mcp, prev_layers, model)
dic['forceOwnActs'] = False
dic['usesActs'] = False
dic['usesInputs'] = False
# Computed values
dic['imgPixels'] = dic['numInputs'][0] / 3
dic['imgSize'] = int(n.sqrt(dic['imgPixels']))
dic['channels'] = 3
dic['outputs'] = dic['numInputs'][0]
self.verify_img_size()
self.verify_no_grads()
return dic
class RGBToYUVLayerParser(ColorTransformLayerParser):
def __init__(self):
ColorTransformLayerParser.__init__(self)
def parse(self, name, mcp, prev_layers, model=None):
dic = ColorTransformLayerParser.parse(self, name, mcp, prev_layers, model)
print "Initialized RGB --> YUV layer '%s', producing %dx%d %d-channel output" % (name, dic['imgSize'], dic['imgSize'], dic['channels'])
return dic
class RGBToLABLayerParser(ColorTransformLayerParser):
def __init__(self):
ColorTransformLayerParser.__init__(self)
def parse(self, name, mcp, prev_layers, model=None):
dic = ColorTransformLayerParser.parse(self, name, mcp, prev_layers, model)
dic['center'] = mcp.safe_get_bool(name, 'center', default=False)
print "Initialized RGB --> LAB layer '%s', producing %dx%d %d-channel output" % (name, dic['imgSize'], dic['imgSize'], dic['channels'])
return dic
class NeuronLayerParser(LayerWithInputParser):
def __init__(self):
LayerWithInputParser.__init__(self, num_inputs=1)
@staticmethod
def get_unused_layer_name(layers, wish):
layer_names = set([l['name'] for l in layers])
if wish not in layer_names:
return wish
for i in xrange(1, 100):
name = '%s.%d' % (wish, i)
if name not in layer_names:
return name
raise LayerParsingError("This is insane.")
def parse_neuron(self, neuron_str):
for n in neuron_parsers:
p = n.parse(neuron_str)
if p: # Successfully parsed neuron, return it
self.dic['neuron'] = p
self.dic['usesActs'] = self.dic['neuron']['usesActs']
self.dic['usesInputs'] = self.dic['neuron']['usesInputs']
return
# Could not parse neuron
# Print available neuron types
colnames = ['Neuron type', 'Function']
m = max(len(colnames[0]), OptionsParser._longest_value(neuron_parsers, key=lambda x:x.type)) + 2
ntypes = [OptionsParser._bold(colnames[0].ljust(m))] + [n.type.ljust(m) for n in neuron_parsers]
fnames = [OptionsParser._bold(colnames[1])] + [n.func_str for n in neuron_parsers]
usage_lines = NL.join(ntype + fname for ntype,fname in zip(ntypes, fnames))
raise LayerParsingError("Layer '%s': unable to parse neuron type '%s'. Valid neuron types: %sWhere neurons have parameters, they must be floats." % (self.dic['name'], neuron_str, NL + usage_lines + NL))
def detach_neuron_layer(self, idx, layers, layers_new):
dic = self.dic
self.set_defaults()
dic['name'] = NeuronLayerParser.get_unused_layer_name(layers, '%s_neuron' % layers[idx]['name'])
dic['type'] = 'neuron'
dic['inputs'] = layers[idx]['name']
dic['neuron'] = layers[idx]['neuron']
dic = self.parse(dic['name'], FakeConfigParser(dic), layers_new)
# Link upper layers to this new one
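# Index bookkeeping: the original layer now sits at position len(layers_new)-1
# and the neuron layer will be appended right after it, so 'inputs' references
# to that position or beyond are bumped by one (which also redirects consumers
# of the original layer to the new neuron layer), while weight-sharing indices
# are bumped only for strictly later positions, so they keep pointing at the
# original weight owner.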
for l in layers[idx+1:]:
if 'inputs' in l:
l['inputs'] = [i + (i >= len(layers_new) - 1) for i in l['inputs']]
if 'weightSourceLayerIndices' in l:
l['weightSourceLayerIndices'] = [i + (i >= len(layers_new)) for i in l['weightSourceLayerIndices']]
layers_new += [dic]
# print "Initialized implicit neuron layer '%s', producing %d outputs" % (dic['name'], dic['outputs'])
def parse(self, name, mcp, prev_layers, model=None):
dic = LayerWithInputParser.parse(self, name, mcp, prev_layers, model)
dic['outputs'] = dic['numInputs'][0]
self.parse_neuron(dic['neuron'])
dic['forceOwnActs'] = False
print "Initialized neuron layer '%s', producing %d outputs" % (name, dic['outputs'])
return dic
class EltwiseSumLayerParser(LayerWithInputParser):
def __init__(self):
LayerWithInputParser.__init__(self)
def parse(self, name, mcp, prev_layers, model):
dic = LayerWithInputParser.parse(self, name, mcp, prev_layers, model)
if len(set(dic['numInputs'])) != 1:
raise LayerParsingError("Layer '%s': all inputs must have the same dimensionality. Got dimensionalities: %s" % (name, ", ".join(str(s) for s in dic['numInputs'])))
dic['outputs'] = dic['numInputs'][0]
dic['usesInputs'] = False
dic['usesActs'] = False
dic['forceOwnActs'] = False
dic['coeffs'] = mcp.safe_get_float_list(name, 'coeffs', default=[1.0] * len(dic['inputs']))
print "Initialized elementwise sum layer '%s', producing %d outputs" % (name, dic['outputs'])
return dic
class EltwiseMaxLayerParser(LayerWithInputParser):
def __init__(self):
LayerWithInputParser.__init__(self)
def parse(self, name, mcp, prev_layers, model):
dic = LayerWithInputParser.parse(self, name, mcp, prev_layers, model)
if len(dic['inputs']) < 2:
raise LayerParsingError("Layer '%s': elementwise max layer must have at least 2 inputs, got %d." % (name, len(dic['inputs'])))
if len(set(dic['numInputs'])) != 1:
raise LayerParsingError("Layer '%s': all inputs must have the same dimensionality. Got dimensionalities: %s" % (name, ", ".join(str(s) for s in dic['numInputs'])))
dic['outputs'] = dic['numInputs'][0]
print "Initialized elementwise max layer '%s', producing %d outputs" % (name, dic['outputs'])
return dic
class WeightLayerParser(LayerWithInputParser):
LAYER_PAT = re.compile(r'^\s*([^\s\[]+)(?:\[(\d+)\])?\s*$') # matches things like layername[5], etc
def __init__(self):
LayerWithInputParser.__init__(self)
@staticmethod
def get_layer_name(name_str):
m = WeightLayerParser.LAYER_PAT.match(name_str)
if not m:
return None
return m.group(1), m.group(2)
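# e.g. get_layer_name('conv1[2]') -> ('conv1', '2'),
#      get_layer_name('conv1')    -> ('conv1', None),
#      get_layer_name('bad name') -> None (internal whitespace is not allowed).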
def add_params(self, mcp):
LayerWithInputParser.add_params(self, mcp)
dic, name = self.dic, self.dic['name']
dic['epsW'] = mcp.safe_get_float_list(name, 'epsW')
dic['epsB'] = mcp.safe_get_float(name, 'epsB')
dic['momW'] = mcp.safe_get_float_list(name, 'momW')
dic['momB'] = mcp.safe_get_float(name, 'momB')
dic['wc'] = mcp.safe_get_float_list(name, 'wc')
self.verify_num_params(['epsW', 'momW', 'wc'])
dic['gradConsumer'] = dic['epsB'] > 0 or any(w > 0 for w in dic['epsW'])
@staticmethod
def unshare_weights(layer, layers, matrix_idx=None):
def unshare(layer, layers, indices):
for i in indices:
if layer['weightSourceLayerIndices'][i] >= 0:
src_name = layers[layer['weightSourceLayerIndices'][i]]['name']
src_matrix_idx = layer['weightSourceMatrixIndices'][i]
layer['weightSourceLayerIndices'][i] = -1
layer['weightSourceMatrixIndices'][i] = -1
layer['weights'][i] = layer['weights'][i].copy()
layer['weightsInc'][i] = n.zeros_like(layer['weights'][i])
print "Unshared weight matrix %s[%d] from %s[%d]." % (layer['name'], i, src_name, src_matrix_idx)
else:
print "Weight matrix %s[%d] already unshared." % (layer['name'], i)
if 'weightSourceLayerIndices' in layer:
unshare(layer, layers, range(len(layer['inputs'])) if matrix_idx is None else [matrix_idx])
# Load weight/biases initialization module
def call_init_func(self, param_name, shapes, input_idx=-1):
dic = self.dic
func_pat = re.compile('^([^\.]+)\.([^\(\)]+)\s*(?:\(([^,]+(?:,[^,]+)*)\))?$')
m = func_pat.match(dic[param_name])
if not m:
raise LayerParsingError("Layer '%s': '%s' parameter must have format 'moduleName.functionName(param1,param2,...)'; got: %s." % (dic['name'], param_name, dic['initWFunc']))
module, func = m.group(1), m.group(2)
params = m.group(3).split(',') if m.group(3) is not None else []
try:
mod = __import__(module)
return getattr(mod, func)(dic['name'], input_idx, shapes, params=params) if input_idx >= 0 else getattr(mod, func)(dic['name'], shapes, params=params)
except (ImportError, AttributeError, TypeError), e:
raise LayerParsingError("Layer '%s': %s." % (dic['name'], e))
def make_weights(self, initW, rows, cols, order='C'):
dic = self.dic
dic['weights'], dic['weightsInc'] = [], []
if dic['initWFunc']: # Initialize weights from user-supplied python function
# Initialization function is supplied in the format
# module.func
for i in xrange(len(dic['inputs'])):
dic['weights'] += [self.call_init_func('initWFunc', (rows[i], cols[i]), input_idx=i)]
if type(dic['weights'][i]) != n.ndarray:
raise LayerParsingError("Layer '%s[%d]': weight initialization function %s must return numpy.ndarray object. Got: %s." % (dic['name'], i, dic['initWFunc'], type(dic['weights'][i])))
if dic['weights'][i].dtype != n.float32:
raise LayerParsingError("Layer '%s[%d]': weight initialization function %s must weight matrices consisting of single-precision floats. Got: %s." % (dic['name'], i, dic['initWFunc'], dic['weights'][i].dtype))
if dic['weights'][i].shape != (rows[i], cols[i]):
raise LayerParsingError("Layer '%s[%d]': weight matrix returned by weight initialization function %s has wrong shape. Should be: %s; got: %s." % (dic['name'], i, dic['initWFunc'], (rows[i], cols[i]), dic['weights'][i].shape))
# Convert to desired order
dic['weights'][i] = n.require(dic['weights'][i], requirements=order)
dic['weightsInc'] += [n.zeros_like(dic['weights'][i])]
print "Layer '%s[%d]' initialized weight matrices from function %s" % (dic['name'], i, dic['initWFunc'])
else:
for i in xrange(len(dic['inputs'])):
if dic['weightSourceLayerIndices'][i] >= 0: # Shared weight matrix
src_layer = self.prev_layers[dic['weightSourceLayerIndices'][i]] if dic['weightSourceLayerIndices'][i] < len(self.prev_layers) else dic
dic['weights'] += [src_layer['weights'][dic['weightSourceMatrixIndices'][i]]]
dic['weightsInc'] += [src_layer['weightsInc'][dic['weightSourceMatrixIndices'][i]]]
if dic['weights'][i].shape != (rows[i], cols[i]):
raise LayerParsingError("Layer '%s': weight sharing source matrix '%s' has shape %dx%d; should be %dx%d."
% (dic['name'], dic['weightSource'][i], dic['weights'][i].shape[0], dic['weights'][i].shape[1], rows[i], cols[i]))
print "Layer '%s' initialized weight matrix %d from %s" % (dic['name'], i, dic['weightSource'][i])
else:
dic['weights'] += [n.array(initW[i] * nr.randn(rows[i], cols[i]), dtype=n.single, order=order)]
dic['weightsInc'] += [n.zeros_like(dic['weights'][i])]
def make_biases(self, rows, cols, order='C'):
dic = self.dic
if dic['initBFunc']:
dic['biases'] = self.call_init_func('initBFunc', (rows, cols))
if type(dic['biases']) != n.ndarray:
raise LayerParsingError("Layer '%s': bias initialization function %s must return numpy.ndarray object. Got: %s." % (dic['name'], dic['initBFunc'], type(dic['biases'])))
if dic['biases'].dtype != n.float32:
raise LayerParsingError("Layer '%s': bias initialization function %s must return numpy.ndarray object consisting of single-precision floats. Got: %s." % (dic['name'], dic['initBFunc'], dic['biases'].dtype))
if dic['biases'].shape != (rows, cols):
raise LayerParsingError("Layer '%s': bias vector returned by bias initialization function %s has wrong shape. Should be: %s; got: %s." % (dic['name'], dic['initBFunc'], (rows, cols), dic['biases'].shape))
dic['biases'] = n.require(dic['biases'], requirements=order)
print "Layer '%s' initialized bias vector from function %s" % (dic['name'], dic['initBFunc'])
else:
dic['biases'] = dic['initB'] * n.ones((rows, cols), order='C', dtype=n.single)
dic['biasesInc'] = n.zeros_like(dic['biases'])
def parse(self, name, mcp, prev_layers, model):
dic = LayerWithInputParser.parse(self, name, mcp, prev_layers, model)
dic['requiresParams'] = True
dic['gradConsumer'] = True
dic['initW'] = mcp.safe_get_float_list(name, 'initW', default=0.01)
dic['initB'] = mcp.safe_get_float(name, 'initB', default=0)
dic['initWFunc'] = mcp.safe_get(name, 'initWFunc', default="")
dic['initBFunc'] = mcp.safe_get(name, 'initBFunc', default="")
# Find shared weight matrices
dic['weightSource'] = mcp.safe_get_list(name, 'weightSource', default=[''] * len(dic['inputs']))
self.verify_num_params(['initW', 'weightSource'])
prev_names = map(lambda x: x['name'], prev_layers)
dic['weightSourceLayerIndices'] = []
dic['weightSourceMatrixIndices'] = []
for i, src_name in enumerate(dic['weightSource']):
src_layer_idx = src_layer_matrix_idx = -1
if src_name != '':
src_layer_match = WeightLayerParser.get_layer_name(src_name)
if src_layer_match is None:
raise LayerParsingError("Layer '%s': unable to parse weight sharing source '%s'. Format is layer[idx] or just layer, in which case idx=0 is used." % (name, src_name))
src_layer_name = src_layer_match[0]
src_layer_matrix_idx = int(src_layer_match[1]) if src_layer_match[1] is not None else 0
if prev_names.count(src_layer_name) == 0 and src_layer_name != name:
raise LayerParsingError("Layer '%s': weight sharing source layer '%s' does not exist." % (name, src_layer_name))
src_layer_idx = prev_names.index(src_layer_name) if src_layer_name != name else len(prev_names)
src_layer = prev_layers[src_layer_idx] if src_layer_name != name else dic
if src_layer['type'] != dic['type']:
raise LayerParsingError("Layer '%s': weight sharing source layer '%s' is of type '%s'; should be '%s'." % (name, src_layer_name, src_layer['type'], dic['type']))
if src_layer_name != name and len(src_layer['weights']) <= src_layer_matrix_idx:
raise LayerParsingError("Layer '%s': weight sharing source layer '%s' has %d weight matrices, but '%s[%d]' requested." % (name, src_layer_name, len(src_layer['weights']), src_name, src_layer_matrix_idx))
if src_layer_name == name and src_layer_matrix_idx >= i:
raise LayerParsingError("Layer '%s': weight sharing source '%s[%d]' not defined yet." % (name, name, src_layer_matrix_idx))
dic['weightSourceLayerIndices'] += [src_layer_idx]
dic['weightSourceMatrixIndices'] += [src_layer_matrix_idx]
return dic
class FCLayerParser(WeightLayerParser):
def __init__(self):
WeightLayerParser.__init__(self)
def parse(self, name, mcp, prev_layers, model):
dic = WeightLayerParser.parse(self, name, mcp, prev_layers, model)
dic['usesActs'] = False
dic['outputs'] = mcp.safe_get_int(name, 'outputs')
self.verify_num_range(dic['outputs'], 'outputs', 1, None)
self.make_weights(dic['initW'], dic['numInputs'], [dic['outputs']] * len(dic['numInputs']), order='F')
self.make_biases(1, dic['outputs'], order='F')
print "Initialized fully-connected layer '%s', producing %d outputs" % (name, dic['outputs'])
return dic
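# --- Editorial sketch, not part of the original source -----------------------
# The fully-connected parser above reads its options from a named config
# section via `mcp` (outputs, initW, initB, weightSource, ...). A hypothetical
# layer definition it could parse might look like the snippet below; the
# `type=` and `inputs=` keys are assumed from the usual cuda-convnet-style
# config format, `fc1` is an imaginary earlier layer, and
# `weightSource=fc1[0]` uses the "layer[idx]" form documented in
# WeightLayerParser.parse above.
#
#   [fc2]
#   type=fc
#   inputs=fc1
#   outputs=1024
#   initW=0.01
#   initB=0
#   weightSource=fc1[0]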
class FCDropOutLayerParser( FCLayerParser ):
def __init__(self):
FCLayerParser.__init__(self)
def parse(self, name, mcp, prev_layers, model):
dic = FCLayerParser.parse(self, name, mcp, prev_layers, model)
        dic['rate'] = mcp.safe_get_float(name, 'rate')
        assert 0 <= dic['rate'] <= 1
print "Output Drop rate: ", dic['rate']
return dic
class FCDropConnectLayerParser( FCLayerParser ):
def __init__(self):
FCLayerParser.__init__(self)
def parse(self, name, mcp, prev_layers, model):
dic = FCLayerParser.parse(self, name, mcp, prev_layers, model)
        dic['rate'] = mcp.safe_get_float(name, 'rate')
        assert 0 <= dic['rate'] <= 1
print "Connection Drop rate: ", dic['rate']
return dic
class FCDropConnectFastLayerParser( FCLayerParser ):
def __init__(self):
FCLayerParser.__init__(self)
def parse(self, name, mcp, prev_layers, model):
dic = FCLayerParser.parse(self, name, mcp, prev_layers, model)
        dic['rate'] = mcp.safe_get_float(name, 'rate')
        assert 0 <= dic['rate'] <= 1
print "Connection Drop rate(fast): ", dic['rate']
return dic
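# --- Editorial sketch, not part of the original source -----------------------
# The three dropout-style parsers above only add a `rate` option (checked to
# lie in [0, 1]) on top of the plain fc layer. Using the type strings
# registered in `layer_parsers` further below, a hypothetical section could be:
#
#   [fc2-dropc]
#   type=fcdropc
#   inputs=fc1
#   outputs=1024
#   initW=0.01
#   rate=0.5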
class LocalLayerParser(WeightLayerParser):
def __init__(self):
WeightLayerParser.__init__(self)
# Convert convolutional layer to unshared, locally-connected layer
@staticmethod
def conv_to_local(layers, idx):
layer = layers[idx]
if layer['type'] == 'conv':
layer['type'] = 'local'
for inp in xrange(len(layer['inputs'])):
src_layer_idx = layer['weightSourceLayerIndices'][inp]
if layer['weightSourceLayerIndices'][inp] >= 0:
src_layer = layers[src_layer_idx]
src_matrix_idx = layer['weightSourceMatrixIndices'][inp]
LocalLayerParser.conv_to_local(layers, src_layer_idx)
for w in ('weights', 'weightsInc'):
layer[w][inp] = src_layer[w][src_matrix_idx]
else:
layer['weights'][inp] = n.require(n.reshape(n.tile(n.reshape(layer['weights'][inp], (1, n.prod(layer['weights'][inp].shape))), (layer['modules'], 1)),
(layer['modules'] * layer['filterChannels'][inp] * layer['filterPixels'][inp], layer['filters'])),
requirements='C')
layer['weightsInc'][inp] = n.zeros_like(layer['weights'][inp])
if layer['sharedBiases']:
layer['biases'] = n.require(n.repeat(layer['biases'], layer['modules'], axis=0), requirements='C')
layer['biasesInc'] = n.zeros_like(layer['biases'])
print "Converted layer '%s' from convolutional to unshared, locally-connected" % layer['name']
# Also call this function on any layers sharing my weights
for i, l in enumerate(layers):
if 'weightSourceLayerIndices' in l and idx in l['weightSourceLayerIndices']:
LocalLayerParser.conv_to_local(layers, i)
return layer
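    # --- Editorial sketch, not part of the original source -------------------
    # What the tile/reshape in the else-branch above does, on toy shapes: the
    # conv layer's own weight matrix of shape
    # (filterChannels*filterPixels, filters) is repeated once per module, so
    # every module of the resulting locally-connected layer starts from the
    # same filters.
    #
    #   import numpy as np
    #   modules, fc_fp, filters = 4, 3, 2
    #   w_conv = np.arange(fc_fp * filters, dtype=np.float32).reshape(fc_fp, filters)
    #   w_local = np.reshape(np.tile(np.reshape(w_conv, (1, w_conv.size)), (modules, 1)),
    #                        (modules * fc_fp, filters))
    #   assert w_local.shape == (modules * fc_fp, filters)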
# Returns (groups, filterChannels) array that represents the set
# of image channels to which each group is connected
def gen_rand_conns(self, groups, channels, filterChannels, inputIdx):
dic = self.dic
overSample = groups * filterChannels / channels
filterConns = [x for i in xrange(overSample) for x in nr.permutation(range(channels))]
if dic['initCFunc']: # Initialize connectivity from outside source
filterConns = self.call_init_func('initCFunc', (groups, channels, filterChannels), input_idx=inputIdx)
if len(filterConns) != overSample * channels:
raise LayerParsingError("Layer '%s[%d]': random connectivity initialization function %s must return list of length <groups> * <filterChannels> = %d; got: %d" % (dic['name'], inputIdx, dic['initCFunc'], len(filterConns)))
if any(c not in range(channels) for c in filterConns):
raise LayerParsingError("Layer '%s[%d]': random connectivity initialization function %s must return list of channel indices in the range 0-<channels-1> = 0-%d." % (dic['name'], inputIdx, dic['initCFunc'], channels-1))
# Every "channels" sub-slice should be a permutation of range(channels)
if any(len(set(c)) != len(c) for c in [filterConns[o*channels:(o+1)*channels] for o in xrange(overSample)]):
raise LayerParsingError("Layer '%s[%d]': random connectivity initialization function %s must return list of channel indices such that every non-overlapping sub-list of <channels> = %d elements is a permutation of the integers 0-<channels-1> = 0-%d." % (dic['name'], inputIdx, dic['initCFunc'], channels, channels-1))
elif dic['weightSourceLayerIndices'][inputIdx] >= 0: # Shared weight matrix
src_layer = self.prev_layers[dic['weightSourceLayerIndices'][inputIdx]] if dic['weightSourceLayerIndices'][inputIdx] < len(self.prev_layers) else dic
src_inp = dic['weightSourceMatrixIndices'][inputIdx]
if 'randSparse' not in src_layer or not src_layer['randSparse']:
raise LayerParsingError("Layer '%s[%d]': randSparse is true in this layer but false in weight sharing source layer '%s[%d]'." % (dic['name'], inputIdx, src_layer['name'], src_inp))
if (groups, channels, filterChannels) != (src_layer['groups'][src_inp], src_layer['channels'][src_inp], src_layer['filterChannels'][src_inp]):
raise LayerParsingError("Layer '%s[%d]': groups, channels, filterChannels set to %d, %d, %d, respectively. Does not match setting in weight sharing source layer '%s[%d]': %d, %d, %d." % (dic['name'], inputIdx, groups, channels, filterChannels, src_layer['name'], src_inp, src_layer['groups'][src_inp], src_layer['channels'][src_inp], src_layer['filterChannels'][src_inp]))
filterConns = src_layer['filterConns'][src_inp]
return filterConns
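    # --- Editorial sketch, not part of the original source -------------------
    # Shape of the default connectivity list built above: with, say, groups=4,
    # channels=8 and filterChannels=4 we get
    # overSample = groups*filterChannels/channels = 2, so filterConns holds
    # overSample*channels = groups*filterChannels = 16 channel indices in which
    # every consecutive block of 8 entries is a random permutation of 0..7 --
    # the same invariants the initCFunc validation above enforces.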
def parse(self, name, mcp, prev_layers, model):
dic = WeightLayerParser.parse(self, name, mcp, prev_layers, model)
dic['requiresParams'] = True
dic['usesActs'] = False
# Supplied values
dic['channels'] = mcp.safe_get_int_list(name, 'channels')
dic['padding'] = mcp.safe_get_int_list(name, 'padding', default=[0]*len(dic['inputs']))
dic['stride'] = mcp.safe_get_int_list(name, 'stride', default=[1]*len(dic['inputs']))
dic['filterSize'] = mcp.safe_get_int_list(name, 'filterSize')
dic['filters'] = mcp.safe_get_int_list(name, 'filters')
dic['groups'] = mcp.safe_get_int_list(name, 'groups', default=[1]*len(dic['inputs']))
dic['randSparse'] = mcp.safe_get_bool_list(name, 'randSparse', default=[False]*len(dic['inputs']))
dic['initW'] = mcp.safe_get_float_list(name, 'initW')
dic['initCFunc'] = mcp.safe_get(name, 'initCFunc', default='')
self.verify_num_params(['channels', 'padding', 'stride', 'filterSize', \
'filters', 'groups', 'randSparse', 'initW'])
self.verify_num_range(dic['stride'], 'stride', 1, None)
self.verify_num_range(dic['filterSize'],'filterSize', 1, None)
self.verify_num_range(dic['padding'], 'padding', 0, None)
self.verify_num_range(dic['channels'], 'channels', 1, None)
self.verify_num_range(dic['groups'], 'groups', 1, None)
# Computed values
dic['imgPixels'] = [numInputs/channels for numInputs,channels in zip(dic['numInputs'], dic['channels'])]
dic['imgSize'] = [int(n.sqrt(imgPixels)) for imgPixels in dic['imgPixels']]
self.verify_num_range(dic['imgSize'], 'imgSize', 1, None)
dic['filters'] = [filters*groups for filters,groups in zip(dic['filters'], dic['groups'])]
dic['filterPixels'] = [filterSize**2 for filterSize in dic['filterSize']]
dic['modulesX'] = [1 + int(ceil((2 * padding + imgSize - filterSize) / float(stride))) for padding,imgSize,filterSize,stride in zip(dic['padding'], dic['imgSize'], dic['filterSize'], dic['stride'])]
dic['filterChannels'] = [channels/groups for channels,groups in zip(dic['channels'], dic['groups'])]
if max(dic['randSparse']): # When randSparse is turned on for any input, filterChannels must be given for all of them
dic['filterChannels'] = mcp.safe_get_int_list(name, 'filterChannels', default=dic['filterChannels'])
self.verify_num_params(['filterChannels'])
if len(set(dic['modulesX'])) != 1 or len(set(dic['filters'])) != 1:
raise LayerParsingError("Layer '%s': all inputs must produce equally-dimensioned output. Dimensions are: %s." % (name, ", ".join("%dx%dx%d" % (filters, modulesX, modulesX) for filters,modulesX in zip(dic['filters'], dic['modulesX']))))
dic['modulesX'] = dic['modulesX'][0]
dic['modules'] = dic['modulesX']**2
dic['filters'] = dic['filters'][0]
dic['outputs'] = dic['modules'] * dic['filters']
dic['filterConns'] = [[]] * len(dic['inputs'])
for i in xrange(len(dic['inputs'])):
if dic['numInputs'][i] % dic['imgPixels'][i] != 0 or dic['imgSize'][i] * dic['imgSize'][i] != dic['imgPixels'][i]:
raise LayerParsingError("Layer '%s[%d]': has %-d dimensional input, not interpretable as square %d-channel images" % (name, i, dic['numInputs'][i], dic['channels'][i]))
if dic['channels'][i] > 3 and dic['channels'][i] % 4 != 0:
raise LayerParsingError("Layer '%s[%d]': number of channels must be smaller than 4 or divisible by 4" % (name, i))
if dic['filterSize'][i] > 2 * dic['padding'][i] + dic['imgSize'][i]:
raise LayerParsingError("Layer '%s[%d]': filter size (%d) greater than image size + 2 * padding (%d)" % (name, i, dic['filterSize'][i], 2 * dic['padding'][i] + dic['imgSize'][i]))
if dic['randSparse'][i]: # Random sparse connectivity requires some extra checks
if dic['groups'][i] == 1:
raise LayerParsingError("Layer '%s[%d]': number of groups must be greater than 1 when using random sparse connectivity" % (name, i))
self.verify_divisible(dic['channels'][i], dic['filterChannels'][i], 'channels', 'filterChannels', input_idx=i)
self.verify_divisible(dic['filterChannels'][i], 4, 'filterChannels', input_idx=i)
self.verify_divisible( dic['groups'][i]*dic['filterChannels'][i], dic['channels'][i], 'groups * filterChannels', 'channels', input_idx=i)
dic['filterConns'][i] = self.gen_rand_conns(dic['groups'][i], dic['channels'][i], dic['filterChannels'][i], i)
else:
if dic['groups'][i] > 1:
self.verify_divisible(dic['channels'][i], 4*dic['groups'][i], 'channels', '4 * groups', input_idx=i)
self.verify_divisible(dic['channels'][i], dic['groups'][i], 'channels', 'groups', input_idx=i)
self.verify_divisible(dic['filters'], 16*dic['groups'][i], 'filters * groups', input_idx=i)
dic['padding'][i] = -dic['padding'][i]
dic['overSample'] = [groups*filterChannels/channels for groups,filterChannels,channels in zip(dic['groups'], dic['filterChannels'], dic['channels'])]
return dic
class ConvLayerParser(LocalLayerParser):
def __init__(self):
LocalLayerParser.__init__(self)
def parse(self, name, mcp, prev_layers, model):
dic = LocalLayerParser.parse(self, name, mcp, prev_layers, model)
dic['partialSum'] = mcp.safe_get_int(name, 'partialSum')
dic['sharedBiases'] = mcp.safe_get_bool(name, 'sharedBiases', default=True)
if dic['partialSum'] != 0 and dic['modules'] % dic['partialSum'] != 0:
raise LayerParsingError("Layer '%s': convolutional layer produces %dx%d=%d outputs per filter, but given partialSum parameter (%d) does not divide this number" % (name, dic['modulesX'], dic['modulesX'], dic['modules'], dic['partialSum']))
num_biases = dic['filters'] if dic['sharedBiases'] else dic['modules']*dic['filters']
eltmult = lambda list1, list2: [l1 * l2 for l1,l2 in zip(list1, list2)]
self.make_weights(dic['initW'], eltmult(dic['filterPixels'], dic['filterChannels']), [dic['filters']] * len(dic['inputs']), order='C')
self.make_biases(num_biases, 1, order='C')
print "Initialized convolutional layer '%s', producing %dx%d %d-channel output" % (name, dic['modulesX'], dic['modulesX'], dic['filters'])
return dic
class LocalUnsharedLayerParser(LocalLayerParser):
def __init__(self):
LocalLayerParser.__init__(self)
def parse(self, name, mcp, prev_layers, model):
dic = LocalLayerParser.parse(self, name, mcp, prev_layers, model)
eltmult = lambda list1, list2: [l1 * l2 for l1,l2 in zip(list1, list2)]
scmult = lambda x, lst: [x * l for l in lst]
self.make_weights(dic['initW'], scmult(dic['modules'], eltmult(dic['filterPixels'], dic['filterChannels'])), [dic['filters']] * len(dic['inputs']), order='C')
self.make_biases(dic['modules'] * dic['filters'], 1, order='C')
print "Initialized locally-connected layer '%s', producing %dx%d %d-channel output" % (name, dic['modulesX'], dic['modulesX'], dic['filters'])
return dic
class DataLayerParser(LayerParser):
def __init__(self):
LayerParser.__init__(self)
def parse(self, name, mcp, prev_layers, model):
dic = LayerParser.parse(self, name, mcp, prev_layers, model)
dic['dataIdx'] = mcp.safe_get_int(name, 'dataIdx')
dic['outputs'] = model.train_data_provider.get_data_dims(idx=dic['dataIdx'])
print "Initialized data layer '%s', producing %d outputs" % (name, dic['outputs'])
return dic
class SoftmaxLayerParser(LayerWithInputParser):
def __init__(self):
LayerWithInputParser.__init__(self, num_inputs=1)
def parse(self, name, mcp, prev_layers, model):
dic = LayerWithInputParser.parse(self, name, mcp, prev_layers, model)
dic['outputs'] = prev_layers[dic['inputs'][0]]['outputs']
print "Initialized softmax layer '%s', producing %d outputs" % (name, dic['outputs'])
return dic
class PoolLayerParser(LayerWithInputParser):
def __init__(self):
LayerWithInputParser.__init__(self, num_inputs=1)
def parse(self, name, mcp, prev_layers, model):
dic = LayerWithInputParser.parse(self, name, mcp, prev_layers, model)
dic['channels'] = mcp.safe_get_int(name, 'channels')
dic['sizeX'] = mcp.safe_get_int(name, 'sizeX')
dic['start'] = mcp.safe_get_int(name, 'start', default=0)
dic['stride'] = mcp.safe_get_int(name, 'stride')
dic['outputsX'] = mcp.safe_get_int(name, 'outputsX', default=0)
dic['pool'] = mcp.safe_get(name, 'pool')
# Avg pooler does not use its acts or inputs
        dic['usesActs'] = dic['pool'] != 'avg'
        dic['usesInputs'] = dic['pool'] != 'avg'
dic['imgPixels'] = dic['numInputs'][0] / dic['channels']
dic['imgSize'] = int(n.sqrt(dic['imgPixels']))
self.verify_num_range(dic['sizeX'], 'sizeX', 1, dic['imgSize'])
self.verify_num_range(dic['stride'], 'stride', 1, dic['sizeX'])
self.verify_num_range(dic['outputsX'], 'outputsX', 0, None)
self.verify_num_range(dic['channels'], 'channels', 1, None)
if LayerWithInputParser.grad_consumers_below(dic):
self.verify_divisible(dic['channels'], 16, 'channels')
self.verify_str_in(dic['pool'], ['max', 'avg'])
self.verify_img_size()
if dic['outputsX'] <= 0:
            dic['outputsX'] = int(ceil((dic['imgSize'] - dic['start'] - dic['sizeX']) / float(dic['stride']))) + 1
dic['outputs'] = dic['outputsX']**2 * dic['channels']
print "Initialized %s-pooling layer '%s', producing %dx%d %d-channel output" % (dic['pool'], name, dic['outputsX'], dic['outputsX'], dic['channels'])
return dic
class NormLayerParser(LayerWithInputParser):
RESPONSE_NORM = 'response'
CONTRAST_NORM = 'contrast'
CROSSMAP_RESPONSE_NORM = 'cross-map response'
def __init__(self, norm_type):
LayerWithInputParser.__init__(self, num_inputs=1)
self.norm_type = norm_type
def add_params(self, mcp):
LayerWithInputParser.add_params(self, mcp)
dic, name = self.dic, self.dic['name']
dic['scale'] = mcp.safe_get_float(name, 'scale')
dic['scale'] /= dic['size'] if self.norm_type == self.CROSSMAP_RESPONSE_NORM else dic['size']**2
dic['pow'] = mcp.safe_get_float(name, 'pow')
def parse(self, name, mcp, prev_layers, model):
dic = LayerWithInputParser.parse(self, name, mcp, prev_layers, model)
dic['requiresParams'] = True
dic['channels'] = mcp.safe_get_int(name, 'channels')
dic['size'] = mcp.safe_get_int(name, 'size')
dic['blocked'] = mcp.safe_get_bool(name, 'blocked', default=False)
dic['imgPixels'] = dic['numInputs'][0] / dic['channels']
dic['imgSize'] = int(n.sqrt(dic['imgPixels']))
# Contrast normalization layer does not use its inputs
dic['usesInputs'] = self.norm_type != self.CONTRAST_NORM
self.verify_num_range(dic['channels'], 'channels', 1, None)
if self.norm_type == self.CROSSMAP_RESPONSE_NORM:
self.verify_num_range(dic['size'], 'size', 2, dic['channels'])
if dic['channels'] % 16 != 0:
raise LayerParsingError("Layer '%s': number of channels must be divisible by 16 when using crossMap" % name)
else:
self.verify_num_range(dic['size'], 'size', 1, dic['imgSize'])
if self.norm_type != self.CROSSMAP_RESPONSE_NORM and dic['channels'] > 3 and dic['channels'] % 4 != 0:
raise LayerParsingError("Layer '%s': number of channels must be smaller than 4 or divisible by 4" % name)
self.verify_img_size()
dic['outputs'] = dic['imgPixels'] * dic['channels']
print "Initialized %s-normalization layer '%s', producing %dx%d %d-channel output" % (self.norm_type, name, dic['imgSize'], dic['imgSize'], dic['channels'])
return dic
class CostParser(LayerWithInputParser):
def __init__(self, num_inputs=-1):
LayerWithInputParser.__init__(self, num_inputs=num_inputs)
def parse(self, name, mcp, prev_layers, model):
dic = LayerWithInputParser.parse(self, name, mcp, prev_layers, model)
dic['requiresParams'] = True
del dic['neuron']
return dic
def add_params(self, mcp):
LayerWithInputParser.add_params(self, mcp)
dic, name = self.dic, self.dic['name']
dic['coeff'] = mcp.safe_get_float(name, 'coeff')
class LogregCostParser(CostParser):
def __init__(self):
CostParser.__init__(self, num_inputs=2)
def parse(self, name, mcp, prev_layers, model):
dic = CostParser.parse(self, name, mcp, prev_layers, model)
if dic['numInputs'][0] != 1: # first input must be labels
raise LayerParsingError("Layer '%s': dimensionality of first input must be 1" % name)
if prev_layers[dic['inputs'][1]]['type'] != 'softmax':
raise LayerParsingError("Layer '%s': second input must be softmax layer" % name)
if dic['numInputs'][1] != model.train_data_provider.get_num_classes():
raise LayerParsingError("Layer '%s': softmax input '%s' must produce %d outputs, because that is the number of classes in the dataset" \
% (name, prev_layers[dic['inputs'][1]]['name'], model.train_data_provider.get_num_classes()))
print "Initialized logistic regression cost '%s'" % name
return dic
class SumOfSquaresCostParser(CostParser):
def __init__(self):
CostParser.__init__(self, num_inputs=1)
def parse(self, name, mcp, prev_layers, model):
dic = CostParser.parse(self, name, mcp, prev_layers, model)
print "Initialized sum-of-squares cost '%s'" % name
return dic
# All the layer parsers
layer_parsers = {'data': lambda : DataLayerParser(),
'fc': lambda : FCLayerParser(),
'fcdropo': lambda : FCDropOutLayerParser(),
'fcdropc': lambda : FCDropConnectLayerParser(),
'fcdropcf': lambda : FCDropConnectFastLayerParser(),
'conv': lambda : ConvLayerParser(),
'local': lambda : LocalUnsharedLayerParser(),
'softmax': lambda : SoftmaxLayerParser(),
'eltsum': lambda : EltwiseSumLayerParser(),
'eltmax': lambda : EltwiseMaxLayerParser(),
'neuron': lambda : NeuronLayerParser(),
'pool': lambda : PoolLayerParser(),
'rnorm': lambda : NormLayerParser(NormLayerParser.RESPONSE_NORM),
'cnorm': lambda : NormLayerParser(NormLayerParser.CONTRAST_NORM),
'cmrnorm': lambda : NormLayerParser(NormLayerParser.CROSSMAP_RESPONSE_NORM),
'nailbed': lambda : NailbedLayerParser(),
'blur': lambda : GaussianBlurLayerParser(),
'resize': lambda : ResizeLayerParser(),
'rgb2yuv': lambda : RGBToYUVLayerParser(),
'rgb2lab': lambda : RGBToLABLayerParser(),
'rscale': lambda : RandomScaleLayerParser(),
'cost.logreg': lambda : LogregCostParser(),
'cost.sum2': lambda : SumOfSquaresCostParser()}
# All the neuron parsers
# This isn't a name --> parser mapping like the layer parsers above, because neurons don't have fixed names.
# A user may write tanh[0.5,0.25], etc.
neuron_parsers = sorted([NeuronParser('ident', 'f(x) = x', uses_acts=False, uses_inputs=False),
NeuronParser('logistic', 'f(x) = 1 / (1 + e^-x)', uses_acts=True, uses_inputs=False),
NeuronParser('abs', 'f(x) = |x|', uses_acts=False, uses_inputs=True),
NeuronParser('relu', 'f(x) = max(0, x)', uses_acts=True, uses_inputs=False),
NeuronParser('softrelu', 'f(x) = log(1 + e^x)', uses_acts=True, uses_inputs=False),
NeuronParser('square', 'f(x) = x^2', uses_acts=False, uses_inputs=True),
NeuronParser('sqrt', 'f(x) = sqrt(x)', uses_acts=True, uses_inputs=False),
ParamNeuronParser('tanh[a,b]', 'f(x) = a * tanh(b * x)', uses_acts=True, uses_inputs=False),
ParamNeuronParser('brelu[a]', 'f(x) = min(a, max(0, x))', uses_acts=True, uses_inputs=False),
ParamNeuronParser('linear[a,b]', 'f(x) = a * x + b', uses_acts=True, uses_inputs=False)],
key=lambda x:x.type)
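# --- Editorial sketch, not part of the original source -----------------------
# A parameterized neuron spec written by a user, e.g. `neuron=tanh[0.5,0.25]`
# in a layer section, matches the 'tanh[a,b]' template above and, per its
# documented formula, stands for f(x) = 0.5 * tanh(0.25 * x).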
|
olivernina/idropout
|
layer.py
|
Python
|
mit
| 63,500
|
[
"Gaussian",
"NEURON"
] |
304846502acabcbc8f0ace2b2c05d12e7eee60ffff38dacd654519730e5df8df
|
import abc
import time
from copy import copy
from typing import List, Set, Tuple
import rdkit.rdBase as rkrb
import rdkit.RDLogger as rkl
from minedatabase.pickaxe import Pickaxe
logger = rkl.logger()
logger.setLevel(rkl.ERROR)
rkrb.DisableLog("rdApp.error")
class Filter(metaclass=abc.ABCMeta):
"""Abstract base class used to generate filters.
The Filter class provides the framework for interaction with pickaxe expansions.
Each filter subclass must inherit properties from the Filter class.
All subclasses must implement properties and methods decorated with
@abc.abstractmethod. Feel free to override other non-private methods as
well, such as _pre_print() and _post_print().
"""
@property
@abc.abstractmethod
def filter_name(self) -> str:
"""Obtain name of filter."""
pass
@abc.abstractmethod
    def _choose_items_to_filter(self, pickaxe: Pickaxe, processes: int) -> Tuple[Set[str], Set[str]]:
        """Return the compound IDs and reaction IDs to hand to _apply_filter_results.
        Parameters
        ----------
        pickaxe : Pickaxe
            Instance of Pickaxe being used to expand and filter the network.
        processes : int
            The number of processes to use, by default 1.
        """
pass
def apply_filter(
self,
pickaxe: Pickaxe,
processes: int = 1,
generation: int = 0,
print_on: bool = True,
) -> None:
"""Apply filter from Pickaxe object.
Parameters
----------
pickaxe : Pickaxe
The Pickaxe object to filter.
processes : int
The number of processes to use, by default 1.
print_on : bool
Whether or not to print filtering results.
"""
time_sample = time.time()
self.generation = generation
if print_on:
n_total = self._get_n(pickaxe, "total")
self._pre_print_header(pickaxe)
self._pre_print()
compound_ids_to_check, reaction_ids_to_check = self._choose_items_to_filter(
pickaxe, processes
)
self._apply_filter_results(
pickaxe, compound_ids_to_check, reaction_ids_to_check
)
if print_on:
n_filtered = self._get_n(pickaxe, "filtered")
self._post_print(pickaxe, n_total, n_filtered, time_sample)
self._post_print_footer(pickaxe)
def _pre_print_header(self, pickaxe: Pickaxe) -> None:
"""Print header before filtering.
Parameters
----------
pickaxe : Pickaxe
Instance of Pickaxe being used to expand and filter the network.
"""
print("----------------------------------------")
print(f"Filtering Generation {pickaxe.generation}\n")
def _pre_print(self) -> None:
"""Print filter being applied."""
print(f"Applying filter: {self.filter_name}")
def _post_print(
self, pickaxe: Pickaxe, n_total: int, n_filtered: int, time_sample: float
) -> None:
"""Print results of filtering.
Parameters
----------
pickaxe : Pickaxe
Instance of Pickaxe being used to expand and filter the network.
Unused here, but may be useful in your implementation.
n_total : int
Total number of compounds.
n_filtered : int
Number of compounds remaining after filtering.
        time_sample : float
Time in seconds from time.time().
"""
print(
f"{n_filtered} of {n_total} compounds remain after applying "
f"filter: {self.filter_name}"
f"--took {round(time.time() - time_sample, 2)}s.\n"
)
def _post_print_footer(self, pickaxe: Pickaxe) -> None:
"""Print end of filtering.
Parameters
----------
pickaxe : Pickaxe
Instance of Pickaxe being used to expand and filter the network.
"""
print(f"Done filtering Generation {pickaxe.generation}")
print("----------------------------------------\n")
def _get_n(self, pickaxe: Pickaxe, n_type: str) -> int:
"""Get current number of compounds to be filtered.
Parameters
----------
pickaxe : Pickaxe
Instance of Pickaxe being used to expand and filter the network.
n_type : str
Whether to return "total" number of "filtered" number of compounds.
Returns
-------
n : int
Either the total or filtered number of compounds.
"""
n = 0
for cpd_dict in pickaxe.compounds.values():
is_in_current_gen = cpd_dict["Generation"] == pickaxe.generation
is_predicted_compound = cpd_dict["_id"].startswith("C")
if is_in_current_gen and is_predicted_compound:
if n_type == "total":
n += 1
elif n_type == "filtered" and cpd_dict["Expand"]:
n += 1
return n
def _apply_filter_results(
self,
pickaxe: Pickaxe,
compound_ids_to_check: List[str] = [],
reaction_ids_to_delete: List[str] = [],
) -> None:
"""Apply filter results to Pickaxe object.
Remove compounds and reactions that can be removed.
For a compound to be removed it must:
1. Not be flagged for expansion
2. Not have a coproduct in a reaction marked for expansion
3. Start with "C"
Parameters
----------
pickaxe : Pickaxe
Instance of Pickaxe being used to expand and filter the network,
this method modifies the Pickaxe object's compound documents.
        compound_ids_to_check : List[str]
            List of compound IDs to try to remove, if possible.
        reaction_ids_to_delete : List[str]
            List of reaction IDs to delete, along with any compounds orphaned
            by their removal.
        """
def should_delete_reaction(rxn_id: str) -> bool:
"""Returns whether or not a reaction can safely be deleted."""
products = pickaxe.reactions[rxn_id]["Products"]
for _, c_id in products:
if c_id.startswith("C") and c_id not in cpds_to_remove:
return False
            # Every predicted product is slated for removal, so the reaction can be deleted
return True
def remove_reaction(rxn_id):
"""Removes reaction and any resulting orphan compounds"""
cpds_to_return = set()
# Remove affiliations of reaction and check for orphans
product_ids = [cpd[1] for cpd in pickaxe.reactions[rxn_id]["Products"]]
for prod_id in product_ids:
if prod_id.startswith("C"):
pickaxe.compounds[prod_id]["Product_of"].remove(rxn_id)
cpds_to_return.add(prod_id)
compound_ids = [cpd[1] for cpd in pickaxe.reactions[rxn_id]["Reactants"]]
for cpd_id in compound_ids:
if cpd_id.startswith("C"):
pickaxe.compounds[cpd_id]["Reactant_in"].remove(rxn_id)
cpds_to_return.add(cpd_id)
# Delete reaction itself
del pickaxe.reactions[rxn_id]
return cpds_to_return
# Process reactions to delete
# Loop through reactions to add compounds to check and to delete reactions
if reaction_ids_to_delete:
cpd_check_from_rxn = set()
for rxn_id in reaction_ids_to_delete:
cpd_check_from_rxn = cpd_check_from_rxn.union(remove_reaction(rxn_id))
# Check for orphaned compounds due to reaction deletion
while len(cpd_check_from_rxn) != 0:
cpd_id = cpd_check_from_rxn.pop()
# Orphan compound is one that has no reaction connecting it
if cpd_id in pickaxe.compounds:
product_of = copy(pickaxe.compounds[cpd_id].get("Product_of", []))
# Delete if no reactions
if not product_of:
# Delete out reactions
reactant_in = copy(
pickaxe.compounds[cpd_id].get("Reactant_in", [])
)
for rxn_id in reactant_in:
cpd_check_from_rxn = cpd_check_from_rxn.union(
remove_reaction(rxn_id)
)
# Now delete compound
del pickaxe.compounds[cpd_id]
# Go through compounds_ids_to_check and delete cpds/rxns as needed
if compound_ids_to_check:
cpds_to_remove = set()
rxns_to_check = []
compound_ids_to_check = set(compound_ids_to_check)
for cpd_id in compound_ids_to_check:
cpd_dict = pickaxe.compounds.get(cpd_id)
if not cpd_dict:
continue
if not cpd_dict["Expand"] and cpd_id.startswith("C"):
cpds_to_remove.add(cpd_id)
rxns_to_check.extend(pickaxe.compounds[cpd_id]["Product_of"])
rxns_to_check.extend(pickaxe.compounds[cpd_id]["Reactant_in"])
rxns_to_check = set(rxns_to_check)
            # Check each affected reaction: if it still has a predicted product
            # that will not be deleted, keep the reaction (and its products);
            # otherwise delete it and detach it from its compounds.
for rxn_id in rxns_to_check:
if should_delete_reaction(rxn_id):
for _, c_id in pickaxe.reactions[rxn_id]["Products"]:
if c_id.startswith("C"):
if rxn_id in pickaxe.compounds[c_id]["Product_of"]:
pickaxe.compounds[c_id]["Product_of"].remove(rxn_id)
for _, c_id in pickaxe.reactions[rxn_id]["Reactants"]:
if c_id.startswith("C"):
if rxn_id in pickaxe.compounds[c_id]["Reactant_in"]:
pickaxe.compounds[c_id]["Reactant_in"].remove(rxn_id)
del pickaxe.reactions[rxn_id]
else:
# Reaction is dependent on compound that is flagged to be
# removed. Don't remove compound
products = pickaxe.reactions[rxn_id]["Products"]
cpds_to_remove -= set(i[1] for i in products)
# for _, c_id in products:
# if c_id in cpds_to_remove:
# cpds_to_remove -= {c_id}
# Remove compounds and reactions if any found
for cpd_id in cpds_to_remove:
del pickaxe.compounds[cpd_id]
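# --- Editorial sketch, not part of the original module -----------------------
# A minimal concrete Filter, illustrating the two abstract members and the
# (compound_ids, reaction_ids) pair that apply_filter() unpacks. The selection
# rule (drop compounds with very long SMILES strings) and the "SMILES"
# document key are assumptions for illustration only.
#
#   class LongSmilesFilter(Filter):
#       """Flag compounds with very long SMILES strings for removal."""
#
#       def __init__(self, max_len: int = 200) -> None:
#           self.max_len = max_len
#
#       @property
#       def filter_name(self) -> str:
#           return "Long SMILES filter"
#
#       def _choose_items_to_filter(self, pickaxe: Pickaxe, processes: int):
#           cpds_to_check = set()
#           rxns_to_delete = set()
#           for cpd_id, cpd in pickaxe.compounds.items():
#               if cpd_id.startswith("C") and len(cpd.get("SMILES", "")) > self.max_len:
#                   # _apply_filter_results only removes compounds whose
#                   # "Expand" flag is False, so clear it here.
#                   cpd["Expand"] = False
#                   cpds_to_check.add(cpd_id)
#           return cpds_to_check, rxns_to_delete
#
# Usage then mirrors the framework above: LongSmilesFilter().apply_filter(pickaxe).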
if __name__ == "__main__":
pass
|
JamesJeffryes/MINE-Database
|
minedatabase/filters/base_filter.py
|
Python
|
mit
| 10,813
|
[
"RDKit"
] |
84a92383c525055a12845752d16654e8d74fb10cbe2418559e39d1c38576fb28
|
"""
Routines analyzing dependencies (class and module) in Python source.
"""
import os
import sys
import ast
import time
import __builtin__
import cPickle
import networkx as nx
from openmdao.util.fileutil import find_files, get_module_path, find_module
#from openmdao.util.log import logger
# This is a dict containing all of the entry point groups that OpenMDAO uses to
# identify plugins, and their corresponding Interfaces.
plugin_groups = { 'openmdao.container': ['IContainer'],
'openmdao.component': ['IComponent','IContainer'],
'openmdao.driver': ['IDriver','IComponent','IContainer'],
'openmdao.variable': ['IVariable'],
'openmdao.surrogatemodel': ['ISurrogate'],
'openmdao.doegenerator': ['IDOEgenerator'],
'openmdao.casefilter': ['ICaseFilter'],
'openmdao.caseiterator': ['ICaseIterator'],
'openmdao.caserecorder': ['ICaseRecorder'],
'openmdao.architecture': ['IArchitecture'],
'openmdao.optproblem': ['IOptProblem','IAssembly',
'IComponent','IContainer'],
}
iface_set = set()
for ifaces in plugin_groups.values():
iface_set.update(ifaces)
def _to_str(node):
"""Take groups of Name nodes or a Str node and convert to a string."""
if isinstance(node, ast.Name):
return node.id
if not hasattr(node, 'value'):
return None
val = node.value
parts = [node.attr]
while True:
if isinstance(val, ast.Attribute):
parts.append(val.attr)
val = val.value
elif isinstance(val, ast.Name):
parts.append(val.id)
break
else: # it's more than just a simple dotted name
return None
return '.'.join(parts[::-1])
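# --- Editorial sketch, not part of the original module -----------------------
# _to_str flattens a dotted attribute access back into its source string, e.g.:
#
#   node = ast.parse('pkg.mod.Klass', mode='eval').body   # an ast.Attribute
#   _to_str(node)                                         # -> 'pkg.mod.Klass'
#   _to_str(ast.Name(id='Klass'))                         # -> 'Klass'
#
# Anything more complicated than a plain dotted name returns None.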
# the following are interfaces that get added via the add_delegate decorator
_delegate_ifaces = {
'HasParameters': 'IHasParameters',
'HasConstraints': 'IHasConstraints',
'HasEqConstraints': 'IHasEqConstraints',
'HasInEqConstraints': 'IHasInEqConstraints',
'HasObjective': 'IHasObjective',
'HasObjectives': 'IHasObjectives',
'HasStopConditions': 'IHasStopConditions',
'HasEvents': 'IHasEvents',
'HasCouplingVars': 'IHasCouplingVars',
}
class ClassInfo(object):
def __init__(self, name, fname, bases, meta, decorators):
self.name = name
self.fname = fname
self.bases = bases
self.meta = meta
ifaces = meta.setdefault('ifaces', [])
for dec in decorators:
if dec.func.id == 'add_delegate':
for arg in [_to_str(a) for a in dec.args]:
if arg in _delegate_ifaces:
ifaces.append(_delegate_ifaces[arg])
class _ClassBodyVisitor(ast.NodeVisitor):
def __init__(self):
self.metadata = {}
ast.NodeVisitor.__init__(self)
def visit_ClassDef(self, node):
for bnode in node.body:
self.visit(bnode)
def visit_Call(self, node):
if isinstance(node.func, ast.Name) and node.func.id == 'implements':
for arg in node.args:
if isinstance(arg, ast.Name):
self.metadata.setdefault('ifaces', []).append(arg.id)
def visit_Assign(self, node):
if len(self.metadata)==0 and len(node.targets) == 1:
lhs = node.targets[0]
if isinstance(lhs, ast.Name) and lhs.id == '__openmdao_meta__':
dct = ast.literal_eval(node.value)
dct.setdefault('ifaces', [])
dct['ifaces'].extend(self.metadata.setdefault('ifaces', []))
self.metadata.update(dct)
class PythonSourceFileAnalyser(ast.NodeVisitor):
"""Collects info about imports and class inheritance from a
Python file.
"""
def __init__(self, fname, tree_analyser):
ast.NodeVisitor.__init__(self)
self.fname = os.path.abspath(os.path.expanduser(fname))
self.modpath = get_module_path(fname)
self.classes = {}
self.localnames = {} # map of local names to package names
self.starimports = []
self.unresolved_classes = set()
self.tree_analyser = tree_analyser
# in order to get this to work with the 'ast' lib, I have
# to read using universal newlines and append a newline
# to the string I read for some files. The 'compiler' lib
# didn't have this problem. :(
f = open(self.fname, 'Ur')
try:
contents = f.read()
if len(contents)>0 and contents[-1] != '\n':
contents += '\n'
for node in ast.walk(ast.parse(contents, self.fname)):
self.visit(node)
finally:
f.close()
self.update_graph(self.tree_analyser.graph)
self.update_ifaces(self.tree_analyser.graph)
self.tree_analyser = None
def visit_ClassDef(self, node):
"""This executes every time a class definition is parsed."""
fullname = '.'.join([self.modpath, node.name])
self.localnames[node.name] = fullname
bases = [_to_str(b) for b in node.bases if b is not None]
bvisitor = _ClassBodyVisitor()
bvisitor.visit(node)
bases = [self.localnames.get(b, b) for b in bases if b is not None]
self.classes[fullname] = ClassInfo(fullname, self.fname, bases,
bvisitor.metadata,
node.decorator_list)
self.tree_analyser.class_map[fullname] = self.classes[fullname]
undef_bases = [b for b in bases if b not in self.classes and not hasattr(__builtin__, b)]
while undef_bases:
base = undef_bases.pop()
cinfo = self.tree_analyser.find_classinfo(base)
if cinfo is None:
parts = base.rsplit('.', 1)
if len(parts) == 1: # no dot, so maybe it came in with a '*' import
trymods = self.starimports[::-1]
basename = base
else:
trymods = [parts[0]]
basename = parts[1]
for modname in trymods:
excluded = False
for m in self.tree_analyser.mod_excludes:
if m == modname or modname.startswith(m+'.'):
excluded = True
break
if excluded:
continue
fpath = find_module(modname)
if fpath is not None:
fanalyzer = self.tree_analyser.analyze_file(fpath)
if '.' not in base:
trybase = '.'.join([modname, base])
else:
trybase = base
if trybase in fanalyzer.classes:
break
elif basename in fanalyzer.localnames:
newname = fanalyzer.localnames[basename]
self.tree_analyser.class_map[trybase] = newname
if newname not in self.tree_analyser.class_map and \
newname not in self.unresolved_classes:
undef_bases.append(newname)
break
else:
self.unresolved_classes.add(base)
def visit_Import(self, node):
"""This executes every time an "import foo" style import statement
is parsed.
"""
for al in node.names:
if al.asname is None:
self.localnames[al.name] = al.name
else:
self.localnames[al.asname] = al.name
def visit_ImportFrom(self, node):
"""This executes every time a "from foo import bar" style import
statement is parsed.
"""
# need the following to handle relative imports
if node.level == 0:
module = node.module
else:
parts = self.modpath.split('.')
module = '.'.join(parts[0:len(parts)-node.level])
if node.module is not None:
                module = '.'.join([module, node.module])
for al in node.names:
if al.name == '*':
self.starimports.append(module)
continue
if al.asname is None:
self.localnames[al.name] = '.'.join([module, al.name])
else:
self.localnames[al.asname] = '.'.join([module, al.name])
def update_graph(self, graph):
"""Update the inheritance/implements graph."""
for classname, classinfo in self.classes.items():
graph.add_node(classname, classinfo=classinfo)
for base in classinfo.bases:
cinfo = self.tree_analyser.find_classinfo(base)
if cinfo:
base = cinfo.name
graph.add_edge(base, classname)
for iface in classinfo.meta.setdefault('ifaces', []):
graph.add_edge(iface, classname)
def update_ifaces(self, graph):
"""Update our ifaces metadata based on the contents of the
inheritance/implements graph.
"""
for iface in iface_set:
try:
paths = nx.shortest_path(graph, source=iface)
except KeyError:
continue
for cname, cinfo in self.classes.items():
if cname in paths:
cinfo.meta.setdefault('ifaces', []).append(iface)
cinfo.meta['ifaces'] = list(set(cinfo.meta['ifaces']))
class PythonSourceTreeAnalyser(object):
def __init__(self, startdir=None, exclude=None, mod_excludes=None,
direxclude=None):
self.files_count = 0 # number of files analyzed
# inheritance graph. It's a directed graph with base classes pointing
# to the classes that inherit from them. Also includes interfaces
# pointing to classes that implement them.
self.graph = nx.DiGraph()
if isinstance(startdir, basestring):
self.startdirs = [startdir]
elif startdir is None:
self.startdirs = []
else:
self.startdirs = startdir
self.startdirs = [os.path.expandvars(os.path.expanduser(d)) for d in self.startdirs]
if mod_excludes is None:
self.mod_excludes = set(['traits', 'zope', 'ast'])
else:
self.mod_excludes = mod_excludes
self.modinfo = {} # maps module pathnames to PythonSourceFileAnalyzers
self.fileinfo = {} # maps filenames to (PythonSourceFileAnalyzer, modtime)
self.class_map = {} # map of classname to ClassInfo for the class
for pyfile in find_files(self.startdirs, "*.py", exclude=exclude,
direxclude=direxclude):
self.analyze_file(pyfile)
def dump(self, out, options):
"""Dumps the contents of this object for debugging purposes."""
for f, tup in self.fileinfo.items():
out.write("%s\n" % os.path.relpath(f))
if options.showclasses and tup[0].classes:
out.write(" classes:\n")
for item, cinfo in tup[0].classes.items():
out.write(" %s\n" % item)
if options.showbases and cinfo.bases and cinfo.bases != ['object']:
out.write(" bases:\n")
for base in cinfo.bases:
if base in tup[0].unresolved_classes:
out.write(" ???(%s)\n" % base)
else:
out.write(" %s\n" % base)
if options.showifaces and cinfo.meta.get('ifaces'):
out.write(" interfaces:\n")
for iface in cinfo.meta['ifaces']:
out.write(" %s\n" % iface)
out.write("\n\nFiles examined: %d\n\n" % self.files_count)
def find_classinfo(self, cname):
cinfo = cname
prev = None
while cinfo != prev:
prev = cinfo
try:
cinfo = self.class_map[cinfo]
except KeyError:
return None
if isinstance(cinfo, ClassInfo):
return cinfo
return None
def analyze_file(self, pyfile, use_cache=False):
# don't analyze the file again if we've already done it and it hasn't
# changed. If `use_cache` is True then lookup/record in global cache.
mtime = os.path.getmtime(pyfile)
if pyfile in self.fileinfo:
if mtime <= self.fileinfo[pyfile][1]:
return self.fileinfo[pyfile][0]
if use_cache:
info = _FileInfoCache.lookup(pyfile)
if info is not None and mtime <= info[1]:
self.fileinfo[pyfile] = info
return info[0]
#logger.debug("analyzing %s", pyfile)
myvisitor = PythonSourceFileAnalyser(pyfile, self)
self.modinfo[get_module_path(pyfile)] = myvisitor
self.fileinfo[myvisitor.fname] = (myvisitor,
os.path.getmtime(myvisitor.fname))
self.files_count += 1
if use_cache:
_FileInfoCache.record(pyfile, (myvisitor,
os.path.getmtime(myvisitor.fname)))
return myvisitor
def flush_cache(self):
_FileInfoCache.save()
def remove_file(self, fname):
fvisitor = self.fileinfo[fname][0]
del self.fileinfo[fname]
del self.modinfo[fvisitor.modpath]
nodes = []
for klass, cinfo in self.class_map.items():
if isinstance(cinfo, ClassInfo):
if cinfo.fname == fname:
nodes.append(klass)
else:
modname = get_module_path(fname)+'.'
if klass.startswith(modname) or cinfo.startswith(modname):
nodes.append(klass)
self.graph.remove_nodes_from(nodes)
for klass in nodes:
del self.class_map[klass]
def find_inheritors(self, base):
"""Returns a list of names of classes that inherit from the given base
class."""
try:
paths = nx.shortest_path(self.graph, source=base, target=None)
except KeyError:
return []
del paths[base] # don't want the base itself in the list
return paths.keys()
def get_interfaces(self, classname):
''' Returns a set of interfaces for a given class.'''
ifaces = set()
klass = self.find_classinfo(classname)
if klass:
ifaces.update(klass.meta['ifaces'])
for base in klass.bases:
ifaces.update(self.get_interfaces(base))
return ifaces
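# --- Editorial sketch, not part of the original module -----------------------
# Typical use of the analyser mirrors main() below: point it at one or more
# source directories and query the inheritance graph. The directory and class
# names here are placeholders.
#
#   psta = PythonSourceTreeAnalyser(startdir='/path/to/plugins')
#   subclasses = psta.find_inheritors('some.module.SomeBaseClass')
#   ifaces = psta.get_interfaces('some.module.SomeSubClass')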
class _FileInfoCache(object):
""" Retains file analysis information. """
_cache = None
_dirty = False
@staticmethod
def lookup(path):
""" Return analysis info for `path`. """
if _FileInfoCache._cache is None:
_FileInfoCache._load()
return _FileInfoCache._cache.get(path)
@staticmethod
def record(path, info):
""" Record analysis info for `path`. """
_FileInfoCache._cache[path] = info
_FileInfoCache._dirty = True
@staticmethod
def save():
""" Save analysis info to file. """
if _FileInfoCache._dirty:
out = _FileInfoCache._open('wb')
cPickle.dump(_FileInfoCache._cache, out, cPickle.HIGHEST_PROTOCOL)
out.close()
_FileInfoCache._dirty = False
@staticmethod
def _load():
""" Load analysis info from file. """
_FileInfoCache._cache = {}
try:
inp = _FileInfoCache._open('rb')
except Exception:
return
# Full test with coverage removes cache at start, so we won't get here.
try: #pragma no cover
_FileInfoCache._cache = cPickle.load(inp)
except Exception: #pragma no cover
return
finally: #pragma no cover
inp.close()
@staticmethod
def _open(mode):
""" Return opened file for '~/.openmdao/fileanalyzer.dat'. """
filename = \
os.path.expanduser(os.path.join('~', '.openmdao', 'fileanalyzer.dat'))
dirname = os.path.dirname(filename)
# Full test with coverage leaves directory intact.
if not os.path.exists(dirname): #pragma no cover
os.mkdir(dirname)
return open(filename, mode)
def main():
from argparse import ArgumentParser
parser = ArgumentParser()
parser.add_argument("-c", "--classes", action='store_true', dest='showclasses',
help="show classes found")
parser.add_argument("-b", "--bases", action="store_true", dest="showbases",
help="show base classes (only works if --classes is active)")
parser.add_argument("-i", "--interfaces", action="store_true", dest="showifaces",
help="show interfaces of classes (only works if --classes is active)")
parser.add_argument("-u", "--use-cache", action='store_true', dest='use_cache',
help="use analysis cache")
parser.add_argument('files', metavar='fname', type=str, nargs='+',
help='a file or directory to be scanned')
options = parser.parse_args()
print options.use_cache
stime = time.time()
psta = PythonSourceTreeAnalyser()
for f in options.files:
f = os.path.abspath(os.path.expanduser(f))
if os.path.isdir(f):
for pyfile in find_files(f, "*.py", exclude=lambda n: 'test' in n.split(os.sep)):
psta.analyze_file(pyfile, use_cache=options.use_cache)
else:
psta.analyze_file(f, use_cache=options.use_cache)
psta.dump(sys.stdout, options)
sys.stdout.write("elapsed time: %s seconds\n\n" % (time.time() - stime))
if options.use_cache:
_FileInfoCache.save()
if __name__ == '__main__':
main()
|
HyperloopTeam/FullOpenMDAO
|
lib/python2.7/site-packages/openmdao.util-0.13.0-py2.7.egg/openmdao/util/dep.py
|
Python
|
gpl-2.0
| 18,429
|
[
"VisIt"
] |
6c058b4544c092bce6f43e70fb3d6dd92833a8d780d5f4d00e3b4d9f76e7842c
|
#
# mainTab
#
tab = self.notebook.mainTab
tab.settings['Program'] = 'abinit'
tab.settings['Output file name'] = 'BaTiO3.out'
#
# SettingsTab
#
tab = self.notebook.settingsTab
tab.settings['Eckart flag'] = False
tab.settings['Neutral Born charges'] = False
tab.settings['Sigma value'] = 5
tab.settings['Mass definition'] = 'program'
#
# 0th Scenario tabs
#
tab = self.notebook.scenarios[0]
tab.settings['Matrix'] = 'ptfe'
tab.settings['Mass or volume fraction'] = 'volume'
tab.settings['Volume fraction'] = 0.1
tab.settings['Ellipsoid a/b'] = 0.5
tab.settings['Unique direction - h'] = 0
tab.settings['Unique direction - k'] = 0
tab.settings['Unique direction - l'] = 1
tab.settings['Effective medium method'] = 'Averaged permittivity'
tab.settings['Particle shape'] = 'Sphere'
tab.settings['Legend'] = 'Averaged permittivity'
# Add new scenarios
shapes = ['Plate', 'Ellipsoid', 'Plate']
hkls = [[0,0,1], [0,0,1], [1,0,0]]
for shape,hkl in zip(shapes,hkls):
self.notebook.addScenario()
tab = self.notebook.scenarios[-1]
tab.settings['Particle shape'] = shape
tab.settings['Effective medium method'] = 'Maxwell-Garnett'
tab.settings['Unique direction - h'] = hkl[0]
tab.settings['Unique direction - k'] = hkl[1]
tab.settings['Unique direction - l'] = hkl[2]
tab.settings['Legend'] = 'Maxwell-Garnett '+shape+' '+str(hkl)
#
# Plotting Tab
#
tab = self.notebook.plottingTab
tab.settings['Minimum frequency'] = 0
tab.settings['Maximum frequency'] = 400
tab.settings['Frequency increment'] = 0.2
tab.settings['Molar definition'] = 'Unit cells'
tab.settings['Plot title'] = 'AbInit BaTiO3 Calculation'
#
# Analysis Tab
#
tab = self.notebook.analysisTab
tab.settings['Minimum frequency'] = -1
tab.settings['Maximum frequency'] = 400
tab.settings['title'] = 'Analysis'
tab.settings['Covalent radius scaling'] = 1.1
tab.settings['Bonding tolerance'] = 0.1
tab.settings['Bar width'] = 0.5
#
|
JohnKendrick/PDielec
|
Examples/AbInit/BaTiO3/script.py
|
Python
|
mit
| 1,921
|
[
"ABINIT"
] |
c99d6a0b4d61680edb1dbb517e66a2e9d86fc4aa33084afe6a24d08c2dc61c06
|
import numpy as np
from ase.lattice import bulk
from ase.optimize import BFGS
from ase.io import PickleTrajectory
from ase.constraints import StrainFilter
from gpaw import GPAW, PW
co = bulk('Co')
co.set_initial_magnetic_moments([1.6, 1.6])
co.calc = GPAW(mode=PW(700),
xc='PBE',
kpts=(8, 8, 4),
txt='co.txt')
BFGS(StrainFilter(co)).run(0.005)
a0 = co.cell[0, 0]
c0 = co.cell[2, 2]
traj = PickleTrajectory('co.traj', 'w')
eps = 0.01
for a in a0 * np.linspace(1 - eps, 1 + eps, 3):
for c in c0 * np.linspace(1 - eps, 1 + eps, 3):
co.set_cell(bulk('Co', a=a, covera=c / a).cell, scale_atoms=True)
co.get_potential_energy()
traj.write(co)
|
robwarm/gpaw-symm
|
gpaw/test/big/stress/co.py
|
Python
|
gpl-3.0
| 716
|
[
"ASE",
"GPAW"
] |
ec09010fff0404aca5bdb758075b1a07a5b1b04f643d49848f751aa4bea8fb96
|
# Copyright (C) 2013 Oskar Maier
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# author Oskar Maier
# version r0.1.1
# since 2012-05-28
# status Release
# build-in modules
import os
# third-party modules
import scipy
# own modules
from ..core import Logger
from ..core import ImageTypeError, DependencyError,\
ImageSavingError
from .header import __update_header_from_array_nibabel,\
__is_header_itk, __is_header_nibabel, copy_meta_data
# !TODO: Change to not work with the Exceptions anymore, as these hides bugs!
# code
def save(arr, filename, hdr = False, force = True):
r"""
Save the image ``arr`` as filename using information encoded in ``hdr``. The target image
format is determined by the ``filename`` suffix. If the ``force`` parameter is set to true,
an already existing image is overwritten silently. Otherwise an error is thrown.
The header (``hdr``) object is the one returned by `~medpy.io.load.load` and is only used
if the source and target image formats are the same.
Generally this function does not guarantee, that metadata other than the image shape
and pixel data type are kept.
The supported file formats depend on the installed third party modules. This method
includes support for the NiBabel package and for ITK python wrappers created with
WrapITK. Note that for the later it is import how it has been compiled.
NiBabel enables support for:
- NifTi - Neuroimaging Informatics Technology Initiative (.nii, nii.gz)
- Analyze (plain, SPM99, SPM2) (.hdr/.img, .img.gz)
- and some others (http://nipy.sourceforge.net/nibabel/)
WrapITK enables support for:
- NifTi - Neuroimaging Informatics Technology Initiative (.nii, nii.gz)
- Analyze (plain, SPM99, SPM2) (.hdr/.img, .img.gz)
- Dicom - Digital Imaging and Communications in Medicine (.dcm, .dicom)
- Itk/Vtk MetaImage (.mhd, .mha/.raw)
- Nrrd - Nearly Raw Raster Data (.nhdr, .nrrd)
- and many others (http://www.cmake.org/Wiki/ITK/File_Formats)
    Generally we advise using the NiBabel third-party package, which is implemented in pure
    Python and whose support for NifTi (.nii) and Analyze 7.5 (.hdr/.img) is excellent
    and comprehensive.
    For information about which image formats, dimensionalities and pixel data types
    your current configuration supports, see :mod:`test.io.loadsave`. There you can
find an automated test method.
Some known restrictions are explicit, independent of the third party modules or how
they were compiled:
- DICOM does not support images of 4 or more dimensions (Danger: ITK actually
saves the image without signaling an error. But the dimensionality is reduced
to 3 dimensions).
- DICOM does not support pixel data of uint32/64 and float32/64/128.
- ITK does not support images with less than 2 dimensions.
- ITK does not support pixel data of uint64, int64 and float128.
        - JPEG, PIC are unstable in the sense that differences between the saved and the
          loaded data can occur.
- GIPL images are always loaded as 3D, even if they have been saved as 2D images.
- PNG images are always loaded as 2D, even if they have been saved as 3D images.
Further information:
- http://nipy.sourceforge.net/nibabel/ : The NiBabel python module
- http://www.cmake.org/Wiki/ITK/File_Formats : Supported file formats and data types by ITK
- http://code.google.com/p/pydicom/ : The PyDicom python module
Parameters
----------
arr : array_like
The image data.
filename : string
Where to save the image; path and filename including the image suffix.
hdr : object
The image header containing the metadata.
force : bool
        Set to True to overwrite an already existing image silently.
Raises
------
ImageTypeError
If attempting to save as an unsupported image type.
DependencyError
        If a required third-party module is missing or has been
        compiled without support for the target image format.
    ImageSavingError
        If the image could not be saved for various reasons.
"""
###############################
# responsibility dictionaries #
###############################
# These dictionaries determine which third-party module is responsible to save which
# image type. For extending the loaders functionality, just create the required
# private loader functions (__save_<name>) and register them here.
suffix_to_type = {'nii': 'nifti', # mapping from file suffix to type string
'nii.gz': 'nifti',
'hdr': 'analyze',
'img': 'analyze',
'img.gz': 'analyze',
'dcm': 'dicom',
'dicom': 'dicom',
'mhd': 'meta',
'mha': 'meta',
'nrrd': 'nrrd',
'nhdr': 'nrrd',
'png': 'png',
'bmp': 'bmp',
'tif': 'tif',
'tiff': 'tif',
'jpg': 'jpg',
'jpeg': 'jpg'}
type_to_string = {'nifti': 'NifTi - Neuroimaging Informatics Technology Initiative (.nii, nii.gz)', # mapping from type string to description string
'analyze': 'Analyze (plain, SPM99, SPM2) (.hdr/.img, .img.gz)',
'dicom': 'Dicom - Digital Imaging and Communications in Medicine (.dcm, .dicom)',
'meta': 'Itk/Vtk MetaImage (.mhd, .mha/.raw)',
'nrrd': 'Nrrd - Nearly Raw Raster Data (.nhdr, .nrrd)',
'png': 'Portable Network Graphics (.png)',
'bmp': 'Bitmap Image File (.bmp)',
'tif': 'Tagged Image File Format (.tif,.tiff)',
'jpg': 'Joint Photographic Experts Group (.jpg, .jpeg)'}
type_to_function = {'nifti': __save_nibabel, # mapping from type string to responsible loader function
'analyze': __save_nibabel,
'dicom': __save_itk,
'meta': __save_itk,
'nrrd': __save_itk,
'png': __save_itk,
'bmp': __save_itk,
'tif': __save_itk,
'jpg': __save_itk}
save_fallback_order = [__save_nibabel, __save_itk] # list and order of loader function to use in case of fallback to brute-force
########
# code #
########
logger = Logger.getInstance()
logger.info('Saving image as {}...'.format(filename))
# Check image file existence
if not force and os.path.exists(filename):
raise ImageSavingError('The target file {} already exists.'.format(filename))
# Try normal saving
try:
# determine two suffixes (the second one of the compound of the two last elements)
suffix = filename.split('.')[-1].lower()
if not suffix in suffix_to_type:
suffix = '.'.join(map(lambda x: x.lower(), filename.split('.')[-2:]))
if not suffix in suffix_to_type: # otherwise throw an Exception that will be caught later on
raise KeyError()
# determine image type by ending
image_type = suffix_to_type[suffix]
# determine responsible function
saver = type_to_function[image_type]
try:
# load the image
return saver(arr, hdr, filename)
except ImportError as e:
raise DependencyError('Saving images of type {} requires a third-party module that could not be encountered. Reason: {}.'.format(type_to_string[image_type], e))
except Exception as e:
raise ImageSavingError('Failed to save image {} as type {}. Reason signaled by third-party module: {}'.format(filename, type_to_string[image_type], e))
except KeyError:
raise ImageTypeError('The ending {} of {} could not be associated with any known image type. Supported types are: {}'.format(filename.split('.')[-1], filename, type_to_string.values()))
# Try brute force
logger.debug('Normal saving failed. Entering brute force mode.')
for saver in save_fallback_order:
try:
return saver(arr, hdr, filename)
except Exception as e:
logger.debug('Module {} signaled error: {}.'.format(saver, e))
    raise e  # re-raise the last error reported by the fallback savers
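# --- Editorial sketch, not part of the original module -----------------------
# The intended round trip, assuming the companion load() described in the
# docstring above; the file names are placeholders.
#
#   from medpy.io.load import load
#   from medpy.io.save import save
#   arr, hdr = load('image.nii.gz')
#   # ... modify arr ...
#   save(arr, 'image_out.nii.gz', hdr, force=True)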
def __save_itk(arr, hdr, filename):
"""
Image saver using the third-party module itk.
@param arr the image data
    @param hdr the image header with meta-information
@param filename the target location
"""
import itk
from medpy.itkvtk.utilities import itku
logger = Logger.getInstance()
logger.debug('Saving image as {} with Itk...'.format(filename))
# determine image type from array
image_type = itku.getImageTypeFromArray(arr)
# convert array to itk image
try:
img = itku.getImageFromArray(arr)
except KeyError:
raise DependencyError('The itk python PyBuffer transition object was compiled without support for image of type {}.'.format(image_type))
# if original image object was provided with hdr, try to use it for creating the image object
if __is_header_itk(hdr):
# save original image shape / largest possible region
shape = []
for i in range(img.GetLargestPossibleRegion().GetImageDimension()):
shape.append(img.GetLargestPossibleRegion().GetSize().GetElement(i))
# copy meta data
try:
img.CopyInformation(hdr.GetPointer())
# reset largest possible region / shape to original value
for i in range(len(shape)):
img.GetLargestPossibleRegion().GetSize().SetElement(i, shape[i])
        except RuntimeError as e: # raised when the meta-data information could not be copied (e.g. when the two images' ndims differ)
            logger.debug('The meta-information could not be copied from the old header. CopyInformation signaled: {}.'.format(e))
pass
elif hdr: # otherwise copy meta-data information as far as possible
copy_meta_data(img, hdr)
# save the image
writer = itk.ImageFileWriter[image_type].New()
writer.SetFileName(filename)
writer.SetInput(img.GetPointer())
writer.Update()
def __save_nibabel(arr, hdr, filename):
"""
Image saver using the third-party module nibabel.
@param arr the image data
    @param hdr the image header with meta-information
@param filename the target location
"""
import nibabel
from ..utilities import nibabelu
logger = Logger.getInstance()
logger.debug('Saving image as {} with NiBabel...'.format(filename))
# convert type bool to int8
if scipy.bool_ == arr.dtype:
arr = arr.astype(scipy.uint8)
# if original image object was provided with hdr, try to use it for creating the image object
if hdr and __is_header_nibabel(hdr):
__update_header_from_array_nibabel(hdr, arr)
image = nibabelu.image_like(arr, hdr)
# if not, create new image object and copy meta data as far as possible
else:
image = nibabelu.image_new(arr, filename)
if hdr: copy_meta_data(image, hdr)
# save image
nibabel.save(image, filename)
|
lfrdm/medpy
|
medpy/io/save.py
|
Python
|
gpl-3.0
| 12,228
|
[
"VTK"
] |
7349216bbaf053232af26c6aae7f67db7ceb6096ed6e9052406eef680c7f5cdb
|
from __future__ import print_function
import pylab as plt
import numpy as np
from astrometry.util.plotutils import dimshow
from legacypipe.survey import get_rgb, tim_get_resamp
from legacypipe.coadds import quick_coadds
def fitblobs_plots_2(blobs, refstars, ps):
plt.clf()
dimshow(blobs>=0, vmin=0, vmax=1)
ax = plt.axis()
plt.plot(refstars.ibx, refstars.iby, 'ro')
for ref in refstars:
magstr = ref.ref_cat
if ref.ref_cat == 'T2':
mag = ref.mag
magstr = 'T(%.1f)' % mag
elif ref.ref_cat == 'G2':
mag = ref.phot_g_mean_mag
magstr = 'G(%.1f)' % mag
plt.text(ref.ibx, ref.iby, magstr,
color='r', fontsize=10,
bbox=dict(facecolor='w', alpha=0.5))
plt.axis(ax)
plt.title('Reference stars')
ps.savefig()
def fitblobs_plots(tims, bands, targetwcs, blobslices, blobsrcs, cat,
blobs, ps):
coimgs,_ = quick_coadds(tims, bands, targetwcs)
plt.clf()
dimshow(get_rgb(coimgs, bands))
ax = plt.axis()
for i,bs in enumerate(blobslices):
sy,sx = bs
by0,by1 = sy.start, sy.stop
bx0,bx1 = sx.start, sx.stop
plt.plot([bx0, bx0, bx1, bx1, bx0], [by0, by1, by1, by0, by0],'r-')
plt.text((bx0+bx1)/2., by0, '%i' % i,
ha='center', va='bottom', color='r')
plt.axis(ax)
plt.title('Blobs')
ps.savefig()
for i,Isrcs in enumerate(blobsrcs):
for isrc in Isrcs:
src = cat[isrc]
ra,dec = src.getPosition().ra, src.getPosition().dec
_,x,y = targetwcs.radec2pixelxy(ra, dec)
plt.text(x, y, 'b%i/s%i' % (i,isrc),
ha='center', va='bottom', color='r')
plt.axis(ax)
plt.title('Blobs + Sources')
ps.savefig()
plt.clf()
dimshow(blobs)
ax = plt.axis()
for i,bs in enumerate(blobslices):
sy,sx = bs
by0,by1 = sy.start, sy.stop
bx0,bx1 = sx.start, sx.stop
plt.plot([bx0,bx0, bx1, bx1, bx0], [by0, by1, by1, by0, by0], 'r-')
plt.text((bx0+bx1)/2., by0, '%i' % i,
ha='center', va='bottom', color='r')
plt.axis(ax)
plt.title('Blobs')
ps.savefig()
plt.clf()
dimshow(blobs != -1)
ax = plt.axis()
for i,bs in enumerate(blobslices):
sy,sx = bs
by0,by1 = sy.start, sy.stop
bx0,bx1 = sx.start, sx.stop
plt.plot([bx0, bx0, bx1, bx1, bx0], [by0, by1, by1, by0,by0], 'r-')
plt.text((bx0+bx1)/2., by0, '%i' % i,
ha='center', va='bottom', color='r')
plt.axis(ax)
plt.title('Blobs')
ps.savefig()
def detection_plots_2(tims, bands, targetwcs, refstars, Tnew, hot,
saturated_pix, ps):
coimgs,_ = quick_coadds(tims, bands, targetwcs)
crossa = dict(ms=10, mew=1.5)
plt.clf()
dimshow(get_rgb(coimgs, bands))
plt.title('Detections')
ps.savefig()
ax = plt.axis()
if len(refstars):
I, = np.nonzero([len(r) and r[0] == 'T' for r in refstars.ref_cat])
if len(I):
plt.plot(refstars.ibx[I], refstars.iby[I], '+', color=(0,1,1),
label='Tycho-2', **crossa)
I, = np.nonzero([len(r) and r[0] == 'G' for r in refstars.ref_cat])
if len(I):
plt.plot(refstars.ibx[I], refstars.iby[I], '+',
color=(0.2,0.2,1), label='Gaia', **crossa)
I, = np.nonzero([len(r) and r[0] == 'L' for r in refstars.ref_cat])
if len(I):
plt.plot(refstars.ibx[I], refstars.iby[I], '+',
color=(0.6,0.6,0.2), label='Large Galaxy', **crossa)
plt.plot(Tnew.ibx, Tnew.iby, '+', color=(0,1,0),
label='New SED-matched detections', **crossa)
plt.axis(ax)
plt.title('Detections')
plt.legend(loc='upper left')
ps.savefig()
plt.clf()
plt.subplot(1,2,1)
dimshow(hot, vmin=0, vmax=1, cmap='hot')
plt.title('hot')
plt.subplot(1,2,2)
H,W = targetwcs.shape
rgb = np.zeros((H,W,3))
for i,satpix in enumerate(saturated_pix):
rgb[:,:,2-i] = satpix
dimshow(rgb)
plt.title('saturated_pix')
ps.savefig()
def detection_plots(detmaps, detivs, bands, saturated_pix, tims,
targetwcs, refstars, large_galaxies, gaia_stars, ps):
rgb = get_rgb(detmaps, bands)
plt.clf()
dimshow(rgb)
plt.title('detmaps')
ps.savefig()
for i,satpix in enumerate(saturated_pix):
rgb[:,:,2-i][satpix] = 1
plt.clf()
dimshow(rgb)
plt.title('detmaps & saturated')
ps.savefig()
coimgs,_ = quick_coadds(tims, bands, targetwcs, fill_holes=False)
if refstars:
plt.clf()
dimshow(get_rgb(coimgs, bands))
ax = plt.axis()
lp,lt = [],[]
tycho = refstars[refstars.isbright]
if len(tycho):
_,ix,iy = targetwcs.radec2pixelxy(tycho.ra, tycho.dec)
p = plt.plot(ix-1, iy-1, 'o', mew=3, ms=14, mec='r', mfc='none')
lp.append(p)
lt.append('Tycho-2 only')
if gaia_stars:
gaia = refstars[refstars.isgaia]
if gaia_stars and len(gaia):
_,ix,iy = targetwcs.radec2pixelxy(gaia.ra, gaia.dec)
p = plt.plot(ix-1, iy-1, 'o', mew=3, ms=10, mec='c', mfc='none')
for x,y,g in zip(ix,iy,gaia.phot_g_mean_mag):
plt.text(x, y, '%.1f' % g, color='k',
bbox=dict(facecolor='w', alpha=0.5))
lp.append(p)
lt.append('Gaia')
# star_clusters?
if large_galaxies:
galaxies = refstars[refstars.islargegalaxy]
if large_galaxies and len(galaxies):
_,ix,iy = targetwcs.radec2pixelxy(galaxies.ra, galaxies.dec)
p = plt.plot(ix-1, iy-1, 'o', mew=3, ms=14, mec=(0,1,0), mfc='none')
lp.append(p)
lt.append('Galaxies')
plt.axis(ax)
plt.title('Ref sources')
plt.figlegend([p[0] for p in lp], lt)
ps.savefig()
for band, detmap,detiv in zip(bands, detmaps, detivs):
plt.clf()
plt.subplot(2,1,1)
plt.hist((detmap * np.sqrt(detiv))[detiv>0], bins=50, range=(-5,8), log=True)
plt.title('Detection map pixel values (sigmas): band %s' % band)
plt.subplot(2,1,2)
plt.hist((detmap * np.sqrt(detiv))[detiv>0], bins=50, range=(-5,8))
ps.savefig()
def halo_plots_before(tims, bands, targetwcs, halostars, ps):
coimgs,_ = quick_coadds(tims, bands, targetwcs)
plt.clf()
dimshow(get_rgb(coimgs, bands))
ax = plt.axis()
plt.plot(halostars.ibx, halostars.iby, 'o', mec='r', ms=15, mfc='none')
plt.axis(ax)
plt.title('Before star halo subtraction')
ps.savefig()
return coimgs
def halo_plots_after(tims, bands, targetwcs, halostars, coimgs, ps):
coimgs2,_ = quick_coadds(tims, bands, targetwcs)
plt.clf()
dimshow(get_rgb(coimgs2, bands))
ax = plt.axis()
plt.plot(halostars.ibx, halostars.iby, 'o', mec='r', ms=15, mfc='none')
plt.axis(ax)
plt.title('After star halo subtraction')
ps.savefig()
plt.clf()
dimshow(get_rgb([co-co2 for co,co2 in zip(coimgs,coimgs2)],
bands))
ax = plt.axis()
plt.plot(halostars.ibx, halostars.iby, 'o', mec='r', ms=15, mfc='none')
plt.axis(ax)
plt.title('Subtracted halos')
ps.savefig()
for g in halostars[:10]:
plt.clf()
pixscale = targetwcs.pixel_scale()
pixrad = int(g.radius * 3600. / pixscale)
ax = [g.ibx-pixrad, g.ibx+pixrad, g.iby-pixrad, g.iby+pixrad]
ima = dict(interpolation='nearest', origin='lower')
plt.subplot(2,2,1)
plt.imshow(get_rgb(coimgs, bands), **ima)
plt.plot(halostars.ibx, halostars.iby, 'o', mec='r', ms=15, mfc='none')
plt.axis(ax)
plt.subplot(2,2,2)
plt.imshow(get_rgb(coimgs2, bands), **ima)
plt.axis(ax)
plt.subplot(2,2,3)
plt.imshow(get_rgb([co-co2 for co,co2 in zip(coimgs,coimgs2)],
bands), **ima)
plt.axis(ax)
ps.savefig()
def tim_plots(tims, bands, ps):
# Pixel histograms of subimages.
for b in bands:
sig1 = np.median([tim.sig1 for tim in tims if tim.band == b])
plt.clf()
# First select the histogram range...
blo,bhi = 0., 0.
for tim in tims:
if tim.band != b:
continue
# broaden range to encompass most pixels... only req'd
# when sky is bad
pix = tim.getImage()[tim.getInvError() > 0]
blo = min(blo, -5.*tim.sig1)
bhi = max(bhi, +5.*tim.sig1)
blo = min(blo, np.percentile(pix, 5))
bhi = max(bhi, np.percentile(pix, 95))
# Now plot histograms
for tim in tims:
if tim.band != b:
continue
# clip to limits to show pixels outside range
pix = tim.getImage()[tim.getInvError() > 0]
plt.hist(np.clip(pix, blo, bhi), range=(blo, bhi), bins=50, histtype='step',
alpha=0.5, label=tim.name)
# Print argmin unmasked pixel.
pix = tim.getImage() * (tim.getInvError() > 0)
i = np.argmin(pix)
iy,ix = np.unravel_index(i, pix.shape)
print('Image', tim, 'lowest pixel: %i,%i (with tim x0,y0 = %i,%i --> %i,%i) value %.3g' %
(ix, iy, tim.x0, tim.y0, ix+tim.x0, iy+tim.y0, pix[iy,ix]))
plt.xlim(blo, bhi)
plt.legend()
plt.xlabel('Pixel values')
plt.title('Pixel distributions: %s band' % b)
ps.savefig()
plt.clf()
lo,hi = -5., 5.
for tim in tims:
if tim.band != b:
continue
ie = tim.getInvError()
pix = (tim.getImage() * ie)[ie > 0]
plt.hist(np.clip(pix, lo, hi), range=(lo, hi), bins=50, histtype='step',
alpha=0.5, label=tim.name)
plt.legend()
plt.xlim(lo,hi)
plt.xlabel('Pixel values (sigma)')
plt.title('Pixel distributions: %s band' % b)
ps.savefig()
# Plot image pixels, invvars, masks
for tim in tims:
plt.clf()
plt.subplot(2,2,1)
dimshow(tim.getImage(), vmin=-3.*tim.sig1, vmax=10.*tim.sig1)
plt.title('image')
plt.subplot(2,2,2)
dimshow(tim.getInvError(), vmin=0, vmax=1.1/tim.sig1)
plt.title('inverr')
if tim.dq is not None:
# plt.subplot(2,2,3)
# dimshow(tim.dq, vmin=0, vmax=tim.dq.max())
# plt.title('DQ')
plt.subplot(2,2,3)
dimshow(((tim.dq & tim.dq_saturation_bits) > 0),
vmin=0, vmax=1.5, cmap='hot')
plt.title('SATUR')
plt.subplot(2,2,4)
dimshow(tim.getImage() * (tim.getInvError() > 0),
vmin=-3.*tim.sig1, vmax=10.*tim.sig1)
okpix = tim.getImage()[tim.getInvError() > 0]
plt.title('image (masked) range [%.3g, %.3g]' % (np.min(okpix), np.max(okpix)))
plt.suptitle(tim.name)
ps.savefig()
if True and tim.dq is not None:
from legacypipe.bits import DQ_BITS
plt.clf()
bitmap = dict([(v,k) for k,v in DQ_BITS.items()])
k = 1
for i in range(12):
bitval = 1 << i
if not bitval in bitmap:
continue
# only 9 bits are actually used
plt.subplot(3,3,k)
k+=1
plt.imshow((tim.dq & bitval) > 0,
vmin=0, vmax=1.5, cmap='hot')
plt.title(bitmap[bitval])
plt.suptitle('Mask planes: %s (%s %s)' % (tim.name, tim.imobj.image_filename, tim.imobj.ccdname))
ps.savefig()
im = tim.imobj
if im.camera == 'decam':
from legacypipe.decam import decam_has_dq_codes
print(tim.name, ': plver "%s"' % im.plver, 'has DQ codes:', decam_has_dq_codes(im.plver))
if im.camera == 'decam' and decam_has_dq_codes(im.plver):
# Integer codes, not bitmask. Re-read and plot.
dq = im.read_dq(slc=tim.slice)
plt.clf()
plt.subplot(1,3,1)
dimshow(tim.getImage(), vmin=-3.*tim.sig1, vmax=30.*tim.sig1)
plt.title('image')
plt.subplot(1,3,2)
dimshow(tim.getInvError(), vmin=0, vmax=1.1/tim.sig1)
plt.title('inverr')
plt.subplot(1,3,3)
plt.imshow(dq, interpolation='nearest', origin='lower',
cmap='tab10', vmin=-0.5, vmax=9.5)
plt.colorbar()
plt.title('DQ codes')
plt.suptitle('%s (%s %s) PLVER %s' % (tim.name, im.image_filename, im.ccdname, im.plver))
ps.savefig()
# Plot PSF model
for tim in tims:
plt.clf()
nx,ny = 5,5
psfs = []
h,w = tim.shape
for y in np.linspace(0, h-1, ny):
psfrow = []
for x in np.linspace(0, w-1, nx):
psf = tim.getPsf().getPointSourcePatch(x, y).patch
if x == 0 and y == 0:
print('tim', tim.name, 'PSF shape', psf.shape)
psfrow.append(psf)
psfrow = np.hstack(psfrow)
psfs.append(psfrow)
psfs = np.vstack(psfs)
plt.imshow(psfs, interpolation='nearest', origin='lower')
plt.title('PSF models for %s' % tim.name)
ps.savefig()
def _plot_mods(tims, mods, blobwcs, titles, bands, coimgs, cons, bslc,
blobw, blobh, ps,
chi_plots=True, rgb_plots=False, main_plot=True,
rgb_format='%s'):
subims = [[] for m in mods]
chis = dict([(b,[]) for b in bands])
make_coimgs = (coimgs is None)
if make_coimgs:
print('_plot_mods: blob shape', (blobh, blobw))
coimgs = [np.zeros((blobh,blobw)) for b in bands]
cons = [np.zeros((blobh,blobw)) for b in bands]
for iband,band in enumerate(bands):
comods = [np.zeros((blobh,blobw)) for m in mods]
cochis = [np.zeros((blobh,blobw)) for m in mods]
comodn = np.zeros((blobh,blobw))
mn,mx = 0,0
sig1 = 1.
for itim,tim in enumerate(tims):
if tim.band != band:
continue
R = tim_get_resamp(tim, blobwcs)
if R is None:
continue
(Yo,Xo,Yi,Xi) = R
rechi = np.zeros((blobh,blobw))
chilist = []
comodn[Yo,Xo] += 1
for imod,mod in enumerate(mods):
chi = ((tim.getImage()[Yi,Xi] - mod[itim][Yi,Xi]) *
tim.getInvError()[Yi,Xi])
rechi[Yo,Xo] = chi
chilist.append((rechi.copy(), itim))
cochis[imod][Yo,Xo] += chi
comods[imod][Yo,Xo] += mod[itim][Yi,Xi]
chis[band].append(chilist)
# we'll use 'sig1' of the last tim in the list below...
mn,mx = -10.*tim.sig1, 30.*tim.sig1
sig1 = tim.sig1
if make_coimgs:
nn = (tim.getInvError()[Yi,Xi] > 0)
coimgs[iband][Yo,Xo] += tim.getImage()[Yi,Xi] * nn
cons [iband][Yo,Xo] += nn
if make_coimgs:
coimgs[iband] /= np.maximum(cons[iband], 1)
coimg = coimgs[iband]
coimgn = cons [iband]
else:
coimg = coimgs[iband][bslc]
coimgn = cons[iband][bslc]
for comod in comods:
comod /= np.maximum(comodn, 1)
ima = dict(vmin=mn, vmax=mx, ticks=False)
resida = dict(vmin=-5.*sig1, vmax=5.*sig1, ticks=False)
for subim,comod,cochi in zip(subims, comods, cochis):
subim.append((coimg, coimgn, comod, ima, cochi, resida))
# Plot per-band image, model, and chi coadds, and RGB images
rgba = dict(ticks=False)
rgbs = []
rgbnames = []
plt.figure(1)
for i,subim in enumerate(subims):
plt.clf()
rows,cols = 3,5
for ib,b in enumerate(bands):
plt.subplot(rows,cols,ib+1)
plt.title(b)
plt.subplot(rows,cols,4)
plt.title('RGB')
plt.subplot(rows,cols,5)
plt.title('RGB(stretch)')
imgs = []
themods = []
resids = []
for j,(img,imgn,mod,ima,chi,resida) in enumerate(subim):
imgs.append(img)
themods.append(mod)
resid = img - mod
resid[imgn == 0] = np.nan
resids.append(resid)
if main_plot:
plt.subplot(rows,cols,1 + j + 0)
dimshow(img, **ima)
plt.subplot(rows,cols,1 + j + cols)
dimshow(mod, **ima)
plt.subplot(rows,cols,1 + j + cols*2)
# dimshow(-chi, **imchi)
# dimshow(imgn, vmin=0, vmax=3)
dimshow(resid, nancolor='r', **resida)
rgb = get_rgb(imgs, bands)
if i == 0:
rgbs.append(rgb)
rgbnames.append(rgb_format % 'Image')
if main_plot:
plt.subplot(rows,cols, 4)
dimshow(rgb, **rgba)
rgb = get_rgb(themods, bands)
rgbs.append(rgb)
rgbnames.append(rgb_format % titles[i])
if main_plot:
plt.subplot(rows,cols, cols+4)
dimshow(rgb, **rgba)
plt.subplot(rows,cols, cols*2+4)
dimshow(get_rgb(resids, bands, mnmx=(-10,10)), **rgba)
mnmx = -5,300
kwa = dict(mnmx=mnmx, arcsinh=1)
plt.subplot(rows,cols, 5)
dimshow(get_rgb(imgs, bands, **kwa), **rgba)
plt.subplot(rows,cols, cols+5)
dimshow(get_rgb(themods, bands, **kwa), **rgba)
plt.subplot(rows,cols, cols*2+5)
mnmx = -100,100
kwa = dict(mnmx=mnmx, arcsinh=1)
dimshow(get_rgb(resids, bands, **kwa), **rgba)
plt.suptitle(titles[i])
ps.savefig()
if rgb_plots:
# RGB image and model
plt.figure(2)
for rgb,tt in zip(rgbs, rgbnames):
plt.clf()
dimshow(rgb, **rgba)
plt.title(tt)
ps.savefig()
if not chi_plots:
return
imchi = dict(cmap='RdBu', vmin=-5, vmax=5)
plt.figure(1)
# Plot per-image chis: in a grid with band along the rows and images along the cols
cols = max(len(v) for v in chis.values())
rows = len(bands)
for imod in range(len(mods)):
plt.clf()
for row,band in enumerate(bands):
sp0 = 1 + cols*row
# chis[band] = [ (one for each tim:) [ (one for each mod:) (chi,itim), (chi,itim) ], ...]
for col,chilist in enumerate(chis[band]):
chi,itim = chilist[imod]
plt.subplot(rows, cols, sp0 + col)
dimshow(-chi, **imchi)
plt.xticks([]); plt.yticks([])
plt.title(tims[itim].name)
#plt.suptitle(titles[imod])
ps.savefig()
|
legacysurvey/pipeline
|
py/legacypipe/runbrick_plots.py
|
Python
|
gpl-2.0
| 19,237
|
[
"Galaxy"
] |
54cbe85b7e70aeb140b4b8e466bde371633e898799a53ed9964a3023ec95760e
|
# This script is part of pymaid (http://www.github.com/schlegelp/pymaid).
# Copyright (C) 2017 Philipp Schlegel
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
import gc
import math
import time
import urllib
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider
from requests_futures.sessions import FuturesSession
from . import fetch, core, utils, config
# Set up logging
logger = config.logger
try:
import imageio
except ImportError:
    logger.error('Unable to import imageio. Please make sure the library is installed!')
__all__ = sorted(['crop_neuron', 'TileLoader', 'make_bvox'])
def crop_neuron(x, output, dimensions=(1000, 1000), interpolate_z_res=40,
                stack_id=None, remote_instance=None):
"""Crop and save EM tiles following a neuron's segments.
Parameters
----------
x : pymaid.CatmaidNeuron
Neuron to cut out.
output : str
File or folder.
dimensions : tuple of int, optional
Dimensions of square to cut out in nanometers.
interpolate_z_res : int | None, optional
If not None, will interpolate in Z direction to given
resolution. Use this to interpolate virtual nodes.
    stack_id : int
                        ID of the EM image stack to load tiles from
                        (required by `TileLoader`).
    remote_instance : pymaid.CatmaidInstance, optional
"""
if isinstance(x, core.CatmaidNeuronList) and len(x) == 1:
x = x[0]
if not isinstance(x, core.CatmaidNeuron):
raise TypeError('Need a single CatmaidNeuron, got "{}".'.format(type(x)))
if len(dimensions) != 2:
raise ValueError('Need two dimensions, got {}'.format(len(dimensions)))
    # Evaluate remote instance
remote_instance = utils._eval_remote_instance(remote_instance)
# Prepare treenode table to be indexed by treenode_id
this_tn = x.nodes.set_index('node_id')
# Iterate over neuron's segments
bboxes = []
for seg in x.segments:
# Get treenode coordinates
center_coords = this_tn.loc[seg, ['x', 'y', 'z']].values
# If a z resolution for interpolation is given, interpolate virtual nodes
if interpolate_z_res:
interp_coords = center_coords[0:1]
# Go over all treenode -> parent pairs
for i, (co, next_co) in enumerate(zip(center_coords[:-1], center_coords[1:])):
                # If nodes are more than interpolate_z_res nm away from one another
if math.fabs(co[2] - next_co[2]) >= (2 * interpolate_z_res):
# Get steps we would expect to be there
steps = int(
math.fabs(co[2] - next_co[2]) / interpolate_z_res)
                    # If we're going anterior, we need to invert the step size
if co[2] < next_co[2]:
step_size = interpolate_z_res
else:
step_size = -interpolate_z_res
# Interpolate coordinates
new_co = [(co[0] + int((next_co[0] - co[0]) / steps * (i + 1)),
co[1] + int((next_co[1] - co[1]) /
steps * (i + 1)),
z)
for i, z in enumerate(range(co[2] + step_size, next_co[2], step_size))]
# Track new coordinates
interp_coords = np.append(interp_coords, new_co, axis=0)
# Add next coordinate
interp_coords = np.append(interp_coords, [next_co], axis=0)
# Use interpolated coords
center_coords = interp_coords
# Turn into bounding boxes: left, right, top, bottom, z
bbox = np.array([[co[0] - dimensions[0] / 2,
co[0] + dimensions[0] / 2,
co[1] - dimensions[1] / 2,
co[1] + dimensions[1] / 2,
co[2]]
for co in center_coords]
).astype(int)
bboxes += list(bbox)
# Generate tile job
    job = TileLoader(bboxes,
                     stack_id=stack_id,
                     zoom_level=0,
                     coords='NM',
                     remote_instance=remote_instance)
return job
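# A minimal usage sketch for ``crop_neuron`` (hypothetical server URL, API
# token, skeleton ID and stack ID; not part of the original module):
#
#   import pymaid
#   rm = pymaid.CatmaidInstance('https://example.com/catmaid', 'api_token')
#   n = pymaid.get_neuron(16, remote_instance=rm)
#   job = crop_neuron(n, output='/tmp/crops', stack_id=5, remote_instance=rm)
#   job.load_and_save('/tmp/crops')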
def _bbox_helper(coords, dimensions=(500, 500)):
"""Helper function to turn coordinates into bounding box(es).
Parameters
----------
coords : list | numpy.array
Coordinates to turn into bounding boxes.
dimensions : int | tuple, optional
X and Y dimensions of bbox. If single ``int``, ``X = Y``.
Returns
-------
bbox : numpy.array
Bounding box(es): ``left, right, top, bottom, z``
"""
if isinstance(dimensions, int):
dimensions = (dimensions, dimensions)
if isinstance(coords, list):
coords = np.array(coords)
if isinstance(coords, np.ndarray) and coords.ndim == 2:
return np.array([_bbox_helper(c, dimensions) for c in coords])
# Turn into bounding boxes: left, right, top, bottom, z
bbox = np.array([coords[0] - dimensions[0] / 2,
coords[0] + dimensions[0] / 2,
coords[1] - dimensions[1] / 2,
coords[1] + dimensions[1] / 2,
coords[2]]).astype(int)
return bbox
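# A quick sanity check of the helper above (values follow directly from the
# formula: x +/- dimensions[0]/2, y +/- dimensions[1]/2, z unchanged):
#
#   >>> _bbox_helper([1000, 2000, 50], dimensions=(500, 500))
#   array([ 750, 1250, 1750, 2250,   50])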
class TileLoader:
"""Load tiles from CATMAID, stitch into image and render output.
Important
---------
    Loading lots of tiles is memory intensive. A single 100x100 pixel image
    already requires 80 Kb, a 1000x1000 image 8 Mb, and so on.
Parameters
----------
bbox : list | numpy.array
Window to crop: ``left, right, top, bottom, z1, [z2, stepsize]``.
Can be single or list/array of bounding boxes. ``z2`` can be
omitted, in which case ``z2 = z1``. Optionally you can
provide a 7th ``stepsize`` parameter.
stack_id : int
ID of EM image stack to use.
zoom_level : int, optional
Zoom level
coords : 'NM' | 'PIXEL', optional
                        Coordinate units of ``bbox``.
image_mirror : int | str | 'auto', optional
Image mirror to use:
- ``int`` is interpreted as mirror ID
- ``str`` must be URL
- ``'auto'`` will automatically pick fastest
mem_lim : int, optional
Memory limit in megabytes for loading tiles. This restricts
the number of tiles that can be simultaneously loaded into
memory.
Examples
--------
>>> # Generate the job
>>> job = pymaid.tiles.TileLoader([119000, 124000,
... 36000, 42000,
... 4050],
... stack_id=5,
... coords='PIXEL')
>>> # Load, stitch and crop the required EM image tiles
>>> job.load_in_memory()
>>> # Render image
>>> ax = job.render_im(slider=False, figsize=(12, 12))
>>> # Add nodes
>>> job.render_nodes(ax, nodes=True, connectors=False)
>>> # Add scalebar
>>> job.scalebar(size=1000, ax=ax, label=False)
>>> # Show
>>> plt.show()
"""
# TODOs
# -----
# 1. Check for available image mirror automatically (make stack_mirror and stack_id superfluous) - DONE
# 2. Test using matplotlib instead (would allow storing nodes and scalebar as SVG) - DONE
# 3. Code clean up
# 4. Add second mode that loads sections sequentially, saves them and discards tiles: slower but memory efficient
def __init__(self, bbox, stack_id, zoom_level=0, coords='NM',
image_mirror='auto', mem_lim=4000, remote_instance=None,
**fetch_kwargs):
"""Initialise class."""
if coords not in ['PIXEL', 'NM']:
raise ValueError('Coordinates need to be "PIXEL" or "NM", got "{}"'.format(coords))
# Convert single bbox to multiple bounding boxes
if isinstance(bbox, np.ndarray):
if bbox.ndim == 1:
self.bboxes = [bbox]
elif bbox.ndim == 2:
self.bboxes = bbox
else:
raise ValueError('Unable to interpret bounding box with {0} '
'dimensions'.format(bbox.ndim))
elif isinstance(bbox, list):
if any(isinstance(el, (list, np.ndarray)) for el in bbox):
self.bboxes = bbox
else:
self.bboxes = [bbox]
else:
raise TypeError('Bounding box must be list or array, not '
'{0}'.format(type(bbox)))
self.remote_instance = utils._eval_remote_instance(remote_instance)
self.zoom_level = zoom_level
self.coords = coords
self.stack_id = int(stack_id)
self.mem_lim = mem_lim
self.fetch_kwargs = fetch_kwargs
self.get_stack_info(image_mirror=image_mirror)
self.bboxes2imgcoords()
memory_est = self.estimate_memory()
logger.info('Estimated memory usage for loading all images: '
'{0:.2f} Mb'.format(memory_est))
def estimate_memory(self):
"""Estimate memory [Mb] consumption of loading all tiles."""
all_tiles = []
for j, im in enumerate(self.image_coords):
for i, ix_x in enumerate(range(im['tile_left'], im['tile_right'])):
for k, ix_y in enumerate(range(im['tile_top'], im['tile_bot'])):
all_tiles.append((ix_x, ix_y, im['tile_z']))
n_tiles = len(set(all_tiles))
return (n_tiles * self.bytes_per_tile) / 10 ** 6
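    # Worked example for the estimate above (hypothetical mirror with
    # 1024x1024 tiles): bytes_per_tile = 1024 ** 2 * 8 = 8388608 bytes, so
    # each unique tile contributes ~8.4 Mb to the returned value.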
def get_stack_info(self, image_mirror='auto'):
"""Retrieve basic info about image stack and mirrors."""
# Get available stacks for the project
available_stacks = self.remote_instance.fetch(
self.remote_instance._get_stacks_url()
)
if self.stack_id not in [e['id'] for e in available_stacks]:
raise ValueError('Stack ID {} not found on server. Available '
'stacks:\n{}'.format(self.stack_id,
available_stacks
))
# Fetch and store stack info
info = self.remote_instance.fetch(
self.remote_instance._get_stack_info_url(self.stack_id))
self.stack_dimension_x = info['dimension']['x']
self.stack_dimension_y = info['dimension']['y']
self.stack_dimension_z = info['dimension']['z']
self.resolution_x = info['resolution']['x']
self.resolution_y = info['resolution']['y']
self.resolution_z = info['resolution']['z']
if image_mirror == 'auto':
# Get fastest image mirror
match = sorted(info['mirrors'],
key=lambda x: test_response_time(x['image_base'],
calls=2),
reverse=False)
elif isinstance(image_mirror, int):
match = [m for m in info['mirrors'] if m['id'] == image_mirror]
elif isinstance(image_mirror, str):
match = [m for m in info['mirrors'] if m['image_base'] == image_mirror]
else:
raise ValueError('`image_mirror` must be int, str or "auto".')
if not match:
raise ValueError('No mirror matching "{}" found. Available '
'mirrors: {}'.format(image_mirror,
'\n'.join([m['image_base']
for m in info['mirrors']]))
)
self.img_mirror = match[0]
self.tile_source_type = self.img_mirror['tile_source_type']
self.tile_width = self.img_mirror['tile_width']
self.tile_height = self.img_mirror['tile_height']
self.mirror_url = self.img_mirror['image_base']
self.file_ext = self.img_mirror['file_extension']
if not self.mirror_url.endswith('/'):
self.mirror_url += '/'
# Memory size per tile in byte
self.bytes_per_tile = self.tile_width ** 2 * 8
logger.info('Image mirror: {0}'.format(self.mirror_url))
def bboxes2imgcoords(self):
"""Convert bounding box(es) to coordinates for individual images."""
# This keeps track of images (one per z slice)
self.image_coords = []
for bbox in self.bboxes:
if len(bbox) not in [5, 6, 7]:
                raise ValueError('Need 5-7 coordinates (left, right, top, '
                                 'bottom, z1, [z2, stepsize]), got {0}'.format(len(bbox)))
# If z2 not given, add z2 = z1
if len(bbox) == 5:
                bbox = np.append(bbox, bbox[4])
if len(bbox) == 7:
stepsize = int(bbox[6])
else:
stepsize = 1
# Make sure we have left/right, top/bot, z1, z2 in correct order
left = min(bbox[0:2])
right = max(bbox[0:2])
top = min(bbox[2:4])
bottom = max(bbox[2:4])
z1 = int(min(bbox[4:6]))
z2 = int(max(bbox[4:6]))
if self.coords == 'NM':
# Map coordinates to pixels (this already accounts for zoom)
px_left = self._to_x_index(left)
px_right = self._to_x_index(right)
px_top = self._to_y_index(top)
px_bot = self._to_y_index(bottom)
px_z1 = self._to_z_index(z1)
px_z2 = self._to_z_index(z2)
nm_left, nm_right, nm_top, nm_bot, nm_z1, nm_z2 = left, right, top, bottom, z1, z2
else:
# Adjust pixel coordinates to zoom level
px_left = int(left / (2**self.zoom_level) + 0.5)
px_right = int(right / (2**self.zoom_level) + 0.5)
px_top = int(top / (2**self.zoom_level) + 0.5)
px_bot = int(bottom / (2**self.zoom_level) + 0.5)
px_z1 = int(z1)
px_z2 = int(z2)
# Turn pixel coordinates into real world coords
nm_left = int(left * self.resolution_x)
nm_right = int(right * self.resolution_x)
nm_top = int(top * self.resolution_y)
nm_bot = int(bottom * self.resolution_y)
nm_z1 = int(z1 * self.resolution_z)
nm_z2 = int(z2 * self.resolution_z)
# Map to tiles
tile_left = int(px_left / self.tile_width)
tile_right = int(px_right / self.tile_width) + 1
tile_top = int(px_top / self.tile_width)
tile_bot = int(px_bot / self.tile_width) + 1
# Get borders to crop
border_left = px_left - (tile_left * self.tile_width)
border_right = (tile_right * self.tile_width) - px_right
border_top = px_top - (tile_top * self.tile_width)
border_bot = (tile_bot * self.tile_width) - px_bot
# Generate a single entry for each z slice in this bbox
for px_z, nm_z in zip(range(px_z1, px_z2 + 1)[::stepsize],
np.arange(nm_z1, nm_z2 + self.resolution_z,
self.resolution_z)[::stepsize]):
                # Tiles we will have to load for this image
this_tiles = []
for i, ix_x in enumerate(range(tile_left, tile_right)):
for k, ix_y in enumerate(range(tile_top, tile_bot)):
this_tiles.append((ix_x, ix_y, px_z))
this_im = dict(
# Nanometer coords
nm_bot=nm_bot,
nm_top=nm_top,
nm_left=nm_left,
nm_right=nm_right,
nm_z=int(nm_z),
# Tile indices
tile_bot=tile_bot,
tile_top=tile_top,
tile_left=tile_left,
tile_right=tile_right,
tile_z=px_z,
px_bot=px_bot,
px_top=px_top,
px_left=px_left,
px_right=px_right,
px_z=px_z,
px_border_top=border_top,
px_border_left=border_left,
px_border_bot=border_bot,
px_border_right=border_right,
tiles_to_load=this_tiles,
)
self.image_coords.append(this_im)
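    # Worked example of the 'NM' branch above (hypothetical stack with
    # resolution_x = 4 nm/px, tile_width = 512, zoom_level = 0): a bbox with
    # left = 119000 nm maps to px_left = 119000 / 4 = 29750, which falls into
    # tile_left = 29750 // 512 = 58, leaving border_left = 29750 - 58 * 512
    # = 54 pixels to crop off the stitched image.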
def _get_tiles(self, tiles):
"""Retrieve all tiles in parallel.
Parameters
----------
tiles : list | np.ndarray
Triplets of x/y/z tile indices. E.g. [ (20,10,400 ), (...) ]
"""
tiles = list(set(tiles))
if self.remote_instance:
future_session = self.remote_instance._future_session
else:
future_session = FuturesSession(max_workers=30)
urls = [self._get_tile_url(*c) for c in tiles]
futures = [future_session.get(u, params=None, **self.fetch_kwargs) for u in urls]
resp = [f.result() for f in config.tqdm(futures,
desc='Loading tiles',
disable=config.pbar_hide or len(futures) == 1,
leave=False)]
# Make sure all responses returned data
for r in resp:
r.raise_for_status()
data = {co: imageio.imread(r.content) for co, r in zip(tiles, resp)}
return data
def _stitch_tiles(self, im, tiles):
"""Stitch tiles into final image."""
# Generate empty array
im_dim = np.array([
math.fabs((im['tile_bot'] - im['tile_top']) * self.tile_width),
math.fabs((im['tile_right'] - im['tile_left']) * self.tile_width)]).astype(int)
img = np.zeros(im_dim, dtype=int)
# Fill array
for i, ix_x in enumerate(range(im['tile_left'], im['tile_right'])):
for k, ix_y in enumerate(range(im['tile_top'], im['tile_bot'])):
# Paste this tile onto our canvas
img[k * self.tile_width: (k + 1) * self.tile_width, i * self.tile_width: (
i + 1) * self.tile_width] = tiles[(ix_x, ix_y, im['tile_z'])]
# Remove borders and create a copy (otherwise we will not be able
# to clear the original tile)
cropped_img = np.array(img[im['px_border_top']: -im['px_border_bot'],
im['px_border_left']: -im['px_border_right']],
dtype=int)
# Delete image to free memory (not sure if this does much though)
del img
return cropped_img
def load_and_save(self, filepath, filename=None):
"""Download and stitch tiles, and save as images right away (memory
efficient).
Parameters
----------
filepath : str
Path to which to store tiles.
filename : str | list | None, optional
Filename(s).
- single ``str`` filename will be added a number as suffix
- list of ``str`` must match length of images
- ``None`` will result in simple numbered files
"""
if not os.path.isdir(filepath):
raise ValueError('Invalid filepath: {}'.format(filepath))
        if filename is None:
filename = ['{}.jpg'.format(i) for i in range(len(self.image_coords))]
elif isinstance(filename, str):
filename = [filename] * len(self.image_coords)
elif isinstance(filename, (list, np.ndarray)):
if len(filename) != len(self.image_coords):
raise ValueError('Number of filenames must match number of '
'images ({})'.format(len(self.image_coords)))
tiles = {}
max_safe_tiles = int((self.mem_lim * 10**6) / self.bytes_per_tile)
for l, f, im in zip(range(len(self.image_coords)),
filename,
config.tqdm(self.image_coords, 'Stitching',
leave=config.pbar_leave,
disable=config.pbar_hide)):
            # Get a list of all tiles still needed by this and the remaining
            # images (they may be reused across slices)
remaining_tiles = [t for img in self.image_coords[l:]
for t in img['tiles_to_load']]
# Clear tiles that we don't need anymore and force garbage collection
to_delete = [t for t in tiles if t not in remaining_tiles]
for t in to_delete:
del tiles[t]
gc.collect()
# Check if we're still missing tiles
missing_tiles = [t for t in im['tiles_to_load'] if t not in tiles]
if len(tiles) == 0 or len(missing_tiles) > 0:
tiles_to_get = [
t for t in remaining_tiles if t not in tiles][: max_safe_tiles] + missing_tiles
else:
tiles_to_get = []
if tiles_to_get:
# Get missing tiles
tiles.update(self._get_tiles(tiles_to_get))
# Generate image
cropped_img = self._stitch_tiles(im, tiles)
# Save image
fp = os.path.join(filepath, f)
# This prevents a User Warning regarding conversion from int64
# to uint8
try:
imageio.imwrite(fp, cropped_img.astype('uint8'))
except BaseException as err:
logger.error('Error saving {}: {}'.format(f, str(err)))
del cropped_img
def load_in_memory(self):
"""Download and stitch tiles, and keep images in memory.
Data accessible via ``.img`` attribute.
"""
tiles = {}
max_safe_tiles = int((self.mem_lim * 10**6) / self.bytes_per_tile)
# Assemble tiles into the requested images
images = []
for l, im in enumerate(config.tqdm(self.image_coords, 'Stitching',
leave=config.pbar_leave,
disable=config.pbar_hide)):
            # Get a list of all tiles still needed by this and the remaining
            # images (they may be reused across slices)
remaining_tiles = [t for img in self.image_coords[l:]
for t in img['tiles_to_load']]
# Clear tiles that we don't need anymore and force garbage collection
to_delete = [t for t in tiles if t not in remaining_tiles]
for t in to_delete:
del tiles[t]
gc.collect()
# Check if we're still missing tiles
missing_tiles = [t for t in im['tiles_to_load'] if t not in tiles]
if len(tiles) == 0 or len(missing_tiles) > 0:
tiles_to_get = [
t for t in remaining_tiles if t not in tiles][: max_safe_tiles] + missing_tiles
else:
tiles_to_get = []
if tiles_to_get:
# Get missing tiles
tiles.update(self._get_tiles(tiles_to_get))
cropped_img = self._stitch_tiles(im, tiles)
# Add slice
images.append(cropped_img)
# Clear tile data
del tiles
gc.collect()
# Before we assemble all images into a large stack make sure that
# all individual images have the same dimensions
dims = np.vstack([im.shape for im in images])
min_dims = np.min(dims, axis=0)
# Get standard deviation to check if they are all the same
if sum(np.std(dims, axis=0)) != 0:
logger.warning('Varying image dimensions detected. Cropping '
'everything to the smallest image size: {0}'.format(min_dims))
# Crop images to the smallest common dimension
        for i, im in enumerate(images):
            if tuple(im.shape) != tuple(min_dims):
                # Crop around the center to the smallest common size and
                # write the cropped slice back into the list
                offset = (np.array(im.shape) - min_dims) // 2
                images[i] = im[offset[0]: offset[0] + min_dims[0],
                               offset[1]: offset[1] + min_dims[1]]
self.img = np.dstack(images).astype(int)
def scalebar(self, ax, size=1000, pos='lower left', label=True, line_kws={}, label_kws={}):
"""Add scalebar to image.
Parameters
----------
        size : int
                            Size of scalebar, in nanometers.
        ax : matplotlib.axes
        pos : 'lower left' | 'upper left' | 'lower right' | 'upper right', optional
                            Position of scalebar.
label : bool, optional
If True will label scalebar.
line_kws
Keyword arguments passed to plt.plot().
label_kws
Keyword arguments passed to plt.text()
"""
positions = {'lower left': (self.img.shape[1] * .05, self.img.shape[0] * .95),
'upper left': (self.img.shape[1] * .05, self.img.shape[0] * .05),
'lower right': (self.img.shape[1] * .95, self.img.shape[0] * .95),
'upper right': (self.img.shape[1] * .95, self.img.shape[0] * .05)}
if pos not in positions:
raise ValueError(
'Wrong position. Please use either {0}'.format(','.join(positions)))
co = positions[pos]
ax.plot([co[0], co[0] + size / self.resolution_x],
[co[1], co[1]], **line_kws)
if label:
ax.text(co[0] + ((co[0] + size / self.resolution_x) - co[0]) / 2,
co[1] + 10,
'{0} nm'.format(size),
horizontalalignment='center',
verticalalignment='center',
**label_kws)
def render_nodes(self, ax, nodes=True, connectors=True, slice_ix=None,
tn_color='yellow', cn_color='none', tn_ec=None,
cn_ec='orange', skid_include=[], cn_include=[], tn_kws={},
cn_kws={}):
"""Render nodes onto image.
Parameters
----------
ax : matplotlib ax
        slice_ix : int, optional
If multi-slice, provide index of slice to label.
tn_color : str | tuple | dict
Map skeleton ID to color.
cn_color : str | tuple | dict
Map connector ID to color.
        tn_ec, cn_ec : str | tuple | dict
                            Edge colors for nodes and connectors, respectively.
skid_include : list of int, optional
List of skeleton IDs to include.
cn_include : list of int, optional
List of connector IDs to include.
tn_kws : dict, optional
Keywords passed to ``matplotlib.pyplot.scatter`` for
nodes.
cn_kws : dict, optional
Keywords passed to ``matplotlib.pyplot.scatter`` for
connectors.
"""
if slice_ix is None and len(self.image_coords) == 1:
slice_ix = 0
elif slice_ix is None:
raise ValueError(
'Please provide index of the slice you want nodes to be rendered for.')
slice_info = self.image_coords[slice_ix]
# Get node list
data = fetch.get_nodes_in_volume(slice_info['nm_left'],
slice_info['nm_right'],
slice_info['nm_top'],
slice_info['nm_bot'],
slice_info['nm_z'] - self.resolution_z, # get one slice up
slice_info['nm_z'] + self.resolution_z, # and one slice down
remote_instance=self.remote_instance,
coord_format='NM')
# Interpolate virtual
self.nodes = self._make_virtual_nodes(data[0])
self.connectors = data[1]
# Filter to only this Z
self.nodes = self.nodes[self.nodes.z == slice_info['nm_z']]
self.connectors = self.connectors[self.connectors.z
== slice_info['nm_z']]
# Filter to fit bounding box
self.nodes = self.nodes[
(self.nodes.x <= slice_info['nm_right']) &
(self.nodes.x >= slice_info['nm_left']) &
(self.nodes.y >= slice_info['nm_top']) &
(self.nodes.y <= slice_info['nm_bot'])
]
self.connectors = self.connectors[
(self.connectors.x <= slice_info['nm_right']) &
(self.connectors.x >= slice_info['nm_left']) &
(self.connectors.y >= slice_info['nm_top']) &
(self.connectors.y <=
slice_info['nm_bot'])
]
logger.debug('Retrieved {} nodes and {} connectors'.format(
self.nodes.shape[0],
self.connectors.shape[0]
))
# Filter if provided
if len(skid_include) > 0:
skid_include = np.array(skid_include).astype(int)
self.nodes = self.nodes[self.nodes.skeleton_id.isin(skid_include)]
if len(cn_include) > 0:
cn_include = np.array(cn_include).astype(int)
self.connectors = self.connectors[self.connectors.skeleton_id.isin(
cn_include)]
logger.debug('{} nodes and {} connectors after filtering'.format(
self.nodes.shape[0],
self.connectors.shape[0]
))
# Calculate pixel coords:
# 1. Offset
self.nodes.loc[:, 'x'] -= slice_info['nm_left']
self.nodes.loc[:, 'y'] -= slice_info['nm_top']
self.connectors.loc[:, 'x'] -= slice_info['nm_left']
self.connectors.loc[:, 'y'] -= slice_info['nm_top']
# 2. Turn into pixel coordinates
self.nodes.loc[:, 'x'] /= self.resolution_x
self.nodes.loc[:, 'y'] /= self.resolution_y
self.connectors.loc[:, 'x'] /= self.resolution_x
self.connectors.loc[:, 'y'] /= self.resolution_y
# 3. Round positions
self.nodes.loc[:, 'x'] = self.nodes.x.astype(int)
self.nodes.loc[:, 'y'] = self.nodes.y.astype(int)
self.connectors.loc[:, 'x'] = self.connectors.x.astype(int)
self.connectors.loc[:, 'y'] = self.connectors.y.astype(int)
# 4. Set colours
if isinstance(tn_color, dict):
default_colour = tn_color.get('default', 'black')
self.nodes['color'] = [tn_color.get(
str(s), default_colour) for s in self.nodes.skeleton_id.values]
else:
self.nodes['color'] = [
tn_color for x in range(self.nodes.shape[0])]
if isinstance(cn_color, dict):
default_colour = cn_color.get('default', 'black')
self.connectors['color'] = [cn_color.get(
str(s), default_colour) for s in self.connectors.connector_id.values]
else:
self.connectors['color'] = [
cn_color for x in range(self.connectors.shape[0])]
if isinstance(tn_ec, dict):
self.nodes['ec'] = [tn_ec.get(str(s), None)
for s in self.nodes.skeleton_id.values]
else:
self.nodes['ec'] = [tn_ec for x in range(self.nodes.shape[0])]
if isinstance(cn_ec, dict):
self.connectors['ec'] = [
cn_ec.get(str(s), None) for s in self.connectors.connector_id.values]
else:
self.connectors['ec'] = [
cn_ec for x in range(self.connectors.shape[0])]
if nodes:
if tn_ec:
tn_ec = self.nodes.ec.values
ax.scatter(self.nodes.x.values, self.nodes.y.values,
facecolors=self.nodes.color.values,
edgecolors=tn_ec,
**tn_kws)
if connectors:
if cn_ec:
cn_ec = self.connectors.ec.values
ax.scatter(self.connectors.x.values, self.connectors.y.values,
facecolors=self.connectors.color.values,
edgecolors=cn_ec,
**cn_kws)
def _get_tile_url(self, x, y, z):
"""Return tile url."""
if self.tile_source_type in [1, 4, 5, 9]:
if self.tile_source_type == 1:
# File-based image stack
url = '{sourceBaseUrl}{pixelPosition_z}/{row}_{col}_{zoomLevel}.{fileExtension}'
elif self.tile_source_type == 4:
# File-based image stack with zoom level directories
url = '{sourceBaseUrl}{pixelPosition_z}/{zoomLevel}/{row}_{col}.{fileExtension}'
elif self.tile_source_type == 5:
# Directory-based image stack
url = '{sourceBaseUrl}{zoomLevel}/{pixelPosition_z}/{row}/{col}.{fileExtension}'
elif self.tile_source_type == 9:
url = '{sourceBaseUrl}{pixelPosition_z}/{row}_{col}_{zoomLevel}.{fileExtension}'
url = url.format(sourceBaseUrl=self.mirror_url,
pixelPosition_z=z,
row=y,
col=x,
zoomLevel=self.zoom_level,
fileExtension=self.file_ext)
elif self.tile_source_type == 2:
# Request query-based image stack
GET = dict(x=x * self.tile_width,
y=y * self.tile_height,
z=z,
width=self.tile_width,
height=self.tile_height,
scale=self.zoom_level,
row=y,
col=x)
url = self.mirror_url
url += '?{}'.format(urllib.parse.urlencode(GET))
elif self.tile_source_type == 3:
# HDF5 via CATMAID backend
url = self.remote_instance.server
if not url.endswith('/'):
url += '/'
url += '{project_id}/stack/{stack_id}/tile'
url = url.format(project_id=self.remote_instance.project,
stack_id=self.stack_id)
GET = dict(x=x * self.tile_width,
y=y * self.tile_height,
z=z,
width=self.tile_width,
height=self.tile_height,
scale=self.zoom_level,
row=y,
col=x,
file_extension=self.file_ext,
basename=self.mirror_url,
type='all')
url += '?{}'.format(urllib.parse.urlencode(GET))
else:
            msg = 'Tile source type "{}" not implemented'.format(self.tile_source_type)
raise NotImplementedError(msg)
return url
def _to_x_index(self, x, enforce_bounds=True):
"""Convert a real world position to a x pixel position.
Also, makes sure the value is in bounds.
"""
zero_zoom = x / self.resolution_x
if enforce_bounds:
zero_zoom = min(max(zero_zoom, 0.0), self.stack_dimension_x - 1.0)
return int(zero_zoom / (2**self.zoom_level) + 0.5)
def _to_y_index(self, y, enforce_bounds=True):
"""Convert a real world position to a y pixel position.
Also, makes sure the value is in bounds.
"""
zero_zoom = y / self.resolution_y
if enforce_bounds:
zero_zoom = min(max(zero_zoom, 0.0), self.stack_dimension_y - 1.0)
return int(zero_zoom / (2**self.zoom_level) + 0.5)
def _to_z_index(self, z, enforce_bounds=True):
"""Convert a real world position to a slice/section number.
Also, makes sure the value is in bounds.
"""
section = z / self.resolution_z + 0.5
if enforce_bounds:
section = min(max(section, 0.0), self.stack_dimension_z - 1.0)
return int(section)
def _make_virtual_nodes(self, nodes):
"""Generate virtual nodes.
Currently, this simply adds new nodes to the table but does NOT rewire
the neurons accordingly!
"""
# Get nodes that have a parent in our list
has_parent = nodes[nodes.parent_id.isin(nodes.node_id)]
# Get treenode and parent section
tn_section = has_parent.z.values / self.resolution_z
pn_section = nodes.set_index('node_id').loc[has_parent.parent_id.values,
'z'].values / self.resolution_z
# Get distance in sections
sec_dist = np.absolute(tn_section - pn_section)
# Get those that have more than one section in between them
to_interpolate = has_parent[sec_dist > 1]
tn_locs = to_interpolate[['x', 'y', 'z']].values
pn_locs = nodes.set_index('node_id').loc[to_interpolate.parent_id,
['x', 'y', 'z']].values
distances = sec_dist[sec_dist > 1].astype(int)
skids = to_interpolate.skeleton_id.values
virtual_nodes = []
for i in range(to_interpolate.shape[0]):
x_interp = np.round(np.linspace(
tn_locs[i][0], pn_locs[i][0], distances[i] + 1).astype(int))
y_interp = np.round(np.linspace(
tn_locs[i][1], pn_locs[i][1], distances[i] + 1).astype(int))
z_interp = np.round(np.linspace(
tn_locs[i][2], pn_locs[i][2], distances[i] + 1))
virtual_nodes += [[x_interp[k], y_interp[k], z_interp[k], skids[i]]
for k in range(1, len(x_interp) - 1)]
virtual_nodes = pd.DataFrame(virtual_nodes,
columns=['x', 'y', 'z', 'skeleton_id'])
return pd.concat([nodes, virtual_nodes], axis=0,
ignore_index=True, sort=True)
def render_im(self, slider=False, ax=None, **kwargs):
"""Draw image slices with a slider."""
        if ax is None:
            fig, ax = plt.subplots(**kwargs)
        else:
            fig = ax.figure
ax.set_aspect('equal')
plt.subplots_adjust(bottom=0.25)
mpl_img = ax.imshow(self.img[:, :, 0], cmap='gray')
if slider:
axcolor = 'grey'
axslice = plt.axes([0.25, 0.1, 0.5, 0.03], facecolor=axcolor)
sslice = Slider(axslice, 'Slice', 1, self.img.shape[2],
valinit=0, valfmt='%i')
def update(val):
slice_ix = int(round(sslice.val))
sslice.valtext.set_text(str(slice_ix))
mpl_img.set_data(self.img[:, :, slice_ix - 1])
fig.canvas.draw_idle()
sslice.on_changed(update)
return ax
def test_response_time(url, calls=5):
"""Return server response time. If unresponsive returns float("inf")."""
resp_times = []
for i in range(calls):
start = time.time()
try:
_ = urllib.request.urlopen(url, timeout=3)
resp_times.append(time.time() - start)
except urllib.error.HTTPError as err:
if err.code == 404:
return float('inf')
if err.code == 401:
resp_times.append(time.time() - start)
except BaseException as err:
if 'SSL: CERTIFICATE_VERIFY_FAILED' in str(err):
msg = 'SSL: CERTIFICATE_VERIFY_FAILED error while ' + \
'accessing "{}". Try fixing SSL or set up unverified ' + \
'context:\n' + \
'>>> import ssl\n' + \
'>>> ssl._create_default_https_context = ssl._create_unverified_context\n'
logger.warning(msg.format(url))
return float('inf')
return np.mean(resp_times)
def make_bvox(arr, fp):
"""Save image array as Blender Voxel.
Can be loaded as volumetric texture.
Parameters
----------
arr : numpy.ndarray
(Nz, Nx, Ny) array of gray values (0-1 or 0-255).
fp : str
Path + filename. Will force `.bvox` file extension.
Returns
-------
Nothing
"""
assert isinstance(arr, np.ndarray)
assert isinstance(fp, str)
assert arr.ndim == 3
fp = fp + '.bvox' if not fp.endswith('.bvox') else fp
nx, ny, nz = arr.shape
nframes = 1
header = np.array([nz, ny, nx, nframes])
pointdata = arr.flatten()
# We will assume that if any value > 1 it's 0-255
if np.any(pointdata > 1):
pointdata = pointdata / 255
with open(fp, 'wb') as binfile:
header.astype('<i4').tofile(binfile)
pointdata.astype('<f4').tofile(binfile)
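# Minimal usage sketch (hypothetical path): write a random 64^3 volume, which
# Blender can then load as a voxel-data texture.
#
#   vol = np.random.rand(64, 64, 64)   # gray values in 0-1
#   make_bvox(vol, '/tmp/volume')      # writes '/tmp/volume.bvox'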
|
schlegelp/pymaid
|
pymaid/tiles.py
|
Python
|
gpl-3.0
| 42,031
|
[
"NEURON"
] |
818e450d39eb03dc93bd402d1ad31ef124277c0fa66fce0c4ec5f16d9df469d7
|
import os
import re
import subprocess
from setuptools import setup, find_packages
MAJOR = 0
MINOR = 6
MICRO = 0
IS_RELEASED = False
VERSION = '%d.%d.%d' % (MAJOR, MINOR, MICRO)
# When installation takes place inside a READTHEDOCS build
# we need to set zip_safe = False
on_rtd = os.environ.get('READTHEDOCS') == 'True'
plugin = ['mayavi_tools = simphony_mayavi.plugin']
if on_rtd:
requirements = ["traits"],
else:
requirements = [
"simphony[H5IO]>0.4,<0.7",
"mayavi[app]==4.4.4"
    ]
# Return the git revision as a string
def git_version():
def _minimal_ext_cmd(cmd):
# construct minimal environment
env = {}
for k in ['SYSTEMROOT', 'PATH']:
v = os.environ.get(k)
if v is not None:
env[k] = v
# LANGUAGE is used on win32
env['LANGUAGE'] = 'C'
env['LANG'] = 'C'
env['LC_ALL'] = 'C'
out = subprocess.Popen(
cmd, stdout=subprocess.PIPE, env=env,
).communicate()[0]
return out
try:
out = _minimal_ext_cmd(['git', 'describe', '--tags'])
except OSError:
out = ''
git_description = out.strip().decode('ascii')
expr = r'.*?\-(?P<count>\d+)-g(?P<hash>[a-fA-F0-9]+)'
match = re.match(expr, git_description)
if match is None:
git_revision, git_count = 'Unknown', '0'
else:
git_revision, git_count = match.group('hash'), match.group('count')
return git_revision, git_count
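# Example: `git describe --tags` typically prints something like
# 'v0.5.0-12-g1a2b3c4', which the regex above splits into count='12' and
# hash='1a2b3c4'; a bare tag such as 'v0.5.0' does not match and falls back
# to ('Unknown', '0').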
def write_version_py(filename='simphony_mayavi/_version.py'):
template = """\
# THIS FILE IS GENERATED FROM SIMPHONY-MAYAVI SETUP.PY
version = '{version}'
full_version = '{full_version}'
git_revision = '{git_revision}'
is_released = {is_released}
if not is_released:
version = full_version
"""
# Adding the git rev number needs to be done inside
# write_version_py(), otherwise the import of simphony_mayavi._version
# messes up the build under Python 3.
fullversion = VERSION
if os.path.exists('.git'):
git_rev, dev_num = git_version()
elif os.path.exists('simphony_mayavi/_version.py'):
# must be a source distribution, use existing version file
try:
from simphony_mayavi._version import git_revision as git_rev
from simphony_mayavi._version import full_version as full_v
except ImportError:
raise ImportError("Unable to import git_revision. Try removing "
"simphony_mayavi/_version.py and the build "
"directory before building.")
match = re.match(r'.*?\.dev(?P<dev_num>\d+)', full_v)
if match is None:
dev_num = '0'
else:
dev_num = match.group('dev_num')
else:
git_rev = 'Unknown'
dev_num = '0'
if not IS_RELEASED:
fullversion += '.dev{0}'.format(dev_num)
with open(filename, "wt") as fp:
fp.write(
template.format(
version=VERSION,
full_version=fullversion,
git_revision=git_rev,
is_released=IS_RELEASED))
if __name__ == "__main__":
write_version_py()
from simphony_mayavi import __version__
setup(
name='simphony_mayavi',
author='SimPhoNy FP7 European Project',
description='The mayavi visualisation plugin for SimPhoNy',
long_description=open('README.rst').read(),
install_requires=requirements,
packages=find_packages(),
entry_points={'simphony.visualisation': plugin},
version=__version__,
zip_safe=False,
license='BSD')
|
simphony/simphony-mayavi
|
setup.py
|
Python
|
bsd-2-clause
| 3,661
|
[
"Mayavi"
] |
e83d9811f18dd99ebff41922b67bea78fa8a10760bcbff542bcb871a5d86f065
|
#!/usr/bin/python
#
# Created on Aug 25, 2016
# @author: Gaurav Rastogi (grastogi@avinetworks.com)
# Eric Anderson (eanderson@avinetworks.com)
# module_check: supported
#
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: avi_hardwaresecuritymodulegroup
author: Gaurav Rastogi (grastogi@avinetworks.com)
short_description: Module for setup of HardwareSecurityModuleGroup Avi RESTful Object
description:
    - This module is used to configure the HardwareSecurityModuleGroup object
- more examples at U(https://github.com/avinetworks/devops)
requirements: [ avisdk ]
version_added: "2.4"
options:
state:
description:
- The state that should be applied on the entity.
default: present
choices: ["absent","present"]
hsm:
description:
- Hardware security module configuration.
required: true
name:
description:
- Name of the hsm group configuration object.
required: true
tenant_ref:
description:
- It is a reference to an object of type tenant.
url:
description:
- Avi controller URL of the object.
uuid:
description:
- Uuid of the hsm group configuration object.
extends_documentation_fragment:
- avi
'''
EXAMPLES = """
- name: Example to create HardwareSecurityModuleGroup object
avi_hardwaresecuritymodulegroup:
controller: 10.10.25.42
username: admin
password: something
state: present
name: sample_hardwaresecuritymodulegroup
"""
RETURN = '''
obj:
description: HardwareSecurityModuleGroup (api/hardwaresecuritymodulegroup) object
returned: success, changed
type: dict
'''
from ansible.module_utils.basic import AnsibleModule
try:
from ansible.module_utils.avi import (
avi_common_argument_spec, HAS_AVI, avi_ansible_api)
except ImportError:
HAS_AVI = False
def main():
argument_specs = dict(
state=dict(default='present',
choices=['absent', 'present']),
hsm=dict(type='dict', required=True),
name=dict(type='str', required=True),
tenant_ref=dict(type='str',),
url=dict(type='str',),
uuid=dict(type='str',),
)
argument_specs.update(avi_common_argument_spec())
module = AnsibleModule(
argument_spec=argument_specs, supports_check_mode=True)
if not HAS_AVI:
return module.fail_json(msg=(
'Avi python API SDK (avisdk>=17.1) is not installed. '
'For more details visit https://github.com/avinetworks/sdk.'))
return avi_ansible_api(module, 'hardwaresecuritymodulegroup',
set([]))
if __name__ == '__main__':
main()
|
e-gob/plataforma-kioscos-autoatencion
|
scripts/ansible-play/.venv/lib/python2.7/site-packages/ansible/modules/network/avi/avi_hardwaresecuritymodulegroup.py
|
Python
|
bsd-3-clause
| 3,496
|
[
"VisIt"
] |
84629025350653d6308960e03fb7b4a56f3f2ef07698c788190238c2fbe98a5f
|
# -*- coding: utf-8 -*-
import math
import numpy as np
import director.vtkAll as vtk
from director.debugVis import DebugData
class RaySensor(object):
"""Ray sensor."""
def __init__(self, num_rays=16, radius=40, min_angle=-45, max_angle=45):
"""Constructs a RaySensor.
Args:
num_rays: Number of rays.
radius: Max distance of the rays.
min_angle: Minimum angle of the rays in degrees.
max_angle: Maximum angle of the rays in degrees.
"""
self._num_rays = num_rays
self._radius = radius
self._min_angle = math.radians(min_angle)
self._max_angle = math.radians(max_angle)
self._locator = None
self._state = [0., 0., 0.] # x, y, theta
self._hit = np.zeros(num_rays)
self._distances = np.zeros(num_rays)
self._intersections = [[0, 0, 0] for i in range(num_rays)]
self._update_rays(self._state[2])
@property
def distances(self):
"""Array of distances measured by each ray."""
normalized_distances = [
self._distances[i] / self._radius if self._hit[i] else 1.0
for i in range(self._num_rays)
]
return normalized_distances
def has_collided(self, max_distance=0.05):
"""Returns whether a collision has occured or not.
Args:
max_distance: Threshold for collision distance.
"""
for hit, distance in zip(self._hit, self._distances):
if hit and distance <= max_distance:
return True
return False
def set_locator(self, locator):
"""Sets the vtk cell locator.
Args:
locator: Cell locator.
"""
self._locator = locator
def update(self, x, y, theta):
"""Updates the sensor's readings.
Args:
x: X coordinate.
y: Y coordinate.
theta: Yaw.
"""
self._update_rays(theta)
origin = np.array([x, y, 0])
self._state = [x, y, theta]
if self._locator is None:
return
for i in range(self._num_rays):
hit, dist, inter = self._cast_ray(origin, origin + self._rays[i])
self._hit[i] = hit
self._distances[i] = dist
self._intersections[i] = inter
def _update_rays(self, theta):
"""Updates the rays' readings.
Args:
theta: Yaw.
"""
r = self._radius
angle_step = (self._max_angle - self._min_angle) / (self._num_rays - 1)
self._rays = [
np.array([
r * math.cos(theta + self._min_angle + i * angle_step),
r * math.sin(theta + self._min_angle + i * angle_step),
0
])
for i in range(self._num_rays)
]
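        # Example: with num_rays=3, min_angle=-45 deg and max_angle=45 deg the
        # angle step is 45 deg, so the rays point at theta - 45 deg, theta and
        # theta + 45 deg, each with length `radius`.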
def _cast_ray(self, start, end):
"""Casts a ray and determines intersections and distances.
Args:
start: Origin of the ray.
end: End point of the ray.
Returns:
Tuple of (whether it intersected, distance, intersection).
"""
tolerance = 0.0 # intersection tolerance
pt = [0.0, 0.0, 0.0] # coordinate of intersection
distance = vtk.mutable(0.0) # distance of intersection
pcoords = [0.0, 0.0, 0.0] # location within intersected cell
subID = vtk.mutable(0) # subID of intersected cell
hit = self._locator.IntersectWithLine(start, end, tolerance,
distance, pt, pcoords, subID)
return hit, distance, pt
def to_polydata(self):
"""Converts the sensor to polydata."""
d = DebugData()
origin = np.array([self._state[0], self._state[1], 0])
for hit, intersection, ray in zip(self._hit,
self._intersections,
self._rays):
if hit:
color = [1., 0.45882353, 0.51372549]
endpoint = intersection
else:
color = [0., 0.6, 0.58823529]
endpoint = origin + ray
d.addLine(origin, endpoint, color=color, radius=0.05)
return d.getPolyData()
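# Minimal usage sketch (assumes `locator` is a vtkCellLocator built over the
# scene geometry; not part of the original module):
#
#   sensor = RaySensor(num_rays=16, radius=40)
#   sensor.set_locator(locator)
#   sensor.update(x=0.0, y=0.0, theta=0.0)
#   print(sensor.distances, sensor.has_collided())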
|
anassinator/dqn-obstacle-avoidance
|
sensor.py
|
Python
|
mit
| 4,329
|
[
"VTK"
] |
ae1982f2ed2ef80bc6fb710f371e7d922fb0974bb31f8155f37f35dea88337d6
|
## INFO ########################################################################
## ##
## Python and Cython Syntax Highlighters ##
## ===================================== ##
## ##
## Version: 2.0.00.071 (20141024) ##
## File: src/common.py ##
## ##
## For more information about the project, please visit ##
## <https://github.com/petervaro/python>. ##
## Copyright (C) 2013 - 2014 Peter Varo ##
## ##
## This program is free software: you can redistribute it and/or modify it ##
## under the terms of the GNU General Public License as published by the ##
## Free Software Foundation, either version 3 of the License, or ##
## (at your option) any later version. ##
## ##
## This program is distributed in the hope that it will be useful, but ##
## WITHOUT ANY WARRANTY; without even the implied warranty of ##
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. ##
## See the GNU General Public License for more details. ##
## ##
## You should have received a copy of the GNU General Public License ##
## along with this program, most likely a file in the root directory, ##
## called 'LICENSE'. If not, see <http://www.gnu.org/licenses>. ##
## ##
######################################################################## INFO ##
#-- CHEATSHEET ----------------------------------------------------------------#
# HOWTO: http://sublimetext.info/docs/en/reference/syntaxdefs.html
# REGEX: http://manual.macromates.com/en/regular_expressions
# Syntax Definition
syntax = {
'name': '{NAME}',
'comment': ('\n\t\tCopyright (C) 2013 - 2014 Peter Varo'
'\n\t\t<http://github.com/petervaro/python>'
'\n'
'\n\t\tThis program is free software: you can redistribute it'
'\n\t\tand/or modify it under the terms of the GNU General'
'\n\t\tPublic License as published by the Free Software'
'\n\t\tFoundation, either version 3 of the License, or (at your'
'\n\t\toption) any later version.'
'\n'
'\n\t\tThis program is distributed in the hope that it will be'
'\n\t\tuseful, but WITHOUT ANY WARRANTY; without even the'
'\n\t\timplied warranty of MERCHANTABILITY or FITNESS FOR A'
'\n\t\tPARTICULAR PURPOSE. See the GNU General Public License'
'\n\t\tfor more details.'
'\n'
'\n\t\tYou should have received a copy of the GNU General Public'
'\n\t\tLicense along with this program, most likely a file in'
'\n\t\tthe root directory, called "LICENSE". If not, see'
'\n\t\t<http://www.gnu.org/licenses>.'
'\n\t'),
'scopeName': 'source.{SCOPE}',
# Patterns
'patterns':
{
#-- COMMENT -------------------------------------------------------------------#
0x0000 :
{
'include': '#comment'
},
#-- NUMBERS -------------------------------------------------------------------#
0x0010:
{
'name' : 'constant.numeric.integer.binary.{SCOPE}',
'match': r'\b0b[01]+'
},
0x0020:
{
'name' : 'constant.numeric.integer.hexadecimal.{SCOPE}',
'match': r'\b0x\h+'
},
0x0030:
{
'name' : 'constant.numeric.integer.octal.{SCOPE}',
'match': r'\b0o[0-7]+'
},
0x0040:
{
# .001 .1e6 .1E6 .1e+6 .1E+6 .1e-6 .1E-6
'name' : 'constant.numeric.float_and_complex.decimal.floatnumber.{SCOPE}',
'match': r'(?<=\W|^)\.\d+([eE][+-]?\d+)?[jJ]?'
},
0x0050:
{
# 1. 1.0 1.e10 1.1e6 1.1E6 1.1e+6 1.1E+6 1.1e-6 1.1E-6
'name' : 'constant.numeric.float_and_complex.decimal.pointfloat.{SCOPE}',
'match': r'\d+\.(\d*([eE][+-]?\d+)?)?[jJ]?(?=\W)'
},
0x0060:
{
# 1e6 1E6 1e+6 1E+6 1e-6 1E-6
'name' : 'constant.numeric.float_and_complex.decimal.exponent.{SCOPE}',
'match': r'(?<![\.\d])\d+[eE][+-]?\d+[jJ]?'
},
0x0070:
{
'name' : 'constant.numeric.integer_and_complex.decimal.{SCOPE}',
'match': r'\b(?<!\.)([1-9]\d*|0)[jJ]?'
},
#-- KEYWORDS ------------------------------------------------------------------#
# 0x0080: storage.modifier.declaration
# 0x0090: keyword.control.import_and_import_from
# 0x00A0: keyword.control.flow_block_delimiters
0x00B0:
{
'name' : 'keyword.operator.bool.logical.{SCOPE}',
'match': r'\b(and|in|is|not|or)\b'
},
# 0x00C0: keyword.other
#-- OPERATORS -----------------------------------------------------------------#
0x00D0:
{
'name' : 'keyword.operator.comparison.{SCOPE}',
'match': r'<=|>=|==|<|>|!='
},
0x00E0:
{
'name' : 'keyword.operator.assignment.augmented.{SCOPE}',
'match': r'\+=|-=|\*=|/=|//=|%=|&=|\|=|\^=|<<=|>>=|\*\*='
},
0x00F0:
{
'name' : 'keyword.operator.arithmetic.{SCOPE}',
'match': r'\+|-|\*|\*\*|/|//|%|<<|>>|&|\||\^|~'
},
0x0100:
{
'name' : 'keyword.operator.value_and_annotation_assignment.{SCOPE}',
'match': r'=|->'
},
#-- CLASS ---------------------------------------------------------------------#
# 0x0110: meta.class
#-- FUNCTION ------------------------------------------------------------------#
# 0x0120: meta.function
#-- LAMBDA --------------------------------------------------------------------#
0x0130:
{
'name' : 'meta.function.anonymous.{SCOPE}',
'begin': r'\b(lambda)\b',
'beginCaptures':
{
1: {'name': 'storage.type.function.anonymous.{SCOPE}'}
},
'patterns':
[
{
'begin': r'\s+',
'patterns':
[
# Keyword arguments
{
'begin': r'\b([a-zA-Z_]\w*)\s*(=)',
'beginCaptures':
{
1: {'name': 'variable.parameter.function.{SCOPE}'},
2: {'name': 'keyword.operator.assignment.{SCOPE}'}
},
'patterns':
[
{'include': '$self'}
],
'end': r'(?=,|:)'
},
# Positional arguments
{
'name' : 'variable.parameter.function.{SCOPE}',
'match': r'\b[a-zA-Z_]\w*'
}
],
'end': r'(?=:)'
}
],
'end': r':'
},
#-- DECORATOR -----------------------------------------------------------------#
# Decorator with arguments
0x0140:
{
'name' : 'meta.function.decorator.with_arguments.{SCOPE}',
'begin': r'^\s*(@\s*[a-zA-Z_]\w*(\.[a-zA-Z_]\w*)*)\s*\(',
'beginCaptures':
{
1: {'name': 'support.function.decorator.{SCOPE}'}
},
'patterns':
[
{'include': '#keyword_arguments'},
{'include': '$self'}
],
'end': r'\)'
},
# Decorator without arguments
0x0150:
{
'name' : 'meta.function.decorator.without_arguments.{SCOPE}',
'begin': r'^\s*(@\s*[a-zA-Z_]\w*(\.[a-zA-Z_]\w*)*)',
'beginCaptures':
{
1: {'name': 'support.function.decorator.{SCOPE}'}
},
'end': r'(?=\s|$\n?|#)'
},
#-- CONSTANTS -----------------------------------------------------------------#
# 0x0160: constant.language.word_like
0x0170:
{
'name' : 'constant.language.symbol_like.{SCOPE}',
'match': r'(?<=\W|^)\.\.\.(?=\W|$)'
},
#-- STORAGES ------------------------------------------------------------------#
# 0x0180: storage.type.function
0x0190:
{
'name' : 'storage.type.class.{SCOPE}',
'match': r'\b(class)\b'
},
#-- BUILTINS ------------------------------------------------------------------#
0x01A0:
{
'include': '#builtin_types'
},
0x01B0:
{
'include': '#builtin_functions'
},
0x01C0:
{
'include': '#builtin_exceptions'
},
#-- MAGIC STUFFS --------------------------------------------------------------#
0x01D0:
{
'include': '#magic_function_names'
},
0x01F0:
{
'include': '#magic_variable_names'
},
#-- ETC -----------------------------------------------------------------------#
0x0200:
{
'include': '#line_continuation'
},
0x0210:
{
'include': '#language_variables'
},
#-- STRUCTURES ----------------------------------------------------------------#
# LIST
0x0220:
{
'name': 'meta.structure.list.{SCOPE}',
'begin': r'\[',
'patterns':
[
{
'begin': r'(?<=\[|,)\s*(?![\],])',
'patterns':
[
{'include': '$self'}
],
'end' : r'\s*(?:,|(?=\]))'
}
],
'end' : r'\]'
},
# DICTIONARY
0x0230:
{
'name': 'meta.structure.dictionary.{SCOPE}',
'begin': r'\{',
'patterns':
[
{
'begin': r'(?<=\{|,|^)\s*(?![,}])',
'patterns':
[
{'include': '$self'}
],
'end' : r'\s*(?:(?=\})|:)'
},
{
'begin': r'(?<=:|^)\s*',
'patterns':
[
{'include': '$self'}
],
'end' : r'\s*(?:(?=\})|,)'
}
],
'end' : r'\}'
},
0x0240:
# GROUPS, TUPLES
{
'name' : 'meta.structure.group.{SCOPE}',
'begin': r'(?<=,|;|=|\+|-|\*|/|\||:|<|>|~|%|\^|\\)\s*\(',
'patterns':
[
{'include': '$self'}
],
'end': r'\)'
},
#-- ACCESS --------------------------------------------------------------------#
0x0250:
{
'name' : 'meta.function_call.{SCOPE}',
'begin': r'(?<!:|,|;|\[|\{|\}|=|\+|-|\*|/|\||<|>|~|%|\^|\\|\n)\s*\(',
'patterns':
[
{'include': '#keyword_arguments'},
{'include': '$self'}
],
'end': r'\)'
},
#-- STRING --------------------------------------------------------------------#
0x0260:
{
'include': '#string_quoted'
}
},
#-- REPOSITORY ----------------------------------------------------------------#
'repository':
{
#-- COMMENTS ------------------------------------------------------------------#
'comment':
{
'name' : 'comment.line.hashmark.{SCOPE}',
'match': r'#.*$\n?'
},
#-- CLASS ---------------------------------------------------------------------#
'class_entity_name':
{
'contentName': 'entity.name.type.class.{SCOPE}',
'begin': r'(?=[a-zA-Z_]\w*)',
'patterns':
[
{'include': '#entity_name_class'}
],
'end': r'(?!\w)'
},
'class_inheritance':
{
'contentName': 'meta.class.inheritance.{SCOPE}',
'begin': r'\(',
'patterns':
[
{
'contentName': 'entity.other.inherited-class.{SCOPE}',
'begin': r'(?<=\(|,)\s*',
'patterns':
[
{'include': '$self'}
],
'end': r'\s*(?:,|(?=\)))',
'endCaptures':
{
1: {'name': 'punctuation.separator.inheritance.{SCOPE}'}
}
}
],
'end': r'\)|:'
},
#-- FUNCTION ------------------------------------------------------------------#
'function_entity_name':
{
'contentName': 'entity.name.function.{SCOPE}',
'begin': r'(?=[a-zA-Z_]\w*)',
'patterns':
[
{'include': '#entity_name_function'}
],
'end': r'(?!\w)'
},
'function_arguments':
{
'begin': r'\(',
'patterns':
[
# 'Inline' comments
{'include': '#comment'},
# Keyword arguments
{
'begin': r'\b([a-zA-Z_]\w*)\s*(=)',
'beginCaptures':
{
1: {'name': 'variable.parameter.function.{SCOPE}'},
2: {'name': 'keyword.operator.assignment.{SCOPE}'}
},
'patterns':
[
# Keyword assignment
{
'begin': r'(?<=(=))\s*',
'beginCaptures':
{
1: {'name': 'keyword.operator.assignment.{SCOPE}'}
},
'patterns':
[
{'include': '$self'}
],
'end': r'(?=,|[\n)])',
},
# Annotation assignment (kwargs)
{
'begin': r'(?<=:)\s*',
'patterns':
[
{'include': '$self'}
],
'end': r'(?=,|(=)|[\n)])',
'endCaptures':
{
1: {'name': 'keyword.operator.assignment.{SCOPE}'}
}
}
],
'end': r'(?=,|[\n)])'
},
# Positional arguments
{
'begin': r'\b([a-zA-Z_]\w*)\s*',
'beginCaptures':
{
1: {'name': 'variable.parameter.function.{SCOPE}'}
},
'patterns':
[
# Annotation assignment (args)
{
'begin': r'(?<=:)\s*',
'patterns':
[
{'include': '$self'}
],
'end': r'(?=,|[\n)])',
}
],
'end': r'(?=,|[\n)])'
}
],
'end': r'(?=\))'
},
'function_annotation':
{
'begin': r'(?<=\))\s*(->)\s*',
'beginCaptures':
{
1: {'name': 'keyword.operator.annotation.assignment.{SCOPE}'}
},
'patterns':
[
{'include': '$self'}
],
'end': r'(?=\s*:)'
},
#-- BUILTINS ------------------------------------------------------------------#
'builtin_exceptions':
{
'name' : 'support.type.exception.{SCOPE}',
'match':
(
r'(?<!\.)\b('
r'(Arithmetic|Buffer|Lookup|Assertion|Attribute|EOF|FloatingPoint|'
r'Import|Index|Key|Memory|Name|NotImplemented|OS|Overflow|Reference|'
r'Runtime|Syntax|Indentation|Tab|System|Type|UnboundLocal|Unicode|'
r'Unicode(Encode|Decode|Translate)?|Value|ZeroDivision|'
r'Environment|IO|VMS|Windows|BlockingIO|ChildProcess|'
r'BrokenPipe|Connection(Aborted|Refused|Reset)?|'
r'FileExists|FileNotFound|Interrupted|(Is|Not)ADirectory|'
r'Permission|ProcessLookup|Timeout)Error|(User|Deprecation|'
r'PendingDeprecation|Syntax|Runtime|Future|Import|Bytes|'
r'Resource)Warning|(Base)?Exception|(Generator|System)Exit|'
r'KeyboardInterrupt|StopIteration|Warning'
r')\b'
)
},
#-- ENTITY --------------------------------------------------------------------#
'entity_name_class':
{
'patterns':
[
{'include': '#illegal_names'},
{'include': '#generic_names'}
]
},
'entity_name_function':
{
'patterns':
[
{'include': '#magic_function_names'},
{'include': '#illegal_names'},
{'include': '#generic_names'}
]
},
'generic_names':
{
'match': r'[a-zA-Z_]\w*'
},
#-- KEYWORDS ------------------------------------------------------------------#
'keyword_arguments':
{
'begin': r'\b([a-zA-Z_]\w*)\s*(=)(?!=)',
'beginCaptures':
{
1: {'name': 'variable.parameter.function.{SCOPE}'},
2: {'name': 'keyword.operator.assignment.{SCOPE}'}
},
'patterns':
[
{'include': '$self'}
],
'end': r'(?=,|[\n)])'
},
#-- MAGIC STUFFS --------------------------------------------------------------#
# TODO: rearrange -> what is magic function and what is magic variable?
'magic_variable_names':
{
'name' : 'support.variable.magic.{SCOPE}',
'match':
(
r'\b__('
r'all|annotations|bases|builtins|class|debug|dict|doc|file|'
r'members|metaclass|mro|name|qualname|slots|weakref'
r')__\b'
)
},
# conventions
'language_variables':
{
'name' : 'variable.language.{SCOPE}',
'match': r'(?<!\.)\b(self|cls)\b'
},
'line_continuation':
{
'match': r'(\\)(.*)$\n?',
'captures':
{
1: {'name': 'punctuation.separator.continuation.line.{SCOPE}'},
2: {'name': 'invalid.illegal.unexpected_text.{SCOPE}'}
}
},
#-- STRING --------------------------------------------------------------------#
# TODO: decide if source.sql and special words, like SELECT and INSERT needed
'string_quoted':
{
# stringprefix: "r" | "u" | "R" | "U" |
# bytesprefix : "b" | "B" | "br" | "Br" | "bR" |
# "BR" | "rb" | "rB" | "Rb" | "RB" |
'patterns':
[
# Single BLOCK
{
'name' : 'string.quoted.single.block.{SCOPE}',
'begin': r"([bBuU]?)'''",
'beginCaptures':
{
1: {'name': 'storage.type.string.prefix.{SCOPE}'}
},
'patterns':
[
{'include': '#string_patterns'}
],
'end': r"'''"
},
{
'name' : 'string.quoted.single.block.{SCOPE}',
'begin': r"([rR][bB]|[bB][rR]|[rR])'''",
'beginCaptures':
{
1: {'name': 'storage.type.string.prefix.{SCOPE}'}
},
'patterns':
[
{'include': '#string_patterns'},
{'include': '#regular_expressions'},
{'include': '#comment'}
],
'end': r"'''"
},
# Single LINE
{
'name' : 'string.quoted.single.line.{SCOPE}',
'begin': r"([bBuU]?)'",
'beginCaptures':
{
1: {'name': 'storage.type.string.prefix.{SCOPE}'}
},
'patterns':
[
{'include': '#string_patterns'}
],
'end': r"'|(\n)",
'endCaptures':
{
1: {'name': 'invalid.illegal.unclosed_string.{SCOPE}'}
}
},
{
'name' : 'string.quoted.single.line.{SCOPE}',
'begin': r"([rR][bB]|[bB][rR]|[rR])'",
'beginCaptures':
{
1: {'name': 'storage.type.string.prefix.{SCOPE}'}
},
'patterns':
[
{'include': '#string_patterns'},
{'include': '#regular_expressions'}
],
'end': r"'|(\n)",
'endCaptures':
{
1: {'name': 'invalid.illegal.unclosed_string.{SCOPE}'}
}
},
# Double BLOCK
{
'name' : 'string.quoted.double.block.{SCOPE}',
'begin': r'([bBuU]?)"""',
'beginCaptures':
{
1: {'name': 'storage.type.string.prefix.{SCOPE}'}
},
'patterns':
[
{'include': '#string_patterns'}
],
'end': r'"""'
},
{
'name' : 'string.quoted.double.block.{SCOPE}',
'begin': r'([rR][bB]|[bB][rR]|[rR])"""',
'beginCaptures':
{
1: {'name': 'storage.type.string.prefix.{SCOPE}'}
},
'patterns':
[
{'include': '#string_patterns'},
{'include': '#regular_expressions'},
{'include': '#comment'}
],
'end': r'"""'
},
# Double LINE
{
'name' : 'string.quoted.double.line.{SCOPE}',
'begin': r'([bBuU]?)"',
'beginCaptures':
{
1: {'name': 'storage.type.string.prefix.{SCOPE}'}
},
'patterns':
[
{'include': '#string_patterns'}
],
'end': r'"|(\n)',
'endCaptures':
{
1: {'name': 'invalid.illegal.unclosed_string.{SCOPE}'}
}
},
# {
# 'name' : 'meta.format_attribute.format.{SCOPE}',
# 'begin': r'(\.format)\s*\(',
# 'beginCaptures':
# {
# 1: {'name': 'invalid.illegal.none.{SCOPE}'}
# },
# 'patterns':
# [
# {
# 'name' : 'string.quoted.double.format.{SCOPE}',
# 'begin': r'([uUbB]?)"',
# 'beginCaptures':
# {
# 1: {'name': 'storage.type.string.prefix.{SCOPE}'}
# },
# 'patterns':
# [
# {'include': '#string_patterns'},
# {'include': '#format_mini_language'}
# ],
# 'end': r'"|(\n)',
# 'endCaptures':
# {
# 1: {'name': 'invalid.illegal.unclosed_string.{SCOPE}'}
# }
# }
# ],
# 'end': r'\)'
# },
# {
# 'name' : 'string.quoted.double.format.{SCOPE}',
# 'begin': r'([uUbB]?)"',
# 'beginCaptures':
# {
# 1: {'name': 'storage.type.string.prefix.{SCOPE}'}
# },
# 'patterns':
# [
# {'include': '#string_patterns'},
# {'include': '#format_mini_language'}
# ],
# 'end': r'"\.format', # |(\n)',
# 'endCaptures':
# {
# 2: {'name': 'invalid.illegal.unclosed_string.{SCOPE}'}
# }
# },
{
'name' : 'string.quoted.double.line.{SCOPE}',
'begin': r'([rR][bB]|[bB][rR]|[rR])"',
'beginCaptures':
{
1: {'name': 'storage.type.string.prefix.{SCOPE}'}
},
'patterns':
[
{'include': '#string_patterns'},
{'include': '#regular_expressions'}
],
'end': r'"|(\n)',
'endCaptures':
{
1: {'name': 'invalid.illegal.unclosed_string.{SCOPE}'}
}
}
]
},
'string_patterns':
{
'patterns':
[
{'include': '#constant_placeholder'},
{'include': '#escaped_characters'},
{'include': '#escaped_unicode_characters'}
]
},
'constant_placeholder':
{
'name' : 'string.interpolated.placeholder.{SCOPE}',
'match': r'%(\(\w+\))?#?0?-?[ ]?\+?(\d*|\*)(\.(\d*|\*))?[hlL]?[diouxXeEfFgGcrs%]'
},
'format_mini_language':
{
'patterns':
[
{
'name' : 'constant.other.placeholder.format.{SCOPE}',
'match': r'\{\}'
}
]
},
'escaped_characters':
{
# escape:
# hex | octal | newline | double-quote |
# single-quote | bell | backspace | formfeed |
# line-feed | return | tab | vertical-tab | escape char
'name' : 'constant.character.escaped.special.{SCOPE}',
'match': r'\\(x\h{2}|[0-7]{3}|\n|\"|\'|a|b|f|n|r|t|v|\\)'
},
'escaped_unicode_characters':
{
# 16bit hex | 32bit hex | unicodename
'name' : 'constant.character.escaped.{SCOPE}',
'match': r'\\(u\h{4}|U\h{8}|N\{[a-zA-Z\s]+\})'
},
#-- REGEX ---------------------------------------------------------------------#
'regular_expressions':
{
'patterns':
[
{
# (?= positive look-ahead)
# (?! negative look-ahead)
# (?<= positive look-behind)
# (?<! negative look-behind)
# (?: non-capturing)
# (?P<id> group)
# (?(id/name)yes-pattern|no-pattern)
'name' : 'constant.character.escape.{SCOPE}',
'match': r'(?<=\()\?(=|!|<=|<!|:|P<[a-zA-Z_]\w*?>|'
r'\(([1-9]\d?|[a-zA-Z_]\w*)\))'
# NOTE: the problem with making this a begin/end block
# is that the patterns would need to include the multiline
# comments only if the expression is in multiline
# quotes, otherwise they should be excluded...
},
{
# (?P=this_is_a_group)
'name' : 'keyword.other.group_reference_name.regex.{SCOPE}',
'match': r'\((\?P=)([a-zA-Z_]\w*)\)',
'captures':
{
1: {'name': 'constant.character.escape.{SCOPE}'}
}
},
{
'name' : 'keyword.control.anchor.regex.{SCOPE}',
'match': r'\\[bBAZzG]|\^|\$'
},
{
# \number
'name' : 'keyword.other.group_reference_order.regex.{SCOPE}',
'match': r'\\[1-9]\d?'
},
{
# {2}, {2,}, {,2}, {2,3}, {2,3}?
'name' : 'keyword.operator.quantifier.regex.{SCOPE}',
'match': r'[?+*][?+]?|\{(\d+,\d+|\d+,|,\d+|\d+)\}\??'
},
{
'name' : 'keyword.operator.or.regex.{SCOPE}',
'match': r'\|'
},
{
# (?# comment)
'name' : 'comment.block.regex.{SCOPE}',
'begin': r'\(\?#',
'end' : r'\)'
},
{
# flags: a: ASCII-only matching
# i: ignore case
# L: locale dependent
# m: multi-line
# s: dot matches all
# u: unicode
# x: extended form (verbose)
'name' : 'keyword.other.option_toggle.regex.{SCOPE}',
'match': r'\(\?[aiLmsux]+\)'
},
{
'include': '#regular_expressions_escaped_characters'
},
{
'include': '#regular_expressions_character_classes'
},
{
'name' : 'keyword.operator.group.regex.{SCOPE}',
'match': r'[()]'
}
]
},
'regular_expressions_character_classes':
{
'patterns':
[
{
# \w, \W, \s, \S, \d, \D, .
'name' : 'constant.character.character_class.regex.{SCOPE}',
'match': r'\\[wWsSdD]|\.'
},
{
# [set of characters]
'name' : 'constant.other.character_class.set.regex.{SCOPE}',
'begin': r'\[(\^)?(\](?=.*\]))?',
'beginCaptures':
{
1: {'name': 'keyword.operator.negation.regex.{SCOPE}'}
},
'patterns':
[
{
'name' : 'constant.character.escaped.special.regex.except.{SCOPE}',
'match': r'\[|\\\\|\\\]'
},
{'include': '#regular_expressions_character_classes'},
{'include': '#regular_expressions_escaped_characters'}
],
'end': r'\]'
}
]
},
'regular_expressions_escaped_characters':
{
'name' : 'constant.character.escaped.special.regex.{SCOPE}',
'match': r'\\(\\|\?|\.|\*|\+|\{|\}|\||\(|\)|\[|\]|\^|\$)'
}
}
}
|
evhub/python
|
src/common.py
|
Python
|
gpl-3.0
| 33,936
|
[
"VisIt"
] |
80fe691b83222ef88a69b6dbf1a8a69eff8cb1b0212c9ddfeab6405f8574ae5f
|
# function.py ---
#
# Filename: function.py
# Description:
# Author: Subhasis Ray
# Maintainer:
# Created: Tue Sep 9 17:59:50 2014 (+0530)
# Version:
# Last-Updated: Sun Dec 20 00:02:50 2015 (-0500)
# By: subha
# Update #: 4
# URL:
# Keywords:
# Compatibility:
#
#
# Commentary:
#
#
#
#
# Change log:
#
#
#
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; see the file COPYING. If not, write to
# the Free Software Foundation, Inc., 51 Franklin Street, Fifth
# Floor, Boston, MA 02110-1301, USA.
#
#
# Code:
import numpy as np
import sys
import matplotlib.pyplot as plt
import moose
simtime = 1.0
def example():
"""Function objects can be used to evaluate expressions with arbitrary
number of variables and constants. We can assign expression of the
form::
f(c0, c1, ..., cM, x0, x1, ..., xN, y0,..., yP )
where `c_i`'s are constants and `x_i`'s and `y_i`'s are variables.
The constants must be defined before setting the expression, and the
variables are connected via messages. The constants can have any
name, but the variable names must be of the form x{i} or y{i}
where i is an increasing integer starting from 0.
The `x_i`'s are field elements and you have to set their number
first (function.x.num = N). Then you can connect any source field
sending out a double to the 'input' destination field of the
`x[i]`.
The `y_i`'s are useful when the required variable is a value field
and is not available as a source field. In that case you connect
the `requestOut` source field of the function element to the
`get{Field}` destination field on the target element. The `y_i`'s
are automatically added on connecting. Thus, if you call::
moose.connect(function, 'requestOut', a, 'getSomeField')
moose.connect(function, 'requestOut', b, 'getSomeField')
then ``a.someField`` will be assigned to ``y0`` and
``b.someField`` will be assigned to ``y1``.
In this example we evaluate the expression: ``z = c0 * exp(c1 *
x0) * cos(y0) + sin(t)``
with x0 ranging from 0 to 1 and y0 ranging from -pi to
+pi. These values are stored in two stimulus tables called xtab
and ytab respectively, so that at each timestep the next values of
x0 and y0 are assigned to the function.
Along with the value of the expression itself we also compute its
derivative with respect to y0 and its derivative with respect to
time (rate). The former uses a five-point stencil for the
numerical differentiation and has a glitch at y=0. The latter uses
backward difference divided by dt.
Unlike the Func class, the number of variables and constants is
unlimited in Function, and you can set all the variables via
messages.
"""
demo = moose.Neutral('/model')
function = moose.Function('/model/function')
function.c['c0'] = 1.0
function.c['c1'] = 2.0
#function.x.num = 1
function.expr = 'c0 * exp(c1*x0) * cos(y0) + sin(t)'
# mode 0 - evaluate function value, derivative and rate
# mode 1 - just evaluate function value,
# mode 2 - evaluate derivative,
# mode 3 - evaluate rate
function.mode = 0
function.independent = 'y0'
nsteps = 1000
xarr = np.linspace(0.0, 1.0, nsteps)
# Stimulus tables allow you to store sequences of numbers which
# are delivered via the 'output' message at each time step. This
# is a placeholder; in a real scenario you would use any
# sourceFinfo that sends out a double value.
input_x = moose.StimulusTable('/xtab')
input_x.vector = xarr
input_x.startTime = 0.0
input_x.stepPosition = xarr[0]
input_x.stopTime = simtime
moose.connect(input_x, 'output', function.x[0], 'input')
yarr = np.linspace(-np.pi, np.pi, nsteps)
input_y = moose.StimulusTable('/ytab')
input_y.vector = yarr
input_y.startTime = 0.0
input_y.stepPosition = yarr[0]
input_y.stopTime = simtime
moose.connect(function, 'requestOut', input_y, 'getOutputValue')
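# Per the y{i} convention described in the docstring above, this
# requestOut/getOutputValue connection exposes input_y's outputValue
# to the expression as the variable y0.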
# data recording
result = moose.Table('/ztab')
moose.connect(result, 'requestOut', function, 'getValue')
derivative = moose.Table('/zprime')
moose.connect(derivative, 'requestOut', function, 'getDerivative')
rate = moose.Table('/dz_by_dt')
moose.connect(rate, 'requestOut', function, 'getRate')
x_rec = moose.Table('/xrec')
moose.connect(x_rec, 'requestOut', input_x, 'getOutputValue')
y_rec = moose.Table('/yrec')
moose.connect(y_rec, 'requestOut', input_y, 'getOutputValue')
dt = simtime/nsteps
for ii in range(32):
moose.setClock(ii, dt)
moose.reinit()
moose.start(simtime)
# The following lines use matplotlib (imported at the top of this file)
# to display the plots.
plt.subplot(3,1,1)
plt.plot(x_rec.vector, result.vector, 'r-', label='z = {}'.format(function.expr))
z = function.c['c0'] * np.exp(function.c['c1'] * xarr) * np.cos(yarr) + np.sin(np.arange(len(xarr)) * dt)
plt.plot(xarr, z, 'b--', label='numpy computed')
plt.xlabel('x')
plt.ylabel('z')
plt.legend()
plt.subplot(3,1,2)
plt.plot(y_rec.vector, derivative.vector, 'r-', label='dz/dy0')
# derivatives computed by putting x values in the analytical formula
dzdy = function.c['c0'] * np.exp(function.c['c1'] * xarr) * (- np.sin(yarr))
plt.plot(yarr, dzdy, 'b--', label='numpy computed')
plt.xlabel('y')
plt.ylabel('dz/dy')
plt.legend()
plt.subplot(3,1,3)
# *** BEWARE *** The first two entries are spurious. Entry 0 is
# *** from reinit sending out the defaults. Entry 1 is because
# *** there is no lastValue yet for computing a real difference.
plt.plot(np.arange(2, len(rate.vector), 1) * dt, rate.vector[2:], 'r-', label='dz/dt')
dzdt = np.diff(z)/dt
plt.plot(np.arange(0, len(dzdt), 1.0) * dt, dzdt, 'b--', label='numpy computed')
plt.xlabel('t')
plt.ylabel('dz/dt')
plt.legend()
plt.tight_layout()
plt.show()
if __name__ == '__main__':
example()
#
# function.py ends here
|
BhallaLab/moose-examples
|
snippets/function.py
|
Python
|
gpl-2.0
| 6,612
|
[
"MOOSE"
] |
e2a98e300e14699f7e30e4a3634e3cb8888e6a599dc4c90b221bd891b88e9890
|
#
# Copyright 2013 Quantopian, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import division
import pickle
import collections
from datetime import (
datetime,
timedelta,
)
import logging
import operator
import unittest
from nose_parameterized import parameterized
import nose.tools as nt
import pytz
import itertools
import pandas as pd
import numpy as np
from six.moves import range, zip
from zipline.assets import AssetFinder
import zipline.utils.factory as factory
import zipline.finance.performance as perf
from zipline.finance.slippage import Transaction, create_transaction
import zipline.utils.math_utils as zp_math
from zipline.gens.composites import date_sorted_sources
from zipline.finance.trading import SimulationParameters
from zipline.finance.blotter import Order
from zipline.finance.commission import PerShare, PerTrade, PerDollar
from zipline.finance.trading import with_environment
from zipline.utils.factory import create_random_simulation_parameters
import zipline.protocol as zp
from zipline.protocol import Event, DATASOURCE_TYPE
from zipline.sources.data_frame_source import DataPanelSource
logger = logging.getLogger('Test Perf Tracking')
onesec = timedelta(seconds=1)
oneday = timedelta(days=1)
tradingday = timedelta(hours=6, minutes=30)
# nose.tools changed name in python 3
if not hasattr(nt, 'assert_count_equal'):
nt.assert_count_equal = nt.assert_items_equal
def check_perf_period(pp,
gross_leverage,
net_leverage,
long_exposure,
longs_count,
short_exposure,
shorts_count):
perf_data = pp.to_dict()
np.testing.assert_allclose(
gross_leverage, perf_data['gross_leverage'], rtol=1e-3)
np.testing.assert_allclose(
net_leverage, perf_data['net_leverage'], rtol=1e-3)
np.testing.assert_allclose(
long_exposure, perf_data['long_exposure'], rtol=1e-3)
np.testing.assert_allclose(
longs_count, perf_data['longs_count'], rtol=1e-3)
np.testing.assert_allclose(
short_exposure, perf_data['short_exposure'], rtol=1e-3)
np.testing.assert_allclose(
shorts_count, perf_data['shorts_count'], rtol=1e-3)
def check_account(account,
settled_cash,
equity_with_loan,
total_positions_value,
regt_equity,
available_funds,
excess_liquidity,
cushion,
leverage,
net_leverage,
net_liquidation):
# compare each expected account attribute against the computed
# account dict (leverage figures are test-specific, not assumed equal).
np.testing.assert_allclose(settled_cash,
account['settled_cash'], rtol=1e-3)
np.testing.assert_allclose(equity_with_loan,
account['equity_with_loan'], rtol=1e-3)
np.testing.assert_allclose(total_positions_value,
account['total_positions_value'], rtol=1e-3)
np.testing.assert_allclose(regt_equity,
account['regt_equity'], rtol=1e-3)
np.testing.assert_allclose(available_funds,
account['available_funds'], rtol=1e-3)
np.testing.assert_allclose(excess_liquidity,
account['excess_liquidity'], rtol=1e-3)
np.testing.assert_allclose(cushion,
account['cushion'], rtol=1e-3)
np.testing.assert_allclose(leverage, account['leverage'], rtol=1e-3)
np.testing.assert_allclose(net_leverage,
account['net_leverage'], rtol=1e-3)
np.testing.assert_allclose(net_liquidation,
account['net_liquidation'], rtol=1e-3)
def create_txn(trade_event, price, amount):
"""
Create a fake transaction to be filled and processed prior to the execution
of a given trade event.
"""
mock_order = Order(trade_event.dt, trade_event.sid, amount, id=None)
return create_transaction(trade_event, mock_order, price, amount)
@with_environment()
def benchmark_events_in_range(sim_params, env=None):
return [
Event({'dt': dt,
'returns': ret,
'type': zp.DATASOURCE_TYPE.BENCHMARK,
# We explicitly rely on the behavior that benchmarks sort before
# any other events.
'source_id': '1Abenchmarks'})
for dt, ret in env.benchmark_returns.iteritems()
if dt.date() >= sim_params.period_start.date() and
dt.date() <= sim_params.period_end.date()
]
def calculate_results(host,
trade_events,
dividend_events=None,
splits=None,
txns=None):
"""
Run the given events through a stripped down version of the loop in
AlgorithmSimulator.transform.
IMPORTANT NOTE FOR TEST WRITERS/READERS:
This loop has some wonky logic for the order of event processing for
datasource types. This exists mostly to accommodate legacy tests that were
making assumptions about how events would be
sorted.
In particular:
- Dividends passed for a given date are processed PRIOR to any events
for that date.
- Splits passed for a given date are processed AFTER any events for that
date.
Tests that use this helper should not be considered useful guarantees of
the behavior of AlgorithmSimulator on a stream containing the same events
unless the subgroups have been explicitly re-sorted in this way.
"""
txns = txns or []
splits = splits or []
perf_tracker = perf.PerformanceTracker(host.sim_params)
if dividend_events is not None:
dividend_frame = pd.DataFrame(
[
event.to_series(index=zp.DIVIDEND_FIELDS)
for event in dividend_events
],
)
perf_tracker.update_dividends(dividend_frame)
# Raw trades
trade_events = sorted(trade_events, key=lambda ev: (ev.dt, ev.source_id))
# Add a benchmark event for each date.
trades_plus_bm = date_sorted_sources(trade_events, host.benchmark_events)
# Filter out benchmark events that are later than the last trade date.
filtered_trades_plus_bm = (filt_event for filt_event in trades_plus_bm
if filt_event.dt <= trade_events[-1].dt)
grouped_trades_plus_bm = itertools.groupby(filtered_trades_plus_bm,
lambda x: x.dt)
results = []
bm_updated = False
for date, group in grouped_trades_plus_bm:
for txn in filter(lambda txn: txn.dt == date, txns):
# Process txns for this date.
perf_tracker.process_transaction(txn)
for event in group:
if event.type == zp.DATASOURCE_TYPE.TRADE:
perf_tracker.process_trade(event)
elif event.type == zp.DATASOURCE_TYPE.DIVIDEND:
perf_tracker.process_dividend(event)
elif event.type == zp.DATASOURCE_TYPE.BENCHMARK:
perf_tracker.process_benchmark(event)
bm_updated = True
elif event.type == zp.DATASOURCE_TYPE.COMMISSION:
perf_tracker.process_commission(event)
for split in filter(lambda split: split.dt == date, splits):
# Process splits for this date.
perf_tracker.process_split(split)
if bm_updated:
msg = perf_tracker.handle_market_close_daily()
msg['account'] = perf_tracker.get_account(True)
results.append(msg)
bm_updated = False
return results
def check_perf_tracker_serialization(perf_tracker):
scalar_keys = [
'emission_rate',
'txn_count',
'market_open',
'last_close',
'_dividend_count',
'period_start',
'day_count',
'capital_base',
'market_close',
'saved_dt',
'period_end',
'total_days',
]
p_string = pickle.dumps(perf_tracker)
test = pickle.loads(p_string)
for k in scalar_keys:
nt.assert_equal(getattr(test, k), getattr(perf_tracker, k), k)
for period in test.perf_periods:
nt.assert_true(hasattr(period, '_position_tracker'))
class TestSplitPerformance(unittest.TestCase):
def setUp(self):
self.sim_params, self.dt, self.end_dt = \
create_random_simulation_parameters()
# start with $10,000
self.sim_params.capital_base = 10e3
self.benchmark_events = benchmark_events_in_range(self.sim_params)
def test_split_long_position(self):
events = factory.create_trade_history(
1,
[20, 20],
[100, 100],
oneday,
self.sim_params
)
# set up a long position in sid 1
# 100 shares at $20 apiece = $2000 position
txns = [create_txn(events[0], 20, 100)]
# set up a split with ratio 3 occurring at the start of the second
# day.
splits = [
factory.create_split(
1,
3,
events[1].dt,
),
]
results = calculate_results(self, events, txns=txns, splits=splits)
# should have 33 shares (at $60 apiece) and $20 in cash
self.assertEqual(2, len(results))
latest_positions = results[1]['daily_perf']['positions']
self.assertEqual(1, len(latest_positions))
# check the last position to make sure it's been updated
position = latest_positions[0]
self.assertEqual(1, position['sid'])
self.assertEqual(33, position['amount'])
self.assertEqual(60, position['cost_basis'])
self.assertEqual(60, position['last_sale_price'])
# since we started with $10000, and we spent $2000 on the
# position, but then got ~$20 back (the 0.33 fractional share
# left over by the split is paid out as cash at the post-split
# price of $60), we should have $8020 (or close to it) in cash.
# we won't get exactly 8020 because sometimes a split is
# denoted as a ratio like 0.3333, and we lose some digits
# of precision. thus, make sure we're pretty close.
daily_perf = results[1]['daily_perf']
self.assertTrue(
zp_math.tolerant_equals(8020,
daily_perf['ending_cash'], 1))
# Validate that the account attributes were updated.
account = results[1]['account']
self.assertEqual(float('inf'), account['day_trades_remaining'])
# this is a long only portfolio that is only partially invested
# so net and gross leverage are equal.
np.testing.assert_allclose(0.198, account['leverage'], rtol=1e-3)
np.testing.assert_allclose(0.198, account['net_leverage'], rtol=1e-3)
np.testing.assert_allclose(8020, account['regt_equity'], rtol=1e-3)
self.assertEqual(float('inf'), account['regt_margin'])
np.testing.assert_allclose(8020, account['available_funds'], rtol=1e-3)
self.assertEqual(0, account['maintenance_margin_requirement'])
np.testing.assert_allclose(10000,
account['equity_with_loan'], rtol=1e-3)
self.assertEqual(float('inf'), account['buying_power'])
self.assertEqual(0, account['initial_margin_requirement'])
np.testing.assert_allclose(8020, account['excess_liquidity'],
rtol=1e-3)
np.testing.assert_allclose(8020, account['settled_cash'], rtol=1e-3)
np.testing.assert_allclose(10000, account['net_liquidation'],
rtol=1e-3)
np.testing.assert_allclose(0.802, account['cushion'], rtol=1e-3)
np.testing.assert_allclose(1980, account['total_positions_value'],
rtol=1e-3)
self.assertEqual(0, account['accrued_interest'])
for i, result in enumerate(results):
for perf_kind in ('daily_perf', 'cumulative_perf'):
perf_result = result[perf_kind]
# prices aren't changing, so pnl and returns should be 0.0
self.assertEqual(0.0, perf_result['pnl'],
"day %s %s pnl %s instead of 0.0" %
(i, perf_kind, perf_result['pnl']))
self.assertEqual(0.0, perf_result['returns'],
"day %s %s returns %s instead of 0.0" %
(i, perf_kind, perf_result['returns']))
class TestCommissionEvents(unittest.TestCase):
def setUp(self):
self.sim_params, self.dt, self.end_dt = \
create_random_simulation_parameters()
logger.info("sim_params: %s, dt: %s, end_dt: %s" %
(self.sim_params, self.dt, self.end_dt))
self.sim_params.capital_base = 10e3
self.benchmark_events = benchmark_events_in_range(self.sim_params)
def test_commission_event(self):
events = factory.create_trade_history(
1,
[10, 10, 10, 10, 10],
[100, 100, 100, 100, 100],
oneday,
self.sim_params
)
# Test commission models and validate result
# Expected commission amounts:
# PerShare commission: 1.00, 1.00, 1.50 = $3.50
# PerTrade commission: 5.00, 5.00, 5.00 = $15.00
# PerDollar commission: 1.50, 3.00, 4.50 = $9.00
# Total commission = $3.50 + $15.00 + $9.00 = $27.50
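# (PerShare at $0.01/share with a $1.00 minimum: 0.50->1.00, 1.00, 1.50;
# PerTrade at $5.00 flat per trade: 5.00 x 3;
# PerDollar at $0.0015/dollar on $1000, $2000, $3000 traded: 1.50, 3.00, 4.50.)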
# Create 3 transactions: 50, 100, 150 shares traded @ $20
transactions = [create_txn(events[0], 20, i)
for i in [50, 100, 150]]
# Create commission models and validate that produce expected
# commissions.
models = [PerShare(cost=0.01, min_trade_cost=1.00),
PerTrade(cost=5.00),
PerDollar(cost=0.0015)]
expected_results = [3.50, 15.0, 9.0]
for model, expected in zip(models, expected_results):
total_commission = 0
for trade in transactions:
total_commission += model.calculate(trade)[1]
self.assertEqual(total_commission, expected)
# Verify that commission events are handled correctly by
# PerformanceTracker.
cash_adj_dt = events[0].dt
cash_adjustment = factory.create_commission(1, 300.0, cash_adj_dt)
events.append(cash_adjustment)
# Insert a purchase order.
txns = [create_txn(events[0], 20, 1)]
results = calculate_results(self, events, txns=txns)
# Validate that we lost 320 dollars from our cash pool.
self.assertEqual(results[-1]['cumulative_perf']['ending_cash'],
9680)
# Validate that the cost basis of our position changed.
self.assertEqual(results[-1]['daily_perf']['positions']
[0]['cost_basis'], 320.0)
# Validate that the account attributes were updated.
account = results[1]['account']
self.assertEqual(float('inf'), account['day_trades_remaining'])
np.testing.assert_allclose(0.001, account['leverage'], rtol=1e-3,
atol=1e-4)
np.testing.assert_allclose(9680, account['regt_equity'], rtol=1e-3)
self.assertEqual(float('inf'), account['regt_margin'])
np.testing.assert_allclose(9680, account['available_funds'],
rtol=1e-3)
self.assertEqual(0, account['maintenance_margin_requirement'])
np.testing.assert_allclose(9690,
account['equity_with_loan'], rtol=1e-3)
self.assertEqual(float('inf'), account['buying_power'])
self.assertEqual(0, account['initial_margin_requirement'])
np.testing.assert_allclose(9680, account['excess_liquidity'],
rtol=1e-3)
np.testing.assert_allclose(9680, account['settled_cash'],
rtol=1e-3)
np.testing.assert_allclose(9690, account['net_liquidation'],
rtol=1e-3)
np.testing.assert_allclose(0.999, account['cushion'], rtol=1e-3)
np.testing.assert_allclose(10, account['total_positions_value'],
rtol=1e-3)
self.assertEqual(0, account['accrued_interest'])
def test_commission_zero_position(self):
"""
Ensure no div-by-zero errors.
"""
events = factory.create_trade_history(
1,
[10, 10, 10, 10, 10],
[100, 100, 100, 100, 100],
oneday,
self.sim_params
)
# Buy and sell the same sid so that we have a zero position by the
# time of events[3].
txns = [
create_txn(events[0], 20, 1),
create_txn(events[1], 20, -1),
]
# Add a cash adjustment at the time of event[3].
cash_adj_dt = events[3].dt
cash_adjustment = factory.create_commission(1, 300.0, cash_adj_dt)
events.append(cash_adjustment)
results = calculate_results(self, events, txns=txns)
# Validate that we lost 300 dollars from our cash pool.
self.assertEqual(results[-1]['cumulative_perf']['ending_cash'],
9700)
def test_commission_no_position(self):
"""
Ensure no position-not-found or sid-not-found errors.
"""
events = factory.create_trade_history(
1,
[10, 10, 10, 10, 10],
[100, 100, 100, 100, 100],
oneday,
self.sim_params
)
# Add a cash adjustment at the time of event[3].
cash_adj_dt = events[3].dt
cash_adjustment = factory.create_commission(1, 300.0, cash_adj_dt)
events.append(cash_adjustment)
results = calculate_results(self, events)
# Validate that we lost 300 dollars from our cash pool.
self.assertEqual(results[-1]['cumulative_perf']['ending_cash'],
9700)
class TestDividendPerformance(unittest.TestCase):
def setUp(self):
self.sim_params, self.dt, self.end_dt = \
create_random_simulation_parameters()
self.sim_params.capital_base = 10e3
self.benchmark_events = benchmark_events_in_range(self.sim_params)
def test_market_hours_calculations(self):
# DST in US/Eastern began on Sunday March 14, 2010
before = datetime(2010, 3, 12, 14, 31, tzinfo=pytz.utc)
after = factory.get_next_trading_dt(
before,
timedelta(days=1)
)
self.assertEqual(after.hour, 13)
def test_long_position_receives_dividend(self):
# post some trades in the market
events = factory.create_trade_history(
1,
[10, 10, 10, 10, 10],
[100, 100, 100, 100, 100],
oneday,
self.sim_params
)
dividend = factory.create_dividend(
1,
10.00,
# declared date, when the algorithm finds out about
# the dividend
events[0].dt,
# ex_date, the date before which the algorithm must hold stock
# to receive the dividend
events[1].dt,
# pay date, when the algorithm receives the dividend.
events[2].dt
)
# Simulate a transaction being filled prior to the ex_date.
txns = [create_txn(events[0], 10.0, 100)]
results = calculate_results(
self,
events,
dividend_events=[dividend],
txns=txns,
)
self.assertEqual(len(results), 5)
cumulative_returns = \
[event['cumulative_perf']['returns'] for event in results]
self.assertEqual(cumulative_returns, [0.0, 0.0, 0.1, 0.1, 0.1])
daily_returns = [event['daily_perf']['returns']
for event in results]
self.assertEqual(daily_returns, [0.0, 0.0, 0.10, 0.0, 0.0])
cash_flows = [event['daily_perf']['capital_used']
for event in results]
self.assertEqual(cash_flows, [-1000, 0, 1000, 0, 0])
cumulative_cash_flows = \
[event['cumulative_perf']['capital_used'] for event in results]
self.assertEqual(cumulative_cash_flows, [-1000, -1000, 0, 0, 0])
cash_pos = \
[event['cumulative_perf']['ending_cash'] for event in results]
self.assertEqual(cash_pos, [9000, 9000, 10000, 10000, 10000])
def test_long_position_receives_stock_dividend(self):
# post some trades in the market
events = []
for sid in (1, 2):
events.extend(
factory.create_trade_history(
sid,
[10, 10, 10, 10, 10],
[100, 100, 100, 100, 100],
oneday,
self.sim_params)
)
dividend = factory.create_stock_dividend(
1,
payment_sid=2,
ratio=2,
# declared date, when the algorithm finds out about
# the dividend
declared_date=events[0].dt,
# ex_date, the date before which the algorithm must hold stock
# to receive the dividend
ex_date=events[1].dt,
# pay date, when the algorithm receives the dividend.
pay_date=events[2].dt
)
txns = [create_txn(events[0], 10.0, 100)]
results = calculate_results(
self,
events,
dividend_events=[dividend],
txns=txns,
)
self.assertEqual(len(results), 5)
cumulative_returns = \
[event['cumulative_perf']['returns'] for event in results]
self.assertEqual(cumulative_returns, [0.0, 0.0, 0.2, 0.2, 0.2])
daily_returns = [event['daily_perf']['returns']
for event in results]
self.assertEqual(daily_returns, [0.0, 0.0, 0.2, 0.0, 0.0])
cash_flows = [event['daily_perf']['capital_used']
for event in results]
self.assertEqual(cash_flows, [-1000, 0, 0, 0, 0])
cumulative_cash_flows = \
[event['cumulative_perf']['capital_used'] for event in results]
self.assertEqual(cumulative_cash_flows, [-1000] * 5)
cash_pos = \
[event['cumulative_perf']['ending_cash'] for event in results]
self.assertEqual(cash_pos, [9000] * 5)
def test_long_position_purchased_on_ex_date_receives_no_dividend(self):
# post some trades in the market
events = factory.create_trade_history(
1,
[10, 10, 10, 10, 10],
[100, 100, 100, 100, 100],
oneday,
self.sim_params
)
dividend = factory.create_dividend(
1,
10.00,
events[0].dt, # Declared date
events[1].dt, # Ex-dividend date
events[2].dt # Pay date
)
# Simulate a transaction being filled on the ex_date.
txns = [create_txn(events[1], 10.0, 100)]
results = calculate_results(
self,
events,
dividend_events=[dividend],
txns=txns,
)
self.assertEqual(len(results), 5)
cumulative_returns = \
[event['cumulative_perf']['returns'] for event in results]
self.assertEqual(cumulative_returns, [0, 0, 0, 0, 0])
daily_returns = [event['daily_perf']['returns'] for event in results]
self.assertEqual(daily_returns, [0, 0, 0, 0, 0])
cash_flows = [event['daily_perf']['capital_used'] for event in results]
self.assertEqual(cash_flows, [0, -1000, 0, 0, 0])
cumulative_cash_flows = \
[event['cumulative_perf']['capital_used'] for event in results]
self.assertEqual(cumulative_cash_flows,
[0, -1000, -1000, -1000, -1000])
def test_selling_before_dividend_payment_still_gets_paid(self):
# post some trades in the market
events = factory.create_trade_history(
1,
[10, 10, 10, 10, 10],
[100, 100, 100, 100, 100],
oneday,
self.sim_params
)
dividend = factory.create_dividend(
1,
10.00,
events[0].dt, # Declared date
events[1].dt, # Ex-dividend date
events[3].dt # Pay date
)
buy_txn = create_txn(events[0], 10.0, 100)
sell_txn = create_txn(events[2], 10.0, -100)
txns = [buy_txn, sell_txn]
results = calculate_results(
self,
events,
dividend_events=[dividend],
txns=txns,
)
self.assertEqual(len(results), 5)
cumulative_returns = \
[event['cumulative_perf']['returns'] for event in results]
self.assertEqual(cumulative_returns, [0, 0, 0, 0.1, 0.1])
daily_returns = [event['daily_perf']['returns'] for event in results]
self.assertEqual(daily_returns, [0, 0, 0, 0.1, 0])
cash_flows = [event['daily_perf']['capital_used'] for event in results]
self.assertEqual(cash_flows, [-1000, 0, 1000, 1000, 0])
cumulative_cash_flows = \
[event['cumulative_perf']['capital_used'] for event in results]
self.assertEqual(cumulative_cash_flows, [-1000, -1000, 0, 1000, 1000])
def test_buy_and_sell_before_ex(self):
# post some trades in the market
events = factory.create_trade_history(
1,
[10, 10, 10, 10, 10, 10],
[100, 100, 100, 100, 100, 100],
oneday,
self.sim_params
)
dividend = factory.create_dividend(
1,
10.00,
events[3].dt,
events[4].dt,
events[5].dt
)
buy_txn = create_txn(events[1], 10.0, 100)
sell_txn = create_txn(events[2], 10.0, -100)
txns = [buy_txn, sell_txn]
results = calculate_results(
self,
events,
dividend_events=[dividend],
txns=txns,
)
self.assertEqual(len(results), 6)
cumulative_returns = \
[event['cumulative_perf']['returns'] for event in results]
self.assertEqual(cumulative_returns, [0, 0, 0, 0, 0, 0])
daily_returns = [event['daily_perf']['returns'] for event in results]
self.assertEqual(daily_returns, [0, 0, 0, 0, 0, 0])
cash_flows = [event['daily_perf']['capital_used'] for event in results]
self.assertEqual(cash_flows, [0, -1000, 1000, 0, 0, 0])
cumulative_cash_flows = \
[event['cumulative_perf']['capital_used'] for event in results]
self.assertEqual(cumulative_cash_flows, [0, -1000, 0, 0, 0, 0])
def test_ending_before_pay_date(self):
# post some trades in the market
events = factory.create_trade_history(
1,
[10, 10, 10, 10, 10],
[100, 100, 100, 100, 100],
oneday,
self.sim_params
)
pay_date = self.sim_params.first_open
# find pay date that is much later.
for i in range(30):
pay_date = factory.get_next_trading_dt(pay_date, oneday)
dividend = factory.create_dividend(
1,
10.00,
events[0].dt,
events[0].dt,
pay_date
)
txns = [create_txn(events[1], 10.0, 100)]
results = calculate_results(
self,
events,
dividend_events=[dividend],
txns=txns,
)
self.assertEqual(len(results), 5)
cumulative_returns = \
[event['cumulative_perf']['returns'] for event in results]
self.assertEqual(cumulative_returns, [0, 0, 0, 0.0, 0.0])
daily_returns = [event['daily_perf']['returns'] for event in results]
self.assertEqual(daily_returns, [0, 0, 0, 0, 0])
cash_flows = [event['daily_perf']['capital_used'] for event in results]
self.assertEqual(cash_flows, [0, -1000, 0, 0, 0])
cumulative_cash_flows = \
[event['cumulative_perf']['capital_used'] for event in results]
self.assertEqual(
cumulative_cash_flows,
[0, -1000, -1000, -1000, -1000]
)
def test_short_position_pays_dividend(self):
# post some trades in the market
events = factory.create_trade_history(
1,
[10, 10, 10, 10, 10],
[100, 100, 100, 100, 100],
oneday,
self.sim_params
)
dividend = factory.create_dividend(
1,
10.00,
# declare at open of test
events[0].dt,
# ex_date same as trade 2
events[2].dt,
events[3].dt
)
txns = [create_txn(events[1], 10.0, -100)]
results = calculate_results(
self,
events,
dividend_events=[dividend],
txns=txns,
)
self.assertEqual(len(results), 5)
cumulative_returns = \
[event['cumulative_perf']['returns'] for event in results]
self.assertEqual(cumulative_returns, [0.0, 0.0, 0.0, -0.1, -0.1])
daily_returns = [event['daily_perf']['returns'] for event in results]
self.assertEqual(daily_returns, [0.0, 0.0, 0.0, -0.1, 0.0])
cash_flows = [event['daily_perf']['capital_used'] for event in results]
self.assertEqual(cash_flows, [0, 1000, 0, -1000, 0])
cumulative_cash_flows = \
[event['cumulative_perf']['capital_used'] for event in results]
self.assertEqual(cumulative_cash_flows, [0, 1000, 1000, 0, 0])
def test_no_position_receives_no_dividend(self):
# post some trades in the market
events = factory.create_trade_history(
1,
[10, 10, 10, 10, 10],
[100, 100, 100, 100, 100],
oneday,
self.sim_params
)
dividend = factory.create_dividend(
1,
10.00,
events[0].dt,
events[1].dt,
events[2].dt
)
results = calculate_results(
self,
events,
dividend_events=[dividend],
)
self.assertEqual(len(results), 5)
cumulative_returns = \
[event['cumulative_perf']['returns'] for event in results]
self.assertEqual(cumulative_returns, [0.0, 0.0, 0.0, 0.0, 0.0])
daily_returns = [event['daily_perf']['returns'] for event in results]
self.assertEqual(daily_returns, [0.0, 0.0, 0.0, 0.0, 0.0])
cash_flows = [event['daily_perf']['capital_used'] for event in results]
self.assertEqual(cash_flows, [0, 0, 0, 0, 0])
cumulative_cash_flows = \
[event['cumulative_perf']['capital_used'] for event in results]
self.assertEqual(cumulative_cash_flows, [0, 0, 0, 0, 0])
@with_environment()
def test_no_dividend_at_simulation_end(self, env=None):
# post some trades in the market
events = factory.create_trade_history(
1,
[10, 10, 10, 10, 10],
[100, 100, 100, 100, 100],
oneday,
self.sim_params
)
dividend = factory.create_dividend(
1,
10.00,
# declared date, when the algorithm finds out about
# the dividend
events[-3].dt,
# ex_date, the date before which the algorithm must hold stock
# to receive the dividend
events[-2].dt,
# pay date, when the algorithm receives the dividend.
# This pays out on the day after the last event
env.next_trading_day(events[-1].dt)
)
# Set the last day to be the last event
self.sim_params.period_end = events[-1].dt
self.sim_params._update_internal()
# Simulate a transaction being filled prior to the ex_date.
txns = [create_txn(events[0], 10.0, 100)]
results = calculate_results(
self,
events,
dividend_events=[dividend],
txns=txns,
)
self.assertEqual(len(results), 5)
cumulative_returns = \
[event['cumulative_perf']['returns'] for event in results]
self.assertEqual(cumulative_returns, [0.0, 0.0, 0.0, 0.0, 0.0])
daily_returns = [event['daily_perf']['returns'] for event in results]
self.assertEqual(daily_returns, [0.0, 0.0, 0.0, 0.0, 0.0])
cash_flows = [event['daily_perf']['capital_used'] for event in results]
self.assertEqual(cash_flows, [-1000, 0, 0, 0, 0])
cumulative_cash_flows = \
[event['cumulative_perf']['capital_used'] for event in results]
self.assertEqual(cumulative_cash_flows,
[-1000, -1000, -1000, -1000, -1000])
class TestDividendPerformanceHolidayStyle(TestDividendPerformance):
# The holiday tests begin the simulation on the day
# before Thanksgiving, so that the next trading day is
# two days ahead. Any tests that hard code events
# to be start + oneday will fail, since those events will
# be skipped by the simulation.
def setUp(self):
self.dt = datetime(2003, 11, 30, tzinfo=pytz.utc)
self.end_dt = datetime(2004, 11, 25, tzinfo=pytz.utc)
self.sim_params = SimulationParameters(
self.dt,
self.end_dt)
self.benchmark_events = benchmark_events_in_range(self.sim_params)
class TestPositionPerformance(unittest.TestCase):
def setUp(self):
self.sim_params, self.dt, self.end_dt = \
create_random_simulation_parameters()
self.benchmark_events = benchmark_events_in_range(self.sim_params)
def test_long_short_positions(self):
"""
start with $1000
buy 100 stock1 shares at $10
sell short 100 stock2 shares at $10
stock1 then goes down to $9
stock2 goes to $11
"""
trades_1 = factory.create_trade_history(
1,
[10, 10, 10, 9],
[100, 100, 100, 100],
onesec,
self.sim_params
)
trades_2 = factory.create_trade_history(
2,
[10, 10, 10, 11],
[100, 100, 100, 100],
onesec,
self.sim_params
)
txn1 = create_txn(trades_1[1], 10.0, 100)
txn2 = create_txn(trades_2[1], 10.0, -100)
pt = perf.PositionTracker()
pp = perf.PerformancePeriod(1000.0)
pp.position_tracker = pt
pt.execute_transaction(txn1)
pp.handle_execution(txn1)
pt.execute_transaction(txn2)
pp.handle_execution(txn2)
for trade in itertools.chain(trades_1[:-2], trades_2[:-2]):
pt.update_last_sale(trade)
pp.calculate_performance()
check_perf_period(
pp,
gross_leverage=2.0,
net_leverage=0.0,
long_exposure=1000.0,
longs_count=1,
short_exposure=-1000.0,
shorts_count=1)
# Validate that the account attributes were updated.
account = pp.as_account()
check_account(account,
settled_cash=1000.0,
equity_with_loan=1000.0,
total_positions_value=0.0,
regt_equity=1000.0,
available_funds=1000.0,
excess_liquidity=1000.0,
cushion=1.0,
leverage=2.0,
net_leverage=0.0,
net_liquidation=1000.0)
# now simulate stock1 going to $9
pt.update_last_sale(trades_1[-1])
# and stock2 going to $11
pt.update_last_sale(trades_2[-1])
pp.calculate_performance()
# Validate that the account attributes were updated.
account = pp.as_account()
check_perf_period(
pp,
gross_leverage=2.5,
net_leverage=-0.25,
long_exposure=900.0,
longs_count=1,
short_exposure=-1100.0,
shorts_count=1)
check_account(account,
settled_cash=1000.0,
equity_with_loan=800.0,
total_positions_value=-200.0,
regt_equity=1000.0,
available_funds=1000.0,
excess_liquidity=1000.0,
cushion=1.25,
leverage=2.5,
net_leverage=-0.25,
net_liquidation=800.0)
def test_levered_long_position(self):
"""
start with $1,000, then buy 1000 shares at $10.
price goes to $11
"""
# post some trades in the market
trades = factory.create_trade_history(
1,
[10, 10, 10, 11],
[100, 100, 100, 100],
onesec,
self.sim_params
)
txn = create_txn(trades[1], 10.0, 1000)
pt = perf.PositionTracker()
pp = perf.PerformancePeriod(1000.0)
pp.position_tracker = pt
pt.execute_transaction(txn)
pp.handle_execution(txn)
for trade in trades[:-2]:
pt.update_last_sale(trade)
pp.calculate_performance()
check_perf_period(
pp,
gross_leverage=10.0,
net_leverage=10.0,
long_exposure=10000.0,
longs_count=1,
short_exposure=0.0,
shorts_count=0)
# Validate that the account attributes were updated.
account = pp.as_account()
check_account(account,
settled_cash=-9000.0,
equity_with_loan=1000.0,
total_positions_value=10000.0,
regt_equity=-9000.0,
available_funds=-9000.0,
excess_liquidity=-9000.0,
cushion=-9.0,
leverage=10.0,
net_leverage=10.0,
net_liquidation=1000.0)
# now simulate a price jump to $11
pt.update_last_sale(trades[-1])
pp.calculate_performance()
check_perf_period(
pp,
gross_leverage=5.5,
net_leverage=5.5,
long_exposure=11000.0,
longs_count=1,
short_exposure=0.0,
shorts_count=0)
# Validate that the account attributes were updated.
account = pp.as_account()
check_account(account,
settled_cash=-9000.0,
equity_with_loan=2000.0,
total_positions_value=11000.0,
regt_equity=-9000.0,
available_funds=-9000.0,
excess_liquidity=-9000.0,
cushion=-4.5,
leverage=5.5,
net_leverage=5.5,
net_liquidation=2000.0)
def test_long_position(self):
"""
verify that the performance period calculates properly for a
single buy transaction
"""
# post some trades in the market
trades = factory.create_trade_history(
1,
[10, 10, 10, 11],
[100, 100, 100, 100],
onesec,
self.sim_params
)
txn = create_txn(trades[1], 10.0, 100)
pt = perf.PositionTracker()
pp = perf.PerformancePeriod(1000.0)
pp.position_tracker = pt
pt.execute_transaction(txn)
pp.handle_execution(txn)
# This verifies that the last sale price is being correctly
# set in the positions. If this is not the case then returns can
# incorrectly show as sharply dipping if a transaction arrives
# before a trade. This is caused by returns being based on holding
# stocks with a last sale price of 0.
self.assertEqual(pp.positions[1].last_sale_price, 10.0)
for trade in trades:
pt.update_last_sale(trade)
pp.calculate_performance()
self.assertEqual(
pp.period_cash_flow,
-1 * txn.price * txn.amount,
"capital used should be equal to the opposite of the transaction \
cost of sole txn in test"
)
self.assertEqual(
len(pp.positions),
1,
"should be just one position")
self.assertEqual(
pp.positions[1].sid,
txn.sid,
"position should be in security with id 1")
self.assertEqual(
pp.positions[1].amount,
txn.amount,
"should have a position of {sharecount} shares".format(
sharecount=txn.amount
)
)
self.assertEqual(
pp.positions[1].cost_basis,
txn.price,
"should have a cost basis of 10"
)
self.assertEqual(
pp.positions[1].last_sale_price,
trades[-1]['price'],
"last sale should be same as last trade. \
expected {exp} actual {act}".format(
exp=trades[-1]['price'],
act=pp.positions[1].last_sale_price)
)
self.assertEqual(
pp.ending_value,
1100,
"ending value should be price of last trade times number of \
shares in position"
)
self.assertEqual(pp.pnl, 100, "gain of 1 on 100 shares should be 100")
check_perf_period(
pp,
gross_leverage=1.0,
net_leverage=1.0,
long_exposure=1100.0,
longs_count=1,
short_exposure=0.0,
shorts_count=0)
# Validate that the account attributes were updated.
account = pp.as_account()
check_account(account,
settled_cash=0.0,
equity_with_loan=1100.0,
total_positions_value=1100.0,
regt_equity=0.0,
available_funds=0.0,
excess_liquidity=0.0,
cushion=0.0,
leverage=1.0,
net_leverage=1.0,
net_liquidation=1100.0)
def test_short_position(self):
"""verify that the performance period calculates properly for a \
single short-sale transaction"""
trades = factory.create_trade_history(
1,
[10, 10, 10, 11, 10, 9],
[100, 100, 100, 100, 100, 100],
onesec,
self.sim_params
)
trades_1 = trades[:-2]
txn = create_txn(trades[1], 10.0, -100)
pt = perf.PositionTracker()
pp = perf.PerformancePeriod(1000.0)
pp.position_tracker = pt
pt.execute_transaction(txn)
pp.handle_execution(txn)
for trade in trades_1:
pt.update_last_sale(trade)
pp.calculate_performance()
self.assertEqual(
pp.period_cash_flow,
-1 * txn.price * txn.amount,
"capital used should be equal to the opposite of the transaction\
cost of sole txn in test"
)
self.assertEqual(
len(pp.positions),
1,
"should be just one position")
self.assertEqual(
pp.positions[1].sid,
txn.sid,
"position should be in security from the transaction"
)
self.assertEqual(
pp.positions[1].amount,
-100,
"should have a position of -100 shares"
)
self.assertEqual(
pp.positions[1].cost_basis,
txn.price,
"should have a cost basis of 10"
)
self.assertEqual(
pp.positions[1].last_sale_price,
trades_1[-1]['price'],
"last sale should be price of last trade"
)
self.assertEqual(
pp.ending_value,
-1100,
"ending value should be price of last trade times number of \
shares in position"
)
self.assertEqual(pp.pnl, -100, "loss of 1 on a 100-share short should be -100")
# simulate additional trades, and ensure that the position value
# reflects the new price
trades_2 = trades[-2:]
# simulate a rollover to a new period
pp.rollover()
for trade in trades_2:
pt.update_last_sale(trade)
pp.calculate_performance()
self.assertEqual(
pp.period_cash_flow,
0,
"capital used should be zero, there were no transactions in \
performance period"
)
self.assertEqual(
len(pp.positions),
1,
"should be just one position"
)
self.assertEqual(
pp.positions[1].sid,
txn.sid,
"position should be in security from the transaction"
)
self.assertEqual(
pp.positions[1].amount,
-100,
"should have a position of -100 shares"
)
self.assertEqual(
pp.positions[1].cost_basis,
txn.price,
"should have a cost basis of 10"
)
self.assertEqual(
pp.positions[1].last_sale_price,
trades_2[-1].price,
"last sale should be price of last trade"
)
self.assertEqual(
pp.ending_value,
-900,
"ending value should be price of last trade times number of \
shares in position")
self.assertEqual(
pp.pnl,
200,
"drop of 2 on -100 shares should be 200"
)
# now run a performance period encompassing the entire trade sample.
ptTotal = perf.PositionTracker()
ppTotal = perf.PerformancePeriod(1000.0)
ppTotal.position_tracker = ptTotal
for trade in trades_1:
ptTotal.update_last_sale(trade)
ptTotal.execute_transaction(txn)
ppTotal.handle_execution(txn)
for trade in trades_2:
ptTotal.update_last_sale(trade)
ppTotal.calculate_performance()
self.assertEqual(
ppTotal.period_cash_flow,
-1 * txn.price * txn.amount,
"capital used should be equal to the opposite of the transaction \
cost of sole txn in test"
)
self.assertEqual(
len(ppTotal.positions),
1,
"should be just one position"
)
self.assertEqual(
ppTotal.positions[1].sid,
txn.sid,
"position should be in security from the transaction"
)
self.assertEqual(
ppTotal.positions[1].amount,
-100,
"should have a position of -100 shares"
)
self.assertEqual(
ppTotal.positions[1].cost_basis,
txn.price,
"should have a cost basis of 10"
)
self.assertEqual(
ppTotal.positions[1].last_sale_price,
trades_2[-1].price,
"last sale should be price of last trade"
)
self.assertEqual(
ppTotal.ending_value,
-900,
"ending value should be price of last trade times number of \
shares in position")
self.assertEqual(
ppTotal.pnl,
100,
"drop of 1 on -100 shares should be 100"
)
check_perf_period(
pp,
gross_leverage=0.8181,
net_leverage=-0.8181,
long_exposure=0.0,
longs_count=0,
short_exposure=-900.0,
shorts_count=1)
# Validate that the account attributes were updated.
account = ppTotal.as_account()
check_account(account,
settled_cash=2000.0,
equity_with_loan=1100.0,
total_positions_value=-900.0,
regt_equity=2000.0,
available_funds=2000.0,
excess_liquidity=2000.0,
cushion=1.8181,
leverage=0.8181,
net_leverage=-0.8181,
net_liquidation=1100.0)
def test_covering_short(self):
"""verify performance where short is bought and covered, and shares \
trade after cover"""
trades = factory.create_trade_history(
1,
[10, 10, 10, 11, 9, 8, 7, 8, 9, 10],
[100, 100, 100, 100, 100, 100, 100, 100, 100, 100],
onesec,
self.sim_params
)
short_txn = create_txn(
trades[1],
10.0,
-100,
)
cover_txn = create_txn(trades[6], 7.0, 100)
pt = perf.PositionTracker()
pp = perf.PerformancePeriod(1000.0)
pp.position_tracker = pt
pt.execute_transaction(short_txn)
pp.handle_execution(short_txn)
pt.execute_transaction(cover_txn)
pp.handle_execution(cover_txn)
for trade in trades:
pt.update_last_sale(trade)
pp.calculate_performance()
short_txn_cost = short_txn.price * short_txn.amount
cover_txn_cost = cover_txn.price * cover_txn.amount
self.assertEqual(
pp.period_cash_flow,
-1 * short_txn_cost - cover_txn_cost,
"capital used should be equal to the net transaction costs"
)
self.assertEqual(
len(pp.positions),
1,
"should be just one position"
)
self.assertEqual(
pp.positions[1].sid,
short_txn.sid,
"position should be in security from the transaction"
)
self.assertEqual(
pp.positions[1].amount,
0,
"should have a position of -100 shares"
)
self.assertEqual(
pp.positions[1].cost_basis,
0,
"a covered position should have a cost basis of 0"
)
self.assertEqual(
pp.positions[1].last_sale_price,
trades[-1].price,
"last sale should be price of last trade"
)
self.assertEqual(
pp.ending_value,
0,
"ending value should be price of last trade times number of \
shares in position"
)
self.assertEqual(
pp.pnl,
300,
"gain of 1 on 100 shares should be 300"
)
check_perf_period(
pp,
gross_leverage=0.0,
net_leverage=0.0,
long_exposure=0.0,
longs_count=0,
short_exposure=0.0,
shorts_count=0)
account = pp.as_account()
check_account(account,
settled_cash=1300.0,
equity_with_loan=1300.0,
total_positions_value=0.0,
regt_equity=1300.0,
available_funds=1300.0,
excess_liquidity=1300.0,
cushion=1.0,
leverage=0.0,
net_leverage=0.0,
net_liquidation=1300.0)
def test_cost_basis_calc(self):
history_args = (
1,
[10, 11, 11, 12],
[100, 100, 100, 100],
onesec,
self.sim_params
)
trades = factory.create_trade_history(*history_args)
transactions = factory.create_txn_history(*history_args)
pt = perf.PositionTracker()
pp = perf.PerformancePeriod(1000.0)
pp.position_tracker = pt
average_cost = 0
for i, txn in enumerate(transactions):
pt.execute_transaction(txn)
pp.handle_execution(txn)
average_cost = (average_cost * i + txn.price) / (i + 1)
self.assertEqual(pp.positions[1].cost_basis, average_cost)
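# Illustrative note (added, not part of the original test): with fill
# prices [10, 11, 11, 12] the running average above works out to
# 10, 10.5, 32/3 ~= 10.667 and finally 11, which is why the cost-basis
# assertion further down expects 11.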
for trade in trades:
pt.update_last_sale(trade)
pp.calculate_performance()
self.assertEqual(
pp.positions[1].last_sale_price,
trades[-1].price,
"should have a last sale of 12, got {val}".format(
val=pp.positions[1].last_sale_price)
)
self.assertEqual(
pp.positions[1].cost_basis,
11,
"should have a cost basis of 11"
)
self.assertEqual(
pp.pnl,
400
)
down_tick = factory.create_trade(
1,
10.0,
100,
trades[-1].dt + onesec)
sale_txn = create_txn(
down_tick,
10.0,
-100)
pp.rollover()
pt.execute_transaction(sale_txn)
pp.handle_execution(sale_txn)
pt.update_last_sale(down_tick)
pp.calculate_performance()
self.assertEqual(
pp.positions[1].last_sale_price,
10,
"should have a last sale of 10, was {val}".format(
val=pp.positions[1].last_sale_price)
)
self.assertEqual(
pp.positions[1].cost_basis,
11,
"should have a cost basis of 11"
)
self.assertEqual(pp.pnl, -800, "this period goes from +400 to -400")
pt3 = perf.PositionTracker()
pp3 = perf.PerformancePeriod(1000.0)
pp3.position_tracker = pt3
average_cost = 0
for i, txn in enumerate(transactions):
pt3.execute_transaction(txn)
pp3.handle_execution(txn)
average_cost = (average_cost * i + txn.price) / (i + 1)
self.assertEqual(pp3.positions[1].cost_basis, average_cost)
pt3.execute_transaction(sale_txn)
pp3.handle_execution(sale_txn)
trades.append(down_tick)
for trade in trades:
pt3.update_last_sale(trade)
pp3.calculate_performance()
self.assertEqual(
pp3.positions[1].last_sale_price,
10,
"should have a last sale of 10"
)
self.assertEqual(
pp3.positions[1].cost_basis,
11,
"should have a cost basis of 11"
)
self.assertEqual(
pp3.pnl,
-400,
"should be -400 for all trades and transactions in period"
)
def test_cost_basis_calc_close_pos(self):
history_args = (
1,
[10, 9, 11, 8, 9, 12, 13, 14],
[200, -100, -100, 100, -300, 100, 500, 400],
onesec,
self.sim_params
)
cost_bases = [10, 10, 0, 8, 9, 9, 13, 13.5]
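# Rough walk-through (added for clarity, assuming average-cost accounting
# that resets when the position goes flat or flips sides) of how the
# expected cost bases above follow from the fills:
#   +200 @ 10 -> long 200, basis 10
#   -100 @ 9  -> long 100, basis unchanged at 10
#   -100 @ 11 -> flat, basis resets to 0
#   +100 @ 8  -> long 100, basis 8
#   -300 @ 9  -> flips to short 200, basis becomes the flip price 9
#   +100 @ 12 -> short 100, basis unchanged at 9
#   +500 @ 13 -> flips to long 400, basis becomes 13
#   +400 @ 14 -> long 800, basis (400*13 + 400*14) / 800 = 13.5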
trades = factory.create_trade_history(*history_args)
transactions = factory.create_txn_history(*history_args)
pt = perf.PositionTracker()
pp = perf.PerformancePeriod(1000.0)
pp.position_tracker = pt
for txn, cb in zip(transactions, cost_bases):
pt.execute_transaction(txn)
pp.handle_execution(txn)
self.assertEqual(pp.positions[1].cost_basis, cb)
for trade in trades:
pt.update_last_sale(trade)
pp.calculate_performance()
self.assertEqual(pp.positions[1].cost_basis, cost_bases[-1])
class TestPerformanceTracker(unittest.TestCase):
NumDaysToDelete = collections.namedtuple(
'NumDaysToDelete', ('start', 'middle', 'end'))
@parameterized.expand([
("Don't delete any events",
NumDaysToDelete(start=0, middle=0, end=0)),
("Delete first day of events",
NumDaysToDelete(start=1, middle=0, end=0)),
("Delete first two days of events",
NumDaysToDelete(start=2, middle=0, end=0)),
("Delete one day of events from the middle",
NumDaysToDelete(start=0, middle=1, end=0)),
("Delete two events from the middle",
NumDaysToDelete(start=0, middle=2, end=0)),
("Delete last day of events",
NumDaysToDelete(start=0, middle=0, end=1)),
("Delete last two days of events",
NumDaysToDelete(start=0, middle=0, end=2)),
("Delete all but one event.",
NumDaysToDelete(start=2, middle=1, end=2)),
])
def test_tracker(self, parameter_comment, days_to_delete):
"""
@days_to_delete - configures which days in the data set we should
remove, used for ensuring that we still return performance messages
even when there is no data.
"""
# This date range covers Columbus day,
# however Columbus day is not a market holiday
#
# October 2008
# Su Mo Tu We Th Fr Sa
# 1 2 3 4
# 5 6 7 8 9 10 11
# 12 13 14 15 16 17 18
# 19 20 21 22 23 24 25
# 26 27 28 29 30 31
start_dt = datetime(year=2008,
month=10,
day=9,
tzinfo=pytz.utc)
end_dt = datetime(year=2008,
month=10,
day=16,
tzinfo=pytz.utc)
trade_count = 6
sid = 133
price = 10.1
price_list = [price] * trade_count
volume = [100] * trade_count
trade_time_increment = timedelta(days=1)
sim_params = SimulationParameters(
period_start=start_dt,
period_end=end_dt
)
benchmark_events = benchmark_events_in_range(sim_params)
trade_history = factory.create_trade_history(
sid,
price_list,
volume,
trade_time_increment,
sim_params,
source_id="factory1"
)
sid2 = 134
price2 = 12.12
price2_list = [price2] * trade_count
trade_history2 = factory.create_trade_history(
sid2,
price2_list,
volume,
trade_time_increment,
sim_params,
source_id="factory2"
)
# 'middle' start of 3 depends on number of days == 7
middle = 3
# First delete from middle
if days_to_delete.middle:
del trade_history[middle:(middle + days_to_delete.middle)]
del trade_history2[middle:(middle + days_to_delete.middle)]
# Delete start
if days_to_delete.start:
del trade_history[:days_to_delete.start]
del trade_history2[:days_to_delete.start]
# Delete from end
if days_to_delete.end:
del trade_history[-days_to_delete.end:]
del trade_history2[-days_to_delete.end:]
sim_params.first_open = \
sim_params.calculate_first_open()
sim_params.last_close = \
sim_params.calculate_last_close()
sim_params.capital_base = 1000.0
sim_params.frame_index = [
'sid',
'volume',
'dt',
'price',
'changed']
perf_tracker = perf.PerformanceTracker(
sim_params
)
events = date_sorted_sources(trade_history, trade_history2)
events = [event for event in
self.trades_with_txns(events, trade_history[0].dt)]
# Extract events with transactions to use for verification.
txns = [event for event in
events if event.type == zp.DATASOURCE_TYPE.TRANSACTION]
orders = [event for event in
events if event.type == zp.DATASOURCE_TYPE.ORDER]
all_events = date_sorted_sources(events, benchmark_events)
filtered_events = [filt_event for filt_event
in all_events if filt_event.dt <= end_dt]
filtered_events.sort(key=lambda x: x.dt)
grouped_events = itertools.groupby(filtered_events, lambda x: x.dt)
perf_messages = []
for date, group in grouped_events:
for event in group:
if event.type == zp.DATASOURCE_TYPE.TRADE:
perf_tracker.process_trade(event)
elif event.type == zp.DATASOURCE_TYPE.ORDER:
perf_tracker.process_order(event)
elif event.type == zp.DATASOURCE_TYPE.BENCHMARK:
perf_tracker.process_benchmark(event)
elif event.type == zp.DATASOURCE_TYPE.TRANSACTION:
perf_tracker.process_transaction(event)
msg = perf_tracker.handle_market_close_daily()
perf_messages.append(msg)
self.assertEqual(perf_tracker.txn_count, len(txns))
self.assertEqual(perf_tracker.txn_count, len(orders))
positions = perf_tracker.cumulative_performance.positions
if len(txns) == 0:
self.assertNotIn(sid, positions)
else:
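# Note added for clarity: every trade after the first in each of the
# two sids yields one -25-share transaction, so the per-sid position
# size should be roughly len(txns) / 2 * -25.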
expected_size = len(txns) / 2 * -25
cumulative_pos = positions[sid]
self.assertEqual(cumulative_pos.amount, expected_size)
self.assertEqual(len(perf_messages),
sim_params.days_in_period)
check_perf_tracker_serialization(perf_tracker)
def trades_with_txns(self, events, no_txn_dt):
for event in events:
# create a transaction for all but
# first trade in each sid, to simulate None transaction
if event.dt != no_txn_dt:
order = Order(
sid=event.sid,
amount=-25,
dt=event.dt
)
order.source_id = 'MockOrderSource'
yield order
yield event
txn = Transaction(
sid=event.sid,
amount=-25,
dt=event.dt,
price=10.0,
commission=0.50,
order_id=order.id
)
txn.source_id = 'MockTransactionSource'
yield txn
else:
yield event
@with_environment()
def test_minute_tracker(self, env=None):
""" Tests minute performance tracking."""
start_dt = env.exchange_dt_in_utc(datetime(2013, 3, 1, 9, 31))
end_dt = env.exchange_dt_in_utc(datetime(2013, 3, 1, 16, 0))
sim_params = SimulationParameters(
period_start=start_dt,
period_end=end_dt,
emission_rate='minute'
)
tracker = perf.PerformanceTracker(sim_params)
foosid = 1
barsid = 2
env.update_asset_finder(identifiers=[foosid, barsid])
foo_event_1 = factory.create_trade(foosid, 10.0, 20, start_dt)
order_event_1 = Order(sid=foo_event_1.sid,
amount=-25,
dt=foo_event_1.dt)
bar_event_1 = factory.create_trade(barsid, 100.0, 200, start_dt)
txn_event_1 = Transaction(sid=foo_event_1.sid,
amount=-25,
dt=foo_event_1.dt,
price=10.0,
commission=0.50,
order_id=order_event_1.id)
benchmark_event_1 = Event({
'dt': start_dt,
'returns': 0.01,
'type': zp.DATASOURCE_TYPE.BENCHMARK
})
foo_event_2 = factory.create_trade(
foosid, 11.0, 20, start_dt + timedelta(minutes=1))
bar_event_2 = factory.create_trade(
barsid, 11.0, 20, start_dt + timedelta(minutes=1))
benchmark_event_2 = Event({
'dt': start_dt + timedelta(minutes=1),
'returns': 0.02,
'type': zp.DATASOURCE_TYPE.BENCHMARK
})
events = [
foo_event_1,
order_event_1,
benchmark_event_1,
txn_event_1,
bar_event_1,
foo_event_2,
benchmark_event_2,
bar_event_2,
]
grouped_events = itertools.groupby(
events, operator.attrgetter('dt'))
messages = {}
for date, group in grouped_events:
tracker.set_date(date)
for event in group:
if event.type == zp.DATASOURCE_TYPE.TRADE:
tracker.process_trade(event)
elif event.type == zp.DATASOURCE_TYPE.BENCHMARK:
tracker.process_benchmark(event)
elif event.type == zp.DATASOURCE_TYPE.ORDER:
tracker.process_order(event)
elif event.type == zp.DATASOURCE_TYPE.TRANSACTION:
tracker.process_transaction(event)
msg, _ = tracker.handle_minute_close(date)
messages[date] = msg
self.assertEquals(2, len(messages))
msg_1 = messages[foo_event_1.dt]
msg_2 = messages[foo_event_2.dt]
self.assertEquals(1, len(msg_1['minute_perf']['transactions']),
"The first message should contain one "
"transaction.")
# Check that transactions aren't emitted for previous events.
self.assertEquals(0, len(msg_2['minute_perf']['transactions']),
"The second message should have no "
"transactions.")
self.assertEquals(1, len(msg_1['minute_perf']['orders']),
"The first message should contain one orders.")
# Check that orders aren't emitted for previous events.
self.assertEquals(0, len(msg_2['minute_perf']['orders']),
"The second message should have no orders.")
# Ensure that period_close moves through time.
# Also, ensure that the period_closes are the expected dts.
self.assertEquals(foo_event_1.dt,
msg_1['minute_perf']['period_close'])
self.assertEquals(foo_event_2.dt,
msg_2['minute_perf']['period_close'])
# In this test event1 transactions arrive on the first bar.
# This leads to no returns as the price is constant.
# Sharpe ratio cannot be computed and is None.
# In the second bar we can start establishing a sharpe ratio.
self.assertIsNone(msg_1['cumulative_risk_metrics']['sharpe'])
self.assertIsNotNone(msg_2['cumulative_risk_metrics']['sharpe'])
check_perf_tracker_serialization(tracker)
@with_environment()
def test_close_position_event(self, env=None):
env.update_asset_finder(identifiers=[1, 2])
pt = perf.PositionTracker()
dt = pd.Timestamp("1984/03/06 3:00PM")
pos1 = perf.Position(1, amount=np.float64(120.0),
last_sale_date=dt, last_sale_price=3.4)
pos2 = perf.Position(2, amount=np.float64(-100.0),
last_sale_date=dt, last_sale_price=3.4)
pt.update_positions({1: pos1, 2: pos2})
event_type = DATASOURCE_TYPE.CLOSE_POSITION
index = [dt + timedelta(days=1)]
pan = pd.Panel({1: pd.DataFrame({'price': 1, 'volume': 0,
'type': event_type}, index=index),
2: pd.DataFrame({'price': 1, 'volume': 0,
'type': event_type}, index=index),
3: pd.DataFrame({'price': 1, 'volume': 0,
'type': event_type}, index=index)})
source = DataPanelSource(pan)
for i, event in enumerate(source):
txn = pt.maybe_create_close_position_transaction(event)
if event.sid == 1:
# Test owned long
self.assertEqual(-120, txn.amount)
elif event.sid == 2:
# Test owned short
self.assertEqual(100, txn.amount)
elif event.sid == 3:
# Test not-owned SID
self.assertIsNone(txn)
def test_handle_sid_removed_from_universe(self):
# post some trades in the market
sim_params, _, _ = create_random_simulation_parameters()
events = factory.create_trade_history(
1,
[10, 10, 10, 10, 10],
[100, 100, 100, 100, 100],
oneday,
sim_params
)
# Create a tracker and a dividend
perf_tracker = perf.PerformanceTracker(sim_params)
dividend = factory.create_dividend(
1,
10.00,
# declared date, when the algorithm finds out about
# the dividend
events[0].dt,
# ex_date, the date before which the algorithm must hold stock
# to receive the dividend
events[1].dt,
# pay date, when the algorithm receives the dividend.
events[2].dt
)
dividend_frame = pd.DataFrame(
[dividend.to_series(index=zp.DIVIDEND_FIELDS)],
)
perf_tracker.update_dividends(dividend_frame)
# Ensure that the dividend is in the tracker
self.assertIn(1, perf_tracker.dividend_frame['sid'].values)
# Inform the tracker that sid 1 has been removed from the universe
perf_tracker.handle_sid_removed_from_universe(1)
# Ensure that the dividend for sid 1 has been removed from dividend
# frame
self.assertNotIn(1, perf_tracker.dividend_frame['sid'].values)
def test_serialization(self):
start_dt = datetime(year=2008,
month=10,
day=9,
tzinfo=pytz.utc)
end_dt = datetime(year=2008,
month=10,
day=16,
tzinfo=pytz.utc)
sim_params = SimulationParameters(
period_start=start_dt,
period_end=end_dt
)
perf_tracker = perf.PerformanceTracker(
sim_params
)
check_perf_tracker_serialization(perf_tracker)
class TestPosition(unittest.TestCase):
def setUp(self):
pass
def test_serialization(self):
dt = pd.Timestamp("1984/03/06 3:00PM")
pos = perf.Position(10, amount=np.float64(120.0), last_sale_date=dt,
last_sale_price=3.4)
p_string = pickle.dumps(pos)
test = pickle.loads(p_string)
nt.assert_dict_equal(test.__dict__, pos.__dict__)
class TestPositionTracker(unittest.TestCase):
def setUp(self):
pass
def test_empty_positions(self):
"""
make sure all the empty position stats return a numeric 0
Originally this bug was due to np.dot([], []) returning
np.bool_(False)
"""
pt = perf.PositionTracker()
stats = [
'calculate_positions_value',
'_net_exposure',
'_gross_value',
'_gross_exposure',
'_short_value',
'_short_exposure',
'_shorts_count',
'_long_value',
'_long_exposure',
'_longs_count',
]
for name in stats:
meth = getattr(pt, name)
val = meth()
self.assertEquals(val, 0)
self.assertNotIsInstance(val, (bool, np.bool_))
@with_environment()
def test_update_last_sale(self, env=None):
metadata = {1: {'asset_type': 'equity'},
2: {'asset_type': 'future',
'contract_multiplier': 1000}}
asset_finder = AssetFinder()
env.update_asset_finder(
asset_finder=asset_finder,
asset_metadata=metadata)
pt = perf.PositionTracker()
dt = pd.Timestamp("1984/03/06 3:00PM")
pos1 = perf.Position(1, amount=np.float64(100.0),
last_sale_date=dt, last_sale_price=10)
pos2 = perf.Position(2, amount=np.float64(100.0),
last_sale_date=dt, last_sale_price=10)
pt.update_positions({1: pos1, 2: pos2})
event1 = Event({'sid': 1,
'price': 11,
'dt': dt})
event2 = Event({'sid': 2,
'price': 11,
'dt': dt})
# Check cash-adjustment return value
self.assertEqual(0, pt.update_last_sale(event1))
self.assertEqual(100000, pt.update_last_sale(event2))
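# Note on the expected values above (added for clarity): sid 1 is an
# equity, so the price move from 10 to 11 produces no cash adjustment;
# sid 2 is a future with a contract_multiplier of 1000, so the same
# 1-point move on 100 contracts implies 1 * 100 * 1000 = 100000.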
@with_environment()
def test_position_values_and_exposures(self, env=None):
metadata = {1: {'asset_type': 'equity'},
2: {'asset_type': 'equity'},
3: {'asset_type': 'future',
'contract_multiplier': 1000},
4: {'asset_type': 'future',
'contract_multiplier': 1000}}
env.update_asset_finder(asset_metadata=metadata)
pt = perf.PositionTracker()
dt = pd.Timestamp("1984/03/06 3:00PM")
pos1 = perf.Position(1, amount=np.float64(10.0),
last_sale_date=dt, last_sale_price=10)
pos2 = perf.Position(2, amount=np.float64(-20.0),
last_sale_date=dt, last_sale_price=10)
pos3 = perf.Position(3, amount=np.float64(30.0),
last_sale_date=dt, last_sale_price=10)
pos4 = perf.Position(4, amount=np.float64(-40.0),
last_sale_date=dt, last_sale_price=10)
pt.update_positions({1: pos1, 2: pos2, 3: pos3, 4: pos4})
# Test long-only methods
self.assertEqual(100, pt._long_value())
self.assertEqual(100 + 300000, pt._long_exposure())
# Test short-only methods
self.assertEqual(-200, pt._short_value())
self.assertEqual(-200 - 400000, pt._short_exposure())
# Test gross and net values
self.assertEqual(100 + 200, pt._gross_value())
self.assertEqual(100 - 200, pt._net_value())
# Test gross and net exposures
self.assertEqual(100 + 200 + 300000 + 400000, pt._gross_exposure())
self.assertEqual(100 - 200 + 300000 - 400000, pt._net_exposure())
@with_environment()
def test_serialization(self, env=None):
metadata = {1: {'asset_type': 'equity'},
2: {'asset_type': 'future',
'contract_multiplier': 1000}}
env.update_asset_finder(asset_metadata=metadata)
pt = perf.PositionTracker()
dt = pd.Timestamp("1984/03/06 3:00PM")
pos1 = perf.Position(1, amount=np.float64(120.0),
last_sale_date=dt, last_sale_price=3.4)
pos2 = perf.Position(2, amount=np.float64(100.0),
last_sale_date=dt, last_sale_price=3.4)
pt.update_positions({1: pos1, 2: pos2})
p_string = pickle.dumps(pt)
test = pickle.loads(p_string)
nt.assert_dict_equal(test._position_amounts, pt._position_amounts)
nt.assert_dict_equal(test._position_last_sale_prices,
pt._position_last_sale_prices)
nt.assert_count_equal(test.positions.keys(), pt.positions.keys())
for sid in pt.positions:
nt.assert_dict_equal(test.positions[sid].__dict__,
pt.positions[sid].__dict__)
class TestPerformancePeriod(unittest.TestCase):
def setUp(self):
pass
def test_serialization(self):
pt = perf.PositionTracker()
pp = perf.PerformancePeriod(100)
pp.position_tracker = pt
p_string = pickle.dumps(pp)
test = pickle.loads(p_string)
correct = pp.__dict__.copy()
del correct['_position_tracker']
nt.assert_count_equal(test.__dict__.keys(), correct.keys())
equal_keys = list(correct.keys())
equal_keys.remove('_account_store')
equal_keys.remove('_portfolio_store')
for k in equal_keys:
nt.assert_equal(test.__dict__[k], correct[k])
|
morrisonwudi/zipline
|
tests/test_perf_tracking.py
|
Python
|
apache-2.0
| 77,177
|
[
"COLUMBUS"
] |
1238fda0403af318a84524d0fef2c1f439ff06e61308e4a7027425dca69f4f87
|
#!/usr/bin/python
# Python script using the YouTube API to like a video
import httplib2
import os
import sys
from apiclient.discovery import build
from apiclient.errors import HttpError
from oauth2client.client import flow_from_clientsecrets
from oauth2client.file import Storage
from oauth2client.tools import argparser, run_flow
# The CLIENT_SECRETS_FILE variable specifies the name of a file that contains
# the OAuth 2.0 information for this application, including its client_id and
# client_secret. You can acquire an OAuth 2.0 client ID and client secret from
# the {{ Google Cloud Console }} at
# {{ https://cloud.google.com/console }}.
# Please ensure that you have enabled the YouTube Data API for your project.
# For more information about using OAuth2 to access the YouTube Data API, see:
# https://developers.google.com/youtube/v3/guides/authentication
# For more information about the client_secrets.json file format, see:
# https://developers.google.com/api-client-library/python/guide/aaa_client_secrets
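# For reference (an illustrative sketch only, not an official template), a
# downloaded client_secrets.json for an installed application typically
# looks roughly like this:
#
# {
#   "installed": {
#     "client_id": "<your-client-id>.apps.googleusercontent.com",
#     "client_secret": "<your-client-secret>",
#     "auth_uri": "https://accounts.google.com/o/oauth2/auth",
#     "token_uri": "https://accounts.google.com/o/oauth2/token",
#     "redirect_uris": ["http://localhost", "urn:ietf:wg:oauth:2.0:oob"]
#   }
# }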
CLIENT_SECRETS_FILE = "client_secrets.json"
# This variable defines a message to display if the CLIENT_SECRETS_FILE is
# missing.
MISSING_CLIENT_SECRETS_MESSAGE = """
WARNING: Please configure OAuth 2.0
To make this sample run you will need to populate the client_secrets.json file
found at:
%s
with information from the {{ Cloud Console }}
{{ https://cloud.google.com/console }}
For more information about the client_secrets.json file format, please visit:
https://developers.google.com/api-client-library/python/guide/aaa_client_secrets
""" % os.path.abspath(os.path.join(os.path.dirname(__file__),
CLIENT_SECRETS_FILE))
# This OAuth 2.0 access scope allows for full read/write access to the
# authenticated user's account.
YOUTUBE_READ_WRITE_SCOPE = "https://www.googleapis.com/auth/youtube"
YOUTUBE_API_SERVICE_NAME = "youtube"
YOUTUBE_API_VERSION = "v3"
def get_authenticated_service(args):
flow = flow_from_clientsecrets(CLIENT_SECRETS_FILE,
scope=YOUTUBE_READ_WRITE_SCOPE,
message=MISSING_CLIENT_SECRETS_MESSAGE)
storage = Storage("%s-oauth2.json" % sys.argv[0])
credentials = storage.get()
if credentials is None or credentials.invalid:
credentials = run_flow(flow, storage, args)
return build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION,
http=credentials.authorize(httplib2.Http()))
# Add the video rating. This code sets the rating to "like," but you could
# also support an additional option that supports values of "like" and
# "dislike."
def like_video(youtube, video_id):
youtube.videos().rate(
id=video_id,
rating="like"
).execute()
if __name__ == "__main__":
argparser.add_argument("--videoid", default="L-oNKK1CrnU",
help="ID of video to like.")
args = argparser.parse_args()
youtube = get_authenticated_service(args)
try:
like_video(youtube, args.videoid)
except HttpError, e:
print "An HTTP error %d occurred:\n%s" % (e.resp.status, e.content)
else:
print "%s has been liked." % args.videoid
|
dasdhrub95/cmn-198-youtube
|
yt_api_like_video_ex.py
|
Python
|
mit
| 3,049
|
[
"VisIt"
] |
54883ee9e830909232431c71af6fdff4a4076b9e4bda8d9518bcf046807d99e1
|
####################Drug Half-life Prediction ############################
#Author:Mengyuan Zhu, Binghe Wang
#Email: mzhu7@gsu.edu
#Department of Chemistry
#Georgia State University
#Usage: python t_half.py filename
##########################################################################
import numpy
from sklearn.externals import joblib
import pybel
import sys
def run():
inputfile=pybel.readfile(sys.argv[1].split(".")[-1],sys.argv[1])
value=()
for mol in inputfile:
descvalues=mol.calcdesc()
value= value+(descvalues.get('TPSA'),)
value= value+(descvalues.get('HBD'),)
value= value+(descvalues.get('logP'),)
value= value+(descvalues.get('MW'),)
value= value+(descvalues.get('tbonds'),)
value= value+(descvalues.get('nF'),)
value= value+(descvalues.get('bonds'),)
value= value+(descvalues.get('atoms'),)
value= value+(descvalues.get('HBA1'),)
value= value+(descvalues.get('HBA2'),)
value= value+(descvalues.get('sbonds'),)
value= value+(descvalues.get('dbonds'),)
value= value+(descvalues.get('MR'),)
value= value+(descvalues.get('abonds'),)
smarts = pybel.Smarts("[+]")
num=smarts.findall(mol)
value= value+(len(num),)
smarts = pybel.Smarts("[-]")
num=smarts.findall(mol)
value= value+(len(num),)
model=joblib.load('t_half_model/t_half.pkl')
for result in model.predict(value):
return round(result,2)
print run()
|
MengyuanZhu/PK_predict
|
t_half.py
|
Python
|
gpl-2.0
| 1,391
|
[
"Pybel"
] |
859de57bdf68847e081b1fd90543c4f86091be53a53dd2eab6b13523ba0900b8
|
"""Utilities to assist with commerce tasks."""
import json
import logging
from urllib.parse import urlencode, urljoin
import requests
import waffle
from django.conf import settings
from django.contrib.auth import get_user_model
from django.urls import reverse
from django.utils.translation import ugettext as _
from opaque_keys.edx.keys import CourseKey
from common.djangoapps.course_modes.models import CourseMode
from common.djangoapps.student.models import CourseEnrollment # lint-amnesty, pylint: disable=unused-import
from openedx.core.djangoapps.commerce.utils import ecommerce_api_client, is_commerce_service_configured
from openedx.core.djangoapps.site_configuration import helpers as configuration_helpers
from openedx.core.djangoapps.theming import helpers as theming_helpers
from .models import CommerceConfiguration
log = logging.getLogger(__name__)
def is_account_activation_requirement_disabled():
"""
Checks to see if the django-waffle switch for disabling the account activation requirement is active
Returns:
Boolean value representing switch status
"""
switch_name = configuration_helpers.get_value(
'DISABLE_ACCOUNT_ACTIVATION_REQUIREMENT_SWITCH',
settings.DISABLE_ACCOUNT_ACTIVATION_REQUIREMENT_SWITCH
)
return waffle.switch_is_active(switch_name)
class EcommerceService:
""" Helper class for ecommerce service integration. """
def __init__(self):
self.config = CommerceConfiguration.current()
@property
def ecommerce_url_root(self):
""" Retrieve Ecommerce service public url root. """
return configuration_helpers.get_value('ECOMMERCE_PUBLIC_URL_ROOT', settings.ECOMMERCE_PUBLIC_URL_ROOT)
def get_absolute_ecommerce_url(self, ecommerce_page_url):
""" Return the absolute URL to the ecommerce page.
Args:
ecommerce_page_url (str): Relative path to the ecommerce page.
Returns:
Absolute path to the ecommerce page.
"""
return urljoin(self.ecommerce_url_root, ecommerce_page_url)
def get_order_dashboard_url(self):
""" Return the URL to the ecommerce dashboard orders page.
Returns:
String: order dashboard url.
"""
return self.get_absolute_ecommerce_url(CommerceConfiguration.DEFAULT_ORDER_DASHBOARD_URL)
def get_receipt_page_url(self, order_number):
"""
Gets the URL for the Order Receipt page hosted by the ecommerce service.
Args:
order_number (str): Order number.
Returns:
Receipt page for the specified Order.
"""
return self.get_absolute_ecommerce_url(CommerceConfiguration.DEFAULT_RECEIPT_PAGE_URL + order_number)
def is_enabled(self, user):
"""
Determines the availability of the EcommerceService based on user activation and service configuration.
Note: If the user is anonymous we bypass the user activation gate and only look at the service config.
Returns:
Boolean
"""
user_is_active = user.is_active or is_account_activation_requirement_disabled()
allow_user = user_is_active or user.is_anonymous
return allow_user and self.config.checkout_on_ecommerce_service
def payment_page_url(self):
""" Return the URL for the checkout page.
Example:
http://localhost:8002/basket/add/
"""
return self.get_absolute_ecommerce_url(self.config.basket_checkout_page)
def get_checkout_page_url(self, *skus, **kwargs):
""" Construct the URL to the ecommerce checkout page and include products.
Args:
skus (list): List of SKUs associated with products to be added to basket
program_uuid (string): The UUID of the program, if applicable
Returns:
Absolute path to the ecommerce checkout page showing basket that contains specified products.
Example:
http://localhost:8002/basket/add/?sku=5H3HG5&sku=57FHHD
http://localhost:8002/basket/add/?sku=5H3HG5&sku=57FHHD&bundle=3bdf1dd1-49be-4a15-9145-38901f578c5a
"""
program_uuid = kwargs.get('program_uuid')
enterprise_catalog_uuid = kwargs.get('catalog')
query_params = {'sku': skus}
if enterprise_catalog_uuid:
query_params.update({'catalog': enterprise_catalog_uuid})
url = '{checkout_page_path}?{query_params}'.format(
checkout_page_path=self.get_absolute_ecommerce_url(self.config.basket_checkout_page),
query_params=urlencode(query_params, doseq=True),
)
if program_uuid:
url = '{url}&bundle={program_uuid}'.format(
url=url,
program_uuid=program_uuid
)
return url
def upgrade_url(self, user, course_key):
"""
Returns the URL for the user to upgrade, or None if not applicable.
"""
verified_mode = CourseMode.verified_mode_for_course(course_key)
if verified_mode:
if self.is_enabled(user):
return self.get_checkout_page_url(verified_mode.sku)
else:
return reverse('dashboard')
return None
def refund_entitlement(course_entitlement):
"""
Attempt a refund of a course entitlement. Verify the user before calling this refund method.
Returns:
bool: True if the Refund is successfully processed.
"""
user_model = get_user_model()
enrollee = course_entitlement.user
entitlement_uuid = str(course_entitlement.uuid)
if not is_commerce_service_configured():
log.error(
'Ecommerce service is not configured, cannot refund for user [%s], course entitlement [%s].',
enrollee.id,
entitlement_uuid
)
return False
service_user = user_model.objects.get(username=settings.ECOMMERCE_SERVICE_WORKER_USERNAME)
api_client = ecommerce_api_client(service_user)
log.info(
'Attempting to create a refund for user [%s], course entitlement [%s]...',
enrollee.id,
entitlement_uuid
)
try:
refund_ids = api_client.refunds.post(
{
'order_number': course_entitlement.order_number,
'username': enrollee.username,
'entitlement_uuid': entitlement_uuid,
}
)
except Exception as exc: # pylint: disable=broad-except
# Catch any possible exceptions from the Ecommerce service to ensure we fail gracefully
log.exception(
"Unexpected exception while attempting to initiate refund for user [%s], "
"course entitlement [%s] message: [%s]",
enrollee.id,
course_entitlement.uuid,
str(exc)
)
return False
if refund_ids:
log.info(
'Refund successfully opened for user [%s], course entitlement [%s]: %r',
enrollee.id,
entitlement_uuid,
refund_ids,
)
return _process_refund(
refund_ids=refund_ids,
api_client=api_client,
mode=course_entitlement.mode,
user=enrollee,
always_notify=True,
)
else:
log.warning('No refund opened for user [%s], course entitlement [%s]', enrollee.id, entitlement_uuid)
return False
def refund_seat(course_enrollment, change_mode=False):
"""
Attempt to initiate a refund for any orders associated with the seat being unenrolled,
using the commerce service.
Arguments:
course_enrollment (CourseEnrollment): a student enrollment
change_mode (Boolean): change the course mode to free mode or not
Returns:
A list of the external service's IDs for any refunds that were initiated
(may be empty).
Raises:
exceptions.SlumberBaseException: for any unhandled HTTP error during communication with the E-Commerce Service.
exceptions.Timeout: if the attempt to reach the commerce service timed out.
"""
User = get_user_model() # pylint:disable=invalid-name
course_key_str = str(course_enrollment.course_id)
enrollee = course_enrollment.user
service_user = User.objects.get(username=settings.ECOMMERCE_SERVICE_WORKER_USERNAME)
api_client = ecommerce_api_client(service_user)
log.info('Attempting to create a refund for user [%s], course [%s]...', enrollee.id, course_key_str)
refund_ids = api_client.refunds.post({'course_id': course_key_str, 'username': enrollee.username})
if refund_ids:
log.info('Refund successfully opened for user [%s], course [%s]: %r', enrollee.id, course_key_str, refund_ids)
_process_refund(
refund_ids=refund_ids,
api_client=api_client,
mode=course_enrollment.mode,
user=enrollee,
)
if change_mode and CourseMode.can_auto_enroll(course_id=CourseKey.from_string(course_key_str)):
course_enrollment.update_enrollment(mode=CourseMode.auto_enroll_mode(course_id=course_key_str),
is_active=False, skip_refund=True)
course_enrollment.save()
else:
log.info('No refund opened for user [%s], course [%s]', enrollee.id, course_key_str)
return refund_ids
def _process_refund(refund_ids, api_client, mode, user, always_notify=False):
"""
Helper method to process a refund for a given course_product. This method assumes that the User has already
been unenrolled.
Arguments:
refund_ids: List of refund ids to be processed
api_client: The API Client used in the processing of refunds
mode: The mode that the refund should be processed for
user: The user that the refund is being processed for
always_notify (bool): This will enable always notifying support with Zendesk tickets when
an approval is required
Returns:
bool: True if the refund process was successful, False if there are any Errors that are not handled
"""
config = CommerceConfiguration.current()
if config.enable_automatic_refund_approval:
refunds_requiring_approval = []
for refund_id in refund_ids:
try:
# NOTE: The following assumes that the user has already been unenrolled.
# We are then able to approve payment. Additionally, this ensures we don't tie up an
# additional web worker when the E-Commerce Service tries to unenroll the learner.
api_client.refunds(refund_id).process.put({'action': 'approve_payment_only'})
log.info('Refund [%d] successfully approved.', refund_id)
except: # pylint: disable=bare-except
# Push the refund to Support to process
log.exception('Failed to automatically approve refund [%d]!', refund_id)
refunds_requiring_approval.append(refund_id)
else:
refunds_requiring_approval = refund_ids
if refunds_requiring_approval:
# XCOM-371: this is a temporary measure to suppress refund-related email
# notifications to students and support for free enrollments. This
# condition should be removed when the CourseEnrollment.refundable() logic
# is updated to be more correct, or when we implement better handling (and
# notifications) in Otto for handling reversal of $0 transactions.
if mode != 'verified' and not always_notify:
# 'verified' is the only enrollment mode that should presently
# result in opening a refund request.
log.info(
'Skipping refund support notification for non-verified mode for user [%s], mode: [%s]',
user.id,
mode,
)
else:
try:
return _send_refund_notification(user, refunds_requiring_approval)
except: # pylint: disable=bare-except
# Unable to send notification to Support, do not break as this method is used by Signals
log.warning('Could not send support notification for refund.', exc_info=True)
return False
return True
def _send_refund_notification(user, refund_ids):
"""
Notify the support team of the refund request.
Returns:
bool: True if we are able to send the notification. In this case that means we were able to create
a ZenDesk ticket
"""
tags = ['auto_refund']
if theming_helpers.is_request_in_themed_site():
# this is not presently supported with the external service.
raise NotImplementedError("Unable to send refund processing emails to support teams.")
# Build the information for the ZenDesk ticket
student = user
subject = _("[Refund] User-Requested Refund")
body = _generate_refund_notification_body(student, refund_ids)
requester_name = student.profile.name or student.username
return create_zendesk_ticket(requester_name, student.email, subject, body, tags)
def _generate_refund_notification_body(student, refund_ids):
""" Returns a refund notification message body. """
msg = _(
'A refund request has been initiated for {username} ({email}). '
'To process this request, please visit the link(s) below.'
).format(username=student.username, email=student.email)
ecommerce_url_root = configuration_helpers.get_value(
'ECOMMERCE_PUBLIC_URL_ROOT', settings.ECOMMERCE_PUBLIC_URL_ROOT,
)
refund_urls = [urljoin(ecommerce_url_root, f'/dashboard/refunds/{refund_id}/')
for refund_id in refund_ids]
# emails contained in this message could contain unicode characters so encode as such
return '{msg}\n\n{urls}'.format(msg=msg, urls='\n'.join(refund_urls))
def create_zendesk_ticket(requester_name, requester_email, subject, body, tags=None):
"""
Create a Zendesk ticket via API.
Returns:
bool: False if we are unable to create the ticket for any reason
"""
if not (settings.ZENDESK_URL and settings.ZENDESK_USER and settings.ZENDESK_API_KEY):
log.error('Zendesk is not configured. Cannot create a ticket.')
return False
# Copy the tags to avoid modifying the original list.
tags = set(tags or [])
tags.add('LMS')
tags = list(tags)
data = {
'ticket': {
'requester': {
'name': requester_name,
'email': str(requester_email)
},
'subject': subject,
'comment': {'body': body},
'tags': tags
}
}
# Encode the data to create a JSON payload
payload = json.dumps(data)
# Set the request parameters
url = urljoin(settings.ZENDESK_URL, '/api/v2/tickets.json')
user = f'{settings.ZENDESK_USER}/token'
pwd = settings.ZENDESK_API_KEY
headers = {'content-type': 'application/json'}
try:
response = requests.post(url, data=payload, auth=(user, pwd), headers=headers)
# Check for HTTP codes other than 201 (Created)
if response.status_code != 201:
log.error('Failed to create ticket. Status: [%d], Body: [%s]', response.status_code, response.content)
return False
else:
log.debug('Successfully created ticket.')
except Exception: # pylint: disable=broad-except
log.exception('Failed to create ticket.')
return False
return True
|
eduNEXT/edunext-platform
|
lms/djangoapps/commerce/utils.py
|
Python
|
agpl-3.0
| 15,601
|
[
"VisIt"
] |
ba883a1514a312d2f83dc808a25adccb0b45edb98a971c231adaf69bcb3555a1
|
import hashlib
import json
from sys import version_info
IS_PY3 = version_info[0] >= 3
if IS_PY3:
from urllib.parse import quote
else:
from urllib import quote as _quote
def quote(x):
if isinstance(x, unicode):
x = x.encode('utf8')
return _quote(x)
import requests
from fds.auth.common import Common
from fds.auth.signature.signer import Signer
from fds.fds_client_configuration import FDSClientConfiguration
from fds.fds_request import FDSRequest
from fds.galaxy_fds_client_exception import GalaxyFDSClientException
from fds.model.access_control_policy import AccessControlPolicy
from fds.model.fds_bucket import FDSBucket
from fds.model.fds_object import FDSObject
from fds.model.fds_object_listing import FDSObjectListing
from fds.model.fds_object_metadata import FDSObjectMetadata
from fds.model.fds_object_summary import FDSObjectSummary
from fds.model.permission import AccessControlList, UserGroups, Permission, \
GrantType
from fds.model.permission import Grant
from fds.model.permission import Grantee
from fds.model.permission import Owner
from fds.model.put_object_result import PutObjectResult
from fds.model.subresource import SubResource
from fds.model.init_multipart_upload_result import InitMultipartUploadResult
from fds.model.upload_part_result import UploadPartResult
from fds.model.fds_lifecycle import FDSLifecycleConfig, FDSLifecycleRule
from fds.model.fds_cors import FDSCORSConfig, FDSCORSRule
from fds.model.copy_object_result import CopyObjectResult
from fds.model.timestamp_anti_stealing_link_config import TimestampAntiStealingLinkConfig
from fds.model.list_domain_mappings_result import ListDomainMappingsResult
from fds.model.fds_access_log_config import AccessLogConfig
from fds.model.fds_storage_class import FDSStorageClass
import os
import sys
from .utils import uri_to_bucket_and_object, to_json_object
import logging
from fds.model.upload_part_result_list import UploadPartResultList
from io import BytesIO
from io import IOBase
from io import StringIO
from time import sleep
class GalaxyFDSClient(object):
'''
Client for Galaxy FDS Service.
'''
def __init__(self, access_key=None, access_secret=None, config=None):
'''
:param access_key: The app access key
:param access_secret: The app access secret
:param config: The FDS service's config
'''
self._delimiter = "/"
if access_key is None or access_secret is None:
self._access_key = self.load_access_key()
self._secret_key = self.load_secret_key()
else:
self._access_key = access_key
self._secret_key = access_secret
self._auth = Signer(self._access_key, self._secret_key)
if config is None:
config = FDSClientConfiguration()
config.set_endpoint(self.load_endpoint())
self._config = config
self._request = FDSRequest(config.timeout, config.max_retries)
def load_endpoint(self):
endpoint = None
if endpoint is None and "XIAOMI_FDS_ENDPOINT" in os.environ:
endpoint = os.environ["XIAOMI_FDS_ENDPOINT"]
if endpoint is None and "FDS_ENDPOINT" in os.environ:
endpoint = os.environ["FDS_ENDPOINT"]
if endpoint is None:
endpoint = self.load_config("xiaomi_fds_endpoint")
if endpoint is not None and len(endpoint.strip()) == 0:
logging.warning(
"endpoint is set to empty, please check ${XIAOMI_FDS_ENDPOINT} or ${FDS_ENDPOINT} in environ variables, or \"xiaomi_fds_endpoint\" in ~/.config/xiaomi/config")
return endpoint
def load_access_key(self):
access_key = None
if access_key is None and "XIAOMI_ACCESS_KEY_ID" in os.environ:
access_key = os.environ["XIAOMI_ACCESS_KEY_ID"]
if access_key is None and "XIAOMI_ACCESS_KEY" in os.environ:
access_key = os.environ["XIAOMI_ACCESS_KEY"]
if access_key is None:
access_key = self.load_config("xiaomi_access_key_id")
if access_key is not None and len(access_key.strip()) == 0:
logging.warning(
"access_key is set to empty, please check ${XIAOMI_ACCESS_KEY_ID} or ${XIAOMI_ACCESS_KEY} in environ variables, or \"xiaomi_access_key_id\" in ~/.config/xiaomi/config")
return access_key
def load_secret_key(self):
secret_key = None
if secret_key is None and "XIAOMI_SECRET_ACCESS_KEY" in os.environ:
secret_key = os.environ["XIAOMI_SECRET_ACCESS_KEY"]
if secret_key is None and "XIAOMI_SECRET_KEY" in os.environ:
secret_key = os.environ["XIAOMI_SECRET_KEY"]
if secret_key is None:
secret_key = self.load_config("xiaomi_secret_access_key")
if secret_key is not None and len(secret_key.strip()) == 0:
logging.warning(
"secret_key is set to empty, please check ${XIAOMI_SECRET_ACCESS_KEY} or ${XIAOMI_SECRET_KEY} in environ variables, or \"xiaomi_secret_access_key\" in ~/.config/xiaomi/config")
return secret_key
def load_config(self, config_key):
try:
config_filename = os.path.join(os.path.expanduser('~'), ".config/xiaomi/config")
if os.path.exists(config_filename):
with open(config_filename) as f:
data = to_json_object(f.read())
return data[config_key]
except:
pass
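# Illustrative example (added, not shipped with the SDK) of what
# ~/.config/xiaomi/config is expected to contain, based on the keys read
# by load_endpoint/load_access_key/load_secret_key above:
#
# {
#   "xiaomi_fds_endpoint": "<your-fds-endpoint>",
#   "xiaomi_access_key_id": "<your-access-key>",
#   "xiaomi_secret_access_key": "<your-secret-key>"
# }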
@property
def delimiter(self):
return self._delimiter
@delimiter.setter
def delimiter(self, delimiter):
self._delimiter = delimiter
def does_bucket_exist(self, bucket_name):
'''
Check the existence of a specified bucket.
:param bucket_name: The bucket name of the bucket to check
:return: True if the bucket exists, otherwise False
'''
uri = '%s%s' % (self._config.get_base_uri(), quote(bucket_name))
response = self._request.head(uri, auth=self._auth)
if response.status_code == requests.codes.ok:
return True
elif response.status_code == requests.codes.not_found:
return False
else:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Check bucket existence failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def list_buckets(self):
'''
List all the buckets of the current developer.
:return: A list of FDSBucket which contains name and owner of the bucket.
'''
uri = self._config.get_base_uri()
response = self._request.get(uri, auth=self._auth)
if response.status_code != requests.codes.ok:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'List buckets failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
elif response.content:
buckets_list = []
json_response = to_json_object(response.content)
buckets = json_response['buckets']
owner = Owner().from_json(json_response['owner'])
for bucket in buckets:
buckets_list.append(FDSBucket(bucket['name'], owner))
return buckets_list
else:
return list()
def list_authorized_buckets(self):
'''
List all the authorized buckets of the current developer.
:return: A list of FDSBucket which contains name and owner of the bucket.
'''
uri = self._config.get_base_uri()
response = self._request.get(uri, auth=self._auth, params={"authorizedBuckets": "true"})
if response.status_code != requests.codes.ok:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'List buckets failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
elif response.content:
buckets_list = []
json_response = to_json_object(response.content)
buckets = json_response['buckets']
for bucket in buckets:
buckets_list.append(FDSBucket(bucket['name'], ''))
return buckets_list
else:
return list()
def create_bucket(self, bucket_name):
'''
Create a bucket with the specified name.
:param bucket_name: The name of the bucket to create
'''
uri = '%s%s' % (self._config.get_base_uri(), quote(bucket_name))
response = self._request.put(uri, auth=self._auth)
if response.status_code != requests.codes.ok:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Create bucket failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def delete_bucket(self, bucket_name):
'''
Delete a bucket of a specified name.
:param bucket_name: The name of the bucket to delete
'''
uri = '%s%s' % (self._config.get_base_uri(), quote(bucket_name))
response = self._request.delete(uri, auth=self._auth)
if (response.status_code != requests.codes.ok and
response.status_code != requests.codes.not_found):
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Delete bucket failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def list_objects(self, bucket_name, prefix='', delimiter=None, max_keys=None, is_archive=False):
'''
List all objects in a specified bucket with prefix. If the number of objects
in the bucket is larger than a threshold, you would get a FDSObjectListing
contains no FDSObjects. In this scenario, you should call
list_next_batch_of_objects with the returned value
:param bucket_name: The name of the bucket to whom the object is put
:param prefix: The prefix of the object to list
:param delimiter: The delimiter used in listing, using '/' if 'None' given
:param is_archive: List archives if true
:return: FDSObjectListing contains FDSObject list and other metadata
'''
if delimiter is None:
delimiter = self._delimiter
uri = '%s%s' % (self._config.get_base_uri(), quote(bucket_name))
params = {
'prefix': prefix,
'delimiter': delimiter
}
if max_keys is not None:
params["maxKeys"] = str(max_keys)
if is_archive:
params["archive"] = "true"
response = self._request.get(uri, auth=self._auth, params=params)
if response.status_code == requests.codes.ok:
objects_list = FDSObjectListing(to_json_object(response.content))
return objects_list
else:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'List objects under bucket %s with prefix %s failed, ' \
'status=%s, reason=%s%s' % \
(bucket_name, prefix, response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def list_trash_objects(self, prefix='', delimiter=None, max_keys=None):
'''
Compared with list_objects, it returns a list of objects in the trash.
:param prefix: The prefix of bucket_name/object_name.
:param delimiter: The delimiter used in listing, using '/' if 'None' given.
:return: FDSObjectListing contains a list of objects in the trash.
'''
return self.list_objects("trash", prefix, delimiter, max_keys=max_keys);
def list_next_batch_of_objects(self, previous, is_archive=False):
'''
List objects in a iterative manner
:param previous: The FDSObjectListing returned by previous call or list_objects
:return: FDSObjectListing contains FDSObject list and other metadata, 'None'
if all objects returned by previous calls
'''
if not previous.is_truncated:
return None
bucket_name = previous.bucket_name
prefix = previous.prefix
delimiter = previous.delimiter
marker = previous.next_marker
uri = "%s%s" % (self._config.get_base_uri(), quote(bucket_name))
params = {
'prefix': previous.prefix,
'delimiter': previous.delimiter,
'marker': previous.next_marker,
'maxKeys': previous.max_keys
}
if is_archive:
params['archive'] = 'true'
response = self._request.get(uri, auth=self._auth, params=params)
if response.status_code == requests.codes.ok:
objects_list = FDSObjectListing(to_json_object(response.content))
return objects_list
else:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'List next batch of objects under bucket %s with prefix %s ' \
'and marker %s failed, status=%s, reason=%s%s' % \
(bucket_name, prefix, marker, response.status_code, response.content,
headers)
raise GalaxyFDSClientException(message)
def put_object_with_uri(self, uri, data, metadata=None):
'''
Put the object with the uri.
:param uri: The uri of th bucket and object
:param data: The data to put, bytes or a file like object
:param metadata: The metadata of the object
:return: The result of putting action server returns
'''
bucket_name, object_name = uri_to_bucket_and_object(uri)
self.put_object(bucket_name, object_name, data, metadata)
def put_object(self, bucket_name, object_name, data, metadata=None, is_archive=False):
'''
Put the object to a specified bucket. If a object with the same name already
existed, it will be overwritten.
:param bucket_name: The name of the bucket to whom the object is put
:param object_name: The name of the object to put
:param data: The data to put, bytes or a file like object
:param metadata: The metadata of the object
:param is_archive: Put as archived object if true
:return: The result of putting action server returns
'''
part_size = self._config.get_part_size()
threshold_size = self._config.get_threshold_size()
inputstream = None
if isinstance(data, bytes):
inputstream = BytesIO(data)
elif isinstance(data, str):
inputstream = StringIO(data)
elif not IS_PY3 and isinstance(data, file):
inputstream = data
elif (isinstance(data, IOBase)):
if data.seekable():
inputstream = data
else:
buf = data.readlines()
if len(buf) > 0:
if isinstance(buf[0], str):
inputstream = StringIO(''.join(buf))
else:
inputstream = BytesIO(b''.join(buf))
else:
inputstream = StringIO('')
else:
raise GalaxyFDSClientException("Cannot identify data type")
pos = inputstream.tell()
inputstream.seek(0, 2)
filen = inputstream.tell()
inputstream.seek(pos)  # restore the stream to its original position
if filen < threshold_size:
return self.put_object_directly(bucket_name, object_name, inputstream, metadata, is_archive=is_archive)
else:
content = inputstream.read(part_size)
upload_token = self.init_multipart_upload(bucket_name, object_name, metadata, is_archive=is_archive)
upload_list = []
part_number = 1
while (content):
upload_result = None
upload_success_flag = False
last_exception = None
for i in range(3):
try:
upload_result = self.upload_part(bucket_name, object_name, upload_token.upload_id,
part_number, content)
upload_list.append(upload_result)
upload_success_flag = True
break
except Exception as e:
last_exception = e
logging.warning("upload part %d failed, retry after %d seconds" % (part_number, 3))
sleep(3)
if not upload_success_flag:
logging.error("upload failed, bucket: %s, object: %s, upload part: %d" % (bucket_name, object_name, part_number))
raise last_exception
part_number = part_number + 1
content = inputstream.read(part_size)
upload_part_result = UploadPartResultList({"uploadPartResultList": upload_list})
return self.complete_multipart_upload(bucket_name=bucket_name, object_name=object_name,
upload_id=upload_token.upload_id,
metadata=metadata,
upload_part_result_list=json.dumps(upload_part_result))
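# Hedged usage sketch (added for illustration; bucket and object names are
# made up): small payloads go through put_object_directly, while larger
# ones are split into multipart uploads automatically by put_object.
#
#   client = GalaxyFDSClient(access_key, access_secret)
#   client.put_object('my-bucket', 'path/to/key', b'hello fds')
#   with open('big_file.bin', 'rb') as f:
#       client.put_object('my-bucket', 'path/to/big_key', f)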
def put_object_directly(self, bucket_name, object_name, data, metadata=None, is_archive=False):
'''
Put the object to a specified bucket. If a object with the same name already
existed, it will be overwritten.
:param bucket_name: The name of the bucket to whom the object is put
:param object_name: The name of the object to put
:param data: The data to put, bytes or a file like object
:param metadata: The metadata of the object
:param is_archive: Put as archived object if true
:return: The result of putting action server returns
'''
uri = '%s%s/%s' % (self._config.get_upload_base_uri(), quote(bucket_name), quote(object_name))
if metadata is None:
metadata = FDSObjectMetadata()
if is_archive:
metadata.add_header(Common.STORAGE_CLASS, FDSStorageClass.Archive.value)
inputstream = None
if isinstance(data, bytes):
inputstream = BytesIO(data)
elif isinstance(data, str):
inputstream = StringIO(data)
elif not IS_PY3 and isinstance(data, file):
inputstream = data
elif (isinstance(data, IOBase)):
if data.seekable():
inputstream = data
else:
buf = data.readlines()
if len(buf) > 0:
if isinstance(buf[0], str):
inputstream = StringIO(''.join(buf))
else:
inputstream = BytesIO(b''.join(buf))
else:
inputstream = StringIO('')
else:
raise GalaxyFDSClientException("Cannot identify data type")
if self._config.enable_md5_calculate:
digest = hashlib.md5()
pos = inputstream.tell()
content = inputstream.read()
if IS_PY3 and isinstance(content, str):
content = content.encode(encoding="UTF-8")
digest.update(content)
inputstream.seek(pos)  # rewind to where the stream was before hashing
metadata.add_header(Common.CONTENT_MD5, digest.hexdigest())
response = None
if IS_PY3 and isinstance(inputstream, StringIO):
response = self._request.put(uri, data=inputstream.read(), auth=self._auth,
headers=metadata.metadata)
else:
response = self._request.put(uri, data=inputstream, auth=self._auth,
headers=metadata.metadata)
if response.status_code == requests.codes.ok:
return PutObjectResult(to_json_object(response.content))
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Put object failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
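# A minimal usage sketch for put_object_directly (hypothetical; assumes `client` is an
# already-configured instance of this FDS client and the bucket already exists):
#
#   result = client.put_object_directly('my-bucket', 'hello.txt', b'hello world')
#   result = client.put_object_directly('my-bucket', 'backup.bin', b'raw bytes', is_archive=True)
#
# As the type dispatch above shows, bytes, str, and seekable file-like objects are all accepted.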
def post_object(self, bucket_name, data, metadata=None):
'''
Post the object to a specified bucket. The object name will be generated
by the server uniquely.
:param bucket_name: The name of the bucket to which the object is put
:param data: The data to put, bytes or a file like object
:param metadata: The metadata of the object
:return: The result of the post action returned by the server
'''
uri = '%s%s/' % (self._config.get_upload_base_uri(), quote(bucket_name))
if metadata is None:
metadata = FDSObjectMetadata()
if self._config.enable_md5_calculate:
digest = hashlib.md5()
digest.update(data)
metadata.add_header(Common.CONTENT_MD5, digest.hexdigest())
response = self._request.post(uri, data=data, auth=self._auth,
headers=metadata.metadata)
if response.status_code == requests.codes.ok:
return PutObjectResult(to_json_object(response.content))
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Post object failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def get_object_with_uri(self, uri, position=0, size=4096):
'''
Get a specified object from fds uri.
:param uri: The uri of the bucket and object
:param position: The start index of object to get
:param size: The maximum size of each piece when return streaming is on
:return: The FDS object
'''
bucket_name, object_name = uri_to_bucket_and_object(uri)
return self.get_object(bucket_name, object_name, position, size)
def get_object(self, bucket_name, object_name, position=0, size=4096, stream=None,
version_id=None, is_archive=False):
'''
Get a specified object from a bucket.
:param bucket_name: The name of the bucket from which to get the object
:param object_name: The name of the object to get
:param position: The start index of object to get
:param size: The maximum size of each piece when return streaming is on
:param stream: Set True to enable streaming; otherwise the whole object content is read into memory
:param version_id: The version id of the object to get
:param is_archive: Get archived object if true
:return: The FDS object
'''
if position < 0:
raise GalaxyFDSClientException("Seek position should be no less than 0")
uri = '%s%s/%s' % (self._config.get_download_base_uri(), quote(bucket_name), quote(object_name))
req_params = dict()
if version_id:
req_params["versionId"] = version_id
if is_archive:
req_params["archive"] = "true"
if position > 0:
header = {Common.RANGE: 'bytes=%d-' % position}
response = self._request.get(uri, auth=self._auth, headers=header, stream=stream,
params=req_params)
else:
response = self._request.get(uri, auth=self._auth, stream=stream, params=req_params)
if response.status_code == requests.codes.ok or \
response.status_code == requests.codes.partial:
obj = FDSObject()
obj.stream = response.iter_content(chunk_size=size)
summary = FDSObjectSummary()
summary.bucket_name = bucket_name
summary.object_name = object_name
summary.size = int(response.headers['content-length'])
obj.summary = summary
obj.metadata = self._parse_object_metadata_from_headers(response.headers)
return obj
else:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Get object failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def download_object_with_uri(self, uri, data_file, offset=0, length=-1):
bucket_name, object_name = uri_to_bucket_and_object(uri)
self.download_object(bucket_name, object_name, data_file, offset, length)
def download_object(self, bucket_name, object_name, data_file, offset=0, length=-1):
fds_object = self.get_object(bucket_name=bucket_name,
object_name=object_name,
position=offset)
length_left = length
if length_left == -1:
length_left = sys.maxsize if IS_PY3 else sys.maxint
try:
if data_file:
with open(data_file, "wb") as f:
for chunk in fds_object.stream:
l = min(length_left, len(chunk))
f.write(chunk[0:l])
length_left -= l
if length_left <= 0:
break
else:
for chunk in fds_object.stream:
l = min(length_left, len(chunk))
if IS_PY3:
sys.stdout.buffer.write(chunk[0:l])
else:
sys.stdout.write(chunk[0:l])
length_left -= l
if length_left <= 0:
break
sys.stdout.flush()
finally:
fds_object.stream.close()
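# A hedged usage sketch for get_object / download_object (assumes `client` is a configured
# FDS client instance; the bucket, object, and file names are illustrative):
#
#   # Stream an object from byte offset 100 onwards into a local file:
#   client.download_object('my-bucket', 'big-file.bin', 'local-copy.bin', offset=100)
#
#   # Or fetch it manually and consume the chunk iterator:
#   obj = client.get_object('my-bucket', 'big-file.bin', stream=True)
#   for chunk in obj.stream:
#       pass  # process chunk bytes here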
def does_object_exists(self, bucket_name, object_name):
'''
Check the existence of a specified object.
:param bucket_name: The name of the bucket
:param object_name: The name of the object to check
:return: True if the object exists, otherwise, False
'''
uri = '%s%s/%s' % (self._config.get_base_uri(), quote(bucket_name), quote(object_name))
response = self._request.head(uri, auth=self._auth)
if response.status_code == requests.codes.ok:
return True
elif response.status_code == requests.codes.not_found:
return False
else:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Check object existence failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def delete_object(self, bucket_name, object_name, **kwargs):
'''
Delete specified object.
:param bucket_name: The name of the bucket
:param object_name: The name of the object
'''
uri = '%s%s/%s' % (self._config.get_base_uri(), quote(bucket_name), quote(object_name))
params = {}
if "is_archive" in kwargs and kwargs["is_archive"] is True:
params["archive"] = "true"
elif "enable_trash" in kwargs and kwargs["enable_trash"] is False:
params["enableTrash"] = "false"
response = self._request.delete(uri, auth=self._auth, params=params)
if response.status_code != requests.codes.ok:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Delete object failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
if "object-version-id" in response.headers:
return response.headers["object-version-id"]
return None
def delete_objects(self, bucket_name, object_names):
'''
Delete specified objects in the bucket
:param bucket_name:
:param object_names:
:return:
'''
uri = "%s%s" % (self._config.get_base_uri(), quote(bucket_name))
response = self._request.put(uri, auth=self._auth, data=json.dumps(object_names), params={"deleteObjects": "true"})
if response.status_code != requests.codes.ok:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Delete objects failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
else:
failed_list = to_json_object(response.content)
return failed_list
def restore_object(self, bucket_name, object_name):
'''
Restore a specified object from trash.
:param bucket_name: The name of the bucket
:param object_name: The name of the object
'''
uri = '%s%s/%s' % (self._config.get_base_uri(), quote(bucket_name), quote(object_name))
response = self._request.put(uri, auth=self._auth, params={"restore": 'true'})
if response.status_code != requests.codes.ok:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Restore object failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def rename_object(self, bucket_name, src_object_name, dst_object_name):
'''
Rename a specified object to a new name.
:param bucket_name: The name of the bucket
:param src_object_name: The original name of the object
:param dst_object_name: The target name of the object to rename to
'''
uri = '%s%s/%s' % (self._config.get_base_uri(), quote(bucket_name), quote(src_object_name))
response = self._request.put(uri, auth=self._auth, params={"renameTo": dst_object_name})
if response.status_code != requests.codes.ok:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Rename object failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def put_domain_mapping(self, bucket_name, domain_name, index_name='index.html'):
'''
Put bucket domain mapping
:param bucket_name: The name of the bucket
:param domain_name: The name of domain to put
'''
uri = '%s%s' % (self._config.get_base_uri(), quote(bucket_name))
response = self._request.put(uri, auth=self._auth, params={'domain': domain_name, 'index': index_name})
if response.status_code != requests.codes.ok:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Put bucket domain mapping failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def delete_domain_mapping(self, bucket_name, domain_name):
'''
Delete bucket domain mapping
:param bucket_name: The name of bucket
:param domain_name: The name of domain to delete
'''
uri = '%s%s' % (self._config.get_base_uri(), quote(bucket_name))
response = self._request.delete(uri, auth=self._auth, params={'domain': domain_name})
if response.status_code != requests.codes.ok:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Delete bucket domain mapping failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def list_domain_mappins(self, bucket_name):
'''
List bucket domain mappings
:param bucket_name: The name of bucket
:return: The list of domain mappings
'''
uri = '%s%s' % (self._config.get_base_uri(), quote(bucket_name))
response = self._request.get(uri, auth=self._auth, params={'domain': 'true'})
if response.status_code == requests.codes.ok:
list_domain_mappings_result = ListDomainMappingsResult(to_json_object(response.content))
return list_domain_mappings_result.domain_mappings
else:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'List bucket domain mappings failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def crop_image(self, bucket_name, object_name, x, y, w, h):
'''
Crop image
:param bucket_name:The name of bucket
:param object_name: The name of object
'''
uri = '%s%s/%s' % (self._config.get_base_uri(), quote(bucket_name), quote(object_name))
response = self._request.put(uri, auth=self._auth, params={'cropImage': 'true', 'x': x, 'y': y, 'w': w, 'h': h})
if response.status_code != requests.codes.ok:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'crop image failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def get_access_log_config(self, bucket_name):
uri = '%s%s' % (self._config.get_base_uri(), quote(bucket_name))
response = self._request.get(uri, auth=self._auth, params={'accessLog': 'true'})
if response.status_code == requests.codes.ok:
access_log_config = AccessLogConfig(to_json_object(response.content))
return access_log_config
else:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'get access log config failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def update_access_log_config(self, bucket_name, access_log_config):
uri = '%s%s' % (self._config.get_base_uri(), quote(bucket_name))
response = self._request.put(uri, auth=self._auth,
data=json.dumps(access_log_config, default=lambda x: x.to_string()),
params={'accessLog': 'true'})
if response.status_code != requests.codes.ok:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Update access log config failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def set_bucket_acl(self, bucket_name, acl):
'''
Add grant(ACL) for specified bucket.
:param bucket_name: The name of the bucket to add grant
:param acl: The grant(ACL) to add
'''
uri = '%s%s' % (self._config.get_base_uri(), quote(bucket_name))
acp = self._acl_to_acp(acl)
response = self._request.put(uri, auth=self._auth,
params={SubResource.ACL: 'true'},
data=json.dumps(acp, default=lambda x: x.to_string()))
if response.status_code != requests.codes.ok:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Set bucket acl failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def get_bucket_acl(self, bucket_name):
'''
Get the ACL of a specified bucket.
:param bucket_name: The name of the bucket to get ACL
:return: The access control list retrieved
'''
uri = '%s%s' % (self._config.get_base_uri(), quote(bucket_name))
response = self._request.get(uri, auth=self._auth, params={SubResource.ACL: 'true'})
if response.status_code == requests.codes.ok:
acp = AccessControlPolicy(to_json_object(response.content))
acl = self._acp_to_acl(acp)
return acl
else:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Get bucket acl failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def delete_bucket_acl(self, bucket_name, acl):
'''
Delete grant(ACL) for specified bucket.
:param bucket_name: The name of the bucket to delete grant
:param acl: The grant(ACL) to delete
'''
uri = '%s%s' % (self._config.get_base_uri(), quote(bucket_name))
acp = self._acl_to_acp(acl)
response = self._request.put(uri, auth=self._auth, data=json.dumps(acp, default=lambda x: x.to_string()), params={
"acl": "true",
"action": "delete"})
if response.status_code != requests.codes.ok:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Delete bucket acl failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def delete_object_acl(self, bucket_name, object_name, acl, is_archive=False):
'''
Delete grant(ACL) for a specified object.
:param bucket_name: The name of the bucket
:param object_name: The name of the object
:param acl: The grant(ACL) to delete
:param is_archive: If true, delete ACL of an archived object
'''
uri = '%s%s/%s' % (self._config.get_base_uri(), quote(bucket_name), quote(object_name))
acp = self._acl_to_acp(acl)
params = {'acl': 'true', 'action': "delete"}
if is_archive:
params["archive"] = "true"
response = self._request.put(uri, auth=self._auth, params=params,
data=json.dumps(acp, default=lambda x: x.to_string()))
if response.status_code != requests.codes.ok:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Delete object acl failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def set_object_acl(self, bucket_name, object_name, acl, is_archive=False):
'''
Add grant(ACL) for a specified object.
:param bucket_name: The name of the bucket
:param object_name: The name of the object
:param acl: The grant(ACL) to add
:param is_archive: If true, put ACL of archive object
'''
uri = '%s%s/%s' % (self._config.get_base_uri(), quote(bucket_name), quote(object_name))
acp = self._acl_to_acp(acl)
params = {SubResource.ACL: 'true'}
if is_archive:
params['archive'] = "true"
response = self._request.put(uri, auth=self._auth, params=params,
data=json.dumps(acp, default=lambda x: x.to_string()))
if response.status_code != requests.codes.ok:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Set object acl failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def get_object_acl(self, bucket_name, object_name, is_archive=False):
'''
Get the ACL of a specified object.
:param bucket_name: The name of the bucket
:param object_name: The name of the object
:param is_archive: If true, get ACL of an archived object
:return: The access control list retrieved
'''
uri = '%s%s/%s' % (self._config.get_base_uri(), quote(bucket_name), quote(object_name))
params = {SubResource.ACL: 'true'}
if is_archive:
params["archive"] = "true"
response = self._request.get(uri, auth=self._auth, params=params)
if response.status_code == requests.codes.ok:
acp = AccessControlPolicy(to_json_object(response.content))
acl = self._acp_to_acl(acp)
return acl
else:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Get object acl failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def get_object_metadata(self, bucket_name, object_name):
'''
Get the metadata of a specified object.
:param bucket_name: The name of the bucket
:param object_name: The name of the object
:return: The object metadata retrieved
'''
uri = '%s%s/%s' % (self._config.get_base_uri(), quote(bucket_name), quote(object_name))
response = self._request.get(uri, auth=self._auth, params={SubResource.METADATA: 'true'})
if response.status_code == requests.codes.ok:
metadata = self._parse_object_metadata_from_headers(response.headers)
return metadata
else:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Get object metadata failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def prefetch_object(self, bucket_name, object_name):
'''
Prefetch the object to CDN
:param bucket_name: The name of the bucket
:param object_name: The name of the object
:return: void
'''
uri = '%s%s/%s' % (self._config.get_base_uri(), quote(bucket_name), quote(object_name))
response = self._request.put(uri, auth=self._auth, data="", params={"prefetch": 'true'})
if response.status_code != requests.codes.ok:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Prefetch object failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def refresh_object(self, bucket_name, object_name):
'''
Refresh the cache of the object in CDN
:param bucket_name: The name of the bucket
:param object_name: The name of the object
:return: void
'''
uri = '%s%s/%s' % (self._config.get_base_uri(), quote(bucket_name), quote(object_name))
response = self._request.put(uri, auth=self._auth, data="", params={"refresh": 'true'})
if response.status_code != requests.codes.ok:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Refresh object failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def set_public(self, bucket_name, object_name=None, is_archive=False):
"""
Make a resource public. If object_name is None, the bucket named bucket_name is made
public; otherwise the object named object_name in that bucket is made public.
:param bucket_name:
:param object_name:
:return:
"""
if object_name is None:
self._set_bucket_public(bucket_name)
else:
self._set_object_public(bucket_name, object_name, is_archive=is_archive)
def _set_object_public(self, bucket_name, object_name, is_archive=False):
acl = AccessControlList()
grant = Grant(Grantee(UserGroups.ALL_USERS), Permission.READ)
grant.type = GrantType.GROUP
acl.add_grant(grant)
self.set_object_acl(bucket_name, object_name, acl, is_archive=is_archive)
def _set_bucket_public(self, bucket_name):
acl = AccessControlList()
grant = Grant(Grantee(UserGroups.ALL_USERS), Permission.READ_OBJECTS)
grant.type = GrantType.GROUP
acl.add_grant(grant)
self.set_bucket_acl(bucket_name, acl)
def set_private(self, bucket_name, object_name=None, is_archive=False):
if object_name is None:
self._set_bucket_private(bucket_name)
else:
self._set_object_private(bucket_name, object_name, is_archive)
def _set_bucket_private(self, bucket_name):
acl = AccessControlList()
grant = Grant(Grantee(UserGroups.ALL_USERS), Permission.READ_OBJECTS)
grant.type = GrantType.GROUP
acl.add_grant(grant)
self.delete_bucket_acl(bucket_name, acl)
def _set_object_private(self, bucket_name, object_name, is_archive=False):
acl = AccessControlList()
grant = Grant(Grantee(UserGroups.ALL_USERS), Permission.READ)
grant.type = GrantType.GROUP
acl.add_grant(grant)
self.delete_object_acl(bucket_name, object_name, acl, is_archive)
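# Hedged example of the public/private helpers above (client, bucket, and object names
# are illustrative assumptions): make a single object publicly readable, then revert it.
#
#   client.set_public('my-bucket', 'index.html')
#   client.set_private('my-bucket', 'index.html')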
def init_multipart_upload(self, bucket_name, object_name, metadata=None, is_archive=False):
'''
Init a multipart upload session
:param bucket_name:
:param object_name:
:return:
'''
if not metadata:
metadata = FDSObjectMetadata()
if is_archive:
metadata.add_header(Common.STORAGE_CLASS, FDSStorageClass.Archive.value)
uri = '%s%s/%s' % (self._config.get_base_uri(), quote(bucket_name), quote(object_name))
response = self._request.put(uri, auth=self._auth, data="", params={"uploads": 'true'}, headers=metadata.metadata)
if response.status_code == requests.codes.ok:
result = InitMultipartUploadResult(to_json_object(response.content))
return result
else:
headers = ""
if self._config.debug:
headers = ' headers=%s' % response.headers
message = 'Init multipart upload failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def upload_part(self, bucket_name, object_name, upload_id, part_number, data):
'''
Upload a multipart upload part
:param bucket_name:
:param object_name:
:param upload_id:
:param part_number:
:param data:
:return:
'''
uri = '%s%s/%s' % (self._config.get_base_uri(), quote(bucket_name), quote(object_name))
response = self._request.put(uri, auth=self._auth, data=data, params={
"uploadId": upload_id,
"partNumber": str(part_number)
})
if response.status_code == requests.codes.ok:
result = UploadPartResult(to_json_object(response.content))
return result
else:
headers = ""
if self._config.debug:
headers = ' headers=%s' % response.headers
message = 'Upload part failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def complete_multipart_upload(self, bucket_name, object_name, upload_id,
metadata, upload_part_result_list):
'''
Complete a multipart upload
:param bucket_name:
:param object_name:
:param upload_id:
:param metadata:
:param upload_part_result_list:
:return:
'''
uri = '%s%s/%s' % (self._config.get_base_uri(), quote(bucket_name), quote(object_name))
if metadata is None:
metadata = FDSObjectMetadata()
response = self._request.put(uri, auth=self._auth,
data=upload_part_result_list, headers=metadata.metadata,
params={"uploadId": upload_id})
if response.status_code == requests.codes.ok:
result = PutObjectResult(to_json_object(response.content))
return result
else:
headers = ""
if self._config.debug:
headers = ' headers=%s' % response.headers
message = 'Complete multipart upload failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def abort_multipart_upload(self, bucket_name, object_name, upload_id):
'''
Abort a multipart upload
:param bucket_name:
:param object_name:
:param upload_id:
:return:
'''
uri = '%s%s/%s' % (self._config.get_base_uri(), quote(bucket_name), quote(object_name))
response = self._request.delete(uri, auth=self._auth, data='', params={
"uploadId": upload_id
})
if response.status_code != requests.codes.ok:
headers = ""
if self._config.debug:
headers = ' headers=%s' % response.headers
message = 'Abort multipart upload failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
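# A hedged sketch of driving the multipart API manually, mirroring the loop used by
# put_object above (the client variable, names, and part size are illustrative):
#
#   token = client.init_multipart_upload('my-bucket', 'big.obj')
#   results = []
#   part_number = 1
#   with open('big.obj', 'rb') as f:
#       chunk = f.read(10 * 1024 * 1024)
#       while chunk:
#           results.append(client.upload_part('my-bucket', 'big.obj',
#                                             token.upload_id, part_number, chunk))
#           part_number += 1
#           chunk = f.read(10 * 1024 * 1024)
#   part_list = UploadPartResultList({"uploadPartResultList": results})
#   client.complete_multipart_upload('my-bucket', 'big.obj', token.upload_id,
#                                    None, json.dumps(part_list))
#
# On failure, client.abort_multipart_upload('my-bucket', 'big.obj', token.upload_id)
# releases the pending upload.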
def generate_presigned_uri(self, base_uri, bucket_name, object_name,
expiration, http_method="GET", content_type=None, sub_resources=None):
'''
Generate a pre-signed uri to share object with the public
:param base_uri: The base uri of rest server. Use client's default if 'None' pass
:param bucket_name: The name of the bucket
:param object_name: The name of the object
:param expiration: The expiration time of the uri: milliseconds from the Epoch
:param http_method: The http method used in uri
:return: The pre-signed uri string
'''
if not base_uri or base_uri == '':
if http_method == 'PUT' or http_method == 'POST':
base_uri = self._config.get_upload_base_uri()
elif http_method == 'DELETE':
base_uri = self._config.get_base_uri()
else:
base_uri = self._config.get_download_base_uri()
if not IS_PY3 and isinstance(base_uri, unicode):
base_uri = base_uri.encode('utf8')
try:
if sub_resources is None:
uri = '%s%s/%s?%s=%s&%s=%s&' % \
(base_uri, bucket_name, object_name, \
Common.GALAXY_ACCESS_KEY_ID, self._auth._app_key, \
Common.EXPIRES, str(int(expiration)))
else:
uri = '%s%s/%s?%s&%s=%s&%s=%s&' % \
(base_uri, bucket_name, object_name, '&'.join(sub_resources), \
Common.GALAXY_ACCESS_KEY_ID, self._auth._app_key, \
Common.EXPIRES, str(int(expiration)))
headers = None
if content_type is not None and isinstance(content_type, IS_PY3 and str or basestring):
headers = {Common.CONTENT_TYPE: content_type}
signature = str(self._auth._sign_to_base64(http_method, headers, uri, \
self._auth._app_secret))
if sub_resources is None:
return '%s%s/%s?%s=%s&%s=%s&%s=%s' % \
(base_uri, quote(bucket_name), quote(object_name), \
Common.GALAXY_ACCESS_KEY_ID, self._auth._app_key, \
Common.EXPIRES, str(int(expiration)), Common.SIGNATURE, signature)
else:
return '%s%s/%s?%s&%s=%s&%s=%s&%s=%s' % \
(base_uri, quote(bucket_name), quote(object_name), '&'.join(sub_resources), \
Common.GALAXY_ACCESS_KEY_ID, self._auth._app_key, \
Common.EXPIRES, str(int(expiration)), Common.SIGNATURE, signature)
except Exception as e:
import traceback
traceback.print_exc()
message = 'Wrong expiration given. ' \
'Milliseconds since January 1, 1970 should be used. ' + str(e)
raise GalaxyFDSClientException(message)
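# Hedged example of generating a pre-signed download link (the client variable and the
# bucket/object names are illustrative assumptions):
#
#   import time
#   expiration_ms = int(time.time() * 1000) + 60 * 60 * 1000  # valid for one hour
#   url = client.generate_presigned_uri(None, 'my-bucket', 'photo.jpg', expiration_ms)
#
# Passing None (or '') as base_uri falls back to the client's default base URI for the
# chosen HTTP method, as handled at the top of the method.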
def generate_download_object_uri(self, bucket_name, object_name):
'''
Generate a URI for downloading object
'''
return '%s%s/%s' % (self._config.get_download_base_uri(), bucket_name,
object_name)
def copy_object(self, src_bucket_name, src_object_name, dst_bucket_name, dst_object_name):
'''
Copy src_object_name from src_bucket_name to dst_bucket_name, and rename it to dst_object_name
:param src_bucket_name: Source bucket name
:param src_object_name: Source object name
:param dst_bucket_name: Target bucket name
:param dst_object_name: Target object name
:return:
'''
uri = '%s%s/%s' % (self._config.get_upload_base_uri(), quote(dst_bucket_name),\
quote(dst_object_name))
data = {"srcBucketName": src_bucket_name, "srcObjectName": src_object_name}
response = self._request.put(uri, data=json.dumps(data, default=lambda x: x.to_string()),
auth=self._auth, params={"cp": "cpparam"})
if response.status_code == requests.codes.ok:
return CopyObjectResult(to_json_object(response.content))
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Copy object failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def _acp_to_acl(self, acp):
'''
Translate AccessControlPolicy to AccessControlList.
'''
if acp is not None:
acl = AccessControlList()
for item in acp['accessControlList']:
grantee = item['grantee']
grant_id = grantee['id']
permission = item['permission']
g = Grant(Grantee(grant_id), permission)
acl.add_grant(g)
return acl
return str()
# def copy_object(self, srcBucketName, srcObjectName, dstBucketName, dstObjectName):
# cp_request = FDSCopyObjectRequest(srcBucketName, srcObjectName, dstBucketName, dstObjectName)
# return self.copy_object(cp_request)
def _acl_to_acp(self, acl):
'''
Translate AccessControlList to AccessControlPolicy.
'''
if acl is not None:
acp = AccessControlPolicy(dict())
owner = Owner()
owner.id = self._access_key
acp.owner = owner
acp.access_control_list = acl.get_grant_list()
return acp
return ''
def _parse_object_metadata_from_headers(self, response_headers):
'''
Parse object metadata from the response headers.
'''
metadata = FDSObjectMetadata()
header_keys = [c.lower() for c in response_headers.keys()]
for key in FDSObjectMetadata.PRE_DEFINED_METADATA:
if key.lower() in header_keys:
metadata.add_header(key, response_headers[key])
for key in response_headers:
if key.lower().startswith(FDSObjectMetadata.USER_DEFINED_METADATA_PREFIX):
metadata.add_user_metadata(key, response_headers[key])
return metadata
def list_all_objects(self, bucket_name, prefix='', delimiter=None, is_archive=False):
'''
traverse all objects in the bucket
:param bucket_name:
:param prefix:
:param delimiter:
:return:
'''
result = self.list_objects(bucket_name, prefix, delimiter, is_archive=is_archive)
while True:
for object_summary in result.objects:
yield object_summary
if result.is_truncated:
result = self.list_next_batch_of_objects(result, is_archive=is_archive)
else:
break
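# Hedged iteration sketch (list_objects / list_next_batch_of_objects are defined elsewhere
# in this client; the client variable and bucket name are illustrative):
#
#   for summary in client.list_all_objects('my-bucket', prefix='logs/'):
#       print(summary.object_name, summary.size)
#
# The generator transparently follows truncated listings until the last batch is reached.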
def _update_bucket_versioning_(self, bucket_name, versioning):
'''
Update bucket versioning
:param bucket_name:
:param versioning:
:return:
'''
uri = '%s%s' % (self._config.get_base_uri(), quote(bucket_name))
response = self._request.put(uri, auth=self._auth, params={"versioning": str(versioning)})
if response.status_code == requests.codes.ok:
return to_json_object(response.content)
else:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Update bucket versioning failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def _get_bucket_versioning_(self, bucket_name):
'''
Get bucket versioning
:param bucket_name:
:return:
'''
uri = '%s%s' % (self._config.get_base_uri(), quote(bucket_name))
response = self._request.get(uri, auth=self._auth, params={"versioning": "true"})
if response.status_code == requests.codes.ok:
content = response.content
if isinstance(content, bytes):
content = content.decode(encoding='utf-8')
return int(content)
else:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Get bucket versioning failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def _list_version_ids_(self, bucket_name, object_name):
'''
List all version ids of specified object order by timestamp desc
:param bucket_name:
:param object_name:
:return: a list of version ids of specified object
'''
uri = '%s%s/%s' % (self._config.get_base_uri(), quote(bucket_name), quote(object_name))
response = self._request.get(uri, auth=self._auth, params={"versionIds": "true"})
if response.status_code == requests.codes.ok:
return to_json_object(response.content)
else:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Get object version ids failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def update_lifecycle_config(self, bucket_name, lifecycle_config):
'''
Update the lifecycle config of a bucket to the given config
:param lifecycle_config:
'''
uri = '%s%s' % (self._config.get_base_uri(), quote(bucket_name))
response = self._request.put(uri,
auth=self._auth,
params={"lifecycle": "true"},
data=json.dumps(lifecycle_config))
if response.status_code != requests.codes.ok:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Update bucket lifecycle config failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def update_lifecycle_rule(self, bucket_name, rule):
'''
Update a TTL rule of the specified bucket
:param rule: If rule.id is None or does not exist in bucket.lifecycle_config, the rule is added with a generated ID; otherwise bucket.lifecycle_config[rule.id] is updated
'''
uri = '%s%s' % (self._config.get_base_uri(), quote(bucket_name))
response = self._request.put(uri,
auth=self._auth,
params={"lifecycle": "rule"},
data=json.dumps(rule))
if response.status_code != requests.codes.ok:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Update bucket lifecycle rule failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def get_lifecycle_config(self, bucket_name, rule_id=None):
'''
Get the lifecycle config of the specified bucket
:param bucket_name:
:param rule_id: If rule_id is None, return the lifecycle_config of the bucket; otherwise return bucket.lifecycle_config[rule_id] if the rule exists
:return:
'''
uri = '%s%s' % (self._config.get_base_uri(), quote(bucket_name))
response = self._request.get(uri, auth=self._auth,
params={"lifecycle": rule_id or ""})
if response.status_code == requests.codes.ok:
js = to_json_object(response.content)
if rule_id:
return FDSLifecycleRule(js)
else:
return FDSLifecycleConfig(js)
else:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Get bucket lifecycle config failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def update_cors_config(self, bucket_name, cors_config):
'''
Update the cors config of a bucket to the given config
:param bucket_name:
:param cors_config:
:return:
'''
uri = '%s%s' % (self._config.get_base_uri(), quote(bucket_name))
response = self._request.put(uri, auth=self._auth, params={"cors": "true"},
data=json.dumps(cors_config))
if response.status_code != requests.codes.ok:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Update bucket cors config failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def update_cors_rule(self, bucket_name, rule):
'''
Update a cors rule of the specified bucket
:param rule: If rule.id is None or does not exist in bucket.cors_config, the rule is added with a generated ID; otherwise bucket.cors_config[rule.id] is updated
'''
uri = '%s%s' % (self._config.get_base_uri(), quote(bucket_name))
response = self._request.put(uri,
auth=self._auth,
params={"cors": "rule"},
data=json.dumps(rule))
if response.status_code != requests.codes.ok:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Update bucket cors rule failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def get_cors_config(self, bucket_name, rule_id=None):
'''
Get the cors config of the specified bucket
:param bucket_name:
:param rule_id: If rule_id is None, return the cors_config of the bucket; otherwise return bucket.cors_config[rule_id] if the rule exists
:return:
'''
uri = '%s%s' % (self._config.get_base_uri(), quote(bucket_name))
response = self._request.get(uri, auth=self._auth,
params={"cors": rule_id and rule_id or ""})
if response.status_code == requests.codes.ok:
js = to_json_object(response.content)
if rule_id:
return FDSCORSRule(js)
else:
return FDSCORSConfig(js)
else:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Get bucket cors config failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def _set_bucket_default_webp_quality_(self, bucket_name, quality):
'''
Enable bucket auto convert webp.
:param bucket_name: The name of the bucket
:param quality: Default webp quality, -1 means disable auto convert webp
'''
uri = '%s%s' % (self._config.get_base_uri(), quote(bucket_name))
response = self._request.put(uri, auth=self._auth, params={
"webpQuality": str(quality)
})
if response.status_code != requests.codes.ok:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Enable bucket auto convert webp failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def _get_bucket_default_webp_quality_(self, bucket_name):
'''
Get the default webp quality of a specified bucket.
:param bucket_name: The name of the bucket
:return: The default webp quality as an int, or False if the bucket is not found
'''
uri = '%s%s' % (self._config.get_base_uri(), quote(bucket_name))
response = self._request.get(uri, auth=self._auth, params={
"webpQuality": "true"
})
if response.status_code == requests.codes.ok:
content = response.content
if IS_PY3:
content = content.decode("UTF-8")
return int(content)
elif response.status_code == requests.codes.not_found:
return False
else:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Get bucket default webp quality failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def _get_webp_(self, bucket_name, object_name, stream=None, quality=None):
'''
Get a specified object from a bucket.
:param bucket_name: The name of the bucket from which to get the object
:param object_name: The name of the object to get
:param stream: Set True to enable streaming; otherwise the whole object content is read into memory
:param quality: The webp quality to convert with
:return: The FDS object
'''
uri = '%s%s/%s' % (self._config.get_download_base_uri(), quote(bucket_name), quote(object_name))
params = {
"f": "webp"
}
if quality:
params["q"] = quality
response = self._request.get(uri, auth=self._auth, stream=stream, params=params)
if response.status_code == requests.codes.ok or \
response.status_code == requests.codes.partial:
obj = FDSObject()
obj.stream = response.iter_content()
summary = FDSObjectSummary()
summary.bucket_name = bucket_name
summary.object_name = object_name
summary.size = int(response.headers['content-length'])
obj.summary = summary
obj.metadata = self._parse_object_metadata_from_headers(response.headers)
return obj
else:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Get webp failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def _set_bucket_default_gif_extract_type_(self, bucket_name, mime_type):
'''
Enable bucket auto gif extract.
:param bucket_name: The name of the bucket
:param mime_type: Default gif extract type, unknown means disable auto gif extract
'''
uri = '%s%s' % (self._config.get_base_uri(), quote(bucket_name))
response = self._request.put(uri, auth=self._auth, params={"gifType": mime_type})
if response.status_code != requests.codes.ok:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Enable bucket auto gif extract failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def _get_extracted_gif_(self, bucket_name, object_name, type, stream=None):
'''
Get the extracted frames of a specified gif object
:param bucket_name: The name of the bucket from which to get the object
:param object_name: The name of the object to get
:param stream: Set True to enable streaming; otherwise the whole object content is read into memory
:param type: The target type of the extracted gif content
:return: The FDS object
'''
uri = '%s%s/%s' % (self._config.get_download_base_uri(), quote(bucket_name), quote(object_name))
params = {"gf": type}
response = self._request.get(uri, auth=self._auth, stream=stream, params=params)
if response.status_code == requests.codes.ok or response.status_code == requests.codes.partial:
obj = FDSObject()
obj.stream = response.iter_content()
summary = FDSObjectSummary()
summary.bucket_name = bucket_name
summary.object_name = object_name
summary.size = int(response.headers['content-length'])
obj.summary = summary
obj.metadata = self._parse_object_metadata_from_headers(response.headers)
return obj
else:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Get extracted gif failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def _get_bucket_default_gif_extract_type_(self, bucket_name):
'''
Get the default gif extract type of a specified bucket.
:param bucket_name: The name of the bucket
:return: The default gif extract type ('unknown' means auto gif extract is disabled), or False if the bucket is not found
'''
uri = '%s%s' % (self._config.get_base_uri(), quote(bucket_name))
response = self._request.get(uri, auth=self._auth, params={"gifType": "true"})
if response.status_code == requests.codes.ok:
content = response.content
if IS_PY3:
content = content.decode("UTF-8")
return content
elif response.status_code == requests.codes.not_found:
return False
else:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Get bucket default gif extract type failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def generate_anti_stealing_uri(self, bucket_name, object_name, expiration, key):
'''
Generate anti stealing URI
:param bucket_name: The name of the bucket from which to get the object
:param object_name: The name of the object to get
:param expiration: The expiration timestamp in milliseconds since 1970-01-01T00:00:00 UTC
:param key: The anti-stealing key configured for the bucket
:return: The anti stealing URI
'''
return self._generate_anti_stealing_uri(self._config.get_base_uri(), bucket_name, object_name,
expiration, key)
def generate_anti_stealing_cdn_uri(self, bucket_name, object_name, expiration, key):
'''
Generate anti stealing CDN URI
:param bucket_name: The name of the bucket from which to get the object
:param object_name: The name of the object to get
:param expiration: The expiration timestamp in milliseconds since 1970-01-01T00:00:00 UTC
:param key: The anti-stealing key configured for the bucket
:return: The anti stealing CDN URI
'''
return self._generate_anti_stealing_uri(self._config.get_cdn_base_uri(), bucket_name,
object_name, expiration,
key)
def _generate_anti_stealing_uri(self, base_uri, bucket_name, object_name, expiration, key):
timestamp = int(expiration) // 1000
t = format(timestamp, 'x')
string_to_cal = key + '/' + bucket_name + '/' + object_name + t
m = hashlib.md5()
m.update(string_to_cal.encode('utf-8') if IS_PY3 else string_to_cal)
sign = m.hexdigest()
return "%s%s/%s?sign=%s&t=%s" % (base_uri, bucket_name, object_name, sign, t)
def update_timestamp_anti_stealing_link_config(self, bucket_name, config):
uri = '%s%s' % (self._config.get_base_uri(), quote(bucket_name))
resp = self._request.put(uri, auth=self._auth, params={'antiStealingLink': 'true'},
data=json.dumps(config, default=lambda x: x.to_string()))
if resp.status_code != requests.codes.ok:
headers = ""
if self._config.debug:
headers = ' header=%s' % resp.headers
message = 'Update timestamp anti stealing link config failed, status=%s, reason=%s%s' % (
resp.status_code, resp.content, headers)
raise GalaxyFDSClientException(message)
def get_timestamp_anti_stealing_link_config(self, bucket_name):
uri = '%s%s' % (self._config.get_base_uri(), quote(bucket_name))
resp = self._request.get(uri, params={'antiStealingLink': 'true'}, auth=self._auth)
if resp.status_code == requests.codes.ok:
config = TimestampAntiStealingLinkConfig(to_json_object(resp.content))
return config
headers = ""
if self._config.debug:
headers = ' header=%s' % resp.headers
message = 'Get timestamp anti stealing link config failed, status=%s, reason=%s%s' % (
resp.status_code, resp.content, headers)
raise GalaxyFDSClientException(message)
def delete_timestamp_anti_stealing_link_config(self, bucket_name):
uri = '%s%s' % (self._config.get_base_uri(), quote(bucket_name))
resp = self._request.delete(uri, params={'antiStealingLink': 'true'}, auth=self._auth)
if resp.status_code != requests.codes.ok:
headers = ""
if self._config.debug:
headers = ' header=%s' % resp.headers
message = 'Delete timestamp anti stealing link config failed, status=%s, reason=%s%s' % (
resp.status_code, resp.content, headers)
raise GalaxyFDSClientException(message)
def _restore_archived_object_(self, src_bucket_name, src_object_name):
'''
Restore archived object
If object is restored, extend expiration time of the copy
:param bucket_name: The name of the bucket
:param object_name: The name of the object
'''
uri = '%s%s/%s' % (self._config.get_base_uri(), quote(src_bucket_name), quote(src_object_name))
response = self._request.put(uri, auth=self._auth,
params={"restoreArchive": "true"})
if response.status_code != requests.codes.ok:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Restore archived object failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def set_object_outside_access(self, bucket_name, object_name, value):
if not isinstance(value, bool):
raise GalaxyFDSClientException("value should be bool")
uri = '%s%s/%s' % (self._config.get_base_uri(), quote(bucket_name), quote(object_name))
response = self._request.put(uri, auth=self._auth, params={'setOutsideAccess': 'true' if value else 'false'})
if response.status_code != requests.codes.ok:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Set object outsideAccess failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
def set_bucket_outside_access(self, bucket_name, value):
if not isinstance(value, bool):
raise GalaxyFDSClientException("value should be bool")
uri = '%s%s' % (self._config.get_base_uri(), quote(bucket_name))
response = self._request.put(uri, auth=self._auth, params={'setOutsideAccess': 'true' if value else 'false'})
if response.status_code != requests.codes.ok:
headers = ""
if self._config.debug:
headers = ' header=%s' % response.headers
message = 'Set bucket outsideAccess failed, status=%s, reason=%s%s' % (
response.status_code, response.content, headers)
raise GalaxyFDSClientException(message)
|
XiaoMi/galaxy-fds-sdk-python
|
fds/galaxy_fds_client.py
|
Python
|
apache-2.0
| 71,610
|
[
"Galaxy"
] |
2b73778835f5c8d01582ce3a6d7bf39e833ffdeed8dc6484dc39d0779d89f305
|
"""Functions for converting to and from xarray objects
"""
from collections import Counter
import numpy as np
import pandas as pd
from .coding.times import CFDatetimeCoder, CFTimedeltaCoder
from .conventions import decode_cf
from .core import duck_array_ops
from .core.dataarray import DataArray
from .core.dtypes import get_fill_value
from .core.pycompat import dask_array_type
cdms2_ignored_attrs = {"name", "tileIndex"}
iris_forbidden_keys = {
"standard_name",
"long_name",
"units",
"bounds",
"axis",
"calendar",
"leap_month",
"leap_year",
"month_lengths",
"coordinates",
"grid_mapping",
"climatology",
"cell_methods",
"formula_terms",
"compress",
"missing_value",
"add_offset",
"scale_factor",
"valid_max",
"valid_min",
"valid_range",
"_FillValue",
}
cell_methods_strings = {
"point",
"sum",
"maximum",
"median",
"mid_range",
"minimum",
"mean",
"mode",
"standard_deviation",
"variance",
}
def encode(var):
return CFTimedeltaCoder().encode(CFDatetimeCoder().encode(var.variable))
def _filter_attrs(attrs, ignored_attrs):
"""Return attrs that are not in ignored_attrs"""
return {k: v for k, v in attrs.items() if k not in ignored_attrs}
def from_cdms2(variable):
"""Convert a cdms2 variable into an DataArray"""
values = np.asarray(variable)
name = variable.id
dims = variable.getAxisIds()
coords = {}
for axis in variable.getAxisList():
coords[axis.id] = DataArray(
np.asarray(axis),
dims=[axis.id],
attrs=_filter_attrs(axis.attributes, cdms2_ignored_attrs),
)
grid = variable.getGrid()
if grid is not None:
ids = [a.id for a in grid.getAxisList()]
for axis in grid.getLongitude(), grid.getLatitude():
if axis.id not in variable.getAxisIds():
coords[axis.id] = DataArray(
np.asarray(axis[:]),
dims=ids,
attrs=_filter_attrs(axis.attributes, cdms2_ignored_attrs),
)
attrs = _filter_attrs(variable.attributes, cdms2_ignored_attrs)
dataarray = DataArray(values, dims=dims, coords=coords, name=name, attrs=attrs)
return decode_cf(dataarray.to_dataset())[dataarray.name]
def to_cdms2(dataarray, copy=True):
"""Convert a DataArray into a cdms2 variable"""
# we don't want cdms2 to be a hard dependency
import cdms2
def set_cdms2_attrs(var, attrs):
for k, v in attrs.items():
setattr(var, k, v)
# 1D axes
axes = []
for dim in dataarray.dims:
coord = encode(dataarray.coords[dim])
axis = cdms2.createAxis(coord.values, id=dim)
set_cdms2_attrs(axis, coord.attrs)
axes.append(axis)
# Data
var = encode(dataarray)
cdms2_var = cdms2.createVariable(
var.values, axes=axes, id=dataarray.name, mask=pd.isnull(var.values), copy=copy
)
# Attributes
set_cdms2_attrs(cdms2_var, var.attrs)
# Curvilinear and unstructured grids
if dataarray.name not in dataarray.coords:
cdms2_axes = {}
for coord_name in set(dataarray.coords.keys()) - set(dataarray.dims):
coord_array = dataarray.coords[coord_name].to_cdms2()
cdms2_axis_cls = (
cdms2.coord.TransientAxis2D
if coord_array.ndim
else cdms2.auxcoord.TransientAuxAxis1D
)
cdms2_axis = cdms2_axis_cls(coord_array)
if cdms2_axis.isLongitude():
cdms2_axes["lon"] = cdms2_axis
elif cdms2_axis.isLatitude():
cdms2_axes["lat"] = cdms2_axis
if "lon" in cdms2_axes and "lat" in cdms2_axes:
if len(cdms2_axes["lon"].shape) == 2:
cdms2_grid = cdms2.hgrid.TransientCurveGrid(
cdms2_axes["lat"], cdms2_axes["lon"]
)
else:
cdms2_grid = cdms2.gengrid.AbstractGenericGrid(
cdms2_axes["lat"], cdms2_axes["lon"]
)
for axis in cdms2_grid.getAxisList():
cdms2_var.setAxis(cdms2_var.getAxisIds().index(axis.id), axis)
cdms2_var.setGrid(cdms2_grid)
return cdms2_var
def _pick_attrs(attrs, keys):
"""Return attrs with keys in keys list"""
return {k: v for k, v in attrs.items() if k in keys}
def _get_iris_args(attrs):
"""Converts the xarray attrs into args that can be passed into Iris"""
# iris.unit is deprecated in Iris v1.9
import cf_units
args = {"attributes": _filter_attrs(attrs, iris_forbidden_keys)}
args.update(_pick_attrs(attrs, ("standard_name", "long_name")))
unit_args = _pick_attrs(attrs, ("calendar",))
if "units" in attrs:
args["units"] = cf_units.Unit(attrs["units"], **unit_args)
return args
# TODO: Add converting bounds from xarray to Iris and back
def to_iris(dataarray):
"""Convert a DataArray into a Iris Cube"""
# Iris not a hard dependency
import iris
from iris.fileformats.netcdf import parse_cell_methods
dim_coords = []
aux_coords = []
for coord_name in dataarray.coords:
coord = encode(dataarray.coords[coord_name])
coord_args = _get_iris_args(coord.attrs)
coord_args["var_name"] = coord_name
axis = None
if coord.dims:
axis = dataarray.get_axis_num(coord.dims)
if coord_name in dataarray.dims:
try:
iris_coord = iris.coords.DimCoord(coord.values, **coord_args)
dim_coords.append((iris_coord, axis))
except ValueError:
iris_coord = iris.coords.AuxCoord(coord.values, **coord_args)
aux_coords.append((iris_coord, axis))
else:
iris_coord = iris.coords.AuxCoord(coord.values, **coord_args)
aux_coords.append((iris_coord, axis))
args = _get_iris_args(dataarray.attrs)
args["var_name"] = dataarray.name
args["dim_coords_and_dims"] = dim_coords
args["aux_coords_and_dims"] = aux_coords
if "cell_methods" in dataarray.attrs:
args["cell_methods"] = parse_cell_methods(dataarray.attrs["cell_methods"])
masked_data = duck_array_ops.masked_invalid(dataarray.data)
cube = iris.cube.Cube(masked_data, **args)
return cube
def _iris_obj_to_attrs(obj):
"""Return a dictionary of attrs when given a Iris object"""
attrs = {"standard_name": obj.standard_name, "long_name": obj.long_name}
if obj.units.calendar:
attrs["calendar"] = obj.units.calendar
if obj.units.origin != "1" and not obj.units.is_unknown():
attrs["units"] = obj.units.origin
attrs.update(obj.attributes)
return {k: v for k, v in attrs.items() if v is not None}
def _iris_cell_methods_to_str(cell_methods_obj):
"""Converts a Iris cell methods into a string"""
cell_methods = []
for cell_method in cell_methods_obj:
names = "".join(f"{n}: " for n in cell_method.coord_names)
intervals = " ".join(
f"interval: {interval}" for interval in cell_method.intervals
)
comments = " ".join(f"comment: {comment}" for comment in cell_method.comments)
extra = " ".join([intervals, comments]).strip()
if extra:
extra = f" ({extra})"
cell_methods.append(names + cell_method.method + extra)
return " ".join(cell_methods)
def _name(iris_obj, default="unknown"):
"""Mimics `iris_obj.name()` but with different name resolution order.
Similar to iris_obj.name() method, but using iris_obj.var_name first to
enable roundtripping.
"""
return iris_obj.var_name or iris_obj.standard_name or iris_obj.long_name or default
def from_iris(cube):
"""Convert a Iris cube into an DataArray"""
import iris.exceptions
name = _name(cube)
if name == "unknown":
name = None
dims = []
for i in range(cube.ndim):
try:
dim_coord = cube.coord(dim_coords=True, dimensions=(i,))
dims.append(_name(dim_coord))
except iris.exceptions.CoordinateNotFoundError:
dims.append(f"dim_{i}")
if len(set(dims)) != len(dims):
duplicates = [k for k, v in Counter(dims).items() if v > 1]
raise ValueError(f"Duplicate coordinate name {duplicates}.")
coords = {}
for coord in cube.coords():
coord_attrs = _iris_obj_to_attrs(coord)
coord_dims = [dims[i] for i in cube.coord_dims(coord)]
if coord_dims:
coords[_name(coord)] = (coord_dims, coord.points, coord_attrs)
else:
coords[_name(coord)] = ((), coord.points.item(), coord_attrs)
array_attrs = _iris_obj_to_attrs(cube)
cell_methods = _iris_cell_methods_to_str(cube.cell_methods)
if cell_methods:
array_attrs["cell_methods"] = cell_methods
# Deal with iris 1.* and 2.*
cube_data = cube.core_data() if hasattr(cube, "core_data") else cube.data
# Deal with dask and numpy masked arrays
if isinstance(cube_data, dask_array_type):
from dask.array import ma as dask_ma
filled_data = dask_ma.filled(cube_data, get_fill_value(cube.dtype))
elif isinstance(cube_data, np.ma.MaskedArray):
filled_data = np.ma.filled(cube_data, get_fill_value(cube.dtype))
else:
filled_data = cube_data
dataarray = DataArray(
filled_data, coords=coords, name=name, attrs=array_attrs, dims=dims
)
decoded_ds = decode_cf(dataarray._to_temp_dataset())
return dataarray._from_temp_dataset(decoded_ds)
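# A minimal, hypothetical roundtrip sketch for the Iris converters above (requires the
# optional iris dependency; values and names are illustrative):
#
#   import xarray as xr
#   da = xr.DataArray([[1.0, 2.0]], dims=("y", "x"), name="t2m")
#   cube = to_iris(da)         # DataArray -> iris.cube.Cube
#   da_back = from_iris(cube)  # Cube -> DataArray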
|
pydata/xarray
|
xarray/convert.py
|
Python
|
apache-2.0
| 9,642
|
[
"NetCDF"
] |
8266b7879c197fed89f38d6e60481873239eb8126c01b83f9f465280fefd7896
|
# this program corresponds to special.py
### Means test is not done yet
# E Means test is giving error (E)
# F Means test is failing (F)
# EF Means test is giving error and Failing
#! Means test is segfaulting
# 8 Means test runs forever
### test_besselpoly
### test_mathieu_a
### test_mathieu_even_coef
### test_mathieu_odd_coef
### test_modfresnelp
### test_modfresnelm
# test_pbdv_seq
### test_pbvv_seq
### test_sph_harm
# test_sph_in
# test_sph_jn
# test_sph_kn
from __future__ import division, print_function, absolute_import
import itertools
import warnings
import numpy as np
from numpy import array, isnan, r_, arange, finfo, pi, sin, cos, tan, exp, \
log, zeros, sqrt, asarray, inf, nan_to_num, real, arctan, float_
from numpy.testing import assert_equal, assert_almost_equal, \
assert_array_equal, assert_array_almost_equal, assert_approx_equal, \
assert_, rand, dec, TestCase, run_module_suite, assert_allclose, \
assert_raises, assert_array_almost_equal_nulp
from scipy import special
import scipy.special._ufuncs as cephes
from scipy.special import ellipk
from scipy.special._testutils import assert_tol_equal, with_special_errors, \
assert_func_equal
class TestCephes(TestCase):
def test_airy(self):
cephes.airy(0)
def test_airye(self):
cephes.airye(0)
def test_binom(self):
n = np.array([0.264, 4, 5.2, 17])
k = np.array([2, 0.4, 7, 3.3])
nk = np.array(np.broadcast_arrays(n[:,None], k[None,:])
).reshape(2, -1).T
rknown = np.array([[-0.097152, 0.9263051596159367, 0.01858423645695389,
-0.007581020651518199],[6, 2.0214389119675666, 0, 2.9827344527963846],
[10.92, 2.22993515861399, -0.00585728, 10.468891352063146],
[136, 3.5252179590758828, 19448, 1024.5526916174495]])
assert_func_equal(cephes.binom, rknown.ravel(), nk, rtol=1e-13)
# Test branches in implementation
np.random.seed(1234)
n = np.r_[np.arange(-7, 30), 1000*np.random.rand(30) - 500]
k = np.arange(0, 102)
nk = np.array(np.broadcast_arrays(n[:,None], k[None,:])
).reshape(2, -1).T
assert_func_equal(cephes.binom,
cephes.binom(nk[:,0], nk[:,1] * (1 + 1e-15)),
nk,
atol=1e-10, rtol=1e-10)
def test_binom_2(self):
# Test branches in implementation
np.random.seed(1234)
n = np.r_[np.logspace(1, 300, 20)]
k = np.arange(0, 102)
nk = np.array(np.broadcast_arrays(n[:,None], k[None,:])
).reshape(2, -1).T
assert_func_equal(cephes.binom,
cephes.binom(nk[:,0], nk[:,1] * (1 + 1e-15)),
nk,
atol=1e-10, rtol=1e-10)
def test_binom_exact(self):
@np.vectorize
def binom_int(n, k):
n = int(n)
k = int(k)
num = int(1)
den = int(1)
for i in range(1, k+1):
num *= i + n - k
den *= i
return float(num/den)
np.random.seed(1234)
n = np.arange(1, 15)
k = np.arange(0, 15)
nk = np.array(np.broadcast_arrays(n[:,None], k[None,:])
).reshape(2, -1).T
nk = nk[nk[:,0] >= nk[:,1]]
assert_func_equal(cephes.binom,
binom_int(nk[:,0], nk[:,1]),
nk,
atol=0, rtol=0)
def test_bdtr(self):
assert_equal(cephes.bdtr(1,1,0.5),1.0)
def test_bdtri(self):
assert_equal(cephes.bdtri(1,3,0.5),0.5)
def test_bdtrc(self):
assert_equal(cephes.bdtrc(1,3,0.5),0.5)
def test_bdtrin(self):
assert_equal(cephes.bdtrin(1,0,1),5.0)
def test_bdtrik(self):
cephes.bdtrik(1,3,0.5)
def test_bei(self):
assert_equal(cephes.bei(0),0.0)
def test_beip(self):
assert_equal(cephes.beip(0),0.0)
def test_ber(self):
assert_equal(cephes.ber(0),1.0)
def test_berp(self):
assert_equal(cephes.berp(0),0.0)
def test_besselpoly(self):
assert_equal(cephes.besselpoly(0,0,0),1.0)
def test_beta(self):
assert_equal(cephes.beta(1,1),1.0)
assert_allclose(cephes.beta(-100.3, 1e-200), cephes.gamma(1e-200))
assert_allclose(cephes.beta(0.0342, 171), 24.070498359873497, rtol=1e-14, atol=0)
def test_betainc(self):
assert_equal(cephes.betainc(1,1,1),1.0)
assert_allclose(cephes.betainc(0.0342, 171, 1e-10), 0.55269916901806648)
def test_betaln(self):
assert_equal(cephes.betaln(1,1),0.0)
assert_allclose(cephes.betaln(-100.3, 1e-200), cephes.gammaln(1e-200))
assert_allclose(cephes.betaln(0.0342, 170), 3.1811881124242447, rtol=1e-14, atol=0)
def test_betaincinv(self):
assert_equal(cephes.betaincinv(1,1,1),1.0)
assert_allclose(cephes.betaincinv(0.0342, 171, 0.25), 8.4231316935498957e-21, rtol=1e-12, atol=0)
def test_beta_inf(self):
assert_(np.isinf(special.beta(-1, 2)))
def test_btdtr(self):
assert_equal(cephes.btdtr(1,1,1),1.0)
def test_btdtri(self):
assert_equal(cephes.btdtri(1,1,1),1.0)
def test_btdtria(self):
assert_equal(cephes.btdtria(1,1,1),5.0)
def test_btdtrib(self):
assert_equal(cephes.btdtrib(1,1,1),5.0)
def test_cbrt(self):
assert_approx_equal(cephes.cbrt(1),1.0)
def test_chdtr(self):
assert_equal(cephes.chdtr(1,0),0.0)
def test_chdtrc(self):
assert_equal(cephes.chdtrc(1,0),1.0)
def test_chdtri(self):
assert_equal(cephes.chdtri(1,1),0.0)
def test_chdtriv(self):
assert_equal(cephes.chdtriv(0,0),5.0)
def test_chndtr(self):
assert_equal(cephes.chndtr(0,1,0),0.0)
p = cephes.chndtr(np.linspace(20, 25, 5), 2, 1.07458615e+02)
assert_allclose(p, [1.21805009e-09, 2.81979982e-09, 6.25652736e-09,
1.33520017e-08, 2.74909967e-08],
rtol=1e-6, atol=0)
assert_almost_equal(cephes.chndtr(np.inf, np.inf, 0), 2.0)
assert_almost_equal(cephes.chndtr(2, 1, np.inf), 0.0)
assert_(np.isnan(cephes.chndtr(np.nan, 1, 2)))
assert_(np.isnan(cephes.chndtr(5, np.nan, 2)))
assert_(np.isnan(cephes.chndtr(5, 1, np.nan)))
def test_chndtridf(self):
assert_equal(cephes.chndtridf(0,0,1),5.0)
def test_chndtrinc(self):
assert_equal(cephes.chndtrinc(0,1,0),5.0)
def test_chndtrix(self):
assert_equal(cephes.chndtrix(0,1,0),0.0)
def test_cosdg(self):
assert_equal(cephes.cosdg(0),1.0)
def test_cosm1(self):
assert_equal(cephes.cosm1(0),0.0)
def test_cotdg(self):
assert_almost_equal(cephes.cotdg(45),1.0)
def test_dawsn(self):
assert_equal(cephes.dawsn(0),0.0)
assert_allclose(cephes.dawsn(1.23), 0.50053727749081767)
def test_diric(self):
# Test behavior near multiples of 2pi. Regression test for issue
# described in gh-4001.
n_odd = [1, 5, 25]
x = np.array(2*np.pi + 5e-5).astype(np.float32)
assert_almost_equal(special.diric(x, n_odd), 1.0, decimal=7)
x = np.array(2*np.pi + 1e-9).astype(np.float64)
assert_almost_equal(special.diric(x, n_odd), 1.0, decimal=15)
x = np.array(2*np.pi + 1e-15).astype(np.float64)
assert_almost_equal(special.diric(x, n_odd), 1.0, decimal=15)
if hasattr(np, 'float128'):
# No float128 available in 32-bit numpy
x = np.array(2*np.pi + 1e-12).astype(np.float128)
assert_almost_equal(special.diric(x, n_odd), 1.0, decimal=19)
n_even = [2, 4, 24]
x = np.array(2*np.pi + 1e-9).astype(np.float64)
assert_almost_equal(special.diric(x, n_even), -1.0, decimal=15)
# Test at some values not near a multiple of pi
x = np.arange(0.2*np.pi, 1.0*np.pi, 0.2*np.pi)
octave_result = [0.872677996249965, 0.539344662916632,
0.127322003750035, -0.206011329583298]
assert_almost_equal(special.diric(x, 3), octave_result, decimal=15)
def test_diric_broadcasting(self):
x = np.arange(5)
n = np.array([1, 3, 7])
assert_(special.diric(x[:, np.newaxis], n).shape == (x.size, n.size))
def test_ellipe(self):
assert_equal(cephes.ellipe(1),1.0)
def test_ellipeinc(self):
assert_equal(cephes.ellipeinc(0,1),0.0)
def test_ellipj(self):
cephes.ellipj(0,1)
def test_ellipk(self):
assert_allclose(ellipk(0), pi/2)
def test_ellipkinc(self):
assert_equal(cephes.ellipkinc(0,0),0.0)
def test_erf(self):
assert_equal(cephes.erf(0),0.0)
def test_erfc(self):
assert_equal(cephes.erfc(0),1.0)
def test_exp1(self):
cephes.exp1(1)
def test_expi(self):
cephes.expi(1)
def test_expn(self):
cephes.expn(1,1)
def test_exp1_reg(self):
# Regression for #834
a = cephes.exp1(-complex(19.9999990))
b = cephes.exp1(-complex(19.9999991))
assert_array_almost_equal(a.imag, b.imag)
def test_exp10(self):
assert_approx_equal(cephes.exp10(2),100.0)
def test_exp2(self):
assert_equal(cephes.exp2(2),4.0)
def test_expm1(self):
assert_equal(cephes.expm1(0),0.0)
def test_fdtr(self):
assert_equal(cephes.fdtr(1,1,0),0.0)
def test_fdtrc(self):
assert_equal(cephes.fdtrc(1,1,0),1.0)
def test_fdtri(self):
# cephes.fdtri(1,1,0.5) #BUG: gives NaN, should be 1
assert_allclose(cephes.fdtri(1, 1, [0.499, 0.501]),
array([0.9937365, 1.00630298]), rtol=1e-6)
def test_fdtridfd(self):
assert_equal(cephes.fdtridfd(1,0,0),5.0)
def test_fresnel(self):
assert_equal(cephes.fresnel(0),(0.0,0.0))
def test_gamma(self):
assert_equal(cephes.gamma(5),24.0)
def test_gammainc(self):
assert_equal(cephes.gammainc(5,0),0.0)
def test_gammaincc(self):
assert_equal(cephes.gammaincc(5,0),1.0)
def test_gammainccinv(self):
assert_equal(cephes.gammainccinv(5,1),0.0)
def test_gammaln(self):
cephes.gammaln(10)
def test_gammasgn(self):
vals = np.array([-4, -3.5, -2.3, 1, 4.2], np.float64)
assert_array_equal(cephes.gammasgn(vals), np.sign(cephes.rgamma(vals)))
def test_gdtr(self):
assert_equal(cephes.gdtr(1,1,0),0.0)
def test_gdtrc(self):
assert_equal(cephes.gdtrc(1,1,0),1.0)
def test_gdtria(self):
assert_equal(cephes.gdtria(0,1,1),0.0)
def test_gdtrib(self):
cephes.gdtrib(1,0,1)
# assert_equal(cephes.gdtrib(1,0,1),5.0)
def test_gdtrix(self):
cephes.gdtrix(1,1,.1)
def test_hankel1(self):
cephes.hankel1(1,1)
def test_hankel1e(self):
cephes.hankel1e(1,1)
def test_hankel2(self):
cephes.hankel2(1,1)
def test_hankel2e(self):
cephes.hankel2e(1,1)
def test_hyp1f1(self):
assert_approx_equal(cephes.hyp1f1(1,1,1), exp(1.0))
assert_approx_equal(cephes.hyp1f1(3,4,-6), 0.026056422099537251095)
cephes.hyp1f1(1,1,1)
def test_hyp1f2(self):
cephes.hyp1f2(1,1,1,1)
def test_hyp2f0(self):
cephes.hyp2f0(1,1,1,1)
def test_hyp2f1(self):
assert_equal(cephes.hyp2f1(1,1,1,0),1.0)
def test_hyp3f0(self):
assert_equal(cephes.hyp3f0(1,1,1,0),(1.0,0.0))
def test_hyperu(self):
assert_equal(cephes.hyperu(0,1,1),1.0)
def test_i0(self):
assert_equal(cephes.i0(0),1.0)
def test_i0e(self):
assert_equal(cephes.i0e(0),1.0)
def test_i1(self):
assert_equal(cephes.i1(0),0.0)
def test_i1e(self):
assert_equal(cephes.i1e(0),0.0)
def test_it2i0k0(self):
cephes.it2i0k0(1)
def test_it2j0y0(self):
cephes.it2j0y0(1)
def test_it2struve0(self):
cephes.it2struve0(1)
def test_itairy(self):
cephes.itairy(1)
def test_iti0k0(self):
assert_equal(cephes.iti0k0(0),(0.0,0.0))
def test_itj0y0(self):
assert_equal(cephes.itj0y0(0),(0.0,0.0))
def test_itmodstruve0(self):
assert_equal(cephes.itmodstruve0(0),0.0)
def test_itstruve0(self):
assert_equal(cephes.itstruve0(0),0.0)
def test_iv(self):
assert_equal(cephes.iv(1,0),0.0)
def _check_ive(self):
assert_equal(cephes.ive(1,0),0.0)
def test_j0(self):
assert_equal(cephes.j0(0),1.0)
def test_j1(self):
assert_equal(cephes.j1(0),0.0)
def test_jn(self):
assert_equal(cephes.jn(0,0),1.0)
def test_jv(self):
assert_equal(cephes.jv(0,0),1.0)
def _check_jve(self):
assert_equal(cephes.jve(0,0),1.0)
def test_k0(self):
cephes.k0(2)
def test_k0e(self):
cephes.k0e(2)
def test_k1(self):
cephes.k1(2)
def test_k1e(self):
cephes.k1e(2)
def test_kei(self):
cephes.kei(2)
def test_keip(self):
assert_equal(cephes.keip(0),0.0)
def test_ker(self):
cephes.ker(2)
def test_kerp(self):
cephes.kerp(2)
def _check_kelvin(self):
cephes.kelvin(2)
def test_kn(self):
cephes.kn(1,1)
def test_kolmogi(self):
assert_equal(cephes.kolmogi(1),0.0)
assert_(np.isnan(cephes.kolmogi(np.nan)))
def test_kolmogorov(self):
assert_equal(cephes.kolmogorov(0),1.0)
def _check_kv(self):
cephes.kv(1,1)
def _check_kve(self):
cephes.kve(1,1)
def test_log1p(self):
assert_equal(cephes.log1p(0),0.0)
def test_lpmv(self):
assert_equal(cephes.lpmv(0,0,1),1.0)
def test_mathieu_a(self):
assert_equal(cephes.mathieu_a(1,0),1.0)
def test_mathieu_b(self):
assert_equal(cephes.mathieu_b(1,0),1.0)
def test_mathieu_cem(self):
assert_equal(cephes.mathieu_cem(1,0,0),(1.0,0.0))
# Test AMS 20.2.27
@np.vectorize
def ce_smallq(m, q, z):
z *= np.pi/180
if m == 0:
return 2**(-0.5) * (1 - .5*q*cos(2*z)) # + O(q^2)
elif m == 1:
return cos(z) - q/8 * cos(3*z) # + O(q^2)
elif m == 2:
return cos(2*z) - q*(cos(4*z)/12 - 1/4) # + O(q^2)
else:
return cos(m*z) - q*(cos((m+2)*z)/(4*(m+1)) - cos((m-2)*z)/(4*(m-1))) # + O(q^2)
m = np.arange(0, 100)
q = np.r_[0, np.logspace(-30, -9, 10)]
assert_allclose(cephes.mathieu_cem(m[:,None], q[None,:], 0.123)[0],
ce_smallq(m[:,None], q[None,:], 0.123),
rtol=1e-14, atol=0)
def test_mathieu_sem(self):
assert_equal(cephes.mathieu_sem(1,0,0),(0.0,1.0))
# Test AMS 20.2.27
@np.vectorize
def se_smallq(m, q, z):
z *= np.pi/180
if m == 1:
return sin(z) - q/8 * sin(3*z) # + O(q^2)
elif m == 2:
return sin(2*z) - q*sin(4*z)/12 # + O(q^2)
else:
return sin(m*z) - q*(sin((m+2)*z)/(4*(m+1)) - sin((m-2)*z)/(4*(m-1))) # + O(q^2)
m = np.arange(1, 100)
q = np.r_[0, np.logspace(-30, -9, 10)]
assert_allclose(cephes.mathieu_sem(m[:,None], q[None,:], 0.123)[0],
se_smallq(m[:,None], q[None,:], 0.123),
rtol=1e-14, atol=0)
def test_mathieu_modcem1(self):
assert_equal(cephes.mathieu_modcem1(1,0,0),(0.0,0.0))
def test_mathieu_modcem2(self):
cephes.mathieu_modcem2(1,1,1)
# Test reflection relation AMS 20.6.19
m = np.arange(0, 4)[:,None,None]
q = np.r_[np.logspace(-2, 2, 10)][None,:,None]
z = np.linspace(0, 1, 7)[None,None,:]
y1 = cephes.mathieu_modcem2(m, q, -z)[0]
fr = -cephes.mathieu_modcem2(m, q, 0)[0] / cephes.mathieu_modcem1(m, q, 0)[0]
y2 = -cephes.mathieu_modcem2(m, q, z)[0] - 2*fr*cephes.mathieu_modcem1(m, q, z)[0]
assert_allclose(y1, y2, rtol=1e-10)
def test_mathieu_modsem1(self):
assert_equal(cephes.mathieu_modsem1(1,0,0),(0.0,0.0))
def test_mathieu_modsem2(self):
cephes.mathieu_modsem2(1,1,1)
# Test reflection relation AMS 20.6.20
m = np.arange(1, 4)[:,None,None]
q = np.r_[np.logspace(-2, 2, 10)][None,:,None]
z = np.linspace(0, 1, 7)[None,None,:]
y1 = cephes.mathieu_modsem2(m, q, -z)[0]
fr = cephes.mathieu_modsem2(m, q, 0)[1] / cephes.mathieu_modsem1(m, q, 0)[1]
y2 = cephes.mathieu_modsem2(m, q, z)[0] - 2*fr*cephes.mathieu_modsem1(m, q, z)[0]
assert_allclose(y1, y2, rtol=1e-10)
def test_mathieu_overflow(self):
# Check that these return NaNs instead of causing a SEGV
assert_equal(cephes.mathieu_cem(10000, 0, 1.3), (np.nan, np.nan))
assert_equal(cephes.mathieu_sem(10000, 0, 1.3), (np.nan, np.nan))
assert_equal(cephes.mathieu_cem(10000, 1.5, 1.3), (np.nan, np.nan))
assert_equal(cephes.mathieu_sem(10000, 1.5, 1.3), (np.nan, np.nan))
assert_equal(cephes.mathieu_modcem1(10000, 1.5, 1.3), (np.nan, np.nan))
assert_equal(cephes.mathieu_modsem1(10000, 1.5, 1.3), (np.nan, np.nan))
assert_equal(cephes.mathieu_modcem2(10000, 1.5, 1.3), (np.nan, np.nan))
assert_equal(cephes.mathieu_modsem2(10000, 1.5, 1.3), (np.nan, np.nan))
def test_mathieu_ticket_1847(self):
# Regression test --- this call had some out-of-bounds access
# and could return nan occasionally
for k in range(60):
v = cephes.mathieu_modsem2(2, 100, -1)
# Values from ACM TOMS 804 (derivative obtained by numerical differentiation)
assert_allclose(v[0], 0.1431742913063671074347, rtol=1e-10)
assert_allclose(v[1], 0.9017807375832909144719, rtol=1e-4)
def test_modfresnelm(self):
cephes.modfresnelm(0)
def test_modfresnelp(self):
cephes.modfresnelp(0)
def _check_modstruve(self):
assert_equal(cephes.modstruve(1,0),0.0)
def test_nbdtr(self):
assert_equal(cephes.nbdtr(1,1,1),1.0)
def test_nbdtrc(self):
assert_equal(cephes.nbdtrc(1,1,1),0.0)
def test_nbdtri(self):
assert_equal(cephes.nbdtri(1,1,1),1.0)
def __check_nbdtrik(self):
cephes.nbdtrik(1,.4,.5)
def test_nbdtrin(self):
assert_equal(cephes.nbdtrin(1,0,0),5.0)
def test_ncfdtr(self):
assert_equal(cephes.ncfdtr(1,1,1,0),0.0)
def test_ncfdtri(self):
assert_equal(cephes.ncfdtri(1,1,1,0),0.0)
def test_ncfdtridfd(self):
cephes.ncfdtridfd(1,0.5,0,1)
def __check_ncfdtridfn(self):
cephes.ncfdtridfn(1,0.5,0,1)
def __check_ncfdtrinc(self):
cephes.ncfdtrinc(1,0.5,0,1)
def test_nctdtr(self):
assert_equal(cephes.nctdtr(1,0,0),0.5)
assert_equal(cephes.nctdtr(9, 65536, 45), 0.0)
assert_approx_equal(cephes.nctdtr(np.inf, 1., 1.), 0.5, 5)
assert_(np.isnan(cephes.nctdtr(2., np.inf, 10.)))
assert_approx_equal(cephes.nctdtr(2., 1., np.inf), 1.)
assert_(np.isnan(cephes.nctdtr(np.nan, 1., 1.)))
assert_(np.isnan(cephes.nctdtr(2., np.nan, 1.)))
assert_(np.isnan(cephes.nctdtr(2., 1., np.nan)))
def __check_nctdtridf(self):
cephes.nctdtridf(1,0.5,0)
def test_nctdtrinc(self):
cephes.nctdtrinc(1,0,0)
def test_nctdtrit(self):
cephes.nctdtrit(.1,0.2,.5)
def test_ndtr(self):
assert_equal(cephes.ndtr(0), 0.5)
assert_almost_equal(cephes.ndtr(1), 0.84134474606)
def test_ndtri(self):
assert_equal(cephes.ndtri(0.5),0.0)
def test_nrdtrimn(self):
assert_approx_equal(cephes.nrdtrimn(0.5,1,1),1.0)
def test_nrdtrisd(self):
assert_tol_equal(cephes.nrdtrisd(0.5,0.5,0.5), 0.0,
atol=0, rtol=0)
def test_obl_ang1(self):
cephes.obl_ang1(1,1,1,0)
def test_obl_ang1_cv(self):
result = cephes.obl_ang1_cv(1,1,1,1,0)
assert_almost_equal(result[0],1.0)
assert_almost_equal(result[1],0.0)
def _check_obl_cv(self):
assert_equal(cephes.obl_cv(1,1,0),2.0)
def test_obl_rad1(self):
cephes.obl_rad1(1,1,1,0)
def test_obl_rad1_cv(self):
cephes.obl_rad1_cv(1,1,1,1,0)
def test_obl_rad2(self):
cephes.obl_rad2(1,1,1,0)
def test_obl_rad2_cv(self):
cephes.obl_rad2_cv(1,1,1,1,0)
def test_pbdv(self):
assert_equal(cephes.pbdv(1,0),(0.0,1.0))
def test_pbvv(self):
cephes.pbvv(1,0)
def test_pbwa(self):
cephes.pbwa(1,0)
def test_pdtr(self):
val = cephes.pdtr(0, 1)
assert_almost_equal(val, np.exp(-1))
# Edge case: m = 0.
val = cephes.pdtr([0, 1, 2], 0.0)
assert_array_equal(val, [1, 1, 1])
def test_pdtrc(self):
val = cephes.pdtrc(0, 1)
assert_almost_equal(val, 1 - np.exp(-1))
# Edge case: m = 0.
val = cephes.pdtrc([0, 1, 2], 0.0)
assert_array_equal(val, [0, 0, 0])
def test_pdtri(self):
with warnings.catch_warnings():
warnings.simplefilter("ignore", RuntimeWarning)
cephes.pdtri(0.5,0.5)
def test_pdtrik(self):
k = cephes.pdtrik(0.5, 1)
assert_almost_equal(cephes.gammaincc(k + 1, 1), 0.5)
# Edge case: m = 0 or very small.
k = cephes.pdtrik([[0], [0.25], [0.95]], [0, 1e-20, 1e-6])
assert_array_equal(k, np.zeros((3, 3)))
def test_pro_ang1(self):
cephes.pro_ang1(1,1,1,0)
def test_pro_ang1_cv(self):
assert_array_almost_equal(cephes.pro_ang1_cv(1,1,1,1,0),
array((1.0,0.0)))
def _check_pro_cv(self):
assert_equal(cephes.pro_cv(1,1,0),2.0)
def test_pro_rad1(self):
cephes.pro_rad1(1,1,1,0.1)
def test_pro_rad1_cv(self):
cephes.pro_rad1_cv(1,1,1,1,0)
def test_pro_rad2(self):
cephes.pro_rad2(1,1,1,0)
def test_pro_rad2_cv(self):
cephes.pro_rad2_cv(1,1,1,1,0)
def test_psi(self):
cephes.psi(1)
def test_radian(self):
assert_equal(cephes.radian(0,0,0),0)
def test_rgamma(self):
assert_equal(cephes.rgamma(1),1.0)
def test_round(self):
assert_equal(cephes.round(3.4),3.0)
assert_equal(cephes.round(-3.4),-3.0)
assert_equal(cephes.round(3.6),4.0)
assert_equal(cephes.round(-3.6),-4.0)
assert_equal(cephes.round(3.5),4.0)
assert_equal(cephes.round(-3.5),-4.0)
def test_shichi(self):
cephes.shichi(1)
def test_sici(self):
cephes.sici(1)
s, c = cephes.sici(np.inf)
assert_almost_equal(s, np.pi * 0.5)
assert_almost_equal(c, 0)
s, c = cephes.sici(-np.inf)
assert_almost_equal(s, -np.pi * 0.5)
assert_(np.isnan(c), "cosine integral(-inf) is not nan")
def test_sindg(self):
assert_equal(cephes.sindg(90),1.0)
def test_smirnov(self):
assert_equal(cephes.smirnov(1,.1),0.9)
assert_(np.isnan(cephes.smirnov(1,np.nan)))
def test_smirnovi(self):
assert_almost_equal(cephes.smirnov(1,cephes.smirnovi(1,0.4)),0.4)
assert_almost_equal(cephes.smirnov(1,cephes.smirnovi(1,0.6)),0.6)
assert_(np.isnan(cephes.smirnovi(1,np.nan)))
def test_spence(self):
assert_equal(cephes.spence(1),0.0)
def test_stdtr(self):
assert_equal(cephes.stdtr(1,0),0.5)
assert_almost_equal(cephes.stdtr(1,1), 0.75)
assert_almost_equal(cephes.stdtr(1,2), 0.852416382349)
def test_stdtridf(self):
cephes.stdtridf(0.7,1)
def test_stdtrit(self):
cephes.stdtrit(1,0.7)
def test_struve(self):
assert_equal(cephes.struve(0,0),0.0)
def test_tandg(self):
assert_equal(cephes.tandg(45),1.0)
def test_tklmbda(self):
assert_almost_equal(cephes.tklmbda(1,1),1.0)
def test_y0(self):
cephes.y0(1)
def test_y1(self):
cephes.y1(1)
def test_yn(self):
cephes.yn(1,1)
def test_yv(self):
cephes.yv(1,1)
def _check_yve(self):
cephes.yve(1,1)
def test_zeta(self):
cephes.zeta(2,2)
def test_zetac(self):
assert_equal(cephes.zetac(0),-1.5)
def test_wofz(self):
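# wofz is the Faddeeva function w(z) = exp(-z**2) * erfc(-1j*z); the
# reference values below are tabulated to high precision at points
# scattered over the complex plane, including very large arguments.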
z = [complex(624.2,-0.26123), complex(-0.4,3.), complex(0.6,2.),
complex(-1.,1.), complex(-1.,-9.), complex(-1.,9.),
complex(-0.0000000234545,1.1234), complex(-3.,5.1),
complex(-53,30.1), complex(0.0,0.12345),
complex(11,1), complex(-22,-2), complex(9,-28),
complex(21,-33), complex(1e5,1e5), complex(1e14,1e14)
]
w = [
complex(-3.78270245518980507452677445620103199303131110e-7,
0.000903861276433172057331093754199933411710053155),
complex(0.1764906227004816847297495349730234591778719532788,
-0.02146550539468457616788719893991501311573031095617),
complex(0.2410250715772692146133539023007113781272362309451,
0.06087579663428089745895459735240964093522265589350),
complex(0.30474420525691259245713884106959496013413834051768,
-0.20821893820283162728743734725471561394145872072738),
complex(7.317131068972378096865595229600561710140617977e34,
8.321873499714402777186848353320412813066170427e34),
complex(0.0615698507236323685519612934241429530190806818395,
-0.00676005783716575013073036218018565206070072304635),
complex(0.3960793007699874918961319170187598400134746631,
-5.593152259116644920546186222529802777409274656e-9),
complex(0.08217199226739447943295069917990417630675021771804,
-0.04701291087643609891018366143118110965272615832184),
complex(0.00457246000350281640952328010227885008541748668738,
-0.00804900791411691821818731763401840373998654987934),
complex(0.8746342859608052666092782112565360755791467973338452,
0.),
complex(0.00468190164965444174367477874864366058339647648741,
0.0510735563901306197993676329845149741675029197050),
complex(-0.0023193175200187620902125853834909543869428763219,
-0.025460054739731556004902057663500272721780776336),
complex(9.11463368405637174660562096516414499772662584e304,
3.97101807145263333769664875189354358563218932e305),
complex(-4.4927207857715598976165541011143706155432296e281,
-2.8019591213423077494444700357168707775769028e281),
complex(2.820947917809305132678577516325951485807107151e-6,
2.820947917668257736791638444590253942253354058e-6),
complex(2.82094791773878143474039725787438662716372268e-15,
2.82094791773878143474039725773333923127678361e-15)
]
assert_func_equal(cephes.wofz, w, z, rtol=1e-13)
class TestAiry(TestCase):
def test_airy(self):
# Check the airy function to 8 decimal places at a few sample points
x = special.airy(.99)
assert_array_almost_equal(x,array([0.13689066,-0.16050153,1.19815925,0.92046818]),8)
x = special.airy(.41)
assert_array_almost_equal(x,array([0.25238916,-.23480512,0.80686202,0.51053919]),8)
x = special.airy(-.36)
assert_array_almost_equal(x,array([0.44508477,-0.23186773,0.44939534,0.48105354]),8)
def test_airye(self):
a = special.airye(0.01)
b = special.airy(0.01)
b1 = [None]*4
for n in range(2):
b1[n] = b[n]*exp(2.0/3.0*0.01*sqrt(0.01))
for n in range(2,4):
b1[n] = b[n]*exp(-abs(real(2.0/3.0*0.01*sqrt(0.01))))
assert_array_almost_equal(a,b1,6)
def test_bi_zeros(self):
bi = special.bi_zeros(2)
bia = (array([-1.17371322, -3.2710930]),
array([-2.29443968, -4.07315509]),
array([-0.45494438, 0.39652284]),
array([0.60195789, -0.76031014]))
assert_array_almost_equal(bi,bia,4)
bi = special.bi_zeros(5)
assert_array_almost_equal(bi[0],array([-1.173713222709127,
-3.271093302836352,
-4.830737841662016,
-6.169852128310251,
-7.376762079367764]),11)
assert_array_almost_equal(bi[1],array([-2.294439682614122,
-4.073155089071828,
-5.512395729663599,
-6.781294445990305,
-7.940178689168587]),10)
assert_array_almost_equal(bi[2],array([-0.454944383639657,
0.396522836094465,
-0.367969161486959,
0.349499116831805,
-0.336026240133662]),11)
assert_array_almost_equal(bi[3],array([0.601957887976239,
-0.760310141492801,
0.836991012619261,
-0.88947990142654,
0.929983638568022]),11)
def test_ai_zeros(self):
ai = special.ai_zeros(1)
assert_array_almost_equal(ai,(array([-2.33810741]),
array([-1.01879297]),
array([0.5357]),
array([0.7012])),4)
def test_ai_zeros_big(self):
z, zp, ai_zpx, aip_zx = special.ai_zeros(50000)
ai_z, aip_z, _, _ = special.airy(z)
ai_zp, aip_zp, _, _ = special.airy(zp)
ai_envelope = 1/abs(z)**(1./4)
aip_envelope = abs(zp)**(1./4)
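# On the negative real axis Ai oscillates with amplitude ~ |z|**(-1/4) and
# Ai' with amplitude ~ |z|**(1/4); dividing by these asymptotic envelopes
# keeps the "is it a zero" checks below scale-invariant across all roots.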
# Check values
assert_allclose(ai_zpx, ai_zp, rtol=1e-10)
assert_allclose(aip_zx, aip_z, rtol=1e-10)
# Check they are zeros
assert_allclose(ai_z/ai_envelope, 0, atol=1e-10, rtol=0)
assert_allclose(aip_zp/aip_envelope, 0, atol=1e-10, rtol=0)
# Check first zeros, DLMF 9.9.1
assert_allclose(z[:6],
[-2.3381074105, -4.0879494441, -5.5205598281,
-6.7867080901, -7.9441335871, -9.0226508533], rtol=1e-10)
assert_allclose(zp[:6],
[-1.0187929716, -3.2481975822, -4.8200992112,
-6.1633073556, -7.3721772550, -8.4884867340], rtol=1e-10)
def test_bi_zeros_big(self):
z, zp, bi_zpx, bip_zx = special.bi_zeros(50000)
_, _, bi_z, bip_z = special.airy(z)
_, _, bi_zp, bip_zp = special.airy(zp)
bi_envelope = 1/abs(z)**(1./4)
bip_envelope = abs(zp)**(1./4)
# Check values
assert_allclose(bi_zpx, bi_zp, rtol=1e-10)
assert_allclose(bip_zx, bip_z, rtol=1e-10)
# Check they are zeros
assert_allclose(bi_z/bi_envelope, 0, atol=1e-10, rtol=0)
assert_allclose(bip_zp/bip_envelope, 0, atol=1e-10, rtol=0)
# Check first zeros, DLMF 9.9.2
assert_allclose(z[:6],
[-1.1737132227, -3.2710933028, -4.8307378417,
-6.1698521283, -7.3767620794, -8.4919488465], rtol=1e-10)
assert_allclose(zp[:6],
[-2.2944396826, -4.0731550891, -5.5123957297,
-6.7812944460, -7.9401786892, -9.0195833588], rtol=1e-10)
class TestAssocLaguerre(TestCase):
def test_assoc_laguerre(self):
a1 = special.genlaguerre(11,1)
a2 = special.assoc_laguerre(.2,11,1)
assert_array_almost_equal(a2,a1(.2),8)
a2 = special.assoc_laguerre(1,11,1)
assert_array_almost_equal(a2,a1(1),8)
class TestBesselpoly(TestCase):
def test_besselpoly(self):
pass
class TestKelvin(TestCase):
def test_bei(self):
mbei = special.bei(2)
assert_almost_equal(mbei, 0.9722916273066613,5) # this may not be exact
def test_beip(self):
mbeip = special.beip(2)
assert_almost_equal(mbeip,0.91701361338403631,5) # this may not be exact
def test_ber(self):
mber = special.ber(2)
assert_almost_equal(mber,0.75173418271380821,5) # this may not be exact
def test_berp(self):
mberp = special.berp(2)
assert_almost_equal(mberp,-0.49306712470943909,5) # this may not be exact
def test_bei_zeros(self):
# Abramowitz & Stegun, Table 9.12
bi = special.bei_zeros(5)
assert_array_almost_equal(bi,array([5.02622,
9.45541,
13.89349,
18.33398,
22.77544]),4)
def test_beip_zeros(self):
bip = special.beip_zeros(5)
assert_array_almost_equal(bip,array([3.772673304934953,
8.280987849760042,
12.742147523633703,
17.193431752512542,
21.641143941167325]),8)
def test_ber_zeros(self):
ber = special.ber_zeros(5)
assert_array_almost_equal(ber,array([2.84892,
7.23883,
11.67396,
16.11356,
20.55463]),4)
def test_berp_zeros(self):
brp = special.berp_zeros(5)
assert_array_almost_equal(brp,array([6.03871,
10.51364,
14.96844,
19.41758,
23.86430]),4)
def test_kelvin(self):
mkelv = special.kelvin(2)
assert_array_almost_equal(mkelv,(special.ber(2) + special.bei(2)*1j,
special.ker(2) + special.kei(2)*1j,
special.berp(2) + special.beip(2)*1j,
special.kerp(2) + special.keip(2)*1j),8)
def test_kei(self):
mkei = special.kei(2)
assert_almost_equal(mkei,-0.20240006776470432,5)
def test_keip(self):
mkeip = special.keip(2)
assert_almost_equal(mkeip,0.21980790991960536,5)
def test_ker(self):
mker = special.ker(2)
assert_almost_equal(mker,-0.041664513991509472,5)
def test_kerp(self):
mkerp = special.kerp(2)
assert_almost_equal(mkerp,-0.10660096588105264,5)
def test_kei_zeros(self):
kei = special.kei_zeros(5)
assert_array_almost_equal(kei,array([3.91467,
8.34422,
12.78256,
17.22314,
21.66464]),4)
def test_keip_zeros(self):
keip = special.keip_zeros(5)
assert_array_almost_equal(keip,array([4.93181,
9.40405,
13.85827,
18.30717,
22.75379]),4)
# numbers come from 9.9 of A&S pg. 381
def test_kelvin_zeros(self):
tmp = special.kelvin_zeros(5)
berz,beiz,kerz,keiz,berpz,beipz,kerpz,keipz = tmp
assert_array_almost_equal(berz,array([2.84892,
7.23883,
11.67396,
16.11356,
20.55463]),4)
assert_array_almost_equal(beiz,array([5.02622,
9.45541,
13.89349,
18.33398,
22.77544]),4)
assert_array_almost_equal(kerz,array([1.71854,
6.12728,
10.56294,
15.00269,
19.44382]),4)
assert_array_almost_equal(keiz,array([3.91467,
8.34422,
12.78256,
17.22314,
21.66464]),4)
assert_array_almost_equal(berpz,array([6.03871,
10.51364,
14.96844,
19.41758,
23.86430]),4)
assert_array_almost_equal(beipz,array([3.77267,
# table from 1927 had 3.77320
# but this is more accurate
8.28099,
12.74215,
17.19343,
21.64114]),4)
assert_array_almost_equal(kerpz,array([2.66584,
7.17212,
11.63218,
16.08312,
20.53068]),4)
assert_array_almost_equal(keipz,array([4.93181,
9.40405,
13.85827,
18.30717,
22.75379]),4)
def test_ker_zeros(self):
ker = special.ker_zeros(5)
assert_array_almost_equal(ker,array([1.71854,
6.12728,
10.56294,
15.00269,
19.44381]),4)
def test_kerp_zeros(self):
kerp = special.kerp_zeros(5)
assert_array_almost_equal(kerp,array([2.66584,
7.17212,
11.63218,
16.08312,
20.53068]),4)
class TestBernoulli(TestCase):
def test_bernoulli(self):
brn = special.bernoulli(5)
assert_array_almost_equal(brn,array([1.0000,
-0.5000,
0.1667,
0.0000,
-0.0333,
0.0000]),4)
class TestBeta(TestCase):
def test_beta(self):
bet = special.beta(2,4)
betg = (special.gamma(2)*special.gamma(4))/special.gamma(6)
assert_almost_equal(bet,betg,8)
def test_betaln(self):
betln = special.betaln(2,4)
bet = log(abs(special.beta(2,4)))
assert_almost_equal(betln,bet,8)
def test_betainc(self):
btinc = special.betainc(1,1,.2)
assert_almost_equal(btinc,0.2,8)
def test_betaincinv(self):
y = special.betaincinv(2,4,.5)
comp = special.betainc(2,4,y)
assert_almost_equal(comp,.5,5)
class TestCombinatorics(TestCase):
def test_comb(self):
assert_array_almost_equal(special.comb([10, 10], [3, 4]), [120., 210.])
assert_almost_equal(special.comb(10, 3), 120.)
assert_equal(special.comb(10, 3, exact=True), 120)
assert_equal(special.comb(10, 3, exact=True, repetition=True), 220)
def test_comb_with_np_int64(self):
n = 70
k = 30
np_n = np.int64(n)
np_k = np.int64(k)
assert_equal(special.comb(np_n, np_k, exact=True),
special.comb(n, k, exact=True))
def test_comb_zeros(self):
assert_equal(special.comb(2, 3, exact=True), 0)
assert_equal(special.comb(-1, 3, exact=True), 0)
assert_equal(special.comb(2, -1, exact=True), 0)
assert_equal(special.comb(2, -1, exact=False), 0)
assert_array_almost_equal(special.comb([2, -1, 2, 10], [3, 3, -1, 3]),
[0., 0., 0., 120.])
def test_perm(self):
assert_array_almost_equal(special.perm([10, 10], [3, 4]), [720., 5040.])
assert_almost_equal(special.perm(10, 3), 720.)
assert_equal(special.perm(10, 3, exact=True), 720)
def test_perm_zeros(self):
assert_equal(special.perm(2, 3, exact=True), 0)
assert_equal(special.perm(-1, 3, exact=True), 0)
assert_equal(special.perm(2, -1, exact=True), 0)
assert_equal(special.perm(2, -1, exact=False), 0)
assert_array_almost_equal(special.perm([2, -1, 2, 10], [3, 3, -1, 3]),
[0., 0., 0., 720.])
class TestTrigonometric(TestCase):
def test_cbrt(self):
cb = special.cbrt(27)
cbrl = 27**(1.0/3.0)
assert_approx_equal(cb,cbrl)
def test_cbrtmore(self):
cb1 = special.cbrt(27.9)
cbrl1 = 27.9**(1.0/3.0)
assert_almost_equal(cb1,cbrl1,8)
def test_cosdg(self):
cdg = special.cosdg(90)
cdgrl = cos(pi/2.0)
assert_almost_equal(cdg,cdgrl,8)
def test_cosdgmore(self):
cdgm = special.cosdg(30)
cdgmrl = cos(pi/6.0)
assert_almost_equal(cdgm,cdgmrl,8)
def test_cosm1(self):
cs = (special.cosm1(0),special.cosm1(.3),special.cosm1(pi/10))
csrl = (cos(0)-1,cos(.3)-1,cos(pi/10)-1)
assert_array_almost_equal(cs,csrl,8)
def test_cotdg(self):
ct = special.cotdg(30)
ctrl = tan(pi/6.0)**(-1)
assert_almost_equal(ct,ctrl,8)
def test_cotdgmore(self):
ct1 = special.cotdg(45)
ctrl1 = tan(pi/4.0)**(-1)
assert_almost_equal(ct1,ctrl1,8)
def test_specialpoints(self):
assert_almost_equal(special.cotdg(45), 1.0, 14)
assert_almost_equal(special.cotdg(-45), -1.0, 14)
assert_almost_equal(special.cotdg(90), 0.0, 14)
assert_almost_equal(special.cotdg(-90), 0.0, 14)
assert_almost_equal(special.cotdg(135), -1.0, 14)
assert_almost_equal(special.cotdg(-135), 1.0, 14)
assert_almost_equal(special.cotdg(225), 1.0, 14)
assert_almost_equal(special.cotdg(-225), -1.0, 14)
assert_almost_equal(special.cotdg(270), 0.0, 14)
assert_almost_equal(special.cotdg(-270), 0.0, 14)
assert_almost_equal(special.cotdg(315), -1.0, 14)
assert_almost_equal(special.cotdg(-315), 1.0, 14)
assert_almost_equal(special.cotdg(765), 1.0, 14)
def test_sinc(self):
# the sinc implementation and more extensive sinc tests are in numpy
assert_array_equal(special.sinc([0]), 1)
assert_equal(special.sinc(0.0), 1.0)
def test_sindg(self):
sn = special.sindg(90)
assert_equal(sn,1.0)
def test_sindgmore(self):
snm = special.sindg(30)
snmrl = sin(pi/6.0)
assert_almost_equal(snm,snmrl,8)
snm1 = special.sindg(45)
snmrl1 = sin(pi/4.0)
assert_almost_equal(snm1,snmrl1,8)
class TestTandg(TestCase):
def test_tandg(self):
tn = special.tandg(30)
tnrl = tan(pi/6.0)
assert_almost_equal(tn,tnrl,8)
def test_tandgmore(self):
tnm = special.tandg(45)
tnmrl = tan(pi/4.0)
assert_almost_equal(tnm,tnmrl,8)
tnm1 = special.tandg(60)
tnmrl1 = tan(pi/3.0)
assert_almost_equal(tnm1,tnmrl1,8)
def test_specialpoints(self):
assert_almost_equal(special.tandg(0), 0.0, 14)
assert_almost_equal(special.tandg(45), 1.0, 14)
assert_almost_equal(special.tandg(-45), -1.0, 14)
assert_almost_equal(special.tandg(135), -1.0, 14)
assert_almost_equal(special.tandg(-135), 1.0, 14)
assert_almost_equal(special.tandg(180), 0.0, 14)
assert_almost_equal(special.tandg(-180), 0.0, 14)
assert_almost_equal(special.tandg(225), 1.0, 14)
assert_almost_equal(special.tandg(-225), -1.0, 14)
assert_almost_equal(special.tandg(315), -1.0, 14)
assert_almost_equal(special.tandg(-315), 1.0, 14)
class TestEllip(TestCase):
def test_ellipj_nan(self):
"""Regression test for #912."""
special.ellipj(0.5, np.nan)
def test_ellipj(self):
el = special.ellipj(0.2,0)
rel = [sin(0.2),cos(0.2),1.0,0.20]
assert_array_almost_equal(el,rel,13)
def test_ellipk(self):
elk = special.ellipk(.2)
assert_almost_equal(elk,1.659623598610528,11)
assert_equal(special.ellipkm1(0.0), np.inf)
assert_equal(special.ellipkm1(1.0), pi/2)
assert_equal(special.ellipkm1(np.inf), 0.0)
assert_equal(special.ellipkm1(np.nan), np.nan)
assert_equal(special.ellipkm1(-1), np.nan)
assert_allclose(special.ellipk(-10), 0.7908718902387385)
def test_ellipkinc(self):
elkinc = special.ellipkinc(pi/2,.2)
elk = special.ellipk(0.2)
assert_almost_equal(elkinc,elk,15)
alpha = 20*pi/180
phi = 45*pi/180
m = sin(alpha)**2
elkinc = special.ellipkinc(phi,m)
assert_almost_equal(elkinc,0.79398143,8)
# From pg. 614 of A & S
assert_equal(special.ellipkinc(pi/2, 0.0), pi/2)
assert_equal(special.ellipkinc(pi/2, 1.0), np.inf)
assert_equal(special.ellipkinc(pi/2, -np.inf), 0.0)
assert_equal(special.ellipkinc(pi/2, np.nan), np.nan)
assert_equal(special.ellipkinc(pi/2, 2), np.nan)
assert_equal(special.ellipkinc(0, 0.5), 0.0)
assert_equal(special.ellipkinc(np.inf, 0.5), np.inf)
assert_equal(special.ellipkinc(-np.inf, 0.5), -np.inf)
assert_equal(special.ellipkinc(np.inf, np.inf), np.nan)
assert_equal(special.ellipkinc(np.inf, -np.inf), np.nan)
assert_equal(special.ellipkinc(-np.inf, -np.inf), np.nan)
assert_equal(special.ellipkinc(-np.inf, np.inf), np.nan)
assert_equal(special.ellipkinc(np.nan, 0.5), np.nan)
assert_equal(special.ellipkinc(np.nan, np.nan), np.nan)
assert_allclose(special.ellipkinc(0.38974112035318718, 1), 0.4, rtol=1e-14)
assert_allclose(special.ellipkinc(1.5707, -10), 0.79084284661724946)
def test_ellipkinc_2(self):
# Regression test for gh-3550
# ellipkinc(phi, mbad) was NaN and mvals[2:6] were twice the correct value
mbad = 0.68359375000000011
phi = 0.9272952180016123
m = np.nextafter(mbad, 0)
mvals = []
for j in range(10):
mvals.append(m)
m = np.nextafter(m, 1)
f = special.ellipkinc(phi, mvals)
assert_array_almost_equal_nulp(f, 1.0259330100195334 * np.ones_like(f), 1)
# this bug also appears at phi + n * pi for at least small n
f1 = special.ellipkinc(phi + pi, mvals)
assert_array_almost_equal_nulp(f1, 5.1296650500976675 * np.ones_like(f1), 2)
def test_ellipkinc_singular(self):
# ellipkinc(phi, 1) has closed form and is finite only for phi in (-pi/2, pi/2)
xlog = np.logspace(-300, -17, 25)
xlin = np.linspace(1e-17, 0.1, 25)
xlin2 = np.linspace(0.1, pi/2, 25, endpoint=False)
assert_allclose(special.ellipkinc(xlog, 1), np.arcsinh(np.tan(xlog)), rtol=1e-14)
assert_allclose(special.ellipkinc(xlin, 1), np.arcsinh(np.tan(xlin)), rtol=1e-14)
assert_allclose(special.ellipkinc(xlin2, 1), np.arcsinh(np.tan(xlin2)), rtol=1e-14)
assert_equal(special.ellipkinc(np.pi/2, 1), np.inf)
assert_allclose(special.ellipkinc(-xlog, 1), np.arcsinh(np.tan(-xlog)), rtol=1e-14)
assert_allclose(special.ellipkinc(-xlin, 1), np.arcsinh(np.tan(-xlin)), rtol=1e-14)
assert_allclose(special.ellipkinc(-xlin2, 1), np.arcsinh(np.tan(-xlin2)), rtol=1e-14)
assert_equal(special.ellipkinc(-np.pi/2, 1), np.inf)
def test_ellipe(self):
ele = special.ellipe(.2)
assert_almost_equal(ele,1.4890350580958529,8)
assert_equal(special.ellipe(0.0), pi/2)
assert_equal(special.ellipe(1.0), 1.0)
assert_equal(special.ellipe(-np.inf), np.inf)
assert_equal(special.ellipe(np.nan), np.nan)
assert_equal(special.ellipe(2), np.nan)
assert_allclose(special.ellipe(-10), 3.6391380384177689)
def test_ellipeinc(self):
eleinc = special.ellipeinc(pi/2,.2)
ele = special.ellipe(0.2)
assert_almost_equal(eleinc,ele,14)
# pg 617 of A & S
alpha, phi = 52*pi/180,35*pi/180
m = sin(alpha)**2
eleinc = special.ellipeinc(phi,m)
assert_almost_equal(eleinc, 0.58823065, 8)
assert_equal(special.ellipeinc(pi/2, 0.0), pi/2)
assert_equal(special.ellipeinc(pi/2, 1.0), 1.0)
assert_equal(special.ellipeinc(pi/2, -np.inf), np.inf)
assert_equal(special.ellipeinc(pi/2, np.nan), np.nan)
assert_equal(special.ellipeinc(pi/2, 2), np.nan)
assert_equal(special.ellipeinc(0, 0.5), 0.0)
assert_equal(special.ellipeinc(np.inf, 0.5), np.inf)
assert_equal(special.ellipeinc(-np.inf, 0.5), -np.inf)
assert_equal(special.ellipeinc(np.inf, -np.inf), np.inf)
assert_equal(special.ellipeinc(-np.inf, -np.inf), -np.inf)
assert_equal(special.ellipeinc(np.inf, np.inf), np.nan)
assert_equal(special.ellipeinc(-np.inf, np.inf), np.nan)
assert_equal(special.ellipeinc(np.nan, 0.5), np.nan)
assert_equal(special.ellipeinc(np.nan, np.nan), np.nan)
assert_allclose(special.ellipeinc(1.5707, -10), 3.6388185585822876)
def test_ellipeinc_2(self):
# Regression test for gh-3550
# ellipeinc(phi, mbad) was NaN and mvals[2:6] were twice the correct value
mbad = 0.68359375000000011
phi = 0.9272952180016123
m = np.nextafter(mbad, 0)
mvals = []
for j in range(10):
mvals.append(m)
m = np.nextafter(m, 1)
f = special.ellipeinc(phi, mvals)
assert_array_almost_equal_nulp(f, 0.84442884574781019 * np.ones_like(f), 2)
# this bug also appears at phi + n * pi for at least small n
f1 = special.ellipeinc(phi + pi, mvals)
assert_array_almost_equal_nulp(f1, 3.3471442287390509 * np.ones_like(f1), 4)
class TestErf(TestCase):
def test_erf(self):
er = special.erf(.25)
assert_almost_equal(er,0.2763263902,8)
def test_erf_zeros(self):
erz = special.erf_zeros(5)
erzr = array([1.45061616+1.88094300j,
2.24465928+2.61657514j,
2.83974105+3.17562810j,
3.33546074+3.64617438j,
3.76900557+4.06069723j])
assert_array_almost_equal(erz,erzr,4)
def _check_variant_func(self, func, other_func, rtol, atol=0):
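# Cross-check func against an algebraically equivalent expression at random
# points; the Pareto draw spreads samples over many orders of magnitude and
# the random +/-1 factor exercises both signs. Real and complex arguments
# are both tested.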
np.random.seed(1234)
n = 10000
x = np.random.pareto(0.02, n) * (2*np.random.randint(0, 2, n) - 1)
y = np.random.pareto(0.02, n) * (2*np.random.randint(0, 2, n) - 1)
z = x + 1j*y
old_errors = np.seterr(all='ignore')
try:
w = other_func(z)
w_real = other_func(x).real
mask = np.isfinite(w)
w = w[mask]
z = z[mask]
mask = np.isfinite(w_real)
w_real = w_real[mask]
x = x[mask]
# test both real and complex variants
assert_func_equal(func, w, z, rtol=rtol, atol=atol)
assert_func_equal(func, w_real, x, rtol=rtol, atol=atol)
finally:
np.seterr(**old_errors)
def test_erfc_consistent(self):
self._check_variant_func(
cephes.erfc,
lambda z: 1 - cephes.erf(z),
rtol=1e-12,
atol=1e-14 # <- the test function loses precision
)
def test_erfcx_consistent(self):
self._check_variant_func(
cephes.erfcx,
lambda z: np.exp(z*z) * cephes.erfc(z),
rtol=1e-12
)
def test_erfi_consistent(self):
self._check_variant_func(
cephes.erfi,
lambda z: -1j * cephes.erf(1j*z),
rtol=1e-12
)
def test_dawsn_consistent(self):
self._check_variant_func(
cephes.dawsn,
lambda z: sqrt(pi)/2 * np.exp(-z*z) * cephes.erfi(z),
rtol=1e-12
)
def test_erfcinv(self):
i = special.erfcinv(1)
# Use assert_array_equal instead of assert_equal, so the comparison
# of -0.0 and 0.0 doesn't fail.
assert_array_equal(i, 0)
def test_erfinv(self):
i = special.erfinv(0)
assert_equal(i,0)
def test_errprint(self):
a = special.errprint()
b = 1-a  # a is the current state; 1-a toggles it
c = special.errprint(b)  # returns the previous state, i.e. a
assert_equal(a,c)
d = special.errprint(a)  # restore the original state
assert_equal(d,b)  # confirms the toggled state was in effect
# assert_equal(d,1-a)
class TestEuler(TestCase):
def test_euler(self):
eu0 = special.euler(0)
eu1 = special.euler(1)
eu2 = special.euler(2)  # just checking that this does not segfault
assert_almost_equal(eu0[0],1,8)
assert_almost_equal(eu2[2],-1,8)
eu24 = special.euler(24)
mathworld = [1,1,5,61,1385,50521,2702765,199360981,
19391512145,2404879675441,
370371188237525,69348874393137901,
15514534163557086905]
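# mathworld holds |E_0|, |E_2|, ..., |E_24|; the loop below restores the
# alternating signs (E_{2k} is negative for odd k) before comparing.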
correct = zeros((25,),'d')
for k in range(0,13):
if (k % 2):
correct[2*k] = -float(mathworld[k])
else:
correct[2*k] = float(mathworld[k])
olderr = np.seterr(all='ignore')
try:
err = nan_to_num((eu24-correct)/correct)
errmax = max(err)
finally:
np.seterr(**olderr)
assert_almost_equal(errmax, 0.0, 14)
class TestExp(TestCase):
def test_exp2(self):
ex = special.exp2(2)
exrl = 2**2
assert_equal(ex,exrl)
def test_exp2more(self):
exm = special.exp2(2.5)
exmrl = 2**(2.5)
assert_almost_equal(exm,exmrl,8)
def test_exp10(self):
ex = special.exp10(2)
exrl = 10**2
assert_approx_equal(ex,exrl)
def test_exp10more(self):
exm = special.exp10(2.5)
exmrl = 10**(2.5)
assert_almost_equal(exm,exmrl,8)
def test_expm1(self):
ex = (special.expm1(2),special.expm1(3),special.expm1(4))
exrl = (exp(2)-1,exp(3)-1,exp(4)-1)
assert_array_almost_equal(ex,exrl,8)
def test_expm1more(self):
ex1 = (special.expm1(2),special.expm1(2.1),special.expm1(2.2))
exrl1 = (exp(2)-1,exp(2.1)-1,exp(2.2)-1)
assert_array_almost_equal(ex1,exrl1,8)
class TestFactorialFunctions(TestCase):
def test_factorial(self):
assert_array_almost_equal([6., 24., 120.],
special.factorial([3, 4, 5], exact=False))
assert_equal(special.factorial(5, exact=True), 120)
def test_factorial2(self):
assert_array_almost_equal([105., 384., 945.],
special.factorial2([7, 8, 9], exact=False))
assert_equal(special.factorial2(7, exact=True), 105)
def test_factorialk(self):
assert_equal(special.factorialk(5, 1, exact=True), 120)
assert_equal(special.factorialk(5, 3, exact=True), 10)
class TestFresnel(TestCase):
def test_fresnel(self):
frs = array(special.fresnel(.5))
assert_array_almost_equal(frs,array([0.064732432859999287, 0.49234422587144644]),8)
# values from pg 329 Table 7.11 of A & S
# slightly corrected in 4th decimal place
def test_fresnel_zeros(self):
szo, czo = special.fresnel_zeros(5)
assert_array_almost_equal(szo,
array([2.0093+0.2885j,
2.8335+0.2443j,
3.4675+0.2185j,
4.0026+0.2009j,
4.4742+0.1877j]),3)
assert_array_almost_equal(czo,
array([1.7437+0.3057j,
2.6515+0.2529j,
3.3204+0.2240j,
3.8757+0.2047j,
4.3611+0.1907j]),3)
vals1 = special.fresnel(szo)[0]
vals2 = special.fresnel(czo)[1]
assert_array_almost_equal(vals1,0,14)
assert_array_almost_equal(vals2,0,14)
def test_fresnelc_zeros(self):
szo, czo = special.fresnel_zeros(6)
frc = special.fresnelc_zeros(6)
assert_array_almost_equal(frc,czo,12)
def test_fresnels_zeros(self):
szo, czo = special.fresnel_zeros(5)
frs = special.fresnels_zeros(5)
assert_array_almost_equal(frs,szo,12)
class TestGamma(TestCase):
def test_gamma(self):
gam = special.gamma(5)
assert_equal(gam,24.0)
def test_gammaln(self):
gamln = special.gammaln(3)
lngam = log(special.gamma(3))
assert_almost_equal(gamln,lngam,8)
def test_gammainc(self):
gama = special.gammainc(.5,.5)
assert_almost_equal(gama,.7,1)
def test_gammaincnan(self):
gama = special.gammainc(-1,1)
assert_(isnan(gama))
def test_gammainczero(self):
# bad arg but zero integration limit
gama = special.gammainc(-1,0)
assert_equal(gama,0.0)
def test_gammaincc(self):
gicc = special.gammaincc(.5,.5)
greal = 1 - special.gammainc(.5,.5)
assert_almost_equal(gicc,greal,8)
def test_gammainccnan(self):
gama = special.gammaincc(-1,1)
assert_(isnan(gama))
def test_gammainccinv(self):
gccinv = special.gammainccinv(.5,.5)
gcinv = special.gammaincinv(.5,.5)
assert_almost_equal(gccinv,gcinv,8)
@with_special_errors
def test_gammaincinv(self):
y = special.gammaincinv(.4,.4)
x = special.gammainc(.4,y)
assert_almost_equal(x,0.4,1)
y = special.gammainc(10, 0.05)
x = special.gammaincinv(10, 2.5715803516000736e-20)
assert_almost_equal(0.05, x, decimal=10)
assert_almost_equal(y, 2.5715803516000736e-20, decimal=10)
x = special.gammaincinv(50, 8.20754777388471303050299243573393e-18)
assert_almost_equal(11.0, x, decimal=10)
@with_special_errors
def test_975(self):
# Regression test for ticket #975 -- switch point in algorithm
# check that things work OK at the point, immediately next floats
# around it, and a bit further away
pts = [0.25,
np.nextafter(0.25, 0), 0.25 - 1e-12,
np.nextafter(0.25, 1), 0.25 + 1e-12]
for xp in pts:
y = special.gammaincinv(.4, xp)
x = special.gammainc(0.4, y)
assert_tol_equal(x, xp, rtol=1e-12)
def test_rgamma(self):
rgam = special.rgamma(8)
rlgam = 1/special.gamma(8)
assert_almost_equal(rgam,rlgam,8)
def test_infinity(self):
assert_(np.isinf(special.gamma(-1)))
assert_equal(special.rgamma(-1), 0)
class TestHankel(TestCase):
def test_negv1(self):
assert_almost_equal(special.hankel1(-3,2), -special.hankel1(3,2), 14)
def test_hankel1(self):
hank1 = special.hankel1(1,.1)
hankrl = (special.jv(1,.1) + special.yv(1,.1)*1j)
assert_almost_equal(hank1,hankrl,8)
def test_negv1e(self):
assert_almost_equal(special.hankel1e(-3,2), -special.hankel1e(3,2), 14)
def test_hankel1e(self):
hank1e = special.hankel1e(1,.1)
hankrle = special.hankel1(1,.1)*exp(-.1j)
assert_almost_equal(hank1e,hankrle,8)
def test_negv2(self):
assert_almost_equal(special.hankel2(-3,2), -special.hankel2(3,2), 14)
def test_hankel2(self):
hank2 = special.hankel2(1,.1)
hankrl2 = (special.jv(1,.1) - special.yv(1,.1)*1j)
assert_almost_equal(hank2,hankrl2,8)
def test_neg2e(self):
assert_almost_equal(special.hankel2e(-3,2), -special.hankel2e(3,2), 14)
def test_hankel2e(self):
hank2e = special.hankel2e(1,.1)
hankrl2e = special.hankel2(1,.1)*exp(.1j)
assert_almost_equal(hank2e,hankrl2e,8)
class TestHyper(TestCase):
def test_h1vp(self):
h1 = special.h1vp(1,.1)
h1real = (special.jvp(1,.1) + special.yvp(1,.1)*1j)
assert_almost_equal(h1,h1real,8)
def test_h2vp(self):
h2 = special.h2vp(1,.1)
h2real = (special.jvp(1,.1) - special.yvp(1,.1)*1j)
assert_almost_equal(h2,h2real,8)
def test_hyp0f1(self):
# scalar input
assert_allclose(special.hyp0f1(2.5, 0.5), 1.21482702689997, rtol=1e-12)
assert_allclose(special.hyp0f1(2.5, 0), 1.0, rtol=1e-15)
# float input, expected values match mpmath
x = special.hyp0f1(3.0, [-1.5, -1, 0, 1, 1.5])
expected = np.array([0.58493659229143, 0.70566805723127, 1.0,
1.37789689539747, 1.60373685288480])
assert_allclose(x, expected, rtol=1e-12)
# complex input
x = special.hyp0f1(3.0, np.array([-1.5, -1, 0, 1, 1.5]) + 0.j)
assert_allclose(x, expected.astype(np.complex), rtol=1e-12)
# test broadcasting
x1 = [0.5, 1.5, 2.5]
x2 = [0, 1, 0.5]
x = special.hyp0f1(x1, x2)
expected = [1.0, 1.8134302039235093, 1.21482702689997]
assert_allclose(x, expected, rtol=1e-12)
x = special.hyp0f1(np.row_stack([x1] * 2), x2)
assert_allclose(x, np.row_stack([expected] * 2), rtol=1e-12)
assert_raises(ValueError, special.hyp0f1,
np.row_stack([x1] * 3), [0, 1])
def test_hyp1f1(self):
hyp1 = special.hyp1f1(.1,.1,.3)
assert_almost_equal(hyp1, 1.3498588075760032,7)
# test contributed by Moritz Deger (2008-05-29)
# http://projects.scipy.org/scipy/scipy/ticket/659
# reference data obtained from mathematica [ a, b, x, m(a,b,x)]:
# produced with test_hyp1f1.nb
ref_data = array([[-8.38132975e+00, -1.28436461e+01, -2.91081397e+01, 1.04178330e+04],
[2.91076882e+00, -6.35234333e+00, -1.27083993e+01, 6.68132725e+00],
[-1.42938258e+01, 1.80869131e-01, 1.90038728e+01, 1.01385897e+05],
[5.84069088e+00, 1.33187908e+01, 2.91290106e+01, 1.59469411e+08],
[-2.70433202e+01, -1.16274873e+01, -2.89582384e+01, 1.39900152e+24],
[4.26344966e+00, -2.32701773e+01, 1.91635759e+01, 6.13816915e+21],
[1.20514340e+01, -3.40260240e+00, 7.26832235e+00, 1.17696112e+13],
[2.77372955e+01, -1.99424687e+00, 3.61332246e+00, 3.07419615e+13],
[1.50310939e+01, -2.91198675e+01, -1.53581080e+01, -3.79166033e+02],
[1.43995827e+01, 9.84311196e+00, 1.93204553e+01, 2.55836264e+10],
[-4.08759686e+00, 1.34437025e+01, -1.42072843e+01, 1.70778449e+01],
[8.05595738e+00, -1.31019838e+01, 1.52180721e+01, 3.06233294e+21],
[1.81815804e+01, -1.42908793e+01, 9.57868793e+00, -2.84771348e+20],
[-2.49671396e+01, 1.25082843e+01, -1.71562286e+01, 2.36290426e+07],
[2.67277673e+01, 1.70315414e+01, 6.12701450e+00, 7.77917232e+03],
[2.49565476e+01, 2.91694684e+01, 6.29622660e+00, 2.35300027e+02],
[6.11924542e+00, -1.59943768e+00, 9.57009289e+00, 1.32906326e+11],
[-1.47863653e+01, 2.41691301e+01, -1.89981821e+01, 2.73064953e+03],
[2.24070483e+01, -2.93647433e+00, 8.19281432e+00, -6.42000372e+17],
[8.04042600e-01, 1.82710085e+01, -1.97814534e+01, 5.48372441e-01],
[1.39590390e+01, 1.97318686e+01, 2.37606635e+00, 5.51923681e+00],
[-4.66640483e+00, -2.00237930e+01, 7.40365095e+00, 4.50310752e+00],
[2.76821999e+01, -6.36563968e+00, 1.11533984e+01, -9.28725179e+23],
[-2.56764457e+01, 1.24544906e+00, 1.06407572e+01, 1.25922076e+01],
[3.20447808e+00, 1.30874383e+01, 2.26098014e+01, 2.03202059e+04],
[-1.24809647e+01, 4.15137113e+00, -2.92265700e+01, 2.39621411e+08],
[2.14778108e+01, -2.35162960e+00, -1.13758664e+01, 4.46882152e-01],
[-9.85469168e+00, -3.28157680e+00, 1.67447548e+01, -1.07342390e+07],
[1.08122310e+01, -2.47353236e+01, -1.15622349e+01, -2.91733796e+03],
[-2.67933347e+01, -3.39100709e+00, 2.56006986e+01, -5.29275382e+09],
[-8.60066776e+00, -8.02200924e+00, 1.07231926e+01, 1.33548320e+06],
[-1.01724238e-01, -1.18479709e+01, -2.55407104e+01, 1.55436570e+00],
[-3.93356771e+00, 2.11106818e+01, -2.57598485e+01, 2.13467840e+01],
[3.74750503e+00, 1.55687633e+01, -2.92841720e+01, 1.43873509e-02],
[6.99726781e+00, 2.69855571e+01, -1.63707771e+01, 3.08098673e-02],
[-2.31996011e+01, 3.47631054e+00, 9.75119815e-01, 1.79971073e-02],
[2.38951044e+01, -2.91460190e+01, -2.50774708e+00, 9.56934814e+00],
[1.52730825e+01, 5.77062507e+00, 1.21922003e+01, 1.32345307e+09],
[1.74673917e+01, 1.89723426e+01, 4.94903250e+00, 9.90859484e+01],
[1.88971241e+01, 2.86255413e+01, 5.52360109e-01, 1.44165360e+00],
[1.02002319e+01, -1.66855152e+01, -2.55426235e+01, 6.56481554e+02],
[-1.79474153e+01, 1.22210200e+01, -1.84058212e+01, 8.24041812e+05],
[-1.36147103e+01, 1.32365492e+00, -7.22375200e+00, 9.92446491e+05],
[7.57407832e+00, 2.59738234e+01, -1.34139168e+01, 3.64037761e-02],
[2.21110169e+00, 1.28012666e+01, 1.62529102e+01, 1.33433085e+02],
[-2.64297569e+01, -1.63176658e+01, -1.11642006e+01, -2.44797251e+13],
[-2.46622944e+01, -3.02147372e+00, 8.29159315e+00, -3.21799070e+05],
[-1.37215095e+01, -1.96680183e+01, 2.91940118e+01, 3.21457520e+12],
[-5.45566105e+00, 2.81292086e+01, 1.72548215e-01, 9.66973000e-01],
[-1.55751298e+00, -8.65703373e+00, 2.68622026e+01, -3.17190834e+16],
[2.45393609e+01, -2.70571903e+01, 1.96815505e+01, 1.80708004e+37],
[5.77482829e+00, 1.53203143e+01, 2.50534322e+01, 1.14304242e+06],
[-1.02626819e+01, 2.36887658e+01, -2.32152102e+01, 7.28965646e+02],
[-1.30833446e+00, -1.28310210e+01, 1.87275544e+01, -9.33487904e+12],
[5.83024676e+00, -1.49279672e+01, 2.44957538e+01, -7.61083070e+27],
[-2.03130747e+01, 2.59641715e+01, -2.06174328e+01, 4.54744859e+04],
[1.97684551e+01, -2.21410519e+01, -2.26728740e+01, 3.53113026e+06],
[2.73673444e+01, 2.64491725e+01, 1.57599882e+01, 1.07385118e+07],
[5.73287971e+00, 1.21111904e+01, 1.33080171e+01, 2.63220467e+03],
[-2.82751072e+01, 2.08605881e+01, 9.09838900e+00, -6.60957033e-07],
[1.87270691e+01, -1.74437016e+01, 1.52413599e+01, 6.59572851e+27],
[6.60681457e+00, -2.69449855e+00, 9.78972047e+00, -2.38587870e+12],
[1.20895561e+01, -2.51355765e+01, 2.30096101e+01, 7.58739886e+32],
[-2.44682278e+01, 2.10673441e+01, -1.36705538e+01, 4.54213550e+04],
[-4.50665152e+00, 3.72292059e+00, -4.83403707e+00, 2.68938214e+01],
[-7.46540049e+00, -1.08422222e+01, -1.72203805e+01, -2.09402162e+02],
[-2.00307551e+01, -7.50604431e+00, -2.78640020e+01, 4.15985444e+19],
[1.99890876e+01, 2.20677419e+01, -2.51301778e+01, 1.23840297e-09],
[2.03183823e+01, -7.66942559e+00, 2.10340070e+01, 1.46285095e+31],
[-2.90315825e+00, -2.55785967e+01, -9.58779316e+00, 2.65714264e-01],
[2.73960829e+01, -1.80097203e+01, -2.03070131e+00, 2.52908999e+02],
[-2.11708058e+01, -2.70304032e+01, 2.48257944e+01, 3.09027527e+08],
[2.21959758e+01, 4.00258675e+00, -1.62853977e+01, -9.16280090e-09],
[1.61661840e+01, -2.26845150e+01, 2.17226940e+01, -8.24774394e+33],
[-3.35030306e+00, 1.32670581e+00, 9.39711214e+00, -1.47303163e+01],
[7.23720726e+00, -2.29763909e+01, 2.34709682e+01, -9.20711735e+29],
[2.71013568e+01, 1.61951087e+01, -7.11388906e-01, 2.98750911e-01],
[8.40057933e+00, -7.49665220e+00, 2.95587388e+01, 6.59465635e+29],
[-1.51603423e+01, 1.94032322e+01, -7.60044357e+00, 1.05186941e+02],
[-8.83788031e+00, -2.72018313e+01, 1.88269907e+00, 1.81687019e+00],
[-1.87283712e+01, 5.87479570e+00, -1.91210203e+01, 2.52235612e+08],
[-5.61338513e-01, 2.69490237e+01, 1.16660111e-01, 9.97567783e-01],
[-5.44354025e+00, -1.26721408e+01, -4.66831036e+00, 1.06660735e-01],
[-2.18846497e+00, 2.33299566e+01, 9.62564397e+00, 3.03842061e-01],
[6.65661299e+00, -2.39048713e+01, 1.04191807e+01, 4.73700451e+13],
[-2.57298921e+01, -2.60811296e+01, 2.74398110e+01, -5.32566307e+11],
[-1.11431826e+01, -1.59420160e+01, -1.84880553e+01, -1.01514747e+02],
[6.50301931e+00, 2.59859051e+01, -2.33270137e+01, 1.22760500e-02],
[-1.94987891e+01, -2.62123262e+01, 3.90323225e+00, 1.71658894e+01],
[7.26164601e+00, -1.41469402e+01, 2.81499763e+01, -2.50068329e+31],
[-1.52424040e+01, 2.99719005e+01, -2.85753678e+01, 1.31906693e+04],
[5.24149291e+00, -1.72807223e+01, 2.22129493e+01, 2.50748475e+25],
[3.63207230e-01, -9.54120862e-02, -2.83874044e+01, 9.43854939e-01],
[-2.11326457e+00, -1.25707023e+01, 1.17172130e+00, 1.20812698e+00],
[2.48513582e+00, 1.03652647e+01, -1.84625148e+01, 6.47910997e-02],
[2.65395942e+01, 2.74794672e+01, 1.29413428e+01, 2.89306132e+05],
[-9.49445460e+00, 1.59930921e+01, -1.49596331e+01, 3.27574841e+02],
[-5.89173945e+00, 9.96742426e+00, 2.60318889e+01, -3.15842908e-01],
[-1.15387239e+01, -2.21433107e+01, -2.17686413e+01, 1.56724718e-01],
[-5.30592244e+00, -2.42752190e+01, 1.29734035e+00, 1.31985534e+00]])
for a,b,c,expected in ref_data:
result = special.hyp1f1(a,b,c)
assert_(abs(expected - result)/expected < 1e-4)
def test_hyp1f1_gh2957(self):
hyp1 = special.hyp1f1(0.5, 1.5, -709.7827128933)
hyp2 = special.hyp1f1(0.5, 1.5, -709.7827128934)
assert_almost_equal(hyp1, hyp2, 12)
def test_hyp1f2(self):
pass
def test_hyp2f0(self):
pass
def test_hyp2f1(self):
# a collection of special cases taken from AMS 55
values = [[0.5, 1, 1.5, 0.2**2, 0.5/0.2*log((1+0.2)/(1-0.2))],
[0.5, 1, 1.5, -0.2**2, 1./0.2*arctan(0.2)],
[1, 1, 2, 0.2, -1/0.2*log(1-0.2)],
[3, 3.5, 1.5, 0.2**2,
0.5/0.2/(-5)*((1+0.2)**(-5)-(1-0.2)**(-5))],
[-3, 3, 0.5, sin(0.2)**2, cos(2*3*0.2)],
[3, 4, 8, 1, special.gamma(8)*special.gamma(8-4-3)/special.gamma(8-3)/special.gamma(8-4)],
[3, 2, 3-2+1, -1, 1./2**3*sqrt(pi) *
special.gamma(1+3-2)/special.gamma(1+0.5*3-2)/special.gamma(0.5+0.5*3)],
[5, 2, 5-2+1, -1, 1./2**5*sqrt(pi) *
special.gamma(1+5-2)/special.gamma(1+0.5*5-2)/special.gamma(0.5+0.5*5)],
[4, 0.5+4, 1.5-2*4, -1./3, (8./9)**(-2*4)*special.gamma(4./3) *
special.gamma(1.5-2*4)/special.gamma(3./2)/special.gamma(4./3-2*4)],
# and some others
# ticket #424
[1.5, -0.5, 1.0, -10.0, 4.1300097765277476484],
# negative integer a or b, with c-a-b integer and x > 0.9
[-2,3,1,0.95,0.715],
[2,-3,1,0.95,-0.007],
[-6,3,1,0.95,0.0000810625],
[2,-5,1,0.95,-0.000029375],
# huge negative integers
(10, -900, 10.5, 0.99, 1.91853705796607664803709475658e-24),
(10, -900, -10.5, 0.99, 3.54279200040355710199058559155e-18),
]
for i, (a, b, c, x, v) in enumerate(values):
cv = special.hyp2f1(a, b, c, x)
assert_almost_equal(cv, v, 8, err_msg='test #%d' % i)
def test_hyp3f0(self):
pass
def test_hyperu(self):
val1 = special.hyperu(1,0.1,100)
assert_almost_equal(val1,0.0098153,7)
a,b = [0.3,0.6,1.2,-2.7],[1.5,3.2,-0.4,-3.2]
a,b = asarray(a), asarray(b)
z = 0.5
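# Compare against the standard connection formula (cf. A&S 13.1.3) expressing
# U(a, b, z) through two confluent hypergeometric 1F1 terms.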
hypu = special.hyperu(a,b,z)
hprl = (pi/sin(pi*b))*(special.hyp1f1(a,b,z) /
(special.gamma(1+a-b)*special.gamma(b)) -
z**(1-b)*special.hyp1f1(1+a-b,2-b,z)
/ (special.gamma(a)*special.gamma(2-b)))
assert_array_almost_equal(hypu,hprl,12)
def test_hyperu_gh2287(self):
assert_almost_equal(special.hyperu(1, 1.5, 20.2),
0.048360918656699191, 12)
class TestBessel(TestCase):
def test_itj0y0(self):
it0 = array(special.itj0y0(.2))
assert_array_almost_equal(it0,array([0.19933433254006822, -0.34570883800412566]),8)
def test_it2j0y0(self):
it2 = array(special.it2j0y0(.2))
assert_array_almost_equal(it2,array([0.0049937546274601858, -0.43423067011231614]),8)
def test_negv_iv(self):
assert_equal(special.iv(3,2), special.iv(-3,2))
def test_j0(self):
oz = special.j0(.1)
ozr = special.jn(0,.1)
assert_almost_equal(oz,ozr,8)
def test_j1(self):
o1 = special.j1(.1)
o1r = special.jn(1,.1)
assert_almost_equal(o1,o1r,8)
def test_jn(self):
jnnr = special.jn(1,.2)
assert_almost_equal(jnnr,0.099500832639235995,8)
def test_negv_jv(self):
assert_almost_equal(special.jv(-3,2), -special.jv(3,2), 14)
def test_jv(self):
values = [[0, 0.1, 0.99750156206604002],
[2./3, 1e-8, 0.3239028506761532e-5],
[2./3, 1e-10, 0.1503423854873779e-6],
[3.1, 1e-10, 0.1711956265409013e-32],
[2./3, 4.0, -0.2325440850267039],
]
for i, (v, x, y) in enumerate(values):
yc = special.jv(v, x)
assert_almost_equal(yc, y, 8, err_msg='test #%d' % i)
def test_negv_jve(self):
assert_almost_equal(special.jve(-3,2), -special.jve(3,2), 14)
def test_jve(self):
jvexp = special.jve(1,.2)
assert_almost_equal(jvexp,0.099500832639235995,8)
jvexp1 = special.jve(1,.2+1j)
z = .2+1j
jvexpr = special.jv(1,z)*exp(-abs(z.imag))
assert_almost_equal(jvexp1,jvexpr,8)
def test_jn_zeros(self):
jn0 = special.jn_zeros(0,5)
jn1 = special.jn_zeros(1,5)
assert_array_almost_equal(jn0,array([2.4048255577,
5.5200781103,
8.6537279129,
11.7915344391,
14.9309177086]),4)
assert_array_almost_equal(jn1,array([3.83171,
7.01559,
10.17347,
13.32369,
16.47063]),4)
jn102 = special.jn_zeros(102,5)
assert_tol_equal(jn102, array([110.89174935992040343,
117.83464175788308398,
123.70194191713507279,
129.02417238949092824,
134.00114761868422559]), rtol=1e-13)
jn301 = special.jn_zeros(301,5)
assert_tol_equal(jn301, array([313.59097866698830153,
323.21549776096288280,
331.22338738656748796,
338.39676338872084500,
345.03284233056064157]), rtol=1e-13)
def test_jn_zeros_slow(self):
jn0 = special.jn_zeros(0, 300)
assert_tol_equal(jn0[260-1], 816.02884495068867280, rtol=1e-13)
assert_tol_equal(jn0[280-1], 878.86068707124422606, rtol=1e-13)
assert_tol_equal(jn0[300-1], 941.69253065317954064, rtol=1e-13)
jn10 = special.jn_zeros(10, 300)
assert_tol_equal(jn10[260-1], 831.67668514305631151, rtol=1e-13)
assert_tol_equal(jn10[280-1], 894.51275095371316931, rtol=1e-13)
assert_tol_equal(jn10[300-1], 957.34826370866539775, rtol=1e-13)
jn3010 = special.jn_zeros(3010,5)
assert_tol_equal(jn3010, array([3036.86590780927,
3057.06598526482,
3073.66360690272,
3088.37736494778,
3101.86438139042]), rtol=1e-8)
def test_jnjnp_zeros(self):
jn = special.jn
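# Bessel derivative via the standard identity Jn'(x) = (J_{n-1}(x) - J_{n+1}(x)) / 2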
def jnp(n, x):
return (jn(n-1,x) - jn(n+1,x))/2
for nt in range(1, 30):
z, n, m, t = special.jnjnp_zeros(nt)
for zz, nn, tt in zip(z, n, t):
if tt == 0:
assert_allclose(jn(nn, zz), 0, atol=1e-6)
elif tt == 1:
assert_allclose(jnp(nn, zz), 0, atol=1e-6)
else:
raise AssertionError("Invalid t return for nt=%d" % nt)
def test_jnp_zeros(self):
jnp = special.jnp_zeros(1,5)
assert_array_almost_equal(jnp, array([1.84118,
5.33144,
8.53632,
11.70600,
14.86359]),4)
jnp = special.jnp_zeros(443,5)
assert_tol_equal(special.jvp(443, jnp), 0, atol=1e-15)
def test_jnyn_zeros(self):
jnz = special.jnyn_zeros(1,5)
assert_array_almost_equal(jnz,(array([3.83171,
7.01559,
10.17347,
13.32369,
16.47063]),
array([1.84118,
5.33144,
8.53632,
11.70600,
14.86359]),
array([2.19714,
5.42968,
8.59601,
11.74915,
14.89744]),
array([3.68302,
6.94150,
10.12340,
13.28576,
16.44006])),5)
def test_jvp(self):
jvprim = special.jvp(2,2)
jv0 = (special.jv(1,2)-special.jv(3,2))/2
assert_almost_equal(jvprim,jv0,10)
def test_k0(self):
ozk = special.k0(.1)
ozkr = special.kv(0,.1)
assert_almost_equal(ozk,ozkr,8)
def test_k0e(self):
ozke = special.k0e(.1)
ozker = special.kve(0,.1)
assert_almost_equal(ozke,ozker,8)
def test_k1(self):
o1k = special.k1(.1)
o1kr = special.kv(1,.1)
assert_almost_equal(o1k,o1kr,8)
def test_k1e(self):
o1ke = special.k1e(.1)
o1ker = special.kve(1,.1)
assert_almost_equal(o1ke,o1ker,8)
def test_jacobi(self):
a = 5*rand() - 1
b = 5*rand() - 1
P0 = special.jacobi(0,a,b)
P1 = special.jacobi(1,a,b)
P2 = special.jacobi(2,a,b)
P3 = special.jacobi(3,a,b)
assert_array_almost_equal(P0.c,[1],13)
assert_array_almost_equal(P1.c,array([a+b+2,a-b])/2.0,13)
cp = [(a+b+3)*(a+b+4), 4*(a+b+3)*(a+2), 4*(a+1)*(a+2)]
p2c = [cp[0],cp[1]-2*cp[0],cp[2]-cp[1]+cp[0]]
assert_array_almost_equal(P2.c,array(p2c)/8.0,13)
cp = [(a+b+4)*(a+b+5)*(a+b+6),6*(a+b+4)*(a+b+5)*(a+3),
12*(a+b+4)*(a+2)*(a+3),8*(a+1)*(a+2)*(a+3)]
p3c = [cp[0],cp[1]-3*cp[0],cp[2]-2*cp[1]+3*cp[0],cp[3]-cp[2]+cp[1]-cp[0]]
assert_array_almost_equal(P3.c,array(p3c)/48.0,13)
def test_kn(self):
kn1 = special.kn(0,.2)
assert_almost_equal(kn1,1.7527038555281462,8)
def test_negv_kv(self):
assert_equal(special.kv(3.0, 2.2), special.kv(-3.0, 2.2))
def test_kv0(self):
kv0 = special.kv(0,.2)
assert_almost_equal(kv0, 1.7527038555281462, 10)
def test_kv1(self):
kv1 = special.kv(1,0.2)
assert_almost_equal(kv1, 4.775972543220472, 10)
def test_kv2(self):
kv2 = special.kv(2,0.2)
assert_almost_equal(kv2, 49.51242928773287, 10)
def test_kn_largeorder(self):
assert_allclose(special.kn(32, 1), 1.7516596664574289e+43)
def test_kv_largearg(self):
assert_equal(special.kv(0, 1e19), 0)
def test_negv_kve(self):
assert_equal(special.kve(3.0, 2.2), special.kve(-3.0, 2.2))
def test_kve(self):
kve1 = special.kve(0,.2)
kv1 = special.kv(0,.2)*exp(.2)
assert_almost_equal(kve1,kv1,8)
z = .2+1j
kve2 = special.kve(0,z)
kv2 = special.kv(0,z)*exp(z)
assert_almost_equal(kve2,kv2,8)
def test_kvp_v0n1(self):
z = 2.2
assert_almost_equal(-special.kv(1,z), special.kvp(0,z, n=1), 10)
def test_kvp_n1(self):
v = 3.
z = 2.2
xc = -special.kv(v+1,z) + v/z*special.kv(v,z)
x = special.kvp(v,z, n=1)
assert_almost_equal(xc, x, 10) # this function (kvp) is broken
def test_kvp_n2(self):
v = 3.
z = 2.2
xc = (z**2+v**2-v)/z**2 * special.kv(v,z) + special.kv(v+1,z)/z
x = special.kvp(v, z, n=2)
assert_almost_equal(xc, x, 10)
def test_y0(self):
oz = special.y0(.1)
ozr = special.yn(0,.1)
assert_almost_equal(oz,ozr,8)
def test_y1(self):
o1 = special.y1(.1)
o1r = special.yn(1,.1)
assert_almost_equal(o1,o1r,8)
def test_y0_zeros(self):
yo,ypo = special.y0_zeros(2)
zo,zpo = special.y0_zeros(2,complex=1)
all = r_[yo,zo]
allval = r_[ypo,zpo]
assert_array_almost_equal(abs(special.yv(0.0,all)),0.0,11)
assert_array_almost_equal(abs(special.yv(1,all)-allval),0.0,11)
def test_y1_zeros(self):
y1 = special.y1_zeros(1)
assert_array_almost_equal(y1,(array([2.19714]),array([0.52079])),5)
def test_y1p_zeros(self):
y1p = special.y1p_zeros(1,complex=1)
assert_array_almost_equal(y1p,(array([0.5768+0.904j]), array([-0.7635+0.5892j])),3)
def test_yn_zeros(self):
an = special.yn_zeros(4,2)
assert_array_almost_equal(an,array([5.64515, 9.36162]),5)
an = special.yn_zeros(443,5)
assert_tol_equal(an, [450.13573091578090314, 463.05692376675001542,
472.80651546418663566, 481.27353184725625838,
488.98055964441374646], rtol=1e-15)
def test_ynp_zeros(self):
ao = special.ynp_zeros(0,2)
assert_array_almost_equal(ao,array([2.19714133, 5.42968104]),6)
ao = special.ynp_zeros(43,5)
assert_tol_equal(special.yvp(43, ao), 0, atol=1e-15)
ao = special.ynp_zeros(443,5)
assert_tol_equal(special.yvp(443, ao), 0, atol=1e-9)
def test_ynp_zeros_large_order(self):
ao = special.ynp_zeros(443,5)
assert_tol_equal(special.yvp(443, ao), 0, atol=1e-14)
def test_yn(self):
yn2n = special.yn(1,.2)
assert_almost_equal(yn2n,-3.3238249881118471,8)
def test_negv_yv(self):
assert_almost_equal(special.yv(-3,2), -special.yv(3,2), 14)
def test_yv(self):
yv2 = special.yv(1,.2)
assert_almost_equal(yv2,-3.3238249881118471,8)
def test_negv_yve(self):
assert_almost_equal(special.yve(-3,2), -special.yve(3,2), 14)
def test_yve(self):
yve2 = special.yve(1,.2)
assert_almost_equal(yve2,-3.3238249881118471,8)
yve2r = special.yv(1,.2+1j)*exp(-1)
yve22 = special.yve(1,.2+1j)
assert_almost_equal(yve22,yve2r,8)
def test_yvp(self):
yvpr = (special.yv(1,.2) - special.yv(3,.2))/2.0
yvp1 = special.yvp(2,.2)
assert_array_almost_equal(yvp1,yvpr,10)
def _cephes_vs_amos_points(self):
"""Yield points at which to compare Cephes implementation to AMOS"""
# check several points, including large-amplitude ones
for v in [-120, -100.3, -20., -10., -1., -.5,
0., 1., 12.49, 120., 301]:
for z in [-1300, -11, -10, -1, 1., 10., 200.5, 401., 600.5,
700.6, 1300, 10003]:
yield v, z
# check half-integers; these are problematic points at least
# for cephes/iv
for v in 0.5 + arange(-60, 60):
yield v, 3.5
def check_cephes_vs_amos(self, f1, f2, rtol=1e-11, atol=0, skip=None):
for v, z in self._cephes_vs_amos_points():
if skip is not None and skip(v, z):
continue
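# c1: real-argument (Cephes) value, c2: complex-argument (AMOS) value, c3: integer-order routine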
c1, c2, c3 = f1(v, z), f1(v,z+0j), f2(int(v), z)
if np.isinf(c1):
assert_(np.abs(c2) >= 1e300, (v, z))
elif np.isnan(c1):
assert_(c2.imag != 0, (v, z))
else:
assert_tol_equal(c1, c2, err_msg=(v, z), rtol=rtol, atol=atol)
if v == int(v):
assert_tol_equal(c3, c2, err_msg=(v, z),
rtol=rtol, atol=atol)
def test_jv_cephes_vs_amos(self):
self.check_cephes_vs_amos(special.jv, special.jn, rtol=1e-10, atol=1e-305)
def test_yv_cephes_vs_amos(self):
self.check_cephes_vs_amos(special.yv, special.yn, rtol=1e-11, atol=1e-305)
def test_yv_cephes_vs_amos_only_small_orders(self):
skipper = lambda v, z: (abs(v) > 50)
self.check_cephes_vs_amos(special.yv, special.yn, rtol=1e-11, atol=1e-305, skip=skipper)
def test_iv_cephes_vs_amos(self):
olderr = np.seterr(all='ignore')
try:
self.check_cephes_vs_amos(special.iv, special.iv, rtol=5e-9, atol=1e-305)
finally:
np.seterr(**olderr)
@dec.slow
def test_iv_cephes_vs_amos_mass_test(self):
N = 1000000
np.random.seed(1)
v = np.random.pareto(0.5, N) * (-1)**np.random.randint(2, size=N)
x = np.random.pareto(0.2, N) * (-1)**np.random.randint(2, size=N)
imsk = (np.random.randint(8, size=N) == 0)
v[imsk] = v[imsk].astype(int)
old_err = np.seterr(all='ignore')
try:
c1 = special.iv(v, x)
c2 = special.iv(v, x+0j)
# deal with differences in the inf and zero cutoffs
c1[abs(c1) > 1e300] = np.inf
c2[abs(c2) > 1e300] = np.inf
c1[abs(c1) < 1e-300] = 0
c2[abs(c2) < 1e-300] = 0
dc = abs(c1/c2 - 1)
dc[np.isnan(dc)] = 0
finally:
np.seterr(**old_err)
k = np.argmax(dc)
# Most error apparently comes from AMOS and not our implementation;
# there are some problems near integer orders there
assert_(dc[k] < 2e-7, (v[k], x[k], special.iv(v[k], x[k]), special.iv(v[k], x[k]+0j)))
def test_kv_cephes_vs_amos(self):
self.check_cephes_vs_amos(special.kv, special.kn, rtol=1e-9, atol=1e-305)
self.check_cephes_vs_amos(special.kv, special.kv, rtol=1e-9, atol=1e-305)
def test_ticket_623(self):
assert_tol_equal(special.jv(3, 4), 0.43017147387562193)
assert_tol_equal(special.jv(301, 1300), 0.0183487151115275)
assert_tol_equal(special.jv(301, 1296.0682), -0.0224174325312048)
def test_ticket_853(self):
"""Negative-order Bessels"""
# cephes
assert_tol_equal(special.jv(-1, 1), -0.4400505857449335)
assert_tol_equal(special.jv(-2, 1), 0.1149034849319005)
assert_tol_equal(special.yv(-1, 1), 0.7812128213002887)
assert_tol_equal(special.yv(-2, 1), -1.650682606816255)
assert_tol_equal(special.iv(-1, 1), 0.5651591039924851)
assert_tol_equal(special.iv(-2, 1), 0.1357476697670383)
assert_tol_equal(special.kv(-1, 1), 0.6019072301972347)
assert_tol_equal(special.kv(-2, 1), 1.624838898635178)
assert_tol_equal(special.jv(-0.5, 1), 0.43109886801837607952)
assert_tol_equal(special.yv(-0.5, 1), 0.6713967071418031)
assert_tol_equal(special.iv(-0.5, 1), 1.231200214592967)
assert_tol_equal(special.kv(-0.5, 1), 0.4610685044478945)
# amos
assert_tol_equal(special.jv(-1, 1+0j), -0.4400505857449335)
assert_tol_equal(special.jv(-2, 1+0j), 0.1149034849319005)
assert_tol_equal(special.yv(-1, 1+0j), 0.7812128213002887)
assert_tol_equal(special.yv(-2, 1+0j), -1.650682606816255)
assert_tol_equal(special.iv(-1, 1+0j), 0.5651591039924851)
assert_tol_equal(special.iv(-2, 1+0j), 0.1357476697670383)
assert_tol_equal(special.kv(-1, 1+0j), 0.6019072301972347)
assert_tol_equal(special.kv(-2, 1+0j), 1.624838898635178)
assert_tol_equal(special.jv(-0.5, 1+0j), 0.43109886801837607952)
assert_tol_equal(special.jv(-0.5, 1+1j), 0.2628946385649065-0.827050182040562j)
assert_tol_equal(special.yv(-0.5, 1+0j), 0.6713967071418031)
assert_tol_equal(special.yv(-0.5, 1+1j), 0.967901282890131+0.0602046062142816j)
assert_tol_equal(special.iv(-0.5, 1+0j), 1.231200214592967)
assert_tol_equal(special.iv(-0.5, 1+1j), 0.77070737376928+0.39891821043561j)
assert_tol_equal(special.kv(-0.5, 1+0j), 0.4610685044478945)
assert_tol_equal(special.kv(-0.5, 1+1j), 0.06868578341999-0.38157825981268j)
assert_tol_equal(special.jve(-0.5,1+0.3j), special.jv(-0.5, 1+0.3j)*exp(-0.3))
assert_tol_equal(special.yve(-0.5,1+0.3j), special.yv(-0.5, 1+0.3j)*exp(-0.3))
assert_tol_equal(special.ive(-0.5,0.3+1j), special.iv(-0.5, 0.3+1j)*exp(-0.3))
assert_tol_equal(special.kve(-0.5,0.3+1j), special.kv(-0.5, 0.3+1j)*exp(0.3+1j))
assert_tol_equal(special.hankel1(-0.5, 1+1j), special.jv(-0.5, 1+1j) + 1j*special.yv(-0.5,1+1j))
assert_tol_equal(special.hankel2(-0.5, 1+1j), special.jv(-0.5, 1+1j) - 1j*special.yv(-0.5,1+1j))
def test_ticket_854(self):
"""Real-valued Bessel domains"""
assert_(isnan(special.jv(0.5, -1)))
assert_(isnan(special.iv(0.5, -1)))
assert_(isnan(special.yv(0.5, -1)))
assert_(isnan(special.yv(1, -1)))
assert_(isnan(special.kv(0.5, -1)))
assert_(isnan(special.kv(1, -1)))
assert_(isnan(special.jve(0.5, -1)))
assert_(isnan(special.ive(0.5, -1)))
assert_(isnan(special.yve(0.5, -1)))
assert_(isnan(special.yve(1, -1)))
assert_(isnan(special.kve(0.5, -1)))
assert_(isnan(special.kve(1, -1)))
assert_(isnan(special.airye(-1)[0:2]).all(), special.airye(-1))
assert_(not isnan(special.airye(-1)[2:4]).any(), special.airye(-1))
def test_ticket_503(self):
"""Real-valued Bessel I overflow"""
assert_tol_equal(special.iv(1, 700), 1.528500390233901e302)
assert_tol_equal(special.iv(1000, 1120), 1.301564549405821e301)
def test_iv_hyperg_poles(self):
assert_tol_equal(special.iv(-0.5, 1), 1.231200214592967)
def iv_series(self, v, z, n=200):
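# evaluate the power series for I_v(z) term-by-term in log space to avoid overflow;
# err combines a roundoff estimate with the size of the last (truncation) term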
k = arange(0, n).astype(float_)
r = (v+2*k)*log(.5*z) - special.gammaln(k+1) - special.gammaln(v+k+1)
r[isnan(r)] = inf
r = exp(r)
err = abs(r).max() * finfo(float_).eps * n + abs(r[-1])*10
return r.sum(), err
def test_i0_series(self):
for z in [1., 10., 200.5]:
value, err = self.iv_series(0, z)
assert_tol_equal(special.i0(z), value, atol=err, err_msg=z)
def test_i1_series(self):
for z in [1., 10., 200.5]:
value, err = self.iv_series(1, z)
assert_tol_equal(special.i1(z), value, atol=err, err_msg=z)
def test_iv_series(self):
for v in [-20., -10., -1., 0., 1., 12.49, 120.]:
for z in [1., 10., 200.5, -1+2j]:
value, err = self.iv_series(v, z)
assert_tol_equal(special.iv(v, z), value, atol=err, err_msg=(v, z))
def test_i0(self):
values = [[0.0, 1.0],
[1e-10, 1.0],
[0.1, 0.9071009258],
[0.5, 0.6450352706],
[1.0, 0.4657596077],
[2.5, 0.2700464416],
[5.0, 0.1835408126],
[20.0, 0.0897803119],
]
for i, (x, v) in enumerate(values):
cv = special.i0(x) * exp(-x)
assert_almost_equal(cv, v, 8, err_msg='test #%d' % i)
def test_i0e(self):
oize = special.i0e(.1)
oizer = special.ive(0,.1)
assert_almost_equal(oize,oizer,8)
def test_i1(self):
values = [[0.0, 0.0],
[1e-10, 0.4999999999500000e-10],
[0.1, 0.0452984468],
[0.5, 0.1564208032],
[1.0, 0.2079104154],
[5.0, 0.1639722669],
[20.0, 0.0875062222],
]
for i, (x, v) in enumerate(values):
cv = special.i1(x) * exp(-x)
assert_almost_equal(cv, v, 8, err_msg='test #%d' % i)
def test_i1e(self):
oi1e = special.i1e(.1)
oi1er = special.ive(1,.1)
assert_almost_equal(oi1e,oi1er,8)
def test_iti0k0(self):
iti0 = array(special.iti0k0(5))
assert_array_almost_equal(iti0,array([31.848667776169801, 1.5673873907283657]),5)
def test_it2i0k0(self):
it2k = special.it2i0k0(.1)
assert_array_almost_equal(it2k,array([0.0012503906973464409, 3.3309450354686687]),6)
def test_iv(self):
iv1 = special.iv(0,.1)*exp(-.1)
assert_almost_equal(iv1,0.90710092578230106,10)
def test_negv_ive(self):
assert_equal(special.ive(3,2), special.ive(-3,2))
def test_ive(self):
ive1 = special.ive(0,.1)
iv1 = special.iv(0,.1)*exp(-.1)
assert_almost_equal(ive1,iv1,10)
def test_ivp0(self):
assert_almost_equal(special.iv(1,2), special.ivp(0,2), 10)
def test_ivp(self):
y = (special.iv(0,2) + special.iv(2,2))/2
x = special.ivp(1,2)
assert_almost_equal(x,y,10)
class TestLaguerre(TestCase):
def test_laguerre(self):
lag0 = special.laguerre(0)
lag1 = special.laguerre(1)
lag2 = special.laguerre(2)
lag3 = special.laguerre(3)
lag4 = special.laguerre(4)
lag5 = special.laguerre(5)
assert_array_almost_equal(lag0.c,[1],13)
assert_array_almost_equal(lag1.c,[-1,1],13)
assert_array_almost_equal(lag2.c,array([1,-4,2])/2.0,13)
assert_array_almost_equal(lag3.c,array([-1,9,-18,6])/6.0,13)
assert_array_almost_equal(lag4.c,array([1,-16,72,-96,24])/24.0,13)
assert_array_almost_equal(lag5.c,array([-1,25,-200,600,-600,120])/120.0,13)
def test_genlaguerre(self):
k = 5*rand()-0.9
lag0 = special.genlaguerre(0,k)
lag1 = special.genlaguerre(1,k)
lag2 = special.genlaguerre(2,k)
lag3 = special.genlaguerre(3,k)
assert_equal(lag0.c,[1])
assert_equal(lag1.c,[-1,k+1])
assert_almost_equal(lag2.c,array([1,-2*(k+2),(k+1.)*(k+2.)])/2.0)
assert_almost_equal(lag3.c,array([-1,3*(k+3),-3*(k+2)*(k+3),(k+1)*(k+2)*(k+3)])/6.0)
# Base polynomials come from Abramowitz and Stegun
class TestLegendre(TestCase):
def test_legendre(self):
leg0 = special.legendre(0)
leg1 = special.legendre(1)
leg2 = special.legendre(2)
leg3 = special.legendre(3)
leg4 = special.legendre(4)
leg5 = special.legendre(5)
assert_equal(leg0.c, [1])
assert_equal(leg1.c, [1,0])
assert_almost_equal(leg2.c, array([3,0,-1])/2.0, decimal=13)
assert_almost_equal(leg3.c, array([5,0,-3,0])/2.0)
assert_almost_equal(leg4.c, array([35,0,-30,0,3])/8.0)
assert_almost_equal(leg5.c, array([63,0,-70,0,15,0])/8.0)
class TestLambda(TestCase):
def test_lmbda(self):
lam = special.lmbda(1,.1)
lamr = (array([special.jn(0,.1), 2*special.jn(1,.1)/.1]),
array([special.jvp(0,.1), -2*special.jv(1,.1)/.01 + 2*special.jvp(1,.1)/.1]))
assert_array_almost_equal(lam,lamr,8)
class TestLog1p(TestCase):
def test_log1p(self):
l1p = (special.log1p(10), special.log1p(11), special.log1p(12))
l1prl = (log(11), log(12), log(13))
assert_array_almost_equal(l1p,l1prl,8)
def test_log1pmore(self):
l1pm = (special.log1p(1), special.log1p(1.1), special.log1p(1.2))
l1pmrl = (log(2),log(2.1),log(2.2))
assert_array_almost_equal(l1pm,l1pmrl,8)
class TestLegendreFunctions(TestCase):
def test_clpmn(self):
z = 0.5+0.3j
clp = special.clpmn(2, 2, z, 3)
assert_array_almost_equal(clp,
(array([[1.0000, z, 0.5*(3*z*z-1)],
[0.0000, sqrt(z*z-1), 3*z*sqrt(z*z-1)],
[0.0000, 0.0000, 3*(z*z-1)]]),
array([[0.0000, 1.0000, 3*z],
[0.0000, z/sqrt(z*z-1), 3*(2*z*z-1)/sqrt(z*z-1)],
[0.0000, 0.0000, 6*z]])),
7)
def test_clpmn_close_to_real_2(self):
eps = 1e-10
m = 1
n = 3
x = 0.5
clp_plus = special.clpmn(m, n, x+1j*eps, 2)[0][m, n]
clp_minus = special.clpmn(m, n, x-1j*eps, 2)[0][m, n]
assert_array_almost_equal(array([clp_plus, clp_minus]),
array([special.lpmv(m, n, x),
special.lpmv(m, n, x)]),
7)
def test_clpmn_close_to_real_3(self):
eps = 1e-10
m = 1
n = 3
x = 0.5
clp_plus = special.clpmn(m, n, x+1j*eps, 3)[0][m, n]
clp_minus = special.clpmn(m, n, x-1j*eps, 3)[0][m, n]
assert_array_almost_equal(array([clp_plus, clp_minus]),
array([special.lpmv(m, n, x)*np.exp(-0.5j*m*np.pi),
special.lpmv(m, n, x)*np.exp(0.5j*m*np.pi)]),
7)
def test_clpmn_across_unit_circle(self):
eps = 1e-7
m = 1
n = 1
x = 1j
for type in [2, 3]:
assert_almost_equal(special.clpmn(m, n, x+1j*eps, type)[0][m, n],
special.clpmn(m, n, x-1j*eps, type)[0][m, n], 6)
def test_inf(self):
for z in (1, -1):
for n in range(4):
for m in range(1, n):
lp = special.clpmn(m, n, z)
assert_(np.isinf(lp[1][1,1:]).all())
lp = special.lpmn(m, n, z)
assert_(np.isinf(lp[1][1,1:]).all())
def test_deriv_clpmn(self):
# data inside and outside of the unit circle
zvals = [0.5+0.5j, -0.5+0.5j, -0.5-0.5j, 0.5-0.5j,
1+1j, -1+1j, -1-1j, 1-1j]
m = 2
n = 3
for type in [2, 3]:
for z in zvals:
for h in [1e-3, 1e-3j]:
approx_derivative = (special.clpmn(m, n, z+0.5*h, type)[0]
- special.clpmn(m, n, z-0.5*h, type)[0])/h
assert_allclose(special.clpmn(m, n, z, type)[1],
approx_derivative,
rtol=1e-4)
def test_lpmn(self):
lp = special.lpmn(0,2,.5)
assert_array_almost_equal(lp,(array([[1.00000,
0.50000,
-0.12500]]),
array([[0.00000,
1.00000,
1.50000]])),4)
def test_lpn(self):
lpnf = special.lpn(2,.5)
assert_array_almost_equal(lpnf,(array([1.00000,
0.50000,
-0.12500]),
array([0.00000,
1.00000,
1.50000])),4)
def test_lpmv(self):
lp = special.lpmv(0,2,.5)
assert_almost_equal(lp,-0.125,7)
lp = special.lpmv(0,40,.001)
assert_almost_equal(lp,0.1252678976534484,7)
# XXX: this is outside the domain of the current implementation,
# so ensure it returns a NaN rather than a wrong answer.
olderr = np.seterr(all='ignore')
try:
lp = special.lpmv(-1,-1,.001)
finally:
np.seterr(**olderr)
assert_(lp != 0 or np.isnan(lp))
def test_lqmn(self):
lqmnf = special.lqmn(0,2,.5)
lqf = special.lqn(2,.5)
assert_array_almost_equal(lqmnf[0][0],lqf[0],4)
assert_array_almost_equal(lqmnf[1][0],lqf[1],4)
def test_lqmn_gt1(self):
"""algorithm for real arguments changes at 1.0001
test against analytical result for m=2, n=1
"""
x0 = 1.0001
delta = 0.00002
for x in (x0-delta, x0+delta):
lq = special.lqmn(2, 1, x)[0][-1, -1]
expected = 2/(x*x-1)
assert_almost_equal(lq, expected)
def test_lqmn_shape(self):
a, b = special.lqmn(4, 4, 1.1)
assert_equal(a.shape, (5, 5))
assert_equal(b.shape, (5, 5))
a, b = special.lqmn(4, 0, 1.1)
assert_equal(a.shape, (5, 1))
assert_equal(b.shape, (5, 1))
def test_lqn(self):
lqf = special.lqn(2,.5)
assert_array_almost_equal(lqf,(array([0.5493, -0.7253, -0.8187]),
array([1.3333, 1.216, -0.8427])),4)
class TestMathieu(TestCase):
def test_mathieu_a(self):
pass
def test_mathieu_even_coef(self):
mc = special.mathieu_even_coef(2,5)
# Q is not defined (broken), and the proper reporting order cannot be determined, so no assertion is made
def test_mathieu_odd_coef(self):
# same problem as above
pass
class TestFresnelIntegral(TestCase):
def test_modfresnelp(self):
pass
def test_modfresnelm(self):
pass
class TestOblCvSeq(TestCase):
def test_obl_cv_seq(self):
obl = special.obl_cv_seq(0,3,1)
assert_array_almost_equal(obl,array([-0.348602,
1.393206,
5.486800,
11.492120]),5)
class TestParabolicCylinder(TestCase):
def test_pbdn_seq(self):
pb = special.pbdn_seq(1,.1)
assert_array_almost_equal(pb,(array([0.9975,
0.0998]),
array([-0.0499,
0.9925])),4)
def test_pbdv(self):
pbv = special.pbdv(1,.2)
derrl = 1/2*(.2)*special.pbdv(1,.2)[0] - special.pbdv(0,.2)[0]
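# note: no assertion here; this test only exercises evaluation of pbdv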
def test_pbdv_seq(self):
pbn = special.pbdn_seq(1,.1)
pbv = special.pbdv_seq(1,.1)
assert_array_almost_equal(pbv,(real(pbn[0]),real(pbn[1])),4)
def test_pbdv_points(self):
# simple case
eta = np.linspace(-10, 10, 5)
z = 2**(eta/2)*np.sqrt(np.pi)/special.gamma(.5-.5*eta)
assert_tol_equal(special.pbdv(eta, 0.)[0], z, rtol=1e-14, atol=1e-14)
# some points
assert_tol_equal(special.pbdv(10.34, 20.44)[0], 1.3731383034455e-32, rtol=1e-12)
assert_tol_equal(special.pbdv(-9.53, 3.44)[0], 3.166735001119246e-8, rtol=1e-12)
def test_pbdv_gradient(self):
x = np.linspace(-4, 4, 8)[:,None]
eta = np.linspace(-10, 10, 5)[None,:]
p = special.pbdv(eta, x)
eps = 1e-7 + 1e-7*abs(x)
dp = (special.pbdv(eta, x + eps)[0] - special.pbdv(eta, x - eps)[0]) / eps / 2.
assert_tol_equal(p[1], dp, rtol=1e-6, atol=1e-6)
def test_pbvv_gradient(self):
x = np.linspace(-4, 4, 8)[:,None]
eta = np.linspace(-10, 10, 5)[None,:]
p = special.pbvv(eta, x)
eps = 1e-7 + 1e-7*abs(x)
dp = (special.pbvv(eta, x + eps)[0] - special.pbvv(eta, x - eps)[0]) / eps / 2.
assert_tol_equal(p[1], dp, rtol=1e-6, atol=1e-6)
class TestPolygamma(TestCase):
# from Table 6.2 (pg. 271) of A&S
def test_polygamma(self):
poly2 = special.polygamma(2,1)
poly3 = special.polygamma(3,1)
assert_almost_equal(poly2,-2.4041138063,10)
assert_almost_equal(poly3,6.4939394023,10)
# Test polygamma(0, x) == psi(x)
x = [2, 3, 1.1e14]
assert_almost_equal(special.polygamma(0, x), special.psi(x))
# Test broadcasting
n = [0, 1, 2]
x = [0.5, 1.5, 2.5]
expected = [-1.9635100260214238, 0.93480220054467933,
-0.23620405164172739]
assert_almost_equal(special.polygamma(n, x), expected)
expected = np.row_stack([expected]*2)
assert_almost_equal(special.polygamma(n, np.row_stack([x]*2)),
expected)
assert_almost_equal(special.polygamma(np.row_stack([n]*2), x),
expected)
class TestProCvSeq(TestCase):
def test_pro_cv_seq(self):
prol = special.pro_cv_seq(0,3,1)
assert_array_almost_equal(prol,array([0.319000,
2.593084,
6.533471,
12.514462]),5)
class TestPsi(TestCase):
def test_psi(self):
ps = special.psi(1)
assert_almost_equal(ps,-0.57721566490153287,8)
class TestRadian(TestCase):
def test_radian(self):
rad = special.radian(90,0,0)
assert_almost_equal(rad,pi/2.0,5)
def test_radianmore(self):
rad1 = special.radian(90,1,60)
assert_almost_equal(rad1,pi/2+0.0005816135199345904,5)
class TestRiccati(TestCase):
def test_riccati_jn(self):
jnrl = (special.sph_jn(1,.2)[0]*.2,special.sph_jn(1,.2)[0]+special.sph_jn(1,.2)[1]*.2)
ricjn = special.riccati_jn(1,.2)
assert_array_almost_equal(ricjn,jnrl,8)
def test_riccati_yn(self):
ynrl = (special.sph_yn(1,.2)[0]*.2,special.sph_yn(1,.2)[0]+special.sph_yn(1,.2)[1]*.2)
ricyn = special.riccati_yn(1,.2)
assert_array_almost_equal(ricyn,ynrl,8)
class TestRound(TestCase):
def test_round(self):
rnd = list(map(int,(special.round(10.1),special.round(10.4),special.round(10.5),special.round(10.6))))
# Note: According to the documentation, scipy.special.round is
# supposed to round to the nearest even number if the fractional
# part is exactly 0.5. On some platforms, this does not appear
# to work and thus this test may fail. However, this unit test is
# correctly written.
rndrl = (10,10,10,11)
assert_array_equal(rnd,rndrl)
def test_sph_harm():
# Tests derived from tables in
# http://en.wikipedia.org/wiki/Table_of_spherical_harmonics
sh = special.sph_harm
pi = np.pi
exp = np.exp
sqrt = np.sqrt
sin = np.sin
cos = np.cos
yield (assert_array_almost_equal, sh(0,0,0,0),
0.5/sqrt(pi))
yield (assert_array_almost_equal, sh(-2,2,0.,pi/4),
0.25*sqrt(15./(2.*pi)) *
(sin(pi/4))**2.)
yield (assert_array_almost_equal, sh(-2,2,0.,pi/2),
0.25*sqrt(15./(2.*pi)))
yield (assert_array_almost_equal, sh(2,2,pi,pi/2),
0.25*sqrt(15/(2.*pi)) *
exp(0+2.*pi*1j)*sin(pi/2.)**2.)
yield (assert_array_almost_equal, sh(2,4,pi/4.,pi/3.),
(3./8.)*sqrt(5./(2.*pi)) *
exp(0+2.*pi/4.*1j) *
sin(pi/3.)**2. *
(7.*cos(pi/3.)**2.-1))
yield (assert_array_almost_equal, sh(4,4,pi/8.,pi/6.),
(3./16.)*sqrt(35./(2.*pi)) *
exp(0+4.*pi/8.*1j)*sin(pi/6.)**4.)
class TestSpherical(TestCase):
def test_sph_harm(self):
# see test_sph_harm function
pass
def test_sph_in(self):
i1n = special.sph_in(1,.2)
inp0 = (i1n[0][1])
inp1 = (i1n[0][0] - 2.0/0.2 * i1n[0][1])
assert_array_almost_equal(i1n[0],array([1.0066800127054699381,
0.066933714568029540839]),12)
assert_array_almost_equal(i1n[1],[inp0,inp1],12)
def test_sph_inkn(self):
spikn = r_[special.sph_in(1,.2) + special.sph_kn(1,.2)]
inkn = r_[special.sph_inkn(1,.2)]
assert_array_almost_equal(inkn,spikn,10)
def test_sph_in_kn_order0(self):
x = 1.
sph_i0 = special.sph_in(0, x)
sph_i0_expected = np.array([np.sinh(x)/x,
np.cosh(x)/x-np.sinh(x)/x**2])
assert_array_almost_equal(r_[sph_i0], sph_i0_expected)
sph_k0 = special.sph_kn(0, x)
sph_k0_expected = np.array([0.5*pi*exp(-x)/x,
-0.5*pi*exp(-x)*(1/x+1/x**2)])
assert_array_almost_equal(r_[sph_k0], sph_k0_expected)
sph_i0k0 = special.sph_inkn(0, x)
assert_array_almost_equal(r_[sph_i0+sph_k0],
r_[sph_i0k0],
10)
def test_sph_jn(self):
s1 = special.sph_jn(2,.2)
s10 = -s1[0][1]
s11 = s1[0][0]-2.0/0.2*s1[0][1]
s12 = s1[0][1]-3.0/0.2*s1[0][2]
assert_array_almost_equal(s1[0],[0.99334665397530607731,
0.066400380670322230863,
0.0026590560795273856680],12)
assert_array_almost_equal(s1[1],[s10,s11,s12],12)
def test_sph_jnyn(self):
jnyn = r_[special.sph_jn(1,.2) + special.sph_yn(1,.2)] # tuple addition
jnyn1 = r_[special.sph_jnyn(1,.2)]
assert_array_almost_equal(jnyn1,jnyn,9)
def test_sph_kn(self):
kn = special.sph_kn(2,.2)
kn0 = -kn[0][1]
kn1 = -kn[0][0]-2.0/0.2*kn[0][1]
kn2 = -kn[0][1]-3.0/0.2*kn[0][2]
assert_array_almost_equal(kn[0],[6.4302962978445670140,
38.581777787067402086,
585.15696310385559829],12)
assert_array_almost_equal(kn[1],[kn0,kn1,kn2],9)
def test_sph_yn(self):
sy1 = special.sph_yn(2,.2)[0][2]
sy2 = special.sph_yn(0,.2)[0][0]
sphpy = (special.sph_yn(1,.2)[0][0]-2*special.sph_yn(2,.2)[0][2])/3 # correct derivative value
assert_almost_equal(sy1,-377.52483,5) # previous values in the system
assert_almost_equal(sy2,-4.9003329,5)
sy3 = special.sph_yn(1,.2)[1][1]
assert_almost_equal(sy3,sphpy,4) # compare against the correct derivative value computed above
class TestStruve(object):
def _series(self, v, z, n=100):
"""Compute Struve function & error estimate from its power series."""
k = arange(0, n)
r = (-1)**k * (.5*z)**(2*k+v+1)/special.gamma(k+1.5)/special.gamma(k+v+1.5)
err = abs(r).max() * finfo(float_).eps * n
return r.sum(), err
def test_vs_series(self):
"""Check Struve function versus its power series"""
for v in [-20, -10, -7.99, -3.4, -1, 0, 1, 3.4, 12.49, 16]:
for z in [1, 10, 19, 21, 30]:
value, err = self._series(v, z)
assert_tol_equal(special.struve(v, z), value, rtol=0, atol=err), (v, z)
def test_some_values(self):
assert_tol_equal(special.struve(-7.99, 21), 0.0467547614113, rtol=1e-7)
assert_tol_equal(special.struve(-8.01, 21), 0.0398716951023, rtol=1e-8)
assert_tol_equal(special.struve(-3.0, 200), 0.0142134427432, rtol=1e-12)
assert_tol_equal(special.struve(-8.0, -41), 0.0192469727846, rtol=1e-11)
assert_equal(special.struve(-12, -41), -special.struve(-12, 41))
assert_equal(special.struve(+12, -41), -special.struve(+12, 41))
assert_equal(special.struve(-11, -41), +special.struve(-11, 41))
assert_equal(special.struve(+11, -41), +special.struve(+11, 41))
assert_(isnan(special.struve(-7.1, -1)))
assert_(isnan(special.struve(-10.1, -1)))
def test_regression_679(self):
"""Regression test for #679"""
assert_tol_equal(special.struve(-1.0, 20 - 1e-8), special.struve(-1.0, 20 + 1e-8))
assert_tol_equal(special.struve(-2.0, 20 - 1e-8), special.struve(-2.0, 20 + 1e-8))
assert_tol_equal(special.struve(-4.3, 20 - 1e-8), special.struve(-4.3, 20 + 1e-8))
def test_chi2_smalldf():
assert_almost_equal(special.chdtr(0.6,3), 0.957890536704110)
def test_chi2c_smalldf():
assert_almost_equal(special.chdtrc(0.6,3), 1-0.957890536704110)
def test_chi2_inv_smalldf():
assert_almost_equal(special.chdtri(0.6,1-0.957890536704110), 3)
def test_agm_simple():
assert_allclose(special.agm(24, 6), 13.4581714817)
assert_allclose(special.agm(1e30, 1), 2.2292230559453832047768593e28)
def test_legacy():
with warnings.catch_warnings():
warnings.simplefilter("ignore", RuntimeWarning)
# Legacy behavior: truncating arguments to integers
assert_equal(special.bdtrc(1, 2, 0.3), special.bdtrc(1.8, 2.8, 0.3))
assert_equal(special.bdtr(1, 2, 0.3), special.bdtr(1.8, 2.8, 0.3))
assert_equal(special.bdtri(1, 2, 0.3), special.bdtri(1.8, 2.8, 0.3))
assert_equal(special.expn(1, 0.3), special.expn(1.8, 0.3))
assert_equal(special.hyp2f0(1, 2, 0.3, 1), special.hyp2f0(1, 2, 0.3, 1.8))
assert_equal(special.nbdtrc(1, 2, 0.3), special.nbdtrc(1.8, 2.8, 0.3))
assert_equal(special.nbdtr(1, 2, 0.3), special.nbdtr(1.8, 2.8, 0.3))
assert_equal(special.nbdtri(1, 2, 0.3), special.nbdtri(1.8, 2.8, 0.3))
assert_equal(special.pdtrc(1, 0.3), special.pdtrc(1.8, 0.3))
assert_equal(special.pdtr(1, 0.3), special.pdtr(1.8, 0.3))
assert_equal(special.pdtri(1, 0.3), special.pdtri(1.8, 0.3))
assert_equal(special.kn(1, 0.3), special.kn(1.8, 0.3))
assert_equal(special.yn(1, 0.3), special.yn(1.8, 0.3))
assert_equal(special.smirnov(1, 0.3), special.smirnov(1.8, 0.3))
assert_equal(special.smirnovi(1, 0.3), special.smirnovi(1.8, 0.3))
@with_special_errors
def test_error_raising():
assert_raises(special.SpecialFunctionWarning, special.iv, 1, 1e99j)
def test_xlogy():
def xfunc(x, y):
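# reference implementation of xlogy: x*log(y), with the convention xlogy(0, y) = 0 for non-NaN y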
if x == 0 and not np.isnan(y):
return x
else:
return x*np.log(y)
z1 = np.asarray([(0,0), (0, np.nan), (0, np.inf), (1.0, 2.0)], dtype=float)
z2 = np.r_[z1, [(0, 1j), (1, 1j)]]
w1 = np.vectorize(xfunc)(z1[:,0], z1[:,1])
assert_func_equal(special.xlogy, w1, z1, rtol=1e-13, atol=1e-13)
w2 = np.vectorize(xfunc)(z2[:,0], z2[:,1])
assert_func_equal(special.xlogy, w2, z2, rtol=1e-13, atol=1e-13)
def test_xlog1py():
def xfunc(x, y):
if x == 0 and not np.isnan(y):
return x
else:
return x * np.log1p(y)
z1 = np.asarray([(0,0), (0, np.nan), (0, np.inf), (1.0, 2.0),
(1, 1e-30)], dtype=float)
w1 = np.vectorize(xfunc)(z1[:,0], z1[:,1])
assert_func_equal(special.xlog1py, w1, z1, rtol=1e-13, atol=1e-13)
def test_entr():
def xfunc(x):
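# reference entropy function: entr(x) = -x*log(x) for x >= 0, -inf otherwise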
if x < 0:
return -np.inf
else:
return -special.xlogy(x, x)
values = (0, 0.5, 1.0, np.inf)
signs = [-1, 1]
arr = []
for sgn, v in itertools.product(signs, values):
arr.append(sgn * v)
z = np.array(arr, dtype=float)
w = np.vectorize(xfunc, otypes=[np.float64])(z)
assert_func_equal(special.entr, w, z, rtol=1e-13, atol=1e-13)
def test_kl_div():
def xfunc(x, y):
if x < 0 or y < 0 or (y == 0 and x != 0):
# extension of natural domain to preserve convexity
return np.inf
elif np.isposinf(x) or np.isposinf(y):
# limits within the natural domain
return np.inf
elif x == 0:
return y
else:
return special.xlogy(x, x/y) - x + y
values = (0, 0.5, 1.0)
signs = [-1, 1]
arr = []
for sgna, va, sgnb, vb in itertools.product(signs, values, signs, values):
arr.append((sgna*va, sgnb*vb))
z = np.array(arr, dtype=float)
w = np.vectorize(xfunc, otypes=[np.float64])(z[:,0], z[:,1])
assert_func_equal(special.kl_div, w, z, rtol=1e-13, atol=1e-13)
def test_rel_entr():
def xfunc(x, y):
if x > 0 and y > 0:
return special.xlogy(x, x/y)
elif x == 0 and y >= 0:
return 0
else:
return np.inf
values = (0, 0.5, 1.0)
signs = [-1, 1]
arr = []
for sgna, va, sgnb, vb in itertools.product(signs, values, signs, values):
arr.append((sgna*va, sgnb*vb))
z = np.array(arr, dtype=float)
w = np.vectorize(xfunc, otypes=[np.float64])(z[:,0], z[:,1])
assert_func_equal(special.rel_entr, w, z, rtol=1e-13, atol=1e-13)
def test_huber():
assert_equal(special.huber(-1, 1.5), np.inf)
assert_allclose(special.huber(2, 1.5), 0.5 * np.square(1.5))
assert_allclose(special.huber(2, 2.5), 2 * (2.5 - 0.5 * 2))
def xfunc(delta, r):
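# reference Huber loss: quadratic for |r| < delta, linear beyond, and +inf for invalid (negative) delta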
if delta < 0:
return np.inf
elif np.abs(r) < delta:
return 0.5 * np.square(r)
else:
return delta * (np.abs(r) - 0.5 * delta)
z = np.random.randn(10, 2)
w = np.vectorize(xfunc, otypes=[np.float64])(z[:,0], z[:,1])
assert_func_equal(special.huber, w, z, rtol=1e-13, atol=1e-13)
def test_pseudo_huber():
def xfunc(delta, r):
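# reference pseudo-Huber loss: delta**2 * (sqrt(1 + (r/delta)**2) - 1), a smooth approximation of the Huber loss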
if delta < 0:
return np.inf
elif (not delta) or (not r):
return 0
else:
return delta**2 * (np.sqrt(1 + (r/delta)**2) - 1)
z = np.array(np.random.randn(10, 2).tolist() + [[0, 0.5], [0.5, 0]])
w = np.vectorize(xfunc, otypes=[np.float64])(z[:,0], z[:,1])
assert_func_equal(special.pseudo_huber, w, z, rtol=1e-13, atol=1e-13)
if __name__ == "__main__":
run_module_suite()
|
nvoron23/scipy
|
scipy/special/tests/test_basic.py
|
Python
|
bsd-3-clause
| 121,709
|
[
"Elk"
] |
7795bbf4e992beed3e6293f9a9c7f05eea229b7350cdaa01c890014015de43bb
|
from EXOSIMS.Prototypes.BackgroundSources import BackgroundSources
import os, inspect
import numpy as np
import astropy.units as u
from scipy.interpolate import griddata
class GalaxiesFaintStars(BackgroundSources):
"""
GalaxiesFaintStars class
This class calculates the total number background sources in number per square
arcminute, including galaxies and faint stars.
"""
def __init__(self, **specs):
"""
Constructor for class GalaxiesFaintStars
"""
BackgroundSources.__init__(self, **specs)
def dNbackground(self, coords, intDepths):
"""
Return total number counts per square arcmin
Args:
coords (astropy SkyCoord array):
SkyCoord object containing right ascension, declination, and
distance to the star of each planet of interest, in units of deg, deg, and pc
intDepths (float ndarray):
Integration depths equal to the planet magnitude (Vmag+dMag),
i.e. the V magnitude of the dark hole to be produced for each target.
Must be of same length as coords.
Returns:
dN (astropy Quantity array):
Number densities of background sources for given targets in
units of 1/arcmin2. Same length as inputs.
"""
# check whether inputs are valid arrays
mag = np.array(intDepths, ndmin=1, copy=False)
dN = super(GalaxiesFaintStars, self).dNbackground(coords, mag)
# make sure mag is within [15,25]
mag = np.clip(mag, 15., 25.)
# retrieve the galactic latitude in degrees from input coords
lat = abs(coords.galactic.b.degree)
# Load stellar background counts from stellar_cnts.txt
# The table comes from Allen Astrophysical Quantities
# Units are in V magnitudes
path = os.path.split(inspect.getfile(self.__class__))[0]
table = np.loadtxt(os.path.join(path, 'stellar_cnts.txt'))
# create data point coordinates
lat_pts = np.array([0., 5, 10, 20, 30, 60, 90]) # deg
mag_pts = np.array([15., 16, 17, 18, 19, 20, 21, 22, 23, 24, 25])
y_pts, x_pts = np.meshgrid(mag_pts, lat_pts)
points = np.array(list(zip(np.concatenate(x_pts), np.concatenate(y_pts))))
# create data values
values = table.reshape(table.size)
# 2-D interpolation over (latitude, magnitude) of the tabulated log10 counts
C_st = griddata(points,values,np.array(list(zip(lat,mag)))) # log values
C_st = 10**C_st/3600
# Galaxy count per square arcmin, from Windhorst et al 2011
# who derived numbers based on Deep Field HST data
C_gal = 2*2.1**(mag - 12.5)/3600
# total counts
dN = C_st + C_gal
return dN/u.arcmin**2
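# Minimal usage sketch (hypothetical names; assumes an EXOSIMS-style specs dict and a
# SkyCoord array 'coords' with an 'intDepths' array of the same length):
#   bs = GalaxiesFaintStars(**specs)
#   dN = bs.dNbackground(coords, intDepths)   # astropy Quantity array, units of 1/arcmin**2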
|
dsavransky/EXOSIMS
|
EXOSIMS/BackgroundSources/GalaxiesFaintStars.py
|
Python
|
bsd-3-clause
| 2,881
|
[
"Galaxy"
] |
b70ad0072126fe18df03d1e3ce5caabdc20c601029475c9deb5baaf82f9f2c32
|
# Copyright (C) 2019 The ESPResSo project
#
# This file is part of ESPResSo.
#
# ESPResSo is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ESPResSo is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import unittest as ut
import importlib_wrapper
tutorial, skipIfMissingFeatures = importlib_wrapper.configure_and_import(
"@TUTORIALS_DIR@/04-lattice_boltzmann/04-lattice_boltzmann_part1.py",
gpu=True)
@skipIfMissingFeatures
class Tutorial(ut.TestCase):
system = tutorial.system
def test_stokes_force(self):
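# the measured force magnitude should match the tutorial's analytical Stokes force
# to within an absolute tolerance of 0.1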
difference = abs(tutorial.size(tutorial.force) - tutorial.stokes_force)
self.assertLess(difference, 0.1)
if __name__ == "__main__":
ut.main()
|
mkuron/espresso
|
testsuite/scripts/tutorials/test_04-lattice_boltzmann_part1.py
|
Python
|
gpl-3.0
| 1,197
|
[
"ESPResSo"
] |
e4a4e976a5724f21c01f62c66a9b6b2681f7dcdfcf8fb4df03c6f788aeabe2b7
|
"""
1D Kriging
==========
An example of 1D kriging with PyKrige
"""
import os
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
# Data taken from https://blog.dominodatalab.com/fitting-gaussian-process-models-python/
X, y = np.array([[-5.01, 1.06], [-4.90, 0.92], [-4.82, 0.35], [-4.69, 0.49], [-4.56, 0.52],
[-4.52, 0.12], [-4.39, 0.47], [-4.32,-0.19], [-4.19, 0.08], [-4.11,-0.19],
[-4.00,-0.03], [-3.89,-0.03], [-3.78,-0.05], [-3.67, 0.10], [-3.59, 0.44],
[-3.50, 0.66], [-3.39,-0.12], [-3.28, 0.45], [-3.20, 0.14], [-3.07,-0.28],
[-3.01,-0.46], [-2.90,-0.32], [-2.77,-1.58], [-2.69,-1.44], [-2.60,-1.51],
[-2.49,-1.50], [-2.41,-2.04], [-2.28,-1.57], [-2.19,-1.25], [-2.10,-1.50],
[-2.00,-1.42], [-1.91,-1.10], [-1.80,-0.58], [-1.67,-1.08], [-1.61,-0.79],
[-1.50,-1.00], [-1.37,-0.04], [-1.30,-0.54], [-1.19,-0.15], [-1.06,-0.18],
[-0.98,-0.25], [-0.87,-1.20], [-0.78,-0.49], [-0.68,-0.83], [-0.57,-0.15],
[-0.50, 0.00], [-0.38,-1.10], [-0.29,-0.32], [-0.18,-0.60], [-0.09,-0.49],
[0.03 ,-0.50], [0.09 ,-0.02], [0.20 ,-0.47], [0.31 ,-0.11], [0.41 ,-0.28],
[0.53 , 0.40], [0.61 , 0.11], [0.70 , 0.32], [0.94 , 0.42], [1.02 , 0.57],
[1.13 , 0.82], [1.24 , 1.18], [1.30 , 0.86], [1.43 , 1.11], [1.50 , 0.74],
[1.63 , 0.75], [1.74 , 1.15], [1.80 , 0.76], [1.93 , 0.68], [2.03 , 0.03],
[2.12 , 0.31], [2.23 ,-0.14], [2.31 ,-0.88], [2.40 ,-1.25], [2.50 ,-1.62],
[2.63 ,-1.37], [2.72 ,-0.99], [2.80 ,-1.92], [2.83 ,-1.94], [2.91 ,-1.32],
[3.00 ,-1.69], [3.13 ,-1.84], [3.21 ,-2.05], [3.30 ,-1.69], [3.41 ,-0.53],
[3.52 ,-0.55], [3.63 ,-0.92], [3.72 ,-0.76], [3.80 ,-0.41], [3.91 , 0.12],
[4.04 , 0.25], [4.13 , 0.16], [4.24 , 0.26], [4.32 , 0.62], [4.44 , 1.69],
[4.52 , 1.11], [4.65 , 0.36], [4.74 , 0.79], [4.84 , 0.87], [4.93 , 1.01],
[5.02 , 0.55]]).T
from pykrige import OrdinaryKriging
X_pred = np.linspace(-6, 6, 200)
# pykrige doesn't support 1D data for now, only 2D or 3D
# adapting the 1D input to 2D
uk = OrdinaryKriging(X, np.zeros(X.shape), y, variogram_model='gaussian',)
y_pred, y_std = uk.execute('grid', X_pred, np.array([0.]))
y_pred = np.squeeze(y_pred)
y_std = np.squeeze(y_std)
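# note: the second value returned by OrdinaryKriging.execute is the kriging variance
# (sigma squared), so the band below is in variance units unless np.sqrt is applied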
fig, ax = plt.subplots(1, 1, figsize=(10, 4))
ax.scatter(X, y, s=40, label='Input data')
ax.plot(X_pred, y_pred, label='Predicted values')
ax.fill_between(X_pred, y_pred - 3*y_std, y_pred + 3*y_std, alpha=0.3, label='Confidence interval')
ax.legend(loc=9)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_xlim(-6, 6)
ax.set_ylim(-2.8, 3.5)
if 'CI' not in os.environ:
# skip in continuous integration
plt.show()
|
rth/PyKrige
|
examples/kriging_1D.py
|
Python
|
bsd-3-clause
| 2,650
|
[
"Gaussian"
] |
50dd27a701567599e7d3999fabc884b5ff57dc2b56088c5bfd03f00cba859db8
|
def block(ch):
'''
Return the Unicode block name for ch, or None if ch has no block.
>>> block(u'a')
'Basic Latin'
>>> block(unichr(0x0b80))
'Tamil'
>>> block(unichr(0xe0080))
'''
assert isinstance(ch, unicode) and len(ch) == 1, repr(ch)
cp = ord(ch)
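# linear scan over the block ranges; falls through and returns None when cp is not in any listed block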
for start, end, name in _blocks:
if start <= cp <= end:
return name
def _initBlocks(text):
global _blocks
_blocks = []
import re
pattern = re.compile(r'([0-9A-F]+)\.\.([0-9A-F]+);\ (\S.*\S)')
for line in text.splitlines():
m = pattern.match(line)
if m:
start, end, name = m.groups()
_blocks.append((int(start, 16), int(end, 16), name))
# retrieved from http://unicode.org/Public/UNIDATA/Blocks.txt
_initBlocks('''
# Blocks-5.1.0.txt
# Date: 2008-03-20, 17:41:00 PDT [KW]
#
# Unicode Character Database
# Copyright (c) 1991-2008 Unicode, Inc.
# For terms of use, see http://www.unicode.org/terms_of_use.html
# For documentation, see UCD.html
#
# Note: The casing of block names is not normative.
# For example, "Basic Latin" and "BASIC LATIN" are equivalent.
#
# Format:
# Start Code..End Code; Block Name
# ================================================
# Note: When comparing block names, casing, whitespace, hyphens,
# and underbars are ignored.
# For example, "Latin Extended-A" and "latin extended a" are equivalent.
# For more information on the comparison of property values,
# see UCD.html.
#
# All code points not explicitly listed for Block
# have the value No_Block.
# Property: Block
#
# @missing: 0000..10FFFF; No_Block
0000..007F; Basic Latin
0080..00FF; Latin-1 Supplement
0100..017F; Latin Extended-A
0180..024F; Latin Extended-B
0250..02AF; IPA Extensions
02B0..02FF; Spacing Modifier Letters
0300..036F; Combining Diacritical Marks
0370..03FF; Greek and Coptic
0400..04FF; Cyrillic
0500..052F; Cyrillic Supplement
0530..058F; Armenian
0590..05FF; Hebrew
0600..06FF; Arabic
0700..074F; Syriac
0750..077F; Arabic Supplement
0780..07BF; Thaana
07C0..07FF; NKo
0900..097F; Devanagari
0980..09FF; Bengali
0A00..0A7F; Gurmukhi
0A80..0AFF; Gujarati
0B00..0B7F; Oriya
0B80..0BFF; Tamil
0C00..0C7F; Telugu
0C80..0CFF; Kannada
0D00..0D7F; Malayalam
0D80..0DFF; Sinhala
0E00..0E7F; Thai
0E80..0EFF; Lao
0F00..0FFF; Tibetan
1000..109F; Myanmar
10A0..10FF; Georgian
1100..11FF; Hangul Jamo
1200..137F; Ethiopic
1380..139F; Ethiopic Supplement
13A0..13FF; Cherokee
1400..167F; Unified Canadian Aboriginal Syllabics
1680..169F; Ogham
16A0..16FF; Runic
1700..171F; Tagalog
1720..173F; Hanunoo
1740..175F; Buhid
1760..177F; Tagbanwa
1780..17FF; Khmer
1800..18AF; Mongolian
1900..194F; Limbu
1950..197F; Tai Le
1980..19DF; New Tai Lue
19E0..19FF; Khmer Symbols
1A00..1A1F; Buginese
1B00..1B7F; Balinese
1B80..1BBF; Sundanese
1C00..1C4F; Lepcha
1C50..1C7F; Ol Chiki
1D00..1D7F; Phonetic Extensions
1D80..1DBF; Phonetic Extensions Supplement
1DC0..1DFF; Combining Diacritical Marks Supplement
1E00..1EFF; Latin Extended Additional
1F00..1FFF; Greek Extended
2000..206F; General Punctuation
2070..209F; Superscripts and Subscripts
20A0..20CF; Currency Symbols
20D0..20FF; Combining Diacritical Marks for Symbols
2100..214F; Letterlike Symbols
2150..218F; Number Forms
2190..21FF; Arrows
2200..22FF; Mathematical Operators
2300..23FF; Miscellaneous Technical
2400..243F; Control Pictures
2440..245F; Optical Character Recognition
2460..24FF; Enclosed Alphanumerics
2500..257F; Box Drawing
2580..259F; Block Elements
25A0..25FF; Geometric Shapes
2600..26FF; Miscellaneous Symbols
2700..27BF; Dingbats
27C0..27EF; Miscellaneous Mathematical Symbols-A
27F0..27FF; Supplemental Arrows-A
2800..28FF; Braille Patterns
2900..297F; Supplemental Arrows-B
2980..29FF; Miscellaneous Mathematical Symbols-B
2A00..2AFF; Supplemental Mathematical Operators
2B00..2BFF; Miscellaneous Symbols and Arrows
2C00..2C5F; Glagolitic
2C60..2C7F; Latin Extended-C
2C80..2CFF; Coptic
2D00..2D2F; Georgian Supplement
2D30..2D7F; Tifinagh
2D80..2DDF; Ethiopic Extended
2DE0..2DFF; Cyrillic Extended-A
2E00..2E7F; Supplemental Punctuation
2E80..2EFF; CJK Radicals Supplement
2F00..2FDF; Kangxi Radicals
2FF0..2FFF; Ideographic Description Characters
3000..303F; CJK Symbols and Punctuation
3040..309F; Hiragana
30A0..30FF; Katakana
3100..312F; Bopomofo
3130..318F; Hangul Compatibility Jamo
3190..319F; Kanbun
31A0..31BF; Bopomofo Extended
31C0..31EF; CJK Strokes
31F0..31FF; Katakana Phonetic Extensions
3200..32FF; Enclosed CJK Letters and Months
3300..33FF; CJK Compatibility
3400..4DBF; CJK Unified Ideographs Extension A
4DC0..4DFF; Yijing Hexagram Symbols
4E00..9FFF; CJK Unified Ideographs
A000..A48F; Yi Syllables
A490..A4CF; Yi Radicals
A500..A63F; Vai
A640..A69F; Cyrillic Extended-B
A700..A71F; Modifier Tone Letters
A720..A7FF; Latin Extended-D
A800..A82F; Syloti Nagri
A840..A87F; Phags-pa
A880..A8DF; Saurashtra
A900..A92F; Kayah Li
A930..A95F; Rejang
AA00..AA5F; Cham
AC00..D7AF; Hangul Syllables
D800..DB7F; High Surrogates
DB80..DBFF; High Private Use Surrogates
DC00..DFFF; Low Surrogates
E000..F8FF; Private Use Area
F900..FAFF; CJK Compatibility Ideographs
FB00..FB4F; Alphabetic Presentation Forms
FB50..FDFF; Arabic Presentation Forms-A
FE00..FE0F; Variation Selectors
FE10..FE1F; Vertical Forms
FE20..FE2F; Combining Half Marks
FE30..FE4F; CJK Compatibility Forms
FE50..FE6F; Small Form Variants
FE70..FEFF; Arabic Presentation Forms-B
FF00..FFEF; Halfwidth and Fullwidth Forms
FFF0..FFFF; Specials
10000..1007F; Linear B Syllabary
10080..100FF; Linear B Ideograms
10100..1013F; Aegean Numbers
10140..1018F; Ancient Greek Numbers
10190..101CF; Ancient Symbols
101D0..101FF; Phaistos Disc
10280..1029F; Lycian
102A0..102DF; Carian
10300..1032F; Old Italic
10330..1034F; Gothic
10380..1039F; Ugaritic
103A0..103DF; Old Persian
10400..1044F; Deseret
10450..1047F; Shavian
10480..104AF; Osmanya
10800..1083F; Cypriot Syllabary
10900..1091F; Phoenician
10920..1093F; Lydian
10A00..10A5F; Kharoshthi
12000..123FF; Cuneiform
12400..1247F; Cuneiform Numbers and Punctuation
1D000..1D0FF; Byzantine Musical Symbols
1D100..1D1FF; Musical Symbols
1D200..1D24F; Ancient Greek Musical Notation
1D300..1D35F; Tai Xuan Jing Symbols
1D360..1D37F; Counting Rod Numerals
1D400..1D7FF; Mathematical Alphanumeric Symbols
1F000..1F02F; Mahjong Tiles
1F030..1F09F; Domino Tiles
20000..2A6DF; CJK Unified Ideographs Extension B
2F800..2FA1F; CJK Compatibility Ideographs Supplement
E0000..E007F; Tags
E0100..E01EF; Variation Selectors Supplement
F0000..FFFFF; Supplementary Private Use Area-A
100000..10FFFF; Supplementary Private Use Area-B
# EOF
''')
|
grantdelozier/TopoCluster
|
scripts/UnicodeBlocks.py
|
Python
|
apache-2.0
| 6,596
|
[
"FEFF"
] |
a0edb5697595fa1dd151eec50e139c260090d87b585d2bafed3dddf3e5a8229a
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.