| text (string, 12-1.05M) | repo_name (string, 5-86) | path (string, 4-191) | language (string, 1 class) | license (string, 15 classes) | size (int32, 12-1.05M) | keyword (list, 1-23) | text_hash (string, 64) |
|---|---|---|---|---|---|---|---|
"""
nanslicer.py
A command-line tool for producing figures with slices through MR images. It is installed
by pip as ``nanslicer``. Supports overlays, dual-coded overlays and colorbars. Dual-coding
is described in https://www.cell.com/neuron/fulltext/S0896-6273(12)00428-X.
The minimum command-line call is:
``nanslicer image.nii.gz output.png``
To add a dual-coded overlay, call:
``nanslicer structural.nii.gz output.png --overlay beta.nii.gz --overlay_alpha pval.nii.gz``
There are many command-line options to control the colormaps and scaling. Type
``nanslicer --help`` to see the full list. The number of slices can be controlled with
the ``--slice_rows`` and ``--slice_cols`` arguments, or you can choose ``--three_axis``.
The ``--slice_axis`` and ``--slice_lims`` arguments specify the axis along which to slice,
and where to start and stop along it (expressed as fractions), for example:
``nanslicer structural.nii.gz output.png --slice_axis x --slice_lims 0.25 0.75``
If you have timeseries data as the base image, you can plot the same slice through
each volume with ``--timeseries``, or you can choose the volume in the timeseries to use
with ``--volume N`` (the default is the first).
Controlling image quality is slightly complicated because there are two interpolation
steps. First, the 3D volumes have to be sampled to produce 2D slices at arbitrary
precision. Then, ``matplotlib`` has to resample those slices to plot them on the canvas.
The first step is controlled by ``--samples N``, which sets the number of points to sample in
each direction of the slice, and ``--interp_order N``, which controls the quality
of the interpolation. The defaults are 128 and 1 (linear interpolation); increase
them to increase the quality. The ``matplotlib`` step is controlled by ``--interp METHOD``,
which can be any valid ``matplotlib`` interpolation method. The default is ``hanning``;
for increased speed this can be changed to ``bilinear`` or ``none``. From experience,
it is the quality of the ``matplotlib`` step that is the dominant factor in figure
quality, hence the defaults of fairly fast sampling in the slicing step but
Hanning resampling in the ``matplotlib`` step.
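For example, a higher-quality (but slower) figure could be produced with settings like
the following (the values here are illustrative, not defaults):
``nanslicer structural.nii.gz output.png --samples 256 --interp_order 3 --interp hanning``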
"""
import argparse
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from .util import add_common_arguments, Axis_map
from .colorbar import colorbar, alphabar
from .box import Box
from .slicer import Slicer
from .slice_func import scale_clip
from .layer import Layer, blend_layers
def main(args=None):
"""
The main function that is called from the command line.
Parameters:
- args -- The command-line arguments. See module docstring or command-line help for a full list
"""
parser = argparse.ArgumentParser(
description='Takes aesthetically pleasing slices through MR images')
add_common_arguments(parser)
parser.add_argument('output', help='Output image name', type=str)
parser.add_argument('--slice_rows', type=int, default=4,
help='Number of rows of slices')
parser.add_argument('--slice_cols', type=int, default=5,
help='Number of columns of slices')
parser.add_argument('--slice_axis', type=str, default='z',
help='Axis to slice along (x/y/z)')
parser.add_argument('--slice_lims', type=float, nargs=2, default=(0.1, 0.9),
help='Slice between these limits along the axis, default=0.1 0.9')
parser.add_argument('--slices', type=float, nargs='+',
help='Slice at specified positions')
parser.add_argument(
'--three_axis', help='Make a 3 axis (x,y,z) plot', action='store_true')
parser.add_argument('--timeseries', action='store_true',
help='Plot the same slice through each volume in a time-series')
parser.add_argument('--volume', type=int, default=0,
help='Plot one volume from a timeseries')
parser.add_argument('--bar_pos', type=str, default='south',
help='Position of color-bar (north/south/east/west)')
parser.add_argument('--figsize', type=float, nargs=2,
default=None, help='Figure size (width, height) in inches')
parser.add_argument('--dpi', type=int, default=150,
help='DPI for output figure')
parser.add_argument('--transpose', action='store_true',
help='Swap rows and columns')
parser.add_argument('--font', type=str, default='Helvetica',
help='Font name, default Helvetica')
parser.add_argument('--fontsize', type=int, default=8,
help='Font size, default 8')
parser.add_argument('--title', type=str, default=None, help='Add a title')
    args = parser.parse_args(args)
mpl.rc('font', family=args.font, size=args.fontsize)
print('*** Loading base image: ', args.base_image)
layers = [Layer(args.base_image, mask=args.mask, crop_center=args.crop_center, crop_size=args.crop_size,
cmap=args.base_map, clim=args.base_lims, climp=args.base_lims_p, scale=args.base_scale,
interp_order=args.interp_order, volume=args.volume), ]
if args.base_lims is None:
print('*** Base limits:', layers[0].clim)
if args.overlay:
layers.append(Layer(args.overlay, scale=args.overlay_scale,
cmap=args.overlay_map, clim=args.overlay_lim,
mask=args.overlay_mask, mask_threshold=args.overlay_mask_thresh,
alpha=args.overlay_alpha, alpha_scale=args.overlay_alpha_scale, alpha_lim=args.overlay_alpha_lim,
interp_order=args.interp_order))
bbox = layers[0].bbox
args.slice_axis = Axis_map[args.slice_axis]
if args.three_axis:
args.slice_rows = 1
args.slice_cols = 3
args.slice_axis = ['x', 'y', 'z']
slice_total = 3
slice_pos = (bbox.center[0], bbox.center[1], bbox.center[2])
elif args.timeseries:
slice_pos = bbox.center[args.slice_axis]
slice_total = layers[0].shape[3]
elif args.slices:
slice_total = args.slice_rows*args.slice_cols
if slice_total != len(args.slices):
print('Number of slices and rows*cols does not match')
exit()
slice_pos = np.array(args.slices)
args.slice_axis = [args.slice_axis] * slice_total
else:
slice_total = args.slice_rows*args.slice_cols
slice_pos = bbox.start[args.slice_axis] + bbox.diag[args.slice_axis] * \
np.linspace(args.slice_lims[0], args.slice_lims[1], slice_total)
args.slice_axis = [args.slice_axis] * slice_total
print(slice_total, ' slices in ', args.slice_rows,
' rows and ', args.slice_cols, ' columns')
if args.orient == 'preclin':
origin = 'upper'
else:
origin = 'lower'
gs1 = gridspec.GridSpec(args.slice_rows, args.slice_cols)
if not args.figsize:
args.figsize = (3*args.slice_cols, 3*args.slice_rows)
figure = plt.figure(facecolor='black', figsize=args.figsize)
print('*** Slicing')
for s in range(0, slice_total):
if args.transpose:
col, row = divmod(s, args.slice_rows)
else:
row, col = divmod(s, args.slice_cols)
ax = plt.subplot(gs1[row, col], facecolor='black')
if args.timeseries:
layers[0].volume = s
sp = slice_pos
axis = args.slice_axis
else:
sp = slice_pos[s]
axis = args.slice_axis[s]
slcr = Slicer(bbox, sp, axis, args.samples, orient=args.orient)
sl_final = blend_layers(layers, slcr)
ax.imshow(sl_final, origin=origin, extent=slcr.extent,
interpolation=args.interp)
ax.axis('off')
if args.contour:
sl_contour = layers[1].get_alpha(slcr)
contour_levels = scale_clip(
np.array(args.contour), args.overlay_alpha_lim)
# Contour levels must be within the range of overlay alpha values.
# Ignore contour levels that are not within this range to prevent
# spurious contour lines from being drawn.
valid_levels = (np.min(sl_contour) < contour_levels) & (
contour_levels < np.max(sl_contour))
if any(valid_levels):
ax.contour(sl_contour, levels=contour_levels[valid_levels], origin=origin, extent=slcr.extent,
colors=args.contour_color, linestyles=args.contour_style, linewidths=1)
if args.base_label or args.overlay_label:
print('*** Adding colorbar')
if args.bar_pos.lower() == 'south':
cbar_bottom = 0.3 * (args.fontsize / 12) / args.figsize[1]
cbar_top = cbar_bottom + 0.1 / args.figsize[1]
gs1.update(left=0.01, right=0.99, bottom=cbar_top+0.001,
top=0.99, wspace=0.01, hspace=0.01)
gs2 = gridspec.GridSpec(1, 1)
gs2.update(left=0.1, right=0.9, bottom=cbar_bottom,
top=cbar_top, wspace=0.1, hspace=0.1)
c_orient = 'h'
c_axes = plt.subplot(gs2[0], facecolor='black')
elif args.bar_pos.lower() == 'south-inset':
gs1.update(left=0.01, right=0.99, bottom=0.01,
top=0.99, wspace=0.01, hspace=0.01)
c_orient = 'h'
c_axes = figure.add_subplot(3, 3, 8)
c_axes.set_position([0.1, 0.1, 0.8, 0.05])
print('Rect: ', c_axes.get_position())
elif args.bar_pos.lower() == 'north':
cbarh = 0.15 * (args.fontsize / 12) / args.figsize[1]
gs1.update(left=0.01, right=0.99, bottom=0.01,
top=0.99 - cbarh, wspace=0.01, hspace=0.01)
gs2 = gridspec.GridSpec(1, 1)
gs2.update(left=0.07, right=0.93, bottom=0.99 - cbarh,
top=0.99, wspace=0.01, hspace=0.01)
c_orient = 'h'
c_axes = plt.subplot(gs2[0], facecolor='black')
print('Rect: ', c_axes.get_position())
elif args.bar_pos.lower() == 'west':
cbarw = 0.275 * (args.fontsize / 12) / args.figsize[0]
gs1.update(left=0.01 + cbarw, right=0.99, bottom=0.01,
top=0.99, wspace=0.01, hspace=0.01)
gs2 = gridspec.GridSpec(1, 1)
gs2.update(left=0.01, right=cbarw, bottom=0.08,
top=0.92, wspace=0.01, hspace=0.01)
c_orient = 'v'
c_axes = plt.subplot(gs2[0], facecolor='black')
elif args.bar_pos.lower() == 'east':
cbarw = 0.35 * (args.fontsize / 12) / args.figsize[0]
gs1.update(left=0.01, right=1 - cbarw, bottom=0.01,
top=0.99, wspace=0.01, hspace=0.01)
gs2 = gridspec.GridSpec(1, 1)
gs2.update(left=1 - cbarw + 0.001, right=1 - cbarw/1.5, bottom=0.08,
top=0.92, wspace=0.01, hspace=0.01)
c_orient = 'v'
c_axes = plt.subplot(gs2[0], facecolor='black')
else:
raise ValueError('Unsupported bar position: ' + args.bar_pos)
if args.overlay_alpha:
alphabar(c_axes, args.overlay_map, args.overlay_lim, args.overlay_label,
args.overlay_alpha_lim, args.overlay_alpha_label, orient=c_orient)
else:
if args.base_map:
colorbar(c_axes, layers[0].cmap, layers[0].clim,
args.base_label, orient=c_orient)
else:
colorbar(c_axes, layers[1].cmap, layers[1].clim,
args.overlay_label, orient=c_orient)
else:
gs1.update(left=0.01, right=0.99, bottom=0.01,
top=0.99, wspace=0.01, hspace=0.01)
if args.title:
figure.axes[-1].text(0.01, 0.99, args.title, color='w', size=args.fontsize, verticalalignment='top',
transform=figure.transFigure)
print('*** Saving')
print('Writing file: ', args.output, 'at', args.dpi, ' DPI')
figure.savefig(args.output, facecolor=figure.get_facecolor(),
edgecolor='none', dpi=args.dpi)
plt.close(figure)
if __name__ == "__main__":
main()
| spinicist/qiview | nanslice/nanslicer.py | Python | mpl-2.0 | 12,382 | ["NEURON"] | 38fd312250490c4c5ed168ba2e941a832d49848074fe5f8d447f6180ab395071 |
import itertools
import os
import pprint
import re
from math import isclose
import lmfit
import numpy as np
import pandas as pd
import peakutils as pku
from ImagingReso.resonance import Resonance
import ImagingReso._utilities as reso_util
from cerberus import Validator
import matplotlib.pyplot as plt
x_type_list = ['energy', 'lambda', 'time', 'number']
y_type_list = ['transmission', 'attenuation']
t_unit_list = ['s', 'ms', 'us', 'ns']
peak_type_list = ['indexed', 'all']
# peak_type_list = ['indexed', 'all', 'none']
index_level_list = ['iso', 'ele']
peak_model_list = ['Gaussian', 'Lorentzian']
def check_if_in_list(name, name_list):
if name not in name_list:
raise ValueError("'{}' is not valid, only support: '{}'".format(name, name_list))
def convert_energy_to(x_type, x,
offset_us=None, source_to_detector_m=None, t_unit='us',
num_offset=0,
time_resolution_us=None,
t_start_us=None):
check_if_in_list(x_type, x_type_list)
check_if_in_list(t_unit, t_unit_list)
if x_type == 'lambda':
x = reso_util.ev_to_angstroms(x)
if x_type == 'time':
if offset_us is None:
raise ValueError("'offset_us=' is required when x_type='time'")
if source_to_detector_m is None:
raise ValueError("'source_to_detector_m=' is required when x_type='time'")
x = reso_util.ev_to_s(offset_us=offset_us,
source_to_detector_m=source_to_detector_m,
array=x)
x = convert_s_to(x=x, t_unit=t_unit)
if x_type == 'number':
if time_resolution_us is not None:
x = reso_util.ev_to_image_number(offset_us=offset_us,
source_to_detector_m=source_to_detector_m,
array=x,
time_resolution_us=time_resolution_us,
t_start_us=t_start_us)
# x = x + num_offset
else:
x = np.array(range(len(x))) + num_offset
return x
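# Usage sketch (hypothetical beamline values): convert an energy axis in eV to
# time of flight in microseconds for a 16.45 m flight path and a 2.7 us offset:
#     tof_us = convert_energy_to('time', x_ev, offset_us=2.7, source_to_detector_m=16.45, t_unit='us')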
def convert_attenuation_to(y_type, y):
check_if_in_list(y_type, y_type_list)
if y_type == 'transmission':
y = 1 - y
return np.array(y)
def convert_s_to(x, t_unit):
if t_unit == 'ns':
_x = x * 1e9
elif t_unit == 'us':
_x = x * 1e6
elif t_unit == 'ms':
_x = x * 1e3
else:
_x = x
return _x
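# e.g. convert_s_to(x=1.5e-6, t_unit='us') returns 1.5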
def convert_exp_peak_df(peak_df: pd.DataFrame, x_type, t_unit):
check_if_in_list(x_type, x_type_list)
check_if_in_list(t_unit, t_unit_list)
if x_type == 'energy':
assert 'x' in peak_df.columns
_x = peak_df['x']
elif x_type == 'lambda':
assert 'x_A' in peak_df.columns
_x = peak_df['x_A']
elif x_type == 'time':
assert 'x_s' in peak_df.columns
_x = peak_df['x_s']
_x = convert_s_to(x=_x, t_unit=t_unit)
else:
assert 'x_num_o' in peak_df.columns
_x = peak_df['x_num_o']
return _x.values # np.array
def check_and_make_dir(current_path, name):
_dir_path = os.path.join(current_path, name)
if not os.path.exists(_dir_path):
os.makedirs(_dir_path)
print("Folder: '{}' has been created ".format(_dir_path))
return _dir_path
def load_txt_csv(path_to_file):
"""
Load and format data from .txt or .csv files
:param path_to_file:
:return: pd.Dataframe
"""
# Error for file format and existence
_format = path_to_file[-4:]
if _format not in ['.txt', '.csv']:
raise ValueError("File must be in the format of '.txt' or '.csv'")
if os.path.exists(path_to_file) is False:
raise ValueError(
"Can not locate file '{}' in '{}' ".format(os.path.basename(path_to_file), os.path.dirname(path_to_file)))
_sep = ','
df = pd.read_csv(path_to_file, sep=_sep, header=None)
if type(df[0][0]) is str:
# if the first element is still a str, use ',' to pd.read_csv
if df[0][0].count('\t') != 0:
_sep = '\t'
df = pd.read_csv(path_to_file, sep=_sep, header=None)
if type(df[0][0]) is str:
# if the first element is still a str, skip the row of the 'X' 'Y' axis labels
df = pd.read_csv(path_to_file, sep=_sep, header=None, skiprows=1)
if list(df[0][:4]) == [1, 2, 3, 4]:
df[0] = df[1]
df.drop(df.columns[1], axis=1, inplace=True)
return df
def get_foil_density_gcm3(length_mm, width_mm, thickness_mm, mass_g):
"""
Get density from mass/(L*W*H)
:param length_mm:
:param width_mm:
:param thickness_mm:
:param mass_g:
:return: density in g/cm^3
"""
_mm3_to_cm3 = 0.001
density_gcm3 = mass_g / (length_mm * width_mm * thickness_mm * _mm3_to_cm3)
return density_gcm3
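# Worked example (hypothetical foil): a 10 mm x 10 mm x 0.1 mm foil weighing
# 0.0193 g occupies 10 * 10 * 0.1 * 0.001 = 0.01 cm^3, so
# get_foil_density_gcm3(10, 10, 0.1, 0.0193) returns 1.93 g/cm^3.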
def set_plt(ax, fig_title, grid, x_type, y_type, t_unit, logx, logy):
check_if_in_list(x_type, x_type_list)
check_if_in_list(y_type, y_type_list)
ax.set_title(fig_title)
if x_type == 'energy':
ax.set_xlabel('Energy (eV)')
elif x_type == 'lambda':
ax.set_xlabel('Wavelength (\u212B)')
elif x_type == 'number':
ax.set_xlabel('Image number (#)')
else:
check_if_in_list(t_unit, t_unit_list)
if t_unit == 'us':
ax.set_xlabel('Time of flight (\u03BCs)')
else:
ax.set_xlabel('Time of flight ({})'.format(t_unit))
if y_type == 'attenuation':
ax.set_ylabel('Neutron attenuation')
else:
ax.set_ylabel('Neutron transmission')
ax.legend(loc='best')
# ax1.legend(bbox_to_anchor=(1., 1), loc=2, borderaxespad=0.)
# ax1.legend(bbox_to_anchor=(0, 0.93, 1., .102), loc=3, borderaxespad=0.)
if grid:
# ax1.set_xticks(np.arange(0, 100, 10))
# ax1.set_yticks(np.arange(0, 1., 0.1))
ax.grid()
if logx:
ax.set_xscale('log')
if logy:
ax.set_yscale('log')
return ax
def rm_envelope(y, deg, max_it=None, tol=None):
envelope = pku.envelope(y=y, deg=deg, max_it=max_it, tol=tol)
# return y + y.max() - envelope
return y / envelope
class Items(object):
"""
    An easier way to specify layers/elements/isotopes in plot()/export().
"""
def __init__(self, o_reso, database='ENDF_VIII'):
self.o_reso = o_reso
self.shaped_list = None
self.database = database
def shaped(self, items_list):
_shaped_list = []
for _raw_path_to_plot in items_list:
if type(_raw_path_to_plot) is not list:
if '*' in _raw_path_to_plot:
_shaped_list = _shaped_list + _fill_iso_to_items(name=_raw_path_to_plot,
stack=self.o_reso.stack,
database=self.database)
else:
_shaped_list.append(_shape_items(_raw_path_to_plot))
else:
if len(_raw_path_to_plot) == 1:
_raw_path_to_plot = _shape_items(_raw_path_to_plot[0])
_shaped_list.append(_raw_path_to_plot)
# Clean duplicates in list
_shaped_list = _rm_duplicated_items(_shaped_list)
self.shaped_list = _shaped_list
return _shaped_list
def values(self, y_axis_type='attenuation'):
# plot specified from 'items_to_plot'
if self.shaped_list is None:
raise ValueError("'.shaped_list' is empty, please run '.shaped()' first.")
if y_axis_type != 'sigma':
_stack = self.o_reso.stack_signal
else:
_stack = self.o_reso.stack_sigma
y_axis_type = 'sigma_b'
y_axis_tag = y_axis_type
_y_axis_dict = {}
for _each_path in self.shaped_list:
_label = _each_path[-1]
if len(_each_path) == 3:
_y_axis_dict[_label] = _stack[_each_path[0]][_each_path[1]][_each_path[2]][y_axis_tag]
elif len(_each_path) == 2:
_y_axis_dict[_label] = _stack[_each_path[0]][_each_path[1]][y_axis_tag]
else:
raise ValueError("Format error of '{}', should be in the form of "
"['layer', 'element'] or ['layer', 'element', 'isotope']")
return _y_axis_dict
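# Usage sketch (hypothetical layer; assumes an ImagingReso Resonance instance):
#     o_reso = Resonance(database='ENDF_VIII')
#     o_reso.add_layer(formula='Gd', thickness=0.15)
#     items = Items(o_reso)
#     items.shaped(['Gd', 'Gd*'])  # 'Gd*' expands to every Gd isotope
#     y_dict = items.values(y_axis_type='attenuation')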
def _shape_items(name):
# input is not structured as required by ImagingReso
if type(name) is not str:
raise ValueError("'{}' entered is not a string.".format(name))
if len(name) == 0:
raise ValueError("'{}' entered has no length.".format(name))
_path_of_input = []
    if any(i.isdigit() for i in name):
        # isotopes, e.g. 'Gd156' -> path ['Gd', 'Gd', '156-Gd']
        _parsed = re.findall(r'([A-Z][a-z]*)(\d*)', name)
        _element_str = _parsed[0][0]
        _number_str = re.findall(r'\d+', name)[0]
        _isotope_str = _number_str + '-' + _element_str
        # The stack is keyed [layer][element][isotope]; the layer name equals
        # the element symbol here, hence the element is appended twice.
        _path_of_input.append(_element_str)
        _path_of_input.append(_element_str)
        _path_of_input.append(_isotope_str)
else:
# elements
if len(name) > 2:
raise ValueError("'{}' entered is not a single element symbol.".format(name))
        if len(name) == 1:
            name = name.upper()
            _path_of_input.append(name)
            _path_of_input.append(name)
        if len(name) == 2:
            if name[0].isupper() and name[1].islower():
                _path_of_input.append(name)
                _path_of_input.append(name)
            else:
                raise ValueError("'{}' entered is not a valid element symbol.".format(name))
return _path_of_input
def _fill_iso_to_items(name, stack=None, database='ENDF_VII'):
if '*' not in name:
raise ValueError("'*' is needed to retrieve all isotopes of '{}' ".format(name))
else:
ele_name = name.replace('*', '')
if stack is None:
o_reso = Resonance(database=database)
o_reso.add_layer(formula=ele_name,
thickness=1)
stack = o_reso.stack
iso_list = stack[ele_name][ele_name]['isotopes']['list']
_path_to_iso = []
for _each_iso in iso_list:
_path_to_iso.append(_shape_items(_each_iso))
return _path_to_iso
def _rm_duplicated_items(raw):
    raw.sort()
    # groupby() only collapses adjacent duplicates, hence the sort() above
    cleaned_list = list(raw for raw, _ in itertools.groupby(raw))
    return cleaned_list
class Layer(object):
def __init__(self):
self.info = {}
    def add_Layer(self, layers):
        # info stores thickness/density as {'value': ..., 'units': ...} dicts, so unwrap 'value'
        for _each_layer in list(layers.info.keys()):
            self.add_layer(layer=layers.info[_each_layer]['layer'],
                           thickness_mm=layers.info[_each_layer]['thickness']['value'],
                           density_gcm3=layers.info[_each_layer]['density']['value'])
def add_layer(self, layer, thickness_mm, density_gcm3=None):
# Input Validation
_input = {'layer': layer,
'thickness': thickness_mm,
'density': density_gcm3,
}
schema = {'layer': {'type': 'string',
'required': True,
},
'thickness': {'type': 'number',
'required': True,
},
'density': {'type': 'number',
'required': True,
'nullable': True,
},
}
v = Validator(schema)
if v.validate(_input) is False:
raise ValueError(v.errors)
_formula = re.findall(r'([A-Z][a-z]*)(\d*)', layer)
_elements = []
for _element in _formula:
_single_element = list(_element)[0]
_elements.append(_single_element)
        # Raise an error if the input contains more than one element for a single layer.
if len(_elements) > 1:
raise ValueError("Please enter single element as layer in string. Example: 'Gd' or 'U'")
if density_gcm3 is not None:
self.info[layer] = {'layer': layer,
'thickness': {'value': thickness_mm,
'units': 'mm',
},
'density': {'value': density_gcm3,
'units': 'g/cm3',
},
'molar_mass': {'value': None,
'units': None,
},
'molar_conc': {'value': None,
'units': None,
},
}
else:
self.info[layer] = {'layer': layer,
'thickness': {'value': thickness_mm,
'units': 'mm',
},
'density': {'value': np.NaN,
'units': 'g/cm3',
},
'molar_mass': {'value': None,
'units': None,
},
'molar_conc': {'value': None,
'units': None,
},
}
def pprint(self):
pprint.pprint(self.info)
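# Usage sketch (hypothetical values; Gd density is roughly 7.9 g/cm^3):
#     layer = Layer()
#     layer.add_layer(layer='Gd', thickness_mm=0.15, density_gcm3=7.9)
#     layer.pprint()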
def find_peak(y, x=None, x_name='x_num', y_name='y', thres=0.015, min_dist=1, imprv_reso=False):
if x is None:
x = np.array(range(len(y)))
# Note: weirdly, indexes have to be reset here to get correct peak locations
x = np.array(x)
y = np.array(y)
_index = pku.indexes(y=y, thres=thres, min_dist=min_dist)
if len(_index) != 0:
_peak_y = list(y[_index])
if imprv_reso is False:
_peak_x = list(x[_index])
else:
_peak_x = list(pku.interpolate(x, y, ind=_index))
else:
# No peaks detected
_peak_y = []
_peak_x = []
peak_df = pd.DataFrame()
peak_df[x_name] = _peak_x
peak_df[y_name] = _peak_y
peak_df.sort_values([x_name], inplace=True)
peak_df.reset_index(inplace=True, drop=True)
return peak_df
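# Example sketch (hypothetical data): for a 1-D attenuation array `y`,
#     peak_df = find_peak(y, thres=0.015, min_dist=5)
# returns a DataFrame with columns ['x_num', 'y'], sorted by peak position.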
def index_peak(peak_dict, peak_map_dict, rel_tol):
num_peak_indexed = 0
_peak_map = peak_map_dict['peak_map']
_peak_df = peak_dict['df']
_names = _peak_map.keys()
peak_map_indexed = {}
for _peak_name in _names:
_df = pd.DataFrame()
_df_ideal = pd.DataFrame()
peak_map_indexed[_peak_name] = {}
_peak_x = _peak_map[_peak_name]['ideal']['x']
_peak_y = _peak_map[_peak_name]['ideal']['y']
_x_num_indexed_list = []
_x_indexed_list = []
_y_indexed_list = []
_x_ideal_list = []
_y_ideal_list = []
for _i in range(len(_peak_df['x'])):
for _j in range(len(_peak_x)):
if peak_map_dict['y_type'] == 'attenuation':
if isclose(_peak_x[_j], _peak_df['x'][_i], rel_tol=rel_tol) and _peak_y[_j] >= _peak_df['y'][_i]:
_x_num_indexed_list.append(_peak_df['x_num'][_i])
_x_indexed_list.append(_peak_df['x'][_i])
_y_indexed_list.append(_peak_df['y'][_i])
_x_ideal_list.append(_peak_x[_j])
_y_ideal_list.append(_peak_y[_j])
else:
if isclose(_peak_x[_j], _peak_df['x'][_i], rel_tol=rel_tol) and _peak_y[_j] <= _peak_df['y'][_i]:
_x_num_indexed_list.append(_peak_df['x_num'][_i])
_x_indexed_list.append(_peak_df['x'][_i])
_y_indexed_list.append(_peak_df['y'][_i])
_x_ideal_list.append(_peak_x[_j])
_y_ideal_list.append(_peak_y[_j])
num_peak_indexed += len(_x_indexed_list)
_df['x_num'] = _x_num_indexed_list
_df['x'] = _x_indexed_list
_df['y'] = _y_indexed_list
_df_ideal['x'] = _x_ideal_list
_df_ideal['y'] = _y_ideal_list
peak_map_indexed[_peak_name]['exp'] = _df
peak_map_indexed[_peak_name]['ideal'] = _df_ideal
peak_map_indexed_dict = {
'peak_map_indexed': peak_map_indexed,
'x_type': peak_map_dict['x_type'],
'y_type': peak_map_dict['y_type'],
}
return peak_map_indexed_dict
class ResoPeak(object):
def __init__(self, y, x, y_type, x_type, img_num):
"""
Initialization
"""
self.peak_dict = None
self.peak_map_indexed_dict = None
self.y_type = y_type
self.x_type = x_type
self.shape_report = None
self.prefix_list = None
self.x = x
self.y = y
self.img_num = img_num
def find_peak(self, thres, min_dist, imprv_reso: bool):
_peak_dict = self._find_peak(y=self.y,
x=self.x,
thres=thres,
min_dist=min_dist,
imprv_reso=imprv_reso)
_peak_dict['x_type'] = self.x_type
_peak_dict['df']['y'] = convert_attenuation_to(y_type=self.y_type, y=_peak_dict['df']['y'])
_peak_dict['y_type'] = self.y_type
self.peak_dict = _peak_dict
return _peak_dict
    def _find_peak(self, y: np.ndarray, thres, min_dist, imprv_reso: bool, x=None):
        """Locate peaks with peakutils and collect them into a DataFrame."""
if x is None:
x = np.array(range(len(y)))
else:
x = np.array(x)
if x.shape != y.shape:
raise ValueError("The length ({}) of 'x' is not equal the length ({}) of 'y'".format(len(x), len(y)))
peak_index = pku.indexes(y=y, thres=thres, min_dist=min_dist)
if len(peak_index) != 0:
_peak_x_num = self.img_num[peak_index]
_peak_y = y[peak_index]
if imprv_reso:
_peak_x = pku.interpolate(x, y, ind=peak_index)
else:
_peak_x = x[peak_index]
else:
# No peaks detected
_peak_x_num = []
_peak_x = []
_peak_y = []
peak_df = pd.DataFrame()
peak_df['x_num'] = _peak_x_num
peak_df['x'] = _peak_x
peak_df['y'] = _peak_y
peak_dict = {
'df': peak_df
}
self.peak_dict = peak_dict
return peak_dict
def index_peak(self, peak_map_dict, rel_tol):
if self.peak_dict is None:
raise ValueError("Please identify peak use 'Peak.find()' before indexing.")
self.peak_map_indexed_dict = index_peak(peak_dict=self.peak_dict, peak_map_dict=peak_map_dict, rel_tol=rel_tol)
def analyze(self, report=False, fit_model='Lorentzian'):
check_if_in_list(fit_model, peak_model_list)
# print(self.img_num)
_peak_map_indexed_dict = self.peak_map_indexed_dict
_peak_map_indexed = _peak_map_indexed_dict['peak_map_indexed']
_y = self.y
_x = self.img_num
model = lmfit.models.GaussianModel(prefix='bkg_')
pars = model.guess(_y, x=_x)
self.prefix_list = []
for _ele in _peak_map_indexed.keys():
if '-' not in _ele:
for _ind in range(len(_peak_map_indexed[_ele]['exp'])):
_prefix = _ele + '_' + str(_ind) + '_'
if fit_model == 'Gaussian':
_model = lmfit.models.GaussianModel(prefix=_prefix)
else: # fit_model == 'Lorentzian':
_model = lmfit.models.LorentzianModel(prefix=_prefix)
_center = _peak_map_indexed[_ele]['exp']['x_num'][_ind]
pars.update(_model.make_params())
pars[_prefix + 'amplitude'].value = 1
pars[_prefix + 'center'].set(_center, min=_center - 100, max=_center + 100)
# pars[_prefix + 'center'].set(_center)
pars[_prefix + 'sigma'].set(2.0, min=0.5, max=20)
# pars[_prefix + 'sigma'].set(2.0)
model += _model
self.prefix_list.append(_prefix)
_out = model.fit(_y, pars, x=_x)
self.shape_report = _out
self.__fwhm()
self.__fill_img_num_to_peak_map_indexed()
print("+------------ Peak analysis ------------+\n{} peak fitting:".format(fit_model))
print("{}\n".format(self.fwhm_df))
if report is True:
print(_out.fit_report())
def plot_fit(self):
if self.shape_report is not None:
self.shape_report.plot()
plt.show()
else:
print("Peaks have not been fitted. Please run 'Peak.analyze()' before plotting.")
def __fwhm(self):
_fwhm_df = pd.DataFrame()
# generate ele list for _fwhm_df
_ele_list = [_ele_name.split('_')[0] for _ele_name in self.prefix_list]
_prefix_list = self.prefix_list
_values = self.shape_report.__dict__['params'].valuesdict()
pars_center_name = [_i + 'center' for _i in _prefix_list]
pars_fwhm_name = [_i + 'fwhm' for _i in _prefix_list]
pars_center_value = [_values[_name] for _name in pars_center_name]
pars_fwhm_value = [_values[_name] for _name in pars_fwhm_name]
_fwhm_df['ele_name'] = _ele_list
_fwhm_df['center_val'] = pars_center_value
_fwhm_df['fwhm_val'] = pars_fwhm_value
_fwhm_df.sort_values(['center_val'], inplace=True)
_fwhm_df.reset_index(inplace=True, drop=True)
self.fwhm_df = _fwhm_df
def __fill_img_num_to_peak_map_indexed(self):
_peak_map_indexed = self.peak_map_indexed_dict['peak_map_indexed']
_fwhm_df = self.fwhm_df
_df = pd.DataFrame()
_df['x_num'] = self.img_num
_df['x'] = self.x
_df['y'] = self.y
_df.set_index('x_num', inplace=True)
for _ele in _peak_map_indexed.keys():
_peak_map_indexed[_ele]['peak_span'] = {}
_img_num_list = []
_peak_span_df = pd.DataFrame()
for _ind in range(len(_fwhm_df)):
if _fwhm_df['ele_name'][_ind] == _ele:
half_fwhm = _fwhm_df['fwhm_val'][_ind] / 2
_min = _fwhm_df['center_val'][_ind] - half_fwhm
# _min = _fwhm_df['center_val'][_ind] - half_fwhm + self.x_num_gap
_max = _fwhm_df['center_val'][_ind] + half_fwhm
# _max = _fwhm_df['center_val'][_ind] + half_fwhm + self.x_num_gap
_min = int(np.floor(_min))
_max = int(np.ceil(_max)) + 1
_img_num_list += [a for a in range(_min, _max)]
_peak_span_df['x_num'] = _img_num_list
_peak_span_df['x'] = list(_df['x'].reindex(_img_num_list))
_peak_span_df['y'] = list(_df['y'].reindex(_img_num_list))
_peak_span_df['y'] = convert_attenuation_to(y_type=self.y_type, y=_peak_span_df['y'])
_peak_map_indexed[_ele]['peak_span'] = _peak_span_df
self.peak_map_indexed_dict['peak_map_indexed'] = _peak_map_indexed
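# Usage sketch (hypothetical arrays):
#     peak = ResoPeak(y=y, x=x, y_type='attenuation', x_type='energy', img_num=img_num)
#     peak.find_peak(thres=0.015, min_dist=5, imprv_reso=False)
#     peak.index_peak(peak_map_dict, rel_tol=0.01)
#     peak.analyze(report=False, fit_model='Gaussian')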
| ornlneutronimaging/ResoFit | ResoFit/_utilities.py | Python | bsd-3-clause | 25,822 | ["Gaussian"] | 983039911090e1cb53761aa8561e7f2a833d2efd596ca99a0a3b9a068f84bd55 |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# https://github.com/shenwei356/bio_scripts
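# Usage sketch (assumed invocation): bam2gff.py aligned.bam > out.gff
# Note: samfile.fetch() below requires a coordinate-sorted BAM with an index (.bai).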
import argparse
import sys
from collections import Counter, defaultdict
import pysam
parser = argparse.ArgumentParser(
description="bam2gff. Extracting the locations of properly mapping paired (single) ends to GFF format.",
epilog="https://github.com/shenwei356/bio_scripts")
parser.add_argument('bamfile', type=str, help='bam file')
parser.add_argument('-c', '--cache-size', type=int, default=1000, help='cache size [1000]')
parser.add_argument('-m', '--match-proportion', type=float, default=0.75,
help='minimum match proportion to define properly paired ends [0.75]')
parser.add_argument('-se', '--single-end', action='store_true', help='single read mapping result')
parser.add_argument("-v", "--verbose", help='verbosely print information',
action="count", default=0)
args = parser.parse_args()
pairs = defaultdict(lambda: defaultdict(dict))
stats = Counter()
samfile = pysam.AlignmentFile(args.bamfile, "rb")
for read in samfile.fetch():
if args.single_end:
        if not read.reference_length or read.reference_length < read.query_length * args.match_proportion:  # require a (nearly) full-length match
stats['bad match'] += 1
continue
ref = samfile.getrname(read.reference_id)
        strand = '-' if read.is_reverse else '+'
        start, end = read.reference_start, read.reference_end
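        # pysam coordinates are 0-based, half-open; GFF is 1-based, inclusive, hence start + 1 below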
sys.stdout.write('\t'.join(
[ref, 'bam2gff.py', 'single_ends', str(start + 1), str(end), '.', strand, '.',
read.query_name]) + "\n")
continue
if read.is_proper_pair and not read.is_secondary:
if read.reference_length < read.query_length * args.match_proportion: # full match
stats['bad match'] += 1
continue
key = '_'.join([str(x) for x in sorted([read.reference_start, read.next_reference_start])])
pairs[read.query_name][key]['read1' if read.is_read1 else 'read2'] = {'start': read.reference_start,
'end': read.reference_end,
'ref': samfile.getrname(
read.reference_id),
'reverse': read.is_reverse}
if 'read1' in pairs[read.query_name][key] and 'read2' in pairs[read.query_name][key]:
read1, read2 = pairs[read.query_name][key]['read1'], pairs[read.query_name][key]['read2']
if not read1['reverse']:
strand, start, end = '+', read1['start'], read2['end']
else:
strand, start, end = '-', read2['start'], read1['end']
sys.stdout.write('\t'.join(
[read1['ref'], 'bam2gff.py', 'paired_ends', str(start + 1), str(end), '.', strand, '.',
read.query_name]) + "\n")
stats['paired'] += 1
del pairs[read.query_name][key]
samfile.close()
for query, sites in pairs.items():
if len(sites) == 0:
continue
stats['unpaired'] += 1
sys.stderr.write('{} summary: {}\n'.format(args.bamfile, stats))
| shenwei356/bio_scripts | file_formats/bam2gff.py | Python | mit | 3,401 | ["pysam"] | 7ea13683f6c3e095785f8bf660ae255cc38f7cc7f8aa09ffd1ec082520702178 |
#!/usr/bin/env python
# coding: utf-8
"""Test suite for autopep8.
Unit tests go in "UnitTests". System tests go in "SystemTests".
"""
from __future__ import unicode_literals
import os
import re
import sys
if sys.version_info < (2, 7):
import unittest2 as unittest
else:
import unittest
import contextlib
import io
import shutil
from subprocess import Popen, PIPE
from tempfile import mkstemp
import tempfile
import tokenize
import warnings
try:
from StringIO import StringIO
except ImportError:
from io import StringIO
ROOT_DIR = os.path.split(os.path.abspath(os.path.dirname(__file__)))[0]
sys.path.insert(0, ROOT_DIR)
import autopep8
if 'AUTOPEP8_COVERAGE' in os.environ and int(os.environ['AUTOPEP8_COVERAGE']):
AUTOPEP8_CMD_TUPLE = ('coverage', 'run', '--branch', '--parallel',
'--omit=*/site-packages/*',
os.path.join(ROOT_DIR, 'autopep8.py'),)
else:
# We need to specify the executable to make sure the correct Python
# interpreter gets used.
AUTOPEP8_CMD_TUPLE = (sys.executable,
os.path.join(ROOT_DIR,
'autopep8.py'),) # pragma: no cover
class UnitTests(unittest.TestCase):
maxDiff = None
def test_find_newline_only_cr(self):
source = ['print 1\r', 'print 2\r', 'print3\r']
self.assertEqual(autopep8.CR, autopep8.find_newline(source))
def test_find_newline_only_lf(self):
source = ['print 1\n', 'print 2\n', 'print3\n']
self.assertEqual(autopep8.LF, autopep8.find_newline(source))
def test_find_newline_only_crlf(self):
source = ['print 1\r\n', 'print 2\r\n', 'print3\r\n']
self.assertEqual(autopep8.CRLF, autopep8.find_newline(source))
def test_find_newline_cr1_and_lf2(self):
source = ['print 1\n', 'print 2\r', 'print3\n']
self.assertEqual(autopep8.LF, autopep8.find_newline(source))
def test_find_newline_cr1_and_crlf2(self):
source = ['print 1\r\n', 'print 2\r', 'print3\r\n']
self.assertEqual(autopep8.CRLF, autopep8.find_newline(source))
def test_find_newline_should_default_to_lf(self):
self.assertEqual(autopep8.LF, autopep8.find_newline([]))
self.assertEqual(autopep8.LF, autopep8.find_newline(['', '']))
def test_detect_encoding(self):
self.assertEqual(
'utf-8',
autopep8.detect_encoding(
os.path.join(ROOT_DIR, 'setup.py')))
def test_detect_encoding_with_cookie(self):
self.assertEqual(
'iso-8859-1',
autopep8.detect_encoding(
os.path.join(ROOT_DIR, 'test', 'iso_8859_1.py')))
def test_readlines_from_file_with_bad_encoding(self):
"""Bad encoding should not cause an exception."""
self.assertEqual(
['# -*- coding: zlatin-1 -*-\n'],
autopep8.readlines_from_file(
os.path.join(ROOT_DIR, 'test', 'bad_encoding.py')))
def test_readlines_from_file_with_bad_encoding2(self):
"""Bad encoding should not cause an exception."""
# This causes a warning on Python 3.
with warnings.catch_warnings(record=True):
self.assertTrue(autopep8.readlines_from_file(
os.path.join(ROOT_DIR, 'test', 'bad_encoding2.py')))
def test_fix_whitespace(self):
self.assertEqual(
'a b',
autopep8.fix_whitespace('a b', offset=1, replacement=' '))
def test_fix_whitespace_with_tabs(self):
self.assertEqual(
'a b',
autopep8.fix_whitespace('a\t \t b', offset=1, replacement=' '))
def test_multiline_string_lines(self):
self.assertEqual(
set([2]),
autopep8.multiline_string_lines(
"""\
'''
'''
"""))
def test_multiline_string_lines_with_many(self):
self.assertEqual(
set([2, 7, 10, 11, 12]),
autopep8.multiline_string_lines(
"""\
'''
'''
''''''
''''''
''''''
'''
'''
'''
'''
"""))
def test_multiline_string_should_not_report_single_line(self):
self.assertEqual(
set(),
autopep8.multiline_string_lines(
"""\
'''abc'''
"""))
def test_multiline_string_should_not_report_docstrings(self):
self.assertEqual(
set([5]),
autopep8.multiline_string_lines(
"""\
def foo():
'''Foo.
Bar.'''
hello = '''
'''
"""))
def test_supported_fixes(self):
self.assertIn('E121', [f[0] for f in autopep8.supported_fixes()])
def test_shorten_comment(self):
self.assertEqual('# ' + '=' * 72 + '\n',
autopep8.shorten_comment('# ' + '=' * 100 + '\n',
max_line_length=79))
def test_shorten_comment_should_not_split_numbers(self):
line = '# ' + '0' * 100 + '\n'
self.assertEqual(line,
autopep8.shorten_comment(line,
max_line_length=79))
def test_shorten_comment_should_not_split_words(self):
line = '# ' + 'a' * 100 + '\n'
self.assertEqual(line,
autopep8.shorten_comment(line,
max_line_length=79))
def test_shorten_comment_should_not_split_urls(self):
line = '# http://foo.bar/' + 'abc-' * 100 + '\n'
self.assertEqual(line,
autopep8.shorten_comment(line,
max_line_length=79))
def test_shorten_comment_should_not_modify_special_comments(self):
line = '#!/bin/blah ' + ' x' * 90 + '\n'
self.assertEqual(line,
autopep8.shorten_comment(line,
max_line_length=79))
def test_format_block_comments(self):
self.assertEqual(
'# abc',
autopep8.fix_e265('#abc'))
self.assertEqual(
'# abc',
autopep8.fix_e265('####abc'))
self.assertEqual(
'# abc',
autopep8.fix_e265('## # ##abc'))
def test_format_block_comments_should_leave_outline_alone(self):
line = """\
###################################################################
## Some people like these crazy things. So leave them alone. ##
###################################################################
"""
self.assertEqual(line, autopep8.fix_e265(line))
line = """\
#################################################################
# Some people like these crazy things. So leave them alone. #
#################################################################
"""
self.assertEqual(line, autopep8.fix_e265(line))
def test_format_block_comments_with_multiple_lines(self):
self.assertEqual(
"""\
# abc
# blah blah
# four space indentation
''' #do not modify strings
#do not modify strings
#do not modify strings
#do not modify strings'''
#
""",
autopep8.fix_e265("""\
# abc
#blah blah
#four space indentation
''' #do not modify strings
#do not modify strings
#do not modify strings
#do not modify strings'''
#
"""))
def test_format_block_comments_should_not_corrupt_special_comments(self):
self.assertEqual(
'#: abc',
autopep8.fix_e265('#: abc'))
self.assertEqual(
'#!/bin/bash\n',
autopep8.fix_e265('#!/bin/bash\n'))
def test_format_block_comments_should_only_touch_real_comments(self):
commented_out_code = '#x = 1'
self.assertEqual(
commented_out_code,
autopep8.fix_e265(commented_out_code))
def test_fix_file(self):
self.assertIn(
'import ',
autopep8.fix_file(
filename=os.path.join(ROOT_DIR, 'test', 'example.py')))
def test_fix_file_with_diff(self):
filename = os.path.join(ROOT_DIR, 'test', 'example.py')
self.assertIn(
'@@',
autopep8.fix_file(
filename=filename,
options=autopep8.parse_args(['--diff', filename])))
def test_fix_lines(self):
self.assertEqual(
'print(123)\n',
autopep8.fix_lines(['print( 123 )\n'],
options=autopep8.parse_args([''])))
def test_fix_code(self):
self.assertEqual(
'print(123)\n',
autopep8.fix_code('print( 123 )\n'))
def test_fix_code_with_empty_string(self):
self.assertEqual(
'',
autopep8.fix_code(''))
def test_fix_code_with_multiple_lines(self):
self.assertEqual(
'print(123)\nx = 4\n',
autopep8.fix_code('print( 123 )\nx =4'))
def test_fix_code_byte_string(self):
"""This feature is here for friendliness to Python 2."""
self.assertEqual(
'print(123)\n',
autopep8.fix_code(b'print( 123 )\n'))
def test_normalize_line_endings(self):
self.assertEqual(
['abc\n', 'def\n', '123\n', 'hello\n', 'world\n'],
autopep8.normalize_line_endings(
['abc\n', 'def\n', '123\n', 'hello\r\n', 'world\r'],
'\n'))
def test_normalize_line_endings_with_crlf(self):
self.assertEqual(
['abc\r\n', 'def\r\n', '123\r\n', 'hello\r\n', 'world\r\n'],
autopep8.normalize_line_endings(
['abc\n', 'def\r\n', '123\r\n', 'hello\r\n', 'world\r'],
'\r\n'))
def test_normalize_multiline(self):
self.assertEqual('def foo(): pass',
autopep8.normalize_multiline('def foo():'))
self.assertEqual('def _(): return 1',
autopep8.normalize_multiline('return 1'))
self.assertEqual('@decorator\ndef _(): pass',
autopep8.normalize_multiline('@decorator\n'))
self.assertEqual('class A: pass',
autopep8.normalize_multiline('class A:'))
def test_code_match(self):
self.assertTrue(autopep8.code_match('E2', select=['E2', 'E3'],
ignore=[]))
self.assertTrue(autopep8.code_match('E26', select=['E2', 'E3'],
ignore=[]))
self.assertFalse(autopep8.code_match('E26', select=[], ignore=['E']))
self.assertFalse(autopep8.code_match('E2', select=['E2', 'E3'],
ignore=['E2']))
self.assertFalse(autopep8.code_match('E26', select=['W'], ignore=['']))
self.assertFalse(autopep8.code_match('E26', select=['W'],
ignore=['E1']))
def test_split_at_offsets(self):
self.assertEqual([''], autopep8.split_at_offsets('', [0]))
self.assertEqual(['1234'], autopep8.split_at_offsets('1234', [0]))
self.assertEqual(['1', '234'], autopep8.split_at_offsets('1234', [1]))
self.assertEqual(['12', '34'], autopep8.split_at_offsets('1234', [2]))
self.assertEqual(['12', '3', '4'],
autopep8.split_at_offsets('1234', [2, 3]))
def test_split_at_offsets_with_out_of_order(self):
self.assertEqual(['12', '3', '4'],
autopep8.split_at_offsets('1234', [3, 2]))
def test_fix_2to3(self):
self.assertEqual(
'try: pass\nexcept ValueError as e: pass\n',
autopep8.fix_2to3('try: pass\nexcept ValueError, e: pass\n'))
self.assertEqual(
'while True: pass\n',
autopep8.fix_2to3('while 1: pass\n'))
self.assertEqual(
"""\
import sys
sys.maxsize
""",
autopep8.fix_2to3("""\
import sys
sys.maxint
"""))
def test_fix_2to3_subset(self):
line = 'type(res) == type(42)\n'
fixed = 'isinstance(res, type(42))\n'
self.assertEqual(fixed, autopep8.fix_2to3(line))
self.assertEqual(fixed, autopep8.fix_2to3(line, select=['E721']))
self.assertEqual(fixed, autopep8.fix_2to3(line, select=['E7']))
self.assertEqual(line, autopep8.fix_2to3(line, select=['W']))
self.assertEqual(line, autopep8.fix_2to3(line, select=['E999']))
self.assertEqual(line, autopep8.fix_2to3(line, ignore=['E721']))
def test_is_python_file(self):
self.assertTrue(autopep8.is_python_file(
os.path.join(ROOT_DIR, 'autopep8.py')))
with temporary_file_context('#!/usr/bin/env python') as filename:
self.assertTrue(autopep8.is_python_file(filename))
with temporary_file_context('#!/usr/bin/python') as filename:
self.assertTrue(autopep8.is_python_file(filename))
with temporary_file_context('#!/usr/bin/python3') as filename:
self.assertTrue(autopep8.is_python_file(filename))
with temporary_file_context('#!/usr/bin/pythonic') as filename:
self.assertFalse(autopep8.is_python_file(filename))
with temporary_file_context('###!/usr/bin/python') as filename:
self.assertFalse(autopep8.is_python_file(filename))
self.assertFalse(autopep8.is_python_file(os.devnull))
self.assertFalse(autopep8.is_python_file('/bin/bash'))
def test_match_file(self):
with temporary_file_context('', suffix='.py', prefix='.') as filename:
self.assertFalse(autopep8.match_file(filename, exclude=[]),
msg=filename)
self.assertFalse(autopep8.match_file(os.devnull, exclude=[]))
with temporary_file_context('', suffix='.py', prefix='') as filename:
self.assertTrue(autopep8.match_file(filename, exclude=[]),
msg=filename)
def test_line_shortening_rank(self):
self.assertGreater(
autopep8.line_shortening_rank('(1\n+1)\n',
indent_word=' ',
max_line_length=79),
autopep8.line_shortening_rank('(1+\n1)\n',
indent_word=' ',
max_line_length=79))
self.assertGreaterEqual(
autopep8.line_shortening_rank('(1+\n1)\n',
indent_word=' ',
max_line_length=79),
autopep8.line_shortening_rank('(1+1)\n',
indent_word=' ',
max_line_length=79))
# Do not crash.
autopep8.line_shortening_rank('\n',
indent_word=' ',
max_line_length=79)
self.assertGreater(
autopep8.line_shortening_rank('[foo(\nx) for x in y]\n',
indent_word=' ',
max_line_length=79),
autopep8.line_shortening_rank('[foo(x)\nfor x in y]\n',
indent_word=' ',
max_line_length=79))
def test_extract_code_from_function(self):
def fix_e123():
pass # pragma: no cover
self.assertEqual('e123', autopep8.extract_code_from_function(fix_e123))
def foo():
pass # pragma: no cover
self.assertEqual(None, autopep8.extract_code_from_function(foo))
def fix_foo():
pass # pragma: no cover
self.assertEqual(None, autopep8.extract_code_from_function(fix_foo))
def e123():
pass # pragma: no cover
self.assertEqual(None, autopep8.extract_code_from_function(e123))
def fix_():
pass # pragma: no cover
self.assertEqual(None, autopep8.extract_code_from_function(fix_))
def test_reindenter(self):
reindenter = autopep8.Reindenter('if True:\n pass\n')
self.assertEqual('if True:\n pass\n',
reindenter.run())
def test_reindenter_with_non_standard_indent_size(self):
reindenter = autopep8.Reindenter('if True:\n pass\n')
self.assertEqual('if True:\n pass\n',
reindenter.run(3))
def test_reindenter_with_good_input(self):
lines = 'if True:\n pass\n'
reindenter = autopep8.Reindenter(lines)
self.assertEqual(lines,
reindenter.run())
def test_reindenter_should_leave_stray_comment_alone(self):
lines = ' #\nif True:\n pass\n'
reindenter = autopep8.Reindenter(lines)
self.assertEqual(' #\nif True:\n pass\n',
reindenter.run())
def test_fix_e225_avoid_failure(self):
fix_pep8 = autopep8.FixPEP8(filename='',
options=autopep8.parse_args(['']),
contents=' 1\n')
self.assertEqual(
[],
fix_pep8.fix_e225({'line': 1,
'column': 5}))
def test_fix_e271_ignore_redundant(self):
fix_pep8 = autopep8.FixPEP8(filename='',
options=autopep8.parse_args(['']),
contents='x = 1\n')
self.assertEqual(
[],
fix_pep8.fix_e271({'line': 1,
'column': 2}))
def test_fix_e401_avoid_non_import(self):
fix_pep8 = autopep8.FixPEP8(filename='',
options=autopep8.parse_args(['']),
contents=' 1\n')
self.assertEqual(
[],
fix_pep8.fix_e401({'line': 1,
'column': 5}))
def test_fix_e711_avoid_failure(self):
fix_pep8 = autopep8.FixPEP8(filename='',
options=autopep8.parse_args(['']),
contents='None == x\n')
self.assertEqual(
[],
fix_pep8.fix_e711({'line': 1,
'column': 6}))
self.assertEqual(
[],
fix_pep8.fix_e711({'line': 1,
'column': 700}))
fix_pep8 = autopep8.FixPEP8(filename='',
options=autopep8.parse_args(['']),
contents='x <> None\n')
self.assertEqual(
[],
fix_pep8.fix_e711({'line': 1,
'column': 3}))
def test_fix_e712_avoid_failure(self):
fix_pep8 = autopep8.FixPEP8(filename='',
options=autopep8.parse_args(['']),
contents='True == x\n')
self.assertEqual(
[],
fix_pep8.fix_e712({'line': 1,
'column': 5}))
self.assertEqual(
[],
fix_pep8.fix_e712({'line': 1,
'column': 700}))
fix_pep8 = autopep8.FixPEP8(filename='',
options=autopep8.parse_args(['']),
contents='x != True\n')
self.assertEqual(
[],
fix_pep8.fix_e712({'line': 1,
'column': 3}))
fix_pep8 = autopep8.FixPEP8(filename='',
options=autopep8.parse_args(['']),
contents='x == False\n')
self.assertEqual(
[],
fix_pep8.fix_e712({'line': 1,
'column': 3}))
def test_get_diff_text(self):
# We ignore the first two lines since it differs on Python 2.6.
self.assertEqual(
"""\
-foo
+bar
""",
'\n'.join(autopep8.get_diff_text(['foo\n'],
['bar\n'],
'').split('\n')[3:]))
def test_get_diff_text_without_newline(self):
# We ignore the first two lines since it differs on Python 2.6.
self.assertEqual(
"""\
-foo
\\ No newline at end of file
+foo
""",
'\n'.join(autopep8.get_diff_text(['foo'],
['foo\n'],
'').split('\n')[3:]))
def test_count_unbalanced_brackets(self):
self.assertEqual(
0,
autopep8.count_unbalanced_brackets('()'))
self.assertEqual(
1,
autopep8.count_unbalanced_brackets('('))
self.assertEqual(
2,
autopep8.count_unbalanced_brackets('(['))
self.assertEqual(
1,
autopep8.count_unbalanced_brackets('[])'))
self.assertEqual(
1,
autopep8.count_unbalanced_brackets(
"'','.join(['%s=%s' % (col, col)')"))
def test_refactor_with_2to3(self):
self.assertEqual(
'1 in {}\n',
autopep8.refactor_with_2to3('{}.has_key(1)\n', ['has_key']))
def test_refactor_with_2to3_should_handle_syntax_error_gracefully(self):
self.assertEqual(
'{}.has_key(1\n',
autopep8.refactor_with_2to3('{}.has_key(1\n', ['has_key']))
def test_commented_out_code_lines(self):
self.assertEqual(
[1, 4],
autopep8.commented_out_code_lines("""\
#x = 1
#Hello
#Hello world.
#html_use_index = True
"""))
def test_standard_deviation(self):
self.assertAlmostEqual(
2, autopep8.standard_deviation([2, 4, 4, 4, 5, 5, 7, 9]))
self.assertAlmostEqual(0, autopep8.standard_deviation([]))
self.assertAlmostEqual(0, autopep8.standard_deviation([1]))
self.assertAlmostEqual(.5, autopep8.standard_deviation([1, 2]))
def test_priority_key_with_non_existent_key(self):
pep8_result = {'id': 'foobar'}
self.assertGreater(autopep8._priority_key(pep8_result), 1)
def test_decode_filename(self):
self.assertEqual('foo.py', autopep8.decode_filename(b'foo.py'))
def test_almost_equal(self):
self.assertTrue(autopep8.code_almost_equal(
"""\
[1, 2, 3
4, 5]
""",
"""\
[1, 2, 3
4, 5]
"""))
self.assertTrue(autopep8.code_almost_equal(
"""\
[1,2,3
4,5]
""",
"""\
[1, 2, 3
4,5]
"""))
self.assertFalse(autopep8.code_almost_equal(
"""\
[1, 2, 3
4, 5]
""",
"""\
[1, 2, 3, 4,
5]
"""))
def test_token_offsets(self):
text = """\
1
"""
string_io = io.StringIO(text)
self.assertEqual(
[(tokenize.NUMBER, '1', 0, 1),
(tokenize.NEWLINE, '\n', 1, 2),
(tokenize.ENDMARKER, '', 2, 2)],
list(autopep8.token_offsets(
tokenize.generate_tokens(string_io.readline))))
def test_token_offsets_with_multiline(self):
text = """\
x = '''
1
2
'''
"""
string_io = io.StringIO(text)
self.assertEqual(
[(tokenize.NAME, 'x', 0, 1),
(tokenize.OP, '=', 2, 3),
(tokenize.STRING, "'''\n1\n2\n'''", 4, 15),
(tokenize.NEWLINE, '\n', 15, 16),
(tokenize.ENDMARKER, '', 16, 16)],
list(autopep8.token_offsets(
tokenize.generate_tokens(string_io.readline))))
def test_token_offsets_with_escaped_newline(self):
text = """\
True or \\
False
"""
string_io = io.StringIO(text)
self.assertEqual(
[(tokenize.NAME, 'True', 0, 4),
(tokenize.NAME, 'or', 5, 7),
(tokenize.NAME, 'False', 11, 16),
(tokenize.NEWLINE, '\n', 16, 17),
(tokenize.ENDMARKER, '', 17, 17)],
list(autopep8.token_offsets(
tokenize.generate_tokens(string_io.readline))))
def test_shorten_line_candidates_are_valid(self):
for text in [
"""\
[xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx, y] = [1, 2]
""",
"""\
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx, y = [1, 2]
""",
"""\
lambda xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: line_shortening_rank(x,
indent_word,
max_line_length)
""",
]:
indent = autopep8._get_indentation(text)
source = text[len(indent):]
assert source.lstrip() == source
tokens = list(autopep8.generate_tokens(source))
for candidate in autopep8.shorten_line(
tokens, source, indent,
indent_word=' ',
max_line_length=79,
aggressive=10,
experimental=True,
previous_line=''):
self.assertEqual(
re.sub(r'\s', '', text),
re.sub(r'\s', '', candidate))
class SystemTests(unittest.TestCase):
maxDiff = None
def test_e101(self):
line = """\
while True:
if True:
\t1
"""
fixed = """\
while True:
if True:
1
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e101_with_indent_size_0(self):
line = """\
while True:
if True:
\t1
"""
with autopep8_context(line, options=['--indent-size=0']) as result:
self.assertEqual(line, result)
def test_e101_with_indent_size_1(self):
line = """\
while True:
if True:
\t1
"""
fixed = """\
while True:
if True:
1
"""
with autopep8_context(line, options=['--indent-size=1']) as result:
self.assertEqual(fixed, result)
def test_e101_with_indent_size_2(self):
line = """\
while True:
if True:
\t1
"""
fixed = """\
while True:
if True:
1
"""
with autopep8_context(line, options=['--indent-size=2']) as result:
self.assertEqual(fixed, result)
def test_e101_with_indent_size_3(self):
line = """\
while True:
if True:
\t1
"""
fixed = """\
while True:
if True:
1
"""
with autopep8_context(line, options=['--indent-size=3']) as result:
self.assertEqual(fixed, result)
def test_e101_should_not_expand_non_indentation_tabs(self):
line = """\
while True:
if True:
\t1 == '\t'
"""
fixed = """\
while True:
if True:
1 == '\t'
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e101_should_ignore_multiline_strings(self):
line = """\
x = '''
while True:
if True:
\t1
'''
"""
fixed = """\
x = '''
while True:
if True:
\t1
'''
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e101_should_fix_docstrings(self):
line = """\
class Bar(object):
def foo():
'''
\tdocstring
'''
"""
fixed = """\
class Bar(object):
def foo():
'''
docstring
'''
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e101_when_pep8_mistakes_first_tab_in_string(self):
# pep8 will complain about this even if the tab indentation found
# elsewhere is in a multiline string.
line = """\
x = '''
\tHello.
'''
if True:
123
"""
fixed = """\
x = '''
\tHello.
'''
if True:
123
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e101_should_ignore_multiline_strings_complex(self):
line = """\
print(3 <> 4, '''
while True:
if True:
\t1
\t''', 4 <> 5)
"""
fixed = """\
print(3 != 4, '''
while True:
if True:
\t1
\t''', 4 != 5)
"""
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_e101_with_comments(self):
line = """\
while True: # My inline comment
# with a hanging
# comment.
# Hello
if True:
\t# My comment
\t1
\t# My other comment
"""
fixed = """\
while True: # My inline comment
# with a hanging
# comment.
# Hello
if True:
# My comment
1
# My other comment
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e101_skip_if_bad_indentation(self):
line = """\
try:
\t pass
except:
pass
"""
with autopep8_context(line) as result:
self.assertEqual(line, result)
def test_e101_skip_innocuous(self):
# pep8 will complain about this even if the tab indentation found
# elsewhere is in a multiline string. If we don't filter the innocuous
# report properly, the below command will take a long time.
p = Popen(list(AUTOPEP8_CMD_TUPLE) +
['-vvv', '--select=E101', '--diff',
os.path.join(ROOT_DIR, 'test', 'e101_example.py')],
stdout=PIPE, stderr=PIPE)
output = [x.decode('utf-8') for x in p.communicate()][0]
self.assertEqual('', output)
def test_e111_short(self):
line = 'class Dummy:\n\n def __init__(self):\n pass\n'
fixed = 'class Dummy:\n\n def __init__(self):\n pass\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e111_long(self):
line = 'class Dummy:\n\n def __init__(self):\n pass\n'
fixed = 'class Dummy:\n\n def __init__(self):\n pass\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e111_longer(self):
line = """\
while True:
if True:
1
elif True:
2
"""
fixed = """\
while True:
if True:
1
elif True:
2
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e111_multiple_levels(self):
line = """\
while True:
if True:
1
# My comment
print('abc')
"""
fixed = """\
while True:
if True:
1
# My comment
print('abc')
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e111_with_dedent(self):
line = """\
def foo():
if True:
2
1
"""
fixed = """\
def foo():
if True:
2
1
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e111_with_other_errors(self):
line = """\
def foo():
if True:
(2 , 1)
1
if True:
print('hello')\t
2
"""
fixed = """\
def foo():
if True:
(2, 1)
1
if True:
print('hello')
2
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e111_should_not_modify_string_contents(self):
line = """\
if True:
x = '''
1
'''
"""
fixed = """\
if True:
x = '''
1
'''
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e112(self):
line = """\
if True:
# A comment.
pass
"""
fixed = """\
if True:
# A comment.
pass
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e112_should_leave_bad_syntax_alone(self):
line = """\
if True:
pass
"""
with autopep8_context(line) as result:
self.assertEqual(line, result)
def test_e113(self):
line = """\
# A comment.
"""
fixed = """\
# A comment.
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e113_should_leave_bad_syntax_alone(self):
line = """\
pass
"""
with autopep8_context(line) as result:
self.assertEqual(line, result)
def test_e12_reindent(self):
line = """\
def foo_bar(baz, frop,
fizz, bang): # E128
pass
if True:
x = {
} # E123
#: E121
print "E121", (
"dent")
#: E122
print "E122", (
"dent")
#: E124
print "E124", ("visual",
"indent_two"
)
#: E125
if (row < 0 or self.moduleCount <= row or
col < 0 or self.moduleCount <= col):
raise Exception("%s,%s - %s" % (row, col, self.moduleCount))
#: E126
print "E126", (
"dent")
#: E127
print "E127", ("over-",
"over-indent")
#: E128
print "E128", ("under-",
"under-indent")
"""
fixed = """\
def foo_bar(baz, frop,
fizz, bang): # E128
pass
if True:
x = {
} # E123
#: E121
print "E121", (
"dent")
#: E122
print "E122", (
"dent")
#: E124
print "E124", ("visual",
"indent_two"
)
#: E125
if (row < 0 or self.moduleCount <= row or
col < 0 or self.moduleCount <= col):
raise Exception("%s,%s - %s" % (row, col, self.moduleCount))
#: E126
print "E126", (
"dent")
#: E127
print "E127", ("over-",
"over-indent")
#: E128
print "E128", ("under-",
"under-indent")
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e12_reindent_with_multiple_fixes(self):
line = """\
sql = 'update %s set %s %s' % (from_table,
','.join(['%s=%s' % (col, col) for col in cols]),
where_clause)
"""
fixed = """\
sql = 'update %s set %s %s' % (from_table,
','.join(['%s=%s' % (col, col)
for col in cols]),
where_clause)
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e12_tricky(self):
line = """\
#: E126
if (
x == (
3
) or
x == (
3
) or
y == 4):
pass
"""
fixed = """\
#: E126
if (
x == (
3
) or
x == (
3
) or
y == 4):
pass
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e12_large(self):
line = """\
class BogusController(controller.CementBaseController):
class Meta:
pass
class BogusController2(controller.CementBaseController):
class Meta:
pass
class BogusController3(controller.CementBaseController):
class Meta:
pass
class BogusController4(controller.CementBaseController):
class Meta:
pass
class TestBaseController(controller.CementBaseController):
class Meta:
pass
class TestBaseController2(controller.CementBaseController):
class Meta:
pass
class TestStackedController(controller.CementBaseController):
class Meta:
arguments = [
]
class TestDuplicateController(controller.CementBaseController):
class Meta:
config_defaults = dict(
foo='bar',
)
arguments = [
(['-f2', '--foo2'], dict(action='store'))
]
def my_command(self):
pass
"""
fixed = """\
class BogusController(controller.CementBaseController):
class Meta:
pass
class BogusController2(controller.CementBaseController):
class Meta:
pass
class BogusController3(controller.CementBaseController):
class Meta:
pass
class BogusController4(controller.CementBaseController):
class Meta:
pass
class TestBaseController(controller.CementBaseController):
class Meta:
pass
class TestBaseController2(controller.CementBaseController):
class Meta:
pass
class TestStackedController(controller.CementBaseController):
class Meta:
arguments = [
]
class TestDuplicateController(controller.CementBaseController):
class Meta:
config_defaults = dict(
foo='bar',
)
arguments = [
(['-f2', '--foo2'], dict(action='store'))
]
def my_command(self):
pass
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e12_with_bad_indentation(self):
line = r"""
def bar():
foo(1,
2)
def baz():
pass
pass
"""
fixed = r"""
def bar():
foo(1,
2)
def baz():
pass
pass
"""
with autopep8_context(line, options=['--select=E12']) as result:
self.assertEqual(fixed, result)
def test_e121_with_multiline_string(self):
line = """\
testing = \\
'''inputs: d c b a
'''
"""
fixed = """\
testing = \\
'''inputs: d c b a
'''
"""
with autopep8_context(line, options=['--select=E12']) as result:
self.assertEqual(fixed, result)
def test_e121_with_stupid_fallback(self):
line = """\
list(''.join([
'%d'
% 1,
list(''),
''
]))
"""
fixed = """\
list(''.join([
'%d'
% 1,
list(''),
''
]))
"""
with autopep8_context(line, options=['--select=E12']) as result:
self.assertEqual(fixed, result)
def test_e122_with_fallback(self):
line = """\
foooo('',
scripts=[''],
classifiers=[
'Development Status :: 4 - Beta',
'Environment :: Console',
'Intended Audience :: Developers',
])
"""
fixed = """\
foooo('',
scripts=[''],
classifiers=[
'Development Status :: 4 - Beta',
'Environment :: Console',
'Intended Audience :: Developers',
])
"""
with autopep8_context(line, options=[]) as result:
self.assertEqual(fixed, result)
def test_e123(self):
line = """\
if True:
foo = (
)
"""
fixed = """\
if True:
foo = (
)
"""
with autopep8_context(line, options=['--select=E12']) as result:
self.assertEqual(fixed, result)
def test_e123_with_escaped_newline(self):
line = r"""
x = \
(
)
"""
fixed = r"""
x = \
(
)
"""
with autopep8_context(line, options=['--select=E12']) as result:
self.assertEqual(fixed, result)
def test_e125(self):
line = """\
if (a and
b in [
'foo',
] or
c):
pass
"""
fixed = """\
if (a and
b in [
'foo',
] or
c):
pass
"""
with autopep8_context(line, options=['--select=E125']) as result:
self.assertEqual(fixed, result)
def test_e125_with_multiline_string(self):
line = """\
for foo in '''
abc
123
'''.strip().split():
print(foo)
"""
with autopep8_context(line, options=['--select=E12']) as result:
self.assertEqual(line, result)
def test_e125_with_multiline_string_okay(self):
line = """\
def bar(
a='''a'''):
print(foo)
"""
fixed = """\
def bar(
a='''a'''):
print(foo)
"""
with autopep8_context(line, options=['--select=E12']) as result:
self.assertEqual(fixed, result)
def test_e126(self):
line = """\
if True:
posted = models.DateField(
default=datetime.date.today,
help_text="help"
)
"""
fixed = """\
if True:
posted = models.DateField(
default=datetime.date.today,
help_text="help"
)
"""
with autopep8_context(line, options=['--select=E12']) as result:
self.assertEqual(fixed, result)
def test_e126_should_not_interfere_with_other_fixes(self):
line = """\
self.assertEqual('bottom 1',
SimpleNamedNode.objects.filter(id__gt=1).exclude(
name='bottom 3').filter(
name__in=['bottom 3', 'bottom 1'])[0].name)
"""
fixed = """\
self.assertEqual('bottom 1',
SimpleNamedNode.objects.filter(id__gt=1).exclude(
name='bottom 3').filter(
name__in=['bottom 3', 'bottom 1'])[0].name)
"""
with autopep8_context(line, options=['--select=E12']) as result:
self.assertEqual(fixed, result)
def test_e127(self):
line = """\
if True:
if True:
chksum = (sum([int(value[i]) for i in xrange(0, 9, 2)]) * 7 -
sum([int(value[i]) for i in xrange(1, 9, 2)])) % 10
"""
fixed = """\
if True:
if True:
chksum = (sum([int(value[i]) for i in xrange(0, 9, 2)]) * 7 -
sum([int(value[i]) for i in xrange(1, 9, 2)])) % 10
"""
with autopep8_context(line, options=['--select=E12']) as result:
self.assertEqual(fixed, result)
def test_e127_align_visual_indent(self):
line = """\
def draw(self):
color = [([0.2, 0.1, 0.3], [0.2, 0.1, 0.3], [0.2, 0.1, 0.3]),
([0.9, 0.3, 0.5], [0.5, 1.0, 0.5], [0.3, 0.3, 0.9]) ][self._p._colored ]
self.draw_background(color)
"""
fixed = """\
def draw(self):
color = [([0.2, 0.1, 0.3], [0.2, 0.1, 0.3], [0.2, 0.1, 0.3]),
([0.9, 0.3, 0.5], [0.5, 1.0, 0.5], [0.3, 0.3, 0.9])][self._p._colored]
self.draw_background(color)
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e127_align_visual_indent_okay(self):
"""This is for code coverage."""
line = """\
want = (have + _leading_space_count(
after[jline - 1]) -
_leading_space_count(lines[jline]))
"""
with autopep8_context(line) as result:
self.assertEqual(line, result)
def test_e127_with_backslash(self):
line = r"""
if True:
if True:
self.date = meta.session.query(schedule.Appointment)\
.filter(schedule.Appointment.id ==
appointment_id).one().agenda.endtime
"""
fixed = r"""
if True:
if True:
self.date = meta.session.query(schedule.Appointment)\
.filter(schedule.Appointment.id ==
appointment_id).one().agenda.endtime
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e127_with_bracket_then_parenthesis(self):
line = r"""
if True:
foo = [food(1)
for bar in bars]
"""
fixed = r"""
if True:
foo = [food(1)
for bar in bars]
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e12_with_backslash(self):
line = r"""
if True:
assert reeval == parsed, \
'Repr gives different object:\n %r !=\n %r' % (parsed, reeval)
"""
fixed = r"""
if True:
assert reeval == parsed, \
'Repr gives different object:\n %r !=\n %r' % (parsed, reeval)
"""
with autopep8_context(line, options=['--select=E12']) as result:
self.assertEqual(fixed, result)
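    # W191: indentation contains tabs. The fixture below spells its tabs as
    # \t escapes so that editors which expand tabs cannot silently corrupt
    # the test data; the fix is exercised with --aggressive, as elsewhere
    # in this suite.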
def test_w191(self):
line = """\
while True:
\tif True:
\t\t1
"""
fixed = """\
while True:
if True:
1
"""
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
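    # E201/E202: extraneous whitespace just inside brackets. A minimal
    # sketch of the expected rewrites, matching the fixtures below:
    #
    #     ( 1)    ->  (1)     # E201, whitespace after '('
    #     (1 )    ->  (1)     # E202, whitespace before ')'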
def test_e201(self):
line = '( 1)\n'
fixed = '(1)\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e202(self):
line = '(1 )\n[2 ]\n{3 }\n'
fixed = '(1)\n[2]\n{3}\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e202_skip_multiline(self):
"""We skip this since pep8 reports the error as being on line 1."""
line = """\
('''
a
b
c
''' )
"""
with autopep8_context(line) as result:
self.assertEqual(line, result)
def test_e202_skip_multiline_with_escaped_newline(self):
line = r"""
('c\
' )
"""
fixed = r"""
('c\
')
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e203_colon(self):
line = '{4 : 3}\n'
fixed = '{4: 3}\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e203_comma(self):
line = '[1 , 2 , 3]\n'
fixed = '[1, 2, 3]\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e203_semicolon(self):
line = "print(a, end=' ') ; nl = 0\n"
fixed = "print(a, end=' '); nl = 0\n"
with autopep8_context(line, options=['--select=E203']) as result:
self.assertEqual(fixed, result)
def test_e203_with_newline(self):
line = "print(a\n, end=' ')\n"
fixed = "print(a, end=' ')\n"
with autopep8_context(line, options=['--select=E203']) as result:
self.assertEqual(fixed, result)
def test_e211(self):
line = 'd = [1, 2, 3]\nprint d [0]\n'
fixed = 'd = [1, 2, 3]\nprint d[0]\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e221(self):
        line = 'a = 1  + 1\n'
fixed = 'a = 1 + 1\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e221_should_skip_multiline(self):
line = '''\
def javascript(self):
return u"""
<script type="text/javascript" src="++resource++ptg.shufflegallery/jquery.promptu-menu.js"></script>
<script type="text/javascript">
$(function(){
$('ul.promptu-menu').promptumenu({width: %(width)i, height: %(height)i, rows: %(rows)i, columns: %(columns)i, direction: '%(direction)s', intertia: %(inertia)i, pages: %(pages)i});
\t$('ul.promptu-menu a').click(function(e) {
e.preventDefault();
});
$('ul.promptu-menu a').dblclick(function(e) {
window.location.replace($(this).attr("href"));
});
});
</script>
""" % {
}
'''
with autopep8_context(line) as result:
self.assertEqual(line, result)
def test_e222(self):
        line = 'a = 1 +  1\n'
fixed = 'a = 1 + 1\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e223(self):
        line = 'a = 1\t+ 1\n'  # include TAB
fixed = 'a = 1 + 1\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e223_double(self):
        line = 'a = 1\t\t+ 1\n'  # include TAB
fixed = 'a = 1 + 1\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e223_with_tab_indentation(self):
line = """\
class Foo():
\tdef __init__(self):
\t\tx= 1\t+ 3
"""
fixed = """\
class Foo():
\tdef __init__(self):
\t\tx = 1 + 3
"""
with autopep8_context(line, options=['--ignore=E1,W191']) as result:
self.assertEqual(fixed, result)
def test_e224(self):
        line = 'a = 11 +\t1\n'  # include TAB
fixed = 'a = 11 + 1\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e224_double(self):
        line = 'a = 11 +\t\t1\n'  # include TAB
fixed = 'a = 11 + 1\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e224_with_tab_indentation(self):
line = """\
class Foo():
\tdef __init__(self):
\t\tx= \t3
"""
fixed = """\
class Foo():
\tdef __init__(self):
\t\tx = 3
"""
with autopep8_context(line, options=['--ignore=E1,W191']) as result:
self.assertEqual(fixed, result)
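    # E225 and friends normalise whitespace around operators: E225 covers
    # general operators, while E226/E227/E228 handle the arithmetic,
    # bitwise and modulo variants. The --select options used below keep
    # each test focused on the single fix under scrutiny.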
def test_e225(self):
line = '1+1\n2 +2\n3+ 3\n'
fixed = '1 + 1\n2 + 2\n3 + 3\n'
with autopep8_context(line, options=['--select=E,W']) as result:
self.assertEqual(fixed, result)
def test_e225_with_indentation_fix(self):
line = """\
class Foo(object):
def bar(self):
return self.elephant is not None
"""
fixed = """\
class Foo(object):
def bar(self):
return self.elephant is not None
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e226(self):
line = '1*1\n2*2\n3*3\n'
fixed = '1 * 1\n2 * 2\n3 * 3\n'
with autopep8_context(line, options=['--select=E22']) as result:
self.assertEqual(fixed, result)
def test_e227(self):
line = '1&1\n2&2\n3&3\n'
fixed = '1 & 1\n2 & 2\n3 & 3\n'
with autopep8_context(line, options=['--select=E22']) as result:
self.assertEqual(fixed, result)
def test_e228(self):
line = '1%1\n2%2\n3%3\n'
fixed = '1 % 1\n2 % 2\n3 % 3\n'
with autopep8_context(line, options=['--select=E22']) as result:
self.assertEqual(fixed, result)
def test_e231(self):
line = '[1,2,3]\n'
fixed = '[1, 2, 3]\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
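    # The next test stress-tests E231 (missing whitespace after comma):
    # str(list(range(200))) renders as '[0, 1, 2, ..., 199]', and re.sub
    # then strips the spaces again, packing roughly two hundred E231
    # violations into one logical line.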
def test_e231_with_many_commas(self):
fixed = str(list(range(200))) + '\n'
line = re.sub(', ', ',', fixed)
with autopep8_context(line, options=['--select=E231']) as result:
self.assertEqual(fixed, result)
def test_e231_with_colon_after_comma(self):
"""ws_comma fixer ignores this case."""
line = 'a[b1,:]\n'
fixed = 'a[b1, :]\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e231_should_only_do_ws_comma_once(self):
"""If we don't check appropriately, we end up doing ws_comma multiple
times and skipping all other fixes."""
line = """\
print( 1 )
foo[0,:]
bar[zap[0][0]:zig[0][0],:]
"""
fixed = """\
print(1)
foo[0, :]
bar[zap[0][0]:zig[0][0], :]
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e241(self):
        line = 'l = (1,  2)\n'
fixed = 'l = (1, 2)\n'
with autopep8_context(line, options=['--select=E']) as result:
self.assertEqual(fixed, result)
def test_e241_should_be_enabled_by_aggressive(self):
        line = 'l = (1,  2)\n'
fixed = 'l = (1, 2)\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_e241_double(self):
        line = 'l = (1,   2)\n'
fixed = 'l = (1, 2)\n'
with autopep8_context(line, options=['--select=E']) as result:
self.assertEqual(fixed, result)
def test_e242(self):
line = 'l = (1,\t2)\n'
fixed = 'l = (1, 2)\n'
with autopep8_context(line, options=['--select=E']) as result:
self.assertEqual(fixed, result)
def test_e242_double(self):
line = 'l = (1,\t\t2)\n'
fixed = 'l = (1, 2)\n'
with autopep8_context(line, options=['--select=E']) as result:
self.assertEqual(fixed, result)
def test_e251(self):
        line = 'def a(arg = 1):\n    print arg\n'
        fixed = 'def a(arg=1):\n    print arg\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e251_with_escaped_newline(self):
        line = '1\n\n\ndef a(arg=\\\n1):\n    print(arg)\n'
        fixed = '1\n\n\ndef a(arg=1):\n    print(arg)\n'
with autopep8_context(line, options=['--select=E251']) as result:
self.assertEqual(fixed, result)
def test_e251_with_calling(self):
line = 'foo(bar= True)\n'
fixed = 'foo(bar=True)\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e251_with_argument_on_next_line(self):
line = 'foo(bar\n=None)\n'
fixed = 'foo(bar=None)\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
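    # E261/E262: inline comment spacing. Judging by the fixtures below, the
    # fixer normalises inline comments to two spaces before the hash and a
    # single '# ' prefix, e.g. "x = 1 #comment" becomes "x = 1  # comment".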
def test_e261(self):
line = "print 'a b '# comment\n"
        fixed = "print 'a b '  # comment\n"
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e261_with_inline_commented_out_code(self):
line = '1 # 0 + 0\n'
        fixed = '1  # 0 + 0\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e261_with_dictionary(self):
line = 'd = {# comment\n1: 2}\n'
        fixed = 'd = {  # comment\n    1: 2}\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e261_with_dictionary_no_space(self):
line = 'd = {#comment\n1: 2}\n'
        fixed = 'd = {  # comment\n    1: 2}\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e261_with_comma(self):
line = '{1: 2 # comment\n , }\n'
        fixed = '{1: 2  # comment\n , }\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e262_more_space(self):
        line = "print 'a b ' #\tcomment\n"
        fixed = "print 'a b '  # comment\n"
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e262_none_space(self):
line = "print 'a b ' #comment\n"
        fixed = "print 'a b '  # comment\n"
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e262_hash_in_string(self):
line = "print 'a b #string' #comment\n"
        fixed = "print 'a b #string'  # comment\n"
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e262_hash_in_string_and_multiple_hashes(self):
line = "print 'a b #string' #comment #comment\n"
        fixed = "print 'a b #string'  # comment #comment\n"
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e262_more_complex(self):
line = "print 'a b ' #comment\n123\n"
        fixed = "print 'a b '  # comment\n123\n"
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e271(self):
        line = 'True and  False\n'
fixed = 'True and False\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e272(self):
        line = 'True  and False\n'
fixed = 'True and False\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e273(self):
line = 'True and\tFalse\n'
fixed = 'True and False\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e274(self):
line = 'True\tand False\n'
fixed = 'True and False\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
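    # Blank-line conventions (E30x), as the fixtures below show: E301 wants
    # one blank line before a nested def, E302 two before a top-level def,
    # E303 caps runs of blank lines, E304 strips blanks between a decorator
    # and its function, and E309 inserts one after a class declaration.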
def test_e301(self):
        line = 'class k:\n    s = 0\n    def f():\n        print 1\n'
        fixed = 'class k:\n    s = 0\n\n    def f():\n        print 1\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e301_extended_with_docstring(self):
line = '''\
class Foo(object):
"""Test."""
def foo(self):
"""Test."""
def bar():
pass
'''
        fixed = '''\
class Foo(object):
    """Test."""

    def foo(self):
        """Test."""
        def bar():
            pass
'''
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e302(self):
        line = 'def f():\n    print 1\n\ndef ff():\n    print 2\n'
        fixed = 'def f():\n    print 1\n\n\ndef ff():\n    print 2\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e303(self):
line = '\n\n\n# alpha\n\n1\n'
fixed = '\n\n# alpha\n\n1\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e303_extended(self):
        line = '''\
def foo():



    """Document."""
'''
        fixed = '''\
def foo():


    """Document."""
'''
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e304(self):
        line = '@contextmanager\n\ndef f():\n    print 1\n'
        fixed = '@contextmanager\ndef f():\n    print 1\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e304_with_comment(self):
        line = '@contextmanager\n# comment\n\ndef f():\n    print 1\n'
        fixed = '@contextmanager\n# comment\ndef f():\n    print 1\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e309(self):
        line = 'class Foo:\n    def bar():\n        print 1\n'
        fixed = 'class Foo:\n\n    def bar():\n        print 1\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e401(self):
line = 'import os, sys\n'
fixed = 'import os\nimport sys\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e401_with_indentation(self):
        line = 'def a():\n    import os, sys\n'
        fixed = 'def a():\n    import os\n    import sys\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e401_should_ignore_commented_comma(self):
        line = 'import bdist_egg, egg  # , not a module, neither is this\n'
        fixed = 'import bdist_egg\nimport egg  # , not a module, neither is this\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e401_should_ignore_commented_comma_with_indentation(self):
        line = 'if True:\n    import bdist_egg, egg  # , not a module, neither is this\n'
        fixed = 'if True:\n    import bdist_egg\n    import egg  # , not a module, neither is this\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e401_should_ignore_false_positive(self):
line = 'import bdist_egg; bdist_egg.write_safety_flag(cmd.egg_info, safe)\n'
with autopep8_context(line, options=['--select=E401']) as result:
self.assertEqual(line, result)
def test_e401_with_escaped_newline_case(self):
        line = 'import foo, \\\n    bar\n'
        fixed = 'import foo\nimport \\\n    bar\n'
with autopep8_context(line, options=['--select=E401']) as result:
self.assertEqual(fixed, result)
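    # The long E501 section that follows probes the line-shortening logic.
    # Roughly (inferred from the fixtures, not a spec): the default run
    # wraps at commas inside brackets, while --aggressive/-aa additionally
    # reflow comments and may split every element onto its own line:
    #
    #     print(111, 111, ..., 333)      # > 79 columns
    # becomes
    #     print(111, 111, ...,
    #           333)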
def test_e501_basic(self):
line = """\
print(111, 111, 111, 111, 222, 222, 222, 222, 222, 222, 222, 222, 222, 333, 333, 333, 333)
"""
fixed = """\
print(111, 111, 111, 111, 222, 222, 222, 222,
222, 222, 222, 222, 222, 333, 333, 333, 333)
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e501_with_commas_and_colons(self):
line = """\
foobar = {'aaaaaaaaaaaa': 'bbbbbbbbbbbbbbbb', 'dddddd': 'eeeeeeeeeeeeeeee', 'ffffffffffff': 'gggggggg'}
"""
fixed = """\
foobar = {'aaaaaaaaaaaa': 'bbbbbbbbbbbbbbbb',
'dddddd': 'eeeeeeeeeeeeeeee', 'ffffffffffff': 'gggggggg'}
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e501_with_inline_comments(self):
        line = """\
'                                                            '  # Long inline comments should be moved above.
if True:
    '                                                            '  # Long inline comments should be moved above.
"""
        fixed = """\
# Long inline comments should be moved above.
'                                                            '
if True:
    # Long inline comments should be moved above.
    '                                                            '
"""
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_e501_with_inline_comments_should_skip_multiline(self):
line = """\
'''This should be left alone. -----------------------------------------------------
''' # foo
'''This should be left alone. -----------------------------------------------------
''' \\
# foo
'''This should be left alone. -----------------------------------------------------
''' \\
\\
# foo
"""
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(line, result)
def test_e501_with_inline_comments_should_skip_keywords(self):
        line = """\
'                                                            '  # noqa Long inline comments should be moved above.
if True:
    '                                                            '  # pylint: disable-msgs=E0001
    '                                                            '  # pragma: no cover
    '                                                            '  # pragma: no cover
"""
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(line, result)
def test_e501_with_inline_comments_should_skip_keywords_without_aggressive(
self):
        line = """\
'                                                            '  # noqa Long inline comments should be moved above.
if True:
    '                                                            '  # pylint: disable-msgs=E0001
    '                                                            '  # pragma: no cover
    '                                                            '  # pragma: no cover
"""
with autopep8_context(line) as result:
self.assertEqual(line, result)
def test_e501_with_inline_comments_should_skip_edge_cases(self):
        line = """\
if True:
    x = \\
        '                                                            '  # Long inline comments should be moved above.
"""
with autopep8_context(line) as result:
self.assertEqual(line, result)
def test_e501_basic_should_prefer_balanced_brackets(self):
line = """\
if True:
reconstructed = iradon(radon(image), filter="ramp", interpolation="nearest")
"""
fixed = """\
if True:
reconstructed = iradon(
radon(image), filter="ramp", interpolation="nearest")
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e501_with_very_long_line(self):
line = """\
x = [3244234243234, 234234234324, 234234324, 23424234, 234234234, 234234, 234243, 234243, 234234234324, 234234324, 23424234, 234234234, 234234, 234243, 234243]
"""
fixed = """\
x = [
3244234243234,
234234234324,
234234324,
23424234,
234234234,
234234,
234243,
234243,
234234234324,
234234324,
23424234,
234234234,
234234,
234243,
234243]
"""
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_e501_shorten_at_commas_skip(self):
line = """\
parser.add_argument('source_corpus', help='corpus name/path relative to an nltk_data directory')
parser.add_argument('target_corpus', help='corpus name/path relative to an nltk_data directory')
"""
fixed = """\
parser.add_argument(
'source_corpus',
help='corpus name/path relative to an nltk_data directory')
parser.add_argument(
'target_corpus',
help='corpus name/path relative to an nltk_data directory')
"""
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_e501_with_shorter_length(self):
line = "foooooooooooooooooo('abcdefghijklmnopqrstuvwxyz')\n"
        fixed = "foooooooooooooooooo(\n    'abcdefghijklmnopqrstuvwxyz')\n"
with autopep8_context(line,
options=['--max-line-length=40']) as result:
self.assertEqual(fixed, result)
def test_e501_with_indent(self):
line = """\
def d():
print(111, 111, 111, 111, 222, 222, 222, 222, 222, 222, 222, 222, 222, 333, 333, 333, 333)
"""
fixed = """\
def d():
print(111, 111, 111, 111, 222, 222, 222, 222,
222, 222, 222, 222, 222, 333, 333, 333, 333)
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e501_alone_with_indentation(self):
line = """\
if True:
print(111, 111, 111, 111, 222, 222, 222, 222, 222, 222, 222, 222, 222, 333, 333, 333, 333)
"""
fixed = """\
if True:
print(111, 111, 111, 111, 222, 222, 222, 222,
222, 222, 222, 222, 222, 333, 333, 333, 333)
"""
with autopep8_context(line, options=['--select=E501']) as result:
self.assertEqual(fixed, result)
def test_e501_alone_with_tuple(self):
line = """\
fooooooooooooooooooooooooooooooo000000000000000000000000 = [1,
('TransferTime', 'FLOAT')
]
"""
fixed = """\
fooooooooooooooooooooooooooooooo000000000000000000000000 = [1,
('TransferTime',
'FLOAT')
]
"""
with autopep8_context(line, options=['--select=E501']) as result:
self.assertEqual(fixed, result)
def test_e501_should_not_try_to_break_at_every_paren_in_arithmetic(self):
line = """\
term3 = w6 * c5 * (8.0 * psi4 * (11.0 - 24.0 * t2) - 28 * psi3 * (1 - 6.0 * t2) + psi2 * (1 - 32 * t2) - psi * (2.0 * t2) + t4) / 720.0
this_should_be_shortened = ('                                                  ', '     ')
"""
fixed = """\
term3 = w6 * c5 * (8.0 * psi4 * (11.0 - 24.0 * t2) - 28 * psi3 *
(1 - 6.0 * t2) + psi2 * (1 - 32 * t2) - psi * (2.0 * t2) + t4) / 720.0
this_should_be_shortened = (
    '                                                  ',
    '     ')
"""
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_e501_arithmetic_operator_with_indent(self):
line = """\
def d():
111 + 111 + 111 + 111 + 111 + 222 + 222 + 222 + 222 + 222 + 222 + 222 + 222 + 222 + 333 + 333 + 333 + 333
"""
fixed = r"""def d():
111 + 111 + 111 + 111 + 111 + 222 + 222 + 222 + 222 + \
222 + 222 + 222 + 222 + 222 + 333 + 333 + 333 + 333
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e501_more_complicated(self):
line = """\
blahblah = os.environ.get('blahblah') or os.environ.get('blahblahblah') or os.environ.get('blahblahblahblah')
"""
fixed = """\
blahblah = os.environ.get('blahblah') or os.environ.get(
'blahblahblah') or os.environ.get('blahblahblahblah')
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e501_skip_even_more_complicated(self):
line = """\
if True:
if True:
if True:
blah = blah.blah_blah_blah_bla_bl(blahb.blah, blah.blah,
blah=blah.label, blah_blah=blah_blah,
blah_blah2=blah_blah)
"""
with autopep8_context(line) as result:
self.assertEqual(line, result)
    def test_e501_prefer_to_break_at_beginning(self):
"""We prefer not to leave part of the arguments hanging."""
line = """\
looooooooooooooong = foo(one, two, three, four, five, six, seven, eight, nine, ten)
"""
fixed = """\
looooooooooooooong = foo(
one, two, three, four, five, six, seven, eight, nine, ten)
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e501_avoid_breaking_at_empty_parentheses_if_possible(self):
line = """\
someverylongindenttionwhatnot().foo().bar().baz("and here is a long string 123456789012345678901234567890")
"""
fixed = """\
someverylongindenttionwhatnot().foo().bar().baz(
"and here is a long string 123456789012345678901234567890")
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e501_with_logical_fix(self):
line = """\
xxxxxxxxxxxxxxxxxxxxxxxxxxxx(aaaaaaaaaaaaaaaaaaaaaaa,
bbbbbbbbbbbbbbbbbbbbbbbbbbbb, cccccccccccccccccccccccccccc, dddddddddddddddddddddddd)
"""
fixed = """\
xxxxxxxxxxxxxxxxxxxxxxxxxxxx(
aaaaaaaaaaaaaaaaaaaaaaa,
bbbbbbbbbbbbbbbbbbbbbbbbbbbb,
cccccccccccccccccccccccccccc,
dddddddddddddddddddddddd)
"""
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(fixed, result)
def test_e501_with_logical_fix_and_physical_fix(self):
line = """\
# ------------------------------------ ------------------------------------------
xxxxxxxxxxxxxxxxxxxxxxxxxxxx(aaaaaaaaaaaaaaaaaaaaaaa,
bbbbbbbbbbbbbbbbbbbbbbbbbbbb, cccccccccccccccccccccccccccc, dddddddddddddddddddddddd)
"""
fixed = """\
# ------------------------------------ -----------------------------------
xxxxxxxxxxxxxxxxxxxxxxxxxxxx(
aaaaaaaaaaaaaaaaaaaaaaa,
bbbbbbbbbbbbbbbbbbbbbbbbbbbb,
cccccccccccccccccccccccccccc,
dddddddddddddddddddddddd)
"""
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(fixed, result)
def test_e501_with_logical_fix_and_adjacent_strings(self):
line = """\
print('a-----------------------' 'b-----------------------' 'c-----------------------'
'd-----------------------''e'"f"r"g")
"""
fixed = """\
print(
'a-----------------------'
'b-----------------------'
'c-----------------------'
'd-----------------------'
'e'
"f"
r"g")
"""
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(fixed, result)
def test_e501_with_multiple_lines(self):
line = """\
foo_bar_zap_bing_bang_boom(111, 111, 111, 111, 222, 222, 222, 222, 222, 222, 222, 222, 222, 333, 333,
111, 111, 111, 111, 222, 222, 222, 222, 222, 222, 222, 222, 222, 333, 333)
"""
fixed = """\
foo_bar_zap_bing_bang_boom(
111,
111,
111,
111,
222,
222,
222,
222,
222,
222,
222,
222,
222,
333,
333,
111,
111,
111,
111,
222,
222,
222,
222,
222,
222,
222,
222,
222,
333,
333)
"""
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(fixed, result)
def test_e501_with_multiple_lines_and_quotes(self):
line = """\
if True:
xxxxxxxxxxx = xxxxxxxxxxxxxxxxx(xxxxxxxxxxx, xxxxxxxxxxxxxxxx={'xxxxxxxxxxxx': 'xxxxx',
'xxxxxxxxxxx': xx,
'xxxxxxxx': False,
})
"""
fixed = """\
if True:
xxxxxxxxxxx = xxxxxxxxxxxxxxxxx(
xxxxxxxxxxx,
xxxxxxxxxxxxxxxx={
'xxxxxxxxxxxx': 'xxxxx',
'xxxxxxxxxxx': xx,
'xxxxxxxx': False,
})
"""
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(fixed, result)
def test_e501_do_not_break_on_keyword(self):
# We don't want to put a newline after equals for keywords as this
# violates PEP 8.
line = """\
if True:
long_variable_name = tempfile.mkstemp(prefix='abcdefghijklmnopqrstuvwxyz0123456789')
"""
fixed = """\
if True:
long_variable_name = tempfile.mkstemp(
prefix='abcdefghijklmnopqrstuvwxyz0123456789')
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e501_do_not_begin_line_with_comma(self):
# This fix is incomplete. (The line is still too long.) But it is here
# just to confirm that we do not put a comma at the beginning of a
# line.
line = """\
def dummy():
if True:
if True:
if True:
object = ModifyAction( [MODIFY70.text, OBJECTBINDING71.text, COLON72.text], MODIFY70.getLine(), MODIFY70.getCharPositionInLine() )
"""
fixed = """\
def dummy():
if True:
if True:
if True:
object = ModifyAction([MODIFY70.text, OBJECTBINDING71.text, COLON72.text], MODIFY70.getLine(
), MODIFY70.getCharPositionInLine())
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e501_should_not_break_on_dot(self):
line = """\
if True:
if True:
raise xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx('xxxxxxxxxxxxxxxxx "{d}" xxxxxxxxxxxxxx'.format(d='xxxxxxxxxxxxxxx'))
"""
fixed = """\
if True:
if True:
raise xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx(
'xxxxxxxxxxxxxxxxx "{d}" xxxxxxxxxxxxxx'.format(d='xxxxxxxxxxxxxxx'))
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e501_with_comment(self):
line = """123
if True:
if True:
if True:
if True:
if True:
if True:
# This is a long comment that should be wrapped. I will wrap it using textwrap to be within 72 characters.
pass
# http://foo.bar/abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-
# The following is ugly commented-out code and should not be touched.
#xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx = 1
"""
fixed = """123
if True:
if True:
if True:
if True:
if True:
if True:
# This is a long comment that should be wrapped. I will
# wrap it using textwrap to be within 72 characters.
pass
# http://foo.bar/abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-
# The following is ugly commented-out code and should not be touched.
#xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx = 1
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e501_with_comment_should_not_modify_docstring(self):
line = '''\
def foo():
"""
# This is a long comment that should be wrapped. I will wrap it using textwrap to be within 72 characters.
"""
'''
with autopep8_context(line) as result:
self.assertEqual(line, result)
def test_e501_should_only_modify_last_comment(self):
line = """123
if True:
if True:
if True:
if True:
if True:
if True:
# This is a long comment that should be wrapped. I will wrap it using textwrap to be within 72 characters.
# 1. This is a long comment that should be wrapped. I will wrap it using textwrap to be within 72 characters.
# 2. This is a long comment that should be wrapped. I will wrap it using textwrap to be within 72 characters.
# 3. This is a long comment that should be wrapped. I will wrap it using textwrap to be within 72 characters.
"""
fixed = """123
if True:
if True:
if True:
if True:
if True:
if True:
# This is a long comment that should be wrapped. I will wrap it using textwrap to be within 72 characters.
# 1. This is a long comment that should be wrapped. I will wrap it using textwrap to be within 72 characters.
# 2. This is a long comment that should be wrapped. I will wrap it using textwrap to be within 72 characters.
# 3. This is a long comment that should be wrapped. I
# will wrap it using textwrap to be within 72
# characters.
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e501_should_not_interfere_with_non_comment(self):
line = '''
"""
# not actually a comment %d. 12345678901234567890, 12345678901234567890, 12345678901234567890.
""" % (0,)
'''
with autopep8_context(line) as result:
self.assertEqual(line, result)
def test_e501_should_cut_comment_pattern(self):
line = """123
# -- Useless lines ----------------------------------------------------------------------
321
"""
fixed = """123
# -- Useless lines -------------------------------------------------------
321
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e501_with_function_should_not_break_on_colon(self):
line = r"""
class Useless(object):
def _table_field_is_plain_widget(self, widget):
if widget.__class__ == Widget or\
(widget.__class__ == WidgetMeta and Widget in widget.__bases__):
return True
return False
"""
with autopep8_context(line) as result:
self.assertEqual(line, result)
def test_e501_should_break_before_tuple_start(self):
line = """\
xxxxxxxxxxxxx(aaaaaaaaaaaaa, bbbbbbbbbbbbbbbbbb, cccccccccc, (dddddddddddddddddddddd, eeeeeeeeeeee, fffffffffff, gggggggggg))
"""
fixed = """\
xxxxxxxxxxxxx(aaaaaaaaaaaaa, bbbbbbbbbbbbbbbbbb, cccccccccc,
(dddddddddddddddddddddd, eeeeeeeeeeee, fffffffffff, gggggggggg))
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e501_with_aggressive(self):
line = """\
models = {
'auth.group': {
'Meta': {'object_name': 'Group'},
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
'auth.permission': {
'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'},
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
}
"""
fixed = """\
models = {
'auth.group': {
'Meta': {
'object_name': 'Group'},
'permissions': (
'django.db.models.fields.related.ManyToManyField',
[],
{
'to': "orm['auth.Permission']",
'symmetrical': 'False',
'blank': 'True'})},
'auth.permission': {
'Meta': {
'ordering': "('content_type__app_label', 'content_type__model', 'codename')",
'unique_together': "(('content_type', 'codename'),)",
'object_name': 'Permission'},
'name': (
'django.db.models.fields.CharField',
[],
{
'max_length': '50'})},
}
"""
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(fixed, result)
def test_e501_with_aggressive_and_multiple_logical_lines(self):
line = """\
xxxxxxxxxxxxxxxxxxxxxxxxxxxx(aaaaaaaaaaaaaaaaaaaaaaa,
bbbbbbbbbbbbbbbbbbbbbbbbbbbb, cccccccccccccccccccccccccccc, dddddddddddddddddddddddd)
xxxxxxxxxxxxxxxxxxxxxxxxxxxx(aaaaaaaaaaaaaaaaaaaaaaa,
bbbbbbbbbbbbbbbbbbbbbbbbbbbb, cccccccccccccccccccccccccccc, dddddddddddddddddddddddd)
"""
fixed = """\
xxxxxxxxxxxxxxxxxxxxxxxxxxxx(
aaaaaaaaaaaaaaaaaaaaaaa,
bbbbbbbbbbbbbbbbbbbbbbbbbbbb,
cccccccccccccccccccccccccccc,
dddddddddddddddddddddddd)
xxxxxxxxxxxxxxxxxxxxxxxxxxxx(
aaaaaaaaaaaaaaaaaaaaaaa,
bbbbbbbbbbbbbbbbbbbbbbbbbbbb,
cccccccccccccccccccccccccccc,
dddddddddddddddddddddddd)
"""
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(fixed, result)
def test_e501_with_aggressive_and_multiple_logical_lines_with_math(self):
line = """\
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx([-1 + 5 / 10,
100,
-3 - 4])
"""
fixed = """\
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx(
[-1 + 5 / 10, 100, -3 - 4])
"""
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(fixed, result)
def test_e501_with_aggressive_and_import(self):
line = """\
from . import (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,
yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy)
"""
fixed = """\
from . import (
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,
yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy)
"""
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(fixed, result)
def test_e501_with_aggressive_and_massive_number_of_logical_lines(self):
"""We do not care about results here.
We just want to know that it doesn't take a ridiculous amount of
        time. Caching is currently required to avoid repeatedly trying
the same line.
"""
line = """\
# encoding: utf-8
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
from provider.compat import user_model_label
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding model 'Client'
db.create_table('oauth2_client', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('user', self.gf('django.db.models.fields.related.ForeignKey')(to=orm[user_model_label])),
('url', self.gf('django.db.models.fields.URLField')(max_length=200)),
('redirect_uri', self.gf('django.db.models.fields.URLField')(max_length=200)),
('client_id', self.gf('django.db.models.fields.CharField')(default='37b581bdc702c732aa65', max_length=255)),
('client_secret', self.gf('django.db.models.fields.CharField')(default='5cf90561f7566aa81457f8a32187dcb8147c7b73', max_length=255)),
('client_type', self.gf('django.db.models.fields.IntegerField')()),
))
db.send_create_signal('oauth2', ['Client'])
# Adding model 'Grant'
db.create_table('oauth2_grant', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('user', self.gf('django.db.models.fields.related.ForeignKey')(to=orm[user_model_label])),
('client', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['oauth2.Client'])),
('code', self.gf('django.db.models.fields.CharField')(default='f0cda1a5f4ae915431ff93f477c012b38e2429c4', max_length=255)),
('expires', self.gf('django.db.models.fields.DateTimeField')(default=datetime.datetime(2012, 2, 8, 10, 43, 45, 620301))),
('redirect_uri', self.gf('django.db.models.fields.CharField')(max_length=255, blank=True)),
('scope', self.gf('django.db.models.fields.IntegerField')(default=0)),
))
db.send_create_signal('oauth2', ['Grant'])
# Adding model 'AccessToken'
db.create_table('oauth2_accesstoken', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('user', self.gf('django.db.models.fields.related.ForeignKey')(to=orm[user_model_label])),
('token', self.gf('django.db.models.fields.CharField')(default='b10b8f721e95117cb13c', max_length=255)),
('client', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['oauth2.Client'])),
('expires', self.gf('django.db.models.fields.DateTimeField')(default=datetime.datetime(2013, 2, 7, 10, 33, 45, 618854))),
('scope', self.gf('django.db.models.fields.IntegerField')(default=0)),
))
db.send_create_signal('oauth2', ['AccessToken'])
# Adding model 'RefreshToken'
db.create_table('oauth2_refreshtoken', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('user', self.gf('django.db.models.fields.related.ForeignKey')(to=orm[user_model_label])),
('token', self.gf('django.db.models.fields.CharField')(default='84035a870dab7c820c2c501fb0b10f86fdf7a3fe', max_length=255)),
('access_token', self.gf('django.db.models.fields.related.OneToOneField')(related_name='refresh_token', unique=True, to=orm['oauth2.AccessToken'])),
('client', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['oauth2.Client'])),
('expired', self.gf('django.db.models.fields.BooleanField')(default=False)),
))
db.send_create_signal('oauth2', ['RefreshToken'])
def backwards(self, orm):
# Deleting model 'Client'
db.delete_table('oauth2_client')
# Deleting model 'Grant'
db.delete_table('oauth2_grant')
# Deleting model 'AccessToken'
db.delete_table('oauth2_accesstoken')
# Deleting model 'RefreshToken'
db.delete_table('oauth2_refreshtoken')
models = {
'auth.group': {
'Meta': {'object_name': 'Group'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
'auth.permission': {
'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
user_model_label: {
'Meta': {'object_name': user_model_label.split('.')[-1]},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
'oauth2.accesstoken': {
'Meta': {'object_name': 'AccessToken'},
'client': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['oauth2.Client']"}),
'expires': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime(2013, 2, 7, 10, 33, 45, 624553)'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'scope': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'token': ('django.db.models.fields.CharField', [], {'default': "'d5c1f65020ebdc89f20c'", 'max_length': '255'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['%s']" % user_model_label})
},
'oauth2.client': {
'Meta': {'object_name': 'Client'},
'client_id': ('django.db.models.fields.CharField', [], {'default': "'306fb26cbcc87dd33cdb'", 'max_length': '255'}),
'client_secret': ('django.db.models.fields.CharField', [], {'default': "'7e5785add4898448d53767f15373636b918cf0e3'", 'max_length': '255'}),
'client_type': ('django.db.models.fields.IntegerField', [], {}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'redirect_uri': ('django.db.models.fields.URLField', [], {'max_length': '200'}),
'url': ('django.db.models.fields.URLField', [], {'max_length': '200'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['%s']" % user_model_label})
},
'oauth2.grant': {
'Meta': {'object_name': 'Grant'},
'client': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['oauth2.Client']"}),
'code': ('django.db.models.fields.CharField', [], {'default': "'310b2c63e27306ecf5307569dd62340cc4994b73'", 'max_length': '255'}),
'expires': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime(2012, 2, 8, 10, 43, 45, 625956)'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'redirect_uri': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'}),
'scope': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['%s']" % user_model_label})
},
'oauth2.refreshtoken': {
'Meta': {'object_name': 'RefreshToken'},
'access_token': ('django.db.models.fields.related.OneToOneField', [], {'related_name': "'refresh_token'", 'unique': 'True', 'to': "orm['oauth2.AccessToken']"}),
'client': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['oauth2.Client']"}),
'expired': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'token': ('django.db.models.fields.CharField', [], {'default': "'ef0ab76037f17769ab2975a816e8f41a1c11d25e'", 'max_length': '255'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['%s']" % user_model_label})
}
}
complete_apps = ['oauth2']
"""
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(''.join(line.split()),
''.join(result.split()))
def test_e501_shorten_comment_with_aggressive(self):
line = """\
# --------- ----------------------------------------------------------------------
"""
fixed = """\
# --------- --------------------------------------------------------------
"""
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(fixed, result)
def test_e501_with_aggressive_and_escaped_newline(self):
line = """\
if True or \\
False: # test test test test test test test test test test test test test test
pass
"""
fixed = """\
# test test test test test test test test test test test test test test
if True or False:
pass
"""
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(fixed, result)
def test_e501_with_aggressive_and_multiline_string(self):
line = """\
print('---------------------------------------------------------------------',
('================================================', '====================='),
'''--------------------------------------------------------------------------------
''')
"""
fixed = """\
print(
'---------------------------------------------------------------------',
('================================================',
'====================='),
'''--------------------------------------------------------------------------------
''')
"""
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(fixed, result)
def test_e501_with_aggressive_and_multiline_string_with_addition(self):
line = '''\
def f():
email_text += """<html>This is a really long docstring that goes over the column limit and is multi-line.<br><br>
<b>Czar: </b>"""+despot["Nicholas"]+"""<br>
<b>Minion: </b>"""+serf["Dmitri"]+"""<br>
<b>Residence: </b>"""+palace["Winter"]+"""<br>
</body>
</html>"""
'''
fixed = '''\
def f():
email_text += """<html>This is a really long docstring that goes over the column limit and is multi-line.<br><br>
<b>Czar: </b>""" + despot["Nicholas"] + """<br>
<b>Minion: </b>""" + serf["Dmitri"] + """<br>
<b>Residence: </b>""" + palace["Winter"] + """<br>
</body>
</html>"""
'''
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(fixed, result)
def test_e501_with_aggressive_and_multiline_string_in_parens(self):
line = '''\
def f():
email_text += ("""<html>This is a really long docstring that goes over the column limit and is multi-line.<br><br>
<b>Czar: </b>"""+despot["Nicholas"]+"""<br>
<b>Minion: </b>"""+serf["Dmitri"]+"""<br>
<b>Residence: </b>"""+palace["Winter"]+"""<br>
</body>
</html>""")
'''
fixed = '''\
def f():
email_text += (
"""<html>This is a really long docstring that goes over the column limit and is multi-line.<br><br>
<b>Czar: </b>""" +
despot["Nicholas"] +
"""<br>
<b>Minion: </b>""" +
serf["Dmitri"] +
"""<br>
<b>Residence: </b>""" +
palace["Winter"] +
"""<br>
</body>
</html>""")
'''
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(fixed, result)
def test_e501_with_aggressive_and_indentation(self):
line = """\
if True:
# comment here
print(aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa,
bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb,cccccccccccccccccccccccccccccccccccccccccc)
"""
fixed = """\
if True:
# comment here
print(
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa,
bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb,
cccccccccccccccccccccccccccccccccccccccccc)
"""
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(fixed, result)
def test_e501_with_multiple_keys_and_aggressive(self):
line = """\
one_two_three_four_five_six = {'one two three four five': 12345, 'asdfsdflsdkfjl sdflkjsdkfkjsfjsdlkfj sdlkfjlsfjs': '343',
1: 1}
"""
fixed = """\
one_two_three_four_five_six = {
'one two three four five': 12345,
'asdfsdflsdkfjl sdflkjsdkfkjsfjsdlkfj sdlkfjlsfjs': '343',
1: 1}
"""
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(fixed, result)
def test_e501_with_aggressive_and_carriage_returns_only(self):
"""Make sure _find_logical() does not crash."""
        line = 'if True:\r    from aaaaaaaaaaaaaaaa import bbbbbbbbbbbbbbbbbbb\r \r    ccccccccccc = None\r'
        fixed = 'if True:\r    from aaaaaaaaaaaaaaaa import bbbbbbbbbbbbbbbbbbb\r\r    ccccccccccc = None\r'
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(fixed, result)
def test_e501_should_ignore_imports(self):
line = """\
import logging, os, bleach, commonware, urllib2, json, time, requests, urlparse, re
"""
with autopep8_context(line, options=['--select=E501']) as result:
self.assertEqual(line, result)
def test_e501_should_not_do_useless_things(self):
line = """\
foo('                                                                           ')
"""
with autopep8_context(line) as result:
self.assertEqual(line, result)
def test_e501_aggressive_with_percent(self):
line = """\
raise MultiProjectException("Ambiguous workspace: %s=%s, %s" % ( varname, varname_path, os.path.abspath(config_filename)))
"""
fixed = """\
raise MultiProjectException(
"Ambiguous workspace: %s=%s, %s" %
(varname, varname_path, os.path.abspath(config_filename)))
"""
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_e501_aggressive_with_def(self):
line = """\
def foo(sldfkjlsdfsdf, kksdfsdfsf,sdfsdfsdf, sdfsdfkdk, szdfsdfsdf, sdfsdfsdfsdlkfjsdlf, sdfsdfddf,sdfsdfsfd, sdfsdfdsf):
pass
"""
fixed = """\
def foo(sldfkjlsdfsdf, kksdfsdfsf, sdfsdfsdf, sdfsdfkdk, szdfsdfsdf,
sdfsdfsdfsdlkfjsdlf, sdfsdfddf, sdfsdfsfd, sdfsdfdsf):
pass
"""
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_e501_more_aggressive_with_def(self):
line = """\
def foobar(sldfkjlsdfsdf, kksdfsdfsf,sdfsdfsdf, sdfsdfkdk, szdfsdfsdf, sdfsdfsdfsdlkfjsdlf, sdfsdfddf,sdfsdfsfd, sdfsdfdsf):
pass
"""
fixed = """\
def foobar(
sldfkjlsdfsdf,
kksdfsdfsf,
sdfsdfsdf,
sdfsdfkdk,
szdfsdfsdf,
sdfsdfsdfsdlkfjsdlf,
sdfsdfddf,
sdfsdfsfd,
sdfsdfdsf):
pass
"""
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(fixed, result)
def test_e501_aggressive_with_tuple(self):
line = """\
def f():
man_this_is_a_very_long_function_name(an_extremely_long_variable_name,
('a string that is long: %s'%'bork'))
"""
fixed = """\
def f():
man_this_is_a_very_long_function_name(
an_extremely_long_variable_name,
('a string that is long: %s' %
'bork'))
"""
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(fixed, result)
def test_e501_aggressive_with_tuple_in_list(self):
line = """\
def f(self):
self._xxxxxxxx(aaaaaa, bbbbbbbbb, cccccccccccccccccc,
[('mmmmmmmmmm', self.yyyyyyyyyy.zzzzzzz/_DDDDD)], eee, 'ff')
"""
fixed = """\
def f(self):
self._xxxxxxxx(
aaaaaa, bbbbbbbbb, cccccccccccccccccc, [
('mmmmmmmmmm', self.yyyyyyyyyy.zzzzzzz / _DDDDD)], eee, 'ff')
"""
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(fixed, result)
def test_e501_aggressive_decorator(self):
line = """\
@foo(('xxxxxxxxxxxxxxxxxxxxxxxxxx', users.xxxxxxxxxxxxxxxxxxxxxxxxxx), ('yyyyyyyyyyyy', users.yyyyyyyyyyyy), ('zzzzzzzzzzzzzz', users.zzzzzzzzzzzzzz))
"""
fixed = """\
@foo(
('xxxxxxxxxxxxxxxxxxxxxxxxxx',
users.xxxxxxxxxxxxxxxxxxxxxxxxxx),
('yyyyyyyyyyyy',
users.yyyyyyyyyyyy),
('zzzzzzzzzzzzzz',
users.zzzzzzzzzzzzzz))
"""
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(fixed, result)
def test_e501_aggressive_long_class_name(self):
line = """\
class AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA(BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB):
pass
"""
fixed = """\
class AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA(
BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB):
pass
"""
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(fixed, result)
def test_e501_aggressive_long_comment_and_long_line(self):
line = """\
def foo():
    #. This is not a novel to be tossed aside lightly. It should be thrown with great force.
self.xxxxxxxxx(_('yyyyyyyyyyyyy yyyyyyyyyyyy yyyyyyyy yyyyyyyy y'), 'zzzzzzzzzzzzzzzzzzz', bork='urgent')
"""
fixed = """\
def foo():
    #. This is not a novel to be tossed aside lightly. It should be thrown with great force.
self.xxxxxxxxx(
_('yyyyyyyyyyyyy yyyyyyyyyyyy yyyyyyyy yyyyyyyy y'),
'zzzzzzzzzzzzzzzzzzz',
bork='urgent')
"""
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(fixed, result)
def test_e501_aggressive_intermingled_comments(self):
line = """\
A = [
# A comment
['aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa', 'bbbbbbbbbbbbbbbbbbbbbb', 'cccccccccccccccccccccc']
]
"""
fixed = """\
A = [
# A comment
['aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa',
'bbbbbbbbbbbbbbbbbbbbbb',
'cccccccccccccccccccccc']
]
"""
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(fixed, result)
def test_e501_if_line_over_limit(self):
line = """\
if not xxxxxxxxxxxx(aaaaaaaaaaaaaaaaaa, bbbbbbbbbbbbbbbb, cccccccccccccc, dddddddddddddddddddddd):
return 1
"""
fixed = """\
if not xxxxxxxxxxxx(
aaaaaaaaaaaaaaaaaa,
bbbbbbbbbbbbbbbb,
cccccccccccccc,
dddddddddddddddddddddd):
return 1
"""
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(fixed, result)
def test_e501_for_line_over_limit(self):
line = """\
for aaaaaaaaa in xxxxxxxxxxxx(aaaaaaaaaaaaaaaaaa, bbbbbbbbbbbbbbbb, cccccccccccccc, dddddddddddddddddddddd):
pass
"""
fixed = """\
for aaaaaaaaa in xxxxxxxxxxxx(
aaaaaaaaaaaaaaaaaa,
bbbbbbbbbbbbbbbb,
cccccccccccccc,
dddddddddddddddddddddd):
pass
"""
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(fixed, result)
def test_e501_while_line_over_limit(self):
line = """\
while xxxxxxxxxxxx(aaaaaaaaaaaaaaaaaa, bbbbbbbbbbbbbbbb, cccccccccccccc, dddddddddddddddddddddd):
pass
"""
fixed = """\
while xxxxxxxxxxxx(
aaaaaaaaaaaaaaaaaa,
bbbbbbbbbbbbbbbb,
cccccccccccccc,
dddddddddddddddddddddd):
pass
"""
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(fixed, result)
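# E502: the backslash is redundant between brackets.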
def test_e502(self):
line = "print('abc'\\\n 'def')\n"
fixed = "print('abc'\n 'def')\n"
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
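# E701: multiple statements on one line (colon).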
def test_e701(self):
line = 'if True: print True\n'
fixed = 'if True:\n print True\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e701_with_escaped_newline(self):
line = 'if True:\\\nprint True\n'
fixed = 'if True:\n print True\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e701_with_escaped_newline_and_spaces(self):
line = 'if True: \\ \nprint True\n'
fixed = 'if True:\n print True\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
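# E702: multiple statements on one line (semicolon).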
def test_e702(self):
line = 'print 1; print 2\n'
fixed = 'print 1\nprint 2\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e702_with_semicolon_at_end(self):
line = 'print 1;\n'
fixed = 'print 1\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e702_with_semicolon_and_space_at_end(self):
line = 'print 1; \n'
fixed = 'print 1\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e702_with_whitespace(self):
line = 'print 1 ; print 2\n'
fixed = 'print 1\nprint 2\n'
with autopep8_context(line, options=['--select=E702']) as result:
self.assertEqual(fixed, result)
def test_e702_with_non_ascii_file(self):
line = """\
# -*- coding: utf-8 -*-
# French comment with accent é
# Un commentaire en français avec un accent é
import time
time.strftime('%d-%m-%Y');
"""
fixed = """\
# -*- coding: utf-8 -*-
# French comment with accent é
# Un commentaire en français avec un accent é
import time
time.strftime('%d-%m-%Y')
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e702_with_escaped_newline(self):
line = '1; \\\n2\n'
fixed = '1\n2\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e702_with_escaped_newline_with_indentation(self):
line = '1; \\\n 2\n'
fixed = '1\n2\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e702_more_complicated(self):
line = """\
def foo():
if bar : bar+=1; bar=bar*bar ; return bar
"""
fixed = """\
def foo():
if bar:
bar += 1
bar = bar * bar
return bar
"""
with autopep8_context(line, options=['--select=E,W']) as result:
self.assertEqual(fixed, result)
def test_e702_with_semicolon_in_string(self):
line = 'print(";");\n'
fixed = 'print(";")\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e702_with_semicolon_in_string_to_the_right(self):
line = 'x = "x"; y = "y;y"\n'
fixed = 'x = "x"\ny = "y;y"\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e702_indent_correctly(self):
line = """\
(
1,
2,
3); 4; 5; 5 # pyflakes
"""
fixed = """\
(
1,
2,
3)
4
5
5 # pyflakes
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e702_with_triple_quote(self):
line = '"""\n hello\n """; 1\n'
fixed = '"""\n hello\n """\n1\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e702_with_triple_quote_and_indent(self):
line = ' """\n hello\n """; 1\n'
fixed = ' """\n hello\n """\n 1\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_e702_with_semicolon_after_string(self):
line = """\
raise IOError('abc '
'def.');
"""
fixed = """\
raise IOError('abc '
'def.')
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
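# E711/E712: comparisons to None and to True/False. E711 needs
# --aggressive; E712 needs aggressive level 2 (-aa).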
def test_e711(self):
line = 'foo == None\n'
fixed = 'foo is None\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_e711_in_conditional(self):
line = 'if foo == None and None == foo:\npass\n'
fixed = 'if foo is None and None == foo:\npass\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_e711_in_conditional_with_multiple_instances(self):
line = 'if foo == None and bar == None:\npass\n'
fixed = 'if foo is None and bar is None:\npass\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_e711_with_not_equals_none(self):
line = 'foo != None\n'
fixed = 'foo is not None\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_e712(self):
line = 'foo == True\n'
fixed = 'foo\n'
with autopep8_context(line,
options=['-aa', '--select=E712']) as result:
self.assertEqual(fixed, result)
def test_e712_in_conditional_with_multiple_instances(self):
line = 'if foo == True and bar == True:\npass\n'
fixed = 'if foo and bar:\npass\n'
with autopep8_context(line,
options=['-aa', '--select=E712']) as result:
self.assertEqual(fixed, result)
def test_e712_with_false(self):
line = 'foo != False\n'
fixed = 'foo\n'
with autopep8_context(line,
options=['-aa', '--select=E712']) as result:
self.assertEqual(fixed, result)
def test_e712_with_special_case_equal_not_true(self):
line = 'if foo != True:\n pass\n'
fixed = 'if not foo:\n pass\n'
with autopep8_context(line,
options=['-aa', '--select=E712']) as result:
self.assertEqual(fixed, result)
def test_e712_with_special_case_equal_false(self):
line = 'if foo == False:\n pass\n'
fixed = 'if not foo:\n pass\n'
with autopep8_context(line,
options=['-aa', '--select=E712']) as result:
self.assertEqual(fixed, result)
def test_e712_only_if_aggressive_level_2(self):
line = 'foo == True\n'
with autopep8_context(line, options=['-a']) as result:
self.assertEqual(line, result)
def test_e711_and_e712(self):
line = 'if (foo == None and bar == True) or (foo != False and bar != None):\npass\n'
fixed = 'if (foo is None and bar) or (foo and bar is not None):\npass\n'
with autopep8_context(line, options=['-aa']) as result:
self.assertEqual(fixed, result)
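# E713: membership tests should use 'not in'.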
def test_e713(self):
line = 'if not x in y:\n pass\n'
fixed = 'if x not in y:\n pass\n'
with autopep8_context(line,
options=['-aa', '--select=E713']) as result:
self.assertEqual(fixed, result)
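# E721: do not compare types directly; use isinstance().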
def test_e721(self):
line = "type('') == type('')\n"
fixed = "isinstance('', type(''))\n"
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_e721_with_str(self):
line = "str == type('')\n"
fixed = "isinstance('', str)\n"
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_e721_in_conditional(self):
line = "if str == type(''):\n pass\n"
fixed = "if isinstance('', str):\n pass\n"
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_should_preserve_vertical_tab(self):
line = """\
#Memory Bu\vffer Register:
"""
fixed = """\
# Memory Bu\vffer Register:
"""
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
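# W191: indentation contains tabs. Tabs inside multiline strings must be
# left untouched.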
def test_w191_should_ignore_multiline_strings(self):
line = """\
print(3 <> 4, '''
while True:
if True:
\t1
\t''', 4 <> 5)
if True:
\t123
"""
fixed = """\
print(3 != 4, '''
while True:
if True:
\t1
\t''', 4 != 5)
if True:
123
"""
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w191_should_ignore_tabs_in_strings(self):
line = """\
if True:
\tx = '''
\t\tblah
\tif True:
\t1
\t'''
if True:
\t123
else:
\t32
"""
fixed = """\
if True:
x = '''
\t\tblah
\tif True:
\t1
\t'''
if True:
123
else:
32
"""
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
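# W291/W292/W293/W391: trailing whitespace, missing newline at end of
# file, whitespace on blank lines, and blank lines at end of file.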
def test_w291(self):
line = "print 'a b '\t \n"
fixed = "print 'a b '\n"
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w291_with_comment(self):
line = "print 'a b ' # comment\t \n"
fixed = "print 'a b ' # comment\n"
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w292(self):
line = '1\n2'
fixed = '1\n2\n'
with autopep8_context(line, options=['--aggressive',
'--select=W292']) as result:
self.assertEqual(fixed, result)
def test_w293(self):
line = '1\n \n2\n'
fixed = '1\n\n2\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w391(self):
line = ' \n'
fixed = ''
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w391_more_complex(self):
line = '123\n456\n \n'
fixed = '123\n456\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
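# W601: .has_key() is deprecated; use the 'in' operator.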
def test_w601(self):
line = 'a = {0: 1}\na.has_key(0)\n'
fixed = 'a = {0: 1}\n0 in a\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w601_word(self):
line = 'my_dict = {0: 1}\nmy_dict.has_key(0)\n'
fixed = 'my_dict = {0: 1}\n0 in my_dict\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w601_conditional(self):
line = 'a = {0: 1}\nif a.has_key(0):\n print 1\n'
fixed = 'a = {0: 1}\nif 0 in a:\n print 1\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w601_self(self):
line = 'self.a.has_key(0)\n'
fixed = '0 in self.a\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w601_self_with_conditional(self):
line = 'if self.a.has_key(0):\n print 1\n'
fixed = 'if 0 in self.a:\n print 1\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w601_with_multiple(self):
line = 'a.has_key(0) and b.has_key(0)\n'
fixed = '0 in a and 0 in b\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w601_with_multiple_nested(self):
line = 'alpha.has_key(nested.has_key(12)) and beta.has_key(1)\n'
fixed = '(12 in nested) in alpha and 1 in beta\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w601_with_more_complexity(self):
line = 'y.has_key(0) + x.has_key(x.has_key(0) + x.has_key(x.has_key(0) + x.has_key(1)))\n'
fixed = '(0 in y) + ((0 in x) + ((0 in x) + (1 in x) in x) in x)\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w601_precedence(self):
line = 'if self.a.has_key(1 + 2):\n print 1\n'
fixed = 'if 1 + 2 in self.a:\n print 1\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w601_with_parens(self):
line = 'foo(12) in alpha\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(line, result)
def test_w601_with_multiline(self):
line = """\
a.has_key(
0
)
"""
fixed = '0 in a\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
@unittest.skipIf(sys.version_info < (2, 6, 4),
'older versions of 2.6 may be buggy')
def test_w601_with_non_ascii(self):
line = """\
# -*- coding: utf-8 -*-
## éはe
correct = dict().has_key('good syntax ?')
"""
fixed = """\
# -*- coding: utf-8 -*-
# éはe
correct = 'good syntax ?' in dict()
"""
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
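# W602: deprecated form of raising an exception ("raise E, 'msg'").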
def test_w602_arg_is_string(self):
line = "raise ValueError, \"w602 test\"\n"
fixed = "raise ValueError(\"w602 test\")\n"
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w602_arg_is_string_with_comment(self):
line = "raise ValueError, \"w602 test\" # comment\n"
fixed = "raise ValueError(\"w602 test\") # comment\n"
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w602_skip_ambiguous_case(self):
line = "raise 'a', 'b', 'c'\n"
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(line, result)
def test_w602_with_logic(self):
line = "raise TypeError, e or 'hello'\n"
fixed = "raise TypeError(e or 'hello')\n"
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w602_triple_quotes(self):
line = 'raise ValueError, """hello"""\n1\n'
fixed = 'raise ValueError("""hello""")\n1\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w602_multiline(self):
line = 'raise ValueError, """\nhello"""\n'
fixed = 'raise ValueError("""\nhello""")\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w602_with_complex_multiline(self):
line = 'raise ValueError, """\nhello %s %s""" % (\n 1, 2)\n'
fixed = 'raise ValueError("""\nhello %s %s""" % (\n 1, 2))\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w602_multiline_with_trailing_spaces(self):
line = 'raise ValueError, """\nhello""" \n'
fixed = 'raise ValueError("""\nhello""")\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w602_multiline_with_escaped_newline(self):
line = 'raise ValueError, \\\n"""\nhello"""\n'
fixed = 'raise ValueError("""\nhello""")\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w602_multiline_with_escaped_newline_and_comment(self):
line = 'raise ValueError, \\\n"""\nhello""" # comment\n'
fixed = 'raise ValueError("""\nhello""") # comment\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w602_multiline_with_multiple_escaped_newlines(self):
line = 'raise ValueError, \\\n\\\n\\\n"""\nhello"""\n'
fixed = 'raise ValueError("""\nhello""")\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w602_multiline_with_nested_quotes(self):
line = 'raise ValueError, """hello\'\'\'blah"a"b"c"""\n'
fixed = 'raise ValueError("""hello\'\'\'blah"a"b"c""")\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w602_with_multiline_with_single_quotes(self):
line = "raise ValueError, '''\nhello'''\n"
fixed = "raise ValueError('''\nhello''')\n"
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w602_multiline_string_stays_the_same(self):
line = 'raise """\nhello"""\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(line, result)
def test_w602_escaped_lf(self):
line = 'raise ValueError, \\\n"hello"\n'
fixed = 'raise ValueError("hello")\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w602_escaped_crlf(self):
line = 'raise ValueError, \\\r\n"hello"\r\n'
fixed = 'raise ValueError("hello")\r\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w602_indentation(self):
line = 'def foo():\n raise ValueError, "hello"\n'
fixed = 'def foo():\n raise ValueError("hello")\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w602_escaped_cr(self):
line = 'raise ValueError, \\\r"hello"\n\n'
fixed = 'raise ValueError("hello")\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w602_multiple_statements(self):
line = 'raise ValueError, "hello";print 1\n'
fixed = 'raise ValueError("hello")\nprint 1\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w602_raise_argument_with_indentation(self):
line = 'if True:\n raise ValueError, "error"\n'
fixed = 'if True:\n raise ValueError("error")\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w602_skip_raise_argument_triple(self):
line = 'raise ValueError, "info", traceback\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(line, result)
def test_w602_skip_raise_argument_triple_with_comment(self):
line = 'raise ValueError, "info", traceback # comment\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(line, result)
def test_w602_raise_argument_triple_fake(self):
line = 'raise ValueError, "info, info2"\n'
fixed = 'raise ValueError("info, info2")\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w602_with_list_comprehension(self):
line = 'raise Error, [x[0] for x in probs]\n'
fixed = 'raise Error([x[0] for x in probs])\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w602_with_bad_syntax(self):
line = "raise Error, 'abc\n"
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(line, result)
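# W603: the '<>' operator is deprecated; use '!='.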
def test_w603(self):
line = 'if 2 <> 2:\n print False'
fixed = 'if 2 != 2:\n print False\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
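# W604: backquotes are deprecated; use repr().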
def test_w604(self):
line = '`1`\n'
fixed = 'repr(1)\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w604_with_multiple_instances(self):
line = '``1`` + ``b``\n'
fixed = 'repr(repr(1)) + repr(repr(b))\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_w604_with_multiple_lines(self):
line = '`(1\n )`\n'
fixed = 'repr((1\n ))\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_trailing_whitespace_in_multiline_string(self):
line = 'x = """ \nhello""" \n'
fixed = 'x = """ \nhello"""\n'
with autopep8_context(line) as result:
self.assertEqual(fixed, result)
def test_trailing_whitespace_in_multiline_string_aggressive(self):
line = 'x = """ \nhello""" \n'
fixed = 'x = """\nhello"""\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(fixed, result)
def test_execfile_in_lambda_should_not_be_modified(self):
"""Modifying this to the exec() form is invalid in Python 2."""
line = 'lambda: execfile("foo.py")\n'
with autopep8_context(line, options=['--aggressive']) as result:
self.assertEqual(line, result)
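# --range restricts fixes to an inclusive, 1-indexed range of lines.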
def test_range(self):
line = 'print( 1 )\nprint( 2 )\n print( 3 )\n'
fixed = 'print( 1 )\nprint(2)\n print( 3 )\n'
with autopep8_context(line, options=['--range', '2', '2']) as result:
self.assertEqual(fixed, result)
def test_range_line_number_changes_from_one_line(self):
line = 'a=12\na=1; b=2;c=3\nd=4;\n\ndef f(a = 1):\n pass\n'
fixed = 'a=12\na = 1\nb = 2\nc = 3\nd=4;\n\ndef f(a = 1):\n pass\n'
with autopep8_context(line, options=['--range', '2', '2']) as result:
self.assertEqual(fixed, result)
def test_range_indent_changes_large_range(self):
line = '\nif True:\n (1, \n 2,\n3)\nelif False:\n a = 1\nelse:\n a = 2\n\nc = 1\nif True:\n c = 2\n a = (1,\n2)\n'
fixed0_9 = '\nif True:\n (1,\n 2,\n 3)\nelif False:\n a = 1\nelse:\n a = 2\n\nc = 1\nif True:\n c = 2\n a = (1,\n2)\n'
with autopep8_context(line, options=['--range', '1', '9']) as result:
self.assertEqual(fixed0_9, result)
def test_range_indent_changes_small_range(self):
line = '\nif True:\n (1, \n 2,\n3)\nelif False:\n a = 1\nelse:\n a = 2\n\nc = 1\nif True:\n c = 2\n a = (1,\n2)\n'
fixed2_5 = '\nif True:\n (1,\n 2,\n 3)\nelif False:\n a = 1\nelse:\n a = 2\n\nc = 1\nif True:\n c = 2\n a = (1,\n2)\n'
with autopep8_context(line, options=['--range', '2', '5']) as result:
self.assertEqual(fixed2_5, result)
def test_range_indent_changes_multiline(self):
line = '\nif True:\n (1, \n 2,\n3)\nelif False:\n a = 1\nelse:\n a = 2\n\nc = 1\nif True:\n c = 2\n a = (1,\n2)\n'
fixed_11_15 = '\nif True:\n (1, \n 2,\n3)\nelif False:\n a = 1\nelse:\n a = 2\n\nc = 1\nif True:\n c = 2\n a = (1,\n 2)\n'
with autopep8_context(line, options=['--range', '11', '15']) as result:
self.assertEqual(fixed_11_15, result)
def test_range_indent_changes_partial_multiline(self):
line = '\nif True:\n (1, \n 2,\n3)\nelif False:\n a = 1\nelse:\n a = 2\n\nc = 1\nif True:\n c = 2\n a = (1,\n2)\n'
fixed_11_14 = '\nif True:\n (1, \n 2,\n3)\nelif False:\n a = 1\nelse:\n a = 2\n\nc = 1\nif True:\n c = 2\n a = (1,\n2)\n'
with autopep8_context(line, options=['--range', '11', '14']) as result:
self.assertEqual(fixed_11_14, result)
def test_range_indent_long_multiline_small_range(self):
line = '\nif True:\n (1,\n2,\n3,\n\n4,\n\n5,\n6)'
fixed_2_3 = '\nif True:\n (1,\n2,\n3,\n\n4,\n\n5,\n6)\n'
with autopep8_context(line, options=['--range', '2', '3']) as result:
self.assertEqual(fixed_2_3, result)
def test_range_indent_long_multiline_partial_range(self):
line = '\nif True:\n (1,\n2,\n3,\n\n4,\n\n5,\n6)'
fixed_2_6 = '\nif True:\n (1,\n 2,\n 3,\n\n4,\n\n5,\n6)\n'
with autopep8_context(line, options=['--range', '2', '6']) as result:
self.assertEqual(fixed_2_6, result)
def test_range_indent_long_multiline_middle_of_multiline(self):
line = '\nif True:\n (1,\n2,\n3,\n\n4,\n\n5,\n6)'
# Weird-ish edge case: the fix extends to lines earlier than the
# requested range, back to the beginning of the multi-line block.
fixed_2_6 = '\nif True:\n (1,\n 2,\n 3,\n\n 4,\n\n5,\n6)\n'
with autopep8_context(line, options=['--range', '4', '6']) as result:
self.assertEqual(fixed_2_6, result)
def test_range_indent_deep_if_blocks_first_block(self):
line = '\nif a:\n if a = 1:\n b = 1\n else:\n b = 2\nelif a == 0:\n b = 3\nelse:\n b = 4\n'
with autopep8_context(line, options=['--range', '2', '5']) as result:
self.assertEqual(line, result)
def test_range_indent_deep_if_blocks_large_range(self):
line = '\nif a:\n if a = 1:\n b = 1\n else:\n b = 2\nelif a == 0:\n b = 3\nelse:\n b = 4\n'
fixed_2_7 = '\nif a:\n if a = 1:\n b = 1\n else:\n b = 2\nelif a == 0:\n b = 3\nelse:\n b = 4\n'
with autopep8_context(line, options=['--range', '2', '7']) as result:
self.assertEqual(fixed_2_7, result)
def test_range_indent_deep_if_blocks_second_block(self):
line = '\nif a:\n if a = 1:\n b = 1\n else:\n b = 2\nelif a == 0:\n b = 3\nelse:\n b = 4\n'
with autopep8_context(line, options=['--range', '6', '9']) as result:
self.assertEqual(line, result)
def test_range_indent_continued_statements(self):
line = '\nif a == 1:\n\ttry:\n\t foo\n\texcept AttributeError:\n\t pass\n\telse:\n\t "nooo"\n\tb = 1\n'
fixed_2_8 = '\nif a == 1:\n\ttry:\n\t foo\n\texcept AttributeError:\n\t pass\n\telse:\n\t "nooo"\n\tb = 1\n'
with autopep8_context(line, options=['--range', '2', '8']) as result:
self.assertEqual(fixed_2_8, result)
def test_range_indent_continued_statements_partial(self):
line = '\nif a == 1:\n\ttry:\n\t foo\n\texcept AttributeError:\n\t pass\n\telse:\n\t "nooo"\n\tb = 1\n'
with autopep8_context(line, options=['--range', '2', '6']) as result:
self.assertEqual(line, result)
def test_range_indent_continued_statements_last_block(self):
line = '\nif a == 1:\n\ttry:\n\t foo\n\texcept AttributeError:\n\t pass\n\telse:\n\t "nooo"\n\tb = 1\n'
with autopep8_context(line, options=['--range', '6', '9']) as result:
self.assertEqual(line, result)
def test_range_indent_neighbouring_blocks(self):
line = '\nif a == 1:\n b = 1\nif a == 2:\n b = 2\nif a == 3:\n b = 3\n'
fixed_2_3 = '\nif a == 1:\n b = 1\nif a == 2:\n b = 2\nif a == 3:\n b = 3\n'
with autopep8_context(line, options=['--range', '2', '3']) as result:
self.assertEqual(fixed_2_3, result)
def test_range_indent_neighbouring_blocks_one_line(self):
line = '\nif a == 1:\n b = 1\nif a == 2:\n b = 2\nif a == 3:\n b = 3\n'
fixed_2_3 = '\nif a == 1:\n b = 1\nif a == 2:\n b = 2\nif a == 3:\n b = 3\n'
fixed_3_3 = fixed_2_3
with autopep8_context(line, options=['--range', '3', '3']) as result:
self.assertEqual(fixed_3_3, result)
def test_range_indent_above_less_indented(self):
line = '\ndef f(x):\n if x:\n return x\n'
fixed_3_4 = '\ndef f(x):\n if x:\n return x\n'
with autopep8_context(line, options=['--range', '3', '4']) as result:
self.assertEqual(fixed_3_4, result)
def test_range_indent_docstrings_partial(self):
line = '\ndef f(x):\n """docstring\n docstring"""\n #comment\n if x:\n return x\n'
# TODO: this should also fix the comment spacing
fixed_2_5 = '\ndef f(x):\n """docstring\n docstring"""\n #comment\n if x:\n return x\n'
with autopep8_context(line, options=['--range', '2', '5']) as result:
self.assertEqual(fixed_2_5, result)
def test_range_indent_docstrings(self):
line = '\ndef f(x):\n """docstring\n docstring"""\n #comment\n if x:\n return x\n'
fixed_2_7 = '\ndef f(x):\n """docstring\n docstring"""\n # comment\n if x:\n return x\n'
with autopep8_context(line, options=['--range', '2', '7']) as result:
self.assertEqual(fixed_2_7, result)
def test_range_indent_multiline_strings(self):
line = '\nif True:\n a = """multi\nline\nstring"""\n #comment\n a=1\na=2\n'
fixed_2_7 = '\nif True:\n a = """multi\nline\nstring"""\n # comment\n a = 1\na=2\n'
with autopep8_context(line, options=['--range', '2', '7']) as result:
self.assertEqual(fixed_2_7, result)
def test_range_with_broken_syntax(self):
line = """\
if True:
if True:
pass
else:
pass
"""
with autopep8_context(line, options=['--range', '1', '1']) as result:
self.assertEqual(line, result)
class CommandLineTests(unittest.TestCase):
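"""Tests that drive autopep8 through its command-line interface."""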
maxDiff = None
def test_diff(self):
line = "'abc' \n"
fixed = "-'abc' \n+'abc'\n"
with autopep8_subprocess(line, ['--diff']) as result:
self.assertEqual(fixed, '\n'.join(result.split('\n')[3:]))
def test_diff_with_empty_file(self):
with autopep8_subprocess('', ['--diff']) as result:
self.assertEqual('\n'.join(result.split('\n')[3:]), '')
def test_diff_with_nonexistent_file(self):
p = Popen(list(AUTOPEP8_CMD_TUPLE) + ['--diff', 'non_existent_file'],
stdout=PIPE, stderr=PIPE)
error = p.communicate()[1].decode('utf-8')
self.assertIn('non_existent_file', error)
def test_diff_with_standard_in(self):
p = Popen(list(AUTOPEP8_CMD_TUPLE) + ['--diff', '-'],
stdout=PIPE, stderr=PIPE)
error = p.communicate()[1].decode('utf-8')
self.assertIn('cannot', error)
def test_pep8_passes(self):
line = "'abc' \n"
fixed = "'abc'\n"
with autopep8_subprocess(line, ['--pep8-passes', '0']) as result:
self.assertEqual(fixed, result)
def test_pep8_ignore(self):
line = "'abc' \n"
with autopep8_subprocess(line, ['--ignore=E,W']) as result:
self.assertEqual(line, result)
def test_help(self):
p = Popen(list(AUTOPEP8_CMD_TUPLE) + ['-h'],
stdout=PIPE)
self.assertIn('usage:', p.communicate()[0].decode('utf-8').lower())
def test_verbose(self):
line = 'bad_syntax)'
with temporary_file_context(line) as filename:
p = Popen(list(AUTOPEP8_CMD_TUPLE) + [filename, '-vvv'],
stdout=PIPE, stderr=PIPE)
verbose_error = p.communicate()[1].decode('utf-8')
self.assertIn("'fix_e901' is not defined", verbose_error)
def test_verbose_diff(self):
line = '+'.join(100 * ['323424234234'])
with temporary_file_context(line) as filename:
p = Popen(list(AUTOPEP8_CMD_TUPLE) +
[filename, '-vvvv', '--diff'],
stdout=PIPE, stderr=PIPE)
verbose_error = p.communicate()[1].decode('utf-8')
self.assertIn('------------', verbose_error)
def test_in_place(self):
line = "'abc' \n"
fixed = "'abc'\n"
with temporary_file_context(line) as filename:
p = Popen(list(AUTOPEP8_CMD_TUPLE) + [filename, '--in-place'])
p.wait()
with open(filename) as f:
self.assertEqual(fixed, f.read())
def test_parallel_jobs(self):
line = "'abc' \n"
fixed = "'abc'\n"
with temporary_file_context(line) as filename_a:
with temporary_file_context(line) as filename_b:
p = Popen(list(AUTOPEP8_CMD_TUPLE) +
[filename_a, filename_b, '--jobs=3', '--in-place'])
p.wait()
with open(filename_a) as f:
self.assertEqual(fixed, f.read())
with open(filename_b) as f:
self.assertEqual(fixed, f.read())
def test_parallel_jobs_with_automatic_cpu_count(self):
line = "'abc' \n"
fixed = "'abc'\n"
with temporary_file_context(line) as filename_a:
with temporary_file_context(line) as filename_b:
p = Popen(list(AUTOPEP8_CMD_TUPLE) +
[filename_a, filename_b, '--jobs=0', '--in-place'])
p.wait()
with open(filename_a) as f:
self.assertEqual(fixed, f.read())
with open(filename_b) as f:
self.assertEqual(fixed, f.read())
def test_in_place_with_empty_file(self):
line = ''
with temporary_file_context(line) as filename:
p = Popen(list(AUTOPEP8_CMD_TUPLE) + [filename, '--in-place'])
p.wait()
self.assertEqual(0, p.returncode)
with open(filename) as f:
self.assertEqual(f.read(), line)
def test_in_place_and_diff(self):
line = "'abc' \n"
with temporary_file_context(line) as filename:
p = Popen(
list(AUTOPEP8_CMD_TUPLE) + [filename,
'--in-place', '--diff'],
stderr=PIPE)
result = p.communicate()[1].decode('utf-8')
self.assertIn('--in-place and --diff are mutually exclusive', result)
def test_recursive(self):
temp_directory = tempfile.mkdtemp(dir='.')
try:
with open(os.path.join(temp_directory, 'a.py'), 'w') as output:
output.write("'abc' \n")
os.mkdir(os.path.join(temp_directory, 'd'))
with open(os.path.join(temp_directory, 'd', 'b.py'),
'w') as output:
output.write('123 \n')
p = Popen(list(AUTOPEP8_CMD_TUPLE) +
[temp_directory, '--recursive', '--diff'],
stdout=PIPE)
result = p.communicate()[0].decode('utf-8')
self.assertEqual(
"-'abc' \n+'abc'",
'\n'.join(result.split('\n')[3:5]))
self.assertEqual(
'-123 \n+123',
'\n'.join(result.split('\n')[8:10]))
finally:
shutil.rmtree(temp_directory)
def test_recursive_should_not_crash_on_unicode_filename(self):
temp_directory = tempfile.mkdtemp(dir='.')
try:
for filename in ['x.py', 'é.py', 'é.txt']:
with open(os.path.join(temp_directory, filename), 'w'):
pass
p = Popen(list(AUTOPEP8_CMD_TUPLE) +
[temp_directory,
'--recursive',
'--diff'],
stdout=PIPE)
self.assertFalse(p.communicate()[0])
self.assertEqual(0, p.returncode)
finally:
shutil.rmtree(temp_directory)
def test_recursive_should_ignore_hidden(self):
temp_directory = tempfile.mkdtemp(dir='.')
temp_subdirectory = tempfile.mkdtemp(prefix='.', dir=temp_directory)
try:
with open(os.path.join(temp_subdirectory, 'a.py'), 'w') as output:
output.write("'abc' \n")
p = Popen(list(AUTOPEP8_CMD_TUPLE) +
[temp_directory, '--recursive', '--diff'],
stdout=PIPE)
result = p.communicate()[0].decode('utf-8')
self.assertEqual(0, p.returncode)
self.assertEqual('', result)
finally:
shutil.rmtree(temp_directory)
def test_exclude(self):
temp_directory = tempfile.mkdtemp(dir='.')
try:
with open(os.path.join(temp_directory, 'a.py'), 'w') as output:
output.write("'abc' \n")
os.mkdir(os.path.join(temp_directory, 'd'))
with open(os.path.join(temp_directory, 'd', 'b.py'),
'w') as output:
output.write('123 \n')
p = Popen(list(AUTOPEP8_CMD_TUPLE) +
[temp_directory, '--recursive', '--exclude=a*',
'--diff'],
stdout=PIPE)
result = p.communicate()[0].decode('utf-8')
self.assertNotIn('abc', result)
self.assertIn('123', result)
finally:
shutil.rmtree(temp_directory)
def test_invalid_option_combinations(self):
line = "'abc' \n"
with temporary_file_context(line) as filename:
for options in [['--recursive', filename], # without --diff
['--jobs=2', filename], # without --diff
['--exclude=foo', filename], # without --recursive
['--max-line-length=0', filename],
[], # no argument
['-', '--in-place'],
['-', '--recursive'],
['-', filename],
['--range', '0', '2', filename],
['--range', '2', '1', filename],
['--range', '-1', '-1', filename],
]:
p = Popen(list(AUTOPEP8_CMD_TUPLE) + options,
stderr=PIPE)
result = p.communicate()[1].decode('utf-8')
self.assertNotEqual(0, p.returncode, msg=str(options))
self.assertTrue(len(result))
def test_list_fixes(self):
with autopep8_subprocess('', options=['--list-fixes']) as result:
self.assertIn('E121', result)
def test_fixpep8_class_constructor(self):
line = 'print 1\nprint 2\n'
with temporary_file_context(line) as filename:
pep8obj = autopep8.FixPEP8(filename, None)
self.assertEqual(''.join(pep8obj.source), line)
def test_inplace_with_multi_files(self):
exception = None
with disable_stderr():
try:
autopep8.parse_args(['test.py', 'dummy.py'])
except SystemExit as e:
exception = e
self.assertTrue(exception)
self.assertEqual(exception.code, 2)
def test_standard_out_should_use_native_line_ending(self):
line = '1\r\n2\r\n3\r\n'
with temporary_file_context(line) as filename:
process = Popen(list(AUTOPEP8_CMD_TUPLE) +
[filename],
stdout=PIPE)
self.assertEqual(
os.linesep.join(['1', '2', '3', '']),
process.communicate()[0].decode('utf-8'))
def test_standard_out_should_use_native_line_ending_with_cr_input(self):
line = '1\r2\r3\r'
with temporary_file_context(line) as filename:
process = Popen(list(AUTOPEP8_CMD_TUPLE) +
[filename],
stdout=PIPE)
self.assertEqual(
os.linesep.join(['1', '2', '3', '']),
process.communicate()[0].decode('utf-8'))
def test_standard_in(self):
line = 'print( 1 )\n'
fixed = 'print(1)' + os.linesep
process = Popen(list(AUTOPEP8_CMD_TUPLE) +
['-'],
stdout=PIPE,
stdin=PIPE)
self.assertEqual(
fixed,
process.communicate(line.encode('utf-8'))[0].decode('utf-8'))
class ExperimentalSystemTests(unittest.TestCase):
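"""Tests for --experimental, which enables experimental E501 (long line) fixes."""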
maxDiff = None
def test_e501_experimental_basic(self):
line = """\
print(111, 111, 111, 111, 222, 222, 222, 222, 222, 222, 222, 222, 222, 333, 333, 333, 333)
"""
fixed = """\
print(
111, 111, 111, 111, 222, 222, 222, 222, 222, 222, 222, 222, 222, 333, 333,
333, 333)
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_with_commas_and_colons(self):
line = """\
foobar = {'aaaaaaaaaaaa': 'bbbbbbbbbbbbbbbb', 'dddddd': 'eeeeeeeeeeeeeeee', 'ffffffffffff': 'gggggggg'}
"""
fixed = """\
foobar = {
'aaaaaaaaaaaa': 'bbbbbbbbbbbbbbbb', 'dddddd': 'eeeeeeeeeeeeeeee',
'ffffffffffff': 'gggggggg'}
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_with_inline_comments(self):
line = """\
' ' # Long inline comments should be moved above.
if True:
' ' # Long inline comments should be moved above.
"""
fixed = """\
# Long inline comments should be moved above.
' '
if True:
# Long inline comments should be moved above.
' '
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_with_inline_comments_should_skip_multiline(self):
line = """\
'''This should be left alone. -----------------------------------------------------
''' # foo
'''This should be left alone. -----------------------------------------------------
''' \\
# foo
'''This should be left alone. -----------------------------------------------------
''' \\
\\
# foo
"""
fixed = """\
'''This should be left alone. -----------------------------------------------------
''' # foo
'''This should be left alone. -----------------------------------------------------
''' # foo
'''This should be left alone. -----------------------------------------------------
''' # foo
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_with_inline_comments_should_skip_keywords(self):
line = """\
' ' # noqa Long inline comments should be moved above.
if True:
' ' # pylint: disable-msgs=E0001
' ' # pragma: no cover
' ' # pragma: no cover
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(line, result)
def test_e501_experimental_with_inline_comments_should_skip_edge_cases(self):
line = """\
if True:
x = \\
' ' # Long inline comments should be moved above.
"""
fixed = """\
if True:
# Long inline comments should be moved above.
x = ' '
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_basic_should_prefer_balanced_brackets(self):
line = """\
if True:
reconstructed = iradon(radon(image), filter="ramp", interpolation="nearest")
"""
fixed = """\
if True:
reconstructed = iradon(
radon(image),
filter="ramp", interpolation="nearest")
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_with_very_long_line(self):
line = """\
x = [3244234243234, 234234234324, 234234324, 23424234, 234234234, 234234, 234243, 234243, 234234234324, 234234324, 23424234, 234234234, 234234, 234243, 234243]
"""
fixed = """\
x = [
3244234243234, 234234234324, 234234324, 23424234, 234234234, 234234,
234243, 234243, 234234234324, 234234324, 23424234, 234234234, 234234,
234243, 234243]
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_shorten_at_commas_skip(self):
line = """\
parser.add_argument('source_corpus', help='corpus name/path relative to an nltk_data directory')
parser.add_argument('target_corpus', help='corpus name/path relative to an nltk_data directory')
"""
fixed = """\
parser.add_argument(
'source_corpus',
help='corpus name/path relative to an nltk_data directory')
parser.add_argument(
'target_corpus',
help='corpus name/path relative to an nltk_data directory')
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_with_shorter_length(self):
line = """\
foooooooooooooooooo('abcdefghijklmnopqrstuvwxyz')
"""
fixed = """\
foooooooooooooooooo(
'abcdefghijklmnopqrstuvwxyz')
"""
with autopep8_context(line,
options=['--max-line-length=40',
'--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_with_indent(self):
line = """\
def d():
print(111, 111, 111, 111, 222, 222, 222, 222, 222, 222, 222, 222, 222, 333, 333, 333, 333)
"""
fixed = """\
def d():
print(
111, 111, 111, 111, 222, 222, 222, 222, 222, 222, 222, 222, 222, 333,
333, 333, 333)
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_alone_with_indentation(self):
line = """\
if True:
print(111, 111, 111, 111, 222, 222, 222, 222, 222, 222, 222, 222, 222, 333, 333, 333, 333)
"""
fixed = """\
if True:
print(
111, 111, 111, 111, 222, 222, 222, 222, 222, 222, 222, 222, 222, 333,
333, 333, 333)
"""
with autopep8_context(line, options=['--select=E501',
'--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_alone_with_tuple(self):
line = """\
fooooooooooooooooooooooooooooooo000000000000000000000000 = [1,
('TransferTime', 'FLOAT')
]
"""
fixed = """\
fooooooooooooooooooooooooooooooo000000000000000000000000 = [
1, ('TransferTime', 'FLOAT')]
"""
with autopep8_context(line, options=['--select=E501',
'--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_should_not_try_to_break_at_every_paren_in_arithmetic(self):
line = """\
term3 = w6 * c5 * (8.0 * psi4 * (11.0 - 24.0 * t2) - 28 * psi3 * (1 - 6.0 * t2) + psi2 * (1 - 32 * t2) - psi * (2.0 * t2) + t4) / 720.0
this_should_be_shortened = (' ', ' ')
"""
fixed = """\
term3 = w6 * c5 * (
8.0 * psi4 * (11.0 - 24.0 * t2) - 28 * psi3 * (1 - 6.0 * t2) + psi2 *
(1 - 32 * t2) - psi * (2.0 * t2) + t4) / 720.0
this_should_be_shortened = (
' ',
' ')
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_arithmetic_operator_with_indent(self):
line = """\
def d():
111 + 111 + 111 + 111 + 111 + 222 + 222 + 222 + 222 + 222 + 222 + 222 + 222 + 222 + 333 + 333 + 333 + 333
"""
fixed = """\
def d():
111 + 111 + 111 + 111 + 111 + 222 + 222 + 222 + 222 + \\
222 + 222 + 222 + 222 + 222 + 333 + 333 + 333 + 333
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_more_complicated(self):
line = """\
blahblah = os.environ.get('blahblah') or os.environ.get('blahblahblah') or os.environ.get('blahblahblahblah')
"""
fixed = """\
blahblah = os.environ.get('blahblah') or os.environ.get(
'blahblahblah') or os.environ.get('blahblahblahblah')
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_skip_even_more_complicated(self):
line = """\
if True:
if True:
if True:
blah = blah.blah_blah_blah_bla_bl(blahb.blah, blah.blah,
blah=blah.label, blah_blah=blah_blah,
blah_blah2=blah_blah)
"""
fixed = """\
if True:
if True:
if True:
blah = blah.blah_blah_blah_bla_bl(
blahb.blah, blah.blah, blah=blah.label, blah_blah=blah_blah,
blah_blah2=blah_blah)
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_prefer_to_break_at_beginning(self):
"""We prefer not to leave part of the arguments hanging."""
line = """\
looooooooooooooong = foo(one, two, three, four, five, six, seven, eight, nine, ten)
"""
fixed = """\
looooooooooooooong = foo(
one, two, three, four, five, six, seven, eight, nine, ten)
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_with_logical_fix(self):
line = """\
xxxxxxxxxxxxxxxxxxxxxxxxxxxx(aaaaaaaaaaaaaaaaaaaaaaa,
bbbbbbbbbbbbbbbbbbbbbbbbbbbb, cccccccccccccccccccccccccccc, dddddddddddddddddddddddd)
"""
fixed = """\
xxxxxxxxxxxxxxxxxxxxxxxxxxxx(
aaaaaaaaaaaaaaaaaaaaaaa, bbbbbbbbbbbbbbbbbbbbbbbbbbbb,
cccccccccccccccccccccccccccc, dddddddddddddddddddddddd)
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_with_logical_fix_and_physical_fix(self):
line = """\
# ------ ------------------------------------------------------------------------
xxxxxxxxxxxxxxxxxxxxxxxxxxxx(aaaaaaaaaaaaaaaaaaaaaaa,
bbbbbbbbbbbbbbbbbbbbbbbbbbbb, cccccccccccccccccccccccccccc, dddddddddddddddddddddddd)
"""
fixed = """\
# ------ -----------------------------------------------------------------
xxxxxxxxxxxxxxxxxxxxxxxxxxxx(
aaaaaaaaaaaaaaaaaaaaaaa, bbbbbbbbbbbbbbbbbbbbbbbbbbbb,
cccccccccccccccccccccccccccc, dddddddddddddddddddddddd)
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_with_logical_fix_and_adjacent_strings(self):
line = """\
print('a-----------------------' 'b-----------------------' 'c-----------------------'
'd-----------------------''e'"f"r"g")
"""
fixed = """\
print(
'a-----------------------'
'b-----------------------'
'c-----------------------'
'd-----------------------'
'e'
"f"
r"g")
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_with_multiple_lines(self):
line = """\
foo_bar_zap_bing_bang_boom(111, 111, 111, 111, 222, 222, 222, 222, 222, 222, 222, 222, 222, 333, 333,
111, 111, 111, 111, 222, 222, 222, 222, 222, 222, 222, 222, 222, 333, 333)
"""
fixed = """\
foo_bar_zap_bing_bang_boom(
111, 111, 111, 111, 222, 222, 222, 222, 222, 222, 222, 222, 222, 333, 333,
111, 111, 111, 111, 222, 222, 222, 222, 222, 222, 222, 222, 222, 333, 333)
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_do_not_break_on_keyword(self):
# We don't want to put a newline after the equals sign of a keyword
# argument, since doing so violates PEP 8.
line = """\
if True:
long_variable_name = tempfile.mkstemp(prefix='abcdefghijklmnopqrstuvwxyz0123456789')
"""
fixed = """\
if True:
long_variable_name = tempfile.mkstemp(
prefix='abcdefghijklmnopqrstuvwxyz0123456789')
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_do_not_begin_line_with_comma(self):
line = """\
def dummy():
if True:
if True:
if True:
object = ModifyAction( [MODIFY70.text, OBJECTBINDING71.text, COLON72.text], MODIFY70.getLine(), MODIFY70.getCharPositionInLine() )
"""
fixed = """\
def dummy():
if True:
if True:
if True:
object = ModifyAction(
[MODIFY70.text, OBJECTBINDING71.text, COLON72.text],
MODIFY70.getLine(),
MODIFY70.getCharPositionInLine())
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_should_not_break_on_dot(self):
line = """\
if True:
if True:
raise xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx('xxxxxxxxxxxxxxxxx "{d}" xxxxxxxxxxxxxx'.format(d='xxxxxxxxxxxxxxx'))
"""
fixed = """\
if True:
if True:
raise xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx(
'xxxxxxxxxxxxxxxxx "{d}" xxxxxxxxxxxxxx'.format(
d='xxxxxxxxxxxxxxx'))
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_with_comment(self):
line = """123
if True:
if True:
if True:
if True:
if True:
if True:
# This is a long comment that should be wrapped. I will wrap it using textwrap to be within 72 characters.
pass
# http://foo.bar/abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-
# The following is ugly commented-out code and should not be touched.
#xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx = 1
"""
fixed = """123
if True:
if True:
if True:
if True:
if True:
if True:
# This is a long comment that should be wrapped. I will
# wrap it using textwrap to be within 72 characters.
pass
# http://foo.bar/abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-abc-
# The following is ugly commented-out code and should not be touched.
#xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx = 1
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_with_comment_should_not_modify_docstring(self):
line = '''\
def foo():
"""
# This is a long comment that should be wrapped. I will wrap it using textwrap to be within 72 characters.
"""
'''
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(line, result)
def test_e501_experimental_should_only_modify_last_comment(self):
line = """123
if True:
if True:
if True:
if True:
if True:
if True:
# This is a long comment that should be wrapped. I will wrap it using textwrap to be within 72 characters.
# 1. This is a long comment that should be wrapped. I will wrap it using textwrap to be within 72 characters.
# 2. This is a long comment that should be wrapped. I will wrap it using textwrap to be within 72 characters.
# 3. This is a long comment that should be wrapped. I will wrap it using textwrap to be within 72 characters.
"""
fixed = """123
if True:
if True:
if True:
if True:
if True:
if True:
# This is a long comment that should be wrapped. I will wrap it using textwrap to be within 72 characters.
# 1. This is a long comment that should be wrapped. I will wrap it using textwrap to be within 72 characters.
# 2. This is a long comment that should be wrapped. I will wrap it using textwrap to be within 72 characters.
# 3. This is a long comment that should be wrapped. I
# will wrap it using textwrap to be within 72
# characters.
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_should_not_interfere_with_non_comment(self):
line = '''
"""
# not actually a comment %d. 12345678901234567890, 12345678901234567890, 12345678901234567890.
""" % (0,)
'''
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(line, result)
def test_e501_experimental_should_cut_comment_pattern(self):
line = """123
# -- Useless lines ----------------------------------------------------------------------
321
"""
fixed = """123
# -- Useless lines -------------------------------------------------------
321
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_with_function_should_not_break_on_colon(self):
line = r"""
class Useless(object):
def _table_field_is_plain_widget(self, widget):
if widget.__class__ == Widget or\
(widget.__class__ == WidgetMeta and Widget in widget.__bases__):
return True
return False
"""
fixed = r"""
class Useless(object):
def _table_field_is_plain_widget(self, widget):
if widget.__class__ == Widget or(
widget.__class__ == WidgetMeta and Widget in widget.__bases__):
return True
return False
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_with_experimental(self):
# FIXME: This has really bad output.
line = """\
models = {
'auth.group': {
'Meta': {'object_name': 'Group'},
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
'auth.permission': {
'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'},
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
}
"""
fixed = """\
models = {
'auth.group':
{'Meta': {'object_name': 'Group'},
'permissions':
('django.db.models.fields.related.ManyToManyField', [],
{'to': "orm['auth.Permission']", 'symmetrical': 'False',
'blank': 'True'})},
'auth.permission':
{
'Meta':
{
'ordering':
"('content_type__app_label', 'content_type__model', 'codename')",
'unique_together': "(('content_type', 'codename'),)",
'object_name': 'Permission'},
'name': ('django.db.models.fields.CharField', [],
{'max_length': '50'})}, }
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_and_multiple_logical_lines(self):
line = """\
xxxxxxxxxxxxxxxxxxxxxxxxxxxx(aaaaaaaaaaaaaaaaaaaaaaa,
bbbbbbbbbbbbbbbbbbbbbbbbbbbb, cccccccccccccccccccccccccccc, dddddddddddddddddddddddd)
xxxxxxxxxxxxxxxxxxxxxxxxxxxx(aaaaaaaaaaaaaaaaaaaaaaa,
bbbbbbbbbbbbbbbbbbbbbbbbbbbb, cccccccccccccccccccccccccccc, dddddddddddddddddddddddd)
"""
fixed = """\
xxxxxxxxxxxxxxxxxxxxxxxxxxxx(
aaaaaaaaaaaaaaaaaaaaaaa, bbbbbbbbbbbbbbbbbbbbbbbbbbbb,
cccccccccccccccccccccccccccc, dddddddddddddddddddddddd)
xxxxxxxxxxxxxxxxxxxxxxxxxxxx(
aaaaaaaaaaaaaaaaaaaaaaa, bbbbbbbbbbbbbbbbbbbbbbbbbbbb,
cccccccccccccccccccccccccccc, dddddddddddddddddddddddd)
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_and_multiple_logical_lines_with_math(self):
line = """\
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx([-1 + 5 / -10,
100,
-3 - 4])
"""
fixed = """\
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx(
[-1 + 5 / -10, 100, -3 - 4])
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_and_import(self):
line = """\
from . import (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,
yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy)
"""
fixed = """\
from . import (
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,
yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy)
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_shorten_comment_with_experimental(self):
line = """\
# ------ -------------------------------------------------------------------------
"""
fixed = """\
# ------ -----------------------------------------------------------------
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_with_experimental_and_escaped_newline(self):
line = """\
if True or \\
False: # test test test test test test test test test test test test test test
pass
"""
fixed = """\
# test test test test test test test test test test test test test test
if True or False:
pass
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_with_experimental_and_multiline_string(self):
line = """\
print('---------------------------------------------------------------------',
('================================================', '====================='),
'''--------------------------------------------------------------------------------
''')
"""
fixed = """\
print(
'---------------------------------------------------------------------',
('================================================',
'====================='),
'''--------------------------------------------------------------------------------
''')
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_with_experimental_and_multiline_string_with_addition(self):
line = '''\
def f():
email_text += """<html>This is a really long docstring that goes over the column limit and is multi-line.<br><br>
<b>Czar: </b>"""+despot["Nicholas"]+"""<br>
<b>Minion: </b>"""+serf["Dmitri"]+"""<br>
<b>Residence: </b>"""+palace["Winter"]+"""<br>
</body>
</html>"""
'''
fixed = '''\
def f():
email_text += """<html>This is a really long docstring that goes over the column limit and is multi-line.<br><br>
<b>Czar: </b>""" + despot["Nicholas"] + """<br>
<b>Minion: </b>""" + serf["Dmitri"] + """<br>
<b>Residence: </b>""" + palace["Winter"] + """<br>
</body>
</html>"""
'''
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_with_experimental_and_multiline_string_in_parens(self):
line = '''\
def f():
email_text += ("""<html>This is a really long docstring that goes over the column limit and is multi-line.<br><br>
<b>Czar: </b>"""+despot["Nicholas"]+"""<br>
<b>Minion: </b>"""+serf["Dmitri"]+"""<br>
<b>Residence: </b>"""+palace["Winter"]+"""<br>
</body>
</html>""")
'''
fixed = '''\
def f():
email_text += (
"""<html>This is a really long docstring that goes over the column limit and is multi-line.<br><br>
<b>Czar: </b>""" + despot["Nicholas"] + """<br>
<b>Minion: </b>""" + serf["Dmitri"] + """<br>
<b>Residence: </b>""" + palace["Winter"] + """<br>
</body>
</html>""")
'''
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_with_experimental_and_indentation(self):
line = """\
if True:
# comment here
print(aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa,
bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb,cccccccccccccccccccccccccccccccccccccccccc)
"""
fixed = """\
if True:
# comment here
print(
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa,
bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb,
cccccccccccccccccccccccccccccccccccccccccc)
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_with_multiple_keys_and_experimental(self):
line = """\
one_two_three_four_five_six = {'one two three four five': 12345, 'asdfsdflsdkfjl sdflkjsdkfkjsfjsdlkfj sdlkfjlsfjs': '343',
1: 1}
"""
fixed = """\
one_two_three_four_five_six = {
'one two three four five': 12345,
'asdfsdflsdkfjl sdflkjsdkfkjsfjsdlkfj sdlkfjlsfjs': '343', 1: 1}
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_with_experimental_and_carriage_returns_only(self):
"""Make sure _find_logical() does not crash."""
line = 'if True:\r from aaaaaaaaaaaaaaaa import bbbbbbbbbbbbbbbbbbb\r \r ccccccccccc = None\r'
fixed = 'if True:\r from aaaaaaaaaaaaaaaa import bbbbbbbbbbbbbbbbbbb\r\r ccccccccccc = None\r'
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_should_ignore_imports(self):
line = """\
import logging, os, bleach, commonware, urllib2, json, time, requests, urlparse, re
"""
with autopep8_context(line, options=['--select=E501',
'--experimental']) as result:
self.assertEqual(line, result)
def test_e501_experimental_should_not_do_useless_things(self):
line = """\
foo(' ')
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(line, result)
def test_e501_experimental_with_percent(self):
line = """\
raise MultiProjectException("Ambiguous workspace: %s=%s, %s" % ( varname, varname_path, os.path.abspath(config_filename)))
"""
fixed = """\
raise MultiProjectException(
"Ambiguous workspace: %s=%s, %s" %
(varname, varname_path, os.path.abspath(config_filename)))
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_with_def(self):
line = """\
def foobar(sldfkjlsdfsdf, kksdfsdfsf,sdfsdfsdf, sdfsdfkdk, szdfsdfsdf, sdfsdfsdfsdlkfjsdlf, sdfsdfddf,sdfsdfsfd, sdfsdfdsf):
pass
"""
fixed = """\
def foobar(
sldfkjlsdfsdf, kksdfsdfsf, sdfsdfsdf, sdfsdfkdk, szdfsdfsdf,
sdfsdfsdfsdlkfjsdlf, sdfsdfddf, sdfsdfsfd, sdfsdfdsf):
pass
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_with_tuple(self):
line = """\
def f():
man_this_is_a_very_long_function_name(an_extremely_long_variable_name,
('a string that is long: %s'%'bork'))
"""
fixed = """\
def f():
man_this_is_a_very_long_function_name(
an_extremely_long_variable_name,
('a string that is long: %s' % 'bork'))
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_with_tuple_in_list(self):
line = """\
def f(self):
self._xxxxxxxx(aaaaaa, bbbbbbbbb, cccccccccccccccccc,
[('mmmmmmmmmm', self.yyyyyyyyyy.zzzzzzz/_DDDDD)], eee, 'ff')
"""
fixed = """\
def f(self):
self._xxxxxxxx(
aaaaaa, bbbbbbbbb, cccccccccccccccccc,
[('mmmmmmmmmm', self.yyyyyyyyyy.zzzzzzz / _DDDDD)],
eee, 'ff')
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
@unittest.skipIf(sys.version_info < (2, 7),
'Python 2.6 does not support dictionary comprehensions')
def test_e501_experimental_with_complex_reformat(self):
line = """\
bork(111, 111, 111, 111, 222, 222, 222, { 'foo': 222, 'qux': 222 }, ((['hello', 'world'], ['yo', 'stella', "how's", 'it'], ['going']), {str(i): i for i in range(10)}, {'bork':((x, x**x) for x in range(10))}), 222, 222, 222, 222, 333, 333, 333, 333)
"""
fixed = """\
bork(
111, 111, 111, 111, 222, 222, 222, {'foo': 222, 'qux': 222},
((['hello', 'world'],
['yo', 'stella', "how's", 'it'],
['going']),
{str(i): i for i in range(10)},
{'bork': ((x, x ** x) for x in range(10))}),
222, 222, 222, 222, 333, 333, 333, 333)
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_with_multiple_lines_and_quotes(self):
line = """\
if True:
xxxxxxxxxxx = xxxxxxxxxxxxxxxxx(xxxxxxxxxxx, xxxxxxxxxxxxxxxx={'xxxxxxxxxxxx': 'xxxxx',
'xxxxxxxxxxx': xx,
'xxxxxxxx': False,
})
"""
fixed = """\
if True:
xxxxxxxxxxx = xxxxxxxxxxxxxxxxx(
xxxxxxxxxxx,
xxxxxxxxxxxxxxxx={'xxxxxxxxxxxx': 'xxxxx', 'xxxxxxxxxxx': xx,
'xxxxxxxx': False, })
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_with_dot_calls(self):
line = """\
if True:
logging.info('aaaaaa bbbbb dddddd ccccccc eeeeeee fffffff gg: %s',
xxxxxxxxxxxxxxxxx.yyyyyyyyyyyyyyyyyyyyy(zzzzzzzzzzzzzzzzz.jjjjjjjjjjjjjjjjj()))
"""
fixed = """\
if True:
logging.info(
'aaaaaa bbbbb dddddd ccccccc eeeeeee fffffff gg: %s',
xxxxxxxxxxxxxxxxx.yyyyyyyyyyyyyyyyyyyyy(
zzzzzzzzzzzzzzzzz.jjjjjjjjjjjjjjjjj()))
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_avoid_breaking_at_empty_parentheses_if_possible(self):
line = """\
someverylongindenttionwhatnot().foo().bar().baz("and here is a long string 123456789012345678901234567890")
"""
fixed = """\
someverylongindenttionwhatnot().foo().bar().baz(
"and here is a long string 123456789012345678901234567890")
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_with_unicode(self):
line = """\
someverylongindenttionwhatnot().foo().bar().baz("and here is a l안녕하세요 123456789012345678901234567890")
"""
fixed = """\
someverylongindenttionwhatnot().foo().bar().baz(
"and here is a l안녕하세요 123456789012345678901234567890")
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_with_tuple_assignment(self):
line = """\
if True:
(xxxxxxx,) = xxxx.xxxxxxx.xxxxx(xxxxxxxxxxxx.xx).xxxxxx(xxxxxxxxxxxx.xxxx == xxxx.xxxx).xxxxx()
"""
fixed = """\
if True:
(xxxxxxx,) = xxxx.xxxxxxx.xxxxx(xxxxxxxxxxxx.xx).xxxxxx(
xxxxxxxxxxxx.xxxx == xxxx.xxxx).xxxxx()
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_tuple_on_line(self):
line = """\
def f():
self.aaaaaaaaa(bbbbbb, ccccccccc, dddddddddddddddd,
((x, y/eeeeeee) for x, y in self.outputs.total.iteritems()),
fff, 'GG')
"""
fixed = """\
def f():
self.aaaaaaaaa(
bbbbbb, ccccccccc, dddddddddddddddd,
((x, y / eeeeeee) for x, y in self.outputs.total.iteritems()),
fff, 'GG')
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_tuple_on_line_two_space_indent(self):
line = """\
def f():
self.aaaaaaaaa(bbbbbb, ccccccccc, dddddddddddddddd,
((x, y/eeeeeee) for x, y in self.outputs.total.iteritems()),
fff, 'GG')
"""
fixed = """\
def f():
self.aaaaaaaaa(bbbbbb, ccccccccc, dddddddddddddddd,
((x, y / eeeeeee) for x, y in self.outputs.total.iteritems()),
fff, 'GG')
"""
with autopep8_context(line, options=['--experimental',
'--indent-size=2']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_oversized_default_initializer(self):
line = """\
aaaaaaaaaaaaaaaaaaaaa(lllll,mmmmmmmm,nnn,fffffffffff,ggggggggggg,hhh,ddddddddddddd=eeeeeeeee,bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb=ccccccccccccccccccccccccccccccccccccccccccccccccc,bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb=cccccccccccccccccccccccccccccccccccccccccccccccc)
"""
fixed = """\
aaaaaaaaaaaaaaaaaaaaa(
lllll, mmmmmmmm, nnn, fffffffffff, ggggggggggg, hhh,
ddddddddddddd=eeeeeeeee,
bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb=ccccccccccccccccccccccccccccccccccccccccccccccccc,
bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb=cccccccccccccccccccccccccccccccccccccccccccccccc)
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_decorator(self):
line = """\
@foo(('xxxxxxxxxxxxxxxxxxxxxxxxxx', users.xxxxxxxxxxxxxxxxxxxxxxxxxx), ('yyyyyyyyyyyy', users.yyyyyyyyyyyy), ('zzzzzzzzzzzzzz', users.zzzzzzzzzzzzzz))
"""
fixed = """\
@foo(
('xxxxxxxxxxxxxxxxxxxxxxxxxx', users.xxxxxxxxxxxxxxxxxxxxxxxxxx),
('yyyyyyyyyyyy', users.yyyyyyyyyyyy),
('zzzzzzzzzzzzzz', users.zzzzzzzzzzzzzz))
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_long_class_name(self):
line = """\
class AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA(BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB):
pass
"""
fixed = """\
class AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA(
BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB):
pass
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_no_line_change(self):
line = """\
return '<a href="javascript:;" class="copy-to-clipboard-button" data-clipboard-text="%s" title="copy url to clipboard">Copy Link</a>' % url
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(line, result)
def test_e501_experimental_splitting_small_arrays(self):
line = """\
def foo():
unspecified[service] = ('# The %s brown fox jumped over the lazy, good for nothing '
'dog until it grew tired and set its sights upon the cat!' % adj)
"""
fixed = """\
def foo():
unspecified[service] = (
'# The %s brown fox jumped over the lazy, good for nothing '
'dog until it grew tired and set its sights upon the cat!' % adj)
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_no_splitting_in_func_call(self):
line = """\
def foo():
if True:
if True:
function.calls('%r (%s): aaaaaaaa bbbbbbbbbb ccccccc ddddddd eeeeee (%d, %d)',
xxxxxx.yy, xxxxxx.yyyy, len(mmmmmmmmmmmmm['fnord']),
len(mmmmmmmmmmmmm['asdfakjhdsfkj']))
"""
fixed = """\
def foo():
if True:
if True:
function.calls(
'%r (%s): aaaaaaaa bbbbbbbbbb ccccccc ddddddd eeeeee (%d, %d)',
xxxxxx.yy, xxxxxx.yyyy, len(mmmmmmmmmmmmm['fnord']),
len(mmmmmmmmmmmmm['asdfakjhdsfkj']))
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_no_splitting_at_dot(self):
line = """\
xxxxxxxxxxxxxxxxxxxxxxxxxxxx = [yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy.MMMMMM_NNNNNNN_OOOOO,
yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy.PPPPPP_QQQQQQQ_RRRRR,
yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy.SSSSSS_TTTTTTT_UUUUU]
"""
fixed = """\
xxxxxxxxxxxxxxxxxxxxxxxxxxxx = [
yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy.MMMMMM_NNNNNNN_OOOOO,
yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy.PPPPPP_QQQQQQQ_RRRRR,
yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy.SSSSSS_TTTTTTT_UUUUU]
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_no_splitting_before_arg_list(self):
line = """\
xxxxxxxxxxxx = [yyyyyy['yyyyyy'].get('zzzzzzzzzzz') for yyyyyy in x.get('aaaaaaaaaaa') if yyyyyy['yyyyyy'].get('zzzzzzzzzzz')]
"""
fixed = """\
xxxxxxxxxxxx = [yyyyyy['yyyyyy'].get('zzzzzzzzzzz')
for yyyyyy in x.get('aaaaaaaaaaa')
if yyyyyy['yyyyyy'].get('zzzzzzzzzzz')]
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_dont_split_if_looks_bad(self):
line = """\
def f():
if True:
BAD(('xxxxxxxxxxxxx', 42), 'I died for beauty, but was scarce / Adjusted in the tomb %s', yyyyyyyyyyyyy)
"""
fixed = """\
def f():
if True:
BAD(('xxxxxxxxxxxxx', 42),
'I died for beauty, but was scarce / Adjusted in the tomb %s',
yyyyyyyyyyyyy)
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_list_comp(self):
line = """\
xxxxxxxxxxxs = [xxxxxxxxxxx for xxxxxxxxxxx in xxxxxxxxxxxs if not yyyyyyyyyyyy[xxxxxxxxxxx] or not yyyyyyyyyyyy[xxxxxxxxxxx].zzzzzzzzzz]
"""
fixed = """\
xxxxxxxxxxxs = [
xxxxxxxxxxx for xxxxxxxxxxx in xxxxxxxxxxxs
if not yyyyyyyyyyyy[xxxxxxxxxxx] or
not yyyyyyyyyyyy[xxxxxxxxxxx].zzzzzzzzzz]
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
line = """\
def f():
xxxxxxxxxx = [f for f in yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy.zzzzzzzzzzzzzzzzzzzzzzzz.aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa]
"""
fixed = """\
def f():
xxxxxxxxxx = [
f
for f in
yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy.zzzzzzzzzzzzzzzzzzzzzzzz.aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa]
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_dict(self):
line = """\
def f():
zzzzzzzzzzzzz = {
'aaaaaa/bbbbbb/ccccc/dddddddd/eeeeeeeee/fffffffffff/ggggggggg/hhhhhhhh.py':
yyyyyyyyyyy.xxxxxxxxxxx(
'aa/bbbbbbb/cc/ddddddd/eeeeeeeeeee/fffffffffff/ggggggggg/hhhhhhh/ggggg.py',
'00000000',
yyyyyyyyyyy.xxxxxxxxx.zzzz),
}
"""
fixed = """\
def f():
zzzzzzzzzzzzz = {
'aaaaaa/bbbbbb/ccccc/dddddddd/eeeeeeeee/fffffffffff/ggggggggg/hhhhhhhh.py':
yyyyyyyyyyy.xxxxxxxxxxx(
'aa/bbbbbbb/cc/ddddddd/eeeeeeeeeee/fffffffffff/ggggggggg/hhhhhhh/ggggg.py',
'00000000', yyyyyyyyyyy.xxxxxxxxx.zzzz), }
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_indentation(self):
line = """\
class Klass(object):
'''Class docstring.'''
def Quote(self, parameter_1, parameter_2, parameter_3, parameter_4, parameter_5):
pass
"""
fixed = """\
class Klass(object):
'''Class docstring.'''
def Quote(
self, parameter_1, parameter_2, parameter_3, parameter_4,
parameter_5):
pass
"""
with autopep8_context(line, options=['--experimental',
'--indent-size=2']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_long_function_call_elements(self):
line = """\
def g():
pppppppppppppppppppppppppp1, pppppppppppppppppppppppp2 = (
zzzzzzzzzzzz.yyyyyyyyyyyyyy(aaaaaaaaa=10, bbbbbbbbbbbbbbbb='2:3',
cccccccc='{1:2}', dd=1, eeeee=0),
zzzzzzzzzzzz.yyyyyyyyyyyyyy(dd=7, aaaaaaaaa=16, bbbbbbbbbbbbbbbb='2:3',
cccccccc='{1:2}',
eeeee=xxxxxxxxxxxxxxxxx.wwwwwwwwwwwww.vvvvvvvvvvvvvvvvvvvvvvvvv))
"""
fixed = """\
def g():
pppppppppppppppppppppppppp1, pppppppppppppppppppppppp2 = (
zzzzzzzzzzzz.yyyyyyyyyyyyyy(
aaaaaaaaa=10, bbbbbbbbbbbbbbbb='2:3', cccccccc='{1:2}', dd=1,
eeeee=0),
zzzzzzzzzzzz.yyyyyyyyyyyyyy(
dd=7, aaaaaaaaa=16, bbbbbbbbbbbbbbbb='2:3', cccccccc='{1:2}',
eeeee=xxxxxxxxxxxxxxxxx.wwwwwwwwwwwww.vvvvvvvvvvvvvvvvvvvvvvvvv))
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_long_nested_tuples_in_arrays(self):
line = """\
def f():
aaaaaaaaaaa.bbbbbbb([
('xxxxxxxxxx', 'yyyyyy', 'Heaven hath no wrath like love to hatred turned. Nor hell a fury like a woman scorned.'),
('xxxxxxx', 'yyyyyyyyyyy', "To the last I grapple with thee. From hell's heart I stab at thee. For hate's sake I spit my last breath at thee!")])
"""
fixed = """\
def f():
aaaaaaaaaaa.bbbbbbb(
[('xxxxxxxxxx', 'yyyyyy',
'Heaven hath no wrath like love to hatred turned. Nor hell a fury like a woman scorned.'),
('xxxxxxx', 'yyyyyyyyyyy',
"To the last I grapple with thee. From hell's heart I stab at thee. For hate's sake I spit my last breath at thee!")])
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_func_call_open_paren_not_separated(self):
# Don't separate the opening paren of a function call from the
# function's name.
line = """\
def f():
owned_list = [o for o in owned_list if self.display['zzzzzzzzzzzzzz'] in aaaaaaaaaaaaaaaaa.bbbbbbbbbbbbbbbbbbbb(o.qq, ccccccccccccccccccccccccccc.ddddddddd.eeeeeee)]
"""
fixed = """\
def f():
owned_list = [
o for o in owned_list
if self.display['zzzzzzzzzzzzzz'] in aaaaaaaaaaaaaaaaa.bbbbbbbbbbbbbbbbbbbb(
o.qq, ccccccccccccccccccccccccccc.ddddddddd.eeeeeee)]
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_long_dotted_object(self):
# Don't separate a long dotted object too soon. Otherwise, it may end
# up with most of its elements on separate lines.
line = """\
def f(self):
return self.xxxxxxxxxxxxxxx(aaaaaaa.bbbbb.ccccccc.ddd.eeeeee.fffffffff.ggggg.hhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh)
"""
fixed = """\
def f(self):
return self.xxxxxxxxxxxxxxx(
aaaaaaa.bbbbb.ccccccc.ddd.eeeeee.fffffffff.ggggg.
hhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh)
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_parsing_dict_with_comments(self):
line = """\
self.display['xxxxxxxxxxxx'] = [{'title': _('Library'), #. This is the first comment.
'flag': aaaaaaaaaa.bbbbbbbbb.cccccccccc
}, {'title': _('Original'), #. This is the second comment.
'flag': aaaaaaaaaa.bbbbbbbbb.dddddddddd
}, {'title': _('Unknown'), #. This is the third comment.
'flag': aaaaaaaaaa.bbbbbbbbb.eeeeeeeeee}]
"""
fixed = """\
self.display['xxxxxxxxxxxx'] = [{'title': _('Library'), # . This is the first comment.
'flag': aaaaaaaaaa.bbbbbbbbb.cccccccccc
# . This is the second comment.
}, {'title': _('Original'),
'flag': aaaaaaaaaa.bbbbbbbbb.dddddddddd
# . This is the third comment.
}, {'title': _('Unknown'),
'flag': aaaaaaaaaa.bbbbbbbbb.eeeeeeeeee}]
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_if_line_over_limit(self):
line = """\
if not xxxxxxxxxxxx(aaaaaaaaaaaaaaaaaa, bbbbbbbbbbbbbbbb, cccccccccccccc, dddddddddddddddddddddd):
return 1
"""
fixed = """\
if not xxxxxxxxxxxx(
aaaaaaaaaaaaaaaaaa, bbbbbbbbbbbbbbbb, cccccccccccccc,
dddddddddddddddddddddd):
return 1
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_for_line_over_limit(self):
line = """\
for aaaaaaaaa in xxxxxxxxxxxx(aaaaaaaaaaaaaaaaaa, bbbbbbbbbbbbbbbb, cccccccccccccc, dddddddddddddddddddddd):
pass
"""
fixed = """\
for aaaaaaaaa in xxxxxxxxxxxx(
aaaaaaaaaaaaaaaaaa, bbbbbbbbbbbbbbbb, cccccccccccccc,
dddddddddddddddddddddd):
pass
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
def test_e501_experimental_while_line_over_limit(self):
line = """\
while xxxxxxxxxxxx(aaaaaaaaaaaaaaaaaa, bbbbbbbbbbbbbbbb, cccccccccccccc, dddddddddddddddddddddd):
pass
"""
fixed = """\
while xxxxxxxxxxxx(
aaaaaaaaaaaaaaaaaa, bbbbbbbbbbbbbbbb, cccccccccccccc,
dddddddddddddddddddddd):
pass
"""
with autopep8_context(line, options=['--experimental']) as result:
self.assertEqual(fixed, result)
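# Helper context managers for the tests above: autopep8_context runs
# autopep8.fix_file() on a temporary file containing `line` and yields the
# fixed source; autopep8_subprocess does the same through the command line.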
@contextlib.contextmanager
def autopep8_context(line, options=None):
if not options:
options = []
with temporary_file_context(line) as filename:
options = autopep8.parse_args([filename] + list(options))
yield autopep8.fix_file(filename=filename, options=options)
@contextlib.contextmanager
def autopep8_subprocess(line, options):
with temporary_file_context(line) as filename:
p = Popen(list(AUTOPEP8_CMD_TUPLE) + [filename] + options,
stdout=PIPE)
yield p.communicate()[0].decode('utf-8')
@contextlib.contextmanager
def temporary_file_context(text, suffix='', prefix=''):
temporary = mkstemp(suffix=suffix, prefix=prefix)
os.close(temporary[0])
with autopep8.open_with_encoding(temporary[1],
encoding='utf-8',
mode='w') as temp_file:
temp_file.write(text)
yield temporary[1]
os.remove(temporary[1])
@contextlib.contextmanager
def disable_stderr():
sio = StringIO()
with capture_stderr(sio):
yield
@contextlib.contextmanager
def capture_stderr(sio):
_tmp = sys.stderr
sys.stderr = sio
try:
yield
finally:
sys.stderr = _tmp
if __name__ == '__main__':
unittest.main()
|
michaelBenin/autopep8
|
test/test_autopep8.py
|
Python
|
mit
| 196,084
|
[
"Psi4"
] |
a076c001a02bc81f9fb30c3e02242ebbe5f2afd2ec8932b5bb5b410de93520de
|
#! python
# encoding: utf-8
# Wellcome Trust Sanger Institute and Imperial College London
# Copyright (C) 2020 Wellcome Trust Sanger Institute and Imperial College London
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# Generic imports
import sys
import argparse
import re
# Phylogenetic imports
import dendropy
# Biopython imports
from Bio import AlignIO
from Bio import Phylo
from Bio import SeqIO
from Bio.Align import MultipleSeqAlignment
from Bio.Seq import Seq
# command line parsing
def get_options():
parser = argparse.ArgumentParser(description='Extract a clade from a Gubbins output',
prog='extract_gubbins_clade')
# input options
parser.add_argument('--clades',
help = 'Two column file assigning isolates (first column) to clades (second column)',
required = True)
parser.add_argument('--gff',
help = 'recombination prediction GFF file output by Gubbins',
required = True)
parser.add_argument('--snps',
help = 'branch base reconstruction EMBL file output by Gubbins',
required = True)
parser.add_argument('--exclude-regions',
help = 'Two column file specifying start and end of regions to be excluded',
required = False,
default = None)
parser.add_argument('--tree',
help = 'Labelled tree output by Gubbins',
required = True)
parser.add_argument('--print-trees',
help = 'Print clade trees',
default = False,
action = 'store_true')
parser.add_argument('--print-rec-lengths',
help = 'Print recombination lengths',
default = False,
action = 'store_true')
parser.add_argument('--out',
help = 'Output file prefix; suffix is "_clades.csv"',
required = True)
return parser.parse_args()
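# Example clades file layout (hypothetical isolate/clade names; two
# whitespace-separated columns, as parsed below):
#
#   isolate_1   cladeA
#   isolate_2   cladeA
#   isolate_3   cladeB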
# main code
if __name__ == "__main__":
# Get command line options
args = get_options()
# Parse clades
clades = {}
clade_names = set()
with open(args.clades,'r') as clade_list:
for line in clade_list.readlines():
info = line.strip().split()
if len(info) == 2:
clades[info[0]] = info[1]
clade_names.add(info[1])
else:
                sys.stderr.write('Line needs two columns: ' + line.rstrip() + '\n')
# Exclude regions
excluded_region_starts = []
excluded_region_ends = []
if args.exclude_regions is not None:
with open(args.exclude_regions,'r') as exclude_file:
for line in exclude_file.readlines():
coords = line.strip().split()
if int(coords[0]) < int(coords[1]):
excluded_region_starts.append(int(coords[0]))
excluded_region_ends.append(int(coords[1]))
else:
sys.stderr.write('Start of excluded region must be less than end\n')
sys.exit(1)
# Store SNP information
node_snps = {}
snp_total = 0
with open(args.snps,'r') as snp_file:
pos = 0
for line in snp_file.readlines():
info = line.strip().split()
if info[1] == 'variation':
pos = int(info[2])
if info[1].startswith('/node='):
node = info[1].replace('"','').split('->')
include_snp = True
for s,e in zip(excluded_region_starts,excluded_region_ends):
if pos >= s and pos <= e:
include_snp = False
break
if include_snp:
snp_total += 1
if node[1] in node_snps:
node_snps[node[1]].append(pos)
else:
node_snps[node[1]] = [pos]
# Store recombination information
node_rec_starts = {}
node_rec_ends = {}
with open(args.gff,'r') as gff_file:
for line in gff_file.readlines():
if not line.startswith('##'):
info = line.rstrip().split('\t')
start = int(info[3])
end = int(info[4])
node = info[8].split(';')[0].replace('"','').split('->')[1]
include_rec = True
for s,e in zip(excluded_region_starts,excluded_region_ends):
if start >= s and end <= e:
include_rec = False
if include_rec:
if node not in node_rec_starts:
node_rec_starts[node] = [start]
node_rec_ends[node] = [end]
else:
node_rec_starts[node].append(start)
node_rec_ends[node].append(end)
# Divide SNPs into recombinant and non-recombinant
rec_snps = {node:0 for node in node_snps}
pm_snps = {node:0 for node in node_snps}
for node in node_snps:
for p in node_snps[node]:
rec_snp = False
if node in node_rec_starts:
for s,e in zip(node_rec_starts[node],node_rec_ends[node]):
if p >= s and p <= e:
rec_snp = True
break
if rec_snp:
rec_snps[node] += 1
else:
pm_snps[node] += 1
# Parse tree
info_labels = ['total_snps','rec_snps','mutation_snps','recombinations']
tree_info_labels = ['n_taxa','n_branches','branch_length']
tree = dendropy.Tree.get(path = args.tree,
schema = 'newick',
preserve_underscores = True,
rooting='force-rooted')
# Calculate statistics per clade
rec_length_string = ''
with open(args.out + '_clades.csv','w') as out_file:
out_file.write('Clade,')
out_file.write(','.join(info_labels + tree_info_labels))
out_file.write('\n')
for clade_name in clade_names:
out_file.write(clade_name + ',')
clade_members = [sequence for sequence in clades if clades[sequence] == clade_name]
clade_tree = tree.clone(depth = 1)
clade_tree.retain_taxa_with_labels(clade_members)
if args.print_trees:
clade_tree_string = clade_tree.as_string(
schema='newick',
suppress_leaf_taxon_labels=False,
suppress_leaf_node_labels=True,
suppress_internal_taxon_labels=True,
suppress_internal_node_labels=True,
suppress_rooting=True,
suppress_edge_lengths=False,
unquoted_underscores=True,
preserve_spaces=False,
store_tree_weights=False,
suppress_annotations=True,
annotations_as_nhx=False,
suppress_item_comments=True,
node_label_element_separator=' '
)
with open(clade_name + '.tre','w') as tree_out:
tree_out.write(clade_tree_string + '\n')
clade_info = {label:0 for label in info_labels + tree_info_labels}
for node in clade_tree.preorder_node_iter():
if node != clade_tree.seed_node:
clade_info['n_branches'] += 1
clade_info['branch_length'] += node.edge_length
if node.is_leaf():
clade_info['n_taxa'] += 1
node_label_string = node.taxon.label
else:
node_label_string = node.label
if node_label_string in node_snps:
clade_info['total_snps'] += len(node_snps[node_label_string])
clade_info['rec_snps'] += rec_snps[node_label_string]
clade_info['mutation_snps'] += pm_snps[node_label_string]
if node_label_string in node_rec_starts:
clade_info['recombinations'] += len(node_rec_starts[node_label_string])
if args.print_rec_lengths:
for s,e in zip(node_rec_starts[node_label_string],node_rec_ends[node_label_string]):
rec_length_string += clade_name + ',' + str(1+e-s) + '\n'
out_file.write(','.join([str(clade_info[label]) for label in info_labels + tree_info_labels]))
out_file.write('\n')
if args.print_rec_lengths:
with open(args.out + '_rec_lengths.csv','w') as rec_out_file:
rec_out_file.write('Clade,Length\n' + rec_length_string)
|
sanger-pathogens/gubbins
|
python/scripts/extract_gubbins_clade_statistics.py
|
Python
|
gpl-2.0
| 10,002
|
[
"Biopython"
] |
f3173b70a5f7c18fc4d9c789df6e77d212b6e8a52289250c37f26f969a630ed8
|
#!/usr/bin/env python
# Copyright (C) 2011 by Brian Rowe
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import dbus
def bind():
bus = dbus.SessionBus()
banshee = bus.get_object('org.bansheeproject.Banshee',
'/org/bansheeproject/Banshee/PlayerEngine')
return banshee
def set_rating(n, banshee=None):
banshee = banshee or bind()
if n < 0:
n = 0
elif n > 5:
n = 5
banshee.SetRating(dbus.Byte(n))
def get_rating(banshee=None):
banshee = banshee or bind()
return banshee.GetRating()
def inc_rating(banshee=None):
banshee = banshee or bind()
rating = int(banshee.GetRating()) + 1
banshee.SetRating(dbus.Byte(rating))
return rating
def dec_rating(banshee=None):
banshee = banshee or bind()
rating = int(banshee.GetRating()) - 1
banshee.SetRating(dbus.Byte(rating))
return rating
def current_track(banshee=None):
banshee = banshee or bind()
return banshee.GetCurrentTrack()
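# Example usage (a minimal sketch; assumes Banshee is running and exposes
# its PlayerEngine on the session bus):
#
#   banshee = bind()
#   set_rating(4, banshee)   # clamped to the 0-5 range
#   rating = get_rating(banshee)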
|
briprowe/RateSongs
|
banshee.py
|
Python
|
gpl-2.0
| 2,047
|
[
"Brian"
] |
4eb6af4e07e2d8d80c5e0db2fd2a9607e161a0de1b90047438cb7d0abba6e03e
|
# -*- coding: utf-8 -*-
# <Bunch - BDD test tool for Lettuce scenarios>
# Copyright (c) 2012 Grid Dynamics Consulting Services, Inc, All Rights Reserved
# http://www.griddynamics.com
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from lettuce_bunch.exceptions import CyclicDependencySpecification
from nose.tools import assert_equals, assert_raises
from lettuce_bunch.dependencies import combine_fixture_deps, dependency_lists_to_pairs, dependency_groups_to_pairs
from tests.asserts import assert_element_wise_equals, flatten, print_iterable
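# In these tests a dependency string such as 'adf' encodes the ordered chain
# a -> d -> f, which dependency_lists_to_pairs() expands into adjacent pairs
# like ('a', 'd') and ('d', 'f').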
def test_deplist_to_pairs():
deplist1 = ['adf', 'abc', 'gh', 'ceg', 'bdeh']
result = dependency_lists_to_pairs(deplist1)
expected = [('a', 'd'),
('d', 'f'),
('a', 'b'),
('b', 'c'),
('g', 'h'),
('c', 'e'),
('e', 'g'),
('b', 'd'),
('d', 'e'),
('e', 'h')]
assert_equals(list(result), expected)
def test_dependency_groups_to_pairs():
assert_equals(
list(dependency_groups_to_pairs([['a', 'b'], ['c'], ['d']])),
[('a', 'c'), ('b', 'c'), ('c', 'd')])
assert_equals(
list(dependency_groups_to_pairs([['a', 'b'], [], ['d']])),
[])
assert_equals(
list(dependency_groups_to_pairs([[1, 2], [3, 4], [5, 6]])),
[(1, 3), (1, 4), (2, 3), (2, 4), (3, 5), (3, 6), (4, 5), (4, 6)])
#TODO: Convert flattened assert to structured one when concurrent is ready
def test_combine_fixtures_basic():
grouplist1 = [
[ ['single-node'],
['novaclient-users' , 'novaclient-network'],
['novaclient-images'] , ['novaclient-keys'], ['novaclient-flatnetwork'] ],
[ ['single-node'],
['novaclient-users' , 'novaclient-network'],
['novaclient-images'],
['novaclient-keys'] ],
[ ['single-node'],
['novaclient-users', 'novaclient-network'],
['novaclient-images'],
['novaclient-keys'],
['volume-services'],
['volume'] ] ]
result = combine_fixture_deps(grouplist1)
expected = ['single-node',
'novaclient-network',
'novaclient-users',
'novaclient-images',
'novaclient-keys',
'novaclient-flatnetwork',
'volume-services',
'volume']
assert_equals(list(flatten(result)), expected)
def test_combine_fixtures_cyclic():
grouplist2 = [
[ ['a'], ['b'], ['c'] ],
[ ['c'], ['b'], ['a'] ] ]
#print_iterable(combine_fixture_deps(grouplist2))
assert_raises(CyclicDependencySpecification, combine_fixture_deps, grouplist2)
def test_one_solitary_dep():
grouplist = [
[ ['one'] ] ]
assert_equals(list(flatten(combine_fixture_deps(grouplist))), ['one'])
def test_several_solitary_deps():
grouplist = [
[ ['one'] ], [ ['two'] ], [ ['three'] ] ]
assert_equals(list(flatten(combine_fixture_deps(grouplist))), ['one', 'two', 'three'])
def test_empty_deps():
grouplist = [
[ [] ] ]
assert_equals(list(flatten(combine_fixture_deps(grouplist))), [])
def test_several_empty_deps():
grouplist = [
[ [] ] ]
assert_equals(
list(flatten(combine_fixture_deps(grouplist))),
[])
def test_empties_and_solitaries_deps():
grouplist = [
[ [] ], [ ['one'] ], [ [] ], [ ['two'] ], [ [] ],[ ['three'] ], [ [] ] ]
assert_equals(
list(flatten(combine_fixture_deps(grouplist))),
['one', 'two', 'three'])
def test_empties_solitaries_and_usual_deps():
grouplist = [
[ [] ], [ ['one'] ], [ [] ], [ ['two'] ], [ [] ],[ ['three'] ], [ [] ], [['four'], ['five'], ['six']], [ [] ] ]
assert_equals(
list(flatten(combine_fixture_deps(grouplist))),
['four', 'five', 'six', 'one', 'two', 'three'])
def test_independent_deps():
grouplist = [
[ ['1','2','3'], ['4'], ['5'] ] ]
assert_equals(
list(flatten(combine_fixture_deps(grouplist))),
['1', '2', '3', '4', '5'])
def test_independent_single_deps():
grouplist = [
[ ['1','2','3' ,'4', '5'] ] ]
assert_equals(
list(flatten(combine_fixture_deps(grouplist))),
['1', '2', '3', '4', '5'])
def test_empties_solitaries_indepent_and_usual_deps():
grouplist = [
[ [] ],
[ ['one'] ],
[ [] ],
[ ['two'] ],
[ [] ],
[ ['three'] ],
[ [] ],
[ ['four'], ['five'], ['six']],
[ [] ],
[ ['seven','eight', 'nine'] ],
[ [] ] ]
assert_equals(
list(flatten(combine_fixture_deps(grouplist))),
['four', 'five', 'six', 'one', 'two', 'three', 'seven', 'eight', 'nine'])
def test_no_solitary_duplication():
grouplist =[[],
[[u'single-node.clean.setup'], [u'keystone-init.setup'], [u'keystone-user.setup'], [u'novaclient-network.setup'], [u'novarc-keystone.setup'], [u'novaclient-images.setup'], [u'novaclient-keys.setup']],
[[u'novaclient-keys.setup']],
[],
[[u'single-node.clean.setup'], [u'novaclient-users.setup'], [u'novaclient-network.setup'], [u'novaclient-images.setup'], [u'novaclient-keys.setup']],
[[u'lvm.setup']]]
assert_equals(
list(flatten(combine_fixture_deps(grouplist))),
list(flatten([[u'single-node.clean.setup'],
[u'keystone-init.setup', u'novaclient-users.setup'],
[u'keystone-user.setup'],
[u'novaclient-network.setup'],
[u'novarc-keystone.setup'],
[u'novaclient-images.setup'],
[u'novaclient-keys.setup'],
[u'lvm.setup']])))
|
griddynamics/bunch
|
tests/unit/test_dependencies.py
|
Python
|
gpl-3.0
| 7,242
|
[
"ADF"
] |
752bb51e1ad969cb9c687accff3304bc3a484237236a4997067859bc64fadfde
|
# $HeadURL$
"""
DIRAC wrapper to execute python and system commands, optionally enforcing
a timeout.
3 FUNCTIONS are provided:
- shellCall( iTimeOut, cmdSeq, callbackFunction = None, env = None ):
it uses subprocess.Popen class with "shell = True".
If cmdSeq is a string, it specifies the command string to execute through
the shell. If cmdSeq is a sequence, the first item specifies the command
string, and any additional items will be treated as additional shell arguments.
- systemCall( iTimeOut, cmdSeq, callbackFunction = None, env = None ):
it uses subprocess.Popen class with "shell = False".
cmdSeq should be a string, or a sequence of program arguments.
stderr and stdout are piped. callbackFunction( pipeId, line ) can be
defined to process the stdout (pipeId = 0) and stderr (pipeId = 1) as
they are produced
They return a DIRAC ReturnValues dictionary with a tuple in 'Value':
( returncode, stdout, stderr ). The tuple is also available upon
timeout or buffer overflow errors.
- pythonCall( iTimeOut, function, \*stArgs, \*\*stKeyArgs )
calls function with given arguments within a timeout Wrapper
should be used to wrap third party python functions
"""
__RCSID__ = "$Id$"
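# Example usage (a minimal sketch; assumes a configured DIRAC environment,
# and someSlowFunction is a hypothetical callable):
#
#   from DIRAC.Core.Utilities.Subprocess import systemCall, shellCall, pythonCall
#
#   result = systemCall( 10, [ 'ls', '-l' ] )   # no shell, 10 second timeout
#   if result[ 'OK' ]:
#     returncode, stdout, stderr = result[ 'Value' ]
#
#   result = shellCall( 10, 'ls -l | wc -l' )   # executed through the shell
#   result = pythonCall( 5, someSlowFunction, arg1, arg2 )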
from multiprocessing import Process, Manager
import threading
import time
import select
import os
import sys
import types
import subprocess
import signal
# Very Important:
# Here we can not import directly from DIRAC, since this file it is imported
# at initialization time therefore the full path is necessary
# from DIRAC import S_OK, S_ERROR
from DIRAC.Core.Utilities.ReturnValues import S_OK, S_ERROR
# from DIRAC import gLogger
from DIRAC.FrameworkSystem.Client.Logger import gLogger
USE_WATCHDOG = False
class Watchdog( object ):
"""
  .. class:: Watchdog
timeout watchdog decorator
"""
def __init__( self, func, args=None, kwargs=None ):
""" c'tor """
self.func = func if callable(func) else None
self.args = args if args else tuple()
self.kwargs = kwargs if kwargs else {}
self.start = self.end = self.pid = None
self.rwEvent = threading.Event()
self.rwEvent.clear()
self.__watchdogThread = None
self.manager = Manager()
self.s_ok_error = self.manager.dict()
self.__executor = Process( target = self.run_func, args = (self.s_ok_error, ) )
def run_func( self, s_ok_error ):
""" subprocess target
    :param dict s_ok_error: shared manager dict used to pass the result back
"""
try:
ret = self.func( *self.args, **self.kwargs )
## set rw event
self.rwEvent.set()
for k in ret:
s_ok_error[k] = ret[k]
except Exception, error:
s_ok_error["OK"] = False
s_ok_error["Message"] = str(error)
finally:
## clear rw event
self.rwEvent.clear()
def watchdog( self ):
""" watchdog thread target """
while True:
if self.rwEvent.is_set() or time.time() < self.end:
time.sleep(5)
else:
break
if not self.__executor.is_alive():
return
else:
## wait until r/w operation finishes
while self.rwEvent.is_set():
time.sleep(5)
continue
## SIGTERM
os.kill( self.pid, signal.SIGTERM )
time.sleep(5)
## SIGKILL
if self.__executor.is_alive():
os.kill( self.pid, signal.SIGKILL )
def __call__( self, timeout = 0 ):
""" decorator execution """
timeout = int(timeout)
ret = { "OK" : True, "Value" : "" }
if timeout:
self.start = int( time.time() )
self.end = self.start + timeout + 2
self.__watchdogThread = threading.Thread( target = self.watchdog )
self.__watchdogThread.daemon = True
self.__watchdogThread.start()
ret = { "OK" : False, "Message" : "Timeout after %s seconds" % timeout,
"Value": ( 1, '', '' ) }
try:
self.__executor.start()
time.sleep(0.5)
self.pid = self.__executor.pid
if timeout:
self.__executor.join( timeout )
else:
self.__executor.join()
## get results if any, block watchdog by setting rwEvent
if not self.__executor.is_alive():
self.rwEvent.set()
for k in self.s_ok_error.keys():
ret[k] = self.s_ok_error[k]
self.rwEvent.clear()
except Exception, error:
return { "OK" : False, "Message" : str(error),
"Value": ( 2, '', '' ) }
return ret
class Subprocess:
"""
.. class:: Subprocess
"""
def __init__( self, timeout = False, bufferLimit = 52428800 ):
""" c'tor
:param int timeout: timeout in seconds
    :param int bufferLimit: buffer size, default 50MB
"""
self.log = gLogger.getSubLogger( 'Subprocess' )
self.timeout = False
try:
self.changeTimeout( timeout )
      self.bufferLimit = int( bufferLimit ) # 50MB limit for data
except Exception, x:
self.log.exception( 'Failed initialisation of Subprocess object' )
raise x
self.child = None
self.childPID = 0
self.childKilled = False
self.callback = None
self.bufferList = []
self.cmdSeq = []
def changeTimeout( self, timeout ):
""" set the time out limit to :timeout: seconds
:param int timeout: time out in seconds
"""
self.timeout = int( timeout )
if self.timeout == 0:
self.timeout = False
#self.log.debug( 'Timeout set to', timeout )
def __readFromFD( self, fd, baseLength = 0 ):
""" read from file descriptior :fd:
:param fd: file descriptior
:param int baseLength: ???
"""
dataString = ''
redBuf = " "
while len( redBuf ) > 0:
redBuf = os.read( fd, 8192 )
dataString += redBuf
if len( dataString ) + baseLength > self.bufferLimit:
self.log.error( 'Maximum output buffer length reached' )
retDict = S_ERROR( 'Reached maximum allowed length (%d bytes) '
'for called function return value' % self.bufferLimit )
retDict[ 'Value' ] = dataString
return retDict
return S_OK( dataString )
def __executePythonFunction( self, function, writePipe, *stArgs, **stKeyArgs ):
"""
    execute function :function: using :stArgs: and :stKeyArgs:
"""
from DIRAC.Core.Utilities import DEncode
try:
os.write( writePipe, DEncode.encode( S_OK( function( *stArgs, **stKeyArgs ) ) ) )
except OSError, x:
if str( x ) == '[Errno 32] Broken pipe':
# the parent has died
pass
except Exception, x:
self.log.exception( 'Exception while executing', function.__name__ )
os.write( writePipe, DEncode.encode( S_ERROR( str( x ) ) ) )
#HACK: Allow some time to flush logs
time.sleep( 1 )
try:
os.close( writePipe )
finally:
os._exit( 0 )
def __selectFD( self, readSeq, timeout = False ):
""" select file descriptor from :readSeq: list """
validList = []
for fd in readSeq:
try:
os.fstat( fd )
validList.append( fd )
except OSError:
pass
if not validList:
return False
if self.timeout and not timeout:
timeout = self.timeout
if not timeout:
return select.select( validList , [], [] )[0]
else:
return select.select( validList , [], [], timeout )[0]
def __killPid( self, pid, sig = 9 ):
""" send signal :sig: to process :pid:
:param int pid: process id
:param int sig: signal to send, default 9 (SIGKILL)
"""
try:
os.kill( pid, sig )
except Exception, x:
if not str( x ) == '[Errno 3] No such process':
self.log.exception( 'Exception while killing timed out process' )
raise x
def __poll( self, pid ):
""" wait for :pid: """
try:
return os.waitpid( pid, os.WNOHANG )
except os.error:
if self.childKilled:
return False
return None
def killChild( self, recursive = True ):
""" kill child process
:param boolean recursive: flag to kill all descendants
"""
if self.childPID < 1:
self.log.error( "Could not kill child", "Child PID is %s" % self.childPID )
return - 1
os.kill( self.childPID, signal.SIGSTOP )
if recursive:
for gcpid in getChildrenPIDs( self.childPID, lambda cpid: os.kill( cpid, signal.SIGSTOP ) ):
try:
os.kill( gcpid, signal.SIGKILL )
self.__poll( gcpid )
except Exception:
pass
self.__killPid( self.childPID )
#HACK to avoid python bug
# self.child.wait()
exitStatus = self.__poll( self.childPID )
i = 0
while exitStatus == None and i < 1000:
i += 1
time.sleep( 0.000001 )
exitStatus = self.__poll( self.childPID )
try:
exitStatus = os.waitpid( self.childPID, 0 )
except os.error:
pass
self.childKilled = True
if exitStatus == None:
return exitStatus
return exitStatus[1]
def pythonCall( self, function, *stArgs, **stKeyArgs ):
""" call python function :function: with :stArgs: and :stKeyArgs: """
from DIRAC.Core.Utilities import DEncode
self.log.verbose( 'pythonCall:', function.__name__ )
readFD, writeFD = os.pipe()
pid = os.fork()
self.childPID = pid
if pid == 0:
os.close( readFD )
self.__executePythonFunction( function, writeFD, *stArgs, **stKeyArgs )
# FIXME: the close it is done at __executePythonFunction, do we need it here?
os.close( writeFD )
else:
os.close( writeFD )
readSeq = self.__selectFD( [ readFD ] )
if readSeq == False:
return S_ERROR( "Can't read from call %s" % ( function.__name__ ) )
try:
if len( readSeq ) == 0:
self.log.debug( 'Timeout limit reached for pythonCall', function.__name__ )
self.__killPid( pid )
#HACK to avoid python bug
# self.wait()
retries = 10000
while os.waitpid( pid, 0 ) == -1 and retries > 0:
time.sleep( 0.001 )
retries -= 1
return S_ERROR( '%d seconds timeout for "%s" call' % ( self.timeout, function.__name__ ) )
elif readSeq[0] == readFD:
retDict = self.__readFromFD( readFD )
os.waitpid( pid, 0 )
if retDict[ 'OK' ]:
dataStub = retDict[ 'Value' ]
if not dataStub:
return S_ERROR( "Error decoding data coming from call" )
retObj, stubLen = DEncode.decode( dataStub )
if stubLen == len( dataStub ):
return retObj
else:
return S_ERROR( "Error decoding data coming from call" )
return retDict
finally:
os.close( readFD )
def __generateSystemCommandError( self, exitStatus, message ):
""" create system command error
    :param int exitStatus: exit status
    :param str message: error message
    :return: S_ERROR with additional 'Value' tuple ( exitStatus, stdoutBuf, stderrBuf )
"""
retDict = S_ERROR( message )
retDict[ 'Value' ] = ( exitStatus,
self.bufferList[0][0],
self.bufferList[1][0] )
return retDict
def __readFromFile( self, fd, baseLength, doAll = True ):
""" read from file descriptor :fd: and save it to the dedicated buffer
"""
try:
dataString = ""
fn = fd.fileno()
rawRead = type( fn ) == types.IntType
while fd in select.select( [ fd ], [], [], 1 )[0]:
if rawRead:
nB = os.read( fn, self.bufferLimit )
else:
nB = fd.read( 1 )
if nB == "":
break
dataString += nB
except Exception, x:
self.log.exception( "SUBPROCESS: readFromFile exception" )
try:
self.log.error( 'Error reading', 'type(nB) =%s' % type( nB ) )
self.log.error( 'Error reading', 'nB =%s' % str( nB ) )
except Exception:
pass
return S_ERROR( 'Can not read from output: %s' % str( x ) )
if len( dataString ) + baseLength > self.bufferLimit:
self.log.error( 'Maximum output buffer length reached' )
retDict = S_ERROR( 'Reached maximum allowed length (%d bytes) for called '
'function return value' % self.bufferLimit )
retDict[ 'Value' ] = dataString
return retDict
return S_OK( dataString )
def __readFromSystemCommandOutput( self, fd, bufferIndex ):
""" read stdout from file descriptor :fd: """
retDict = self.__readFromFile( fd,
len( self.bufferList[ bufferIndex ][0] ) )
if retDict[ 'OK' ]:
self.bufferList[ bufferIndex ][0] += retDict[ 'Value' ]
if not self.callback == None:
while self.__callLineCallback( bufferIndex ):
pass
return S_OK()
else: # buffer size limit reached killing process (see comment on __readFromFile)
exitStatus = self.killChild()
return self.__generateSystemCommandError(
exitStatus,
"%s for '%s' call" % ( retDict['Message'], self.cmdSeq ) )
def systemCall( self, cmdSeq, callbackFunction = None, shell = False, env = None ):
""" system call (no shell) - execute :cmdSeq: """
if shell:
self.log.verbose( 'shellCall:', cmdSeq )
else:
self.log.verbose( 'systemCall:', cmdSeq )
self.cmdSeq = cmdSeq
self.callback = callbackFunction
if sys.platform.find( "win" ) == 0:
closefd = False
else:
closefd = True
try:
self.child = subprocess.Popen( self.cmdSeq,
shell = shell,
stdout = subprocess.PIPE,
stderr = subprocess.PIPE,
close_fds = closefd,
env = env )
self.childPID = self.child.pid
except OSError, v:
retDict = S_ERROR( v )
retDict['Value'] = ( -1, '' , str( v ) )
return retDict
except Exception, x:
try:
self.child.stdout.close()
self.child.stderr.close()
except Exception:
pass
retDict = S_ERROR( x )
retDict['Value'] = ( -1, '' , str( x ) )
return retDict
try:
self.bufferList = [ [ "", 0 ], [ "", 0 ] ]
initialTime = time.time()
exitStatus = self.__poll( self.child.pid )
while ( 0, 0 ) == exitStatus or None == exitStatus:
retDict = self.__readFromCommand()
if not retDict[ 'OK' ]:
return retDict
if self.timeout and time.time() - initialTime > self.timeout:
exitStatus = self.killChild()
self.__readFromCommand()
return self.__generateSystemCommandError(
exitStatus,
"Timeout (%d seconds) for '%s' call" %
( self.timeout, cmdSeq ) )
time.sleep( 0.01 )
exitStatus = self.__poll( self.child.pid )
self.__readFromCommand()
if exitStatus:
exitStatus = exitStatus[1]
if exitStatus >= 256:
exitStatus /= 256
return S_OK( ( exitStatus, self.bufferList[0][0], self.bufferList[1][0] ) )
finally:
try:
self.child.stdout.close()
self.child.stderr.close()
except Exception:
pass
def getChildPID( self ):
""" child pid getter """
return self.childPID
def __readFromCommand( self, isLast = False ):
""" read child stdout and stderr """
fdList = []
for i in ( self.child.stdout, self.child.stderr ):
try:
if not i.closed:
fdList.append( i.fileno() )
except Exception:
self.log.exception( "SUBPROCESS: readFromCommand exception" )
readSeq = self.__selectFD( fdList, True )
if readSeq == False:
return S_OK()
if self.child.stdout.fileno() in readSeq:
retDict = self.__readFromSystemCommandOutput( self.child.stdout, 0 )
if not retDict[ 'OK' ]:
return retDict
if self.child.stderr.fileno() in readSeq:
retDict = self.__readFromSystemCommandOutput( self.child.stderr, 1 )
if not retDict[ 'OK' ]:
return retDict
return S_OK()
def __callLineCallback( self, bufferIndex ):
""" line callback execution """
nextLineIndex = self.bufferList[ bufferIndex ][0][ self.bufferList[ bufferIndex ][1]: ].find( "\n" )
if nextLineIndex > -1:
try:
self.callback( bufferIndex, self.bufferList[ bufferIndex ][0][
self.bufferList[ bufferIndex ][1]:
self.bufferList[ bufferIndex ][1] + nextLineIndex ] )
#Each line processed is taken out of the buffer to prevent the limit from killing us
nL = self.bufferList[ bufferIndex ][1] + nextLineIndex + 1
self.bufferList[ bufferIndex ][0] = self.bufferList[ bufferIndex ][0][ nL: ]
self.bufferList[ bufferIndex ][1] = 0
except Exception:
self.log.exception( 'Exception while calling callback function',
'%s' % self.callback.__name__ )
self.log.showStack()
return True
return False
def systemCall( timeout, cmdSeq, callbackFunction = None, env = None, bufferLimit = 52428800 ):
"""
  Use the Subprocess class to execute cmdSeq (it can be a string or a sequence)
  with a timeout wrapper; it is executed directly, without calling a shell
"""
if timeout > 0 and USE_WATCHDOG:
spObject = Subprocess( timeout=timeout, bufferLimit = bufferLimit )
sysCall = Watchdog( spObject.systemCall, args=( cmdSeq, ), kwargs = { "callbackFunction" : callbackFunction,
"env" : env,
"shell" : False } )
spObject.log.verbose( 'Subprocess Watchdog timeout set to %d' % timeout )
result = sysCall(timeout+1)
else:
spObject = Subprocess( timeout, bufferLimit = bufferLimit )
result = spObject.systemCall( cmdSeq,
callbackFunction = callbackFunction,
env = env,
shell = False )
return result
def shellCall( timeout, cmdSeq, callbackFunction = None, env = None, bufferLimit = 52428800 ):
"""
  Use the Subprocess class to execute cmdSeq (it can be a string or a sequence)
  with a timeout wrapper; cmdSeq is invoked via /bin/sh
"""
if timeout > 0 and USE_WATCHDOG:
spObject = Subprocess( timeout=timeout, bufferLimit = bufferLimit )
shCall = Watchdog( spObject.systemCall, args=( cmdSeq, ), kwargs = { "callbackFunction" : callbackFunction,
"env" : env,
"shell" : True } )
spObject.log.verbose( 'Subprocess Watchdog timeout set to %d' % timeout )
result = shCall(timeout+1)
else:
spObject = Subprocess( timeout, bufferLimit = bufferLimit )
result = spObject.systemCall( cmdSeq,
callbackFunction = callbackFunction,
env = env,
shell = True )
return result
def pythonCall( timeout, function, *stArgs, **stKeyArgs ):
"""
  Use the Subprocess class to execute function with provided arguments,
with a timeout wrapper.
"""
if timeout > 0 and USE_WATCHDOG:
spObject = Subprocess( timeout=timeout )
pyCall = Watchdog( spObject.pythonCall, args=( function, ) + stArgs, kwargs=stKeyArgs )
spObject.log.verbose( 'Subprocess Watchdog timeout set to %d' % timeout )
result = pyCall(timeout+1)
else:
spObject = Subprocess( timeout )
result = spObject.pythonCall( function, *stArgs, **stKeyArgs )
return result
def __getChildrenForPID( ppid ):
"""
Get a list of children pids for ppid
"""
magicCmd = "ps --no-headers --ppid %d -o pid" % ppid
try:
import psutil
childrenList = []
for proc in psutil.process_iter():
      # psutil >= 2.0 exposes ppid as a method rather than a property
      procPPid = proc.ppid() if callable( proc.ppid ) else proc.ppid
      if procPPid == ppid:
childrenList.append( proc.pid )
return childrenList
except Exception:
exc = subprocess.Popen( magicCmd,
stdout = subprocess.PIPE,
shell = True,
close_fds = True )
exc.wait()
return [ int( pid.strip() ) for pid in exc.stdout.readlines() if pid.strip() ]
def getChildrenPIDs( ppid, foreachFunc = None ):
"""
Get all children recursively for a given ppid.
Optional foreachFunc will be executed for each children pid
"""
cpids = __getChildrenForPID( ppid )
pids = []
for pid in cpids:
pids.append( pid )
if foreachFunc:
foreachFunc( pid )
pids.extend( getChildrenPIDs( pid, foreachFunc ) )
return pids
|
vmendez/DIRAC
|
Core/Utilities/Subprocess.py
|
Python
|
gpl-3.0
| 20,895
|
[
"DIRAC"
] |
3dc7d32e7ba15378c1a10eb2f882686e2028d032d20f0ce301587c5766f65eb7
|
#!/usr/bin/env python
import os
from bioblend import galaxy
admin_email = os.environ.get('GALAXY_DEFAULT_ADMIN_USER', 'admin@galaxy.org')
admin_pass = os.environ.get('GALAXY_DEFAULT_ADMIN_PASSWORD', 'admin')
url = "http://localhost:8080"
gi = galaxy.GalaxyInstance(url=url, email=admin_email, password=admin_pass)
wf = galaxy.workflows.WorkflowClient(gi)
wf.import_workflow_from_local_path('workflows/paired_001.ga')
wf.import_workflow_from_local_path('workflows/single_001.ga')
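# A hypothetical follow-up check: bioblend's WorkflowClient.get_workflows()
# lists the workflows now visible to the admin user.
#
#   print([w['name'] for w in wf.get_workflows()])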
|
gregvonkuster/docker-galaxy-ChIP-exo
|
import_workflows.py
|
Python
|
mit
| 481
|
[
"Galaxy"
] |
cc12fceb3dc877bdd6891c4d28830e938cfd2d5db221b315fb70e82795a4a5cf
|
#!/usr/bin/env python
#
# heatmap.py - Generates heat map images and animations from geographic data
# Copyright 2010 Seth Golub
# http://www.sethoscope.net/heatmap/
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from __future__ import print_function
import sys
import logging
import math
from PIL import Image
from PIL import ImageColor
import tempfile
import os.path
import shutil
import subprocess
from time import mktime, strptime
from collections import defaultdict
import xml.etree.cElementTree as ET
from colorsys import hsv_to_rgb
try:
import cPickle as pickle
except ImportError:
import pickle
__version__ = '1.11'
class Coordinate(object):
def __init__(self, x, y):
self.x = x
self.y = y
first = property(lambda self: self.x)
second = property(lambda self: self.y)
def copy(self):
return self.__class__(self.first, self.second)
def __str__(self):
return '(%s, %s)' % (str(self.x), str(self.y))
def __hash__(self):
return hash((self.x, self.y))
def __eq__(self, o):
return True if self.x == o.x and self.y == o.y else False
def __sub__(self, o):
return self.__class__(self.first - o.first, self.second - o.second)
class LatLon(Coordinate):
def __init__(self, lat, lon):
self.lat = lat
self.lon = lon
def get_lat(self):
return self.y
def set_lat(self, lat):
self.y = lat
def get_lon(self):
return self.x
def set_lon(self, lon):
self.x = lon
lat = property(get_lat, set_lat)
lon = property(get_lon, set_lon)
first = property(get_lat)
second = property(get_lon)
class TrackLog:
class Trkseg(list): # for GPX <trkseg> tags
pass
class Trkpt: # for GPX <trkpt> tags
def __init__(self, lat, lon):
self.coords = LatLon(float(lat), float(lon))
def __str__(self):
return str(self.coords)
def _parse(self, filename):
self._segments = []
for event, elem in ET.iterparse(filename, ('start', 'end')):
elem.tag = elem.tag[elem.tag.rfind('}') + 1:] # remove namespace
if elem.tag == "trkseg":
if event == 'start':
self._segments.append(TrackLog.Trkseg())
else: # event == 'end'
yield self._segments.pop()
elem.clear() # delete contents from parse tree
elif elem.tag == 'trkpt' and event == 'end':
point = TrackLog.Trkpt(elem.attrib['lat'], elem.attrib['lon'])
self._segments[-1].append(point)
timestr = elem.findtext('time')
if timestr:
timestr = timestr[:-1].split('.')[0] + ' GMT'
point.time = mktime(
strptime(timestr, '%Y-%m-%dT%H:%M:%S %Z'))
elem.clear() # clear the trkpt node to minimize memory usage
def __init__(self, filename):
self.filename = filename
def segments(self):
'''Parse file and yield segments containing points'''
logging.info('reading GPX track from %s' % self.filename)
return self._parse(self.filename)
class Projection(object):
# For guessing scale, we pretend the earth is a sphere with this
# radius in meters, as in Web Mercator (the projection all the
# online maps use).
EARTH_RADIUS = 6378137 # in meters
def get_pixels_per_degree(self):
try:
return self._pixels_per_degree
except AttributeError:
raise AttributeError('projection scale was never set')
def set_pixels_per_degree(self, val):
self._pixels_per_degree = val
logging.info('scale: %f meters/pixel (%f pixels/degree)'
% (self.meters_per_pixel, val))
def get_meters_per_pixel(self):
return 2 * math.pi * self.EARTH_RADIUS / 360 / self.pixels_per_degree
def set_meters_per_pixel(self, val):
self.pixels_per_degree = 2 * math.pi * self.EARTH_RADIUS / 360 / val
return val
pixels_per_degree = property(get_pixels_per_degree, set_pixels_per_degree)
meters_per_pixel = property(get_meters_per_pixel, set_meters_per_pixel)
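    # Worked example: one degree at the equator spans
    # 2 * pi * 6378137 / 360 ~= 111319.5 m, so a scale of 1000 pixels/degree
    # corresponds to roughly 111.3 meters/pixel.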
def is_scaled(self):
return hasattr(self, '_pixels_per_degree')
def project(self, coords):
raise NotImplementedError
def inverse_project(self, coords): # Not all projections can support this.
raise NotImplementedError
def auto_set_scale(self, extent_in, padding, width=None, height=None):
# We need to choose a scale at which the data's bounding box,
# once projected onto the map, will fit in the specified height
# and/or width. The catch is that we can't project until we
# have a scale, so what we'll do is set a provisional scale,
# project the bounding box onto the map, then adjust the scale
# appropriately. This way we don't need to know anything about
# the projection.
#
# Projection subclasses are free to override this method with
# something simpler that just solves for scale given the lat/lon
# and x/y bounds.
# We'll work large to minimize roundoff error.
SCALE_FACTOR = 1000000.0
self.pixels_per_degree = SCALE_FACTOR
extent_out = extent_in.map(self.project)
padding *= 2 # padding-per-edge -> padding-in-each-dimension
try:
if height:
self.pixels_per_degree = pixels_per_lat = (
float(height - padding) /
extent_out.size().y * SCALE_FACTOR)
if width:
self.pixels_per_degree = (
float(width - padding) /
extent_out.size().x * SCALE_FACTOR)
if height:
self.pixels_per_degree = min(self.pixels_per_degree,
pixels_per_lat)
except ZeroDivisionError:
raise ZeroDivisionError(
'You need at least two data points for auto scaling. '
'Try specifying the scale explicitly (or extent + '
'height or width).')
        assert self.pixels_per_degree > 0
# Treats Lat/Lon as a square grid.
class EquirectangularProjection(Projection):
# http://en.wikipedia.org/wiki/Equirectangular_projection
def project(self, coord):
x = coord.lon * self.pixels_per_degree
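        # Latitude is negated because image y increases downward.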
y = -coord.lat * self.pixels_per_degree
return Coordinate(x, y)
def inverse_project(self, coord):
lat = -coord.y / self.pixels_per_degree
lon = coord.x / self.pixels_per_degree
return LatLon(lat, lon)
class MercatorProjection(Projection):
def set_pixels_per_degree(self, val):
super(MercatorProjection, self).set_pixels_per_degree(val)
self._pixels_per_radian = val * (180 / math.pi)
pixels_per_degree = property(Projection.get_pixels_per_degree,
set_pixels_per_degree)
def project(self, coord):
x = coord.lon * self.pixels_per_degree
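        # Standard Mercator: y = R * ln(tan(pi/4 + lat_rad/2)), negated
        # because image y grows downward; pi/360 * lat is lat_rad / 2.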
y = -self._pixels_per_radian * math.log(
math.tan((math.pi/4 + math.pi/360 * coord.lat)))
return Coordinate(x, y)
def inverse_project(self, coord):
lat = (360 / math.pi
* math.atan(math.exp(-coord.y / self._pixels_per_radian)) - 90)
lon = coord.x / self.pixels_per_degree
return LatLon(lat, lon)
class Extent():
def __init__(self, coords=None, shapes=None):
if coords:
coords = tuple(coords) # if it's a generator, slurp them all
self.min = coords[0].__class__(min(c.first for c in coords),
min(c.second for c in coords))
self.max = coords[0].__class__(max(c.first for c in coords),
max(c.second for c in coords))
elif shapes:
self.from_shapes(shapes)
else:
raise ValueError('Extent must be initialized')
def __str__(self):
return '%s,%s,%s,%s' % (self.min.y, self.min.x, self.max.y, self.max.x)
def update(self, other):
'''grow this bounding box so that it includes the other'''
self.min.x = min(self.min.x, other.min.x)
self.min.y = min(self.min.y, other.min.y)
self.max.x = max(self.max.x, other.max.x)
self.max.y = max(self.max.y, other.max.y)
def from_bounding_box(self, other):
self.min = other.min.copy()
self.max = other.max.copy()
def from_shapes(self, shapes):
shapes = iter(shapes)
self.from_bounding_box(next(shapes).extent)
for s in shapes:
self.update(s.extent)
def corners(self):
return (self.min, self.max)
def size(self):
return self.max.__class__(self.max.x - self.min.x,
self.max.y - self.min.y)
def grow(self, pad):
self.min.x -= pad
self.min.y -= pad
self.max.x += pad
self.max.y += pad
def resize(self, width=None, height=None):
if width:
self.max.x += float(width - self.size().x) / 2
self.min.x = self.max.x - width
if height:
self.max.y += float(height - self.size().y) / 2
self.min.y = self.max.y - height
def is_inside(self, coord):
return (coord.x >= self.min.x and coord.x <= self.max.x and
coord.y >= self.min.y and coord.y <= self.max.y)
def map(self, func):
'''Returns a new Extent whose corners are a function of the
corners of this one. The expected use is to project a Extent
onto a map. For example: bbox_xy = bbox_ll.map(projector.project)'''
return Extent(coords=(func(self.min), func(self.max)))
class Matrix(defaultdict):
'''An abstract sparse matrix, with data stored as {coord : value}.'''
@staticmethod
def matrix_factory(decay):
# If decay is 0 or 1, we can accumulate as we go and save lots of
# memory.
if decay == 1.0:
logging.info('creating a summing matrix')
return SummingMatrix()
elif decay == 0.0:
logging.info('creating a maxing matrix')
return MaxingMatrix()
logging.info('creating an appending matrix')
return AppendingMatrix(decay)
def __init__(self, default_factory=float):
self.default_factory = default_factory
def add(self, coord, val):
raise NotImplementedError
def extent(self):
        return Extent(coords=self.keys())
def finalized(self):
return self
class SummingMatrix(Matrix):
def add(self, coord, val):
self[coord] += val
class MaxingMatrix(Matrix):
def add(self, coord, val):
self[coord] = max(val, self.get(coord, val))
class AppendingMatrix(Matrix):
def __init__(self, decay):
self.default_factory = list
self.decay = decay
def add(self, coord, val):
self[coord].append(val)
def finalized(self):
logging.info('combining coincident points')
m = Matrix()
for (coord, values) in self.items():
m[coord] = self.reduce(self.decay, values)
return m
@staticmethod
def reduce(decay, values):
'''
Returns a weighted sum of the values, where weight N is
pow(decay,N). This means the largest value counts fully, but
additional values have diminishing contributions. decay=0 makes
the reduction equivalent to max(), which makes each data point
visible, but says nothing about their relative magnitude.
decay=1 makes this like sum(), which makes the relative
magnitude of the points more visible, but could make smaller
values hard to see. Experiment with values between 0 and 1.
Values outside that range will give weird results.
'''
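        # Illustrative example (not from the original source): with
        # decay=0.5 and values [1, 3, 2], sorting gives [3, 2, 1] with
        # weights 1.0, 0.5, 0.25, so the total is 3 + 1.0 + 0.25 = 4.25.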
# It would be nice to do this on the fly, while accumulating data, but
# it needs to be insensitive to data order.
weight = 1.0
total = 0.0
values.sort(reverse=True)
for value in values:
total += value * weight
weight *= decay
return total
class Point:
def __init__(self, coord, weight=1.0):
self.coord = coord
self.weight = weight
def __str__(self):
return 'P(%s)' % str(self.coord)
@staticmethod
def general_distance(x, y):
# assumes square units, which causes distortion in some projections
return (x ** 2 + y ** 2) ** 0.5
@property
def extent(self):
if not hasattr(self, '_extent'):
self._extent = Extent(coords=(self.coord,))
return self._extent
# From a modularity standpoint, it would be reasonable to cache
# distances, not heat values, and let the kernel cache the
# distance to heat map, but this is substantially faster.
heat_cache = {}
@classmethod
def _initialize_heat_cache(cls, kernel):
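        # Only the first quadrant is cached; add_heat_to_matrix looks up
        # (abs(dx), abs(dy)), relying on the kernel's symmetry.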
cache = {}
for x in range(kernel.radius + 1):
for y in range(kernel.radius + 1):
cache[(x, y)] = kernel.heat(cls.general_distance(x, y))
cls.heat_cache[kernel] = cache
def add_heat_to_matrix(self, matrix, kernel):
if kernel not in Point.heat_cache:
Point._initialize_heat_cache(kernel)
cache = Point.heat_cache[kernel]
x = int(self.coord.x)
y = int(self.coord.y)
for dx in range(-kernel.radius, kernel.radius + 1):
for dy in range(-kernel.radius, kernel.radius + 1):
matrix.add(Coordinate(x + dx, y + dy),
self.weight * cache[(abs(dx), abs(dy))])
def map(self, func):
return Point(func(self.coord), self.weight)
class LineSegment:
def __init__(self, start, end, weight=1.0):
self.start = start
self.end = end
self.weight = weight
self.length_squared = float((self.end.x - self.start.x) ** 2 +
(self.end.y - self.start.y) ** 2)
self.extent = Extent(coords=(start, end))
def __str__(self):
return 'LineSegment(%s, %s)' % (self.start, self.end)
def distance(self, coord):
# http://stackoverflow.com/questions/849211/shortest-distance-between-a-point-and-a-line-segment
# http://www.topcoder.com/tc?d1=tutorials&d2=geometry1&module=Static#line_point_distance
# http://local.wasp.uwa.edu.au/~pbourke/geometry/pointline/
try:
dx = (self.end.x - self.start.x)
dy = (self.end.y - self.start.y)
u = ((coord.x - self.start.x) * dx +
(coord.y - self.start.y) * dy) / self.length_squared
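            # u is the fractional position along the segment of the
            # perpendicular projection of coord; clamping to [0, 1]
            # snaps points beyond either endpoint to that endpoint.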
if u < 0:
u = 0
elif u > 1:
u = 1
except ZeroDivisionError:
u = 0 # Our line is zero-length. That's ok.
dx = self.start.x + u * dx - coord.x
dy = self.start.y + u * dy - coord.y
return math.sqrt(dx * dx + dy * dy)
def add_heat_to_matrix(self, matrix, kernel):
# Iterate over every point in a bounding box around this, with an
# extra margin given by the kernel's self-reported maximum range.
# TODO: There is probably a more clever iteration that skips more
# of the empty space.
for x in range(int(self.extent.min.x - kernel.radius),
int(self.extent.max.x + kernel.radius + 1)):
for y in range(int(self.extent.min.y - kernel.radius),
int(self.extent.max.y + kernel.radius + 1)):
coord = Coordinate(x, y)
heat = kernel.heat(self.distance(coord))
if heat:
matrix.add(coord, self.weight * heat)
def map(self, func):
return LineSegment(func(self.start), func(self.end))
class LinearKernel:
'''Uses a linear falloff, essentially turning a point into a cone.'''
def __init__(self, radius):
self.radius = radius # in pixels
self.radius_float = float(radius) # worthwhile time saver
def heat(self, distance):
if distance >= self.radius:
return 0.0
return 1.0 - (distance / self.radius_float)
class GaussianKernel:
def __init__(self, radius):
'''radius is the distance beyond which you should not bother.'''
self.radius = radius
# We set the scale such that the heat value drops to 1/256 of
# the peak at a distance of radius.
self.scale = math.log(256) / radius
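        # Derivation: heat(radius) = exp(-radius * log(256)/radius)
        #                          = exp(-log(256)) = 1/256.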
    def heat(self, distance):
        '''Returns 1.0 at the center, falling to 1/256 at radius
        pixels from the center.'''
        return math.e ** (-distance * self.scale)
class ColorMap:
DEFAULT_HSVA_MIN_STR = '000ffff00'
DEFAULT_HSVA_MAX_STR = '02affffff'
@staticmethod
def _str_to_float(string, base=16, maxval=256):
return float(int(string, base)) / maxval
@staticmethod
def str_to_hsva(string):
        '''
        Returns a 4-tuple of floats from a hex string color
        specification, such that AAABBCCDD is split as AAA, BB, CC, DD
        and each field is divided by 256. For example,
        str_to_hsva('06688bbff') returns approximately
        (0.398, 0.531, 0.730, 0.996). Note that the hue field is 3
        digits, so values above 0x100 wrap around the color wheel.
        '''
if string.startswith('#'):
string = string[1:] # Leading "#" was once required, is now optional.
return tuple(ColorMap._str_to_float(s) for s in (string[0:3],
string[3:5],
string[5:7],
string[7:9]))
def __init__(self, hsva_min=None, hsva_max=None, image=None, steps=256):
'''
Create a color map based on a progression in the specified
range, or using pixels in a provided image.
If supplied, hsva_min and hsva_max must each be a 4-tuple of
(hue, saturation, value, alpha), where each is a float from
0.0 to 1.0. The gradient will be a linear progression from
hsva_min to hsva_max, including both ends of the range.
The optional steps argument specifies how many discrete steps
there should be in the color gradient when using hsva_min
and hsva_max.
'''
# TODO: do the interpolation in Lab space instead of HSV
self.values = []
if image:
assert image.mode == 'RGBA', (
'Gradient image must be RGBA. Yours is %s.' % image.mode)
num_rows = image.size[1]
self.values = [image.getpixel((0, row)) for row in range(num_rows)]
self.values.reverse()
else:
if not hsva_min:
hsva_min = ColorMap.str_to_hsva(self.DEFAULT_HSVA_MIN_STR)
if not hsva_max:
hsva_max = ColorMap.str_to_hsva(self.DEFAULT_HSVA_MAX_STR)
# Turn (h1,s1,v1,a1), (h2,s2,v2,a2) into (h2-h1,s2-s1,v2-v1,a2-a1)
hsva_range = list(map(lambda min, max: max - min, hsva_min, hsva_max))
for value in range(0, steps):
hsva = list(map(
lambda range, min: value / float(steps - 1) * range + min,
hsva_range, hsva_min))
hsva[0] = hsva[0] % 1 # in case hue is out of range
rgba = tuple(
[int(x * 255) for x in hsv_to_rgb(*hsva[0:3]) + (hsva[3],)])
self.values.append(rgba)
def get(self, floatval):
return self.values[int(floatval * (len(self.values) - 1))]
class ImageMaker():
def __init__(self, config):
'''Each argument to the constructor should be a 4-tuple of (hue,
saturaton, value, alpha), one to use for minimum data values and
one for maximum. Each should be in [0,1], however because hue is
circular, you may specify hue in any range and it will be shifted
into [0,1] as needed. This is so you can wrap around the color
wheel in either direction.'''
self.config = config
if config.background and not config.background_image:
self.background = ImageColor.getrgb(config.background)
else:
self.background = None
@staticmethod
def _blend_pixels(a, b):
# a is RGBA, b is RGB; we could write this more generically,
# but why complicate things?
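        # Standard "over" compositing: out = fg*alpha + bg*(1 - alpha).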
alpha = a[3] / 255.0
return tuple(
map(lambda aa, bb: int(aa * alpha + bb * (1 - alpha)), a[:3], b))
def make_image(self, matrix):
extent = self.config.extent_out
if not extent:
extent = matrix.extent()
extent.resize((self.config.width or 1) - 1,
(self.config.height or 1) - 1)
size = extent.size()
size.x = int(size.x) + 1
size.y = int(size.y) + 1
logging.info('saving image (%d x %d)' % (size.x, size.y))
if self.background:
img = Image.new('RGB', (size.x, size.y), self.background)
else:
img = Image.new('RGBA', (size.x, size.y))
maxval = max(matrix.values())
pixels = img.load()
for (coord, val) in matrix.items():
x = int(coord.x - extent.min.x)
y = int(coord.y - extent.min.y)
if extent.is_inside(coord):
color = self.config.colormap.get(val / maxval)
if self.background:
pixels[x, y] = ImageMaker._blend_pixels(color,
self.background)
else:
pixels[x, y] = color
if self.config.background_image:
img = Image.composite(img, self.config.background_image,
img.split()[3])
return img
class ImageSeriesMaker():
'''Creates a movie showing the data appearing on the heatmap.'''
def __init__(self, config):
self.config = config
self.image_maker = ImageMaker(config)
self.tmpdir = tempfile.mkdtemp()
self.imgfile_template = os.path.join(self.tmpdir, 'frame-%05d.png')
def _save_image(self, matrix):
self.frame_count += 1
logging.info('Frame %d' % (self.frame_count))
matrix = matrix.finalized()
image = self.image_maker.make_image(matrix)
image.save(self.imgfile_template % self.frame_count)
def maybe_save_image(self, matrix):
self.inputs_since_output += 1
if self.inputs_since_output >= self.config.frequency:
self._save_image(matrix)
self.inputs_since_output = 0
@staticmethod
def create_movie(infiles, outfile, ffmpegopts):
command = ['ffmpeg', '-i', infiles]
if ffmpegopts:
# I hope they don't have spaces in their arguments
command.extend(ffmpegopts.split())
command.append(outfile)
logging.info('Encoding video: %s' % ' '.join(command))
subprocess.call(command)
def run(self):
logging.info('Putting animation frames in %s' % self.tmpdir)
self.inputs_since_output = 0
self.frame_count = 0
matrix = process_shapes(self.config, self.maybe_save_image)
        if (not self.frame_count
                or self.inputs_since_output >= self.config.straggler_threshold):
self._save_image(matrix)
self.create_movie(self.imgfile_template,
self.config.output,
self.config.ffmpegopts)
if self.config.keepframes:
logging.info('The animation frames are in %s' % self.tmpdir)
else:
shutil.rmtree(self.tmpdir)
return matrix
def _get_osm_image(bbox, zoom, osm_base):
# Just a wrapper for osm.createOSMImage to translate coordinate schemes
try:
from osmviz.manager import PILImageManager, OSMManager
osm = OSMManager(
image_manager=PILImageManager('RGB'),
server=osm_base)
(c1, c2) = bbox.corners()
image, bounds = osm.createOSMImage((c1.lat, c2.lat, c1.lon, c2.lon), zoom)
(lat1, lat2, lon1, lon2) = bounds
return image, Extent(coords=(LatLon(lat1, lon1),
LatLon(lat2, lon2)))
except ImportError as e:
logging.error(
"ImportError: %s.\n"
"The --osm option depends on the osmviz module, available from\n"
"http://cbick.github.com/osmviz/\n\n" % str(e))
sys.exit(1)
def _scale_for_osm_zoom(zoom):
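    # At zoom z, the OSM world map is 2**z tiles of 256 px across,
    # so 360 degrees of longitude spans 256 * 2**z pixels.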
return 256 * pow(2, zoom) / 360.0
def choose_osm_zoom(config, padding):
# Since we know we're only going to do this with Mercator, we could do
# a bit more math and solve this directly, but as a first pass method,
# we instead project the bounding box into pixel-land at a high zoom
# level, then see the power of two we're off by.
if config.zoom:
return config.zoom
if not (config.width or config.height):
raise ValueError('For OSM, you must specify height, width, or zoom')
crazy_zoom_level = 30
proj = MercatorProjection()
scale = _scale_for_osm_zoom(crazy_zoom_level)
proj.pixels_per_degree = scale
bbox_crazy_xy = config.extent_in.map(proj.project)
if config.width:
size_ratio = width_ratio = (
float(bbox_crazy_xy.size().x) / (config.width - 2 * padding))
if config.height:
size_ratio = (
float(bbox_crazy_xy.size().y) / (config.height - 2 * padding))
if config.width:
size_ratio = max(size_ratio, width_ratio)
# TODO: We use --height and --width as upper bounds, choosing a zoom
# level that lets our image be no larger than the specified size.
# It might be desirable to use them as lower bounds or to get as close
# as possible, whether larger or smaller (where "close" probably means
# in pixels, not scale factors).
# TODO: This is off by a little bit at small scales.
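    # Example: if the projected bbox is 8 times too large at the probe
    # zoom, log2(8) = 3, so the chosen zoom is three levels lower.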
zoom = int(crazy_zoom_level - math.log(size_ratio, 2))
logging.info('Choosing OSM zoom level %d' % zoom)
return zoom
def get_osm_background(config, padding):
zoom = choose_osm_zoom(config, padding)
proj = MercatorProjection()
proj.pixels_per_degree = _scale_for_osm_zoom(zoom)
bbox_xy = config.extent_in.map(proj.project)
# We're not checking that the padding fits within the specified size.
bbox_xy.grow(padding)
bbox_ll = bbox_xy.map(proj.inverse_project)
image, img_bbox_ll = _get_osm_image(bbox_ll, zoom, config.osm_base)
img_bbox_xy = img_bbox_ll.map(proj.project)
# TODO: this crops to our data extent, which means we're not making
# an image of the requested dimensions. Perhaps we should let the
# user specify whether to treat the requested size as min,max,exact.
offset = bbox_xy.min - img_bbox_xy.min
image = image.crop((
int(offset.x),
int(offset.y),
int(offset.x + bbox_xy.size().x + 1),
int(offset.y + bbox_xy.size().y + 1)))
config.background_image = image
config.extent_in = bbox_ll
config.projection = proj
(config.width, config.height) = image.size
return image, bbox_ll, proj
def process_shapes(config, hook=None):
matrix = Matrix.matrix_factory(config.decay)
logging.info('processing data')
for shape in config.shapes:
shape = shape.map(config.projection.project)
# TODO: skip shapes outside map extent
shape.add_heat_to_matrix(matrix, config.kernel)
if hook:
hook(matrix)
return matrix
def shapes_from_gpx(filename):
track = TrackLog(filename)
for trkseg in track.segments():
for i, p1 in enumerate(trkseg[:-1]):
p2 = trkseg[i + 1]
yield LineSegment(p1.coords, p2.coords)
def shapes_from_file(filename):
logging.info('reading points from %s' % filename)
count = 0
with open(filename, 'rU') as f:
for line in f:
line = line.strip()
if len(line) > 0: # ignore blank lines
values = [float(x) for x in line.split()]
assert len(values) == 2 or len(values) == 3, (
'input lines must have two or three values: %s' % line)
(lat, lon) = values[0:2]
weight = 1.0 if len(values) == 2 else values[2]
count += 1
yield Point(LatLon(lat, lon), weight)
logging.info('read %d points' % count)
def shapes_from_csv(filename, ignore_csv_header):
import csv
logging.info('reading csv')
count = 0
with open(filename, 'rU') as f:
reader = csv.reader(f)
if ignore_csv_header:
next(reader) # Skip header line
for row in reader:
(lat, lon) = (float(row[0]), float(row[1]))
count += 1
yield Point(LatLon(lat, lon))
logging.info('read %d points' % count)
def shapes_from_shp(filename):
try:
import ogr
import osr
except ImportError:
try:
from osgeo import ogr
from osgeo import osr
except ImportError:
raise ImportError('You need to have python-gdal bindings installed')
driver = ogr.GetDriverByName("ESRI Shapefile")
dataSource = driver.Open(filename, 0)
if dataSource is None:
raise Exception("Not a valid shape file")
layer = dataSource.GetLayer()
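    # GetGeomType() == 1 corresponds to ogr.wkbPoint.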
if layer.GetGeomType() != 1:
raise Exception("Only point layers are supported")
spatial_reference = layer.GetSpatialRef()
if spatial_reference is None:
raise Exception("The shapefile doesn't have spatial reference")
spatial_reference.AutoIdentifyEPSG()
auth_code = spatial_reference.GetAuthorityCode(None)
    if not auth_code:
raise Exception("The input shapefile projection could not be recognized")
if auth_code != '4326':
# TODO: implement reproject layer (maybe geometry by geometry is easier)
raise Exception("Currently only Lng-Lat WGS84 is supported (EPSG 4326)")
count = 0
for feature in layer:
geom = feature.GetGeometryRef()
lat = geom.GetY()
lon = geom.GetX()
count += 1
yield Point(LatLon(lat,lon))
logging.info('read %d points' % count)
class Configuration(object):
'''
This object holds the settings for creating a heatmap as well as
an iterator for the input data.
Most of the command line processing is about settings and data, so
the command line options are also processed with this object.
This happens in two phases.
First the settings are parsed and turned into more useful objects
in set_from_options(). Command line flags go in, and the
Configuration object is populated with the specified values and
defaults.
    In the second phase, various other parameters are computed. These
    are things we set automatically based on the other settings or on
    the data. You can skip this phase if you set everything manually.
    The idea is that someone could import this module, populate a
    Configuration instance by hand, and run the process themselves.
Where possible, this object contains instances, rather than option
strings (e.g. for projection, kernel, colormap, etc).
Every parameter is explained in the glossary dictionary, and only
documented parameters are allowed. Parameters default to None.
'''
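    # Illustrative library-style use (a hedged sketch; 'heatmap' as the
    # module name and 'points.txt' are assumptions, not part of this
    # file):
    #
    #   import heatmap
    #   config = heatmap.Configuration()   # parses option defaults
    #   config.shapes = heatmap.shapes_from_file('points.txt')
    #   config.width = 800
    #   config.fill_missing()
    #   matrix = heatmap.process_shapes(config).finalized()
    #   heatmap.ImageMaker(config).make_image(matrix).save('out.png')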
glossary = {
# Many of these are exactly the same as the command line option.
# In those cases, the documentation is left blank.
# Many have default values based on the command line defaults.
'output' : '',
'width' : '',
'height' : '',
'margin' : '',
'shapes' : 'unprojected iterable of shapes (Points and LineSegments)',
'projection' : 'Projection instance',
'colormap' : 'ColorMap instance',
'decay' : '',
'kernel' : 'kernel instance',
'extent_in' : 'extent in original space',
'extent_out' : 'extent in projected space',
'background': '',
'background_image': '',
'background_brightness' : '',
# OpenStreetMap background tiles
'osm' : 'True/False; see command line options',
'osm_base' : '',
'zoom' : '',
# These are for making an animation, ignored otherwise.
'ffmpegopts' : '',
'keepframes' : '',
'frequency' : '',
'straggler_threshold' : '',
# We always instantiate an OptionParser in order to set up
# default values. You can use this OptionParser in your own
# script, perhaps adding your own options.
'optparser' : 'OptionParser instance for command line processing',
}
_kernels = { 'linear': LinearKernel,
'gaussian': GaussianKernel, }
_projections = { 'equirectangular': EquirectangularProjection,
'mercator': MercatorProjection, }
def __init__(self, use_defaults=True):
for k in self.glossary.keys():
setattr(self, k, None)
self.optparser = self._make_optparser()
if use_defaults:
self.set_defaults()
def set_defaults(self):
(options, args) = self.optparser.parse_args([])
self.set_from_options(options)
def _make_optparser(self):
        '''Return an OptionParser set up for our command line options.'''
# TODO: convert to argparse
from optparse import OptionParser
optparser = OptionParser(version=__version__)
optparser.add_option('-g', '--gpx', metavar='FILE')
optparser.add_option(
'-p', '--points', metavar='FILE',
help=(
'File containing one space-separated coordinate pair per line, '
'with optional point value as third term.'))
optparser.add_option(
'', '--csv', metavar='FILE',
help=(
'File containing one comma-separated coordinate pair per line, '
'the rest of the line is ignored.'))
optparser.add_option(
'', '--ignore_csv_header', action='store_true',
help='Ignore first line of CSV input file.')
optparser.add_option(
'', '--shp_file', metavar='FILE',
help=('ESRI Shapefile containing the points.'))
        optparser.add_option(
            '-s', '--scale', metavar='FLOAT', type='float',
            help='meters per pixel, approximate')
        optparser.add_option(
            '-W', '--width', metavar='INT', type='int',
            help='width of output image')
        optparser.add_option(
            '-H', '--height', metavar='INT', type='int',
            help='height of output image')
optparser.add_option(
'-P', '--projection', metavar='NAME', type='choice',
choices=list(self._projections.keys()), default='mercator',
help='choices: ' + ', '.join(self._projections.keys()) +
'; default: %default')
optparser.add_option(
'-e', '--extent', metavar='RANGE',
help=(
'Clip results to RANGE, which is specified as lat1,lon1,lat2,lon2;'
' (for square mercator: -85.0511,-180,85.0511,180)'))
optparser.add_option(
'-R', '--margin', metavar='INT', type='int', default=0,
help=(
'Try to keep data at least this many pixels away from image '
'border.'))
optparser.add_option(
'-r', '--radius', metavar='INT', type='int', default=5,
help='pixel radius of point blobs; default: %default')
optparser.add_option(
'-d', '--decay', metavar='FLOAT', type='float', default=0.95,
help=(
'float in [0,1]; Larger values give more weight to data '
'magnitude. Smaller values are more democratic. default:'
'%default'))
optparser.add_option(
'-S', '--save', metavar='FILE', help='save processed data to FILE')
optparser.add_option(
'-L', '--load', metavar='FILE', help='load processed data from FILE')
optparser.add_option(
'-o', '--output', metavar='FILE',
help='name of output file (image or video)')
optparser.add_option(
'-a', '--animate', action='store_true',
help='Make an animation instead of a static image')
optparser.add_option(
'', '--frequency', type='int', default=1,
help='input points per animation frame; default: %default')
optparser.add_option(
'', '--straggler_threshold', type='int', default=1,
help='add one more animation frame if >= this many inputs remain')
optparser.add_option(
'-F', '--ffmpegopts', metavar='STR',
help='extra options to pass to ffmpeg when making an animation')
optparser.add_option(
'-K', '--keepframes', action='store_true',
help='keep intermediate images after creating an animation')
optparser.add_option(
'-b', '--background', metavar='COLOR',
help='composite onto this background (color name or #rrggbb)')
optparser.add_option(
'-I', '--background_image', metavar='FILE',
help='composite onto this image')
optparser.add_option(
'-B', '--background_brightness', type='float', metavar='NUM',
help='Multiply each pixel in background image by this.')
optparser.add_option(
'-m', '--hsva_min', metavar='HEX',
default=ColorMap.DEFAULT_HSVA_MIN_STR,
help='hhhssvvaa hex for minimum data values; default: %default')
optparser.add_option(
'-M', '--hsva_max', metavar='HEX',
default=ColorMap.DEFAULT_HSVA_MAX_STR,
help='hhhssvvaa hex for maximum data values; default: %default')
optparser.add_option(
'-G', '--gradient', metavar='FILE',
            help=(
                'Take the color gradient from the first column of '
                'pixels in this image. Overrides -m and -M.'))
optparser.add_option(
'-k', '--kernel',
type='choice',
default='linear',
choices=list(self._kernels.keys()),
help=('Kernel to use for the falling-off function; choices: ' +
', '.join(self._kernels.keys()) + '; default: %default'))
optparser.add_option(
'', '--osm', action='store_true',
help='Composite onto OpenStreetMap tiles')
optparser.add_option(
'', '--osm_base', metavar='URL',
default='http://tile.openstreetmap.org',
help='Base URL for map tiles; default %default')
optparser.add_option(
'-z', '--zoom', type='int',
help='Zoom level for OSM; 0 (the default) means autozoom')
optparser.add_option('-v', '--verbose', action='store_true')
optparser.add_option('', '--debug', action='store_true')
return optparser
def set_from_options(self, options):
for k in self.glossary.keys():
try:
setattr(self, k, getattr(options, k))
except AttributeError:
pass
self.kernel = self._kernels[options.kernel](options.radius)
self.projection = self._projections[options.projection]()
if options.scale:
self.projection.meters_per_pixel = options.scale
if options.gradient:
self.colormap = ColorMap(image = Image.open(options.gradient))
else:
self.colormap = ColorMap(hsva_min = ColorMap.str_to_hsva(options.hsva_min),
hsva_max = ColorMap.str_to_hsva(options.hsva_max))
if options.gpx:
logging.debug('Reading from gpx: %s' % options.gpx)
self.shapes = shapes_from_gpx(options.gpx)
elif options.points:
logging.debug('Reading from points: %s' % options.points)
self.shapes = shapes_from_file(options.points)
elif options.csv:
logging.debug('Reading from csv: %s' % options.csv)
self.shapes = shapes_from_csv(options.csv, options.ignore_csv_header)
elif options.shp_file:
logging.debug('Reading from Shape File: %s' % options.shp_file)
self.shapes = shapes_from_shp(options.shp_file)
if options.extent:
(lat1, lon1, lat2, lon2) = \
[float(f) for f in options.extent.split(',')]
self.extent_in = Extent(coords=(LatLon(lat1, lon1),
LatLon(lat2, lon2)))
if options.background_image:
self.background_image = Image.open(options.background_image)
            (self.width, self.height) = self.background_image.size
def fill_missing(self):
if not self.shapes:
raise ValueError('no input specified')
padding = self.margin + self.kernel.radius
if not self.extent_in:
logging.debug('reading input data')
self.shapes = list(self.shapes)
logging.debug('read %d shapes' % len(self.shapes))
self.extent_in = Extent(shapes=self.shapes)
if self.osm:
get_osm_background(self, padding)
else:
if not self.projection.is_scaled():
self.projection.auto_set_scale(self.extent_in, padding,
self.width, self.height)
if not (self.width or self.height or self.background_image):
raise ValueError('You must specify width or height or scale '
'or background_image or both osm and zoom.')
if self.background_brightness is not None:
if self.background_image:
self.background_image = self.background_image.point(
lambda x: x * self.background_brightness)
self.background_brightness = None # idempotence
else:
logging.warning(
'background brightness specified, but no background image')
if not self.extent_out:
self.extent_out = self.extent_in.map(self.projection.project)
self.extent_out.grow(padding)
logging.info('input extent: %s' % str(self.extent_out.map(
self.projection.inverse_project)))
logging.info('output extent: %s' % str(self.extent_out))
def main():
logging.basicConfig(format='%(relativeCreated)8d ms // %(message)s')
config = Configuration(use_defaults=False)
(options, args) = config.optparser.parse_args()
if options.verbose:
logging.getLogger().setLevel(logging.INFO)
if options.debug:
logging.getLogger().setLevel(logging.DEBUG)
if options.load:
logging.info('loading data')
matrix = pickle.load(open(options.load, 'rb'))
config = matrix['config']
del matrix['config']
config.set_from_options(options)
config.fill_missing()
else:
config.set_from_options(options)
config.fill_missing()
if options.animate:
animator = ImageSeriesMaker(config)
matrix = animator.run()
else:
matrix = process_shapes(config)
matrix = matrix.finalized()
if options.output and not options.animate:
image = ImageMaker(config).make_image(matrix)
image.save(options.output)
if options.save:
logging.info('saving data')
matrix['config'] = config
pickle.dump(matrix, open(options.save, 'wb'), 2)
logging.info('end')
if __name__ == '__main__':
main()
| HoTSStuff/replaylib | replaylib/heatmap.py | Python | apache-2.0 | 44,628 | ["Gaussian"] | cb3a3b0e80adc20520c287e60ba4215128850d9b7ced9a53b5923ce6e2ead74d |
import argparse
import os
from os import path
import subprocess
import sys
import socket
import time
import warnings
from math import floor
import gc # garbage collector
import smtplib
import numpy as np
from scipy import signal, linalg
from matplotlib import pyplot as plt
import GPy
import classes as cls
import utilities as util
from utilities import bcolors
# import rpy2.robjects as ro
# from rpy2.robjects.packages import importr
# from rpy2.robjects.numpy2ri import numpy2ri
# # Activate automatic conversion of ndarray to R objects
# ro.conversion.py2ri = numpy2ri
from progressbar import ProgressBar, SimpleProgress, ETA, Percentage, Bar, \
AnimatedMarker, Timer, Counter
if __name__ == "__main__":
# gc.set_debug(gc.DEBUG_LEAK)
# Parsing input from command line
parser = argparse.ArgumentParser(
description = "SN lightcurve fitter and classifier.",
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
actionGroup = parser.add_argument_group('ACTION')
inputGroup = parser.add_argument_group('INPUT')
"""
ACTION OPTIONS
----------------------------------------------------------------------------
"""
actionGroup.add_argument(
"--fit", dest="fit",
action="store_true",
help="Fit lightcurves with Gaussian processes method."
)
actionGroup.add_argument(
'--prior', dest='prior',
action='store_true', help='Use priors in GP regression.'
)
actionGroup.add_argument(
'--length', dest='testLength',
action='store_true',
        help='Set the length-scale hyperparameter to a random value to \
    ease optimization.'
)
actionGroup.add_argument(
"--cross-correlation", dest="crossCor",
action="store_true",
help="Performs cross correlation between non peaked lcs (with maximum in \
r-band at one of the MJD extremes) and all the peaked lcs. Produces \
an estimate for maximum in r-band. VERY TIME CONSUMING."
)
actionGroup.add_argument(
"--distance-matrix", dest="distMatrix",
action="store_true",
help="Calculate distance between fitted lightcurves in same band. \
It is use to build a diffusion map (see Coifman & Lafon (2006) \
and Lafon & Lee (2006)).")
actionGroup.add_argument(
"--diffuse", dest="diffuse",
action="store_true",
help="Computes the diffusion map coefficients. Run together or after \
--distance-matrix option. Uses `diffusionMap` R package developed \
by Joseph Richards.")
actionGroup.add_argument(
"--train", dest="train",
action="store_true",
help="Train the classifier - Random Forest. Uses `randomForest` R \
package.")
actionGroup.add_argument(
"--classify", dest="classify",
action="store_true")
actionGroup.add_argument(
"--plot", dest="plot",
action="store_true",
help="Save on `pdf` file the plot of fitting curve over data.")
actionGroup.add_argument(
'--nice-plots', dest='nicePlots',
action='store_true',
help='Produces plot suitable for publication (pdf, 300dpi).'
)
"""-------------------------------------------------------------------------
INPUT OPTIONS
----------------------------------------------------------------------------
"""
inputGroup.add_argument(
"--data-directory", dest="dirData",
default="train_data" + os.sep + "SIMGEN_PUBLIC_DES",
help="Path to directory containing training data.")
inputGroup.add_argument(
"--fit-directory", dest="dirFit",
default="results" + os.sep + "FIT",
help="Path to directory containing fitted data.")
# the use of this keyword is developed in dev_magnitudes branch
inputGroup.add_argument(
"--mag", dest="mag",
action="store_true",
help="Reads in magnitudes from file."
)
inputGroup.add_argument(
"--fit-file", dest="fitFile",
help="Path to file in which to dump fitting results.")
inputGroup.add_argument(
"-f", "--file",
help="")
inputGroup.add_argument(
"-c", "--candidate", dest="cand",
default=-1, type=int,
help="ID of a candidate."
)
inputGroup.add_argument(
"--all-bands", dest="allBands",
action="store_true",
help="Plot all bands --nice-plots option."
)
inputGroup.add_argument(
"-b", "--band", dest="band", default='r',
help="Which band to plot with --nice-plots.")
inputGroup.add_argument(
"--nBands", dest="nBands",
default=-1, type=int,
help="Number of bands to plot with --nice-plots.")
inputGroup.add_argument(
'--limits', nargs=2, dest='limits',
default=[0, 5], type=int,
        help='Starting and ending indices for fitting and cross-correlation.'
)
inputGroup.add_argument(
'--offset', '-o', dest='offset',
default=0, type=int,
        help='Offset for columns with respect to limits (which refer to rows).'
)
inputGroup.add_argument(
'--plot-offset', dest='plotOffset',
default=-1, type=int,
        help='Index offset from which to begin plotting light curves.'
)
"""-------------------------------------------------------------------------
"""
args = parser.parse_args()
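    # Illustrative invocation (a sketch; the directory values are just
    # the defaults above, and the script filename is a placeholder):
    #
    #   python lightcurve_fitter.py --fit --limits 0 100 \
    #       --data-directory train_data/SIMGEN_PUBLIC_DES \
    #       --fit-directory results/FIT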
bands = ['g', 'r', 'i', 'z']
if __name__ == "__main__":
# os.system("clear")
fromAddress = 'mothra@oapd.inaf.it'
toAddress = 'marco.depa@gmail.com'
sent = False
indent = " "
resDir = "results"+os.sep
peakIdx = np.empty(0)
nopeakIdx = np.empty(0)
print bcolors.bldpur
print indent + "* * * * * * * * * * * * * * *"
print indent + "* Miniature Adventure *"
print indent + "* ------------------- *"
print indent + "* lightcurves fitting *"
print indent + "* and *"
print indent + "* SN classification *"
print indent + "* * * * * * * * * * * * * * *"
print bcolors.txtrst
    if args.dirFit == parser.get_default('dirFit'):
        yesno = str(raw_input(indent + 'Set fit directory other than default (' + \
            parser.get_default('dirFit') + ')? (y/n)'))
if yesno == 'y':
args.dirFit = str(raw_input(indent + 'Specify new directory '\
+'for fit: '))
if args.dirData[-1] != os.sep:
args.dirData += os.sep
if args.dirFit[-1] != os.sep:
args.dirFit += os.sep
print indent + 'Fit directory will be: ' + path.abspath(args.dirFit)
if not os.path.exists(path.abspath(args.dirFit)):
os.makedirs(path.abspath(args.dirFit))
start_time = time.time()
"""
Get list of files in data directory and fit directory
----------------------------------------------------------------------------
"""
p = subprocess.Popen("ls *SN*.DAT", shell=True, stdout=subprocess.PIPE,
cwd=args.dirData)
lsDirData = p.stdout.read()
lsDirData = lsDirData.split('\n')
lsDirData.sort()
lsDirData.remove('')
p = subprocess.Popen("ls *SN*.DAT", shell=True, stdout=subprocess.PIPE,
cwd=args.dirFit)
lsDirFit = p.stdout.read()
lsDirFit = lsDirFit.split('\n')
lsDirFit.sort()
lsDirFit.remove('')
"""-------------------------------------------------------------------------
"""
"""
PERFORMS LCs FITTING
"""
if args.fit:
if args.limits[1] > len(lsDirData):
print indent + \
"WARNING: upper limit > than the number of files. Corrected.\n"
args.limits[1] = len(lsDirData)
filePath = args.dirFit + 'PEAKED_{:<}_{:<5.3f}.LIST'.format(
socket.gethostname(), time.time()
)
fPeaked = open(filePath, 'w')
filePath = args.dirFit + 'NOPEAKED_{:<}_{:<5.3f}.LIST'.format(
socket.gethostname(), time.time()
)
fNopeaked = open(filePath, 'w')
# Relevant input data
print "\n" + indent + "[1] * Fit lightcurves ..."
print "\n" + indent + "Index interval [{:<},{:<})".format(
args.limits[0], args.limits[1]
)
print "\n" + indent + \
"Data directory: " + os.curdir + args.dirData
print "\n" + indent \
+ "Number of candidates = {:<d}".format(len(lsDirData))
"""
GP kernel specification
------------------------------------------------------------------------
"""
# kern = GPy.kern.RatQuad(1)
kern = GPy.kern.RBF(1)
# kern = GPy.kern.Matern32(1)
# kern = GPy.kern.Matern52(1)
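        # GPy's RBF is the squared-exponential kernel; the commented
        # alternatives (RatQuad, Matern32, Matern52) are drop-in choices.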
"""---------------------------------------------------------------------
"""
print "\n" + indent \
+ "Data will be smoothed using GP kernel " + kern.name.upper()
print '\n' + indent + \
"INDEX | SN ID | BAND"
for i in range(args.limits[0], args.limits[1]):
filePath = path.splitext(lsDirData[i])[0] + "_FIT.DAT"
"""
Check if file with fit results already exits. If positive skip
to next loop iteration.
"""
if filePath in lsDirFit:
continue
candidate = util.get_sn_from_file(
args.dirData + lsDirData[i],
args.mag
)
# Creating SupernovaFit object
candidateFit = cls.SupernovaFit(candidate, kern.name)
for b in candidate.lcsDict.keys():
# Correcting for time dilution
epoch = util.time_correct(
candidate.lcsDict[b].mjd,
candidate.zSpec if candidate.zSpec else candidate.zPhotHost
)
# Correcting for absorption
flux = util.correct_for_absorption(
candidate.lcsDict[b].flux,
candidate.MWEBV, b
)
errFlux = candidate.lcsDict[b].fluxErr
if (candidate.lcsDict[b].badCurve) or (len(flux) <= 3):
candidateFit.lcsDict[b].badCurve = True
print indent + bcolors.FAIL + \
"{:<} {:<} {:<} Bad Curve".format(i, candidate.SNID, b) + \
bcolors.txtrst
"""
>>> if 'break' instead of 'continue' the candidate would not be
>>> processed and the further code would be easier (no double
>>> checks both on data and fit).
"""
continue
"""
Fitting Lightcurve
----------------------------------------------------------------
"""
try:
predMjd, predFlux, predErr, GPModel = util.gp_fit(
epoch, flux, errFlux,
kern, n_restarts=10,
parallel=False,
test_length=args.testLength,
test_prior=args.prior)
except linalg.LinAlgError as e:
                if not sent:
server = smtplib.SMTP('mailauth.oapd.inaf.it',587)
server.starttls()
server.login('marco.depascale', 'M@p3d_8$')
msg = 'Subject: LinAlgError\n\n' + \
'index = {:<d}, SNID = {:<d}'.format(i, candidate.SNID)
server.sendmail(fromAddress, toAddress, msg)
server.close()
sent = True
"""
if LinAlgError light curve won't be saved.
"""
print indent + \
"{:>5d} {:>5d} {:>4s} > FAIL".format(
i, candidate.SNID, b
) + bcolors.FAIL + ' LinAlgError' + bcolors.txtrst
candidateFit.r.badCurve = True
raise ValueError(
'LinAlgError from GPy. Mail sent to {:s}'.format(
toAddress
)
)
else:
candidateFit.set_lightcurve(b, predMjd, predFlux, predErr)
print indent + bcolors.OKGREEN + \
"{:>5d} {:>5d} {:>4s} > DONE".format(
i, candidate.SNID, b
) + bcolors.txtrst
"""-------------------------------------------------------------
"""
else:
"""
Saving fit results on file
----------------------------------------------------------------
"""
            if not candidateFit.r.badCurve:
filePath = args.dirFit + \
path.splitext(lsDirData[i])[0] + "_FIT.DAT"
candidateFit.save_on_txt(filePath)
print indent + 'file saved!'
if candidateFit.peaked:
peakIdx = np.append(peakIdx, i)
fPeaked.write('{:<}\n'.format(filePath))
else:
nopeakIdx = np.append(nopeakIdx, i)
fNopeaked.write('{:<}\n'.format(filePath))
"""-------------------------------------------------------------
"""
gc.collect()
# free memory
gc.collect()
fPeaked.close()
fNopeaked.close()
filePath = 'peaked_{:<}_{:<5.3f}.dat'.format(
socket.gethostname(), time.time()
)
np.savetxt(args.dirFit + filePath, peakIdx,
header='Indexes of fitted LCs with r maximum.', fmt='%d')
filePath = args.dirFit + 'nopeaked_{:<}_{:<5.3f}.dat'.format(
socket.gethostname(), time.time()
)
np.savetxt(filePath, nopeakIdx,
header='Indexes of fitted LCs without an r maximum.', fmt='%d')
gc.collect()
"""#########################################################################
############################################################################
PERFORMING CROSS-CORRELATION
############################################################################
############################################################################
"""
if args.crossCor:
"""
File are sorted by SNID.
In the following peakIdx and nopeakIdx contain index referring to the
full list of files. For this reason the list of files it is queried on
dirData. It is then filtered using the above variables.
"""
print "\n" + indent + bcolors.undwht + \
"(*) Calculate cross-correlation of not peaked- with " + \
"peaked-lcs ..." + bcolors.txtrst
print "\n" + indent + "Interval [{:<},{:<})".format(args.limits[0], args.limits[1])
filePath = args.dirFit + 'PEAKED.LIST'
        if not path.exists(filePath):
# create the file concatenating existing partial files
print '{:<s} created!'.format(filePath)
peakedFileList = util.list_files(args.dirFit+'PEAKED*.LIST')
util.concat_files(peakedFileList, filePath)
peakList = np.loadtxt(filePath, dtype=np.str)
filePath = args.dirFit + 'NOPEAKED.LIST'
        if not path.exists(filePath):
# create the file from existing partial files
print '{:<s} created!'.format(filePath)
noPeakedFileList = util.list_files(args.dirFit+'NOPEAKED*.LIST')
util.concat_files(noPeakedFileList, filePath)
tmp = np.loadtxt(filePath, dtype=np.str)
if tmp.size == 1:
nopeakList = np.asarray([tmp])
else:
nopeakList = np.asarray(tmp)
if args.limits[1] > len(nopeakList):
args.limits[1] = len(nopeakList)
#
# filePath = 'repeats.txt'
# repeats = np.loadtxt(args.dirFit + filePath, dtype=np.str)
filePath = 'cross_correlated_files_{:<5.3f}.dat'.format(time.time())
reWrite = open(args.dirFit + filePath, 'w')
prog = 0
for i in nopeakList[args.limits[0]:args.limits[1]]:
            z = 0  # indexes the progress bar over peakList
"""
READ DATA FROM NOT-PEAKED FILE
creates a Supernova object
"""
filePath = i
try:
tmpSN = util.get_sn_from_file(filePath)
print "Progress: {:<d} -- {:<}".format(prog, filePath)
prog += 1
ccIndent = "ID:{: ^7d}".format(tmpSN.SNID)
widgets = [ccIndent, Percentage(), ' ',
Bar(marker='#',left='[',right=']'),
' ', ETA()]
pbar = ProgressBar(widgets=widgets, maxval=len(peakList)).start()
except IOError:
print "IOError: {:<}".format(filePath)
continue
if tmpSN.r.badCurve:
print "IOError (BAD r curve): {:<}".format(filePath)
continue
"""
create SupernovaFit object
"""
notPeaked = cls.SupernovaFit(tmpSN)
for l in tmpSN.lcsDict.keys():
notPeaked.set_lightcurve(l,
tmpSN.lcsDict[l].mjd,
tmpSN.lcsDict[l].flux,
tmpSN.lcsDict[l].fluxErr
)
"""
Shifting mjds in not-peaked
"""
notPeaked.shift_mjds()
ccMax = list()#np.zeros(peakIdx.size)
k = 0 # goes on ccMax
# for j in peakIdx:
for j in peakList:
"""
READ DATA FROM PEAKED FILE
"""
# if j in repeats:
# print indent + bcolors.WARNING + \
# 'File appears also in unpeaked list: ignoring it.' + \
# bcolors.txtrst
# continue
filePath = j#args.dirFit + lsDirData[j][0:12] + '_FIT.DAT'
try:
tmpSN = util.get_sn_from_file(filePath)
except IOError:
                    print indent + bcolors.WARNING + \
                        'File appears in peaked list but does not exist: ignoring it.' + \
                        bcolors.txtrst
continue
if tmpSN.r.badCurve:
print indent + bcolors.WARNING + \
'Peaked file has bad r curve: ignoring it.' + \
bcolors.txtrst
continue
peaked = cls.SupernovaFit(tmpSN)
for l in tmpSN.lcsDict.keys():
peaked.set_lightcurve(l,
tmpSN.lcsDict[l].mjd,
tmpSN.lcsDict[l].flux,
tmpSN.lcsDict[l].fluxErr
)
"""
Shifting mjds in peaked
"""
peaked.shift_mjds()
"""
Performing cross-correlation
"""
ycorr = signal.correlate(
notPeaked.normalized_flux('r'),
peaked.normalized_flux('r')
)
xcorr = np.arange(ycorr.size)
lags = xcorr - (
len(notPeaked.normalized_flux('r'))-1
)
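                # One correlation lag corresponds to one sample of the
                # GP-fitted light curve; distancePerLag converts lags to
                # days, assuming a uniform MJD grid.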
distancePerLag = (
notPeaked.r.shiftedMjd[-1] - \
notPeaked.r.shiftedMjd[0])/float(
len(notPeaked.r.shiftedMjd)
)
offsets = -lags*distancePerLag
# ccMax[k] = offsets[np.argmax(ycorr)]
ccMax.append(offsets[np.argmax(ycorr)])
# k += 1
pbar.update(z+1)
z += 1
# gc.collect()
notPeaked.ccMjdMaxFlux = np.mean(ccMax)#ccMax.mean()
"""
re-writing file of not peaked lc to include information on maximum
position from CC.
"""
filePath = i#args.dirFit + lsDirData[i][0:12] + '_FIT.DAT'
notPeaked.save_on_txt(filePath)
reWrite.write(filePath+'\n')
pbar.finish()
# gc.collect()
reWrite.close()
print 'CC ended!'
gc.collect()
"""
CALCULATING DISTANCE MATRIX
needs:
- args.distMatrix
- args.limits
- args.offset
- args.dirFit
"""
if args.distMatrix:
if not os.path.exists(path.abspath(args.dirFit + 'distance_matrix' + os.sep)):
os.makedirs(path.abspath(args.dirFit + 'distance_matrix' + os.sep))
"""
Calculate distance between fitted lightcurves.
Distance values are saved in a R matrix. This will be used by the R
package `diffusionMap` through rpy2 Python package.
"""
j_offset = args.offset
i_start = args.limits[0]
i_end = args.limits[1]
j_start = i_start + j_offset
j_end = (i_end + j_offset) if (i_end+j_offset<=len(lsDirFit)) else len(lsDirFit)
print "\n" + indent + bcolors.undwht + \
"(*) Calculate distances between lightcurves ..." + \
bcolors.txtrst
print indent + "Rows in [{:<d}, {:<d})".format(i_start, i_end)
print indent + "Cols in [{:<d}, {:<d})".format(j_start, j_end)
"""
setting value for big distance
"""
distFlag = 5
missColCount = 0
missRowlist = list()
bandDict = {
'g':0,
'r':1,
'i':2,
'z':3
}
widgets = [indent, 'Processing:', ' ', Counter(), ' ',
AnimatedMarker(), indent, Timer()]
# creating list of 4 lists
distList = list([[], [], [], []])
nCols = 0
# distList = np.zeros((4,
# len(lsDirFit[i_start:i_end]), len(lsDirFit[i_start:i_end])),
# dtype=float
# )
pbar = ProgressBar(widgets=widgets, maxval=(i_end-i_start)).start()
for i in range(i_start, i_end):
missColCount = 0
"""
Reading in i-candidate
"""
tmpSN = util.get_sn_from_file(
args.dirFit+lsDirFit[i]
)
            if tmpSN.r.badCurve:
                # Nothing to add to the distance matrix: record the
                # missing row and continue to the next object.
                missRowlist.append(i)
                continue
iCandidate = cls.SupernovaFit(tmpSN)
for b in tmpSN.lcsDict.keys():
# set_lightcurve set also if the lc is peaked or not
iCandidate.set_lightcurve(b,
tmpSN.lcsDict[b].mjd,
tmpSN.lcsDict[b].flux,
tmpSN.lcsDict[b].fluxErr
)
"""
Shifting mjds in i-candidate
"""
iCandidate.shift_mjds()
            if not iCandidate.peaked:
                # print i, iCandidate.SNID
                """
                kept to perform checks against other non-peaked LCs
                """
iElMax = iCandidate.r.shiftedMjd.index(0.)
"""
correcting using CC results
"""
for b in bands:
iCandidate.lcsDict[b].shiftedMjd = [
iCandidate.lcsDict[b].shiftedMjd[l] +
iCandidate.ccMjdMaxFlux for l in range(len(
iCandidate.lcsDict[b].shiftedMjd
))
]
iElSize = iCandidate.r.size
iPeaked = iCandidate.peaked
for j in range(j_start, j_end):
"""
if this SN has badCurve in this band it will be far from all
the others by default.
here will save time from not opening all the other files
to create new SupernovaFit objcets.
"""
if j == i:
# filling elements on the distance matrix diagonal
for b in bands:
# adding one element to each sub list in distList
distList[bandDict[b]].append(0.)
# distList[bandDict[b], i-i_start, j-j_start] = 0.
continue
if j < i:
# filling matrix elements below the diagonal
if j in missRowlist:
missColCount += 1
continue
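                    # distList holds each band's matrix flattened in
                    # row-major order, so element (row, col) sits at
                    # row*nCols + col; the shifts skip rows and columns
                    # dropped for bad r-band curves.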
for b in bands:
# appending the symmetric element in the list: i-i_start
distList[bandDict[b]].append(
distList[bandDict[b]][
(j-j_start-missColCount)*nCols+\
i-i_start-len(missRowlist)
])
# distList[bandDict[b], i-i_start, j-j_start] = \
# distList[bandDict[b], j-j_start, i-i_start]
continue # jump to the next iteration of the loop
"""
Reading in j-candidate
"""
try:
tmpSN = util.get_sn_from_file(
args.dirFit+lsDirFit[j]
)
except IndexError:
print j, len(lsDirFit)
raise IndexError("list index out of range")
                if tmpSN.r.badCurve:
                    # Nothing to add to the distance matrix: continue
                    # to the next object.
                    continue
jCandidate = cls.SupernovaFit(tmpSN)
for b in tmpSN.lcsDict.keys():
jCandidate.set_lightcurve(b,
tmpSN.lcsDict[b].mjd,
tmpSN.lcsDict[b].flux,
tmpSN.lcsDict[b].fluxErr
)
"""
Shifting mjds in j-candidate
"""
jCandidate.shift_mjds()
                if not jCandidate.peaked:
                    """
                    kept to perform checks against other non-peaked LCs
                    """
jElMax = jCandidate.r.shiftedMjd.index(0.)
"""
correcting using CC results
"""
for b in bands:
jCandidate.lcsDict[b].shiftedMjd = [
jCandidate.lcsDict[b].shiftedMjd[l] +
jCandidate.ccMjdMaxFlux for l in range(len(
jCandidate.lcsDict[b].shiftedMjd
))
]
jElSize = jCandidate.r.size
for b in bands:
if not jCandidate.lcsDict[b].badCurve \
and not iCandidate.lcsDict[b].badCurve:
distList[bandDict[b]].append(
iCandidate.get_distance(jCandidate, b)
)
# distList[bandDict[b], i-i_start, j-j_start] = \
# iCandidate.get_distance(jCandidate, b)
else:
# in case of bad curve
"""
This works like a flag. These elements will be set
equal to a neutral value (the mean of the other)
"""
distList[bandDict[b]].append(distFlag)
# distList[bandDict[b], i-i_start, j-j_start] = distFlag
"""
# >>> !! Checking for i being equal to its beginning value in the loop
does not take into account the
possibility of the first SN having a bad r curve, in which case
the loop will never arrive here, since it is reset by a continue.
Checking on nCols being still equal to zero is much better, since is
the only way to verify if the first loop has been completed.
"""
# if (i == i_start):
if (nCols == 0):
nCols = len(distList[0])
print 'nCols updated! {:<d}'.format(nCols)
pbar.update(i-i_start+1)
pbar.finish()
# del iCandidate
# del jCandidate
# del tmpSN
gc.collect()
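        # Rebuild one (rows x cols) matrix per band from the flattened,
        # row-major lists accumulated above.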
distMatrix = np.zeros((4,
len(distList[0])/nCols, nCols),
dtype=float
)
for b in bands:
distMatrix[bandDict[b]] = np.reshape(
distList[bandDict[b]], (len(distList[bandDict[b]])/nCols, nCols)
)
"""
distList is no more used from now on. I delete it to save memory
"""
del distList
gc.collect()
# fixing flagged elements
# raise SystemExit
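        # Each element flagged with distFlag (bad curve in that band)
        # is replaced by the mean of the corresponding elements in the
        # other three bands.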
if distMatrix[0, distMatrix[0] == distFlag].size > 0:
ind = np.where(distMatrix[0] == distFlag)
distMatrix[0, ind[0], ind[1]] = np.add(
np.add(
distMatrix[1, ind[0], ind[1]],
distMatrix[2, ind[0], ind[1]]
),
distMatrix[3, ind[0], ind[1]]
)/3.
if distMatrix[1, distMatrix[1] == distFlag].size > 0:
ind = np.where(distMatrix[1] == distFlag)
# distMatrix[1, ind[0], ind[1]] = distMatrix[1,:,:].max()
distMatrix[1, ind[0], ind[1]] = np.add(
np.add(
distMatrix[0, ind[0], ind[1]],
distMatrix[2, ind[0], ind[1]]
),
distMatrix[3, ind[0], ind[1]]
)/3.
if distMatrix[2, distMatrix[2] == distFlag].size > 0:
ind = np.where(distMatrix[2] == distFlag)
# distMatrix[2, ind[0], ind[1]] = distMatrix[2].max()
distMatrix[2, ind[0], ind[1]] = np.add(
np.add(
distMatrix[0, ind[0], ind[1]],
distMatrix[1, ind[0], ind[1]]
),
distMatrix[3, ind[0], ind[1]]
)/3.
if distMatrix[3, distMatrix[3] == distFlag].size > 0:
ind = np.where(distMatrix[3] == distFlag)
# distMatrix[3, ind[0], ind[1]] = distMatrix[3].max()
distMatrix[3, ind[0], ind[1]] = np.add(
np.add(
distMatrix[0, ind[0], ind[1]],
distMatrix[1, ind[0], ind[1]]
),
distMatrix[2, ind[0], ind[1]]
)/3.
distMatrixSum = np.sum(distMatrix, 0)
"""
Saving on text files
"""
fileHeader = "distMatrix[{:<d}:{:<d},{:<d}:{:<d}] --- ".format(
i_start, i_end, j_start, j_end
) + \
"Created by {:<}".format(socket.gethostname())
filePath = args.dirFit + 'distance_matrix' + os.sep + \
'dist_matrix_Sum_{:<}_{:<5.3f}.txt'.format(
socket.gethostname(), time.time()
)
np.savetxt(filePath, distMatrixSum, fmt='%6.4f', header=fileHeader)
del distMatrixSum
gc.collect()
filePath = args.dirFit + 'distance_matrix' + os.sep + \
'dist_matrix_g_{:<}_{:<5.3f}.txt'.format(
socket.gethostname(), time.time()
)
np.savetxt(filePath, distMatrix[0], fmt='%6.4f', header=fileHeader)
filePath = args.dirFit + 'distance_matrix' + os.sep + \
'dist_matrix_r_{:<}_{:<5.3f}.txt'.format(
socket.gethostname(), time.time()
)
np.savetxt(filePath, distMatrix[1], fmt='%6.4f', header=fileHeader)
filePath = args.dirFit + 'distance_matrix' + os.sep + \
'dist_matrix_i_{:<}_{:<5.3f}.txt'.format(
socket.gethostname(), time.time()
)
np.savetxt(filePath, distMatrix[2], fmt='%6.4f', header=fileHeader)
filePath = args.dirFit + 'distance_matrix' + os.sep + \
'dist_matrix_z_{:<}_{:<5.3f}.txt'.format(
socket.gethostname(), time.time()
)
np.savetxt(filePath, distMatrix[3], fmt='%6.4f', header=fileHeader)
del distMatrix
gc.collect()
"""
CALCULATING DIFFUSION MAP
"""
    if args.diffuse:
        import rpy2.robjects as ro
        from rpy2.robjects.packages import importr
        if 'diffusionMap' not in globals():
            diffusionMap = importr('diffusionMap')
        # NOTE: Rmatrix (the distance matrix as an R object) has to be
        # created before this point; that step is not in this file.
        ndim = ro.r.attributes(Rmatrix)[0][0]
        dmap = diffusionMap.diffuse(Rmatrix, neigen=5)
        util.dump_pkl('diffusion_map.pkl', dmap)
"""
TRAINING RANDOM FOREST CLASSIFIER
"""
    if args.train:
        from rpy2.robjects.packages import importr
        randomForest = importr('randomForest')
if 'dmap' not in globals():
print indent + 'Loading catalog from dump file ...'
dmap = util.open_pkl('tmp_diffusion_map.pkl')
dmap_rf = randomForest.randomForest(dmap)
"""
PLOT OBSERVATION AND FIT
--plot
"""
if args.plot:
timeMark = time.time()
"""
getting file list from directory
File will be sorted by SNID
"""
print indent + 'Plotting ...'
'''
Column index is always increasing, no check on its value.
'''
nrows = 5
ncols = 5
"""
If plotOffset is to specified, get a proper random value
"""
        if (args.plotOffset == -1):
            offset = int(np.random.uniform(low=0, high=len(lsDirFit)-nrows*ncols))
else:
offset = args.plotOffset
fig_g, ax_g = plt.subplots(nrows=nrows, ncols=ncols,
figsize=(16.5, 11.7)#,
#tight_layout=True
)
fig_r, ax_r = plt.subplots(nrows=nrows, ncols=ncols,
figsize=(16.5, 11.7)#,
#tight_layout=True
)
fig_i, ax_i = plt.subplots(nrows=nrows, ncols=ncols,
figsize=(16.5, 11.7)#,
#tight_layout=True
)
fig_z, ax_z = plt.subplots(nrows=nrows, ncols=ncols,
figsize=(16.5, 11.7)#,
# tight_layout=True
)
dictFig = {'g':fig_g,
'r':fig_r,
'i':fig_i,
'z':fig_z}
dictAx = {'g':ax_g,
'r':ax_r,
'i':ax_i,
'z':ax_z}
r = {'g':0,
'r':0,
'i':0,
'z':0}
c = {'g':0,
'r':0,
'i':0,
'z':0}
"""
Adjust subplot margins and title
"""
for b in dictFig.keys():
dictFig[b].subplots_adjust(
top=0.96, right=0.99, bottom=0.03, left=0.02,
wspace=0.08, hspace=0.13
)
dictFig[b].suptitle('band {:<1} - offset {:<d}'.format(b, offset))
GPkern = ''
for i in range(nrows*ncols):
"""
Getting the observational data from file
"""
candidate = util.get_sn_from_file(
args.dirData + lsDirData[i+offset]#candidateIdx]
)
"""
Reading fit data from file
"""
try:
tmpSN = util.get_sn_from_file(
args.dirFit+lsDirFit[i+offset],
magFlag=args.mag,
)
        except IndexError:
            warnStr = 'IndexError: list index out of range. ' + \
                'i={:<d}.'.format(i+offset)
            warnings.warn(warnStr)
            print '\n'+indent+'Saving files as they are and stopping.'
else:
"""
Initializing SupernovaFit object
"""
fit = cls.SupernovaFit(tmpSN,
tmpSN.kern if hasattr(tmpSN, 'kern') else None)
if (i == 0) and hasattr(tmpSN, 'kern'):
GPkern = tmpSN.kern
for b in tmpSN.lcsDict.keys():
fit.set_lightcurve(b,
tmpSN.lcsDict[b].mjd,
tmpSN.lcsDict[b].flux,
tmpSN.lcsDict[b].fluxErr,
magFlag=args.mag
)
if fit.r.badCurve:
print 'SN ID{:>06d} has bad r band light curve!'.format(
fit.SNID)
# continue
else:
"""
Shift fit mjd to have 0 at r band maximum
"""
fit.shift_mjds()
"""
Fixing shiftedMjd for not-peaked LCs
"""
                if (not fit.peaked) and (not fit.r.badCurve):
"""
correcting using CC results
"""
for b in bands:
fit.lcsDict[b].shiftedMjd = [
el + fit.ccMjdMaxFlux for el in fit.lcsDict[b].shiftedMjd
]
for b in dictAx.keys():
"""
variable `data` initialized as light curve in band b for
cleaner code.
"""
data = candidate.lcsDict[b]
fit_b = fit.lcsDict[b]
fit_r = fit.lcsDict['r']
if c[b] > nrows-1:
c[b] = 0
r[b] += 1
xlim = dictAx[b][r[b], c[b]].get_xlim()
ylim = dictAx[b][r[b], c[b]].get_ylim()
dictAx[b][r[b], c[b]].set_xticks([0])
dictAx[b][r[b], c[b]].set_yticks([0])
dictAx[b][r[b], c[b]].set_xticklabels(['0'])
dictAx[b][r[b], c[b]].set_yticklabels(['0'])
if (data.badCurve == False) and (fit_b.badCurve == False) and (fit.r.badCurve == False):
epoch = util.time_correct(data.mjd,
candidate.zSpec if candidate.zSpec else candidate.zPhotHost)
epoch = [val-fit_r.mjd[fit_r.max_flux_index] for val in epoch]
if fit.peaked == False:
epoch = [val+fit.ccMjdMaxFlux for val in epoch]
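                        # epoch is now rest-frame time relative to r-band maximum flux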
flux = util.correct_for_absorption(data.flux,
candidate.MWEBV, b)
"""
Setting limits for plot axes
"""
if min(fit_b.flux) < min(flux):
y_min = min(fit_b.flux) - 3*max(fit_b.fluxErr)
else:
y_min = min(flux) - np.median(data.fluxErr)
if max(fit_b.flux) > max(flux):
y_max = max(fit_b.flux) + 3*max(fit_b.fluxErr)
else:
y_max = max(flux) + np.median(data.fluxErr)
dictAx[b][r[b], c[b]].set_ylim(y_min, y_max)
"""
Setting limits for fill_between
"""
fluxUpLim = [val for val in [
fit_b.flux[el] + fit_b.fluxErr[el]
for el in range(len(fit_b.flux))
]]
fluxLowLim = [val for val in [
fit_b.flux[el] - fit_b.fluxErr[el]
for el in range(len(fit_b.flux))
]]
dictAx[b][r[b], c[b]].fill_between(fit_b.shiftedMjd,
fluxUpLim, fluxLowLim,
facecolor='red', alpha=0.4, linewidth=0.5)
"""
Setting limits for fill_between
"""
fluxUpLim = [val for val in [
fit_b.flux[el] + 2*fit_b.fluxErr[el]
for el in range(len(fit_b.flux))
]]
fluxLowLim = [val for val in [
fit_b.flux[el] - 2*fit_b.fluxErr[el]
for el in range(len(fit_b.flux))
]]
dictAx[b][r[b], c[b]].fill_between(fit_b.shiftedMjd,
fluxUpLim, fluxLowLim,
facecolor='red', alpha=0.2, linewidth=0.5)
"""
Setting limits for fill_between
"""
fluxUpLim = [val for val in [
fit_b.flux[el] + 3*fit_b.fluxErr[el]
for el in range(len(fit_b.flux))
]]
fluxLowLim = [val for val in [
fit_b.flux[el] - 3*fit_b.fluxErr[el]
for el in range(len(fit_b.flux))
]]
dictAx[b][r[b], c[b]].fill_between(fit_b.shiftedMjd,
fluxUpLim, fluxLowLim,
facecolor='red', alpha=0.1, linewidth=0.5)
dictAx[b][r[b], c[b]].plot(fit_b.shiftedMjd, fit_b.flux,
color='#7f0000',
linewidth=2)
scatterLab = 'SN ID {:<d}'.format(candidate.SNID)
dictAx[b][r[b], c[b]].scatter(epoch, flux,
s=10, label=scatterLab, c='black', marker='x')
                        dictAx[b][r[b], c[b]].errorbar(epoch, flux,
                            data.fluxErr, fmt='none', color='black', ecolor='black')
dictAx[b][r[b], c[b]].legend(
loc='best', framealpha=0.3, fontsize='10')
else:
label = str(candidate.SNID)+" BAD CURVE"
dictAx[b][r[b], c[b]].plot([0, 1], [0, 1], color='red',
label=label)
dictAx[b][r[b], c[b]].plot([0, 1], [1, 0], color='red')
dictAx[b][r[b], c[b]].legend(
loc='best', framealpha=0.3, fontsize='10')
c[b] += 1
print indent + "Plots saved in files:"
    if not os.path.exists(os.path.abspath(args.dirFit + "plots" + os.sep)):
os.makedirs(args.dirFit + "plots")
for b in dictFig.keys():
dictFig[b].savefig(
args.dirFit + "plots"+ os.sep + GPkern + \
"_band_{:<1}_{:<f}.png".format(b,timeMark),
dpi=300
)
print indent + " - " + args.dirFit + "plots" + os.sep + \
GPkern + "_band_{:<1}_{:<f}.png".format(b,timeMark)
plt.close('all')
"""
PLOT OBSERVATION AND FIT (publication style)
--nice-plots
"""
if args.nicePlots:
"""
1 candidate
choose how many bands
make the plot with confidence regions
"""
# if args.nBands != 1 or args.nBands != 4:
# args.nBands = 1
if args.cand == -1:
args.cand = np.random.random_integers(
low=0, high=len(lsDirData))
fname = 'DES_SN{:0>6d}.DAT'.format(args.cand)
candidate = util.get_sn_from_file(
args.dirData+fname
)
fname = 'DES_SN{:0>6d}_FIT.DAT'.format(args.cand)
tmpSN = util.get_sn_from_file(
args.dirFit+fname,
magFlag=args.mag,
)
"""
Initializing SupernovaFit object
"""
fit = cls.SupernovaFit(tmpSN, tmpSN.kern if hasattr(tmpSN, 'kern') else None)
for b in tmpSN.lcsDict.keys():
fit.set_lightcurve(b,
tmpSN.lcsDict[b].mjd,
tmpSN.lcsDict[b].flux,
tmpSN.lcsDict[b].fluxErr,
magFlag=args.mag
)
if fit.r.badCurve:
raise SystemExit('Bad r curve!')
fit.shift_mjds()
"""
Fixing shiftedMjd for not-peaked LCs
"""
if fit.peaked == False:
"""
correcting using CC results
"""
for b in candidate.lcsDict.keys():
fit.lcsDict[b].shiftedMjd = [el + fit.ccMjdMaxFlux
for el in fit.lcsDict[b].shiftedMjd]
bands = candidate.lcsDict.keys() if args.allBands else args.band
"""
Pre-process data so to be compared with fit (made from
pre-precessed data)
"""
for b in bands:
if (not candidate.lcsDict[b].badCurve) and (not fit.lcsDict[b].badCurve):
candidate = util.pre_process(candidate, b)
candidate.lcsDict[b].mjd = [el - fit.r.mjd[fit.r.max_flux_index]
for el in candidate.lcsDict[b].mjd]
if fit.peaked == False:
candidate.lcsDict[b].mjd = [el + fit.ccMjdMaxFlux
for el in candidate.lcsDict[b].mjd]
else:
raise SystemExit('Bad {:1s} curve!'.format(b))
if args.allBands:
fig, ax = plt.subplots(nrows=2, ncols=2,
# figsize=(16.5, 11.7),
tight_layout=False
)
axDict = {
'g':ax[0,0],
'r':ax[0,1],
'i':ax[1,0],
'z':ax[1,1]
}
# fig.subplots_adjust(left=0.05, right=0.97, top=0.94, wspace=0.29)
else:
fig = plt.figure()
xlim = [-35,12]
ylim = [-10,10]
# fig, ax = plt.subplots(nrows=2, ncols=1,
# # figsize=(16.5, 11.7),
# tight_layout=False
# )
# axDict = {
# 'g':ax[0,0],
# 'r':ax[0,1],
# 'i':ax[1,0],
# 'z':ax[1,1]
# }
if not args.allBands:
fit_b = fit.lcsDict[args.band]
data = candidate.lcsDict[args.band]
if not data.badCurve and not fit_b.badCurve:
epoch = data.mjd
flux = data.flux
"""
Setting limits for fill_between
"""
fluxUpLim = [el for el in [
fit_b.flux[i] + fit_b.fluxErr[i]
for i in range(len(fit_b.flux))
]]
fluxLowLim = [el for el in [
fit_b.flux[i] - fit_b.fluxErr[i]
for i in range(len(fit_b.flux))
]]
plt.fill_between(fit_b.shiftedMjd,
fluxUpLim, fluxLowLim,
facecolor='red', alpha=0.4, linewidth=0.5)
# axDict[b].fill_between(fit_b.shiftedMjd,
# fluxUpLim, fluxLowLim,
# facecolor='red', alpha=0.4, linewidth=0.5)
"""
Setting limits for fill_between
"""
fluxUpLim = [el for el in [
fit_b.flux[i] + 2*fit_b.fluxErr[i]
for i in range(len(fit_b.flux))
]]
fluxLowLim = [el for el in [
fit_b.flux[i] - 2*fit_b.fluxErr[i]
for i in range(len(fit_b.flux))
]]
plt.fill_between(fit_b.shiftedMjd,
fluxUpLim, fluxLowLim,
facecolor='red', alpha=0.2, linewidth=0.5)
# axDict[b].fill_between(fit_b.shiftedMjd,
# fluxUpLim, fluxLowLim,
# facecolor='red', alpha=0.2, linewidth=0.5)
"""
Setting limits for fill_between
"""
fluxUpLim = [el for el in [
fit_b.flux[i] + 3*fit_b.fluxErr[i]
for i in range(len(fit_b.flux))
]]
fluxLowLim = [el for el in [
fit_b.flux[i] - 3*fit_b.fluxErr[i]
for i in range(len(fit_b.flux))
]]
plt.fill_between(fit_b.shiftedMjd,
fluxUpLim, fluxLowLim,
facecolor='red', alpha=0.1, linewidth=0.5)
# axDict[b].fill_between(fit_b.shiftedMjd,
# fluxUpLim, fluxLowLim,
# facecolor='red', alpha=0.1, linewidth=0.5)
plt.plot(fit_b.shiftedMjd, fit_b.flux,
color='#7f0000',
linewidth=2,
label='GP fit')
# axDict[b].plot(fit_b.shiftedMjd, fit_b.flux,
# color='#7f0000',
# linewidth=2)
plt.scatter(epoch, flux,
s=30, label='data', c='black', marker='x')
# axDict[b].scatter(epoch, flux,
# s=10, label=str(candidate.SNID), c='black', marker='x')
            plt.errorbar(epoch, flux,
                data.fluxErr, fmt='none', color='black', ecolor='black')
# plt.xlim(xlim)
plt.ylim(ylim)
title = 'SN ID {:d} - Band {:s}'.format(candidate.SNID, args.band)
plt.title(title)
plt.xlabel('Epoch [mjd]')
plt.ylabel('Flux [adu]')
plt.legend(loc='upper right', scatterpoints=1)
# axDict[b].errorbar(epoch, flux,
# data.fluxErr, fmt=None, color='black', ecolor='black')
print "\n" + indent \
+ "The process took {:5.3f} secs.".format(time.time()-start_time)
|
mdepasca/miniature-adventure
|
miniature_adventure.py
|
Python
|
unlicense
| 51,544
|
[
"Gaussian"
] |
c0bdafe016e1b7acbde4a6db4b89c15246ec8d3f7d4a725c594eccda31668c3c
|
#
# Copyright (C) 2013-2019 The ESPResSo project
#
# This file is part of ESPResSo.
#
# ESPResSo is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ESPResSo is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
"""
Simulate a Lennard-Jones fluid in different thermodynamic ensembles (NVT, NpT).
Sliders from a MIDI controller can change system variables such as temperature
and volume. Some thermodynamic observables are analyzed and plotted live.
"""
import matplotlib
matplotlib.use('WXAgg')
import espressomd
espressomd.assert_features(["LENNARD_JONES"])
from espressomd import visualization
import numpy as np
from matplotlib import pyplot
from threading import Thread
from traits.api import HasTraits, Any, Range, List, Enum, Float
from traitsui.api import View, Group, Item, CheckListEditor, RangeEditor
import time
import argparse
parser = argparse.ArgumentParser(epilog=__doc__)
group = parser.add_mutually_exclusive_group()
group.add_argument("--mayavi", action="store_const", dest="visualizer",
const="mayavi", help="MayaVi visualizer", default="mayavi")
group.add_argument("--opengl", action="store_const", dest="visualizer",
const="opengl", help="OpenGL visualizer")
args = parser.parse_args()
use_opengl = args.visualizer == "opengl"
use_mayavi = args.visualizer == "mayavi"
if use_mayavi:
from espressomd.visualization_mayavi import mlab
if use_opengl:
from pyface.api import GUI
try:
import midi
except BaseException:
try:
from pygame import midi
except BaseException:
from portmidi import midi
midi.init()
# if log flag is set, midi controller will change pressure logarithmically
pressure_log_flag = True
mayavi_autozoom = False # autozoom is buggy... works only for rotation
old_pressure = -1
# NPT variables
#############################################################
NPTGamma0 = 1.0
#NPTInitPistonMass = 1e-06
#NPTMinPistonMass = 1e-06
NPTMinPistonMass = 1e-04
NPTMaxPistonMass = 1.0
NPTInitPistonMass = NPTMinPistonMass
# System parameters
#############################################################
# 300 Particles
box_l = 7.5395
density = 0.7
# Interaction parameters (repulsive Lennard-Jones)
#############################################################
lj_eps = 1.0
lj_sig = 1.0
lj_cut = 2.5 * lj_sig
lj_cap = 20
# Integration parameters
#############################################################
system = espressomd.System(box_l=[box_l, box_l, box_l])
system.set_random_state_PRNG()
#system.seed = system.cell_system.get_state()['n_nodes'] * [1234]
system.time_step = 0.01
system.cell_system.skin = 0.4
system.thermostat.set_langevin(kT=1.0, gamma=1.0, seed=42)
system.cell_system.set_n_square(use_verlet_lists=False)
# do the warmup until the particles have at least the distance min_dist
min_dist = 0.9
# integration
int_steps = 1
int_n_times = 5000000
#############################################################
# Setup System #
#############################################################
# Interaction setup
#############################################################
system.non_bonded_inter[0, 0].lennard_jones.set_params(
epsilon=lj_eps, sigma=lj_sig,
cutoff=lj_cut, shift="auto")
system.force_cap = lj_cap
# Particle setup
#############################################################
volume = box_l**3
n_part = int(volume * density)
for i in range(n_part):
system.part.add(id=i, pos=np.random.random(3) * system.box_l)
system.analysis.dist_to(0)
act_min_dist = system.analysis.min_dist()
if use_mayavi:
vis = visualization.mayaviLive(system)
elif use_opengl:
vis = visualization.openGLLive(system)
mayavi_rotation_angle = 45.
mayavi_rotation_angle_step = 5.
mayavi_zoom = 36.
mayavi_zoom_old = mayavi_zoom
mayavi_zoom_step = 3.
plot_max_data_len = 20
#############################################################
# GUI Controls #
#############################################################
inputs, outputs = [], []
for i in range(midi.get_count()):
interf, name, input, output, opened = midi.get_device_info(i)
if input:
inputs.append((i, interf + " " + name))
if output:
outputs.append((i, interf + " " + name))
class Controls(HasTraits):
    # default to the first device, but prefer one that is not a "Through Port"
    default_input = inputs[0] if inputs else None
    for i in inputs:
        if "Through Port" not in i[1]:
            default_input = i
            break
default_output = -1
through_port_output = None
for i in outputs:
if "Through Port" not in i[1]:
default_output = i
break
else:
through_port_output = i
default_output = default_output if len(
outputs) > 1 else through_port_output
if default_input is None or default_output is None:
print('Cannot connect to any MIDI device')
input_device = List(value=default_input,
editor=CheckListEditor(values=inputs))
output_device = List(value=default_output,
editor=CheckListEditor(values=outputs))
max_temp = 2.
min_temp = 0.5
max_press = 10.
min_press = 5e-4
max_vol = 100000.
min_vol = 50.
max_n = 1000
min_n = 50
temperature = Range(min_temp, max_temp, 1., )
volume = Float(box_l**3.)
pressure = Float(1.)
number_of_particles = Range(min_n, max_n, n_part, )
ensemble = Enum('NVT', 'NPT')
midi_input = None
midi_output = None
MIDI_BASE = 224
MIDI_NUM_TEMPERATURE = MIDI_BASE + 0
MIDI_NUM_VOLUME = MIDI_BASE + 1
MIDI_NUM_PRESSURE = MIDI_BASE + 2
MIDI_NUM_NUMBEROFPARTICLES = MIDI_BASE + 3
MIDI_ROTATE = 0
MIDI_ZOOM = 144
_ui = Any
view = View(
Group(
Item('temperature', editor=RangeEditor(
low_name='min_temp', high_name='max_temp')),
Item('volume', editor=RangeEditor(
low_name='min_vol', high_name='max_vol')),
Item('pressure', editor=RangeEditor(
low_name='min_press', high_name='max_press')),
Item('number_of_particles', editor=RangeEditor(
low_name='min_n', high_name='max_n', is_float=False)),
Item('ensemble', style='custom'),
show_labels=True,
label='Parameters'
),
Group(
Item('input_device'),
Item('output_device'),
show_labels=True,
label='MIDI devices'
),
buttons=[],
title='Control',
height=0.2,
width=0.3
)
def __init__(self, **traits):
super(Controls, self).__init__(**traits)
self._ui = self.edit_traits()
self.push_current_values()
def push_current_values(self):
"""send the current values to the MIDI controller"""
self._temperature_fired()
self._volume_fired()
self._pressure_fired()
self._number_of_particles_fired()
self._ensemble_fired()
def _input_device_fired(self):
if self.midi_input is not None:
self.midi_input.close()
if self.input_device:
self.midi_input = midi.Input(self.input_device[0])
def _output_device_fired(self):
if self.midi_output is not None:
self.midi_output.close()
self.midi_output = midi.Output(self.output_device[0])
self.push_current_values()
def _temperature_fired(self):
status = self.MIDI_NUM_TEMPERATURE
data1 = int((self.temperature - self.min_temp) /
(self.max_temp - self.min_temp) * 127)
data2 = data1
if self.midi_output is not None:
self.midi_output.write_short(status, data1, data2)
def _volume_fired(self):
status = self.MIDI_NUM_VOLUME
data1 = limit_range(int((system.box_l[0]**3. - self.min_vol) / (
self.max_vol - self.min_vol) * 127), minval=0, maxval=127)
data2 = data1
if self.midi_output is not None:
self.midi_output.write_short(status, data1, data2)
def _pressure_fired(self):
status = self.MIDI_NUM_PRESSURE
if pressure_log_flag:
data1 = limit_range(int(127 *
(np.log(self.pressure) -
np.log(self.min_press)) /
(np.log(self.max_press) -
np.log(self.min_press))), minval=0, maxval=127)
else:
data1 = limit_range(int((self.pressure -
self.min_press) /
(self.max_press -
self.min_press) *
127), minval=0, maxval=127)
data2 = data1
if self.midi_output is not None:
self.midi_output.write_short(status, data1, data2)
def _number_of_particles_fired(self):
status = self.MIDI_NUM_NUMBEROFPARTICLES
data1 = int(self.number_of_particles / self.max_n * 127)
data2 = data1
if self.midi_output is not None:
self.midi_output.write_short(status, data1, data2)
def _ensemble_fired(self):
if self.midi_output is not None:
self.midi_output.write_short(144, 0, 127) # T
self.midi_output.write_short(
144, 1, 127 * (self.ensemble != 'NPT')) # V
self.midi_output.write_short(
144, 2, 127 * (self.ensemble == 'NPT')) # P
self.midi_output.write_short(144, 3, 127) # N
#############################################################
# Integration #
#############################################################
# get initial observables
pressure = system.analysis.pressure()
temperature = 0.0
# TODO: this is some terrible polynomial fit, replace it with a better expression
# equation of state
pyplot.subplot(131)
pyplot.semilogy()
pyplot.title("Phase diagram")
pyplot.xlabel("Temperature")
pyplot.ylabel("Pressure")
pyplot.xlim(0.5, 2.0)
pyplot.ylim(5e-5, 2e1)
xx = np.linspace(0.5, 0.7, 200)
pyplot.plot(xx, -6.726 * xx**4 + 16.92 * xx**3 -
15.85 * xx**2 + 6.563 * xx - 1.015, 'k-')
xx = np.linspace(0.7, 1.3, 600)
pyplot.plot(xx, -0.5002 * xx**4 + 2.233 * xx**3 -
3.207 * xx**2 + 1.917 * xx - 0.4151, 'k-')
xx = np.linspace(0.6, 2.2, 1500)
pyplot.plot(xx, 16.72 * xx**4 - 88.28 * xx**3 +
168 * xx**2 - 122.4 * xx + 29.79, 'k-')
cursor = pyplot.scatter(temperature, pressure['total'], 200, 'g')
#cursor2 = pyplot.scatter(-1, -1, 200, 'r')
pyplot.text(0.6, 10, 'solid')
pyplot.text(1, 1, 'liquid')
pyplot.text(1, 10**-3, 'gas')
pyplot.subplot(132)
pyplot.title("Temperature")
plot1, = pyplot.plot([0], [temperature])
pyplot.xlabel("Time")
pyplot.ylabel("Temperature")
pyplot.subplot(133)
pyplot.title("Pressure")
plot2, = pyplot.plot([0], [pressure['total']])
pyplot.xlabel("Time")
pyplot.ylabel("Pressure")
# pyplot.legend()
pyplot.show(block=False)
plt1_x_data = np.zeros(1)
plt1_y_data = np.zeros(1)
plt2_x_data = np.zeros(1)
plt2_y_data = np.zeros(1)
def limit_range(val, minval=0., maxval=1.):
if val > maxval:
ret_val = maxval
elif val < minval:
ret_val = minval
else:
ret_val = val
if isinstance(val, int):
return int(ret_val)
elif isinstance(val, float):
return float(ret_val)
else:
return ret_val
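# e.g. limit_range(200, minval=0, maxval=127) -> 127; the return value keeps
# the type of the input (int stays int, float stays float)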
def pressure_from_midi_val(midi_val, pmin, pmax, log_flag=pressure_log_flag):
if log_flag:
return pmin * (float(pmax) / pmin)**(float(midi_val) / 127)
else:
return midi_val * (pmax - pmin) / 127 + pmin
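# with log scaling each MIDI step multiplies the pressure by a constant factor:
# midi_val=0 -> pmin, midi_val=127 -> pmax, midi_val=63.5 -> sqrt(pmin*pmax)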
def main_loop():
global energies, plt1_x_data, plt1_y_data, plt2_x_data, plt2_y_data, old_pressure
system.integrator.run(steps=int_steps)
vis.update()
# increase LJ cap during warmup
if system.force_cap > 0:
if system.analysis.min_dist() < min_dist:
system.force_cap = system.force_cap + 0.1
else:
system.force_cap = 0
print("Switching off force capping")
# make sure the parameters are valid
# not sure if this is necessary after using limit_range
if controls.volume == 0:
controls.volume = controls.min_vol
if controls.number_of_particles == 0:
controls.number_of_particles = 1
if controls.pressure == 0:
controls.pressure = controls.min_press
pressure = system.analysis.pressure()
# update the parameters set in the GUI
if system.thermostat.get_state()[0]['kT'] != controls.temperature:
system.thermostat.set_langevin(kT=controls.temperature, gamma=1.0)
print("temperature changed")
system.force_cap = lj_cap
if controls.ensemble == 'NPT':
# reset Vkappa when target pressure has changed
if old_pressure != controls.pressure:
system.analysis.v_kappa('reset')
print("pressure changed")
old_pressure = controls.pressure
system.force_cap = lj_cap
newVkappa = system.analysis.v_kappa('read')['Vk1']
newVkappa = newVkappa if newVkappa > 0. else 4.0 / \
(NPTGamma0 * NPTGamma0 * NPTInitPistonMass)
pistonMass = limit_range(4.0 / (NPTGamma0 * NPTGamma0 * newVkappa),
NPTMinPistonMass, NPTMaxPistonMass)
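        # piston mass heuristic: 4 / (gamma0^2 * Vkappa), clamped to a safe range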
system.integrator.set_isotropic_npt(
controls.pressure, pistonMass, cubic_box=True)
controls.volume = system.box_l[0]**3.
else:
system.integrator.set_nvt()
controls.pressure = pressure['total']
new_box = np.ones(3) * controls.volume**(1. / 3.)
if np.any(np.array(system.box_l) != new_box):
for i in range(len(system.part)):
system.part[i].pos = system.part[i].pos * \
new_box / system.box_l[0]
print("volume changed")
system.force_cap = lj_cap
system.box_l = new_box
new_part = controls.number_of_particles
if new_part > len(system.part):
for i in range(len(system.part), new_part):
system.part.add(id=i, pos=np.random.random(3) * system.box_l)
print("particles added")
system.force_cap = lj_cap
elif new_part < len(system.part):
for i in range(new_part, len(system.part)):
system.part[i].remove()
print("particles removed")
plt1_x_data = plot1.get_xdata()
plt1_y_data = plot1.get_ydata()
plt2_x_data = plot2.get_xdata()
plt2_y_data = plot2.get_ydata()
plt1_x_data = np.append(
plt1_x_data[-plot_max_data_len + 1:], system.time)
plt1_y_data = np.append(plt1_y_data[-plot_max_data_len + 1:],
2. / (3. * len(system.part))
* system.analysis.energy()["kinetic"])
plt2_x_data = np.append(
plt2_x_data[-plot_max_data_len + 1:], system.time)
plt2_y_data = np.append(
plt2_y_data[-plot_max_data_len + 1:], pressure['total'])
def main_thread():
for _ in range(int_n_times):
main_loop()
def midi_thread():
global mayavi_rotation_angle, mayavi_zoom
while True:
try:
if controls.midi_input is not None and controls.midi_input.poll():
events = controls.midi_input.read(1000)
for event in events:
status, data1, data2, _ = event[0]
if status == controls.MIDI_NUM_TEMPERATURE:
temperature = data2 * \
(controls.max_temp - controls.min_temp) / \
127 + controls.min_temp
controls.temperature = limit_range(
temperature, controls.min_temp, controls.max_temp)
elif status == controls.MIDI_NUM_VOLUME:
volume = data2 * \
(controls.max_vol - controls.min_vol) / \
127 + controls.min_vol
controls.volume = limit_range(
volume, controls.min_vol, controls.max_vol)
controls.ensemble = 'NVT'
elif status == controls.MIDI_NUM_PRESSURE:
pressure = pressure_from_midi_val(
data2, controls.min_press, controls.max_press)
controls.pressure = limit_range(
pressure, controls.min_press, controls.max_press)
controls.ensemble = 'NPT'
elif status == controls.MIDI_NUM_NUMBEROFPARTICLES:
npart = int(data2 * controls.max_n / 127)
controls.number_of_particles = limit_range(
npart, controls.min_n, controls.max_n)
elif status == controls.MIDI_ROTATE:
if data2 < 65:
# rotate clockwise
mayavi_rotation_angle += mayavi_rotation_angle_step * \
data2
elif data2 >= 65:
# rotate counterclockwise
mayavi_rotation_angle -= mayavi_rotation_angle_step * \
(data2 - 64)
elif status == controls.MIDI_ZOOM:
if data1 == 99 and data2 == 127:
# zoom in
mayavi_zoom -= mayavi_zoom_step
elif data1 == 98 and data2 == 127:
# zoom out
mayavi_zoom += mayavi_zoom_step
# else:
# print("Unknown Status {0} with data1={1} and
# data2={2}".format(status, data1, data2))
except Exception as e:
print(e)
time.sleep(0.01)
last_plotted = 0
def rotate_scene():
global mayavi_rotation_angle
if use_mayavi and mayavi_rotation_angle:
# mlab.yaw(mayavi_rotation_angle)
if mayavi_autozoom:
mlab.view(azimuth=mayavi_rotation_angle, distance='auto')
else:
current_view_vals = mlab.view()
mlab.view(azimuth=mayavi_rotation_angle,
elevation=current_view_vals[1],
distance=current_view_vals[2],
focalpoint=current_view_vals[3])
mayavi_rotation_angle %= 360.
def zoom_scene():
global mayavi_zoom, mayavi_zoom_old
if use_mayavi:
mlab.view(distance=mayavi_zoom)
elif use_opengl:
if mayavi_zoom_old < mayavi_zoom:
vis.camera.move_backward()
mayavi_zoom_old = mayavi_zoom
        elif mayavi_zoom_old > mayavi_zoom:
            vis.camera.move_forward()
            mayavi_zoom_old = mayavi_zoom
def update_plot():
global last_plotted
# rotate_scene()
zoom_scene()
data_len = np.array([len(plt1_x_data), len(plt1_y_data),
len(plt2_x_data), len(plt2_y_data)]).min()
plot1.set_xdata(plt1_x_data[:data_len])
plot1.set_ydata(plt1_y_data[:data_len])
plot2.set_xdata(plt2_x_data[:data_len])
plot2.set_ydata(plt2_y_data[:data_len])
cursor.set_offsets([plt1_y_data[data_len - 1], plt2_y_data[data_len - 1]])
# cursor2.set_offsets([controls.temperature, controls.pressure])
current_time = plot1.get_xdata()[-1]
if last_plotted == current_time:
return
last_plotted = current_time
plot1.axes.set_xlim(plot1.get_xdata()[0], plot1.get_xdata()[-1])
plot1.axes.set_ylim(0.8 * plot1.get_ydata().min(),
1.2 * plot1.get_ydata().max())
plot2.axes.set_xlim(plot2.get_xdata()[0], plot2.get_xdata()[-1])
plot2.axes.set_ylim(0.8 * plot2.get_ydata().min(),
1.2 * plot2.get_ydata().max())
pyplot.draw()
t = Thread(target=main_thread)
t.daemon = True
vis.register_callback(update_plot, interval=1000)
controls = Controls()
t.start()
if controls.midi_input is not None:
t2 = Thread(target=midi_thread)
t2.daemon = True
t2.start()
if use_opengl:
gui = GUI()
vis.register_callback(gui.process_events, interval=1000)
vis.start()
|
psci2195/espresso-ffans
|
samples/lj-demo.py
|
Python
|
gpl-3.0
| 20,993
|
[
"ESPResSo",
"Mayavi"
] |
d06baa5ae2d1710779c7301da7a931f194cb8871fb4d31cdcced955e5447853a
|
from __future__ import absolute_import
import numpy as np
import matplotlib.pyplot as plt
def implot(plt, x, y, Z, ax=None, colorbar=True, **kwargs):
"""
Image plot of general data (like imshow but with non-pixel axes).
Parameters
----------
plt : plot object
Plot object, typically `matplotlib.pyplot`.
x : (M,) array_like
Vector of x-axis points, must be linear (equally spaced).
y : (N,) array_like
Vector of y-axis points, must be linear (equally spaced).
    Z : (N, M) array_like
        Matrix of data to be displayed, the value at each (x, y) point
        (rows correspond to `y`, columns to `x`, as in `imshow`).
ax : axis object (optional)
A specific axis to plot on (defaults to `plt.gca()`).
colorbar: boolean (optional)
Whether to plot a colorbar.
**kwargs
Additional arguments for `ax.imshow`.
"""
ax = plt.gca() if ax is None else ax
def is_linear(x):
diff = np.diff(x)
return np.allclose(diff, diff[0])
assert is_linear(x) and is_linear(y)
image = ax.imshow(Z, aspect='auto', extent=(x[0], x[-1], y[-1], y[0]),
**kwargs)
if colorbar:
plt.colorbar(image, ax=ax)
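# Example usage (hypothetical data; both axes must be equally spaced):
#   x = np.linspace(0, 1, 100)
#   y = np.linspace(0, 2, 50)
#   Z = np.random.rand(50, 100)  # shape (len(y), len(x))
#   implot(plt, x, y, Z)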
def rasterplot(time, spikes, ax=None, **kwargs):
'''Generate a raster plot of the provided spike data
Parameters
----------
time : array
Time data from the simulation
spikes: array
The spike data with columns for each neuron and 1s indicating spikes
ax: matplotlib.axes.Axes
The figure axes to plot into.
Returns
-------
ax: matplotlib.axes.Axes
The axes that were plotted into
Examples
--------
>>> import nengo
>>> model = nengo.Model("Raster")
>>> A = nengo.Ensemble(nengo.LIF(20), dimensions=1)
>>> A_spikes = nengo.Probe(A, "spikes")
>>> sim = nengo.Simulator(model)
>>> sim.run(1)
>>> rasterplot(sim.trange(), sim.data[A_spikes])
'''
if ax is None:
ax = plt.gca()
colors = kwargs.pop('colors', None)
if colors is None:
color_cycle = plt.rcParams['axes.color_cycle']
colors = [color_cycle[ix % len(color_cycle)]
for ix in range(spikes.shape[1])]
if hasattr(ax, 'eventplot'):
spikes = [time[spikes[:, i] > 0].flatten()
for i in range(spikes.shape[1])]
for ix in range(len(spikes)):
if spikes[ix].shape == (0,):
spikes[ix] = np.array([-1])
ax.eventplot(spikes, colors=colors, **kwargs)
ax.set_ylim(len(spikes) - 0.5, -0.5)
if len(spikes) == 1:
ax.set_ylim(0.4, 1.6) # eventplot plots different for len==1
ax.set_xlim(left=0)
else:
# Older Matplotlib, doesn't have eventplot
for i in range(spikes.shape[1]):
ax.plot(time[spikes[:, i] > 0],
np.ones_like(np.where(spikes[:, i] > 0)).T + i, ',',
color=colors[i], **kwargs)
return ax
|
ZeitgeberH/nengo
|
nengo/utils/matplotlib.py
|
Python
|
gpl-3.0
| 2,968
|
[
"NEURON"
] |
5ec0ee23fde1c05f9d904b6b687a920a0e8c9d07c896efca9c0c81aae90a1f07
|
# DicomAligner.py by Francois Malan - 2011-06-23
# Revised as version 2.0 on 2011-07-07
from module_base import ModuleBase
from module_mixins import NoConfigModuleMixin
from module_kits.misc_kit import misc_utils
import wx
import os
import vtk
import itk
import math
import numpy
class DICOMAligner(
NoConfigModuleMixin, ModuleBase):
def __init__(self, module_manager):
# initialise our base class
ModuleBase.__init__(self, module_manager)
NoConfigModuleMixin.__init__(
self, {'Module (self)' : self})
self.sync_module_logic_with_config()
self._ir = vtk.vtkImageReslice()
self._ici = vtk.vtkImageChangeInformation()
def close(self):
# we play it safe... (the graph_editor/module_manager should have
# disconnected us by now)
for input_idx in range(len(self.get_input_descriptions())):
self.set_input(input_idx, None)
# this will take care of GUI
NoConfigModuleMixin.close(self)
def set_input(self, idx, input_stream):
if idx == 0:
self._imagedata = input_stream
else:
self._metadata = input_stream
self._input = input_stream
def get_input_descriptions(self):
return ('vtkImageData (from DICOMReader port 0)', 'Medical metadata (from DICOMReader port 1)')
def get_output_descriptions(self):
return ('vtkImageData', )
def get_output(self, idx):
return self._output
def _convert_input(self):
'''
Performs the required transformation to match the image to the world coordinate system defined by medmeta
'''
        # the first two columns of the direction cosines matrix represent
        # the x,y axes of the DICOM slices in the patient's LPH space.
        # if we want to resample the images so that x,y are always LP,
        # the inverse should do the trick (transpose would also work as long
        # as both sets of axes are right-handed, but let's stick to inverse for safety)
dcmatrix = vtk.vtkMatrix4x4()
dcmatrix.DeepCopy(self._metadata.direction_cosines)
dcmatrix.Invert()
origin = self._imagedata.GetOrigin()
spacing = self._imagedata.GetSpacing()
extent = self._imagedata.GetExtent()
# convert our new cosines to something we can give the ImageReslice
dcm = [[0,0,0] for _ in range(3)]
for col in range(3):
for row in range(3):
dcm[col][row] = dcmatrix.GetElement(row, col)
# do it.
self._ir.SetResliceAxesDirectionCosines(dcm[0], dcm[1], dcm[2])
self._ir.SetInput(self._imagedata)
self._ir.SetAutoCropOutput(1)
self._ir.SetInterpolationModeToCubic()
isotropic_sp = min(min(spacing[0],spacing[1]),spacing[2])
self._ir.SetOutputSpacing(isotropic_sp, isotropic_sp, isotropic_sp)
self._ir.Update()
output = self._ir.GetOutput()
#We now have to check whether the origin needs to be moved from its prior position
#Yes folks - the reslice operation screws up the origin and we must fix it.
#(Since the IPP is INDEPENDENT of the IOP, a reslice operation to fix the axes' orientation
# should not rotate the origin)
#
#The origin's coordinates (as provided by the DICOMreader) are expressed in PATIENT-LPH
        #We are transforming the voxels (i.e. image coordinate axes)
# FROM IMAGE TO LPH coordinates. We must not transform the origin in this
# sense- only the image axes (and therefore voxels). However, vtkImageReslice
# (for some strange reason) transforms the origin according to the
# transformation matrix (?). So we need to reset this.
#Once the image is aligned to the LPH coordinate axes, a voxel(centre)'s LPH coordinates
# = origin + image_coordinates * spacing.
#But, there is a caveat.
# Since both image coordinates and spacing are positive, the origin must be at
# the "most negative" corner (in LPH terms). Even worse, if the LPH axes are not
# perpendicular relative to the original image axes, this "most negative" corner will
# lie outside of the original image volume (in a zero-padded region) - see AutoCropOutput.
# But the original origin is defined at the "most negative" corner in IMAGE
# coordinates(!). This means that the origin should, in most cases, be
# translated from its original position, depending on the relative LPH and
# image axes' orientations.
#
#The (x,y,z) components of the new origin are, independently, the most negative x,
#most negative y and most negative z LPH coordinates of the eight ORIGINAL IMAGE corners.
#To determine this we compute the eight corner coordinates and do a minimization.
#
#Remember that (in matlab syntax)
# p_world = dcm_matrix * diag(spacing)*p_image + origin
#for example: for a 90 degree rotation around the x axis this is
# [p_x] [ 1 0 0][nx*dx] [ox]
# [p_y] = [ 0 0 1][ny*dy] + [oy]
# [p_z] [ 0 -1 0][nz*dz] [oz]
#, where p is the LPH coordinates, d is the spacing, n is the image
# coordinates and o is the origin (IPP of the slice with the most negative IMAGE z coordinate).
originn = numpy.array(origin)
dcmn = numpy.array(dcm)
corners = numpy.zeros((3,8))
#first column of the DCM is a unit LPH-space vector in the direction of the first IMAGE axis, etc.
#From this it follows that the displacements along the full IMAGE's x, y and z extents are:
sx = spacing[0]*extent[1]*dcmn[:,0]
sy = spacing[1]*extent[3]*dcmn[:,1]
sz = spacing[2]*extent[5]*dcmn[:,2]
corners[:,0] = originn
corners[:,1] = originn + sx
corners[:,2] = originn + sy
corners[:,3] = originn + sx + sy
corners[:,4] = originn + sz
corners[:,5] = originn + sx + sz
corners[:,6] = originn + sy + sz
corners[:,7] = originn + sx + sy + sz
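        #The new origin is the componentwise minimum over these eight corners: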
        newOriginX = min(corners[0,:])
        newOriginY = min(corners[1,:])
        newOriginZ = min(corners[2,:])
#Since we set the direction cosine matrix to unity we have to reset the
#axis labels array as well.
self._ici.SetInput(output)
self._ici.Update()
fd = self._ici.GetOutput().GetFieldData()
fd.RemoveArray('axis_labels_array')
lut = {'L' : 0, 'R' : 1, 'P' : 2, 'A' : 3, 'F' : 4, 'H' : 5}
axis_labels_array = vtk.vtkIntArray()
axis_labels_array.SetName('axis_labels_array')
axis_labels_array.InsertNextValue(lut['R'])
axis_labels_array.InsertNextValue(lut['L'])
axis_labels_array.InsertNextValue(lut['A'])
axis_labels_array.InsertNextValue(lut['P'])
axis_labels_array.InsertNextValue(lut['F'])
axis_labels_array.InsertNextValue(lut['H'])
fd.AddArray(axis_labels_array)
self._ici.Update()
output = self._ici.GetOutput()
output.SetOrigin(newOriginX, newOriginY, newOriginZ)
self._output = output
def execute_module(self):
self._convert_input()
|
chrisidefix/devide
|
modules/filters/DICOMAligner.py
|
Python
|
bsd-3-clause
| 7,460
|
[
"VTK"
] |
2d2577945ed2d4a7e4ecb984b3954ab5085fff4a915fd999f0d222c2197c2a7c
|
#!/usr/bin/env python
"""renum_pdb_to_aln.py - renumber a pdb file based on the alignment.
author: A. Zyla under supervision of mmagnus
.. warning:: works only for single chain! and requires Biopython (tested with v1.68)
"""
import logging
import argparse
from Bio.SeqRecord import SeqRecord
from Bio import SeqIO
from Bio.PDB import PDBParser
from Bio.PDB import PDBIO
from Bio.PDB.PDBExceptions import PDBConstructionWarning
import warnings
warnings.simplefilter('ignore', PDBConstructionWarning)
# logger
logger = logging.getLogger()
handler = logging.StreamHandler()
logger.addHandler(handler)
def get_seq(alignfn, seqid):
"""Get seq from an alignment with gaps.
Args:
alignfn (str): a path to an alignment
seqid (str): seq id in an alignment
Usage::
>>> get_seq('test_data/ALN_OBJ1_OBJ2.fa', 'obj1')
SeqRecord(seq=SeqRecord(seq=Seq('GUUCAG-------------------UGAC-', SingleLetterAlphabet()), id='obj1', name='obj1', description='obj1', dbxrefs=[]), id='<unknown id>', name='<unknown name>', description='<unknown description>', dbxrefs=[])
Returns:
SeqRecord
"""
# alignment = AlignIO.read(alignfn, 'fasta')
alignment = SeqIO.index(alignfn, 'fasta')
# print SeqRecord(alignment[seqid])
sequence = SeqRecord(alignment[seqid])
return sequence
def open_pdb(pdbfn):
"""Open pdb with Biopython.
Args:
pdbfn (str): a path to a pdb structure
Returns:
PDB Biopython object: with a pdb structure
"""
parser = PDBParser()
return parser.get_structure('struc', pdbfn)
def renumber(seq_with_gaps, struc, residue_index_start):
"""Renumber a pdb file.
Args:
seq_with_gaps (str): a target sequence extracted from the alignment
struc (pdb): a structure
residue_index_start (int): starting number
Returns:
BioPython Structure object
"""
new_numbering = []
for nt in seq_with_gaps:
if nt != '-':
nt_num_a = [residue_index_start, nt]
new_numbering.append(residue_index_start)
logger.info(nt_num_a)
residue_index_start = residue_index_start + 1
logger.info(new_numbering)
    # works only for a single chain; iterate the structure that was passed in
    for model in struc:
        for chain in model:
            for residue, resi in zip(chain, new_numbering):
                residue.id = (residue.id[0], resi, residue.id[2])
    return struc
def write_struc(struc, outfn):
"""Write renumbered pdb with Biopython.
Args:
struc (pdb): a renumbered structure
outfn (str): a path to a new, renumbered pdb file
Returns:
none: writes to a file
"""
io = PDBIO()
io.set_structure(struc)
io.save(outfn)
logger.info('Structure written to %s' % outfn)
def get_parser():
parser = argparse.ArgumentParser(description=__doc__,
formatter_class=argparse.RawDescriptionHelpFormatter)
parser.add_argument("-v", "--verbose", help="increase output verbosity",
action="store_true")
parser.add_argument("--residue_index_start",
help="renumber starting number (default: 1)",
default=1, type=int)
parser.add_argument("--outfn", help="output pdb file (default: pdbfn .pdb -> _out.pdb)")
parser.add_argument("seqid", help="seq id in the alignemnt")
parser.add_argument("alignfn", help="alignemnt in the Fasta format")
parser.add_argument("pdbfn", help="pdb file")
return parser
# main
if __name__ == '__main__':
args = get_parser().parse_args()
if args.verbose:
logger.setLevel(logging.INFO)
if not args.outfn:
args.outfn = args.pdbfn.replace('.pdb', '_out.pdb')
seq_with_gaps = get_seq(args.alignfn, args.seqid)
pdb = open_pdb(args.pdbfn)
struc = renumber(seq_with_gaps, pdb, args.residue_index_start)
write_struc(struc, args.outfn)
|
mmagnus/rna-pdb-tools
|
rna_tools/tools/renum_pdb_to_aln/renum_pdb_to_aln.py
|
Python
|
gpl-3.0
| 3,954
|
[
"Biopython"
] |
508bc4f1780db6315fc03c43889f0f978d6688a4360e1ce0d74d84a633e0cfc6
|
#!/usr/bin/env python
import itertools
import os
import logging
import shutil
import subprocess
import sys
import tempfile
import pandas
import requests
from Bio import SeqIO
from cref.app import BaseApp
logger = logging.getLogger('CReF')
class TerminalApp(BaseApp):
"""
App to be run on the terminal
"""
def reporter(self, state):
pass
def run_cref(aa_sequence, output_dir, params):
pandas.set_option('display.max_columns', 0)
pandas.set_option('display.max_rows', 5)
if not os.path.isdir(output_dir):
os.makedirs(output_dir)
app = TerminalApp(params)
return app.run(aa_sequence, output_dir)
def configure_logger(log_level='INFO', include_pathname=False):
logger = logging.getLogger('CReF')
level = getattr(logging, log_level.upper(), None)
if not isinstance(level, int):
raise ValueError('Invalid log level: %s' % log_level)
logger.propagate = False
logger = logging.getLogger('CReF')
logger.setLevel(level)
ch = logging.StreamHandler()
ch.setLevel(level)
if include_pathname:
template = ('%(asctime)s - %(name)s - %(levelname)s'
'(%(pathname)s, %(lineno)d)- %(message)s')
else:
template = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
formatter = logging.Formatter(template, datefmt='%d/%m/%Y %I:%M:%S %p')
ch.setFormatter(formatter)
logger.addHandler(ch)
def read_fasta(filepath):
records = []
with open(filepath, 'rU') as fasta_file:
records = list(SeqIO.parse(fasta_file, 'fasta'))
return records
def predict_fasta(filepath, output_dir, params):
sequences = read_fasta(filepath)
output_filepaths = []
for sequence in sequences:
seq = str(sequence.seq).replace('X', '')
output = run_cref(seq, output_dir, params)
sequence_file = os.path.join(output_dir, 'sequence.txt')
with open(sequence_file, 'w') as sequence_output:
sequence_output.write(seq)
output_filepaths.append(output)
return output_filepaths
def _download_file(url, filepath):
r = requests.get(url, stream=True)
with open(filepath, 'wb') as f:
for chunk in r.iter_content(chunk_size=1024):
if chunk: # filter out keep-alive new chunks
f.write(chunk)
f.flush()
return filepath
def download_fasta(pdb_code, filepath):
""""""
url = ('http://www.rcsb.org/pdb'
'/files/fasta.txt?structureIdList=' + pdb_code.upper())
return _download_file(url, filepath)
def download_pdb(pdb_code, filepath):
url = ('http://www.rcsb.org/pdb/download/downloadFile.do?'
'fileFormat=pdb&compression=NO&structureId=' + pdb_code.upper())
return _download_file(url, filepath)
def read_config():
pdb_id = sys.argv[1].upper()
excluded_pdbs = [x.strip() for x in sys.argv[2].split()]
excluded_pdbs.append(pdb_id)
fragment_size = range(5, 16, 2)
number_of_clusters = range(4, 13)
matrix = ["PAM30"]
max_templates = [100]
number_of_alignments = [1000]
params_list = []
print(len(list(itertools.product(
fragment_size,
number_of_clusters,
matrix,
max_templates,
number_of_alignments))))
for f, c, m, t, a in itertools.product(
fragment_size,
number_of_clusters,
matrix,
max_templates,
number_of_alignments):
print(f, c, t, a, m)
params = {
"id": (f, c, t, a, m),
"exclude": {"pdbs": excluded_pdbs},
"fragment_size": f,
"number_of_clusters": c,
"max_templates": t,
"blast": {
"number_of_alignments": a,
"scoring": {
"matrix": m,
"gap_costs": "ungapped",
}
}
}
params['pdb'] = pdb_id
params['output_dir'] = os.path.join(
'predictions/benchmark',
params['pdb'],
'_'.join([str(x) for x in (f, c, t, a, m)]),
)
params_list.append(params)
return params_list
def run_pymol(pdb_code, predicted_filepath):
filepath = os.path.join(
os.path.dirname(predicted_filepath),
'experimental_structure.pdb'
)
experimental_pdb = download_pdb(pdb_code, filepath)
output = subprocess.check_output([
'pymol',
predicted_filepath,
experimental_pdb,
'-r',
'cref/utils/pymolbench.py'
])
output = output.decode('utf-8').split('\n')
rmsd = output[-4]
imagepath = output[-3]
return rmsd, imagepath
def rmsds_to_csv(rmsds, filename):
results = []
for key, value in rmsds.items():
results.append(key + (value,))
df = pandas.DataFrame(
results,
columns=['fragment_size', 'group_count', 'max_templates',
'max_blast', 'matrix', 'rmsd']
)
df.to_csv(filename + '.rmsd.csv')
def main():
configure_logger('INFO')
test_cases = read_config()
results = {}
for params in test_cases:
print('Predicting', params['pdb'], params['id'])
print(params)
handler, fasta_file = tempfile.mkstemp(
suffix='.fasta', prefix='tmp')
download_fasta(params['pdb'], fasta_file)
output_files = predict_fasta(
fasta_file, params['output_dir'], params)
rmsd, imagepath = run_pymol(params['pdb'], output_files[0])
output_file = os.path.join(params['output_dir'], 'rmsd.txt')
with open(output_file, 'w') as rmsd_file:
rmsd_file.write(rmsd)
results[params['id']] = rmsd
shutil.copyfile(
imagepath,
os.path.join(params['output_dir'], 'alignment-pymol.png'),
)
print('Prediction written to', output_files)
print('RMSD from reference structure:', rmsd)
os.remove(fasta_file)
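        # write partial RMSD results after each prediction (final write below)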
rmsds_to_csv(results, params['pdb'])
print(results)
rmsds_to_csv(results, params['pdb'])
if __name__ == '__main__':
main()
|
mchelem/cref2
|
cref/evaluation/benchmark.py
|
Python
|
mit
| 6,209
|
[
"BLAST",
"PyMOL"
] |
bd02752518225d1dfec1b55126fc8edba1159833d05c882c1e8ee61c6f7224b6
|
# Copyright 2014-2020 The PySCF Developers. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Author: Oliver J. Backhouse <olbackhouse@gmail.com>
# George H. Booth <george.booth@kcl.ac.uk>
#
'''
Auxiliary second-order Green's function perturbation theory
'''
import numpy as np
import copy
from pyscf import lib
from pyscf.lib import logger
from pyscf import __config__
from pyscf import ao2mo
from pyscf.scf import _vhf
from pyscf.agf2 import mpi_helper, _agf2
from pyscf.agf2 import aux_space as aux
from pyscf.agf2 import chkfile as chkutil
from pyscf.agf2.chempot import binsearch_chempot, minimize_chempot
from pyscf.mp.mp2 import get_frozen_mask as _get_frozen_mask
BLKMIN = getattr(__config__, 'agf2_blkmin', 1)
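# minimum number of MO rows per block in the blocked (out-of-core) loops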
def kernel(agf2, eri=None, gf=None, se=None, verbose=None, dump_chk=True):
log = logger.new_logger(agf2, verbose)
cput1 = cput0 = (logger.process_clock(), logger.perf_counter())
name = agf2.__class__.__name__
if eri is None: eri = agf2.ao2mo()
if gf is None: gf = agf2.gf
if se is None: se = agf2.se
if verbose is None: verbose = agf2.verbose
if gf is None:
gf = agf2.init_gf()
gf_froz = agf2.init_gf(frozen=True)
else:
gf_froz = gf
if se is None:
se = agf2.build_se(eri, gf_froz)
if dump_chk:
agf2.dump_chk(gf=gf, se=se)
if isinstance(agf2.diis, lib.diis.DIIS):
diis = agf2.diis
elif agf2.diis:
diis = lib.diis.DIIS(agf2)
diis.space = agf2.diis_space
diis.min_space = agf2.diis_min_space
else:
diis = None
e_init = agf2.energy_mp2(agf2.mo_energy, se)
log.info('E(init) = %.16g E_corr(init) = %.16g', e_init+eri.e_hf, e_init)
e_1b = eri.e_hf
e_2b = e_init
e_prev = 0.0
se_prev = None
converged = False
for niter in range(1, agf2.max_cycle+1):
if agf2.damping != 0.0:
se_prev = copy.deepcopy(se)
# one-body terms
gf, se, fock_conv = agf2.fock_loop(eri, gf, se)
e_1b = agf2.energy_1body(eri, gf)
# two-body terms
se = agf2.build_se(eri, gf, se_prev=se_prev)
se = agf2.run_diis(se, diis)
e_2b = agf2.energy_2body(gf, se)
if dump_chk:
agf2.dump_chk(gf=gf, se=se)
e_tot = e_1b + e_2b
ip = agf2.get_ip(gf, nroots=1)[0][0]
ea = agf2.get_ea(gf, nroots=1)[0][0]
log.info('cycle = %3d E(%s) = %.15g E_corr(%s) = %.15g dE = %.9g',
niter, name, e_tot, name, e_tot-eri.e_hf, e_tot-e_prev)
log.info('E_1b = %.15g E_2b = %.15g', e_1b, e_2b)
log.info('IP = %.15g EA = %.15g', ip, ea)
cput1 = log.timer('%s iter'%name, *cput1)
if abs(e_tot - e_prev) < agf2.conv_tol:
converged = True
break
e_prev = e_tot
if dump_chk:
agf2.dump_chk(gf=gf, se=se)
log.timer('%s'%name, *cput0)
return converged, e_1b, e_2b, gf, se
def build_se_part(agf2, eri, gf_occ, gf_vir, os_factor=1.0, ss_factor=1.0):
''' Builds either the auxiliaries of the occupied self-energy,
or virtual if :attr:`gf_occ` and :attr:`gf_vir` are swapped.
Args:
eri : _ChemistsERIs
Electronic repulsion integrals
gf_occ : GreensFunction
Occupied Green's function
gf_vir : GreensFunction
Virtual Green's function
Kwargs:
os_factor : float
Opposite-spin factor for spin-component-scaled (SCS)
calculations. Default 1.0
ss_factor : float
Same-spin factor for spin-component-scaled (SCS)
calculations. Default 1.0
Returns:
:class:`SelfEnergy`
'''
cput0 = (logger.process_clock(), logger.perf_counter())
log = logger.Logger(agf2.stdout, agf2.verbose)
assert type(gf_occ) is aux.GreensFunction
assert type(gf_vir) is aux.GreensFunction
nmo = eri.nmo
tol = agf2.weight_tol
facs = dict(os_factor=os_factor, ss_factor=ss_factor)
ci, ei = gf_occ.coupling, gf_occ.energy
ca, ea = gf_vir.coupling, gf_vir.energy
mem_incore = (gf_occ.nphys*gf_occ.naux**2*gf_vir.naux) * 8/1e6
mem_now = lib.current_memory()[0]
if (mem_incore+mem_now < agf2.max_memory) or agf2.incore_complete:
qeri = _make_qmo_eris_incore(agf2, eri, (ci, ci, ca))
else:
qeri = _make_qmo_eris_outcore(agf2, eri, (ci, ci, ca))
if isinstance(qeri, np.ndarray):
vv, vev = _agf2.build_mats_ragf2_incore(qeri, ei, ea, **facs)
else:
vv, vev = _agf2.build_mats_ragf2_outcore(qeri, ei, ea, **facs)
e, c = _agf2.cholesky_build(vv, vev)
se = aux.SelfEnergy(e, c, chempot=gf_occ.chempot)
se.remove_uncoupled(tol=tol)
if not (agf2.frozen is None or agf2.frozen == 0):
mask = get_frozen_mask(agf2)
coupling = np.zeros((nmo, se.naux))
coupling[mask] = se.coupling
se = aux.SelfEnergy(se.energy, coupling, chempot=se.chempot)
log.timer('se part', *cput0)
return se
def get_jk(agf2, eri, rdm1, with_j=True, with_k=True):
''' Get the J/K matrices.
Args:
eri : ndarray or H5 dataset
Electronic repulsion integrals (NOT as _ChemistsERIs)
rdm1 : 2D array
Reduced density matrix
Kwargs:
with_j : bool
Whether to compute J. Default value is True
with_k : bool
Whether to compute K. Default value is True
Returns:
tuple of ndarrays corresponding to J and K, if either are
not requested then they are set to None.
'''
if isinstance(eri, np.ndarray):
vj, vk = _vhf.incore(eri, rdm1, with_j=with_j, with_k=with_k)
else:
nmo = rdm1.shape[0]
npair = nmo*(nmo+1)//2
vj = vk = None
if with_j:
rdm1_tril = lib.pack_tril(rdm1 + np.tril(rdm1, k=-1))
vj = np.zeros_like(rdm1_tril)
if with_k:
vk = np.zeros_like(rdm1)
blksize = _agf2.get_blksize(agf2.max_memory, (nmo*npair, nmo**3))
        blksize = min(nmo, max(BLKMIN, blksize))
logger.debug1(agf2, 'blksize (ragf2.get_jk) = %d' % blksize)
tril2sq = lib.square_mat_in_trilu_indices(nmo)
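        # ERIs are stored with s8 (triangular) packing; tril2sq[p, q] gives
        # the packed row index for the MO pair (p, q)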
for p0, p1 in lib.prange(0, nmo, blksize):
idx = list(np.concatenate(tril2sq[p0:p1]))
eri0 = eri[idx]
# vj built in tril layout with scaled rdm1_tril
if with_j:
vj[idx] = np.dot(eri0, rdm1_tril)
if with_k:
eri0 = lib.unpack_tril(eri0, axis=-1)
eri0 = eri0.reshape(p1-p0, nmo, nmo, nmo)
vk[p0:p1] = lib.einsum('ijkl,jk->il', eri0, rdm1)
if with_j:
vj = lib.unpack_tril(vj)
return vj, vk
def get_fock(agf2, eri, gf=None, rdm1=None):
''' Computes the physical space Fock matrix in MO basis. If :attr:`rdm1`
is not supplied, it is built from :attr:`gf`, which defaults to
the mean-field Green's function.
Args:
eri : _ChemistsERIs
Electronic repulsion integrals
Kwargs:
gf : Greensfunction
Auxiliaries of the Green's function
rdm1 : 2D array
Reduced density matrix.
Returns:
ndarray of physical space Fock matrix
'''
if rdm1 is None:
rdm1 = agf2.make_rdm1(gf)
vj, vk = agf2.get_jk(eri.eri, rdm1)
fock = eri.h1e + vj - 0.5 * vk
return fock
def fock_loop(agf2, eri, gf, se):
''' Self-consistent loop for the density matrix via the HF self-
consistent field.
Args:
eri : _ChemistERIs
Electronic repulsion integrals
gf : GreensFunction
Auxiliaries of the Green's function
se : SelfEnergy
Auxiliaries of the self-energy
    Returns:
        :class:`GreensFunction`, :class:`SelfEnergy` and a boolean
        indicating whether convergence was successful.
    '''
assert type(gf) is aux.GreensFunction
assert type(se) is aux.SelfEnergy
cput0 = cput1 = (logger.process_clock(), logger.perf_counter())
log = logger.Logger(agf2.stdout, agf2.verbose)
diis = lib.diis.DIIS(agf2)
diis.space = agf2.fock_diis_space
diis.min_space = agf2.fock_diis_min_space
fock = agf2.get_fock(eri, gf)
nelec = eri.nocc * 2
nmo = eri.nmo
naux = se.naux
nqmo = nmo + naux
buf = np.zeros((nqmo, nqmo))
converged = False
opts = dict(tol=agf2.conv_tol_nelec, maxiter=agf2.max_cycle_inner)
rdm1_prev = 0
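    # outer loop re-optimises the chemical potential; inner loop solves the
    # Dyson equation and rebuilds the Fock matrix until the density converges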
for niter1 in range(1, agf2.max_cycle_outer+1):
se, opt = minimize_chempot(se, fock, nelec, x0=se.chempot, **opts)
for niter2 in range(1, agf2.max_cycle_inner+1):
w, v = se.eig(fock, chempot=0.0, out=buf)
se.chempot, nerr = binsearch_chempot((w, v), nmo, nelec)
w, v = se.eig(fock, out=buf)
gf = aux.GreensFunction(w, v[:nmo], chempot=se.chempot)
fock = agf2.get_fock(eri, gf)
rdm1 = agf2.make_rdm1(gf)
fock = diis.update(fock, xerr=None)
if niter2 > 1:
derr = np.max(np.absolute(rdm1 - rdm1_prev))
if derr < agf2.conv_tol_rdm1:
break
rdm1_prev = rdm1.copy()
log.debug1('fock loop %d cycles = %d dN = %.3g |ddm| = %.3g',
niter1, niter2, nerr, derr)
cput1 = log.timer_debug1('fock loop %d'%niter1, *cput1)
if derr < agf2.conv_tol_rdm1 and abs(nerr) < agf2.conv_tol_nelec:
converged = True
break
log.info('fock converged = %s chempot = %.9g dN = %.3g |ddm| = %.3g',
converged, se.chempot, nerr, derr)
log.timer('fock loop', *cput0)
return gf, se, converged
def energy_1body(agf2, eri, gf):
''' Calculates the one-body energy according to the RHF form.
Args:
eri : _ChemistsERIs
Electronic repulsion integrals
gf : GreensFunction
Auxiliaries of Green's function
Returns:
One-body energy
'''
assert type(gf) is aux.GreensFunction
rdm1 = agf2.make_rdm1(gf)
fock = agf2.get_fock(eri, gf)
e1b = 0.5 * np.sum(rdm1 * (eri.h1e + fock))
e1b += agf2.energy_nuc()
return e1b
def energy_2body(agf2, gf, se):
''' Calculates the two-body energy using analytically integrated
Galitskii-Migdal formula. The formula is symmetric and only
one side needs to be calculated.
Args:
gf : GreensFunction
Auxiliaries of the Green's function
se : SelfEnergy
Auxiliaries of the self-energy
    Returns:
        Two-body energy
    '''
assert type(gf) is aux.GreensFunction
assert type(se) is aux.SelfEnergy
gf_occ = gf.get_occupied()
se_vir = se.get_virtual()
e2b = 0.0
for l in mpi_helper.nrange(gf_occ.naux):
vxl = gf_occ.coupling[:,l]
vxk = se_vir.coupling
dlk = gf_occ.energy[l] - se_vir.energy
vv = vxk * vxl[:,None]
e2b += lib.einsum('xk,yk,k->', vv, vv.conj(), 1./dlk)
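        # adds sum_k |sum_x v_xl v_xk|^2 / (e_l - e_k) for this occupied pole l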
e2b *= 2
mpi_helper.barrier()
e2b = mpi_helper.allreduce(e2b)
return np.ravel(e2b.real)[0]
def energy_mp2(agf2, mo_energy, se):
''' Calculates the two-body energy using analytically integrated
Galitskii-Migdal formula for an MP2 self-energy. Per the
definition of one- and two-body partitioning in the Dyson
equation, this result is half of :func:`energy_2body`.
    Args:
        mo_energy : 1D array
            MO energies
        se : SelfEnergy
            Auxiliaries of the self-energy
    Returns:
        MP2 energy
'''
assert type(se) is aux.SelfEnergy
occ = mo_energy < se.chempot
se_vir = se.get_virtual()
vxk = se_vir.coupling[occ]
dxk = lib.direct_sum('x,k->xk', mo_energy[occ], -se_vir.energy)
emp2 = lib.einsum('xk,xk,xk->', vxk, vxk.conj(), 1./dxk)
return np.ravel(emp2.real)[0]
class RAGF2(lib.StreamObject):
''' Restricted AGF2 with canonical HF reference
Attributes:
verbose : int
Print level. Default value equals to :class:`Mole.verbose`
max_memory : float or int
Allowed memory in MB. Default value equals to :class:`Mole.max_memory`
incore_complete : bool
Avoid all I/O. Default is False.
conv_tol : float
Convergence threshold for AGF2 energy. Default value is 1e-7
conv_tol_rdm1 : float
Convergence threshold for first-order reduced density matrix.
Default value is 1e-8.
conv_tol_nelec : float
Convergence threshold for the number of electrons. Default
value is 1e-6.
max_cycle : int
Maximum number of AGF2 iterations. Default value is 50.
max_cycle_outer : int
Maximum number of outer Fock loop iterations. Default
value is 20.
max_cycle_inner : int
Maximum number of inner Fock loop iterations. Default
value is 50.
weight_tol : float
Threshold in spectral weight of auxiliaries to be considered
zero. Default 1e-11.
diis : bool or lib.diis.DIIS
Whether to use DIIS, can also be a lib.diis.DIIS object. Default
value is True.
diis_space : int
DIIS space size. Default value is 8.
diis_min_space : int
Minimum space of DIIS. Default value is 1.
fock_diis_space : int
DIIS space size for Fock loop iterations. Default value is 6.
fock_diis_min_space : int
Minimum space of DIIS. Default value is 1.
os_factor : float
Opposite-spin factor for spin-component-scaled (SCS)
calculations. Default 1.0
ss_factor : float
Same-spin factor for spin-component-scaled (SCS)
calculations. Default 1.0
damping : float
Damping factor for the self-energy. Default value is 0.0
Saved results
e_corr : float
AGF2 correlation energy
e_tot : float
Total energy (HF + correlation)
e_1b : float
One-body part of :attr:`e_tot`
e_2b : float
Two-body part of :attr:`e_tot`
e_init : float
Initial correlation energy (truncated MP2)
converged : bool
Whether convergence was successful
se : SelfEnergy
Auxiliaries of the self-energy
gf : GreensFunction
Auxiliaries of the Green's function
'''
async_io = getattr(__config__, 'agf2_async_io', True)
incore_complete = getattr(__config__, 'agf2_incore_complete', False)
def __init__(self, mf, frozen=None, mo_energy=None, mo_coeff=None, mo_occ=None):
if mo_energy is None: mo_energy = mpi_helper.bcast(mf.mo_energy)
if mo_coeff is None: mo_coeff = mpi_helper.bcast(mf.mo_coeff)
if mo_occ is None: mo_occ = mpi_helper.bcast(mf.mo_occ)
self.mol = mf.mol
self._scf = mf
self.verbose = self.mol.verbose
self.stdout = self.mol.stdout
self.max_memory = mf.max_memory
self.incore_complete = self.incore_complete or self.mol.incore_anyway
self.conv_tol = getattr(__config__, 'agf2_conv_tol', 1e-7)
self.conv_tol_rdm1 = getattr(__config__, 'agf2_conv_tol_rdm1', 1e-8)
self.conv_tol_nelec = getattr(__config__, 'agf2_conv_tol_nelec', 1e-6)
self.max_cycle = getattr(__config__, 'agf2_max_cycle', 50)
self.max_cycle_outer = getattr(__config__, 'agf2_max_cycle_outer', 20)
self.max_cycle_inner = getattr(__config__, 'agf2_max_cycle_inner', 50)
self.weight_tol = getattr(__config__, 'agf2_weight_tol', 1e-11)
self.fock_diis_space = getattr(__config__, 'agf2_diis_space', 6)
self.fock_diis_min_space = getattr(__config__, 'agf2_diis_min_space', 1)
self.diis = getattr(__config__, 'agf2_diis', True)
self.diis_space = getattr(__config__, 'agf2_diis_space', 8)
self.diis_min_space = getattr(__config__, 'agf2_diis_min_space', 1)
self.os_factor = getattr(__config__, 'agf2_os_factor', 1.0)
self.ss_factor = getattr(__config__, 'agf2_ss_factor', 1.0)
self.damping = getattr(__config__, 'agf2_damping', 0.0)
self.mo_energy = mo_energy
self.mo_coeff = mo_coeff
self.mo_occ = mo_occ
self.se = None
self.gf = None
self.e_1b = mf.e_tot
self.e_2b = 0.0
self.e_init = 0.0
self.frozen = frozen
self._nmo = None
self._nocc = None
self.converged = False
self.chkfile = mf.chkfile
self._keys = set(self.__dict__.keys())
energy_1body = energy_1body
energy_2body = energy_2body
fock_loop = fock_loop
build_se_part = build_se_part
get_jk = get_jk
def ao2mo(self, mo_coeff=None):
''' Get the electronic repulsion integrals in MO basis.
'''
# happens when e.g. restarting from chkfile
if self._scf._eri is None and self._scf._is_mem_enough():
self._scf._eri = self.mol.intor('int2e', aosym='s8')
mem_incore = ((self.nmo*(self.nmo+1)//2)**2) * 8/1e6
mem_now = lib.current_memory()[0]
if (self._scf._eri is not None and
(mem_incore+mem_now < self.max_memory or self.incore_complete)):
eri = _make_mo_eris_incore(self, mo_coeff)
else:
logger.warn(self, 'MO eris are outcore - this may be very '
'slow for AGF2. Increasing max_memory or '
'using density fitting is recommended.')
eri = _make_mo_eris_outcore(self, mo_coeff)
return eri
def make_rdm1(self, gf=None):
''' Computes the one-body reduced density matrix in MO basis.
Kwargs:
gf : GreensFunction
Auxiliaries of the Green's function
Returns:
ndarray of density matrix
'''
if gf is None: gf = self.gf
if gf is None: gf = self.init_gf()
return gf.make_rdm1()
def get_fock(self, eri=None, gf=None, rdm1=None):
''' Computes the physical space Fock matrix in MO basis.
'''
if eri is None: eri = self.ao2mo()
if gf is None: gf = self.gf
return get_fock(self, eri, gf=gf, rdm1=rdm1)
def energy_mp2(self, mo_energy=None, se=None):
if mo_energy is None: mo_energy = self.mo_energy
if se is None: se = self.build_se(gf=self.gf)
self.e_init = energy_mp2(self, mo_energy, se)
return self.e_init
def init_gf(self, frozen=False):
''' Builds the Hartree-Fock Green's function.
Returns:
:class:`GreensFunction`, :class:`SelfEnergy`
'''
energy = self.mo_energy
coupling = np.eye(self.nmo)
chempot = binsearch_chempot(np.diag(energy), self.nmo, self.nocc*2)[0]
if frozen:
mask = get_frozen_mask(self)
energy = energy[mask]
coupling = coupling[:,mask]
gf = aux.GreensFunction(energy, coupling, chempot=chempot)
return gf
def build_gf(self, eri=None, gf=None, se=None):
''' Builds the auxiliaries of the Green's function by solving
the Dyson equation.
Kwargs:
eri : _ChemistsERIs
Electronic repulsion integrals
gf : GreensFunction
Auxiliaries of the Green's function
se : SelfEnergy
Auxiliaries of the self-energy
Returns:
:class:`GreensFunction`
'''
if eri is None: eri = self.ao2mo()
if gf is None: gf = self.gf
if gf is None: gf = self.init_gf()
if se is None: se = self.build_se(eri, gf)
fock = self.get_fock(eri, gf)
return se.get_greens_function(fock)
def build_se(self, eri=None, gf=None, os_factor=None, ss_factor=None, se_prev=None):
''' Builds the auxiliaries of the self-energy.
Args:
eri : _ChemistsERIs
Electronic repulsion integrals
gf : GreensFunction
Auxiliaries of the Green's function
Kwargs:
os_factor : float
Opposite-spin factor for spin-component-scaled (SCS)
calculations. Default 1.0
ss_factor : float
Same-spin factor for spin-component-scaled (SCS)
calculations. Default 1.0
se_prev : SelfEnergy
Previous self-energy for damping. Default value is None
Returns:
:class:`SelfEnergy`
'''
if eri is None: eri = self.ao2mo()
if gf is None: gf = self.gf
if gf is None: gf = self.init_gf()
if os_factor is None: os_factor = self.os_factor
if ss_factor is None: ss_factor = self.ss_factor
facs = dict(os_factor=os_factor, ss_factor=ss_factor)
gf_occ = gf.get_occupied()
gf_vir = gf.get_virtual()
if gf_occ.naux == 0 or gf_vir.naux == 0:
logger.warn(self, 'Attempting to build a self-energy with '
'no (i,j,a) or (a,b,i) configurations.')
se = aux.SelfEnergy([], [[],]*self.nmo, chempot=gf.chempot)
else:
se_occ = self.build_se_part(eri, gf_occ, gf_vir, **facs)
se_vir = self.build_se_part(eri, gf_vir, gf_occ, **facs)
se = aux.combine(se_occ, se_vir)
if se_prev is not None and self.damping != 0.0:
se.coupling *= np.sqrt(1.0-self.damping)
se_prev.coupling *= np.sqrt(self.damping)
se = aux.combine(se, se_prev)
se = se.compress(n=(None,0))
return se
def run_diis(self, se, diis=None):
''' Runs the direct inversion of the iterative subspace for the
self-energy.
Args:
se : SelfEnergy
Auxiliaries of the self-energy
diis : lib.diis.DIIS
DIIS object
Returns:
:class:`SelfEnergy`
'''
if diis is None:
return se
se_occ = se.get_occupied()
se_vir = se.get_virtual()
vv_occ = np.dot(se_occ.coupling, se_occ.coupling.T)
vv_vir = np.dot(se_vir.coupling, se_vir.coupling.T)
vev_occ = np.dot(se_occ.coupling * se_occ.energy[None], se_occ.coupling.T)
vev_vir = np.dot(se_vir.coupling * se_vir.energy[None], se_vir.coupling.T)
dat = np.array([vv_occ, vv_vir, vev_occ, vev_vir])
dat = diis.update(dat)
vv_occ, vv_vir, vev_occ, vev_vir = dat
se_occ = aux.SelfEnergy(*_agf2.cholesky_build(vv_occ, vev_occ), chempot=se.chempot)
se_vir = aux.SelfEnergy(*_agf2.cholesky_build(vv_vir, vev_vir), chempot=se.chempot)
se = aux.combine(se_occ, se_vir)
return se
def energy_nuc(self):
return self._scf.energy_nuc()
def dump_flags(self, verbose=None):
log = logger.new_logger(self, verbose)
log.info('')
log.info('******** %s ********', self.__class__)
log.info('conv_tol = %g', self.conv_tol)
log.info('conv_tol_rdm1 = %g', self.conv_tol_rdm1)
log.info('conv_tol_nelec = %g', self.conv_tol_nelec)
log.info('max_cycle = %g', self.max_cycle)
log.info('max_cycle_outer = %g', self.max_cycle_outer)
log.info('max_cycle_inner = %g', self.max_cycle_inner)
log.info('weight_tol = %g', self.weight_tol)
log.info('diis = %d', self.diis)
log.info('diis_space = %d', self.diis_space)
log.info('diis_min_space = %d', self.diis_min_space)
log.info('fock_diis_space = %d', self.fock_diis_space)
log.info('fock_diis_min_space = %d', self.fock_diis_min_space)
log.info('os_factor = %g', self.os_factor)
log.info('ss_factor = %g', self.ss_factor)
log.info('damping = %g', self.damping)
log.info('nmo = %s', self.nmo)
log.info('nocc = %s', self.nocc)
if self.frozen is not None:
log.info('frozen orbitals = %s', self.frozen)
log.info('max_memory %d MB (current use %d MB)',
self.max_memory, lib.current_memory()[0])
return self
def _finalize(self):
''' Hook for dumping results and clearing up the object.
'''
if self.converged:
logger.info(self, '%s converged', self.__class__.__name__)
else:
logger.note(self, '%s not converged', self.__class__.__name__)
ip = self.get_ip(self.gf, nroots=1)[0][0]
ea = self.get_ea(self.gf, nroots=1)[0][0]
logger.note(self, 'E(%s) = %.16g E_corr = %.16g',
self.__class__.__name__, self.e_tot, self.e_corr)
logger.note(self, 'IP = %.16g EA = %.16g', ip, ea)
logger.note(self, 'Quasiparticle gap = %.16g', ip+ea)
return self
def reset(self, mol=None):
if mol is not None:
self.mol = mol
self._scf.reset(mol)
return self
def kernel(self, eri=None, gf=None, se=None, dump_chk=True):
if self.verbose >= logger.WARN:
self.check_sanity()
self.dump_flags()
if eri is None: eri = self.ao2mo()
if gf is None: gf = self.gf
if se is None: se = self.se
if gf is None:
gf = self.init_gf()
gf_froz = self.init_gf(frozen=True)
else:
gf_froz = gf
if se is None:
se = self.build_se(eri, gf_froz)
self.converged, self.e_1b, self.e_2b, self.gf, self.se = \
kernel(self, eri=eri, gf=gf, se=se, verbose=self.verbose, dump_chk=dump_chk)
self._finalize()
return self.converged, self.e_1b, self.e_2b, self.gf, self.se
def dump_chk(self, chkfile=None, key='agf2', gf=None, se=None,
frozen=None, nmom=None,
mo_energy=None, mo_coeff=None, mo_occ=None):
chkutil.dump_agf2(self, chkfile, key,
gf, se, frozen, None,
mo_energy, mo_coeff, mo_occ)
return self
def update_from_chk_(self, chkfile=None, key='agf2'):
if chkfile is None:
chkfile = self.chkfile
mol, agf2_dict = chkutil.load_agf2(chkfile, key)
self.__dict__.update(agf2_dict)
return self
update = update_from_chk = update_from_chk_
def density_fit(self, auxbasis=None, with_df=None):
from pyscf.agf2 import dfragf2
myagf2 = dfragf2.DFRAGF2(self._scf)
myagf2.__dict__.update(self.__dict__)
if with_df is not None:
myagf2.with_df = with_df
if auxbasis is not None and myagf2.with_df.auxbasis != auxbasis:
import copy
myagf2.with_df = copy.copy(myagf2.with_df)
myagf2.with_df.auxbasis = auxbasis
return myagf2
def get_ip(self, gf, nroots=5):
gf_occ = gf.get_occupied()
e_ip = list(-gf_occ.energy[-nroots:])[::-1]
v_ip = list(gf_occ.coupling[:,-nroots:].T)[::-1]
return e_ip, v_ip
def ipagf2(self, nroots=5):
''' Find the (N-1)-electron charged excitations, corresponding
to the largest :attr:`nroots` poles of the occupied
Green's function.
Kwargs:
nroots : int
Number of roots (poles) requested. Default 5.
Returns:
IP and transition moment (float, 1D array) if :attr:`nroots`
= 1, or array of IPs and moments (1D array, 2D array) if
:attr:`nroots` > 1.
'''
e_ip, v_ip = self.get_ip(self.gf, nroots=nroots)
for n, en, vn in zip(range(nroots), e_ip, v_ip):
qpwt = np.linalg.norm(vn)**2
logger.note(self, 'IP energy level %d E = %.16g QP weight = %0.6g', n, en, qpwt)
if nroots == 1:
return e_ip[0], v_ip[0]
else:
return e_ip, v_ip
def get_ea(self, gf, nroots=5):
gf_vir = gf.get_virtual()
e_ea = list(gf_vir.energy[:nroots])
v_ea = list(gf_vir.coupling[:,:nroots].T)
return e_ea, v_ea
def eaagf2(self, nroots=5):
''' Find the (N+1)-electron charged excitations, corresponding
to the smallest :attr:`nroots` poles of the virtual
Green's function.
Kwargs:
See ipagf2()
'''
e_ea, v_ea = self.get_ea(self.gf, nroots=nroots)
for n, en, vn in zip(range(nroots), e_ea, v_ea):
qpwt = np.linalg.norm(vn)**2
logger.note(self, 'EA energy level %d E = %.16g QP weight = %0.6g', n, en, qpwt)
if nroots == 1:
return e_ea[0], v_ea[0]
else:
return e_ea, v_ea
@property
def nocc(self):
if self._nocc is None:
self._nocc = np.sum(self.mo_occ > 0)
return self._nocc
@nocc.setter
def nocc(self, val):
self._nocc = val
@property
def nmo(self):
if self._nmo is None:
self._nmo = self.mo_occ.size
return self._nmo
@nmo.setter
def nmo(self, val):
self._nmo = val
@property
def e_tot(self):
return self.e_1b + self.e_2b
@property
def e_corr(self):
e_hf = mpi_helper.bcast(self._scf.e_tot)
return self.e_tot - e_hf
@property
def qmo_energy(self):
return self.gf.energy
@property
def qmo_coeff(self):
''' Gives the couplings in AO basis '''
return np.dot(self.mo_coeff, self.gf.coupling)
@property
def qmo_occ(self):
coeff = self.gf.get_occupied().coupling
occ = 2.0 * np.linalg.norm(coeff, axis=0) ** 2
vir = np.zeros_like(self.gf.get_virtual().energy)
qmo_occ = np.concatenate([occ, vir])
return qmo_occ
def get_frozen_mask(agf2):
with lib.temporary_env(agf2, _nocc=None, _nmo=None):
return _get_frozen_mask(agf2)
class _ChemistsERIs:
''' (pq|rs)
MO integrals stored in s4 symmetry; we only need QMO integrals
in low-symmetry tensors, and s4 is the highest symmetry supported by _vhf
'''
def __init__(self, mol=None):
self.mol = mol
self.mo_coeff = None
self.nmo = None
self.nocc = None
self.fock = None
self.h1e = None
self.eri = None
self.e_hf = None
def _common_init_(self, agf2, mo_coeff=None):
if mo_coeff is None:
mo_coeff = agf2.mo_coeff
self.mo_coeff = mo_coeff
dm = agf2._scf.make_rdm1(agf2.mo_coeff, agf2.mo_occ)
h1e_ao = agf2._scf.get_hcore()
fock_ao = h1e_ao + agf2._scf.get_veff(agf2.mol, dm)
self.h1e = np.dot(np.dot(mo_coeff.conj().T, h1e_ao), mo_coeff)
self.fock = np.dot(np.dot(mo_coeff.conj().T, fock_ao), mo_coeff)
self.h1e = mpi_helper.bcast(self.h1e)
self.fock = mpi_helper.bcast(self.fock)
self.e_hf = mpi_helper.bcast(agf2._scf.e_tot)
self.nmo = agf2.nmo
self.nocc = agf2.nocc
self.mol = agf2.mol
mo_e = self.fock.diagonal()
gap = abs(mo_e[:self.nocc,None] - mo_e[None,self.nocc:]).min()
if gap < 1e-5:
logger.warn(agf2, 'HOMO-LUMO gap %s may be too small for AGF2', gap)
return self
def _make_mo_eris_incore(agf2, mo_coeff=None):
''' Returns _ChemistsERIs
'''
cput0 = (logger.process_clock(), logger.perf_counter())
log = logger.Logger(agf2.stdout, agf2.verbose)
eris = _ChemistsERIs()
eris._common_init_(agf2, mo_coeff)
eri = ao2mo.incore.full(agf2._scf._eri, eris.mo_coeff, verbose=log)
eri = ao2mo.addons.restore('s4', eri, eris.nmo)
eris.eri = eri
log.timer('MO integral transformation', *cput0)
return eris
def _make_mo_eris_outcore(agf2, mo_coeff=None):
''' Returns _ChemistsERIs
'''
cput0 = (logger.process_clock(), logger.perf_counter())
log = logger.Logger(agf2.stdout, agf2.verbose)
eris = _ChemistsERIs()
eris._common_init_(agf2, mo_coeff)
mol = agf2.mol
mo_coeff = np.asarray(eris.mo_coeff, order='F')
eris.feri = lib.H5TmpFile()
ao2mo.outcore.full(mol, mo_coeff, eris.feri, dataname='mo',
max_memory=agf2.max_memory, verbose=log)
eris.eri = eris.feri['mo']
log.timer('MO integral transformation', *cput0)
return eris
def _make_qmo_eris_incore(agf2, eri, coeffs):
''' Returns ndarray
'''
cput0 = (logger.process_clock(), logger.perf_counter())
log = logger.Logger(agf2.stdout, agf2.verbose)
cx = np.eye(eri.nmo)
if not (agf2.frozen is None or agf2.frozen == 0):
mask = get_frozen_mask(agf2)
cx = cx[:,mask]
coeffs = (cx,) + coeffs
shape = tuple(x.shape[1] for x in coeffs)
qeri = ao2mo.incore.general(eri.eri, coeffs, compact=False, verbose=log)
qeri = qeri.reshape(shape)
log.timer('QMO integral transformation', *cput0)
return qeri
def _make_qmo_eris_outcore(agf2, eri, coeffs):
''' Returns H5 dataset
'''
cput0 = (logger.process_clock(), logger.perf_counter())
log = logger.Logger(agf2.stdout, agf2.verbose)
nmo = eri.nmo
ci, cj, ca = coeffs
ni = ci.shape[1]
nj = cj.shape[1]
na = ca.shape[1]
npair = nmo*(nmo+1)//2
mask = get_frozen_mask(agf2)
frozen = np.sum(~mask)
# possible to have incore MO, outcore QMO
if getattr(eri, 'feri', None) is None:
eri.feri = lib.H5TmpFile()
elif 'qmo' in eri.feri:
del eri.feri['qmo']
eri.feri.create_dataset('qmo', (nmo-frozen, ni, nj, na), 'f8')
blksize = _agf2.get_blksize(agf2.max_memory, (nmo*npair, nj*na, npair), (nmo*ni, nj*na))
blksize = min(nmo, max(BLKMIN, blksize))
log.debug1('blksize (ragf2._make_qmo_eris_outcore) = %d', blksize)
tril2sq = lib.square_mat_in_trilu_indices(nmo)
q1 = 0
for p0, p1 in lib.prange(0, nmo, blksize):
if not np.any(mask[p0:p1]):
# block is fully frozen
continue
inds = np.arange(p0, p1)[mask[p0:p1]]
q0, q1 = q1, q1 + len(inds)
idx = list(np.concatenate(tril2sq[inds]))
buf = eri.eri[idx] # (blk, nmo, npair)
buf = buf.reshape((q1-q0)*nmo, -1) # (blk*nmo, npair)
jasym, nja, cja, sja = ao2mo.incore._conc_mos(cj, ca, compact=True)
buf = ao2mo._ao2mo.nr_e2(buf, cja, sja, 's2kl', 's1')
buf = buf.reshape(q1-q0, nmo, nj, na)
buf = lib.einsum('xpja,pi->xija', buf, ci)
eri.feri['qmo'][q0:q1] = np.asarray(buf, order='C')
log.timer('QMO integral transformation', *cput0)
return eri.feri['qmo']
if __name__ == '__main__':
from pyscf import gto, scf, mp
mol = gto.M(atom='O 0 0 0; H 0 0 1; H 0 1 0', basis='cc-pvdz', verbose=3)
rhf = scf.RHF(mol)
rhf.conv_tol = 1e-11
rhf.run()
ragf2 = RAGF2(rhf, frozen=0)
ragf2.run()
ragf2.ipagf2(nroots=5)
ragf2.eaagf2(nroots=5)
print(mp.MP2(rhf, frozen=ragf2.frozen).run(verbose=0).e_corr)
print(ragf2.e_init)
ragf2 = ragf2.density_fit()
ragf2.run()
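    # Editor's sketch (not in the original source): a spin-component-scaled
    # AGF2 run with damping, using only attributes documented in the RAGF2
    # docstring above. The scaling and damping values are illustrative.
    ragf2_scs = RAGF2(rhf)
    ragf2_scs.os_factor = 1.2   # opposite-spin factor
    ragf2_scs.ss_factor = 0.33  # same-spin factor
    ragf2_scs.damping = 0.5     # damp the self-energy between iterations
    ragf2_scs.run()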
|
sunqm/pyscf
|
pyscf/agf2/ragf2.py
|
Python
|
apache-2.0
| 35,794
|
[
"PySCF"
] |
6da33d7141098472eb091220d324bcf9cb63d5b225418c496e888af9012ef170
|
import matplotlib
matplotlib.use('Agg')
from msmbuilder.dataset import dataset
from msmbuilder import msm, featurizer, utils, decomposition
import numpy as np
import mdtraj as md
import matplotlib.pyplot as plt
from glob import glob
import os
# Source directory for MEK simulations
source_directory = '/cbio/jclab/projects/fah/fah-data/munged/no-solvent/10488'
################################################################################
# Load trajectories
################################################################################
print('loading trajectories...')
filenames = glob(os.path.join(source_directory, '*0.h5'))
trajectories = [md.load(filename) for filename in filenames]
print('We are analyzing %s trajectories.' % len(trajectories))
################################################################################
# initialize dihedral and tICA features
################################################################################
print('initializing dihedral and tICA features...')
dihedrals = featurizer.DihedralFeaturizer(types=["chi1"]).transform(trajectories)
print "We are using %s chi1 dihedral features." % len(dihedrals[0])
tica = decomposition.tICA(n_components = 4,lag_time= 1600)
X = tica.fit_transform(dihedrals)
################################################################################
# Make eigenvalues plot
################################################################################
plt.clf()
eigenvalues = (tica.eigenvalues_)**2
sum_eigenvalues = np.sum(eigenvalues[0:2])
print "This is the sum of the first two eigenvalues: %s." % sum_eigenvalues
plt.plot(eigenvalues)
plt.xlim(0,4)
plt.ylim(0,1.2)
plt.annotate('sum first two: %s.' % sum_eigenvalues, xy=(0.25,0.1))
plt.savefig('msmb-eigenvalues.png')
################################################################################
# plot first two tics
################################################################################
plt.clf()
Xf = np.concatenate(X)
plt.hexbin(Xf[:,0], Xf[:, 1], bins='log')
plt.title("Dihedral tICA Analysis")
plt.xlabel("tic 1")
plt.ylabel("tic 2")
plt.savefig("msmbuilder-finding4-mek.png", bbox_inches="tight")
|
choderalab/MSMs
|
shanson/mek-10488/msmbuilder-finding4/msmbuilder-finding4-chi1/msmbuilder-finding4-mek.py
|
Python
|
gpl-2.0
| 2,180
|
[
"MDTraj"
] |
8f7f65e985a5a158eef7f5ad0b5783f8e7a738c0c5c3e0d5f05931ef7186aecd
|
from __future__ import print_function
from __future__ import absolute_import
from __future__ import division
__RCSID__ = "$Id$"
class Synchronizer(object):
"""Class encapsulating a lock
allowing it to be used as a synchronizing
decorator making the call thread-safe"""
def __init__(self, lockName="", recursive=False):
from DIRAC.Core.Utilities.LockRing import LockRing
self.__lockName = lockName
self.__lr = LockRing()
self.__lock = self.__lr.getLock(lockName, recursive=recursive)
def __call__(self, funcToCall):
def lockedFunc(*args, **kwargs):
try:
if self.__lockName:
print("LOCKING", self.__lockName)
self.__lock.acquire()
return funcToCall(*args, **kwargs)
finally:
if self.__lockName:
print("UNLOCKING", self.__lockName)
self.__lock.release()
# Copy the target method's docstring so that its description appears when compiling the documentation
lockedFunc.__doc__ = funcToCall.__doc__
return lockedFunc
def lock(self):
return self.__lock.acquire()
def unlock(self):
return self.__lock.release()
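if __name__ == "__main__":
    # Minimal usage sketch (editor's addition, not part of DIRAC): the
    # decorator serializes every call to the wrapped function under the
    # named lock, releasing it even if the function raises.
    demoSynchro = Synchronizer(lockName="demoLock", recursive=True)

    @demoSynchro
    def criticalSection():
        """Runs with demoLock held, so it is safe to call from several threads."""
        print("inside the critical section")

    criticalSection()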
|
ic-hep/DIRAC
|
src/DIRAC/Core/Utilities/ThreadSafe.py
|
Python
|
gpl-3.0
| 1,256
|
[
"DIRAC"
] |
4f246e7474a34db55118007c787163d8e8d25b63fc3abd1f6f7cedf3f0124f6e
|
import numpy as np
import lasagne
import lasagne.layers as L
import lasagne.nonlinearities as NL
import theano
import theano.tensor as TT
from rllab.misc.ext import compile_function
from rllab.core.lasagne_layers import ParamLayer
from rllab.core.lasagne_powered import LasagnePowered
from rllab.core.network import ConvNetwork
from rllab.misc import tensor_utils
from rllab.optimizers.lbfgs_optimizer import LbfgsOptimizer
from rllab.optimizers.penalty_lbfgs_optimizer import PenaltyLbfgsOptimizer
from rllab.distributions.diagonal_gaussian import DiagonalGaussian
from rllab.core.serializable import Serializable
from rllab.misc.ext import iterate_minibatches_generic
from rllab.misc import logger
class GaussianConvRegressor(LasagnePowered):
"""
A class for performing regression by fitting a Gaussian distribution to the outputs.
"""
def __init__(
self,
name,
input_shape,
output_dim,
hidden_sizes,
conv_filters, conv_filter_sizes, conv_strides, conv_pads,
hidden_nonlinearity=NL.rectify,
mean_network=None,
optimizer=None,
use_trust_region=True,
step_size=0.01,
subsample_factor=1.0,
batchsize=None,
learn_std=True,
init_std=1.0,
adaptive_std=False,
std_share_network=False,
std_conv_filters=[], std_conv_filter_sizes=[], std_conv_strides=[], std_conv_pads=[],
std_hidden_sizes=(32, 32),
std_nonlinearity=None,
normalize_inputs=True,
normalize_outputs=True,
):
"""
:param input_shape: usually for images of the form (width,height,channel)
:param output_dim: Dimension of output.
:param hidden_sizes: Number of hidden units of each layer of the mean network.
:param hidden_nonlinearity: Non-linearity used for each layer of the mean network.
:param optimizer: Optimizer for minimizing the negative log-likelihood.
:param use_trust_region: Whether to use trust region constraint.
:param step_size: KL divergence constraint for each iteration
:param learn_std: Whether to learn the standard deviations. Only effective if adaptive_std is False. If
adaptive_std is True, this parameter is ignored, and the weights for the std network are always learned.
:param adaptive_std: Whether to make the std a function of the states.
:param std_share_network: Whether to use the same network as the mean.
:param std_hidden_sizes: Number of hidden units of each layer of the std network. Only used if
`std_share_network` is False. It defaults to the same architecture as the mean.
:param std_nonlinearity: Non-linearity used for each layer of the std network. Only used if `std_share_network`
is False. It defaults to the same non-linearity as the mean.
"""
Serializable.quick_init(self, locals())
if optimizer is None:
if use_trust_region:
optimizer = PenaltyLbfgsOptimizer("optimizer")
else:
optimizer = LbfgsOptimizer("optimizer")
self._optimizer = optimizer
self.input_shape = input_shape
if mean_network is None:
mean_network = ConvNetwork(
name="mean_network",
input_shape=input_shape,
output_dim=output_dim,
conv_filters=conv_filters,
conv_filter_sizes=conv_filter_sizes,
conv_strides=conv_strides,
conv_pads=conv_pads,
hidden_sizes=hidden_sizes,
hidden_nonlinearity=hidden_nonlinearity,
output_nonlinearity=None,
)
l_mean = mean_network.output_layer
if adaptive_std:
l_log_std = ConvNetwork(
name="log_std_network",
input_shape=input_shape,
input_var=mean_network.input_layer.input_var,
output_dim=output_dim,
conv_filters=std_conv_filters,
conv_filter_sizes=std_conv_filter_sizes,
conv_strides=std_conv_strides,
conv_pads=std_conv_pads,
hidden_sizes=std_hidden_sizes,
hidden_nonlinearity=std_nonlinearity,
output_nonlinearity=None,
).output_layer
else:
l_log_std = ParamLayer(
mean_network.input_layer,
num_units=output_dim,
param=lasagne.init.Constant(np.log(init_std)),
name="output_log_std",
trainable=learn_std,
)
LasagnePowered.__init__(self, [l_mean, l_log_std])
xs_var = mean_network.input_layer.input_var
ys_var = TT.matrix("ys")
old_means_var = TT.matrix("old_means")
old_log_stds_var = TT.matrix("old_log_stds")
x_mean_var = theano.shared(
np.zeros((1,np.prod(input_shape)), dtype=theano.config.floatX),
name="x_mean",
broadcastable=(True,False),
)
x_std_var = theano.shared(
np.ones((1,np.prod(input_shape)), dtype=theano.config.floatX),
name="x_std",
broadcastable=(True,False),
)
y_mean_var = theano.shared(
np.zeros((1, output_dim), dtype=theano.config.floatX),
name="y_mean",
broadcastable=(True, False)
)
y_std_var = theano.shared(
np.ones((1, output_dim), dtype=theano.config.floatX),
name="y_std",
broadcastable=(True, False)
)
normalized_xs_var = (xs_var - x_mean_var) / x_std_var
normalized_ys_var = (ys_var - y_mean_var) / y_std_var
normalized_means_var = L.get_output(
l_mean, {mean_network.input_layer: normalized_xs_var})
normalized_log_stds_var = L.get_output(
l_log_std, {mean_network.input_layer: normalized_xs_var})
means_var = normalized_means_var * y_std_var + y_mean_var
log_stds_var = normalized_log_stds_var + TT.log(y_std_var)
normalized_old_means_var = (old_means_var - y_mean_var) / y_std_var
normalized_old_log_stds_var = old_log_stds_var - TT.log(y_std_var)
dist = self._dist = DiagonalGaussian(output_dim)
normalized_dist_info_vars = dict(
mean=normalized_means_var, log_std=normalized_log_stds_var)
mean_kl = TT.mean(dist.kl_sym(
dict(mean=normalized_old_means_var,
log_std=normalized_old_log_stds_var),
normalized_dist_info_vars,
))
loss = - \
TT.mean(dist.log_likelihood_sym(
normalized_ys_var, normalized_dist_info_vars))
self._f_predict = compile_function([xs_var], means_var)
self._f_pdists = compile_function([xs_var], [means_var, log_stds_var])
self._l_mean = l_mean
self._l_log_std = l_log_std
optimizer_args = dict(
loss=loss,
target=self,
network_outputs=[normalized_means_var, normalized_log_stds_var],
)
if use_trust_region:
optimizer_args["leq_constraint"] = (mean_kl, step_size)
optimizer_args["inputs"] = [
xs_var, ys_var, old_means_var, old_log_stds_var]
else:
optimizer_args["inputs"] = [xs_var, ys_var]
self._optimizer.update_opt(**optimizer_args)
self._use_trust_region = use_trust_region
self._name = name
self._normalize_inputs = normalize_inputs
self._normalize_outputs = normalize_outputs
self._mean_network = mean_network
self._x_mean_var = x_mean_var
self._x_std_var = x_std_var
self._y_mean_var = y_mean_var
self._y_std_var = y_std_var
self._subsample_factor = subsample_factor
self._batchsize = batchsize
def fit(self, xs, ys):
if self._subsample_factor < 1:
num_samples_tot = xs.shape[0]
idx = np.random.randint(0, num_samples_tot, int(num_samples_tot * self._subsample_factor))
xs, ys = xs[idx], ys[idx]
if self._normalize_inputs:
# recompute normalizing constants for inputs
self._x_mean_var.set_value(
np.mean(xs, axis=0, keepdims=True).astype(theano.config.floatX))
self._x_std_var.set_value(
(np.std(xs, axis=0, keepdims=True) + 1e-8).astype(theano.config.floatX))
if self._normalize_outputs:
# recompute normalizing constants for outputs
self._y_mean_var.set_value(
np.mean(ys, axis=0, keepdims=True).astype(theano.config.floatX))
self._y_std_var.set_value(
(np.std(ys, axis=0, keepdims=True) + 1e-8).astype(theano.config.floatX))
if self._name:
prefix = self._name + "_"
else:
prefix = ""
# FIXME: needs batch computation to avoid OOM.
loss_before, loss_after, mean_kl, batch_count = 0., 0., 0., 0
for batch in iterate_minibatches_generic(input_lst=[xs, ys], batchsize=self._batchsize, shuffle=True):
batch_count += 1
xs, ys = batch
if self._use_trust_region:
old_means, old_log_stds = self._f_pdists(xs)
inputs = [xs, ys, old_means, old_log_stds]
else:
inputs = [xs, ys]
loss_before += self._optimizer.loss(inputs)
self._optimizer.optimize(inputs)
loss_after += self._optimizer.loss(inputs)
if self._use_trust_region:
mean_kl += self._optimizer.constraint_val(inputs)
logger.record_tabular(prefix + 'LossBefore', loss_before / batch_count)
logger.record_tabular(prefix + 'LossAfter', loss_after / batch_count)
logger.record_tabular(prefix + 'dLoss', (loss_before - loss_after) / batch_count)
if self._use_trust_region:
logger.record_tabular(prefix + 'MeanKL', mean_kl / batch_count)
def predict(self, xs):
"""
Return the maximum likelihood estimate of the predicted y.
:param xs:
:return:
"""
return self._f_predict(xs)
def sample_predict(self, xs):
"""
Sample one possible output from the prediction distribution.
:param xs:
:return:
"""
means, log_stds = self._f_pdists(xs)
return self._dist.sample(dict(mean=means, log_std=log_stds))
def predict_log_likelihood(self, xs, ys):
means, log_stds = self._f_pdists(xs)
return self._dist.log_likelihood(ys, dict(mean=means, log_std=log_stds))
def log_likelihood_sym(self, x_var, y_var):
normalized_xs_var = (x_var - self._x_mean_var) / self._x_std_var
normalized_means_var, normalized_log_stds_var = \
L.get_output([self._l_mean, self._l_log_std], {
self._mean_network.input_layer: normalized_xs_var})
means_var = normalized_means_var * self._y_std_var + self._y_mean_var
log_stds_var = normalized_log_stds_var + TT.log(self._y_std_var)
return self._dist.log_likelihood_sym(y_var, dict(mean=means_var, log_std=log_stds_var))
def get_param_values(self, **tags):
return LasagnePowered.get_param_values(self, **tags)
def set_param_values(self, flattened_params, **tags):
return LasagnePowered.set_param_values(self, flattened_params, **tags)
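if __name__ == "__main__":
    # Smoke test (editor's sketch, not part of rllab). Shapes follow the
    # constructor docstring (input_shape is (width, height, channel)); the
    # hyperparameters and the 'valid' pad token are illustrative assumptions.
    regressor = GaussianConvRegressor(
        name="demo",
        input_shape=(8, 8, 1),
        output_dim=2,
        hidden_sizes=(32,),
        conv_filters=(4,),
        conv_filter_sizes=(3,),
        conv_strides=(1,),
        conv_pads=('valid',),
        use_trust_region=False,
    )
    # fit() expects flattened inputs, matching the (1, prod(input_shape))
    # normalization variables created in the constructor.
    xs = np.random.randn(16, 64).astype(theano.config.floatX)
    ys = np.random.randn(16, 2).astype(theano.config.floatX)
    regressor.fit(xs, ys)
    print(regressor.predict(xs).shape)  # expected: (16, 2)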
|
brain-research/mirage-rl-qprop
|
rllab/regressors/gaussian_conv_regressor.py
|
Python
|
mit
| 11,624
|
[
"Gaussian"
] |
376336a9171827d37768123d3aece6df310bd6a1e2d3782c24bdf74767ad7683
|
"""
ETB Parser based on parsimonious.
The grammar is defined in the docstring, essentially EBNF, but
uses ``/`` (first match) instead of ``|``. ``+``, ``*``, ``?`` have usual meaning
regex's start with ``~``.
The grammar.parse function generates parsimonious Nodes, which are
then translated to ETB terms using the visit method of ETBParser.
This is significantly faster than pyparsing, while still being easy to install.
..
Copyright (C) 2013 SRI International
This program is free software: you can redistribute it
and/or modify it under the terms of the GNU General Public License as
published by the Free Software Foundation, either version 3 of the
License, or (at your option) any later version. This program is
distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
for more details. You should have received a copy of the GNU General
Public License along with this program. If not, see
<http://www.gnu.org/licenses/>.
"""
import terms
import string
import re
from parsimonious.grammar import Grammar, NodeVisitor
from parsimonious.exceptions import ParseError, IncompleteParseError
grammar = Grammar(
"""
statements = _ statement+
statement = fact / clause / inference_rule
fact = literal pd
# clause is same as derivation_rule?
clause = literal ts literals pd
inference_rule = literal infer literals pd
claims = _ lk claim rest_claims* rk
rest_claims = co claim
claim = claim_type lp literal co "reason" _ eq reason rp
claim_type = "claim" / "interpretedClaim" / "derivedClaim" / "provedClaim"
reason = dstring / clause / inference_rule #/ derivation_rule
literals = literal rest_lits*
rest_lits = co literal
literal = infix_lit / app_lit
infix_lit = term binop term
binop = eq / neq
app_lit = pred args
pred = id / string
args = lp terms? rp
substitutions = lk substs? rk
substs = subst rest_substs*
rest_substs = co subst
subst = "subst" lp bindings? rp
bindings = binding rest_bindings*
rest_bindings = co binding
binding = id eq term
terms = term rest_terms*
rest_terms = co term
term = token / array / obj
token = num / id / string
array = lk terms? rk access*
obj = lb objpairs? rb access*
objpairs = objpair rest_objpair*
rest_objpair = co objpair
objpair = token cl term
access = lk token rk
string = dstring / sstring
id = ~r"[^][(){}=:`'\\".,~?% \\\]+" _
dstring = ~r'"([^"\\\\]*(?:\\\\.[^"\\\\]*)*)"' _
sstring = ~r"'([^'\\\\]*(?:\\\\.[^'\\\\]*)*)'" _
num = ~"[-+]?[0-9]*\.?[0-9]+([eE][-+]?[0-9]+)?" _
_ = whitespace*
whitespace = ~"\s+" / comment
comment = ~"%[^\\n\\r]*[\\n\\r]*"
eq = "=" _
neq = "!=" _
ts = ":-" _
infer = "<=" _
lp = "(" _
rp = ")" _
lk = "[" _
rk = "]" _
lb = "{" _
rb = "}" _
cl = ":" _
co = "," _
pd = "." _
""")
class ETBParser(NodeVisitor):
"""Visitor that turns a parse tree into ETB Terms
See parsimonious.NodeVisitor docstring for more info
"""
def visit(self, node):
"""Replaces NodeVisitor.visit, which wraps errors in an opaque way.
In particular, parsing a file with an error generates pages of output
that is meaningless to the user.
This is actually the same as NodeVisitor.visit, but without try...except"""
method = getattr(self, 'visit_' + node.expr_name, self.generic_visit)
return method(node, [self.visit(n) for n in node])
def visit_statements(self, node, (_, statements)):
return statements
def visit_statement(self, node, stmt):
return stmt[0]
def visit_fact(self, node, (term, _)):
#print 'visit_fact: term {0}: {1}'.format(term, type(term))
return term
def visit_clause(self, node, (head, ts, tail, pd)):
return terms.DerivationRule(head, tail)
def visit_inference_rule(self, node, (head, inf, tail, pd)):
return terms.InferenceRule(head, tail)
def visit_claims(self, node, (_, lk, claim, rest_claims, rk)):
if isinstance(rest_claims, list):
return [claim] + rest_claims
else:
return [claim]
def visit_rest_claims(self, node, (_, claim)):
return claim
def visit_claim(self, node, (ctype, lp, lit, co, re, _, eq, reason, rp)):
if ctype == "interpretedClaim":
return terms.InterpretedClaim(lit, reason)
elif ctype == "derivedClaim":
return terms.DerivedClaim(lit, reason)
elif ctype == "provedClaim":
return terms.ProvedClaim(lit, reason)
else:
return terms.Claim(lit, reason)
def visit_reason(self, node, reason):
return reason[0]
def visit_literals(self, node, (first_lit, rest_lits)):
#print 'visit_literals: first {0}: {1}, rest {2}: {3}'.format(first_lit, type(first_lit), rest_lits, type(rest_lits))
if isinstance(rest_lits, list):
return [first_lit] + rest_lits
else:
return [first_lit]
def visit_rest_lits(self, node, (_, lit)):
#print 'visit_rest_lits: lit {0}: {1}'.format(lit, type(lit))
return lit
def visit_literal(self, node, lit):
#print 'visit_literal: lit {0}: {1}'.format(lit[0], type(lit[0]))
return lit[0]
def visit_infix_lit(self, node, (lhs, op, rhs)):
#print 'visit_infix_lit: lhs {0}: {1}'.format(lhs, type(lhs))
#print 'visit_infix_lit: op {0}: {1}'.format(op, type(op))
#print 'visit_infix_lit: rhs {0}: {1}'.format(rhs, type(rhs))
return terms.InfixLiteral(op, [lhs, rhs])
def visit_binop(self, node, op):
#print 'visit_binop: op {0}: {1}'.format(op[0], type(op[0]))
binop = op[0]
return binop
def visit_eq(self, node, (eq, _)):
#print 'visit_eq: eq {0}: {1}'.format(eq, type(eq))
return '='
def visit_neq(self, node, (neq, _)):
#print 'visit_eq: eq {0}: {1}'.format(neq, type(neq))
return '!='
def visit_app_lit(self, node, (pred, args)):
#print 'visit_app_lit: pred {0}: {1}, args {2}: {3}'.format(pred, type(pred), args, type(args))
return terms.Literal(pred, args)
def visit_pred(self, node, pred):
#print 'visit_pred: node {0}'.format(node.children[0].expr_name)
#print 'visit_pred: pred {0}: {1}'.format(pred, type(pred))
if node.children[0].expr_name == 'string':
return terms.StringConst(pred[0])
else:
return terms.IdConst(pred[0])
def visit_args(self, node, (lp, terms, rp)):
#print 'visit_args: terms = {0}, type {1}'.format(terms[0], type(terms[0]))
if isinstance(terms, list):
return terms[0]
else:
return []
# Substitutions
def visit_substitutions(self, node, (lk, substs, rk)):
if isinstance(substs, list):
return substs[0]
else:
return []
def visit_substs(self, node, (subst, rest_substs)):
if isinstance(rest_substs, list):
return [subst] + rest_substs
else:
return [subst]
def visit_rest_substs(self, node, (_, subst)):
return subst
def visit_subst(self, node, (_, lp, bindings, rp)):
if isinstance(bindings, list):
return terms.Subst(dict(bindings[0]))
else:
return terms.Subst(dict())
def visit_bindings(self, node, (binding, rest_bindings)):
if isinstance(rest_bindings, list):
return [binding] + rest_bindings
else:
return [binding]
def visit_rest_bindings(self, node, (_, binding)):
return binding
def visit_binding(self, node, (id, eq, term)):
if not id[0].isupper():
raise TypeError('Identifier expected to be variable (i.e., capitalized) here')
return (terms.Var(id), term)
# Terms
def visit_terms(self, node, (term, rest_terms)):
#print 'visit_terms: term {0}: {1}, rest_terms {0}: {1}'.format(term, type(term), rest_terms, type(rest_terms))
if isinstance(rest_terms, list):
return [term] + rest_terms
else:
return [term]
def visit_rest_terms(self, node, (_, term)):
#print 'visit_rest_terms: term {0}: {1}'.format(term, type(term))
return term
def visit_term(self, node, term):
#print 'visit_term: node {0}: {1}'.format(node, type(node))
#print 'visit_term: term {0}: {1}'.format(term[0], type(term[0]))
return term[0]
def visit_token(self, node, token):
#print 'visit_token: token = {0}: {1}'.format(token, type(token))
#print 'visit_token: node.children = {0}: {1}'.format(node.children, len(node.children))
text = token[0]
if node.children[0].expr_name == 'string':
term = terms.mk_stringconst(text)
elif node.children[0].expr_name == 'id':
if text[0].isupper():
term = terms.mk_var(text)
else:
term = terms.mk_idconst(text)
else:
term = terms.mk_numberconst(text)
#print 'visit_const: {0}, type {1}'.format(term, type(term))
return term
def visit_array(self, node, (lk, elems, rk, accesses)):
#print 'visit_array: elems {0}: {1}, {2}: {3}'.format(elems, type(elems), accesses, type(accesses))
if isinstance(elems, list):
array = terms.mk_array(elems[0])
else:
#print 'visit_array: empty array'
array = terms.mk_array([])
if isinstance(accesses, list):
return array.reduce_access(accesses)
else:
#print 'visit_array: array = {0}, type {1}'.format(array, type(array))
return array
def visit_obj(self, node, (lb, objpairs, rb, accesses)):
#print 'visit_obj: {0}: {1}, {2}: {3}'.format(objpairs, type(objpairs), accesses, type(accesses))
if isinstance(objpairs, list):
obj = terms.mk_map(objpairs[0])
else:
obj = terms.mk_map([])
if isinstance(accesses, list):
return obj.reduce_access(accesses)
else:
#print 'visit_array: array = {0}, type {1}'.format(array, type(array))
return obj
def visit_objpairs(self, node, (objpair, rest_objpair)):
#print 'visit_objpairs: objpair {0}: {1}, other {2}: {3}'.format(objpair, type(objpair), rest_objpair, type(rest_objpair))
#print 'visit_objpairs: objpair[1] {0}: {1}'.format(objpair[1], type(objpair[1]))
if isinstance(rest_objpair, list):
return dict([objpair] + rest_objpair)
else:
return dict([objpair])
def visit_rest_objpair(self, node, (_, objpair)):
#print 'visit_rest_objpair: {0}: {1}'.format(objpair, type(objpair))
return objpair
def visit_objpair(self, node, (token, cl, term)):
#print 'visit_objpair: token {0}: {1}'.format(token, type(token))
#print ' term {0}: {1}'.format(term, type(term))
if isinstance(token, terms.Var):
raise TypeError('Identifier expected to be constant (i.e., not capitalized) here')
return (token, term)
def visit_access(self, node, (lk, token, rk)):
#print 'visit_access: token {0}: {1}'.format(token, type(token))
return token
def visit_id(self, node, (id, _)):
#print 'visit_id: {0}: {1}'.format(node, type(node))
#print 'visit_id: {0}: {1}'.format(id.text, type(id.text))
return id.text
def visit_string(self, node, string):
#print 'visit_string: {0}: {1}'.format(string[0], type(string[0]))
return string[0]
def visit_dstring(self, node, (string, _)):
return string.text[1:-1]
def visit_sstring(self, node, (string, _)):
return string.text[1:-1]
def visit_num(self, node, (num, _)):
return num.text
def generic_visit(self, node, visited_children):
"""Default visitor method
"""
result = visited_children or node
#print 'generic_visit: result = {0}: {1}'.format(result, type(result))
return result
def parse(text, nt='statements'):
"""Uses parsimonious to parse the ETB extended datalog language
nt is the nonterminal. The most useful ones are:
statements, statement, literals, literal, and term.
>>> type(parse('V', 'term'))
<class 'terms.Var'>
>>> type(parse('v', 'term'))
<class 'terms.IdConst'>
>>> type(parse('3', 'term'))
<class 'terms.NumberConst'>
>>> type(parse('3.14', 'term'))
<class 'terms.NumberConst'>
>>> type(parse('-3.14e-10', 'term'))
<class 'terms.NumberConst'>
>>> type(parse('"3 is a number"', 'term'))
<class 'terms.StringConst'>
>>> parse('3a', 'term')
term has extra text: 'a' (line 1, column 2).
"""
try:
node = grammar[nt].parse(text.strip())
return ETBParser().visit(node)
except IncompleteParseError as iperr:
raise ValueError(u"{0} has extra text: '{1}' (line {2}, column {3}).".format(
iperr.expr.name, iperr.text[iperr.pos:iperr.pos + 20],
iperr.line(), iperr.column()))
except ParseError as perr:
rule_name = ((u"{0}".format(perr.expr.name)) if perr.expr.name else
unicode(perr.expr))
raise ValueError(u"{0} expected at '{1}' (line {2}, column {3})."
.format(rule_name, perr.text[perr.pos:perr.pos + 20],
perr.line(), perr.column()))
def parse_term(text):
return parse(text, 'term')
def parse_literal(text):
lit = parse(text, 'literal')
return lit
def parse_file(file, nt='statements'):
with open(file, 'rb') as fd:
text = fd.read()
return parse(text, nt)
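if __name__ == '__main__':
    # Small demonstration (editor's addition): parse a literal and a clause
    # using the nonterminals listed in the parse() docstring above.
    print parse_literal('p(X, 3)')
    print parse('q(X) :- p(X, Y).', 'statement')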
|
SRI-CSL/ETB
|
etb/parser.py
|
Python
|
gpl-3.0
| 13,054
|
[
"VisIt"
] |
3c1bd40c7f867e2bc0b205e466b88789752c3406afe7fd17750bc30c2d874ae9
|
#!/usr/bin/env python
# -*- mode: python; coding: utf-8; -*-
##---------------------------------------------------------------------------##
##
## Copyright (C) 1998-2003 Markus Franz Xaver Johannes Oberhumer
## Copyright (C) 2003 Mt. Hood Playing Card Co.
## Copyright (C) 2005-2009 Skomoroh
##
## This program is free software: you can redistribute it and/or modify
## it under the terms of the GNU General Public License as published by
## the Free Software Foundation, either version 3 of the License, or
## (at your option) any later version.
##
## This program is distributed in the hope that it will be useful,
## but WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
## GNU General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with this program. If not, see <http://www.gnu.org/licenses/>.
##
##---------------------------------------------------------------------------##
__all__ = []
# imports
import sys
# PySol imports
from pysollib.gamedb import registerGame, GameInfo, GI
from pysollib.util import *
from pysollib.mfxutil import kwdefault, Struct
from pysollib.stack import *
from pysollib.game import Game
from pysollib.layout import Layout
from pysollib.hint import AbstractHint, DefaultHint, CautiousDefaultHint
from pysollib.hint import KlondikeType_Hint
from pysollib.hint import FreeCellSolverWrapper
from pysollib.pysoltk import MfxCanvasText
from canfield import CanfieldRush_Talon
# ************************************************************************
# * Klondike
# ************************************************************************
class Klondike(Game):
Layout_Method = Layout.klondikeLayout
Talon_Class = WasteTalonStack
Foundation_Class = SS_FoundationStack
RowStack_Class = KingAC_RowStack
Hint_Class = KlondikeType_Hint
def createGame(self, max_rounds=-1, num_deal=1, **layout):
# create layout
l, s = Layout(self), self.s
kwdefault(layout, rows=7, waste=1, texts=1, playcards=16)
self.Layout_Method(l, **layout)
self.setSize(l.size[0], l.size[1])
# create stacks
s.talon = self.Talon_Class(l.s.talon.x, l.s.talon.y, self,
max_rounds=max_rounds, num_deal=num_deal)
if l.s.waste:
s.waste = WasteStack(l.s.waste.x, l.s.waste.y, self)
for r in l.s.foundations:
s.foundations.append(self.Foundation_Class(r.x, r.y, self, suit=r.suit))
for r in l.s.rows:
s.rows.append(self.RowStack_Class(r.x, r.y, self))
# default
l.defaultAll()
return l
def startGame(self, flip=0, reverse=1):
for i in range(1, len(self.s.rows)):
self.s.talon.dealRow(rows=self.s.rows[i:], flip=flip, frames=0, reverse=reverse)
self.startDealSample()
self.s.talon.dealRow(reverse=reverse)
if self.s.waste:
self.s.talon.dealCards() # deal first card to WasteStack
shallHighlightMatch = Game._shallHighlightMatch_AC
# ************************************************************************
# * Vegas Klondike
# ************************************************************************
class VegasKlondike(Klondike):
getGameScore = Game.getGameScoreCasino
getGameBalance = Game.getGameScoreCasino
def createGame(self, max_rounds=1):
l = Klondike.createGame(self, max_rounds=max_rounds)
self.texts.score = MfxCanvasText(self.canvas,
8, self.height - 8, anchor="sw",
font=self.app.getFont("canvas_large"))
return l
def updateText(self):
if self.preview > 1:
return
b1, b2 = self.app.stats.gameid_balance, 0
if self.shallUpdateBalance():
b2 = self.getGameBalance()
t = _("Balance $%d") % (b1 + b2)
self.texts.score.config(text=t)
def getDemoInfoTextAttr(self, tinfo):
return tinfo[1] # "se" corner
# ************************************************************************
# * Casino Klondike
# ************************************************************************
class CasinoKlondike(VegasKlondike):
def createGame(self):
l = VegasKlondike.createGame(self, max_rounds=3)
l.createRoundText(self.s.talon, 'ne', dx=l.XS)
# ************************************************************************
# * Klondike by Threes
# ************************************************************************
class KlondikeByThrees(Klondike):
def createGame(self):
Klondike.createGame(self, num_deal=3)
# ************************************************************************
# * Thumb and Pouch
# * Chinaman
# ************************************************************************
class ThumbAndPouch(Klondike):
RowStack_Class = BO_RowStack
def createGame(self):
Klondike.createGame(self, max_rounds=1)
def shallHighlightMatch(self, stack1, card1, stack2, card2):
return (card1.suit != card2.suit
and (card1.rank + 1 == card2.rank
or card2.rank + 1 == card1.rank))
class Chinaman(ThumbAndPouch):
RowStack_Class = StackWrapper(BO_RowStack, base_rank=KING)
def createGame(self):
l = Klondike.createGame(self, num_deal=3,
max_rounds=2, round_text=True)
l.createRoundText(self.s.talon, 'ne', dx=l.XS)
# ************************************************************************
# * Whitehead
# ************************************************************************
class Whitehead_RowStack(SS_RowStack):
def _isAcceptableSequence(self, cards):
return isSameColorSequence(cards, self.cap.mod, self.cap.dir)
def getHelp(self):
return _('Tableau. Build down by color. Sequences of cards in the same suit can be moved as a unit.')
class Whitehead(Klondike):
RowStack_Class = Whitehead_RowStack
Hint_Class = CautiousDefaultHint
def createGame(self):
Klondike.createGame(self, max_rounds=1)
def startGame(self):
Klondike.startGame(self, flip=1)
shallHighlightMatch = Game._shallHighlightMatch_SS
getQuickPlayScore = Game._getSpiderQuickPlayScore
# ************************************************************************
# * Small Harp (Klondike in a different layout)
# ************************************************************************
class SmallHarp(Klondike):
Layout_Method = Layout.gypsyLayout
def startGame(self):
for i in range(len(self.s.rows)):
self.s.talon.dealRow(rows=self.s.rows[:i], flip=0, frames=0)
self.startDealSample()
self.s.talon.dealRow()
self.s.talon.dealCards() # deal first card to WasteStack
# ************************************************************************
# * Eastcliff
# * Easthaven
# ************************************************************************
class Eastcliff(Klondike):
RowStack_Class = AC_RowStack
def createGame(self):
Klondike.createGame(self, max_rounds=1)
def startGame(self):
for i in range(2):
self.s.talon.dealRow(flip=0, frames=0)
self.startDealSample()
self.s.talon.dealRow()
if self.s.waste:
self.s.talon.dealCards() # deal first card to WasteStack
class Easthaven(Eastcliff):
Talon_Class = DealRowTalonStack
def createGame(self):
Klondike.createGame(self, max_rounds=1, waste=0)
class DoubleEasthaven(Easthaven):
def createGame(self):
Klondike.createGame(self, rows=8, max_rounds=1, waste=0, playcards=20)
class TripleEasthaven(Easthaven):
def createGame(self):
Klondike.createGame(self, rows=12, max_rounds=1, waste=0, playcards=26)
# ************************************************************************
# * Westcliff
# * Westhaven
# ************************************************************************
class Westcliff(Eastcliff):
Foundation_Class = StackWrapper(SS_FoundationStack, max_move=0)
def createGame(self):
Klondike.createGame(self, max_rounds=1, rows=10)
class Westhaven(Westcliff):
Talon_Class = DealRowTalonStack
def createGame(self):
Klondike.createGame(self, max_rounds=1, rows=10, waste=0)
# ************************************************************************
# * Pas Seul
# ************************************************************************
class PasSeul(Eastcliff):
def createGame(self):
Klondike.createGame(self, max_rounds=1, rows=6)
def startGame(self):
self.startDealSample()
self.s.talon.dealRow()
self.s.talon.dealCards() # deal first card to WasteStack
# ************************************************************************
# * Blind Alleys
# ************************************************************************
class BlindAlleys(Eastcliff):
def createGame(self):
l = Klondike.createGame(self, max_rounds=2, rows=6, round_text=True)
l.createRoundText(self.s.talon, 'ne', dx=l.XS)
def _shuffleHook(self, cards):
# move Aces to top of the Talon (i.e. first cards to be dealt)
return self._shuffleHookMoveToTop(cards, lambda c: (c.rank == 0, c.suit))
def startGame(self):
self.s.talon.dealRow(rows=self.s.foundations, frames=0)
Eastcliff.startGame(self)
# ************************************************************************
# * Somerset
# * Morehead
# * Usk
# ************************************************************************
class Somerset(Klondike):
Talon_Class = InitialDealTalonStack
RowStack_Class = SuperMoveAC_RowStack
Hint_Class = CautiousDefaultHint
Solver_Class = FreeCellSolverWrapper()
def createGame(self):
Klondike.createGame(self, max_rounds=1, rows=10, waste=0, texts=0)
def startGame(self):
for i in range(6):
self.s.talon.dealRow(rows=self.s.rows[i:], frames=0)
self.startDealSample()
self.s.talon.dealRow(rows=self.s.rows[6:])
self.s.talon.dealRow(rows=self.s.rows[7:])
class Morehead(Somerset):
RowStack_Class = StackWrapper(BO_RowStack, max_move=1)
Solver_Class = None
class Usk(Somerset):
Talon_Class = RedealTalonStack
RowStack_Class = StackWrapper(AC_RowStack, base_rank=KING)
Solver_Class = None
def createGame(self):
l = Klondike.createGame(self, max_rounds=2, rows=10,
waste=False, texts=False, round_text=True)
l.createRoundText(self.s.talon, 'ne')
def redealCards(self):
n = 0
while self.s.talon.cards:
self.s.talon.dealRowAvail(rows=self.s.rows[n:], frames=4)
n += 1
# ************************************************************************
# * Canister
# * American Canister
# * British Canister
# ************************************************************************
class AmericanCanister(Klondike):
Talon_Class = InitialDealTalonStack
RowStack_Class = AC_RowStack
Solver_Class = FreeCellSolverWrapper(sm='unlimited')
def createGame(self):
Klondike.createGame(self, max_rounds=1, rows=8, waste=0, texts=0)
def startGame(self):
for i in range(5):
self.s.talon.dealRow(frames=0)
self.startDealSample()
self.s.talon.dealRow()
self.s.talon.dealRow(rows=self.s.rows[2:6])
class Canister(AmericanCanister):
RowStack_Class = RK_RowStack
Solver_Class = FreeCellSolverWrapper(sbb='rank', sm='unlimited')
shallHighlightMatch = Game._shallHighlightMatch_RK
class BritishCanister(AmericanCanister):
RowStack_Class = StackWrapper(KingAC_RowStack, max_move=1)
Solver_Class = FreeCellSolverWrapper(esf='kings')
# ************************************************************************
# * Agnes Sorel
# ************************************************************************
class AgnesSorel(Klondike):
Talon_Class = DealRowTalonStack
Foundation_Class = StackWrapper(SS_FoundationStack, mod=13, base_rank=NO_RANK, max_move=0)
RowStack_Class = StackWrapper(SC_RowStack, mod=13, base_rank=NO_RANK)
def createGame(self):
Klondike.createGame(self, max_rounds=1, waste=0)
def startGame(self):
Klondike.startGame(self, flip=1)
c = self.s.talon.dealSingleBaseCard()
def shallHighlightMatch(self, stack1, card1, stack2, card2):
return (card1.color == card2.color and
((card1.rank + 1) % 13 == card2.rank or
(card2.rank + 1) % 13 == card1.rank))
# ************************************************************************
# * 8 x 8
# * Achtmal Acht
# * Eight by Eight
# ************************************************************************
class EightTimesEight(Klondike):
Layout_Method = Layout.gypsyLayout
RowStack_Class = AC_RowStack
def createGame(self):
Klondike.createGame(self, rows=8)
def startGame(self):
for i in range(7):
self.s.talon.dealRow(frames=0)
self.startDealSample()
self.s.talon.dealRow()
self.s.talon.dealCards() # deal first card to WasteStack
class AchtmalAcht(EightTimesEight):
def createGame(self):
l = Klondike.createGame(self, rows=8, max_rounds=3, round_text=True)
l.createRoundText(self.s.talon, 'sw', dx=-l.XS)
class EightByEight_RowStack(RK_RowStack):
def acceptsCards(self, from_stack, cards):
if not RK_RowStack.acceptsCards(self, from_stack, cards):
return False
if not self.cards:
return len(cards) == 1
return True
class EightByEight(EightTimesEight):
Layout_Method = Layout.klondikeLayout ##gypsyLayout
Talon_Class = CanfieldRush_Talon
RowStack_Class = EightByEight_RowStack
def createGame(self):
l = Klondike.createGame(self, rows=8, playcards=20,
max_rounds=3, round_text=True)
l.createRoundText(self.s.talon, 'ne', dx=l.XS)
shallHighlightMatch = Game._shallHighlightMatch_RK
# ************************************************************************
# * Batsford
# * Batsford Again
# ************************************************************************
class Batsford_ReserveStack(ReserveStack):
def acceptsCards(self, from_stack, cards):
if not ReserveStack.acceptsCards(self, from_stack, cards):
return False
# must be a King
return cards[0].rank == KING
def getHelp(self):
return _('Reserve. Only Kings are acceptable.')
class Batsford(Klondike):
def createGame(self, **layout):
kwdefault(layout, rows=10, max_rounds=1, playcards=22)
round_text = (layout['max_rounds'] > 1)
layout['round_text'] = round_text
l = Klondike.createGame(self, **layout)
s = self.s
x, y = l.XM, self.height - l.YS
s.reserves.append(Batsford_ReserveStack(x, y, self, max_cards=3))
self.setRegion(s.reserves, (-999, y - l.YM - l.CH/2, x + l.XS - l.CW/2, 999999), priority=1)
l.createText(s.reserves[0], "se")
if round_text:
l.createRoundText(self.s.talon, 'ne', dx=l.XS)
l.defaultStackGroups()
class BatsfordAgain(Batsford):
def createGame(self):
Batsford.createGame(self, max_rounds=2)
# ************************************************************************
# * Jumbo
# ************************************************************************
class Jumbo(Klondike):
def createGame(self):
l = Klondike.createGame(self, rows=9, max_rounds=2, round_text=True)
l.createRoundText(self.s.talon, 'ne', dx=l.XS)
def startGame(self, flip=0):
for i in range(9):
self.s.talon.dealRow(rows=self.s.rows[:i], flip=flip, frames=0)
self.startDealSample()
self.s.talon.dealRow()
self.s.talon.dealCards() # deal first card to WasteStack
class OpenJumbo(Jumbo):
def startGame(self):
Jumbo.startGame(self, flip=1)
# ************************************************************************
# * Stonewall
# * Flower Garden
# ************************************************************************
class Stonewall(Klondike):
Talon_Class = InitialDealTalonStack
RowStack_Class = AC_RowStack
DEAL = (0, 1, 0, 1, -1, 0, 1)
def createGame(self):
l = Klondike.createGame(self, rows=6, waste=0, max_rounds=1, texts=0)
s = self.s
h = max(self.height, l.YM+4*l.YS)
self.setSize(self.width + l.XM+4*l.XS, h)
for i in range(4):
for j in range(4):
x, y = self.width + (j-4)*l.XS, l.YM + i*l.YS
s.reserves.append(OpenStack(x, y, self, max_accept=0))
l.defaultStackGroups()
def startGame(self):
frames = 0
for flip in self.DEAL:
if flip < 0:
frames = -1
self.startDealSample()
else:
self.s.talon.dealRow(flip=flip, frames=frames)
self.s.talon.dealRow(rows=self.s.reserves)
class FlowerGarden(Stonewall):
RowStack_Class = StackWrapper(RK_RowStack, max_move=1)
Hint_Class = CautiousDefaultHint
DEAL = (1, 1, 1, 1, -1, 1, 1)
shallHighlightMatch = Game._shallHighlightMatch_RK
# ************************************************************************
# * King Albert
# * Raglan
# * Brigade
# * Queen Victoria
# ************************************************************************
class KingAlbert(Klondike):
Talon_Class = InitialDealTalonStack
RowStack_Class = StackWrapper(AC_RowStack, max_move=1)
Hint_Class = CautiousDefaultHint
ROWS = 9
RESERVES = (2, 2, 2, 1)
def createGame(self):
l = Klondike.createGame(self, max_rounds=1, rows=self.ROWS, waste=0, texts=0)
s = self.s
rw, rh = max(self.RESERVES), len(self.RESERVES)
h = max(self.height, l.YM+rh*l.YS)
self.setSize(self.width + 2*l.XM+rw*l.XS, h)
for i in range(rh):
for j in range(self.RESERVES[i]):
x, y = self.width + (j-rw)*l.XS, l.YM + i*l.YS
s.reserves.append(OpenStack(x, y, self, max_accept=0))
l.defaultStackGroups()
def startGame(self):
Klondike.startGame(self, flip=1, reverse=0)
self.s.talon.dealRow(rows=self.s.reserves)
class Raglan(KingAlbert):
RESERVES = (2, 2, 2)
def _shuffleHook(self, cards):
# move Aces to bottom of the Talon (i.e. last cards to be dealt)
return self._shuffleHookMoveToBottom(cards, lambda c: (c.rank == 0, c.suit))
def startGame(self):
for i in range(6):
self.s.talon.dealRow(rows=self.s.rows[i:], frames=0)
self.startDealSample()
self.s.talon.dealRow(rows=self.s.rows[6:])
self.s.talon.dealRow(rows=self.s.reserves)
self.s.talon.dealRow(rows=self.s.foundations)
class Brigade(Raglan):
RowStack_Class = StackWrapper(RK_RowStack, max_move=1)
ROWS = 7
RESERVES = (4, 4, 4, 1)
def startGame(self):
for i in range(4):
self.s.talon.dealRow(frames=0)
self.startDealSample()
self.s.talon.dealRow()
self.s.talon.dealRow(rows=self.s.reserves)
self.s.talon.dealRow(rows=self.s.foundations)
shallHighlightMatch = Game._shallHighlightMatch_RK
class QueenVictoria(KingAlbert):
RowStack_Class = AC_RowStack
# ************************************************************************
# * Jane
# * Agnes Bernauer
# ************************************************************************
class Jane_Talon(OpenTalonStack):
rightclickHandler = OpenStack.rightclickHandler
doubleclickHandler = OpenStack.doubleclickHandler
def canFlipCard(self):
return False
def canDealCards(self):
return len(self.cards) >= 2
def dealCards(self, sound=False):
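        # While more than two cards remain, deal one card to each reserve;
        # once exactly two remain, flip the top card, move it to the waste,
        # then flip the last talon card face up.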
c = 0
if len(self.cards) > 2:
c = self.dealRow(self.game.s.reserves, sound=sound)
if len(self.cards) == 2:
self.game.flipMove(self)
self.game.moveMove(1, self, self.game.s.waste, frames=4, shadow=0)
self.game.flipMove(self)
c = c + 1
return c
class Jane(Klondike):
Talon_Class = Jane_Talon
Foundation_Class = StackWrapper(SS_FoundationStack, mod=13, base_rank=NO_RANK, min_cards=1)
RowStack_Class = StackWrapper(AC_RowStack, mod=13, base_rank=NO_RANK)
def createGame(self, max_rounds=1, rows=7, reserves=7, playcards=16):
l, s = Layout(self), self.s
maxrows = max(rows, 7)
w = l.XM+maxrows*l.XS+l.XM+2*l.XS
h = max(l.YM+2*l.YS+playcards*l.YOFFSET+l.TEXT_HEIGHT, l.YM+4*l.YS)
self.setSize(w, h)
x, y = l.XM, l.YM
s.talon = self.Talon_Class(x, y, self, max_rounds=max_rounds)
l.createText(s.talon, 's')
x += l.XS
s.waste = WasteStack(x, y, self)
x += 2*l.XS
for i in range(4):
s.foundations.append(self.Foundation_Class(x, y, self, suit=i))
x += l.XS
x, y = l.XM, l.YM+l.YS+l.TEXT_HEIGHT
for i in range(rows):
s.rows.append(self.RowStack_Class(x, y, self))
x += l.XS
x0, y = self.width - 2*l.XS, l.YM
for i in range(reserves):
x = x0 + ((i+1) & 1) * l.XS
stack = OpenStack(x, y, self, max_accept=0)
stack.CARD_YOFFSET = l.YM / 3
s.reserves.append(stack)
y = y + l.YS / 2
# not needed, as no cards may be placed on the reserves
##self.setRegion(s.reserves, (x0-l.XM/2, -999, 999999, 999999), priority=1)
l.defaultStackGroups()
self.sg.dropstacks.append(s.talon)
def startGame(self, flip=0, reverse=1):
for i in range(1, len(self.s.rows)):
self.s.talon.dealRow(rows=self.s.rows[i:], flip=flip, frames=0, reverse=reverse)
self.startDealSample()
self.s.talon.dealRow(reverse=reverse)
self.s.talon.dealRow(rows=self.s.reserves)
c = self.s.talon.dealSingleBaseCard()
# update base rank of row stacks
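        # (e.g. if the dealt base card is a Seven, each row stack will first
        # accept a Six)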
cap = Struct(base_rank=(c.rank - 1) % 13)
for s in self.s.rows:
s.cap.update(cap.__dict__)
self.saveinfo.stack_caps.append((s.id, cap))
shallHighlightMatch = Game._shallHighlightMatch_ACW
def _autoDeal(self, sound=True):
return 0
class AgnesBernauer_Talon(DealRowTalonStack):
def dealCards(self, sound=False):
return self.dealRowAvail(self.game.s.reserves, sound=sound)
class AgnesBernauer(Jane):
Talon_Class = AgnesBernauer_Talon
Foundation_Class = StackWrapper(SS_FoundationStack, mod=13, base_rank=NO_RANK, max_move=0)
def startGame(self):
Jane.startGame(self, flip=1)
# ************************************************************************
# * Senate
# ************************************************************************
class Senate(Jane):
def createGame(self, rows=4):
playcards = 10
l, s = Layout(self), self.s
self.setSize(l.XM+(rows+7)*l.XS, l.YM+2*(l.YS+playcards*l.YOFFSET))
x, y = l.XM, l.YM
for i in range(rows):
s.rows.append(SS_RowStack(x, y, self))
x += l.XS
for y in l.YM, l.YM+l.YS+playcards*l.YOFFSET:
x = l.XM+rows*l.XS+l.XS/2
for i in range(4):
stack = OpenStack(x, y, self, max_accept=0)
stack.CARD_XOFFSET, stack.CARD_YOFFSET = 0, l.YOFFSET
s.reserves.append(stack)
x += l.XS
x = l.XM+(rows+5)*l.XS
for i in range(2):
y = l.YM+l.YS
for j in range(4):
s.foundations.append(SS_FoundationStack(x, y, self, suit=j))
y += l.YS
x += l.XS
x, y = self.width-l.XS, l.YM
s.talon = AgnesBernauer_Talon(x, y, self)
l.createText(s.talon, 'nw')
l.defaultStackGroups()
def startGame(self):
self.s.talon.dealRow(rows=self.s.foundations, frames=0)
self.startDealSample()
self.s.talon.dealRow(rows=self.s.reserves)
self.s.talon.dealRow()
def _shuffleHook(self, cards):
# move Aces to top of the Talon (i.e. first cards to be dealt)
return self._shuffleHookMoveToTop(cards,
lambda c: (c.rank == ACE, (c.deck, c.suit)))
shallHighlightMatch = Game._shallHighlightMatch_SS
class SenatePlus(Senate):
def createGame(self):
Senate.createGame(self, rows=5)
# ************************************************************************
# * Phoenix
# * Arizona
# ************************************************************************
class Phoenix(Klondike):
Hint_Class = CautiousDefaultHint
RowStack_Class = AC_RowStack
def createGame(self):
l, s = Layout(self), self.s
self.setSize(l.XM + 10*l.XS, l.YM + 4*(l.YS+l.YM))
for i in range(2):
x = l.XM + i*l.XS
for j in range(4):
y = l.YM + j*(l.YS+l.YM)
s.reserves.append(OpenStack(x, y, self, max_accept=0))
for i in range(2):
x = l.XM + (8+i)*l.XS
for j in range(4):
y = l.YM + j*(l.YS+l.YM)
s.reserves.append(OpenStack(x, y, self, max_accept=0))
for i in range(4):
s.foundations.append(SS_FoundationStack(l.XM+(3+i)*l.XS, l.YM, self, i))
for i in range(6):
s.rows.append(self.RowStack_Class(l.XM+(2+i)*l.XS, l.YM+l.YS, self))
s.talon = InitialDealTalonStack(l.XM+int(4.5*l.XS), l.YM+3*(l.YS+l.YM), self)
l.defaultStackGroups()
def startGame(self):
for i in range(6):
self.s.talon.dealRow(frames=0)
self.startDealSample()
self.s.talon.dealRow(rows=self.s.reserves)
class Arizona(Phoenix):
RowStack_Class = RK_RowStack
shallHighlightMatch = Game._shallHighlightMatch_RK
# ************************************************************************
# * Lanes
# ************************************************************************
class Lanes(Klondike):
Hint_Class = CautiousDefaultHint
Foundation_Class = StackWrapper(SS_FoundationStack, max_move=0)
RowStack_Class = StackWrapper(AC_RowStack, base_rank=ANY_RANK, max_move=1)
def createGame(self):
l = Klondike.createGame(self, rows=6, max_rounds=2, round_text=True)
l.createRoundText(self.s.talon, 'ne', dx=l.XS)
def _shuffleHook(self, cards):
# move Aces to top of the Talon (i.e. first cards to be dealt)
return self._shuffleHookMoveToTop(cards,
lambda c: (c.rank == ACE, c.suit))
def startGame(self):
self.s.talon.dealRow(rows=self.s.foundations, frames=0)
for i in range(2):
self.s.talon.dealRow(frames=0)
self.startDealSample()
self.s.talon.dealRow()
self.s.talon.dealCards() # deal first card to WasteStack
# ************************************************************************
# * Thirty Six
# ************************************************************************
class ThirtySix(Klondike):
Foundation_Class = StackWrapper(SS_FoundationStack, max_move=0)
RowStack_Class = StackWrapper(RK_RowStack, base_rank=ANY_RANK)
def createGame(self):
Klondike.createGame(self, rows=6, max_rounds=1)
def _fillOne(self):
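        # Move at most one exposed row card onto a matching foundation;
        # return 1 if a move was made, else 0.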
for r in self.s.rows:
if r.cards:
c = r.cards[-1]
for f in self.s.foundations:
if f.acceptsCards(r, [c]):
self.moveMove(1, r, f, frames=4, shadow=0)
return 1
return 0
def startGame(self):
self.startDealSample()
for i in range(6):
self.s.talon.dealRow()
while True:
if not self._fillOne():
break
self.s.talon.dealCards() # deal first card to WasteStack
shallHighlightMatch = Game._shallHighlightMatch_RK
# ************************************************************************
# * Q.C.
# ************************************************************************
class Q_C_(Klondike):
Hint_Class = CautiousDefaultHint
Foundation_Class = StackWrapper(SS_FoundationStack, max_move=0)
RowStack_Class = StackWrapper(SS_RowStack, base_rank=ANY_RANK, max_move=1)
def createGame(self):
l = Klondike.createGame(self, rows=6, max_rounds=2)
l.createRoundText(self.s.talon, 'sss')
def startGame(self):
for i in range(3):
self.s.talon.dealRow(frames=0)
self.startDealSample()
self.s.talon.dealRow()
while self.s.talon.cards:
self.s.talon.dealCards() # deal first card to WasteStack
if not self.fillWaste():
break
def fillWaste(self):
waste = self.s.waste
if waste.cards:
c = waste.cards[-1]
for f in self.s.foundations:
if f.acceptsCards(self.s.waste, [c]):
waste.moveMove(1, f)
return True
return False
def fillStack(self, stack=None):
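        # Keep feeding the foundations from the waste; when a row empties,
        # refill it from the waste, turning cards from the talon as needed.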
waste = self.s.waste
while True:
if not self.fillWaste():
break
if stack in self.s.rows and not stack.cards:
if not waste.cards:
while self.s.talon.cards:
self.s.talon.dealCards()
if not self.fillWaste():
break
if waste.cards:
waste.moveMove(1, stack)
shallHighlightMatch = Game._shallHighlightMatch_SS
# ************************************************************************
# * Northwest Territory
# * Artic Garden
# ************************************************************************
class NorthwestTerritory(KingAlbert):
RowStack_Class = StackWrapper(AC_RowStack, base_rank=KING)
RESERVES = (4, 4, 4, 4)
ROWS = 8
def startGame(self):
Klondike.startGame(self, flip=0, reverse=0)
self.s.talon.dealRow(rows=self.s.reserves)
class ArticGarden(NorthwestTerritory):
def startGame(self):
Klondike.startGame(self, flip=1, reverse=0)
self.s.talon.dealRow(rows=self.s.reserves)
# ************************************************************************
# * Aunt Mary
# ************************************************************************
class AuntMary(Klondike):
def createGame(self):
Klondike.createGame(self, rows=6, max_rounds=1)
def startGame(self):
for i in range(5):
j = i+1
self.s.talon.dealRow(rows=self.s.rows[:j], frames=0, flip=1)
self.s.talon.dealRow(rows=self.s.rows[j:], frames=0, flip=0)
self.startDealSample()
self.s.talon.dealRow()
self.s.talon.dealCards()
# ************************************************************************
# * Double Dot
# ************************************************************************
class DoubleDot(Klondike):
Talon_Class = DealRowTalonStack
RowStack_Class = StackWrapper(RK_RowStack, dir=-2, mod=13)
Foundation_Class = StackWrapper(SS_FoundationStack, dir=2, mod=13)
def createGame(self):
Klondike.createGame(self, max_rounds=1, rows=8, waste=0)
def _shuffleHook(self, cards):
return self._shuffleHookMoveToTop(cards,
lambda c: ((c.rank == ACE and c.suit in (0,1)) or
(c.rank == 1 and c.suit in (2,3)), c.suit))
def startGame(self):
self.s.talon.dealRow(rows=self.s.foundations, frames=0)
self.startDealSample()
self.s.talon.dealRow()
    # A hand-rolled highlight test would be:
    #     def shallHighlightMatch(self, stack1, card1, stack2, card2):
    #         return abs(card1.rank-card2.rank) == 2
    # but the stock wrapped-rank helper below shadows it, so only the
    # helper is kept.
    shallHighlightMatch = Game._shallHighlightMatch_RKW
# ************************************************************************
# * Seven Devils
# ************************************************************************
class SevenDevils_RowStack(AC_RowStack):
def acceptsCards(self, from_stack, cards):
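        # standard alternate-color building, except that cards may never be
        # taken from the reserves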
if not AC_RowStack.acceptsCards(self, from_stack, cards):
return False
return not from_stack in self.game.s.reserves
class SevenDevils(Klondike):
Hint_Class = CautiousDefaultHint
RowStack_Class = StackWrapper(SevenDevils_RowStack, max_move=1)
def createGame(self):
l, s = Layout(self), self.s
self.setSize(l.XM + 10*l.XS, l.YM+3*l.YS+12*l.YOFFSET)
x, y = l.XM, l.YM
for i in range(8):
            s.foundations.append(SS_FoundationStack(x, y, self, suit=i//2))
x += l.XS
x, y = l.XM+l.XS/2, l.YM+l.YS
for i in range(7):
s.rows.append(self.RowStack_Class(x, y, self))
x += l.XS
x0, y = self.width - 2*l.XS, l.YM
for i in range(7):
x = x0 + ((i+1) & 1) * l.XS
s.reserves.append(OpenStack(x, y, self, max_accept=0))
y = y + l.YS / 2
x, y = l.XM, self.height-l.YS
s.talon = WasteTalonStack(x, y, self, max_rounds=1)
l.createText(s.talon, 'n')
x += l.XS
s.waste = WasteStack(x, y, self)
l.createText(s.waste, 'n')
l.defaultStackGroups()
def startGame(self, flip=0, reverse=1):
Klondike.startGame(self)
self.s.talon.dealRow(rows=self.s.reserves)
# ************************************************************************
# * Moving Left
# * Souter
# ************************************************************************
class MovingLeft(Klondike):
def createGame(self):
Klondike.createGame(self, max_rounds=1, rows=10, playcards=24)
def fillStack(self, stack):
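        # an emptied row is refilled with the entire face-up pile of the row
        # to its right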
if not stack.cards:
old_state = self.enterState(self.S_FILL)
if stack in self.s.rows:
i = list(self.s.rows).index(stack)
if i < len(self.s.rows)-1:
from_stack = self.s.rows[i+1]
pile = from_stack.getPile()
if pile:
from_stack.moveMove(len(pile), stack)
self.leaveState(old_state)
class Souter(MovingLeft):
def createGame(self):
l = Klondike.createGame(self, max_rounds=2, rows=10,
playcards=24, round_text=True)
l.createRoundText(self.s.talon, 'ne', dx=l.XS)
# ************************************************************************
# * Big Forty
# * Ali Baba
# * Cassim
# ************************************************************************
class BigForty(Klondike):
RowStack_Class = SS_RowStack
def createGame(self):
Klondike.createGame(self, rows=10)
def startGame(self):
self.s.talon.dealRow(frames=0)
self.s.talon.dealRow(frames=0)
self.s.talon.dealRow(frames=0)
self.startDealSample()
self.s.talon.dealRow()
self.s.talon.dealCards()
shallHighlightMatch = Game._shallHighlightMatch_SS
class AliBaba(BigForty):
def _shuffleHook(self, cards):
# move Aces to top of the Talon (i.e. first cards to be dealt)
return self._shuffleHookMoveToTop(cards,
lambda c: (c.rank == ACE, c.suit))
def startGame(self):
self.s.talon.dealRow(rows=self.s.foundations, frames=0)
BigForty.startGame(self)
class Cassim(AliBaba):
def createGame(self):
Klondike.createGame(self, rows=7)
# ************************************************************************
# * Saratoga
# ************************************************************************
class Saratoga(Klondike):
def createGame(self):
Klondike.createGame(self, num_deal=3)
def startGame(self):
Klondike.startGame(self, flip=1)
# ************************************************************************
# * Whitehorse
# ************************************************************************
class Whitehorse(Klondike):
def createGame(self):
Klondike.createGame(self, num_deal=3)
def startGame(self):
self.startDealSample()
self.s.talon.dealRow()
self.s.talon.dealCards()
def fillStack(self, stack):
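        # empty rows are refilled from the waste, turning a card from the
        # talon first if the waste is empty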
if not stack.cards:
old_state = self.enterState(self.S_FILL)
if stack in self.s.rows:
if not self.s.waste.cards:
self.s.talon.dealCards()
if self.s.waste.cards:
self.s.waste.moveMove(1, stack)
self.leaveState(old_state)
# ************************************************************************
# * Boost
# ************************************************************************
class Boost(Klondike):
def createGame(self):
l = Klondike.createGame(self, rows=4, max_rounds=3, round_text=True)
l.createRoundText(self.s.talon, 'ne', dx=l.XS)
# ************************************************************************
# * Gold Rush
# ************************************************************************
class GoldRush(Klondike):
Talon_Class = CanfieldRush_Talon
def createGame(self):
l = Klondike.createGame(self, max_rounds=3, round_text=True)
l.createRoundText(self.s.talon, 'ne', dx=l.XS)
# ************************************************************************
# * Gold Mine
# ************************************************************************
class GoldMine_RowStack(AC_RowStack):
getBottomImage = Stack._getReserveBottomImage
class GoldMine(Klondike):
RowStack_Class = GoldMine_RowStack
def createGame(self):
Klondike.createGame(self, max_rounds=1, num_deal=3)
def startGame(self):
self.startDealSample()
self.s.talon.dealCards()
# ************************************************************************
# * Lucky Thirteen
# * Lucky Piles
# ************************************************************************
class LuckyThirteen(Game):
Hint_Class = CautiousDefaultHint
RowStack_Class = StackWrapper(RK_RowStack, base_rank=NO_RANK)
def createGame(self, xoffset=0, playcards=0):
l, s = Layout(self), self.s
if xoffset:
xoffset = l.XOFFSET
w0 = l.XS+playcards*l.XOFFSET
self.setSize(l.XM + 5*w0, l.YM+4*l.YS)
x, y = l.XM, l.YM+l.YS
for i in range(5):
stack = self.RowStack_Class(x, y, self, max_move=1)
s.rows.append(stack)
stack.CARD_XOFFSET = xoffset
stack.CARD_YOFFSET = 0
x += w0
x, y = l.XM+w0, l.YM+2*l.YS
for i in range(3):
stack = self.RowStack_Class(x, y, self, max_move=1)
s.rows.append(stack)
stack.CARD_XOFFSET = xoffset
stack.CARD_YOFFSET = 0
x += w0
x, y = l.XM, l.YM+3*l.YS
for i in range(5):
stack = self.RowStack_Class(x, y, self, max_move=1)
s.rows.append(stack)
stack.CARD_XOFFSET = xoffset
stack.CARD_YOFFSET = 0
x += w0
x, y = (self.width-4*l.XS)/2, l.YM
for i in range(4):
s.foundations.append(SS_FoundationStack(x, y, self, suit=i))
x += l.XS
x, y = l.XM, self.height-l.YS
s.talon = InitialDealTalonStack(x, y, self, max_rounds=1)
l.defaultStackGroups()
def startGame(self):
self.s.talon.dealRow(frames=0)
self.s.talon.dealRow(frames=0)
self.s.talon.dealRow(frames=0)
self.startDealSample()
self.s.talon.dealRow()
shallHighlightMatch = Game._shallHighlightMatch_RK
class LuckyPiles(LuckyThirteen):
RowStack_Class = StackWrapper(UD_SS_RowStack, base_rank=KING)
def createGame(self):
LuckyThirteen.createGame(self, xoffset=1, playcards=7)
shallHighlightMatch = Game._shallHighlightMatch_SS
# ************************************************************************
# * Legion
# ************************************************************************
class Legion(Klondike):
def createGame(self):
Klondike.createGame(self, max_rounds=1, rows=8)
def startGame(self):
self.startDealSample()
self.s.talon.dealRow()
for i in (1,2,3):
self.s.talon.dealRow(rows=self.s.rows[i:-i], flip=0)
self.s.talon.dealRow(rows=self.s.rows[i:-i])
self.s.talon.dealCards()
# ************************************************************************
# * Big Bertha
# ************************************************************************
class BigBertha(Game):
def createGame(self):
l, s = Layout(self), self.s
self.setSize(l.XM+15*l.XS, l.YM+3*l.YS+15*l.YOFFSET)
x, y = l.XM, l.YM
s.talon = InitialDealTalonStack(x, y, self)
x, y = l.XM+3.5*l.XS, l.YM
for i in range(8):
s.foundations.append(SS_FoundationStack(x, y, self,
suit=i%4, max_cards=12))
x += l.XS
x, y = l.XM, l.YM+l.YS
for i in range(15):
s.rows.append(AC_RowStack(x, y, self))
x += l.XS
x, y = l.XM, self.height-l.YS
for i in range(14):
s.reserves.append(OpenStack(x, y, self, max_accept=0))
x += l.XS
s.foundations.append(RK_FoundationStack(x, y, self, suit=ANY_SUIT,
base_rank=KING, dir=0, max_cards=8))
l.defaultStackGroups()
def startGame(self):
for i in range(5):
self.s.talon.dealRow(frames=0)
self.startDealSample()
self.s.talon.dealRow()
self.s.talon.dealRow(rows=self.s.reserves)
shallHighlightMatch = Game._shallHighlightMatch_AC
# ************************************************************************
# * Athena
# ************************************************************************
class Athena(Klondike):
def startGame(self):
self.s.talon.dealRow(frames=0, flip=0)
self.s.talon.dealRow(frames=0)
self.s.talon.dealRow(frames=0, flip=0)
self.startDealSample()
self.s.talon.dealRow()
self.s.talon.dealCards()
# ************************************************************************
# * Kingsley
# ************************************************************************
class Kingsley(Klondike):
Foundation_Class = StackWrapper(SS_FoundationStack, base_rank=KING, dir=-1)
RowStack_Class = StackWrapper(KingAC_RowStack, base_rank=ACE, dir=1)
def createGame(self):
Klondike.createGame(self, max_rounds=1)
# ************************************************************************
# * Scarp
# ************************************************************************
class Scarp(Klondike):
Talon_Class = DealRowTalonStack
RowStack_Class = AC_RowStack
def createGame(self):
Klondike.createGame(self, max_rounds=1, rows=13, waste=0, playcards=28)
def startGame(self):
Klondike.startGame(self, flip=1)
# ************************************************************************
# * Eight Sages
# ************************************************************************
class EightSages_Row(AC_RowStack):
def acceptsCards(self, from_stack, cards):
if not AC_RowStack.acceptsCards(self, from_stack, cards):
return False
return from_stack is self.game.s.waste
class EightSages(Klondike):
RowStack_Class = EightSages_Row
def createGame(self):
l = Klondike.createGame(self, max_rounds=2, rows=8,
playcards=12, round_text=True)
l.createRoundText(self.s.talon, 'ne', dx=l.XS)
def startGame(self):
self.startDealSample()
self.s.talon.dealRow()
self.s.talon.dealCards()
# register the game
registerGame(GameInfo(2, Klondike, "Klondike",
GI.GT_KLONDIKE, 1, -1, GI.SL_BALANCED))
registerGame(GameInfo(61, CasinoKlondike, "Casino Klondike",
GI.GT_KLONDIKE | GI.GT_SCORE, 1, 2, GI.SL_BALANCED))
registerGame(GameInfo(129, VegasKlondike, "Vegas Klondike",
GI.GT_KLONDIKE | GI.GT_SCORE, 1, 0, GI.SL_BALANCED))
registerGame(GameInfo(18, KlondikeByThrees, "Klondike by Threes",
GI.GT_KLONDIKE, 1, -1, GI.SL_MOSTLY_LUCK))
registerGame(GameInfo(58, ThumbAndPouch, "Thumb and Pouch",
GI.GT_KLONDIKE, 1, 0, GI.SL_MOSTLY_LUCK))
registerGame(GameInfo(67, Whitehead, "Whitehead",
GI.GT_KLONDIKE, 1, 0, GI.SL_MOSTLY_SKILL))
registerGame(GameInfo(39, SmallHarp, "Small Harp",
GI.GT_KLONDIKE, 1, -1, GI.SL_BALANCED,
altnames=("Die kleine Harfe",) ))
registerGame(GameInfo(66, Eastcliff, "Eastcliff",
GI.GT_KLONDIKE, 1, 0, GI.SL_BALANCED))
registerGame(GameInfo(224, Easthaven, "Easthaven",
GI.GT_GYPSY, 1, 0, GI.SL_MOSTLY_LUCK))
registerGame(GameInfo(33, Westcliff, "Westcliff",
GI.GT_KLONDIKE, 1, 0, GI.SL_MOSTLY_LUCK))
registerGame(GameInfo(225, Westhaven, "Westhaven",
GI.GT_GYPSY, 1, 0, GI.SL_BALANCED))
registerGame(GameInfo(107, PasSeul, "Pas Seul",
GI.GT_KLONDIKE, 1, 0, GI.SL_BALANCED))
registerGame(GameInfo(81, BlindAlleys, "Blind Alleys",
GI.GT_KLONDIKE, 1, 1, GI.SL_MOSTLY_LUCK))
registerGame(GameInfo(215, Somerset, "Somerset",
GI.GT_BELEAGUERED_CASTLE | GI.GT_OPEN, 1, 0, GI.SL_MOSTLY_SKILL))
registerGame(GameInfo(231, Canister, "Canister",
GI.GT_BELEAGUERED_CASTLE | GI.GT_OPEN, 1, 0, GI.SL_MOSTLY_SKILL))
registerGame(GameInfo(229, AgnesSorel, "Agnes Sorel",
GI.GT_GYPSY, 1, 0, GI.SL_MOSTLY_LUCK))
registerGame(GameInfo(4, EightTimesEight, "8 x 8",
GI.GT_KLONDIKE, 2, -1, GI.SL_BALANCED))
registerGame(GameInfo(127, AchtmalAcht, "Eight Times Eight",
GI.GT_KLONDIKE, 2, 2, GI.SL_BALANCED,
altnames=("Achtmal Acht",) ))
registerGame(GameInfo(133, Batsford, "Batsford",
GI.GT_KLONDIKE, 2, 0, GI.SL_BALANCED))
registerGame(GameInfo(221, Stonewall, "Stonewall",
GI.GT_RAGLAN, 1, 0, GI.SL_MOSTLY_LUCK))
registerGame(GameInfo(222, FlowerGarden, "Flower Garden",
GI.GT_RAGLAN | GI.GT_OPEN, 1, 0, GI.SL_MOSTLY_SKILL,
altnames=("The Bouquet", "The Garden",) ))
registerGame(GameInfo(233, KingAlbert, "King Albert",
GI.GT_RAGLAN | GI.GT_OPEN, 1, 0, GI.SL_MOSTLY_SKILL,
altnames=("Idiot's Delight",) ))
registerGame(GameInfo(232, Raglan, "Raglan",
GI.GT_RAGLAN | GI.GT_OPEN, 1, 0, GI.SL_MOSTLY_SKILL))
registerGame(GameInfo(223, Brigade, "Brigade",
GI.GT_RAGLAN | GI.GT_OPEN, 1, 0, GI.SL_MOSTLY_SKILL))
registerGame(GameInfo(230, Jane, "Jane",
GI.GT_RAGLAN, 1, 0, GI.SL_BALANCED))
registerGame(GameInfo(236, AgnesBernauer, "Agnes Bernauer",
GI.GT_RAGLAN, 1, 0, GI.SL_BALANCED))
registerGame(GameInfo(263, Phoenix, "Phoenix",
GI.GT_RAGLAN | GI.GT_OPEN, 1, 0, GI.SL_MOSTLY_SKILL))
registerGame(GameInfo(283, Jumbo, "Jumbo",
GI.GT_KLONDIKE, 2, 1, GI.SL_BALANCED))
registerGame(GameInfo(333, OpenJumbo, "Open Jumbo",
GI.GT_KLONDIKE, 2, 1, GI.SL_BALANCED))
registerGame(GameInfo(326, Lanes, "Lanes",
GI.GT_KLONDIKE, 1, 1, GI.SL_BALANCED))
registerGame(GameInfo(327, ThirtySix, "Thirty Six",
GI.GT_KLONDIKE, 1, 0, GI.SL_BALANCED))
registerGame(GameInfo(350, Q_C_, "Q.C.",
GI.GT_KLONDIKE, 2, 1, GI.SL_BALANCED))
registerGame(GameInfo(361, NorthwestTerritory, "Northwest Territory",
GI.GT_RAGLAN, 1, 0, GI.SL_BALANCED))
registerGame(GameInfo(362, Morehead, "Morehead",
GI.GT_BELEAGUERED_CASTLE | GI.GT_OPEN, 1, 0, GI.SL_MOSTLY_SKILL))
registerGame(GameInfo(388, Senate, "Senate",
GI.GT_RAGLAN, 2, 0, GI.SL_BALANCED))
registerGame(GameInfo(389, SenatePlus, "Senate +",
GI.GT_RAGLAN, 2, 0, GI.SL_BALANCED))
registerGame(GameInfo(390, Arizona, "Arizona",
GI.GT_RAGLAN | GI.GT_OPEN, 1, 0, GI.SL_MOSTLY_SKILL))
registerGame(GameInfo(407, AuntMary, "Aunt Mary",
GI.GT_KLONDIKE, 1, 0, GI.SL_BALANCED))
registerGame(GameInfo(420, DoubleDot, "Double Dot",
GI.GT_KLONDIKE, 1, 0, GI.SL_BALANCED))
registerGame(GameInfo(434, SevenDevils, "Seven Devils",
GI.GT_RAGLAN, 2, 0, GI.SL_MOSTLY_LUCK))
registerGame(GameInfo(452, DoubleEasthaven, "Double Easthaven",
GI.GT_GYPSY, 2, 0, GI.SL_MOSTLY_SKILL))
registerGame(GameInfo(453, TripleEasthaven, "Triple Easthaven",
GI.GT_GYPSY, 3, 0, GI.SL_MOSTLY_SKILL))
registerGame(GameInfo(470, MovingLeft, "Moving Left",
GI.GT_KLONDIKE, 2, 0, GI.SL_MOSTLY_SKILL))
registerGame(GameInfo(471, Souter, "Souter",
GI.GT_KLONDIKE, 2, 1, GI.SL_BALANCED))
registerGame(GameInfo(473, BigForty, "Big Forty",
GI.GT_KLONDIKE, 1, -1, GI.SL_BALANCED))
registerGame(GameInfo(474, AliBaba, "Ali Baba",
GI.GT_KLONDIKE, 1, -1, GI.SL_BALANCED))
registerGame(GameInfo(475, Cassim, "Cassim",
GI.GT_KLONDIKE, 1, -1, GI.SL_BALANCED))
registerGame(GameInfo(479, Saratoga, "Saratoga",
GI.GT_KLONDIKE, 1, -1, GI.SL_BALANCED))
registerGame(GameInfo(491, Whitehorse, "Whitehorse",
GI.GT_KLONDIKE, 1, -1, GI.SL_BALANCED))
registerGame(GameInfo(518, Boost, "Boost",
GI.GT_KLONDIKE | GI.GT_ORIGINAL, 1, 2, GI.SL_BALANCED))
registerGame(GameInfo(522, ArticGarden, "Artic Garden",
GI.GT_RAGLAN, 1, 0, GI.SL_MOSTLY_SKILL))
registerGame(GameInfo(532, GoldRush, "Gold Rush",
GI.GT_KLONDIKE, 1, 2, GI.SL_BALANCED))
registerGame(GameInfo(539, Usk, "Usk",
GI.GT_KLONDIKE, 1, 1, GI.SL_BALANCED))
registerGame(GameInfo(541, BatsfordAgain, "Batsford Again",
GI.GT_KLONDIKE, 2, 1, GI.SL_BALANCED))
registerGame(GameInfo(572, GoldMine, "Gold Mine",
GI.GT_NUMERICA, 1, 0, GI.SL_MOSTLY_SKILL))
registerGame(GameInfo(585, LuckyThirteen, "Lucky Thirteen",
GI.GT_1DECK_TYPE, 1, 0, GI.SL_MOSTLY_LUCK))
registerGame(GameInfo(586, LuckyPiles, "Lucky Piles",
GI.GT_FAN_TYPE, 1, 0, GI.SL_MOSTLY_SKILL))
registerGame(GameInfo(601, AmericanCanister, "American Canister",
GI.GT_BELEAGUERED_CASTLE | GI.GT_OPEN, 1, 0, GI.SL_MOSTLY_SKILL))
registerGame(GameInfo(602, BritishCanister, "British Canister",
GI.GT_BELEAGUERED_CASTLE | GI.GT_OPEN, 1, 0, GI.SL_MOSTLY_SKILL))
registerGame(GameInfo(607, Legion, "Legion",
GI.GT_KLONDIKE, 1, 0, GI.SL_BALANCED))
registerGame(GameInfo(627, QueenVictoria, "Queen Victoria",
GI.GT_RAGLAN | GI.GT_OPEN, 1, 0, GI.SL_MOSTLY_SKILL))
registerGame(GameInfo(630, BigBertha, "Big Bertha",
GI.GT_RAGLAN | GI.GT_OPEN, 2, 0, GI.SL_MOSTLY_SKILL))
registerGame(GameInfo(633, Athena, "Athena",
GI.GT_KLONDIKE, 1, -1, GI.SL_BALANCED))
registerGame(GameInfo(634, Chinaman, "Chinaman",
GI.GT_KLONDIKE, 1, 1, GI.SL_BALANCED))
registerGame(GameInfo(651, EightByEight, "Eight by Eight",
GI.GT_KLONDIKE, 2, 2, GI.SL_BALANCED))
registerGame(GameInfo(667, Kingsley, "Kingsley",
GI.GT_KLONDIKE, 1, 0, GI.SL_MOSTLY_LUCK))
registerGame(GameInfo(669, Scarp, "Scarp",
GI.GT_GYPSY | GI.GT_ORIGINAL, 3, 0, GI.SL_MOSTLY_SKILL))
registerGame(GameInfo(726, EightSages, "Eight Sages",
GI.GT_KLONDIKE, 2, 1, GI.SL_MOSTLY_LUCK))
| TrevorLowing/PyGames | pysollib/games/klondike.py | Python | gpl-2.0 | 52,598 | ["CASINO"] | cbae46a210a1f95301bbffdbf82b2dc287b3d0d9bd00045e5bac2ead1934de08 |
# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class RBiobase(RPackage):
"""Biobase: Base functions for Bioconductor
Functions that are needed by many other packages or which replace R
functions."""
homepage = "https://bioconductor.org/packages/Biobase"
git = "https://git.bioconductor.org/packages/Biobase.git"
version('2.50.0', commit='9927f90d0676382f2f99e099d8d2c8e2e6f1b4de')
version('2.44.0', commit='bde2077f66047986297ec35a688751cdce150dd3')
version('2.42.0', commit='3e5bd466b99e3cc4af1b0c3b32687fa56d6f8e4d')
version('2.40.0', commit='6555edbbcb8a04185ef402bfdea7ed8ac72513a5')
version('2.38.0', commit='83f89829e0278ac014b0bc6664e621ac147ba424')
version('2.36.2', commit='15f50912f3fa08ccb15c33b7baebe6b8a59ce075')
depends_on('r@2.10:', type=('build', 'run'))
depends_on('r-biocgenerics@0.3.2:', type=('build', 'run'))
depends_on('r-biocgenerics@0.27.1:', when='@2.42.0:', type=('build', 'run'))
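
# Usage note (not part of the package recipe): with Spack on the PATH, a
# pinned version of this package would typically be installed with the
# following hypothetical shell command:
#
#   spack install r-biobase@2.50.0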
| LLNL/spack | var/spack/repos/builtin/packages/r-biobase/package.py | Python | lgpl-2.1 | 1,151 | ["Bioconductor"] | ce53428b682555d2bc28c88c3468e9b0efab5c52d7afc1b4105da4013bc968d8 |
# -*- coding: utf-8 -*-
#
#
# TheVirtualBrain-Scientific Package. This package holds all simulators, and
# analysers necessary to run brain-simulations. You can use it stand alone or
# in conjunction with TheVirtualBrain-Framework Package. See content of the
# documentation-folder for more details. See also http://www.thevirtualbrain.org
#
# (c) 2012-2013, Baycrest Centre for Geriatric Care ("Baycrest")
#
# This program is free software; you can redistribute it and/or modify it under
# the terms of the GNU General Public License version 2 as published by the Free
# Software Foundation. This program is distributed in the hope that it will be
# useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
# License for more details. You should have received a copy of the GNU General
# Public License along with this program; if not, you can download it here
# http://www.gnu.org/licenses/old-licenses/gpl-2.0
#
#
# CITATION:
# When using The Virtual Brain for scientific publications, please cite it as follows:
#
# Paula Sanz Leon, Stuart A. Knock, M. Marmaduke Woodman, Lia Domide,
# Jochen Mersmann, Anthony R. McIntosh, Viktor Jirsa (2013)
# The Virtual Brain: a simulator of primate brain network dynamics.
# Frontiers in Neuroinformatics (7:10. doi: 10.3389/fninf.2013.00010)
"""
Oscillator models.
"""
from .base import Model, ModelNumbaDfun, LOG, numpy, basic, arrays
import numexpr
from numba import guvectorize, float64
class Generic2dOscillator(ModelNumbaDfun):
r"""
The Generic2dOscillator model is a generic dynamic system with two state
variables. The dynamic equations of this model are composed of two ordinary
differential equations comprising two nullclines. The first nullcline is a
cubic function as it is found in most neuron and population models; the
second nullcline is arbitrarily configurable as a polynomial function up to
    second order. Manipulating the latter nullcline's parameters makes it
    possible to generate a wide range of different behaviours.
Equations:
.. math::
\dot{V} &= d \, \tau (-f V^3 + e V^2 + g V + \alpha W + \gamma I), \\
\dot{W} &= \dfrac{d}{\tau}\,\,(c V^2 + b V - \beta W + a),
See:
.. [FH_1961] FitzHugh, R., *Impulses and physiological states in theoretical
models of nerve membrane*, Biophysical Journal 1: 445, 1961.
    .. [Nagumo_1962] Nagumo et al., *An Active Pulse Transmission Line Simulating
Nerve Axon*, Proceedings of the IRE 50: 2061, 1962.
.. [SJ_2011] Stefanescu, R., Jirsa, V.K. *Reduced representations of
heterogeneous mixed neural networks with synaptic coupling*.
Physical Review E, 83, 2011.
.. [SJ_2010] Jirsa VK, Stefanescu R. *Neural population modes capture
biologically realistic large-scale network dynamics*. Bulletin of
Mathematical Biology, 2010.
.. [SJ_2008_a] Stefanescu, R., Jirsa, V.K. *A low dimensional description
of globally coupled heterogeneous neural networks of excitatory and
    inhibitory neurons*. PLoS Computational Biology, 4(11), 2008.
    The model's (:math:`V`, :math:`W`) time series, phase plane and nullclines
    can be seen in the figure below.
    The model with its default parameters exhibits FitzHugh-Nagumo-like dynamics.
+---------------------------+
| Table 1 |
+--------------+------------+
| EXCITABLE CONFIGURATION |
+--------------+------------+
|Parameter | Value |
+==============+============+
| a | -2.0 |
+--------------+------------+
| b | -10.0 |
+--------------+------------+
| c | 0.0 |
+--------------+------------+
| d | 0.02 |
+--------------+------------+
| I | 0.0 |
+--------------+------------+
| limit cycle if a is 2.0 |
+---------------------------+
+---------------------------+
| Table 2 |
+--------------+------------+
| BISTABLE CONFIGURATION |
+--------------+------------+
|Parameter | Value |
+==============+============+
| a | 1.0 |
+--------------+------------+
| b | 0.0 |
+--------------+------------+
| c | -5.0 |
+--------------+------------+
| d | 0.02 |
+--------------+------------+
| I | 0.0 |
+--------------+------------+
| monostable regime: |
| fixed point if Iext=-2.0 |
| limit cycle if Iext=-1.0 |
+---------------------------+
+---------------------------+
| Table 3 |
+--------------+------------+
| EXCITABLE CONFIGURATION |
+--------------+------------+
| (similar to Morris-Lecar)|
+--------------+------------+
|Parameter | Value |
+==============+============+
| a | 0.5 |
+--------------+------------+
| b | 0.6 |
+--------------+------------+
| c | -4.0 |
+--------------+------------+
| d | 0.02 |
+--------------+------------+
| I | 0.0 |
+--------------+------------+
| excitable regime if b=0.6 |
| oscillatory if b=0.4 |
+---------------------------+
+---------------------------+
| Table 4 |
+--------------+------------+
| GhoshetAl, 2008 |
| KnocketAl, 2009 |
+--------------+------------+
|Parameter | Value |
+==============+============+
| a | 1.05 |
+--------------+------------+
| b | -1.00 |
+--------------+------------+
| c | 0.0 |
+--------------+------------+
| d | 0.1 |
+--------------+------------+
| I | 0.0 |
+--------------+------------+
| alpha | 1.0 |
+--------------+------------+
| beta | 0.2 |
+--------------+------------+
| gamma | -1.0 |
+--------------+------------+
| e | 0.0 |
+--------------+------------+
| g | 1.0 |
+--------------+------------+
| f | 1/3 |
+--------------+------------+
| tau | 1.25 |
+--------------+------------+
| |
| frequency peak at 10Hz |
| |
+---------------------------+
+---------------------------+
| Table 5 |
+--------------+------------+
| SanzLeonetAl 2013 |
+--------------+------------+
|Parameter | Value |
+==============+============+
| a | - 0.5 |
+--------------+------------+
| b | -10.0 |
+--------------+------------+
| c | 0.0 |
+--------------+------------+
| d | 0.02 |
+--------------+------------+
| I | 0.0 |
+--------------+------------+
| |
| intrinsic frequency is |
| approx 10 Hz |
| |
+---------------------------+
    NOTE: With I = 2.1 this regime is called the subthreshold regime;
    unstable oscillations appear through a subcritical Hopf bifurcation.
.. figure :: img/Generic2dOscillator_01_mode_0_pplane.svg
.. _phase-plane-Generic2D:
:alt: Phase plane of the generic 2D population model with (V, W)
The (:math:`V`, :math:`W`) phase-plane for the generic 2D population
model for default parameters. The dynamical system has an equilibrium
point.
.. #Currently there seems to be a clash between traits and autodoc, autodoc
.. #can't find the methods of the class, the class specific names below get
.. #us around this...
.. automethod:: Generic2dOscillator.__init__
.. automethod:: Generic2dOscillator.dfun
"""
_ui_name = "Generic 2d Oscillator"
ui_configurable_parameters = ['tau', 'a', 'b', 'c', 'I', 'd', 'e', 'f', 'g', 'alpha', 'beta', 'gamma']
#Define traited attributes for this model, these represent possible kwargs.
tau = arrays.FloatArray(
label=r":math:`\tau`",
default=numpy.array([1.0]),
range=basic.Range(lo=1.0, hi=5.0, step=0.01),
doc="""A time-scale hierarchy can be introduced for the state
variables :math:`V` and :math:`W`. Default parameter is 1, which means
no time-scale hierarchy.""",
order=1)
I = arrays.FloatArray(
label=":math:`I_{ext}`",
default=numpy.array([0.0]),
range=basic.Range(lo=-5.0, hi=5.0, step=0.01),
doc="""Baseline shift of the cubic nullcline""",
order=2)
a = arrays.FloatArray(
label=":math:`a`",
default=numpy.array([-2.0]),
range=basic.Range(lo=-5.0, hi=5.0, step=0.01),
doc="""Vertical shift of the configurable nullcline""",
order=3)
b = arrays.FloatArray(
label=":math:`b`",
default=numpy.array([-10.0]),
range=basic.Range(lo=-20.0, hi=15.0, step=0.01),
doc="""Linear slope of the configurable nullcline""",
order=4)
c = arrays.FloatArray(
label=":math:`c`",
default=numpy.array([0.0]),
range=basic.Range(lo=-10.0, hi=10.0, step=0.01),
doc="""Parabolic term of the configurable nullcline""",
order=5)
d = arrays.FloatArray(
label=":math:`d`",
default=numpy.array([0.02]),
range=basic.Range(lo=0.0001, hi=1.0, step=0.0001),
doc="""Temporal scale factor. Warning: do not use it unless
you know what you are doing and know about time tides.""",
order=13)
e = arrays.FloatArray(
label=":math:`e`",
default=numpy.array([3.0]),
range=basic.Range(lo=-5.0, hi=5.0, step=0.0001),
doc="""Coefficient of the quadratic term of the cubic nullcline.""",
order=6)
f = arrays.FloatArray(
label=":math:`f`",
default=numpy.array([1.0]),
range=basic.Range(lo=-5.0, hi=5.0, step=0.0001),
doc="""Coefficient of the cubic term of the cubic nullcline.""",
order=7)
g = arrays.FloatArray(
label=":math:`g`",
default=numpy.array([0.0]),
range=basic.Range(lo=-5.0, hi=5.0, step=0.5),
doc="""Coefficient of the linear term of the cubic nullcline.""",
order=8)
alpha = arrays.FloatArray(
label=r":math:`\alpha`",
default=numpy.array([1.0]),
range=basic.Range(lo=-5.0, hi=5.0, step=0.0001),
doc="""Constant parameter to scale the rate of feedback from the
slow variable to the fast variable.""",
order=9)
beta = arrays.FloatArray(
label=r":math:`\beta`",
default=numpy.array([1.0]),
range=basic.Range(lo=-5.0, hi=5.0, step=0.0001),
doc="""Constant parameter to scale the rate of feedback from the
slow variable to itself""",
order=10)
# This parameter is basically a hack to avoid having a negative lower boundary in the global coupling strength.
gamma = arrays.FloatArray(
label=r":math:`\gamma`",
default=numpy.array([1.0]),
range=basic.Range(lo=-1.0, hi=1.0, step=0.1),
doc="""Constant parameter to reproduce FHN dynamics where
excitatory input currents are negative.
It scales both I and the long range coupling term.""",
order=13)
#Informational attribute, used for phase-plane and initial()
state_variable_range = basic.Dict(
label="State Variable ranges [lo, hi]",
default={"V": numpy.array([-2.0, 4.0]),
"W": numpy.array([-6.0, 6.0])},
doc="""The values for each state-variable should be set to encompass
the expected dynamic range of that state-variable for the current
        parameters. It is used as a mechanism for bounding random initial
        conditions when the simulation isn't started from an explicit
        history, and it also provides the default range of phase-plane plots.""",
order=11)
# variables_of_interest = arrays.IntegerArray(
# label = "Variables watched by Monitors.",
# range = basic.Range(lo = 0.0, hi = 2.0, step = 1.0),
# default = numpy.array([0], dtype=numpy.int32),
# doc = """This represents the default state-variables of this Model to be
# monitored. It can be overridden for each Monitor if desired. The
# corresponding state-variable indices for this model are :math:`V = 0`
# and :math:`W = 1`""",
# order = 7)
variables_of_interest = basic.Enumerate(
label="Variables or quantities available to Monitors",
options=["V", "W", "V + W", "V - W"],
default=["V", ],
select_multiple=True,
doc="The quantities of interest for monitoring for the generic 2D oscillator.",
order=12)
state_variables = ['V', 'W']
_nvar = 2
cvar = numpy.array([0], dtype=numpy.int32)
def _numpy_dfun(self, state_variables, coupling, local_coupling=0.0, ev=numexpr.evaluate):
r"""
The two state variables :math:`V` and :math:`W` are typically considered
to represent a function of the neuron's membrane potential, such as the
firing rate or dendritic currents, and a recovery variable, respectively.
If there is a time scale hierarchy, then typically :math:`V` is faster
than :math:`W` corresponding to a value of :math:`\tau` greater than 1.
The equations of the generic 2D population model read
.. math::
\dot{V} &= d \, \tau (-f V^3 + e V^2 + g V + \alpha W + \gamma I), \\
\dot{W} &= \dfrac{d}{\tau}\,\,(c V^2 + b V - \beta W + a),
where external currents :math:`I` provide the entry point for local,
long-range connectivity and stimulation.
"""
V = state_variables[0, :]
W = state_variables[1, :]
#[State_variables, nodes]
c_0 = coupling[0, :]
tau = self.tau
I = self.I
a = self.a
b = self.b
c = self.c
d = self.d
e = self.e
f = self.f
g = self.g
beta = self.beta
alpha = self.alpha
gamma = self.gamma
lc_0 = local_coupling * V
# Pre-allocate the result array then instruct numexpr to use it as output.
# This avoids an expensive array concatenation
derivative = numpy.empty_like(state_variables)
        ev('d * tau * (alpha * W - f * V**3 + e * V**2 + g * V + gamma * I + gamma * c_0 + lc_0)', out=derivative[0])
ev('d * (a + b * V + c * V**2 - beta * W) / tau', out=derivative[1])
return derivative
def dfun(self, vw, c, local_coupling=0.0):
lc_0 = local_coupling * vw[0, :, 0]
vw_ = vw.reshape(vw.shape[:-1]).T
c_ = c.reshape(c.shape[:-1]).T
deriv = _numba_dfun_g2d(vw_, c_, self.tau, self.I, self.a, self.b, self.c, self.d, self.e, self.f, self.g,
self.beta, self.alpha, self.gamma, lc_0)
return deriv.T[..., numpy.newaxis]
@guvectorize([(float64[:],) * 16], '(n),(m)' + ',()'*13 + '->(n)', nopython=True)
def _numba_dfun_g2d(vw, c_0, tau, I, a, b, c, d, e, f, g, beta, alpha, gamma, lc_0, dx):
"Gufunc for reduced Wong-Wang model equations."
V = vw[0]
V2 = V * V
W = vw[1]
dx[0] = d[0] * tau[0] * (alpha[0] * W - f[0] * V2*V + e[0] * V2 + g[0] * V + gamma[0] * I[0] + gamma[0] * c_0[0] + lc_0[0])
dx[1] = d[0] * (a[0] + b[0] * V + c[0] * V2 - beta[0] * W) / tau[0]
class Kuramoto(Model):
r"""
The Kuramoto model is a model of synchronization phenomena derived by
Yoshiki Kuramoto in 1975 which has since been applied to diverse domains
including the study of neuronal oscillations and synchronization.
See:
.. [YK_1975] Y. Kuramoto, in: H. Arakai (Ed.), International Symposium
on Mathematical Problems in Theoretical Physics, *Lecture Notes in
Physics*, page 420, vol. 39, 1975.
.. [SS_2000] S. H. Strogatz. *From Kuramoto to Crawford: exploring the
onset of synchronization in populations of coupled oscillators*.
Physica D, 143, 2000.
.. [JC_2011] J. Cabral, E. Hugues, O. Sporns, G. Deco. *Role of local
network oscillations in resting-state functional connectivity*.
NeuroImage, 57, 1, 2011.
The :math:`\theta` variable is the phase angle of the oscillation.
Dynamic equations:
.. math::
\dot{\theta}_{k} = \omega_{k} + \mathbf{\Gamma}(\theta_k, \theta_j, u_{kj}) + \sin(W_{\zeta}\theta)
"""
_ui_name = "Kuramoto Oscillator"
ui_configurable_parameters = ['omega']
#Define traited attributes for this model, these represent possible kwargs.
omega = arrays.FloatArray(
label=r":math:`\omega`",
default=numpy.array([1.0]),
range=basic.Range(lo=0.01, hi=200.0, step=0.1),
doc=""":math:`\omega` sets the base line frequency for the
Kuramoto oscillator in [rad/ms]""",
order=1)
#Informational attribute, used for phase-plane and initial()
state_variable_range = basic.Dict(
label="State Variable ranges [lo, hi]",
default={"theta": numpy.array([0.0, numpy.pi * 2.0]),
},
doc="""The values for each state-variable should be set to encompass
the expected dynamic range of that state-variable for the current
        parameters. It is used as a mechanism for bounding random initial
        conditions when the simulation isn't started from an explicit
        history, and it also provides the default range of phase-plane plots.""",
order=6)
variables_of_interest = basic.Enumerate(
label="Variables watched by Monitors",
options=["theta"],
default=["theta"],
select_multiple=True,
doc="""This represents the default state-variables of this Model to be
monitored. It can be overridden for each Monitor if desired. The Kuramoto
        model, however, only has one state variable with an index of 0, so it
is not necessary to change the default here.""",
order=7)
state_variables = ['theta']
_nvar = 1
cvar = numpy.array([0], dtype=numpy.int32)
def dfun(self, state_variables, coupling, local_coupling=0.0,
ev=numexpr.evaluate, sin=numpy.sin, pi2=numpy.pi * 2):
r"""
The :math:`\theta` variable is the phase angle of the oscillation.
.. math::
\dot{\theta}_{k} = \omega_{k} + \mathbf{\Gamma}(\theta_k, \theta_j, u_{kj}) + \sin(W_{\zeta}\theta)
where :math:`I` is the input via local and long range connectivity,
passing first through the Kuramoto coupling function,
:py:class:tvb.simulator.coupling.Kuramoto.
"""
theta = state_variables[0, :]
#A) Distribution of phases according to the local connectivity kernel
local_range_coupling = numpy.sin(local_coupling * theta)
# NOTE: To evaluate.
#B) Strength of the interactions
#local_range_coupling = local_coupling * numpy.sin(theta)
I = coupling[0, :] + local_range_coupling
if not hasattr(self, 'derivative'):
self.derivative = numpy.empty((1,) + theta.shape)
# phase update
self.derivative[0] = self.omega + I
return self.derivative
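
# --- Usage sketch (not part of the original module) ---
# A minimal, hedged example of evaluating the Generic2dOscillator right-hand
# side for a single node. It assumes the traited attributes accept keyword
# overrides (the "possible kwargs" comment above suggests they do) and that
# no extra setup is needed before calling dfun; depending on the TVB version,
# a prior call to osc.configure() may be required.
if __name__ == "__main__":
    osc = Generic2dOscillator(d=numpy.array([0.02]))
    state = numpy.zeros((2, 1, 1))        # (state variables, nodes, modes): V = W = 0
    no_coupling = numpy.zeros((1, 1, 1))  # one coupling variable, zero input
    print(osc.dfun(state, no_coupling))   # expected shape: (2, 1, 1)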
| stuart-knock/tvb-library | tvb/simulator/models/oscillator.py | Python | gpl-2.0 | 20,193 | ["NEURON"] | 8d9c8e926da7ae08a5c3213e93f0a7c72f24d96f995aaac3e7b3c1071cf692c2 |
""" StaticExpressions gathers constant expression that involve types. """
from pythran.passmanager import NodeAnalysis
class HasStaticExpression(NodeAnalysis):
def __init__(self):
self.result = False
super(HasStaticExpression, self).__init__()
def visit_Attribute(self, node):
self.generic_visit(node)
self.result |= node.attr == 'is_none'
class StaticExpressions(NodeAnalysis):
"""Identify constant expressions."""
def __init__(self):
self.result = set()
self.constant_expressions = set()
super(StaticExpressions, self).__init__()
def add(self, node):
self.result.add(node)
return True
def not_add(self, _):
return False
def match_all(self, *args):
assert len(args) > 1, "at least two arguments"
static = False
const = True
for value in args:
if self.visit(value):
static = True
else:
const &= value in self.constant_expressions
return static and const
def visit_BoolOp(self, node):
return self.match_all(*node.values) and self.add(node)
def visit_BinOp(self, node):
return self.match_all(node.left, node.right) and self.add(node)
def visit_UnaryOp(self, node):
return self.visit(node.operand) and self.add(node)
def visit_IfExp(self, node):
return (self.match_all(node.test, node.body, node.orelse)
and self.add(node))
def visit_Compare(self, node):
return self.match_all(node.left, *node.comparators) and self.add(node)
def visit_Call(self, node):
        return self.visit(node.func) and self.add(node)  # very limited
def visit_Attribute(self, node):
return node.attr in ('is_none', 'isinstance')
def visit_Constant(self, node):
self.constant_expressions.add(node)
visit_Subscript = not_add
visit_Name = not_add
visit_Dict = not_add
visit_List = not_add
visit_Tuple = not_add
visit_Set = not_add
visit_Slice = not_add
visit_Index = not_add
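
# --- Usage sketch (not part of the original module) ---
# How this analysis is typically driven through pythran's pass manager. The
# module name and source string below are hypothetical; pythran operates on
# gast trees, hence the ast_to_gast conversion.
if __name__ == "__main__":
    import ast
    import gast
    from pythran.passmanager import PassManager

    tree = gast.ast_to_gast(ast.parse("def f(x): return 1 if x is None else 2"))
    static_nodes = PassManager("example").gather(StaticExpressions, tree)
    print(len(static_nodes))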
| pombredanne/pythran | pythran/analyses/static_expressions.py | Python | bsd-3-clause | 2,105 | ["VisIt"] | bf327b10766f5542b95526370dbfa3998c8175298f6e2223f40a56b15b4962d5 |
#!/usr/bin/env python
"""Pygme: Python Gaussian ModElling - a python implementation of the Multi-Gaussian Expansion Method.
Fit MGE models, and Generate initial conditions for N body simulations
See Monnet et al. 1992 and Emsellem et al. 1994 for more details
"""
## Distribution for the PyMGE package
import sys
# simple hack to allow use of "python setup.py develop". Should not affect
# users, only developers.
if 'develop' in sys.argv:
# use setuptools for develop, but nothing else
from setuptools import setup
else:
from distutils.core import setup
import os
if os.path.exists('MANIFEST'):
os.remove('MANIFEST')
setup(name='pygme',
version='0.0.2',
description='PYthon Gaussian ModElling - Python MGE Tool',
author='Eric Emsellem',
author_email='eric.emsellem@eso.org',
maintainer='Eric Emsellem',
# url='http://',
# requires=['pymodelfit'],
# requires=['openopt'],
license='LICENSE',
packages=['pygme', 'pygme.binning', 'pygme.astroprofiles', 'pygme.fitting', 'pygme.utils', 'pygme.colormaps'],
package_dir={'pygme.astroprofiles': 'pygme/astroprofiles'},
package_data={'pygme.astroprofiles': ['data/*.dat']},
)
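
# Typical invocations from a source checkout:
#
#   python setup.py develop   # editable install via setuptools (developers)
#   python setup.py install   # regular install via distutils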
| emsellem/pygme | setup.py | Python | bsd-3-clause | 1,232 | ["Gaussian"] | 597d065efe3b216afb27c0f70f18ddbbc0cea0aecd49d7bd6e931f031271db02 |
import numpy
import pylab
import moose
import time
'''
This example implements a reaction-diffusion-like system which is
bistable and propagates losslessly. It is based on the NEURON example
rxdrun.py, but incorporates more compartments and runs for a longer time.
The system is implemented as a hybrid of a reaction and a function which
sets its rates. Please see rxdFuncDiffusion.py for a variant that uses
just a function object to set up the system.
'''
dt = 0.1
# define the geometry
compt = moose.CylMesh( '/cylinder' )
compt.r0 = compt.r1 = 1
compt.x1 = 100
compt.diffLength = 0.2
assert( compt.numDiffCompts == compt.x1/compt.diffLength )
#define the molecule. Its geometry is defined by its parent volume, cylinder
c = moose.Pool( '/cylinder/pool' )
c.diffConst = 1 # define diffusion constant
# There is an implicit reaction substrate/product. MOOSE makes it explicit.
buf = moose.BufPool( '/cylinder/buf' )
buf.nInit = 1
# The reaction is something entirely peculiar, not a chemical thing.
reaction = moose.Reac( '/cylinder/reac' )
reaction.Kb = 0
# so here we set up a function calculation to do the same thing.
func = moose.Function( '/cylinder/reac/func' )
func.expr = "(1 - x0) * (0.3 - x0)"
func.x.num = 1 #specify number of input variables.
#Connect the reaction to the pools
moose.connect( reaction, 'sub', c, 'reac' )
moose.connect( reaction, 'prd', buf, 'reac' )
#Connect the function to the reaction
moose.connect( func, 'valueOut', reaction, 'setNumKf' )
#Connect the molecules to the func
moose.connect( c, 'nOut', func.x[0], 'input' )
#Set up solvers
ksolve = moose.Ksolve( '/cylinder/ksolve' )
dsolve = moose.Dsolve( '/cylinder/dsolve' )
stoich = moose.Stoich( '/cylinder/stoich' )
stoich.compartment = compt
stoich.ksolve = ksolve
stoich.dsolve = dsolve
stoich.path = '/cylinder/##'
for i in range( 10, 18 ):
moose.setClock( i, dt )
#initialize
x = numpy.arange( 0, compt.x1, compt.diffLength )
c.vec.nInit = [ (q < 0.2 * compt.x1) for q in x ]
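# booleans coerce to 0/1: the left fifth of the cylinder starts with n = 1,
# the rest with n = 0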
# Run and plot it.
moose.reinit()
updateDt = 50
runtime = updateDt * 4
plt = pylab.plot( x, c.vec.n, label='t = 0 ')
t1 = time.time()
for t in range( 0, runtime-1, updateDt ):
moose.start( updateDt )
plt = pylab.plot( x, c.vec.n, label='t = '+str(t + updateDt) )
print "Time = ", time.time() - t1
pylab.ylim( 0, 1.05 )
pylab.legend()
pylab.show()
| dilawar/moose-full | moose-examples/snippets/rxdReacDiffusion.py | Python | gpl-2.0 | 2,345 | ["MOOSE", "NEURON"] | ccfaff21192fffed053a6e29af2bac9d91ebd686c2722dd52a69aec916764700 |
# Mantid Repository : https://github.com/mantidproject/mantid
#
# Copyright © 2018 ISIS Rutherford Appleton Laboratory UKRI,
# NScD Oak Ridge National Laboratory, European Spallation Source
# & Institut Laue - Langevin
# SPDX - License - Identifier: GPL - 3.0 +
from __future__ import (absolute_import, division, print_function)
import unittest
from mantid.api import FrameworkManagerImpl, IFunction1D, FunctionFactory
class TestFunctionNoAttrs(IFunction1D):
pass
class TestFunctionOnlyInit(IFunction1D):
def init(self):
pass
class TestFunctionOnlyFunction1D(IFunction1D):
def function1D(self, xvals):
pass
class TestFunctionCorrectForm(IFunction1D):
def init(self):
pass
def function1D(self, xvals):
pass
class FunctionFactoryTest(unittest.TestCase):
@classmethod
def setUpClass(cls):
FrameworkManagerImpl.Instance()
def test_get_function_factory_does_not_return_None(self):
self.assertTrue(FunctionFactory is not None)
def test_get_functions(self):
all_funcs = FunctionFactory.getFunctionNames()
self.assertTrue( len(all_funcs) > 0 )
self.assertTrue("Gaussian" in all_funcs)
def test_get_Gaussian(self):
name = "Gaussian"
func = FunctionFactory.createFunction(name)
self.assertTrue(func.name() == name)
self.assertTrue(len(func.__repr__()) > len(name))
self.assertTrue("Peak" in func.categories())
def test_function_subscription_of_non_class_type_raises_error(self):
def not_a_fit_function(*args, **kwargs):
pass
self.assertRaises(ValueError, FunctionFactory.subscribe, not_a_fit_function)
def test_function_subscription_of_class_without_IFunction_base_raises_error(self):
class NotAFitFunction(object):
pass
self.assertRaises(ValueError, FunctionFactory.subscribe, NotAFitFunction)
def test_function_subscription_without_required_attrs_fails(self):
self.assertRaises(RuntimeError, FunctionFactory.Instance().subscribe, TestFunctionNoAttrs)
self.assertTrue("TestFunctionNoAttrs" not in FunctionFactory.getFunctionNames())
self.assertRaises(RuntimeError, FunctionFactory.Instance().subscribe, TestFunctionOnlyInit)
self.assertTrue("TestFunctionOnlyInit" not in FunctionFactory.getFunctionNames())
def test_function_with_expected_attrs_subscribes_successfully(self):
nfuncs_orig = len(FunctionFactory.getFunctionNames())
FunctionFactory.subscribe(TestFunctionCorrectForm)
new_funcs = FunctionFactory.getFunctionNames()
        self.assertEqual(nfuncs_orig+1, len(new_funcs))
self.assertTrue("TestFunctionCorrectForm" in new_funcs)
def test_function_existing_function_can_be_unsubscribed(self):
FunctionFactory.subscribe(TestFunctionCorrectForm)
nfuncs_before = len(FunctionFactory.getFunctionNames())
FunctionFactory.unsubscribe("TestFunctionCorrectForm")
available_functions = FunctionFactory.getFunctionNames()
        self.assertEqual(nfuncs_before - 1, len(available_functions))
self.assertTrue("TestFunctionCorrectForm" not in available_functions)
if __name__ == '__main__':
unittest.main()
| mganeva/mantid | Framework/PythonInterface/test/python/mantid/api/FunctionFactoryTest.py | Python | gpl-3.0 | 3,273 | ["Gaussian"] | 1f283ddb65d86e939fc4c7770d662185cdb2b9a46a0bff327e4c6959ae463c41 |
"""Test operator support in VTK-Python
The following operators are supported:
- The << operator becomes python str() and print()
- The < <= == != > >= operators become richcompare
- The [int] operator become the sequence protocol
The following operators are not yet supported:
- The () operator
- The [] operator for the mapping protocol
- Arithmetic operators + - * / %
Created on May 7, 2011 by David Gobbi
"""
import sys
import vtk
from vtk.test import Testing
class TestOperators(Testing.vtkTest):
def testPrint(self):
"""Use str slot"""
c1 = vtk.vtkArrayRange(3,4)
s1 = str(c1)
s2 = '[3, 4)'
self.assertEqual(s1, s2)
def testCompare(self):
"""Use comparison operators"""
c1 = vtk.vtkArrayRange(3,4)
c2 = vtk.vtkArrayRange(3,4)
# will fail if the "==" operator is not wrapped
self.assertEqual(c1, c2)
def testSequence(self):
"""Use sequence operators"""
c1 = vtk.vtkArrayCoordinates()
c1.SetDimensions(3)
n = len(c1) # sq_length slot
self.assertEqual(n, 3)
c1[1] = 5 # sq_ass_item slot
n = c1[1] # sq_item slot
self.assertEqual(n, 5)
r = vtk.vtkArrayRange(3,4)
e = vtk.vtkArrayExtents()
e.SetDimensions(2)
e[0] = r
s = e[0]
self.assertEqual(s, r)
if __name__ == "__main__":
Testing.main([(TestOperators, 'test')])
| hlzz/dotfiles | graphics/VTK-7.0.0/Common/Core/Testing/Python/TestOperators.py | Python | bsd-3-clause | 1,492 | ["VTK"] | 8b6f37d8a72997e5129c7b72991e117d60f26030218b443151bde6c6f089bacf |
#
# Gramps - a GTK+/GNOME based genealogy program
#
# Copyright (C) 2007-2008 Douglas S. Blank
# Copyright (C) 2004-2007 Donald N. Allingham
# Copyright (C) 2008 Brian G. Matherly
# Copyright (C) 2010 Jakim Friant
# Copyright (C) 2011 Tim G L Lyons
# Copyright (C) 2013 Vassilii Khachaturov
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
"Export to CSV Spreadsheet."
#-------------------------------------------------------------------------
#
# Standard Python Modules
#
#-------------------------------------------------------------------------
import os
from gramps.gen.const import GRAMPS_LOCALE as glocale
_ = glocale.translation.sgettext
import csv
from io import StringIO
import codecs
#------------------------------------------------------------------------
#
# Set up logging
#
#------------------------------------------------------------------------
import logging
from collections import abc
LOG = logging.getLogger(".ExportCSV")
#-------------------------------------------------------------------------
#
# Gramps modules
#
#-------------------------------------------------------------------------
from gramps.gen.lib import EventType, Person
from gramps.gen.lib.eventroletype import EventRoleType
from gramps.gui.plug.export import WriterOptionBox
from gramps.gen.utils.string import gender as gender_map
from gramps.gen.datehandler import get_date
from gramps.gen.display.place import displayer as _pd
from gramps.gui.glade import Glade
from gramps.gen.constfunc import win
#-------------------------------------------------------------------------
#
# The function that does the exporting
#
#-------------------------------------------------------------------------
def exportData(database, filename, user, option_box=None):
gw = CSVWriter(database, filename, user, option_box)
return gw.export_data()
#-------------------------------------------------------------------------
#
# Support Functions
#
#-------------------------------------------------------------------------
def sortable_string_representation(text):
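    # Builds a sort key: the alphabetic characters first, then the digits
    # zero-padded to ten places, e.g. "I123" -> "I0000000123".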
numeric = ""
alpha = ""
for s in text:
if s.isdigit():
numeric += s
else:
alpha += s
return alpha + (("0" * 10) + numeric)[-10:]
def get_primary_event_ref_from_type(db, person, event_name):
"""
    >>> get_primary_event_ref_from_type(db, Person(), "Baptism")
"""
for ref in person.event_ref_list:
if ref.get_role() == EventRoleType.PRIMARY:
event = db.get_event_from_handle(ref.ref)
if event and event.type.is_type(event_name):
return ref
return None
def get_primary_source_title(db, obj):
for citation_handle in obj.get_citation_list():
citation = db.get_citation_from_handle(citation_handle)
source = db.get_source_from_handle(citation.get_reference_handle())
if source:
return source.get_title()
return ""
#-------------------------------------------------------------------------
#
# CSVWriter Options
#
#-------------------------------------------------------------------------
class CSVWriterOptionBox(WriterOptionBox):
"""
Create a VBox with the option widgets and define methods to retrieve
the options.
"""
def __init__(self, person, dbstate, uistate, track=[], window=None):
WriterOptionBox.__init__(self, person, dbstate, uistate, track=track,
window=window)
## TODO: add place filter selection
self.include_individuals = 1
self.include_marriages = 1
self.include_children = 1
self.include_places = 1
self.translate_headers = 1
self.include_individuals_check = None
self.include_marriages_check = None
self.include_children_check = None
self.include_places_check = None
self.translate_headers_check = None
def get_option_box(self):
from gi.repository import Gtk
option_box = WriterOptionBox.get_option_box(self)
self.include_individuals_check = Gtk.CheckButton(label=_("Include people"))
self.include_marriages_check = Gtk.CheckButton(label=_("Include marriages"))
self.include_children_check = Gtk.CheckButton(label=_("Include children"))
self.include_places_check = Gtk.CheckButton(label=_("Include places"))
self.translate_headers_check = Gtk.CheckButton(label=_("Translate headers"))
self.include_individuals_check.set_active(1)
self.include_marriages_check.set_active(1)
self.include_children_check.set_active(1)
self.include_places_check.set_active(1)
self.translate_headers_check.set_active(1)
option_box.pack_start(self.include_individuals_check, False, True, 0)
option_box.pack_start(self.include_marriages_check, False, True, 0)
option_box.pack_start(self.include_children_check, False, True, 0)
option_box.pack_start(self.include_places_check, False, True, 0)
option_box.pack_start(self.translate_headers_check, False, True, 0)
return option_box
def parse_options(self):
WriterOptionBox.parse_options(self)
if self.include_individuals_check:
self.include_individuals = self.include_individuals_check.get_active()
self.include_marriages = self.include_marriages_check.get_active()
self.include_children = self.include_children_check.get_active()
self.include_places = self.include_places_check.get_active()
self.translate_headers = self.translate_headers_check.get_active()
#-------------------------------------------------------------------------
#
# CSVWriter class
#
#-------------------------------------------------------------------------
class CSVWriter:
def __init__(self, database, filename, user, option_box=None):
self.db = database
self.option_box = option_box
self.filename = filename
self.user = user
if isinstance(self.user.callback, abc.Callable): # is really callable
self.update = self.update_real
else:
self.update = self.update_empty
self.plist = {}
self.flist = {}
self.place_list = {}
self.persons_details_done = []
self.persons_notes_done = []
self.person_ids = {}
if not option_box:
self.include_individuals = 1
self.include_marriages = 1
self.include_children = 1
self.include_places = 1
self.translate_headers = 1
else:
self.option_box.parse_options()
self.db = option_box.get_filtered_database(self.db)
self.include_individuals = self.option_box.include_individuals
self.include_marriages = self.option_box.include_marriages
self.include_children = self.option_box.include_children
self.include_places = self.option_box.include_places
self.translate_headers = self.option_box.translate_headers
self.plist = [x for x in self.db.iter_person_handles()]
# make place list so that dependencies are first:
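        # (a simple topological sort: a place is appended only once every
        #  place it is enclosed by has already been appended)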
self.place_list = []
place_list = sorted([x for x in self.db.iter_place_handles()])
while place_list:
handle = place_list[0]
place = self.db.get_place_from_handle(handle)
if place:
if all([(x.ref in self.place_list) for x in place.placeref_list]):
self.place_list.append(place_list.pop(0))
else: # put at the back of the line:
place_list.append(place_list.pop(0))
else:
place_list.pop(0)
# get the families for which these people are spouses:
self.flist = {}
for key in self.plist:
p = self.db.get_person_from_handle(key)
if p:
for family_handle in p.get_family_handle_list():
self.flist[family_handle] = 1
# now add the families for which these people are a child:
for family_handle in self.db.iter_family_handles():
family = self.db.get_family_from_handle(family_handle)
if family:
for child_ref in family.get_child_ref_list():
if child_ref:
child_handle = child_ref.ref
if child_handle in self.plist:
self.flist[family_handle] = 1
def update_empty(self):
pass
def update_real(self):
self.count += 1
newval = int(100*self.count/self.total)
if newval != self.oldval:
self.user.callback(newval)
self.oldval = newval
def writeln(self):
self.g.writerow([])
def write_csv(self, *items):
self.g.writerow(items)
def export_data(self):
        self.dirname = os.path.dirname(self.filename)
try:
self.fp = open(self.filename, "w",
encoding='utf_8_sig' if win() else 'utf_8',
newline='')
self.g = csv.writer(self.fp)
except IOError as msg:
msg2 = _("Could not create %s") % self.filename
self.user.notify_error(msg2,str(msg))
return False
except:
self.user.notify_error(_("Could not create %s") % self.filename)
return False
######################### initialize progress bar
self.count = 0
self.total = 0
self.oldval = 0
if self.include_individuals:
self.total += len(self.plist)
if self.include_marriages:
self.total += len(self.flist)
if self.include_children:
self.total += len(self.flist)
if self.include_places:
self.total += len(self.place_list)
########################
LOG.debug("Possible people to export: %s", len(self.plist))
LOG.debug("Possible families to export: %s", len(self.flist))
LOG.debug("Possible places to export: %s", len(self.place_list))
###########################
if self.include_places:
if self.translate_headers:
self.write_csv(_("Place"), _("Title"), _("Name"),
_("Type"), _("Latitude"), _("Longitude"),
_("Code"), _("Enclosed_by"), _("Date"))
else:
self.write_csv("Place", "Title", "Name",
"Type", "Latitude", "Longitude",
"Code", "Enclosed_by", "Date")
for key in self.place_list:
place = self.db.get_place_from_handle(key)
if place:
place_id = place.gramps_id
place_title = place.title
place_name = place.name.value
place_type = str(place.place_type)
place_latitude = place.lat
place_longitude = place.long
place_code = place.code
if place.placeref_list:
for placeref in place.placeref_list:
placeref_obj = self.db.get_place_from_handle(placeref.ref)
placeref_date = ""
if not placeref.date.is_empty():
placeref_date = placeref.date
placeref_id = ""
if placeref_obj:
placeref_id = "[%s]" % placeref_obj.gramps_id
self.write_csv("[%s]" % place_id, place_title, place_name, place_type,
place_latitude, place_longitude, place_code, placeref_id,
placeref_date)
else:
self.write_csv("[%s]" % place_id, place_title, place_name, place_type,
place_latitude, place_longitude, place_code, "",
"")
self.update()
self.writeln()
########################### sort:
sortorder = []
dropped_surnames = set()
for key in self.plist:
person = self.db.get_person_from_handle(key)
if person:
primary_name = person.get_primary_name()
first_name = primary_name.get_first_name()
surname_obj = primary_name.get_primary_surname()
surname = surname_obj.get_surname()
# See bug #6955
nonprimary_surnames = set(primary_name.get_surname_list())
nonprimary_surnames.remove(surname_obj)
dropped_surnames.update(nonprimary_surnames)
sortorder.append( (surname, first_name, key) )
if dropped_surnames:
LOG.warning(
_("CSV export doesn't support non-primary surnames, "
"{count} dropped").format(
count=len(dropped_surnames)) )
LOG.debug(
"Dropped surnames: " +
', '.join([("%s %s %s" % (surname.get_prefix(),
surname.get_surname(), surname.get_connector())).strip()
for surname in dropped_surnames]))
sortorder.sort() # will sort on tuples
plist = [data[2] for data in sortorder]
###########################
if self.include_individuals:
if self.translate_headers:
self.write_csv(
_("Person"), _("Surname"), _("Given"),
_("Call"), _("Suffix"), _("Prefix"),
_("Person|Title"), _("Gender"),
_("Birth date"), _("Birth place"), _("Birth source"),
_("Baptism date"), _("Baptism place"), _("Baptism source"),
_("Death date"), _("Death place"), _("Death source"),
_("Burial date"), _("Burial place"), _("Burial source"),
_("Note"))
else:
self.write_csv(
"Person", "Surname", "Given",
"Call", "Suffix", "Prefix",
"Title", "Gender",
"Birth date", "Birth place", "Birth source",
"Baptism date", "Baptism place", "Baptism source",
"Death date", "Death place", "Death source",
"Burial date", "Burial place", "Burial source",
"Note")
for key in plist:
person = self.db.get_person_from_handle(key)
if person:
primary_name = person.get_primary_name()
first_name = primary_name.get_first_name()
surname_obj = primary_name.get_primary_surname()
surname = surname_obj.get_surname()
prefix = surname_obj.get_prefix()
suffix = primary_name.get_suffix()
title = primary_name.get_title()
grampsid = person.get_gramps_id()
grampsid_ref = ""
if grampsid != "":
grampsid_ref = "[" + grampsid + "]"
note = '' # don't export notes
callname = primary_name.get_call_name()
gender = person.get_gender()
if gender == Person.MALE:
gender = gender_map[Person.MALE]
elif gender == Person.FEMALE:
gender = gender_map[Person.FEMALE]
else:
gender = gender_map[Person.UNKNOWN]
# Birth:
birthdate = ""
birthplace = ""
birthsource = ""
birth_ref = person.get_birth_ref()
if birth_ref:
birth = self.db.get_event_from_handle(birth_ref.ref)
if birth:
                            birthdate = self.format_date(birth)
birthplace = self.format_place(birth)
birthsource = get_primary_source_title(self.db, birth)
# Baptism:
baptismdate = ""
baptismplace = ""
baptismsource = ""
baptism_ref = get_primary_event_ref_from_type(
self.db, person, "Baptism")
if baptism_ref:
baptism = self.db.get_event_from_handle(baptism_ref.ref)
if baptism:
                            baptismdate = self.format_date(baptism)
baptismplace = self.format_place(baptism)
baptismsource = get_primary_source_title(self.db, baptism)
# Death:
deathdate = ""
deathplace = ""
deathsource = ""
death_ref = person.get_death_ref()
if death_ref:
death = self.db.get_event_from_handle(death_ref.ref)
if death:
                            deathdate = self.format_date(death)
deathplace = self.format_place(death)
deathsource = get_primary_source_title(self.db, death)
# Burial:
burialdate = ""
burialplace = ""
burialsource = ""
burial_ref = get_primary_event_ref_from_type(
self.db, person, "Burial")
if burial_ref:
burial = self.db.get_event_from_handle(burial_ref.ref)
if burial:
                            burialdate = self.format_date(burial)
burialplace = self.format_place(burial)
burialsource = get_primary_source_title(self.db, burial)
# Write it out:
self.write_csv(grampsid_ref, surname, first_name, callname,
suffix, prefix, title, gender,
birthdate, birthplace, birthsource,
baptismdate, baptismplace, baptismsource,
deathdate, deathplace, deathsource,
burialdate, burialplace, burialsource,
note)
self.update()
self.writeln()
########################### sort:
sortorder = []
for key in self.flist:
family = self.db.get_family_from_handle(key)
if family:
marriage_id = family.get_gramps_id()
sortorder.append(
(sortable_string_representation(marriage_id), key)
)
sortorder.sort() # will sort on tuples
flist = [data[1] for data in sortorder]
###########################
if self.include_marriages:
if self.translate_headers:
self.write_csv(_("Marriage"), _("Husband"), _("Wife"),
_("Date"), _("Place"), _("Source"), _("Note"))
else:
self.write_csv("Marriage", "Husband", "Wife",
"Date", "Place", "Source", "Note")
for key in flist:
family = self.db.get_family_from_handle(key)
if family:
marriage_id = family.get_gramps_id()
if marriage_id != "":
marriage_id = "[" + marriage_id + "]"
mother_id = ''
father_id = ''
father_handle = family.get_father_handle()
if father_handle:
father = self.db.get_person_from_handle(father_handle)
father_id = father.get_gramps_id()
if father_id != "":
father_id = "[" + father_id + "]"
mother_handle = family.get_mother_handle()
if mother_handle:
mother = self.db.get_person_from_handle(mother_handle)
mother_id = mother.get_gramps_id()
if mother_id != "":
mother_id = "[" + mother_id + "]"
# get mdate, mplace
mdate, mplace, source = '', '', ''
event_ref_list = family.get_event_ref_list()
for event_ref in event_ref_list:
event = self.db.get_event_from_handle(event_ref.ref)
if event.get_type() == EventType.MARRIAGE:
                            mdate = self.format_date(event)
mplace = self.format_place(event)
source = get_primary_source_title(self.db, event)
note = ''
self.write_csv(marriage_id, father_id, mother_id, mdate,
mplace, source, note)
self.update()
self.writeln()
if self.include_children:
if self.translate_headers:
self.write_csv(_("Family"), _("Child"))
else:
self.write_csv("Family", "Child")
for key in flist:
family = self.db.get_family_from_handle(key)
if family:
family_id = family.get_gramps_id()
if family_id != "":
family_id = "[" + family_id + "]"
for child_ref in family.get_child_ref_list():
child_handle = child_ref.ref
child = self.db.get_person_from_handle(child_handle)
grampsid = child.get_gramps_id()
grampsid_ref = ""
if grampsid != "":
grampsid_ref = "[" + grampsid + "]"
self.write_csv(family_id, grampsid_ref)
self.update()
self.writeln()
self.fp.close()
return True
def format_date(self, date):
return get_date(date)
def format_place(self, event):
"""
If places are included in the export return a link, else return a
formatted place for the given event.
"""
if self.include_places:
place_handle = event.get_place_handle()
if place_handle:
place = self.db.get_place_from_handle(place_handle)
if place:
return "[%s]" % place.get_gramps_id()
return ""
else:
return _pd.display_event(self.db, event)
|
sam-m888/gramps
|
gramps/plugins/export/exportcsv.py
|
Python
|
gpl-2.0
| 23,769
|
[
"Brian"
] |
332c965224b216bf41bdef70b927a4cc66db97594df57b2d1b4ae8bccc906060
|
#! /usr/bin/env python
# Quex is free software; you can redistribute it and/or modify it under the
# terms of the GNU Lesser General Public License as published by the Free
# Software Foundation; either version 2.1 of the License, or (at your option)
# any later version.
#
# This software is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more
# details.
#
# You should have received a copy of the GNU Lesser General Public License along
# with this library; if not, write to the Free Software Foundation, Inc., 59
# Temple Place, Suite 330, Boston, MA 02111-1307 USA
#
# (C) 2007 Frank-Rene Schaefer
#
################################################################################
# -*- python -*-
import os
import sys
QUEX_VERSION = '0.65.2'
try:
QUEX_INSTALLATION_DIR = os.environ["QUEX_PATH"]
# Note, that windows can also deal with backslashes.
QUEX_INSTALLATION_DIR = QUEX_INSTALLATION_DIR.replace("\\", "/")
except:
print "error: environment variable 'QUEX_PATH' is not defined."
if os.name == "posix":
print "error: your system is 'posix'."
print "error: if you are using bash-shell, append the following line"
print "error: to your '~/.bashrc' file:"
print "error:"
print "error: export QUEX_PATH=directory-where-quex-has-been-installed"
elif os.name == "nt":
print "error: Right click on [MyComputer]"
print "error: -> [Properties]"
print "error: -> Tab[Advanced]"
print "error: -> [Environment Variables]"
print "error: and from there it is obvious."
else:
print "error: for your system '%s' it is not known how to set environment" % os.name
print "error: variables. if you find out, please, send an email to"
print "error: <fschaef@users.sourceforge.net>"
sys.exit(-1) # sys.exit(-1) is acceptable
QUEX_PATH = QUEX_INSTALLATION_DIR
QUEX_CODEC_DB_PATH = QUEX_PATH + "/quex/engine/codec_db/database"
sys.path.insert(0, QUEX_INSTALLATION_DIR)
def check():
global QUEX_INSTALLATION_DIR
    # -- Try to access the file 'quex-exe.py' in order to verify the installation
if os.access(QUEX_INSTALLATION_DIR + "/quex-exe.py", os.F_OK) == False:
print "error: Environment variable 'QUEX_PATH' does not point to"
print "error: a valid installation directory of quex."
print "error: current setting of 'QUEX_PATH':"
print "error:", QUEX_INSTALLATION_DIR
sys.exit(-1) # sys.exit(-1) is acceptable
    # -- Check for version 2.6 or higher (but below 3.0)
if sys.version_info[0] < 2 or \
(sys.version_info[0] == 2 and sys.version_info[1] < 6):
print "error: Quex requires Python version 2.6 or higher (but nothing >= 3.0).\n" + \
"error: Detected version '%i.%i'." % \
(sys.version_info[0], sys.version_info[1])
print "error: Please, visit http://www.python.org and download an appropriate release."
sys.exit(-1) # sys.exit(-1) is acceptable
|
dkopecek/amplify
|
third-party/quex-0.65.2/quex/DEFINITIONS.py
|
Python
|
gpl-2.0
| 3,233
|
[
"VisIt"
] |
c7e53107b4057f1504d0be3d905e8207cbd657ae414acc5fa8bc6889b7644915
|
#!/usr/bin/python
import os, sys
from argparse import ArgumentParser
from pylab import*
current_path = os.path.dirname(os.path.realpath(__file__))
sys.path.append(os.path.abspath(os.path.join(current_path,"../../../tools")))
import sumatra_tracking.io_manager as smt
import analysis.colormaps as cmaps
from analysis.tuning_analysis import*
from analysis.data_extractor import*
from analysis.pretty_plotting import*
from analysis.data_extractor import*
parser = ArgumentParser()
parser.add_argument("sim_ids", help = "simulation ids")
parser.add_argument("record", help = "record results", type = int)
parser.add_argument("run_id", help = "sumatra_label")
args = parser.parse_args()
sim_ids = args.sim_ids
record = args.record
run_id = args.run_id
output_dir = None
if(record):
output_dir = smt.get_output_dir(sim_ids, run_id)
sims = get_simulations(sim_ids)
# Analysis: --------------------------------------------------------------------
Ns=sims[-1].integrator.Ns
Nt=sims[-1].integrator.Nt
for sim in sims:
sim.stimulus.read_property()
diameters = extract_unique_simulation_attrs(sims, "stimulus.mask_size")
diameters = diameters[argsort(diameters)]
k_points = sims[0].integrator.k_points
print diameters
fig = plt.figure()
ax = fig.add_subplot(111)
spines_edge_color(ax)
remove_ticks(ax)
set_grid(ax)
set_font()
set_legend()
n=[0,4,6]
for i, d in enumerate(diameters[2:-1:10]):
label=r"$d=$"+'{0:.2f}'.format(d)
sim = simulation_extractor(sims, "stimulus.mask_size", d)[0]
ax.plot(sim.integrator.k_points,
sim.stimulus.fourier_transform[0, Ns/2,:]/max(sim.stimulus.fourier_transform[0, Ns/2, :]),
label = label, color=colormap(n[i]))
ax.set_title("$\widetilde{S}(k_x, k_y=0, w=0; d)$")
ax.set_xlabel("$k_x$")
ax.set_ylabel("$\widetilde{S}$")
ax.set_xlim([-30, 30])
legend()
if record : fig.savefig(os.path.join(output_dir, "stim_ft.png"))
# 2d: --------------------------------------------------------------------
S_ft = np.zeros([len(diameters), Ns])
for i, d in enumerate(diameters):
sim = simulation_extractor(sims, "stimulus.mask_size", d)[0]
S_ft[i,:] = sim.stimulus.fourier_transform[0, Ns/2]/max(sim.stimulus.fourier_transform[0, Ns/2])
fig = plt.figure()
extent =[k_points.min(), k_points.max(), diameters.min(), diameters.max()]
internpolation = "gaussian"
imshow(S_ft, extent=extent, origin="lower", aspect='auto', interpolation=internpolation, cmap =cmaps.viridis )
title("$\widetilde{S}(k_x, k_y=0, w=0; d)$")
ylabel("Spot diameter [deg]", fontsize= 16)
xlabel("$k_x$",fontsize= 25)
colorbar()
if record : fig.savefig(os.path.join(output_dir, "stim_ft_vs_d.png"))
|
miladh/lgn-simulator
|
apps/stimuliAnalysis/analysis/patch_stim.py
|
Python
|
gpl-3.0
| 2,631
|
[
"Gaussian"
] |
b031befe302151b4cdc1448e0da3c1617b69b6ec76d4cdd088711fa945fa0ef5
|
#!/usr/bin/env python
"""
Show distribution after a change of variables with y = x^(1/2), where the pdf for x is Gaussian
"""
import matplotlib.pyplot as pl
from scipy.stats import norm
import numpy as np
# normal distribution
mu = 5. # the mean, mu
sigma = 1 # standard deviations, sigma
x = np.linspace(0, 10, 1000) # x
# set plot to render labels using latex
pl.rc('text', usetex=True)
pl.rc('font', family='serif')
pl.rc('font', size=14)
fig = pl.figure(figsize=(6,5), dpi=100)
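# Change of variables (brief derivation, for reference): with y = g(x) = x**0.5
# the transformed density is
#   p_y(y) = p_x(g^{-1}(y)) * |d g^{-1}(y) / dy| = p_x(y**2) * 2*y,
# which the red curve below evaluates parametrically at y = sqrt(x), i.e. by
# plotting (sqrt(x), 2*sqrt(x)*p_x(x)).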
# plot pdfs
pl.plot(x, norm.pdf(x, mu, sigma), 'b--', label='$p(z=x)$')
pl.plot(np.sqrt(x), 2.*np.sqrt(x)*norm.pdf(x, mu, sigma), 'r', label='$p(z=y=x^{1/2})$')
ax = pl.gca()
ax.set_xlabel('$z$', fontsize=14)
ax.set_ylabel('$p(z)$', fontsize=14)
ax.legend(loc='upper right', frameon=False)
fig.subplots_adjust(bottom=0.15)
pl.savefig('../change_of_variables_1d.pdf')
pl.show()
|
mattpitkin/GraWIToNStatisticsLectures
|
figures/scripts/change_of_variables_1d.py
|
Python
|
mit
| 870
|
[
"Gaussian"
] |
b11835e5533e08545e9fe25fc1fdb426c57bf488549d58f697a9417f385ba2c5
|
import os
from setuptools import setup
from setuptools import find_packages
version = '0.1'
shortdesc = "Klarna Payment for bda.plone.shop"
setup(
name='bda.plone.klarnapayment',
version=version,
description=shortdesc,
classifiers=[
'Environment :: Web Environment',
'License :: OSI Approved :: GNU General Public License (GPL)',
'Operating System :: OS Independent',
'Programming Language :: Python',
'Topic :: Internet :: WWW/HTTP :: Dynamic Content',
],
author='Espen Moe-Nilssen',
author_email='espen@medialog.no',
license='GNU General Public Licence',
packages=find_packages('src'),
package_dir = {'': 'src'},
namespace_packages=['bda', 'bda.plone'],
include_package_data=True,
zip_safe=False,
install_requires=[
'setuptools',
'Plone',
'bda.plone.shop',
'klarnacheckout',
],
extras_require={
'test': [
'plone.app.testing',
]
},
entry_points="""
[z3c.autoinclude.plugin]
target = plone
""",
)
|
espenmn/bda.plone.klarnapayment
|
setup.py
|
Python
|
bsd-3-clause
| 1,088
|
[
"MOE"
] |
42f0f544f5aafe2f39ebcbae6e0a78b08a1aaa5ff98edac95aaf58a5b4d6f2cf
|
#!/usr/bin/env python3
#-*- coding: utf-8 -*-
# Copyright 2017, National University of Ireland and The James Hutton Institute
# Author: Nicholas Waters
#
# This code is part of the riboSeed package, and is governed by its licence.
# Please see the LICENSE file that should have been included as part of
# this package.
import pkg_resources
import sys
import os
import shutil
import subprocess
import argparse
from .shared_methods import set_up_logging
helpstring = """
Welcome to the ribo try! Here we test the integration of several parts of the
riboSeed pipeline. First, `ribo run` is performed on the included test
dataset. Then, essentially the same thing is done, but calling the
individual subcommands (`ribo scan`, `ribo select`, etc)
If all goes well, no errors should occur, and you should essentially have
two "identical" riboSeed assemblies (although due to random assignments
of mapping duplicates, the nature of error correction, etc, I can't
guarantee that you will get the exact same result
Have fun!
"""
def get_args(test_args=None): # pragma: no cover
parser = argparse.ArgumentParser(
prog="ribo try",
description=helpstring,
add_help=False) # to allow for custom help
parser.prog = "ribo try"
parser.add_argument("-o", "--output", dest='output', action="store",
help="output directory; " +
"default: %(default)s",
default=os.path.join(
os.getcwd(), "riboSeed_sample_results"),
type=str)
parser.add_argument("-v", "--verbosity", dest='verbosity',
action="store",
default=2, type=int, choices=[1, 2, 3, 4, 5],
help="Logger writes debug to file in output dir; " +
"this sets verbosity level sent to stderr. " +
" 1 = debug(), 2 = info(), 3 = warning(), " +
"4 = error() and 5 = critical(); " +
"default: %(default)s")
parser.add_argument("-c", "--cores", dest='cores', action="store",
default=2, type=int,
help="cores to be used" +
"; default: %(default)s")
parser.add_argument("-t", "--threads", dest='threads',
action="store",
default=1, type=int,
choices=[1, 2, 4],
help="if your cores are hyperthreaded, set number" +
" threads to the number of threads per processer." +
"If unsure, see 'cat /proc/cpuinfo' under 'cpu " +
"cores', or 'lscpu' under 'Thread(s) per core'." +
": %(default)s")
parser.add_argument("-m", "--memory", dest='memory', action="store",
default=8, type=int,
help="system memory available" +
"; default: %(default)s")
parser.add_argument("-h", "--help",
action="help", default=argparse.SUPPRESS,
help="Displays this help message")
args = parser.parse_args(sys.argv[2:])
return args
def main(args, logger=None):
output_root = os.path.abspath(os.path.expanduser(args.output))
try:
os.makedirs(output_root, exist_ok=False)
except OSError:
print("Output directory %s already exists; exiting..." % output_root)
sys.exit(1)
log_path = os.path.join(output_root, "riboTry.log")
if logger is None:
logger = set_up_logging(verbosity=args.verbosity,
outfile=log_path,
name=__name__)
logger.info("Testing your installation of riboSeed on some test data")
# here we locate the test data we packaged with riboSeed -
# some reads and a reference
resource_package = pkg_resources.Requirement.parse("riboSeed")
logger.debug(resource_package)
# this looks like I should be using os.path.join, but the package resource
# stuff needs unix-style path seps
resource_path_fasta = '/'.join(('riboSeed',
'integration_data', 'concatenated_seq.fasta'))
resource_path_reffasta = '/'.join(('riboSeed',
'integration_data', 'NC_000913.3.fasta'))
resource_path_1 = '/'.join(('riboSeed',
'integration_data', 'test_reads1.fq'))
resource_path_2 = '/'.join(('riboSeed',
'integration_data', 'test_reads2.fq'))
logger.debug(resource_path_fasta)
fasta = pkg_resources.resource_filename(resource_package, resource_path_fasta)
reffasta = pkg_resources.resource_filename(resource_package,
resource_path_reffasta)
fastq1 = pkg_resources.resource_filename(resource_package, resource_path_1)
fastq2 = pkg_resources.resource_filename(resource_package, resource_path_2)
# fasta_path = pkg_resources.resource_string("/", resource_path)
logger.debug(fasta)
logger.debug(reffasta)
logger.debug(fastq1)
logger.debug(fastq2)
for i in ["blastn", "spades.py", "bwa", "mafft",
"samtools", "barrnap"]:
assert shutil.which(i) is not None, \
"{0} executable not found in PATH!".format(i)
ribo_run_cmd = str(
"ribo run -r {0} -o {1} -F {2} -R {3} --serialize -v 1 " +
"--subassembler skesa " +
"--stages stack score spec --cores {4} --threads {5} --memory {6}"
).format(
fasta,
os.path.join(output_root, "run"),
fastq1,
fastq2,
args.cores,
args.threads,
args.memory)
logger.info("running " + ribo_run_cmd)
logger.info("This usually take about ~4-5 minutes to run all the modules")
subprocess.run([ribo_run_cmd],
shell=sys.platform != "win32",
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
check=True)
logger.info("finished running integration test with ribo run!")
|
nickp60/riboSeed
|
riboSeed/riboTry.py
|
Python
|
mit
| 6,211
|
[
"BWA"
] |
be5a3c0f2dd56d8e852534a82138791282cd6b62a428949389121bbbfd5e288b
|
"""Functions to process .aaf (Alignment Analysis Format) files """
import tempfile
from collections import namedtuple
from multiprocessing import Process, Queue
import pysam
import cytoolz
import cytoolz.curried as cyt
import logging
logger = logging.getLogger(__name__)
AafRead = namedtuple('AAF', ['chrom', 'pos', 'd_err', 'MQ', 'vlist'])
@cytoolz.curry
def save_as_aaf(seq_dict, output_dir, titer):
"""Iterate over all the reads saving them as an Alignment Analysis Format file.
This tab delimited file has the following fields:
chrom pos d_err MQ variantlist
variantlist is a semicolon separated, spaceless list of variant sizes that looks like
-1;+2;+1...
:param seq_dict: converts reference_id to sequence_name including unmapped ones
:param output_dir:
:param titer:
  :return: output file name (generated by tempfile.mkstemp)
NOTES:
1. This expects qnames to be parsed and d_err to be computed
2. The output is a temp file written to the given directory. This is to enable us
to use it with parallel processing. The output file names are returned to us
and we can combine them as we wish to create the final file
3. It is wasteful to pair reads for this operation - the output file will be
resorted in order to make use of tabix indexing
"""
f, aaf_fname = tempfile.mkstemp(prefix='aaf-', dir=output_dir)
with open(aaf_fname, 'w') as fp:
for template in titer:
for mate in template:
rd, ri, d_err = mate['read'], mate['read_info'], mate['d_err']
fp.write('{chrom}\t{pos}\t{d_err}\t{mq}\t{vl}\n'.
format(chrom=seq_dict[rd.reference_id], pos=rd.pos,
d_err=d_err, mq=rd.mapping_quality,
vl=';'.join(str(v) for v in ri.v_list)))
return aaf_fname
def aaf_iter(fp, contig_q):
"""Returns read objects from contigs until someone passes None as a contig
:param fp: tabixfile pointer (pysam.TabixFile)
:param contig_q: a queue into which we put contig string
:return: a generator
"""
for contig in iter(contig_q.get, None):
cnt = 0
for cnt, read in enumerate(fp.fetch(contig)):
_, _, d_err, MQ, v_list = read.split('\t')
      yield int(d_err), int(MQ), [int(v) for v in v_list.split(';') if v != '']
logger.debug('{}: {} reads'.format(contig, cnt))
def worker(pipeline, aaf_fname, result_q, contig_q):
"""Given a pipeline, run it with reads from the given AAF taken from contigs supplied
over the contig_q.
This expects the pipeline to yield one final result which it can then return.
It expects the last element of pipeline to be a function that consumes a aaf
iterator and returns a result.
:param pipeline: A list of pipeline nodes
:param aaf_fname: Source AAF file
:param result_q: The result is put here.
:param contig_q: messages are contig names. A None indicates stop_iter
:return:
"""
aaf = pysam.TabixFile(aaf_fname)
t1 = aaf_iter(aaf, contig_q)
sink = pipeline[-1]
result_q.put(sink(cyt.pipe(t1, *pipeline[:-1])))
def scatter_aaf(pipeline, aaf_fname, ncpus=2):
"""Given a pipeline and a source bam file use multiprocessing to run the pipeline
via multiple workers splitting up the work by contig
python multiprocessing will be used for running the pipelines in parallel and care
must be taken to ensure the individual pipeline nodes are parallelizable
This expects the pipeline to yield one final result which it can then return.
:param bam_fname:
:param pipeline:
:param paired: When run in parallel, paired vs unpaired pipelines work differently
So we have to tell scatter if we want to source paired or unpaired reads
:param ncpus:
:param max_singles:
:return:
"""
assert ncpus > 1, "ncpus = 1 can't use scatter!"
result_q = Queue()
contig_q = Queue()
p_list = []
for i in range(ncpus):
p_list += [
Process(target=worker,
args=(pipeline, aaf_fname, result_q, contig_q))
]
for p in p_list:
p.start()
for contig in pysam.TabixFile(aaf_fname).contigs:
contig_q.put(contig)
# Tell child processes to stop
for i in range(ncpus):
contig_q.put(None)
for i in range(ncpus):
yield result_q.get()
# Orderly exit
for p in p_list:
p.join()
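# Hypothetical usage sketch (not part of the original module): pull out the
# d_err field of every read and collect it, one partial result per worker.
# The sink (list) consumes the iterator built by the earlier stages.
#
#   pipeline = [cyt.map(lambda rec: rec[0]), list]
#   for partial in scatter_aaf(pipeline, 'reads.aaf.gz', ncpus=2):
#       print(len(partial))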
|
sbg/Mitty
|
mitty/analysis/aaftoolz.py
|
Python
|
apache-2.0
| 4,292
|
[
"pysam"
] |
4d6d4d7e252eff39342c15596bb33ccafe056e2a1863eee3d642eb3b16be9378
|
#
#@BEGIN LICENSE
#
# PSI4: an ab initio quantum chemistry software package
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
#@END LICENSE
#
r"""Module to provide mechanism to store and restore option states in driver.
"""
import sys
import psi4
class OptionState(object):
"""Class to store the state of a single *option*. If *module* given, the *option*
value and has_changed value is stored for global, local to *module*, and used by
*module* scopes; otherwise (used for BASIS keywords), only global scope is stored.
Class can store, print, and restore option values. ::
>>> OptionState('SCF_TYPE', 'SCF')
>>> print(OptionState('DF_BASIS_MP2'))
"""
def __init__(self, option, module=None):
self.option = option.upper()
if module:
self.module = module.upper()
else:
self.module = None
self.value_global = psi4.get_global_option(option)
self.haschanged_global = psi4.has_global_option_changed(option)
if self.module:
self.value_local = psi4.get_local_option(self.module, option)
self.haschanged_local = psi4.has_local_option_changed(self.module, option)
self.value_used = psi4.get_option(self.module, option)
self.haschanged_used = psi4.has_option_changed(self.module, option)
else:
self.value_local = None
self.haschanged_local = None
self.value_used = None
self.haschanged_used = None
def __str__(self):
text = ''
if self.module:
text += """ ==> %s Option in Module %s <==\n\n""" % (self.option, self.module)
text += """ Global (has changed?) value: %7s %s\n""" % ('(' + str(self.haschanged_global) + ')', self.value_global)
text += """ Local (has changed?) value: %7s %s\n""" % ('(' + str(self.haschanged_local) + ')', self.value_local)
text += """ Used (has changed?) value: %7s %s\n""" % ('(' + str(self.haschanged_used) + ')', self.value_used)
else:
text += """ ==> %s Option in Global Scope <==\n\n""" % (self.option)
text += """ Global (has changed?) value: %7s %s\n""" % ('(' + str(self.haschanged_global) + ')', self.value_global)
text += """\n"""
return text
def restore(self):
psi4.set_global_option(self.option, self.value_global)
if not self.haschanged_global:
psi4.revoke_global_option_changed(self.option)
if self.module:
psi4.set_local_option(self.module, self.option, self.value_local)
if not self.haschanged_local:
psi4.revoke_local_option_changed(self.module, self.option)
class OptionsState(object):
"""Class to contain multiple :py:func:`~optproc.OptionsState` objects.
Used in python driver functions to collect several options before altering
them, then restoring before function return. ::
>>> optstash = OptionsState(
['SCF', 'DFT_FUNCTIONAL'],
['DF_BASIS_SCF'],
['SCF', 'SCF_TYPE'],
['SCF', 'REFERENCE'])
>>> print(optstash)
>>> optstash.restore()
"""
def __init__(self, *largs):
self.data = []
for item in largs:
if len(item) == 2:
self.data.append(OptionState(item[1], item[0]))
elif len(item) == 1:
self.data.append(OptionState(item[0]))
else:
print('ERROR: Each argument to OptionsState should be an array, the first element')
print(' of which is the module scope and the second element of which is the')
print(' module name. Bad argument: %s' % (item))
sys.exit()
def __str__(self):
text = ''
for item in self.data:
text += str(item)
return text
def restore(self):
for item in self.data:
item.restore()
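# Typical driver usage, sketched from the docstrings above (option names and
# values here are illustrative):
#
#   optstash = OptionsState(['SCF', 'SCF_TYPE'], ['DF_BASIS_SCF'])
#   psi4.set_local_option('SCF', 'SCF_TYPE', 'DF')
#   ... run the computation with the altered options ...
#   optstash.restore()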
|
spring01/libPSI
|
lib/python/p4util/optproc.py
|
Python
|
gpl-2.0
| 4,666
|
[
"Psi4"
] |
68e9acd7a7278b9678132a09fa0b485e2102ae9ecaf9511b9ad7ebd2754fc1fe
|
#!/usr/bin/env python
'''
The LIF network is based on:
Ostojic, S. (2014).
Two types of asynchronous activity in networks of
excitatory and inhibitory spiking neurons.
Nat Neurosci 17, 594-600.
Key parameter to change is synaptic coupling J (mV).
Tested with Brian 1.4.1
'''
from brian import *
from pylab import * # imports matplot like commands into the namespace
# also can use np. for numpy and mpl. for matplotlib
np.random.seed(100) # set seed for reproducibility of simulations
# ###########################################
# Defining network model parameters
# ###########################################
N = 10000 # Total number of neurons
f = 0.8 # Fraction of exc neurons
NE = int(f*N) # Number of excitatory cells
NI = N-NE # Number of inhibitory cells
C = 1000 # Number of incoming connections on each neuron (exc or inh)
fC = f # fraction fC incoming connections are exc, rest inhibitory
J = 0.2*mV # exc strength is J.
# Critical J is ~ 0.45 mV in paper for N = 1000, C = 1000
# Here, N = 1000, C = 100, I get critical J similar!
# Using N = 10000, C = 1000, I get critical J of
# I'm defining critical J as the J at which mean pop rate
# starts to increase beyond that at rest (~38 Hz) after the dip.
# But going by firing rate fluctuations of neurons,
# should define it as the minimum point of the dip (Fig 1a).
g = 5.0 # -gJ is the inh strength. For exc-inh balance g>~f(1-f)=4
#eta = 1e-2 # Learning rate
#tau_stdp = 20*ms # STDP time constant
simtime = 1.0*second # Simulation time
dt = defaultclock.dt/second
# ###########################################
# Neuron model
# ###########################################
el = 24*mV#-41*mV # Resting potential, same as mu0, spontaneously spiking
#el = -65*mV # Resting potential, same as mu0
vt = 20*mV#-45.*mV # Spiking threshold
taum = 20*ms # Membrane time constant
vr = 10*mV#-55*mV # Reset potential
taur = 0.5*ms # Refractory period
taudelay = 0.5*ms + dt*second # Synaptic delay, must be > refractory period
# else no 'chaotic' async state
# also at least >= taur + dt else missed
eqs_neurons='''
dv/dt=(1/taum)*(-(v-el)) : volt
'''
# ###########################################
# Initialize neuron group
# ###########################################
neurons=NeuronGroup(N,model=eqs_neurons,\
threshold=vt,reset=vr,refractory=taur)
Pe=neurons.subgroup(NE)
Pi=neurons.subgroup(NI)
#Pe.v = uniform(el,vt+10*mV,NE)
#Pi.v = uniform(el,vt+10*mV,NI)
# ###########################################
# Connecting the network
# ###########################################
sparseness_e = fC*C/float(NE)
sparseness_i = (1-fC)*C/float(NI)
# Follow Dale's law -- exc (inh) neurons only have +ve (-ve) synapses.
con_e = Synapses(Pe,neurons,'',pre='v_post+=J')
con_e.connect_random(sparseness=sparseness_e)
con_e.delay = taudelay
con_i = Synapses(Pi,neurons,'',pre='v_post+=-g*J')
con_i.connect_random(sparseness=sparseness_i)
con_i.delay = taudelay
# Obsolete and inflexible method of creating synapses
#con_e = Connection(Pe,neurons,'v',delay=taudelay)
#con_e.connect_random(Pe,neurons,p=sparseness_e,\
# fixed=True,weight=1.0,seed=100)
#con_i = Connection(Pi,neurons,'v',delay=taudelay)
#con_i.connect_random(Pi,neurons,p=sparseness_i,\
# fixed=True,weight=-g,seed=200)
# Can avoid autapses with string based synapse creation:
# something like S[:, :] = '(i != j) * (rand() > 0.15)'
# will be slow as not a sparse matrix
# ###########################################
# Setting up monitors
# ###########################################
Nmon = 100
Nmon_exc = int(f*Nmon)
Pe_mon = Pe.subgroup(Nmon_exc)
sm_e = SpikeMonitor(Pe_mon)
Pi_mon = Pi.subgroup(Nmon-Nmon_exc)
sm_i = SpikeMonitor(Pi_mon)
# Population monitor
popm_e = PopulationRateMonitor(Pe,bin=1.*ms)
popm_i = PopulationRateMonitor(Pi,bin=1.*ms)
# ###########################################
# Run
# ###########################################
print "Setup complete, running for",simtime,"at dt =",dt,"s."
run(simtime,report='text')
print "For g,J =",g,J,"mean exc rate =",\
sm_e.nspikes/float(Nmon_exc)/(simtime/second),'Hz.'
print "For g,J =",g,J,"mean inh rate =",\
sm_i.nspikes/float(Nmon-Nmon_exc)/(simtime/second),'Hz.'
# ###########################################
# Analysis functions
# ###########################################
def rate_from_spiketrain(spiketimes,fulltime,dt,tau=50e-3):
"""
Returns a rate series of spiketimes convolved with a Gaussian kernel;
all times must be in SI units,
remember to divide fulltime and dt by second
"""
sigma = tau/2.
# normalized Gaussian kernel, integral with dt is normed to 1
# to count as 1 spike smeared over a finite interval
norm_factor = 1./(sqrt(2.*pi)*sigma)
gauss_kernel = array([norm_factor*exp(-x**2/(2.*sigma**2))\
for x in arange(-5.*sigma,5.*sigma+dt,dt)])
kernel_len = len(gauss_kernel)
# need to accommodate half kernel_len on either side of fulltime
rate_full = zeros(int(fulltime/dt)+kernel_len)
for spiketime in spiketimes:
idx = int(spiketime/dt)
rate_full[idx:idx+kernel_len] += gauss_kernel
# only the middle fulltime part of the rate series
# This is already in Hz,
# since should have multiplied by dt for above convolution
# and divided by dt to get a rate, so effectively not doing either.
return rate_full[kernel_len/2:kernel_len/2+int(fulltime/dt)]
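# e.g. rate_from_spiketrain([0.1, 0.2], 1.0, dt) gives an array of length
# int(1.0/dt) whose integral (sum * dt) is ~2: each spike contributes a
# unit-area Gaussian bump (slightly less if clipped at the series edges).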
# ###########################################
# Make plots
# ###########################################
fig = figure()
# raster plots
subplot(231)
raster_plot(sm_e,ms=1.)
title(str(Nmon_exc)+" exc neurons")
xlabel("")
subplot(234)
raster_plot(sm_i,ms=1.)
title(str(Nmon-Nmon_exc)+" inh neurons")
subplot(232)
# firing rates
timeseries = arange(0,simtime/second,dt)*1000
num_to_plot = 10
#rates = []
for nrni in range(num_to_plot):
rate = rate_from_spiketrain(sm_e[nrni],simtime/second,dt)
plot(timeseries,rate)
#print mean(rate),len(sm_e[nrni])
#rates.append(rate)
title(str(num_to_plot)+" exc rates")
ylabel("Hz")
ylim(0,300)
subplot(235)
for nrni in range(num_to_plot):
rate = rate_from_spiketrain(sm_i[nrni],simtime/second,dt)
plot(timeseries,rate)
#print mean(rate),len(sm_i[nrni])
#rates.append(rate)
title(str(num_to_plot)+" inh rates")
ylim(0,300)
#print "Mean rate = ",mean(rates)
xlabel("Time (ms)")
ylabel("Hz")
# Population firing rates
subplot(233)
timeseries = arange(0,simtime/second,1e-3)*1000
plot(timeseries,popm_e.smooth_rate(width=50.*ms,filter="gaussian"))
title("Exc population rate")
ylabel("Hz")
subplot(236)
timeseries = arange(0,simtime/second,1e-3)*1000
plot(timeseries,popm_i.smooth_rate(width=50.*ms,filter="gaussian"))
title("Inh population rate")
xlabel("Time (ms)")
ylabel("Hz")
fig.tight_layout()
show()
|
adityagilra/from-papers
|
Ostojic2014_ExcInhNet.py
|
Python
|
lgpl-3.0
| 7,101
|
[
"Brian",
"Gaussian",
"NEURON"
] |
9e8939f74ffc5e61f5f9e80c8fe2a0c9091ea45d66e482febf751d69ca66bc54
|
# -*- coding: utf-8 -*-
#
# Gramps - a GTK+/GNOME based genealogy program
#
# Copyright (C) 2003-2005 Donald N. Allingham
# Copyright (C) 2008 Stefan Siegel
# Copyright (C) 2008 Brian G. Matherly
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
#
# $Id$
# Original version written by Alex Roitman, largely based on relationship.py
# by Don Allingham and on valuable input from Dr. Martin Senftleben
# Modified by Joachim Breitner to not use „Großcousine“, in accordance with
# http://de.wikipedia.org/wiki/Verwandtschaftsbeziehung
# Rewritten from scratch for GRAMPS 3 by Stefan Siegel,
# loosely based on rel_fr.py
"""
German-specific classes for relationships.
"""
#-------------------------------------------------------------------------
#
# standard python modules
#
#-------------------------------------------------------------------------
import re
#-------------------------------------------------------------------------
#
# GRAMPS modules
#
#-------------------------------------------------------------------------
from gramps.gen.lib import Person
import gramps.gen.relationship
#-------------------------------------------------------------------------
#
#
#
#-------------------------------------------------------------------------
_ordinal = [ u'nullte',
u'erste', u'zweite', u'dritte', u'vierte', u'fünfte', u'sechste',
u'siebte', u'achte', u'neunte', u'zehnte', u'elfte', u'zwölfte',
]
_removed = [ u'',
u'', u'Groß', u'Urgroß',
u'Alt', u'Altgroß', u'Alturgroß',
u'Ober', u'Obergroß', u'Oberurgroß',
u'Stamm', u'Stammgroß', u'Stammurgroß',
u'Ahnen', u'Ahnengroß', u'Ahnenurgroß',
u'Urahnen', u'Urahnengroß', u'Urahnenurgroß',
u'Erz', u'Erzgroß', u'Erzurgroß',
u'Erzahnen', u'Erzahnengroß', u'Erzahnenurgroß',
]
_lineal_up = {
'many': u'%(p)sEltern%(s)s',
'unknown': u'%(p)sElter%(s)s', # "Elter" sounds strange but is correct
'male': u'%(p)sVater%(s)s',
'female': u'%(p)sMutter%(s)s',
}
_lineal_down = {
'many': u'%(p)sKinder%(s)s',
'unknown': u'%(p)sKind%(s)s',
'male': u'%(p)sSohn%(s)s',
'female': u'%(p)sTochter%(s)s',
}
_collateral_up = {
'many': u'%(p)sOnkel und %(p)sTanten%(s)s',
'unknown': u'%(p)sOnkel oder %(p)sTante%(s)s',
'male': u'%(p)sOnkel%(s)s',
'female': u'%(p)sTante%(s)s',
}
_collateral_down = {
'many': u'%(p)sNeffen und %(p)sNichten%(s)s',
'unknown': u'%(p)sNeffe oder %(p)sNichte%(s)s',
'male': u'%(p)sNeffe%(s)s',
'female': u'%(p)sNichte%(s)s',
}
_collateral_same = {
'many': u'%(p)sCousins und %(p)sCousinen%(s)s',
'unknown': u'%(p)sCousin oder %(p)sCousine%(s)s',
'male': u'%(p)sCousin%(s)s',
'female': u'%(p)sCousine%(s)s',
}
_collateral_sib = {
'many': u'%(p)sGeschwister%(s)s',
'unknown': u'%(p)sGeschwisterkind%(s)s',
'male': u'%(p)sBruder%(s)s',
'female': u'%(p)sSchwester%(s)s',
}
_schwager = {
'many': u'%(p)sSchwager%(s)s',
'unknown': u'%(p)sSchwager%(s)s',
'male': u'%(p)sSchwager%(s)s',
'female': u'%(p)sSchwägerin%(s)s',
}
_schwippschwager = {
'many': u'%(p)sSchwippschwager%(s)s',
'unknown': u'%(p)sSchwippschwager%(s)s',
'male': u'%(p)sSchwippschwager%(s)s',
'female': u'%(p)sSchwippschwägerin%(s)s',
}
#-------------------------------------------------------------------------
#
#
#
#-------------------------------------------------------------------------
class RelationshipCalculator(gramps.gen.relationship.RelationshipCalculator):
"""
RelationshipCalculator Class
"""
def __init__(self):
gramps.gen.relationship.RelationshipCalculator.__init__(self)
def _make_roman(self, num):
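        # e.g. self._make_roman(4) -> 'IV', self._make_roman(1999) -> 'MCMXCIX'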
roman = ''
for v, r in [(1000, u'M'), (900, u'CM'), (500, u'D'), (400, u'CD'),
( 100, u'C'), ( 90, u'XC'), ( 50, u'L'), ( 40, u'XL'),
( 10, u'X'), ( 9, u'IX'), ( 5, u'V'), ( 4, u'IV'),
( 1, u'I')]:
            while num >= v:
num -= v
roman += r
return roman
def _fix_caps(self, string):
return re.sub(r'(?<=[^\s(/A-Z])[A-Z]', lambda m: m.group().lower(), string)
def _removed_text(self, degree, removed):
if (degree, removed) == (0, -2):
return u'Enkel'
elif (degree, removed) == (0, -3):
return u'Urenkel'
removed = abs(removed)
if removed < len(_removed):
return _removed[removed]
else:
return u'(%s)' % self._make_roman(removed-2)
def _degree_text(self, degree, removed):
if removed == 0:
degree -= 1 # a cousin has same degree as his parent (uncle/aunt)
if degree <= 1:
return u''
if degree < len(_ordinal):
return u' %sn Grades' % _ordinal[degree]
else:
return u' %d. Grades' % degree
def _gender_convert(self, gender):
if gender == Person.MALE:
return 'male'
elif gender == Person.FEMALE:
return 'female'
else:
return 'unknown'
def _get_relationship_string(self, Ga, Gb, gender,
reltocommon_a='', reltocommon_b='',
only_birth=True,
in_law_a=False, in_law_b=False):
common_ancestor_count = 0
if reltocommon_a == '':
reltocommon_a = self.REL_FAM_BIRTH
if reltocommon_b == '':
reltocommon_b = self.REL_FAM_BIRTH
if reltocommon_a[-1] in [self.REL_MOTHER, self.REL_FAM_BIRTH,
self.REL_FAM_BIRTH_MOTH_ONLY] and \
reltocommon_b[-1] in [self.REL_MOTHER, self.REL_FAM_BIRTH,
self.REL_FAM_BIRTH_MOTH_ONLY]:
common_ancestor_count += 1 # same female ancestor
if reltocommon_a[-1] in [self.REL_FATHER, self.REL_FAM_BIRTH,
self.REL_FAM_BIRTH_FATH_ONLY] and \
reltocommon_b[-1] in [self.REL_FATHER, self.REL_FAM_BIRTH,
self.REL_FAM_BIRTH_FATH_ONLY]:
common_ancestor_count += 1 # same male ancestor
degree = min(Ga, Gb)
removed = Ga-Gb
if degree == 0 and removed < 0:
# for descendants the "in-law" logic is reversed
(in_law_a, in_law_b) = (in_law_b, in_law_a)
rel_str = u''
pre = u''
post = u''
if in_law_b and degree == 0:
pre += u'Stief'
elif (not only_birth) or common_ancestor_count == 0:
pre += u'Stief-/Adoptiv'
if in_law_a and (degree, removed) != (1, 0):
# A "Schwiegerbruder" really is a "Schwager" (handled later)
pre += u'Schwieger'
if degree != 0 and common_ancestor_count == 1:
pre += u'Halb'
pre += self._removed_text(degree, removed)
post += self._degree_text(degree, removed)
if in_law_b and degree != 0 and (degree, removed) != (1, 0):
# A "Bruder (angeheiratet)" also is a "Schwager" (handled later)
post += u' (angeheiratet)'
if degree == 0:
# lineal relationship
if removed > 0:
rel_str = _lineal_up[gender]
elif removed < 0:
rel_str = _lineal_down[gender]
elif in_law_a or in_law_b:
rel_str = u'Partner'
else:
rel_str = u'Proband'
else:
# collateral relationship
if removed > 0:
rel_str = _collateral_up[gender]
elif removed < 0:
rel_str = _collateral_down[gender]
elif degree == 1:
if in_law_a or in_law_b:
if in_law_a and in_law_b:
rel_str = _schwippschwager[gender]
else:
rel_str = _schwager[gender]
else:
rel_str = _collateral_sib[gender]
else:
rel_str = _collateral_same[gender]
return self._fix_caps(rel_str % {'p': pre, 's': post})
def get_plural_relationship_string(self, Ga, Gb,
reltocommon_a='', reltocommon_b='',
only_birth=True,
in_law_a=False, in_law_b=False):
return self._get_relationship_string(Ga, Gb, 'many',
reltocommon_a, reltocommon_b,
only_birth, in_law_a, in_law_b)
def get_single_relationship_string(self, Ga, Gb, gender_a, gender_b,
reltocommon_a, reltocommon_b,
only_birth=True,
in_law_a=False, in_law_b=False):
return self._get_relationship_string(Ga, Gb,
self._gender_convert(gender_b),
reltocommon_a, reltocommon_b,
only_birth, in_law_a, in_law_b)
def get_sibling_relationship_string(self, sib_type, gender_a, gender_b,
in_law_a=False, in_law_b=False):
if sib_type in [self.NORM_SIB, self.UNKNOWN_SIB]:
# the NORM_SIB translation is generic and suitable for UNKNOWN_SIB
rel = self.REL_FAM_BIRTH
only_birth = True
elif sib_type == self.HALF_SIB_FATHER:
rel = self.REL_FAM_BIRTH_FATH_ONLY
only_birth = True
elif sib_type == self.HALF_SIB_MOTHER:
rel = self.REL_FAM_BIRTH_MOTH_ONLY
only_birth = True
elif sib_type == self.STEP_SIB:
rel = self.REL_FAM_NONBIRTH
only_birth = False
return self._get_relationship_string(1, 1,
self._gender_convert(gender_b),
rel, rel,
only_birth, in_law_a, in_law_b)
if __name__ == "__main__":
# Test function. Call it as follows from the command line (so as to find
# imported modules):
# export PYTHONPATH=/path/to/gramps/src
# python src/plugins/rel/rel_de.py
# (Above not needed here)
"""TRANSLATORS, copy this if statement at the bottom of your
rel_xx.py module, and test your work with:
python src/plugins/rel/rel_xx.py
"""
from gramps.gen.relationship import test
rc = RelationshipCalculator()
test(rc, True)
|
arunkgupta/gramps
|
gramps/plugins/rel/rel_de.py
|
Python
|
gpl-2.0
| 11,411
|
[
"Brian"
] |
b5ad36d4c5c59489e4aae9684f43ee97a7666cbba123634874f79412562d16d8
|
##############################################################################
# Minimal working example
# Parameter inference in Gaussian IID model
# using correlated pseudo-marginal Metropolis-Hastings
#
# (c) Johan Dahlin 2016 ( johan.dahlin (at) liu.se )
##############################################################################
import numpy as np
import matplotlib.pylab as plt
from state import smc
from para import pmh_correlatedRVs
from models import normalIID_2parameters
np.random.seed( 87655678 );
##############################################################################
# Arrange the data structures
##############################################################################
sm = smc.smcSampler();
pmh = pmh_correlatedRVs.stcPMH();
##############################################################################
# Setup the system
##############################################################################
sys = normalIID_2parameters.ssm()
sys.par = np.zeros((sys.nPar,1))
sys.par[0] = 0.50;
sys.par[1] = 0.30;
sys.par[2] = 0.10;
sys.T = 10;
sys.xo = 0.0;
##############################################################################
# Generate data
##############################################################################
sys.generateData();
##############################################################################
# Setup the parameters
##############################################################################
th = normalIID_2parameters.ssm()
th.nParInference = 1;
th.copyData(sys);
##############################################################################
# Setup the IS algorithm
##############################################################################
sm.filter = sm.SISrv;
sm.sortParticles = False;
sm.nPart = 10;
sm.resampFactor = 2.0;
sm.genInitialState = True;
##############################################################################
# Setup the PMH algorithm
##############################################################################
pmh.nIter = 30000;
pmh.nBurnIn = 10000;
pmh.nProgressReport = 5000;
pmh.rvnSamples = 1 + sm.nPart;
pmh.writeOutProgressToFile = False;
# Set initial parameters
pmh.initPar = sys.par;
# Settings for th proposal
pmh.invHessian = 1.0;
pmh.stepSize = 0.1;
# Settings for u proposal
pmh.alpha = 0.00;
##############################################################################
# Run the correlated pmMH algorithm
##############################################################################
# Correlated random numbers
pmh.sigmaU = 0.50
pmh.runSampler( sm, sys, th );
muCPMMH = pmh.th
iactC = pmh.calcIACT()
# Uncorrelated random numbers (standard pmMH)
pmh.sigmaU = 1.0
pmh.runSampler( sm, sys, th );
muUPMMH = pmh.th
iactU = pmh.calcIACT()
(iactC, iactU)
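# Note: IACT is the integrated autocorrelation time of the chain, so lower
# is better. The point of the comparison is that correlating the auxiliary
# random numbers (sigmaU < 1) correlates successive likelihood estimates,
# which typically improves mixing when few particles are used.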
##############################################################################
# Plot the comparison
##############################################################################
plt.figure(1);
plt.subplot(2,3,1);
plt.plot(muCPMMH[:,0]);
plt.xlabel("iteration");
plt.ylabel("mu (cpmMH)");
plt.subplot(2,3,2);
plt.hist(muCPMMH[:,0],normed=True);
plt.xlabel("mu");
plt.ylabel("posterior estimate (cpmMH)");
plt.subplot(2,3,3);
plt.acorr(muCPMMH[:,0],maxlags=100);
plt.axis((0,100,0.92,1))
plt.xlabel("lag");
plt.ylabel("acf of mu (cpmMH)");
plt.figure(1);
plt.subplot(2,3,4);
plt.plot(muUPMMH[:,0]);
plt.xlabel("iteration");
plt.ylabel("mu (pmMH)");
plt.subplot(2,3,5);
plt.hist(muUPMMH[:,0],normed=True);
plt.xlabel("mu");
plt.ylabel("posterior estimate (pmMH)");
plt.subplot(2,3,6);
plt.acorr(muUPMMH[:,0],maxlags=100);
plt.axis((0,100,0.92,1))
plt.xlabel("iteration");
plt.ylabel("acf of mu (pmMH)");
##############################################################################
# End of file
##############################################################################
|
compops/pmmh-correlated2015
|
scripts-mwe/mwe-gaussian-iid-1parameter.py
|
Python
|
gpl-3.0
| 4,164
|
[
"Gaussian"
] |
c389d2325a433f25bba169d59d2896407d8550092d01b60286bad7ea35e88f91
|
#!/usr/bin/env python
from operator import itemgetter
import sys
import numpy
import re
try: import psyco; psyco.full()
except: pass
def dag_array(dagf):
recs = {} #collections.defaultdict(list)
fh = open(dagf, 'r')
qname_len = 0
sname_len = 0
qchr_len = 0
schr_len = 0
for line in fh:
if line[0] == '#': continue
qchr, qname, qstart, qstop, schr, sname, sstart, sstop, score = line.rstrip("*,\n,+").split("\t")[:9]
if len(qchr) > qchr_len: qchr_len = len(qchr)
if len(schr) > schr_len: schr_len = len(schr)
if len(qname) > qname_len: qname_len = len(qname)
if len(sname) > sname_len: sname_len = len(sname)
if not (qname, sname) in recs: recs[(qname, sname)] = []
recs[(qname, sname)].append([qchr, qname, int(qstart), int(qstop), schr, sname, int(sstart), int(sstop), float(score)])
fh.close()
arr = []
for k in sorted(recs, key=itemgetter(1)):
arr.extend([li for li in sorted(recs[k], key=itemgetter(8))])
dag_names = ('qchr', 'qname', 'qstart', 'qstop', 'schr', 'sname', 'sstart', 'sstop', 'score')
dag_types = ['S', 'S', 'i4', 'i4', 'S', 'S', 'i4', 'i4', 'f8']
dag_types[0] += str(qchr_len)
dag_types[4] += str(schr_len)
dag_types[1] += str(qname_len)
dag_types[5] += str(sname_len)
return numpy.rec.array(arr, names=dag_names, formats=dag_types)
chrre = re.compile("(\d+)")
def get_chr(line):
try:
return re.search(chrre, line).groups(0)[0]
except:
print >>sys.stderr, line
sys.exit(2)
def blast_to_dag(blast_file, query, subject, qdups, sdups, get_chr=get_chr, condense=True):
if qdups:
qdups = frozenset([x.strip() for x in open(qdups)])
if sdups:
sdups = frozenset([x.strip() for x in open(sdups)])
#if query == subject: subject += "2"
qorg = query + "_"
sorg = subject + "_"
seen = {}
n_qdups = 0
n_sdups = 0
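# Standard blast tabular (-outfmt 6) columns, assumed here: qseqid sseqid
# pident length mismatch gapopen qstart qend sstart send evalue bitscore;
# hence line[-2:] below is (evalue, bitscore).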
for line in open(blast_file):
line = line.split("\t")
if line[0] in qdups: n_qdups += 1; continue
if line[1] in sdups: n_sdups += 1; continue
if condense:
key = line[0] + line[1]
evalue, score = map(float, line[-2:])
if key in seen and (seen[key][0] < evalue and seen[key][1] > score): continue
seen[key] = (evalue, score)
qinfo = line[0].split("||")
sinfo = line[1].split("||")
# it was just the name
if len(qinfo) > 1:
qchr = qinfo[0]
qlocs = [l.lstrip('0') for l in qinfo[1:3]]
if len(qinfo) > 4 and qinfo[4] == '-1':
qlocs.reverse()
else:
# a whole chromosome, use the locs it came with.
qlocs = line[6:8]
qchr = line[0]
# qchr = get_chr(line[0])
line[0] = line[0]+"||"+qlocs[0]+"||"+qlocs[1]
if len(sinfo) > 1:
schr = sinfo[0]
slocs = [l.lstrip('0') for l in sinfo[1:3]]
if len(sinfo) > 4 and sinfo[4] == '-1':
slocs.reverse()
else:
# a whole chromosome, use the locs it came with.
slocs = line[8:10]
schr = line[1]
# schr = get_chr(line[1])
line[1] = line[1]+"||"+slocs[0]+"||"+slocs[1]
print "\t".join([
qorg + qchr, line[0] + "||" + line[2], qlocs[0], qlocs[1]
,sorg + schr, line[1] + "||" + line[2], slocs[0], slocs[1], line[10]])
if qdups:
print >>sys.stderr, "removed %i dups from query " % n_qdups
if sdups:
print >>sys.stderr, "removed %i dups from subject" % n_sdups
if __name__ == "__main__":
import sys, os
import re
import cPickle
from optparse import OptionParser
usage = """
takes a tab-delimited blast file and converts it to the format used by
dagchainer and tandems.py. output is to STDOUT.
if (optional) files are given for query/subject_dups with format:
dupa_name
dupb_name
.
.
dupzzz_name
then any hits containing those are removed from the output.
"""
parser = OptionParser(usage)
parser.add_option("-b", "--blast_file", dest="blast_file", help="the name of the blast_file", default=False)
parser.add_option("-q", "--query", dest="query", help="the name of the query organism")
parser.add_option("-s", "--subject", dest="subject", help="the name of the subject organism")
parser.add_option("--query_dups", dest="query_dups", help="file containing list of query dups", default=[])
parser.add_option("--subject_dups", dest="subject_dups", help="file containing list of subject dups", default=[])
parser.add_option("-c","--condense", dest="condense", help="condense duplicate blast hits", action="store_false")
(options, _) = parser.parse_args()
condense=options.condense
if not options.blast_file:
sys.exit(parser.print_help())
blast_to_dag(options.blast_file, options.query, options.subject, options.query_dups, options.subject_dups, condense=condense)
|
asherkhb/coge
|
scripts/synmap/dag_tools.py
|
Python
|
bsd-2-clause
| 5,069
|
[
"BLAST"
] |
14744f65f4b2a45ae26db9b381f94e661973a074a9dfccdc6c1bdffd5735509d
|
"""
.. currentmodule:: pylayers.antprop.coverage
.. autosummary::
:members:
"""
from pylayers.util.project import *
#from pylayers.measures.mesuwb import *
from pylayers.simul.radionode import *
import pylayers.util.pyutil as pyu
from pylayers.util.utilnet import str2bool
from pylayers.gis.layout import Layout
import pylayers.antprop.loss as loss
import pylayers.antprop.deygout as dg
import pylayers.gis.ezone as ez
import pylayers.signal.standard as std
import matplotlib.cm as cm
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as m
from mpl_toolkits.axes_grid1 import make_axes_locatable
import ConfigParser
import pdb
import doctest
from itertools import product
try:
from mayavi import mlab
from tvtk.tools import visual
except:
print 'mayavi not installed'
class Coverage(PyLayers):
""" Handle Layout Coverage
Methods
-------
creategrid()
create a uniform grid for evaluating losses
cover()
run the coverage calculation
showPower()
display the map of received power
showLoss()
display the map of losses
Attributes
----------
All attributes are read from fileini in the ini directory of the
current project
_fileini
default coverage.ini
L : a Layout
nx : number of point on x
ny : number of point on y
tx : transmitter position
txpe : transmitter power emission level
show : boolean for automatic display power map
na : number of access point
"""
def __init__(self,_fileini='coverage.ini'):
""" object constructor
Parameters
----------
_fileini : string
name of the configuration file
Notes
-----
Coverage is described in an ini file.
Default file is coverage.ini and is placed in the ini directory of the current project.
"""
self.config = ConfigParser.ConfigParser()
self.config.read(pyu.getlong(_fileini,pstruc['DIRSIMUL']))
self.layoutopt = dict(self.config.items('layout'))
self.gridopt = dict(self.config.items('grid'))
self.apopt = dict(self.config.items('ap'))
self.rxopt = dict(self.config.items('rx'))
self.showopt = dict(self.config.items('show'))
# get the Layout
filename = self.layoutopt['filename']
if filename.endswith('lay'):
self.typ = 'indoor'
self.L = Layout(filename)
# get the receiving grid
self.nx = eval(self.gridopt['nx'])
self.ny = eval(self.gridopt['ny'])
if 'zgrid' in self.gridopt:
self.zgrid = eval(self.gridopt['zgrid'])
else:
self.zgrid = 1.0
self.mode = self.gridopt['mode']
assert self.mode in ['file','full','zone'], "Error reading grid mode "
self.boundary = eval(self.gridopt['boundary'])
self.filespa = self.gridopt['file']
#
# create grid
#
self.creategrid(mode=self.mode,boundary=self.boundary,_fileini=self.filespa)
self.dap = {}
for k in self.apopt:
kwargs = eval(self.apopt[k])
ap = std.AP(**kwargs)
self.dap[eval(k)] = ap
try:
self.L.Gt.nodes()
except:
pass
try:
self.L.dumpr()
except:
self.L.build()
self.L.dumpw()
else:
self.typ='outdoor'
self.E = ez.Ezone(filename)
self.E.loadh5()
self.E.rebase()
# The frequency is fixed from the AP nature
self.fGHz = np.array([])
#self.fGHz = eval(self.txopt['fghz'])
#self.tx = np.array((eval(self.txopt['x']),eval(self.txopt['y'])))
#self.ptdbm = eval(self.txopt['ptdbm'])
#self.framelengthbytes = eval(self.txopt['framelengthbytes'])
# receiver section
#self.rxsens = eval(self.rxopt['sensitivity'])
self.temperaturek = eval(self.rxopt['temperaturek'])
self.noisefactordb = eval(self.rxopt['noisefactordb'])
# show section
self.bshow = str2bool(self.showopt['show'])
def __repr__(self):
st=''
if self.typ=='indoor':
st = st+ 'Layout file : '+self.L._filename + '\n\n'
st = st + '-----list of Access Points ------'+'\n'
for k in self.dap:
st = st + self.dap[k].__repr__()+'\n'
st = st + '-----Rx------'+'\n'
st= st+ 'temperature (K) : '+ str(self.temperaturek) + '\n'
st= st+ 'noisefactor (dB) : '+ str(self.noisefactordb) + '\n\n'
st = st + '--- Grid ----'+'\n'
st= st+ 'mode : ' + str(self.mode) + '\n'
if self.mode!='file':
st= st+ 'nx : ' + str(self.nx) + '\n'
st= st+ 'ny : ' + str(self.ny) + '\n'
if self.mode=='zone':
st= st+ 'boundary (xmin,ymin,xmax,ymax) : ' + str(self.boundary) + '\n\n'
if self.mode=='file':
st = st+' filename : '+self.filespa+'\n'
return(st)
def creategrid(self,mode='full',boundary=[],_fileini=''):
""" create a grid
Parameters
----------
full : boolean
default (True) use all the layout area
boundary : (xmin,ymin,xmax,ymax)
if full is False the boundary argument is used
"""
if mode=="file":
self.RN = RadioNode(name='',
typ='rx',
_fileini = _fileini,
_fileant = 'def.vsh3')
self.grid =self.RN.position[0:2,:].T
else:
if mode=="full":
mi=np.min(self.L.Gs.pos.values(),axis=0)+0.01
ma=np.max(self.L.Gs.pos.values(),axis=0)-0.01
if mode=="zone":
assert boundary!=[]
mi = np.array([boundary[0],boundary[1]])
ma = np.array([boundary[2],boundary[3]])
x = np.linspace(mi[0],ma[0],self.nx)
y = np.linspace(mi[1],ma[1],self.ny)
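# cartesian product of the x and y samples: np.ix_ builds open mesh vectors
# and np.broadcast pairs them, giving an (nx*ny, 2) array of grid points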
self.grid=np.array((list(np.broadcast(*np.ix_(x, y)))))
self.ng = self.grid.shape[0]
def where1(self):
"""
Unfinished : Not sure this is the right place (too specific)
"""
M1 = UWBMeasure(1)
self.dap={}
self.dap[1]={}
self.dap[2]={}
self.dap[3]={}
self.dap[4]={}
self.dap[1]['p']=M1.rx[1,0:2]
self.dap[2]['p']=M1.rx[1,0:2]
self.dap[3]['p']=M1.rx[1,0:2]
self.dap[4]['p']=M1.rx[1,0:2]
for k in range(300):
try:
M = UWBMeasure(k)
tx = M.tx
self.grid=np.vstack((self.grid,tx[0:2]))
D = M.rx-tx[np.newaxis,:]
D2 = D*D
dist = np.sqrt(np.sum(D2,axis=1))[1:]
Emax = M.Emax()
Etot = M.Etot()[0]
try:
td1 = np.hstack((td1,dist[0]))
td2 = np.hstack((td2,dist[1]))
td3 = np.hstack((td3,dist[2]))
td4 = np.hstack((td4,dist[3]))
te1 = np.hstack((te1,Emax[0]))
te2 = np.hstack((te2,Emax[1]))
te3 = np.hstack((te3,Emax[2]))
te4 = np.hstack((te4,Emax[3]))
tt1 = np.hstack((tt1,Etot[0]))
tt2 = np.hstack((tt2,Etot[1]))
tt3 = np.hstack((tt3,Etot[2]))
tt4 = np.hstack((tt4,Etot[3]))
#tdist = np.hstack((tdist,dist))
#te = np.hstack((te,Emax))
except:
td1=np.array(dist[0])
td2=np.array(dist[1])
td3=np.array(dist[2])
td4=np.array(dist[3])
te1 =np.array(Emax[0])
te2 =np.array(Emax[1])
te3 =np.array(Emax[2])
te4 =np.array(Emax[3])
tt1 =np.array(Etot[0])
tt2 =np.array(Etot[1])
tt3 =np.array(Etot[2])
tt4 =np.array(Etot[3])
except:
pass
def cover(self,sinr=True,snr=True,best=True):
""" run the coverage calculation
Parameters
----------
sinr : boolean
snr : boolean
best : boolean
Examples
--------
.. plot::
:include-source:
>>> from pylayers.antprop.coverage import *
>>> C = Coverage()
>>> C.cover()
>>> f,a=C.show(typ='sinr',figsize=(10,8))
>>> plt.show()
Notes
-----
self.fGHz is an array, which means that Coverage is calculated at once
for a whole set of frequencies. In practice, this would be the center
frequency of a given standard channel.
This function is calling `loss.Losst` which calculates Losses along a
straight path.
In a future implementation we will abstract the EM solver in order to
make use of other calculation approaches such as a full or partial Ray
Tracing.
The following members variables are evaluated :
+ freespace Loss @ fGHz : PL() PathLoss (should be renamed FS as free space)
+ prdbmo : Received power in dBm .. math:`P_{rdBm} =P_{tdBm} - L_{odB}`
+ prdbmp : Received power in dBm .. math:`P_{rdBm} =P_{tdBm} - L_{pdB}`
+ snro : SNR polar o (H)
+ snrp : SNR polar p (V)
See Also
--------
pylayers.antprop.loss.Losst
pylayers.antprop.loss.PL
"""
#
# select active AP
#
lactiveAP = []
try:
del self.aap
del self.ptdbm
except:
pass
self.kB = 1.3806503e-23 # Boltzmann constant
#
# Loop over access points
#
for iap in self.dap:
if self.dap[iap]['on']:
lactiveAP.append(iap)
fGHz = self.dap[iap].s.fcghz
# The frequency band is set here
self.fGHz=np.unique(np.hstack((self.fGHz,fGHz)))
apchan = self.dap[iap]['chan']
try:
self.aap = np.vstack((self.aap,self.dap[iap]['p']))
self.ptdbm = np.vstack((self.ptdbm,self.dap[iap]['PtdBm']))
self.bmhz = np.vstack((self.bmhz,
self.dap[iap].s.chan[apchan[0]]['BMHz']))
except:
self.aap = self.dap[iap]['p']
self.ptdbm = np.array(self.dap[iap]['PtdBm'])
self.bmhz = np.array(self.dap[iap].s.chan[apchan[0]]['BMHz'])
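# Thermal noise power N = F*k*T*B, with F the receiver noise factor,
# k Boltzmann's constant, T the temperature (K) and B the bandwidth in Hz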
PnW = np.array((10**(self.noisefactordb/10.))*self.kB*self.temperaturek*self.bmhz*1e6)
# Evaluate Noise Power (in dBm)
self.pndbm = np.array(10*np.log10(PnW)+30)
#lchan = map(lambda x: self.dap[x]['chan'],lap)
#apchan = zip(self.dap.keys(),lchan)
#self.bmhz = np.array(map(lambda x: self.dap[x[0]].s.chan[x[1][0]]['BMHz']*len(x[1]),apchan))
self.ptdbm = self.ptdbm.T
self.pndbm = self.pndbm.T
# creating all links
# all grid to all ap
#
if len(self.pndbm.shape ) == 0:
self.ptdbm = self.ptdbm.reshape(1,1)
self.pndbm = self.pndbm.reshape(1,1)
p = product(range(self.ng),lactiveAP)
#
# pa : access point
# pg : grid point
#
# 1 x na
for k in p:
pg = self.grid[k[0],:]
pa = np.array(self.dap[k[1]]['p'])
# example with 3 AP
# 321 0
# 321 1
# 321 2
# 322 0
try:
self.pa = np.vstack((self.pa,pa))
except:
self.pa = pa
try:
self.pg = np.vstack((self.pg,pg))
except:
self.pg = pg
self.pa = self.pa.T
shpa = self.pa.shape
shpg = self.pg.shape
if shpa[0] != 3:
self.pa = np.vstack((self.pa,np.ones(shpa[1])))
self.pg = self.pg.T
self.pg = np.vstack((self.pg,self.zgrid*np.ones(shpg[0])))
self.nf = len(self.fGHz)
# retrieving dimensions along the 3 axis
na = len(lactiveAP)
self.na = na
ng = self.ng
nf = self.nf
for k,iap in enumerate(self.dap):
# select only one access point
u = na*np.arange(0,ng,1).astype('int')+k
if self.dap[iap]['on']:
pt = self.pa[:,u]
pr = self.pg[:,u]
azoffset = self.dap[iap]['phideg']*np.pi/180.
self.dap[iap].A.eval(fGHz=self.fGHz, pt=pt, pr=pr, azoffset=azoffset)
gain = (self.dap[iap].A.G).T
#pdb.set_trace()
# to handle omnidirectional antenna (nf,1,1)
if gain.shape[1]==1:
gain = np.repeat(gain,ng,axis=1)
try:
tgain = np.dstack((tgain,gain[:,:,None]))
except:
tgain = gain[:,:,None]
#Lwo,Lwp,Edo,Edp = loss.Losst(self.L,self.fGHz,self.pa,self.pg,dB=False)
Lwo,Lwp,Edo,Edp = loss.Losst(self.L,self.fGHz,self.pa,self.pg,dB=False)
self.Lwo = Lwo.reshape(nf,ng,na)
self.Edo = Edo.reshape(nf,ng,na)
self.Lwp = Lwp.reshape(nf,ng,na)
self.Edp = Edp.reshape(nf,ng,na)
freespace = loss.PL(self.fGHz,self.pa,self.pg,dB=False)
self.freespace = freespace.reshape(nf,ng,na)
# transmitting power
# f x g x a
# CmW : Received Power coverage in mW
self.CmWo = 10**(self.ptdbm[np.newaxis,...]/10.)*self.Lwo*self.freespace*tgain
self.CmWp = 10**(self.ptdbm[np.newaxis,...]/10.)*self.Lwp*self.freespace*tgain
if snr:
self.evsnr()
if sinr:
self.evsinr()
if best:
self.evbestsv()
def evsnr(self):
""" calculates signal to noise ratio
"""
NmW = 10**(self.pndbm/10.)[np.newaxis,:]
self.snro = self.CmWo/NmW
self.snrp = self.CmWp/NmW
def evsinr(self):
""" calculates sinr
"""
# na : number of access point
na = self.na
# U : 1 x 1 x na x na
U = (np.ones((na,na))-np.eye(na))[np.newaxis,np.newaxis,:,:]
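# U has zeros on its diagonal, so the einsum below sums, for each AP k,
# the power received from every other AP, i.e. the interference term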
# CmWo : received power in mW orthogonal polarization
# CmWp : received power in mW parallel polarization
ImWo = np.einsum('ijkl,ijl->ijk',U,self.CmWo)
ImWp = np.einsum('ijkl,ijl->ijk',U,self.CmWp)
NmW = 10**(self.pndbm/10.)[np.newaxis,:]
self.sinro = self.CmWo/(ImWo+NmW)
self.sinrp = self.CmWp/(ImWp+NmW)
def evbestsv(self):
""" determine the best server map
Notes
-----
C.bestsv
"""
na = self.na
ng = self.ng
nf = self.nf
# find best server regions
Vo = self.CmWo
Vp = self.CmWp
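# each grid point gets label ka+1 wherever AP ka delivers the maximum
# received power, which delineates the best-server regions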
self.bestsvo = np.empty(nf*ng*na).reshape(nf,ng,na)
self.bestsvp = np.empty(nf*ng*na).reshape(nf,ng,na)
for kf in range(nf):
MaxVo = np.max(Vo[kf,:,:],axis=1)
MaxVp = np.max(Vp[kf,:,:],axis=1)
for ka in range(na):
uo = np.where(Vo[kf,:,ka]==MaxVo)
up = np.where(Vp[kf,:,ka]==MaxVp)
self.bestsvo[kf,uo,ka]=ka+1
self.bestsvp[kf,up,ka]=ka+1
# def showEd(self,polar='o',**kwargs):
# """ shows a map of direct path excess delay
#
# Parameters
# ----------
#
# polar : string
# 'o' | 'p'
#
# Examples
# --------
#
# .. plot::
# :include-source:
#
# >> from pylayers.antprop.coverage import *
# >> C = Coverage()
# >> C.cover()
# >> C.showEd(polar='o')
#
# """
#
# if not kwargs.has_key('alphacy'):
# kwargs['alphacy']=0.0
# if not kwargs.has_key('colorcy'):
# kwargs['colorcy']='w'
# if not kwargs.has_key('nodes'):
# kwargs['nodes']=False
#
# fig,ax = self.L.showG('s',**kwargs)
# l = self.grid[0,0]
# r = self.grid[-1,0]
# b = self.grid[0,1]
# t = self.grid[-1,-1]
#
# cdict = {
# 'red' : ((0., 0.5, 0.5), (1., 1., 1.)),
# 'green': ((0., 0.5, 0.5), (1., 1., 1.)),
# 'blue' : ((0., 0.5, 0.5), (1., 1., 1.))
# }
# #generate the colormap with 1024 interpolated values
# my_cmap = m.colors.LinearSegmentedColormap('my_colormap', cdict, 1024)
#
# if polar=='o':
# prdbm=self.prdbmo
# if polar=='p':
# prdbm=self.prdbmp
#
#
#
# if polar=='o':
# mcEdof = np.ma.masked_where(prdbm < self.rxsens,self.Edo)
#
# cov=ax.imshow(mcEdof.reshape((self.nx,self.ny)).T,
# extent=(l,r,b,t),cmap = 'jet',
# origin='lower')
#
#
#
# # cov=ax.imshow(self.Edo.reshape((self.nx,self.ny)).T,
# # extent=(l,r,b,t),
# # origin='lower')
# titre = "Map of LOS excess delay, polar orthogonal"
#
# if polar=='p':
# mcEdpf = np.ma.masked_where(prdbm < self.rxsens,self.Edp)
#
# cov=ax.imshow(mcEdpf.reshape((self.nx,self.ny)).T,
# extent=(l,r,b,t),cmap = 'jet',
# origin='lower')
#
# # cov=ax.imshow(self.Edp.reshape((self.nx,self.ny)).T,
# # extent=(l,r,b,t),
# # origin='lower')
# titre = "Map of LOS excess delay, polar parallel"
#
# ax.scatter(self.tx[0],self.tx[1],linewidth=0)
# ax.set_title(titre)
#
# divider = make_axes_locatable(ax)
# cax = divider.append_axes("right", size="5%", pad=0.05)
# clb = fig.colorbar(cov,cax)
# clb.set_label('excess delay (ns)')
#
# if self.show:
# plt.show()
# return fig,ax
#
# def showPower(self,rxsens=True,nfl=True,polar='o',**kwargs):
# """ show the map of received power
#
# Parameters
# ----------
#
# rxsens : bool
# clip the map with rx sensitivity set in self.rxsens
# nfl : bool
# clip the map with noise floor set in self.pndbm
# polar : string
# 'o'|'p'
#
# Examples
# --------
#
# .. plot::
# :include-source:
#
# > from pylayers.antprop.coverage import *
# > C = Coverage()
# > C.cover()
# > C.showPower()
#
# """
#
# if not kwargs.has_key('alphacy'):
# kwargs['alphacy']=0.0
# if not kwargs.has_key('colorcy'):
# kwargs['colorcy']='w'
# if not kwargs.has_key('nodes'):
# kwargs['nodes']=False
# fig,ax = self.L.showG('s',**kwargs)
#
# l = self.grid[0,0]
# r = self.grid[-1,0]
# b = self.grid[0,1]
# t = self.grid[-1,-1]
#
# if polar=='o':
# prdbm=self.prdbmo
# if polar=='p':
# prdbm=self.prdbmp
#
## tCM = plt.cm.get_cmap('jet')
## tCM._init()
## alphas = np.abs(np.linspace(.0,1.0, tCM.N))
## tCM._lut[:-3,-1] = alphas
#
# title='Map of received power - Pt = ' + str(self.ptdbm) + ' dBm'+str(' fGHz =') + str(self.fGHz) + ' polar = '+polar
#
# cdict = {
# 'red' : ((0., 0.5, 0.5), (1., 1., 1.)),
# 'green': ((0., 0.5, 0.5), (1., 1., 1.)),
# 'blue' : ((0., 0.5, 0.5), (1., 1., 1.))
# }
#
# if not kwargs.has_key('cmap'):
# # generate the colormap with 1024 interpolated values
# cmap = m.colors.LinearSegmentedColormap('my_colormap', cdict, 1024)
# else:
# cmap = kwargs['cmap']
# #my_cmap = cm.copper
#
#
# if rxsens :
#
# ## values between the rx sensitivity and noise floor
# mcPrf = np.ma.masked_where((prdbm > self.rxsens)
# & (prdbm < self.pndbm),prdbm)
# # mcPrf = np.ma.masked_where((prdbm > self.rxsens) ,prdbm)
#
# cov1 = ax.imshow(mcPrf.reshape((self.nx,self.ny)).T,
# extent=(l,r,b,t),cmap = cm.copper,
# vmin=self.rxsens,origin='lower')
#
# ### values above the sensitivity
# mcPrs = np.ma.masked_where(prdbm < self.rxsens,prdbm)
# cov = ax.imshow(mcPrs.reshape((self.nx,self.ny)).T,
# extent=(l,r,b,t),
# cmap = cmap,
# vmin=self.rxsens,origin='lower')
# title=title + '\n black : Pr (dBm) < %.2f' % self.rxsens + ' dBm'
#
# else :
# cov=ax.imshow(prdbm.reshape((self.nx,self.ny)).T,
# extent=(l,r,b,t),
# cmap = cmap,
# vmin=self.pndbm,origin='lower')
#
# if nfl:
# ### values under the noise floor
# ### we first clip the value below the noise floor
# cl = np.nonzero(prdbm<=self.pndbm)
# cPr = prdbm
# cPr[cl] = self.pndbm
# mcPruf = np.ma.masked_where(cPr > self.pndbm,cPr)
# cov2 = ax.imshow(mcPruf.reshape((self.nx,self.ny)).T,
# extent=(l,r,b,t),cmap = 'binary',
# vmax=self.pndbm,origin='lower')
# title=title + '\n white : Pr (dBm) < %.2f' % self.pndbm + ' dBm'
#
#
# ax.scatter(self.tx[0],self.tx[1],s=10,c='k',linewidth=0)
#
# ax.set_title(title)
# divider = make_axes_locatable(ax)
# cax = divider.append_axes("right", size="5%", pad=0.05)
# clb = fig.colorbar(cov,cax)
# clb.set_label('Power (dBm)')
#
# if self.show:
# plt.show()
#
# return fig,ax
#
#
# def showTransitionRegion(self,polar='o'):
# """
#
# Notes
# -----
#
# See : "Analyzing the Transitional Region in Low Power Wireless Links"
# Marco Zuniga and Bhaskar Krishnamachari
#
# Examples
# --------
#
# .. plot::
# :include-source:
#
# > from pylayers.antprop.coverage import *
# > C = Coverage()
# > C.cover()
# > C.showTransitionRegion()
#
# """
#
# frameLength = self.framelengthbytes
#
# PndBm = self.pndbm
# gammaU = 10*np.log10(-1.28*np.log(2*(1-0.9**(1./(8*frameLength)))))
# gammaL = 10*np.log10(-1.28*np.log(2*(1-0.1**(1./(8*frameLength)))))
#
# PrU = PndBm + gammaU
# PrL = PndBm + gammaL
#
# fig,ax = self.L.showGs()
#
# l = self.grid[0,0]
# r = self.grid[-1,0]
# b = self.grid[0,1]
# t = self.grid[-1,-1]
#
# if polar=='o':
# prdbm=self.prdbmo
# if polar=='p':
# prdbm=self.prdbmp
#
# zones = np.zeros(np.shape(prdbm))
# #pdb.set_trace()
#
# uconnected = np.nonzero(prdbm>PrU)
# utransition = np.nonzero((prdbm < PrU)&(prdbm > PrL))
# udisconnected = np.nonzero(prdbm < PrL)
#
# zones[uconnected] = 1
# zones[utransition] = (prdbm[utransition]-PrL)/(PrU-PrL)
# cov = ax.imshow(zones.reshape((self.nx,self.ny)).T,
# extent=(l,r,b,t),cmap = 'BuGn',origin='lower')
#
# title='PDR region'
# ax.scatter(self.tx[0],self.tx[1],linewidth=0)
#
# ax.set_title(title)
# divider = make_axes_locatable(ax)
# cax = divider.append_axes("right", size="5%", pad=0.05)
# fig.colorbar(cov,cax)
# if self.show:
# plt.show()
#
def plot(self,**kwargs):
"""
"""
defaults = { 'typ': 'pr',
'grid': False,
'f' : 0,
'a' : 0,
'db':True,
'label':'',
'pol':'p',
'col':'b'
}
for k in defaults:
if k not in kwargs:
kwargs[k]=defaults[k]
if 'fig' in kwargs:
fig=kwargs['fig']
else:
fig=plt.figure()
if 'ax' in kwargs:
ax = kwargs['ax']
else:
ax = fig.add_subplot(111)
if kwargs['typ']=='pr':
if kwargs['a']!=-1:
if kwargs['pol']=='p':
U = self.CmWp[kwargs['f'],:,kwargs['a']]
if kwargs['pol']=='o':
U = self.CmWo[kwargs['f'],:,kwargs['a']]
else:
if kwargs['pol']=='p':
U = self.CmWp[kwargs['f'],:,:].reshape(self.na*self.ng)
else:
U = self.CmWo[kwargs['f'],:,:].reshape(self.na*self.ng)
if kwargs['db']:
U = 10*np.log10(U)
D = np.sqrt(np.sum((self.pa-self.pg)*(self.pa-self.pg),axis=0))
if kwargs['a']!=-1:
D = D.reshape(self.ng,self.na)
ax.semilogx(D[:,kwargs['a']],U,'.',color=kwargs['col'],label=kwargs['label'])
else:
ax.semilogx(D,U,'.',color=kwargs['col'],label=kwargs['label'])
return fig,ax
def show(self,**kwargs):
""" show coverage
Parameters
----------
typ : string
'pr' | 'sinr' | 'capacity' | 'loss' | 'best' | 'egd'
grid : boolean
polar : string
'o' | 'p'
best : boolean
draw best server contour if True
f : int
frequency index
a : int
access point index (-1 all access point)
Examples
--------
.. plot::
:include-source:
>>> from pylayers.antprop.coverage import *
>>> C = Coverage()
>>> C.cover()
>>> f,a = C.show(typ='pr',figsize=(10,8))
>>> plt.show()
>>> f,a = C.show(typ='best',figsize=(10,8))
>>> plt.show()
>>> f,a = C.show(typ='loss',figsize=(10,8))
>>> plt.show()
>>> f,a = C.show(typ='sinr',figsize=(10,8))
>>> plt.show()
See Also
--------
pylayers.gis.layout.Layout.showG
"""
defaults = { 'typ': 'pr',
'grid': False,
'polar':'p',
'f' : 0,
'a' :-1,
'db':True,
'cmap' :cm.jet,
'best':True
}
title = self.dap[self.dap.keys()[0]].s.name+ ' : '
for k in defaults:
if k not in kwargs:
kwargs[k]=defaults[k]
polar = kwargs['polar']
assert polar in ['p','o'],"polar wrongly defined in show coverage"
if 'fig' in kwargs:
if 'ax' in kwargs:
fig,ax=self.L.showG('s',fig=kwargs['fig'],ax=kwargs['ax'])
else:
fig,ax=self.L.showG('s',fig=kwargs['fig'])
else:
if 'figsize' in kwargs:
fig,ax=self.L.showG('s',figsize=kwargs['figsize'])
else:
fig,ax=self.L.showG('s')
# plot the grid
if kwargs['grid']:
for k in self.dap:
p = self.dap[k].p
ax.plot(p[0],p[1],'or')
f = kwargs['f']
a = kwargs['a']
typ = kwargs['typ']
assert typ in ['best','egd','sinr','snr','capacity','pr','loss'],"typ unknown in show coverage"
best = kwargs['best']
dB = kwargs['db']
# setting the grid
l = self.grid[0,0]
r = self.grid[-1,0]
b = self.grid[0,1]
t = self.grid[-1,-1]
if typ=='best':
title = title + 'Best server'+' fc = '+str(self.fGHz[f])+' GHz'+ ' polar : '+polar
for ka in range(self.na):
if polar=='p':
bestsv = self.bestsvp[f,:,ka]
if polar=='o':
bestsv = self.bestsvo[f,:,ka]
m = np.ma.masked_where(bestsv == 0,bestsv)
if self.mode!='file':
W = m.reshape(self.nx,self.ny).T
ax.imshow(W, extent=(l,r,b,t),
origin='lower',
vmin=1,
vmax=self.na+1)
else:
ax.scatter(self.grid[:,0],self.grid[:,1],c=m,s=20,linewidth=0)
ax.set_title(title)
else:
if typ=='egd':
title = title + 'excess group delay : '+' fc = '+str(self.fGHz[f])+' GHz'+ ' polar : '+polar
V = self.Edo if polar=='o' else self.Edp
dB = False
legcb = 'Delay (ns)'
if typ=='sinr':
title = title + 'SINR : '+' fc = '+str(self.fGHz[f])+' GHz'+ ' polar : '+polar
if dB:
legcb = 'dB'
else:
legcb = 'Linear scale'
if polar=='o':
V = self.sinro
if polar=='p':
V = self.sinrp
if typ=='snr':
title = title + 'SNR : '+' fc = '+str(self.fGHz[f])+' GHz'+ ' polar : '+polar
if dB:
legcb = 'dB'
else:
legcb = 'Linear scale'
if polar=='o':
V = self.snro
if polar=='p':
V = self.snrp
if typ=='capacity':
title = title + 'Capacity : '+' fc = '+str(self.fGHz[f])+' GHz'+ ' polar : '+polar
legcb = 'Mbit/s'
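# Shannon capacity C = B*log2(1+SINR); B is in MHz, so C comes out in Mbit/s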
if polar=='o':
V = self.bmhz.T[np.newaxis,:]*np.log(1+self.sinro)/np.log(2)
if polar=='p':
V = self.bmhz.T[np.newaxis,:]*np.log(1+self.sinrp)/np.log(2)
if typ=='pr':
title = title + 'Pr : '+' fc = '+str(self.fGHz[f])+' GHz'+ ' polar : '+polar
if dB:
legcb = 'dBm'
else:
legcb = 'mW'
if polar=='o':
V = self.CmWo
if polar=='p':
V = self.CmWp
if typ=='loss':
title = title + 'Loss : '+' fc = '+str(self.fGHz[f])+' GHz'+ ' polar : '+polar
if dB:
legcb = 'dB'
else:
legcb = 'Linear scale'
if polar=='o':
V = self.Lwo*self.freespace
if polar=='p':
V = self.Lwp*self.freespace
if a == -1:
V = np.max(V[f,:,:],axis=1)
else:
V = V[f,:,a]
# reshaping the data on the grid
if self.mode!='file':
U = V.reshape((self.nx,self.ny)).T
else:
U = V
if dB:
U = 10*np.log10(U)
if 'vmin' in kwargs:
vmin = kwargs['vmin']
else:
vmin = U.min()
if 'vmax' in kwargs:
vmax = kwargs['vmax']
else:
vmax = U.max()
if self.mode!='file':
img = ax.imshow(U,
extent=(l,r,b,t),
origin='lower',
vmin = vmin,
vmax = vmax,
cmap = kwargs['cmap'])
else:
img=ax.scatter(self.grid[:,0],
self.grid[:,1],
c=U,
s=20,
linewidth=0,
cmap=kwargs['cmap'],
vmin=vmin,
vmax=vmax)
for k in range(self.na):
ax.annotate(str(k),xy=(self.pa[0,k],self.pa[1,k]))
ax.set_title(title)
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
clb = fig.colorbar(img,cax)
clb.set_label(legcb)
if best:
if self.mode!='file':
if polar=='o':
ax.contour(np.sum(self.bestsvo,axis=2)[f,:].reshape(self.nx,self.ny).T,extent=(l,r,b,t),linestyles='dotted')
if polar=='p':
ax.contour(np.sum(self.bestsvp,axis=2)[f,:].reshape(self.nx,self.ny).T,extent=(l,r,b,t),linestyles='dotted')
# display access points
if a==-1:
ax.scatter(self.pa[0,:],self.pa[1,:],s=30,c='r',linewidth=0)
else:
ax.scatter(self.pa[0,a],self.pa[1,a],s=30,c='r',linewidth=0)
plt.tight_layout()
return(fig,ax)
# def showLoss(self,polar='o',**kwargs):
# """ show losses map
#
# Parameters
# ----------
#
# polar : string
# 'o'|'p'|'both'
#
# Examples
# --------
#
# .. plot::
# :include-source:
#
# >>> from pylayers.antprop.coverage import *
# >>> C = Coverage()
# >>> C.cover(polar='o')
# >>> f,a = C.show(typ='pr',figsize=(10,8))
# >>> plt.show()
# """
#
# fig = plt.figure()
# fig,ax=self.L.showGs(fig=fig)
#
# # setting the grid
#
# l = self.grid[0,0]
# r = self.grid[-1,0]
# b = self.grid[0,1]
# t = self.grid[-1,-1]
#
# Lo = self.freespace+self.Lwo
# Lp = self.freespace+self.Lwp
#
# # orthogonal polarization
#
# if polar=='o':
# cov = ax.imshow(Lo.reshape((self.nx,self.ny)).T,
# extent=(l,r,b,t),
# origin='lower',
# vmin = 40,
# vmax = 130)
# str1 = 'Map of losses, orthogonal (V) polarization, fGHz='+str(self.fGHz)
# title = (str1)
#
# # parallel polarization
# if polar=='p':
# cov = ax.imshow(Lp.reshape((self.nx,self.ny)).T,
# extent=(l,r,b,t),
# origin='lower',
# vmin = 40,
# vmax = 130)
# str2 = 'Map of losses, parallel polarization, fGHz='+str(self.fGHz)
# title = (str2)
#
# ax.scatter(self.tx[0],self.tx[1],s=10,c='k',linewidth=0)
# ax.set_title(title)
#
# divider = make_axes_locatable(ax)
# cax = divider.append_axes("right", size="5%", pad=0.05)
# clb = fig.colorbar(cov,cax)
# clb.set_label('Loss (dB)')
#
# if self.show:
# plt.show()
if (__name__ == "__main__"):
doctest.testmod()
|
dialounke/pylayers
|
pylayers/antprop/coverage.py
|
Python
|
mit
| 35,555
|
[
"Mayavi"
] |
366d359db4d3b2d5c0105f9679703d674e9ef5b69eec329c1dc8724c796cb13d
|
#
# Gramps - a GTK+/GNOME based genealogy program
#
# Copyright (C) 2000-2007 Donald N. Allingham
# Copyright (C) 2007-2008 Brian G. Matherly
# Copyright (C) 2009 Douglas S. Blank
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
#
"""
Display people who share a person's surname or given name.
"""
from gramps.gen.const import GRAMPS_LOCALE as glocale
_ = glocale.translation.gettext
ngettext = glocale.translation.ngettext # else "nearby" comments are ignored
from gramps.gen.simple import SimpleAccess, SimpleDoc
from gramps.gui.plug.quick import QuickTable
from gramps.gen.lib import Person
from gramps.gen.filters.rules import Rule
from gramps.gen.filters import GenericFilterFactory
class IncompleteSurname(Rule):
"""People with incomplete surnames"""
name = _('People with incomplete surnames')
description = _("Matches people with lastname missing")
category = _('General filters')
def apply(self, db, person):
for name in [person.get_primary_name()] + person.get_alternate_names():
if name.get_group_name() == "":
return True
return False
class SameSurname(Rule):
"""People with same surname"""
labels = [_('Substring:')]
name = _('People matching the <surname>')
description = _("Matches people with same lastname")
category = _('General filters')
def apply(self, db, person):
src = self.list[0].upper()
for name in [person.get_primary_name()] + person.get_alternate_names():
if name.get_surname() and name.get_surname().upper() == src:
return True
return False
class SameGiven(Rule):
"""People with same given name"""
labels = [_('Substring:')]
name = _('People matching the <given>')
description = _("Matches people with same given name")
category = _('General filters')
def apply(self, db, person):
src = self.list[0].upper()
for name in [person.get_primary_name()] + person.get_alternate_names():
if name.first_name:
anyNBSP = name.first_name.split('\u00A0')
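# a non-breaking space marks a compound given name (e.g. Jean\u00A0Baptiste),
# which is matched as a single unit against the search string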
if len(anyNBSP) > 1: # there was an NBSP, a non-breaking space
first_two = anyNBSP[0] + '\u00A0' + anyNBSP[1].split()[0]
if first_two.upper() == src:
return True
else:
name.first_name = ' '.join(anyNBSP[1].split()[1:])
if " " in name.first_name.strip():
for name in name.first_name.upper().strip().split():
if name == src.upper():
return True
elif name.first_name.upper() == src.upper():
return True
return False
class IncompleteGiven(Rule):
"""People with incomplete given names"""
name = _('People with incomplete given names')
description = _("Matches people with firstname missing")
category = _('General filters')
def apply(self, db, person):
for name in [person.get_primary_name()] + person.get_alternate_names():
if name.get_first_name() == "":
return True
return False
def run(database, document, person):
"""
Loops through the families that the person is a child in, and displays
the information about the other children.
"""
# setup the simple access functions
sdb = SimpleAccess(database)
sdoc = SimpleDoc(document)
stab = QuickTable(sdb)
if isinstance(person, Person):
surname = sdb.surname(person)
rsurname = person.get_primary_name().get_group_name()
else:
surname = person
rsurname = person
# display the title
sdoc.title(_("People sharing the surname '%s'") % surname)
sdoc.paragraph("")
stab.columns(_("Person"), _("Birth Date"), _("Name type"))
filter = GenericFilterFactory('Person')()
if rsurname != '':
rule = SameSurname([rsurname])
else:
rule = IncompleteSurname([])
filter.add_rule(rule)
people = filter.apply(database,
database.iter_person_handles())
matches = 0
for person_handle in people:
person = database.get_person_from_handle(person_handle)
stab.row(person, sdb.birth_or_fallback(person),
str(person.get_primary_name().get_type()))
matches += 1
document.has_data = matches > 0
sdoc.paragraph(
# Translators: leave all/any {...} untranslated
ngettext("There is {number_of} person "
"with a matching name, or alternate name.\n",
"There are {number_of} people "
"with a matching name, or alternate name.\n", matches
).format(number_of=matches) )
stab.write(sdoc)
def run_given(database, document, person):
"""
Loops through the families that the person is a child in, and displays
the information about the other children.
"""
# setup the simple access functions
sdb = SimpleAccess(database)
sdoc = SimpleDoc(document)
stab = QuickTable(sdb)
if isinstance(person, Person):
rgivenname = person.get_primary_name().get_first_name()
else:
rgivenname = person
if " " in rgivenname.strip():
rgivenname, second = rgivenname.strip().split(" ", 1)
# display the title
sdoc.title(_("People with the given name '%s'") % rgivenname)
sdoc.paragraph("")
stab.columns(_("Person"), _("Birth Date"), _("Name type"))
filter = GenericFilterFactory('Person')()
if rgivenname != '':
rule = SameGiven([rgivenname])
else:
rule = IncompleteGiven([])
filter.add_rule(rule)
people = filter.apply(database,
database.iter_person_handles())
matches = 0
for person_handle in people:
person = database.get_person_from_handle(person_handle)
stab.row(person, sdb.birth_or_fallback(person),
str(person.get_primary_name().get_type()))
matches += 1
document.has_data = matches > 0
sdoc.paragraph(
# Translators: leave all/any {...} untranslated
ngettext("There is {number_of} person "
"with a matching name, or alternate name.\n",
"There are {number_of} people "
"with a matching name, or alternate name.\n", matches
).format(number_of=matches) )
stab.write(sdoc)
|
SNoiraud/gramps
|
gramps/plugins/quickview/samesurnames.py
|
Python
|
gpl-2.0
| 7,150
|
[
"Brian"
] |
f7113b450d85cfbf9f55f752bfe6513c35ee7c497bc490d8c23b346ab14ca746
|
import os
import unittest
from __main__ import vtk, qt, ctk, slicer
#import slicer.modules.ChestImagingPlatform
#
# LungRegistration
#
class LungRegistration:
def __init__(self, parent):
parent.title = "LungRegistration" # TODO make this more human readable by adding spaces
parent.categories = ["Chest Imaging Platform"]
parent.dependencies = []
parent.contributors = ["Applied Chest Imaging Laboratory, Brigham and Women's Hospital"] # replace with "Firstname Lastname (Org)"
parent.helpText = """
Simple Lung registration module
"""
parent.acknowledgementText = """
This work is funded by the National Heart, Lung, And Blood Institute of the National Institutes of Health under Award Number R01HL116931. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
""" # replace with organization, grant and thanks.
self.parent = parent
# Add this test to the SelfTest module's list for discovery when the module
# is created. Since this module may be discovered before SelfTests itself,
# create the list if it doesn't already exist.
try:
slicer.selfTests
except AttributeError:
slicer.selfTests = {}
slicer.selfTests['LungRegistration'] = self.runTest
def runTest(self):
tester = LungRegistrationTest()
tester.runTest()
#
# qLungRegistrationWidget
#
class LungRegistrationWidget:
def __init__(self, parent = None):
if not parent:
self.parent = slicer.qMRMLWidget()
self.parent.setLayout(qt.QVBoxLayout())
self.parent.setMRMLScene(slicer.mrmlScene)
else:
self.parent = parent
self.layout = self.parent.layout()
if not parent:
self.setup()
self.parent.show()
def setup(self):
# Instantiate and connect widgets ...
#
# Reload and Test area
#
reloadCollapsibleButton = ctk.ctkCollapsibleButton()
reloadCollapsibleButton.text = "Reload && Test"
self.layout.addWidget(reloadCollapsibleButton)
reloadFormLayout = qt.QFormLayout(reloadCollapsibleButton)
# reload button
# (use this during development, but remove it when delivering
# your module to users)
self.reloadButton = qt.QPushButton("Reload")
self.reloadButton.toolTip = "Reload this module."
self.reloadButton.name = "LungRegistration Reload"
reloadFormLayout.addWidget(self.reloadButton)
self.reloadButton.connect('clicked()', self.onReload)
# reload and test button
# (use this during development, but remove it when delivering
# your module to users)
self.reloadAndTestButton = qt.QPushButton("Reload and Test")
self.reloadAndTestButton.toolTip = "Reload this module and then run the self tests."
reloadFormLayout.addWidget(self.reloadAndTestButton)
self.reloadAndTestButton.connect('clicked()', self.onReloadAndTest)
#
# Parameters Area
#
parametersCollapsibleButton = ctk.ctkCollapsibleButton()
parametersCollapsibleButton.text = "Parameters"
self.layout.addWidget(parametersCollapsibleButton)
# Layout within the dummy collapsible button
parametersFormLayout = qt.QFormLayout(parametersCollapsibleButton)
#
# input .vtk selector
#
self.inputVTKSelector = slicer.qMRMLNodeComboBox()
self.inputVTKSelector.nodeTypes = ( ("vtkMRMLModelNode"), "" )
self.inputVTKSelector.selectNodeUponCreation = True
self.inputVTKSelector.addEnabled = False
self.inputVTKSelector.removeEnabled = False
self.inputVTKSelector.noneEnabled = False
self.inputVTKSelector.showHidden = False
self.inputVTKSelector.showChildNodeTypes = False
self.inputVTKSelector.setMRMLScene( slicer.mrmlScene )
self.inputVTKSelector.setToolTip( "Pick the input convex hull to the algorithm." )
parametersFormLayout.addRow("Input .vtk atlas convex hull: ", self.inputVTKSelector)
#input CT image selector
self.inputSelector = slicer.qMRMLNodeComboBox()
self.inputSelector.nodeTypes = ( ("vtkMRMLScalarVolumeNode"), "" )
#self.inputSelector.addAttribute( "vtkMRMLScalarVolumeNode", "LabelMap", 0 )
self.inputSelector.selectNodeUponCreation = True
self.inputSelector.addEnabled = False
self.inputSelector.removeEnabled = False
self.inputSelector.noneEnabled = False
self.inputSelector.showHidden = False
self.inputSelector.showChildNodeTypes = False
self.inputSelector.setMRMLScene( slicer.mrmlScene )
self.inputSelector.setToolTip( "Pick the input to the algorithm." )
parametersFormLayout.addRow("Input volume: ", self.inputSelector)
##
## atlas volume selector
##
self.atlasSelector = slicer.qMRMLNodeComboBox()
self.atlasSelector.nodeTypes = ( ("vtkMRMLScalarVolumeNode"), "" )
#self.leftAtlasSelector.addAttribute( "vtkMRMLScalarVolumeNode", "LabelMap", 1 )
self.atlasSelector.selectNodeUponCreation = True
self.atlasSelector.addEnabled = False
self.atlasSelector.removeEnabled = False
self.atlasSelector.noneEnabled = False
self.atlasSelector.showHidden = False
self.atlasSelector.showChildNodeTypes = False
self.atlasSelector.setMRMLScene( slicer.mrmlScene )
self.atlasSelector.setToolTip( "Pick the atlas volume." )
parametersFormLayout.addRow("Atlas Volume: ", self.atlasSelector)
#
##
## right atlas volume selector
##
#self.rightAtlasSelector = slicer.qMRMLNodeComboBox()
#self.rightAtlasSelector.nodeTypes = ( ("vtkMRMLScalarVolumeNode"), "" )
##self.rightAtlasSelector.addAttribute( "vtkMRMLScalarVolumeNode", "LabelMap", 2 )
#self.rightAtlasSelector.selectNodeUponCreation = True
#self.rightAtlasSelector.addEnabled = False
#self.rightAtlasSelector.removeEnabled = False
#self.rightAtlasSelector.noneEnabled = False
#self.rightAtlasSelector.showHidden = False
#self.rightAtlasSelector.showChildNodeTypes = False
#self.rightAtlasSelector.setMRMLScene( slicer.mrmlScene )
#self.rightAtlasSelector.setToolTip( "Pick the atlas volume." )
#parametersFormLayout.addRow("right Atlas Volume: ", self.rightAtlasSelector)
#
# output volume selector
#
self.outputSelector = slicer.qMRMLNodeComboBox()
self.outputSelector.nodeTypes = ( ("vtkMRMLScalarVolumeNode"), "" )
self.outputSelector.addAttribute( "vtkMRMLScalarVolumeNode", "LabelMap", 0 )
self.outputSelector.selectNodeUponCreation = False
self.outputSelector.addEnabled = True
self.outputSelector.removeEnabled = True
self.outputSelector.noneEnabled = False
self.outputSelector.showHidden = False
self.outputSelector.showChildNodeTypes = False
self.outputSelector.setMRMLScene( slicer.mrmlScene )
self.outputSelector.setToolTip( "Pick the output to the algorithm." )
parametersFormLayout.addRow("Output Volume: ", self.outputSelector)
#Add parameters:
self.numberOfIterations = qt.QSpinBox()
self.numberOfIterations.setRange(1,1000000)
self.numberOfIterations.setValue(200)
self.numberOfIterations.setToolTip( "Specify the number of iterations to find the transformation." )
parametersFormLayout.addRow("Number of iterations (Registration part): ", self.numberOfIterations)
self.boneThreshold = qt.QSpinBox()
self.boneThreshold.setRange(1,1000000)
self.boneThreshold.setValue(600)
self.boneThreshold.setToolTip( "Threshold value for bone. Any voxel having HU intensity greater than or equal to this value will be considered bone and will be added to the fixed point set." )
parametersFormLayout.addRow("Threshold value for bone (Registration part): ", self.boneThreshold)
#
# Apply Button
#
self.applyButton = qt.QPushButton("Register")
self.applyButton.toolTip = "Run the registration algorithm."
self.applyButton.enabled = False
parametersFormLayout.addRow(self.applyButton)
# connections
self.applyButton.connect('clicked(bool)', self.onApplyButton)
self.inputSelector.connect("currentNodeChanged(vtkMRMLNode*)", self.onSelect)
self.outputSelector.connect("currentNodeChanged(vtkMRMLNode*)", self.onSelect)
self.inputVTKSelector.connect("currentNodeChanged(vtkMRMLNode*)", self.onSelect)
self.atlasSelector.connect("currentNodeChanged(vtkMRMLNode*)", self.onSelect)
self.boneThreshold.connect("currentNodeChanged(vtkMRMLNode*)", self.onSelect)
#self.rightAtlasSelector.connect("currentNodeChanged(vtkMRMLNode*)", self.onSelect)
#self.outModel.connect("currentNodeChanged(vtkMRMLNode*)", self.onSelect)
#self.numberOfIterations.connect("currentNodeChanged(vtkMRMLNode*)", self.onSelect)
# Add vertical spacer
self.layout.addStretch(1)
def cleanup(self):
pass
def onSelect(self):
self.applyButton.enabled = self.outputSelector.currentNode()
def onApplyButton(self):
logic = LungRegistrationLogic()
print("Run the algorithm")
#logic.run(self.inputSelector.currentNode(), self.leftAtlasSelector.currentNode(), self.rightAtlasSelector.currentNode(),"~/TestConvexHull.vtk", self.numberOfIterations, self.outModel)
logic.run(self.inputSelector.currentNode(), self.atlasSelector.currentNode(),self.inputVTKSelector.currentNode(), self.numberOfIterations, self.boneThreshold,self.outputSelector.currentNode())
####need to specify output type for resample
def onReload(self,moduleName="LungRegistration"):
"""Generic reload method for any scripted module.
ModuleWizard will substitute correct default moduleName.
"""
import imp, sys, os, slicer
widgetName = moduleName + "Widget"
# reload the source code
# - set source file path
# - load the module to the global space
filePath = eval('slicer.modules.%s.path' % moduleName.lower())
p = os.path.dirname(filePath)
if not sys.path.__contains__(p):
sys.path.insert(0,p)
fp = open(filePath, "r")
globals()[moduleName] = imp.load_module(
moduleName, fp, filePath, ('.py', 'r', imp.PY_SOURCE))
fp.close()
# rebuild the widget
# - find and hide the existing widget
# - create a new widget in the existing parent
parent = slicer.util.findChildren(name='%s Reload' % moduleName)[0].parent().parent()
for child in parent.children():
try:
child.hide()
except AttributeError:
pass
# Remove spacer items
item = parent.layout().itemAt(0)
while item:
parent.layout().removeItem(item)
item = parent.layout().itemAt(0)
# delete the old widget instance
if hasattr(globals()['slicer'].modules, widgetName):
getattr(globals()['slicer'].modules, widgetName).cleanup()
# create new widget inside existing parent
globals()[widgetName.lower()] = eval(
'globals()["%s"].%s(parent)' % (moduleName, widgetName))
globals()[widgetName.lower()].setup()
setattr(globals()['slicer'].modules, widgetName, globals()[widgetName.lower()])
def onReloadAndTest(self,moduleName="LungRegistration"):
try:
self.onReload()
evalString = 'globals()["%s"].%sTest()' % (moduleName, moduleName)
tester = eval(evalString)
tester.runTest()
except Exception as e:
import traceback
traceback.print_exc()
qt.QMessageBox.warning(slicer.util.mainWindow(),
"Reload and Test", 'Exception!\n\n' + str(e) + "\n\nSee Python Console for Stack Trace")
#
# LungRegistrationLogic
#
class LungRegistrationLogic:
"""This class should implement all the actual
computation done by your module. The interface
should be such that other python code can import
this class and make use of the functionality without
requiring an instance of the Widget
"""
def __init__(self):
pass
def hasImageData(self,volumeNode):
"""This is a dummy logic method that
returns true if the passed in volume
node has valid image data
"""
if not volumeNode:
print('no volume node')
return False
if volumeNode.GetImageData() == None:
print('no image data')
return False
return True
def run(self,inputVolume,atlasVolume, convexHullVolume, numIterations, boneThreshold, outVolume):
"""
Run the actual algorithm
"""
print('In Run method')
"""
Generate Atlas convex Hull
"""
#convexHullVolume = "~/TestConvexHull.vtk"
#cliparameters = {
#"leftAtlasFileName" : leftAtlasVolume.GetID(),
#"rightAtlasFileName" : rightAtlasVolume.GetID(),
#"downsampleFactor" : 4,
#"outputFileName" : outModel.GetID(), #"~/TestConvexHull.vtk", *** should be a vtk file
#}
#GenerateAtlasConvexHull = slicer.modules.generateatlasconvexhull
#slicer.cli.run(GenerateAtlasConvexHull,None, cliparameters, wait_for_completion=True)
#C:\ChestImagingPlatformPrivate\Build\bin\Debug>RegisterLungAtlas -i 200 -m D:/Po
# stdoc/Data/LungAtlases/atlasConvexHull.vtk -c D:/Postdoc/Data/10360K/10360Kinsp
# .nhdr -o d:/Postdoc/Data/10360K/AtlasTo10360Kinsp.tfm
#"""
#Call RegisterLungAtlas cli, tfm intermediate file ?
#"""
#Define temporary .tfm file
f = qt.QTemporaryFile( slicer.app.temporaryPath+ "/RegisterLungAtlas-XXXXXX.tfm") #slicer.app.temporaryPath
f.open() # Create the file
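# The temporary .tfm file carries the atlas-to-CT transform produced by
# RegisterLungAtlas below into the subsequent ResampleLabelMap step.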
# Get model node by ID
modelNode = slicer.mrmlScene.GetNodeByID(convexHullVolume.GetID())
polyData = modelNode.GetPolyData()
cliparameters = {}
cliparameters['convexHullMeshFileName'] = convexHullVolume.GetID() #modelNode.GetID() #""/Users/rolaharmouche/Documents/Data/LungAtlases/atlasConvexHull.vtk" #
cliparameters['numberOfIterations'] = numIterations.value
cliparameters['boneThreshold'] = boneThreshold.value
cliparameters['outputTransformFileName'] = f.fileName()#"/Users/rolaharmouche/Documents/Data/tempdata/Test6.tfm" #outputTransform, slicer.app.temporarypath
cliparameters['ctFileName'] = inputVolume.GetID()
#cliparameters['ctFileName'] = "/Users/rolaharmouche/Documents/Data/COPDGene/14988Y/14988Y_INSP_STD_UAB_COPD/14988Y_INSP_STD_UAB_COPD_downsampled.nrrd"
#destructor delete stuff
RegisterLungAtlas = slicer.modules.registerlungatlas
cliNode = slicer.cli.run(RegisterLungAtlas,None, cliparameters, wait_for_completion=True)
#"""
#Call ResampleLabelMap cli, save the output volume directly
#"""
##ResampleLabelMap.exe -d D:/Postdoc/Data/10360K/10360Kinsp.nhdr -r D:/Postdoc/Data/10360K/10360KleftAtlas.nrrd -t
##D:/Postdoc/Data/10360K/AtlasTo10360Kinsp.tfm -l D:/Postdoc/Data/LungAtlases/leftLungAtlas.nhdr
#
cliparameters = {}
cliparameters['labelMapFileName'] = atlasVolume.GetID() # "/Users/rolaharmouche/Documents/Data/LungAtlases/leftLungAtlas.nhdr"
cliparameters['transformFileName'] = f.fileName()#"/Users/rolaharmouche/Documents/Data/tempdata/Test6.tfm"
cliparameters['resampledFileName'] = outVolume.GetID() #"~/Test.nrrd" #
cliparameters['destinationFileName'] = inputVolume.GetID() #"/Users/rolaharmouche/Documents/Data/COPDGene/14988Y/14988Y_INSP_STD_UAB_COPD/14988Y_INSP_STD_UAB_COPD_downsampled.nrrd"
cliparameters['isInvertTransformation'] =True
ResampleLabelMap = slicer.modules.resamplelabelmap
cliNode = slicer.cli.run(ResampleLabelMap,None, cliparameters, wait_for_completion=True) #use qt assistant
return True
class LungRegistrationTest(unittest.TestCase):
"""
This is the test case for your scripted module.
"""
def delayDisplay(self,message,msec=1000):
"""This utility method displays a small dialog and waits.
This does two things: 1) it lets the event loop catch up
to the state of the test so that rendering and widget updates
have all taken place before the test continues and 2) it
shows the user/developer/tester the state of the test
so that we'll know when it breaks.
"""
print(message)
self.info = qt.QDialog()
self.infoLayout = qt.QVBoxLayout()
self.info.setLayout(self.infoLayout)
self.label = qt.QLabel(message,self.info)
self.infoLayout.addWidget(self.label)
qt.QTimer.singleShot(msec, self.info.close)
self.info.exec_()
def setUp(self):
""" Do whatever is needed to reset the state - typically a scene clear will be enough.
"""
slicer.mrmlScene.Clear(0)
def runTest(self):
"""Run as few or as many tests as needed here.
"""
self.setUp()
self.test_LungRegistration1()
def test_LungRegistration1(self):
""" Ideally you should have several levels of tests. At the lowest level
tests should exercise the functionality of the logic with different inputs
(both valid and invalid). At higher levels your tests should emulate the
way the user would interact with your code and confirm that it still works
the way you intended.
One of the most important features of the tests is that it should alert other
developers when their changes will have an impact on the behavior of your
module. For example, if a developer removes a feature that you depend on,
your test should break so they know that the feature is needed.
"""
self.delayDisplay("Starting the test")
#
# first, get some data
#
import urllib.request, urllib.parse, urllib.error
downloads = (
('http://slicer.kitware.com/midas3/download?items=5767', 'FA.nrrd', slicer.util.loadVolume),
)
for url,name,loader in downloads:
filePath = slicer.app.temporaryPath + '/' + name
if not os.path.exists(filePath) or os.stat(filePath).st_size == 0:
print(('Requesting download %s from %s...\n' % (name, url)))
urllib.request.urlretrieve(url, filePath)
if loader:
print(('Loading %s...\n' % (name,)))
loader(filePath)
self.delayDisplay('Finished with download and loading\n')
volumeNode = slicer.util.getNode(pattern="FA")
logic = LungRegistrationLogic()
self.assertTrue( logic.hasImageData(volumeNode) )
self.delayDisplay('Test passed!')
|
acil-bwh/SlicerCIP
|
Scripted/attic/LungRegistration/LungRegistration.py
|
Python
|
bsd-3-clause
| 18,130
|
[
"VTK"
] |
4b40259fab9fca21918534b4c719ede5e4de1fc84646de39d1f954fbbd232fb7
|
#!/usr/bin/python
# rmbh11
# v0.72
# corrects and builds on release version - v0.5
######################################
# USAGE: prepFragsLAMW.py database-prefix file.fasta > logfile
# Originally written by Ryan Hoffmann from Wolynes group
# Modified by Shubham Tripathi 10/21/2016
#######################################################################
# NOTE: Before running this script, please make sure the fasta
# file contains only the sequences that have coordinates in the PDB file
######################################################################
#######################################################################
# brain_damage_flag = 0 --> Homologues allowed.
# brain_damage_flag = 1 --> Homologues excluded.
# brain_damage_flag = 2 --> Homologues only; pass the sequence identity cutoff as the final parameter.
#######################################################################
import sys
import os
import re
from IndexPdb import *
from Pdb2GroLib import *
from Bio.PDB.Polypeptide import * # func three_to_one()
from Bio import SeqIO
if len(sys.argv) != 6:
print "\n prepFragsLAMW.py database-prefix file.fasta N_mem brain_damage_flag (2/1/0 for ho/yes/no) cutoff \n\n"
print "#######################################################################"
print "#NOTE: Before running this script, please make sure the fasta file "
print "#contains only the sequences that have coordinates in the PDB file"
print "######################################################################"
exit()
################################################################
# NoMissingAtoms function
def NoMissingAtoms(atom_list, residue_list, res_Start, pdbID, ch_name, pdbFile):
res_End = res_Start + len(residue_list) - 1
p = PDBParser(PERMISSIVE=1)
s = p.get_structure(pdbID, pdbFile)
chains = s[0].get_list()
if ch_name == '':
ch_name = "A"
keys_res = {}
keys = {}
for chain in chains:
if chain.get_id() == ch_name:
i = 0
for res in chain:
res_index = res.get_id()[1]
if (res_index < res_Start):
continue
if (res_index > res_End and i == 0):
print "Residue index shifted: ", res_index, "mismatch: ", res_Start
return False
if (res_index > res_End):
break
is_regular_res = res.has_id('N') and res.has_id(
'CA') and res.has_id('C')
res_id = res.get_id()[0]
if not (res_id == ' ' or res_id == 'H_MSE' or res_id == 'H_M3L' or res_id == 'H_CAS') and is_regular_res:
print 'Discard Fragment: Non-regular residue:', res.get_id()[0], 'at position', res_index, 'in pdb:', pdbID
return False
res_name = res.get_resname()
# convert to 1-letter code
if res_name == 'MSE':
res_code = 'M'
elif res_name == 'M3L':
res_code = 'K'
elif res_name == 'CAS':
res_code = 'C'
else:
res_code = three_to_one(res_name)
# Add sanity check, residues have to match the blast-out seq
if (res_code != residue_list[i]):
print "Mismatching residue in the PDB file:", pdbID, "residue :", res_code
return False
i += 1
keys = {}
if res_name == 'GLY': # GLY has no CB atoms
keys['CB'] = 1
for atom in res:
atom_name = atom.get_name()
for target_atom_name in atom_list:
if atom_name == target_atom_name:
keys[target_atom_name] = 1
# print "matching:", atom_name
if len(keys) == len(atom_list):
break
if len(keys) == len(atom_list):
# print "matching res:", res_index
keys_res[res_index] = 1
if len(keys_res) == res_End - res_Start + 1:
return True
else:
print "Missing CA or CB in the residues for PDB ", pdbID, ch_name
print "Good residues: "
for j in keys_res:
print j
return False
# NoMissingAtoms function
################################################################
database = sys.argv[1]
fasta = sys.argv[2]
natID = fasta.split('.')
natID = natID[0]
N_mem = int(sys.argv[3])
brain_damage = int(sys.argv[4])
inFASTA = open(fasta, 'r')
weight = 1 # feature in match file
h_f = open('homologues.txt', 'w')
find_homologue = "psiblast -db " + database + " -query " + fasta + \
" -num_iterations 1 -word_size 2 -evalue 0.005 -matrix BLOSUM62 -outfmt '6 sseqid slen bitscore score evalue pident'"
homologue_out = os.popen(find_homologue).read()
homologues = homologue_out.splitlines()
print homologue_out.strip().split('\n')
for line in homologues:
h_f.write(line + '\n')
h_f.close()
iden_cutoff = float(sys.argv[5])
full_list = []
print iden_cutoff
f = open('homologues.txt', 'r')
iden_list = []
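# homologues.txt columns follow the psiblast outfmt above:
# sseqid slen bitscore score evalue pident, so l[5] is the percent identity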
for line in f:
l = line.strip().split()
score = float(l[5])
full_list.append(l[0])
if score > iden_cutoff:
name = l[0]
iden_list.append(name)
print iden_list
f.close()
from Bio import SeqIO
inseq = SeqIO.read(inFASTA, 'fasta')
print "processing: ", inseq.name
query = str(inseq.name)[0:4]
myhome = os.environ.get("HOME")
pdbDir = myhome + "/opt/script/PDBs/"
indexDir = myhome + "/opt/script/indices/"
# fLibDir = myhome + "/fraglib/"
fLibDir = "fraglib/"
pdbSeqres = myhome + "/opt/script/pdb_seqres.txt"
fasta_database = database + ".fasta"
# Index database fasta file
if not os.path.isfile(fasta_database):
print "Can't find database fasta file"
exit()
seq_records = SeqIO.index(fasta_database, "fasta")
fragmentLength = 9 # needs to be an odd number
memoriesPerPosition = N_mem # can be any integer > 0
# needs to be large enough that PSI-BLAST returns at least memoriesPerPosition
EvalueThreshold = 10000
# SANITY CHECKING
# is length greater than fragmentLength?
if(len(inseq.seq) < fragmentLength):
print "Exception::query sequence is smaller than " + str(fragmentLength) + " residues"
print "This version has no means to handle smaller queries"
sys.exit()
# Create necessary directories
if not os.path.exists(indexDir):
os.makedirs(indexDir)
if not os.path.exists(pdbDir):
os.makedirs(pdbDir)
if not os.path.exists(fLibDir):
os.makedirs(fLibDir)
if not os.path.exists(pdbDir) or not os.path.exists(fLibDir) or not os.path.exists(indexDir):
print "Can't create necessary directories"
sys.exit()
# open match file
match = open('prepFrags.match', 'w')
match.write(query + "\n")
# FRAGMENT GENERATION LOOP
iterations = len(inseq.seq) - fragmentLength + 1 # number of sliding windows
for i in range(1, iterations + 1):
# select subrange
print "window position:::" + str(i)
rangeStart = i - 1
rangeEnd = i + fragmentLength - 1
subrange = str(inseq[rangeStart:rangeEnd].seq)
fragment = open('fragment.fasta', 'w')
print "fragment subrange:::" + subrange
fragment.write(subrange)
fragment.close()
# submit PSI-BLAST
# run "psiblast -help" for more details of output format (outfmt)
exeline = "psiblast -num_iterations 1 -word_size 2 -evalue " + \
str(EvalueThreshold)
exeline += " -outfmt '6 sseqid qstart qend sstart send qseq sseq length gaps bitscore evalue' -matrix BLOSUM62 -db "
exeline += database + " -query fragment.fasta"
print "executing:::" + exeline
psiblastOut = os.popen(exeline).read()
psiblastOut = psiblastOut.splitlines() # now an array
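    # each line follows the -outfmt spec above; illustrative example only:
    #   1ABCA  1  9  23  31  KLMNPQRST  KLMNPQRST  9  0  25.4  0.001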
print "Number of searched PDBs: ", len(psiblastOut)
# print psiblastOut
# exit()
# print "PDB INSEQ-START INSEQ-END MATCH-START MATCH-END EVALUE"
for line in psiblastOut: # [0:memoriesPerPosition]:
this = line.split()
this.append(str(i))
print this
        # column layout after split (matches the -outfmt string above,
        # plus the appended window index):
        # 0:sseqid 1:qstart 2:qend 3:sstart 4:send 5:qseq 6:sseq
        # 7:length 8:gaps 9:bitscore 10:evalue 11:window_index
        queryStart = int(this[1]) + rangeStart  # qstart mapped onto the full query
        queryEnd = rangeStart + int(this[2])  # qend mapped onto the full query
        # overwrite fragment-local coordinates with full-query coordinates
        this[1] = str(queryStart)
        this[2] = str(queryEnd)
out = ' '.join(this)
out += '\n'
gaps = this[8]
if(gaps == '0'): # skip gapped alignments
match.write(out)
#out=this[1]+' '+str(queryStart)+' '+str(queryEnd)+' '
# out+=this[8]+' '+this[9]+' '+str(weight)+"\n"
# delQuery=queryEnd-queryStart
# delAlign=int(this[9])-int(this[8])
# if residue ranges do not match, this alignment was gapped
#skip gapped alignments: ################################
# if ((delQuery-delAlign)==0):
# match.write(out)
match.close()
match = open('prepFrags.match', 'r') # match is read-only now
LAMWmatch = open('frag.mem', 'w')
LAMWmatch.write('[Target]' + "\n")
LAMWmatch.write(query + "\n\n" + '[Memories]' + "\n")
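# frag.mem layout: a '[Target]' header with the query ID, then a
# '[Memories]' section of lines "groFile queryStart memStart length weight"
# (see the LAMWmatch.write calls below)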
log_match = open('log.mem', 'w')
# get pdbs
matchlines = list()
keys = {}
for line in match.readlines():
matchlines.append(line)
entries = line.split()
pdbfull = str(entries[0])
keys[pdbfull] = 1
unique = keys.keys()
from Bio.PDB.PDBParser import PDBParser
pdbparse = PDBParser(PERMISSIVE=1)
# atomLine=re.compile('\AATOM')
# Finding homologs
print inseq.seq
fragment = open('fragment.fasta', 'w')
fragment.write(str(inseq.seq))
fragment.close()
homo = {}
failed_pdb = {}
for pdbfull in unique:
pdbID = pdbfull[0:4].lower()
pdbIDsecond = pdbfull[1:2].lower()
pdbIDthird = pdbfull[2:3].lower()
chainID = pdbfull[4:5].lower()
failed_pdb[pdbID] = 0
homo[pdbID] = 0
if not os.path.isfile(pdbDir + pdbID.upper() + ".pdb"):
# from script 'pdbget' (original author unknown)
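        # the wwPDB FTP archive shards entries by the middle two characters
        # of the PDB ID, e.g. pdb1abc.ent.gz lives under .../divided/pdb/ab/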
exeline = "wget ftp://ftp.wwpdb.org/pub/pdb/data/structures/divided/pdb/"
exeline += pdbIDsecond + pdbIDthird + "/pdb" + pdbID + ".ent.gz"
os.system(exeline)
os.system("nice gunzip pdb" + pdbID + ".ent.gz; mv pdb" +
pdbID + ".ent " + pdbDir + pdbID.upper() + ".pdb")
if not os.path.isfile(pdbDir + pdbID.upper() + ".pdb"):
print ":::Cannot build PDB for PDB ID, failed to download:" + pdbID.upper()
failed_pdb[pdbID] = 1
if brain_damage == 1 or brain_damage == 2:
# blast the whole sequence to identify homologs Evalue 0.005
exeline = "psiblast -num_iterations 1 -word_size 2 -evalue 0.005"
exeline += " -outfmt '6 sseqid slen bitscore score evalue' -matrix BLOSUM62 -db "
exeline += database + " -query fragment.fasta"
print "brain damamge, finding homologs"
print "executing::: " + exeline
homoOut = os.popen(exeline).read()
homoOut = homoOut.splitlines() # now an array
for line in homoOut:
entries = line.split()
print entries
pdbfull = entries[0]
pdbID = pdbfull[0:4].lower()
homo[pdbID] = 1
print pdbID
line_num = 0
count = {}
# count number of mem per fragments
for i in range(1, iterations + 1):
count[str(i)] = 0
Missing_count = 0
Missing_pdb = {}
fastFile = "./tmp.fasta"
for line in matchlines:
    line_num += 1
    if line_num != 1:  # the first line of the match file is the query name
# print ":::here: match line:"+line.rstrip('\n')
entries = line.split()
windows_index_str = entries[11]
if count[windows_index_str] >= N_mem:
continue
pdbfull = str(entries[0])
pdbID = pdbfull[0:4].lower()
pdbIDsecond = pdbfull[1:2].lower()
pdbIDthird = pdbfull[2:3].lower()
chainID = pdbfull[4:5].lower()
groFile = fLibDir + pdbID + chainID + ".gro"
groName = pdbID + chainID + ".gro"
pdbFile = pdbDir + pdbID.upper() + ".pdb"
indexFile = indexDir + pdbID + chainID + ".index"
        if failed_pdb[pdbID]:  # PDBs that failed to download are still in matchlines; skip them
continue
# ignore homologs
# if brain_damage == 2:
# print natID
# print pdbID
if brain_damage == 1 and pdbfull.upper() in full_list:
print pdbID, " is a homolog, discard"
continue
        elif brain_damage == 2 and ((not homo[pdbID]) or (pdbfull.upper() in iden_list) or (pdbfull.upper() not in full_list)):
            # brain_damage == 2 keeps only homologs below the identity cutoff
            if pdbID.upper() in iden_list:
                print pdbID.upper()
            print pdbID, "is not a homologue. Discarding..."
continue
atoms_list = ('CA', 'CB')
residue_list = entries[6] # sseq
res_Start = int(entries[3])
res_End = int(entries[4])
print "start: ", res_Start, "end: ", res_End
# check missing atoms
# have to check residue list, not residue index.
# if NoMissingAtoms(atoms_list, residue_list, res_Start, pdbID,
# chainID.upper(), pdbFile):
# Do I have the index file?
# No, write it
if not os.path.isfile(indexFile):
# generate fasta file
seq_id = pdbID.upper() + chainID.upper()
handle = open(fastFile, "w")
SeqIO.write(seq_records[seq_id], handle, "fasta")
handle.close()
# print str(seq_records[seq_id].seq)
# if not os.path.isfile(pdbSeqres):
# print "Need to download pdb_seqres.txt from PDB!"
# print "ftp://ftp.wwpdb.org/pub/pdb/derived_data/pdb_seqres.txt"
# print "Copy to $HOME/opt/script/"
# exit()
# fastaFile=pdbID+'_'+chainID.upper()
# exeline="grep -A1 "+fastaFile+" "+pdbSeqres+" > ./tmp.fasta"
# os.popen(exeline)
# write index file
if os.path.getsize('tmp.fasta') > 0:
print "Writing indexFile: ", indexFile
writeIndexFile(fastFile, pdbFile, indexFile, chainID.upper())
# Read index file
if not os.path.isfile(indexFile):
print "Can't create index file, ignore and go on!"
continue
index = open(indexFile, 'r')
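        # index file layout (as consumed below): line 1 is a flag, one of
        # SKIP, FULLMATCH, SHIFT or INDEXED; for SHIFT, line 2 holds the
        # integer offset; for INDEXED, each later line is
        # "seq_id res_id res_name", with res_id == -1 marking a missing residue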
# create new_index for frag_seq starting position
line_count = 0
flag = ' '
index_shift = 0
# read and get the flag
indexlines = list()
for index_line in index.readlines():
# print index_line
tmp_line = index_line.split()
line_count += 1
indexlines.append(index_line)
if line_count == 1: # first line is the flag
flag = tmp_line[0] # index_line
if flag == "SHIFT" and line_count == 2:
print "shift: ", tmp_line[0]
index_shift = int(tmp_line[0])
r_list = '' # list()
if flag == "SKIP":
Missing_pdb[pdbID] = 1
Missing_count += 1
print "***********", flag
print "SKIP pdb:", pdbID + chainID
continue
elif flag == "FULLMATCH":
new_index = int(entries[3])
r_list = residue_list
print "***********", flag
elif flag == "SHIFT":
new_index = int(entries[3]) + index_shift
r_list = residue_list
print "***********", flag
elif flag == "INDEXED":
print "***********", flag
# check if there is gaps
count_flag = 0
line_count1 = 0
for index_line in indexlines:
line_count1 += 1
if not line_count1 == 1:
index_entries = index_line.split()
seq_id = int(index_entries[0])
res_id = int(index_entries[1])
# print "seq_id:", seq_id, "res_id:", res_id
if seq_id < res_Start:
continue
if seq_id > res_End:
break
if res_id == -1:
print "Missing residues in PDB: ", pdbID + chainID
break
if count_flag == 0:
new_index = res_id
count_flag += 1
res_nm = index_entries[2]
# print "res_name: ", res_nm
# r_list.append(res_nm)
r_list += res_nm
# print r_list
if r_list != residue_list:
print "Missing residues: ", pdbID + chainID, residue_list, " incomplete: ", r_list
# print
Missing_pdb[pdbID] = 1
Missing_count += 1
continue
if os.path.isfile(pdbFile):
if not os.path.isfile(groFile):
Pdb2Gro(pdbFile, groFile, chainID.upper())
print ":::convert: " + pdbFile + " --> " + groFile
count[windows_index_str] += 1
print ":::here2: writing line to LAMWmatch\n"
length = res_End - res_Start + 1
            out = groFile + ' ' + entries[1] + ' '  # query start (full-query coords)
# out+=entries[3]+' '+str(length)+' '+str(weight)+"\n" #frag_seq
# start
out += str(new_index) + ' ' + str(length) + ' ' + \
str(weight) + "\n" # frag_seq start
LAMWmatch.write(out)
#out1 = out
out1 = windows_index_str
out1 += ' ' + str(count[windows_index_str])
out1 += ' ' + entries[9] + ' ' + entries[10] + ' ' + groName
out1 += ' ' + \
entries[1] + ' ' + str(new_index) + ' ' + \
str(length) + ' ' + str(weight) + "\n"
log_match.write(out1)
else:
print pdbFile, "does not exist! Go figure..."
if brain_damage == 1:
for line in homoOut:
entries = line.split()
print "HOMOLOGS:::"
print entries
print "memories per position that is fewer than expected:"
for i in count:
if count[i] < N_mem:
print i, count[i]
# print "MemPerPosition: ", count
print "Number of blasted PDB: ", len(failed_pdb)
print "Number of failed downloaded PDB: ", sum(failed_pdb.values())
print "Number of PDB with Missing atoms: ", len(Missing_pdb)
print "Discarded fragments with Missing atoms: ", Missing_count
|
luwei0917/awsemmd_script
|
MultCha_prepFrags_index_HO.py
|
Python
|
mit
| 18,545
|
[
"BLAST"
] |
5ff35b45880c9eda90101dbad8f2382eb44029358b209e967a01243f008048d5
|
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright 2011 Rackspace
# Copyright (c) 2011 X.commerce, a business unit of eBay Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mox
import shutil
import sys
import tempfile
from nova import context
from nova import db
from nova import exception
from nova import flags
from nova import log as logging
import nova.policy
from nova import rpc
from nova import test
from nova import utils
from nova.network import manager as network_manager
from nova.tests import fake_network
LOG = logging.getLogger(__name__)
HOST = "testhost"
networks = [{'id': 0,
'uuid': "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa",
'label': 'test0',
'injected': False,
'multi_host': False,
'cidr': '192.168.0.0/24',
'cidr_v6': '2001:db8::/64',
'gateway_v6': '2001:db8::1',
'netmask_v6': '64',
'netmask': '255.255.255.0',
'bridge': 'fa0',
'bridge_interface': 'fake_fa0',
'gateway': '192.168.0.1',
'broadcast': '192.168.0.255',
'dns1': '192.168.0.1',
'dns2': '192.168.0.2',
'vlan': None,
'host': HOST,
'project_id': 'fake_project',
'vpn_public_address': '192.168.0.2'},
{'id': 1,
'uuid': "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb",
'label': 'test1',
'injected': False,
'multi_host': False,
'cidr': '192.168.1.0/24',
'cidr_v6': '2001:db9::/64',
'gateway_v6': '2001:db9::1',
'netmask_v6': '64',
'netmask': '255.255.255.0',
'bridge': 'fa1',
'bridge_interface': 'fake_fa1',
'gateway': '192.168.1.1',
'broadcast': '192.168.1.255',
'dns1': '192.168.0.1',
'dns2': '192.168.0.2',
'vlan': None,
'host': HOST,
'project_id': 'fake_project',
'vpn_public_address': '192.168.1.2'}]
fixed_ips = [{'id': 0,
'network_id': 0,
'address': '192.168.0.100',
'instance_id': 0,
'allocated': False,
'virtual_interface_id': 0,
'floating_ips': []},
{'id': 0,
'network_id': 1,
'address': '192.168.1.100',
'instance_id': 0,
'allocated': False,
'virtual_interface_id': 0,
'floating_ips': []}]
flavor = {'id': 0,
'rxtx_cap': 3}
floating_ip_fields = {'id': 0,
'address': '192.168.10.100',
'pool': 'nova',
'interface': 'eth0',
'fixed_ip_id': 0,
'project_id': None,
'auto_assigned': False}
vifs = [{'id': 0,
'address': 'DE:AD:BE:EF:00:00',
'uuid': '00000000-0000-0000-0000-0000000000000000',
'network_id': 0,
'instance_id': 0},
{'id': 1,
'address': 'DE:AD:BE:EF:00:01',
'uuid': '00000000-0000-0000-0000-0000000000000001',
'network_id': 1,
'instance_id': 0},
{'id': 2,
'address': 'DE:AD:BE:EF:00:02',
'uuid': '00000000-0000-0000-0000-0000000000000002',
'network_id': 2,
'instance_id': 0}]
class FlatNetworkTestCase(test.TestCase):
def setUp(self):
super(FlatNetworkTestCase, self).setUp()
self.tempdir = tempfile.mkdtemp()
self.flags(logdir=self.tempdir)
self.network = network_manager.FlatManager(host=HOST)
temp = utils.import_object('nova.network.minidns.MiniDNS')
self.network.instance_dns_manager = temp
self.network.instance_dns_domain = ''
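        # MiniDNS is a simple file-backed DNS driver meant only for tests;
        # pointing logdir at a fresh tempdir keeps its state file isolated
        # between runs (hence the rmtree in tearDown)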
self.network.db = db
self.context = context.RequestContext('testuser', 'testproject',
is_admin=False)
def tearDown(self):
shutil.rmtree(self.tempdir)
super(FlatNetworkTestCase, self).tearDown()
def test_get_instance_nw_info(self):
fake_get_instance_nw_info = fake_network.fake_get_instance_nw_info
nw_info = fake_get_instance_nw_info(self.stubs, 0, 2)
self.assertFalse(nw_info)
nw_info = fake_get_instance_nw_info(self.stubs, 1, 2)
for i, (nw, info) in enumerate(nw_info):
nid = i + 1
check = {'bridge': 'fake_br%d' % nid,
'cidr': '192.168.%s.0/24' % nid,
'cidr_v6': '2001:db8:0:%x::/64' % nid,
'id': '00000000-0000-0000-0000-00000000000000%02d' % nid,
'multi_host': False,
'injected': False,
'bridge_interface': None,
'vlan': None}
self.assertDictMatch(nw, check)
check = {'broadcast': '192.168.%d.255' % nid,
'dhcp_server': '192.168.%d.1' % nid,
'dns': ['192.168.%d.3' % nid, '192.168.%d.4' % nid],
'gateway': '192.168.%d.1' % nid,
'gateway_v6': 'fe80::def',
'ip6s': 'DONTCARE',
'ips': 'DONTCARE',
'label': 'test%d' % nid,
'mac': 'DE:AD:BE:EF:00:%02x' % nid,
'rxtx_cap': 0,
'vif_uuid':
'00000000-0000-0000-0000-00000000000000%02d' % nid,
'should_create_vlan': False,
'should_create_bridge': False}
self.assertDictMatch(info, check)
check = [{'enabled': 'DONTCARE',
'ip': '2001:db8:0:1::%x' % nid,
'netmask': 64,
'gateway': 'fe80::def'}]
self.assertDictListMatch(info['ip6s'], check)
num_fixed_ips = len(info['ips'])
check = [{'enabled': 'DONTCARE',
'ip': '192.168.%d.%03d' % (nid, ip_num + 99),
'netmask': '255.255.255.0',
'gateway': '192.168.%d.1' % nid}
for ip_num in xrange(1, num_fixed_ips + 1)]
self.assertDictListMatch(info['ips'], check)
def test_validate_networks(self):
self.mox.StubOutWithMock(db, 'network_get')
self.mox.StubOutWithMock(db, 'network_get_all_by_uuids')
self.mox.StubOutWithMock(db, "fixed_ip_get_by_address")
requested_networks = [("bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb",
"192.168.1.100")]
db.network_get_all_by_uuids(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(networks)
db.network_get(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(networks[1])
ip = fixed_ips[1].copy()
ip['instance_id'] = None
db.fixed_ip_get_by_address(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(ip)
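        # mox record/replay: the stubbed calls recorded above become live
        # expectations once ReplayAll() is called, and each recorded call
        # must occur during the test or verification fails at teardown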
self.mox.ReplayAll()
self.network.validate_networks(self.context, requested_networks)
def test_validate_reserved(self):
context_admin = context.RequestContext('testuser', 'testproject',
is_admin=True)
nets = self.network.create_networks(context_admin, 'fake',
'192.168.0.0/24', False, 1,
256, None, None, None, None, None)
self.assertEqual(1, len(nets))
network = nets[0]
self.assertEqual(3, db.network_count_reserved_ips(context_admin,
network['id']))
def test_validate_networks_none_requested_networks(self):
self.network.validate_networks(self.context, None)
def test_validate_networks_empty_requested_networks(self):
requested_networks = []
self.mox.ReplayAll()
self.network.validate_networks(self.context, requested_networks)
def test_validate_networks_invalid_fixed_ip(self):
self.mox.StubOutWithMock(db, 'network_get_all_by_uuids')
requested_networks = [(1, "192.168.0.100.1")]
db.network_get_all_by_uuids(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(networks)
self.mox.ReplayAll()
self.assertRaises(exception.FixedIpInvalid,
self.network.validate_networks, self.context,
requested_networks)
def test_validate_networks_empty_fixed_ip(self):
self.mox.StubOutWithMock(db, 'network_get_all_by_uuids')
requested_networks = [(1, "")]
db.network_get_all_by_uuids(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(networks)
self.mox.ReplayAll()
self.assertRaises(exception.FixedIpInvalid,
self.network.validate_networks,
self.context, requested_networks)
def test_validate_networks_none_fixed_ip(self):
self.mox.StubOutWithMock(db, 'network_get_all_by_uuids')
requested_networks = [(1, None)]
db.network_get_all_by_uuids(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(networks)
self.mox.ReplayAll()
self.network.validate_networks(self.context, requested_networks)
def test_add_fixed_ip_instance_without_vpn_requested_networks(self):
self.mox.StubOutWithMock(db, 'network_get')
self.mox.StubOutWithMock(db, 'network_update')
self.mox.StubOutWithMock(db, 'fixed_ip_associate_pool')
self.mox.StubOutWithMock(db, 'instance_get')
self.mox.StubOutWithMock(db,
'virtual_interface_get_by_instance_and_network')
self.mox.StubOutWithMock(db, 'fixed_ip_update')
db.fixed_ip_update(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg())
db.virtual_interface_get_by_instance_and_network(mox.IgnoreArg(),
mox.IgnoreArg(), mox.IgnoreArg()).AndReturn({'id': 0})
db.instance_get(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn({'security_groups':
[{'id': 0}]})
db.instance_get(self.context,
1).AndReturn({'display_name': HOST,
'uuid': 'test-00001'})
db.instance_get(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn({'availability_zone': ''})
db.fixed_ip_associate_pool(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn('192.168.0.101')
db.network_get(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(networks[0])
db.network_update(mox.IgnoreArg(), mox.IgnoreArg(), mox.IgnoreArg())
self.mox.ReplayAll()
self.network.add_fixed_ip_to_instance(self.context, 1, HOST,
networks[0]['id'])
def test_mini_dns_driver(self):
zone1 = "example.org"
zone2 = "example.com"
driver = self.network.instance_dns_manager
driver.create_entry("hostone", "10.0.0.1", "A", zone1)
driver.create_entry("hosttwo", "10.0.0.2", "A", zone1)
driver.create_entry("hostthree", "10.0.0.3", "A", zone1)
driver.create_entry("hostfour", "10.0.0.4", "A", zone1)
driver.create_entry("hostfive", "10.0.0.5", "A", zone2)
driver.delete_entry("hostone", zone1)
driver.modify_address("hostfour", "10.0.0.1", zone1)
driver.modify_address("hostthree", "10.0.0.1", zone1)
names = driver.get_entries_by_address("10.0.0.1", zone1)
self.assertEqual(len(names), 2)
self.assertIn('hostthree', names)
self.assertIn('hostfour', names)
names = driver.get_entries_by_address("10.0.0.5", zone2)
self.assertEqual(len(names), 1)
self.assertIn('hostfive', names)
addresses = driver.get_entries_by_name("hosttwo", zone1)
self.assertEqual(len(addresses), 1)
self.assertIn('10.0.0.2', addresses)
self.assertRaises(exception.InvalidInput,
driver.create_entry,
"hostname",
"10.10.10.10",
"invalidtype",
zone1)
def test_instance_dns(self):
fixedip = '192.168.0.101'
self.mox.StubOutWithMock(db, 'network_get')
self.mox.StubOutWithMock(db, 'network_update')
self.mox.StubOutWithMock(db, 'fixed_ip_associate_pool')
self.mox.StubOutWithMock(db, 'instance_get')
self.mox.StubOutWithMock(db,
'virtual_interface_get_by_instance_and_network')
self.mox.StubOutWithMock(db, 'fixed_ip_update')
db.fixed_ip_update(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg())
db.virtual_interface_get_by_instance_and_network(mox.IgnoreArg(),
mox.IgnoreArg(), mox.IgnoreArg()).AndReturn({'id': 0})
db.instance_get(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn({'security_groups':
[{'id': 0}]})
db.instance_get(self.context,
1).AndReturn({'display_name': HOST,
'uuid': 'test-00001'})
db.instance_get(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn({'availability_zone': ''})
db.fixed_ip_associate_pool(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(fixedip)
db.network_get(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(networks[0])
db.network_update(mox.IgnoreArg(), mox.IgnoreArg(), mox.IgnoreArg())
self.mox.ReplayAll()
self.network.add_fixed_ip_to_instance(self.context, 1, HOST,
networks[0]['id'])
instance_manager = self.network.instance_dns_manager
addresses = instance_manager.get_entries_by_name(HOST,
self.network.instance_dns_domain)
self.assertEqual(len(addresses), 1)
self.assertEqual(addresses[0], fixedip)
addresses = instance_manager.get_entries_by_name('test-00001',
self.network.instance_dns_domain)
self.assertEqual(len(addresses), 1)
self.assertEqual(addresses[0], fixedip)
class VlanNetworkTestCase(test.TestCase):
def setUp(self):
super(VlanNetworkTestCase, self).setUp()
self.network = network_manager.VlanManager(host=HOST)
self.network.db = db
self.context = context.RequestContext('testuser', 'testproject',
is_admin=False)
def test_vpn_allocate_fixed_ip(self):
self.mox.StubOutWithMock(db, 'fixed_ip_associate')
self.mox.StubOutWithMock(db, 'fixed_ip_update')
self.mox.StubOutWithMock(db,
'virtual_interface_get_by_instance_and_network')
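        # vpn=True should hand out the network's reserved vpn_private_address,
        # hence the reserved=True argument in the expectation below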
db.fixed_ip_associate(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg(),
reserved=True).AndReturn('192.168.0.1')
db.fixed_ip_update(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg())
db.virtual_interface_get_by_instance_and_network(mox.IgnoreArg(),
mox.IgnoreArg(), mox.IgnoreArg()).AndReturn({'id': 0})
self.mox.ReplayAll()
network = dict(networks[0])
network['vpn_private_address'] = '192.168.0.2'
self.network.allocate_fixed_ip(None, 0, network, vpn=True)
def test_vpn_allocate_fixed_ip_no_network_id(self):
network = dict(networks[0])
network['vpn_private_address'] = '192.168.0.2'
network['id'] = None
context_admin = context.RequestContext('testuser', 'testproject',
is_admin=True)
self.assertRaises(exception.FixedIpNotFoundForNetwork,
self.network.allocate_fixed_ip,
context_admin,
0,
network,
vpn=True)
def test_allocate_fixed_ip(self):
self.mox.StubOutWithMock(db, 'fixed_ip_associate_pool')
self.mox.StubOutWithMock(db, 'fixed_ip_update')
self.mox.StubOutWithMock(db,
'virtual_interface_get_by_instance_and_network')
self.mox.StubOutWithMock(db, 'instance_get')
db.instance_get(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn({'security_groups':
[{'id': 0}]})
db.fixed_ip_associate_pool(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn('192.168.0.1')
db.fixed_ip_update(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg())
db.virtual_interface_get_by_instance_and_network(mox.IgnoreArg(),
mox.IgnoreArg(), mox.IgnoreArg()).AndReturn({'id': 0})
self.mox.ReplayAll()
network = dict(networks[0])
network['vpn_private_address'] = '192.168.0.2'
self.network.allocate_fixed_ip(self.context, 0, network)
def test_create_networks_too_big(self):
self.assertRaises(ValueError, self.network.create_networks, None,
num_networks=4094, vlan_start=1)
def test_create_networks_too_many(self):
self.assertRaises(ValueError, self.network.create_networks, None,
num_networks=100, vlan_start=1,
cidr='192.168.0.1/24', network_size=100)
def test_validate_networks(self):
def network_get(_context, network_id):
return networks[network_id]
self.stubs.Set(db, 'network_get', network_get)
self.mox.StubOutWithMock(db, 'network_get_all_by_uuids')
self.mox.StubOutWithMock(db, "fixed_ip_get_by_address")
requested_networks = [("bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb",
"192.168.1.100")]
db.network_get_all_by_uuids(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(networks)
fixed_ips[1]['network_id'] = networks[1]['id']
fixed_ips[1]['instance_id'] = None
db.fixed_ip_get_by_address(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(fixed_ips[1])
self.mox.ReplayAll()
self.network.validate_networks(self.context, requested_networks)
def test_validate_networks_none_requested_networks(self):
self.network.validate_networks(self.context, None)
def test_validate_networks_empty_requested_networks(self):
requested_networks = []
self.mox.ReplayAll()
self.network.validate_networks(self.context, requested_networks)
def test_validate_networks_invalid_fixed_ip(self):
self.mox.StubOutWithMock(db, 'network_get_all_by_uuids')
requested_networks = [(1, "192.168.0.100.1")]
db.network_get_all_by_uuids(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(networks)
self.mox.ReplayAll()
self.assertRaises(exception.FixedIpInvalid,
self.network.validate_networks, self.context,
requested_networks)
def test_validate_networks_empty_fixed_ip(self):
self.mox.StubOutWithMock(db, 'network_get_all_by_uuids')
requested_networks = [(1, "")]
db.network_get_all_by_uuids(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(networks)
self.mox.ReplayAll()
self.assertRaises(exception.FixedIpInvalid,
self.network.validate_networks,
self.context, requested_networks)
def test_validate_networks_none_fixed_ip(self):
self.mox.StubOutWithMock(db, 'network_get_all_by_uuids')
requested_networks = [(1, None)]
db.network_get_all_by_uuids(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(networks)
self.mox.ReplayAll()
self.network.validate_networks(self.context, requested_networks)
def test_floating_ip_owned_by_project(self):
ctxt = context.RequestContext('testuser', 'testproject',
is_admin=False)
# raises because floating_ip project_id is None
floating_ip = {'address': '10.0.0.1',
'project_id': None}
self.assertRaises(exception.NotAuthorized,
self.network._floating_ip_owned_by_project,
ctxt,
floating_ip)
# raises because floating_ip project_id is not equal to ctxt project_id
floating_ip = {'address': '10.0.0.1',
'project_id': ctxt.project_id + '1'}
self.assertRaises(exception.NotAuthorized,
self.network._floating_ip_owned_by_project,
ctxt,
floating_ip)
# does not raise (floating ip is owned by ctxt project)
floating_ip = {'address': '10.0.0.1',
'project_id': ctxt.project_id}
self.network._floating_ip_owned_by_project(ctxt, floating_ip)
def test_allocate_floating_ip(self):
ctxt = context.RequestContext('testuser', 'testproject',
is_admin=False)
def fake1(*args, **kwargs):
return {'address': '10.0.0.1'}
def fake2(*args, **kwargs):
return 25
def fake3(*args, **kwargs):
return 0
self.stubs.Set(self.network.db, 'floating_ip_allocate_address', fake1)
# this time should raise
self.stubs.Set(self.network.db, 'floating_ip_count_by_project', fake2)
self.assertRaises(exception.QuotaError,
self.network.allocate_floating_ip,
ctxt,
ctxt.project_id)
# this time should not
self.stubs.Set(self.network.db, 'floating_ip_count_by_project', fake3)
self.network.allocate_floating_ip(ctxt, ctxt.project_id)
def test_deallocate_floating_ip(self):
ctxt = context.RequestContext('testuser', 'testproject',
is_admin=False)
def fake1(*args, **kwargs):
pass
def fake2(*args, **kwargs):
return {'address': '10.0.0.1', 'fixed_ip_id': 1}
def fake3(*args, **kwargs):
return {'address': '10.0.0.1', 'fixed_ip_id': None}
self.stubs.Set(self.network.db, 'floating_ip_deallocate', fake1)
self.stubs.Set(self.network, '_floating_ip_owned_by_project', fake1)
# this time should raise because floating ip is associated to fixed_ip
self.stubs.Set(self.network.db, 'floating_ip_get_by_address', fake2)
self.assertRaises(exception.FloatingIpAssociated,
self.network.deallocate_floating_ip,
ctxt,
mox.IgnoreArg())
# this time should not raise
self.stubs.Set(self.network.db, 'floating_ip_get_by_address', fake3)
self.network.deallocate_floating_ip(ctxt, ctxt.project_id)
def test_associate_floating_ip(self):
ctxt = context.RequestContext('testuser', 'testproject',
is_admin=False)
def fake1(*args, **kwargs):
pass
# floating ip that's already associated
def fake2(*args, **kwargs):
return {'address': '10.0.0.1',
'pool': 'nova',
'interface': 'eth0',
'fixed_ip_id': 1}
# floating ip that isn't associated
def fake3(*args, **kwargs):
return {'address': '10.0.0.1',
'pool': 'nova',
'interface': 'eth0',
'fixed_ip_id': None}
# fixed ip with remote host
def fake4(*args, **kwargs):
return {'address': '10.0.0.1',
'pool': 'nova',
'interface': 'eth0',
'network_id': 'blah'}
def fake4_network(*args, **kwargs):
return {'multi_host': False, 'host': 'jibberjabber'}
# fixed ip with local host
def fake5(*args, **kwargs):
return {'address': '10.0.0.1',
'pool': 'nova',
'interface': 'eth0',
'network_id': 'blahblah'}
def fake5_network(*args, **kwargs):
return {'multi_host': False, 'host': 'testhost'}
def fake6(*args, **kwargs):
self.local = False
def fake7(*args, **kwargs):
self.local = True
def fake8(*args, **kwargs):
raise exception.ProcessExecutionError('',
'Cannot find device "em0"\n')
# raises because interface doesn't exist
self.stubs.Set(self.network.db,
'floating_ip_fixed_ip_associate',
fake1)
self.stubs.Set(self.network.db, 'floating_ip_disassociate', fake1)
self.stubs.Set(self.network.driver, 'bind_floating_ip', fake8)
self.assertRaises(exception.NoFloatingIpInterface,
self.network._associate_floating_ip,
ctxt,
mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg())
self.stubs.Set(self.network, '_floating_ip_owned_by_project', fake1)
# raises because floating_ip is already associated to a fixed_ip
self.stubs.Set(self.network.db, 'floating_ip_get_by_address', fake2)
self.assertRaises(exception.FloatingIpAssociated,
self.network.associate_floating_ip,
ctxt,
mox.IgnoreArg(),
mox.IgnoreArg())
self.stubs.Set(self.network.db, 'floating_ip_get_by_address', fake3)
# does not raise and makes call remotely
self.local = True
self.stubs.Set(self.network.db, 'fixed_ip_get_by_address', fake4)
self.stubs.Set(self.network.db, 'network_get', fake4_network)
self.stubs.Set(rpc, 'cast', fake6)
self.network.associate_floating_ip(ctxt, mox.IgnoreArg(),
mox.IgnoreArg())
self.assertFalse(self.local)
# does not raise and makes call locally
self.local = False
self.stubs.Set(self.network.db, 'fixed_ip_get_by_address', fake5)
self.stubs.Set(self.network.db, 'network_get', fake5_network)
self.stubs.Set(self.network, '_associate_floating_ip', fake7)
self.network.associate_floating_ip(ctxt, mox.IgnoreArg(),
mox.IgnoreArg())
self.assertTrue(self.local)
def test_disassociate_floating_ip(self):
ctxt = context.RequestContext('testuser', 'testproject',
is_admin=False)
def fake1(*args, **kwargs):
pass
# floating ip that isn't associated
def fake2(*args, **kwargs):
return {'address': '10.0.0.1',
'pool': 'nova',
'interface': 'eth0',
'fixed_ip_id': None}
# floating ip that is associated
def fake3(*args, **kwargs):
return {'address': '10.0.0.1',
'pool': 'nova',
'interface': 'eth0',
'fixed_ip_id': 1}
# fixed ip with remote host
def fake4(*args, **kwargs):
return {'address': '10.0.0.1',
'pool': 'nova',
'interface': 'eth0',
'network_id': 'blah'}
def fake4_network(*args, **kwargs):
return {'multi_host': False,
'host': 'jibberjabber'}
# fixed ip with local host
def fake5(*args, **kwargs):
return {'address': '10.0.0.1',
'pool': 'nova',
'interface': 'eth0',
'network_id': 'blahblah'}
def fake5_network(*args, **kwargs):
return {'multi_host': False, 'host': 'testhost'}
def fake6(*args, **kwargs):
self.local = False
def fake7(*args, **kwargs):
self.local = True
self.stubs.Set(self.network, '_floating_ip_owned_by_project', fake1)
# raises because floating_ip is not associated to a fixed_ip
self.stubs.Set(self.network.db, 'floating_ip_get_by_address', fake2)
self.assertRaises(exception.FloatingIpNotAssociated,
self.network.disassociate_floating_ip,
ctxt,
mox.IgnoreArg())
self.stubs.Set(self.network.db, 'floating_ip_get_by_address', fake3)
# does not raise and makes call remotely
self.local = True
self.stubs.Set(self.network.db, 'fixed_ip_get', fake4)
self.stubs.Set(self.network.db, 'network_get', fake4_network)
self.stubs.Set(rpc, 'cast', fake6)
self.network.disassociate_floating_ip(ctxt, mox.IgnoreArg())
self.assertFalse(self.local)
# does not raise and makes call locally
self.local = False
self.stubs.Set(self.network.db, 'fixed_ip_get', fake5)
self.stubs.Set(self.network.db, 'network_get', fake5_network)
self.stubs.Set(self.network, '_disassociate_floating_ip', fake7)
self.network.disassociate_floating_ip(ctxt, mox.IgnoreArg())
self.assertTrue(self.local)
def test_add_fixed_ip_instance_without_vpn_requested_networks(self):
self.mox.StubOutWithMock(db, 'network_get')
self.mox.StubOutWithMock(db, 'fixed_ip_associate_pool')
self.mox.StubOutWithMock(db, 'instance_get')
self.mox.StubOutWithMock(db,
'virtual_interface_get_by_instance_and_network')
self.mox.StubOutWithMock(db, 'fixed_ip_update')
db.fixed_ip_update(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg())
db.virtual_interface_get_by_instance_and_network(mox.IgnoreArg(),
mox.IgnoreArg(), mox.IgnoreArg()).AndReturn({'id': 0})
db.instance_get(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn({'security_groups':
[{'id': 0}],
'availability_zone': ''})
db.fixed_ip_associate_pool(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn('192.168.0.101')
db.network_get(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(networks[0])
self.mox.ReplayAll()
self.network.add_fixed_ip_to_instance(self.context, 1, HOST,
networks[0]['id'])
def test_ip_association_and_allocation_of_other_project(self):
"""Makes sure that we cannot deallocaate or disassociate
a public ip of other project"""
def network_get(_context, network_id):
return networks[network_id]
self.stubs.Set(db, 'network_get', network_get)
context1 = context.RequestContext('user', 'project1')
context2 = context.RequestContext('user', 'project2')
address = '1.2.3.4'
float_addr = db.floating_ip_create(context1.elevated(),
{'address': address,
'project_id': context1.project_id})
instance = db.instance_create(context1,
{'project_id': 'project1'})
fix_addr = db.fixed_ip_associate_pool(context1.elevated(),
1, instance['id'])
# Associate the IP with non-admin user context
self.assertRaises(exception.NotAuthorized,
self.network.associate_floating_ip,
context2,
float_addr,
fix_addr)
# Deallocate address from other project
self.assertRaises(exception.NotAuthorized,
self.network.deallocate_floating_ip,
context2,
float_addr)
# Now Associates the address to the actual project
self.network.associate_floating_ip(context1, float_addr, fix_addr)
# Now try dis-associating from other project
self.assertRaises(exception.NotAuthorized,
self.network.disassociate_floating_ip,
context2,
float_addr)
# Clean up the ip addresses
self.network.disassociate_floating_ip(context1, float_addr)
self.network.deallocate_floating_ip(context1, float_addr)
self.network.deallocate_fixed_ip(context1, fix_addr, 'fake')
db.floating_ip_destroy(context1.elevated(), float_addr)
db.fixed_ip_disassociate(context1.elevated(), fix_addr)
class CommonNetworkTestCase(test.TestCase):
def setUp(self):
super(CommonNetworkTestCase, self).setUp()
self.context = context.RequestContext('fake', 'fake')
def fake_create_fixed_ips(self, context, network_id, fixed_cidr=None):
return None
def test_remove_fixed_ip_from_instance(self):
manager = fake_network.FakeNetworkManager()
manager.remove_fixed_ip_from_instance(self.context, 99, HOST,
'10.0.0.1')
self.assertEquals(manager.deallocate_called, '10.0.0.1')
def test_remove_fixed_ip_from_instance_bad_input(self):
manager = fake_network.FakeNetworkManager()
self.assertRaises(exception.FixedIpNotFoundForSpecificInstance,
manager.remove_fixed_ip_from_instance,
self.context, 99, HOST, 'bad input')
def test_validate_cidrs(self):
manager = fake_network.FakeNetworkManager()
nets = manager.create_networks(None, 'fake', '192.168.0.0/24',
False, 1, 256, None, None, None,
None, None)
self.assertEqual(1, len(nets))
cidrs = [str(net['cidr']) for net in nets]
self.assertTrue('192.168.0.0/24' in cidrs)
def test_validate_cidrs_split_exact_in_half(self):
manager = fake_network.FakeNetworkManager()
nets = manager.create_networks(None, 'fake', '192.168.0.0/24',
False, 2, 128, None, None, None,
None, None)
self.assertEqual(2, len(nets))
cidrs = [str(net['cidr']) for net in nets]
self.assertTrue('192.168.0.0/25' in cidrs)
self.assertTrue('192.168.0.128/25' in cidrs)
def test_validate_cidrs_split_cidr_in_use_middle_of_range(self):
manager = fake_network.FakeNetworkManager()
self.mox.StubOutWithMock(manager.db, 'network_get_all')
ctxt = mox.IgnoreArg()
manager.db.network_get_all(ctxt).AndReturn([{'id': 1,
'cidr': '192.168.2.0/24'}])
self.mox.ReplayAll()
nets = manager.create_networks(None, 'fake', '192.168.0.0/16',
False, 4, 256, None, None, None,
None, None)
self.assertEqual(4, len(nets))
cidrs = [str(net['cidr']) for net in nets]
exp_cidrs = ['192.168.0.0/24', '192.168.1.0/24', '192.168.3.0/24',
'192.168.4.0/24']
for exp_cidr in exp_cidrs:
self.assertTrue(exp_cidr in cidrs)
self.assertFalse('192.168.2.0/24' in cidrs)
def test_validate_cidrs_smaller_subnet_in_use(self):
manager = fake_network.FakeNetworkManager()
self.mox.StubOutWithMock(manager.db, 'network_get_all')
ctxt = mox.IgnoreArg()
manager.db.network_get_all(ctxt).AndReturn([{'id': 1,
'cidr': '192.168.2.9/25'}])
self.mox.ReplayAll()
# ValueError: requested cidr (192.168.2.0/24) conflicts with
# existing smaller cidr
args = (None, 'fake', '192.168.2.0/24', False, 1, 256, None, None,
None, None, None)
self.assertRaises(ValueError, manager.create_networks, *args)
def test_validate_cidrs_split_smaller_cidr_in_use(self):
manager = fake_network.FakeNetworkManager()
self.mox.StubOutWithMock(manager.db, 'network_get_all')
ctxt = mox.IgnoreArg()
manager.db.network_get_all(ctxt).AndReturn([{'id': 1,
'cidr': '192.168.2.0/25'}])
self.mox.ReplayAll()
nets = manager.create_networks(None, 'fake', '192.168.0.0/16',
False, 4, 256, None, None, None, None,
None)
self.assertEqual(4, len(nets))
cidrs = [str(net['cidr']) for net in nets]
exp_cidrs = ['192.168.0.0/24', '192.168.1.0/24', '192.168.3.0/24',
'192.168.4.0/24']
for exp_cidr in exp_cidrs:
self.assertTrue(exp_cidr in cidrs)
self.assertFalse('192.168.2.0/24' in cidrs)
def test_validate_cidrs_split_smaller_cidr_in_use2(self):
manager = fake_network.FakeNetworkManager()
self.mox.StubOutWithMock(manager.db, 'network_get_all')
ctxt = mox.IgnoreArg()
manager.db.network_get_all(ctxt).AndReturn([{'id': 1,
'cidr': '192.168.2.9/29'}])
self.mox.ReplayAll()
nets = manager.create_networks(None, 'fake', '192.168.2.0/24',
False, 3, 32, None, None, None, None,
None)
self.assertEqual(3, len(nets))
cidrs = [str(net['cidr']) for net in nets]
exp_cidrs = ['192.168.2.32/27', '192.168.2.64/27', '192.168.2.96/27']
for exp_cidr in exp_cidrs:
self.assertTrue(exp_cidr in cidrs)
self.assertFalse('192.168.2.0/27' in cidrs)
def test_validate_cidrs_split_all_in_use(self):
manager = fake_network.FakeNetworkManager()
self.mox.StubOutWithMock(manager.db, 'network_get_all')
ctxt = mox.IgnoreArg()
in_use = [{'id': 1, 'cidr': '192.168.2.9/29'},
{'id': 2, 'cidr': '192.168.2.64/26'},
{'id': 3, 'cidr': '192.168.2.128/26'}]
manager.db.network_get_all(ctxt).AndReturn(in_use)
self.mox.ReplayAll()
args = (None, 'fake', '192.168.2.0/24', False, 3, 64, None, None,
None, None, None)
        # ValueError: not enough subnets available to satisfy the requested
        # num_networks; some subnets in the requested range are already in use
self.assertRaises(ValueError, manager.create_networks, *args)
def test_validate_cidrs_one_in_use(self):
manager = fake_network.FakeNetworkManager()
args = (None, 'fake', '192.168.0.0/24', False, 2, 256, None, None,
None, None, None)
# ValueError: network_size * num_networks exceeds cidr size
self.assertRaises(ValueError, manager.create_networks, *args)
def test_validate_cidrs_already_used(self):
manager = fake_network.FakeNetworkManager()
self.mox.StubOutWithMock(manager.db, 'network_get_all')
ctxt = mox.IgnoreArg()
manager.db.network_get_all(ctxt).AndReturn([{'id': 1,
'cidr': '192.168.0.0/24'}])
self.mox.ReplayAll()
# ValueError: cidr already in use
args = (None, 'fake', '192.168.0.0/24', False, 1, 256, None, None,
None, None, None)
self.assertRaises(ValueError, manager.create_networks, *args)
def test_validate_cidrs_too_many(self):
manager = fake_network.FakeNetworkManager()
args = (None, 'fake', '192.168.0.0/24', False, 200, 256, None, None,
None, None, None)
# ValueError: Not enough subnets avail to satisfy requested
# num_networks
self.assertRaises(ValueError, manager.create_networks, *args)
def test_validate_cidrs_split_partial(self):
manager = fake_network.FakeNetworkManager()
nets = manager.create_networks(None, 'fake', '192.168.0.0/16',
False, 2, 256, None, None, None, None,
None)
returned_cidrs = [str(net['cidr']) for net in nets]
self.assertTrue('192.168.0.0/24' in returned_cidrs)
self.assertTrue('192.168.1.0/24' in returned_cidrs)
def test_validate_cidrs_conflict_existing_supernet(self):
manager = fake_network.FakeNetworkManager()
self.mox.StubOutWithMock(manager.db, 'network_get_all')
ctxt = mox.IgnoreArg()
fakecidr = [{'id': 1, 'cidr': '192.168.0.0/8'}]
manager.db.network_get_all(ctxt).AndReturn(fakecidr)
self.mox.ReplayAll()
args = (None, 'fake', '192.168.0.0/24', False, 1, 256, None, None,
None, None, None)
# ValueError: requested cidr (192.168.0.0/24) conflicts
# with existing supernet
self.assertRaises(ValueError, manager.create_networks, *args)
def test_create_networks(self):
cidr = '192.168.0.0/24'
manager = fake_network.FakeNetworkManager()
self.stubs.Set(manager, '_create_fixed_ips',
self.fake_create_fixed_ips)
args = [None, 'foo', cidr, None, 1, 256, 'fd00::/48', None, None,
None, None, None]
self.assertTrue(manager.create_networks(*args))
def test_create_networks_cidr_already_used(self):
manager = fake_network.FakeNetworkManager()
self.mox.StubOutWithMock(manager.db, 'network_get_all')
ctxt = mox.IgnoreArg()
fakecidr = [{'id': 1, 'cidr': '192.168.0.0/24'}]
manager.db.network_get_all(ctxt).AndReturn(fakecidr)
self.mox.ReplayAll()
args = [None, 'foo', '192.168.0.0/24', None, 1, 256,
'fd00::/48', None, None, None, None, None]
self.assertRaises(ValueError, manager.create_networks, *args)
def test_create_networks_many(self):
cidr = '192.168.0.0/16'
manager = fake_network.FakeNetworkManager()
self.stubs.Set(manager, '_create_fixed_ips',
self.fake_create_fixed_ips)
args = [None, 'foo', cidr, None, 10, 256, 'fd00::/48', None, None,
None, None, None]
self.assertTrue(manager.create_networks(*args))
def test_get_instance_uuids_by_ip_regex(self):
manager = fake_network.FakeNetworkManager()
_vifs = manager.db.virtual_interface_get_all(None)
fake_context = context.RequestContext('user', 'project')
        # Greedy: get everything
res = manager.get_instance_uuids_by_ip_filter(fake_context,
{'ip': '.*'})
self.assertEqual(len(res), len(_vifs))
# Doesn't exist
res = manager.get_instance_uuids_by_ip_filter(fake_context,
{'ip': '10.0.0.1'})
self.assertFalse(res)
# Get instance 1
res = manager.get_instance_uuids_by_ip_filter(fake_context,
{'ip': '172.16.0.2'})
self.assertTrue(res)
self.assertEqual(len(res), 1)
self.assertEqual(res[0]['instance_id'], _vifs[1]['instance_id'])
# Get instance 2
res = manager.get_instance_uuids_by_ip_filter(fake_context,
{'ip': '173.16.0.2'})
self.assertTrue(res)
self.assertEqual(len(res), 1)
self.assertEqual(res[0]['instance_id'], _vifs[2]['instance_id'])
# Get instance 0 and 1
res = manager.get_instance_uuids_by_ip_filter(fake_context,
{'ip': '172.16.0.*'})
self.assertTrue(res)
self.assertEqual(len(res), 2)
self.assertEqual(res[0]['instance_id'], _vifs[0]['instance_id'])
self.assertEqual(res[1]['instance_id'], _vifs[1]['instance_id'])
# Get instance 1 and 2
res = manager.get_instance_uuids_by_ip_filter(fake_context,
{'ip': '17..16.0.2'})
self.assertTrue(res)
self.assertEqual(len(res), 2)
self.assertEqual(res[0]['instance_id'], _vifs[1]['instance_id'])
self.assertEqual(res[1]['instance_id'], _vifs[2]['instance_id'])
def test_get_instance_uuids_by_ipv6_regex(self):
manager = fake_network.FakeNetworkManager()
_vifs = manager.db.virtual_interface_get_all(None)
fake_context = context.RequestContext('user', 'project')
        # Greedy: get everything
res = manager.get_instance_uuids_by_ip_filter(fake_context,
{'ip6': '.*'})
self.assertEqual(len(res), len(_vifs))
# Doesn't exist
res = manager.get_instance_uuids_by_ip_filter(fake_context,
{'ip6': '.*1034.*'})
self.assertFalse(res)
# Get instance 1
res = manager.get_instance_uuids_by_ip_filter(fake_context,
{'ip6': '2001:.*2'})
self.assertTrue(res)
self.assertEqual(len(res), 1)
self.assertEqual(res[0]['instance_id'], _vifs[1]['instance_id'])
# Get instance 2
ip6 = '2001:db8:69:1f:dead:beff:feff:ef03'
res = manager.get_instance_uuids_by_ip_filter(fake_context,
{'ip6': ip6})
self.assertTrue(res)
self.assertEqual(len(res), 1)
self.assertEqual(res[0]['instance_id'], _vifs[2]['instance_id'])
# Get instance 0 and 1
res = manager.get_instance_uuids_by_ip_filter(fake_context,
{'ip6': '.*ef0[1,2]'})
self.assertTrue(res)
self.assertEqual(len(res), 2)
self.assertEqual(res[0]['instance_id'], _vifs[0]['instance_id'])
self.assertEqual(res[1]['instance_id'], _vifs[1]['instance_id'])
# Get instance 1 and 2
ip6 = '2001:db8:69:1.:dead:beff:feff:ef0.'
res = manager.get_instance_uuids_by_ip_filter(fake_context,
{'ip6': ip6})
self.assertTrue(res)
self.assertEqual(len(res), 2)
self.assertEqual(res[0]['instance_id'], _vifs[1]['instance_id'])
self.assertEqual(res[1]['instance_id'], _vifs[2]['instance_id'])
def test_get_instance_uuids_by_ip(self):
manager = fake_network.FakeNetworkManager()
_vifs = manager.db.virtual_interface_get_all(None)
fake_context = context.RequestContext('user', 'project')
# No regex for you!
res = manager.get_instance_uuids_by_ip_filter(fake_context,
{'fixed_ip': '.*'})
self.assertFalse(res)
# Doesn't exist
ip = '10.0.0.1'
res = manager.get_instance_uuids_by_ip_filter(fake_context,
{'fixed_ip': ip})
self.assertFalse(res)
# Get instance 1
ip = '172.16.0.2'
res = manager.get_instance_uuids_by_ip_filter(fake_context,
{'fixed_ip': ip})
self.assertTrue(res)
self.assertEqual(len(res), 1)
self.assertEqual(res[0]['instance_id'], _vifs[1]['instance_id'])
# Get instance 2
ip = '173.16.0.2'
res = manager.get_instance_uuids_by_ip_filter(fake_context,
{'fixed_ip': ip})
self.assertTrue(res)
self.assertEqual(len(res), 1)
self.assertEqual(res[0]['instance_id'], _vifs[2]['instance_id'])
def test_get_network(self):
manager = fake_network.FakeNetworkManager()
fake_context = context.RequestContext('user', 'project')
self.mox.StubOutWithMock(manager.db, 'network_get_all_by_uuids')
manager.db.network_get_all_by_uuids(
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(networks)
self.mox.ReplayAll()
uuid = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'
network = manager.get_network(fake_context, uuid)
self.assertEqual(network['uuid'], uuid)
def test_get_network_not_found(self):
manager = fake_network.FakeNetworkManager()
fake_context = context.RequestContext('user', 'project')
self.mox.StubOutWithMock(manager.db, 'network_get_all_by_uuids')
manager.db.network_get_all_by_uuids(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn([])
self.mox.ReplayAll()
uuid = 'eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee'
self.assertRaises(exception.NetworkNotFound,
manager.get_network, fake_context, uuid)
def test_get_all_networks(self):
manager = fake_network.FakeNetworkManager()
fake_context = context.RequestContext('user', 'project')
self.mox.StubOutWithMock(manager.db, 'network_get_all')
manager.db.network_get_all(mox.IgnoreArg()).AndReturn(networks)
self.mox.ReplayAll()
output = manager.get_all_networks(fake_context)
self.assertEqual(len(networks), 2)
self.assertEqual(output[0]['uuid'],
'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa')
self.assertEqual(output[1]['uuid'],
'bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb')
def test_disassociate_network(self):
manager = fake_network.FakeNetworkManager()
fake_context = context.RequestContext('user', 'project')
self.mox.StubOutWithMock(manager.db, 'network_get_all_by_uuids')
manager.db.network_get_all_by_uuids(
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(networks)
self.mox.ReplayAll()
uuid = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'
manager.disassociate_network(fake_context, uuid)
def test_disassociate_network_not_found(self):
manager = fake_network.FakeNetworkManager()
fake_context = context.RequestContext('user', 'project')
self.mox.StubOutWithMock(manager.db, 'network_get_all_by_uuids')
manager.db.network_get_all_by_uuids(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn([])
self.mox.ReplayAll()
uuid = 'eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee'
self.assertRaises(exception.NetworkNotFound,
manager.disassociate_network, fake_context, uuid)
class TestRPCFixedManager(network_manager.RPCAllocateFixedIP,
network_manager.NetworkManager):
"""Dummy manager that implements RPCAllocateFixedIP"""
class RPCAllocateTestCase(test.TestCase):
"""Tests nova.network.manager.RPCAllocateFixedIP"""
def setUp(self):
super(RPCAllocateTestCase, self).setUp()
self.rpc_fixed = TestRPCFixedManager()
self.context = context.RequestContext('fake', 'fake')
def test_rpc_allocate(self):
"""Test to verify bug 855030 doesn't resurface.
        Makes sure _rpc_allocate_fixed_ip returns a value so the call
returns properly and the greenpool completes."""
address = '10.10.10.10'
def fake_allocate(*args, **kwargs):
return address
def fake_network_get(*args, **kwargs):
return {}
self.stubs.Set(self.rpc_fixed, 'allocate_fixed_ip', fake_allocate)
self.stubs.Set(self.rpc_fixed.db, 'network_get', fake_network_get)
rval = self.rpc_fixed._rpc_allocate_fixed_ip(self.context,
'fake_instance',
'fake_network')
self.assertEqual(rval, address)
class TestFloatingIPManager(network_manager.FloatingIP,
network_manager.NetworkManager):
"""Dummy manager that implements FloatingIP"""
class AllocateTestCase(test.TestCase):
def test_allocate_for_instance(self):
address = "10.10.10.10"
self.flags(auto_assign_floating_ip=True)
self.compute = self.start_service('compute')
self.network = self.start_service('network')
self.user_id = 'fake'
self.project_id = 'fake'
self.context = context.RequestContext(self.user_id,
self.project_id,
is_admin=True)
db.floating_ip_create(self.context,
{'address': address,
'pool': 'nova'})
inst = db.instance_create(self.context, {'host': self.compute.host,
'instance_type_id': 1})
networks = db.network_get_all(self.context)
for network in networks:
db.network_update(self.context, network['id'],
{'host': self.network.host})
project_id = self.context.project_id
nw_info = self.network.allocate_for_instance(self.context,
instance_id=inst['id'],
instance_uuid='',
host=inst['host'],
vpn=None,
rxtx_factor=3,
project_id=project_id)
self.assertEquals(1, len(nw_info))
fixed_ip = nw_info.fixed_ips()[0]['address']
self.assertTrue(utils.is_valid_ipv4(fixed_ip))
self.network.deallocate_for_instance(self.context,
instance_id=inst['id'],
fixed_ips=fixed_ip,
host=self.network.host,
project_id=project_id)
class FloatingIPTestCase(test.TestCase):
"""Tests nova.network.manager.FloatingIP"""
def setUp(self):
super(FloatingIPTestCase, self).setUp()
self.tempdir = tempfile.mkdtemp()
self.flags(logdir=self.tempdir)
self.network = TestFloatingIPManager()
temp = utils.import_object('nova.network.minidns.MiniDNS')
self.network.floating_dns_manager = temp
self.network.db = db
self.project_id = 'testproject'
self.context = context.RequestContext('testuser', self.project_id,
is_admin=False)
def tearDown(self):
shutil.rmtree(self.tempdir)
super(FloatingIPTestCase, self).tearDown()
def test_double_deallocation(self):
instance_ref = db.api.instance_create(self.context,
{"project_id": self.project_id})
        # Run it twice: deallocation must tolerate instances that have
        # no fixed networks attached; if either call fails, it cannot
        # handle an instance without addresses
self.network.deallocate_for_instance(self.context,
instance_id=instance_ref['id'])
self.network.deallocate_for_instance(self.context,
instance_id=instance_ref['id'])
def test_deallocation_deleted_instance(self):
instance_ref = db.api.instance_create(self.context,
{"project_id": self.project_id, "deleted": True})
self.network.deallocate_for_instance(self.context,
instance_id=instance_ref['id'])
def test_floating_dns_create_conflict(self):
zone = "example.org"
address1 = "10.10.10.11"
name1 = "foo"
name2 = "bar"
self.network.add_dns_entry(self.context, address1, name1, "A", zone)
self.assertRaises(exception.FloatingIpDNSExists,
self.network.add_dns_entry, self.context,
address1, name1, "A", zone)
def test_floating_create_and_get(self):
zone = "example.org"
address1 = "10.10.10.11"
name1 = "foo"
name2 = "bar"
entries = self.network.get_dns_entries_by_address(self.context,
address1, zone)
self.assertFalse(entries)
self.network.add_dns_entry(self.context, address1, name1, "A", zone)
self.network.add_dns_entry(self.context, address1, name2, "A", zone)
entries = self.network.get_dns_entries_by_address(self.context,
address1, zone)
self.assertEquals(len(entries), 2)
self.assertEquals(entries[0], name1)
self.assertEquals(entries[1], name2)
entries = self.network.get_dns_entries_by_name(self.context,
name1, zone)
self.assertEquals(len(entries), 1)
self.assertEquals(entries[0], address1)
def test_floating_dns_delete(self):
zone = "example.org"
address1 = "10.10.10.11"
name1 = "foo"
name2 = "bar"
self.network.add_dns_entry(self.context, address1, name1, "A", zone)
self.network.add_dns_entry(self.context, address1, name2, "A", zone)
self.network.delete_dns_entry(self.context, name1, zone)
entries = self.network.get_dns_entries_by_address(self.context,
address1, zone)
self.assertEquals(len(entries), 1)
self.assertEquals(entries[0], name2)
self.assertRaises(exception.NotFound,
self.network.delete_dns_entry, self.context,
name1, zone)
def test_floating_dns_domains_public(self):
zone1 = "testzone"
domain1 = "example.org"
domain2 = "example.com"
address1 = '10.10.10.10'
entryname = 'testentry'
context_admin = context.RequestContext('testuser', 'testproject',
is_admin=True)
self.assertRaises(exception.AdminRequired,
self.network.create_public_dns_domain, self.context,
domain1, zone1)
self.network.create_public_dns_domain(context_admin, domain1,
'testproject')
self.network.create_public_dns_domain(context_admin, domain2,
'fakeproject')
domains = self.network.get_dns_domains(self.context)
self.assertEquals(len(domains), 2)
self.assertEquals(domains[0]['domain'], domain1)
self.assertEquals(domains[1]['domain'], domain2)
self.assertEquals(domains[0]['project'], 'testproject')
self.assertEquals(domains[1]['project'], 'fakeproject')
self.network.add_dns_entry(self.context, address1, entryname,
'A', domain1)
entries = self.network.get_dns_entries_by_name(self.context,
entryname, domain1)
self.assertEquals(len(entries), 1)
self.assertEquals(entries[0], address1)
self.assertRaises(exception.AdminRequired,
self.network.delete_dns_domain, self.context,
domain1)
self.network.delete_dns_domain(context_admin, domain1)
self.network.delete_dns_domain(context_admin, domain2)
# Verify that deleting the domain deleted the associated entry
entries = self.network.get_dns_entries_by_name(self.context,
entryname, domain1)
self.assertFalse(entries)
def test_delete_all_by_ip(self):
domain1 = "example.org"
domain2 = "example.com"
address = "10.10.10.10"
name1 = "foo"
name2 = "bar"
def fake_domains(context):
return [{'domain': 'example.org', 'scope': 'public'},
{'domain': 'example.com', 'scope': 'public'},
{'domain': 'test.example.org', 'scope': 'public'}]
self.stubs.Set(self.network, 'get_dns_domains', fake_domains)
context_admin = context.RequestContext('testuser', 'testproject',
is_admin=True)
self.network.create_public_dns_domain(context_admin, domain1,
'testproject')
self.network.create_public_dns_domain(context_admin, domain2,
'fakeproject')
domains = self.network.get_dns_domains(self.context)
for domain in domains:
self.network.add_dns_entry(self.context, address,
name1, "A", domain['domain'])
self.network.add_dns_entry(self.context, address,
name2, "A", domain['domain'])
entries = self.network.get_dns_entries_by_address(self.context,
address,
domain['domain'])
self.assertEquals(len(entries), 2)
self.network._delete_all_entries_for_ip(self.context, address)
for domain in domains:
entries = self.network.get_dns_entries_by_address(self.context,
address,
domain['domain'])
self.assertFalse(entries)
self.network.delete_dns_domain(context_admin, domain1)
self.network.delete_dns_domain(context_admin, domain2)
class NetworkPolicyTestCase(test.TestCase):
def setUp(self):
super(NetworkPolicyTestCase, self).setUp()
nova.policy.reset()
nova.policy.init()
self.context = context.get_admin_context()
def tearDown(self):
super(NetworkPolicyTestCase, self).tearDown()
nova.policy.reset()
def _set_rules(self, rules):
nova.common.policy.set_brain(nova.common.policy.HttpBrain(rules))
def test_check_policy(self):
self.mox.StubOutWithMock(nova.policy, 'enforce')
target = {
'project_id': self.context.project_id,
'user_id': self.context.user_id,
}
nova.policy.enforce(self.context, 'network:get_all', target)
self.mox.ReplayAll()
network_manager.check_policy(self.context, 'get_all')
class InstanceDNSTestCase(test.TestCase):
"""Tests nova.network.manager instance DNS"""
def setUp(self):
super(InstanceDNSTestCase, self).setUp()
self.tempdir = tempfile.mkdtemp()
self.flags(logdir=self.tempdir)
self.network = TestFloatingIPManager()
temp = utils.import_object('nova.network.minidns.MiniDNS')
self.network.instance_dns_manager = temp
temp = utils.import_object('nova.network.dns_driver.DNSDriver')
self.network.floating_dns_manager = temp
self.network.db = db
self.project_id = 'testproject'
self.context = context.RequestContext('testuser', self.project_id,
is_admin=False)
def tearDown(self):
shutil.rmtree(self.tempdir)
super(InstanceDNSTestCase, self).tearDown()
def test_dns_domains_private(self):
zone1 = 'testzone'
domain1 = 'example.org'
context_admin = context.RequestContext('testuser', 'testproject',
is_admin=True)
self.assertRaises(exception.AdminRequired,
self.network.create_private_dns_domain, self.context,
domain1, zone1)
self.network.create_private_dns_domain(context_admin, domain1, zone1)
domains = self.network.get_dns_domains(self.context)
self.assertEquals(len(domains), 1)
self.assertEquals(domains[0]['domain'], domain1)
self.assertEquals(domains[0]['availability_zone'], zone1)
self.assertRaises(exception.AdminRequired,
self.network.delete_dns_domain, self.context,
domain1)
self.network.delete_dns_domain(context_admin, domain1)
domain1 = "example.org"
domain2 = "example.com"
class LdapDNSTestCase(test.TestCase):
"""Tests nova.network.ldapdns.LdapDNS"""
def setUp(self):
super(LdapDNSTestCase, self).setUp()
self.saved_ldap = sys.modules.get('ldap')
import nova.auth.fakeldap
sys.modules['ldap'] = nova.auth.fakeldap
temp = utils.import_object('nova.network.ldapdns.FakeLdapDNS')
self.driver = temp
self.driver.create_domain(domain1)
self.driver.create_domain(domain2)
def tearDown(self):
self.driver.delete_domain(domain1)
self.driver.delete_domain(domain2)
sys.modules['ldap'] = self.saved_ldap
super(LdapDNSTestCase, self).tearDown()
def test_ldap_dns_domains(self):
domains = self.driver.get_domains()
self.assertEqual(len(domains), 2)
self.assertIn(domain1, domains)
self.assertIn(domain2, domains)
def test_ldap_dns_create_conflict(self):
address1 = "10.10.10.11"
name1 = "foo"
name2 = "bar"
self.driver.create_entry(name1, address1, "A", domain1)
self.assertRaises(exception.FloatingIpDNSExists,
self.driver.create_entry,
name1, address1, "A", domain1)
def test_ldap_dns_create_and_get(self):
address1 = "10.10.10.11"
name1 = "foo"
name2 = "bar"
entries = self.driver.get_entries_by_address(address1, domain1)
self.assertFalse(entries)
self.driver.create_entry(name1, address1, "A", domain1)
self.driver.create_entry(name2, address1, "A", domain1)
entries = self.driver.get_entries_by_address(address1, domain1)
self.assertEquals(len(entries), 2)
self.assertEquals(entries[0], name1)
self.assertEquals(entries[1], name2)
entries = self.driver.get_entries_by_name(name1, domain1)
self.assertEquals(len(entries), 1)
self.assertEquals(entries[0], address1)
def test_ldap_dns_delete(self):
address1 = "10.10.10.11"
name1 = "foo"
name2 = "bar"
self.driver.create_entry(name1, address1, "A", domain1)
self.driver.create_entry(name2, address1, "A", domain1)
entries = self.driver.get_entries_by_address(address1, domain1)
self.assertEquals(len(entries), 2)
self.driver.delete_entry(name1, domain1)
entries = self.driver.get_entries_by_address(address1, domain1)
LOG.debug("entries: %s" % entries)
self.assertEquals(len(entries), 1)
self.assertEquals(entries[0], name2)
self.assertRaises(exception.NotFound,
self.driver.delete_entry,
name1, domain1)
|
gyang/nova
|
nova/tests/network/test_manager.py
|
Python
|
apache-2.0
| 69,034
|
[
"FEFF"
] |
4497194566e089a254e569fe6b9f1dd2eb2841c2546c46fb929b0757fda9de75
|
import numpy
import sys
from mayavi import mlab
import time
import pylab
from numba import jit
@jit
def absolute_speed(vx,vy,vz,multiplier=100, kb=1.38065):
    return numpy.sqrt(((vx*vx) + (vy*vy) + (vz*vz)))*multiplier
@jit
def absolute_temp(vx,vy,vz,m,multiplier=100, kb=1.38065):
return multiplier*m*((vx*vx) + (vy*vy) + (vz*vz)) / (3.0*kb)
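# Worked numeric example of the formula above (pure arithmetic, no unit
# system implied): choosing m = 3*kb cancels the denominator, so
# absolute_temp(1., 0., 0., m=3.*1.38065, multiplier=1.) == 1.0.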
def offscreen(draw_func):
def wrapper():
mlab.options.offscreen = True
draw_func()
return wrapper
def saveimg(filename):
def wrap(draw_func):
def wrapped_f(*args):
draw_func(*args)
mlab.savefig(filename)
return wrapped_f
return wrap
def showable(draw_func):
def wrapper():
draw_func()
mlab.show()
return wrapper
def show_scalar_planes(f):
planey = mlab.pipeline.image_plane_widget(f, plane_orientation='y_axes')
planex = mlab.pipeline.image_plane_widget(f, plane_orientation='x_axes')
planez = mlab.pipeline.image_plane_widget(f, plane_orientation='z_axes')
return (planex, planey, planez)
def show_vector_planes(x, y, z, vx, vy, vz):
    src = mlab.pipeline.vector_scatter(x, y, z, vx, vy, vz)
planex = mlab.pipeline.vector_cut_plane(src, plane_orientation='x_axes')
planey = mlab.pipeline.vector_cut_plane(src, plane_orientation='y_axes')
planez = mlab.pipeline.vector_cut_plane(src, plane_orientation='z_axes')
return(planex, planey, planez, src)
def make_mlab_scalar_field(x,y,z,v,pts=100j):
from scipy.interpolate import griddata
X, Y, Z = numpy.mgrid[x.min():x.max():pts,y.min():y.max():pts,z.min():z.max():pts]
R = numpy.dstack([x,y,z])
R = R.reshape((len(x),3))
F = griddata(R,v,(X,Y,Z))
fi = mlab.pipeline.scalar_field(F)
return fi
def init_mlab_scene(size):
fig = mlab.figure('Viz', size=size, bgcolor=(0,0,0))
fig.scene.set_size(size)
fig.scene.anti_aliasing_frames = 0
mlab.clf()
return fig
def chunks(l, n):
"""Yield successive n-sized chunks from l."""
for i in xrange(0, len(l), n):
yield l[i:i+n]
def pairs(l):
"""Yield successive n-sized chunks from l."""
for i in xrange(1, len(l)):
yield (l[i-1],l[i]])
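# Examples:
#   list(chunks([1, 2, 3, 4, 5], 2)) -> [[1, 2], [3, 4], [5]]
#   list(pairs([1, 2, 3]))           -> [(1, 2), (2, 3)]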
|
detorto/mdvis
|
src/mmdlab/utils.py
|
Python
|
mit
| 2,042
|
[
"Mayavi"
] |
55da2b89446e22bd8e1711eb20d5c52a297d21a62ccffcf96c3efbba375a6fd6
|
#!/usr/bin/env python
'''
CREATED:2013-02-12 16:33:40 by Brian McFee <brm2132@columbia.edu>
Beat tracking with HPSS filtering
Usage: ./hpss_beats.py [-h] input_audio.mp3 output_beats.csv
'''
from __future__ import print_function
import argparse
import numpy as np
import sys
import librosa
# Some magic number defaults, FFT window and hop length
N_FFT = 2048
# We use a hop of 512 here so that the HPSS spectrogram input
# matches the default beat tracker parameters
HOP_LENGTH = 512
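# For reference, frame index n maps to time n * HOP_LENGTH / sr seconds;
# at librosa's default sr of 22050 Hz one 512-sample hop is ~23 ms.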
def hpss_beats(input_file, output_csv):
'''HPSS beat tracking
:parameters:
- input_file : str
Path to input audio file (wav, mp3, m4a, flac, etc.)
      - output_csv : str
Path to save beat event timestamps as a CSV file
'''
# Load the file
print('Loading ', input_file)
y, sr = librosa.load(input_file)
# Do HPSS
print('Harmonic-percussive separation ... ')
y = librosa.effects.percussive(y)
# Construct onset envelope from percussive component
print('Tracking beats on percussive component')
onset_env = librosa.onset.onset_strength(y=y,
sr=sr,
hop_length=HOP_LENGTH,
n_fft=N_FFT,
aggregate=np.median)
# Track the beats
tempo, beats = librosa.beat.beat_track(onset_envelope=onset_env,
sr=sr,
hop_length=HOP_LENGTH)
beat_times = librosa.frames_to_time(beats,
sr=sr,
hop_length=HOP_LENGTH)
# Save the output
print('Saving beats to ', output_csv)
librosa.output.times_csv(output_csv, beat_times)
def process_arguments(args):
'''Argparse function to get the program parameters'''
parser = argparse.ArgumentParser(description='HPSS beat-tracking example')
parser.add_argument('input_file',
action='store',
help='path to the input file (wav, mp3, etc)')
parser.add_argument('output_file',
action='store',
help='path to the output file (csv of beat times)')
return vars(parser.parse_args(args))
if __name__ == '__main__':
# Get the parameters
parameters = process_arguments(sys.argv[1:])
# Run the beat tracker
hpss_beats(parameters['input_file'], parameters['output_file'])
|
Cortexelus/librosa
|
examples/hpss_beats.py
|
Python
|
isc
| 2,556
|
[
"Brian"
] |
487760e0ce3ad400633ff419089ed28bbe57d6aa099bbe8abe21da20b60772ad
|
# Copyright (C) 2013, Thomas Leonard
# See the README file for details, or visit http://0install.net.
import urllib.parse
import http.client as httplib
import ftplib
from zeroinstall import SafeException
def get_http_size(url, ttl = 3, method = None):
address = urllib.parse.urlparse(url)
if url.lower().startswith('http://'):
http = httplib.HTTPConnection(address.hostname, address.port or 80)
elif url.lower().startswith('https://'):
http = httplib.HTTPSConnection(address.hostname, address.port or 443)
else:
assert False, url
parts = url.split('/', 3)
if len(parts) == 4:
path = parts[3]
else:
path = ''
if method is None:
if address.hostname.endswith('.s3.amazonaws.com'):
method = 'GET' # HEAD doesn't work on S3 due to signature mismatch
else:
method = 'HEAD'
http.request(method, '/' + path, headers = {'Host': address.hostname, 'User-agent': '0repo (http://0install.net/0repo.html)'})
response = http.getresponse()
try:
if response.status == 200:
l = response.getheader('Content-Length')
if l is None:
if method == "HEAD":
print("No Content-Length header returned; requesting whole archive...")
return get_http_size(url, ttl, method = "GET")
else:
return len(response.read())
else:
return int(l)
elif response.status in (301, 302, 303):
new_url_rel = response.getheader('Location') or response.getheader('URI')
new_url = urllib.parse.urljoin(url, new_url_rel)
else:
raise SafeException("HTTP error: got status code %s for %s" % (response.status, url))
finally:
response.close()
if ttl:
print("Moved")
print("Checking new URL {}...".format(new_url), end = '')
assert new_url
return get_http_size(new_url, ttl - 1)
else:
raise SafeException('Too many redirections.')
def get_ftp_size(url):
address = urllib.parse.urlparse(url)
ftp = ftplib.FTP(address.hostname)
try:
ftp.login()
ftp.voidcmd('TYPE I')
return ftp.size(url.split('/', 3)[3])
finally:
ftp.close()
def get_size(url):
print("Checking {url}... ".format(url = url), end = '')
try:
scheme = urllib.parse.urlparse(url)[0].lower()
        if scheme in ('http', 'https'):
size = get_http_size(url)
elif scheme.startswith('ftp'):
size = get_ftp_size(url)
else:
raise SafeException("Unknown scheme '%s' in '%s'" % (scheme, url))
except:
print("ERROR")
raise
print(size, "bytes")
return size
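# Minimal usage sketch (hypothetical; this module is normally used as a
# library by 0repo rather than run directly):
#
# if __name__ == '__main__':
#     import sys
#     for url in sys.argv[1:]:
#         get_size(url)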
|
0install/0repo
|
repo/urltest.py
|
Python
|
lgpl-2.1
| 2,418
|
[
"VisIt"
] |
0c58df7cbf5f3da24d7d31a1a5a7a507787216f514f1236bb8d2300949689eb5
|
# coding=utf-8
# Copyright 2022 The TensorFlow Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""ogbg_molpcba dataset."""
from typing import Dict, Text, Tuple
from etils import epath
import numpy as np
import tensorflow as tf
import tensorflow_datasets.public_api as tfds
# Type hints.
ArrayDict = Dict[Text, np.ndarray]
Path = epath.Path
_DESCRIPTION = """
'ogbg-molpcba' is a molecular dataset sampled from PubChem BioAssay.
It is a graph prediction dataset from the Open Graph Benchmark (OGB).
This dataset is experimental, and the API is subject to change in
future releases.
The below description of the dataset is adapted from the OGB paper:
### Input Format
All the molecules are pre-processed using RDKit ([1]).
* Each graph represents a molecule, where nodes are atoms, and edges are
chemical bonds.
* Input node features are 9-dimensional, containing atomic number and chirality,
as well as other additional atom features such as formal charge and
whether the atom is in the ring.
* Input edge features are 3-dimensional, containing bond type,
bond stereochemistry, as well as an additional bond feature indicating
whether the bond is conjugated.
The exact description of all features is available at
https://github.com/snap-stanford/ogb/blob/master/ogb/utils/features.py.
### Prediction
The task is to predict 128 different biological activities (inactive/active).
See [2] and [3] for more description about these targets.
Not all targets apply to each molecule: missing targets are indicated by NaNs.
### References
[1]: Greg Landrum, et al. 'RDKit: Open-source cheminformatics'.
URL: https://github.com/rdkit/rdkit
[2]: Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster,
David Konerding and Vijay Pande. 'Massively Multitask Networks for
Drug Discovery'.
URL: https://arxiv.org/pdf/1502.02072.pdf
[3]: Zhenqin Wu, Bharath Ramsundar, Evan N Feinberg, Joseph Gomes,
Caleb Geniesse, Aneesh S. Pappu, Karl Leswing, and Vijay Pande.
MoleculeNet: a benchmark for molecular machine learning.
Chemical Science, 9(2):513-530, 2018.
"""
_CITATION = """
@inproceedings{DBLP:conf/nips/HuFZDRLCL20,
author = {Weihua Hu and
Matthias Fey and
Marinka Zitnik and
Yuxiao Dong and
Hongyu Ren and
Bowen Liu and
Michele Catasta and
Jure Leskovec},
editor = {Hugo Larochelle and
Marc Aurelio Ranzato and
Raia Hadsell and
Maria{-}Florina Balcan and
Hsuan{-}Tien Lin},
title = {Open Graph Benchmark: Datasets for Machine Learning on Graphs},
booktitle = {Advances in Neural Information Processing Systems 33: Annual Conference
on Neural Information Processing Systems 2020, NeurIPS 2020, December
6-12, 2020, virtual},
year = {2020},
url = {https://proceedings.neurips.cc/paper/2020/hash/fb60d411a5c5b72b2e7d3527cfc84fd0-Abstract.html},
timestamp = {Tue, 19 Jan 2021 15:57:06 +0100},
biburl = {https://dblp.org/rec/conf/nips/HuFZDRLCL20.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
"""
# URL.
_OGB_URL = 'https://ogb.stanford.edu/docs/graphprop'
_DOWNLOAD_URL = 'https://snap.stanford.edu/ogb/data/graphproppred/csv_mol_download/pcba.zip'
# File containing the names of individual tasks.
_TASKS_FNAME = 'graphs/ogbg_molpcba/ogbg_molpcba_tasks.txt'
class OgbgMolpcba(tfds.core.GeneratorBasedBuilder):
"""DatasetBuilder for ogbg_molpcba dataset."""
VERSION = tfds.core.Version('0.1.3')
RELEASE_NOTES = {
'0.1.0': 'Initial release of experimental API.',
'0.1.1': 'Exposes the number of edges in each graph explicitly.',
'0.1.2': 'Add metadata field for GraphVisualizer.',
'0.1.3': 'Add metadata field for names of individual tasks.',
}
def _info(self) -> tfds.core.DatasetInfo:
"""Returns the dataset metadata."""
# Read the individual task names.
tasks_file = tfds.core.tfds_path(_TASKS_FNAME)
tasks = tasks_file.read_text().splitlines()
# Specify the tfds.core.DatasetInfo object
return tfds.core.DatasetInfo(
builder=self,
description=_DESCRIPTION,
# We mimic the features of the OGB platform-agnostic DataLoader.
features=tfds.features.FeaturesDict({
'num_nodes':
tfds.features.Tensor(shape=(None,), dtype=tf.int64),
'node_feat':
tfds.features.Tensor(shape=(None, 9), dtype=tf.float32),
'num_edges':
tfds.features.Tensor(shape=(None,), dtype=tf.int64),
'edge_feat':
tfds.features.Tensor(shape=(None, 3), dtype=tf.float32),
'edge_index':
tfds.features.Tensor(shape=(None, 2), dtype=tf.int64),
'labels':
tfds.features.Tensor(shape=(128,), dtype=tf.float32),
}),
supervised_keys=None,
homepage=_OGB_URL,
citation=_CITATION,
metadata=tfds.core.MetadataDict({
'tasks':
tasks,
'graph_visualizer':
tfds.visualization.GraphVisualizerMetadataDict(
edgelist_feature_name='edge_index')
}),
)
def _split_generators(self, dl_manager: tfds.download.DownloadManager):
"""Returns SplitGenerators."""
# Download the original data.
path = dl_manager.download_and_extract(_DOWNLOAD_URL)
# Read the extracted data.
data_path = (path / 'pcba/raw')
split_path = (path / 'pcba/split/scaffold')
all_data, split_indices = _read_extracted_data(data_path, split_path)
# Return a list of the train/validation/test split generators.
return {
tfds.Split.TRAIN:
self._generate_examples(all_data, split_indices['train']),
tfds.Split.VALIDATION:
self._generate_examples(all_data, split_indices['valid']),
tfds.Split.TEST:
self._generate_examples(all_data, split_indices['test']),
}
def _generate_examples(self, all_data: ArrayDict, split_indices: np.ndarray):
"""Yields examples."""
# Precompute for later.
num_total_graphs = len(all_data['labels'])
split_indices = set(split_indices)
accumulated_num_nodes = np.concatenate([np.array([0]),
np.cumsum(all_data['num_nodes'])])
accumulated_num_edges = np.concatenate([np.array([0]),
np.cumsum(all_data['num_edges'])])
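    # e.g. num_nodes == [3, 2, 4] gives accumulated_num_nodes == [0, 3, 5, 9],
    # so graph idx occupies rows accumulated_num_nodes[idx] up to (but not
    # including) accumulated_num_nodes[idx + 1] of the flat node_feat array.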
# Loop over the training set.
for idx in range(num_total_graphs):
# Check if this example is part of the split.
if idx not in split_indices:
continue
# Read all of the graph information.
labels = all_data['labels'][idx]
num_nodes = all_data['num_nodes'][idx]
node_slice = slice(
accumulated_num_nodes[idx], accumulated_num_nodes[idx + 1]
)
node_feat = all_data['node_feat'][node_slice]
num_edges = all_data['num_edges'][idx]
edge_slice = slice(
accumulated_num_edges[idx], accumulated_num_edges[idx + 1]
)
edge_feat = all_data['edge_feat'][edge_slice]
edge_index = all_data['edge_index'][edge_slice]
# Combine into a single dictionary.
record = {
'labels': labels,
'num_nodes': num_nodes,
'node_feat': node_feat,
'num_edges': num_edges,
'edge_feat': edge_feat,
'edge_index': edge_index,
}
yield idx, record
def _read_extracted_data(data_path: Path,
split_path: Path) -> Tuple[ArrayDict, ArrayDict]:
"""Reads and processes the extracted graph data and splits."""
pd = tfds.core.lazy_imports.pandas
# Load columns describing the graph features and structure.
column_names = [
'edge_index',
'num_nodes',
'num_edges',
'node_feat',
'edge_feat',
'labels',
]
file_names = [
'edge.csv.gz',
'num-node-list.csv.gz',
'num-edge-list.csv.gz',
'node-feat.csv.gz',
'edge-feat.csv.gz',
'graph-label.csv.gz',
]
dtypes = [
np.int64,
np.int64,
np.int64,
np.float32,
np.float32,
np.float32,
]
all_data = {}
for column_name, file_name, dtype in zip(column_names, file_names, dtypes):
with (data_path / file_name).open('rb') as fp:
values = pd.read_csv(fp, compression='gzip', header=None).values
values = values.astype(dtype)
all_data[column_name] = values
# Load data splits.
split_indices = {}
for split_name in ['train', 'valid', 'test']:
with (split_path / ('%s.csv.gz' % split_name)).open('rb') as fp:
indices = pd.read_csv(fp, compression='gzip', header=None).values.T[0]
split_indices[split_name] = indices
return all_data, split_indices
|
tensorflow/datasets
|
tensorflow_datasets/graphs/ogbg_molpcba/ogbg_molpcba.py
|
Python
|
apache-2.0
| 9,433
|
[
"RDKit"
] |
d5d3bcc9c4c0554b0bef291db17d17871368914158f0f2ec0ee0967fb0cfc376
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright (c) 2019 Jeremie DECOCK (http://www.jdhp.org)
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
# TODO:
# - Data with neither trend nor seasonality (for AR, MA, ARMA)
# - Data with a trend (for ARIMA and detrending methods)
# - Data with seasonality (for SARMA and deseasonalization methods)
# - Data with both trend and seasonality (for SARIMA)
# - Data with causality from an exogenous variable (for ARX)
# - Multivariate data (for VAR)
"""
This module contains toy data (time series) to test TSA models.
"""
__all__ = ['additive_model_ts2']
import numpy as np
import pandas as pd
def additive_model_ts2(num_periods=10, T1=24, T2=4, relative_period_size=4,
noise_sigma=0.05, trend_slope=0.005, trend_intercept=3.):
"""A toy dataset generated by an additive model containing a trend, two
levels of saisonality (with a periodicity of `T1=24` and `T=T1*T2=96` time
steps by default) and a gaussian noise.
Parameters
----------
num_periods : int
Number of periods `T` with `T = T1 * T2`. The size of the time serie
is `T1 * T2 * num_periods`.
T1 : int
TODO
T2 : int
TODO relative period ; period T = T1 * T2
noise_sigma : float
The standard deviation of the gaussian noise added to the time serie.
trend_slope : float
The slope of the trend. The trend is modeled by a linear function
having two parameters: the intercept and the slope.
trend_intercept : float
The intercept of the trend. The trend is modeled by a linear function
having two parameters: the intercept and the slope.
Returns
-------
Pandas DataFrame
The generated time serie. Column `t` contains the time step and column
`y` contains the value at the corresponding time step.
"""
t = np.arange(T1 * T2 * num_periods)
    shift = int(T1 / 4)  # Shift t by a quarter period so the series starts at 0 (the sine term at t=0 equals -1, hence sin + 1 = 0)
y = np.sin(2. * np.pi * (t - shift) / float(T1)) + 1.
for i in range(1, num_periods + 1):
we_index = T1*T2*i
y[we_index-T1+1:we_index] = 0
y += trend_slope * t + trend_intercept # Add trend (additive model)
y += np.random.normal(loc=0., scale=noise_sigma, size=y.shape) # Add noise (additive model)
df = pd.DataFrame(np.array([t, y]).T, columns=['t', 'y'])
return df
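# Example usage (plotting assumes pandas' matplotlib backend is available):
#
#     df = additive_model_ts2(num_periods=2)
#     df.plot(x='t', y='y')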
|
jeremiedecock/pyai
|
ailib/tsa/data/toymodels.py
|
Python
|
mit
| 3,601
|
[
"Gaussian"
] |
4d2ed6f8dce815fbc04c7447784c6acc53c71d3390872873f9fffda36debdf46
|
import scrapelib
import datetime
import os
import re
from collections import defaultdict
from functools import wraps
from openstates.scrape import Scraper, Bill, VoteEvent
from openstates.utils import convert_pdf
import lxml.html
import urllib
# Workaround to prevent chunking error (thanks @showerst)
#
# @see https://stackoverflow.com/a/37818792/1858091
import http.client
_HTTP_VSN = http.client.HTTPConnection._http_vsn
_HTTP_VSN_STR = http.client.HTTPConnection._http_vsn_str
def downgrade_http_version():
http.client.HTTPConnection._http_vsn = 10
http.client.HTTPConnection._http_vsn_str = "HTTP/1.0"
def undo_downgrade_http_version():
http.client.HTTPConnection._http_vsn = _HTTP_VSN
http.client.HTTPConnection._http_vsn_str = _HTTP_VSN_STR
def toggle_http_version(method):
@wraps(method)
def wrapper(self, *args, **kwargs):
downgrade_http_version()
response = method(self, *args, **kwargs)
undo_downgrade_http_version()
return response
return wrapper
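# Hypothetical illustration of the decorator above: wrapping a request method
# makes that single call go out as HTTP/1.0, then restores the original
# protocol version, e.g.
#
#     @toggle_http_version
#     def fetch(self, url):
#         return self.get(url)
#
# (see downgraded_http_get/downgraded_http_post on the scraper class below).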
def action_type(action):
"""
Used to standardise the bill actions to the terms specified
:param scraped action:
:return action classifier:
"""
# http://www.scstatehouse.gov/actionsearch.php is very useful for this
classifiers = (
("Adopted", "passage"),
("Amended and adopted", ["passage", "amendment-passage"]),
("Amended", "amendment-passage"),
("Certain items vetoed", "executive-veto-line-item"),
("Committed to", "referral-committee"),
("Committee Amendment Adopted", "amendment-passage"),
(
"Committee Amendment Amended and Adopted",
["amendment-passage", "amendment-amendment"],
),
("Committee Amendment Amended", "amendment-amendment"),
("Committee Amendment Tabled", "amendment-deferral"),
("Committee report: Favorable", "committee-passage-favorable"),
("Committee report: Majority favorable", "committee-passage"),
("House amendment amended", "amendment-amendment"),
("Introduced and adopted", ["introduction", "passage"]),
("Introduced, adopted", ["introduction", "passage"]),
("Introduced and read first time", ["introduction", "reading-1"]),
("Introduced, read first time", ["introduction", "reading-1"]),
("Introduced", "introduction"),
("Prefiled", "filing"),
("Read second time", "reading-2"),
("Read third time", ["passage", "reading-3"]),
("Recommitted to Committee", "referral-committee"),
("Referred to Committee", "referral-committee"),
("Rejected", "failure"),
("Senate amendment amended", "amendment-amendment"),
("Signed by governor", "executive-signature"),
("Signed by Governor", "executive-signature"),
("Tabled", "failure"),
("Veto overridden", "veto-override-passage"),
("Veto sustained", "veto-override-failure"),
("Vetoed by Governor", "executive-veto"),
)
for prefix, atype in classifiers:
if action.lower().startswith(prefix.lower()):
return atype
# otherwise
return None
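# Example: action_type("Referred to Committee on Judiciary") returns
# "referral-committee"; an action matching no prefix returns None.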
class SCBillScraper(Scraper):
"""
    Bill scraper that pulls down all legislation from the SC website.
    Used to pull in legislation and its basic associated metadata,
    using XPath to find and obtain the information.
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.raise_errors = False
self.retry_attempts = 5
urls = {
"lower": {
"daily-bill-index": "https://www.scstatehouse.gov/hintro/hintros.php",
"prefile-index": "https://www.scstatehouse.gov/sessphp/prefil"
"{last_two_digits_of_session_year}.php",
},
"upper": {
"daily-bill-index": "https://www.scstatehouse.gov/sintro/sintros.php",
"prefile-index": "https://www.scstatehouse.gov/sessphp/prefil"
"{last_two_digits_of_session_year}.php",
},
}
_subjects = defaultdict(set)
@toggle_http_version
def downgraded_http_get(self, url, params=None, **kwargs):
return self.get(url, params=params, **kwargs)
@toggle_http_version
def downgraded_http_post(self, url, data=None, json=None, **kwargs):
return self.post(url, data=data, json=json, **kwargs)
def scrape_subjects(self, session):
"""
        Obtain bill subjects, which are saved into the _subjects mapping
        to be added onto bills later in the process.
        :param session: the legislative session name
"""
# only need to do it once
if self._subjects:
return
session_code = {
"2013-2014": "120",
"2015-2016": "121",
"2017-2018": "122",
"2019-2020": "123",
}[session]
subject_search_url = "https://www.scstatehouse.gov/subjectsearch.php"
data = self.post(
subject_search_url,
data=dict(
(
("GETINDEX", "Y"),
("SESSION", session_code),
("INDEXCODE", "0"),
("INDEXTEXT", ""),
("AORB", "B"),
("PAGETYPE", "0"),
)
),
).text
doc = lxml.html.fromstring(data)
# skip first two subjects, filler options
for option in doc.xpath("//option")[2:]:
subject = option.text
code = option.get("value")
url = "%s?AORB=B&session=%s&indexcode=%s" % (
subject_search_url,
session_code,
code,
)
            # SC's server is sending some noncompliant server responses
            # that confuse self.get
# workaround via
# https://stackoverflow.com/questions/14442222/how-to-handle-incompleteread-in-python
try:
self.info(url)
data = urllib.request.urlopen(url).read()
except (http.client.IncompleteRead) as e:
self.warning("Client IncompleteRead error on {}".format(url))
data = e.partial
doc = lxml.html.fromstring(data)
for bill in doc.xpath('//span[@style="font-weight:bold;"]'):
match = re.match(r"(?:H|S) \d{4}", bill.text)
if match:
# remove * and leading zeroes
bill_id = match.group().replace("*", " ")
bill_id = re.sub(" 0*", " ", bill_id)
self._subjects[bill_id].add(subject)
def scrape_vote_history(self, bill, vurl):
"""
Obtain the information on a vote and link it to the related Bill
:param bill: related bill
:param vurl: source for the voteEvent information.
        :yield: VoteEvent objects linked to the bill
"""
html = self.get(vurl).text
doc = lxml.html.fromstring(html)
doc.make_links_absolute(vurl)
# skip first two rows
for row in doc.xpath("//table/tr")[2:]:
tds = row.getchildren()
if len(tds) != 11:
self.warning("irregular vote row: %s" % vurl)
continue
(
timestamp,
motion,
vote,
yeas,
nays,
nv,
exc,
pres,
abst,
total,
result,
) = tds
timestamp = timestamp.text.replace(u"\xa0", " ")
            timestamp = datetime.datetime.strptime(timestamp, "%m/%d/%Y %I:%M %p")
yeas = int(yeas.text)
nays = int(nays.text)
others = int(nv.text) + int(exc.text) + int(abst.text) + int(pres.text)
assert yeas + nays + others == int(total.text)
if result.text == "Passed":
passed = "pass"
else:
passed = "fail"
vote_link = vote.xpath("a")[0]
if "[H]" in vote_link.text:
chamber = "lower"
else:
chamber = "upper"
vote = VoteEvent(
chamber=chamber, # 'upper' or 'lower'
start_date=timestamp.strftime("%Y-%m-%d"), # 'YYYY-MM-DD' format
motion_text=motion.text,
result=passed,
classification="passage", # Can also be 'other'
# Provide a Bill instance to link with the VoteEvent...
bill=bill,
)
vote.set_count("yes", yeas)
vote.set_count("no", nays)
vote.set_count("other", others)
vote.add_source(vurl)
# obtain vote rollcall from pdf and add it to the VoteEvent object
rollcall_pdf = vote_link.get("href")
self.scrape_rollcall(vote, rollcall_pdf)
vote.add_source(rollcall_pdf)
if rollcall_pdf in self._seen_vote_ids:
self.warning("duplicate usage of %s, skipping", rollcall_pdf)
continue
else:
self._seen_vote_ids.add(rollcall_pdf)
vote.pupa_id = rollcall_pdf # distinct KEY for each one
yield vote
def scrape_rollcall(self, vote, vurl):
"""
Get text information from the pdf, containing the vote roll call
and add the information obtained to the related voteEvent object
:param vote: related voteEvent object
:param vurl: pdf source url
"""
(path, resp) = self.urlretrieve(vurl)
pdflines = convert_pdf(path, "text")
os.remove(path)
current_vfunc = None
option = None
for line in pdflines.split(b"\n"):
line = line.strip().decode()
# change what is being recorded
if line.startswith("YEAS") or line.startswith("AYES"):
current_vfunc = vote.yes
elif line.startswith("NAYS"):
current_vfunc = vote.no
elif line.startswith("EXCUSED"):
current_vfunc = vote.vote
option = "excused"
elif line.startswith("NOT VOTING"):
current_vfunc = vote.vote
option = "excused"
elif line.startswith("ABSTAIN"):
current_vfunc = vote.vote
option = "excused"
elif line.startswith("PAIRED"):
current_vfunc = vote.vote
option = "paired"
# skip these
elif not line or line.startswith("Page "):
continue
# if a vfunc is active
elif current_vfunc:
# split names apart by 3 or more spaces
names = re.split(r"\s{3,}", line)
for name in names:
if name:
if not option:
current_vfunc(name.strip())
else:
current_vfunc(option=option, voter=name.strip())
def scrape_details(self, bill_detail_url, session, chamber, bill_id):
"""
        Create the Bill, add the information obtained from the provided
        bill_detail_url, and then yield the bill object.
:param bill_detail_url:
:param session:
:param chamber:
:param bill_id:
:return:
"""
page = self.get(bill_detail_url).text
if "INVALID BILL NUMBER" in page:
self.warning("INVALID BILL %s" % bill_detail_url)
return
doc = lxml.html.fromstring(page)
doc.make_links_absolute(bill_detail_url)
bill_div = doc.xpath('//div[@style="margin:0 0 40px 0;"]')[0]
bill_type = bill_div.xpath("span/text()")[0]
if "General Bill" in bill_type:
bill_type = "bill"
elif "Concurrent Resolution" in bill_type:
bill_type = "concurrent resolution"
elif "Joint Resolution" in bill_type:
bill_type = "joint resolution"
elif "Resolution" in bill_type:
bill_type = "resolution"
else:
raise ValueError("unknown bill type: %s" % bill_type)
# this is fragile, but less fragile than it was
b = bill_div.xpath('./b[text()="Summary:"]')[0]
bill_summary = b.getnext().tail.strip()
bill = Bill(
bill_id,
legislative_session=session, # session name metadata's `legislative_sessions`
chamber=chamber, # 'upper' or 'lower'
title=bill_summary,
classification=bill_type,
)
subjects = list(self._subjects[bill_id])
for subject in subjects:
bill.add_subject(subject)
# sponsors
for sponsor in doc.xpath('//a[contains(@href, "member.php")]/text()'):
bill.add_sponsorship(
name=sponsor,
classification="primary",
primary=True,
entity_type="person",
)
for sponsor in doc.xpath('//a[contains(@href, "committee.php")]/text()'):
sponsor = sponsor.replace(u"\xa0", " ").strip()
bill.add_sponsorship(
name=sponsor,
classification="primary",
primary=True,
entity_type="organization",
)
# find versions
version_url = doc.xpath('//a[text()="View full text"]/@href')[0]
version_html = self.get(version_url).text
version_doc = lxml.html.fromstring(version_html)
version_doc.make_links_absolute(version_url)
for version in version_doc.xpath('//a[contains(@href, "/prever/")]'):
# duplicate versions with same date, use first appearance
bill.add_version_link(
note=version.text, # Description of the version from the state;
# eg, 'As introduced', 'Amended', etc.
url=version.get("href"),
on_duplicate="ignore",
media_type="text/html", # Still a MIME type
)
# actions
for row in bill_div.xpath("table/tr"):
date_td, chamber_td, action_td = row.xpath("td")
date = datetime.datetime.strptime(date_td.text, "%m/%d/%y")
action_chamber = {"Senate": "upper", "House": "lower", None: "legislature"}[
chamber_td.text
]
action = action_td.text_content()
action = action.split("(House Journal")[0]
action = action.split("(Senate Journal")[0].strip()
atype = action_type(action)
bill.add_action(
description=action, # Action description, from the state
date=date.strftime("%Y-%m-%d"), # `YYYY-MM-DD` format
chamber=action_chamber, # 'upper' or 'lower'
classification=atype, # Options explained in the next section
)
# votes
vurl = doc.xpath('//a[text()="View Vote History"]/@href')
if vurl:
vurl = vurl[0]
yield from self.scrape_vote_history(bill, vurl)
bill.add_source(bill_detail_url)
yield bill
def scrape(self, chamber=None, session=None):
"""
Obtain the bill urls containing the bill information which will be used
by the scrape_details function to yield the desired Bill objects
:param chamber:
:param session:
"""
if session is None:
session = self.latest_session()
self.info("no session specified, using %s", session)
self._seen_vote_ids = set()
# Subject scraping disabled Summer 2020, openstates/issues#77
# Leaving the remnants of this around since it is very possible that SC will
# update their web configuration and we can reuse this later, but for now it was
# breaking 75% of the time and it isn't worth the cost.
# self.scrape_subjects(session)
# get bill index
chambers = [chamber] if chamber else ["upper", "lower"]
for chamber in chambers:
index_url = self.urls[chamber]["daily-bill-index"]
chamber_letter = "S" if chamber == "upper" else "H"
page = self.get(index_url).text
doc = lxml.html.fromstring(page)
doc.make_links_absolute(index_url)
# visit each day and extract bill ids
days = doc.xpath("//div/b/a/@href")
for day_url in days:
try:
data = self.get(day_url).text
except scrapelib.HTTPError:
continue
doc = lxml.html.fromstring(data)
doc.make_links_absolute(day_url)
for bill_a in doc.xpath("//p/a[1]"):
bill_id = bill_a.text.replace(".", "")
if bill_id.startswith(chamber_letter):
yield from self.scrape_details(
bill_a.get("href"), session, chamber, bill_id
)
prefile_url = self.urls[chamber]["prefile-index"].format(
last_two_digits_of_session_year=session[2:4]
)
page = self.get(prefile_url).text
doc = lxml.html.fromstring(page)
doc.make_links_absolute(prefile_url)
# visit each day and extract bill ids
if chamber == "lower":
days = doc.xpath('//dd[contains(text(),"House")]/a/@href')
else:
days = doc.xpath('//dd[contains(text(),"Senate")]/a/@href')
for day_url in days:
try:
data = self.get(day_url).text
except scrapelib.HTTPError:
continue
doc = lxml.html.fromstring(data)
doc.make_links_absolute(day_url)
for bill_a in doc.xpath("//p/a[1]"):
bill_id = bill_a.text.replace(".", "")
if bill_id.startswith(chamber_letter):
yield from self.scrape_details(
bill_a.get("href"), session, chamber, bill_id
)
|
sunlightlabs/openstates
|
scrapers/sc/bills.py
|
Python
|
gpl-3.0
| 18,396
|
[
"VisIt"
] |
2b9e988ad37686b3f0a65c5b61bd6000429225d4708090080cc3fa7fe2985cdf
|
# Copyright (C) 2019 The ESPResSo project
#
# This file is part of ESPResSo.
#
# ESPResSo is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ESPResSo is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import unittest as ut
import importlib_wrapper
import numpy as np
implementation = "gpu" if "gpu" in "@TEST_LABELS@".split(";") else "cpu"
sample, skipIfMissingFeatures = importlib_wrapper.configure_and_import(
"@SAMPLES_DIR@/lbf.py", gpu=implementation == "gpu",
cmd_arguments=["--" + implementation], script_suffix=implementation)
@skipIfMissingFeatures
class Sample(ut.TestCase):
system = sample.system
def test_electrophoresis_gradient(self):
# the force is applied along the z-axis
gradient = np.mean(np.gradient(sample.f_list.T, axis=1), axis=1)
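        # np.gradient along axis=1 takes per-timestep finite differences of
        # each force component, e.g. np.mean(np.gradient([0., 1., 2.])) == 1.0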
self.assertAlmostEqual(gradient[0], 0.0, places=11)
self.assertAlmostEqual(gradient[1], 0.0, places=11)
self.assertAlmostEqual(gradient[2], -7.78814e-7, places=11)
if __name__ == "__main__":
ut.main()
|
espressomd/espresso
|
testsuite/scripts/samples/test_lbf.py
|
Python
|
gpl-3.0
| 1,525
|
[
"ESPResSo"
] |
5cebe56bcd870cb868567e0121765f1eb64373f2a109073dd1ddf12747cbba5d
|
# $Id$
#
# Copyright (c) 2009, Novartis Institutes for BioMedical Research Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials provided
# with the distribution.
# * Neither the name of Novartis Institutes for BioMedical Research Inc.
# nor the names of its contributors may be used to endorse or promote
# products derived from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Created by Greg Landrum, Nov 2008
""" Implementation of the BRICS algorithm from Degen et al. ChemMedChem *3* 1503-7 (2008)
"""
from __future__ import print_function
import sys,re,random
from rdkit import Chem
from rdkit.Chem import rdChemReactions as Reactions
from rdkit.six import iteritems, iterkeys, next
from rdkit.six.moves import range
# These are the definitions that will be applied to fragment molecules:
environs = {
'L1':'[C;D3]([#0,#6,#7,#8])(=O)',
#
# After some discussion, the L2 definitions ("N.pl3" in the original
# paper) have been removed and incorporated into a (almost) general
# purpose amine definition in L5 ("N.sp3" in the paper).
#
# The problem is one of consistency.
# Based on the original definitions you should get the following
# fragmentations:
# C1CCCCC1NC(=O)C -> C1CCCCC1N[2*].[1*]C(=O)C
# c1ccccc1NC(=O)C -> c1ccccc1[16*].[2*]N[2*].[1*]C(=O)C
# This difference just didn't make sense to us. By switching to
# the unified definition we end up with:
# C1CCCCC1NC(=O)C -> C1CCCCC1[15*].[5*]N[5*].[1*]C(=O)C
# c1ccccc1NC(=O)C -> c1ccccc1[16*].[5*]N[5*].[1*]C(=O)C
#
#'L2':'[N;!R;!D1;!$(N=*)]-;!@[#0,#6]',
# this one turned out to be too tricky to define above, so we set it off
# in its own definition:
#'L2a':'[N;D3;R;$(N(@[C;!$(C=*)])@[C;!$(C=*)])]',
'L3':'[O;D2]-;!@[#0,#6,#1]',
'L4':'[C;!D1;!$(C=*)]-;!@[#6]',
#'L5':'[N;!D1;!$(N*!-*);!$(N=*);!$(N-[!C;!#0])]-[#0,C]',
'L5':'[N;!D1;!$(N=*);!$(N-[!#6;!#16;!#0;!#1]);!$([N;R]@[C;R]=O)]',
'L6':'[C;D3;!R](=O)-;!@[#0,#6,#7,#8]',
'L7a':'[C;D2,D3]-[#6]',
'L7b':'[C;D2,D3]-[#6]',
'#L8':'[C;!R;!D1]-;!@[#6]',
'L8':'[C;!R;!D1;!$(C!-*)]',
'L9':'[n;+0;$(n(:[c,n,o,s]):[c,n,o,s])]',
'L10':'[N;R;$(N(@C(=O))@[C,N,O,S])]',
'L11':'[S;D2](-;!@[#0,#6])',
'L12':'[S;D4]([#6,#0])(=O)(=O)',
'L13':'[C;$(C(-;@[C,N,O,S])-;@[N,O,S])]',
'L14':'[c;$(c(:[c,n,o,s]):[n,o,s])]',
'L14b':'[c;$(c(:[c,n,o,s]):[n,o,s])]',
'L15':'[C;$(C(-;@C)-;@C)]',
'L16':'[c;$(c(:c):c)]',
'L16b':'[c;$(c(:c):c)]',
}
reactionDefs = (
# L1
[
('1','3','-'),
('1','5','-'),
('1','10','-'),
],
# L3
[
('3','4','-'),
('3','13','-'),
('3','14','-'),
('3','15','-'),
('3','16','-'),
],
# L4
[
('4','5','-'),
('4','11','-'),
],
# L5
[
('5','12','-'),
('5','14','-'),
('5','16','-'),
('5','13','-'),
('5','15','-'),
],
# L6
[
('6','13','-'),
('6','14','-'),
('6','15','-'),
('6','16','-'),
],
# L7
[
('7a','7b','='),
],
# L8
[
('8','9','-'),
('8','10','-'),
('8','13','-'),
('8','14','-'),
('8','15','-'),
('8','16','-'),
],
# L9
[
('9','13','-'),# not in original paper
('9','14','-'),# not in original paper
('9','15','-'),
('9','16','-'),
],
# L10
[
('10','13','-'),
('10','14','-'),
('10','15','-'),
('10','16','-'),
],
# L11
[
('11','13','-'),
('11','14','-'),
('11','15','-'),
('11','16','-'),
],
# L12
# none left
# L13
[
('13','14','-'),
('13','15','-'),
('13','16','-'),
],
# L14
[
('14','14','-'),# not in original paper
('14','15','-'),
('14','16','-'),
],
# L15
[
('15','16','-'),
],
# L16
[
('16','16','-'), # not in original paper
],
)
import copy
smartsGps=copy.deepcopy(reactionDefs)
for gp in smartsGps:
for j,defn in enumerate(gp):
g1,g2,bnd = defn
r1=environs['L'+g1]
r2=environs['L'+g2]
g1 = re.sub('[a-z,A-Z]','',g1)
g2 = re.sub('[a-z,A-Z]','',g2)
sma='[$(%s):1]%s;!@[$(%s):2]>>[%s*]-[*:1].[%s*]-[*:2]'%(r1,bnd,r2,g1,g2)
gp[j] =sma
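# Worked example of the template above: the first definition ('1','3','-')
# expands (with L1 and L3 substituted) to
#   [$([C;D3]([#0,#6,#7,#8])(=O)):1]-;!@[$([O;D2]-;!@[#0,#6,#1]):2]>>[1*]-[*:1].[3*]-[*:2]
# i.e. cleave the acyclic single bond between an L1 carbon and an L3 oxygen,
# capping each side with its numbered dummy atom.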
for gp in smartsGps:
for defn in gp:
try:
t=Reactions.ReactionFromSmarts(defn)
t.Initialize()
except Exception:
print(defn)
raise
environMatchers={}
for env,sma in iteritems(environs):
environMatchers[env]=Chem.MolFromSmarts(sma)
bondMatchers=[]
for i,compats in enumerate(reactionDefs):
tmp=[]
for i1,i2,bType in compats:
e1 = environs['L%s'%i1]
e2 = environs['L%s'%i2]
patt = '[$(%s)]%s;!@[$(%s)]'%(e1,bType,e2)
patt = Chem.MolFromSmarts(patt)
tmp.append((i1,i2,bType,patt))
bondMatchers.append(tmp)
reactions = tuple([[Reactions.ReactionFromSmarts(y) for y in x] for x in smartsGps])
reverseReactions = []
for i,rxnSet in enumerate(smartsGps):
for j,sma in enumerate(rxnSet):
rs,ps = sma.split('>>')
sma = '%s>>%s'%(ps,rs)
rxn = Reactions.ReactionFromSmarts(sma)
labels = re.findall(r'\[([0-9]+?)\*\]',ps)
rxn._matchers=[Chem.MolFromSmiles('[%s*]'%x) for x in labels]
reverseReactions.append(rxn)
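# Each reverse reaction glues two dummy-capped fragments back together (the
# forward SMARTS read right to left); rxn._matchers holds the dummy-atom
# queries used by BRICSBuild below to pick compatible fragment pairs.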
def FindBRICSBonds(mol,randomizeOrder=False,silent=True):
""" returns the bonds in a molecule that BRICS would cleave
>>> from rdkit import Chem
>>> m = Chem.MolFromSmiles('CCCOCC')
>>> res = list(FindBRICSBonds(m))
>>> res
[((3, 2), ('3', '4')), ((3, 4), ('3', '4'))]
a more complicated case:
>>> m = Chem.MolFromSmiles('CCCOCCC(=O)c1ccccc1')
>>> res = list(FindBRICSBonds(m))
>>> res
[((3, 2), ('3', '4')), ((3, 4), ('3', '4')), ((6, 8), ('6', '16'))]
we can also randomize the order of the results:
>>> random.seed(23)
>>> res = list(FindBRICSBonds(m,randomizeOrder=True))
>>> sorted(res)
[((3, 2), ('3', '4')), ((3, 4), ('3', '4')), ((6, 8), ('6', '16'))]
Note that this is a generator function :
>>> res = FindBRICSBonds(m)
>>> res
<generator object ...>
>>> next(res)
((3, 2), ('3', '4'))
>>> m = Chem.MolFromSmiles('CC=CC')
>>> res = list(FindBRICSBonds(m))
>>> sorted(res)
[((1, 2), ('7', '7'))]
make sure we don't match ring bonds:
>>> m = Chem.MolFromSmiles('O=C1NCCC1')
>>> list(FindBRICSBonds(m))
[]
another nice one, make sure environment 8 doesn't match something connected
to a ring atom:
>>> m = Chem.MolFromSmiles('CC1(C)CCCCC1')
>>> list(FindBRICSBonds(m))
[]
"""
letter = re.compile('[a-z,A-Z]')
indices = list(range(len(bondMatchers)))
bondsDone=set()
if randomizeOrder: random.shuffle(indices,random=random.random)
envMatches={}
for env,patt in iteritems(environMatchers):
envMatches[env]=mol.HasSubstructMatch(patt)
for gpIdx in indices:
if randomizeOrder:
compats =bondMatchers[gpIdx][:]
random.shuffle(compats,random=random.random)
else:
compats = bondMatchers[gpIdx]
for i1,i2,bType,patt in compats:
if not envMatches['L'+i1] or not envMatches['L'+i2]: continue
matches = mol.GetSubstructMatches(patt)
i1 = letter.sub('',i1)
i2 = letter.sub('',i2)
for match in matches:
if match not in bondsDone and (match[1],match[0]) not in bondsDone:
bondsDone.add(match)
yield(((match[0],match[1]),(i1,i2)))
def BreakBRICSBonds(mol,bonds=None,sanitize=True,silent=True):
""" breaks the BRICS bonds in a molecule and returns the results
>>> from rdkit import Chem
>>> m = Chem.MolFromSmiles('CCCOCC')
>>> m2=BreakBRICSBonds(m)
>>> Chem.MolToSmiles(m2,True)
'[3*]O[3*].[4*]CC.[4*]CCC'
a more complicated case:
>>> m = Chem.MolFromSmiles('CCCOCCC(=O)c1ccccc1')
>>> m2=BreakBRICSBonds(m)
>>> Chem.MolToSmiles(m2,True)
'[16*]c1ccccc1.[3*]O[3*].[4*]CCC.[4*]CCC([6*])=O'
can also specify a limited set of bonds to work with:
>>> m = Chem.MolFromSmiles('CCCOCC')
>>> m2 = BreakBRICSBonds(m,[((3, 2), ('3', '4'))])
>>> Chem.MolToSmiles(m2,True)
'[3*]OCC.[4*]CCC'
this can be used as an alternate approach for doing a BRICS decomposition by
following BreakBRICSBonds with a call to Chem.GetMolFrags:
>>> m = Chem.MolFromSmiles('CCCOCC')
>>> m2=BreakBRICSBonds(m)
>>> frags = Chem.GetMolFrags(m2,asMols=True)
>>> [Chem.MolToSmiles(x,True) for x in frags]
['[4*]CCC', '[3*]O[3*]', '[4*]CC']
"""
if not bonds:
#bonds = FindBRICSBonds(mol)
res = Chem.FragmentOnBRICSBonds(mol)
if sanitize:
Chem.SanitizeMol(res)
return res
eMol = Chem.EditableMol(mol)
nAts = mol.GetNumAtoms()
dummyPositions=[]
for indices,dummyTypes in bonds:
ia,ib = indices
obond = mol.GetBondBetweenAtoms(ia,ib)
bondType=obond.GetBondType()
eMol.RemoveBond(ia,ib)
da,db = dummyTypes
atoma = Chem.Atom(0)
atoma.SetIsotope(int(da))
atoma.SetNoImplicit(True)
idxa = nAts
nAts+=1
eMol.AddAtom(atoma)
eMol.AddBond(ia,idxa,bondType)
atomb = Chem.Atom(0)
atomb.SetIsotope(int(db))
atomb.SetNoImplicit(True)
idxb = nAts
nAts+=1
eMol.AddAtom(atomb)
eMol.AddBond(ib,idxb,bondType)
if mol.GetNumConformers():
dummyPositions.append((idxa,ib))
dummyPositions.append((idxb,ia))
res = eMol.GetMol()
if sanitize:
Chem.SanitizeMol(res)
if mol.GetNumConformers():
for conf in mol.GetConformers():
resConf = res.GetConformer(conf.GetId())
for ia,pa in dummyPositions:
resConf.SetAtomPosition(ia,conf.GetAtomPosition(pa))
return res
def BRICSDecompose(mol,allNodes=None,minFragmentSize=1,onlyUseReactions=None,
silent=True,keepNonLeafNodes=False,singlePass=False,returnMols=False):
""" returns the BRICS decomposition for a molecule
>>> from rdkit import Chem
>>> m = Chem.MolFromSmiles('CCCOCc1cc(c2ncccc2)ccc1')
>>> res = list(BRICSDecompose(m))
>>> sorted(res)
['[14*]c1ccccn1', '[16*]c1cccc([16*])c1', '[3*]O[3*]', '[4*]CCC', '[4*]C[8*]']
>>> res = list(BRICSDecompose(m,returnMols=True))
>>> res[0]
<rdkit.Chem.rdchem.Mol object ...>
>>> smis = [Chem.MolToSmiles(x,True) for x in res]
>>> sorted(smis)
['[14*]c1ccccn1', '[16*]c1cccc([16*])c1', '[3*]O[3*]', '[4*]CCC', '[4*]C[8*]']
nexavar, an example from the paper (corrected):
>>> m = Chem.MolFromSmiles('CNC(=O)C1=NC=CC(OC2=CC=C(NC(=O)NC3=CC(=C(Cl)C=C3)C(F)(F)F)C=C2)=C1')
>>> res = list(BRICSDecompose(m))
>>> sorted(res)
['[1*]C([1*])=O', '[1*]C([6*])=O', '[14*]c1cc([16*])ccn1', '[16*]c1ccc(Cl)c([16*])c1', '[16*]c1ccc([16*])cc1', '[3*]O[3*]', '[5*]NC', '[5*]N[5*]', '[8*]C(F)(F)F']
it's also possible to keep pieces that haven't been fully decomposed:
>>> m = Chem.MolFromSmiles('CCCOCC')
>>> res = list(BRICSDecompose(m,keepNonLeafNodes=True))
>>> sorted(res)
['CCCOCC', '[3*]OCC', '[3*]OCCC', '[3*]O[3*]', '[4*]CC', '[4*]CCC']
>>> m = Chem.MolFromSmiles('CCCOCc1cc(c2ncccc2)ccc1')
>>> res = list(BRICSDecompose(m,keepNonLeafNodes=True))
>>> sorted(res)
['CCCOCc1cccc(-c2ccccn2)c1', '[14*]c1ccccn1', '[16*]c1cccc(-c2ccccn2)c1', '[16*]c1cccc(COCCC)c1', '[16*]c1cccc([16*])c1', '[3*]OCCC', '[3*]OC[8*]', '[3*]OCc1cccc(-c2ccccn2)c1', '[3*]OCc1cccc([16*])c1', '[3*]O[3*]', '[4*]CCC', '[4*]C[8*]', '[4*]Cc1cccc(-c2ccccn2)c1', '[4*]Cc1cccc([16*])c1', '[8*]COCCC']
or to only do a single pass of decomposition:
>>> m = Chem.MolFromSmiles('CCCOCc1cc(c2ncccc2)ccc1')
>>> res = list(BRICSDecompose(m,singlePass=True))
>>> sorted(res)
['CCCOCc1cccc(-c2ccccn2)c1', '[14*]c1ccccn1', '[16*]c1cccc(-c2ccccn2)c1', '[16*]c1cccc(COCCC)c1', '[3*]OCCC', '[3*]OCc1cccc(-c2ccccn2)c1', '[4*]CCC', '[4*]Cc1cccc(-c2ccccn2)c1', '[8*]COCCC']
setting a minimum size for the fragments:
>>> m = Chem.MolFromSmiles('CCCOCC')
>>> res = list(BRICSDecompose(m,keepNonLeafNodes=True,minFragmentSize=2))
>>> sorted(res)
['CCCOCC', '[3*]OCC', '[3*]OCCC', '[4*]CC', '[4*]CCC']
>>> m = Chem.MolFromSmiles('CCCOCC')
>>> res = list(BRICSDecompose(m,keepNonLeafNodes=True,minFragmentSize=3))
>>> sorted(res)
['CCCOCC', '[3*]OCC', '[4*]CCC']
>>> res = list(BRICSDecompose(m,minFragmentSize=2))
>>> sorted(res)
['[3*]OCC', '[3*]OCCC', '[4*]CC', '[4*]CCC']
"""
global reactions
mSmi = Chem.MolToSmiles(mol,1)
if allNodes is None:
allNodes=set()
if mSmi in allNodes:
return set()
activePool={mSmi:mol}
allNodes.add(mSmi)
foundMols={mSmi:mol}
for gpIdx,reactionGp in enumerate(reactions):
newPool = {}
while activePool:
matched=False
nSmi = next(iterkeys(activePool))
mol = activePool.pop(nSmi)
for rxnIdx,reaction in enumerate(reactionGp):
if onlyUseReactions and (gpIdx,rxnIdx) not in onlyUseReactions:
continue
if not silent:
print('--------')
print(smartsGps[gpIdx][rxnIdx])
ps = reaction.RunReactants((mol,))
if ps:
if not silent: print(nSmi,'->',len(ps),'products')
for prodSeq in ps:
seqOk=True
# we want to disqualify small fragments, so sort the product sequence by size
tSeq = [(prod.GetNumAtoms(onlyExplicit=True),idx) for idx,prod in enumerate(prodSeq)]
tSeq.sort()
for nats,idx in tSeq:
prod = prodSeq[idx]
try:
Chem.SanitizeMol(prod)
except Exception:
continue
pSmi = Chem.MolToSmiles(prod,1)
if minFragmentSize>0:
nDummies = pSmi.count('*')
if nats-nDummies<minFragmentSize:
seqOk=False
break
prod.pSmi = pSmi
ts = [(x,prodSeq[y]) for x,y in tSeq]
prodSeq=ts
if seqOk:
matched=True
for nats,prod in prodSeq:
pSmi = prod.pSmi
#print('\t',nats,pSmi)
if pSmi not in allNodes:
if not singlePass:
activePool[pSmi] = prod
allNodes.add(pSmi)
foundMols[pSmi]=prod
if singlePass or keepNonLeafNodes or not matched:
newPool[nSmi]=mol
activePool = newPool
if not (singlePass or keepNonLeafNodes):
if not returnMols:
res = set(activePool.keys())
else:
res = activePool.values()
else:
if not returnMols:
res = allNodes
else:
res = foundMols.values()
return res
import random
dummyPattern=Chem.MolFromSmiles('[*]')
def BRICSBuild(fragments,onlyCompleteMols=True,seeds=None,uniquify=True,
scrambleReagents=True,maxDepth=3):
seen = set()
if not seeds:
seeds = list(fragments)
if scrambleReagents:
seeds = list(seeds)
random.shuffle(seeds,random=random.random)
if scrambleReagents:
tempReactions = list(reverseReactions)
random.shuffle(tempReactions,random=random.random)
else:
tempReactions=reverseReactions
for seed in seeds:
seedIsR1=False
seedIsR2=False
nextSteps=[]
for rxn in tempReactions:
if seed.HasSubstructMatch(rxn._matchers[0]):
seedIsR1=True
if seed.HasSubstructMatch(rxn._matchers[1]):
seedIsR2=True
for fragment in fragments:
ps = None
if fragment.HasSubstructMatch(rxn._matchers[0]):
if seedIsR2:
ps = rxn.RunReactants((fragment,seed))
if fragment.HasSubstructMatch(rxn._matchers[1]):
if seedIsR1:
ps = rxn.RunReactants((seed,fragment))
if ps:
for p in ps:
if uniquify:
pSmi =Chem.MolToSmiles(p[0],True)
if pSmi in seen:
continue
else:
seen.add(pSmi)
if p[0].HasSubstructMatch(dummyPattern):
nextSteps.append(p[0])
if not onlyCompleteMols:
yield p[0]
else:
yield p[0]
if nextSteps and maxDepth>0:
for p in BRICSBuild(fragments,onlyCompleteMols=onlyCompleteMols,
seeds=nextSteps,uniquify=uniquify,
maxDepth=maxDepth-1):
if uniquify:
pSmi =Chem.MolToSmiles(p,True)
if pSmi in seen:
continue
else:
seen.add(pSmi)
yield p
# ------- ------- ------- ------- ------- ------- ------- -------
# Begin testing code
#------------------------------------
#
# doctest boilerplate
#
def _test():
import doctest,sys
return doctest.testmod(sys.modules["__main__"],
optionflags=doctest.ELLIPSIS+doctest.NORMALIZE_WHITESPACE)
if __name__=='__main__':
import unittest
class TestCase(unittest.TestCase):
def test1(self):
m = Chem.MolFromSmiles('CC(=O)OC')
res = BRICSDecompose(m)
self.assertTrue(res)
self.assertTrue(len(res)==2)
m = Chem.MolFromSmiles('CC(=O)N1CCC1=O')
res = BRICSDecompose(m)
self.assertTrue(res)
self.assertTrue(len(res)==2,res)
m = Chem.MolFromSmiles('c1ccccc1N(C)C')
res = BRICSDecompose(m)
self.assertTrue(res)
self.assertTrue(len(res)==2,res)
m = Chem.MolFromSmiles('c1cccnc1N(C)C')
res = BRICSDecompose(m)
self.assertTrue(res)
self.assertTrue(len(res)==2,res)
m = Chem.MolFromSmiles('o1ccnc1N(C)C')
res = BRICSDecompose(m)
self.assertTrue(res)
self.assertTrue(len(res)==2)
m = Chem.MolFromSmiles('c1ccccc1OC')
res = BRICSDecompose(m)
self.assertTrue(res)
self.assertTrue(len(res)==2)
m = Chem.MolFromSmiles('o1ccnc1OC')
res = BRICSDecompose(m)
self.assertTrue(res)
self.assertTrue(len(res)==2)
m = Chem.MolFromSmiles('O1CCNC1OC')
res = BRICSDecompose(m)
self.assertTrue(res)
self.assertTrue(len(res)==2)
m = Chem.MolFromSmiles('CCCSCC')
res = BRICSDecompose(m)
self.assertTrue(res)
self.assertTrue(len(res)==3,res)
self.assertTrue('[11*]S[11*]' in res,res)
m = Chem.MolFromSmiles('CCNC(=O)C1CC1')
res = BRICSDecompose(m)
self.assertTrue(res)
self.assertTrue(len(res)==4,res)
self.assertTrue('[5*]N[5*]' in res,res)
def test2(self):
# example from the paper, nexavar:
m = Chem.MolFromSmiles('CNC(=O)C1=NC=CC(OC2=CC=C(NC(=O)NC3=CC(=C(Cl)C=C3)C(F)(F)F)C=C2)=C1')
res = BRICSDecompose(m)
self.assertTrue(res)
self.assertTrue(len(res)==9,res)
def test3(self):
m = Chem.MolFromSmiles('FC(F)(F)C1=C(Cl)C=CC(NC(=O)NC2=CC=CC=C2)=C1')
res = BRICSDecompose(m)
self.assertTrue(res)
self.assertTrue(len(res)==5,res)
self.assertTrue('[5*]N[5*]' in res,res)
self.assertTrue('[16*]c1ccccc1' in res,res)
self.assertTrue('[8*]C(F)(F)F' in res,res)
def test4(self):
allNodes = set()
m = Chem.MolFromSmiles('c1ccccc1OCCC')
res = BRICSDecompose(m,allNodes=allNodes)
self.assertTrue(res)
leaves=res
self.assertTrue(len(leaves)==3,leaves)
self.assertTrue(len(allNodes)==6,allNodes)
res = BRICSDecompose(m,allNodes=allNodes)
self.assertFalse(res)
self.assertTrue(len(allNodes)==6,allNodes)
m = Chem.MolFromSmiles('c1ccccc1OCCCC')
res = BRICSDecompose(m,allNodes=allNodes)
self.assertTrue(res)
leaves.update(res)
self.assertTrue(len(allNodes)==9,allNodes)
self.assertTrue(len(leaves)==4,leaves)
m = Chem.MolFromSmiles('c1cc(C(=O)NCC)ccc1OCCC')
res = BRICSDecompose(m,allNodes=allNodes)
self.assertTrue(res)
leaves.update(res)
self.assertTrue(len(leaves)==8,leaves)
self.assertTrue(len(allNodes)==18,allNodes)
def test5(self):
allNodes = set()
frags = [
'[14*]c1ncncn1',
'[16*]c1ccccc1',
'[14*]c1ncccc1',
]
frags = [Chem.MolFromSmiles(x) for x in frags]
res = BRICSBuild(frags)
self.assertTrue(res)
res= list(res)
self.assertTrue(len(res)==6)
smis = [Chem.MolToSmiles(x,True) for x in res]
self.assertTrue('c1ccc(-c2ccccc2)cc1' in smis)
self.assertTrue('c1ccc(-c2ccccn2)cc1' in smis)
def test5a(self):
allNodes = set()
frags = [
'[3*]O[3*]',
'[16*]c1ccccc1',
]
frags = [Chem.MolFromSmiles(x) for x in frags]
res = BRICSBuild(frags)
self.assertTrue(res)
res=list(res)
smis = [Chem.MolToSmiles(x,True) for x in res]
self.assertTrue(len(smis)==2,smis)
self.assertTrue('c1ccc(Oc2ccccc2)cc1' in smis)
self.assertTrue('c1ccc(-c2ccccc2)cc1' in smis)
def test6(self):
allNodes = set()
frags = [
'[16*]c1ccccc1',
'[3*]OC',
'[9*]n1cccc1',
]
frags = [Chem.MolFromSmiles(x) for x in frags]
res = BRICSBuild(frags)
self.assertTrue(res)
res= list(res)
self.assertTrue(len(res)==3)
smis = [Chem.MolToSmiles(x,True) for x in res]
self.assertTrue('c1ccc(-c2ccccc2)cc1' in smis)
self.assertTrue('COc1ccccc1' in smis)
self.assertTrue('c1ccc(-n2cccc2)cc1' in smis,smis)
def test7(self):
allNodes = set()
frags = [
'[16*]c1ccccc1',
'[3*]OC',
'[3*]OCC(=O)[6*]',
]
frags = [Chem.MolFromSmiles(x) for x in frags]
res = BRICSBuild(frags)
self.assertTrue(res)
res= list(res)
smis = [Chem.MolToSmiles(x,True) for x in res]
self.assertTrue(len(res)==3)
self.assertTrue('c1ccc(-c2ccccc2)cc1' in smis)
self.assertTrue('COc1ccccc1' in smis)
self.assertTrue('O=C(COc1ccccc1)c1ccccc1' in smis)
def test8(self):
random.seed(23)
base = Chem.MolFromSmiles("n1cncnc1OCC(C1CC1)OC1CNC1")
catalog = BRICSDecompose(base)
self.assertTrue(len(catalog)==5,catalog)
catalog = [Chem.MolFromSmiles(x) for x in catalog]
ms = list(BRICSBuild(catalog,maxDepth=4))
for m in ms:
Chem.SanitizeMol(m)
ms = [Chem.MolToSmiles(x) for x in ms]
self.assertEqual(len(ms),36)
ts = ['n1cnc(C2CNC2)nc1','n1cnc(-c2ncncn2)nc1','C(OC1CNC1)C(C1CC1)OC1CNC1',
'n1cnc(OC(COC2CNC2)C2CC2)nc1','n1cnc(OCC(OC2CNC2)C2CNC2)nc1']
ts = [Chem.MolToSmiles(Chem.MolFromSmiles(x),True) for x in ts]
for t in ts:
self.assertTrue(t in ms,(t,ms))
def test9(self):
m = Chem.MolFromSmiles('CCOc1ccccc1c1ncc(c2nc(NCCCC)ncn2)cc1')
res=BRICSDecompose(m)
self.assertEqual(len(res),7)
self.assertTrue('[3*]O[3*]' in res)
self.assertFalse('[14*]c1ncnc(NCCCC)n1' in res)
res = BRICSDecompose(m,singlePass=True)
self.assertEqual(len(res),13)
self.assertTrue('[3*]OCC' in res)
self.assertTrue('[14*]c1ncnc(NCCCC)n1' in res)
def test10(self):
m = Chem.MolFromSmiles('C1CCCCN1c1ccccc1')
res=BRICSDecompose(m)
self.assertEqual(len(res),2,res)
def test11(self):
# test coordinate preservation:
molblock="""
RDKit 3D
13 14 0 0 0 0 0 0 0 0999 V2000
-1.2004 0.5900 0.6110 C 0 0 0 0 0 0 0 0 0 0 0 0
-2.2328 1.3173 0.0343 C 0 0 0 0 0 0 0 0 0 0 0 0
-3.4299 0.6533 -0.1500 C 0 0 0 0 0 0 0 0 0 0 0 0
-3.3633 -0.7217 -0.3299 C 0 0 0 0 0 0 0 0 0 0 0 0
-2.1552 -1.3791 -0.2207 C 0 0 0 0 0 0 0 0 0 0 0 0
-1.1425 -0.7969 0.5335 C 0 0 0 0 0 0 0 0 0 0 0 0
0.1458 -1.4244 0.4108 O 0 0 0 0 0 0 0 0 0 0 0 0
1.2976 -0.7398 -0.1026 C 0 0 0 0 0 0 0 0 0 0 0 0
2.4889 -0.7939 0.5501 N 0 0 0 0 0 0 0 0 0 0 0 0
3.4615 0.1460 0.3535 C 0 0 0 0 0 0 0 0 0 0 0 0
3.0116 1.4034 -0.0296 C 0 0 0 0 0 0 0 0 0 0 0 0
1.9786 1.4264 -0.9435 C 0 0 0 0 0 0 0 0 0 0 0 0
1.1399 0.3193 -0.9885 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 2 0
2 3 1 0
3 4 2 0
4 5 1 0
5 6 2 0
6 7 1 0
7 8 1 0
8 9 2 0
9 10 1 0
10 11 2 0
11 12 1 0
12 13 2 0
6 1 1 0
13 8 1 0
M END
"""
m = Chem.MolFromMolBlock(molblock)
pieces = BreakBRICSBonds(m)
frags = Chem.GetMolFrags(pieces,asMols=True)
self.assertEqual(len(frags),3)
self.assertEqual(frags[0].GetNumAtoms(),7)
self.assertEqual(frags[1].GetNumAtoms(),3)
self.assertEqual(frags[2].GetNumAtoms(),7)
c1 = m.GetConformer()
c2 = frags[0].GetConformer()
for i in range(6):
p1 = c1.GetAtomPosition(i)
p2 = c2.GetAtomPosition(i)
self.assertEqual((p1-p2).Length(),0.0)
p1 = c1.GetAtomPosition(6)
p2 = c2.GetAtomPosition(6)
self.assertEqual((p1-p2).Length(),0.0)
c2 = frags[2].GetConformer()
for i in range(6):
p1 = c1.GetAtomPosition(i+7)
p2 = c2.GetAtomPosition(i)
self.assertEqual((p1-p2).Length(),0.0)
p1 = c1.GetAtomPosition(6)
p2 = c2.GetAtomPosition(6)
self.assertEqual((p1-p2).Length(),0.0)
c2 = frags[1].GetConformer()
for i in range(1):
p1 = c1.GetAtomPosition(i+6)
p2 = c2.GetAtomPosition(i)
self.assertEqual((p1-p2).Length(),0.0)
p1 = c1.GetAtomPosition(5)
p2 = c2.GetAtomPosition(1)
self.assertEqual((p1-p2).Length(),0.0)
p1 = c1.GetAtomPosition(6)
p2 = c2.GetAtomPosition(0)
self.assertEqual((p1-p2).Length(),0.0)
            # make sure multiple conformers (including 2D) also work:
molblock="""
RDKit 2D
13 14 0 0 0 0 0 0 0 0999 V2000
-1.2990 -0.8654 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-2.5981 -1.6154 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-3.8971 -0.8654 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-3.8971 0.6346 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-2.5981 1.3846 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-1.2990 0.6346 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-0.0000 1.3846 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0
1.2990 0.6346 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1.2990 -0.8654 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
2.5981 -1.6154 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
3.8971 -0.8654 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
3.8971 0.6346 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
2.5981 1.3846 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 2 0
2 3 1 0
3 4 2 0
4 5 1 0
5 6 2 0
6 7 1 0
7 8 1 0
8 9 2 0
9 10 1 0
10 11 2 0
11 12 1 0
12 13 2 0
6 1 1 0
13 8 1 0
M END
"""
m2 = Chem.MolFromMolBlock(molblock)
m.AddConformer(m2.GetConformer(),assignId=True)
self.assertEqual(m.GetNumConformers(),2)
pieces = BreakBRICSBonds(m)
frags = Chem.GetMolFrags(pieces,asMols=True)
self.assertEqual(len(frags),3)
self.assertEqual(frags[0].GetNumAtoms(),7)
self.assertEqual(frags[1].GetNumAtoms(),3)
self.assertEqual(frags[2].GetNumAtoms(),7)
self.assertEqual(frags[0].GetNumConformers(),2)
self.assertEqual(frags[1].GetNumConformers(),2)
self.assertEqual(frags[2].GetNumConformers(),2)
c1 = m.GetConformer(0)
c2 = frags[0].GetConformer(0)
for i in range(6):
p1 = c1.GetAtomPosition(i)
p2 = c2.GetAtomPosition(i)
self.assertEqual((p1-p2).Length(),0.0)
p1 = c1.GetAtomPosition(6)
p2 = c2.GetAtomPosition(6)
self.assertEqual((p1-p2).Length(),0.0)
c2 = frags[2].GetConformer(0)
for i in range(6):
p1 = c1.GetAtomPosition(i+7)
p2 = c2.GetAtomPosition(i)
self.assertEqual((p1-p2).Length(),0.0)
p1 = c1.GetAtomPosition(6)
p2 = c2.GetAtomPosition(6)
self.assertEqual((p1-p2).Length(),0.0)
c2 = frags[1].GetConformer(0)
for i in range(1):
p1 = c1.GetAtomPosition(i+6)
p2 = c2.GetAtomPosition(i)
self.assertEqual((p1-p2).Length(),0.0)
p1 = c1.GetAtomPosition(5)
p2 = c2.GetAtomPosition(1)
self.assertEqual((p1-p2).Length(),0.0)
p1 = c1.GetAtomPosition(6)
p2 = c2.GetAtomPosition(0)
self.assertEqual((p1-p2).Length(),0.0)
c1 = m.GetConformer(1)
c2 = frags[0].GetConformer(1)
for i in range(6):
p1 = c1.GetAtomPosition(i)
p2 = c2.GetAtomPosition(i)
self.assertEqual((p1-p2).Length(),0.0)
p1 = c1.GetAtomPosition(6)
p2 = c2.GetAtomPosition(6)
self.assertEqual((p1-p2).Length(),0.0)
c2 = frags[2].GetConformer(1)
for i in range(6):
p1 = c1.GetAtomPosition(i+7)
p2 = c2.GetAtomPosition(i)
self.assertEqual((p1-p2).Length(),0.0)
p1 = c1.GetAtomPosition(6)
p2 = c2.GetAtomPosition(6)
self.assertEqual((p1-p2).Length(),0.0)
c2 = frags[1].GetConformer(1)
for i in range(1):
p1 = c1.GetAtomPosition(i+6)
p2 = c2.GetAtomPosition(i)
self.assertEqual((p1-p2).Length(),0.0)
p1 = c1.GetAtomPosition(5)
p2 = c2.GetAtomPosition(1)
self.assertEqual((p1-p2).Length(),0.0)
p1 = c1.GetAtomPosition(6)
p2 = c2.GetAtomPosition(0)
self.assertEqual((p1-p2).Length(),0.0)
def test12(self):
m = Chem.MolFromSmiles('CCS(=O)(=O)NCC')
res=list(FindBRICSBonds(m))
self.assertEqual(len(res),2,res)
atIds = [x[0] for x in res]
atIds.sort()
self.assertEqual(atIds,[(5,2), (6,5)])
failed,tried = _test()
if failed:
sys.exit(failed)
unittest.main()
|
adalke/rdkit
|
rdkit/Chem/BRICS.py
|
Python
|
bsd-3-clause
| 30,959
|
[
"RDKit"
] |
85b0c5f8516faba796f86dee871a33ef4b9f0fdfa305e3f3584d0b1e7c3ea042
|
"""
Graphical model (GM)-based optimization algorithm using Theano
"""
__authors__ = "James Bergstra"
__license__ = "3-clause BSD License"
__contact__ = "github.com/jaberg/hyperopt"
import logging
import time
import numpy as np
from scipy.special import erf
import pyll
from pyll import scope
from pyll.stochastic import implicit_stochastic
from .base import miscs_to_idxs_vals
from .base import miscs_update_idxs_vals
from .base import Trials
import rand
logger = logging.getLogger(__name__)
EPS = 1e-12
# -- default linear forgetting. don't try to change by writing this variable
# because it's captured in function default args when this file is read
DEFAULT_LF = 25
adaptive_parzen_samplers = {}
def adaptive_parzen_sampler(name):
def wrapper(f):
assert name not in adaptive_parzen_samplers
adaptive_parzen_samplers[name] = f
return f
return wrapper
#
# These are some custom distributions
# that are used to represent posterior distributions.
#
# -- Categorical
@scope.define
def categorical_lpdf(sample, p, upper):
"""
"""
if sample.size:
return np.log(np.asarray(p)[sample])
else:
return np.asarray([])
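# -- Illustrative sketch (added, not in the original file). scope.define
#    functions build pyll graph nodes when called, so the lookup is replayed
#    in plain numpy here: the log-probability is an elementwise table lookup.
def _demo_categorical_lpdf():
    p = np.asarray([0.2, 0.5, 0.3])
    sample = np.asarray([0, 1, 1, 2])
    return np.log(p[sample])  # log P(sample_i) = log p[sample_i]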
# -- Bounded Gaussian Mixture Model (BGMM)
@implicit_stochastic
@scope.define
def GMM1(weights, mus, sigmas, low=None, high=None, q=None, rng=None,
size=()):
"""Sample from truncated 1-D Gaussian Mixture Model"""
weights, mus, sigmas = map(np.asarray, (weights, mus, sigmas))
assert len(weights) == len(mus) == len(sigmas)
n_samples = np.prod(size)
#n_components = len(weights)
if low is None and high is None:
# -- draw from a standard GMM
active = np.argmax(rng.multinomial(1, weights, (n_samples,)), axis=1)
samples = rng.normal(loc=mus[active], scale=sigmas[active])
else:
# -- draw from truncated components
# TODO: one-sided-truncation
low = float(low)
high = float(high)
if low >= high:
raise ValueError('low >= high', (low, high))
samples = []
while len(samples) < n_samples:
active = np.argmax(rng.multinomial(1, weights))
draw = rng.normal(loc=mus[active], scale=sigmas[active])
if low <= draw < high:
samples.append(draw)
samples = np.reshape(np.asarray(samples), size)
#print 'SAMPLES', samples
if q is None:
return samples
else:
return np.round(samples / q) * q
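# -- Illustrative sketch (added, not in the original file): the truncated
#    branch of GMM1 above is plain rejection sampling. The same idea in
#    isolation, for a single draw:
def _demo_truncated_gmm_draw(rng, weights, mus, sigmas, low, high):
    # propose component draws until one lands inside [low, high)
    while True:
        active = np.argmax(rng.multinomial(1, weights))
        draw = rng.normal(loc=mus[active], scale=sigmas[active])
        if low <= draw < high:
            return draw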
@scope.define
def normal_cdf(x, mu, sigma):
top = (x - mu)
bottom = np.maximum(np.sqrt(2) * sigma, EPS)
z = top / bottom
return 0.5 * (1 + erf(z))
@scope.define
def GMM1_lpdf(samples, weights, mus, sigmas, low=None, high=None, q=None):
verbose = 0
samples, weights, mus, sigmas = map(np.asarray,
(samples, weights, mus, sigmas))
if samples.size == 0:
return np.asarray([])
if weights.ndim != 1:
raise TypeError('need vector of weights', weights.shape)
if mus.ndim != 1:
raise TypeError('need vector of mus', mus.shape)
if sigmas.ndim != 1:
raise TypeError('need vector of sigmas', sigmas.shape)
assert len(weights) == len(mus) == len(sigmas)
_samples = samples
samples = _samples.flatten()
if verbose:
print 'GMM1_lpdf:samples', set(samples)
print 'GMM1_lpdf:weights', weights
print 'GMM1_lpdf:mus', mus
print 'GMM1_lpdf:sigmas', sigmas
print 'GMM1_lpdf:low', low
print 'GMM1_lpdf:high', high
print 'GMM1_lpdf:q', q
if low is None and high is None:
p_accept = 1
else:
p_accept = np.sum(
weights * (
normal_cdf(high, mus, sigmas)
- normal_cdf(low, mus, sigmas)))
if q is None:
dist = samples[:, None] - mus
mahal = (dist / np.maximum(sigmas, EPS)) ** 2
# mahal shape is (n_samples, n_components)
Z = np.sqrt(2 * np.pi * sigmas ** 2)
coef = weights / Z / p_accept
rval = logsum_rows(- 0.5 * mahal + np.log(coef))
else:
prob = np.zeros(samples.shape, dtype='float64')
for w, mu, sigma in zip(weights, mus, sigmas):
if high is None:
ubound = samples + q / 2.0
else:
ubound = np.minimum(samples + q / 2.0, high)
if low is None:
lbound = samples - q / 2.0
else:
lbound = np.maximum(samples - q / 2.0, low)
# -- two-stage addition is slightly more numerically accurate
inc_amt = w * normal_cdf(ubound, mu, sigma)
inc_amt -= w * normal_cdf(lbound, mu, sigma)
prob += inc_amt
rval = np.log(prob) - np.log(p_accept)
if verbose:
print 'GMM1_lpdf:rval:', dict(zip(samples, rval))
rval.shape = _samples.shape
return rval
# -- Mixture of Log-Normals
@scope.define
def lognormal_cdf(x, mu, sigma):
    # wikipedia gives the cdf as
    # .5 + .5 * erf((log(x) - mu) / sqrt(2 sigma^2))
#
# the maximum is used to move negative values and 0 up to a point
# where they do not cause nan or inf, but also don't contribute much
# to the cdf.
if len(x) == 0:
return np.asarray([])
if x.min() < 0:
raise ValueError('negative arg to lognormal_cdf', x)
olderr = np.seterr(divide='ignore')
try:
top = np.log(np.maximum(x, EPS)) - mu
bottom = np.maximum(np.sqrt(2) * sigma, EPS)
z = top / bottom
return .5 + .5 * erf(z)
finally:
np.seterr(**olderr)
@scope.define
def lognormal_lpdf(x, mu, sigma):
# formula copied from wikipedia
# http://en.wikipedia.org/wiki/Log-normal_distribution
assert np.all(sigma >= 0)
sigma = np.maximum(sigma, EPS)
Z = sigma * x * np.sqrt(2 * np.pi)
E = 0.5 * ((np.log(x) - mu) / sigma) ** 2
rval = -E - np.log(Z)
return rval
@scope.define
def qlognormal_lpdf(x, mu, sigma, q):
    # quantization rounds to the nearest multiple of q,
    # so the lpdf is the log of the integral of P(t) from x - q to x
# XXX: subtracting two numbers potentially very close together.
return np.log(
lognormal_cdf(x, mu, sigma)
- lognormal_cdf(x - q, mu, sigma))
@implicit_stochastic
@scope.define
def LGMM1(weights, mus, sigmas, low=None, high=None, q=None,
rng=None, size=()):
weights, mus, sigmas = map(np.asarray, (weights, mus, sigmas))
n_samples = np.prod(size)
#n_components = len(weights)
if low is None and high is None:
active = np.argmax(
rng.multinomial(1, weights, (n_samples,)),
axis=1)
assert len(active) == n_samples
samples = np.exp(
rng.normal(
loc=mus[active],
scale=sigmas[active]))
else:
# -- draw from truncated components
# TODO: one-sided-truncation
low = float(low)
high = float(high)
if low >= high:
raise ValueError('low >= high', (low, high))
samples = []
while len(samples) < n_samples:
active = np.argmax(rng.multinomial(1, weights))
draw = rng.normal(loc=mus[active], scale=sigmas[active])
if low <= draw < high:
samples.append(np.exp(draw))
samples = np.asarray(samples)
samples = np.reshape(np.asarray(samples), size)
if q is not None:
samples = np.round(samples / q) * q
return samples
def logsum_rows(x):
R, C = x.shape
m = x.max(axis=1)
return np.log(np.exp(x - m[:, None]).sum(axis=1)) + m
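# -- Illustrative note (added, not in the original file): logsum_rows is the
#    standard log-sum-exp trick. Subtracting the row max before exponentiating
#    avoids overflow, and is exact because
#        log(sum_j exp(x_j)) = m + log(sum_j exp(x_j - m))   for any m.
def _demo_logsum_rows():
    x = np.array([[1000.0, 1000.0]])  # naive exp() would overflow here
    return logsum_rows(x)             # array([1000.0 + log(2)])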
@scope.define
def LGMM1_lpdf(samples, weights, mus, sigmas, low=None, high=None, q=None):
samples, weights, mus, sigmas = map(np.asarray,
(samples, weights, mus, sigmas))
assert weights.ndim == 1
assert mus.ndim == 1
assert sigmas.ndim == 1
_samples = samples
if samples.ndim != 1:
samples = samples.flatten()
if low is None and high is None:
p_accept = 1
else:
p_accept = np.sum(
weights * (
normal_cdf(high, mus, sigmas)
- normal_cdf(low, mus, sigmas)))
if q is None:
# compute the lpdf of each sample under each component
lpdfs = lognormal_lpdf(samples[:, None], mus, sigmas)
rval = logsum_rows(lpdfs + np.log(weights))
else:
# compute the lpdf of each sample under each component
prob = np.zeros(samples.shape, dtype='float64')
for w, mu, sigma in zip(weights, mus, sigmas):
if high is None:
ubound = samples + q / 2.0
else:
ubound = np.minimum(samples + q / 2.0, np.exp(high))
if low is None:
lbound = samples - q / 2.0
else:
lbound = np.maximum(samples - q / 2.0, np.exp(low))
lbound = np.maximum(0, lbound)
# -- two-stage addition is slightly more numerically accurate
inc_amt = w * lognormal_cdf(ubound, mu, sigma)
inc_amt -= w * lognormal_cdf(lbound, mu, sigma)
prob += inc_amt
rval = np.log(prob) - np.log(p_accept)
rval.shape = _samples.shape
return rval
#
# This is the weird heuristic ParzenWindow estimator used for continuous
# distributions in various ways.
#
@scope.define_info(o_len=3)
def adaptive_parzen_normal_orig(mus, prior_weight, prior_mu, prior_sigma):
"""
A heuristic estimator for the mu and sigma values of a GMM
TODO: try to find this heuristic in the literature, and cite it - Yoshua
mentioned the term 'elastic' I think?
    mus - vector of N observed component centers (must be 1-D)
"""
mus_orig = np.array(mus)
mus = np.array(mus)
assert str(mus.dtype) != 'object'
if mus.ndim != 1:
raise TypeError('mus must be vector', mus)
if len(mus) == 0:
mus = np.asarray([prior_mu])
sigma = np.asarray([prior_sigma])
elif len(mus) == 1:
mus = np.asarray([prior_mu] + [mus[0]])
sigma = np.asarray([prior_sigma, prior_sigma * .5])
elif len(mus) >= 2:
order = np.argsort(mus)
mus = mus[order]
sigma = np.zeros_like(mus)
sigma[1:-1] = np.maximum(
mus[1:-1] - mus[0:-2],
mus[2:] - mus[1:-1])
if len(mus) > 2:
lsigma = mus[2] - mus[0]
usigma = mus[-1] - mus[-3]
else:
lsigma = mus[1] - mus[0]
usigma = mus[-1] - mus[-2]
sigma[0] = lsigma
sigma[-1] = usigma
# XXX: is sorting them necessary anymore?
# un-sort the mus and sigma
mus[order] = mus.copy()
sigma[order] = sigma.copy()
if not np.all(mus_orig == mus):
print 'orig', mus_orig
print 'mus', mus
assert np.all(mus_orig == mus)
# put the prior back in
mus = np.asarray([prior_mu] + list(mus))
sigma = np.asarray([prior_sigma] + list(sigma))
maxsigma = prior_sigma
# -- magic formula:
minsigma = prior_sigma / np.sqrt(1 + len(mus))
#print 'maxsigma, minsigma', maxsigma, minsigma
sigma = np.clip(sigma, minsigma, maxsigma)
weights = np.ones(len(mus), dtype=mus.dtype)
weights[0] = prior_weight
#print weights.dtype
weights = weights / weights.sum()
if 0:
print 'WEIGHTS', weights
print 'MUS', mus
print 'SIGMA', sigma
return weights, mus, sigma
@scope.define
def linear_forgetting_weights(N, LF):
assert N >= 0
assert LF > 0
if N == 0:
return np.asarray([])
elif N < LF:
return np.ones(N)
else:
ramp = np.linspace(1.0 / N, 1.0, num=N - LF)
flat = np.ones(LF)
weights = np.concatenate([ramp, flat], axis=0)
assert weights.shape == (N,), (weights.shape, N)
return weights
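# -- Illustrative sketch (added, not in the original file). The scope wrapper
#    makes direct calls symbolic, so the ramp is replayed in plain numpy:
#    with N=6 observations and LF=4, the two oldest are down-weighted on a
#    linear ramp while the four most recent keep weight 1.
def _demo_linear_forgetting():
    N, LF = 6, 4
    ramp = np.linspace(1.0 / N, 1.0, num=N - LF)  # [1/6, 1.0]
    return np.concatenate([ramp, np.ones(LF)])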
# XXX: make TPE do a post-inference pass over the pyll graph and insert
# non-default LF argument
@scope.define_info(o_len=3)
def adaptive_parzen_normal(mus, prior_weight, prior_mu, prior_sigma,
LF=DEFAULT_LF):
"""
    mus - vector of N observed component centers (must be 1-D)
"""
#mus_orig = np.array(mus)
mus = np.array(mus)
assert str(mus.dtype) != 'object'
if mus.ndim != 1:
raise TypeError('mus must be vector', mus)
if len(mus) == 0:
srtd_mus = np.asarray([prior_mu])
sigma = np.asarray([prior_sigma])
prior_pos = 0
elif len(mus) == 1:
if prior_mu < mus[0]:
prior_pos = 0
srtd_mus = np.asarray([prior_mu, mus[0]])
sigma = np.asarray([prior_sigma, prior_sigma * .5])
else:
prior_pos = 1
srtd_mus = np.asarray([mus[0], prior_mu])
sigma = np.asarray([prior_sigma * .5, prior_sigma])
elif len(mus) >= 2:
# create new_mus, which is sorted, and in which
# the prior has been inserted
order = np.argsort(mus)
prior_pos = np.searchsorted(mus[order], prior_mu)
srtd_mus = np.zeros(len(mus) + 1)
srtd_mus[:prior_pos] = mus[order[:prior_pos]]
srtd_mus[prior_pos] = prior_mu
srtd_mus[prior_pos + 1:] = mus[order[prior_pos:]]
sigma = np.zeros_like(srtd_mus)
sigma[1:-1] = np.maximum(
srtd_mus[1:-1] - srtd_mus[0:-2],
srtd_mus[2:] - srtd_mus[1:-1])
lsigma = srtd_mus[1] - srtd_mus[0]
usigma = srtd_mus[-1] - srtd_mus[-2]
sigma[0] = lsigma
sigma[-1] = usigma
if LF and LF < len(mus):
unsrtd_weights = linear_forgetting_weights(len(mus), LF)
srtd_weights = np.zeros_like(srtd_mus)
assert len(unsrtd_weights) + 1 == len(srtd_mus)
srtd_weights[:prior_pos] = unsrtd_weights[order[:prior_pos]]
srtd_weights[prior_pos] = prior_weight
srtd_weights[prior_pos + 1:] = unsrtd_weights[order[prior_pos:]]
else:
srtd_weights = np.ones(len(srtd_mus))
srtd_weights[prior_pos] = prior_weight
# -- magic formula:
maxsigma = prior_sigma / 1.0
minsigma = prior_sigma / min(100.0, (1.0 + len(srtd_mus)))
#print 'maxsigma, minsigma', maxsigma, minsigma
sigma = np.clip(sigma, minsigma, maxsigma)
sigma[prior_pos] = prior_sigma
assert prior_sigma > 0
assert maxsigma > 0
assert minsigma > 0
assert np.all(sigma > 0), (sigma.min(), minsigma, maxsigma)
#print weights.dtype
srtd_weights /= srtd_weights.sum()
if 0:
print 'WEIGHTS', srtd_weights
print 'MUS', srtd_mus
print 'SIGMA', sigma
return srtd_weights, srtd_mus, sigma
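# -- Illustrative sketch (added, not in the original file): the core of the
#    heuristic above is that each component's sigma is the larger of the gaps
#    to its sorted neighbours (endpoints use their single gap), before being
#    clipped to [minsigma, maxsigma].
def _demo_neighbour_gap_sigmas():
    srtd_mus = np.asarray([0.0, 1.0, 2.0, 4.0])  # observations + prior, sorted
    sigma = np.zeros_like(srtd_mus)
    sigma[1:-1] = np.maximum(srtd_mus[1:-1] - srtd_mus[0:-2],
                             srtd_mus[2:] - srtd_mus[1:-1])
    sigma[0] = srtd_mus[1] - srtd_mus[0]
    sigma[-1] = srtd_mus[-1] - srtd_mus[-2]
    return sigma  # array([1., 1., 2., 2.])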
#
# Adaptive Parzen Samplers
# These produce conditional estimators for various prior distributions
#
# -- Uniform
@adaptive_parzen_sampler('uniform')
def ap_uniform_sampler(obs, prior_weight, low, high, size=(), rng=None):
prior_mu = 0.5 * (high + low)
prior_sigma = 1.0 * (high - low)
weights, mus, sigmas = scope.adaptive_parzen_normal(obs,
prior_weight, prior_mu, prior_sigma)
return scope.GMM1(weights, mus, sigmas, low=low, high=high, q=None,
size=size, rng=rng)
@adaptive_parzen_sampler('quniform')
def ap_quniform_sampler(obs, prior_weight, low, high, q, size=(), rng=None):
prior_mu = 0.5 * (high + low)
prior_sigma = 1.0 * (high - low)
weights, mus, sigmas = scope.adaptive_parzen_normal(obs,
prior_weight, prior_mu, prior_sigma)
return scope.GMM1(weights, mus, sigmas, low=low, high=high, q=q,
size=size, rng=rng)
@adaptive_parzen_sampler('loguniform')
def ap_loguniform_sampler(obs, prior_weight, low, high,
size=(), rng=None):
prior_mu = 0.5 * (high + low)
prior_sigma = 1.0 * (high - low)
weights, mus, sigmas = scope.adaptive_parzen_normal(
scope.log(obs), prior_weight, prior_mu, prior_sigma)
rval = scope.LGMM1(weights, mus, sigmas, low=low, high=high,
size=size, rng=rng)
return rval
@adaptive_parzen_sampler('qloguniform')
def ap_qloguniform_sampler(obs, prior_weight, low, high, q,
size=(), rng=None):
prior_mu = 0.5 * (high + low)
prior_sigma = 1.0 * (high - low)
weights, mus, sigmas = scope.adaptive_parzen_normal(
scope.log(
# -- map observations that were quantized to be below exp(low)
# (particularly 0) back up to exp(low) where they will
# interact in a reasonable way with the AdaptiveParzen
# thing.
scope.maximum(
obs,
scope.maximum( # -- protect against exp(low) underflow
EPS,
scope.exp(low)))),
prior_weight, prior_mu, prior_sigma)
return scope.LGMM1(weights, mus, sigmas, low, high, q=q,
size=size, rng=rng)
# -- Normal
@adaptive_parzen_sampler('normal')
def ap_normal_sampler(obs, prior_weight, mu, sigma, size=(), rng=None):
weights, mus, sigmas = scope.adaptive_parzen_normal(
obs, prior_weight, mu, sigma)
return scope.GMM1(weights, mus, sigmas, size=size, rng=rng)
@adaptive_parzen_sampler('qnormal')
def ap_qnormal_sampler(obs, prior_weight, mu, sigma, q, size=(), rng=None):
weights, mus, sigmas = scope.adaptive_parzen_normal(
obs, prior_weight, mu, sigma)
return scope.GMM1(weights, mus, sigmas, q=q, size=size, rng=rng)
@adaptive_parzen_sampler('lognormal')
def ap_loglognormal_sampler(obs, prior_weight, mu, sigma, size=(), rng=None):
weights, mus, sigmas = scope.adaptive_parzen_normal(
scope.log(obs), prior_weight, mu, sigma)
rval = scope.LGMM1(weights, mus, sigmas, size=size, rng=rng)
return rval
@adaptive_parzen_sampler('qlognormal')
def ap_qlognormal_sampler(obs, prior_weight, mu, sigma, q, size=(), rng=None):
log_obs = scope.log(scope.maximum(obs, EPS))
weights, mus, sigmas = scope.adaptive_parzen_normal(
log_obs, prior_weight, mu, sigma)
rval = scope.LGMM1(weights, mus, sigmas, q=q, size=size, rng=rng)
return rval
# -- Categorical
@adaptive_parzen_sampler('randint')
def ap_categorical_sampler(obs, prior_weight, upper,
size=(), rng=None, LF=DEFAULT_LF):
weights = scope.linear_forgetting_weights(scope.len(obs), LF=LF)
counts = scope.bincount(obs, minlength=upper, weights=weights)
# -- add in some prior pseudocounts
pseudocounts = counts + prior_weight
return scope.categorical(pseudocounts / scope.sum(pseudocounts),
upper=upper, size=size, rng=rng)
# @adaptive_parzen_sampler('categorical')
# def ap_categorical_sampler(obs, prior_weight, p, upper, size=(), rng=None,
#                            LF=DEFAULT_LF):
#     return scope.categorical(p, upper, size=size, rng=rng)
@scope.define
def tpe_cat_pseudocounts(counts, upper, prior_weight, p, size):
#print counts
if size == 0 or np.prod(size) == 0:
return []
if p.ndim == 2:
assert np.all(p == p[0])
p = p[0]
pseudocounts = counts + upper * (prior_weight * p)
return pseudocounts / np.sum(pseudocounts)
@adaptive_parzen_sampler('categorical')
def ap_categorical_sampler(obs, prior_weight, p, upper=None,
size=(), rng=None, LF=DEFAULT_LF):
weights = scope.linear_forgetting_weights(scope.len(obs), LF=LF)
counts = scope.bincount(obs, minlength=upper, weights=weights)
pseudocounts = scope.tpe_cat_pseudocounts(counts, upper, prior_weight, p, size)
return scope.categorical(pseudocounts, upper=upper, size=size, rng=rng)
#
# Posterior clone performs symbolic inference on the pyll graph of priors.
#
@scope.define_info(o_len=2)
def ap_filter_trials(o_idxs, o_vals, l_idxs, l_vals, gamma,
gamma_cap=DEFAULT_LF):
"""Return the elements of o_vals that correspond to trials whose losses
were above gamma, or below gamma.
"""
o_idxs, o_vals, l_idxs, l_vals = map(np.asarray, [o_idxs, o_vals, l_idxs,
l_vals])
# XXX if this is working, refactor this sort for efficiency
# Splitting is done this way to cope with duplicate loss values.
n_below = min(int(np.ceil(gamma * np.sqrt(len(l_vals)))), gamma_cap)
l_order = np.argsort(l_vals)
keep_idxs = set(l_idxs[l_order[:n_below]])
below = [v for i, v in zip(o_idxs, o_vals) if i in keep_idxs]
if 0:
print 'DEBUG: thresh', l_vals[l_order[:n_below]]
keep_idxs = set(l_idxs[l_order[n_below:]])
above = [v for i, v in zip(o_idxs, o_vals) if i in keep_idxs]
#print 'AA0', below
#print 'AA1', above
return np.asarray(below), np.asarray(above)
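# -- Illustrative sketch (added, not in the original file): the below/above
#    split in plain numpy, ignoring the idx bookkeeping. In ap_filter_trials,
#    n_below = min(ceil(gamma * sqrt(len(l_vals))), gamma_cap).
def _demo_below_above_split():
    l_vals = np.asarray([3.0, 1.0, 2.0, 9.0])  # losses, one per trial
    o_vals = np.asarray([30., 10., 20., 90.])  # one observation per trial
    n_below = 2
    order = np.argsort(l_vals)
    below = o_vals[order[:n_below]]  # -> [10., 20.]  (best losses)
    above = o_vals[order[n_below:]]  # -> [30., 90.]  (the rest)
    return below, above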
def build_posterior(specs, prior_idxs, prior_vals, obs_idxs, obs_vals,
oloss_idxs, oloss_vals, oloss_gamma, prior_weight):
"""
This method clones a posterior inference graph by iterating forward in
topological order, and replacing prior random-variables (prior_vals) with
new posterior distributions that make use of observations (obs_vals).
"""
assert all(isinstance(arg, pyll.Apply)
for arg in [oloss_idxs, oloss_vals, oloss_gamma])
expr = pyll.as_apply([specs, prior_idxs, prior_vals])
nodes = pyll.dfs(expr)
# build the joint posterior distribution as the values in this memo
memo = {}
# map prior RVs to observations
obs_memo = {}
for nid in prior_vals:
# construct the leading args for each call to adaptive_parzen_sampler
# which will permit the "adaptive parzen samplers" to adapt to the
# correct samples.
obs_below, obs_above = scope.ap_filter_trials(
obs_idxs[nid], obs_vals[nid],
oloss_idxs, oloss_vals, oloss_gamma)
obs_memo[prior_vals[nid]] = [obs_below, obs_above]
for node in nodes:
if node not in memo:
new_inputs = [memo[arg] for arg in node.inputs()]
if node in obs_memo:
# -- this case corresponds to an observed Random Var
# node.name is a distribution like "normal", "randint", etc.
obs_below, obs_above = obs_memo[node]
aa = [memo[a] for a in node.pos_args]
fn = adaptive_parzen_samplers[node.name]
b_args = [obs_below, prior_weight] + aa
named_args = [[kw, memo[arg]]
for (kw, arg) in node.named_args]
b_post = fn(*b_args, **dict(named_args))
a_args = [obs_above, prior_weight] + aa
a_post = fn(*a_args, **dict(named_args))
assert a_post.name == b_post.name
fn_lpdf = getattr(scope, a_post.name + '_lpdf')
#print fn_lpdf
a_kwargs = dict([(n, a) for n, a in a_post.named_args
if n not in ('rng', 'size')])
b_kwargs = dict([(n, a) for n, a in b_post.named_args
if n not in ('rng', 'size')])
# calculate the llik of b_post under both distributions
below_llik = fn_lpdf(*([b_post] + b_post.pos_args), **b_kwargs)
above_llik = fn_lpdf(*([b_post] + a_post.pos_args), **a_kwargs)
#improvement = below_llik - above_llik
#new_node = scope.broadcast_best(b_post, improvement)
new_node = scope.broadcast_best(b_post, below_llik, above_llik)
elif hasattr(node, 'obj'):
# -- keep same literals in the graph
new_node = node
else:
# -- this case is for all the other stuff in the graph
new_node = node.clone_from_inputs(new_inputs)
memo[node] = new_node
post_specs = memo[specs]
post_idxs = dict([(nid, memo[idxs])
for nid, idxs in prior_idxs.items()])
post_vals = dict([(nid, memo[vals])
for nid, vals in prior_vals.items()])
assert set(post_idxs.keys()) == set(post_vals.keys())
assert set(post_idxs.keys()) == set(prior_idxs.keys())
return post_specs, post_idxs, post_vals
@scope.define
def idxs_prod(full_idxs, idxs_by_label, llik_by_label):
"""Add all of the log-likelihoods together by id.
Example arguments:
full_idxs = [0, 1, ... N-1]
idxs_by_label = {'node_a': [1, 3], 'node_b': [3]}
    llik_by_label = {'node_a': [0.1, -3.3], 'node_b': [1.0]}
This would return N elements: [0, 0.1, 0, -2.3, 0, 0, ... ]
"""
#print 'FULL IDXS'
#print full_idxs
assert len(set(full_idxs)) == len(full_idxs)
full_idxs = list(full_idxs)
rval = np.zeros(len(full_idxs))
pos_of_tid = dict(zip(full_idxs, range(len(full_idxs))))
assert set(idxs_by_label.keys()) == set(llik_by_label.keys())
for nid in idxs_by_label:
idxs = idxs_by_label[nid]
llik = llik_by_label[nid]
assert np.all(np.asarray(idxs) > 1)
assert len(set(idxs)) == len(idxs)
assert len(idxs) == len(llik)
for ii, ll in zip(idxs, llik):
rval[pos_of_tid[ii]] += ll
#rval[full_idxs.index(ii)] += ll
return rval
@scope.define
def broadcast_best(samples, below_llik, above_llik):
if len(samples):
#print 'AA2', dict(zip(samples, below_llik - above_llik))
score = below_llik - above_llik
if len(samples) != len(score):
raise ValueError()
best = np.argmax(score)
return [samples[best]] * len(samples)
else:
return []
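# -- Illustrative sketch (added, not in the original file): broadcast_best is
#    where the TPE criterion bites. The candidate maximising
#    log l(x) - log g(x) is kept and repeated to match the requested shape.
def _demo_broadcast_best():
    samples = [0.1, 0.5, 0.9]
    below_llik = np.asarray([-1.0, -0.2, -3.0])
    above_llik = np.asarray([-0.5, -2.0, -0.1])
    best = np.argmax(below_llik - above_llik)  # index 1
    return [samples[best]] * len(samples)      # [0.5, 0.5, 0.5]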
_default_prior_weight = 1.0
# -- suggest best of this many draws on every iteration
_default_n_EI_candidates = 24
# -- ceil(gamma * sqrt(n_trials)) is the number of trials used as "good"
_default_gamma = 0.25
_default_n_startup_jobs = 20
_default_linear_forgetting = DEFAULT_LF
def tpe_transform(domain, prior_weight, gamma):
s_prior_weight = pyll.Literal(float(prior_weight))
    # -- these dummy values will be replaced in suggest() and never used
observed = dict(
idxs=pyll.Literal(),
vals=pyll.Literal())
observed_loss = dict(
idxs=pyll.Literal(),
vals=pyll.Literal())
specs, idxs, vals = build_posterior(
# -- vectorized clone of bandit template
domain.vh.v_expr,
# -- this dict and next represent prior dists
domain.vh.idxs_by_label(),
domain.vh.vals_by_label(),
observed['idxs'],
observed['vals'],
observed_loss['idxs'],
observed_loss['vals'],
pyll.Literal(gamma),
s_prior_weight
)
return (s_prior_weight, observed, observed_loss,
specs, idxs, vals)
def suggest(new_ids, domain, trials, seed,
prior_weight=_default_prior_weight,
n_startup_jobs=_default_n_startup_jobs,
n_EI_candidates=_default_n_EI_candidates,
gamma=_default_gamma,
linear_forgetting=_default_linear_forgetting,
):
new_id, = new_ids
t0 = time.time()
(s_prior_weight, observed, observed_loss, specs, opt_idxs, opt_vals) \
= tpe_transform(domain, prior_weight, gamma)
tt = time.time() - t0
logger.info('tpe_transform took %f seconds' % tt)
best_docs = dict()
best_docs_loss = dict()
for doc in trials.trials:
        # get either this doc's own tid or the one that it's from
tid = doc['misc'].get('from_tid', doc['tid'])
loss = domain.loss(doc['result'], doc['spec'])
if loss is None:
# -- associate infinite loss to new/running/failed jobs
loss = float('inf')
else:
loss = float(loss)
best_docs_loss.setdefault(tid, loss)
if loss <= best_docs_loss[tid]:
best_docs_loss[tid] = loss
best_docs[tid] = doc
tid_docs = best_docs.items()
# -- sort docs by order of suggestion
# so that linear_forgetting removes the oldest ones
tid_docs.sort()
losses = [best_docs_loss[k] for k, v in tid_docs]
tids = [k for k, v in tid_docs]
docs = [v for k, v in tid_docs]
if docs:
logger.info('TPE using %i/%i trials with best loss %f' % (
len(docs), len(trials), min(best_docs_loss.values())))
else:
logger.info('TPE using 0 trials')
if len(docs) < n_startup_jobs:
# N.B. THIS SEEDS THE RNG BASED ON THE new_id
return rand.suggest(new_ids, domain, trials, seed)
# Sample and compute log-probability.
if tids:
# -- the +2 co-ordinates with an assertion above
# to ensure that fake ids are used during sampling
fake_id_0 = max(max(tids), new_id) + 2
else:
# -- weird - we're running the TPE algo from scratch
assert n_startup_jobs <= 0
fake_id_0 = new_id + 2
fake_ids = range(fake_id_0, fake_id_0 + n_EI_candidates)
# -- this dictionary will map pyll nodes to the values
# they should take during the evaluation of the pyll program
memo = {
domain.s_new_ids: fake_ids,
domain.s_rng: np.random.RandomState(seed),
}
o_idxs_d, o_vals_d = miscs_to_idxs_vals(
[d['misc'] for d in docs], keys=domain.params.keys())
memo[observed['idxs']] = o_idxs_d
memo[observed['vals']] = o_vals_d
memo[observed_loss['idxs']] = tids
memo[observed_loss['vals']] = losses
idxs, vals = pyll.rec_eval([opt_idxs, opt_vals], memo=memo,
print_node_on_error=False)
# -- retrieve the best of the samples and form the return tuple
# the build_posterior makes all specs the same
rval_specs = [None] # -- specs are deprecated
rval_results = [domain.new_result()]
rval_miscs = [dict(tid=new_id, cmd=domain.cmd, workdir=domain.workdir)]
miscs_update_idxs_vals(rval_miscs, idxs, vals,
idxs_map={fake_ids[0]: new_id},
assert_all_vals_used=False)
rval_docs = trials.new_trial_docs([new_id],
rval_specs, rval_results, rval_miscs)
return rval_docs
|
CVML/hyperopt
|
hyperopt/tpe.py
|
Python
|
bsd-3-clause
| 30,055
|
[
"Gaussian"
] |
2741c446bb87c12fc664b3c01455bf1cd4533012602cdc5743a6415dffa0d53f
|
from __future__ import print_function
import os
import pprint
import re
try:
from urllib import urlretrieve
except ImportError:
pass
try:
from urllib.request import urlretrieve
except ImportError:
pass
import zipfile
import shutil
import datetime
import numpy as np
from ase.units import Bohr
from ase.atom import Atom
from ase.atoms import Atoms
from ase.data import atomic_numbers, chemical_symbols
# databases from http://toc.uni-muenster.de/GMTKN/GMTKN30/GMTKN30main.html
url_root = 'http://www.thch.uni-bonn.de/tc/downloads/GMTKN/GMTKN30/'
# we may store all downloaded files locally
# (a good idea, but need to ask permission from the authors)
#url_root = './GMTKN30/'
databases = [
'MB08-165', # 180
'W4-08', # 111
'G21IP', # 71
'G21EA', # 50
'PA', # 24
'SIE11', # 29
'BHPERI', # 61
'BH76', # 95
'RSE43', # 88
'O3ADD6', # 9
'G2RC', # 47
'AL2X', # 14
'NBPRC', # 21
'ISO34', # 63
'ISOL22', # 44
'DC9', # 19
'DARC', # 22
'ALK6', # 13
'BSR36', # 38
'IDISP', # 13
'WATER27', # 30
'S22', # 57
'ADIM6', # 12
'RG6', # 11
'HEAVY28', # 38
'PCONF', # 11
'ACONF', # 18
'SCONF', # 19
'CYCONF', # 11
]
database_files = {}
for db in databases:
database_files[db] = {
'structures': 'strucs/' + db + 'structures.zip',
'ref': db + 'ref.html',
'module': 'GMTKN30_' + db.replace('-', '_'),
}
for xc in ['PBE', 'PBE0', 'SVWN']:
database_files[db][xc] = 'funcsGMTKN30/' + db + xc + '.html'
def download_file(url, filename, dir='.'):
# do not mirror subdirectory structure of url
outfile = os.path.join(dir, os.path.basename(filename))
urlretrieve(os.path.join(url, filename), outfile)
return outfile
def read_charge_filter(s):
try:
        return re.search(r'\(([-+]\d+)\)', s).group(1)
except AttributeError:
return False
def read_charge(filename, dir='.'):
fh = open(os.path.join(dir, filename), 'rb')
lines = list(filter(read_charge_filter, fh.readlines()))
charge = []
for line in lines:
sline = line.split()
charge.append((sline[0],
                       float(re.search(r'\(([-+]\d+)\)', sline[1]).group(1))))
fh.close()
return charge
def read_charges(dirname, dir='.'):
fullname = os.path.join(dir, dirname)
for root, dirs, files in os.walk(fullname):
for file in files:
if file == 'README': # read charge/number of unpaired electrons file
return read_charge(file, dir=root)
break
else:
return []
def read_number_of_unpaired_electrons_filter(s):
try:
        return re.search(r'\((\d+)\)', s).group(1)
except AttributeError:
return False
def read_number_of_unpaired_electrons(filename, dir='.'):
fh = open(os.path.join(dir, filename), 'rb')
lines = list(filter(read_number_of_unpaired_electrons_filter, fh.readlines()))
number_of_unpaired_electrons = []
for line in lines:
sline = line.split()
        no_unpaired_electrons = float(re.search(r'\((\d+)\)', sline[1]).group(1))
number_of_unpaired_electrons.append((sline[0], no_unpaired_electrons))
fh.close()
return number_of_unpaired_electrons
def read_numbers_of_unpaired_electrons(dirname, dir='.'):
fullname = os.path.join(dir, dirname)
for root, dirs, files in os.walk(fullname):
for file in files:
if file == 'README': # read charge/number of unpaired electrons file
return read_number_of_unpaired_electrons(file, dir=root)
break
else:
return []
def read_geometry_filter(s):
return (not s.startswith('$'))
def read_geometry(filename, dir='.'):
fh = open(os.path.join(dir, filename), 'rb')
lines = list(filter(read_geometry_filter, fh.readlines()))
# return geometry in ASE format
geometry = []
for line in lines:
sline = line.split()
# find chemical symbol (the symbols in the file are lowercase)
symbol = sline[-1]
for s in chemical_symbols:
if symbol == s.lower():
symbol = s
break
geometry.append(Atom(symbol=symbol, position=sline[:-1]))
fh.close()
atoms = Atoms(geometry)
atoms.set_positions(atoms.get_positions()*Bohr) # convert to Angstrom
return atoms
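# -- Illustrative note (added, not part of the original module): the GMTKN30
#    geometry files are in Bohr while ASE positions are in Angstrom, hence
#    the multiplication by ase.units.Bohr (~0.529) above.
def _demo_bohr_to_angstrom():
    atoms = Atoms([Atom(symbol='H', position=(1.0, 0.0, 0.0))])  # 1 Bohr
    atoms.set_positions(atoms.get_positions() * Bohr)
    return atoms.get_positions()[0, 0]  # ~0.529 Angstrom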
def read_structures(dirname, dir='.'):
fullname = os.path.join(dir, dirname)
geometries = []
for root, dirs, files in os.walk(fullname):
for file in files:
if file != 'README': # skip file
geometries.append((file, read_geometry(file, dir=root)))
return geometries
def read_html(filename, dir='.'):
fh = open(os.path.join(dir, filename), 'rb')
table = fh.read()
# extract html table: help from David Landis
table = table.split('<table')
table = table[1]
table = table.split('</table')
table = table[0]
# keep field separator tags
table = table.replace('<tr', ' TTRR <')
table = table.replace('<td', ' TTDD <')
# remove the html tags
#table = re.sub('<[^>]+>', '', table) # wrong
table = re.sub('<.*?>', '', table)
# remove end-of-line
table = re.sub('\n', '', table)
# split on columns
table = table.split('TTRR')
csv = []
    separator = ':' # BHPERI contains chemical names with commas
ncompounds = 0
for item in table:
if item.find('TTDD')!=-1:
item = item.strip().replace('TTDD', separator)
# remove the first coma
item = item[1:]
litem = []
for f in item.split(separator):
fs = f.strip()
try:
v = eval(fs)
if fs.isdigit() and str(v) != fs: # e.g. undesirable eval('001') = 1
v = fs
# string: NameError, .*[+-*], etc: SyntaxError
except (NameError, SyntaxError):
v = fs
litem.append(v)
# the number of compounds
# (exclude reference value and reaction number and divide by 2)
if ncompounds:
                assert ncompounds == (len(litem)-2)//2, 'Error: number of compounds incorrect for reaction: ' + str(litem[0]) + ' in file: ' + filename
            # integer division: plain '/' would give a float under Python 3
            ncompounds = (len(litem)-2)//2
# set names of unused compounds to empty string
for i in range(ncompounds):
if litem[1+i] == 0: litem[1+i] = ''
# move the reaction identifier to the end of list
litem.append(litem.pop(0))
csv.append(litem)
fh.close()
# return the number of compounds per reaction, and the table
return ncompounds, csv
def table2reference(ncompounds, table):
# convert from format given by read_html
reactions = []
reference = {}
for r in table:
reaction_id = r[-1]
reference[reaction_id] = r[-2]
stoich = []
for c in range(ncompounds):
if r[c] != '': # only defined compounds
# compound names can have spaces around
stoich.append((str(r[c]).strip(), r[c+ncompounds]))
stoich.append(('reaction_id', reaction_id))
reactions.append(stoich)
return reference, reactions
def table2results(nsets, table, mode='default'):
assert mode in ['default', 'D3']
# convert from format given by read_html
if mode == 'default':
index = 0
else:
index = nsets
reference = {}
for r in table[:-3]: # ignore 3 last rows of statistics
reaction_id = r[-1]
if r[index] != '': # only defined compounds
reference[reaction_id] = r[index]
return reference
def unzip_file(filename, dir='.'):
# unzip contents of filename into dir
fh = open(filename, 'rb')
z = zipfile.ZipFile(fh)
if not os.path.isdir(dir):
os.mkdir(dir)
for entry in z.namelist():
# skip spurious zip inside zip files (in HEAVY28)
if entry.find('.zip') == -1:
outfile = open(entry, 'wb')
outfile.write(z.read(entry))
outfile.close()
fh.close()
def format_data(database, geometries, no_unpaired_electrons=[], charges=[]):
"Return data in the custom format. "
import numpy as np
data = {}
for geometry in geometries:
system = geometry[0]
atoms = geometry[1]
# find the heaviest atom in the system
heaviest = max([a.number for a in atoms])
heaviest_index = [a.number for a in atoms].index(heaviest)
# find number of unpaired electrons
if system in [s[0] for s in no_unpaired_electrons]:
magmom = 0
for s, m in no_unpaired_electrons:
if system == s:
magmom = m
break
magmoms = [0.0 for a in atoms]
# assume the magnetic moment on the heaviest atom in the system
# this is incorrect, but is there a better way to set the magnetic moment?
magmoms[heaviest_index] = float(magmom)
usemagmoms = np.array(magmoms)
else:
usemagmoms = None
# find charge, put it on the heaviest atom
if system in [s[0] for s in charges]:
charge = 0
for s, c in charges:
if system == s:
charge = c
break
cs = [0.0 for a in atoms]
cs[heaviest_index] = float(charge)
usecharges = np.array(cs)
else:
usecharges = None
# populate data
data[system] = {
'database': database,
'name': atoms.get_chemical_formula(),
'symbols': ''.join(atoms.get_chemical_symbols()),
'magmoms': usemagmoms, # None or list
'charges': usecharges, # None or list
'positions': atoms.get_positions(),
}
return data
def main():
import os
if not os.path.isdir('GMTKN30/strucs'):
os.makedirs('GMTKN30/strucs')
#for database in ['G2RC', 'WATER27']:
for database in database_files.keys(): # all databases
fh = open(database_files[database]['module'].lower() + '.py', 'w')
fh.write('# Computer generated code! Hands off!\n')
fh.write('# Generated: ' + str(datetime.date.today()) + '\n')
fh.write('from numpy import array\n')
fh.write('data = ')
data = {} # specification of molecules
info = {} # reference/calculation info
# download structures
file = database_files[database]['structures']
f = os.path.abspath(download_file(url_root, file, dir='GMTKN30/strucs'))
fdir = os.path.splitext(os.path.basename(f))[0]
unzip_file(f, dir=fdir)
structures = read_structures(fdir)
no_unpaired_electrons = read_numbers_of_unpaired_electrons(fdir)
charges = read_charges(fdir)
# remove temporary directory
if os.path.isdir(fdir): shutil.rmtree(fdir)
data = format_data(database, structures, no_unpaired_electrons, charges)
pprint.pprint(data, stream=fh)
fh.write('info = ')
# download reference data
info = {}
file = database_files[database]['ref']
f = download_file(url_root, file, dir='GMTKN30')
ncompounds, table = read_html(f)
# transform table into reactions format
reference, reactions = table2reference(ncompounds, table)
info['reactions'] = reactions
info['reaction energy'] = {}
info['reaction energy']['reference'] = reference
# download XC results
for xc in ['PBE', 'PBE0', 'SVWN']:
file = database_files[database][xc]
f = download_file(url_root, file, dir='GMTKN30')
nsets, table = read_html(f)
# transform table into results format
reference = table2results(nsets, table)
info['reaction energy'][xc] = reference
pprint.pprint(info, stream=fh)
fh.close()
if __name__ == '__main__':
main()
|
suttond/MODOI
|
ase/data/gmtkn30.py
|
Python
|
lgpl-3.0
| 12,170
|
[
"ASE"
] |
a10ab5f785f29ef4416a42d1e6a11e54a7df5d166944378bedd11bd95b6cf14b
|
## Copyright (C) 2010- Alexey Petrov
## Copyright (C) 2009-2010 Pebble Bed Modular Reactor (Pty) Limited (PBMR)
##
## This program is free software: you can redistribute it and/or modify
## it under the terms of the GNU General Public License as published by
## the Free Software Foundation, either version 3 of the License, or
## (at your option) any later version.
##
## This program is distributed in the hope that it will be useful,
## but WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
## GNU General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with this program. If not, see <http://www.gnu.org/licenses/>.
##
## See http://sourceforge.net/projects/pythonflu
##
## Author : Ivor CLIFFORD
##
#--------------------------------------------------------------------------------------
from foam2vtk import *
import vtk
from math import sqrt
class field_plotter:
def __init__(self, obj):
self.vtkObj = volScalarFieldSource( obj )
self.getVTKWindows()
self.internalMesh = vtk.vtkObject( self.vtkObj.internalMesh().__hex__() )
self.getVTKActor( self.internalMesh )
self.mapper.SetScalarModeToUseCellData()
self.istyle = vtk.vtkInteractorStyleSwitch()
self.istyle.SetCurrentStyleToTrackballCamera()
self.iren.SetInteractorStyle(self.istyle)
self.update()
self.iren.Start()
def getVTKWindows(self):
self.ren = vtk.vtkRenderer()
self.renWin = vtk.vtkRenderWindow()
self.iren = vtk.vtkRenderWindowInteractor()
self.renWin.AddRenderer(self.ren)
self.iren.SetRenderWindow(self.renWin)
self.ren.SetBackground(0.5, 0.6, 1)
self.renWin.SetSize(640, 480)
self.ren.GetActiveCamera().ParallelProjectionOn()
self.iren.Initialize()
def getVTKActor(self, obj):
self.triFilter = vtk.vtkDataSetTriangleFilter()
self.mapper = vtk.vtkDataSetMapper()
self.actor = vtk.vtkActor()
self.triFilter.SetInput( obj )
self.mapper.SetInput(self.triFilter.GetOutput())
self.actor.SetMapper(self.mapper)
self.ren.AddActor(self.actor)
def xIn(self):
self.centerCamera((-1,0,0), False)
self.ren.GetActiveCamera().SetViewUp((0,1,0))
self.update()
def xOut(self):
self.centerCamera((1,0,0), False)
self.ren.GetActiveCamera().SetViewUp((0,1,0))
self.update()
def yIn(self):
self.centerCamera((0,-1,0), False)
self.ren.GetActiveCamera().SetViewUp((0,0,1))
self.update()
def yOut(self):
self.centerCamera((0,1,0), False)
self.ren.GetActiveCamera().SetViewUp((0,0,-1))
self.update()
def zIn(self):
self.centerCamera((0,0,-1), False)
self.ren.GetActiveCamera().SetViewUp((0,1,0))
self.update()
def zOut(self):
self.centerCamera((0,0,1), False)
self.ren.GetActiveCamera().SetViewUp((0,1,0))
self.update()
    def centerCamera(self, dirn=None, redraw=True):
        # Python does not overload methods, so the original zero-argument
        # variant is folded in here: default to the current view direction.
        if dirn is None:
            dirn = self.ren.GetActiveCamera().GetDirectionOfProjection()
bounds = self.actor.GetBounds()
center = self.actor.GetCenter()
dx = abs(bounds[3]-bounds[0])
dy = abs(bounds[4]-bounds[1])
dz = abs(bounds[5]-bounds[2])
offset = 2*__builtins__.max(dx,dy,dz)
camera = self.ren.GetActiveCamera()
camera.SetPosition((
center[0]+offset*dirn[0],
center[1]+offset*dirn[1],
center[2]+offset*dirn[2]
))
camera.SetFocalPoint(center)
if redraw:
self.update()
def update(self):
self.ren.ResetCamera()
self.renWin.Render()
self.iren.Initialize()
def render(self):
self.renWin.Render()
def interact(self):
self.iren.Start()
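# -- Hypothetical usage sketch (added, not part of the original module):
#    `some_volScalarField` stands in for a pythonFlu volScalarField. Note the
#    constructor already builds the VTK pipeline, renders, and enters the
#    interactor loop via self.iren.Start(); the calls below only run once
#    that first loop exits.
def _demo_field_plotter(some_volScalarField):
    plotter = field_plotter(some_volScalarField)  # opens the render window
    plotter.zOut()      # reposition the camera to look along -z
    plotter.interact()  # re-enter the interactor loop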
|
asimurzin/hybridFlu
|
hybridFlu/vtkPlotter.py
|
Python
|
gpl-3.0
| 4,107
|
[
"VTK"
] |
7e6a453ceaba2501d38190b73606d6f10402fa3a6caae830d176250554215bce
|
# Copyright 1999 by Jeffrey Chang. All rights reserved.
# This code is part of the Biopython distribution and governed by its
# license. Please see the LICENSE file that should have been included
# as part of this package.
# Patched by Brad Chapman.
# Chris Wroe added modifications for work in myGrid
"""
This module provides code to work with the WWW version of BLAST
provided by the NCBI.
http://blast.ncbi.nlm.nih.gov/
Functions:
qblast Do a BLAST search using the QBLAST API.
"""
import sys
try:
from cStringIO import StringIO
except ImportError:
from StringIO import StringIO
from Bio._py3k import _as_string, _as_bytes
def qblast(program, database, sequence,
auto_format=None,composition_based_statistics=None,
db_genetic_code=None,endpoints=None,entrez_query='(none)',
expect=10.0,filter=None,gapcosts=None,genetic_code=None,
hitlist_size=50,i_thresh=None,layout=None,lcase_mask=None,
matrix_name=None,nucl_penalty=None,nucl_reward=None,
other_advanced=None,perc_ident=None,phi_pattern=None,
query_file=None,query_believe_defline=None,query_from=None,
query_to=None,searchsp_eff=None,service=None,threshold=None,
ungapped_alignment=None,word_size=None,
alignments=500,alignment_view=None,descriptions=500,
entrez_links_new_window=None,expect_low=None,expect_high=None,
format_entrez_query=None,format_object=None,format_type='XML',
ncbi_gi=None,results_file=None,show_overview=None, megablast=None,
):
"""Do a BLAST search using the QBLAST server at NCBI.
Supports all parameters of the qblast API for Put and Get.
Some useful parameters:
program blastn, blastp, blastx, tblastn, or tblastx (lower case)
database Which database to search against (e.g. "nr").
sequence The sequence to search.
ncbi_gi TRUE/FALSE whether to give 'gi' identifier.
descriptions Number of descriptions to show. Def 500.
alignments Number of alignments to show. Def 500.
expect An expect value cutoff. Def 10.0.
matrix_name Specify an alt. matrix (PAM30, PAM70, BLOSUM80, BLOSUM45).
filter "none" turns off filtering. Default no filtering
format_type "HTML", "Text", "ASN.1", or "XML". Def. "XML".
entrez_query Entrez query to limit Blast search
hitlist_size Number of hits to return. Default 50
    megablast TRUE/FALSE whether to use the megaBLAST algorithm (blastn only)
service plain, psi, phi, rpsblast, megablast (lower case)
This function does no checking of the validity of the parameters
and passes the values to the server as is. More help is available at:
http://www.ncbi.nlm.nih.gov/BLAST/blast_overview.html
"""
import urllib, urllib2
import time
assert program in ['blastn', 'blastp', 'blastx', 'tblastn', 'tblastx']
# Format the "Put" command, which sends search requests to qblast.
# Parameters taken from http://www.ncbi.nlm.nih.gov/BLAST/Doc/node5.html on 9 July 2007
# Additional parameters are taken from http://www.ncbi.nlm.nih.gov/BLAST/Doc/node9.html on 8 Oct 2010
# To perform a PSI-BLAST or PHI-BLAST search the service ("Put" and "Get" commands) must be specified
# (e.g. psi_blast = NCBIWWW.qblast("blastp", "refseq_protein", input_sequence, service="psi"))
parameters = [
('AUTO_FORMAT',auto_format),
('COMPOSITION_BASED_STATISTICS',composition_based_statistics),
('DATABASE',database),
('DB_GENETIC_CODE',db_genetic_code),
('ENDPOINTS',endpoints),
('ENTREZ_QUERY',entrez_query),
('EXPECT',expect),
('FILTER',filter),
('GAPCOSTS',gapcosts),
('GENETIC_CODE',genetic_code),
('HITLIST_SIZE',hitlist_size),
('I_THRESH',i_thresh),
('LAYOUT',layout),
('LCASE_MASK',lcase_mask),
('MEGABLAST',megablast),
('MATRIX_NAME',matrix_name),
('NUCL_PENALTY',nucl_penalty),
('NUCL_REWARD',nucl_reward),
('OTHER_ADVANCED',other_advanced),
('PERC_IDENT',perc_ident),
('PHI_PATTERN',phi_pattern),
('PROGRAM',program),
        #('PSSM',pssm), - Is it possible to use PSI-BLAST via this API?
('QUERY',sequence),
('QUERY_FILE',query_file),
('QUERY_BELIEVE_DEFLINE',query_believe_defline),
('QUERY_FROM',query_from),
('QUERY_TO',query_to),
#('RESULTS_FILE',...), - Can we use this parameter?
('SEARCHSP_EFF',searchsp_eff),
('SERVICE',service),
('THRESHOLD',threshold),
('UNGAPPED_ALIGNMENT',ungapped_alignment),
('WORD_SIZE',word_size),
('CMD', 'Put'),
]
query = [x for x in parameters if x[1] is not None]
message = _as_bytes(urllib.urlencode(query))
# Send off the initial query to qblast.
# Note the NCBI do not currently impose a rate limit here, other
# than the request not to make say 50 queries at once using multiple
# threads.
request = urllib2.Request("http://blast.ncbi.nlm.nih.gov/Blast.cgi",
message,
{"User-Agent":"BiopythonClient"})
handle = urllib2.urlopen(request)
# Format the "Get" command, which gets the formatted results from qblast
# Parameters taken from http://www.ncbi.nlm.nih.gov/BLAST/Doc/node6.html on 9 July 2007
rid, rtoe = _parse_qblast_ref_page(handle)
parameters = [
('ALIGNMENTS',alignments),
('ALIGNMENT_VIEW',alignment_view),
('DESCRIPTIONS',descriptions),
('ENTREZ_LINKS_NEW_WINDOW',entrez_links_new_window),
('EXPECT_LOW',expect_low),
('EXPECT_HIGH',expect_high),
('FORMAT_ENTREZ_QUERY',format_entrez_query),
('FORMAT_OBJECT',format_object),
('FORMAT_TYPE',format_type),
('NCBI_GI',ncbi_gi),
('RID',rid),
('RESULTS_FILE',results_file),
('SERVICE',service),
('SHOW_OVERVIEW',show_overview),
('CMD', 'Get'),
]
query = [x for x in parameters if x[1] is not None]
message = _as_bytes(urllib.urlencode(query))
# Poll NCBI until the results are ready. Use a 3 second wait
delay = 3.0
previous = time.time()
while True:
current = time.time()
wait = previous + delay - current
if wait > 0:
time.sleep(wait)
previous = current + wait
else:
previous = current
request = urllib2.Request("http://blast.ncbi.nlm.nih.gov/Blast.cgi",
message,
{"User-Agent":"BiopythonClient"})
handle = urllib2.urlopen(request)
results = _as_string(handle.read())
# Can see an "\n\n" page while results are in progress,
# if so just wait a bit longer...
if results=="\n\n":
continue
# XML results don't have the Status tag when finished
if results.find("Status=") < 0:
break
i = results.index("Status=")
j = results.index("\n", i)
status = results[i+len("Status="):j].strip()
if status.upper() == "READY":
break
return StringIO(results)
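# -- Hypothetical usage sketch (added, not part of the original module). The
#    call performs a live NCBI request, so it is only wrapped here, not run;
#    the query sequence is an arbitrary example.
def _demo_qblast():
    handle = qblast("blastn", "nt", "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGA",
                    hitlist_size=5)
    return handle.read()  # XML text, since format_type defaults to 'XML'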
def _parse_qblast_ref_page(handle):
"""Extract a tuple of RID, RTOE from the 'please wait' page (PRIVATE).
    The NCBI FAQ pages use TOE for 'Time of Execution', so RTOE is probably
'Request Time of Execution' and RID would be 'Request Identifier'.
"""
s = _as_string(handle.read())
i = s.find("RID =")
if i == -1:
rid = None
else:
j = s.find("\n", i)
rid = s[i+len("RID ="):j].strip()
i = s.find("RTOE =")
if i == -1:
rtoe = None
else:
j = s.find("\n", i)
rtoe = s[i+len("RTOE ="):j].strip()
if not rid and not rtoe:
#Can we reliably extract the error message from the HTML page?
#e.g. "Message ID#24 Error: Failed to read the Blast query:
# Nucleotide FASTA provided for protein sequence"
#or "Message ID#32 Error: Query contains no data: Query
# contains no sequence data"
#
#This used to occur inside a <div class="error msInf"> entry:
i = s.find('<div class="error msInf">')
if i != -1:
msg = s[i+len('<div class="error msInf">'):].strip()
msg = msg.split("</div>",1)[0].split("\n",1)[0].strip()
if msg:
raise ValueError("Error message from NCBI: %s" % msg)
#In spring 2010 the markup was like this:
i = s.find('<p class="error">')
if i != -1:
msg = s[i+len('<p class="error">'):].strip()
msg = msg.split("</p>",1)[0].split("\n",1)[0].strip()
if msg:
raise ValueError("Error message from NCBI: %s" % msg)
#Generic search based on the way the error messages start:
i = s.find('Message ID#')
if i != -1:
#Break the message at the first HTML tag
msg = s[i:].split("<",1)[0].split("\n",1)[0].strip()
raise ValueError("Error message from NCBI: %s" % msg)
#We didn't recognise the error layout :(
#print s
raise ValueError("No RID and no RTOE found in the 'please wait' page, "
"there was probably an error in your request but we "
"could not extract a helpful error message.")
elif not rid:
#Can this happen?
raise ValueError("No RID found in the 'please wait' page."
" (although RTOE = %s)" % repr(rtoe))
elif not rtoe:
#Can this happen?
raise ValueError("No RTOE found in the 'please wait' page."
" (although RID = %s)" % repr(rid))
try:
return rid, int(rtoe)
except ValueError:
raise ValueError("A non-integer RTOE found in " \
+"the 'please wait' page, %s" % repr(rtoe))
|
bryback/quickseq
|
genescript/Bio/Blast/NCBIWWW.py
|
Python
|
mit
| 10,089
|
[
"BLAST",
"Biopython"
] |
04f52da34b717d2de21fc187555d89f548a552f82a0dc5e6e96ec21bac48559a
|
"""Most things relating to article definitions reside here"""
import os
import re
import yawt.default_templates
from yawt.article import make_article
from yawt.utils import call_plugins, call_plugins_arg, save_file, \
joinfile, ensure_path, base_and_ext, ReprMixin
class YawtSiteManager(object):
"""The default article store. Stores articles on disk. No plugins."""
def __init__(self, **kwargs):
self.root_dir = kwargs.pop('root_dir')
self.content_folder = kwargs.get('content_folder', 'content')
self.draft_folder = kwargs.get('draft_folder', 'drafts')
self.template_folder = kwargs.get('template_folder', 'templates')
self.file_extensions = kwargs.get('file_extensions')
self.meta_types = kwargs.get('meta_types')
def initialize(self):
"""Set up an empty blog folder"""
if os.path.exists(self.root_dir):
raise SiteExistsError(self.root_dir)
ensure_path(self._content_root())
ensure_path(self._draft_root())
ensure_path(self._template_root())
config_content = '# put configuration here'
save_file(os.path.join(self.root_dir, 'config.py'), config_content)
template_contents = yawt.default_templates.default_article_template
self._save_template('article', 'html', template_contents)
template_404_contents = yawt.default_templates.default_404_template
self._save_template('404', 'html', template_404_contents)
files = ['config.py', 'article.html', '404.html']
return call_plugins_arg('on_new_site', files)
def fetch_article_by_repofile(self, repofile):
"""Fetch single article info by repofile (path starting from root of
repository). Returns None if no article exists with that name.
"""
filename = os.path.join(self.root_dir, repofile)
fullname = self._file2name(filename)
if not self.exists(fullname):
raise ArticleNotFoundError(fullname)
article = make_article(fullname, filename, self.meta_types)
return call_plugins_arg('on_article_fetch', article)
def fetch_articles_by_repofiles(self, repofiles):
"""Fetches list of articles, calling plugins"""
return [article for article in
(self.fetch_article_by_repofile(rfile) for rfile in repofiles)
if article]
def fetch_article_by_info(self, article_info):
"""Fetches an article, calling all the plugins"""
article = self._fetch_by_fullname(article_info.fullname)
article.info = article_info
return call_plugins_arg('on_article_fetch', article)
def fetch_article(self, fullname):
"""Fetches an article, calling all the plugins"""
article = self._fetch_by_fullname(fullname)
return call_plugins_arg('on_article_fetch', article)
def exists(self, fullname):
"""Return True if article exists"""
return self._fullname2file(fullname) is not None
def category_exists(self, fullname):
"""Return True if fullname refers to real, existing,
category on disk"""
return os.path.isdir(os.path.join(self._content_root(), fullname))
def is_article(self, repofile):
"""Return True if repofile refers to an article file"""
prefix = self.content_folder
if not prefix.endswith('/'):
prefix += '/'
return repofile.startswith(prefix)
def walk(self):
"""Perform a walk (i.e. visit each article in the store) and run the
plugins to process the articles.
"""
call_plugins('on_pre_walk')
for fullname in self._walk():
article = self.fetch_article(fullname)
call_plugins('on_visit_article', article)
call_plugins('on_post_walk')
def _fetch_by_fullname(self, fullname):
filename = self._fullname2file(fullname)
if filename is None:
raise ArticleNotFoundError(fullname)
return make_article(fullname, filename, self.meta_types)
def _walk(self, category=""):
"""Yields fullnames"""
start_path = os.path.join(self._content_root(), category)
for directory, basedirs, basefiles in os.walk(start_path):
for filename in self._articles_in_directory(directory, basefiles):
yield self._file2name(filename)
def _articles_in_directory(self, directory, basefiles):
return [os.path.abspath(os.path.join(directory, basefile))
for basefile in basefiles if self._is_article_basefile(basefile)]
def _is_article_basefile(self, basefile):
base, extension = base_and_ext(basefile)
return extension in self.file_extensions and base != 'index'
def _fullname_ext2file(self, fullname, ext):
return joinfile(self._content_root(), fullname, ext)
def _template_ext2file(self, templatename, ext):
return joinfile(self._template_root(), templatename, ext)
def _save_template(self, name, flavour, contents):
save_file(self._template_ext2file(name, flavour), contents)
def _fullname2file(self, fullname):
"""Return None if name does not exist."""
for ext in self.file_extensions:
filename = self._fullname_ext2file(fullname, ext)
if os.path.isfile(filename):
return filename
return None
def _file2name(self, filename):
"""Take a full absolute filename (including repository root folder) and
extract the fullname of the article
"""
rel_filename = re.sub('^{0}/'.format(self._content_root()),
'', filename)
fullname = os.path.splitext(rel_filename)[0]
return fullname
def _content_root(self):
return os.path.join(self.root_dir, self.content_folder)
def _draft_root(self):
return os.path.join(self.root_dir, self.draft_folder)
def _template_root(self):
return os.path.join(self.root_dir, self.template_folder)
class SiteExistsError(Exception, ReprMixin):
"""Raised when we try to initialize a site over an existsing site"""
def __init__(self, folder):
super(SiteExistsError, self).__init__()
self.folder = folder
class ArticleNotFoundError(Exception, ReprMixin):
"""Raised when we try to fetch an article that does not exist"""
def __init__(self, fullname):
super(ArticleNotFoundError, self).__init__()
self.fullname = fullname
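# Illustrative sketch (not part of yawt): the fullname <-> filename mapping
# implemented by _file2name/_fullname_ext2file above, in isolation; the
# paths and the 'md' extension here are hypothetical.
#
# import os, re
# content_root = '/blog/content'
# filename = '/blog/content/cooking/soup.md'
# rel = re.sub('^{0}/'.format(content_root), '', filename) # 'cooking/soup.md'
# fullname = os.path.splitext(rel)[0] # 'cooking/soup'
# # and back again: joinfile(content_root, fullname, 'md') -> filename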
|
drivet/yawt
|
yawt/site_manager.py
|
Python
|
mit
| 6,502
|
[
"VisIt"
] |
23e71d0d57e2adf52911bf213a280857f106ef108bf36ca885e4466bc8309b32
|
import argparse
from itertools import count
import numpy as np
import h5py
from traits.api import HasTraits, Range, Instance, Bool, Int, on_trait_change
from traitsui.api import View, Item, HGroup, RangeEditor
from tvtk.api import tvtk
from tvtk.pyface.scene_editor import SceneEditor
from tvtk.common import configure_input, configure_input_data
from mayavi.tools.mlab_scene_model import MlabSceneModel
from mayavi.core.ui.mayavi_scene import MayaviScene
from pyface.timer.api import Timer
from util import veclen
from inout import load_splocs
class Visualization(HasTraits):
component = Int(0)
_max_component_index = Int()
activation = Range(-1., 1.)
oscillate = Bool(True)
allow_negative = Bool(False)
pd = Instance(tvtk.PolyData)
normals = Instance(tvtk.PolyDataNormals)
actor = Instance(tvtk.Actor)
scene = Instance(MlabSceneModel, (), kw=dict(background=(1,1,1)))
timer = Instance(Timer)
def __init__(self, Xmean, tris, components):
HasTraits.__init__(self)
self._components = components
self._max_component_index = len(components)
self._Xmean = Xmean
self.pd = tvtk.PolyData(points=Xmean, polys=tris)
self.normals = tvtk.PolyDataNormals(splitting=False)
configure_input_data(self.normals, self.pd)
mapper = tvtk.PolyDataMapper(immediate_mode_rendering=True)
self.actor = tvtk.Actor(mapper=mapper)
configure_input(self.actor.mapper, self.normals)
self.actor.mapper.lookup_table = tvtk.LookupTable(
hue_range = (0.45, 0.6),
saturation_range = (0., 0.8),
value_range = (.6, 1.),
)
self.scene.add_actor(self.actor)
self.timer = Timer(40, self.animate().next)
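# NOTE: `self.animate().next` hands the Timer the bound next() method of
# the generator below, so each 40 ms tick advances the animation by one
# frame; this is Python 2 spelling (Python 3 would use
# `self.animate().__next__`).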
def animate(self):
for i in count():
if self.oscillate:
frame = i % 30
alpha = np.sin(frame/30. * np.pi*2)
if not self.allow_negative:
alpha = np.abs(alpha)
self.activation = alpha
yield
@on_trait_change('activation, component')
def update_plot(self):
c = self._components[self.component]
self.pd.points = self._Xmean + self.activation * c
magnitude = veclen(c)
self.pd.point_data.scalars = magnitude
self.actor.mapper.scalar_range = (0, magnitude.max())
self.scene.render()
view = View(
Item('scene', editor=SceneEditor(scene_class=MayaviScene),
height=600, width=800, show_label=False),
HGroup(
Item('component', editor=RangeEditor(
is_float=False, low=0, high_name='_max_component_index', mode='spinner')),
'activation',
'oscillate',
'allow_negative',
),
resizable=True, title="View SPLOC's",
)
def main(component_hdf5_file):
Xmean, tris, components, names = load_splocs(component_hdf5_file)
visualization = Visualization(Xmean, tris, components)
visualization.configure_traits()
if __name__ == '__main__':
parser = argparse.ArgumentParser(
description='Viewer for sparse localized deformation components')
parser.add_argument('input_sploc_file')
args = parser.parse_args()
main(args.input_sploc_file)
|
tneumann/splocs
|
view_splocs.py
|
Python
|
mit
| 3,302
|
[
"Mayavi"
] |
aa40883ea90a18e59b1ef4618492dd7208f9770f0f5668ff12f8f6fb31b22ef3
|
#!/usr/bin/env python
""" Archive a transformation
"""
from __future__ import print_function
import sys
from DIRAC.Core.Base.Script import parseCommandLine
parseCommandLine()
if len( sys.argv ) < 2:
print('Usage: dirac-transformation-archive transID [transID] [transID]')
sys.exit()
else:
transIDs = [int( arg ) for arg in sys.argv[1:]]
from DIRAC.TransformationSystem.Agent.TransformationCleaningAgent import TransformationCleaningAgent
from DIRAC.TransformationSystem.Client.TransformationClient import TransformationClient
agent = TransformationCleaningAgent( 'Transformation/TransformationCleaningAgent',
'Transformation/TransformationCleaningAgent',
'dirac-transformation-archive' )
agent.initialize()
client = TransformationClient()
for transID in transIDs:
agent.archiveTransformation( transID )
|
fstagni/DIRAC
|
TransformationSystem/scripts/dirac-transformation-archive.py
|
Python
|
gpl-3.0
| 905
|
[
"DIRAC"
] |
5344dfcff9c446d9f8da5c639448553ac7fd923bcb77f58c4deef5163c46fa8e
|
#!/usr/bin/python
###########################################################################################
# Filename:
# Device.py
###########################################################################################
# Project Authors:
# Juhapekka Piiroinen
# Brian Wu
#
# Changes:
# June 14, 2010 by Juhapekka Piiroinen - changes committed to svn
# - added comments for the device commands according to the manual from Pololu
# - added latest draft code for rotating base servo (Parallax Continuous Rotating Servo)
# - note! you should be able to clear error flags with .get_errors function according to the manual
# - renamed CameraDriver to LegacyCameraDriver as Brian Wu has done better one
# - integrated batch of changes provided by Brian Wu
#
# June 11, 2010 by Brian Wu - Changes committed thru email
# - Decoupling the implementation from the program
#
# April 19, 2010 by Juhapekka Piiroinen
# - Initial Release
#
# Email:
# juhapekka.piiroinen@gmail.com
#
# License:
# GNU/GPLv3
#
# Description:
# A python-wrapper for Pololu Micro Maestro 6-Channel USB Servo Controller
#
############################################################################################
# /!\ Notes /!\
# You will have to enable _USB Dual Port_ mode from the _Pololu Maestro Control Center_.
#
############################################################################################
# Device Documentation is available @ http://www.pololu.com/docs/pdf/0J40/maestro.pdf
############################################################################################
# (C) 2010 Juhapekka Piiroinen
# Brian Wu
############################################################################################
import serial
import time
def log(*msgline):
for msg in msgline:
print msg,
print
class Device(object):
def __init__(self,con_port="COM6",ser_port="COM7",timeout=1): #/dev/ttyACM0 and /dev/ttyACM1 for Linux
############################
# let's introduce and init the main variables
self.con = None
self.ser = None
self.isInitialized = False
############################
# let's connect the Command Port
try:
self.con = serial.Serial(con_port,timeout=timeout,baudrate=9600)
self.con.close()
self.con.open()
self.con.baudrate = 9600
log("Link to Command Port -", con_port, "- successful")
except serial.serialutil.SerialException, e:
print e
log("Link to Command Port -", con_port, "- failed")
if self.con:
#####################
#If your Maestro's serial mode is "UART, detect baud rate", you must first send it the baud rate indication byte 0xAA on
#the RX line before sending any commands. The 0xAA baud rate indication byte can be the first byte of a Pololu protocol
#command.
#http://www.pololu.com/docs/pdf/0J40/maestro.pdf - page 35
# self.con.baudrate = 9600
# self.con.write(chr(0xAA))
# self.con.flush()
# log("Baud rate indication byte 0xAA sent!")
pass
###################################
# let's connect the TTL Port
try:
self.ser = serial.Serial(ser_port,timeout=timeout,baudrate=9600)
self.ser.close()
self.ser.open()
self.ser.baudrate = 9600
log("Link to TTL Port -", ser_port, "- successful")
except serial.serialutil.SerialException, e:
print e
log("Link to TTL Port -", ser_port, "- failed!")
self.isInitialized = (self.con!=None and self.ser!=None)
if (self.isInitialized):
err_flags = self.get_errors()
log("Device error flags read (",err_flags,") and cleared")
log("Device initialized:",self.isInitialized)
###########################################################################################################################
## common write function for handling all write related tasks
def write(self,*data):
if not self.isInitialized: log("Not initialized"); return
if not self.ser.writable():
log("Device not writable")
return
for d in data:
self.ser.write(chr(d))
self.ser.flush()
###########################################################################################################################
## Go Home
# Compact protocol: 0xA2
# --
# This command sends all servos and outputs to their home positions, just as if an error had occurred. For servos and
# outputs set to "Ignore", the position will be unchanged.
# --
# Source: http://www.pololu.com/docs/pdf/0J40/maestro.pdf
def go_home(self):
if not self.isInitialized: log("Not initialized"); return
self.write(0xA2)
###########################################################################################################################
## Set Target
# Compact protocol: 0x84, channel number, target low bits, target high bits
# --
# The lower 7 bits of the third data byte represent bits 0-6 of the target (the lower 7 bits), while the lower 7 bits of the
# fourth data byte represent bits 7-13 of the target. The target is a non-negative integer.
# --
# Source: http://www.pololu.com/docs/pdf/0J40/maestro.pdf
def set_target(self,servo,value):
if not self.isInitialized: log("Not initialized"); return
highbits,lowbits = divmod(value,32)
self.write(0x84,servo,lowbits << 2,highbits)
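# Worked example (illustrative, not from the original file): for
# value=1500, divmod(1500, 32) == (46, 28) and 28 << 2 == 112, so the
# bytes sent are 0x84, channel, 0x70, 0x2E. The Maestro decodes this as
# 112 + (46 << 7) == 6000 quarter-microseconds, i.e. 1500 us -- the
# packing transmits 4*value, matching the protocol's 0.25 us units.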
###########################################################################################################################
## Set Speed
# Compact protocol: 0x87, channel number, speed low bits, speed high bits
# --
# This command limits the speed at which a servo channel's output value changes. The speed limit is given in units of (0.25 us)/(10 ms)
# --
# For example, the command 0x87, 0x05, 0x0C, 0x01 sets
# the speed of servo channel 5 to a value of 140, which corresponds to a speed of 3.5 us/ms. What this means is that if
# you send a Set Target command to adjust the target from, say, 1000 us to 1350 us, it will take 100 ms to make that
# adjustment. A speed of 0 makes the speed unlimited, so that setting the target will immediately affect the position. Note
# that the actual speed at which your servo moves is also limited by the design of the servo itself, the supply voltage, and
# mechanical loads; this parameter will not help your servo go faster than what it is physically capable of.
# --
# At the minimum speed setting of 1, the servo output takes 40 seconds to move from 1 to 2 ms.
# The speed setting has no effect on channels configured as inputs or digital outputs.
# --
# Source: http://www.pololu.com/docs/pdf/0J40/maestro.pdf
def set_speed(self,servo,speed):
if not self.isInitialized: log("Not initialized"); return
highbits,lowbits = divmod(speed,32)
self.write(0x87,servo,lowbits << 2,highbits)
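# Worked example tying the manual excerpt above to this code: the
# documented command 0x87, 0x05, 0x0C, 0x01 decodes to
# 0x0C + (0x01 << 7) == 140. Calling set_speed(5, 140) here would instead
# send divmod(140, 32) == (4, 12) -> bytes 0x30, 0x04, a decoded speed of
# 48 + (4 << 7) == 560 == 4*140, so the argument is effectively in
# (1 us)/(10 ms) units rather than the protocol's (0.25 us)/(10 ms).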
###########################################################################################################################
## Set Acceleration
# Compact protocol: 0x89, channel number, acceleration low bits, acceleration high bits
# --
# This command limits the acceleration of a servo channel's output. The acceleration limit is a value from 0 to 255 in units of (0.25 us)/(10 ms)/(80 ms),
# --
# A value of 0 corresponds to no acceleration limit. An acceleration limit causes the speed of a servo to slowly ramp up until it reaches the maximum speed, then
# to ramp down again as position approaches target, resulting in a relatively smooth motion from one point to another.
# With acceleration and speed limits, only a few target settings are required to make natural-looking motions that would
# otherwise be quite complicated to produce.
# --
# At the minimum acceleration setting of 1, the servo output takes about 3 seconds to move smoothly from a target of 1 ms to a target of 2 ms.
# The acceleration setting has no effect on channels configured as inputs or digital outputs.
# --
# Source: http://www.pololu.com/docs/pdf/0J40/maestro.pdf
def set_acceleration(self,servo,acceleration):
if not self.isInitialized: log("Not initialized"); return
highbits,lowbits = divmod(acceleration,32)
self.write(0x89,servo,lowbits << 2,highbits)
###########################################################################################################################
## Get Position
# Compact protocol: 0x90, channel number
# Response: position low 8 bits, position high 8 bits
# --
# This command allows the device communicating with the Maestro to get the position value of a channel. The position
# is sent as a two-byte response immediately after the command is received.
# --
# If the specified channel is configured as a servo, this position value represents the current pulse width that the Maestro
# is transmitting on the channel, reflecting the effects of any previous commands, speed and acceleration limits, or scripts
# running on the Maestro.
# --
# If the channel is configured as a digital output, a position value less than 6000 means the Maestro is driving the line low,
# while a position value of 6000 or greater means the Maestro is driving the line high.
# --
# If the channel is configured as an input, the position represents the voltage measured on the channel. The inputs on
# channels 0-11 are analog: their values range from 0 to 1023, representing voltages from 0 to 5 V. The inputs on channels
# 12-23 are digital: their values are either exactly 0 or exactly 1023.
# --
# Note that the formatting of the position in this command differs from the target/speed/acceleration formatting in the
# other commands. Since there is no restriction on the high bit, the position is formatted as a standard little-endian two-
# byte unsigned integer. For example, a position of 2567 corresponds to a response 0x07, 0x0A.
# --
# Note that the position value returned by this command is equal to four times the number displayed in the Position box
# in the Status tab of the Maestro Control Center.
# --
# Source: http://www.pololu.com/docs/pdf/0J40/maestro.pdf
def get_position(self,servo):
if not self.isInitialized: log("Not initialized"); return None
self.write(0x90,servo)
data = self.ser.read(2)
if data:
return (ord(data[0])+(ord(data[1])<<8))/4
else:
return None
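# Worked example from the manual excerpt above: a response of 0x07, 0x0A
# is the little-endian value 0x07 + (0x0A << 8) == 2567
# quarter-microseconds; the division by 4 above reports it in (integer)
# microseconds, here 641.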
###########################################################################################################################
## Get Moving State
# Compact protocol: 0x93
# Response: 0x00 if no servos are moving, 0x01 if servos are moving
# --
# This command is used to determine whether the servo outputs have reached their targets or are still changing, limited
# by speed or acceleration settings. Using this command together with the Set Target command, you can initiate several
# servo movements and wait for all the movements to finish before moving on to the next step of your program.
# --
# Source: http://www.pololu.com/docs/pdf/0J40/maestro.pdf
def get_moving_state(self):
if not self.isInitialized: log("Not initialized"); return None
self.write(0x93)
data = self.ser.read(1)
if data:
return ord(data[0])
else:
return None
###########################################################################################################################
## Get Errors
# Compact protocol: 0xA1
# --
# Response: error bits 0-7, error bits 8-15
# --
# Use this command to examine the errors that the Maestro has detected.
# --
# The error register is sent as a two-byte response immediately after the command is received,
# then all the error bits are cleared. For most applications using serial control, it is a good idea to check errors continuously
# and take appropriate action if errors occur.
# --
# Source: http://www.pololu.com/docs/pdf/0J40/maestro.pdf
def get_errors(self):
if not self.isInitialized: log("Not initialized"); return None
self.write(0xA1)
data = self.ser.read(2)
if data:
return ord(data[0])+(ord(data[1])<<8)
else:
return None
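# Illustrative decode (not from the original file): the two response bytes
# form a little-endian 16-bit error register, so a response of 0x01, 0x00
# decodes to flags 0x0001 (error bit 0 set); per the manual excerpt above,
# the read also clears the flags.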
###########################################################################################################################
## a helper function for Set Target
def wait_until_at_target(self):
while (self.get_moving_state()):
time.sleep(0.1)
###########################################################################################################################
## Let's close and clean up when we are done
def __del__(self):
if (self.ser):
self.ser.close()
if (self.con):
self.con.close()
del(self.ser)
del(self.con)
####################################################################
hexapod_legs = [
[[704,2304], [896,2208], [528,1600]], # leg 1
[[496,2000], [704,2000], [400,1648]], # leg 2
[[304,1904], [1184,2512], [656,2000]], # leg 3
[[992,2448], [992,2256], [896,2208]], # leg 4
[[656,2208], [496,1648], [608,1696]], # leg 5
[[992,2608], [608,1808], [496,1600]], # leg 6
]
servo = Device("/dev/ttyAMA0","/dev/ttyAMA0")
import sys
from math import floor
# tests limits of leg joints
#limit = sys.argv[1] # Take input "low" or "high"
pair = sys.argv[1] # take input of the pair 1 or 2
joint = int(sys.argv[2]) # take joint number 0-2
if(pair == "1"):
leg = 0
n = 2
elif(pair == "2"):
leg = 3
n = 5
if(leg >= 0 and n > 0 and (joint >=0 or joint <=2)):
while leg <= n:
limits = hexapod_legs[leg][joint]
srv = leg*3+joint
print '[Limits] Leg:{0} Joint:{1} Servo:{2} Limits:{3}'.format(leg,joint,srv,limits)
servo.set_speed(srv,50)
servo.set_acceleration(srv,20)
# center
servo.set_target(srv,limits[0]+int(floor((limits[1]-limits[0])/2)))
time.sleep(1)
# low
servo.set_target(srv,limits[0]+100)
time.sleep(1)
# high
servo.set_target(srv,limits[1]-100)
time.sleep(1)
# center
servo.set_target(srv,limits[0]+int(floor((limits[1]-limits[0])/2)))
time.sleep(1)
# increment
leg+=1
|
antonvino/inmoov-basic
|
hexapod_scripts_base/maestro_test_limits.py
|
Python
|
mit
| 14,902
|
[
"Brian"
] |
a3544d697635f9dec6f5b874aa0eb27813987a007b40fb6c5fdd00614d957ba5
|
#!/usr/bin/env python
# Mesa 3-D graphics library
# Version: 4.1
#
# Copyright (C) 1999-2001 Brian Paul All Rights Reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# BRIAN PAUL BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN
# AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
# Generate the mesa.def file for Windows.
#
# Usage:
# mesadef.py >mesa.def
# Then copy to src/mesa/drivers/windows/gdi
#
# Dependencies:
# The apispec file must be in the current directory.
import apiparser
import string
def PrintHead():
print '; DO NOT EDIT - This file generated automatically by mesadef.py script'
print 'DESCRIPTION \'Mesa (OpenGL work-alike) for Win32\''
print 'VERSION 6.0'
print ';'
print '; Module definition file for Mesa (OPENGL32.DLL)'
print ';'
print '; Note: The OpenGL functions use the STDCALL'
print '; function calling convention. Microsoft\'s'
print '; OPENGL32 uses this convention and so must the'
print '; Mesa OPENGL32 so that the Mesa DLL can be used'
print '; as a drop-in replacement.'
print ';'
print '; The linker exports STDCALL entry points with'
print '; \'decorated\' names; e.g., _glBegin@0, where the'
print '; trailing number is the number of bytes of '
print '; parameter data pushed onto the stack. The'
print '; callee is responsible for popping this data'
print '; off the stack, usually via a RETF n instruction.'
print ';'
print '; However, the Microsoft OPENGL32.DLL does not export'
print '; the decorated names, even though the calling convention'
print '; is STDCALL. So, this module definition file is'
print '; needed to force the Mesa OPENGL32.DLL to export the'
print '; symbols in the same manner as the Microsoft DLL.'
print '; Were it not for this problem, this file would not'
print '; be needed (for the gl* functions) since the entry'
print '; points are compiled with dllexport declspec.'
print ';'
print '; However, this file is still needed to export "internal"'
print '; Mesa symbols for the benefit of the OSMESA32.DLL.'
print ';'
print 'EXPORTS'
return
#enddef
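# Worked example of the STDCALL decoration described in PrintHead above
# (illustrative): glVertex3f(GLfloat, GLfloat, GLfloat) pushes 3 * 4 == 12
# bytes of arguments, so its decorated export would be _glVertex3f@12; the
# undecorated names this script emits (e.g. '\tglVertex3f') are what force
# the Microsoft-compatible exports.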
def PrintTail():
print ';'
print '; WGL API'
print '\twglChoosePixelFormat'
print '\twglCopyContext'
print '\twglCreateContext'
print '\twglCreateLayerContext'
print '\twglDeleteContext'
print '\twglDescribeLayerPlane'
print '\twglDescribePixelFormat'
print '\twglGetCurrentContext'
print '\twglGetCurrentDC'
print '\twglGetExtensionsStringARB'
print '\twglGetLayerPaletteEntries'
print '\twglGetPixelFormat'
print '\twglGetProcAddress'
print '\twglMakeCurrent'
print '\twglRealizeLayerPalette'
print '\twglSetLayerPaletteEntries'
print '\twglSetPixelFormat'
print '\twglShareLists'
print '\twglSwapBuffers'
print '\twglSwapLayerBuffers'
print '\twglUseFontBitmapsA'
print '\twglUseFontBitmapsW'
print '\twglUseFontOutlinesA'
print '\twglUseFontOutlinesW'
print ';'
print '; Mesa internals - mostly for OSMESA'
print '\t_ac_CreateContext'
print '\t_ac_DestroyContext'
print '\t_ac_InvalidateState'
print '\t_glapi_get_context'
print '\t_glapi_get_proc_address'
print '\t_mesa_buffer_data'
print '\t_mesa_buffer_map'
print '\t_mesa_buffer_subdata'
print '\t_mesa_choose_tex_format'
print '\t_mesa_compressed_texture_size'
print '\t_mesa_create_framebuffer'
print '\t_mesa_create_visual'
print '\t_mesa_delete_buffer_object'
print '\t_mesa_delete_texture_object'
print '\t_mesa_destroy_framebuffer'
print '\t_mesa_destroy_visual'
print '\t_mesa_enable_1_3_extensions'
print '\t_mesa_enable_1_4_extensions'
print '\t_mesa_enable_1_5_extensions'
print '\t_mesa_enable_sw_extensions'
print '\t_mesa_error'
print '\t_mesa_free_context_data'
print '\t_mesa_get_current_context'
print '\t_mesa_init_default_imports'
print '\t_mesa_initialize_context'
print '\t_mesa_make_current'
print '\t_mesa_new_buffer_object'
print '\t_mesa_new_texture_object'
print '\t_mesa_problem'
print '\t_mesa_ResizeBuffersMESA'
print '\t_mesa_store_compressed_teximage1d'
print '\t_mesa_store_compressed_teximage2d'
print '\t_mesa_store_compressed_teximage3d'
print '\t_mesa_store_compressed_texsubimage1d'
print '\t_mesa_store_compressed_texsubimage2d'
print '\t_mesa_store_compressed_texsubimage3d'
print '\t_mesa_store_teximage1d'
print '\t_mesa_store_teximage2d'
print '\t_mesa_store_teximage3d'
print '\t_mesa_store_texsubimage1d'
print '\t_mesa_store_texsubimage2d'
print '\t_mesa_store_texsubimage3d'
print '\t_mesa_test_proxy_teximage'
print '\t_mesa_Viewport'
print '\t_mesa_meta_CopyColorSubTable'
print '\t_mesa_meta_CopyColorTable'
print '\t_mesa_meta_CopyConvolutionFilter1D'
print '\t_mesa_meta_CopyConvolutionFilter2D'
print '\t_mesa_meta_CopyTexImage1D'
print '\t_mesa_meta_CopyTexImage2D'
print '\t_mesa_meta_CopyTexSubImage1D'
print '\t_mesa_meta_CopyTexSubImage2D'
print '\t_mesa_meta_CopyTexSubImage3D'
print '\t_swrast_Accum'
print '\t_swrast_alloc_buffers'
print '\t_swrast_Bitmap'
print '\t_swrast_CopyPixels'
print '\t_swrast_DrawPixels'
print '\t_swrast_GetDeviceDriverReference'
print '\t_swrast_Clear'
print '\t_swrast_choose_line'
print '\t_swrast_choose_triangle'
print '\t_swrast_CreateContext'
print '\t_swrast_DestroyContext'
print '\t_swrast_InvalidateState'
print '\t_swrast_ReadPixels'
print '\t_swrast_zbuffer_address'
print '\t_swsetup_Wakeup'
print '\t_swsetup_CreateContext'
print '\t_swsetup_DestroyContext'
print '\t_swsetup_InvalidateState'
print '\t_tnl_CreateContext'
print '\t_tnl_DestroyContext'
print '\t_tnl_InvalidateState'
print '\t_tnl_MakeCurrent'
print '\t_tnl_run_pipeline'
#enddef
records = []
def FindOffset(funcName):
for (name, alias, offset) in records:
if name == funcName:
return offset
#endif
#endfor
return -1
#enddef
def EmitEntry(name, returnType, argTypeList, argNameList, alias, offset):
if alias == '':
dispatchName = name
else:
dispatchName = alias
if offset < 0:
offset = FindOffset(dispatchName)
if offset >= 0 and string.find(name, "unused") == -1:
print '\tgl%s' % (name)
# save this info in case we need to look up an alias later
records.append((name, dispatchName, offset))
#enddef
PrintHead()
apiparser.ProcessSpecFile("APIspec", EmitEntry)
PrintTail()
|
CPFDSoftware-Tony/gmv
|
utils/Mesa/Mesa-7.8.2/src/mesa/glapi/gen/mesadef.py
|
Python
|
gpl-3.0
| 7,099
|
[
"Brian"
] |
9c5cf024656a583741d04f7eff7583e3810c7a4e6ddc4cc0019f2128c257ecac
|
# Lint as: python3
# Copyright 2020 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for batch_major_attention."""
import math
from absl.testing import flagsaver
from absl.testing import parameterized
from lingvo import compat as tf
from lingvo.core import attention as tm_attention
from lingvo.core import attention_util
from lingvo.core import base_layer
from lingvo.core import batch_major_attention as attention
from lingvo.core import hyperparams
from lingvo.core import py_utils
from lingvo.core import stream_step_test_base
from lingvo.core import test_utils
import numpy as np
class FAVORDotAttenTest(test_utils.TestCase, parameterized.TestCase):
def test_favor_output(self):
multiheadattention = attention.MultiHeadedFavorAttention.Params().Set(
name='atten',
input_dim=4,
hidden_dim=4,
enable_per_dim_scale=False,
enable_scaling_code_motion=True,
attention_type='softmax',
num_random_features=1000).Instantiate()
batch_size = 1
length = 2
num_heads = 1
dim = 8
query = tf.random.normal([batch_size, length, num_heads, dim])
key = tf.random.normal([batch_size, length, num_heads, dim])
value = tf.random.normal([batch_size, length, num_heads, dim])
encoded, _ = multiheadattention._DotAtten(None, query, key, value, None,
None)
query = tf.multiply(query, 1.0 / math.sqrt(float(dim)))
attention_scores = tf.einsum('BXHD,BYHD->BXYH', query, key)
attention_scores = tf.nn.softmax(attention_scores, axis=2)
exact_attention_block_output = tf.einsum('BXYH,BYHD->BXHD',
attention_scores, value)
max_error = 0.5
with self.session(use_gpu=False) as sess:
favor_output, groundtruth_output = sess.run(
[encoded, exact_attention_block_output])
error = np.max(
np.abs((groundtruth_output - favor_output) / groundtruth_output))
self.assertLess(error, max_error)
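# Note on the test above: FAVOR attention (Choromanski et al., "Rethinking
# Attention with Performers") approximates softmax attention with random
# feature maps, so its output only matches exact dot-product attention
# stochastically; the 0.5 bound on the maximum relative error reflects the
# variance remaining with 1000 random features.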
class MultiHeadSelfAttentionTest(test_utils.TestCase, parameterized.TestCase):
"""Test attention models."""
def _AttentionInputs(self, input_dim=4, dtype=tf.float32):
np.random.seed(6348575)
batch_size = 6
seq_len = 6
input_vecs_p = [
np.random.rand(seq_len, input_dim) for _ in range(batch_size)
]
input_vecs = tf.stack([tf.constant(x, dtype=dtype) for x in input_vecs_p])
# pyformat: disable
input_padding_p = [[0, 0, 1, 1, 0, 0], [1, 0, 0, 0, 1, 0],
[0, 0, 1, 0, 1, 0], [0, 0, 1, 1, 0, 0],
[1, 0, 0, 0, 1, 0], [0, 0, 1, 0, 1, 0]]
# pyformat: enable
input_padding = tf.constant(input_padding_p, dtype=dtype)
return input_vecs, input_padding, input_vecs_p, input_padding_p
def testDotProductAttention(self):
(input_vecs, input_padding, input_vecs_p,
input_padding_p) = self._AttentionInputs()
p = attention.MultiHeadedAttention.Params().Set(
name='self_atten',
input_dim=4,
hidden_dim=4,
enable_scaling_code_motion=True)
l = p.Instantiate()
probs, probs_sum = l.AttenProbs(
l.theta,
tf.expand_dims(input_vecs, 2),
tf.expand_dims(input_vecs, 2),
input_padding,
segment_mask=None)
with self.session(use_gpu=False) as sess:
tf.global_variables_initializer().run()
prob_out = sess.run(tf.squeeze(probs / probs_sum))
# Use numpy to perform the same computation to generate expected results.
input_vecs_p = np.array(input_vecs_p)
target_vecs_p = np.transpose(input_vecs_p, (0, 2, 1))
expected_logit = np.matmul(input_vecs_p, target_vecs_p)
expected_logit = np.transpose(expected_logit, (0, 2, 1))
elexp = np.exp(expected_logit)
input_padding_p = np.array(input_padding_p)
input_padding_p = np.expand_dims(input_padding_p, axis=1)
input_padding_p = np.tile(input_padding_p, (1, 6, 1))
elexp *= (1 - input_padding_p)
expected_prob_out = elexp / np.expand_dims(np.sum(elexp, axis=-1), axis=-1)
expected_prob_out = np.reshape(expected_prob_out, (6, 6, 6))
self.assertAllClose(expected_prob_out, prob_out)
@parameterized.parameters(1.0, 5.0, 10.0)
def testAttenLogitCapping(self, atten_logit_cap):
(input_vecs, input_padding, input_vecs_p,
input_padding_p) = self._AttentionInputs()
p = attention.MultiHeadedAttention.Params().Set(
name='self_atten',
input_dim=4,
hidden_dim=4,
enable_scaling_code_motion=True,
atten_logit_cap=atten_logit_cap)
l = p.Instantiate()
probs, probs_sum = l.AttenProbs(
l.theta,
tf.expand_dims(input_vecs, 2),
tf.expand_dims(input_vecs, 2),
input_padding,
segment_mask=None)
with self.session(use_gpu=False) as sess:
tf.global_variables_initializer().run()
prob_out = sess.run(tf.squeeze(probs / probs_sum))
# Use numpy to perform the same computation to generate expected results.
input_vecs_p = np.array(input_vecs_p)
target_vecs_p = np.transpose(input_vecs_p, (0, 2, 1))
expected_logit = np.matmul(input_vecs_p, target_vecs_p)
expected_logit = np.transpose(expected_logit, (0, 2, 1))
expected_logit = atten_logit_cap * np.tanh(expected_logit / atten_logit_cap)
elexp = np.exp(expected_logit)
input_padding_p = np.array(input_padding_p)
input_padding_p = np.expand_dims(input_padding_p, axis=1)
input_padding_p = np.tile(input_padding_p, (1, 6, 1))
elexp *= (1 - input_padding_p)
expected_prob_out = elexp / np.expand_dims(np.sum(elexp, axis=-1), axis=-1)
expected_prob_out = np.reshape(expected_prob_out, (6, 6, 6))
self.assertAllClose(expected_prob_out, prob_out)
@parameterized.named_parameters(('Two', 2), ('Three', 3))
def testMultiHeadedProjectionLayerInputMode(self, batch_dims):
with self.session(use_gpu=True) as sess:
batch_sizes = list(np.arange(3, 3 + batch_dims))
num_heads, dim_per_head = 4, 2
model_dims = num_heads * dim_per_head
input_tf = tf.random.normal(
shape=batch_sizes + [model_dims], dtype=tf.float32)
proj_p = attention.MultiHeadedProjectionLayer.Params().Set(
input_dim=model_dims,
num_heads=num_heads,
dim_per_head=dim_per_head,
is_output_projection=False,
name='proj')
proj = proj_p.Instantiate()
tf.global_variables_initializer().run()
result = proj.FPropDefaultTheta(input_tf)
result_np = sess.run(result)
self.assertEqual(result_np.shape,
tuple(batch_sizes + [num_heads, dim_per_head]))
@parameterized.named_parameters(('Two', 2), ('Three', 3))
def testMultiHeadedProjectionLayerOutputMode(self, batch_dims):
with self.session(use_gpu=True) as sess:
batch_sizes = list(np.arange(3, 3 + batch_dims))
num_heads, dim_per_head = 4, 2
model_dims = num_heads * dim_per_head
input_tf = tf.random.normal(
shape=batch_sizes + [num_heads, dim_per_head], dtype=tf.float32)
proj_p = attention.MultiHeadedProjectionLayer.Params().Set(
input_dim=model_dims,
num_heads=num_heads,
dim_per_head=dim_per_head,
is_output_projection=True,
name='proj')
proj = proj_p.Instantiate()
tf.global_variables_initializer().run()
result = proj.FPropDefaultTheta(input_tf)
result_np = sess.run(result)
self.assertEqual(result_np.shape, tuple(batch_sizes + [model_dims]))
def testMultiHeadedAttentionDotProductOutputDim(self):
# input_batch:6, seq_len:6. Test n = 2 case.
bsz, slen = 6, 6
input_dim = 2
hidden_dim = 4
output_dim = 4
num_heads = 2
with self.session(use_gpu=True) as sess:
input_vecs, input_padding, _, _ = self._AttentionInputs(
input_dim=input_dim)
p = attention.MultiHeadedAttention.Params().Set(
name='self_atten',
num_heads=num_heads,
input_dim=input_dim,
hidden_dim=hidden_dim,
output_dim=output_dim)
l = p.Instantiate()
tf.global_variables_initializer().run()
ctx_vec, attn_prob = l.FProp(
l.theta,
input_vecs,
input_vecs,
input_vecs,
input_padding,
segment_mask=None)
context_vec_np, attn_prob_np = sess.run([ctx_vec, attn_prob])
self.assertEqual(context_vec_np.shape, (bsz, slen, output_dim))
self.assertEqual(attn_prob_np.shape, (bsz, num_heads, slen, slen))
@parameterized.named_parameters(
# Use the default data types.
('dtype_default', [], 1e-06),
# Set the post projection matrix to float16.
('dtype_post_float16', [('.*post/w', tf.float16)], 1e-04),
# Set the 4 weight matrices, query, key, value and post, to float16.
('dtype_all_float16', [('.*w', tf.float16)], 1e-04))
def testMultiHeadedAttentionDotProduct(self, list_regex_dtypes, atol):
# input_batch:6, seq_len:6. Test n = 2 case.
with self.session(use_gpu=True) as sess:
input_vecs, input_padding, _, _ = self._AttentionInputs()
p = attention.MultiHeadedAttention.Params().Set(
name='self_atten', num_heads=2, input_dim=4, hidden_dim=4)
# Use Gaussian() to have consistent init values for float32 and float16.
p.params_init = py_utils.WeightInit.Gaussian(0.1)
with py_utils.VariableListDtypeRegexScope(list_regex_dtypes):
l = p.Instantiate()
tf.global_variables_initializer().run()
ctx_vec, _ = l.FProp(
l.theta,
input_vecs,
input_vecs,
input_vecs,
input_padding,
segment_mask=None)
context_vec_out = sess.run(ctx_vec)
context_vec_out = np.reshape(context_vec_out, (6, 24))
self.assertAllClose(
[-0.091584, 0.133402, 0.036773, -0.033578, 0.097802, 0.047879],
np.sum(context_vec_out, axis=1),
atol=atol)
def testMultiHeadedCrossAttentionDotProduct(self):
with self.session(use_gpu=True) as sess:
input_vecs, input_padding, _, _ = self._AttentionInputs()
# Set query input dim to 8 with value as concat of input_vecs.
query_vecs = tf.concat([input_vecs, input_vecs], axis=-1)
p = attention.MultiHeadedAttention.Params().Set(
name='self_atten',
num_heads=2,
input_dim={
'query': 8,
'key': 4,
'value': 4
},
hidden_dim=4)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
tf.global_variables_initializer().run()
ctx_vec, _ = l.FProp(
l.theta,
query_vecs,
input_vecs,
input_vecs,
input_padding,
segment_mask=None)
context_vec_out = sess.run(ctx_vec)
context_vec_out = np.reshape(context_vec_out, (12, 24))
self.assertAllClose([
11.009628, 10.825181, 12.373755, 12.3311825, 7.5814877, 7.620001,
9.472344, 9.438789, 8.375568, 8.353212, 11.167051, 11.240829
], np.sum(context_vec_out, axis=1))
def testCausalSegmentMask(self):
# batch:1, seq_len:4, two segments ([1, 1, 1] and [0]).
with self.session(use_gpu=False) as sess:
segment_ids = tf.constant([[1, 1, 1, 0]])
mask = attention.CausalSegmentMask(segment_ids, tf.float32)
mask_val = sess.run(mask)
print(mask_val)
atten_allowed = np.sum((mask_val >= 0.0).astype(np.float32))
self.assertEqual(7.0, atten_allowed)
def testMultiHeadedAttentionDotProductSegmentMask(self):
# input_batch:6, seq_len:6. Test n = 2 case.
with self.session(use_gpu=True) as sess:
input_vecs, input_padding, _, _ = self._AttentionInputs()
p = attention.MultiHeadedAttention.Params().Set(
name='self_atten',
num_heads=2,
input_dim=4,
hidden_dim=4,
packed_input=True)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
segment_id = tf.zeros([6, 6])
segment_mask = attention.SegmentMask(segment_id, segment_id)
padding = tf.tile(tf.reshape(input_padding, [6, 1, 1, 6]), [1, 1, 6, 1])
padding_mask = padding * segment_mask.dtype.max * tf.constant(
-0.7, dtype=segment_mask.dtype)
segment_mask += padding_mask
l = p.Instantiate()
tf.global_variables_initializer().run()
ctx_vec, _ = l.FProp(
l.theta,
input_vecs,
input_vecs,
input_vecs,
input_padding,
segment_mask=segment_mask)
context_vec_out = sess.run(ctx_vec)
context_vec_out = np.reshape(context_vec_out, (6, 24))
self.assertAllClose(
[27.417763, 31.783672, 19.99568, 23.907103, 21.078259, 28.429199],
np.sum(context_vec_out, axis=1))
class MultiHeadedAttentionXLOracle:
"""Oracle layer used for computing ground truths for MultiHeadedAttention.
Written in a non-vectorized way.
"""
def __init__(self, u, v, pos_proj, sinusoid_emb):
"""Constructor.
Args:
u: A numpy ndarray of shape [N, H]
v: A numpy ndarray of shape [N, H]
pos_proj: A numpy ndarray of shape [embed_dim, N, H]
sinusoid_emb: A numpy ndarray of shape [seqlen, emb_dim].
"""
assert u.shape == v.shape
assert u.shape == pos_proj.shape[1:]
assert sinusoid_emb.shape[-1] == pos_proj.shape[0]
# [N, H]
self._u = u
# [N, H]
self._v = v
# [?, N, H]
self._pos_proj = pos_proj
self._num_heads = u.shape[0]
self._atten_dim = u.shape[-1]
self._hidden_dim = u.shape[0] * u.shape[-1]
self._sinusoid_emb = sinusoid_emb
def _GetPositionEnc(self, tgt_t, src_t, head, seqlen):
"""Gets positional encoding.
Args:
tgt_t: A Python int, time step of target seq.
src_t: A Python int, time step of source seq.
head: A Python int, num of heads of the attention.
seqlen: A Python int, sequence length of target/source seq.
Returns:
A numpy array of shape [head, emb_dim // head].
"""
# [emb_dim]
sinusoid_enc = self._sinusoid_emb[tgt_t - src_t + seqlen - 1]
return np.einsum('DNH,D->NH', self._pos_proj, sinusoid_enc)[head]
def AttenProbs(self, key, query, paddings, per_step_padding):
"""Computes attention probs in a non vectorized way.
Args:
key: A numpy ndarray of shape [batch, seqlen, heads, dim].
query: A numpy ndarray of the same shape as `key`.
paddings: A numpy ndarray of shape [batch, seqlen].
per_step_padding: A numpy ndarray of shape [batch, seqlen, seqlen].
Returns:
A numpy ndarray of shape [batch, query_seqlen, key_seqlen]
"""
assert query.ndim == 4
assert paddings.ndim == 2
assert key.shape == query.shape
batch, seqlen = query.shape[:2]
tgtlen, srclen = seqlen, seqlen
assert query.shape[2] == self._num_heads
assert query.shape[3] == self._atten_dim
assert paddings.shape == query.shape[:2]
logits = np.zeros((batch, self._num_heads, tgtlen, srclen))
probs = np.zeros((batch, self._num_heads, tgtlen, srclen))
def Normalize(vec):
expx = np.exp(vec)
expxsum = np.sum(expx, axis=-1)
return expx / expxsum
# [b, tgtlen, srclen]
paddings = np.broadcast_to(
np.reshape(paddings, (batch, 1, seqlen)), (batch, seqlen, seqlen))
for b in range(batch):
for h in range(self._num_heads):
for i in range(tgtlen):
for j in range(srclen):
pos_enc = self._GetPositionEnc(i, j, h, seqlen)
logits[b][h][i][j] = (
np.dot(query[b][i][h], key[b][j][h]) +
np.dot(query[b][i][h], pos_enc) +
np.dot(self._u[h], key[b][j][h]) + np.dot(self._v[h], pos_enc))
total_padding = paddings[b][i] + per_step_padding[b][i]
logits[b][h][i] = np.where(total_padding > 0,
np.finfo(np.float32).max * (-0.7),
logits[b][h][i])
probs[b][h][i] = Normalize(logits[b][h][i])
return probs
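# The four dot products in the oracle above follow the Transformer-XL
# relative-attention decomposition (Dai et al., 2019):
# A(i, j) = q_i . k_j (content-content)
# + q_i . (W_R r_{i-j}) (content-position)
# + u . k_j (global content bias)
# + v . (W_R r_{i-j}) (global position bias)
# where r_{i-j} is the sinusoidal relative-position embedding, W_R is
# pos_proj, and u, v are learned per-head biases.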
def _AttentionInputs(input_dim=4, dtype=tf.float32, is_causal=True):
np.random.seed(6348575)
batch_size = 6
seq_len = 6
query_vec_p = [np.random.rand(seq_len, input_dim) for _ in range(batch_size)]
query_vec_p = np.array(query_vec_p).astype(dtype.as_numpy_dtype)
query_vec = tf.convert_to_tensor(query_vec_p)
memory_vec_p = [np.random.rand(seq_len, input_dim) for _ in range(batch_size)]
memory_vec_p = np.array(memory_vec_p).astype(dtype.as_numpy_dtype)
memory_vec = tf.convert_to_tensor(memory_vec_p)
# pyformat: disable
paddings_p = np.array(
[[0, 0, 1, 1, 1, 1], [0, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 1, 1], [0, 0, 1, 1, 1, 1],
[0, 0, 0, 1, 1, 1], [0, 0, 0, 0, 0, 1]]).astype(dtype.as_numpy_dtype)
paddings = tf.convert_to_tensor(paddings_p)
# causal padding.
if is_causal:
per_step_padding_p = [
[0, 1, 1, 1, 1, 1], [0, 0, 1, 1, 1, 1],
[0, 0, 0, 1, 1, 1], [0, 0, 0, 0, 1, 1],
[0, 0, 0, 0, 0, 1], [0, 0, 0, 0, 0, 0]]
else:
per_step_padding_p = np.zeros((seq_len, seq_len))
per_step_padding_p = [per_step_padding_p for _ in range(batch_size)]
per_step_padding_p = np.array(per_step_padding_p).astype(dtype.as_numpy_dtype)
per_step_padding = tf.convert_to_tensor(per_step_padding_p)
# pyformat: enable
return (query_vec, memory_vec, paddings, per_step_padding, query_vec_p,
memory_vec_p, paddings_p, per_step_padding_p)
class MultiHeadedAttentionTest(test_utils.TestCase, parameterized.TestCase):
"""Test dot-product multiheaded attention."""
def _AttentionExtendStepInputs(self,
input_dim=4,
num_heads=2,
dtype=tf.float32):
np.random.seed(6348575)
batch_size = 6
seq_len = 6
query_vec_p = [np.random.rand(1, input_dim) for _ in range(batch_size)]
query_vec = tf.stack([tf.constant(x, dtype=dtype) for x in query_vec_p])
# pyformat: disable
per_step_padding_p = [[0, 1, 1, 1, 1, 1]]
per_step_padding_p = [per_step_padding_p for _ in range(batch_size)]
# pyformat: enable
per_step_padding = tf.stack(
[tf.constant(x, dtype=dtype) for x in per_step_padding_p])
source_vecs = tf.constant(
np.random.normal(
0.1, 0.5, [seq_len, batch_size, num_heads, input_dim // num_heads]),
dtype=dtype)
source_ctxs = tf.constant(
np.random.normal(
0.1, 0.5, [seq_len, batch_size, num_heads, input_dim // num_heads]),
dtype=dtype)
cached_states = py_utils.NestedMap(key=source_vecs, value=source_ctxs)
return query_vec, cached_states, per_step_padding
def testAttenProbs(self):
(query_vec, key_vec, paddings, per_step_padding, query_vec_p, key_vec_p,
paddings_p, per_step_padding_p) = _AttentionInputs()
p = attention.MultiHeadedAttention.Params().Set(
name='atten',
input_dim=4,
hidden_dim=4,
enable_scaling_code_motion=True)
l = p.Instantiate()
probs, probs_sum = l.AttenProbs(
l.theta,
tf.expand_dims(query_vec, 2),
tf.expand_dims(key_vec, 2),
paddings,
segment_mask=None,
per_step_padding=per_step_padding)
with self.session(use_gpu=False) as sess:
tf.global_variables_initializer().run()
prob_out = sess.run(tf.squeeze(probs / probs_sum))
# Use numpy to perform the same computation to generate expected results.
query_vec_p = np.array(query_vec_p)
key_vec_p = np.array(key_vec_p)
key_vec_p = np.transpose(key_vec_p, (0, 2, 1))
expected_logit = np.matmul(query_vec_p, key_vec_p)
paddings_p = np.array(paddings_p)
paddings_p = np.expand_dims(paddings_p, axis=1)
paddings_p = np.tile(paddings_p, (1, 6, 1))
per_step_padding_p = np.array(per_step_padding_p)
paddings_p = 1.0 * np.logical_or(paddings_p, per_step_padding_p)
elexp = np.exp(expected_logit)
elexp *= (1.0 - paddings_p)
elexp += 1e-9
expected_prob_out = elexp / np.expand_dims(np.sum(elexp, axis=-1), axis=-1)
expected_prob_out = np.reshape(expected_prob_out, (6, 6, 6))
self.assertAllClose(expected_prob_out, prob_out)
def testCrossAttentionPaddingWithTimestamp(self):
with self.session(use_gpu=False) as sess:
# batch=2, max_target_len=6
timestamp = tf.constant([[0, 1, 2, 3, 4, 4], [0, 1, 1, 2, 3, 2]],
dtype=tf.int32)
# max_source_len=5
source_paddings = tf.constant([[0, 0, 0, 0, 0], [0, 0, 0, 0, 1]],
dtype=tf.float32)
out_paddings = attention.CrossAttentionPaddingWithTimestamp(
timestamp, source_paddings, 2, 1)
paddings_val = sess.run(out_paddings)
print(paddings_val)
paddings_expected = tf.constant(
[[[0, 0, 1, 1, 1], [0, 0, 0, 1, 1], [1, 0, 0, 0, 1], [1, 1, 0, 0, 0],
[1, 1, 1, 0, 0], [1, 1, 1, 0, 0]],
[[0, 0, 1, 1, 1], [0, 0, 0, 1, 1], [0, 0, 0, 1, 1], [1, 0, 0, 0, 1],
[1, 1, 0, 0, 1], [1, 0, 0, 0, 1]]],
dtype=tf.float32)
self.assertAllEqual(paddings_val, paddings_expected)
def testFPropCrossAttention(self):
# input_batch:6, seq_len:6. Test n = 2 case.
with self.session(use_gpu=True) as sess:
query_vec, memory_vec, paddings, per_step_padding, _, _, _, _ = (
_AttentionInputs())
p = attention.MultiHeadedAttention.Params().Set(
name='cross_atten', num_heads=2, input_dim=4, hidden_dim=4)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
tf.global_variables_initializer().run()
ctx_vec, _ = l.FProp(
l.theta,
query_vec,
memory_vec,
memory_vec,
paddings,
segment_mask=None,
per_step_padding=per_step_padding)
context_vec_out = sess.run(ctx_vec)
context_vec_out = np.reshape(context_vec_out, (6, 24))
self.assertAllClose(
[24.624561, 27.805634, 23.358835, 11.085404, 27.165989, 23.750813],
np.sum(context_vec_out, axis=1))
def testExtendStepAsyncTimeStepSelfAttention(self):
use_short_seq_opt = False
# input_batch:6, seq_len:6, query_len: 1. Test n = 2 case.
with self.session(use_gpu=True) as sess:
query_vec, cached_states, per_step_padding = self._AttentionExtendStepInputs(
)
p = attention.MultiHeadedAttention.Params().Set(
name='atten', num_heads=2, input_dim=4, hidden_dim=4)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
allzero_time_step = tf.constant([0] * 6)
time_step = tf.constant([0, 1, 2, 3, 4, 5])
l = p.Instantiate()
tf.global_variables_initializer().run()
ctx_vec, updated_states = l.ExtendStep(l.theta, query_vec, cached_states,
None, None, per_step_padding, 0,
use_short_seq_opt)
ctx_vec_async, updated_states_async = l.ExtendStep(
l.theta, query_vec, cached_states, None, None, per_step_padding,
allzero_time_step, use_short_seq_opt)
context_vec_out = sess.run(ctx_vec)
new_source_vecs = sess.run(updated_states.key)
context_vec_out_async = sess.run(ctx_vec_async)
new_source_vecs_async = sess.run(updated_states_async.key)
self.assertAllClose(
np.sum(context_vec_out, axis=1),
np.sum(context_vec_out_async, axis=1))
self.assertAllClose(
np.sum(new_source_vecs, axis=1),
np.sum(new_source_vecs_async, axis=1))
ctx_vec_async, updated_states_async = l.ExtendStep(
l.theta, query_vec, cached_states, None, None, per_step_padding,
time_step, use_short_seq_opt)
_, updated_states_step1 = l.ExtendStep(l.theta, query_vec, cached_states,
None, None, per_step_padding, 1,
use_short_seq_opt)
context_vec_out_async = sess.run(ctx_vec_async)
new_source_vecs_async = sess.run(updated_states_async.key)
new_source_vecs_async_step1 = sess.run(updated_states_step1.key)
context_vec_out_async = np.reshape(context_vec_out_async, (6, 4))
self.assertAllClose(
[5.381485, -1.943824, 2.214111, 0.840045, -0.939259, 0.752783],
np.sum(context_vec_out_async, axis=1))
# Updated status are the same at step 0.
self.assertAllClose(new_source_vecs_async[0][0], new_source_vecs[0][0])
self.assertAllClose(new_source_vecs_async[1][1],
new_source_vecs_async_step1[1][1])
def testMultipleExtendStepAsyncTimeStepSelfAttention(self):
# input_batch:6, seq_len:6, query_len: 1. Test n = 2 case.
num_heads, input_dim, hidden_dim, batch, seqlen = 2, 4, 4, 6, 6
with self.session(use_gpu=True):
tf.random.set_seed(12345)
(query_vec, _, paddings, _, _, _, _, _) = _AttentionInputs()
p = attention.MultiHeadedAttention.Params().Set(
name='atten',
num_heads=num_heads,
input_dim=input_dim,
hidden_dim=hidden_dim)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
tf.global_variables_initializer().run()
# Verify ExtendStep() by comparing N ExtendStep() calls with one FProp()
# call on a seq of length N.
per_step_padding = 1 - tf.linalg.band_part(
tf.ones((seqlen, seqlen)), -1, 0)
per_step_padding = tf.stack([per_step_padding] * batch)
dims_per_head = hidden_dim // num_heads
def _ResetCachedStates():
cached_source_vecs = tf.constant(
np.random.normal(0.1, 0.5,
[seqlen, batch, num_heads, dims_per_head]),
dtype=tf.float32)
cached_source_ctxs = tf.constant(
np.random.normal(0.1, 0.5,
[seqlen, batch, num_heads, dims_per_head]),
dtype=tf.float32)
cached_states = py_utils.NestedMap(
key=cached_source_vecs, value=cached_source_ctxs)
return cached_states
encoded_all = []
cached_states = _ResetCachedStates()
for i in range(seqlen):
per_step_paddings = 1. - tf.cast(
tf.sequence_mask([i + 1] * batch, seqlen), tf.float32)
per_step_paddings = tf.expand_dims(per_step_paddings, 1)
encoded, cached_states = l.ExtendStep(l.theta, query_vec[:, i:i + 1, :],
cached_states, paddings, None,
per_step_paddings, i)
# [batch, 1, dims_per_head]
encoded_all.append(encoded)
encoded_all_async = []
cached_states = _ResetCachedStates()
for i in range(seqlen):
# The first batch-1 samples have synchronized time steps: 0 -> seqlen-1.
# The last sample lags 3 steps behind; its time steps are
# [0, 0, 0, 0, 1, 2], i.e. max(0, i - 3).
index = i - 3 if i > 2 else 0
new_query_vec = tf.concat([
query_vec[:(batch - 1), i:i + 1, :], query_vec[(batch - 1):,
index:index + 1, :]
],
axis=0)
time_step = tf.constant([i] * (batch - 1) + [index], dtype=tf.int32)
per_step_paddings = 1. - tf.cast(
tf.sequence_mask([i + 1] *
(batch - 1) + [index + 1], seqlen), tf.float32)
per_step_paddings = tf.expand_dims(per_step_paddings, 1)
encoded, cached_states = l.ExtendStep(l.theta, new_query_vec,
cached_states, paddings, None,
per_step_paddings, time_step)
# [batch, 1, dims_per_head]
encoded_all_async.append(encoded)
# [batch, T, dims_per_head]
actual_ctx_vec = tf.concat(encoded_all, axis=1)
actual_ctx_vec_async = tf.concat(encoded_all_async, axis=1)
self.assertAllClose(actual_ctx_vec_async.eval()[:-1],
actual_ctx_vec.eval()[:-1])
# The last sample moves 3 steps slower than the synchronized version.
self.assertAllClose(actual_ctx_vec_async.eval()[-1][3:],
actual_ctx_vec.eval()[-1][:3])
@parameterized.named_parameters(
('Short', 0.0, True, None), ('Long', 0.0, False, None),
('ShortSmallCap', 1.0, True, None), ('LongSmallCap', 1.0, False, None),
('ShortCap', 5.0, True, None), ('LongCap', 5.0, False, None),
('ExplicitDimPerHead', 0.0, False, 4))
def testExtendStep(self, cap, short_seq, explicit_dim_per_head):
num_heads, input_dim, hidden_dim, batch, seqlen = 2, 4, 4, 6, 6
with self.session(use_gpu=True) as sess:
tf.random.set_seed(12345)
query_vec = tf.random.normal([batch, seqlen, input_dim])
paddings = tf.zeros_like(query_vec[:, :, 0])
p = attention.MultiHeadedAttention.Params().Set(
name='atten',
num_heads=num_heads,
input_dim=input_dim,
hidden_dim=hidden_dim,
atten_logit_cap=cap)
if explicit_dim_per_head:
p.dim_per_head = explicit_dim_per_head
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
tf.global_variables_initializer().run()
# Verify ExtendStep() by comparing N ExtendStep() calls with one FProp()
# call on a seq of length N.
per_step_padding = 1 - tf.linalg.band_part(
tf.ones((seqlen, seqlen)), -1, 0)
per_step_padding = tf.stack([per_step_padding] * batch)
expected_ctx_tensor, _ = l.FPropDefaultTheta(
query_vec,
query_vec,
query_vec,
paddings,
segment_mask=None,
per_step_padding=per_step_padding)
states = l.InitStates(l.theta, batch, seqlen)
encoded_all = []
for i in range(seqlen):
per_step_paddings = 1. - tf.cast(
tf.sequence_mask([i + 1] * batch, seqlen), tf.float32)
per_step_paddings = tf.expand_dims(per_step_paddings, 1)
encoded, states = l.ExtendStep(l.theta, query_vec[:, i:i + 1, :],
states, paddings, None,
per_step_paddings, i, short_seq)
# [batch, 1, dims_per_head]
encoded_all.append(encoded)
# [batch, T, dims_per_head]
actual_ctx_tensor = tf.concat(encoded_all, axis=1)
expected_ctx, actual_ctx = sess.run(
[expected_ctx_tensor, actual_ctx_tensor])
self.assertAllClose(expected_ctx, actual_ctx)
class MultiSourceMultiHeadedAttentionTest(test_utils.TestCase):
def testAttenProbs(self):
(query_vec, key_vec, paddings, per_step_padding, query_vec_p, key_vec_p,
paddings_p, per_step_padding_p) = _AttentionInputs()
# Two-source attention.
mha_params = attention.MultiHeadedAttention.Params().Set(
name='atten',
input_dim=4,
hidden_dim=4,
enable_scaling_code_motion=True)
atten_merger_p = tm_attention.MergerLayer.Params().Set(
params_init=py_utils.WeightInit.Uniform(0.04),
merger_op='concat', # concatenate attention
pre_proj_input_dims=[4, 4],
pre_proj_output_dims=[4, 4])
params = attention.MultiSourceAttention.Params().Set(
name='two_source_atten',
input_dim=4,
hidden_dim=4,
source_atten_tpls=[('src_1', mha_params),
('src_2', mha_params.Copy().Set(name='atten2'))],
primary_source_key='src_1',
atten_merger_tpl=atten_merger_p)
l = params.Instantiate()
probs, probs_sum = l.AttenProbs(
l.theta,
tf.expand_dims(query_vec, 2),
py_utils.NestedMap({
'src_1': tf.expand_dims(key_vec, 2),
'src_2': tf.expand_dims(key_vec, 2)
}),
py_utils.NestedMap({
'src_1': paddings,
'src_2': paddings
}),
segment_mask=None,
per_step_padding=per_step_padding)
with self.session(use_gpu=False) as sess:
tf.global_variables_initializer().run()
prob_out = sess.run(tf.squeeze(probs / probs_sum))
# Use numpy to perform the same computation to generate expected results.
query_vec_p = np.array(query_vec_p)
key_vec_p = np.array(key_vec_p)
key_vec_p = np.transpose(key_vec_p, (0, 2, 1))
expected_logit = np.matmul(query_vec_p, key_vec_p)
paddings_p = np.array(paddings_p)
paddings_p = np.expand_dims(paddings_p, axis=1)
paddings_p = np.tile(paddings_p, (1, 6, 1))
per_step_padding_p = np.array(per_step_padding_p)
paddings_p = 1.0 * np.logical_or(paddings_p, per_step_padding_p)
elexp = np.exp(expected_logit)
elexp *= (1.0 - paddings_p)
elexp += 1e-9
expected_prob_out = elexp / np.expand_dims(np.sum(elexp, axis=-1), axis=-1)
expected_prob_out = np.reshape(expected_prob_out, (6, 6, 6))
self.assertAllClose(expected_prob_out, prob_out)
def testFPropCrossAttention(self):
# input_batch:6, seq_len:6. Test n = 2 case.
with self.session(use_gpu=True) as sess:
query_vec, memory_vec, paddings, per_step_padding, _, _, _, _ = (
_AttentionInputs())
mha_params = attention.MultiHeadedAttention.Params().Set(
name='cross_atten', num_heads=2, input_dim=4, hidden_dim=4)
mha_params.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
atten_merger_p = tm_attention.MergerLayer.Params().Set(
params_init=py_utils.WeightInit.Uniform(0.04),
merger_op='concat', # concatenate attention
pre_proj_input_dims=[4, 4],
pre_proj_output_dims=[4, 4])
# Two-source attention.
p = attention.MultiSourceAttention.Params().Set(
name='two_source_atten',
input_dim=4,
hidden_dim=4,
source_atten_tpls=[('src_1', mha_params),
('src_2', mha_params.Copy().Set(name='atten2'))],
primary_source_key='src_1',
atten_merger_tpl=atten_merger_p)
l = p.Instantiate()
tf.global_variables_initializer().run()
ctx_vec, _ = l.FProp(
l.theta,
query_vec,
py_utils.NestedMap({
'src_1': memory_vec,
'src_2': memory_vec
}),
py_utils.NestedMap({
'src_1': memory_vec,
'src_2': memory_vec
}),
py_utils.NestedMap({
'src_1': paddings,
'src_2': paddings
}),
segment_mask=None,
per_step_padding=per_step_padding)
context_vec_out = sess.run(ctx_vec)
context_vec_out = np.reshape(context_vec_out, (12, 24))
self.assertAllClose([
5.6162043, 5.0109887, 6.0565553, 6.0565553, 4.5718207, 5.253615,
2.0541124, 2.490314, 6.049119, 5.5567484, 4.409875, 5.8939424
], np.sum(context_vec_out, axis=1))
class MultiHeadedAttentionXLTest(test_utils.TestCase, parameterized.TestCase):
"""Test dot-product multiheaded attention."""
def _AttentionExtendStepInputs(self,
input_dim,
batch_size,
seq_len,
dtype=tf.float32):
np.random.seed(6348575)
query_vec_p = [
np.random.rand(seq_len, input_dim) for _ in range(batch_size)
]
query_vec = tf.stack([tf.constant(x, dtype=dtype) for x in query_vec_p])
paddings_p = [[0] * seq_len] * batch_size
paddings = tf.constant(paddings_p, dtype=dtype)
return query_vec, paddings
@parameterized.named_parameters(('OneHead', 1), ('OneHeadCausal', 1, True),
('MultiHead', 2),
('MultiHeadCausal', 2, True))
def testAttenProbs(self, num_heads, is_causal=False):
batch, slen = 6, 6
atten_dim = 4
input_dim = num_heads * atten_dim
(input_vecs, _, input_padding, per_step_padding, input_vecs_p, _,
input_padding_p, per_step_padding_p) = _AttentionInputs(
input_dim=input_dim, is_causal=is_causal)
p = attention.MultiHeadedAttentionXL.Params().Set(
name='self_atten',
input_dim=input_dim,
num_heads=num_heads,
hidden_dim=input_dim,
rel_pos_emb_dim=input_dim,
enable_scaling_code_motion=True)
l = p.Instantiate()
query = tf.reshape(input_vecs, (batch, slen, num_heads, atten_dim))
probs, probs_sum = l.AttenProbs(
l.theta,
query,
query,
input_padding,
segment_mask=None,
per_step_padding=per_step_padding)
# [1, 2 * slen - 1]
positions = np.expand_dims(np.arange(-(slen - 1), slen), 0)
sinusoid_emb = l.pos_emb.FPropWithPosition(l.theta.pos_emb,
tf.convert_to_tensor(positions))
# [2 * slen - 1, emb_dim=input_dim]
sinusoid_emb = tf.squeeze(sinusoid_emb, 0)
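# In Transformer-XL relative attention, `u` and `v` are the learned global
# content and position bias vectors; they are fetched below so the numpy
# oracle can reproduce the layer's logits exactly.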
with self.session(use_gpu=False) as sess:
tf.global_variables_initializer().run()
u, v, pos_proj = sess.run([l.vars.u, l.vars.v, l.pos_proj.vars.w])
actual_probs = sess.run(probs / probs_sum)
sinusoid_emb_p = sess.run(sinusoid_emb)
# Compute expected results with the numpy oracle class.
# [B, tgt_t, H]
input_vecs_p = np.array(input_vecs_p)
# [B, tgt_t, N, H]
input_vecs_p = np.reshape(input_vecs_p, (batch, slen, num_heads, atten_dim))
input_padding_p = np.array(input_padding_p)
oracle = MultiHeadedAttentionXLOracle(u, v, pos_proj, sinusoid_emb_p)
expected_probs = oracle.AttenProbs(input_vecs_p, input_vecs_p,
input_padding_p, per_step_padding_p)
self.assertAllClose(expected_probs, actual_probs)
def testFPropSelfAttention(self):
# input_batch:6, seq_len:6. Test n = 2 case.
with self.session(use_gpu=True) as sess:
query_vec, _, paddings, _, _, _, _, _ = _AttentionInputs()
num_heads, input_dim, hidden_dim = 2, 4, 4
p = attention.MultiHeadedAttentionXL.Params().Set(
name='self_atten',
num_heads=num_heads,
input_dim=input_dim,
hidden_dim=hidden_dim,
rel_pos_emb_dim=num_heads * hidden_dim)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
ctx_vec, _ = l.FPropDefaultTheta(
query_vec, query_vec, query_vec, paddings, segment_mask=None)
tf.global_variables_initializer().run()
context_vec_out = sess.run(ctx_vec)
context_vec_out = np.reshape(context_vec_out, (6, 24))
self.assertAllClose(
[32.33513, 28.584404, 20.54517, 23.407812, 18.616188, 24.212755],
np.sum(context_vec_out, axis=1))
def testExtendStepAsyncTimeStepSelfAttention(self):
num_heads, input_dim, hidden_dim, batch, seqlen = 2, 4, 4, 6, 6
emb_dim = 4
with self.session(use_gpu=True):
tf.random.set_seed(12345)
query_vec, paddings = self._AttentionExtendStepInputs(
input_dim, batch, seqlen)
p = attention.MultiHeadedAttentionXL.Params().Set(
name='atten',
num_heads=num_heads,
input_dim=input_dim,
hidden_dim=hidden_dim,
rel_pos_emb_dim=emb_dim,
random_seed=0)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
tf.global_variables_initializer().run()
# Verify ExtendStep() by comparing N ExtendStep() calls against one FProp()
# call on a sequence of length N.
per_step_padding = 1 - tf.linalg.band_part(
tf.ones((seqlen, seqlen)), -1, 0)
per_step_padding = tf.stack([per_step_padding] * batch)
dims_per_head = hidden_dim // num_heads
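# The caches below are deliberately seeded with random values: positions
# beyond the current step are masked out by per_step_paddings, so the stale
# random contents must not leak into the output.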
def _ResetCachedStates():
cached_source_vecs = tf.constant(
np.random.normal(0.1, 0.5,
[seqlen, batch, num_heads, dims_per_head]),
dtype=tf.float32)
cached_source_ctxs = tf.constant(
np.random.normal(0.1, 0.5,
[seqlen, batch, num_heads, dims_per_head]),
dtype=tf.float32)
cached_states = py_utils.NestedMap(
key=cached_source_vecs, value=cached_source_ctxs)
return cached_states
encoded_all = []
cached_states = _ResetCachedStates()
for i in range(seqlen):
per_step_paddings = 1. - tf.cast(
tf.sequence_mask([i + 1] * batch, seqlen), tf.float32)
per_step_paddings = tf.expand_dims(per_step_paddings, 1)
encoded, cached_states = l.ExtendStep(l.theta, query_vec[:, i:i + 1, :],
cached_states, paddings, None,
per_step_paddings, i)
# [batch, 1, dims_per_head]
encoded_all.append(encoded)
encoded_all_async = []
cached_states = _ResetCachedStates()
for i in range(seqlen):
# Samples 1 to batch-1 step synchronously through time steps 0 -> seqlen-1.
# The last sample runs 3 steps behind: its time steps are
# [0, 0, 0, 0, 1, ..., seqlen - 4].
index = i - 3 if i > 2 else 0
new_query_vec = tf.concat([
query_vec[:(batch - 1), i:i + 1, :], query_vec[(batch - 1):,
index:index + 1, :]
],
axis=0)
time_step = tf.constant([i] * (batch - 1) + [index], dtype=tf.int32)
per_step_paddings = 1. - tf.cast(
tf.sequence_mask([i + 1] *
(batch - 1) + [index + 1], seqlen), tf.float32)
per_step_paddings = tf.expand_dims(per_step_paddings, 1)
encoded, cached_states = l.ExtendStep(l.theta, new_query_vec,
cached_states, paddings, None,
per_step_paddings, time_step)
# [batch, 1, dims_per_head]
encoded_all_async.append(encoded)
# [batch, T, dims_per_head]
actual_ctx_vec = tf.concat(encoded_all, axis=1)
actual_ctx_vec_async = tf.concat(encoded_all_async, axis=1)
self.assertAllClose(actual_ctx_vec_async.eval()[:-1],
actual_ctx_vec.eval()[:-1])
# The last sample runs 3 steps behind the synchronized version, so its
# outputs are shifted by 3 positions.
self.assertAllClose(actual_ctx_vec_async.eval()[-1][3:],
actual_ctx_vec.eval()[-1][:3])
def testExtendStepSelfAttention(self):
num_heads, input_dim, hidden_dim, batch, seqlen = 2, 4, 4, 6, 6
emb_dim = 4
with self.session(use_gpu=True):
tf.random.set_seed(12345)
query_vec, paddings = self._AttentionExtendStepInputs(
input_dim, batch, seqlen)
p = attention.MultiHeadedAttentionXL.Params().Set(
name='atten',
num_heads=num_heads,
input_dim=input_dim,
hidden_dim=hidden_dim,
rel_pos_emb_dim=emb_dim,
random_seed=0)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
tf.global_variables_initializer().run()
# Verify ExtendStep() by comparing N ExtendStep() calls against one FProp()
# call on a sequence of length N.
per_step_padding = 1 - tf.linalg.band_part(
tf.ones((seqlen, seqlen)), -1, 0)
per_step_padding = tf.stack([per_step_padding] * batch)
expected_ctx_vec, _ = l.FPropDefaultTheta(
query_vec,
query_vec,
query_vec,
paddings,
segment_mask=None,
per_step_padding=per_step_padding)
dims_per_head = hidden_dim // num_heads
cached_source_vecs = tf.constant(
np.random.normal(0.1, 0.5, [seqlen, batch, num_heads, dims_per_head]),
dtype=tf.float32)
cached_source_ctxs = tf.constant(
np.random.normal(0.1, 0.5, [seqlen, batch, num_heads, dims_per_head]),
dtype=tf.float32)
cached_states = py_utils.NestedMap(
key=cached_source_vecs, value=cached_source_ctxs)
encoded_all = []
for i in range(seqlen):
per_step_paddings = 1. - tf.cast(
tf.sequence_mask([i + 1] * batch, seqlen), tf.float32)
per_step_paddings = tf.expand_dims(per_step_paddings, 1)
encoded, cached_states = l.ExtendStep(l.theta, query_vec[:, i:i + 1, :],
cached_states, paddings, None,
per_step_paddings, i)
# [batch, 1, dims_per_head]
encoded_all.append(encoded)
# [batch, T, dims_per_head]
actual_ctx_vec = tf.concat(encoded_all, axis=1)
self.assertAllClose(expected_ctx_vec.eval(), actual_ctx_vec.eval())
class MultiHeadedAttentionRPEOracle:
"""Computes ground truths for MultiHeadedfAttentionRPE.
Written in a non-vectorized way.
"""
def __init__(self, num_heads, key_embs, value_embs):
"""Constructor.
Args:
num_heads: A Python int.
key_embs: A numpy array of shape [2 * radius + 1, hidden_dim]
value_embs: A numpy array of shape [2 * radius + 1, hidden_dim]
"""
assert key_embs.shape == value_embs.shape
self._num_heads = num_heads
self._hidden_dim = key_embs.shape[-1]
self._atten_dim = self._hidden_dim // self._num_heads
assert self._atten_dim * self._num_heads == self._hidden_dim
self._key_embs = np.reshape(
key_embs, [key_embs.shape[0], self._num_heads, self._atten_dim])
self._value_embs = np.reshape(
value_embs, [value_embs.shape[0], self._num_heads, self._atten_dim])
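# The embedding tables have 2 * radius + 1 rows, one per clipped relative
# distance in [-radius, radius].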
self._radius = key_embs.shape[0] // 2
def _GetEmb(self, tgt_t, src_t, head, emb_wt):
radius = self._radius
distance = np.clip(src_t - tgt_t, -radius, radius)
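# `distance` may be negative; numpy's negative indexing then selects rows
# from the end of the table, which this oracle assumes matches how the
# layer stores its relative position embeddings.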
return emb_wt[distance][head]
def GetKeyEmb(self, tgt_t, src_t, head):
return self._GetEmb(tgt_t, src_t, head, self._key_embs)
def GetValueEmb(self, tgt_t, src_t, head):
return self._GetEmb(tgt_t, src_t, head, self._value_embs)
def AttenProbs(self, key, query, paddings):
assert query.ndim == 4
assert paddings.ndim == 2
assert key.shape == query.shape
batch, seqlen = query.shape[:2]
tgtlen, srclen = seqlen, seqlen
assert query.shape[2] == self._num_heads
assert query.shape[3] == self._atten_dim
assert paddings.shape == query.shape[:2]
# [B, N, T, T]
logits = np.zeros((batch, self._num_heads, tgtlen, srclen))
# [B, N, T, T]
probs = np.zeros((batch, self._num_heads, tgtlen, srclen))
paddings = np.broadcast_to(
np.reshape(paddings, (batch, 1, 1, seqlen)),
(batch, self._num_heads, seqlen, seqlen))
def Normalize(vec):
expx = np.exp(vec)
expxsum = np.sum(expx, axis=-1)
return expx / expxsum
for b in range(batch):
for h in range(self._num_heads):
for i in range(tgtlen):
for j in range(srclen):
logits[b][h][i][j] = np.dot(query[b][i][h],
key[b][j][h] + self.GetKeyEmb(i, j, h))
logits[b][h][i] = np.where(paddings[b][h][i] > 0,
np.finfo(np.float32).max * (-0.7),
logits[b][h][i])
probs[b][h][i] = Normalize(logits[b][h][i])
return probs
def AttenContext(self, probs, values):
assert probs.ndim == 4
assert values.ndim == 4
assert probs.shape[0] == values.shape[0] # batch
assert probs.shape[1] == values.shape[2] # head
assert probs.shape[2] == values.shape[1] # tgtlen
assert probs.shape[3] == probs.shape[2] # slen
assert values.shape[-1] == self._atten_dim
batch, _, tgtlen, srclen = probs.shape
# [B, N, T, H]
ctx = np.zeros((batch, self._num_heads, tgtlen, self._atten_dim))
for b in range(batch):
for h in range(self._num_heads):
for i in range(tgtlen):
for j in range(srclen):
ctx[b][h][i] += probs[b][h][i][j] * (
values[b][j][h] + self.GetValueEmb(i, j, h))
# [B, T, N, H]
return np.transpose(ctx, (0, 2, 1, 3))
class MultiHeadedAttentionRPETest(test_utils.TestCase, parameterized.TestCase):
@parameterized.named_parameters(('OneHead', 1), ('MultiHead', 2))
def testAttenProbs(self, num_heads):
batch, slen = 6, 6
atten_dim = 4
radius = 3
input_dim = num_heads * atten_dim
(input_vecs, _, input_padding, _, input_vecs_p, _, input_padding_p,
_) = _AttentionInputs(input_dim=input_dim)
p = attention.MultiHeadedAttentionRPE.Params().Set(
name='self_atten',
input_dim=input_dim,
num_heads=num_heads,
hidden_dim=input_dim,
rel_pos_radius=radius,
enable_scaling_code_motion=True)
l = p.Instantiate()
query = tf.reshape(input_vecs, (batch, slen, num_heads, atten_dim))
probs, probs_sum = l.AttenProbs(
l.theta, query, query, input_padding, segment_mask=None)
with self.session(use_gpu=False) as sess:
tf.global_variables_initializer().run()
# [radius * 2 + 1, hidden_dim], [B, tgt_t, src_t]
key_emb, value_emb, actual_probs = sess.run(
[l.key_emb.vars.w, l.value_emb.vars.w, probs / probs_sum])
oracle = MultiHeadedAttentionRPEOracle(num_heads, key_emb, value_emb)
# Use numpy to perform the same computation to generate expected results.
# [B, tgt_t, N, H]
input_vecs_p = np.reshape(input_vecs_p, (batch, slen, num_heads, atten_dim))
expected_probs = oracle.AttenProbs(input_vecs_p, input_vecs_p,
input_padding_p)
self.assertAllClose(expected_probs, actual_probs)
@parameterized.named_parameters(('OneHead', 1), ('MultiHead', 2))
def testAttenContext(self, num_heads):
batch, slen = 6, 6
atten_dim = 4
radius = 3
input_dim = num_heads * atten_dim
(input_vecs, _, _, _, input_vecs_p, _, _,
_) = _AttentionInputs(input_dim=input_dim)
p = attention.MultiHeadedAttentionRPE.Params().Set(
name='self_atten',
input_dim=input_dim,
num_heads=num_heads,
hidden_dim=input_dim,
rel_pos_radius=radius)
l = p.Instantiate()
probs = np.random.rand(batch, num_heads, slen, slen).astype(np.float32)
probs = np.exp(probs) / np.sum(np.exp(probs), axis=-1, keepdims=True)
ctx = l._AttenContext(
l.theta, tf.convert_to_tensor(probs),
tf.reshape(input_vecs, (batch, slen, num_heads, atten_dim)))
with self.session(use_gpu=False) as sess:
tf.global_variables_initializer().run()
key_emb, value_emb, actual_ctx = sess.run(
[l.key_emb.vars.w, l.value_emb.vars.w, ctx])
oracle = MultiHeadedAttentionRPEOracle(num_heads, key_emb, value_emb)
# [B, tgt_t, N, H]
input_vecs_p = np.reshape(input_vecs_p, (batch, slen, num_heads, atten_dim))
expected_ctx = oracle.AttenContext(probs, input_vecs_p)
self.assertAllClose(expected_ctx, actual_ctx)
@parameterized.named_parameters(('OneHead', 1), ('MultiHead', 2))
def testAttenLogitsOneStep(self, num_heads):
batch, slen = 6, 6
atten_dim = 4
radius = 3
input_dim = num_heads * atten_dim
(input_vecs, _, _, _, _, _, _, _) = _AttentionInputs(
input_dim=input_dim, is_causal=True)
p = attention.MultiHeadedAttentionRPE.Params().Set(
name='self_atten',
input_dim=input_dim,
num_heads=num_heads,
hidden_dim=input_dim,
rel_pos_radius=radius)
l = p.Instantiate()
# [B, T, N, H]
query = tf.reshape(input_vecs, (batch, slen, num_heads, atten_dim))
# Causal self attention.
# [B, N, T, S]
logits = l._AttenLogits(
l.theta,
query,
query,
)
one_step_logits = []
# [S=T, B, N, H]
key = tf.transpose(query, [1, 0, 2, 3])
for i in range(slen):
local_logits = l._AttenLogitsOneStep(l.theta, query[:, i, :, :], key, i)
one_step_logits.append(local_logits)
# [T, S, B, N]
stacked_logits = tf.stack(one_step_logits)
stacked_logits = tf.transpose(stacked_logits, [2, 3, 0, 1])
with self.session(use_gpu=False) as sess:
tf.global_variables_initializer().run()
expected_logits, actual_logits = sess.run([logits, stacked_logits])
self.assertAllClose(expected_logits, actual_logits)
@parameterized.named_parameters(('OneHead', 1), ('MultiHead', 2))
def testAttenContextsOneStep(self, num_heads):
batch, slen = 6, 6
atten_dim = 4
radius = 3
input_dim = num_heads * atten_dim
(input_vecs, _, _, per_step_padding, _, _, _, _) = _AttentionInputs(
input_dim=input_dim, is_causal=True)
p = attention.MultiHeadedAttentionRPE.Params().Set(
name='self_atten',
input_dim=input_dim,
num_heads=num_heads,
hidden_dim=input_dim,
rel_pos_radius=radius)
l = p.Instantiate()
# [B, N, T, S=T]
# Make causal attention probs.
probs = np.random.rand(batch, num_heads, slen, slen).astype(np.float32)
per_step_padding = 1 - np.tril(np.ones((slen, slen))).astype(np.float32)
probs *= per_step_padding
# Normalize
probs = np.exp(probs) / np.sum(np.exp(probs), axis=-1, keepdims=True)
# Causal self attention.
# [B, N, T, S]
ctx = l._AttenContext(
l.theta, tf.convert_to_tensor(probs),
tf.reshape(input_vecs, (batch, slen, num_heads, atten_dim)))
one_step_ctx = []
# [B, T, N, H] -> [S=T, B, N, H]
value = tf.reshape(input_vecs, (batch, slen, num_heads, atten_dim))
value = tf.transpose(value, [1, 0, 2, 3])
for i in range(slen):
# [B, N, S]
local_prob = probs[:, :, i, :]
# [S, B, N]
local_prob = tf.transpose(local_prob, [2, 0, 1])
# [B, N, H]
local_ctx = l._AttenContextOneStep(l.theta, local_prob, value, i,
atten_dim)
one_step_ctx.append(local_ctx)
# [T, B, N, H]
stacked_ctx = tf.stack(one_step_ctx)
stacked_ctx = tf.transpose(stacked_ctx, [1, 0, 2, 3])
with self.session(use_gpu=False) as sess:
tf.global_variables_initializer().run()
expected_ctx, actual_ctx = sess.run([ctx, stacked_ctx])
self.assertAllClose(expected_ctx, actual_ctx)
class LocalSelfAttentionTest(test_utils.TestCase, parameterized.TestCase):
"""Test local causual self attention."""
def _LocalCausalPadding(self, b, t, l, r, query_stride):
s = t // query_stride
padding = np.ones((b, s, t))
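# Pooled query i covers original positions [j, j + stride); keys in
# [j - l + 1, j + r + stride - 1] are unmasked, i.e. the left context `l`
# counts the query position itself.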
for i in range(s):
j = i * query_stride
padding[:, i, max(0, j - l + 1):j + r + query_stride] = 0
return tf.constant(padding, dtype=tf.float32)
@parameterized.named_parameters(
{
'testcase_name': 'block_size_unspecified',
'block_size': None,
'left_context': 4,
'right_context': 1
}, {
'testcase_name': 'block_size_long',
'block_size': 5,
'left_context': 3,
'right_context': 4
}, {
'testcase_name': 'mimic_full_attention',
'block_size': None,
'left_context': 6,
'right_context': 5
}, {
'testcase_name': 'left_context_only',
'block_size': 3,
'left_context': 4,
'right_context': 0,
}, {
'testcase_name': 'right_context_only',
'block_size': 4,
'left_context': 1,
'right_context': 4,
}, {
'testcase_name': 'block_longer_than_sequence',
'block_size': 10,
'left_context': 7,
'right_context': 0,
}, {
'testcase_name': 'pos_emb_left_context_only',
'block_size': 3,
'left_context': 4,
'right_context': 0,
'pos_emb_dim': 8,
}, {
'testcase_name': 'pos_emb_left_and_right_context',
'block_size': 3,
'left_context': 4,
'right_context': 2,
'pos_emb_dim': 8,
}, {
'testcase_name': 'lite_pos_emb_left_and_right_context',
'block_size': 3,
'left_context': 4,
'right_context': 2,
'pos_emb_dim': 8,
'skip_term_b': True,
}, {
'testcase_name': 'funnel_pool',
'block_size': None,
'left_context': 3,
'right_context': 2,
'query_stride': 2,
})
def testFPropAgainstReference(self,
block_size,
left_context,
right_context,
pos_emb_dim=0,
num_heads=2,
input_dim=4,
hidden_dim=4,
skip_term_b=False,
query_stride=1,
use_additional_per_step_padding=False):
tf.reset_default_graph()
with self.session(use_gpu=True) as sess:
query_vec, _, paddings, _, _, _, _, _ = _AttentionInputs(input_dim)
if use_additional_per_step_padding:
# Generate a random binary mask of shape [N, T, S].
additional_per_step_padding_val = np.random.randint(
    low=0, high=2, size=(6, 6, 6))  # randint's high is exclusive.
additional_per_step_padding = tf.constant(
additional_per_step_padding_val, tf.float32)
else:
additional_per_step_padding = None
# Use the reference implementation + local causal padding to verify
# correctness.
if pos_emb_dim == 0:
p_cls = attention.LocalSelfAttention
expected_p_cls = attention.MultiHeadedAttention
else:
p_cls = attention.LocalSelfAttentionXL
expected_p_cls = attention.MultiHeadedAttentionXL
p = p_cls.Params().Set(
name='self_atten',
num_heads=num_heads,
input_dim=input_dim,
hidden_dim=hidden_dim,
block_size=block_size,
left_context=left_context,
right_context=right_context,
query_stride=query_stride,
force_consistent_probs_shape=True)
expected_p = expected_p_cls.Params().Set(
name='expected_self_atten',
num_heads=num_heads,
input_dim=input_dim,
hidden_dim=hidden_dim)
if pos_emb_dim != 0:
p.rel_pos_emb_dim = pos_emb_dim
expected_p.rel_pos_emb_dim = pos_emb_dim
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
expected_p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
expected_l = expected_p.Instantiate()
funnel_pool = attention.FunnelPoolingLayer.Params().Set(
name='funnel_pool', stride=query_stride).Instantiate()
tf.global_variables_initializer().run()
pooled_query_vec, pooled_paddings = funnel_pool.FProp(
funnel_pool.theta, query_vec, paddings)
ctx_vec, probs = l.FProp(
l.theta,
pooled_query_vec,
query_vec,
query_vec,
paddings,
segment_mask=None,
per_step_padding=additional_per_step_padding)
context_vec_out, probs_out = sess.run([ctx_vec, probs])
per_step_padding = self._LocalCausalPadding(6, 6, left_context,
right_context, query_stride)
if additional_per_step_padding is not None:
per_step_padding += additional_per_step_padding
expected_ctx_vec, expected_probs = expected_l.FProp(
expected_l.theta, pooled_query_vec, query_vec, query_vec, paddings,
None, per_step_padding)
expected_context_vec_out, expected_probs_out = sess.run(
[expected_ctx_vec, expected_probs])
# Don't compare if the query position is padded, or if all key positions
# are padded.
pooled_paddings_val, paddings_val = sess.run([pooled_paddings, paddings])
per_step_padding_val = sess.run(per_step_padding)
per_step_padding_val += pooled_paddings_val[:, :, np.newaxis]
per_step_padding_val += paddings_val[:, np.newaxis, :]
dont_compare = np.sum(
per_step_padding_val > 0, axis=-1) == per_step_padding_val.shape[-1]
factor = (1 - dont_compare)[:, None, :, None]
expected_probs_out *= factor
probs_out *= factor
self.assertAllClose(probs_out, expected_probs_out)
expected_context_vec_out *= (1 - dont_compare)[..., np.newaxis]
context_vec_out *= (1 - dont_compare)[..., np.newaxis]
self.assertAllClose(context_vec_out, expected_context_vec_out)
def testFPropWithDropout(self):
with self.session(use_gpu=True) as sess:
query_vec, _, paddings, _, _, _, _, _ = _AttentionInputs(input_dim=4)
p = attention.LocalSelfAttention.Params().Set(
name='self_atten',
num_heads=2,
input_dim=4,
hidden_dim=4,
block_size=2,
left_context=2,
right_context=0,
atten_dropout_prob=0.3,
)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
tf.global_variables_initializer().run()
ctx_vec, _ = l.FProp(
l.theta, query_vec, query_vec, query_vec, paddings, segment_mask=None)
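# Smoke test only: with atten_dropout_prob > 0 the output is stochastic,
# so we just check that FProp runs and print the result.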
ctx_vec_val = sess.run(ctx_vec)
print(ctx_vec_val)
def _AttentionExtendStepInputs(self,
batch_size=6,
input_dim=4,
num_heads=2,
dtype=tf.float32):
np.random.seed(6348575)
seq_len = 6
query_vec_p = [np.random.rand(1, input_dim) for _ in range(batch_size)]
query_vec = tf.stack([tf.constant(x, dtype=dtype) for x in query_vec_p])
source_vecs = tf.constant(
np.random.normal(
0.1, 0.5, [seq_len, batch_size, num_heads, input_dim // num_heads]),
dtype=dtype)
source_ctxs = tf.constant(
np.random.normal(
0.1, 0.5, [seq_len, batch_size, num_heads, input_dim // num_heads]),
dtype=dtype)
cached_states = py_utils.NestedMap(key=source_vecs, value=source_ctxs)
return query_vec, cached_states
def testExtendStepSelfAttention(self):
# input_batch:6, seq_len:6, query_len: 1. Test n = 2 case.
batch_size = 6
input_dim = 4
num_heads = 2
with self.session(use_gpu=True) as sess:
query_vec, cached_states = (
self._AttentionExtendStepInputs(
batch_size=batch_size, input_dim=input_dim, num_heads=num_heads))
p = attention.LocalSelfAttention.Params().Set(
name='self_atten',
num_heads=num_heads,
input_dim=input_dim,
hidden_dim=4,
block_size=2,
left_context=2,
right_context=0,
atten_dropout_prob=0.3,
)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
tf.global_variables_initializer().run()
ctx_vec, updated_states = l.ExtendStep(
l.theta,
query_vec,
cached_states,
paddings=None,
segment_mask=None,
per_step_padding=None,
time_step=3,
use_short_seq_opt=False)
context_vec_out = sess.run(ctx_vec)
new_source_vecs = sess.run(updated_states.key)
context_vec_out = np.reshape(context_vec_out, (6, 4))
tf.logging.info(np.array_repr(np.sum(context_vec_out, axis=1)))
self.assertAllClose(
[3.303124, 3.90266, 2.971359, 2.486641, 3.109267, 1.54773],
np.sum(context_vec_out, axis=1))
new_source_vecs = np.reshape(new_source_vecs, (6, 24))
tf.logging.info(np.array_repr(np.sum(new_source_vecs, axis=1)))
self.assertAllClose(
[5.135725, 1.340482, 1.065773, 4.116683, 4.928454, 3.161165],
np.sum(new_source_vecs, axis=1))
class LocalSelfAttentionStreamStepTest(stream_step_test_base.StreamStepTestBase):
"""Tests StreamStep()."""
def _GetParams(self, **kwargs):
num_heads = kwargs['num_heads']
input_dim = kwargs['input_dim']
hidden_dim = kwargs['hidden_dim']
left_context = kwargs['left_context']
right_context = kwargs['right_context']
p_cls = kwargs.get('p_cls', attention.LocalSelfAttention)
query_stride = kwargs.get('query_stride', 1)
use_3d_recurrent_state = kwargs.get('use_3d_recurrent_state', False)
inference_step_max_length = kwargs.get('inference_step_max_length', None)
minimize_state_size = kwargs.get('minimize_state_size', False)
p = p_cls.Params().Set(
name='local_self_atten',
num_heads=num_heads,
input_dim=input_dim,
hidden_dim=hidden_dim,
left_context=left_context,
right_context=right_context,
query_stride=query_stride)
if p_cls == attention.LocalSelfAttentionXL:
p.Set(rel_pos_emb_dim=input_dim)
p.minimize_state_size = minimize_state_size
p.use_3d_recurrent_state = use_3d_recurrent_state
p.inference_step_max_length = inference_step_max_length
return p
def _FProp(self, layer, inputs, paddings):
funnel_pool = attention.FunnelPoolingLayer.Params().Set(
name='funnel_pool', stride=layer.params.query_stride).Instantiate()
query_vec, query_paddings = funnel_pool.FProp(funnel_pool.theta, inputs,
paddings)
return layer.FProp(layer.theta, query_vec, inputs, inputs,
paddings)[0], query_paddings
def _StreamStep(self, layer, step_inputs, step_paddings, state):
funnel_pool = attention.FunnelPoolingLayer.Params().Set(
name='funnel_pool', stride=layer.params.query_stride).Instantiate()
query_vec, query_paddings = funnel_pool.StreamStep(funnel_pool.theta,
step_inputs,
step_paddings)
return layer.StreamStep(layer.theta, query_vec, query_paddings, step_inputs,
step_paddings, state)
def _GetFPropOutput(self, fprop_out):
return fprop_out[0], fprop_out[1]
@parameterized.named_parameters(
('Basic',),
('Basic3d', attention.LocalSelfAttention, False, 1, 1, True),
('Basic3dMin', attention.LocalSelfAttention, False, 1, 1, True, True),
('BasicS4', attention.LocalSelfAttention, False, 4, 4),
('BasicS4L8', attention.LocalSelfAttention, False, 4, 8),
('BasicS4L8Min', attention.LocalSelfAttention, False, 4, 8, False, True),
('BasicS4L83d', attention.LocalSelfAttention, False, 4, 8, True),
('BasicS4L83dMin', attention.LocalSelfAttention, False, 4, 8, True, True),
('BasicDynamic', attention.LocalSelfAttention, False, 1, None),
('BasicS4Dynamic', attention.LocalSelfAttention, False, 4, None),
('SkipNorm', attention.LocalSelfAttention, True),
('SkipNormS2', attention.LocalSelfAttention, True, 2, 2),
('SkipNormS2L3', attention.LocalSelfAttention, True, 2, 3),
('SkipNormDynamic', attention.LocalSelfAttention, True, 1, None),
('SkipNormS2Dynamic', attention.LocalSelfAttention, True, 2, None),
('BasicXL', attention.LocalSelfAttentionXL),
('BasicS4XL', attention.LocalSelfAttentionXL, False, 4, 4),
('BasicDynamicXL', attention.LocalSelfAttentionXL, False, 1, None),
('BasicS4DynamicXL', attention.LocalSelfAttentionXL, False, 4, None),
('SkipNormXL', attention.LocalSelfAttentionXL, True),
('SkipNormS2XL', attention.LocalSelfAttentionXL, True, 2, 2),
('SkipNormDynamicXL', attention.LocalSelfAttentionXL, True, 1, None),
('SkipNormS2DynamicXL', attention.LocalSelfAttentionXL, True, 2, None),
('FunnelS2', attention.LocalSelfAttention, False, 2, 2, False, False, 2),
('FunnelS2Dynamic', attention.LocalSelfAttention, False, 2, None, False,
False, 2),
)
def testLeftContext(self,
p_cls=attention.LocalSelfAttention,
testonly_skip_norm_layers=False,
stride=1,
inference_step_max_length=1,
use_3d_recurrent_state=False,
minimize_state_size=False,
query_stride=1):
tf.random.set_seed(2021)
kwargs = dict(
stride=stride,
input_dim=4,
num_heads=2,
hidden_dim=4,
left_context=3,
right_context=0,
query_stride=query_stride,
p_cls=p_cls,
minimize_state_size=minimize_state_size,
use_3d_recurrent_state=use_3d_recurrent_state,
inference_step_max_length=inference_step_max_length)
with flagsaver.flagsaver(
testonly_skip_norm_layers=testonly_skip_norm_layers):
self._TestStreamStepHelper(**kwargs)
def testRightContext(self):
tf.random.set_seed(2021)
kwargs = dict(
stride=2,
input_dim=4,
num_heads=4,
hidden_dim=4,
left_context=9,
right_context=5)
self._TestStreamStepHelper(**kwargs)
def testRightContextStackingLayers(self):
tf.random.set_seed(2021)
kwargs = dict(
stride=2,
input_dim=2,
num_heads=2,
hidden_dim=2,
left_context=6,
right_context=3,
num_layers=5)
self._TestRightContextStackingLayersHelper(**kwargs)
class ChunkwiseSelfAttentionTest(test_utils.TestCase, parameterized.TestCase):
"""Test Chunkwise Self Attention."""
def _ChunkwisePadding(self, b, t, w, l, r):
s = t
padding = np.ones((b, s, t), dtype=np.float32)
u = (t + w - 1) // w
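# Queries in chunk u_ (positions [q_st, q_en)) may attend to keys in
# [q_st - (l - 1), q_en + r): the left context is shared by the whole chunk
# and counts the position itself.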
for u_ in range(u):
q_st = u_ * w
q_en = min((u_ + 1) * w, t)
k_st = max(q_st - (l - 1), 0)
k_en = min((u_ + 1) * w + r, t)
padding[:, q_st:q_en, k_st:k_en] = 0.0
return tf.constant(padding, dtype=tf.float32)
def _CompareEncoded(self, encode1, encode2, paddings):
self.assertAllEqual(encode1.shape, encode2.shape)
b = encode1.shape[0]
for num_seq in range(b):
length = int(np.sum(1 - paddings[num_seq, :]))
self.assertAllClose(encode1[num_seq, :length], encode2[num_seq, :length])
@parameterized.named_parameters(
{
'testcase_name': '_w2_l1_r0',
'chunk_size': 2,
'left_context': 1,
'right_context': 0,
},
{
'testcase_name': '_w2_l2_r1',
'chunk_size': 2,
'left_context': 2,
'right_context': 1,
},
{
'testcase_name': '_w2_l1_r0_rel',
'chunk_size': 2,
'left_context': 1,
'right_context': 0,
'pos_emb_dim': 2,
},
{
'testcase_name': '_w2_l2_r1_rel',
'chunk_size': 2,
'left_context': 2,
'right_context': 1,
'pos_emb_dim': 2,
},
{
'testcase_name': '_w2_l2_r1_rel_lite',
'chunk_size': 2,
'left_context': 2,
'right_context': 1,
'pos_emb_dim': 2,
'skip_term_b': True,
},
)
def testFPropAgainstReference(self,
chunk_size,
left_context,
right_context,
pos_emb_dim=0,
num_heads=2,
input_dim=4,
hidden_dim=4,
skip_term_b=False):
tf.reset_default_graph()
with self.session(use_gpu=False) as sess:
query, _, paddings, _, _, _, _, _ = _AttentionInputs(input_dim)
b, t, _ = py_utils.GetShape(query)
if pos_emb_dim == 0:
p_cls = attention.ChunkwiseSelfAttention
expected_p_cls = attention.MultiHeadedAttention
else:
p_cls = attention.ChunkwiseSelfAttentionXL
expected_p_cls = attention.MultiHeadedAttentionXL
common_params = {
'num_heads': num_heads,
'input_dim': input_dim,
'hidden_dim': hidden_dim
}
chunk_wise_params = {
'chunk_size': chunk_size,
'left_context': left_context,
'right_context': right_context
}
p = p_cls.Params().Set(
name='self_attn', **chunk_wise_params, **common_params)
expected_p = expected_p_cls.Params().Set(
name='expected_self_attn', **common_params)
if pos_emb_dim != 0:
expected_p.skip_term_b = skip_term_b
p.skip_term_b = skip_term_b
if pos_emb_dim > 0:
p.rel_pos_emb_dim = pos_emb_dim
expected_p.rel_pos_emb_dim = pos_emb_dim
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
expected_p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
expected_l = expected_p.Instantiate()
tf.global_variables_initializer().run()
out_emb, _ = l.FProp(
l.theta,
query,
query,
query,
paddings,
segment_mask=None,
per_step_padding=None)
(out_emb_np,) = sess.run([out_emb])
per_step_padding = self._ChunkwisePadding(b, t, chunk_size, left_context,
right_context)
expected_out_emb, _ = expected_l.FProp(expected_l.theta, query, query,
query, paddings, None,
per_step_padding)
expected_out_emb_np, paddings_np = sess.run([expected_out_emb, paddings])
self._CompareEncoded(expected_out_emb_np, out_emb_np, paddings_np)
class RoutingAttentionTest(test_utils.TestCase, parameterized.TestCase):
"""Tests for RoutingAttention."""
def testDotAttenSlow(self):
batch_size = 7
source_length = 6
target_length = 4
num_heads = 2
dim_per_head = 5
num_clusters = 3
attention_window = 4
q = np.random.rand(batch_size, target_length, num_heads,
dim_per_head).astype(np.float32)
k = np.random.rand(batch_size, source_length, num_heads,
dim_per_head).astype(np.float32)
v = np.random.rand(batch_size, source_length, num_heads,
dim_per_head).astype(np.float32)
query_paddings = np.zeros([batch_size, target_length], dtype=np.float32)
key_paddings = np.zeros([batch_size, source_length], dtype=np.float32)
p = attention.RoutingAttention.Params().Set(
name='routing_atten',
input_dim=1,
hidden_dim=num_heads * dim_per_head,
num_heads=num_heads,
num_clusters=num_clusters,
attention_window=attention_window,
fast_path=False)
atten = p.Instantiate()
with self.session() as sess:
tf.global_variables_initializer().run()
encoded, probs = sess.run(
atten._DotAtten(
atten.theta, q, k, v, key_paddings,
query_paddings=query_paddings))
self.assertEqual(encoded.shape,
(batch_size, target_length, num_heads, dim_per_head))
self.assertEqual(probs.shape,
(batch_size, num_heads, target_length, source_length))
# attention weights sum to 1.
self.assertAllClose(
np.sum(probs, axis=-1),
np.ones([batch_size, num_heads, target_length]))
def testDotAttenFast(self):
batch_size = 6
source_length = 8
target_length = 7
num_heads = 3
dim_per_head = 5
num_clusters = 2
attention_window = source_length
q = np.random.rand(batch_size, target_length, num_heads,
dim_per_head).astype(np.float32)
k = np.random.rand(batch_size, source_length, num_heads,
dim_per_head).astype(np.float32)
v = np.random.rand(batch_size, source_length, num_heads,
dim_per_head).astype(np.float32)
q_paddings = np.zeros([batch_size, target_length], dtype=np.float32)
k_paddings = np.zeros([batch_size, source_length], dtype=np.float32)
p = attention.RoutingAttention.Params().Set(
name='routing_atten',
input_dim=1,
hidden_dim=num_heads * dim_per_head,
num_heads=num_heads,
num_clusters=num_clusters,
attention_window=attention_window,
query_group_size_factor=1.5, # each group has 6 queries: 8 / 2 * 1.5.
fast_path=True)
atten = p.Instantiate()
# increase group size to 7.
atten2 = p.Copy().Set(
name='increase_group_size_routing_atten',
query_group_size_factor=1.75).Instantiate()
p = attention.MultiHeadedAttention.Params().Set(
name='full_atten',
input_dim=1,
hidden_dim=num_heads * dim_per_head,
num_heads=num_heads)
full_atten = p.Instantiate()
with self.session() as sess:
tf.global_variables_initializer().run()
encoded, probs = sess.run(
atten._DotAtten(
atten.theta, q, k, v, k_paddings, query_paddings=q_paddings))
self.assertEqual(encoded.shape,
(batch_size, target_length, num_heads, dim_per_head))
self.assertEqual(probs.shape,
(batch_size, num_heads, target_length, source_length))
_, probs2 = sess.run(
atten2._DotAtten(
atten2.theta, q, k, v, k_paddings, query_paddings=q_paddings))
# In order to match the full attention, we apply layer norm first.
q_ln = attention_util.KMeansClusteringForAtten.LayerNorm(q)
k_ln = attention_util.KMeansClusteringForAtten.LayerNorm(k)
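# RoutingAttention is assumed to layer-normalize q and k internally before
# the dot product, so the full-attention reference must do the same to
# produce comparable outputs.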
full_encoded_t, full_probs_t = full_atten._DotAtten(
full_atten.theta, q_ln, k_ln, v, k_paddings, None)
full_probs, full_encoded = sess.run([full_probs_t, full_encoded_t])
# Increasing p.query_group_size_factor decreases the number of left-out
# queries.
self.assertLess(np.sum(probs), np.sum(probs2))
for batch_idx in range(batch_size):
for time_idx in range(target_length):
for head_idx in range(num_heads):
sub_probs = probs[batch_idx, head_idx, time_idx, :]
sub_encoded = encoded[batch_idx, time_idx, head_idx, :]
# encoded output is either 0 or matching full attention output
# for each query position.
if np.allclose(sub_probs, np.zeros_like(sub_probs)):
self.assertAllClose(sub_encoded, np.zeros_like(sub_encoded))
continue
self.assertAllClose(sub_probs, full_probs[batch_idx, head_idx,
time_idx, :])
self.assertAllClose(sub_encoded, full_encoded[batch_idx, time_idx,
head_idx, :])
@parameterized.parameters((False, 0), (False, 1), (False, 2), (True, 0),
(True, 1), (True, 2))
def testDotAttenFull(self, fast_path, num_padded):
batch_size = 2
source_length = 5
target_length = 6
num_heads = 2
dim_per_head = 5
# fast_path=True with multiple clusters might leave out some queries.
# For the purpose of this test we only use a single cluster.
num_clusters = 1 if fast_path else 3
attention_window = source_length
q = tf.random.normal(
shape=[batch_size, target_length, num_heads, dim_per_head])
k = tf.random.normal(
shape=[batch_size, source_length, num_heads, dim_per_head])
v = tf.random.normal(
shape=[batch_size, source_length, num_heads, dim_per_head])
q_paddings = np.zeros([batch_size, target_length], dtype=np.float32)
k_paddings = np.zeros([batch_size, source_length], dtype=np.float32)
if num_padded:
# randomly pad elements.
for i in range(batch_size):
zero_index = np.random.choice(source_length, num_padded, False)
for j in zero_index:
k_paddings[i, j] = 1.
p = attention.RoutingAttention.Params().Set(
name='routing_atten',
input_dim=1,
hidden_dim=num_heads * dim_per_head,
num_heads=num_heads,
num_clusters=num_clusters,
attention_window=attention_window,
query_group_size_factor=1.0,
fast_path=fast_path)
atten = p.Instantiate()
p = attention.MultiHeadedAttention.Params().Set(
name='full_atten',
input_dim=1,
hidden_dim=num_heads * dim_per_head,
num_heads=num_heads)
full_atten = p.Instantiate()
with self.session() as sess:
tf.global_variables_initializer().run()
encoded_t, probs_t = atten._DotAtten(
atten.theta, q, k, v, k_paddings, query_paddings=q_paddings)
gradients_t = tf.gradients(encoded_t, [q, k, v])
# In order to match the full attention, we apply layer norm first.
q_ln = attention_util.KMeansClusteringForAtten.LayerNorm(q)
k_ln = attention_util.KMeansClusteringForAtten.LayerNorm(k)
full_encoded_t, full_probs_t = full_atten._DotAtten(
full_atten.theta, q_ln, k_ln, v, k_paddings, None)
full_gradients_t = tf.gradients(full_encoded_t, [q, k, v])
(encoded, probs, full_encoded, full_probs, gradients,
full_gradients) = sess.run([
encoded_t, probs_t, full_encoded_t, full_probs_t, gradients_t,
full_gradients_t
])
self.assertAllClose(probs, full_probs)
self.assertAllClose(encoded, full_encoded)
# The 3 gradients (dq, dk, dv) should also match
self.assertAllClose(gradients, full_gradients)
@parameterized.parameters(False, True)
def testDotAttenCausalMasking(self, fast_path):
batch_size = 3
seq_length = 12
num_heads = 2
dim_per_head = 4
num_clusters = 1 if fast_path else 3
attention_window = seq_length
q = np.random.rand(batch_size, seq_length, num_heads,
dim_per_head).astype(np.float32)
k = np.random.rand(batch_size, seq_length, num_heads,
dim_per_head).astype(np.float32)
v = np.random.rand(batch_size, seq_length, num_heads,
dim_per_head).astype(np.float32)
q_paddings = np.zeros([batch_size, seq_length], dtype=np.float32)
k_paddings = np.zeros([batch_size, seq_length], dtype=np.float32)
p = attention.RoutingAttention.Params().Set(
name='routing_atten',
input_dim=1,
hidden_dim=num_heads * dim_per_head,
num_heads=num_heads,
num_clusters=num_clusters,
attention_window=attention_window,
causal_masking=True,
query_group_size_factor=1.0,
fast_path=fast_path)
atten = p.Instantiate()
p = attention.MultiHeadedAttention.Params().Set(
name='full_atten',
input_dim=1,
hidden_dim=num_heads * dim_per_head,
num_heads=num_heads)
full_atten = p.Instantiate()
with self.session() as sess:
tf.global_variables_initializer().run()
encoded, probs = sess.run(
atten._DotAtten(
atten.theta, q, k, v, k_paddings, query_paddings=q_paddings))
# In order to match the full attention, we apply layer norm first.
q_ln = attention_util.KMeansClusteringForAtten.LayerNorm(q)
k_ln = attention_util.KMeansClusteringForAtten.LayerNorm(k)
# Manually apply causal padding to full attention.
per_step_padding = tf.tile(
tf.expand_dims(
attention.CausalPadding(seq_length, dtype=q_ln.dtype), 0),
[batch_size, 1, 1])
full_encoded, full_probs = full_atten._DotAtten(
full_atten.theta,
q_ln,
k_ln,
v,
k_paddings,
segment_mask=None,
per_step_padding=per_step_padding)
self.assertAllClose(probs, full_probs.eval())
self.assertAllClose(encoded, full_encoded.eval())
# Verify that the first token only attends to position 0.
first_token_probs = probs[:, :, 0, :]
expected = np.zeros_like(first_token_probs)
expected[:, :, 0] = 1.
self.assertAllClose(first_token_probs, expected)
@parameterized.parameters(False, True)
def testSelfAtten(self, fast_path):
batch_size = 4
target_length = 8
num_heads = 4
dim_per_head = 5
num_clusters = 3
attention_window = 6
q = tf.random.normal(
shape=[batch_size, target_length, num_heads, dim_per_head])
v = tf.random.normal(
shape=[batch_size, target_length, num_heads, dim_per_head])
q_copy = tf.identity(q)
paddings = np.zeros([batch_size, target_length], dtype=np.float32)
p = attention.RoutingAttention.Params().Set(
name='routing_atten',
input_dim=1,
hidden_dim=num_heads * dim_per_head,
num_heads=num_heads,
num_clusters=num_clusters,
attention_window=attention_window,
query_group_size_factor=1.0,
fast_path=fast_path)
atten = p.Instantiate()
with self.session() as sess, self.SetEval(True):
tf.global_variables_initializer().run()
# self attention path
encoded_self_t, probs_self_t = atten._DotAtten(
atten.theta, q, q, v, paddings, query_paddings=paddings)
# computed as cross attention
encoded_t, probs_t = atten._DotAtten(
atten.theta, q, q_copy, v, paddings, query_paddings=paddings)
encoded, probs, encoded_self, probs_self = sess.run(
[encoded_t, probs_t, encoded_self_t, probs_self_t])
self.assertAllClose(probs, probs_self)
self.assertAllClose(encoded, encoded_self)
def testExtendStep(self):
batch_size = 8
target_length = 10
num_heads = 4
dim_per_head = 5
num_clusters = 6
attention_window = target_length
input_dim = 7
q = np.random.rand(batch_size, target_length, input_dim).astype(np.float32)
paddings = np.zeros([batch_size, target_length], dtype=np.float32)
p = attention.RoutingAttention.Params().Set(
name='routing_atten',
input_dim=input_dim,
hidden_dim=num_heads * dim_per_head,
num_heads=num_heads,
num_clusters=num_clusters,
attention_window=attention_window,
causal_masking=True,
fast_path=False)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
atten = p.Instantiate()
# We ensure that the encoded attention result is the same between FProp()
# and sequential calls to ExtendStep().
with self.session() as sess:
# self attention path via ExtendStep
encoded_all = []
states = atten.InitStates(atten.theta, batch_size, target_length)
self.assertEqual(states.key.shape,
(target_length, batch_size, num_heads, dim_per_head))
self.assertEqual(states.value.shape,
(target_length, batch_size, num_heads, dim_per_head))
self.assertEqual(states.key_dists.shape,
(target_length, batch_size, num_heads, num_clusters))
for i in range(target_length):
encoded, states = atten.ExtendStep(atten.theta, q[:, i:i + 1, :],
states, paddings, i)
self.assertEqual(encoded.shape, (batch_size, 1, input_dim))
encoded_all.append(encoded)
encoded_extend_t = tf.concat(encoded_all, axis=1)
# self attention path via FProp
encoded_fprop_t, _ = atten.FProp(atten.theta, q, q, q, paddings)
self.assertEqual(encoded_fprop_t.shape,
(batch_size, target_length, input_dim))
tf.global_variables_initializer().run()
encoded_extend, encoded_fprop = sess.run(
[encoded_extend_t, encoded_fprop_t])
self.assertAllClose(encoded_extend, encoded_fprop)
class TransformerAttentionLayerTest(test_utils.TestCase,
parameterized.TestCase):
"""Tests for TransformerAttentionLayer."""
@parameterized.named_parameters(
('Basic',),
('BasicR1', False, 1, None, 1),
('BasicS4', False, 4, 4),
('BasicS4L8', False, 4, 8),
('SkipNorm', True),
('SkipNormS2', True, 2, 2),
('SkipNormS2L3', True, 2, 3),
('SkipNormS4R2', True, 4, None, 2),
)
def testStreamStep(self,
testonly_skip_norm_layers=False,
stride=1,
inference_step_max_length=1,
right_context=0):
with flagsaver.flagsaver(
testonly_skip_norm_layers=testonly_skip_norm_layers):
self._TestStreamStepHelper(stride, inference_step_max_length,
right_context)
def _TestStreamStepHelper(self, stride, inference_step_max_length,
right_context):
batch_size, max_seqlen, input_dim = 2, 32, 4
num_heads = 2
left_context = 3
# Prepares inputs.
np.random.seed(None)
inputs = np.random.normal(
0.5, 1, [batch_size, max_seqlen, input_dim]).astype(np.float32)
print(f'np.sum(inputs): {np.sum(inputs)}')
inputs = tf.convert_to_tensor(inputs)
seqlen = np.random.randint(
low=max_seqlen // 2,
high=max_seqlen + 1,
size=(batch_size,),
dtype=np.int32)
print(f'seqlen: {repr(seqlen)}')
seqlen = tf.convert_to_tensor(seqlen)
paddings = py_utils.PaddingsFromLengths(seqlen, max_seqlen)
# Builds graph.
p = attention.TransformerAttentionLayer.CommonParams(
input_dim=input_dim,
num_heads=num_heads,
is_masked=True,
left_context=left_context,
right_context=right_context)
p.name = 'transformer_atten'
p.atten_tpl.inference_step_max_length = inference_step_max_length
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
init_op = tf.global_variables_initializer()
base_outputs, _ = l.FProp(l.theta, inputs, None, paddings)
base_outputs *= tf.reshape(1. - paddings, [batch_size, max_seqlen, 1])
state = l.zero_state(batch_size)
outputs = []
assert max_seqlen % stride == 0
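# Run extra steps to flush the right-context lookahead: zero/padded inputs
# are fed after the sequence ends, and the first `right_context` output
# frames are dropped below before comparing against FProp.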
for i in range(max_seqlen // stride +
int(math.ceil(right_context / stride))):
if i < max_seqlen // stride:
step_inputs = inputs[:, stride * i:stride * (i + 1)]
step_paddings = paddings[:, stride * i:stride * (i + 1)]
else:
step_inputs = tf.zeros_like(inputs[:, 0:stride])
step_paddings = tf.ones_like(paddings[:, 0:stride])
output, _, state = l.StreamStep(l.theta, step_inputs, step_paddings,
state)
outputs.append(output)
outputs = tf.concat(outputs, axis=1)
outputs = outputs[:, right_context:][:, :max_seqlen]
outputs *= tf.reshape(1. - paddings, [batch_size, max_seqlen, 1])
with self.session(use_gpu=False) as sess:
sess.run(init_op)
expected, actual = sess.run([base_outputs, outputs])
print(repr(expected))
print(repr(actual))
print(f'np.sum(np.abs(expected)): {np.sum(np.abs(expected))}')
print(f'np.sum(np.abs(actual)): {np.sum(np.abs(actual))}')
self.assertAllClose(expected, actual)
self.assertEqual(
tuple(expected.shape), (batch_size, max_seqlen, input_dim))
def testStreamStepDropout(self):
batch_size, input_dim, num_heads, stride, left_context = 2, 4, 2, 8, 3
# Prepares inputs.
np.random.seed(None)
inputs = np.random.normal(0.5, 1, [batch_size, stride, input_dim]).astype(
np.float32)
print(f'np.sum(inputs): {np.sum(inputs)}')
inputs = tf.convert_to_tensor(inputs)
seqlen = np.random.randint(
low=4, high=stride + 1, size=(batch_size,), dtype=np.int32)
seqlen = tf.convert_to_tensor(seqlen)
paddings = py_utils.PaddingsFromLengths(seqlen, stride)
# Builds graph.
p = attention.TransformerAttentionLayer.CommonParams(
input_dim=input_dim,
num_heads=num_heads,
is_masked=True,
left_context=left_context,
right_context=0,
dropout_prob=0.5)
p.name = 'transformer_atten'
p.atten_tpl.inference_step_max_length = stride
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
output, _, _ = l.StreamStep(l.theta, inputs, paddings,
l.zero_state(batch_size))
output *= tf.reshape(1. - paddings, [batch_size, stride, 1])
init_op = tf.global_variables_initializer()
with self.session(use_gpu=False) as sess:
sess.run(init_op)
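# Evaluate the same op twice; dropout should make the two results differ.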
res = []
for _ in range(2):
out = sess.run([output])
res.append(out)
self.assertNotAllClose(res[0], res[1])
class FunnelTransformerAttentionLayerTest(test_utils.TestCase,
parameterized.TestCase):
"""Tests for FunnelTransformerAttentionLayer."""
# MultiHeadedAttention and LocalSelfAttention must return the same values.
def testBasic(self):
batch_size, max_seqlen, input_dim = 2, 31, 4
query_stride = 2
num_heads = 2
out_seqlen = (max_seqlen + query_stride - 1) // query_stride
# Prepares inputs.
np.random.seed(None)
inputs = np.random.normal(
0.5, 1, [batch_size, max_seqlen, input_dim]).astype(np.float32)
print(f'np.sum(inputs): {np.sum(inputs)}')
inputs = tf.convert_to_tensor(inputs)
seqlen = np.random.randint(
low=max_seqlen // 2,
high=max_seqlen + 1,
size=(batch_size,),
dtype=np.int32)
print(f'seqlen: {repr(seqlen)}')
seqlen = tf.convert_to_tensor(seqlen)
paddings = py_utils.PaddingsFromLengths(seqlen, max_seqlen)
# Builds graph.
left_context = None
base_p = attention.FunnelTransformerAttentionLayer.CommonParams(
input_dim=input_dim,
num_heads=num_heads,
is_masked=True,
left_context=left_context,
right_context=0,
query_stride=query_stride)
base_p.name = 'base_funnel'
left_context = max_seqlen + 1
local_p = attention.FunnelTransformerAttentionLayer.CommonParams(
input_dim=input_dim,
num_heads=num_heads,
is_masked=True,
left_context=left_context,
right_context=0,
query_stride=query_stride)
local_p.name = 'local_funnel'
base_l = base_p.Instantiate()
local_l = local_p.Instantiate()
def _CopyVariables(a_vars, b_vars):
for a, b in zip(a_vars.Flatten(), b_vars.Flatten()):
tf.assign(a, b).eval()
with self.session(use_gpu=False) as sess:
sess.run(tf.global_variables_initializer())
_CopyVariables(local_l.vars, base_l.vars)
base_outputs, base_paddings, _ = base_l.FProp(base_l.theta, inputs, None,
paddings)
base_outputs *= tf.reshape(1. - base_paddings,
[batch_size, out_seqlen, 1])
local_outputs, local_paddings, _ = local_l.FProp(local_l.theta, inputs,
None, paddings)
local_outputs *= tf.reshape(1. - local_paddings,
[batch_size, out_seqlen, 1])
expected, actual = sess.run([base_outputs, local_outputs])
print(repr(expected))
print(repr(actual))
print(f'np.sum(np.abs(expected)): {np.sum(np.abs(expected))}')
print(f'np.sum(np.abs(actual)): {np.sum(np.abs(actual))}')
self.assertAllClose(expected, actual)
self.assertEqual(
tuple(expected.shape), (batch_size, out_seqlen, input_dim))
@parameterized.named_parameters(
('Basic',),
('BasicR2', 2, 2, 2),
('BasicS4', 4, 2),
)
def testStreamStep(self, stride=2, query_stride=2, right_context=0):
batch_size, max_seqlen, input_dim = 2, 32, 4
num_heads = 2
left_context = 3
out_seqlen = max_seqlen // query_stride
query_right_context = right_context // query_stride
# Prepares inputs.
np.random.seed(None)
inputs = np.random.normal(
0.5, 1, [batch_size, max_seqlen, input_dim]).astype(np.float32)
print(f'np.sum(inputs): {np.sum(inputs)}')
inputs = tf.convert_to_tensor(inputs)
seqlen = np.random.randint(
low=max_seqlen // 2,
high=max_seqlen + 1,
size=(batch_size,),
dtype=np.int32)
print(f'seqlen: {repr(seqlen)}')
seqlen = tf.convert_to_tensor(seqlen)
paddings = py_utils.PaddingsFromLengths(seqlen, max_seqlen)
# Builds graph.
p = attention.FunnelTransformerAttentionLayer.CommonParams(
input_dim=input_dim,
num_heads=num_heads,
is_masked=True,
left_context=left_context,
right_context=right_context,
query_stride=query_stride)
p.name = 'transformer_atten'
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
init_op = tf.global_variables_initializer()
base_outputs, out_paddings, _ = l.FProp(l.theta, inputs, None, paddings)
base_outputs *= tf.reshape(1. - out_paddings, [batch_size, out_seqlen, 1])
state = l.zero_state(batch_size)
outputs = []
out_paddings = []
assert max_seqlen % stride == 0
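# Feed extra padded steps to flush the right-context lookahead; in the
# pooled output domain the trim offset is
# query_right_context = right_context // query_stride.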
for i in range(max_seqlen // stride +
int(math.ceil(right_context / stride))):
if i < max_seqlen // stride:
step_inputs = inputs[:, stride * i:stride * (i + 1)]
step_paddings = paddings[:, stride * i:stride * (i + 1)]
else:
step_inputs = tf.zeros_like(inputs[:, 0:stride])
step_paddings = tf.ones_like(paddings[:, 0:stride])
output, out_padding, state = l.StreamStep(l.theta, step_inputs,
step_paddings, state)
outputs.append(output)
out_paddings.append(out_padding)
outputs = tf.concat(outputs, axis=1)
outputs = outputs[:, query_right_context:][:, :out_seqlen]
out_paddings = tf.concat(out_paddings, axis=1)
out_paddings = out_paddings[:, query_right_context:][:, :out_seqlen]
outputs *= tf.reshape(1. - out_paddings, [batch_size, out_seqlen, 1])
with self.session(use_gpu=False) as sess:
sess.run(init_op)
expected, actual = sess.run([base_outputs, outputs])
print(repr(expected))
print(repr(actual))
print(f'np.sum(np.abs(expected)): {np.sum(np.abs(expected))}')
print(f'np.sum(np.abs(actual)): {np.sum(np.abs(actual))}')
self.assertAllClose(expected, actual)
self.assertEqual(
tuple(expected.shape), (batch_size, out_seqlen, input_dim))
class TransformerLayerTest(test_utils.TestCase, parameterized.TestCase):
"""Test Transformer decoder layers."""
def _TransformerAttentionLayerInputs(self, input_dim=4, dtype=tf.float32):
np.random.seed(6348575)
query_vec = tf.transpose(
tf.stack([
tf.constant(np.random.rand(2, input_dim), dtype=dtype)
for _ in range(5)
]), [1, 0, 2])
paddings = tf.constant([[0, 0, 1, 1, 0], [1, 0, 0, 0, 1]], dtype=dtype)
aux_vec = tf.transpose(
tf.stack([
tf.constant(np.random.rand(2, input_dim), dtype=dtype)
for _ in range(7)
]), [1, 0, 2])
aux_paddings = tf.constant([[0, 1, 0, 1, 0, 1, 0], [1, 0, 1, 0, 1, 0, 1]],
dtype=dtype)
return query_vec, paddings, aux_vec, aux_paddings
def testTransformerAttentionLayerFPropMaskedSelfAttention(self):
with self.session(use_gpu=True) as sess:
query_vec, paddings, _, _ = self._TransformerAttentionLayerInputs()
p = attention.TransformerAttentionLayer.Params().Set(
name='transformer_masked_self_atten',
input_dim=4,
is_masked=True,
num_heads=2)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
ctx_vec, _ = l.FProp(l.theta, query_vec, None, paddings)
tf.global_variables_initializer().run()
actual_ctx = sess.run(ctx_vec)
actual_ctx = np.reshape(actual_ctx, (10, 4))
tf.logging.info(np.array_repr(actual_ctx))
expected_ctx = [7.777687, 5.219166, 6.305151, 4.817311]
self.assertAllClose(expected_ctx, np.sum(actual_ctx, axis=0))
def testTransformerAttentionLayerMaskedSelfAttentionMixHeads(self):
p = attention.TransformerAttentionLayer.Params().Set(
name='transformer_masked_self_atten',
input_dim=16,
is_masked=True,
num_heads=[4, 4])
p.atten_tpl = [
attention.LocalSelfAttention.Params().Set(
left_context=2, right_context=2, block_size=4),
attention.RoutingAttention.Params().Set(
num_clusters=1, attention_window=2)
]
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
self.assertIsInstance(l.atten[0], attention.LocalSelfAttention)
self.assertIsInstance(l.atten[1], attention.RoutingAttention)
def testTransformerAttentionLayerFPropMultiHeadedAttentionMixHeads(self):
with self.session(use_gpu=True) as sess:
query_vec, paddings, _, _ = self._TransformerAttentionLayerInputs()
p = attention.TransformerAttentionLayer.Params().Set(
name='transformer_masked_self_atten_mix',
input_dim=4,
is_masked=True,
num_heads=[2])
p.atten_tpl = [attention.MultiHeadedAttention.Params().Set()]
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
ctx_vec, _ = l.FProp(l.theta, query_vec, None, paddings)
p2 = attention.TransformerAttentionLayer.Params().Set(
name='transformer_masked_self_atten',
input_dim=4,
is_masked=True,
num_heads=2)
p2.atten_tpl = attention.MultiHeadedAttention.Params().Set()
p2.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l2 = p2.Instantiate()
ctx_vec2, _ = l2.FProp(l2.theta, query_vec, None, paddings)
tf.global_variables_initializer().run()
actual_ctx = sess.run(ctx_vec)
actual_ctx2 = sess.run(ctx_vec2)
self.assertAllClose(actual_ctx, actual_ctx2)
def testTransformerAttentionLayerFPropMaskedSelfAttentionMixHeads(self):
with self.session(use_gpu=True) as sess:
query_vec, paddings, _, _ = self._TransformerAttentionLayerInputs()
p = attention.TransformerAttentionLayer.Params().Set(
name='transformer_masked_self_atten',
input_dim=4,
hidden_dim=8,
is_masked=True,
num_heads=[2, 3])
p.atten_tpl = [
attention.MultiHeadedAttention.Params().Set(),
attention.MultiHeadedAttention.Params().Set()
]
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
ctx_vec, _ = l.FProp(l.theta, query_vec, None, paddings)
tf.global_variables_initializer().run()
actual_ctx = sess.run(ctx_vec)
actual_ctx = np.reshape(actual_ctx, (10, 4))
tf.logging.info(np.array_repr(actual_ctx))
expected_ctx = [12.3041725, 5.4454093, 1.684509, 10.300517]
self.assertAllClose(expected_ctx, np.sum(actual_ctx, axis=0))
def testAttentionLayerFPropMaskedSelfAttentionPaddingOverride(self):
with self.session(use_gpu=True) as sess:
query_vec, paddings, _, _ = self._TransformerAttentionLayerInputs()
p = attention.TransformerAttentionLayer.Params().Set(
name='transformer_masked_self_atten',
input_dim=4,
is_masked=True,
num_heads=2)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
triangle_padding = 1.0 - tf.linalg.band_part(
tf.ones([5, 5], dtype=query_vec.dtype), -1, 0)
per_step_padding_override = tf.tile(
tf.expand_dims(triangle_padding, 0), [2, 1, 1])
ctx_vec1, _ = l.FProp(l.theta, query_vec, None, paddings,
per_step_padding_override)
expected_ctx1, _ = l.FProp(l.theta, query_vec, None, paddings)
per_step_padding_override = tf.zeros([2, 5, 5])
ctx_vec2, _ = l.FProp(l.theta, query_vec, None, paddings,
per_step_padding_override)
tf.global_variables_initializer().run()
actual_ctx1, actual_ctx2, actual_expected_ctx1 = sess.run(
[ctx_vec1, ctx_vec2, expected_ctx1])
tf.logging.info(np.array_repr(actual_ctx1))
tf.logging.info(np.array_repr(actual_ctx2))
expected_ctx2 = [7.9491496, 5.2976646, 6.5383415, 5.0169916]
self.assertAllClose(actual_expected_ctx1, actual_ctx1)
self.assertAllClose(expected_ctx2,
np.sum(np.reshape(actual_ctx2, (10, 4)), axis=0))
def testTransformerAttentionLayerFPropCrossAttention(self):
with self.session(use_gpu=True) as sess:
(query_vec, _, aux_vec,
aux_paddings) = self._TransformerAttentionLayerInputs()
p = attention.TransformerAttentionLayer.Params().Set(
name='transformer_cross_atten',
input_dim=4,
is_masked=False,
num_heads=2)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
ctx_vec, _ = l.FProp(l.theta, query_vec, aux_vec, aux_paddings)
tf.global_variables_initializer().run()
actual_ctx = sess.run(ctx_vec)
actual_ctx = np.reshape(actual_ctx, (10, 4))
expected_ctx = [19.345360, 15.057412, 13.744134, 13.387347]
self.assertAllClose(expected_ctx, np.sum(actual_ctx, axis=0))
def testTransformerAttentionLayerFPropCrossAttentionPaddingOverride(self):
# We use self-attention to verify cross-attention padding works correctly.
with self.session(use_gpu=True) as sess:
query_vec, _, _, _ = self._TransformerAttentionLayerInputs()
paddings = tf.convert_to_tensor([[0, 0, 0, 0, 1], [0, 0, 0, 1, 1]],
dtype=tf.float32)
# Setup LocalSelfAttention.
self_atten_tpl = attention.LocalSelfAttention.Params().Set(
left_context=2, right_context=1)
p1 = attention.TransformerAttentionLayer.Params().Set(
name='transformer_self_atten',
input_dim=4,
is_masked=False,
num_heads=2,
atten_tpl=self_atten_tpl)
p1.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l1 = p1.Instantiate()
# Setup MultiHeadedAttention.
source_atten_tpl = attention.MultiHeadedAttention.Params()
p2 = attention.TransformerAttentionLayer.Params().Set(
name='transformer_cross_atten',
input_dim=4,
is_masked=False,
num_heads=2,
atten_tpl=source_atten_tpl)
l2 = p2.Instantiate()
# LocalSelfAttention FProp
self_ctx_vec, _ = l1.FProp(l1.theta, query_vec, query_vec, paddings)
# timestamp contains the valid indices into the source.
timestamp = tf.convert_to_tensor([[0, 1, 2, 3, 4], [0, 1, 2, 3, 0]],
dtype=tf.int32)
per_step_padding = attention.CrossAttentionPaddingWithTimestamp(
timestamp, paddings, left_context=2, right_context=1)
# MultiHeadedAttention FProp with same theta and per_step_padding.
cross_ctx_vec, _ = l2.FProp(
l1.theta,
query_vec,
query_vec,
paddings,
per_step_padding_override=per_step_padding)
tf.global_variables_initializer().run()
act_self_ctx, act_cross_ctx = sess.run([self_ctx_vec, cross_ctx_vec])
# They can only differ in padded output positions.
self.assertAllClose(act_self_ctx[0, :4, :], act_cross_ctx[0, :4, :])
self.assertAllClose(act_self_ctx[1, :3, :], act_cross_ctx[1, :3, :])
def testTransformerAttentionLayerFPropCrossAttentionInputDimAsDict(self):
with self.session(use_gpu=True) as sess:
(query_vec, _, aux_vec,
aux_paddings) = self._TransformerAttentionLayerInputs()
p = attention.TransformerAttentionLayer.Params().Set(
name='transformer_cross_atten',
input_dim={
'query': 4,
'key': 4,
'value': 4
},
is_masked=False,
num_heads=2)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
ctx_vec, _ = l.FProp(l.theta, query_vec, aux_vec, aux_paddings)
tf.global_variables_initializer().run()
actual_ctx = sess.run(ctx_vec)
actual_ctx = np.reshape(actual_ctx, (10, 4))
expected_ctx = [19.345360, 15.057412, 13.744134, 13.387347]
self.assertAllClose(expected_ctx, np.sum(actual_ctx, axis=0))
def testMultiSourceTransformerAttentionLayerFPropCrossAttention(self):
with self.session(use_gpu=True) as sess:
(query_vec, _, aux_vec,
aux_paddings) = self._TransformerAttentionLayerInputs()
p = attention.TransformerMultiSourceAttentionLayer.Params().Set(
name='transformer_multi_source_cross_atten',
input_dim=4,
is_masked=False,
num_heads=2,
num_source=2)
p.multi_source_atten.atten_merger_tpl = (
tm_attention.MergerLayer.Params().Set(merger_op='sum'))
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
ctx_vec, _ = l.FProp(
l.theta, query_vec,
py_utils.NestedMap({
'source_0': aux_vec,
'source_1': aux_vec
}),
py_utils.NestedMap({
'source_0': aux_paddings,
'source_1': aux_paddings
}))
tf.global_variables_initializer().run()
actual_ctx = sess.run(ctx_vec)
actual_ctx = np.reshape(actual_ctx, (10, 4))
tf.logging.info(np.array_repr(actual_ctx))
expected_ctx = [32.4878, 25.145725, 21.534966, 22.007454]
self.assertAllClose(expected_ctx, np.sum(actual_ctx, axis=0))
@parameterized.named_parameters(
{
'testcase_name': '_short_seq',
'use_short_seq_opt': True,
}, {
'testcase_name': '_long_seq',
'use_short_seq_opt': False,
})
def testTransformerAttentionLayerExtendStep(self, use_short_seq_opt):
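"""Checks that per-step ExtendStep matches FProp over the whole sequence."""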
with self.session(use_gpu=True) as sess:
query_vec, _, _, _ = self._TransformerAttentionLayerInputs()
paddings = tf.zeros([2, 5])
cached_key = tf.constant(
np.random.normal(0.1, 0.5, [5, 2, 2, 2]), dtype=tf.float32)
cached_value = tf.constant(
np.random.normal(0.1, 0.5, [5, 2, 2, 2]), dtype=tf.float32)
prefix_states = py_utils.NestedMap(key=cached_key, value=cached_value)
p = attention.TransformerAttentionLayer.Params().Set(
name='transformer_atten', input_dim=4, is_masked=True, num_heads=2)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
ctx_vec1, _ = l.FProp(l.theta, query_vec, None, paddings)
ctx_vec2 = []
for i in range(5):
ctx_vec, prefix_states = l.ExtendStep(
l.theta, tf.expand_dims(query_vec[:, i, :], 1), prefix_states, i,
use_short_seq_opt)
ctx_vec2.append(tf.squeeze(ctx_vec, 1))
ctx_vec2 = tf.transpose(tf.stack(ctx_vec2), [1, 0, 2])
tf.global_variables_initializer().run()
ctx1, ctx2 = sess.run([ctx_vec1, ctx_vec2])
self.assertAllClose(ctx1, ctx2)
@parameterized.named_parameters(
{
'testcase_name': '_short_seq',
'use_short_seq_opt': True,
}, {
'testcase_name': '_long_seq',
'use_short_seq_opt': False,
})
def testTransformerAttentionLayerExtendStepMixHeads(self, use_short_seq_opt):
with self.session(use_gpu=True) as sess:
query_vec, _, _, _ = self._TransformerAttentionLayerInputs()
paddings = tf.zeros([2, 5])
cached_key = tf.constant(
np.random.normal(0.1, 0.5, [5, 2, 1, 2]), dtype=tf.float32)
cached_value = tf.constant(
np.random.normal(0.1, 0.5, [5, 2, 1, 2]), dtype=tf.float32)
prefix_states = py_utils.NestedMap(key=cached_key, value=cached_value)
prefix_states = py_utils.NestedMap(atten=[prefix_states, prefix_states])
p = attention.TransformerAttentionLayer.Params().Set(
name='transformer_atten', input_dim=4, is_masked=True)
p.atten_tpl = [
attention.MultiHeadedAttention.Params().Set(),
attention.MultiHeadedAttention.Params().Set()
]
p.num_heads = [1, 1]
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
ctx_vec1, _ = l.FProp(l.theta, query_vec, None, paddings)
ctx_vec2 = []
for i in range(5):
ctx_vec, prefix_states = l.ExtendStep(
l.theta, tf.expand_dims(query_vec[:, i, :], 1), prefix_states, i,
use_short_seq_opt)
ctx_vec2.append(tf.squeeze(ctx_vec, 1))
ctx_vec2 = tf.transpose(tf.stack(ctx_vec2), [1, 0, 2])
tf.global_variables_initializer().run()
ctx1, ctx2 = sess.run([ctx_vec1, ctx_vec2])
self.assertAllClose(ctx1, ctx2)
def testTransformerAttentionLayerNoLayernorm(self):
"""Verify if Transformer attention allows no layernorm in FProp and Extend."""
with self.session(use_gpu=True) as sess:
query_vec, _, _, _ = self._TransformerAttentionLayerInputs()
paddings = tf.zeros([2, 5])
cached_key = tf.constant(
np.random.normal(0.1, 0.5, [5, 2, 2, 2]), dtype=tf.float32)
cached_value = tf.constant(
np.random.normal(0.1, 0.5, [5, 2, 2, 2]), dtype=tf.float32)
prefix_states = py_utils.NestedMap(key=cached_key, value=cached_value)
p = attention.TransformerAttentionLayer.Params().Set(
name='transformer_atten',
input_dim=4,
is_masked=True,
num_heads=2,
ln_tpl=None)  # Disable layer normalization.
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
ctx_vec1, _ = l.FProp(l.theta, query_vec, None, paddings)
ctx_vec2 = []
for i in range(5):
ctx_vec, prefix_states = l.ExtendStep(
l.theta, tf.expand_dims(query_vec[:, i, :], 1), prefix_states, i,
False)
ctx_vec2.append(tf.squeeze(ctx_vec, 1))
ctx_vec2 = tf.transpose(tf.stack(ctx_vec2), [1, 0, 2])
tf.global_variables_initializer().run()
ctx1, ctx2 = sess.run([ctx_vec1, ctx_vec2])
self.assertAllClose(ctx1, ctx2)
def _ConstructTransformerDecoderLayer(self, use_relative_atten=False):
p = attention.TransformerDecoderLayer.Params()
p.name = 'transformer_decoder_layer'
p.input_dim = 4
p.tr_fflayer_tpl.hidden_dim = 7
p.tr_atten_tpl.num_heads = 2
if use_relative_atten:
p = attention.UseRelativeAttentionInTransformerLayer(p, 4)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
return attention.TransformerDecoderLayer(p)
def _ConstructTransformerDecoderLayerMixHeads(self, use_relative_atten=False):
p = attention.TransformerDecoderLayer.Params()
p.name = 'transformer_decoder_layer'
p.input_dim = 4
p.tr_fflayer_tpl.hidden_dim = 7
p.tr_atten_tpl.num_heads = [1, 1]
p.tr_atten_tpl.atten_tpl = [
attention.MultiHeadedAttention.Params().Set(),
attention.MultiHeadedAttention.Params().Set()
]
if use_relative_atten:
p = attention.UseRelativeAttentionInTransformerLayer(p, 4)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
return attention.TransformerDecoderLayer(p)
def testTransformerLayerCommonParams(self):
with self.session(use_gpu=True) as sess:
input_dim, fflayer_hidden_dim, num_heads = 4, 7, 2
(query_vec, _, aux_vec,
aux_paddings) = self._TransformerAttentionLayerInputs(
input_dim=input_dim)
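# Tiling by [1, 1, 1] is a no-op; query_vec is unchanged.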
query_vec = tf.tile(query_vec, [1, 1, 1])
paddings = tf.zeros([2, 5])
p = attention.TransformerLayer.CommonParams(
input_dim=input_dim,
atten_num_heads=num_heads,
fflayer_hidden_dim=fflayer_hidden_dim)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
ctx_vec, _ = l.FProp(l.theta, query_vec, paddings, aux_vec, aux_paddings)
tf.global_variables_initializer().run()
actual_ctx = sess.run(ctx_vec)
actual_ctx = np.reshape(actual_ctx, (10, 4))
tf.logging.info(np.array_repr(actual_ctx))
expected_ctx = [
4.7839108, 4.5303655, 5.5551023, 5.0657663, 5.0493064, 3.2142467,
2.820018, 5.659971, 4.3814187, 2.60475
]
self.assertAllClose(expected_ctx, np.sum(actual_ctx, axis=1))
@parameterized.named_parameters(
('F32FPropF32Input', tf.float32, tf.float32),
('F32FPropBF16Input', tf.float32, tf.bfloat16),
('BF16FPropF32Input', tf.bfloat16, tf.float32),
('BF16FPropBF16Input', tf.bfloat16, tf.bfloat16),
('BF16AddNormalizedInput', tf.bfloat16, tf.bfloat16, False),
)
def testTransformerLayerFPropDtypes(self,
fprop_dtype,
input_dtype,
add_unnormalized_input=True):
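"""Checks that FProp and ExtendStep agree for each fprop/input dtype combo."""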
with self.session(use_gpu=True) as sess:
(query_vec, _, aux_vec,
aux_paddings) = self._TransformerAttentionLayerInputs(dtype=input_dtype)
paddings = tf.zeros([2, 5])
p = attention.TransformerDecoderLayer.Params()
p.name = 'transformer_layer'
p.input_dim = 4
p.tr_fflayer_tpl.hidden_dim = 7
p.tr_atten_tpl.num_heads = 2
p.tr_atten_tpl.add_unnormalized_input = add_unnormalized_input
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
p.random_seed = 1234
p.cls.SetFPropDtype(p, fprop_dtype)
# Check that fprop_dtype was set accordingly.
self.assertEqual(fprop_dtype, p.fprop_dtype)
l = p.Instantiate()
tf.global_variables_initializer().run()
ctx_vec, _ = l.FProp(l.theta, query_vec, paddings, aux_vec, aux_paddings)
tgt_batch, tgt_len = py_utils.GetShape(paddings)
with tf.name_scope('init_states'):
prefix_states = l.InitStates(l.theta, tgt_batch, tgt_len)
extend_step_outputs = []
for i in range(tgt_len):
with tf.name_scope(f'extend_step_{i}'):
layer_output, _, prefix_states = l.ExtendStep(
l.theta, tf.expand_dims(query_vec[:, i, :], 1), aux_vec,
aux_paddings, prefix_states, i)
extend_step_outputs.append(layer_output)
extend_step_outputs = tf.concat(extend_step_outputs, axis=1)
ctx_sum, step_sum = sess.run(
[tf.reduce_sum(ctx_vec),
tf.reduce_sum(extend_step_outputs)])
self.assertAllClose(ctx_sum, step_sum)
@parameterized.named_parameters(('SingleBatch', 1), ('DoubleBatch', 2))
def testTransformerLayerFPropWithCrossAttentionInputDimAsDict(
self, multiplier):
with self.session(use_gpu=True) as sess:
(query_vec, _, _, _) = self._TransformerAttentionLayerInputs()
(_, _, aux_vec,
aux_paddings) = self._TransformerAttentionLayerInputs(input_dim=2)
query_vec = tf.tile(query_vec, [multiplier, 1, 1])
paddings = tf.zeros([2 * multiplier, 5])
p = attention.TransformerLayer.Params()
p.name = 'transformer_layer'
p.input_dim = 4
p.has_aux_atten = True
p.aux_atten_input_dim = {'query': 4, 'key': 2, 'value': 2}
p.tr_fflayer_tpl.hidden_dim = 7
p.tr_atten_tpl.num_heads = 2
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
ctx_vec, _ = l.FProp(l.theta, query_vec, paddings, aux_vec, aux_paddings)
tf.global_variables_initializer().run()
actual_ctx = sess.run(ctx_vec)
actual_ctx = np.reshape(actual_ctx, (10 * multiplier, 4))
tf.logging.info(np.array_repr(actual_ctx))
expected_ctx = [
7.3633065, 8.883232, 5.772561, 8.73429, 9.295169, 8.068511, 7.8807983,
6.7816095, 9.321457, 7.6491246
] * multiplier
self.assertAllClose(expected_ctx, np.sum(actual_ctx, axis=1))
@parameterized.named_parameters(('SingleBatch', 1), ('DoubleBatch', 2))
def testTransformerLayerFPropWithCrossAttention(self, multiplier):
with self.session(use_gpu=True) as sess:
(query_vec, _, aux_vec,
aux_paddings) = self._TransformerAttentionLayerInputs()
query_vec = tf.tile(query_vec, [multiplier, 1, 1])
paddings = tf.zeros([2 * multiplier, 5])
p = attention.TransformerLayer.Params()
p.name = 'transformer_layer'
p.input_dim = 4
p.tr_fflayer_tpl.hidden_dim = 7
p.tr_atten_tpl.num_heads = 2
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
ctx_vec, _ = l.FProp(l.theta, query_vec, paddings, aux_vec, aux_paddings)
tf.global_variables_initializer().run()
actual_ctx = sess.run(ctx_vec)
actual_ctx = np.reshape(actual_ctx, (10 * multiplier, 4))
tf.logging.info(np.array_repr(actual_ctx))
expected_ctx = [
4.7839108, 4.5303655, 5.5551023, 5.065767, 5.0493064, 3.2142467,
2.8200178, 5.659971, 4.3814187, 2.60475
] * multiplier
self.assertAllClose(expected_ctx, np.sum(actual_ctx, axis=1))
def testTransformerLayerDecodeWithCrossAttention(self):
np.random.seed(6348575)
dtype = tf.float32
b_size = 2
input_dim = 4
src_seq_len = 4
tgt_seq_len = 3
query_vec = np.random.rand(b_size, tgt_seq_len, input_dim)
paddings = tf.constant([[0, 0, 0], [0, 0, 0]], dtype=dtype)
aux_vec = np.random.rand(b_size, src_seq_len, input_dim)
aux_paddings = tf.constant([[0, 1, 0, 1], [1, 0, 1, 0]], dtype=dtype)
segment_mask = tf.constant(
[[0, -1e30, -1e30], [-1e30, 0, -1e30], [0, -1e30, 0]], dtype=dtype)
segment_mask = tf.tile(segment_mask[tf.newaxis, tf.newaxis, :, :],
[b_size, 1, 1, 1])
aux_segment_mask = tf.zeros([b_size, 1, tgt_seq_len, src_seq_len])
with self.session(use_gpu=True) as sess:
p = attention.TransformerLayer.Params()
p.name = 'transformer_layer'
p.input_dim = 4
p.tr_fflayer_tpl.hidden_dim = 7
p.tr_atten_tpl.num_heads = 2
p.mask_self_atten = True
p.packed_input = True
p.has_aux_atten = True
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
ctx_vec, _ = l.FProp(
l.theta,
query_vec,
paddings,
aux_vec,
aux_paddings,
segment_mask=segment_mask,
aux_segment_mask=aux_segment_mask)
cached_states = l.InitStates(l.theta, b_size, tgt_seq_len)
extend_step_outs = []
for t in range(tgt_seq_len):
out_t, _, cached_states = l.ExtendStep(
l.theta,
query_vec[:, t:t + 1, :],
aux_vec,
aux_paddings,
cached_states,
t,
segment_mask=segment_mask[:, :, t, :],
aux_segment_mask=aux_segment_mask[:, :, t, :])
extend_step_outs.append(out_t[:, 0, :])
decoder_out = tf.stack(extend_step_outs, axis=1)
tf.global_variables_initializer().run()
fprop_out_v, decoder_out_v = sess.run([ctx_vec, decoder_out])
tf.logging.info(np.array_repr(fprop_out_v))
tf.logging.info(np.array_repr(decoder_out_v))
self.assertAllClose(fprop_out_v, decoder_out_v)
def testReshapedTransformerLayerFPropNoCrossAttention(self):
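"""Checks that a reshaped layer on [B, T, 2, 2] inputs matches the base layer."""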
with self.session(use_gpu=True) as sess:
query_vec, _, _, _ = self._TransformerAttentionLayerInputs()
paddings = tf.zeros([2, 5])
# default setup
p = attention.TransformerLayer.Params()
p.name = 'transformer_layer'
p.has_aux_atten = False
p.input_dim = 4
p.tr_fflayer_tpl.hidden_dim = 7
p.tr_atten_tpl.num_heads = 2
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
ctx_vec, _ = l.FProp(l.theta, query_vec, paddings)
# reshaped setup
reshaped_p = p.Copy()
attention.TransformerLayer.SetReshapedLayers(reshaped_p)
reshaped_p.device_mesh = np.reshape(np.arange(4), [2, 2])
attention.TransformerLayer.SetCanonicalShardingParams(
reshaped_p, reshape_dim=True)
reshaped_p.name = 'reshaped_transformer_layer'
reshaped_l = reshaped_p.Instantiate()
# Use l.theta as it is compatible with reshaped_l.
reshaped_ctx_vec, _ = reshaped_l.FProp(
l.theta, tf.reshape(query_vec, [2, 5, 2, 2]), paddings)
tf.global_variables_initializer().run()
actual_ctx = sess.run(ctx_vec)
actual_ctx = np.reshape(actual_ctx, (2, 5, 4))
reshaped_ctx = sess.run(reshaped_ctx_vec)
reshaped_ctx = np.reshape(reshaped_ctx, (2, 5, 4))
self.assertAllClose(actual_ctx, reshaped_ctx)
def testReshapedTransformerLayerDecodeNoCrossAttention(self):
np.random.seed(6348575)
dtype = tf.float32
b_size = 2
input_dim = 4
seq_len = 3
query_vec = np.random.rand(b_size, seq_len, input_dim)
paddings = tf.zeros(shape=[b_size, seq_len], dtype=dtype)
segment_mask = tf.constant(
[[0, -1e30, -1e30], [-1e30, 0, -1e30], [0, -1e30, 0]], dtype=dtype)
segment_mask = tf.tile(segment_mask[tf.newaxis, tf.newaxis, :, :],
[b_size, 1, 1, 1])
with self.session(use_gpu=True) as sess:
p = attention.TransformerLayer.Params()
p.name = 'reshaped_transformer_layer'
p.input_dim = input_dim
p.tr_fflayer_tpl.hidden_dim = 7
p.tr_atten_tpl.num_heads = 2
p.mask_self_atten = True
p.packed_input = True
p.has_aux_atten = False
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
attention.TransformerLayer.SetReshapedLayers(p)
p.device_mesh = np.reshape(np.arange(4), [2, 2])
attention.TransformerLayer.SetCanonicalShardingParams(p, reshape_dim=True)
l = p.Instantiate()
ctx_vec, _ = l.FProp(
l.theta,
tf.reshape(query_vec, [b_size, seq_len, 2, 2]),
paddings,
None,
None,
segment_mask=segment_mask)
ctx_vec = tf.reshape(ctx_vec, [b_size, seq_len, input_dim])
cached_states = l.InitStates(l.theta, b_size, seq_len)
extend_step_outs = []
for t in range(seq_len):
out_t, _, cached_states = l.ExtendStep(
l.theta,
query_vec[:, t:t + 1, :],
None,
None,
cached_states,
t,
segment_mask=segment_mask[:, :, t, :])
extend_step_outs.append(out_t[:, 0, :])
decoder_out = tf.stack(extend_step_outs, axis=1)
tf.global_variables_initializer().run()
fprop_out_v, decoder_out_v = sess.run([ctx_vec, decoder_out])
tf.logging.info(np.array_repr(fprop_out_v))
tf.logging.info(np.array_repr(decoder_out_v))
self.assertAllClose(fprop_out_v, decoder_out_v)
@parameterized.named_parameters(('SingleBatch', 1), ('DoubleBatch', 2))
def testMultiSourceTransformerLayerFPropWithCrossAttention(self, multiplier):
with self.session(use_gpu=True) as sess:
(query_vec, _, aux_vec,
aux_paddings) = self._TransformerAttentionLayerInputs()
query_vec = tf.tile(query_vec, [multiplier, 1, 1])
paddings = tf.zeros([2 * multiplier, 5])
p = attention.TransformerLayer.Params()
p.name = 'transformer_layer'
p.input_dim = 4
p.tr_fflayer_tpl.hidden_dim = 7
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
# multi-source cross attention
p.tr_atten_tpl = (
attention.TransformerMultiSourceAttentionLayer.Params().Set(
num_source=2, primary_source_index=0, num_heads=2))
p.tr_self_atten_tpl = attention.TransformerAttentionLayer.Params().Set(
input_dim=4, num_heads=2)
l = p.Instantiate()
ctx_vec, _ = l.FProp(
l.theta, query_vec, paddings,
py_utils.NestedMap({
'source_0': aux_vec,
'source_1': aux_vec
}),
py_utils.NestedMap({
'source_0': aux_paddings,
'source_1': aux_paddings
}))
tf.global_variables_initializer().run()
actual_ctx = sess.run(ctx_vec)
actual_ctx = np.reshape(actual_ctx, (10 * multiplier, 4))
tf.logging.info(np.array_repr(actual_ctx))
expected_ctx = [
4.7839108, 4.5303655, 5.5551023, 5.0657663, 5.0493064, 3.2142467,
2.820018, 5.659971, 4.3814187, 2.60475
] * multiplier
self.assertAllClose(expected_ctx, np.sum(actual_ctx, axis=1))
@parameterized.named_parameters(('Base', False), ('RelativeAtten', True))
def testTransformerDecoderLayerConstruction(self, use_relative_atten):
_ = self._ConstructTransformerDecoderLayer(
use_relative_atten=use_relative_atten)
def testTransformerDecoderLayerFProp(self):
with self.session(use_gpu=True) as sess:
(query_vec, paddings, aux_vec,
aux_paddings) = self._TransformerAttentionLayerInputs()
l = self._ConstructTransformerDecoderLayer()
layer_output, _ = l.FProp(l.theta, query_vec, paddings, aux_vec,
aux_paddings)
tf.global_variables_initializer().run()
actual_layer_output = sess.run(layer_output)
actual_layer_output = np.reshape(actual_layer_output, (10, 4))
tf.logging.info(np.array_repr(actual_layer_output))
expected_layer_output = [16.939590, 24.121685, 19.975197, 15.924350]
self.assertAllClose(expected_layer_output,
np.sum(actual_layer_output, axis=0))
def testTransformerDecoderLayerFPropMixHeads(self):
with self.session(use_gpu=True) as sess:
(query_vec, paddings, aux_vec,
aux_paddings) = self._TransformerAttentionLayerInputs()
l = self._ConstructTransformerDecoderLayerMixHeads()
layer_output, _ = l.FProp(l.theta, query_vec, paddings, aux_vec,
aux_paddings)
tf.global_variables_initializer().run()
actual_layer_output = sess.run(layer_output)
actual_layer_output = np.reshape(actual_layer_output, (10, 4))
tf.logging.info(np.array_repr(actual_layer_output))
expected_layer_output = [6.2344074, 15.817548, 6.8874574, 4.879834]
self.assertAllClose(expected_layer_output,
np.sum(actual_layer_output, axis=0))
def _ConstructTransformerEncoderLayerStack(self):
p = attention.StackedTransformerLayers.Params()
p.name = 'encoder_layers'
p.has_aux_atten = False
p.mask_self_atten = False
p.num_layers = 2
p.mdl_dim = 4
p.hidden_dim = 8
p.num_atten_heads = 2
p.dropout_prob = 0.2
p.params_init = py_utils.WeightInit.Xavier()
p.random_seed = 12345
return p.Instantiate()
def _ConstructTransformerDecoderLayerStack(self, dropout_prob=0.2):
p = attention.StackedTransformerLayers.Params()
p.name = 'decoder_layers'
p.has_aux_atten = True
p.mask_self_atten = True
p.num_layers = 2
p.mdl_dim = 4
p.hidden_dim = 8
p.num_atten_heads = 2
p.dropout_prob = dropout_prob
p.params_init = py_utils.WeightInit.Xavier()
p.random_seed = 12345
return p.Instantiate()
def _ConstructTransformerParamsTplListMixHeadsStack(self):
p = attention.StackedTransformerLayers.Params()
p.name = 'encoder_layers'
p.has_aux_atten = False
p.mask_self_atten = False
p.num_layers = 6
params1 = attention.TransformerLayer.Params()
params1.tr_atten_tpl.atten_tpl = (
attention.LocalSelfAttention.Params().Set(
left_context=2, right_context=2, block_size=4))
params2 = attention.TransformerLayer.Params()
params2.tr_atten_tpl.atten_tpl = (
attention.RoutingAttention.Params().Set(
num_clusters=1, attention_window=2))
params3 = attention.TransformerLayer.Params()
params3.tr_atten_tpl.atten_tpl = [
attention.LocalSelfAttention.Params().Set(
left_context=2, right_context=2, block_size=4),
attention.RoutingAttention.Params().Set(
num_clusters=1, attention_window=2)
]
params3.num_heads = [1, 1]
p.transformer_layer_params_tpl = [params1, params2, params3]
p.mdl_dim = 4
p.hidden_dim = 8
p.num_atten_heads = 2
p.dropout_prob = 0.2
p.params_init = py_utils.WeightInit.Xavier()
p.random_seed = 12345
return p.Instantiate()
def _ConstructRepeatedTransformerDecoderLayer(self,
repeat,
per_layer_vars=False):
p = attention.RepeatedTransformerLayer.Params()
p.name = 'repeated_decoder_layer'
p.params_init = py_utils.WeightInit.Xavier()
p.random_seed = 12345
p.repeat = repeat
p.per_layer_vars = per_layer_vars
p.atten_prob_aggregation = 'mean'
tp = p.body = attention.TransformerDecoderLayer.Params()
tp.input_dim = 4
tp.tr_fflayer_tpl.hidden_dim = 7
tp.tr_atten_tpl.num_heads = 2
return p.Instantiate()
def testTransformerStackTplList(self):
l = self._ConstructTransformerParamsTplListMixHeadsStack()
self.assertIsInstance(l.x_layers[0].self_atten.atten,
attention.LocalSelfAttention)
self.assertIsInstance(l.x_layers[1].self_atten.atten,
attention.LocalSelfAttention)
self.assertIsInstance(l.x_layers[2].self_atten.atten,
attention.RoutingAttention)
self.assertIsInstance(l.x_layers[3].self_atten.atten,
attention.RoutingAttention)
self.assertIsInstance(l.x_layers[4].self_atten.atten[0],
attention.LocalSelfAttention)
self.assertIsInstance(l.x_layers[4].self_atten.atten[1],
attention.RoutingAttention)
self.assertIsInstance(l.x_layers[5].self_atten.atten[0],
attention.LocalSelfAttention)
self.assertIsInstance(l.x_layers[5].self_atten.atten[1],
attention.RoutingAttention)
def testStackedTransformerGetSplitForLayer(self):
cls = attention.StackedTransformerLayers
buckets = [2, 4, 5, 6, 9, 11, 15]
ys = [cls.GetSplitForLayer(buckets, i) for i in range(16)]
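# Per the expected values below, layer i is assigned to the first bucket
# whose boundary is >= i.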
self.assertEqual([0, 0, 0, 1, 1, 2, 3, 4, 4, 4, 5, 5, 6, 6, 6, 6], ys)
def testTransformerEncoderLayerStackFProp(self):
with self.session(use_gpu=True) as sess:
(query_vec, paddings, _, _) = self._TransformerAttentionLayerInputs()
l = self._ConstructTransformerEncoderLayerStack()
layer_output, _ = l.FProp(l.theta, query_vec=query_vec, paddings=paddings)
tf.global_variables_initializer().run()
actual_layer_output = sess.run(layer_output)
actual_layer_output = np.reshape(actual_layer_output, (10, 4))
tf.logging.info(np.array_repr(actual_layer_output))
expected_layer_output = [6.178955, -11.376661, 7.032681, -1.532627]
self.assertAllClose(expected_layer_output,
np.sum(actual_layer_output, axis=0))
def testTransformerDecoderLayerStackFProp(self):
with self.session(use_gpu=True) as sess:
(query_vec, paddings, aux_vec,
aux_paddings) = self._TransformerAttentionLayerInputs()
l = self._ConstructTransformerDecoderLayerStack()
layer_output, _ = l.FProp(
l.theta,
query_vec=query_vec,
paddings=paddings,
aux_vec=aux_vec,
aux_paddings=aux_paddings)
tf.global_variables_initializer().run()
actual_layer_output = sess.run(layer_output)
actual_layer_output = np.reshape(actual_layer_output, (10, 4))
tf.logging.info(np.array_repr(actual_layer_output))
expected_layer_output = [9.926413, -4.491376, 27.051598, 2.112684]
self.assertAllClose(expected_layer_output,
np.sum(actual_layer_output, axis=0))
@parameterized.named_parameters(
{
'testcase_name': '_short_seq',
'use_short_seq_opt': True,
}, {
'testcase_name': '_long_seq',
'use_short_seq_opt': False,
})
def testTransformerDecoderLayerStackExtendStep(self, use_short_seq_opt):
def _Rnd(seed):
return tf.random.normal([5, 2, 2, 2], seed=seed)
graph = tf.Graph()
with graph.as_default():
tf.random.set_seed(123456)
query_vec, _, aux_vec, aux_paddings = (
self._TransformerAttentionLayerInputs())
paddings = tf.zeros([2, 5])
layer_prefix_states_1 = py_utils.NestedMap(key=_Rnd(1), value=_Rnd(2))
layer_prefix_states_2 = py_utils.NestedMap(key=_Rnd(3), value=_Rnd(4))
prefix_states = py_utils.NestedMap(
x_layers=[layer_prefix_states_1, layer_prefix_states_2])
l = self._ConstructTransformerDecoderLayerStack(dropout_prob=0.)
layer_output1, _ = l.FProp(l.theta, query_vec, paddings, aux_vec,
aux_paddings)
layer_output2 = []
for i in range(5):
layer_output, prefix_states = l.ExtendStep(
l.theta, tf.expand_dims(query_vec[:, i, :], 1), aux_vec,
aux_paddings, prefix_states, i, use_short_seq_opt)
layer_output2.append(tf.squeeze(layer_output, 1))
layer_output2 = tf.transpose(tf.stack(layer_output2), [1, 0, 2])
with self.session(graph=graph, use_gpu=True) as sess:
tf.global_variables_initializer().run()
actual_layer_output1, actual_layer_output2 = sess.run(
[layer_output1, layer_output2])
self.assertAllClose(actual_layer_output1, actual_layer_output2)
@parameterized.named_parameters(
{
'testcase_name': '_short_seq',
'use_short_seq_opt': True,
}, {
'testcase_name': '_long_seq',
'use_short_seq_opt': False,
})
def testTransformerDecoderLayerExtendStep(self, use_short_seq_opt):
with self.session(use_gpu=True) as sess:
(query_vec, _, aux_vec,
aux_paddings) = self._TransformerAttentionLayerInputs()
paddings = tf.zeros([2, 5])
cached_key = tf.constant(
np.random.normal(0.1, 0.5, [5, 2, 2, 2]), dtype=tf.float32)
cached_value = tf.constant(
np.random.normal(0.1, 0.5, [5, 2, 2, 2]), dtype=tf.float32)
prefix_states = py_utils.NestedMap(key=cached_key, value=cached_value)
l = self._ConstructTransformerDecoderLayer()
layer_output1, layer_atten_probs1 = l.FProp(l.theta, query_vec, paddings,
aux_vec, aux_paddings)
layer_atten_probs1 = layer_atten_probs1.aux_atten
layer_output2 = []
layer_atten_probs2 = []
for i in range(5):
layer_output, cross_atten_probs, prefix_states = l.ExtendStep(
l.theta,
tf.expand_dims(query_vec[:, i, :], 1),
aux_vec,
aux_paddings,
prefix_states,
i,
use_short_seq_opt,
compute_atten_probs=True)
layer_output2.append(tf.squeeze(layer_output, 1))
layer_atten_probs2.append(cross_atten_probs)
layer_output2 = tf.transpose(tf.stack(layer_output2), [1, 0, 2])
# [B, N, T, S].
layer_atten_probs2 = tf.concat(layer_atten_probs2, axis=2)
tf.global_variables_initializer().run()
(actual_layer_output1, actual_layer_output2, actual_layer_atten_probs1,
actual_layer_atten_probs2) = sess.run([
layer_output1, layer_output2, layer_atten_probs1, layer_atten_probs2
])
self.assertAllClose(actual_layer_output1, actual_layer_output2)
self.assertAllClose(actual_layer_atten_probs1, actual_layer_atten_probs2)
@parameterized.named_parameters(
{
'testcase_name': '_short_seq',
'use_short_seq_opt': True,
}, {
'testcase_name': '_long_seq',
'use_short_seq_opt': False,
}, {
'testcase_name': '_repeat',
'repeat': 3,
}, {
'testcase_name': '_repeat_per_layer_var',
'repeat': 3,
'per_layer_var': True,
})
def testTransformerDecoderLayerExtendStepDifferentBatchSizes(
self, use_short_seq_opt=False, repeat=None, per_layer_var=False):
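"""Checks ExtendStep with target batch = batch_multiplier * source batch."""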
with self.session(use_gpu=True) as sess:
if repeat:
l = self._ConstructRepeatedTransformerDecoderLayer(
repeat, per_layer_var)
else:
l = self._ConstructTransformerDecoderLayer()
(query_vec, _, aux_vec,
aux_paddings) = self._TransformerAttentionLayerInputs()
paddings = tf.zeros([2, 5])
layer_output1, layer_atten_probs1 = l.FProp(
l.theta,
query_vec,
paddings=paddings,
aux_vec=aux_vec,
aux_paddings=aux_paddings)
layer_atten_probs1 = layer_atten_probs1.aux_atten
source_batch, source_length = py_utils.GetShape(aux_paddings, 2)
batch_multiplier = 2
target_batch = source_batch * batch_multiplier
num_heads = 2
prefix_states = l.InitStates(
l.theta, target_batch_size=target_batch, target_max_length=5)
def _TileByBatchMultiplier(x):
"""Tile 'x' along the batch dim by batch_multiplier."""
b, t, d = py_utils.GetShape(x)
# [b, batch_multiplier, t, d].
x = tf.tile(tf.expand_dims(x, axis=1), [1, batch_multiplier, 1, 1])
return tf.reshape(x, [b * batch_multiplier, t, d])
tiled_query_vec = _TileByBatchMultiplier(query_vec)
layer_output2 = []
layer_atten_probs2 = []
for i in range(5):
layer_output, cross_atten_probs, prefix_states = l.ExtendStep(
l.theta,
tiled_query_vec[:, i:i + 1, :],
cached_states=prefix_states,
aux_vec=aux_vec,
aux_paddings=aux_paddings,
time_step=i,
use_short_seq_opt=use_short_seq_opt,
compute_atten_probs=True)
layer_output2.append(layer_output)
layer_atten_probs2.append(
py_utils.HasShape(cross_atten_probs,
[target_batch, num_heads, 1, source_length]))
layer_output2 = tf.concat(layer_output2, axis=1)
# [B, N, T, S].
layer_atten_probs2 = tf.concat(layer_atten_probs2, axis=-2)
tf.global_variables_initializer().run()
(actual_layer_output1, actual_layer_output2, actual_layer_atten_probs1,
actual_layer_atten_probs2) = sess.run([
layer_output1, layer_output2, layer_atten_probs1, layer_atten_probs2
])
for i in range(source_batch):
for j in range(batch_multiplier):
tf.logging.info('Expected (%s): %s', i, actual_layer_output1[i])
tf.logging.info('Actual (%s, %s): %s', i, j,
actual_layer_output2[i * batch_multiplier + j])
self.assertAllClose(actual_layer_output1[i],
actual_layer_output2[i * batch_multiplier + j])
self.assertAllClose(
actual_layer_atten_probs1[i],
actual_layer_atten_probs2[i * batch_multiplier + j])
def _ConstructMultiSourceTransformerDecoderLayer(self,
use_relative_atten=False):
p = attention.MultiSourceTransformerDecoderLayer.Params().Set(num_source=2)
p.name = 'multi_source_transformer_decoder_layer'
p.input_dim = 4
p.tr_fflayer_tpl.hidden_dim = 7
# multi-source cross attention
p.tr_atten_tpl = (
attention.TransformerMultiSourceAttentionLayer.Params().Set(
num_source=2, primary_source_index=0, num_heads=2))
p.tr_self_atten_tpl = attention.TransformerAttentionLayer.Params().Set(
input_dim=4, num_heads=2)
p.tr_atten_tpl.multi_source_atten.atten_merger_tpl = (
tm_attention.MergerLayer.Params().Set(merger_op='sum'))
if use_relative_atten:
p = attention.UseRelativeAttentionInTransformerLayer(p, 4)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
return attention.MultiSourceTransformerDecoderLayer(p)
@parameterized.named_parameters(
{
'testcase_name': '_short_seq',
'use_short_seq_opt': True,
}, {
'testcase_name': '_long_seq',
'use_short_seq_opt': False,
})
def testMultiSourceTransformerDecoderLayerExtendStep(self, use_short_seq_opt):
with self.session(use_gpu=True) as sess:
(query_vec, _, aux_vec,
aux_paddings) = self._TransformerAttentionLayerInputs()
paddings = tf.zeros([2, 5])
cached_key = tf.constant(
np.random.normal(0.1, 0.5, [5, 2, 2, 2]), dtype=tf.float32)
cached_value = tf.constant(
np.random.normal(0.1, 0.5, [5, 2, 2, 2]), dtype=tf.float32)
prefix_states = py_utils.NestedMap(key=cached_key, value=cached_value)
l = self._ConstructMultiSourceTransformerDecoderLayer()
ms_aux_vec = py_utils.NestedMap({
'source_0': aux_vec,
'source_1': aux_vec
})
ms_aux_paddings = py_utils.NestedMap({
'source_0': aux_paddings,
'source_1': aux_paddings
})
layer_output1, layer_atten_probs1 = l.FProp(l.theta, query_vec, paddings,
ms_aux_vec, ms_aux_paddings)
layer_atten_probs1 = layer_atten_probs1.aux_atten
layer_output2 = []
layer_atten_probs2 = []
for i in range(5):
layer_output, cross_atten_probs, prefix_states = l.ExtendStep(
l.theta,
tf.expand_dims(query_vec[:, i, :], 1),
ms_aux_vec,
ms_aux_paddings,
prefix_states,
i,
use_short_seq_opt,
compute_atten_probs=True)
layer_output2.append(tf.squeeze(layer_output, 1))
layer_atten_probs2.append(cross_atten_probs)
layer_output2 = tf.transpose(tf.stack(layer_output2), [1, 0, 2])
# [B, N, T, S].
layer_atten_probs2 = tf.concat(layer_atten_probs2, axis=2)
tf.global_variables_initializer().run()
(actual_layer_output1, actual_layer_output2, actual_layer_atten_probs1,
actual_layer_atten_probs2) = sess.run([
layer_output1, layer_output2, layer_atten_probs1, layer_atten_probs2
])
self.assertAllClose(actual_layer_output1, actual_layer_output2)
self.assertAllClose(actual_layer_atten_probs1, actual_layer_atten_probs2)
def _testTransformerDecoderLayerInputs(self,
depth=3,
context_depth=3,
dtype=tf.float32):
source_vecs = tf.stack(
[tf.constant(np.random.rand(2, depth), dtype=dtype) for _ in range(5)])
source_padding = tf.transpose(
tf.constant([[0, 0, 1, 1, 0], [1, 0, 0, 0, 1]], dtype=dtype))
aux_source_vecs = tf.stack(
[tf.constant(np.random.rand(2, depth), dtype=dtype) for _ in range(7)])
aux_source_paddings = tf.transpose(
tf.constant([[0, 1, 0, 1, 0, 1, 0], [1, 0, 1, 0, 1, 0, 1]],
dtype=dtype))
context_vecs = tf.stack([
tf.constant(np.random.rand(2, context_depth), dtype=dtype)
for _ in range(7)
])
return (source_vecs, source_padding, aux_source_vecs, aux_source_paddings,
context_vecs)
def testPrefixTransformerLayerExtendStep(self):
with self.session(use_gpu=False):
np.random.seed(6348575)
depth = 4
p = attention.TransformerDecoderLayer.Params()
p.name = 'TransformerDecoderLayer'
p.input_dim = 4
p.tr_fflayer_tpl.input_dim = 4
p.tr_fflayer_tpl.hidden_dim = 8
p.has_aux_atten = True
p.mask_self_atten = True
p.tr_atten_tpl = attention.TransformerAttentionLayer.Params().Set(
num_heads=2, input_dim=4)
transformer = p.Instantiate()
(source_vecs, _, aux_vecs, aux_paddings,
_) = self._testTransformerDecoderLayerInputs(depth=depth)
source_padding = tf.zeros([5, 2])
source_vecs = tf.transpose(source_vecs, [1, 0, 2])
source_padding = tf.transpose(source_padding, [1, 0])
aux_vecs = tf.transpose(aux_vecs, [1, 0, 2])
aux_paddings = tf.transpose(aux_paddings, [1, 0])
h1, _ = transformer.FPropDefaultTheta(
source_vecs,
source_padding,
aux_vec=aux_vecs,
aux_paddings=aux_paddings)
h2 = []
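# Prefix cache: 2 precomputed timesteps followed by 5 empty slots, so
# decoding below starts at time_step=2.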
cached_source_vecs = tf.concat([
tf.random.uniform((2, 2, 2, 2), 0.0, 1.0),
tf.zeros((5, 2, 2, 2), dtype=tf.float32)
],
axis=0)
cached_source_contexts = tf.concat([
tf.random.uniform((2, 2, 2, 2), 0.0, 1.0),
tf.zeros((5, 2, 2, 2), dtype=tf.float32)
],
axis=0)
prefix_states = py_utils.NestedMap(
key=cached_source_vecs, value=cached_source_contexts)
for i in range(5):
# Ignore the first two timesteps in cached_source.
per_step_padding = tf.concat([
tf.ones([2, 2], dtype=tf.float32),
tf.zeros([2, i + 1], dtype=tf.float32),
tf.ones([2, 4 - i], dtype=tf.float32)
],
axis=1)
per_step_padding = tf.expand_dims(per_step_padding, axis=1)
h, _, prefix_states = transformer.ExtendStep(
transformer.theta,
source_vecs[:, i:i + 1, :],
aux_vecs,
aux_paddings,
prefix_states,
time_step=i + 2,
per_step_padding=per_step_padding)
h2.append(h)
h2 = tf.concat(h2, axis=1)
self.evaluate(tf.global_variables_initializer())
h1_v, h2_v = self.evaluate([h1, h2])
self.assertAllClose(h1_v, h2_v, atol=1e-3)
class GPipeBatchMajorTransformerLayerTest(test_utils.TestCase,
parameterized.TestCase):
"""Test GPipeBatchMajorTransformer layers."""
def _ConstructGPipeBatchMajorTransformerLayer(self,
decoder=False,
packed=True,
dropout=0.1):
p = attention.GPipeBatchMajorTransformerLayer.Params()
p.name = 'gpipe_transformer_layer'
p.input_dim = 4
p.tr_fflayer_tpl.hidden_dim = 7
p.tr_atten_tpl.num_heads = 2
p.tr_atten_tpl.residual_dropout_prob = dropout
p.packed_input = packed
if decoder:
p.has_aux_atten = True
p.mask_self_atten = True
p.cls.SetupDeterministicDropout(p)
layer = p.Instantiate()
return p, layer
def _GPipeBatchMajorTransformerLayerInputs(self,
input_dim=4,
dtype=tf.float32):
np.random.seed(6348575)
target_vec = tf.transpose(
tf.stack([
tf.constant(np.random.rand(2, input_dim), dtype=dtype)
for _ in range(5)
]), [1, 0, 2])
target_paddings = tf.constant([[0, 0, 0, 0, 1], [0, 0, 0, 0, 0]],
dtype=dtype)
aux_vec = tf.transpose(
tf.stack([
tf.constant(np.random.rand(2, input_dim), dtype=dtype)
for _ in range(7)
]), [1, 0, 2])
aux_paddings = tf.constant([[0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1]],
dtype=dtype)
aux_segment_ids = tf.constant(
[[0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 0, 1, 1, 1]], dtype=dtype)
target_segment_ids = tf.constant([[0, 0, 0, 1, 1], [0, 0, 1, 1, 1]],
dtype=dtype)
target_sa_mask = attention.SegmentMask(
target_segment_ids, target_segment_ids, apply_dtype_min=False)
aux_sa_mask = attention.SegmentMask(
aux_segment_ids, aux_segment_ids, apply_dtype_min=False)
ca_mask = attention.SegmentMask(
target_segment_ids, aux_segment_ids, apply_dtype_min=False)
causal_padding = tf.expand_dims(
tf.tile(
tf.expand_dims(attention.CausalPadding(5, dtype=dtype), 0),
[2, 1, 1]), 1)
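# Elementwise max combines the causal mask with the segment mask: a
# position is masked if either mask marks it.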
target_sa_mask = tf.math.maximum(causal_padding, target_sa_mask)
return (target_vec, target_paddings, target_sa_mask, aux_vec, aux_paddings,
aux_sa_mask, ca_mask)
def testGPipeBatchMajorTransformerEncoderLayerConstruction(self):
_, layer = self._ConstructGPipeBatchMajorTransformerLayer()
self.assertEqual(0.1, layer.params.tr_atten_tpl.residual_dropout_prob)
def testGPipeBatchMajorTransformerDecoderLayerConstruction(self):
_, layer = self._ConstructGPipeBatchMajorTransformerLayer(decoder=True)
self.assertEqual(0.1, layer.params.tr_atten_tpl.residual_dropout_prob)
def testGPipeBatchMajorTransformerEncoderLayerFProp(self):
with self.session(use_gpu=True) as sess:
(_, _, _, aux_vec, aux_paddings, aux_sa_mask,
_) = self._GPipeBatchMajorTransformerLayerInputs()
_, l = self._ConstructGPipeBatchMajorTransformerLayer()
layer_output = l.FProp(l.theta, aux_vec, aux_paddings, None, None,
aux_sa_mask, None, None)[0]
tf.global_variables_initializer().run()
actual_layer_output = sess.run(layer_output)
actual_layer_output = np.reshape(actual_layer_output, (14, 4))
tf.logging.info(np.array_repr(actual_layer_output))
expected_layer_output = [7.616176, 8.611565, -0.932456, -4.5797]
self.assertAllClose(expected_layer_output,
np.sum(actual_layer_output, axis=0))
def testGPipeBatchMajorTransformerDecoderLayerFProp(self):
with self.session(use_gpu=True) as sess:
(target_vec, target_paddings, target_sa_mask, aux_vec, aux_paddings,
aux_sa_mask, ca_mask) = self._GPipeBatchMajorTransformerLayerInputs()
_, l = self._ConstructGPipeBatchMajorTransformerLayer(decoder=True)
layer_output = l.FProp(l.theta, aux_vec, aux_paddings, target_vec,
target_paddings, aux_sa_mask, target_sa_mask,
ca_mask)[2]
tf.global_variables_initializer().run()
actual_layer_output = sess.run(layer_output)
actual_layer_output = np.reshape(actual_layer_output, (10, 4))
tf.logging.info(np.array_repr(actual_layer_output))
expected_layer_output = [2.721037, 5.228053, 2.27512, 6.92945]
self.assertAllClose(expected_layer_output,
np.sum(actual_layer_output, axis=0))
def testGPipeBatchMajorTransformerDecoderLayerExtendStep(self):
with self.session(use_gpu=True) as sess:
(target_vec, _, _, aux_vec, aux_paddings, _,
_) = self._GPipeBatchMajorTransformerLayerInputs()
target_paddings = tf.zeros([2, 5])
cached_key = tf.constant(
np.random.normal(0.1, 0.5, [5, 2, 2, 2]), dtype=tf.float32)
cached_value = tf.constant(
np.random.normal(0.1, 0.5, [5, 2, 2, 2]), dtype=tf.float32)
prefix_states = py_utils.NestedMap(key=cached_key, value=cached_value)
_, l = self._ConstructGPipeBatchMajorTransformerLayer(
decoder=True, packed=False, dropout=0.0)
layer_output1 = l.FProp(l.theta, aux_vec, aux_paddings, target_vec,
target_paddings, None, None, None)[2]
layer_output2 = []
for i in range(5):
layer_output, _, prefix_states = l.ExtendStep(
l.theta, tf.expand_dims(target_vec[:, i, :], 1), aux_vec,
aux_paddings, prefix_states, i)
layer_output2.append(tf.squeeze(layer_output, 1))
layer_output2 = tf.transpose(tf.stack(layer_output2), [1, 0, 2])
tf.global_variables_initializer().run()
actual_layer_output1, actual_layer_output2 = sess.run(
[layer_output1, layer_output2])
self.assertAllClose(actual_layer_output1, actual_layer_output2)
class BuilderTest(test_utils.TestCase, parameterized.TestCase):
def _testGraph(self, glu_with_tanh=False, dtype=tf.float32):
tf.random.set_seed(398847392)
np.random.seed(12345)
atten_builder = attention.Builder.Params().Set(
model_dim=4, num_heads=2, ff_hidden_dim=16, glu_with_tanh=glu_with_tanh)
params = atten_builder.Instantiate().LConvStack(
name='lightconv', kernel_sizes=[3, 3])
params.dtype = dtype
params.random_seed = 0
params.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = params.Instantiate()
l_in = tf.constant(np.random.rand(2, 3, 4), dtype=dtype)
l_padding = tf.zeros([2, 3], dtype=dtype)
l_out = l.FPropDefaultTheta(
py_utils.NestedMap(vec=l_in, paddings=l_padding))
return l_out.vec
@parameterized.parameters((False, 38.163662), (True, 35.88797))
def testFprop(self, glu_with_tanh, expected_result):
with self.session(use_gpu=False, graph=tf.Graph()) as sess:
l_out = self._testGraph(glu_with_tanh)
l_out = tf.reduce_sum(l_out)
tf.global_variables_initializer().run()
l_out_eval = sess.run(l_out)
self.assertAllClose(expected_result, l_out_eval)
def testBProp(self):
with self.session(use_gpu=True) as sess:
output = self._testGraph(dtype=tf.float64)
loss = tf.reduce_sum(output)
all_vars = tf.trainable_variables()
grads = tf.gradients(loss, all_vars)
tf.global_variables_initializer().run()
sym_grads = [sg.eval() for sg in grads]
num_grads = [
test_utils.ComputeNumericGradient(sess, loss, v) for v in all_vars
]
for ng, sg in zip(num_grads, sym_grads):
self.assertAllClose(ng, sg, rtol=5e-02, atol=5e-02)
@parameterized.named_parameters(
{
'testcase_name': '_baseline',
'strides': [1, 1],
}, {
'testcase_name': '_stride_2',
'strides': [1, 2],
}, {
'testcase_name': '_first_token',
'strides': [2, 0],
}, {
'testcase_name': '_stride_2_begin_intact_1_no_trunc',
'strides': [1, 2],
'begin_intact': 1,
'trunc_seq': False,
}, {
'testcase_name': '_stride_2_begin_intact_1_trunc',
'strides': [1, 2],
'begin_intact': 1,
'trunc_seq': True,
}, {
'testcase_name': '_gpipe',
'strides': [1, 1],
'num_splits': 2,
'num_micro_batches': 2,
})
def testFunnelTransformerStack(self,
strides,
begin_intact=0,
trunc_seq=True,
num_splits=1,
num_micro_batches=1):
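"""Checks funnel stack output lengths for various stride configurations."""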
with self.session(use_gpu=False) as sess:
bs = 2
sl = 10
d = 16
tf.random.set_seed(12345)
atten_builder_params = attention.Builder.Params().Set(
num_splits=num_splits,
num_micro_batches=num_micro_batches,
deterministic_dropout=num_splits > 1 or num_micro_batches > 1,
model_dim=d,
num_heads=2,
ff_hidden_dim=5,
funnel_pool_tpl=attention.FunnelPoolingLayer.Params().Set(
begin_intact=begin_intact, trunc_seq=trunc_seq))
atten_builder = atten_builder_params.Instantiate()
layers = []
accumulate_stride = 1
for layer_i, stride in enumerate(strides):
accumulate_stride *= stride
layers.append(
atten_builder.FunnelEncoderLayer(
name='atten_{}'.format(layer_i), stride=stride))
p = atten_builder.Stack('model', layers)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
input_embs = tf.constant(
np.random.random(size=[bs, sl, d]), dtype=np.float64)
paddings = tf.zeros([bs, sl])
l_out = l.FPropDefaultTheta(
py_utils.NestedMap(vec=input_embs, paddings=paddings))
enc_out = l_out.vec
tf.global_variables_initializer().run()
actual_enc_out = sess.run(enc_out)
if accumulate_stride == 0:
self.assertAllEqual([bs, 1, d], actual_enc_out.shape)
elif (not begin_intact) or (begin_intact and trunc_seq):
seq_len = sl // accumulate_stride
self.assertAllEqual([bs, seq_len, d], actual_enc_out.shape)
elif begin_intact and not trunc_seq:
seq_len = sl
for stride in strides:
if stride > 1:
seq_len = begin_intact + int(
math.ceil((seq_len - begin_intact) / stride))
self.assertAllEqual([bs, seq_len, d], actual_enc_out.shape)
@parameterized.named_parameters(
{
'testcase_name': '_baseline',
'strides': [1, 1],
}, {
'testcase_name': '_stride_2',
'strides': [1, 2],
}, {
'testcase_name': '_first_token',
'strides': [2, 0],
})
def testFunnelTransformerStackStochasticDepth(self,
strides,
begin_intact=0,
trunc_seq=True):
with self.session(use_gpu=False) as sess:
bs = 2
sl = 10
d = 16
tf.random.set_seed(12345)
atten_builder_params = attention.Builder.Params().Set(
model_dim=d,
num_heads=2,
ff_hidden_dim=5,
survival_prob=0.9,
funnel_pool_tpl=attention.FunnelPoolingLayer.Params().Set(
begin_intact=begin_intact, trunc_seq=trunc_seq))
atten_builder = atten_builder_params.Instantiate()
layers = []
accumulate_stride = 1
for layer_i, stride in enumerate(strides):
accumulate_stride *= stride
layers.append(
atten_builder.FunnelEncoderLayer(
name='atten_{}'.format(layer_i), stride=stride))
p = atten_builder.Seq('model', *layers)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
input_embs = tf.constant(
np.random.random(size=[bs, sl, d]), dtype=np.float64)
paddings = tf.zeros([bs, sl])
l_out = l.FPropDefaultTheta(
py_utils.NestedMap(vec=input_embs, paddings=paddings))
enc_out = l_out.vec
tf.global_variables_initializer().run()
actual_enc_out = sess.run(enc_out)
if accumulate_stride == 0:
self.assertAllEqual([bs, 1, d], actual_enc_out.shape)
elif (not begin_intact) or (begin_intact and trunc_seq):
seq_len = sl // accumulate_stride
self.assertAllEqual([bs, seq_len, d], actual_enc_out.shape)
elif begin_intact and not trunc_seq:
seq_len = sl
for stride in strides:
if stride > 1:
seq_len = begin_intact + int(
math.ceil((seq_len - begin_intact) / stride))
self.assertAllEqual([bs, seq_len, d], actual_enc_out.shape)
@parameterized.named_parameters(
{
'testcase_name': '_avg_pool_exclude',
'stride': 2,
'pooling_type': 'AVG',
'exclude_pad_effect': True,
}, {
'testcase_name': '_max_pool_exclude',
'stride': 2,
'pooling_type': 'MAX',
'exclude_pad_effect': True,
}, {
'testcase_name': '_avg_pool',
'stride': 2,
'pooling_type': 'AVG',
'exclude_pad_effect': False,
}, {
'testcase_name': '_max_pool',
'stride': 2,
'pooling_type': 'MAX',
'exclude_pad_effect': False,
})
def testFunnelPoolingFixPaddingEffect(self, stride, pooling_type,
exclude_pad_effect):
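"""Compares FunnelPoolingLayer with paddings against a numpy reference."""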
with self.session(use_gpu=False) as sess:
bs = 2
sl = 10
d = 16
tf.random.set_seed(12345)
funnel_pooling_params = attention.FunnelPoolingLayer.Params().Set(
name='funnel_pool',
stride=stride,
pooling_type=pooling_type,
exclude_pad_effect=exclude_pad_effect)
l = funnel_pooling_params.Instantiate()
inputs_np = np.random.random([bs, sl, d]) * 10
non_pad_len = np.random.randint(sl // 2, sl, size=[bs])
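# Positions at or beyond each sequence's non-padded length count as padding.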
paddings_np = np.arange(sl)[None, :] >= non_pad_len[:, None]
paddings_np = paddings_np.astype(np.float64)
inputs = tf.constant(inputs_np, dtype=np.float64)
paddings = tf.constant(paddings_np, dtype=np.float64)
pooled_tensor, pooled_paddings = l.FPropDefaultTheta(inputs, paddings)
tf.global_variables_initializer().run()
pooled_tensor_np, pooled_paddings_np = sess.run(
[pooled_tensor, pooled_paddings])
self.assertAllEqual([bs, sl // stride, d], pooled_tensor_np.shape)
self.assertAllEqual([bs, sl // stride], pooled_paddings_np.shape)
self.assertAllClose(paddings_np[:, ::stride], pooled_paddings_np)
# Construct the ground truth by pooling with numpy.
inputs_4d = inputs_np.copy().reshape([bs, sl // stride, stride, d])
paddings_4d = paddings_np.copy().reshape([bs, sl // stride, stride, 1])
if pooling_type == 'AVG':
if exclude_pad_effect:
not_padding_4d = 1.0 - paddings_4d
target_tensor = np.sum(inputs_4d * not_padding_4d, axis=2)
target_tensor /= 1e-8 + np.sum(not_padding_4d, axis=2)
else:
target_tensor = np.mean(inputs_4d, axis=2)
elif pooling_type == 'MAX':
if exclude_pad_effect:
padding_mask = np.tile(paddings_4d > 0, [1, 1, 1, d])
inputs_4d[padding_mask] = np.finfo(inputs_4d.dtype).min
target_tensor = np.max(inputs_4d, axis=2)
target_tensor *= (1.0 - paddings_np[:, ::stride, None])
self.assertAllClose(target_tensor, pooled_tensor_np)
@parameterized.named_parameters(
{
'testcase_name': '_avg_pool_no_paddings',
'stride': 2,
'pooling_type': 'AVG',
}, {
'testcase_name': '_max_pool_no_paddings',
'stride': 2,
'pooling_type': 'MAX',
})
def testFunnelPoolingNoPaddings(self, stride, pooling_type):
with self.session(use_gpu=False) as sess:
bs = 2
sl = 10
d = 16
tf.random.set_seed(12345)
funnel_pooling_params = attention.FunnelPoolingLayer.Params().Set(
name='funnel_pool', stride=stride, pooling_type=pooling_type)
l = funnel_pooling_params.Instantiate()
inputs_np = np.random.random([bs, sl, d]) * 10
inputs = tf.constant(inputs_np, dtype=np.float64)
pooled_tensor = l.FPropDefaultTheta(inputs)
tf.global_variables_initializer().run()
pooled_tensor_np = sess.run(pooled_tensor)
with self.subTest('test_output_shape'):
self.assertAllEqual([bs, sl // stride, d], pooled_tensor_np.shape)
inputs_4d = inputs_np.copy().reshape([bs, sl // stride, stride, d])
if pooling_type == 'AVG':
target_tensor = np.sum(inputs_4d, axis=2) / 2
elif pooling_type == 'MAX':
target_tensor = np.max(inputs_4d, axis=2)
with self.subTest('test_output_value'):
self.assertAllClose(target_tensor, pooled_tensor_np)
@parameterized.named_parameters(
{
'testcase_name': '_baseline',
'split': 1,
'num_micro_batches': 1,
}, {
'testcase_name': '_split',
'split': 2,
'num_micro_batches': 1,
}, {
'testcase_name': '_gpipe',
'split': 2,
'num_micro_batches': 2,
})
def testFunnelTransformerStackWithSplit(self, split, num_micro_batches):
with self.session(use_gpu=False) as sess:
bs = 2
sl = 10
d = 16
tf.random.set_seed(12345)
atten_builder_params = attention.Builder.Params().Set(
model_dim=d,
num_heads=2,
ff_hidden_dim=5,
num_splits=split,
num_micro_batches=num_micro_batches,
deterministic_dropout=split > 1 or num_micro_batches > 1,
funnel_pool_tpl=attention.FunnelPoolingLayer.Params())
atten_builder = atten_builder_params.Instantiate()
layers = []
accumulate_stride = 1
for layer_i, stride in enumerate([1, 2]):
accumulate_stride *= stride
layers.append(
atten_builder.FunnelEncoderLayer(
name='atten_{}'.format(layer_i), stride=stride))
p = atten_builder.Seq('model', *layers)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
input_embs = tf.constant(
np.random.random(size=[bs, sl, d]), dtype=np.float64)
paddings = tf.zeros([bs, sl])
l_out = l.FPropDefaultTheta(
py_utils.NestedMap(vec=input_embs, paddings=paddings))
enc_out = l_out.vec
tf.global_variables_initializer().run()
actual_enc_out = sess.run(enc_out)
seq_len = sl // accumulate_stride
self.assertAllEqual([bs, seq_len, d], actual_enc_out.shape)
@parameterized.named_parameters(
{
'testcase_name': '_baseline',
'strides': [1, 1],
}, {
'testcase_name': '_stride_2',
'strides': [1, 2],
}, {
'testcase_name': '_stride_2_begin_intact_1_no_trunc',
'strides': [1, 2],
'begin_intact': 1,
'trunc_seq': False,
}, {
'testcase_name': '_stride_2_begin_intact_1_trunc',
'strides': [1, 2],
'begin_intact': 1,
'trunc_seq': True,
})
def testFunnelTransformerStackWithUpsampling(self,
strides,
begin_intact=0,
trunc_seq=True):
with self.session(use_gpu=False) as sess:
bs = 2
sl = 10
d = 16
tf.random.set_seed(12345)
atten_builder_params = attention.Builder.Params().Set(
model_dim=d,
num_heads=2,
ff_hidden_dim=5,
funnel_pool_tpl=attention.FunnelPoolingLayer.Params().Set(
begin_intact=begin_intact, trunc_seq=trunc_seq))
atten_builder = atten_builder_params.Instantiate()
layers = []
accumulate_stride = 1
for layer_i, stride in enumerate(strides):
accumulate_stride *= stride
layers.append(
atten_builder.FunnelEncoderLayer(
name='atten_{}'.format(layer_i), stride=stride))
p = atten_builder.Seq('model', *layers)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
upsample_p = attention.FunnelUpsampleLayer.Params().Set(
name='funnel_upsample',
begin_intact=begin_intact,
trunc_seq=trunc_seq,
upsample_rate=accumulate_stride)
l_upsample = upsample_p.Instantiate()
input_embs = tf.constant(
np.random.random(size=[bs, sl, d]), dtype=np.float64)
paddings = tf.zeros([bs, sl])
l_out = l.FPropDefaultTheta(
py_utils.NestedMap(vec=input_embs, paddings=paddings))
enc_out = l_out.vec
upsampled_out = l_upsample.FPropDefaultTheta(enc_out)
tf.global_variables_initializer().run()
actual_enc_out, actual_upsample_out = sess.run([enc_out, upsampled_out])
if (begin_intact == 0) or (begin_intact > 0 and trunc_seq):
seq_len = sl // accumulate_stride
elif begin_intact > 0 and not trunc_seq:
seq_len = sl
for stride in strides:
if stride > 1:
seq_len = begin_intact + int(
math.ceil((seq_len - begin_intact) / stride))
tf.logging.info('Pool out: %s, Upsample out: %s', actual_enc_out.shape,
actual_upsample_out.shape)
self.assertAllEqual([bs, seq_len, d], actual_enc_out.shape)
self.assertAllEqual([bs, sl, d], actual_upsample_out.shape)
def testFunnelTransformerWithDecoderUpsampling(self,
upsample_type='REPEAT',
upsample_shortcut_idx=0,
num_decoder_layers=1):
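"""Defaults exercise REPEAT upsampling with a shortcut and a 1-layer decoder."""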
with self.session(use_gpu=False) as sess:
bs = 2
sl = 10
d = 16
tf.random.set_seed(12345)
atten_builder_params = attention.Builder.Params().Set(
model_dim=d, num_heads=2, ff_hidden_dim=5)
atten_builder = atten_builder_params.Instantiate()
layers = []
strides = [1, 2]
accumulate_stride = 1
for layer_i, stride in enumerate(strides):
accumulate_stride *= stride
layers.append(
atten_builder.FunnelEncoderLayer(
name='atten_{}'.format(layer_i), stride=stride))
if upsample_shortcut_idx is not None:
p = atten_builder.Stack('stack', layers, output_all_layer_hiddens=True)
else:
p = atten_builder.Stack('stack', layers)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
upsample_p = attention.FunnelUpsampleLayer.Params().Set(
name='funnel_upsample',
upsample_rate=accumulate_stride,
upsample_type=upsample_type,
shortcut_index=upsample_shortcut_idx)
if num_decoder_layers:
decoder_layers = []
for i in range(num_decoder_layers):
decoder_layers.append(
atten_builder.TransformerEncoderLayer(
name='iter_{:0>3d}'.format(i), num_heads=2, ff_hidden_dim=5))
upsample_p.decoder_stack = atten_builder.Stack('stack', decoder_layers)
l_upsample = upsample_p.Instantiate()
input_embs = tf.constant(
np.random.random(size=[bs, sl, d]), dtype=np.float64)
paddings = tf.zeros([bs, sl])
if upsample_shortcut_idx is not None:
l_out = l.FPropDefaultTheta(
py_utils.NestedMap(vec=input_embs, paddings=paddings))
enc_out = l_out[-1].vec
upsampled_out = l_upsample.FPropDefaultTheta(enc_out, all_hiddens=l_out)
else:
l_out = l.FPropDefaultTheta(
py_utils.NestedMap(vec=input_embs, paddings=paddings))
enc_out = l_out.vec
upsampled_out = l_upsample.FPropDefaultTheta(enc_out)
tf.global_variables_initializer().run()
actual_enc_out, actual_upsample_out = sess.run([enc_out, upsampled_out])
seq_len = sl // accumulate_stride
self.assertAllEqual([bs, seq_len, d], actual_enc_out.shape)
self.assertAllEqual([bs, sl, d], actual_upsample_out.shape)
def testFunnelEncoderLayerWithPerLayerFfns(self):
with self.session(use_gpu=False) as sess:
bs = 2
sl = 10
d = 16
num_ffns_list = [2, 1, 3]
strides = [1, 2, 2]
tf.random.set_seed(12345)
atten_builder_params = attention.Builder.Params().Set(
model_dim=d,
num_heads=2,
ff_hidden_dim=5,
funnel_pool_tpl=attention.FunnelPoolingLayer.Params().Set())
atten_builder = atten_builder_params.Instantiate()
layers = []
for layer_i, stride in enumerate(strides):
layers.append(
atten_builder.FunnelEncoderLayer(
name='atten_{}'.format(layer_i),
stride=stride,
num_ffns=num_ffns_list[layer_i]))
p = atten_builder.Seq('model', *layers)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
input_embs = tf.constant(
          np.random.random(size=[bs, sl, d]), dtype=np.float64)
paddings = tf.zeros([bs, sl])
l_out = l.FPropDefaultTheta(
py_utils.NestedMap(vec=input_embs, paddings=paddings))
out = tf.reduce_sum(l_out.vec)
tf.global_variables_initializer().run()
actual_out = sess.run(out)
self.assertAllClose(actual_out, 79.52954)
@parameterized.named_parameters(
{
'testcase_name': '_baseline',
'strides': [1, 1],
}, {
'testcase_name': '_stride_2',
'strides': [2, 1],
}, {
'testcase_name': '_first_token',
'strides': [2, 0],
})
def testTransformerStackWithStride(self, strides):
with self.session(use_gpu=False) as sess:
bs = 2
sl = 10
d = 16
tf.random.set_seed(12345)
atten_builder = attention.Builder.Params().Set(
model_dim=d, num_heads=2, ff_hidden_dim=5).Instantiate()
layers = []
accumulate_stride = 1
for layer_i, stride in enumerate(strides):
accumulate_stride *= stride
layers.append(
atten_builder.TransformerEncoderLayer(
name='atten_{}'.format(layer_i), stride=stride))
p = atten_builder.Seq('model', *layers)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
input_embs = tf.constant(
          np.random.random(size=[bs, sl, d]), dtype=np.float64)
paddings = tf.zeros([bs, sl])
l_out = l.FPropDefaultTheta(
py_utils.NestedMap(vec=input_embs, paddings=paddings))
enc_out = l_out.vec
tf.global_variables_initializer().run()
actual_enc_out = sess.run(enc_out)
seq_len = sl // accumulate_stride if accumulate_stride != 0 else 1
self.assertAllEqual([bs, seq_len, d], actual_enc_out.shape)
@parameterized.named_parameters(
{
'testcase_name': '_baseline',
'strides': [1, 1],
}, {
'testcase_name': '_stride_2',
'strides': [2, 1],
}, {
'testcase_name': '_first_token',
'strides': [2, 0],
})
def testTransformerStackWithStochasticDepth(self, strides):
with self.session(use_gpu=False) as sess:
bs = 2
sl = 10
d = 16
tf.random.set_seed(12345)
atten_builder = attention.Builder.Params().Set(
model_dim=d, num_heads=2, ff_hidden_dim=5,
survival_prob=0.9).Instantiate()
layers = []
accumulate_stride = 1
for layer_i, stride in enumerate(strides):
accumulate_stride *= stride
layers.append(
atten_builder.TransformerEncoderLayer(
name='atten_{}'.format(layer_i), stride=stride))
p = atten_builder.Seq('model', *layers)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
input_embs = tf.constant(
          np.random.random(size=[bs, sl, d]), dtype=np.float64)
paddings = tf.zeros([bs, sl])
l_out = l.FPropDefaultTheta(
py_utils.NestedMap(vec=input_embs, paddings=paddings))
enc_out = l_out.vec
tf.global_variables_initializer().run()
actual_enc_out = sess.run(enc_out)
seq_len = sl // accumulate_stride if accumulate_stride != 0 else 1
self.assertAllEqual([bs, seq_len, d], actual_enc_out.shape)
@parameterized.named_parameters(
{
'testcase_name': '_baseline',
'strides': [(1, 6), (1, 3), 3],
}, {
'testcase_name': '_stride_2',
'strides': [(2, 4), (1, None), 2],
}, {
'testcase_name': '_first_token',
'strides': [(2, 5), (0, None), 1],
})
def testTransformerStackWithStrideAndOutLength(self, strides):
with self.session(use_gpu=False) as sess:
bs = 2
sl = 10
d = 16
tf.random.set_seed(12345)
atten_builder = attention.Builder.Params().Set(
model_dim=d, num_heads=2, ff_hidden_dim=5).Instantiate()
layers = []
out_seq_len = strides.pop()
for layer_i, (stride, first_n) in enumerate(strides):
layers.append(
atten_builder.TransformerEncoderLayer(
name='atten_{}'.format(layer_i), stride=stride,
first_n=first_n))
p = atten_builder.Seq('model', *layers)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
input_embs = tf.constant(
          np.random.random(size=[bs, sl, d]), dtype=np.float64)
paddings = tf.zeros([bs, sl])
l_out = l.FPropDefaultTheta(
py_utils.NestedMap(vec=input_embs, paddings=paddings))
enc_out = l_out.vec
tf.global_variables_initializer().run()
actual_enc_out = sess.run(enc_out)
self.assertAllEqual([bs, out_seq_len, d], actual_enc_out.shape)
@parameterized.named_parameters({
'testcase_name': '_baseline',
}, {
'testcase_name': '_first_token',
'first_n': 1,
}, {
'testcase_name': '_pack_sequences',
'pack_sequences': 2,
}, {
'testcase_name': '_pack_sequences_first_token',
'pack_sequences': 2,
'first_n': 1,
})
def testStridingWithPackedInput(self, pack_sequences=None, first_n=None):
with self.session(use_gpu=False) as sess:
np.random.seed(123)
bs = 2
sl = 10
d = 16
input_embs = tf.constant(
          np.random.random(size=[bs, sl, d]), dtype=np.float64)
paddings = tf.zeros([bs, sl])
segment_mask = None
if pack_sequences:
# Pack multiple original sequences into one, delineated with
# segment_mask.
input_embs = tf.reshape(input_embs,
[bs // pack_sequences, pack_sequences * sl, d])
paddings = tf.reshape(paddings,
[bs // pack_sequences, pack_sequences * sl])
segment_ids = tf.reshape(
tf.cumsum(tf.ones([bs, sl]), axis=0),
[bs // pack_sequences, pack_sequences * sl])
segment_mask = attention.SegmentMask(segment_ids, segment_ids)
tf.random.set_seed(12345)
atten_builder = attention.Builder.Params().Set(
model_dim=d,
num_heads=2,
ff_hidden_dim=5,
packed_input=pack_sequences is not None).Instantiate()
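      # Choose (stride, first_n) for the layer under test: without first_n,
      # use plain stride-1 attention; with first_n on packed inputs, stride=sl
      # emits the first token of each packed sequence; with first_n alone,
      # stride=0/first_n=1 keeps only the first token.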
if first_n is None:
stride, atten_first_n = (1, None)
elif pack_sequences:
stride, atten_first_n = (sl, None)
else:
stride, atten_first_n = (0, 1)
p = atten_builder.TransformerEncoderLayer(
name='trans', stride=stride, first_n=atten_first_n)
p.random_seed = 1234
l = p.Instantiate()
l_in = py_utils.NestedMap(vec=input_embs, paddings=paddings)
if segment_mask is not None:
l_in.segment_mask = segment_mask
l_out = l.FPropDefaultTheta(l_in)
enc_out = l_out.vec
# Get the first token outputs.
if pack_sequences:
out_segment_mask = l_out.segment_mask
if first_n:
enc_out = py_utils.HasShape(enc_out,
[bs // pack_sequences, pack_sequences, d])
enc_out = tf.reshape(enc_out, [bs, d])
self.assertAllEqual(
out_segment_mask.shape,
[bs // pack_sequences, 1, pack_sequences, pack_sequences])
else:
enc_out = py_utils.HasShape(
enc_out, [bs // pack_sequences, pack_sequences * sl, d])
enc_out = tf.reshape(enc_out, [bs, sl, d])
enc_out = enc_out[:, 0, :]
self.assertAllEqual(out_segment_mask.shape, [
bs // pack_sequences, 1, pack_sequences * sl, pack_sequences * sl
])
else:
if first_n:
enc_out = py_utils.HasShape(enc_out, [bs, 1, d])
enc_out = tf.reshape(enc_out, [bs, 1, d])
else:
enc_out = py_utils.HasShape(enc_out, [bs, sl, d])
enc_out = enc_out[:, 0, :]
tf.global_variables_initializer().run()
self.assertAllClose(20.82248, sess.run(tf.reduce_sum(enc_out)))
def testTransformerEncoderWithGatedGelu(self):
with self.session(use_gpu=False) as sess:
bs = 2
sl = 10
d = 16
tf.random.set_seed(12345)
atten_builder = attention.Builder.Params().Set(
model_dim=d, num_heads=2, ff_hidden_dim=5).Instantiate()
# TODO(huangyp): Change to GatedGeluFeedforward once tf.nn.gelu is in
# latest release of tensorflow.
encoder_block = atten_builder.Seq(
'block', atten_builder._StridedAttention('self_atten', num_heads=2),
atten_builder.Feedforward('ff', ff_hidden_dim=5))
layers = []
for layer_i in range(2):
layers.append(
atten_builder.Seq('atten_{}'.format(layer_i), encoder_block))
p = atten_builder.Seq('model', *layers)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
input_embs = tf.constant(
          np.random.random(size=[bs, sl, d]), dtype=np.float64)
paddings = tf.zeros([bs, sl])
l_out = l.FPropDefaultTheta(
py_utils.NestedMap(vec=input_embs, paddings=paddings))
enc_out = l_out.vec
tf.global_variables_initializer().run()
actual_enc_out = sess.run(enc_out)
self.assertAllEqual([bs, sl, d], actual_enc_out.shape)
def testEncoderLayerWithPerLayerParam(self):
with self.session(use_gpu=False) as sess:
bs = 2
sl = 10
d = 16
tf.random.set_seed(398847392)
np.random.seed(12345)
heads = [1, 2, 4]
ff_dims = [16, 32, 16]
atten_builder = attention.Builder.Params().Set(
model_dim=16, num_heads=heads, ff_hidden_dim=ff_dims).Instantiate()
layers = []
for layer_i, (head, ff_dim) in enumerate(zip(heads, ff_dims)):
layers.append(
atten_builder.TransformerEncoderLayer(
name='atten_{}'.format(layer_i),
ff_hidden_dim=ff_dim,
num_heads=head,
stride=1 if layer_i < 2 else 0))
p = atten_builder.Seq('model', *layers)
p.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = p.Instantiate()
input_embs = tf.constant(
          np.random.random(size=[bs, sl, d]), dtype=np.float64)
paddings = tf.zeros([bs, sl])
l_out = l.FPropDefaultTheta(
py_utils.NestedMap(vec=input_embs, paddings=paddings))
out = tf.reduce_sum(l_out.vec)
tf.global_variables_initializer().run()
actual_out = sess.run(out)
self.assertAllClose(actual_out, 17.40516)
def testSerialization(self):
heads = [1, 2, 4]
ff_dims = [16, 32, 16]
atten_builder = attention.Builder.Params().Set(
model_dim=16, num_heads=heads, ff_hidden_dim=ff_dims).Instantiate()
layers = []
for layer_i, (head, ff_dim) in enumerate(zip(heads, ff_dims)):
layers.append(
atten_builder.TransformerEncoderLayer(
name='atten_{}'.format(layer_i),
ff_hidden_dim=ff_dim,
num_heads=head,
stride=1 if layer_i < 2 else 0))
p = atten_builder.Seq('model', *layers)
serialized = p.ToProto()
p2 = hyperparams.InstantiableParams.FromProto(serialized)
self.assertLen(p2.sub, len(p.sub))
class LmBuilderTest(test_utils.TestCase):
def _testGraph(self, dtype=tf.float32):
tf.random.set_seed(398847392)
np.random.seed(12345)
atten_builder = attention.LmBuilder.Params().Set(
model_dim=4, num_heads=2, ff_hidden_dim=16, dtype=dtype)
params = atten_builder.Instantiate().TransformerEncoderStack(
name='xformer', num_layers=2)
params.dtype = dtype
params.random_seed = 0
params.params_init = py_utils.WeightInit.Xavier(scale=1.0, seed=0)
l = params.Instantiate()
l_in = tf.constant(np.random.rand(2, 3, 4), dtype=dtype)
l_padding = tf.zeros([2, 3], dtype=dtype)
l_out = l.FPropDefaultTheta(
py_utils.NestedMap(vec=l_in, paddings=l_padding))
return l_out.vec
def testFprop(self):
with self.session(use_gpu=False, graph=tf.Graph()) as sess:
l_out = self._testGraph()
l_out = tf.reduce_sum(l_out)
tf.global_variables_initializer().run()
l_out_eval = sess.run(l_out)
self.assertAllClose(36.04808, l_out_eval)
def testBProp(self):
with self.session(use_gpu=True) as sess:
output = self._testGraph(dtype=tf.float64)
loss = tf.reduce_sum(output)
all_vars = tf.trainable_variables()
grads = tf.gradients(loss, all_vars)
tf.global_variables_initializer().run()
sym_grads = [sg.eval() for sg in grads]
num_grads = [
test_utils.ComputeNumericGradient(sess, loss, v) for v in all_vars
]
for ng, sg in zip(num_grads, sym_grads):
self.assertAllClose(ng, sg, rtol=5e-02, atol=5e-02)
def _CreateDummyParams(field_names):
p = hyperparams.Params()
for name in field_names:
p.Define(name, None, 'Dummy')
return p
class DummyDecoderRNNT(base_layer.BaseLayer):
@classmethod
def Params(cls):
p = super().Params()
p.name = 'dummy_decoder_rnnt'
p.Define('emb', _CreateDummyParams(['vocab_size']), 'Dummy emb.')
p.Define('target_seq_len', 20, 'Dummy target seq len.')
p.Define('num_classes', None, 'Dummy num classes.')
return p
@classmethod
def UpdateTargetVocabSize(cls, p, vocab_size, wpm_model=None):
p.emb.vocab_size = vocab_size
p.num_classes = vocab_size
return p
class RelativeAttentionHelperTest(test_utils.TestCase, parameterized.TestCase):
@parameterized.named_parameters(
('MultiHeadedAttentionXL', attention.MultiHeadedAttentionXL,
attention.MultiHeadedAttention),
('LocalSelfAttentionXL', attention.LocalSelfAttentionXL,
attention.LocalSelfAttention))
def testClearRelativeAttentionInTransformerLayer(self, atten_cls,
expected_atten_cls):
"""Tests scenarios in clear relative attention in transformer layer."""
trans_p = attention.TransformerLayer.Params()
# set attention params in transformer layer.
input_dim = 4
rel_pos_emb_dim = 4
# Set rel_pos_emb_dim in attention params.
trans_p.tr_atten_tpl.atten_tpl = (
atten_cls.Params().Set(
input_dim=input_dim, rel_pos_emb_dim=rel_pos_emb_dim))
new_trans_p = attention.ClearRelativeAttentionInTransformerLayer(trans_p)
tr_atten_tpl = new_trans_p.tr_self_atten_tpl.atten_tpl
self.assertEqual(tr_atten_tpl.cls, expected_atten_cls)
self.assertEqual(tr_atten_tpl.input_dim, input_dim)
def testClearRelativeAttentionTransformerLayerNotSupportedError(self):
transformer_params = DummyDecoderRNNT.Params()
with self.assertRaises(ValueError):
_ = attention.ClearRelativeAttentionInTransformerLayer(transformer_params)
def testClearRelativeAttentionAttentionParamsNotSupportedError(self):
trans_p = attention.TransformerLayer.Params()
# MultiHeadedAttention is not supported in ClearRelativeAttention.
attention_params = attention.MultiHeadedAttention.Params()
trans_p.tr_atten_tpl.atten_tpl = attention_params
with self.assertRaises(ValueError):
_ = attention.ClearRelativeAttentionInTransformerLayer(trans_p)
@parameterized.named_parameters(
('AttentionParamsNotSupported', _CreateDummyParams(
['name', 'cls']), attention.ATTEN_TRANSFORMER_XL),
('AttentionTypeNotSupported', attention.MultiHeadedAttention.Params(),
'unsupported_atten_type'))
def testUseRelativeAttentionInTransformerLayerValueError(
self, attention_params, attention_type):
"""Tests unsupported Use Relative Attention cases."""
transformer_param = attention.TransformerLayer.Params()
transformer_param.tr_atten_tpl.atten_tpl = attention_params
rel_pos_emb_dim = 4
with self.assertRaises(ValueError):
_ = attention.UseRelativeAttentionInTransformerLayer(
transformer_param, rel_pos_emb_dim, atten_type=attention_type)
def testUseRelativeAttentionInTransformerLayerNotSupportedError(self):
"""Tests unsupported input transformer params in Use Relative Attention."""
transformer_params = DummyDecoderRNNT.Params()
with self.assertRaises(ValueError):
_ = attention.UseRelativeAttentionInTransformerLayer(
transformer_params, 4, atten_type=attention.ATTEN_TRANSFORMER_XL)
@parameterized.named_parameters(
('MultiHeadedAttention', attention.MultiHeadedAttention,
attention.MultiHeadedAttentionXL, attention.ATTEN_TRANSFORMER_XL),
('LocalSelfAttention', attention.LocalSelfAttention,
attention.LocalSelfAttentionXL, attention.ATTEN_TRANSFORMER_XL),
('MultiHeadedAttentionRPE', attention.MultiHeadedAttention,
attention.MultiHeadedAttentionRPE, attention.ATTEN_RPE))
def testUseRelativeAttentionInTransformerLayer(self, atten_cls,
expected_atten_cls,
atten_type):
"""Tests different scenarios in Use Relative Attention."""
trans_p = attention.TransformerLayer.Params()
    # set attention params in transformer layer.
input_dim = 4
trans_p.tr_atten_tpl.atten_tpl = atten_cls.Params().Set(input_dim=input_dim)
rel_pos_emb_dim = 4
new_trans_p = attention.UseRelativeAttentionInTransformerLayer(
trans_p, rel_pos_emb_dim, atten_type=atten_type)
tr_atten_tpl = new_trans_p.tr_self_atten_tpl.atten_tpl
self.assertEqual(tr_atten_tpl.cls, expected_atten_cls)
self.assertEqual(tr_atten_tpl.rel_pos_emb_dim, rel_pos_emb_dim)
self.assertEqual(tr_atten_tpl.input_dim, input_dim)
class ResidualAddLayerTest(test_utils.TestCase, parameterized.TestCase):
@parameterized.named_parameters(
{
'testcase_name': 'apply_residual',
'apply_residual': True,
'residual_weight': 1.0,
'expected_output': [[0.3, 0.5, 0.7]]
}, {
'testcase_name': 'no_residual',
'apply_residual': False,
'residual_weight': 1.0,
'expected_output': [[0.2, 0.3, 0.4]]
}, {
'testcase_name': 'apply_residual_w_weight',
'apply_residual': True,
'residual_weight': 0.5,
'expected_output': [[0.2, 0.35, 0.5]]
}, {
'testcase_name': 'no_residual_w_weight',
'apply_residual': False,
'residual_weight': 0.5,
'expected_output': [[0.1, 0.15, 0.2]]
})
  def testResidualAddLayer(self, apply_residual, residual_weight,
                           expected_output):
x = tf.constant([[0.1, 0.2, 0.3]])
fx = tf.constant([[0.2, 0.3, 0.4]])
p = attention.ResidualAddLayer.Params().Set(
name='residual_test',
residual_weight=residual_weight,
apply_residual=apply_residual)
l = p.Instantiate()
ret = l.FPropDefaultTheta(x, fx)
init = tf.group(
[tf.global_variables_initializer(),
tf.local_variables_initializer()])
with self.session(use_gpu=False) as sess:
sess.run(init)
ret_val = sess.run(ret)
self.assertAllClose(ret_val, np.array(expected_output))
if __name__ == '__main__':
tf.test.main()
|
tensorflow/lingvo
|
lingvo/core/batch_major_attention_test.py
|
Python
|
apache-2.0
| 195,451
|
[
"Gaussian"
] |
81708869351170c260a5452665a8c3df31a270b6923c681401a8f746f6cc3b46
|
#!/usr/bin/env python
"""A build script which (thus far) works on Ubuntu 14."""
# TODO(powdercloud): Make a gulp file or similar for this. For now
# it's simply split off from the main build.py in the parent
# directory, but this is not an idiomatic use to build a Javascript or
# Polymer project, and unlike for the parent directory there's no
# particular benefit to using Python.
from __future__ import print_function
import logging
import os
import platform
import re
import shutil
import subprocess
import sys
import tempfile
def Die(msg):
"""Prints error and exits with status 1.
Args:
msg: The error message to emit
"""
print(msg, file=sys.stderr)
sys.exit(1)
def GetNodeJsCmd():
"""Ensure Node.js is installed and return the proper command to run."""
logging.info('entering ...')
for cmd in ['node', 'nodejs']:
try:
output = subprocess.check_output([cmd, '--eval', 'console.log("42")'])
if output.strip() == b'42':
logging.info('... done')
return cmd
except (subprocess.CalledProcessError, OSError):
continue
Die('Node.js not found. Try "apt-get install nodejs".')
def CheckPrereqs():
"""Checks that various prerequisites for this script are satisfied."""
logging.info('entering ...')
if platform.system() != 'Linux' and platform.system() != 'Darwin':
Die('Sorry, this script assumes Linux or Mac OS X thus far. '
'Please feel free to edit the source and fix it to your needs.')
# Ensure source files are available.
for f in ['webui.js', 'index.html',
'logo-blue.svg', 'package.json']:
if not os.path.exists(f):
Die('%s not found. Must run in amp_validator source directory.' % f)
def SetupOutDir(out_dir):
"""Sets up a clean output directory.
Args:
out_dir: directory name of the output directory. Must not have slashes,
dots, etc.
"""
logging.info('entering ...')
assert re.match(r'^[a-zA-Z_\-0-9]+$', out_dir), 'bad out_dir: %s' % out_dir
if os.path.exists(out_dir):
subprocess.check_call(['rm', '-rf', out_dir])
os.mkdir(out_dir)
logging.info('... done')
def InstallNodeDependencies():
"""Installs the dependencies using npm install."""
logging.info('entering ...')
# Install the project dependencies specified in package.json into
# node_modules.
logging.info('installing AMP Validator webui dependencies ...')
subprocess.check_call(
['npm', 'install', '--userconfig', '../../../.npmrc'],
stdout=(open(os.devnull, 'wb') if os.environ.get('CI') else sys.stdout))
logging.info('... done')
def CreateWebuiAppengineDist(out_dir):
"""Creates the webui vulcanized directory to deploy to Appengine.
Args:
out_dir: directory name of the output directory. Must not have slashes,
dots, etc.
"""
logging.info('entering ...')
try:
tempdir = tempfile.mkdtemp()
# Merge the contents of webui with the installed node_modules into a
# common root (a temp directory). This lets us use the vulcanize tool.
for entry in os.listdir('.'):
if entry != 'node_modules':
if os.path.isfile(entry):
shutil.copyfile(entry, os.path.join(tempdir, entry))
else:
shutil.copytree(entry, os.path.join(tempdir, entry))
for entry in os.listdir('node_modules'):
if not os.path.isdir('node_modules/' + entry):
continue
elif entry == 'web-animations-js':
shutil.copytree(os.path.join('node_modules', entry),
os.path.join(tempdir, '@polymer', entry))
elif entry != '@polymer':
shutil.copytree(os.path.join('node_modules', entry),
os.path.join(tempdir, entry))
for entry in os.listdir('node_modules/@polymer'):
shutil.copytree(os.path.join('node_modules/@polymer', entry),
os.path.join(tempdir, '@polymer', entry))
vulcanized_index_html = subprocess.check_output([
'node_modules/vulcanize/bin/vulcanize',
'--inline-scripts', '--inline-css',
'-p', tempdir, 'index.html'])
finally:
shutil.rmtree(tempdir)
webui_out = os.path.join(out_dir, 'webui_appengine')
shutil.copytree('.', webui_out, ignore=shutil.ignore_patterns('dist'))
  with open(os.path.join(webui_out, 'index.html'), 'wb') as f:
    f.write(vulcanized_index_html)
  with open(os.path.join(webui_out, 'legacy.html'), 'wb') as f:
    f.write(vulcanized_index_html.replace(
        b'https://cdn.ampproject.org/v0/validator_wasm.js',
        b'https://cdn.ampproject.org/v0/validator.js', 1))
logging.info('... success')
def Main():
"""The main method, which executes all build steps and runs the tests."""
logging.basicConfig(
format='[[%(filename)s %(funcName)s]] - %(message)s',
level=(logging.ERROR if os.environ.get('CI') else logging.INFO))
GetNodeJsCmd()
CheckPrereqs()
InstallNodeDependencies()
SetupOutDir(out_dir='dist')
CreateWebuiAppengineDist(out_dir='dist')
if __name__ == '__main__':
Main()
|
honeybadgerdontcare/amphtml
|
validator/js/webui/build.py
|
Python
|
apache-2.0
| 4,986
|
[
"GULP"
] |
dda4b815f7eeb060b00ccd54e7d3a17b84904816a0262d0f1a4c3cabc2b07187
|
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Preprocess images and bounding boxes for detection.
We perform two sets of operations in preprocessing stage:
(a) operations that are applied to both training and testing data,
(b) operations that are applied only to training data for the purpose of
data augmentation.
A preprocessing function receives a set of inputs,
e.g. an image and bounding boxes,
performs an operation on them, and returns them.
Some examples are: randomly cropping the image, randomly mirroring the image,
randomly changing the brightness, contrast, hue and
randomly jittering the bounding boxes.
The preprocess function receives a tensor_dict which is a dictionary that maps
different field names to their tensors. For example,
tensor_dict[fields.InputDataFields.image] holds the image tensor.
The image is a rank 4 tensor: [1, height, width, channels] with
dtype=tf.float32. The groundtruth_boxes is a rank 2 tensor: [N, 4] where
in each row there is a box with [ymin xmin ymax xmax].
Boxes are in normalized coordinates meaning
their coordinate values range in [0, 1].
To preprocess multiple images with the same operations in cases where
nondeterministic operations are used, a preprocessor_cache.PreprocessorCache
object can be passed into the preprocess function or individual operations.
All nondeterministic operations except random_jitter_boxes support caching.
E.g.
Let tensor_dict{1,2,3,4,5} be copies of the same inputs.
Let preprocess_options contain nondeterministic operation(s) excluding
random_jitter_boxes.
cache1 = preprocessor_cache.PreprocessorCache()
cache2 = preprocessor_cache.PreprocessorCache()
a = preprocess(tensor_dict1, preprocess_options, preprocess_vars_cache=cache1)
b = preprocess(tensor_dict2, preprocess_options, preprocess_vars_cache=cache1)
c = preprocess(tensor_dict3, preprocess_options, preprocess_vars_cache=cache2)
d = preprocess(tensor_dict4, preprocess_options, preprocess_vars_cache=cache2)
e = preprocess(tensor_dict5, preprocess_options)
Then the corresponding tensors of object pairs (a,b) and (c,d)
are guaranteed to be equal element-wise, but the equality of any other object
pair cannot be determined.
Important Note: In tensor_dict, images is a rank 4 tensor, but preprocessing
functions receive a rank 3 tensor for processing the image. Thus, inside the
preprocess function we squeeze the image to become a rank 3 tensor and then
we pass it to the functions. At the end of the preprocess we expand the image
back to rank 4.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import functools
import inspect
import sys
import six
from six.moves import range
from six.moves import zip
import tensorflow.compat.v1 as tf
from tensorflow.python.ops import control_flow_ops
from object_detection.core import box_list
from object_detection.core import box_list_ops
from object_detection.core import densepose_ops
from object_detection.core import keypoint_ops
from object_detection.core import preprocessor_cache
from object_detection.core import standard_fields as fields
from object_detection.utils import autoaugment_utils
from object_detection.utils import ops
from object_detection.utils import patch_ops
from object_detection.utils import shape_utils
def _apply_with_random_selector(x,
func,
num_cases,
preprocess_vars_cache=None,
key=''):
"""Computes func(x, sel), with sel sampled from [0...num_cases-1].
If both preprocess_vars_cache AND key are the same between two calls, sel will
be the same value in both calls.
Args:
x: input Tensor.
func: Python function to apply.
num_cases: Python int32, number of cases to sample sel from.
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
key: variable identifier for preprocess_vars_cache.
Returns:
The result of func(x, sel), where func receives the value of the
selector as a python integer, but sel is sampled dynamically.
"""
generator_func = functools.partial(
tf.random_uniform, [], maxval=num_cases, dtype=tf.int32)
rand_sel = _get_or_create_preprocess_rand_vars(
generator_func, preprocessor_cache.PreprocessorCache.SELECTOR,
preprocess_vars_cache, key)
# Pass the real x only to one of the func calls.
return control_flow_ops.merge([func(
control_flow_ops.switch(x, tf.equal(rand_sel, case))[1], case)
for case in range(num_cases)])[0]
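# Illustrative sketch (not part of the upstream API; names are hypothetical):
# applies one of four brightness shifts, with the branch chosen by the shared
# random selector so that repeated calls with the same cache and key agree.
def _example_random_brightness_selector(image, preprocess_vars_cache=None):
  return _apply_with_random_selector(
      image,
      lambda x, ordering: tf.image.adjust_brightness(x, delta=0.05 * ordering),
      num_cases=4,
      preprocess_vars_cache=preprocess_vars_cache,
      key='example_brightness')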
def _apply_with_random_selector_tuples(x,
func,
num_cases,
preprocess_vars_cache=None,
key=''):
"""Computes func(x, sel), with sel sampled from [0...num_cases-1].
If both preprocess_vars_cache AND key are the same between two calls, sel will
be the same value in both calls.
Args:
x: A tuple of input tensors.
func: Python function to apply.
num_cases: Python int32, number of cases to sample sel from.
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
key: variable identifier for preprocess_vars_cache.
Returns:
The result of func(x, sel), where func receives the value of the
selector as a python integer, but sel is sampled dynamically.
"""
num_inputs = len(x)
generator_func = functools.partial(
tf.random_uniform, [], maxval=num_cases, dtype=tf.int32)
rand_sel = _get_or_create_preprocess_rand_vars(
generator_func, preprocessor_cache.PreprocessorCache.SELECTOR_TUPLES,
preprocess_vars_cache, key)
# Pass the real x only to one of the func calls.
tuples = [list() for t in x]
for case in range(num_cases):
new_x = [control_flow_ops.switch(t, tf.equal(rand_sel, case))[1] for t in x]
output = func(tuple(new_x), case)
for j in range(num_inputs):
tuples[j].append(output[j])
for i in range(num_inputs):
tuples[i] = control_flow_ops.merge(tuples[i])[0]
return tuple(tuples)
def _get_or_create_preprocess_rand_vars(generator_func,
function_id,
preprocess_vars_cache,
key=''):
"""Returns a tensor stored in preprocess_vars_cache or using generator_func.
If the tensor was previously generated and appears in the PreprocessorCache,
the previously generated tensor will be returned. Otherwise, a new tensor
is generated using generator_func and stored in the cache.
Args:
generator_func: A 0-argument function that generates a tensor.
function_id: identifier for the preprocessing function used.
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
key: identifier for the variable stored.
Returns:
The generated tensor.
"""
if preprocess_vars_cache is not None:
var = preprocess_vars_cache.get(function_id, key)
if var is None:
var = generator_func()
preprocess_vars_cache.update(function_id, key, var)
else:
var = generator_func()
return var
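# Sketch of the caching contract (function and key names here are
# illustrative): with a non-None cache, the second lookup returns the tensor
# stored by the first, so both calls observe the same random value.
def _example_cached_rand_var(preprocess_vars_cache):
  generator_func = functools.partial(tf.random_uniform, [], -0.1, 0.1)
  first = _get_or_create_preprocess_rand_vars(
      generator_func, preprocessor_cache.PreprocessorCache.ADJUST_BRIGHTNESS,
      preprocess_vars_cache, key='example')
  second = _get_or_create_preprocess_rand_vars(
      generator_func, preprocessor_cache.PreprocessorCache.ADJUST_BRIGHTNESS,
      preprocess_vars_cache, key='example')
  return first is second  # True whenever preprocess_vars_cache is not None.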
def _random_integer(minval, maxval, seed):
"""Returns a random 0-D tensor between minval and maxval.
Args:
minval: minimum value of the random tensor.
maxval: maximum value of the random tensor.
seed: random seed.
Returns:
A random 0-D tensor between minval and maxval.
"""
return tf.random_uniform(
[], minval=minval, maxval=maxval, dtype=tf.int32, seed=seed)
# TODO(mttang): This method is needed because the current
# tf.image.rgb_to_grayscale method does not support quantization. Replace with
# tf.image.rgb_to_grayscale after quantization support is added.
def _rgb_to_grayscale(images, name=None):
"""Converts one or more images from RGB to Grayscale.
Outputs a tensor of the same `DType` and rank as `images`. The size of the
last dimension of the output is 1, containing the Grayscale value of the
pixels.
Args:
images: The RGB tensor to convert. Last dimension must have size 3 and
should contain RGB values.
name: A name for the operation (optional).
Returns:
The converted grayscale image(s).
"""
with tf.name_scope(name, 'rgb_to_grayscale', [images]) as name:
images = tf.convert_to_tensor(images, name='images')
    # Remember the original dtype so we can convert back if needed.
orig_dtype = images.dtype
flt_image = tf.image.convert_image_dtype(images, tf.float32)
# Reference for converting between RGB and grayscale.
# https://en.wikipedia.org/wiki/Luma_%28video%29
rgb_weights = [0.2989, 0.5870, 0.1140]
rank_1 = tf.expand_dims(tf.rank(images) - 1, 0)
gray_float = tf.reduce_sum(
flt_image * rgb_weights, rank_1, keep_dims=True)
gray_float.set_shape(images.get_shape()[:-1].concatenate([1]))
return tf.image.convert_image_dtype(gray_float, orig_dtype, name=name)
def normalize_image(image, original_minval, original_maxval, target_minval,
target_maxval):
"""Normalizes pixel values in the image.
Moves the pixel values from the current [original_minval, original_maxval]
  range to the [target_minval, target_maxval] range.
Args:
image: rank 3 float32 tensor containing 1
image -> [height, width, channels].
original_minval: current image minimum value.
original_maxval: current image maximum value.
target_minval: target image minimum value.
target_maxval: target image maximum value.
Returns:
image: image which is the same shape as input image.
"""
with tf.name_scope('NormalizeImage', values=[image]):
original_minval = float(original_minval)
original_maxval = float(original_maxval)
target_minval = float(target_minval)
target_maxval = float(target_maxval)
image = tf.cast(image, dtype=tf.float32)
image = tf.subtract(image, original_minval)
image = tf.multiply(image, (target_maxval - target_minval) /
(original_maxval - original_minval))
image = tf.add(image, target_minval)
return image
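# Example usage (a minimal sketch): map a [0, 255]-valued image tensor onto
# [-1, 1], i.e. image' = (image - 0) * (1 - (-1)) / (255 - 0) + (-1).
def _example_normalize_to_unit_range(image):
  return normalize_image(image, original_minval=0, original_maxval=255,
                         target_minval=-1.0, target_maxval=1.0)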
def retain_boxes_above_threshold(boxes,
labels,
label_weights,
label_confidences=None,
multiclass_scores=None,
masks=None,
keypoints=None,
threshold=0.0):
"""Retains boxes whose label weight is above a given threshold.
If the label weight for a box is missing (represented by NaN), the box is
retained. The boxes that don't pass the threshold will not appear in the
returned tensor.
Args:
boxes: float32 tensor of shape [num_instance, 4] representing boxes
location in normalized coordinates.
labels: rank 1 int32 tensor of shape [num_instance] containing the object
classes.
label_weights: float32 tensor of shape [num_instance] representing the
weight for each box.
label_confidences: float32 tensor of shape [num_instance] representing the
confidence for each box.
multiclass_scores: (optional) float32 tensor of shape
[num_instances, num_classes] representing the score for each box for each
class.
masks: (optional) rank 3 float32 tensor with shape
[num_instances, height, width] containing instance masks. The masks are of
the same height, width as the input `image`.
keypoints: (optional) rank 3 float32 tensor with shape
[num_instances, num_keypoints, 2]. The keypoints are in y-x normalized
coordinates.
threshold: scalar python float.
Returns:
retained_boxes: [num_retained_instance, 4]
    retained_labels: [num_retained_instance]
retained_label_weights: [num_retained_instance]
If multiclass_scores, masks, or keypoints are not None, the function also
returns:
retained_multiclass_scores: [num_retained_instance, num_classes]
retained_masks: [num_retained_instance, height, width]
retained_keypoints: [num_retained_instance, num_keypoints, 2]
"""
with tf.name_scope('RetainBoxesAboveThreshold',
values=[boxes, labels, label_weights]):
indices = tf.where(
tf.logical_or(label_weights > threshold, tf.is_nan(label_weights)))
indices = tf.squeeze(indices, axis=1)
retained_boxes = tf.gather(boxes, indices)
retained_labels = tf.gather(labels, indices)
retained_label_weights = tf.gather(label_weights, indices)
result = [retained_boxes, retained_labels, retained_label_weights]
if label_confidences is not None:
retained_label_confidences = tf.gather(label_confidences, indices)
result.append(retained_label_confidences)
if multiclass_scores is not None:
retained_multiclass_scores = tf.gather(multiclass_scores, indices)
result.append(retained_multiclass_scores)
if masks is not None:
retained_masks = tf.gather(masks, indices)
result.append(retained_masks)
if keypoints is not None:
retained_keypoints = tf.gather(keypoints, indices)
result.append(retained_keypoints)
return result
def drop_label_probabilistically(boxes,
labels,
label_weights,
label_confidences=None,
multiclass_scores=None,
masks=None,
keypoints=None,
dropped_label=None,
drop_probability=0.0,
seed=None):
"""Drops boxes of a certain label with probability drop_probability.
Boxes of the label dropped_label will not appear in the returned tensor.
Args:
boxes: float32 tensor of shape [num_instance, 4] representing boxes
location in normalized coordinates.
labels: rank 1 int32 tensor of shape [num_instance] containing the object
classes.
label_weights: float32 tensor of shape [num_instance] representing the
weight for each box.
label_confidences: float32 tensor of shape [num_instance] representing the
confidence for each box.
multiclass_scores: (optional) float32 tensor of shape
[num_instances, num_classes] representing the score for each box for each
class.
masks: (optional) rank 3 float32 tensor with shape
[num_instances, height, width] containing instance masks. The masks are of
the same height, width as the input `image`.
keypoints: (optional) rank 3 float32 tensor with shape
[num_instances, num_keypoints, 2]. The keypoints are in y-x normalized
coordinates.
dropped_label: int32 id of label to drop.
drop_probability: float32 probability of dropping a label.
seed: random seed.
Returns:
retained_boxes: [num_retained_instance, 4]
    retained_labels: [num_retained_instance]
retained_label_weights: [num_retained_instance]
If multiclass_scores, masks, or keypoints are not None, the function also
returns:
retained_multiclass_scores: [num_retained_instance, num_classes]
retained_masks: [num_retained_instance, height, width]
retained_keypoints: [num_retained_instance, num_keypoints, 2]
"""
with tf.name_scope('DropLabelProbabilistically',
values=[boxes, labels]):
indices = tf.where(
tf.logical_or(
tf.random_uniform(tf.shape(labels), seed=seed) > drop_probability,
tf.not_equal(labels, dropped_label)))
indices = tf.squeeze(indices, axis=1)
retained_boxes = tf.gather(boxes, indices)
retained_labels = tf.gather(labels, indices)
retained_label_weights = tf.gather(label_weights, indices)
result = [retained_boxes, retained_labels, retained_label_weights]
if label_confidences is not None:
retained_label_confidences = tf.gather(label_confidences, indices)
result.append(retained_label_confidences)
if multiclass_scores is not None:
retained_multiclass_scores = tf.gather(multiclass_scores, indices)
result.append(retained_multiclass_scores)
if masks is not None:
retained_masks = tf.gather(masks, indices)
result.append(retained_masks)
if keypoints is not None:
retained_keypoints = tf.gather(keypoints, indices)
result.append(retained_keypoints)
return result
def remap_labels(labels,
original_labels=None,
new_label=None):
"""Remaps labels that have an id in original_labels to new_label.
Args:
labels: rank 1 int32 tensor of shape [num_instance] containing the object
classes.
original_labels: int list of original labels that should be mapped from.
new_label: int label to map to
Returns:
Remapped labels
"""
new_labels = labels
for original_label in original_labels:
change = tf.where(
tf.equal(new_labels, original_label),
tf.add(tf.zeros_like(new_labels), new_label - original_label),
tf.zeros_like(new_labels))
new_labels = tf.add(
new_labels,
change)
new_labels = tf.reshape(new_labels, tf.shape(labels))
return new_labels
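# Example usage (illustrative class ids): collapse hypothetical labels 2 and
# 3 into a single label 1, leaving all other ids unchanged.
def _example_remap_labels(labels):
  return remap_labels(labels, original_labels=[2, 3], new_label=1)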
def _flip_boxes_left_right(boxes):
"""Left-right flip the boxes.
Args:
boxes: Float32 tensor containing the bounding boxes -> [..., 4].
Boxes are in normalized form meaning their coordinates vary
between [0, 1].
Each last dimension is in the form of [ymin, xmin, ymax, xmax].
Returns:
Flipped boxes.
"""
ymin, xmin, ymax, xmax = tf.split(value=boxes, num_or_size_splits=4, axis=-1)
flipped_xmin = tf.subtract(1.0, xmax)
flipped_xmax = tf.subtract(1.0, xmin)
flipped_boxes = tf.concat([ymin, flipped_xmin, ymax, flipped_xmax], axis=-1)
return flipped_boxes
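# Worked example: the normalized box [ymin, xmin, ymax, xmax] =
# [0.1, 0.2, 0.5, 0.6] flips to [0.1, 0.4, 0.5, 0.8], since
# flipped_xmin = 1 - 0.6 and flipped_xmax = 1 - 0.2.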
def _flip_boxes_up_down(boxes):
"""Up-down flip the boxes.
Args:
boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
Boxes are in normalized form meaning their coordinates vary
between [0, 1].
Each row is in the form of [ymin, xmin, ymax, xmax].
Returns:
Flipped boxes.
"""
ymin, xmin, ymax, xmax = tf.split(value=boxes, num_or_size_splits=4, axis=1)
flipped_ymin = tf.subtract(1.0, ymax)
flipped_ymax = tf.subtract(1.0, ymin)
flipped_boxes = tf.concat([flipped_ymin, xmin, flipped_ymax, xmax], 1)
return flipped_boxes
def _rot90_boxes(boxes):
"""Rotate boxes counter-clockwise by 90 degrees.
Args:
boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
Boxes are in normalized form meaning their coordinates vary
between [0, 1].
Each row is in the form of [ymin, xmin, ymax, xmax].
Returns:
Rotated boxes.
"""
ymin, xmin, ymax, xmax = tf.split(value=boxes, num_or_size_splits=4, axis=1)
rotated_ymin = tf.subtract(1.0, xmax)
rotated_ymax = tf.subtract(1.0, xmin)
rotated_xmin = ymin
rotated_xmax = ymax
rotated_boxes = tf.concat(
[rotated_ymin, rotated_xmin, rotated_ymax, rotated_xmax], 1)
return rotated_boxes
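# Worked example: under a 90-degree counter-clockwise rotation the box
# [0.1, 0.2, 0.5, 0.6] maps to [0.4, 0.1, 0.8, 0.5]: the new ymin/ymax come
# from 1 - xmax and 1 - xmin, and the new xmin/xmax from ymin and ymax.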
def _flip_masks_left_right(masks):
"""Left-right flip masks.
Args:
masks: rank 3 float32 tensor with shape
[num_instances, height, width] representing instance masks.
Returns:
flipped masks: rank 3 float32 tensor with shape
[num_instances, height, width] representing instance masks.
"""
return masks[:, :, ::-1]
def _flip_masks_up_down(masks):
"""Up-down flip masks.
Args:
masks: rank 3 float32 tensor with shape
[num_instances, height, width] representing instance masks.
Returns:
flipped masks: rank 3 float32 tensor with shape
[num_instances, height, width] representing instance masks.
"""
return masks[:, ::-1, :]
def _rot90_masks(masks):
"""Rotate masks counter-clockwise by 90 degrees.
Args:
masks: rank 3 float32 tensor with shape
[num_instances, height, width] representing instance masks.
Returns:
rotated masks: rank 3 float32 tensor with shape
[num_instances, height, width] representing instance masks.
"""
masks = tf.transpose(masks, [0, 2, 1])
return masks[:, ::-1, :]
def random_horizontal_flip(image,
boxes=None,
masks=None,
keypoints=None,
keypoint_visibilities=None,
densepose_part_ids=None,
densepose_surface_coords=None,
keypoint_flip_permutation=None,
probability=0.5,
seed=None,
preprocess_vars_cache=None):
"""Randomly flips the image and detections horizontally.
Args:
image: rank 3 float32 tensor with shape [height, width, channels].
boxes: (optional) rank 2 float32 tensor with shape [N, 4]
containing the bounding boxes.
Boxes are in normalized form meaning their coordinates vary
between [0, 1].
Each row is in the form of [ymin, xmin, ymax, xmax].
masks: (optional) rank 3 float32 tensor with shape
[num_instances, height, width] containing instance masks. The masks
are of the same height, width as the input `image`.
keypoints: (optional) rank 3 float32 tensor with shape
[num_instances, num_keypoints, 2]. The keypoints are in y-x
normalized coordinates.
keypoint_visibilities: (optional) rank 2 bool tensor with shape
[num_instances, num_keypoints].
densepose_part_ids: (optional) rank 2 int32 tensor with shape
[num_instances, num_points] holding the part id for each
sampled point. These part_ids are 0-indexed, where the
first non-background part has index 0.
densepose_surface_coords: (optional) rank 3 float32 tensor with shape
[num_instances, num_points, 4]. The DensePose
coordinates are of the form (y, x, v, u) where
(y, x) are the normalized image coordinates for a
sampled point, and (v, u) is the surface
coordinate for the part.
keypoint_flip_permutation: rank 1 int32 tensor containing the keypoint flip
permutation.
probability: the probability of performing this augmentation.
seed: random seed
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
Returns:
image: image which is the same shape as input image.
If boxes, masks, keypoints, keypoint_visibilities,
keypoint_flip_permutation, densepose_part_ids, or densepose_surface_coords
    are not None, the function also returns the following tensors.
boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
Boxes are in normalized form meaning their coordinates vary
between [0, 1].
masks: rank 3 float32 tensor with shape [num_instances, height, width]
containing instance masks.
keypoints: rank 3 float32 tensor with shape
[num_instances, num_keypoints, 2]
keypoint_visibilities: rank 2 bool tensor with shape
[num_instances, num_keypoints].
densepose_part_ids: rank 2 int32 tensor with shape
[num_instances, num_points].
densepose_surface_coords: rank 3 float32 tensor with shape
[num_instances, num_points, 4].
Raises:
ValueError: if keypoints are provided but keypoint_flip_permutation is not.
    ValueError: if one of densepose_part_ids or densepose_surface_coords is
      not None but the other is None; both must be provided together.
"""
def _flip_image(image):
# flip image
image_flipped = tf.image.flip_left_right(image)
return image_flipped
if keypoints is not None and keypoint_flip_permutation is None:
raise ValueError(
        'keypoints are provided but keypoint_flip_permutation is not provided')
if ((densepose_part_ids is not None and densepose_surface_coords is None) or
(densepose_part_ids is None and densepose_surface_coords is not None)):
raise ValueError(
'Must provide both `densepose_part_ids` and `densepose_surface_coords`')
with tf.name_scope('RandomHorizontalFlip', values=[image, boxes]):
result = []
# random variable defining whether to do flip or not
generator_func = functools.partial(tf.random_uniform, [], seed=seed)
do_a_flip_random = _get_or_create_preprocess_rand_vars(
generator_func,
preprocessor_cache.PreprocessorCache.HORIZONTAL_FLIP,
preprocess_vars_cache)
do_a_flip_random = tf.less(do_a_flip_random, probability)
# flip image
image = tf.cond(do_a_flip_random, lambda: _flip_image(image), lambda: image)
result.append(image)
# flip boxes
if boxes is not None:
boxes = tf.cond(do_a_flip_random, lambda: _flip_boxes_left_right(boxes),
lambda: boxes)
result.append(boxes)
# flip masks
if masks is not None:
masks = tf.cond(do_a_flip_random, lambda: _flip_masks_left_right(masks),
lambda: masks)
result.append(masks)
# flip keypoints
if keypoints is not None and keypoint_flip_permutation is not None:
permutation = keypoint_flip_permutation
keypoints = tf.cond(
do_a_flip_random,
lambda: keypoint_ops.flip_horizontal(keypoints, 0.5, permutation),
lambda: keypoints)
result.append(keypoints)
# flip keypoint visibilities
if (keypoint_visibilities is not None and
keypoint_flip_permutation is not None):
kpt_flip_perm = keypoint_flip_permutation
keypoint_visibilities = tf.cond(
do_a_flip_random,
lambda: tf.gather(keypoint_visibilities, kpt_flip_perm, axis=1),
lambda: keypoint_visibilities)
result.append(keypoint_visibilities)
# flip DensePose parts and coordinates
if densepose_part_ids is not None:
flip_densepose_fn = functools.partial(
densepose_ops.flip_horizontal, densepose_part_ids,
densepose_surface_coords)
densepose_tensors = tf.cond(
do_a_flip_random,
flip_densepose_fn,
lambda: (densepose_part_ids, densepose_surface_coords))
result.extend(densepose_tensors)
return tuple(result)
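# Usage sketch (assumes `image` and `boxes` are shaped as documented above):
# flips an image/boxes pair; passing the same cache to a second call makes
# both calls take the same flip decision.
def _example_random_horizontal_flip(image, boxes, cache=None):
  image, boxes = random_horizontal_flip(
      image, boxes=boxes, probability=0.5, preprocess_vars_cache=cache)
  return image, boxes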
def random_vertical_flip(image,
boxes=None,
masks=None,
keypoints=None,
keypoint_flip_permutation=None,
probability=0.5,
seed=None,
preprocess_vars_cache=None):
"""Randomly flips the image and detections vertically.
  The probability of flipping the image is given by `probability` (default 50%).
Args:
image: rank 3 float32 tensor with shape [height, width, channels].
boxes: (optional) rank 2 float32 tensor with shape [N, 4]
containing the bounding boxes.
Boxes are in normalized form meaning their coordinates vary
between [0, 1].
Each row is in the form of [ymin, xmin, ymax, xmax].
masks: (optional) rank 3 float32 tensor with shape
[num_instances, height, width] containing instance masks. The masks
are of the same height, width as the input `image`.
keypoints: (optional) rank 3 float32 tensor with shape
[num_instances, num_keypoints, 2]. The keypoints are in y-x
normalized coordinates.
keypoint_flip_permutation: rank 1 int32 tensor containing the keypoint flip
permutation.
probability: the probability of performing this augmentation.
seed: random seed
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
Returns:
image: image which is the same shape as input image.
If boxes, masks, keypoints, and keypoint_flip_permutation are not None,
the function also returns the following tensors.
boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
Boxes are in normalized form meaning their coordinates vary
between [0, 1].
masks: rank 3 float32 tensor with shape [num_instances, height, width]
containing instance masks.
keypoints: rank 3 float32 tensor with shape
[num_instances, num_keypoints, 2]
Raises:
ValueError: if keypoints are provided but keypoint_flip_permutation is not.
"""
def _flip_image(image):
# flip image
image_flipped = tf.image.flip_up_down(image)
return image_flipped
if keypoints is not None and keypoint_flip_permutation is None:
raise ValueError(
        'keypoints are provided but keypoint_flip_permutation is not provided')
with tf.name_scope('RandomVerticalFlip', values=[image, boxes]):
result = []
# random variable defining whether to do flip or not
generator_func = functools.partial(tf.random_uniform, [], seed=seed)
do_a_flip_random = _get_or_create_preprocess_rand_vars(
generator_func, preprocessor_cache.PreprocessorCache.VERTICAL_FLIP,
preprocess_vars_cache)
do_a_flip_random = tf.less(do_a_flip_random, probability)
# flip image
image = tf.cond(do_a_flip_random, lambda: _flip_image(image), lambda: image)
result.append(image)
# flip boxes
if boxes is not None:
boxes = tf.cond(do_a_flip_random, lambda: _flip_boxes_up_down(boxes),
lambda: boxes)
result.append(boxes)
# flip masks
if masks is not None:
masks = tf.cond(do_a_flip_random, lambda: _flip_masks_up_down(masks),
lambda: masks)
result.append(masks)
# flip keypoints
if keypoints is not None and keypoint_flip_permutation is not None:
permutation = keypoint_flip_permutation
keypoints = tf.cond(
do_a_flip_random,
lambda: keypoint_ops.flip_vertical(keypoints, 0.5, permutation),
lambda: keypoints)
result.append(keypoints)
return tuple(result)
def random_rotation90(image,
boxes=None,
masks=None,
keypoints=None,
keypoint_rot_permutation=None,
probability=0.5,
seed=None,
preprocess_vars_cache=None):
"""Randomly rotates the image and detections 90 degrees counter-clockwise.
  The probability of rotating the image is given by `probability` (default
  50%). This can be combined with
random_horizontal_flip and random_vertical_flip to produce an output with a
uniform distribution of the eight possible 90 degree rotation / reflection
combinations.
Args:
image: rank 3 float32 tensor with shape [height, width, channels].
boxes: (optional) rank 2 float32 tensor with shape [N, 4]
containing the bounding boxes.
Boxes are in normalized form meaning their coordinates vary
between [0, 1].
Each row is in the form of [ymin, xmin, ymax, xmax].
masks: (optional) rank 3 float32 tensor with shape
[num_instances, height, width] containing instance masks. The masks
are of the same height, width as the input `image`.
keypoints: (optional) rank 3 float32 tensor with shape
[num_instances, num_keypoints, 2]. The keypoints are in y-x
normalized coordinates.
    keypoint_rot_permutation: rank 1 int32 tensor containing the keypoint
      rotation permutation.
probability: the probability of performing this augmentation.
seed: random seed
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
Returns:
image: image which is the same shape as input image.
    If boxes, masks, and keypoints are not None,
the function also returns the following tensors.
boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
Boxes are in normalized form meaning their coordinates vary
between [0, 1].
masks: rank 3 float32 tensor with shape [num_instances, height, width]
containing instance masks.
keypoints: rank 3 float32 tensor with shape
[num_instances, num_keypoints, 2]
"""
def _rot90_image(image):
    # rotate image 90 degrees counter-clockwise
image_rotated = tf.image.rot90(image)
return image_rotated
with tf.name_scope('RandomRotation90', values=[image, boxes]):
result = []
# random variable defining whether to rotate by 90 degrees or not
generator_func = functools.partial(tf.random_uniform, [], seed=seed)
do_a_rot90_random = _get_or_create_preprocess_rand_vars(
generator_func, preprocessor_cache.PreprocessorCache.ROTATION90,
preprocess_vars_cache)
do_a_rot90_random = tf.less(do_a_rot90_random, probability)
    # rotate image
image = tf.cond(do_a_rot90_random, lambda: _rot90_image(image),
lambda: image)
result.append(image)
    # rotate boxes
if boxes is not None:
boxes = tf.cond(do_a_rot90_random, lambda: _rot90_boxes(boxes),
lambda: boxes)
result.append(boxes)
    # rotate masks
if masks is not None:
masks = tf.cond(do_a_rot90_random, lambda: _rot90_masks(masks),
lambda: masks)
result.append(masks)
    # rotate keypoints
if keypoints is not None:
keypoints = tf.cond(
do_a_rot90_random,
lambda: keypoint_ops.rot90(keypoints, keypoint_rot_permutation),
lambda: keypoints)
result.append(keypoints)
return tuple(result)
def random_pixel_value_scale(image,
minval=0.9,
maxval=1.1,
seed=None,
preprocess_vars_cache=None):
"""Scales each value in the pixels of the image.
  This function scales each pixel independently of the others.
  For each value in the image tensor, it draws a random number between
  minval and maxval and multiplies the value by it.
Args:
image: rank 3 float32 tensor contains 1 image -> [height, width, channels]
with pixel values varying between [0, 255].
minval: lower ratio of scaling pixel values.
maxval: upper ratio of scaling pixel values.
seed: random seed.
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
Returns:
image: image which is the same shape as input image.
"""
with tf.name_scope('RandomPixelValueScale', values=[image]):
generator_func = functools.partial(
tf.random_uniform, tf.shape(image),
minval=minval, maxval=maxval,
dtype=tf.float32, seed=seed)
color_coef = _get_or_create_preprocess_rand_vars(
generator_func,
preprocessor_cache.PreprocessorCache.PIXEL_VALUE_SCALE,
preprocess_vars_cache)
image = tf.multiply(image, color_coef)
image = tf.clip_by_value(image, 0.0, 255.0)
return image
def random_image_scale(image,
masks=None,
min_scale_ratio=0.5,
max_scale_ratio=2.0,
seed=None,
preprocess_vars_cache=None):
"""Scales the image size.
Args:
image: rank 3 float32 tensor contains 1 image -> [height, width, channels].
masks: (optional) rank 3 float32 tensor containing masks with
size [height, width, num_masks]. The value is set to None if there are no
masks.
min_scale_ratio: minimum scaling ratio.
max_scale_ratio: maximum scaling ratio.
seed: random seed.
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
Returns:
image: image which is the same rank as input image.
    masks: If masks is not None, resized masks which are the same rank as the
      input masks will be returned.
"""
with tf.name_scope('RandomImageScale', values=[image]):
result = []
image_shape = tf.shape(image)
image_height = image_shape[0]
image_width = image_shape[1]
generator_func = functools.partial(
tf.random_uniform, [],
minval=min_scale_ratio, maxval=max_scale_ratio,
dtype=tf.float32, seed=seed)
size_coef = _get_or_create_preprocess_rand_vars(
generator_func, preprocessor_cache.PreprocessorCache.IMAGE_SCALE,
preprocess_vars_cache)
image_newysize = tf.cast(
tf.multiply(tf.cast(image_height, dtype=tf.float32), size_coef),
dtype=tf.int32)
image_newxsize = tf.cast(
tf.multiply(tf.cast(image_width, dtype=tf.float32), size_coef),
dtype=tf.int32)
image = tf.image.resize_images(
image, [image_newysize, image_newxsize], align_corners=True)
result.append(image)
if masks is not None:
masks = tf.image.resize_images(
masks, [image_newysize, image_newxsize],
method=tf.image.ResizeMethod.NEAREST_NEIGHBOR,
align_corners=True)
result.append(masks)
return tuple(result)
def _augment_only_rgb_channels(image, augment_function):
"""Augments only the RGB slice of an image with additional channels."""
rgb_slice = image[:, :, :3]
augmented_rgb_slice = augment_function(rgb_slice)
image = tf.concat([augmented_rgb_slice, image[:, :, 3:]], -1)
return image
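# Note: for an image with extra channels (e.g. a hypothetical RGB-D input of
# shape [h, w, 4]), only the first three channels are augmented; the
# remaining channels are concatenated back unchanged.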
def random_rgb_to_gray(image,
probability=0.1,
seed=None,
preprocess_vars_cache=None):
"""Changes the image from RGB to Grayscale with the given probability.
Args:
image: rank 3 float32 tensor contains 1 image -> [height, width, channels]
with pixel values varying between [0, 255].
probability: the probability of returning a grayscale image.
The probability should be a number between [0, 1].
seed: random seed.
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
Returns:
image: image which is the same shape as input image.
"""
def _image_to_gray(image):
image_gray1 = _rgb_to_grayscale(image)
image_gray3 = tf.image.grayscale_to_rgb(image_gray1)
return image_gray3
with tf.name_scope('RandomRGBtoGray', values=[image]):
# random variable defining whether to change to grayscale or not
generator_func = functools.partial(tf.random_uniform, [], seed=seed)
do_gray_random = _get_or_create_preprocess_rand_vars(
generator_func, preprocessor_cache.PreprocessorCache.RGB_TO_GRAY,
preprocess_vars_cache)
image = tf.cond(
tf.greater(do_gray_random, probability), lambda: image,
lambda: _augment_only_rgb_channels(image, _image_to_gray))
return image
def random_adjust_brightness(image,
max_delta=0.2,
seed=None,
preprocess_vars_cache=None):
"""Randomly adjusts brightness.
Makes sure the output image is still between 0 and 255.
Args:
image: rank 3 float32 tensor contains 1 image -> [height, width, channels]
with pixel values varying between [0, 255].
max_delta: how much to change the brightness. A value between [0, 1).
seed: random seed.
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
Returns:
image: image which is the same shape as input image.
"""
with tf.name_scope('RandomAdjustBrightness', values=[image]):
generator_func = functools.partial(tf.random_uniform, [],
-max_delta, max_delta, seed=seed)
delta = _get_or_create_preprocess_rand_vars(
generator_func,
preprocessor_cache.PreprocessorCache.ADJUST_BRIGHTNESS,
preprocess_vars_cache)
def _adjust_brightness(image):
image = tf.image.adjust_brightness(image / 255, delta) * 255
image = tf.clip_by_value(image, clip_value_min=0.0, clip_value_max=255.0)
return image
image = _augment_only_rgb_channels(image, _adjust_brightness)
return image
def random_adjust_contrast(image,
min_delta=0.8,
max_delta=1.25,
seed=None,
preprocess_vars_cache=None):
"""Randomly adjusts contrast.
Makes sure the output image is still between 0 and 255.
Args:
    image: rank 3 float32 tensor containing 1 image -> [height, width, channels]
with pixel values varying between [0, 255].
min_delta: see max_delta.
max_delta: how much to change the contrast. Contrast will change with a
value between min_delta and max_delta. This value will be
               multiplied by the current contrast of the image.
seed: random seed.
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
Returns:
image: image which is the same shape as input image.
"""
with tf.name_scope('RandomAdjustContrast', values=[image]):
generator_func = functools.partial(tf.random_uniform, [],
min_delta, max_delta, seed=seed)
contrast_factor = _get_or_create_preprocess_rand_vars(
generator_func,
preprocessor_cache.PreprocessorCache.ADJUST_CONTRAST,
preprocess_vars_cache)
def _adjust_contrast(image):
image = tf.image.adjust_contrast(image / 255, contrast_factor) * 255
image = tf.clip_by_value(image, clip_value_min=0.0, clip_value_max=255.0)
return image
image = _augment_only_rgb_channels(image, _adjust_contrast)
return image
def random_adjust_hue(image,
max_delta=0.02,
seed=None,
preprocess_vars_cache=None):
"""Randomly adjusts hue.
Makes sure the output image is still between 0 and 255.
Args:
    image: rank 3 float32 tensor containing 1 image -> [height, width, channels]
with pixel values varying between [0, 255].
    max_delta: change hue randomly with a value between -max_delta and
               max_delta.
seed: random seed.
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
Returns:
image: image which is the same shape as input image.
"""
with tf.name_scope('RandomAdjustHue', values=[image]):
generator_func = functools.partial(tf.random_uniform, [],
-max_delta, max_delta, seed=seed)
delta = _get_or_create_preprocess_rand_vars(
generator_func, preprocessor_cache.PreprocessorCache.ADJUST_HUE,
preprocess_vars_cache)
def _adjust_hue(image):
image = tf.image.adjust_hue(image / 255, delta) * 255
image = tf.clip_by_value(image, clip_value_min=0.0, clip_value_max=255.0)
return image
image = _augment_only_rgb_channels(image, _adjust_hue)
return image
def random_adjust_saturation(image,
min_delta=0.8,
max_delta=1.25,
seed=None,
preprocess_vars_cache=None):
"""Randomly adjusts saturation.
Makes sure the output image is still between 0 and 255.
Args:
    image: rank 3 float32 tensor containing 1 image -> [height, width, channels]
with pixel values varying between [0, 255].
min_delta: see max_delta.
max_delta: how much to change the saturation. Saturation will change with a
value between min_delta and max_delta. This value will be
               multiplied by the current saturation of the image.
seed: random seed.
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
Returns:
image: image which is the same shape as input image.
"""
with tf.name_scope('RandomAdjustSaturation', values=[image]):
generator_func = functools.partial(tf.random_uniform, [],
min_delta, max_delta, seed=seed)
saturation_factor = _get_or_create_preprocess_rand_vars(
generator_func,
preprocessor_cache.PreprocessorCache.ADJUST_SATURATION,
preprocess_vars_cache)
def _adjust_saturation(image):
image = tf.image.adjust_saturation(image / 255, saturation_factor) * 255
image = tf.clip_by_value(image, clip_value_min=0.0, clip_value_max=255.0)
return image
image = _augment_only_rgb_channels(image, _adjust_saturation)
return image
def random_distort_color(image, color_ordering=0, preprocess_vars_cache=None):
"""Randomly distorts color.
Randomly distorts color using a combination of brightness, hue, contrast and
saturation changes. Makes sure the output image is still between 0 and 255.
Args:
    image: rank 3 float32 tensor containing 1 image -> [height, width, channels]
with pixel values varying between [0, 255].
color_ordering: Python int, a type of distortion (valid values: 0, 1).
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
Returns:
image: image which is the same shape as input image.
Raises:
ValueError: if color_ordering is not in {0, 1}.
"""
with tf.name_scope('RandomDistortColor', values=[image]):
if color_ordering == 0:
image = random_adjust_brightness(
image, max_delta=32. / 255.,
preprocess_vars_cache=preprocess_vars_cache)
image = random_adjust_saturation(
image, min_delta=0.5, max_delta=1.5,
preprocess_vars_cache=preprocess_vars_cache)
image = random_adjust_hue(
image, max_delta=0.2,
preprocess_vars_cache=preprocess_vars_cache)
image = random_adjust_contrast(
image, min_delta=0.5, max_delta=1.5,
preprocess_vars_cache=preprocess_vars_cache)
elif color_ordering == 1:
image = random_adjust_brightness(
image, max_delta=32. / 255.,
preprocess_vars_cache=preprocess_vars_cache)
image = random_adjust_contrast(
image, min_delta=0.5, max_delta=1.5,
preprocess_vars_cache=preprocess_vars_cache)
image = random_adjust_saturation(
image, min_delta=0.5, max_delta=1.5,
preprocess_vars_cache=preprocess_vars_cache)
image = random_adjust_hue(
image, max_delta=0.2,
preprocess_vars_cache=preprocess_vars_cache)
else:
raise ValueError('color_ordering must be in {0, 1}')
return image
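# Editor's note: hedged, illustrative sketch (not part of the original module);
# the helper name and the shard_index argument are hypothetical. The two
# orderings apply the same four photometric ops in different sequences, and
# because the ops do not commute they produce different distortion styles; a
# common pattern is to pick the ordering per example.
def _example_random_distort_color(shard_index):
  """Hedged usage sketch: alternating the two color orderings."""
  image = tf.random_uniform([224, 224, 3], maxval=255.0)
  return random_distort_color(image, color_ordering=shard_index % 2)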
def random_jitter_boxes(boxes, ratio=0.05, seed=None):
"""Randomly jitter boxes in image.
Args:
boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
Boxes are in normalized form meaning their coordinates vary
between [0, 1].
Each row is in the form of [ymin, xmin, ymax, xmax].
ratio: The ratio of the box width and height that the corners can jitter.
For example if the width is 100 pixels and ratio is 0.05,
the corners can jitter up to 5 pixels in the x direction.
seed: random seed.
Returns:
boxes: boxes which is the same shape as input boxes.
"""
def random_jitter_box(box, ratio, seed):
"""Randomly jitter box.
Args:
box: bounding box [1, 1, 4].
ratio: max ratio between jittered box and original box,
a number between [0, 0.5].
seed: random seed.
Returns:
jittered_box: jittered box.
"""
rand_numbers = tf.random_uniform(
[1, 1, 4], minval=-ratio, maxval=ratio, dtype=tf.float32, seed=seed)
box_width = tf.subtract(box[0, 0, 3], box[0, 0, 1])
box_height = tf.subtract(box[0, 0, 2], box[0, 0, 0])
hw_coefs = tf.stack([box_height, box_width, box_height, box_width])
hw_rand_coefs = tf.multiply(hw_coefs, rand_numbers)
jittered_box = tf.add(box, hw_rand_coefs)
jittered_box = tf.clip_by_value(jittered_box, 0.0, 1.0)
return jittered_box
with tf.name_scope('RandomJitterBoxes', values=[boxes]):
    # boxes are [N, 4]. Let's first make them [N, 1, 1, 4].
boxes_shape = tf.shape(boxes)
boxes = tf.expand_dims(boxes, 1)
boxes = tf.expand_dims(boxes, 2)
distorted_boxes = tf.map_fn(
lambda x: random_jitter_box(x, ratio, seed), boxes, dtype=tf.float32)
distorted_boxes = tf.reshape(distorted_boxes, boxes_shape)
return distorted_boxes
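# Editor's note: hedged, illustrative sketch (not part of the original module);
# the helper name is hypothetical. Each of the four box coordinates is
# perturbed independently by up to ratio * (box height or width), then the
# jittered box is clipped back into the unit square.
def _example_random_jitter_boxes():
  """Hedged usage sketch for random_jitter_boxes."""
  boxes = tf.constant([[0.1, 0.2, 0.4, 0.5],
                       [0.3, 0.3, 0.9, 0.8]], dtype=tf.float32)
  return random_jitter_boxes(boxes, ratio=0.05)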
def _strict_random_crop_image(image,
boxes,
labels,
label_weights,
label_confidences=None,
multiclass_scores=None,
masks=None,
keypoints=None,
keypoint_visibilities=None,
densepose_num_points=None,
densepose_part_ids=None,
densepose_surface_coords=None,
min_object_covered=1.0,
aspect_ratio_range=(0.75, 1.33),
area_range=(0.1, 1.0),
overlap_thresh=0.3,
clip_boxes=True,
preprocess_vars_cache=None):
"""Performs random crop.
Note: Keypoint coordinates that are outside the crop will be set to NaN, which
is consistent with the original keypoint encoding for non-existing keypoints.
This function always crops the image and is supposed to be used by
`random_crop_image` function which sometimes returns the image unchanged.
Args:
image: rank 3 float32 tensor containing 1 image -> [height, width, channels]
with pixel values varying between [0, 1].
boxes: rank 2 float32 tensor containing the bounding boxes with shape
[num_instances, 4].
Boxes are in normalized form meaning their coordinates vary
between [0, 1].
Each row is in the form of [ymin, xmin, ymax, xmax].
labels: rank 1 int32 tensor containing the object classes.
label_weights: float32 tensor of shape [num_instances] representing the
weight for each box.
label_confidences: (optional) float32 tensor of shape [num_instances]
representing the confidence for each box.
multiclass_scores: (optional) float32 tensor of shape
[num_instances, num_classes] representing the score for each box for each
class.
masks: (optional) rank 3 float32 tensor with shape
[num_instances, height, width] containing instance masks. The masks
are of the same height, width as the input `image`.
keypoints: (optional) rank 3 float32 tensor with shape
[num_instances, num_keypoints, 2]. The keypoints are in y-x
normalized coordinates.
keypoint_visibilities: (optional) rank 2 bool tensor with shape
[num_instances, num_keypoints].
densepose_num_points: (optional) rank 1 int32 tensor with shape
[num_instances] with the number of sampled points per
instance.
densepose_part_ids: (optional) rank 2 int32 tensor with shape
[num_instances, num_points] holding the part id for each
sampled point. These part_ids are 0-indexed, where the
first non-background part has index 0.
densepose_surface_coords: (optional) rank 3 float32 tensor with shape
[num_instances, num_points, 4]. The DensePose
coordinates are of the form (y, x, v, u) where
(y, x) are the normalized image coordinates for a
sampled point, and (v, u) is the surface
coordinate for the part.
min_object_covered: the cropped image must cover at least this fraction of
at least one of the input bounding boxes.
aspect_ratio_range: allowed range for aspect ratio of cropped image.
area_range: allowed range for area ratio between cropped image and the
original image.
overlap_thresh: minimum overlap thresh with new cropped
image to keep the box.
clip_boxes: whether to clip the boxes to the cropped image.
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
Returns:
image: image which is the same rank as input image.
boxes: boxes which is the same rank as input boxes.
Boxes are in normalized form.
labels: new labels.
    If label_weights, label_confidences, multiclass_scores, masks, keypoints,
    keypoint_visibilities, densepose_num_points, densepose_part_ids, or
    densepose_surface_coords is not None, the function also returns:
    label_weights: rank 1 float32 tensor with shape [num_instances].
    label_confidences: rank 1 float32 tensor with shape [num_instances].
multiclass_scores: rank 2 float32 tensor with shape
[num_instances, num_classes]
masks: rank 3 float32 tensor with shape [num_instances, height, width]
containing instance masks.
keypoints: rank 3 float32 tensor with shape
[num_instances, num_keypoints, 2]
keypoint_visibilities: rank 2 bool tensor with shape
[num_instances, num_keypoints]
densepose_num_points: rank 1 int32 tensor with shape [num_instances].
densepose_part_ids: rank 2 int32 tensor with shape
[num_instances, num_points].
densepose_surface_coords: rank 3 float32 tensor with shape
[num_instances, num_points, 4].
Raises:
ValueError: If some but not all of the DensePose tensors are provided.
"""
with tf.name_scope('RandomCropImage', values=[image, boxes]):
densepose_tensors = [densepose_num_points, densepose_part_ids,
densepose_surface_coords]
if (any(t is not None for t in densepose_tensors) and
not all(t is not None for t in densepose_tensors)):
raise ValueError('If cropping DensePose labels, must provide '
'`densepose_num_points`, `densepose_part_ids`, and '
'`densepose_surface_coords`')
image_shape = tf.shape(image)
    # boxes are [N, 4]. Let's first make them [N, 1, 4].
boxes_expanded = tf.expand_dims(
tf.clip_by_value(
boxes, clip_value_min=0.0, clip_value_max=1.0), 1)
generator_func = functools.partial(
tf.image.sample_distorted_bounding_box,
image_shape,
bounding_boxes=boxes_expanded,
min_object_covered=min_object_covered,
aspect_ratio_range=aspect_ratio_range,
area_range=area_range,
max_attempts=100,
use_image_if_no_bounding_boxes=True)
# for ssd cropping, each value of min_object_covered has its own
# cached random variable
sample_distorted_bounding_box = _get_or_create_preprocess_rand_vars(
generator_func,
preprocessor_cache.PreprocessorCache.STRICT_CROP_IMAGE,
preprocess_vars_cache, key=min_object_covered)
im_box_begin, im_box_size, im_box = sample_distorted_bounding_box
im_box_end = im_box_begin + im_box_size
new_image = image[im_box_begin[0]:im_box_end[0],
im_box_begin[1]:im_box_end[1], :]
new_image.set_shape([None, None, image.get_shape()[2]])
# [1, 4]
im_box_rank2 = tf.squeeze(im_box, axis=[0])
# [4]
im_box_rank1 = tf.squeeze(im_box)
boxlist = box_list.BoxList(boxes)
boxlist.add_field('labels', labels)
if label_weights is not None:
boxlist.add_field('label_weights', label_weights)
if label_confidences is not None:
boxlist.add_field('label_confidences', label_confidences)
if multiclass_scores is not None:
boxlist.add_field('multiclass_scores', multiclass_scores)
im_boxlist = box_list.BoxList(im_box_rank2)
# remove boxes that are outside cropped image
boxlist, inside_window_ids = box_list_ops.prune_completely_outside_window(
boxlist, im_box_rank1)
    # remove boxes whose overlap with the cropped image is below overlap_thresh
overlapping_boxlist, keep_ids = box_list_ops.prune_non_overlapping_boxes(
boxlist, im_boxlist, overlap_thresh)
# change the coordinate of the remaining boxes
new_labels = overlapping_boxlist.get_field('labels')
new_boxlist = box_list_ops.change_coordinate_frame(overlapping_boxlist,
im_box_rank1)
new_boxes = new_boxlist.get()
if clip_boxes:
new_boxes = tf.clip_by_value(
new_boxes, clip_value_min=0.0, clip_value_max=1.0)
result = [new_image, new_boxes, new_labels]
if label_weights is not None:
new_label_weights = overlapping_boxlist.get_field('label_weights')
result.append(new_label_weights)
if label_confidences is not None:
new_label_confidences = overlapping_boxlist.get_field('label_confidences')
result.append(new_label_confidences)
if multiclass_scores is not None:
new_multiclass_scores = overlapping_boxlist.get_field('multiclass_scores')
result.append(new_multiclass_scores)
if masks is not None:
masks_of_boxes_inside_window = tf.gather(masks, inside_window_ids)
masks_of_boxes_completely_inside_window = tf.gather(
masks_of_boxes_inside_window, keep_ids)
      new_masks = masks_of_boxes_completely_inside_window[
          :, im_box_begin[0]:im_box_end[0], im_box_begin[1]:im_box_end[1]]
result.append(new_masks)
if keypoints is not None:
keypoints_of_boxes_inside_window = tf.gather(keypoints, inside_window_ids)
keypoints_of_boxes_completely_inside_window = tf.gather(
keypoints_of_boxes_inside_window, keep_ids)
new_keypoints = keypoint_ops.change_coordinate_frame(
keypoints_of_boxes_completely_inside_window, im_box_rank1)
if clip_boxes:
new_keypoints = keypoint_ops.prune_outside_window(new_keypoints,
[0.0, 0.0, 1.0, 1.0])
result.append(new_keypoints)
if keypoint_visibilities is not None:
kpt_vis_of_boxes_inside_window = tf.gather(keypoint_visibilities,
inside_window_ids)
kpt_vis_of_boxes_completely_inside_window = tf.gather(
kpt_vis_of_boxes_inside_window, keep_ids)
      if clip_boxes:
        # Set any keypoints with NaN coordinates to invisible.
        new_kpt_visibilities = keypoint_ops.set_keypoint_visibilities(
            new_keypoints, kpt_vis_of_boxes_completely_inside_window)
      else:
        # Without clipping, keep the visibilities of the retained instances
        # (otherwise new_kpt_visibilities would be undefined below).
        new_kpt_visibilities = kpt_vis_of_boxes_completely_inside_window
result.append(new_kpt_visibilities)
if densepose_num_points is not None:
filtered_dp_tensors = []
for dp_tensor in densepose_tensors:
dp_tensor_inside_window = tf.gather(dp_tensor, inside_window_ids)
dp_tensor_completely_inside_window = tf.gather(dp_tensor_inside_window,
keep_ids)
filtered_dp_tensors.append(dp_tensor_completely_inside_window)
new_dp_num_points = filtered_dp_tensors[0]
new_dp_point_ids = filtered_dp_tensors[1]
new_dp_surf_coords = densepose_ops.change_coordinate_frame(
filtered_dp_tensors[2], im_box_rank1)
if clip_boxes:
new_dp_num_points, new_dp_point_ids, new_dp_surf_coords = (
densepose_ops.prune_outside_window(
new_dp_num_points, new_dp_point_ids, new_dp_surf_coords,
window=[0.0, 0.0, 1.0, 1.0]))
result.extend([new_dp_num_points, new_dp_point_ids, new_dp_surf_coords])
return tuple(result)
def random_crop_image(image,
boxes,
labels,
label_weights,
label_confidences=None,
multiclass_scores=None,
masks=None,
keypoints=None,
keypoint_visibilities=None,
densepose_num_points=None,
densepose_part_ids=None,
densepose_surface_coords=None,
min_object_covered=1.0,
aspect_ratio_range=(0.75, 1.33),
area_range=(0.1, 1.0),
overlap_thresh=0.3,
clip_boxes=True,
random_coef=0.0,
seed=None,
preprocess_vars_cache=None):
"""Randomly crops the image.
Given the input image and its bounding boxes, this op randomly
crops a subimage. Given a user-provided set of input constraints,
the crop window is resampled until it satisfies these constraints.
If within 100 trials it is unable to find a valid crop, the original
image is returned. See the Args section for a description of the input
  constraints. Both input boxes and returned boxes are in normalized
  form (i.e., they lie in the unit square [0, 1]).
This function will return the original image with probability random_coef.
Note: Keypoint coordinates that are outside the crop will be set to NaN, which
is consistent with the original keypoint encoding for non-existing keypoints.
Also, the keypoint visibility will be set to False.
Args:
    image: rank 3 float32 tensor containing 1 image -> [height, width, channels]
with pixel values varying between [0, 1].
boxes: rank 2 float32 tensor containing the bounding boxes with shape
[num_instances, 4].
Boxes are in normalized form meaning their coordinates vary
between [0, 1].
Each row is in the form of [ymin, xmin, ymax, xmax].
labels: rank 1 int32 tensor containing the object classes.
label_weights: float32 tensor of shape [num_instances] representing the
weight for each box.
    label_confidences: (optional) float32 tensor of shape [num_instances]
      representing the confidence for each box.
multiclass_scores: (optional) float32 tensor of shape
[num_instances, num_classes] representing the score for each box for each
class.
masks: (optional) rank 3 float32 tensor with shape
[num_instances, height, width] containing instance masks. The masks
are of the same height, width as the input `image`.
keypoints: (optional) rank 3 float32 tensor with shape
[num_instances, num_keypoints, 2]. The keypoints are in y-x
normalized coordinates.
keypoint_visibilities: (optional) rank 2 bool tensor with shape
[num_instances, num_keypoints].
densepose_num_points: (optional) rank 1 int32 tensor with shape
[num_instances] with the number of sampled points per
instance.
densepose_part_ids: (optional) rank 2 int32 tensor with shape
[num_instances, num_points] holding the part id for each
sampled point. These part_ids are 0-indexed, where the
first non-background part has index 0.
densepose_surface_coords: (optional) rank 3 float32 tensor with shape
[num_instances, num_points, 4]. The DensePose
coordinates are of the form (y, x, v, u) where
(y, x) are the normalized image coordinates for a
sampled point, and (v, u) is the surface
coordinate for the part.
min_object_covered: the cropped image must cover at least this fraction of
at least one of the input bounding boxes.
aspect_ratio_range: allowed range for aspect ratio of cropped image.
area_range: allowed range for area ratio between cropped image and the
original image.
overlap_thresh: minimum overlap thresh with new cropped
image to keep the box.
clip_boxes: whether to clip the boxes to the cropped image.
random_coef: a random coefficient that defines the chance of getting the
original image. If random_coef is 0, we will always get the
cropped image, and if it is 1.0, we will always get the
original image.
seed: random seed.
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
Returns:
image: Image shape will be [new_height, new_width, channels].
boxes: boxes which is the same rank as input boxes. Boxes are in normalized
form.
labels: new labels.
  If label_weights, label_confidences, multiclass_scores, masks, keypoints,
  keypoint_visibilities, densepose_num_points, densepose_part_ids, or
  densepose_surface_coords is not None, the function also returns:
  label_weights: rank 1 float32 tensor with shape [num_instances].
  label_confidences: rank 1 float32 tensor with shape [num_instances].
multiclass_scores: rank 2 float32 tensor with shape
[num_instances, num_classes]
masks: rank 3 float32 tensor with shape [num_instances, height, width]
containing instance masks.
keypoints: rank 3 float32 tensor with shape
[num_instances, num_keypoints, 2]
keypoint_visibilities: rank 2 bool tensor with shape
[num_instances, num_keypoints]
densepose_num_points: rank 1 int32 tensor with shape [num_instances].
densepose_part_ids: rank 2 int32 tensor with shape
[num_instances, num_points].
densepose_surface_coords: rank 3 float32 tensor with shape
[num_instances, num_points, 4].
"""
def strict_random_crop_image_fn():
return _strict_random_crop_image(
image,
boxes,
labels,
label_weights,
label_confidences=label_confidences,
multiclass_scores=multiclass_scores,
masks=masks,
keypoints=keypoints,
keypoint_visibilities=keypoint_visibilities,
densepose_num_points=densepose_num_points,
densepose_part_ids=densepose_part_ids,
densepose_surface_coords=densepose_surface_coords,
min_object_covered=min_object_covered,
aspect_ratio_range=aspect_ratio_range,
area_range=area_range,
overlap_thresh=overlap_thresh,
clip_boxes=clip_boxes,
preprocess_vars_cache=preprocess_vars_cache)
  # Avoids tf.cond to speed up RCNN training on Borg. See b/140057645.
if random_coef < sys.float_info.min:
result = strict_random_crop_image_fn()
else:
generator_func = functools.partial(tf.random_uniform, [], seed=seed)
do_a_crop_random = _get_or_create_preprocess_rand_vars(
generator_func, preprocessor_cache.PreprocessorCache.CROP_IMAGE,
preprocess_vars_cache)
do_a_crop_random = tf.greater(do_a_crop_random, random_coef)
outputs = [image, boxes, labels]
if label_weights is not None:
outputs.append(label_weights)
if label_confidences is not None:
outputs.append(label_confidences)
if multiclass_scores is not None:
outputs.append(multiclass_scores)
if masks is not None:
outputs.append(masks)
if keypoints is not None:
outputs.append(keypoints)
if keypoint_visibilities is not None:
outputs.append(keypoint_visibilities)
if densepose_num_points is not None:
outputs.extend([densepose_num_points, densepose_part_ids,
densepose_surface_coords])
result = tf.cond(do_a_crop_random, strict_random_crop_image_fn,
lambda: tuple(outputs))
return result
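# Editor's note: hedged, illustrative sketch (not part of the original module),
# assuming the TF1 graph API used in this file; the helper name and dummy
# tensors are hypothetical, standing in for a single annotated example. With
# random_coef=0.3, the original inputs pass through unchanged roughly 30% of
# the time.
def _example_random_crop_image():
  """Hedged usage sketch for random_crop_image."""
  image = tf.random_uniform([480, 640, 3])  # pixel values in [0, 1]
  boxes = tf.constant([[0.2, 0.2, 0.8, 0.8]], dtype=tf.float32)
  labels = tf.constant([1], dtype=tf.int32)
  label_weights = tf.constant([1.0], dtype=tf.float32)
  # label_weights is not None, so a 4-tuple is returned.
  new_image, new_boxes, new_labels, new_weights = random_crop_image(
      image, boxes, labels, label_weights, random_coef=0.3)
  return new_image, new_boxes, new_labels, new_weights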
def random_pad_image(image,
boxes,
masks=None,
keypoints=None,
densepose_surface_coords=None,
min_image_size=None,
max_image_size=None,
pad_color=None,
seed=None,
preprocess_vars_cache=None):
"""Randomly pads the image.
  This function randomly pads the image and fills the padded region with
  pad_color (by default, the average color of the input image). The final size
  of the padded image will be between min_image_size and max_image_size.
  If min_image_size is smaller than the input image size, min_image_size will
  be set to the input image size; the same applies to max_image_size. The input
  image will be located at a uniformly random location inside the padded image.
  The relative location of the boxes to the original image will remain the same.
Args:
image: rank 3 float32 tensor containing 1 image -> [height, width, channels]
with pixel values varying between [0, 1].
boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
Boxes are in normalized form meaning their coordinates vary
between [0, 1].
Each row is in the form of [ymin, xmin, ymax, xmax].
masks: (optional) rank 3 float32 tensor with shape
[N, height, width] containing instance masks. The masks
are of the same height, width as the input `image`.
keypoints: (optional) rank 3 float32 tensor with shape
[N, num_keypoints, 2]. The keypoints are in y-x normalized
coordinates.
densepose_surface_coords: (optional) rank 3 float32 tensor with shape
[N, num_points, 4]. The DensePose coordinates are
of the form (y, x, v, u) where (y, x) are the
normalized image coordinates for a sampled point,
and (v, u) is the surface coordinate for the part.
min_image_size: a tensor of size [min_height, min_width], type tf.int32.
If passed as None, will be set to image size
[height, width].
    max_image_size: a tensor of size [max_height, max_width], type tf.int32.
                    If passed as None, will be set to twice the image size,
                    i.e. [height * 2, width * 2].
    pad_color: padding color. A rank 1 tensor of [channels] with dtype=
               tf.float32. If set to None, it will be set to the average color
               of the input image.
seed: random seed.
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
Returns:
image: Image shape will be [new_height, new_width, channels].
boxes: boxes which is the same rank as input boxes. Boxes are in normalized
form.
if masks is not None, the function also returns:
masks: rank 3 float32 tensor with shape [N, new_height, new_width]
if keypoints is not None, the function also returns:
keypoints: rank 3 float32 tensor with shape [N, num_keypoints, 2]
if densepose_surface_coords is not None, the function also returns:
densepose_surface_coords: rank 3 float32 tensor with shape
[num_instances, num_points, 4]
"""
if pad_color is None:
pad_color = tf.reduce_mean(image, axis=[0, 1])
image_shape = tf.shape(image)
image_height = image_shape[0]
image_width = image_shape[1]
if max_image_size is None:
max_image_size = tf.stack([image_height * 2, image_width * 2])
max_image_size = tf.maximum(max_image_size,
tf.stack([image_height, image_width]))
if min_image_size is None:
min_image_size = tf.stack([image_height, image_width])
min_image_size = tf.maximum(min_image_size,
tf.stack([image_height, image_width]))
target_height = tf.cond(
max_image_size[0] > min_image_size[0],
lambda: _random_integer(min_image_size[0], max_image_size[0], seed),
lambda: max_image_size[0])
target_width = tf.cond(
max_image_size[1] > min_image_size[1],
lambda: _random_integer(min_image_size[1], max_image_size[1], seed),
lambda: max_image_size[1])
offset_height = tf.cond(
target_height > image_height,
lambda: _random_integer(0, target_height - image_height, seed),
lambda: tf.constant(0, dtype=tf.int32))
offset_width = tf.cond(
target_width > image_width,
lambda: _random_integer(0, target_width - image_width, seed),
lambda: tf.constant(0, dtype=tf.int32))
gen_func = lambda: (target_height, target_width, offset_height, offset_width)
params = _get_or_create_preprocess_rand_vars(
gen_func, preprocessor_cache.PreprocessorCache.PAD_IMAGE,
preprocess_vars_cache)
target_height, target_width, offset_height, offset_width = params
new_image = tf.image.pad_to_bounding_box(
image,
offset_height=offset_height,
offset_width=offset_width,
target_height=target_height,
target_width=target_width)
# Setting color of the padded pixels
image_ones = tf.ones_like(image)
image_ones_padded = tf.image.pad_to_bounding_box(
image_ones,
offset_height=offset_height,
offset_width=offset_width,
target_height=target_height,
target_width=target_width)
image_color_padded = (1.0 - image_ones_padded) * pad_color
new_image += image_color_padded
# setting boxes
new_window = tf.cast(
tf.stack([
-offset_height, -offset_width, target_height - offset_height,
target_width - offset_width
]),
dtype=tf.float32)
new_window /= tf.cast(
tf.stack([image_height, image_width, image_height, image_width]),
dtype=tf.float32)
boxlist = box_list.BoxList(boxes)
new_boxlist = box_list_ops.change_coordinate_frame(boxlist, new_window)
new_boxes = new_boxlist.get()
result = [new_image, new_boxes]
if masks is not None:
new_masks = tf.image.pad_to_bounding_box(
masks[:, :, :, tf.newaxis],
offset_height=offset_height,
offset_width=offset_width,
target_height=target_height,
target_width=target_width)[:, :, :, 0]
result.append(new_masks)
if keypoints is not None:
new_keypoints = keypoint_ops.change_coordinate_frame(keypoints, new_window)
result.append(new_keypoints)
if densepose_surface_coords is not None:
new_densepose_surface_coords = densepose_ops.change_coordinate_frame(
densepose_surface_coords, new_window)
result.append(new_densepose_surface_coords)
return tuple(result)
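# Editor's note: hedged, illustrative sketch (not part of the original module);
# the helper name is hypothetical. With the default size bounds, the padded
# size is drawn between the input size and twice the input size, and the box
# coordinates are re-expressed in the padded frame.
def _example_random_pad_image():
  """Hedged usage sketch for random_pad_image."""
  image = tf.random_uniform([300, 400, 3])  # pixel values in [0, 1]
  boxes = tf.constant([[0.25, 0.25, 0.75, 0.75]], dtype=tf.float32)
  padded_image, padded_boxes = random_pad_image(
      image, boxes, pad_color=tf.constant([0.5, 0.5, 0.5]))
  return padded_image, padded_boxes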
def random_absolute_pad_image(image,
boxes,
masks=None,
keypoints=None,
densepose_surface_coords=None,
max_height_padding=None,
max_width_padding=None,
pad_color=None,
seed=None,
preprocess_vars_cache=None):
"""Randomly pads the image by small absolute amounts.
  Like random_pad_image above, but the height and width padding amounts are
  drawn uniformly from [0, max_height_padding) and [0, max_width_padding)
  respectively, rather than padding every image to a fixed size.
Args:
image: rank 3 float32 tensor containing 1 image -> [height, width, channels]
with pixel values varying between [0, 1].
boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
Boxes are in normalized form meaning their coordinates vary
between [0, 1].
Each row is in the form of [ymin, xmin, ymax, xmax].
masks: (optional) rank 3 float32 tensor with shape
[N, height, width] containing instance masks. The masks
are of the same height, width as the input `image`.
keypoints: (optional) rank 3 float32 tensor with shape
[N, num_keypoints, 2]. The keypoints are in y-x normalized
coordinates.
densepose_surface_coords: (optional) rank 3 float32 tensor with shape
[N, num_points, 4]. The DensePose coordinates are
of the form (y, x, v, u) where (y, x) are the
normalized image coordinates for a sampled point,
and (v, u) is the surface coordinate for the part.
max_height_padding: a scalar tf.int32 tensor denoting the maximum amount of
height padding. The padding will be chosen uniformly at
random from [0, max_height_padding).
max_width_padding: a scalar tf.int32 tensor denoting the maximum amount of
width padding. The padding will be chosen uniformly at
random from [0, max_width_padding).
    pad_color: padding color. A rank 1 tensor of [3] with dtype=tf.float32.
               If set to None, it will be set to the average color of the
               input image.
seed: random seed.
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
Returns:
image: Image shape will be [new_height, new_width, channels].
boxes: boxes which is the same rank as input boxes. Boxes are in normalized
form.
if masks is not None, the function also returns:
masks: rank 3 float32 tensor with shape [N, new_height, new_width]
if keypoints is not None, the function also returns:
keypoints: rank 3 float32 tensor with shape [N, num_keypoints, 2]
"""
min_image_size = tf.shape(image)[:2]
max_image_size = min_image_size + tf.cast(
[max_height_padding, max_width_padding], dtype=tf.int32)
return random_pad_image(
image,
boxes,
masks=masks,
keypoints=keypoints,
densepose_surface_coords=densepose_surface_coords,
min_image_size=min_image_size,
max_image_size=max_image_size,
pad_color=pad_color,
seed=seed,
preprocess_vars_cache=preprocess_vars_cache)
def random_crop_pad_image(image,
boxes,
labels,
label_weights,
label_confidences=None,
multiclass_scores=None,
min_object_covered=1.0,
aspect_ratio_range=(0.75, 1.33),
area_range=(0.1, 1.0),
overlap_thresh=0.3,
clip_boxes=True,
random_coef=0.0,
min_padded_size_ratio=(1.0, 1.0),
max_padded_size_ratio=(2.0, 2.0),
pad_color=None,
seed=None,
preprocess_vars_cache=None):
"""Randomly crops and pads the image.
Given an input image and its bounding boxes, this op first randomly crops
the image and then randomly pads the image with background values. Parameters
  min_padded_size_ratio and max_padded_size_ratio determine the range of the
  final output image size. Specifically, the final image size will lie between
  min_padded_size_ratio * tf.shape(image) and
  max_padded_size_ratio * tf.shape(image). Note that these ratios are with
respect to the size of the original image, so we can't capture the same
effect easily by independently applying RandomCropImage
followed by RandomPadImage.
Args:
image: rank 3 float32 tensor containing 1 image -> [height, width, channels]
with pixel values varying between [0, 1].
boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
Boxes are in normalized form meaning their coordinates vary
between [0, 1].
Each row is in the form of [ymin, xmin, ymax, xmax].
labels: rank 1 int32 tensor containing the object classes.
label_weights: rank 1 float32 containing the label weights.
label_confidences: rank 1 float32 containing the label confidences.
multiclass_scores: (optional) float32 tensor of shape
[num_instances, num_classes] representing the score for each box for each
class.
min_object_covered: the cropped image must cover at least this fraction of
at least one of the input bounding boxes.
aspect_ratio_range: allowed range for aspect ratio of cropped image.
area_range: allowed range for area ratio between cropped image and the
original image.
overlap_thresh: minimum overlap thresh with new cropped
image to keep the box.
clip_boxes: whether to clip the boxes to the cropped image.
random_coef: a random coefficient that defines the chance of getting the
original image. If random_coef is 0, we will always get the
cropped image, and if it is 1.0, we will always get the
original image.
min_padded_size_ratio: min ratio of padded image height and width to the
input image's height and width.
max_padded_size_ratio: max ratio of padded image height and width to the
input image's height and width.
    pad_color: padding color. A rank 1 tensor of [3] with dtype=tf.float32.
               If set to None, it will be set to the average color of the
               randomly cropped image.
seed: random seed.
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
Returns:
padded_image: padded image.
padded_boxes: boxes which is the same rank as input boxes. Boxes are in
normalized form.
cropped_labels: cropped labels.
  if label_weights is not None also returns:
  cropped_label_weights: cropped label weights.
  if label_confidences is not None also returns:
  cropped_label_confidences: cropped label confidences.
  if multiclass_scores is not None also returns:
  cropped_multiclass_scores: cropped multiclass scores.
"""
image_size = tf.shape(image)
image_height = image_size[0]
image_width = image_size[1]
result = random_crop_image(
image=image,
boxes=boxes,
labels=labels,
label_weights=label_weights,
label_confidences=label_confidences,
multiclass_scores=multiclass_scores,
min_object_covered=min_object_covered,
aspect_ratio_range=aspect_ratio_range,
area_range=area_range,
overlap_thresh=overlap_thresh,
clip_boxes=clip_boxes,
random_coef=random_coef,
seed=seed,
preprocess_vars_cache=preprocess_vars_cache)
cropped_image, cropped_boxes, cropped_labels = result[:3]
min_image_size = tf.cast(
tf.cast(tf.stack([image_height, image_width]), dtype=tf.float32) *
min_padded_size_ratio,
dtype=tf.int32)
max_image_size = tf.cast(
tf.cast(tf.stack([image_height, image_width]), dtype=tf.float32) *
max_padded_size_ratio,
dtype=tf.int32)
padded_image, padded_boxes = random_pad_image(
cropped_image,
cropped_boxes,
min_image_size=min_image_size,
max_image_size=max_image_size,
pad_color=pad_color,
seed=seed,
preprocess_vars_cache=preprocess_vars_cache)
cropped_padded_output = (padded_image, padded_boxes, cropped_labels)
index = 3
if label_weights is not None:
cropped_label_weights = result[index]
cropped_padded_output += (cropped_label_weights,)
index += 1
if label_confidences is not None:
cropped_label_confidences = result[index]
cropped_padded_output += (cropped_label_confidences,)
index += 1
if multiclass_scores is not None:
cropped_multiclass_scores = result[index]
cropped_padded_output += (cropped_multiclass_scores,)
return cropped_padded_output
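# Editor's note: hedged, illustrative sketch (not part of the original module);
# the helper name and dummy tensors are hypothetical. The crop happens first,
# but the padded size ratios are measured against the *original* image, which
# is why crop-then-pad here differs from composing random_crop_image with
# random_pad_image independently.
def _example_random_crop_pad_image():
  """Hedged usage sketch for random_crop_pad_image."""
  image = tf.random_uniform([480, 640, 3])
  boxes = tf.constant([[0.1, 0.1, 0.9, 0.9]], dtype=tf.float32)
  labels = tf.constant([2], dtype=tf.int32)
  label_weights = tf.constant([1.0], dtype=tf.float32)
  padded_image, padded_boxes, cropped_labels, cropped_weights = (
      random_crop_pad_image(image, boxes, labels, label_weights))
  return padded_image, padded_boxes, cropped_labels, cropped_weights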
def random_crop_to_aspect_ratio(image,
boxes,
labels,
label_weights,
label_confidences=None,
multiclass_scores=None,
masks=None,
keypoints=None,
aspect_ratio=1.0,
overlap_thresh=0.3,
clip_boxes=True,
seed=None,
preprocess_vars_cache=None):
"""Randomly crops an image to the specified aspect ratio.
  Randomly crops a portion of the image such that the crop is of the
specified aspect ratio, and the crop is as large as possible. If the specified
aspect ratio is larger than the aspect ratio of the image, this op will
randomly remove rows from the top and bottom of the image. If the specified
aspect ratio is less than the aspect ratio of the image, this op will randomly
remove cols from the left and right of the image. If the specified aspect
ratio is the same as the aspect ratio of the image, this op will return the
image.
Args:
    image: rank 3 float32 tensor containing 1 image -> [height, width, channels]
with pixel values varying between [0, 1].
boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
Boxes are in normalized form meaning their coordinates vary
between [0, 1].
Each row is in the form of [ymin, xmin, ymax, xmax].
labels: rank 1 int32 tensor containing the object classes.
label_weights: float32 tensor of shape [num_instances] representing the
weight for each box.
label_confidences: (optional) float32 tensor of shape [num_instances]
representing the confidence for each box.
multiclass_scores: (optional) float32 tensor of shape
[num_instances, num_classes] representing the score for each box for each
class.
masks: (optional) rank 3 float32 tensor with shape
[num_instances, height, width] containing instance masks. The masks
are of the same height, width as the input `image`.
keypoints: (optional) rank 3 float32 tensor with shape
[num_instances, num_keypoints, 2]. The keypoints are in y-x
normalized coordinates.
aspect_ratio: the aspect ratio of cropped image.
overlap_thresh: minimum overlap thresh with new cropped
image to keep the box.
clip_boxes: whether to clip the boxes to the cropped image.
seed: random seed.
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
Returns:
image: image which is the same rank as input image.
boxes: boxes which is the same rank as input boxes.
Boxes are in normalized form.
labels: new labels.
    If label_weights, label_confidences, masks, keypoints, or multiclass_scores
    is not None, the function also returns:
label_weights: rank 1 float32 tensor with shape [num_instances].
masks: rank 3 float32 tensor with shape [num_instances, height, width]
containing instance masks.
keypoints: rank 3 float32 tensor with shape
[num_instances, num_keypoints, 2]
multiclass_scores: rank 2 float32 tensor with shape
[num_instances, num_classes]
Raises:
ValueError: If image is not a 3D tensor.
"""
if len(image.get_shape()) != 3:
raise ValueError('Image should be 3D tensor')
with tf.name_scope('RandomCropToAspectRatio', values=[image]):
image_shape = tf.shape(image)
orig_height = image_shape[0]
orig_width = image_shape[1]
orig_aspect_ratio = tf.cast(
orig_width, dtype=tf.float32) / tf.cast(
orig_height, dtype=tf.float32)
new_aspect_ratio = tf.constant(aspect_ratio, dtype=tf.float32)
def target_height_fn():
return tf.cast(
tf.round(tf.cast(orig_width, dtype=tf.float32) / new_aspect_ratio),
dtype=tf.int32)
target_height = tf.cond(orig_aspect_ratio >= new_aspect_ratio,
lambda: orig_height, target_height_fn)
def target_width_fn():
return tf.cast(
tf.round(tf.cast(orig_height, dtype=tf.float32) * new_aspect_ratio),
dtype=tf.int32)
target_width = tf.cond(orig_aspect_ratio <= new_aspect_ratio,
lambda: orig_width, target_width_fn)
    # Either target_height == orig_height, in which case offset_height is 0 and
    # offset_width is randomly chosen from [0, orig_width - target_width]; or
    # target_width == orig_width, in which case offset_width is 0 and
    # offset_height is randomly chosen from [0, orig_height - target_height].
offset_height = _random_integer(0, orig_height - target_height + 1, seed)
offset_width = _random_integer(0, orig_width - target_width + 1, seed)
generator_func = lambda: (offset_height, offset_width)
offset_height, offset_width = _get_or_create_preprocess_rand_vars(
generator_func,
preprocessor_cache.PreprocessorCache.CROP_TO_ASPECT_RATIO,
preprocess_vars_cache)
new_image = tf.image.crop_to_bounding_box(
image, offset_height, offset_width, target_height, target_width)
im_box = tf.stack([
tf.cast(offset_height, dtype=tf.float32) /
tf.cast(orig_height, dtype=tf.float32),
tf.cast(offset_width, dtype=tf.float32) /
tf.cast(orig_width, dtype=tf.float32),
tf.cast(offset_height + target_height, dtype=tf.float32) /
tf.cast(orig_height, dtype=tf.float32),
tf.cast(offset_width + target_width, dtype=tf.float32) /
tf.cast(orig_width, dtype=tf.float32)
])
boxlist = box_list.BoxList(boxes)
boxlist.add_field('labels', labels)
boxlist.add_field('label_weights', label_weights)
if label_confidences is not None:
boxlist.add_field('label_confidences', label_confidences)
if multiclass_scores is not None:
boxlist.add_field('multiclass_scores', multiclass_scores)
im_boxlist = box_list.BoxList(tf.expand_dims(im_box, 0))
# remove boxes whose overlap with the image is less than overlap_thresh
overlapping_boxlist, keep_ids = box_list_ops.prune_non_overlapping_boxes(
boxlist, im_boxlist, overlap_thresh)
# change the coordinate of the remaining boxes
new_labels = overlapping_boxlist.get_field('labels')
new_boxlist = box_list_ops.change_coordinate_frame(overlapping_boxlist,
im_box)
if clip_boxes:
new_boxlist = box_list_ops.clip_to_window(
new_boxlist, tf.constant([0.0, 0.0, 1.0, 1.0], tf.float32))
new_boxes = new_boxlist.get()
result = [new_image, new_boxes, new_labels]
new_label_weights = overlapping_boxlist.get_field('label_weights')
result.append(new_label_weights)
if label_confidences is not None:
new_label_confidences = (
overlapping_boxlist.get_field('label_confidences'))
result.append(new_label_confidences)
if multiclass_scores is not None:
new_multiclass_scores = overlapping_boxlist.get_field('multiclass_scores')
result.append(new_multiclass_scores)
if masks is not None:
masks_inside_window = tf.gather(masks, keep_ids)
masks_box_begin = tf.stack([0, offset_height, offset_width])
masks_box_size = tf.stack([-1, target_height, target_width])
new_masks = tf.slice(masks_inside_window, masks_box_begin, masks_box_size)
result.append(new_masks)
if keypoints is not None:
keypoints_inside_window = tf.gather(keypoints, keep_ids)
new_keypoints = keypoint_ops.change_coordinate_frame(
keypoints_inside_window, im_box)
if clip_boxes:
new_keypoints = keypoint_ops.prune_outside_window(new_keypoints,
[0.0, 0.0, 1.0, 1.0])
result.append(new_keypoints)
return tuple(result)
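# Editor's note: hedged, illustrative sketch (not part of the original module);
# the helper name and dummy tensors are hypothetical. Cropping a 640x480
# landscape image to aspect_ratio=1.0 removes columns, yielding a 480x480
# crop at a random horizontal offset.
def _example_random_crop_to_aspect_ratio():
  """Hedged usage sketch for random_crop_to_aspect_ratio."""
  image = tf.random_uniform([480, 640, 3])
  boxes = tf.constant([[0.2, 0.3, 0.6, 0.7]], dtype=tf.float32)
  labels = tf.constant([1], dtype=tf.int32)
  label_weights = tf.constant([1.0], dtype=tf.float32)
  new_image, new_boxes, new_labels, new_weights = random_crop_to_aspect_ratio(
      image, boxes, labels, label_weights, aspect_ratio=1.0)
  return new_image, new_boxes, new_labels, new_weights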
def random_pad_to_aspect_ratio(image,
boxes,
masks=None,
keypoints=None,
aspect_ratio=1.0,
min_padded_size_ratio=(1.0, 1.0),
max_padded_size_ratio=(2.0, 2.0),
seed=None,
preprocess_vars_cache=None):
"""Randomly zero pads an image to the specified aspect ratio.
Pads the image so that the resulting image will have the specified aspect
ratio without scaling less than the min_padded_size_ratio or more than the
max_padded_size_ratio. If the min_padded_size_ratio or max_padded_size_ratio
is lower than what is possible to maintain the aspect ratio, then this method
will use the least padding to achieve the specified aspect ratio.
Args:
    image: rank 3 float32 tensor containing 1 image -> [height, width, channels]
with pixel values varying between [0, 1].
boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
Boxes are in normalized form meaning their coordinates vary
between [0, 1].
Each row is in the form of [ymin, xmin, ymax, xmax].
masks: (optional) rank 3 float32 tensor with shape
[num_instances, height, width] containing instance masks. The masks
are of the same height, width as the input `image`.
keypoints: (optional) rank 3 float32 tensor with shape
[num_instances, num_keypoints, 2]. The keypoints are in y-x
normalized coordinates.
aspect_ratio: aspect ratio of the final image.
min_padded_size_ratio: min ratio of padded image height and width to the
input image's height and width.
max_padded_size_ratio: max ratio of padded image height and width to the
input image's height and width.
seed: random seed.
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
Returns:
image: image which is the same rank as input image.
boxes: boxes which is the same rank as input boxes.
Boxes are in normalized form.
    If masks or keypoints is not None, the function also returns:
masks: rank 3 float32 tensor with shape [num_instances, height, width]
containing instance masks.
keypoints: rank 3 float32 tensor with shape
[num_instances, num_keypoints, 2]
Raises:
ValueError: If image is not a 3D tensor.
"""
if len(image.get_shape()) != 3:
raise ValueError('Image should be 3D tensor')
with tf.name_scope('RandomPadToAspectRatio', values=[image]):
image_shape = tf.shape(image)
image_height = tf.cast(image_shape[0], dtype=tf.float32)
image_width = tf.cast(image_shape[1], dtype=tf.float32)
image_aspect_ratio = image_width / image_height
new_aspect_ratio = tf.constant(aspect_ratio, dtype=tf.float32)
target_height = tf.cond(
image_aspect_ratio <= new_aspect_ratio,
lambda: image_height,
lambda: image_width / new_aspect_ratio)
target_width = tf.cond(
image_aspect_ratio >= new_aspect_ratio,
lambda: image_width,
lambda: image_height * new_aspect_ratio)
min_height = tf.maximum(
min_padded_size_ratio[0] * image_height, target_height)
min_width = tf.maximum(
min_padded_size_ratio[1] * image_width, target_width)
max_height = tf.maximum(
max_padded_size_ratio[0] * image_height, target_height)
max_width = tf.maximum(
max_padded_size_ratio[1] * image_width, target_width)
max_scale = tf.minimum(max_height / target_height, max_width / target_width)
min_scale = tf.minimum(
max_scale,
tf.maximum(min_height / target_height, min_width / target_width))
generator_func = functools.partial(tf.random_uniform, [],
min_scale, max_scale, seed=seed)
scale = _get_or_create_preprocess_rand_vars(
generator_func,
preprocessor_cache.PreprocessorCache.PAD_TO_ASPECT_RATIO,
preprocess_vars_cache)
target_height = tf.round(scale * target_height)
target_width = tf.round(scale * target_width)
new_image = tf.image.pad_to_bounding_box(
image, 0, 0, tf.cast(target_height, dtype=tf.int32),
tf.cast(target_width, dtype=tf.int32))
im_box = tf.stack([
0.0,
0.0,
target_height / image_height,
target_width / image_width
])
boxlist = box_list.BoxList(boxes)
new_boxlist = box_list_ops.change_coordinate_frame(boxlist, im_box)
new_boxes = new_boxlist.get()
result = [new_image, new_boxes]
if masks is not None:
new_masks = tf.expand_dims(masks, -1)
new_masks = tf.image.pad_to_bounding_box(
new_masks, 0, 0, tf.cast(target_height, dtype=tf.int32),
tf.cast(target_width, dtype=tf.int32))
new_masks = tf.squeeze(new_masks, [-1])
result.append(new_masks)
if keypoints is not None:
new_keypoints = keypoint_ops.change_coordinate_frame(keypoints, im_box)
result.append(new_keypoints)
return tuple(result)
def random_black_patches(image,
max_black_patches=10,
probability=0.5,
size_to_image_ratio=0.1,
random_seed=None,
preprocess_vars_cache=None):
"""Randomly adds some black patches to the image.
This op adds up to max_black_patches square black patches of a fixed size
to the image where size is specified via the size_to_image_ratio parameter.
Args:
image: rank 3 float32 tensor containing 1 image -> [height, width, channels]
with pixel values varying between [0, 1].
    max_black_patches: number of times that the function tries to add a
                       black patch to the image.
    probability: at each try, the probability of adding a black patch.
size_to_image_ratio: Determines the ratio of the size of the black patches
to the size of the image.
box_size = size_to_image_ratio *
min(image_width, image_height)
random_seed: random seed.
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
Returns:
    image: image with black patches added; the same shape as the input image.
"""
def add_black_patch_to_image(image, idx):
"""Function for adding one patch to the image.
Args:
image: image
idx: counter for number of patches that could have been added
Returns:
image with a randomly added black box
"""
image_shape = tf.shape(image)
image_height = image_shape[0]
image_width = image_shape[1]
box_size = tf.cast(
tf.multiply(
tf.minimum(
tf.cast(image_height, dtype=tf.float32),
tf.cast(image_width, dtype=tf.float32)), size_to_image_ratio),
dtype=tf.int32)
generator_func = functools.partial(tf.random_uniform, [], minval=0.0,
maxval=(1.0 - size_to_image_ratio),
seed=random_seed)
normalized_y_min = _get_or_create_preprocess_rand_vars(
generator_func,
preprocessor_cache.PreprocessorCache.ADD_BLACK_PATCH,
preprocess_vars_cache, key=str(idx) + 'y')
normalized_x_min = _get_or_create_preprocess_rand_vars(
generator_func,
preprocessor_cache.PreprocessorCache.ADD_BLACK_PATCH,
preprocess_vars_cache, key=str(idx) + 'x')
y_min = tf.cast(
normalized_y_min * tf.cast(image_height, dtype=tf.float32),
dtype=tf.int32)
x_min = tf.cast(
normalized_x_min * tf.cast(image_width, dtype=tf.float32),
dtype=tf.int32)
black_box = tf.ones([box_size, box_size, 3], dtype=tf.float32)
mask = 1.0 - tf.image.pad_to_bounding_box(black_box, y_min, x_min,
image_height, image_width)
image = tf.multiply(image, mask)
return image
with tf.name_scope('RandomBlackPatchInImage', values=[image]):
for idx in range(max_black_patches):
generator_func = functools.partial(tf.random_uniform, [],
minval=0.0, maxval=1.0,
dtype=tf.float32, seed=random_seed)
random_prob = _get_or_create_preprocess_rand_vars(
generator_func,
preprocessor_cache.PreprocessorCache.BLACK_PATCHES,
preprocess_vars_cache, key=idx)
image = tf.cond(
tf.greater(random_prob, probability), lambda: image,
functools.partial(add_black_patch_to_image, image=image, idx=idx))
return image
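# Editor's note: hedged, illustrative sketch (not part of the original module);
# the helper name is hypothetical. On a 256x256 input with
# size_to_image_ratio=0.1, each patch is a 25x25 black square, and up to
# max_black_patches of them are applied, each with independent probability 0.5.
def _example_random_black_patches():
  """Hedged usage sketch for random_black_patches."""
  image = tf.random_uniform([256, 256, 3])  # pixel values in [0, 1]
  return random_black_patches(image, max_black_patches=10, probability=0.5,
                              size_to_image_ratio=0.1)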
def random_jpeg_quality(image,
min_jpeg_quality=0,
max_jpeg_quality=100,
random_coef=0.0,
seed=None,
preprocess_vars_cache=None):
"""Randomly encode the image to a random JPEG quality level.
Args:
image: rank 3 float32 tensor with shape [height, width, channels] and
values in the range [0, 255].
min_jpeg_quality: An int for the lower bound for selecting a random jpeg
quality level.
max_jpeg_quality: An int for the upper bound for selecting a random jpeg
quality level.
random_coef: a random coefficient that defines the chance of getting the
original image. If random_coef is 0, we will always get the encoded image,
and if it is 1.0, we will always get the original image.
seed: random seed.
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this function is called
multiple times with the same non-null cache, it will perform
deterministically.
Returns:
image: image which is the same shape as input image.
"""
def _adjust_jpeg_quality():
"""Encodes the image as jpeg with a random quality and then decodes."""
generator_func = functools.partial(
tf.random_uniform, [],
minval=min_jpeg_quality,
maxval=max_jpeg_quality,
dtype=tf.int32,
seed=seed)
quality = _get_or_create_preprocess_rand_vars(
generator_func, preprocessor_cache.PreprocessorCache.JPEG_QUALITY,
preprocess_vars_cache, key='quality')
# Need to convert to uint8 before calling adjust_jpeg_quality since it
    # assumes that float features are in the range [0, 1], whereas here the
# range is [0, 255].
image_uint8 = tf.cast(image, tf.uint8)
adjusted_image = tf.image.adjust_jpeg_quality(image_uint8, quality)
return tf.cast(adjusted_image, tf.float32)
with tf.name_scope('RandomJpegQuality', values=[image]):
generator_func = functools.partial(tf.random_uniform, [], seed=seed)
do_encoding_random = _get_or_create_preprocess_rand_vars(
generator_func, preprocessor_cache.PreprocessorCache.JPEG_QUALITY,
preprocess_vars_cache)
do_encoding_random = tf.greater_equal(do_encoding_random, random_coef)
image = tf.cond(do_encoding_random, _adjust_jpeg_quality,
lambda: tf.cast(image, tf.float32))
return image
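# Editor's note: hedged, illustrative sketch (not part of the original module);
# the helper name is hypothetical. This simulates aggressive JPEG compression:
# a quality level is drawn from [5, 50) and the image is round-tripped through
# JPEG encode/decode.
def _example_random_jpeg_quality():
  """Hedged usage sketch for random_jpeg_quality."""
  image = tf.random_uniform([224, 224, 3], maxval=255.0)
  return random_jpeg_quality(image, min_jpeg_quality=5, max_jpeg_quality=50)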
def random_downscale_to_target_pixels(image,
masks=None,
min_target_pixels=300000,
max_target_pixels=800000,
random_coef=0.0,
seed=None,
preprocess_vars_cache=None):
"""Randomly downscales the image to a target number of pixels.
If the image contains less than the chosen target number of pixels, it will
not be downscaled.
Args:
image: Rank 3 float32 tensor with shape [height, width, channels] and
values in the range [0, 255].
masks: (optional) Rank 3 float32 tensor with shape
[num_instances, height, width] containing instance masks. The masks are of
the same height, width as the input `image`.
    min_target_pixels: Integer. An inclusive lower bound for the target
      number of pixels.
    max_target_pixels: Integer. An exclusive upper bound for the target
      number of pixels.
random_coef: Float. Random coefficient that defines the chance of getting
the original image. If random_coef is 0, we will always apply downscaling,
and if it is 1.0, we will always get the original image.
seed: (optional) Integer. Random seed.
preprocess_vars_cache: (optional) PreprocessorCache object that records
previously performed augmentations. Updated in-place. If this function is
called multiple times with the same non-null cache, it will perform
deterministically.
Returns:
Tuple with elements:
image: Resized image which is the same rank as input image.
masks: If masks is not None, resized masks which are the same rank as
the input masks.
Raises:
ValueError: If min_target_pixels or max_target_pixels are not positive.
"""
if min_target_pixels <= 0:
raise ValueError('Minimum target pixels must be positive')
if max_target_pixels <= 0:
raise ValueError('Maximum target pixels must be positive')
def _resize_image_to_target(target_height, target_width):
# pylint: disable=unbalanced-tuple-unpacking
new_image, _ = resize_image(image, None, target_height, target_width)
return (new_image,)
def _resize_image_and_masks_to_target(target_height, target_width):
# pylint: disable=unbalanced-tuple-unpacking
new_image, new_masks, _ = resize_image(image, masks, target_height,
target_width)
return new_image, new_masks
with tf.name_scope('RandomDownscaleToTargetPixels', values=[image]):
generator_fn = functools.partial(tf.random_uniform, [], seed=seed)
do_downscale_random = _get_or_create_preprocess_rand_vars(
generator_fn,
preprocessor_cache.PreprocessorCache.DOWNSCALE_TO_TARGET_PIXELS,
preprocess_vars_cache)
do_downscale_random = tf.greater_equal(do_downscale_random, random_coef)
generator_fn = functools.partial(
tf.random_uniform, [],
minval=min_target_pixels,
maxval=max_target_pixels,
dtype=tf.int32,
seed=seed)
target_pixels = _get_or_create_preprocess_rand_vars(
generator_fn,
preprocessor_cache.PreprocessorCache.DOWNSCALE_TO_TARGET_PIXELS,
preprocess_vars_cache,
key='target_pixels')
image_shape = tf.shape(image)
image_height = image_shape[0]
image_width = image_shape[1]
image_pixels = image_height * image_width
scale_factor = tf.sqrt(
tf.cast(target_pixels, dtype=tf.float32) /
tf.cast(image_pixels, dtype=tf.float32))
target_height = tf.cast(
scale_factor * tf.cast(image_height, dtype=tf.float32), dtype=tf.int32)
target_width = tf.cast(
scale_factor * tf.cast(image_width, dtype=tf.float32), dtype=tf.int32)
image_larger_than_target = tf.greater(image_pixels, target_pixels)
should_apply_resize = tf.logical_and(do_downscale_random,
image_larger_than_target)
if masks is not None:
resize_fn = functools.partial(_resize_image_and_masks_to_target,
target_height, target_width)
return tf.cond(should_apply_resize, resize_fn,
lambda: (tf.cast(image, dtype=tf.float32), masks))
else:
resize_fn = lambda: _resize_image_to_target(target_height, target_width)
return tf.cond(should_apply_resize, resize_fn,
lambda: (tf.cast(image, dtype=tf.float32),))
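# Editor's usage sketch (not part of the original module; assumes the module's
# TF 1.x imports and a float32 `image` of shape [H, W, 3] in [0, 255]):
# a 4000x3000 input has 12,000,000 pixels; if target_pixels is drawn as
# 480,000, then scale_factor = sqrt(480000 / 12000000) = 0.2 and the output
# is 800x600.
#
#   downscaled = random_downscale_to_target_pixels(
#       image, min_target_pixels=300000, max_target_pixels=800000)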
def random_patch_gaussian(image,
min_patch_size=1,
max_patch_size=250,
min_gaussian_stddev=0.0,
max_gaussian_stddev=1.0,
random_coef=0.0,
seed=None,
preprocess_vars_cache=None):
"""Randomly applies gaussian noise to a random patch on the image.
The gaussian noise is applied to the image with values scaled to the range
[0.0, 1.0]. The result of applying gaussian noise to the scaled image is
clipped to be within the range [0.0, 1.0], equivalent to the range
[0.0, 255.0] after rescaling the image back.
See "Improving Robustness Without Sacrificing Accuracy with Patch Gaussian
Augmentation " by Lopes et al., 2019, for further details.
https://arxiv.org/abs/1906.02611
Args:
image: Rank 3 float32 tensor with shape [height, width, channels] and
values in the range [0.0, 255.0].
min_patch_size: Integer. An inclusive lower bound for the patch size.
max_patch_size: Integer. An exclusive upper bound for the patch size.
min_gaussian_stddev: Float. An inclusive lower bound for the standard
deviation of the gaussian noise.
max_gaussian_stddev: Float. An exclusive upper bound for the standard
deviation of the gaussian noise.
random_coef: Float. Random coefficient that defines the chance of getting
the original image. If random_coef is 0.0, we will always apply the patch
gaussian noise, and if it is 1.0, we will always get the original image.
seed: (optional) Integer. Random seed.
preprocess_vars_cache: (optional) PreprocessorCache object that records
previously performed augmentations. Updated in-place. If this function is
called multiple times with the same non-null cache, it will perform
deterministically.
Returns:
Rank 3 float32 tensor with same shape as the input image and with gaussian
noise applied within a random patch.
Raises:
ValueError: If min_patch_size is < 1.
"""
if min_patch_size < 1:
raise ValueError('Minimum patch size must be >= 1.')
get_or_create_rand_vars_fn = functools.partial(
_get_or_create_preprocess_rand_vars,
function_id=preprocessor_cache.PreprocessorCache.PATCH_GAUSSIAN,
preprocess_vars_cache=preprocess_vars_cache)
def _apply_patch_gaussian(image):
"""Applies a patch gaussian with random size, location, and stddev."""
patch_size = get_or_create_rand_vars_fn(
functools.partial(
tf.random_uniform, [],
minval=min_patch_size,
maxval=max_patch_size,
dtype=tf.int32,
seed=seed),
key='patch_size')
gaussian_stddev = get_or_create_rand_vars_fn(
functools.partial(
tf.random_uniform, [],
minval=min_gaussian_stddev,
maxval=max_gaussian_stddev,
dtype=tf.float32,
seed=seed),
key='gaussian_stddev')
image_shape = tf.shape(image)
y = get_or_create_rand_vars_fn(
functools.partial(
tf.random_uniform, [],
minval=0,
maxval=image_shape[0],
dtype=tf.int32,
seed=seed),
key='y')
x = get_or_create_rand_vars_fn(
functools.partial(
tf.random_uniform, [],
minval=0,
maxval=image_shape[1],
dtype=tf.int32,
seed=seed),
key='x')
gaussian = get_or_create_rand_vars_fn(
functools.partial(
tf.random.normal,
image_shape,
stddev=gaussian_stddev,
dtype=tf.float32,
seed=seed),
key='gaussian')
scaled_image = image / 255.0
image_plus_gaussian = tf.clip_by_value(scaled_image + gaussian, 0.0, 1.0)
patch_mask = patch_ops.get_patch_mask(y, x, patch_size, image_shape)
patch_mask = tf.expand_dims(patch_mask, -1)
patch_mask = tf.tile(patch_mask, [1, 1, image_shape[2]])
patched_image = tf.where(patch_mask, image_plus_gaussian, scaled_image)
return patched_image * 255.0
with tf.name_scope('RandomPatchGaussian', values=[image]):
image = tf.cast(image, tf.float32)
patch_gaussian_random = get_or_create_rand_vars_fn(
functools.partial(tf.random_uniform, [], seed=seed))
do_patch_gaussian = tf.greater_equal(patch_gaussian_random, random_coef)
image = tf.cond(do_patch_gaussian,
lambda: _apply_patch_gaussian(image),
lambda: image)
return image
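# Editor's usage sketch (not part of the original module): with the default
# random_coef=0.0 the augmentation is always applied; a patch whose side is
# drawn from [min_patch_size, max_patch_size) is placed at a uniformly sampled
# (y, x), and noise with stddev drawn from [min_gaussian_stddev,
# max_gaussian_stddev) is added inside it on the [0, 1]-scaled image before
# clipping and rescaling back to [0, 255].
#
#   noisy = random_patch_gaussian(
#       image, min_patch_size=1, max_patch_size=250,
#       min_gaussian_stddev=0.0, max_gaussian_stddev=1.0)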
# TODO(barretzoph): Put in AutoAugment Paper link when paper is live.
def autoaugment_image(image, boxes, policy_name='v0'):
"""Apply an autoaugment policy to the image and boxes.
Args:
image: rank 3 float32 tensor contains 1 image -> [height, width, channels]
with pixel values varying between [0, 255].
boxes: rank 2 float32 tensor containing the bounding boxes with shape
[num_instances, 4].
Boxes are in normalized form meaning their coordinates vary
between [0, 1].
Each row is in the form of [ymin, xmin, ymax, xmax].
policy_name: The name of the AutoAugment policy to use. The available
options are `v0`, `v1`, `v2`, `v3` and `test`. `v0` is the policy used for
all of the results in the paper and was found to achieve the best results
on the COCO dataset. `v1`, `v2` and `v3` are additional good policies
found on the COCO dataset that have slight variation in what operations
were used during the search procedure along with how many operations are
applied in parallel to a single image (2 vs 3).
Returns:
image: the augmented image.
boxes: boxes which is the same rank as input boxes. Boxes are in normalized
form. boxes will have been augmented along with image.
"""
return autoaugment_utils.distort_image_with_autoaugment(
image, boxes, policy_name)
def image_to_float(image):
"""Used in Faster R-CNN. Casts image pixel values to float.
Args:
image: input image which might be in tf.uint8 or some other format.
Returns:
image: image in tf.float32 format.
"""
with tf.name_scope('ImageToFloat', values=[image]):
image = tf.cast(image, dtype=tf.float32)
return image
def random_resize_method(image, target_size, preprocess_vars_cache=None):
"""Uses a random resize method to resize the image to target size.
Args:
image: a rank 3 tensor.
target_size: a list of [target_height, target_width]
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
Returns:
resized image.
"""
resized_image = _apply_with_random_selector(
image,
lambda x, method: tf.image.resize_images(x, target_size, method),
num_cases=4,
preprocess_vars_cache=preprocess_vars_cache,
key=preprocessor_cache.PreprocessorCache.RESIZE_METHOD)
return resized_image
def resize_to_range(image,
masks=None,
min_dimension=None,
max_dimension=None,
method=tf.image.ResizeMethod.BILINEAR,
align_corners=False,
pad_to_max_dimension=False,
per_channel_pad_value=(0, 0, 0)):
"""Resizes an image so its dimensions are within the provided value.
The output size can be described by two cases:
1. If the image can be rescaled so its minimum dimension is equal to the
provided value without the other dimension exceeding max_dimension,
then do so.
2. Otherwise, resize so the largest dimension is equal to max_dimension.
Args:
image: A 3D tensor of shape [height, width, channels]
masks: (optional) rank 3 float32 tensor with shape
[num_instances, height, width] containing instance masks.
min_dimension: (optional) (scalar) desired size of the smaller image
dimension.
max_dimension: (optional) (scalar) maximum allowed size
of the larger image dimension.
method: (optional) interpolation method used in resizing. Defaults to
BILINEAR.
align_corners: bool. If true, exactly align all 4 corners of the input
and output. Defaults to False.
pad_to_max_dimension: Whether to resize the image and pad it with zeros
so the resulting image is of the spatial size
[max_dimension, max_dimension]. If masks are included they are padded
similarly.
per_channel_pad_value: A tuple of per-channel scalar value to use for
padding. By default pads zeros.
Returns:
Note that the position of the resized_image_shape changes based on whether
masks are present.
resized_image: A 3D tensor of shape [new_height, new_width, channels],
where the image has been resized (with bilinear interpolation) so that
min(new_height, new_width) == min_dimension or
max(new_height, new_width) == max_dimension.
resized_masks: If masks is not None, also outputs masks. A 3D tensor of
shape [num_instances, new_height, new_width].
resized_image_shape: A 1D tensor of shape [3] containing shape of the
resized image.
Raises:
ValueError: if the image is not a 3D tensor.
"""
if len(image.get_shape()) != 3:
raise ValueError('Image should be 3D tensor')
def _resize_landscape_image(image):
# resize a landscape image
return tf.image.resize_images(
image, tf.stack([min_dimension, max_dimension]), method=method,
align_corners=align_corners, preserve_aspect_ratio=True)
def _resize_portrait_image(image):
# resize a portrait image
return tf.image.resize_images(
image, tf.stack([max_dimension, min_dimension]), method=method,
align_corners=align_corners, preserve_aspect_ratio=True)
with tf.name_scope('ResizeToRange', values=[image, min_dimension]):
if image.get_shape().is_fully_defined():
if image.get_shape()[0] < image.get_shape()[1]:
new_image = _resize_landscape_image(image)
else:
new_image = _resize_portrait_image(image)
new_size = tf.constant(new_image.get_shape().as_list())
else:
new_image = tf.cond(
tf.less(tf.shape(image)[0], tf.shape(image)[1]),
lambda: _resize_landscape_image(image),
lambda: _resize_portrait_image(image))
new_size = tf.shape(new_image)
if pad_to_max_dimension:
channels = tf.unstack(new_image, axis=2)
if len(channels) != len(per_channel_pad_value):
raise ValueError('Number of channels must be equal to the length of '
'per-channel pad value.')
new_image = tf.stack(
[
tf.pad(
channels[i], [[0, max_dimension - new_size[0]],
[0, max_dimension - new_size[1]]],
constant_values=per_channel_pad_value[i])
for i in range(len(channels))
],
axis=2)
new_image.set_shape([max_dimension, max_dimension, 3])
result = [new_image]
if masks is not None:
new_masks = tf.expand_dims(masks, 3)
new_masks = tf.image.resize_images(
new_masks,
new_size[:-1],
method=tf.image.ResizeMethod.NEAREST_NEIGHBOR,
align_corners=align_corners)
if pad_to_max_dimension:
new_masks = tf.image.pad_to_bounding_box(
new_masks, 0, 0, max_dimension, max_dimension)
new_masks = tf.squeeze(new_masks, 3)
result.append(new_masks)
result.append(new_size)
return result
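# Editor's worked example (not part of the original module): with
# min_dimension=800 and max_dimension=1333, a 600x1200 image cannot have its
# smaller side scaled to 800 (the larger side would become 1600 > 1333), so
# case 2 applies and the larger side is scaled to 1333, giving ~667x1333.
# A 600x800 image falls under case 1 and becomes 800x1067.
#
#   resized_image, resized_shape = resize_to_range(
#       image, min_dimension=800, max_dimension=1333)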
def _get_image_info(image):
"""Returns the height, width and number of channels in the image."""
image_height = tf.shape(image)[0]
image_width = tf.shape(image)[1]
num_channels = tf.shape(image)[2]
return (image_height, image_width, num_channels)
# TODO(alirezafathi): Make sure the static shapes are preserved.
def resize_to_min_dimension(image, masks=None, min_dimension=600,
method=tf.image.ResizeMethod.BILINEAR):
"""Resizes image and masks given the min size maintaining the aspect ratio.
If one of the image dimensions is smaller than min_dimension, it will scale
the image such that its smallest dimension is equal to min_dimension.
Otherwise, it will keep the image size as is.
Args:
image: a tensor of size [height, width, channels].
masks: (optional) a tensor of size [num_instances, height, width].
min_dimension: minimum image dimension.
method: (optional) interpolation method used in resizing. Defaults to
BILINEAR.
Returns:
An array containing resized_image, resized_masks, and resized_image_shape.
Note that the position of the resized_image_shape changes based on whether
masks are present.
resized_image: A tensor of size [new_height, new_width, channels].
resized_masks: If masks is not None, also outputs masks. A 3D tensor of
shape [num_instances, new_height, new_width]
resized_image_shape: A 1D tensor of shape [3] containing the shape of the
resized image.
Raises:
ValueError: if the image is not a 3D tensor.
"""
if len(image.get_shape()) != 3:
raise ValueError('Image should be 3D tensor')
with tf.name_scope('ResizeGivenMinDimension', values=[image, min_dimension]):
(image_height, image_width, num_channels) = _get_image_info(image)
min_image_dimension = tf.minimum(image_height, image_width)
min_target_dimension = tf.maximum(min_image_dimension, min_dimension)
target_ratio = tf.cast(min_target_dimension, dtype=tf.float32) / tf.cast(
min_image_dimension, dtype=tf.float32)
target_height = tf.cast(
tf.cast(image_height, dtype=tf.float32) * target_ratio, dtype=tf.int32)
target_width = tf.cast(
tf.cast(image_width, dtype=tf.float32) * target_ratio, dtype=tf.int32)
image = tf.image.resize_images(
tf.expand_dims(image, axis=0), size=[target_height, target_width],
method=method,
align_corners=True)
result = [tf.squeeze(image, axis=0)]
if masks is not None:
masks = tf.image.resize_nearest_neighbor(
tf.expand_dims(masks, axis=3),
size=[target_height, target_width],
align_corners=True)
result.append(tf.squeeze(masks, axis=3))
result.append(tf.stack([target_height, target_width, num_channels]))
return result
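# Editor's worked example (not part of the original module): with
# min_dimension=600, a 480x640 image has a smaller side of 480 < 600, so
# target_ratio = 600 / 480 = 1.25 and the output is 600x800. A 700x900 image
# is returned at its original size, since max(700, 600) / 700 = 1.0.
#
#   resized_image, resized_shape = resize_to_min_dimension(
#       image, min_dimension=600)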
def resize_to_max_dimension(image, masks=None, max_dimension=600,
method=tf.image.ResizeMethod.BILINEAR):
"""Resizes image and masks given the max size maintaining the aspect ratio.
If one of the image dimensions is greater than max_dimension, it will scale
the image such that its largest dimension is equal to max_dimension.
Otherwise, it will keep the image size as is.
Args:
image: a tensor of size [height, width, channels].
masks: (optional) a tensor of size [num_instances, height, width].
max_dimension: maximum image dimension.
method: (optional) interpolation method used in resizing. Defaults to
BILINEAR.
Returns:
An array containing resized_image, resized_masks, and resized_image_shape.
Note that the position of the resized_image_shape changes based on whether
masks are present.
resized_image: A tensor of size [new_height, new_width, channels].
resized_masks: If masks is not None, also outputs masks. A 3D tensor of
shape [num_instances, new_height, new_width]
resized_image_shape: A 1D tensor of shape [3] containing the shape of the
resized image.
Raises:
ValueError: if the image is not a 3D tensor.
"""
if len(image.get_shape()) != 3:
raise ValueError('Image should be 3D tensor')
with tf.name_scope('ResizeGivenMaxDimension', values=[image, max_dimension]):
(image_height, image_width, num_channels) = _get_image_info(image)
max_image_dimension = tf.maximum(image_height, image_width)
max_target_dimension = tf.minimum(max_image_dimension, max_dimension)
target_ratio = tf.cast(max_target_dimension, dtype=tf.float32) / tf.cast(
max_image_dimension, dtype=tf.float32)
target_height = tf.cast(
tf.cast(image_height, dtype=tf.float32) * target_ratio, dtype=tf.int32)
target_width = tf.cast(
tf.cast(image_width, dtype=tf.float32) * target_ratio, dtype=tf.int32)
image = tf.image.resize_images(
tf.expand_dims(image, axis=0), size=[target_height, target_width],
method=method,
align_corners=True)
result = [tf.squeeze(image, axis=0)]
if masks is not None:
masks = tf.image.resize_nearest_neighbor(
tf.expand_dims(masks, axis=3),
size=[target_height, target_width],
align_corners=True)
result.append(tf.squeeze(masks, axis=3))
result.append(tf.stack([target_height, target_width, num_channels]))
return result
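# Editor's worked example (not part of the original module): the mirror case
# of resize_to_min_dimension. With max_dimension=600, an 800x1200 image has a
# larger side of 1200, so target_ratio = 600 / 1200 = 0.5 and the output is
# 400x600; a 480x500 image is already within bounds (target_ratio = 1.0) and
# keeps its size.
#
#   resized_image, resized_shape = resize_to_max_dimension(
#       image, max_dimension=600)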
def resize_pad_to_multiple(image, masks=None, multiple=1):
"""Resize an image by zero padding it to the specified multiple.
For example, with an image of size (101, 199, 3) and multiple=4,
the returned image will have shape (104, 200, 3).
Args:
image: a tensor of shape [height, width, channels]
masks: (optional) a tensor of shape [num_instances, height, width]
multiple: int, the multiple to which the height and width of the input
will be padded.
Returns:
resized_image: The image with 0 padding applied, such that output
dimensions are divisible by `multiple`
resized_masks: If masks are given, they are resized to the same
spatial dimensions as the image.
resized_image_shape: An integer tensor of shape [3] which holds
the shape of the input image.
Raises:
ValueError: if the image is not a 3D tensor.
"""
if len(image.get_shape()) != 3:
raise ValueError('Image should be 3D tensor')
with tf.name_scope('ResizePadToMultiple', values=[image, multiple]):
image_height, image_width, num_channels = _get_image_info(image)
image = image[tf.newaxis, :, :, :]
image = ops.pad_to_multiple(image, multiple)[0, :, :, :]
if masks is not None:
masks = tf.transpose(masks, (1, 2, 0))
masks = masks[tf.newaxis, :, :, :]
masks = ops.pad_to_multiple(masks, multiple)[0, :, :, :]
masks = tf.transpose(masks, (2, 0, 1))
if masks is None:
return image, tf.stack([image_height, image_width, num_channels])
else:
return image, masks, tf.stack([image_height, image_width, num_channels])
def scale_boxes_to_pixel_coordinates(image, boxes, keypoints=None):
"""Scales boxes from normalized to pixel coordinates.
Args:
image: A 3D float32 tensor of shape [height, width, channels].
boxes: A 2D float32 tensor of shape [num_boxes, 4] containing the bounding
boxes in normalized coordinates. Each row is of the form
[ymin, xmin, ymax, xmax].
keypoints: (optional) rank 3 float32 tensor with shape
[num_instances, num_keypoints, 2]. The keypoints are in y-x normalized
coordinates.
Returns:
image: unchanged input image.
scaled_boxes: a 2D float32 tensor of shape [num_boxes, 4] containing the
bounding boxes in pixel coordinates.
scaled_keypoints: a 3D float32 tensor with shape
[num_instances, num_keypoints, 2] containing the keypoints in pixel
coordinates.
"""
boxlist = box_list.BoxList(boxes)
image_height = tf.shape(image)[0]
image_width = tf.shape(image)[1]
scaled_boxes = box_list_ops.scale(boxlist, image_height, image_width).get()
result = [image, scaled_boxes]
if keypoints is not None:
scaled_keypoints = keypoint_ops.scale(keypoints, image_height, image_width)
result.append(scaled_keypoints)
return tuple(result)
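# Editor's worked example (not part of the original module): for a 200x100
# (height x width) image, the normalized box [0.1, 0.2, 0.5, 0.6] scales to
# [0.1 * 200, 0.2 * 100, 0.5 * 200, 0.6 * 100] = [20, 20, 100, 60] in pixels.
#
#   image, pixel_boxes = scale_boxes_to_pixel_coordinates(image, boxes)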
# TODO(alirezafathi): Investigate if instead the function should return None if
# masks is None.
# pylint: disable=g-doc-return-or-yield
def resize_image(image,
masks=None,
new_height=600,
new_width=1024,
method=tf.image.ResizeMethod.BILINEAR,
align_corners=False):
"""Resizes images to the given height and width.
Args:
image: A 3D tensor of shape [height, width, channels]
masks: (optional) rank 3 float32 tensor with shape
[num_instances, height, width] containing instance masks.
new_height: (optional) (scalar) desired height of the image.
new_width: (optional) (scalar) desired width of the image.
method: (optional) interpolation method used in resizing. Defaults to
BILINEAR.
align_corners: bool. If true, exactly align all 4 corners of the input
and output. Defaults to False.
Returns:
Note that the position of the resized_image_shape changes based on whether
masks are present.
resized_image: A tensor of size [new_height, new_width, channels].
resized_masks: If masks is not None, also outputs masks. A 3D tensor of
shape [num_instances, new_height, new_width]
resized_image_shape: A 1D tensor of shape [3] containing the shape of the
resized image.
"""
with tf.name_scope(
'ResizeImage',
values=[image, new_height, new_width, method, align_corners]):
new_image = tf.image.resize_images(
image, tf.stack([new_height, new_width]),
method=method,
align_corners=align_corners)
image_shape = shape_utils.combined_static_and_dynamic_shape(image)
result = [new_image]
if masks is not None:
num_instances = tf.shape(masks)[0]
new_size = tf.stack([new_height, new_width])
def resize_masks_branch():
new_masks = tf.expand_dims(masks, 3)
new_masks = tf.image.resize_nearest_neighbor(
new_masks, new_size, align_corners=align_corners)
new_masks = tf.squeeze(new_masks, axis=3)
return new_masks
def reshape_masks_branch():
# The shape function will be computed for both branches of the
# condition, regardless of which branch is actually taken. Make sure
# that we don't trigger an assertion in the shape function when trying
# to reshape a non empty tensor into an empty one.
new_masks = tf.reshape(masks, [-1, new_size[0], new_size[1]])
return new_masks
masks = tf.cond(num_instances > 0, resize_masks_branch,
reshape_masks_branch)
result.append(masks)
result.append(tf.stack([new_height, new_width, image_shape[2]]))
return result
def subtract_channel_mean(image, means=None):
"""Normalizes an image by subtracting a mean from each channel.
Args:
image: A 3D tensor of shape [height, width, channels]
means: float list containing a mean for each channel
Returns:
normalized_images: a tensor of shape [height, width, channels]
Raises:
ValueError: if image is not a 3D tensor or if the number of means is not
equal to the number of channels.
"""
with tf.name_scope('SubtractChannelMean', values=[image, means]):
if len(image.get_shape()) != 3:
raise ValueError('Input must be of size [height, width, channels]')
if len(means) != image.get_shape()[-1]:
raise ValueError('len(means) must match the number of channels')
return image - [[means]]
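# Editor's note (not part of the original module): `[[means]]` has shape
# [1, 1, channels], so the subtraction broadcasts one mean over each channel.
# For an RGB image with VGG-style channel means, a typical call is:
#
#   normalized = subtract_channel_mean(image, means=[123.68, 116.78, 103.94])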
def one_hot_encoding(labels, num_classes=None):
"""One-hot encodes the multiclass labels.
Example usage:
labels = tf.constant([1, 4], dtype=tf.int32)
one_hot = one_hot_encoding(labels, num_classes=5)
one_hot.eval() # evaluates to [0, 1, 0, 0, 1]
Args:
labels: A tensor of shape [None] corresponding to the labels.
num_classes: Number of classes in the dataset.
Returns:
onehot_labels: a tensor of shape [num_classes] corresponding to the one hot
encoding of the labels.
Raises:
ValueError: if num_classes is not specified.
"""
with tf.name_scope('OneHotEncoding', values=[labels]):
if num_classes is None:
raise ValueError('num_classes must be specified')
labels = tf.one_hot(labels, num_classes, 1, 0)
return tf.reduce_max(labels, 0)
def rgb_to_gray(image):
"""Converts a 3 channel RGB image to a 1 channel grayscale image.
Args:
image: Rank 3 float32 tensor containing 1 image -> [height, width, 3]
with pixel values varying between [0, 1].
Returns:
image: A single channel grayscale image -> [height, width, 1].
"""
return _rgb_to_grayscale(image)
def random_self_concat_image(
image, boxes, labels, label_weights, label_confidences=None,
multiclass_scores=None, concat_vertical_probability=0.1,
concat_horizontal_probability=0.1, seed=None,
preprocess_vars_cache=None):
"""Randomly concatenates the image with itself.
This function randomly concatenates the image with itself; the random
variables for vertical and horizontal concatenation are independent.
Afterwards, we adjust the old bounding boxes, and add new bounding boxes
for the new objects.
Args:
image: rank 3 float32 tensor containing 1 image -> [height, width, channels]
with pixel values varying between [0, 1].
boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
Boxes are in normalized form meaning their coordinates vary
between [0, 1].
Each row is in the form of [ymin, xmin, ymax, xmax].
labels: rank 1 int32 tensor containing the object classes.
label_weights: rank 1 float32 tensor containing the label weights.
label_confidences: (optional) rank 1 float32 tensor containing the label
confidences.
multiclass_scores: (optional) float32 tensor of shape
[num_instances, num_classes] representing the score for
each box for each class.
concat_vertical_probability: (optional) a tf.float32 scalar denoting the
probability of a vertical concatenation.
concat_horizontal_probability: (optional) a tf.float32 scalar denoting the
probability of a horizontal concatenation.
seed: random seed.
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
Returns:
image: Image shape will be [new_height, new_width, channels].
boxes: boxes which is the same rank as input boxes. Boxes are in normalized
form.
if label_confidences is not None also returns:
maybe_concat_label_confidences: concatenated label confidences.
if multiclass_scores is not None also returns:
maybe_concat_multiclass_scores: concatenated multiclass scores.
"""
concat_vertical = (tf.random_uniform([], seed=seed) <
concat_vertical_probability)
# Note the seed + 1 so we get some semblance of independence even with
# fixed seeds.
concat_horizontal = (tf.random_uniform([], seed=seed + 1 if seed else None)
< concat_horizontal_probability)
gen_func = lambda: (concat_vertical, concat_horizontal)
params = _get_or_create_preprocess_rand_vars(
gen_func, preprocessor_cache.PreprocessorCache.SELF_CONCAT_IMAGE,
preprocess_vars_cache)
concat_vertical, concat_horizontal = params
def _concat_image(image, boxes, labels, label_weights, axis):
"""Concats the image to itself on `axis`."""
output_images = tf.concat([image, image], axis=axis)
if axis == 0:
# Concat vertically, so need to reduce the y coordinates.
old_scaling = tf.constant([0.5, 1.0, 0.5, 1.0])
new_translation = tf.constant([0.5, 0.0, 0.5, 0.0])
elif axis == 1:
old_scaling = tf.constant([1.0, 0.5, 1.0, 0.5])
new_translation = tf.constant([0.0, 0.5, 0.0, 0.5])
old_boxes = old_scaling * boxes
new_boxes = old_boxes + new_translation
all_boxes = tf.concat([old_boxes, new_boxes], axis=0)
return [output_images, all_boxes, tf.tile(labels, [2]), tf.tile(
label_weights, [2])]
image, boxes, labels, label_weights = tf.cond(
concat_vertical,
lambda: _concat_image(image, boxes, labels, label_weights, axis=0),
lambda: [image, boxes, labels, label_weights],
strict=True)
outputs = tf.cond(
concat_horizontal,
lambda: _concat_image(image, boxes, labels, label_weights, axis=1),
lambda: [image, boxes, labels, label_weights],
strict=True)
if label_confidences is not None:
label_confidences = tf.cond(concat_vertical,
lambda: tf.tile(label_confidences, [2]),
lambda: label_confidences)
outputs.append(tf.cond(concat_horizontal,
lambda: tf.tile(label_confidences, [2]),
lambda: label_confidences))
if multiclass_scores is not None:
multiclass_scores = tf.cond(concat_vertical,
lambda: tf.tile(multiclass_scores, [2, 1]),
lambda: multiclass_scores)
outputs.append(tf.cond(concat_horizontal,
lambda: tf.tile(multiclass_scores, [2, 1]),
lambda: multiclass_scores))
return outputs
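# Editor's worked example (not part of the original module): for a vertical
# self-concat (axis=0), y coordinates are halved (old_scaling =
# [0.5, 1.0, 0.5, 1.0]) and the duplicate is shifted down by 0.5
# (new_translation = [0.5, 0.0, 0.5, 0.0]). A box [0.2, 0.1, 0.6, 0.5] thus
# becomes [0.1, 0.1, 0.3, 0.5] in the top half plus a copy
# [0.6, 0.1, 0.8, 0.5] in the bottom half; labels and weights are tiled to
# match.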
def ssd_random_crop(image,
boxes,
labels,
label_weights,
label_confidences=None,
multiclass_scores=None,
masks=None,
keypoints=None,
min_object_covered=(0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0),
aspect_ratio_range=((0.5, 2.0),) * 7,
area_range=((0.1, 1.0),) * 7,
overlap_thresh=(0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0),
clip_boxes=(True,) * 7,
random_coef=(0.15,) * 7,
seed=None,
preprocess_vars_cache=None):
"""Random crop preprocessing with default parameters as in SSD paper.
Liu et al., SSD: Single shot multibox detector.
For further information on random crop preprocessing refer to RandomCrop
function above.
Args:
image: rank 3 float32 tensor contains 1 image -> [height, width, channels]
with pixel values varying between [0, 1].
boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
Boxes are in normalized form meaning their coordinates vary
between [0, 1].
Each row is in the form of [ymin, xmin, ymax, xmax].
labels: rank 1 int32 tensor containing the object classes.
label_weights: rank 1 float32 tensor containing the weights.
label_confidences: rank 1 float32 tensor containing the confidences.
multiclass_scores: (optional) float32 tensor of shape
[num_instances, num_classes] representing the score for each box for each
class.
masks: (optional) rank 3 float32 tensor with shape
[num_instances, height, width] containing instance masks. The masks
are of the same height, width as the input `image`.
keypoints: (optional) rank 3 float32 tensor with shape
[num_instances, num_keypoints, 2]. The keypoints are in y-x
normalized coordinates.
min_object_covered: the cropped image must cover at least this fraction of
at least one of the input bounding boxes.
aspect_ratio_range: allowed range for aspect ratio of cropped image.
area_range: allowed range for area ratio between cropped image and the
original image.
overlap_thresh: minimum overlap threshold with the new cropped
image required to keep the box.
clip_boxes: whether to clip the boxes to the cropped image.
random_coef: a random coefficient that defines the chance of getting the
original image. If random_coef is 0, we will always get the
cropped image, and if it is 1.0, we will always get the
original image.
seed: random seed.
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
Returns:
image: image which is the same rank as input image.
boxes: boxes which is the same rank as input boxes.
Boxes are in normalized form.
labels: new labels.
If label_weights, label_confidences, multiclass_scores, masks, or keypoints
is not None, the function also returns:
label_weights: rank 1 float32 tensor with shape [num_instances].
label_confidences: rank 1 float32 tensor with shape [num_instances].
multiclass_scores: rank 2 float32 tensor with shape
[num_instances, num_classes]
masks: rank 3 float32 tensor with shape [num_instances, height, width]
containing instance masks.
keypoints: rank 3 float32 tensor with shape
[num_instances, num_keypoints, 2]
"""
def random_crop_selector(selected_result, index):
"""Applies random_crop_image to selected result.
Args:
selected_result: A tuple containing image, boxes, labels and, if not
None, label_weights, label_confidences, multiclass_scores, masks and
keypoints, in that order.
index: The index that was randomly selected.
Returns: A tuple with the same structure as `selected_result`, produced by
applying random_crop_image with the parameters at `index`.
"""
i = 3
image, boxes, labels = selected_result[:i]
selected_label_weights = None
selected_label_confidences = None
selected_multiclass_scores = None
selected_masks = None
selected_keypoints = None
if label_weights is not None:
selected_label_weights = selected_result[i]
i += 1
if label_confidences is not None:
selected_label_confidences = selected_result[i]
i += 1
if multiclass_scores is not None:
selected_multiclass_scores = selected_result[i]
i += 1
if masks is not None:
selected_masks = selected_result[i]
i += 1
if keypoints is not None:
selected_keypoints = selected_result[i]
return random_crop_image(
image=image,
boxes=boxes,
labels=labels,
label_weights=selected_label_weights,
label_confidences=selected_label_confidences,
multiclass_scores=selected_multiclass_scores,
masks=selected_masks,
keypoints=selected_keypoints,
min_object_covered=min_object_covered[index],
aspect_ratio_range=aspect_ratio_range[index],
area_range=area_range[index],
overlap_thresh=overlap_thresh[index],
clip_boxes=clip_boxes[index],
random_coef=random_coef[index],
seed=seed,
preprocess_vars_cache=preprocess_vars_cache)
result = _apply_with_random_selector_tuples(
tuple(
t for t in (image, boxes, labels, label_weights, label_confidences,
multiclass_scores, masks, keypoints) if t is not None),
random_crop_selector,
num_cases=len(min_object_covered),
preprocess_vars_cache=preprocess_vars_cache,
key=preprocessor_cache.PreprocessorCache.SSD_CROP_SELECTOR_ID)
return result
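# Editor's usage sketch (not part of the original module): each call draws one
# of the seven parameter tuples uniformly at random (cached under
# SSD_CROP_SELECTOR_ID for determinism) and runs random_crop_image with those
# settings, so index 0 (min_object_covered=0.0) may crop anywhere while
# index 6 (min_object_covered=1.0) must fully cover at least one box.
#
#   image, boxes, labels, label_weights = ssd_random_crop(
#       image, boxes, labels, label_weights)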
def ssd_random_crop_pad(image,
boxes,
labels,
label_weights,
label_confidences=None,
multiclass_scores=None,
min_object_covered=(0.1, 0.3, 0.5, 0.7, 0.9, 1.0),
aspect_ratio_range=((0.5, 2.0),) * 6,
area_range=((0.1, 1.0),) * 6,
overlap_thresh=(0.1, 0.3, 0.5, 0.7, 0.9, 1.0),
clip_boxes=(True,) * 6,
random_coef=(0.15,) * 6,
min_padded_size_ratio=((1.0, 1.0),) * 6,
max_padded_size_ratio=((2.0, 2.0),) * 6,
pad_color=(None,) * 6,
seed=None,
preprocess_vars_cache=None):
"""Random crop preprocessing with default parameters as in SSD paper.
Liu et al., SSD: Single shot multibox detector.
For further information on random crop preprocessing refer to RandomCrop
function above.
Args:
image: rank 3 float32 tensor containing 1 image -> [height, width, channels]
with pixel values varying between [0, 1].
boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
Boxes are in normalized form meaning their coordinates vary
between [0, 1].
Each row is in the form of [ymin, xmin, ymax, xmax].
labels: rank 1 int32 tensor containing the object classes.
label_weights: float32 tensor of shape [num_instances] representing the
weight for each box.
label_confidences: float32 tensor of shape [num_instances] representing the
confidences for each box.
multiclass_scores: (optional) float32 tensor of shape
[num_instances, num_classes] representing the score for each box for each
class.
min_object_covered: the cropped image must cover at least this fraction of
at least one of the input bounding boxes.
aspect_ratio_range: allowed range for aspect ratio of cropped image.
area_range: allowed range for area ratio between cropped image and the
original image.
overlap_thresh: minimum overlap threshold with the new cropped
image required to keep the box.
clip_boxes: whether to clip the boxes to the cropped image.
random_coef: a random coefficient that defines the chance of getting the
original image. If random_coef is 0, we will always get the
cropped image, and if it is 1.0, we will always get the
original image.
min_padded_size_ratio: min ratio of padded image height and width to the
input image's height and width.
max_padded_size_ratio: max ratio of padded image height and width to the
input image's height and width.
pad_color: padding color. A rank 1 tensor of [3] with dtype=tf.float32.
if set as None, it will be set to average color of the randomly
cropped image.
seed: random seed.
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
Returns:
image: Image shape will be [new_height, new_width, channels].
boxes: boxes which is the same rank as input boxes. Boxes are in normalized
form.
new_labels: new labels.
new_label_weights: new label weights.
"""
def random_crop_pad_selector(image_boxes_labels, index):
"""Random crop preprocessing helper."""
i = 3
image, boxes, labels = image_boxes_labels[:i]
selected_label_weights = None
selected_label_confidences = None
selected_multiclass_scores = None
if label_weights is not None:
selected_label_weights = image_boxes_labels[i]
i += 1
if label_confidences is not None:
selected_label_confidences = image_boxes_labels[i]
i += 1
if multiclass_scores is not None:
selected_multiclass_scores = image_boxes_labels[i]
return random_crop_pad_image(
image,
boxes,
labels,
label_weights=selected_label_weights,
label_confidences=selected_label_confidences,
multiclass_scores=selected_multiclass_scores,
min_object_covered=min_object_covered[index],
aspect_ratio_range=aspect_ratio_range[index],
area_range=area_range[index],
overlap_thresh=overlap_thresh[index],
clip_boxes=clip_boxes[index],
random_coef=random_coef[index],
min_padded_size_ratio=min_padded_size_ratio[index],
max_padded_size_ratio=max_padded_size_ratio[index],
pad_color=pad_color[index],
seed=seed,
preprocess_vars_cache=preprocess_vars_cache)
return _apply_with_random_selector_tuples(
tuple(t for t in (image, boxes, labels, label_weights, label_confidences,
multiclass_scores) if t is not None),
random_crop_pad_selector,
num_cases=len(min_object_covered),
preprocess_vars_cache=preprocess_vars_cache,
key=preprocessor_cache.PreprocessorCache.SSD_CROP_PAD_SELECTOR_ID)
def ssd_random_crop_fixed_aspect_ratio(
image,
boxes,
labels,
label_weights,
label_confidences=None,
multiclass_scores=None,
masks=None,
keypoints=None,
min_object_covered=(0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0),
aspect_ratio=1.0,
area_range=((0.1, 1.0),) * 7,
overlap_thresh=(0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0),
clip_boxes=(True,) * 7,
random_coef=(0.15,) * 7,
seed=None,
preprocess_vars_cache=None):
"""Random crop preprocessing with default parameters as in SSD paper.
Liu et al., SSD: Single shot multibox detector.
For further information on random crop preprocessing refer to RandomCrop
function above.
The only difference is that the aspect ratio of the crops is fixed.
Args:
image: rank 3 float32 tensor contains 1 image -> [height, width, channels]
with pixel values varying between [0, 1].
boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
Boxes are in normalized form meaning their coordinates vary
between [0, 1].
Each row is in the form of [ymin, xmin, ymax, xmax].
labels: rank 1 int32 tensor containing the object classes.
label_weights: float32 tensor of shape [num_instances] representing the
weight for each box.
label_confidences: (optional) float32 tensor of shape [num_instances]
representing the confidences for each box.
multiclass_scores: (optional) float32 tensor of shape
[num_instances, num_classes] representing the score for each box for each
class.
masks: (optional) rank 3 float32 tensor with shape
[num_instances, height, width] containing instance masks. The masks
are of the same height, width as the input `image`.
keypoints: (optional) rank 3 float32 tensor with shape
[num_instances, num_keypoints, 2]. The keypoints are in y-x
normalized coordinates.
min_object_covered: the cropped image must cover at least this fraction of
at least one of the input bounding boxes.
aspect_ratio: aspect ratio of the cropped image.
area_range: allowed range for area ratio between cropped image and the
original image.
overlap_thresh: minimum overlap threshold with the new cropped
image required to keep the box.
clip_boxes: whether to clip the boxes to the cropped image.
random_coef: a random coefficient that defines the chance of getting the
original image. If random_coef is 0, we will always get the
cropped image, and if it is 1.0, we will always get the
original image.
seed: random seed.
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
Returns:
image: image which is the same rank as input image.
boxes: boxes which is the same rank as input boxes.
Boxes are in normalized form.
labels: new labels.
If multiclass_scores, masks, or keypoints is not None, the function also
returns:
multiclass_scores: rank 2 float32 tensor with shape
[num_instances, num_classes]
masks: rank 3 float32 tensor with shape [num_instances, height, width]
containing instance masks.
keypoints: rank 3 float32 tensor with shape
[num_instances, num_keypoints, 2]
"""
aspect_ratio_range = ((aspect_ratio, aspect_ratio),) * len(area_range)
crop_result = ssd_random_crop(
image,
boxes,
labels,
label_weights=label_weights,
label_confidences=label_confidences,
multiclass_scores=multiclass_scores,
masks=masks,
keypoints=keypoints,
min_object_covered=min_object_covered,
aspect_ratio_range=aspect_ratio_range,
area_range=area_range,
overlap_thresh=overlap_thresh,
clip_boxes=clip_boxes,
random_coef=random_coef,
seed=seed,
preprocess_vars_cache=preprocess_vars_cache)
i = 3
new_image, new_boxes, new_labels = crop_result[:i]
new_label_weights = None
new_label_confidences = None
new_multiclass_scores = None
new_masks = None
new_keypoints = None
if label_weights is not None:
new_label_weights = crop_result[i]
i += 1
if label_confidences is not None:
new_label_confidences = crop_result[i]
i += 1
if multiclass_scores is not None:
new_multiclass_scores = crop_result[i]
i += 1
if masks is not None:
new_masks = crop_result[i]
i += 1
if keypoints is not None:
new_keypoints = crop_result[i]
result = random_crop_to_aspect_ratio(
new_image,
new_boxes,
new_labels,
label_weights=new_label_weights,
label_confidences=new_label_confidences,
multiclass_scores=new_multiclass_scores,
masks=new_masks,
keypoints=new_keypoints,
aspect_ratio=aspect_ratio,
clip_boxes=clip_boxes,
seed=seed,
preprocess_vars_cache=preprocess_vars_cache)
return result
def ssd_random_crop_pad_fixed_aspect_ratio(
image,
boxes,
labels,
label_weights,
label_confidences=None,
multiclass_scores=None,
masks=None,
keypoints=None,
min_object_covered=(0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0),
aspect_ratio=1.0,
aspect_ratio_range=((0.5, 2.0),) * 7,
area_range=((0.1, 1.0),) * 7,
overlap_thresh=(0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0),
clip_boxes=(True,) * 7,
random_coef=(0.15,) * 7,
min_padded_size_ratio=(1.0, 1.0),
max_padded_size_ratio=(2.0, 2.0),
seed=None,
preprocess_vars_cache=None):
"""Random crop and pad preprocessing with default parameters as in SSD paper.
Liu et al., SSD: Single shot multibox detector.
For further information on random crop preprocessing refer to RandomCrop
function above.
The only difference is that after the initial crop, images are zero-padded
to a fixed aspect ratio instead of being resized to that aspect ratio.
Args:
image: rank 3 float32 tensor contains 1 image -> [height, width, channels]
with pixel values varying between [0, 1].
boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
Boxes are in normalized form meaning their coordinates vary
between [0, 1].
Each row is in the form of [ymin, xmin, ymax, xmax].
labels: rank 1 int32 tensor containing the object classes.
label_weights: float32 tensor of shape [num_instances] representing the
weight for each box.
label_confidences: (optional) float32 tensor of shape [num_instances]
representing the confidence for each box.
multiclass_scores: (optional) float32 tensor of shape
[num_instances, num_classes] representing the score for each box for each
class.
masks: (optional) rank 3 float32 tensor with shape
[num_instances, height, width] containing instance masks. The masks
are of the same height, width as the input `image`.
keypoints: (optional) rank 3 float32 tensor with shape
[num_instances, num_keypoints, 2]. The keypoints are in y-x
normalized coordinates.
min_object_covered: the cropped image must cover at least this fraction of
at least one of the input bounding boxes.
aspect_ratio: the final aspect ratio to pad to.
aspect_ratio_range: allowed range for aspect ratio of cropped image.
area_range: allowed range for area ratio between cropped image and the
original image.
overlap_thresh: minimum overlap threshold with the new cropped
image required to keep the box.
clip_boxes: whether to clip the boxes to the cropped image.
random_coef: a random coefficient that defines the chance of getting the
original image. If random_coef is 0, we will always get the
cropped image, and if it is 1.0, we will always get the
original image.
min_padded_size_ratio: min ratio of padded image height and width to the
input image's height and width.
max_padded_size_ratio: max ratio of padded image height and width to the
input image's height and width.
seed: random seed.
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
Returns:
image: image which is the same rank as input image.
boxes: boxes which is the same rank as input boxes.
Boxes are in normalized form.
labels: new labels.
If multiclass_scores, masks, or keypoints is not None, the function also
returns:
multiclass_scores: rank 2 float32 tensor with shape [num_instances, num_classes]
masks: rank 3 float32 tensor with shape [num_instances, height, width]
containing instance masks.
keypoints: rank 3 float32 tensor with shape
[num_instances, num_keypoints, 2]
"""
crop_result = ssd_random_crop(
image,
boxes,
labels,
label_weights=label_weights,
label_confidences=label_confidences,
multiclass_scores=multiclass_scores,
masks=masks,
keypoints=keypoints,
min_object_covered=min_object_covered,
aspect_ratio_range=aspect_ratio_range,
area_range=area_range,
overlap_thresh=overlap_thresh,
clip_boxes=clip_boxes,
random_coef=random_coef,
seed=seed,
preprocess_vars_cache=preprocess_vars_cache)
i = 3
new_image, new_boxes, new_labels = crop_result[:i]
new_label_weights = None
new_label_confidences = None
new_multiclass_scores = None
new_masks = None
new_keypoints = None
if label_weights is not None:
new_label_weights = crop_result[i]
i += 1
if label_confidences is not None:
new_label_confidences = crop_result[i]
i += 1
if multiclass_scores is not None:
new_multiclass_scores = crop_result[i]
i += 1
if masks is not None:
new_masks = crop_result[i]
i += 1
if keypoints is not None:
new_keypoints = crop_result[i]
result = random_pad_to_aspect_ratio(
new_image,
new_boxes,
masks=new_masks,
keypoints=new_keypoints,
aspect_ratio=aspect_ratio,
min_padded_size_ratio=min_padded_size_ratio,
max_padded_size_ratio=max_padded_size_ratio,
seed=seed,
preprocess_vars_cache=preprocess_vars_cache)
result = list(result)
i = 3
result.insert(2, new_labels)
if new_label_weights is not None:
result.insert(i, new_label_weights)
i += 1
if new_label_confidences is not None:
result.insert(i, new_label_confidences)
i += 1
if multiclass_scores is not None:
result.insert(i, new_multiclass_scores)
result = tuple(result)
return result
def convert_class_logits_to_softmax(multiclass_scores, temperature=1.0):
"""Converts multiclass logits to softmax scores after applying temperature.
Args:
multiclass_scores: float32 tensor of shape
[num_instances, num_classes] representing the score for each box for each
class.
temperature: Scale factor to use prior to applying softmax. Larger
temperatures give more uniform distributions after softmax.
Returns:
multiclass_scores: float32 tensor of shape
[num_instances, num_classes] with scaling and softmax applied.
"""
# Multiclass scores must be stored as logits. Apply temp and softmax.
multiclass_scores_scaled = tf.multiply(
multiclass_scores, 1.0 / temperature, name='scale_logits')
multiclass_scores = tf.nn.softmax(multiclass_scores_scaled, name='softmax')
return multiclass_scores
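# Editor's worked example (not part of the original module): with
# temperature=2.0, logits [2.0, 0.0] are scaled to [1.0, 0.0] and softmax
# yields ~[0.73, 0.27]; with temperature=1.0 the same logits yield
# ~[0.88, 0.12], so larger temperatures flatten the distribution.
#
#   scores = convert_class_logits_to_softmax(
#       multiclass_scores, temperature=2.0)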
def _get_crop_border(border, size):
border = tf.cast(border, tf.float32)
size = tf.cast(size, tf.float32)
i = tf.ceil(tf.log(2.0 * border / size) / tf.log(2.0))
divisor = tf.pow(2.0, i)
divisor = tf.clip_by_value(divisor, 1, border)
divisor = tf.cast(divisor, tf.int32)
return tf.cast(border, tf.int32) // divisor
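# Editor's worked example (not part of the original module):
# _get_crop_border(128, 150) computes i = ceil(log2(2 * 128 / 150)) = 1, so
# divisor = 2 and the border shrinks to 128 // 2 = 64, satisfying
# 64 < 150 / 2. For a large image (e.g. size=512) the divisor clips to 1 and
# the full border of 128 is kept.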
def random_square_crop_by_scale(image, boxes, labels, label_weights,
masks=None, keypoints=None, max_border=128,
scale_min=0.6, scale_max=1.3, num_scales=8,
seed=None, preprocess_vars_cache=None):
"""Randomly crop a square in proportion to scale and image size.
Extract a square sized crop from an image whose side length is sampled by
randomly scaling the maximum spatial dimension of the image. If part of
the crop falls outside the image, it is filled with zeros.
The augmentation is borrowed from [1]
[1]: https://arxiv.org/abs/1904.07850
Args:
image: rank 3 float32 tensor containing 1 image ->
[height, width, channels].
boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
Boxes are in normalized form meaning their coordinates vary
between [0, 1]. Each row is in the form of [ymin, xmin, ymax, xmax].
Boxes on the crop boundary are clipped to the boundary and boxes
falling outside the crop are ignored.
labels: rank 1 int32 tensor containing the object classes.
label_weights: float32 tensor of shape [num_instances] representing the
weight for each box.
masks: (optional) rank 3 float32 tensor with shape
[num_instances, height, width] containing instance masks. The masks
are of the same height, width as the input `image`.
keypoints: (optional) rank 3 float32 tensor with shape
[num_instances, num_keypoints, 2]. The keypoints are in y-x normalized
coordinates.
max_border: The maximum size of the border. The border defines distance in
pixels to the image boundaries that will not be considered as a center of
a crop. To make sure that the border does not go over the center of the
image, we choose the border value by computing the minimum k such that
(max_border / (2**k)) < image_dimension/2.
scale_min: float, the minimum value for scale.
scale_max: float, the maximum value for scale.
num_scales: int, the number of discrete scale values to sample between
[scale_min, scale_max]
seed: random seed.
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
Returns:
image: image which is the same rank as input image.
boxes: boxes which is the same rank as input boxes.
Boxes are in normalized form.
labels: new labels.
label_weights: rank 1 float32 tensor with shape [num_instances].
masks: rank 3 float32 tensor with shape [num_instances, height, width]
containing instance masks.
"""
img_shape = tf.shape(image)
height, width = img_shape[0], img_shape[1]
scales = tf.linspace(scale_min, scale_max, num_scales)
scale = _get_or_create_preprocess_rand_vars(
lambda: scales[_random_integer(0, num_scales, seed)],
preprocessor_cache.PreprocessorCache.SQUARE_CROP_BY_SCALE,
preprocess_vars_cache, 'scale')
image_size = scale * tf.cast(tf.maximum(height, width), tf.float32)
image_size = tf.cast(image_size, tf.int32)
h_border = _get_crop_border(max_border, height)
w_border = _get_crop_border(max_border, width)
def y_function():
y = _random_integer(h_border,
tf.cast(height, tf.int32) - h_border + 1,
seed)
return y
def x_function():
x = _random_integer(w_border,
tf.cast(width, tf.int32) - w_border + 1,
seed)
return x
y_center = _get_or_create_preprocess_rand_vars(
y_function,
preprocessor_cache.PreprocessorCache.SQUARE_CROP_BY_SCALE,
preprocess_vars_cache, 'y_center')
x_center = _get_or_create_preprocess_rand_vars(
x_function,
preprocessor_cache.PreprocessorCache.SQUARE_CROP_BY_SCALE,
preprocess_vars_cache, 'x_center')
half_size = tf.cast(image_size / 2, tf.int32)
crop_ymin, crop_ymax = y_center - half_size, y_center + half_size
crop_xmin, crop_xmax = x_center - half_size, x_center + half_size
ymin = tf.maximum(crop_ymin, 0)
xmin = tf.maximum(crop_xmin, 0)
ymax = tf.minimum(crop_ymax, height - 1)
xmax = tf.minimum(crop_xmax, width - 1)
cropped_image = image[ymin:ymax, xmin:xmax]
offset_y = tf.maximum(0, ymin - crop_ymin)
offset_x = tf.maximum(0, xmin - crop_xmin)
oy_i = offset_y
ox_i = offset_x
output_image = tf.image.pad_to_bounding_box(
cropped_image, offset_height=oy_i, offset_width=ox_i,
target_height=image_size, target_width=image_size)
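# Editor's note (not part of the original module): `ymin` is a graph tensor,
# so under TF 1.x this Python `==` comparison is always False and the else
# branch is taken. That is still correct: whenever ymin would be 0 we have
# crop_ymin <= 0 and -offset_y equals crop_ymin, so both branches agree.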
if ymin == 0:
# We might be padding the image.
box_ymin = -offset_y
else:
box_ymin = crop_ymin
if xmin == 0:
# We might be padding the image.
box_xmin = -offset_x
else:
box_xmin = crop_xmin
box_ymax = box_ymin + image_size
box_xmax = box_xmin + image_size
image_box = [box_ymin / height, box_xmin / width,
box_ymax / height, box_xmax / width]
boxlist = box_list.BoxList(boxes)
boxlist = box_list_ops.change_coordinate_frame(boxlist, image_box)
boxlist, indices = box_list_ops.prune_completely_outside_window(
boxlist, [0.0, 0.0, 1.0, 1.0])
boxlist = box_list_ops.clip_to_window(boxlist, [0.0, 0.0, 1.0, 1.0],
filter_nonoverlapping=False)
return_values = [output_image, boxlist.get(),
tf.gather(labels, indices),
tf.gather(label_weights, indices)]
if masks is not None:
new_masks = tf.expand_dims(masks, -1)
new_masks = new_masks[:, ymin:ymax, xmin:xmax]
new_masks = tf.image.pad_to_bounding_box(
new_masks, oy_i, ox_i, image_size, image_size)
new_masks = tf.squeeze(new_masks, [-1])
return_values.append(tf.gather(new_masks, indices))
if keypoints is not None:
keypoints = tf.gather(keypoints, indices)
keypoints = keypoint_ops.change_coordinate_frame(keypoints, image_box)
keypoints = keypoint_ops.prune_outside_window(keypoints,
[0.0, 0.0, 1.0, 1.0])
return_values.append(keypoints)
return return_values
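# Editor's usage sketch (not part of the original module): for a 480x640 image
# with the default scale range, the square crop side is
# scale * max(480, 640); e.g. scale=1.0 gives a 640x640 window centered at a
# random (y_center, x_center) inside the border, with anything falling outside
# the image zero-filled by pad_to_bounding_box.
#
#   image, boxes, labels, label_weights = random_square_crop_by_scale(
#       image, boxes, labels, label_weights)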
def random_scale_crop_and_pad_to_square(
image,
boxes,
labels,
label_weights,
masks=None,
keypoints=None,
scale_min=0.1,
scale_max=2.0,
output_size=512,
resize_method=tf.image.ResizeMethod.BILINEAR,
seed=None):
"""Randomly scale, crop, and then pad an image to fixed square dimensions.
Randomly scale, crop, and then pad an image to the desired square output
dimensions. Specifically, this method first samples a random_scale factor
from a uniform distribution between scale_min and scale_max, and then resizes
the image such that its maximum dimension is (output_size * random_scale).
Secondly, a square output_size crop is extracted from the resized image
(note, this will only occur when random_scale > 1.0). Lastly, the cropped
region is padded to the desired square output_size, by filling with zeros.
The augmentation is borrowed from [1]
[1]: https://arxiv.org/abs/1911.09070
Args:
image: rank 3 float32 tensor containing 1 image ->
[height, width, channels].
boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4]. Boxes
are in normalized form meaning their coordinates vary between [0, 1]. Each
row is in the form of [ymin, xmin, ymax, xmax]. Boxes on the crop boundary
are clipped to the boundary and boxes falling outside the crop are
ignored.
labels: rank 1 int32 tensor containing the object classes.
label_weights: float32 tensor of shape [num_instances] representing the
weight for each box.
masks: (optional) rank 3 float32 tensor with shape [num_instances, height,
width] containing instance masks. The masks are of the same height, width
as the input `image`.
keypoints: (optional) rank 3 float32 tensor with shape [num_instances,
num_keypoints, 2]. The keypoints are in y-x normalized coordinates.
scale_min: float, the minimum value for the random scale factor.
scale_max: float, the maximum value for the random scale factor.
output_size: int, the desired (square) output image size.
resize_method: tf.image.ResizeMethod, resize method to use when scaling the
input images.
seed: random seed.
Returns:
image: image which is the same rank as input image.
boxes: boxes which is the same rank as input boxes.
Boxes are in normalized form.
labels: new labels.
label_weights: rank 1 float32 tensor with shape [num_instances].
masks: rank 3 float32 tensor with shape [num_instances, height, width]
containing instance masks.
"""
img_shape = tf.shape(image)
input_height, input_width = img_shape[0], img_shape[1]
random_scale = tf.random_uniform([], scale_min, scale_max, seed=seed)
# Compute the scaled height and width from the random scale.
max_input_dim = tf.cast(tf.maximum(input_height, input_width), tf.float32)
input_ar_y = tf.cast(input_height, tf.float32) / max_input_dim
input_ar_x = tf.cast(input_width, tf.float32) / max_input_dim
scaled_height = tf.cast(random_scale * output_size * input_ar_y, tf.int32)
scaled_width = tf.cast(random_scale * output_size * input_ar_x, tf.int32)
# Compute the offsets:
offset_y = tf.cast(scaled_height - output_size, tf.float32)
offset_x = tf.cast(scaled_width - output_size, tf.float32)
offset_y = tf.maximum(0.0, offset_y) * tf.random_uniform([], 0, 1, seed=seed)
offset_x = tf.maximum(0.0, offset_x) * tf.random_uniform([], 0, 1, seed=seed)
offset_y = tf.cast(offset_y, tf.int32)
offset_x = tf.cast(offset_x, tf.int32)
# Scale, crop, and pad the input image.
scaled_image = tf.image.resize_images(
image, [scaled_height, scaled_width], method=resize_method)
scaled_image = scaled_image[offset_y:offset_y + output_size,
offset_x:offset_x + output_size, :]
output_image = tf.image.pad_to_bounding_box(scaled_image, 0, 0, output_size,
output_size)
# Update the boxes.
new_window = tf.cast(
tf.stack([offset_y, offset_x,
offset_y + output_size, offset_x + output_size]),
dtype=tf.float32)
new_window /= tf.cast(
tf.stack([scaled_height, scaled_width, scaled_height, scaled_width]),
dtype=tf.float32)
boxlist = box_list.BoxList(boxes)
boxlist = box_list_ops.change_coordinate_frame(boxlist, new_window)
boxlist, indices = box_list_ops.prune_completely_outside_window(
boxlist, [0.0, 0.0, 1.0, 1.0])
boxlist = box_list_ops.clip_to_window(
boxlist, [0.0, 0.0, 1.0, 1.0], filter_nonoverlapping=False)
return_values = [output_image, boxlist.get(),
tf.gather(labels, indices),
tf.gather(label_weights, indices)]
if masks is not None:
new_masks = tf.expand_dims(masks, -1)
new_masks = tf.image.resize_images(
new_masks, [scaled_height, scaled_width], method=resize_method)
new_masks = new_masks[:, offset_y:offset_y + output_size,
offset_x:offset_x + output_size, :]
new_masks = tf.image.pad_to_bounding_box(
new_masks, 0, 0, output_size, output_size)
new_masks = tf.squeeze(new_masks, [-1])
return_values.append(tf.gather(new_masks, indices))
if keypoints is not None:
keypoints = tf.gather(keypoints, indices)
keypoints = keypoint_ops.change_coordinate_frame(keypoints, new_window)
keypoints = keypoint_ops.prune_outside_window(
keypoints, [0.0, 0.0, 1.0, 1.0])
return_values.append(keypoints)
return return_values
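# Editor's worked example (not part of the original module): with
# output_size=512 and random_scale=1.5, a 400x300 image (max dimension 400)
# has aspect factors (1.0, 0.75), so it is resized to
# (1.5 * 512, 1.5 * 512 * 0.75) = 768x576. Offsets are then drawn from
# [0, 768 - 512] and [0, 576 - 512], a 512x512 window is cropped, and the
# result is zero-padded to 512x512 when the scaled image is smaller.
#
#   image, boxes, labels, label_weights = random_scale_crop_and_pad_to_square(
#       image, boxes, labels, label_weights, output_size=512)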
def get_default_func_arg_map(include_label_weights=True,
include_label_confidences=False,
include_multiclass_scores=False,
include_instance_masks=False,
include_keypoints=False,
include_keypoint_visibilities=False,
include_dense_pose=False):
"""Returns the default mapping from a preprocessor function to its args.
Args:
include_label_weights: If True, preprocessing functions will modify the
label weights, too.
include_label_confidences: If True, preprocessing functions will modify the
label confidences, too.
include_multiclass_scores: If True, preprocessing functions will modify the
multiclass scores, too.
include_instance_masks: If True, preprocessing functions will modify the
instance masks, too.
include_keypoints: If True, preprocessing functions will modify the
keypoints, too.
include_keypoint_visibilities: If True, preprocessing functions will modify
the keypoint visibilities, too.
include_dense_pose: If True, preprocessing functions will modify the
DensePose labels, too.
Returns:
A map from preprocessing functions to the arguments they receive.
"""
groundtruth_label_weights = None
if include_label_weights:
groundtruth_label_weights = (
fields.InputDataFields.groundtruth_weights)
groundtruth_label_confidences = None
if include_label_confidences:
groundtruth_label_confidences = (
fields.InputDataFields.groundtruth_confidences)
multiclass_scores = None
if include_multiclass_scores:
multiclass_scores = (fields.InputDataFields.multiclass_scores)
groundtruth_instance_masks = None
if include_instance_masks:
groundtruth_instance_masks = (
fields.InputDataFields.groundtruth_instance_masks)
groundtruth_keypoints = None
if include_keypoints:
groundtruth_keypoints = fields.InputDataFields.groundtruth_keypoints
groundtruth_keypoint_visibilities = None
if include_keypoint_visibilities:
groundtruth_keypoint_visibilities = (
fields.InputDataFields.groundtruth_keypoint_visibilities)
groundtruth_dp_num_points = None
groundtruth_dp_part_ids = None
groundtruth_dp_surface_coords = None
if include_dense_pose:
groundtruth_dp_num_points = (
fields.InputDataFields.groundtruth_dp_num_points)
groundtruth_dp_part_ids = (
fields.InputDataFields.groundtruth_dp_part_ids)
groundtruth_dp_surface_coords = (
fields.InputDataFields.groundtruth_dp_surface_coords)
prep_func_arg_map = {
normalize_image: (fields.InputDataFields.image,),
random_horizontal_flip: (
fields.InputDataFields.image,
fields.InputDataFields.groundtruth_boxes,
groundtruth_instance_masks,
groundtruth_keypoints,
groundtruth_keypoint_visibilities,
groundtruth_dp_part_ids,
groundtruth_dp_surface_coords,
),
random_vertical_flip: (
fields.InputDataFields.image,
fields.InputDataFields.groundtruth_boxes,
groundtruth_instance_masks,
groundtruth_keypoints,
),
random_rotation90: (
fields.InputDataFields.image,
fields.InputDataFields.groundtruth_boxes,
groundtruth_instance_masks,
groundtruth_keypoints,
),
random_pixel_value_scale: (fields.InputDataFields.image,),
random_image_scale: (
fields.InputDataFields.image,
groundtruth_instance_masks,
),
random_rgb_to_gray: (fields.InputDataFields.image,),
random_adjust_brightness: (fields.InputDataFields.image,),
random_adjust_contrast: (fields.InputDataFields.image,),
random_adjust_hue: (fields.InputDataFields.image,),
random_adjust_saturation: (fields.InputDataFields.image,),
random_distort_color: (fields.InputDataFields.image,),
random_jitter_boxes: (fields.InputDataFields.groundtruth_boxes,),
random_crop_image:
(fields.InputDataFields.image,
fields.InputDataFields.groundtruth_boxes,
fields.InputDataFields.groundtruth_classes,
groundtruth_label_weights, groundtruth_label_confidences,
multiclass_scores, groundtruth_instance_masks, groundtruth_keypoints,
groundtruth_keypoint_visibilities, groundtruth_dp_num_points,
groundtruth_dp_part_ids, groundtruth_dp_surface_coords),
random_pad_image:
(fields.InputDataFields.image,
fields.InputDataFields.groundtruth_boxes, groundtruth_instance_masks,
groundtruth_keypoints, groundtruth_dp_surface_coords),
random_absolute_pad_image:
(fields.InputDataFields.image,
fields.InputDataFields.groundtruth_boxes, groundtruth_instance_masks,
groundtruth_keypoints, groundtruth_dp_surface_coords),
random_crop_pad_image: (fields.InputDataFields.image,
fields.InputDataFields.groundtruth_boxes,
fields.InputDataFields.groundtruth_classes,
groundtruth_label_weights,
groundtruth_label_confidences, multiclass_scores),
random_crop_to_aspect_ratio: (
fields.InputDataFields.image,
fields.InputDataFields.groundtruth_boxes,
fields.InputDataFields.groundtruth_classes,
groundtruth_label_weights,
groundtruth_label_confidences,
multiclass_scores,
groundtruth_instance_masks,
groundtruth_keypoints,
),
random_pad_to_aspect_ratio: (
fields.InputDataFields.image,
fields.InputDataFields.groundtruth_boxes,
groundtruth_instance_masks,
groundtruth_keypoints,
),
random_black_patches: (fields.InputDataFields.image,),
random_jpeg_quality: (fields.InputDataFields.image,),
random_downscale_to_target_pixels: (
fields.InputDataFields.image,
groundtruth_instance_masks,
),
random_patch_gaussian: (fields.InputDataFields.image,),
autoaugment_image: (
fields.InputDataFields.image,
fields.InputDataFields.groundtruth_boxes,
),
retain_boxes_above_threshold: (
fields.InputDataFields.groundtruth_boxes,
fields.InputDataFields.groundtruth_classes,
groundtruth_label_weights,
groundtruth_label_confidences,
multiclass_scores,
groundtruth_instance_masks,
groundtruth_keypoints,
),
drop_label_probabilistically: (
fields.InputDataFields.groundtruth_boxes,
fields.InputDataFields.groundtruth_classes,
groundtruth_label_weights,
groundtruth_label_confidences,
multiclass_scores,
groundtruth_instance_masks,
groundtruth_keypoints,
),
remap_labels: (fields.InputDataFields.groundtruth_classes,),
image_to_float: (fields.InputDataFields.image,),
random_resize_method: (fields.InputDataFields.image,),
resize_to_range: (
fields.InputDataFields.image,
groundtruth_instance_masks,
),
resize_to_min_dimension: (
fields.InputDataFields.image,
groundtruth_instance_masks,
),
scale_boxes_to_pixel_coordinates: (
fields.InputDataFields.image,
fields.InputDataFields.groundtruth_boxes,
groundtruth_keypoints,
),
resize_image: (
fields.InputDataFields.image,
groundtruth_instance_masks,
),
subtract_channel_mean: (fields.InputDataFields.image,),
one_hot_encoding: (fields.InputDataFields.groundtruth_image_classes,),
rgb_to_gray: (fields.InputDataFields.image,),
random_self_concat_image:
(fields.InputDataFields.image,
fields.InputDataFields.groundtruth_boxes,
fields.InputDataFields.groundtruth_classes,
groundtruth_label_weights, groundtruth_label_confidences,
multiclass_scores),
ssd_random_crop: (fields.InputDataFields.image,
fields.InputDataFields.groundtruth_boxes,
fields.InputDataFields.groundtruth_classes,
groundtruth_label_weights,
groundtruth_label_confidences, multiclass_scores,
groundtruth_instance_masks, groundtruth_keypoints),
ssd_random_crop_pad: (fields.InputDataFields.image,
fields.InputDataFields.groundtruth_boxes,
fields.InputDataFields.groundtruth_classes,
groundtruth_label_weights,
groundtruth_label_confidences, multiclass_scores),
ssd_random_crop_fixed_aspect_ratio:
(fields.InputDataFields.image,
fields.InputDataFields.groundtruth_boxes,
fields.InputDataFields.groundtruth_classes,
groundtruth_label_weights, groundtruth_label_confidences,
multiclass_scores, groundtruth_instance_masks, groundtruth_keypoints
),
ssd_random_crop_pad_fixed_aspect_ratio: (
fields.InputDataFields.image,
fields.InputDataFields.groundtruth_boxes,
fields.InputDataFields.groundtruth_classes,
groundtruth_label_weights,
groundtruth_label_confidences,
multiclass_scores,
groundtruth_instance_masks,
groundtruth_keypoints,
),
convert_class_logits_to_softmax: (multiclass_scores,),
random_square_crop_by_scale:
(fields.InputDataFields.image,
fields.InputDataFields.groundtruth_boxes,
fields.InputDataFields.groundtruth_classes,
groundtruth_label_weights, groundtruth_instance_masks,
groundtruth_keypoints),
random_scale_crop_and_pad_to_square:
(fields.InputDataFields.image,
fields.InputDataFields.groundtruth_boxes,
fields.InputDataFields.groundtruth_classes,
groundtruth_label_weights, groundtruth_instance_masks,
groundtruth_keypoints),
}
return prep_func_arg_map
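# A minimal usage sketch (hypothetical options list): requesting instance
# masks here means every tensor_dict fed to preprocess() below must also
# carry fields.InputDataFields.groundtruth_instance_masks.
#
#   func_arg_map = get_default_func_arg_map(include_instance_masks=True)
#   options = [(random_horizontal_flip, {})]
#   tensor_dict = preprocess(tensor_dict, options, func_arg_map=func_arg_map)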
def preprocess(tensor_dict,
preprocess_options,
func_arg_map=None,
preprocess_vars_cache=None):
"""Preprocess images and bounding boxes.
  Various types of preprocessing are applied based on the preprocess_options
  list, e.g. "crop image" (affects image and possibly boxes), "white balance
  image" (affects only image), etc. If preprocess_options is empty, no
  preprocessing is done.
Args:
tensor_dict: dictionary that contains images, boxes, and can contain other
things as well.
                 images -> rank 4 float32 tensor containing
                           1 image -> [1, height, width, 3],
                           with pixel values varying between [0, 1].
                 boxes -> rank 2 float32 tensor containing
                          the bounding boxes -> [N, 4].
                          Boxes are in normalized form, meaning
                          their coordinates vary between [0, 1].
                          Each row is of the form
                          [ymin, xmin, ymax, xmax].
preprocess_options: It is a list of tuples, where each tuple contains a
function and a dictionary that contains arguments and
their values.
func_arg_map: mapping from preprocessing functions to arguments that they
expect to receive and return.
preprocess_vars_cache: PreprocessorCache object that records previously
performed augmentations. Updated in-place. If this
function is called multiple times with the same
non-null cache, it will perform deterministically.
Returns:
    tensor_dict: the dictionary containing the preprocessed images, bounding
                 boxes, etc.
Raises:
ValueError: (a) If the functions passed to Preprocess
are not in func_arg_map.
(b) If the arguments that a function needs
do not exist in tensor_dict.
(c) If image in tensor_dict is not rank 4
"""
if func_arg_map is None:
func_arg_map = get_default_func_arg_map()
  # Changes the images to image (rank 4 to rank 3), since the functions
  # receive a rank 3 tensor for the image.
if fields.InputDataFields.image in tensor_dict:
images = tensor_dict[fields.InputDataFields.image]
if len(images.get_shape()) != 4:
raise ValueError('images in tensor_dict should be rank 4')
image = tf.squeeze(images, axis=0)
tensor_dict[fields.InputDataFields.image] = image
# Preprocess inputs based on preprocess_options
for option in preprocess_options:
func, params = option
if func not in func_arg_map:
raise ValueError('The function %s does not exist in func_arg_map' %
(func.__name__))
arg_names = func_arg_map[func]
for a in arg_names:
if a is not None and a not in tensor_dict:
raise ValueError('The function %s requires argument %s' %
(func.__name__, a))
def get_arg(key):
return tensor_dict[key] if key is not None else None
args = [get_arg(a) for a in arg_names]
if preprocess_vars_cache is not None:
if six.PY2:
# pylint: disable=deprecated-method
arg_spec = inspect.getargspec(func)
# pylint: enable=deprecated-method
else:
arg_spec = inspect.getfullargspec(func)
if 'preprocess_vars_cache' in arg_spec.args:
params['preprocess_vars_cache'] = preprocess_vars_cache
results = func(*args, **params)
if not isinstance(results, (list, tuple)):
results = (results,)
# Removes None args since the return values will not contain those.
arg_names = [arg_name for arg_name in arg_names if arg_name is not None]
for res, arg_name in zip(results, arg_names):
tensor_dict[arg_name] = res
  # Changes the image back to images (rank 3 to rank 4) to be compatible with
  # what we received in the first place.
if fields.InputDataFields.image in tensor_dict:
image = tensor_dict[fields.InputDataFields.image]
images = tf.expand_dims(image, 0)
tensor_dict[fields.InputDataFields.image] = images
return tensor_dict
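# A minimal end-to-end sketch (assumed toy shapes, not from the original
# file): images must be rank 4 [1, height, width, 3] with values in [0, 1],
# and boxes rank 2 [N, 4] in normalized [ymin, xmin, ymax, xmax] form.
#
#   tensor_dict = {
#       fields.InputDataFields.image:
#           tf.random_uniform([1, 100, 100, 3]),
#       fields.InputDataFields.groundtruth_boxes:
#           tf.constant([[0.1, 0.1, 0.5, 0.5]], tf.float32),
#   }
#   tensor_dict = preprocess(tensor_dict, [(random_horizontal_flip, {})])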
|
tombstone/models
|
research/object_detection/core/preprocessor.py
|
Python
|
apache-2.0
| 192,656
|
[
"Gaussian"
] |
72ee6610f299158ec79c72d3dde8ebb5a1062ce9523f0b999d178f2608be5b08
|
#!/usr/bin/python
#
# Created on Aug 25, 2016
# @author: Gaurav Rastogi (grastogi@avinetworks.com)
# Eric Anderson (eanderson@avinetworks.com)
# module_check: supported
#
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: avi_poolgroupdeploymentpolicy
author: Gaurav Rastogi (grastogi@avinetworks.com)
short_description: Module for setup of PoolGroupDeploymentPolicy Avi RESTful Object
description:
- This module is used to configure PoolGroupDeploymentPolicy object
- more examples at U(https://github.com/avinetworks/devops)
requirements: [ avisdk ]
version_added: "2.4"
options:
state:
description:
- The state that should be applied on the entity.
default: present
choices: ["absent","present"]
auto_disable_old_prod_pools:
description:
- It will automatically disable old production pools once there is a new production candidate.
- Default value when not specified in API or module is interpreted by Avi Controller as True.
cloud_ref:
description:
- It is a reference to an object of type cloud.
description:
description:
- User defined description for the object.
evaluation_duration:
description:
- Duration of evaluation period for automatic deployment.
- Allowed values are 60-86400.
- Default value when not specified in API or module is interpreted by Avi Controller as 300.
name:
description:
- The name of the pool group deployment policy.
required: true
rules:
description:
- List of pgdeploymentrule.
scheme:
description:
- Deployment scheme.
- Enum options - BLUE_GREEN, CANARY.
- Default value when not specified in API or module is interpreted by Avi Controller as BLUE_GREEN.
target_test_traffic_ratio:
description:
- Target traffic ratio before pool is made production.
- Allowed values are 1-100.
- Default value when not specified in API or module is interpreted by Avi Controller as 100.
tenant_ref:
description:
- It is a reference to an object of type tenant.
test_traffic_ratio_rampup:
description:
- Ratio of the traffic that is sent to the pool under test.
- Test ratio of 100 means blue green.
- Allowed values are 1-100.
- Default value when not specified in API or module is interpreted by Avi Controller as 100.
url:
description:
- Avi controller URL of the object.
uuid:
description:
- Uuid of the pool group deployment policy.
webhook_ref:
description:
- Webhook configured with url that avi controller will pass back information about pool group, old and new pool information and current deployment
- rule results.
- It is a reference to an object of type webhook.
- Field introduced in 17.1.1.
extends_documentation_fragment:
- avi
'''
EXAMPLES = """
- name: Example to create PoolGroupDeploymentPolicy object
avi_poolgroupdeploymentpolicy:
controller: 10.10.25.42
username: admin
password: something
state: present
name: sample_poolgroupdeploymentpolicy
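# A second, hypothetical task: the same object can be removed by setting
# state to absent (see the state option choices above).
- name: Example to delete PoolGroupDeploymentPolicy object
    avi_poolgroupdeploymentpolicy:
      controller: 10.10.25.42
      username: admin
      password: something
      state: absent
      name: sample_poolgroupdeploymentpolicy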
"""
RETURN = '''
obj:
description: PoolGroupDeploymentPolicy (api/poolgroupdeploymentpolicy) object
returned: success, changed
type: dict
'''
from ansible.module_utils.basic import AnsibleModule
try:
from ansible.module_utils.avi import (
avi_common_argument_spec, HAS_AVI, avi_ansible_api)
except ImportError:
HAS_AVI = False
def main():
argument_specs = dict(
state=dict(default='present',
choices=['absent', 'present']),
auto_disable_old_prod_pools=dict(type='bool',),
cloud_ref=dict(type='str',),
description=dict(type='str',),
evaluation_duration=dict(type='int',),
name=dict(type='str', required=True),
rules=dict(type='list',),
scheme=dict(type='str',),
target_test_traffic_ratio=dict(type='int',),
tenant_ref=dict(type='str',),
test_traffic_ratio_rampup=dict(type='int',),
url=dict(type='str',),
uuid=dict(type='str',),
webhook_ref=dict(type='str',),
)
argument_specs.update(avi_common_argument_spec())
module = AnsibleModule(
argument_spec=argument_specs, supports_check_mode=True)
if not HAS_AVI:
return module.fail_json(msg=(
'Avi python API SDK (avisdk>=17.1) is not installed. '
'For more details visit https://github.com/avinetworks/sdk.'))
return avi_ansible_api(module, 'poolgroupdeploymentpolicy',
set([]))
if __name__ == '__main__':
main()
|
e-gob/plataforma-kioscos-autoatencion
|
scripts/ansible-play/.venv/lib/python2.7/site-packages/ansible/modules/network/avi/avi_poolgroupdeploymentpolicy.py
|
Python
|
bsd-3-clause
| 5,648
|
[
"VisIt"
] |
d8ebf4582e7edfd107b7ac6db62606b331bb4664b05811d48fe63188cd9a4bb4
|
#!/usr/local/bin/env python
"""
Test various utility functions.
"""
#=============================================================================================
# GLOBAL IMPORTS
#=============================================================================================
import textwrap
import openmoltools as omt
from schema import Schema
from openmmtools import testsystems
from nose import tools
from yank.utils import *
#=============================================================================================
# TESTING FUNCTIONS
#=============================================================================================
def test_is_iterable_container():
"""Test utility function not_iterable_container()."""
assert is_iterable_container(3) == False
assert is_iterable_container('ciao') == False
assert is_iterable_container([1, 2, 3]) == True
assert is_iterable_container(CombinatorialLeaf([1, 2, 3])) == True
def test_set_tree_path():
"""Test getting and setting of CombinatorialTree paths."""
test = CombinatorialTree({'a': 2})
test_nested = CombinatorialTree({'a': {'b': 2}})
test['a'] = 3
assert test == {'a': 3}
test_nested[('a', 'b')] = 3
assert test_nested == {'a': {'b': 3}}
test_nested[('a',)] = 5
assert test_nested == {'a': 5}
def test_find_leaves():
"""Test CombinatorialTree._find_leaves()."""
simple_tree = CombinatorialTree({'simple': {'scalar': 1,
'vector': [2, 3, 4],
'nested': {
'leaf': ['a', 'b', 'c']}}})
leaf_paths, leaf_vals = simple_tree._find_leaves()
print(leaf_paths)
assert all(leaf_path in [('simple', 'scalar'), ('simple', 'vector'),
('simple', 'nested', 'leaf')]
for leaf_path in leaf_paths)
assert all(leaf_val in [1, [2, 3, 4], ['a', 'b', 'c']]
for leaf_val in leaf_vals)
def test_find_combinatorial_leaves():
"""Test CombinatorialTree._find_combinatorial_leaves()."""
simple_tree = CombinatorialTree({'simple': {
'scalar': 1,
'vector': CombinatorialLeaf([2, 3, 4]),
'nested': {
'leaf': ['a', 'b', 'c'],
'comb-leaf': CombinatorialLeaf(['d', 'e'])}}})
leaf_paths, leaf_vals = simple_tree._find_combinatorial_leaves()
# Paths must be in alphabetical order with their associated values
assert leaf_paths == (('simple', 'nested', 'comb-leaf'), ('simple', 'vector'))
assert leaf_vals == (['d', 'e'], [2, 3, 4])
def test_expand_tree():
"""Test CombinatorialTree generators."""
simple_tree = CombinatorialTree({'simple': {'scalar': 1,
'vector': CombinatorialLeaf([2, 3, 4]),
'nested': {
'leaf': ['d', 'e'],
'combleaf': CombinatorialLeaf(['a', 'b', 'c'])}}})
result = [{'simple': {'scalar': 1, 'vector': 2, 'nested': {'leaf': ['d', 'e'], 'combleaf': 'a'}}},
{'simple': {'scalar': 1, 'vector': 3, 'nested': {'leaf': ['d', 'e'], 'combleaf': 'a'}}},
{'simple': {'scalar': 1, 'vector': 4, 'nested': {'leaf': ['d', 'e'], 'combleaf': 'a'}}},
{'simple': {'scalar': 1, 'vector': 2, 'nested': {'leaf': ['d', 'e'], 'combleaf': 'b'}}},
{'simple': {'scalar': 1, 'vector': 3, 'nested': {'leaf': ['d', 'e'], 'combleaf': 'b'}}},
{'simple': {'scalar': 1, 'vector': 4, 'nested': {'leaf': ['d', 'e'], 'combleaf': 'b'}}},
{'simple': {'scalar': 1, 'vector': 2, 'nested': {'leaf': ['d', 'e'], 'combleaf': 'c'}}},
{'simple': {'scalar': 1, 'vector': 3, 'nested': {'leaf': ['d', 'e'], 'combleaf': 'c'}}},
{'simple': {'scalar': 1, 'vector': 4, 'nested': {'leaf': ['d', 'e'], 'combleaf': 'c'}}}]
assert result == [exp for exp in simple_tree]
# Test named_combinations generator using either order to account for generator randomness
expected_names = {'a_2', 'a_3', 'a_4', 'b_2', 'b_3', 'b_4', 'c_2', 'c_3', 'c_4'}
assert expected_names == set([name for name, _ in simple_tree.named_combinations(separator='_',
max_name_length=3)])
# Test maximum length, similar names and special characters
long_tree = CombinatorialTree({'key1': CombinatorialLeaf(['th#*&^isnameistoolong1',
'th#*&^isnameistoolong2']),
'key2': CombinatorialLeaf(['test1', 'test2'])})
expected_names = {'thisn-test', 'thisn-test-2', 'thisn-test-3', 'thisn-test-4'}
assert expected_names == set([name for name, _ in long_tree.named_combinations(separator='-',
max_name_length=10)])
# Test file paths are handled correctly
data_dir = get_data_filename(os.path.join('tests', 'data'))
abl = os.path.join(data_dir, 'abl-imatinib-explicit', '2HYY-pdbfixer.pdb')
benzene = os.path.join(data_dir, 'benzene-toluene-explicit', 'benzene.tripos.mol2')
long_tree = CombinatorialTree({'key1': CombinatorialLeaf([abl, benzene]),
'key2': CombinatorialLeaf([benzene, benzene, 'notapath'])})
expected_names = {'2HYYpdbfixer-benzene', '2HYYpdbfixer-benzene-2', '2HYYpdbfixer-notapath',
'benzene-benzene', 'benzene-benzene-2', 'benzene-notapath'}
assert expected_names == set([name for name, _ in long_tree.named_combinations(separator='-',
max_name_length=25)])
def test_expand_id_nodes():
"""CombinatorialTree.expand_id_nodes()"""
d = {'molecules':
{'mol1': {'mol_value': CombinatorialLeaf([1, 2])},
'mol2': {'mol_value': CombinatorialLeaf([3, 4])}},
'systems':
{'sys1': {'molecules': 'mol1'},
'sys2': {'molecules': CombinatorialLeaf(['mol1', 'mol2'])},
'sys3': {'prmtopfile': 'mysystem.prmtop'}}}
t = CombinatorialTree(d).expand_id_nodes('molecules', [('systems', '*', 'molecules')])
assert t['molecules'] == {'mol1_1': {'mol_value': 1}, 'mol1_2': {'mol_value': 2},
'mol2_3': {'mol_value': 3}, 'mol2_4': {'mol_value': 4}}
assert t['systems'] == {'sys1': {'molecules': CombinatorialLeaf(['mol1_1', 'mol1_2'])},
'sys2': {'molecules': CombinatorialLeaf(['mol1_1', 'mol1_2', 'mol2_3', 'mol2_4'])},
'sys3': {'prmtopfile': 'mysystem.prmtop'}}
def test_topology_serialization():
"""Correct serialization of Topology objects."""
topology = testsystems.AlanineDipeptideImplicit().topology
topology_str = serialize_topology(topology)
deserialized_topology = deserialize_topology(topology_str)
assert mdtraj.Topology.from_openmm(topology) == deserialized_topology
def test_generate_signature_schema():
"""Test generate_signature_schema() function."""
def f(a, b, camelCase=True, none=None, quantity=3.0*unit.angstroms):
pass
f_schema = generate_signature_schema(f)
assert len(f_schema) == 3
for k in f_schema.keys():
assert isinstance(k, Optional)
# Remove Optional() marker for comparison
stripped_schema = {k._schema: v for k, v in f_schema.items() if k._schema != 'quantity'}
assert {'camel_case': bool, 'none': object} == stripped_schema
# Check conversion
f_schema = Schema(f_schema)
assert f_schema.validate({'quantity': '5*angstrom'}) == {'quantity': 5*unit.angstrom}
# Check update
optional_instance = Optional('camel_case')
updated_schema = generate_signature_schema(f, update_keys={'none': float, optional_instance: int},
exclude_keys={'quantity'})
assert len(updated_schema) == 2
assert updated_schema['none'] == float
assert updated_schema[optional_instance] == int
def test_get_keyword_args():
"""Test get_keyword_args() function."""
def f(a, b, c=True, d=3.0):
pass
expected = {'c': True, 'd': 3.0}
assert expected == get_keyword_args(f)
def test_validate_parameters():
"""Test validate_parameters function."""
template_pars = {
'bool': True,
'int': 2,
'float': 1e4,
'str': 'default',
'length': 2.0 * unit.nanometers,
'time': 2.0 * unit.femtoseconds
}
input_pars = {
'bool': False,
'int': 4,
'float': 3.0,
'str': 'input',
'length': 1.0 * unit.nanometers,
'time': 1.0 * unit.femtoseconds
}
# Accept correct parameters
assert input_pars == validate_parameters(input_pars, template_pars)
# Convert float, length and time
convert_pars = {
'bool': True,
'int': 3.0,
'length': '1.0*nanometers',
'time': '1.0*femtoseconds'
}
convert_pars = validate_parameters(convert_pars, template_pars,
process_units_str=True, float_to_int=True)
assert type(convert_pars['bool']) is bool
assert type(convert_pars['int']) is int
assert convert_pars['length'] == 1.0 * unit.nanometers
assert convert_pars['time'] == 1.0 * unit.femtoseconds
# If check_unknown flag is not True it should not raise an error
    validate_parameters({'unknown': 0}, template_pars)
# Test special conversion
def convert_length(length):
return str(length)
special_conv = {'length': convert_length}
convert_pars = {'length': '1.0*nanometers'}
convert_pars = validate_parameters(convert_pars, template_pars, process_units_str=True,
special_conversions=special_conv)
assert convert_pars['length'] == '1.0*nanometers'
@tools.raises(ValueError)
def test_incompatible_parameters():
"""Check that validate_parameters raises exception with unknown parameter."""
template_pars = {'int': 3}
wrong_pars = {'int': 3.0}
validate_parameters(wrong_pars, template_pars)
@tools.raises(TypeError)
def test_unknown_parameters():
"""Test that validate_parameters() raises exception with unknown parameter."""
template_pars = {'known_par': 3}
wrong_pars = {'unknown_par': 3}
validate_parameters(wrong_pars, template_pars, check_unknown=True)
def test_underscore_to_camelcase():
"""Test underscore_to_camelCase() conversion function."""
cases = ['', '__', 'foo', 'foo_bar', '_foo_bar_', '__foo_bar__', '__foo__bar_']
expected = ['', '__', 'foo', 'fooBar', '_fooBar_', '__fooBar__', '__fooBar_']
for exp, case in zip(expected, cases):
assert exp == underscore_to_camelcase(case)
def test_quantity_from_string():
"""Test the quantity from string function to ensure output is as expected"""
tests = [
# (string, expected Unit)
('3', 3.0), # Handle basic float
('meter', unit.meter), # Handle basic unit object
('300 * kelvin', 300*unit.kelvin), # Handle standard Quantity
('" 0.3 * kilojoules_per_mole / watt**3"', 0.3*unit.kilojoules_per_mole/unit.watt**3), # Handle division, exponent, nested string
('1*meter / (4*second)', 0.25*unit.meter/unit.second), # Handle compound math and parenthesis
        ('1 * watt**2 /((1* kelvin)**3 / gram)', 1*(unit.watt**2)*(unit.gram)/(unit.kelvin**3))  # Handle everything
]
assert all(expected == quantity_from_string(passed_string) for passed_string, expected in tests)
def test_TLeap_script():
"""Test TLeap script creation."""
expected_script = """
source oldff/leaprc.ff99SBildn
source leaprc.gaff
receptor = loadPdb receptor.pdbfixer.pdb
loadAmberParams ligand.gaff.frcmod
ligand = loadMol2 path/to/ligand.gaff.mol2
transform ligand {{ 1 0 0 6} { 0 -1 0 0} { 0 0 1 0} { 0 0 0 1}}
complex = combine { receptor ligand }
solvateBox complex TIP3PBOX 10.0 iso
check complex
charge complex
# New section
saveAmberParm complex complex.prmtop complex.inpcrd
savePDB complex complex.pdb
solvateBox ligand TIP3PBOX 10.0 iso
saveAmberParm ligand solvent.prmtop solvent.inpcrd
savePDB ligand solvent.pdb
quit
"""
expected_script = textwrap.dedent(expected_script[1:]) # delete first \n char
tleap = TLeap()
tleap.load_parameters('oldff/leaprc.ff99SBildn', 'leaprc.gaff')
tleap.load_group(name='receptor', file_path='receptor.pdbfixer.pdb')
tleap.load_parameters('ligand.gaff.frcmod')
tleap.load_parameters('ligand.gaff.frcmod') # tLeap should not load this twice
tleap.load_group(name='ligand', file_path='path/to/ligand.gaff.mol2')
tleap.transform('ligand', np.array([[1, 0, 0, 6], [0, -1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]))
tleap.combine('complex', 'receptor', 'ligand')
tleap.solvate(group='complex', water_model='TIP3PBOX', clearance=10.0)
tleap.add_commands('check complex', 'charge complex')
tleap.new_section('New section')
tleap.save_group(group='complex', output_path='complex.prmtop')
tleap.save_group(group='complex', output_path='complex.pdb')
tleap.solvate(group='ligand', water_model='TIP3PBOX', clearance=10.0)
tleap.save_group(group='ligand', output_path='solvent.inpcrd')
tleap.save_group(group='ligand', output_path='solvent.pdb')
assert tleap.script == expected_script
def test_TLeap_export_run():
"""Check that TLeap saves and runs scripts correctly."""
setup_dir = get_data_filename(os.path.join('tests', 'data',
'benzene-toluene-explicit'))
benzene_gaff = os.path.join(setup_dir, 'benzene.gaff.mol2')
benzene_frcmod = os.path.join(setup_dir, 'benzene.frcmod')
tleap = TLeap()
tleap.load_parameters('oldff/leaprc.ff99SB', 'leaprc.gaff')
tleap.load_group(name='benzene', file_path=benzene_gaff)
tleap.load_parameters(benzene_frcmod)
with omt.utils.temporary_directory() as tmp_dir:
output_path = os.path.join(tmp_dir, 'benzene')
tleap.save_group(group='benzene', output_path=output_path + '.prmtop')
export_path = os.path.join(tmp_dir, 'leap.in')
tleap.export_script(export_path)
assert os.path.isfile(export_path)
assert os.path.getsize(export_path) > 0
tleap.run()
assert os.path.isfile(output_path + '.prmtop')
assert os.path.isfile(output_path + '.inpcrd')
assert os.path.getsize(output_path + '.prmtop') > 0
assert os.path.getsize(output_path + '.inpcrd') > 0
assert os.path.isfile(os.path.join(tmp_dir, 'benzene.leap.log'))
|
andrrizzi/yank
|
Yank/tests/test_utils.py
|
Python
|
mit
| 15,068
|
[
"MDTraj"
] |
bbe7aafc214e37a2aee04b879e6a90de76c3d31e6fd15f008b66d55ca70b2f6b
|
# external modules
import numpy as num
# ANUGA modules
import anuga.utilities.log as log
from anuga.config import netcdf_mode_r, netcdf_mode_w, netcdf_mode_a, \
netcdf_float
from generic_asc2dem import generic_asc2dem
def generic_dem2pts(name_in, name_out=None, quantity_name=None,
easting_min=None, easting_max=None,
northing_min=None, northing_max=None,
use_cache=False, verbose=False,):
"""Read raster file from the following NetCDF format (.dem)
Generic function, created from dem2pts
Example:
ncols 3121
nrows 1800
xllcorner 722000
yllcorner 5893000
cellsize 25
NODATA_value -9999
138.3698 137.4194 136.5062 135.5558 ..........
name_in may be a .asc or .dem file to be converted.
Convert to NetCDF pts format which is
points: (Nx2) float array
elevation: N float array
"""
kwargs = {'name_out': name_out,
'quantity_name': quantity_name,
'easting_min': easting_min,
'easting_max': easting_max,
'northing_min': northing_min,
'northing_max': northing_max,
'verbose': verbose}
if use_cache is True:
from caching import cache
result = cache(_generic_dem2pts, name_in, kwargs,
dependencies = [name_in],
verbose = verbose)
else:
        result = _generic_dem2pts(name_in, **kwargs)
return result
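# A minimal usage sketch (hypothetical file names and window): convert a DEM
# raster to a .pts point file, keeping only points inside an easting window.
#
#   generic_dem2pts('terrain.dem', quantity_name='elevation',
#                   easting_min=722000, easting_max=725000, verbose=True)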
def _generic_dem2pts(name_in, name_out=None, quantity_name=None, verbose=False,
easting_min=None, easting_max=None,
northing_min=None, northing_max=None):
"""Read raster from the following NetCDF format (.dem)
Internal function. See public function generic_dem2pts for details.
"""
# FIXME: Can this be written feasibly using write_pts?
import os
from anuga.file.netcdf import NetCDFFile
root = name_in[:-4]
if name_in[-4:] == '.asc':
intermediate = root + '.dem'
if verbose:
log.critical('Preconvert %s from asc to %s' % \
(name_in, intermediate))
        generic_asc2dem(name_in)
name_in = intermediate
elif name_in[-4:] != '.dem':
raise IOError('Input file %s should be of type .asc or .dem.' % name_in)
    if name_out is not None and name_out[-4:] != '.pts':
        raise IOError('Output file %s should be of type .pts.' % name_out)
# Get NetCDF
infile = NetCDFFile(name_in, netcdf_mode_r)
if verbose: log.critical('Reading raster from %s' % (name_in))
ncols = int(infile.ncols)
nrows = int(infile.nrows)
xllcorner = float(infile.xllcorner) # Easting of lower left corner
yllcorner = float(infile.yllcorner) # Northing of lower left corner
cellsize = float(infile.cellsize)
NODATA_value = float(infile.NODATA_value)
dem_elevation = infile.variables[quantity_name]
zone = int(infile.zone)
false_easting = float(infile.false_easting)
false_northing = float(infile.false_northing)
#print ncols, nrows, xllcorner,yllcorner, cellsize, NODATA_value, zone
# Text strings
projection = infile.projection
datum = infile.datum
units = infile.units
#print projection, datum, units
# Get output file
    if name_out is None:
ptsname = root + '.pts'
else:
ptsname = name_out
if verbose: log.critical('Store to NetCDF file %s' % ptsname)
# NetCDF file definition
outfile = NetCDFFile(ptsname, netcdf_mode_w)
# Create new file
outfile.institution = 'Geoscience Australia'
outfile.description = 'NetCDF pts format for compact and portable ' \
'storage of spatial point data'
# Assign default values
if easting_min is None: easting_min = xllcorner
if easting_max is None: easting_max = xllcorner + ncols*cellsize
if northing_min is None: northing_min = yllcorner
if northing_max is None: northing_max = yllcorner + nrows*cellsize
#print easting_min, easting_max, northing_min, northing_max
# Compute offsets to update georeferencing
easting_offset = xllcorner - easting_min
northing_offset = yllcorner - northing_min
# Georeferencing
outfile.zone = zone
outfile.xllcorner = easting_min # Easting of lower left corner
outfile.yllcorner = northing_min # Northing of lower left corner
outfile.false_easting = false_easting
outfile.false_northing = false_northing
outfile.projection = projection
outfile.datum = datum
outfile.units = units
# Grid info (FIXME: probably not going to be used, but heck)
outfile.ncols = ncols
outfile.nrows = nrows
dem_elevation_r = num.reshape(dem_elevation, (nrows, ncols))
totalnopoints = nrows*ncols
#========================================
    # Do the preceding with numpy
#========================================
y = num.arange(nrows,dtype=num.float)
y = yllcorner + (nrows-1)*cellsize - y*cellsize
x = num.arange(ncols,dtype=num.float)
x = xllcorner + x*cellsize
xx,yy = num.meshgrid(x,y)
xx = xx.flatten()
yy = yy.flatten()
flag = num.logical_and(num.logical_and((xx <= easting_max),(xx >= easting_min)),
num.logical_and((yy <= northing_max),(yy >= northing_min)))
dem = dem_elevation[:].flatten()
id = num.where(flag)[0]
xx = xx[id]
yy = yy[id]
dem = dem[id]
clippednopoints = len(dem)
#print clippedpoints
#print xx
#print yy
#print dem
data_flag = dem != NODATA_value
data_id = num.where(data_flag)
xx = xx[data_id]
yy = yy[data_id]
dem = dem[data_id]
nn = clippednopoints - len(dem)
nopoints = len(dem)
if verbose:
log.critical('There are %d values in the raster' % totalnopoints)
log.critical('There are %d values in the clipped raster'
% clippednopoints)
log.critical('There are %d NODATA_values in the clipped raster' % nn)
outfile.createDimension('number_of_points', nopoints)
outfile.createDimension('number_of_dimensions', 2) #This is 2d data
# Variable definitions
outfile.createVariable('points', netcdf_float, ('number_of_points',
'number_of_dimensions'))
outfile.createVariable(quantity_name, netcdf_float, ('number_of_points',))
# Get handles to the variables
points = outfile.variables['points']
elevation = outfile.variables[quantity_name]
points[:,0] = xx - easting_min
points[:,1] = yy - northing_min
elevation[:] = dem
infile.close()
outfile.close()
|
mperignon/anuga-sedtransport
|
file_conversion/generic_dem2pts.py
|
Python
|
gpl-2.0
| 6,790
|
[
"NetCDF"
] |
29ea80cec1253c0cb34ea5b8629bcd0bf8cf1be195177ec09b41dba89a664eea
|
#* This file is part of the MOOSE framework
#* https://www.mooseframework.org
#*
#* All rights reserved, see COPYRIGHT for full restrictions
#* https://github.com/idaholab/moose/blob/master/COPYRIGHT
#*
#* Licensed under LGPL 2.1, please see LICENSE for details
#* https://www.gnu.org/licenses/lgpl-2.1.html
import os
import pandas
from . import message
class MooseDataFrame(object):
"""
A wrapper for handling data from a single csv file.
This utilizes a pandas.DataFrame for storing and accessing CSV data, while
allowing for the file to exist/not-exist.
"""
NOCHANGE = 0
UPDATED = 1
INVALID = 2
OLDFILE = 3
def __init__(self, filename, index=None, run_start_time=None, update=True, peacock_index=False):
self._filename = filename
self._data = pandas.DataFrame()
self._modified = None
self._index = index
self._add_peacock_index = peacock_index
self._run_start_time = run_start_time
if update:
self.update()
@property
def modified(self):
if self._modified is None:
return os.path.getmtime(self._filename)
return self._modified
@property
def exists(self):
return os.path.exists(self._filename)
@property
def filesize(self):
return os.path.getsize(self._filename)
@property
def data(self):
return self._data
@property
def filename(self):
return self._filename
def __getitem__(self, key):
"""
Provides [] access to data.
Args:
key[str|list]: The key(s) to extract.
"""
if self._data.empty:
return pandas.Series()
return self._data[key]
def __contains__(self, key):
"""
Test if a key is stored in the data.
"""
return (key in self.data)
def __bool__(self):
"""
Return False if the data is empty.
"""
return not self._data.empty
def clear(self):
"""
Remove existing data.
"""
self._modified = None
self._data = pandas.DataFrame()
def update(self):
"""
Update with new data.
"""
retcode = MooseDataFrame.NOCHANGE
file_exists = self.exists
if file_exists and (self._run_start_time is not None) and (os.path.getmtime(self._filename) < self._run_start_time):
self.clear()
message.mooseDebug("The csv file {} exists but is old ({}) compared to the run start time ({}).".format(self.filename, os.path.getmtime(self._filename), self._run_start_time), debug=True)
retcode = MooseDataFrame.OLDFILE
elif not file_exists:
self.clear()
message.mooseDebug("The csv file {} does not exist.".format(self._filename))
retcode = MooseDataFrame.INVALID
else:
modified = os.path.getmtime(self._filename)
if modified != self._modified:
retcode = MooseDataFrame.UPDATED
try:
self._modified = modified
self._data = pandas.read_csv(self._filename)
if self._index:
self._data.set_index(self._index, inplace=True)
if self._add_peacock_index:
self._data.insert(0, 'index (Peacock)',
pandas.Series(self._data.index, index=self._data.index))
message.mooseDebug("Reading csv file: {}".format(self._filename))
                except Exception:
                    self.clear()
                    message.mooseDebug("Unable to read file {}; it likely does not contain data.".format(self._filename))
return retcode
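# A minimal usage sketch (hypothetical csv file): poll a postprocessor csv
# and re-read it only when the file's modification time changes.
#
#   df = MooseDataFrame('out.csv', index='time')
#   if df.update() == MooseDataFrame.UPDATED and 'u' in df:
#       print(df['u'])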
|
nuclear-wizard/moose
|
python/mooseutils/MooseDataFrame.py
|
Python
|
lgpl-2.1
| 3,786
|
[
"MOOSE"
] |
22c5942f8d013018182989b67d0bf4c01f910ba1a76b73728b44ee92f158dab0
|
from __future__ import absolute_import, division, print_function
# ----------------------------------------------------------------------------
# Copyright (c) 2013--, scikit-bio development team.
#
# Distributed under the terms of the Modified BSD License.
#
# The full license is in the file COPYING.txt, distributed with this software.
# ----------------------------------------------------------------------------
class BiologicalSequenceError(Exception):
"""General error for biological sequence validation failures."""
pass
class GeneticCodeError(Exception):
"""Base class exception used by the GeneticCode class"""
pass
class GeneticCodeInitError(ValueError, GeneticCodeError):
"""Exception raised by the GeneticCode class upon a bad initialization"""
pass
class InvalidCodonError(KeyError, GeneticCodeError):
"""Exception raised by the GeneticCode class if __getitem__ fails"""
pass
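# A minimal sketch of how the hierarchy composes (hypothetical usage): an
# InvalidCodonError can be caught either as a KeyError or, as below, as the
# shared GeneticCodeError base.
#
#   try:
#       raise InvalidCodonError('not-a-codon')
#   except GeneticCodeError:
#       pass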
|
Kleptobismol/scikit-bio
|
skbio/sequence/_exception.py
|
Python
|
bsd-3-clause
| 932
|
[
"scikit-bio"
] |
1aef83b4796288b4b4cb54c6ee3c13f2af17166ef581fd8ce48903be8a343bc9
|
###########################################################################
#
# This program is part of Zenoss Core, an open source monitoring platform.
# Copyright (C) 2008, Zenoss Inc.
#
# This program is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License version 2 as published by
# the Free Software Foundation.
#
# For complete information please visit: http://www.zenoss.com/oss/
#
###########################################################################
################################
# These variables are overwritten by Zenoss when the ZenPack is exported
# or saved. Do not modify them directly here.
NAME = 'ZenPacks.LearningObjects.PostgresqlMonitor'
VERSION = '1.1'
AUTHOR = 'James S. Martin'
LICENSE = "GPLv2"
NAMESPACE_PACKAGES = ['ZenPacks', 'ZenPacks.LearningObjects']
PACKAGES = ['ZenPacks', 'ZenPacks.LearningObjects', 'ZenPacks.LearningObjects.PostgresqlMonitor']
INSTALL_REQUIRES = []
COMPAT_ZENOSS_VERS = '>=2.2'
PREV_ZENPACK_NAME = ''
# STOP_REPLACEMENTS
################################
# Zenoss will not overwrite any changes you make below here.
from setuptools import setup, find_packages
setup(
# This ZenPack metadata should usually be edited with the Zenoss
# ZenPack edit page. Whenever the edit page is submitted it will
# overwrite the values below (the ones it knows about) with new values.
name = NAME,
version = VERSION,
author = AUTHOR,
license = LICENSE,
# This is the version spec which indicates what versions of Zenoss
# this ZenPack is compatible with
compatZenossVers = COMPAT_ZENOSS_VERS,
# previousZenPackName is a facility for telling Zenoss that the name
# of this ZenPack has changed. If no ZenPack with the current name is
    # installed, then an installed ZenPack with this previous name will be
    # upgraded.
prevZenPackName = PREV_ZENPACK_NAME,
# Indicate to setuptools which namespace packages the zenpack
# participates in
namespace_packages = NAMESPACE_PACKAGES,
# Tell setuptools what packages this zenpack provides.
packages = find_packages(),
# Tell setuptools to figure out for itself which files to include
# in the binary egg when it is built.
include_package_data = True,
# Tell setuptools what non-python files should also be included
# with the binary egg.
package_data = {
        '': ['*.txt', '../COPYRIGHT.txt', '../LICENSE.txt'],
NAME: ['objects/*','skins/*/*','services/*', 'reports/*/*',
'modeler/*/*', 'daemons/*', 'lib/*', 'libexec/*'],
},
# Indicate dependencies on other python modules or ZenPacks. This line
# is modified by zenoss when the ZenPack edit page is submitted. Zenoss
    # tries to add/delete the names it manages at the beginning of this
    # list, so any manual additions should be added to the end.  Things will
    # go poorly if this line is broken into multiple lines or modified too
    # dramatically.
install_requires = INSTALL_REQUIRES,
# Every ZenPack egg must define exactly one zenoss.zenpacks entry point
# of this form.
entry_points = {
'zenoss.zenpacks': '%s = %s' % (NAME, NAME),
},
# All ZenPack eggs must be installed in unzipped form.
zip_safe = False,
)
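# A typical (hypothetical) build step: since every ZenPack egg must be
# installed unzipped, the egg itself is produced with the standard
# setuptools command, e.g.:
#
#   python setup.py bdist_egg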
|
zenoss/ZenPacks.LearningObjects.PostgresqlMonitor
|
setup.py
|
Python
|
gpl-2.0
| 3,322
|
[
"VisIt"
] |
5ac0853f6145493f5a6b0de88797ba1d6999d192680730237d526ff1a329b9f4
|
#
# Copyright (C) 2000-2008 greg Landrum and Rational Discovery LLC
#
""" ID3 Decision Trees
contains an implementation of the ID3 decision tree algorithm
as described in Tom Mitchell's book "Machine Learning"
It relies upon the _Tree.TreeNode_ data structure (or something
with the same API) defined locally to represent the trees
"""
import numpy
from rdkit.ML.DecTree import DecTree
from rdkit.ML.InfoTheory import entropy
def CalcTotalEntropy(examples,nPossibleVals):
""" Calculates the total entropy of the data set (w.r.t. the results)
**Arguments**
- examples: a list (nInstances long) of lists of variable values + instance
values
- nPossibleVals: a list (nVars long) of the number of possible values each variable
can adopt.
**Returns**
a float containing the informational entropy of the data set.
"""
nRes = nPossibleVals[-1]
resList = numpy.zeros(nRes,'i')
for example in examples:
res = example[-1]
resList[res] = resList[res] + 1
return entropy.InfoEntropy(resList)
def GenVarTable(examples,nPossibleVals,vars):
"""Generates a list of variable tables for the examples passed in.
The table for a given variable records the number of times each possible value
of that variable appears for each possible result of the function.
**Arguments**
- examples: a list (nInstances long) of lists of variable values + instance
values
- nPossibleVals: a list containing the number of possible values of
each variable + the number of values of the function.
- vars: a list of the variables to include in the var table
**Returns**
a list of variable result tables. Each table is a Numeric array
which is varValues x nResults
"""
nVars = len(vars)
res = [None]*nVars
nFuncVals = nPossibleVals[-1]
for i in xrange(nVars):
res[i] = numpy.zeros((nPossibleVals[vars[i]],nFuncVals),'i')
for example in examples:
val = int(example[-1])
for i in xrange(nVars):
res[i][int(example[vars[i]]),val] += 1
return res
def ID3(examples,target,attrs,nPossibleVals,depth=0,maxDepth=-1,
**kwargs):
""" Implements the ID3 algorithm for constructing decision trees.
From Mitchell's book, page 56
This is *slightly* modified from Mitchell's book because it supports
multivalued (non-binary) results.
**Arguments**
- examples: a list (nInstances long) of lists of variable values + instance
values
- target: an int
- attrs: a list of ints indicating which variables can be used in the tree
- nPossibleVals: a list containing the number of possible values of
every variable.
- depth: (optional) the current depth in the tree
- maxDepth: (optional) the maximum depth to which the tree
will be grown
**Returns**
a DecTree.DecTreeNode with the decision tree
**NOTE:** This code cannot bootstrap (start from nothing...)
use _ID3Boot_ (below) for that.
"""
varTable = GenVarTable(examples,nPossibleVals,attrs)
tree=DecTree.DecTreeNode(None,'node')
# store the total entropy... in case that is interesting
totEntropy = CalcTotalEntropy(examples,nPossibleVals)
tree.SetData(totEntropy)
#tree.SetExamples(examples)
# the matrix of results for this target:
tMat = GenVarTable(examples,nPossibleVals,[target])[0]
# counts of each result code:
counts = sum(tMat)
nzCounts = numpy.nonzero(counts)[0]
if len(nzCounts) == 1:
# bottomed out because there is only one result code left
# with any counts (i.e. there's only one type of example
# left... this is GOOD!).
res = nzCounts[0]
tree.SetLabel(res)
tree.SetName(str(res))
tree.SetTerminal(1)
elif len(attrs) == 0 or (maxDepth>=0 and depth>=maxDepth):
# Bottomed out: no variables left or max depth hit
# We don't really know what to do here, so
# use the heuristic of picking the most prevalent
# result
v = numpy.argmax(counts)
tree.SetLabel(v)
tree.SetName('%d?'%v)
tree.SetTerminal(1)
else:
# find the variable which gives us the largest information gain
gains = [entropy.InfoGain(x) for x in varTable]
best = attrs[numpy.argmax(gains)]
# remove that variable from the lists of possible variables
nextAttrs = attrs[:]
if not kwargs.get('recycleVars',0):
nextAttrs.remove(best)
# set some info at this node
tree.SetName('Var: %d'%best)
tree.SetLabel(best)
#tree.SetExamples(examples)
tree.SetTerminal(0)
# loop over possible values of the new variable and
# build a subtree for each one
for val in xrange(nPossibleVals[best]):
nextExamples = []
for example in examples:
if example[best] == val:
nextExamples.append(example)
if len(nextExamples) == 0:
# this particular value of the variable has no examples,
# so there's not much sense in recursing.
# This can (and does) happen.
v = numpy.argmax(counts)
tree.AddChild('%d'%v,label=v,data=0.0,isTerminal=1)
else:
# recurse
tree.AddChildNode(ID3(nextExamples,best,nextAttrs,nPossibleVals,depth+1,maxDepth,
**kwargs))
return tree
def ID3Boot(examples,attrs,nPossibleVals,initialVar=None,depth=0,maxDepth=-1,
**kwargs):
""" Bootstrapping code for the ID3 algorithm
see ID3 for descriptions of the arguments
If _initialVar_ is not set, the algorithm will automatically
choose the first variable in the tree (the standard greedy
approach). Otherwise, _initialVar_ will be used as the first
split.
"""
totEntropy = CalcTotalEntropy(examples,nPossibleVals)
varTable = GenVarTable(examples,nPossibleVals,attrs)
tree=DecTree.DecTreeNode(None,'node')
#tree.SetExamples(examples)
tree._nResultCodes = nPossibleVals[-1]
# <perl>you've got to love any language which will let you
# do this much work in a single line :-)</perl>
if initialVar is None:
best = attrs[numpy.argmax([entropy.InfoGain(x) for x in varTable])]
else:
best = initialVar
tree.SetName('Var: %d'%best)
tree.SetData(totEntropy)
tree.SetLabel(best)
tree.SetTerminal(0)
nextAttrs = attrs[:]
if not kwargs.get('recycleVars',0):
nextAttrs.remove(best)
for val in xrange(nPossibleVals[best]):
nextExamples = []
for example in examples:
if example[best] == val:
nextExamples.append(example)
tree.AddChildNode(ID3(nextExamples,best,nextAttrs,nPossibleVals,depth,maxDepth,
**kwargs))
return tree
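# A minimal usage sketch (hypothetical toy data): each example lists its
# variable values with the result code appended, and nPossibleVals ends with
# the number of result codes (two binary variables and a binary result here).
#
#   examples = [[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]]
#   tree = ID3Boot(examples, attrs=[0, 1], nPossibleVals=[2, 2, 2])
#   print(tree.GetName())  # assumes the GetName/SetName accessor pairing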
|
rdkit/rdkit-orig
|
rdkit/ML/DecTree/ID3.py
|
Python
|
bsd-3-clause
| 6,741
|
[
"RDKit"
] |
bffcbad8efedb142865ebed9ff79036e64dbfd30c1bb8fadba0719f0f66261eb
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# This file is part of the SPORCO package. Details of the copyright
# and user license can be found in the 'LICENSE.txt' file distributed
# with the package.
"""
Basis Pursuit DeNoising with Joint Sparsity
===========================================
This example demonstrates the use of class :class:`.bpdn.BPDNJoint` to solve the Basis Pursuit DeNoising (BPDN) problem with joint sparsity via an $ℓ_{2,1}$ norm term
$$\mathrm{argmin}_X \; (1/2) \| D X - S \|_2^2 + \lambda \| X \|_1 + \mu \| X \|_{2,1}$$
where $D$ is the dictionary, $X$ is the sparse representation, and $S$ is the signal to be represented. In this example the BPDN problem is used to estimate the reference sparse representation that generated a signal from a noisy version of the signal.
"""
from __future__ import print_function
from builtins import input
import numpy as np
from sporco.admm import bpdn
from sporco import util
from sporco import plot
"""
Configure problem size, sparsity, and noise level.
"""
N = 32 # Signal size
M = 4*N # Dictionary size
L = 12 # Number of non-zero coefficients in generator
K = 16 # Number of signals
sigma = 0.5 # Noise level
"""
Construct random dictionary, reference random sparse representation, and test signal consisting of the synthesis of the reference sparse representation with additive Gaussian noise.
"""
# Construct random dictionary and random sparse coefficients
np.random.seed(12345)
D = np.random.randn(N, M)
x0 = np.zeros((M, K))
si = np.random.permutation(list(range(0, M-1)))
x0[si[0:L],:] = np.random.randn(L, K)
# Construct reference and noisy signal
s0 = D.dot(x0)
s = s0 + sigma*np.random.randn(N,K)
"""
Set BPDNJoint solver class options.
"""
opt = bpdn.BPDNJoint.Options({'Verbose': False, 'MaxMainIter': 500,
'RelStopTol': 1e-3, 'rho': 10.0,
'AutoRho': {'RsdlTarget': 1.0}})
"""
Select regularization parameters $\lambda, \mu$ by evaluating the error in recovering the sparse representation over a logarithmically spaced grid. (The reference representation is assumed to be known, which is not realistic in a real application.) A function is defined that evaluates the BPDN recovery error for a specified $\lambda, \mu$, and this function is evaluated in parallel by :func:`sporco.util.grid_search`.
"""
# Function computing reconstruction error for (lmbda, mu) pair
def evalerr(prm):
lmbda = prm[0]
mu = prm[1]
b = bpdn.BPDNJoint(D, s, lmbda, mu, opt)
x = b.solve()
return np.sum(np.abs(x-x0))
# Parallel evaluation of error function on lmbda,mu grid
lrng = np.logspace(-4, 0.5, 10)
mrng = np.logspace(0.5, 1.6, 10)
sprm, sfvl, fvmx, sidx = util.grid_search(evalerr, (lrng, mrng))
lmbda = sprm[0]
mu = sprm[1]
print('Minimum ℓ1 error: %5.2f at (𝜆,μ) = (%.2e, %.2e)' % (sfvl, lmbda, mu))
"""
Once the best $\lambda, \mu$ have been determined, run :meth:`.bpdn.BPDNJoint.solve` with verbose display of ADMM iteration statistics.
"""
# Initialise and run BPDNJoint object for best lmbda and mu
opt['Verbose'] = True
b = bpdn.BPDNJoint(D, s, lmbda, mu, opt)
x = b.solve()
print("BPDNJoint solve time: %.2fs" % b.timer.elapsed('solve'))
"""
Plot comparison of reference and recovered representations.
"""
fig = plot.figure(figsize=(6, 8))
plot.subplot(1, 2, 1)
plot.imview(x0, cmap=plot.cm.Blues, title='Reference', fig=fig)
plot.subplot(1, 2, 2)
plot.imview(x, cmap=plot.cm.Blues, title='Reconstruction', fig=fig)
fig.show()
"""
Plot lmbda,mu error surface, functional value, residuals, and rho
"""
its = b.getitstat()
fig = plot.figure(figsize=(15, 10))
ax = fig.add_subplot(2, 2, 1, projection='3d')
ax.locator_params(nbins=5, axis='y')
plot.surf(fvmx, x=np.log10(mrng), y=np.log10(lrng), xlbl='log($\mu$)',
ylbl='log($\lambda$)', zlbl='Error', fig=fig, ax=ax)
plot.subplot(2, 2, 2)
plot.plot(its.ObjFun, xlbl='Iterations', ylbl='Functional', fig=fig)
plot.subplot(2, 2, 3)
plot.plot(np.vstack((its.PrimalRsdl, its.DualRsdl)).T,
ptyp='semilogy', xlbl='Iterations', ylbl='Residual',
lgnd=['Primal', 'Dual'], fig=fig)
plot.subplot(2, 2, 4)
plot.plot(its.Rho, xlbl='Iterations', ylbl='Penalty Parameter', fig=fig)
fig.show()
# Wait for enter on keyboard
input()
|
bwohlberg/sporco
|
examples/scripts/sc/bpdnjnt_opt.py
|
Python
|
bsd-3-clause
| 4,338
|
[
"Gaussian"
] |
a1043d796d1b665fa67f81ea9ea384d8f9c852cf191537e9c740fad163d8907b
|
"""
parser.http.movieParser module (imdb package).
This module provides the classes (and the instances) used to parse the
IMDb pages on the akas.imdb.com server about a movie.
E.g., for Brian De Palma's "The Untouchables", the referred
pages would be:
combined details: http://akas.imdb.com/title/tt0094226/combined
plot summary: http://akas.imdb.com/title/tt0094226/plotsummary
...and so on...
Copyright 2004-2012 Davide Alberani <da@erlug.linux.it>
2008 H. Turgut Uyar <uyar@tekir.org>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
"""
import re
import urllib
from imdb import imdbURL_base
from imdb.Person import Person
from imdb.Movie import Movie
from imdb.Company import Company
from imdb.utils import analyze_title, split_company_name_notes, _Container
from utils import build_person, DOMParserBase, Attribute, Extractor, \
analyze_imdbid
# Dictionary used to convert some section's names.
_SECT_CONV = {
'directed': 'director',
'directed by': 'director',
'directors': 'director',
'editors': 'editor',
'writing credits': 'writer',
'writers': 'writer',
'produced': 'producer',
'cinematography': 'cinematographer',
'film editing': 'editor',
'casting': 'casting director',
'costume design': 'costume designer',
'makeup department': 'make up',
'production management': 'production manager',
'second unit director or assistant director': 'assistant director',
'costume and wardrobe department': 'costume department',
'sound department': 'sound crew',
'stunts': 'stunt performer',
'other crew': 'miscellaneous crew',
'also known as': 'akas',
'country': 'countries',
'runtime': 'runtimes',
'language': 'languages',
'certification': 'certificates',
'genre': 'genres',
'created': 'creator',
'creators': 'creator',
'color': 'color info',
'plot': 'plot outline',
'seasons': 'number of seasons',
'art directors': 'art direction',
'assistant directors': 'assistant director',
'set decorators': 'set decoration',
'visual effects department': 'visual effects',
'production managers': 'production manager',
'miscellaneous': 'miscellaneous crew',
'make up department': 'make up',
'plot summary': 'plot outline',
'cinematographers': 'cinematographer',
'camera department': 'camera and electrical department',
'costume designers': 'costume designer',
'production designers': 'production design',
'music original': 'original music',
'casting directors': 'casting director',
'other companies': 'miscellaneous companies',
'producers': 'producer',
'special effects by': 'special effects department',
'special effects': 'special effects companies'
}
def _manageRoles(mo):
"""Perform some transformation on the html, so that roleIDs can
be easily retrieved."""
firstHalf = mo.group(1)
secondHalf = mo.group(2)
newRoles = []
roles = secondHalf.split(' / ')
for role in roles:
role = role.strip()
if not role:
continue
roleID = analyze_imdbid(role)
if roleID is None:
roleID = u'/'
else:
roleID += u'/'
newRoles.append(u'<div class="_imdbpyrole" roleid="%s">%s</div>' % \
(roleID, role.strip()))
return firstHalf + u' / '.join(newRoles) + mo.group(3)
_reRolesMovie = re.compile(r'(<td class="char">)(.*?)(</td>)',
re.I | re.M | re.S)
def _replaceBR(mo):
"""Replaces <br> tags with '::' (useful for some akas)"""
txt = mo.group(0)
return txt.replace('<br>', '::')
_reAkas = re.compile(r'<h5>also known as:</h5>.*?</div>', re.I | re.M | re.S)
def makeSplitter(lstrip=None, sep='|', comments=True,
origNotesSep=' (', newNotesSep='::(', strip=None):
"""Return a splitter function suitable for a given set of data."""
def splitter(x):
if not x: return x
x = x.strip()
if not x: return x
if lstrip is not None:
x = x.lstrip(lstrip).lstrip()
lx = x.split(sep)
lx[:] = filter(None, [j.strip() for j in lx])
if comments:
lx[:] = [j.replace(origNotesSep, newNotesSep, 1) for j in lx]
if strip:
lx[:] = [j.strip(strip) for j in lx]
return lx
return splitter
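# A minimal usage sketch (values traceable from the code above): strip a
# label prefix, split on '|', and strip whitespace from each piece.
#
#   split_lang = makeSplitter('Language:')
#   split_lang('Language: English | French')  # -> ['English', 'French']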
def _toInt(val, replace=()):
"""Return the value, converted to integer, or None; if present, 'replace'
must be a list of tuples of values to replace."""
for before, after in replace:
val = val.replace(before, after)
try:
return int(val)
except (TypeError, ValueError):
return None
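# For example (illustrative values):
#   _toInt('84,324', [(',', '')])  ->  84324
#   _toInt('unknown')              ->  None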
class DOMHTMLMovieParser(DOMParserBase):
"""Parser for the "combined details" (and if instance.mdparse is
True also for the "main details") page of a given movie.
The page should be provided as a string, as taken from
the akas.imdb.com server. The final result will be a
dictionary, with a key for every relevant section.
Example:
mparser = DOMHTMLMovieParser()
result = mparser.parse(combined_details_html_string)
"""
_containsObjects = True
extractors = [Extractor(label='title',
path="//h1",
attrs=Attribute(key='title',
path=".//text()",
postprocess=analyze_title)),
Extractor(label='glossarysections',
group="//a[@class='glossary']",
group_key="./@name",
group_key_normalize=lambda x: x.replace('_', ' '),
path="../../../..//tr",
attrs=Attribute(key=None,
multi=True,
path={'person': ".//text()",
'link': "./td[1]/a[@href]/@href"},
postprocess=lambda x: \
build_person(x.get('person') or u'',
personID=analyze_imdbid(x.get('link')))
)),
Extractor(label='cast',
path="//table[@class='cast']//tr",
attrs=Attribute(key="cast",
multi=True,
path={'person': ".//text()",
'link': "td[2]/a/@href",
'roleID': \
"td[4]/div[@class='_imdbpyrole']/@roleid"},
postprocess=lambda x: \
build_person(x.get('person') or u'',
personID=analyze_imdbid(x.get('link')),
roleID=(x.get('roleID') or u'').split('/'))
)),
Extractor(label='genres',
path="//div[@class='info']//a[starts-with(@href," \
" '/Sections/Genres')]",
attrs=Attribute(key="genres",
multi=True,
path="./text()")),
Extractor(label='h5sections',
path="//div[@class='info']/h5/..",
attrs=[
Attribute(key="plot summary",
path="./h5[starts-with(text(), " \
"'Plot:')]/../div/text()",
postprocess=lambda x: \
x.strip().rstrip('|').rstrip()),
Attribute(key="aspect ratio",
path="./h5[starts-with(text()," \
" 'Aspect')]/../div/text()",
postprocess=lambda x: x.strip()),
Attribute(key="mpaa",
path="./h5/a[starts-with(text()," \
" 'MPAA')]/../../div/text()",
postprocess=lambda x: x.strip()),
Attribute(key="countries",
path="./h5[starts-with(text(), " \
"'Countr')]/../div[@class='info-content']//text()",
postprocess=makeSplitter('|')),
Attribute(key="language",
path="./h5[starts-with(text(), " \
"'Language')]/..//text()",
postprocess=makeSplitter('Language:')),
Attribute(key='color info',
path="./h5[starts-with(text(), " \
"'Color')]/..//text()",
postprocess=makeSplitter('Color:')),
Attribute(key='sound mix',
path="./h5[starts-with(text(), " \
"'Sound Mix')]/..//text()",
postprocess=makeSplitter('Sound Mix:')),
                    # Collects akas not enclosed in <i> tags.
Attribute(key='other akas',
path="./h5[starts-with(text(), " \
"'Also Known As')]/../div//text()",
postprocess=makeSplitter(sep='::',
origNotesSep='" - ',
newNotesSep='::',
strip='"')),
Attribute(key='runtimes',
path="./h5[starts-with(text(), " \
"'Runtime')]/../div/text()",
postprocess=makeSplitter()),
Attribute(key='certificates',
path="./h5[starts-with(text(), " \
"'Certificat')]/..//text()",
postprocess=makeSplitter('Certification:')),
Attribute(key='number of seasons',
path="./h5[starts-with(text(), " \
"'Seasons')]/..//text()",
postprocess=lambda x: x.count('|') + 1),
Attribute(key='original air date',
path="./h5[starts-with(text(), " \
"'Original Air Date')]/../div/text()"),
Attribute(key='tv series link',
path="./h5[starts-with(text(), " \
"'TV Series')]/..//a/@href"),
Attribute(key='tv series title',
path="./h5[starts-with(text(), " \
"'TV Series')]/..//a/text()")
]),
Extractor(label='language codes',
path="//h5[starts-with(text(), 'Language')]/..//a[starts-with(@href, '/language/')]",
attrs=Attribute(key='language codes', multi=True,
path="./@href",
postprocess=lambda x: x.split('/')[2].strip()
)),
Extractor(label='country codes',
path="//h5[starts-with(text(), 'Country')]/..//a[starts-with(@href, '/country/')]",
attrs=Attribute(key='country codes', multi=True,
path="./@href",
postprocess=lambda x: x.split('/')[2].strip()
)),
Extractor(label='creator',
path="//h5[starts-with(text(), 'Creator')]/..//a",
attrs=Attribute(key='creator', multi=True,
path={'name': "./text()",
'link': "./@href"},
postprocess=lambda x: \
build_person(x.get('name') or u'',
personID=analyze_imdbid(x.get('link')))
)),
Extractor(label='thin writer',
path="//h5[starts-with(text(), 'Writer')]/..//a",
attrs=Attribute(key='thin writer', multi=True,
path={'name': "./text()",
'link': "./@href"},
postprocess=lambda x: \
build_person(x.get('name') or u'',
personID=analyze_imdbid(x.get('link')))
)),
Extractor(label='thin director',
path="//h5[starts-with(text(), 'Director')]/..//a",
attrs=Attribute(key='thin director', multi=True,
path={'name': "./text()",
'link': "@href"},
postprocess=lambda x: \
build_person(x.get('name') or u'',
personID=analyze_imdbid(x.get('link')))
)),
Extractor(label='top 250/bottom 100',
path="//div[@class='starbar-special']/" \
"a[starts-with(@href, '/chart/')]",
attrs=Attribute(key='top/bottom rank',
path="./text()")),
Extractor(label='series years',
path="//div[@id='tn15title']//span" \
"[starts-with(text(), 'TV series')]",
attrs=Attribute(key='series years',
path="./text()",
postprocess=lambda x: \
x.replace('TV series','').strip())),
Extractor(label='number of episodes',
path="//a[@title='Full Episode List']",
attrs=Attribute(key='number of episodes',
path="./text()",
postprocess=lambda x: \
_toInt(x, [(' Episodes', '')]))),
Extractor(label='akas',
path="//i[@class='transl']",
attrs=Attribute(key='akas', multi=True, path='text()',
postprocess=lambda x:
                    x.replace('  ', ' ').rstrip('-').replace('" - ',
                        '"::', 1).strip('"').replace('  ', ' ')))
Extractor(label='production notes/status',
path="//h5[starts-with(text(), 'Status:')]/..//div[@class='info-content']",
attrs=Attribute(key='production status',
path=".//text()",
postprocess=lambda x: x.strip().split('|')[0].strip().lower())),
Extractor(label='production notes/status updated',
path="//h5[starts-with(text(), 'Status Updated:')]/..//div[@class='info-content']",
attrs=Attribute(key='production status updated',
path=".//text()",
postprocess=lambda x: x.strip())),
Extractor(label='production notes/comments',
path="//h5[starts-with(text(), 'Comments:')]/..//div[@class='info-content']",
attrs=Attribute(key='production comments',
path=".//text()",
postprocess=lambda x: x.strip())),
Extractor(label='production notes/note',
path="//h5[starts-with(text(), 'Note:')]/..//div[@class='info-content']",
attrs=Attribute(key='production note',
path=".//text()",
postprocess=lambda x: x.strip())),
Extractor(label='blackcatheader',
group="//b[@class='blackcatheader']",
group_key="./text()",
group_key_normalize=lambda x: x.lower(),
path="../ul/li",
attrs=Attribute(key=None,
multi=True,
path={'name': "./a//text()",
'comp-link': "./a/@href",
'notes': "./text()"},
postprocess=lambda x: \
Company(name=x.get('name') or u'',
companyID=analyze_imdbid(x.get('comp-link')),
notes=(x.get('notes') or u'').strip())
)),
Extractor(label='rating',
path="//div[@class='starbar-meta']/b",
attrs=Attribute(key='rating',
path=".//text()")),
Extractor(label='votes',
path="//div[@class='starbar-meta']/a[@href]",
attrs=Attribute(key='votes',
path=".//text()")),
Extractor(label='cover url',
path="//a[@name='poster']",
attrs=Attribute(key='cover url',
path="./img/@src"))
]
preprocessors = [
(re.compile(r'(<b class="blackcatheader">.+?</b>)', re.I),
r'</div><div>\1'),
('<small>Full cast and crew for<br>', ''),
('<td> </td>', '<td>...</td>'),
('<span class="tv-extra">TV mini-series</span>',
'<span class="tv-extra">(mini)</span>'),
(_reRolesMovie, _manageRoles),
(_reAkas, _replaceBR)]
def preprocess_dom(self, dom):
# Handle series information.
xpath = self.xpath(dom, "//b[text()='Series Crew']")
if xpath:
b = xpath[-1] # In doubt, take the last one.
for a in self.xpath(b, "./following::h5/a[@class='glossary']"):
name = a.get('name')
if name:
a.set('name', 'series %s' % name)
# Remove links to IMDbPro.
for proLink in self.xpath(dom, "//span[@class='pro-link']"):
proLink.drop_tree()
# Remove some 'more' links (keep others, like the one around
# the number of votes).
for tn15more in self.xpath(dom,
"//a[@class='tn15more'][starts-with(@href, '/title/')]"):
tn15more.drop_tree()
return dom
re_space = re.compile(r'\s+')
re_airdate = re.compile(r'(.*)\s*\(season (\d+), episode (\d+)\)', re.I)
def postprocess_data(self, data):
# Convert section names.
for sect in data.keys():
if sect in _SECT_CONV:
data[_SECT_CONV[sect]] = data[sect]
del data[sect]
sect = _SECT_CONV[sect]
# Filter out fake values.
for key in data:
value = data[key]
if isinstance(value, list) and value:
if isinstance(value[0], Person):
data[key] = filter(lambda x: x.personID is not None, value)
if isinstance(value[0], _Container):
for obj in data[key]:
obj.accessSystem = self._as
obj.modFunct = self._modFunct
if 'akas' in data or 'other akas' in data:
akas = data.get('akas') or []
other_akas = data.get('other akas') or []
akas += other_akas
nakas = []
for aka in akas:
aka = aka.strip()
if aka.endswith('" -'):
aka = aka[:-3].rstrip()
nakas.append(aka)
if 'akas' in data:
del data['akas']
if 'other akas' in data:
del data['other akas']
if nakas:
data['akas'] = nakas
if 'runtimes' in data:
data['runtimes'] = [x.replace(' min', u'')
for x in data['runtimes']]
if 'original air date' in data:
oid = self.re_space.sub(' ', data['original air date']).strip()
data['original air date'] = oid
aid = self.re_airdate.findall(oid)
if aid and len(aid[0]) == 3:
date, season, episode = aid[0]
date = date.strip()
                try: season = int(season)
                except (TypeError, ValueError): pass
                try: episode = int(episode)
                except (TypeError, ValueError): pass
if date and date != '????':
data['original air date'] = date
else:
del data['original air date']
# Handle also "episode 0".
if season or type(season) is type(0):
data['season'] = season
                if episode or type(episode) is type(0):
data['episode'] = episode
for k in ('writer', 'director'):
t_k = 'thin %s' % k
if t_k not in data:
continue
if k not in data:
data[k] = data[t_k]
del data[t_k]
if 'top/bottom rank' in data:
tbVal = data['top/bottom rank'].lower()
if tbVal.startswith('top'):
tbKey = 'top 250 rank'
tbVal = _toInt(tbVal, [('top 250: #', '')])
else:
tbKey = 'bottom 100 rank'
tbVal = _toInt(tbVal, [('bottom 100: #', '')])
if tbVal:
data[tbKey] = tbVal
del data['top/bottom rank']
if 'year' in data and data['year'] == '????':
del data['year']
if 'tv series link' in data:
if 'tv series title' in data:
data['episode of'] = Movie(title=data['tv series title'],
movieID=analyze_imdbid(
data['tv series link']),
accessSystem=self._as,
modFunct=self._modFunct)
del data['tv series title']
del data['tv series link']
if 'rating' in data:
try:
data['rating'] = float(data['rating'].replace('/10', ''))
except (TypeError, ValueError):
pass
if 'votes' in data:
try:
votes = data['votes'].replace(',', '').replace('votes', '')
data['votes'] = int(votes)
except (TypeError, ValueError):
pass
return data
def _process_plotsummary(x):
"""Process a plot (contributed by Rdian06)."""
xauthor = x.get('author')
if xauthor:
xauthor = xauthor.replace('{', '<').replace('}', '>').replace('(',
'<').replace(')', '>').strip()
xplot = x.get('plot', u'').strip()
if xauthor:
xplot += u'::%s' % xauthor
return xplot
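# E.g. (hypothetical extracted values) {'plot': 'A hacker learns ...',
# 'author': 'Written by {jdoe@example.com}'} is turned into the string
# u'A hacker learns ...::Written by <jdoe@example.com>'.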
class DOMHTMLPlotParser(DOMParserBase):
"""Parser for the "plot summary" page of a given movie.
The page should be provided as a string, as taken from
the akas.imdb.com server. The final result will be a
dictionary, with a 'plot' key, containing a list
of string with the structure: 'summary::summary_author <author@email>'.
Example:
        pparser = DOMHTMLPlotParser()
result = pparser.parse(plot_summary_html_string)
"""
_defGetRefs = True
    # Note that IMDb recently started putting the author's email only in
    # the link, which is not collected here.
extractors = [Extractor(label='plot',
path="//p[@class='plotpar']",
attrs=Attribute(key='plot',
multi=True,
path={'plot': './text()',
'author': './i/a/text()'},
postprocess=_process_plotsummary))]
def _process_award(x):
award = {}
award['award'] = x.get('award').strip()
if not award['award']:
return {}
award['year'] = x.get('year').strip()
if award['year'] and award['year'].isdigit():
award['year'] = int(award['year'])
award['result'] = x.get('result').strip()
category = x.get('category').strip()
if category:
award['category'] = category
received_with = x.get('with')
if received_with is not None:
award['with'] = received_with.strip()
notes = x.get('notes')
if notes is not None:
notes = notes.strip()
if notes:
award['notes'] = notes
award['anchor'] = x.get('anchor')
return award
class DOMHTMLAwardsParser(DOMParserBase):
"""Parser for the "awards" page of a given person or movie.
The page should be provided as a string, as taken from
the akas.imdb.com server. The final result will be a
dictionary, with a key for every relevant section.
Example:
        awparser = DOMHTMLAwardsParser()
result = awparser.parse(awards_html_string)
"""
subject = 'title'
_containsObjects = True
extractors = [
Extractor(label='awards',
group="//table//big",
group_key="./a",
path="./ancestor::tr[1]/following-sibling::tr/" \
"td[last()][not(@colspan)]",
attrs=Attribute(key=None,
multi=True,
path={
'year': "../td[1]/a/text()",
'result': "../td[2]/b/text()",
'award': "../td[3]/text()",
'category': "./text()[1]",
# FIXME: takes only the first co-recipient
'with': "./small[starts-with(text()," \
" 'Shared with:')]/following-sibling::a[1]/text()",
'notes': "./small[last()]//text()",
'anchor': ".//text()"
},
postprocess=_process_award
)),
Extractor(label='recipients',
group="//table//big",
group_key="./a",
path="./ancestor::tr[1]/following-sibling::tr/" \
"td[last()]/small[1]/preceding-sibling::a",
attrs=Attribute(key=None,
multi=True,
path={
'name': "./text()",
'link': "./@href",
'anchor': "..//text()"
}
))
]
preprocessors = [
(re.compile('(<tr><td[^>]*>.*?</td></tr>\n\n</table>)', re.I),
r'\1</table>'),
(re.compile('(<tr><td[^>]*>\n\n<big>.*?</big></td></tr>)', re.I),
r'</table><table class="_imdbpy">\1'),
(re.compile('(<table[^>]*>\n\n)</table>(<table)', re.I), r'\1\2'),
(re.compile('(<small>.*?)<br>(.*?</small)', re.I), r'\1 \2'),
(re.compile('(</tr>\n\n)(<td)', re.I), r'\1<tr>\2')
]
def preprocess_dom(self, dom):
"""Repeat td elements according to their rowspan attributes
in subsequent tr elements.
"""
cols = self.xpath(dom, "//td[@rowspan]")
for col in cols:
span = int(col.get('rowspan'))
del col.attrib['rowspan']
position = len(self.xpath(col, "./preceding-sibling::td"))
row = col.getparent()
for tr in self.xpath(row, "./following-sibling::tr")[:span-1]:
# if not cloned, child will be moved to new parent
clone = self.clone(col)
                # XXX: beware that here we don't use an "adapted" function,
                # because both BeautifulSoup and lxml use the same
                # "insert" method.
tr.insert(position, clone)
return dom
def postprocess_data(self, data):
if len(data) == 0:
return {}
nd = []
for key in data.keys():
dom = self.get_dom(key)
assigner = self.xpath(dom, "//a/text()")[0]
for entry in data[key]:
if not entry.has_key('name'):
if not entry:
continue
# this is an award, not a recipient
entry['assigner'] = assigner.strip()
# find the recipients
matches = [p for p in data[key]
if p.has_key('name') and (entry['anchor'] ==
p['anchor'])]
if self.subject == 'title':
recipients = [Person(name=recipient['name'],
personID=analyze_imdbid(recipient['link']))
for recipient in matches]
entry['to'] = recipients
elif self.subject == 'name':
recipients = [Movie(title=recipient['name'],
movieID=analyze_imdbid(recipient['link']))
for recipient in matches]
entry['for'] = recipients
nd.append(entry)
del entry['anchor']
return {'awards': nd}
class DOMHTMLTaglinesParser(DOMParserBase):
"""Parser for the "taglines" page of a given movie.
The page should be provided as a string, as taken from
the akas.imdb.com server. The final result will be a
dictionary, with a key for every relevant section.
Example:
tparser = DOMHTMLTaglinesParser()
result = tparser.parse(taglines_html_string)
"""
extractors = [Extractor(label='taglines',
path="//div[@id='tn15content']/p",
attrs=Attribute(key='taglines', multi=True,
path="./text()"))]
class DOMHTMLKeywordsParser(DOMParserBase):
"""Parser for the "keywords" page of a given movie.
The page should be provided as a string, as taken from
the akas.imdb.com server. The final result will be a
dictionary, with a key for every relevant section.
Example:
kwparser = DOMHTMLKeywordsParser()
result = kwparser.parse(keywords_html_string)
"""
extractors = [Extractor(label='keywords',
path="//a[starts-with(@href, '/keyword/')]",
attrs=Attribute(key='keywords',
path="./text()", multi=True,
postprocess=lambda x: \
x.lower().replace(' ', '-')))]
class DOMHTMLAlternateVersionsParser(DOMParserBase):
"""Parser for the "alternate versions" page of a given movie.
The page should be provided as a string, as taken from
the akas.imdb.com server. The final result will be a
dictionary, with a key for every relevant section.
Example:
        avparser = DOMHTMLAlternateVersionsParser()
result = avparser.parse(alternateversions_html_string)
"""
_defGetRefs = True
extractors = [Extractor(label='alternate versions',
path="//ul[@class='trivia']/li",
attrs=Attribute(key='alternate versions',
multi=True,
path=".//text()",
postprocess=lambda x: x.strip()))]
class DOMHTMLTriviaParser(DOMParserBase):
"""Parser for the "trivia" page of a given movie.
The page should be provided as a string, as taken from
the akas.imdb.com server. The final result will be a
dictionary, with a key for every relevant section.
Example:
        tparser = DOMHTMLTriviaParser()
        result = tparser.parse(trivia_html_string)
"""
_defGetRefs = True
    extractors = [Extractor(label='trivia',
path="//div[@class='sodatext']",
attrs=Attribute(key='trivia',
multi=True,
path=".//text()",
postprocess=lambda x: x.strip()))]
def preprocess_dom(self, dom):
# Remove "link this quote" links.
for qLink in self.xpath(dom, "//span[@class='linksoda']"):
qLink.drop_tree()
return dom
class DOMHTMLSoundtrackParser(DOMHTMLAlternateVersionsParser):
kind = 'soundtrack'
preprocessors = [
('<br>', '\n')
]
def postprocess_data(self, data):
if 'soundtrack' in data:
nd = []
for x in data['soundtrack']:
ds = x.split('\n')
title = ds[0]
if title[0] == '"' and title[-1] == '"':
title = title[1:-1]
nds = []
newData = {}
for l in ds[1:]:
if ' with ' in l or ' by ' in l or ' from ' in l \
or ' of ' in l or l.startswith('From '):
nds.append(l)
else:
if nds:
nds[-1] += l
else:
nds.append(l)
newData[title] = {}
for l in nds:
skip = False
for sep in ('From ',):
if l.startswith(sep):
fdix = len(sep)
kind = l[:fdix].rstrip().lower()
info = l[fdix:].lstrip()
newData[title][kind] = info
skip = True
if not skip:
for sep in ' with ', ' by ', ' from ', ' of ':
fdix = l.find(sep)
if fdix != -1:
fdix = fdix+len(sep)
kind = l[:fdix].rstrip().lower()
info = l[fdix:].lstrip()
newData[title][kind] = info
break
nd.append(newData)
data['soundtrack'] = nd
return data
class DOMHTMLCrazyCreditsParser(DOMParserBase):
"""Parser for the "crazy credits" page of a given movie.
The page should be provided as a string, as taken from
the akas.imdb.com server. The final result will be a
dictionary, with a key for every relevant section.
Example:
ccparser = DOMHTMLCrazyCreditsParser()
result = ccparser.parse(crazycredits_html_string)
"""
_defGetRefs = True
extractors = [Extractor(label='crazy credits', path="//ul/li/tt",
attrs=Attribute(key='crazy credits', multi=True,
path=".//text()",
postprocess=lambda x: \
                                x.replace('\n', ' ').replace('  ', ' ')))]
class DOMHTMLGoofsParser(DOMParserBase):
"""Parser for the "goofs" page of a given movie.
The page should be provided as a string, as taken from
the akas.imdb.com server. The final result will be a
dictionary, with a key for every relevant section.
Example:
gparser = DOMHTMLGoofsParser()
result = gparser.parse(goofs_html_string)
"""
_defGetRefs = True
extractors = [Extractor(label='goofs', path="//ul[@class='trivia']/li",
attrs=Attribute(key='goofs', multi=True, path=".//text()",
postprocess=lambda x: (x or u'').strip()))]
class DOMHTMLQuotesParser(DOMParserBase):
"""Parser for the "memorable quotes" page of a given movie.
The page should be provided as a string, as taken from
the akas.imdb.com server. The final result will be a
dictionary, with a key for every relevant section.
Example:
qparser = DOMHTMLQuotesParser()
result = qparser.parse(quotes_html_string)
"""
_defGetRefs = True
extractors = [
Extractor(label='quotes',
path="//div[@class='_imdbpy']",
attrs=Attribute(key='quotes',
multi=True,
path=".//text()",
postprocess=lambda x: x.strip().replace(' \n',
'::').replace('::\n', '::').replace('\n', ' ')))
]
preprocessors = [
(re.compile('(<a name="?qt[0-9]{7}"?></a>)', re.I),
r'\1<div class="_imdbpy">'),
(re.compile('<hr width="30%">', re.I), '</div>'),
(re.compile('<hr/>', re.I), '</div>'),
(re.compile('<script.*?</script>', re.I|re.S), ''),
# For BeautifulSoup.
(re.compile('<!-- sid: t-channel : MIDDLE_CENTER -->', re.I), '</div>')
]
def preprocess_dom(self, dom):
# Remove "link this quote" links.
for qLink in self.xpath(dom, "//p[@class='linksoda']"):
qLink.drop_tree()
return dom
def postprocess_data(self, data):
if 'quotes' not in data:
return {}
for idx, quote in enumerate(data['quotes']):
data['quotes'][idx] = quote.split('::')
return data
class DOMHTMLReleaseinfoParser(DOMParserBase):
"""Parser for the "release dates" page of a given movie.
The page should be provided as a string, as taken from
the akas.imdb.com server. The final result will be a
dictionary, with a key for every relevant section.
Example:
rdparser = DOMHTMLReleaseinfoParser()
result = rdparser.parse(releaseinfo_html_string)
"""
extractors = [Extractor(label='release dates',
path="//th[@class='xxxx']/../../tr",
attrs=Attribute(key='release dates', multi=True,
path={'country': ".//td[1]//text()",
'date': ".//td[2]//text()",
'notes': ".//td[3]//text()"})),
Extractor(label='akas',
path="//div[@class='_imdbpy_akas']/table/tr",
attrs=Attribute(key='akas', multi=True,
path={'title': "./td[1]/text()",
'countries': "./td[2]/text()"}))]
preprocessors = [
(re.compile('(<h5><a name="?akas"?.*</table>)', re.I | re.M | re.S),
r'<div class="_imdbpy_akas">\1</div>')]
def postprocess_data(self, data):
if not ('release dates' in data or 'akas' in data): return data
releases = data.get('release dates') or []
rl = []
for i in releases:
country = i.get('country')
date = i.get('date')
if not (country and date): continue
country = country.strip()
date = date.strip()
if not (country and date): continue
notes = i['notes']
info = u'%s::%s' % (country, date)
if notes:
info += notes
rl.append(info)
if releases:
del data['release dates']
if rl:
data['release dates'] = rl
akas = data.get('akas') or []
nakas = []
for aka in akas:
title = (aka.get('title') or '').strip()
if not title:
continue
countries = (aka.get('countries') or '').split('/')
if not countries:
nakas.append(title)
else:
for country in countries:
nakas.append('%s::%s' % (title, country.strip()))
if akas:
del data['akas']
if nakas:
data['akas from release info'] = nakas
return data
class DOMHTMLRatingsParser(DOMParserBase):
"""Parser for the "user ratings" page of a given movie.
The page should be provided as a string, as taken from
the akas.imdb.com server. The final result will be a
dictionary, with a key for every relevant section.
Example:
rparser = DOMHTMLRatingsParser()
result = rparser.parse(userratings_html_string)
"""
    re_means = re.compile(r'mean\s*=\s*([0-9]\.[0-9])\.\s*median\s*=\s*([0-9])',
                          re.I)
extractors = [
Extractor(label='number of votes',
path="//td[b='Percentage']/../../tr",
attrs=[Attribute(key='votes',
multi=True,
path={
'votes': "td[1]//text()",
'ordinal': "td[3]//text()"
})]),
Extractor(label='mean and median',
path="//p[starts-with(text(), 'Arithmetic mean')]",
attrs=Attribute(key='mean and median',
path="text()")),
Extractor(label='rating',
path="//a[starts-with(@href, '/search/title?user_rating=')]",
attrs=Attribute(key='rating',
path="text()")),
Extractor(label='demographic voters',
path="//td[b='Average']/../../tr",
attrs=Attribute(key='demographic voters',
multi=True,
path={
'voters': "td[1]//text()",
'votes': "td[2]//text()",
'average': "td[3]//text()"
})),
Extractor(label='top 250',
path="//a[text()='top 250']",
attrs=Attribute(key='top 250',
path="./preceding-sibling::text()[1]"))
]
def postprocess_data(self, data):
nd = {}
votes = data.get('votes', [])
if votes:
nd['number of votes'] = {}
for i in xrange(1, 11):
_ordinal = int(votes[i]['ordinal'])
_strvts = votes[i]['votes'] or '0'
nd['number of votes'][_ordinal] = \
int(_strvts.replace(',', ''))
mean = data.get('mean and median', '')
if mean:
means = self.re_means.findall(mean)
if means and len(means[0]) == 2:
am, med = means[0]
try: am = float(am)
except (ValueError, OverflowError): pass
if type(am) is type(1.0):
nd['arithmetic mean'] = am
try: med = int(med)
except (ValueError, OverflowError): pass
if type(med) is type(0):
nd['median'] = med
if 'rating' in data:
nd['rating'] = float(data['rating'])
dem_voters = data.get('demographic voters')
if dem_voters:
nd['demographic'] = {}
for i in xrange(1, len(dem_voters)):
if (dem_voters[i]['votes'] is not None) \
and (dem_voters[i]['votes'].strip()):
nd['demographic'][dem_voters[i]['voters'].strip().lower()] \
= (int(dem_voters[i]['votes'].replace(',', '')),
float(dem_voters[i]['average']))
if 'imdb users' in nd.get('demographic', {}):
nd['votes'] = nd['demographic']['imdb users'][0]
nd['demographic']['all votes'] = nd['demographic']['imdb users']
del nd['demographic']['imdb users']
top250 = data.get('top 250')
if top250:
sd = top250[9:]
i = sd.find(' ')
if i != -1:
sd = sd[:i]
try: sd = int(sd)
except (ValueError, OverflowError): pass
if type(sd) is type(0):
nd['top 250 rank'] = sd
return nd
class DOMHTMLEpisodesRatings(DOMParserBase):
"""Parser for the "episode ratings ... by date" page of a given movie.
The page should be provided as a string, as taken from
the akas.imdb.com server. The final result will be a
dictionary, with a key for every relevant section.
Example:
erparser = DOMHTMLEpisodesRatings()
result = erparser.parse(eprating_html_string)
"""
_containsObjects = True
extractors = [Extractor(label='title', path="//title",
attrs=Attribute(key='title', path="./text()")),
Extractor(label='ep ratings',
path="//th/../..//tr",
attrs=Attribute(key='episodes', multi=True,
path={'nr': ".//td[1]/text()",
'ep title': ".//td[2]//text()",
'movieID': ".//td[2]/a/@href",
'rating': ".//td[3]/text()",
'votes': ".//td[4]/text()"}))]
def postprocess_data(self, data):
if 'title' not in data or 'episodes' not in data: return {}
nd = []
title = data['title']
for i in data['episodes']:
ept = i['ep title']
movieID = analyze_imdbid(i['movieID'])
votes = i['votes']
rating = i['rating']
if not (ept and movieID and votes and rating): continue
            try:
                votes = int(votes.replace(',', '').replace('.', ''))
            except (TypeError, ValueError):
                pass
            try:
                rating = float(rating)
            except (TypeError, ValueError):
                pass
ept = ept.strip()
ept = u'%s {%s' % (title, ept)
nr = i['nr']
if nr:
ept += u' (#%s)' % nr.strip()
ept += '}'
if movieID is not None:
movieID = str(movieID)
m = Movie(title=ept, movieID=movieID, accessSystem=self._as,
modFunct=self._modFunct)
epofdict = m.get('episode of')
if epofdict is not None:
m['episode of'] = Movie(data=epofdict, accessSystem=self._as,
modFunct=self._modFunct)
nd.append({'episode': m, 'votes': votes, 'rating': rating})
return {'episodes rating': nd}
def _normalize_href(href):
if (href is not None) and (not href.lower().startswith('http://')):
if href.startswith('/'): href = href[1:]
# TODO: imdbURL_base may be set by the user!
href = '%s%s' % (imdbURL_base, href)
return href
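# E.g., assuming imdbURL_base is the default 'http://akas.imdb.com/',
# _normalize_href('/title/tt0094226/') returns
# 'http://akas.imdb.com/title/tt0094226/'.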
class DOMHTMLOfficialsitesParser(DOMParserBase):
"""Parser for the "official sites", "external reviews", "newsgroup
reviews", "miscellaneous links", "sound clips", "video clips" and
"photographs" pages of a given movie.
The page should be provided as a string, as taken from
the akas.imdb.com server. The final result will be a
dictionary, with a key for every relevant section.
Example:
osparser = DOMHTMLOfficialsitesParser()
result = osparser.parse(officialsites_html_string)
"""
kind = 'official sites'
extractors = [
Extractor(label='site',
path="//ol/li/a",
attrs=Attribute(key='self.kind',
multi=True,
path={
'link': "./@href",
'info': "./text()"
},
postprocess=lambda x: (x.get('info').strip(),
urllib.unquote(_normalize_href(x.get('link'))))))
]
class DOMHTMLConnectionParser(DOMParserBase):
"""Parser for the "connections" page of a given movie.
The page should be provided as a string, as taken from
the akas.imdb.com server. The final result will be a
dictionary, with a key for every relevant section.
Example:
connparser = DOMHTMLConnectionParser()
result = connparser.parse(connections_html_string)
"""
_containsObjects = True
extractors = [Extractor(label='connection',
group="//div[@class='_imdbpy']",
group_key="./h5/text()",
group_key_normalize=lambda x: x.lower(),
path="./a",
attrs=Attribute(key=None,
path={'title': "./text()",
'movieID': "./@href"},
multi=True))]
preprocessors = [
('<h5>', '</div><div class="_imdbpy"><h5>'),
# To get the movie's year.
('</a> (', ' ('),
('\n<br/>', '</a>'),
('<br/> - ', '::')
]
def postprocess_data(self, data):
for key in data.keys():
nl = []
for v in data[key]:
title = v['title']
ts = title.split('::', 1)
title = ts[0].strip()
notes = u''
if len(ts) == 2:
notes = ts[1].strip()
m = Movie(title=title,
movieID=analyze_imdbid(v['movieID']),
accessSystem=self._as, notes=notes,
modFunct=self._modFunct)
nl.append(m)
data[key] = nl
if not data: return {}
return {'connections': data}
class DOMHTMLLocationsParser(DOMParserBase):
"""Parser for the "locations" page of a given movie.
The page should be provided as a string, as taken from
the akas.imdb.com server. The final result will be a
dictionary, with a key for every relevant section.
Example:
lparser = DOMHTMLLocationsParser()
result = lparser.parse(locations_html_string)
"""
extractors = [Extractor(label='locations', path="//dt",
attrs=Attribute(key='locations', multi=True,
path={'place': ".//text()",
'note': "./following-sibling::dd[1]" \
"//text()"},
postprocess=lambda x: (u'%s::%s' % (
x['place'].strip(),
(x['note'] or u'').strip())).strip(':')))]
class DOMHTMLTechParser(DOMParserBase):
"""Parser for the "technical", "business", "literature",
"publicity" (for people) and "contacts (for people) pages of
a given movie.
The page should be provided as a string, as taken from
the akas.imdb.com server. The final result will be a
dictionary, with a key for every relevant section.
Example:
        tparser = DOMHTMLTechParser()
result = tparser.parse(technical_html_string)
"""
kind = 'tech'
extractors = [Extractor(label='tech',
group="//h5",
group_key="./text()",
group_key_normalize=lambda x: x.lower(),
path="./following-sibling::div[1]",
attrs=Attribute(key=None,
path=".//text()",
postprocess=lambda x: [t.strip()
for t in x.split('\n') if t.strip()]))]
preprocessors = [
(re.compile('(<h5>.*?</h5>)', re.I), r'</div>\1<div class="_imdbpy">'),
(re.compile('((<br/>|</p>|</table>))\n?<br/>(?!<a)', re.I),
r'\1</div>'),
# the ones below are for the publicity parser
(re.compile('<p>(.*?)</p>', re.I), r'\1<br/>'),
(re.compile('(</td><td valign="top">)', re.I), r'\1::'),
(re.compile('(</tr><tr>)', re.I), r'\n\1'),
# this is for splitting individual entries
(re.compile('<br/>', re.I), r'\n'),
]
def postprocess_data(self, data):
for key in data:
data[key] = filter(None, data[key])
if self.kind in ('literature', 'business', 'contacts') and data:
if 'screenplay/teleplay' in data:
data['screenplay-teleplay'] = data['screenplay/teleplay']
del data['screenplay/teleplay']
data = {self.kind: data}
else:
if self.kind == 'publicity':
if 'biography (print)' in data:
data['biography-print'] = data['biography (print)']
del data['biography (print)']
# Tech info.
for key in data.keys():
if key.startswith('film negative format'):
data['film negative format'] = data[key]
del data[key]
elif key.startswith('film length'):
data['film length'] = data[key]
del data[key]
return data
class DOMHTMLRecParser(DOMParserBase):
"""Parser for the "recommendations" page of a given movie.
The page should be provided as a string, as taken from
the akas.imdb.com server. The final result will be a
dictionary, with a key for every relevant section.
Example:
        rparser = DOMHTMLRecParser()
result = rparser.parse(recommendations_html_string)
"""
_containsObjects = True
extractors = [Extractor(label='recommendations',
path="//td[@valign='middle'][1]",
attrs=Attribute(key='../../tr/td[1]//text()',
multi=True,
path={'title': ".//text()",
'movieID': ".//a/@href"}))]
def postprocess_data(self, data):
for key in data.keys():
n_key = key
n_keyl = n_key.lower()
if n_keyl == 'suggested by the database':
n_key = 'database'
elif n_keyl == 'imdb users recommend':
n_key = 'users'
data[n_key] = [Movie(title=x['title'],
movieID=analyze_imdbid(x['movieID']),
accessSystem=self._as, modFunct=self._modFunct)
for x in data[key]]
del data[key]
if data: return {'recommendations': data}
return data
class DOMHTMLNewsParser(DOMParserBase):
"""Parser for the "news" page of a given movie or person.
The page should be provided as a string, as taken from
the akas.imdb.com server. The final result will be a
dictionary, with a key for every relevant section.
Example:
nwparser = DOMHTMLNewsParser()
result = nwparser.parse(news_html_string)
"""
_defGetRefs = True
extractors = [
Extractor(label='news',
path="//h2",
attrs=Attribute(key='news',
multi=True,
path={
'title': "./text()",
'fromdate': "../following-sibling::p[1]/small//text()",
# FIXME: sometimes (see The Matrix (1999)) <p> is found
# inside news text.
'body': "../following-sibling::p[2]//text()",
'link': "../..//a[text()='Permalink']/@href",
'fulllink': "../..//a[starts-with(text(), " \
"'See full article at')]/@href"
},
postprocess=lambda x: {
'title': x.get('title').strip(),
'date': x.get('fromdate').split('|')[0].strip(),
'from': x.get('fromdate').split('|')[1].replace('From ',
'').strip(),
'body': (x.get('body') or u'').strip(),
'link': _normalize_href(x.get('link')),
'full article link': _normalize_href(x.get('fulllink'))
}))
]
preprocessors = [
(re.compile('(<a name=[^>]+><h2>)', re.I), r'<div class="_imdbpy">\1'),
(re.compile('(<hr/>)', re.I), r'</div>\1'),
(re.compile('<p></p>', re.I), r'')
]
def postprocess_data(self, data):
if not data.has_key('news'):
return {}
for news in data['news']:
if news.has_key('full article link'):
if news['full article link'] is None:
del news['full article link']
return data
def _parse_review(x):
result = {}
title = x.get('title').strip()
if title[-1] == ':': title = title[:-1]
result['title'] = title
result['link'] = _normalize_href(x.get('link'))
kind = x.get('kind').strip()
if kind[-1] == ':': kind = kind[:-1]
result['review kind'] = kind
text = x.get('review').replace('\n\n', '||').replace('\n', ' ').split('||')
review = '\n'.join(text)
if x.get('author') is not None:
author = x.get('author').strip()
review = review.split(author)[0].strip()
result['review author'] = author[2:]
if x.get('item') is not None:
item = x.get('item').strip()
review = review[len(item):].strip()
review = "%s: %s" % (item, review)
result['review'] = review
return result
class DOMHTMLSeasonEpisodesParser(DOMParserBase):
"""Parser for the "episode list" page of a given movie.
The page should be provided as a string, as taken from
the akas.imdb.com server. The final result will be a
dictionary, with a key for every relevant section.
Example:
sparser = DOMHTMLSeasonEpisodesParser()
result = sparser.parse(episodes_html_string)
"""
extractors = [
Extractor(label='series link',
path="//div[@class='parent']",
attrs=[Attribute(key='series link',
path=".//a/@href")]
),
Extractor(label='series title',
path="//head/meta[@property='og:title']",
attrs=[Attribute(key='series title',
path="./@content")]
),
Extractor(label='seasons list',
path="//select[@id='bySeason']//option",
attrs=[Attribute(key='_seasons',
multi=True,
path="./@value")]),
Extractor(label='selected season',
path="//select[@id='bySeason']//option[@selected]",
attrs=[Attribute(key='_current_season',
path='./@value')]),
Extractor(label='episodes',
path=".",
group="//div[@class='info']",
group_key=".//meta/@content",
group_key_normalize=lambda x: 'episode %s' % x,
attrs=[Attribute(key=None,
multi=True,
path={
"link": ".//strong//a[@href][1]/@href",
"original air date": ".//div[@class='airdate']/text()",
"title": ".//strong//text()",
"plot": ".//div[@class='item_description']//text()"
}
)]
)
]
def postprocess_data(self, data):
series_id = analyze_imdbid(data.get('series link'))
series_title = data.get('series title', '').strip()
selected_season = data.get('_current_season',
'unknown season').strip()
if not (series_id and series_title):
return {}
series = Movie(title=series_title, movieID=str(series_id),
accessSystem=self._as, modFunct=self._modFunct)
if series.get('kind') == 'movie':
series['kind'] = u'tv series'
        try: selected_season = int(selected_season)
        except (TypeError, ValueError): pass
nd = {selected_season: {}}
for episode_nr, episode in data.iteritems():
if not (episode and episode[0] and
episode_nr.startswith('episode ')):
continue
episode = episode[0]
episode_nr = episode_nr[8:].rstrip()
            try: episode_nr = int(episode_nr)
            except (TypeError, ValueError): pass
            episode_id = analyze_imdbid(episode.get('link', ''))
episode_air_date = episode.get('original air date',
'').strip()
episode_title = episode.get('title', '').strip()
episode_plot = episode.get('plot', '')
if not (episode_nr and episode_id and episode_title):
continue
ep_obj = Movie(movieID=episode_id, title=episode_title,
accessSystem=self._as, modFunct=self._modFunct)
ep_obj['kind'] = u'episode'
ep_obj['episode of'] = series
ep_obj['season'] = selected_season
ep_obj['episode'] = episode_nr
if episode_air_date:
ep_obj['original air date'] = episode_air_date
if episode_air_date[-4:].isdigit():
ep_obj['year'] = episode_air_date[-4:]
if episode_plot:
ep_obj['plot'] = episode_plot
nd[selected_season][episode_nr] = ep_obj
_seasons = data.get('_seasons') or []
for idx, season in enumerate(_seasons):
            try: _seasons[idx] = int(season)
            except (TypeError, ValueError): pass
return {'episodes': nd, '_seasons': _seasons,
'_current_season': selected_season}
def _build_episode(x):
"""Create a Movie object for a given series' episode."""
episode_id = analyze_imdbid(x.get('link'))
episode_title = x.get('title')
e = Movie(movieID=episode_id, title=episode_title)
e['kind'] = u'episode'
oad = x.get('oad')
if oad:
e['original air date'] = oad.strip()
year = x.get('year')
if year is not None:
year = year[5:]
if year == 'unknown': year = u'????'
if year and year.isdigit():
year = int(year)
e['year'] = year
else:
if oad and oad[-4:].isdigit():
e['year'] = int(oad[-4:])
epinfo = x.get('episode')
if epinfo is not None:
season, episode = epinfo.split(':')[0].split(',')
e['season'] = int(season[7:])
e['episode'] = int(episode[8:])
else:
e['season'] = 'unknown'
e['episode'] = 'unknown'
plot = x.get('plot')
if plot:
e['plot'] = plot.strip()
return e
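# E.g. (hypothetical extracted values):
#   _build_episode({'link': '/title/tt0562992/', 'title': 'Pilot',
#                   'episode': 'Season 1, Episode 1: ...',
#                   'year': 'year-2005'})
# yields a Movie of kind 'episode' with season 1, episode 1 and year 2005.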
class DOMHTMLEpisodesParser(DOMParserBase):
"""Parser for the "episode list" page of a given movie.
The page should be provided as a string, as taken from
the akas.imdb.com server. The final result will be a
dictionary, with a key for every relevant section.
Example:
eparser = DOMHTMLEpisodesParser()
result = eparser.parse(episodes_html_string)
"""
    # XXX: no longer used for the list of episodes parser,
    # but only for the episodes cast parser (see below).
_containsObjects = True
kind = 'episodes list'
_episodes_path = "..//h4"
_oad_path = "./following-sibling::span/strong[1]/text()"
def _init(self):
self.extractors = [
Extractor(label='series',
path="//html",
attrs=[Attribute(key='series title',
path=".//title/text()"),
Attribute(key='series movieID',
path=".//h1/a[@class='main']/@href",
postprocess=analyze_imdbid)
]),
Extractor(label='episodes',
group="//div[@class='_imdbpy']/h3",
group_key="./a/@name",
path=self._episodes_path,
attrs=Attribute(key=None,
multi=True,
path={
'link': "./a/@href",
'title': "./a/text()",
'year': "./preceding-sibling::a[1]/@name",
'episode': "./text()[1]",
'oad': self._oad_path,
'plot': "./following-sibling::text()[1]"
},
postprocess=_build_episode))]
if self.kind == 'episodes cast':
self.extractors += [
Extractor(label='cast',
group="//h4",
group_key="./text()[1]",
group_key_normalize=lambda x: x.strip(),
path="./following-sibling::table[1]//td[@class='nm']",
attrs=Attribute(key=None,
multi=True,
path={'person': "..//text()",
'link': "./a/@href",
'roleID': \
"../td[4]/div[@class='_imdbpyrole']/@roleid"},
postprocess=lambda x: \
build_person(x.get('person') or u'',
personID=analyze_imdbid(x.get('link')),
roleID=(x.get('roleID') or u'').split('/'),
accessSystem=self._as,
modFunct=self._modFunct)))
]
preprocessors = [
(re.compile('(<hr/>\n)(<h3>)', re.I),
r'</div>\1<div class="_imdbpy">\2'),
(re.compile('(</p>\n\n)</div>', re.I), r'\1'),
(re.compile('<h3>(.*?)</h3>', re.I), r'<h4>\1</h4>'),
(_reRolesMovie, _manageRoles),
(re.compile('(<br/> <br/>\n)(<hr/>)', re.I), r'\1</div>\2')
]
def postprocess_data(self, data):
# A bit extreme?
if not 'series title' in data: return {}
if not 'series movieID' in data: return {}
stitle = data['series title'].replace('- Episode list', '')
stitle = stitle.replace('- Episodes list', '')
stitle = stitle.replace('- Episode cast', '')
stitle = stitle.replace('- Episodes cast', '')
stitle = stitle.strip()
if not stitle: return {}
seriesID = data['series movieID']
if seriesID is None: return {}
series = Movie(title=stitle, movieID=str(seriesID),
accessSystem=self._as, modFunct=self._modFunct)
nd = {}
for key in data.keys():
if key.startswith('filter-season-') or key.startswith('season-'):
season_key = key.replace('filter-season-', '').replace('season-', '')
                try: season_key = int(season_key)
                except (TypeError, ValueError): pass
nd[season_key] = {}
ep_counter = 1
for episode in data[key]:
if not episode: continue
episode_key = episode.get('episode')
if episode_key is None: continue
if not isinstance(episode_key, int):
episode_key = ep_counter
ep_counter += 1
cast_key = 'Season %s, Episode %s:' % (season_key,
episode_key)
if data.has_key(cast_key):
cast = data[cast_key]
for i in xrange(len(cast)):
cast[i].billingPos = i + 1
episode['cast'] = cast
episode['episode of'] = series
nd[season_key][episode_key] = episode
if len(nd) == 0:
return {}
return {'episodes': nd}
class DOMHTMLEpisodesCastParser(DOMHTMLEpisodesParser):
"""Parser for the "episodes cast" page of a given movie.
The page should be provided as a string, as taken from
the akas.imdb.com server. The final result will be a
dictionary, with a key for every relevant section.
Example:
        eparser = DOMHTMLEpisodesCastParser()
result = eparser.parse(episodes_html_string)
"""
kind = 'episodes cast'
_episodes_path = "..//h4"
_oad_path = "./following-sibling::b[1]/text()"
class DOMHTMLFaqsParser(DOMParserBase):
"""Parser for the "FAQ" page of a given movie.
The page should be provided as a string, as taken from
the akas.imdb.com server. The final result will be a
dictionary, with a key for every relevant section.
Example:
fparser = DOMHTMLFaqsParser()
result = fparser.parse(faqs_html_string)
"""
_defGetRefs = True
# XXX: bsoup and lxml don't match (looks like a minor issue, anyway).
extractors = [
Extractor(label='faqs',
path="//div[@class='section']",
attrs=Attribute(key='faqs',
multi=True,
path={
'question': "./h3/a/span/text()",
'answer': "../following-sibling::div[1]//text()"
},
postprocess=lambda x: u'%s::%s' % (x.get('question').strip(),
'\n\n'.join(x.get('answer').replace(
'\n\n', '\n').strip().split('||')))))
]
preprocessors = [
(re.compile('<br/><br/>', re.I), r'||'),
(re.compile('<h4>(.*?)</h4>\n', re.I), r'||\1--'),
(re.compile('<span class="spoiler"><span>(.*?)</span></span>', re.I),
r'[spoiler]\1[/spoiler]')
]
class DOMHTMLAiringParser(DOMParserBase):
"""Parser for the "airing" page of a given movie.
The page should be provided as a string, as taken from
the akas.imdb.com server. The final result will be a
dictionary, with a key for every relevant section.
Example:
aparser = DOMHTMLAiringParser()
result = aparser.parse(airing_html_string)
"""
_containsObjects = True
extractors = [
Extractor(label='series title',
path="//title",
attrs=Attribute(key='series title', path="./text()",
postprocess=lambda x: \
x.replace(' - TV schedule', u''))),
Extractor(label='series id',
path="//h1/a[@href]",
attrs=Attribute(key='series id', path="./@href")),
Extractor(label='tv airings',
path="//tr[@class]",
attrs=Attribute(key='airing',
multi=True,
path={
'date': "./td[1]//text()",
'time': "./td[2]//text()",
'channel': "./td[3]//text()",
'link': "./td[4]/a[1]/@href",
'title': "./td[4]//text()",
'season': "./td[5]//text()",
},
postprocess=lambda x: {
'date': x.get('date'),
'time': x.get('time'),
'channel': x.get('channel').strip(),
'link': x.get('link'),
'title': x.get('title'),
'season': (x.get('season') or '').strip()
}
))
]
def postprocess_data(self, data):
if len(data) == 0:
return {}
seriesTitle = data['series title']
seriesID = analyze_imdbid(data['series id'])
if data.has_key('airing'):
for airing in data['airing']:
title = airing.get('title', '').strip()
if not title:
epsTitle = seriesTitle
if seriesID is None:
continue
epsID = seriesID
else:
epsTitle = '%s {%s}' % (data['series title'],
airing['title'])
epsID = analyze_imdbid(airing['link'])
e = Movie(title=epsTitle, movieID=epsID)
airing['episode'] = e
del airing['link']
del airing['title']
if not airing['season']:
del airing['season']
if 'series title' in data:
del data['series title']
if 'series id' in data:
del data['series id']
if 'airing' in data:
data['airing'] = filter(None, data['airing'])
if 'airing' not in data or not data['airing']:
return {}
return data
class DOMHTMLSynopsisParser(DOMParserBase):
"""Parser for the "synopsis" page of a given movie.
The page should be provided as a string, as taken from
the akas.imdb.com server. The final result will be a
dictionary, with a key for every relevant section.
Example:
        sparser = DOMHTMLSynopsisParser()
result = sparser.parse(synopsis_html_string)
"""
extractors = [
Extractor(label='synopsis',
path="//div[@class='display'][not(@style)]",
attrs=Attribute(key='synopsis',
path=".//text()",
postprocess=lambda x: '\n\n'.join(x.strip().split('||'))))
]
preprocessors = [
(re.compile('<br/><br/>', re.I), r'||')
]
class DOMHTMLParentsGuideParser(DOMParserBase):
"""Parser for the "parents guide" page of a given movie.
The page should be provided as a string, as taken from
the akas.imdb.com server. The final result will be a
dictionary, with a key for every relevant section.
Example:
        pgparser = DOMHTMLParentsGuideParser()
result = pgparser.parse(parentsguide_html_string)
"""
extractors = [
Extractor(label='parents guide',
group="//div[@class='section']",
group_key="./h3/a/span/text()",
group_key_normalize=lambda x: x.lower(),
path="../following-sibling::div[1]/p",
attrs=Attribute(key=None,
path=".//text()",
postprocess=lambda x: [t.strip().replace('\n', ' ')
for t in x.split('||') if t.strip()]))
]
preprocessors = [
(re.compile('<br/><br/>', re.I), r'||')
]
def postprocess_data(self, data):
data2 = {}
for key in data:
if data[key]:
data2[key] = data[key]
if not data2:
return {}
return {'parents guide': data2}
_OBJECTS = {
'movie_parser': ((DOMHTMLMovieParser,), None),
'plot_parser': ((DOMHTMLPlotParser,), None),
'movie_awards_parser': ((DOMHTMLAwardsParser,), None),
'taglines_parser': ((DOMHTMLTaglinesParser,), None),
'keywords_parser': ((DOMHTMLKeywordsParser,), None),
'crazycredits_parser': ((DOMHTMLCrazyCreditsParser,), None),
'goofs_parser': ((DOMHTMLGoofsParser,), None),
'alternateversions_parser': ((DOMHTMLAlternateVersionsParser,), None),
'trivia_parser': ((DOMHTMLTriviaParser,), None),
'soundtrack_parser': ((DOMHTMLSoundtrackParser,), {'kind': 'soundtrack'}),
'quotes_parser': ((DOMHTMLQuotesParser,), None),
'releasedates_parser': ((DOMHTMLReleaseinfoParser,), None),
'ratings_parser': ((DOMHTMLRatingsParser,), None),
'officialsites_parser': ((DOMHTMLOfficialsitesParser,), None),
'externalrev_parser': ((DOMHTMLOfficialsitesParser,),
{'kind': 'external reviews'}),
'newsgrouprev_parser': ((DOMHTMLOfficialsitesParser,),
{'kind': 'newsgroup reviews'}),
'misclinks_parser': ((DOMHTMLOfficialsitesParser,),
{'kind': 'misc links'}),
'soundclips_parser': ((DOMHTMLOfficialsitesParser,),
{'kind': 'sound clips'}),
'videoclips_parser': ((DOMHTMLOfficialsitesParser,),
{'kind': 'video clips'}),
'photosites_parser': ((DOMHTMLOfficialsitesParser,),
{'kind': 'photo sites'}),
'connections_parser': ((DOMHTMLConnectionParser,), None),
'tech_parser': ((DOMHTMLTechParser,), None),
'business_parser': ((DOMHTMLTechParser,),
{'kind': 'business', '_defGetRefs': 1}),
'literature_parser': ((DOMHTMLTechParser,), {'kind': 'literature'}),
'locations_parser': ((DOMHTMLLocationsParser,), None),
'rec_parser': ((DOMHTMLRecParser,), None),
'news_parser': ((DOMHTMLNewsParser,), None),
'episodes_parser': ((DOMHTMLEpisodesParser,), None),
'season_episodes_parser': ((DOMHTMLSeasonEpisodesParser,), None),
'episodes_cast_parser': ((DOMHTMLEpisodesCastParser,), None),
'eprating_parser': ((DOMHTMLEpisodesRatings,), None),
'movie_faqs_parser': ((DOMHTMLFaqsParser,), None),
'airing_parser': ((DOMHTMLAiringParser,), None),
'synopsis_parser': ((DOMHTMLSynopsisParser,), None),
'parentsguide_parser': ((DOMHTMLParentsGuideParser,), None)
}
| MosheBerman/brisket-mashup | source/libraries/IMDbPY-4.9/imdb/parser/http/movieParser.py | Python | mit | 78,655 | ["Brian"] | a84860574cfeb8f37e5c274b6c60fb1a7c40706d76b4de782c6a977510023b5e |
#/* FILE INFORMATION
#** Based mostly on the traub91proto.g by Dave Beeman
#** Main difference is addition of Glu and NMDA channels
#** The 1991 Traub set of voltage and concentration dependent channels
#** Implemented as tabchannels by : Dave Beeman
#** R.D.Traub, R. K. S. Wong, R. Miles, and H. Michelson
#** Journal of Neurophysiology, Vol. 66, p. 635 (1991)
#**
#** This file depends on functions and constants defined in defaults.g
#** As it is also intended as an example of the use of the tabchannel
#** object to implement concentration dependent channels, it has extensive
#** comments. Note that the original units used in the paper have been
#** converted to SI (MKS) units. Also, we define the ionic equilibrium
#** potentials relative to the resting potential, EREST_ACT. In the
#** paper, this was defined to be zero. Here, we use -0.060 volts, the
#** measured value relative to the outside of the cell.
#*/
#/* November 1999 update for GENESIS 2.2: Previous versions of this file used
# a combination of a table, tabgate, and vdep_channel to implement the
# Ca-dependent K Channel - K(C). This new version uses the new tabchannel
# "instant" field, introduced in GENESIS 2.2, to implement an
# "instantaneous" gate for the multiplicative Ca-dependent factor in the
# conductance. This allows these channels to be used with the fast
# hsolve chanmodes > 1.
#*/
# Apr 2012 update for pymoose. Converted to equivalent MOOSE funcs.
import moose
import numpy as np
import math
#CONSTANTS
global EK
global SOMA_A
global EREST_ACT
global ENA
global ECA
EREST_ACT = -0.060 #/* hippocampal cell resting potl */
ENA = 0.115 + EREST_ACT #// 0.055
EK = -0.015 + EREST_ACT #// -0.075
ECA = 0.140 + EREST_ACT #// 0.080
SOMA_A = 3.320e-9 #// soma area in square meters
CA_SCALE = 25000 # Ratio of Traub units to mM. 250::0.01
#/*
#For these channels, the maximum channel conductance (Gbar) has been
#calculated using the CA3 soma channel conductance densities and soma
#area. Typically, the functions which create these channels will be used
#to create a library of prototype channels. When the cell reader creates
#copies of these channels in various compartments, it will set the actual
#value of Gbar by calculating it from the cell parameter file.
#*/
#//========================================================================
#// Tabulated Ca Channel
#//========================================================================
def make_Ca( name ):
if moose.exists( '/library/' + name):
return
Ca = moose.HHChannel( '/library/' + name )
Ca.Ek = ECA
Ca.Gbar = 40 * SOMA_A
Ca.Gk = 0
Ca.Xpower = 2
Ca.Ypower = 1
Ca.Zpower = 0
xgate = moose.element( Ca.path + '/gateX' )
    # setupAlpha parameter order: the five alpha params (A, B, C, D, F),
    # the five beta params, then xdivs, xmin, xmax.
    xA = np.array([1.6e3, 0, 1.0, -1.0 * (0.065 + EREST_ACT), -0.01389,
                   -20e3 * (0.0511 + EREST_ACT), 20e3, -1.0,
                   -1.0 * (0.0511 + EREST_ACT), 5.0e-3,
                   3000, -0.1, 0.05])
# xgate.min = -0.1
# xgate.max = 0.05
# xgate.divs = 3000
#// Converting Traub's expressions for the gCa/s alpha and beta functions
#// to SI units and entering the A, B, C, D and F parameters, we get:
# xgate.alpha( 1.6e3, 0, 1.0, -1.0 * (0.065 + EREST_ACT), -0.01389 )
# xgate.beta( -20e3 * (0.0511 + EREST_ACT), 20e3, -1.0, -1.0 * (0.0511 + EREST_ACT), 5.0e-3 )
#xgate.setupAlpha( xA )
xgate.alphaParms = xA
# The Y gate (gCa/r) is not quite of this form. For V > EREST_ACT, alpha =
# 5*{exp({-50*(V - EREST_ACT)})}. Otherwise, alpha = 5. Over the entire
# range, alpha + beta = 5. To create the Y_A and Y_B tables, we use some
# of the pieces of the setupalpha function.
ygate = moose.element( Ca.path + '/gateY' )
ygate.min = -0.1
ygate.max = 0.05
ygate.divs = 3000
yA = np.zeros( (ygate.divs + 1), dtype=float)
yB = np.zeros( (ygate.divs + 1), dtype=float)
#Fill the Y_A table with alpha values and the Y_B table with (alpha+beta)
dx = (ygate.max - ygate.min)/ygate.divs
x = ygate.min
for i in range( ygate.divs + 1 ):
if ( x > EREST_ACT):
yA[i] = 5.0 * math.exp( -50 * (x - EREST_ACT) )
else:
yA[i] = 0.0
#yB[i] = 6.0 - yA[i]
yB[i] = 5.0
x += dx
ygate.tableA = yA
ygate.tableB = yB
# Tell the cell reader that the current from this channel must be fed into
# the Ca_conc pool of calcium.
addmsg1 = moose.Mstring( Ca.path + '/addmsg1' )
addmsg1.value = '. IkOut ../Ca_conc current'
    # In some compartments we have an NMDA_Ca_conc object to put the
    # current into.
addmsg2 = moose.Mstring( Ca.path + '/addmsg2' )
addmsg2.value = '. IkOut ../NMDA_Ca_conc current'
# Here we put in an addmsg command for nernst objects, if any.
addmsg3 = moose.Mstring( Ca.path + '/addmsg3' )
addmsg3.value = '../Ca_conc/nernst Eout . setEk'
# As we typically use the cell reader to create copies of these prototype
#elements in one or more compartments, we need some way to be sure that the
#needed messages are established. Although the cell reader has enough
#information to create the messages which link compartments to their channels
#and to other adjacent compartments, it must be provided with the information
#needed to establish additional messages. This is done by placing the
#message string in a user-defined field of one of the elements which is
#involved in the message. The cell reader recognizes the added object names
#"addmsg1", "addmsg2", etc. as indicating that they are to be
#evaluated and used to set up messages. The paths are relative to the
#element which contains the message string in its added field. Thus,
#"../Ca_conc" refers to the sibling element Ca_conc and "."
#refers to the Ca element itself.
#/*************************************************************************
#Next, we need an element to take the Calcium current calculated by the Ca
#channel and convert it to the Ca concentration. The "Ca_concen" object
#solves the equation dC/dt = B*I_Ca - C/tau, and sets Ca = Ca_base + C. As
#it is easy to make mistakes in units when using this Calcium diffusion
#equation, the units used here merit some discussion.
#With Ca_base = 0, this corresponds to Traub's diffusion equation for
#concentration, except that the sign of the current term here is positive, as
#GENESIS uses the convention that I_Ca is the current flowing INTO the
#compartment through the channel. In SI units, the concentration is usually
#expressed in moles/m^3 (which equals millimoles/liter), and the units of B
#are chosen so that B = 1/(ion_charge * Faraday * volume). Current is
#expressed in amperes and one Faraday = 96487 coulombs. However, in this
#case, Traub expresses the concentration in arbitrary units, current in
#microamps and uses tau = 13.33 msec. If we use the same concentration units,
#but express current in amperes and tau in seconds, our B constant is then
#10^12 times the constant (called "phi") used in the paper. The actual value
#used will typically be determined by the cell reader from the cell
#parameter file. However, for the prototype channel we will use Traub's
#corrected value for the soma. (An error in the paper gives it as 17,402
#rather than 17.402.) In our units, this will be 17.402e12.
#*************************************************************************/
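# As a sanity check of the SI form B = 1/(ion_charge * Faraday * volume):
# for a hypothetical 1 um^3 (1e-18 m^3) shell and a doubly charged ion,
#   B_si = 1.0 / (2 * 96487.0 * 1e-18)   # ~5.18e12
# The prototype below instead keeps Traub's scaled constant, 17.402e12.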
#//========================================================================
#// Ca conc
#//========================================================================
def make_Ca_conc( name ):
if moose.exists( '/library/' + name ):
return
conc = moose.CaConc( '/library/tempName' )
conc.name = name
conc.tau = 0.013333 # sec
conc.B = 17.402e12 # Curr to conc conversion for soma
conc.Ca_base = 0.00000
#This Ca_concen element should receive a message from any calcium channels
# with the current going through the channel. Here we have this specified
# in the Ca channel, with the idea that more than one channel might
# contribute Ca ions to this calcium pool. In the original GENESIS file
# this was specified here in make_Ca_conc.
#========================================================================
# Calcium channel including Nernst potential and calcium pool
#========================================================================
def make_Ca_conc_with_Nernst( name ):
if moose.exists( '/library/' + name ):
return
make_Ca_conc( name )
Ca_conc = moose.element( '/library/' + name )
Ca_conc.Ca_base = 0.0001
nernst = moose.Nernst( '/library/' + name + '/nernst' )
nernst.Temperature = 300
nernst.valence = 2
nernst.Cout = 1.5 # 1.5 mM
moose.connect( Ca_conc, "concOut", nernst, 'ci' )
#addmsg1 = moose.Mstring( Ca_conc.path + '/addmsg1' )
#addmsg1.value = '. concOut nernst ci'
#moose.connect( nernst, "Eout", VGCC, "setEk" )
#moose.connect( Ca_conc, "concOut", nernst, 'ci' )
#========================================================================
# Tabulated Ca-dependent K AHP Channel
#========================================================================
# This is a tabchannel which gets the calcium concentration from Ca_conc
# in order to calculate the activation of its Z gate. It is set up much
# like the Ca channel, except that the A and B tables have values which are
# functions of concentration, instead of voltage.
def make_K_AHP( name ):
if moose.exists( '/library/' + name ):
return
K_AHP = moose.HHChannel( '/library/' + name )
K_AHP.Ek = EK # V
K_AHP.Gbar = 8 * SOMA_A # S
K_AHP.Gk = 0 # S
K_AHP.Xpower = 0
K_AHP.Ypower = 0
K_AHP.Zpower = 1
zgate = moose.element( K_AHP.path + '/gateZ' )
xmax = 500.0
zgate.min = 0
zgate.max = xmax
zgate.divs = 3000
zA = np.zeros( (zgate.divs + 1), dtype=float)
zB = np.zeros( (zgate.divs + 1), dtype=float)
dx = (zgate.max - zgate.min)/zgate.divs
x = zgate.min
for i in range( zgate.divs + 1 ):
zA[i] = min( 0.02 * CA_SCALE * x, 10 )
zB[i] = 1.0
x = x + dx
zgate.tableA = zA
zgate.tableB = zB
addmsg1 = moose.Mstring( K_AHP.path + '/addmsg1' )
addmsg1.value = '../Ca_conc concOut . concen'
# Use an added field to tell the cell reader to set up a message from the
# Ca_Conc with concentration info, to the current K_AHP object.
#//========================================================================
#// Ca-dependent K Channel - K(C) - (vdep_channel with table and tabgate)
#//========================================================================
#The expression for the conductance of the potassium C-current channel has a
#typical voltage and time dependent activation gate, where the time dependence
#arises from the solution of a differential equation containing the rate
#parameters alpha and beta. It is multiplied by a function of calcium
#concentration that is given explicitly rather than being obtained from a
#differential equation. Therefore, we need a way to multiply the activation
#by a concentration dependent value which is determined from a lookup table.
#This is accomplished by using the Z gate with the new tabchannel "instant"
#field, introduced in GENESIS 2.2, to implement an "instantaneous" gate for
#the multiplicative Ca-dependent factor in the conductance.
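# A note on the flag value used below (my reading of the MOOSE convention,
# stated as an assumption): "instant" is a bit mask over the gates, with
# bit 0 = X, bit 1 = Y and bit 2 = Z, so 4 (binary 100) makes only the
# Z gate instantaneous.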
def make_K_C( name ):
if moose.exists( '/library/' + name ):
return
K_C = moose.HHChannel( '/library/' + name )
K_C.Ek = EK # V
K_C.Gbar = 100.0 * SOMA_A # S
K_C.Gk = 0 # S
K_C.Xpower = 1
K_C.Zpower = 1
K_C.instant = 4 # Binary 100: only the Z gate is instantaneous.
K_C.useConcentration = 1
# Now make a X-table for the voltage-dependent activation parameter.
xgate = moose.element( K_C.path + '/gateX' )
xgate.min = -0.1
xgate.max = 0.05
xgate.divs = 3000
xA = np.zeros( (xgate.divs + 1), dtype=float)
xB = np.zeros( (xgate.divs + 1), dtype=float)
dx = (xgate.max - xgate.min)/xgate.divs
x = xgate.min
for i in range( xgate.divs + 1 ):
alpha = 0.0
beta = 0.0
if (x < EREST_ACT + 0.05):
alpha = math.exp( 53.872 * (x - EREST_ACT) - 0.66835 ) / 0.018975
beta = 2000* (math.exp ( (EREST_ACT + 0.0065 - x)/0.027)) - alpha
else:
alpha = 2000 * math.exp( ( EREST_ACT + 0.0065 - x)/0.027 )
beta = 0.0
xA[i] = alpha
xB[i] = alpha + beta
x = x + dx
xgate.tableA = xA
xgate.tableB = xB
# Create a table for the function of concentration, allowing a
# concentration range of 0 to 150 (xmax below), with 3000 divisions. This is done
# using the Z gate, which can receive a CONCEN message. By using
# the "instant" flag, the A and B tables are evaluated as lookup tables,
# rather than being used in a differential equation.
zgate = moose.element( K_C.path + '/gateZ' )
zgate.min = 0.0
xmax = 150.0
zgate.max = xmax
zgate.divs = 3000
zA = np.zeros( (zgate.divs + 1), dtype=float)
zB = np.zeros( (zgate.divs + 1), dtype=float)
dx = ( zgate.max - zgate.min)/ zgate.divs
x = zgate.min
#CaScale = 100000.0 / 250.0e-3
for i in range( zgate.divs + 1 ):
zA[i] = min( 1000.0, x * CA_SCALE / (250 * xmax ) )
zB[i] = 1000.0
x += dx
zgate.tableA = zA
zgate.tableB = zB
# Now we need to provide for messages that link to external elements.
# The message that sends the Ca concentration to the Z gate tables is stored
# in an added field of the channel, so that it may be found by the cell
# reader.
addmsg1 = moose.Mstring( K_C.path + '/addmsg1' )
addmsg1.value = '../Ca_conc concOut . concen'
# The remaining channels are straightforward tabchannel implementations
#/========================================================================
#/ Tabchannel Na Hippocampal cell channel
#/========================================================================
def make_Na( name ):
if moose.exists( '/library/' + name ):
return
Na = moose.HHChannel( '/library/' + name )
Na.Ek = ENA # V
Na.Gbar = 300 * SOMA_A # S
Na.Gk = 0 # S
Na.Xpower = 2
Na.Ypower = 1
Na.Zpower = 0
xgate = moose.element( Na.path + '/gateX' )
xA = np.array( [ 320e3 * (0.0131 + EREST_ACT),
-320e3, -1.0, -1.0 * (0.0131 + EREST_ACT), -0.004,
-280e3 * (0.0401 + EREST_ACT), 280e3, -1.0,
-1.0 * (0.0401 + EREST_ACT), 5.0e-3,
3000, -0.1, 0.05 ] )
xgate.alphaParms = xA
#xgate.alpha( 320e3 * (0.0131 + EREST_ACT), -320e3, -1.0, -1.0 * (0.0131 + EREST_ACT), -0.004 )
#xgate.beta( -280e3 * (0.0401 + EREST_ACT), 280e3, -1.0, -1.0 * (0.0401 + EREST_ACT), 5.0e-3 )
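# The 13 numbers in alphaParms follow the GENESIS/MOOSE setupalpha layout,
# consistent with the commented alpha()/beta() calls above:
#   [A_alpha, B_alpha, C_alpha, D_alpha, F_alpha,
#    A_beta, B_beta, C_beta, D_beta, F_beta,
#    divs, min, max]
# with each rate evaluated as (A + B*V) / (C + exp((V + D)/F)).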
ygate = moose.element( Na.path + '/gateY' )
yA = np.array( [ 128.0, 0.0, 0.0, -1.0 * (0.017 + EREST_ACT), 0.018,
4.0e3, 0.0, 1.0, -1.0 * (0.040 + EREST_ACT), -5.0e-3,
3000, -0.1, 0.05 ] )
ygate.alphaParms = yA
#ygate.alpha( 128.0, 0.0, 0.0, -1.0 * (0.017 + EREST_ACT), 0.018 )
#ygate.beta( 4.0e3, 0.0, 1.0, -1.0 * (0.040 + EREST_ACT), -5.0e-3 )
#========================================================================
# Tabchannel K(DR) Hippocampal cell channel
#========================================================================
def make_K_DR( name ):
if moose.exists( '/library/' + name ):
return
K_DR = moose.HHChannel( '/library/' + name )
K_DR.Ek = EK # V
K_DR.Gbar = 150 * SOMA_A # S
K_DR.Gk = 0 # S
K_DR.Xpower = 1
K_DR.Ypower = 0
K_DR.Zpower = 0
xgate = moose.element( K_DR.path + '/gateX' )
xA = np.array( [ 16e3 * (0.0351 + EREST_ACT),
-16e3, -1.0, -1.0 * (0.0351 + EREST_ACT), -0.005,
250, 0.0, 0.0, -1.0 * (0.02 + EREST_ACT), 0.04,
3000, -0.1, 0.05 ] )
xgate.alphaParms = xA
#xgate.alpha( 16e3 * (0.0351 + EREST_ACT), -16e3, -1.0, -1.0 * (0.0351 + EREST_ACT), -0.005 )
#xgate.beta( 250, 0.0, 0.0, -1.0 * (0.02 + EREST_ACT), 0.04 )
#========================================================================
# Tabchannel K(A) Hippocampal cell channel
#========================================================================
def make_K_A( name ):
if moose.exists( '/library/' + name ):
return
K_A = moose.HHChannel( '/library/' + name )
K_A.Ek = EK # V
K_A.Gbar = 50 * SOMA_A # S
K_A.Gk = 0 # S
K_A.Xpower = 1
K_A.Ypower = 1
K_A.Zpower = 0
xgate = moose.element( K_A.path + '/gateX' )
xA = np.array( [ 20e3 * (0.0131 + EREST_ACT),
-20e3, -1.0, -1.0 * (0.0131 + EREST_ACT), -0.01,
-17.5e3 * (0.0401 + EREST_ACT),
17.5e3, -1.0, -1.0 * (0.0401 + EREST_ACT), 0.01,
3000, -0.1, 0.05 ] )
xgate.alphaParms = xA
# xgate.alpha( 20e3 * (0.0131 + EREST_ACT), -20e3, -1.0, -1.0 * (0.0131 + EREST_ACT), -0.01 )
# xgate.beta( -17.5e3 * (0.0401 + EREST_ACT), 17.5e3, -1.0, -1.0 * (0.0401 + EREST_ACT), 0.01 )
ygate = moose.element( K_A.path + '/gateY' )
yA = np.array( [ 1.6, 0.0, 0.0, 0.013 - EREST_ACT, 0.018,
50.0, 0.0, 1.0, -1.0 * (0.0101 + EREST_ACT), -0.005,
3000, -0.1, 0.05 ] )
ygate.alphaParms = yA
# ygate.alpha( 1.6, 0.0, 0.0, 0.013 - EREST_ACT, 0.018 )
# ygate.beta( 50.0, 0.0, 1.0, -1.0 * (0.0101 + EREST_ACT), -0.005 )
#========================================================================
# SynChan: Glu receptor
#========================================================================
def make_glu( name ):
if moose.exists( '/library/' + name ):
return
glu = moose.SynChan( '/library/' + name )
glu.Ek = 0.0
glu.tau1 = 2.0e-3
glu.tau2 = 9.0e-3
glu.Gbar = 40 * SOMA_A
sh = moose.SimpleSynHandler( glu.path + '/sh' )
moose.connect( sh, 'activationOut', glu, 'activation' )
sh.numSynapses = 1
sh.synapse[0].weight = 1
#========================================================================
# SynChan: GABA receptor
#========================================================================
def make_GABA( name ):
if moose.exists( '/library/' + name ):
return
GABA = moose.SynChan( '/library/' + name )
GABA.Ek = EK + 10.0e-3
GABA.tau1 = 4.0e-3
GABA.tau2 = 9.0e-3
GABA.Gbar = 40 * SOMA_A
sh = moose.SimpleSynHandler( GABA.path + '/sh' )
moose.connect( sh, 'activationOut', GABA, 'activation' )
sh.numSynapses = 1
sh.synapse[0].weight = 1
#========================================================================
# SynChan: NMDA receptor
#========================================================================
def make_NMDA( name ):
if moose.exists( '/library/' + name ):
return
NMDA = moose.NMDAChan( '/library/' + name )
NMDA.Ek = 0.0
NMDA.tau1 = 20.0e-3
NMDA.tau2 = 20.0e-3
NMDA.Gbar = 5 * SOMA_A
NMDA.CMg = 1.2 # [Mg]ext in mM
NMDA.KMg_A = 1.0/0.28
NMDA.KMg_B = 1.0/62
NMDA.temperature = 300 # Temperature in Kelvin.
NMDA.extCa = 1.5 # [Ca]ext in mM
NMDA.intCa = 0.00008 # [Ca]int in mM
NMDA.intCaScale = 1 # Scale factor from elec Ca units to mM
NMDA.intCaOffset = 0.00008 # Basal [Ca]int in mM
NMDA.condFraction = 0.02 # Fraction of conductance due to Ca
addmsg1 = moose.Mstring( NMDA.path + '/addmsg1' )
addmsg1.value = '. ICaOut ../Ca_conc current'
addmsg2 = moose.Mstring( NMDA.path + '/addmsg2' )
addmsg2.value = '../Ca_conc concOut . assignIntCa'
sh = moose.SimpleSynHandler( NMDA.path + '/sh' )
moose.connect( sh, 'activationOut', NMDA, 'activation' )
sh.numSynapses = 1
sh.synapse[0].weight = 1
#========================================================================
# The Ca_NMDA channel is a subset of the NMDA channel that carries Ca.
# It is identical to above, except that the Ek for Ca is much higher:
# 0.08 V from the consts at the top of this file.
# This is about the reversal potential for 1 uM Ca_in, 2 mM out.
# Also we do not want this channel to contribute to the current,
# which is already accounted for in the main channel. So there is
# no CHANNEL message to the parent compartment.
# I would like to have used the Nernst to do the Ca potential, and
# Synchans now take Ek messages but I haven't yet used this.
#========================================================================
def make_Ca_NMDA( name ):
if moose.exists( '/library/' + name ):
return
Ca_NMDA = moose.NMDAChan( '/library/' + name )
Ca_NMDA.Ek = 0.0
Ca_NMDA.tau1 = 20.0e-3
Ca_NMDA.tau2 = 20.0e-3
Ca_NMDA.Gbar = 5 * SOMA_A
Ca_NMDA.CMg = 1.2 # [Mg]ext in mM
Ca_NMDA.KMg_A = 1.0/0.28
Ca_NMDA.KMg_B = 1.0/62
Ca_NMDA.temperature = 300 # Temperature in Kelvin.
Ca_NMDA.extCa = 1.5 # [Ca]ext in mM
Ca_NMDA.intCa = 0.00008 # [Ca]int in mM
Ca_NMDA.intCaScale = 1 # Scale factor from elec Ca units to mM
Ca_NMDA.intCaOffset = 0.00008 # Basal [Ca]int in mM
Ca_NMDA.condFraction = 0.02 # Fraction of conductance due to Ca
addmsg1 = moose.Mstring( Ca_NMDA.path + '/addmsg1' )
addmsg1.value = '. ICaOut ../Ca_conc current'
addmsg2 = moose.Mstring( Ca_NMDA.path + '/addmsg2' )
addmsg2.value = '../Ca_conc concOut . assignIntCa'
sh = moose.SimpleSynHandler( Ca_NMDA.path + '/sh' )
moose.connect( sh, 'activationOut', Ca_NMDA, 'activation' )
sh.numSynapses = 1
sh.synapse[0].weight = 1
'''
if moose.exists( 'Ca_NMDA' ):
return
Ca_NMDA = moose.SynChan( 'Ca_NMDA' )
Ca_NMDA.Ek = ECA
Ca_NMDA.tau1 = 20.0e-3
Ca_NMDA.tau2 = 20.0e-3
Ca_NMDA.Gbar = 5 * SOMA_A
block = moose.MgBlock( '/library/Ca_NMDA/block' )
block.CMg = 1.2 # [Mg] in mM
block.Zk = 2
block.KMg_A = 1.0/0.28
block.KMg_B = 1.0/62
moose.connect( Ca_NMDA, 'channelOut', block, 'origChannel', 'OneToOne' )
addmsg1 = moose.Mstring( '/library/Ca_NMDA/addmsg1' )
addmsg1.value = '.. VmOut ./block Vm'
addmsg2 = moose.Mstring( '/library/Ca_NMDA/addmsg2' )
addmsg2.value = './block IkOut ../NMDA_Ca_conc current'
# The original model has the Ca current also coming here.
sh = moose.SimpleSynHandler( 'Ca_NMDA/sh' )
moose.connect( sh, 'activationOut', Ca_NMDA, 'activation' )
sh.numSynapses = 1
sh.synapse[0].weight = 1
'''
#=====================================================================
# SPIKE DETECTOR
#=====================================================================
#//addmsg axon/spike axon BUFFER name
def make_axon( name ):
if moose.exists( '/library/' + name ):
return
axon = moose.SpikeGen( '/library/' + name )
axon.threshold = -40e-3 # V
axon.abs_refract = 10e-3 # sec
|
BhallaLab/moose-examples
|
tutorials/Rdesigneur/chans/proto22.py
|
Python
|
gpl-2.0
| 23,141
|
[
"MOOSE"
] |
84cf1562771542e6fe95c0b877c2e2ccecf05ca92d4c53e78bbde6b0b440b766
|
"""Parsers related to tinker file formats from Molden.
"""
import re
from .. import Molecule, Atom, Bond # Bond is needed to build the bond list below
class TinkerXyzDataParser(object):
def __init__(self, filename):
self.filename = filename
def get_avail_properties(self):
return ["geometry"]
def get_property(self, prop):
if prop == "geometry":
return self._parse_geom()
def _parse_geom(self):
#A very big regex to parse and group the input file
r = re.compile(('\s*(\d+)\s*(\w+)\s*(-?\d+\.\d+)\s*'
'\s*(-?\d+\.\d+)\s*(-?\d+\.\d+)\s*(\d+)\s*(.*)'))
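#Example of a line this regex is meant to match (illustrative values):
#  "  1  C   0.000000  0.000000  0.000000  2  2 3"
#groups: atom id, element type, x, y, z, tinker atom type, bonded atom ids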
f = open(self.filename,'r')
input = f.readlines()
input.pop(0) # Removing the first comment line
f.close()
atoms=[]
#BUILDING ATOM OBJECTS
#generate a list of instances of Atom class
for i, line in enumerate(input):
match=r.search(line)
if not match:
raise Exception("Error parsing line %d in file %s\n>>> %s"%(i, self.filename, line[:-1]))
id=int(match.group(1))
type=match.group(2)
coords=[match.group(3),match.group(4),match.group(5)]
coords = [float(s) for s in coords]
atoms.append(Atom(type,coords, id))
#PARSING THE FILE TO GET THE COUPLES BONDED
#initialize bonds' list and compile the regex for the atom's id
couples = []
atom_id = re.compile('\s*(\d+)\s*')
#looping the input's line
for el in input:
#match each line with the first big regex
line = r.search(el)
if line:
#line.group(1) is the number of the current atom in
#the input line
current_atom_id = line.group(1)
#line.group(7) holds the ids of the atoms to which
#the current one is bonded
bounded_atoms = line.group(7)
#bounded_id.group(1) is one of the bonded atom ids returned
#by finditer()
couples += [[int(current_atom_id),int(bounded_id.group(1))]
for bounded_id in re.finditer(atom_id,bounded_atoms)
if int(current_atom_id) < int(bounded_id.group(1))]
#BUILDING BOND OBJECTS
bonds = []
#looping over the couples previously determined
for couple in couples:
#looping over the atoms to match their id with the couple
for atom in atoms:
if couple[0]==atom.id:
atom1 = atom
if couple[1]==atom.id:
atom2 = atom
bonds += [Bond(atom1,atom2)]
break
return Molecule(atoms, bonds)
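#Usage sketch (illustrative; 'benzene.txt' is a hypothetical Tinker xyz file):
#  parser = TinkerXyzDataParser('benzene.txt')
#  if 'geometry' in parser.get_avail_properties():
#      mol = parser.get_property('geometry') # -> Molecule with atoms and bonds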
|
chemlab/chemlab
|
chemlab/io/handlers/tinker.py
|
Python
|
gpl-3.0
| 2,950
|
[
"TINKER"
] |
680d4ad816762ffdab3291db10f27ec9344f8dfd65fade9aa3bd33ba01739fa5
|
import utilities
from ..automation import CommandSequence
from ..automation import TaskManager
from openwpmtest import OpenWPMTest
expected_lso_content_a = [
1, # visit id
u'localtest.me',
u'FlashCookie.sol',
u'localtest.me/FlashCookie.sol',
u'test_key',
u'REPLACEME']
expected_lso_content_b = [
2, # visit id
u'localtest.me',
u'FlashCookie.sol',
u'localtest.me/FlashCookie.sol',
u'test_key',
u'REPLACEME']
expected_js_cookie = (
1, # visit id
u'%s' % utilities.BASE_TEST_URL_DOMAIN,
u'test_cookie',
u'Test-0123456789',
u'%s' % utilities.BASE_TEST_URL_DOMAIN,
u'/')
class TestStorageVectors(OpenWPMTest):
""" Runs some basic tests to check that the saving of
storage vectors (i.e. Flash LSOs, profile cookies) works.
NOTE: These tests are very basic and should be expanded
on to check for completeness and correctness.
"""
def get_config(self, data_dir=""):
return self.get_test_config(data_dir)
def test_flash_cookies(self):
""" Check that some Flash LSOs are saved and
are properly keyed in db."""
# Run the test crawl
manager_params, browser_params = self.get_config()
browser_params[0]['disable_flash'] = False
manager = TaskManager.TaskManager(manager_params, browser_params)
# Get a site we know sets Flash cookies and visit it twice
lso_value_a = utilities.rand_str(8)
expected_lso_content_a[5] = lso_value_a # we'll expect this to be present
qry_str = '?lso_test_key=%s&lso_test_value=%s' % ("test_key",
lso_value_a)
test_url_a = utilities.BASE_TEST_URL + '/lso/setlso.html' + qry_str
cs = CommandSequence.CommandSequence(test_url_a)
cs.get(sleep=3, timeout=120)
cs.dump_flash_cookies()
manager.execute_command_sequence(cs)
lso_value_b = utilities.rand_str(8)
expected_lso_content_b[5] = lso_value_b # we'll expect this to be present
qry_str = '?lso_test_key=%s&lso_test_value=%s' % ("test_key",
lso_value_b)
test_url_b = utilities.BASE_TEST_URL + '/lso/setlso.html' + qry_str
cs = CommandSequence.CommandSequence(test_url_b)
cs.get(sleep=3, timeout=120)
cs.dump_flash_cookies()
manager.execute_command_sequence(cs)
manager.close()
# Check that some flash cookies are recorded
qry_res = utilities.query_db(manager_params['db'],
"SELECT * FROM flash_cookies")
lso_count = len(qry_res)
assert lso_count == 2
lso_content_a = list(qry_res[0][2:]) # Remove first two items
lso_content_b = list(qry_res[1][2:]) # Remove first two items
# remove randomly generated LSO directory name
# e.g. TY2FOJUG/localtest.me/Flash.sol -> localtest.me/Flash.sol
lso_content_a[3] = lso_content_a[3].split("/", 1)[-1] # remove LSO dirname
lso_content_b[3] = lso_content_b[3].split("/", 1)[-1] # remove LSO dirname
assert lso_content_a == expected_lso_content_a
assert lso_content_b == expected_lso_content_b
def test_profile_cookies(self):
""" Check that some profile cookies are saved """
# Run the test crawl
manager_params, browser_params = self.get_config()
manager = TaskManager.TaskManager(manager_params, browser_params)
# TODO update this to local test site
url = 'http://www.yahoo.com'
cs = CommandSequence.CommandSequence(url)
cs.get(sleep=3, timeout=120)
cs.dump_profile_cookies()
manager.execute_command_sequence(cs)
manager.close()
# Check that some profile cookies are recorded
qry_res = utilities.query_db(manager_params['db'],
"SELECT COUNT(*) FROM profile_cookies")
prof_cookie_count = qry_res[0][0] # first column of the single COUNT(*) row
assert prof_cookie_count > 0
def test_js_profile_cookies(self):
""" Check that profile cookies set by JS are saved """
# Run the test crawl
manager_params, browser_params = self.get_config()
manager = TaskManager.TaskManager(manager_params, browser_params)
url = utilities.BASE_TEST_URL + "/js_cookie.html"
cs = CommandSequence.CommandSequence(url)
cs.get(sleep=3, timeout=120)
cs.dump_profile_cookies()
manager.execute_command_sequence(cs)
manager.close()
# Check that the JS cookie we stored is recorded
qry_res = utilities.query_db(manager_params['db'], "SELECT * FROM profile_cookies")
assert len(qry_res) == 1 # we store only one cookie
cookies = qry_res[0] # take the first cookie
# compare URL, domain, name, value, origin, path
assert cookies[2:8] == expected_js_cookie
|
natasasdj/OpenWPM
|
test/test_storage_vectors.py
|
Python
|
gpl-3.0
| 5,128
|
[
"VisIt"
] |
cb1f7ee7e20d36a61a5293c7b5d4eba4ec21aa91691be842e90b808dadb9dd99
|
#!/usr/bin/env python3
# Copyright (C) 2020
# Max Planck Institute for Polymer Research & JGU Mainz
#
# This file is part of ESPResSo++.
#
# ESPResSo++ is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ESPResSo++ is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import unittest
import espressopp
from espressopp.tools import readxyz
import time
def generate_vl(useBuffers):
print('VERLET LIST {}USING BUFFERS'.format('NOT ' if not useBuffers else ''))
nsteps = 1
isteps = 10
#
# NOTE: For performance comparison increase isteps to 1000
#
rc = 2.5
skin = 0.4
timestep = 0.005
dt = 0.005
epsilon = 1.0
sigma = 1.0
# set temperature to None for NVE-simulations
temperature = 1.0
xyz_file = "lennard_jones_fluid_10000.xyz"
pid, type, xpos, ypos, zpos, xvel, yvel, zvel, Lx, Ly, Lz = readxyz(xyz_file)
box = (Lx, Ly, Lz)
num_particles = len(pid)
system, integrator = espressopp.standard_system.Default(box=box, rc=rc, skin=skin, dt=timestep, temperature=temperature)
props = ['id', 'type', 'mass', 'pos', 'v']
new_particles = []
for i in range(num_particles):
part = [i + 1, 0, 1.0, espressopp.Real3D(xpos[i], ypos[i], zpos[i]), espressopp.Real3D(xvel[i], yvel[i], zvel[i])]
new_particles.append(part)
if i % 1000 == 0:
system.storage.addParticles(new_particles, *props)
system.storage.decompose()
new_particles = []
system.storage.addParticles(new_particles, *props)
system.storage.decompose()
# Lennard-Jones with Verlet list
vl = espressopp.VerletList(system, cutoff = rc, useBuffers = useBuffers)
potLJ = espressopp.interaction.LennardJones(epsilon=epsilon, sigma=sigma, cutoff=rc, shift=0)
interLJ = espressopp.interaction.VerletListLennardJones(vl)
interLJ.setPotential(type1=0, type2=0, potential=potLJ)
system.addInteraction(interLJ)
# espressopp.tools.analyse.info(system, integrator)
espressopp.tools.analyse.info(system, integrator)
start_time = time.process_time()
for k in range(nsteps):
integrator.run(isteps)
espressopp.tools.analyse.info(system, integrator)
end_time = time.process_time()
espressopp.tools.analyse.final_info(system, integrator, vl, start_time, end_time)
pairs = sum(vl.getAllPairs(),[])
return pairs
def sort_pairs(pairs):
# sort each tuple
pairs = [(p[1],p[0]) if p[1]<p[0] else p for p in pairs]
# sort all tuples in list
pairs = sorted(pairs)
return pairs
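# Example (illustrative): sort_pairs([(2, 1), (1, 3)]) returns [(1, 2), (1, 3)]:
# each pair is put in low-high order first, then the whole list is sorted.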
class TestVerletListBuffer(unittest.TestCase):
def test1vl(self):
print('-'*70)
pairs1 = sort_pairs(generate_vl(False))
print('-'*70)
pairs2 = sort_pairs(generate_vl(True))
# ensure the same pairs are generated
self.assertEqual(len(pairs1), len(pairs2))
for i in range(len(pairs1)):
self.assertEqual(pairs1[i],pairs2[i])
if __name__ == "__main__":
unittest.main()
|
espressopp/espressopp
|
testsuite/verlet_list_buffer/verlet_list_buffer.py
|
Python
|
gpl-3.0
| 3,568
|
[
"ESPResSo"
] |
9bec2550f5ac7e87e3bd142ae3de76f983f8a51f5c6d6d7ad3be11a4fca9c4af
|
"""
Test helper functions.
"""
import os
import requests
from regression.pages.studio.utils import get_course_key
from regression.pages.studio import BASE_URL
from regression.pages import (
BASIC_AUTH_USERNAME, BASIC_AUTH_PASSWORD, LOGIN_EMAIL, LOGIN_PASSWORD
)
from regression.pages.lms import LMS_BASE_URL
from regression.pages.lms import LOGIN_BASE_URL as LMS_AUTH_URL
from regression.pages.studio import STUDIO_BASE_URL
from regression.pages.studio import LOGIN_BASE_URL as STUDIO_AUTH_URL
COURSE_ORG = 'COURSE_ORG'
COURSE_NUMBER = 'COURSE_NUMBER'
COURSE_RUN = 'COURSE_RUN'
COURSE_DISPLAY_NAME = 'COURSE_DISPLAY_NAME'
def get_course_info():
"""
Returns the course info of the course that we use for
the regression tests.
"""
return {
'org': os.environ.get(COURSE_ORG),
'number': os.environ.get(COURSE_NUMBER),
'run': os.environ.get(COURSE_RUN),
'display_name': os.environ.get(
COURSE_DISPLAY_NAME)
}
def get_course_display_name():
"""
Returns the display name of the course that we use for
the regression tests.
"""
return os.environ.get(COURSE_DISPLAY_NAME)
def visit_all(pages):
"""
Visit each page object in `pages` (an iterable).
"""
for page in pages:
print "Visiting: {}".format(page)
page.visit()
def get_url(url_path, course_info):
"""
Construct a URL to the page within the course.
"""
course_key = get_course_key(course_info)
return "/".join([BASE_URL, url_path, unicode(course_key)])
def get_data_locator(page):
"""
Returns:
Unique data locator for the component
"""
data_locator = page.q(css='.hd-3').attrs('id')[0]
return data_locator
def get_data_id_of_component(page):
"""
Returns:
ID for the component
"""
data_id = page.q(css='.problem-header').attrs('id')[0]
return data_id
class LoginApiBaseClass(object):
"""
Base class for login api
"""
def __init__(self):
self.login_url = None
self.session = requests.Session()
self.session.auth = (
BASIC_AUTH_USERNAME,
BASIC_AUTH_PASSWORD
)
self.payload = {
'email': LOGIN_EMAIL,
'password': LOGIN_PASSWORD,
'remember': 'false'
}
self.login_response = None
self.login_post_url = None
self.browser_get_url = None
def check_response(self, response):
"""
Check whether a response was successful. If not, raise an exception.
Arguments:
response: HTTP response object
"""
if response.status_code != 200:
raise Exception(
'API request failed with following error code: ' +
str(response.status_code)
)
def post_headers(self, x_csrf):
"""
Headers to be used in the POST requests.
Arguments:
x_csrf: Cross site request forgery protection token.
"""
return {
'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
'Accept': '*/*',
'X-Requested-With': 'XMLHttpRequest',
'Referer': self.login_url,
'X-CSRFToken': x_csrf,
}
def create_base_session(self):
"""
Create a session with the host.
"""
response = self.session.get(self.login_url)
self.check_response(response)
self.session.cookies = response.cookies
self.session.headers = self.post_headers(
response.cookies['csrftoken']
)
def login(self):
"""
Login to the stage.
"""
self.create_base_session()
response = self.session.post(self.login_post_url, data=self.payload)
self.check_response(response)
self.session.cookies = response.cookies
self.session.headers = self.post_headers(
response.cookies['csrftoken']
)
self.login_response = response
def authenticate(self, browser):
"""
Authenticate the user and pass the session to the browser.
Arguments:
browser: Browser to pass the session to.
"""
self.login()
# To make cookies effective, we have to set the
# domain of the browser the same as that of the
# cookies. To do this, just visit a page of the
# same domain.
# Cookies require the domain to be ".stage.edx.org"
# Browser will navigate to the login page, but
# no one is required to login. Once cookies become
# effective, we don't need to login.
browser.get(self.browser_get_url)
for cookie in self.session.cookies:
browser.add_cookie(
{
'name': cookie.name,
'value': cookie.value,
'path': cookie.path,
'expiry': cookie.expires
}
)
class LmsLoginApi(LoginApiBaseClass):
"""
Login api for LMS
"""
def __init__(self):
super(LmsLoginApi, self).__init__()
self.login_url = 'https://{}/{}'.format(
LMS_BASE_URL, 'login'
)
self.login_post_url = 'https://{}/{}'.format(
LMS_BASE_URL, 'user_api/v1/account/login_session/'
)
self.browser_get_url = LMS_AUTH_URL + '/dashboard'
class StudioLoginApi(LoginApiBaseClass):
"""
Login api for Studio
"""
def __init__(self):
super(StudioLoginApi, self).__init__()
self.login_url = 'https://{}/{}'.format(
STUDIO_BASE_URL, 'signin'
)
self.login_post_url = 'https://{}/{}'.format(
STUDIO_BASE_URL, 'login_post'
)
self.browser_get_url = STUDIO_AUTH_URL + '/home'
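# Usage sketch (illustrative; `browser` is assumed to be the WebDriver
# instance used by the test):
#   lms_api = LmsLoginApi()
#   lms_api.authenticate(browser) # log in via the API, hand the session over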
|
raeeschachar/edx-e2e-mirror
|
regression/tests/helpers.py
|
Python
|
agpl-3.0
| 5,864
|
[
"VisIt"
] |
8018ba289d7ca3012affbbf8a82f03693aeebfd042edf59ea48a97cfa3c105b8
|
# coding: utf-8
# Copyright (c) Pymatgen Development Team.
# Distributed under the terms of the MIT License.
import sys
import platform
from setuptools import setup, find_packages, Extension
from setuptools.command.build_ext import build_ext as _build_ext
class build_ext(_build_ext):
def finalize_options(self):
_build_ext.finalize_options(self)
# Prevent numpy from thinking it is still in its setup process:
if sys.version_info[0] >= 3:
import builtins
if hasattr(builtins, '__NUMPY_SETUP__'):
del builtins.__NUMPY_SETUP__
import importlib
import numpy
importlib.reload(numpy)
else:
import __builtin__
if hasattr(__builtin__, '__NUMPY_SETUP__'):
del __builtin__.__NUMPY_SETUP__
import imp
import numpy
imp.reload(numpy)
self.include_dirs.append(numpy.get_include())
extra_link_args = []
if sys.platform.startswith('win') and platform.machine().endswith('64'):
extra_link_args.append('-Wl,--allow-multiple-definition')
long_desc = """
.. image:: https://circleci.com/gh/materialsproject/pymatgen.svg?style=shield&circle-token=:circle-token
.. image:: https://ci.appveyor.com/api/projects/status/akdyke5jxg6gps45?svg=true
.. image:: https://anaconda.org/matsci/pymatgen/badges/downloads.svg
.. image:: https://coveralls.io/repos/github/materialsproject/pymatgen/badge.svg?branch=master
Official docs: `http://pymatgen.org <http://pymatgen.org/>`_
Pymatgen (Python Materials Genomics) is a robust, open-source Python library
for materials analysis. These are some of the main features:
1. Highly flexible classes for the representation of Element, Site, Molecule,
Structure objects.
2. Extensive input/output support, including support for VASP
(http://cms.mpi.univie.ac.at/vasp/), ABINIT (http://www.abinit.org/), CIF,
Gaussian, XYZ, and many other file formats.
3. Powerful analysis tools, including generation of phase diagrams, Pourbaix
diagrams, diffusion analyses, reactions, etc.
4. Electronic structure analyses, such as density of states and band structure.
5. Integration with the Materials Project REST API.
Pymatgen is free to use. However, we also welcome your help to improve this
library by making your own contributions. These contributions can be in the
form of additional tools or modules you develop, or feature requests and bug
reports. Please report any bugs and issues at pymatgen's `Github page
<https://github.com/materialsproject/pymatgen>`_. If you wish to be notified
of pymatgen releases, you may become a member of `pymatgen's Google Groups page
<https://groups.google.com/forum/?fromgroups#!forum/pymatgen/>`_.
Why use pymatgen?
=================
There are many materials analysis codes out there, both commercial and free,
but pymatgen offers several advantages:
1. **It is (fairly) robust.** Pymatgen is used by thousands of researchers,
and is the analysis code powering the `Materials Project`_. The analysis it
produces survives rigorous scrutiny every single day. Bugs tend to be
found and corrected quickly. Pymatgen also uses
`CircleCI <https://circleci.com>`_ and `Appveyor <https://www.appveyor.com/>`_
for continuous integration on the Linux and Windows platforms,
respectively, which ensures that every commit passes a comprehensive suite
of unittests. The coverage of the unittests can be seen at
`here <coverage/index.html>`_.
2. **It is well documented.** Fairly comprehensive documentation has been
written to help you get to grips with it quickly.
3. **It is open.** You are free to use and contribute to pymatgen. It also means
that pymatgen is continuously being improved. We will attribute any code you
contribute to any publication you specify. Contributing to pymatgen means
your research becomes more visible, which translates to greater impact.
4. **It is fast.** Many of the core numerical methods in pymatgen have been
optimized by vectorizing in numpy/scipy. This means that coordinate
manipulations are extremely fast and are in fact comparable to codes
written in other languages. Pymatgen also comes with a complete system for
handling periodic boundary conditions.
5. **It will be around.** Pymatgen is not a pet research project. It is used in
the well-established Materials Project. It is also actively being developed
and maintained by the `Materials Virtual Lab`_, the ABINIT group and many
other research groups.
With effect from version 3.0, pymatgen now supports both Python 2.7 as well
as Python 3.x.
"""
setup(
name="pymatgen",
packages=find_packages(),
version="2017.9.23",
cmdclass={'build_ext': build_ext},
setup_requires=['numpy', 'setuptools>=18.0'],
install_requires=["numpy>=1.9", "six", "requests", "ruamel.yaml>=0.15.6",
"monty>=0.9.6", "scipy>=0.14", "pydispatcher>=2.0.5",
"tabulate", "spglib>=1.9.9.44",
"matplotlib>=1.5", "palettable>=2.1.1", "sympy"],
extras_require={
':python_version == "2.7"': [
'enum34',
],
"provenance": ["pybtex"],
"pourbaix": ["pyhull>=1.5.3"],
"bandstructure": ["pyhull>=1.5.3"],
"ase": ["ase>=3.3"],
"vis": ["vtk>=6.0.0"],
"abinit": ["apscheduler==2.1.0"]},
package_data={"pymatgen.core": ["*.json"],
"pymatgen.analysis": ["*.yaml", "*.json"],
"pymatgen.analysis.chemenv.coordination_environments.coordination_geometries_files": ["*.txt", "*.json"],
"pymatgen.analysis.chemenv.coordination_environments.strategy_files": ["*.json"],
"pymatgen.io.vasp": ["*.yaml"],
"pymatgen.io.feff": ["*.yaml"],
"pymatgen.symmetry": ["*.yaml", "*.json"],
"pymatgen.entries": ["*.yaml"],
"pymatgen.structure_prediction": ["data/*.json"],
"pymatgen.vis": ["ElementColorSchemes.yaml"],
"pymatgen.command_line": ["OxideTersoffPotentials"],
"pymatgen.analysis.defects": ["*.json"],
"pymatgen.analysis.diffraction": ["*.json"],
"pymatgen.util": ["structures/*.json"]},
author="Pymatgen Development Team",
author_email="pymatgen@googlegroups.com",
maintainer="Shyue Ping Ong",
maintainer_email="ongsp@eng.ucsd.edu",
url="http://www.pymatgen.org",
license="MIT",
description="Python Materials Genomics is a robust materials "
"analysis code that defines core object representations for "
"structures and molecules with support for many electronic "
"structure codes. It is currently the core analysis code "
"powering the Materials Project "
"(https://www.materialsproject.org).",
long_description=long_desc,
keywords=["VASP", "gaussian", "ABINIT", "nwchem", "materials", "project",
"electronic", "structure", "analysis", "phase", "diagrams"],
classifiers=[
"Programming Language :: Python :: 2",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering :: Information Analysis",
"Topic :: Scientific/Engineering :: Physics",
"Topic :: Scientific/Engineering :: Chemistry",
"Topic :: Software Development :: Libraries :: Python Modules"
],
ext_modules=[Extension("pymatgen.optimization.linear_assignment",
["pymatgen/optimization/linear_assignment.c"],
extra_link_args=extra_link_args),
Extension("pymatgen.util.coord_cython",
["pymatgen/util/coord_cython.c"],
extra_link_args=extra_link_args)],
entry_points={
'console_scripts': [
'pmg = pymatgen.cli.pmg:main',
'feff_input_generation = pymatgen.cli.feff_input_generation:main',
'feff_plot_cross_section = pymatgen.cli.feff_plot_cross_section:main',
'feff_plot_dos = pymatgen.cli.feff_plot_dos:main',
'gaussian_analyzer = pymatgen.cli.gaussian_analyzer:main',
'get_environment = pymatgen.cli.get_environment:main',
'pydii = pymatgen.cli.pydii:main',
]
}
)
|
matk86/pymatgen
|
setup.py
|
Python
|
mit
| 8,781
|
[
"ABINIT",
"ASE",
"FEFF",
"Gaussian",
"NWChem",
"VASP",
"VTK",
"pymatgen"
] |
cd2aa4e0433afd9a6a6bac60787475e8d7a32a0fcc43b29b3d27c17f0be6a14a
|
import ptt_board_url as ptt
import pandas as pd
import sys
# import psycopg2 as pg
import time
start_time = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
hoturl = 'https://www.ptt.cc/bbs/hotboards.html'
hot_board = ptt.get_js_page(hoturl)
df = ptt.get_hotboard_df(hot_board)
end_time = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
print('start time : ', start_time)
print('end time : ', end_time)
# # print('\nresult_df = ', result_df)
# try:
# outputFile = 'ptt_hotboard_' + str(datetime.datetime.today().strftime('%Y-%m-%d')) + '.csv'
# df.to_csv(outputFile, sep=',', encoding='utf-8')
# except:
# print("Unexpected error:", sys.exc_info()[0])
# # connect to postgreSQL :
# pwd = input(">>> keyin your password: ")
# try:
# connect_str = " dbname = 'pttdb' user = 'amber' host = 'localhost' password = '" + pwd + "'"
# conn = pg.connect(connect_str)
# print('Prepare to write into database...')
# except:
# print("Unable to connect to the postgreSQL!")
#
# cur = conn.cursor()
#
# try:
# # prepare query
# sql1 = """INSERT INTO hotboard_article_title (
# board,
# nrec,
# mark,
# title,
# href,
# author,
# dates,
# get_time)
# VALUES (%s, %s, %s, %s, %s, %s, %s, %s) """
#
# for n in range(len(df)):
# cur.execute(sql1, (df.get_value(n, 'board'),
# df.get_value(n, 'nrec'),
# df.get_value(n, 'mark'),
# df.get_value(n, 'title'),
# df.get_value(n, 'href'),
# df.get_value(n, 'author'),
# df.get_value(n, 'dates'),
# df.get_value(n, 'get_time')))
# print('Inserted successfully!')
# except:
# print('Can not execute query!')
#
# # Make the changes to the database persistent
# conn.commit()
#
# # Close communication with the database
# cur.close()
# conn.close()
######### Other database ##################
# connect to sqlite3
# conn = sqlite3.connect('ptt_hotboard_article_title')
# connect to firebase :
# fireDBurl = "https://py-ptt.firebaseio.com/"
# fdb = firebase.FirebaseApplication(fireDBurl, None)
# connect to mysql :
# db = connector.connect(
# host = 'localhost',
# user = 'amber',
# password = 'ww211214',
# database = 'ambermysqldb'
# )
# cur = db.cursor()
##########################################
|
AmberFu/ptt_crawler
|
ptt_hotboard_crawler.py
|
Python
|
mit
| 2,638
|
[
"Amber"
] |
d543b537e4ed5763c612b5d5e9d00b79c89fec17d34d3344fdca46ed2f969198
|
#!/usr/bin/env python
"""
This file tests vtk.vtkMolecule, and verifies that atoms/bonds are added.
"""
import sys
import vtk
from vtk.test import Testing
class TestMolecule(Testing.vtkTest):
def testCreation(self):
"Testing if molecules can be created/modified."
mol = vtk.vtkMolecule()
self.assertEqual(mol.GetNumberOfAtoms(), 0, "Number of atoms incorrect")
self.assertEqual(mol.GetNumberOfBonds(), 0, "Number of bonds incorrect")
h1 = mol.AppendAtom(1, 0.0, 0.0, -0.5)
h2 = mol.AppendAtom(1, 0.0, 0.0, 0.5)
b = mol.AppendBond(h1, h2, 1)
self.assertEqual(mol.GetNumberOfAtoms(), 2, "Number of atoms incorrect")
self.assertEqual(mol.GetNumberOfBonds(), 1, "Number of bonds incorrect")
if __name__ == "__main__":
Testing.main([(TestMolecule, 'test')])
|
HopeFOAM/HopeFOAM
|
ThirdParty-0.1/ParaView-5.0.1/VTK/Common/DataModel/Testing/Python/TestMolecule.py
|
Python
|
gpl-3.0
| 839
|
[
"VTK"
] |
2b7b20631a37e4169455b5218a64e57cb873ea76a31548c377725342f7b057b4
|
# Orca
#
# Copyright 2005-2009 Sun Microsystems Inc.
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the
# Free Software Foundation, Inc., Franklin Street, Fifth Floor,
# Boston MA 02110-1301 USA.
"""Custom formatting for OpenOffice and StarOffice."""
__id__ = "$Id$"
__version__ = "$Revision$"
__date__ = "$Date$"
__copyright__ = "Copyright (c) 2005-2009 Sun Microsystems Inc."
__license__ = "LGPL"
# pylint: disable-msg=C0301
import copy
import pyatspi
import orca.formatting
import orca.settings
formatting = {
'speech': {
pyatspi.ROLE_LABEL: {
'focused': 'expandableState + availability',
'unfocused': 'name + allTextSelection + expandableState + availability + positionInList',
'basicWhereAmI': 'roleName + name + positionInList + expandableState + (nodeLevel or nestingLevel)'
},
pyatspi.ROLE_TABLE_CELL: {
'focused': 'endOfTableIndicator + pause + tableCellRow + pause',
'unfocused': 'endOfTableIndicator + pause + tableCellRow + pause',
'basicWhereAmI': 'parentRoleName + pause + columnHeader + pause + rowHeader + pause + roleName + pause + cellCheckedState + pause + (realActiveDescendantDisplayedText or imageDescription + image) + pause + columnAndRow + pause + expandableState + pause + nodeLevel + pause',
'detailedWhereAmI': 'parentRoleName + pause + columnHeader + pause + rowHeader + pause + roleName + pause + cellCheckedState + pause + (realActiveDescendantDisplayedText or imageDescription + image) + pause + columnAndRow + pause + tableCellRow + pause + expandableState + pause + nodeLevel + pause'
},
'REAL_ROLE_TABLE_CELL': {
'focused': 'newRowHeader + newColumnHeader + realActiveDescendantDisplayedText',
'unfocused': 'newRowHeader + newColumnHeader + realActiveDescendantDisplayedText',
},
'ROLE_SPREADSHEET_CELL': {
# We treat spreadsheet cells differently from other table cells in
# whereAmI.
#
'basicWhereAmI': 'roleName + pause + column + pause + columnHeader + pause + row + pause + rowHeader + pause + (textContent or realTableCell) + pause + anyTextSelection + pause'
},
},
'braille': {
pyatspi.ROLE_LIST: {
'unfocused': '[Component(obj,\
asString(labelOrName + roleName + required))]'
},
pyatspi.ROLE_SCROLL_PANE: {
'unfocused': 'asPageTabOrScrollPane\
+ (childTab\
and ([Region(" ")] + childTab) or [])'
}
}
}
class Formatting(orca.formatting.Formatting):
def __init__(self, script):
orca.formatting.Formatting.__init__(self, script)
self.update(copy.deepcopy(formatting))
self._defaultFormatting = orca.formatting.Formatting(script)
def getFormat(self, **args):
if args.get('useDefaultFormatting', False):
return self._defaultFormatting.getFormat(**args)
else:
return orca.formatting.Formatting.getFormat(self, **args)
|
pvagner/orca
|
src/orca/scripts/apps/soffice/formatting.py
|
Python
|
lgpl-2.1
| 3,726
|
[
"ORCA"
] |
0cffe2c793785d44a74c20a234330772175af8b6cc09226fd4e9d0919e21ba00
|
"""
Migration script to add the includes_datatypes, has_repository_dependencies, includes_tools, includes_tool_dependencies and includes_workflows
columns to the repository_metadata table.
"""
from sqlalchemy import *
from sqlalchemy.orm import *
from migrate import *
from migrate.changeset import *
# Need our custom types, but don't import anything else from model
from galaxy.model.custom_types import *
import sys, logging
log = logging.getLogger( __name__ )
log.setLevel(logging.DEBUG)
handler = logging.StreamHandler( sys.stdout )
format = "%(name)s %(levelname)s %(asctime)s %(message)s"
formatter = logging.Formatter( format )
handler.setFormatter( formatter )
log.addHandler( handler )
metadata = MetaData()
def upgrade(migrate_engine):
print __doc__
metadata.bind = migrate_engine
metadata.reflect()
# Initialize.
if migrate_engine.name == 'mysql' or migrate_engine.name == 'sqlite':
default_false = "0"
elif migrate_engine.name in ['postgres', 'postgresql']:
default_false = "false"
# Create and initialize the includes_datatypes, has_repository_dependencies, includes_tools, includes_tool_dependencies and includes_workflows columns in the repository_metadata table.
RepositoryMetadata_table = Table( "repository_metadata", metadata, autoload=True )
# Create includes_datatypes column
c = Column( "includes_datatypes", Boolean, default=False, index=True )
try:
c.create( RepositoryMetadata_table, index_name="ix_repository_metadata_inc_datatypes")
assert c is RepositoryMetadata_table.c.includes_datatypes
migrate_engine.execute( "UPDATE repository_metadata SET includes_datatypes=%s" % default_false )
except Exception, e:
print "Adding includes_datatypes column to the repository_metadata table failed: %s" % str( e )
# Create has_repository_dependencies column
c = Column( "has_repository_dependencies", Boolean, default=False, index=True )
try:
c.create( RepositoryMetadata_table, index_name="ix_repository_metadata_has_repo_deps")
assert c is RepositoryMetadata_table.c.has_repository_dependencies
migrate_engine.execute( "UPDATE repository_metadata SET has_repository_dependencies=%s" % default_false )
except Exception, e:
print "Adding has_repository_dependencies column to the repository_metadata table failed: %s" % str( e )
# Create includes_tools column
c = Column( "includes_tools", Boolean, default=False, index=True )
try:
c.create( RepositoryMetadata_table, index_name="ix_repository_metadata_inc_tools")
assert c is RepositoryMetadata_table.c.includes_tools
migrate_engine.execute( "UPDATE repository_metadata SET includes_tools=%s" % default_false )
except Exception, e:
print "Adding includes_tools column to the repository_metadata table failed: %s" % str( e )
# Create includes_tool_dependencies column
c = Column( "includes_tool_dependencies", Boolean, default=False, index=True )
try:
c.create( RepositoryMetadata_table, index_name="ix_repository_metadata_inc_tool_deps")
assert c is RepositoryMetadata_table.c.includes_tool_dependencies
migrate_engine.execute( "UPDATE repository_metadata SET includes_tool_dependencies=%s" % default_false )
except Exception, e:
print "Adding includes_tool_dependencies column to the repository_metadata table failed: %s" % str( e )
# Create includes_workflows column
c = Column( "includes_workflows", Boolean, default=False, index=True )
try:
c.create( RepositoryMetadata_table, index_name="ix_repository_metadata_inc_workflows")
assert c is RepositoryMetadata_table.c.includes_workflows
migrate_engine.execute( "UPDATE repository_metadata SET includes_workflows=%s" % default_false )
except Exception, e:
print "Adding includes_workflows column to the repository_metadata table failed: %s" % str( e )
def downgrade(migrate_engine):
metadata.bind = migrate_engine
metadata.reflect()
# Drop the includes_workflows, includes_tool_dependencies, includes_tools, has_repository_dependencies and includes_datatypes columns from the repository_metadata table.
RepositoryMetadata_table = Table( "repository_metadata", metadata, autoload=True )
# Drop the includes_workflows column.
try:
RepositoryMetadata_table.c.includes_workflows.drop()
except Exception, e:
print "Dropping column includes_workflows from the repository_metadata table failed: %s" % str( e )
# Drop the includes_tool_dependencies column.
try:
RepositoryMetadata_table.c.includes_tool_dependencies.drop()
except Exception, e:
print "Dropping column includes_tool_dependencies from the repository_metadata table failed: %s" % str( e )
# Drop the includes_tools column.
try:
RepositoryMetadata_table.c.includes_tools.drop()
except Exception, e:
print "Dropping column includes_tools from the repository_metadata table failed: %s" % str( e )
# Drop the has_repository_dependencies column.
try:
RepositoryMetadata_table.c.has_repository_dependencies.drop()
except Exception, e:
print "Dropping column has_repository_dependencies from the repository_metadata table failed: %s" % str( e )
# Drop the includes_datatypes column.
try:
RepositoryMetadata_table.c.includes_datatypes.drop()
except Exception, e:
print "Dropping column includes_datatypes from the repository_metadata table failed: %s" % str( e )
|
mikel-egana-aranguren/SADI-Galaxy-Docker
|
galaxy-dist/lib/galaxy/webapps/tool_shed/model/migrate/versions/0017_add_galaxy_utility_columns_to_repository_metadata_table.py
|
Python
|
gpl-3.0
| 5,505
|
[
"Galaxy"
] |
ecf4ec72302c95dfa98fc784524519b75487c4c49ad107969b5c82fb9f245865
|
import math
from chainer.functions.activation import softplus
from chainer.functions.math import exponential
from chainer.functions.math import sum
from chainer import variable
def gaussian_kl_divergence(mean, ln_var):
"""Computes the KL-divergence of Gaussian variables from the standard one.
Given two variables ``mean`` representing :math:`\\mu` and ``ln_var``
representing :math:`\\log(\\sigma^2)`, this function returns a variable
representing the KL-divergence between the given multi-dimensional Gaussian
:math:`N(\\mu, S)` and the standard Gaussian :math:`N(0, I)`
.. math::
D_{\\mathbf{KL}}(N(\\mu, S) \\| N(0, I)),
where :math:`S` is a diagonal matrix such that :math:`S_{ii} = \\sigma_i^2`
and :math:`I` is an identity matrix.
Args:
mean (~chainer.Variable): A variable representing mean of given
gaussian distribution, :math:`\\mu`.
ln_var (~chainer.Variable): A variable representing logarithm of
variance of given gaussian distribution, :math:`\\log(\\sigma^2)`.
Returns:
~chainer.Variable: A variable representing KL-divergence between
given gaussian distribution and the standard gaussian.
"""
assert isinstance(mean, variable.Variable)
assert isinstance(ln_var, variable.Variable)
J = mean.data.size
var = exponential.exp(ln_var)
return (sum.sum(mean * mean) + sum.sum(var) - sum.sum(ln_var) - J) * 0.5
def bernoulli_nll(x, y):
"""Computes the negative log-likelihood of a Bernoulli distribution.
This function calculates the negative log-likelihood of a Bernoulli
distribution.
.. math::
-B(x; p) = -\\sum_i {x_i \\log(p_i) + (1 - x_i)\\log(1 - p_i)},
where :math:`p = \\sigma(y)`, and :math:`\\sigma(\\cdot)` is a sigmoid
function.
.. note::
As this function uses a sigmoid function, you can pass the result of a
fully-connected layer (that is, :class:`Linear`) to this function
directly.
Args:
x (~chainer.Variable): Input variable.
y (~chainer.Variable): A variable representing the parameter of
Bernoulli distribution.
Returns:
~chainer.Variable: A variable representing negative log-likelihood.
"""
assert isinstance(x, variable.Variable)
assert isinstance(y, variable.Variable)
return sum.sum(softplus.softplus(-y)) + sum.sum(y) - sum.sum(y * x)
def gaussian_nll(x, mean, ln_var):
"""Computes the negative log-likelihood of a Gaussian distribution.
Given two variables ``mean`` representing :math:`\\mu` and ``ln_var``
representing :math:`\\log(\\sigma^2)`, this function returns the negative
log-likelihood of :math:`x` on a Gaussian distribution :math:`N(\\mu, S)`,
.. math::
-\\log N(x; \\mu, \\sigma^2) =
\\log\\left(\\sqrt{(2\\pi)^D |S|}\\right) +
\\frac{1}{2}(x - \\mu)^\\top S^{-1}(x - \\mu),
where :math:`D` is a dimension of :math:`x` and :math:`S` is a diagonal
matrix where :math:`S_{ii} = \\sigma_i^2`.
Args:
x (~chainer.Variable): Input variable.
mean (~chainer.Variable): A variable representing mean of a Gaussian
distribution, :math:`\\mu`.
ln_var (~chainer.Variable): A variable representing logarithm of
variance of a Gaussian distribution, :math:`\\log(\\sigma^2)`.
Returns:
~chainer.Variable: A variable representing the negative log-likelihood.
"""
assert isinstance(x, variable.Variable)
assert isinstance(mean, variable.Variable)
assert isinstance(ln_var, variable.Variable)
D = x.data.size
x_prec = exponential.exp(-ln_var)
x_diff = x - mean
x_power = (x_diff * x_diff) * x_prec * -0.5
return (sum.sum(ln_var) + D * math.log(2 * math.pi)) / 2 - sum.sum(x_power)
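# Usage sketch (illustrative; assumes numpy is installed alongside chainer):
#   import numpy
#   from chainer import Variable
#   mean = Variable(numpy.zeros(3, dtype=numpy.float32))
#   ln_var = Variable(numpy.zeros(3, dtype=numpy.float32))
#   gaussian_kl_divergence(mean, ln_var) # -> 0, since this is N(0, I) itself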
|
cemoody/chainer
|
chainer/functions/loss/vae.py
|
Python
|
mit
| 3,838
|
[
"Gaussian"
] |
288aaa87273bfc6a45834ead935c74ef6ce08189aa8f5bba6b77f5e72139ff4a
|
"""
@name: /home/briank/workspace/PyHouse/Project/src/Modules/House/Lighting/_test/test_outlets.py
@author: D. Brian Kimmel
@contact: D.BrianKimmel@gmail.com
@copyright: (c) 2013-2020 by D. Brian Kimmel
@license: MIT License
@note: Created on Dec 7, 2019
@summary:
Passed all 8 tests - DBK - 2019-12-08
"""
__updated__ = '2020-02-09'
# Import system type stuff
from twisted.trial import unittest
from ruamel.yaml import YAML
# Import PyMh files and modules.
from _test.testing_mixin import SetupPyHouseObj
from Modules.House.Lighting.Outlets.outlets import LocalConfig as outletsConfig
from Modules.Core.Utilities.debug_tools import PrettyFormatAny
TEST_YAML = """\
Outlets:
- Name: Musicroom Lamp
Room: Music
Comment: This is the music room lamp
Family:
Name: Insteon
Address: 11.11.11
- Name: Christmas
Comment: ??
Family:
Name: Insteon
Address: 22.22.22
- Name: Gameroom Lamp
Room: Game
Comment: Fireplace end
Family:
Name: Insteon
Address: 33.33.33
- Name: Curio
Family:
Name: Insteon
Address: 44.44.44
- Name: China Cabinet
Family:
Name: Insteon
Address: 55.55.55
"""
class SetupMixin(object):
"""
"""
def setUp(self):
self.m_pyhouse_obj = SetupPyHouseObj().BuildPyHouseObj()
l_yaml = YAML()
self.m_test_config = l_yaml.load(TEST_YAML)
class A0(unittest.TestCase):
def test_00_Print(self):
_x = PrettyFormatAny.form('_x', 'title') # so it is defined when printing is cleaned up.
print('Id: test_outlets')
class A1_Setup(SetupMixin, unittest.TestCase):
"""
This section tests the above setup for things we will need further down in the tests.
"""
def setUp(self):
SetupMixin.setUp(self)
def test_01_Config(self):
""" Be sure that the config contains the right stuff.
"""
# print(PrettyFormatAny.form(self.m_test_config, 'A1-01-A - Config'))
# print(PrettyFormatAny.form(self.m_pyhouse_obj.House, 'PyHouse House'))
self.assertIsNotNone(self.m_test_config['Outlets'])
class C1_Read(SetupMixin, unittest.TestCase):
"""
This section tests the reading and writing of config used by lighting_lights.
"""
def setUp(self):
SetupMixin.setUp(self)
self.m_config = outletsConfig(self.m_pyhouse_obj)
def test_01_Outlet0(self):
""" Test loading outlet 0
"""
l_yaml = self.m_test_config['Outlets'][0]
# print('C1-01-A - Yaml: ', l_yaml)
l_outlet = self.m_config._extract_one_outlet(l_yaml)
# print(PrettyFormatAny.form(l_outlet, 'C1-01-B - Family'))
# print(PrettyFormatAny.form(l_outlet.Family, 'C1-01-C - Family'))
# print(PrettyFormatAny.form(l_outlet.Room, 'C1-01-d - Room'))
self.assertEqual(l_outlet.Name, 'Musicroom Lamp')
self.assertEqual(l_outlet.Comment, 'This is the music room lamp')
self.assertEqual(l_outlet.DeviceType, 'Lighting')
self.assertEqual(l_outlet.DeviceSubType, 'Outlet')
self.assertEqual(l_outlet.Family.Name, 'Insteon')
self.assertEqual(l_outlet.Family.Address, '11.11.11')
def test_02_Outlet1(self):
""" Test loading outlet 1
"""
l_yaml = self.m_test_config['Outlets'][1]
# print('C1-02-A - Yaml: ', l_yaml)
l_outlet = self.m_config._extract_one_outlet(l_yaml)
# print(PrettyFormatAny.form(l_light, 'C1-02-B - Light'))
self.assertEqual(l_outlet.Name, 'Christmas')
self.assertEqual(l_outlet.Comment, '??')
self.assertEqual(l_outlet.DeviceType, 'Lighting')
self.assertEqual(l_outlet.DeviceSubType, 'Outlet')
self.assertEqual(l_outlet.Family.Name, 'Insteon')
self.assertEqual(l_outlet.Family.Address, '22.22.22')
def test_03_Outlet2(self):
""" Test loading outlet 2
"""
l_yaml = self.m_test_config['Outlets'][2]
# print('C1-03-A - Yaml: ', l_yaml)
l_outlet = self.m_config._extract_one_outlet(l_yaml)
# print(PrettyFormatAny.form(l_outlet, 'C1-03-B - Outlet'))
self.assertEqual(l_outlet.Name, 'Gameroom Lamp')
self.assertEqual(l_outlet.Comment, 'Fireplace end')
self.assertEqual(l_outlet.DeviceType, 'Lighting')
self.assertEqual(l_outlet.DeviceSubType, 'Outlet')
self.assertEqual(l_outlet.Family.Name, 'Insteon')
self.assertEqual(l_outlet.Family.Address, '33.33.33')
def test_04_Outlets(self):
""" Test loading all outlets
"""
l_yaml = self.m_test_config['Outlets']
# print('C1-04-A - Yaml: ', l_yaml)
l_outlets = self.m_config._extract_all_outlets(l_yaml)
# print(PrettyFormatAny.form(l_outlets, 'C1-04-B - Outlets'))
self.assertEqual(l_outlets[0].Name, 'Musicroom Lamp')
self.assertEqual(l_outlets[1].Name, 'Christmas')
self.assertEqual(l_outlets[2].Name, 'Gameroom Lamp')
class C2_YamlWrite(SetupMixin, unittest.TestCase):
"""
This section tests the reading and writing of the Yaml config file used by the outlets module.
"""
def setUp(self):
SetupMixin.setUp(self)
# self.m_obj = lightsXML().read_all_lights_xml(self.m_pyhouse_obj)
def test_01_(self):
"""Test the write for proper XML Base elements
"""
print(PrettyFormatAny.form(self.m_pyhouse_obj.House.Lighting.Lights, 'C2-01-A - Node'))
class Z9_YamlWrite(SetupMixin, unittest.TestCase):
"""
This section tests the reading and writing of the Yaml config file used by the outlets module.
"""
def setUp(self):
SetupMixin.setUp(self)
def test_01_(self):
"""Test the write for proper XML Base elements
"""
# print(PrettyFormatAny.form(self.m_pyhouse_obj.House.Lighting.Lights, 'C2-01-A - Node'))
pass
# ## END DBK
|
DBrianKimmel/PyHouse
|
Project/src/Modules/House/Lighting/Outlets/_test/test_outlets.py
|
Python
|
mit
| 5,980
|
[
"Brian"
] |
e67c245f8ca16b5c68b06801d4e63a19def041980b89fbe100e3bc54ac0fb70e
|
#!/usr/bin/env python
from __future__ import print_function
import wx
from matplotlib.backends.backend_wxagg import FigureCanvasWxAgg as FigureCanvas
from matplotlib.backends.backend_wx import NavigationToolbar2Wx
from matplotlib.figure import Figure
try: # Necessary for wx2.8.11.0
from wx.lib.pubsub import setupkwargs
# later versions: wxPython-phoenix
# sudo pip3 install -U --pre -f http://wxpython.org/Phoenix/snapshot-builds/ wxPython_Phoenix
# does not work so far! leon 2016-06-24
except:
pass
from wx.lib.pubsub import pub
from wx.lib.dialogs import ScrolledMessageDialog
from magpy.stream import read
import magpy.mpplot as mp
#import magpy.absolutes as di
from magpy.absolutes import *
from magpy.transfer import *
from magpy.database import *
from magpy.version import __version__
from magpy.gui.streampage import *
from magpy.gui.metapage import *
from magpy.gui.dialogclasses import *
from magpy.gui.absolutespage import *
from magpy.gui.reportpage import *
from magpy.gui.developpage import * # remove this
from magpy.gui.analysispage import *
from magpy.gui.monitorpage import *
import glob, os, pickle, base64
import pylab
import thread, time
import threading
import wx.py
# the following are used below (math.hypot, np.min, plt.figure) and
# previously arrived only via the star imports above
import math
import numpy as np
import matplotlib.pyplot as plt
def saveobj(obj, filename):
with open(filename, 'wb') as f:
pickle.dump(obj,f,pickle.HIGHEST_PROTOCOL)
def loadobj(filename):
with open(filename, 'rb') as f:
return pickle.load(f)
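# Minimal round-trip sketch for the two pickle helpers above (the file name
# is illustrative only, not one the GUI uses):
#
#   saveobj({'dbname': 'None'}, '/tmp/magpy_options.pkl')
#   assert loadobj('/tmp/magpy_options.pkl')['dbname'] == 'None'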
def pydate2wxdate(datum):
    # note: isinstance(datum, (datetime, datetime.date)) breaks for plain
    # dates, since 'datetime' is the class here; duck-type on timetuple instead
    assert hasattr(datum, 'timetuple')
    tt = datum.timetuple()
    dmy = (tt[2], tt[1]-1, tt[0])
    #print (tt, dmy)
    return wx.DateTimeFromDMY(*dmy)
def wxdate2pydate(date):
    assert isinstance(date, wx.DateTime)
    if date.IsValid():
        ymd = map(int, date.FormatISODate().split('-'))
        # 'datetime' is the datetime class throughout this module (see the
        # strptime calls below), so build an instance and take its date part
        return datetime(*ymd).date()
    else:
        return None
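# Illustrative round trip between the two date helpers above:
#
#   wxd = pydate2wxdate(datetime(2016, 6, 24))
#   assert wxdate2pydate(wxd) == datetime(2016, 6, 24).date()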
def saveini(optionsdict): #dbname=None, user=None, passwd=None, host=None, dirname=None, compselect=None, abscompselect=None, basecompselect=None, resolution=None, dipathlist = None, divariopath = None, discalarpath = None, diexpD = None, diexpI = None, stationid = None, diid = None, ditype = None, diazimuth = None, dipier = None, dialpha = None, dideltaF = None, didbadd = None, bookmarks = None):
"""
    Method for initializing default parameters and credentials
"""
try:
normalpath = os.path.expanduser('~')
except:
        normalpath = '/'  # os.path is a module, not callable; fall back to the root dir
# Updating version info in file
from magpy.version import __version__
optionsdict['magpyversion'] = __version__
if optionsdict.get('dbname','') == '':
optionsdict['dbname'] = 'None'
if optionsdict.get('user','') == '':
optionsdict['user'] = 'Max'
if optionsdict.get('passwd','') == '':
passwd = 'Secret'
else:
passwd = optionsdict.get('passwd','')
if optionsdict.get('host','') == '':
optionsdict['host'] = 'localhost'
if optionsdict.get('dirname','') == '':
optionsdict['dirname'] = normalpath
if optionsdict.get('basefilter','') == '':
optionsdict['basefilter'] = 'spline'
if optionsdict.get('dipathlist','') == '':
optionsdict['dipathlist'] = [normalpath]
if optionsdict.get('divariopath','') == '':
optionsdict['divariopath'] = os.path.join(normalpath,'*')
if optionsdict.get('discalarpath','') == '':
optionsdict['discalarpath'] = os.path.join(normalpath,'*')
if optionsdict.get('diexpD','') == '':
optionsdict['diexpD'] = '3.0'
if optionsdict.get('diexpI','') == '':
optionsdict['diexpI'] = '64.0'
if optionsdict.get('stationid','') == '':
optionsdict['stationid'] = 'WIC'
if optionsdict.get('diid','') == '':
optionsdict['diid'] = ''
if optionsdict.get('ditype','') == '':
optionsdict['ditype'] = 'manual' #abstype = ''
if optionsdict.get('diazimuth','') == '':
optionsdict['diazimuth'] = ''
if optionsdict.get('dipier','') == '':
optionsdict['dipier'] = 'A2'
if optionsdict.get('dialpha','') == '':
optionsdict['dialpha'] = '0.0'
if optionsdict.get('dideltaF','') == '':
optionsdict['dideltaF'] = '2.1'
if optionsdict.get('didbadd','') == '':
optionsdict['didbadd'] = 'False'
if optionsdict.get('bookmarks','') == '':
        optionsdict['bookmarks'] = ['ftp://ftp.nmh.ac.uk/wdc/obsdata/hourval/single_year/2011/fur2011.wdc',
                                    'ftp://user:passwd@www.zamg.ac.at/data/magnetism/wic/variation/WIC20160627pmin.min',
                                    'http://www.conrad-observatory.at/zamg/index.php/downloads-en/category/13-definite2015?download=66:wic-2015-0000-pt1m-4',
                                    'http://www-app3.gfz-potsdam.de/kp_index/qlyymm.tab']
if optionsdict.get('scalevalue','') == '':
optionsdict['scalevalue'] = 'True'
if optionsdict.get('double','') == '':
optionsdict['double'] = 'True'
if optionsdict.get('order','') == '':
optionsdict['order'] = 'MU,MD,EU,WU,ED,WD,NU,SD,ND,SU'
#calculation
if optionsdict.get('fitfunction','') == '':
optionsdict['fitfunction'] = 'spline'
if optionsdict.get('fitdegree','') == '':
optionsdict['fitdegree'] = '5'
if optionsdict.get('fitknotstep','') == '':
optionsdict['fitknotstep'] = '0.3'
initpath = os.path.join(normalpath,'.magpyguiini')
pwd = base64.b64encode(passwd)
optionsdict['passwd'] = pwd
saveobj(optionsdict, initpath)
print ("Initialization: Added data ")
def loadini():
"""
Load initialisation data
"""
from magpy.version import __version__
home = os.path.expanduser('~')
initpath = os.path.join(home,'.magpyguiini')
print ("Trying to access initialization file:", initpath)
try:
initdata = loadobj(initpath)
magpyversion = __version__
        if not initdata.get('magpyversion','') == magpyversion:
            # version number has changed, and with it possibly the options ini
            print ("MagPy version has changed ({}): initialization parameters will be updated".format(magpyversion))
return initdata, True
print ("... success")
except:
print ("Initialization data not found: Setting defaults")
return {}, False
#print "Initialization data loaded"
return initdata, False
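# Typical startup sequence, as used by MainFrame below (sketch):
#
#   inipara, update = loadini()   # returns ({}, False) if no ini file exists
#   if inipara == {}:
#       saveini({})               # write defaults ...
#       inipara, update = loadini()  # ... and reload them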
class RedirectText(object):
# Taken from: http://www.blog.pythonlibrary.org/2009/01/01/wxpython-redirecting-stdout-stderr/
# Used to redirect di results to the multiline textctrl on the DI page
def __init__(self,aWxTextCtrl):
self.out=aWxTextCtrl
def write(self,string):
self.out.WriteText(string)
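# Usage sketch: sys.stdout = RedirectText(some_textctrl) routes subsequent
# print output into the wx text control (compare the commented-out
# redirection in MainFrame below).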
class PlotPanel(wx.Panel):
"""
DESCRIPTION
        contains all methods for the left plot panel
"""
def __init__(self, *args, **kwds):
wx.Panel.__init__(self, *args, **kwds)
self.figure = plt.figure()
self.plt = plt
scsetmp = ScreenSelections()
self.canvas = FigureCanvas(self,-1,self.figure)
self.datavars = {} # for monitoring
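        # datavars holds the monitoring state, keyed by position:
        # 0 dataid, 1 parameter, 2 period, 3 pad, 4 currentdate,
        # 5 unitlist, 6 coverage, 7 updatetime, 8 db (see monitorPlot)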
self.array = [[] for key in KEYLIST] # for monitoring
self.t1_stop= threading.Event()
self.xlimits = None
self.ylimits = None
self.selplt = 0 # Index to the selected plot - used by flagselection
self.initialPlot()
self.__do_layout()
def __do_layout(self):
# Resize graph and toolbar, create toolbar
self.vbox = wx.BoxSizer(wx.VERTICAL)
self.vbox.Add(self.canvas, 1, wx.LEFT | wx.TOP | wx.GROW)
self.toolbar = NavigationToolbar2Wx(self.canvas)
self.vbox.Add(self.toolbar, 0, wx.EXPAND)
self.SetSizer(self.vbox)
self.vbox.Fit(self)
def timer(self, arg1, stop_event):
while(not stop_event.is_set()):
self.update(self.array)
print ("Running ...")
stop_event.wait(self.datavars[7])
pass
def update(self,array):
"""
DESCRIPTION
Update array with new data and plot it.
            If a log file is chosen, this method uses the collector.subscribe
            method storeData to save a binary file
"""
def list_duplicates(seq):
seen = set()
seen_add = seen.add
return [idx for idx,item in enumerate(seq) if item in seen or seen_add(item)]
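        # e.g. list_duplicates(['a', 'b', 'a']) -> [2]: the indices of
        # repeated timestamps, which are dropped from every column below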
db = self.datavars[8]
parameterstring = 'time,'+self.datavars[1]
# li should contain a data source of a certain length (can be filled by any reading process)
li = sorted(dbselect(db, parameterstring, self.datavars[0], expert='ORDER BY time DESC LIMIT {}'.format(int(self.datavars[2]))))
tmpdt = [datetime.strptime(elem[0], "%Y-%m-%d %H:%M:%S.%f") for elem in li]
self.array[0].extend(tmpdt)
for idx,para in enumerate(parameterstring.split(',')):
if not para.endswith('time'):
i = KEYLIST.index(para)
self.array[i].extend([float(elem[idx]) for elem in li])
        duplicateindices = list_duplicates(self.array[0])
        array = [[] for key in KEYLIST]
        for idx, elem in enumerate(self.array):
            if len(elem) > 0:
                newelem = np.delete(np.asarray(elem), duplicateindices)
array[idx] = list(newelem)
coverage = int(self.datavars[6])
array = [ar[-coverage:] if len(ar) > coverage else ar for ar in array ]
self.monitorPlot(array)
#if Log2File:
# msubs.output = 'file'
# #sensorid = row[0]
# #module = row[1]
# #line = row[2]
# #msubs.storeData(li,parameterstring.split(','))
def startMQTTMonitor(self,**kwargs):
"""
DEFINITION:
            embed matplotlib figure in canvas for monitoring
PARAMETERS:
kwargs: - all plot args
"""
dataid = self.datavars[0]
parameter = self.datavars[1]
period = self.datavars[2]
pad = self.datavars[3]
currentdate = self.datavars[4]
unitlist = self.datavars[5]
coverage = self.datavars[6] # coverage
updatetime = self.datavars[7]
db = self.datavars[8]
"""
# convert parameter list to a dbselect sql format
parameterstring = 'time,'+parameter
# Test whether data is available at all with selected keys and dataid
li = sorted(dbselect(db, parameterstring, dataid, expert='ORDER BY time DESC LIMIT {}'.format(int(coverage))))
if not len(li) > 0:
print("Parameter", parameterstring, dataid, coverage)
print("Did not find any data to display - aborting")
return
else:
valkeys = ['time']
valkeys = parameterstring.split(',')
for i,elem in enumerate(valkeys):
idx = KEYLIST.index(elem)
if elem == 'time':
self.array[idx] = [datetime.strptime(el[0],"%Y-%m-%d %H:%M:%S.%f") for el in li]
else:
self.array[idx] = [float(el[i]) for el in li]
"""
self.datavars = {0: dataid, 1: parameter, 2: period, 3: pad, 4: currentdate, 5: unitlist, 6: coverage, 7: updatetime, 8: db}
self.figure.clear()
t1 = threading.Thread(target=self.timer, args=(1,self.t1_stop))
t1.start()
# Display the plot
self.canvas.draw()
def startMARCOSMonitor(self,**kwargs):
"""
DEFINITION:
            embed matplotlib figure in canvas for monitoring
PARAMETERS:
kwargs: - all plot args
"""
dataid = self.datavars[0]
parameter = self.datavars[1]
period = self.datavars[2]
pad = self.datavars[3]
currentdate = self.datavars[4]
unitlist = self.datavars[5]
coverage = self.datavars[6] # coverage
updatetime = self.datavars[7]
db = self.datavars[8]
# convert parameter list to a dbselect sql format
parameterstring = 'time,'+parameter
# Test whether data is available at all with selected keys and dataid
li = sorted(dbselect(db, parameterstring, dataid, expert='ORDER BY time DESC LIMIT {}'.format(int(coverage))))
if not len(li) > 0:
print("Parameter", parameterstring, dataid, coverage)
print("Did not find any data to display - aborting")
return
else:
valkeys = ['time']
valkeys = parameterstring.split(',')
for i,elem in enumerate(valkeys):
idx = KEYLIST.index(elem)
if elem == 'time':
self.array[idx] = [datetime.strptime(el[0],"%Y-%m-%d %H:%M:%S.%f") for el in li]
else:
self.array[idx] = [float(el[i]) for el in li]
self.datavars = {0: dataid, 1: parameter, 2: period, 3: pad, 4: currentdate, 5: unitlist, 6: coverage, 7: updatetime, 8: db}
self.figure.clear()
t1 = threading.Thread(target=self.timer, args=(1,self.t1_stop))
t1.start()
# Display the plot
self.canvas.draw()
def startMARTASMonitor(self,**kwargs):
"""
DEFINITION:
            embed matplotlib figure in canvas for monitoring
PARAMETERS:
kwargs: - all plot args
"""
#clientname,clientip,destpath,dest,stationid,sshcredlst,s,o,printdata,dbcredlst
#dataid = self.datavars[0]
#parameter = self.datavars[1]
#period = self.datavars[2]
#pad = self.datavars[3]
#currentdate = self.datavars[4]
#unitlist = self.datavars[5]
#coverage = self.datavars[6] # coverage
#updatetime = self.datavars[7]
#db = self.datavars[8]
try:
from magpy.collector import subscribe2client as msubs
except:
print ("MARTAS and LogFile options not available - check dependencies")
return
# MARTAS specific
clientip = self.datavars[9]
destpath = self.datavars[10]
sshcredlst = self.datavars[11]
s = self.datavars[12]
o = self.datavars[13]
stationid = self.datavars[14]
# clientname
import socket
clientaddress = socket.getfqdn(clientip)
clientname = clientaddress.split('.')[0]
dest = 'file'
printdata = False
dbcredlst = []
print ("Here", clientname,clientip,destpath,dest,stationid,sshcredlst,s,o,printdata,dbcredlst)
factory = WampClientFactory("ws://"+clientip+":9100", debugWamp = False)
msubs.sendparameter(clientname,clientip,destpath,dest,stationid,sshcredlst,s,o,printdata,dbcredlst)
factory.protocol = msubs.PubSubClient
connectWS(factory)
reactor.run()
def monitorPlot(self,array,**kwargs):
"""
DEFINITION:
            embed matplotlib figure in canvas for monitoring
PARAMETERS:
kwargs: - all plot args
"""
# Read persistent data variables
dataid = self.datavars[0]
parameter = self.datavars[1]
period = self.datavars[2]
pad = self.datavars[3]
currentdate = self.datavars[4]
unitlist = self.datavars[5]
coverage = self.datavars[6] # coverage
updatetime = self.datavars[7]
db = self.datavars[8]
# convert parameter list to a dbselect sql format
parameterstring = 'time,'+parameter
self.figure.clear()
try:
self.axes.clear()
except:
pass
dt = array[0]
self.figure.suptitle("Live Data of %s - %s" % (dataid, currentdate))
for idx,para in enumerate(parameterstring.split(',')):
if not para.endswith('time'):
i = KEYLIST.index(para)
subind = int("{}1{}".format(len(parameterstring.split(','))-1,idx))
#print subind
self.axes = self.figure.add_subplot(subind)
self.axes.grid(True)
rd = array[i]
l, = self.axes.plot_date(dt,rd,'b-')
#l, = a.plot_date(dt,td,'g-')
plt.xlabel("Time")
plt.ylabel(r'%s [%s]' % (unitlist[idx-1][0],unitlist[idx-1][1]))
                # Get the minimum and maximum values; these are
                # used for annotations and for scaling the plot
                min_val = np.min(rd)
                max_val = np.max(rd)
                # Add annotations for the minimum and maximum values
self.axes.annotate(r'Min: %0.1f' % (min_val),
xy=(dt[rd.index(min_val)], min_val),
xycoords='data', xytext=(20, -20),
textcoords='offset points',
bbox=dict(boxstyle="round", fc="0.8"),
arrowprops=dict(arrowstyle="->",
shrinkA=0, shrinkB=1,
connectionstyle="angle,angleA=0,angleB=90,rad=10"))
self.axes.annotate(r'Max: %0.1f' % (max_val),
xy=(dt[rd.index(max_val)], max_val),
xycoords='data', xytext=(20, 20),
textcoords='offset points',
bbox=dict(boxstyle="round", fc="0.8"),
arrowprops=dict(arrowstyle="->",
shrinkA=0, shrinkB=1,
connectionstyle="angle,angleA=0,angleB=90,rad=10"))
# Set the axis limits to make the data more readable
#self.axes.axis([0,len(temps), min_t - pad,max_t + pad])
self.figure.canvas.draw_idle()
# Repack variables that need to be persistent between
# executions of this method
self.datavars = {0: dataid, 1: parameter, 2: period, 3: pad, 4: currentdate, 5: unitlist, 6: coverage, 7: updatetime, 8: db}
def guiPlot(self,streams,keys,plotopt={},**kwargs):
"""
DEFINITION:
            embed matplotlib figure in canvas
PARAMETERS:
kwargs: - all plot args
"""
#print ("GUI plot", plotopt)
# Declare and register callbacks
def on_xlims_change(axes):
self.xlimits = axes.get_xlim()
def on_ylims_change(axes):
#print ("updated ylims: ", axes.get_ylim())
self.ylimits = axes.get_ylim()
self.selplt = self.axlist.index(axes)
self.figure.clear()
try:
self.axes.clear()
except:
pass
self.axes = mp.plotStreams(streams,keys,figure=self.figure,**plotopt)
#self.axes = mp.plotStreams(streams,keys,figure=self.figure,**kwargs)
self.axlist = self.figure.axes
#get current xlimits:
for idx, ax in enumerate(self.axlist):
self.xlimits = ax.get_xlim()
self.ylimits = ax.get_ylim()
ax.callbacks.connect('xlim_changed', on_xlims_change)
ax.callbacks.connect('ylim_changed', on_ylims_change)
stream = streams[-1]
key = keys[-1]
if not len(stream.ndarray[0])>0:
#print ("Here")
self.t = [elem.time for elem in stream]
flag = [elem.flag for elem in stream]
self.k = [eval("elem."+keys[0]) for elem in stream]
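            # eval() mirrors the original line-struct access; a safer
            # equivalent would be getattr(elem, keys[0])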
else:
self.t = stream.ndarray[0]
flagpos = KEYLIST.index('flag')
firstcol = KEYLIST.index(key[0])
flag = stream.ndarray[flagpos]
self.k = stream.ndarray[firstcol]
#self.axes.af2 = self.AnnoteFinder(t,yplt,flag,self.axes)
#self.axes.af2 = self.AnnoteFinder(t,k,flag,self.axes)
#af2 = self.AnnoteFinder(t,k,flag,self.axes)
#self.figure.canvas.mpl_connect('button_press_event', af2)
self.canvas.draw()
def initialPlot(self):
"""
DEFINITION:
loads an image for the startup screen
"""
try:
self.axes = self.figure.add_subplot(111)
plt.axis("off") # turn off axis
try:
script_dir = os.path.dirname(__file__)
startupimage = os.path.join(script_dir,'magpy.png')
# TODO add alternative positions
# either use a walk to locate the image in /usr for linux and installation path on win
# or put installation path in ini
                img = pylab.imread(startupimage)  # qualified; bare imread relied on star imports
self.axes.imshow(img)
except:
pass
self.canvas.draw()
except:
pass
def linkRep(self):
return ReportPage(self)
class AnnoteFinder:
"""
callback for matplotlib to display an annotation when points are clicked on. The
point which is closest to the click and within xtol and ytol is identified.
Register this function like this:
scatter(xdata, ydata)
af = AnnoteFinder(xdata, ydata, annotes)
connect('button_press_event', af)
"""
def __init__(self, xdata, ydata, annotes, axis=None, xtol=None, ytol=None):
self.data = zip(xdata, ydata, annotes)
if xtol is None:
xtol = ((max(xdata) - min(xdata))/float(len(xdata)))/2
if ytol is None:
ytol = ((max(ydata) - min(ydata))/float(len(ydata)))/2
ymin = min(ydata)
ymax = max(ydata)
self.xtol = xtol
self.ytol = ytol
if axis is None:
self.axis = pylab.gca()
else:
self.axis= axis
self.drawnAnnotations = {}
self.links = []
def distance(self, x1, x2, y1, y2):
"""
return the distance between two points
"""
return math.hypot(x1 - x2, y1 - y2)
def __call__(self, event):
if event.inaxes:
clickX = event.xdata
clickY = event.ydata
if self.axis is None or self.axis==event.inaxes:
annotes = []
for x,y,a in self.data:
#if clickX-self.xtol < x < clickX+self.xtol and clickY-self.ytol < y < clickY+self.ytol:
if clickX-self.xtol < x < clickX+self.xtol :
annotes.append((self.distance(x,clickX,y,clickY),x,y, a) )
if annotes:
annotes.sort()
distance, x, y, annote = annotes[0]
self.drawAnnote(event.inaxes, x, y, annote)
for l in self.links:
l.drawSpecificAnnote(annote)
def drawAnnote(self, axis, x, y, annote):
"""
Draw the annotation on the plot
"""
if (x,y) in self.drawnAnnotations:
markers = self.drawnAnnotations[(x,y)]
for m in markers:
m.set_visible(not m.get_visible())
self.axis.figure.canvas.draw()
else:
#t = axis.text(x,y, "(%3.2f, %3.2f) - %s"%(x,y,annote), )
datum = datetime.strftime(num2date(x).replace(tzinfo=None),"%Y-%m-%d")
t = axis.text(x,y, "(%s, %3.2f)"%(datum,y), )
m = axis.scatter([x],[y], marker='d', c='r', zorder=100)
scse = ScreenSelections()
scse.seldatelist.append(x)
scse.selvallist.append(y)
scse.updateList()
#test = MainFrame(parent=None)
#test.ReportPage.addMsg(str(x))
#rep_page.logMsg('Datum is %s ' % (datum))
#l = axis.plot([x,x],[0,y])
self.drawnAnnotations[(x,y)] =(t,m)
self.axis.figure.canvas.draw()
def drawSpecificAnnote(self, annote):
annotesToDraw = [(x,y,a) for x,y,a in self.data if a==annote]
for x,y,a in annotesToDraw:
self.drawAnnote(self.axis, x, y, a)
class MenuPanel(wx.Panel):
"""
DESCRIPTION
        contains all methods for the right menu panel and their insets
All methods are listed in the MainFrame class
"""
def __init__(self, *args, **kwds):
wx.Panel.__init__(self, *args, **kwds)
# Create pages on MenuPanel
nb = wx.Notebook(self,-1)
self.str_page = StreamPage(nb)
self.met_page = MetaPage(nb)
self.ana_page = AnalysisPage(nb)
self.abs_page = AbsolutePage(nb)
self.rep_page = ReportPage(nb)
self.com_page = MonitorPage(nb)
nb.AddPage(self.str_page, "Stream")
nb.AddPage(self.met_page, "Meta")
nb.AddPage(self.ana_page, "Analysis")
nb.AddPage(self.abs_page, "DI")
nb.AddPage(self.rep_page, "Report")
nb.AddPage(self.com_page, "Monitor")
sizer = wx.BoxSizer()
sizer.Add(nb, 1, wx.EXPAND)
self.SetSizer(sizer)
class MainFrame(wx.Frame):
def __init__(self, *args, **kwds):
kwds["style"] = wx.DEFAULT_FRAME_STYLE
wx.Frame.__init__(self, *args, **kwds)
# The Splitted Window
self.sp = wx.SplitterWindow(self, -1, style=wx.SP_3D|wx.SP_BORDER)
self.plot_p = PlotPanel(self.sp,-1)
self.menu_p = MenuPanel(self.sp,-1)
pub.subscribe(self.changeStatusbar, 'changeStatusbar')
# The Status Bar
self.StatusBar = self.CreateStatusBar(2, wx.ST_SIZEGRIP)
# Update Status Bar with plot values
self.plot_p.canvas.mpl_connect('motion_notify_event', self.UpdateCursorStatus)
# Allow flagging with double click
self.plot_p.canvas.mpl_connect('button_press_event', self.OnFlagClick)
self.streamlist = []
self.headerlist = []
self.plotoptlist = []
self.streamkeylist = []
self.currentstreamindex = 0
self.stream = DataStream() # used for storing original data
self.plotstream = DataStream() # used for manipulated data
self.orgheader = {}
self.shownkeylist = []
self.keylist = []
self.flaglist = []
self.compselect = 'None'
self.options = {}
self.dipathlist = 'None'
self.baselinedictlst = [] # variable to hold info on loaded DI streams for baselinecorrection
self.baselineidxlst = []
self.InitPlotParameter()
# Try to load ini-file
# located within home directory
inipara,update = loadini()
#print ("INIPARA", inipara)
if inipara == {}:
saveini(self.options) # initialize defaultvalues
inipara, test = loadini()
#print ("INIPARA", inipara)
if update:
self.initParameter(inipara)
saveini(self.options) # initialize defaultvalues
inipara, test = loadini()
# Variable initializations
self.initParameter(inipara)
# Menu Bar
# --------------
# Existing shortcuts: o,d,u,s,e,q,c,b,l,a,r,m,i,v (a,b,c,d,e,f,(gh),i,(jk),l,m,(n),o
self.MainMenu = wx.MenuBar()
# ## File Menu
self.FileMenu = wx.Menu()
self.FileOpen = wx.MenuItem(self.FileMenu, 101, "&Open File...\tCtrl+F", "Open file", wx.ITEM_NORMAL)
self.FileMenu.AppendItem(self.FileOpen)
self.DirOpen = wx.MenuItem(self.FileMenu, 102, "Select &Directory...\tCtrl+D", "Select an existing directory", wx.ITEM_NORMAL)
self.FileMenu.AppendItem(self.DirOpen)
self.WebOpen = wx.MenuItem(self.FileMenu, 103, "Open &URL...\tCtrl+U", "Get data from the internet", wx.ITEM_NORMAL)
self.FileMenu.AppendItem(self.WebOpen)
self.DBOpen = wx.MenuItem(self.FileMenu, 104, "&Select DB table...\tCtrl+S", "Select a MySQL database", wx.ITEM_NORMAL)
self.FileMenu.AppendItem(self.DBOpen)
self.DBOpen.Enable(False)
self.FileMenu.AppendSeparator()
self.ExportData = wx.MenuItem(self.FileMenu, 105, "&Export data...\tCtrl+E", "Export data to a file", wx.ITEM_NORMAL)
self.FileMenu.AppendItem(self.ExportData)
self.ExportData.Enable(False)
self.FileMenu.AppendSeparator()
self.FileQuitItem = wx.MenuItem(self.FileMenu, wx.ID_EXIT, "&Quit\tCtrl+Q", "Quit the program", wx.ITEM_NORMAL)
self.FileMenu.AppendItem(self.FileQuitItem)
self.MainMenu.Append(self.FileMenu, "&File")
# ## Database Menu
self.DatabaseMenu = wx.Menu()
self.DBConnect = wx.MenuItem(self.DatabaseMenu, 201, "&Connect MySQL DB...\tCtrl+O", "Connect Database", wx.ITEM_NORMAL)
self.DatabaseMenu.AppendItem(self.DBConnect)
self.DatabaseMenu.AppendSeparator()
self.DBInit = wx.MenuItem(self.DatabaseMenu, 202, "&Initialize a new MySQL DB...\tCtrl+I", "Initialize Database", wx.ITEM_NORMAL)
self.DatabaseMenu.AppendItem(self.DBInit)
self.MainMenu.Append(self.DatabaseMenu, "Data&base")
# ## DI Menu
self.DIMenu = wx.Menu()
self.DIPath2DI = wx.MenuItem(self.DIMenu, 501, "&Load DI data...\tCtrl+L", "Load DI data...", wx.ITEM_NORMAL)
self.DIMenu.AppendItem(self.DIPath2DI)
self.DIPath2Vario = wx.MenuItem(self.DIMenu, 502, "Path to &variometer data...\tCtrl+A", "Variometer data...", wx.ITEM_NORMAL)
self.DIMenu.AppendItem(self.DIPath2Vario)
self.DIPath2Scalar = wx.MenuItem(self.DIMenu, 503, "Path to scala&r data...\tCtrl+R", "Scalar data...", wx.ITEM_NORMAL)
self.DIMenu.AppendItem(self.DIPath2Scalar)
self.DIMenu.AppendSeparator()
self.DIInputSheet = wx.MenuItem(self.DIMenu, 504, "O&pen input sheet...\tCtrl+P", "Input sheet...", wx.ITEM_NORMAL)
self.DIMenu.AppendItem(self.DIInputSheet)
self.MainMenu.Append(self.DIMenu, "D&I")
# ## Stream Operations
self.StreamOperationsMenu = wx.Menu()
self.StreamAddListSelect = wx.MenuItem(self.StreamOperationsMenu, 601, "Add current &working state to Streamlist...\tCtrl+W", "Add Stream", wx.ITEM_NORMAL)
self.StreamOperationsMenu.AppendItem(self.StreamAddListSelect)
self.StreamOperationsMenu.AppendSeparator()
self.StreamListSelect = wx.MenuItem(self.StreamOperationsMenu, 602, "Select active Strea&m...\tCtrl+M", "Select Stream", wx.ITEM_NORMAL)
self.StreamOperationsMenu.AppendItem(self.StreamListSelect)
self.MainMenu.Append(self.StreamOperationsMenu, "StreamO&perations")
# ## Options Menu
self.OptionsMenu = wx.Menu()
self.OptionsInitItem = wx.MenuItem(self.OptionsMenu, 401, "&Basic initialisation parameter\tCtrl+B", "Modify general defaults (e.g. DB, paths)", wx.ITEM_NORMAL)
self.OptionsMenu.AppendItem(self.OptionsInitItem)
self.OptionsMenu.AppendSeparator()
self.OptionsDIItem = wx.MenuItem(self.OptionsMenu, 402, "DI &initialisation parameter\tCtrl+I", "Modify DI related parameters (e.g. thresholds, paths, input sheet layout)", wx.ITEM_NORMAL)
self.OptionsMenu.AppendItem(self.OptionsDIItem)
#self.OptionsMenu.AppendSeparator()
#self.OptionsObsItem = wx.MenuItem(self.OptionsMenu, 403, "Observator&y specifications\tCtrl+Y", "Modify observatory specific meta data (e.g. pears, offsets)", wx.ITEM_NORMAL)
#self.OptionsMenu.AppendItem(self.OptionsObsItem)
self.MainMenu.Append(self.OptionsMenu, "&Options")
self.HelpMenu = wx.Menu()
self.HelpAboutItem = wx.MenuItem(self.HelpMenu, 301, "&About...", "Display general information about the program", wx.ITEM_NORMAL)
self.HelpMenu.AppendItem(self.HelpAboutItem)
self.HelpReadFormatsItem = wx.MenuItem(self.HelpMenu, 302, "Read Formats...", "Supported data formats to read", wx.ITEM_NORMAL)
self.HelpMenu.AppendItem(self.HelpReadFormatsItem)
self.HelpWriteFormatsItem = wx.MenuItem(self.HelpMenu, 303, "Write Formats...", "Supported data formats to write", wx.ITEM_NORMAL)
self.HelpMenu.AppendItem(self.HelpWriteFormatsItem)
self.MainMenu.Append(self.HelpMenu, "&Help")
self.SetMenuBar(self.MainMenu)
# Menu Bar end
self.__set_properties()
# BindingControls on the menu
self.Bind(wx.EVT_MENU, self.OnOpenDir, self.DirOpen)
self.Bind(wx.EVT_MENU, self.OnOpenFile, self.FileOpen)
self.Bind(wx.EVT_MENU, self.OnOpenURL, self.WebOpen)
self.Bind(wx.EVT_MENU, self.OnOpenDB, self.DBOpen)
self.Bind(wx.EVT_MENU, self.OnExportData, self.ExportData)
self.Bind(wx.EVT_MENU, self.OnFileQuit, self.FileQuitItem)
self.Bind(wx.EVT_MENU, self.OnDBConnect, self.DBConnect)
self.Bind(wx.EVT_MENU, self.OnDBInit, self.DBInit)
self.Bind(wx.EVT_MENU, self.OnStreamList, self.StreamListSelect)
self.Bind(wx.EVT_MENU, self.OnStreamAdd, self.StreamAddListSelect)
self.Bind(wx.EVT_MENU, self.onLoadDI, self.DIPath2DI)
self.Bind(wx.EVT_MENU, self.onDefineVario, self.DIPath2Vario)
self.Bind(wx.EVT_MENU, self.onDefineScalar, self.DIPath2Scalar)
self.Bind(wx.EVT_MENU, self.onInputSheet, self.DIInputSheet)
self.Bind(wx.EVT_MENU, self.OnOptionsInit, self.OptionsInitItem)
self.Bind(wx.EVT_MENU, self.OnOptionsDI, self.OptionsDIItem)
#self.Bind(wx.EVT_MENU, self.OnOptionsObs, self.OptionsObsItem)
self.Bind(wx.EVT_MENU, self.OnHelpAbout, self.HelpAboutItem)
self.Bind(wx.EVT_MENU, self.OnHelpReadFormats, self.HelpReadFormatsItem)
self.Bind(wx.EVT_MENU, self.OnHelpWriteFormats, self.HelpWriteFormatsItem)
self.Bind(wx.EVT_CLOSE, self.OnFileQuit) #Bind the EVT_CLOSE event to FileQuit()
# BindingControls on the notebooks
# Stream Page
# ------------------------
#self.Bind(wx.EVT_BUTTON, self.onOpenStreamButton, self.menu_p.str_page.openStreamButton)
self.Bind(wx.EVT_BUTTON, self.onTrimStreamButton, self.menu_p.str_page.trimStreamButton)
self.Bind(wx.EVT_BUTTON, self.onSelectKeys, self.menu_p.str_page.selectKeysButton)
self.Bind(wx.EVT_BUTTON, self.onExtractData, self.menu_p.str_page.extractValuesButton)
self.Bind(wx.EVT_BUTTON, self.onChangePlotOptions, self.menu_p.str_page.changePlotButton)
self.Bind(wx.EVT_BUTTON, self.onRestoreData, self.menu_p.str_page.restoreButton)
self.Bind(wx.EVT_CHECKBOX, self.onAnnotateCheckBox, self.menu_p.str_page.annotateCheckBox)
self.Bind(wx.EVT_CHECKBOX, self.onErrorBarCheckBox, self.menu_p.str_page.errorBarsCheckBox)
self.Bind(wx.EVT_CHECKBOX, self.onConfinexCheckBox, self.menu_p.str_page.confinexCheckBox)
self.Bind(wx.EVT_BUTTON, self.onDailyMeansButton, self.menu_p.str_page.dailyMeansButton)
self.Bind(wx.EVT_BUTTON, self.onApplyBCButton, self.menu_p.str_page.applyBCButton)
self.Bind(wx.EVT_RADIOBOX, self.onChangeComp, self.menu_p.str_page.compRadioBox)
self.Bind(wx.EVT_RADIOBOX, self.onChangeSymbol, self.menu_p.str_page.symbolRadioBox)
self.Bind(wx.EVT_BUTTON, self.onFlagOutlierButton, self.menu_p.str_page.flagOutlierButton)
self.Bind(wx.EVT_BUTTON, self.onFlagSelectionButton, self.menu_p.str_page.flagSelectionButton)
self.Bind(wx.EVT_BUTTON, self.onFlagRangeButton, self.menu_p.str_page.flagRangeButton)
self.Bind(wx.EVT_BUTTON, self.onFlagLoadButton, self.menu_p.str_page.flagLoadButton)
self.Bind(wx.EVT_BUTTON, self.onFlagSaveButton, self.menu_p.str_page.flagSaveButton)
self.Bind(wx.EVT_BUTTON, self.onFlagDropButton, self.menu_p.str_page.flagDropButton)
self.Bind(wx.EVT_BUTTON, self.onFlagMinButton, self.menu_p.str_page.flagMinButton)
self.Bind(wx.EVT_BUTTON, self.onFlagMaxButton, self.menu_p.str_page.flagMaxButton)
# Meta Page
# --------------------------
#self.Bind(wx.EVT_BUTTON, self.onFilterButton, self.menu_p.met_page.filterButton)
# Contains General info on top - previously on analysis page
# add sensor id, sensor name to general info
# add button with GetFromDB, WriteToDB (only active when DB connected) - WriteToDB only specific dataID or all sensorsID
# provide text boxes with data, sensor and station related info
# Edit/Review Data related meta data
# button
# textbox with existing Meta
# Edit/Review Sensor related meta data
# ....
# .... and so on
self.Bind(wx.EVT_BUTTON, self.onMetaGetDBButton, self.menu_p.met_page.getDBButton)
self.Bind(wx.EVT_BUTTON, self.onMetaPutDBButton, self.menu_p.met_page.putDBButton)
self.Bind(wx.EVT_BUTTON, self.onMetaDataButton, self.menu_p.met_page.MetaDataButton)
self.Bind(wx.EVT_BUTTON, self.onMetaSensorButton, self.menu_p.met_page.MetaSensorButton)
self.Bind(wx.EVT_BUTTON, self.onMetaStationButton, self.menu_p.met_page.MetaStationButton)
# Analysis Page
# --------------------------
self.Bind(wx.EVT_BUTTON, self.onDerivativeButton, self.menu_p.ana_page.derivativeButton)
self.Bind(wx.EVT_BUTTON, self.onRotationButton, self.menu_p.ana_page.rotationButton)
self.Bind(wx.EVT_BUTTON, self.onFitButton, self.menu_p.ana_page.fitButton)
self.Bind(wx.EVT_BUTTON, self.onMeanButton, self.menu_p.ana_page.meanButton)
self.Bind(wx.EVT_BUTTON, self.onMaxButton, self.menu_p.ana_page.maxButton)
self.Bind(wx.EVT_BUTTON, self.onMinButton, self.menu_p.ana_page.minButton)
self.Bind(wx.EVT_BUTTON, self.onOffsetButton, self.menu_p.ana_page.offsetButton)
self.Bind(wx.EVT_BUTTON, self.onFilterButton, self.menu_p.ana_page.filterButton)
self.Bind(wx.EVT_BUTTON, self.onSmoothButton, self.menu_p.ana_page.smoothButton)
self.Bind(wx.EVT_BUTTON, self.onActivityButton, self.menu_p.ana_page.activityButton)
self.Bind(wx.EVT_BUTTON, self.onBaselineButton, self.menu_p.ana_page.baselineButton)
self.Bind(wx.EVT_BUTTON, self.onDeltafButton, self.menu_p.ana_page.deltafButton)
# DI Page
# --------------------------
self.Bind(wx.EVT_BUTTON, self.onLoadDI, self.menu_p.abs_page.loadDIButton)
self.Bind(wx.EVT_BUTTON, self.onDefineVario, self.menu_p.abs_page.defineVarioButton)
self.Bind(wx.EVT_BUTTON, self.onDefineScalar, self.menu_p.abs_page.defineScalarButton)
self.Bind(wx.EVT_BUTTON, self.onDIAnalyze, self.menu_p.abs_page.AnalyzeButton)
self.Bind(wx.EVT_BUTTON, self.onDISetParameter, self.menu_p.abs_page.advancedButton)
self.Bind(wx.EVT_BUTTON, self.onSaveDIData, self.menu_p.abs_page.SaveLogButton)
self.Bind(wx.EVT_BUTTON, self.onClearDIData, self.menu_p.abs_page.ClearLogButton)
# Report Page
# --------------------------
self.Bind(wx.EVT_BUTTON, self.onSaveLogButton, self.menu_p.rep_page.saveLoggerButton)
self.menu_p.rep_page.logMsg('Begin logging...')
        # Possibly remove this redirection later because it might cause problems in other classes
#redir=RedirectText(self.menu_p.rep_page.logMsg) # Start redirecting stdout to log window
#sys.stdout=redir
# Monitor Page
# --------------------------
self.Bind(wx.EVT_BUTTON, self.onConnectMARCOSButton, self.menu_p.com_page.getMARCOSButton)
self.Bind(wx.EVT_BUTTON, self.onConnectMARTASButton, self.menu_p.com_page.getMARTASButton)
self.Bind(wx.EVT_BUTTON, self.onConnectMQTTButton, self.menu_p.com_page.getMQTTButton)
self.Bind(wx.EVT_BUTTON, self.onStartMonitorButton, self.menu_p.com_page.startMonitorButton)
self.Bind(wx.EVT_BUTTON, self.onStopMonitorButton, self.menu_p.com_page.stopMonitorButton)
self.Bind(wx.EVT_BUTTON, self.onLogDataButton, self.menu_p.com_page.saveMonitorButton)
# Connect to database
self._db_connect(self.options.get('host',''), self.options.get('user',''), self.options.get('passwd',''), self.options.get('dbname',''))
        # Disable yet unavailable buttons
# --------------------------
self.DeactivateAllControls()
self.sp.SplitVertically(self.plot_p,self.menu_p,800)
def __set_properties(self):
self.SetTitle("MagPy")
self.SetSize((1200, 800))
self.SetFocus()
self.StatusBar.SetStatusWidths([-1, -1])
# statusbar fields
StatusBar_fields = ["Ready", ""]
for i in range(len(StatusBar_fields)):
self.StatusBar.SetStatusText(StatusBar_fields[i], i)
self.menu_p.SetMinSize((400, 750))
self.plot_p.SetMinSize((100, 100))
def InitPlotParameter(self, keylist = None):
# Kwargs for plotting
#self.annotate = True
self.menu_p.str_page.annotateCheckBox.SetValue(True)
#self.errorbars = False
self.menu_p.str_page.errorBarsCheckBox.SetValue(False)
#self.confinex = False
self.menu_p.str_page.confinexCheckBox.SetValue(False)
#self.fullday = False
#self.includeid = False
#self.grid = True
#self.padding = None
#self.specialdict={}
self.colorlist = ['b','g','m','c','y','k','b','g','m','c','y','k']
#self.stormphases=None
#self.t_stormphases={}
#self.function=None
#self.plottype='discontinuous'
#self.labels=False
self.resolution=None
self.monitorSource=None
#collist=['b','g','m','c','y','k','b','g','m','c','y','k']
# please note: symbol and colorlists are defined in ActivateControls
#print ("colorlist", collist[:lenkeys])
#self.plotopt = {'labels':'None' , 'padding': 'None', 'stormphases': False, 'specialdict': {}, 'bartrange':'None', 'bgcolor': 'white', 'colorlist': ",".join(collist[:lenkeys]) ,'fullday':'False', 'grid':'True','gridcolor':'#316931', 'includeid':'False', 'labelcolor':'0.2', 'legendposition':'upper left', 'plottitle':'', 'plottype':'discontinuous', 'symbollist':",".join(self.symbollist),'t_stormphases':'None', 'opacity':'0.0'}
self.plotopt = {'labels':None ,
'errorbars':False,
'confinex':False,
'annotate':False,
'padding': None,
'stormphases': False,
'specialdict': {},
'bartrange':0.06,
'bgcolor': 'white',
'colorlist': [],
'fullday':False,
'grid':True,
'gridcolor':'#316931',
'includeid':False,
'labelcolor':'0.2',
'legendposition':'upper left',
'plottitle':'',
'plottype':'discontinuous',
'symbollist': [],
't_stormphases':{},
'opacity':1.0,
'function':None}
def initParameter(self, dictionary):
# Variable initializations
pwd = dictionary.get('passwd')
#self.passwd = base64.b64decode(pwd)
self.dirname = dictionary.get('dirname','')
self.dipathlist = dictionary.get('dipathlist','')
self.options = dictionary
self.options['passwd'] = base64.b64decode(pwd)
# ################
    # Updating and reinitialization methods:
def DeactivateAllControls(self):
"""
DESCRIPTION
To be used at start and when an empty stream is loaded
Deactivates all control elements
"""
# Menu
self.ExportData.Enable(False)
# Stream
self.menu_p.str_page.pathTextCtrl.Disable() # remain disabled
self.menu_p.str_page.fileTextCtrl.Disable() # remain disabled
self.menu_p.str_page.startDatePicker.Disable() # always
self.menu_p.str_page.startTimePicker.Disable() # always
self.menu_p.str_page.endDatePicker.Disable() # always
self.menu_p.str_page.endTimePicker.Disable() # always
## TODO Modify method below - when directory/database is selected, automatically open dialog
## to modify time range and other read options
#self.menu_p.str_page.openStreamButton.Disable()
self.menu_p.str_page.trimStreamButton.Disable() # always
self.menu_p.str_page.restoreButton.Disable() # always
self.menu_p.str_page.selectKeysButton.Disable() # always
self.menu_p.str_page.extractValuesButton.Disable() # always
self.menu_p.str_page.changePlotButton.Disable() # always
self.menu_p.str_page.flagOutlierButton.Disable() # always
self.menu_p.str_page.flagSelectionButton.Disable() # always
self.menu_p.str_page.flagRangeButton.Disable() # always
self.menu_p.str_page.flagLoadButton.Disable() # always
self.menu_p.str_page.flagMinButton.Disable() # always
self.menu_p.str_page.flagMaxButton.Disable() # always
self.menu_p.str_page.xCheckBox.Disable() # always
self.menu_p.str_page.yCheckBox.Disable() # always
self.menu_p.str_page.zCheckBox.Disable() # always
self.menu_p.str_page.fCheckBox.Disable() # always
self.menu_p.str_page.FlagIDComboBox.Disable() # always
        self.menu_p.str_page.flagDropButton.Disable() # activated if annotations are present
        self.menu_p.str_page.flagSaveButton.Disable() # activated if annotations are present
        self.menu_p.str_page.dailyMeansButton.Disable() # activated for DI data
        self.menu_p.str_page.applyBCButton.Disable() # activated if DataAbsInfo is present
        self.menu_p.str_page.annotateCheckBox.Disable() # activated if annotations are present
        self.menu_p.str_page.errorBarsCheckBox.Disable() # activated if delta columns are present and not a DI file
        self.menu_p.str_page.confinexCheckBox.Disable() # always
        self.menu_p.str_page.compRadioBox.Disable() # activated if xyz, hdz or idf
        self.menu_p.str_page.symbolRadioBox.Disable() # activated if fewer than 2000 points, active for DI data
# Meta
self.menu_p.met_page.getDBButton.Disable() # activated when DB is connected
self.menu_p.met_page.putDBButton.Disable() # activated when DB is connected
self.menu_p.met_page.MetaDataButton.Disable() # remain disabled
self.menu_p.met_page.MetaSensorButton.Disable() # remain disabled
self.menu_p.met_page.MetaStationButton.Disable() # remain disabled
self.menu_p.met_page.stationTextCtrl.Disable() # remain disabled
self.menu_p.met_page.sensorTextCtrl.Disable() # remain disabled
self.menu_p.met_page.dataTextCtrl.Disable() # remain disabled
# DI
self.menu_p.abs_page.AnalyzeButton.Disable() # activate if DI data is present i.e. diTextCtrl contains data
self.menu_p.abs_page.loadDIButton.Enable() # remain enabled
self.menu_p.abs_page.diTextCtrl.Disable() # remain disabled
self.menu_p.abs_page.defineVarioButton.Enable() # remain enabled
self.menu_p.abs_page.varioTextCtrl.Disable() # remain disabled
self.menu_p.abs_page.defineScalarButton.Enable() # remain enabled
self.menu_p.abs_page.scalarTextCtrl.Disable() # remain disabled
self.menu_p.abs_page.dilogTextCtrl.Disable() # remain disabled
self.menu_p.abs_page.ClearLogButton.Disable() # Activate if log contains text
self.menu_p.abs_page.SaveLogButton.Disable() # Activate if log contains text
self.menu_p.abs_page.varioTextCtrl.SetValue(self.options.get('divariopath',''))
self.menu_p.abs_page.scalarTextCtrl.SetValue(self.options.get('discalarpath',''))
# Analysis
self.menu_p.ana_page.rotationButton.Disable() # if xyz magnetic data
self.menu_p.ana_page.derivativeButton.Disable() # always
self.menu_p.ana_page.fitButton.Disable() # always
self.menu_p.ana_page.meanButton.Disable() # always
self.menu_p.ana_page.maxButton.Disable() # always
self.menu_p.ana_page.minButton.Disable() # always
self.menu_p.ana_page.offsetButton.Disable() # always
self.menu_p.ana_page.filterButton.Disable() # always
self.menu_p.ana_page.smoothButton.Disable() # always
self.menu_p.ana_page.activityButton.Disable() # if xyz, hdz magnetic data
self.menu_p.ana_page.baselineButton.Disable() # if absstream in streamlist
self.menu_p.ana_page.deltafButton.Disable() # if xyzf available
#self.menu_p.ana_page.mergeButton.Disable() # if len(self.streamlist) > 1
#self.menu_p.ana_page.subtractButton.Disable() # if len(self.streamlist) > 1
#self.menu_p.ana_page.stackButton.Disable() # if len(self.streamlist) > 1
# Report
self.menu_p.rep_page.logger.Disable() # remain disabled
# Monitor
self.menu_p.com_page.connectionLogTextCtrl.Disable() # remain disabled
self.menu_p.com_page.startMonitorButton.Disable() # always
self.menu_p.com_page.stopMonitorButton.Disable() # always
self.menu_p.com_page.saveMonitorButton.Disable() # always
self.menu_p.com_page.coverageTextCtrl.Disable() # always
self.menu_p.com_page.frequSlider.Disable() # always
self.menu_p.com_page.marcosLabel.SetBackgroundColour((255,23,23))
self.menu_p.com_page.martasLabel.SetBackgroundColour((255,23,23))
self.menu_p.com_page.mqttLabel.SetBackgroundColour((255,23,23))
self.menu_p.com_page.marcosLabel.SetValue('not connected')
self.menu_p.com_page.martasLabel.SetValue('not connected')
self.menu_p.com_page.mqttLabel.SetValue('not connected')
def ActivateControls(self,stream):
"""
DESCRIPTION
        Checks contents of stream and state of program.
        Activates controls depending on the outcome of those checks
"""
baselineexists = False
# initially reset all controls
self.DeactivateAllControls()
if not len(stream.ndarray[0]) > 0:
self.changeStatusbar("No data available")
return
# Always part
# --------------------------------
# Length
n = stream.length()[0]
keys = stream._get_key_headers()
keystr = ','.join(keys)
        if len(self.shownkeylist) == 0: ## Initialize self.shownkeylist if not yet done
keylist = [elem for elem in keys if elem in NUMKEYLIST]
self.shownkeylist = keylist[:9]
# Reset line/point selection
if n < 2000:
self.menu_p.str_page.symbolRadioBox.Enable()
else:
self.menu_p.str_page.symbolRadioBox.SetStringSelection('line')
self.menu_p.str_page.symbolRadioBox.Disable()
if len(self.plotopt.get('symbollist',[])) == len(self.shownkeylist):
# everything is fine use current symbollist
pass
elif self.menu_p.str_page.symbolRadioBox.GetStringSelection() == 'line':
self.symbollist = ['-'] * len(self.shownkeylist)
self.plotopt['symbollist'] = ['-'] * len(self.shownkeylist)
else:
self.symbollist = ['o'] * len(self.shownkeylist)
self.plotopt['symbollist'] = ['o'] * len(self.shownkeylist)
# Other plot options, which are related to len(shownkeylist)
if not len(self.plotopt.get('colorlist',[])) == len(self.shownkeylist):
self.plotopt['colorlist'] = self.colorlist[:len(self.shownkeylist)]
self.UpdatePlotOptions(self.shownkeylist)
# Sampling rate
try:
sr = stream.samplingrate()
except:
print ("Sampling rate determinations failed - might happen in DI files")
sr = 9999
# Coverage
ind = np.argmin(stream.ndarray[0].astype(float))
mintime = stream._get_min('time')
maxtime = stream._get_max('time')
# Flag column
commidx = KEYLIST.index('comment')
commcol = stream.ndarray[commidx]
commcol = np.asarray([el for el in commcol if not el in ['','-',np.nan]])
# Delta
deltas = False
if 'dx' in keys or 'dy' in keys or 'dz' in keys or 'df' in keys:
deltas = True
# Essential header info
comps = stream.header.get('DataComponents','')[:3]
sensorid = stream.header.get('SensorID','')
dataid = self.plotstream.header.get('DataID','')
formattype = self.plotstream.header.get('DataFormat','')
absinfo = self.plotstream.header.get('DataAbsInfo',None)
metadatatext = ''
metasensortext = ''
metastationtext = ''
for key in stream.header:
#print ("Activate", key)
if key.startswith('Data'):
value = stream.header.get(key,'')
#try: # python 3
if not isinstance(value, basestring): # p3: str
try:
if self.plotstream._is_number(value):
pass
else:
value = 'object - contains complex data'
except:
value = 'object - contains complex data'
#print ("-- ", value)
metadatatext += "{}: \t{}\n".format(key.replace('Data',''),value)
if key.startswith('Sensor'):
metasensortext += "{}: \t{}\n".format(key.replace('Sensor',''),stream.header.get(key,'')) # key.replace('Sensor','')+': \t'+stream.header.get(key,'')+'\n'
if key.startswith('Station'):
metastationtext += "{}: \t{}\n".format(key.replace('Station',''),stream.header.get(key,'')) #key.replace('Station','')+': \t'+stream.header.get(key,'')+'\n'
# Append baselineinfo to baselinedictlist
if formattype == 'MagPyDI':
filename = self.menu_p.str_page.fileTextCtrl.GetValue()
basedict = {'startdate':mintime,'enddate':maxtime, 'filename':filename, 'streamidx':len(self.streamlist)-1}
self.baselinedictlst.append(basedict)
def checkbaseline(baselinedictlst, sensorid, mintime, maxtime):
"""
            DESCRIPTION:
                check whether valid baseline info exists
            PARAMETER:
                uses the global self.baselinedictlst
                sets baselineidxlst
            RETURNS:
                baselineidxlst, e.g. [1,3,4]: the stream indices of the
                currently matching baseline data
"""
# check self.baseline dictionary
baselineidxlst = []
#print (baselinedictlst)
for basedict in baselinedictlst:
startdate = basedict['startdate']
enddate = basedict['enddate']
if sensorid in basedict['filename']:
#print ("found filename")
if mintime <= startdate <= maxtime or mintime <= enddate <= maxtime or (startdate <= mintime and enddate >= maxtime):
baselineidxlst.append(basedict['streamidx'])
return baselineidxlst
# Activate "always" fields
# ----------------------------------------
# menu
self.ExportData.Enable(True)
# ----------------------------------------
# stream page
self.menu_p.str_page.startDatePicker.Enable() # always
self.menu_p.str_page.startTimePicker.Enable() # always
self.menu_p.str_page.endDatePicker.Enable() # always
self.menu_p.str_page.endTimePicker.Enable() # always
self.menu_p.str_page.trimStreamButton.Enable() # always
self.menu_p.str_page.restoreButton.Enable() # always
self.menu_p.str_page.selectKeysButton.Enable() # always
self.menu_p.str_page.extractValuesButton.Enable() # always
self.menu_p.str_page.changePlotButton.Enable() # always
self.menu_p.str_page.flagOutlierButton.Enable() # always
self.menu_p.str_page.flagSelectionButton.Enable() # always
self.menu_p.str_page.flagRangeButton.Enable() # always
self.menu_p.str_page.flagLoadButton.Enable() # always
self.menu_p.str_page.flagMinButton.Enable() # always
self.menu_p.str_page.flagMaxButton.Enable() # always
self.menu_p.str_page.FlagIDComboBox.Enable() # always
self.menu_p.str_page.confinexCheckBox.Enable() # always
self.menu_p.met_page.MetaDataButton.Enable() # always
self.menu_p.met_page.MetaSensorButton.Enable() # always
self.menu_p.met_page.MetaStationButton.Enable() # always
# ----------------------------------------
# analysis page
self.menu_p.ana_page.derivativeButton.Enable() # always
self.menu_p.ana_page.fitButton.Enable() # always
self.menu_p.ana_page.meanButton.Enable() # always
self.menu_p.ana_page.maxButton.Enable() # always
self.menu_p.ana_page.minButton.Enable() # always
self.menu_p.ana_page.offsetButton.Enable() # always
self.menu_p.ana_page.filterButton.Enable() # always
self.menu_p.ana_page.smoothButton.Enable() # always
# Selective fields
# ----------------------------------------
if comps in ['xyz','XYZ','hdz','HDZ','idf','IDF','hez','HEZ']:
self.menu_p.str_page.compRadioBox.Enable()
if comps in ['hdz','HDZ']:
self.menu_p.str_page.compRadioBox.SetStringSelection('hdz')
self.compselect = 'hdz'
elif comps in ['idf','IDF']:
self.menu_p.str_page.compRadioBox.SetStringSelection('idf')
self.compselect = 'idf'
else:
self.menu_p.str_page.compRadioBox.SetStringSelection('xyz')
self.compselect = 'xyz'
if len(commcol) > 0:
            self.menu_p.str_page.flagDropButton.Enable() # activated if annotations are present
            self.menu_p.str_page.flagSaveButton.Enable() # activated if annotations are present
            self.menu_p.str_page.annotateCheckBox.Enable() # activated if annotations are present
if self.menu_p.str_page.annotateCheckBox.GetValue():
self.menu_p.str_page.annotateCheckBox.SetValue(True)
self.plotopt['annotate'] = True # activate annotation
if formattype == 'MagPyDI':
self.menu_p.str_page.dailyMeansButton.Enable() # activated for DI data
self.menu_p.str_page.symbolRadioBox.Enable() # activated for DI data
if deltas and not formattype == 'MagPyDI' and not sensorid.startswith('GP20S3'):
self.menu_p.str_page.errorBarsCheckBox.Enable() # activated if delta columns are present and not DI file
if not absinfo == None:
self.menu_p.str_page.applyBCButton.Enable() # activated if DataAbsInfo is present
if n < 2000:
            self.menu_p.str_page.symbolRadioBox.Enable() # activated if fewer than 2000 points, active for DI data
if not dataid == '' and self.db:
self.menu_p.met_page.getDBButton.Enable() # activated when DB is connected
self.menu_p.met_page.putDBButton.Enable() # activated when DB is connected
if not str(self.menu_p.abs_page.dilogTextCtrl.GetValue()) == '':
self.menu_p.abs_page.ClearLogButton.Enable()
self.menu_p.abs_page.SaveLogButton.Enable()
if 'x' in keys and 'y' in keys and 'z' in keys:
self.menu_p.ana_page.rotationButton.Enable() # activate if vector appears to be present
self.menu_p.ana_page.activityButton.Enable() # activate if vector appears to be present
if 'f' in keys and not 'df' in keys:
self.menu_p.ana_page.deltafButton.Enable() # activate if full vector present
if not formattype == 'MagPyDI':
#print ("Checking baseline info")
self.baselineidxlst = checkbaseline(self.baselinedictlst, sensorid, mintime, maxtime)
if len(self.baselineidxlst) > 0:
self.menu_p.ana_page.baselineButton.Enable() # activate if baselinedata is existing
# Update "information" fields
# ----------------------------------------
self.menu_p.met_page.amountTextCtrl.SetValue(str(n))
self.menu_p.met_page.samplingrateTextCtrl.SetValue(str(sr))
self.menu_p.met_page.keysTextCtrl.SetValue(keystr)
self.menu_p.met_page.typeTextCtrl.SetValue(formattype)
self.menu_p.met_page.dataTextCtrl.SetValue(metadatatext)
self.menu_p.met_page.sensorTextCtrl.SetValue(metasensortext)
self.menu_p.met_page.stationTextCtrl.SetValue(metastationtext)
self.menu_p.str_page.startDatePicker.SetValue(pydate2wxdate(num2date(mintime)))
self.menu_p.str_page.endDatePicker.SetValue(pydate2wxdate(num2date(maxtime)))
self.menu_p.str_page.startTimePicker.SetValue(num2date(mintime).strftime('%X'))
self.menu_p.str_page.endTimePicker.SetValue(num2date(maxtime).strftime('%X'))
self.menu_p.abs_page.varioTextCtrl.SetValue(self.options.get('divariopath',''))
self.menu_p.abs_page.scalarTextCtrl.SetValue(self.options.get('discalarpath',''))
def InitialRead(self,stream):
"""
DESCRIPTION
            Backs up stream content and adds the current stream and header info
            to streamlist and headerlist.
            Creates a plotstream copy and stores pointers towards the lists.
            Checks whether an ndarray is present and whether data is present at all
"""
if not len(stream.ndarray[0]) > 0:
stream = stream.linestruct2ndarray()
if not len(stream.ndarray[0]) > 0:
self.DeactivateAllControls()
self.changeStatusbar("No data available")
return False
self.stream = stream
self.plotstream = self.stream.copy()
currentstreamindex = len(self.streamlist)
self.streamlist.append(self.stream)
self.headerlist.append(self.stream.header)
self.currentstreamindex = currentstreamindex
# Moved the following to InitialPlot
#self.streamkeylist.append(self.stream._get_key_headers())
#self.plotoptlist.append(self.plotopt)
return True
def UpdatePlotOptions(self,keylist):
#print ("Update plot characteristics")
# check if lists:
#special = self.plotopt.get('specialdict',None)
pads = self.plotopt.get('padding',None)
labs = self.plotopt.get('labels',None)
if not pads or not len(pads[0]) == len(keylist):
#print ("Padding length not fitting")
self.plotopt['padding']= [[0] * len(keylist)]
if not labs or not len(labs[0]) == len(keylist):
#print ("Labels length not fitting")
self.plotopt['labels']= None
#if not special or not len(special[0]) == len(keylist):
# #print ("specialdict length not fitting")
# self.plotopt['specialdict']= None
def UpdatePlotCharacteristics(self,stream):
"""
DESCRIPTION
Checks and activates plot options, checks for correct lengths of all list options
"""
# Some general Checks on Stream
# ##############################
        # 1. Preselect first nine keys and set up default options
keylist = []
keylist = stream._get_key_headers(limit=9)
        # TODO: possibly remove keys with a high percentage of NaNs
#for key in keylist:
# ar = [eval('elem.'+key) for elem in stream if not isnan(eval('elem.'+key))]
# div = float(len(ar))/float(len(stream))*100.0
# if div <= 5.:
# keylist.remove(key)
keylist = [elem for elem in keylist if elem in NUMKEYLIST]
# The following will be overwritten by ActivateControls
self.symbollist = ['-'] * len(keylist)
self.plotopt['symbollist'] = ['-'] * len(keylist)
self.plotopt['colorlist']=self.colorlist[:len(keylist)]
self.plotopt['plottitle'] = stream.header.get('StationID')
self.menu_p.str_page.symbolRadioBox.SetStringSelection('line')
self.menu_p.str_page.dailyMeansButton.Disable()
        # 2. If the stream is too long then don't allow scatter plots -- too slow
if stream.length()[0] < 2000:
self.menu_p.str_page.symbolRadioBox.Enable()
else:
self.menu_p.str_page.symbolRadioBox.Disable()
# 3. If DataFormat = MagPyDI then preselect scatter, and idf and basevalues
if stream.header.get('DataFormat') == 'MagPyDI':
self.menu_p.str_page.symbolRadioBox.Enable()
self.menu_p.str_page.symbolRadioBox.SetStringSelection('point')
self.shownkeylist = keylist
keylist = ['x','y','z','dx','dy','dz']
self.symbollist = ['o'] * len(keylist)
self.plotopt['symbollist'] = ['o'] * len(keylist)
self.plotopt['colorlist']=self.colorlist[:len(keylist)]
# enable daily average button
self.menu_p.str_page.dailyMeansButton.Enable()
# 4. If K values are shown: preselect bar chart
if 'var1' in keylist and stream.header.get('col-var1','').startswith('K'):
print ("Found K values - apply self.plotopt")
self.plotopt['specialdict']=[{'var1':[0,9]}]
pos = keylist.index('var1')
self.plotopt['symbollist'][pos] = 'z'
self.plotopt['bartrange'] = 0.06
self.plotopt['opacity'] = 1.0
self.shownkeylist = keylist
"""
# 4. If DataFormat = MagPyDI then preselect scatter, and idf and basevalues
typus = stream.header.get('DataComponents')
try:
typus = typus.lower()[:3]
except:
typus = ''
if typus in ['xyz','hdz','idf']:
self.compselect = typus
self.menu_p.str_page.compRadioBox.Enable()
self.menu_p.str_page.compRadioBox.SetStringSelection(self.compselect)
else:
if 'x' in keylist and 'y' in keylist and 'z' in keylist:
self.compselect = 'xyz'
self.menu_p.str_page.compRadioBox.Enable()
"""
# 5. Baseline correction if Object contained in stream
#if stream.header.get('DataAbsFunctionObject'):
# self.menu_p.str_page.applyBCButton.Enable()
#else:
# self.menu_p.str_page.applyBCButton.Disable()
self.UpdatePlotOptions(keylist)
return keylist
def defaultFileDialogOptions(self):
        ''' Return a dictionary with file dialog options that can be
            used in both the save file dialog and the open
            file dialog. '''
return dict(message='Choose a file', defaultDir=self.dirname,
wildcard='*.*')
def askUserForFilename(self, **dialogOptions):
dialog = wx.FileDialog(self, **dialogOptions)
if dialog.ShowModal() == wx.ID_OK:
userProvidedFilename = True
self.filename = dialog.GetFilename()
self.dirname = dialog.GetDirectory()
#self.SetTitle() # Update the window title with the new filename
else:
userProvidedFilename = False
dialog.Destroy()
return userProvidedFilename
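    # Usage sketch (illustrative, classic wxPython style flags):
    #
    #   if self.askUserForFilename(style=wx.OPEN, **self.defaultFileDialogOptions()):
    #       path = os.path.join(self.dirname, self.filename)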
def OnInitialPlot(self, stream, restore = False):
"""
DEFINITION:
            read stream, extract columns with values and display up to three
            of them by default; then executes guiPlot
"""
self.changeStatusbar("Plotting...")
self.InitPlotParameter()
# Init Controls
self.ActivateControls(self.plotstream)
# Override initial controls: Set setting (like keylist, basic plot options and basevalue selection)
keylist = self.UpdatePlotCharacteristics(self.plotstream)
self.menu_p.rep_page.logMsg('- keys: %s' % (', '.join(keylist)))
#if len(stream) > self.resolution:
# self.menu_p.rep_page.logMsg('- warning: resolution of plot reduced by a factor of %i' % (int(len(stream)/self.resolution)))
        # Possibly change the symbol, as matplotlib reports errors for line plots with many points
if stream.length()[0] > 200000:
self.plotopt['symbollist']= ['.'] * len(keylist)
if not restore:
self.streamkeylist.append(keylist)
self.plotoptlist.append(self.plotopt)
self.plot_p.guiPlot([self.plotstream],[keylist], plotopt=self.plotopt)
boxes = ['x','y','z','f']
for box in boxes:
checkbox = getattr(self.menu_p.str_page, box + 'CheckBox')
if box in self.shownkeylist:
checkbox.Enable()
colname = self.plotstream.header.get('col-'+box, '')
if not colname == '':
checkbox.SetLabel(colname)
else:
checkbox.SetValue(False)
self.changeStatusbar("Ready")
def OnPlot(self, stream, keylist, **kwargs):
"""
DEFINITION:
read stream and display
"""
#self.plotopt = {'bgcolor':'green'}
self.changeStatusbar("Plotting...")
#print ("ConfineX:", confinex, symbollist)
"""
self.plot_p.guiPlot([stream],[keylist],padding=padding,specialdict=specialdict,errorbars=errorbars,
colorlist=colorlist,symbollist=symbollist,annotate=annotate,
includeid=includeid, function=function,plottype=plottype,
labels=labels,resolution=resolution,confinex=confinex,plotopt=plotopt)
"""
#print ("Keys", keylist)
if stream.length()[0] > 200000:
self.plotopt['symbollist']= ['.'] * len(keylist)
# Update Delta F if plotted
if 'df' in keylist:
stream = stream.delta_f()
self.plot_p.guiPlot([stream],[keylist],plotopt=self.plotopt)
#self.plot_p.guiPlot(stream,keylist,**kwargs)
if stream.length()[0] > 1 and len(keylist) > 0:
self.ExportData.Enable(True)
boxes = ['x','y','z','f']
for box in boxes:
checkbox = getattr(self.menu_p.str_page, box + 'CheckBox')
if box in self.shownkeylist:
checkbox.Enable()
colname = self.plotstream.header.get('col-'+box, '')
if not colname == '':
checkbox.SetLabel(colname)
else:
checkbox.SetValue(False)
self.changeStatusbar("Ready")
def OnMultiPlot(self, streamlst, keylst, padding=None, specialdict={},errorbars=None,
colorlist=None,symbollist=None,annotate=None,stormphases=None,
t_stormphases={},includeid=False,function=None,plottype='discontinuous',
labels=False,resolution=None, confinex=False, plotopt=None):
"""
DEFINITION:
read several streams and display them in a multi-panel plot
"""
self.changeStatusbar("Plotting...")
"""
- labels: [ (str) ] List of labels for each stream and variable, e.g.:
[ ['FGE'], ['POS-1'], ['ENV-T1', 'ENV-T2'] ]
- padding: (float/list(list)) List of lists containing paddings for each
respective variable, e.g:
[ [5], [5], [0.1, 0.2] ]
(Enter padding = 5 for all plots to use 5 as padding.)
- specialdict: (list(dict)) Same as plot variable, e.g:
[ {'z': [100,150]}, {}, {'t1':[7,8]} ]
"""
#print ("ConfineX:", confinex, symbollist)
self.plot_p.guiPlot(streamlst,keylst)
#if stream.length()[0] > 1 and len(keylist) > 0:
# self.ExportData.Enable(True)
self.changeStatusbar("Ready")
# ################
# Top menu methods:
def OnHelpAbout(self, event):
description = """MagPy is developed for geomagnetic analysis.
Features include support for many data formats, visualization,
advanced analysis routines, url/database accessibility, DI analysis,
non-geomagnetic data support and more.
"""
licence = """MagPy is free software; you can redistribute
it and/or modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2 of the License,
or any later version.
MagPy is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU General Public License for more details. You should have
received a copy of the GNU General Public License along with MagPy;
if not, write to the Free Software Foundation, Inc., 59 Temple Place,
Suite 330, Boston, MA 02111-1307 USA"""
info = wx.AboutDialogInfo()
try:
script_dir = os.path.dirname(__file__)
iconimage = os.path.join(script_dir,'magpy128.xpm')
# Alternative:
#print ("Check", iconimage)
#if sys.platform.startswith('linux'):
info.SetIcon(wx.Icon(iconimage, wx.BITMAP_TYPE_XPM))
except:
pass
info.SetName('MagPy')
info.SetVersion(__version__)
info.SetDescription(description)
info.SetCopyright('(C) 2011 - 2017 Roman Leonhardt, Rachel Bailey, Mojca Miklavec')
info.SetWebSite('http://www.conrad-observatory.at')
info.SetLicence(licence)
info.AddDeveloper('Roman Leonhardt, Rachel Bailey, Mojca Miklavec')
info.AddDocWriter('Leonhardt, Bailey, Miklavec, Matzka')
info.AddArtist('Leonhardt')
info.AddTranslator('Bailey')
wx.AboutBox(info)
def OnHelpWriteFormats(self, event):
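# PYMAG_SUPPORTED_FORMATS maps a format key to [rw-flag string, description]; 'w' marks writable and 'r' readable formats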
WriteFormats = [ "{}: \t{}".format(key, PYMAG_SUPPORTED_FORMATS[key][1]) for key in PYMAG_SUPPORTED_FORMATS if 'w' in PYMAG_SUPPORTED_FORMATS[key][0]]
message = "\n".join(WriteFormats)
dlg = ScrolledMessageDialog(self, message, 'Write formats:')
dlg.ShowModal()
def OnHelpReadFormats(self, event):
ReadFormats = [ "{}: \t{}".format(key, PYMAG_SUPPORTED_FORMATS[key][1]) for key in PYMAG_SUPPORTED_FORMATS if 'r' in PYMAG_SUPPORTED_FORMATS[key][0]]
message = "\n".join(ReadFormats)
dlg = ScrolledMessageDialog(self, message, 'Read formats:')
dlg.ShowModal()
"""
def OnExit(self, event):
print ("Exiting with exit") ### TODO this method is not used at all
if self.db:
self.db.close()
self.Destroy() # Close the main window.
sys.exit()
"""
def OnOpenDir(self, event):
stream = DataStream()
success = False
dialog = wx.DirDialog(None, "Choose a directory:",self.dirname,style=wx.DD_DEFAULT_STYLE | wx.DD_NEW_DIR_BUTTON)
if dialog.ShowModal() == wx.ID_OK:
filelist = glob.glob(os.path.join(dialog.GetPath(),'*'))
self.dirname = dialog.GetPath() # modify self.dirname
files = sorted(filelist, key=os.path.getmtime)
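# Estimate the covered time range: files are sorted by modification time and dates are extracted from the first and last file names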
try:
oldest = extractDateFromString(files[0])[0]
old = wx.DateTimeFromTimeT(time.mktime(oldest.timetuple()))
newest = extractDateFromString(files[-1])[0]
newest = newest+timedelta(days=1)
new = wx.DateTimeFromTimeT(time.mktime(newest.timetuple()))
self.menu_p.str_page.pathTextCtrl.SetValue(dialog.GetPath())
self.menu_p.str_page.fileTextCtrl.SetValue("*")
success = True
except:
success = False
#self.changeStatusbar("Loading data ...")
dialog.Destroy()
if success:
stream = self.openStream(path=self.dirname,mintime=old, maxtime=new, extension='*')
self.menu_p.rep_page.logMsg('{}: found {} data points'.format(self.dirname,len(stream.ndarray[0])))
if self.InitialRead(stream):
#self.ActivateControls(self.plotstream)
self.OnInitialPlot(self.plotstream)
else:
dlg = wx.MessageDialog(self, "Could not identify appropriate files in directory!\n"
"please check and/or try OpenFile\n",
"OpenDirectory", wx.OK|wx.ICON_INFORMATION)
dlg.ShowModal()
self.changeStatusbar("Loading from directory failed ... Ready")
dlg.Destroy()
def OnOpenFile(self, event):
#self.dirname = ''
stream = DataStream()
success = False
stream.header = {}
filelist = []
dlg = wx.FileDialog(self, "Choose a file", self.dirname, "", "*.*", wx.MULTIPLE)
if dlg.ShowModal() == wx.ID_OK:
self.changeStatusbar("Loading data ...")
pathlist = dlg.GetPaths()
try:
for path in pathlist:
elem = os.path.split(path)
self.dirname = elem[0]
filelist.append(elem[1])
self.changeStatusbar(path)
tmp = read(path)
self.changeStatusbar("... found {} rows".format(tmp.length()[0]))
stream.extend(tmp.container,tmp.header,tmp.ndarray)
#stream = read(path_or_url=os.path.join(self.dirname, self.filename),tenHz=True,gpstime=True)
#self.menu_p.str_page.lengthStreamTextCtrl.SetValue(str(len(stream)))
self.filename = ' ,'.join(filelist)
self.menu_p.str_page.fileTextCtrl.SetValue(self.filename)
self.menu_p.str_page.pathTextCtrl.SetValue(self.dirname)
self.menu_p.rep_page.logMsg('{}: found {} data points'.format(self.filename,len(stream.ndarray[0])))
success = True
except:
success = False
dlg.Destroy()
# plot data
if success:
if self.InitialRead(stream):
#self.ActivateControls(self.plotstream)
self.OnInitialPlot(self.plotstream)
else:
dlg = wx.MessageDialog(self, "Could not identify file!\n"
"please check and/or try OpenDirectory\n",
"OpenFile", wx.OK|wx.ICON_INFORMATION)
dlg.ShowModal()
self.changeStatusbar("Loading file failed ... Ready")
dlg.Destroy()
def OnOpenURL(self, event):
stream = DataStream()
success = False
bookmarks = self.options.get('bookmarks',[])
if bookmarks == []:
bookmarks = ['http://www.intermagnet.org/test/ws/?id=BOU','ftp://ftp.nmh.ac.uk/wdc/obsdata/hourval/single_year/2011/fur2011.wdc','ftp://user:passwd@www.zamg.ac.at/data/magnetism/wic/variation/WIC20160627pmin.min','http://www.conrad-observatory.at/zamg/index.php/downloads-en/category/13-definite2015?download=66:wic-2015-0000-pt1m-4','http://www-app3.gfz-potsdam.de/kp_index/qlyymm.tab']
dlg = OpenWebAddressDialog(None, title='Open URL', favorites=bookmarks)
if dlg.ShowModal() == wx.ID_OK:
url = dlg.urlTextCtrl.GetValue()
self.changeStatusbar("Loading data ... be patient")
try:
if not url.endswith('/'):
self.menu_p.str_page.pathTextCtrl.SetValue(url)
self.menu_p.str_page.fileTextCtrl.SetValue(url.split('/')[-1])
try:
stream = read(path_or_url=url)
success = True
except:
success = False
else:
self.menu_p.str_page.pathTextCtrl.SetValue(url)
mintime = pydate2wxdate(datetime(1777,4,30)) # Gauss
maxtime = pydate2wxdate(datetime(2233,3,22)) # Kirk
try:
stream = self.openStream(path=url, mintime=mintime, maxtime=maxtime, extension='*')
success = True
except:
success = False
except:
pass
dlg.Destroy()
if success:
self.menu_p.rep_page.logMsg('{}: found {} data points'.format(url,len(stream.ndarray[0])))
if self.InitialRead(stream):
#self.ActivateControls(self.plotstream)
self.OnInitialPlot(self.plotstream)
self.options['bookmarks'] = dlg.favorites
#print ("Here", dlg.favorites)
#if not bookmarks == dlg.favorites:
#print ("Favorites have changed ... can be saved in init")
saveini(self.options)
inipara, check = loadini()
self.initParameter(inipara)
self.changeStatusbar("Ready")
else:
self.options['bookmarks'] = dlg.favorites
#print ("Here", dlg.favorites)
#if not bookmarks == dlg.favorites:
#print ("Favorites have changed ... can be saved in init")
saveini(self.options)
inipara, check = loadini()
self.initParameter(inipara)
dlg = wx.MessageDialog(self, "Could not access URL!\n"
"please check address or your internet connection\n",
"OpenWebAddress", wx.OK|wx.ICON_INFORMATION)
dlg.ShowModal()
self.changeStatusbar("Loading url failed ... Ready")
dlg.Destroy()
def OnOpenDB(self, event):
# a) get all DATAINFO IDs and store them in a list
# b) disable pathTextCtrl (DB: dbname)
# c) Open dialog which lets the user select list and time window
# d) update stream menu
getdata = False
stream = DataStream()
if self.db:
self.menu_p.rep_page.logMsg('- Accessing database ...')
cursor = self.db.cursor()
sql = "SELECT DataID, DataMinTime, DataMaxTime FROM DATAINFO"
cursor.execute(sql)
output = cursor.fetchall()
#print ("Test", output)
datainfoidlist = [elem[0] for elem in output]
if len(datainfoidlist) < 1:
dlg = wx.MessageDialog(self, "No data tables available!\n"
"please check your database\n",
"OpenDB", wx.OK|wx.ICON_INFORMATION)
dlg.ShowModal()
dlg.Destroy()
return
dlg = DatabaseContentDialog(None, title='MySQL Database: Get content',datalst=datainfoidlist)
if dlg.ShowModal() == wx.ID_OK:
datainfoid = dlg.dataComboBox.GetValue()
stream = DataStream()
mintime = stream._testtime([elem[1] for elem in output if elem[0] == datainfoid][0])
lastupload = stream._testtime([elem[2] for elem in output if elem[0] == datainfoid][0])
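# Extend the upper limit to the day after the last upload so the most recent day is fully covered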
maxtime = stream._testtime(datetime.strftime(lastupload,'%Y-%m-%d'))+timedelta(days=1)
self.menu_p.str_page.pathTextCtrl.SetValue('MySQL Database')
self.menu_p.str_page.fileTextCtrl.SetValue(datainfoid)
getdata = True
dlg.Destroy()
else:
dlg = wx.MessageDialog(self, "Could not access database!\n"
"please check your connection\n",
"OpenDB", wx.OK|wx.ICON_INFORMATION)
dlg.ShowModal()
dlg.Destroy()
return
if getdata:
path = [self.db,datainfoid]
stream = self.openStream(path=path,mintime=pydate2wxdate(mintime), maxtime=pydate2wxdate(maxtime),extension='MySQL Database')
self.menu_p.rep_page.logMsg('{}: found {} data points'.format(path[1],len(stream.ndarray[0])))
if self.InitialRead(stream):
#self.ActivateControls(self.plotstream)
self.OnInitialPlot(self.plotstream)
def OnExportData(self, event):
self.changeStatusbar("Writing data ...")
dlg = ExportDataDialog(None, title='Export Data',path=self.dirname,stream=self.plotstream,defaultformat='PYCDF')
if dlg.ShowModal() == wx.ID_OK:
filenamebegins = dlg.filenamebegins
filenameends = dlg.filenameends
dateformat = dlg.dateformat
coverage = dlg.coverage
mode = dlg.mode
"""
datetyp = dlg.dateComboBox.GetValue()
if datetyp == '2000-11-22':
dateformat = '%Y-%m-%d'
elif datetyp == '20001122':
dateformat = '%Y%m%d'
else:
dateformat = '%b%d%y'
"""
path = dlg.selectedTextCtrl.GetValue()
fileformat = dlg.formatComboBox.GetValue()
"""
coverage = dlg.coverageComboBox.GetValue()
if coverage == 'hour':
coverage = timedelta(hour=1)
elif coverage == 'day':
coverage = timedelta(days=1)
elif coverage == 'year':
coverage = timedelta(year=1)
mode = dlg.modeComboBox.GetValue()
"""
#print "Stream: ", len(self.stream), len(self.plotstream)
#print "Data: ", self.stream[0].time, self.stream[-1].time, self.plotstream[0].time, self.plotstream[-1].time
#print ("Main : ", filenamebegins, filenameends, dateformat, fileformat, coverage, mode)
checkPath = os.path.join(path, dlg.filenameTextCtrl.GetValue())
export = False
if os.path.exists(checkPath):
msg = wx.MessageDialog(self, "The current export file will overwrite an existing file!\n"
"Choose 'Ok' to apply the overwrite or 'Cancel' to stop exporting.\n",
"VerifyOverwrite", wx.OK|wx.CANCEL|wx.ICON_QUESTION)
if msg.ShowModal() == wx.ID_OK:
export = True
msg.Destroy()
else:
export = True
if export:
try:
self.plotstream.write(path,
filenamebegins=filenamebegins,
filenameends=filenameends,
dateformat=dateformat,
mode=mode,
coverage=coverage,
format_type=fileformat)
self.menu_p.rep_page.logMsg("Data written to path: {}".format(path))
self.changeStatusbar("Data written ... Ready")
except:
self.menu_p.rep_page.logMsg("Writing failed - Permission?")
else:
self.changeStatusbar("Ready")
dlg.Destroy()
def _db_connect(self, host, user, passwd, dbname):
try:
self.db.close()
except:
pass
try:
self.db = mysql.connect (host=host,user=user,passwd=passwd,db=dbname)
except:
self.db = False
if self.db:
self.DBOpen.Enable(True)
self.menu_p.rep_page.logMsg('- MySQL Database selected.')
self.changeStatusbar("Database %s successfully connected" % (dbname))
else:
self.menu_p.rep_page.logMsg('- MySQL Database access failed.')
self.changeStatusbar("Database connection failed")
def OnDBConnect(self, event):
"""
Provide access for local network:
Open your /etc/mysql/my.cnf file in your editor.
scroll down to the entry:
bind-address = 127.0.0.1
and you can either hash it out so that it binds to all assigned IP addresses
#bind-address = 127.0.0.1
or you can specify an IP address to bind to. If your server is using DHCP then just hash it out.
Then you'll need to create a user that is allowed to connect to your database of choice from the host/IP you're connecting from.
Login to your mysql console:
milkchunk@milkchunk-desktop:~$ mysql -uroot -p
GRANT ALL PRIVILEGES ON *.* TO 'user'@'%' IDENTIFIED BY 'some_pass' WITH GRANT OPTION;
Change 'user' to whichever user you want to use; the '%' is a hostname wildcard, meaning you can connect from any hostname. You can either specify a hostname or just use the wildcard.
Then issue the following:
FLUSH PRIVILEGES;
Be sure to restart your mysql (because of the config file editing):
/etc/init.d/mysql restart
"""
dlg = DatabaseConnectDialog(None, title='MySQL Database: Connect to')
dlg.hostTextCtrl.SetValue(self.options.get('host',''))
dlg.userTextCtrl.SetValue(self.options.get('user',''))
dlg.passwdTextCtrl.SetValue(self.options.get('passwd',''))
if self.db is None or self.db == 'None' or not self.db:
dlg.dbTextCtrl.SetValue('None')
else:
dlg.dbTextCtrl.SetValue(self.options.get('dbname',''))
if dlg.ShowModal() == wx.ID_OK:
self.options['host'] = dlg.hostTextCtrl.GetValue()
self.options['user'] = dlg.userTextCtrl.GetValue()
self.options['passwd'] = dlg.passwdTextCtrl.GetValue()
self.options['dbname'] = dlg.dbTextCtrl.GetValue()
self._db_connect(self.options.get('host',''), self.options.get('user',''), self.options.get('passwd',''), self.options.get('dbname',''))
"""
self.db = mysql.connect (host=host,user=user,passwd=passwd,db=mydb)
if self.db:
self.DBOpen.Enable(True)
self.menu_p.rep_page.logMsg('- MySQL Database selected.')
self.changeStatusbar("Database %s successfully connected" % (self.db))
else:
self.menu_p.rep_page.logMsg('- MySQL Database access failed.')
self.changeStatusbar("Database connection failed")
"""
dlg.Destroy()
def OnDBInit(self, event):
"""
Provide access for local network:
Open your /etc/mysql/my.cnf file in your editor.
scroll down to the entry:
bind-address = 127.0.0.1
and you can either hash it out so that it binds to all assigned IP addresses
#bind-address = 127.0.0.1
or you can specify an IP address to bind to. If your server is using DHCP then just hash it out.
Then you'll need to create a user that is allowed to connect to your database of choice from the host/IP you're connecting from.
Login to your mysql console:
milkchunk@milkchunk-desktop:~$ mysql -uroot -p
GRANT ALL PRIVILEGES ON *.* TO 'user'@'%' IDENTIFIED BY 'some_pass' WITH GRANT OPTION;
Change 'user' to whichever user you want to use; the '%' is a hostname wildcard, meaning you can connect from any hostname. You can either specify a hostname or just use the wildcard.
Then issue the following:
FLUSH PRIVILEGES;
Be sure to restart your mysql (because of the config file editing):
/etc/init.d/mysql restart
"""
# Open a message box to confirm that you really want to do that and to provide info on prerequisites
dlg = wx.MessageDialog(self, "You are going to initialize a new database\n"
"Please make sure that the following points are fulfilled:\n"
"1) MySQL is installed\n"
"2) An empty database has been created:\n"
" $ CREATE DATABASE mydb;\n"
"3) A new user has been added and access has been granted:\n"
" $ GRANT ALL PRIVILEGES ON *.* TO 'user'@'%' IDENTIFIED BY 'some_pass';\n",
"Init database", wx.OK|wx.CANCEL)
if dlg.ShowModal() == wx.ID_OK:
dlg.Destroy()
# open dialog to select empty db or create new db if mysql is existing
dlg = DatabaseConnectDialog(None, title='MySQL Database: Initialize...')
dlg.hostTextCtrl.SetValue(self.options.get('host',''))
dlg.userTextCtrl.SetValue(self.options.get('user',''))
dlg.passwdTextCtrl.SetValue(self.options.get('passwd',''))
if self.db is None or self.db == 'None' or not self.db:
dlg.dbTextCtrl.SetValue('None')
else:
dlg.dbTextCtrl.SetValue(self.options.get('dbname',''))
if dlg.ShowModal() == wx.ID_OK:
self.options['host'] = dlg.hostTextCtrl.GetValue()
self.options['user'] = dlg.userTextCtrl.GetValue()
self.options['passwd'] = dlg.passwdTextCtrl.GetValue()
self.options['dbname'] = dlg.dbTextCtrl.GetValue()
self._db_connect(self.options.get('host',''), self.options.get('user',''), self.options.get('passwd',''), self.options.get('dbname',''))
dbinit(self.db)
self.changeStatusbar("New database initiated - Ready")
dlg.Destroy()
else:
dlg.Destroy()
def OnFileQuit(self, event):
if self.db:
self.db.close()
self.Destroy() # Close the main window.
sys.exit()
def OnSave(self, event):
with open(os.path.join(self.dirname, self.filename), 'w') as textfile:
    textfile.write(self.control.GetValue())
def OnSaveAs(self, event):
if self.askUserForFilename(defaultFile=self.filename, style=wx.SAVE,
**self.defaultFileDialogOptions()):
self.OnSave(event)
def OnOptionsInit(self, event):
"""
DEFINITION
Change options
"""
dlg = OptionsInitDialog(None, title='Options: Parameter specifications',options=self.options)
if dlg.ShowModal() == wx.ID_OK:
self.options['host'] = dlg.hostTextCtrl.GetValue()
self.options['user'] = dlg.userTextCtrl.GetValue()
self.options['passwd'] = dlg.passwdTextCtrl.GetValue()
#print (self.options['passwd'])
db = dlg.dbTextCtrl.GetValue()
if db == '':
self.options['dbname'] = 'None'
else:
self.options['dbname'] = db
self.options['dirname']=dlg.dirnameTextCtrl.GetValue()
self.options['stationid']=dlg.stationidTextCtrl.GetValue()
self.options['fitfunction']=dlg.fitfunctionComboBox.GetValue()
self.options['fitknotstep']=dlg.fitknotstepTextCtrl.GetValue()
self.options['fitdegree']=dlg.fitdegreeTextCtrl.GetValue()
saveini(self.options)
inipara, check = loadini()
self.initParameter(inipara)
dlg.Destroy()
def OnOptionsDI(self, event):
"""
DEFINITION
Change options
"""
dlg = OptionsDIDialog(None, title='Options: DI Analysis parameters', options=self.options)
if dlg.ShowModal() == wx.ID_OK:
self.options['diexpD']=dlg.diexpDTextCtrl.GetValue()
self.options['diexpI']=dlg.diexpITextCtrl.GetValue()
self.options['dialpha']=dlg.dialphaTextCtrl.GetValue()
self.options['dideltaF']=dlg.dideltaFTextCtrl.GetValue()
self.options['ditype']=dlg.ditypeComboBox.GetValue()
self.options['divariopath']=dlg.divariopathTextCtrl.GetValue()
self.options['discalarpath']=dlg.discalarpathTextCtrl.GetValue()
self.options['diid']=dlg.diidTextCtrl.GetValue()
self.options['diazimuth']=dlg.diazimuthTextCtrl.GetValue()
self.options['dipier']=dlg.dipierTextCtrl.GetValue()
self.options['didbadd']=dlg.didbaddTextCtrl.GetValue()
# TODO to be added
#self.options['dideltaD']=dlg.dideltaDTextCtrl.GetValue()
#self.options['dideltaI']=dlg.dideltaITextCtrl.GetValue()
#self.options['disign']=dlg.disignTextCtrl.GetValue()
self.dipathlist = dlg.dipathlistTextCtrl.GetValue().split(',')
dipathlist = dlg.dipathlistTextCtrl.GetValue().split(',')
dipath = dipathlist[0]
if os.path.isfile(dipath):
dipath = os.path.split(dipath)[0]
self.options['dipathlist'] = [dipath]
order=dlg.sheetorderTextCtrl.GetValue()
double=dlg.sheetdoubleCheckBox.GetValue()
scalevalue=dlg.sheetscaleCheckBox.GetValue()
self.options['double'] = 'True'
self.options['scalevalue'] = 'True'
if not double:
self.options['double'] = 'False'
if not scalevalue:
self.options['scalevalue'] = 'False'
self.options['order'] = order
saveini(self.options)
inipara, check = loadini()
self.initParameter(inipara)
dlg.Destroy()
"""
def OnOptionsObs(self, event):
dlg = OptionsObsDialog(None, title='Options: Observatory specifications')
dlg.ShowModal()
dlg.Destroy()
#dlg = wx.MessageDialog(self, "Coming soon:\n"
# "Modify observatory specifications\n",
# "PyMag by RL", wx.OK|wx.ICON_INFORMATION)
#dlg.ShowModal()
#dlg.Destroy()
"""
def onOpenAuxButton(self, event):
if self.askUserForFilename(style=wx.OPEN,
**self.defaultFileDialogOptions()):
#dat = read_general(os.path.join(self.dirname, self.filename), 0)
textfile = open(os.path.join(self.dirname, self.filename), 'r')
self.menu_p.gen_page.AuxDataTextCtrl.SetValue(textfile.read())
textfile.close()
#print dat
def changeStatusbar(self,msg):
self.SetStatusText(msg)
def UpdateCursorStatus(self, event):
"""Motion event for displaying values under cursor."""
if not event.inaxes or not self.menu_p.str_page.trimStreamButton.IsEnabled():
self.changeStatusbar("Ready")
return
pickX, pickY = event.xdata, event.ydata
xdata = self.plot_p.t
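# Find the index of the data point closest to the cursor's x position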
idx = (np.abs(xdata - pickX)).argmin()
time = self.plotstream.ndarray[KEYLIST.index('time')][idx]
possible_val = []
possible_key = []
try:
time = datetime.strftime(num2date(time),"%Y-%m-%d %H:%M:%S %Z")
except:
time = num2date(time)
for elem in self.shownkeylist:
ul = np.nanmax(self.plotstream.ndarray[KEYLIST.index(elem)])
ll = np.nanmin(self.plotstream.ndarray[KEYLIST.index(elem)])
if ll < pickY < ul:
possible_key.append(elem)
possible_val += [self.plotstream.ndarray[KEYLIST.index(elem)][idx]]
idy = (np.abs(np.asarray(possible_val) - pickY)).argmin()
key = possible_key[idy]
val = possible_val[idy]
colname = self.plotstream.header.get('col-'+key, '')
if not colname == '':
key = colname
self.changeStatusbar("time: " + str(time) + " | " + key + " data value: " + str(val))
# ################
# page methods:
# pages: stream (plot, coordinate), analysis (smooth, filter, fit, baseline etc),
# specials(spectrum, power), absolutes (), report (log), monitor (access web socket)
# ------------------------------------------------------------------------------------------
# ################
# Analysis functions
# ################
# ------------------------------------------------------------------------------------------
def onFilterButton(self, event):
"""
Method for filtering
"""
self.changeStatusbar("Filtering...")
# open dialog to modify filter parameters
#keystr = self.menu_p.met_page.keysTextCtrl.GetValue().encode('ascii','ignore')
#keys = keystr.split(',')
sr = self.plotstream.samplingrate()
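# Choose default filter parameters depending on the sampling rate (given in seconds)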
filter_type = 'gaussian'
resample_offset = 0.0
if sr < 0.5: # use 1 second filter with 0.3 Hz cut off as default
filter_width = timedelta(seconds=3.33333333)
resample_period = 1.0
elif sr < 50: # use 1 minute filter with 0.008 Hz cut off as default
filter_width = timedelta(minutes=2)
resample_period = 60.0
else: # use 1 hour flat filter
filter_width = timedelta(minutes=60)
resample_period = 3600.0
resample_offset = 1800.0
filter_type = 'flat'
miss = 'conservative'
dlg = AnalysisFilterDialog(None, title='Analysis: Filter', samplingrate=sr, resample=True, winlen=filter_width.total_seconds(), resint=resample_period, resoff= resample_offset, filtertype=filter_type)
if sr < 0.5: # use 1 second filter with 0.3 Hz cut off as default
dlg.methodRadioBox.SetStringSelection('conservative')
if dlg.ShowModal() == wx.ID_OK:
filtertype = dlg.filtertypeComboBox.GetValue()
filterlength = float(dlg.lengthTextCtrl.GetValue())
resampleinterval = float(dlg.resampleTextCtrl.GetValue())
resampleoffset = float(dlg.resampleoffsetTextCtrl.GetValue())
missingdata = dlg.methodRadioBox.GetStringSelection()
#print (filtertype,filterlength,missingdata,resampleinterval,resampleoffset)
if missingdata == 'IAGA':
miss = 'mean'
elif missingdata == 'interpolate':
miss = 'interpolate'
self.plotstream = self.plotstream.filter(keys=self.shownkeylist,filter_type=filtertype,filter_width=timedelta(seconds=filterlength),resample_period=resampleinterval,resample_offset=resampleoffset,missingdata=miss,resample=True)
self.menu_p.rep_page.logMsg('- data filtered: {} window, {} Hz passband'.format(filtertype,1./filterlength))
self.ActivateControls(self.plotstream)
self.OnPlot(self.plotstream,self.shownkeylist)
self.changeStatusbar("Ready")
def onDerivativeButton(self, event):
"""
Method for derivative
"""
self.changeStatusbar("Calculating derivative ...")
keys = self.shownkeylist
if len(self.plotstream.ndarray[0]) == 0:
self.plotstream = self.stream.copy()
self.menu_p.rep_page.logMsg("- calculating derivative")
self.plotstream = self.plotstream.differentiate(keys=keys,put2keys=keys)
self.menu_p.rep_page.logMsg('- derivative calculated')
self.ActivateControls(self.plotstream)
self.OnPlot(self.plotstream,self.shownkeylist)
self.changeStatusbar("Ready")
def onFitButton(self, event):
"""
Method for fitting
"""
self.changeStatusbar("Fitting ...")
keys = self.shownkeylist
if len(self.plotstream.ndarray[0]) == 0:
self.plotstream = self.stream.copy()
#fitknots = str(0.5)
#fitdegree = str(4)
#fitfunc='spline'
dlg = AnalysisFitDialog(None, title='Analysis: Fit parameter', options=self.options)
if dlg.ShowModal() == wx.ID_OK:
fitfunc = dlg.funcComboBox.GetValue()
knots = dlg.knotsTextCtrl.GetValue()
degree = dlg.degreeTextCtrl.GetValue()
self.options['fitfunction'] = fitfunc
if fitfunc.startswith('poly'):
fitfunc = 'poly'
self.menu_p.rep_page.logMsg('Fitting with %s, %s, %s' % (fitfunc, knots, degree))
if not 0<float(knots)<1:
knots = 0.5
else:
knots = float(knots)
if not int(degree)>0:
degree = 1
else:
degree = int(degree)
self.options['fitknotstep'] = str(knots)
self.options['fitdegree'] = str(degree)
if len(self.plotstream.ndarray[0]) > 0:
func = self.plotstream.fit(keys=keys,fitfunc=fitfunc,fitdegree=degree,knotstep=knots)
self.function = func
self.plotopt['function'] = func
self.ActivateControls(self.plotstream)
self.OnPlot(self.plotstream,self.shownkeylist)
else:
# Msgbox to load data first
pass
dlg.Destroy()
self.menu_p.rep_page.logMsg('- data fitted')
self.changeStatusbar("Ready")
def onOffsetButton(self, event):
"""
Method for offset correction
"""
self.changeStatusbar("Adding offsets ...")
keys = self.shownkeylist
offsetdict = {}
# get currently zoomed time limits and use as timerange
self.xlimits = self.plot_p.xlimits
if not self.xlimits:
self.xlimits = [num2date(self.plotstream.ndarray[0][0]),num2date(self.plotstream.ndarray[0][-1])]
else:
self.xlimits = [num2date(self.xlimits[0]),num2date(self.xlimits[-1])]
# get existing deltas from database
deltas = self.plotstream.header.get('DataDeltaValues','')
dlg = AnalysisOffsetDialog(None, title='Analysis: define offsets', keylst=keys, xlimits=self.xlimits, deltas=deltas)
if dlg.ShowModal() == wx.ID_OK:
for key in keys:
offset = getattr(dlg, key+'TextCtrl').GetValue()
if not offset in ['','0']:
if not float(offset) == 0:
offsetdict[key] = float(offset)
val = dlg.offsetRadioBox.GetStringSelection()
print ("Offset", val)
if str(val) == 'all':
toffset = dlg.timeshiftTextCtrl.GetValue()
if not float(toffset) == 0:
offsetdict['time'] = timedelta(seconds=float(toffset))
self.plotstream = self.plotstream.offset(offsetdict)
else:
stday = dlg.StartDatePicker.GetValue()
sttime = str(dlg.StartTimeTextCtrl.GetValue())
sd = datetime.strftime(datetime.fromtimestamp(stday.GetTicks()), "%Y-%m-%d")
st= datetime.strptime(str(sd)+'_'+sttime, "%Y-%m-%d_%H:%M:%S")
edday = dlg.EndDatePicker.GetValue()
edtime = str(dlg.EndTimeTextCtrl.GetValue())
ed = datetime.strftime(datetime.fromtimestamp(edday.GetTicks()), "%Y-%m-%d")
et= datetime.strptime(str(ed)+'_'+edtime, "%Y-%m-%d_%H:%M:%S")
self.plotstream = self.plotstream.offset(offsetdict, starttime=st, endtime=et)
self.ActivateControls(self.plotstream)
self.OnPlot(self.plotstream,self.shownkeylist)
dlg.Destroy()
self.changeStatusbar("Ready")
def onActivityButton(self, event):
"""
Method for determining geomagnetic activity (K values, FMI method)
"""
self.changeStatusbar("Getting activity (FMI method)...")
keys = self.shownkeylist
offsetdict = {}
#dlg = AnalysisActivityDialog(None, title='Analysis: get k values (FMI)')
#if dlg.ShowModal() == wx.ID_OK:
backup = self.plotstream.copy()
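# Determine geomagnetic activity (K) indices following the FMI method; k_fmi returns them as a new stream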
stream = self.plotstream.k_fmi()
self.streamlist.append(stream)
self.streamkeylist.append(stream._get_key_headers())
self.currentstreamindex = len(self.streamlist)-1
self.plotstream = self.streamlist[-1]
#self.headerlist.append(self.plotstream.header)
self.headerlist.append(stream.header)
self.shownkeylist = self.plotstream._get_key_headers(numerical=True)
if self.plotstream and len(self.plotstream.ndarray[0]) > 0:
self.ActivateControls(self.plotstream)
keylist = self.UpdatePlotCharacteristics(self.plotstream)
self.plotoptlist.append(self.plotopt)
self.OnPlot(self.plotstream,self.shownkeylist)
else:
self.plotstream = backup.copy()
self.changeStatusbar("Ready")
def onRotationButton(self, event):
"""
Method for rotating data
"""
self.changeStatusbar("Rotating data ...")
if len(self.plotstream.ndarray[0]) > 0:
# XXX Eventually SetValues from init
dlg = AnalysisRotationDialog(None, title='Analysis: rotate data')
if dlg.ShowModal() == wx.ID_OK:
alphat = dlg.alphaTextCtrl.GetValue()
betat = dlg.betaTextCtrl.GetValue()
try:
alpha = float(alphat)
except:
alpha = 0.0
try:
beta = float(betat)
except:
beta = 0.0
self.plotstream = self.plotstream.rotation(alpha=alpha, beta=beta)
self.menu_p.rep_page.logMsg('- rotated stream by alpha = %s and beta = %s' % (alphat,betat))
self.ActivateControls(self.plotstream)
self.OnPlot(self.plotstream,self.shownkeylist)
dlg.Destroy()
self.changeStatusbar("Ready")
def onMeanButton(self, event):
"""
DESCRIPTION
Calculates mean values for all keys of shownkeylist
"""
self.changeStatusbar("Calculating means ...")
keys = self.shownkeylist
meanfunc = 'mean'
teststream = self.plotstream.copy()
# limits
self.xlimits = self.plot_p.xlimits
if not self.xlimits == [self.plotstream.ndarray[0],self.plotstream.ndarray[-1]]:
testarray = self.plotstream._select_timerange(starttime=self.xlimits[0],endtime=self.xlimits[1])
teststream = DataStream([LineStruct()],self.plotstream.header,testarray)
mean = [teststream.mean(key,meanfunction='mean',std=True,percentage=10) for key in keys]
t_limits = teststream._find_t_limits()
trange = '- mean - timerange: {} to {}'.format(t_limits[0],t_limits[1])
self.menu_p.rep_page.logMsg(trange)
for idx,me in enumerate(mean):
meanline = '- mean - key: {} = {} +/- {}'.format(keys[idx],me[0],me[1])
self.menu_p.rep_page.logMsg(meanline)
trange = trange + '\n' + meanline
# open message dialog
dlg = wx.MessageDialog(self, "Means:\n"+
str(trange),
"Analysis: Mean values", wx.OK|wx.ICON_INFORMATION)
dlg.ShowModal()
dlg.Destroy()
self.changeStatusbar("Ready")
def onMaxButton(self, event):
"""
DESCRIPTION
Calculates maximum values for all keys of shownkeylist
"""
self.changeStatusbar("Calculating maxima ...")
keys = self.shownkeylist
teststream = self.plotstream.copy()
# limits
self.xlimits = self.plot_p.xlimits
if not self.xlimits == [self.plotstream.ndarray[0],self.plotstream.ndarray[-1]]:
testarray = self.plotstream._select_timerange(starttime=self.xlimits[0],endtime=self.xlimits[1])
teststream = DataStream([LineStruct()],self.plotstream.header,testarray)
maxi = [teststream._get_max(key,returntime=True) for key in keys]
t_limits = teststream._find_t_limits()
trange = '- maxima - timerange: {} to {}'.format(t_limits[0],t_limits[1])
self.menu_p.rep_page.logMsg(trange)
for idx,me in enumerate(maxi):
meanline = '- maxima - key: {} = {} at {}'.format(keys[idx],me[0],num2date(me[1]))
self.menu_p.rep_page.logMsg(meanline)
trange = trange + '\n' + meanline
# open message dialog
dlg = wx.MessageDialog(self, "Maxima:\n"+
str(trange),
"Analysis: Maximum values", wx.OK|wx.ICON_INFORMATION)
dlg.ShowModal()
dlg.Destroy()
self.changeStatusbar("Ready")
def onMinButton(self, event):
"""
DESCRIPTION
Calculates minimum values for all keys of shownkeylist
"""
self.changeStatusbar("Calculating minima ...")
keys = self.shownkeylist
teststream = self.plotstream.copy()
# limits
self.xlimits = self.plot_p.xlimits
if not self.xlimits == [self.plotstream.ndarray[0],self.plotstream.ndarray[-1]]:
testarray = self.plotstream._select_timerange(starttime=self.xlimits[0],endtime=self.xlimits[1])
teststream = DataStream([LineStruct()],self.plotstream.header,testarray)
mini = [teststream._get_min(key,returntime=True) for key in keys]
t_limits = teststream._find_t_limits()
trange = '- minima - timerange: {} to {}'.format(t_limits[0],t_limits[1])
self.menu_p.rep_page.logMsg(trange)
for idx,me in enumerate(mini):
meanline = '- minima - key: {} = {} at {}'.format(keys[idx],me[0],num2date(me[1]))
self.menu_p.rep_page.logMsg(meanline)
trange = trange + '\n' + meanline
# open message dialog
dlg = wx.MessageDialog(self, "Minima:\n"+
str(trange),
"Analysis: Minimum values", wx.OK|wx.ICON_INFORMATION)
dlg.ShowModal()
dlg.Destroy()
self.changeStatusbar("Ready")
def onSmoothButton(self, event):
"""
DESCRIPTION
Calculates smoothed curve
"""
self.changeStatusbar("Smoothing ... be patient")
sr = self.plotstream.samplingrate()
filter_type = 'gaussian'
resample_offset = 0.0
if sr < 0.2: # use 1 second filter with 0.3 Hz cut off as default
filter_width = timedelta(seconds=3.33333333)
resample_period = 1.0
elif sr < 50: # use 1 minute filter with 0.008 Hz cut off as default
filter_width = timedelta(minutes=2)
resample_period = 60.0
else: # use 1 hour flat filter
filter_width = timedelta(minutes=60)
resample_period = 3600.0
resample_offset = 1800.0
filter_type = 'flat'
miss = 'conservative'
dlg = AnalysisFilterDialog(None, title='Analysis: Filter', samplingrate=sr, resample=False, winlen=filter_width.total_seconds(), resint=resample_period, resoff= resample_offset, filtertype=filter_type)
if dlg.ShowModal() == wx.ID_OK:
filtertype = dlg.filtertypeComboBox.GetValue()
filterlength = float(dlg.lengthTextCtrl.GetValue())
missingdata = dlg.methodRadioBox.GetStringSelection()
if missingdata == 'IAGA':
miss = 'mean'
elif missingdata == 'interpolate':
miss = 'interpolate'
self.plotstream = self.plotstream.filter(keys=self.shownkeylist,filter_type=filtertype,filter_length=filterlength,missingdata=miss,noresample=True)
self.menu_p.rep_page.logMsg('- data filtered: {} window, {} Hz passband'.format(filtertype,1./filterlength))
self.ActivateControls(self.plotstream)
self.OnPlot(self.plotstream,self.shownkeylist)
self.changeStatusbar("Ready")
def onBaselineButton(self, event):
"""
DESCRIPTION
Calculates baseline correction
"""
self.changeStatusbar("Baseline adoption ...")
dlg = AnalysisBaselineDialog(None, title='Analysis: Baseline adoption', idxlst=self.baselineidxlst, dictlst = self.baselinedictlst, options=self.options)
# open dlg which allows to choose baseline data stream, function and parameters
# Drop down for baseline data stream (idx: filename)
# Text window describing baseline parameter
# button to modify baseline parameter
if dlg.ShowModal() == wx.ID_OK:
# return active stream idx ()
#print ("Here", dlg.absstreamComboBox.GetStringSelection())
#print ("Here2", dlg.absstreamComboBox.GetValue())
idx = int(dlg.absstreamComboBox.GetValue().split(':')[0])
self.options = dlg.options
absstream = self.streamlist[idx]
tmpbasedict = [el for el in self.baselinedictlst if el['streamidx']==idx]
basedict = tmpbasedict[0]
fitfunc = self.options.get('fitfunction','spline')
if fitfunc.startswith('poly'):
fitfunc = 'poly'
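# baseline() determines the adopted baseline function from the basevalue (DI) data; the correction itself is applied later via bc()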
baselinefunc = self.plotstream.baseline(absstream,fitfunc=fitfunc, knotstep=float(self.options.get('fitknotstep','0.3')), fitdegree=int(self.options.get('fitdegree','5')))
#keys = self.shownkeylist
self.menu_p.rep_page.logMsg('- baseline adoption performed using DI data from {}. Parameters: function={}, knotsteps(spline)={}, degree(polynomial)={}'.format(basedict['filename'],self.options.get('fitfunction',''),self.options.get('fitknotstep',''),self.options.get('fitdegree','')))
self.ActivateControls(self.plotstream)
self.OnPlot(self.plotstream,self.shownkeylist)
self.changeStatusbar("BC function available - Ready")
else:
self.changeStatusbar("Ready")
def onDeltafButton(self, event):
"""
DESCRIPTION
Calculates delta F values
"""
self.changeStatusbar("Delta F ...")
self.plotstream = self.plotstream.delta_f()
self.streamlist[self.currentstreamindex].delta_f()
#print (self.plotstream._get_key_headers())
if 'df' in self.plotstream._get_key_headers() and not 'df' in self.shownkeylist:
self.shownkeylist.append('df')
self.menu_p.rep_page.logMsg('- determined delta F between x,y,z and f')
self.ActivateControls(self.plotstream)
self.OnPlot(self.plotstream,self.shownkeylist)
self.changeStatusbar("Ready")
# ------------------------------------------------------------------------------------------
# ################
# Stream page functions
# ################
# ------------------------------------------------------------------------------------------
def onErrorBarCheckBox(self,event):
"""
DESCRIPTION
Switch display of error bars.
RETURNS
kwarg for OnPlot method
"""
if not self.menu_p.str_page.errorBarsCheckBox.GetValue():
self.errorbars=False
self.plotopt['errorbars'] = [[False]*len(self.shownkeylist)]
self.menu_p.str_page.errorBarsCheckBox.SetValue(False)
else:
self.errorbars=True
self.plotopt['errorbars'] = [[True]*len(self.shownkeylist)]
self.menu_p.str_page.errorBarsCheckBox.SetValue(True)
self.ActivateControls(self.plotstream)
if self.plotstream.length()[0] > 0:
self.OnPlot(self.plotstream,self.shownkeylist)
self.changeStatusbar("Ready")
else:
self.changeStatusbar("Failure")
def onConfinexCheckBox(self,event):
"""
DESCRIPTION
Switch confinement of the time (x) axis.
RETURNS
kwarg for OnPlot method
"""
if not self.menu_p.str_page.confinexCheckBox.GetValue():
self.confinex=False
self.plotopt['confinex'] = False
self.menu_p.str_page.confinexCheckBox.SetValue(False)
else:
self.confinex=True
self.plotopt['confinex'] = True
self.menu_p.str_page.confinexCheckBox.SetValue(True)
self.ActivateControls(self.plotstream)
if self.plotstream.length()[0] > 0:
self.OnPlot(self.plotstream,self.shownkeylist)
self.changeStatusbar("Ready")
else:
self.changeStatusbar("Failure")
def onTrimStreamButton(self,event):
"""
DESCRIPTION
    Trim the stream to the time range selected in the date and time pickers
"""
stday = self.menu_p.str_page.startDatePicker.GetValue()
sttime = str(self.menu_p.str_page.startTimePicker.GetValue())
if sttime.endswith('AM') or sttime.endswith('am'):
sttime = datetime.strftime(datetime.strptime(sttime,"%I:%M:%S %p"),"%H:%M:%S")
if sttime.endswith('pm') or sttime.endswith('PM'):
sttime = datetime.strftime(datetime.strptime(sttime,"%I:%M:%S %p"),"%H:%M:%S")
sd = datetime.strftime(datetime.fromtimestamp(stday.GetTicks()), "%Y-%m-%d")
start= datetime.strptime(str(sd)+'_'+sttime, "%Y-%m-%d_%H:%M:%S")
enday = self.menu_p.str_page.endDatePicker.GetValue()
entime = str(self.menu_p.str_page.endTimePicker.GetValue())
if entime.endswith('AM') or entime.endswith('am'):
entime = datetime.strftime(datetime.strptime(entime,"%I:%M:%S %p"),"%H:%M:%S")
if entime.endswith('pm') or entime.endswith('PM'):
print ("ENDTime", entime, datetime.strptime(entime,"%I:%M:%S %p"))
entime = datetime.strftime(datetime.strptime(entime,"%I:%M:%S %p"),"%H:%M:%S")
ed = datetime.strftime(datetime.fromtimestamp(enday.GetTicks()), "%Y-%m-%d")
end= datetime.strptime(ed+'_'+entime, "%Y-%m-%d_%H:%M:%S")
print ("Range", start, end)
if end > start:
try:
self.changeStatusbar("Trimming stream ...")
newarray = self.plotstream._select_timerange(starttime=start, endtime=end)
self.plotstream=DataStream([LineStruct()],self.plotstream.header,newarray)
self.menu_p.rep_page.logMsg('- Stream trimmed: {} to {}'.format(start,end))
except:
self.menu_p.rep_page.logMsg('- Trimming failed')
self.ActivateControls(self.plotstream)
if self.plotstream.length()[0] > 0:
self.OnPlot(self.plotstream,self.shownkeylist)
self.changeStatusbar("Ready")
else:
self.changeStatusbar("Failure")
else:
dlg = wx.MessageDialog(self, "Could not trim timerange!\n"
"Entered dates are out of order.\n",
"TrimTimerange", wx.OK|wx.ICON_INFORMATION)
dlg.ShowModal()
self.changeStatusbar("Trimming timerange failed ... Ready")
dlg.Destroy()
def openStream(self,path='',mintime=None,maxtime=None,extension=None):
# TODO Move this method to section File menu
"""
DESCRIPTION:
Opens time range dialog and loads data. Returns stream.
USED BY:
OnOpenDir and OnOpenDB , OnOpen
"""
dlg = LoadDataDialog(None, title='Select timerange:',mintime=mintime,maxtime=maxtime, extension=extension)
if dlg.ShowModal() == wx.ID_OK:
stday = dlg.startDatePicker.GetValue()
sttime = dlg.startTimePicker.GetValue()
enday = dlg.endDatePicker.GetValue()
entime = dlg.endTimePicker.GetValue()
ext = dlg.fileExt.GetValue()
sd = datetime.fromtimestamp(stday.GetTicks())
ed = datetime.fromtimestamp(enday.GetTicks())
st = datetime.strftime(sd, "%Y-%m-%d") + " " + sttime
start = datetime.strptime(st, "%Y-%m-%d %H:%M:%S")
et = datetime.strftime(ed, "%Y-%m-%d") + " " + entime
end = datetime.strptime(et, "%Y-%m-%d %H:%M:%S")
if isinstance(path, basestring):
if not path=='':
self.menu_p.str_page.fileTextCtrl.SetValue(ext)
self.changeStatusbar("Loading data ... please be patient")
if path.find('//') > 0:
stream = read(path_or_url=path, starttime=start, endtime=end)
else:
stream = read(path_or_url=os.path.join(path,ext), starttime=start, endtime=end)
else:
# assume Database
try:
self.changeStatusbar("Loading data ... please be patient")
stream = readDB(path[0],path[1], starttime=start, endtime=end)
except:
    print ("Reading failed")
    stream = DataStream()
return stream
else:
return DataStream()
def onSelectKeys(self,event):
"""
DESCRIPTION
open dialog to select shown keys (check boxes)
"""
if len(self.plotstream.ndarray[0]) == 0:
self.plotstream = self.stream.copy()
keylist = self.plotstream._get_key_headers(numerical=True)
self.keylist = keylist
shownkeylist = [el for el in self.shownkeylist if el in NUMKEYLIST]
namelist = []
unitlist = []
for key in keylist:
if not len(self.plotstream.ndarray[KEYLIST.index(key)]) == 0:
value = self.plotstream.header.get('col-'+key)
unit = self.plotstream.header.get('unit-col-'+key)
if not value == '':
namelist.append(value)
else:
namelist.append(key)
if not unit == '':
unitlist.append(unit)
else:
unitlist.append('')
if len(self.plotstream.ndarray[0]) > 0:
dlg = StreamSelectKeysDialog(None, title='Select keys:',keylst=keylist,shownkeys=self.shownkeylist,namelist=namelist)
for elem in shownkeylist:
getattr(dlg, elem+'CheckBox').SetValue(True)
if dlg.ShowModal() == wx.ID_OK:
shownkeylist = []
for elem in keylist:
boolval = getattr(dlg, elem+'CheckBox').GetValue()
if boolval:
shownkeylist.append(elem)
if len(shownkeylist) == 0:
shownkeylist = self.shownkeylist
else:
self.shownkeylist = shownkeylist
self.symbollist = [self.symbollist[0]]*len(shownkeylist)
self.plotopt['symbollist'] = [self.symbollist[0]]*len(shownkeylist)
self.ActivateControls(self.plotstream)
self.OnPlot(self.plotstream,self.shownkeylist)
self.changeStatusbar("Ready")
else:
self.changeStatusbar("Failure")
def onExtractData(self,event):
"""
DESCRIPTION:
open dialog to choose extract parameters (parameter, comparison, value);
up to three criteria can be combined
"""
if len(self.plotstream.ndarray[0]) == 0:
self.plotstream = self.stream.copy()
keylist = self.shownkeylist
if len(self.plotstream.ndarray[0]) > 0:
dlg = StreamExtractValuesDialog(None, title='Extract:',keylst=keylist)
if dlg.ShowModal() == wx.ID_OK:
key1 = dlg.key1ComboBox.GetValue()
comp1 = dlg.compare1ComboBox.GetValue()
val1 = dlg.value1TextCtrl.GetValue()
logic2 = dlg.logic2ComboBox.GetValue()
logic3 = dlg.logic3ComboBox.GetValue()
extractedstream = self.plotstream.extract(key1,val1,comp1)
if len(extractedstream) < 2 and extractedstream.length()[0] < 2:
# Empty stream returned -- looks complex because of old LineStruct rubbish
self.menu_p.rep_page.logMsg('Extract: criteria would return an empty data stream - skipping')
extractedstream = self.plotstream
val2 = dlg.value2TextCtrl.GetValue()
if not val2 == '':
key2 = dlg.key2ComboBox.GetValue()
comp2 = dlg.compare2ComboBox.GetValue()
if logic2 == 'and':
extractedstream = extractedstream.extract(key2,val2,comp2)
else:
extractedstream2 = self.plotstream.extract(key2,val2,comp2)
extractedstream.extend(extractedstream2.container, extractedstream2.header,extractedstream2.ndarray)
extractedstream = extractedstream.removeduplicates()
extractedstream = extractedstream.sorting()
extractedstream = extractedstream.get_gaps()
val3 = dlg.value3TextCtrl.GetValue()
if not val3 == '':
key3 = dlg.key3ComboBox.GetValue()
comp3 = dlg.compare3ComboBox.GetValue()
if logic3 == 'and':
extractedstream = extractedstream.extract(key3,val3,comp3)
else:
extractedstream3 = self.plotstream.extract(key3,val3,comp3)
extractedstream.extend(extractedstream3.container, extractedstream3.header,extractedstream3.ndarray)
extractedstream = extractedstream.removeduplicates()
extractedstream = extractedstream.sorting()
extractedstream = extractedstream.get_gaps()
self.plotstream = extractedstream
self.ActivateControls(self.plotstream)
self.OnPlot(self.plotstream,self.shownkeylist)
self.changeStatusbar("Ready")
else:
self.menu_p.rep_page.logMsg("Extract: No data available so far")
# specify filters -> allow to define filters Combo with key - Combo with selector (>,<,=) - TextBox with Filter
def onChangePlotOptions(self,event):
"""
DESCRIPTION:
open dialog to modify plot options (general (e.g. bgcolor) and key
specific (key: symbol color errorbar etc)
"""
if len(self.plotstream.ndarray[0]) > 0:
dlg = StreamPlotOptionsDialog(None, title='Plot Options:',optdict=self.plotopt)
if dlg.ShowModal() == wx.ID_OK:
for elem in self.plotopt:
if not elem in ['function']:
val = getattr(dlg, elem+'TextCtrl').GetValue()
if val in ['False','True','None'] or val.startswith('[') or val.startswith('{'):
val = eval(val)
if elem in ['opacity','bartrange']:
val = float(val)
if not val == self.plotopt[elem]:
self.plotopt[elem] = val
self.ActivateControls(self.plotstream)
self.OnPlot(self.plotstream,self.shownkeylist)
def onRestoreData(self,event):
"""
Restore originally loaded data
"""
self.flaglist = []
if not len(self.stream.ndarray[0]) > 0:
self.DeactivateAllControls()
self.changeStatusbar("No data available")
return False
print ("Restoring (works only for latest stream):", self.currentstreamindex)
#print ("Header", self.headerlist)
#self.plotstream = self.streamlist[self.currentstreamindex].copy()
self.plotstream = self.stream.copy()
self.plotstream.header = self.headerlist[self.currentstreamindex]
self.menu_p.rep_page.logMsg('Original data restored...')
#self.InitPlotParameter()
#self.ActivateControls(self.stream)
self.OnInitialPlot(self.stream, restore=True)
def onDailyMeansButton(self,event):
"""
Calculate daily means of the current stream
"""
if self.plotstream.header.get('DataFormat') == 'MagPyDI':
keys=['dx','dy','dz']
else:
keys = False
self.plotstream = self.plotstream.dailymeans(keys)
self.shownkeylist = self.plotstream._get_key_headers(numerical=True)[:3]
self.symbollist = [self.symbollist[0]]*len(self.shownkeylist)
self.plotopt['symbollist'] = [self.symbollist[0]]*len(self.shownkeylist)
self.plotopt['errorbars'] = [[True]*len(self.shownkeylist)]
self.ActivateControls(self.plotstream)
self.errorbars = True
self.OnPlot(self.plotstream,self.shownkeylist)
self.menu_p.str_page.errorBarsCheckBox.SetValue(True)
self.menu_p.str_page.errorBarsCheckBox.Enable()
self.changeStatusbar("Ready")
def onApplyBCButton(self,event):
"""
Apply baseline correction
"""
print ('self.plotstream', self.plotstream.header.get('DataComponents',''))
self.plotstream = self.plotstream.bc()
print ('self.plotstream', self.plotstream.header.get('DataComponents',''))
self.ActivateControls(self.plotstream)
self.OnPlot(self.plotstream,self.shownkeylist)
def onAnnotateCheckBox(self,event):
"""
Switch annotation of flagged data on or off
"""
#### get True or False
if not self.menu_p.str_page.annotateCheckBox.GetValue():
#self.annotate=False
self.plotopt['annotate'] = False
self.menu_p.str_page.annotateCheckBox.SetValue(False)
else:
#self.annotate=True
self.plotopt['annotate'] = True
self.menu_p.str_page.annotateCheckBox.SetValue(True)
#mp.plot(self.plotstream,annotate=True)
self.ActivateControls(self.plotstream)
self.OnPlot(self.plotstream,self.shownkeylist)
def onChangeComp(self, event):
orgcomp = self.compselect
self.compselect = self.menu_p.str_page.comp[event.GetInt()]
coordinate = orgcomp+'2'+self.compselect
self.changeStatusbar("Transforming ... {}".format(coordinate))
print("Transforming ... {}".format(coordinate))
self.plotstream = self.plotstream._convertstream(coordinate)
self.ActivateControls(self.plotstream)
self.OnPlot(self.plotstream,self.shownkeylist)
def onChangeSymbol(self, event):
#orgsymbol = self.symbolselect
symbolselect = self.menu_p.str_page.symbol[event.GetInt()]
self.changeStatusbar("Transforming ...")
self.ActivateControls(self.plotstream)
#if len(self.plotstream.ndarray[0]) == 0:
# self.plotstream = self.stream.copy()
if symbolselect == 'line':
self.symbollist = ['-' for elem in self.shownkeylist]
self.plotopt['symbollist'] = ['-' for elem in self.shownkeylist]
self.OnPlot(self.plotstream,self.shownkeylist)
elif symbolselect == 'point':
self.symbollist = ['o' for elem in self.shownkeylist]
self.plotopt['symbollist'] = ['o' for elem in self.shownkeylist]
self.OnPlot(self.plotstream,self.shownkeylist)
self.changeStatusbar("Ready")
def OnFlagClick(self, event):
"""Mouse event for flagging with double click."""
if not event.inaxes or not event.dblclick:
return
else:
sensid = self.plotstream.header.get('SensorID','')
dataid = self.plotstream.header.get('DataID','')
if sensid == '' and not dataid == '':
sensid = dataid[:-5]
if sensid == '':
dlg = wx.MessageDialog(self, "No Sensor ID available!\n"
"You need to define a unique Sensor ID\nfor the data set in order to use flagging.\nPlease go the tab Meta for this purpose.\n","Undefined Sensor ID", wx.OK|wx.ICON_INFORMATION)
dlg.ShowModal()
dlg.Destroy()
else:
flaglist = []
xdata = self.plot_p.t
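# Picking tolerance: half of the mean spacing between adjacent data points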
xtol = ((max(xdata) - min(xdata))/float(len(xdata)))/2
pickX = event.xdata
idx = (np.abs(xdata - pickX)).argmin()
time = self.plotstream.ndarray[KEYLIST.index('time')][idx]
starttime = num2date(time - xtol)
endtime = num2date(time + xtol)
dlg = StreamFlagSelectionDialog(None, title='Stream: Flag Selection', shownkeylist=self.shownkeylist, keylist=self.keylist)
if dlg.ShowModal() == wx.ID_OK:
keys2flag = dlg.AffectedKeysTextCtrl.GetValue()
keys2flag = keys2flag.split(',')
keys2flag = [el for el in keys2flag if el in KEYLIST]
flagid = dlg.FlagIDComboBox.GetValue()
flagid = int(flagid[0])
comment = dlg.CommentTextCtrl.GetValue()
if comment == '' and flagid != 0:
comment = 'Point flagged with unspecified reason'
flaglist = self.plotstream.flag_range(keys=self.shownkeylist,flagnum=flagid,text=comment,keystoflag=keys2flag,starttime=starttime,endtime=endtime)
self.menu_p.rep_page.logMsg('- flagged time range: added {} flags'.format(len(flaglist)))
if len(flaglist) > 0:
self.flaglist.extend(flaglist)
self.plotstream = self.plotstream.flag(flaglist)
self.ActivateControls(self.plotstream)
self.plotopt['annotate'] = True
self.menu_p.str_page.annotateCheckBox.SetValue(True)
self.OnPlot(self.plotstream,self.shownkeylist)
self.changeStatusbar("Ready")
def onFlagSelectionButton(self,event):
"""
DESCRIPTION
Flag all data within the zoomed region
"""
flaglist = []
sensid = self.plotstream.header.get('SensorID','')
dataid = self.plotstream.header.get('DataID','')
if sensid == '' and not dataid == '':
sensid = dataid[:-5]
self.xlimits = self.plot_p.xlimits
self.ylimits = self.plot_p.ylimits
selplt = self.plot_p.selplt
selkey=[self.shownkeylist[selplt]] # Get the marked key here
if sensid == '':
dlg = wx.MessageDialog(self, "No Sensor ID available!\n"
"You need to define a unique Sensor ID\nfor the data set in order to use flagging.\nPlease go the tab Meta for this purpose.\n","Undefined Sensor ID", wx.OK|wx.ICON_INFORMATION)
dlg.ShowModal()
dlg.Destroy()
else:
self.changeStatusbar("Flagging selection ...")
dlg = StreamFlagSelectionDialog(None, title='Stream: Flag Selection', shownkeylist=self.shownkeylist, keylist=self.keylist)
if dlg.ShowModal() == wx.ID_OK:
keys2flag = dlg.AffectedKeysTextCtrl.GetValue()
keys2flag = keys2flag.split(',')
keys2flag = [el for el in keys2flag if el in KEYLIST]
comment = dlg.CommentTextCtrl.GetValue()
flagid = dlg.FlagIDComboBox.GetValue()
flagid = int(flagid[0])
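# The zoomed y-range defines the amplitude window and the zoomed x-range the time window for flagging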
above = min(self.ylimits)
below = max(self.ylimits)
starttime =num2date(min(self.xlimits))
endtime = num2date(max(self.xlimits))
print ("FlagID:", flagid)
flaglist = self.plotstream.flag_range(keys=selkey,flagnum=flagid,text=comment,keystoflag=keys2flag,starttime=starttime,endtime=endtime,above=above,below=below)
self.menu_p.rep_page.logMsg('- flagged selection: added {} flags'.format(len(flaglist)))
if len(flaglist) > 0:
#print ("FlagRange: Please note that the range definition needs an update as only single values are considered")
#print ("TEst", flaglist)
self.flaglist.extend(flaglist)
self.plotstream = self.plotstream.flag(flaglist)
self.ActivateControls(self.plotstream)
#self.annotate = True
self.plotopt['annotate'] = True
self.menu_p.str_page.annotateCheckBox.SetValue(True)
self.OnPlot(self.plotstream,self.shownkeylist)
self.changeStatusbar("Ready")
"""
#dlg = StreamFlagSelectionDialog(None, title='Stream: Flag selection ...')
#prev_redir = sys.stdout
#redir=RedirectText(dlg.SelectionTextCtrl)
#sys.stdout=redir
### commands
#sys.stdout=prev_redir
self.changeStatusbar("Opening external data viewer ...")
self.plot_p.plt.close()
variables = self.keylist
#p = subprocess.Popen(['ls', '-a'], stdout = subprocess.PIPE)
#text = p.stdout.readlines()
#text = "".join(text)
self.plotstream, flaglist = mp.plotFlag(self.plotstream,variables)
self.flaglist.extend(flaglist)
self.changeStatusbar("Updating plot ...")
self.menu_p.rep_page.logMsg('- flagged user selection: added {} flags'.format(len(flaglist)))
self.ActivateControls(self.plotstream)
#self.annotate = True
self.plotopt['annotate'] = True
self.menu_p.str_page.annotateCheckBox.SetValue(True)
self.OnPlot(self.plotstream,self.shownkeylist)
"""
def onFlagOutlierButton(self, event):
"""
DESCRIPTION
Method for Outlier
"""
self.changeStatusbar("Flagging outliers ...")
sr = self.menu_p.met_page.samplingrateTextCtrl.GetValue().encode('ascii','ignore')
keys = self.shownkeylist
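# Default time window: 600 data points at the given sampling rate (in seconds)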
timerange = float(sr)*600.
threshold=5.0
# Open Dialog and return the parameters threshold, keys, timerange
dlg = StreamFlagOutlierDialog(None, title='Stream: Flag outlier', threshold=threshold, timerange=timerange)
if dlg.ShowModal() == wx.ID_OK:
threshold = dlg.ThresholdTextCtrl.GetValue()
timerange = dlg.TimerangeTextCtrl.GetValue()
try:
threshold = float(threshold)
timerange = float(timerange)
timerange = timedelta(seconds=timerange)
flaglist = self.plotstream.flag_outlier(stdout=True,returnflaglist=True, keys=keys,threshold=threshold,timerange=timerange)#,markall=markall)
self.flaglist.extend(flaglist)
self.plotstream = self.plotstream.flag_outlier(stdout=True, keys=keys,threshold=threshold,timerange=timerange)
self.menu_p.rep_page.logMsg('- flagged outliers: added {} flags'.format(len(flaglist)))
except:
print("flag outliers failed: check parameter")
self.menu_p.rep_page.logMsg('- flag outliers failed: check parameter')
self.ActivateControls(self.plotstream)
#self.annotate = True
self.plotopt['annotate'] = True
self.menu_p.str_page.annotateCheckBox.SetValue(True)
self.OnPlot(self.plotstream,self.shownkeylist)
self.changeStatusbar("Ready")
def onFlagRangeButton(self,event):
"""
DESCRIPTION
Opens a dialog for selecting the range to be flagged
"""
flaglist = []
sensid = self.plotstream.header.get('SensorID','')
dataid = self.plotstream.header.get('DataID','')
if sensid == '' and not dataid == '':
sensid = dataid[:-5]
self.xlimits = self.plot_p.xlimits
if sensid == '':
dlg = wx.MessageDialog(self, "No Sensor ID available!\n"
"You need to define a unique Sensor ID\nfor the data set in order to use flagging.\nPlease go the tab Meta for this purpose.\n","Undefined Sensor ID", wx.OK|wx.ICON_INFORMATION)
dlg.ShowModal()
dlg.Destroy()
else:
self.changeStatusbar("Flagging range ...")
dlg = StreamFlagRangeDialog(None, title='Stream: Flag range', stream = self.plotstream, shownkeylist=self.shownkeylist, keylist=self.keylist)
startdate=self.xlimits[0]
enddate=self.xlimits[1]
starttime = num2date(startdate).strftime('%X')
endtime = num2date(enddate).strftime('%X')
dlg.startFlagDatePicker.SetValue(pydate2wxdate(num2date(startdate)))
dlg.endFlagDatePicker.SetValue(pydate2wxdate(num2date(enddate)))
dlg.startFlagTimePicker.SetValue(starttime)
dlg.endFlagTimePicker.SetValue(endtime)
if dlg.ShowModal() == wx.ID_OK:
# get values from dlg
flagtype = dlg.rangeRadioBox.GetStringSelection()
keys2flag = dlg.AffectedKeysTextCtrl.GetValue()
keys2flag = keys2flag.split(',')
keys2flag = [el for el in keys2flag if el in KEYLIST]
comment = dlg.CommentTextCtrl.GetValue()
flagid = dlg.FlagIDComboBox.GetValue()
flagid = int(flagid[0])
if flagtype == 'value':
keys = str(dlg.SelectKeyComboBox.GetValue())
above = dlg.LowerLimitTextCtrl.GetValue()
below = dlg.UpperLimitTextCtrl.GetValue()
flagval = True
if not below == '' and not above == '':
above = float(above)
below = float(below)
#below = None
self.menu_p.rep_page.logMsg('- flagging values between {} and {}'.format(above, below))
elif not below == '':
below = float(below)
above = None
self.menu_p.rep_page.logMsg('- flagging values below {}'.format(below))
elif not above == '':
above = float(above)
below = None
self.menu_p.rep_page.logMsg('- flagging values above {}'.format(above))
else:
flagval = False
if flagval:
#print ("Above , Below:", above, below)
flaglist = self.plotstream.flag_range(keys=[keys],flagnum=flagid,text=comment,keystoflag=keys2flag,above=above,below=below)
self.menu_p.rep_page.logMsg('- flagged value range: added {} flags'.format(len(flaglist)))
elif flagtype == 'time':
if comment == '':
comment = 'Time range flagged with unspecified reason'
stday = dlg.startFlagDatePicker.GetValue()
sttime = str(dlg.startFlagTimePicker.GetValue())
if sttime.endswith('AM') or sttime.endswith('am'):
sttime = datetime.strftime(datetime.strptime(sttime,"%I:%M:%S %p"),"%H:%M:%S")
if sttime.endswith('pm') or sttime.endswith('PM'):
sttime = datetime.strftime(datetime.strptime(sttime,"%I:%M:%S %p"),"%H:%M:%S")
sd = datetime.strftime(datetime.fromtimestamp(stday.GetTicks()), "%Y-%m-%d")
starttime= datetime.strptime(str(sd)+'_'+sttime, "%Y-%m-%d_%H:%M:%S")
enday = dlg.endFlagDatePicker.GetValue()
entime = str(dlg.endFlagTimePicker.GetValue())
if entime.endswith('AM') or entime.endswith('am'):
entime = datetime.strftime(datetime.strptime(entime,"%I:%M:%S %p"),"%H:%M:%S")
if entime.endswith('pm') or entime.endswith('PM'):
entime = datetime.strftime(datetime.strptime(entime,"%I:%M:%S %p"),"%H:%M:%S")
ed = datetime.strftime(datetime.fromtimestamp(enday.GetTicks()), "%Y-%m-%d")
endtime= datetime.strptime(str(ed)+'_'+entime, "%Y-%m-%d_%H:%M:%S")
#print ("Range", starttime, endtime, keys2flag)
flaglist = self.plotstream.flag_range(keys=self.shownkeylist,flagnum=flagid,text=comment,keystoflag=keys2flag,starttime=starttime,endtime=endtime)
self.menu_p.rep_page.logMsg('- flagged time range: added {} flags'.format(len(flaglist)))
else:
pass
if len(flaglist) > 0:
#print ("FlagRange: Please note that the range definition needs an update as only single values are considered")
#print ("TEst", flaglist)
self.flaglist.extend(flaglist)
self.plotstream = self.plotstream.flag(flaglist)
self.ActivateControls(self.plotstream)
#self.annotate = True
self.plotopt['annotate'] = True
self.menu_p.str_page.annotateCheckBox.SetValue(True)
self.OnPlot(self.plotstream,self.shownkeylist)
self.changeStatusbar("Ready")
def onFlagLoadButton(self,event):
"""
DESCRIPTION
            Opens a dialog for loading flags either from a DB or from a file
"""
sensorid = self.plotstream.header.get('SensorID','')
# Open Dialog and return the parameters threshold, keys, timerange
self.changeStatusbar("Loading flags ... please be patient")
dlg = StreamLoadFlagDialog(None, title='Load Flags', db = self.db, sensorid=sensorid, start=self.plotstream.start(),end=self.plotstream.end())
dlg.ShowModal()
if len(dlg.flaglist) > 0:
flaglist = dlg.flaglist
#print ("Loaded flags like", flaglist[0], self.flaglist[0])
self.flaglist.extend(flaglist)
#print ("extended flaglist looking like", self.flaglist[0])
self.changeStatusbar("Applying flags ... please be patient")
self.plotstream = self.plotstream.flag(flaglist)
self.menu_p.rep_page.logMsg('- loaded flags: added {} flags'.format(len(flaglist)))
self.ActivateControls(self.plotstream)
#self.annotate = True
self.plotopt['annotate'] = True
#self.menu_p.str_page.annotateCheckBox.SetValue(False)
self.OnPlot(self.plotstream,self.shownkeylist)
self.changeStatusbar("Ready")
def onFlagSaveButton(self,event):
"""
DESCRIPTION
            Opens a dialog for saving flags either to a DB or to a file
"""
currentlen = len(self.flaglist)
#print ("FlagSave", self.flaglist)
self.changeStatusbar("Saving flags ...")
dlg = StreamSaveFlagDialog(None, title='Save Flags', db = self.db, flaglist=self.flaglist)
if dlg.ShowModal() == wx.ID_OK:
#flaglist = dlg.flaglist
pass
#self.flaglist = []
self.changeStatusbar("Flaglist saved and reset - Ready")
def onFlagDropButton(self,event):
"""
DESCRIPTION
Drops all flagged data
"""
self.changeStatusbar("Dropping flagged data ...")
#dlg = wx.MessageDialog(self, "Please select:\n"
# "Yes: drop data from all columns\nNo: drop only selected data\n","Drop", wx.YES_NO |wx.ICON_INFORMATION)
#if dlg.ShowModal() == wx.ID_YES:
# self.plotstream = self.plotstream.flag(self.shownkeylist)
#else:
self.plotstream = self.plotstream.remove_flagged()
flagid = KEYLIST.index('flag')
check = [el for el in self.plotstream.ndarray[flagid] if '0' in el or '2' in el or '4' in el]
if not len(check) > 0:
self.plotstream = self.plotstream._drop_column('flag')
self.plotstream = self.plotstream._drop_column('comment')
#self.plotopt['annotate'] = False
else:
pass
#self.plotopt['annotate'] = True
self.menu_p.rep_page.logMsg('- flagged data removed')
self.flaglist = []
self.ActivateControls(self.plotstream)
self.OnPlot(self.plotstream,self.shownkeylist)
self.changeStatusbar("Ready")
def onFlagMinButton(self,event):
"""
DESCRIPTION
Flags minimum value in zoomed region
"""
keys = self.shownkeylist
teststream = self.plotstream.copy()
# limits
self.xlimits = self.plot_p.xlimits
if not self.xlimits == [self.plotstream.ndarray[0],self.plotstream.ndarray[-1]]:
testarray = self.plotstream._select_timerange(starttime=self.xlimits[0],endtime=self.xlimits[1])
teststream = DataStream([LineStruct()],self.plotstream.header,testarray)
xdata = self.plot_p.t
xtol = ((max(xdata) - min(xdata))/float(len(xdata)))/2
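        # xtol is half of the mean sample spacing; it defines the small time
        # window around each detected extremum that will be flagged below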
mini = [teststream._get_min(key,returntime=True) for key in keys]
flaglist = []
comment = 'Flagged minimum'
flagid = self.menu_p.str_page.FlagIDComboBox.GetValue()
flagid = int(flagid[0])
        if flagid == 0:
            comment = ''
        for idx,me in enumerate(mini):
            if keys[idx] != 'df':
checkbox = getattr(self.menu_p.str_page, keys[idx] + 'CheckBox')
if checkbox.IsChecked():
starttime = num2date(me[1] - xtol)
endtime = num2date(me[1] + xtol)
flaglist.extend(self.plotstream.flag_range(keys=self.shownkeylist,flagnum=flagid,text=comment,keystoflag=keys[idx],starttime=starttime,endtime=endtime))
if len(flaglist) > 0:
self.menu_p.rep_page.logMsg('- flagged minimum: added {} flags'.format(len(flaglist)))
self.flaglist.extend(flaglist)
self.plotstream = self.plotstream.flag(flaglist)
self.ActivateControls(self.plotstream)
self.plotopt['annotate'] = True
self.menu_p.str_page.annotateCheckBox.SetValue(True)
self.OnPlot(self.plotstream,self.shownkeylist)
def onFlagMaxButton(self,event):
"""
DESCRIPTION
Flags maximum value in zoomed region
"""
keys = self.shownkeylist
teststream = self.plotstream.copy()
# limits
self.xlimits = self.plot_p.xlimits
if not self.xlimits == [self.plotstream.ndarray[0],self.plotstream.ndarray[-1]]:
testarray = self.plotstream._select_timerange(starttime=self.xlimits[0],endtime=self.xlimits[1])
teststream = DataStream([LineStruct()],self.plotstream.header,testarray)
xdata = self.plot_p.t
xtol = ((max(xdata) - min(xdata))/float(len(xdata)))/2
maxi = [teststream._get_max(key,returntime=True) for key in keys]
flaglist = []
comment = 'Flagged maximum'
flagid = self.menu_p.str_page.FlagIDComboBox.GetValue()
flagid = int(flagid[0])
        if flagid == 0:
            comment = ''
        for idx,me in enumerate(maxi):
            if keys[idx] != 'df':
checkbox = getattr(self.menu_p.str_page, keys[idx] + 'CheckBox')
if checkbox.IsChecked():
starttime = num2date(me[1] - xtol)
endtime = num2date(me[1] + xtol)
flaglist.extend(self.plotstream.flag_range(keys=self.shownkeylist,flagnum=flagid,text=comment,keystoflag=keys[idx],starttime=starttime,endtime=endtime))
if len(flaglist) > 0:
self.menu_p.rep_page.logMsg('- flagged maximum: added {} flags'.format(len(flaglist)))
self.flaglist.extend(flaglist)
self.plotstream = self.plotstream.flag(flaglist)
self.ActivateControls(self.plotstream)
self.plotopt['annotate'] = True
self.menu_p.str_page.annotateCheckBox.SetValue(True)
self.OnPlot(self.plotstream,self.shownkeylist)
# ------------------------------------------------------------------------------------------
# ################
# Meta page functions
# ################
# ------------------------------------------------------------------------------------------
def onMetaGetDBButton(self,event):
# TODO Move to Meta page
"""
DESCRIPTION
get Meta data for the current sensorid from database
"""
# open dialog with all header info
dataid = self.plotstream.header.get('DataID','')
if dataid == '':
dlg = wx.MessageDialog(self, "No Data ID available!\n"
"You need to specify a unique Data ID\nfor which meta information is obtained.\n","Undefined Data ID", wx.OK|wx.ICON_INFORMATION)
dlg.ShowModal()
dlg.Destroy()
self.menu_p.rep_page.logMsg(" - failed to add meta information from DB")
else:
self.plotstream.header = dbfields2dict(self.db,dataid)
self.menu_p.rep_page.logMsg(" - added meta information for {} from DB".format(dataid))
self.ActivateControls(self.plotstream)
def onMetaPutDBButton(self,event):
"""
DESCRIPTION
write meta data to the database
"""
# open dialog with all header info
dataid = self.plotstream.header.get('DataID','')
if dataid == '':
dlg = wx.MessageDialog(self, "No Data ID available!\n"
"You need to specify a unique Data ID\nfor which meta information is stored.\n","Undefined Data ID", wx.OK|wx.ICON_INFORMATION)
dlg.ShowModal()
dlg.Destroy()
self.menu_p.rep_page.logMsg(" - failed to add meta information to DB")
else:
dlg = wx.MessageDialog(self, "Please confirm!\n"
"I want to replace the Meta information\nfrom the DB with data provided.\n","Confirm", wx.YES_NO |wx.ICON_INFORMATION)
if dlg.ShowModal() == wx.ID_YES:
dbdict2fields(self.db,self.plotstream.header)
self.menu_p.rep_page.logMsg(" - added meta information for {} to DB".format(dataid))
self.ActivateControls(self.plotstream)
def onMetaDataButton(self,event):
"""
DESCRIPTION
            open dialog to view and edit the DATAINFO meta information
            of the current data stream
"""
# open dialog with all header info
if len(self.plotstream.ndarray[0]) > 0:
dlg = MetaDataDialog(None, title='Meta information:',header=self.plotstream.header,layer='DATAINFO')
if dlg.ShowModal() == wx.ID_OK:
for key in DATAINFOKEYLIST:
                    value = getattr(dlg.panel, key + 'TextCtrl').GetValue()
if not value == dlg.header.get(key,''):
self.plotstream.header[key] = value
self.ActivateControls(self.plotstream)
else:
self.menu_p.rep_page.logMsg("Meta information: No data available")
def onMetaSensorButton(self,event):
# TODO Move to Meta page
"""
DESCRIPTION
            open dialog to view and edit the SENSORS meta information
            of the current data stream
"""
# open dialog with all header info
if len(self.plotstream.ndarray[0]) > 0:
dlg = MetaDataDialog(None, title='Meta information:',header=self.plotstream.header,layer='SENSORS')
if dlg.ShowModal() == wx.ID_OK:
for key in SENSORSKEYLIST:
                    value = getattr(dlg.panel, key + 'TextCtrl').GetValue()
if not value == dlg.header.get(key,''):
self.plotstream.header[key] = value
self.ActivateControls(self.plotstream)
else:
self.menu_p.rep_page.logMsg("Meta information: No data available")
def onMetaStationButton(self,event):
# TODO Move to Meta page
"""
DESCRIPTION
            open dialog to view and edit the STATIONS meta information
            of the current data stream
"""
# open dialog with all header info
if len(self.plotstream.ndarray[0]) > 0:
dlg = MetaDataDialog(None, title='Meta information:',header=self.plotstream.header,layer='STATIONS')
if dlg.ShowModal() == wx.ID_OK:
for key in STATIONSKEYLIST:
                    value = getattr(dlg.panel, key + 'TextCtrl').GetValue()
if not value == dlg.header.get(key,''):
self.plotstream.header[key] = value
self.ActivateControls(self.plotstream)
else:
self.menu_p.rep_page.logMsg("Meta information: No data available")
# ------------------------------------------------------------------------------------------
# ####################
# Stream Operations functions
# ####################
# ------------------------------------------------------------------------------------------
def OnStreamList(self,event):
"""
DESCRIPTION
open dialog to select active stream
"""
plotstreamlist = []
plotkeylist = []
dlg = MultiStreamDialog(None, title='Select stream(s):',streamlist=self.streamlist, idx=self.currentstreamindex, streamkeylist=self.streamkeylist)
if dlg.ShowModal() == wx.ID_OK:
namelst = dlg.namelst
for idx, elem in enumerate(self.streamlist):
                val = getattr(dlg, namelst[idx] + 'CheckBox').GetValue()
if val:
plotstreamlist.append(elem)
plotkeylist.append(dlg.streamkeylist[idx])
activeidx = idx
if len(plotstreamlist) > 1:
# deactivate all Meta; Analysis methods
self.DeactivateAllControls()
self.OnMultiPlot(plotstreamlist,plotkeylist)
else:
self.currentstreamindex = activeidx
self.plotstream = plotstreamlist[0]
self.shownkeylist = [el for el in plotkeylist[0] if el in NUMKEYLIST]
#self.shownkeylist = self.streamkeylist[activeidx]
self.plotopt = self.plotoptlist[activeidx]
self.ActivateControls(self.plotstream)
self.OnPlot(self.plotstream,self.shownkeylist)
else:
mod = dlg.modify
if mod == True:
self.streamlist.append(dlg.result)
self.streamkeylist.append(dlg.resultkeys)
self.currentstreamindex = len(self.streamlist)-1
self.plotstream = self.streamlist[-1]
self.headerlist.append(self.plotstream.header)
self.shownkeylist = self.plotstream._get_key_headers(numerical=True)
self.ActivateControls(self.plotstream)
self.plotoptlist.append(self.plotopt)
self.OnPlot(self.plotstream,self.shownkeylist)
dlg.Destroy()
def OnStreamAdd(self,event):
currentstreamindex = len(self.streamlist)
self.streamlist.append(self.plotstream)
self.streamkeylist.append(self.shownkeylist)
self.headerlist.append(self.plotstream.header)
self.currentstreamindex = currentstreamindex
self.plotoptlist.append(self.plotopt)
# ------------------------------------------------------------------------------------------
# ################
# Absolute functions
# ################
# ------------------------------------------------------------------------------------------
def onLoadDI(self,event):
"""
open dialog to load DI data
"""
if isinstance(self.dipathlist, str):
dipathlist = self.dipathlist
else:
dipathlist = self.dipathlist[0]
if os.path.isfile(dipathlist):
dipathlist = os.path.split(dipathlist)[0]
dlg = LoadDIDialog(None, title='Get DI data', dirname=dipathlist)
dlg.ShowModal()
if not dlg.pathlist == 'None' and not len(dlg.pathlist) == 0:
self.menu_p.rep_page.logMsg("- loaded DI data")
self.menu_p.abs_page.diTextCtrl.SetValue(','.join(dlg.pathlist))
self.dipathlist = dlg.pathlist
if os.path.isfile(dlg.pathlist[0]):
dlgpath = os.path.split(dlg.pathlist[0])[0]
else:
dlgpath = dlg.pathlist[0]
self.options['dipathlist'] = [dlgpath]
self.menu_p.abs_page.AnalyzeButton.Enable()
dlg.Destroy()
def onDefineVario(self,event):
"""
        open dialog to select the variometer data directory
"""
if len(self.stream) > 0:
pass
# send a message box that this data will be erased
#self.variopath = ''
divariopath = self.options.get('divariopath','')
# Open a select path dlg as long as db and remote is not supported
dialog = wx.DirDialog(None, "Choose a directory with variometer data:",divariopath,style=wx.DD_DEFAULT_STYLE | wx.DD_NEW_DIR_BUTTON)
if dialog.ShowModal() == wx.ID_OK:
path = dialog.GetPath()
self.menu_p.abs_page.varioTextCtrl.SetValue(path)
self.options['divariopath'] = os.path.join(path,'*')
dialog.Destroy()
def onDefineScalar(self,event):
"""
        open dialog to select the scalar data directory
"""
if len(self.stream) > 0:
pass
# send a message box that this data will be erased
# Open a select path dlg as long as db and remote is not supported
discalarpath = self.options.get('discalarpath','')
dialog = wx.DirDialog(None, "Choose a directory with scalar data:",discalarpath,style=wx.DD_DEFAULT_STYLE | wx.DD_NEW_DIR_BUTTON)
if dialog.ShowModal() == wx.ID_OK:
path = dialog.GetPath()
self.menu_p.abs_page.scalarTextCtrl.SetValue(path)
self.options['discalarpath'] = os.path.join(path,'*')
dialog.Destroy()
def onDIAnalyze(self,event):
"""
        analyse the selected DI data and create a result stream
"""
# Get parameters from options
divariopath = self.options.get('divariopath','')
discalarpath = self.options.get('discalarpath','')
stationid= self.options.get('stationid','')
abstype= self.options.get('ditype','')
azimuth= self.options.get('diazimuth','')
try:
expD= float(self.options.get('diexpD','0.0'))
except:
expD = 0.0
try:
expI= float(self.options.get('diexpI','0.0'))
except:
expI = 0.0
try:
alpha= float(self.options.get('dialpha','0.0'))
except:
alpha = 0.0
try:
deltaF= float(self.options.get('dideltaF','0.0'))
except:
deltaF = 0.0
if len(self.dipathlist) > 0:
self.changeStatusbar("Processing DI data ... please be patient")
#absstream = absoluteAnalysis(self.dipathlist,self.divariopath,self.discalarpath, expD=self.diexpD,expI=self.diexpI,diid=self.diid,stationid=self.stationid,abstype=self.ditype, azimuth=self.diazimuth,pier=self.dipier,alpha=self.dialpha,deltaF=self.dideltaF, dbadd=self.didbadd)
prev_redir = sys.stdout
redir=RedirectText(self.menu_p.abs_page.dilogTextCtrl)
sys.stdout=redir
if not azimuth == '':
azimuth = float(azimuth)
absstream = absoluteAnalysis(self.dipathlist,divariopath,discalarpath, expD=expD,expI=expI,stationid=stationid,abstype=abstype, azimuth=azimuth,alpha=alpha,deltaF=deltaF)
else:
absstream = absoluteAnalysis(self.dipathlist,divariopath,discalarpath, expD=expD,expI=expI,stationid=stationid,alpha=alpha,deltaF=deltaF)
sys.stdout=prev_redir
# only if more than one point is selected
self.changeStatusbar("Ready")
if len(absstream.length()) > 1 and absstream.length()[0] > 0:
# Convert absstream
array = [[] for el in KEYLIST]
for idx,el in enumerate(absstream.ndarray):
if KEYLIST[idx] in NUMKEYLIST or KEYLIST[idx] == 'time':
array[idx] = np.asarray(el).astype(float)
else:
array[idx] = np.asarray(el)
absstream.ndarray = np.asarray(array)
self.stream = absstream
self.plotstream = absstream
currentstreamindex = len(self.streamlist)
self.streamlist.append(self.stream)
self.streamkeylist.append(absstream._get_key_headers())
self.headerlist.append(self.stream.header)
self.currentstreamindex = currentstreamindex
#self.ActivateControls(self.plotstream)
self.OnInitialPlot(self.plotstream)
#self.plotoptlist.append(self.plotopt)
else:
self.ActivateControls(self.plotstream)
if not str(self.menu_p.abs_page.dilogTextCtrl.GetValue()) == '':
self.menu_p.abs_page.ClearLogButton.Enable()
self.menu_p.abs_page.SaveLogButton.Enable()
# set load di to something useful (seems to be empty now)
#redir=RedirectText(self.menu_p.rep_page.logMsg)
#sys.stdout=prev_redir
def onDISetParameter(self,event):
"""
open parameter box for DI analysis
"""
dlg = DISetParameterDialog(None, title='Set Parameter')
dlg.expDTextCtrl.SetValue(self.options.get('diexpD',''))
dlg.deltaFTextCtrl.SetValue(self.options.get('dideltaF',''))
dlg.azimuthTextCtrl.SetValue(self.options.get('diazimuth',''))
dlg.alphaTextCtrl.SetValue(self.options.get('dialpha',''))
dlg.pierTextCtrl.SetValue(self.options.get('dipier',''))
dlg.abstypeComboBox.SetStringSelection(self.options.get('ditype',''))
if dlg.ShowModal() == wx.ID_OK:
if not dlg.expDTextCtrl.GetValue() == '':
self.options['diexpD'] = dlg.expDTextCtrl.GetValue()
if not dlg.azimuthTextCtrl.GetValue() == '':
self.options['diazimuth'] = dlg.azimuthTextCtrl.GetValue()
if not dlg.pierTextCtrl.GetValue() == '':
self.options['dipier'] = dlg.pierTextCtrl.GetValue()
if not dlg.alphaTextCtrl.GetValue() == '':
self.options['dialpha'] = dlg.alphaTextCtrl.GetValue()
if not dlg.deltaFTextCtrl.GetValue() == '':
self.options['dideltaF'] = dlg.deltaFTextCtrl.GetValue()
self.options['ditype'] = dlg.abstypeComboBox.GetValue()
dlg.Destroy()
def onInputSheet(self,event):
"""
        DESCRIPTION:
open dialog to input DI data
"""
if isinstance(self.dipathlist, str):
dipath = self.dipathlist
else:
dipath = self.dipathlist[0]
if os.path.isfile(dipath):
dipath = os.path.split(dipath)[0]
self.dilayout = {}
self.dilayout['scalevalue'] = self.options['scalevalue']
self.dilayout['double'] = self.options['double']
self.dilayout['order'] = self.options['order'].split(',')
#self.dilayout = {'order':['MU','MD','EU','WU','ED','WD','NU','SD','ND','SU'], 'scalevalue':'True', 'double':'True'}
#self.dilayout = {'order':['MU','MD','WU','EU','WD','ED','NU','SD','ND','SU'], 'scalevalue':'True', 'double':'False'}
defaults = self.options
cdate = pydate2wxdate(datetime.utcnow())
dlg = InputSheetDialog(None, title='Add DI data',path=dipath,layout=self.dilayout, defaults=defaults, cdate=cdate, db = self.db)
if dlg.ShowModal() == wx.ID_OK:
pass
dlg.Destroy()
def onSaveDIData(self, event):
"""
DESCRIPTION
Save data of the logger to file
"""
        # TODO When starting analysis -> stdout is redirected ... switch back to normal afterwards
saveFileDialog = wx.FileDialog(self, "Save As", "", "",
"DI analysis report (*.txt)|*.txt",
wx.FD_SAVE | wx.FD_OVERWRITE_PROMPT)
        if saveFileDialog.ShowModal() == wx.ID_OK:
            savepath = saveFileDialog.GetPath()
            text = self.menu_p.abs_page.dilogTextCtrl.GetValue()
            # only write the report if the user confirmed a target path
            with open(savepath, "w") as difile:
                difile.write(text)
        saveFileDialog.Destroy()
def onClearDIData(self, event):
self.menu_p.abs_page.dilogTextCtrl.SetValue('')
# ------------------------------------------------------------------------------------------
# ################
# Report page functions
# ################
# ------------------------------------------------------------------------------------------
def onSaveLogButton(self, event):
saveFileDialog = wx.FileDialog(self, "Save As", "", "",
"Log files (*.log)|*.log",
wx.FD_SAVE | wx.FD_OVERWRITE_PROMPT)
        if saveFileDialog.ShowModal() == wx.ID_OK:
            savepath = saveFileDialog.GetPath()
            text = self.menu_p.rep_page.logger.GetValue()
            # only write the log if the user confirmed a target path
            with open(savepath, "w") as logfile:
                logfile.write(text)
        saveFileDialog.Destroy()
# ------------------------------------------------------------------------------------------
# ################
# Monitor page functions
# ################
# ------------------------------------------------------------------------------------------
def onConnectMARTASButton(self, event):
# start a subscribe to client call
success = True
# continuously collect data to stream and periodically call monitor plots
# Open dlg to select MARTAS-address (IP number)
# and to provide ssh access
# (favorite dict on MARTAS sheet {'MARTAS':'address','MQTT':'address'})
dlg = AGetMARTASDialog(None, title='Select MARTAS',options=self.options)
if dlg.ShowModal() == wx.ID_OK:
martasaddress = dlg.addressComboBox.GetValue()
martasuser = dlg.userTextCtrl.GetValue()
martaspasswd = dlg.pwdTextCtrl.GetValue()
else:
dlg.Destroy()
return
# If IP selected try to get sensor.txt from MARTAS using ssh
# If true : start record with sensorid
# if false: ask for sensorid (windows)
print ("Getting sensor information from ", martasaddress)
martaspath = os.path.join('/home',martasuser,'MARTAS')
print (martaspath)
sensfile = os.path.join(martaspath,'sensors.txt')
owfile = os.path.join(martaspath,'owlist.csv')
import tempfile
destpath = tempfile.gettempdir()
destsensfile = os.path.join(destpath,martasaddress+'_sensors.txt')
destowfile = os.path.join(destpath,martasaddress+'_owlist.csv')
try:
scptransfer(martasuser+'@'+martasaddress+':'+sensfile,destsensfile,martaspasswd)
except:
print ("Could not connect to/get sensor info of client {} - aborting".format(martasaddress))
success = False
#print "Please make sure that you connected at least once to the client by ssh"
#print " with your defaultuser %s " % martasuser
#print " This way the essential key data is established."
print ("Searching for onewire data from {}".format(martasaddress))
try:
scptransfer(martasuser+'@'+martasaddress+':'+owfile,destowfile,martaspasswd)
except:
print ("No one wire info available on client {} - proceeding".format(martasaddress))
s,o = [],[]
if os.path.exists(destsensfile):
with open(destsensfile,'rb') as f:
reader = csv.reader(f)
s = []
for line in reader:
print (line)
if len(line) < 2:
try:
s.append(line[0].split())
except:
# Empty line for example
pass
else:
s.append(line)
print (s)
else:
print ("Apparently no sensors defined on client {} - aborting".format(martasaddress))
success = False
return
if os.path.exists(destowfile):
with open(destowfile,'rb') as f:
reader = csv.reader(f)
o = [line for line in reader]
print (o)
# get all parameters
pad = 5
sr = 1.0 # sampling rate
currentdate = datetime.strftime(datetime.utcnow(),"%Y-%m-%d")
period = float(self.menu_p.com_page.frequSlider.GetValue())
covval = float(self.menu_p.com_page.coverageTextCtrl.GetValue())
coverage = covval/sr
limit = period/sr
# start subscribe2client
#self.plot_p.datavars = {0: datainfoid, 1: parameter, 2: limit, 3: pad, 4: currentdate, 5: unitlist, 6: coverage, 7: period, 8: self.db}
self.plot_p.datavars = {2: limit, 3: pad, 4: currentdate, 6: coverage, 7: period, 9: martasaddress, 10: destpath, 11: [martasuser,martaspasswd], 12: s, 13: o, 14: self.options.get('stationid','WIC')}
self.monitorSource='MARTAS'
if success:
self.menu_p.com_page.startMonitorButton.Enable()
self.menu_p.com_page.getMARCOSButton.Disable()
self.menu_p.com_page.getMQTTButton.Disable()
self.menu_p.com_page.martasLabel.SetBackgroundColour(wx.GREEN)
self.menu_p.com_page.martasLabel.SetValue('connected to {}'.format(martasaddress))
self.menu_p.com_page.logMsg('Begin monitoring...')
self.menu_p.com_page.logMsg(' - Selected MARTAS')
self.menu_p.com_page.logMsg(' - IP: {}'.format(martasaddress))
self.menu_p.com_page.coverageTextCtrl.Enable() # always
self.menu_p.com_page.frequSlider.Enable() # always
def onConnectMARCOSButton(self, event):
# active if database is connected
# open dlg
self.menu_p.rep_page.logMsg('- Selecting MARCOS table for monitoring ...')
output = dbselect(self.db,'DataID,DataMinTime,DataMaxTime','DATAINFO')
datainfoidlist = [elem[0] for elem in output]
if len(datainfoidlist) < 1:
dlg = wx.MessageDialog(self, "No data tables available!\n"
"please check your database\n",
"OpenDB", wx.OK|wx.ICON_INFORMATION)
dlg.ShowModal()
dlg.Destroy()
return
# select table
sr = 1
dlg = AGetMARCOSDialog(None, title='Select table',datalst=datainfoidlist)
if dlg.ShowModal() == wx.ID_OK:
datainfoid = dlg.dataComboBox.GetValue()
vals = dbselect(self.db, 'SensorID,DataSamplingRate,ColumnContents,ColumnUnits','DATAINFO', 'DataID = "'+datainfoid+'"')
vals = vals[0]
sensid= vals[0]
sr= float(vals[1].strip('sec'))
keys= vals[2].split(',')
units= vals[3].split(',')
else:
dlg.Destroy()
return
# get all parameters
pad = 5
currentdate = datetime.strftime(datetime.utcnow(),"%Y-%m-%d")
# start monitoring parameters
period = float(self.menu_p.com_page.frequSlider.GetValue())
covval = float(self.menu_p.com_page.coverageTextCtrl.GetValue())
coverage = covval/sr
limit = period/sr
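        # with sr in seconds per data point, covval/sr and period/sr convert
        # the requested time spans into the corresponding numbers of samples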
unitlist = []
for idx,key in enumerate(keys):
if not key == '':
unitlist.append([key, units[idx]])
parameter = ','.join([KEYLIST[idx+1] for idx,key in enumerate(keys) if not key=='' and KEYLIST[idx+1] in NUMKEYLIST])
self.plot_p.datavars = {0: datainfoid, 1: parameter, 2: limit, 3: pad, 4: currentdate, 5: unitlist, 6: coverage, 7: period, 8: self.db}
self.monitorSource='MARCOS'
success = True
if success:
self.menu_p.com_page.startMonitorButton.Enable()
self.menu_p.com_page.getMARTASButton.Disable()
self.menu_p.com_page.getMQTTButton.Disable()
self.menu_p.com_page.marcosLabel.SetBackgroundColour(wx.GREEN)
self.menu_p.com_page.marcosLabel.SetValue('connected to {}'.format(self.options.get('dbname','')))
self.menu_p.com_page.logMsg('Begin monitoring...')
self.menu_p.com_page.logMsg(' - Selected MARCOS database')
self.menu_p.com_page.logMsg(' - Table: {}'.format(datainfoid))
self.menu_p.com_page.coverageTextCtrl.Enable() # always
self.menu_p.com_page.frequSlider.Enable() # always
def onConnectMQTTButton(self, event):
dlg = wx.MessageDialog(self, "MQTT protocol not yet implemented!\n"
"... coming soon\n",
"MQTT connection", wx.OK|wx.ICON_INFORMATION)
dlg.ShowModal()
dlg.Destroy()
#success = False
#if success:
# self.menu_p.com_page.startMonitorButton.Enable()
# self.menu_p.com_page.coverageTextCtrl.Enable() # always
# self.menu_p.com_page.frequSlider.Enable() # always
def onStartMonitorButton(self, event):
self.DeactivateAllControls()
self.menu_p.com_page.getMARTASButton.Disable()
self.menu_p.com_page.getMARCOSButton.Disable()
self.menu_p.com_page.getMQTTButton.Disable()
self.menu_p.com_page.stopMonitorButton.Enable()
self.menu_p.com_page.saveMonitorButton.Enable()
# start monitoring parameters
period = float(self.menu_p.com_page.frequSlider.GetValue())
covval = float(self.menu_p.com_page.coverageTextCtrl.GetValue())
sr = self.plot_p.datavars[7]/self.plot_p.datavars[2]
coverage = covval/sr
limit = period/sr
self.plot_p.datavars[2] = limit
self.plot_p.datavars[6] = coverage
self.plot_p.datavars[7] = period
# Obtain the last values from the data base with given dataid and limit
# A DB query for 10 min 10Hz data needs approx 0.3 sec
if self.monitorSource=='MARCOS':
self.plot_p.t1_stop.clear()
self.menu_p.com_page.logMsg(' > Starting read cycle... {} sec'.format(period))
self.plot_p.startMARCOSMonitor()
self.menu_p.com_page.marcosLabel.SetBackgroundColour(wx.GREEN)
self.menu_p.com_page.marcosLabel.SetValue('connected to {}'.format(self.options.get('dbname','')))
elif self.monitorSource=='MARTAS':
self.plot_p.t1_stop.clear()
self.menu_p.com_page.logMsg(' > Starting read cycle... {} sec'.format(period))
self.plot_p.startMARTASMonitor()
# MARTASmonitor calls subscribe2client - output in temporary file (to start with) and access global array from storeData (move array to global)
#self.menu_p.com_page.martasLabel.SetBackgroundColour(wx.GREEN)
#self.menu_p.com_page.martasLabel.SetValue('connected to {}'.format('- address -'))
def _monitor2stream(self,array, db=None, dataid=None,header = {}):
"""
DESCRIPTION:
creates self.plotstream object from monitor data
"""
#header = {}
if db:
header = dbfields2dict(db,dataid)
array[0] = date2num(array[0])
stream = DataStream([LineStruct()],header,array)
return stream
def onStopMonitorButton(self, event):
if self.monitorSource=='MARCOS':
dataid = self.plot_p.datavars[0]
self.plot_p.t1_stop.set()
self.menu_p.com_page.logMsg(' > Read cycle stopped')
self.menu_p.com_page.logMsg('MARCOS disconnected')
self.stream = self._monitor2stream(self.plot_p.array,db=self.db,dataid=dataid)
self.plotstream = self.stream.copy()
currentstreamindex = len(self.streamlist)
self.streamlist.append(self.plotstream)
self.streamkeylist.append(self.plotstream._get_key_headers())
self.headerlist.append(self.plotstream.header)
self.currentstreamindex = currentstreamindex
self.menu_p.com_page.stopMonitorButton.Disable()
self.menu_p.com_page.saveMonitorButton.Disable()
self.ActivateControls(self.plotstream)
self.shownkeylist = self.UpdatePlotCharacteristics(self.plotstream)
self.plotoptlist.append(self.plotopt)
self.OnPlot(self.plotstream,self.shownkeylist)
self.menu_p.com_page.getMARTASButton.Enable()
self.menu_p.com_page.getMARCOSButton.Enable()
self.menu_p.com_page.getMQTTButton.Enable()
self.menu_p.com_page.marcosLabel.SetBackgroundColour((255,23,23))
self.menu_p.com_page.martasLabel.SetBackgroundColour((255,23,23))
self.menu_p.com_page.mqttLabel.SetBackgroundColour((255,23,23))
self.menu_p.com_page.marcosLabel.SetValue('not connected')
self.menu_p.com_page.martasLabel.SetValue('not connected')
self.menu_p.com_page.mqttLabel.SetValue('not connected')
def onLogDataButton(self, event):
# open dialog with pathname
# then use data_2_file method for binary writing
pass
class MagPyApp(wx.App):
# wxWindows calls this method to initialize the application
def OnInit(self):
# Create an instance of our customized Frame class
frame = MainFrame(None,-1,"")
frame.Show(True)
# Tell wxWindows that this is our main window
self.SetTopWindow(frame)
# Return a success flag
return True
'''
# To run:
app = MagPyApp(0)
app.MainLoop()
'''
|
hschovanec-usgs/magpy
|
magpy/gui/magpy_gui.py
|
Python
|
gpl-3.0
| 186,304
|
[
"Gaussian"
] |
bc2af2b4698b422bb86212446b9793507cf4b407f7b71353e8dcc00d6a5ee688
|
#!/usr/bin/python
#
# Created on Aug 25, 2016
# @author: Gaurav Rastogi (grastogi@avinetworks.com)
# Eric Anderson (eanderson@avinetworks.com)
# module_check: supported
#
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: avi_ipamdnsproviderprofile
author: Gaurav Rastogi (grastogi@avinetworks.com)
short_description: Module for setup of IpamDnsProviderProfile Avi RESTful Object
description:
- This module is used to configure IpamDnsProviderProfile object
- more examples at U(https://github.com/avinetworks/devops)
requirements: [ avisdk ]
version_added: "2.4"
options:
state:
description:
- The state that should be applied on the entity.
default: present
choices: ["absent", "present"]
avi_api_update_method:
description:
- Default method for object update is HTTP PUT.
- Setting to patch will override that behavior to use HTTP PATCH.
version_added: "2.5"
default: put
choices: ["put", "patch"]
avi_api_patch_op:
description:
- Patch operation to use when using avi_api_update_method as patch.
version_added: "2.5"
choices: ["add", "replace", "delete"]
allocate_ip_in_vrf:
description:
- If this flag is set, only allocate ip from networks in the virtual service vrf.
- Applicable for avi vantage ipam only.
- Field introduced in 17.2.4.
- Default value when not specified in API or module is interpreted by Avi Controller as False.
version_added: "2.5"
aws_profile:
description:
- Provider details if type is aws.
azure_profile:
description:
- Provider details if type is microsoft azure.
- Field introduced in 17.2.1.
version_added: "2.5"
custom_profile:
description:
- Provider details if type is custom.
- Field introduced in 17.1.1.
gcp_profile:
description:
- Provider details if type is google cloud.
infoblox_profile:
description:
- Provider details if type is infoblox.
internal_profile:
description:
- Provider details if type is avi.
name:
description:
- Name for the ipam/dns provider profile.
required: true
openstack_profile:
description:
- Provider details if type is openstack.
proxy_configuration:
description:
- Field introduced in 17.1.1.
tenant_ref:
description:
- It is a reference to an object of type tenant.
type:
description:
- Provider type for the ipam/dns provider profile.
- Enum options - IPAMDNS_TYPE_INFOBLOX, IPAMDNS_TYPE_AWS, IPAMDNS_TYPE_OPENSTACK, IPAMDNS_TYPE_GCP, IPAMDNS_TYPE_INFOBLOX_DNS, IPAMDNS_TYPE_CUSTOM,
- IPAMDNS_TYPE_CUSTOM_DNS, IPAMDNS_TYPE_AZURE, IPAMDNS_TYPE_INTERNAL, IPAMDNS_TYPE_INTERNAL_DNS, IPAMDNS_TYPE_AWS_DNS, IPAMDNS_TYPE_AZURE_DNS.
required: true
url:
description:
- Avi controller URL of the object.
uuid:
description:
- Uuid of the ipam/dns provider profile.
extends_documentation_fragment:
- avi
'''
EXAMPLES = """
- name: Create IPAM DNS provider setting
avi_ipamdnsproviderprofile:
controller: '{{ controller }}'
username: '{{ username }}'
password: '{{ password }}'
internal_profile:
dns_service_domain:
- domain_name: ashish.local
num_dns_ip: 1
pass_through: true
record_ttl: 100
- domain_name: guru.local
num_dns_ip: 1
pass_through: true
record_ttl: 200
ttl: 300
name: Ashish-DNS
tenant_ref: Demo
type: IPAMDNS_TYPE_INTERNAL
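# Illustrative sketch (all values are placeholders): updating an existing
# profile via HTTP PATCH instead of the default PUT, using the
# avi_api_update_method and avi_api_patch_op options described above.
- name: Patch an existing IPAM DNS provider profile (illustrative)
  avi_ipamdnsproviderprofile:
    controller: '{{ controller }}'
    username: '{{ username }}'
    password: '{{ password }}'
    avi_api_update_method: patch
    avi_api_patch_op: add
    name: Ashish-DNS
    type: IPAMDNS_TYPE_INTERNAL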
"""
RETURN = '''
obj:
description: IpamDnsProviderProfile (api/ipamdnsproviderprofile) object
returned: success, changed
type: dict
'''
from ansible.module_utils.basic import AnsibleModule
try:
from ansible.module_utils.network.avi.avi import (
avi_common_argument_spec, HAS_AVI, avi_ansible_api)
except ImportError:
HAS_AVI = False
def main():
argument_specs = dict(
state=dict(default='present',
choices=['absent', 'present']),
avi_api_update_method=dict(default='put',
choices=['put', 'patch']),
avi_api_patch_op=dict(choices=['add', 'replace', 'delete']),
allocate_ip_in_vrf=dict(type='bool',),
aws_profile=dict(type='dict',),
azure_profile=dict(type='dict',),
custom_profile=dict(type='dict',),
gcp_profile=dict(type='dict',),
infoblox_profile=dict(type='dict',),
internal_profile=dict(type='dict',),
name=dict(type='str', required=True),
openstack_profile=dict(type='dict',),
proxy_configuration=dict(type='dict',),
tenant_ref=dict(type='str',),
type=dict(type='str', required=True),
url=dict(type='str',),
uuid=dict(type='str',),
)
argument_specs.update(avi_common_argument_spec())
module = AnsibleModule(
argument_spec=argument_specs, supports_check_mode=True)
if not HAS_AVI:
return module.fail_json(msg=(
'Avi python API SDK (avisdk>=17.1) is not installed. '
'For more details visit https://github.com/avinetworks/sdk.'))
return avi_ansible_api(module, 'ipamdnsproviderprofile',
set([]))
if __name__ == '__main__':
main()
|
le9i0nx/ansible
|
lib/ansible/modules/network/avi/avi_ipamdnsproviderprofile.py
|
Python
|
gpl-3.0
| 6,384
|
[
"VisIt"
] |
447380d936c535bba5670eb2e5706827e5222eaf392a87618dd1ddf2f233826c
|
# -*- coding: utf-8 -*-
# Spearmint
#
# Academic and Non-Commercial Research Use Software License and Terms
# of Use
#
# Spearmint is a software package to perform Bayesian optimization
# according to specific algorithms (the “Software”). The Software is
# designed to automatically run experiments (thus the code name
# 'spearmint') in a manner that iteratively adjusts a number of
# parameters so as to minimize some objective in as few runs as
# possible.
#
# The Software was developed by Ryan P. Adams, Michael Gelbart, and
# Jasper Snoek at Harvard University, Kevin Swersky at the
# University of Toronto (“Toronto”), and Hugo Larochelle at the
# Université de Sherbrooke (“Sherbrooke”), which assigned its rights
# in the Software to Socpra Sciences et Génie
# S.E.C. (“Socpra”). Pursuant to an inter-institutional agreement
# between the parties, it is distributed for free academic and
# non-commercial research use by the President and Fellows of Harvard
# College (“Harvard”).
#
# Using the Software indicates your agreement to be bound by the terms
# of this Software Use Agreement (“Agreement”). Absent your agreement
# to the terms below, you (the “End User”) have no rights to hold or
# use the Software whatsoever.
#
# Harvard agrees to grant hereunder the limited non-exclusive license
# to End User for the use of the Software in the performance of End
# User’s internal, non-commercial research and academic use at End
# User’s academic or not-for-profit research institution
# (“Institution”) on the following terms and conditions:
#
# 1. NO REDISTRIBUTION. The Software remains the property Harvard,
# Toronto and Socpra, and except as set forth in Section 4, End User
# shall not publish, distribute, or otherwise transfer or make
# available the Software to any other party.
#
# 2. NO COMMERCIAL USE. End User shall not use the Software for
# commercial purposes and any such use of the Software is expressly
# prohibited. This includes, but is not limited to, use of the
# Software in fee-for-service arrangements, core facilities or
# laboratories or to provide research services to (or in collaboration
# with) third parties for a fee, and in industry-sponsored
# collaborative research projects where any commercial rights are
# granted to the sponsor. If End User wishes to use the Software for
# commercial purposes or for any other restricted purpose, End User
# must execute a separate license agreement with Harvard.
#
# Requests for use of the Software for commercial purposes, please
# contact:
#
# Office of Technology Development
# Harvard University
# Smith Campus Center, Suite 727E
# 1350 Massachusetts Avenue
# Cambridge, MA 02138 USA
# Telephone: (617) 495-3067
# Facsimile: (617) 495-9568
# E-mail: otd@harvard.edu
#
# 3. OWNERSHIP AND COPYRIGHT NOTICE. Harvard, Toronto and Socpra own
# all intellectual property in the Software. End User shall gain no
# ownership to the Software. End User shall not remove or delete and
# shall retain in the Software, in any modifications to Software and
# in any Derivative Works, the copyright, trademark, or other notices
# pertaining to Software as provided with the Software.
#
# 4. DERIVATIVE WORKS. End User may create and use Derivative Works,
# as such term is defined under U.S. copyright laws, provided that any
# such Derivative Works shall be restricted to non-commercial,
# internal research and academic use at End User’s Institution. End
# User may distribute Derivative Works to other Institutions solely
# for the performance of non-commercial, internal research and
# academic use on terms substantially similar to this License and
# Terms of Use.
#
# 5. FEEDBACK. In order to improve the Software, comments from End
# Users may be useful. End User agrees to provide Harvard with
# feedback on the End User’s use of the Software (e.g., any bugs in
# the Software, the user experience, etc.). Harvard is permitted to
# use such information provided by End User in making changes and
# improvements to the Software without compensation or an accounting
# to End User.
#
# 6. NON ASSERT. End User acknowledges that Harvard, Toronto and/or
# Sherbrooke or Socpra may develop modifications to the Software that
# may be based on the feedback provided by End User under Section 5
# above. Harvard, Toronto and Sherbrooke/Socpra shall not be
# restricted in any way by End User regarding their use of such
# information. End User acknowledges the right of Harvard, Toronto
# and Sherbrooke/Socpra to prepare, publish, display, reproduce,
# transmit and or use modifications to the Software that may be
# substantially similar or functionally equivalent to End User’s
# modifications and/or improvements if any. In the event that End
# User obtains patent protection for any modification or improvement
# to Software, End User agrees not to allege or enjoin infringement of
# End User’s patent against Harvard, Toronto or Sherbrooke or Socpra,
# or any of the researchers, medical or research staff, officers,
# directors and employees of those institutions.
#
# 7. PUBLICATION & ATTRIBUTION. End User has the right to publish,
# present, or share results from the use of the Software. In
# accordance with customary academic practice, End User will
# acknowledge Harvard, Toronto and Sherbrooke/Socpra as the providers
# of the Software and may cite the relevant reference(s) from the
# following list of publications:
#
# Practical Bayesian Optimization of Machine Learning Algorithms
# Jasper Snoek, Hugo Larochelle and Ryan Prescott Adams
# Neural Information Processing Systems, 2012
#
# Multi-Task Bayesian Optimization
# Kevin Swersky, Jasper Snoek and Ryan Prescott Adams
# Advances in Neural Information Processing Systems, 2013
#
# Input Warping for Bayesian Optimization of Non-stationary Functions
# Jasper Snoek, Kevin Swersky, Richard Zemel and Ryan Prescott Adams
# Preprint, arXiv:1402.0929, http://arxiv.org/abs/1402.0929, 2013
#
# Bayesian Optimization and Semiparametric Models with Applications to
# Assistive Technology Jasper Snoek, PhD Thesis, University of
# Toronto, 2013
#
# 8. NO WARRANTIES. THE SOFTWARE IS PROVIDED "AS IS." TO THE FULLEST
# EXTENT PERMITTED BY LAW, HARVARD, TORONTO AND SHERBROOKE AND SOCPRA
# HEREBY DISCLAIM ALL WARRANTIES OF ANY KIND (EXPRESS, IMPLIED OR
# OTHERWISE) REGARDING THE SOFTWARE, INCLUDING BUT NOT LIMITED TO ANY
# IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
# PURPOSE, OWNERSHIP, AND NON-INFRINGEMENT. HARVARD, TORONTO AND
# SHERBROOKE AND SOCPRA MAKE NO WARRANTY ABOUT THE ACCURACY,
# RELIABILITY, COMPLETENESS, TIMELINESS, SUFFICIENCY OR QUALITY OF THE
# SOFTWARE. HARVARD, TORONTO AND SHERBROOKE AND SOCPRA DO NOT WARRANT
# THAT THE SOFTWARE WILL OPERATE WITHOUT ERROR OR INTERRUPTION.
#
# 9. LIMITATIONS OF LIABILITY AND REMEDIES. USE OF THE SOFTWARE IS AT
# END USER’S OWN RISK. IF END USER IS DISSATISFIED WITH THE SOFTWARE,
# ITS EXCLUSIVE REMEDY IS TO STOP USING IT. IN NO EVENT SHALL
# HARVARD, TORONTO OR SHERBROOKE OR SOCPRA BE LIABLE TO END USER OR
# ITS INSTITUTION, IN CONTRACT, TORT OR OTHERWISE, FOR ANY DIRECT,
# INDIRECT, SPECIAL, INCIDENTAL, CONSEQUENTIAL, PUNITIVE OR OTHER
# DAMAGES OF ANY KIND WHATSOEVER ARISING OUT OF OR IN CONNECTION WITH
# THE SOFTWARE, EVEN IF HARVARD, TORONTO OR SHERBROOKE OR SOCPRA IS
# NEGLIGENT OR OTHERWISE AT FAULT, AND REGARDLESS OF WHETHER HARVARD,
# TORONTO OR SHERBROOKE OR SOCPRA IS ADVISED OF THE POSSIBILITY OF
# SUCH DAMAGES.
#
# 10. INDEMNIFICATION. To the extent permitted by law, End User shall
# indemnify, defend and hold harmless Harvard, Toronto and Sherbrooke
# and Socpra, their corporate affiliates, current or future directors,
# trustees, officers, faculty, medical and professional staff,
# employees, students and agents and their respective successors,
# heirs and assigns (the "Indemnitees"), against any liability,
# damage, loss or expense (including reasonable attorney's fees and
# expenses of litigation) incurred by or imposed upon the Indemnitees
# or any one of them in connection with any claims, suits, actions,
# demands or judgments arising from End User’s breach of this
# Agreement or its Institution’s use of the Software except to the
# extent caused by the gross negligence or willful misconduct of
# Harvard, Toronto or Sherbrooke or Socpra. This indemnification
# provision shall survive expiration or termination of this Agreement.
#
# 11. GOVERNING LAW. This Agreement shall be construed and governed by
# the laws of the Commonwealth of Massachusetts regardless of
# otherwise applicable choice of law standards.
#
# 12. NON-USE OF NAME. Nothing in this License and Terms of Use shall
# be construed as granting End Users or their Institutions any rights
# or licenses to use any trademarks, service marks or logos associated
# with the Software. You may not use the terms “Harvard” or
# “University of Toronto” or “Université de Sherbrooke” or “Socpra
# Sciences et Génie S.E.C.” (or a substantially similar term) in any
# way that is inconsistent with the permitted uses described
# herein. You agree not to use any name or emblem of Harvard, Toronto
# or Sherbrooke, or any of their subdivisions for any purpose, or to
# falsely suggest any relationship between End User (or its
# Institution) and Harvard, Toronto and/or Sherbrooke, or in any
# manner that would infringe or violate any of their rights.
#
# 13. End User represents and warrants that it has the legal authority
# to enter into this License and Terms of Use on behalf of itself and
# its Institution.
import sys
import numpy as np
import numpy.random as npr
from .mcmc import slice_sample
# from .mcmc import slice_sample_simple as slice_sample
from .abstract_sampler import AbstractSampler
from ..utils import param as hyperparameter_utils
class SliceSampler(AbstractSampler):
"""generate samples from a model using slice sampling
Parameters
----------
*params_to_sample : args of type Params
The parameters that we are to be sampled.
**sampler_options
Attributes
----------
params : list of Params objects
        The attribute `value` of each element in the list is updated
upon calling `self.sample()`.
"""
def logprob(self, x, model):
"""compute the log probability of observations x
This includes the model likelihood as well as any prior
probability of the parameters
Returns
-------
lp : float
the log probability
"""
        # set values of the parameters in self.params to be x
hyperparameter_utils.set_params_from_array(self.params, x)
lp = 0.0
# sum the log probabilities of the parameter priors
for param in self.params:
lp += param.prior_logprob()
if np.isnan(lp): # Positive infinity should be ok, right?
print 'Param diagnostics:'
param.print_diagnostics()
print 'Prior logprob: %f' % param.prior_logprob()
raise Exception("Prior returned %f logprob" % lp)
if not np.isfinite(lp):
return lp
# include the log probability from the model
lp += model.log_likelihood()
if np.isnan(lp):
raise Exception("Likelihood returned %f logprob" % lp)
return lp
def sample(self, model):
"""generate a new sample of parameters for the model
Notes
-----
The parameters are stored as self.params which is a list of Params objects.
        The values of the parameters are updated on each call. Presumably the value of
        the parameter affects the model (this is not required, but it would be a bit
        pointless otherwise)
"""
# turn self.params into a 1d numpy array
params_array = hyperparameter_utils.params_to_array(self.params)
for i in xrange(self.thinning + 1):
# get a new value for the parameter array via slice sampling
params_array, current_ll = slice_sample(params_array, self.logprob, model, **self.sampler_options)
hyperparameter_utils.set_params_from_array(self.params, params_array) # Can this be untabbed safely?
self.current_ll = current_ll # for diagnostics
if __name__ == '__main__':
sys.path.append('..')
from utils import priors
import matplotlib.pyplot as plt
n = 10000
# Test on 1D Gaussian
x_samples = np.zeros(n)
x = np.zeros(1)
gsn = priors.Gaussian(mu = -1, sigma = 4)
for i in xrange(n):
if i % 1000 == 0:
print 'Sample %d/%d' % (i,n)
        x, cur_ll = slice_sample(x, gsn.logprob)
x_samples[i] = x.copy()
print '1D Gaussian actual mean: %f, mean of samples: %f' % (-1, np.mean(x_samples))
print '1D Gaussian actual sigma: %f, std of samples: %f' % (4, np.std(x_samples))
plt.figure(1)
plt.clf()
plt.hist(x_samples, 40)
plt.savefig('slice_sampler_test.pdf')
# Test on 2D Gaussian
mu = np.array([-2, 5])
a = npr.rand(2,2)
cov = np.dot(a,a.T)
mvn = priors.MultivariateNormal(mu = mu, cov = cov)
x_samples = np.zeros((2,n))
x = np.zeros(2)
for i in xrange(n):
if i % 1000 == 0:
print 'Sample %d/%d' % (i,n)
x, cur_ll = slice_sample(x, mvn.logprob)
x_samples[:,i] = x.copy()
mu_samp = np.mean(x_samples,axis=1)
print '2D Gaussian:'
print 'Actual mean: [%f,%f]' % (mu[0], mu[1])
print 'Mean of samples: [%f,%f]' % (mu_samp[0], mu_samp[1])
print 'Actual Cov:'
print str(cov)
print 'Cov of samples'
print str(np.cov(x_samples))
# plt.figure(1)
# plt.clf()
# plt.hist(x_samples, 40)
# plt.savefig('slice_sampler_test.pdf')
|
DavidMcDonald1993/ghsom
|
spearmint/spearmint/sampling/slice_sampler.py
|
Python
|
gpl-2.0
| 13,950
|
[
"Gaussian"
] |
3ae698a502c3a5611b69d32aef920553c727b8b45c757545d2701f9e7af65934
|
# Copyright (C) 2019 The ESPResSo project
#
# This file is part of ESPResSo.
#
# ESPResSo is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ESPResSo is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import unittest as ut
import importlib_wrapper
import numpy as np
tutorial, skipIfMissingFeatures = importlib_wrapper.configure_and_import(
"@TUTORIALS_DIR@/11-ferrofluid/11-ferrofluid_part2.py",
equil_steps=200, equil_rounds=10, loops=500, alphas=[0.5])
@skipIfMissingFeatures
class Tutorial(ut.TestCase):
system = tutorial.system
def test(self):
self.assertGreater(
tutorial.magnetization_para[0],
tutorial.magnetization_perp[0])
self.assertGreater(
tutorial.magnetization_para_star[0],
tutorial.L(tutorial.alphas[0]))
self.assertLess(
tutorial.magnetization_perp_star[0],
tutorial.L(tutorial.alphas[0]))
if __name__ == "__main__":
ut.main()
|
mkuron/espresso
|
testsuite/scripts/tutorials/test_11-ferrofluid_2.py
|
Python
|
gpl-3.0
| 1,473
|
[
"ESPResSo"
] |
22a1e1814d5b58be2a920c3e21b46e07a808596e941ca0f39e266c512406ee91
|
#!/usr/bin/env python
import os, sys, io
import m6plot
import m6toolbox
import netCDF4
import numpy
def run():
try: import argparse
except: raise Exception('This version of python is not new enough. python 2.7 or newer is required.')
parser = argparse.ArgumentParser(description='''Script for plotting annual-average SST bias.''')
parser.add_argument('infile', type=str, help='''Annually-averaged file containing 3D 'temp'.''')
parser.add_argument('-l','--label', type=str, default='', help='''Label to add to the plot.''')
parser.add_argument('-s','--suptitle', type=str, default='', help='''Super-title for experiment. Default is to read from netCDF file.''')
parser.add_argument('-o','--outdir', type=str, default='.', help='''Directory in which to place plots.''')
parser.add_argument('-g','--gridspec', type=str, required=True,
help='''Directory containing mosaic/grid-spec files (ocean_hgrid.nc and ocean_mask.nc).''')
parser.add_argument('-w','--woa', type=str, required=True,
help='''File containing WOA (or obs) data to compare against.''')
cmdLineArgs = parser.parse_args()
main(cmdLineArgs)
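# Illustrative invocation (file and directory names are hypothetical):
#   ./SST_bias_WOA05.py annual_means.nc -g /path/to/gridspec -w WOA05_ptemp.nc -o plots -l 'MyRun'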
def main(cmdLineArgs,stream=False):
numpy.seterr(divide='ignore', invalid='ignore', over='ignore') # To avoid warnings
if not os.path.exists(cmdLineArgs.gridspec): raise ValueError('Specified gridspec directory/tar file does not exist.')
if os.path.isdir(cmdLineArgs.gridspec):
x = netCDF4.Dataset(cmdLineArgs.gridspec+'/ocean_hgrid.nc').variables['x'][::2,::2]
xcenter = netCDF4.Dataset(cmdLineArgs.gridspec+'/ocean_hgrid.nc').variables['x'][1::2,1::2]
y = netCDF4.Dataset(cmdLineArgs.gridspec+'/ocean_hgrid.nc').variables['y'][::2,::2]
ycenter = netCDF4.Dataset(cmdLineArgs.gridspec+'/ocean_hgrid.nc').variables['y'][1::2,1::2]
msk = netCDF4.Dataset(cmdLineArgs.gridspec+'/ocean_mask.nc').variables['mask'][:]
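        # ocean_hgrid.nc stores areas on the 2x-refined supergrid; summing each
        # 2x2 block of supergrid cells recovers the model-grid cell area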
area = msk*netCDF4.Dataset(cmdLineArgs.gridspec+'/ocean_hgrid.nc').variables['area'][:,:].reshape([msk.shape[0], 2, msk.shape[1], 2]).sum(axis=-3).sum(axis=-1)
depth = netCDF4.Dataset(cmdLineArgs.gridspec+'/ocean_topog.nc').variables['depth'][:]
elif os.path.isfile(cmdLineArgs.gridspec):
x = m6toolbox.readNCFromTar(cmdLineArgs.gridspec,'ocean_hgrid.nc','x')[::2,::2]
xcenter = m6toolbox.readNCFromTar(cmdLineArgs.gridspec,'ocean_hgrid.nc','x')[1::2,1::2]
y = m6toolbox.readNCFromTar(cmdLineArgs.gridspec,'ocean_hgrid.nc','y')[::2,::2]
ycenter = m6toolbox.readNCFromTar(cmdLineArgs.gridspec,'ocean_hgrid.nc','y')[1::2,1::2]
msk = m6toolbox.readNCFromTar(cmdLineArgs.gridspec,'ocean_mask.nc','mask')[:]
area = msk*m6toolbox.readNCFromTar(cmdLineArgs.gridspec,'ocean_hgrid.nc','area')[:,:].reshape([msk.shape[0], 2, msk.shape[1], 2]).sum(axis=-3).sum(axis=-1)
depth = m6toolbox.readNCFromTar(cmdLineArgs.gridspec,'ocean_topog.nc','depth')[:]
else:
raise ValueError('Unable to extract grid information from gridspec directory/tar file.')
Tobs = netCDF4.Dataset( cmdLineArgs.woa )
if 'temp' in Tobs.variables: Tobs = Tobs.variables['temp']
elif 'ptemp' in Tobs.variables: Tobs = Tobs.variables['ptemp']
else: raise Exception('Could not find "temp" or "ptemp" in file "%s"'%(cmdLineArgs.woa))
if len(Tobs.shape)==3: Tobs = Tobs[0]
else: Tobs = Tobs[:,0].mean(axis=0)
rootGroup = netCDF4.MFDataset( cmdLineArgs.infile )
if 'temp' in rootGroup.variables: varName = 'temp'
elif 'ptemp' in rootGroup.variables: varName = 'ptemp'
elif 'thetao' in rootGroup.variables: varName = 'thetao'
else: raise Exception('Could not find "temp", "ptemp" or "thetao" in file "%s"'%(cmdLineArgs.infile))
if rootGroup.variables[varName].shape[0]>1: Tmod = rootGroup.variables[varName][:,0].mean(axis=0)
else: Tmod = rootGroup.variables[varName][0,0]
if cmdLineArgs.suptitle != '': suptitle = cmdLineArgs.suptitle + ' ' + cmdLineArgs.label
else: suptitle = rootGroup.title + ' ' + cmdLineArgs.label
imgbufs = []
ci=m6plot.pmCI(0.25,4.5,.5)
if stream is True: img = io.BytesIO()
else: img = cmdLineArgs.outdir+'/SST_bias_WOA05.png'
m6plot.xyplot( Tmod - Tobs , x, y, area=area,
suptitle=suptitle, title='SST bias (w.r.t. WOA\'05) [$\degree$C]',
clim=ci, colormap='dunnePM', centerlabels=True, extend='both',
save=img)
if stream is True: imgbufs.append(img)
m6plot.xycompare( Tmod, Tobs , x, y, area=area,
suptitle=suptitle,
      title1='SST [$\\degree$C]',
      title2='WOA\'05 SST [$\\degree$C]',
clim=m6plot.linCI(-2,29,.5), colormap='dunneRainbow', extend='max',
dlim=ci, dcolormap='dunnePM', dextend='both', centerdlabels=True,
save=cmdLineArgs.outdir+'/SST_bias_WOA05.3_panel.png')
if stream is True:
return imgbufs
if __name__ == '__main__':
run()
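# Illustrative invocation (paths and file names are hypothetical; `infile` is
# the model history file argument defined earlier in the parser):
#   python SST_bias_WOA05.py ocean_annual.nc -g /path/to/gridspec -w WOA05_ptemp.nc -o plots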
| nicjhan/MOM6-examples | tools/analysis/SST_bias_WOA05.py | Python | gpl-3.0 | 4,755 | ["NetCDF"] | 3f348705c4a9346ce54900911f34a0b8ea8ca9ec4106f5c1304468f1833c4285 |
# a tentative script to upload all existing drstree "versions" into CMIP sqlite database
# each variable, mip, experiment, model, ensemble combination add a new instance in "instance"
# for each instance there should be at least one version in "version" table
# for each version add at least one file in table "files"
from __future__ import print_function
import argparse
from ARCCSSive.CMIP5.update_db_functions import insert_unique, add_bulk_items
from ARCCSSive.CMIP5.other_functions import list_logfile, list_drs_files, check_hash, get_trackid
#NB tmptree root dir is also defined there
from ARCCSSive.CMIP5 import DB
from ARCCSSive.CMIP5.Model import Instance, Version, VersionFile
import cdms2
import os,sys
import glob
def parse_input():
''' Parse input arguments '''
parser = argparse.ArgumentParser(description=r'''Update database using the
logs produced by compare_ESGF for new ensembles
to run:
python update_db.py -i <input-file1> <input-file2>
At least one input file must be passed as argument.''',formatter_class=argparse.RawTextHelpFormatter)
parser.add_argument('-i','--input_file', type=str, nargs="*", help='input file with dataset information', required=True)
return vars(parser.parse_args())
def check_version(fpath):
''' Check for version and/or creation date in netcdf file '''
# open netcdf file
try:
f = cdms2.open(fpath,'r')
    except Exception:
print("INVALID NETCDF,%s" % fpath)
return None
# read attributes
try:
version=f.version_number
    except AttributeError:
version=None
f.close()
return version
def check_realm(fpath):
''' Check for realm in netcdf file '''
# open netcdf file
try:
f = cdms2.open(fpath,'r')
    except Exception:
print("INVALID NETCDF,%s" % fpath)
return None
# read attributes
try:
realm=f.modeling_realm
    except AttributeError:
realm=None
f.close()
return realm
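# check_version and check_realm differ only in the global attribute they read;
# a minimal sketch of a shared helper (hypothetical name, not used below):
def read_global_attr(fpath, attr):
    ''' Return a global attribute of a netcdf file, or None if absent/unreadable '''
    try:
        f = cdms2.open(fpath, 'r')
    except Exception:
        print("INVALID NETCDF,%s" % fpath)
        return None
    value = getattr(f, attr, None)
    f.close()
    return value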
# assign input arguments
kwargs = parse_input()
ifiles = kwargs['input_file']
# expand any glob patterns; build a new list rather than mutating ifiles while iterating over it
expanded = []
for f in ifiles:
    if '*' in f:
        expanded.extend(glob.glob(f))
    else:
        expanded.append(f)
ifiles = expanded
# open local database using the ARCCSSive interface
cmip5 = DB.connect()
db = cmip5.session
# initialise instances as an empty list
instances=[]
# for each file read info into a list of dictionary containing dataset info
# each dict has keys:
# variable, mip, model, experiment, ensemble, realm, version, dataset_id, path, cks_type, files
# where files is a dict with keys: filename, tracking_id, checksum
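# e.g. (illustrative values only):
# {'variable': 'tas', 'mip': 'Amon', 'model': 'MODEL-X', 'experiment': 'historical',
#  'ensemble': 'r1i1p1', 'realm': 'atmos', 'version': 'v20120101', 'dataset_id': '...',
#  'path': '/drstree/.../v20120101', 'cks_type': 'md5',
#  'files': [{'filename': 'tas_Amon_....nc', 'tracking_id': '...', 'checksum': '...'}]}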
for inf in ifiles:
    instances.extend(list_logfile(inf))
# for each instance identified, add an instance row
for kw_instance in instances:
# create dictionary of fields for new instance
var=kw_instance['variable']
    kw_version = {}
kw_version['version'] = kw_instance.pop('version')
kw_version['dataset_id'] = kw_instance.pop('dataset_id')
vers_path = kw_instance.pop('path')
kw_version['path'] = vers_path
print(vers_path)
ctype = kw_instance.pop('cks_type').replace("\n","")
if ctype=="None":
ctype='sha256'
kw_files = kw_instance.pop('files')
    if kw_instance['realm'] == 'NA':
        fpaths = [p for p in os.listdir(vers_path) if p.split("_")[0] == var]
        realm = check_realm(vers_path + "/" + fpaths[0])
        if realm:
            kw_instance['realm'] = realm
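    # a DRS version label presumably looks like 'vYYYYMMDD' (9 characters); anything
    # shorter is suspect, so fall back to the version_number attribute of the first file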
if len(kw_version['version']) < 9:
fpaths=[p for p in os.listdir(kw_version['path']) if p.split("_")[0]==var]
fversion=check_version(vers_path+"/"+fpaths[0])
if fversion:
kw_version['version']= fversion
else:
kw_version['version']= "NA"
    # add instance to the database if it does not exist yet
inst_obj,new = insert_unique(db, Instance, **kw_instance)
print("instance:",inst_obj.id,new)
# create dictionary of fields for new version
kw_version['instance_id'] = inst_obj.id
    # add version to the database if it does not exist yet
v_obj,new = insert_unique(db, Version, **kw_version)
print("version:",v_obj.id,new)
    # check whether file objects already exist; if not, add them from the files dictionary,
    # storing both tracking-ids and checksums; if a checksum is "None", calculate it (sha256 by default)
    for f in kw_files:
        if f['checksum'] == "None":
            f[ctype] = check_hash(v_obj.path + "/" + f['filename'], ctype)
            f.pop('checksum')
        else:
            f[ctype] = f.pop('checksum')
        if f['tracking_id'] == "":
            f['tracking_id'] = get_trackid(v_obj.path + "/" + f['filename'])
        f['version_id'] = v_obj.id
# add files to database with bulk insert
if v_obj.filenames()==[]:
add_bulk_items(db, VersionFile, kw_files)
# if some files exist already use insert_unique instead
else:
        for f in kw_files:
            insert_unique(db, VersionFile, **f)
| coecms/ARCCSSive | database_updates/update_db.py | Python | apache-2.0 | 4,854 | ["NetCDF"] | a65f34278ec49cca1262cb92f3244e30ad68b709051626c349543c0639dd005f |
# -*- coding: utf-8 -*-
'''
lucterios.contacts package
@author: Laurent GAY
@organization: sd-libre.fr
@contact: info@sd-libre.fr
@copyright: 2015 sd-libre.fr
@license: This file is part of Lucterios.
Lucterios is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
Lucterios is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with Lucterios. If not, see <http://www.gnu.org/licenses/>.
'''
from __future__ import unicode_literals
from shutil import rmtree
from datetime import date
from io import StringIO
from os.path import isfile
from base64 import b64decode
from lucterios.framework.test import LucteriosTest
from lucterios.framework.filetools import get_user_dir
from lucterios.CORE.models import Parameter, LucteriosUser, LucteriosGroup
from lucterios.CORE.parameters import Params
from lucterios.CORE.views import ObjectMerge
from lucterios.contacts.views_contacts import LegalEntityShow
from lucterios.contacts.models import LegalEntity, Responsability
from lucterios.contacts.views import ContactImport
from lucterios.contacts.tests_contacts import change_ourdetail
from lucterios.mailing.test_tools import configSMTP, TestReceiver, decode_b64
from diacamma.accounting.views import ThirdShow
from diacamma.accounting.models import FiscalYear
from diacamma.accounting.test_tools import fill_accounts_fr, create_account, add_entry
from diacamma.accounting.views_entries import EntryAccountList, EntryAccountClose, EntryAccountLink
from diacamma.invoice.views import BillList, BillTransition, BillFromQuotation, BillAddModify, BillShow, DetailAddModify
from diacamma.invoice.models import get_or_create_customer
from diacamma.invoice.test_tools import InvoiceTest
from diacamma.payoff.views import PayoffAddModify
from diacamma.payoff.test_tools import check_pdfreport
from diacamma.member.models import Season, Adherent
from diacamma.member.views import AdherentActiveList, AdherentAddModify, AdherentShow,\
SubscriptionAddModify, SubscriptionShow, LicenseAddModify, LicenseDel,\
AdherentDoc, AdherentLicense, AdherentLicenseSave, AdherentStatistic,\
AdherentRenewList, AdherentRenew, SubscriptionTransition, AdherentCommand,\
AdherentCommandDelete, AdherentCommandModify, AdherentFamilyAdd,\
AdherentFamilySelect, AdherentFamilyCreate, FamilyAdherentAdd,\
FamilyAdherentCreate, FamilyAdherentAdded, AdherentListing,\
AdherentContactList, AdherentConnection, SubscriptionDel, AdherentDisableConnection,\
AdherentPrint, PrestationList, PrestationDel, PrestationAddModify,\
PrestationShow, AdherentPrestationAdd, AdherentPrestationSave,\
AdherentPrestationDel, PrestationSwap, PrestationSplit
from diacamma.member.test_tools import default_season, default_financial, default_params,\
default_adherents, default_subscription, set_parameters, default_prestation, create_adherent
from diacamma.member.views_conf import TaxReceiptList, TaxReceiptCheck, TaxReceiptShow, TaxReceiptPrint, CategoryConf
class BaseAdherentTest(LucteriosTest):
def __init__(self, methodName):
LucteriosTest.__init__(self, methodName)
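        # the default test season runs 1 Sept 2009 -> 31 Aug 2010 (see the
        # seasondates assertion below), so map today's month/day onto 2009
        # (September-December) or 2010 (January-August)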
if date.today().month > 8:
self.dateref_expected = date(
2009, date.today().month, date.today().day)
else:
self.dateref_expected = date(
2010, date.today().month, date.today().day)
def setUp(self):
LucteriosTest.setUp(self)
rmtree(get_user_dir(), True)
default_financial()
default_season()
default_params()
def add_subscriptions(self, year=2009, season_id=10, status=2):
default_adherents()
default_subscription()
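        # one subscription per adherent, exercising all four subscription types:
        # 1=Annually, 2=Periodic, 3=Monthly, 4=Calendar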
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify',
{'SAVE': 'YES', 'adherent': 2, 'status': status, 'dateref': '%s-10-01' % year, 'subscriptiontype': 1, 'season': season_id, 'team': 2, 'activity': 1, 'value': '132'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify', {'SAVE': 'YES', 'adherent': 3, 'status': status, 'dateref': '%s-10-01' % year,
'subscriptiontype': 2, 'period': 37 + (year - 2009) * 4, 'season': season_id, 'team': 1, 'activity': 1, 'value': '645'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify', {'SAVE': 'YES', 'adherent': 4, 'status': status, 'dateref': '%s-10-01' % year,
'subscriptiontype': 3, 'month': '%s-10' % year, 'season': season_id, 'team': 3, 'activity': 1, 'value': '489'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify', {'SAVE': 'YES', 'adherent': 5, 'status': status, 'dateref': '%s-10-01' % year,
'subscriptiontype': 4, 'begin_date': '%s-09-15' % year, 'season': season_id, 'team': 3, 'activity': 2, 'value': '470'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify',
{'SAVE': 'YES', 'adherent': 6, 'status': status, 'dateref': '%s-10-01' % year, 'subscriptiontype': 1, 'season': season_id, 'team': 1, 'activity': 2, 'value': '159'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
def add_family(self):
myfamily = LegalEntity()
myfamily.name = "LES DALTONS"
myfamily.structure_type_id = 3
myfamily.address = "Place des cocotiers"
myfamily.postal_code = "97200"
myfamily.city = "FORT DE FRANCE"
myfamily.country = "MARTINIQUE"
myfamily.tel1 = "01-23-45-67-89"
myfamily.email = "dalton@worldcompany.com"
myfamily.save()
return myfamily.id
def prep_family(self):
default_adherents()
default_subscription(True)
family_id = self.add_family()
self.factory.xfer = AdherentFamilySelect()
self.calljson('/diacamma.member/adherentFamilySelect', {'adherent': 2, 'legal_entity': family_id}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentFamilySelect')
self.factory.xfer = AdherentFamilySelect()
self.calljson('/diacamma.member/adherentFamilySelect', {'adherent': 5, 'legal_entity': family_id}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentFamilySelect')
def prep_subscription_family(self):
self.prep_family()
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify', {'SAVE': 'YES', 'status': 1, 'adherent': 2, 'dateref': '2014-10-01', 'subscriptiontype': 1, 'season': 10, 'team': 2, 'activity': 1, 'value': 'abc123'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify', {'SAVE': 'YES', 'status': 1, 'adherent': 5, 'dateref': '2014-10-01', 'subscriptiontype': 1, 'season': 10, 'team': 2, 'activity': 1, 'value': 'abc123'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = BillTransition()
self.calljson('/diacamma.invoice/billTransition', {'bill': 1, 'TRANSITION': 'valid', 'CONFIRME': 'YES', 'withpayoff': False, 'sendemail': False}, False)
self.assert_observer('core.acknowledge', 'diacamma.invoice', 'billTransition')
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'status_filter': -2}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 1)
self.assert_json_equal('', 'bill/@0/status', 1)
self.assert_json_equal('', 'bill/@0/third', "LES DALTONS")
self.assert_json_equal('', 'bill/@0/bill_type', 0)
self.assert_json_equal('', 'bill/@0/total', 76.44 + 76.44)
self.assert_json_equal('', 'bill/@0/comment', "{[b]}cotisation{[/b]}")
class AdherentTest(BaseAdherentTest):
def setUp(self):
BaseAdherentTest.setUp(self)
Parameter.change_value('member-family-type', 0)
set_parameters(["team", "activite", "age", "licence", "genre", 'numero', 'birth'])
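        # touching ThirdShow.url_text presumably forces the accounting view's URL
        # to be registered before the member tests run (assumption, kept as-is)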
ThirdShow.url_text
def test_defaultlist(self):
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('', 2 + 6 + 2)
self.assert_attrib_equal('team', 'description', 'group')
self.assert_attrib_equal('activity', 'description', 'passion')
self.assert_select_equal('status', 3) # nb=3
self.assert_select_equal('age', 8, True)
self.assert_select_equal('team', 3, True)
self.assert_select_equal('activity', 2, True)
self.assert_json_equal('DATE', 'dateref', self.dateref_expected.isoformat())
self.assert_select_equal('genre', 3) # nb=3
self.assert_count_equal('#adherent/actions', 5)
self.assert_grid_equal('adherent', {'num': "N°", 'firstname': "prénom", 'lastname': "nom", 'tel1': "tel1", 'tel2': "tel2", 'email': "courriel", 'license': "participation"}, 0)
self.assert_json_equal('', '#adherent/size_by_page', 25)
self.assertEqual(len(self.json_actions), 3, self.json_actions)
Parameter.change_value("member-size-page", 100)
Parameter.change_value("member-fields", "firstname;lastname;email;documents")
Params.clear()
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_grid_equal('adherent', {'firstname': "prénom", 'lastname': "nom", 'email': "courriel", 'documents': "documents demandés"}, 0)
self.assert_json_equal('', '#adherent/size_by_page', 100)
def test_add_adherent(self):
self.factory.xfer = AdherentAddModify()
self.calljson('/diacamma.member/adherentAddModify', {}, False)
self.assert_observer(
'core.custom', 'diacamma.member', 'adherentAddModify')
self.assert_count_equal('', 1 + 14)
self.assertEqual(len(self.json_actions), 2)
self.factory.xfer = AdherentAddModify()
self.calljson('/diacamma.member/adherentAddModify', {"address": 'Avenue de la Paix{[newline]}BP 987',
"comment": 'no comment', "firstname": 'Marie', "lastname": 'DUPOND',
"city": 'ST PIERRE', "country": 'MARTINIQUE', "tel2": '06-54-87-19-34', "SAVE": 'YES',
"tel1": '09-96-75-15-00', "postal_code": '97250', "email": 'marie.dupond@worldcompany.com',
"birthday": "1998-08-04", "birthplace": "Fort-de-France",
"genre": "2"}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentAddModify')
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 2}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('', 2 + (18 + 1) + 2 + 2) # header + identity + subscription + grade
self.assert_json_equal('LABELFORM', 'dateref', self.dateref_expected.isoformat(), True)
self.assert_json_equal('LABELFORM', 'firstname', "Marie")
self.assert_json_equal('LABELFORM', 'lastname', "DUPOND")
self.assert_json_equal('LABELFORM', 'num', "1")
self.assert_json_equal('LABELFORM', 'birthday', "1998-08-04")
self.assert_json_equal('LABELFORM', 'birthplace', "Fort-de-France")
self.assert_json_equal('LABELFORM', 'age_category', "Benjamins")
        self.assert_count_equal('subscription', 0)  # no subscription yet
self.assert_count_equal('degrees', 0)
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow',
{'adherent': 2, 'dateref': '2014-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_json_equal('LABELFORM', 'birthday', "1998-08-04")
self.assert_json_equal('LABELFORM', 'age_category', "Cadets")
self.factory.xfer = AdherentAddModify()
self.calljson('/diacamma.member/adherentAddModify', {"address": 'Avenue de la Paix{[newline]}BP 987',
"comment": 'no comment', "firstname": 'Jean', "lastname": 'DUPOND',
"city": 'ST PIERRE', "country": 'MARTINIQUE', "tel2": '06-54-87-19-34', "SAVE": 'YES',
"tel1": '09-96-75-15-00', "postal_code": '97250', "email": 'jean.dupond@worldcompany.com',
"birthday": "2000-06-22", "birthplace": "Fort-de-France",
"genre": "1"}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentAddModify')
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 3}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_json_equal('LABELFORM', 'firstname', "Jean")
self.assert_json_equal('LABELFORM', 'lastname', "DUPOND")
self.assert_json_equal('LABELFORM', 'num', "2")
self.assert_json_equal('LABELFORM', 'birthday', "2000-06-22")
self.assert_json_equal('LABELFORM', 'age_category', "Poussins")
def test_add_subscription(self):
default_adherents()
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 2, 'dateref': '2014-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_json_equal('LABELFORM', 'firstname', "Avrel")
self.assert_json_equal('LABELFORM', 'lastname', "Dalton")
self.assert_grid_equal('subscription', {'status': "statut", 'season': "saison", 'subscriptiontype': "type de cotisation", 'begin_date': "date de début", 'end_date': "date de fin", 'involvement': "participation"}, 0)
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify',
{'adherent': 2, 'dateref': '2014-10-01'}, False)
self.assert_observer('core.exception', 'diacamma.member', 'subscriptionAddModify')
default_subscription()
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify',
{'adherent': 2, 'dateref': '2014-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'subscriptionAddModify')
self.assert_count_equal('', 9)
self.assert_select_equal('season', 20) # nb=20
self.assert_select_equal('subscriptiontype', {1: "Annually [76,44 €]", 2: "Periodic [76,44 €]", 3: "Monthly [76,44 €]", 4: "Calendar [76,44 €]"})
self.assert_attrib_equal('team', 'description', 'group')
self.assert_attrib_equal('activity', 'description', 'passion')
self.assert_select_equal('team', 3) # nb=3
self.assert_select_equal('activity', 2) # nb=2
def test_add_subscription_annually(self):
default_adherents()
default_subscription()
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify',
{'adherent': 2, 'dateref': '2014-10-01', 'subscriptiontype': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'subscriptionAddModify')
self.assert_count_equal('', 9)
self.assert_json_equal('SELECT', 'season', '10')
self.assert_select_equal('status', 2) # nb=2
self.assert_json_equal('SELECT', 'status', '1')
self.assert_json_equal('SELECT', 'subscriptiontype', '1')
self.assert_json_equal('LABELFORM', 'seasondates', "1 sept. 2009 => 31 août 2010")
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify',
{'SAVE': 'YES', 'adherent': 2, 'status': 2, 'dateref': '2014-10-01', 'subscriptiontype': 1, 'season': 10, 'team': 2, 'activity': 1, 'value': 'abc123'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow',
{'adherent': 2, 'dateref': '2014-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/season', "2009/2010")
self.assert_json_equal('', 'subscription/@0/status', 2)
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Annually")
self.assert_json_equal('', 'subscription/@0/begin_date', "2009-09-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2010-08-31")
self.assert_json_equal('', 'subscription/@0/involvement', ["team2 [activity1] abc123"])
def test_add_subscription_periodic(self):
default_adherents()
default_subscription()
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify',
{'adherent': 2, 'dateref': '2014-10-01', 'subscriptiontype': 2}, False)
self.assert_observer('core.custom', 'diacamma.member', 'subscriptionAddModify')
self.assert_count_equal('', 9)
self.assert_json_equal('SELECT', 'season', '10')
self.assert_json_equal('SELECT', 'subscriptiontype', '2')
self.assert_select_equal('period', 4) # nb=4
self.assert_json_equal('', '#period/case/@2/@0', '39')
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify',
{'SAVE': 'YES', 'adherent': 2, 'dateref': '2014-10-01', 'subscriptiontype': 2, 'season': 10, 'period': 39, 'team': 2, 'activity': 1, 'value': 'abc123'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow',
{'adherent': 2, 'dateref': '2014-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/season', "2009/2010")
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Periodic")
self.assert_json_equal('', 'subscription/@0/begin_date', "2010-03-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2010-05-31")
def test_add_subscription_monthly(self):
default_adherents()
default_subscription()
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify',
{'adherent': 2, 'dateref': '2014-10-01', 'subscriptiontype': 3}, False)
self.assert_observer('core.custom', 'diacamma.member', 'subscriptionAddModify')
self.assert_count_equal('', 9)
self.assert_json_equal('SELECT', 'season', '10')
self.assert_json_equal('SELECT', 'subscriptiontype', '3')
self.assert_select_equal('month', 12) # nb=12
self.assert_json_equal('', '#month/case/@3/@0', "2009-12")
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify',
{'SAVE': 'YES', 'adherent': 2, 'dateref': '2014-10-01', 'subscriptiontype': 3, 'season': 10, 'month': '2009-12', 'team': 2, 'activity': 1, 'value': 'abc123'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 2, 'dateref': '2014-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/season', "2009/2010")
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Monthly")
self.assert_json_equal('', 'subscription/@0/begin_date', "2009-12-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2009-12-31")
def test_add_subscription_calendar(self):
default_adherents()
default_subscription()
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify', {'adherent': 2, 'dateref': '2014-10-01', 'subscriptiontype': 4}, False)
self.assert_observer('core.custom', 'diacamma.member', 'subscriptionAddModify')
self.assert_count_equal('', 9)
self.assert_json_equal('SELECT', 'season', '10')
self.assert_json_equal('SELECT', 'subscriptiontype', '4')
self.assert_json_equal('DATE', 'begin_date', self.dateref_expected.isoformat())
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify', {'SAVE': 'YES', 'adherent': 2, 'dateref': '2014-10-01', 'subscriptiontype': 4, 'season': 10, 'begin_date': '2009-10-01', 'team': 2, 'activity': 1, 'value': 'abc123'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 2, 'dateref': '2014-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/season', "2009/2010")
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Calendar")
self.assert_json_equal('', 'subscription/@0/begin_date', "2009-10-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2010-09-30")
def test_adherent_print_pdf(self):
default_adherents()
default_subscription()
self.factory.xfer = AdherentPrint()
self.calljson('/diacamma.member/adherentPrint', {'adherent': 2, 'dateref': '2014-10-01', "PRINT_MODE": 3}, False)
self.assert_observer('core.print', 'diacamma.member', 'adherentPrint')
self.save_pdf()
def test_adherent_print_ods(self):
default_adherents()
default_subscription()
self.factory.xfer = AdherentPrint()
self.calljson('/diacamma.member/adherentPrint', {'adherent': 2, 'dateref': '2014-10-01', "PRINT_MODE": 2}, False)
self.assert_observer('core.print', 'diacamma.member', 'adherentPrint')
self.save_ods()
def test_show_subscription(self):
default_adherents()
default_subscription()
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 0)
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify',
{'SAVE': 'YES', 'status': 2, 'adherent': 2, 'dateref': '2014-10-01', 'subscriptiontype': 1, 'season': 10, 'team': 2, 'activity': 1, 'value': 'abc123'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = SubscriptionShow()
self.calljson('/diacamma.member/subscriptionShow',
{'adherent': 2, 'dateref': '2014-10-01', 'subscription': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'subscriptionShow')
self.assert_count_equal('', 8)
self.assert_json_equal('LABELFORM', 'status', 2)
self.assert_grid_equal('license', {'team': "group", 'activity': "passion", 'value': "N° licence"}, 1)
self.assert_json_equal('', 'license/@0/team', "team2")
self.assert_json_equal('', 'license/@0/activity', "activity1")
self.assert_json_equal('', 'license/@0/value', "abc123")
self.factory.xfer = LicenseAddModify()
self.calljson('/diacamma.member/licenseAddModify',
{'adherent': 2, 'dateref': '2014-10-01', 'subscription': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'licenseAddModify')
self.assert_count_equal('', 4)
self.assert_attrib_equal('team', 'description', 'group')
self.assert_attrib_equal('activity', 'description', 'passion')
self.assert_select_equal('team', 3) # nb=3
self.assert_select_equal('activity', 2) # nb=2
self.factory.xfer = LicenseAddModify()
self.calljson('/diacamma.member/licenseAddModify',
{'SAVE': 'YES', 'adherent': 2, 'dateref': '2014-10-01', 'subscription': 1, 'team': 1, 'activity': 2, 'value': '987xyz'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'licenseAddModify')
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2009-10-01'}, False)
self.assert_count_equal('adherent', 1)
self.assert_json_equal('', 'adherent/@0/license', ["team1 [activity2] 987xyz", "team2 [activity1] abc123"])
self.factory.xfer = AdherentLicense()
self.calljson('/diacamma.member/adherentLicense', {'adherent': 2, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentLicense')
self.assert_count_equal('', 4 + 4 * 2)
self.assert_json_equal('EDIT', 'value_1', 'abc123')
self.assert_json_equal('EDIT', 'value_2', '987xyz')
self.factory.xfer = AdherentLicenseSave()
self.calljson('/diacamma.member/adherentLicenseSave', {'adherent': 2, 'dateref': '2009-10-01', 'value_1': 'abcd1234', 'value_2': '9876wxyz'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentLicenseSave')
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2009-10-01'}, False)
self.assert_count_equal('adherent', 1)
self.assert_json_equal('', 'adherent/@0/license', ["team1 [activity2] 9876wxyz", "team2 [activity1] abcd1234"])
self.factory.xfer = SubscriptionShow()
self.calljson('/diacamma.member/subscriptionShow', {'adherent': 2, 'dateref': '2009-10-01', 'subscription': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'subscriptionShow')
self.assert_count_equal('license', 2)
self.factory.xfer = LicenseDel()
self.calljson('/diacamma.member/licenseDel', {'CONFIRME': 'YES', 'adherent': 2, 'dateref': '2014-10-01', 'subscription': 1, 'license': 1}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'licenseDel')
self.factory.xfer = SubscriptionShow()
self.calljson('/diacamma.member/subscriptionShow', {'adherent': 2, 'dateref': '2009-10-01', 'subscription': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'subscriptionShow')
self.assert_count_equal('license', 1)
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 1)
self.assert_json_equal('', 'bill/@0/third', "Dalton Avrel")
self.assert_json_equal('', 'bill/@0/bill_type', 1)
self.assert_json_equal('', 'bill/@0/total', 76.44)
def test_subscription_bydate(self):
self.add_subscriptions()
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('', 2 + 6 + 3)
self.assert_count_equal('adherent', 5)
self.assert_json_equal('', 'adherent/@0/id', "2")
self.assert_json_equal('', 'adherent/@1/id', "4")
self.assert_json_equal('', 'adherent/@2/id', "5")
self.assert_json_equal('', 'adherent/@3/id', "3")
self.assert_json_equal('', 'adherent/@4/id', "6")
self.assertEqual(self.json_context['TITLE'], "Adhérents cotisants - date de référence : 1 octobre 2009")
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2009-11-15'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 4)
self.assert_json_equal('', 'adherent/@0/id', "2")
self.assert_json_equal('', 'adherent/@1/id', "5")
self.assert_json_equal('', 'adherent/@2/id', "3")
self.assert_json_equal('', 'adherent/@3/id', "6")
self.assertEqual(self.json_context['TITLE'], "Adhérents cotisants - date de référence : 15 novembre 2009")
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2010-01-20'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 3)
self.assert_json_equal('', 'adherent/@0/id', "2")
self.assert_json_equal('', 'adherent/@1/id', "5")
self.assert_json_equal('', 'adherent/@2/id', "6")
self.assertEqual(self.json_context['TITLE'], "Adhérents cotisants - date de référence : 20 janvier 2010")
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2009-09-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 3)
self.assert_json_equal('', 'adherent/@0/id', "2")
self.assert_json_equal('', 'adherent/@1/id', "3")
self.assert_json_equal('', 'adherent/@2/id', "6")
self.assertEqual(self.json_context['TITLE'], "Adhérents cotisants - date de référence : 1 septembre 2009")
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2010-09-10'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 1)
self.assert_json_equal('', 'adherent/@0/id', "5")
self.assertEqual(self.json_context['TITLE'], "Adhérents cotisants - date de référence : 10 septembre 2010")
def test_subscription_byage(self):
self.add_subscriptions()
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 2, 'dateref': '2010-09-10'}, False)
self.assert_json_equal('LABELFORM', 'age_category', "Poussins")
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 3, 'dateref': '2010-09-10'}, False)
self.assert_json_equal('LABELFORM', 'age_category', "Benjamins")
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 4, 'dateref': '2010-09-10'}, False)
self.assert_json_equal('LABELFORM', 'age_category', "Juniors")
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 5, 'dateref': '2010-09-10'}, False)
self.assert_json_equal('LABELFORM', 'age_category', "Espoirs")
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 6}, False)
self.assert_json_equal('LABELFORM', 'age_category', "Seniors")
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2009-10-01', 'age': '1;2;3'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 2)
info = self.json_context['INFO'].split("{[br]}")
self.assertEqual(len(info), 4)
self.assertEqual(info[2], "{[b]}{[u]}Âge{[/u]}{[/b]} : Minimes, Benjamins, Poussins")
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2009-10-01', 'age': '4;5;6'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 2)
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2009-10-01', 'age': '7;8'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 1)
info = self.json_context['INFO'].split("{[br]}")
self.assertEqual(len(info), 4)
self.assertEqual(info[2], "{[b]}{[u]}Âge{[/u]}{[/b]} : Vétérans, Seniors")
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2009-10-01', 'age': '1;3;5;7'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 3)
def test_subscription_byteam(self):
self.add_subscriptions()
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2009-10-01', 'team': '1'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 2)
self.assertEqual(self.json_context['TITLE'], "Adhérents cotisants - team1 - date de référence : 1 octobre 2009")
info = self.json_context['INFO'].split("{[br]}")
self.assertEqual(len(info), 6)
self.assertEqual(info[2], "{[b]}{[u]}group{[/u]}{[/b]}")
self.assertEqual(info[3], "team N°1")
self.assertEqual(info[4], "The bests")
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2009-10-01', 'team': '2;3'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 3)
self.assertEqual(self.json_context['TITLE'], "Adhérents cotisants - date de référence : 1 octobre 2009")
info = self.json_context['INFO'].split("{[br]}")
self.assertEqual(len(info), 4)
self.assertEqual(info[2], "{[b]}{[u]}group{[/u]}{[/b]} : team2, team3")
def test_subscription_byactivity(self):
self.add_subscriptions()
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2009-10-01', 'activity': '1'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 3)
info = self.json_context['INFO'].split("{[br]}")
self.assertEqual(len(info), 4)
self.assertEqual(info[2], "{[b]}{[u]}passion{[/u]}{[/b]} : activity1")
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2009-10-01', 'activity': '2'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 2)
info = self.json_context['INFO'].split("{[br]}")
self.assertEqual(len(info), 4)
self.assertEqual(info[2], "{[b]}{[u]}passion{[/u]}{[/b]} : activity2")
def test_subscription_bygenre(self):
self.add_subscriptions()
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2009-10-01', 'genre': '2'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 0)
info = self.json_context['INFO'].split("{[br]}")
self.assertEqual(len(info), 4)
self.assertEqual(info[2], "{[b]}{[u]}genre{[/u]}{[/b]} : Femme")
def test_subscription_doc(self):
self.add_subscriptions()
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 2, 'dateref': '2009-10-01'}, False)
self.assert_count_equal('', 2 + (19 + 5) + 2 + 5 + 5 + 2) # header + identity/docs + subscription + financial + invoice + grade
self.assert_attrib_equal('doc_1', "description", "Doc 1")
self.assert_attrib_equal('doc_2', "description", "Doc 2")
self.assert_json_equal('CHECK', 'doc_1', "0")
self.assert_json_equal('CHECK', 'doc_2', "0")
self.factory.xfer = AdherentDoc()
self.calljson('/diacamma.member/adherentDoc', {'adherent': 2, 'dateref': '2009-10-01', 'doc_1': 1, 'doc_2': 0}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentDoc')
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 2, 'dateref': '2009-10-01'}, False)
self.assert_json_equal('CHECK', 'doc_1', "1")
self.assert_json_equal('CHECK', 'doc_2', "0")
self.factory.xfer = AdherentDoc()
self.calljson('/diacamma.member/adherentDoc', {'adherent': 2, 'dateref': '2009-10-01', 'doc_1': 0, 'doc_2': 1}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentDoc')
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 2, 'dateref': '2009-10-01'}, False)
self.assert_json_equal('CHECK', 'doc_1', "0")
self.assert_json_equal('CHECK', 'doc_2', "1")
def test_subscription_withoutparams(self):
self.add_subscriptions()
set_parameters([])
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2009-09-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('', 3 + 2 + 2)
self.assert_count_equal('adherent', 3)
self.assert_count_equal('#adherent/actions', 4)
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 2, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('', 2 + (15 + 5) + 2 + 5 + 5 + 2) # header + identity + subscription + financial + invoice + grade
        self.assert_count_equal('subscription', 1)  # nb=1
self.factory.xfer = AdherentAddModify()
self.calljson('/diacamma.member/adherentAddModify', {'adherent': 2, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentAddModify')
self.assert_count_equal('', 1 + 12)
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify', {'adherent': 2, 'dateref': '2014-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'subscriptionAddModify')
self.assert_count_equal('', 6)
self.factory.xfer = SubscriptionShow()
self.calljson('/diacamma.member/subscriptionShow', {'adherent': 2, 'dateref': '2014-10-01', 'subscription': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'subscriptionShow')
self.assert_count_equal('', 7)
def test_subscription_printlisting(self):
self.add_subscriptions()
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2009-10-01', 'age': '1;2;3', 'team': '2;3', 'activity': '2', 'genre': '2'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
new_context = dict(self.json_context)
new_context['PRINT_MODE'] = '4'
new_context['MODEL'] = 1
self.factory.xfer = AdherentListing()
self.calljson('/diacamma.member/adherentListing', new_context, False)
self.assert_observer('core.print', 'diacamma.member', 'adherentListing')
csv_value = b64decode(str(self.response_json['print']['content'])).decode("utf-8")
content_csv = csv_value.split('\n')
self.assertEqual(len(content_csv), 13, str(content_csv))
self.assertEqual(content_csv[1].strip(), '"Adhérents cotisants - date de référence : 1 octobre 2009"')
self.assertEqual(content_csv[4].strip(), '"statut : en création & validé,,passion : activity2,,group : team2,team3,,Âge : Minimes,Benjamins,Poussins,,genre : Femme"', str(content_csv))
self.assertEqual(content_csv[6].strip(), '"nom";"adresse";"ville";"tel";"courriel";', str(content_csv))
def test_statistic(self):
self.add_subscriptions()
self.factory.xfer = AdherentStatistic()
self.calljson('/diacamma.member/adherentStatistic', {'season': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentStatistic')
self.assert_count_equal('', 4)
self.factory.xfer = AdherentStatistic()
self.calljson('/diacamma.member/adherentStatistic', {'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentStatistic')
self.assertEqual(0, (len(self.json_data) - 3 - 6) % 5, "size of COMPONENTS/* = %d" % len(self.json_data))
self.assert_count_equal('town_1', 2)
self.assert_json_equal('', 'town_1/@1/ratio', '{[b]}2{[/b]}')
self.assert_count_equal('town_2', 2)
self.assert_json_equal('', 'town_2/@1/ratio', '{[b]}1{[/b]}')
self.assert_count_equal('seniority_1', 1)
self.assert_count_equal('team_1', 2)
self.assert_count_equal('activity_1', 2)
self.factory.xfer = AdherentStatistic()
self.calljson('/diacamma.member/adherentStatistic', {'dateref': '2009-10-01', 'only_valid': False}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentStatistic')
self.assertEqual(0, (len(self.json_data) - 3 - 6) % 5, "size of COMPONENTS/* = %d" % len(self.json_data))
self.assert_count_equal('town_1', 2)
self.assert_json_equal('', 'town_1/@1/ratio', '{[b]}2{[/b]}')
self.assert_count_equal('town_2', 2)
self.assert_json_equal('', 'town_2/@1/ratio', '{[b]}1{[/b]}')
self.assert_count_equal('seniority_1', 1)
self.assert_count_equal('team_1', 2)
self.assert_count_equal('activity_1', 2)
def test_renew(self):
self.add_subscriptions()
self.factory.xfer = AdherentRenewList()
self.calljson('/diacamma.member/adherentRenewList', {'dateref': '2010-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentRenewList')
self.assert_count_equal('adherent', 3)
self.assert_json_equal('', 'adherent/@0/id', "2")
self.assert_json_equal('', 'adherent/@1/id', "5")
self.assert_json_equal('', 'adherent/@2/id', "6")
self.factory.xfer = AdherentRenewList()
self.calljson('/diacamma.member/adherentRenewList', {'dateref': '2010-01-20'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentRenewList')
self.assert_count_equal('adherent', 2)
self.assert_json_equal('', 'adherent/@0/id', "4")
self.assert_json_equal('', 'adherent/@1/id', "3")
self.factory.xfer = AdherentRenew()
self.calljson('/diacamma.member/adherentRenew', {'dateref': '2010-10-01', 'CONFIRME': 'YES', 'adherent': '2;5;6'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentRenew')
self.factory.xfer = AdherentRenew()
self.calljson('/diacamma.member/adherentRenew', {'dateref': '2010-01-20', 'CONFIRME': 'YES', 'adherent': '3;4'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentRenew')
self.factory.xfer = AdherentRenewList()
self.calljson('/diacamma.member/adherentRenewList', {'dateref': '2010-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentRenewList')
self.assert_count_equal('adherent', 0)
self.factory.xfer = AdherentRenewList()
self.calljson('/diacamma.member/adherentRenewList', {'dateref': '2010-01-20'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentRenewList')
self.assert_count_equal('adherent', 0)
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 2}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 2)
self.assert_json_equal('', 'subscription/@0/season', "2010/2011")
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Annually")
self.assert_json_equal('', 'subscription/@0/begin_date', "2010-09-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2011-08-31")
self.assert_json_equal('', 'subscription/@0/involvement', ["team2 [activity1] 132"])
self.assert_json_equal('', 'subscription/@1/season', "2009/2010")
self.assert_json_equal('', 'subscription/@1/subscriptiontype', "Annually")
self.assert_json_equal('', 'subscription/@1/begin_date', "2009-09-01")
self.assert_json_equal('', 'subscription/@1/end_date', "2010-08-31")
self.assert_json_equal('', 'subscription/@1/involvement', ["team2 [activity1] 132"])
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 3}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 2)
self.assert_json_equal('', 'subscription/@0/season', "2009/2010")
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Periodic")
self.assert_json_equal('', 'subscription/@0/begin_date', "2009-12-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2010-02-28")
self.assert_json_equal('', 'subscription/@0/involvement', ["team1 [activity1] 645"])
self.assert_json_equal('', 'subscription/@1/season', "2009/2010")
self.assert_json_equal('', 'subscription/@1/subscriptiontype', "Periodic")
self.assert_json_equal('', 'subscription/@1/begin_date', "2009-09-01")
self.assert_json_equal('', 'subscription/@1/end_date', "2009-11-30")
self.assert_json_equal('', 'subscription/@1/involvement', ["team1 [activity1] 645"])
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 4}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 2)
self.assert_json_equal('', 'subscription/@0/season', "2009/2010")
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Monthly")
self.assert_json_equal('', 'subscription/@0/begin_date', "2010-01-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2010-01-31")
self.assert_json_equal('', 'subscription/@0/involvement', ["team3 [activity1] 489"])
self.assert_json_equal('', 'subscription/@1/season', "2009/2010")
self.assert_json_equal('', 'subscription/@1/subscriptiontype', "Monthly")
self.assert_json_equal('', 'subscription/@1/begin_date', "2009-10-01")
self.assert_json_equal('', 'subscription/@1/end_date', "2009-10-31")
self.assert_json_equal('', 'subscription/@1/involvement', ["team3 [activity1] 489"])
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 5}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 2)
self.assert_json_equal('', 'subscription/@0/season', "2010/2011")
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Calendar")
self.assert_json_equal('', 'subscription/@0/begin_date', "2010-10-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2011-09-30")
self.assert_json_equal('', 'subscription/@0/involvement', ["team3 [activity2] 470"])
self.assert_json_equal('', 'subscription/@1/season', "2009/2010")
self.assert_json_equal('', 'subscription/@1/subscriptiontype', "Calendar")
self.assert_json_equal('', 'subscription/@1/begin_date', "2009-09-15")
self.assert_json_equal('', 'subscription/@1/end_date', "2010-09-14")
self.assert_json_equal('', 'subscription/@1/involvement', ["team3 [activity2] 470"])
def test_import(self):
csv_content = """'nom','prenom','sexe','adresse','codePostal','ville','fixe','portable','mail','DateNaissance','LieuNaissance','Type','NumLicence','Equipe','Activite'
'USIF','Pierre','Homme','37 avenue de la plage','99673','TOUINTOUIN','0502851031','0439423854','pierre572@free.fr','12/09/1961','BIDON SUR MER','Annually','1000029-00099','team1','activity1'
'NOJAXU','Amandine','Femme','11 avenue du puisatier','99247','BELLEVUE','0022456300','0020055601','amandine723@hotmail.fr','27/02/1976','ZINZIN','Periodic#2','1000030-00099','team2','activity2'
'','',
'GOC','Marie','Femme','33 impasse du 11 novembre','99150','BIDON SUR MER','0632763718','0310231012','marie762@free.fr','16/05/1998','KIKIMDILUI','Monthly#5','1000031-00099','team3','activity1'
'UHADIK','Marie','Femme','1 impasse de l'Oisan','99410','VIENVITEVOIR','0699821944','0873988470','marie439@orange.fr','27/08/1981','TOUINTOUIN','Calendar#01/11/2009','1000032-00099','team1','activity2'
'FEPIZIBU','Benjamin','Homme','30 cours de la Chartreuse','99247','BELLEVUE','0262009068','0754416670','benjamin475@free.fr','25/03/1979','KIKIMDILUI','Annually','1000033-00099','team2','activity2'
"""
self.add_subscriptions()
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2010-01-15'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 3)
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'bill_type': 1}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 5)
self.factory.xfer = ContactImport()
self.calljson('/lucterios.contacts/contactImport', {'step': 1, 'modelname': 'member.Adherent', 'quotechar': "'",
'delimiter': ',', 'encoding': 'utf-8', 'dateformat': '%d/%m/%Y', 'csvcontent': StringIO(csv_content)}, False)
self.assert_observer('core.custom', 'lucterios.contacts', 'contactImport')
self.assert_count_equal('', 6 + 17)
self.assert_select_equal('fld_city', 15) # nb=15
self.assert_select_equal('fld_country', 16) # nb=16
self.assert_count_equal('CSV', 6)
self.assert_count_equal('#CSV/actions', 0)
self.assertEqual(len(self.json_actions), 3)
self.assert_action_equal('POST', self.json_actions[0], (str('Retour'), 'images/left.png', 'lucterios.contacts', 'contactImport', 0, 2, 1, {'step': '0'}))
self.assert_action_equal('POST', self.json_actions[1], (str('Ok'), 'images/ok.png', 'lucterios.contacts', 'contactImport', 0, 2, 1, {'step': '2'}))
self.assertEqual(len(self.json_context), 8)
self.factory.xfer = ContactImport()
self.calljson('/lucterios.contacts/contactImport', {'step': 2, 'modelname': 'member.Adherent', 'quotechar': "'", 'delimiter': ',',
'encoding': 'utf-8', 'dateformat': '%d/%m/%Y', 'csvcontent0': csv_content,
"fld_lastname": "nom", "fld_firstname": "prenom", "fld_address": "adresse",
"fld_postal_code": "codePostal", "fld_city": "ville", "fld_email": "mail",
"fld_birthday": "DateNaissance", "fld_birthplace": "LieuNaissance", 'fld_subscriptiontype': 'Type',
'fld_team': 'Equipe', 'fld_activity': 'Activite', 'fld_value': 'NumLicence', }, False)
self.assert_observer('core.custom', 'lucterios.contacts', 'contactImport')
self.assert_count_equal('', 4)
self.assert_count_equal('CSV', 6)
self.assert_count_equal('#CSV/actions', 0)
self.assertEqual(len(self.json_actions), 3)
self.assert_action_equal('POST', self.json_actions[1], (str('Ok'), 'images/ok.png', 'lucterios.contacts', 'contactImport', 0, 2, 1, {'step': '3'}))
self.factory.xfer = ContactImport()
self.calljson('/lucterios.contacts/contactImport', {'step': 3, 'modelname': 'member.Adherent', 'quotechar': "'", 'delimiter': ',',
'encoding': 'utf-8', 'dateformat': '%d/%m/%Y', 'csvcontent0': csv_content,
"fld_lastname": "nom", "fld_firstname": "prenom", "fld_address": "adresse",
"fld_postal_code": "codePostal", "fld_city": "ville", "fld_email": "mail",
"fld_birthday": "DateNaissance", "fld_birthplace": "LieuNaissance", 'fld_subscriptiontype': 'Type',
'fld_team': 'Equipe', 'fld_activity': 'Activite', 'fld_value': 'NumLicence', }, False)
self.assert_observer('core.custom', 'lucterios.contacts', 'contactImport')
self.assert_count_equal('', 3)
self.assert_json_equal('LABELFORM', 'result', "5 éléments ont été importés")
self.assert_json_equal('LABELFORM', 'import_error', [])
self.assertEqual(len(self.json_actions), 1)
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 7}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_json_equal('LABELFORM', 'lastname', "USIF")
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/season', "2009/2010")
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Annually")
self.assert_json_equal('', 'subscription/@0/status', 2)
self.assert_json_equal('', 'subscription/@0/begin_date', "2009-09-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2010-08-31")
self.assert_json_equal('', 'subscription/@0/involvement', ["team1 [activity1] 1000029-00099"])
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 8}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_json_equal('LABELFORM', 'lastname', "NOJAXU")
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/season', "2009/2010")
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Periodic")
self.assert_json_equal('', 'subscription/@0/status', 2)
self.assert_json_equal('', 'subscription/@0/begin_date', "2009-12-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2010-02-28")
self.assert_json_equal('', 'subscription/@0/involvement', ["team2 [activity2] 1000030-00099"])
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 9}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_json_equal('LABELFORM', 'lastname', "GOC")
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/season', "2009/2010")
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Monthly")
self.assert_json_equal('', 'subscription/@0/status', 2)
self.assert_json_equal('', 'subscription/@0/begin_date', "2010-01-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2010-01-31")
self.assert_json_equal('', 'subscription/@0/involvement', ["team3 [activity1] 1000031-00099"])
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 10}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_json_equal('LABELFORM', 'lastname', "UHADIK")
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/season', "2009/2010")
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Calendar")
self.assert_json_equal('', 'subscription/@0/status', 2)
self.assert_json_equal('', 'subscription/@0/begin_date', "2009-11-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2010-10-31")
self.assert_json_equal('', 'subscription/@0/involvement', ["team1 [activity2] 1000032-00099"])
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 11}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_json_equal('LABELFORM', 'lastname', "FEPIZIBU")
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/season', "2009/2010")
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Annually")
self.assert_json_equal('', 'subscription/@0/status', 2)
self.assert_json_equal('', 'subscription/@0/begin_date', "2009-09-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2010-08-31")
self.assert_json_equal('', 'subscription/@0/involvement', ["team2 [activity2] 1000033-00099"])
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2010-01-15'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 8)
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'bill_type': 1}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 10)
def test_bad_import(self):
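# Import a CSV containing an unknown subscription type, an unknown team, an unknown activity
# and a truncated row: the valid rows are imported and each problem is listed in 'import_error'.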
csv_content = """'nom','prenom','sexe','adresse','codePostal','ville','fixe','portable','mail','DateNaissance','LieuNaissance','Type','NumLicence','Equipe','Activite'
'USIF','Pierre','Homme','37 avenue de la plage','99673','TOUINTOUIN','0502851031','0439423854','pierre572@free.fr','12/09/1961','BIDON SUR MER','Annua','1000029-00099','team1','activity1'
'NOJAXU','Amandine','Femme','11 avenue du puisatier','99247','BELLEVUE','0022456300','0020055601','amandine723@hotmail.fr','27/02/1976','ZINZIN','Periodic#2','1000030-00099','team7','activity2'
'','',
'GOC','Marie','Femme','33 impasse du 11 novembre','99150','BIDON SUR MER','0632763718','0310231012','marie762@free.fr','16/05/1998','KIKIMDILUI','Monthly#5','1000031-00099','team3','activity8'
"""
self.add_subscriptions()
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2010-01-15'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 3)
self.factory.xfer = ContactImport()
self.calljson('/lucterios.contacts/contactImport', {'step': 3, 'modelname': 'member.Adherent', 'quotechar': "'", 'delimiter': ',',
'encoding': 'utf-8', 'dateformat': '%d/%m/%Y', 'csvcontent0': csv_content,
"fld_lastname": "nom", "fld_firstname": "prenom", "fld_address": "adresse",
"fld_postal_code": "codePostal", "fld_city": "ville", "fld_email": "mail",
"fld_birthday": "DateNaissance", "fld_birthplace": "LieuNaissance", 'fld_subscriptiontype': 'Type',
'fld_team': 'Equipe', 'fld_activity': 'Activite', 'fld_value': 'NumLicence', }, False)
self.assert_observer('core.custom', 'lucterios.contacts', 'contactImport')
self.assert_count_equal('', 3)
self.assert_json_equal('LABELFORM', 'result', "3 éléments ont été importés")
self.assert_json_equal('LABELFORM', 'import_error', ["Type de cotisation 'Annua' inconnue !", "group 'team7' inconnu(e) !", "passion 'activity8' inconnu(e) !"])
self.assertEqual(len(self.json_actions), 1)
def test_status_subscription(self):
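# A subscription created in 'building' state (status 1) generates a quotation (bill_type 0);
# validating the subscription switches it to status 2 and turns the quotation into a bill (bill_type 1).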
default_adherents()
default_subscription()
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 0)
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify',
{'SAVE': 'YES', 'status': 1, 'adherent': 2, 'dateref': '2014-10-01', 'subscriptiontype': 1, 'season': 10, 'team': 2, 'activity': 1, 'value': 'abc123'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = SubscriptionShow()
self.calljson('/diacamma.member/subscriptionShow',
{'adherent': 2, 'dateref': '2014-10-01', 'subscription': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'subscriptionShow')
self.assert_count_equal('', 8)
self.assert_json_equal('LABELFORM', 'status', 1)
self.assert_grid_equal('license', {'team': "group", 'activity': "passion", 'value': "N° licence"}, 1)
self.assert_json_equal('', 'license/@0/team', "team2")
self.assert_json_equal('', 'license/@0/activity', "activity1")
self.assert_json_equal('', 'license/@0/value', "abc123")
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 1)
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2009-10-01', 'status': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 1)
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2009-10-01', 'status': 2}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 0)
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 1)
self.assert_json_equal('', 'bill/@0/bill_type', 0)
self.assert_json_equal('', 'bill/@0/total', 76.44)
self.factory.xfer = SubscriptionTransition()
self.calljson('/diacamma.member/subscriptionTransition', {'CONFIRME': 'YES', 'subscription': 1, 'TRANSITION': 'validate'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionTransition')
self.factory.xfer = SubscriptionShow()
self.calljson('/diacamma.member/subscriptionShow',
{'adherent': 2, 'dateref': '2014-10-01', 'subscription': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'subscriptionShow')
self.assert_count_equal('', 8)
self.assert_json_equal('LABELFORM', 'status', 2)
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 1)
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2009-10-01', 'status': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 0)
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2009-10-01', 'status': 2}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 1)
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 1)
self.assert_json_equal('', 'bill/@0/bill_type', 1)
self.assert_json_equal('', 'bill/@0/total', 76.44)
def test_valid_bill_of_subscription(self):
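# Validating the quotation and converting it into a bill also validates the linked subscription.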
default_adherents()
default_subscription()
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 0)
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify',
{'SAVE': 'YES', 'status': 1, 'adherent': 2, 'dateref': '2014-10-01', 'subscriptiontype': 1, 'season': 10, 'team': 2, 'activity': 1, 'value': 'abc123'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = SubscriptionShow()
self.calljson('/diacamma.member/subscriptionShow',
{'adherent': 2, 'dateref': '2014-10-01', 'subscription': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'subscriptionShow')
self.assert_count_equal('', 8)
self.assert_json_equal('LABELFORM', 'status', 1)
self.assert_grid_equal('license', {'team': "group", 'activity': "passion", 'value': "N° licence"}, 1)
self.assert_json_equal('', 'license/@0/team', "team2")
self.assert_json_equal('', 'license/@0/activity', "activity1")
self.assert_json_equal('', 'license/@0/value', "abc123")
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 1)
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2009-10-01', 'status': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 1)
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2009-10-01', 'status': 2}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 0)
self.factory.xfer = BillAddModify()
self.calljson('/diacamma.invoice/billAddModify',
{'bill': 1, 'date': '2015-04-01', 'SAVE': 'YES'}, False)
self.assert_observer('core.acknowledge', 'diacamma.invoice', 'billAddModify')
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 1)
self.assert_json_equal('', 'bill/@0/status', 0)
self.assert_json_equal('', 'bill/@0/bill_type', 0)
self.assert_json_equal('', 'bill/@0/total', 76.44)
self.factory.xfer = BillTransition()
self.calljson('/diacamma.invoice/billTransition',
{'CONFIRME': 'YES', 'bill': 1, 'withpayoff': False, 'TRANSITION': 'valid'}, False)
self.assert_observer('core.acknowledge', 'diacamma.invoice', 'billTransition')
self.factory.xfer = BillFromQuotation()
self.calljson('/diacamma.invoice/billFromQuotation',
{'CONFIRME': 'YES', 'bill': 1}, False)
self.assert_observer('core.acknowledge', 'diacamma.invoice', 'billFromQuotation')
self.assertEqual(self.response_json['action']['id'], "diacamma.invoice/billShow")
self.assertEqual(len(self.response_json['action']['params']), 1)
self.assertEqual(self.response_json['action']['params']['bill'], 2)
self.factory.xfer = SubscriptionShow()
self.calljson('/diacamma.member/subscriptionShow',
{'adherent': 2, 'dateref': '2014-10-01', 'subscription': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'subscriptionShow')
self.assert_count_equal('', 8)
self.assert_json_equal('LABELFORM', 'status', 2)
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 1)
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2009-10-01', 'status': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 0)
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2009-10-01', 'status': 2}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 1)
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 1)
self.assert_json_equal('', 'bill/@0/bill_type', 1)
self.assert_json_equal('', 'bill/@0/total', 76.44)
def test_command(self):
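# Build a renewal command for several adherents, edit the command file (delete and modify
# entries), then validate it with email notification.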
Season.objects.get(id=16).set_has_actif()
self.add_subscriptions(year=2014, season_id=15)
self.factory.xfer = AdherentRenewList()
self.calljson('/diacamma.member/adherentRenewList', {'dateref': '2015-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentRenewList')
self.assert_count_equal('adherent', 3)
self.assert_json_equal('', 'adherent/@0/id', "2")
self.assert_json_equal('', 'adherent/@1/id', "5")
self.assert_json_equal('', 'adherent/@2/id', "6")
self.factory.xfer = AdherentCommand()
self.calljson('/diacamma.member/adherentCommand', {'dateref': '2015-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentCommand')
self.assert_count_equal('AdhCmd', 0)
self.factory.xfer = AdherentCommand()
self.calljson('/diacamma.member/adherentCommand', {'dateref': '2015-10-01', 'adherent': '2;5;6'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentCommand')
self.assert_count_equal('AdhCmd', 3)
self.assert_json_equal('', 'AdhCmd/@0/adherent', "Dalton Avrel")
self.assert_json_equal('', 'AdhCmd/@0/type', "Annually [76,44 €]")
self.assert_json_equal('', 'AdhCmd/@0/team', "team2")
self.assert_json_equal('', 'AdhCmd/@0/activity', "activity1")
self.assert_json_equal('', 'AdhCmd/@0/reduce', 0.00)
self.assert_json_equal('', 'AdhCmd/@1/adherent', "Dalton Joe")
self.assert_json_equal('', 'AdhCmd/@1/type', "Calendar [76,44 €]")
self.assert_json_equal('', 'AdhCmd/@1/team', "team3")
self.assert_json_equal('', 'AdhCmd/@1/activity', "activity2")
self.assert_json_equal('', 'AdhCmd/@1/reduce', 0.00)
self.assert_json_equal('', 'AdhCmd/@2/adherent', "Luke Lucky")
self.assert_json_equal('', 'AdhCmd/@2/type', "Annually [76,44 €]")
self.assert_json_equal('', 'AdhCmd/@2/team', "team1")
self.assert_json_equal('', 'AdhCmd/@2/activity', "activity2")
self.assert_json_equal('', 'AdhCmd/@2/reduce', 0.00)
cmd_file = self.json_context["CMD_FILE"]
self.assertEqual(cmd_file[-23:], '/tmp/list-anonymous.cmd')
self.assertTrue(isfile(cmd_file))
self.factory.xfer = AdherentCommand()
self.calljson('/diacamma.member/adherentCommand', {'dateref': '2015-10-01', 'CMD_FILE': cmd_file}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentCommand')
self.assert_count_equal('AdhCmd', 3)
self.factory.xfer = AdherentCommandDelete()
self.calljson('/diacamma.member/adherentCommandDelete', {'dateref': '2010-10-01', 'CMD_FILE': cmd_file, 'AdhCmd': '2'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentCommandDelete')
self.factory.xfer = AdherentCommand()
self.calljson('/diacamma.member/adherentCommand', {'dateref': '2015-10-01', 'CMD_FILE': cmd_file}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentCommand')
self.assert_count_equal('AdhCmd', 2)
self.factory.xfer = AdherentCommandModify()
self.calljson('/diacamma.member/adherentCommandModify', {'dateref': '2015-10-01', 'CMD_FILE': cmd_file, 'AdhCmd': '5'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentCommandModify')
self.assert_count_equal('', 9)
self.assert_json_equal('LABELFORM', 'adherent', 'Dalton Joe')
self.factory.xfer = AdherentCommandModify()
self.calljson('/diacamma.member/adherentCommandModify', {'dateref': '2015-10-01', 'SAVE': 'YES', 'CMD_FILE': cmd_file,
'AdhCmd': '5', 'type': '3', 'team': '2', 'activity': '1', 'reduce': '7.5'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentCommandModify')
self.factory.xfer = AdherentCommand()
self.calljson('/diacamma.member/adherentCommand', {'dateref': '2015-10-01', 'CMD_FILE': cmd_file}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentCommand')
self.assert_count_equal('AdhCmd', 2)
self.assert_json_equal('', 'AdhCmd/@0/adherent', "Dalton Joe")
self.assert_json_equal('', 'AdhCmd/@0/type', "Monthly [76,44 €]")
self.assert_json_equal('', 'AdhCmd/@0/team', "team2")
self.assert_json_equal('', 'AdhCmd/@0/activity', "activity1")
self.assert_json_equal('', 'AdhCmd/@0/reduce', 7.50)
self.assert_json_equal('', 'AdhCmd/@1/adherent', "Luke Lucky")
self.assert_json_equal('', 'AdhCmd/@1/type', "Annually [76,44 €]")
self.assert_json_equal('', 'AdhCmd/@1/team', "team1")
self.assert_json_equal('', 'AdhCmd/@1/activity', "activity2")
self.assert_json_equal('', 'AdhCmd/@1/reduce', 0.00)
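# Validate the command with email sending and check the messages received by the SMTP stub.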
configSMTP('localhost', 3025)
change_ourdetail()
server = TestReceiver()
server.start(3025)
try:
self.assertEqual(0, server.count())
self.factory.xfer = AdherentCommand()
self.calljson('/diacamma.member/adherentCommand', {'dateref': '2015-10-01', 'SAVE': 'YES', 'CMD_FILE': cmd_file, 'send_email': True}, False)
self.assert_observer('core.dialogbox', 'diacamma.member', 'adherentCommand')
self.assertEqual(2, server.count())
self.assertEqual('mr-sylvestre@worldcompany.com', server.get(0)[1])
self.assertEqual(['Joe.Dalton@worldcompany.com', 'mr-sylvestre@worldcompany.com'], server.get(0)[2])
self.assertEqual('mr-sylvestre@worldcompany.com', server.get(1)[1])
self.assertEqual(['Lucky.Luke@worldcompany.com', 'mr-sylvestre@worldcompany.com'], server.get(1)[2])
msg, msg_txt, msg_file = server.check_first_message('Nouvelle cotisation', 3, {'To': 'Joe.Dalton@worldcompany.com'})
self.assertEqual('text/plain', msg_txt.get_content_type())
self.assertEqual('text/html', msg.get_content_type())
self.assertEqual('base64', msg.get('Content-Transfer-Encoding', ''))
message = decode_b64(msg.get_payload())
self.assertTrue('Bienvenu' in message, message)
self.assertTrue('devis_A-1_Dalton Joe.pdf' in msg_file.get('Content-Type', ''), msg_file.get('Content-Type', ''))
self.save_pdf(base64_content=msg_file.get_payload())
finally:
server.stop()
def test_subscription_with_prestation(self):
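# A 'building' subscription with prestations includes them in the quotation;
# validating it creates the corresponding licences.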
default_adherents()
default_subscription()
default_prestation()
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 0)
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 2, 'dateref': '2014-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_json_equal('LABELFORM', 'firstname', "Avrel")
self.assert_json_equal('LABELFORM', 'lastname', "Dalton")
self.assert_grid_equal('subscription', {'status': "statut", 'season': "saison", 'subscriptiontype': "type de cotisation", 'begin_date': "date de début", 'end_date': "date de fin", 'involvement': "participation"}, 0)
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify',
{'adherent': 2, 'dateref': '2014-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'subscriptionAddModify')
self.assert_count_equal('', 7)
self.assert_select_equal('season', 20) # nb=20
self.assert_select_equal('subscriptiontype', {1: "Annually [76,44 €]", 2: "Periodic [76,44 €]", 3: "Monthly [76,44 €]", 4: "Calendar [76,44 €]"})
self.assert_json_equal('CHECKLIST', 'prestations', [])
self.assert_count_equal('#prestations/case', 3)
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify', {'SAVE': 'YES', 'adherent': 2, 'status': 1, 'dateref': '2014-10-01',
'subscriptiontype': 1, 'season': 10, 'prestations': '1;3'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow',
{'adherent': 2, 'dateref': '2014-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/season', "2009/2010")
self.assert_json_equal('', 'subscription/@0/status', 1)
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Annually")
self.assert_json_equal('', 'subscription/@0/begin_date', "2009-09-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2010-08-31")
self.assert_json_equal('', 'subscription/@0/involvement', ["team1 [activity1]", "team3 [activity2]"])
self.factory.xfer = SubscriptionShow()
self.calljson('/diacamma.member/subscriptionShow', {'adherent': 2, 'dateref': '2014-10-01', 'subscription': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'subscriptionShow')
self.assert_count_equal('', 8)
self.assert_json_equal('LABELFORM', 'status', 1)
self.assert_json_equal('LABELFORM', 'prestations', ['team1 [activity1]', 'team3 [activity2]'])
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 1)
self.assert_json_equal('', 'bill/@0/status', 0)
self.assert_json_equal('', 'bill/@0/bill_type', 0)
self.assert_json_equal('', 'bill/@0/total', 413.75)
self.factory.xfer = SubscriptionTransition()
self.calljson('/diacamma.member/subscriptionTransition', {'CONFIRME': 'YES', 'subscription': 1, 'TRANSITION': 'validate'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionTransition')
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 1)
self.assert_json_equal('', 'bill/@0/status', 0)
self.assert_json_equal('', 'bill/@0/bill_type', 1)
self.assert_json_equal('', 'bill/@0/total', 413.75)
self.factory.xfer = SubscriptionShow()
self.calljson('/diacamma.member/subscriptionShow',
{'adherent': 2, 'dateref': '2014-10-01', 'subscription': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'subscriptionShow')
self.assert_count_equal('', 8)
self.assert_json_equal('LABELFORM', 'status', 2)
self.assert_grid_equal('license', {'team': "group", 'activity': "passion", 'value': "N° licence"}, 2)
self.assert_json_equal('', 'license/@0/team', "team1")
self.assert_json_equal('', 'license/@0/activity', "activity1")
self.assert_json_equal('', 'license/@0/value', None)
self.assert_json_equal('', 'license/@1/team', "team3")
self.assert_json_equal('', 'license/@1/activity', "activity2")
self.assert_json_equal('', 'license/@1/value', None)
def test_subscription_with_prestation_direct(self):
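# A subscription created directly in 'validated' state bills its prestation immediately.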
default_adherents()
default_subscription()
default_prestation()
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 0)
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify', {'SAVE': 'YES', 'adherent': 2, 'status': 2, 'dateref': '2014-10-01',
'subscriptiontype': 1, 'season': 10, 'prestations': '2'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 1)
self.assert_json_equal('', 'bill/@0/status', 0)
self.assert_json_equal('', 'bill/@0/bill_type', 1)
self.assert_json_equal('', 'bill/@0/total', 133.22)
self.factory.xfer = SubscriptionShow()
self.calljson('/diacamma.member/subscriptionShow',
{'adherent': 2, 'dateref': '2014-10-01', 'subscription': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'subscriptionShow')
self.assert_count_equal('', 8)
self.assert_json_equal('LABELFORM', 'status', 2)
self.assert_grid_equal('license', {'team': "group", 'activity': "passion", 'value': "N° licence"}, 1)
self.assert_json_equal('', 'license/@0/team', "team2")
self.assert_json_equal('', 'license/@0/activity', "activity2")
self.assert_json_equal('', 'license/@0/value', None)
def test_renew_with_prestation(self):
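# Renewing a subscription turns the previous licences into prestations on the new
# subscription and bills them together with it.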
default_adherents()
default_subscription()
default_prestation()
# activate season n°10 (2009/2010) and fiscal year 2010
Season.objects.get(id=10).set_has_actif()
new_year = FiscalYear.objects.create(begin='2010-01-01', end='2010-12-31', status=0)
new_year.set_has_actif()
fill_accounts_fr(new_year)
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify',
{'SAVE': 'YES', 'adherent': 2, 'status': 2, 'dateref': '2009-10-01', 'subscriptiontype': 1, 'season': 10}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = LicenseAddModify()
self.calljson('/diacamma.member/licenseAddModify',
{'SAVE': 'YES', 'adherent': 2, 'dateref': '2009-10-01', 'subscription': 1, 'team': 2, 'activity': 2}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'licenseAddModify')
self.factory.xfer = LicenseAddModify()
self.calljson('/diacamma.member/licenseAddModify',
{'SAVE': 'YES', 'adherent': 2, 'dateref': '2009-10-01', 'subscription': 1, 'team': 1, 'activity': 1}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'licenseAddModify')
self.factory.xfer = BillAddModify()
self.calljson('/diacamma.invoice/billAddModify', {'bill': 1, 'date': '2010-04-01', 'SAVE': 'YES'}, False)
self.assert_observer('core.acknowledge', 'diacamma.invoice', 'billAddModify')
self.factory.xfer = BillShow()
self.calljson('/diacamma.invoice/billShow', {'bill': 1}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billShow')
self.assert_json_equal('LABELFORM', 'info', [])
self.factory.xfer = BillTransition()
self.calljson('/diacamma.invoice/billTransition', {'bill': 1, 'TRANSITION': 'valid', 'CONFIRME': 'YES', 'withpayoff': False, 'sendemail': False}, False)
self.assert_observer('core.acknowledge', 'diacamma.invoice', 'billTransition')
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 1)
self.assert_json_equal('', 'bill/@0/status', 1)
self.assert_json_equal('', 'bill/@0/bill_type', 1)
self.assert_json_equal('', 'bill/@0/total', 76.44)
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow',
{'adherent': 2, 'dateref': '2010-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/season', "2009/2010")
self.assert_json_equal('', 'subscription/@0/status', 2)
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Annually")
self.assert_json_equal('', 'subscription/@0/begin_date', "2009-09-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2010-08-31")
self.assert_json_equal('', 'subscription/@0/involvement', ["team1 [activity1]", "team2 [activity2]"])
# activate season n°11 (2010/2011) and fiscal year 2011
Season.objects.get(id=11).set_has_actif()
new_year = FiscalYear.objects.create(begin='2011-01-01', end='2011-12-31', status=0)
new_year.set_has_actif()
fill_accounts_fr(new_year)
self.factory.xfer = AdherentRenewList()
self.calljson('/diacamma.member/adherentRenewList', {'dateref': '2010-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentRenewList')
self.assert_count_equal('adherent', 1)
self.assert_json_equal('', 'adherent/@0/id', "2")
self.factory.xfer = AdherentRenew()
self.calljson('/diacamma.member/adherentRenew', {'dateref': '2010-10-01', 'CONFIRME': 'YES', 'adherent': '2'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentRenew')
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow',
{'adherent': 2, 'dateref': '2010-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 2)
self.assert_json_equal('', 'subscription/@0/season', "2010/2011")
self.assert_json_equal('', 'subscription/@0/status', 2)
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Annually")
self.assert_json_equal('', 'subscription/@0/begin_date', "2010-09-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2011-08-31")
self.assert_json_equal('', 'subscription/@0/involvement', ["team1 [activity1]", "team2 [activity2]"])
self.assert_json_equal('', 'subscription/@1/season', "2009/2010")
self.assert_json_equal('', 'subscription/@1/status', 2)
self.assert_json_equal('', 'subscription/@1/subscriptiontype', "Annually")
self.assert_json_equal('', 'subscription/@1/begin_date', "2009-09-01")
self.assert_json_equal('', 'subscription/@1/end_date', "2010-08-31")
self.assert_json_equal('', 'subscription/@1/involvement', ["team1 [activity1]", "team2 [activity2]"])
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 2)
self.assert_json_equal('', 'bill/@0/status', 0)
self.assert_json_equal('', 'bill/@0/bill_type', 1)
self.assert_json_equal('', 'bill/@0/total', 458.19)
self.assert_json_equal('', 'bill/@1/status', 1)
self.assert_json_equal('', 'bill/@1/bill_type', 1)
self.assert_json_equal('', 'bill/@1/total', 76.44)
def test_command_with_prestation(self):
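# The renewal command picks up the previous licences as prestations, which can be
# edited in the command file before billing.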
default_adherents()
default_subscription()
default_prestation()
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify',
{'SAVE': 'YES', 'adherent': 2, 'status': 2, 'dateref': '2009-10-01', 'subscriptiontype': 1, 'season': 10}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = LicenseAddModify()
self.calljson('/diacamma.member/licenseAddModify',
{'SAVE': 'YES', 'adherent': 2, 'dateref': '2009-10-01', 'subscription': 1, 'team': 2, 'activity': 2}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'licenseAddModify')
self.factory.xfer = LicenseAddModify()
self.calljson('/diacamma.member/licenseAddModify',
{'SAVE': 'YES', 'adherent': 2, 'dateref': '2009-10-01', 'subscription': 1, 'team': 1, 'activity': 1}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'licenseAddModify')
self.factory.xfer = AdherentRenewList()
self.calljson('/diacamma.member/adherentRenewList', {'dateref': '2010-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentRenewList')
self.assert_count_equal('adherent', 1)
self.assert_json_equal('', 'adherent/@0/id', "2")
self.factory.xfer = AdherentCommand()
self.calljson('/diacamma.member/adherentCommand', {'dateref': '2015-10-01', 'adherent': '2'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentCommand')
self.assert_count_equal('AdhCmd', 1)
self.assert_json_equal('', 'AdhCmd/@0/adherent', "Dalton Avrel")
self.assert_json_equal('', 'AdhCmd/@0/type', "Annually [76,44 €]")
self.assert_json_equal('', 'AdhCmd/@0/prestations', "team1 [activity1] 324,97 €{[br/]}team2 [activity2] 56,78 €")
cmd_file = self.json_context["CMD_FILE"]
self.assertEqual(cmd_file[-23:], '/tmp/list-anonymous.cmd')
self.assertTrue(isfile(cmd_file))
self.factory.xfer = AdherentCommandModify()
self.calljson('/diacamma.member/adherentCommandModify', {'dateref': '2015-10-01', 'CMD_FILE': cmd_file, 'AdhCmd': '2'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentCommandModify')
self.assert_count_equal('', 6)
self.assert_json_equal('LABELFORM', 'adherent', 'Dalton Avrel')
self.assert_json_equal('CHECKLIST', 'prestations', ['2', '3'])
self.assert_count_equal('#prestations/case', 3)
self.factory.xfer = AdherentCommandModify()
self.calljson('/diacamma.member/adherentCommandModify', {'dateref': '2015-10-01', 'SAVE': 'YES', 'CMD_FILE': cmd_file,
'AdhCmd': '2', 'type': '1', 'prestations': '1'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentCommandModify')
self.factory.xfer = AdherentCommand()
self.calljson('/diacamma.member/adherentCommand', {'dateref': '2015-10-01', 'adherent': '2', 'CMD_FILE': cmd_file}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentCommand')
self.assert_count_equal('AdhCmd', 1)
self.assert_json_equal('', 'AdhCmd/@0/adherent', "Dalton Avrel")
self.assert_json_equal('', 'AdhCmd/@0/type', "Annually [76,44 €]")
self.assert_json_equal('', 'AdhCmd/@0/prestations', "team3 [activity2] 12,34 €")
self.factory.xfer = AdherentCommand()
self.calljson('/diacamma.member/adherentCommand', {'dateref': '2015-10-01', 'SAVE': 'YES', 'CMD_FILE': cmd_file, 'send_email': False}, False)
self.assert_observer('core.dialogbox', 'diacamma.member', 'adherentCommand')
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 2)
self.assert_json_equal('', 'bill/@0/status', 0)
self.assert_json_equal('', 'bill/@0/bill_type', 0)
self.assert_json_equal('', 'bill/@0/total', 88.78)
self.assert_json_equal('', 'bill/@1/status', 0)
self.assert_json_equal('', 'bill/@1/bill_type', 1)
self.assert_json_equal('', 'bill/@1/total', 76.44)
def test_import_with_prestation(self):
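# Import adherents with a 'Cours' column mapped to prestations (',' or ';' separated)
# and check the generated quotations.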
csv_content = """'nom','prenom','sexe','adresse','codePostal','ville','fixe','portable','mail','DateNaissance','LieuNaissance','Type','Cours'
'Dalton','Avrel','Homme','rue de la liberté','99673','TOUINTOUIN','0502851031','0439423854','avrel.dalton@worldcompany.com','10/02/2000','BIDON SUR MER','Annually','Presta 1'
'Dalton','Joe','Homme','rue de la liberté','99673','TOUINTOUIN','0502851031','0439423854','joe.dalton@worldcompany.com','18/05/1989','BIDON SUR MER','Annually','Presta 2,Presta 3'
'Luke','Lucky','Homme','rue de la liberté','99673','TOUINTOUIN','0502851031','0439423854','lucky.luke@worldcompany.com','04/06/1979','BIDON SUR MER','Annually','Presta 1;Presta 3'
'GOC','Marie','Femme','33 impasse du 11 novembre','99150','BIDON SUR MER','0632763718','0310231012','marie762@free.fr','16/05/1998','KIKIMDILUI','Annually','Presta 1,Presta 2;Presta 3'
"""
# Avrel team3 [activity2]
# Joe team1 [activity1] team2 [activity2]
# Lucky team1 [activity1] team3 [activity2]
# Marie team1 [activity1] team2 [activity2]
default_adherents()
default_subscription()
default_prestation()
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify',
{'SAVE': 'YES', 'adherent': 2, 'status': 1, 'dateref': '2009-10-01', 'subscriptiontype': 1, 'season': 10, 'prestations': '1;3'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2010-01-15'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 1)
self.assert_json_equal('', 'adherent/@0/id', "2")
self.assert_json_equal('', 'adherent/@0/firstname', "Avrel")
self.assert_json_equal('', 'adherent/@0/license', ["team1 [activity1]", "team3 [activity2]"])
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'bill_type': 0}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 1)
self.assert_json_equal('', 'bill/@0/total', 413.75)
self.factory.xfer = ContactImport()
self.calljson('/lucterios.contacts/contactImport', {'step': 1, 'modelname': 'member.Adherent', 'quotechar': "'",
'delimiter': ',', 'encoding': 'utf-8', 'dateformat': '%d/%m/%Y', 'csvcontent': StringIO(csv_content)}, False)
self.assert_observer('core.custom', 'lucterios.contacts', 'contactImport')
self.assert_count_equal('', 6 + 18)
self.assert_select_equal('fld_prestations', 14) # nb=14
self.assert_count_equal('CSV', 4)
self.factory.xfer = ContactImport()
self.calljson('/lucterios.contacts/contactImport', {'step': 3, 'modelname': 'member.Adherent', 'quotechar': "'", 'delimiter': ',',
'encoding': 'utf-8', 'dateformat': '%d/%m/%Y', 'csvcontent0': csv_content,
"fld_lastname": "nom", "fld_firstname": "prenom", "fld_address": "adresse",
"fld_postal_code": "codePostal", "fld_city": "ville", "fld_email": "mail",
"fld_birthday": "DateNaissance", "fld_birthplace": "LieuNaissance", 'fld_subscriptiontype': 'Type',
'fld_prestations': 'Cours', }, False)
self.assert_observer('core.custom', 'lucterios.contacts', 'contactImport')
self.assert_count_equal('', 3)
self.assert_json_equal('LABELFORM', 'result', "4 éléments ont été importés")
self.assert_json_equal('LABELFORM', 'import_error', [])
self.assertEqual(len(self.json_actions), 1)
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2010-01-15'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 4)
self.assert_json_equal('', 'adherent/@0/id', "2")
self.assert_json_equal('', 'adherent/@0/firstname', "Avrel")
self.assert_json_equal('', 'adherent/@0/license', ["team3 [activity2]"])
self.assert_json_equal('', 'adherent/@1/id', "5")
self.assert_json_equal('', 'adherent/@1/firstname', "Joe")
self.assert_json_equal('', 'adherent/@1/license', ["team1 [activity1]", "team2 [activity2]"])
self.assert_json_equal('', 'adherent/@2/id', "7")
self.assert_json_equal('', 'adherent/@2/firstname', "Marie")
self.assert_json_equal('', 'adherent/@2/license', ["team1 [activity1]", "team2 [activity2]", "team3 [activity2]"])
self.assert_json_equal('', 'adherent/@3/id', "6")
self.assert_json_equal('', 'adherent/@3/firstname', "Lucky")
self.assert_json_equal('', 'adherent/@3/license', ["team1 [activity1]", "team3 [activity2]"])
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'bill_type': 0}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 4)
self.assert_json_equal('', 'bill/@0/third', "Dalton Avrel")
self.assert_json_equal('', 'bill/@0/total', 88.78) # Subscription: art1:12.34 + art5:64.10 / Prestations: art1:12.34
self.assert_json_equal('', 'bill/@1/third', "Dalton Joe")
self.assert_json_equal('', 'bill/@1/total', 458.19) # Subscription: art1:12.34 + art5:64.10 / Prestations: art2:56.78 + art3:324.97
self.assert_json_equal('', 'bill/@2/third', "Luke Lucky")
self.assert_json_equal('', 'bill/@2/total', 413.75) # Subscription: art1:12.34 + art5:64.10 / Prestations: art1:12.34 + art3:324.97
self.assert_json_equal('', 'bill/@3/third', "GOC Marie")
self.assert_json_equal('', 'bill/@3/total', 470.53) # Subscription: art1:12.34 + art5:64.10 / Prestations: art1:12.34 + art2:56.78 + art3:324.97
def test_bad_import_with_prestation(self):
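# An unknown prestation name in the CSV is reported in 'import_error'.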
csv_content = """'nom','prenom','sexe','adresse','codePostal','ville','fixe','portable','mail','DateNaissance','LieuNaissance','Type','Cours'
'Dalton','Avrel','Homme','rue de la liberté','99673','TOUINTOUIN','0502851031','0439423854','avrel.dalton@worldcompany.com','10/02/2000','BIDON SUR MER','Annually','Presta 6'
"""
default_adherents()
default_subscription()
default_prestation()
self.factory.xfer = ContactImport()
self.calljson('/lucterios.contacts/contactImport', {'step': 3, 'modelname': 'member.Adherent', 'quotechar': "'", 'delimiter': ',',
'encoding': 'utf-8', 'dateformat': '%d/%m/%Y', 'csvcontent0': csv_content,
"fld_lastname": "nom", "fld_firstname": "prenom", "fld_address": "adresse",
"fld_postal_code": "codePostal", "fld_city": "ville", "fld_email": "mail",
"fld_birthday": "DateNaissance", "fld_birthplace": "LieuNaissance", 'fld_subscriptiontype': 'Type',
'fld_prestations': 'Cours', }, False)
self.assert_observer('core.custom', 'lucterios.contacts', 'contactImport')
self.assert_count_equal('', 3)
self.assert_json_equal('LABELFORM', 'result', "1 élément a été importé")
self.assert_json_equal('LABELFORM', 'import_error', ["Prestation 'Presta 6' inconnue !"])
def test_connexion(self):
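# AdherentConnection creates, reactivates and removes user accounts for adherents
# and emails the new credentials.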
self.add_subscriptions()
new_groupe = LucteriosGroup.objects.create(name='new_groupe')
param = Parameter.objects.get(name='contacts-defaultgroup')
param.value = '%d' % new_groupe.id
param.save()
configSMTP('localhost', 3125)
change_ourdetail()
Parameter.change_value('member-connection', 1)
Params.clear()
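# Prepare adherents in various states: an inactive linked user, already-active users
# and one adherent with an invalid email address.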
adh_luke = Adherent.objects.get(firstname='Lucky')
adh_luke.user = LucteriosUser.objects.create(username='lucky', first_name=adh_luke.firstname, last_name=adh_luke.lastname, email=adh_luke.email, is_active=False)
adh_luke.save()
new_adh = create_adherent("Ma'a", 'Dalton', '1961-04-12')
new_adh.user = LucteriosUser.objects.create(username='maa', first_name=new_adh.firstname, last_name=new_adh.lastname, email=new_adh.email, is_active=True)
new_adh.save()
new_adh = create_adherent("Rantanplan", 'Chien', '2010-01-01')
new_adh.user = LucteriosUser.objects.create(username='rantanplan', first_name=new_adh.firstname, last_name=new_adh.lastname, email=new_adh.email, is_active=True)
new_adh.save()
Responsability.objects.create(individual=new_adh, legal_entity_id=1)
adh_joe = Adherent.objects.get(firstname='Joe')
adh_joe.email = 'badèèè@worldcompany.com'
adh_joe.save()
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 5)
self.assertEqual(len(self.json_actions), 4)
server = TestReceiver()
server.start(3125)
try:
self.assertEqual(3, len(LucteriosUser.objects.filter(is_active=True)))
self.factory.xfer = AdherentConnection()
self.calljson('/diacamma.member/adherentConnection', {'CONFIRME': 'YES', 'RELOAD': 'YES'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentConnection')
self.assert_json_equal('LABELFORM', 'info', '{[center]}{[b]}Résultat{[/b]}{[/center]}{[br/]}1 connexion(s) supprimée(s).{[br/]}3 connexion(s) ajoutée(s).{[br/]}1 connexion(s) réactivée(s).{[br/]}{[br/]}1 courriel(s) ont échoué:{[ul]}{[li]}Dalton Joe : ', True)
self.assertEqual([['Avrel.Dalton@worldcompany.com'], ['Jack.Dalton@worldcompany.com'], ['Lucky.Luke@worldcompany.com'], ['William.Dalton@worldcompany.com']], sorted([server.get(srv_id)[2] for srv_id in range(server.count())]))
self.assertEqual(4, server.count())
self.assertEqual(7, len(LucteriosUser.objects.filter(is_active=True)))
self.factory.xfer = AdherentConnection()
self.calljson('/diacamma.member/adherentConnection', {'CONFIRME': 'YES', 'RELOAD': 'YES'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentConnection')
self.assert_json_equal('LABELFORM', 'info', '{[center]}{[b]}Résultat{[/b]}{[/center]}{[br/]}0 connexion(s) supprimée(s).{[br/]}0 connexion(s) ajoutée(s).{[br/]}0 connexion(s) réactivée(s).')
self.assertEqual(4, server.count())
self.assertEqual(7, len(LucteriosUser.objects.filter(is_active=True)))
finally:
server.stop()
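# Check the resulting user accounts: username, activation state and default group.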
user = LucteriosUser.objects.get(first_name='Avrel')
self.assertEqual('Dalton', user.last_name)
self.assertEqual('avrelD', user.username)
self.assertEqual('Avrel.Dalton@worldcompany.com', user.email)
self.assertEqual(True, user.is_active)
self.assertEqual([new_groupe], list(user.groups.all()))
user = LucteriosUser.objects.get(first_name='Lucky')
self.assertEqual('lucky', user.username)
self.assertEqual(True, user.is_active)
self.assertEqual([new_groupe], list(user.groups.all()))
user = LucteriosUser.objects.get(first_name='Joe')
self.assertEqual('joeD', user.username)
self.assertEqual(True, user.is_active)
self.assertEqual([new_groupe], list(user.groups.all()))
user = LucteriosUser.objects.get(first_name="Ma'a")
self.assertEqual('maa', user.username)
self.assertEqual(False, user.is_active)
self.assertEqual([], list(user.groups.all()))
user = LucteriosUser.objects.get(first_name="Rantanplan")
self.assertEqual('rantanplan', user.username)
self.assertEqual(True, user.is_active)
self.assertEqual([], list(user.groups.all()))
def test_prestation_manage(self):
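# CRUD on prestations: list the grid and its actions, create one from an existing team,
# edit it as a new group, then delete it.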
default_prestation()
self.factory.xfer = PrestationList()
self.calljson('/diacamma.member/prestationList', {}, False)
self.assert_observer('core.custom', 'diacamma.member', 'prestationList')
self.assert_count_equal('', 4)
self.assert_select_equal('activity', 3)
self.assert_grid_equal('prestation', {'team.name': "nom", 'team.description': "description", 'activity': "passion", "nb_adherent": "nombre d'adhérents", 'article.price': "prix"}, 3)
self.assert_count_equal('#prestation/actions', 7)
self.assert_json_equal('', '#prestation/actions/@0/action', "prestationShow")
self.assert_json_equal('', '#prestation/actions/@1/action', "prestationAddModify")
self.assert_json_equal('', '#prestation/actions/@2/action', "prestationDel")
self.assert_json_equal('', '#prestation/actions/@3/action', "prestationAddModify")
self.assert_json_equal('', '#prestation/actions/@4/action', "prestationSwap")
self.assert_json_equal('', '#prestation/actions/@5/action', "prestationSplit")
self.assert_json_equal('', '#prestation/actions/@6/action', "objectMerge")
self.factory.xfer = PrestationAddModify()
self.calljson('/diacamma.member/prestationAddModify', {'new_group': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'prestationAddModify')
self.assert_count_equal('', 5)
self.assert_select_equal("new_group", {0: 'nouveau group', 1: 'sélectionner ancien group'})
self.assert_select_equal("team", {1: 'team1', 2: 'team2', 3: 'team3'})
self.assert_select_equal("activity", {1: 'activity1', 2: 'activity2'})
self.assert_select_equal('article', {1: 'ABC1 | Article 01 ', 2: 'ABC2 | Article 02 ', 3: 'ABC3 | Article 03 ', 4: 'ABC4 | Article 04 '})
self.factory.xfer = PrestationAddModify()
self.calljson('/diacamma.member/prestationAddModify',
{'SAVE': 'YES', 'team': 3, 'activity': 2, 'article': 1, 'new_group': 1}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'prestationAddModify')
self.factory.xfer = PrestationList()
self.calljson('/diacamma.member/prestationList', {}, False)
self.assert_observer('core.custom', 'diacamma.member', 'prestationList')
self.assert_count_equal('prestation', 4)
self.assert_json_equal('', 'prestation/@3/id', 4)
self.assert_json_equal('', 'prestation/@3/team.name', "team3")
self.assert_json_equal('', 'prestation/@3/team.description', "team N°3{[br/]}The newbies")
self.assert_json_equal('', 'prestation/@3/activity', "activity2")
self.assert_json_equal('', 'prestation/@3/article.price', 12.34)
self.factory.xfer = PrestationAddModify()
self.calljson('/diacamma.member/prestationAddModify', {'new_group': 0, 'prestation': 4}, False)
self.assert_observer('core.custom', 'diacamma.member', 'prestationAddModify')
self.assert_count_equal('', 5)
self.assert_json_equal('EDIT', 'name', "team3")
self.assert_json_equal('MEMO', 'description', "team N°3{[br/]}The newbies")
self.assert_json_equal('SELECT', 'activity', 2)
self.assert_json_equal('SELECT', 'article', 1)
self.factory.xfer = PrestationAddModify()
self.calljson('/diacamma.member/prestationAddModify',
{'SAVE': 'YES', "name": "team #3", "description": "The team number 3", 'activity': 1, 'article': 2, 'new_group': 0, 'prestation': 4}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'prestationAddModify')
self.factory.xfer = PrestationList()
self.calljson('/diacamma.member/prestationList', {}, False)
self.assert_observer('core.custom', 'diacamma.member', 'prestationList')
self.assert_count_equal('prestation', 4)
self.assert_json_equal('', 'prestation/@0/id', 4)
self.assert_json_equal('', 'prestation/@0/team.name', "team #3")
self.assert_json_equal('', 'prestation/@0/team.description', "The team number 3")
self.assert_json_equal('', 'prestation/@0/activity', "activity1")
self.assert_json_equal('', 'prestation/@0/article.price', 56.78)
self.factory.xfer = PrestationDel()
self.calljson('/diacamma.member/prestationDel', {"prestation": 4, 'CONFIRME': 'YES'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'prestationDel')
self.factory.xfer = PrestationList()
self.calljson('/diacamma.member/prestationList', {}, False)
self.assert_observer('core.custom', 'diacamma.member', 'prestationList')
self.assert_count_equal('prestation', 3)
def test_prestation_change_subscription(self):
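# Adding or removing adherents on a prestation updates their existing subscriptions
# and the totals of the matching quotations.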
default_prestation()
self.add_subscriptions(status=1)
self.factory.xfer = PrestationList()
self.calljson('/diacamma.member/prestationList', {}, False)
self.assert_observer('core.custom', 'diacamma.member', 'prestationList')
self.assert_count_equal('', 4)
self.assert_select_equal('activity', 3)
self.assert_count_equal('prestation', 3)
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow',
{'adherent': 2, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/season', "2009/2010")
self.assert_json_equal('', 'subscription/@0/status', 1)
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Annually")
self.assert_json_equal('', 'subscription/@0/begin_date', "2009-09-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2010-08-31")
self.assert_json_equal('', 'subscription/@0/involvement', [])
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow',
{'adherent': 6, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/season', "2009/2010")
self.assert_json_equal('', 'subscription/@0/status', 1)
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Annually")
self.assert_json_equal('', 'subscription/@0/begin_date', "2009-09-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2010-08-31")
self.assert_json_equal('', 'subscription/@0/involvement', [])
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'status_filter': -2}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 5)
self.assert_json_equal('', 'bill/@0/third', 'Dalton Avrel')
self.assert_json_equal('', 'bill/@0/bill_type', 0)
self.assert_json_equal('', 'bill/@0/status', 0)
self.assert_json_equal('', 'bill/@0/total', 76.44)
self.assert_json_equal('', 'bill/@1/third', 'Dalton William')
self.assert_json_equal('', 'bill/@1/bill_type', 0)
self.assert_json_equal('', 'bill/@1/status', 0)
self.assert_json_equal('', 'bill/@1/total', 76.44)
self.assert_json_equal('', 'bill/@2/third', 'Dalton Jack')
self.assert_json_equal('', 'bill/@2/bill_type', 0)
self.assert_json_equal('', 'bill/@2/status', 0)
self.assert_json_equal('', 'bill/@2/total', 76.44)
self.assert_json_equal('', 'bill/@3/third', 'Dalton Joe')
self.assert_json_equal('', 'bill/@3/bill_type', 0)
self.assert_json_equal('', 'bill/@3/status', 0)
self.assert_json_equal('', 'bill/@3/total', 76.44)
self.assert_json_equal('', 'bill/@4/third', 'Luke Lucky')
self.assert_json_equal('', 'bill/@4/bill_type', 0)
self.assert_json_equal('', 'bill/@4/status', 0)
self.assert_json_equal('', 'bill/@4/total', 76.44)
self.factory.xfer = PrestationShow()
self.calljson('/diacamma.member/prestationShow', {'prestation': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'prestationShow')
self.assert_count_equal('', 7)
self.assert_json_equal('LABELFORM', 'team.name', 'team3')
self.assert_json_equal('LABELFORM', 'activity', "activity2")
self.assert_json_equal('LABELFORM', 'article', 'ABC1')
self.assert_count_equal('adherent', 0)
self.assert_count_equal('#adherent/actions', 3)
self.assert_json_equal('', '#adherent/actions/@0/action', "adherentShow")
self.assert_json_equal('', '#adherent/actions/@1/action', "adherentPrestationDel")
self.assert_json_equal('', '#adherent/actions/@2/action', "adherentPrestationAdd")
self.factory.xfer = AdherentPrestationAdd()
self.calljson('/diacamma.member/adherentPrestationAdd', {'prestation': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentPrestationAdd')
self.factory.xfer = AdherentPrestationSave()
self.calljson('/diacamma.member/adherentPrestationSave', {'prestation': 1, 'adherent': '2;6'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentPrestationSave')
self.factory.xfer = PrestationShow()
self.calljson('/diacamma.member/prestationShow', {'prestation': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'prestationShow')
self.assert_count_equal('adherent', 2)
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow',
{'adherent': 2, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/season', "2009/2010")
self.assert_json_equal('', 'subscription/@0/status', 1)
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Annually")
self.assert_json_equal('', 'subscription/@0/begin_date', "2009-09-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2010-08-31")
self.assert_json_equal('', 'subscription/@0/involvement', ["team3 [activity2]"])
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow',
{'adherent': 6, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/season', "2009/2010")
self.assert_json_equal('', 'subscription/@0/status', 1)
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Annually")
self.assert_json_equal('', 'subscription/@0/begin_date', "2009-09-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2010-08-31")
self.assert_json_equal('', 'subscription/@0/involvement', ["team3 [activity2]"])
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'status_filter': -2}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 5)
self.assert_json_equal('', 'bill/@0/third', 'Dalton Avrel')
self.assert_json_equal('', 'bill/@0/bill_type', 0)
self.assert_json_equal('', 'bill/@0/status', 0)
self.assert_json_equal('', 'bill/@0/total', 88.78)
self.assert_json_equal('', 'bill/@1/third', 'Dalton William')
self.assert_json_equal('', 'bill/@1/bill_type', 0)
self.assert_json_equal('', 'bill/@1/status', 0)
self.assert_json_equal('', 'bill/@1/total', 76.44)
self.assert_json_equal('', 'bill/@2/third', 'Dalton Jack')
self.assert_json_equal('', 'bill/@2/bill_type', 0)
self.assert_json_equal('', 'bill/@2/status', 0)
self.assert_json_equal('', 'bill/@2/total', 76.44)
self.assert_json_equal('', 'bill/@3/third', 'Dalton Joe')
self.assert_json_equal('', 'bill/@3/bill_type', 0)
self.assert_json_equal('', 'bill/@3/status', 0)
self.assert_json_equal('', 'bill/@3/total', 76.44)
self.assert_json_equal('', 'bill/@4/third', 'Luke Lucky')
self.assert_json_equal('', 'bill/@4/bill_type', 0)
self.assert_json_equal('', 'bill/@4/status', 0)
self.assert_json_equal('', 'bill/@4/total', 88.78)
self.factory.xfer = AdherentPrestationDel()
self.calljson('/diacamma.member/adherentPrestationDel', {'prestation': 1, 'adherent': '2', 'CONFIRME': 'YES'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentPrestationDel')
self.factory.xfer = PrestationShow()
self.calljson('/diacamma.member/prestationShow', {'prestation': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'prestationShow')
self.assert_count_equal('adherent', 1)
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow',
{'adherent': 2, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/season', "2009/2010")
self.assert_json_equal('', 'subscription/@0/status', 1)
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Annually")
self.assert_json_equal('', 'subscription/@0/begin_date', "2009-09-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2010-08-31")
self.assert_json_equal('', 'subscription/@0/involvement', [])
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow',
{'adherent': 6, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/season', "2009/2010")
self.assert_json_equal('', 'subscription/@0/status', 1)
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Annually")
self.assert_json_equal('', 'subscription/@0/begin_date', "2009-09-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2010-08-31")
self.assert_json_equal('', 'subscription/@0/involvement', ["team3 [activity2]"])
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'status_filter': -2}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 5)
self.assert_json_equal('', 'bill/@0/third', 'Dalton Avrel')
self.assert_json_equal('', 'bill/@0/bill_type', 0)
self.assert_json_equal('', 'bill/@0/status', 0)
self.assert_json_equal('', 'bill/@0/total', 76.44)
self.assert_json_equal('', 'bill/@1/third', 'Dalton William')
self.assert_json_equal('', 'bill/@1/bill_type', 0)
self.assert_json_equal('', 'bill/@1/status', 0)
self.assert_json_equal('', 'bill/@1/total', 76.44)
self.assert_json_equal('', 'bill/@2/third', 'Dalton Jack')
self.assert_json_equal('', 'bill/@2/bill_type', 0)
self.assert_json_equal('', 'bill/@2/status', 0)
self.assert_json_equal('', 'bill/@2/total', 76.44)
self.assert_json_equal('', 'bill/@3/third', 'Dalton Joe')
self.assert_json_equal('', 'bill/@3/bill_type', 0)
self.assert_json_equal('', 'bill/@3/status', 0)
self.assert_json_equal('', 'bill/@3/total', 76.44)
self.assert_json_equal('', 'bill/@4/third', 'Luke Lucky')
self.assert_json_equal('', 'bill/@4/bill_type', 0)
self.assert_json_equal('', 'bill/@4/status', 0)
self.assert_json_equal('', 'bill/@4/total', 88.78)
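# Assign adherents with no current subscription to a prestation: the save
# dialog offers to create the missing subscriptions, and the resulting
# quotations include both the membership fee and the prestation article.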
def test_prestation_new_subscription(self):
default_prestation()
default_adherents()
default_subscription()
self.factory.xfer = PrestationList()
self.calljson('/diacamma.member/prestationList', {}, False)
self.assert_observer('core.custom', 'diacamma.member', 'prestationList')
self.assert_count_equal('', 4)
self.assert_select_equal('activity', 3)
self.assert_count_equal('prestation', 3)
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow',
{'adherent': 2, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 0)
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow',
{'adherent': 6, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 0)
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'status_filter': -2}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 0)
self.factory.xfer = AdherentPrestationSave()
self.calljson('/diacamma.member/adherentPrestationSave', {'prestation': 1, 'adherent': '2;6'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentPrestationSave')
self.assert_count_equal('', 7)
self.assert_json_equal('LABELFORM', 'no_subscription', ['Dalton Avrel', 'Luke Lucky'])
self.assert_json_equal('LABELFORM', 'season', 10)
self.assert_select_equal('subscriptiontype', {1: "Annually [76,44 €]", 2: "Periodic [76,44 €]", 3: "Monthly [76,44 €]", 4: "Calendar [76,44 €]"})
self.assert_select_equal('status', {1: 'en création', 2: 'validé'})
self.factory.xfer = AdherentPrestationSave()
self.calljson('/diacamma.member/adherentPrestationSave', {'prestation': 1, 'adherent': '2;6', 'NEW_SUB': 'YES', 'subscriptiontype': 1, 'status': 1}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentPrestationSave')
self.factory.xfer = PrestationShow()
self.calljson('/diacamma.member/prestationShow', {'prestation': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'prestationShow')
self.assert_count_equal('adherent', 2)
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow',
{'adherent': 2, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/season', "2009/2010")
self.assert_json_equal('', 'subscription/@0/status', 1)
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Annually")
self.assert_json_equal('', 'subscription/@0/begin_date', "2009-09-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2010-08-31")
self.assert_json_equal('', 'subscription/@0/involvement', ["team3 [activity2]"])
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow',
{'adherent': 6, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/season', "2009/2010")
self.assert_json_equal('', 'subscription/@0/status', 1)
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Annually")
self.assert_json_equal('', 'subscription/@0/begin_date', "2009-09-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2010-08-31")
self.assert_json_equal('', 'subscription/@0/involvement', ["team3 [activity2]"])
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'status_filter': -2}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 2)
self.assert_json_equal('', 'bill/@0/third', 'Dalton Avrel')
self.assert_json_equal('', 'bill/@0/bill_type', 0)
self.assert_json_equal('', 'bill/@0/status', 0)
self.assert_json_equal('', 'bill/@0/total', 88.78)
self.assert_json_equal('', 'bill/@1/third', 'Luke Lucky')
self.assert_json_equal('', 'bill/@1/bill_type', 0)
self.assert_json_equal('', 'bill/@1/status', 0)
self.assert_json_equal('', 'bill/@1/total', 88.78)
self.factory.xfer = AdherentPrestationDel()
self.calljson('/diacamma.member/adherentPrestationDel', {'prestation': 1, 'adherent': '2', 'CONFIRME': 'YES'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentPrestationDel')
self.factory.xfer = PrestationShow()
self.calljson('/diacamma.member/prestationShow', {'prestation': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'prestationShow')
self.assert_count_equal('adherent', 1)
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow',
{'adherent': 2, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/season', "2009/2010")
self.assert_json_equal('', 'subscription/@0/status', 1)
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Annually")
self.assert_json_equal('', 'subscription/@0/begin_date', "2009-09-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2010-08-31")
self.assert_json_equal('', 'subscription/@0/involvement', [])
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow',
{'adherent': 6, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/season', "2009/2010")
self.assert_json_equal('', 'subscription/@0/status', 1)
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Annually")
self.assert_json_equal('', 'subscription/@0/begin_date', "2009-09-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2010-08-31")
self.assert_json_equal('', 'subscription/@0/involvement', ["team3 [activity2]"])
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'status_filter': -2}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 2)
self.assert_json_equal('', 'bill/@0/third', 'Dalton Avrel')
self.assert_json_equal('', 'bill/@0/bill_type', 0)
self.assert_json_equal('', 'bill/@0/status', 0)
self.assert_json_equal('', 'bill/@0/total', 76.44)
self.assert_json_equal('', 'bill/@1/third', 'Luke Lucky')
self.assert_json_equal('', 'bill/@1/bill_type', 0)
self.assert_json_equal('', 'bill/@1/status', 0)
self.assert_json_equal('', 'bill/@1/total', 88.78)
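# Same workflow with already-validated subscriptions (status=2): the generated
# documents are bills (bill_type 1), and removing an adherent afterwards adds
# a separate bill_type 2 entry crediting the article price (12.34).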
def test_prestation_subscription_validated(self):
default_prestation()
self.add_subscriptions(status=2)
self.factory.xfer = PrestationList()
self.calljson('/diacamma.member/prestationList', {}, False)
self.assert_observer('core.custom', 'diacamma.member', 'prestationList')
self.assert_count_equal('', 4)
self.assert_select_equal('activity', 3)
self.assert_count_equal('prestation', 3)
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow',
{'adherent': 2, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/season', "2009/2010")
self.assert_json_equal('', 'subscription/@0/status', 2)
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Annually")
self.assert_json_equal('', 'subscription/@0/begin_date', "2009-09-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2010-08-31")
self.assert_json_equal('', 'subscription/@0/involvement', [])
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow',
{'adherent': 6, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/season', "2009/2010")
self.assert_json_equal('', 'subscription/@0/status', 2)
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Annually")
self.assert_json_equal('', 'subscription/@0/begin_date', "2009-09-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2010-08-31")
self.assert_json_equal('', 'subscription/@0/involvement', [])
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'status_filter': -2}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 5)
self.assert_json_equal('', 'bill/@0/third', 'Dalton Avrel')
self.assert_json_equal('', 'bill/@0/bill_type', 1)
self.assert_json_equal('', 'bill/@0/status', 0)
self.assert_json_equal('', 'bill/@0/total', 76.44)
self.assert_json_equal('', 'bill/@1/third', 'Dalton William')
self.assert_json_equal('', 'bill/@1/bill_type', 1)
self.assert_json_equal('', 'bill/@1/status', 0)
self.assert_json_equal('', 'bill/@1/total', 76.44)
self.assert_json_equal('', 'bill/@2/third', 'Dalton Jack')
self.assert_json_equal('', 'bill/@2/bill_type', 1)
self.assert_json_equal('', 'bill/@2/status', 0)
self.assert_json_equal('', 'bill/@2/total', 76.44)
self.assert_json_equal('', 'bill/@3/third', 'Dalton Joe')
self.assert_json_equal('', 'bill/@3/bill_type', 1)
self.assert_json_equal('', 'bill/@3/status', 0)
self.assert_json_equal('', 'bill/@3/total', 76.44)
self.assert_json_equal('', 'bill/@4/third', 'Luke Lucky')
self.assert_json_equal('', 'bill/@4/bill_type', 1)
self.assert_json_equal('', 'bill/@4/status', 0)
self.assert_json_equal('', 'bill/@4/total', 76.44)
self.factory.xfer = PrestationShow()
self.calljson('/diacamma.member/prestationShow', {'prestation': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'prestationShow')
self.assert_count_equal('', 7)
self.assert_json_equal('LABELFORM', 'team.name', 'team3')
self.assert_json_equal('LABELFORM', 'activity', "activity2")
self.assert_json_equal('LABELFORM', 'article', 'ABC1')
self.assert_count_equal('adherent', 0)
self.assert_count_equal('#adherent/actions', 3)
self.assert_json_equal('', '#adherent/actions/@0/action', "adherentShow")
self.assert_json_equal('', '#adherent/actions/@1/action', "adherentPrestationDel")
self.assert_json_equal('', '#adherent/actions/@2/action', "adherentPrestationAdd")
self.factory.xfer = AdherentPrestationAdd()
self.calljson('/diacamma.member/adherentPrestationAdd', {'prestation': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentPrestationAdd')
self.factory.xfer = AdherentPrestationSave()
self.calljson('/diacamma.member/adherentPrestationSave', {'prestation': 1, 'adherent': '2;6'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentPrestationSave')
self.factory.xfer = PrestationShow()
self.calljson('/diacamma.member/prestationShow', {'prestation': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'prestationShow')
self.assert_count_equal('adherent', 2)
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow',
{'adherent': 2, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/season', "2009/2010")
self.assert_json_equal('', 'subscription/@0/status', 2)
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Annually")
self.assert_json_equal('', 'subscription/@0/begin_date', "2009-09-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2010-08-31")
self.assert_json_equal('', 'subscription/@0/involvement', ["team3 [activity2]"])
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow',
{'adherent': 6, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/season', "2009/2010")
self.assert_json_equal('', 'subscription/@0/status', 2)
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Annually")
self.assert_json_equal('', 'subscription/@0/begin_date', "2009-09-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2010-08-31")
self.assert_json_equal('', 'subscription/@0/involvement', ["team3 [activity2]"])
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'status_filter': -2}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 5)
self.assert_json_equal('', 'bill/@0/third', 'Dalton Avrel')
self.assert_json_equal('', 'bill/@0/bill_type', 1)
self.assert_json_equal('', 'bill/@0/status', 0)
self.assert_json_equal('', 'bill/@0/total', 88.78)
self.assert_json_equal('', 'bill/@1/third', 'Dalton William')
self.assert_json_equal('', 'bill/@1/bill_type', 1)
self.assert_json_equal('', 'bill/@1/status', 0)
self.assert_json_equal('', 'bill/@1/total', 76.44)
self.assert_json_equal('', 'bill/@2/third', 'Dalton Jack')
self.assert_json_equal('', 'bill/@2/bill_type', 1)
self.assert_json_equal('', 'bill/@2/status', 0)
self.assert_json_equal('', 'bill/@2/total', 76.44)
self.assert_json_equal('', 'bill/@3/third', 'Dalton Joe')
self.assert_json_equal('', 'bill/@3/bill_type', 1)
self.assert_json_equal('', 'bill/@3/status', 0)
self.assert_json_equal('', 'bill/@3/total', 76.44)
self.assert_json_equal('', 'bill/@4/third', 'Luke Lucky')
self.assert_json_equal('', 'bill/@4/bill_type', 1)
self.assert_json_equal('', 'bill/@4/status', 0)
self.assert_json_equal('', 'bill/@4/total', 88.78)
self.factory.xfer = AdherentPrestationDel()
self.calljson('/diacamma.member/adherentPrestationDel', {'prestation': 1, 'adherent': '2', 'CONFIRME': 'YES'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentPrestationDel')
self.factory.xfer = PrestationShow()
self.calljson('/diacamma.member/prestationShow', {'prestation': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'prestationShow')
self.assert_count_equal('adherent', 1)
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow',
{'adherent': 2, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/season', "2009/2010")
self.assert_json_equal('', 'subscription/@0/status', 2)
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Annually")
self.assert_json_equal('', 'subscription/@0/begin_date', "2009-09-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2010-08-31")
self.assert_json_equal('', 'subscription/@0/involvement', [])
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow',
{'adherent': 6, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/season', "2009/2010")
self.assert_json_equal('', 'subscription/@0/status', 2)
self.assert_json_equal('', 'subscription/@0/subscriptiontype', "Annually")
self.assert_json_equal('', 'subscription/@0/begin_date', "2009-09-01")
self.assert_json_equal('', 'subscription/@0/end_date', "2010-08-31")
self.assert_json_equal('', 'subscription/@0/involvement', ["team3 [activity2]"])
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'status_filter': -2}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 6)
self.assert_json_equal('', 'bill/@0/third', 'Dalton Avrel')
self.assert_json_equal('', 'bill/@0/bill_type', 1)
self.assert_json_equal('', 'bill/@0/status', 0)
self.assert_json_equal('', 'bill/@0/total', 88.78)
self.assert_json_equal('', 'bill/@1/third', 'Dalton William')
self.assert_json_equal('', 'bill/@1/bill_type', 1)
self.assert_json_equal('', 'bill/@1/status', 0)
self.assert_json_equal('', 'bill/@1/total', 76.44)
self.assert_json_equal('', 'bill/@2/third', 'Dalton Jack')
self.assert_json_equal('', 'bill/@2/bill_type', 1)
self.assert_json_equal('', 'bill/@2/status', 0)
self.assert_json_equal('', 'bill/@2/total', 76.44)
self.assert_json_equal('', 'bill/@3/third', 'Dalton Joe')
self.assert_json_equal('', 'bill/@3/bill_type', 1)
self.assert_json_equal('', 'bill/@3/status', 0)
self.assert_json_equal('', 'bill/@3/total', 76.44)
self.assert_json_equal('', 'bill/@4/third', 'Luke Lucky')
self.assert_json_equal('', 'bill/@4/bill_type', 1)
self.assert_json_equal('', 'bill/@4/status', 0)
self.assert_json_equal('', 'bill/@4/total', 88.78)
self.assert_json_equal('', 'bill/@5/third', 'Dalton Avrel')
self.assert_json_equal('', 'bill/@5/bill_type', 2)
self.assert_json_equal('', 'bill/@5/status', 0)
self.assert_json_equal('', 'bill/@5/total', 12.34)
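# Merge two prestations (and their teams) via ObjectMerge, then check that
# involvements and quotation totals are recomputed for every adherent.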
def test_prestation_merge(self):
default_prestation()
default_adherents()
default_subscription()
self.factory.xfer = AdherentPrestationSave()
self.calljson('/diacamma.member/adherentPrestationSave', {'prestation': 1, 'adherent': '2;3;6', 'NEW_SUB': 'YES', 'subscriptiontype': 1, 'status': 1}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentPrestationSave')
self.factory.xfer = AdherentPrestationSave()
self.calljson('/diacamma.member/adherentPrestationSave', {'prestation': 2, 'adherent': '3;4;5;6', 'NEW_SUB': 'YES', 'subscriptiontype': 1, 'status': 1}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentPrestationSave')
self.factory.xfer = AdherentPrestationSave()
self.calljson('/diacamma.member/adherentPrestationSave', {'prestation': 3, 'adherent': '2;3', 'NEW_SUB': 'YES', 'subscriptiontype': 1, 'status': 1}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentPrestationSave')
self.factory.xfer = PrestationList()
self.calljson('/diacamma.member/prestationList', {}, False)
self.assert_observer('core.custom', 'diacamma.member', 'prestationList')
self.assert_count_equal('prestation', 3)
self.assert_json_equal('', 'prestation/@0/id', 3)
self.assert_json_equal('', 'prestation/@0/team.name', "team1")
self.assert_json_equal('', 'prestation/@0/team.description', "team N°1{[br/]}The bests")
self.assert_json_equal('', 'prestation/@0/activity', "activity1")
self.assert_json_equal('', 'prestation/@0/nb_adherent', 2)
self.assert_json_equal('', 'prestation/@0/article.price', 324.97)
self.assert_json_equal('', 'prestation/@1/id', 2)
self.assert_json_equal('', 'prestation/@1/team.name', "team2")
self.assert_json_equal('', 'prestation/@1/team.description', "team N°2{[br/]}The chalengers")
self.assert_json_equal('', 'prestation/@1/activity', "activity2")
self.assert_json_equal('', 'prestation/@1/nb_adherent', 4)
self.assert_json_equal('', 'prestation/@1/article.price', 56.78)
self.assert_json_equal('', 'prestation/@2/id', 1)
self.assert_json_equal('', 'prestation/@2/team.name', "team3")
self.assert_json_equal('', 'prestation/@2/team.description', "team N°3{[br/]}The newbies")
self.assert_json_equal('', 'prestation/@2/activity', "activity2")
self.assert_json_equal('', 'prestation/@2/nb_adherent', 3)
self.assert_json_equal('', 'prestation/@2/article.price', 12.34)
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'status_filter': -2}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 5)
self.assert_json_equal('', 'bill/@0/third', 'Dalton Avrel')
self.assert_json_equal('', 'bill/@0/bill_type', 0)
self.assert_json_equal('', 'bill/@0/status', 0)
self.assert_json_equal('', 'bill/@0/total', 413.75) # 76.44 + 12.34 (1) + 324.97 (3)
self.assert_json_equal('', 'bill/@1/third', 'Dalton William')
self.assert_json_equal('', 'bill/@1/bill_type', 0)
self.assert_json_equal('', 'bill/@1/status', 0)
self.assert_json_equal('', 'bill/@1/total', 470.53) # 76.44 + 12.34 (1) + 56.78 (2) + 324.97 (3)
self.assert_json_equal('', 'bill/@2/third', 'Luke Lucky')
self.assert_json_equal('', 'bill/@2/bill_type', 0)
self.assert_json_equal('', 'bill/@2/status', 0)
self.assert_json_equal('', 'bill/@2/total', 145.56) # 76.44 + 12.34 (1) + 56.78 (2)
self.assert_json_equal('', 'bill/@3/third', 'Dalton Jack')
self.assert_json_equal('', 'bill/@3/bill_type', 0)
self.assert_json_equal('', 'bill/@3/status', 0)
self.assert_json_equal('', 'bill/@3/total', 133.22) # 76.44 + 56.78 (2)
self.assert_json_equal('', 'bill/@4/third', 'Dalton Joe')
self.assert_json_equal('', 'bill/@4/bill_type', 0)
self.assert_json_equal('', 'bill/@4/status', 0)
self.assert_json_equal('', 'bill/@4/total', 133.22) # 76.44 + 56.78 (2)
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 2, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/involvement', ["team1 [activity1]", "team3 [activity2]"])
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 3, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/involvement', ["team1 [activity1]", "team2 [activity2]", "team3 [activity2]"])
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 4, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/involvement', ["team2 [activity2]"])
self.factory.xfer = CategoryConf()
self.calljson('/diacamma.member/categoryConf', {}, False)
self.assert_observer('core.custom', 'diacamma.member', 'categoryConf')
self.assert_count_equal('team', 3)
self.assert_json_equal('', 'team/@0/name', "team1")
self.assert_json_equal('', 'team/@1/name', "team2")
self.assert_json_equal('', 'team/@2/name', "team3")
self.factory.xfer = ObjectMerge()
self.calljson('/CORE/objectMerge', {'modelname': 'member.Prestation', 'field_id': 'prestation', 'prestation': '2;3', 'CONFIRME': 'YES', 'mrg_object': '3'}, False)
self.assert_observer('core.acknowledge', 'CORE', 'objectMerge')
self.assert_action_equal('GET', self.response_json['action'], ('Editer', 'images/show.png', 'diacamma.member', 'prestationShow', 1, 1, 1, {"prestation": 3}))
self.factory.xfer = PrestationList()
self.calljson('/diacamma.member/prestationList', {}, False)
self.assert_observer('core.custom', 'diacamma.member', 'prestationList')
self.assert_count_equal('prestation', 2)
self.assert_json_equal('', 'prestation/@0/id', 3)
self.assert_json_equal('', 'prestation/@0/team.name', "team1")
self.assert_json_equal('', 'prestation/@0/team.description', "team N°1{[br/]}The bests")
self.assert_json_equal('', 'prestation/@0/activity', "activity1")
self.assert_json_equal('', 'prestation/@0/nb_adherent', 5)
self.assert_json_equal('', 'prestation/@0/article.price', 324.97)
self.assert_json_equal('', 'prestation/@1/id', 1)
self.assert_json_equal('', 'prestation/@1/team.name', "team3")
self.assert_json_equal('', 'prestation/@1/team.description', "team N°3{[br/]}The newbies")
self.assert_json_equal('', 'prestation/@1/activity', "activity2")
self.assert_json_equal('', 'prestation/@1/nb_adherent', 3)
self.assert_json_equal('', 'prestation/@1/article.price', 12.34)
self.factory.xfer = CategoryConf()
self.calljson('/diacamma.member/categoryConf', {}, False)
self.assert_observer('core.custom', 'diacamma.member', 'categoryConf')
self.assert_count_equal('team', 2)
self.assert_json_equal('', 'team/@0/name', "team1")
self.assert_json_equal('', 'team/@1/name', "team3")
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 2, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/involvement', ["team1 [activity1]", "team3 [activity2]"])
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 3, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/involvement', ["team1 [activity1]", "team3 [activity2]"])
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 4, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('subscription', 1)
self.assert_json_equal('', 'subscription/@0/involvement', ["team1 [activity1]"])
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'status_filter': -2}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 5)
self.assert_json_equal('', 'bill/@0/third', 'Dalton Avrel')
self.assert_json_equal('', 'bill/@0/bill_type', 0)
self.assert_json_equal('', 'bill/@0/status', 0)
self.assert_json_equal('', 'bill/@0/total', 413.75) # 76.44 + 12.34 (1) + 324.97 (3)
self.assert_json_equal('', 'bill/@1/third', 'Dalton William')
self.assert_json_equal('', 'bill/@1/bill_type', 0)
self.assert_json_equal('', 'bill/@1/status', 0)
self.assert_json_equal('', 'bill/@1/total', 413.75) # 76.44 + 12.34 (1) + 324.97 (3)
self.assert_json_equal('', 'bill/@2/third', 'Luke Lucky')
self.assert_json_equal('', 'bill/@2/bill_type', 0)
self.assert_json_equal('', 'bill/@2/status', 0)
self.assert_json_equal('', 'bill/@2/total', 413.75) # 76.44 + 12.34 (1) + 324.97 (3)
self.assert_json_equal('', 'bill/@3/third', 'Dalton Jack')
self.assert_json_equal('', 'bill/@3/bill_type', 0)
self.assert_json_equal('', 'bill/@3/status', 0)
self.assert_json_equal('', 'bill/@3/total', 401.41) # 76.44 + 324.97 (3)
self.assert_json_equal('', 'bill/@4/third', 'Dalton Joe')
self.assert_json_equal('', 'bill/@4/bill_type', 0)
self.assert_json_equal('', 'bill/@4/status', 0)
self.assert_json_equal('', 'bill/@4/total', 401.41) # 76.44 + 324.97 (3)
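# Swap selected adherents between two prestations and check that involvements
# and quotation totals follow the move.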
def test_prestation_swap(self):
default_prestation()
default_adherents()
default_subscription()
self.factory.xfer = AdherentPrestationSave()
self.calljson('/diacamma.member/adherentPrestationSave', {'prestation': 1, 'adherent': '2;4;6', 'NEW_SUB': 'YES', 'subscriptiontype': 1, 'status': 1}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentPrestationSave')
self.factory.xfer = AdherentPrestationSave()
self.calljson('/diacamma.member/adherentPrestationSave', {'prestation': 2, 'adherent': '3;5', 'NEW_SUB': 'YES', 'subscriptiontype': 1, 'status': 1}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentPrestationSave')
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 2, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_json_equal('', 'subscription/@0/involvement', ["team3 [activity2]"])
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 3, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_json_equal('', 'subscription/@0/involvement', ["team2 [activity2]"])
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 4, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_json_equal('', 'subscription/@0/involvement', ["team3 [activity2]"])
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 5, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_json_equal('', 'subscription/@0/involvement', ["team2 [activity2]"])
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 6, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_json_equal('', 'subscription/@0/involvement', ["team3 [activity2]"])
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'status_filter': -2}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 5)
self.assert_json_equal('', 'bill/@0/third', 'Dalton Avrel')
self.assert_json_equal('', 'bill/@0/bill_type', 0)
self.assert_json_equal('', 'bill/@0/status', 0)
self.assert_json_equal('', 'bill/@0/total', 88.78)
self.assert_json_equal('', 'bill/@1/third', 'Dalton Jack')
self.assert_json_equal('', 'bill/@1/bill_type', 0)
self.assert_json_equal('', 'bill/@1/status', 0)
self.assert_json_equal('', 'bill/@1/total', 88.78)
self.assert_json_equal('', 'bill/@2/third', 'Luke Lucky')
self.assert_json_equal('', 'bill/@2/bill_type', 0)
self.assert_json_equal('', 'bill/@2/status', 0)
self.assert_json_equal('', 'bill/@2/total', 88.78)
self.assert_json_equal('', 'bill/@3/third', 'Dalton Joe')
self.assert_json_equal('', 'bill/@3/bill_type', 0)
self.assert_json_equal('', 'bill/@3/status', 0)
self.assert_json_equal('', 'bill/@3/total', 133.22)
self.assert_json_equal('', 'bill/@4/third', 'Dalton William')
self.assert_json_equal('', 'bill/@4/bill_type', 0)
self.assert_json_equal('', 'bill/@4/status', 0)
self.assert_json_equal('', 'bill/@4/total', 133.22)
self.factory.xfer = PrestationSwap()
self.calljson('/diacamma.member/prestationSwap', {'prestation': '1;2'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'prestationSwap')
self.assert_count_equal('', 4)
self.assert_json_equal('LABELFORM', 'lbl_left', ' team2 [activity2]', txtrange=True)
self.assert_json_equal('LABELFORM', 'lbl_right', ' team3 [activity2]', txtrange=True)
self.assert_json_equal('CHECKLIST', 'swaps', ['2', '4', '6'])
self.assert_select_equal('swaps', {2: 'Dalton Avrel', 3: 'Dalton William', 4: 'Dalton Jack', 5: 'Dalton Joe', 6: 'Luke Lucky'}, True)
self.factory.xfer = PrestationSwap()
self.calljson('/diacamma.member/prestationSwap', {'prestation': '1;2', 'CONFIRME': 'YES', 'swaps': '2;4;5'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'prestationSwap')
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 2, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_json_equal('', 'subscription/@0/involvement', ["team3 [activity2]"])
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 3, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_json_equal('', 'subscription/@0/involvement', ["team2 [activity2]"])
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 4, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_json_equal('', 'subscription/@0/involvement', ["team3 [activity2]"])
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 5, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_json_equal('', 'subscription/@0/involvement', ["team3 [activity2]"])
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 6, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_json_equal('', 'subscription/@0/involvement', ["team2 [activity2]"])
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'status_filter': -2}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 5)
self.assert_json_equal('', 'bill/@0/third', 'Dalton Avrel')
self.assert_json_equal('', 'bill/@0/bill_type', 0)
self.assert_json_equal('', 'bill/@0/status', 0)
self.assert_json_equal('', 'bill/@0/total', 88.78)
self.assert_json_equal('', 'bill/@1/third', 'Dalton Jack')
self.assert_json_equal('', 'bill/@1/bill_type', 0)
self.assert_json_equal('', 'bill/@1/status', 0)
self.assert_json_equal('', 'bill/@1/total', 88.78)
self.assert_json_equal('', 'bill/@2/third', 'Luke Lucky')
self.assert_json_equal('', 'bill/@2/bill_type', 0)
self.assert_json_equal('', 'bill/@2/status', 0)
self.assert_json_equal('', 'bill/@2/total', 133.22)
self.assert_json_equal('', 'bill/@3/third', 'Dalton Joe')
self.assert_json_equal('', 'bill/@3/bill_type', 0)
self.assert_json_equal('', 'bill/@3/status', 0)
self.assert_json_equal('', 'bill/@3/total', 88.78)
self.assert_json_equal('', 'bill/@4/third', 'Dalton William')
self.assert_json_equal('', 'bill/@4/bill_type', 0)
self.assert_json_equal('', 'bill/@4/status', 0)
self.assert_json_equal('', 'bill/@4/total', 133.22)
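# Split a prestation: the dialog is pre-filled from the original team, and the
# confirmation chains into a prestationSwap between the old and new prestations.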
def test_prestation_split(self):
default_prestation()
default_adherents()
default_subscription()
self.factory.xfer = AdherentPrestationSave()
self.calljson('/diacamma.member/adherentPrestationSave', {'prestation': 1, 'adherent': '2;3;4;5;6', 'NEW_SUB': 'YES', 'subscriptiontype': 1, 'status': 1}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentPrestationSave')
self.factory.xfer = PrestationSplit()
self.calljson('/diacamma.member/prestationSplit', {'prestation': '1'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'prestationSplit')
self.assert_count_equal('', 6)
self.assert_json_equal('EDIT', 'name', "team3")
self.assert_json_equal('MEMO', 'description', "team N°3{[br/]}The newbies")
self.assert_json_equal('SELECT', 'activity', 2)
self.assert_json_equal('SELECT', 'article', 1)
self.factory.xfer = PrestationSplit()
self.calljson('/diacamma.member/prestationSplit', {'prestation': '1', 'CONFIRME': 'YES', 'name': 'team3b', 'description': "team N°3b{[br/]}The newbies+", 'activity': 2, 'article': 1}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'prestationSplit')
self.assert_action_equal('POST', self.response_json['action'], ('Permuter entre prestations', 'diacamma.member/images/adherent.png', 'diacamma.member', 'prestationSwap', 1, 1, 1, {"prestation": '1;4'}))
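# Family handling: adherents are grouped under a "family" legal entity
# (member-family-type parameter) which becomes the billed third party.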
class AdherentFamilyTest(BaseAdherentTest):
def setUp(self):
BaseAdherentTest.setUp(self)
Parameter.change_value('member-family-type', 3)
Parameter.change_value("member-fields", "firstname;lastname;tel1;tel2;email;family")
set_parameters([])
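# Without a family, the adherent sheet shows an empty field and an "add" button.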
def test_show_adherent(self):
self.add_subscriptions()
self.assertEqual("famille", str(Params.getobject('member-family-type')))
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow',
{'adherent': 2, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_count_equal('', 2 + (15 + 2 + 5) + 2 + 5 + 5 + 2) # header + identity/family/docs + subscription + financial + invoice + grade
self.assert_json_equal('LABELFORM', 'family', None)
self.assert_json_equal('', '#famillybtn/action/icon', "/static/lucterios.CORE/images/add.png")
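# Create a family from the adherent sheet: the creation form is pre-filled
# from the adherent's contact data and the new entity is linked back to him.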
def test_new_family(self):
default_adherents()
default_subscription()
self.factory.xfer = AdherentFamilyAdd()
self.calljson('/diacamma.member/adherentFamilyAdd', {'adherent': 2}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentFamilyAdd')
self.assert_count_equal('', 4)
self.assert_count_equal('legal_entity', 0)
self.assert_json_equal('', '#legal_entity/actions/@1/icon', "/static/lucterios.CORE/images/new.png")
json_values = self.get_json_path('#legal_entity/actions/@1/params').items()
self.assertEqual(len(json_values), 9)
params_value = {'adherent': 2}
for key, val in json_values:
params_value[key] = val
self.factory.xfer = AdherentFamilyCreate()
self.calljson('/diacamma.member/adherentFamilyCreate', params_value, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentFamilyCreate')
self.assert_json_equal('EDIT', 'name', 'Dalton')
params_value['SAVE'] = 'YES'
self.factory.xfer = AdherentFamilyCreate()
self.calljson('/diacamma.member/adherentFamilyCreate', params_value, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentFamilyCreate')
self.assertEqual(self.response_json['action']['action'], 'adherentFamilySelect')
self.assertEqual(self.response_json['action']['params']['legal_entity'], 7)
self.factory.xfer = AdherentFamilySelect()
self.calljson('/diacamma.member/adherentFamilySelect', {'adherent': 2, 'legal_entity': 7}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentFamilySelect')
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 2, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_json_equal('LABELFORM', 'family', "Dalton")
self.assert_json_equal('', '#famillybtn/action/icon', "/static/lucterios.CORE/images/edit.png")
self.factory.xfer = LegalEntityShow()
self.calljson('/lucterios.contacts/legalEntityShow', {'legal_entity': '7'}, False)
self.assert_observer('core.custom', 'lucterios.contacts', 'legalEntityShow')
self.assert_json_equal('LABELFORM', 'name', "Dalton")
self.assert_json_equal('LABELFORM', 'structure_type', 'famille')
self.assert_json_equal('LABELFORM', 'address', 'rue de la liberté')
self.assert_json_equal('LABELFORM', 'postal_code', '97250')
self.assert_json_equal('LABELFORM', 'city', 'LE PRECHEUR')
self.assert_json_equal('LABELFORM', 'country', 'MARTINIQUE')
self.assert_json_equal('LINK', 'email', 'Avrel.Dalton@worldcompany.com')
self.assert_json_equal('LABELFORM', 'tel2', '02-78-45-12-95')
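# Attach an adherent to an existing family picked from the legal-entity list.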
def test_select_family(self):
default_adherents()
default_subscription()
self.add_family()
self.factory.xfer = AdherentFamilyAdd()
self.calljson('/diacamma.member/adherentFamilyAdd', {'adherent': 2}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentFamilyAdd')
self.assert_count_equal('legal_entity', 1)
self.assert_count_equal('#legal_entity/actions', 3)
self.assert_json_equal('', 'legal_entity/@0/name', "LES DALTONS")
self.factory.xfer = AdherentFamilySelect()
self.calljson('/diacamma.member/adherentFamilySelect', {'adherent': 2, 'legal_entity': 7}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentFamilySelect')
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 2, 'dateref': '2009-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_json_equal('LABELFORM', 'family', "LES DALTONS")
self.assert_json_equal('', '#famillybtn/action/icon', "/static/lucterios.CORE/images/edit.png")
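# Create a new adherent directly from a family: lastname and address are
# inherited from the family record.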
def test_add_adherent(self):
default_adherents()
default_subscription()
self.add_family()
self.factory.xfer = AdherentAddModify()
self.calljson('/diacamma.member/adherentAddModify', {}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentAddModify')
self.assert_json_equal('', '#famillybtn/action/icon', "/static/lucterios.CORE/images/add.png")
self.factory.xfer = FamilyAdherentAdd()
self.calljson('/diacamma.member/familyAdherentAdd', {}, False)
self.assert_observer('core.custom', 'diacamma.member', 'familyAdherentAdd')
self.assert_count_equal('legal_entity', 1)
self.assert_count_equal('#legal_entity/actions', 3)
self.assert_json_equal('', 'legal_entity/@0/name', "LES DALTONS")
self.factory.xfer = FamilyAdherentCreate()
self.calljson('/diacamma.member/familyAdherentCreate', {'legal_entity': 7}, False)
self.assert_observer('core.custom', 'diacamma.member', 'familyAdherentCreate')
self.assert_json_equal('EDIT', 'lastname', "LES DALTONS")
self.assert_json_equal('MEMO', 'address', 'Place des cocotiers')
self.factory.xfer = FamilyAdherentCreate()
self.calljson('/diacamma.member/familyAdherentCreate', {"address": 'Place des cocotiers',
"comment": 'no comment', "firstname": "Ma'a", "lastname": 'DALTON',
"city": 'ST PIERRE', "country": 'MARTINIQUE', "tel2": '06-54-87-19-34', "SAVE": 'YES',
"tel1": '09-96-75-15-00', "postal_code": '97250', "email": 'maa.dalton@worldcompany.com',
"birthday": "1998-08-04", "birthplace": "Fort-de-France",
"genre": "2", 'legal_entity': 7}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'familyAdherentCreate')
self.assertEqual(self.response_json['action']['params']['adherent'], 8)
self.factory.xfer = FamilyAdherentAdded()
self.calljson('/diacamma.member/familyAdherentAdded', {'adherent': 8, 'legal_entity': 7}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'familyAdherentAdded')
self.factory.xfer = AdherentShow()
self.calljson('/diacamma.member/adherentShow', {'adherent': 8}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentShow')
self.assert_json_equal('LABELFORM', 'firstname', "Ma'a")
self.assert_json_equal('LABELFORM', 'lastname', "DALTON")
self.assert_json_equal('LABELFORM', 'family', "LES DALTONS")
self.assert_json_equal('', '#famillybtn/action/icon', "/static/lucterios.CORE/images/edit.png")
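# Subscriptions of family members are billed to the family third: the second
# subscription is appended to the same open bill (76.44 x 2 = 152.88).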
def test_subscription_bill(self):
default_adherents()
default_subscription()
family_third = get_or_create_customer(self.add_family())
self.factory.xfer = BillAddModify()
self.calljson('/diacamma.invoice/billAddModify', {'bill_type': 1, 'third': family_third.id, 'date': '2015-04-01', 'SAVE': 'YES'}, False)
self.assert_observer('core.acknowledge', 'diacamma.invoice', 'billAddModify')
self.factory.xfer = DetailAddModify()
self.calljson('/diacamma.invoice/detailAddModify', {'article': 0, 'designation': 'article 0', 'price': '100.00', 'quantity': 1, 'SAVE': 'YES', 'bill': 1}, False)
self.assert_observer('core.acknowledge', 'diacamma.invoice', 'detailAddModify')
self.factory.xfer = AdherentFamilySelect()
self.calljson('/diacamma.member/adherentFamilySelect', {'adherent': 2, 'legal_entity': 7}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentFamilySelect')
self.factory.xfer = AdherentFamilySelect()
self.calljson('/diacamma.member/adherentFamilySelect', {'adherent': 5, 'legal_entity': 7}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentFamilySelect')
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 1)
self.assert_json_equal('', 'bill/@0/status', 0)
self.assert_json_equal('', 'bill/@0/third', "LES DALTONS")
self.assert_json_equal('', 'bill/@0/total', 100.00)
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify', {'SAVE': 'YES', 'status': 2, 'adherent': 2, 'dateref': '2014-10-01', 'subscriptiontype': 1, 'season': 10, 'team': 2, 'activity': 1, 'value': 'abc123'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 2)
self.assert_json_equal('', 'bill/@0/status', 0)
self.assert_json_equal('', 'bill/@0/third', "LES DALTONS")
self.assert_json_equal('', 'bill/@0/bill_type', 1)
self.assert_json_equal('', 'bill/@0/total', 100.00)
self.assert_json_equal('', 'bill/@1/status', 0)
self.assert_json_equal('', 'bill/@1/third', "LES DALTONS")
self.assert_json_equal('', 'bill/@1/bill_type', 1)
self.assert_json_equal('', 'bill/@1/total', 76.44)
self.assert_json_equal('', 'bill/@1/comment', "{[b]}cotisation{[/b]}")
self.factory.xfer = BillShow()
self.calljson('/diacamma.invoice/billShow', {'bill': 2}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billShow')
self.assert_json_equal('LINK', 'third', "LES DALTONS")
self.assert_count_equal('detail', 2)
self.assert_json_equal('', 'detail/@0/article', 'ABC1')
self.assert_json_equal('', 'detail/@0/designation', "Article 01{[br/]}Cotisation de 'Dalton Avrel'")
self.assert_json_equal('', 'detail/@0/price', 12.34)
self.assert_json_equal('', 'detail/@0/quantity', '1.000')
self.assert_json_equal('', 'detail/@0/total', 12.34)
self.assert_json_equal('', 'detail/@1/article', 'ABC5')
self.assert_json_equal('', 'detail/@1/designation', "Article 05{[br/]}Cotisation de 'Dalton Avrel'")
self.assert_json_equal('', 'detail/@1/price', 64.10)
self.assert_json_equal('', 'detail/@1/quantity', '1.00')
self.assert_json_equal('', 'detail/@1/total', 64.10)
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify', {'SAVE': 'YES', 'status': 2, 'adherent': 5, 'dateref': '2014-10-01', 'subscriptiontype': 1, 'season': 10, 'team': 1, 'activity': 1, 'value': 'uvw98'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 2)
self.assert_json_equal('', 'bill/@0/status', 0)
self.assert_json_equal('', 'bill/@0/third', "LES DALTONS")
self.assert_json_equal('', 'bill/@0/bill_type', 1)
self.assert_json_equal('', 'bill/@0/total', 100.00)
self.assert_json_equal('', 'bill/@1/status', 0)
self.assert_json_equal('', 'bill/@1/third', "LES DALTONS")
self.assert_json_equal('', 'bill/@1/bill_type', 1)
self.assert_json_equal('', 'bill/@1/total', 152.88)
self.assert_json_equal('', 'bill/@1/comment', "{[b]}cotisation{[/b]}")
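# Changing the subscription type regenerates the family bill: a new building
# bill (status 0) is created, the original moves to status 2, and the new one
# links back to it via the parentbill action.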
def test_change_cotation(self):
self.prep_subscription_family()
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify', {'SAVE': 'YES', 'subscriptiontype': 5, 'subscription': 1}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'status_filter': -2}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 2)
self.assert_json_equal('', 'bill/@0/id', 2)
self.assert_json_equal('', 'bill/@0/status', 0)
self.assert_json_equal('', 'bill/@0/third', "LES DALTONS")
self.assert_json_equal('', 'bill/@0/bill_type', 0)
self.assert_json_equal('', 'bill/@0/total', 12.34 + 76.44)
self.assert_json_equal('', 'bill/@0/comment', "{[b]}cotisation{[/b]}")
self.assert_json_equal('', 'bill/@1/id', 1)
self.assert_json_equal('', 'bill/@1/status', 2)
self.assert_json_equal('', 'bill/@1/third', "LES DALTONS")
self.assert_json_equal('', 'bill/@1/bill_type', 0)
self.assert_json_equal('', 'bill/@1/total', 76.44 + 76.44)
self.assert_json_equal('', 'bill/@1/comment', "{[b]}cotisation{[/b]}")
self.factory.xfer = BillShow()
self.calljson('/diacamma.invoice/billShow', {'bill': 2}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billShow')
self.assert_action_equal('GET', self.get_json_path('#parentbill/action'), ("origine", "diacamma.invoice/images/origin.png",
"diacamma.invoice", "billShow", 0, 1, 1, {'bill': 1}))
def test_cancel_cotation(self):
self.prep_subscription_family()
self.factory.xfer = SubscriptionTransition()
self.calljson('/diacamma.member/subscriptionTransition', {'CONFIRME': 'YES', 'subscription': 1, 'TRANSITION': 'cancel'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionTransition')
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'status_filter': -2}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 2)
self.assert_json_equal('', 'bill/@0/id', 2)
self.assert_json_equal('', 'bill/@0/status', 0)
self.assert_json_equal('', 'bill/@0/third', "LES DALTONS")
self.assert_json_equal('', 'bill/@0/bill_type', 0)
self.assert_json_equal('', 'bill/@0/total', 76.44)
self.assert_json_equal('', 'bill/@0/comment', "{[b]}cotisation{[/b]}")
self.assert_json_equal('', 'bill/@1/id', 1)
self.assert_json_equal('', 'bill/@1/status', 2)
self.assert_json_equal('', 'bill/@1/third', "LES DALTONS")
self.assert_json_equal('', 'bill/@1/bill_type', 0)
self.assert_json_equal('', 'bill/@1/total', 76.44 + 76.44)
self.assert_json_equal('', 'bill/@1/comment', "{[b]}cotisation{[/b]}")
self.factory.xfer = BillShow()
self.calljson('/diacamma.invoice/billShow', {'bill': 2}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billShow')
self.assert_action_equal('GET', self.get_json_path('#parentbill/action'), ("origine", "diacamma.invoice/images/origin.png",
"diacamma.invoice", "billShow", 0, 1, 1, {'bill': 1}))
def test_delete_cotation(self):
self.prep_subscription_family()
self.factory.xfer = SubscriptionDel()
self.calljson('/diacamma.member/subscriptionDel', {'CONFIRME': 'YES', 'subscription': 1}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionDel')
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'status_filter': -2}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 2)
self.assert_json_equal('', 'bill/@0/id', 2)
self.assert_json_equal('', 'bill/@0/status', 0)
self.assert_json_equal('', 'bill/@0/third', "LES DALTONS")
self.assert_json_equal('', 'bill/@0/bill_type', 0)
self.assert_json_equal('', 'bill/@0/total', 76.44)
self.assert_json_equal('', 'bill/@0/comment', "{[b]}cotisation{[/b]}")
self.assert_json_equal('', 'bill/@1/id', 1)
self.assert_json_equal('', 'bill/@1/status', 2)
self.assert_json_equal('', 'bill/@1/third', "LES DALTONS")
self.assert_json_equal('', 'bill/@1/bill_type', 0)
self.assert_json_equal('', 'bill/@1/total', 76.44 + 76.44)
self.assert_json_equal('', 'bill/@1/comment', "{[b]}cotisation{[/b]}")
self.factory.xfer = BillShow()
self.calljson('/diacamma.invoice/billShow', {'bill': 2}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billShow')
self.assert_action_equal('GET', self.get_json_path('#parentbill/action'), ("origine", "diacamma.invoice/images/origin.png",
"diacamma.invoice", "billShow", 0, 1, 1, {'bill': 1}))
def test_command(self):
Season.objects.get(id=16).set_has_actif()
self.add_subscriptions(year=2014, season_id=15)
self.add_family()
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 5)
self.assert_json_equal('', 'bill/@0/bill_type', 1)
self.assert_json_equal('', 'bill/@1/bill_type', 1)
self.assert_json_equal('', 'bill/@2/bill_type', 1)
self.assert_json_equal('', 'bill/@3/bill_type', 1)
self.assert_json_equal('', 'bill/@4/bill_type', 1)
self.factory.xfer = AdherentFamilySelect()
self.calljson('/diacamma.member/adherentFamilySelect', {'adherent': 2, 'legal_entity': 7}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentFamilySelect')
self.factory.xfer = AdherentFamilySelect()
self.calljson('/diacamma.member/adherentFamilySelect', {'adherent': 5, 'legal_entity': 7}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentFamilySelect')
self.factory.xfer = AdherentRenewList()
self.calljson('/diacamma.member/adherentRenewList', {'dateref': '2015-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentRenewList')
self.assert_count_equal('adherent', 3)
self.assert_json_equal('', 'adherent/@0/id', "2")
self.assert_json_equal('', 'adherent/@1/id', "5")
self.assert_json_equal('', 'adherent/@2/id', "6")
self.factory.xfer = AdherentCommand()
self.calljson('/diacamma.member/adherentCommand', {'dateref': '2015-10-01'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentCommand')
self.assert_count_equal('AdhCmd', 0)
self.factory.xfer = AdherentCommand()
self.calljson('/diacamma.member/adherentCommand', {'dateref': '2015-10-01', 'adherent': '2;5'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentCommand')
self.assert_count_equal('AdhCmd', 2)
self.assert_json_equal('', 'AdhCmd/@0/adherent', "Dalton Avrel")
self.assert_json_equal('', 'AdhCmd/@0/type', "Annually [76,44 €]")
self.assert_json_equal('', 'AdhCmd/@0/reduce', 0.00)
self.assert_json_equal('', 'AdhCmd/@1/adherent', "Dalton Joe")
self.assert_json_equal('', 'AdhCmd/@1/type', "Calendar [76,44 €]")
self.assert_json_equal('', 'AdhCmd/@1/reduce', 0.00)
cmd_file = self.json_context["CMD_FILE"]
self.assertEqual(cmd_file[-23:], '/tmp/list-anonymous.cmd')
self.assertTrue(isfile(cmd_file))
self.factory.xfer = AdherentCommand()
self.calljson('/diacamma.member/adherentCommand', {'dateref': '2015-10-01', 'CMD_FILE': cmd_file}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentCommand')
self.assert_count_equal('AdhCmd', 2)
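# Start a local SMTP stub to capture the notification email and its PDF.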
configSMTP('localhost', 3225)
change_ourdetail()
server = TestReceiver()
server.start(3225)
try:
self.assertEqual(0, server.count())
self.factory.xfer = AdherentCommand()
self.calljson('/diacamma.member/adherentCommand', {'dateref': '2015-10-01', 'SAVE': 'YES', 'CMD_FILE': cmd_file, 'send_email': True}, False)
self.assert_observer('core.dialogbox', 'diacamma.member', 'adherentCommand')
self.assertEqual(1, server.count())
self.assertEqual('mr-sylvestre@worldcompany.com', server.get(0)[1])
self.assertEqual(['dalton@worldcompany.com', 'Avrel.Dalton@worldcompany.com', 'Joe.Dalton@worldcompany.com', 'mr-sylvestre@worldcompany.com'], server.get(0)[2])
msg, msg_txt, msg_file = server.check_first_message('Nouvelle cotisation', 3, {'To': 'dalton@worldcompany.com'})
self.assertEqual('text/plain', msg_txt.get_content_type())
self.assertEqual('text/html', msg.get_content_type())
self.assertEqual('base64', msg.get('Content-Transfer-Encoding', ''))
message = decode_b64(msg.get_payload())
self.assertTrue('Bienvenu' in message, message)
self.assertTrue('devis_A-1_LES DALTONS.pdf' in msg_file.get('Content-Type', ''), msg_file.get('Content-Type', ''))
self.save_pdf(base64_content=msg_file.get_payload())
finally:
server.stop()
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 6)
self.assert_json_equal('', 'bill/@0/status', 1)
self.assert_json_equal('', 'bill/@0/num_txt', "A-1")
self.assert_json_equal('', 'bill/@0/third', "LES DALTONS")
self.assert_json_equal('', 'bill/@0/bill_type', 0)
self.assert_json_equal('', 'bill/@0/total', 152.88)
self.assert_json_equal('', 'bill/@0/comment', "{[b]}cotisation{[/b]}")
self.assert_json_equal('', 'bill/@1/bill_type', 1)
self.assert_json_equal('', 'bill/@2/bill_type', 1)
self.assert_json_equal('', 'bill/@3/bill_type', 1)
self.assert_json_equal('', 'bill/@4/bill_type', 1)
self.assert_json_equal('', 'bill/@5/bill_type', 1)
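# Merging two individuals: the surviving contact inherits the family link and
# his subscription bill is reassigned to the family third.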
def test_merge(self):
default_adherents()
default_subscription()
self.add_family()
self.factory.xfer = AdherentFamilySelect()
self.calljson('/diacamma.member/adherentFamilySelect', {'adherent': 2, 'legal_entity': 7}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentFamilySelect')
self.factory.xfer = AdherentFamilySelect()
self.calljson('/diacamma.member/adherentFamilySelect', {'adherent': 5, 'legal_entity': 7}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentFamilySelect')
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify', {'SAVE': 'YES', 'status': 2, 'adherent': 2, 'dateref': '2014-10-01', 'subscriptiontype': 1, 'season': 10, 'team': 1, 'activity': 1}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify', {'SAVE': 'YES', 'status': 2, 'adherent': 3, 'dateref': '2014-10-01', 'subscriptiontype': 1, 'season': 10, 'team': 2, 'activity': 1}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify', {'SAVE': 'YES', 'status': 2, 'adherent': 4, 'dateref': '2014-10-01', 'subscriptiontype': 1, 'season': 10, 'team': 3, 'activity': 2}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify', {'SAVE': 'YES', 'status': 2, 'adherent': 5, 'dateref': '2014-10-01', 'subscriptiontype': 1, 'season': 10, 'team': 1, 'activity': 2}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify', {'SAVE': 'YES', 'status': 2, 'adherent': 6, 'dateref': '2014-10-01', 'subscriptiontype': 1, 'season': 10, 'team': 2, 'activity': 1}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 5)
self.assert_json_equal('', 'adherent/@0/id', 2)
self.assert_json_equal('', 'adherent/@0/firstname', 'Avrel')
self.assert_json_equal('', 'adherent/@0/lastname', 'Dalton')
self.assert_json_equal('', 'adherent/@0/family', 'LES DALTONS')
self.assert_json_equal('', 'adherent/@1/id', 4)
self.assert_json_equal('', 'adherent/@1/firstname', 'Jack')
self.assert_json_equal('', 'adherent/@1/lastname', 'Dalton')
self.assert_json_equal('', 'adherent/@1/family', None)
self.assert_json_equal('', 'adherent/@2/id', 5)
self.assert_json_equal('', 'adherent/@2/firstname', 'Joe')
self.assert_json_equal('', 'adherent/@2/lastname', 'Dalton')
self.assert_json_equal('', 'adherent/@2/family', 'LES DALTONS')
self.assert_json_equal('', 'adherent/@3/id', 3)
self.assert_json_equal('', 'adherent/@3/firstname', 'William')
self.assert_json_equal('', 'adherent/@3/lastname', 'Dalton')
self.assert_json_equal('', 'adherent/@3/family', None)
self.assert_json_equal('', 'adherent/@4/id', 6)
self.assert_json_equal('', 'adherent/@4/firstname', 'Lucky')
self.assert_json_equal('', 'adherent/@4/lastname', 'Luke')
self.assert_json_equal('', 'adherent/@4/family', None)
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 4)
self.assert_json_equal('', 'bill/@0/third', "LES DALTONS")
self.assert_json_equal('', 'bill/@1/third', "Dalton William")
self.assert_json_equal('', 'bill/@2/third', "Dalton Jack")
self.assert_json_equal('', 'bill/@3/third', "Luke Lucky")
self.factory.xfer = ObjectMerge()
self.calljson('/CORE/objectMerge',
{'modelname': 'contacts.Individual', 'field_id': 'individual', 'individual': '2;3', 'CONFIRME': 'YES', 'mrg_object': '3'}, False)
self.assert_observer('core.acknowledge', 'CORE', 'objectMerge')
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 4)
self.assert_json_equal('', 'adherent/@0/id', 4)
self.assert_json_equal('', 'adherent/@0/firstname', 'Jack')
self.assert_json_equal('', 'adherent/@0/lastname', 'Dalton')
self.assert_json_equal('', 'adherent/@0/family', None)
self.assert_json_equal('', 'adherent/@1/id', 5)
self.assert_json_equal('', 'adherent/@1/firstname', 'Joe')
self.assert_json_equal('', 'adherent/@1/lastname', 'Dalton')
self.assert_json_equal('', 'adherent/@1/family', 'LES DALTONS')
self.assert_json_equal('', 'adherent/@2/id', 3)
self.assert_json_equal('', 'adherent/@2/firstname', 'William')
self.assert_json_equal('', 'adherent/@2/lastname', 'Dalton')
self.assert_json_equal('', 'adherent/@2/family', 'LES DALTONS')
self.assert_json_equal('', 'adherent/@3/id', 6)
self.assert_json_equal('', 'adherent/@3/firstname', 'Lucky')
self.assert_json_equal('', 'adherent/@3/lastname', 'Luke')
self.assert_json_equal('', 'adherent/@3/family', None)
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 4)
self.assert_json_equal('', 'bill/@0/third', "LES DALTONS")
self.assert_json_equal('', 'bill/@1/third', "LES DALTONS")
self.assert_json_equal('', 'bill/@2/third', "Dalton Jack")
self.assert_json_equal('', 'bill/@3/third', "Luke Lucky")
def test_import(self):
csv_content = """"nom","prenom","sexe","famille","adresse","codePostal","ville","fixe","portable","mail","Type"
"Dalton","Avrel","Homme","LES DALTONS","rue de la liberté","99673","TOUINTOUIN","0502851031","0439423854","avrel.dalton@worldcompany.com","Annually"
"Dalton","Joe","Homme","Dalton","rue de la liberté","99673","TOUINTOUIN","0502851031","0439423854","joe.dalton@worldcompany.com","Annually"
"Dalton","Ma'a","Femme","LES DALTONS","rue de la liberté","99673","TOUINTOUIN","0502851031","0439423854","maa.dalton@worldcompany.com","Annually"
"Luke","Lucky","Homme","Luke","rue de la liberté","99673","TOUINTOUIN","0502851031","0439423854","lucky.luke@worldcompany.com","Annually"
"GOC","Marie","Femme","","33 impasse du 11 novembre","99150","BIDON SUR MER","0632763718","0310231012","marie762@free.fr","Annually"
"UHADIK","Jeanne","Femme","UHADIK-FEPIZIBU","1 impasse de l"Oisan","99410","VIENVITEVOIR","0699821944","0873988470","marie439@orange.fr","Annually"
"FEPIZIBU","Benjamin","Homme","UHADIK-FEPIZIBU","30 cours de la Chartreuse","99247","BELLEVUE","0262009068","0754416670","benjamin475@free.fr","Annually"
"""
default_adherents()
default_subscription()
self.add_family()
self.factory.xfer = AdherentFamilySelect()
self.calljson('/diacamma.member/adherentFamilySelect', {'adherent': 2, 'legal_entity': 7}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentFamilySelect')
self.factory.xfer = AdherentFamilySelect()
self.calljson('/diacamma.member/adherentFamilySelect', {'adherent': 5, 'legal_entity': 7}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'adherentFamilySelect')
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2010-01-15'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 0)
self.assertEqual(len(self.json_actions), 3)
self.assertEqual(1, LegalEntity.objects.filter(structure_type_id=3).count())
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'bill_type': 1}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 0)
self.factory.xfer = ContactImport()
self.calljson('/lucterios.contacts/contactImport', {'step': 1, 'modelname': 'member.Adherent', 'quotechar': '"',
'delimiter': ',', 'encoding': 'utf-8', 'dateformat': '%d/%m/%Y', 'csvcontent': StringIO(csv_content)}, False)
self.assert_observer('core.custom', 'lucterios.contacts', 'contactImport')
self.assert_count_equal('', 6 + 13)
self.assert_select_equal('fld_family', 12)
self.assert_count_equal('CSV', 7)
self.factory.xfer = ContactImport()
self.calljson('/lucterios.contacts/contactImport', {'step': 3, 'modelname': 'member.Adherent', 'quotechar': '"', 'delimiter': ',',
'encoding': 'utf-8', 'dateformat': '%d/%m/%Y', 'csvcontent0': csv_content,
"fld_lastname": "nom", "fld_firstname": "prenom", "fld_address": "adresse",
"fld_postal_code": "codePostal", "fld_city": "ville", "fld_email": "mail",
'fld_subscriptiontype': 'Type', 'fld_family': 'famille', }, False)
self.assert_observer('core.custom', 'lucterios.contacts', 'contactImport')
self.assert_count_equal('', 3)
self.assert_json_equal('LABELFORM', 'result', "7 éléments ont été importés")
self.assert_json_equal('LABELFORM', 'import_error', [])
self.assertEqual(len(self.json_actions), 1)
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2010-01-15'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 7)
self.assert_json_equal('', 'adherent/@0/firstname', "Avrel")
self.assert_json_equal('', 'adherent/@0/family', "LES DALTONS")
self.assert_json_equal('', 'adherent/@1/firstname', "Joe")
self.assert_json_equal('', 'adherent/@1/family', "Dalton")
self.assert_json_equal('', 'adherent/@2/firstname', "Ma'a")
self.assert_json_equal('', 'adherent/@2/family', "LES DALTONS")
self.assert_json_equal('', 'adherent/@3/firstname', "Benjamin")
self.assert_json_equal('', 'adherent/@3/family', "UHADIK-FEPIZIBU")
self.assert_json_equal('', 'adherent/@4/firstname', "Marie")
self.assert_json_equal('', 'adherent/@4/family', None)
self.assert_json_equal('', 'adherent/@5/firstname', "Lucky")
self.assert_json_equal('', 'adherent/@5/family', "Luke")
self.assert_json_equal('', 'adherent/@6/firstname', "Jeanne")
self.assert_json_equal('', 'adherent/@6/family', "UHADIK-FEPIZIBU")
self.assertEqual(4, LegalEntity.objects.filter(structure_type_id=3).count())
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'bill_type': 1}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 5)
self.assert_json_equal('', 'bill/@0/third', "LES DALTONS")
self.assert_json_equal('', 'bill/@0/total', 152.88) # Subscription: art1:12.34 + art5:64.10 x 2
self.assert_json_equal('', 'bill/@1/third', "Dalton")
self.assert_json_equal('', 'bill/@1/total', 76.44) # Subscription: art1:12.34 + art5:64.10
self.assert_json_equal('', 'bill/@2/third', "Luke")
self.assert_json_equal('', 'bill/@2/total', 76.44) # Subscription: art1:12.34 + art5:64.10
self.assert_json_equal('', 'bill/@3/third', "GOC Marie")
self.assert_json_equal('', 'bill/@3/total', 76.44) # Subscription: art1:12.34 + art5:64.10
self.assert_json_equal('', 'bill/@4/third', "UHADIK-FEPIZIBU")
self.assert_json_equal('', 'bill/@4/total', 152.88) # Subscription: art1:12.34 + art5:64.10 x 2
self.factory.xfer = AdherentContactList()
self.calljson('/diacamma.member/adherentContactList', {'dateref': '2010-01-15'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentContactList')
self.assert_count_equal('abstractcontact', 5)
self.assert_json_equal('', 'abstractcontact/@0/ident', "Dalton")
self.assert_json_equal('', 'abstractcontact/@0/adherents', ["Dalton Joe"])
self.assert_json_equal('', 'abstractcontact/@1/ident', "GOC Marie")
self.assert_json_equal('', 'abstractcontact/@1/adherents', ["GOC Marie"])
self.assert_json_equal('', 'abstractcontact/@2/ident', "LES DALTONS")
self.assert_json_equal('', 'abstractcontact/@2/adherents', ["Dalton Avrel", "Dalton Ma'a"])
self.assert_json_equal('', 'abstractcontact/@3/ident', "Luke")
self.assert_json_equal('', 'abstractcontact/@3/adherents', ["Luke Lucky"])
self.assert_json_equal('', 'abstractcontact/@4/ident', "UHADIK-FEPIZIBU")
self.assert_json_equal('', 'abstractcontact/@4/adherents', ["FEPIZIBU Benjamin", "UHADIK Jeanne"])
def test_import_with_prestation(self):
csv_content = """'nom','prenom','famille','sexe','adresse','codePostal','ville','fixe','portable','mail','DateNaissance','LieuNaissance','Type','Cours'
'Dalton','Avrel','Dalton','Homme','rue de la liberté','99673','TOUINTOUIN','0502851031','0439423854','avrel.dalton@worldcompany.com','10/02/2000','BIDON SUR MER','Annually','Presta 1'
'Dalton','Joe','Dalton','Homme','rue de la liberté','99673','TOUINTOUIN','0502851031','0439423854','joe.dalton@worldcompany.com','18/05/1989','BIDON SUR MER','Annually','Presta 2,Presta 3'
'Luke','Lucky','Luke','Homme','rue de la liberté','99673','TOUINTOUIN','0502851031','0439423854','lucky.luke@worldcompany.com','04/06/1979','BIDON SUR MER','Annually','Presta 1;Presta 3'
'GOC','Marie','','Femme','33 impasse du 11 novembre','99150','BIDON SUR MER','0632763718','0310231012','marie762@free.fr','16/05/1998','KIKIMDILUI','Annually','Presta 1,Presta 2;Presta 3'
"""
# Avrel team3 [activity2]
# Joe team1 [activity1] team2 [activity2]
# Lucky team1 [activity1] team3 [activity2]
# Marie team1 [activity1] team2 [activity2]
Parameter.change_value("member-fields", "firstname;lastname;tel1;tel2;email;family;license")
set_parameters(["team", "licence"])
default_adherents()
default_subscription()
default_prestation()
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2010-01-15'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 0)
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'bill_type': 0}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 0)
self.factory.xfer = ContactImport()
self.calljson('/lucterios.contacts/contactImport', {'step': 1, 'modelname': 'member.Adherent', 'quotechar': "'",
'delimiter': ',', 'encoding': 'utf-8', 'dateformat': '%d/%m/%Y', 'csvcontent': StringIO(csv_content)}, False)
self.assert_observer('core.custom', 'lucterios.contacts', 'contactImport')
self.assert_count_equal('', 6 + 16)
self.assert_select_equal('fld_family', 15)
self.assert_select_equal('fld_prestations', 15)
self.assert_count_equal('CSV', 4)
self.factory.xfer = ContactImport()
self.calljson('/lucterios.contacts/contactImport', {'step': 3, 'modelname': 'member.Adherent', 'quotechar': "'", 'delimiter': ',',
'encoding': 'utf-8', 'dateformat': '%d/%m/%Y', 'csvcontent0': csv_content,
"fld_lastname": "nom", "fld_firstname": "prenom", "fld_address": "adresse",
"fld_family": "famille", "fld_postal_code": "codePostal", "fld_city": "ville", "fld_email": "mail",
'fld_subscriptiontype': 'Type', 'fld_prestations': 'Cours', }, False)
self.assert_observer('core.custom', 'lucterios.contacts', 'contactImport')
self.assert_count_equal('', 3)
self.assert_json_equal('LABELFORM', 'result', "4 éléments ont été importés")
self.assert_json_equal('LABELFORM', 'import_error', [])
self.assertEqual(len(self.json_actions), 1)
self.factory.xfer = AdherentActiveList()
self.calljson('/diacamma.member/adherentActiveList', {'dateref': '2010-01-15'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentActiveList')
self.assert_count_equal('adherent', 4)
self.assert_json_equal('', 'adherent/@0/firstname', "Avrel")
self.assert_json_equal('', 'adherent/@0/family', "Dalton")
self.assert_json_equal('', 'adherent/@0/license', ["team3"])
self.assert_json_equal('', 'adherent/@1/firstname', "Joe")
self.assert_json_equal('', 'adherent/@1/family', "Dalton")
self.assert_json_equal('', 'adherent/@1/license', ["team1", "team2"])
self.assert_json_equal('', 'adherent/@2/firstname', "Marie")
self.assert_json_equal('', 'adherent/@2/family', None)
self.assert_json_equal('', 'adherent/@2/license', ["team1", "team2", "team3"])
self.assert_json_equal('', 'adherent/@3/firstname', "Lucky")
self.assert_json_equal('', 'adherent/@3/family', "Luke")
self.assert_json_equal('', 'adherent/@3/license', ["team1", "team3"])
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'bill_type': 0}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 3)
self.assert_json_equal('', 'bill/@0/third', "Dalton")
self.assert_json_equal('', 'bill/@0/total', 546.97) # Subscription: art1:12.34 + art5:64.10 / Prestations: art1:12.34 + Subscription: art1:12.34 + art5:64.10 / Prestations: art2:56.78 + art3:324.97
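# check: (12.34 + 64.10) + 12.34 for Avrel plus (12.34 + 64.10) + 56.78 + 324.97 for Joe = 546.97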
self.assert_json_equal('', 'bill/@1/third', "Luke")
self.assert_json_equal('', 'bill/@1/total', 413.75) # Subscription: art1:12.34 + art5:64.10 / Prestations: art1:12.34 + art3:324.97
self.assert_json_equal('', 'bill/@2/third', "GOC Marie")
self.assert_json_equal('', 'bill/@2/total', 470.53) # Subscription: art1:12.34 + art5:64.10 / Prestations: art1:12.34 + art2:56.78 + art3:324.97
def test_with_prestation_valid_subscription(self):
self.prep_family()
set_parameters(["team"])
default_prestation()
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify', {'SAVE': 'YES', 'adherent': 2, 'status': 1, 'dateref': '2014-10-01',
'subscriptiontype': 1, 'season': 10, 'prestations': '1;2'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify', {'SAVE': 'YES', 'adherent': 5, 'status': 1, 'dateref': '2014-10-01',
'subscriptiontype': 1, 'season': 10, 'prestations': '2'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'bill_type': -1, 'status_filter': -2}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 1)
self.assert_json_equal('', 'bill/@0/id', 1)
self.assert_json_equal('', 'bill/@0/third', "LES DALTONS")
self.assert_json_equal('', 'bill/@0/bill_type', 0)
self.assert_json_equal('', 'bill/@0/status', 0)
self.assert_json_equal('', 'bill/@0/total', 278.78) # Subscription: art1:12.34 + art5:64.10 / Prestations: art1:12.34 + art2:56.78 + Subscription: art1:12.34 + art5:64.10 / Prestations: art2:56.78
self.factory.xfer = BillTransition()
self.calljson('/diacamma.invoice/billTransition', {'CONFIRME': 'YES', 'bill': 1, 'withpayoff': False, 'TRANSITION': 'valid'}, False)
self.assert_observer('core.acknowledge', 'diacamma.invoice', 'billTransition')
self.factory.xfer = SubscriptionShow()
self.calljson('/diacamma.member/subscriptionShow', {'adherent': 2, 'dateref': '2014-10-01', 'subscription': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'subscriptionShow')
self.assert_json_equal('LABELFORM', 'status', 1)
self.assert_json_equal('LABELFORM', 'prestations', ['team2', 'team3'])
self.factory.xfer = SubscriptionShow()
self.calljson('/diacamma.member/subscriptionShow', {'adherent': 5, 'dateref': '2014-10-01', 'subscription': 2}, False)
self.assert_observer('core.custom', 'diacamma.member', 'subscriptionShow')
self.assert_json_equal('LABELFORM', 'status', 1)
self.assert_json_equal('LABELFORM', 'prestations', ['team2'])
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'bill_type': -1, 'status_filter': -2}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 1)
self.assert_json_equal('', 'bill/@0/third', "LES DALTONS")
self.assert_json_equal('', 'bill/@0/bill_type', 0)
self.assert_json_equal('', 'bill/@0/status', 1)
self.assert_json_equal('', 'bill/@0/total', 278.78) # Subscription: art1:12.34 + art5:64.10 / Prestations: art1:12.34 + art2:56.78 + Subscription: art1:12.34 + art5:64.10 / Prestations: art2:56.78
self.factory.xfer = SubscriptionTransition()
self.calljson('/diacamma.member/subscriptionTransition', {'CONFIRME': 'YES', 'subscription': 1, 'TRANSITION': 'validate'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionTransition')
self.factory.xfer = SubscriptionShow()
self.calljson('/diacamma.member/subscriptionShow', {'adherent': 2, 'dateref': '2014-10-01', 'subscription': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'subscriptionShow')
self.assert_json_equal('LABELFORM', 'status', 2)
self.assert_count_equal('license', 2)
self.factory.xfer = SubscriptionShow()
self.calljson('/diacamma.member/subscriptionShow', {'adherent': 5, 'dateref': '2014-10-01', 'subscription': 2}, False)
self.assert_observer('core.custom', 'diacamma.member', 'subscriptionShow')
self.assert_json_equal('LABELFORM', 'status', 2)
self.assert_count_equal('license', 1)
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'bill_type': -1, 'status_filter': -2}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 2)
self.assert_json_equal('', 'bill/@0/id', 2)
self.assert_json_equal('', 'bill/@0/third', "LES DALTONS")
self.assert_json_equal('', 'bill/@0/bill_type', 1)
self.assert_json_equal('', 'bill/@0/status', 0)
self.assert_json_equal('', 'bill/@0/total', 278.78) # Subscription: art1:12.34 + art5:64.10 / Prestations: art1:12.34 + art2:56.78 + Subscription: art1:12.34 + art5:64.10 / Prestations: art2:56.78
self.assert_json_equal('', 'bill/@1/id', 1)
self.assert_json_equal('', 'bill/@1/third', "LES DALTONS")
self.assert_json_equal('', 'bill/@1/bill_type', 0)
self.assert_json_equal('', 'bill/@1/status', 3)
self.assert_json_equal('', 'bill/@1/total', 278.78) # Subscription: art1:12.34 + art5:64.10 / Prestations: art1:12.34 + art2:56.78 + Subscription: art1:12.34 + art5:64.10 / Prestations: art2:56.78
def test_with_prestation_convert_bill(self):
self.prep_family()
set_parameters(["team"])
default_prestation()
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify', {'SAVE': 'YES', 'adherent': 2, 'status': 1, 'dateref': '2014-10-01',
'subscriptiontype': 1, 'season': 10, 'prestations': '1;2'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = SubscriptionAddModify()
self.calljson('/diacamma.member/subscriptionAddModify', {'SAVE': 'YES', 'adherent': 5, 'status': 1, 'dateref': '2014-10-01',
'subscriptiontype': 1, 'season': 10, 'prestations': '2'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'subscriptionAddModify')
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'bill_type': -1, 'status_filter': -2}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 1)
self.assert_json_equal('', 'bill/@0/id', 1)
self.assert_json_equal('', 'bill/@0/third', "LES DALTONS")
self.assert_json_equal('', 'bill/@0/bill_type', 0)
self.assert_json_equal('', 'bill/@0/status', 0)
self.assert_json_equal('', 'bill/@0/total', 278.78) # Subscription: art1:12.34 + art5:64.10 / Prestations: art1:12.34 + art2:56.78 + Subscription: art1:12.34 + art5:64.10 / Prestations: art2:56.78
self.factory.xfer = BillTransition()
self.calljson('/diacamma.invoice/billTransition', {'CONFIRME': 'YES', 'bill': 1, 'withpayoff': False, 'TRANSITION': 'valid'}, False)
self.assert_observer('core.acknowledge', 'diacamma.invoice', 'billTransition')
self.factory.xfer = SubscriptionShow()
self.calljson('/diacamma.member/subscriptionShow', {'adherent': 2, 'dateref': '2014-10-01', 'subscription': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'subscriptionShow')
self.assert_json_equal('LABELFORM', 'status', 1)
self.assert_json_equal('LABELFORM', 'prestations', ['team2', 'team3'])
self.factory.xfer = SubscriptionShow()
self.calljson('/diacamma.member/subscriptionShow', {'adherent': 5, 'dateref': '2014-10-01', 'subscription': 2}, False)
self.assert_observer('core.custom', 'diacamma.member', 'subscriptionShow')
self.assert_json_equal('LABELFORM', 'status', 1)
self.assert_json_equal('LABELFORM', 'prestations', ['team2'])
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'bill_type': -1, 'status_filter': -2}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 1)
self.assert_json_equal('', 'bill/@0/third', "LES DALTONS")
self.assert_json_equal('', 'bill/@0/bill_type', 0)
self.assert_json_equal('', 'bill/@0/status', 1)
self.assert_json_equal('', 'bill/@0/total', 278.78) # Subscription: art1:12.34 + art5:64.10 / Prestations: art1:12.34 + art2:56.78 + Subscription: art1:12.34 + art5:64.10 / Prestations: art2:56.78
self.factory.xfer = BillFromQuotation()
self.calljson('/diacamma.invoice/billFromQuotation', {'CONFIRME': 'YES', 'bill': 1}, False)
self.assert_observer('core.acknowledge', 'diacamma.invoice', 'billFromQuotation')
self.factory.xfer = SubscriptionShow()
self.calljson('/diacamma.member/subscriptionShow', {'adherent': 2, 'dateref': '2014-10-01', 'subscription': 1}, False)
self.assert_observer('core.custom', 'diacamma.member', 'subscriptionShow')
self.assert_json_equal('LABELFORM', 'status', 2)
self.assert_count_equal('license', 2)
self.factory.xfer = SubscriptionShow()
self.calljson('/diacamma.member/subscriptionShow', {'adherent': 5, 'dateref': '2014-10-01', 'subscription': 2}, False)
self.assert_observer('core.custom', 'diacamma.member', 'subscriptionShow')
self.assert_json_equal('LABELFORM', 'status', 2)
self.assert_count_equal('license', 1)
self.factory.xfer = BillList()
self.calljson('/diacamma.invoice/billList', {'bill_type': -1, 'status_filter': -2}, False)
self.assert_observer('core.custom', 'diacamma.invoice', 'billList')
self.assert_count_equal('bill', 2)
self.assert_json_equal('', 'bill/@0/id', 2)
self.assert_json_equal('', 'bill/@0/third', "LES DALTONS")
self.assert_json_equal('', 'bill/@0/bill_type', 1)
self.assert_json_equal('', 'bill/@0/status', 0)
self.assert_json_equal('', 'bill/@0/total', 278.78) # Subscription: art1:12.34 + art5:64.10 / Prestations: art1:12.34 + art2:56.78 + Subscription: art1:12.34 + art5:64.10 / Prestations: art2:56.78
self.assert_json_equal('', 'bill/@1/id', 1)
self.assert_json_equal('', 'bill/@1/third', "LES DALTONS")
self.assert_json_equal('', 'bill/@1/bill_type', 0)
self.assert_json_equal('', 'bill/@1/status', 3)
self.assert_json_equal('', 'bill/@1/total', 278.78) # Subscription: art1:12.34 + art5:64.10 / Prestations: art1:12.34 + art2:56.78 + Subscription: art1:12.34 + art5:64.10 / Prestations: art2:56.78
class TaxtReceiptTest(InvoiceTest):
def setUp(self):
InvoiceTest.setUp(self)
rmtree(get_user_dir(), True)
default_financial()
default_season()
default_params()
create_account(['708'], 3)
default_adherents(True)
change_ourdetail()
def test_no_valid(self):
details = [{'article': 4, 'designation': 'article 4', 'price': '100.00', 'quantity': 1}]
bill_id = self._create_bill(details, 1, '2015-04-01', 4, True)
self.factory.xfer = PayoffAddModify()
self.calljson('/diacamma.payoff/payoffAddModify', {'SAVE': 'YES', 'supporting': bill_id, 'amount': '100.0', 'payer': "Ma'a Dalton", 'date': '2015-04-03', 'mode': 0, 'reference': 'abc', 'bank_account': 0}, False)
self.assert_observer('core.acknowledge', 'diacamma.payoff', 'payoffAddModify')
self.factory.xfer = EntryAccountList()
self.calljson('/diacamma.accounting/entryAccountList',
{'year': '1', 'journal': '0', 'filter': '0'}, False)
self.assert_observer('core.custom', 'diacamma.accounting', 'entryAccountList')
self.assert_count_equal('entryline', 4)
self.assert_json_equal('LABELFORM', 'result', [100.00, 0.00, 100.00, 100.00, 0.00])
self.factory.xfer = TaxReceiptList()
self.calljson('/diacamma.member/taxReceiptList', {'year': 2015}, False)
self.assert_observer('core.custom', 'diacamma.member', 'taxReceiptList')
self.assert_count_equal('taxreceipt', 0)
self.factory.xfer = TaxReceiptCheck()
self.calljson('/diacamma.member/taxReceiptCheck', {'year': 2015, 'CONFIRME': 'YES'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'taxReceiptCheck')
self.factory.xfer = TaxReceiptList()
self.calljson('/diacamma.member/taxReceiptList', {'year': 2015}, False)
self.assert_observer('core.custom', 'diacamma.member', 'taxReceiptList')
self.assert_count_equal('taxreceipt', 0)
def test_valid_only_bill(self):
details = [{'article': 4, 'designation': 'article 4', 'price': '100.00', 'quantity': 1}]
bill_id = self._create_bill(details, 1, '2015-04-01', 4, True)
self.factory.xfer = PayoffAddModify()
self.calljson('/diacamma.payoff/payoffAddModify', {'SAVE': 'YES', 'supporting': bill_id, 'amount': '100.0', 'payer': "Ma'a Dalton", 'date': '2015-04-03', 'mode': 0, 'reference': 'abc', 'bank_account': 0}, False)
self.assert_observer('core.acknowledge', 'diacamma.payoff', 'payoffAddModify')
self.factory.xfer = EntryAccountClose()
self.calljson('/diacamma.accounting/entryAccountClose',
{'CONFIRME': 'YES', 'year': '1', 'journal': '2', "entryline": "1"}, False)
self.assert_observer('core.acknowledge', 'diacamma.accounting', 'entryAccountClose')
self.factory.xfer = EntryAccountList()
self.calljson('/diacamma.accounting/entryAccountList',
{'year': '1', 'journal': '0', 'filter': '2'}, False)
self.assert_observer('core.custom', 'diacamma.accounting', 'entryAccountList')
self.assert_count_equal('entryline', 2)
self.assert_json_equal('LABELFORM', 'result', [100.00, 0.00, 100.00, 100.00, 0.00])
self.factory.xfer = TaxReceiptList()
self.calljson('/diacamma.member/taxReceiptList', {'year': 2015}, False)
self.assert_observer('core.custom', 'diacamma.member', 'taxReceiptList')
self.assert_count_equal('taxreceipt', 0)
self.factory.xfer = TaxReceiptCheck()
self.calljson('/diacamma.member/taxReceiptCheck', {'year': 2015, 'CONFIRME': 'YES'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'taxReceiptCheck')
self.factory.xfer = TaxReceiptList()
self.calljson('/diacamma.member/taxReceiptList', {'year': 2015}, False)
self.assert_observer('core.custom', 'diacamma.member', 'taxReceiptList')
self.assert_count_equal('taxreceipt', 0)
def test_valid(self):
details = [{'article': 4, 'designation': 'article 4', 'price': '100.00', 'quantity': 1}]
bill_id = self._create_bill(details, 1, '2015-04-01', 4, True)
self.factory.xfer = PayoffAddModify()
self.calljson('/diacamma.payoff/payoffAddModify', {'SAVE': 'YES', 'supporting': bill_id, 'amount': '100.0', 'payer': "Ma'a Dalton", 'date': '2015-04-03', 'mode': 0, 'reference': 'abc', 'bank_account': 0}, False)
self.assert_observer('core.acknowledge', 'diacamma.payoff', 'payoffAddModify')
self.factory.xfer = EntryAccountClose()
self.calljson('/diacamma.accounting/entryAccountClose',
{'CONFIRME': 'YES', 'year': '1', 'journal': '2', "entryline": "1;3"}, False)
self.assert_observer('core.acknowledge', 'diacamma.accounting', 'entryAccountClose')
self.factory.xfer = EntryAccountList()
self.calljson('/diacamma.accounting/entryAccountList',
{'year': '1', 'journal': '0', 'filter': '2'}, False)
self.assert_observer('core.custom', 'diacamma.accounting', 'entryAccountList')
self.assert_count_equal('entryline', 4)
self.assert_json_equal('LABELFORM', 'result', [100.00, 0.00, 100.00, 100.00, 100.00])
self.factory.xfer = TaxReceiptList()
self.calljson('/diacamma.member/taxReceiptList', {'year': 2015}, False)
self.assert_observer('core.custom', 'diacamma.member', 'taxReceiptList')
self.assert_count_equal('taxreceipt', 0)
self.factory.xfer = TaxReceiptCheck()
self.calljson('/diacamma.member/taxReceiptCheck', {'year': 2015, 'CONFIRME': 'YES'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'taxReceiptCheck')
self.factory.xfer = TaxReceiptList()
self.calljson('/diacamma.member/taxReceiptList', {'year': 2015}, False)
self.assert_observer('core.custom', 'diacamma.member', 'taxReceiptList')
self.assert_count_equal('taxreceipt', 1)
self.factory.xfer = TaxReceiptShow()
self.calljson('/diacamma.member/taxReceiptShow', {'taxreceipt': 2}, False)
self.assert_observer('core.custom', 'diacamma.member', 'taxReceiptShow')
self.assert_count_equal('', 9)
self.assert_json_equal('LABELFORM', 'num', 1)
self.assert_json_equal('LABELFORM', 'third', 'Dalton Joe')
self.assert_count_equal('entryline', 1)
self.assert_json_equal('LABELFORM', 'total', 100.0)
self.assert_json_equal('LABELFORM', 'date_payoff', '2015-04-03')
self.assert_json_equal('LABELFORM', 'mode_payoff', 'espèces')
self.factory.xfer = TaxReceiptPrint()
self.calljson('/diacamma.member/taxReceiptPrint', {'taxreceipt': '2', 'PRINT_PERSITENT': True, 'PRINT_MODE': 3, 'MODEL': 8}, False)
self.assert_observer('core.print', 'diacamma.member', 'taxReceiptPrint')
check_pdfreport(self, 'TaxReceipt', 2, True)
self.save_pdf()
self.factory.xfer = TaxReceiptCheck()
self.calljson('/diacamma.member/taxReceiptCheck', {'year': 2015, 'CONFIRME': 'YES'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'taxReceiptCheck')
self.factory.xfer = TaxReceiptList()
self.calljson('/diacamma.member/taxReceiptList', {'year': 2015}, False)
self.assert_observer('core.custom', 'diacamma.member', 'taxReceiptList')
self.assert_count_equal('taxreceipt', 1)
def test_valid_onlyone(self):
details = [{'article': 4, 'designation': 'article 4', 'price': '100.00', 'quantity': 1},
{'article': 1, 'designation': 'article 1', 'price': '100.00', 'quantity': 1}]
bill_id = self._create_bill(details, 1, '2015-06-21', 4, True)
self.factory.xfer = PayoffAddModify()
self.calljson('/diacamma.payoff/payoffAddModify', {'SAVE': 'YES', 'supportings': bill_id,
'amount': '200.0', 'payer': "Ma'a Dalton", 'date': '2015-06-30', 'mode': 3, 'reference': 'abc', 'bank_account': 1}, False)
self.assert_observer('core.acknowledge', 'diacamma.payoff', 'payoffAddModify')
details = [{'article': 4, 'designation': 'article 4', 'price': '100.00', 'quantity': 2}]
self._create_bill(details, 1, '2015-11-11', 4, True)
self.factory.xfer = EntryAccountClose()
self.calljson('/diacamma.accounting/entryAccountClose',
{'CONFIRME': 'YES', 'year': '1', 'journal': '2', "entryline": "1;2;3;4;5;6;7"}, False)
self.assert_observer('core.acknowledge', 'diacamma.accounting', 'entryAccountClose')
current_year = FiscalYear.get_current()
current_year.closed()
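# closing the fiscal year presumably generates the year-end entries that
# account for the extra 5 entry lines asserted below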
self.factory.xfer = EntryAccountList()
self.calljson('/diacamma.accounting/entryAccountList',
{'year': '1', 'journal': '0', 'filter': '2'}, False)
self.assert_observer('core.custom', 'diacamma.accounting', 'entryAccountList')
self.assert_count_equal('entryline', 7 + 5)
self.assert_json_equal('LABELFORM', 'result', [400.00, 0.00, 400.00, 200.00, 200.00])
self.factory.xfer = TaxReceiptCheck()
self.calljson('/diacamma.member/taxReceiptCheck', {'year': 2015, 'CONFIRME': 'YES'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'taxReceiptCheck')
self.factory.xfer = TaxReceiptList()
self.calljson('/diacamma.member/taxReceiptList', {'year': 2015}, False)
self.assert_observer('core.custom', 'diacamma.member', 'taxReceiptList')
self.assert_count_equal('taxreceipt', 1)
self.factory.xfer = TaxReceiptShow()
self.calljson('/diacamma.member/taxReceiptShow', {'taxreceipt': 3}, False)
self.assert_observer('core.custom', 'diacamma.member', 'taxReceiptShow')
self.assert_count_equal('', 9)
self.assert_json_equal('LABELFORM', 'num', 1)
self.assert_json_equal('LABELFORM', 'third', 'Dalton Joe')
self.assert_count_equal('entryline', 1)
self.assert_json_equal('LABELFORM', 'total', 100.0)
self.assert_json_equal('LABELFORM', 'date_payoff', '2015-06-30')
self.assert_json_equal('LABELFORM', 'mode_payoff', 'carte de crédit')
def test_multi(self):
current_year = FiscalYear.get_current()
current_year.begin = '2014-09-01'
current_year.end = '2015-08-31'
current_year.save()
details = [{'article': 4, 'designation': 'article 4', 'price': '100.00', 'quantity': 1},
{'article': 1, 'designation': 'article 1', 'price': '100.00', 'quantity': 1}]
bill_id1 = self._create_bill(details, 1, '2014-10-23', 4, True)
details = [{'article': 4, 'designation': 'article 4', 'price': '100.00', 'quantity': 2}]
bill_id2 = self._create_bill(details, 1, '2014-11-11', 4, True)
self.factory.xfer = PayoffAddModify()
self.calljson('/diacamma.payoff/payoffAddModify', {'SAVE': 'YES', 'supportings': "%d;%d" % (bill_id1, bill_id2),
'amount': '250.0', 'payer': "Ma'a Dalton", 'date': '2014-12-03', 'mode': 1, 'reference': 'abc', 'bank_account': 1}, False)
self.assert_observer('core.acknowledge', 'diacamma.payoff', 'payoffAddModify')
self.factory.xfer = PayoffAddModify()
self.calljson('/diacamma.payoff/payoffAddModify', {'SAVE': 'YES', 'supportings': "%d;%d" % (bill_id1, bill_id2),
'amount': '150.0', 'payer': "Ma'a Dalton", 'date': '2015-02-25', 'mode': 2, 'reference': 'abc', 'bank_account': 1}, False)
self.assert_observer('core.acknowledge', 'diacamma.payoff', 'payoffAddModify')
self.factory.xfer = EntryAccountClose()
self.calljson('/diacamma.accounting/entryAccountClose',
{'CONFIRME': 'YES', 'year': '1', 'journal': '2', "entryline": "1;2;3;4;5;6;7;8;9"}, False)
self.assert_observer('core.acknowledge', 'diacamma.accounting', 'entryAccountClose')
self.factory.xfer = EntryAccountList()
self.calljson('/diacamma.accounting/entryAccountList',
{'year': '1', 'journal': '0', 'filter': '2'}, False)
self.assert_observer('core.custom', 'diacamma.accounting', 'entryAccountList')
self.assert_count_equal('entryline', 11)
self.assert_json_equal('LABELFORM', 'result', [400.00, 0.00, 400.00, 400.00, 400.00])
self.factory.xfer = TaxReceiptCheck()
self.calljson('/diacamma.member/taxReceiptCheck', {'year': 2014, 'CONFIRME': 'YES'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'taxReceiptCheck')
self.factory.xfer = TaxReceiptCheck()
self.calljson('/diacamma.member/taxReceiptCheck', {'year': 2015, 'CONFIRME': 'YES'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'taxReceiptCheck')
self.factory.xfer = TaxReceiptList()
self.calljson('/diacamma.member/taxReceiptList', {'year': 2014}, False)
self.assert_observer('core.custom', 'diacamma.member', 'taxReceiptList')
self.assert_count_equal('taxreceipt', 0)
self.factory.xfer = TaxReceiptList()
self.calljson('/diacamma.member/taxReceiptList', {'year': 2015}, False)
self.assert_observer('core.custom', 'diacamma.member', 'taxReceiptList')
self.assert_count_equal('taxreceipt', 1)
self.factory.xfer = TaxReceiptShow()
self.calljson('/diacamma.member/taxReceiptShow', {'taxreceipt': 3}, False)
self.assert_observer('core.custom', 'diacamma.member', 'taxReceiptShow')
self.assert_count_equal('', 9)
self.assert_json_equal('LABELFORM', 'num', 1)
self.assert_json_equal('LABELFORM', 'third', 'Dalton Joe')
self.assert_count_equal('entryline', 2)
self.assert_json_equal('LABELFORM', 'total', 300.0)
self.assert_json_equal('LABELFORM', 'date_payoff', '2015-02-25')
self.assert_json_equal('LABELFORM', 'mode_payoff', 'chèque, virement')
def test_waiver_fee(self):
details = [{'article': 4, 'designation': 'article 4', 'price': '100.00', 'quantity': 1}]
self._create_bill(details, 1, '2015-03-29', 4, True)
add_entry(1, 2, '2015-03-15', 'depense 1', '-1|12|0|100.000000|0|0|None|\n-2|1|4|-100.000000|0|0|None|', True)
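# the donor "pays" by waiving a 100.00 reimbursement: linking the expense
# entry above to the bill is what should yield an 'abandon de frais' receipt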
self.factory.xfer = EntryAccountClose()
self.calljson('/diacamma.accounting/entryAccountClose',
{'CONFIRME': 'YES', 'year': '1', 'journal': '0', "entryline": "1;2"}, False)
self.assert_observer('core.acknowledge', 'diacamma.accounting', 'entryAccountClose')
self.factory.xfer = EntryAccountLink()
self.calljson('/diacamma.accounting/entryAccountLink', {'year': '1', 'journal': '0', 'filter': '0', 'entryline': '4;1'}, False)
self.assert_observer('core.acknowledge', 'diacamma.accounting', 'entryAccountLink')
self.factory.xfer = EntryAccountList()
self.calljson('/diacamma.accounting/entryAccountList',
{'year': '1', 'journal': '0', 'filter': '0'}, False)
self.assert_observer('core.custom', 'diacamma.accounting', 'entryAccountList')
self.assert_count_equal('entryline', 4)
self.assert_json_equal('LABELFORM', 'result', [100.00, 100.00, 0.00, 0.00, 0.00])
self.factory.xfer = TaxReceiptCheck()
self.calljson('/diacamma.member/taxReceiptCheck', {'year': 2015, 'CONFIRME': 'YES'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'taxReceiptCheck')
self.factory.xfer = TaxReceiptList()
self.calljson('/diacamma.member/taxReceiptList', {'year': 2015}, False)
self.assert_observer('core.custom', 'diacamma.member', 'taxReceiptList')
self.assert_count_equal('taxreceipt', 1)
self.factory.xfer = TaxReceiptShow()
self.calljson('/diacamma.member/taxReceiptShow', {'taxreceipt': 2}, False)
self.assert_observer('core.custom', 'diacamma.member', 'taxReceiptShow')
self.assert_count_equal('', 9)
self.assert_json_equal('LABELFORM', 'num', 1)
self.assert_json_equal('LABELFORM', 'third', 'Dalton Joe')
self.assert_count_equal('entryline', 1)
self.assert_json_equal('LABELFORM', 'total', 100.0)
self.assert_json_equal('LABELFORM', 'date_payoff', '2015-03-15')
self.assert_json_equal('LABELFORM', 'mode_payoff', 'abandon de frais')
def test_waiver_revenu(self):
details = [{'article': 2, 'designation': 'article 2', 'price': '100.00', 'quantity': 1}]
self._create_bill(details, 2, '2015-03-25', 4, True)
details = [{'article': 4, 'designation': 'article 4', 'price': '100.00', 'quantity': 1}]
self._create_bill(details, 3, '2015-03-29', 4, True)
self.factory.xfer = EntryAccountClose()
self.calljson('/diacamma.accounting/entryAccountClose',
{'CONFIRME': 'YES', 'year': '1', 'journal': '0', "entryline": "1;2;3;4"}, False)
self.assert_observer('core.acknowledge', 'diacamma.accounting', 'entryAccountClose')
self.factory.xfer = EntryAccountLink()
self.calljson('/diacamma.accounting/entryAccountLink', {'year': '1', 'journal': '0', 'filter': '0', 'entryline': '3;1'}, False)
self.assert_observer('core.acknowledge', 'diacamma.accounting', 'entryAccountLink')
self.factory.xfer = EntryAccountList()
self.calljson('/diacamma.accounting/entryAccountList',
{'year': '1', 'journal': '0', 'filter': '0'}, False)
self.assert_observer('core.custom', 'diacamma.accounting', 'entryAccountList')
self.assert_count_equal('entryline', 4)
self.assert_json_equal('LABELFORM', 'result', [0.00, 0.00, 0.00, 0.00, 0.00])
self.factory.xfer = TaxReceiptCheck()
self.calljson('/diacamma.member/taxReceiptCheck', {'year': 2015, 'CONFIRME': 'YES'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'taxReceiptCheck')
self.factory.xfer = TaxReceiptList()
self.calljson('/diacamma.member/taxReceiptList', {'year': 2015}, False)
self.assert_observer('core.custom', 'diacamma.member', 'taxReceiptList')
self.assert_count_equal('taxreceipt', 1)
self.factory.xfer = TaxReceiptShow()
self.calljson('/diacamma.member/taxReceiptShow', {'taxreceipt': 3}, False)
self.assert_observer('core.custom', 'diacamma.member', 'taxReceiptShow')
self.assert_count_equal('', 9)
self.assert_json_equal('LABELFORM', 'num', 1)
self.assert_json_equal('LABELFORM', 'third', 'Dalton Joe')
self.assert_count_equal('entryline', 1)
self.assert_json_equal('LABELFORM', 'total', 100.0)
self.assert_json_equal('LABELFORM', 'date_payoff', '2015-03-25')
self.assert_json_equal('LABELFORM', 'mode_payoff', 'abandon de revenus ou de produits')
def test_double_year(self):
current_year = FiscalYear.get_current()
# Last year
old_year = FiscalYear.objects.create(begin='2014-01-01', end='2014-12-31', status=1)
old_year.set_has_actif()
fill_accounts_fr(old_year, True, False)
create_account(['708'], 3, old_year)
details = [{'article': 4, 'designation': 'article 4', 'price': '100.00', 'quantity': 1},
{'article': 1, 'designation': 'article 1', 'price': '100.00', 'quantity': 1}]
bill_id1 = self._create_bill(details, 1, '2014-10-23', 4, True)
details = [{'article': 4, 'designation': 'article 4', 'price': '100.00', 'quantity': 2}]
bill_id2 = self._create_bill(details, 1, '2014-11-11', 4, True)
self.factory.xfer = PayoffAddModify()
self.calljson('/diacamma.payoff/payoffAddModify', {'SAVE': 'YES', 'supportings': "%d;%d" % (bill_id1, bill_id2),
'amount': '250.0', 'payer': "Ma'a Dalton", 'date': '2014-12-03', 'mode': 1, 'reference': 'abc', 'bank_account': 1}, False)
self.assert_observer('core.acknowledge', 'diacamma.payoff', 'payoffAddModify')
self.factory.xfer = EntryAccountClose()
self.calljson('/diacamma.accounting/entryAccountClose',
{'CONFIRME': 'YES', 'year': '2', 'journal': '0', "entryline": "1;2;3;4;5;6;7"}, False)
self.assert_observer('core.acknowledge', 'diacamma.accounting', 'entryAccountClose')
old_year.set_context(self.factory.xfer)
old_year.closed()
self.factory.xfer = EntryAccountList()
self.calljson('/diacamma.accounting/entryAccountList',
{'year': '2', 'journal': '0', 'filter': '2'}, False)
self.assert_observer('core.custom', 'diacamma.accounting', 'entryAccountList')
self.assert_count_equal('entryline', 8 + 8)
self.assert_json_equal('LABELFORM', 'result', [400.00, 0.00, 400.00, 250.00, 250.00])
self.factory.xfer = TaxReceiptCheck()
self.calljson('/diacamma.member/taxReceiptCheck', {'year': 2014, 'CONFIRME': 'YES'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'taxReceiptCheck')
self.factory.xfer = TaxReceiptList()
self.calljson('/diacamma.member/taxReceiptList', {'year': 2014}, False)
self.assert_observer('core.custom', 'diacamma.member', 'taxReceiptList')
self.assert_count_equal('taxreceipt', 0)
# New year
current_year.last_fiscalyear = old_year
current_year.set_has_actif()
current_year.run_report_lastyear(True)
self.factory.xfer = PayoffAddModify()
self.calljson('/diacamma.payoff/payoffAddModify', {'SAVE': 'YES', 'supportings': "%d;%d" % (bill_id1, bill_id2),
'amount': '150.0', 'payer': "Ma'a Dalton", 'date': '2015-02-25', 'mode': 2, 'reference': 'abc', 'bank_account': 1}, False)
self.assert_observer('core.acknowledge', 'diacamma.payoff', 'payoffAddModify')
self.factory.xfer = EntryAccountClose()
self.calljson('/diacamma.accounting/entryAccountClose',
{'CONFIRME': 'YES', 'year': '1', 'journal': '2', "entryline": "25;26;27"}, False)
self.assert_observer('core.acknowledge', 'diacamma.accounting', 'entryAccountClose')
self.factory.xfer = EntryAccountLink()
self.calljson('/diacamma.accounting/entryAccountLink', {'year': '1', 'journal': '0', 'filter': '0', 'entryline': '21;22;23;24;25;26'}, False)
self.assert_observer('core.acknowledge', 'diacamma.accounting', 'entryAccountLink')
self.factory.xfer = EntryAccountList()
self.calljson('/diacamma.accounting/entryAccountList',
{'year': '1', 'journal': '0', 'filter': '0'}, False)
self.assert_observer('core.custom', 'diacamma.accounting', 'entryAccountList')
self.assert_count_equal('entryline', 8 + 3)
self.assert_json_equal('LABELFORM', 'result', [0.00, 0.00, 0.00, 400.00, 400.00])
self.factory.xfer = TaxReceiptCheck()
self.calljson('/diacamma.member/taxReceiptCheck', {'year': 2015, 'CONFIRME': 'YES'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'taxReceiptCheck')
self.factory.xfer = TaxReceiptList()
self.calljson('/diacamma.member/taxReceiptList', {'year': 2015}, False)
self.assert_observer('core.custom', 'diacamma.member', 'taxReceiptList')
self.assert_count_equal('taxreceipt', 1)
self.factory.xfer = TaxReceiptCheck()
self.calljson('/diacamma.member/taxReceiptCheck', {'year': 2014, 'CONFIRME': 'YES'}, False)
self.assert_observer('core.acknowledge', 'diacamma.member', 'taxReceiptCheck')
self.factory.xfer = TaxReceiptList()
self.calljson('/diacamma.member/taxReceiptList', {'year': 2014}, False)
self.assert_observer('core.custom', 'diacamma.member', 'taxReceiptList')
self.assert_count_equal('taxreceipt', 0)
class AdherentConnectionTest(BaseAdherentTest):
smtp_port = 3425
def setUp(self):
BaseAdherentTest.setUp(self)
Parameter.change_value('member-family-type', 3)
Parameter.change_value("member-fields", "firstname;lastname;tel1;tel2;email;family")
Parameter.change_value('member-connection', 2)
set_parameters([])
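# take a fresh SMTP port for every test so a TestReceiver left over from a
# previous test cannot interfere with this one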
AdherentConnectionTest.smtp_port += 1
configSMTP('localhost', AdherentConnectionTest.smtp_port)
change_ourdetail()
def test_connection_ask_failed(self):
self.assertEqual(LucteriosUser.objects.all().count(), 1)
self.add_subscriptions(year=2008, season_id=9)
self.calljson('/diacamma.member/askAdherentAccess', {})
self.assert_observer('core.custom', 'diacamma.member', 'askAdherentAccess')
self.assertEqual(len(self.json_context), 0)
self.assertEqual(len(self.json_actions), 2)
self.assert_count_equal('', 3)
self.assert_json_equal('EDIT', "email", '')
server = TestReceiver()
server.start(AdherentConnectionTest.smtp_port)
try:
self.calljson('/diacamma.member/askAdherentAccess', {"CONFIRME": "YES", "email": "inconnu@worldcompany.com"})
self.assert_observer('core.dialogbox', 'diacamma.member', 'askAdherentAccess')
self.assert_json_equal('', 'text', 'Ce courriel ne correspond pas avec un adhérent actif !')
self.calljson('/diacamma.member/askAdherentAccess', {"CONFIRME": "YES", "email": "Joe.Dalton@worldcompany.com"})
self.assert_observer('core.dialogbox', 'diacamma.member', 'askAdherentAccess')
self.assert_json_equal('', 'text', 'Ce courriel ne correspond pas avec un adhérent actif !')
self.assertEqual(0, server.count())
finally:
server.stop()
self.assertEqual(LucteriosUser.objects.all().count(), 1)
def test_connection_ask_simple(self):
self.assertEqual(LucteriosUser.objects.all().count(), 1)
self.add_subscriptions()
server = TestReceiver()
server.start(AdherentConnectionTest.smtp_port)
try:
self.calljson('/diacamma.member/askAdherentAccess', {"CONFIRME": "YES", "email": "Joe.Dalton@worldcompany.com"})
self.assert_observer('core.dialogbox', 'diacamma.member', 'askAdherentAccess')
self.assert_json_equal('', 'text', 'Les paramètres de connexion ont été envoyé.')
self.calljson('/diacamma.member/askAdherentAccess', {"CONFIRME": "YES", "email": "William.Dalton@worldcompany.com"})
self.assert_observer('core.dialogbox', 'diacamma.member', 'askAdherentAccess')
self.assert_json_equal('', 'text', 'Les paramètres de connexion ont été envoyé.')
self.assertEqual(2, server.count())
msg, _msg = server.check_first_message('Mot de passe de connexion', 2)
self.assertEqual('text/html', msg.get_content_type())
self.assertEqual('base64', msg.get('Content-Transfer-Encoding', ''))
message = decode_b64(msg.get_payload())
self.assertEqual('<html>Bienvenue<br/><br/>Confirmation de connexion à votre application :'
'<br/> - Alias : joeD<br/> - Mot de passe : ', message[:115])
password = message[115:].split('<br/>')[0]
finally:
server.stop()
self.calljson('/CORE/authentification', {'username': 'joeD', 'password': password})
self.assert_observer('core.auth', 'CORE', 'authentification')
self.assert_json_equal('', '', 'OK')
self.calljson('/lucterios.contacts/account', {}, 'get')
self.assert_observer('core.custom', 'lucterios.contacts', 'account')
self.assert_json_equal('LABELFORM', 'genre', 1)
self.assert_json_equal('LABELFORM', 'firstname', "Joe")
self.assert_json_equal('LABELFORM', 'lastname', "Dalton")
self.assert_json_equal('LINK', 'email', "Joe.Dalton@worldcompany.com")
self.assert_count_equal('subscription', 1)
self.assertEqual(LucteriosUser.objects.all().count(), 3)
self.assertEqual(LucteriosUser.objects.filter(is_active=True).count(), 3)
def test_connection_ask_family(self):
self.assertEqual(LucteriosUser.objects.all().count(), 1)
self.prep_subscription_family()
server = TestReceiver()
server.start(AdherentConnectionTest.smtp_port)
try:
self.calljson('/diacamma.member/askAdherentAccess', {"CONFIRME": "YES", "email": "dalton@worldcompany.com"})
self.assert_observer('core.dialogbox', 'diacamma.member', 'askAdherentAccess')
self.assert_json_equal('', 'text', 'Les paramètres de connexion ont été envoyé.')
self.calljson('/diacamma.member/askAdherentAccess', {"CONFIRME": "YES", "email": "Joe.Dalton@worldcompany.com"})
self.assert_observer('core.dialogbox', 'diacamma.member', 'askAdherentAccess')
self.assert_json_equal('', 'text', 'Les paramètres de connexion ont été envoyé.')
self.calljson('/diacamma.member/askAdherentAccess', {"CONFIRME": "YES", "email": "Avrel.Dalton@worldcompany.com"})
self.assert_observer('core.dialogbox', 'diacamma.member', 'askAdherentAccess')
self.assert_json_equal('', 'text', 'Les paramètres de connexion ont été envoyé.')
self.assertEqual(3, server.count())
self.assertEqual('mr-sylvestre@worldcompany.com', server.get(0)[1])
self.assertEqual(['dalton@worldcompany.com'], server.get(0)[2])
msg1, _msg = server.check_first_message('Mot de passe de connexion', 2)
self.assertEqual('text/html', msg1.get_content_type())
self.assertEqual('base64', msg1.get('Content-Transfer-Encoding', ''))
message = decode_b64(msg1.get_payload())
self.assertEqual('<html>Bienvenue<br/><br/>Confirmation de connexion à votre application :'
'<br/> - Alias : LES DALTONS<br/> - Mot de passe : ', message[:122])
password1 = message[122:].split('<br/>')[0]
self.assertEqual('mr-sylvestre@worldcompany.com', server.get(1)[1])
self.assertEqual(['Joe.Dalton@worldcompany.com'], server.get(1)[2])
msg2, _msg = server.get_msg_index(1, 'Mot de passe de connexion')
message = decode_b64(msg2.get_payload())
self.assertEqual('<html>Bienvenue<br/><br/>Confirmation de connexion à votre application :'
'<br/> - Alias : LES DALTONS<br/> - Mot de passe : ', message[:122])
password2 = message[122:].split('<br/>')[0]
self.assertEqual('mr-sylvestre@worldcompany.com', server.get(2)[1])
self.assertEqual(['Avrel.Dalton@worldcompany.com'], server.get(2)[2])
msg3, _msg = server.get_msg_index(2, 'Mot de passe de connexion')
message = decode_b64(msg3.get_payload())
self.assertEqual('<html>Bienvenue<br/><br/>Confirmation de connexion à votre application :'
'<br/> - Alias : LES DALTONS<br/> - Mot de passe : ', message[:122])
password3 = message[122:].split('<br/>')[0]
finally:
server.stop()
self.calljson('/CORE/authentification', {'username': 'LES DALTONS', 'password': password1})
self.assert_observer('core.auth', 'CORE', 'authentification')
self.assert_json_equal('', '', 'BADAUTH')
self.calljson('/CORE/authentification', {'username': 'LES DALTONS', 'password': password2})
self.assert_observer('core.auth', 'CORE', 'authentification')
self.assert_json_equal('', '', 'BADAUTH')
self.calljson('/CORE/authentification', {'username': 'LES DALTONS', 'password': password3})
self.assert_observer('core.auth', 'CORE', 'authentification')
self.assert_json_equal('', '', 'OK')
self.calljson('/lucterios.contacts/account', {}, 'get')
self.assert_observer('core.custom', 'lucterios.contacts', 'account')
self.assert_json_equal('LABELFORM', 'legalentity_structure_type', "famille")
self.assert_json_equal('LABELFORM', 'legalentity_name', "LES DALTONS")
self.assert_json_equal('LINK', 'legalentity_email', "dalton@worldcompany.com")
self.assert_action_equal('POST', self.get_json_path('#btn_edit/action'), ("Editer", "images/edit.png",
"lucterios.contacts", "currentLegalEntityModify", 0, 1, 1, {'legal_entity': 7}))
self.assert_count_equal('subscription', 2)
self.assertEqual(LucteriosUser.objects.all().count(), 2)
self.assertEqual(LucteriosUser.objects.filter(is_active=True).count(), 2)
self.calljson('/lucterios.contacts/currentLegalEntityModify', {'legal_entity': 7})
self.assert_observer('core.custom', 'lucterios.contacts', 'currentLegalEntityModify')
self.assert_count_equal('', 12)
def test_disable_connexion(self):
self.add_subscriptions()
adh_luke = Adherent.objects.get(firstname='Lucky')
adh_luke.user = LucteriosUser.objects.create(username='lucky', first_name=adh_luke.firstname, last_name=adh_luke.lastname, email=adh_luke.email, is_active=False)
adh_luke.save()
new_adh = create_adherent("Ma'a", 'Dalton', '1961-04-12')
new_adh.user = LucteriosUser.objects.create(username='maa', first_name=new_adh.firstname, last_name=new_adh.lastname, email=new_adh.email, is_active=True)
new_adh.save()
new_adh = create_adherent("Rantanplan", 'Chien', '2010-01-01')
new_adh.user = LucteriosUser.objects.create(username='rantanplan', first_name=new_adh.firstname, last_name=new_adh.lastname, email=new_adh.email, is_active=True)
new_adh.save()
Responsability.objects.create(individual=new_adh, legal_entity_id=1)
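# Rantanplan is attached to the main structure (legal entity 1), which should
# shield his account from the connection purge below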
self.assertEqual(LucteriosUser.objects.all().count(), 4)
self.assertEqual(len(LucteriosUser.objects.filter(is_active=True)), 3)
self.factory.xfer = AdherentDisableConnection()
self.calljson('/diacamma.member/adherentDisableConnection', {'CONFIRME': 'YES', 'RELOAD': 'YES'}, False)
self.assert_observer('core.custom', 'diacamma.member', 'adherentDisableConnection')
self.assert_json_equal('LABELFORM', 'info', '{[center]}{[b]}Résultat{[/b]}{[/center]}{[br/]}1 connexion(s) supprimée(s).', True)
self.assertEqual(LucteriosUser.objects.all().count(), 4)
self.assertEqual(len(LucteriosUser.objects.filter(is_active=True)), 2)
|
Diacamma2/asso
|
diacamma/member/tests_adherent.py
|
Python
|
gpl-3.0
| 254,243
|
[
"Dalton"
] |
1a68b29d0584f350e247ff339f72efa1ab2e8fc7e1a5a5806f28834a9e5536e6
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#====================================================
# FILE: img2sdat.py
# AUTHORS: xpirt - luxi78 - howellzhu
# DATE: 2016-12-23 16:47:24 CST
#====================================================
import sys, os, errno, tempfile
import common, blockimgdiff, sparse_img
__version__ = '1.2'
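# sys.hexversion packs the interpreter version as 0xMMmmrrss, so the check
# below rejects anything older than Python 2.7.0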
if sys.hexversion < 0x02070000:
print >> sys.stderr, "Python 2.7 or newer is required."
try:
input = raw_input
except NameError: pass
input('Press ENTER to exit...')
sys.exit(1)
else:
print('img2sdat binary - version: %s\n' % __version__)
try:
    INPUT_IMAGE = str(sys.argv[1])
except IndexError:
    print('Usage: img2sdat.py <system_img> [outdir] [version]\n')
    print(' <system_img>: input system image\n')
    print(' [outdir]: output directory (current directory by default)\n')
    print(' [version]: transfer list version number (1 - 5.0, 2 - 5.1, 3 - 6.0, 4 - 7.0, will be asked by default, more info on xda thread)\n')
    print('Visit xda thread for more information.\n')
    try:
        input = raw_input
    except NameError: pass
    input('Press ENTER to exit...')
    sys.exit()
def main(argv):
    if len(sys.argv) < 3:
        outdir = './system'
    else:
        outdir = sys.argv[2] + '/system'
    if len(sys.argv) < 4:
        version = 4
        item = True
        while item:
            print('''  1. Android Lollipop 5.0
  2. Android Lollipop 5.1
  3. Android Marshmallow 6.0
  4. Android Nougat 7.0
''')
            item = raw_input('Choose system version: ')
            if item == '1':
                version = 1
                break
            elif item == '2':
                version = 2
                break
            elif item == '3':
                version = 3
                break
            elif item == '4':
                version = 4
                break
            else:
                return
    else:
        version = int(sys.argv[3])
    # Get sparse image
    image = sparse_img.SparseImage(INPUT_IMAGE, tempfile.mkstemp()[1], '0')
    # Generate output files
    b = blockimgdiff.BlockImageDiff(image, None, version)
    b.Compute(outdir)
    print('Done! Output files: %s' % os.path.dirname(outdir))
    return

if __name__ == '__main__':
    main(sys.argv)
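# Usage sketch (hypothetical paths), targeting an Android 7.0 (version 4)
# transfer list:
#     python img2sdat.py system.img ./out 4
# Passing None as the source to BlockImageDiff should make it emit a full
# "new" transfer list rather than an incremental diff, so with the prefix
# './out/system' the run is expected to leave system.transfer.list,
# system.new.dat and system.patch.dat under ./out.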
|
Nevax07/FreedomOS
|
build/tools/img2sdat/img2sdat.py
|
Python
|
apache-2.0
| 2,383
|
[
"VisIt"
] |
b17e3d93946d833ad28cb1b731acbf5d42967f6122a78b382ef1d7fd7d7284fd
|
# -*- coding: utf-8 -*-
import sys
import os
import math
import time
import glob
import wx
import shutil
import wx.lib.scrolledpanel
import platform
import webbrowser
from tools import *
# os and glob are required by findFiles()/ViewDagClick() below
class GFIntermediate:
"""Class defining the intermediates in a GeoFold DAG"""
def __init__(self,number=0, center=(0,0), radius=0.,dagfile=''):
"""Initialize given information from imagemap"""
self.number = number
self.radius = radius
self.center = center
self.dagfile = dagfile
if dagfile != '':
success = self.read_dagfile(dagfile)
assert success == True, 'Could not read dagfile: %s'%(self.dagfile)
else:
self.iflag = ""
#self.state = 0
self.sym = 0
self.Gsolv = 0.
self.sas = 0.
self.entropy = 0.
self.voids = 0
self.hbonds = 0
self.concentration = 0.
self.barrels = []
self.barrelflags = []
def read_dagfile(self,dagfile):
"""Given the GFIntermediate initialized with data from imagemap
open its parent dagfile and read in remaining information. Returns
False if something went wrong"""
try:
readDAG = open(dagfile,'r')
except IOError:
sys.stderr.write('\n\nError: DAG file %s could not be opened\n'%(dagfile))
sys.stderr.flush()
return False
while 1:
line = readDAG.readline()
if line == '':
sys.stderr.write("Error: End of file reached")
sys.stderr.flush()
return False
#find ISEGMT lines
if line[0:6] == 'ISEGMT':
#find matching ISEGMT number
try:
iseg_num = int(line[6:14])
if iseg_num == self.number:
#Read in remaining information
# line
self.iflag = readDAG.readline().split()[0]
line = line.split()
self.sym = int(line[3])
self.Gsolv = float(line[4])
self.sas = float(line[5])
self.entropy = float(line[6])
self.voids = int(line[7])
self.hbonds = int(line[8])
self.concentration = float(line[9])
self.barrels = []
self.barrelflags = []
for i in range(10,18):
self.barrels.append(int(line[i]))
for barrel in self.barrels:
if barrel != 0:
self.barrelflags.append(self.setbarrelflags(readDAG,barrel))
if barrel == 0:
self.barrelflags.append(('',''))
return True
except Exception as e:
sys.stderr.write("Error: "+e.message)
import traceback; traceback.print_exc()
sys.stderr.flush()
return False
readDAG.close()
def setbarrelflags(self,readDAG,barrel):
'''Given a non-zero barrel, return its u1flags and u2flags for this intermediate'''
#initialize flags
u1flag = ''
u2flag = ''
#find the barrel number
barrel_num = len(self.barrelflags)+1
foundFlags = False
while not foundFlags:
line = readDAG.readline()
if line == '':
sys.stderr.write("setbarrelflags::Error: End of file reached")
sys.stderr.flush()
raise IOError
line = line.split()
if line[0] == 'BARREL' and int(line[1]) == barrel_num:
#read until we find the right seam
while not foundFlags:
line = readDAG.readline()
if line == '':
sys.stderr.write("setbarrelflags::Error: End of file reached")
sys.stderr.flush()
raise IOError
line = line.split()
if line[0] == 'SEAM' and int(line[1]) == barrel:
#read until you find u1flag
while not foundFlags:
line = readDAG.readline()
if line == '':
sys.stderr.write("setbarrelflags::Error: End of file reached")
sys.stderr.flush()
raise IOError
line = line.split()
if line[0] == 'U1FLAG':
u1flag = line[1]
#u2flag is on the next line
u2flag = readDAG.readline().split()[1]
foundFlags = True
for i in range(0,len(self.iflag)):
if self.iflag[i] == '.':
u1flag = u1flag[:i]+'.'+u1flag[i+1:]
u2flag = u2flag[:i]+'.'+u2flag[i+1:]
return (u1flag,u2flag)
def contains_point(self,(x,y)):
"""Returns True if the given coordinate is within the space on the map
defined by this intermediate (e.g. (x,y) lies within self.radius of
self.center)"""
center_x, center_y = self.center
if math.sqrt((center_x-x)**2+(center_y-y)**2) <= self.radius:
return True
else:
return False
def show(self,pymol,boundaries):
''' Will display the intermediate in the pymol window'''
#import pymol
#residues = self.get_residues()
#residues = '(%s) AND %s'%(get_flag_residues(self.iflag,boundaries),self.IDs)
residues = 'model %s and (%s)'%(self.IDs[0][0],get_flag_residues(self.iflag,boundaries))
logInfo('residues: %s'%(residues))
u1res = []
u2res = []
for (u1,u2) in self.barrelflags:
if u1 != '':
#u1res.append('(%s) and %s'%(get_flag_residues(u1),self.ID))
#u2res.append('(%s) and %s'%(get_flag_residues(u2),self.ID))
u1res.append('model %s and (%s)'%(self.IDs[0][0],get_flag_residues(u1,boundaries)))
u2res.append('model %s and (%s)'%(self.IDs[0][0],get_flag_residues(u2,boundaries)))
intermediate = ' intermediate_%s'%(str(self.number))
pymol.cmd.hide('ribbon','Native')
pymol.cmd.hide('cartoon','Native')
pymol.cmd.show_as('ribbon','Native')
#pymol.cmd.color('white',self.ID)
pymol.cmd.set('ribbon_color','white','Native')
pymol.cmd.select(intermediate,residues)
pymol.cmd.show_as('cartoon',intermediate)
#pymol.cmd.color('purple',intermediate)
pymol.cmd.set("cartoon_color",'purple',intermediate)
if len(u1res)!=0:
for i in range(0,len(u1res)):
u1label = 'i_%d_barrel_%d_u1'%(self.number,i)
u2label = 'i_%d_barrel_%d_u2'%(self.number,i)
#if u1res[i] != '(resi ) and %s'%(self.ID):
if u1res[i] != 'model %s and ()'%(self.IDs[0][0]):
logInfo('u1res[%i]: %s'%(i,u1res[i]))
pymol.cmd.select(u1label,u1res[i])
#pymol.cmd.color('yellow',u1label)
pymol.cmd.set("cartoon_color",'yellow',u1label)
#if u2res[i] != '(resi ) and %s'%(self.ID):
if u2res[i] != 'model %s and ()'%(self.IDs[0][0]):
logInfo('u2res[%i]: %s'%(i,u2res[i]))
pymol.cmd.select(u2label,u2res[i])
#pymol.cmd.color('green',u2label)
pymol.cmd.set("cartoon_color", 'green', u2label)
pymol.cmd.deselect()
def setIDs(self,IDs):
self.IDs = IDs
def get_flag_residues(flag,boundaries):
'''gets the residue labeling for given flag (iflag,u1flag,u2flag)'''
residues = []
for bound in boundaries:
tmpres = []
start,stop = (int(boundaries[bound][0]),int(boundaries[bound][1]))
logInfo('start: %i\nstop: %i'%(start,stop))
for i in range(0,len(flag)):
if flag[i] != '.' and i in range(start,stop+1):
tmpres.append(str(i+1))
if tmpres != []:
residues.append('chain %s and (resi %s)'%(bound,','.join(tmpres)))
logInfo(residues)
return ' | '.join(residues)
#return residues
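#Example (sketch): a flag string 'AA..BB' with boundaries
#{'A': (0, 1), 'B': (4, 5)} yields the selection
#'chain A and (resi 1,2) | chain B and (resi 5,6)' (resi is 1-based;
#dict iteration order may vary under Python 2).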
class GFTransition:
"""Class defining the transition states in a GeoFold DAG"""
def __init__(self,number = 0, coords = ((0,0),(0,0),(0,0),(0,0)), dagfile = ''):
#info from imgmap
self.number = number
self.coords = coords
self.dagfile = dagfile
#info from dag
if dagfile != '':
success = self.read_dagfile(dagfile)
assert success == True, 'Could not read dagfile: %s'%(dagfile)
else:
self.f = 0
self.u1 = 0
self.u2 = 0
self.entropy = 0.
self.cuttype = ''
self.iseam = 0
self.traffic = 0.
def read_dagfile(self,dagfile):
try:
readDAG = open(dagfile,'r')
except IOError:
sys.stderr.write('\n\nGFTransition::Error: Could not open file: %s\n'%(dagfile))
sys.stderr.flush()
return False
while 1:
line = readDAG.readline()
if line == '':
sys.stderr.write("\n\nError: End of file reached\n")
sys.stderr.flush()
return False
#Find TSTATE line
if line[0:6] == 'TSTATE':
line = line.split()
if int(line[1]) == self.number:
try:
self.f = int(line[2])
self.u1 = int(line[3])
self.u2 = int(line[4])
self.entropy = float(line[5])
self.cuttype = line[6]
self.iseam = int(line[7])
self.traffic = float(line[8])
except Exception as e:
sys.stderr.write("\n\nError: %s\n"%(e.message))
sys.stderr.flush()
return False
else:
return True
def contains_point(self,(x,y)):
"""Returns True if the given coordinate is within the space on the map
defined by this intermediate (e.g. (x,y) lies within the box bounded by
self.coords). This uses the left-hand test"""
((x1,y1),(x2,y2),(x3,y3),(x4,y4)) = self.coords
#1,2
if self.isLeft((x1,y1),(x2,y2),(x,y)):
return False
#2,3
if self.isLeft((x2,y2),(x3,y3),(x,y)):
return False
#3,4
if self.isLeft((x3,y3),(x4,y4),(x,y)):
return False
#4,5
if self.isLeft((x4,y4),(x1,y1),(x,y)):
return False
return True
def isLeft(self,(x1,y1),(x2,y2),(x,y)):
A = -(y2-y1)
B = x2-x1
C = -(A*x1 + B*y1)
D = A*x + B*y + C
return D > 0
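#Note (sketch): D above equals the 2D cross product (p2-p1) x (p-p1), so its
#sign tells which side of the directed edge p1->p2 the point (x,y) lies on;
#with image coordinates (y increasing downwards) the 'left' sense is mirrored.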
def setIDs(self,IDs):
self.IDs = IDs
def show(self,intermediates,pymol,boundaries):
'''Displays the transition state on the pymol viewer'''
#import pymol
if self.u2 == 0:
u2 = 0
for intermediate in intermediates:
if intermediate.number == self.f:
f = intermediate
if intermediate.number == self.u1:
u1 = intermediate
if intermediate.number == self.u2:
u2 = intermediate
#u1res = '(%s) and %s'%(get_flag_residues(u1.iflag),self.ID)
u1res = 'model %s and (%s)'%(self.IDs[0][0],get_flag_residues(u1.iflag,boundaries))
logInfo('u1res: %s'%(u1res))
if u2 != 0:
#u2res = '(%s) and %s'%(get_flag_residues(u2.iflag),self.ID)
u2res = 'model %s and (%s)'%(self.IDs[0][0],get_flag_residues(u2.iflag,boundaries))
logInfo('u2res: %s'%(u2res))
#pymol.cmd.select('f',fres)
f.show(pymol,boundaries)
pymol.cmd.select('u1',u1res)
if u2 != 0:
pymol.cmd.select('u2',u2res)
pymol.cmd.set("cartoon_color",'red','u1')
pymol.cmd.set("cartoon_color",'blue','u2')
pymol.cmd.deselect()
#is seam
else:
pymol.cmd.hide('cartoon',self.IDs[0][0])
pymol.cmd.hide('ribbon',self.IDs[0][0])
u1.show(pymol,boundaries)
def parseImgMap(mapFile,dag='',IDs=[]):
"""This function takes the html imagemap file generated by GeoFold
and uses it to create a list of Intermediate and transition states"""
transitions = []
intermediates = []
readMap = open(mapFile,'r')
for line in readMap:
if "<area shape" in line:
line = line.split('"')
querystring = line[5].split('=')
#intermediate
if line[1] == 'circle':
number = int(querystring[1].strip('n&dag'))
if dag == '':
dagfile = querystring[2].strip('&')
else:
dagfile = dag
for i in range(6,len(line)):
if line[i].strip() == 'coords=':
coords = line[i+1].split(',')
center = (int(coords[0]),int(coords[1]))
radius = int(coords[2])
tmpIntermediate = GFIntermediate(number,center,radius,dagfile)
tmpIntermediate.setIDs(IDs)
intermediates.append(tmpIntermediate)
#Transition
else:
number = int(querystring[1].strip('t&dagbphsm'))
if dag == '':
dagfile = querystring[2].strip('&')
else:
dagfile = dag
for i in range(6,len(line)):
if line[i].strip() == 'coords=':
coord = line[i+1].split(',')
coord = [int(j) for j in coord]
coords = ((coord[0],coord[1]),(coord[2],coord[3]),(coord[4],coord[5]),(coord[6],coord[7]))
tmpTransition = GFTransition(number,coords,dagfile)
tmpTransition.setIDs(IDs)
transitions.append(tmpTransition)
return (intermediates,transitions)
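#Example (hypothetical) imagemap lines this parser expects, given the
#split('"') indexing above (chunk 1 = shape, chunk 5 = querystring, and a
#'coords=' chunk followed by the coordinate list):
#  <area shape="circle" alt="" href="?n=12&dag=out.dag.out&" coords="120,88,14">
#  <area shape="poly" alt="" href="?t=7&dag=out.dag.out&" coords="10,10,40,10,40,30,10,30">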
def GetChainBoundaries(intermediates):
'''Given a set of IDs find Intermediate 1 and extrapolate the boundaries
from it'''
boundaries = {}
for intermediate in intermediates:
if intermediate.number == 1:
iflag = intermediate.iflag
prev = ''
first = 0
second = 0
for i in range(0,len(iflag)):
if iflag[i] != prev:
if prev != '':
second = i-1
boundaries[prev] = (first,second)
prev = iflag[i]
first = i
if i == len(iflag)-1:
second = i
boundaries[prev] = (first,second)
logInfo(boundaries)
return boundaries
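#Example (sketch): an intermediate-1 iflag of 'AAABB' yields
#{'A': (0, 2), 'B': (3, 4)}: each contiguous run of one chain label is
#mapped to its (first, last) zero-based indices.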
class dagPanel(wx.lib.scrolledpanel.ScrolledPanel):
def __init__(self,parent, dagImg,dagMap,dagFile,IDs=[]):
wx.lib.scrolledpanel.ScrolledPanel.__init__(self,parent,-1)
self.intermediates,self.transitions = parseImgMap(dagMap,dagFile,IDs)
self.boundaries = GetChainBoundaries(self.intermediates)
print self.boundaries
vbox = wx.BoxSizer(wx.VERTICAL)
img = wx.StaticBitmap(self, -1, wx.Bitmap(dagImg, wx.BITMAP_TYPE_ANY))
vbox.Add(img)
img.Bind(wx.EVT_LEFT_UP,self.onClick)
if platform.system() == 'Darwin':
img.Bind(wx.EVT_MOUSE_EVENTS,self.osxClick)
self.SetSizer(vbox)
self.SetupScrolling()
def setPyMOL(self,pymol):
self.pymol = pymol
self.cmd = pymol.cmd
self.stored = pymol.stored
def osxClick(self,event):
'''OSX doesn't recognize a click like linux does apparently'''
if event.GetClickCount() == 1 and event.ButtonUp():
self.onClick(event)
def onClick(self,event):
(x,y) = event.GetPosition()
if platform.system() != 'Darwin' and platform.system() != 'Windows':
(x,y) = self.CalcUnscrolledPosition(x,y)
notFound = True
for intermediate in self.intermediates:
if intermediate.contains_point((x,y)):
logInfo("intermediate %d"%(intermediate.number))
intermediate.show(self.pymol,self.boundaries)
notFound = False
break
if notFound:
for transition in self.transitions:
if transition.contains_point((x,y)):
logInfo("transition %d"%(transition.number))
transition.show(self.intermediates,self.pymol,self.boundaries)
notFound = False
break
if notFound:
logInfo("notFound")
class DagViewPanel(wx.lib.scrolledpanel.ScrolledPanel):
def __init__(self,parent,W,H):
#ScrolledPanel initialization
winh = H-330
wx.lib.scrolledpanel.ScrolledPanel.__init__(self,parent,id=-1,pos=(10,60),size=(340,winh),name="Pathway Visualization")
self.SetBackgroundColour("#333333")
self.parent = parent
logInfo('385: ScrolledPanel initialized!')
#Title labeling
if platform.system() == 'Windows':
self.lblDagView = wx.StaticText(self,-1,'Pathway Visualization',(25,15),(270,25),style=wx.ALIGN_CENTRE)
self.lblDagView.SetFont(wx.Font(12,wx.DEFAULT,wx.ITALIC,wx.BOLD))
#elif platform.system() == 'Darwin':
# self.lblDagView = wx.StaticBitmap(self,-1,wx.Image(self.parent.parent.scriptdir+"/images/osx/dagview/lblDagView.png",wx.BITMAP_TYPE_PNG).ConvertToBitmap(),pos=(25,15),size=(270,25))
else:
self.lblDagView = wx.StaticText(self,-1,'Pathway Visualization',pos=(90,15),style = wx.ALIGN_CENTRE)
self.lblDagView.SetFont(wx.Font(12,wx.DEFAULT,wx.ITALIC,wx.BOLD))
self.lblDagView.SetForegroundColour("#FFFFFF")
logInfo('397: Title label set!')
#Help Button
if platform.system() == 'Darwin':
self.HelpBtn = wx.BitmapButton(self,id=-1,bitmap=wx.Image(self.parent.parent.scriptdir+'/images/osx/HelpBtn.png',wx.BITMAP_TYPE_PNG).ConvertToBitmap(),pos=(295,10),size=(25,25))
else:
self.HelpBtn = wx.Button(self, id=-1,label='?',pos=(295,10),size=(25,25))
self.HelpBtn.SetForegroundColour("#0000FF")
self.HelpBtn.SetFont(wx.Font(10,wx.DEFAULT,wx.NORMAL,wx.BOLD))
self.HelpBtn.Bind(wx.EVT_BUTTON,self.showHelp)
self.HelpBtn.SetToolTipString("Display the help file for this window")
logInfo('408: Help button set')
#Subtitle text
if platform.system() == 'Windows':
self.lblInst = wx.StaticText(self,-1,'View GeoFold pathways',(0,45),(320,25),wx.ALIGN_CENTRE)
self.lblInst.SetFont(wx.Font(10,wx.DEFAULT,wx.ITALIC,wx.NORMAL))
#elif platform.system() == 'Darwin':
# self.lblInst = wx.StaticBitmap(self,-1,wx.image(self.parent.parent.scriptdir+'/images/osx/dagview/lblInstDagView.png',wx.BITMAP_TYPE_PNG).ConvertToBitmap(),pos=(0,45),size=(320,25))
else:
self.lblInst = wx.StaticText(self,-1,'View GeoFold pathways',pos=(20,45),style=wx.ALIGN_CENTRE)
self.lblInst.SetFont(wx.Font(10,wx.DEFAULT,wx.ITALIC,wx.NORMAL))
resizeTextControlForUNIX(self.lblInst,0,self.GetSize()[0])
self.lblInst.SetForegroundColour("#FFFFFF")
logInfo('421: Subtitle set!')
#PDB button
self.lblPDB = wx.StaticText(self,-1,"None Uploaded", pos=(10,103),size=(180,25),style=wx.ALIGN_CENTRE)
self.lblPDB.SetFont(wx.Font(10,wx.DEFAULT,wx.NORMAL,wx.BOLD))
if platform.system() == 'Linux':
resizeTextControlForUNIX(self.lblPDB,10,180)
self.lblPDB.SetForegroundColour("#FFFFFF")
#if platform.system() == 'Darwin':
# self.btnLoad = wx.BitmapButton(self,id=-1,bitmap=wx.Image(self.parent.parent.scriptdir+'/images/osx/dagview/btnLoadPDB.png',wx.BITMAP_TYPE_PNG).ConvertToBitmap(),pos=(200,100),size=(110,25))
#else:
self.btnLoad = wx.Button(self,id=-1,label='Load zip file',pos=(200,100),size=(110,25))
self.btnLoad.SetForegroundColour("#000000")
self.btnLoad.SetFont(wx.Font(10,wx.DEFAULT,wx.NORMAL,wx.BOLD))
self.btnLoad.Bind(wx.EVT_BUTTON,self.loadZip)
self.btnLoad.SetToolTipString('Load the zip file containing the GeoFold output')
logInfo('437: Zip button set!')
#combobox
self.dagMenu = wx.ComboBox(self, pos=(10,138), size=(110, 25), choices=[], style=wx.CB_READONLY)
#GO! Button ViewDag
#ypos = self.btnDagPng.GetPosition()[1]+self.btnDagPng.GetSize()[1]+10
ypos = self.dagMenu.GetPosition()[1]+self.dagMenu.GetSize()[1]+10
#if platform.system() == 'Darwin':
# self.btnViewDag = wx.BitmapButton(self,id=-1,bitmap=wx.Image(self.parent.parent.scriptdir+'/images/osx/dagview/btnViewDag.png',wx.BITMAP_TYPE_PNG).ConvertToBitmap(),pos=(80,ypos),size=(150,25))
#else:
self.btnViewDag = wx.Button(self,id=-1,label="View Pathway!",pos=(80,ypos),size=(150,25))
self.btnViewDag.SetForegroundColour("#000000")
self.btnViewDag.SetFont(wx.Font(10,wx.DEFAULT,wx.ITALIC,wx.BOLD))
self.btnViewDag.Bind(wx.EVT_BUTTON,self.ViewDagClick)
logInfo('496: View Dag Button set')
#Scrolling set up
logInfo('499: Setting scrolling...')
logInfo('501: GetSize = %d'%(self.btnViewDag.GetSize()[1]))
self.scrollh = self.btnViewDag.GetPosition()[1] + self.btnViewDag.GetSize()[1] + 5
logInfo('503: scrollh set to %d'%(self.scrollh))
self.SetScrollbars(1,1,320,self.scrollh)
logInfo('505: Scrollbars set!')
self.winscrollpos = 0
self.Bind(wx.EVT_SCROLLWIN, self.scrolled)
logInfo('508: Scrolling set.')
logInfo('509: Initialization complete!')
def loadZip(self,event):
'''opens a file dialog to open the zip file. Loads the PDB and populates
the dagMenu'''
#create file dialog
logInfo("Clicked Load Zip button")
dlg = wx.FileDialog(self, message = 'Choose a File',defaultDir=self.seqWin.cwd,defaultFile='',
wildcard="Zip Files (*.zip)|*.zip",style=wx.OPEN | wx.CHANGE_DIR)
if dlg.ShowModal() == wx.ID_OK:
paths = dlg.GetPaths()
#Change cwd to the last opened file
if platform.system()=='Windows':
lastDirIndx = paths[len(paths)-1].rfind('\\')
else:
lastDirIndx = paths[len(paths)-1].rfind('/')
self.seqWin.cwd = str(paths[len(paths)-1][0:lastDirIndx])
filename = str(paths[0])
logInfo('filename = %s'%(filename))
if platform.system() == 'Windows':
localFile = filename.split('\\')
else:
localFile = filename.split('/')
localFile = localFile[len(localFile)-1]
logInfo('localFile: %s'%(localFile))
self.lblPDB.SetLabel(localFile)
self.lblPDB.SetForegroundColour('#FFFFFF')
if platform.system() == 'Linux':
resizeTextControlForUNIX(self.lblPDB,10,180)
#run findFiles on the item
status,dags = self.findFiles(filename)
logInfo(dags)
#Error handling
logInfo(status)
if status != 0:
msgs = {-1:'The zip file selected is invalid.\nPlease try again',-2:'There was an error unzipping the file',-3:'No valid output was found in the zip file',-4:'PDB file could not be loaded from zip file'}
logInfo(msgs[status])
wx.MessageBox(msgs[status], "", wx.OK|wx.ICON_EXCLAMATION)
return -1
#Process dags to just show the base name
newdags = []
logInfo('newdags:')
for dag in dags:
dag = localFile.split('.zip')[0]+'_'+dag.split('_')[len(dag.split('_'))-1]
newdags.append(dag)
logInfo(dag)
#Populate dagMenu
self.dagMenu.Clear()
self.dagMenu.AppendItems(newdags)
return 0
def setSeqWin(self, seqWin):
self.seqWin = seqWin
def showHelp(self, event):
'''Open the help page'''
if platform.system() == 'Darwin':
try:
browser = webbrowser.get('Safari')
except Exception as e:
print 'Could not load Safari! The help files are located at %s/help'%(self.parent.parent.scriptdir)
return
browser.open(self.parent.parent.scriptdir+'/help/dagview.html')
else:
webbrowser.open(self.parent.parent.scriptdir+'/help/dagview.html')
def scrolled(self,event):
self.winscrollpos = self.GetScrollPos(wx.VERTICAL)
event.Skip()
def setPyMOL(self,pymol):
'''Sets PyMOL to be used for this class'''
self.pymol = pymol
self.cmd = pymol.cmd
self.stored = pymol.stored
def activate(self):
self.Scroll(0, self.winscrollpos)
def loadPDB(self,event):
'''Select PDB file to load'''
logInfo("Clicked Load PDB button")
dlg = wx.FileDialog(self, message = 'Choose a File',defaultDir=self.seqWin.cwd,defaultFile='',style=wx.OPEN | wx.CHANGE_DIR)
if dlg.ShowModal() == wx.ID_OK:
paths = dlg.GetPaths()
#Change cwd to the last opened file
if platform.system()=='Windows':
lastDirIndx = paths[len(paths)-1].rfind('\\')
else:
lastDirIndx = paths[len(paths)-1].rfind('/')
self.seqWin.cwd = str(paths[len(paths)-1][0:lastDirIndx])
filename = str(paths[0])
self.loadedPdb = filename
localPdb = filename[lastDirIndx+1:]
goToSandbox()
try:
shutil.copy(filename,'params.pdb')
except:
logInfo('File could not be copied')
#Delete a file if we're loading a new one
try:
self.cmd.remove('params')
self.cmd.delete('params')
except:
pass
try:
self.cmd.load('params.pdb','params')
except:
wx.MessageBox('The file %s could not be read!'%(filename),'File cannot be read', wx.OK|wx.ICON_EXCLAMATION)
return
logInfo('PDB file loaded',filename)
self.cmd.select('paramssele','model params')
self.cmd.hide('everything','paramssele')
self.cmd.delete('paramssele')
self.lblPDB.SetLabel(localPdb)
self.lblPDB.SetForegroundColour('#FFFFFF')
if platform.system() == 'Linux':
resizeTextControlForUNIX(self.lblPDB,10,180)
def loadDagOut(self,event):
'''Load .dag.out file'''
logInfo("Clicked Load DagOut button")
dlg = wx.FileDialog(self, message = 'Choose a File',defaultDir=self.seqWin.cwd,defaultFile='',style=wx.OPEN | wx.CHANGE_DIR)
if dlg.ShowModal() == wx.ID_OK:
paths = dlg.GetPaths()
#Change cwd to the last opened file
if platform.system()=='Windows':
lastDirIndx = paths[len(paths)-1].rfind('\\')
else:
lastDirIndx = paths[len(paths)-1].rfind('/')
self.seqWin.cwd = str(paths[len(paths)-1][0:lastDirIndx])
filename = str(paths[0])
self.loadedDagOut = filename
localDagOut = filename[lastDirIndx+1:]
goToSandbox()
try:
shutil.copy(filename,'params.dag.out')
except:
pass
logInfo('dag.out file loaded',filename)
self.lblDagOut.SetLabel(localDagOut)
self.lblDagOut.SetForegroundColour('#FFFFFF')
if platform.system() == 'Linux':
resizeTextControlForUNIX(self.lblDagOut,10,180)
def loadDagHtml(self,event):
'''Load .dag.html file'''
logInfo("Clicked Load DagHtml button")
dlg = wx.FileDialog(self, message = 'Choose a File',defaultDir=self.seqWin.cwd,defaultFile='',style=wx.OPEN | wx.CHANGE_DIR)
if dlg.ShowModal() == wx.ID_OK:
paths = dlg.GetPaths()
#Change cwd to the last opened file
if platform.system()=='Windows':
lastDirIndx = paths[len(paths)-1].rfind('\\')
else:
lastDirIndx = paths[len(paths)-1].rfind('/')
self.seqWin.cwd = str(paths[len(paths)-1][0:lastDirIndx])
filename = str(paths[0])
self.loadedDagHtml = filename
localDagHtml = filename[lastDirIndx+1:]
goToSandbox()
try:
shutil.copy(filename,'params.dag.html')
except:
pass
logInfo('dag.html file loaded',filename)
self.lblDagHtml.SetLabel(localDagHtml)
self.lblDagHtml.SetForegroundColour('#FFFFFF')
if platform.system() == 'Linux':
resizeTextControlForUNIX(self.lblDagHtml,10,180)
def loadDagPng(self,event):
'''Load .dag.png file'''
logInfo("Clicked Load DagPng button")
dlg = wx.FileDialog(self, message = 'Choose a File',defaultDir=self.seqWin.cwd,defaultFile='',style=wx.OPEN | wx.CHANGE_DIR)
if dlg.ShowModal() == wx.ID_OK:
paths = dlg.GetPaths()
#Change cwd to the last opened file
if platform.system()=='Windows':
lastDirIndx = paths[len(paths)-1].rfind('\\')
else:
lastDirIndx = paths[len(paths)-1].rfind('/')
self.seqWin.cwd = str(paths[len(paths)-1][0:lastDirIndx])
filename = str(paths[0])
self.loadedDagPng = filename
localDagPng = filename[lastDirIndx+1:]
goToSandbox()
try:
shutil.copy(filename,'params.dag.png')
except:
pass
logInfo('dag.png file loaded',filename)
self.lblDagPng.SetLabel(localDagPng)
self.lblDagPng.SetForegroundColour('#FFFFFF')
if platform.system() == 'Linux':
resizeTextControlForUNIX(self.lblDagPng,10,180)
def ViewDagClick(self,event):
logInfo('View Dag Button Clicked')
self.cmd.show_as('cartoon','Native')
#self.cmd.color('purple',self.ID)
self.cmd.set("cartoon_color",'purple','Native')
try:
self.frame.Destroy()
except:
pass
dagbase = self.dagMenu.GetValue()
if platform.system() == 'Windows':
self.loadedDagHtml = '%s\\%s.dag.html'%(self.cwd,dagbase)
self.loadedDagOut = '%s\\%s.dag.out'%(self.cwd,dagbase)
self.loadedDagPng = '%s\\%s.dag.png'%(self.cwd,dagbase)
else:
self.loadedDagHtml = '%s/%s.dag.html'%(self.cwd,dagbase)
self.loadedDagOut = '%s/%s.dag.out'%(self.cwd,dagbase)
self.loadedDagPng = '%s/%s.dag.png'%(self.cwd,dagbase)
#self.intermediates,self.transitions = parseImgMap(self.loadedDagHtml,self.loadedDagOut,self.ID)
self.frame = wx.Frame(None,-1)
self.DagPanel = dagPanel(self.frame,self.loadedDagPng,self.loadedDagHtml,self.loadedDagOut,self.IDs)
self.DagPanel.setPyMOL(self.pymol)
self.frame.Show()
def findFiles(self,zipDir):
'''Takes a given zip file, extracts it in the sandbox and picks out all
the files able to be viewed. Returns a list to be put in a ComboBox
so the user can select which one to view. If an error occurs, returns
a negative number used to identify the error and handle it'''
import zipfile
logInfo('Calling findFiles')
output = []
#Check if selected file is valid
if not zipfile.is_zipfile(zipDir):
return -1, []
#Unzip the file in the sandbox
unzipped = zipfile.ZipFile(zipDir)
info = unzipped.infolist()
filename = info[0].filename[:len(info[0].filename)-1]
goToSandbox()
try:
unzipped.extractall()
except: #failed to unzip
return -2, []
#use glob to get all dag.out files
if platform.system() == 'Windows':
globDir = '%s\\%s\\*.dag.out'%(os.getcwd(),filename)
else:
globDir = '%s/%s/*.dag.out'%(os.getcwd(),filename)
logInfo('globDir: %s'%(globDir))
dagOuts = glob.glob(globDir)
#find pdb file
if platform.system() == 'Windows':
self.cwd = '%s\\%s'%(os.getcwd(),filename)
pdb = glob.glob('%s\\%s\\%s.pdb'%(os.getcwd(),filename,filename))
else:
self.cwd = '%s/%s'%(os.getcwd(),filename)
pdb = glob.glob('%s/%s/%s.pdb'%(os.getcwd(),filename,filename))
logInfo('pdb: %s'%(pdb))
if len(pdb) == 0:
return -4, []
#for each file in dagOuts
for dag in dagOuts:
#get base filename, looks complicated in case '.dag.out'
#is present elsewhere in the file path
base = '.dag.out'.join(dag.split('.dag.out')[:len(dag.split('.dag.out'))-1])
logInfo('base: %s'%(base))
#is dag.png there?
dagPng = len(glob.glob(base+'.dag.png'))==1
#is dag.html there?
dagHtml = len(glob.glob(base+'.dag.html'))==1
#if so, append to output
if dagPng and dagHtml:
output.append(base)
#no valid output
logInfo('len(output): %i'%(len(output)))
logInfo(output)
if len(output) == 0:
return -3, []
#everything worked!
oldIDind = len(self.seqWin.IDs)
self.seqWin.PyMOLPDBLoad(1, pdb[0], "Show")
newIDs = self.seqWin.IDs[oldIDind:]
self.IDs = [] #tuples of the form (model,chain)
for ID in newIDs:
logInfo('newID: %s'%(ID))
ID = (ID[:len(ID)-2],ID[len(ID)-1])
self.IDs.append(ID)
logInfo('ID added: (%s,%s)'%ID)
logInfo('self.IDs')
logInfo(self.IDs)
logInfo('end self.IDs')
native = ''
for ID in self.IDs:
native += 'model %s and chain %s+'%ID
native = native[:len(native)-1]
logInfo('native: %s'%(native))
self.cmd.select('Native',native)
self.cmd.show_as('cartoon','Native')
self.cmd.set('cartoon_color','purple','Native')
self.cmd.deselect()
return 0,output
def startPyMOL(pdb):
'''starts PyMOL for us. Only for testing. PyMOL should already be opened
by InteractiveROSETTA'''
import __main__
__main__.pymol_argv = ["pymol", "-qhxi"]
import pymol
pymol.finish_launching()
pymol.cmd.load(pdb)
pymol.cmd.show_as('cartoon')
pymol.cmd.color('purple')
return pymol
if __name__ == '__main__':
intermediates,transitions = parseImgMap('2b3p_florynewtest.21846_1.dag.html','2b3p_florynewtest.21846_1.dag.out')
pymol = startPyMOL('2b3p_florynewtest.21846.pdb')
'''for intermediate in intermediates:
intermediate.show(pymol)
time.sleep(0.1)
transitions[0].show(intermediates)
for transition in transitions:
transition.show(intermediates)
time.sleep(0.1)
for intermediate in intermediates:
if intermediate.number == 302:
intermediate.show(pymol)'''
app = wx.App(0)
frame = wx.Frame(None,-1)
testPanel = dagPanel(frame,'2b3p_florynewtest.21846_1.dag.png','2b3p_florynewtest.21846_1.dag.html','2b3p_florynewtest.21846_1.dag.out')
testPanel.setPyMOL(pymol)
frame.Show()
app.MainLoop()
|
schenc3/InteractiveROSETTA
|
InteractiveROSETTA/scripts/dagview.py
|
Python
|
gpl-2.0
| 36,575
|
[
"PyMOL"
] |
5add170d030ef60ee976658b9435b6ff79fa66baa5f729d235d5daaaf6c7a09a
|
# standard modules
import copy
import datetime
import logging
import os
import sys
import traceback
# 3rd party modules
# PFP modules
from scripts import pfp_clim
from scripts import pfp_compliance
from scripts import pfp_cpd_barr
from scripts import pfp_cpd_mchugh
from scripts import pfp_cpd_mcnew
from scripts import pfp_io
from scripts import pfp_levels
from scripts import pfp_mpt
from scripts import pfp_plot
from scripts import pfp_utils
logger = logging.getLogger("pfp_log")
#def do_batch_cfcheck(cfg):
#"""
#Purpose:
#Wrapper to call CF compliance check routine.
#Author: PRI
#Date: July 2021
#"""
#nc_file_uri = pfp_io.get_outfilenamefromcf(cfg)
#pfp_compliance.CheckCFCompliance(nc_file_uri)
#return
def do_batch_fingerprints(cfg):
"""
Purpose:
Plot fingerprints at the end of concatenation, L4 and L5.
Author: PRI
Date: Back in the day
"""
cfg_fp_uri = os.path.join("controlfiles", "standard", "fingerprint.txt")
cfg_fp = pfp_io.get_controlfilecontents(cfg_fp_uri)
file_name = pfp_io.get_outfilenamefromcf(cfg)
file_path = os.path.join(os.path.split(file_name)[0], "")
plot_path = pfp_utils.get_keyvaluefromcf(cfg, ["Files"], "plot_path", default="plots/")
cfg_fp["Files"] = {"file_path": file_path, "in_filename": os.path.split(file_name)[1],
"plot_path": plot_path}
cfg_fp["Options"] = {"call_mode": "batch", "show_plots": "No"}
msg = "Doing fingerprint plots using " + cfg_fp["Files"]["in_filename"]
logger.info(msg)
pfp_plot.plot_fingerprint(cfg_fp)
logger.info("Finished fingerprint plots")
return
def do_L1_batch(main_ui, cf_level):
for i in list(cf_level.keys()):
# check the stop flag
if main_ui.stop_flag:
# break out of the loop if user requested stop
break
cf_file_name = os.path.split(cf_level[i])
msg = "Starting L1 processing with " + cf_file_name[1]
logger.info(msg)
try:
cf_l1 = pfp_io.get_controlfilecontents(cf_level[i])
if not pfp_compliance.l1_update_controlfile(cf_l1):
continue
ds1 = pfp_levels.l1qc(cf_l1)
outfilename = pfp_io.get_outfilenamefromcf(cf_l1)
pfp_io.NetCDFWrite(outfilename, ds1)
msg = "Finished L1 processing with " + cf_file_name[1]
logger.info(msg)
logger.info("")
except Exception:
msg = "Error occurred during L1 processing " + cf_file_name[1]
logger.error(msg)
error_message = traceback.format_exc()
logger.error(error_message)
continue
return
def do_L2_batch(main_ui, cf_level):
for i in list(cf_level.keys()):
# check the stop flag
if main_ui.stop_flag:
# break out of the loop if user requested stop
break
cf_file_name = os.path.split(cf_level[i])
msg = "Starting L2 processing with " + cf_file_name[1]
logger.info(msg)
try:
cf_l2 = pfp_io.get_controlfilecontents(cf_level[i])
if not pfp_compliance.l2_update_controlfile(cf_l2):
continue
if "Options" not in cf_l2:
cf_l2["Options"] = {}
cf_l2["Options"]["call_mode"] = "batch"
cf_l2["Options"]["show_plots"] = "No"
infilename = pfp_io.get_infilenamefromcf(cf_l2)
ds1 = pfp_io.NetCDFRead(infilename)
if ds1.returncodes["value"] != 0: return
ds2 = pfp_levels.l2qc(cf_l2, ds1)
outfilename = pfp_io.get_outfilenamefromcf(cf_l2)
pfp_io.NetCDFWrite(outfilename, ds2)
msg = "Finished L2 processing with " + cf_file_name[1]
logger.info(msg)
if "Plots" in list(cf_l2.keys()):
logger.info("Plotting L1 and L2 data")
for nFig in list(cf_l2['Plots'].keys()):
if "(disabled)" in nFig:
continue
plt_cf = cf_l2['Plots'][str(nFig)]
if 'type' in plt_cf.keys():
if str(plt_cf['type']).lower() == 'xy':
pfp_plot.plotxy(cf_l2, nFig, plt_cf, ds1, ds2)
else:
pfp_plot.plottimeseries(cf_l2, nFig, ds1, ds2)
else:
pfp_plot.plottimeseries(cf_l2, nFig, ds1, ds2)
logger.info("Finished plotting L1 and L2 data")
logger.info("")
except Exception:
msg = "Error occurred during L2 processing " + cf_file_name[1]
logger.error(msg)
error_message = traceback.format_exc()
logger.error(error_message)
continue
return
def do_L3_batch(main_ui, cf_level):
for i in list(cf_level.keys()):
# check the stop flag
if main_ui.stop_flag:
# break out of the loop if user requested stop
break
cf_file_name = os.path.split(cf_level[i])
msg = "Starting L3 processing with " + cf_file_name[1]
logger.info(msg)
try:
cf_l3 = pfp_io.get_controlfilecontents(cf_level[i])
if not pfp_compliance.l3_update_controlfile(cf_l3):
continue
if "Options" not in cf_l3:
cf_l3["Options"] = {}
cf_l3["Options"]["call_mode"] = "batch"
cf_l3["Options"]["show_plots"] = "No"
infilename = pfp_io.get_infilenamefromcf(cf_l3)
ds2 = pfp_io.NetCDFRead(infilename)
if ds2.returncodes["value"] != 0: return
ds3 = pfp_levels.l3qc(cf_l3, ds2)
outfilename = pfp_io.get_outfilenamefromcf(cf_l3)
pfp_io.NetCDFWrite(outfilename, ds3)
msg = "Finished L3 processing with " + cf_file_name[1]
logger.info(msg)
if "Plots" in list(cf_l3.keys()):
logger.info("Plotting L3 data")
for nFig in list(cf_l3['Plots'].keys()):
if "(disabled)" in nFig:
continue
plt_cf = cf_l3['Plots'][str(nFig)]
if 'type' in plt_cf.keys():
if str(plt_cf['type']).lower() == 'xy':
pfp_plot.plotxy(cf_l3, nFig, plt_cf, ds2, ds3)
else:
pfp_plot.plottimeseries(cf_l3, nFig, ds2, ds3)
else:
pfp_plot.plottimeseries(cf_l3, nFig, ds2, ds3)
logger.info("Finished plotting L3 data")
logger.info("")
except Exception:
msg = "Error occurred during L3 processing " + cf_file_name[1]
logger.error(msg)
error_message = traceback.format_exc()
logger.error(error_message)
continue
return
def do_ecostress_batch(main_ui, cf_level):
for i in list(cf_level.keys()):
# check the stop flag
if main_ui.stop_flag:
# break out of the loop if user requested stop
break
cf_file_name = os.path.split(cf_level[i])
msg = "Starting ECOSTRESS output with " + cf_file_name[1]
logger.info(msg)
try:
cf = pfp_io.get_controlfilecontents(cf_level[i])
pfp_io.write_csv_ecostress(cf)
msg = "Finished ECOSTRESS output with " + cf_file_name[1]
logger.info(msg)
logger.info("")
except Exception:
msg = "Error occurred during ECOSTRESS output with " + cf_file_name[1]
logger.error(msg)
error_message = traceback.format_exc()
logger.error(error_message)
continue
return
def do_fluxnet_batch(main_ui, cf_level):
for i in list(cf_level.keys()):
# check the stop flag
if main_ui.stop_flag:
# break out of the loop if user requested stop
break
cf_file_name = os.path.split(cf_level[i])
msg = "Starting FluxNet output with " + cf_file_name[1]
logger.info(msg)
cf = pfp_io.get_controlfilecontents(cf_level[i])
pfp_io.write_csv_fluxnet(cf)
msg = "Finished FluxNet output with " + cf_file_name[1]
logger.info(msg)
logger.info("")
return
def do_reddyproc_batch(main_ui, cf_level):
for i in list(cf_level.keys()):
# check the stop flag
if main_ui.stop_flag:
# break out of the loop if user requested stop
break
cf_file_name = os.path.split(cf_level[i])
msg = "Starting REddyProc output with " + cf_file_name[1]
logger.info(msg)
cf = pfp_io.get_controlfilecontents(cf_level[i])
pfp_io.write_tsv_reddyproc(cf)
msg = "Finished REddyProc output with " + cf_file_name[1]
logger.info(msg)
logger.info("")
return
def do_concatenate_batch(main_ui, cf_level):
sites = sorted(list(cf_level.keys()), key=int)
for i in sites:
if not os.path.isfile(cf_level[i]):
msg = " Control file " + cf_level[i] + " not found"
logger.error(msg)
continue
# check the stop flag
if main_ui.stop_flag:
# break out of the loop if user requested stop
break
cf_file_name = os.path.split(cf_level[i])
msg = "Starting concatenation with " + cf_file_name[1]
logger.info(msg)
try:
cf_cc = pfp_io.get_controlfilecontents(cf_level[i])
if not pfp_compliance.concatenate_update_controlfile(cf_cc):
continue
info = pfp_compliance.ParseConcatenateControlFile(cf_cc)
if not info["NetCDFConcatenate"]["OK"]:
msg = " Error occurred parsing the control file " + cf_file_name[1]
logger.error(msg)
continue
pfp_io.NetCDFConcatenate(info)
msg = "Finished concatenation with " + cf_file_name[1]
logger.info(msg)
# do the CF compliance check
#do_batch_cfcheck(cf_cc)
# and then plot the fingerprints for the concatenated files
do_batch_fingerprints(cf_cc)
logger.info("")
except Exception:
msg = "Error occurred during concatenation with " + cf_file_name[1]
logger.error(msg)
error_message = traceback.format_exc()
logger.error(error_message)
continue
return
def do_climatology_batch(main_ui, cf_level):
for i in list(cf_level.keys()):
if not os.path.isfile(cf_level[i]):
msg = " Control file " + cf_level[i] + " not found"
logger.error(msg)
continue
# check the stop flag
if main_ui.stop_flag:
# break out of the loop if user requested stop
break
cf_file_name = os.path.split(cf_level[i])
msg = "Starting climatology with " + cf_file_name[1]
logger.info(msg)
try:
cf_ct = pfp_io.get_controlfilecontents(cf_level[i])
if not pfp_compliance.climatology_update_controlfile(cf_ct):
continue
pfp_clim.climatology(cf_ct)
msg = "Finished climatology with " + cf_file_name[1]
logger.info(msg)
logger.info("")
except Exception:
msg = "Error occurred during climatology with " + cf_file_name[1]
logger.error(msg)
error_message = traceback.format_exc()
logger.error(error_message)
continue
return
def do_cpd_barr_batch(main_ui, cf_level):
for i in list(cf_level.keys()):
# check the stop flag
if main_ui.stop_flag:
# break out of the loop if user requested stop
break
cf_file_name = os.path.split(cf_level[i])
msg = "Starting CPD (Barr) with " + cf_file_name[1]
logger.info(msg)
try:
cf = pfp_io.get_controlfilecontents(cf_level[i])
if not pfp_compliance.cpd_barr_update_controlfile(cf):
continue
if "Options" not in cf:
cf["Options"] = {}
cf["Options"]["call_mode"] = "batch"
cf["Options"]["show_plots"] = "No"
pfp_cpd_barr.cpd_barr_main(cf)
msg = "Finished CPD (Barr) with " + cf_file_name[1]
logger.info(msg)
logger.info("")
except Exception:
msg = "Error occurred during CPD (Barr) with " + cf_file_name[1]
logger.error(msg)
error_message = traceback.format_exc()
logger.error(error_message)
continue
return
def do_cpd_mchugh_batch(main_ui, cf_level):
for i in list(cf_level.keys()):
# check the stop flag
if main_ui.stop_flag:
# break out of the loop if user requested stop
break
cf_file_name = os.path.split(cf_level[i])
msg = "Starting CPD (McHugh) with " + cf_file_name[1]
logger.info(msg)
try:
cf = pfp_io.get_controlfilecontents(cf_level[i])
if not pfp_compliance.cpd_mchugh_update_controlfile(cf):
continue
if "Options" not in cf:
cf["Options"] = {}
cf["Options"]["call_mode"] = "batch"
cf["Options"]["show_plots"] = "No"
pfp_cpd_mchugh.cpd_mchugh_main(cf)
msg = "Finished CPD (McHugh) with " + cf_file_name[1]
logger.info(msg)
logger.info("")
except Exception:
msg = "Error occurred during CPD (McHugh) with " + cf_file_name[1]
logger.error(msg)
error_message = traceback.format_exc()
logger.error(error_message)
continue
return
def do_cpd_mcnew_batch(main_ui, cf_level):
for i in list(cf_level.keys()):
# check the stop flag
if main_ui.stop_flag:
# break out of the loop if user requested stop
break
cf_file_name = os.path.split(cf_level[i])
msg = "Starting CPD (McNew) with " + cf_file_name[1]
logger.info(msg)
try:
cf = pfp_io.get_controlfilecontents(cf_level[i])
if not pfp_compliance.cpd_mcnew_update_controlfile(cf):
continue
if "Options" not in cf:
cf["Options"] = {}
cf["Options"]["call_mode"] = "batch"
cf["Options"]["show_plots"] = "No"
pfp_cpd_mcnew.cpd_mcnew_main(cf)
msg = "Finished CPD (McNew) with " + cf_file_name[1]
logger.info(msg)
logger.info("")
except Exception:
msg = "Error occurred during CPD (McNew) with " + cf_file_name[1]
logger.error(msg)
error_message = traceback.format_exc()
logger.error(error_message)
continue
return
def do_mpt_batch(main_ui, cf_level):
for i in list(cf_level.keys()):
# check the stop flag
if main_ui.stop_flag:
# break out of the loop if user requested stop
break
cf_file_name = os.path.split(cf_level[i])
msg = "Starting MPT with " + cf_file_name[1]
logger.info(msg)
try:
cf = pfp_io.get_controlfilecontents(cf_level[i])
if not pfp_compliance.mpt_update_controlfile(cf):
continue
if "Options" not in cf:
cf["Options"] = {}
cf["Options"]["call_mode"] = "batch"
cf["Options"]["show_plots"] = "No"
pfp_mpt.mpt_main(cf)
msg = "Finished MPT with " + cf_file_name[1]
logger.info(msg)
logger.info("")
except Exception:
msg = "Error occurred during MPT with " + cf_file_name[1]
logger.error(msg)
error_message = traceback.format_exc()
logger.error(error_message)
continue
return
def do_L4_batch(main_ui, cf_level):
sites = sorted(list(cf_level.keys()), key=int)
for i in sites:
if not os.path.isfile(cf_level[i]):
msg = " Control file " + cf_level[i] + " not found"
logger.error(msg)
continue
# check the stop flag
if main_ui.stop_flag:
# break out of the loop if user requested stop
break
cf_file_name = os.path.split(cf_level[i])
msg = "Starting L4 processing with " + cf_file_name[1]
logger.info(msg)
try:
cf_l4 = pfp_io.get_controlfilecontents(cf_level[i])
if not pfp_compliance.l4_update_controlfile(cf_l4):
continue
if "Options" not in cf_l4:
cf_l4["Options"] = {}
cf_l4["Options"]["call_mode"] = "batch"
cf_l4["Options"]["show_plots"] = "No"
infilename = pfp_io.get_infilenamefromcf(cf_l4)
ds3 = pfp_io.NetCDFRead(infilename)
if ds3.returncodes["value"] != 0: return
ds4 = pfp_levels.l4qc(None, cf_l4, ds3)
outfilename = pfp_io.get_outfilenamefromcf(cf_l4)
pfp_io.NetCDFWrite(outfilename, ds4)
msg = "Finished L4 processing with " + cf_file_name[1]
logger.info(msg)
# do the CF compliance check
#do_batch_cfcheck(cf_l4)
# plot the L4 fingerprints
do_batch_fingerprints(cf_l4)
logger.info("")
except Exception:
msg = "Error occurred during L4 with " + cf_file_name[1]
logger.error(msg)
error_message = traceback.format_exc()
logger.error(error_message)
continue
return
def do_L5_batch(main_ui, cf_level):
sites = sorted(list(cf_level.keys()), key=int)
for i in sites:
if not os.path.isfile(cf_level[i]):
msg = " Control file " + cf_level[i] + " not found"
logger.error(msg)
continue
# check the stop flag
if main_ui.stop_flag:
# break out of the loop if user requested stop
break
cf_file_name = os.path.split(cf_level[i])
msg = "Starting L5 processing with " + cf_file_name[1]
logger.info(msg)
try:
cf_l5 = pfp_io.get_controlfilecontents(cf_level[i])
if not pfp_compliance.l5_update_controlfile(cf_l5):
continue
if "Options" not in cf_l5:
cf_l5["Options"] = {}
cf_l5["Options"]["call_mode"] = "batch"
cf_l5["Options"]["show_plots"] = "No"
infilename = pfp_io.get_infilenamefromcf(cf_l5)
ds4 = pfp_io.NetCDFRead(infilename)
if ds4.returncodes["value"] != 0: return
ds5 = pfp_levels.l5qc(None, cf_l5, ds4)
outfilename = pfp_io.get_outfilenamefromcf(cf_l5)
pfp_io.NetCDFWrite(outfilename, ds5)
msg = "Finished L5 processing with " + cf_file_name[1]
logger.info(msg)
# do the CF compliance check
#do_batch_cfcheck(cf_l5)
# plot the L5 fingerprints
do_batch_fingerprints(cf_l5)
logger.info("")
except Exception:
msg = "Error occurred during L5 with " + cf_file_name[1]
logger.error(msg)
error_message = traceback.format_exc()
logger.error(error_message)
continue
return
def do_L6_batch(main_ui, cf_level):
for i in list(cf_level.keys()):
if not os.path.isfile(cf_level[i]):
msg = " Control file " + cf_level[i] + " not found"
logger.error(msg)
continue
# check the stop flag
if main_ui.stop_flag:
# break out of the loop if user requested stop
break
cf_file_name = os.path.split(cf_level[i])
msg = "Starting L6 processing with " + cf_file_name[1]
logger.info(msg)
try:
cf_l6 = pfp_io.get_controlfilecontents(cf_level[i])
if not pfp_compliance.l6_update_controlfile(cf_l6):
continue
if "Options" not in cf_l6:
cf_l6["Options"] = {}
cf_l6["Options"]["call_mode"] = "batch"
cf_l6["Options"]["show_plots"] = "No"
infilename = pfp_io.get_infilenamefromcf(cf_l6)
ds5 = pfp_io.NetCDFRead(infilename)
if ds5.returncodes["value"] != 0: return
ds6 = pfp_levels.l6qc(None, cf_l6, ds5)
outfilename = pfp_io.get_outfilenamefromcf(cf_l6)
pfp_io.NetCDFWrite(outfilename, ds6)
msg = "Finished L6 processing with " + cf_file_name[1]
logger.info(msg)
# do the CF compliance check
#do_batch_cfcheck(cf_l6)
logger.info("")
except Exception:
msg = "Error occurred during L6 with " + cf_file_name[1]
logger.error(msg)
error_message = traceback.format_exc()
logger.error(error_message)
continue
return
def do_levels_batch(main_ui):
logger = logging.getLogger("pfp_log")
if main_ui.mode == "interactive":
tab_index_running = main_ui.tabs.tab_index_running
cf_batch = main_ui.tabs.tab_dict[tab_index_running].get_data_from_model()
elif main_ui.mode == "batch":
cf_batch = main_ui.cfg
else:
msg = "Unrecognised option for mode (" + main_ui.mode + ")"
logger.error(msg)
raise RuntimeError
start = datetime.datetime.now()
msg = "Started batch processing at " + start.strftime("%Y%m%d%H%M")
logger.info(msg)
if "Options" in cf_batch:
if "levels" in cf_batch["Options"]:
levels = pfp_utils.string_to_list(cf_batch["Options"]["levels"])
else:
msg = "No 'levels' entry found in [Options] section"
logger.error(msg)
sys.exit()
else:
msg = "No [Options] section in control file"
logger.error(msg)
sys.exit()
processing_levels = ["l1", "l2", "l3",
"ecostress", "fluxnet", "reddyproc",
"concatenate", "climatology",
"cpd_barr", "cpd_mchugh", "cpd_mcnew", "mpt",
"l4", "l5", "l6"]
for level in levels:
# check the stop flag
if main_ui.stop_flag:
# break out of the loop if user requested stop
break
if level.lower() not in processing_levels:
msg = "Unrecognised level " + level
logger.warning(msg)
continue
if level.lower() == "l1":
# L1 processing
do_L1_batch(main_ui, cf_batch["Levels"][level])
elif level.lower() == "l2":
# L2 processing
do_L2_batch(main_ui, cf_batch["Levels"][level])
elif level.lower() == "l3":
# L3 processing
do_L3_batch(main_ui, cf_batch["Levels"][level])
elif level.lower() == "ecostress":
# convert netCDF files to ECOSTRESS CSV files
do_ecostress_batch(main_ui, cf_batch["Levels"][level])
elif level.lower() == "fluxnet":
# convert netCDF files to FluxNet CSV files
do_fluxnet_batch(main_ui, cf_batch["Levels"][level])
elif level.lower() == "reddyproc":
# convert netCDF files to REddyProc CSV files
do_reddyproc_batch(main_ui, cf_batch["Levels"][level])
elif level.lower() == "concatenate":
# concatenate netCDF files
do_concatenate_batch(main_ui, cf_batch["Levels"][level])
elif level.lower() == "climatology":
# climatology
do_climatology_batch(main_ui, cf_batch["Levels"][level])
elif level.lower() == "cpd_barr":
# ustar threshold from change point detection
do_cpd_barr_batch(main_ui, cf_batch["Levels"][level])
elif level.lower() == "cpd_mchugh":
# ustar threshold from change point detection
do_cpd_mchugh_batch(main_ui, cf_batch["Levels"][level])
elif level.lower() == "cpd_mcnew":
# ustar threshold from change point detection
do_cpd_mcnew_batch(main_ui, cf_batch["Levels"][level])
elif level.lower() == "mpt":
# ustar threshold from change point detection
do_mpt_batch(main_ui, cf_batch["Levels"][level])
elif level.lower() == "l4":
# L4 processing
do_L4_batch(main_ui, cf_batch["Levels"][level])
elif level.lower() == "l5":
# L5 processing
do_L5_batch(main_ui, cf_batch["Levels"][level])
elif level.lower() == "l6":
# L6 processing
do_L6_batch(main_ui, cf_batch["Levels"][level])
end = datetime.datetime.now()
msg = " Finished batch processing at " + end.strftime("%Y%m%d%H%M")
logger.info(msg)
return
|
OzFlux/PyFluxPro
|
scripts/pfp_batch.py
|
Python
|
gpl-3.0
| 25,051
|
[
"NetCDF"
] |
9496b467f4a8aa0a6bf72536d4f4d91d274fe43cc37279446f1f235fbb0e75cd
|
# $HeadURL$
__RCSID__ = "$Id$"
import GSI
requiredGSIVersion = "0.3.9"
# Compare versions numerically; a plain string comparison would mis-order
# e.g. "0.10.0" vs "0.3.9" (assumes purely numeric, dot-separated versions)
if tuple( int( x ) for x in GSI.version.__version__.split( "." ) ) < tuple( int( x ) for x in requiredGSIVersion.split( "." ) ):
raise Exception( "pyGSI is too old (installed %s, required %s or newer)" % ( GSI.version.__version__, requiredGSIVersion ) )
GSI.SSL.set_thread_safe()
nid = GSI.crypto.create_oid( "1.2.42.42", "diracGroup", "DIRAC group" )
GSI.crypto.add_x509_extension_alias( nid, 78 ) #Alias to netscape comment, text based extension
nid = GSI.crypto.create_oid( "1.3.6.1.4.1.8005.100.100.5", "vomsExtensions", "VOMS extension" )
GSI.crypto.add_x509_extension_alias( nid, 78 ) #Alias to netscape comment, text based extension
import VMDIRAC.Security.VmProperties
|
myco/VMDIRAC
|
Security/__init__.py
|
Python
|
gpl-3.0
| 680
|
[
"DIRAC"
] |
98e95cdfe41b82eb3f2081240f4051b2f2ced20e59f42dcf07cdf858f57f0723
|
""" Test_RSS_Policy_AlwaysActivePolicy
"""
import unittest
import DIRAC.ResourceStatusSystem.Policy.CEAvailabilityPolicy as moduleTested
################################################################################
class CEAvailabilityPolicy_TestCase(unittest.TestCase):
def setUp(self):
"""
Setup
"""
self.moduleTested = moduleTested
self.testClass = self.moduleTested.CEAvailabilityPolicy
def tearDown(self):
"""
TearDown
"""
del self.testClass
del self.moduleTested
################################################################################
# Tests
class CEAvailabilityPolicy_Success(CEAvailabilityPolicy_TestCase):
def test_instantiate(self):
"""tests that we can instantiate one object of the tested class"""
policy = self.testClass()
self.assertEqual("CEAvailabilityPolicy", policy.__class__.__name__)
def test_evaluate(self):
"""tests the evaluate method"""
policy = self.testClass()
commandResult = {
"OK": True,
"Value": {
"Reason": "All queues in 'Production'",
"Status": "Production",
"cccreamceli05.in2p3.fr:8443/cream-sge-long": "Production",
"cccreamceli05.in2p3.fr:8443/cream-sge-verylong": "Production",
},
}
res = policy._evaluate(commandResult)
self.assertTrue(res["OK"])
self.assertEqual("Active", res["Value"]["Status"])
commandResult = {
"OK": True,
"Value": {
"Reason": "All queues in 'Production'",
"Status": "Degraded",
"cccreamceli05.in2p3.fr:8443/cream-sge-long": "Production",
"cccreamceli05.in2p3.fr:8443/cream-sge-verylong": "Production",
},
}
res = policy._evaluate(commandResult)
self.assertTrue(res["OK"])
self.assertEqual("Banned", res["Value"]["Status"])
################################################################################
if __name__ == "__main__":
suite = unittest.defaultTestLoader.loadTestsFromTestCase(CEAvailabilityPolicy_TestCase)
suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(CEAvailabilityPolicy_Success))
testResult = unittest.TextTestRunner(verbosity=2).run(suite)
# EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF#EOF
|
DIRACGrid/DIRAC
|
src/DIRAC/ResourceStatusSystem/Policy/test/Test_RSS_Policy_CEAvailabilityPolicy.py
|
Python
|
gpl-3.0
| 2,480
|
[
"DIRAC"
] |
6cfe9a8d531f968d6af34f8abedd53b72b4797ae1eba2f784b545336e63cfb46
|
#!/usr/bin/env python2
###############################################################################
# ------------------------- Description ---------------------------------------
###############################################################################
# The purpose of this script is to read the hdf5 GFED4s data and save out the same
# data as daily gridded data in nc file format for a single species. These are
# saved as yearly files.
#
# This script reads in data downloaded from the web where no changes have yet
# been made.
# When the startYear and endYear argument are different the different years
# data are merged and saved to an .nc file that has the year range in the file
# name.
# Functions
# getYearlyEmissions()
# getMonthlyBurnArea()
# Daily emissions estimates made possible by
# http://onlinelibrary.wiley.com/doi/10.1029/2011JD016245/abstract
# Follows ----------------------------------------
# - NA
# Precedes
# - any script that reads in GFED4s in .nc format.
# Datasource: http://www.geo.vu.nl/~gwerf/GFED/GFED4/
# Data README: http://www.geo.vu.nl/~gwerf/GFED/GFED4/Readme.pdf
# Clear all variables before running this script.
#import sys
#sys.modules[__name__].__dict__.clear()
import sys
import h5py # if this creates an error please make sure you have the h5py library
import os
import numpy as np
from netCDF4 import Dataset
import matplotlib.pyplot as plt
import datetime
from datetime import date
from datetime import timedelta
from datetime import datetime
import cesm_nc_manager as cnm
# TODO: estimate daily burn area and save this out!
# TODO: include 'basis_regions' in nc output?
# TODO: Save all the two dimensional attributes as their own NETCDF file
startYear = 1997
endYear = 1997 # If different than startYear, they will be appended.
species = 'DM' # one of 'C', 'DM', 'burned_area' (these have daily fraction est.)
getDaily = False # execute code to create daily nc
getMonthly= True # execute code to create monthly nc
# Figure out what machine this code is running on. Set file paths.
drive = cnm.getDrive()
dataDir = drive + 'GFED4s/'
# Months to loop over
months = ['01', '02', '03', '04', '05','06',\
'07', '08', '09', '10', '11', '12']
# TODO: Get daily fraction arrays and save to NETCDF data. Then combine with
# TODO: monthly burn area. Then regrid to met grid!
def getDailyEmissions(dataDir, year, months, species):
"""This function gets all the data for a species for a given year and returns
the time, lat and lon arrays on which the daily species emission data are defined.
return: time, latitude, longitude, yearData
"""
yearFile = 'GFED4.1s_' + str(year) + '.hdf5'
f = h5py.File(dataDir + yearFile, 'r')
# Get dimensions
latitude = f['lat'][:]
longitude = f['lon'][:]
nLat = latitude.shape[0]
nLon = longitude.shape[1]
# Get grid cell area [m**2]
grid_cell_area_m2 = f['ancill/grid_cell_area'][:]
# Create an array with the correct lat and lon dimension to append data
# NOTE: will trim 366th day if no assignment is made
yearData = np.zeros((366, latitude.shape[0], latitude.shape[1]))
yearData[:] = -1
# Create an array to append datetime.date objects to
date0 = date(year=year, month=1, day=1) # reference date in Jan 1 of year
time = []
jDay = 0 # Be careful about leap years?
for m in months:
print 'Getting ' + str(year) + ' ' + m + ' month daily data for species ' + species
# Write species emissions path
if species != 'burned_area':
speciesDir = '/emissions/' + m + '/' + species + '/'
elif species == 'burned_area':
speciesDir = 'burned_area/' + m + '/burned_fraction/'
else:
raise ValueError('Unknown species. Not available in hdf5 file.')
# Get this species monthly values array
month_emission = f[speciesDir][:]
# How many days in this month?
days = f['/emissions/' + m + '/daily_fraction/']
nDaysInMonth = len(days.keys())
# because keys() does not put them in order, make a counter, and get the
# data in the correct order
dayNumber = np.arange(1,nDaysInMonth+1)
month_daily_frac = np.zeros((nDaysInMonth, nLat, nLon))
# loop through the days the monthly emissions are distributed over
# keep track of daily_fraction
for i in range(nDaysInMonth):
# Advance the JDay Count (after adding dt to date0, since origin jan1)
dt = timedelta(days=jDay)
time.append(date0+dt)
jDay = jDay + 1
# Get fraction of monthly emissions that occured on THIS day
dayString = 'day_' + str(dayNumber[i])
#print dayString
dayFraction = days[dayString][:]
month_daily_frac[i,:,:] = dayFraction
# apply fraction to monthly data; multiplying by cell area converts the per-m**2 emission units to absolute amounts
daily_emission_data = month_emission * dayFraction * grid_cell_area_m2
# Append the daily data to 'yearData' array
yearData[jDay-1, :, :] = daily_emission_data # -1 for python 0 based index
# At the end of looping through each months days data, make sure the
# daily fraction at each location adds up to 1 or 0.
month_daily_frac_sum = np.sum(month_daily_frac, axis=0)
# At locations not equal to zero, how different are values from 1?
# these are locations with non-zero emissions, so the daily fract of monthly
# emissions needs to total 1 in these locations.
notZero = month_daily_frac_sum != 0.
notZeroValues = month_daily_frac_sum[notZero]
diff = np.abs(notZeroValues - 1.)
test = diff > 1e-5
if np.sum(test) > 0:
print 'There is a month daily fraction array sum equal to: ' + str(np.max(diff))
raise ValueError('Monthly Fraction array of non 0 or 1 at a location.')
# Outside loop going over months.
# Check for leap year, if 366 day of year is still all -1 get rid of it
dimProduct = yearData.shape[1] * yearData.shape[2]
if np.sum(yearData[365,:,:]) == dimProduct * -1:
yearData = yearData[0:365,:,:]
# now loop over each day in the array, making sure every day was assigned
# data.
for i in range(yearData.shape[0]):
if np.sum(yearData[i,:,:]) == dimProduct * -1:
raise ValueError('Time (day) index: ' + str(i) + ' was never assigned data.')
# Make this a much more useful array
time = np.array(time)
return time, latitude, longitude, yearData, grid_cell_area_m2
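# Example (sketch) of calling getDailyEmissions for a single year:
#   time, lat, lon, data, area = getDailyEmissions(dataDir, 1997, months, 'DM')
# data has shape (nDays, nLat, nLon); each daily slice is the monthly total
# scaled by that day's fraction and the grid cell area [m**2].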
################################################################################
# Get monthly emissions too.
################################################################################
def getMonthlyEmissions(dataDir, year, months, species):
"""This function gets all the monthly burn area.
return: time, latitude, longitude, yearData
"""
yearFile = 'GFED4.1s_' + str(year) + '.hdf5'
f = h5py.File(dataDir + yearFile, 'r')
# Get dimensions
latitude = f['lat'][:]
longitude = f['lon'][:]
# Get grid cell area [m**2]
grid_cell_area_m2 = f['ancill/grid_cell_area'][:]
# Create an array with the correct lat and lon dimension to append data
# NOTE: will trim 366th day if no assignment is made
dims = (12, latitude.shape[0], latitude.shape[1])
emissions = np.zeros(dims) # to store emissions
AGRI = np.zeros(dims) # to store fraction from this type of source
BORF = np.zeros(dims) # ...
DEFO = np.zeros(dims) # ..
PEAT = np.zeros(dims) # .
SAVA = np.zeros(dims)
TEMF = np.zeros(dims)
# To store burn area fraction
area_fraction = np.zeros(dims)
# Create a list where time string year-month can be stored
timeString = []
monthCount = -1
for m in months:
timeString.append(str(year) + m)
monthCount = monthCount + 1
print 'Getting ' + str(year) + ' ' + m + ' monthly '+species+' and source data'
# Write species emissions path
EPath = '/emissions/' + m + '/' + species
emissions[monthCount, :, :] = f[EPath][:]
# Get the source partitioning fractions for these emission data
sourceBase = '/emissions/' + m + '/partitioning/' + species + '_'
AGRI[monthCount, :, :] = f[sourceBase + 'AGRI'][:]
BORF[monthCount, :, :] = f[sourceBase + 'BORF'][:]
DEFO[monthCount, :, :] = f[sourceBase + 'DEFO'][:]
PEAT[monthCount, :, :] = f[sourceBase + 'PEAT'][:]
SAVA[monthCount, :, :] = f[sourceBase + 'SAVA'][:]
TEMF[monthCount, :, :] = f[sourceBase + 'TEMF'][:]
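# By construction the six source fractions should sum to ~1 wherever
# emissions are non-zero (and to 0 elsewhere)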
# Get burned area fraction
areaPath = '/burned_area/' + m + '/burned_fraction'
area_fraction[monthCount,:,:] = f[areaPath][:]
# Make the return of these many variables easier with a dictionary
yearData = {}
yearData['emissions'] = emissions
yearData['AGRI'] = AGRI
yearData['BORF'] = BORF
yearData['DEFO'] = DEFO
yearData['PEAT'] = PEAT
yearData['SAVA'] = SAVA
yearData['TEMF'] = TEMF
yearData['area_fraction'] = area_fraction
timeString = np.array(timeString) # makes it easier to append and handle later
return timeString, latitude, longitude, yearData, grid_cell_area_m2
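# Example usage (a sketch; the data directory path is hypothetical):
#   timeString, lat, lon, yearData, area = getMonthlyEmissions('/data/GFED4s/',
#                                                              2010, months, 'DM')
#   emissions = yearData['emissions']  # shape (12, nLat, nLon)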
################################################################################
# Append the yearly data matrices by executing the yearly data function
################################################################################
if getDaily:
years = np.arange(startYear, endYear+1)
for year in years:
print('Appending: ' + str(year))
if year == years[0]:
timeBase, lat, lon, dataBase, a = getDailyEmissions(dataDir, year, months, species)
else:
time, lat, lon, yearData, a = getDailyEmissions(dataDir, year, months, species)
# append the new data to the existing base
dataBase = np.append(dataBase, yearData, axis=0)
timeBase = np.append(timeBase, time)
# go back to the nice names
yearlyData = dataBase
time = timeBase
# Create a time origin that matches the ECMWF ERA-Interim convention
t0 = datetime(year=1900, month=1, day=1, hour=0, minute=0, second=0)
secondsPerHour = 60**2
hoursFromOrigin = []
for i in range(len(time)):
# assume midnight on each date
time_datetime = datetime.combine(time[i], datetime.min.time())
# create a timedelta object so we can extract the difference in seconds
diff = time_datetime - t0
# convert difference in seconds to difference in hours
hoursFromOrigin.append(diff.total_seconds()/secondsPerHour)
# Make into nice array
hoursFromOrigin = np.array(hoursFromOrigin)
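# e.g. midnight on 1900-01-02 is 24.0 hours from the origin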
# make sure the time step is ALWAYS 1 day, or something went wrong somewhere
if len(np.unique(np.diff(time))) > 1:
raise ValueError('There is a missing time step in the data merge.')
################################################################################
# Write the NETCDF data. Make sure to include all relevant units for a given
# species!
################################################################################
print('Working on writing the output as netCDF data')
nLat = lat.shape[0]
nLon = lon.shape[1]
nTime = len(time)
# When the start year is the same as the end year, only assign one year for
# file name
if startYear == endYear:
outputFile = dataDir + 'GFED4.1s_' + species + '_' +\
str(startYear) + '.nc'
else:
outputFile = dataDir + 'GFED4.1s_' + species + '_' +\
str(startYear) + '_' + str(endYear) + '.nc'
ncFile = Dataset(outputFile, 'w', format='NETCDF4')
ncFile.description = 'Data downloaded and converted from HDF5 format'
ncFile.location = 'Global'
ncFile.createDimension('time', nTime )
ncFile.createDimension('latitude', nLat )
ncFile.createDimension('longitude', nLon )
VAR_ = ncFile.createVariable(species,\
'f4',('time','latitude','longitude'))
grid_area_ = ncFile.createVariable("grid_area", 'f4', ('latitude', 'longitude'))
grid_area_.units = 'm**2'
if species == 'C':
VAR_.units = 'g ' + species + ' per grid cell per day'
elif species == 'DM':
VAR_.units = 'kg ' + species + ' per grid cell per day'
elif species == 'burned_area':
VAR_.units = 'm**2 ' + species + ' per grid cell per day'
else:
raise ValueError('The units for the chosen species are not known.')
# Create time variable
time_ = ncFile.createVariable('time', 'i4', ('time',))
time_.units = 'hours since ' + str(t0)
# create lat variable
latitude_ = ncFile.createVariable('latitude', 'f4', ('latitude',))
latitude_.units = 'degrees north'
# create longitude variable
longitude_ = ncFile.createVariable('longitude', 'f4', ('longitude',))
longitude_.units = 'degrees east'
# Write the actual data to these dimensions
VAR_[:] = yearlyData[:]
grid_area_[:] = a
latitude_[:] = lat[:,0]
longitude_[:] = lon[0,:]
time_[:] = hoursFromOrigin[:]
ncFile.close()
################################################################################
# Get all years' monthly emissions and write the nc
################################################################################
if getMonthly:
years = np.arange(startYear, endYear+1)
for year in years:
print('Appending: ' + str(year))
if year == years[0]:
timeBase, lat, lon, yearData, a = getMonthlyEmissions(dataDir, year, months, species)
area_fraction_base = yearData['area_fraction']
emissions_base = yearData['emissions']
PEAT_base = yearData['PEAT']
TEMF_base = yearData['TEMF']
AGRI_base = yearData['AGRI']
BORF_base = yearData['BORF']
DEFO_base = yearData['DEFO']
SAVA_base = yearData['SAVA']
else:
time, lat, lon, yearData, a = getMonthlyEmissions(dataDir, year, months, species)
# append the new data to the existing base along the time axis
# (axis=0 is required here; without it np.append flattens the arrays)
area_fraction_base = np.append(area_fraction_base, yearData['area_fraction'], axis=0)
emissions_base = np.append(emissions_base, yearData['emissions'], axis=0)
PEAT_base = np.append(PEAT_base, yearData['PEAT'], axis=0)
TEMF_base = np.append(TEMF_base, yearData['TEMF'], axis=0)
AGRI_base = np.append(AGRI_base, yearData['AGRI'], axis=0)
BORF_base = np.append(BORF_base, yearData['BORF'], axis=0)
DEFO_base = np.append(DEFO_base, yearData['DEFO'], axis=0)
SAVA_base = np.append(SAVA_base, yearData['SAVA'], axis=0)
# Append time too, simple 1D append
timeBase = np.append(timeBase, time)
# Time strings (YYYYMM) need to be integer type to be stored in the nc file
timeBase = np.array(timeBase, 'int')
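# e.g. the string '200307' becomes the integer 200307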
nLat = lat.shape[0]
nLon = lon.shape[1]
################################################################################
# Write nc monthly burned area, emissions, and source partitioning
################################################################################
# When the start year is the same as the end year, only assign one year for file name
if startYear == endYear:
outputFile = dataDir + 'GFED4.1s_monthly_'+species+'_' +\
str(startYear) + '.nc'
else:
outputFile = dataDir + 'GFED4.1s_monthly_'+species+'_' +\
str(startYear) + '_' + str(endYear) + '.nc'
ncFile = Dataset(outputFile, 'w', format='NETCDF4')
ncFile.description = 'Data downloaded and converted from HDF5 format'
ncFile.location = 'Global'
ncFile.createDimension('time', len(timeBase) )
ncFile.createDimension('latitude', nLat )
ncFile.createDimension('longitude', nLon )
# Burn area
burn_area_fraction_ = ncFile.createVariable('burn_area_fraction',\
'f4',('time','latitude','longitude'))
burn_area_fraction_.units = 'fraction of grid cell that burned'
# Emissions
emissions_ = ncFile.createVariable(species,\
'f4',('time','latitude','longitude'))
if species == 'DM':
emissions_.units = 'kg DM m**-2 month**-1'
else:
emissions_.units = 'g C m**-2 month**-1'
# The source partition fractions
PEAT_base_ = ncFile.createVariable('PEAT_fraction','f4',('time','latitude','longitude'))
TEMF_base_ = ncFile.createVariable('TEMF_fraction','f4',('time','latitude','longitude'))
AGRI_base_ = ncFile.createVariable('AGRI_fraction','f4',('time','latitude','longitude'))
BORF_base_ = ncFile.createVariable('BORF_fraction','f4',('time','latitude','longitude'))
DEFO_base_ = ncFile.createVariable('DEFO_fraction','f4',('time','latitude','longitude'))
SAVA_base_ = ncFile.createVariable('SAVA_fraction','f4',('time','latitude','longitude'))
PEAT_base_.units = 'fraction of emissions'
TEMF_base_.units = 'fraction of emissions'
AGRI_base_.units = 'fraction of emissions'
BORF_base_.units = 'fraction of emissions'
DEFO_base_.units = 'fraction of emissions'
SAVA_base_.units = 'fraction of emissions'
# Area
grid_area_ = ncFile.createVariable("grid_area", 'f4', ('latitude', 'longitude'))
grid_area_.units = 'm**2'
# Create time variable (YYYYMM integers)
time_ = ncFile.createVariable('time', 'i4', ('time',))
time_.units = 'YYYYMM for monthly data'
# create lat variable
latitude_ = ncFile.createVariable('latitude', 'f4', ('latitude',))
latitude_.units = 'degrees north'
# create longitude variable
longitude_ = ncFile.createVariable('longitude', 'f4', ('longitude',))
longitude_.units = 'degrees east'
# Write the actual data to these dimensions
burn_area_fraction_[:] = area_fraction_base[:]
emissions_[:] = emissions_base[:]
PEAT_base_[:] = PEAT_base
TEMF_base_[:] = TEMF_base
AGRI_base_[:] = AGRI_base
BORF_base_[:] = BORF_base
DEFO_base_[:] = DEFO_base
SAVA_base_[:] = SAVA_base
grid_area_[:] = a[:]
latitude_[:] = lat[:,0]
longitude_[:] = lon[0,:]
time_[:] = timeBase[:]
ncFile.close()
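# Quick sanity check of the written file (a sketch; run after the script):
#   from netCDF4 import Dataset
#   nc = Dataset(outputFile)
#   print(nc.variables.keys())  # expect time, latitude, longitude, species, ...
#   print(nc.variables['time'][:5])
#   nc.close()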
|
stevenjoelbrey/PMFutures
|
Python/GFED4s_to_nc.py
|
Python
|
mit
| 16,988
|
[
"NetCDF"
] |
e8e5f7da2ae1526aeb6a495b2d890cda5c5c360b66aced80e8418cab84f94162
|